This is an Opt-In Archive. We would like to hear from you if you want your posts included. For the contact address see About this archive. All posts are copyright (c).
Message: 6175
Date: Fri, 24 Jan 2003 09:31:29
Subject: Re: A 13-limit comma list
From: Graham Breed

Gene Ward Smith wrote:
> Here is a list of 70 13-limit commas, with epimericity less than 0.25
> and size less than 50 cents. There is a very good chance it is complete,
> since I took it from a similar list of 499 commas with epimericity less
> than 0.5 after I could add nothing further to it.
>
> [36/35, 77/75, 40/39, 128/125, 45/44, 143/140, 49/48, 50/49, 55/54,
> 56/55, 64/63, 65/64, 66/65, 78/77, 81/80, 245/242, 91/90, 99/98,
> 100/99, 105/104, 121/120, 245/243, 126/125, 275/273, 144/143, 169/168,
> 176/175, 896/891, 196/195, 1029/1024, 640/637, 225/224, 1188/1183,
> 243/242, 1573/1568, 325/324, 351/350, 352/351, 364/363, 385/384,
> 847/845, 441/440, 1375/1372, 540/539, 4000/3993, 625/624, 676/675,
> 729/728, 2200/2197, 1575/1573, 5632/5625, 1001/1000, 4459/4455,
> 10985/10976, 1716/1715, 2080/2079, 2401/2400, 3025/3024, 4096/4095,
> 4225/4224, 4375/4374, 6656/6655, 140625/140608, 9801/9800, 10648/10647,
> 196625/196608, 151263/151250, 1990656/1990625, 123201/123200,
> 5767168/5767125]

I left a script running overnight to generate linear temperaments from all combinations (there are nearly a million) of these commas. The results are below. It took 7.8 hours in pure Python (using wedge products, so no Numeric libraries). You could probably knock two orders of magnitude off that if you rewrote the whole thing in C, so it'd only be about 5 minutes. Which is still a lot slower than the search by equal temperaments, but that isn't comprehensive yet because I'm not taking all versions of inconsistent temperaments. I might look at it sometime.

It's difficult to recognise the temperaments without the ETs as hints, so I don't know if there's anything new or missing. But here they are...
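As context for the scale of the search above: a 13-limit comma can be written as an exponent vector (monzo) over the primes 2..13, and a 13-limit linear temperament has a rank-4 kernel, so "all combinations" means all 4-element subsets of the 70 commas. A minimal sketch (the helper name `monzo` is mine, not from temper.py):

```python
from fractions import Fraction
from itertools import combinations
from math import comb

PRIMES = (2, 3, 5, 7, 11, 13)

def monzo(ratio):
    """Exponent vector of a 13-limit ratio over the primes (2, 3, 5, 7, 11, 13)."""
    q = Fraction(ratio)
    exps = []
    for p in PRIMES:
        e = 0
        n, d = q.numerator, q.denominator
        while n % p == 0:
            n //= p
            e += 1
        while d % p == 0:
            d //= p
            e -= 1
        exps.append(e)
    return exps

# The syntonic comma 81/80 = 2^-4 * 3^4 * 5^-1:
print(monzo(Fraction(81, 80)))   # [-4, 4, -1, 0, 0, 0]

# 6 primes minus rank 2 leaves 4 independent commas per temperament,
# so the search space is the 4-element subsets of the 70 commas:
print(comb(70, 4))               # 916895 -- "nearly a million"
```

This also confirms the "nearly a million" figure: C(70, 4) = 916,895 candidate kernels, though many subsets are linearly dependent or define the same temperament.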
0/1, 15.8 cent generator
basis: (0.034482758620689655, 0.013188156675199889)
mapping by period and generator: [(29, 0), (46, 0), (67, 1), (81, 1), (100, 1), (107, 1)]
mapping by steps: [(29, 0), (46, 0), (-1, 1), (13, 1), (32, 1), (39, 1)]
highest interval width: 1
complexity measure: 29 (58 for smallest MOS)
highest error: 0.003552 (4.263 cents)
unique

1/2, 351.6 cent generator
basis: (1.0, 0.29301432925652815)
mapping by period and generator: [(1, 0), (1, 2), (-5, 25), (-1, 13), (2, 5), (4, -1)]
mapping by steps: [(1, 1), (-1, 1), (-30, -5), (-14, -1), (-3, 2), (5, 4)]
highest interval width: 26
complexity measure: 26 (41 for smallest MOS)
highest error: 0.006546 (7.855 cents)

1/6, 183.2 cent generator
basis: (0.5, 0.15268768272511829)
mapping by period and generator: [(2, 0), (5, -6), (8, -11), (5, 2), (6, 3), (8, -2)]
mapping by steps: [(10, 2), (1, -1), (-4, -3), (33, 7), (42, 9), (32, 6)]
highest interval width: 15
complexity measure: 30 (46 for smallest MOS)
highest error: 0.005815 (6.978 cents)

0/1, 103.8 cent generator
basis: (0.5, 0.086488824362764394)
mapping by period and generator: [(2, 0), (3, 1), (5, -2), (7, -8), (9, -12), (10, -15)]
mapping by steps: [(2, 0), (-1, 1), (13, -2), (39, -8), (57, -12), (70, -15)]
highest interval width: 17
complexity measure: 34 (46 for smallest MOS)
highest error: 0.005094 (6.113 cents)
unique

1/7, 186.0 cent generator
basis: (0.5, 0.15499541683101503)
mapping by period and generator: [(2, 0), (1, 7), (0, 15), (5, 2), (6, 3), (4, 11)]
mapping by steps: [(12, 2), (-1, 1), (-15, 0), (28, 5), (33, 6), (13, 4)]
highest interval width: 15
complexity measure: 30 (32 for smallest MOS)
highest error: 0.005555 (6.666 cents)

0/1, 497.9 cent generator
basis: (1.0, 0.41489358489291139)
mapping by period and generator: [(1, 0), (2, -1), (-1, 8), (-3, 14), (13, -23), (12, -20)]
mapping by steps: [(1, 0), (-1, 1), (23, -8), (39, -14), (-56, 23), (-48, 20)]
highest interval width: 37
complexity measure: 37 (41 for smallest MOS)
highest error: 0.004468 (5.362 cents)
unique

1/10, 310.3 cent generator
basis: (1.0, 0.25859402576420476)
mapping by period and generator: [(1, 0), (-1, 10), (0, 9), (1, 7), (-3, 25), (5, -5)]
mapping by steps: [(9, 1), (1, -1), (9, 0), (16, 1), (-2, -3), (40, 5)]
highest interval width: 30
complexity measure: 30 (31 for smallest MOS)
highest error: 0.006590 (7.908 cents)

0/1, 475.7 cent generator
basis: (1.0, 0.39641218196468747)
mapping by period and generator: [(1, 0), (0, 4), (-6, 21), (4, -3), (-12, 39), (-7, 27)]
mapping by steps: [(1, 0), (-4, 4), (-27, 21), (7, -3), (-51, 39), (-34, 27)]
highest interval width: 42
complexity measure: 42 (43 for smallest MOS)
highest error: 0.003409 (4.090 cents)
unique

2/5, 339.4 cent generator
basis: (1.0, 0.28284398720617654)
mapping by period and generator: [(1, 0), (3, -5), (6, -13), (-2, 17), (6, -9), (2, 6)]
mapping by steps: [(3, 2), (-1, 1), (-8, -1), (28, 13), (0, 3), (18, 10)]
highest interval width: 30
complexity measure: 30 (32 for smallest MOS)
highest error: 0.006663 (7.995 cents)

1/3, 166.1 cent generator
basis: (0.5, 0.13840115997461999)
mapping by period and generator: [(2, 0), (4, -3), (-2, 24), (7, -5), (0, 25), (11, -13)]
mapping by steps: [(4, 2), (-1, 1), (68, 22), (-1, 2), (75, 25), (-17, -2)]
highest interval width: 38
complexity measure: 76 (94 for smallest MOS)
highest error: 0.000971 (1.165 cents)
unique

0/1, 83.0 cent generator
basis: (0.33333333333333331, 0.06917176981320311)
mapping by period and generator: [(3, 0), (6, -6), (8, -5), (8, 2), (11, -3), (14, -14)]
mapping by steps: [(3, 0), (-6, 6), (-2, 5), (12, -2), (5, 3), (-14, 14)]
highest interval width: 16
complexity measure: 48 (57 for smallest MOS)
highest error: 0.002358 (2.830 cents)
unique

0/1, 15.8 cent generator
basis: (0.027777777777777776, 0.013143428289896716)
mapping by period and generator: [(36, 0), (57, 0), (83, 1), (101, 0), (124, 1), (133, 0)]
mapping by steps: [(36, 0), (57, 0), (-1, 1), (101, 0), (40, 1), (133, 0)]
highest interval width: 1
complexity measure: 36 (72 for smallest MOS)
highest error: 0.005995 (7.194 cents)

0/1, 104.9 cent generator
basis: (0.5, 0.087412649330450343)
mapping by period and generator: [(2, 0), (3, 1), (5, -2), (3, 15), (5, 11), (6, 8)]
mapping by steps: [(2, 0), (-1, 1), (13, -2), (-57, 15), (-39, 11), (-26, 8)]
highest interval width: 17
complexity measure: 34 (46 for smallest MOS)
highest error: 0.006039 (7.247 cents)

1/3, 234.5 cent generator
basis: (1.0, 0.19540011144114833)
mapping by period and generator: [(1, 0), (1, 3), (-1, 17), (3, -1), (6, -13), (8, -22)]
mapping by steps: [(2, 1), (-1, 1), (-19, -1), (7, 3), (25, 6), (38, 8)]
highest interval width: 39
complexity measure: 39 (41 for smallest MOS)
highest error: 0.005231 (6.277 cents)
unique

1/3, 165.8 cent generator
basis: (0.5, 0.13817544367976164)
mapping by period and generator: [(2, 0), (4, -3), (11, -23), (7, -5), (13, -22), (11, -13)]
mapping by steps: [(4, 2), (-1, 1), (-47, -12), (-1, 2), (-40, -9), (-17, -2)]
highest interval width: 23
complexity measure: 46 (58 for smallest MOS)
highest error: 0.003280 (3.935 cents)
unique

4/11, 263.7 cent generator
basis: (1.0, 0.21974602053976744)
mapping by period and generator: [(1, 0), (4, -11), (1, 6), (5, -10), (5, -7), (7, -15)]
mapping by steps: [(8, 3), (-1, 1), (26, 9), (10, 5), (19, 8), (11, 6)]
highest interval width: 28
complexity measure: 28 (32 for smallest MOS)
highest error: 0.008185 (9.822 cents)

0/1, 565.9 cent generator
basis: (1.0, 0.47161343493195518)
mapping by period and generator: [(1, 0), (3, -3), (16, -29), (8, -11), (11, -16), (7, -7)]
mapping by steps: [(1, 0), (-3, 3), (-42, 29), (-14, 11), (-21, 16), (-7, 7)]
highest interval width: 29
complexity measure: 29 (36 for smallest MOS)
highest error: 0.010144 (12.173 cents)

1/6, 116.8 cent generator
basis: (1.0, 0.097316259752627809)
mapping by period and generator: [(1, 0), (1, 6), (3, -7), (3, -2), (2, 15), (0, 38)]
mapping by steps: [(5, 1), (-1, 1), (22, 3), (17, 3), (-5, 2), (-38, 0)]
highest interval width: 45
complexity measure: 45 (72 for smallest MOS)
highest error: 0.003454 (4.145 cents)
unique

1/4, 175.9 cent generator
basis: (1.0, 0.14656139906015953)
mapping by period and generator: [(1, 0), (1, 4), (1, 9), (-1, 26), (2, 10), (4, -2)]
mapping by steps: [(3, 1), (-1, 1), (-6, 1), (-29, -1), (-4, 2), (14, 4)]
highest interval width: 28
complexity measure: 28 (34 for smallest MOS)
highest error: 0.009313 (11.176 cents)

0/1, 476.0 cent generator
basis: (1.0, 0.39667339152715836)
mapping by period and generator: [(1, 0), (0, 4), (17, -37), (4, -3), (11, -19), (16, -31)]
mapping by steps: [(1, 0), (-4, 4), (54, -37), (7, -3), (30, -19), (47, -31)]
highest interval width: 45
complexity measure: 45 (48 for smallest MOS)
highest error: 0.003774 (4.529 cents)
unique

Graham
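A note on reading these listings: the "basis" tuple gives the period and generator as fractions of an octave, so multiplying by 1200 recovers the cent values in each heading (and in the error figures). A quick check, not part of Graham's script:

```python
def to_cents(octave_fraction):
    """Convert an octave fraction from a 'basis' tuple to cents."""
    return 1200 * octave_fraction

# "1/2, 351.6 cent generator" with basis (1.0, 0.29301432925652815):
print(round(to_cents(0.29301432925652815), 1))   # 351.6

# And the raw errors are octave fractions too: 0.006546 -> 7.855 cents
print(round(to_cents(0.006546), 3))              # 7.855
```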
Message: 6176
Date: Sat, 25 Jan 2003 13:16:56
Subject: Re: Graham's Top 20 13-limit temperaments
From: Graham Breed

Gene Ward Smith wrote:
> The way I find a kernel basis is to find the invariant commas of the wedgie; that's four commas missing a prime in the 7-limit case, and 10 commas missing two primes in the 11-limit case. Then I LLL reduce that, which gives me a kernel basis, which I can then TM reduce if need be. The trouble is, I don't have a list of all interesting wedgies.

I wasn't talking about that. If you have a set of candidate unison vectors, you can check it's complete without doing the full search.

> I don't think the same temperaments will always fall through for you quite this neatly, because the badness of the worst temperaments on the list is no longer so much higher than badness of the best ones.

Yes, that's why I always take the best ones. If the filter's strict enough, the badness doesn't matter.

> It sounds like I am too slow, and you have bugs which need to be worked out.

I've sorted out the bug. I was getting temperaments with the same period and octave-equivalent mapping as mystery, but silly period mappings. But I was rejecting the real mystery because it looked the same. So now I have to reject fewer temperaments, and it's taking 82 minutes, but it seems to be working correctly.

Graham
Message: 6177
Date: Sat, 25 Jan 2003 13:56:07
Subject: Re: Graham's Top 20 13-limit temperaments
From: Graham Breed

Gene Ward Smith wrote:
> What kind of complexity? I didn't find anything nearly as high as 100 for unweighted complexity.

I found 103089 such in the 13-limit search, some of them probably duplicates.

Graham
Message: 6179
Date: Sat, 25 Jan 2003 21:45:47
Subject: Temperament finder update
From: Graham Breed

The optimized script for finding temperaments from unison vectors is at [link: #!/c/Python22/python.exe]. You need the latest version of Python, the Numeric extensions and my temperament library: http://microtonal.co.uk/temper.py

That's been updated to search on all versions of equal temperaments where we allow inconsistency. So I ran the 13-limit search on both. The unison vectors [results: 0/1, 16.4 cent generator ...] take about 70 minutes. Searching through pairs of the simplest 100 ETs with a consistency cutoff of 0.8 scale steps, using [script: # duplicate selectNumeric.py but do an ET search], takes about 50 seconds. Those results [1/2, 16.4 cent generator ...] are the same as for the unison vector search!

I'll look at getting the CGI updated to use the new inconsistency search.

Graham
Message: 6183
Date: Sat, 25 Jan 2003 08:12:59
Subject: Re: Graham's Top 20 13-limit temperaments
From: Graham Breed

Gene Ward Smith wrote:
> Here they are again, in the form I would have given them.

Do you have a script that could have given them, as you do for the 11-limit?

> Three comments--first, I don't know how this list was filtered. Second, I think 20 is too small a number for the 13-limit. Third, I used unweighted complexity because I don't have geometric complexity in the 13-limit coded as yet.

Complexity < 100, RMS error < 6 or so cents. Why so? How many have you tuned up?

Graham
Message: 6185
Date: Sat, 25 Jan 2003 10:47:50
Subject: Re: Graham's Top 20 13-limit temperaments
From: Graham Breed

Gene Ward Smith wrote:
> I'm running on Maple, which is more powerful but much slower than Python, so it's getting to the point where I should really use something else, or else get you to do it.

I'd have thought Maple would be better optimized for numerical work. Anyway, I do have a version now using Numerical Python, which links to C and Fortran libraries for matrix operations. I can now do the 13-limit search within an hour, but there are bugs -- I'm not getting a correct period mapping for mystery.

It wouldn't be that difficult to convert the script to a more efficient language like C++ or Java, with a suitable numerical library. I can invert matrices in C++, but that isn't optimized code. Currently I still use my high-level Python library to do the RMS optimization, but I think that can be re-written.

>> Complexity < 100, RMS error < 6 or so cents.
>
> What kind of complexity? I didn't find anything nearly as high as 100 for unweighted complexity.

It's the usual max-min complexity. And not calculated correctly, because I'm only using primes. Maybe nothing's that high. I'm not recording what I throw away. A lot of inaccurate temperaments were getting in, which is why I tightened up the error cutoff.

> As I say, I think you might be better for the job at this point. If I calculated the coefficients for determining geometric complexity, would you try that?

Yes, if you give me an algorithm I can copy. Ideally Python code or pseudocode.

If it's only a question of checking that the unison vectors can produce all the temperaments you're interested in, there are more efficient ways of doing it. Start with the wedgie for the temperament, and you can filter for unison vectors that are consistent with the temperament. Then you only need to take subsets of those until you get one that's linearly independent.
> As for why 20 is too small, as we go up in prime limit, the number of reasonable systems increases; it more and more becomes the case that the lists will be completely different if we use slightly different complexity or error measures. I think we should take in a bigger haul at least to start out with.

Good algorithms will be more likely to agree on the best temperaments than the mediocre ones. The best thing's to restrict the range of complexities and errors so that the same temperaments fall through. I can also run the ET and unison vector searches with the same parameters. Oh, and if two sets of results are held in RAM they can be compared automatically, so larger sets can be used.

Graham
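The unison-vector filtering idea in message 6185 above (keep only commas the temperament actually tempers out, then pick a linearly independent subset) can be sketched directly from the mapping format used in the thread's listings. The helper names are mine, and the meantone example is my own 5-limit illustration:

```python
from fractions import Fraction

def vanishes(mapping, comma_monzo):
    """True if a temperament tempers out the comma.
    `mapping` is a list of (period_steps, generator_steps) pairs, one per
    prime; the comma vanishes iff both dot products with its monzo are zero."""
    period = sum(m[0] * e for m, e in zip(mapping, comma_monzo))
    gen = sum(m[1] * e for m, e in zip(mapping, comma_monzo))
    return period == 0 and gen == 0

def rank(rows):
    """Rank of an integer matrix, by exact Gaussian elimination over Fractions."""
    rows = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(r + 1, len(rows)):
            if rows[i][col]:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

# 5-limit meantone: period = octave, generator = a fourth.
# 2 -> 1 period; 3 -> 2 periods - 1 gen; 5 -> 4 periods - 4 gens.
meantone = [(1, 0), (2, -1), (4, -4)]
syntonic = [-4, 4, -1]                 # 81/80 as a monzo
print(vanishes(meantone, syntonic))    # True
print(rank([syntonic]))                # 1
```

Filtering a comma list through `vanishes`, then growing a subset while `rank` keeps increasing, gives a kernel basis without trying every subset.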
Message: 6188
Date: Sun, 26 Jan 2003 11:52:24
Subject: Re: hi everybodyyyyyyyyyyyyyyyyyyyyyyyyy
From: Dave Keenan

Hi Paul,

I think I understand the point you're making, and it's a good one, but I don't think you have described the situation correctly.

--- In tuning-math@xxxxxxxxxxx.xxxx "wallyesterpaulrus <wallyesterpaulrus@y...>" <wallyesterpaulrus@y...> wrote:
> on this list, one group of people is talking about equal temperaments which belong to important families of tunings (each family being where a particular set of unison vectors vanishes),

You mean the folks searching for and cataloging good temperaments, in particular linear temperaments (LTs) at various odd limits?

> another group is concerned with notating equal temperaments according to where the potential unison vectors lie relative to their chains of fifths,

You mean the "Common notation ..." thread. I don't know what you mean by "potential" unison vectors. Only commas that _don't_ vanish are of any use in notating a temperament.

> and the two groups are not talking to one another. am i perceiving the situation correctly?

I was actively involved in the linear temperament effort up until the 7-limit. Gene has contributed to the Common notation thread regarding notating linear temperaments. But I agree he seems to have been following the idea that a temperament can be notated adequately using the notation for its most representative ET, and apparently assuming that George and I have already found the best notation for that ET (within the constraints we have imposed upon ourselves) -- neither of which may be true.

Although chains of fifths are the backbone of the sagittal notation, this does not prevent it from notating ETs in LT-specific ways, so the same ET can be notated differently depending on which LT you are considering it as. I recently gave some examples in a "Notating Linear Temperaments" thread (or some such), but no one responded or carried it forward.
Message: 6189
Date: Sun, 26 Jan 2003 00:26:12
Subject: Re: A common notation for JI and ETs
From: David C Keenan

Hi George,

I agree with most of your suggestions re single ASCII characters for sagittal. Previously I figured there were so few approximate up-down pairs that the left-right pairs <> [] {} had to get used somewhere. Sure they're laterally confusable, but so are / and \. However I think you've shown that we can get by without using them. How's this? (in order of size relative to strict Pythagorean)

'|    '       5'-comma sharp    32768:32805
.!    .       5'-comma flat
|(    `       5:7-comma sharp   5103:5120
!(    ,       5:7-comma flat
~|    ~       17-comma sharp    2176:2187
~!    $ or z  17-comma flat
~|(   h       17'-comma sharp   4096:4131
~!(   y       17'-comma flat
/|    /       5-comma sharp     80:81
\!    \       5-comma flat
|)    f       7-comma sharp     63:64
!)    t       7-comma flat
|\    &       55-comma sharp    54:55
!/    %       55-comma flat
(|    ?       7:11-comma sharp  45056:45927
(!    j       7:11-comma flat
(|(   d       5:11-comma sharp  44:45
(!(   q       5:11-comma flat
//|   //      25-diesis sharp   6400:6561
\\!   \\      25-diesis flat
/|)   n       13-diesis sharp   1024:1053
\!)   u       13-diesis flat
/|\   ^       11-diesis sharp   32:33
\!/   v       11-diesis flat
(|)   @       11'-diesis sharp  704:729
(!)   U or o  11'-diesis flat
(|\   m       13'-diesis sharp  26:27
(!/   w       13'-diesis flat
/||\  #       apotome sharp     2048:2187
\!!/  b       apotome flat

Which do you prefer out of $ or z, and U or o? I note that some folk have in the past used t for the tartini half-sharp and d for the backwards flat (meaning half-flat), but I don't think that should stop us using them in other ways here.

> For 217-ET that's 12 pairs of characters. I don't think I would want to see ) paired with (, for example.

OK. Let's not use ( or ) at all in the single ASCII character version.

> For the 5:7 comma why not ( for up and { for down, and the 19 comma would be ) and }.

No. I find absolutely nothing to suggest which is up and which is down in each pair, or even that they _are_ a pair.

> Maybe a 17 comma up could be S and down s, or would that be better for the 23 comma?
I don't like using uppercase-lowercase pairs. Too confusable. I know we're using x and X in the multi-ASCII, but these should be very rare and will often have additional direction cues provided by straight flags, e.g. \x/ /X\. The 23-comma is so rare it doesn't need a single ASCII character.

> After that it gets more difficult.

Yes. No need to go beyond 17.

> Do we really need shorthand ascii notation for anything more than straight and convex-flag symbols?

It would be nice to have the full 217-ET set, provided they're reasonably memorable, or figure-out-able.

> (Or are you intending to combine those single-character ascii symbols in any way?)

No.

> I think that it would get pretty complicated (hence difficult to remember), and the result would have very little resemblance to what sagittal symbols look like.

Well, how do you think we're doing above?

> Wouldn't it be more productive just to use the sagittal ascii system that we're already using for these things?

I'm not sure what you mean by "productive". The thing to do is to provide a key or legend like those above, whenever you use these ASCII symbols.

-- Dave Keenan
Brisbane, Australia
Dave Keenan's Home Page
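A legend like the one above is easy to keep machine-readable. Here is a hypothetical lookup table for a few of the single-character shorthands, with comma sizes computed from the ratios; the cent values are my addition, derived as 1200*log2(high/low), and are not part of the proposal:

```python
from math import log2

# Shorthand -> (comma name, ratio) for a few entries of the legend above.
LEGEND = {
    "/": ("5-comma sharp",   (80, 81)),
    "f": ("7-comma sharp",   (63, 64)),
    "^": ("11-diesis sharp", (32, 33)),
    "#": ("apotome sharp",   (2048, 2187)),
}

def cents(ratio):
    """Size of a comma given as a (low, high) ratio pair, in cents."""
    low, high = ratio
    return 1200 * log2(high / low)

for symbol, (name, ratio) in LEGEND.items():
    print(f"{symbol}  {name:16} {ratio[0]}:{ratio[1]}  {cents(ratio):.2f} cents")
```

For example the 5-comma 80:81 comes out at about 21.51 cents and the apotome 2048:2187 at about 113.69 cents, consistent with the "in order of size" arrangement of the table.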
Message: 6190
Date: Sun, 26 Jan 2003 13:00:42
Subject: Re: Graham's top 20, with standard vals
From: Graham Breed

Gene Ward Smith wrote:
> This is the same list as before, only this time I included all of the standard vals consistent with the temperament. The good news is this:
>
> (1) 18 out of the 20 systems can be defined by wedging two standard vals

So these have to be consistent?

> (2) In no case is it necessary to use vals where the number of divisions of the octave of one is more than twice that of another

Now that is interesting.

> (3) The vals could have been filtered. Here are the ets from 30 to 130, with badness less than 1.6:
>
> [31, 36, 37, 41, 43, 44, 46, 50, 53, 56, 58, 62, 63, 68, 70, 72, 77, 79, 80, 87, 94, 103, 111, 113, 118, 121, 130]
>
> We see that this would have more than sufficed to get our 18

I misread that, and generated 27 ETs with a cutoff of 0.6 scale step errors:

[17, 26, 27, 29, 31, 34, 36, 41, 43, 46, 50, 53, 58, 60, 62, 63, 65, 72, 77, 80, 87, 94, 103, 104, 111, 113, 121]

They give me all 20 temperaments (it took 5 seconds). And also one more, h26&h104, with a generator of 9.8 cents and a period of 1/26 octave. Generator mapping [1 2 0 0 1]. It breaks your rule because 26*2 < 104. I'll have missed it before because I wasn't going high enough to get 104-equal. If I'd taken 104 ETs instead of 100 I would have got it. It takes a whole minute to search through 200 ETs.

> The bad news is we missed two; one had the biggest error and second lowest complexity on the list, and the other had a very small period of 1/36 octave. A separate system for dealing with small periods may be required. The other bad news is that we are skating on thin ice, as many of the temperaments had only the minimum two standard vals.

Was that one of them? The error's 2.5 cents (minimax) and the complexity's 52, so it doesn't sound like either. I can't find another one that was missing.

Graham
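"Wedging two standard vals" means taking all the 2x2 minors of the pair of mappings, which gives the wedgie identifying the temperament, e.g. h26&h104 above. A minimal sketch; the 5-limit meantone example at the end is my own illustration, not from the thread:

```python
from itertools import combinations
from math import log2

def standard_val(n, primes=(2, 3, 5)):
    """Standard val for n-ET: nearest-integer mapping of each prime."""
    return [round(n * log2(p)) for p in primes]

def wedge(v1, v2):
    """Wedge product of two vals: the vector of 2x2 minors (the wedgie)."""
    return [v1[i] * v2[j] - v1[j] * v2[i]
            for i, j in combinations(range(len(v1)), 2)]

# 5-limit example: 12-equal and 19-equal together define meantone.
h12, h19 = standard_val(12), standard_val(19)
print(h12, h19)         # [12, 19, 28] [19, 30, 44]
print(wedge(h12, h19))  # [-1, -4, -4] -- meantone's wedgie, up to sign
```

Two ET pairs define the same linear temperament exactly when their wedgies agree up to sign and common factors, which is what makes the pairs-of-ETs search workable.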
Message: 6193
Date: Sun, 26 Jan 2003 01:54:43
Subject: Re: Temperament finder update
From: Carl Lumma

> You need the latest version of Python, the Numeric extensions

numpy or Numarray?

-C.

> 0/1, 16.4 cent generator
> 1/2, 16.4 cent generator

Awesome!
Message: 6194
Date: Sun, 26 Jan 2003 15:25:21
Subject: Re: Graham's top 20, with standard vals
From: Graham Breed

Gene Ward Smith wrote:
> Vals are consistent by definition. What I meant was, wedging the val with the wedgie will give a zero vector.

Say what? I thought a val was the complement of a vector.

> The missing ones are the two on the list with only one standard val; you may not be using only standard vals. I wish you would explain this. By "standard" I mean the mapping is determined by rounding log2(p) for prime p to the nearest integer.

I did explain this -- I'm not only taking nearest-prime approximations any more.

Tricontaheximal is h72&h36. I don't see why you missed that. The other one, where you only give 41, requires an alternative mapping of 34-equal. This is actually more accurate than the nearest-prime mapping, and so the only one I use in the search:

[34, 54, 79, 96, 118, 126]

I didn't realize alternative mappings were getting in with such a low cutoff. g104 is another one, so here's the temperament that the unison vectors don't seem to cover:

1/5, 9.8 cent generator
basis: (0.038461538461538464, 0.0081713150481090846)
mapping by period and generator: [(26, 0), (41, 1), (60, 2), (73, 0), (90, 0), (96, 1)]
mapping by steps: [(104, 26), (165, 41), (242, 60), (292, 73), (360, 90), (385, 96)]
highest interval width: 2
complexity measure: 52 (78 for smallest MOS)
highest error: 0.002107 (2.528 cents)
unique

I've updated the temperament finding scripts to use alternative mappings now.

Graham
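A "non-standard" mapping like that 34-equal val replaces the nearest-integer approximation of one prime with its second-nearest value. A sketch of how such alternatives might be enumerated (my own formulation, not necessarily how Graham's script does it):

```python
from math import floor, log2

PRIMES = (2, 3, 5, 7, 11, 13)

def standard_val(n):
    """Nearest-integer (standard) mapping of each prime in n-ET."""
    return [round(n * log2(p)) for p in PRIMES]

def alternative_vals(n):
    """Vals that flip one prime's rounding to its second-nearest value."""
    std = standard_val(n)
    alts = []
    for i, p in enumerate(PRIMES[1:], start=1):   # never remap the octave
        exact = n * log2(p)
        lo, hi = floor(exact), floor(exact) + 1
        alt = list(std)
        alt[i] = hi if std[i] == lo else lo
        alts.append(alt)
    return alts

print(standard_val(34))
# [34, 54, 79, 95, 118, 126] -- standard val maps 7 to 95 steps
# The alternative with 96 steps for prime 7 is the mapping quoted above:
print([v for v in alternative_vals(34) if v[3] == 96][0])
# [34, 54, 79, 96, 118, 126]
```

So 34-equal's standard val maps prime 7 to 95 steps, and the alternative quoted in the message flips that single entry to 96, which in this case lands closer to 7:4 in actual use.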
Message: 6198
Date: Sun, 26 Jan 2003 15:46:50
Subject: Re: Vals vs commas
From: Graham Breed

Gene Ward Smith wrote:
> We run into a problem with the strictly val approach if fewer than two standard vals cover the temperament in question; this can happen because it is of relatively low complexity, or because it has a small period. One way out of this is to include non-standard vals--that is, look at various second best choices for mappings to primes; this, however, also increases the computational burden.

It doesn't add much to the computational burden. You can still search with the same number of ETs, because the new, simpler ones are as likely to give a hit as the old, accurate ones. Even using twice as many ETs only makes the search 4 times as hard. The vector search is orders of magnitude harder, and gets worse the more primes you use. It may be possible to tame it by pruning branches, but that'll also make it much more complex.

> Graham reports that he didn't find he needed the comma list approach. I can't quite see how this happened, since the Tricontaheximal temperament, with a period of 33 1/3 cents, has only one standard val, namely 72. Other temperaments are skating close to the wire by having only two standard vals--Hemififths with 41 and 58, and Diaschismic with 46 and 58 among the top four, which is all I have checked.

Because I'm using non-standard vals. And this Tricontaheximal works anyway. Inconsistent ETs are more important the higher the odd limit, as simple temperaments can be missed. Probably non-standard vals will be important at the same time. I wasn't confident the ET search would compete with the vectors until I implemented them.

There are other ways of doing the search that should compete with the pairs of ETs. One is to go through the list of ETs and choose each number of steps as the generator for a linear temperament. The number of LTs you look at is roughly proportional to the number of ETs, so it's equivalent to the pairs search in complexity. The list of ETs should be smaller. You also have to look at some alternative mappings with the same ET and generator.

The other way is to start with a huge list of 5-limit linear temperaments (which is easy) and keep adding primes. Reject any LTs that are too complex or inaccurate as you go along.

Graham
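The generator-per-ET idea above can be counted out: for an n-ET, every step count up to n/2 is a candidate generator (g and n-g generate the same chain), so the number of candidate linear temperaments grows roughly linearly with the ET list. A rough sketch of the counting, in my own framing rather than Graham's code:

```python
def generator_candidates(n):
    """Candidate generator step counts for an n-ET: 1 .. n//2,
    since g steps and n-g steps generate the same chain."""
    return list(range(1, n // 2 + 1))

# The first few ETs from the 0.6-step-error list in message 6190:
ets = [17, 26, 27, 29, 31, 34, 36, 41, 43, 46]
total = sum(len(generator_candidates(n)) for n in ets)
# Roughly proportional to the number (and size) of the ETs:
print(total)   # 162
```

Compared with the pairs search, which looks at every pair of ETs, this walks each ET once, though each candidate generator may still need several alternative mappings checked.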
Message: 6199
Date: Sun, 26 Jan 2003 02:50:04
Subject: Re: Graham's Top 20 13-limit temperaments
From: Carl Lumma

> i have access to a 2.4 GHz machine for running Matlab overnight or for however long it takes. i'd be happy to try whatever algorithms you wish to spell out.

One page claims Matlab is implemented in C. I seem to think Maple is implemented in Maple, but I can't find that in the manual now. I'd be surprised if either of them were faster than Python, but I could very well be wrong.

-Carl