This is an **Opt-In Archive.**

We would like to hear from you if you want your posts included. For the contact address see About this archive. All posts are copyright (c).

9000 9050 9100 9150 9200 9250 9300 9350 9400 9450 9500 9550 9600 9650 9700 9750 **9800** 9850 9900 9950

**9800 -** 9825 -

Message: 9801
Date: Wed, 04 Feb 2004 22:10:51
Subject: Re: Comma reduction?
From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Paul G Hjelmstad" <paul.hjelmstad@u...> wrote:

> --- In tuning-math@xxxxxxxxxxx.xxxx "Gene Ward Smith" <gwsmith@s...> wrote:
>> --- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:
>>>
>>>> Are the 2 commas in the 7-limit always linearly independent?
>>>
>>> Yes, they are never 'collinear'.
>>
>> By definition of a 7-limit linear temperament.
>>
>>>> How are they generated, (from wedgies OR matrices)?
>>>
>>> You can pick them off the tree. We've been looking at some of
>>> the 'fruits' here.
>>
>> Trees don't work for me. You can get them from direct comma searches
>> or extract them out of temperaments, etc.
>
> I know how to find them using Python. I was kind of interested in the
> algorithm used to generate them...

In Matlab, I simply generate a gigantic n-dimensional lattice in prime
space, and calculate the error and complexity of each point. This
approach cannot be beat for the codimension-1 case, it seems to me.

>>>> Also, I was told that the complement of a wedge product in the
>>>> 5-limit is the same as the cross-product; how does this work in
>>>> the 7-limit?
>>
>> The complement of a 2-val is a 2-monzo, and vice versa, which just
>> involves reordering.
>>
>> ~<<l1 l2 l3 l4 l5 l6|| = <<l6 -l5 l4 l3 -l2 l1||
>
> Thanks. Are they called 2-val and 2-monzo because they are "linear"
> or is there some other reason?

I think the terms bival and bimonzo for these are closer to the
standard "bivector", but since these prefixes can get large and hard
to remember, I guess people here decided to use numbers instead.
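
Gene's reordering rule lends itself to a quick sanity check. Below is a minimal Python sketch (the function name `complement` is mine, not code from the list): reverse the six coefficients and negate the 2nd and 5th, per the formula above.

```python
def complement(bival):
    # reverse the six coefficients and negate the 2nd and 5th, per
    # ~<<l1 l2 l3 l4 l5 l6|| = <<l6 -l5 l4 l3 -l2 l1||
    l1, l2, l3, l4, l5, l6 = bival
    return [l6, -l5, l4, l3, -l2, l1]

# the map is an involution: taking the complement twice returns the wedgie
meantone = [1, 4, 10, 4, 13, 12]
assert complement(meantone) == [12, -13, 4, 10, -4, 1]
assert complement(complement(meantone)) == meantone
```

(As message 9802 points out, the result should strictly be written as a 2-monzo |...>>; only the coefficient shuffle is shown here.)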

Message: 9802
Date: Wed, 04 Feb 2004 06:16:17
Subject: Re: Comma reduction?
From: Gene Ward Smith

--- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:

> Shouldn't that be
>
> ~||l1 l2 l3 l4 l5 l6>> = <<l6 -l5 l4 l3 -l2 l1||
>
> or something?

Ooops. Yeah, both ways.

Message: 9804
Date: Wed, 04 Feb 2004 06:34:29
Subject: Re: I guess Pajara's not #2
From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Gene Ward Smith" <gwsmith@s...> wrote:

> --- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:
>
>> So the below was wrong. I forgot that you reverse the order of the
>> elements to convert a multival wedgie into a multimonzo wedgie! Doing
>> so would, indeed, give the same rankings as my original L_1
>> calculation. But that's gotta be the right norm. The Tenney lattice
>> is set up to measure complexity, and the norm we always associate
>> with it is the L_1 norm. Isn't that right? The L_1 norm on the monzo
>> is what I've been using all along to calculate complexity for the
>> codimension-1 case, in my graphs and in the "Attn: Gene 2"
>> post . . .
>
> To me it seemed there was a reasonable case for either norm, which
> means you could argue you could use any L_p norm also, since they lie
> between L_1 and L_inf. Do you need me to redo the calculation using
> the L_1 norm?

How's it coming along?

P.S. See my post on the tuning list about the bizarre results of using
the L_1 norm for monzos . . .
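
For the codimension-1 case, the L_1 complexity Paul describes can be computed directly from a comma's monzo. A sketch, assuming the Tenney-weighted L_1 norm (weights log2 of each prime); the function name is mine:

```python
from math import log2

def tenney_l1(monzo, primes=(2, 3, 5)):
    # Tenney-weighted L_1 norm: sum over primes of |exponent| * log2(prime)
    return sum(abs(e) * log2(p) for e, p in zip(monzo, primes))

# for a comma n/d in lowest terms this equals log2(n*d):
# 81/80 has monzo |-4 4 -1>, so its norm is log2(81 * 80)
syntonic = tenney_l1([-4, 4, -1])
```

The identity holds because every prime appears in either the numerator or the denominator, never both, so the weighted absolute exponents sum to log2(n) + log2(d).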

Message: 9805
Date: Wed, 04 Feb 2004 22:19:06
Subject: Re: Comma reduction?
From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Paul G Hjelmstad" <paul.hjelmstad@u...> wrote:

> --- In tuning-math@xxxxxxxxxxx.xxxx "Gene Ward Smith" <gwsmith@s...> wrote:
>> --- In tuning-math@xxxxxxxxxxx.xxxx "Paul G Hjelmstad" wrote:
>>>
>>> Thanks. Are they called 2-val and 2-monzo because they are "linear"
>>> or is there some other reason?
>>
>> 2-vals are two vals wedged, 2-monzos are two monzos wedged. The former
>> is linear unless it reduces to the zero wedgie, the latter is linear
>> only in the 7-limit.
>
> Thanks! So the latter is linear in the 7-limit because the 7-limit is
> formed from two commas... I see.

The 7-limit is 4-dimensional, so if you temper out 2 commas you're
left with a 2-dimensional system, which is what we usually refer to as
"linear". Is that what you meant?
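
The dimension count can be verified by wedging two comma monzos: a nonzero 2-monzo means the commas are linearly independent, so tempering both out of the 4-dimensional 7-limit leaves rank 4 - 2 = 2. A Python sketch (variable and function names are mine); the two commas chosen happen to be a pair that 7-limit meantone tempers out:

```python
from itertools import combinations

def wedge(a, b):
    # 2-monzo of two monzos: Pluecker coordinates a_i*b_j - a_j*b_i, i < j
    return [a[i] * b[j] - a[j] * b[i]
            for i, j in combinations(range(len(a)), 2)]

c225_224 = [-5, 2, 2, -1]  # monzo of 225/224
c126_125 = [1, 2, -3, 1]   # monzo of 126/125
bimonzo = wedge(c225_224, c126_125)
# bimonzo is nonzero, so the commas are independent; up to overall sign,
# its complement is the meantone wedgie <<1 4 10 4 13 12||
```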

Message: 9806
Date: Wed, 04 Feb 2004 00:35:23
Subject: Re: [tuning] Re: question about 24-tET
From: Carl Lumma

>> I just continued a thread on tuning-math in which Gene apparently
>> demonstrated odd-limit TOP...
>
> Nothing particularly odd-limit about it. You just need a set of
> consonances, with weights (or multiplicities) if you choose.

Can we get generators for 5-limit meantone, 7-limit schismic, and
11-limit miracle for each of:

(1) TOP
(2) odd-limit TOP
(3) rms TOP (or can you only do integer-limit rms TOP?)
(4) rms odd-limit TOP

Thanks,

-Carl

Message: 9807
Date: Wed, 04 Feb 2004 06:46:07
Subject: Re: I guess Pajara's not #2
From: Gene Ward Smith

--- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:

> How's it coming along?

I'm working to give people what they seem to want. See my next post.

Message: 9808
Date: Wed, 04 Feb 2004 22:32:26
Subject: Re: Paul32 ordered by a beep-ennealimmal measure
From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:

> --- In tuning-math@xxxxxxxxxxx.xxxx "Gene Ward Smith" <gwsmith@s...> wrote:
>
>> Here are the same temperaments, ordered by error * complexity^2.8.
>> If the exponent was 2.7996... then ennealimmal and beep would be the
>> same, but why get fancy? I'm thinking an error cutoff of 15 and a
>> badness cutoff of 4200 might work, looking at this; that would include
>> schismic. More ruthlessly, we might try 3500. Really savage would be
>> 3000; bye-bye miracle.
>
> Strange; miracle was #3 according to log-flat, wasn't it?

Among your original L_inf top 64, it seems it was indeed #3 by L_1
log-flat:

Yahoo groups: /tuning-math/message/9048

Message: 9809
Date: Wed, 04 Feb 2004 07:01:46
Subject: Re: Acceptance regions
From: Gene Ward Smith

--- In tuning-math@xxxxxxxxxxx.xxxx "Gene Ward Smith" <gwsmith@s...> wrote:

> This defines a triangular region in the xy plane. We could define a
> region with a smooth boundary instead, in particular an ellipse. If we
> took a set of temperaments we wanted on our list, and analyzed them
> statistically, we might have an idea of what region we are looking
> for. One way might be to do principal component analysis, and convert
> the data set into something we can draw a nice circle around. All of
> which leads to the question, which temperaments do we start out with?

Here is Paul's true L_1 top list of 32 temperaments. I hope I made this
so Paul can process it easily. On each line is the temperament, the
coordinates after performing a principal component analysis, and the
radius--the distance from the midpoint. It is sorted in order of
increasing radius, but this is not a badness figure per se; the
question is, however, whether a good list can be obtained by reducing
the radius, so you want to check the bottom part of the list and see
what is essential. I started with stuff like father and ennealimmal, so
they are still in there. I don't know what temperaments not on the list
fall inside the circle.

The results strike me as a little weird, in the sense that meantone
comes in at number 29. However, the last three temperaments on the list
are outliers in terms of radius, and we could get a list of 29 by
deep-sixing them. Since the temperaments in question are beep, father
and the {21/20, 28/27} temperament, that would definitely please some
people.

Message: 9810
Date: Wed, 04 Feb 2004 22:38:44
Subject: Re: Comma reduction?
From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Paul G Hjelmstad" <paul.hjelmstad@u...> wrote:

> --- In tuning-math@xxxxxxxxxxx.xxxx "Gene Ward Smith" <gwsmith@s...> wrote:
>> --- In tuning-math@xxxxxxxxxxx.xxxx "Paul G Hjelmstad" wrote:
>>>
>>> Thanks. Are they called 2-val and 2-monzo because they are "linear"
>>> or is there some other reason?
>>
>> 2-vals are two vals wedged, 2-monzos are two monzos wedged. The former
>> is linear unless it reduces to the zero wedgie, the latter is linear
>> only in the 7-limit.
>
> Thanks! So the latter is linear in the 7-limit because the 7-limit is
> formed from two commas... I see.

I thought I'd also clarify that the dimension of (or number of entries
in) the wedgie can be read off Pascal's triangle -- look at the row
corresponding to the dimension of the prime space (the first row is
row 0), and the entry in that row corresponding to the dimension of
the kernel (the first entry is entry 0).

1
1 1
1 2 1
1 3 3 1
1 4 6 4 1

So 4-dimensional prime space with two independent commas tempered out
requires a 6-dimensional wedgie, as we've seen.
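
The Pascal's-triangle lookup is just a binomial coefficient, so it can be sketched in a couple of lines (the function name is mine):

```python
from math import comb

def wedgie_size(prime_dim, kernel_dim):
    # Pascal's triangle entry: row = dimension of the prime space,
    # entry = dimension of the kernel (both counted from 0)
    return comb(prime_dim, kernel_dim)

# 7-limit (4-D prime space), two commas tempered out: a 6-entry wedgie
assert wedgie_size(4, 2) == 6
# 5-limit (3-D prime space), one comma: a 3-entry wedgie
assert wedgie_size(3, 1) == 3
```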

Message: 9812
Date: Wed, 04 Feb 2004 07:08:06
Subject: Re: Acceptance regions
From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Gene Ward Smith" <gwsmith@s...> wrote:

> --- In tuning-math@xxxxxxxxxxx.xxxx "Gene Ward Smith" <gwsmith@s...> wrote:
>
>> This defines a triangular region in the xy plane. We could define a
>> region with a smooth boundary instead, in particular an ellipse. If we
>> took a set of temperaments we wanted on our list, and analyzed them
>> statistically, we might have an idea of what region we are looking
>> for. One way might be to do principal component analysis, and convert
>> the data set into something we can draw a nice circle around. All of
>> which leads to the question, which temperaments do we start out with?

Message: 9813
Date: Wed, 04 Feb 2004 23:27:08
Subject: Re: finding a moat in 7-limit commas a bit tougher . . .
From: Dave Keenan

--- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:

> --- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> wrote:
>> --- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:
>>> --- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> wrote:
>>>>
>>>> Perhaps we should limit such tests to otonalities having at most one
>>>> note per prime (or odd) in the limit. e.g. If you can't make a
>>>> convincing major triad then it ain't 5-limit. And you can't use
>>>> scale-spectrum timbres, although you can use inharmonics that have
>>>> no relation to the scale.
>>>
>>> yes, mastuuuhhhhh . . . =(
>>
>> It was just a suggestion. I wrote "perhaps we should" and "e.g.".
>>
>> What does "=(" mean?
>>
>> I'm guessing you think it's a bad idea.
>
> It's a picture of me succumbing to your authority.

I don't have any, and I'd rather you succumbed to the good sense of my
arguments. :-)

Message: 9814
Date: Wed, 04 Feb 2004 23:51:14
Subject: Re: Some convex hull badness measures
From: Dave Keenan

--- In tuning-math@xxxxxxxxxxx.xxxx "Gene Ward Smith" <gwsmith@s...> wrote:

Hi Gene,

To be able to comment on any of this, I really need to see them
plotted in the (linear) error vs. complexity plane.

Could you just post something like that list of 114 TOP 7-limit linear
temps again, but with Paul's latest favourite complexity measure (the
spooky one) and with the log-flat badness cutoff raised somewhat, so
we get a few previously-unnamed temps in the middle region? One temp
per line, tab delimited. We only need columns for complexity, error,
comma-pair, and name (if already in use).

Thanks.

Message: 9815
Date: Wed, 04 Feb 2004 07:11:38
Subject: Re: Acceptance regions
From: Gene Ward Smith

--- In tuning-math@xxxxxxxxxxx.xxxx "Gene Ward Smith" <gwsmith@s...> wrote:

> Here is Paul's true L_1 top list of 32 temperaments.

Somehow it didn't actually get added; here it is. Blackwood rules, but
if we toss beep and father it won't anymore, I imagine.

[0, 5, 0, 8, 0, -14] [.2064288959e-1, -.1043499280]
[4, 2, 2, -6, -8, -1] [.1314173477, -.5776059996e-1]
[3, 0, 6, -7, 1, 14] [-.9759478278e-1, -.1184137769]
[6, 5, 22, -6, 18, 37] [-.1480758164, .5924299839e-1]
[4, 4, 4, -3, -5, -2] [-.1139119995, -.1276836173]
[9, 5, -3, -13, -30, -21] [-.1902784347, .1548210912e-1]
[16, 2, 5, -34, -37, 6] [.7687685623e-1, .1955158230]
[10, 9, 7, -9, -17, -9] [-.2605524521, -.2903797753e-1]
[2, 8, 8, 8, 7, -4] [-.2717885664, -.1286298091]
[1, -8, -14, -15, -25, -10] [-.2989669753, -.4080104036e-1]
[6, -7, -2, -25, -20, 15] [-.3049734797, -.3330484564e-1]
[7, -3, 8, -21, -7, 27] [-.3074384258, -.4688241454e-1]
[1, 4, -2, 4, -6, -16] [-.2696069971, -.1733403969]
[2, 8, 1, 8, -4, -20] [-.2896647403, -.1420876517]
[6, 5, 3, -6, -12, -7] [-.3075281408, -.1335897048]
[4, -3, 2, -14, -8, 13] [-.3195568516, -.1404043854]
[1, 9, -2, 12, -6, -30] [-.3339264533, -.1182544459]
[2, 1, 6, -3, 4, 11] [.3610308982, .5104682762e-2]
[0, 0, 7, 0, 11, 16] [.3740392389, .1111744330e-1]
[3, 0, -6, -7, -18, -14] [-.3548963487, -.1508618972]
[2, 25, 13, 35, 15, -40] [.2647446362, .3022722620]
[1, -3, -4, -7, -9, -1] [.4175560613, .2406964027e-1]
[2, -4, -4, -11, -12, 2] [-.4040260830, -.1851377308]
[5, 1, 12, -10, 5, 25] [-.4482934949, -.1394673397]
[7, 9, 13, -2, 1, 5] [-.4461970788, -.1511829310]
[5, 13, -17, 9, -41, -76] [.3651340235, .3600565223]
[18, 27, 18, 1, -22, -34] [.4028591494, .3906980840]
[13, 14, 35, -8, 19, 42] [.4248301718, .3944244021]
[1, 4, 10, 4, 13, 12] [-.5476384245, -.2120574236]
[2, 3, 1, 0, -4, -6] [.9385116395, .1474599147]
[1, -1, 3, -4, 2, 10] [.9519941420, .1568469925]
[1, 4, 3, 4, 2, -4] [.9852784843, .1709570383]
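
Gene doesn't show his code, but the kind of principal component analysis he describes -- center the (complexity, error) pairs, then rotate onto the principal axes so the blob can be circled -- can be sketched for the 2-D case with the standard library alone. This is a sketch of the general technique, not a reconstruction of Gene's actual procedure (his scaling may differ):

```python
from math import atan2, cos, sin

def pca_2d(points):
    # center the data, then rotate onto the principal axes of the 2x2
    # covariance matrix; the angle satisfies tan(2t) = 2*sxy / (sxx - syy)
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    t = 0.5 * atan2(2 * sxy, sxx - syy)
    c, s = cos(t), sin(t)
    return [((x - mx) * c + (y - my) * s, (y - my) * c - (x - mx) * s)
            for x, y in points]
```

The "radius" Gene mentions would then be `hypot(u, v)` for each transformed pair; getting a genuinely circular blob would additionally require rescaling each axis by its standard deviation, which the sketch omits.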

Message: 9816
Date: Wed, 04 Feb 2004 07:14:51
Subject: Re: Duals to ems optimization
From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:

>> For any set of consonances C we want to do an rms optimization for, we
>> can find a corresponding Euclidean norm on the val space (or
>> octave-excluding subspace if we are interested in the odd limit) by
>> taking the sum of terms
>>
>> (c2 x2 + c3 x3 + ... + cp xp)^2
>>
>> for each monzo |c2 c3 ... cp> in C. If we want something corresponding
>> to weighted optimization we would add weights, and if we wanted the
>> odd limit, the consonances in C can be restricted to quotients of odd
>> integers, in which case c2 will always be zero.
>
> //
>
>> In the 11-limit and beyond, of course, things become more complicated
>> because we will want to introduce ratios of odd numbers which are not
>> necessarily primes. If we take ratios of odd numbers up to 11 for our
>> set of consonances, we get
>>
>> sqrt(20x3^2 + 5x5^2 - 2x7x11 - 6x3x5 + 5x7^2 + 5x11^2 - 6x3x7
>>      - 2x5x7 - 2x5x11 - 6x3x11)
>>
>> as our norm on vals, and correspondingly,
>>
>> sqrt(18e3^2 + 36e3e5 + 36e3e7 + 36e3e11 + 62e5^2 + 58e5e7 + 58e5e11 //
>>
>> as our norm on octave classes. This norm is not altogether
>> satisfactory; for instance it gives a length of sqrt(44) to 5/3 and
>> 6/5, and a length of sqrt(62) to 5/4. This suggests to me that there
>> is something a little dubious in theory about using unweighted rms
>> optimization, at least in the 11-limit and beyond. An alternative rms
>> optimization scheme would be to use the dual of the norm I've been
>> using on octave classes as the norm for a weighted rms optimization.
>>
>> In the 5-limit, this norm on octave classes is
>>
>> sqrt(p3 e3^2 + p3 e3 e5 + p5 e5^2)
>>
>> where p3 = log2(3), p5 = log2(5). The dual norm on vals is
>>
>> sqrt(p5 x3^2 - p3 x3 x5 + p3 x5^2)
>>
>> These norms will weigh lower prime errors a little higher than higher
>> prime errors, which of course is also what TOP does. Now I need a
>> catchy name for them.
>
> Have you checked that this weighted version is ok at the 11-limit
> (ie doesn't make 5/4 shorter than 5/3)?

I don't know of

> Also: If you leave in the c2 term does this optimize over "all the
> intervals" like TOP?

No.

> (3) odd-limit TOP

We have several different ideas of what this means floating around.
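
Gene's 5-limit octave-class norm is easy to experiment with numerically. A sketch (the function name is mine); as the computation shows, under this norm 5/3 and 5/4 come out the same length, and the fifth is shorter than either:

```python
from math import log2, sqrt

p3, p5 = log2(3), log2(5)

def octave_class_norm(e3, e5):
    # Gene's weighted norm on 5-limit octave classes |* e3 e5>
    return sqrt(p3 * e3**2 + p3 * e3 * e5 + p5 * e5**2)

# 3/2 is |* 1 0>, 5/4 is |* 0 1>, 5/3 is |* -1 1>
fifth = octave_class_norm(1, 0)        # sqrt(log2 3)
major_third = octave_class_norm(0, 1)  # sqrt(log2 5)
major_sixth = octave_class_norm(-1, 1) # also sqrt(log2 5): the e3 terms cancel
```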

Message: 9817
Date: Wed, 04 Feb 2004 21:31:47
Subject: Re: Acceptance regions
From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Gene Ward Smith" <gwsmith@s...> wrote:

> --- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:
>
>> I don't understand. Those clearly aren't error and complexity numbers
>> you're giving after the wedgie. What am I to do with them?
>
> I thought you might like to plot them. However, what I really need is a
> list of temperaments you think we ought to include.

I'm not omniscient.

> Don't worry about excluding.

This isn't helping.

> The numbers have a mean value of zero, and are adjusted so that the 32
> temperaments are in a more-or-less circular blob. Apparently blackwood
> was near the average values for complexity and error, and hence ended
> up closest to the center (in a suitably adjusted sense of "closest"
> that the principal component analysis gave). The center of the
> acceptance region is not a goodness measure; all it means is that you
> are stuck with blackwood if you use this sort of region. There is no
> getting rid of it other than by starting with a new set of
> temperaments, which moves the midpoint somewhere else and changes the
> ellipses you draw.
>
> Have you listed your must-have 7-limit linear temperaments already?

No; the idea was to do a complete search within an extra-large region
and then look for the widest moats. Dave and I have done this for
equal temperaments, 5-limit linear temperaments, and 7-limit planar
temperaments. Now we're asking for your help.

Message: 9818
Date: Wed, 04 Feb 2004 07:16:32
Subject: Re: Acceptance regions
From: Paul Erlich

I don't understand. Those clearly aren't error and complexity numbers
you're giving after the wedgie. What am I to do with them?

--- In tuning-math@xxxxxxxxxxx.xxxx "Gene Ward Smith" <gwsmith@s...> wrote:

> --- In tuning-math@xxxxxxxxxxx.xxxx "Gene Ward Smith" <gwsmith@s...> wrote:
>
>> Here is Paul's true L_1 top list of 32 temperaments.
>
> Somehow it didn't actually get added; here it is. Blackwood rules, but
> if we toss beep and father it won't anymore, I imagine.
>
> [0, 5, 0, 8, 0, -14] [.2064288959e-1, -.1043499280]
> [4, 2, 2, -6, -8, -1] [.1314173477, -.5776059996e-1]
> [3, 0, 6, -7, 1, 14] [-.9759478278e-1, -.1184137769]
> [6, 5, 22, -6, 18, 37] [-.1480758164, .5924299839e-1]
> [4, 4, 4, -3, -5, -2] [-.1139119995, -.1276836173]
> [9, 5, -3, -13, -30, -21] [-.1902784347, .1548210912e-1]
> [16, 2, 5, -34, -37, 6] [.7687685623e-1, .1955158230]
> [10, 9, 7, -9, -17, -9] [-.2605524521, -.2903797753e-1]
> [2, 8, 8, 8, 7, -4] [-.2717885664, -.1286298091]
> [1, -8, -14, -15, -25, -10] [-.2989669753, -.4080104036e-1]
> [6, -7, -2, -25, -20, 15] [-.3049734797, -.3330484564e-1]
> [7, -3, 8, -21, -7, 27] [-.3074384258, -.4688241454e-1]
> [1, 4, -2, 4, -6, -16] [-.2696069971, -.1733403969]
> [2, 8, 1, 8, -4, -20] [-.2896647403, -.1420876517]
> [6, 5, 3, -6, -12, -7] [-.3075281408, -.1335897048]
> [4, -3, 2, -14, -8, 13] [-.3195568516, -.1404043854]
> [1, 9, -2, 12, -6, -30] [-.3339264533, -.1182544459]
> [2, 1, 6, -3, 4, 11] [.3610308982, .5104682762e-2]
> [0, 0, 7, 0, 11, 16] [.3740392389, .1111744330e-1]
> [3, 0, -6, -7, -18, -14] [-.3548963487, -.1508618972]
> [2, 25, 13, 35, 15, -40] [.2647446362, .3022722620]
> [1, -3, -4, -7, -9, -1] [.4175560613, .2406964027e-1]
> [2, -4, -4, -11, -12, 2] [-.4040260830, -.1851377308]
> [5, 1, 12, -10, 5, 25] [-.4482934949, -.1394673397]
> [7, 9, 13, -2, 1, 5] [-.4461970788, -.1511829310]
> [5, 13, -17, 9, -41, -76] [.3651340235, .3600565223]
> [18, 27, 18, 1, -22, -34] [.4028591494, .3906980840]
> [13, 14, 35, -8, 19, 42] [.4248301718, .3944244021]
> [1, 4, 10, 4, 13, 12] [-.5476384245, -.2120574236]
> [2, 3, 1, 0, -4, -6] [.9385116395, .1474599147]
> [1, -1, 3, -4, 2, 10] [.9519941420, .1568469925]
> [1, 4, 3, 4, 2, -4] [.9852784843, .1709570383]

Message: 9819
Date: Wed, 04 Feb 2004 07:17:14
Subject: Re: Duals to ems optimization
From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:

> --- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
>
>>> For any set of consonances C we want to do an rms optimization for,
>>> we can find a corresponding Euclidean norm on the val space (or
>>> octave-excluding subspace if we are interested in the odd limit) by
>>> taking the sum of terms
>>>
>>> (c2 x2 + c3 x3 + ... + cp xp)^2
>>>
>>> for each monzo |c2 c3 ... cp> in C. If we want something
>>> corresponding to weighted optimization we would add weights, and if
>>> we wanted the odd limit, the consonances in C can be restricted to
>>> quotients of odd integers, in which case c2 will always be zero.
>>
>> //
>>
>>> In the 11-limit and beyond, of course, things become more complicated
>>> because we will want to introduce ratios of odd numbers which are not
>>> necessarily primes. If we take ratios of odd numbers up to 11 for our
>>> set of consonances, we get
>>>
>>> sqrt(20x3^2 + 5x5^2 - 2x7x11 - 6x3x5 + 5x7^2 + 5x11^2 - 6x3x7
>>>      - 2x5x7 - 2x5x11 - 6x3x11)
>>>
>>> as our norm on vals, and correspondingly,
>>>
>>> sqrt(18e3^2 + 36e3e5 + 36e3e7 + 36e3e11 + 62e5^2 + 58e5e7 + 58e5e11 //
>>>
>>> as our norm on octave classes. This norm is not altogether
>>> satisfactory; for instance it gives a length of sqrt(44) to 5/3 and
>>> 6/5, and a length of sqrt(62) to 5/4. This suggests to me that there
>>> is something a little dubious in theory about using unweighted rms
>>> optimization, at least in the 11-limit and beyond. An alternative rms
>>> optimization scheme would be to use the dual of the norm I've been
>>> using on octave classes as the norm for a weighted rms optimization.
>>>
>>> In the 5-limit, this norm on octave classes is
>>>
>>> sqrt(p3 e3^2 + p3 e3 e5 + p5 e5^2)
>>>
>>> where p3 = log2(3), p5 = log2(5). The dual norm on vals is
>>>
>>> sqrt(p5 x3^2 - p3 x3 x5 + p3 x5^2)
>>>
>>> These norms will weigh lower prime errors a little higher than higher
>>> prime errors, which of course is also what TOP does. Now I need a
>>> catchy name for them.
>>
>> Have you checked that this weighted version is ok at the 11-limit
>> (ie doesn't make 5/4 shorter than 5/3)?
>
> I don't know of

Whoops . . . I don't know if you followed the thread this far:

Yahoo groups: /tuning-math/message/8692

Message: 9820
Date: Wed, 04 Feb 2004 07:31:05
Subject: Re: Duals to ems optimization
From: Carl Lumma

>>> ||5/4|| = ||7/4|| = ||11/8|| = sqrt(11).
>>
>> OK . . .
>> (3.3166)

How is that ok?

-C.

Message: 9821
Date: Wed, 04 Feb 2004 07:33:18
Subject: Re: Duals to ems optimization
From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Carl Lumma" <ekin@l...> wrote:

>>> ||5/4|| = ||7/4|| = ||11/8|| = sqrt(11).
>>
>> OK . . .
>> (3.3166)
>
> How is that ok?
>
> -C.

It's not surprising given how Gene set it up: with the same weighting
that gives equilateral triangles and tetrahedra in the 5-limit and
7-limit lattices . . .

Message: 9822
Date: Wed, 04 Feb 2004 08:04:41
Subject: Re: Acceptance regions
From: Gene Ward Smith

--- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:

> I don't understand. Those clearly aren't error and complexity numbers
> you're giving after the wedgie. What am I to do with them?

I thought you might like to plot them. However, what I really need is a
list of temperaments you think we ought to include. Don't worry about
excluding.

The numbers have a mean value of zero, and are adjusted so that the 32
temperaments are in a more-or-less circular blob. Apparently blackwood
was near the average values for complexity and error, and hence ended
up closest to the center (in a suitably adjusted sense of "closest"
that the principal component analysis gave). The center of the
acceptance region is not a goodness measure; all it means is that you
are stuck with blackwood if you use this sort of region. There is no
getting rid of it other than by starting with a new set of
temperaments, which moves the midpoint somewhere else and changes the
ellipses you draw.

Have you listed your must-have 7-limit linear temperaments already?

Message: 9823
Date: Wed, 04 Feb 2004 08:13:19
Subject: Re: Duals to ems optimization
From: Gene Ward Smith

--- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:

> It's not surprising given how Gene set it up: with the same weighting
> that gives equilateral triangles and tetrahedra in the 5-limit and
> 7-limit lattices . . .

Actually, isosceles triangles. The fifth gets a length of log(3) (or
cents(3), or whatever log you are using) and the major and minor thirds
have the same length, log(5). The septimal consonances are even farther
out, 7/6, 7/5 and 7/4 all having the same length. Carl presumably would
like the symmetrical lattices even less.

Message: 9824
Date: Wed, 04 Feb 2004 08:15:22
Subject: Re: Duals to ems optimization
From: Gene Ward Smith

--- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:

> --- In tuning-math@xxxxxxxxxxx.xxxx "Carl Lumma" <ekin@l...> wrote:
>
>>> ||5/4|| = ||7/4|| = ||11/8|| = sqrt(11).

Oops. I see it was symmetrical.
