Message: 5326

Date: Sat, 08 Dec 2001 06:22:38

Subject: Re: More lists

From: dkeenanuqnetau

--- In tuning-math@y..., graham@m... wrote:
> It may depend on whether or not you include the zero error for 1/1
> in the mean.

I don't. Seems like a silly idea. And that wouldn't change _where_ the 
minimum occurs.

Are you able to look at the Excel spreadsheet I gave the URL for in my 
previous message in this thread?


Message: 5328

Date: Sat, 08 Dec 2001 06:37:30

Subject: Re: Diophantine approximation alternatives

From: dkeenanuqnetau

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> Dave was questioning the lack of alternatives, so let's look at the
> standard Diophantine approximation ones.

Why not look outside Diophantine approximation alternatives?

> In general, if f(n)>0 is such that its integral is unbounded, then
> for d irrational numbers xi,
> max f(n)^(-1/d) |round(n*xi) - n*xi| < c
> "almost always" has an infinite number of solutions. This isn't as
> tight a theorem as when we use exponents, but in practice it works
> for our problem.
...
> I don't see any advantages here, but there it is.

There probably aren't any advantages here. But why does badness have
to be of the form f(n) * |round(n*xi) - n*xi| at all? Why not
f(n) + |round(n*xi) - n*xi| or f(n) * g(|round(n*xi) - n*xi|)?
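
To make the three functional forms concrete, here is a tiny
illustrative sketch (mine, not from the thread); f, g and the choice
of the x_i are placeholders, not anyone's proposed measure.

```python
import math

# Illustrative only: the x_i are log2(3), log2(5), log2(7), and the
# rounding error |round(n*x_i) - n*x_i| is measured in steps.  f and g
# are placeholder choices, just to show the three shapes being compared.

XI = [math.log2(p) for p in (3, 5, 7)]

def worst_error(n):
    """max over the x_i of |round(n*x_i) - n*x_i|, in steps."""
    return max(abs(round(n * x) - n * x) for x in XI)

f = lambda n: n ** (4.0 / 3.0)   # a growth penalty (placeholder choice)
g = lambda e: e * e              # a nonlinear error response (placeholder)

for n in (12, 19, 22, 31):
    e = worst_error(n)
    print(n, f(n) * e, f(n) + e, f(n) * g(e))
```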

Message: 5331

Date: Sat, 8 Dec 2001 21:32 +00

Subject: Re: Wedge products

From: graham@xxxxxxxxxx.xx.xx

Update! The code at

<############################################################################### *>

has been updated to do most of the stuff I used to use matrices and
Numeric for, but with wedge products and standard Python 1.5.2. It's
passed all the tests I've tried so far; there's still some cleaning up
to do.

My wedge invariants can't be made unique and invariant in all cases,
but they work most of the time. I could have a method for declaring
whether two wedgable objects are equivalent. Also, my invariant is
very different to Gene's.

I still don't get the process for calculating unison vectors with
wedge products, especially in the general case.

One good thing is that the generator mapping (ignoring the period
mapping), which I'm using as my invariant key, is simply the
octave-equivalent part of the wedge product of the commatic unison
vectors!

Example:

>>> h31 = temper.PrimeET(31, temper.primes[:4])
>>> h41 = temper.PrimeET(41, temper.primes[:4])
>>> h31^h41
{(2, 3): 15, (0, 4): 15, (1, 4): 3, (1, 2): -25, (0, 3): -2,
 (2, 4): 59, (0, 2): -7, (3, 4): 49, (1, 3): -20, (0, 1): 6}
>>> (h31^h41).invariant()
(6, -7, -2, 15, -25, -20, 3, 15, 59, 49)

Gene:
> First you order the basis so that a wedge product taken from two ets
> or two unison vectors will correspond:
>
> Yahoo groups: /tuning-math/message/1553 *

I've got mine ordered, but it looks like a different order to yours.

> Then you put the wedge product into a standard form, by
>
> (1) Dividing through by the gcd of the coefficients, and

Okay, done that.

> (2) Changing sign if need be, so that the 5-limit comma (or unison)
> 2^w[6] * 3^(-w[2])*5^w[1] where w is the wedgie, is greater than 1.
> If it equals 1, go on to the next invariant comma, which leaves out
> 5, and if that is 1 also to the one which leaves out 3. See
>
> Yahoo groups: /tuning-math/message/1555 *
>
> for the invariant commas. The result of this standardization is the
> wedge invariant, or wedgie, which uniquely determines the
> temperament.

Done something like this. The problem is with zeroes. As it stands,
the 5-limit interval 5:4 is the same as the 7-limit interval 5:4 as
far as the wedge products are concerned. But some zero elements aren't
always present. Either I can get rid of them, which might mean that
different products have the same invariant, or enumerate the missing
bases when I calculate the invariant. The latter problem is the same
as the one I'm trying to solve to get all combinations of a list of
unison vectors. Another option would be to ignore the invariants and
add a weak comparison function.

As to the unison vectors, in the 7-limit I seem to be getting 4 when I
only wanted 2, so how can I be sure they're linearly independent?


                       Graham
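
For readers without Graham's temper module, here is a minimal
stand-alone sketch of the wedge product computed in the session above;
prime_et and wedge are illustrative names of my own, not temper.py's
API. The basis 2, 3, 5, 7, 11 matches the index pairs (0..4) in the
printed dictionary.

```python
import math

def prime_et(n, primes):
    """Map each prime to its nearest whole number of steps in n-et."""
    return [int(round(n * math.log2(p))) for p in primes]

def wedge(u, v):
    """Wedge product of two ET maps: the antisymmetric 2x2 determinants
    u[i]*v[j] - u[j]*v[i], keyed by index pairs (i, j) with i < j."""
    return dict(((i, j), u[i] * v[j] - u[j] * v[i])
                for i in range(len(u)) for j in range(i + 1, len(u)))

primes = (2, 3, 5, 7, 11)        # 11-limit basis
h31 = prime_et(31, primes)       # [31, 49, 72, 87, 107]
h41 = prime_et(41, primes)       # [41, 65, 95, 115, 142]
w = wedge(h31, h41)
print(w[(0, 1)], w[(0, 2)], w[(2, 4)])   # 6 -7 59, as in the example
```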

Message: 5333

Date: Sat, 8 Dec 2001 21:32 +00

Subject: Re: More lists

From: graham@xxxxxxxxxx.xx.xx

Me:
> > It may depend on whether or not you include the zero error for 1/1
> > in the mean.

Dave:
> I don't. Seems like a silly idea. And that wouldn't change _where_
> the minimum occurs.

Yes, it won't change the position. But, looking carefully at your
previous mail, I see you're including 1/3, 9/3 and 9/1, so that'll be
it. I remove the duplicates.

> Are you able to look at the Excel spreadsheet I gave the URL for in
> my previous message in this thread?

I'll be able to look at it on Monday, when I get back to work. I
*might* be able to check it in Star Office first, but probably won't.


                       Graham

Message: 5334

Date: Sat, 08 Dec 2001 07:14:09

Subject: Re: The grooviest linear temperaments for 7-limit music

From: dkeenanuqnetau

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:
>
> > ET list 2
> >
> > Steps per  7-limit RMS
> > octave     error (cents)
> > ------------------------
> > 15         18.5
> > 19         12.7
> > 22          8.6
> > 24         15.1
> > 26         10.4
> > 27          7.9
> > 31          4.0
> > 35          9.9
> > 36          8.6
> > 37          7.6
> > 41          4.2
>
> If you're going to do this, let's at least do it right and use the
> right list:
>
> 1      884.3587134
> 2      839.4327178
> 4      647.3739047
> 5      876.4669184
> 9      920.6653451
> 10     955.6795096
> 12     910.1603254
> 15     994.0402775
> 31     580.7780905
> 41     892.0787789
> 72     892.7193923
> 99     716.7738001
> 171    384.2612749
> 270    615.9368489
> 342    968.2768986
> 441    685.5766666
> 1578   989.4999106

But this doesn't look like an approximate isobad. It looks like a list
of ETs less than a certain badness, i.e. it's a top 17. Right?

We can do it that way if you like. So I'll have to give my top 17. I
wasn't proposing that we give the badness measure (since it was meant
to be an isobad). But I guess we could if it's a top 17. However, I
don't want people distracted by 9 significant digits of badness.
Couldn't we normalise to a 10-point scale and only give whole numbers?
And you need to supply the RMS error.

> The first point to note is that the two lists are clearly not
> intended to do the same thing.

Mine is intended to pack the maximum number of ETs likely to be of
interest to musicians, composers, music theorists etc. who are
interested in 7-limit music, into a list of a given size. Maybe you
need to explain what yours is intended to do.

> The second is that while you object to this characterization, your
> list seems to want to do our thinking for us more than mine; you've
> decided the important place to look is around 27.

Not at all. It just comes out that way. I simply decided that an extra
note per octave was worth about the same badness as an increase of 0.5
cent in the RMS error. This comes through experience and tuning list
discussions.

> The third thing to notice is that if you want to look at a limited
> range, you always can. Suppose I look from 10 to 50 and see what the
> top 11 are, using my measure:
>
> 10    .796
> 12    .758
> 15    .828
> 16   1.113
> 19    .906
> 22    .898
> 26   1.122
> 27    .924
> 31    .484
> 41    .743
> 46   1.181

Sure. I can do that too.

> I'm afraid I like this list better than yours, but your mileage may
> vary.

I might like it better than mine too. Mine's still got problems. But
you had to arbitrarily limit it to 10<n<50 to get this list. This is
clearly doing our thinking for us. I thought we were talking about a
single published list, not a piece of software that lets you enter
your favourite limits.
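
For comparison, a rough sketch (mine, not Dave's spreadsheet) of the
trade-off he describes: one extra step per octave costs about the same
as 0.5 cent of 7-limit RMS error. The RMS here is taken over the six
7-limit consonances with primes mapped by rounding and pure octaves,
so the figures are close to, but not exactly, those in his table.

```python
import math

def rms_error_cents(n):
    """7-limit RMS error of n-et, pure octaves, primes mapped by rounding."""
    e = {}
    for p in (3, 5, 7):
        ideal = n * math.log2(p)
        e[p] = (round(ideal) - ideal) * 1200.0 / n   # signed error in cents
    errs = (e[3], e[5], e[7], e[5] - e[3], e[7] - e[3], e[7] - e[5])
    return math.sqrt(sum(x * x for x in errs) / 6.0)

def badness(n, k=0.5):
    """k cents of RMS error trades against one extra step per octave."""
    return k * n + rms_error_cents(n)

for n in (15, 19, 22, 27, 31, 41):
    print(n, round(rms_error_cents(n), 1), round(badness(n), 1))
```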

Message: 5340

Date: Sat, 08 Dec 2001 00:47:17

Subject: Re: The grooviest linear temperaments for 7-limit music

From: paulerlich

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:
>
> > > If with all quantities positive we have g^2 c < A and c > B, then
> > > 1/c < 1/B, and so g^2 < A/B and g < sqrt(A/B). However, it
> > > probably makes more sense to use g>=1, so that if g^2 c <= A then
> > > c <= A.
>
> > Are you saying that using g>=1 is enough to make this a closed
> > search?
>
> All it does is put an upper limit on how far out of tune the worst
> cases can be, so we really need to bound c below or g above to get a
> finite search.

So do you still stand by this statement:

"If we bound one of them and gens^2 cents, we've bound the other;
that's what I'd do."

(which you wrote after I said that a single cutoff point wouldn't be
enough, that we would need a cutoff curve)?

Message: 5341

Date: Sat, 08 Dec 2001 08:21:24

Subject: Re: The grooviest linear temperaments for 7-limit music

From: dkeenanuqnetau

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:
>
> > But this doesn't look like an approximate isobad. It looks like a
> > list of ETs less than a certain badness, i.e. it's a top 17. Right?
>
> Right, but your list looked like a top 11 in a certain range also.

It happens to also be the top 11 by the 0.5*steps + cents metric, but
not limited to any range.

> > We can do it that way if you like. So I'll have to give my top 17.
> > I wasn't proposing that we give the badness measure (since it was
> > meant to be an isobad).
>
> The things on your list didn't make sense to me as an isobad,

Obviously they wouldn't, given what your isobad looked like.

> and I didn't know that was what it was supposed to be.

I thought I made that pretty clear.

> Trying a top n and comparing makes more sense to me,

Fine.

> but I need to pick a range. Objectively of course.

Ha ha. If you have to pick a range then your so-called badness metric
obviously isn't really a badness metric at all!

> > Mine is intended to pack the maximum number of ETs likely to be of
> > interest to musicians, composers, music theorists etc. who are
> > interested in 7-limit music, into a list of a given size.
>
> It needs work.

I think I said that.

> Mine is intended to show what the relatively best 7-limit ets are,
> in a measurement which has the logarithmic flatness I describe in
> another posting.

Even if you and Paul are the only folks on the planet who find that
interesting? In that case I think it's very misleading to call it a
badness metric when it only gives relative badness _locally_.

> > I might like it better than mine too. Mine's still got problems.
> > But you had to arbitrarily limit it to 10<n<50 to get this list.
> > This is clearly doing our thinking for us.
>
> And I can reduce that problem to essentially nil, by putting in a
> high cut-off and leaving it at that.

How high? How will this fix the problem that folks will assume you're
saying that 3-tET and 1547-tET are about as useful as 22-tET for
7-limit music?

> You are stuck with it as an intrinsic feature.

And a damn fine feature it is too. :-)

Seriously, mine was proposed without any great amount of research or
deliberation, to show that it is easy to find alternatives that do
_much_ better than yours _globally_ and about the same _locally_.

Message: 5344

Date: Sat, 08 Dec 2001 01:48:14

Subject: Re: The grooviest linear temperaments for 7-limit music

From: dkeenanuqnetau

Thanks Gene, for taking the time to explain this in a way that a mere
computer scientist can understand. :-)

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:
>
> > So ... What is n? What is a 7-limit et? How does one use n^(4/3) to
> > get a list of them? How would one check to see whether the list
> > favours high or low n?
>
> "n" is how many steps to the octave, or in other words what 2 is
> mapped to. By a "7-limit et" I mean something which maps 7-limit
> intervals to numbers of steps in a consistent way. Since we are
> looking for the best, we can safely restrict these to what we get by
> rounding n*log2(3), n*log2(5) and n*log2(7) to the nearest integer,
> and defining the n-et as the map one gets from this.

OK so far.

> Let's call this map "h"; for the 12-et, h(2)=12, h(3)=19, h(5)=28
> and h(7)=34; this entails that h(5/3) = h(5)-h(3) = 9, h(7/3)=15 and
> h(7/5)=6.

Fine.

> I can now measure the relative badness of "h" by taking the sum, or
> maximum, or rms, of the differences |h(3)-n*log2(3)|,
> |h(5)-n*log2(5)|, |h(7)-n*log2(7)|, |h(5/3)-n*log2(5/3)|,
> |h(7/3)-n*log2(7/3)| and |h(7/5)-n*log2(7/5)|.

I'd say this is just one component of badness. It's the error
expressed as a proportion of the step size. The number of steps in the
octave, n, has an effect on badness independent of the relative error.

> This measure of badness is flat in the sense that the density is the
> same everywhere, so that we would be selecting about the same number
> of ets in a range around 12 as we would in a range around 1200.

Yes. I believe this. See the two charts near the end of

Harmonic errors in equal tempered musical scales *

although it uses an error weighting that only includes the primes
(only the "rooted" intervals), which I now find dubious.

> I don't really want this sort of "flatness",

Hardly anyone would. Not without some additional penalty for large n,
even if it's just a crude sudden cutoff. But _why_ don't you want this
sort of flatness? Did you reject it on "objective" grounds? Is there
some other sort of flatness that you _do_ want? If so, what is it? How
many sorts of flatness are there and how did you choose between them?

> so I use the theory of Diophantine approximation to tell me that if
> I multiply this badness by the cube root of n, so that the density
> falls off at a rate of n^(-1/3), I will still get an infinite list
> of ets, but if I make it fall off faster I probably won't.

Here's where the real leap of faith occurs. First of all, I take it
that when you say you will (or won't) "get an infinite list of ets",
you mean "when the list is limited to ETs whose badness does not
exceed a given badness limit, greater than zero".

There are an infinite number of ways of defining badness to achieve a
finite list with a cutoff only on badness itself. Most of these will
produce a finite list that is of absolutely no interest to 99.99% of
the population (of people who are interested in the topic at all). Why
do you immediately leap to the theory of Diophantine approximation as
giving the best way to achieve a finite list?

I think a good way to achieve it is simply to add an amount k*n to the
error in cents (absolute, not relative to step size). I suggest
initially trying a k of about 0.5 cents per step. The only way to tell
if this is better than something based on the theory of Diophantine
equations is to suck it and see. Some of us have been on the tuning
lists long enough to know what a lot of other people find useful or
interesting, even though we don't necessarily find them so ourselves.

> I can use either the maximum of the above numbers, or the sum, or
> the rms, and the same conclusion holds; in fact, I can look at the
> 9-limit instead of the 7-limit and the same conclusion holds. If I
> look at the maximum, and multiply by 1200 so we are looking at units
> of n^(4/3) cents, I get the following list of ets which come out as
> less than 1000, for n going from 1 to 2000:
>
> 1      884.3587134
> 2      839.4327178
> 4      647.3739047
> 5      876.4669184
> 9      920.6653451
> 10     955.6795096
> 12     910.1603254
> 15     994.0402775
> 31     580.7780905
> 41     892.0787789
> 72     892.7193923
> 99     716.7738001
> 171    384.2612749
> 270    615.9368489
> 342    968.2768986
> 441    685.5766666
> 1578   989.4999106
>
> This list just keeps on going, so I cut it off at 2000. I might look
> at it, and decide that it doesn't have some important ets on it,
> such as 19, 22 and 27; I decide to put those on, not really caring
> about any other range, by raising the ante to 1200; I then get the
> following additions:
>
> 3      1154.683345
> 6      1068.957518
> 19     1087.886603
> 22     1078.033523
> 27     1108.589256
> 68     1090.046322
> 130    1182.191130
> 140    1091.565279
> 202    1143.628876
> 612    1061.222492
> 1547   1190.434242
>
> My decision to add 19, 22, and 27 leads me to add 3 and 6 at the low
> end, and 68 and so forth at the high end. It tells me that if I'm
> interested in 27 in the range around 31, I should also be interested
> in 68 in the range around 72, in 140 and 202 around 171, 612 around
> 441, and 1547 near 1578. That's the sort of "flatness" Paul was
> talking about; it doesn't favor one range over another.

But this is nonsense. It simply isn't true that 3, 6, 612 and 1547 are
of approximately equal interest to 19, 22 and 27. Sure, you'll always
be able to find one person who'll say they are. But ask anyone who has
actually used 19-tET or 22-tET when they plan to try 3-tET or
1547-tET. It's just a joke. I suspect you've been seduced by the
beauty of the math and forgotten your actual purpose. This metric
clearly favours both very small and very large n over middle ones.

> > But no matter what you come up with I can't see how you can get
> > past the fact that gens and cents are fundamentally incommensurable
> > quantities, so somewhere there has to be a parameter that says how
> > bad they are relative to each other.
>
> "n" and cents are incommensurable also,

Yes.

> and n^(4/3) is only right for the 7 and 9 limits, and wrong for
> everything else, so I don't think this is the issue if we adopt this
> point of view.

> > Why not use k*gens + cents? e.g. if badness was simply gens + cents
> > and you listed everything with badness not more than 30 then you
> > don't need any additional cutoffs. You automatically eliminate
> > anything with gens > 30 or cents > 30 (actually cents > 29 because
> > gens can't go below 1).
>
> Gens^3 cents also automatically cuts things off, but I rather like
> the idea of keeping it "flat" in the above sense and doing the
> cutting off deliberately; it seems more objective.

_Seems_ more objective? You mean that subjectively, to you, it seems
more objective?

Well, I'm afraid that it seems to me that this quest for an
"objective" badness metric (with ad hoc cutoffs) is the silliest thing
I've heard in quite a while.

If you're combining two or more incommensurable quantities into a
single badness metric, the choice of the constant of proportionality
between them (and the choice of whether this constant should relate
the plain quantities or their logarithms or whatever) should be
decided so that as many people as possible agree that it actually
gives something like what they perceive as badness, even if it's only
roughly so. An isobad that passes near 3, 6, 19, 22, 612 and 1547
isn't one. The fact that it's based on the theory of Diophantine
equations is utterly irrelevant.
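
For concreteness, here is a short stand-alone sketch of the flat
measure Gene describes above, under the assumption that "the above
numbers" are the six 7-limit consonance errors and that the maximum is
used; flat_badness is my own name for it, not Gene's code.

```python
import math

# The largest rounding error over the six 7-limit consonances, in
# steps, times 1200*n^(1/3), i.e. the error in cents times n^(4/3).
# It reproduces the quoted figures, e.g. about 884.36 for n=1 and
# 910.16 for n=12.

def flat_badness(n):
    l3, l5, l7 = math.log2(3), math.log2(5), math.log2(7)
    h3, h5, h7 = round(n * l3), round(n * l5), round(n * l7)
    errors = (abs(h3 - n * l3), abs(h5 - n * l5), abs(h7 - n * l7),
              abs((h5 - h3) - n * (l5 - l3)),    # 5/3
              abs((h7 - h3) - n * (l7 - l3)),    # 7/3
              abs((h7 - h5) - n * (l7 - l5)))    # 7/5
    return 1200.0 * n ** (1.0 / 3.0) * max(errors)

# ETs from 1 to 2000 whose badness comes out below 1000:
print([n for n in range(1, 2001) if flat_badness(n) < 1000])
```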

Message: 5346

Date: Sat, 08 Dec 2001 02:11:04

Subject: Re: More lists

From: dkeenanuqnetau

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> I got 116.672264296056... which checks with Graham, so that's
> progress of some kind.

So what's wrong with this spreadsheet?

Message: 5348

Date: Sat, 08 Dec 2001 02:12:56

Subject: Re: More lists

From: paulerlich

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> I got 116.672264296056... which checks with Graham, so that's
> progress of some kind.

I get 116.6775720762089, which agrees with Dave. Gene, did you have 15
error terms like we did?