



Message: 2350

Date: Sat, 08 Dec 2001 02:12:56

Subject: Re: More lists

From: paulerlich

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:

> I got 116.672264296056... which checks with Graham, so that's > progress of some kind.
I get 116.6775720762089, which agrees with Dave. Gene, did you have 15 error terms like we did?
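For reference, a small Python sketch of the calculation being compared (a reconstruction, not code anyone posted: it assumes the Miracle generator mapping 6, -7, -2, 12, 15 for 3, 5, 7, 9, 11, pure octaves, and the fifteen intervals of the 11-limit hexad, with 3 counted twice as 3:1 and 9:3):

from math import log2
from itertools import combinations

def cents(ratio):
    return 1200 * log2(ratio)

# generators per odd identity in Miracle (assumed mapping)
gens = {1: 0, 3: 6, 5: -7, 7: -2, 9: 12, 11: 15}

pairs = []
for a, b in combinations([1, 3, 5, 7, 9, 11], 2):
    k = gens[b] - gens[a]                      # generator count for b:a
    c = cents(b / a)
    c += 1200 * round((k * 116.7 - c) / 1200)  # octave register Miracle approximates
    pairs.append((k, c))

g = sum(k * c for k, c in pairs) / sum(k * k for k, c in pairs)
rms = (sum((k * g - c) ** 2 for k, c in pairs) / len(pairs)) ** 0.5
print(g, rms)   # about 116.678 cents and 1.90 cents: 15 error terms, 3:1 counted twice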

Message: 2351

Date: Sat, 08 Dec 2001 10:16:46

Subject: Re: What's so Super about Superparticularity?

From: genewardsmith

--- In tuning-math@y..., "unidala" <JGill99@i...> wrote:
> Gene, > > Thanks for your excellent description of the mathematical > significance of "superparticularity" and "Farey series adjacence" > found in certain sub-branches of the Stern-Brocot tree structure, and > their inclusion in certain types of JI musical scales (quoted below):
Actually, superparticular ratios are associated with each branch of the Stern-Brocot tree, and not confined to any sub-branch. Simply take the ratio between the node at level n and a branch node at level n+1, and label the branch connecting them with this superparticular ratio.
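For illustration, a minimal Python sketch of that labelling (the mediant walk and the helper names are mine, not something from this thread); it checks that every branch ratio, down to a given depth, is superparticular:

from fractions import Fraction

def mediant(a, b):
    # a, b are (numerator, denominator) pairs
    return (a[0] + b[0], a[1] + b[1])

def branch_ratios(low, node, high, depth):
    """Yield the interval between each node and its two children, down to `depth`."""
    if depth == 0:
        return
    left, right = mediant(low, node), mediant(node, high)
    for child in (left, right):
        r = Fraction(child[0] * node[1], child[1] * node[0])
        yield r if r > 1 else 1 / r
    yield from branch_ratios(low, left, node, depth - 1)
    yield from branch_ratios(node, right, high, depth - 1)

# every branch of the tree, not just some sub-branch, carries a superparticular label
assert all(r.numerator - r.denominator == 1
           for r in branch_ratios((0, 1), (1, 1), (1, 0), 6))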
> What I would really like to know is the musical *benefits* of > utilizing such ratios in a JI scale (independent from their possibly > being constructed out of low-valued integer values in such a scale).
I don't see anything in the old Greek theory that any old superparticular ratio has benefits, but the ratios connecting branches of the Stern-Brocot tree are a different matter, as are the second-order ratios between these ratios. If you do a search in the p-limit for superparticular ratios you get lists such as the one very recently posted here; if you look at (say) (n+2)/n for odd n you get nothing like as many. It may be that people noticed these things popping up constantly, and attributed special benefits to them. If so, it would have been more to the point had they done the same with superparticulars having square, triangular, fourth-power, etc. numerators.

Message: 2352

Date: Sat, 08 Dec 2001 02:22:56

Subject: Re: The grooviest linear temperaments for 7-limit music

From: paulerlich

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> But this is nonsense. It simply isn't true that 3, 6, 612 and 1547 are > of approximately equal interest to 19, 22 and 27. Sure you'll always > be able to find one person who'll say they are. But ask anyone who has > actually used 19-tET or 22-tET when they plan to try 3-tET or > 1547-tET. It's just a joke.
For the third or fourth time Dave, this isn't intended to appeal to any one person, but rather to the widest possible audience. Since this is a "flat" measure, it will rank the systems in the _vicinity_ of *your* #1 system, the same way you would, whoever *you* happen to be. But it makes absolutely no preference for one end of the spectrum over another, or the middle. That's what makes it flat and "objective". Look at Gene's list for 7-limit ETs again. Can it be denied that 31-tET is by far the best _in its vicinity_, and 171-tET is by far the best _in its vicinity_?

Message: 2353

Date: Sat, 08 Dec 2001 10:34:50

Subject: Re: The grooviest linear temperaments for 7-limit music

From: dkeenanuqnetau

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:
>> Even if you and Paul are the only folks on the planet who find that >> interesting? In that case I think its very misleading to call it a >> badness metric when it only gives relative badness _locally_. >
> Global relative badness means what, exactly? This makes no sense to > me.
It means that if two ETs have around the same badness number then they are about as bad as each other, no matter how far apart they are on the spectrum.
>> How high? How will this fix the problem that folks will assume > you're
>> saying that 3-tET and 1547-tET are about as useful as 22-tET for >> 7-limit. >
> I think you would be one of the very few who looked at it that way. > After all, this is hardly the first time such a thing has been done.
Ok. So I'm the only person who will assume that two ETs with about the same badness number are roughly as bad as each other. In that case, I shan't bother you any more. We are apparently speaking different languages.

Message: 2354

Date: Sat, 08 Dec 2001 03:23:48

Subject: Re: The grooviest linear temperaments for 7-limit music

From: genewardsmith

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> So do you still stand by this statement: > > "If we bound one of them and gens^2 cents, we've bound the other; > that's what I'd do." > > (which you wrote after I said that a single cutoff point wouldn't be > enough, that we would need a cutoff curve)?
Sure. I think bounding g makes the most sense, since we can calculate it more easily. I've been thinking about how one might calculate cents without going through the map stage, but for gens we can get it immediately from the wedgie with no trouble. We could then toss anything with too high a gens figure before even calculating anything else, which should help.

Message: 2356

Date: Sat, 08 Dec 2001 03:27:45

Subject: Re: The grooviest linear temperaments for 7-limit music

From: paulerlich

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote: >
>> So do you still stand by this statement: >> >> "If we bound one of them and gens^2 cents, we've bound the other; >> that's what I'd do." >> >> (which you wrote after I said that a single cutoff point wouldn't > be
>> enough, that we would need a cutoff curve)? >
> Sure. I think bounding g makes the most sense, since we can calculate > it more easily. I've been thinking about how one might calculate > cents without going through the map stage, but for gens we can get it > immediately from the wedgie with no trouble.
I don't immediately know what "the map stage" means, but I've been thinking that, in regard to "standardizing the wedge product", we might want to use something that has the Tenney lattice built in.
> We could then toss > anything with too high a gens figure before even calculating anything > else, which should help.
So I'm not getting where g>=1 comes into all this.

Message: 2358

Date: Sat, 08 Dec 2001 03:39:09

Subject: Re: The grooviest linear temperaments for 7-limit music

From: genewardsmith

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> So I'm not getting where g>=1 comes into all this.
What I wrote was confused, but you've already replied, I see. Bounding g from below is easy, since it bounds itself. Bounding it from above could mean just setting a bound, or bounding g^2 c; I think just setting an upper bound to it makes a lot of sense.

Message: 2359

Date: Sat, 08 Dec 2001 03:48:56

Subject: Re: The grooviest linear temperaments for 7-limit music

From: dkeenanuqnetau

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:
> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote: >
>> But this is nonsense. It simply isn't true that 3, 6, 612 and 1547 > are
>> of approximately equal interest to 19, 22 and 27. Sure you'll > always
>> be able to find one person who'll say they are. But ask anyone who > has
>> actually used 19-tET or 22-tET when they plan to try 3-tET or >> 1547-tET. It's just a joke. >
> For the third or fourth time Dave, this isn't intended to appeal to > any one person, but rather to the widest possible audience.
But that's exactly my intention too. I'm trying to help you find a metric that will appeal, not to me, but to all those people whose divergent views I've read on the tuning list over the years. I'm simply claiming that your metric is seriously flawed in achieving your intended goal. Practically _nobody_ thinks 3,6,612,1547 are equally as good or bad or interesting as 19 or 22. If you include fluff like that then there will be less room for ETs of interest to actual humans.
> Since > this is a "flat" measure, it will rank the systems in the _vicinity_ > of *your* #1 system, the same way you would, whoever *you* happen to > be. But it makes absolutely no preference for one end of the spectrum > over another, or the middle. That's what makes it flat > and "objective".
You seem to be arguing in circles.
> Look at Gene's list for 7-limit ETs again. Can it be > denied that 31-tET is by far the best _in its vicinity_, and 171-tET > is by far the best _in its vicinity_?
Of course I don't deny that. I claim that it is irrelevant. _Any_ old half-baked way of monotonically combining steps and cents into a badness metric will be the same as any other, _locally_. You said the same yourself in regard to your HE curves. Maybe you need more sleep. :-) Since when does merely local behaviour determine if something is _flat_ or not? In any case, I don't think you understand Gene's particular kind of flatness; you certainly weren't able to explain it to me, as Gene has now done. This particular kind of "flatness" is just one of many. There's nothing objective about a decision to favour it, and then to introduce additional ad hoc cutoffs besides the one for badness.

Message: 2360

Date: Sat, 08 Dec 2001 03:55:30

Subject: Re: The grooviest linear temperaments for 7-limit music

From: paulerlich

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote: >
>> So I'm not getting where g>=1 comes into all this. >
> What I wrote was confused, but you've already replied, I see. Bounding > g from below is easy, since it bounds itself. Bounding it from above > could mean just setting a bound, or bounding g^2 c; I think just > setting an upper bound to it makes a lot of sense.
Yes -- g could play the role that N plays in your ET lists. One would order the results by g, give the g^2 c score for each (or not), and give about a page of nice musician-friendly information on each. Gene, there are a lot of outstanding questions and comments . . . I wanted to know if there would have been a lot more "slippery" ones had you included simpler unison vectors in your source list . . . I want to use a Tenney-distance weighted "gens" measure . . . but for now, a master list would be great. Can someone produce such a list, with columns for "cents" and "gens" at least as currently defined? I'd like to try to find omissions.
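As a sketch of the bookkeeping being proposed here (the names and numbers below are placeholders, not real temperament data): bound g above, optionally bound g^2 c as well, order by g, and report the g^2 c score for each survivor.

G_MAX = 20.0          # upper bound on gens, per Gene's suggestion
B_MAX = 500.0         # optional bound on the g^2 c badness as well

candidates = [        # (name, gens g, rms error c in cents) -- made-up values
    ("A", 3.2, 5.1),
    ("B", 7.5, 1.4),
    ("C", 18.0, 0.3),
    ("D", 35.0, 0.05),
]

kept = [(name, g, c) for name, g, c in candidates
        if g <= G_MAX and g * g * c <= B_MAX]
for name, g, c in sorted(kept, key=lambda t: t[1]):
    print(name, g, c, g * g * c)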

Message: 2361

Date: Sat, 08 Dec 2001 04:00:12

Subject: Re: The grooviest linear temperaments for 7-limit music

From: genewardsmith

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:
> I'd say this is just one component of badness. Its the error expressed > as a proportion of the step size. The number of steps in the octave n > has an effect on badness independent of the relative error.
Then you should be happier with an extra cube root of n adjustment.
> Hardly anyone would. Not without some additional penalty for large n, > even if it's just a crude sudden cutoff. But _why_ don't you want this > sort of flatness?
Because my interest isn't independent of size--you need more at higher levels to make me care.
> Did you reject it on "objective" grounds? Is there
> some other sort of flatness that you _do_ want? If so, what is it? How > many sorts of flatness are there and how did you choose between them?
You could use the Riemann Zeta function and the omega estimates based on the assumption of the Riemann hypothesis and do it that way, if you liked. Or there are no doubt other ways; this one seems the simplest and it gets the job done, and the alternatives would have a certain family resemblance.
> Why do you immediately leap to the theory of Diophantine approximation > as giving the best way to achieve a finite list?
It gives me a measure which is connected to the nature of the problem, which is a Diophantine approximation problem, which seems to make a lot of sense both in practice and theory to me, if not to you.
> I think a good way to achieve it is simply to add an amount k*n to the > error in cents (absolute, not relative to step size). I suggest > initially trying a k of about 0.5 cents per step.
Should I muck around in the dark until I make this measure behave in a way something like the measure I already have behaves, which would be both pointless and inelegant, or is there something about it to recommend it?
> The only way to tell if this is better than something based on the > theory of Diophantine equations is to suck it and see.
Better how? The measure I already have does exactly what I'd want a measure to do.
> Some of us have
> been on the tuning lists long enough to know what a lot of other > people find useful or interesting, even though we don't necessarily > find them so ourselves.
One of the advantages of the measure I'm using is that it accommodates this well.
> But this is nonsense. It simply isn't true that 3, 6, 612 and 1547 are > of approximately equal interest to 19, 22 and 27.
I'm not trying to measure your interest, I'm only saying if you want to look at a certain range, look at these.
> Sure you'll always
> be able to find one person who'll say they are. But ask anyone who has > actually used 19-tET or 22-tET when they plan to try 3-tET or > 1547-tET. It's just a joke.
The 4-et is actually interesting in connection with the 7-limit, as the 3-et is with the 5-limit, and the large ets have uses other than tuning up a set of marimbas as well.
> I suspect you've been seduced by the
> beauty of the math and forgotten your actual purpose. This metric > clearly favours both very small and very large n over middle ones.
In other words, the range *you* happen to care about is the only interesting range; it's that which I was regarding as not objective.
> An isobad that passes near 3, 6, 19, 22, 612 and 1547, isn't one.
An isobad which passes near 3, 6, 19, 22, 612 and 1547 makes a lot of sense to me, so I think I would probably *not* like your alternative as well.

Message: 2362

Date: Sat, 08 Dec 2001 04:03:48

Subject: Re: The grooviest linear temperaments for 7-limit music

From: paulerlich

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:
> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:
>> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote: >>
>>> But this is nonsense. It simply isn't true that 3, 6, 612 and 1547 >> are
>>> of approximately equal interest to 19, 22 and 27. Sure you'll >> always
>>> be able to find one person who'll say they are. But ask anyone who >> has
>>> actually used 19-tET or 22-tET when they plan to try 3-tET or >>> 1547-tET. It's just a joke. >>
>> For the third or fourth time Dave, this isn't intended to appeal to >> any one person, but rather to the widest possible audience. >
> But that's exactly my intention too. I'm trying to help you find a > metric that will appeal, not to me, but to all those people whose > divergent views I've read on the tuning list over the years. I'm > simply claiming that your metric is seriously flawed in acheiving your > intended goal. Practically _nobody_ thinks 3,6,612,1547 are equally as > good or bad or interesting as 19 or 22. If you include fluff like that > then there will be less room for ETs of interest to actual humans.
Dave, if you don't have a cutoff, you'd have an infinite number of ETs better than 1547. Of course there has to be a cutoff.
>
>> Look at Gene's list for 7-limit ETs again. Can it > be
>> denied that 31-tET is by far the best _in its vicinity_, and 171- tET >> is by far the best _in its vicinity_? >
> Of course I don't deny that. I claim that it is irrelevant. _Any_ old > half-baked way of monotonically combining steps and cents into a > badness metric will be the same as any other, _locally_. You said the > same yourself in regard to your HE curves. Maybe you need more sleep. > :-)
I mean that only Gene's measure tells you exactly _how much_ better a system is than the systems in their vicinity, _in units of_ the average differences between different systems in their vicinity.
> Since when does merely local behaviour determine if something is
> _flat_ or not?
It doesn't.
> In any case, I don't think you understand Gene's particular kind of flatness, you certainly weren't able to explain it to me, as Gene has now done. This particular kind of "flatness" is just one of many.
I'd like to see a list of ETs, as far as you'd like to take it, above some cutoff different from Gene's, that shows this kind of behavior (not just the flatness of the measure itself, but also the flatness of the size of the wiggles).

Message: 2363

Date: Sat, 8 Dec 2001 05:01 +00

Subject: Re: More lists

From: graham@xxxxxxxxxx.xx.xx

Me:
>> To check my RMS optimization's working, is a 116.6722643 cent > generator
>> right for Miracle in the 11-limit? RMS error of 1.9732 cents.

Dave:
> I get 116.678 and 1.9017. Did you include the squared error for 1:3 > twice? I think you should since it occurs twice in an 11-limit hexad, > as both 1:3 and 3:9. So then you must divide by 15, not 14, to get the > mean.
I include 1:3 and 1:9
> Actually, I see that this doesn't explain our discrepancy.
It may depend on whether or not you include the zero error for 1/1 in the mean. Graham

Message: 2364

Date: Sat, 8 Dec 2001 05:01 +00

Subject: Re: The grooviest linear temperaments for 7-limit music

From: graham@xxxxxxxxxx.xx.xx

Gene wrote:

> Sure. I think bounding g makes the most sense, since we can calculate > it more easily. I've been thinking about how one might calculate > cents without going through the map stage, but for gens we can get it > immediately from the wedgie with no trouble. We could then toss > anything with too high a gens figure before even calculating anything > else, which should help.
My program throws out bad temperaments before doing the optimization, if that's what you're suggesting. It's one of the changes I made this, er, yesterday morning. It does make a difference, but not much now that my optimization's faster. Big chunks of time are currently being spent generating the ETs and formatting the results. Graham

Message: 2365

Date: Sat, 08 Dec 2001 05:47:05

Subject: Diophantine approximation alternatives

From: genewardsmith

Dave was questioning the lack of alternatives, so let's look at the 
standard Diophantine approximation ones. In the 7-limit, the 
n^(-1/3) exponent puts the solutions into a cube of side n^(-1/3), 
and hence of volume 1/n. This gives a density of solutions 
proportional to 1/n, and since the integral of 1/n is unbounded, an 
infinity of solutions may be expected.

In general, if f(n)>0 is such that its integral is unbounded, then 
for d irrational numbers xi, 
max f(n)^(-1/d) |round(n*xi) - n*xi| < c
"almost always" has an infinite number of solutions. This isn't as 
tight a theorem as when we use exponents, but in practice it works 
for our problem.

Most obviously, we could use an exponent less than 1/3, so that using 
fourth roots instead will still give an infinite number of solutions--
a fact which already is obvious without the above. We could even use
1/ln(n), and get solutions in droves, like prime numbers. On the 
other hand, we could fade just a little faster by using 1/(n ln n),
which makes the high end die out more, but probably not quite enough 
to kill off an infinite list of solutions.

I don't see any advantages here, but there it is.
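A quick numerical illustration of the divergence argument (my own sketch, not from the post): partial sums of 1/n and 1/(n ln n) keep growing without bound, while 1/(n (ln n)^2) levels off, which is why the first two weightings would still leave an infinite list.

from math import log

weights = (("1/n",            lambda n: 1.0 / n),
           ("1/(n ln n)",     lambda n: 1.0 / (n * log(n))),
           ("1/(n (ln n)^2)", lambda n: 1.0 / (n * log(n) ** 2)))

for label, f in weights:
    # partial sums up to two cutoffs; the first two rows keep climbing
    sums = [sum(f(n) for n in range(2, limit)) for limit in (10**3, 10**6)]
    print(label, [round(s, 2) for s in sums])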



Message: 2366

Date: Sat, 08 Dec 2001 05:54:48

Subject: Re: The grooviest linear temperaments for 7-limit music

From: dkeenanuqnetau

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:
>> I'd say this is just one component of badness. Its the error > expressed
>> as a proportion of the step size. The number of steps in the > octave n
>> has an effect on badness independent of the relative error. >
> Then you should be happier with an extra cube root of n adjustment.
Yes I am. But still a long way from as happy as I think most people would be with something not based on k*log(gens) + log(cents) but instead on k*gens + cents (or maybe something else).
>> But _why_ don't you want this >> sort of flatness? >
> Because my interest isn't independent of size--you need more at > higher levels to make me care.
Indeed.
> Did you reject it on "objective" grounds? Is there
>> some other sort of flatness that you _do_ want? If so, what is it? > How
>> many sorts of flatness are there and how did you choose between > them? >
> You could use the Riemann Zeta function and the omega estimates based > on the assumption of the Riemann hypothesis and do it that way, if > you liked. Or there are no doubt other ways; this one seems the > simplest and it gets the job done, and the alternatives would have a > certain family resemblence.
But there's nothing "objective" about these decisions. You're just finding stuff so it matches what you think everyone likes. Right?
>> Why do you immediately leap to the theory of Diophantine > approximation
>> as giving the best way to achieve a finite list? >
> It gives me a measure which is connected to the nature of the > problem, which is a Diophantine approximation problem, which seems to > make a lot of sense both in practice and theory to me, if not to you.
There are probably many such things "connected to the nature of the problem" which give entirely different results.
>> I think a good way to achieve it is simply to add an amount k*n to > the
>> error in cents (absolute, not relative to step size). I suggest >> initially trying a k of about 0.5 cents per step. >
> Should I muck around in the dark until I make this measure behave in > a way something like the measure I already have behaves, which would > be both pointless and inelegant, or is there something about it to > recommend it?
Yes. The fact that I've been reading the tuning list and thinking about and discussing these things with others for many years. So it's hardly groping in the dark. I'm not saying this particular one I pulled out of the air is the one most representative of all views, but I do know that we can do a lot better than your current proposal.
>> The only way to tell if this is better than something based on the >> theory of Diophantine equations is to suck it and see. >
> Better how? The measure I already have does exactly what I'd want a > measure to do.
Answered below.
> Some of us have
>> been on the tuning lists long enough to know what a lot of other >> people find useful or interesting, even though we don't necessarily >> find them so ourselves. >
> One of the advantages of the measure I'm using is that it accomodates > this well.
How do you know that?
>> But this is nonsense. It simply isn't true that 3, 6, 612 and 1547 > are
>> of approximately equal interest to 19, 22 and 27. >
> I'm not trying to measure your interest,
I keep saying that I'm trying to consider as wide a set of interests as possible. You and Paul keep accusing me of only trying to serve my own interests. I accept that you're trying to consider as wide a set of interests as possible, I just claim that you're failing.
> I'm only saying if you want > to look at a certain range, look at these.
Yes, but some _ranges_ are more interesting than others and so if you include an equal number in every range then you won't be including enough in the most interesting ranges. It isn't just _my_ prejudice that there are more ETs of interest in the vicinity of 26-tET than there are in the vicinity of 3-tET or 1550-tET. It's practically everyone's.
> Sure you'll always
>> be able to find one person who'll say they are. But ask anyone who > has
>> actually used 19-tET or 22-tET when they plan to try 3-tET or >> 1547-tET. It's just a joke. >
> The 4-et is actually interesting in connection with the 7-limit, as > the 3-et is with the 5-limit, and the large ets have uses other than > tuning up a set of marimbas as well.
Those are good points, which maybe says that my metric is too harsh on the extremes, but I still say yours is way too soft. There's got to be something pretty damn exceptional about an ET greater than 100 for it to be of interest. But note that our badness metric is only based on steps and cents (or gens and cents for temperaments) so we can't claim that our metric should include some exceptional high ET if its exceptional property has nothing to do with the magnitude of the number of steps or the cents error.
> I suspect you've been seduced by the
>> beauty of the math and forgotten your actual purpose. This metric >> clearly favours both very small and very large n over middle ones. >
> In other words, the range *you* happen to care about is the only > interesting range; it's that which I was regarding as not objective.
There you go again. Accusing me of only trying to serve my own interests.
>> An isobad that passes near 3, 6, 19, 22, 612 and 1547, isn't one. >
> An isobad which passes near 3, 6, 19, 22, 612 and 1547 makes a lot of > sense to me, so I think I would probably *not* like your alternative > as well.
Whether you or I would like it isn't the point. The only way this could be settled is by some kind of experiment or survey, say on the tuning list. We could put together two lists of ETs of roughly equal "badness", one using your metric, one using mine. They should contain the same number of ETs (you've already given a suitable list of 11), and they should have as many ETs as possible in common. We would tell people the 7-limit rms error of each and the number of steps per octave in each, but nothing more. Then we'd ask them to choose which list was a better example of a list of ETs of approximately equal 7-limit goodness, badness, usefulness, interestingness or whatever you want to call it, based only on considerations of the number of steps and the error. We could even ask them to rate each list on a scale of 1 to 10 according to how well they think each list manages to capture equal 7-limit interestingness or whatever, based only on considerations of the number of steps and the error.

Here they are:

ET List 1

Steps per    7-limit RMS
octave       error (cents)
--------------------------
   3         176.9
   6          66.9
  19          12.7
  22           8.6
  27           7.9
  68           2.4
 130           1.1
 140           1.0
 202           0.61
 612           0.15
1547           0.040

ET List 2

Steps per    7-limit RMS
octave       error (cents)
--------------------------
  15          18.5
  19          12.7
  22           8.6
  24          15.1
  26          10.4
  27           7.9
  31           4.0
  35           9.9
  36           8.6
  37           7.6
  41           4.2

Do we really need to do the experiment? Paul?
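For a cross-check of the two lists (a sketch; reading the badness as rms error times n^(4/3) is an inference from Gene's n^(-1/3) cube argument, not a formula stated in this thread), a few lines of Python show that List 1 is roughly an isobad under that reading while List 2 is not:

list1 = [(3, 176.9), (6, 66.9), (19, 12.7), (22, 8.6), (27, 7.9), (68, 2.4),
         (130, 1.1), (140, 1.0), (202, 0.61), (612, 0.15), (1547, 0.040)]
list2 = [(15, 18.5), (19, 12.7), (22, 8.6), (24, 15.1), (26, 10.4), (27, 7.9),
         (31, 4.0), (35, 9.9), (36, 8.6), (37, 7.6), (41, 4.2)]

for name, ets in (("List 1", list1), ("List 2", list2)):
    badness = [round(err * n ** (4.0 / 3.0)) for n, err in ets]
    print(name, badness)

# List 1 stays between roughly 530 and 780; List 2 runs from about 390 to over 1100.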

Message: 2367

Date: Sat, 08 Dec 2001 06:16:46

Subject: Re: The grooviest linear temperaments for 7-limit music

From: dkeenanuqnetau

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:
> Dave, if you don't have a cutoff, you'd have an infinite number of > ETs better than 1547. Of course there has to be a cutoff.
Yes. This just shows that this isn't a very good badness metric. A decent badness metric would not need a cutoff in anything but badness in order to arrive at a finite list.
> I mean that only Gene's measure tells you exactly _how much_ better a > system is than the systems in their vicinity,
How do you know it does that? "Exactly"?
> _in units of_ the > average differences between different systems in their vicinity.
I don't understand that bit. Can you explain?
> I'd like to see a list of ETs, as far as you'd like to take it, above > some cutoff different from Gene's, that shows this kind of behavior > (not just the flatness of the measure itself, but also the flatness > of the size of the wiggles).
But why ever do you think the size of the wiggles should be flat? I think it is quite expected that the size of the wiggles in badness around 1-tET to 9-tET is _much_ bigger than the wiggles around 60-tET to 69-tET. Apparently you agree that the wiggles around 100000-tET are completely irrelevant, since you're happy to have a cutoff in steps, somewhere below that.

Message: 2368

Date: Sun, 09 Dec 2001 21:44:58

Subject: Re: Unison vector finder (Was: The grooviest linear temperaments for 7-limit

From: genewardsmith

--- In tuning-math@y..., graham@m... wrote:
> Gene wrote:
>> I don't know what good Maple code will do, but here it is: >> >> findcoms := proc(l) >> local p,q,r,p1,q1,r1,s,u,v,w; >
> More descriptive variable names might help. Is l the wedge invariant?
Yes.
>> s := igcd(l[1], l[2], l[6]); >> u := [l[6]/s, -l[2]/s, l[1]/s,0]; >
> Presumably this is simplifying the octave-equivalent part?
"s" is the gcd of the first, second and sixth coordinates of the wedgie, these are the ones used to construct the 5-limit comma. I divide out by s, and get u, which is a vector representing this comma.
>> v := [p,q,r,1]; >
> What values do p, q and r have? Is it important?
p, q, and r are indeterminates, and the "1" above should be "s", the gcd I obtained before. Here is a more recent version, which should be used instead of the old one as a reference:

findcoms := proc(l)
local p,q,r,p1,q1,r1,s,t,u,v,w;
s := igcd(l[1], l[2], l[6]);
u := [l[6]/s, -l[2]/s, l[1]/s, 0];
v := [p,q,r,s];
w := w7l(u,v);
t := isolve({l[1]-w[1],l[2]-w[2],l[3]-w[3],l[4]-w[4],l[5]-w[5],l[6]-w[6]});
t := subs(_N1=0,t);
p1 := subs(t,p);
q1 := subs(t,q);
r1 := subs(t,r);
v := 2^p1 * 3^q1 * 5^r1 * 7^s;
if v < 1 then v := 1/v fi;
w := 2^u[1] * 3^u[2] * 5^u[3];
if w < 1 then w := 1/w fi;
[w, v] end:
> So w is the wedge product of u and v, whatever they are.
Right, and "u" is the 5-limit comma, while "v" is undetermined aside from the fact that the power of 7 is "s".
>> s := isolve({l[1]-w[1],l[2]-w[2],l[3]-w[3],l[4]-w[4],l[5]-w[5],l [6]-w >> [6]}); > >> "isolve" gives integer solutions to a linear >> equation; >
> Oh, that sounds useful.
It is; a linear Diophantine equation routine would be a good thing to acquire.
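For a single equation in two unknowns, the classic extended-Euclidean routine is the core of such a thing; a minimal Python sketch (the function names are mine, for illustration only):

def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def solve_linear_diophantine(a, b, c):
    """One integer solution (x, y) of a*x + b*y = c, or None if none exists."""
    g, x, y = ext_gcd(a, b)
    if c % g:
        return None
    return x * (c // g), y * (c // g)

print(solve_linear_diophantine(41, 31, 1))   # (-3, 4), since 41*(-3) + 31*4 = 1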
>> p1 := subs(s,p); >> q1 := subs(s,q); >> r1 := subs(s,r); >
> What about this?
I've now re-named "s" (bad programming style if I was going to publish the code, but I didn't write it with that in mind) to be the set of solutions of the linear Diophantine equation. In my newer version, that is "t"; t is a particular solution, and I substitute this solution into the indeterminates, getting a specific value. It's Maple-specific idiocy, and you would no doubt do something different using Python.
>> v := 2^p1 * 3^q1 * 5^r1 * 7; >
> And here ^ is exponentiation instead of a wedge product.
Right, and 7 should be "7^s".
>> if v < 1 then v := 1/v fi; >
> So v must be a ratio, and you want it to be ascending.
I just like to standardize things.
>> w := 2^u[1] * 3^u[2] * 5^u[3]; >> if w < 1 then w := 1/w fi; >
> Same for w. >
>> [w, v] end: >
> And that's the result, is it? Two unison vectors?
Correct; two unison vectors free of torsion problems which define the linear temperament.
> Looks like the magic is being done by "isolve" which I presume is built-in > to Maple.
It's a built-in Maple function; however, much of the magic can still be had by solving the system over the rationals, because part of the magic was to start out in such a way that torsion problems would be exterminated. One way to solve a linear Diophantine system is, in fact, to solve over the rationals and then solve the congruence conditions required to give an integer solution. You might look in Niven and Zuckerman, if you have a copy, for linear Diophantine equations.

Message: 2369

Date: Sun, 09 Dec 2001 23:26:16

Subject: Re: Wedge products

From: genewardsmith

--- In tuning-math@y..., graham@m... wrote:

> Is your matrix of vals my mapping by steps? [(41, 31), (65, 49), (95, > 72), (115, 87), (142, 107)] for Miracle. If so, I'm with you until you > get to the Diophantine equations. I think it's solving systems of linear > Diophantine equations that I need to know how to do.
No, but if you have an easy way to get your two vals, and if they produce the correct wedgie, then they will work also.
>>>> wedgie = reduce(temper.wedgeProduct, > map(temper.WedgableRatio, > [(225,224),(385,384),(243,242)]))
I get 225/224^385/384^243/242 = h31^h41 = [6,-7,-2,15,-25,-20,3,15,59,49] in the ordering I'm using now; this has the correct number of dimensions, ten. If you want to mess around with wedging equivalence classes (but what's the point?) then they should come out in six dimensions. The equivalence class wedgies are just subsets of the full wedgie, but they don't correspond any more, and don't get rid of torsion, and so don't seem very useful.
>>>> wedgie.octaveEquivalent().flatten()
> (0, -6, 7, 2, -15) > > but with the wedge of the temperaments > >>>> (h31^h41).octaveEquivalent().flatten()
> (-25, -20, 3, 15, 59, 49)
Both of these are only part of the correct wedgie, so naturally they are not in correspondence.
> Yes, for matrices you need to have consistent dimensions, but you can get > away without them for wedge products. At least the way I've implemented > them.
That may be your problem. You could do this by assuming infinite dimensions, and ignoring things after the dimension becomes larger than your inputs, where all coefficients become zero, but the normal way is to stick with a certain number of dimensions.
>
>> But some zero elements aren't always present. Either I can
>>> get rid of them, which might mean that different products have the >> same
>>> invariant, or enumerate the missing bases when I calculate the >> invariant.
You certainly can't ignore a basis element with a coefficient of zero unless it is beyond the range of dimensions you are working in.
> Take the multiple-29 wedgie: >
>>>> h29 = temper.PrimeET(29,temper.primes[:5]) >>>> h58 = temper.PrimeET(58,temper.primes[:5]) >>>> (h29^h58).invariant()
> (0, 29, 29, 29, 29, 46, 46, 46, 46, -14, -33, -40, -19, -26, -7)
There are fifteen of these, so presumably it is 13-limit, but you don't say. For a 13-limit wedgie of these two, I get your result, so that seems to be what it is.
>>>> wedgie.invariant()
> (29, 29, 29, 29, 46, 46, 46, 46, -14, -33, -40, -19, -26, -7)
This has dimension 14, which is wrong, and there is your missing zero.
> If I could enumerate over all pairs, I could fix that. But that still > leave the general problem of all combinations of N items taken from a set. > I'd prefer to get rid of zero elements altogether.
For programming purposes? I think the program should follow the math, and not vice-versa; otherwise you are asking for trouble.
> Right? Some pairs have torsion as well:
My algorithm gets rid of the torsion, that's really the point of it all.

Message: 2370

Date: Sun, 09 Dec 2001 02:55:32

Subject: Re: Stern-Brocot commas

From: genewardsmith

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote: >
>> It looks a little cheesy, now that I look at it; I'd better check > my >> program. :)
> I think the problem is my definition; I should confine it to the > branch of the tree coming from 3/2.
The first definition was too soft, and this one too hard. "Just right" seems to be taking the comma function of every rational number greater than 1 which is not an integer. This gives the following 11-limit list of SB commas:

9801/9800, 4375/4374, 41503/41472, 6250/6237, 1375/1372, 441/440, 8019/8000, 5120/5103, 243/242, 225/224, 2200/2187, 2835/2816, 1728/1715, 126/125, 245/243, 1944/1925, 81/80, 875/864, 64/63

It seems that SB commas are good commas, but unfortunately not every good comma is an SB comma. We can make the definition recursive, and define an SB comma of level n to be the ratio of the SB commas of level n-1 at each of the subnodes of a given node. If we do that, for level 2 11-limit SB commas we may add the following: 2401/2376 and 4000/3969. It was nice to see my old friend 4000/3969 again, but I would have been much happier to get 2401/2400 than 2401/2376. I didn't get any new 11-limit commas from level 3.

Message: 2371

Date: Sun, 09 Dec 2001 07:40:08

Subject: Re: Wedge products

From: genewardsmith

--- In tuning-math@y..., graham@m... wrote:

> (6, -7, -2, 15, -25, -20, 3, 15, 59, 49) > I've got mine ordered, but it looks like a different order to yours.
I don't think I talked about an 11-limit order. I have a program which orders the above [6,-7,-2,15,20,-25,15,3,59,49] but I'm hardly fixated on that ordering. One thing which works well for wedge products of a pair of vectors but which doesn't work so well for more is the skew-symmetric matrix form. You take the outer product of the two vectors, and its transpose, and subtract. It has some redundancy but it's pretty nice; however for three vectors you get a cubical array and more redundancy, and so forth.

Message: 2372

Date: Sun, 09 Dec 2001 08:26:50

Subject: Re: Wedge products

From: genewardsmith

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., graham@m... wrote: >
>> (6, -7, -2, 15, -25, -20, 3, 15, 59, 49) > >> I've got mine ordered, but it looks like a different order to yours. >
> I don't think I talked about an 11-limit order. I have a program > which orders the above [6,-7,-2,15,20,-25,15,3,59,49] but I'm hardly > fixated on that ordering.
Here's the matrix form: [ 0 6 -7 -2 15] [-6 0 -25 -20 3] [ 7 25 0 15 59] [ 2 20 -15 0 49] [-15 -3 -59 -49 0] One nice thing about this form is that the previous prime limits are included as the principal minors.
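For what it's worth, a small Python sketch of that construction (using the 11-limit patent vals for 31 and 41 quoted earlier in the thread) reproduces this matrix:

# outer product of the two vals minus its transpose
h31 = (31, 49, 72, 87, 107)   # 11-limit patent val for 31
h41 = (41, 65, 95, 115, 142)  # 11-limit patent val for 41

matrix = [[h31[i] * h41[j] - h31[j] * h41[i] for j in range(5)]
          for i in range(5)]
for row in matrix:
    print(row)

# Reading off the upper triangle row by row gives the wedgie
# 6, -7, -2, 15, -25, -20, 3, 15, 59, 49, and the lower prime limits
# sit inside as the leading sub-blocks (the "principal minors" above).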

Message: 2373

Date: Sun, 9 Dec 2001 16:02 +00

Subject: Re: Wedge products

From: graham@xxxxxxxxxx.xx.xx

Gene wrote:

> What's the best version of Python for Win98, do you know? In > particular, what is the deal with the "stackless" version?
Usually the latest stable ActiveState release, so long as you don't quibble with the license. Stackless is an experimental implementation that has continuations, and doesn't need the Global Interpreter Lock.
>> My wedge invariants can't be made unique and invariant in all > cases, but
>> they work most of the time. I could have a method for declaring if > two
>> wedgable objects are equivalent. >
> You don't need to use my system; you could make the first non-zero > coefficient in the basis ordering you use positive.
Yes, that's what I do.
> Also, my invariant is very different to >> Gene's. >
> It should differ only in the sign or order of basis elements.
Looks like it, except for the zeros.
>> I still don't get the process for calculating unison vectors with > wedge
>> products, especially in the general case. >
> One way to think of the general case is to get the associated matrix > of what I call "vals", reduce by dividing out by gcds, and solve the > resultant system of linear Diophantine equations, which set each of > the val maps to zero.
Is your matrix of vals my mapping by steps? [(41, 31), (65, 49), (95, 72), (115, 87), (142, 107)] for Miracle. If so, I'm with you until you get to the Diophantine equations. I think it's solving systems of linear Diophantine equations that I need to know how to do.
>> One good thing is that the generator mapping (ignoring the period > mapping)
>> which I'm using as my invariant key, is simply the octave- > equivalent part
>> of the wedge product of the commatic unison vectors! >
> Or of the wedge product of two ets.
Ah, no, not quite. This works:
>>> wedgie = reduce(temper.wedgeProduct, map(temper.WedgableRatio, [(225,224),(385,384),(243,242)]))
>>> wedgie.octaveEquivalent().flatten()
(0, -6, 7, 2, -15)

but with the wedge of the temperaments

>>> (h31^h41).octaveEquivalent().flatten()
(-25, -20, 3, 15, 59, 49)

so what I have to do is

>>> (h31^h41).complement().octaveEquivalent().flatten()
(0, -6, 7, 2, -15)

The complement() method is something like a transpose. Would that be a better name for it? Anyway, my invariant usually works so that wedge products related in this way compare the same, but not always.
>> I've got mine ordered, but it looks like a different order to yours. >
> That's not surprising; the order is not determined by the definition > of wedge product, and I chose mine in a way I thought made sense from > the point of view of usability for music theory.
Oh, well, mine's numerical order.
>> The problem is with zeroes. As it stands, the 5-limit interval 5:4 > is the
>> same as the 7-limit interval 5:4 as far as the wedge products are >> concerned. >
> This has me confused, because it's the same as far as I'm concerned > too, unless you mean its vector representation.
Yes, for matrices you need to have consistent dimensions, but you can get away without them for wedge products. At least the way I've implemented them.
> But some zero elements aren't always present. Either I can
>> get rid of them, which might mean that different products have the > same
>> invariant, or enumerate the missing bases when I calculate the > invariant. >
> I don't know what is going on here.
Take the multiple-29 wedgie:
>>> h29 = temper.PrimeET(29,temper.primes[:5])
>>> h58 = temper.PrimeET(58,temper.primes[:5])
>>> (h29^h58).invariant()
(0, 29, 29, 29, 29, 46, 46, 46, 46, -14, -33, -40, -19, -26, -7)

Note it starts with a zero, which corresponds to the (0,1) element. But, if you build it up from the right set of unison vectors,

>>> wedgie = reduce(temper.wedgeProduct, ( (46, -29),
(-14, 0, -29, 29), (33, 0, 29, 0, -29), (7, 0, 0, 0, 29, -29)))
>>> wedgie.simplify()
>>> wedgie.complement()
{(0, 5): 29, (0, 4): 29, (1, 4): 46, (1, 5): 46, (1, 2): 46, (1, 3): 46, (2, 5): -40, (2, 4): -33, (2, 3): -14, (3, 4): -19, (3, 5): -26, (0, 2): 29, (4, 5): -7, (0, 3): 29}

The (0,1) element isn't there. That means it's also missing from the invariant

>>> wedgie.invariant()
(29, 29, 29, 29, 46, 46, 46, 46, -14, -33, -40, -19, -26, -7)

If I could enumerate over all pairs, I could fix that. But that still leaves the general problem of all combinations of N items taken from a set. I'd prefer to get rid of zero elements altogether.
>> As to the unison vectors, in the 7-limit I seem to be getting 4 > when I
>> only wanted 2, so how can I be sure they're linearly independent? >
> They are never linearly independent. Why do they need to be?
I need a pair of unison vectors to define a 7-limit linear temperament. Right? Some pairs have torsion as well:
>>> for i in range(4):
      for j in range(3):
          print temper.wedgeProduct(vectors[i], vectors[j]).torsion(),

0 2 12 2 0 4 12 4 0 11 4 2

The aim is to get a pair without torsion. And then generalize the process for any number of dimensions.

Graham

Message: 2374

Date: Sun, 9 Dec 2001 16:02 +00

Subject: Unison vector finder (Was: The grooviest linear temperaments for 7-limit

From: graham@xxxxxxxxxx.xx.xx

Gene wrote:
> I don't know what good Maple code will do, but here it is: > > findcoms := proc(l) > local p,q,r,p1,q1,r1,s,u,v,w;
More descriptive variable names might help. Is l the wedge invariant?
> s := igcd(l[1], l[2], l[6]); > u := [l[6]/s, -l[2]/s, l[1]/s,0];
Presumably this is simplifying the octave-equivalent part?
> v := [p,q,r,1];
What values do p, q and r have? Is it important?
> w := w7l(u,v);
> "w7l" takes two vectors representing intervals, and computes the wedge product.
So w is the wedge product of u and v, whatever they are.
> s := isolve({l[1]-w[1],l[2]-w[2],l[3]-w[3],l[4]-w[4],l[5]-w[5],l[6]-w > [6]}); > "isolve" gives integer solutions to a linear > equation;
Oh, that sounds useful.
> s := subs(_N1=0,s);
> I get an undetermined variable "_N1" in this way which I can set equal to any integer, so I set it to 0.
Okay.
> p1 := subs(s,p);
> q1 := subs(s,q);
> r1 := subs(s,r);
What about this?
> v := 2^p1 * 3^q1 * 5^r1 * 7;
And here ^ is exponentiation instead of a wedge product.
> if v < 1 then v := 1/v fi;
So v must be a ratio, and you want it to be ascending.
> w := 2^u[1] * 3^u[2] * 5^u[3]; > if w < 1 then w := 1/w fi;
Same for w.
> [w, v] end:
And that's the result, is it? Two unison vectors?
> coms := proc(l) > local v; > v := findcoms(l); > com7(v[1],v[2]) end: > The pair of unisons > returned in this way can be LLL reduced by the "com7" function, which > takes a pair of intervals and LLL reduces them.
That makes sense. Return the reduced results of the other function.
> "w7l" takes two vectors representing intervals, and computes the > wegdge product. "isolve" gives integer solutions to a linear > equation; I get an undeterminded varable "_N1" in this way which I > can set equal to any integer, so I set it to 0. The pair of unisons > returned in this way can be LLL reduced by the "com7" function, which > takes a pair of intervals and LLL reduces them.
Looks like the magic is being done by "isolve" which I presume is built-in to Maple. Graham