This is an Opt In Archive. We would like to hear from you if you want your posts included. For the contact address see About this archive. All posts are copyright (c).

Message: 9675

Date: Mon, 02 Feb 2004 23:42:16

Subject: Re: finding a moat in 7-limit commas a bit tougher . . .

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> 
wrote:
> --- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:
>> Yahoo groups: /tuning_files/files/Erlich/plana... * [with cont.]
>
> Paul,
>
> Please do another one of these without the labels, so we have a chance
> of eyeballing the moats.

Yahoo groups: /tuning_files/files/Erlich/plana... * [with cont.]

Message: 9676

Date: Mon, 02 Feb 2004 23:47:51

Subject: Re: Back to the 5-limit cutoff

From: Gene Ward Smith

--- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> 
wrote:

> My favourite cutoff for 5-limit temperaments is now.
>
> (error/8.13)^2 + (complexity/30.01)^2 < 1
Where do these numbers come floating in from--why 30.01, and not just 30, for instance?
> meantone 80:81
> augmented 125:128
> porcupine 243:250
> diaschismic 2025:2048
> diminished 625:648
> magic 3072:3125
> blackwood 243:256
> kleismic 15552:15625
> pelogic 128:135
> 6561/6250 6250:6561
> quartafifths (tetracot) 19683:20000
> negri 16384:16875
> 2187/2048 2048:2187
> neutral thirds (dicot) 24:25
> superpythag 19683:20480
> schismic 32768:32805
> 3125/2916 2916:3125
The only thing which might qualify as microtempering is schismic, which I presume is the idea. It looks OK at first glance, and could even be shortened on the high-error side without upsetting me any. By the way, if you use 81/80 instead of 80:81, you are not going to be inconsistent with that other fellow who uses 81:80 for the exact same ratio. You will also be specifying an actual number. Numbers are nice. This whole obsession with colons makes me want to give the topic a colostomy. I have read no justification for it which makes any sense to me.
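For anyone who wants to check the list against the numbers, here is a minimal Python sketch of Dave's cutoff, using the scalings he spells out later in message 9688 (complexity = lg2(n*d), error = comma size in cents divided by complexity). The 8.13, 30.01 and the "value < 1" inclusion rule are exactly as quoted; everything else is my own scaffolding.

    from math import log2

    def cutoff_value(n, d):
        """Dave's 5-limit cutoff as quoted above, with the scalings from
        message 9688: complexity = lg2(n*d), error = comma cents / complexity."""
        complexity = log2(n * d)
        cents = 1200 * log2(n / d)   # size of the vanishing comma in cents
        error = cents / complexity
        return (error / 8.13) ** 2 + (complexity / 30.01) ** 2

    # Three commas from the list above; a temperament is included when the value < 1.
    for name, n, d in [("meantone", 81, 80),
                       ("neutral thirds (dicot)", 25, 24),
                       ("schismic", 32805, 32768)]:
        print(name, round(cutoff_value(n, d), 4))

With these scalings meantone comes out around 0.22, dicot around 0.98, and schismic a hair under 1 (its complexity is about 30.002), which may be why the constant is 30.01 rather than a round 30 -- though that reading is mine, not anything stated in the thread.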

Message: 9677

Date: Mon, 02 Feb 2004 00:32:43

Subject: Weighting (was: 114 7-limit temperaments)

From: Graham Breed

> Observation One: The extent and intensity of the influence of a
> magnet is in inverse proportion to its ratio to 1.
Hmm, that's fairly impenetrable. But it does say "extent" and "inverse proportion".
> "To be taken in conjunction with the following"
>
> Observation Two: The intensity of the urge for resolution is in
> direct proportion to the proximity of the temporarily magnetized
> tone to the magnet.
So it's only about resolution? Carl:
> ? The more complex ones already have the highest entropy. You mean
> they gain the most entropy from the mistuning? I think Paul's saying
> the entropy gain is about constant per mistuning of either complex
> or simple putative ratios.
Oh no, the simple intervals gain the most entropy. That's Paul's argument for them being well tuned. After a while, the complex intervals stop gaining entropy altogether, and even start losing it. At that point I'd say they should be ignored altogether, rather than included with a weighting that ensures they can never be important. Some of the temperaments being bandied around here must get way beyond that point. Actually, any non-unique temperament will be a problem. What I meant is that, because the simple intervals have the least entropy to start with, they still have the least after mistuning, although they're gaining it more rapidly. Carl:
> I was thinking about this last night before I passed out. If you
> tally the number of each dyad at every beat in a piece of music and
> average, I think you'd find the most common dyads are octaves, to be
> followed by fifths and so on. Thus if consonance really *does*
> deteriorate at the same rate for all ratios as Paul claims, one
> would place less mistuning on the simple ratios because they occur
> more often. This is, I believe, what TOP does.
It depends on the music, of course. My decimal counterpoint tends to use 4:6:7 a lot because it's simple, and not much of 6:5. So tuning for such pieces would be different to TOP, which assumes a different pattern of intervals. This would make more sense for evaluating complexity, although I'm not sure how you can write a piece of music without knowing what temperament you want it in. Why temper at all in that situation? But if you have some idea of the intervals you like, perhaps with a body of music in JI to count them from, you could find a temperament that makes them all nicely in tune and easy to find. Graham
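A toy illustration of the dyad-counting idea Carl floats above (not anyone's actual procedure here): weight each interval's mistuning by how often it occurs, so the dyads a piece leans on hardest dominate the total. The counts and mistunings below are made-up numbers.

    # Hypothetical dyad tallies from a piece, and per-dyad mistunings in cents.
    counts = {"2:1": 120, "3:2": 80, "5:4": 40, "7:4": 10}
    mistuning = {"2:1": 1.7, "3:2": 3.0, "5:4": 5.0, "7:4": 9.0}

    # Occurrence-weighted average error: common (simple) dyads count for more,
    # so a tuning chosen to minimize this would keep them closest to just.
    weighted = sum(counts[r] * abs(mistuning[r]) for r in counts) / sum(counts.values())
    print(round(weighted, 2), "cents, weighted by occurrence")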

Message: 9678

Date: Mon, 02 Feb 2004 04:00:09

Subject: Re: 114 7-limit temperaments

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:

>>> Thus if consonance really *does*
>>> deteriorate at the same rate for all ratios as Paul claims,
>>
>> Where did I claim that?
>
> In your decatonic paper you say the consonance deteriorates
> 'at least as fast', and opt to go sans weighting, IIRC.
Yes, the mathematics underlying harmonic entropy makes it clear that simpler ratios have more "room" around them, but when you actually calculate harmonic entropy itself, you end up finding that this doesn't translate into less sensitivity to mistuning. The paper is pre-harmonic entropy (it was invented too late) . . .
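For readers who haven't seen the calculation, here is a very rough sketch of the kind of computation being discussed. It is not Paul's actual harmonic entropy code, and the candidate set, the n*d bound and the smearing width s are all assumptions of mine: each candidate ratio gets probability proportional to its "room" (the span between the mediants it forms with its neighbours) times a Gaussian centred on the heard interval, and the entropy of that distribution is the discordance measure.

    from math import gcd, log2, exp, log

    def candidates(max_nd=10000):
        """Ratios n/d with 1 <= n/d <= 2, in lowest terms, with n*d <= max_nd."""
        rats = set()
        for d in range(1, 101):
            for n in range(d, 2 * d + 1):
                if gcd(n, d) == 1 and n * d <= max_nd:
                    rats.add((n, d))
        return sorted(rats, key=lambda r: r[0] / r[1])

    def harmonic_entropy(c, s=17.0, max_nd=10000):
        """Toy harmonic entropy at interval size c (in cents)."""
        rats = candidates(max_nd)
        cents = [1200 * log2(n / d) for n, d in rats]
        # cents of the mediant between each pair of adjacent candidates
        med = [1200 * log2((rats[i][0] + rats[i + 1][0]) / (rats[i][1] + rats[i + 1][1]))
               for i in range(len(rats) - 1)]
        probs = []
        for i in range(1, len(rats) - 1):            # skip the two edge ratios
            width = med[i] - med[i - 1]              # the ratio's "room" in cents
            probs.append(width * exp(-(c - cents[i]) ** 2 / (2 * s ** 2)))
        total = sum(probs)
        return -sum((p / total) * log(p / total) for p in probs if p > 0)

    # With these toy parameters the value should sit in a valley at a just fifth
    # (702 cents) and climb as the interval is mistuned away from it.
    for c in (702.0, 712.0):
        print(c, round(harmonic_entropy(c), 3))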

Message: 9679

Date: Mon, 02 Feb 2004 05:32:08

Subject: Re: Back to the 5-limit cutoff

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
>>>>>>>> I'm arguing that, along this particular line of thinking,
>>>>>>>> complexity does one thing to music, and error another, but
>>>>>>>> there's no urgent reason more of one should limit your
>>>>>>>> tolerance for the other . . .
>>>>>>>
>>>>>>> Taking this to its logical extreme, wouldn't we abandon
>>>>>>> badness altogether?
>>>>>
>>>>>> No, it would just become 'rectangular', as Dave noted.
>>>>>
>>>>> I didn't follow that.
>>>>
>>>> Your badness function would become max(a*complexity, b*error),
>>>> thus having rectangular contours.
>>>
>>> More of one can here influence the tolerance for the other.
>>
>> Not true.
>
> Actually what are a and b?

Constants.

> But Yes, true. Increasing my tolerance for complexity simultaneously
> increases my tolerance for error, since this is Max().
I have no idea why you say that. However, when I said "more of one", I didn't mean "more tolerance for one", I simply meant "higher values of one".

Message: 9680

Date: Mon, 02 Feb 2004 08:55:58

Subject: Re: Back to the 5-limit cutoff

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
>> It's what you said yesterday (I think).
>>
>> At some point (1 cent, 0.5 cent?) the error is so low and the
>> complexity so high, that any further reduction in error is irrelevant
>> and will not cause you to allow any further complexity. So it should
>> be straight down to the complexity axis from there.
>
> Picking a single point is hard. It should be asymptotic.
Surely you don't mean asymptotic here, since asymptotic means "getting closer and closer to a line but never reaching it except in the limit of infinite distance from the origin", right?

Asymptote -- from MathWorld * [with cont.]

Unless you're talking about log-flat badness, in which case you're not really responding to Dave's comment at all . . .

Message: 9681

Date: Mon, 02 Feb 2004 23:59:25

Subject: Re: finding a moat in 7-limit commas a bit tougher . . .

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> 
wrote:
> --- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:
>> Yahoo groups: /tuning_files/files/Erlich/plana... * [with cont.]
>
> And could you please multiply the vertical axis numbers by 1200. I'm
> getting tired of doing this mentally all the time, to make them mean
> something.

I re-uploaded Yahoo groups: /tuning_files/files/Erlich/plana... * [with cont.] for you.

Message: 9682

Date: Mon, 02 Feb 2004 01:02:00

Subject: Re: Back to the 5-limit cutoff

From: Carl Lumma

>Even if you accept this (which I don't), wouldn't it merely tell you
>that the power should be *at least 2* or something, rather than
>*exactly 2*?
Yes. I was playing with things like comp**5(err**2) back in the day. But I may have been missing out on the value of adding... -Carl

Message: 9683

Date: Mon, 02 Feb 2004 02:05:27

Subject: Re: 114 7-limit temperaments

From: Gene Ward Smith

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
>>> You're using temperaments to construct scales, aren't you?
>>
>> Not me, for the most part. I think the non-keyboard composer is
>> simply being ignored in these discussions, and I think I'll stand
>> up for him.
>
> How *are* you constructing scales, and what does it have to do
> with keyboards?
Often I'm not constructing them because I'm not using them.

Message: 9684

Date: Mon, 02 Feb 2004 04:10:54

Subject: Re: Weighting

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> 
wrote:

>> Anyway, Partch is saying you can create a dissonance by using a
>> complex interval that's close in size to a simple one. I translate
>> his Observations into the present context thus...
>>
>> 'The size (in cents) of the 'field of attraction' of an interval
>> is proportional to the size of the numbers in the ratio, and
>> the dissonance (as opposed to discordance) becomes *greater* as
>> it gets closer to the magnet.'
>
> Since I don't know what he, or you, mean by a "magnet" I can only
> comment on the first part of this purported translation. And I find
> that it is utterly foreign to my experience, and I think yours. Did
> you accidentally drop an "inversely"?

Yes.

> i.e. we can safely assume that
> Partch is only considering ratios in the superset of all his JI
> scales, so things like 201:301 do not arise.

Yes.

> i.e. he's ignoring
> TOLERANCE and only considering COMPLEXITY.

Yes.

> So surely he means that as
> the numbers in the ratio get larger, the width of the field of
> attraction gets smaller.

Yes.

> To me, that's an argument for why TOP isn't necessarily what you
> want.
Why, if this only addresses complexity and ignores tolerance? Partch isn't expressing his views on tolerance/mistuning here. And while the Farey or whatever series that are used to calculate harmonic entropy follow this same observation if one equates "field of attraction" with "interval between it and adjacent ratios", the harmonic entropy that comes out of this shows that simpler ratios are most sensitive to mistuning, precisely because their great consonance arises from this very remoteness from neighbors, a unique property that rapidly subsides as one shifts away from the correct tuning.
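One way to put numbers on the "field of attraction gets smaller as the numbers get larger" point: measure the span between the nearest neighbouring ratios on either side of a given ratio. This is a crude reading of my own, with an assumed complexity bound of n*d <= 10000 standing in for whatever series one actually uses.

    from math import gcd, log2

    def neighbour_gap(n, d, max_nd=10000):
        """Cents between the closest ratios below and above n/d among all
        ratios (in lowest terms) with n*d <= max_nd -- a crude stand-in for
        the width of a 'field of attraction'."""
        target = n / d
        below, above = 0.0, 4.0
        for dd in range(1, 101):
            for nn in range(1, 4 * dd + 1):
                if gcd(nn, dd) != 1 or nn * dd > max_nd or (nn, dd) == (n, d):
                    continue
                r = nn / dd
                if r < target:
                    below = max(below, r)
                elif r > target:
                    above = min(above, r)
        return 1200 * (log2(above) - log2(below))

    # The gap around 3/2 comes out several times wider than the gap around 13/8.
    print(round(neighbour_gap(3, 2), 1), "cents around 3/2")
    print(round(neighbour_gap(13, 8), 1), "cents around 13/8")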

Message: 9685

Date: Mon, 02 Feb 2004 05:33:29

Subject: Re: The true top 32 in log-flat?

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
>>>>>>>>>> TOP generators [1201.698520, 504.1341314]
>>>>>>>
>>>>>>> So how are these generators being chosen? Hermite?
>>>>>>
>>>>>> No, just assume octave repetition, find the period (easy)
>>>>>> and then the unique generator that is between 0 and 1/2
>>>>>> period.
>>>>>>
>>>>>>> I confess
>>>>>>> I don't know how to 'refactor' a generator basis.
>>>>>>
>>>>>> What do you have in mind?
>>>>>
>>>>> Isn't it possible to find alternate generator pairs that give
>>>>> the same temperament when carried out to infinity?
>>>>
>>>> Yup! You can assume tritave-equivalence instead of octave-
>>>> equivalence, for one thing . . .
>>>
>>> And can doing so change the DES series?
>>
>> Well of course . . . can you think of any octave-repeating DESs
>> that are also tritave-repeating?
>
> Right, so when trying to explain a creepy coincidence between
> complexity and DES cardinalities, might not we take this into
> account?
Sure . . . some of the ones that 'don't work' may be working for tritave-DESs rather than octave-DESs, is that what you were thinking?

Message: 9686

Date: Mon, 02 Feb 2004 01:07:08

Subject: Re: Back to the 5-limit cutoff

From: Carl Lumma

>>> It's what you said yesterday (I think).
>>>
>>> At some point (1 cent, 0.5 cent?) the error is so low and the
>>> complexity so high, that any further reduction in error is
>>> irrelevant and will not cause you to allow any further complexity.
>>> So it should be straight down to the complexity axis from there.
>>
>> Picking a single point is hard. It should be asymptotic.
>
>Surely you don't mean asymptotic here, since asymptotic
>means "getting closer and closer to a line but never reaching it
>except in the limit of infinite distance from the origin", right?
>
>Asymptote -- from MathWorld * [with cont.]

That's right.

>Unless you're talking about log-flat badness, in which case you're
>not really responding to Dave's comment at all . . .
No, I was talking about what happens to error's contribution to badness as it approaches zero. -Carl

Message: 9687

Date: Mon, 02 Feb 2004 02:27:18

Subject: Re: Back to the 5-limit cutoff

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
>> I'm arguing that, along this particular line of thinking, complexity
>> does one thing to music, and error another, but there's no urgent
>> reason more of one should limit your tolerance for the other . . .
>
> Taking this to its logical extreme, wouldn't we abandon badness
> altogether?
>
> -Carl
No, it would just become 'rectangular', as Dave noted.

Message: 9688

Date: Mon, 02 Feb 2004 04:18:15

Subject: Re: Back to the 5-limit cutoff

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> 
wrote:
> --- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...>
> wrote:
>> --- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...>
> wrote:
>>> You should know how to calculate them by now: log(n/d)*log(n*d)
>>> and log(n*d) respectively.
>>
>> You mean
>>
>> log(n/d)/log(n*d)
>>
>> where n:d is the comma that vanishes.
>>
>> I prefer these scalings
>>
>> complexity = lg2(n*d)
>>
>> error = comma_size_in_cents / complexity
>> = 1200 * log(n/d) / log(n*d)
>>
>> My favourite cutoff for 5-limit temperaments is now.
>>
>> (error/8.13)^2 + (complexity/30.01)^2 < 1
>>
>> This has an 8.5% moat, in the sense that we must go out to
>>
>> (error/8.13)^2 + (complexity/30.01)^2 < 1.085
>>
>> before we will include another temperament (semisixths).
>>
>> Note that I haven't called it a "badness" function, but rather a
>> "cutoff" function. So there's no need to see it as competing with
>> log-flat badness. What it is competing with is log-flat badness plus
>> cutoffs on error and complexity (or epimericity).
>>
>> Yes it's arbitrary, but at least it's not capricious, thanks to the
>> existence of a reasonable-sized moat around it.
>>
>> It includes the following 17 temperaments.
>
> is this in order of (error/8.13)^2 + (complexity/30.01)^2 ?
>
>> meantone 80:81
>> augmented 125:128
>> porcupine 243:250
>> diaschismic 2025:2048
>> diminished 625:648
>> magic 3072:3125
>> blackwood 243:256
>> kleismic 15552:15625
>> pelogic 128:135
>> 6561/6250 6250:6561
>> quartafifths (tetracot) 19683:20000
>> negri 16384:16875
>> 2187/2048 2048:2187
>> neutral thirds (dicot) 24:25
>> superpythag 19683:20480
>> schismic 32768:32805
>> 3125/2916 2916:3125
>>
>> Does this leave out anybody's "must-have"s?
>>
>> Or include anybody's "no-way!"s?
>
> I suspect you could find a better moat if you included semisixths
> too -- but you might need to hit at least one axis at less than a 90-
> degree angle. Then again, you might not.
Also try including semisixths *and* wuerschmidt -- for a list of 19 -- particularly if you're willing to try a straighter curve.

Message: 9689

Date: Mon, 02 Feb 2004 05:35:51

Subject: Re: 7-limit horagrams

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
>>>>>>> Beautiful! I take it the green lines are proper scales?
>>>>>>
>>>>>> Guess again (it's easy)!
>>>>>
>>>>> Obviously not easy enough if we've had to exchange three
>>>>> messages about it.
>>>>
>>>> Then you can't actually be looking at the horagrams ;)
>>>
>>> Why not just explain things rather than riddling your users?
>>
>> Because I'm trying to encourage some looking.
>
> I've tested several possibilities about what the green could mean,
> and your continued refusal to simply provide the answer is assinine,
> with a double s.

I'm ssorry.

Green-black-green-black-green-black-green-black-green-black-green-
black . . .

Wasn't that your idea in the first place?

Message: 9690

Date: Mon, 02 Feb 2004 01:11:46

Subject: Re: Back to the 5-limit cutoff

From: Carl Lumma

>> If I have a certain expectation of max error and a separate
>> expectation of max complexity, but I can't measure them directly,
>> I have to use Dave's formula, I wind up with more of whatever I
>> happened to expect less of.
>
>More of whatever you happened to expect less of? What do you mean?
>Can you explain with an example?
If I'm bounding a list of temperaments with Dave's formula only, and I desire that error not exceed 10 cents rms and complexity not exceed 20 notes (and a and b somehow put cents and notes into the same units), what bound on Dave's formula should I use? If I pick 10 I won't see the larger temperaments I want, and if I pick 20 I'll see the less accurate temperaments I don't want.
>> Dave's function is thus a badness
>> function, since it represents both error and complexity.
>
>A badness function has to take error and complexity as inputs, and
>give a number as output.
That's why the notion of badness is incompatible with the logical extreme of your suggestion. -C.

Message: 9691

Date: Mon, 02 Feb 2004 02:31:55

Subject: Re: The true top 32 in log-flat?

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:

>>>> TOP generators [1201.698520, 504.1341314]
>
> So how are these generators being chosen? Hermite?
No, just assume octave repetition, find the period (easy) and then the unique generator that is between 0 and 1/2 period.
> I confess
> I don't know how to 'refactor' a generator basis.
What do you have in mind?
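A tiny sketch of the reduction rule Paul gives above, assuming octave repetition; the only inputs are a period and any representative of the generator, both in cents.

    def reduce_generator(gen, period):
        """Fold a generator into the range 0 .. period/2: g, g + period and
        period - g all generate the same chain, so pick the unique
        representative in that half-period range."""
        g = gen % period
        return min(g, period - g)

    # The pair quoted above: a tempered octave and its generator.
    period = 1201.698520
    print(reduce_generator(504.1341314, period))           # already in range
    print(reduce_generator(period - 504.1341314, period))  # its complement reduces to the same value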

Message: 9692

Date: Mon, 02 Feb 2004 16:07:56

Subject: Re: Back to the 5-limit cutoff

From: Carl Lumma

>By the way, if you use 81/80 instead of 80:81, you are not going to
>be inconsistent with that other fellow who uses 81:80 for the exact
>same ratio. You will also be specifying an actual number. Numbers are
>nice. This whole obsession with colons makes me want to give the
>topic a colostomy. I have read no justification for it which makes
>any sense to me.
There's a history in the literature of using ratios to notate pitches. Normally around here we use them to notate intervals, but confusion between the two has caused tragic misunderstandings and more than a few flame wars. So we adopted colon notation for intervals. I have no idea what the idea behind putting the smaller number first is, and I don't approve of it. -Carl

Message: 9693

Date: Mon, 02 Feb 2004 02:32:30

Subject: Re: 7-limit horagrams

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Carl Lumma" <ekin@l...> wrote:
> Beautiful! I take it the green lines are proper scales?
>
> -C.
Guess again (it's easy)!

Message: 9694

Date: Mon, 02 Feb 2004 05:38:56

Subject: Re: Back to the 5-limit cutoff

From: Dave Keenan

--- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:
> Also try including semisixths *and* wuerschmidt -- for a list of 19 --
> particularly if you're willing to try a straighter curve.
No. There's no way to get a better moat by adding wuerschmidt. It's too close to aristoxenean, and if you also add aristoxenean it's too close to ... etc.

Message: 9695

Date: Mon, 02 Feb 2004 09:12:17

Subject: Re: Back to the 5-limit cutoff

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
>>>> It's what you said yesterday (I think).
>>>>
>>>> At some point (1 cent, 0.5 cent?) the error is so low and the
>>>> complexity so high, that any further reduction in error is
>>>> irrelevant and will not cause you to allow any further complexity.
>>>> So it should be straight down to the complexity axis from there.
>>>
>>> Picking a single point is hard. It should be asymptotic.
>>
>> Surely you don't mean asymptotic here, since asymptotic
>> means "getting closer and closer to a line but never reaching it
>> except in the limit of infinite distance from the origin", right?
>>
>> Asymptote -- from MathWorld * [with cont.]
>
> That's right.
But without the "infinite distance" part?
>> Unless you're talking about log-flat badness, in which case you're
>> not really responding to Dave's comment at all . . .
>
> No, I was talking about what happens to error's contribution to
> badness as it approaches zero.
I've always seen 'asymptote' defined as in the diagrams, with something approaching infinity, not a finite limit. But OK.

Message: 9696

Date: Mon, 02 Feb 2004 02:56:20

Subject: Re: 114 7-limit temperaments

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Graham Breed <graham@m...> wrote:
> Paul Erlich wrote:
>
>> I don't think that's quite what Partch says. Manuel, at least, has
>> always insisted that simpler ratios need to be tuned more accurately,
>> and harmonic entropy and all the other discordance functions I've
>> seen show that the increase in discordance for a given amount of
>> mistuning is greatest for the simplest intervals.
>
> Did you ever track down what Partch said?
Can't find my copy of Genesis!
> Harmonic entropy can obviously be used to prove whatever you like. It
> also shows that the troughs get narrower the more complex the limit, so
> it takes a smaller mistuning before the putative ratio becomes irrelevant.
Yes, this is what Partch and the mathematics that underlies harmonic entropy say.
> It also shows that, if all intervals are equally mistuned, the more
> complex ones will have the highest entropy.
They had the highest entropy to begin with, and will get less on the margin.
> So they're the ones for
> which the mistuning is most problematic,
> and where you should start for
> optimization.
I've offered some arguments against this here, but the 13:8 vs. 14:13 example below seems to make it a bit moot . . .
>> Such distinctions may be important for *scales*, but for
>> temperaments, I'm perfectly happy not to have to worry about them.
>> Any reasons I shouldn't be?
>
> You're using temperaments to construct scales, aren't you?
Not necessarily -- they can be used directly to construct music, mapped say to a MicroZone or a Z-Board. * [with cont.] (Wayb.)
> If you don't
> want more than 18 notes in your scale, miracle is a contender in the
> 7-limit but not the 9-limit. And if you don't want errors more than 6
> cents, you can use meantone in the 7-limit but not the 9-limit.
What if you don't assume total octave-equivalence?
> There's
> no point in using intervals that are uselessly complex or inaccurate so
> you need to know whether you want the wider 9-limit when choosing the
> temperament.
In the Tenney-lattice view of harmony, 'limit' and chord structure are more fluid concepts.
>> Tenney weighting can be conceived of in other ways than you're
>> conceiving of it. For example, if you're looking at 13-limit, it
>> suffices to minimize the maximum weighted error of {13:8, 13:9,
>> 13:10, 13:11, 13:12, 14:13} or any such lattice-spanning set of
>> intervals. Here the weights are all very close (13:8 gets 1.12 times
>> the weight of 14:13), *all* the ratios are ratios of 13 so simpler
>> intervals are not directly weighted *at all*, and yet the TOP result
>> will still be the same as if you just used the primes. I think TOP is
>> far more robust than you're giving it credit for.
>
> It's really an average over all odd-limit minimaxes. And the higher you
> get probably the less difference it makes -- but then the harder the
> consonances will be to hear anyway. For the special case of 7 vs 9
> limit, which is the most important, it seems to make quite a difference.

Any examples?

> Oh, yes, I think the 9-limit calculation can be done by giving 3 a
> weight of a half.
Which calculation are you referring to, exactly?
> That places 9 on an equal footing with 5 and 7, and I
> think it works better than vaguely talking about the number of
> consonances.
Number of consonances?
> After all, how do you share a comma between 3:2 and 9:8?
I'm not sure why you're asking this at this point, or what it means . . .
> I still don't know how the 15-limit would work.

?shrug?

> I'm expecting the limit of this calculation as the odd limit tends to
> infinity will be the same as this Kees metric.
Can you clarify which calculation and which Kees metric you're talking about?
> And as the integer limit
> goes to infinity, it'll probably give the Tenney metric.
I haven't the foggiest idea what you mean. All I can say at this point is that n*d seems to me to be a better criterion to 'limit' than n (integer limit).
> But as the
> integers don't get much beyond 10, infinity isn't really an important
> consideration.
I wish I knew what it would be important for . . .
> Not that it does much harm either, because the minimax always depends on
> the most complex intervals, which will have roughly equal weighting.
> The same as octave specific metrics give roughly the same results as
> odd-limit style octave equivalent ones if you allow for octave stretching.
I still remain unclear on what you were doing with your octave- equivalent TOP stuff. Gene ended up interested in the topic later but you missed each other. I rediscovered your 'worst comma in 12-equal' when playing around with "orthogonalization" and now figure I must have misunderstood your code. You weren't searching an infinite number of commas, but just three, right?
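To put a number on the weighting claim in the passage Graham quotes near the top of this message, here is a quick check, taking the Tenney weight of a ratio n:d to be 1/log2(n*d) (my assumption; it reproduces the 1.12 figure Paul gives).

    from math import log2

    # The lattice-spanning set Paul lists: five ratios of 13, plus 14:13.
    intervals = [(13, 8), (13, 9), (13, 10), (13, 11), (13, 12), (14, 13)]

    # Assumed Tenney weighting: inverse of log2(n*d), so simpler ratios weigh more.
    weight = {f"{n}:{d}": 1 / log2(n * d) for n, d in intervals}
    for name, w in weight.items():
        print(name, round(w, 4))

    # Ratio of the extreme weights -- compare with the "1.12 times" in the quote.
    print(round(weight["13:8"] / weight["14:13"], 2))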

Message: 9697

Date: Mon, 02 Feb 2004 09:14:44

Subject: Re: Back to the 5-limit cutoff

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
>>> If I have a certain expectation of max error and a separate
>>> expectation of max complexity, but I can't measure them directly,
>>> I have to use Dave's formula, I wind up with more of whatever I
>>> happened to expect less of.
>>
>> More of whatever you happened to expect less of? What do you mean?
>> Can you explain with an example?
>
> If I'm bounding a list of temperaments with Dave's formula only,
> and I desire that error not exceed 10 cents rms and complexity not
> exceed 20 notes (and a and b somehow put cents and notes into the
> same units), what bound on Dave's formula should I use?
You'd pick a and b such that max(cents/10,complexity/20) < 1.
>>> Dave's function is thus a badness
>>> function, since it represents both error and complexity.
>>
>> A badness function has to take error and complexity as inputs, and
>> give a number as output.
>
> That's why the notion of badness is incompatible with the logical
> extreme of your suggestion.
Why? Max(cents/10,complexity/20) gives a number as output.
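A minimal sketch of the rectangular cutoff in this exchange, with Carl's 10-cent and 20-note bounds plugged in for a and b:

    def rectangular_badness(error_cents, complexity_notes):
        """max(cents/10, complexity/20): the contours are rectangles, and the
        cutoff 'badness < 1' is exactly 'error < 10 cents and complexity < 20 notes'."""
        return max(error_cents / 10, complexity_notes / 20)

    print(rectangular_badness(6.0, 18))   # 0.9  -> inside the cutoff
    print(rectangular_badness(12.0, 5))   # 1.2  -> outside: the error bound alone excludes it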

Message: 9698

Date: Mon, 02 Feb 2004 01:24:22

Subject: Re: Back to the 5-limit cutoff

From: Carl Lumma

>>>>> I'm arguing that, along this particular line of thinking,
>>>>> complexity does one thing to music, and error another, but
>>>>> there's no urgent reason more of one should limit your
>>>>> tolerance for the other . . . //
>>
>> If I'm bounding a list of temperaments with Dave's formula only,
>> and I desire that error not exceed 10 cents rms and complexity not
>> exceed 20 notes (and a and b somehow put cents and notes into the
>> same units), what bound on Dave's formula should I use?
>
>You'd pick a and b such that max(cents/10,complexity/20) < 1.
Ok, I walked into that one by giving fixed bounds on what I wanted. But re. your original suggestion (above), for any fixed version of the formula, more of one *increases* my tolerance for the other. -Carl

Message: 9699

Date: Mon, 02 Feb 2004 03:01:49

Subject: Re: The true top 32 in log-flat?

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Gene Ward Smith" <gwsmith@s...> 
wrote:
> --- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...>
> wrote:
>> There's something VERY CREEPY about my complexity values. I'm going
>> to have to accept this as *the* correct scaling for complexity (I'm
>> already convinced this is the correct formulation too, i.e. L_1
>> norm, for the time being) . . .
>
> That's great, Paul. So what's the scaling?
I'm using your formula from Yahoo groups: /tuning-math/message/8806 * [with cont.] but instead of "max", I'm using "sum" . . .