This is an Opt-In Archive. We would like to hear from you if you want your posts included. For the contact address see About this archive. All posts are copyright (c).

- Contents - Hide Contents - Home - Section 10

Previous Next

9000 9050 9100 9150 9200 9250 9300 9350 9400 9450 9500 9550 9600 9650 9700 9750 9800 9850 9900 9950

9700 - 9725 -





Message: 9725 - Contents - Hide Contents

Date: Mon, 02 Feb 2004 06:15:11

Subject: Re: Weighting

From: Dave Keenan

--- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:
> We don't yet know what harmonic entropy says about the tolerance of
> the tuning of individual intervals in a consonant chord. And in the
> past, complexity computations have often been geared around complete
> consonant chords. They're definitely an important consideration . . .
>
> For dyads, you have more of a point. As I mentioned before, TOP can
> be viewed as an optimization over *only* a set of equally-complex,
> fairly complex ratios, all containing the largest prime in your
> lattice as one term, and a number within a factor of sqrt(2) or so of
> it as the other. So as long as these ratios have a standard of error
> applied to them which keeps them "meaningful", you should have no
> objection. Otherwise, you had no business including that prime in
> your lattice in the first place, something I've used harmonic entropy
> to argue before. But clearly you are correct in implying we'll need
> to tighten our error tolerance when we do 13-limit "moats", etc. I
> think that's true but really just tells us that with the kinds of
> timbres and other musical factors that high-error low-limit timbres
> are useful for, you simply won't have access to any 13-limit effects
> -- from dyads alone.
Yes. I can agree to all that.
top of page bottom of page up down


Message: 9726 - Contents - Hide Contents

Date: Mon, 02 Feb 2004 05:16:03

Subject: Re: 114 7-limit temperaments

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
>>>>>> Such distinctions may be important for *scales*, but for
>>>>>> temperaments, I'm perfectly happy not to have to worry about
>>>>>> them. Any reasons I shouldn't be?
>>>>>
>>>>> You're using temperaments to construct scales, aren't you?
>>>>
>>>> Not necessarily -- they can be used directly to construct music,
>>>> mapped say to a MicroZone or a Z-Board.
>>>>
>>>> * [with cont.] (Wayb.)
>>>
>>> ??? Doing so creates a scale.
>>>
>>> -Carl
>>
>> A 108-tone scale?
>
> "Scale" is a term with a definition. I was simply using it. You
> meant (and thought I meant?) "diatonic", or "diatonic scale", maybe.
Did you mean a 108-tone scale?


Message: 9727 - Contents - Hide Contents

Date: Mon, 02 Feb 2004 06:17:30

Subject: Re: Back to the 5-limit cutoff

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> 
wrote:
> --- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:
>> --- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> >> wrote:
>>> However I'm coming around to thinking that the power of 2 (and the
>>> resultant meeting the axes at right angles) is far easier to justify
>>> than anything else.
>>
>> How so?
>
> It's what you said yesterday (I think).
>
> At some point (1 cent, 0.5 cent?) the error is so low and the
> complexity so high, that any further reduction in error is irrelevant
> and will not cause you to allow any further complexity. So it should
> be straight down to the complexity axis from there.
I don't buy this argument. You're in fact allowing a tiny, "irrelevant" reduction in error to warrant a tiny increase in complexity over the bulk of your curve. Thus you have a negative, finite slope. Why this allowance, or its implications, should be qualitatively different at the 1 cent or 0.5 cent point, and at a low complexity value, I'm not seeing. A tiny increase in allowed complexity for a tiny reduction in error makes sense everywhere on the curve if it makes sense anywhere, though the quantitative relationship between the two can certainly change somewhat from one end of the curve to the other.
> It also corresponds to mistuning-pain being the square of the error. > As you pointed out, that may have just been used by JdL as it is > convenient, but don't the bottoms of your HE notches look parabolic?
Yes, they do. But how can you justify squared complexity?
> To justify using the square of complexity (as I think Carl suggested) > we also have the fact that the number of intervals is O(comp**2).
No, it seems that it would be O(2*comp). You mean the number of dyads, some of which will be the same size as one another, found in a typical scale? Remember we're not really talking about scales . . . But anyway then the number of triads will be O(comp**3), etc. . . . Why assume 2 voices is most important?
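[The dyad/triad counting point can be made concrete with a quick sketch -- an editorial illustration, not part of the original exchange: among n pitches there are C(n,2) dyads and C(n,3) triads, so the counts grow roughly as the square and cube of the number of notes.]

```python
from math import comb

def interval_counts(n):
    """Count dyads and triads among n pitches (ignoring octave
    equivalence and duplicate interval sizes -- a rough growth model)."""
    return comb(n, 2), comb(n, 3)

# Dyads grow roughly as n**2/2, triads as n**3/6:
for n in (5, 10, 20):
    print(n, interval_counts(n))
```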


Message: 9728 - Contents - Hide Contents

Date: Mon, 02 Feb 2004 10:04:35

Subject: Re: Back to the 5-limit cutoff

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
>> Yes -- thus more of one has no effect on the tolerance for the
>> other -- it's either the bigger thing, making the tolerance for
>> the other irrelevant anyway, or it's the smaller thing, in which
>> case the tolerance for the other is a constant.
> If you make the bigger one bigger, you're also allowing the smaller > one to get bigger without knowing about it.
I'm afraid I can make no sense of this, no matter which way I think about it. Can you give an example?
> Or maybe I'm > misunderstanding "tolerance" here, or the setup of the procedure.
Did what I tried to clarify at the end of this post make sense to you: Yahoo groups: /tuning-math/message/9139 * [with cont.] ?
>>> You can tweak your >>> precious constants after the fact to fix it, but not before >>> the fact. >>
>> Isn't this true of any badness criterion? >
> Yes, that's why I said someone who wants to change his expectations > of error without changing his expectations of complexity shouldn't > use badness.
Hmm . . . changing expectations? Not sure quite what you mean by that . . .


Message: 9729 - Contents - Hide Contents

Date: Mon, 02 Feb 2004 05:17:00

Subject: Re: The true top 32 in log-flat?

From: Gene Ward Smith

--- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> 
wrote:

> I'm using your formula from
>
> Yahoo groups: /tuning-math/message/8806 * [with cont.]
>
> but instead of "max", I'm using "sum" . . .
So these cosmically great answers are coming from the L1 norm applied to the scaling we got from vals, where we divide by log2(p)'s. What does that mean, I wonder?
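[A reader's sketch of the "sum" versus "max" distinction, using a pure-octave 12-tone val rather than the TOP-optimized tunings actually discussed here, so the numbers are only illustrative: each prime's error in cents is weighted by 1/log2(p), and the two norms are the maximum or the sum of those weighted errors.]

```python
from math import log2

def weighted_errors(val, primes):
    """Cents error of each prime under a pure-octave equal temperament,
    weighted by 1/log2(p) -- the scaling mentioned above."""
    n = val[0]
    return [abs(1200 * v / n - 1200 * log2(p)) / log2(p)
            for v, p in zip(val, primes)]

errs = weighted_errors([12, 19, 28], [2, 3, 5])
print('L-inf (max):', max(errs))  # minimize this -> the "max" version
print('L1 (sum):   ', sum(errs))  # minimize this -> the "sum" version
```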


Message: 9730 - Contents - Hide Contents

Date: Mon, 02 Feb 2004 06:23:05

Subject: Re: Back to the 5-limit cutoff

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> 
wrote:
> --- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:
>> --- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> >> wrote:
>>> However I'm coming around to thinking that the power of 2 (and the
>>> resultant meeting the axes at right angles) is far easier to justify
>>> than anything else.
>>
>> How so?
>
> It's what you said yesterday (I think).
>
> At some point (1 cent, 0.5 cent?) the error is so low and the
> complexity so high, that any further reduction in error is irrelevant
> and will not cause you to allow any further complexity. So it should
> be straight down to the complexity axis from there.
>
> Similarly, at some point (10 notes per whatever, 5?) the complexity is
> so low and the error so high, that any further reduction will not
> cause you to allow any further error. So it should be straight across
> to the error axis from there.
Even if you accept this (which I don't), wouldn't it merely tell you that the power should be *at least 2* or something, rather than *exactly 2*?


Message: 9731 - Contents - Hide Contents

Date: Mon, 02 Feb 2004 10:35:43

Subject: finding a moat in 7-limit commas a bit tougher . . .

From: Paul Erlich

Yahoo groups: /tuning_files/files/Erlich/plana... * [with cont.] 




Message: 9732 - Contents - Hide Contents

Date: Mon, 02 Feb 2004 05:18:12

Subject: Re: Back to the 5-limit cutoff

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> 
wrote:
> --- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:
>> --- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> >> wrote:
>>> My favourite cutoff for 5-limit temperaments is now:
>>>
>>> (error/8.13)^2 + (complexity/30.01)^2 < 1
>>>
>>> This has an 8.5% moat, in the sense that we must go out to
>>>
>>> (error/8.13)^2 + (complexity/30.01)^2 < 1.085
>>>
>>> before we will include another temperament (semisixths).
>
> That was wrong. I forgot to square the radius.
>
> It has an 8.5% moat in the sense that we must go out to
>
> (error/8.13)**2 + (complexity/30.01)**2 < 1.085**2
>
> before we will include another temperament (semisixths).
>
> I'm trying to remember to use "**" for power now that "^" is wedge
> product.
>>> It includes the following 17 temperaments. >>
>> is this in order of (error/8.13)^2 + (complexity/30.01)^2 ? >
> Yes. Or if it isn't, it's pretty close to it. The last four are
> essentially _on_ the curve, so their order is irrelevant.
>
>>> meantone 80:81
>>> augmented 125:128
>>> porcupine 243:250
>>> diaschismic 2025:2048
>>> diminished 625:648
>>> magic 3072:3125
>>> blackwood 243:256
>>> kleismic 15552:15625
>>> pelogic 128:135
>>> 6561/6250 6250:6561
>>> quartafifths (tetracot) 19683:20000
>>> negri 16384:16875
>>> 2187/2048 2048:2187
>>> neutral thirds (dicot) 24:25
>>> superpythag 19683:20480
>>> schismic 32768:32805
>>> 3125/2916 2916:3125
>>>
>>> Does this leave out anybody's "must-have"s?
>>>
>>> Or include anybody's "no-way!"s?
>> I suspect you could find a better moat if you included semisixths
>> too -- but you might need to hit at least one axis at less than a
>> 90-degree angle. Then again, you might not.
> If you keep the power at 2, there is no better moat that includes
> semisixths. The best such only has a 6.7% moat.
>
> This is
>
> (error/8.04)**2 + (complexity/32.57)**2 = 1**2
>
> However if the power is reduced to 1.75 then we get a 9.3% moat
> outside of
>
> (error/8.25)**1.75 + (complexity/32.62)**1.75 = 1**1.75
>
> which adds only semisixths to the above list.
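[The power-p cutoff can be sketched as follows -- an editorial illustration using Dave's constants, and assuming the semisixths comma is 78732:78125:]

```python
from math import log2

def comma_measures(n, d):
    """Dave's scalings: complexity = lg2(n*d);
    error = comma size in cents divided by complexity."""
    complexity = log2(n * d)
    error = 1200 * log2(n / d) / complexity
    return error, complexity

def cutoff(error, complexity, e0, c0, p):
    """Left-hand side of (error/e0)**p + (complexity/c0)**p < 1."""
    return (error / e0) ** p + (complexity / c0) ** p

err, comp = comma_measures(78732, 78125)   # semisixths
v2 = cutoff(err, comp, 8.13, 30.01, 2)     # original power-2 cutoff
print(v2 ** 0.5)   # ~1.085: semisixths sits just outside the 8.5% moat
v175 = cutoff(err, comp, 8.25, 32.62, 1.75)
print(v175)        # essentially on the power-1.75 curve
```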
Nice. Any even wider moat if we allow wuerschmidt in too?
> However I'm coming around to thinking that the power of 2 (and the
> resultant meeting the axes at right angles) is far easier to justify
> than anything else.

How so?


Message: 9733 - Contents - Hide Contents

Date: Mon, 02 Feb 2004 06:52:36

Subject: Re: Back to the 5-limit cutoff

From: Dave Keenan

--- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:
> --- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> > wrote:
>> It's what you said yesterday (I think).
>>
>> At some point (1 cent, 0.5 cent?) the error is so low and the
>> complexity so high, that any further reduction in error is
>> irrelevant and will not cause you to allow any further complexity.
>> So it should be straight down to the complexity axis from there.
> I don't buy this argument. You're in fact allowing a tiny,
> "irrelevant" reduction in error to warrant a tiny increase in
> complexity over the bulk of your curve. Thus you have a negative,
> finite slope. Why this allowance, or its implications, should be
> qualitatively different at the 1 cent or 0.5 cent point, and at a
> low complexity value, I'm not seeing.
I thought everyone accepted the existence of a just noticeable difference, even if they can't agree on what it is.
> A tiny increase in allowed > complexity for a tiny reduction in error makes sense everywhere on > the curve if it makes sense anywhere, though the quantitative > relationship between the two can certainly change somewhat from one > end of the curve to the other.
I see what you mean. So you are arguing for a straight line. But I can just argue that what we want is not a straight line on the error versus complexity plot, but a straight line on the error-pain (mistuning-pain) versus complexity-pain plot, and that these are most simply modelled as the squares of the respective measures.
>> It also corresponds to mistuning-pain being the square of the error. >> As you pointed out, that may have just been used by JdL as it is >> convenient, but don't the bottoms of your HE notches look parabolic? >
> Yes, they do.

OK!

> But how can you justify squared complexity?
>> To justify using the square of complexity (as I think Carl
>> suggested) we also have the fact that the number of intervals is
>> O(comp**2).
Actually Carl wasn't so specific as to claim squared. He just claimed it was definitely worse than linear.
> No, it seems that it would be O(2*comp). You mean the number of > dyads, some of which will be the same size as one another, found in a > typical scale?
Yes, sorry, that's exactly what I meant.
> Remember we're not really talking about scales . . .
I don't buy this. But in any case, in another thread you've just unmasked an extraordinary relationship between a certain complexity measure and the size of typical scales. Is that measure still proportional to log(n*d) for 5-limit linear temps? I assume so.
> But anyway then the number of triads will be O(comp**3), etc. . . . > Why assume 2 voices is most important?
Sure, but this is arguing _further_ away from linear. So you should see quadratic as a compromise. Surely you don't want us to use exponential or factorial! But I suppose you can still argue that it might be to the power 1.75.


Message: 9735 - Contents - Hide Contents

Date: Mon, 02 Feb 2004 05:19:11

Subject: Re: Back to the 5-limit cutoff (was: 60 for Dave)

From: Gene Ward Smith

--- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> 
wrote:

>> Recall that any goofy, ad-hoc weirdness may need to be both
>> explained and justified.
> What did you have in mind?
I'm perfectly happy with badness, complexity and error, and suggest that if we don't use that, we not use something utterly loony, which all this talk of concavity makes me think might be contemplated.


Message: 9736 - Contents - Hide Contents

Date: Mon, 02 Feb 2004 06:59:07

Subject: Re: Back to the 5-limit cutoff

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> 
wrote:
> --- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:
>> --- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> >> wrote:
>>> It's what you said yesterday (I think).
>>>
>>> At some point (1 cent, 0.5 cent?) the error is so low and the
>>> complexity so high, that any further reduction in error is
>>> irrelevant and will not cause you to allow any further complexity.
>>> So it should be straight down to the complexity axis from there.
>> I don't buy this argument. You're in fact allowing a tiny,
>> "irrelevant" reduction in error to warrant a tiny increase in
>> complexity over the bulk of your curve. Thus you have a negative,
>> finite slope. Why this allowance, or its implications, should be
>> qualitatively different at the 1 cent or 0.5 cent point, and at a
>> low complexity value, I'm not seeing.
> I thought everyone accepted the existence of a just noticeable > difference, even if they can't agree on what it is.
Yes, but the just noticeable difference between two error values is about the same, whether the pair of similar error values is high or low.
>> A tiny increase in allowed >> complexity for a tiny reduction in error makes sense everywhere on >> the curve if it makes sense anywhere, though the quantitative >> relationship between the two can certainly change somewhat from one >> end of the curve to the other. >
> I see what you mean. So you are arguing for a straight line.
Or a curve.
> Sure, but this is arguing _further_ away from linear. So you should > see quadratic as a compromise. Surely you don't want us to use > exponential or factorial!
Fokker used 2^n for equal temperaments.
> But I suppose you can still argue that it might be to the power
> 1.75.

Yup.
Sure, why not let *both* exponents and


Message: 9737 - Contents - Hide Contents

Date: Mon, 02 Feb 2004 22:52:24

Subject: Re: Weighting (was: 114 7-limit temperaments

From: Graham Breed

Me:
>> Oh no, the simple intervals gain the most entropy. That's Paul's
>> argument for them being well tuned. After a while, the complex
>> intervals stop gaining entropy altogether, and even start losing
>> it. At that point I'd say they should be ignored altogether, rather
>> than included with a weighting that ensures they can never be
>> important. Some of the temperaments being bandied around here must
>> get way beyond that point.

Paul:
> Examples?
Well, meantone isn't 9-limit unique, so anything else will have mythical approximations by this dyadic harmonic entropy viewpoint. That includes dominant seventh and pajara.
>> Actually, any non-unique temperament will be a problem.
>
> ?
If a temperament isn't unique, two consonant (meaning we care about the tuning) intervals must get approximated to the same interval. That tempered interval can't be in the troughs of both ratios!

Graham


Message: 9738 - Contents - Hide Contents

Date: Mon, 02 Feb 2004 07:15:17

Subject: Re: Back to the 5-limit cutoff

From: Dave Keenan

--- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:

> Sure, why not let *both* exponents and

and what?


Message: 9739 - Contents - Hide Contents

Date: Mon, 02 Feb 2004 22:54:01

Subject: Re: Weighting (was: 114 7-limit temperaments

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Graham Breed <graham@m...> wrote:
> Me:
>>> Oh no, the simple intervals gain the most entropy. That's Paul's
>>> argument for them being well tuned. After a while, the complex
>>> intervals stop gaining entropy altogether, and even start losing
>>> it. At that point I'd say they should be ignored altogether, rather
>>> than included with a weighting that ensures they can never be
>>> important. Some of the temperaments being bandied around here must
>>> get way beyond that point.
>
> Paul:
>> Examples?
>
> Well, meantone isn't 9-limit unique, so anything else will have
> mythical approximations by this dyadic harmonic entropy viewpoint.
Dicot isn't 5-limit unique . . .
> That includes > dominant seventh and pajara
OK -- fortunately, the TOP weighting scheme is completely robust to whether or not more complex ratios n*d>c are included, as long as c>= the highest prime.
>>> Actually, any non-unique temperament will be a problem.
>>
>> ?
> If a temperament isn't unique, two consonant (meaning we care about the > tuning) intervals must get approximated to the same interval. That > tempered interval can't be in the troughs of both ratios!
Right. But a tempered chord using the interval twice *could* be in the trough of a triad containing both ratios . . .


Message: 9740 - Contents - Hide Contents

Date: Mon, 02 Feb 2004 03:44:49

Subject: Re: Back to the 5-limit cutoff

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> 
wrote:
> --- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:
>> You should know how to calculate them by now: log(n/d)*log(n*d)
>> and log(n*d) respectively.
>
> You mean
>
> log(n/d)/log(n*d)
>
> where n:d is the comma that vanishes.
>
> I prefer these scalings
>
> complexity = lg2(n*d)
>
> error = comma_size_in_cents / complexity
>       = 1200 * log(n/d) / log(n*d)
>
> My favourite cutoff for 5-limit temperaments is now:
>
> (error/8.13)^2 + (complexity/30.01)^2 < 1
>
> This has an 8.5% moat, in the sense that we must go out to
>
> (error/8.13)^2 + (complexity/30.01)^2 < 1.085
>
> before we will include another temperament (semisixths).
>
> Note that I haven't called it a "badness" function, but rather a
> "cutoff" function. So there's no need to see it as competing with
> log-flat badness. What it is competing with is log-flat badness plus
> cutoffs on error and complexity (or epimericity).
>
> Yes it's arbitrary, but at least it's not capricious, thanks to the
> existence of a reasonable-sized moat around it.
>
> It includes the following 17 temperaments.
is this in order of (error/8.13)^2 + (complexity/30.01)^2 ?
> meantone 80:81
> augmented 125:128
> porcupine 243:250
> diaschismic 2025:2048
> diminished 625:648
> magic 3072:3125
> blackwood 243:256
> kleismic 15552:15625
> pelogic 128:135
> 6561/6250 6250:6561
> quartafifths (tetracot) 19683:20000
> negri 16384:16875
> 2187/2048 2048:2187
> neutral thirds (dicot) 24:25
> superpythag 19683:20480
> schismic 32768:32805
> 3125/2916 2916:3125
>
> Does this leave out anybody's "must-have"s?
>
> Or include anybody's "no-way!"s?
I suspect you could find a better moat if you included semisixths too -- but you might need to hit at least one axis at less than a 90-degree angle. Then again, you might not.
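[For concreteness, the scalings quoted above can be sketched in Python -- the constants are Dave's, everything else is an editorial illustration:]

```python
from math import log2

def scalings(n, d):
    """complexity = lg2(n*d); error = comma size in cents / complexity,
    per the definitions quoted above."""
    complexity = log2(n * d)
    error = 1200 * log2(n / d) / complexity
    return error, complexity

# meantone, where 81:80 vanishes:
error, complexity = scalings(81, 80)
print(error, complexity)   # about 1.70 cents and 12.66
# well inside the cutoff:
print((error / 8.13) ** 2 + (complexity / 30.01) ** 2)  # about 0.22
```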


Message: 9741 - Contents - Hide Contents

Date: Mon, 02 Feb 2004 07:26:28

Subject: Re: Back to the 5-limit cutoff

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> 
wrote:

> Is that measure still > proportional to log(n*d) for 5-limit linear temps?
The measure would seem to be lg2(n*d)/(lg2(3)*lg2(5)) for linear temperaments. We've seen a lot of calculations are off by factors like exactly 2 and exactly 3 (for one example, in the "cross-check" post), and Gene seemed to say he understood this but didn't explain it. So hand-wavingly, I'll multiply by exactly 2 to get these results:

dicot - 5.0154
meantone - 6.8811
diaschismic - 11.947
kleismic - 15.139
schismic - 16.304 (close to improper 17)
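[Those figures can be checked directly -- a reader's sketch, with the commas taken from the list earlier in the thread:]

```python
from math import log2

def measure(n, d):
    """2 * lg2(n*d) / (lg2(3) * lg2(5)), including the hand-waving
    factor of exactly 2 from the post above."""
    return 2 * log2(n * d) / (log2(3) * log2(5))

for name, (n, d) in {
    'dicot': (25, 24),
    'meantone': (81, 80),
    'diaschismic': (2048, 2025),
    'kleismic': (15625, 15552),
    'schismic': (32805, 32768),
}.items():
    print(name, round(measure(n, d), 4))  # matches the figures above
```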


Message: 9742 - Contents - Hide Contents

Date: Mon, 02 Feb 2004 23:13:38

Subject: Re: Back to the 5-limit cutoff

From: Dave Keenan

--- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:
> --- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> > wrote:
>> Paul and Carl,
>>
>> I think you're both right. You're just talking about slightly
>> different things.
>>
>> As a function, max(x,y) "depends on" both x and y but at any given
>> point on the "curve" it only "depends on" one of them in the sense
>> that if you take the partial derivatives wrt x and y, one of them
>> will always be zero.
> That's what I was saying. So what was Carl saying?
I thought he was saying only the latter, and to him that disqualifies it as being considered as a "badness" function. We may disagree, but that's hardly important since no one is proposing to actually use anything like it. Let's drop it.


Message: 9743 - Contents - Hide Contents

Date: Mon, 02 Feb 2004 03:51:09

Subject: Re: Weighting

From: Dave Keenan

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
>>> Observation One: The extent and intensity of the influence of a >>> magnet is in inverse proportion to its ratio to 1.
Can you give us Partch's definition of "magnet"? And "in inverse proportion to its ratio to 1" makes no sense whatsoever. For a start, "to 1" is completely redundant. And it would make a lot more sense if it said "in inverse proportion to the size of the numbers in the ratio".
> Does this mean you don't have a copy of _Genesis_? Wait, let me > guess: Gene and Dave don't either. God almighty.
Correct. But neither of us is God almighty, and neither is Partch, although he is perhaps closer. ;-)
> Anyway, Partch is saying you can create a dissonance by using a
> complex interval that's close in size to a simple one. I translate
> his Observations into the present context thus...
>
> 'The size (in cents) of the 'field of attraction' of an interval
> is proportional to the size of the numbers in the ratio, and
> the dissonance (as opposed to discordance) becomes *greater* as
> it gets closer to the magnet.'
Since I don't know what he, or you, mean by a "magnet" I can only comment on the first part of this purported translation. And I find that it is utterly foreign to my experience, and I think yours. Did you accidentally drop an "inversely"? I.e. we can safely assume that Partch is only considering ratios in the superset of all his JI scales, so things like 201:301 do not arise. I.e. he's ignoring TOLERANCE and only considering COMPLEXITY. So surely he means that as the numbers in the ratio get larger, the width of the field of attraction gets smaller. To me, that's an argument for why TOP isn't necessarily what you want.


Message: 9744 - Contents - Hide Contents

Date: Mon, 02 Feb 2004 07:27:37

Subject: Re: Back to the 5-limit cutoff

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> 
wrote:
> --- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote: >
>> Sure, why not let *both* exponents and
>
> and what?
and both constants vary when optimizing a moat . . .


Message: 9745 - Contents - Hide Contents

Date: Mon, 02 Feb 2004 23:19:49

Subject: Re: finding a moat in 7-limit commas a bit tougher . . .

From: Dave Keenan

--- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:
> Yahoo groups: /tuning_files/files/Erlich/plana... * [with cont.]

Paul,
Please do another one of these without the labels, so we have a chance of eyeballing the moats.


Message: 9746 - Contents - Hide Contents

Date: Mon, 02 Feb 2004 03:54:27

Subject: Re: Back to the 5-limit cutoff

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx Carl Lumma <ekin@l...> wrote:
>>>> I'm arguing that, along this particular line of thinking,
>>>> complexity does one thing to music, and error another, but
>>>> there's no urgent reason more of one should limit your
>>>> tolerance for the other . . .
>>>
>>> Taking this to its logical extreme, wouldn't we abandon badness
>>> altogether?
>>>
>>> -Carl
>>
>> No, it would just become 'rectangular', as Dave noted.
> I didn't follow that.
Your badness function would become max(a*complexity, b*error), thus having rectangular contours.
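[A minimal sketch of what "rectangular contours" means here -- the constants a and b are hypothetical, as in the text:]

```python
def rect_badness(complexity, error, a=1.0, b=1.0):
    """max-style badness: each contour of constant badness is the
    corner of a rectangle, so along one edge the other argument can
    change freely without badness changing."""
    return max(a * complexity, b * error)

# Error varies while badness stays pinned at the complexity edge --
# the non-strict monotonicity Carl objects to:
print(rect_badness(3, 1), rect_badness(3, 2), rect_badness(3, 2.9))
```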
> Maybe you could explain how it explains > how someone who sees no relation between error and complexity > could possibly be interested in badness.
Dave and I abandoned badness in favor of a "moat".


Message: 9747 - Contents - Hide Contents

Date: Mon, 02 Feb 2004 05:28:05

Subject: Re: Back to the 5-limit cutoff (was: 60 for Dave)

From: Paul Erlich

--- In tuning-math@xxxxxxxxxxx.xxxx "Gene Ward Smith" <gwsmith@s...> 
wrote:
> --- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> > wrote: >
>>> Recall that any goofy, ad-hoc weirdness may need to be both
>>> explained and justified.
>> What did you have in mind? >
> I'm perfectly happy with badness, complexity and error, and suggest > that if we don't use that, we not use something utterly loony, which > all this talk of concavity makes me think might be contemplated.
Why would you think that? I think Dave and I have been putting down our thoughts with well-thought-out and reasonable explanations here.


Message: 9748 - Contents - Hide Contents

Date: Mon, 02 Feb 2004 08:28:24

Subject: Re: Back to the 5-limit cutoff

From: Dave Keenan

--- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> wrote:
> --- In tuning-math@xxxxxxxxxxx.xxxx "Dave Keenan" <d.keenan@b...> > wrote:
>> --- In tuning-math@xxxxxxxxxxx.xxxx "Paul Erlich" <perlich@a...> > wrote: >>
>>> Sure, why not let *both* exponents and
>>
>> and what?
> and both constants vary when optimizing a moat . . .
OK. But I'd like to limit the exponent to between 1 and 2 inclusive.


Message: 9749 - Contents - Hide Contents

Date: Mon, 02 Feb 2004 21:24:08

Subject: Re: Back to the 5-limit cutoff

From: Carl Lumma

>Sorry for my delay in entering this discussion, but I'm a Digest
>subscriber. I think Carl's objection is that he has an expectation that
>the badness function ought to be strictly monotonic in both its
>arguments. That is to say that an increase in error with constant
>complexity should result in an increase in badness. Likewise an
>increase in complexity with error held constant should result in an
>increase in badness. The use of max(x,y) violates that expectation
>where something like x+y does not. I'm sure Carl can correct me if
>I've misunderstood his posts. I hope this clarifies the confusion over
>Carl's objection.
Hi David! I didn't know you read tuning-math. Well, I was actually arguing that badness of any kind would be of no use to someone who considers error and complexity to be independent. -Carl