Tuning-Math Digests messages 2400 - 2424


Message: 2400

Date: Tue, 11 Dec 2001 07:30:38

Subject: Re: More lists

From: paulerlich

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:
> 
> > 1:3:5:9
> > 1:3:7:9
> > 1:3:9:11
> > 10:12:15:18
> > 12:14:18:21
> > 18:22:24:33
> > 
> > which contain only 11-limit consonant intervals, would be important 
> > to your music?
> 
> Indeed they are, but they are taken care of.

Shouldn't you weight them _twice_ if they're occurring twice as often? 
How can you justify equal-weighting?




Message: 2401

Date: Tue, 11 Dec 2001 07:58:41

Subject: Re: More lists

From: genewardsmith

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> Shouldn't you weight them _twice_ if they're occurring twice as often? 
> How can you justify equal-weighting?

I do weight them twice, more or less, depending on how you define 
this. 3 is weighted once as a 3, and then its error is doubled, so it 
is weighted 4 times as much again from 3 and 9 together; so 3 is 
weighted 5 times in all, or 9 is weighted 1.25 times, from one point 
of view. Then we double dip with 5/3, 5/9 etc. with similar effect.




Message: 2402

Date: Tue, 11 Dec 2001 08:06:23

Subject: Re: More lists

From: paulerlich

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:
> 
> > Shouldn't you weight them _twice_ if they're occurring twice as often? 
> > How can you justify equal-weighting?
> 
> I do weight them twice, more or less, depending on how you define 
> this. 3 is weighted once as a 3, and then its error is doubled, so it 
> is weighted 4 times as much again from 3 and 9 together; so 3 is 
> weighted 5 times in all, or 9 is weighted 1.25 times, from one point 
> of view. Then we double dip with 5/3, 5/9 etc. with similar effect.

I don't see it that way. 9:1 is an interval of its own and needs to 
be weighted independently of whether any 3:1s or 9:3s are actually 
used. 5/3 and 5/9 could be seen as weighting 5 commensurately more, but 
I don't buy the "double dip" bit one bit!

These are conclusions I've reached after years of playing with 
tunings with large errors and comparing them and thinking hard about 
this problem.




Message: 2403

Date: Tue, 11 Dec 2001 12:55 +0

Subject: Re: Wedge products

From: graham@xxxxxxxxxx.xx.xx

In-Reply-To: <9v36kb+mvhf@xxxxxxx.xxx>
Gene wrote:

> The 5-limit wedge product of two ets is the corresponding comma, and 
> of two commas the corresponding et. We've been doing this all along; 
> you can consider it to be a cross-product, or a matrix determinant.
> The wedgie of a 5-limit linear temperament would reduce the cross-
> product until it was not a power (getting rid of any torsion) and 
> standardize it to be greater than one.

The thing about the 5-limit is that the complements can have the same 
dimensions.  Here's where the problem arises:

>>> comma = temper.WedgableRatio(81,80)
>>> comma.octaveEquivalent()
{(2,): -1, (1,): 4}
>>> comma.octaveEquivalent().complement()
{(2,): 4, (1,): 1}

That shows the octave equivalent part of the syntonic comma is not the 
meantone mapping.  You need to take the complement.  But in your system, 
which ignores this distinction, how do you know that the vector as it 
stands isn't right?  You don't get any clues from the dimensions, because 
they're the same.  Do you have an algorithm that would give (0 4 1) as the 
invariant of (-4 4 -1) wedged with (1 0 0)?  I don't, so I explicitly take 
the complement in the code.


Me:
> > But before you said the definition of wedge products was
> > 
> > ei^ej = -ej^ei
> > 
> > nothing about keeping zero elements.  

Gene:
> That's the definition, but it's an element in a vector space. You 
> can't wish away a basis element ei^ej simply because it has a 
> coefficient of zero; that isn't the way linear algebra works. A zero 
> vector is not the same as the number zero.

So how about ei^ei, can I wish that away?

Seriously, until we get to complements and listifying, it doesn't make any 
difference if those dimensions go.  At least, not the way I wrote the code 
it doesn't.  I can always listify the generator and ET mappings.  The only 
problem would be if a temperament-defining wedgie didn't depend on a 
particular prime interval, in which case I don't think it would define an 
n-limit temperament.

> > > > If I could enumerate over all pairs, I could fix that.  But that 
> > > > still leaves the general problem of all combinations of N items 
> > > > taken from a set.  I'd prefer to get rid of zero elements 
> > > > altogether.
> 
> Why not simply order a list of size n choose m, and if one entry has 
> the value zero, so be it? A function which goes from combinations of 
> the first n integers, taken m at a time, to unique integers in the 
> range from 1 to n choose m might help.

Yes, it's getting the combinations of the first n integers that's the 
problem.  But I'm sure I can solve it if I sit down and think about it.

To make you happy, I'll do that.

>>> def combinations(number, input):
...   if number==1:
...     return [[x] for x in input]
...   output = []
...   for i in range(len(input)-number+1):
...     for element in combinations(number-1, input[i+1:]):
...       output.append([input[i]]+element)
...   return output

(Hopefully the indentation will be okay on that, at least in View Source)
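
For example, taking pairs from a four-element list should give:

>>> combinations(2, [0, 1, 2, 3])
[[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]]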

Nonexistent entries are already the same as entries with the value zero in 
that w[1,2] will be zero if w doesn't have an element (1,2).  For example

>>> comma[0,]
-4
>>> comma[1,]
4
>>> comma[2,]
-1
>>> comma[3,]
0


So the 5-limit comma I used above is already a 7-limit interval.  What I'd 
like to change is for w[1,2]=0 to remove the element (1,2) instead of 
assigning zero to it.
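
A minimal sketch of that behaviour, assuming a plain dict-backed class 
(the class name is made up and this isn't the real WedgableRatio):

class SparseWedge(dict):
    # hypothetical sketch: coefficients stored sparsely, zeros dropped
    def __getitem__(self, key):
        # a missing basis element reads as a zero coefficient
        return self.get(key, 0)
    def __setitem__(self, key, value):
        if value == 0:
            # assigning zero removes the element instead of storing it
            self.pop(key, None)
        else:
            dict.__setitem__(self, key, value)

>>> w = SparseWedge()
>>> w[1, 2] = 5
>>> w[1, 2] = 0
>>> (1, 2) in w
False
>>> w[1, 2]
0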


                  Graham




Message: 2404

Date: Tue, 11 Dec 2001 12:55 +0

Subject: Re: Systems of Diophantine equations

From: graham@xxxxxxxxxx.xx.xx

In-Reply-To: <9v35h4+amle@xxxxxxx.xxx>
In article <9v35h4+amle@xxxxxxx.xxx>, genewardsmith@xxxx.xxx 
(genewardsmith) wrote:

> That's a fancy new method for an old classic problem, and presumably 
> becomes interesting mostly when the number of simultaneous 
> Diophantine equations is high. Can you solve a system of linear 
> equations over the rationals in Python?

With Numeric, you can solve for floating point numbers (using a wrapper 
around the Fortran LAPACK) but there's no support for integers or 
rationals.
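
Exact elimination over the rationals can at least be sketched in pure 
Python, using the Fraction type from the standard library's fractions 
module (solve_rational is just an illustrative name, and the sketch 
assumes a nonsingular square system):

from fractions import Fraction

def solve_rational(rows, rhs):
    # solve rows * x = rhs exactly over the rationals
    n = len(rows)
    # build an augmented matrix of Fractions
    m = [[Fraction(v) for v in row] + [Fraction(c)]
         for row, c in zip(rows, rhs)]
    for col in range(n):
        # pick a row with a nonzero pivot in this column
        pivot = next(r for r in range(col, n) if m[r][col] != 0)
        m[col], m[pivot] = m[pivot], m[col]
        # clear the entries below the pivot
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            m[r] = [a - factor * b for a, b in zip(m[r], m[col])]
    # back-substitute
    x = [Fraction(0)] * n
    for r in reversed(range(n)):
        partial = sum(m[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (m[r][n] - partial) / m[r][r]
    return x

>>> solve_rational([[2, 1], [1, 3]], [3, 4])
[Fraction(1, 1), Fraction(1, 1)]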


                    Graham




Message: 2405

Date: Tue, 11 Dec 2001 12:55 +0

Subject: Re: More lists

From: graham@xxxxxxxxxx.xx.xx

In-Reply-To: <9v3e5l+102l7@xxxxxxx.xxx>
Paul:
> > > Sorry, I have to disagree. Graham is specifically considering the 
> > > harmonic entity that consists of the first N odd numbers, in a 
> > > chord.

I'm considering a set of consonant intervals with equal weighting.  9:3 
and 3:1 are the same interval.  Perhaps I should improve my minimax 
algorithm so the whole debate becomes moot.

Gene:
> > That may be what Graham was doing, but it wasn't what I was doing; I 
> > seldom go beyond four parts.

Paul:
> Even if you don't, don't you think chords like
> 
> 1:3:5:9
> 1:3:7:9
> 1:3:9:11
> 10:12:15:18
> 12:14:18:21
> 18:22:24:33
> 
> which contain only 11-limit consonant intervals, would be important 
> to your music?

Yes, but so is 3:4:5:6 which involves both 2:3 and 3:4.  And 1/1:11/9:3/2, 
which has two neutral thirds (and so far I've not used 11-limit 
temperaments in which this doesn't work) so should they be weighted 
double?  My experience so far of Miracle is that the "wolf fourth" of 4 
secors is also important, but I don't have a rational approximation (21:16 
isn't quite right).  It may be that chords of 0-2-4-6-8 secors become 
important, in which case 8:7 should be weighted three times as high as 
12:7 and twice as high as 3:2.

I'd much rather stay with the simple rule that all consonant intervals are 
weighted equally until we can come up with an improved, subjective 
weighting.  For that, I'm thinking of taking Partch at his word, weighting 
more complex intervals higher.  But Paul was talking about a Tenney 
metric, which would have the opposite effect.  So it looks like we're not 
going to agree on that one.


                            Graham




Message: 2406

Date: Wed, 12 Dec 2001 02:21:53

Subject: Re: More lists

From: paulerlich

--- In tuning-math@y..., graham@m... wrote:

> Yes, but so is 3:4:5:6 which involves both 2:3 and 3:4.

But you can do that with _any_ interval.

> And 1/1:11/9:3/2, 
> which has two neutral thirds (and so far I've not used 11-limit 
> temperaments in which this doesn't work) so should they be weighted 
> double?

Only if that were your target harmony. I thought hexads were your 
target harmony.

> My experience so far of Miracle is that the "wolf fourth" of 4 
> secors is also important, but I don't have a rational approximation 
> (21:16 isn't quite right).

What do you mean it's important?

> It may be that chords of 0-2-4-6-8 secors become 
> important, in which case 8:7 should be weighted three times as high 
> as 12:7 and twice as high as 3:2.

If that was the harmony you were targeting, sure.

> I'd much rather stay with the simple rule that all consonant 
> intervals are weighted equally until we can come up with an improved, 
> subjective weighting.  For that, I'm thinking of taking Partch at his 
> word, weighting more complex intervals higher.  But Paul was talking 
> about a Tenney metric, which would have the opposite effect.  So it 
> looks like we're not going to agree on that one.

If you don't agree with me that you're targeting the hexad (I thought 
you had said as much at one point, when I asked you to consider 
running some lists for other saturated chords), then maybe we better 
go to minimax (of course, we'll still have a problem in cases like 
paultone, where the maximum error is fixed -- what do we do then, go 
to 2nd-worst error?).




Message: 2407

Date: Wed, 12 Dec 2001 20:20:39

Subject: Re: Badness with gentle rolloff

From: clumma

>I now understand that Gene's logarithmically flat distribution
>is a very important starting point.

Wow- how did that happen?  One heckuva switch from the last
post I can find in this thread.  Not that I understand what
any of this is about.  Badness??

-Carl




Message: 2408

Date: Wed, 12 Dec 2001 21:04:37

Subject: Re: More lists

From: paulerlich

--- In tuning-math@y..., graham@m... wrote:

> so if you'd like to check this should be 
> Paultone minimax:
> 
> 
> 2/11, 106.8 cent generator

That's clearly wrong, as the 7:4 is off by 17.5 cents!

> basis:
> (0.5, 0.089035952556318909)
> 
> mapping by period and generator:
> [(2, 0), (3, 1), (5, -2), (6, -2)]
> 
> mapping by steps:
> [(12, 10), (19, 16), (28, 23), (34, 28)]
> 
> highest interval width: 3
> complexity measure: 6  (8 for smallest MOS)
> highest error: 0.014573  (17.488 cents)
> unique

I don't think it should count as unique since

> 7:5 =~ 10:7




Message: 2409

Date: Wed, 12 Dec 2001 04:13:51

Subject: Re: More lists

From: dkeenanuqnetau

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:
> If you don't agree with me that you're targeting the hexad (I thought 
> you had said as much at one point, when I asked you to consider 
> running some lists for other saturated chords), then maybe we better 
> go to minimax (of course, we'll still have a problem in cases like 
> paultone, where the maximum error is fixed -- what do we do then, go 
> to 2nd-worst error?).

Yes. That's what I do. You still give the error as the worst one, but 
you give the optimum generator based on the worst error that actually 
_depends_ on the generator (as opposed to being fixed because it only 
depends on the period).
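
In code, that rule might look something like this sketch (the names and 
the brute-force search over candidate generators are purely 
illustrative):

def optimum_generator(candidates, dependent_errors, fixed_errors):
    # candidates:       iterable of trial generator sizes, e.g. in cents
    # dependent_errors: function mapping a generator to the list of
    #                   absolute errors that depend on that generator
    # fixed_errors:     absolute errors set by the period alone
    best = min(candidates, key=lambda g: max(dependent_errors(g)))
    # optimise over the generator-dependent errors only, but still
    # report the overall worst error, fixed ones included
    worst = max(list(dependent_errors(best)) + list(fixed_errors))
    return best, worst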




Message: 2410

Date: Wed, 12 Dec 2001 21:12:22

Subject: Re: Badness with gentle rolloff

From: paulerlich

--- In tuning-math@y..., graham@m... wrote:
> In-Reply-To: <9v741u+62fl@e...>
> Gene wrote:
> 
> > This seems to be a big improvement, though the correct power is 
> > steps^2 for the 7 or 9 limit, and steps^(5/3) for the 11-limit. It 
> > still seems to me that a rolloff is just as arbitrary as a sharp 
> > cutoff, and disguises the fact that this is what it is, so tastes 
> > will differ about whether it is a good idea.
> 
> A sharp cutoff won't be what most people want.  For example, in 
> looking for an 11-limit temperament I might have thought, well, I 
> don't want more than 24 notes in the scale because then it can't be 
> mapped to two keyboard octaves.  So, if I want three identical, 
> transposable hexads in that scale I need to set a complexity cutoff 
> at 21.  But I'd still be very pleased if the program throws up a red 
> hot temperament with a complexity of 22, because it was only an 
> arbitrary criterion I was applying.

That's why I suggested that we place our sharp cutoffs where we find 
some big gaps -- typically right after the "capstone" temperaments.

> I suggest the flat badness be calculated first, and then shelving 
> functions applied for worst error and complexity.  The advantage of a 
> sharp cutoff would be that you could store the temperaments in a 
> database, to save repetitive calculations, and get the list from a 
> single SQL statement, like
> 
> SELECT * FROM Scales WHERE complexity<25 AND minimax<10.0 ORDER BY 
> goodness
> 
> but you'd have to go to all the trouble of setting up a database.
> 
> 
>                                 Graham

Well, database or no, I still like the idea of using a flat badness 
measure, since it doesn't automatically have to be modified just 
because we decide to look outside our original range.




Message: 2411

Date: Wed, 12 Dec 2001 08:17:41

Subject: Temperaments from wedgies

From: genewardsmith

I decided to start from arbitrary 7-limit wedgies, and investigate the 
conditions that lead to a good temperament. To start with, for a 
wedgie [u1,u2,u3,v1,v2,v3] we should have u1^2+u2^2+u3^2!=0,
v1^2+v2^2+v3^2!=0, and u1*v1+u2*v2+u3*v3=0, but this is far from 
enough to give a good temperament, though it will define one.

In requiring the error to be small, I was led to the condition that 
the wedgie be the product of [0,u1,u2,u3] and [g(2),g(3),g(5),g(7)] 
for a good et val g. Then to get the number of generator steps small, 
one may want to have a factor of g(2) one can divide out, and so we are 
led to u1 = g(3)*t mod m, u2 = g(5)*t mod m, u3 = g(7)*t mod m,
where m divides g(2) and gcd(t,m)=1. This, however, is just a form of 
finding good temperaments from the et-and-generator system, so I 
conclude that if done correctly, this method should be exhaustive for 
good temperaments.
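
For what it's worth, the starting conditions are easy to state in code; 
a trivial sketch (the function name is made up):

def could_be_wedgie(w):
    # necessary, but as noted far from sufficient, conditions for a
    # 7-limit wedgie [u1,u2,u3,v1,v2,v3]
    u1, u2, u3, v1, v2, v3 = w
    return ((u1, u2, u3) != (0, 0, 0)          # u1^2+u2^2+u3^2 != 0
            and (v1, v2, v3) != (0, 0, 0)      # v1^2+v2^2+v3^2 != 0
            and u1*v1 + u2*v2 + u3*v3 == 0)    # orthogonality condition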




Message: 2412

Date: Wed, 12 Dec 2001 22:44:37

Subject: Re: Badness with gentle rolloff

From: paulerlich

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:
> --- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> > --- In tuning-math@y..., David C Keenan <d.keenan@u...> wrote:
> > 
> > > steps^(4/3) * exp((cents/k)^r)
> > 
> > This seems to be a big improvement, though the correct power is 
> > steps^2 for the 7 or 9 limit, and steps^(5/3) for the 11-limit.
> 
> Ok. Sorry. But I find I don't really understand _log_ flat. I only 
> understand flat. When I plot steps*cents against steps I can _see_ 
> that this is flat. I expect steps*cents to be flat irrespective of the 
> odd-limit. If I change the steps axis to logarithmic it's still gonna 
> look flat, and anything with a higher power of steps is gonna have a 
> general upward trend to the right.
> 
> So please tell me again how I can tell if something is log flat, in 
> such a way that I can check it empirically in my spreadsheet. And 
> please tell me why a log-flat distribution should be of more interest 
> than a simply flat one.

My (incomplete) understanding is that flatness is flatness. It's what 
you achieve when you hit the critical exponent. The "logarithmic" 
character that we see is simply a by-product of the criticality.

I look forward to a fuller and more accurate reply from Gene.




Message: 2413

Date: Wed, 12 Dec 2001 08:21:18

Subject: Re: Badness with gentle rolloff

From: genewardsmith

--- In tuning-math@y..., David C Keenan <d.keenan@u...> wrote:

> steps^(4/3) * exp((cents/k)^r)

This seems to be a big improvement, though the correct power is 
steps^2 for the 7 or 9 limit, and steps^(5/3) for the 11-limit. It 
still seems to me that a rolloff is just as arbitrary as a sharp 
cutoff, and disguises the fact that this is what it is, so tastes will 
differ about whether it is a good idea.




Message: 2414

Date: Wed, 12 Dec 2001 23:11:19

Subject: Re: Badness with gentle rolloff

From: paulerlich

--- In tuning-math@y..., "clumma" <carl@l...> wrote:

> Badness??

The "badness" of a linear temperament is a function of two 
components -- how many generators it takes to get the consonant 
intervals, and how large the deviations from JI are in the consonant 
intervals. Total "badness" is therefore some function of these two 
components.




Message: 2415

Date: Wed, 12 Dec 2001 10:47 +0

Subject: Re: More lists

From: graham@xxxxxxxxxx.xx.xx

In-Reply-To: <9v6f01+1bdh@xxxxxxx.xxx>
Paul wrote:

> If you don't agree with me that you're targeting the hexad (I thought 
> you had said as much at one point, when I asked you to consider 
> running some lists for other saturated chords), then maybe we better 
> go to minimax (of course, we'll still have a problem in cases like 
> paultone, where the maximum error is fixed -- what do we do then, go 
> to 2nd-worst error?).

The hexads are targeted by the complexity formula.  But that's because 
it's the simplest such measure, not because I actually think they're 
musically useful.  I'm coming to the opinion that anything over a 7-limit 
tetrad is quite ugly, but some smaller 11-limit chords (and some chords 
with the four-secor wolf) are strikingly beautiful, if they're tuned 
right.  So Blackjack is a good 11-limit scale although it doesn't contain 
any hexads.

I've always preferred minimax as a measure, but currently speed is the 
most important factor.  The RMS optimum can be calculated much faster, and 
although I can improve the minimax algorithm I don't think it can be made 
as fast.

Scales such as Paultone can be handled by excluding all intervals that 
don't depend on the generator.  But the value used for rankings still has 
to include all intervals.  My program should be doing this, but I'm not 
sure if it is working correctly, so if you'd like to check, this should be 
Paultone minimax:


2/11, 106.8 cent generator

basis:
(0.5, 0.089035952556318909)

mapping by period and generator:
[(2, 0), (3, 1), (5, -2), (6, -2)]

mapping by steps:
[(12, 10), (19, 16), (28, 23), (34, 28)]

highest interval width: 3
complexity measure: 6  (8 for smallest MOS)
highest error: 0.014573  (17.488 cents)
unique

9:7 =~ 32:25 =~ 64:49
8:7 =~ 9:8
4:3 =~ 21:16
35:32 =~ 10:9

consistent with: 10, 12, 22


Hmm, why isn't 7:5 =~ 10:7 on that list?


                          Graham




Message: 2416

Date: Wed, 12 Dec 2001 23:34:52

Subject: Re: Badness with gentle rolloff

From: dkeenanuqnetau

--- In tuning-math@y..., "clumma" <carl@l...> wrote:
> >I now understand that Gene's logarithmically flat distribution
> >is a very important starting point.
> 
> Wow- how did that happen?  One heckuva switch from the last
> post I can find in this thread.  

Hey. I had a coupla days to cool off. :-) I'm only saying it's a 
starting point, and I probably should have written "I now understand 
that Gene's flat distribution is a very important starting point", 
since I now realise I don't really understand the 
"logarithmically-flat" business. 

While I'm waiting for clarification on that, I should point out that 
once we go to the rolled-off version the power that "steps" is raised 
to is not an independent parameter (it can be subsumed in k), so it 
doesn't really matter where we start from.

  steps^p *  exp((cents/k)^r)
= ( steps * (exp((cents/k)^r))^(1/p)      )^p
= ( steps *  exp((cents/k)^r /p)          )^p
= ( steps *  exp((cents/(k * p^(1/r)))^r) )^p

Now raising badness to a positive power doesn't affect the ranking so 
we can just use

steps * exp((cents/(k * p^(1/r)))^r)

and we can simply treat k * p^(1/r) as a new version of k.

So we have

steps * exp((cents/k)^r)

and my old k of 2.1 cents becomes a new one of 3.7 cents, so my 
proposed badness measure becomes

steps * exp(sqrt(cents/3.7))
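
A quick numerical check that the rescaled form ranks things the same way 
(the (steps, cents) pairs below are placeholder values, not real ET 
errors):

from math import exp

def badness_old(steps, cents, k=2.1, p=4/3, r=0.5):
    # original form: steps^p * exp((cents/k)^r)
    return steps**p * exp((cents / k)**r)

def badness_new(steps, cents, k=3.7, r=0.5):
    # rescaled form with p absorbed into k: k_new = k_old * p^(1/r)
    return steps * exp((cents / k)**r)

ets = [(12, 18.6), (19, 12.7), (31, 4.0), (72, 1.4)]  # placeholder data
print(sorted(ets, key=lambda sc: badness_old(*sc)) ==
      sorted(ets, key=lambda sc: badness_new(*sc)))    # prints True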

> Not that I understand what
> any of this is about.  Badness??

As has been done many times before, we are looking for a single figure 
that combines the error in cents with the number of notes in the 
tuning to give a single figure-of-demerit with which to rank tunings 
for the purpose of deciding what to leave out of a published list or 
catalog for which limited space is available. One simply lowers the 
maximum badness bar until the right number of tunings get under it.

It is ultimately aimed at automatically generated linear temperaments. 
But we are using 7-limit ETs as a trial run since we have much more 
collective experience of their subjective badness to draw on.

So "steps" is the number of divisions in the octave and "cents" is the 
7-limit rms error.

I understand that Paul and Gene favour a badness metric for these that 
looks like this

steps^2 * cents * if(min<=steps<=max, 1, infinity)

I think they have agreed to set min = 1 and max will correspond to 
some locally-good ET, but I don't know how they will decide exactly 
which one. This sharp cutoff in number of steps seems entirely 
arbitrary to me and (as Graham pointed out) doesn't correspond to the 
human experience of these things. I would rather use a gentle rolloff 
that at least makes some attempt to represent the collective 
subjective experience of people on the tuning lists.




Message: 2417

Date: Wed, 12 Dec 2001 10:47 +0

Subject: Re: Badness with gentle rolloff

From: graham@xxxxxxxxxx.xx.xx

In-Reply-To: <9v741u+62fl@xxxxxxx.xxx>
Gene wrote:

> This seems to be a big improvement, though the correct power is 
> steps^2 for the 7 or 9 limit, and steps^(5/3) for the 11-limit. It 
> still seems to me that a rolloff is just as arbitrary as a sharp 
> cutoff, and disguises the fact that this is what it is, so tastes will 
> differ about whether it is a good idea.

A sharp cutoff won't be what most people want.  For example, in looking 
for an 11-limit temperament I might have thought, well, I don't want more 
than 24 notes in the scale because then it can't be mapped to two keyboard 
octaves.  So, if I want three identical, transposable hexads in that scale 
I need to set a complexity cutoff at 21.  But I'd still be very pleased if 
the program throws up a red hot temperament with a complexity of 22, 
because it was only an arbitrary criterion I was applying.

I suggest the flat badness be calculated first, and then shelving 
functions applied for worst error and complexity.  The advantage of a 
sharp cutoff would be that you could store the temperaments in a database, 
to save repetitive calculations, and get the list from a single SQL 
statement, like

SELECT * FROM Scales WHERE complexity<25 AND minimax<10.0 ORDER BY 
goodness

but you'd have to go to all the trouble of setting up a database.


                                Graham




Message: 2418

Date: Wed, 12 Dec 2001 23:56:39

Subject: Re: Badness with gentle rolloff

From: paulerlich

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> But we are using 7-limit ETs as a trial run since we have much more 
> collective experience of their subjective badness to draw on.
> 
> So "steps" is the number of divisions in the octave and "cents" is 
the 
> 7-limit rms error.
> 
> I understand that Paul and Gene favour a badness metric for these 
that 
> looks like this
> 
> steps^2 * cents * if(min<=steps<=max, 1, infinity)

The exponent would be 4/3, not 2, for ETs.




Message: 2419

Date: Wed, 12 Dec 2001 13:26:14

Subject: Re: Badness with gentle rolloff

From: dkeenanuqnetau

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., David C Keenan <d.keenan@u...> wrote:
> 
> > steps^(4/3) * exp((cents/k)^r)
> 
> This seems to be a big improvement, though the correct power is 
> steps^2 for the 7 or 9 limit, and steps^(5/3) for the 11-limit.

Ok. Sorry. But I find I don't really understand _log_ flat. I only 
understand flat. When I plot steps*cents against steps I can _see_ 
that this is flat. I expect steps*cents to be flat irrespective of the 
odd-limit. If I change the steps axis to logarithmic it's still gonna 
look flat, and anything with a higher power of steps is gonna have a 
general upward trend to the right.

So please tell me again how I can tell if something is log flat, in 
such a way that I can check it empirically in my spreadsheet. And 
please tell me why a log-flat distribution should be of more interest 
than a simply flat one.




Message: 2420

Date: Wed, 12 Dec 2001 15:52 +0

Subject: Temperament calculations online

From: graham@xxxxxxxxxx.xx.xx

<temperament finding scripts *>

Early days yet, but it is working.


           Graham




Message: 2421

Date: Thu, 13 Dec 2001 16:25 +0

Subject: Re: A hidden message (was: Re: Badness with gentle rolloff)

From: graham@xxxxxxxxxx.xx.xx

In-Reply-To: <9vaje4+gncf@xxxxxxx.xxx>
paulerlich wrote:

> Take a look at the two pictures in
> 
> Yahoo groups: /tuning-math/files/Paul/ *
> 
> (I didn't enforce consistency, but we're only focusing on 
> the "goodest" ones, which are consistent anyway).
> 
> In both of them, you can spot the same periodicity, occurring 60 times 
> with regular frequency among the first 100,000 ETs.
> 
> Thus we see a frequency of about 1670 in the wave, agreeing closely 
> with the previous estimate?
> 
> What the heck is going on here? Riemann zetafunction weirdness?

I don't know either, but I'll register an interest in finding out.  I've 
thought for a while that the set of consistent ETs may have properties 
similar to the set of prime numbers.  It really gets down to details of 
the distribution of rational numbers.  One thing I noticed is that you 
seem to get roughly the same number of consistent ETs within any linear 
range.  Is that correct?

As to these diagrams, one thing I notice is that the resolution is way 
below the number of ETs being considered.  So could this be some sort of 
aliasing problem?  Best way of checking is to be sure each bin contains 
the same *number* of ETs, not merely that the x axis is divided into 
near-enough equal parts.
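
Something along these lines would give equal-count bins; a throwaway 
sketch, with the function name picked arbitrarily:

def equal_count_bins(values, per_bin):
    # split the sorted values into bins that each hold the same number
    # of entries, rather than bins of equal width along the x axis
    ordered = sorted(values)
    return [ordered[i:i + per_bin]
            for i in range(0, len(ordered), per_bin)]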


                       Graham




Message: 2422

Date: Thu, 13 Dec 2001 19:46:45

Subject: A hidden message (was: Re: Badness with gentle rolloff)

From: paulerlich

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:

> Result: one big giant spike right at 1665-1666.

Actually, the Nyquist resolution (?) prevents me from saying whether 
it's 1659.12658227848 (the nominal peak) or something plus or minus a 
dozen or so. But clearly my visual estimate of 1664 has been 
corroborated.




Message: 2423

Date: Thu, 13 Dec 2001 16:30:51

Subject: the 75 "best" 7-limit ETs below 100,000-tET

From: paulerlich

Out of the consistent ones:

          rank         ET      "badness"
            1          171      0.20233
            2        18355      0.25034
            3           31      0.30449
            4        84814      0.33406
            5            4      0.33625
            6          270      0.34265
            7           99      0.35282
            8         3125      0.35381
            9          441      0.37767
           10         6691      0.42354
           11           72      0.44575
           12         3566      0.45779
           13           10      0.46883
           14            5      0.47022
           15        11664      0.48721
           16           41      0.48793
           17           12      0.49554
           18          342      0.50984
           19        21480      0.51987
           20           68      0.52538
           21         3395      0.53654
           22           19      0.53668
           23           15      0.53966
           24           27      0.54717
           25          140       0.5502
           26            9      0.55191
           27            6      0.55842
           28           22      0.56091

Up through this point in the list, most of the results tend to 
be "small" ETs . . . hereafter, they don't.

           29         1578      0.56096
           30         6520      0.56344
           31        14789      0.56416
           32        39835       0.5856
           33          612      0.58643
           34        33144      0.59123
           35          202      0.59316
           36          130      0.59628
           37         1547      0.59746
           38         5144       0.5982
           39        11835      0.62134
           40        63334      0.63002
           41        36710      0.63082
           42         2954      0.63236
           43           53      0.63451
           44         2019      0.63526
           45         3296       0.6464
           46        44979      0.65123
           47         8269      0.65424
           48        51499      0.67526
           49          301      0.68163
           50         1376      0.68197
           51        51670      0.68505
           52         1848      0.68774
           53        66459      0.68876
           54        14960      0.68915
           55          103      0.69097
           56           16      0.69137
           57        33315      0.69773
           58         1749      0.70093
           59         1407      0.70125
           60           46      0.71008
           61           37      0.71553
           62        26624        0.732
           63         4973      0.73284
           64         1106      0.73293
           65          239      0.73602
           66          472      0.75857
           67        30019         0.76
           68           26      0.76273
           69         9816      0.76717
           70           62      0.76726
           71           58      0.77853
           72         1718      0.77947
           73        15230       0.7845
           74        25046      0.78996
           75        58190      0.79264


What seems to be happening is that whatever effect is creating 60 
equally-spaced "waves" in the data starts to dominate the result 
after about the top 30 or so; or after the cutoff 'badness' value 
exceeds approximately e times its "global" minimum value, the 
logarithmic character of the results begins to loosen its grip . . . 
is there anything to this observation, Gene?




Message: 2424

Date: Thu, 13 Dec 2001 20:03:39

Subject: A hidden message (was: Re: Badness with gentle rolloff)

From: paulerlich

I wrote,

> But clearly my visual estimate of 1664 has been 
> corroborated.

1664 = 2^7 * 13

Pretty spooky!!

Thus,
103169 = 2^8 * 13 * 31 + 1

