5350 - 5375 -
Message: 5350
Date: Sat, 08 Dec 2001 02:22:56
Subject: Re: The grooviest linear temperaments for 7-limit music
From: paulerlich

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:

> But this is nonsense. It simply isn't true that 3, 6, 612 and 1547 are
> of approximately equal interest to 19, 22 and 27. Sure you'll always
> be able to find one person who'll say they are. But ask anyone who has
> actually used 19-tET or 22-tET when they plan to try 3-tET or
> 1547-tET. It's just a joke.

For the third or fourth time, Dave, this isn't intended to appeal to any one person, but rather to the widest possible audience. Since this is a "flat" measure, it will rank the systems in the _vicinity_ of *your* #1 system the same way you would, whoever *you* happen to be. But it makes absolutely no preference for one end of the spectrum over another, or the middle. That's what makes it flat and "objective".

Look at Gene's list for 7-limit ETs again. Can it be denied that 31-tET is by far the best _in its vicinity_, and 171-tET is by far the best _in its vicinity_?
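[Editor's note: Paul's "flat" claim can be illustrated numerically. A minimal sketch, assuming Gene's measure is steps^(4/3) times RMS cents error (4/3 being the log-flat exponent for the four primes of the 7-limit — an assumption, not stated in this message), and borrowing the RMS figures Dave posts in message 5364:]

```python
# Sketch of a log-flat badness check: badness = n**(4/3) * rms_error.
# The 4/3 exponent is an assumption (log-flat value for 4 primes);
# the RMS errors are the figures from Dave's "ET List 1" in msg 5364.
ets = {3: 176.9, 6: 66.9, 19: 12.7, 22: 8.6, 27: 7.9,
       612: 0.15, 1547: 0.040}
badness = {n: n ** (4 / 3) * err for n, err in ets.items()}
# Although the ETs span a ~500x range in size, every badness value
# lands in a narrow band (roughly 530-780) -- which is exactly what
# "flat, no preference for one end of the spectrum" means here.
```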
Message: 5351
Date: Sat, 08 Dec 2001 10:34:50
Subject: Re: The grooviest linear temperaments for 7-limit music
From: dkeenanuqnetau

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:
> > Even if you and Paul are the only folks on the planet who find that
> > interesting? In that case I think it's very misleading to call it a
> > badness metric when it only gives relative badness _locally_.
>
> Global relative badness means what, exactly? This makes no sense to
> me.

It means that if two ETs have around the same badness number then they are about as bad as each other, no matter how far apart they are on the spectrum.

> > How high? How will this fix the problem that folks will assume you're
> > saying that 3-tET and 1547-tET are about as useful as 22-tET for
> > 7-limit.
>
> I think you would be one of the very few who looked at it that way.
> After all, this is hardly the first time such a thing has been done.

Ok. So I'm the only person who will assume that two ETs with about the same badness number are roughly as bad as each other. In that case, I shan't bother you any more. We are apparently speaking different languages.
Message: 5354
Date: Sat, 08 Dec 2001 03:27:45
Subject: Re: The grooviest linear temperaments for 7-limit music
From: paulerlich

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:
>
> > So do you still stand by this statement:
> >
> > "If we bound one of them and gens^2 cents, we've bound the other;
> > that's what I'd do."
> >
> > (which you wrote after I said that a single cutoff point wouldn't be
> > enough, that we would need a cutoff curve)?
>
> Sure. I think bounding g makes the most sense, since we can calculate
> it more easily. I've been thinking about how one might calculate
> cents without going through the map stage, but for gens we can get it
> immediately from the wedgie with no trouble.

I don't immediately know what "the map stage" means, but I've been thinking that, in regard to "standardizing the wedge product", we might want to use something that has the Tenney lattice built in.

> We could then toss
> anything with too high a gens figure before even calculating anything
> else, which should help.

So I'm not getting where g>=1 comes into all this.
Message: 5357
Date: Sat, 08 Dec 2001 03:48:56
Subject: Re: The grooviest linear temperaments for 7-limit music
From: dkeenanuqnetau

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:
> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:
>
> > But this is nonsense. It simply isn't true that 3, 6, 612 and 1547 are
> > of approximately equal interest to 19, 22 and 27. Sure you'll always
> > be able to find one person who'll say they are. But ask anyone who has
> > actually used 19-tET or 22-tET when they plan to try 3-tET or
> > 1547-tET. It's just a joke.
>
> For the third or fourth time Dave, this isn't intended to appeal to
> any one person, but rather to the widest possible audience.

But that's exactly my intention too. I'm trying to help you find a metric that will appeal, not to me, but to all those people whose divergent views I've read on the tuning list over the years. I'm simply claiming that your metric is seriously flawed in achieving your intended goal. Practically _nobody_ thinks 3, 6, 612, 1547 are equally as good or bad or interesting as 19 or 22. If you include fluff like that then there will be less room for ETs of interest to actual humans.

> Since
> this is a "flat" measure, it will rank the systems in the _vicinity_
> of *your* #1 system, the same way you would, whoever *you* happen to
> be. But it makes absolutely no preference for one end of the spectrum
> over another, or the middle. That's what makes it flat
> and "objective".

You seem to be arguing in circles.

> Look at Gene's list for 7-limit ETs again. Can it be
> denied that 31-tET is by far the best _in its vicinity_, and 171-tET
> is by far the best _in its vicinity_?

Of course I don't deny that. I claim that it is irrelevant. _Any_ old half-baked way of monotonically combining steps and cents into a badness metric will be the same as any other, _locally_. You said the same yourself in regard to your HE curves. Maybe you need more sleep. :-)

Since when does merely local behaviour determine whether something is _flat_ or not? In any case, I don't think you understand Gene's particular kind of flatness; you certainly weren't able to explain it to me, as Gene has now done. This particular kind of "flatness" is just one of many. There's nothing objective about a decision to favour it, and then to introduce ad hoc additional cutoffs besides the one for badness.
Message: 5358
Date: Sat, 08 Dec 2001 03:55:30
Subject: Re: The grooviest linear temperaments for 7-limit music
From: paulerlich

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:
>
> > So I'm not getting where g>=1 comes into all this.
>
> What I wrote was confused, but you've already replied, I see. Bounding
> g from below is easy, since it bounds itself. Bounding it from above
> could mean just setting a bound, or bounding g^2 c; I think just
> setting an upper bound to it makes a lot of sense.

Yes -- g could play the role that N plays in your ET lists. One would order the results by g, give the g^2 c score for each (or not), and give about a page of nice musician-friendly information on each.

Gene, there are a lot of outstanding questions and comments . . . I wanted to know if there would have been a lot more "slippery" ones had you included simpler unison vectors in your source list . . . I want to use a Tenney-distance weighted "gens" measure . . . but for now, a master list would be great. Can someone produce such a list, with columns for "cents" and "gens" at least as currently defined? I'd like to try to find omissions.
Message: 5360
Date: Sat, 08 Dec 2001 04:03:48
Subject: Re: The grooviest linear temperaments for 7-limit music
From: paulerlich

--- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:
> --- In tuning-math@y..., "paulerlich" <paul@s...> wrote:
> > --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:
> >
> > > But this is nonsense. It simply isn't true that 3, 6, 612 and 1547
> > > are of approximately equal interest to 19, 22 and 27. Sure you'll
> > > always be able to find one person who'll say they are. But ask
> > > anyone who has actually used 19-tET or 22-tET when they plan to try
> > > 3-tET or 1547-tET. It's just a joke.
> >
> > For the third or fourth time Dave, this isn't intended to appeal to
> > any one person, but rather to the widest possible audience.
>
> But that's exactly my intention too. I'm trying to help you find a
> metric that will appeal, not to me, but to all those people whose
> divergent views I've read on the tuning list over the years. I'm
> simply claiming that your metric is seriously flawed in achieving your
> intended goal. Practically _nobody_ thinks 3, 6, 612, 1547 are equally
> as good or bad or interesting as 19 or 22. If you include fluff like
> that then there will be less room for ETs of interest to actual humans.

Dave, if you don't have a cutoff, you'd have an infinite number of ETs better than 1547. Of course there has to be a cutoff.

> > Look at Gene's list for 7-limit ETs again. Can it be
> > denied that 31-tET is by far the best _in its vicinity_, and 171-tET
> > is by far the best _in its vicinity_?
>
> Of course I don't deny that. I claim that it is irrelevant. _Any_ old
> half-baked way of monotonically combining steps and cents into a
> badness metric will be the same as any other, _locally_. You said the
> same yourself in regard to your HE curves. Maybe you need more sleep.
> :-)

I mean that only Gene's measure tells you exactly _how much_ better a system is than the systems in its vicinity, _in units of_ the average differences between different systems in its vicinity.

> Since when does merely local behaviour determine if something is
> _flat_ or not?

It doesn't.

> In any case, I don't think you understand Gene's particular kind of
> flatness, you certainly weren't able to explain it to me, as Gene has
> now done. This particular kind of "flatness" is just one of many.

I'd like to see a list of ETs, as far as you'd like to take it, above some cutoff different from Gene's, that shows this kind of behavior (not just the flatness of the measure itself, but also the flatness of the size of the wiggles).
Message: 5361
Date: Sat, 8 Dec 2001 05:01 +00
Subject: Re: More lists
From: graham@xxxxxxxxxx.xx.xx

Me:
> > To check my RMS optimization's working, is a 116.6722643 cent generator
> > right for Miracle in the 11-limit? RMS error of 1.9732 cents.

Dave:
> I get 116.678 and 1.9017. Did you include the squared error for 1:3
> twice? I think you should since it occurs twice in an 11-limit hexad,
> as both 1:3 and 3:9. So then you must divide by 15, not 14, to get the
> mean.

I include 1:3 and 1:9.

> Actually, I see that this doesn't explain our discrepancy.

It may depend on whether or not you include the zero error for 1/1 in the mean.

Graham
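[Editor's note: the figures above can be reproduced. A stand-alone sketch (stdlib only, not Graham's temper module) using the standard Miracle 11-limit mapping (3 -> +6, 5 -> -7, 7 -> -2, 9 -> +12, 11 -> +15 generators) and the 14-interval set Graham describes (1:3 and 1:9 each once, the 3:9 duplicate of 1:3 dropped). The least-squares optimum comes out at Graham's 116.672 cents, and the RMS over these 14 errors is about 1.90 cents:]

```python
import math

just = lambda n, d: 1200 * math.log2(n / d)  # just size in cents

# (generators, octaves) for each 11-limit hexad interval under the
# Miracle mapping, with octaves kept pure; 14 intervals in all.
intervals = {
    (3, 2): (6, 0),  (5, 4): (-7, 1),  (7, 4): (-2, 1), (9, 8): (12, -1),
    (11, 8): (15, -1), (5, 3): (-13, 2), (7, 5): (5, 0), (9, 5): (19, -1),
    (11, 9): (3, 0), (7, 6): (-8, 1), (9, 7): (14, -1), (11, 7): (17, -1),
    (11, 5): (22, -1), (11, 6): (9, 0),
}
ks = [k for k, a in intervals.values()]
# target size each interval's generator stack must hit, octaves removed
rs = [just(n, d) - a * 1200 for (n, d), (k, a) in intervals.items()]
# least squares: d/dg sum (k*g - r)^2 = 0  =>  g = sum(k*r) / sum(k^2)
g = sum(k * r for k, r in zip(ks, rs)) / sum(k * k for k in ks)
rms = math.sqrt(sum((k * g - r) ** 2 for k, r in zip(ks, rs)) / len(ks))
```

That this RMS is closer to Dave's 1.9017 than to Graham's 1.9732 suggests the two calculations may differ in the divisor rather than in the interval set, which is consistent with the discussion above.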
Message: 5362
Date: Sat, 8 Dec 2001 05:01 +00
Subject: Re: The grooviest linear temperaments for 7-limit music
From: graham@xxxxxxxxxx.xx.xx

Gene wrote:
> Sure. I think bounding g makes the most sense, since we can calculate
> it more easily. I've been thinking about how one might calculate
> cents without going through the map stage, but for gens we can get it
> immediately from the wedgie with no trouble. We could then toss
> anything with too high a gens figure before even calculating anything
> else, which should help.

My program throws out bad temperaments before doing the optimization, if that's what you're suggesting. It's one of the changes I made this, er, yesterday morning. It does make a difference, but not much now that my optimization's faster. Big chunks of time are currently being spent generating the ETs and formatting the results.

Graham
Message: 5364
Date: Sat, 08 Dec 2001 05:54:48
Subject: Re: The grooviest linear temperaments for 7-limit music
From: dkeenanuqnetau

--- In tuning-math@y..., "genewardsmith" <genewardsmith@j...> wrote:
> --- In tuning-math@y..., "dkeenanuqnetau" <d.keenan@u...> wrote:
> > I'd say this is just one component of badness. It's the error expressed
> > as a proportion of the step size. The number of steps in the octave n
> > has an effect on badness independent of the relative error.
>
> Then you should be happier with an extra cube root of n adjustment.

Yes I am. But still a long way from as happy as I think most people would be with something based, not on k*log(gens) + log(cents), but instead on k*gens + cents (or maybe something else).

> > But _why_ don't you want this sort of flatness?
>
> Because my interest isn't independent of size--you need more at
> higher levels to make me care.

Indeed.

> > Did you reject it on "objective" grounds? Is there some other sort of
> > flatness that you _do_ want? If so, what is it? How many sorts of
> > flatness are there and how did you choose between them?
>
> You could use the Riemann Zeta function and the omega estimates based
> on the assumption of the Riemann hypothesis and do it that way, if
> you liked. Or there are no doubt other ways; this one seems the
> simplest and it gets the job done, and the alternatives would have a
> certain family resemblance.

But there's nothing "objective" about these decisions. You're just finding stuff so it matches what you think everyone likes. Right?

> > Why do you immediately leap to the theory of Diophantine approximation
> > as giving the best way to achieve a finite list?
>
> It gives me a measure which is connected to the nature of the
> problem, which is a Diophantine approximation problem, which seems to
> make a lot of sense both in practice and theory to me, if not to you.

There are probably many such things "connected to the nature of the problem" which give entirely different results.

> > I think a good way to achieve it is simply to add an amount k*n to the
> > error in cents (absolute, not relative to step size). I suggest
> > initially trying a k of about 0.5 cents per step.
>
> Should I muck around in the dark until I make this measure behave in
> a way something like the measure I already have behaves, which would
> be both pointless and inelegant, or is there something about it to
> recommend it?

Yes. The fact that I've been reading the tuning list and thinking about and discussing these things with others for many years. So it's hardly groping in the dark. I'm not saying this particular one I pulled out of the air is the one most representative of all views, but I do know that we can do a lot better than your current proposal.

> > The only way to tell if this is better than something based on the
> > theory of Diophantine equations is to suck it and see.
>
> Better how? The measure I already have does exactly what I'd want a
> measure to do.

Answered below.

> > Some of us have been on the tuning lists long enough to know what a
> > lot of other people find useful or interesting, even though we don't
> > necessarily find them so ourselves.
>
> One of the advantages of the measure I'm using is that it accommodates
> this well.

How do you know that?

> > But this is nonsense. It simply isn't true that 3, 6, 612 and 1547 are
> > of approximately equal interest to 19, 22 and 27.
>
> I'm not trying to measure your interest,

I keep saying that I'm trying to consider as wide a set of interests as possible. You and Paul keep accusing me of only trying to serve my own interests. I accept that you're trying to consider as wide a set of interests as possible, I just claim that you're failing.

> I'm only saying if you want
> to look at a certain range, look at these.

Yes, but some _ranges_ are more interesting than others, and so if you include an equal number in every range then you won't be including enough in the most interesting ranges. It isn't just _my_ prejudice that there are more ETs of interest in the vicinity of 26-tET than there are in the vicinity of 3-tET or 1550-tET. It's practically everyone's.

> > Sure you'll always be able to find one person who'll say they are. But
> > ask anyone who has actually used 19-tET or 22-tET when they plan to
> > try 3-tET or 1547-tET. It's just a joke.
>
> The 4-et is actually interesting in connection with the 7-limit, as
> the 3-et is with the 5-limit, and the large ets have uses other than
> tuning up a set of marimbas as well.

Those are good points, which maybe says that my metric is too harsh on the extremes, but I still say yours is way too soft. There's got to be something pretty damn exceptional about an ET greater than 100 for it to be of interest. But note that our badness metric is only based on steps and cents (or gens and cents for temperaments), so we can't claim that our metric should include some exceptionally high ET if its exceptional property has nothing to do with the magnitude of the number of steps or the cents error.

> > I suspect you've been seduced by the beauty of the math and forgotten
> > your actual purpose. This metric clearly favours both very small and
> > very large n over middle ones.
>
> In other words, the range *you* happen to care about is the only
> interesting range; it's that which I was regarding as not objective.

There you go again. Accusing me of only trying to serve my own interests.

> > An isobad that passes near 3, 6, 19, 22, 612 and 1547, isn't one.
>
> An isobad which passes near 3, 6, 19, 22, 612 and 1547 makes a lot of
> sense to me, so I think I would probably *not* like your alternative
> as well.

Whether you or I would like it, isn't the point.
The only way this could be settled is by some kind of experiment or survey, say on the tuning list. We could put together two lists of ETs of roughly equal "badness": one using your metric, one using mine. They should contain the same number of ETs (you've already given a suitable list of 11), and they should have as many ETs as possible in common. We would tell people the 7-limit RMS error of each and the number of steps per octave in each, but nothing more. Then we'd ask them to choose which list is a better example of a list of ETs of approximately equal 7-limit goodness, badness, usefulness, interestingness or whatever you want to call it, based only on considerations of the number of steps and the error. We could even ask them to rate each list on a scale of 1 to 10 according to how well they think each list manages to capture equal 7-limit interestingness or whatever, based only on considerations of the number of steps and the error.

Here they are:

ET List 1

Steps per octave   7-limit RMS error (cents)
------------------------------------------
    3              176.9
    6               66.9
   19               12.7
   22                8.6
   27                7.9
   68                2.4
  130                1.1
  140                1.0
  202                0.61
  612                0.15
 1547                0.040

ET List 2

Steps per octave   7-limit RMS error (cents)
------------------------------------------
   15               18.5
   19               12.7
   22                8.6
   24               15.1
   26               10.4
   27                7.9
   31                4.0
   35                9.9
   36                8.6
   37                7.6
   41                4.2

Do we really need to do the experiment? Paul?
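[Editor's note: the RMS column can be approximated directly. A sketch assuming the error set is the six intervals of the 7-limit tetrad, each taken at its nearest n-tET approximation with pure octaves. This convention reproduces the 12.7 cents listed for 19-tET exactly; other entries may have used a slightly different convention (e.g. an optimally tempered step size), so small discrepancies are expected elsewhere:]

```python
import math

def rms_7limit(n):
    """RMS error in cents of the six 7-limit tetrad intervals in n-tET,
    each taken at its nearest approximation, octaves kept pure."""
    step = 1200 / n
    ratios = [(3, 2), (5, 4), (7, 4), (5, 3), (7, 5), (7, 6)]
    sq = 0.0
    for p, q in ratios:
        cents = 1200 * math.log2(p / q)
        err = round(cents / step) * step - cents  # nearest multiple of step
        sq += err * err
    return math.sqrt(sq / len(ratios))
```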
Message: 5365
Date: Sat, 08 Dec 2001 06:16:46
Subject: Re: The grooviest linear temperaments for 7-limit music
From: dkeenanuqnetau

--- In tuning-math@y..., "paulerlich" <paul@s...> wrote:
> Dave, if you don't have a cutoff, you'd have an infinite number of
> ETs better than 1547. Of course there has to be a cutoff.

Yes. This just shows that this isn't a very good badness metric. A decent badness metric would not need a cutoff in anything but badness in order to arrive at a finite list.

> I mean that only Gene's measure tells you exactly _how much_ better a
> system is than the systems in their vicinity,

How do you know it does that? "Exactly"?

> _in units of_ the
> average differences between different systems in their vicinity.

I don't understand that bit. Can you explain?

> I'd like to see a list of ETs, as far as you'd like to take it, above
> some cutoff different from Gene's, that shows this kind of behavior
> (not just the flatness of the measure itself, but also the flatness
> of the size of the wiggles).

But why ever do you think the size of the wiggles should be flat? I think it is quite expected that the size of the wiggles in badness around 1-tET to 9-tET is _much_ bigger than the wiggles around 60-tET to 69-tET. Apparently you agree that the wiggles around 100000-tET are completely irrelevant, since you're happy to have a cutoff in steps somewhere below that.
Message: 5371
Date: Sun, 9 Dec 2001 16:02 +00
Subject: Re: Wedge products
From: graham@xxxxxxxxxx.xx.xx

Gene wrote:
> What's the best version of Python for Win98, do you know? In
> particular, what is the deal with the "stackless" version?

Usually the latest stable ActiveState release, so long as you don't quibble with the license. Stackless is an experimental implementation that has continuations, and doesn't need the Global Interpreter Lock.

> > My wedge invariants can't be made unique and invariant in all cases,
> > but they work most of the time. I could have a method for declaring
> > if two wedgable objects are equivalent.
>
> You don't need to use my system; you could make the first non-zero
> coefficient in the basis ordering you use positive.

Yes, that's what I do.

> > Also, my invariant is very different to Gene's.
>
> It should differ only in the sign or order of basis elements.

Looks like it, except for the zeros.

> > I still don't get the process for calculating unison vectors with
> > wedge products, especially in the general case.
>
> One way to think of the general case is to get the associated matrix
> of what I call "vals", reduce by dividing out by gcds, and solve the
> resultant system of linear Diophantine equations, which set each of
> the val maps to zero.

Is your matrix of vals my mapping by steps? [(41, 31), (65, 49), (95, 72), (115, 87), (142, 107)] for Miracle. If so, I'm with you until you get to the Diophantine equations. I think it's solving systems of linear Diophantine equations that I need to know how to do.

> > One good thing is that the generator mapping (ignoring the period
> > mapping) which I'm using as my invariant key, is simply the
> > octave-equivalent part of the wedge product of the commatic unison
> > vectors!
>
> Or of the wedge product of two ets.

Ah, no, not quite. This works:

>>> wedgie = reduce(temper.wedgeProduct, map(temper.WedgableRatio, [(225,224),(385,384),(243,242)]))
>>> wedgie.octaveEquivalent().flatten()
(0, -6, 7, 2, -15)

but with the wedge of the temperaments

>>> (h31^h41).octaveEquivalent().flatten()
(-25, -20, 3, 15, 59, 49)

so what I have to do is

>>> (h31^h41).complement().octaveEquivalent().flatten()
(0, -6, 7, 2, -15)

The complement() method is something like a transpose. Would that be a better name for it? Anyway, my invariant usually works so that wedge products related in this way compare the same, but not always.

> > I've got mine ordered, but it looks like a different order to yours.
>
> That's not surprising; the order is not determined by the definition
> of wedge product, and I chose mine in a way I thought made sense from
> the point of view of usability for music theory.

Oh, well, mine's numerical order.

> > The problem is with zeroes. As it stands, the 5-limit interval 5:4 is
> > the same as the 7-limit interval 5:4 as far as the wedge products are
> > concerned.
>
> This has me confused, because it's the same as far as I'm concerned
> too, unless you mean its vector representation.

Yes, for matrices you need to have consistent dimensions, but you can get away without them for wedge products. At least the way I've implemented them.

> > But some zero elements aren't always present. Either I can get rid of
> > them, which might mean that different products have the same
> > invariant, or enumerate the missing bases when I calculate the
> > invariant.
>
> I don't know what is going on here.

Take the multiple-29 wedgie:

>>> h29 = temper.PrimeET(29,temper.primes[:5])
>>> h58 = temper.PrimeET(58,temper.primes[:5])
>>> (h29^h58).invariant()
(0, 29, 29, 29, 29, 46, 46, 46, 46, -14, -33, -40, -19, -26, -7)

Note it starts with a zero, which corresponds to the (0,1) element. But, if you build it up from the right set of unison vectors,

>>> wedgie = reduce(temper.wedgeProduct, ( (46, -29), (-14, 0, -29, 29), (33, 0, 29, 0, -29), (7, 0, 0, 0, 29, -29)))
>>> wedgie.simplify()
>>> wedgie.complement()
{(0, 5): 29, (0, 4): 29, (1, 4): 46, (1, 5): 46, (1, 2): 46, (1, 3): 46, (2, 5): -40, (2, 4): -33, (2, 3): -14, (3, 4): -19, (3, 5): -26, (0, 2): 29, (4, 5): -7, (0, 3): 29}

the (0,1) element isn't there. That means it's also missing from the invariant:

>>> wedgie.invariant()
(29, 29, 29, 29, 46, 46, 46, 46, -14, -33, -40, -19, -26, -7)

If I could enumerate over all pairs, I could fix that. But that still leaves the general problem of all combinations of N items taken from a set. I'd prefer to get rid of zero elements altogether.

> > As to the unison vectors, in the 7-limit I seem to be getting 4 when I
> > only wanted 2, so how can I be sure they're linearly independent?
>
> They are never linearly independent.

Why do they need to be? I need a pair of unison vectors to define a 7-limit linear temperament. Right? Some pairs have torsion as well:

>>> for i in range(4):
	for j in range(3):
		print temper.wedgeProduct(vectors[i], vectors[j]).torsion(),

0 2 12 2 0 4 12 4 0 11 4 2

The aim is to get a pair without torsion. And then generalize the process for any number of dimensions.

Graham
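[Editor's note: the wedge product of two vals being discussed is just the set of 2x2 minors of the two mappings. A minimal stand-alone sketch (names hypothetical, not Graham's temper module) that reproduces the h31^h41 numbers above:]

```python
from itertools import combinations

def wedge(u, v):
    """Wedge product of two equal-length vals: all 2x2 minors,
    keyed by the index pair (i, j) with i < j."""
    return {(i, j): u[i] * v[j] - u[j] * v[i]
            for i, j in combinations(range(len(u)), 2)}

h31 = (31, 49, 72, 87, 107)   # 11-limit patent val for 31-tET
h41 = (41, 65, 95, 115, 142)  # 11-limit patent val for 41-tET
w = wedge(h31, h41)
# the octave-equivalent part drops every coefficient involving prime 2
# (index 0), matching Graham's (-25, -20, 3, 15, 59, 49)
octave_equiv = tuple(w[i, j] for i, j in sorted(w) if i > 0)
```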
Message: 5372
Date: Sun, 9 Dec 2001 16:02 +00
Subject: Unison vector finder (Was: The grooviest linear temperaments for 7-limit
From: graham@xxxxxxxxxx.xx.xx

Gene wrote:
> I don't know what good Maple code will do, but here it is:
>
> findcoms := proc(l)
> local p,q,r,p1,q1,r1,s,u,v,w;

More descriptive variable names might help. Is l the wedge invariant?

> s := igcd(l[1], l[2], l[6]);
> u := [l[6]/s, -l[2]/s, l[1]/s, 0];

Presumably this is simplifying the octave-equivalent part?

> v := [p,q,r,1];

What values do p, q and r have? Is it important?

> w := w7l(u,v);

> "w7l" takes two vectors representing intervals, and computes the
> wedge product.

So w is the wedge product of u and v, whatever they are.

> s := isolve({l[1]-w[1],l[2]-w[2],l[3]-w[3],l[4]-w[4],l[5]-w[5],l[6]-w[6]});

> "isolve" gives integer solutions to a linear equation;

Oh, that sounds useful.

> s := subs(_N1=0,s);

> I get an undetermined variable "_N1" in this way which I
> can set equal to any integer, so I set it to 0.

Okay.

> p1 := subs(s,p);
> q1 := subs(s,q);
> r1 := subs(s,r);

What about this?

> v := 2^p1 * 3^q1 * 5^r1 * 7;

And here ^ is exponentiation instead of a wedge product.

> if v < 1 then v := 1/v fi;

So v must be a ratio, and you want it to be ascending.

> w := 2^u[1] * 3^u[2] * 5^u[3];
> if w < 1 then w := 1/w fi;

Same for w.

> [w, v] end:

And that's the result, is it? Two unison vectors?

> coms := proc(l)
> local v;
> v := findcoms(l);
> com7(v[1],v[2]) end:

> The pair of unisons
> returned in this way can be LLL reduced by the "com7" function, which
> takes a pair of intervals and LLL reduces them.

That makes sense. Return the reduced results of the other function.

> "w7l" takes two vectors representing intervals, and computes the
> wedge product. "isolve" gives integer solutions to a linear
> equation; I get an undetermined variable "_N1" in this way which I
> can set equal to any integer, so I set it to 0. The pair of unisons
> returned in this way can be LLL reduced by the "com7" function, which
> takes a pair of intervals and LLL reduces them.

Looks like the magic is being done by "isolve", which I presume is built-in to Maple.

Graham
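[Editor's note: Maple's `isolve` can be emulated with the extended Euclidean algorithm. A sketch for the two-variable case (function names hypothetical); Gene's undetermined parameter `_N1` plays the role of the free integer t in the general solution:]

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return (abs(a), 1 if a > 0 else -1, 0)
    g, x, y = ext_gcd(b, a % b)
    # unwind: g = b*x + (a mod b)*y = a*y + b*(x - (a//b)*y)
    return (g, y, x - (a // b) * y)

def isolve2(a, b, c):
    """One particular integer solution (x, y) of a*x + b*y = c,
    or None if no solution exists (i.e. gcd(a, b) does not divide c).
    The full set is (x + t*(b//g), y - t*(a//g)) for integer t."""
    g, x, y = ext_gcd(a, b)
    if c % g:
        return None
    return (x * (c // g), y * (c // g))
```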