mb7733's comments

Part of what drives that is that Ableton is designed to be used in performance, so even minor instability is unacceptable.

Hold a rag on the chain and spin the cranks?

We know they have been colluding; the question is what other kinds of goods they colluded on.

https://en.wikipedia.org/wiki/Bread_price-fixing_in_Canada


Thanks for the context. I'm still not certain that an instance of collusion on the price of bread in 2015 implies wider collusion in 2024.

Ideally, the data would prove this, but I guess my skepticism is the cost of making a claim before the research is done.


I'm the creator, and I think you are spot-on. It is my wish that this data will help increase competition/reduce collusion, but until others analyze it we cannot make assumptions about what grocers are doing with prices.


Your point stands, but it wasn't an "instance of collusion on the price of bread in 2015", but widespread collusion on the price of bread and other baked goods from 2001-2015 (some say 2017), which was discovered in 2015.


> you're going more granular, you have Newfoundland, which is very different from the rest of the Maritime provinces (they have their own dialect of English)

Just a heads up, Newfoundland isn't a part of the Maritimes at all, and those from Newfoundland will certainly remind you of that if you lump them in :)


I think the real-world resolution to this problem is straightforward though. You should look at the finest level of granularity available, and pick the best treatment in the relevant subpopulation for the patient.


Unfortunately our level of certainty generally falls off as we increase the granularity. For example, imagine the patient is a 77yo Polish-American man, and we're lucky enough to have one historical result for 77yo Polish-American men. That man got treatment A and did better than expected. But say we go out to 70-79y white men, where we have 1,000 people, of which 500 got treatment A and generally did significantly worse than the 500 who got treatment B. While the more granular category gives us a little information, the sample size is so small that we would be foolish to discard the less granular information.
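To make that concrete, here's a quick simulation sketch (hypothetical recovery rates, assuming NumPy) of why the n=1 subgroup can't overturn the n=1,000 result:

  # Hypothetical numbers, just to illustrate the variance/granularity tradeoff
  import numpy as np

  rng = np.random.default_rng(0)
  p_a, p_b = 0.40, 0.55  # assumed true recovery rates for treatments A and B

  # Broad group: 500 patients per arm
  broad_a = rng.binomial(1, p_a, size=500).mean()
  broad_b = rng.binomial(1, p_b, size=500).mean()

  # 95% CI half-width for a proportion: 1.96 * sqrt(p(1-p)/n)
  def ci(p_hat, n):
      return 1.96 * np.sqrt(p_hat * (1 - p_hat) / n)

  print(f"broad:    A={broad_a:.2f} +/- {ci(broad_a, 500):.2f}, "
        f"B={broad_b:.2f} +/- {ci(broad_b, 500):.2f}")

  # Granular subgroup: a single patient on A -- the estimate is 0 or 1
  # and the interval is ~ +/- 0.98, i.e. essentially no information
  granular_a = rng.binomial(1, p_a, size=1).mean()
  print(f"granular: A={granular_a:.2f} +/- {ci(0.5, 1):.2f}")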


This is all true. I originally added a disclaimer to my post that said "assuming you have enough data to support the level of granularity", but I removed it for brevity because I thought it was implied -- small sample size isn't part of Simpson's paradox. My apologies for being unclear.


The smaller the subpopulation, the higher the variance, and the less significant the result.


That is exactly what happens almost every time... Just not this time


It does. This is why the fastest qualifiers get to swim in the middle lanes.


The actual article had nothing to do with training 1RM, this whole thread comes from a misinformed top level comment.


This. I misread:

> The periodisation programme was machine based and each exercise included 3 sets of 6–12 repetitions at ~70%–85% of 1 RM


For trigonometry/calculus/physics, radians are by far the most practical because they are dimensionless, so no constants appear when differentiating or integrating. (By the way, those constants would involve factors of pi anyway; it's inherent.)

For example, try to work out the Taylor series for sin(x) using degrees (or rotations). It's awful.
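To spell that out (writing sin_deg for the sine that takes degrees, so sin_deg(x) = sin(pi*x/180)), every term of the series picks up a power of the conversion factor:

  sin_deg(x) = (pi/180)x - (pi/180)^3 x^3/3! + (pi/180)^5 x^5/5! - ...

whereas in radians the coefficients are just +/- 1/n!.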


I don't see how Taylor series specifically would be affected. Differentiation of sin x and cos x is the same independent of the unit of x, and nothing else is used in the series.

Fourier transform would have 4π² instead of 2π in the exponent, no big deal.

Euler's formula gets a factor of 2π in the exponent though. Given its wide application, it adds plenty of noise, of course.


> Differentiation of sin x and cos x is the same independent of the unit of x, and nothing else is used in the series.

Implicit in that statement is the use of the series definition of sine, which is a "meaningful" or "natural" definition insofar as it represents some function we care about. I'll address at the end what happens if we assume that definition regardless of its independent plausibility, but first consider:

The semantic meaning of sine, at least from its historical roots and how you might independently uncover it from earlier fields like geometry instead of later fields like differential equations, is that given an angle (in some units, we'll touch on that in a moment) we'd like to know the ratio of two sides of a particular triangle associated with that angle inscribed in a circle. Given a choice of units for the angle, the triangle is fixed, and so the result (that ratio of side lengths) is also fixed.

Suppose you want to know how that ratio varies with respect to the angle. You can imagine a change of coordinates `y = cx` and consider the derivative of `sin(x)` vs `sin(y/c)`. The latter will have a numeric value `1/c` times less than the former. E.g., imagine a whole circle representing `1` unit of angle instead of `2pi`. Then converting from our normal radians baseline to that new unit you have `y = (1/2pi)x`, and the derivative of the semantic ratio we're considering with respect to the new measure of angle is multiplicatively `2pi` greater than the original.

Going back to your series definition, suppose we pick that series as the definition of sine, independent of units. The problem that arises is that particular uses of sine do have units, and converting from the problem you care about to your particular from-on-high chosen definition of sine will run into the exact sort of problem our `y = cx` paragraph above touched on. The derivative with respect to the quantity of interest still has an extra `1/c` factor, and the fact that our God-blessed choice of sine is independent of units didn't actually solve anything in the composite problem.
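A quick sanity check of that `y = cx` scaling (a sketch assuming SymPy, with c = 1/2pi, i.e. turns):

  import sympy as sp

  y = sp.symbols('y')
  c = 1 / (2 * sp.pi)  # turns per radian, so y = x/(2*pi)

  # Sine of the same geometric angle, expressed in the new unit y
  f = sp.sin(y / c)

  # d/dy sin(y/c) = (1/c) * cos(y/c): the 2*pi reappears in the derivative
  print(sp.diff(f, y))  # -> 2*pi*cos(2*pi*y)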


If you define variants sint(x) = sin(2πx) and cost(x) = cos(2πx) that take x in units of turns instead of radians, then d/dx sint(x) = 2π cost(x) etc.

I agree with you that this is completely fine though. I also find it more natural to think of “how many percent of a turn” an angle is than how many “degrees” or “radians” something is, since we use base-10 everywhere else. My workaround is to mostly write everything in terms of sin(2πτ), cos(2πτ), and exp(2πiτ) when I can, where τ measures turns.
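A finite-difference check of that derivative identity (a sketch, assuming NumPy):

  import numpy as np

  def sint(x):  # sine taking turns: sint(0.25) == 1.0
      return np.sin(2 * np.pi * x)

  def cost(x):
      return np.cos(2 * np.pi * x)

  x, h = 0.1, 1e-6
  numeric = (sint(x + h) - sint(x - h)) / (2 * h)
  print(numeric, 2 * np.pi * cost(x))  # both ~5.083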


> Differentiation of sin x and cos x is the same independent of the unit of x,

That’s not true. If the unit is degrees, d/dx sin(x) = pi/180 * cos(x).
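Easy to verify numerically (a sketch using only the standard library):

  import math

  def sin_deg(d):  # sine taking degrees
      return math.sin(math.radians(d))

  x, h = 30.0, 1e-6
  numeric = (sin_deg(x + h) - sin_deg(x - h)) / (2 * h)
  print(numeric, math.pi / 180 * math.cos(math.radians(x)))  # both ~0.01511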


I don't understand your post. There is no mention of testing 1 rep max (1 RM) in the article.

Are you under the impression that the HRT group was training by only performing 1 RM lifts? That wasn't the case:

> At a commercial gym, HRT performed a supervised full body programme three times per week, with 6–8 weeks of initial habituation. The periodisation programme was machine based and each exercise included 3 sets of 6–12 repetitions at ~70%–85% of 1 RM, which was estimated using the prediction equation according to methods by Brzycki

Meanwhile the moderate group did something very similar, just with more repetitions and less weight:

> 3 sets of 10–18 repetitions at ~50%–60% of 1 RM

I don't see how one is more "realistic" than the other.
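For reference, the Brzycki prediction equation the study quote refers to estimates 1 RM from a submaximal set; a minimal sketch (the numbers below are hypothetical):

  def brzycki_1rm(weight, reps):
      # Brzycki (1993): 1RM ~= weight * 36 / (37 - reps)
      return weight * 36.0 / (37.0 - reps)

  # e.g. a set of 10 reps at 60 kg implies an estimated 1RM of 80 kg,
  # so ~70%-85% of 1RM would be roughly 56-68 kg
  print(brzycki_1rm(60, 10))  # 80.0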

