I love the modern game. I just think the pendulum has swung a tiny bit too far toward 3s in the past 3-4 years, that's all. Just a nudge in the other direction.
My ideal would be to try changing 2s and 3s to 3s and 4s. But that will never happen.
I think it would be enough to simply move the 3 point line back a couple feet AND have it follow its natural arc out of bounds, thus eliminating the shorter and easier corner 3 shot.
I’d rather defenses covered it better and gave up more 2s in the paint. If we move the line too far out, it no longer spaces the floor, and then we’re back to a game in the paint.
This is my preferred answer: bring physicality back into the sport, especially on defense. There are moving screens on every play, and yet a shooter can jump into a defender who is standing still and get rewarded with free throws.
A couple of feet is not enough. The line needs to move far enough that the vast majority of players (more than 95%) shoot less than 30% from there, so probably 8 to 10 feet back. It absolutely should happen, but they will more likely do something awesome like shortening quarters to 10 minutes.
Curry and Bol Bol are two good candidates. In today’s NBA you can’t tell the difference between them on the court except for the several feet of height difference; they’re practically the same player, since they shoot about the same shots.
I don't watch basketball, so I am speculating, but isn't this a case where defence hasn't yet adapted to the new attacking strategies? Wouldn't you expect that in a few years teams will be better at defending against 3s, reducing their expected value and therefore swinging the pendulum back towards more 2s?
The Nash equilibrium should be that the expected values of 2s and 3s are equal. If you're off, you would expect a trend toward that equilibrium, possibly with some overcorrection.
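To put rough numbers on that equilibrium condition, here's a quick sketch of the expected-value math; the shooting percentages are made-up round numbers, not actual league stats:

# Expected points per shot attempt, with illustrative (not real) percentages.
shot_types = {
    "three-pointer at 36%": 0.36 * 3,    # 1.08 points per attempt
    "mid-range two at 42%": 0.42 * 2,    # 0.84 points per attempt
    "shot at the rim at 65%": 0.65 * 2,  # 1.30 points per attempt
}

for name, expected_points in shot_types.items():
    print(f"{name}: {expected_points:.2f} expected points")

# The equilibrium condition is 3 * p3 == 2 * p2, i.e. a 33% three
# is worth the same as a 50% two.

With numbers anything like these, the marginal 3 beats the marginal mid-range 2, which is roughly why offenses shifted.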
Ironically Steve Nash didn't shoot enough 3s, so his Nash equilibrium was pretty off.
In all seriousness, yes, there's a lot more to modern basketball than just "take more 3s". It's more like "try to get dunks and layups, but if that doesn't work get 3s, but also mid range is still valuable if it softens up defense". And defenses have learned to scramble and switch to cover a lot of 3 pointers, but that can still be exploited with cross court passing and switch hunting. Check out some cool plays here: https://www.youtube.com/watch?v=qo-V_ujmMFo
If the answer is “they have to make less money”, it’s just very unlikely to happen. Most sports, but especially the global behemoths that are basketball and football (soccer), have made more money over time simply because of population growth.
The comparison, IMO, is baseball: how it's played changed over time as teams optimized, and some of those changes are undesirable from the perspective of an entertainment product. So MLB changed the rules to increase the plays at the margin that are, on average, considered "more exciting."
Every league does this, of course; the NBA did it just last year with the stealth rule changes around fouls.
One of my favorite ridiculous stats. Bruce Bowen had one year where he shot better from 3 than he did on free throws. He was a dreadful shooter, but somehow he taught himself to be passable at this one specific skill, corner 3s.
Collaborative coding is powerful. But to get your team to its most productive state, you need automated branch management that lets multiple developers commit code daily without frustration. That breaks down when many team members are funneling changes onto the same busy branch. It's frustrating for your team, but, more importantly, it gets in the way of shipping velocity. We don’t want that journey for you!
This is why we built merge queue. We’ve reduced the tension between branch stability and velocity. Merge queue takes care of making sure your pull request is compatible with the other changes ahead of it and alerts you if something goes wrong. The result: your team can focus on the good stuff: write, submit, and commit. No tool sprawl here; the flow stays in the same place, behind a modified merge button, because GitHub remains your one-stop shop for an integrated, enterprise-ready platform with the industry’s best collaboration tools.
In the NHL, you get 2 points for winning, 0 points for losing in regulation, and 1 point for losing in overtime.
The obvious result (to everyone but the creators of the rule I guess) is that, if a game is tied near the end of regulation, it is best for both sides if the game goes to overtime. There are 2 points available for a game decided in regulation, but 3 if decided in overtime. I assume both teams would sit quietly and wait for overtime if it were tolerated.
> Feb 22, 2020
Zamboni driver for the Toronto Marlies and emergency backup goalie David Ayres makes his NHL debut at 42 years of age, stopping 8 of 10 shots to give the Carolina Hurricanes a 6-3 win, all while stealing the show in Toronto.
Is there a reason for this difference compared to hockey everywhere else? As far as I'm aware, both the IIHF and most other leagues award 3 points per game, split either 3/0 or 2/1, with the same 5-minute 3-on-3 followed by a shootout that the NHL uses. The only difference is the 2-point win, which is really odd if you ask me. The idea that a fixed number of points exists in the table (3 multiplied by the number of games played) regardless of outcomes feels natural.
This one boggles my mind, because broadcasters don't want games to go into overtime. A 3-point system where overtime wins/losses are split 2/1 emphasizes winning in the final few minutes, which is exciting to both viewers and businessmen. Maybe one day they'll switch.
The NHL’s overtime is pretty efficient, though. There’s a short commercial break, then 5 minutes of 3-on-3, which is exciting and has a high probability of goals, then a well-paced shootout if it’s still tied. So perhaps broadcasters are OK with a little extra time if it keeps a large audience.
Note for the casual hockey fan: the NHL overtime system is different in the playoffs, since the playoffs require clear winners and losers.
(Game 1 of the 2023 Eastern Conference Finals featured four overtime periods; and the tie-breaking/game-winning goal was scored after 139 minutes and 47 seconds of total game time, at 1:54 am EDT.)
That's only the case if neither team feels it has a decided advantage to win before OT. Otherwise, you don't want the other team to get a point, because you're competing against them in total points for playoff position.
If it's tied near the end of regulation, your expectation value is 1. But if you and the other team wait it out and let it go into overtime, your expectation value is 1.5.
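Here's that arithmetic spelled out in a tiny sketch, assuming both teams are a coin flip to win however the game gets decided:

# Expected standings points under the NHL's 2/1/0 system,
# assuming an even (50/50) matchup.
p_win = 0.5

# Settle it in regulation: winner gets 2, loser gets 0.
ev_regulation = p_win * 2 + (1 - p_win) * 0   # 1.0 points each

# Let it reach overtime: winner gets 2, loser still gets 1.
ev_overtime = p_win * 2 + (1 - p_win) * 1     # 1.5 points each

print(f"decide it in regulation: {ev_regulation} expected points per team")
print(f"wait for overtime:       {ev_overtime} expected points per team")
# Two points are split in a regulation result, three in an overtime one,
# so a tied game is worth more to both teams if it reaches OT.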
If teams don't play to win and collaborate instead of compete, every sport becomes boring: the teams maximize "points" but the league loses viewership.
This is why teams don't all just play for draws, going for the victory only when the risk is minimal. That kind of cooperative behavior is awfully close to match fixing and is likely to get banned or fined by a competent league (one that wants to make money).
Soccer has had its share of embarrassments, like the CONCACAF game in the article or the Disgrace of Gijón in 1982, but those usually happened in national-team games (not much money at stake). Modern clubs play to win, like Mainz did against Dortmund on Saturday, because playing to win is why crowds watch, and that's what makes money.
Do we know for a fact the breakdown of 2nd choices on ballots where Palin was the first choice? If not, you are only speculating.
It seems possible that Palin lost due to voters incorrectly expressing their preferences by not putting a 2nd choice. If that is a routine thing, then it is indeed a problem with ranked choice voting in the real world, but it is NOT a structural flaw as you are claiming.
Yes, the cast vote record has been released, and Begich wins the head-to-head vs both Palin and Peltola. [1] is a nice writeup on the election (I highly recommend the "What's Interesting" header if you don't have time for the whole thing) and [2] is a link to the released ballot data.
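If you want to see the mechanism rather than the raw data, here's a toy sketch of the "center squeeze" with made-up ballot counts (these are not the Alaska numbers): the centrist wins every head-to-head matchup but has the fewest first-choice votes, so instant runoff eliminates them first.

from collections import Counter

# Hypothetical ballots (counts are invented for illustration only).
# Each tuple is a full ranking: first choice, second choice, third choice.
ballots = (
    [("Left", "Center", "Right")] * 40 +
    [("Right", "Center", "Left")] * 35 +
    [("Center", "Left", "Right")] * 15 +
    [("Center", "Right", "Left")] * 10
)

def prefers(a, b):
    """Number of ballots ranking candidate a ahead of candidate b."""
    return sum(1 for ranking in ballots if ranking.index(a) < ranking.index(b))

# Center beats both rivals head-to-head (the Condorcet winner)...
for rival in ("Left", "Right"):
    print(f"Center vs {rival}: {prefers('Center', rival)}-{prefers(rival, 'Center')}")

# ...but has the fewest first-choice votes (25 vs 40 and 35), so instant
# runoff drops Center first and the final round is Left vs Right.
print("first-choice totals:", dict(Counter(r[0] for r in ballots)))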
I don't know whether this was true in the 1940s, but today, sliced bread is extremely different from other bread. Not because of the slicing, but because it's a very different, longer lasting product that also happens to be sliced.
Not all sliced bread. The bakery at my grocery store offers all of their breads in sliced and unsliced forms. I'm sure they use some type of preservative, as it lasts longer than the bread I bake, but nowhere near as long as the sandwich bread sold in a different section of the store.
It’s cheaper to physically put the functionality in all the cars than to differentiate at assembly time. We don’t have the same visceral reaction to paying for options at the dealer.
(I am assuming they aren’t adding this charge RETROACTIVELY. That’s very different)
I’d be fine if every vehicle had the hardware in it and it was activated via an upsell at the dealership. I think the truly insidious thing here is the subscription aspect, for all the various reasons other people in the responses have stated.
I’m ok with Intel doing the same thing. But if Intel started selling AVX512 with a monthly subscription, I’d never buy an Intel chip again.
Isn’t it that most low-end processors have defects, so manufacturers trim them down (yes, blocking some features, cores, etc.) and sell them at lower prices? But the thing is, you know beforehand what you are buying, with no hidden subscription, and it’s cheaper than the full-featured one, so I’m not particularly mad about it.
No, although IBM did the same thing with mainframes, and yes customers did make a phone call to unlock some extra hardware. Or pay per instruction executed. IBM had lots of creative ways to charge for mainframes.
You are correct, it is no different, and it's wrong when Intel does it too, assuming the CPU hardware really is identical and it's purely a software lock that the user can't work around.
This is a good point, but there's also a key difference.
There's a big difference between "code being in one file" and "code being in one function." It sounds like the OP had something reasonably close to "one function," whereas the HN code has a lot of (what appear to be) small well designed methods.
# Retries a command with backoff.
#
# The retry count is given by ATTEMPTS (default 100), the
# initial backoff timeout is given by TIMEOUT in seconds
# (default 5).
#
# Successive backoffs increase the timeout by ~33%.
#
# Beware of set -e killing your whole script!
#
# Usage: ATTEMPTS=5 TIMEOUT=2 try_till_success some_command arg1 arg2
function try_till_success {
    local max_attempts=${ATTEMPTS:-100}
    local timeout=${TIMEOUT:-5}
    local attempt=0
    local exitCode=0
    # Use arithmetic comparison; [[ $attempt < $max_attempts ]] would
    # compare the numbers as strings.
    while (( attempt < max_attempts ))
    do
        "$@"
        exitCode=$?
        if (( exitCode == 0 ))
        then
            break
        fi
        echo "Failure! Retrying in ${timeout}s.." 1>&2
        sleep "$timeout"
        attempt=$(( attempt + 1 ))
        timeout=$(( timeout * 40 / 30 ))
    done
    if (( exitCode != 0 ))
    then
        echo "You've failed me for the last time! ($*)" 1>&2
    fi
    return $exitCode
}