
This is not so crazy - this was exactly the progression in screw drives. First came slotted screws (2-fold rotational symmetry), then Phillips/Pozidriv/Robertson (4-fold), and now Torx (6-fold). Going back to slotted now is actually irritating.

Of course the constraints and trade-offs are very different... still it would be a piece of cake to plug them in on the backside of a box with your eyes closed.


I think the paper is a satire on "evidence-based medicine" - a framework which insists upon randomized controlled trials as the primary basis for medical decisions.

Notably, this is exactly such a trial, while also being absolutely irrelevant to the question of needing a parachute or not (the trial planes were all on the ground at the time of the jumps).


I guess I'm insane then... I built his pantorouter, and am half-way through his 16" bandsaw build. One good reason to build the bandsaw is that it's actually a very high-quality machine - if executed well. It has an extremely stiff frame (for a relatively light weight) and a large capacity. Buying a saw of similar quality and size would cost a lot more than a few hundred dollars. That is in fact part of the reason I'm building it (though mostly it's for the challenge).

The other advantage of machines you make yourself is that you can always fix and improve them yourself.

Both machines are an engineering challenge, by the way. In the pantorouter, alignment/calibration was a big issue for me - there are too many degrees of freedom, combined with inevitable slop in a mechanism that has to take a lot of force from the router. Also, even after calibration you need to be careful not to cut too deeply, and to cut in a consistent direction, to prevent the router bit from pulling too hard. But it's all worth it once you're making perfect mortice-and-tenon joints with minimal effort :-)


Or: there are a lot of people who think that they're in on a scam, but in fact are lower down the pyramid than they are told. That explanation requires no faith, and can also explain participation of entirely rational actors.


A place where autocorrect might be considered is in REPLs. Out of habit I still regularly write "print 'a'" in the Python REPL although I've been using Python 3 for a while. You get:

SyntaxError: Missing parentheses in call to 'print'. Did you mean print("a")?

Well yes... obviously... so please just print it.
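
For what it's worth, here is a rough sketch (entirely my own illustration, not a real Python feature) of what such a REPL autocorrect could look like, built on the standard library's code.InteractiveConsole:

  # Illustrative only: a console that rewrites Python 2 style "print 'a'"
  # into a call before handing it to the interpreter.
  import code
  import re

  class AutocorrectConsole(code.InteractiveConsole):
      _print_stmt = re.compile(r"^\s*print\s+(?!\()(.+)$")

      def runsource(self, source, filename="<input>", symbol="single"):
          match = self._print_stmt.match(source)
          if match:
              source = f"print({match.group(1)})"
          return super().runsource(source, filename, symbol)

  if __name__ == "__main__":
      AutocorrectConsole().interact()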


That would mean adding a rule to the language. The rule of "you can use 'print' as a statement". A rule that, time after time, has been shot down.

Which leads us to the real issue at hand: if the compiler is going to do anything by itself, that means it is following well-defined rules. Therefore, whatever automatic thing the compiler does is part of the language. And sometimes the design rules of said language simply do not allow for that.


Be careful what you wish for. Parsers guessing at the author's intention is what gave us HTML.


Looking at the documentation, einops seems to only implement operations on a single tensor, so it's quite far from a general replacement for einsum, which can perform tensor products on an arbitrary number of tensors.
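
For illustration (this snippet is mine, not taken from either library's docs), np.einsum contracting three operands in a single call - something a single-tensor rearrange can't express:

  import numpy as np

  A = np.random.rand(2, 3)
  B = np.random.rand(3, 4)
  C = np.random.rand(4, 5)

  # The chained matrix product A @ B @ C written as one einsum over three operands.
  D = np.einsum('ij,jk,kl->il', A, B, C)
  assert np.allclose(D, A @ B @ C)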


The idea of using a multi-agent game, rather than minimizing a single loss function, is already used in GANs - and there it seems to be a very powerful generalization of optimization. Personally I would say the idea is extremely interesting.

GANs have lots of applications, and PCA is useful for various tasks in data-analysis: compression, feature selection, reduced-dimensional modelling. I doubt finding applications will be a problem.

Reliably finding solutions (Nash equilibria) is much harder than optimizing for a minimum loss, however. So I see these being much harder to train than loss-based models.
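
As a toy illustration of why (my own sketch, not from the paper): in even the simplest two-player zero-sum game, f(x, y) = x*y, naive simultaneous gradient steps spiral away from the Nash equilibrium at (0, 0) instead of converging to it - whereas minimizing a single loss just goes downhill:

  # Player 1 minimizes f over x, player 2 maximizes f over y.
  x, y = 1.0, 1.0
  lr = 0.1
  for _ in range(100):
      gx, gy = y, x                      # gradients of f(x, y) = x*y
      x, y = x - lr * gx, y + lr * gy    # simultaneous descent/ascent step

  print(x, y)  # the iterates drift away from (0, 0) rather than settling there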


> applications

It appears to be a very important result.

I should say this is the first CS paper I've ever read that evoked a mild sense of dread in me - although the positive applications can, and no doubt will, be substantial.


I've been worrying about tech for a while now, but mostly on the cybersecurity side since I figured AI was still too far away. That more or less changed when I read a post by Gwern on GPT-3, where he makes a compelling case that "there is a hardware overhang" or, in other words, as research in AI advances we're going to find that we do not need substantially more compute in order to achieve the capabilities of advanced AI. I think this was the post where he talked about it:

https://www.gwern.net/Scaling-hypothesis

"GPT-3 could have been done decades ago with global computing resources & scientific budgets; what could be done with today’s hardware & budgets that we just don’t know or care to do? There is a hardware overhang."

And thinking about it more, this multi-agent method should work in the offensive cybersecurity world if one could figure out how to crack the reward functions like they did for PCA. I think the core insight they found was a hierarchy of agents. If one could formulate the reward functions for the different agents intelligently enough, it could allow layered privilege escalation to achieve RCE without random thrashing.


Thanks for the link.


Not at all serious but... maybe all electrons have the same mass, charge, etc. because there is only one electron, bouncing backwards and forwards through time. As it passes backwards we see it as the anti-electron (positron). When they meet they annihilate from our perspective, but that's just the electron being reflected and becoming a positron heading backwards.

To be clear this is all not at all consistent with observations - just a fun(?) thought experiment.



Out of interest, what observations is this not consistent with?


The argument against is summarized in the Wikipedia article.

Basically it is this:

We measure electrons in different places all the time.

Due to the speed of light it can't instantly move from A to B, so for this to actually be one electron, it would have to travel back in time to be in each place at the right time.

However, an electron traveling back in time would appear as a positron, so if that were what was going on, we should be seeing roughly as many positrons as electrons, as the one electron rushes around to appear as an electron wherever it needs to.

Except we don't: electrons outnumber positrons by a huge margin.


> However, an electron traveling back in time would appear as a positron, so if that were what was going on, we should be seeing roughly as many positrons as electrons, as the one electron rushes around to appear as an electron wherever it needs to.

What about this part of the article though...?

  "According to Feynman he raised this issue with Wheeler, who speculated that the missing positrons might be hidden within protons."


As noted, the discussion between Wheeler and Feynman was in 1940, long before the development of the Standard Model[1].

In the Standard Model there is a sea of virtual particles in the nucleus, but they're virtual and hence not real in the sense that the positron in the One Electron model would have to be. At least that's my understanding.

Also, electrons can travel over large distances - CRT monitors did that all the time, for example. So I'm not entirely sure how Wheeler imagined that hiding the positrons in the nucleus would solve the whole positron problem.

[1]: https://en.wikipedia.org/wiki/Quark#History


The imbalance in observed particles could come from the fact that we move in the same (time) direction as electrons but in the opposite (time) direction to positrons, couldn't it?


I went to one of these Sunday dinners about 5 years ago. I was living in Paris and was taken there by friends - who didn't explain the concept to me in advance. So I found myself in someone's house, with a massive buffet table, and was expected to help myself - there was then a long conversation with my friends about what was going on. Much incredulity on my part :-)

I met Jim very briefly - I guess I was one of hundreds of new people he meets every month. Mostly I hung out with other guests - who were mostly expats (I didn't hear much French) - and who also didn't know Jim very well.

He definitely stepped outside the box with this idea. This is something almost anyone with a big house could do in any place, but which only he actually did.


Maths notation can be wonderfully concise and precise, so it is worth thinking about following it closely when programming. One of my favorite examples of this is the numpy `einsum` call [1]. It implements the Einstein summation convention [2] - thereby making it feasible to work with the many dimensions of high-rank tensors.

E.g. this (Latex):

$C_{ml} = A_{ijkl} B_{ijkm}$

becomes (in Python):

C = einsum('ijkl,ijkm->ml', A, B)
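
A self-contained version of the same call (the shapes here are my own arbitrary choice, just to make it runnable):

  import numpy as np

  A = np.random.rand(2, 3, 4, 5)   # A_{ijkl}
  B = np.random.rand(2, 3, 4, 6)   # B_{ijkm}

  # Sum over the repeated indices i, j, k; the free indices m and l remain.
  C = np.einsum('ijkl,ijkm->ml', A, B)
  print(C.shape)   # (6, 5)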

[1] https://docs.scipy.org/doc/numpy/reference/generated/numpy.e... [2] https://en.wikipedia.org/wiki/Einstein_notation

