
Actually, it's interesting that you mention this. Technically, you can also make it atoma-ku ("ato" as in "later", and "ma-ku" as in "mark"). I wonder if the dip-tone was intentional, to make it sound more like a foreign word.


Too bad it's a queue and not a stack. I'll show myself out now.


How so? I mean, the last paper placed (on top) is the first to be fed into the machine, right? That fits the definition of a stack as far as I can see.


Well, I guess it depends on how you insert new pages. If you remove the machine, put in new paper, and put the machine back, then it is a stack. If you lift all the papers along with the machine, then it is a queue. I believe the second method is more convenient unless you have five feet of paper.
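
For what it's worth, the difference is easy to see if you model the paper tray as a collections.deque (a toy sketch; the page names are made up):

from collections import deque

tray = deque(["page1", "page2", "page3"])  # bottom ... top; the machine feeds from the top (right end)

# Method 1: remove the machine, drop new paper on top, put the machine back -> LIFO, i.e. a stack
tray.append("new page")
print(tray.pop())        # "new page" feeds first

# Method 2: lift the whole pile with the machine and slide new paper underneath -> FIFO, i.e. a queue
tray.appendleft("newer page")
print(tray.pop())        # "page3", the old top sheet, still feeds first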


meh, just use sum instead of counter

sorted([(i,sum([i==j for j in s.split()])) for i in list(set(s.split()))],key=lambda x:x[1],reverse=True)[:10]
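
For comparison, the Counter-based version being waved off here would presumably be something like this (assuming s is the same input string):

from collections import Counter

Counter(s.split()).most_common(10)  # same top-10 (word, count) pairs

The sum one-liner re-scans the word list once per unique word, so Counter is also the asymptotically cheaper of the two.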


I don't know what Japanese programmers use as a social network, but try making a mixi group for hackers and see if you attract any people.


I've been wanting people to post their eigenvectors/eigenvalues from facial recognition training for a while. Imagine a crowdsource-trained facial recognition database!


Yea, it sounds a bit mechanical. That said, based on the priming he gives with the talk on waveforms, I'm guessing they are simply breaking the English speech into waveforms with corresponding frequencies and mapping those over to their Chinese counterparts.

It would be even cooler if they created a distribution of possible sound frequencies for each syllable in both English and Chinese, determined where in that distribution his speech pattern lies, and transferred the "ranking" to the other distribution. That way you get a subjective transformation instead of an objective one. :)
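
A rough sketch of that percentile-transfer idea, with made-up samples standing in for the real per-syllable frequency distributions (everything here is hypothetical):

import numpy as np

english_freqs = np.random.normal(120, 25, 10000)   # stand-in English frequency samples (Hz)
chinese_freqs = np.random.normal(180, 40, 10000)   # stand-in Chinese frequency samples (Hz)

def transfer(f_english):
    rank = (english_freqs < f_english).mean() * 100  # where the speaker sits in the English distribution
    return np.percentile(chinese_freqs, rank)        # the same rank, mapped onto the Chinese distribution

print(transfer(135.0))  # slightly above average in English -> slightly above average in Chinese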


and yep


yep


My buddy at Microsoft tells me that in terms of automation, they use the dogma that if you need to do it once or twice, you just do it by hand. If you need to do it more than that, you automate it.


Dijkstra says it even more succinctly: 'two or more, use a for'.


That is ever so subtly different.


A good rule of thumb. A more nuanced look proves even more helpful: there are gradations between manual and automatic. Semi-automation can be enormously productive, too. (And if you go beyond automation itself, there's making your scripts bullet-proof under all circumstances, which is useful if you are doing a task not just a few hundred times but even more often.)


There are always only three cases: 0, 1, and many


To elaborate on the Schultz paper: the dopaminergic reward feedback mechanism applies to unexpected rewards only. Repeated stimulation attenuates the neuronal signal.

A take-home message from this is that an ideal Diablo 2 drop has an expected drop time of T, and that T should never change. The reward, of course, can come from the increased benefit to character strength, or from the rare item itself (Herald of Zakarum + Barnar Star ftw).

It seems to me that the T for Diablo 3 is much greater than that of Diablo 2, and because set items and unique items are pretty much non-existent, the satisfaction of getting an item from drops is also non-existent. Alas, the AH lets you gain strength when you are stuck, but because the time it takes to go to the AH is not a function of T (you can go whenever you want), it does not really contribute to the overall game reward mechanism.

Not to mention the horrible infrastructure they have built for the AH as Lewisham mentioned.
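
To put the prediction-error point from the first paragraph in code: a minimal Rescorla-Wagner-style sketch (not taken from the Schultz paper, just the standard textbook update, with arbitrary numbers) shows why a fully expected reward stops generating a signal:

alpha = 0.3        # learning rate (arbitrary)
expected = 0.0     # learned expectation of the drop's value
reward = 1.0       # the item drops every time

for trial in range(1, 6):
    prediction_error = reward - expected       # the dopamine-like signal
    expected += alpha * prediction_error       # the expectation catches up
    print(trial, round(prediction_error, 2))   # 1.0, 0.7, 0.49, 0.34, 0.24 -> fading toward zero

Keeping the drop unpredictable, with a fixed expected time T but random actual timing, is what keeps that error term, and hence the reward signal, alive.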


To elaborate on the Schultz paper: the dopaminergic reward feedback mechanism applies to unexpected rewards only. Repeated stimulation attenuates the neuronal signal.

As I'm still working through this stuff, and am certainly no psychologist or neurobiologist, did I say anything above that was contradictory to this? It sounds like you know more than I do about the subject (I'm dabbling in it for my Computer Science thesis) and I'm always paranoid that I'm overstating or overgeneralizing some experiment result.


Oh no, you did not contradict it at all. It is one of the vital points made in the paper, and I wanted to make sure that anybody who isn't interested in reading the actual paper still gets this particular point. I'm a researcher in a neuroscience lab, and this is one of the foundational papers behind the modern understanding of the reward mechanism. Kudos to you for linking it here :)


Dopamine doesn't just spike at unexpected "rewards," though. It can also spike at surprising loud noises, etc.

