f f f f f f d f f f f f f d d f f f f d f d f f f d f f d f f f f f f d d d f f f d d f d f f d d f f d f d d f f f f d f d f d f f f d f f d f f f f f f d d d d f f d d d f d f d d d f f f d d f d d f d f f d d f f d f d d f f f f d f d f d f f f d f f d f f f f f f d d d d d f d d d d f f d d d f d f d d d f f f d d f d d f d f f d d f f d f d d f f f f d f d f d f f f d f f d f f f f f f d d d d d d f d d d d f f d d d f d f d d d f f f d d f d d f d f f d d f f d f d d f f f f d f d f d f f f d f f d f f f f f f d d d d d d f d d d d f f d d d f d f d d d f f f d d f d d f d f f d d f f d f d d f f f f d f d f d f f f d f f d f f f f f f d d d d d d f d d d d f f d d d f d f d d d f f f d d f d d f d f f d d f f d f d d f f f f d f d f d f f f d f f d f f f f f f d d d d d d f d d d d f f d d d f d f d d d f f f d d f d d f d f f d d f f d f d d f f f f d f d f d f f f d f f d f f f f f f d d d d d d f d d d d f f d d d f d f d d d f f f d d f d d f d f f d d f f d f d d f f f f d
The carefully crafted (non-random) sequence above tries to fake out the script and succeeds every time. It reveals the script's own predictability, and why you shouldn't rely on it too much as a yardstick for testing your brain's pseudo-randomness.
Clever girl. But you rely on the "keyhole property" that loses considerable information about you as a physical entity to make this joke. You would not be able to hide your micro-expressions, or the variability of your typing, from a properly placed sensor.
Got it to 0.5991561181434598 after a while (totally random would be "0.5").
Not sure how "free will" comes into play; if there were indeed free will, I could have freely decided to press only one key, and the Oracle would have had close to 100% certainty.
Rather, what it measures is randomness or predictability, which is not the same thing as free will (especially "after the fact", e.g. after the choice is made).
Right. It tries to measure how random your brain's PRNG is, which has nothing to do with free will. A simple computer has no free will at all, but after being instructed to type randomly, it will score close to 0.5 without fail.
The "free will" quote comes from a Berkeley quantum computing student, who chose keys as independently of pattern as he could [1], and this seems to take it out of context. It might make sense to retitle this submission to something marginally less clickbaity, like "test how predictable you are."
Being quite unpredictable just means that you do a lot of random things -- but says nothing about how you came to will them.
If someone, for example, was threatened and ordered to follow for years what a program with millions of suggestions told him to do, he would be very unpredictable, but totally without free will.
And the inverse: one could choose to do the same routine all his life, and thus be very predictable, but it would still be his free choice.
Fair enough, but I was only speaking to the "clickbait" charge, by arguing that it's not misleading to characterize this as a test of "free will" in the typical usage of the term.
But yeah, it's not the same as being unpredictable: that would look like seizures, not "gosh, which ice cream flavor will he pick?"
Those people are wrong. Other commenters dealt with that below.
RE free will itself, I'm thinking it has more to do with metacognition and general complexity of thoughts. People may have free will, but they're running the same "free will algorithm" - give them the same inputs, and most of the time you'll get the same outputs. A lot of people don't want to consciously accept that, but it's a fact. It's e.g. why economics works at all - because people are predictable at scale.
But when do we most often reference the concept of "free will" in practice? In evaluating degree of responsibility for decisions. "You did X, which was the most obvious and beneficial in the short-term option, but you could've done Y or Z, which would be better - therefore you will suffer consequences". Or, "I won't do X even though the situation tries to manipulate me into doing it; I notice I'm being manipulated so I'll do Y instead". Scenarios like these refer to the human ability of a) non-greedy optimization (long-term planning), and b) being able to go meta many levels up, to notice their own patterns of thoughts and use them as an input to the thinking process. A feedback loop over metacognition if you don't mind.
This is still all pretty deterministic. The apparent randomness - I think - comes from the fact that a) two people never have perfectly the same set of inputs, because the internal state is dependent on one's life history, and b) in some cases slight variations in those inputs cause huge variations in outputs (butterfly/hurricane and all that). In a way, the main connection between "free will" and randomness may be just that a human doesn't have the capability to perfectly predict the thought process of another human. We can do that to inanimate objects, we can do that to algorithms we write - at least in principle. But we know that other humans will always surprise us (at least because they can notice we're trying to predict them and start acting random to mess with us).
> "give them the same inputs, and most of the time you'll get the same outputs.... people are predictable at scale."
That's not quite how economics work. It's more like, outputs from a given set of inputs can be characterized in a statistical way at scale (which is subtly different from saying that individuals are predictable at scale.) It's not that every person will choose to buy pizza if it's $11.99 but not if it's $12.00, but instead that if you offer pizza for $11.99 you'll generally get between X and Y sales per day, and if you increase that to $12.00 you can expect to lose about Z sales, and you can mess with that curve to try to maximize revenues or whatever.
I would say that what makes "free will" an interesting concept is that it's about how one chooses to prioritize different values -- how they express meaning by their choices. In the sense of a "free will algorithm", it's how you choose to weight different factors in your algorithm. How do you weight comfort, pleasure, stimulation, challenge, avoidance of pain, etc.? How do you weight those things for others? In practical usage, we talk about "free will" when someone makes a choice that surprises us, showing that their value system or their evaluation matrix is different from ours.
If I had to guess? I'd say most people believe they are capable of choosing their actions - but they would also admit to a reasonably predictable set of habits.
Yes, I think most people don't bring predictability into it at all. Free will is about the feeling of "I could do, or could have done, X instead of Y", but your search space only ever included X, Y, and Z. Just knowing your search space makes you highly predictable even if you maintain the freedom to choose inside that space. (And however we choose, it is not randomly, that would be terrible, but more chaotic in the same sense that weather has too many unknown inputs that we can't predict all the details we want to perfectly.)
Whether or not determinism and free will are compatible is a very intricate debate that your throwaway remark doesn't do justice to. Look up Compatibilism.
The former is the metaphysical idea that all events are causally linked, a belief which we can never confirm or refute. The latter describes something's observable degree of predictability.
They're not related concepts at all. This demo has nothing to do with free will, Determinism, or Compatibilism - but it's an interesting study in predictability.
Is the distinction roughly "causally determined" vs "logically determined" ?
If we are talking about an abstract computing device, and whether it is deterministic, we mean whether its next state is always uniquely logically determined by the current state, correct?
Hmm, if an abstract machine behaves in a way that is uniquely logically determined, but which is not computable, do we consider such machines to be deterministic? I would think so, but I am not sure. We would be unable to predict it; does that make it not deterministic (in the computing sense)?
Did they predict that I'd wire the {d,f} input to a PRNG? Did they predict that I have a lot of digits of pi memorized, or that I could use the hex 'spigot' formula to generate digits of pi, then convert d=0, f=1 (or the inverse)?
Did they predict which method I'd wire it up with and the exact source of the program I'd use to do so?
I'm going to hazard a 'no', so it looks like I have free will.
The only way to assume that those are synonymous would be to ask if one can freely will themselves to be random. However, even that has more to do with aptitude and ability than free will. E.g. I can freely will myself to try and be an NBA player, but I'm bound by physical limitations. However, my inability to be an NBA player would not normally be considered a lacking of free will.
The "free will" bit is more of a joke than anything, based on a quote from Scott Aaronson's book (explained in the GitHub Readme):
> I couldn’t even beat my own program, knowing exactly how it worked. I challenged people to try this and the program was getting between 70% and 80% prediction rates. Then, we found one student that the program predicted exactly 50% of the time. We asked him what his secret was and he responded that he “just used his free will.”
There's another Aaronson call-back: he's complained about how trivial results (like the core part of the Free Will Theorem) will get a lot more attention if you mention free will:
> I wrote a review of Stephen Wolfram's book a while ago where I mentioned this, as a basic consequence of Bell's Theorem that ruled out the sort of deterministic model of physics that Wolfram was trying to construct. I didn't call my little result the Free Will Theorem, but now I've learned my lesson: if I want people to pay attention, I should be talking about free will!
If .5 is completely random, what does it mean that I got it down to 0.4781666666666667? I may have gotten it lower than that, but got tired of taking screenshots. Eventually I ended up back around .53
It means you were able to fake the algorithm out. Consider a very simple algorithm that guesses based on your previous two characters:
df → d
dd → d
fd → f
ff → f
This is presumably much simpler than what the page is doing, but will nonetheless work pretty well on humans by looking for alternations and long runs.
This algorithm is trivially defeated (chance of guessing 0) by the string
ffddffddffdd
Since that holds a pattern just long enough to get the algorithm to guess based on it and then switches.
For a little while, either intentionally or not, you must have been doing a more sophisticated version of this.
With perfect knowledge, it should be possible to get it down to 0.
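The toy predictor above fits in a few lines. Here "accuracy" is the fraction of keys it guesses correctly once it has two keys of context (the function name and scoring details are mine, for illustration):

```javascript
// Toy 2-gram predictor: look up the previous two keys in a fixed
// table and guess the mapped key, matching the four rules listed above.
const table = { df: "d", dd: "d", fd: "f", ff: "f" };

function accuracy(input) {
  // Fraction of correct guesses, starting once 2 keys of context exist.
  let correct = 0, total = 0;
  for (let i = 2; i < input.length; i++) {
    if (table[input.slice(i - 2, i)] === input[i]) correct++;
    total++;
  }
  return total ? correct / total : 0;
}
```

Against pure alternation (`fdfdfd…`) or long runs it guesses perfectly, which is why it works on humans, while the crafted string `ffddffdd…` defeats it completely (accuracy 0).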
Any random result series has such "unlikely" occurrences -- you can get a hundred 6s in a row throwing dice, for example; it's just rare. It's the long-term convergence that matters.
Yeah, I feel like randomness and "free will"-ness are pretty orthogonal. That said, I was able to keep the thing below a maximum of .56-ish without using any of the tricks others have suggested here, like picking an insignificant digit in the accuracy number and pressing 'f' or 'd' based on its value, or using the even/oddness of successive positions in π, or whatever.
So, apparently, my brain's PRNG is pretty random. But I still have absolutely fuck-all idea whether or not I have "free will".
This is silly. With enough people (especially self-reporting), a large percentage will be well below 50% (unless it really is an oracle). Almost as soon as I started, it quickly dropped into the 30s. Even after several minutes and hundreds or thousands of keystrokes, it hardly moved over 50%.
You can use its own output against it. (Which means, ironically, it does not follow "know thyself", a maxim said to be from the Oracle at Delphi.)
Fixate on a number fairly late in the sequence (the millionths place seems to work well). If that number is 5-9, push 'f'. Otherwise, push 'd'. Keeps it pretty consistently around 0.5.
> In a class I taught at Berkeley, I did an experiment where I wrote a simple little program that would let people type either “f” or “d” and would predict which key they were going to push next. It’s actually very easy to write a program that will make the right prediction about 70% of the time. Most people don’t really know how to type randomly. They’ll have too many alternations and so on. There will be all sorts of patterns, so you just have to build some sort of probabilistic model. Even a very crude one will do well. I couldn’t even beat my own program, knowing exactly how it worked. I challenged people to try this and the program was getting between 70% and 80% prediction rates. Then, we found one student that the program predicted exactly 50% of the time. We asked him what his secret was and he responded that he “just used his free will.”
If you just type 01101110010111011110001001..., you should evade any simple LZW-type compressor, as this string is (viewed as a radix expansion in binary) a normal number.
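That string is the binary Champernowne construction: the integers 0, 1, 2, … written in binary and concatenated. A sketch to extend it as far as you like:

```javascript
// Binary Champernowne construction: concatenate 0, 1, 10, 11, 100, ...
// The result, read as a binary expansion, is a normal number.
function champernowneBits(count) {
  let s = "";
  for (let i = 0; i < count; i++) s += i.toString(2);
  return s;
}
```

Mapping 0/1 to 'f'/'d' gives the keystroke stream; the first ten integers reproduce the string quoted above.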
It means either both the poster and the bot have free will or neither of them do!
But seriously, the intuitive concept of free will, something like "unconstrained choice" leads to the idea that free will means simulating randomness - but once you're there, it looks more or less like nonsense - "acting randomly"
I think the intuitive concept of free will is more like "able to take meaningful actions" -- which means choices that express intent or desire. Sometimes your intent is to be unpredictable, to fool the algorithm. But sometimes your intent is to be consistent.
You're playing this game every day, with every website you go to, every store you visit, every app you use, every interaction you make. Unless you live a (probably less comfortable) life in a rural area. These kinds of tech for the most part make our lives better.
A less efficient method I used was: press a key until the probability jumps up, then switch keys (probability will go down, mostly) until it goes up again. Rinse/repeat.
I tried that. It learns what you're doing pretty quickly.
I was doing better by just trying to continually break whatever pattern I had fallen into, and occasionally throwing in some long predictable patterns (i.e. all the same key) to simulate the low chance of a long predictable string occurring. That had me down in the ~.4 range for a while. (Edit: Down to 0.23529 this try)
When I was little (and had too much free time) I memorized many digits of some transcendental numbers. The only (semi)practical use I've found for that knowledge is that I can use it as a reasonably good pseudorandom stream, if for some reason I have to choose things pseudorandomly. Picking an arbitrary starting point in pi and pressing 'd' for even digits and 'f' for odd digits, I get around 0.49 accuracy with this oracle.
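The digit-parity strategy is easy to mechanize if you don't have the digits memorized (hard-coding the first digits of pi here, just for illustration):

```javascript
// Turn a string of decimal digits into keypresses:
// 'd' for even digits, 'f' for odd digits.
const piDigits = "3141592653589793238462643383279";
const presses = [...piDigits]
  .map(d => Number(d) % 2 === 0 ? "d" : "f")
  .join("");
console.log(presses);
```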
Mm I did something similar, doesn't need memorization though: arbitrarily form sentences, go through the letters in order, and pick 'd' for a-m and 'f' for n-z. Of course there will be some patterns, but it's sufficient for this. And I think this strategy uses more free will :P
I imagined how I would have coded it and so I devised a system to get it down as low as possible. I ended up with 0.44 accuracy. Pure random would have been better :)
That might be a slippery slope though. Is a known random sequence less random than an original one? Should a randomness detector keep a database of all random sequence ever produced to give lower scores to known ones?
The keypresses I entered were not predictable by the Oracle. I generated the keypresses by first picking an arbitrary integer in [0,59]. I then (using simple mental arithmetic) used that integer to seed a PRNG by George Marsaglia [1] to generate a stream of pseudo-random decimal digits. For each digit, if it was in [0,7], then I took the 3-low order bits of its binary representation, in order from least significant to most significant bit (0=f, 1=d). If the digit was 8 or 9, I discarded it. The PRNG I used has period 59, and only 20% of the output digits would have to be discarded. Therefore one could use this approach to generate up to 141 pseudo-random bits.
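For the curious, here's a sketch of that scheme in code. Two details are my assumptions, not the original poster's: that the generator is Marsaglia's well-known "mental" RNG (replace n = 10a + b with 6b + a, a disguised multiplication modulo 59), and that the emitted digit is the units digit of each state.

```javascript
// Marsaglia-style "mental" PRNG: from state n = 10a + b,
// the next state is 6b + a. Emits the units digit of each state.
function* mentalDigits(seed) {
  let n = seed;
  for (;;) {
    n = 6 * (n % 10) + Math.floor(n / 10);
    yield n % 10;
  }
}

function keys(seed, count) {
  // Digits 0-7 contribute their 3 low-order bits, least significant
  // first (0 = 'f', 1 = 'd'); digits 8 and 9 are discarded.
  const out = [];
  const gen = mentalDigits(seed);
  while (out.length < count) {
    const d = gen.next().value;
    if (d > 7) continue;
    for (let i = 0; i < 3 && out.length < count; i++) {
      out.push((d >> i) & 1 ? "d" : "f");
    }
  }
  return out.join("");
}
```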
Do you have any references on the statistical properties of the primes % 4 sequence? When converted to coin flips, primes % 4 fails the statistical randomness tests at http://faculty.rhodes.edu/wetzel/random/mainbody.html
See results at http://imgur.com/iaHBNNr ("Your nth attempt" is primes % 4, "Coin nth attempt" is the Marsaglia PRNG). Note that primes % 4 has too many runs, its prob(H given H) is too low, and its prob(H given T) is too high.
At first I was stuck at ~0.95, but I was just striking the keys really fast. Then I concentrated, looked at the keys, and really tried to feel which key I wanted to press, and it worked: I was able to score lower, ~0.4. I'm computationally irreducible for now.
But how does it know to use the opposite for him and not for other guesses? I make a similar mistake when I try to remember something by noting "it's not the choice I'm naturally inclined to choose, it's the other one". After I get used to using that heuristic, the correct choice becomes the one I'm naturally inclined to do, but then I have trouble remembering "is it the choice I'm inclined to do, or the opposite?" I've since ditched that heuristic whenever I catch myself doing it, because it's quite detrimental.
That's the trouble with game theory-like heuristics. To a layman like me it's unclear how many times do you want to recursively apply it.
The 2/3 of the average guess game is an interesting illustration where if you take the game theory approach and recursively apply it you'll end up with 0 all the time:
https://en.m.wikipedia.org/wiki/Guess_2/3_of_the_average
Weird. When I tried to just "type random" without looking at the keys, it was routinely getting me around 75%. I tried looking at the keys as you described, and my score immediately dropped to around 50% and was even below 50% for a bit. Then it crept back up to around 60% when I started talking to a co-worker while typing.
There have been a number of studies showing that humans are bad at random number generation.
e.g. "Humans cannot consciously generate random numbers sequences" http://www.ncbi.nlm.nih.gov/pubmed/17888582
I guess that goes back to the original premise of the experiment. That "free will" is unpredictable. And the way I thought about it is that for me to have a true inclination -- not to just do it "randomly" -- I'd have to look at the keys and feel something.
I was able to hang at around 0.5 by remembering something I'd read on HN about identifying non-random strings because they did too much alternating and not enough repeating.
Instead of trying to pick randomly between f and d, I picked numbers between 0 and 9, with an intentional bias toward lower numbers, and those would determine the length of the next sequence. So if I picked 4, 3, 7, 1, 3, 2, 2, 5, 0, 4 as my numbers, that would translate to
ddddfffdddddddfdddffddfffff[0=don't change]ffff.
I suspect a more sophisticated program would be able to pick apart my strategy, but it worked really well against this particular algorithm.
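The run-length scheme above is easy to mechanize; here's a sketch (naming is mine), where a 0 emits nothing, so the runs on either side of it merge into one longer run of the same key:

```javascript
// Alternate between 'd' and 'f'; each number gives the length of the
// next run. A 0-length run emits nothing ("don't change"), so the
// adjacent runs of the other key merge together.
function runsToKeys(runs) {
  let key = "d", out = "";
  for (const n of runs) {
    out += key.repeat(n);
    key = key === "d" ? "f" : "d";
  }
  return out;
}
```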
I counted up from 0 in binary (excluding leading zeroes) and even with a completely predictable series, it only did slightly better than 50%. I was expecting more (that on-line game of 20 questions was freaky good).
The explanation of how it works says it uses 5-grams to predict the next bit, so if you use any sequence in which all 6-grams occur equally often, this won't be able to predict you at all.
Actually, an interesting challenge might be to construct a sequence that gives the Oracle as low a score as possible (is there a nicer simpler way to describe this than simulating the Oracle's code and always choosing what it doesn't expect?).
Sure, is there an easy way to generate that algorithmically without maintaining an explicit history? Is there a well-known sequence with that property? Do Gray codes have it, for example?
You'd want a sequence that contains every 6-gram exactly once and ends with the same 5 digits as it starts with: such a sequence would be 2^6 + 5 = 69 bits long, and then you'd repeat the first 64 over and over. (Or one that contains every 6-gram exactly once up to cyclic shifts.)
The Gray code turns out to work if you require the n-grams to be aligned to n-character boundaries. So if you interpreted "00001111" as containing only the two 4-grams "0000" and "1111", you're OK; but if you allow arbitrary alignments, so that "00001111" contains the 5 4-grams "0000", "0001", "0011", "0111", and "1111", the Gray code doesn't work.
For example, the 4-bit Gray code (under this interpretation) when concatenated into a single bitstream begins "00000001", which then contains the 4-grams "0000", "0000", "0000", "0000", "0001", so the predictor gains an advantage right at the outset. (I didn't check whether it then loses its advantage over time... that's an interesting question.)
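To make the 5-gram-defeating sequence concrete: the 69-bit string described above is a binary de Bruijn sequence B(2, 6) with its first 5 bits re-appended, and the classic greedy "prefer-one" construction produces exactly such a linearized sequence (the construction is my choice for illustration; the thread doesn't specify one):

```javascript
// Greedy "prefer-one" construction of a linearized binary de Bruijn
// sequence for n-grams (n >= 2): start with n zeros, then repeatedly
// append a 1 if the new n-gram is unseen, else a 0 if that is unseen,
// else stop. The result has length 2^n + n - 1 and contains every
// n-gram exactly once; its last n-1 bits equal its first n-1 bits.
function deBruijnLinear(n) {
  const seq = Array(n).fill("0");
  const seen = new Set(["0".repeat(n)]);
  for (;;) {
    const ctx = seq.slice(-(n - 1)).join("");
    if (!seen.has(ctx + "1")) { seq.push("1"); seen.add(ctx + "1"); }
    else if (!seen.has(ctx + "0")) { seq.push("0"); seen.add(ctx + "0"); }
    else break;
  }
  return seq.join("");
}
```

Mapping 0 → 'd' and 1 → 'f' (or vice versa) and then cycling the first 64 bits gives a keystroke stream in which every 6-gram is equally frequent, so a 5-gram predictor gains nothing.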
I have been having way too much fun with this, but it really goes off the rails sometimes. I am currently in a stream where the correct answer is "Ambrose Burnside". It asked me if my character has been dead for more than 100 years, then two questions later asked me if my character uses musical.ly. :/
I feel like it sometimes just wastes questions near the end when it's already narrowed the possibilities down. Maybe that's even on purpose, for showy effect.
After all, it's pretty impressive when it appears to correctly get Frank Sinatra out of "does your character play video games for a living."
I got it down to 0.51-something by simply switching keys iff the measure increased. Ironic, since I was exhibiting no "free will" by following that rule :)
Well, it's less about your 'free will' and more about one's inability to generate a random number.
I'm not going to take the time to test it, but if you flipped a coin before each key press, this algorithm would eventually get to 50% accuracy. But in the short term it would probably be <50%, because it assumes it won't actually get random input.
Actually, if a player is flipping a coin, it is literally impossible to design an algorithm that has expectation other than 50%. Even in the short term, it cannot have an expectation that's <50%.
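A rough numerical check of this (my own sketch, with an adaptive 2-gram predictor standing in for the Oracle's 5-gram model): against a fair coin, the predictor's accuracy sits at ~50% no matter how much data it collects.

```javascript
// Feed fair-coin keypresses to an adaptive 2-gram predictor that
// guesses whichever key most often followed the current 2-key context.
function simulate(n) {
  const counts = {}; // context -> { f: count, d: count }
  let history = "", correct = 0, guesses = 0;
  for (let i = 0; i < n; i++) {
    const key = Math.random() < 0.5 ? "f" : "d";
    if (history.length === 2) {
      if (!counts[history]) counts[history] = { f: 0, d: 0 };
      const c = counts[history];
      const guess = c.f >= c.d ? "f" : "d"; // majority vote, ties -> 'f'
      if (guess === key) correct++;
      guesses++;
      c[key]++;
    }
    history = (history + key).slice(-2);
  }
  return correct / guesses;
}
```

With a few hundred thousand flips the result lands within a fraction of a percent of 0.5, as the expectation argument predicts.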
Sklansky says in The Theory of Poker that you need to be unpredictable when deciding whether to call a possible bluff, and to "tune" it to the cards that are known.
So you may need to make a yes-no decision where you decide on yes about 65% of the time, when you consider lots of those decisions.
His solution? Glance at your watch. Divide the current minute into proportionate parts. See in which part the seconds hand is standing.
I wrote a timer in the developers console that logged either f or d depending on whether Math.random() was above or below 0.5 every second, typing the sequence that was being logged, the oracle consistently scored around ~0.4.
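Roughly, something like this (the exact code isn't given in the comment, so the names and the one-second interval are my guesses):

```javascript
// Coin flip from Math.random(): 'f' below 0.5, 'd' otherwise.
function flip() {
  return Math.random() < 0.5 ? "f" : "d";
}

// In the browser console, log one flip per second and type along:
// const timer = setInterval(() => console.log(flip()), 1000);
// ...and clearInterval(timer) to stop.
```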
1. Unpredictability is possible even when the user doesn't have any choice about input (type keys based on a PRNG)
2. Predictability is possible even when the user can choose from both available options (human user)
It's an interesting study in our ability to be random, but it doesn't tell us anything about the macro-level philosophical concepts of determinism or free will. Compatibilism is not relevant to this demo.
From a physical point of view, free will doesn't make sense. Laws are deterministic at the macro level and contain some randomness at the lower level. Neither is free will.
Also, there is a huge assumption there - the assumption that there is a "self" that can have this free will. There is no self, just a stream of experiences and actions. The "self" is an intuitive concept, a reification that is useful in social dealings. Taking it from the social context and putting it in the realm of physics could be just a sleight of language - it cannot be the same concept in both domains, because the two domains are so far off. And trying to formally define a self in physics is impossible for us.
"Press the 'f' and 'd' keys randomly. As randomly as you can. I'll try to predict which key you'll press next.
0.4893617021276598
A rolling mean of my accuracy in predicting what key you'll press."
So does that mean it's not able to predict with more than 50% accuracy?
Sure, there are ways to "break" it. The point here is to demonstrate that if you try to intuitively go for manual "non-randomness", you won't be as random as you think, sometimes surprisingly off from ~50%.
Actually the quickest way I found to "break" it and hover at ~50% is to simply aim for the dead space between the "F" and "D" keys. If it was using non-adjacent keys like "F" and "J" instead it would be better for its purpose. "Q" and "P" might work even better since it would mostly force you to use 2 hands, making it much harder to score close to 50%.
You are onto something here. This reminds me of various hierarchies of logic (Bertrand Russell's arguments). To beat a person such as you, all it has to do is lie - and tell you at the end it has been lying all along.
Your premise, "open the source code for this, run it in node, get the probability of guessing f or d, and then guess the one that's lower", goes straight against the explicit request to try to type randomly. Everything that follows, although it may be correct, doesn't apply here.
I had a professor who, when playing games where the best move was to choose randomly (for instance, in a CCG, forcing the opponent to discard a card from their hand that you designate without seeing their hand), would always roll a die or otherwise mechanically randomize their choice, to ensure that they never exhibited potentially exploitable patterns in selection.
I've been relaying the quote from the readme for years. If you didn't click through:
"In a class I taught at Berkeley, I did an experiment where I wrote a simple little program that would let people type either “f” or “d” and would predict which key they were going to push next. It’s actually very easy to write a program that will make the right prediction about 70% of the time. Most people don’t really know how to type randomly. They’ll have too many alternations and so on. There will be all sorts of patterns, so you just have to build some sort of probabilistic model. Even a very crude one will do well. I couldn’t even beat my own program, knowing exactly how it worked. I challenged people to try this and the program was getting between 70% and 80% prediction rates. Then, we found one student that the program predicted exactly 50% of the time. We asked him what his secret was and he responded that he “just used his free will.”
What does this have to do with free will? It's a web page trying to infer the logic behind the user's keystrokes. That's it.
Maybe there's some randomness lying deep down in what we experience as free will, but that doesn't mean that we're 100% unpredictable in everything we do.
There are some physicists running around saying that free will is defined as acting randomly even though that's at odds with philosophical and common sense definitions of the term. https://en.wikipedia.org/wiki/Free_will_theorem
There's a thread running through free will discussions referred to as "ultimate origination". The Free will theorem touches on this as the sibling commenter mentioned.
There's a fringe view (also held by some of these physicists) that "The more radical group holds that the agent who determines his own will is not causally influenced by external causal factors, including his own character."[1] Start with section 3.2 of the Stanford Encyclopedia of Philosophy and move on to perhaps the whole article or the referenced primary sources.
My take on the link between the Aaronson Oracle and the ultimate origin view follows like this.
Suppose that we take the radical definition of free will and try to devise a test that demonstrates an individual making a choice not influenced by external causal factors. One thing that comes to mind is to ask that individual to make a choice between two options. If a choice is not causally influenced by external factors, then we can expect that it will not be predictable.
It's interesting that a number of commenters have reported prediction rates very close to 0.5. Typically in similar hot hand experiments[2] that's not the case, and I suspect it's a reporting bias (why should those closer to 1.0 feel the need to report their findings?).
There's also a bit of a counter-example in the whole space of pseudo-random number generation, which is very much an example of a deterministic system with nothing but external causal factors, yet will pass the Aaronson oracle test. On the other hand, truly random sources without any identifiable causal sources (i.e. radioactive decay) will also pass this test. That both of these pass supports the argument that the test has no power to discriminate at all.
My personal take is that I like the idea of thinking about having free will as a spectrum or degree rather than a binary proposition. In the ultimate origin conversation, I don't see external causal factors as being completely avoidable nor unavoidable, so having a test whose output is a range of predictability is nice to see.
This is pressing two letters, and you're explicitly told to press them in a way which feels random (which is going to be culturally determined). Aristotle would say that this is about habit, not free will -- unless you're opting to ignore the instructions.
Can you elaborate on how one's perception of randomness is subject to cultural bias? You do have to realize that frequent runs of repeated characters are expected to appear in a truly random sequence, but that's more a matter of basic reasoning (or perhaps education) than culture.
"FFFFFFFF" or "DDDDDDD" are random sequences if they're generated by a random process, but we [note: I wonder who 'we' means] expect "random" to mean "frequently changing" or "chaotic-looking".
This might easily be more about formal education in statistics than about culture, though. I'd expect people from any culture we know about to have the intuitive expectations I mentioned -- identifying randomness with chaos.
1. Totally clear your mind of the previous letter and make each letter a separate decision. That way you only have to avoid being biased toward using one letter or the other too much. This is probably pretty hard.
2. Allow yourself to be aware of your previous answers, but thwart your own biases by including patterns that cancel them out.
I went with the second option and consistently got between .45 and .55. I know our main bias around randomness is not realizing how many runs and patterns exist in truly random data. So I occasionally included runs like "fff" and "dddd" and, rarely, patterns like "dfdf" and "fdfd."
Free will is in play as and to the extent that my future is determined by my decisions, rather than by other factors beyond my control. For example, if I'm walking down the street, I have free will in the matter of whether I go North or South, but not in the matter of whether I go up or down.
I tried typing some letters because I was curious to see what the output would be. But for some reason, the page didn't work on my browser; no output was produced. Thus, my attempt to exercise free will was foiled.
When I get to look at the average I can keep it under 0.5 if I change the key every time the number goes up (and I keep pressing the same key if that strategy stops working).
As arguments against free will go... I think people with no short-term memory make a pretty solid case. You'd think with all the complex activity in our brains we'd at least behave a little randomly based on some chaotic processes, but without your short-term memory you begin to sound like a broken record.
It isn't telling you the odds of what you'll do next. It's telling you how often its hidden predictions have been right so far. So if you've hit 100 keys, and it secretly guessed 75 of them right, it will display 0.75. The more you play, the more slowly this number moves, because each additional guess has a smaller effect on the overall average.
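To restate that with the comment's own numbers, a single keypress can move a 100-press average by less than one percentage point:

```python
hits, total = 75, 100              # 75 correct hidden guesses out of 100 keys
print(hits / total)                # the display reads 0.75
# One more keypress shifts the average by at most 1/(total + 1):
after_miss = hits / (total + 1)        # ~0.7426 if it guesses wrong
after_hit = (hits + 1) / (total + 1)   # ~0.7525 if it guesses right
print(round(after_miss, 4), round(after_hit, 4))
```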
I was able to get it to slowly go down, more or less, by switching things up whenever I saw the percentage start increasing. Usually kept doing the same key or pattern as long as it went down and as soon as it started ticking up I switched it.
A couple of things I did to get it down to ~0.44:
Press based on random environmental input (woman's voice on tv = f, man's voice = d, while flipping channels).
Do it very quickly, while paying attention to something else that utterly absorbs your attention, and don't look at the meter.
For a sense of urgency, imagine you're Manny Pacquiao and trying to hit Floyd Mayweather without resorting to this predictable series of left-right punches https://www.youtube.com/watch?v=fZklifGarQc
http://imgur.com/qREH4Jh
I guess I'm pretty random. I don't know what free will has to do with this, though. In fact, this may be the opposite of free will.
I realize that this is just a lark, but I just want to state for the record (as it were) that "free will" is such an ill-defined concept that any result from this is wrong and right at the same time. So there.
Pressing in between the keys so I randomly hit one or the other was the only way I could be unpredictable. That got me to 0.49. Pressing back and forth, trying to act as "random" as I could, I stayed around 0.78. Cool experiment.
Seems odd to call it free will. If you were 100% deterministic then sure, you'd have no free will, but being predictable, even very predictable, is not the same as having no free will.
Has anyone done a control test with an RNG to see what happens?
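It's easy to run a rough control offline. This sketch (my own, not the page's code) feeds a fair software RNG to a deliberately trivial "repeat the last key" predictor; on genuinely random input, any predictor should land near 0.5:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible
keys = [random.choice("fd") for _ in range(10_000)]
# Trivial predictor: always guess the player's previous key (first guess "f").
guesses = ["f"] + keys[:-1]
accuracy = sum(g == k for g, k in zip(guesses, keys)) / len(keys)
print(accuracy)  # hovers near 0.5, as expected for random input
```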
It would be interesting to play with more modern branch prediction algorithms (e.g., Andre Seznec's recent work) with the same interactive interface. I haven't seen anyone do this before...
I started reading "Quantum Computing since Democritus" recently, the book that inspired this, and it is fantastic. Highly recommended if you like an entertaining walkthrough of some pretty heavy theory.
I didn't have a coin on me, so I started tossing my keys on the floor and picking a letter based on the direction of the largest key. Ended up, predictably, around 0.5.
I got mine up to ~0.8 and decided to spend some time trying to out-predict the machine and get my score lower. I got it down to 0.6 before it got really difficult.
The following strategy seems to result in consistently 50% or less, typically ~45%:
If it got the previous guess correct, swap the key, otherwise keep the same.
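As a sanity check, here's that swap-on-correct rule simulated against a deliberately naive opponent: a predictor that always guesses your previous key (an assumption for illustration; the real page surely uses a richer model, which is presumably why the strategy does slightly better than 0.5 there). Against this toy opponent the score is pinned at exactly 0.5:

```python
def simulate(presses=1000):
    """Swap keys after a correct guess, repeat the key after a miss."""
    key, prev_key, score = "f", None, 0
    for _ in range(presses):
        guess = prev_key or "f"   # naive predictor: repeat the player's last key
        correct = guess == key
        score += correct
        prev_key = key
        if correct:
            key = "d" if key == "f" else "f"   # swap after being caught
    return score / presses

print(simulate())  # exactly 0.5 against this naive predictor
```

The dynamics settle into a strict correct/wrong alternation, which is why it lands on 0.5 exactly.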
I'm not sure there has to be anything odd going on. If you hit both keys at the same time, you can never produce the same letter three times in a row – which is extremely predictable!
Good one. Right-handed, just typing as fast and supposedly as randomly as I can, it was at 0.99 and still converging toward 1. Left-handed (especially if I occasionally switch which fingers I use), I was < 0.4.
Can this be used for game AI? I think it could. I wonder if people could apply this to games like Tennis? Or maybe to approaches to the basket in Basketball?
I am using KeePass 1.31's random password generator with minus and underline as the only characters. The oracle is getting 59%. Hmm. Also, when I generate a 64-character string from two possible characters, KeePass claims I have 72 bits of entropy. Hmmm!
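The entropy figure is easy to sanity-check. With only two possible characters, each position carries at most one bit, so a 64-character string caps out at 64 bits; whatever heuristic KeePass's quality meter uses, 72 bits can't be an information-theoretic measurement:

```python
import math

alphabet_size, length = 2, 64
max_entropy_bits = length * math.log2(alphabet_size)
print(max_entropy_bits)  # 64.0 -- the ceiling for any 64-char, 2-symbol string
```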
Displaying the accuracy changes my behavior. I'd like it to log the prediction, result, and accuracy, and only display a summary report after a configurable number of presses.
So. I have done a similar thing for a while now. I call it adversarial selection. I ask a participant (p) to guess a number in [1, 10] repeatedly. After they pick a number, I immediately tell them my guess of their selection. Immediately after I guess, they are instructed to make a new pick and tap the table to indicate they have chosen. The point is to go through these rounds very fast.
Often what I'll observe is that they try to mimic a random distribution by selecting a number far away from their last selection, but sometimes they somehow catch on and start doing things like repeating the same number (rare) or picking one close to it. This gets interesting because they usually start cycling between picking a number far away and one close by, but they do so with a particular pattern. I'm able to guess their numbers well beyond what a random distribution allows, and it all comes down to the idea that trying to produce a random number, or to trick me, itself produces a blueprint of behavior.
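That "far from the last pick" bias is concrete enough to simulate. A hypothetical sketch (the bias model and every name here are my invention, not data from actual sessions): a participant who always jumps at least 4 away from their last number, versus a predictor exploiting exactly that bias, beats the 1/10 chance baseline comfortably:

```python
import random

random.seed(0)
NUMS = range(1, 11)

def far_from(last):
    # Assumed bias model: always pick at least 4 away from the last number.
    return [n for n in NUMS if last is None or abs(n - last) >= 4]

hits, trials, last = 0, 20_000, None
for _ in range(trials):
    guess = random.choice(far_from(last))  # predictor exploits the same bias
    pick = random.choice(far_from(last))   # biased participant
    hits += (guess == pick)
    last = pick
print(hits / trials)  # well above the 0.1 chance baseline
```

The point generalizes: any rule a person adopts to "seem random" shrinks the candidate set, and shrinking the candidate set is exactly what a predictor needs.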
Good forward planning to allocate a variable to hold a reference to the participant in case you need to refer to them later in the story, though in this case it turned out not to be needed. I need to start doing this in anecdotes. "So I was chatting with a colleague (C), and he mentioned..."
Are you sure the 10% chance isn't standing out more in your memory? It would be interesting to measure this over hundreds of guesses.
It seems likely that the probability of guessing correctly would be around 1/9, since people rarely pick the same number, but is a sustained 1/8 or better really possible?
It's written in JavaScript running in a browser, so by definition it only works in serial contexts. Plus, the key switches on your keyboard would give you at most 2 signals in parallel (f and d).