I deleted my Reddit account, which still had the old design enabled. Now when I occasionally browse Reddit logged out, it looks like an awful implementation of Material Design with tons of wasted space. For example, I see about 1.5 cards per page on my 13" laptop. Who designed this? It is horrid and nearly unusable. Not that I'll be using Reddit much anymore, but it's as if they deprecated the desktop version in favor of mobile-everywhere. There's a reason the form factors are different!
One metric I'd love to see: how often people use the link on the new site to go back to the old site, compared with how often they use the link on the old site to try the new one.
It was a major can of worms when we started image recognition years ago. I think privacy will naturally continue to erode if there's no regulation. More information is ammo for both good and bad applications. At first, something like this could be used for ostensibly positive purposes: airport security, screening potential employees, anything where your appearance might be important. I think the West has more or less learned to ridicule the idea that "some people who naturally look a certain way should be treated a certain way". But yeah, if you're sweaty and shady at a presidential convention, it would be nice to have this kind of technology in place.
Then they would give their algorithms more power and memory, upgrade their cameras to capture more of the spectrum and/or higher resolution, and learn who has used a tattoo to cover up their face.
You are right, according to quantum mechanics. If you have a pair of electrons in an entangled state (e.g. the spin singlet) and measure the spin of one of them, you instantly know what the other particle's spin is going to be (provided you knew the initial state). When you "peek at a card", you "make a measurement" and alter the system.
For any observer outside the light cone of the first measurement, the outcome of a measurement on the second electron still looks truly random, which is why no usable signal gets transmitted.
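To make that concrete, here is the two-spin singlet case (my own sketch of the standard textbook example, not something from the parent comment):

    % Singlet state of two spin-1/2 particles:
    \[
      |\psi\rangle \;=\; \frac{1}{\sqrt{2}}\left( |{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle \right)
    \]
    % Either outcome on particle A is equally likely:
    \[
      P(\uparrow_A) \;=\; P(\downarrow_A) \;=\; 1/2
    \]
    % but once A is measured, B's outcome is fixed:
    \[
      P(\downarrow_B \mid \uparrow_A) \;=\; 1, \qquad P(\uparrow_B \mid \downarrow_A) \;=\; 1
    \]
    % B's marginal statistics stay 50/50 either way, so no signal crosses the light cone.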
I just had a really bad experience with Uber myself. Here's my data point:
I was just trying to get an Uber the other day in a large city, and I waited over an hour in the rain. The first N-1 drivers who accepted my request cancelled on me after about 5 minutes each.
Eventually, a driver picked me up. He didn't want to pick me up where I was, and he had a very hard time communicating to me where he was. After about 10 minutes of back and forth (walk 100 ft, "Do you see me now?", repeat), he asked me to just cancel the request.
I said no. I wasn't going to risk paying a cancellation fee. I had already been waiting in the rain for an hour.
Eventually the driver found me and took me to my destination. The whole ride was quiet until the driver decided to break the silence with full-volume anti-slavery rhetoric on the radio.
That day I learned that Uber's algorithms, whatever they are, led to at least one very bad experience. I didn't give the driver a low rating; I just didn't rate him at all. That said, I'd say 95% of my Uber experiences have been very good.
I don't see why this research couldn't be applied to empirical data from an arbitrary system of coupled oscillators.
I am interested in EEG and fNIRS, but there are drawbacks. Naively, people will try to study this data without first doing the necessary back-of-the-napkin calculation.
EEG and fNIRS have hard physical limitations. EEG picks up only large-scale EM field activity; the higher-resolution perturbations and effects are averaged out because of the measurement distance and the noise acquired through the skull and other intermediary tissue. This is unacceptable, since the current scientific consensus is that high-frequency phase and activity are fairly important for information coding.
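Here is the kind of back-of-the-napkin I mean, as a rough sketch. The figures below are ballpark assumptions (a dense research-grade cap, typical sampling, the usual estimate of roughly 16 billion cortical neurons), not measurements:

    # Back-of-the-napkin: how coarse is EEG relative to what it is measuring?
    channels = 256           # assumed high-density EEG cap
    sample_rate = 1000       # Hz, typical research sampling rate
    bits_per_sample = 16     # typical ADC depth

    eeg_bits_per_s = channels * sample_rate * bits_per_sample
    print(f"raw EEG data rate: ~{eeg_bits_per_s / 1e6:.1f} Mbit/s")

    cortical_neurons = 1.6e10    # ~16 billion neurons in the human cortex (ballpark)
    print(f"neurons averaged into each channel: ~{cortical_neurons / channels:.0e}")

Tens of millions of neurons behind every electrode is the point: whatever fine structure exists at that scale is gone before the signal reaches the amplifier.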
On the bright side, a lot of information might also be encoded in the larger-scale synchronized oscillations that happen in the brain (the stuff EEG does pick up). That space is obviously lower-dimensional.
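If anyone wants a toy model of that kind of large-scale synchrony to play with, here is a minimal Kuramoto sketch (my own illustration; the parameters are arbitrary and nothing here comes from the article):

    import numpy as np

    rng = np.random.default_rng(0)

    N = 100        # oscillators (stand-ins for local neural populations)
    K = 2.0        # coupling strength; synchrony emerges above a critical K
    dt = 0.01      # integration step
    steps = 5000

    omega = rng.normal(10.0, 1.0, N)          # natural frequencies (arbitrary units)
    theta = rng.uniform(0, 2 * np.pi, N)      # initial phases

    order = []     # order parameter r(t): 0 = incoherent, 1 = fully synchronized
    for _ in range(steps):
        # each oscillator is pulled toward the phases of all the others
        coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta += dt * (omega + (K / N) * coupling)
        order.append(np.abs(np.exp(1j * theta).mean()))

    print(f"final synchrony r = {order[-1]:.2f}")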
The only work-around for this hard information limit is to explore invasive BCI technology (e.g. tetrodes connected to your neurons).
Relatively speaking, this isn't difficult for scientific laboratory research because:
1. We don't care about invasive surgery on rats.
2. We don't care how comfortable rats are.
3. We don't care how mobile rats are.
On the other hand, for commercial purposes it is not yet feasible to stick a wired tetrode array into a human brain. We can't afford to lose a human. Engineering on the invasive-BCI frontier is incredibly primitive right now.
It is tempting to be lazy and criticize this work because it contains a few instances of the phrase "quantum consciousness".
This paper has one purpose: to get people thinking about the brain and other information-carrying systems from a physical perspective rather than solely a computer-science or information-theoretic perspective, or, even worse, a purely biological one. Physicists were largely responsible for computer science and information theory, and they will be largely responsible for breakthroughs in biology and machine learning as interdisciplinary laboratories continue to grow.
Physics is the most sophisticated area of applied mathematics that currently exists. Whatever consciousness is, it will be understood through physics -- because, presumably, that's what it is.
That said, it is interesting to see a consolidated paper touching on common motifs. The brain exhibits many characteristics of other states of matter; for example, it has phase transitions.
One thing I disliked about this paper is the conclusion it draws from its examples of the gold ring and the pond. The authors say that information is not persistent in a pond: if you write your name on the surface, the energy propagates away and the surface returns to a higher-entropy state fairly quickly. This is true, but one cannot say the brain is different solely because of this. The brain is constantly under "external" influence. It is constantly supplied with fresh nutrients; neurons are constantly being tugged by their neighbors. If you were to remove all incoming nutrients, the brain would surely collapse as an information processor too (i.e. death of the organism).
I would go so far as to say that a conscious system requires constant input, and does not necessarily do anything in the absence of input. This assertion directly contradicts the heuristics ("principles") established in the paper: computers, bacteria, and brains are all computing systems that require constant input.
Terrible advice! I tell all of my students to find as many analogies as they can. You can never lose by increasing the number of ways in which you understand something. Analogies, some may argue, are at the heart of mathematics.
> Mathematics is the art of giving the same name to different things
Poincaré.
> The art of doing mathematics consists in finding that special case which contains all the germs of generality
Hilbert.
> The vast majority of us imagine ourselves as like literature people or math people. But the truth is that the massive processor known as the human brain is neither a literature organ or a math organ. It is both and more.
John Green.
> Sometimes I think that creativity is a matter of seeing, or stumbling over, unobvious similarities between things - like composing a fresh metaphor, but on a more complex scale.
David Mitchell.
There is an ounce of truth in what you say -- metaphors can be abused to draw false conclusions. This does not mean one should cower from using them.
Everyone has their own philosophy of mathematics. To me, the core idea is representation: finding representations that let you extract patterns (i.e. similarity or commonality) across different cases.
I can't help but indulge a bit and ask whether we can extract a common pattern from the two representations of mathematical cognition offered by you and the GP. :-)
As an opposing view, I offer the following passage by mathematician Charles C. Pugh, in which he compares metaphor as understood in natural language with metaphor as it is used in mathematics:
From "Real Mathematical Analysis" 1st edition, p. 9:
Metaphor and Analogy
In high school English, you are taught that a metaphor is a figure of speech in which one idea or word is substituted for another to suggest a likeness or similarity. This can occur very simply as in "The ship plows the sea." Or it can be less direct, as in "his lawyers dropped the ball." What gives a metaphor its power and pleasure are the secondary suggestions of similarity. Not only did the lawyers make a mistake, but it was their own fault, and, like an athlete who has dropped a ball, they could not follow through with their next legal action. A secondary implication is that their enterprise was just a game.
Often a metaphor associates something abstract to something concrete, as "Life is a journey." The preservation of inference from the concrete to the abstract in this metaphor suggests that like a journey, life has a beginning and an end, it progresses in one direction, it may have stops and detours, ups and downs, etc. The beauty of a metaphor is that hidden in a simple sentence like "Life is a journey" lurk a great many parallels, waiting to be uncovered by the thoughtful mind.
Metaphorical thinking pervades mathematics to a remarkable degree. It is often reflected in the language mathematicians choose to define new concepts. In his construction of the system of real numbers, Dedekind could have referred to A|B as a "type-two, order preserving equivalence class", or worse, whereas "cut" is the right metaphor. It corresponds closely to one's physical intuition about the real line. See Figure 3. In his book, Where Mathematics Comes From, George Lakoff gives a comprehensive view of metaphor in mathematics.
An analogy is a shallow form of metaphor. It just asserts that two things are similar. Although simple, analogies can be a great help in accepting abstract concepts. When you travel from home to school, at first you are closer to home, and then you are closer to school. Somewhere there is a halfway stage in your journey. You know this, long before you study mathematics. So when a curve connects two points in a metric space (Chapter 2), you should expect that as a point "travels along the curve," somewhere it will be equidistant between the curve's endpoints. Reasoning by analogy is also referred to as "intuitive reasoning."
Moral: Try to translate what you know of the real world to guess what is true in mathematics.
Sometimes data is not beautiful but very ugly. These results are based on a flawed premise. One red flag -- where is the word "dank" in your list? Where are the words used by people who actually smoke weed? Also, is "where score > 100" a good heuristic for this kind of study? I would argue that "where score < 100" is a better one.
For example, a shill or a superuser (the people getting the top comments) will not be using domain-specific language -- they will be using language that caters to a general audience. If that is true, you end up squeezing most of the interesting language out of your study. Have you been to the Grass City forums? I'm guessing those people aren't using terms like "Donald Trump" in their everyday conversations about weed.
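A quick way to sanity-check that is sketched below with pandas; the file name, the column names ("body", "score"), and the slang list are all hypothetical, so adjust them to whatever dump you actually used:

    import pandas as pd

    # hypothetical export of subreddit comments with "body" and "score" columns
    comments = pd.read_csv("r_trees_comments.csv")

    terms = ["dank", "nug", "bowl", "grinder"]    # illustrative domain slang

    def term_rate(df, term):
        # fraction of comments mentioning the term at least once
        return df["body"].str.contains(term, case=False, regex=False).mean()

    high = comments[comments["score"] > 100]
    low = comments[comments["score"] < 100]

    for t in terms:
        print(f"{t:>8}  score>100: {term_rate(high, t):.4f}   score<100: {term_rate(low, t):.4f}")

If the domain slang really does show up mostly below the score cutoff, that would support the "where score < 100" heuristic.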
Reddit is a huge melting pot and probably isn't a good place for insight about potheads. Grass City might not be either -- Grass City users are not typical potheads. The best place would be 10th-grade social circles and college dorms. It really is amazing how little data social networks produce, in the grand scheme of things. We are all so used to hearing about how much data the internet generates, but there are orders of magnitude more data in the raw world, just waiting to be scooped up.