
As I see it, the seeds of this user's disappointment were planted as soon as she started gaming the system: "my posts became more and more filtered as the 'Friend' list increased. Now, they were getting the facade, the highlights because I donned the 'happy' mask. My closer friends were still catching the true story through instant messaging, text messaging and phone calls..."

The faint echoes of Gödel and Turing in the back of my mind say: no social algorithm can ever optimize its results to take into account how people will modify their behavior in response to the algorithm itself.




Her point still stands. I've been Facebooking since the start, so I don't necessarily alter my behavior much; I just use it to keep up with friends, post interesting tidbits about my life, and try to make real conversations happen.

I still see the same problems. When I talk to people, it's always "oh yeah, I saw on Facebook that you got a new job," etc. etc.

I went back and looked through my old e-mails from right before Facebook was becoming popular, early 2004. I once sent out one of those "My e-mail address is changing. Oh btw how are you?" e-mails to my whole list, and got back around 50 genuine responses that turned into conversations. Not "broadcast" style conversations, not public conversations, but real honest person-to-person human communication. It was brilliant.

I realized that I had completely lost that. If I sent the same e-mail today, people would send almost nothing back; it would just be "Thanks," because they already know everything else there is to know about my life, and I theirs.

It's a very strange and new way of connecting to people. On the one hand I have some deep insights into the lives of friends I might not otherwise talk to, or even those I do; on the other hand, I miss the humanity of one-on-one conversation.

I'm as yet undecided whether this is a good thing. But overall, I think the article overblows the effect this has on relationships. Personally, I sort of like it. All the trivial stuff is known already. No one cares where you work or what you're doing anymore; they want to hear how you're doing and how your life is really going. It negates some shallowness and small talk. Not necessarily bad.

But it is enormously complex. We still don't know how society will change as people become more connected in so many ways, but we do know it will change. Some might say it's the next level in our evolution; collaborative social evolution is the next step since biological evolution can't keep up. It'll be a fun ride.


calinet6: you're no longer sending 'broadcast' emails, you're not consciously updating people about the changes in your life, and "no one cares where you work or what you're doing anymore." These are just more examples of gradual but dramatic changes in behavior resulting from the use of Facebook.

Unless Facebook has made some truly ground-breaking advances in AI, the company's algorithms cannot anticipate how users might modify their behavior in response to the algorithms themselves. AFAIK, that's not possible today.

Over time, this utter lack of 'intelligent auto-incorporation of feedback' might show up as people sharing 'fake' instead of real feelings, as 'social' graphs diverging significantly from the true state of real-world relationships, or as automatic sharing of 'relevant' information (like ads) that looks great to the algorithm but in hindsight is misguided.

(BTW, in my view, this is one of the biggest long-term risks for Facebook's business: that society over time learns to 'route around it' and it gradually loses relevance for day-to-day use.)

FWIW, I do agree with you that no one knows with certainty how society will change as a result of Facebook and its ilk.


I think you may have mashed up 12 different parts of my response and randomized the order... anyway. Yes, you have a point, but there's no algorithm at play here; people are simply changing their behavior based on the fundamental and uncomplicated premise behind Facebook itself: the sharing of personal information among a group. There's no need for an AI to anticipate that or guess behaviors. The simple fact that the information is shared as intended is enough to elicit the result I described. It's very simple: complexity arising from simple beginnings.

I think the behavior change you were talking about originally is a bit of a stretch; most people have integrated Facebook and the like into their social fabric. It has become another level of communication, and at all levels and at all times we present a version of ourselves to others, whether on Facebook or not.

I don't believe the "social algorithm" needs to anticipate this natural human behavior any more than a telephone needs to anticipate it and change your conversation to feel more personal. Facebook is simply the format we're communicating through. I think this has more to do with the audience the format involves than anything else, and with Facebook, that audience is the public, non-anonymous space in which you present yourself. This space existed before; Facebook is just a digital version of it. If people want something that compensates for this behavior change, they won't use an AI algorithm; they'll just use a different platform. They'll chat, pick up the phone, or visit in person. Simple as that.

I think most people do this just fine. Like I said, my conversations with friends change because they know more details about my life, but they don't necessarily get worse or more impersonal. In fact they may be better since we're less focused on the trivial. The writer of this article may not have that perspective, and that's fine, but I think she's ignoring many advantages of the communication format that Facebook provides while emphasizing all of the disadvantages.


Unintended consequences are a fact of life. What the "company's algorithms cannot anticipate" is simply a specific instance of the more general problem that we cannot predict the future, nor can we run a simulation of the universe to see what the consequences of something will be. This is not at all specific to Facebook; designing and adjusting systems like this will probably remain a task for humans for the foreseeable future.


I would just like to say that that has absolutely nothing to do with anything by Gödel or Turing. There can definitely exist machine learning algorithms which continually adapt, and I see no technical reason why you couldn't try to model human agency as well. The work by Gödel and Turing you're thinking of is about _formal_ systems such as logic or computer programs, and while it is very common for people to turn (i.e., abuse) their results into a metaphor with seemingly broader implications, this is actually a mistake; their proofs simply don't hold under those more general circumstances.
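
To make "continually adapt" concrete, here is a toy online learner in Python (my own sketch; the function and names are made up, and it's obviously nothing like Facebook's actual code). It updates its weights on every labeled example it sees, with no formal self-reference anywhere:

    # Mistake-driven online perceptron: adapts on every example.
    def train_online(stream, n_features, lr=0.1):
        w = [0.0] * n_features
        for x, label in stream:                  # label is +1 or -1
            score = sum(wi * xi for wi, xi in zip(w, x))
            pred = 1 if score >= 0 else -1
            if pred != label:                    # adapt on mistakes
                w = [wi + lr * label * xi for wi, xi in zip(w, x)]
        return w

Nothing here is "proving statements about itself"; it's just a feedback loop over data, which is why the incompleteness results don't apply.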


andreasvc: AFAIK, there is no program in existence today that can successfully model "human agency" (as you put it). Wouldn't that require major breakthroughs in AI?

And my understanding from chatting with friends in the fraud-detection space is that, while current state-of-the-art machine-learning systems can successfully adapt to the data they obtain from users, they cannot adapt to users learning to game or 'route around' the system -- at least not without programmer intervention 'from above.'
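
A toy sketch of what I mean, with made-up feature names (purely an illustration, not how any real fraud system works): the model adapts fine within its fixed feature space, but behavior that moves to a signal it never measures is invisible to it.

    # The model can only weigh the signals it was given.
    FEATURES = ["night_login", "new_device"]     # hypothetical
    weights = {f: 0.0 for f in FEATURES}

    def score(event, weights):
        return sum(weights[f] * event.get(f, 0.0) for f in FEATURES)

    def update(event, label, weights, lr=0.1):
        # Online adaptation: nudge weights toward observed outcomes.
        err = label - score(event, weights)
        for f in FEATURES:
            weights[f] += lr * err * event.get(f, 0.0)

    # A user who 'routes around' the system zeroes both signals and
    # exploits something unmeasured: score() stays low, update()
    # learns nothing, and only a programmer can add the new feature.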

The link to Gödel and Turing I saw is that solving this problem without intervention 'from above' would require a computer program that can successfully model itself as it interacts with humans, but then we run into those two guys, no?


Yes, I see the superficial resemblance to Gödel and Turing, but it's no more than that. The reason I insist on this is that the value of their theorems lies in the fact that they have been mathematically proven, and the proofs only hold under very particular conditions. Basically, a system that is strong enough to prove statements about arithmetic cannot prove its own consistency. This hypothesis about the difficulty of certain machine learning tasks is a conjecture, at best. I don't think you could prove it, and if you could, the proof would look very different from the incompleteness proof. I think it has to do with certain AI problems being hard, but this is a rather vague notion; perhaps we simply lack certain concepts or mathematical tools. The important thing about the incompleteness proofs is that that possibility is completely ruled out: given the right formal conditions, certain things are absolutely impossible to do.
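
For reference, the standard formal statement (Gödel's second incompleteness theorem), as I'd write it:

    % For any consistent, recursively axiomatizable theory T
    % extending Peano Arithmetic:
    T \nvdash \mathrm{Con}(T)

Every hypothesis there is doing real work; drop the formal setting and the statement asserts nothing, which is exactly why the metaphor doesn't carry over.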


The trajectories of adaptation then become second-and-beyond-order problems that in turn fall out of sync with reality.


I'm puzzled by this statement. The adaptation happens in reality, right? So how could their trajectories be out of sync with reality?



