All Hell May Break Loose On Twitter In 2 Hours (techcrunch.com)
49 points by peter123 on June 12, 2009 | 38 comments



I'm going to go out on a limb here and say that even if Twitter vaporized in a puff of smoke in two hours and never came back, all hell would not break loose.


Humorously, if everyone thought hell was breaking loose, they would check Twitter to see if it was just them.


To be fair, they just said that 'all hell may break loose on Twitter', not the whole world.


Last week I went to a "tweetup" (yes, I'm an idiot). Believe me, there are people who would probably break down completely if something happened to their beloved Twitter.

EDIT: admitted the fact that I'm an idiot.


What would hell breaking loose look like? And what is it breaking loose from? :P


Agreed for most of us. If you're on the technical staff at Twitter, though, something on that breaking-loose scale might be what happens.


OK, I am going to TechCrunch-bash, but ONLY because this is a transparent example of how their writing has been failing recently.

The story is valid and the content of the article is fairly accurate: the status IDs are about to tick past the 32-bit signed integer limit, which presents potential problems for third parties.

Twitter sensibly pushed this forward by a few hours to make sure it happened in a "we're noticing shit right now" period and thus got fixed. They announced this by tweeting a tongue-in-cheek "Twitapocalypse" message.

The crucial problem is that TC seized on that idea and hammered at it. Yes, they do mention it will only affect apps, BUT:

- They don't mention that it won't kill Twitter. The inference from the article could theoretically go either way, and I imagine many people left the page worried Twitter would crash and burn (and that the Twitterati were beavering away to "fix" things). Lines like "Hopefully Twitter will be able to resolve this quickly." subtly reinforce this idea.

- They heavily suggest Twitter themselves are working on this. Whilst I'm sure Twitter are trying to make sure app developers notice it, I seriously doubt they have all hands on deck to "fix" an issue they themselves aren't facing. They were courteous enough to push it up into working hours to give devs a chance to fix errors "live" etc., but there is nothing Twitter themselves have to do, right?

All in all it's a good story written very badly. As a journalist I have to take issue with the sensationalism: it hurts the vendor and, worse, it hurts the readers it dupes.

:)

EDIT: to be explicit, it smacks less of bad writing than of deliberately badly structured writing designed to suck in an audience. The former is frustrating but unavoidable; the latter is "evil".


"Hopefully Twitter will be able to resolve this quickly."

Gotta love tech writers and their incompetence.


Yeah, there were a couple of real gems in that "article":

> It’s possibly a coincidence, but Twitter has just welcomed two new members to its API team today. Is Twitter manning up for the battle?

Right, because as a company, what you really want to do if a big hammer is about to fall on you is immediately hire a couple of new programmers who aren't familiar with your internals at all.

> It’s now past 2 PM and no reports of massive failures yet. Perhaps this really is just like Y2K.

Yep, because there's no such thing as a bunch of programmers who spent a lot of hours fixing code _before_ it broke. (I was one of them.)


It always irritates me when people say "Y2K was such a non-event". It was a non-event because a whole lot of people fixed and tested a whole lot of code. If they hadn't, the world would have been a pretty miserable place on 1/1/2000.


Perhaps not really miserable, but a lot of things in administration (etc) would not have continued to work.

(My grandfather was also one of those fixers.)


Depends... the code that runs an insurance company is one thing. The embedded computers would likely have been the biggest problem - things that dispense medicine, control the power grid, etc.


Few embedded systems depend on the date, do they?


I don't have first hand experience, but from what I understood, yes. Particularly in monitoring devices: "Shut this off if you haven't received a signal in x minutes". That sort of thing.


Thanks.

Though I imagine that those watchdogs might have more problems with the year 2038.
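For reference, a quick back-of-the-envelope check (just a Python sketch) of where a signed 32-bit time_t runs out:

    import datetime

    # A signed 32-bit time_t counts seconds from the Unix epoch and
    # tops out at 2**31 - 1 seconds.
    rollover = datetime.datetime(1970, 1, 1) + datetime.timedelta(seconds=2**31 - 1)
    print(rollover)  # 2038-01-19 03:14:07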


I think the writer is under the impression that Twitter is using a 32-bit signed counter and will start using negative ids when it rolls over. Evidently some people on the linked Twitter development thread had the same misconception. Actually it looks like Twitter is doing it right: they're just going to keep counting up past the maximum 32-bit signed int (after skipping a bunch of ids to move the Twitpocalypse up into business hours.)
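A rough sketch of the difference (Python, with ctypes standing in for a fixed-width field; purely illustrative):

    import ctypes

    INT32_MAX = 2**31 - 1   # 2,147,483,647, the last "safe" status id

    # The misconception: a signed 32-bit counter wraps to a negative id.
    print(ctypes.c_int32(INT32_MAX + 1).value)   # -2147483648

    # What Twitter is actually doing: keep counting in a wider type.
    print(INT32_MAX + 1)                         # 2147483648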


It sounded like the problem wasn't Twitter, but 3rd party Twitter clients. They have no way of knowing if these use a 32-bit or 64-bit ID field, so if Twitter sends back an ID that can't fit in 32 bits - what happens? Do they crash? Wrap? Take only the low-order 32 bits?
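All three outcomes are easy to demonstrate. A sketch, using Python's struct module as a stand-in for whatever fixed-width storage a client might use:

    import struct

    just_past_31 = 2**31 + 41   # a hypothetical id just past the signed 32-bit limit

    # Crash: packing into a signed 32-bit field rejects the value outright.
    try:
        struct.pack("<i", just_past_31)
    except struct.error as err:
        print("pack failed:", err)

    # Wrap: the same bits reinterpreted as signed 32-bit come out negative.
    print(struct.unpack("<i", struct.pack("<I", just_past_31))[0])   # -2147483607

    # Truncate: keeping only the low-order 32 bits changes the value once ids pass 2**32.
    print((2**32 + 12345) & 0xFFFFFFFF)   # 12345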



The title's slightly sensationalist. "Hell" will not break loose, although some poorly written 3rd party apps might fail.


The comments in here are competent, and it's semi-interesting news (especially if you're at all involved with the Twitter ecosystem) but it's just so hard to upmod yet another sensationalist/Twitter article from TechCrunch.


I think they've made a good choice by forcing the issue to happen now instead of arbitrarily occurring in the middle of the night on a weekend. The Friday afternoon part sucks though. If they were going to do this, they should have done it yesterday.


Yeah, they also should have told people more than 4 hours ahead of time.


At least we received the mandatory Twitter response that they know they made a mistake in letting us know so late and will try to improve on that in the future.


>Update 2: It’s now past 2 PM and no reports of massive failures yet. Perhaps this really is just like Y2K.

TechCrunch, do a little homework, please.

If they had actually gone to the public timeline and checked, they would have seen that the status ids are still in the 213xxxxxxx's. In other words, Twitter hasn't started the twitpocalypse yet.
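The check is a one-liner against the public timeline. A rough sketch (the 2009-era endpoint and JSON shape are assumptions, and it has long since been retired):

    import json, urllib.request

    INT32_MAX = 2**31 - 1
    url = "http://twitter.com/statuses/public_timeline.json"  # 2009-era endpoint

    with urllib.request.urlopen(url) as resp:
        statuses = json.load(resp)

    newest = max(s["id"] for s in statuses)
    print(newest, "past the Twitpocalypse" if newest > INT32_MAX else "still below 2**31 - 1")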


About the actual technical root of the issue: does anyone else find it surprising that Twitter is actually using sequential integers for each status update added to its database, incrementing as it goes? It seems to me this is particularly bad form for a couple of reasons: not only does it lend itself to overflow issues such as this, it allows competitors to easily gauge Twitter's usage.

I would have generated pseudo-random sequence values from the start; it's not like having a simple sequential integer buys you much in this case.
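Something along these lines, say (just a sketch; the function name is made up, and you'd still want a unique constraint plus retry to handle the rare collision):

    import secrets

    def random_status_id() -> int:
        # Illustrative only: a random 63-bit id still fits a signed 64-bit
        # column, and reveals nothing about how many statuses exist.
        return secrets.randbits(63)

    print(random_status_id())

The trade-off is that you give up "sort by id == sort by time", which the sequential ids provide for free.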


It just happened, about 3 hours late. Lousy planning, and if I'm not mistaken it is 5 PM Pacific / 8 PM Eastern, which isn't exactly during working hours for most people. So either the people responsible for some of these apps are working late anyway, or any breakage will not be fixed immediately.

Twitter could have easily planned this for a Monday morning and communicated it a month in advance.


Why does the status of third-party client apps need to concern Twitter so much?

Twitter isn't responsible for a third party's product. That's why it's called a "third party".


Since this has been known about, literally, for months, why didn't Twitter set up a mock testing environment that generated this potentially problematic case, allowing Twitter app developers to test in a sandbox and release a new version before the 11th hour?


I'm guessing it'd be far easier to have developers just a) check their database schemas and code, and/or b) multiply all incoming IDs by 10, than for Twitter to set up a separate mock environment.
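For (b), the smoke test can be as small as something like this (store_status and load_status are hypothetical stand-ins for whatever persistence layer a client app actually uses):

    INT32_MAX = 2**31 - 1

    def stress_test_ids(statuses, store_status, load_status):
        for status in statuses:
            fake_id = status["id"] * 10   # pushes current ids well past 2**31 - 1
            store_status({**status, "id": fake_id})
            assert load_status(fake_id)["id"] == fake_id, \
                "id was truncated or wrapped somewhere in the pipeline"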


It'd also be much less of a potential PR nightmare for them to just have set up a sandboxing environment.


I can't imagine the logistical nightmare of having to replicate their staging environment, with the caching and message queue systems. It probably isn't `cp -r staging/ sandbox/` :-/


The mock environment doesn't need to do anything other than accept all inputs and produce known outputs (presumably some with known error conditions). This can often be implemented using apache rewrite rules/aliasing and static content served with the correct MIME types. I've set up mock servers for my own testing of public APIs when I want more control over monitoring my application than using a remote service would allow. You make a request once to the real service, store the content in a file on your own web server, and change your app to hit your server rather than the public, remote API.
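In Python terms the whole mock can be a few lines. A sketch of the canned-response idea, with the captured file name and port being assumptions:

    import http.server, json

    INT32_MAX = 2**31 - 1

    class MockTimeline(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            # Serve a previously captured timeline, with ids bumped past 2**31.
            with open("public_timeline.json") as f:
                statuses = json.load(f)
            for s in statuses:
                s["id"] += INT32_MAX
            body = json.dumps(statuses).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    # Point the app at http://localhost:8080/ instead of the real API.
    http.server.HTTPServer(("localhost", 8080), MockTimeline).serve_forever()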


They already have an internal development environment, clone that.


I'm just glad it was after WWDC ended.

Developers are by nature anti-social creatures. Without Twitter, I think shoe-gazing would have become the main event.


I'm confused: how is "ETA: 13 Jun 2009 at 11:19:38 AM GMT" == "Friday afternoon"?

Isn't June 13th Saturday?


That was the time it was expected to naturally occur. They made the decision to push it forward and force it to occur around 2:00 PM PST.


That would be Friday in Samoa (although not the afternoon)


How sentimental; I love the old bugs: off-by-one, buffer overrun, math overflow... The good old days before garbage collection and 'number' datatypes... happy days.



