Carmack on why transatlantic ping is faster than pushing a pixel to the screen (superuser.com)
531 points by eavc on May 1, 2012 | 64 comments



I know fanboy gushing isn't really productive. But I'd just like to say that it's so awesome to live in a time when we can start a topic of conversation about someone of note, and there's a chance that this individual will join the conversation personally.


I remember once upon a time WAAAAY back in the mid-90s I think, when I was having a problem with my Voodoo2 graphics driver and Quake.

I posted the problem on one of the newsgroups at the time and got a reply back from John Carmack himself (which naturally fixed the problem pretty quickly).

I remember being extremely excited then, as I'm sure the questioner is now on SO.


I too had a similar experience. Asked a question about cron in a newsgroup and got a response from Vixie. It was a small world then, perhaps it will be again as the SNR gets better.


I pinged John on Twitter with a link to elicit an answer. I have to agree with you, having this proximity and immediacy through things like Twitter, Stack Overflow and services like HN is quite exciting. Just a few years ago this would have been almost unheard of (exceptions always prove the rule).


A few years ago (eek, a decade) a lot of the tech community hung out on Slashdot, and I remember seeing this same thing happen. We are social creatures, and before tweeting (I know, shocking) there were other ways, and there will be new ways when no one tweets anymore.


Back in the day they had a .plan up on some random id Software server that you could finger. They were just as present back then as they are now; it's just that now people without knowledge of Usenet and Unix can interact with them :)


This can cut both ways. One time I said something vaguely insulting (and in retrospect, insensitive) about Steve Wozniak on Slashdot, and he replied to the comment personally.

You never know who is reading, so don't say anything you couldn't vouch for in real life (which increasingly overlaps with our Internet life).


I did this on Twitter once with Mike Masnick of Techdirt. He certainly didn't follow me; he must have a search set up for his name. Of all the situations to raise the ire of someone even moderately internet famous, it was possibly one of the most petty too - and his response was way more mature than my call... oops.


This is precisely the reason why I keep up with Quora. I've seen so many individuals of note from the industry answering questions on it, and it really gives a nice one-degree-of-separation kind of perspective. It's kind of cool to know that I could ask a question on a topic like data science and someone like Jeff Hammerbacher could reply directly.


one of the few things slashdot is still occasionally good at. (pity so few of the stories are interesting enough to actually bring out the interesting people anymore.)


Neal Stephenson's reply to a Slashdot Q&A [1] is definitely one of the better things I've read on the internet.

[1] http://slashdot.org/story/04/10/20/1518217/neal-stephenson-r...


Indeed, I'm pretty sure that's the best thing ever posted on Slashdot.


Specifically, the Stephenson-Gibson conflict.


> But I'd just like to say that it's so awesome to live in a time when we can start a topic of conversation about someone of note, and there's a chance that this individual will join the conversation personally.

Which is not unlike how it was when communities were constrained in population. In ancient Athens, for example, you could have had --if you lived at the right point in time and weren't a slave-- a conversation with the greatest minds of the era, from Socrates to Plato, to mathematicians, etc...

So, a "global village" indeed...


> if you lived at the right point in time and weren't a slave

You've just excluded the majority of Athenians [1]. If you generalize "slave" to "bottom Nth percentile of the population", the conditions are much better now than they were before. While someone low on the social pyramid in ancient Greece probably had no chance of contacting someone like Plato, many more people in modern times would be able to write a letter (or email, tweet, forum post, etc.) to someone famous and actually receive a response.

[1] "According to the Ancient Greek historian Thucydides, the Athenian citizens at the beginning of the Peloponnesian War (5th century BC) numbered 40,000, making with their families a total of 140,000 people in all. The metics, i.e. those who did not have citizen rights and paid for the right to reside in Athens, numbered a further 70,000, whilst slaves were estimated at between 150,000 to 400,000.[7] Hence, approximately a tenth of the population were adult male citizens, eligible to meet and vote in the Assembly and be elected to office." http://en.wikipedia.org/wiki/History_of_Athens#Geographical_...


Right, but there were some distinguished metics, e.g. Aristotle.


The contentious question is: for every Aristotle in the top 10%, how many were in the bottom 90% that were denied that possible future by force?


>The contentious question is: for every Aristotle in the top 10%, how many were in the bottom 90% that were denied that possible future by force?

And the even more contentious question is: how many are in the same place today? Lack of money growing up, and circumstance generally, can be just as brutal as force in denying someone his "possible future". How many readers does HN have that are San Francisco natives, and how many that are, say, from Mississippi or Alabama combined?

With ~2 million people in jail, some million homeless, and several tens of millions eating with food coupons and at soup kitchens, one of the basic differences now is that we have the luxury (hypocrisy?) of blaming them instead of some institution like slavery.


And then, after starting an ill-advised war that one of them argued against, convict him of sedition and have him poisoned. So... where does Fox News fit into this model?


LOL I used to own a computer game center and had two dedicated T1s, bonded and traffic-managed. The CS players would complain if the ping time to a server spiked from 20 ms to 25 ms and would say it was causing them to miss shots. I reviewed the connections for jitter and all sorts of things, and they would never believe that the 5 ms didn't make the difference.

To prove the point I downloaded a simple JavaScript stoplight app that would measure reaction time and told them if anyone could beat my times I'd give them an all-day pass. And it never happened... not even once... and the times were lucky to be in the 210+ ms range. 5 ms wasn't causing them to miss the shot.

For those who are interested:

http://getyourwebsitehere.com/jswb/rttest01.html
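
(In case that link ever dies, the idea is trivial to recreate. Here's a rough console analogue in C++ -- my own sketch, not the linked app: wait a random delay, print GO, and time the keypress.)

    // Minimal console reaction-time test (a sketch, not the linked app).
    // Waits a random 1-4 s, prints GO, then times how long Enter takes.
    #include <chrono>
    #include <cstdio>
    #include <random>
    #include <thread>

    int main() {
        std::mt19937 rng(std::random_device{}());
        std::uniform_int_distribution<int> delay_ms(1000, 4000);

        std::puts("Press Enter to arm, then hit Enter again when you see GO...");
        std::getchar();  // wait for the player to be ready

        std::this_thread::sleep_for(std::chrono::milliseconds(delay_ms(rng)));
        std::puts("GO");
        auto t0 = std::chrono::steady_clock::now();
        std::getchar();  // blocks until the next Enter (terminal is line-buffered)
        auto t1 = std::chrono::steady_clock::now();

        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
        std::printf("Reaction time: %lld ms\n", (long long)ms);
    }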


Completely unrelated.

If CS were a game where players just popped up out of nowhere (and then stayed static), that would be somewhat comparable. But when you aim at a player you take lots of things into consideration, and most importantly, aiming at and hitting a player has very little to do with reaction times. Hitting a player who walks into your crosshair can probably be done with a few ms of precision, because we extrapolate movement and subconsciously take even things like input lag into consideration.

Not saying that those 5 ms are important; if anything, a constant 25 ms is better than 20 ms with +5 ms spikes. But reaction times have very little to do with actually hitting the other player, and when you have server-side hit detection the lag really is important.


> Not saying that those 5 ms are important; if anything, a constant 25 ms is better than 20 ms with +5 ms spikes. But reaction times have very little to do with actually hitting the other player, and when you have server-side hit detection the lag really is important.

Lag correction is key here.

Competitive CS is played at a very high framerate (often around 120 Hz). So at a 20 ms ping, the client is around 2.4 frames behind the server; at 25 ms, it's almost exactly 3 frames behind. That means that, on average, the client with the lower ping is about 20% fewer frames behind the lag correction, which is a pretty huge difference for player movement.
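
(The arithmetic behind those figures: one frame at 120 Hz is 1/120 s ≈ 8.3 ms, so 20 ms ≈ 2.4 frames and 25 ms = 3 frames exactly.)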


That's a bad comparison; people can predict timing much more accurately than they can react.

A better example is mixing and recording music. Performers are recorded individually playing along to music they hear in their headphones.

Acceptable latency is under 20ms. http://www.soundonsound.com/sos/jan05/articles/pcmusician.ht...
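
(For scale -- not from the article, just the arithmetic: at a 44.1 kHz sample rate, a 256-sample audio buffer is ~5.8 ms and a 512-sample buffer is ~11.6 ms, so buffer size alone can eat half of that 20 ms budget.)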


That test is very cool. A better test might be to count down through 10 lights changing and try to click exactly when the last one changes. With perfect timing (no variation between when each light is lit), I would think the average person could hit the last light much better than the 250 ms I am getting on the rttest01 test.

Now, to expand this scenario, add random delays between the lights: first 10 ms, then 15 ms, 50 ms, etc. Then see how well a person can click when that last light lights up. This could be a better test for measuring 'Human Response Time vs Jitter Delay', or whatever the appropriate term is for what we inherently use subconsciously in FPS games.


You cannot compare latency to reaction time.

I can easily tell the difference between 40 ms and 80 ms playing CS, but I doubt my reaction time is that low (not to mention the time until my finger clicks).


+1

I ran a lot of tests with LCD TV input lag, and 30 ms lag vs 100 ms lag makes a huge difference in playability. At 120 ms+, FPS games are unplayable.


It's absolutely terrible in rhythm games like RockBand when video and input have a delta, and even worse when you add audio desync into the mix. Fortunately you have a few knobs in the game options.


Nope, I'm telling you, I played Quakeworld on my 14.4k modem at 150ms+ lag constantly with no issues.


That's different--QuakeWorld did client-side prediction, so your local side was updating with no lag (i.e., move the mouse and your view would rotate with very, very little lag, just like you were playing a local FPS). The positions of players were lagged because their canonical positions were controlled by the server, but that manifests itself totally differently than display lag.
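
(For anyone who hasn't seen the trick, here's a toy sketch of client-side prediction with server reconciliation -- the names and message shapes are made up for illustration, not QuakeWorld's actual protocol:)

    // Toy client-side prediction + reconciliation. Illustrative only;
    // not QuakeWorld's actual code or protocol.
    #include <cstdint>
    #include <cstdio>
    #include <deque>

    struct Input { uint32_t seq; float dx; };  // one tick of movement
    struct State { float x = 0; };

    struct Client {
        State state;                // locally predicted state
        std::deque<Input> unacked;  // inputs the server hasn't confirmed yet

        // Apply input locally right away -- this is the "prediction".
        // The same input would also be sent to the server (not shown).
        void apply_local(Input in) {
            state.x += in.dx;
            unacked.push_back(in);
        }

        // The server sends its authoritative state as of input `ack_seq`.
        // Rewind to that, then replay the inputs it hasn't seen yet.
        void on_server_update(State authoritative, uint32_t ack_seq) {
            while (!unacked.empty() && unacked.front().seq <= ack_seq)
                unacked.pop_front();
            state = authoritative;
            for (const Input& in : unacked) state.x += in.dx;
        }
    };

    int main() {
        Client c;
        c.apply_local({1, 1.0f});       // view moves immediately, no round trip
        c.apply_local({2, 1.0f});
        c.on_server_update({0.9f}, 1);  // server corrected input 1; 2 is replayed
        std::printf("predicted x = %.2f\n", c.state.x);  // 1.90
    }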


I am not sure if you are trolling or remembering through rose-tinted glasses. You CAN play QW with such a ping, but you would not stand a chance against anyone with a ping of 30; projectiles and players jitter all around the place.


This is a very skewed test, I think. While your result and intuition are likely to be true, the test does not prove it. The stoplight test measures the response to a sudden change with no prior indication. While playing an FPS, you get multiple sources of information which converge before a specific event.

To be more precise, you know what to expect and when - even without counting exact seconds, if you see someone running and then disappearing behind something, you know when to expect him to enter your view again. Then you've got sound, which additionally helps you orient. Then you've got the time between spotting something and getting the correct aim: while you see the target moving, you're aligning your aim with the moving object and deciding when they meet, rather than just measuring a response to an impulse.


Small bug report: If I click start over, then use space as my key, it records the time, then starts over.

You should return false from your event handler to prevent that (but only do that during an actual test, not all the time). Or remove the focus from the start over button.


I had to try 2 times...but I got "0.203". Then on my fourth try I also got "0.205".


Ha, I love stuff like that test. I couldn't break 210 ms though; 213 ms was my best.


You should hold the key down. I got 10ms average.


Oddly, I was faster if I concentrated on the vanishing red light (210-220 ms) instead of concentrating on the green light lighting up (240-260 ms).

I also tried not focusing (aren't the rod cells at the edge of your vision faster at detecting movement than the cone cells in the center?), but it didn't amount to much.


That was oddly fun. My best time was .283 seconds.


.007 so yea your server sucked!!! :)


Getting downvoted over a joke?

Or maybe you think I'm lying about .007, which is a score I actually got...

Either way, lighten up HN...


Since your account is only 84 days old, maybe you're not aware that jokes are generally frowned upon and will usually get you down voted. Live and learn :)


Carmack probably read the Anandtech article (from 2009) on this topic: http://www.anandtech.com/print/2803

So the short answer is: it takes that long to go through all of the processing stages (input controller/keyboard/mouse, USB, CPU, processing latency, GPU, more processing latency, RAMDAC/digital output, LCD pre-processing, LCD output, LCD post-processing, pixel transistor).

He doesn't mention using "game mode" on that display - maybe it doesn't have one. Game mode is supposed to cut pre- and post-processing, reducing LCD latency for exactly this reason.

Also note it takes about that long for a signal to hit your eye, go through your brain, trip the switch that fires an action, and travel back down your arm into your finger: ~113 ms.

(How does that line up with 100ms game tick cycles? I don't know.)


If you read the link, he actually describes his methodology:

"The time to send a packet to a remote host is half the time reported by ping, which measures a round trip time.

The display I was measuring was a Sony HMZ-T1 head mounted display connected to a PC.

To measure display latency, I have a small program that sits in a spin loop polling a game controller, doing a clear to a different color and swapping buffers whenever a button is pressed. I video record showing both the game controller and the screen with a 240 fps camera, then count the number of frames between the button being pressed and the screen starting to show a change.

The game controller updates at 250 hz, but there is no direct way to measure the latency on the input path (I wish I could still wire things to a parallel port and use in/out asm instructions). As a control experiment, I do the same test on an old CRT display with a 170hz vertical retrace. Aero and multimon can introduce extra latency, but under optimal conditions you will usually see a color change starting at some point on the screen (vsync disabled) two 240 hz frames after the button goes down. It seems there is 8ms or so of latency going through the USB HID processing, but I would like to nail this down better in the future.

It is not uncommon to see desktop LCD monitors take 10+ 240hz frames to show a change on the screen. The Sony HMZ averaged around 18 frames, or 70+ total milliseconds."
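
(The test program he describes is only a few dozen lines. A minimal sketch of the same idea using SDL2 + OpenGL -- my reconstruction, not Carmack's actual code:)

    // Spin polling a game controller, flipping the clear color the instant
    // a button goes down. Film the pad and the screen with a 240 fps camera
    // and count frames. A reconstruction of the described test, not his code.
    #include <SDL.h>
    #include <SDL_opengl.h>

    int main(int, char**) {
        SDL_Init(SDL_INIT_VIDEO | SDL_INIT_GAMECONTROLLER);
        SDL_Window* win = SDL_CreateWindow("latency", 0, 0, 640, 480,
                                           SDL_WINDOW_OPENGL);
        SDL_GL_CreateContext(win);
        SDL_GL_SetSwapInterval(0);  // vsync off, as in the test
        SDL_GameController* pad = SDL_GameControllerOpen(0);

        for (;;) {              // spin loop: no waiting, no event queue
            SDL_PumpEvents();   // refresh controller state
            if (SDL_GameControllerGetButton(pad, SDL_CONTROLLER_BUTTON_A))
                glClearColor(1, 0, 0, 1);  // red while the button is down
            else
                glClearColor(0, 0, 0, 1);  // black otherwise
            glClear(GL_COLOR_BUFFER_BIT);
            SDL_GL_SwapWindow(win);
        }
    }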


This agrees with Anand's results - with vsync disabled and very fast input processing, you could have about 17ms for transmission from the video card (in the video cable), another 17ms for processing, and 4ms more for LCD response, giving the ~40ms that 10 frames would give.

(one 240 Hz frame = 1/240 s ≈ 4.17 ms; 18 frames ≈ 75 ms)


This is why I play Geometry Wars on a Sony Trinitron CRT TV.

> "you will usually see a color change starting at some point on the screen (vsync disabled) two 240 hz frames after the button goes down"

That's less than 1/100th of a second from the button press to the screen.


I wonder what the margin of error is on marking when the button push is made. A controller button's range of travel makes it nearly impossible to tell from video exactly when the electrical connection takes place.

I think it'd be better to touch wires together or something more obvious to reduce the margin of error.


It probably depends on how the button is debounced as well. Software and physical debounces are both generally tuned for at least a few milliseconds and potentially over ten milliseconds of latency.

A better trigger could be built by interfacing a microcontroller over USB HID and having it send a simulated keystroke or button press at the same time as turning on an LED.
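
(Something like this on an ATmega32U4 board such as an Arduino Leonardo, which can act as a USB HID keyboard -- a sketch of the idea only; the pin and timings are arbitrary:)

    // Fire a HID keystroke and light an LED back to back, so the camera
    // can see almost exactly when the "press" was issued.
    #include <Keyboard.h>

    const int LED_PIN = 13;  // arbitrary choice; any free pin works

    void setup() {
        pinMode(LED_PIN, OUTPUT);
        Keyboard.begin();
    }

    void loop() {
        digitalWrite(LED_PIN, HIGH);  // LED on...
        Keyboard.press(' ');          // ...and keystroke out, back to back
        delay(50);
        Keyboard.release(' ');
        digitalWrite(LED_PIN, LOW);
        delay(2000);                  // idle, then fire the next trial
    }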


I think you're being generous. I've had microswitches that needed a good 20-25 ms to fully debounce. Good quality sealed switches (like Cherry) too, not badly plated counterfeit stuff.


I'd assume that Carmack was going by the same distance the key was depressed, in the video frames, for each test.

The variance on that would be about one frame, which would add up to 4 ms for each test. However, since the input device did not change between experiments, it's a constant error added into every test, making that a wash.


There was a controller mod that addressed this exactly [1].

[1] http://youtu.be/XkiyxzvsAYo


I have seen test rigs where the button switches are in series with an LED - the LED lights essentially instantly, and it is very obvious on-camera when that circuit is closed.


Maybe I missed it, but he did not seem to go into detail as to how the pixel was plotted. If the pixel was placed in a buffer, and then a texture was locked and the pixel was transferred, this would obviously induce significant overhead on its own (in short, any time you need to communicate between CPU and GPU memory you induce a huge lag)... although Carmack knows this better than anyone else.

Also, the actual pressing of the button could use more clarification. When did measurement start? On the button down signal? On the button up signal? As soon as the finger touches the button?


He is doing a clear, which is pretty much the fastest path in hardware.


Oops, I did miss that. I would argue that a clear is actually not the fastest method -- it would be faster to keep two pre-rendered buffers of different colors and swap them on key press... although the difference should be pretty trivial ;)


You're right, clear can sometimes be quite slow on some hardware. Fast clears tend to be limited to all 0s or all 1s, and only full screen. In the worst case, the hardware has no support at all and has to set up and rasterise a full-screen quad and send that down the pipeline.


But the delay involved in rendering to the buffer is part of the delay under test. We are not trying to get the lowest latency possible with the hardware; we are trying to get the lowest latency that a game could conceivably achieve.


In fast FPSes this makes a huge difference regardless of your ping, even in game mode on some monitor/video card setups. You only see it clearly at LANs, when the other guy sees you half a second to a full second BEFORE you see him.

Then you realize no one was cheating, your hardware is just crap ;-)


I am not really clear on the measurement methodology. Why is a game controller involved?

Did he trigger the ping from the same game controller button?

Did the game controller trigger the frame buffer swap on the graphics card?

At 240 Hz, the lower-right pixel on an LCD will be painted 4 ms after the upper-right pixel. Where were the measurements taken?

Yes, LCDs are scanned devices, just like CRTs; you don't see it because of the slow response time.

Lastly, I didn't see any numbers. Where are the measurements?


The total length of wire in a microprocessor is currently on the order of 100 km. A signal doesn't have to go through all of it, but processing often involves loops, etc. Also considering the delays at logic gates (waiting for clock signals), it's not that surprising that a signal needing complex processing may spend longer inside a microprocessor than on a straight journey of a few thousand miles across the ocean.
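
(Rough numbers for the ocean leg: light in optical fibre travels at about 2/3 c, roughly 200,000 km/s, so the ~5,600 km from New York to London is ~28 ms one way, ~56 ms round trip -- which is most of a typical ~75 ms transatlantic ping; the rest is routing gear at each end.)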


Uh, the packet still has to traverse the same loops of microprocessors. More loops, I would wager, than just sending some bits to a display.


I didn't say that network packets take no processing, just that the processing path matters. Contrary to your intuition, this path is significantly shorter for network packets, and an Ethernet board (first produced in the 1980s) is a much simpler circuit than the latest GPUs.


Transocean cables aren't just Ethernet boards on both ends. There's extensive, extensive gear on either end as well as at the peering points between you and the destination. I'd wager that the same packet traverses tens of microprocessors between you and the destination, so your point is questionable.

Just adding a router can add up to 30ms to a hop, if it sucks.


(The signal wavefront travels at ~0.66c in copper. I don't know about Si though.)


> (The signal wavefront travels at ~0.66c in copper. I don't know about Si though.)

Copper doesn't have much to do with this; rather, what matters are the impedance characteristics of the transmission line. 0.66c is a reasonable number for thin coax cable. Split speaker wire, on the other hand, is also often copper and has a propagation velocity of 95+% of the speed of light. Generally, the dielectric used and the configuration of the transmission line are far more important than the material used as the conductor.

As for processors, signals are still moved around on metal wiring, so the propagation velocity through Si isn't really what matters; rather, it's the propagation delay of gates, which is less about the speed of the wavefront and more about how long it takes to turn a transistor on or off. While there is some component of propagation velocity involved in this, it's not really significant in comparison to other factors in the gate design (see Intel's Tri-Gate).


CPUs are very parallel devices; for instance, each bit of a 64-bit CPU needs its own wiring, with the longest path for a single clock cycle being limited to ~4 cm.




