
Transatlantic ping faster than sending a pixel to the screen?


 I can send an IP packet to Europe faster than I can send a pixel to the screen. How f’d up is that?

And if this weren’t John Carmack, I’d file it under “the interwebs being silly”.

But this is John Carmack.

How can this be true?

To avoid discussions about what exactly is meant in the tweet, this is what I would like to get answered:

How long does it take, in the best case, to get a single IP packet sent from a server in the US to somewhere in Europe, measured from the time the software triggers the packet to the point where it is received by software above driver level?

How long does it take, in the best case, for a pixel to be displayed on the screen, measured from the point where software above driver level changes that pixel’s value?


Even assuming that the transatlantic connection is the finest fibre-optic cable that money can buy, and that John is sitting right next to his ISP, the data still has to be encoded into an IP packet, travel from main memory across to his network card, go from there through a cable in the wall into another building, probably hop across a few routers (but let’s assume it needs just a single relay), get converted into light for the trip across the ocean, get converted back into an electrical signal by a photodetector, and finally be interpreted by another network card. Let’s stop there.
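For scale, here is my own back-of-the-envelope lower bound for just the ocean crossing (the distance and the fibre speed are my rough assumptions): light in fibre travels at roughly c/1.5 ≈ 200,000 km/s, and the great-circle distance from the US East Coast to Western Europe is on the order of 5,500 km, so

    5,500 km / 200,000 km/s ≈ 27 ms one way

before any encoding, queuing or routing overhead is added.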

As for the pixel, this is a simple machine word that gets sent across the PCI Express bus, written into a buffer, which is then flushed to the screen. Even accounting for the fact that “single pixels” probably result in the whole screen buffer being transmitted to the display, I don’t see how this can be slower: it’s not like the bits are transferred “one by one” – rather, they are consecutive electrical impulses which are transferred without latency between them (right?).
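Again for scale (my own rough numbers, not part of the question): even if the whole framebuffer has to be resent, a typical display is only refreshed at 60 Hz, i.e. every

    1 / 60 Hz ≈ 16.7 ms

so in the worst case the change waits most of one refresh interval before it even starts to scan out, plus the panel’s own response time – already the same order of magnitude as the transatlantic propagation above.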

asked May 30, 2013 by anonymous  
By my calculations, as long as the satellite is halfway between Boston and London, the Earth is a perfect sphere, and the satellite is at a height d ≥ (sec θ − 1)·R ≈ 521 km (where R is the radius of the Earth, ~6400 km, and θ ≈ 2500 km / 6400 km ≈ 0.4 rad is the angle between Boston and the satellite, which is the same as between the satellite and London), then the satellite can be seen by both, with a lower limit on the total travel distance of 2·sqrt((R+d)² − R²) ≈ 5270 km, i.e. a one-way travel time of ~18 ms at c. I use c to get a lower bound; speeds faster than the ~0.7c you get in fibre are feasible, though not in practice.
Today people are actually learning that electronics underlie what makes programming work. Programming is accessible to everyone, but designing something like an entire computer is not, and it has big repercussions in terms of cost and manufacturability. Graphics chips are very different from other chips, and the data still has to go through the screen hardware. Technology and physics are not as simple as programming, and they cost money. Deal with it, people. Still, it would be quite cool if Carmack could change things here like he did for graphics cards!
I was just adding to the conversation; the article I linked claimed there were two errors. It was more of a reference than a reply.
For transatlantic cables, see the CANTAT-3 cable in en.wikipedia.org/wiki/Transatlantic_communications_cable. The time for light in fibre from Nova Scotia to Iceland (part of Europe) is 16.7 ms, see wolframalpha.com/input/?i=distance+halifax%2C+canada+iceland
Apparently you can do a transatlantic ping faster, but that also means you wouldn't see it on the screen

3 Answers

 
Best answer

 The time to send a packet to a remote host is half the time reported by ping, which measures a round trip time.
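For example (my numbers, not the answer’s): if ping to a host in Europe reports a round-trip time of about 80 ms, then

    one-way time ≈ RTT / 2 = 80 ms / 2 = 40 ms

assuming the path is reasonably symmetric in both directions.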

The display I was measuring was a Sony HMZ-T1 head mounted display connected to a PC.

To measure display latency, I have a small program that sits in a spin loop polling a game controller, doing a clear to a different color and swapping buffers whenever a button is pressed. I record video showing both the game controller and the screen with a 240 fps camera, then count the number of frames between the button being pressed and the screen starting to show a change.
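A minimal sketch of that kind of tool, using SDL2 purely for illustration (the library, window size and button choice are incidental, and the real program need not use SDL at all), looks something like this:

    /* latencytest.c - spin loop: clear to a different color and swap buffers
     * the moment a controller button goes down. Error handling omitted;
     * quit by killing the process.
     * Build: cc -O2 latencytest.c -o latencytest `sdl2-config --cflags --libs`
     */
    #include <SDL.h>

    int main(void)
    {
        SDL_Init(SDL_INIT_VIDEO | SDL_INIT_GAMECONTROLLER);

        SDL_Window *win = SDL_CreateWindow("latency test",
            SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 1280, 720, 0);
        /* No PRESENTVSYNC flag: we want the change on screen as soon as
         * possible, not at the next vertical sync. */
        SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);
        SDL_GameController *pad = SDL_GameControllerOpen(0);  /* pad at index 0 */

        int wasDown = 0, flip = 0;
        for (;;) {
            SDL_PumpEvents();                /* keep the OS event queue serviced */
            SDL_GameControllerUpdate();      /* poll the pad state directly */
            int down = SDL_GameControllerGetButton(pad, SDL_CONTROLLER_BUTTON_A);
            if (down && !wasDown) {
                flip = !flip;                /* button edge: switch colors */
                SDL_SetRenderDrawColor(ren, flip ? 255 : 0, 0, flip ? 0 : 255, 255);
                SDL_RenderClear(ren);
                SDL_RenderPresent(ren);      /* swap buffers immediately */
            }
            wasDown = down;
        }
    }

Point a 240 fps camera at both the pad and the screen, then count camera frames between the physical button press and the first visible color change.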

The game controller updates at 250 Hz, but there is no direct way to measure the latency on the input path (I wish I could still wire things to a parallel port and use in/out asm instructions). As a control experiment, I do the same test on an old CRT display with a 170 Hz vertical retrace. Aero and multiple monitors can introduce extra latency, but under optimal conditions you will usually see a color change starting at some point on the screen (vsync disabled) two 240 Hz frames after the button goes down. It seems there is 8 ms or so of latency going through the USB HID processing, but I would like to nail this down better in the future.

It is not uncommon to see desktop LCD monitors take 10+ 240 Hz frames to show a change on the screen. The Sony HMZ averaged around 18 frames, or 70+ total milliseconds.
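Converting those camera-frame counts to time (just arithmetic on the numbers above, no extra measurements):

    1 frame at 240 fps = 1/240 s ≈ 4.2 ms
    2 frames  ≈  8 ms   (the best case above)
    10 frames ≈ 42 ms   (a typical desktop LCD)
    18 frames ≈ 75 ms   (the Sony HMZ, i.e. the “70+ total milliseconds”)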

This was in a multimonitor setup, so a couple frames are the driver's fault.

Some latency is intrinsic to a technology. LCD panels take 4-20 milliseconds to actually change, depending on the technology. Single chip LCoS displays must buffer one video frame to convert from packed pixels to sequential color planes. Laser raster displays need some amount of buffering to convert from raster return to back and forth scanning patterns. A frame-sequential or top-bottom split stereo 3D display can't update mid frame half the time.
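To put the single-frame buffering in numbers: at a 60 Hz video input, one buffered frame is

    1 / 60 Hz ≈ 16.7 ms

of added latency by itself, before the panel’s 4-20 ms switching time comes on top.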

OLED displays should be among the very best, as demonstrated by an eMagin Z800, which is comparable to a 60 Hz CRT in latency, better than any other non-CRT I tested.

The bad performance on the Sony is due to poor software engineering. Some TV features, like motion interpolation, require buffering at least one frame, and may benefit from more. Other features, like floating menus, format conversions, content protection, and so on, could be implemented in a streaming manner, but the easy way out is to just buffer between each subsystem, which can pile up to a half dozen frames in some systems.
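Again in numbers: half a dozen buffered frames at 60 Hz is

    6 × 16.7 ms ≈ 100 ms

of latency from the buffering alone.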

This is very unfortunate, but it is all fixable, and I hope to lean on display manufacturers more about latency in the future.

answered May 30, 2013 by anonymous  
You clearly need to figure out how to use Europe as a display technology
It sounds like you are saying that when LCD makers claim, say, a 5 ms response time, that may be the time it takes the raw panel to change, but the monitor adds quite a bit more time buffering and processing the signal before it actually drives the LCD. Doesn't that mean the manufacturers are publishing false/misleading specs?
Hopefully in the future, direct-view LED displays will be readily available. Sony has announced one that will be coming out within the next year or two, and I actually had an opportunity to look at one and talk to one of the engineers behind it. I specifically asked about latency, and he said it was on the order of nanoseconds. Plus a 60" screen was razor-thin, lightweight, and took something like 20 watts to operate, so I mean, how is this NOT a winning technology?
Here's how I measure display latency: most chipsets provide some GPIO pins, which you can toggle with an outp instruction (your program must run with high privileges for this to work, of course). Then clone the screen onto a digital and an analogue connection; the display goes on the digital one. Put a photodiode on the display, hook up the analogue video and the photodiode to an oscilloscope, and connect the scope's external trigger to the GPIO. Now you can use the GPIO for triggering and accurately measure the time it takes for the signal to appear on the line and on the display.
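A minimal sketch of the software half of that setup, assuming Linux/x86 with the classic parallel-port data register at 0x378 standing in for a chipset GPIO, and SDL2 just for the screen flash (the port address and the drawing calls are illustrative assumptions, not from the comment):

    /* trigger.c - raise a port pin just before changing the screen, so an
     * oscilloscope triggered on that pin can time the display change.
     * Build: cc -O2 trigger.c -o trigger `sdl2-config --cflags --libs`
     * Run as root (ioperm requires it).
     */
    #include <sys/io.h>
    #include <SDL.h>

    #define LPT_DATA 0x378   /* parallel-port data register (assumed address) */

    int main(void)
    {
        if (ioperm(LPT_DATA, 1, 1) != 0) return 1;   /* request port access */

        SDL_Init(SDL_INIT_VIDEO);
        SDL_Window *win = SDL_CreateWindow("trigger", 0, 0, 640, 480, 0);
        SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

        outb(0x01, LPT_DATA);            /* scope trigger: data pin goes high ... */
        SDL_SetRenderDrawColor(ren, 255, 255, 255, 255);
        SDL_RenderClear(ren);
        SDL_RenderPresent(ren);          /* ... and the frame goes white right after */

        SDL_Delay(1000);
        outb(0x00, LPT_DATA);            /* drop the pin again */
        return 0;
    }

The scope then shows two delays relative to the pin edge: the analogue video line tells you when the frame leaves the graphics card, and the photodiode tells you when it actually appears on the display.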

Some monitors can have significant input lag

With an awesome internet connection on one side and a crappy monitor and video card combo on the other, it's possible.

Sources:

Console Gaming: The Lag Factor • Page 2

So, at 30FPS we get baseline performance of eight frames/133ms, but in the second clip where the game has dropped to 24FPS, there is a clear 12 frames/200ms delay between me pulling the trigger, and Niko beginning the shotgun firing animation. That's 200ms plus the additional delay from your screen. Ouch.

A display can add another 5-10 ms

So, a console can have up to 210 ms of lag
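(Reading the numbers in the quote: the frame counts are 60 Hz capture frames, so 8 × 16.7 ms ≈ 133 ms and 12 × 16.7 ms ≈ 200 ms; add the ~10 ms a display can contribute and you arrive at roughly 210 ms.)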

And, as per David's comment, the best case should be about 70 ms for sending a packet

answered May 30, 2013 by anonymous  
Added a source. It's a console, but I imagine a PC would have similar lag
Sorry, but I still don’t see this really answering the question. The quote talks about “pulling the trigger”, and this implies much more work – input processing, scene rendering etc. – than just sending a pixel to the screen. Also, human reaction speed is relatively lousy compared to modern hardware performance. The time between the guy thinking he pulled the trigger and actually pulling it could well be the bottleneck
The linked article shows that the author of this analysis purchased a special device that can show you exactly when the button was pressed, so I don't think they're just winging the numbers
Perception is pretty weird stuff. I read an article a while ago about an experimental controller that read impulses directly off the spinal cord. People would feel that the computer was acting before they had clicked, even though it was their own nerve command to click it was reacting to.
This is a known effect. Google for "Benjamin Libet's Half Second Delay". Human consciousness requires significant processing time. Everything that you think is happening now actually happened in the past. All your senses are giving you an "integrated multimedia experience" of an event from half a second ago. Furthermore, events appear to be "time-stamped" by the brain: a direct brain stimulation has to be delayed relative to a tactile stimulation in order for the subject to report the sensations as simultaneous

It is very simple to demonstrate input lag on monitors: just stick an LCD next to a CRT, show a clock or an animation filling the screen, and record it. One can be a second or more behind. It is something that LCD manufacturers have tightened up on since gamers etc. have noticed it more.

Eg. Youtube Video: Input Lag Test Vizio VL420M
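If you want to roll your own test animation rather than rely on a video, here is a minimal sketch, assuming SDL2 (the library, the one-second sweep and the bar width are arbitrary choices of mine): it draws a vertical bar that sweeps across the screen once per second, so a single photo of two cloned monitors shows their latency difference as a positional offset.

    /* lagbar.c - full-screen animation for comparing two cloned displays.
     * Build: cc -O2 lagbar.c -o lagbar `sdl2-config --cflags --libs`
     */
    #include <SDL.h>

    int main(void)
    {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Window *win = SDL_CreateWindow("lag bar", SDL_WINDOWPOS_CENTERED,
            SDL_WINDOWPOS_CENTERED, 0, 0, SDL_WINDOW_FULLSCREEN_DESKTOP);
        SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

        int w, h;
        SDL_GetRendererOutputSize(ren, &w, &h);

        for (;;) {
            SDL_PumpEvents();                          /* keep the window responsive */
            Uint32 ms = SDL_GetTicks() % 1000;         /* bar position = time within 1 s */
            SDL_Rect bar = { (int)((Uint64)ms * w / 1000), 0, 8, h };

            SDL_SetRenderDrawColor(ren, 0, 0, 0, 255); /* black background */
            SDL_RenderClear(ren);
            SDL_SetRenderDrawColor(ren, 255, 255, 255, 255);
            SDL_RenderFillRect(ren, &bar);             /* white bar at the time mark */
            SDL_RenderPresent(ren);
        }
    }

Clone the desktop to the LCD and the CRT, photograph both with a fast shutter, and read off the lag: the gap between the two bar positions, as a fraction of the screen width, is the fraction of a second one display is behind the other.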

answered May 30, 2013 by anonymous  
...