
I feel the same way. We have way too many people working on tooling who don't know how to properly make things fast.

On some days, I manage to type faster than Xcode can display the letters on screen. There is no excuse for that with a 3 GHz CPU.

And yes, 200ms seems plausible to me:

Bluetooth adds delay over PS/2 (about 28 ms). DisplayPort adds delay over VGA. LCD screens need to buffer internally. Most even buffer 2-3 frames for motion smoothing (= 50 ms). And suddenly you have 78 ms in hardware delay.

If the app you're using is Electron or the like, then the click will be buffered for 1 frame, then there's the click handler, then 1 frame of delay until the DOM is updated and another frame of delay for redraw. Maybe add 1 more frame for the Windows compositor. So that's 83ms in software-caused delay.

So I'd estimate a minimum of 161ms of latency if you use an Electron-based app with a wireless mouse on a DisplayPort-connected LCD screen, i.e. VSCode on my Mac.
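Written out as a back-of-the-envelope sum (every per-stage figure here is the rough estimate above, not a measurement):

```python
# Rough latency budget at 60 Hz; all numbers are estimates from the comment above.
FRAME_MS = 1000 / 60  # one frame ~= 16.7 ms

hardware_ms = 28 + 3 * FRAME_MS   # Bluetooth overhead + up to 3 buffered LCD frames
software_ms = 5 * FRAME_MS        # input buffer, click handler, DOM update, redraw, compositor

print(round(hardware_ms))                 # -> 78
print(round(software_ms))                 # -> 83
print(round(hardware_ms + software_ms))   # -> 161
```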




The IDE is an extreme case of user interface.

You type in a letter and that starts off a cascade of computations, incremental compilation, table lookups, and such to support syntax highlighting, completion, etc. and then it updates whatever parts of a dynamic UI (the user decides which widgets are on the screen and where) need to be updated.

It almost has to be done in a "managed language", whether that is Emacs Lisp, Java, etc., and is likely to have an extension facility that might let the user add updating operations that could use unbounded time and space. (I am wary of adding any plug-ins to Eclipse.)

I usually use a powerful Windows laptop and notice that IDE responsiveness is very much affected by the power state: if I turn down the power use because it is getting too warm for my lap, the keypress lag increases greatly.


If kicking off incremental compilation is causing the IDE's UI to behave sluggishly, then the IDE is wrong. The incremental compilation or other value-adds (relative to a text editor) should not create perceptible regressions.

Table lookups for syntax highlighting can't be backgrounded, but they should be trivial in comparison to stuff like compilation, intellisense, etc.


I'm a bit of a language geek but I've always been confused by IDE lag, so I figure there's something I don't know.

From a UX perspective, I can see doing simple syntax highlighting on the UI thread...so long as it is something with small, bounded execution time. I don't quite get why completions and other stuff lag the UI thread, as it seems obvious that looking that information up is expensive. I can't tell if that is what's happening, or whether there's something more going on, such as the coordination between UI and worker threads becoming costly.

I've seen it in a bunch of IDEs though, especially those in managed languages. You're typing, it goes to show a completion, and then....you wait.


I’m amazed at how much faster Rider seems to be than Visual Studio at its own game. Intellisense is way slower than the C# IDE made by the people who make Resharper. Resharper in visual studio is always really slow though.


> DisplayPort adds delay over VGA

Surely VGA would have more latency than DP for an LCD? It's gotta convert from digital to analogue and then back to digital again at the other end.

Is the overhead of the protocol really greater than that? (genuine question)


I meant to compare DP+LCD vs. VGA+CRT.

But to answer your question, digital to analogue and analogue to digital conversions tend to be so fast that you don't notice. It is more of a convention thing that most VGA devices will display the image as the signal arrives, which means they have almost no latency. DP devices, on the other hand, tend to cache the image, do processing on the entire frame, and only then start the presentation.

As a result, for VGA the latency can be less than the time that it takes to send the entire picture through the wire. For DP, it always is at least one full transmission time of latency.


DP does not require buffering the entire frame. Data is sent as "micro packets". Each micro packet may include a maximum of 64 link symbols, and each link symbol is made up of 8 bits encoded as 8b/10b. The slowest supported link symbol clock is 1.62Gb/s, so even considering protocol overhead there are always millions of micro packets per second.

If the required video data rate is lower than the link symbol rate the micro packets are stuffed with dummy data to make up the difference, and up to four micro packets may be sent in parallel over separate lanes, so some buffering is required, but this need only add a few microseconds of latency, which is not perceptible. Of course it's possible for bad implementations to add more, but the protocol was designed to support low latency.
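For scale, the packet-rate claim checks out with simple arithmetic (using only the numbers above):

```python
# DisplayPort RBR link: 1.62 Gb/s per lane, 8b/10b coded, 64-symbol micro packets.
SYMBOLS_PER_PACKET = 64
BITS_PER_SYMBOL = 10            # 8 data bits travel as 10 line bits (8b/10b)
LINK_RATE_BPS = 1.62e9          # slowest supported DP link rate per lane

bits_per_packet = SYMBOLS_PER_PACKET * BITS_PER_SYMBOL    # 640 line bits
packets_per_sec = LINK_RATE_BPS / bits_per_packet
packet_time_us = 1e6 / packets_per_sec

print(int(packets_per_sec))      # -> 2531250, i.e. ~2.5 million packets/s
print(round(packet_time_us, 2))  # -> 0.4 (microseconds per packet)
```

So even buffering a handful of micro packets costs well under a millisecond.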


Thank you for teaching me something new :) I didn't know about micro-packets before.

In that case, I'm guessing the latency is coming from the fact that most LCD screens are caching one full image so that they can re-scale it in case the incoming video resolution isn't identical with the display's native resolution.

I vaguely remember there being an experimental NVIDIA feature to force scaling onto the GPU in hopes of reducing lag, but not sure that ever got released.


To be fair, it's only "almost no latency" if you just care about the pixels at the top of the screen. Since CRTs (and LCDs) draw the image over the course of a full frame, it's more fair to say 8.3ms, since that's when the middle of the screen will be drawn (at 60Hz). This is pretty comparable to modern gaming monitors, which have around 8.5-10ms of input delay @60Hz.

Where CRTs do have an advantage over LCDs is response time, which is generally a few ms even on the best monitors but basically nonexistent on CRTs.

But overall, a good monitor is only about half a frame worse than a CRT in terms of latency if you account for response time. At higher refresh rates it's even less of an issue; I'm not aware of any CRTs that can do high refresh rates at useful resolutions.

Got my numbers by glancing at a few RTINGS.com reviews: https://www.rtings.com/monitor/reviews/best/by-usage/gaming


Conversions between analog and digital happen in nanoseconds. They happen as the signal is sent.


macOS's compositor is waaay worse than Windows'. On macOS everything feels like it's lagging for 200 ms.


161ms is longer than it takes to ping half way around the world. Amazing.


That's why most people don't notice any performance issues with Google Stadia / Geforce Now. They are conditioned to endure 100+ ms of latency for everything, so an additional 9ms of internet transmission delay from the datacenter into your house is barely noticeable.


161 ms is 1/6th of a second which I would have thought would be noticeable and yet I haven't noticed it. I assume that is mouse clicks?

I'm sure I'd notice if typing had that much lag in VS Code. I am using Manjaro Linux but I can't imagine that it would be much faster than macOS.


Fighting gamers are generally able to block overhead attacks (so they see the attack and successfully react by going from blocking low to blocking high, after waiting for the delay caused by software and the LCD monitor and their own input device) that take 20 frames or more. That's 333ms. So I think if you were really paying attention to the input delay instead of trying to write software you would end up noticing delays around the 160ms level, idk.
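Converting frames to milliseconds (assuming the standard 60 fps that fighting games run at):

```python
FPS = 60
overhead_frames = 20                       # startup frames of a "reactable" overhead attack
reaction_window_ms = overhead_frames / FPS * 1000

print(round(reaction_window_ms))           # -> 333
```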


333ms is ages! I can react way faster than that on a touchscreen. I bet you can too:

https://humanbenchmark.com/tests/reactiontime


Yes. The players are trying to react to a bunch of other things, not just 1 possible move. It's in this context that 20 frames is the cutoff where moves start to be considered "fake" (i.e. getting hit is an unforced error)


Just tried in VS Code again, and there does seem to be a lag for mouse clicks. Not sure if it's as much as 1/6 s, but probably 1/10. Typing, though, looks as snappy as any terminal.

I guess Electron or MS have optimised the typing path. I don't click that much in VS Code so I don't think it's ever bothered me.


Typing in VSCode is high latency as well, I find it viscerally unpleasant to use solely due to this. There's already a ticket: https://github.com/Microsoft/vscode/issues/27378


And some video games on good hardware manage less than 20-30 ms of button-to-pixel response.


> Maybe add 1 more frame for the Windows compositor.

Months ago I noticed picom causing issues with keynav I was too lazy to find a (proper, pretty-window-shadow retaining) fix for, so I just killed it and — while I can’t confidently say I remember noticing a significant lag decrease — I can say I don’t really miss it (and my CPU, RAM, and electricity use almost certainly decreased by some small fractions).


Being a Go/C/Scheme coder means I'm not tied to an IDE, and my editor runs fast. Zero latency.


I just used IDEs as an example. You'll have the same latency issues with WhatsApp, Signal, Slack, Deezer, for example.


Being an antisocial GNU/Xorg*/systemd/Archlinux nerd means I don't have to use any of those.

* - actually it could be Wayland, but that doesn't work with my old window manager config.



