You can run this on your laptop and watch your WiFi drop in and out randomly (as other people around you download stuff, as Bluetooth devices interfere with it, etc.). It's also fun to run a speed test in another tab and watch the latency increase 10x as needless buffers fill up and your latency-sensitive packets wait in line so your router gets a better review!
I found that tool from Ben Kuhn's blog, and at first it seemed great. But I've seen some weird behavior from it. Examples include drastically different behavior on a refresh (and not just for the 'blue' website, but also for gstatic.com) and pings to Madagascar that are apparently faster than the speed of light. So now I'm somewhere between confused and distrustful.
If anyone has a detailed guide on using gfblip I'd be really curious to read it.
FYI - if you're getting a max red line on this site make sure you're not using the https version (and you may need to whitelist it in https-everywhere).
Intriguing, but: I don't think it's working as intended from my Brave browser.
After about 2 seconds of blue/green pings that seem faster than should be possible (<10ms), it goes full red.
It does work in Chrome! (Perhaps Brave is doing some faster-fail on the probe request from the beginning, and then a cached insta-failure after ~2 seconds?)
Had the same issue here. Brave blocks "insecure" scripts; you'll see a warning in the address bar and need to allow those scripts to load to get it to work correctly.
My blue pings to Calgary are reliably ~2000ms then ~200ms. I wonder where in the chain that pattern is originating, and whether it'll stay like that over time.
Agree! I wonder why the performance in Chrome is so much worse than in Firefox? I consistently see ~100ms higher values in Chrome than in Firefox. Maybe the author is using Firefox for development, so the test is optimized for Firefox.
Chrome also seems to show red dots every ~3 seconds for me, while Firefox shows no red dots. Strange.
Another great example of how a couple of traditional unix tools (if you want to count perl and gnuplot as traditional unix tools) can replace 250 lines of high-level language code (https://github.com/orf/gping/blob/master/src/main.rs).
Fantastic, thanks, I think this was the final push I needed to start abusing gnuplot.
Though for some reason the plot only updates when the plot window gains or loses focus (the refreshperiod mentioned in the man page doesn't seem to help, and I think it should default to 1 Hz).
A thing I’d really like and have contemplated making is graphs like this for mtr rather than ping. That way you can see how things are changing at each hop—if my network isn’t working perfectly, I always jump to mtr rather than ping, because it shows me whether the problem is between my laptop and the router, the router and the ISP, the ISP and the world, or near the end server. For best results you’d probably want a full 3D chart rather than just side-by-side 2D charts which is the best you could do in a terminal.
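A rough sketch of the parsing half of that per-hop idea, assuming mtr's plain `--report` text layout; the sample output below is made up for illustration, and real column widths vary a bit between mtr versions:

```python
import re

# Hypothetical sample of `mtr --report` output (hop, host, Loss%, Snt,
# Last, Avg, Best, Wrst, StDev); invented hosts for illustration.
SAMPLE = """\
HOST: laptop                      Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 192.168.1.1                0.0%    10    1.2   1.4   1.0   2.1   0.3
  2.|-- 10.0.0.1                   0.0%    10    8.9   9.5   8.1  12.3   1.2
  3.|-- core1.isp.example          0.0%    10   14.7  15.2  13.9  18.0   1.1
"""

# Capture hop number, host, loss %, and the Avg column (skipping Snt and Last).
LINE = re.compile(r"^\s*(\d+)\.\|--\s+(\S+)\s+([\d.]+)%\s+\d+\s+[\d.]+\s+([\d.]+)")

def parse_report(text):
    """Return a list of (hop, host, loss_pct, avg_ms) tuples."""
    hops = []
    for line in text.splitlines():
        m = LINE.match(line)
        if m:
            hop, host, loss, avg = m.groups()
            hops.append((int(hop), host, float(loss), float(avg)))
    return hops

for hop, host, loss, avg in parse_report(SAMPLE):
    # One bar per hop; stacking one such row per report over time would
    # approximate the side-by-side 2D charts, short of a full 3D view.
    print(f"{hop:2d} {host:24s} {'#' * int(avg)} {avg:.1f} ms")
```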
Given that "The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software." is in the license, and it isn't included... I dunno.
But at the same time, the legal system doesn't work like code. It's more flexible. So, who knows?
Yeah, the license requiring itself to be included in copies was what I was thinking of. Not having it in there leaves it in a gray enough area that I would be hesitant to rely on it for something where the licensing was important. Very possible the metadata being there and indicating the unique(?) license identifier of “MIT” would be enough to convince a court in the event of Problems, but I also am not a lawyer and like my usage rights to be unambiguous.
Not directly related, but since you can see a trend in the pings over time on the graph, it reminds me that even in a very granular benchmark it can be very helpful to plot the results before trying to process them.
For example, we sometimes blindly calculate the mean, when the data may be distributed such that the mean is a value the benchmark never actually produces.
Well, pings have a bit of a weird distribution. If you make a histogram of your ping for a bit, you'll see most values sitting near some "floor" of X ms and then a long right-sided tail of values much larger than X ms. It looks sort of like a gamma distribution, from a quick test of my wi-fi ping. The mean sits somewhere to the right of the largest mass of ping times. If you want to know where most pings are, look at the median (which sits right where most pings are). If you're interested in the really large values, compare the mean and median to find out just how much those slow ping responses are dragging your mean away from the median.
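A quick illustration of that skew, simulated with a gamma distribution rather than real pings (the 10 ms floor and the shape/scale parameters here are made up):

```python
import random
import statistics

random.seed(42)

# Simulate a right-skewed latency distribution: a ~10 ms floor plus a
# gamma-distributed tail, loosely resembling wi-fi ping times.
pings = [10 + random.gammavariate(2, 5) for _ in range(1000)]

mean = statistics.mean(pings)
median = statistics.median(pings)

print(f"mean   = {mean:.1f} ms")
print(f"median = {median:.1f} ms")
# The slow outliers drag the mean to the right of the median.
assert mean > median
```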
Perhaps add support for also getting the actual normal/regular ping output, so one can have that along with the graph? Apologies if that's already there; I was just looking at the usage video and switches, and it appears to only output a graph. It would seem useful to also have the normal/traditional ping output as well.
I'm curious what the use case of monitoring ping graphs by themselves is. Other than comparing with the ping graph of another device, or complaining that your internet connection is poor... it seems like other investigative tools would be needed for diagnosing. Maybe I'm just not imaginative enough!
Noticed this uses pinger, which executes the actual ping command behind the scenes and parses the output, which makes it fragile if the output format of ping ever changes. Are there any good alternative libraries that can use raw sockets, etc.?
I’ve just updated pinger to use a windows-specific API that doesn’t need root. But yes, in general it’s a nightmare especially with ping executables that don’t report timeouts at all.
This is a great example of one of the failings of the UNIX model. You can’t really parse the ping output across distributions easily, it’s built for humans to consume and until recently didn’t even report timeouts to stdout. Not to mention needing to invoke “ping6” for ipv6, which I need to add support for.
However it’s the best way to do this without needing to run gping as root.
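To make the fragility concrete, here's a rough sketch of the kind of regex such a wrapper ends up needing, matched against an iputils-style reply line; the alternative format shown is an example of how implementations differ (BusyBox, if I remember right, prints `seq=` instead of `icmp_seq=`):

```python
import re

# Matches a Linux iputils reply line such as:
#   64 bytes from 8.8.8.8: icmp_seq=1 ttl=118 time=12.3 ms
IPUTILS_REPLY = re.compile(
    r"bytes from .*?: icmp_seq=(\d+) ttl=\d+ time=([\d.]+) ms"
)

def parse_reply(line):
    """Return (seq, rtt_ms) for an iputils-style reply line, or None."""
    m = IPUTILS_REPLY.search(line)
    if m:
        return int(m.group(1)), float(m.group(2))
    # Timeouts often print nothing at all, which is part of the nightmare.
    return None

print(parse_reply("64 bytes from 8.8.8.8: icmp_seq=1 ttl=118 time=12.3 ms"))
# A ping that writes "seq=" instead of "icmp_seq=" silently parses as nothing:
print(parse_reply("64 bytes from 8.8.8.8: seq=1 ttl=118 time=12.3 ms"))
```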
This is unlikely to change, since if ping's output changed, many programs would break. It is kinda like ipconfig.
AFAIK, ping is a setuid binary (since it needs a raw socket, which is a privileged operation), so a program that implements ping by itself would need to do the same (or it would only work as root).
Which unfortunately means you’ve partly helped make the GP’s point for them. Under Linux, the command you’re thinking of is ifconfig, which has long been deprecated and doesn’t ship by default anymore in newer distributions.
A great project, Brendan. But as stated in the GitHub issues, it doesn't work on newer versions of Go. I'm trying it on Windows 10 and get errors that look like:
`Error: 'date-time' main.go:59: open C:\Program Files Direct\duck\pings.json: The system cannot find the file specified.`
`not supported by windows. Request timeout for icmp_seq xx`
215 LOC? This makes me want to (finally) sit down and learn Rust. Even with Python + PyPI packages, I doubt you can do much better and still have a portable result.