
It looks like only IPv4 is affected. IPv6 is working fine to a couple of test endpoints.

   3  2a02:c28:1:6506::106  2.131 ms  2.124 ms  2.117 ms
   4  2a02:c28:11:6::100  15.695 ms  15.689 ms  15.682 ms
   5  2a02:c28:1:1900::19  15.998 ms  15.991 ms  15.985 ms
   6  2a02:c28:0:1819::18  15.259 ms  15.051 ms  16.975 ms
   7  2a02:c28:0:1718::17  16.968 ms  16.403 ms  15.619 ms
   8  2a02:c28:0:1731::31  15.406 ms  18.620 ms  18.554 ms
   9  2001:7f8:4::3f94:2  12.025 ms * *
  10  * * *
  11  2001:41d0:aaaa:100::5  21.622 ms  38.813 ms 2001:41d0:aaaa:100::3  37.231 ms
  12  * * *
  13  * 2001:41d0::25f1  19.507 ms 2001:41d0::c68  20.892 ms
  14  2001:41d0::513  19.685 ms  19.674 ms 2001:41d0::50d  18.248 ms
  15  2001:41d0:0:50::5:10a1  19.092 ms  19.917 ms 2001:41d0:0:50::5:10a5  20.126 ms
  16  2001:41d0:0:50::1:143f  19.375 ms 2001:41d0:0:50::1:143b  24.609 ms 2001:41d0:0:50::1:143d  18.933 ms


Ugh. You reminded me that OVH used to hand out just a /128, and if you needed more address space you had to pay extra for “enterprise” IPv6 or something.


Is their enterprise prefix a /64? That would be even funnier. :)


lolwut?

Don't consumer ISPs hand out /64s or /48s?


Yeah, Comcast hands out /64s on residential connections. /128 is literally just a single address, which is absurd if true.


Unfortunately, it was (is?) true. And not just OVH: Scaleway also had these silly /128 shenanigans.


They give a /64 to all dedicated servers.


It still works great on the Nokia N900!

https://developer.pidgin.im/wiki/UsingPidgin/N900


You still use the N900?


All day every day since 2009 :)

I've kept an eye on other phones throughout the years, but nothing quite ticks the same boxes (and by far most things aren't even remotely close).

There are some other interesting phones on the horizon (Librem, PinePhone), but then again things like Maemo Leste are ready to breathe new life into the N900, so I might be using it for a very long time yet!


This was my first thought too. I have done this successfully to get Cat5e everywhere in a house that used to have coax everywhere. If you do it, I recommend attaching multiple pull strings to the coax, then pulling the coax out, then using one of the new pull strings for the Cat5e - otherwise you risk the Cat5e and the coax becoming detached from one another in the wall.


https://kis-orca.eu/map/ is a very accurate map, but it doesn't cover the whole world. You can compare the cables within and leaving Europe to decide whether you think https://www.submarinecablemap.com/ is particularly accurate or not.


Interesting map. Astounding number of repeaters needed on a cable (you can show this in a map layer).

They must deliver some power with the cable to handle the repeaters? I wonder how much power it requires and if it's hard to ensure it doesn't break.


The cable landing stations feed DC power to the repeaters from both ends. The power requirements are substantial and run to thousands of volts.

All the components of the subsea cable system are engineered with reliability in mind. None of them are expected to require maintenance over the lifetime of the system, which means they have to operate for 20 to 25 years at a minimum.


> The power requirements are substantial and run to thousands of volts.

While that's a 100% correct statement, the wording here could lead someone to make an invalid assumption.

The core reason why it’s “thousands of volts” isn’t that a lot of power is needed specifically (although that is the case), it’s the nature of electrical conductors. Resistive loss over long distances is driven by current rather than voltage (the current is squared in the loss equation, P = I²R). To put it another way, you’ll have far less power loss pushing 5 MW as 50 kV at 100 A than as 2.5 kV at 2,000 A, even though both deliver the same power (this is in part why high-voltage power lines exist on land). Another reason is that electrical conductors (especially copper, but also aluminum and others) are heavy, and the higher the amperage, the larger the conductor needs to be and thus the more it weighs.
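
For a rough sense of scale, here's a back-of-the-envelope sketch of that I²R effect; the 1 Ω total resistance and 5 MW load are purely illustrative assumptions, not real cable figures:

  # Deliver the same power at two different voltages over the same conductor.
  # The resistance and power figures below are illustrative assumptions only.
  R_TOTAL_OHMS = 1.0        # assumed total conductor resistance
  POWER_W = 5_000_000       # 5 MW to deliver in both cases
  def resistive_loss(volts):
      amps = POWER_W / volts            # higher voltage -> lower current
      return amps ** 2 * R_TOTAL_OHMS   # loss grows with the square of the current
  for volts in (50_000, 2_500):
      print(f"{volts:>6} V / {POWER_W / volts:>5.0f} A -> {resistive_loss(volts) / 1000:>7.1f} kW lost")

Under those assumptions, 50 kV means 100 A and about 10 kW of resistive loss, while 2.5 kV means 2,000 A and a loss 400 times higher, which is why transmission goes up in voltage rather than current.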


Interesting. The westbound cables on that map all look like they are going too far south, too. It would be interesting to know why.

Looking at the map of the Amitié cable, maybe they try to keep as much of the cable as possible in deep water. https://www.submarinenetworks.com/en/systems/trans-atlantic/...


The extra distance for the NY/NJ-Bristol cables to cover is "just" ~400-500 km (an extra ~7%). Plus, if you look at an ocean floor map [1], it seems they prefer not to put the cable in the relatively shallow waters off Newfoundland or Nova Scotia (maybe this decreases the possibility of damage from anchors, or they compensate a bit in length by going deeper, i.e. a shorter circumference due to a smaller radius).

The other possibility is that the direct path passes through a mountainous underwater region with quite variable depths in the mid-Atlantic [1], so going up and down or following the valleys would increase the cable length anyway. The right answer is likely a combination of all of these, to minimize cable cost, maintenance and latency.

[1] https://earth.google.com/web/@41.47661069,-40.80459324,-4414...
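
For anyone curious about the baseline, a quick great-circle sketch is easy to run; the endpoint coordinates below are just approximate city locations I've assumed, and the laid route is of course longer than any straight-line figure:

  # Great-circle baseline between assumed endpoints (haversine formula).
  # Coordinates are rough approximations of the New York area and Bristol, UK.
  from math import radians, sin, cos, asin, sqrt
  EARTH_RADIUS_KM = 6371.0
  def haversine_km(lat1, lon1, lat2, lon2):
      lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
      a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
      return 2 * EARTH_RADIUS_KM * asin(sqrt(a))
  baseline = haversine_km(40.7, -74.0, 51.45, -2.59)
  print(f"great-circle baseline: ~{baseline:,.0f} km")
  # The "shorter circumference at depth" effect is tiny: a path at constant depth
  # scales with (R - depth) / R, so 4 km of depth saves well under 0.1% of the route.
  print(f"saving from running 4 km deep: ~{baseline * 4 / EARTH_RADIUS_KM:,.1f} km")

The last line is the interesting bit: running 4 km deep only shaves a few kilometres off a ~5,000 km route, so going deep is presumably about avoiding damage rather than saving length.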


Ships' anchors and trawlers are major risks for subsea cables. Deeper is thus better.

Underwater geography is also a major consideration. It's not just the going up and down, it's the risk of landslides, tectonic shifts, volcanoes, currents and icebergs scraping the ocean floor.

On top of that are geopolitics and environmental concerns.


The 1901 telegraph submarine cables [1] were straighter over a similar route, likely because it was essential to make the underwater portion as short as possible, so most of them ran from Ireland to Newfoundland.

[1] https://upload.wikimedia.org/wikipedia/commons/a/a5/1901_Eas...


Why is Facebook involved in infrastructure like this?


Facebook operates a vast global network connecting its datacenters and points of presence (POPs). When you connect to Facebook from, say, Tokyo, your HTTPS connection is terminated in a local point of presence (likely in Tokyo or somewhere in Japan) and from there the connection is forwarded on to a datacenter in the US or Europe. This forwarding happens over Facebook's privately operated network backbone, which needs to be substantial enough to handle billions of users' traffic (not to mention internal replication traffic and so on). Renting capacity from commercial providers is expensive, so Facebook and the other big tech companies are interested in building their own infrastructure for this purpose.

Of course, it would be possible to have a user in Tokyo connect to Facebook's network somewhere closer to the DC, but a) getting onto FB's high-quality, low-congestion network ASAP yields much better performance than leaving the connection running over the public internet, where it's subject to congestion and long SSL round trips, and b) terminating user sessions in POPs allows much finer control over which datacenter users are ultimately sent to. Without this, FB would have to rely on DNS changes to steer traffic, which are slow to propagate and crude.


Facebook is responsible for about 8% of all Internet traffic. They need subsea cables to connect their datacenters and to deliver traffic to end users.


For Office 365 I use DavMail [0] for calendar, and Office 365's own SMTP/IMAP servers for mail.

Both seem to work well. It would be nice to have DavMail built into Thunderbird, but it happily lives in a Screen session with no issues.

[0] http://davmail.sourceforge.net/
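
If you want to sanity-check that the gateway is up before pointing a client at it, something like this works; 1080 is DavMail's usual default CalDAV port and the /users/<address>/calendar path follows its convention, but both are assumptions here, so adjust to your davmail.properties:

  # Quick check against a locally running DavMail gateway (assumed defaults:
  # CalDAV on port 1080, calendar at /users/<address>/calendar -- adjust to taste).
  import base64, http.client
  auth = base64.b64encode(b"user@example.com:password").decode()  # hypothetical credentials
  conn = http.client.HTTPConnection("localhost", 1080, timeout=5)
  conn.request("OPTIONS", "/users/user@example.com/calendar",
               headers={"Authorization": "Basic " + auth})
  resp = conn.getresponse()
  print(resp.status, resp.getheader("DAV"))  # a DAV header means WebDAV/CalDAV is being served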


Intel's own distribution defaults to the performance governor [1] in order to race to idle. This has been Intel's recommendation for some years.

[1] https://docs.01.org/clearlinux/latest/guides/maintenance/cpu...
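
If you want to see which driver and governor your own box is actually running, the standard cpufreq sysfs files are enough; a minimal sketch (paths are the usual Linux layout, nothing Clear Linux specific):

  # Report the cpufreq driver and active governor for each CPU (Linux sysfs).
  from pathlib import Path
  for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
      freq = cpu / "cpufreq"
      if freq.is_dir():
          driver = (freq / "scaling_driver").read_text().strip()
          governor = (freq / "scaling_governor").read_text().strip()
          print(f"{cpu.name}: driver={driver} governor={governor}")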


Interesting note that I wasn't aware of:

>> The intel_pstate driver only supports performance and powersave governors.

What happens when ondemand is used with the intel driver? (is it even selectable?)


  # echo performance | tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
  performance
  # echo ondemand | tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
  ondemand
  tee: /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor: Invalid argument
  # cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
  performance
The Arch Wiki suggests that both powersave and performance have dynamic scaling, and running 'watch -p -n 0.25 grep \"cpu MHz\" /proc/cpuinfo' indicates that both of them do, with powersave mostly sitting at 1.2 GHz with occasional increases on a handful of cores, while performance fluctuates from ~2 to 4.2 GHz across all cores with minimal load.

The linked docs from Clear Linux say that power draw isn't entirely dependent upon CPU frequency when there's no load, so there's no issue with keeping it on performance, and I'm not about to doubt Intel here, but I'd be surprised if powersave didn't save power, if only by clamping down on how much CPU programs can use. Anecdotally, I've noticed that the powersave governor doesn't really work too well when doing things like running virtual machines and will keep the frequency very low, as if the scaler is blind to the resources the VM is using, while having it on the performance governor will pin all my cores to a far more appropriate 4.2 GHz.
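
For anyone who would rather log this than eyeball the watch output, here's a rough Python equivalent of that grep loop; it just parses /proc/cpuinfo, nothing governor-specific:

  # Sample per-core frequencies from /proc/cpuinfo, roughly what the watch/grep
  # loop above shows, summarised per 0.25 s sample.
  import re, time
  for _ in range(20):
      with open("/proc/cpuinfo") as f:
          mhz = [float(m) for m in re.findall(r"cpu MHz\s*:\s*([0-9.]+)", f.read())]
      print(f"min {min(mhz):7.1f}  avg {sum(mhz) / len(mhz):7.1f}  max {max(mhz):7.1f} MHz")
      time.sleep(0.25)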


It's not selectable:

  /sys/bus/cpu/devices/cpu0/cpufreq # echo -n 'ondemand' > scaling_governor 
  echo: write error: invalid argument
Without touching anything, it's using powersave on my box. It seems to work well enough, in terms of both performance and battery life, but that's just my personal experience.

The point that this tool completely fails to consider is that clocking the CPU lower isn't automatically the most battery-efficient thing to do; cf. the race to idle mentioned above. To me, this sounds a lot like someone in the trough of the Dunning-Kruger curve looked at a problem and wrenched at it without understanding enough of it.


I'm surprised to hear of problems with Firefox and ALSA. On all my Gentoo machines I simply compile Firefox with USE="-pulseaudio", which sets --enable-alsa within Firefox's build. Still working fine, up to and including Firefox 66, outputting directly to ALSA.


To name some big ones: Alpine, Gentoo, Slackware.


GLiv [1]. Firefox and GLiv both take ~5 seconds to load the image for me, but whereas Firefox takes another ~5 seconds to re-render after zooming in to 100%, GLiv is instant.

[1] http://guichaz.free.fr/gliv/


I was delighted that a fun toy didn't need JS for once. If the concern is one of privacy, the author could just be sending the text to the server in the background with JS too.

