To me that looks like they are reinventing NTP, but not addressing all the issues of PTP.
A big problem with the PTP unicast mode is an almost infinite traffic amplification (useful for DDoS attacks). The server is basically a programmable packet generator. Never expose unicast PTP to the internet. In SPTP that seems to be no longer the case (the server is stateless), but there is still the follow-up message causing a 2:1 amplification. I think something like the NTP interleaved mode would be better.
It seems they didn't replace the PTP offset calculation, which assumes a constant delay (broadcast model). That doesn't work well when the distribution of the delay is not symmetric, e.g. errors in hardware timestamping on the NIC are sensitive to network load. They would need to measure the actual error of the clock to see that (the graphs in the article seem to show only the offset measured by SPTP itself, a common issue when improvements in time synchronization are demonstrated).
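For anyone unfamiliar, here is a minimal C sketch (made-up timestamps and illustrative names, not anything from the article) of the textbook two-way exchange that both PTP and NTP rely on; the point is that the computed offset silently absorbs half of any delay asymmetry, and nothing in a single exchange reveals it:

    /* t1 = client TX, t2 = server RX, t3 = server TX, t4 = client RX (seconds). */
    #include <stdio.h>

    int main(void) {
        double t1 = 0.000000, t2 = 0.000150, t3 = 0.000170, t4 = 0.000290;

        /* Standard formulas, valid only if the one-way delays are equal. */
        double delay  = (t4 - t1) - (t3 - t2);          /* 270 us round trip */
        double offset = ((t2 - t1) + (t3 - t4)) / 2.0;  /* 15 us             */

        /* If the forward path is really 'asym' seconds slower than the reverse
         * path (queuing, NIC timestamp error under load, ...), the offset
         * estimate is biased by asym/2 and the exchange cannot detect it. */
        double asym = 0.000020;
        printf("delay=%.0f us offset=%.0f us hidden bias=%.0f us\n",
               delay * 1e6, offset * 1e6, asym / 2.0 * 1e6);
        return 0;
    }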
I think a better solution taking advantage of existing PTP support in hardware is to encapsulate NTP messages in PTP packets. NICs and switches/routers see PTP packets, so they provide highly accurate timestamps and corrections, but the measurements and their processing can be full-featured NTP, keeping all its advantages like resiliency and security. There is an IETF draft specifying that:
https://datatracker.ietf.org/doc/draft-ietf-ntp-over-ptp/
Experimental support for NTP-over-PTP is included in the latest chrony release. In my tests with switches that work as one-step transparent clocks, the accuracy is the same as with PTP (linuxptp).
I listened to a fascinating podcast from Jane Street on their solution.
Pasted the relevant part
https://signalsandthreads.com/clock-synchronization/
--quote
"So, we’re trying to build a proof of concept. At the end of the day, we sort of figured, “All right. We have these GPS appliances.” We talked about hardware timestamping before on the GPS appliances, and how they can’t hardware timestamp the NTP packets, so that’s problematic. We thought, “How can we move time from the GPS appliances off into the rest of the network?” And so we decided that we could use PTP to move time from the GPS appliances to a set of Linux machines, and then on those Linux machines we could leverage things, like hardware timestamping, and the NTP interleaved mode to move the time from those machines onto machines further downstream.
The NTP interleaved mode, just to give a short overview of what that means… when you send a packet if you get it hardware timestamps on transmission the way you use that hardware timestamp is you get it kind of looped back to you as an application. So I transmit a packet, I get that hardware timestamp after the packet’s already gone out the door. That’s not super useful from an NTP point of view, because really you wanted the other side to receive that hardware timestamp, and so the interleaved mode is sort of a special way in which you can run NTP and that when you transmit your next NTP packet you send that hardware timestamp that you got for the previous transmission, and then the other side, each side can use those. I don’t want to get into too much of the details of how that works, but it allows you to get more accuracy and to leverage those hardware timestamps on transmission."
-- end quote
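To make the idea from the quote concrete, here is a toy C sketch (not the real NTP interleaved-mode packet layout; nic_send is a made-up stand-in for the NIC): the precise hardware transmit timestamp of packet N only becomes known after N has gone out, so it gets carried to the peer inside packet N+1:

    #include <stdio.h>

    struct request {
        double origin_ts;      /* software timestamp taken before sending        */
        double prev_hw_tx_ts;  /* hardware TX timestamp of the *previous* packet */
    };

    /* Pretend NIC: the hardware timestamp only becomes known after the
     * packet has already left, which is the whole problem. */
    static double nic_send(const struct request *r) {
        (void)r;
        static double fake_hw_clock = 100.0;
        return fake_hw_clock += 0.001;  /* made-up value standing in for the NIC */
    }

    int main(void) {
        double last_hw_tx = 0.0;  /* nothing to report in the very first packet */
        for (int i = 0; i < 3; i++) {
            struct request r = { .origin_ts = 200.0 + i, .prev_hw_tx_ts = last_hw_tx };
            last_hw_tx = nic_send(&r);  /* learned only after transmission */
            printf("packet %d carried hw TX timestamp of previous packet: %.3f\n",
                   i, r.prev_hw_tx_ts);
        }
        return 0;
    }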
SPTP does look a lot like NTP over PTP. I'm guessing they deployed this last year when nothing like it existed - the IETF draft (dated Jan 24, 2024) came much later. They might even be involved with it. Anyway, it's nice to see progress towards a simpler protocol that retains the precision of PTP.
> TIL there's a regular heartbeat in the quantum foam; there's a regular monotonic heartbeat in the quantum Rydberg wave packet interference; and that should be useful for distributed applications with and without vector clocks and an initial time synchronization service
> A big problem with the PTP unicast mode is an almost infinite traffic amplification (useful for DDoS attacks). The server is basically a programmable packet generator. Never expose unicast PTP to the internet. In SPTP that seems to be no longer the case (the server is stateless), but there is still the follow-up message causing a 2:1 amplification. I think something like the NTP interleaved mode would be better.
Facebook has little concern for traffic amplification that doesn't affect them. I can't find a source article for it now, but there was a time when you could take down a website hosting an image by simply posting <URL>/?<RANDOM>. I believe Facebook's (many) cache servers would individually make requests to the server until they inevitably saturated the image host's connection. I remember people complaining and it falling on deaf ears.
But this is not about Facebook; this discussion is about the protocol.
Given how protocols in this industry are developed, it's likely that other big corporations that run data centers are also part of the real protocol discussion; some of those will be concerned about traffic amplification.
Facebook continues to follow the Yahoo and AOL trajectory of exceptional and generous engineering contributions amidst an increasingly disliked suite of commercial offerings.
Reminds me of a project idea where you list out all the big companies that have GitHub projects like Comcast, Walmart, Verizon, Target and even https://github.com/mcdcorp
Click-to-message is $10B/year, click-to-message in WhatsApp alone was $1.5B 1.5 years ago. They make buckets of money from business services on WhatsApp.
I guess WhatsApp Business just never caught on in our part of the world, but every time I'm outside of US/Canada, there will be some sort of business (like a restaurant) that will contact me through it. Assuming they're making some pennies from WhatsApp Business.
Does it? All the stuff I've read complains about abysmal performance on Facebook and how it's money spent poorly unless you're trying to scam naive consumers.
There was a time when I'd see Coca-Cola advertising on Facebook. That time is gone.
I just opened Facebook to see what I’d see on the app, and just a couple of scrolls gave me: Planet Fitness, Expedia and a bunch of airline ads.
People will always complain how something doesn’t work, how they pulled out their ad money and etc. But then you see ad spend growth on every big platform.
It's a simple equation: Money spent on ads will go where the users (ears and eyeballs) are. By providing fine-grained targeting an advertising platform can extract more money from "the long end of the tail" which is where most of the money lives.
You're using a brand advertiser as an example when you should be using direct response advertiser as your example. Brand advertisers pay the least since their advertising is predicated on reaching a lot of people cheaply multiple times while direct advertisers pay the most since their advertising is predicated on getting people to convert immediately and judge their return accordingly.
The fact that you don't see Coca Cola ads means that Meta is able to find advertisers willing to pay more than them to reach you that you are more likely to convert immediately on.
Weirdly, for me that link redirected to my country's local Coca-Cola Facebook brand page (Denmark), which has its latest post from 1 hour ago (8th Feb 2023). I didn't know that kind of location-based redirect was a thing on Facebook, interesting.
Can say with certainty that it's on fire alright, in the sense that the ad data is burning away.
The rapid innovation is a survival tactic. They know the ads boat is sinking, and unlike other tech companies, they don't have much diversity in their revenue. Hence the Metaverse, AI, etc., which although neat, are not exactly making the same level of money for the organization (at least not yet). In Q4 2023, ads had a revenue of ~$38B, while the Quest business posted a loss of ~$4B. AI hasn't been directly monetized, so it's harder to say how that's doing.
Given how critical good data is to a model, I'm not optimistic this will work for them.
It's sad, really: Meta could be making amazing VR headsets and transforming the way people use them by making them more general-purpose (like PCs) but instead they're making VR headsets into toys. Even the Quest Pro, which was meant to be for business use, was a locked-down, hard-to-hack (aka hard for developers to fully utilize) Android toy. And when I say "toy" I mean, it's the software equivalent of a hard plastic device with tamper-resistant screws and "no user serviceable parts" intent.
Their dead-set focus on data collection and advertising is sabotaging their ability to make (potentially billions in) revenue from traditional models. I know Zuckerberg and many other CEOs want their "core business" to be just one thing with all the other businesses being offshoots of that one thing but the reality is that they've become too big for that. Zuck needs to give up on the idea of, "our business is data collection and targeted ads for consumers" and realize the truth: Their business is technology.
Personally I miss a lot of features in the Meta Quest 3 which would be helpful for making location based experiences (turn your local natural history museum into "Jurassic Park") such as having a persistent SLAM model and being able to at least use the camera to read and locate QR codes or, say, compute the pose of a person and overlay them with a video game character. I think though Meta is worried about the privacy implications of those things.
On the other hand, those LBEs have an antagonistic relationship with headset adoption. If everybody had a headset then there would be nothing special about MR experiences. For LBEs to be viable you want headsets to be capable and inexpensive but not widely adopted. (I almost wish it could be Winter 2024 forever.) I'd imagine a headset vendor would like to charge me more for using a headset for an LBE than they would want to charge an ordinary user, but on the other hand people who are blown away by an LBE (very possible) might go home and buy their own headset.
As for their vision, Meta seems to be doing really well running an app store for single-player games. I haven't seen a real multiplayer hit yet but I guess Demo Battles comes close. Meta knows what they'd like to do if they could create something like OASIS from Ready Player One but a close analysis of how Horizon Worlds falls short of that reveals how difficult that is. I guess anybody who can afford a seat of Dassault 3DExperience can also afford an AVP, maybe many Blender users can afford an MQ3. It's not clear to me at all what, past games and entertainment, is going to be a mass market in XR.
Actually, Valve owns that space. They have 132 million active monthly users. For comparison, Xbox has 120 million. Seems like only a minor lead until you look at the revenue: Steam (Valve) brings in ~$8 billion in revenue whereas Xbox brings in ~$4 billion.
Microsoft's operating overhead with Xbox is also vastly greater than Steam. Supposedly they make ~$28 every time they sell an Xbox One. That's based on just the manufacturing/parts cost of the hardware and doesn't include the costs associated with developing the hardware itself where they don't just take off-the-shelf chips and throw in an existing OS (like the Steam Deck) but instead custom-engineer a processor/architecture and make their own custom operating system.
If anybody wanted to take a dominant place in the industry they'd buy Valve, but Valve is not for sale. For instance, if GameStop had bought Valve at the top, they'd have had an answer to the problem of digital downloads eliminating both the buy and sell sides at GameStop.
What I was trying to say is that Microsoft owns the developer space. Valve has been a tough contender, but Microsoft has never gone straight against Valve, probably because the business bigwigs consider video games less important than Microsoft's other businesses.
So I should have said that while Valve may own the marketplace (the "app store"), Microsoft still owns what it takes to make a game in the first place, which is why Facebook doesn't really stand a chance against MS. This also explains how it came to pass that nobody cared about Zuckerberg's metaverse... the metaverse didn't get access to the really cool graphics engines.
Developer space? Isn’t that like Unity, Unreal, everybody except Microsoft?
I think the whole point of XBOX, GAME PASS and all that is to convince people who don’t play games (stock market analysts) that Microsoft is relevant. It’s vice signalling.
Instagram is only one of their brands; it's like people quitting Budweiser... the corporation owns most other beer brands, so they aren't really affected.
>> innovating on ai and integrating it into its ad targeting.
Summer child.
No one who has ad space to sell wants an efficient, accurate market. Those inefficiencies create competition, create volume, create profit for those with ad space to sell.
The ad market had more effective targeting and better ROI 10 years ago than it does today.
Go and try to run a CPA campaign at any cost... you can't. It's all display-ad CPM garbage, carpet bombing for pennies.
When Facebook actually did do this, Cambridge Analytica happened, which is how they were forced to let 'higher order' (i.e. shady) players have the tweakable ad-targeting marketplace they know and love.
Funny how people "hate" them yet they have 4 billion active users and generate $135 billion in annual revenue. A better explanation is that outside of the HN bubble Meta's product offerings are insanely popular.
When you buy diapers you are paying for the manufacturing cost and the profit margin. When you use Meta's products, what are you paying for? That's what makes them closer to cigarettes. Tobacco companies sold a lie to their customers.
Meta too sells the lie that its products are paid for by ads. Its products are paid for by surveilling users, building behavioral profiles from the data that is collected, and then giving other companies access to that behavioral data in order to manipulate users to specific ends. In this quest to build better behavioral profiles, the products are made to be as addictive as possible, eating away people's time which could have been utilized in objectively better ways.
Not sure why you're making it sound like a conspiracy theory. User behavior profiling is a common strategy for all personalized IR products, including recommender systems, targeted ads, web search, e-commerce, and many more. A bunch of major tech companies rely on it. Do you think Google AdSense is also an evil empire? What about Apple and Amazon ramping up their own ads businesses? More people use YouTube than Meta products and even spend more minutes per day there. Do you think YouTube is also equivalent to cigarettes? What about TikTok, Twitch, and other streaming platforms? Was Doordash also wrong for setting up personalized ads and recommendations?
User behavior profiling wouldn't be bad if (and only if) users owned the data and had complete control of what is done with it. Currently the legal/political system isn't equipped to handle this new technological assault on digital property, and it will remain that way as long as people keep hand-waving it away. Imagine the same callous attitude being applied to real estate or other physical property.
You're only minorly wrong in that they don't sell access to the behavioral data [1] , but you do realize that you sound absolutely unhinged about it, right? Is HN surveilling my post because I typed here and pressed the 'reply' button?
I've never convinced someone of flaws in ethics by framing the perpetrator as a big bad boogeyman to the nth degree. It's unproductive self-satisfaction.
[1] They sell visibility to people queried against proprietary behavioral data.
There is mounting evidence that social media harms mental health[1]. I'm pretty sure when links between smoking and cancer were being established, there were plenty of people calling that evidence "unhinged", particularly if the evidence hurt their paychecks. Not saying that such people were willfully malicious, but that there are strong cognitive biases in favor of ignoring anything that can hurt their livelihoods.
HN has hated Facebook since its inception. There used to be people on HN defending Facebook, as a stock or as an ad platform. Post-2015 on HN, all ads are evil. Barely anyone defends them or online ads anymore.
And AOL's stock had a 500% return in the 2010s... Yahoo had 300%.
Are you saying people like Facebook's new features like the suggested posts and what they've done to Instagram? Was their VR product line secretly a success?
I absolutely love the Instagram app and my friend circle actively uses it. I'm not a big fan of Facebook, but I come back to Facebook Groups and Marketplace fairly frequently. What's your metric to measure this general propensity? DAU? MAU? Vibes?
Sure, the company may not exist 10 years from now. But there's no downfall indicator yet for this trillion dollar giant. All companies that size have headwinds and tailwinds. The self-assurance you see on this platform for Meta's sure shot upcoming decline is just absurd.
YoY growth and quarterly earnings report don't give you a full picture of how much a company is liked (which is important for b2c companies like Facebook) and how well it is actually performing for the medium/long term.
GE and Boeing also had amazing YoY growth and great quarterly reports, until the underlying dumpster fire they were nurturing for years exploded in their faces and now they don't have growth nor profits anymore.
You could say the same thing about every big company. Apple has headwinds from sales in China and US-China trade wars, Tesla is trailing BYD and seeing declining EV demand, and so on. Every large company has something or the other going on. But I find it funny that every new project from Apple is reminiscent of iPhone 1 while it's the Yahoo path for everything from Meta.
If they are relying on headset sales… competition is about to heat up. I've been waiting for anyone but Meta to come out with a decent, affordable kit, and I think we're almost there.
Does anyone know the differences between Meta's application of Precision Time Protocol and Google TrueTime? I was hoping to find some discussion in the article but found none.
- Adding precise and reliable timestamps on a back end and replicas allows us to simply wait until the replica catches up with the read timestamp...
- As you may see, the API doesn’t return the current time (aka time.Now()). Instead, it returns a window of time which contains the actual time with a very high degree of probability...
- A read-only transaction executes in two phases: assign a timestamp s_read [8], and then execute the transaction’s reads as snapshot reads at s_read. The snapshot reads can execute at any replicas that are sufficiently up-to-date...
PTP is part of the backbone of something like TrueTime. Meta uses their PTP infrastructure for a lot of the same basic fundamentals, including consistent read replicas, like Spanner does. PTP is a protocol for synchronizing the wall clock time of a set of computers under very tight error bounds, so that all of the servers have a very consistent and tight "view" of what time it is.
Now, independently of replica strategies, it's important to understand that TrueTime is an API, as you noted. It lets you represent some continuous interval of time based on the system clock error. You can then use this API to do things like ask "Did timestamp A occur before B?" And you can get an API equivalent to TrueTime on your own random Linux machine, using the Clock-Bound tools from AWS, combined with the chrony NTP daemon: https://github.com/aws/clock-bound
The API and all that is pretty basic, actually. Rather, the secret behind TrueTime and the like is just a huge amount of reliability engineering to ensure that the upper bound target (7ms IIRC from the Spanner paper) is actually maintained reliably and accurately, at global scale. That reliability means engineers can build on it with specific guarantees. You can slap chrony, ClockBound-D, and a PTP card into your rack and program away. But it's a matter of engineering guarantees more than like, theoretical computer science. Theoretically speaking, TrueTime can only help you definitively establish that some event A has actually "happened before" B in a distributed system. That's extremely powerful but it needs muscle backing it up to be true and useful in practice.
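To make the "happened before" point concrete, here is a minimal sketch (illustrative C, not the actual TrueTime or ClockBound API): every event gets an uncertainty interval, and two events are only ordered when their intervals don't overlap:

    #include <stdbool.h>
    #include <stdio.h>

    /* An uncertainty interval: the true time is guaranteed to lie inside it. */
    struct tt_interval {
        double earliest;
        double latest;
    };

    /* A definitely happened before B only if A's latest possible time is
     * earlier than B's earliest possible time; overlapping intervals give
     * no ordering guarantee, which is why the bound has to stay small. */
    static bool definitely_before(struct tt_interval a, struct tt_interval b) {
        return a.latest < b.earliest;
    }

    int main(void) {
        struct tt_interval write_ev = { 10.000, 10.004 };  /* ~2 ms half-width */
        struct tt_interval read_ev  = { 10.010, 10.014 };
        printf("write before read: %s\n",
               definitely_before(write_ev, read_ev) ? "yes" : "unknown");
        return 0;
    }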
AWS has publicly advertised that EC2 has access to their 'AWS Time Sync' service, which is a globally consistent clock synchronization service designed to provide the backbone needed for services like TrueTime, and is freely available. Assuming you are willing to trust the EC2 network and AWS engineers, you can slap chrony and ClockBound-D on your AWS instances and get a TrueTime-like API with very tight global error tolerance, which would allow you to do consistent read replicas like Spanner, among other tricks
Just one slight trapdoor I triggered in my travails: I don't know whether PTP does PLL slaving, but while NTP targets wall time, it also affects clock_gettime(CLOCK_MONOTONIC), so if you need a local clock without external (network, I mean) influence, CLOCK_MONOTONIC_RAW is there for you. And luckily, on recent kernels it has been vDSO'd.
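A small Linux-only check along those lines (plain clock_gettime, nothing chrony-specific): CLOCK_MONOTONIC is subject to the NTP/PTP frequency slewing applied to the system clock, CLOCK_MONOTONIC_RAW is not:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct timespec mono, raw;

        /* CLOCK_MONOTONIC is slewed by NTP/PTP frequency corrections applied
         * to the system clock; CLOCK_MONOTONIC_RAW follows the raw oscillator. */
        clock_gettime(CLOCK_MONOTONIC, &mono);
        clock_gettime(CLOCK_MONOTONIC_RAW, &raw);

        printf("CLOCK_MONOTONIC:     %lld.%09ld\n", (long long)mono.tv_sec, mono.tv_nsec);
        printf("CLOCK_MONOTONIC_RAW: %lld.%09ld\n", (long long)raw.tv_sec, raw.tv_nsec);
        return 0;
    }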
Doing consistent reads this way is a fairly old technique, at least back to the 80s (I can dig up some papers if folks are interested).
Spanner, rather famously, uses this range approach, but a good number of other systems are based on similar approaches. The important thing for reads is getting an upper error bound from the clock, having storage that can perform reads at that time (eg using MVCC), and having a way for storage to know when it's seen all writes before a timestamp.
This can be implemented using PTP, and that's the approach we use at AWS inside some database services.
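For a rough picture of what that looks like in code, here is a conceptual C sketch (hypothetical helper names, not any real AWS or Spanner API): the replica only serves a snapshot read at timestamp T once its clock's lower bound has passed T and it has applied all writes up to T:

    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-ins for a bounded clock (a lower bound on the true current time,
     * e.g. from a ClockBound-style daemon) and for replication progress; a
     * real system would query a daemon and the storage engine instead. */
    static double clock_earliest_bound(void)  { return 100.0050; }
    static double replica_applied_up_to(void) { return 100.0030; }

    /* A replica may answer a snapshot read at read_ts once (a) the earliest
     * possible current time is already past read_ts, so no in-flight write can
     * still get a commit timestamp <= read_ts, and (b) it has applied every
     * write with commit timestamp <= read_ts. Otherwise the caller waits. */
    static bool can_serve_snapshot_read(double read_ts) {
        return clock_earliest_bound() > read_ts &&
               replica_applied_up_to() >= read_ts;
    }

    int main(void) {
        double read_ts = 100.0020;
        printf("serve read at %.4f now: %s\n", read_ts,
               can_serve_snapshot_read(read_ts) ? "yes" : "wait");
        return 0;
    }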
Thank you! I’d be very interested in any papers you have the time to dig up. I feel like even one paper would give me a good pointer into the literature from which to start exploring.
Which... is fine! They're allowed to open source solutions for problems that are unique to their massively scaled architecture! Others can learn from it -- even if others don't have the same problem.
It is worth keeping in mind that very, very, very few people/companies are required to solve the same problems a company like Meta is.