
I've actually written a high-performance metaverse client, one that can usefully pull half a gigabit per second and more from the Internet. So I get to see this happening. I'm looking at a highly detailed area right now, from the air, and traffic peaked around 200Mb/s. This XDP thing seems to address the wrong problem.

Actual traffic for a metaverse is mostly bulk content download. Highly interactive traffic over UDP is maybe 1MB/second, including voice. You're mostly sending positions and orientations for moving objects. Latency matters for that, but an extra few hundred microseconds won't hurt. The rest is large file transfers. Those may be from totally different servers than the ones that talk interactive UDP. There's probably a CDN involved, and you're talking to caches. Latency doesn't matter that much, but big-block bandwidth does.
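To give a sense of scale, here's a rough sketch of a per-object state update (the field layout is my own illustration, not any particular client's actual wire format):

    /* Hypothetical per-entity state update -- illustration only,
       not the actual wire format of any client. */
    #include <stdint.h>

    struct entity_update {
        uint32_t entity_id;     /* which object moved */
        uint32_t timestamp_ms;  /* server tick time */
        float    pos[3];        /* position, metres */
        float    rot[4];        /* orientation quaternion */
        float    vel[3];        /* velocity, for dead reckoning */
    };                          /* 48 bytes per moving object */

    /* 100 moving objects * 20 updates/s * 48 bytes ~ 96 kB/s,
       which leaves plenty of headroom under 1 MB/s for voice and events. */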

Practical problems include data caps. If you go driving around a big metaverse, you can easily pull 200GB/hour from the asset servers. Don't try this on "AT&T Unlimited Extra® EL". Check your data plan.

The last thing you want is game-specific code in the kernel. That creates a whole new attack surface.




I don't know how well this solves any game programmer's problem, but the attack surface thing --- modulo the kfunc trick --- doesn't seem real: eBPF programs are ruthlessly verified, and most valid, safe C programs aren't accepted (the verifier rejects anything where it can't prove every loop is bounded and every memory access stays in range). It's kind of an unlikely place to expect a vulnerability, just because the programming model is so simplistic.
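For anyone who hasn't written one, here's a minimal XDP sketch (my own toy example, nothing from the linked project) showing how constrained that model is; the verifier refuses to load it unless every packet access is preceded by an explicit bounds check like the ones below:

    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <linux/ip.h>
    #include <linux/in.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_endian.h>

    SEC("xdp")
    int xdp_udp_filter(struct xdp_md *ctx)
    {
        void *data     = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;

        struct ethhdr *eth = data;
        if ((void *)(eth + 1) > data_end)   /* required, or the load fails */
            return XDP_PASS;
        if (eth->h_proto != bpf_htons(ETH_P_IP))
            return XDP_PASS;

        struct iphdr *ip = (void *)(eth + 1);
        if ((void *)(ip + 1) > data_end)    /* likewise */
            return XDP_PASS;
        if (ip->protocol != IPPROTO_UDP)
            return XDP_PASS;

        /* a real program would update a map, drop, or redirect here */
        return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";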


> Actual traffic for a metaverse is mostly bulk content download. Highly interactive traffic over UDP is maybe 1MB/second, including voice.

Typical bandwidth for multiplayer FPS games (Counter-Strike, Apex Legends) is around 512 kbps to 1 Mbps down per client, and this is old information; newer games almost certainly use more.

It's easy to see a higher-fidelity gaming experience taking 10-100 Mbit/s from server to client: just increase the size and fidelity of the world. Next, increase player counts and you can easily fill 10 Gbit/s for a future FPS/MMO hybrid.
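Back-of-envelope with a made-up player count (my numbers, not anyone's actual deployment):

    /* Aggregate egress for one shard, using the per-client figures above. */
    #include <stdio.h>

    int main(void)
    {
        double per_client_mbps = 1.0;    /* today's FPS-style state traffic */
        double hi_fi_mbps      = 100.0;  /* hypothetical high-fidelity feed */
        int    players         = 10000;

        printf("today: %.0f Gbit/s egress\n", per_client_mbps * players / 1000.0);
        printf("hi-fi: %.0f Gbit/s egress\n", hi_fi_mbps * players / 1000.0);
        return 0;
    }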

God save us from the egress BW costs though :)


There are only so many pixels on the screen, as Epic points out. The need for bandwidth is finite.

It will be interesting to see if Epic makes a streamed version of the Matrix Awakens demo. You can download that and build it. It's about 1TB after decompression. If they can make that work with their asset streaming system, that will settle the question of what you really need from the network.


But pixels can be arbitrarily "deep". That is to say, the amount of context that is needed to figure out the color of a pixel can grow arbitrarily large.


Yes, but if it requires more data than the screen's worth of pixels, you can just send the pixels. Pretty sure this is the "cap" that was described.


And yet if you just send the pixels, you cannot client side predict (hide latency in multiplayer games) because the pixels fix you to a specific point of view. Game streaming is not really the solution here.


Even with a zero size game client, there is only so much to stream in textures and geometry. And when you're done with that, the game world itself is trivial.

The only thing in games that is bandwidth heavy is on-demand game asset delivery, which is highly cacheable and shardable. It will need no XDP-like networking tricks in either servers or clients.


> the game world itself is trivial.

This is just absolutely not true.


Adding a possible example here to back Glen's comment: on Minecraft servers with larger render distances, bandwidth cost is unpredictable (it depends on player movement: Elytra flight, portals, players joining) and scales with the square of the render distance.
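A rough illustration of that scaling (the compressed chunk size is my assumption, not a measured figure):

    /* Chunks inside render distance r form a (2r+1) x (2r+1) square,
       so the data needed when a player teleports or joins grows with r^2. */
    #include <stdio.h>

    int main(void)
    {
        const long chunk_bytes = 16 * 1024;   /* assumed compressed chunk size */
        for (int r = 8; r <= 32; r *= 2) {
            long chunks = (long)(2 * r + 1) * (2 * r + 1);
            printf("render distance %2d: %5ld chunks, ~%ld MB\n",
                   r, chunks, chunks * chunk_bytes / (1024 * 1024));
        }
        return 0;
    }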


That's what impostors are for.

In GTA V, each region has a custom-built low-rez model, and that's what you're seeing when you're more than about 200-300m away from it. Watch closely and see where the cars appear and disappear in the distance. That's the edge of the real rendering area.

I'm looking at doing this for a metaverse. In the GTA V era, those impostors were a manual job done by game devs. That needs to be automated. Rather than doing mesh reduction on large areas, I want to take pictures of each area at high resolution from multiple angles, and feed the pictures through Open Drone Map to get a 3D mesh. The result looks like this.[1] For even more distant areas, those meshes can be consolidated into larger and lower-rez mesh tiles. It's the 3D equivalent of a slippy map. The amount of data you need to send is finite regardless of the world size, because the far-away stuff has lower resolution. The sum of that series is finite. This is similar to how Google Earth works when you get close enough to see 3D.
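A toy model of why that series converges, under my own assumption that each ring of tiles at double the distance is consolidated into the same number of tiles at a quarter of the data each:

    /* Toy far-field budget: ring k covers distances 2^k..2^(k+1) from the
       viewer; tile count per ring stays constant while bytes per ring drop
       4x as resolution halves in each dimension, so the sum converges. */
    #include <stdio.h>

    int main(void)
    {
        double bytes_per_ring = 64e6;   /* assumed nearest ring, ~64 MB */
        double total = 0.0;
        for (int k = 0; k < 20; k++) {  /* 20 doublings: planet-sized world */
            total += bytes_per_ring;
            bytes_per_ring /= 4.0;
        }
        printf("far-field total: ~%.0f MB\n", total / 1e6);   /* ~85 MB */
        return 0;
    }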

Handling a metaverse with user-created content is a big data-wrangling problem, but the compute and network loads are finite.

[1] https://content.invisioncic.com/Mseclife/monthly_2023_12/bas...


You can't client side predict pixels.


You'll hit game engine CPU usage limits way before getting anywhere near Gbit/s in outbound game traffic on a game server, and the cost of network I/O is going to be negligible.

XDP is useful for applications that are network I/O bound. Gaming is not one of those.


I'd venture a guess that there are still no online games that use more than 1 Mbps for interactive traffic, and seeing that fully remote game streaming uses only tens of Mbps, I don't see any justification for complexities like asset streaming.


What is a metaverse client? Do you just mean a cross platform VR app?


It's a client for https://secondlife.com/


In games the problem isn't bandwidth, it's latency.


That's really not true for the type of game the OP is talking about. Think Second Life et al, where most of the content is dynamically streamed and rendered in real time


We're seeing more real metaverse-type systems. With the NFT clown car out of the way, and Meta's Horizon becoming a niche, the development efforts that were quietly underway to build real, working metaverses are starting to show results. There's M2, from Improbable, which now has a shared developer metaverse in test, for which one can sign up. There are others who have reached the demo video level, such as Readyverse, which is probably going to be an MMO rather than a real user-created metaverse. Disney and Epic are jointly making metaverse noises.

The big-world high-detail user-created metaverse problem is being worked on.



