
That's great, but how do you then get that many packets to disk so that you can do something with them?

Presumably you need flash drives, and probably an append-only filesystem?




Not necessarily. One can capture 10G to disk using "only" a RAID-0 with 8-10 mechanical disks: it does the job both in bandwidth and space, and you can use regular filesystems such as XFS.

40G is a bit more difficult: by simple direct scaling you need a huge RAID (32-40 mechanical disks) to reach the necessary bandwidth, and if you want to use SSDs you will need a lot of them too, just to have enough space to save any meaningful amount of traffic.
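A rough sketch of that scaling (Python; the ~150 MB/s sustained sequential write per spindle is an assumed figure, not one from the thread):

    # Disks needed in a RAID-0 to keep up with lossless capture,
    # assuming ~150 MB/s sustained sequential write per mechanical disk.
    PER_DISK_BPS = 150e6  # bytes/s, assumed

    def disks_needed(gbps):
        return gbps * 1e9 / 8 / PER_DISK_BPS

    print(disks_needed(10))  # ~8.3  -> the 8-10 disk estimate
    print(disks_needed(40))  # ~33.3 -> the 32-40 disk estimate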

I remember seeing papers on on-the-fly compression for network traffic, but IIRC the results were not very impressive and the performance cost was noticeable.


> but how do you then get that many packets to disk

it may be possible to do disk i/o at that rate, e.g. over pci-e or with a dedicated appliance for dumping the entire stream, but you would run out of storage pretty fast.

for example, a quick back-of-the-envelope calculation, where you dump the packet stream from 4x10gbps cards with minimal 84b frames (on ethernet), shows that you would exhaust the storage in approx. 4.5 minutes :)
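A sketch of that back-of-the-envelope (Python; it assumes the 1.2 TB volume mentioned further down the thread, and that the capture stores each 64 B frame; counting the full 84 B wire footprint instead shortens it to about 4 minutes, so ~4.5 minutes is the right ballpark):

    # 4 x 10GbE at line rate with minimum-size frames. Each 64 B
    # frame occupies 84 B on the wire (frame + preamble/SFD +
    # inter-frame gap), capping each link at ~14.88 Mpps.
    STORAGE = 1.2e12                     # bytes (figure given below)
    pps_per_link = 10e9 / (84 * 8)       # ~14.88e6 packets/s
    write_rate = pps_per_link * 4 * 64   # ~3.8e9 bytes/s to disk
    print(STORAGE / write_rate / 60)     # ~5.3 minutes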


40 Gigabits per second is roughly 4 Gigabytes per second.

4 Gigabytes per second times 86400 seconds per day is 345,600 Gigabytes per day.

Roughly: 345 Terabytes per day.

Large, but not stupidly so.
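Written out (divide-by-10 is the rough conversion used above; divide-by-8 is exact for the raw bit rate):

    # Gbps -> terabytes/day under both conversions.
    def tb_per_day(gbps, bits_per_byte):
        return gbps / bits_per_byte * 86400 / 1000  # GB/s * s/day, in TB

    print(tb_per_day(40, 10))  # 345.6 TB/day (rough, divide by 10)
    print(tb_per_day(40, 8))   # 432.0 TB/day (exact, divide by 8)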


40 Gbps would actually be exactly 5 Gigabytes per second (divided by 8).


While I don't know the exact overhead of 10GigE, there is likely still some overhead.

At the lower speeds, things like 8b/10b encoding and Reed-Solomon ECC added enough overhead that dividing by 10 was more accurate than dividing by 8.
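For example, Gigabit Ethernet signals at 1.25 Gbaud, and 8b/10b expands every 8-bit byte into a 10-bit symbol, so dividing the line rate by 10 gives the data byte rate:

    # 8b/10b encoding: 10 line bits carry one data byte.
    line_rate = 1.25e9              # GigE serial line rate, bits/s
    data_bytes = line_rate / 10     # 125e6 bytes/s
    print(data_bytes * 8 / 1e9)     # 1.0 Gbps of actual data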


> While I don't know the exact overhead of 10GigE, there is likely still some overhead.

on 10gige pipes, at the max ethernet mtu (1500 bytes), approx. 94% of the bandwidth is available for user data (accounting for things like the inter-frame gap, crc checksums, etc.). with jumbo frames that number goes up to ~99%.
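Those figures check out if you count IP/TCP headers as overhead along with the per-frame wire costs; a sketch (header sizes assume IPv4 + TCP with no options):

    # Per-frame wire overhead: preamble+SFD (8 B), inter-frame gap
    # (12 B), MAC header (14 B), FCS (4 B) = 38 B. Treat the 40 B
    # of IPv4+TCP headers as overhead too.
    WIRE_OVERHEAD = 8 + 12 + 14 + 4
    IP_TCP = 40

    def user_data_share(mtu):
        return (mtu - IP_TCP) / (mtu + WIRE_OVERHEAD)

    print(user_data_share(1500))  # ~0.949 -> approx 94-95%
    print(user_data_share(9000))  # ~0.991 -> approx 99%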


Okay, so call it 10% overhead (closer to 6%, going by the 94% figure above) if we're taking a WAG (wild *ss guess).

That would mean I would need to divide by roughly nine (8.5 with 6% overhead, 8.9 with the 10% WAG).

Sorry, I can't do divide-by-nine quickly in my head. I can do divide-by-10, though, and the resulting error is only around 10%.


> 40 Gigabits per second is roughly 4 Gigabytes per second.

it should be 5 GB/s, right? :)


> where you dump packet stream from 4x10gbps cards with minimal 84b size (on ethernet), show that you would exhaust the storage in approx. 4.5 minutes

I don't understand your relations here, or the size of the storage you're considering. I did this:

3.5 million packets per second * 84 bytes per packet * 4 interfaces == 101.606 terabytes/day.

Or, at the raw interface rate:

10 Gb/s * 4 interfaces == 432 terabytes/day.


> or the size of the storage

whoopsie, i was considering 1.2 TB of storage...

> 3.5 million packets per second

it is actually 14.88 Mpps (per interface).
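Deriving that, and redoing the daily volume with the corrected rate (assuming the capture stores the 64 B frames):

    # 10GbE minimum-size frames: 64 B frame + 8 B preamble/SFD +
    # 12 B inter-frame gap = 84 B on the wire.
    pps = 10e9 / (84 * 8)
    print(pps / 1e6)                     # ~14.88 Mpps per interface

    # Daily capture volume for 4 interfaces, 64 B stored per packet:
    print(pps * 4 * 64 * 86400 / 1e12)   # ~329 TB/day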




