Formula 1: The super-fast net driving teams to the podium (bbc.co.uk)
112 points by jjp on Aug 14, 2014 | 59 comments



Although this article discusses the actual collection and transmission of data, what I find more interesting is how efficiently the vast amount of data is being used in the decision making process for vehicle setup and race strategy. (I strongly believe that data has no value until it is used for analysis and decision making.)

Data coming from the vehicle sensors and other sources is concurrently being used for analysis (by both hundreds of engineers and a wide range of "algorithms") and as input to simulation models. The results from the simulations mean that even more data is generated even when the vehicle isn't running.

Within minutes, using the data, the different engineering groups (each typically responsible for a sub-system, e.g. engine, tires, aerodynamics) arrive at conclusions, which the vehicle's race/performance engineers then use to enhance the setup of the vehicle. The results of the changes are then fed back to the engineers, and evaluating whether the analysis and predictions were correct is a big part of the post-event work.

I honestly can't think of many other industries that carry out this kind of analysis of highly non-linear systems at this scale and speed. The only one that comes to mind is finance.

Disclosure: I work in motorsports


This is done during flight test of prototype aircraft (and probably spacecraft as well to some extent; I haven't done that). The data acquisition system on a flight test plane is incredibly advanced, and the data gets beamed down in real time to a group of (real) engineers on the ground in the flight test station, who analyze the data and communicate back and forth with the pilots, also in real time. Based on what is captured and analyzed, the test pilots will decide what maneuvers to fly and how, change the configuration of the aircraft systems/surfaces, intentionally induce failure conditions and faults, and so on. It's a real ballet between the engineers on the ground and the pilots when the team is working well together. As you said, post-event debriefing, detailed analysis, recommendations/reports, aircraft changes etc. are started as soon as the aircraft lands, to prepare for the next flight. It's very intense, but also very rewarding and a lot of fun.


beamed down in real time to a group of (real) engineers on the ground

I assume this is in reference to use of the term "race engineer" in F1.

I can assure you that the sport does use real engineers, and very intensely so. Folks like Adrian Newey would qualify as engineers by anyone's definition, and each team has heaps of MechEs, EEs, software engineers, even materials engineers, back at home base.


.. not to mention that there's a lot of overlap between aerospace and F1 engineering. Plenty of the engineers (even the trackside ones) started out studying aero engineering.


Yes, that makes sense, thanks for the insight.


Power grid management is of similar complexity and it's a 24/7 job. Everything from the current temperature to when TV shows end impacts the grid. Add to that just how expensive peaking power is, and the incentives for extremely accurate modeling become significant.


Of course, and I'm sure dealing with the many different data sources (e.g. weather and TV listings) makes it even more complicated!


Do you know if anything similar is being done in MotoGP? I know the communication with the riders is much more limited, but I'm curious what they can get and send back to the bikes themselves, perhaps communicating via the instrument panel.


I also find this fascinating, but have never seen any good technical discussion of it. Do you have any pointers?


The more interesting network side is the link between car and garage. Renting a high-bandwidth private circuit for sports events is SOP these days, and has been going on since the 1950s or 60s.


It's astonishing how many people work to keep the cars running. There are perhaps 50 people for each pair of cars, maybe even more, every time the cars are on track. There are 5 guys on the pit wall, a dozen behind the pit garages and twenty or more at the teams' bases.

They make the tactical and strategic decisions on when to pit during the race, as well as guide the car setup and tuning in Friday and Saturday practice.

Here are a few impressive videos from McLaren's mission control, located at their base in Woking, near London.

https://www.youtube.com/watch?v=vYhl7csZJHw
https://www.youtube.com/watch?v=Ple7W6bdxJc


"perhaps 50 people for each pair of cars"

Well, yes. From the article: "the FIA - restricts them to 60 staff members at the trackside, including engineers"


> Well, yes. From the article: "the FIA - restricts them to 60 staff members at the trackside, including engineers"

Does that include the people at the factory in the mission control? I don't think it does. McLaren's Mission Control (in the videos) already has about 20 people working.

I didn't even count the actual pit crew in my estimate of 50. There are around 20 people in the pit crew.


> Does that include the people at the factory in the mission control? I don't think it does.

Right, that was the point of the article... they can only have 60 people trackside, so they need all this bandwidth to send the data back to the factory where even more people can analyze it.


I don't think this applies to all teams. This must be true for the top six, but not for the rest, which have considerably less resources.


"Considerably less" still means the smallest F1 team has more resources than the entire NASCAR and Indycar grid combined.


> Both AT&T and Tata insist their networks are secure, but they are tight-lipped about precisely how this is done.

This is absolutely terrible. Your network should be secure, even if you tell everyone how it's done, and reluctance to do so makes it very likely it's not secure.


I see this line of argument very frequently - that if you are really secure, you should tell everyone how you do it, as some sort of gold standard for security. Kerckhoffs's principle suggests nothing of the sort - it is something of a thought experiment. When you design a system, you should ask yourself if it would still be secure if you told everyone how it worked. There is no real need to actually tell anyone; the obscurity adds an extra layer of defense.

Here's Steve Bellovin's thoughts on this:

'The subject of security through obscurity comes up frequently. I think a lot of the debate happens because people misunderstand the issue. It helps, I think, to go back to Kerckhoffs's second principle, translated as "The system must not require secrecy and can be stolen by the enemy without causing trouble." Kerckhoffs said neither "publish everything" nor "keep everything secret"; rather, he said that the system should still be secure even if the enemy has a copy.

In other words – design your system assuming that your opponents know it in detail. (A former official at NSA's National Computer Security Center told me that the standard assumption there was that serial number 1 of any new device was delivered to the Kremlin.) After that, though, there's nothing wrong with trying to keep it secret – it's another hurdle factor the enemy has to overcome. (One obstacle the British ran into when attacking the German Enigma system was simple: they didn't know the unkeyed mapping between keyboard keys and the input to the rotor array.) But – don't rely on secrecy.'


You are right in this case. The only one who must be informed about how the system works is the client. In this case, the F1 team should know how this security is achieved, not the attackers or the general public.

On the other hand, if it is my information you are securing and if I don't have good reasons to trust you, I want to know how it is being done, even if that means attackers also know. If everyone is your client (for example, if you provide public services), then everyone must know, so they can independently audit the system.

Obscurity can be a layer of security in one system made, managed and audited by trusted entities, but, generally speaking, it is a weak layer against an attacker with great enough motivation.

On the other hand, obscurity is detrimental to a collective of systems made, managed and audited (or not) by a great variety of entities. Sure, in the real world we place trust in companies to handle their security well, but that has ended badly in the past. Our knee-jerk reaction to security through obscurity is beneficial because obscurity is often a symptom of security issues in the system.

In conclusion, regarding obscurity: the beneficial effect of an extra security layer outweighs the potential harm of hiding security problems only if the entities behind it, including the auditor, are competent and trustworthy. As a rule of thumb it should be avoided, or its downside mitigated by independent security audits.


Exactly. “Security through obscurity should be an _extra_ layer, not _the_ layer”.


Let's put it another way: a reporter who doesn't even begin to have the base knowledge to understand the answer, let alone write about it intelligently, asks "so, what makes your network so secure? Can you give me more details?"

Interviewee: "sigh. There are multiple layers of blah-blah, with foobar authentication schemes. That's all the detail I'm prepared to go into."

<Reporter writes down, "tight-lipped about security">

AT&T and Tata don't owe you an explanation just because you think that's the way the world should work. Such an explanation takes time away from doing other things, all so only a tiny fraction of a percentage of the world population can actually understand what was said.


I expect if you tell everyone how it's done you'll then attract a bunch of people probing it thinking they might know a way around the restrictions. Not knowing where to start, they don't bother.


They're probably not allowed to reveal any specific information at all regarding these networks - including security. I suspect they know what they're doing.


Yes, because they have to dedicate time explaining every detail of their implementation for the perusal of everybody, sure.

That's not how things work.


Sure, telling people how your security works is great. And in a tech world we accept and want this.

But this is F1, where everything is a secret. F1 teams go to crazy lengths to keep anything that could be an advantage to themselves (e.g. Renault's mass damper, Brawn's double diffuser). I would suspect that there is a very strict NDA as part of supplying the communications to each team.


understanding the importance of "security through obscurity" is not equivalent to "tell everyone exactly how everything works, whenever they ask".

there's a difference between relying on obscurity, and publishing all of your designs of a private system.

the underlying design principle being promoted is to assume that your design specification has been compromised - not to compromise it yourself.


Why the downvotes? Even if the comment is wrong, it certainly merits discussion.

Remember: disagreement should not lead to downvotes, it should lead to a reasoned reply. The reason for a downvote is when a comment detracts from the discussion, and I can't imagine how the parent post could be viewed that way.



There was a related 'In Business' BBC Radio 4 programme on this subject last week - it was an interesting listen. You can download it (possibly UK only - not sure) as "In Business: Fast and Furious, 7 Aug 2014" here:

http://www.bbc.co.uk/podcasts/series/worldbiz


I'm fairly sure BBC podcasts are available worldwide.


I used to work for a (traditional) engineering company that partners with an F1 team.

Once a year they would come around and give a talk about asset reliability and how they operate.

An interesting story was about the one time they ignored their data (because, given the reliability of the part involved, they couldn't believe such an event was likely) and suffered a catastrophic engine failure mid-race.

Thanks to the data collected they were able to run detailed simulations to "debug" the issue - a small change somewhere else had caused a part to vibrate at its natural frequency, i.e. shake itself to bits. This then helped them mitigate the risk of the same thing happening in the future.

From then on they have always trusted the data.

Test your sensors, trust your sensors, trust your data.


Except fuel flow sensors ;)


This 2.5-minute video gives you a sense of the scope and breadth of the support that goes into racing an F1 car.

It's really the engineers making the pre-race setup for the car and making real-time adjustments during the race that make a winning F1 team. It has been argued that F1 drivers are merely "along for the ride" and that it's really what goes on behind the scenes that wins a race. https://www.youtube.com/watch?v=1j9r7Ue6XnA


"It’s really the engineers making the pre-race setup for the car and making real time adjustments during the race"

"8.5.2 Pit to car telemetry is prohibited." source: http://www.formula1.com/inside_f1/rules_and_regulations/tech...

"F1 drivers merely are “along for the ride”" This is wrong on so many levels. F1 is a very physically demanding sport. Their reflexes have to be as good in lap 40 as they were in lap 4. They need to perform changes to the car in real time, mostly between turns, as they're told. They need to know how to attack, defend, overtake, save tyres, save fuel.

The difference is that 10/15/20 years ago they did all this with much less data; now they have much more context for each decision.

You clearly do not watch much Formula 1.


I'm really not sure where you get the idea that the drivers are along for the ride. Certainly not in that video, where the only input from a driver (Rosberg) shows that he's very, very busy during the race.


> sometimes amounting to over 100 gigabytes each second.

So that's 800 gigabits per second, nearly a Tbps. Which is a lot of network throughput. Although this is not impossible, it seems surprisingly high, and I wonder if the article should have said "100 gigabits each second", which is still a lot of bandwidth but a little more reasonable.

The article describes the data being transmitted as largely telemetry. Let's say they are sampling each of their metrics every 10 milliseconds (100 samples/second), and they need a 32-bit integer to store each value. So that's 100 * 32 / 8 = 400 bytes per second per metric. Divided into 100 gigabytes, that would be 250 million metrics. Even if you change these assumptions, clearly this link is used for more than telemetry.

800 Gbps requires a large number of computers to generate and transmit, then another large set of machines at HQ to ingest all that data and then run analytical models against it. Just to handle the network traffic, without any data analysis, if you assume servers have a pair of 10 Gbps network interfaces, that's 40 machines.

If you assume hard drives can write at about 100 megabytes per second, that's 1,000 disks just to absorb the writes. But with those 40 machines, that's an average of 25 disks per machine, so not unreasonable. With SSDs it would be fewer devices.
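
For anyone who wants to poke at the assumptions, here's the same back-of-envelope arithmetic as a small Python sketch (the sample rate, sample size, NIC and disk speeds are all the guesses from above, not published figures):

    # Back-of-envelope check of the "100 GB/s" claim; every constant is an assumption
    LINK = 100e9             # claimed link rate, bytes per second
    SAMPLE_BYTES = 4         # one 32-bit integer per metric sample
    SAMPLES_PER_SEC = 100    # one sample every 10 ms

    per_metric = SAMPLE_BYTES * SAMPLES_PER_SEC                    # 400 bytes/s per metric
    print(f"metrics to fill the link: {LINK / per_metric:,.0f}")   # 250,000,000

    # Servers needed just to move the bits, assuming 2 x 10 Gbps NICs each
    print(f"servers: {LINK * 8 / (2 * 10e9):.0f}")                 # 40

    # Spinning disks needed to absorb the write stream at ~100 MB/s each
    print(f"disks: {LINK / 100e6:,.0f}")                           # 1,000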

I do not know of a way to get this much network throughput over a radio link, so I can't imagine it's all coming from the car itself.

It would be interesting to know what data other than telemetry is transmitted over this link during a race.


> so I can't imagine it's all coming from the car itself.

There's also high-definition imagery from each car; I guess some of it is generated in the car and the rest is supplied by the TV cameras.

I don't know whether these run at UHD, but given the boatloads of cash behind F1 I'd assume so.


To my knowledge, none of the F1 cameras are UHD; FOM (who nowadays produce all the TV coverage in-house, except for Monaco) only moved to HD in 2010, and even then only offered the HD feed from 2011 (in 2010 the feed everyone got was downscaled).


Yes, it's an error.

The link will be utilised for telemetry, data services, radio intercom, video and other connectivity.


"And a colossal amount of data is being transmitted over the network - sometimes amounting to over 100 gigabytes each second."

What am I missing?


It's BBC tech reporting. The details are intended to impress the average non-techy reader. 100 GB/s seems a bit exaggerated, but whatever, it's a fun article to read anyway.


Someone downvoted this, but I think s/he's right in that it's exaggerated. I explained why here: https://news.ycombinator.com/item?id=8177618


They're counting the video feeds also.


Perhaps the article meant the "entire network" and not just a single connection?


Still, it makes no sense. The Amsterdam Internet Exchange is one of the biggest in the world, and it transfers 0.8 Tbps (800 Gbps, or 100 gigabytes per second) at night, peaking at about 2.6 Tbps during the daytime[1]. If F1 races added 100 gigabytes per second to that, it would be a 100% increase at night or a 30% increase when the sun is up here.
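
To make the comparison concrete, here's the arithmetic as a small Python sketch (AMS-IX figures as quoted above; everything approximate):

    # How the claimed "100 GB/s" compares with AMS-IX throughput [1]
    f1_bps = 100e9 * 8       # 100 gigabytes/s expressed in bits: 0.8 Tbps
    amsix_night = 0.8e12     # AMS-IX throughput at night, bits/s
    amsix_day = 2.6e12       # AMS-IX daytime peak, bits/s

    print(f"increase at night:  {f1_bps / amsix_night:.0%}")   # 100%
    print(f"increase at midday: {f1_bps / amsix_day:.0%}")     # ~31%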

The article mentions it also needs to span continents. I'm pretty sure that if you wanted to reliably transfer 100 GB/s, you'd need to roll new cables between a couple of continents. North America to Europe might have the capacity, but I'm sure (I looked into it before[3]) that South America to Europe can't do it directly, even if you exclusively rented the cable that runs between the continents (which is the Atlantis-2). So you'd need to route through North America or maybe Africa, which adds latency. Probably still under 300 ms one-way, but it still seems unfeasibly large to me.

Even if the article is mistaken and means 100 gigabits instead of -bytes, I have a hard time believing it. It's possible, but hard to get that much data between continents while keeping the latency low enough, at least without rolling your own undersea cables, which even Google doesn't do on its own[2].

[1] https://ams-ix.net/technical/statistics

[2] http://gizmodo.com/google-funds-an-undersea-cable-to-connect...

[3] http://submarinecablemap.com


Are they including web traffic?

"The Tata Global Network(TGN) has Trans-Atlantic and Trans-Pacific data transfer capacity of one terabit per second." http://www.formula1.com/news/headlines/2012/2/13043.html (Two years old)


Absolutely, that figure is insanely exaggerated. I am willing to bet it's more like a low-latency 1-10 Gbit/s connection per team, depending on the country they are racing in.


Interesting article, but the 100 GB/s does not make sense. I mean, that would require an 800 Gbit/s - 1 Tbit/s connection per team? Uhm, no. Even if it's 100 Gbit/s, that would still be 12.5 GB/s, which seems like a lot for telemetry data and video streams. I wonder how they transfer that anyway, as the nearby city would need to support that kind of throughput for all teams at the same time.

Pretty sure the actual number is a lot lower than that. I'd say it's probably more like a low-latency 1-10 Gbit/s connection per team.


This is very interesting and intellectually stimulating, but none of it matters in the current season, where Mercedes has a vastly superior engine and is winning races by tens of seconds, not milliseconds.


Not in recent races, where there's been real competition.


"It's a completely bespoke car for each circuit."

This may give the wrong impression, so just to be clear: there are strict regulations in F1 regarding the reuse of components and static designs.

http://www.formula1.com/inside_f1/rules_and_regulations/spor...

Mercedes, whose drivers sit #1 and #2 in the driver standings (and who, of course, sit #1 in the constructors' standings), are presumed to have an advantage because of the way they laid out their turbocharger.

https://www.youtube.com/watch?v=NuBB2F6IutQ

By F1 rules, no other team can change their own engine designs until next season.


While the engine, gearbox and chassis have strict regulations, the cars undergo major changes from race to race.

Next week at Spa (a very fast track) we will see radically different cars from, say, Monte Carlo (the slowest track). The cars look visibly different in side-by-side images.


The overwhelming bulk of the changes to the car from track to track are setup changes. The designs are something teams put an enormous amount of work into over the off-season, plus a significant amount of test time, so it seems unlikely that they would radically, or even marginally, change them mid-season - even before considering the incredibly tight restrictions of F1 (which are there for safety and fairness, but also to try to keep things within "reasonable" fiscal bounds).

Teams come out with a season design, and from that they adjust wings, tire pressure, gear ratios or change points, etc., within the tolerances allowed. I've only marginally been following F1 for the past year, but I'm wholly unaware of any substantial construction change from track to track.

Edit: To substantiate this a bit, here's the Ferrari car at Hockenheim (a very high-speed track) compared to Monaco (a twisty, "urban" track).

http://www.zimbio.com/pictures/bGZpJEcephz/F1+Grand+Prix+of+...

http://www.formula1.com/news/headlines/2014/5/15851.html (click on the Ferrari picture)


they adjust wings, tire pressure, gear ratios or change points, etc

This year the set of gear ratios is fixed for the entire season (there's some wiggle room, the details of which I forget, but that's the gist). A team has to elect at the beginning of the season what gearings they'll put into the gearbox.

The tradeoff is that this year they've got 8-speed gearboxes compared to last year's 7. So in the past they had to pick 7 gears optimal for each track; now they've got 8 gears that must compromise across the whole season.


The competitive teams will field a variety of front and rear wings tailored to different tracks.


I believe they used to have different-wheelbase chassis for some circuits - not sure if this is now banned.


It's not banned; Raikkonen changed his Lotus wheelbase mid-season.


"When the races are held closer to Red Bull's factory, which is based in Milton Keynes in the UK, the delay can drop to a staggering seven milliseconds"

whoopee doo.


Important to know: they're almost certainly talking about the race at Silverstone, England. That's about a 30-minute drive (https://goo.gl/maps/EdR4q). So fast, but technically not nearly as difficult as connecting to Malaysia.



