Bringing back the golden days of Bell Labs (nature.com)
173 points by gtsnexp on Aug 5, 2022 | 128 comments



Every once in a while these articles about Bell Labs or Google X, etc. come up, and I'm always amazed that our national network of government-funded research labs is never brought up.

Quantum computing, new encryption methods, neuromorphic computing, tons of cool stuff, all being worked on by some of the smartest people at our national labs. Some are more security focused than others (Sandia, Los Alamos), but others (LBNL, LLNL) are very diverse in their research areas.

And they collaborate with businesses to bring their work into the real world.

Are they just not well known enough? It's a great alternative to universities: more research focused, and publication track records aren't as important.


The national labs do pretty cool stuff, but I don't think it's as freeform as the original ARPA under Licklider or Bell Labs research was. My understanding is that the success of those programs was that they funded ideas and people, not projects. Nowadays that is extremely rare, and for the most part the national labs are highly focused on projects. There is not much speculative research on things that could fail. Really, the same goes for a lot of academia.

That's not to say that the projects at the national labs aren't interesting. They're very interesting, difficult, and challenging. But it would be nice if cells within them had much longer time frames, ten years and beyond, for transformational technology.

The other issue is that ARPA, Bell Labs, PARC Research, and the like were also catalyzed by their time. They were absolutely ahead of their time with the solutions they came up with, but the 20th century was ripe for the picking for transformational technologies.


I have worked at a National Lab previously. For the computational field I worked in, BigTech and most startups pay much more and have better access to resources.

If you're a decent programmer at a National Lab, you can work on interesting problems at a startup for likely double your current salary. You can also immediately spin up and spend $500 in AWS without anyone batting an eye; at a National Lab you'll generally only have access to one type of compute (a supercluster), and you have to wait days or weeks for your turn.

I think this is one major difference between the golden age of Bell Labs and now. I think if you want to recreate that, you have to at least try to compete on the hiring playing field with BigTech.


The national labs are a huge asset I wish more people knew about. You get to work on interesting, meaningful projects, are given access to all the expertise, training, and development resources you could reasonably want, and have a relatively secure job with a good work-life balance.

I often talk to graduate students in STEM fields about their career prospects, many of whom are interested in software development but are disillusioned with the career paths academia, FAANG, and the standard tech companies have to offer. Many of them just aren't aware of what the labs have to offer until I bring it up, unless their faculty advisor is in some way involved.


I looked into working at a national lab after grad school. It seemed like way too many of the openings were postdoc positions on a two-year contract that was supposed to come with a research proposal. No thanks. I have no interest in spending two months writing a proposal so I can make a $60k salary for two years and then get let go. It's hard to be disillusioned about FAANG when that's the alternative.


Yikes, it sounds like they want a huge contract workforce of dubious quality.


When Bell Labs was doing cool stuff, few people knew about them.

It's only retrospectively that they are famous, because we have all those historical tech figures who went there and got blown away. Then this bubble expanded, and years later, it became public, common knowledge.

It may happen the same way for the places you are talking about.


> And they collaborate with businesses to bring their work into the real world.

and they collaborate with nearby schools: i took physics in high school at Fermilab, for example, and it was super inspiring and accessible.


I always felt universities should have been the ones to take on roles similar to Bell Labs. They receive government funding and are focused on fundamental scientific research. Now it seems most academics are (understandably) preoccupied with publications, and, in computer science at least, breakthrough deep learning research comes from rich tech companies like Google and Facebook. I wonder what happened.


I agree, but I think the article gave the reason: "stable funding".

The largest worry of the few profs I have run into over the years is getting grants. Thus the preoccupation with publications.

Where I work, it seems this trend is hitting commercial labs too: researchers are slowly being forced to "produce". Unless something changes, research for its own sake seems to be a rare thing, at least in the US.


> Where I work, it seems this trend is hitting commercial labs too: researchers are slowly being forced to "produce".

Even better, this is often the same productivity metric: papers. Though papers can be good, I'd much prefer to _build_ stuff, and measuring productivity by paper count & journal "impact" undercuts the research itself and its productisation.


>Riordan attributed the success of Bell Labs to the “combination of stable funding and long-term thinking”1.

This is not fantasy; it is 20/20 hindsight.

Actually, it's the same hindsight I had as a teenager, although I'm old enough that most of it would have to be considered foresight by the time Riordan's article came out.

In the 1960s there was even an experimental school or two intended to be a kind of pipeline to Bell Labs, to select & prepare preteens for later scientific PhD programs and eventual leadership, if not at Bell, then at places like NASA, IBM, the national labs, etc. Working for Messrs Hewlett and Packard was right up there too.

>most importantly the freedom and time to pursue any research interest is a luxury very few scientists in academia or other research organizations can afford.

There will always be very few positions where you can utilize millions of dollars of resources to experiment however you would like.

>Kelly believed that to achieve outstanding results an organization needs a critical mass of talented people with different skills. He was looking to hire men (remember this is the 1950s!) “of the same high quality as are required for distinguished pure research in universities”5. Attracting such talent was not a problem, rather the challenge was to create the right environment for it to thrive. “We give much attention to the maintenance of an atmosphere of freedom and an environment stimulating to scholarship and scientific research interest. It is most important to limit their work to that of research”5. Kelly believed that any distractions would make researchers lose “contact with the forefront of their scientific interest”5 and decrease their productivity in research. Above all, Kelly saw research as a “non-scheduled area of work”4, translating to no deadlines, objectives or progress reports.

This was also 20/20 hindsight, and it has been possible to make decades of conceptual progress since then; too bad Bell Labs couldn't remain in existence to participate, as it most likely would have.

Anyway, without the stable funding, as a teenager that all left me with nothing but the long-term thinking, but at least I got a head start :)

But missing that key element, success would certainly not be achieved on the same financial terms or scale as at Bell.

Then again places like Bell weren't going to be there forever.

Accepting this, I persevered through a full decade of continuous improvement until I could leverage far fewer resources to equal the accomplishments of my peers.

Continuing on the same course was a no-brainer and has allowed for pulling ahead in a number of areas.

Once I got to the point where I could solve some problems for billion-dollar companies faster than their researchers could do themselves, I never looked back.

Without a big enough organization you don't achieve critical mass, but I was raised to be a businessman anyway, even if it could be a distraction from research; I just worked longer hours so the experiments themselves were never less than a full-time effort. And at least I own everything I invented while I was an entrepreneur.

>I'd much prefer to _build_ stuff

Me too. Since there's negative incentive to disclose my findings to potential competitors, less time is wasted at the desk on publications or lack-of-progress reports, and more time is spent at the bench.


In computer science, or more specifically in deep learning, there is a high correlation between the resources you can spend and the progress you can make. This is why all the major tech firms have a leg up: they have internal computing power and data that are impossible for any one university to have. It is also the case that a lot of professors and even students have temporary appointments with tech firms, for exactly this reason.

If you look at other fields which are not as resource intensive, universities can and do contribute to the state of the art. Especially in areas like theory, where, even though MSR etc. do cutting-edge research, the bulk of the work is done at universities.


I don't think this is the only reason, though. (Preclinical) bioscience experiments can be extremely expensive, and yet they remain pretty concentrated at universities, for better or worse.

I don't know about the state of funding in protein folding specifically, but knowing about the kind of funding that some other high-profile bio labs can get, I am skeptical that material resources were the main difference between AlphaFold and what the prior academic literature could do, for example.


I guess it's important to socialize product development in order to maximize the private profit of those few individuals who control the means of production.


Wasn't this what ARPA and Project MAC were?


This cannot go out without mentioning the wonderful book:

"The Idea Factory: Bell Labs and the Great Age of American Innovation" by Jon Gertner

https://www.goodreads.com/book/show/11797471-the-idea-factor...

A highly recommended read!


This book is really good but does not detail the Thompson and Ritchie portion of the Labs' history. It only mentions it in passing.


"UNIX: A History and a Memoir" by B. Kernighan features both Unix creation and Bell Labs culture of its time.

https://www.amazon.com/UNIX-History-Memoir-Brian-Kernighan/d...


In your opinion, what's the single best book to read about Bell Labs and absorb lessons that can be applied to building modern innovative organizations?


Probably the one already mentioned.

I am currently reading A Thread Across the Ocean which was recommended on here as well.



Bell Labs in its prime was a marvel of human civilization. But let's admit the economic conditions which made it possible. Pre-antitrust, Bell was an absolute monopoly. They could afford science projects with no economic returns because they were insulated from any economic pressures.

The public paid for Bell Labs through high monopoly rents. Want Google to have a Bell Labs? Then designate it the nation's sole internet monopoly (which I'm not really advocating, and nobody else would either).


And yet it is the opposite, strong anti-monopoly enforcement, which led to such labs. Most big companies had big R&D efforts during the mid-20th century, and that was because the government was unusually aggressive in its anti-monopoly efforts. Big companies couldn't simply buy small, innovative startups. Mergers were less common, and the government examined each for any possible threat to competition. Buying an up-and-coming competitor to limit competition was strictly forbidden. So if you wanted to be innovative, you had to build a big lab and do all the innovation yourself.


Because of strict regulations on Ma Bell, Unix was free with all the source code. Universities would request a copy and Ken would send a magtape.

And so it began.


It's not like Ma Bell had any competitors they could buy, even if they wanted to, because they were a complete monopoly. And despite this supposed strong anti-monopoly enforcement, they milked that monopoly for decades.

But hey, at least we got C and Unix out of it.


C, UNIX, radio astronomy, cellular networks, the transistor, integrated circuits, the laser, photovoltaic cells, the charge-coupled device (CCD), information theory, super resolution microscopy, optical tweezers, error-correcting codes, fibre optic networks...


Seriously, when you list it all out it sounds like a reasonable trade.


The question isn't whether those things are nice; it's whether they would have been developed just as fast in a competitive environment.

(Personally, I think just based on human nature that competition is anti-innovative in the long run, but I can't prove anything either way.)


The question is what we lost because we did those things.

We know what happened, we don't know what alternative histories would have been like. We don't know if they would have been better or worse.


It's not a perfect answer, but we can look at other industrialized countries where nothing like Bell Labs existed.

An easy citation would be CERN. They gave us the WWW, but then CERN is a monopoly in its own right, given that no one else has similar facilities, similar requirements, or a comparable base of talent to draw from.

Other than CERN, I'm drawing a blank here. Examples, anyone?


I would argue we have examples now: Google Research, Deepmind, MSR, IBM Research, AT&T Labs, FAIR...

I would argue that all their combined achievements don't add up to Bell Labs'. (IBM Research Zurich has a couple of Nobels, I think.) Certainly at least one reason is that these labs have pressure to bring some of their research to market rather than solve for the long term.


Not at all, because all later research was influenced by what went before. And of course all those people doing the work were not doing something else.


Don't forget S, the mother of R, was developed at Bell Labs.


> But hey, at least we got C and Unix out of it.

And the transistor [0], the laser [1], and the CCD image sensor [2], to name a few more.

[0] https://en.wikipedia.org/wiki/History_of_the_transistor

[1] https://en.wikipedia.org/wiki/Charles_H._Townes

[2] https://en.wikipedia.org/wiki/Charge-coupled_device#History


I heard Townes in a small lecture at Berkeley once. As I recall he described a bit of a struggle getting Bell Labs interested in patenting the maser (or some similar administrative detail). He offered that the key phrase turned out to be: you might be able to use this for communications.


Seems like the early CATV operators would have been an easy target for Bell.


Bell didn't spring into existence as a monopoly. GP described the conditions that grew Bell.


It’s interesting you mention Google because they do have quasi-monopoly profits that allow them to fund a lot of different things. Sure, they launch a new chat app every year and close it. But look beyond that.

Their profits allow them to staff projects that we benefit from. For example, programming languages like Go, Dart and (soon) Carbon. Go is a particularly good example here because it was literally created by Bell Labs alumni working at Google.

And it's not just languages. No other organisation in the world wants to create another operating system. We've reached a local maximum with Windows, Linux and macOS (+iOS) that no one has the capability of challenging. The sensible thing is to run these operating systems till the end of time because they're good enough. And yet Google with its profits says "how about we build an OS with a completely different approach?"

Quite apart from Google's impact on the world, we benefit from having the option of Go, Fuchsia and the rest of it. This is the product of research and development they have funded. Even if we don't use them directly, we learn from their approach for future attempts to improve things.


>For example, programming languages like Go, Dart and (soon) Carbon.

So what’s innovative about them, that hasn’t been tried before?

>No other organization wants to create another operating system.

Erm, what? Lots of companies created, and still create, new operating systems, it’s just that they don’t have the PR to make noise about it. Also what’s this “different approach” you’re talking about? Capability based security is some 50 years old, has been implemented several times, and is the default sandboxing model in FreeBSD.

Can you show some really crucial inventions from Google, comparable to e.g. Unix or the transistor? Inventions, as in, something other than repackaging old ideas with lots of marketing around them?


What a curious standard you use to judge inventions. Inventions from Google inspired by previous approaches are discredited as derivative ... while inventions by Bell Labs also inspired by previous approaches get a free pass?

Are you under the impression that Unix was the first operating system, or that C was the first programming language? What do you reckon about C++, also invented at Bell Labs? The folks at Bell Labs were giants and they stood on the shoulders of giants.

Carbon could make a substantial difference to the safety of C++ software by migrating such codebases to a relatively safer language. That's an improvement that we benefit from because Google and friends are publishing it as FOSS. Has a similar approach of gradual migration been done before? Kotlin and TypeScript did it. Eschewing C++ templates for generics? That's from Swift. Using syntax that's easy to read and to parse? Inspired by Rust. They're up front that they're inspired by all these successful languages.

But who cares? The fact is, something doesn't need to be completely novel for it to be useful. You can denigrate the work of others all you like, but all work that's done by humans is inspired by previous work by someone else. That's a fundamental fact of life. Even something as game changing as the transistor was inspired by existing vacuum tubes.


There's an inherent tension in this take.

The people need to be free to study whatever they want without profit interfering, but they also need to be attached to a for-profit business in a specific arena that has monopoly profits.

What is the magic ingredient that we lose if we just give the public's money (plus some extra we get from avoiding the inefficiency of monopolies) to academics directly and tell them to build cool stuff?

Is it patents? Is this basically an internal Venture Capital thing using monopoly profits? If they get to stick their flag in one new idea before anyone else then they get to extract rent from everyone else and/or squash competition and so pay for the 99 misses?

edit: they were forced to license their patents in 1956, so before that, patents were presumably the goal.

https://www.aeaweb.org/articles?id=10.1257/pol.20190086

The above suggests that the open licensing of the patents spurred innovation, except in telecoms, where AT&T maintained its monopoly.


You could write a book on this point. In fact a few have and they are worth reading.

Someone at Bell Labs once remarked that 90% of staff worked on development (bringing research into practical deployment). Based on that, they estimated that development is ~10x more difficult than research. While works like Shannon's or Bardeen's or Pierce's stand out, most of the work was devoted to deployment. Something academics and patents don't cover.


So the modern equivalent might be Sunshot where the government recognises a certain area with potential like PV (sort of a Bell Labs innovation coincidentally) and then funds efforts to commercialize and drive prices down while supporting initial business in the field.

https://www.energy.gov/eere/solar/sunshot-initiative

A decentralised, non-monopolistic Bell Labs for the 21st century.

Oh, just noticed it was announced by a Bell Labs alumnus, Steven Chu, in his role as U.S. Energy Secretary.


This was basically the approach of Mozilla Research AIUI, which gave us Rust.

IIRC this talk goes into a lot of history there: https://youtu.be/9OHcJzJQ2Nk.


> You could write a book on this point. In fact a few have and they are worth reading.

If you have time, could you share some specific recommendations?


If I'm going to read one book about Bell Labs, with the goal of learning things I can apply to the current day, which book should I read?


> If I'm going to read one book about Bell Labs, with the goal of learning things I can apply to the current day, which book should I read?

The "Idea Factory" by Jon Gertner comes to mind. It is quite accurate.


Industrial labs pay better and don't sink as much time into grantsmanship.

It's not so much that they are magical, it's that traditional academia sabotages itself by being stingy. Industrial leaders have a better calibrated sense for when "stingy" becomes "counterproductive," so they don't shoot themselves and their scientists in the foot.


I don't see the connection. We have just gone through the longest bull market in history. Large tech companies were absolutely rolling in cash, hiring tens of thousands of expensive software engineers, whose purpose was unclear.

What do we have to show for it? That's right - complex ad algorithms and sophisticated content suggestions that lead to escalating extremism and teenage girl depression.

I would blame the "shareholder value" culture more than anything. Short-term profits over long-term growth and innovation. GE and Boeing had every opportunity to do cool things, but the Jack Welch school of management drained these companies of life, with piece-by-piece sell-offs and stock buybacks (which, by the way, were illegal at this scale before the early 80s).

The problem is not monopolies or lack of (although the government is effectively toothless at this point in that regard).

The problem is broken Capitalism and ass-backwards incentives.


Isn't Meta kind of doing this with VR/AR? Spending billions every year on research with speculative future value...


>What do we have to show for it?

Go, Rust, Chrome, Android, TensorFlow, PyTorch, tensor processing units, free education material, free email, free chat, data center scale computing... Upcoming there is self driving cars, ar/vr, quantum computing...

We actually have quite a lot to show.


Go and Rust - these are just languages. They do not fundamentally change the world in any way.

Free email and chat - come on. Hotmail, Yahoo Mail, ICQ, we had all that - without feeling violated by all the aggressive data collection, which leads me to... machine learning libraries. That's what they are used for.

Most of the technological "advances" since 2007 (Facebook, Twitter, mobile phones, big data) have been used to steal and monetize our focus, and hence we have half of our kids on Adderall. That's what we really have to show for it.

And notice that a layman would have zero idea what all these names you dropped are. They mean nothing to an average person. My point is about the larger culture of innovation across all industries, not just some stuff deep inside the software world.


I.e. rehashed old ideas, but "free", as in paid for by ads.


Poor refutation.

Neither the languages nor the machine learning frameworks are old ideas or directly ad-supported. Tensor processing units and cloud computing are also purchased directly.


Once again: can you point me at the new ideas here?


Monopolies don't foster innovation, which is the implicit claim you are making.

If the ROI on a lab is the best of available options and the company has sufficient capital then... it will get done.

Being a monopoly makes the capital part easier, but a non-monopoly of sufficient size would make the same decision.

And the direct effect of monopolies on innovation is even worse: they directly stifle competitors.


unfortunately, i don't have access to university resources anymore. but i suspect that if someone were to go pull the 1980 annual report for at&t (T) from the proquest historical annual reports database (should be available via online library resources to anyone with an active university affiliation), take those financials and inflation-adjust them (perhaps to pre-pandemic times, as things are all shifting around right now), and then compare them to the revenues and profits of the top five largest technology companies in the united states, one would find that they're on a similar scale.
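the inflation-adjustment step itself is mechanical; a rough python sketch (the revenue figure is a placeholder, not from the actual 1980 report, and the CPI values are approximate BLS CPI-U annual averages):

    # hypothetical illustration only: placeholder 1980 revenue, approximate CPI-U values
    att_1980_revenue = 50e9                # placeholder, NOT the real annual-report figure
    cpi_1980, cpi_2019 = 82.4, 255.7       # approximate annual averages
    adjusted = att_1980_revenue * (cpi_2019 / cpi_1980)
    print(f"~${adjusted / 1e9:.0f}B in 2019 dollars")   # ~$155B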

unfortunately i cannot verify that for myself right now.


One of the key problems is that in today's society, those high monopoly rents wouldn't necessarily be re-invested in projects.


" Want Google to have a Bell Labs"

Are you suggesting that Google couldn't already do this if they had chosen to?


>Quantum computing is in a similar position today

I don't think that the position is similar. The level of 'ingenuity' of the overall system was lower back then. Fewer men were needed to understand each and every aspect of the system.

Now, each scientist is more specialized, which means that they individually see less of the system. Additionally, the system is more complex. Together, this means that many more scientists have to be coordinated to make progress.

This is further complicated by knowledge protections. Bell Labs most likely had all the relevant patents and copyrights. To reach the same level of fluidity of knowledge, an additional army of patent lawyers is needed.

An additional problem is that targeting quantum computers specifically takes away from the idea of 'no deadlines, objectives or progress reports'. This will be a particular problem in those proposed 'focused research organizations'.

What I can't understand is how Google could come so close and fail. They have 10% projects and moonshots. They have the monopoly-like income stream to fund it. But instead of creating progress, their products seem to get worse and worse.

How could one sustainably fund something like Bell Labs with company profits?

*edit: The more interesting question: since the target of Bell Labs, 'universal connectivity', has been achieved, do we still need centralized labs for the next relevant innovations? Doesn't universal connectivity give us the potential to come up with something better? I believe that a social network built to create scientific progress is missing.


> What I can't understand is how Google could come so close and fail. They have 10% projects and moonshots. They have the monopoly-like income stream to fund it. But instead of creating progress, their products seem to get worse and worse.

They sell information and ads. Most of their products are targeted towards that.

And ads give the client zero added value. It's not a product, it's not a utility nor a tool.

Many of their products are oriented towards that. And it's hard to produce good inventions when your business model is basically dark patterns.


> a social network built to create scientific progress is missing

Like a Royal Society?


That would be very exclusive, wouldn't it?

The problem is that 'every mathematician has only a few tricks' [1]. This means that every scientific problem should be exposed to as many scientists as possible, coordinated in a way that maximizes the number of tricks that are tried.

To me, this suggests that something like the TikTok algorithm should bring scientists and problems together. The problem is that each problem is not a 30 second video that is easy to consume, but a problem that requires some amount of studying before it is approachable.

There is most likely not much to invent; e.g., there is already Stack Exchange to answer questions and Khan Academy to gain basic knowledge, but, like YT before TikTok, there is no algorithm that brings it all together.

[1] https://mathoverflow.net/questions/363119/every-mathematicia...


A related thought: I sometimes think how cool it would be if some of the empowering ideas from the Smalltalk and Lisp communities (think Xerox, Symbolics etc.) would push personal and end-user computing forward.

HCI seems generally stagnant, or at least its real-world influence is. Apple managed to extract and distill the UX portions, but their products lack the empowerment aspect. Linux is kind of the opposite of that.

I feel like there are so many problems with end-user computing, specifically because every program defines its own little world. Nothing is integrated well; most things are incredibly hard and shaky to integrate and extend. So much time and effort is wasted because of lock-in effects and feature bloat.

Also we’re missing out on large portions of users solving their own problems, bringing in their expertise and creativity to computing instead of their frustrations.


There are a lot of assumptions in this comment about (a) what people want, (b) what people are willing to do, and (c) the conceptual level at which "integration" can and should take place.

Look at other tools (non-computational). The story there doesn't look much like what you're proposing. Maybe there's agreement on lumber and thread sizing, but in general, tools do not interoperate much. Whatever level of integration they do have comes mostly from having to interface with the physical universe. In addition, most people don't know how to use most tools, and things are broadly left to experts ("tradesmen").


What a bizarre, disjointed article. Bell Labs, quantum computing, racial discrimination, blah, blah. There are analogs to Bell Labs today, including in Google and others working on quantum computing, and machine learning, and self-driving vehicles, and... with one similarity: that large (possibly monopoly) financial power allows companies to make these long-term, expensive investments. If somebody at Nature wanted to opine on Bell Labs-like organizations, they only need to look at the many examples out there. Sure, they will look different, but they are there. Why what was written was worthy of being a feature article in Nature Reviews Physics is beyond me.


Between today and the heyday of places like Bell Labs there is one fundamental difference:

Where are the true believers?

Let's use an HN reader as our baseline. You're likely much smarter than the average. Do you want to work incredibly hard, long, focused hours building another IP monopoly for somebody who's already OP? What if they reward you with a pat on the back? Is that personally appealing? Is there any person or entity you love enough to slave under?

I posit the difference in mentality is massive.

A new Bell Labs would need to incentivize the people actually creating value by "allowing" them to retain a real stake in what they create.


> You're likely much smarter than the average. Do you want to work incredibly hard, long, focused hours building another IP monopoly for somebody who's already OP?

Possibly. Tons of people do it.

> What if they reward you with a pat on the back?

They reward the high-performers with risk-free, extremely attractive compensation. Very few people would be able to create value on their own.


Thanks. My question, though, is: do you personally desire to be in that position?


A stable job with great pay and the freedom to work on tough problems without having to worry about funding? Yeah, I'd love to be in that position. If you want the fruits of that labour to not be monopolized, then it needs to be regulated.


I stand corrected and am updating my understanding. Thanks!


One more comment, from someone else: IP does not last forever. Specifically, patents only last for 20 years. So whatever you discover for your employer, the whole of humanity profits from it, just later.

If the alternative is that the same thing does not get discovered at all, or 30 years later, then the tradeoff is good.


It's all about trade-off, aversion to risk, skill sets and so on...


That tax was mentioned exactly once in the article is pretty sad.

The corporate tax rate was over 50% for the period and R&D was a way to offset that.

Today there is no need, since innovations in accounting have led to the extinction of profits in the US.


>"In quantum computing thinking is shifting from a government-funded big science approach to an exploration and exploitation of the more dynamic start-up innovation ecosystem. “We need to try out different things, and use the innovation ecosystem to test, learn and build machines."

This is weird, because it's very clear that the basic science of quantum computing is far from done. Investors can pour as much money as they like into a series of holes in the ground, but if the universe says no then nothing will come of it. We do not know how to solve many of the fundamental problems of scaling a QC; until we do, I think this is really just stupid.

The QC startup scene looks like 50 Theranoses in search of a lawsuit to me; it's going to be 30 years before serious and usable machines appear - if ever.


It's called the explore-exploit tradeoff. If you are King of Spain, you can afford to send out a bunch of ships that will never return. If you are not King of Spain, and want to be King of Spain, then all this wild goose chase stuff makes no sense. It makes more sense to get busy on land exploiting others to become King of Spain.


Problem is when the King of Spain wants to become King of Europe and starts waging war to achieve his higher ambitions. That is what you see with modern companies, Google isn't fine just being the king of Search, Amazon isn't fine just being the king of online shopping etc, everyone wants to dominate ever more areas.


You see it with tin pot dictators too.


The difference between a QC and Theranos is you can't prove a quantum computer isn't actually working as advertised.


I was quite irritated by the debunking of Sycamore's result recently; I had the impression from the PR (and the paper) that they had obtained exact results, but the truth is that these results were approximate. That's totally different. Previously I had dug out of the papers that it takes 48 hrs to boot the machine and that it can only run for a few hours... and that was disappointing enough.

Also the obscuring of the impact of quantum fidelity on the Eagle chip from IBM. OK, it's got 127 qubits, but you end up with a max circuit size of 24, which is similar to the previous 48-qubit generation (I think that one is called dodo or something).

Anyway, it looks like the absolute best spin is put on everything and the actual problems are being ducked - probably because they have to be because they are way beyond the current SOA.


if I got a startup grant for every idea someone can't prove doesn't work I'd be the most well funded individual on the planet


With Shor's algorithm you can: can it break RSA? If yes, QC. If not, not QC.
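(One reason that's a fair benchmark: verifying a claimed factorization is classically trivial, even though producing it isn't. A minimal Python sketch; the function name is mine:)

    # Checking a claimed factorization of an RSA modulus is classically trivial,
    # so a machine claiming to run Shor's algorithm can be verified directly.
    def verify_factorization(n: int, p: int, q: int) -> bool:
        return 1 < p < n and 1 < q < n and p * q == n

    print(verify_factorization(15, 3, 5))   # True
    print(verify_factorization(15, 2, 7))   # False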


What if it's just able to break a particular RSA key after several billion attempts, if you keep it running long enough?


It is likely an Edison scenario: we just need to test thousands of things to find something that is stable enough to produce results. There are so many possible setups that we will probably find something sooner or later. We already have many setups that work but aren't powerful enough to be useful; we just need to find a setup that works a bit better.


Theranos made the mistake of trying to apply the "fake it until you make it" strategy to medical devices, where there just isn't the option to avoid proving your solution is safe and effective before bringing it to market.


I work in Stuttgart on the campus that was once SEL, then Alcatel, and then Nokia. There was also one building with the big letters „Bell Labs“. Sadly they took the letters down about a year ago, since all the buildings are being assimilated by Porsche.


Interesting - I was born in New Jersey in '66, when my dad was working at Bell Labs, but I grew up in Stuttgart. Had no idea there was a "Bell Labs" there.


Small world: 48.83174624272352, 9.147884320080053. The smaller building to the right.


Big fan of Bell Labs, and yet, as a retail consumer of telecom services, the improvement in price, customer service and the product once AT&T was broken up can't be overstated.


My father worked at Bell Labs, with Thompson, Ritchie, Aho, etc. My dad says Bell Labs was the result of the monopoly laws restricting AT&T. The government forced AT&T's prices to be based on its costs from 2 years previously, with a fixed rate of profit. So, prices were something like 104% of its costs from 2 years earlier (or something like that). That meant that the only way AT&T could increase its profits was by innovating to lower its costs. For example, if research lowered its costs by 2% per year, the company's profits would double! Thus, research was a top priority for AT&T, and Bell Labs was well-funded and well-managed.
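(A back-of-the-envelope sketch of that incentive in Python, using the illustrative 104% and 2% figures above rather than AT&T's actual tariff formula:)

    # Toy model of rate-of-return regulation with a two-year lag:
    # the regulated price is 104% of costs from two years earlier.
    cost_two_years_ago = 100.0
    price = 1.04 * cost_two_years_ago                   # fixed by the regulator

    profit_flat = price - cost_two_years_ago            # 4.0 with no cost cuts

    # Cutting costs 2%/year leaves the price unchanged (it tracks the OLD costs),
    # so the savings fall straight to the bottom line.
    cost_today = cost_two_years_ago * (1 - 0.02) ** 2   # 96.04
    profit_with_cuts = price - cost_today               # 7.96, roughly double

    print(profit_flat, profit_with_cuts)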

(BTW, I loved "take your kid to work day" at Bell Labs --- I got to use computers with 5-button mice and play hilarious computer games. I was also given a copy of "the old testament" (the pre-ANSI "The C Programming Language" book) in high school. My dad worked in hardware and really enjoyed his time there helping to design the CRISP and Hobbit processors.)


I might be way off here, but although Bell Labs produced the various transistors we have grown to love, these involved a relatively simple set of variables and a known problem; it can only take so long testing combinations of temperature, acidity, material, etc. to get pointed in the right direction (no offence to these clever men, it was still hard).

Quantum computing is orders of magnitude more complex and requires an enormous number of things to be tried individually and in combination, not using a strip of metal with a wire glued to it on a wooden bench but using enormously expensive and specialist equipment. I can't see how the Mechanical Turk model would work for these types of problems.

That said, I could imagine something like this working perhaps in another arena, like cleaning up dirty drinking water, developing-country toilets, or other engineering problems.


They always keep trying, though: Janelia Farm, now this "renewed" Cancer Moonshot program, and some commercial attempts like Calico and Chan Zuckerberg - each with its own goal but the same idea, that if only they could recreate Bell Labs or the Manhattan Project or the Moon landings...

They all fail at that goal and become just another research institute no better or worse than the entities they purport to replace.

If you ask me why, it's because they forget and make the one cardinal mistake - they hire professors from existing academia or industry. By the time you finish your PhD (let alone the rest of the tenure track), it has been subconsciously indoctrinated in you to only care about YOURSELF. And worse, that the only way to do that is to specialize in a specific topic and then keep convincing everyone that this is the topic that will cure cancer, eradicate Covid, solve the South China Sea crisis and then finally eliminate aging altogether.

In the Manhattan Project, the only qualifications required were that you were a physicist, and a smart one at that. You gave up on your own research and focused on what needed to be done, whatever field it might be. In the cancer moonshot program, every guy they brought in tried to cure cancer with his own tool. They always want to do their own thing. Guess what: nothing has been cured and hundreds of billions have disappeared.

For better or mostly worse, Neuralink seems to be the one place bucking the trend. Worse because they seem to be driven by a guy who really doesn't care about humanity, is of extremely questionable ethics, and what they're trying to do will likely be used for bad things more than good anyway (once it leaves the clinical realm).

If someone gave me $50MM and asked me to cure cancer, I'd probably run around the US top institutions, identify truly smart 1-2 year PhD students, recruit them with insane amounts of money, and try to train them "in the old ways", so to speak.


the manhattan project was a wartime effort; it is not comparable. moreover, if you read feynman's memoirs, it seems there were certainly a lot of people indulging their intellectual interests under the banner of los alamos. (feynman himself, for example, discovered a fondness for tinkering with computers - and he was a physicist)

i've also been told that bell labs' strength came from the opposite of what you speak of: from a culture of intellectual freedom, from a culture of "no committees." they hired the smartest people they could, let them work on what they wanted to, made sure they had every resource they needed, engaged socratically on project progress and otherwise stayed out of the way.

in terms of people using their own tools? that's specialization. if someone spends a lot of years on something, that's what they become an expert in; they will try and apply it, and that's what they should do.

while i certainly believe that science can and should answer the calls of society to make the world a better place, it seems that time and time again this happens through the magic of basic research, where scientists pursue their interests without regard to any of this, ultimately producing discoveries that are sometimes combined and then give rise to practical and translational applications.


The Manhattan Project was successful because there was real pressure to get groundbreaking results quickly. Similar pressures don't exist in academia or industry in normal times. If a company fails, the worst case is generally bounded by the value of the company. People are not that motivated to get results, because the alternative is probably no worse than bankruptcy. There is no looming threat of slavery or death for employees, shareholders, and their friends and family.

Covid vaccines were a recent example of similar pressures. Academia and big pharma – the establishment – managed to create many effective vaccines in record time, because people had real reasons to achieve something.


Interesting thoughts, could you perhaps expand on what you mean by " Train them “in the old ways” " ?


The core of academia is preserving a fairly specific culture for long periods of time - a culture of people who care more about logical argument than social pressure.

In an academic setting the person with the strongest argument generally has a long-term advantage. Compare that to the real world, where entities like churches, political parties & <insert any group> just don't work that way. Many organisations can go for multiple generations with charisma being a superior strategy to argument. In academia they remember the people who went it alone with the most technically correct argument in the face of general opposition (names like Galileo and Socrates spring to mind, along with legions of unpopular and unlikable men who nevertheless earn places of high honour for their contributions to the world's knowledge).

The academics perpetuate that with a series of exclusionary, weird and crufty traditions that frequently date back centuries. It isn't perfect but it has proven an effective culture at grinding out slow productivity gains.


> In an academic setting the person with the strongest argument generally has a long term advantage.

This is a remarkably naive view of how contemporary (and likely, any) academic institution or academia as a whole operates. Do you actually believe this?


You don't have to look very far to discover that they aren't perfect. I do suspect that the academic tradition will fail as the bar to getting a degree lowers. Something like half the population of the US is getting a degree? Some of those students must be pretty average. Pretty challenging to build an exceptional cultural pocket out of average people.

Nevertheless they are much closer to the ideal than pretty much any community you can name. Especially on the scale the academy operates at in time, space and social influence.


Academia is at the vanguard of dismissing meritocracy for politicking. If it were so great at dispassionately pursuing the truth then you wouldn't get things like decades of Alzheimer's research being based on a fraudulent study. There's nothing magical about the academy. They don't hire people of above average moral integrity and they are incentivized to publish attention grabbing findings. The result is they form cliques to protect their meal ticket theories.


You may say that, but there is much more of a backlash against a fraudulent study than against, say, all the people who lied the US into various financially ruinous wars, despite one of those being objectively much worse for more people than the other.

The academy condemns people who publish fraudulent studies. The US political class is mostly at peace with the legacy of a George W. Bush or equivalent. Mild admiration for his political technique. Relatively popular president compared to the last 2. This is a difference of culture.

> There's nothing magical about the academy.

There is nothing magic about anything. Magic isn't an influence over our daily lives. :)

Things are still different from each other in observable ways.


The core of academia is being a good sophist. Every successful professor I met in grad school was a better salesman than they were a scientist, my own advisor included.


By that I just mean to make sure that they don't succumb to the pressures of modern academia (grant agencies and every committee judge your proposal based only on your past expertise in the exact same field). That they need to find the goal they want and have an absolutely open mind about how to reach it. That they also need to be well versed in philosophy (Niels Bohr's philosophy background arguably gave us the atomic bomb). That they should not think small (most academics would swear on their life that there can never be a single cure for all of cancer. I beg to differ). That they should steer clear of the George Costanza academia attitude of "it's not a lie if you truly believe it" that you need to get your grants and papers published. I can keep going, lol.


He's trying to say he has no idea what he's talking about


I'm pretty sure I have a decent idea about this. I've been fascinated by the rot of modern academia and by trying to identify what went right in past successful scientific endeavors. Decades of reading and rumination. If you now want to come and say my PhD is not enough and I need to be a professor to comment on the status quo, then perhaps that's the exact attitude I'm critiquing.


Yeah, an insane amount of money will definitely keep them motivated. Feynman wouldn't have gotten anything done were it not for the huge money the government paid.


Feynman didn't have a ton of other options back in the day. If you're truly smart today, the smartest thing to do is join tech or go into finance. You'll be a millionaire in a few years. If you can't pay the smartest minds competitively to do science, you'll only get people who are not as good or who were stuck with their decisions for a bunch of reasons.


About 15 years ago I had a neighbor who had worked at our local sprawling Bell mothership years before I met him. He was retired at maybe 45, and could talk Fortran until my ears bled.

This was pre-smartphone, but dude did not have Internet access either. The PC at his house, which I managed to spot one day while helping move furniture around, was a nondescript mid-80s "Turbo"-button 486 box. He had a VCR he'd tape NFL games on, but he would wait until they started to press "record."


I never knew anybody who actually programmed their VCR, either. My family was an 'early adopter' and had one of the first VHS models released back in the '80s.

Are you sure you aren't exaggerating? 486s weren't released until 1989. Almost nobody actually had one until the early '90s.


It was 2005 or 06 I saw this computer at his place.


I think the number of good ideas waiting to be discovered has decreased. We aren't gonna get the same rate of innovation.


What's the rationale for thinking that Bell Labs is special?

Reading a general history of science, it's remarkable how often the "inventor" of something is a rich amateur who happened to have the time and money to spend on looking into the next big thing, and often the connections to publicize it.

Like, invent the microscope, then the people who can afford a microscope use it to look at things and discover them.

How do we know Bell Labs isn't the institutional version of that? And if it is, then the modern version would just be to build really expensive machines and give some smart people time to play with them.

edit: looked up the invention of radio telescopy at Bell Labs:

> Karl Jansky made the discovery of the first astronomical radio source serendipitously in the early 1930s. As a newly hired radio engineer with Bell Telephone Laboratories, he was assigned the task to investigate static that might interfere with short wave transatlantic voice transmissions. Using a large directional antenna, Jansky noticed that his analog pen-and-paper recording system kept recording a persistent repeating signal or "hiss" of unknown origin. Since the signal peaked about every 24 hours, Jansky first suspected the source of the interference was the Sun crossing the view of his directional antenna. Continued analysis, however, showed that the source was not following the 24-hour daily cycle of the Sun exactly, but instead repeating on a cycle of 23 hours and 56 minutes. Jansky discussed the puzzling phenomena with his friend, astrophysicist Albert Melvin Skellett, who pointed out that the observed time between the signal peaks was the exact length of a sidereal day; the time it took for "fixed" astronomical objects, such as a star, to pass in front of the antenna every time the Earth rotated.[2] By comparing his observations with optical astronomical maps, Jansky eventually concluded that the radiation source peaked when his antenna was aimed at the densest part of the Milky Way in the constellation of Sagittarius.[3]

edit: similar story with the laser; they were just building a new type of maser, which several people and places were doing at the same time. Bell Labs controversially got the patent though:

https://en.wikipedia.org/wiki/Laser#Maser

> The question of just how to assign credit for inventing the laser remains unresolved by historians


Reading about the history of the transistor, it's truly amazing that AT&T basically just gave the technology away to the world for a single $25k licensing fee. Nominally, they were hoping to benefit from other companies' innovations, but mostly they were worried about protecting their government-sanctioned monopoly and didn't want to be seen as abusing it. That last bit isn't mentioned in most online summaries, but was an actual decision made at the highest levels of the company. So as long as you weren't in a communist nation, you could license the tech, including a nine-day seminar on the details, and another a year later detailing the manufacturing process. [1]

In Europe, there was an independently developed point-contact transistor called the Transistron, created by a couple of German physicists and developed as an amplifier by a Westinghouse subsidiary in France. But the Bell Labs bipolar junction transistor was more advanced and based on a better understanding of the underlying effect. (Shockley was a paranoid asshole, but he was also a very capable scientist.)

Imagine if the US had kept a monopoly on transistor technology for a decade or so: no flood of cheaper Japanese electronics, no competitors for the microchip, etc. It's truly an astounding gift, or a massive blunder, depending on your point of view.

1. https://www.computerhistory.org/siliconengine/bell-labs-lice...


Never forget NJ was the #1 tech hub. Edison was here, Bell Labs was here, and Princeton had Einstein and von Neumann.


https://youtu.be/IFfdnFOiXUU

Half cringe corporate jingle, half mind-blowing list of accomplishments with a chorus punctuated by Nobel prizes.


kind of ridiculous to write this without talking about janelia


>some interdisciplinary problems are difficult to address in existing research settings, and Janelia was built as a separate institution to address such problems in neurobiology.

>The center was designed to emulate the unconstrained and collaborative environments at AT&T Bell Laboratories and Cambridge's Laboratory of Molecular Biology. Researchers are on six-year contracts and fully internally funded, independent of traditional research grant funding.[11]

https://en.wikipedia.org/wiki/Janelia_Research_Campus


janelia is focused on a specific domain. an interesting domain, for sure, but the author is interested in quantum computing and that would be a shoehorn for janelia's mission at best.


Strangely, no one mentions IBM, which holds half the global patents and has R&D labs in almost every country of the world.


Why is Bell Labs owned by Nokia?


AT&T Bell Labs -> Lucent Bell Labs -> Alcatel Bell Labs -> Nokia Bell Labs.

The first step was a corporate breakup, the others were corporate purchases. (I was there during Lucent days.)


International industrial espionage was less of a problem back then.


eeh... the Soviets pretty much wrote the textbook on industrial espionage during the Cold War.


International industrial spies don't seem to be needed today; the USA gladly gave away most of its technology to China for free. Now the highest-end silicon is made and developed in Asia, so why would they spy on the USA?


Rose tinted glasses much?


Want R&D? Oppose monopolies. Apple, Google, Facebook, Amazon, and possibly others all need to be broken up into smithereens.

There seems to be bipartisan support for this, but current leadership is once again distracted buying voters rather than solving hard problems (typical no matter who is in power).


And yet TFA is in large part about how government backed monopolies can and did give rise to significant R&D ....


It's always tough to tell, because we don't have a comparison to anything else. Maybe society would've seen more innovation under the same circumstances without the government-backed monopoly. That you can get X units of innovation even within the government is not controversial, I think. It's whether you can get more than X units with the same input without the government (or a monopolist).



