
Better gate control results in lower leakage compared to fins, and there are probably also some better thermals associated with this approach due to the surrounding metals. However, the density is crap - stackable nanoribbons are likely the better path.

The rest is just atrocious marketing smoke and mirrors from a zombie company that is essentially dead, but somehow manages to linger on as a shell of its former glory.


It’s so sad seeing IBM Research, my dream company back then, become a zombie. That’s probably the normal cycle for some companies.


And on top of that, you can make almost any material lase. There have been entertaining experiments where folks used yogurt, apple juice, hair dye, and other fun liquids in pumped lasers, exploiting some molecular band transition that was accidentally good enough.


In reality, all-optical computing is mostly a terrible idea: fundamentally, it cannot reach the integration density of electronics. It boils down to the elementary differences between Fermions (electrons, neutrons, etc.) and Bosons (photons, etc.). Their intrinsic behavior determines the interaction with matter, i.e. conductive/absorptive properties. As a result, optical wires (waveguides) have to be sized roughly at a wavelength (hundreds of nm), whereas electrical wires can be much smaller (<30nm and below). Suppose you want to build an amplifier: all the claimed speed benefits of this optical device would vanish in the path delay of the feedback loop.

But just like graphene, carbon nanotubes, and other fads, you can publish fancy papers with it.


> all-optical computing

The keyword here is ‘all’. There are some things optical computing is bad at. However there are some things it is unparalleled at. For example, light can multiplex. It can have much lower energy losses. It can run at much higher frequencies. It is by far the best way to transmit information at extremely high data rates. Even within a chip, free space optical communication has massive theoretical potential.

Your comment would have been an excellent one without the last sentence.


But the whole wisdom of the parent comment is in the last sentence. This is mostly what is happening with such papers.

The keyword here might be "all", and there are some applications that optical computing is unparalleled at. But research teams, vendors, and the media spin those things as a replacement for every application, not as some niche thing that's good at some niche applications that most people need not care about...


There seems to be an awfully large amount of projection here from people seemingly just reading the headline and not the article (much less the paper).

Even just the article's sub-title has tempered predictions: "“Optical accelerator” devices could one day soon turbocharge tailored applications"

And the research has immediate practical applications, again per the article:

> "The most surprising finding was that we could trigger the optical switch with the smallest amount of light, a single photon," says study senior author Pavlos Lagoudakis, [..] Lagoudakis says the super-sensitivity of the new optical switch to light suggests it could serve as a light detector that could find use in lidar scanners, such as those finding use in drones and autonomous vehicles.


To me it sounds like the counterargument nulls the wisdom. If you can only make 200nm optical nodes, but they could multiplex 1M signals, you'd win by 100x over 2nm electrical nodes. It will be 100_000x if you add 10THz vs 10GHz difference to the picture.
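
As a sanity check of that arithmetic, here's a minimal sketch in Python; every number is the assumption from the comment above (200nm vs 2nm features, 1M-way multiplexing, 10THz vs 10GHz), not a measurement:

    # Back-of-the-envelope only; all inputs are the comment's assumptions.
    optical_node_nm = 200
    electrical_node_nm = 2
    channels = 1_000_000                                  # hypothetical multiplexing factor

    area_penalty = (optical_node_nm / electrical_node_nm) ** 2   # 10,000x more area per device
    density_adjusted_gain = channels / area_penalty              # ~100x win despite the size
    clock_gain = 10e12 / 10e9                                    # 10 THz vs 10 GHz -> 1,000x
    print(density_adjusted_gain, density_adjusted_gain * clock_gain)   # 100.0 100000.0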


You are right about transmission of data. But the article is about transistors, i.e. processing of data.


(I swear I’m not on Fridman’s payroll.)

As a layperson I found this episode with Jeffrey Shainline an interesting discussion tangential to the topic of optoelectronic computing. The basic gist was that photons are good for communication, electrons are good for compute.

https://youtu.be/EwueqdgIvq4


The specific timestamp for that part of the conversation: https://youtu.be/EwueqdgIvq4?t=2793


I'm not in the domain, but as far as I understand from the video, the counterargument is: "by adding a matter component and creating this kind of 'liquid of light', you add interactions to the system, so in practice light interacts with light more efficiently"

https://youtu.be/Kv25Dw-IuCM?t=675


This makes me wonder if transmitting data optically would help with the gradual lowering of the data/compute ratio over time:

https://sites.utexas.edu/jdm4372/files/2016/11/Slide16.png


We already transmit data optically, that's what fiber optics is.


I'm talking about between chips and RAM of course because that's where the data/compute ratio is shrinking, not between Tokyo and New York.


The hard part is that currently electrical-to-optical conversions take a fair amount of space, which would make them hard to do on a CPU. It might be practical for storage-to-RAM though, which would be really cool.


The flip side is switching speed: optically you can apparently reach THz and more, while heat/capacitance/crosstalk limit electronic transistors, IIRC.


Yes, signal (non-)interference is a big upside to optical communication. Photon streams don't interact even when passing through the same waveguide, so you can superimpose many bits/streams/connections in the same transmission channel at the same time (using varying wavelengths or polarisation), and two optical channels running side-by-side don't exert a magnetic force on each other either.

The main upside for optical processing (photonics) is in signal switching then, as in this case. Having to receive the multitude of optical signals, converting them to electrical, doing the signal routing and processing in the electrical domain, then converting back to optical for transmission is a lot of busywork.
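
For readers unfamiliar with the multiplexing point, a minimal sketch of the principle (NumPy, with made-up kHz tones standing in for optical wavelengths): two channels share one medium and are recovered by frequency separation.

    import numpy as np

    fs = 1e6                                   # sample rate, Hz
    t = np.arange(0, 1e-3, 1 / fs)             # 1 ms of signal
    f1, f2 = 50e3, 120e3                       # two stand-in "carriers"
    shared = np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)

    # Both channels come back out of the shared medium by frequency separation,
    # the same principle WDM applies to optical wavelengths.
    spectrum = np.abs(np.fft.rfft(shared))
    freqs = np.fft.rfftfreq(len(shared), 1 / fs)
    print(freqs[spectrum > 0.3 * spectrum.max()])   # ~[ 50000. 120000.]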


This sounds to me like doctors claiming they know so much about the human body when in reality we are at the infancy of our understanding


We're at an interesting point: there is so much we don't know, but in order to learn more, everything we do must fit into the already immense amount of knowledge we have accumulated so far. In the vast majority of cases this requires that the people who want to nudge the frontier a bit further first dedicate a good portion of their lives to studying what we know, and as a result they sound a bit arrogant when they explain to a layperson that, actually, they know what they're talking about. Yes, in some cases they may be erring on the side of too much confidence, but in many, many cases it is the layperson who doesn't fully grasp the ramifications of the innocent-looking alternatives.


That’s a pretty long way around to what is essentially an appeal to authority. You’re right about the knowledge and devotion required of frontier pushers. History is full of people who challenged this thinking and completely overhauled human understanding of a topic, though, often in the face of relentless ridicule.

The error (in your telling) is equating knowledge with confidence. Knowledge is knowing you might be wrong about it all. The advice to spend one’s life questioning isn’t a smarmy nothing; it’s the only truly sensible approach when you step back and think about it.


It's not actually an appeal to authority as such - authority (usually) implies institutional accreditation, whereas here we are pointing out that the situation is so complex that opinions without years of study are more or less pointless.

Then, when we have two physicists, say, who both know quite well what they are discussing, and the more famous one succeeds through their prestige in ridiculing their less recognized colleague, we are in the "appeal to authority" position.

One famous example is that of Ernst Mach, who was a positivist (i.e. did not respect a theory whose constituents you could not directly measure) and ridiculed Boltzmann's kinetic theory of gases because Mach did not believe in atomic theory (!). Boltzmann's theory was effectively attacked precisely from a position of authority.

So, if a layman and a physicist argue about what is possible, then while both of them may be wrong, the layman very likely does not have any understanding of what their position implies.

So in my opinion, you can have a pathological appeal-to-authority sort of situation only when two equally skilled persons have an argument and the institutional prestige of one of them is used as an appeal on their behalf.


> History is full of people who challenged this thinking and completely overhauled human understanding of a topic

At the risk of argumentum ad logicam, this is a textbook example of survivorship bias. History is also full of people who were adamant they were correct in the face of ridicule, and turned out to be wrong anyway.


Appeal to authority is only a fallacy when the authority is not an actual authority on the exact topic being mentioned.

Your broader point is right; obviously if we stop to poke at certain assumptions, the occasional one will collapse.

However, the pathway you’ve just suggested is less practical than you think. The GP is talking about a systematic, coordinated exploration effort of known unknowns.

Metaphorically - he/she is saying that there's more likely to be gold at the unexplored end of a gold mine, not in the excavated dirt.

It's a fair assumption to keep in practice.


> Knowledge is knowing you might be wrong about it all

that's not a great definition... I know I might be wrong about flying UFOs, but that doesn't count as 'knowledge', does it?


Sure, but this argument could be used for any statement. It's not very compelling.


Are you familiar with physics?

What engineers work with is maybe 1/1000 of our physics knowledge (perhaps 2/1000 for electronics engineers, who need a solid basis in quantum mechanics).

Our physics knowledge is maybe 1/1000 of what we roughly know should be there but cannot be probed (quantum gravity, nonlinear field theories, dark stuff...).

The Universe is so huge that it is pretty much impossible to describe how much bigger than us it is - probably infinitely.

The point is, between the stuff that we know and the stuff that we roughly know but don't really know - we know a lot more than what we can use.

Saying that something is not so useful technologically, as OP stated, is a rather safe statement. We know a lot about fermions and bosons, light and electrons - and we know enough to be able to state when something is overhyped and not as useful as it seems.


It’s not all about computing. It’s about avoiding conversion from electrical to optical signal (and back) at every network node, which is costly.


Don't you need a certain amount of computing at each network node anyways to see what to do and where to send the optical signal next? In additional to error correction/amplifying the signal?


Generally you only need to read the 'header'. If that is little enough computation maybe that can be done optically, gaining the advantage of not needing to convert twice.


Often it might be as simple as routing the right wavelength through the right path, as in WDM systems. Optical amplifiers, such as EDFAs [0], are an interesting thing too.

[0]: http://www.fiber-optical-networking.com/the-application-of-e...


Isn't this just a trade off? Is there never a scenario where you would trade transistor density for switching speed and lower power consumption?


That's indeed done all the time in electronics: for example, RF CMOS usually trails the bleeding edge by three or four node generations.

However, all-optical/photonic computing is just intrinsically so much worse than electronics. On top of the issues that I touched on, there are also other fundamental problems, e.g. distribution of power: photons like to get absorbed by nearby electrons. How do you then supply all the active devices (switches/lasers/etc.) with power while maintaining some semblance of signal integrity and dense integration?


There is a special case: pumped laser amplification of signal in underwater fiber optic cables. That's all optical for the signal path as far as I know.

https://www.laserfocusworld.com/fiber-optics/article/1655109...


Could nonlinear wave interactions be applied in near vacuum, isolated from the lasers, amplifiers and counters? Think 100000*100000 imprecise, lossy tensor/matrix multiplications.


Exactly. While this speed-vs-space trade-off makes little sense in mobiles, it might make perfect sense in industrial settings. Imagine 3D computers the size of a room (Cray-2) but 1000 times faster than any TPU-only cluster.


If you can trade single-core speed for parallelism, a cluster of traditional electronics makes more sense, but for some algorithms you can't.

Imagine a CPU with the complexity of Arduino but running at 100 GHz.


Yes. IIRC, AMD chips have been beating Intel chips for a while now on transistor size, but Intel, even with larger transistors, still has greater density on a chip (maybe that's changed in the latest gen).

Another benefit of lower density is cooling.


You can't directly compare optical and electrical compute through looking at the difference in feature densities. Optical compute will most likely take the form of analog waveforms that contain many bits of information, whereas electronics for computing is inherently binary.


I'm afraid that's not even remotely true. Just two counterexamples:

- MLC flash storage devices use multiple levels to store/retrieve bits [1] (a toy sketch of the idea follows below),

- Lots of control systems are implemented with analog PIDs [2]. A trivial example is a jellybean voltage regulator that computes the adjustments needed to maintain a stable output voltage independent of the load.

[1] https://en.wikipedia.org/wiki/Multi-level_cell

[2] https://control.com/textbook/closed-loop-control/analog-elec...
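
To make the MLC counterexample concrete, here's a toy sketch of 2-bits-per-cell encoding; the voltage levels are invented for illustration, and real controllers add ECC and far more careful sensing:

    # Toy MLC cell: four nominal voltage levels carry two bits per cell.
    LEVELS = {0b00: 0.0, 0b01: 1.0, 0b10: 2.0, 0b11: 3.0}   # volts, illustrative only

    def write_cell(bits: int) -> float:
        return LEVELS[bits]

    def read_cell(voltage: float) -> int:
        # snap to the nearest nominal level; real devices layer ECC on top
        return min(LEVELS, key=lambda b: abs(LEVELS[b] - voltage))

    assert read_cell(write_cell(0b10) + 0.2) == 0b10   # small analog noise is tolerated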


Isn't digital just an abstraction on top of analog anyway? Pretty much all electronics is like that.


That argument doesn't make sense to me.

You can just choose to use light at a smaller wavelength.

Also, lower density by itself doesn't mean lower performance; the larger optical components can just run faster to end up with higher overall performance.


In principle, yes, but:

- shorter-wavelength light is harder to confine within waveguides (or transmissive optics), and messes up atoms when colliding (think of X-rays),

- finding an efficient source at shorter wavelengths is one of the main struggles of the semiconductor industry.


So they were right about computers the size of entire buildings, they were just off by 100 years?


At 1 GHz a photon moves 30 cm per clock cycle in vacuum; at 10 GHz it's only 3 cm, and even less in a fiber cable. I think that's a fundamental restriction on the size of an individual computing module. Of course you can stack modules into entire buildings, just like you can stack CPUs in servers in a data center now.
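
The underlying arithmetic, as a quick sketch:

    c = 299_792_458                        # speed of light in vacuum, m/s
    for f_ghz in (1, 10):
        print(f_ghz, "GHz ->", c / (f_ghz * 1e9), "m per clock cycle")
    # 1 GHz -> ~0.30 m, 10 GHz -> ~0.03 m; roughly 2/3 of that inside fiber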


Perhaps I'm misunderstanding your comment, but the frequency doesn't change the speed of light.


They're talking about wavelengths ("per cycle"). But I'm not sure it makes more sense knowing that, since there's a fundamental disconnect between the signal frequency and the carrier frequency. I think QAM can even be used on a signal rate that's higher than the carrier frequency (as long as the carrier frequency is known), but I'm not 100% sure.


If we take our definition of "individual computing module" to be that it has a defined state during every tick of the clock, then there is a hard physical limit of about 3 cm for a module that runs at 10 GHz. Anything larger must be operating asynchronously, as a distributed system.


The point is that the interconnect between floors 1 and 5 might pose considerable challenges, greatly reducing the potential advantages of having massive building-sized computers.


Not true for plasmonic waveguides which can confine energy well beyond the diffraction limit. But I agree that for now, photonics is just an academic wet dream.


If it is lower power, going 3d with it makes more sense though. Brain structures like synapses are ~2x smaller than UVC wavelengths or so (cubing that, ~10x smaller).


Would it be possible to circumvent this problem with something like squeezed light? (https://en.m.wikipedia.org/wiki/Squeezed_states_of_light)


No - that's just a neat trick to enhance the precision in the measurement of non-commuting observable quantities of interest.


For most things, density is good. However, if you have a certain task that can take advantage of this insane switching frequency, there could be reasons to build a room-sized or multi-room-sized specialized computer. Not everything needs to be tiny for every application.

Also, path delay is not an issue if you have a task that can be pipelined for raw throughput. Latency is less of an issue in such scenarios.

So claiming there is no use for such things seems a stretch. It certainly can have niche uses. The bigger problem with a lot of these papers is that their tech needs to be at least reasonable to manufacture to have niche uses.


I think fermions vs bosons is irrelevant here - you can't build transistors out of neutrons. Sure, photons at these energies are larger, but still can be used for certain tasks, like quantum computers.


Couldn't this perhaps be useful for specialized compute problems that can be represented as a combinatorial system of optical gates/switches? It would be something useful for a specialized subset of problems, sort of like quantum computing.

QC is also not going to replace general purpose electronic computers but augment them for certain classes of problems.


how about for an optical switch? for switching network packets over fibre?


Switching/routing usually requires significant information processing (e.g. decode packet header, match destination address against routing tables, etc.). This necessitates 10k or more gates. All-optical computing can't deliver this level of integration density, nor the performance at reasonable power levels.

Maybe there will be some smart way to pre-encode routing information onto packets to reduce processing requirements, but I doubt that such a network could scale.
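
As a rough illustration of the per-packet work meant here, a small Python sketch with a made-up three-entry routing table (real routers do this in dedicated hardware at line rate):

    # Longest-prefix match: the kind of logic an all-optical switch would have
    # to replicate for every packet header.
    import ipaddress

    routes = {
        ipaddress.ip_network("10.0.0.0/8"): "port1",
        ipaddress.ip_network("10.1.0.0/16"): "port2",
        ipaddress.ip_network("0.0.0.0/0"): "uplink",
    }

    def next_hop(dst: str) -> str:
        addr = ipaddress.ip_address(dst)
        matching = [net for net in routes if addr in net]
        return routes[max(matching, key=lambda n: n.prefixlen)]   # longest prefix wins

    print(next_hop("10.1.2.3"))   # port2
    print(next_hop("8.8.8.8"))    # uplink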


> Maybe there will be some smart way to pre-encode routing information onto packets to reduce processing requirements, but I doubt that such a network could scale.

https://en.wikipedia.org/wiki/Multiprotocol_Label_Switching


Being able to do any kind of computation in the optical domain would benefit telecommunications immensely.

Basic things like optical muxing/demuxing and serialization/deserialization would be fantastic.


I was going to say "But wavelength/polarization multiplexing is the norm", but you have "fiber network design" in your about so I'm wondering what I'm missing - I guess you mean dynamic muxing, essentially routing? SerDes is of course annoying.


Yes, being able to do anything dynamically in the optical domain would improve things.

My main point is that to be able to do even the small stuff in the optical domain would be a big win. You don’t need to be able to achieve L2/L3 switching/routing to move the needle.


That sounds like a derivative of MPLS, where labels are only stripped along the path.


> In reality, all-optical computing is mostly a terrible idea: fundamentally, it cannot reach the integration density of electronics.

It doesn't need this density to be useful or better than electronics in many cases. For instance, photonic quantum computation happens at room temperature, but this doesn't seem like it will be feasible with any other method for long time, if ever.


I don't know anything about the topic, but it does make me wonder. Our problem does not seem to be a lack of transistors to make all manner of specialized single purpose logic. We do see to be stuck when it comes to single core performance. I wonder if a new technology like optical could be used to add a single core accelerator to supplement existing chips.


Well, good thing that the proposed application is about multiplexing/demultiplexing, and not about general computing.

Light has many inherent advantages over electricity for multiplexing/demultiplexing. Also, optical amplification works quite well too, and people use it on every long distance data cable nowadays.


~400nm for waveguides isn’t that big of an issue. Optical computing may be relegated to stuff like DSP, but that’s still a vast market.


But isn't it a fundamentally different type of computation? A type of computation that might be faster even at lower density?


That is an extremely common practice in the industry. Every large chip design house does this to maximize line yield - it is called "binning" [1].

[1] https://en.wikipedia.org/wiki/Product_binning#Semiconductor_...
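
A loose sketch of the binning idea, with made-up thresholds purely for illustration:

    # Sort tested dies into SKUs based on measured speed and working cores.
    def bin_die(max_stable_ghz: float, working_cores: int) -> str:
        if working_cores >= 8 and max_stable_ghz >= 3.5:
            return "flagship SKU"
        if working_cores >= 6:
            return "mid-range SKU"
        return "salvage SKU"              # the "fail bin" sold under another name

    print(bin_die(3.7, 8), "|", bin_die(3.2, 7), "|", bin_die(3.0, 4))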


Of course binning is common, but creating a retail SKU to sell the fail bin from a semi-custom part (Xbox) is always newsworthy.

There was something similar with the previous Xbox generation: https://www.extremetech.com/gaming/316299-amds-a9-9820-8-cor...


So many hyperbolic claims that boil down to a moderate 33% area benefit with very little data about mundane things like long-term reliability, or process variation.

Every time an article mentions some buzzword like 'graphene', it turns out to be just a load of baloney. Graphene can do everything but leave the lab.


Graphene has already left the lab; it's being used in some high-end batteries.


I think graphene gets a bad rap because it's had a very slow ramp-up time due to chicken-and-egg problems in production cost/demand curves, but that's been changing drastically over the last decade. It's come down in cost by multiple orders of magnitude [0], while total sales have been modestly increasing, with projections for massive growth [1]. A steady market cap while marginal costs decrease that much also means orders-of-magnitude increases in shipped product.

It's had a lot of growing pains, but we seem to mostly have passed the inflection point in terms of market viability.

[0] https://www.researchgate.net/figure/Fig-12-Comparison-of-the...

[1] https://www.grandviewresearch.com/industry-analysis/graphene...


No, that's just some marketing verbiage with no real physical correspondence.


> hyperbolic

That's exactly the word I had in my mind when I was reading the article!


Linearity, on-state loss, pole-to-pole isolation, and breakdown will be atrocious. It makes for a nice clickbaity publication though.


That's a great re-implementation of some stuff I did eons ago [0].

BIOS passwords are indeed a complete joke as a means of securing access. There are a bunch of vendors out there who moved the authentication from the BIOS/CPU to the KBC (keyboard controller) - Toshiba and Lenovo are among them. Still, it's ludicrously easy to circumvent these.

[0] https://dogber1.blogspot.com/2009/05/table-of-reverse-engine...


The linked article did not have info about ThinkPads. I wonder how one can skip the BIOS password of a T-series ThinkPad nowadays. So far it has always ended up with a motherboard change for me.


There is a guy in Hungary who offers unlocking services for about 50 EUR. He is a bit difficult to work with (insists on an NDA), but the procedure is as follows: you dump the BIOS via SPI and send it to him. Afterwards you get a patched image back that you flash onto the BIOS chip. Then you boot and you need to enter some numbers he also sends you (I assume some type of copy protection), and it will unlock and reset the BIOS password. After that you just reflash the BIOS image you made earlier.


If you don't mind physically opening the laptop, you open it and take out the CMOS battery, boot it up without it, then shut it down, put the CMOS battery back in, and boot. The BIOS password will no longer be set. I don't know if this still works, but it used to on older laptops.


Modern UEFI uses flash storage, and CMOS is only relevant for the computer's timekeeping (though some UEFIs have a full CMOS to emulate BIOS behaviour).


Doesn't work anymore. The data is no longer stored in CMOS.


Sometimes it is the same chip but not the same range, so it's not cleared ever.


I used this method on an old T520 with success.


For slightly older ThinkPads, the method is to short clock and data lines (SCL and SDA), power on the laptop, and press F1 all at the right moment. This website has the locations for many models, for example the X220: http://www.ja.axxs.net/x220.htm (note that this person is/was selling a device to assist with the process but its use is not required, although, predictably, it doesn't say so on the website).

For newer ThinkPads, there is a method to replace the LenovoTranslateService EFI module with a modified version that passes control to another module, which in turn removes the password. This is supposed to be a paid solution (the module will ask for a code that has to be purchased) but apparently there is a "workaround" for that too.

I might not be up to date as I had no need for any of these and my most recent ThinkPad is an X220 but it's safe to say there is always going to be some solution without having to resort to motherboard replacement.


There is a trick that works for some (most?) models in the T series: if you short the pins of some chip with the right timing, you can bypass the password check. See, for example: https://amp.reddit.com/r/thinkpad/comments/b7jbqq/reset_bios...


I believe you force the BIOS into thinking that the checksums don't match and the EEPROM save is corrupt, so it loads defaults and lets the password go.

Works for straightforward ones like most Lenovos, but not for weirdos like Toshiba. Sometimes I see lots of Toshiba office laptops with locked BIOSes waiting to be recycled as a result.


Older Toshibas have pins near the RAM that can be shorted to clear the passwords. There's a big list here: https://biosbypass.com/how-to-clear-toshiba-bios-password/


I seem to remember the last model it works on is the T420; after that you are out of luck.


I do not understand the proclaimed 3X increase of design cost per finfet node, particularly in the context of digital ICs. Most cell designs are highly repetitive, so the increased design complexity should only add a one-time offset to the total chip cost. Hence, the total design cost should be comparable to previous nodes, i.e. well below 2X. I understand that purely analog designs are a different beast, but it doesn't make any sense to use advanced nodes for these in the first place (ft/fmax drops as a result of higher-than-linearly scaled CGS).


Multi-patterned masks are a big expense in manufacturing, yield, and design/verification. Really, 10nm and below truly need EUV (we used to call it X-ray lithography, but that was already a decade late so they changed the name) to be commercially viable. It brings the mask layers back down to ~60 from ~80 at 10nm... and I assume triple patterning and techniques beyond that at 7nm or 5nm. The mask production and exposure steps balloon without EUV (i.e. sticking with 193nm ArF and crazy high-NA immersion tricks), but right now the light sources for EUV aren't bright enough to put wafers through at a reasonable speed. https://en.wikipedia.org/wiki/Next-generation_lithography

Also, when you need almost zero defects and the costs of low yields are so big, a lot (20-30%?) of your budget is spent modeling and confirming your design. Then there is the design of the SoC itself, which only makes sense to do at such geometries if your level of integration is extreme. Once speed and power stopped scaling with geometry (around 55-28nm), the only reason to keep going was integration and $/transistor, but with higher upfront costs you got more to amortize.

Now don't get me wrong, there's good reason to think that 10nm could be a decent node, and finFETs are cool (low leakage and fast). It's just that I suspect multi-wafer stacking is going to give us a big jump too. We may also leave Si behind. But each of these is really a separate jump rather than an incremental one.

The problem is that the obvious treadmill is over. The returns are less certain, and once the money men figure that out, there won't be $billions to spend every year. That's the real end of the semiconductor age.


The "design" cost quoted mostly means mask cost. It gets very expensive to produce for smaller nodes.


> The "design" cost quoted mostly means mask cost.

No, mask costs are only a very small part of the development and production cost of complex ASICs. Some sources:

1) First graph in http://electroiq.com/insights-from-leading-edge/2014/02/gate...

2) First graph in http://www.eetimes.com/author.asp?doc_id=1322021


That's the recurring cost of the mask set, because they don't last forever. The setup cost of the mask however is very high.

As for logic design effort, Verilog is Verilog. Floor-planning and P&R are a bit more complex at lower nodes, but that's mostly software. Also, TSMC offers you standard cells to pop into your design.

Therefore most of that "design" cost is in the foundry making the first set of masks for you, not the incremental mask cost. They're recouping their investment in the process, thus the price is extremely high for newer nodes.


Do you have a source or some links? I would like to know more about this.


It is mostly based on my experience working as an ASIC designer and having taped out several chips.

However, you can consider it logically -- the engineering effort to design the logic and perform place and route doesn't change much from node to node. You're doing the same work, with the same software, albeit with new libraries provided by your fab.

The cost clearly correlates with smaller nodes because they charge you a ton to do the "setup" for you, i.e. make the first set of masks. Older nodes are now much cheaper than they were because more shops are using them, thus spreading the amortization costs.

AFAIK, a lot of Cortex-M ARM chips (such as the STM32F) are made on 90nm nodes. There, the variation you offer your customer is important, so they want lower mask costs to make as many variants as possible. The core itself is so small that going to 28nm wouldn't offer much savings, because the bulk of the cost is in packaging, testing, and, at 28nm, the amortized mask cost.
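
A toy amortization model of that trade-off; every dollar figure below is invented purely for illustration:

    # Per-chip cost = amortized mask-set NRE + silicon + packaging/test.
    def per_chip_cost(mask_set_nre, volume, wafer_cost, dies_per_wafer, pkg_test=0.30):
        return mask_set_nre / volume + wafer_cost / dies_per_wafer + pkg_test

    # hypothetical mature-node vs advanced-node numbers for a tiny MCU die
    print(per_chip_cost(100_000, 1_000_000, 1_500, 20_000))     # ~0.48 USD
    print(per_chip_cost(2_000_000, 1_000_000, 4_000, 40_000))   # ~2.40 USD
    # the one-time mask cost dominates for a small die on the newer node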


Apple is doing a much better job than all the rest of the PC vendors. Even those vendors that I haven't published keygens for [1] have just stupendously unsound bypass mechanisms for BIOS passwords.

[1] https://dogber1.blogspot.com/2009/05/table-of-reverse-engine...


Nevertheless, physical access still means it's game over; and I consider that a feature, not a bug. Given that, it's actually a little amusing that Apple went to all that effort for something that can be defeated with nothing more than a full BIOS reflash.


Maybe not so amusing! As far as I know, you’ll need to take the machine apart to reflash it, plus special hardware — because when a firmware password is set, a Mac requires the password to choose a different boot disk.

With this feature, Apple HQ can give a service center the ability to clear a particular firmware password without giving them a universal backdoor (hardware or software).


> As far as I know, you’ll need to take the machine apart to reflash it, plus special hardware

This doesn't take very long. Maybe 5 minutes to disassemble the machine.

As for hardware, you can flash SPI chips using a Teensy and a chip clip. [1] The total cost of parts is under $30.

Incidentally, I highly recommend investing in one of these if you're doing firmware development for routers. It's so much easier to flash a backup than muck around with TFTP.

> because when a firmware password is set, a Mac requires the password to choose a different boot disk.

This is hardly unique to Apple. Most PC laptop manufacturers also disable changing the boot device or choosing a temporary boot device when a setup password is enabled.

> with this feature, Apple HQ can give a service center the ability to clear a particular firmware password without giving them a universal backdoor (hardware or software).

Um, this is how it works for PC firmware passwords as well. Unless there is a keygen available, most modern implementations use a hashed value from the serial number or hard drive as the master unlock password. It's unique to the laptop being unlocked.

[1] https://trmm.net/SPI_flash
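
A purely hypothetical sketch of the kind of scheme described above; no real vendor uses exactly this, it only illustrates why a serial-derived master password is weak once the derivation becomes known:

    # Hypothetical only: derive a "master" unlock code from a hash of the serial.
    import hashlib

    def master_code(serial: str) -> str:
        digest = hashlib.sha256(serial.encode()).hexdigest()
        return digest[:8].upper()          # e.g. an 8-character service code

    print(master_code("EXAMPLE-SERIAL-1234"))   # anyone who knows the scheme can reproduce it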


Your list is 7 years old and relates to non-EFI BIOS implementations. It's hardly a valid comparison to a modern Apple BIOS like the one looked at here.


The bypass algorithms have largely remained unchanged when the industry moved to EFI. Most vendors (Lenovo, Dell, Acer, Asus, Toshiba, Fujitsu, ...) simply wrapped their bypass algorithms into some DXE driver and called it a day.


Seeing Compaq at the top of that table brought me all the way back...

Presario FTW!


I wonder what PCs would be like if Intel bought Compaq in 1991 with people like Rod Canion and Jim Harris staying on. I think they were there when Compaq reverse engineered the IBM PC BIOS for example.


No, it won't: on the length scale of a chip, the power efficiency of an electrical-optical-electrical transmission line is an order of magnitude worse than direct electrical transmission, regardless of the actual technology. Also, optical transmission lines are huge in volume in comparison to electrical lines.


The article makes it seem like this technology will completely replace copper, which is obviously untrue. Silicon photonics for use in interconnects in multicores, though, is very promising purely from the perspective of bandwidth. HP and Intel have expressed interest, and there is a lot of academic research at Northwestern and UCSB ECE, for instance. I do not know what energy efficiency figures you were referring to, but emitter efficiency is dependent on the band gap and other properties of the semiconductor and lasing material, so the specific technology used does matter.


But light in free space travels 50% faster than electricity in copper and doesn't incur a heat load for the distance it travels. Likewise it carries more information than an electrical line.

Importantly as a counterpoint to

> the power efficiency of an electrical-optical-electrical transmission line is an order of magnitude worse than direct electrical transmission,

---

> In contrast, semiconductors of main group IV — to which both silicon and germanium belong — can be integrated into the manufacturing process without any major difficulties. Neither element by itself is very efficient as a light source, however. They are classed among the indirect semiconductors. In contrast to direct semiconductors, they emit mostly heat and only a little light when excited.

>The scientists at Jülich’s Peter Grünberg Institute have now for the first time succeeded in creating a “real” direct main group IV semiconductor laser by combining germanium and tin, which is also classed in main group IV.

