The US chip industry starts to wake up to new competitive reality (archive.is)
107 points by yazr on Sept 7, 2018 | 87 comments



As a consumer, I'd rather see them make faster hard drives. I think that would have a much bigger impact than faster CPUs. I remember when I got my MacBook Pro in 2015: the drive was so much faster than five years earlier. I can now boot up in less than 20 seconds, and my IDE opens in less than 4 seconds. Getting those two numbers down to 1 or less would be a much bigger improvement, IMHO.


Faster or lower latency? Right now traditional SSDs offer extremely high bandwidth on sequential transfers (approaching 3.5 GB/s read and 3 GB/s write), but they lack the random 4K IOPS needed to lower perceived latency.

Intel's Optane is the first consumer SSD I've seen with high-performance 4K read/write IOPS not unlike memory, but even if you use that as your boot drive, we are starting to put pressure on other parts of the system. Soon SSDs will have higher performance than their PCIe 3.0 x4 slot allows, and some already need an x8 slot when striping NVMe SSDs.

Take a look at https://www.anandtech.com/show/12951/plextor-demonstrates-4w...
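
For a rough sense of that ceiling (back-of-envelope, using the commonly cited figures): PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, so roughly 8 × 128/130 ÷ 8 ≈ 0.985 GB/s of usable bandwidth per lane, or about 3.9 GB/s for an x4 link before protocol overhead. A drive doing 3.5 GB/s sequential reads is already close to saturating that link.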


> Faster or lower latency?

Yes.


Only if at a cheaper price though!


This is hilarious for people who used to boot their OS instantly from ROM in 1985 ... The main culprits here are OS bloat and inefficient boot procedures, not the performance of the storage system.


To be fair, you did not boot into an OS in those days; you booted into a single-threaded REPL with a library, a “slight” difference...


In 1985, the Atari ST came out and that same year supported booting into its (single-tasking) GUI OS from ROM, quite a step up from the BASIC REPLs I assume you're referencing here.


Indeed O:-) You may have had to wait until as late as April 1989 to have a credibly decent GUI and (cooperative) multi-tasking straight from ROM.

https://en.wikipedia.org/wiki/History_of_RISC_OS#RISC_OS_2


But as a consumer, I can't control for that. I can read specs on SSD speed.

> OS bloat

Results from tradeoffs against other things and is not going away.


Hardware has been getting faster for decades; somehow software always manages to slow it down. I don't think there's a way to win this race without either changing the tradeoffs or building a software culture that actually cares about performance.


As the quotation goes: "What Andy giveth, Bill taketh away."


> But as a consumer, I can't control for that. I can read specs on SSD speed

You have options. I used to have a Windows PC for all tasks (with these annoying issues) and have since moved to an iPad for most tasks, a PS4 for games, and a MacBook for programming (which I admittedly don't do much anymore). SSD speed is not relevant anymore; booting has been replaced by opening/picking up/starting from hibernation.


What stops you from putting your xOS PC in standby/hibernation? I believe you're comparing different situations/needs.


You’re not wrong but I just bought the 2018 MacBook Pro (i9, 6 core) to replace my 2016 TB MBP. Both had an SSD but the new one blows the doors off the old one (video encoding, compiling code).

What I could use now is more memory on or near the CPU, maybe like the iPhone's A7 and later.


the boot times on these new MBPs are crazy. If you look away for just a moment, you can miss the restart entirely and you'll be left wondering if it restarted or not.


That login screen after restart is basically a fancy boot loader screen. After authentication your disk is decrypted and the slow parts start. You will notice that the login screen is somewhat different for restart and sleep.


The new one is probably an NVMe drive attached directly to PCIe, and the other is probably connected over SATA. These days, no two SSDs are the same.


They are both NVMe.

Apple bought a company that developed NVMe SSD controllers and has been producing its own custom controllers for a few years now.

https://www.anandtech.com/show/9136/the-2015-macbook-review/...

The 2018 MacBook Pro is just using a newer high performance version of the controller.

>The T2 is also Apple’s SSD controller, so this means that the MBP is getting a SSD upgrade. Apple is now offering up to 4TB of SSD storage on the 15-inch MBP and 2TB on the 13-inch model. And judging from some of the numbers the 4TB iMac Pro has put up with the same controller, the 15-inch MBP stands to have chart-topping SSD performance.

https://www.anandtech.com/show/13073/apple-updates-macbook-p...


I got a Lenovo Carbon X1. I have to charge it as often as a phone, and smart sleep settings make it seem like it's always on. I can just leave the IDE open, so it loads in 0 seconds.


A company called Micron is developing non-volatile storage to fill the latency gap between PCIe storage and SDRAM. It almost certainly requires new motherboard architecture and might require more than trivial kernel changes, but NVMe tech is already pushing the limits of the north bridge.

In 20 years we'll probably see this in mainstream production. In 40, we might see non-volatile storage replace volatile memory for most use cases except things that need to be volatile, such as symmetric crypto keys.


My 2013 MacBook Air screen recently died.

I just bought a new one off eBay from the same year.

Teenage me would not understand. But I think personal computers peaked years ago.

The reason this is possible is that I have opted out of “modern” development tools and I write all my own code with a minimum of dependencies. If I were using webpack and Docker I would have to buy a 2018 machine.


Do you mean faster SSDs?


I think OP means any non-volatile, 1TB +/- an order of magnitude storage. Most of us don't care what the physical media is as long as it's durable and fast.


On a MacBook, you can pretty easily create a temporary RAM-based virtual disk. Just drop your IDE app in one of those? Could probably make it a script that runs on startup.


They are constantly working on faster and cheaper SSDs.


They're correct; it's been obvious to everyone that physical limits preclude pushing much smaller than we are presently designing without switching to a different method of creating logic devices.


And what would that different method be?


I think if they knew that they'd probably be selling the answer, not giving it away here. It's clear that our current approach to making chips is approaching a fundamental limit; it's not clear that there's an alternative, other than asymptotically scraping increasingly small performance gains as we toe closer to that line.



That's probably the other half of where things will go. General purpose computers eking out minor performance gains, and niche applications continuing to move toward more and more specialized and single-purpose hardware.


I’m sure there are plenty of ways to physically implement logic that don’t rely on mostly-2D silicon transistor chips. Perhaps some of those pathways, once sufficiently developed, will have a higher upper bound on performance.

I think in the near term we will see microfluidic cooling tech developed for more fully 3D chips. The human brain is a pretty good model for what that may look like.


In addition to the other replies... it isn't obvious that there is going to be one. Experts far more versed in this field than I might have theories that could pan out, but I wouldn't know.

Science fiction likes to point to either optical or some method of quantum interaction computing. I'm actually not sure what the basis for those is; it could be that photon-based computers are actually a type of quantum computation system. As I led with, that's stuff I've seen mentioned in fiction.


So basically we will now need to write more efficient software. A novel idea to many!


Reality: we get Electron but this time using Servo


Yeah, this is the most uncomfortable opinion I see with Rust. In general Rust doesn't seem terribly well suited to existing toolkits (as most make heavy use of the OO paradigm). What's left is often very immature (e.g. cursive), and so there tends to be a knee-jerk reaction of: let's build a GUI toolkit on top of Servo. Electron isn't just bad because it's all a mess of JavaScript; it's bad because it's not portable, because the resulting apps are bloated and non-performant, and because they never quite work like native apps.

For all the people who wave the "no true Scotsman" flag at people whinging about the non-native feel of Electron, I'd just like to point at Visual Studio Code. Opening a file vs. a directory on Windows is a hilarious exercise in fermented Emacs-like keystrokes. On a Mac, try using the "Window" menu (it doesn't list the children) or try using the minimized window icons (they all have the very useful title "Code").


> Yeah, this is the most uncomfortable opinion I see with Rust. In general Rust doesn't seem terribly well suited to existing toolkits (as most make heavy use of the OO paradigm).

The Gtk+ bindings are pretty good. Of course, this is not surprising, because Gtk+ was doing OO on top of C.

Gtk+ is not the native widget set on macOS or Windows, but neither is Electron nor 'Servotron' ;).
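
For what it's worth, the classic gtk-rs hello-world looks roughly like this (a sketch assuming the 2018-era 0.x gtk crate; some method names have changed in later releases):

    extern crate gtk;

    use gtk::prelude::*;
    use gtk::{Button, Window, WindowType};

    fn main() {
        // Initialize GTK; bail out if no display is available.
        if gtk::init().is_err() {
            eprintln!("failed to initialize GTK");
            return;
        }

        let window = Window::new(WindowType::Toplevel);
        window.set_title("Hello from gtk-rs");
        window.set_default_size(320, 120);

        let button = Button::new_with_label("Click me");
        button.connect_clicked(|_| println!("clicked"));
        window.add(&button);

        // Quit the main loop when the window is closed.
        window.connect_delete_event(|_, _| {
            gtk::main_quit();
            gtk::Inhibit(false)
        });

        window.show_all();
        gtk::main();
    }

Not native widgets on macOS or Windows, as noted, but at least the signal handlers are plain Rust closures rather than C callbacks.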


Yeah, I forgot (intentionally?) about the Gtk bindings. I've never been much of a fan of Gtk (OO on C seems... wrong, among other quibbles), but yeah, the Gtk bindings occupy that better-than-nothing space but aren't something I'm hugely enthusiastic about. They're not a multi-process client-server mess like Electron, but they're also just wrappers around a big C library.


> Electron isn't just bad because it's all a mess of javascript, it's bad because it's not portable

Electron runs on (at least) Linux, Windows, and Mac OS. That may not be perfect, but I think it's hard to brand that as "not portable".


> Electron runs on (at least) Linux, Windows, and Mac OS.

Take a look at the issues opened for porting Visual Studio Code to not-Linux/not-MacOS/not-Windows. It's non-trivial in large part because Electron is a mess and not officially supported beyond Linux/Mac/Windows.


Are other toolkits easier to port to unsupported platforms?


I'd assume so, based on Gtk, Qt, WxWidgets, and Curses all supporting more platforms (e.g. BSDs). Certainly qmake is leagues ahead of the messes that Electron and VS Code use.

Edit: None of this is helped by the absolutely shittacular support that Node has for non-Linux unices. Multi-platform support is one of those things that Rust really gets right.


I think a better analogy would be something on the backend. Front ends may have gotten slow, but that has been exacerbated by Intel sitting on 4-core consumer chips for a decade now. However, AMD finally came along and shook that up with Threadripper, and now you can get 8 times the number of cores from AMD, or 50% more from Intel. I think we still have quite a ways to go on the consumer side.


I feel like I could do essentially all computing that I do now, except surfing the Internet at broadband speeds, on a machine with 64MB RAM and a Pentium II. IO has gotten a lot faster and hard disks are a lot bigger, but the things I do still work at about the same speed most of the time.


Rust came just in time. A low-level language like those of old, but without the undefined behaviour and terrible tooling.


Without undefined behavior, the compiler has to be really conservative, and for some platforms it is going to be really slow.

For example, if you force C to have defined overflow semantics, the code you'll get for a CPU which does not behave in the same way will use an order of magnitude more instructions for that purpose.


I am no expert on Rust, but I think targeted, optimized algorithms are what unsafe blocks are for. That way you can profile and choose while programming and debugging.


Rust actually gives you access to all the overflow behaviors you could want. So if you know that your target CPU has saturating overflow, you use https://doc.rust-lang.org/std/primitive.i32.html#method.satu... or wrapping_add, or overflowing_add, or checked_add. The syntax isn't as nice as plain + though.
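
A quick sketch of what those look like at the call site (all standard-library methods on the integer types):

    fn main() {
        let a: i32 = i32::MAX;

        // Each method spells out the overflow behavior explicitly.
        assert_eq!(a.checked_add(1), None);                 // overflow reported as None
        assert_eq!(a.saturating_add(1), i32::MAX);          // clamps at the maximum
        assert_eq!(a.wrapping_add(1), i32::MIN);            // two's-complement wraparound
        assert_eq!(a.overflowing_add(1), (i32::MIN, true)); // value plus an overflow flag
    }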


Is there a way to specify that I don't care? That's exactly what C undefined behavior is for.


I'd take predictable slow software on some platforms over unpredictable fast software any day.


In Rust, integer overflow is defined behavior: by default it panics in debug builds and wraps in release builds, and two's-complement wrapping is how the hardware works on essentially every architecture Rust supports.
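
If you want wrapping regardless of build profile, there's also the Wrapping newtype, roughly:

    use std::num::Wrapping;

    fn main() {
        // Wrapping<T> makes the normal operators wrap in debug and release builds alike.
        let x = Wrapping(u8::MAX);
        assert_eq!((x + Wrapping(1)).0, 0); // 255 + 1 wraps to 0
    }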


Say what you will about the undefined behavior, but C has great tooling :-).


There is in fact so much tooling for C that every project can use a different tool for managing dependencies.


Especially sanitizers, which handle memory corruption like nothing else.


Data scientist and lover of Python & Julia here. I'm looking forward to learning Rust.


Having come from Julia, I rather like Elixir. Rust seems like it's great if you need control over everything at a low level (great for browsers, operating systems). If you want a high-level but still very performant[0] FP language with very nice guardrails, Elixir fits the bill: I wrote a complete scheduler for data science tasks, from nothing, in 3 months.

[0] a lot of people seem to think that performance is everything but actually uptime and reliability are often more important.


With Elixir, can I set up a queue of separate programs written in other languages (Python/TensorFlow) such that when one task is finished, another can be started right away, or have a programmatic schedule? Essentially, can Elixir act as the scheduler for various tasks, or even spin up AWS instances through its scheduling?


not out of the box, but that's basically what I wrote.


How does one connect Elixir to the various other programs? I'm not too familiar with it, just curious.


I use Singularity containers! That lets me define and use a "unified shell-out" protocol. It's pretty sweet. If you have any questions, drop a line.


I came the other way around and tried some Erlang before settling on Julia, but if your focus is on scheduling data science tasks, I can see why you used Elixir.


You definitely don't want to be using a BEAM language for numeric computation. On the other hand, if you try to write a web application in Julia, you have no uptime guarantees. I think Elixir/Julia would make a really sweet combination for certain things, if Julia could easily talk to BEAM instances by presenting itself as a BEAM node (to do RPCs from Elixir to Julia).


Yeah, I can imagine. That would maybe also not be too difficult, as Julia apparently could ccall the erl_interface C library: http://erlang.org/doc/apps/erl_interface/ei_users_guide.html


With Elixir, can I set up a queue of separate programs written in other languages (Python/TensorFlow) such that when one task is finished, another can be started right away, or have a programmatic schedule? Essentially, can Elixir act as the scheduler for various tasks, or even spin up AWS instances through its scheduling?


You should ask the poster above me.


Indeed I shall


> but without the undefined behaviour

This line of comments is rather silly, as it tries to compare implementations with a language specification described through international standards. Apples and oranges. Mozilla's Rust implementation has no undefined behavior because it isn't even defined in any standard document, nor are there many competing commercial teams developing Rust implementations independently of each other. Complaining that C has undefined behavior but Rust doesn't makes as much sense as complaining that C has undefined behavior but MS Visual C 2015 doesn't.


> Mozilla's rust implementation has no undefined behavior because it isn't even defined in any standard document nor are there many competing commercial teams developing rust implementations independently of each other.

This is not true. “No UB in safe Rust” is a guiding principle of the language design.

(And there is a second implementation of everything but the borrow checker.)

MS Visual C absolutely has UB.


I thought this was going to be about competition with China.


It sounds like it's going to be exciting times in the field of chip design.


Latching sort-nets.


This is a paywall bypass for a Financial Times story. I think that should be mentioned in the title.


Thanks, I had to wonder why this was on archive.is.


"Software-defined hardware"

We can only hope that this term never becomes a buzzword.


Isn't that just an FPGA?


Nah, FPGAs are programmable hardware, it's a subtle but important difference.


Yeah, I'm not 100% sure what the difference is; I'd love an elaboration. The languages you program FPGAs in are typically in the class of hardware description languages (HDLs), and IMO description and definition are interchangeable in this context.


Yeah, it's subtle but let me see if I can tease it out.

When you're talking about software, you're talking about issuing a series of commands that execute in certain ways to produce a desired result. While you may have threading to give the appearance of concurrency (or actual discrete cores, etc.), it's still largely a serial operation (with pipelining under the hood, but largely not exposed at a software level).

In contrast, FPGAs, CPLDs and the like use, as you said, Hardware Description Languages. These languages are not about issuing a series of actions but about describing a blueprint for the construction of physical hardware units (via SRAM LUTs, ASIC lithography, or the like). Many things that are common and expected as part of software (looping, blocking, etc.) are either anti-patterns or not available at all.

Asking someone who's familiar with software to write HDL is like asking a Civil Engineer to do the job of a Mechanical Engineer in designing the drivetrain of a car. Both are professional engineers but they have vastly different skills.

I've spent a fair amount of time in both worlds (although more on the software side), and learning FPGAs was like hitting a brick wall for ~3 months until you realize you have to unlearn all you know about software and build up an understanding from the hardware side first. It's also probably why firmware engineers tend to write so much copy/paste code and software engineers tend to write code that uses an order of magnitude more resources than they actually need.


> It's also probably why firmware engineers tend to write so much copy/paste code

This always seemed like a limitation of the commonly used HDLs. Both Verilog and VHDL are extremely verbose and do not really allow a lot of abstraction. Even stuff like "I want this module to have a $common_interface" is hard to impossible without just copy-pasting the definitions (sure, there are C macros) every time.


Oh yeah, totally. I think it's also a byproduct of how they synthesize. Extra abstractions in software have relatively low cost, whereas they can be really expensive in HW.


SystemVerilog interfaces solve this. A lot of the publicly available code is just riddled with bad practices, just as is the case with C.


Try Chisel. Chisel solves this.


The difference you describe IMO is between declarative and imperative programming languages, though (Prolog comes close for the desktop world), and the fact that one usually targets wires and one targets assembly is irrelevant, as you can use either for either purpose. Your reply covers which is used most commonly for which purpose. Where does SystemC fit in this model, out of curiosity?


Nah, it's deeper than that. Assembly is still just a series of commands with desired effects. HDLs are a description of physical hardware (minus IP blocks, which are vendor specific). You're describing things down to the actual gate level.

Stuff like sync vs. async latches, global resets, and exact cycle counts are things that are critical in HDLs but don't matter much in software these days.

For SystemC, I'm not a huge fan; the types of things you want to describe tend to get bolted on, and many constructs just aren't synthesizable. There's a huge impedance mismatch between the two.


The same is true of languages like Prolog on the desktop. You don't express how you want the software to work (imperative) but rather what you want it to do (declarative). The fact that one synthesizes hardware and one assembly instructions isn't particularly significant; software is software! :) It's just that historically declarative languages have mapped more naturally onto FPGAs.

I definitely agree with you that there's more to consider when writing for an FPGA, but not much more so than, let's say, writing for a microcontroller where cycles (or at least timing) matter. Or anything hard real-time, for that matter.


HDLs are not software. There is no generated instruction sequence. The end result can just as readily be an ASIC which is most definitely not "software defined".


There absolutely can be a generated instruction sequence; that just depends on the compiler. That's what GHDL does, if I'm not mistaken. In fact, "by using a code generator (llvm, GCC or a builtin one), GHDL is much faster than any interpreted simulator." [1]

Theoretically I could write any web service in a synthesizable subset of VHDL. Similarly, you could, in theory, compile down any C program into an FPGA bitstream, as both HDLs and C are Turing complete. It may not function efficiently, as the right considerations probably haven't been made in your C code to make it easy to do well. I could even turn my C program into an ASIC, but that's not the point. The fact that FPGAs are reprogrammable and their functionality is defined in code means they are as much software-defined as any microcontroller. That definition IMO doesn't stretch to ASICs, as they are not reprogrammable, and the resulting HDL goes through a huge pipeline of vendor-specific tools to turn the result into silicon.

This is my opinion of course, and I would argue that an FPGA bitstream is equivalent to a ROM for a microcontroller: software that defines the digital electrical characteristics in response to stimuli. It seems that if the latter is defined by software, so is the former.

[1] http://ghdl.free.fr


Is programming not defining something with software - encoding physical actions (math) as wordy descriptions (art)?



