IBM scientists demonstrate 10x faster large-scale machine learning using GPUs (ibm.com)
236 points by brisance on Dec 7, 2017 | 40 comments



> We can see that the scheme that uses sequential batching actually performs worse than the CPU alone, whereas the new approach using DuHL achieves a 10× speed-up over the CPU.

I had to get down to the graph to realize they're talking about SVM, not deep learning.

This could be pretty cool. Training an SVM has usually been "load ALL the data and go", and sequential implementations are almost non-existent. Even if this ran at 1x or even 0.5x speed, not requiring the entire dataset at once would be a big win.
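(For anyone wondering what "sequential" could look like in practice, here's a minimal sketch using scikit-learn's SGDClassifier with hinge loss, i.e. a linear SVM trained by SGD, so only one chunk of data has to be in memory at a time. The chunk files and sizes are made up for illustration.)

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    # Linear SVM via SGD: hinge loss + L2 penalty. Only one chunk of the
    # dataset has to be resident in memory at any given time.
    clf = SGDClassifier(loss="hinge", alpha=1e-4)
    classes = np.array([0, 1])  # all labels must be declared for partial_fit

    def load_chunk(i):
        # hypothetical loader: returns (X, y) for the i-th chunk on disk
        return np.load(f"chunk_{i}_X.npy"), np.load(f"chunk_{i}_y.npy")

    for epoch in range(3):
        for i in range(100):            # 100 chunks; numbers are arbitrary
            X, y = load_chunk(i)
            clf.partial_fit(X, y, classes=classes)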


>I had to get down to the graph to realize they're talking about SVM, not deep learning.

There's still a ton of usage for classical learning algorithms. I'd be a very happy camper if we could speed SVMs up by an order of magnitude.


> for classical learning algorithms

Indeed, for relatively "simple" models, SVMs can get very, very close to deep learning accuracy for classification, with only a fraction of the computing time needed.


Not to mention the 'tweaking' required.


I know of two projects (fairly simple implementations) of sequential SVMs. I believe Vowpal Wabbit can also do max-margin optimization. (The core update they all build on is sketched below the links.)

http://leon.bottou.org/projects/lasvm

http://leon.bottou.org/projects/sgd
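(Roughly, the heart of the sgd project above is just the hinge-loss stochastic subgradient update; a hedged numpy sketch of that update, with a Pegasos-style step size and arbitrary constants:)

    import numpy as np

    def svm_sgd(stream, dim, lam=1e-4):
        # Stochastic subgradient descent on the primal SVM objective:
        #   lam/2 * ||w||^2 + mean(max(0, 1 - y * <w, x>))
        w = np.zeros(dim)
        for t, (x, y) in enumerate(stream, start=1):  # y in {-1, +1}
            eta = 1.0 / (lam * t)         # Pegasos-style step size
            violated = y * w.dot(x) < 1   # point inside the margin?
            w *= (1 - eta * lam)          # shrink (regularization term)
            if violated:
                w += eta * y * x          # hinge-loss subgradient step
        return w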


Yes, I felt cheated when I read it was about training an SVM on 1/10th of ImageNet. I guess IBM are desperate not to be left behind in the race for distributed deep learning platforms.


To be honest, I'd readily cheer any groups working on traditional machine learning advancements despite all the current hype for neural methods.


I'll second that. For all the attention DL/ANNs get... there's still a lot of legwork going on out there using linear models, basic trees, etc. IIRC this year's Kaggle survey ranked Logistic Regression as the #1 most used model by a long shot.


Neural networks are stacked logistic regressions. A lot of the deep learning research benefits logistic regression.


You say that as if it's empty hype, but it's not: deep learning works, and works much better by any reasonable metric than SVMs on most problems that require high to very high model capacity.


Yeah, they get you really hyped up first and drop the bomb at the end. Still quite an impressive speedup; it would be better, though, to show it on a benchmark where SVMs are used in practice.


Looking at the normal stuff coming out of IBM, they're not associated with good software in my mind. So the more outrageous their claim, the less I believe them. They need to earn a reputation first.


That's like judging all of Google based on the quality of one product. With several times as many employees as Google and a very loose organization, expecting any kind of consistent reputation is folly.

A product from an IBM consultant is about as related to a product from IBM Watson as a product from Microsoft is to a product from Apple.


IBM has 400k employees and god knows how many subsidiaries and divisions. Do you really think you can paint them all with one brush because of some negative experience you had with one of their products?


Not one; we have multiple IBM software products at my company, and they're consistently the most terrible software you can imagine.

Sure they might have some divisions that do better, but I have yet to see them.


I'd love to see more details.

Ultimately, it seems like IBM has managed to make a generalized gather/scatter operation over large datasets in this particular task. Yes, this is an "old problem", but at the same time, it's the kind of 'engineering advancement' that definitely deserves attention. Any engineer who cares about performance will want to know about memory optimization techniques.

As CPUs (and GPUs! And Tensors, and FPGAs, and whatever other accelerators come out) get faster and faster, the memory-layout problem becomes more and more important. CPUs / GPUs / etc. etc. are all getting way faster than RAM, and RAM simply isn't keeping up anymore.

A methodology to "properly" access memory sequentially has broad applicability at EVERY level of the CPU or GPU cache.

From Main Memory to L3, L3 to L2, L2 to L1. The only place this "serialization" method won't apply is in register space.

The "machine learning" buzzword is getting annoying IMO, but there's likely a very useful thing to talk about here. I for one am excited to see the full talk.


They buried it, but their NIPS 2017 paper is linked in the article.

https://arxiv.org/abs/1708.05357


Thanks.

It does seem specific to machine learning / Tensors. But that's still cool. I'll have to sit down and grok the paper more carefully to fully understand what they're doing.


This is pretty fascinating! The concept seems to work only for convex problems (in particular, problems with strong duality, which excludes NNs almost entirely, except one-layer nets), but the application is nice and straightforward.

I wonder if a similar lower bound can be constructed for non-convex problems that retains enough properties for this method to be useful?
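For reference, the certificate that convexity buys you here: for any primal iterate w and any dual-feasible alpha, weak duality gives a computable bound on suboptimality,

    P(w) - P(w^*) \le P(w) - D(\alpha),

since D(\alpha) \le P(w^*) by weak duality. As I read the paper, DuHL splits this gap into per-coordinate contributions and uses those as importance scores for deciding which data to keep on the GPU. Non-convex losses (i.e. deep nets) don't come with such a cheap, computable certificate, which is presumably why the trick doesn't transfer directly.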


How about doing the same on an 8- or 16-core CPU, which can have much more than 16 GB of memory and where it's not as expensive to move data around its own memory?


Roughly 1000x slower? GPUs nowadays have 5000+ "cores" inside.


That's the point. On the GPU side they use all 5000+ cores to parallelize the algorithm (they use the hardware to its full potential). On the CPU side they use just one core (at least there is no mention of how many cores were used on the CPU). It's like saying a Camry beat a Ferrari in maximum speed without mentioning that the Ferrari was only in first gear for that specific race.


> they use the hardware to its full potential

If only! In fact it's a struggle to utilize a GPU to its full potential because the communication bottleneck makes it infeasible. Compute is fast but data can't get there fast enough.

The authors of this paper were saying the same thing in the promo video; in fact, they were working on making GPUs more efficient. Why would they do that if GPUs were already being used to their "full potential"?


> Roughly 1000x slower?

Not really. A modern Coffee Lake i7 has several distinct advantages over GPUs. (AMD Ryzen also has similar advantages, but I'm gonna focus on Coffee Lake)

1. AVX2 (256-bit SIMD): for 32-bit ints / floats that's 8 operations per cycle. AVX-512 exists (16 operations per cycle) but it's only on server architectures. Also, AVX-512 has... issues... with the superscalar point #2 below. So I'm assuming AVX2 / 256-bit SIMD.

2. Superscalar execution: Every Skylake i7 (and Coffee Lake by extension) has THREE AVX ports (Port 0, Port 1, and Port 5). We're now up to 24 operations per cycle in fully optimized code... although Skylake AVX2 can only do 16 fused multiply-adds at a time per core.

3. Intel machines run at 4GHz or so, maybe 3GHz for some of the really high core-count models. GPUs only run at 1.6GHz or so. This effectively gives a 2x to 2.5x multiplier.

So realistically, an Intel Coffee Lake core at full speed is roughly equivalent to 32 GPU "cores" (8x from AVX2 SIMD, 2x or 3x from superscalar execution, and 2x from clock speed). If we compare like with like, a $1000 Nvidia Titan X (Pascal) has 3584 cores, while a $1000 Intel i9-7900X Skylake has 10 CPU cores (each of which can perform as well as 32 Nvidia cores in fused multiply-add FLOPs).

The i9-7900X Skylake is maybe 10x slower than an Nvidia Titan X when both are pushed to their limits. At least, on paper.

And remember: CPUs can "act" like a GPU by using SIMD instructions such as AVX2. GPUs cannot act like a CPU with regards to latency-bound tasks. So the CPU / GPU split is way closer than what most people would expect.

-------------

A major advantage GPUs have is their "shared" memory (in CUDA) or "LDS" memory (in OpenCL). CPUs have a rough equivalent in L1 cache, but GPUs also have L1 cache to work with. Based on what I've seen, GPU "cores" can all access shared / LDS memory every clock (if optimized perfectly: perfectly coalesced accesses across memory channels and whatever. Not easy to do, but it's possible).

But Intel Cores can only do ~2 accesses per clock to their L1 cache.

GPUs can execute atomic operations on the Shared / LDS memory extremely efficiently. So coordination and synchronization of "threads", as well as memory-movements to-and-from this shared region is significantly faster than anything the CPU can hope to accomplish.

A second major advantage is that GPUs often use GDDR5 or GDDR5X (or even HBM), which is superior main memory. The Titan X has 480 GB/s (that's "big" B, bytes) of main memory bandwidth.

A quad-channel i9-7900X Skylake will only get ~82 GB/s when equipped with 4x DDR4-3200 RAM.

GPUs have a memory advantage that CPUs cannot hope to beat, and IMO that's where their major practicality lies. The GPU architecture has a way harder memory model to program for, but it's way more efficient to execute.
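(Back-of-envelope version of the above; these are nominal spec-sheet numbers, all approximate, and real code won't hit them:)

    # ~$1000 CPU: 10 cores, ~3.3 GHz under AVX load,
    # 2 FMA ports x 8 fp32 lanes x 2 flops (mul + add) per FMA
    cpu_flops = 10 * 3.3e9 * 2 * 8 * 2   # ~1.1 TFLOP/s fp32
    cpu_bw    = 4 * 3200e6 * 8           # quad-channel DDR4-3200: ~102 GB/s
                                         # (closer to ~80 GB/s sustained)

    # ~$1000 Titan X (Pascal): 3584 cores, ~1.4 GHz, 2 flops/core/cycle (FMA)
    gpu_flops = 3584 * 1.4e9 * 2         # ~10 TFLOP/s fp32
    gpu_bw    = 480e9                    # GDDR5X: ~480 GB/s

    print(gpu_flops / cpu_flops)         # ~10x compute, not 1000x
    print(gpu_bw / cpu_bw)               # ~5x memory bandwidth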


Very good analysis, and a correct conclusion that memory bandwidth is the bottleneck (at least for matrix fused multiply-add intensive workloads like feedforward NNs and convnets). We have done experiments on the 1080 Ti (484 GB/s), and for 32-bit FP training (convnets on TensorFlow) it is close in performance to the P100 (717 GB/s).

The other point to add is that the SIMD style of execution on GPUs is what gives them efficient batched reads from GPU memory for each operation.


Thanks.

I can't say I'm an expert yet. But the more I read about highly optimized code on any platform, the more I realize that 90% of the problem is dealing with memory.

Virtually every optimization guide or highly-optimized code tutorial spends an enormous amount of time discussing memory problems. It seems like memory bandwidth is the singular thing that HPC coders think about the most.


It's worth noting that this GPU RAM advantage is usually coupled with a PCIe bus disadvantage, which means that you need to be able to hold a complete working set of data in the GPU long enough to really benefit from the extra bandwidth and horsepower.

If you don't have enough computations-per-byte to perform on the GPU, you will find your total job time starts to be dominated by the time it takes to stage data in and out of the GPU, without being able to keep the GPU cores busy. Even if the CPU is 5-10x slower according to issue rates and RAM bandwidth, it can keep calculating steadily with a higher duty cycle since system RAM can be much larger.

However, the CPU also benefits from locality, so you should still prefer to structure your work into block-decomposed work units if possible. A decomposition which allows you to work through a large problem as a series of sub-problems sized for a modest GPU RAM area will also let the sub-problem rise higher in the CPU cache hierarchy to get more effective throughput. However, if the decomposition adds too much sequential overhead for marshalling or final reduction of results, it may not help versus a monolithic algorithm with reasonably good vectorization/streaming access to the full data.
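(A crude model of that trade-off; the bandwidth and throughput numbers below are illustrative placeholders, e.g. PCIe 3.0 x16 sustains roughly 12 GB/s in practice:)

    def worth_offloading(bytes_moved, flops,
                         pcie_bw=12e9, gpu_flops=10e12, cpu_flops=1e12):
        # Crude model: GPU time = PCIe transfer + device compute,
        # CPU time = compute only (the data is already in system RAM).
        gpu_time = bytes_moved / pcie_bw + flops / gpu_flops
        cpu_time = flops / cpu_flops
        return gpu_time < cpu_time

    worth_offloading(1e9, 10e9)    # False: low flops/byte, transfer dominates
    worth_offloading(1e9, 10e12)   # True: enough compute per byte to amortize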


The i9-7900X seems like a rather strange CPU to compare against a video card. Why not an Intel Xeon with 50% more memory bandwidth? Or an AMD Epyc with 100% more bandwidth? It's not hard to get 2-3x the cores (in a single socket) and double the bandwidth/cores with a dual socket.

That way you get pretty good memory bandwidth, can directly access much more RAM (1 TB easily), and you can run a wide variety of codes (not just GPU codes).

Sure, the Titan X is great if your code A) doesn't communicate, B) fits entirely in GPU memory, and C) runs on CUDA. Of course, the real world often intrudes with PCIe latency and memory limitations.

Not saying GPUs don't have their place, but it's easy to overstate their usefulness.


I picked two $1000 components from memory. I recognize that there are other choices out there, but $1000 is a nice round number, and I honestly don't know the market well enough anymore to pick another price point.

If you know the name of a Skylake Xeon server part that is roughly $1000 (and therefore comparable to a Titan X in MSRP), along with its memory capacity, you are welcome to rerun the analysis yourself.

I can't do that because I don't know the capabilities of the Xeon Skylake servers from memory, nor their prices. And I'm certainly not going to spend 30 minutes googling this information for other people's sake.

What I will say is that the i9-7900X is a Skylake-server part with AVX-512 support and quad-channel memory. That's way stronger than a typical desktop. And I think assuming quad-channel DDR4-3200 is pretty fair, all else considered.


Both chips have similar (within an order of magnitude) die areas, frequencies, power dissipations, and external pin bandwidth.

If the GPU were truly 1000x more efficient than the CPU, then the CPU vendor could just take 1/1000th of a GPU and squeeze it onto their own chip to double their performance.

(In a sense the trend since the late 90's has been to do exactly this via vector extensions.)


That's wrong by orders of magnitude. The actual speedup of GPUs is about 8x. Those GPU cores are much weaker than CPU cores.

The paper under discussion here reports a 10x speedup for GPU vs. CPU.


SVMs have better generalization possibilities than NNs, so this is neat.


How do I use this to mine bitcoin? K thanks.


tldr: They made a caching algorithm.

The article was touched by the PR dept, but it still has actual information.

longer tldr:

They did the same thing that has been done for thousands of years. Back then, the hot area of research was how to stage food and resource caches in advance along a route for long journeys. They came up with algorithms to optimize cache hits.

In this case, the problem is that GPUs can be fast for ML, but usually have only 16 GB of RAM while the dataset can be terabytes.

Simple chunked processing would seem to solve the problem, but it turns out the overhead of CPU/GPU transfers badly degrades performance.

Their claim here is that they can determine on the fly how important different samples are, and make sure samples that yield better results are in the cache more often than those with less importance.
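(Very roughly, a sketch of that kind of loop; this is the general idea only, not IBM's actual DuHL code, and every helper below is a placeholder. In the paper the importance score is, as far as I can tell, a per-sample duality gap:)

    import numpy as np

    def train_with_small_gpu_memory(X, y, gpu_capacity, rounds,
                                    importance, train_on_gpu, init_model):
        # X, y live in large host memory; only `gpu_capacity` samples fit
        # on the accelerator at once.
        model = init_model()
        for _ in range(rounds):
            scores = importance(model, X, y)           # how useful is each sample now?
            keep = np.argsort(scores)[-gpu_capacity:]  # most informative subset
            model = train_on_gpu(model, X[keep], y[keep])
            # while the GPU works on this subset, the host can already be
            # re-scoring and staging the next one (overlapping the transfer)
        return model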


> Their claim here is that they can determine on the fly how important different samples are, and make sure samples that yield better results are in the cache more often than those with less importance.

Isn't that the exact same idea as in active learning?


I don't think you need active learning to get results like this, just decent statistical analysis. There are parallels here with distributed query planning.


Could you please not post snarky dismissals of other people's work? I realize that PR-filtered bigco tech articles aren't the greatest medium. But when you hand-wave this back to "the same thing that has been done for thousands of years", that's the kind of cheap internet discourse that degrades and ultimately destroys a site like HN, which is trying for something at least a little better.

https://news.ycombinator.com/newsguidelines.html

https://news.ycombinator.com/newswelcome.html


To be fair - most innovation boils down to this kind of incremental stuff. 99% of 'tech' is an amalgamation of more basic ideas, not 'magic leap' kind of innovation.

I mean, we all love the magic, but I think we're getting spoiled as of late with all the magic AI/Deep Learning stuff coming out.


I agree with your first statement to the point of disagreeing with your second. I.e., even the magic stuff is just incremental progress that people were not paying attention to. (Self-driving cars have been wowing people since the 90s, object recognition just got incrementally better every year, etc.)


Oh certainly, I did not intend to be dismissive of their work.

My goal in a tldr is only to minimize the number of seconds it takes to digest some essential concept.

I wish that for every article here someone would write up a one-sentence tldr and a one-paragraph tldr+, to help us track more happenings in our heads at once and to help us choose the ones we decide to spend our deep reading time on.

But of course your point is valid, shoulders of giants and what have you...



