
Unfortunately this doesn't seem to be for me, even though I'm really interested in AI and I'm currently working on an AI project (see my profile if you're interested). I wish I could run my own AI algorithm rather than just using theirs. It would probably be cheaper to just buy my own Xeon computers; training is what takes most of the compute power.



We just bought a used Xeon HP workstation for a build server. It is 7 years old. But a 24-core new workstation would be much more expensive. We're putting in some SSDs to speed up build times, and it came with 24GB of RAM, which is enough for our use cases. The price? $400 or so. It even came with a Quadro 5000 video card, though we weren't planning on GPU compute for this particular box.


>>The price? $400 or so

Wow. The price is quite good. Is there some website you bought this from? How many cores?

>>But a 24-core new workstation would be much more expensive

Tell me about it. I've been checking the prices of a dual 24-core workstation and it's around $7K.


It's a 24-core, dual-socket machine (12 cores per socket). There are a couple left on eBay:

http://www.ebay.com/itm/HP-Z800-Workstation-2x-Xeon-x5670-2-...

The one downside of this box is that the power supply is totally proprietary to HP. Well, the motherboard too. So if one of those craps out, you're done, unless you can locate a spare cheaply.

It's also reeeeaaally heavy and draws a lot of power.


> It would probably be cheaper to just buy my own Xeon computers.

What does Xeon have to do with training? You need a fast GPU (or apparently TPU). The CPU is relatively unimportant. Furthermore, you can roll your own algorithms (i.e. architecture) in TensorFlow. You don't have to use "theirs".
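To make the "roll your own" point concrete: at bottom, an architecture is just a parameterized function plus an update rule, and you can write one from scratch without any framework. A toy sketch in plain Python (not actual TensorFlow code; the model and data here are made up for illustration):

```python
def predict(w, b, x):
    # A one-parameter "architecture": a straight line.
    return w * x + b

def train(data, lr=0.01, steps=500):
    # Plain stochastic gradient descent on squared error -- no framework needed.
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, y in data:
            err = predict(w, b, x) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

# Fit y = 2x + 1 from a few sample points.
points = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(points)
```

Swap in a different `predict` and gradient and you have a different "algorithm"; that's all a custom architecture amounts to, whether you express it in TensorFlow ops or raw code.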


Well, I'm designing my own AI engine from scratch, not using any of the current machine learning techniques (convolutional neural networks, etc.). Right now my code uses multiple cores to make training faster, and Xeons have lots of cores. If I could use GPUs I would; maybe I can and I just need to figure it out. For now the simplest way to make it faster is to add more cores.
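For what it's worth, if the training work splits into independent chunks, Python's standard library can already fan it out across cores. A minimal sketch, assuming the work is embarrassingly parallel (the `train_chunk` body is a stand-in for real per-worker training, not anyone's actual engine):

```python
from multiprocessing import Pool

def train_chunk(samples):
    # Stand-in for one worker's share of the training work:
    # here it just accumulates a toy "error" over its samples.
    return sum(x * x for x in samples)

def parallel_train(data, workers=4):
    # Split the data into one stripe per worker and map across cores.
    chunks = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(train_chunk, chunks)
    # Combine the partial results from each core.
    return sum(partials)

if __name__ == "__main__":
    data = list(range(1000))
    total = parallel_train(data)  # same result as a serial loop over data
```

The catch is that this only helps when workers don't need to share state every step; if they do, the synchronization cost eats the extra cores, which is part of why GPUs win for the dense math in most ML training.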

Now, you may think I'm crazy for designing something from scratch, and I may be, but how else can we discover or invent something totally new? I think I'm actually onto something, given that the early results look quite promising.

I actually created a video demonstration of my AI engine; there's a link in my profile, but I've done a crappy job of explaining its strengths [1]. So much work, so little time.

[1] https://www.youtube.com/watch?v=WHNdIuBJHTo


Neat. I've also built my own system from scratch, so I don't think it's crazy at all.

Where I get stuck is figuring out new challenges to throw at the system. I find it funny that there is lots of discussion here about performance and tuning, and not so much about practical applications and valuable problems to solve.


Unless you have something like 64 cores, I don't think you will beat a GPU. And yes, you can use multiple GPUs in parallel to train.


The whole point is that you can run your own algorithms; that's why it's on Google Compute Engine. As far as I can tell you should be able to avoid TensorFlow even, although I suspect that'll be a pain.



