
Anecdote: I was an avid gamer who would upgrade whenever the price/performance was good enough for my budget. However, I bought almost top-tier stuff back in 2008 and haven't upgraded the core of my system since. I don't game like I used to, but it still does everything I need it to do. I used to do yearly-ish new builds as well.



Me as well. I built an i5-2500K system back in 2010(ish) and it's still my main home desktop. With 32GB RAM and an SSD it's as fast as the machine at work that is two generations newer (or, more correctly, it's imperceptibly slower).

Things have really levelled out for average loads, even developer loads (I mostly do web dev, run Vagrant machines, that kind of thing).

I can't see myself upgrading until this thing dies, tbh.


Same boat. I built in 2009 and have a Q9450 @ 3GHz and a Radeon 5870 with 8GB RAM and a one-time top-of-the-line Intel 160GB X25-M SSD. Still works great. There's only one thing I don't like about it: with 3 monitor outputs enabled, the idle temp on the GPU is pretty hot (~86C). I'm hoping a newer machine will bring that down. With a single output it's about 57C, so a huge difference.

But I got tired of 190F air slowly being pumped out of my case and into the room. Its replacement is finally on the way. I preordered Intel's Skull Canyon NUC[0], with 32GB of DDR4-2800MHz memory and a 512GB Samsung 950 Pro PCIe/NVMe M.2 SSD. I'll be daisy-chaining a single DisplayPort cable to 3 new LCDs as well.

Pretty huge leap in performance. It just made sense to stop building new computers and jump on the NUC bandwagon. All I do is development, League of Legends and the rare CS:GO match. The ~GeForce 750 performance level that NUC will provide will be enough. The inclusion of the Thunderbolt 3 port for an external GPU enclosure really put my mind at ease. Not that I intend to utilize it, but I'm glad it's there. Same upgradability as any other machine: SSD/RAM/GPU. The CPU is soldered, but I never once replaced a CPU after building a computer anyway, other than the few Athlons I killed from overclocking in ~2001.

I'll probably upgrade more often if these new gaming NUCs are as good as I think they'll be. The next upgrade for me will be a 10nm + Thunderbolt 4 NUC. And the final perk: it's all-Intel, so it'll work great with any Linux distro natively. That's worth a lot to me.

Unless Intel failed hard with this thing, which I highly doubt (it's Intel), I'm all-in on NUCs from here on out.

[0]http://www.newegg.com/Product/Product.aspx?Item=N82E16856102...


> Me as well. I built an i5-2500K system back in 2010(ish) and it's still my main home desktop

Good pick: it's still one of the faster chips around. However, its TDP is 95W, which is fine for a desktop. Since then, Intel has been concentrating on delivering the same (or, often, much less) speed with lower TDP ratings.

These address the market need for thin, high-priced laptops -- preferably without fans -- that you can use in Starbucks.

You could "upgrade" to a new Intel Core i7-6600U that's actually slower than your i5-2500K but has a TDP of only 15W.


What you folks are saying is mostly true. VR and 4K are the drivers for a whole new compute cycle in the home. I think the current core counts for Xeon are good enough for the cloud (by this I mean that I think CPU isn't the bottleneck for the average EC2, Google Compute, or Azure instance). Anyone care to comment?


For VR/4K the CPU is way less important than the GPU, and we still have some headroom on those even with the existing process. What will be interesting is when everyone else's process catches up with where Intel is now; they've generally stayed out in front of everyone else for a long time (except AMD for a spell).

I'm really looking forward to VR if it catches on, though. Having an insanely high-resolution headset so I can dump multiple monitors for programming is a big win. Combine that with something that has the portability/form factor of an MS Surface Book/MacBook Pro and you'd be able to program as capably from a hotel room as at your desk at home/work.

That would be the biggest shift in my work habits since I went from Windows to Linux in the late 90's.

Also, I think once everyone can get down to the same feature size as Intel we might start seeing more exotic architectures. Intel has often won with the "with enough thrust a brick will fly" approach to engineering: it doesn't matter if your chip is more efficient clock for clock if Intel is operating at a level where they can put 5 times as many transistors down in the same unit area and ramp the clock speed way up.


Good point. Nvidia seems like the exciting company in this regard. I have a bad feeling they're going to tumble, because the latest announcements related to Pascal have focused on deep learning instead of 4K for consumers. As far as I know, they haven't even announced their consumer Pascal cards yet. The rumor I've read online is that announcements are coming in June.


What would it mean for Nvidia to focus on 4K? Do the new screen resolutions require architectural innovations in the GPU? (Honest question; I'm not a graphics engineer.)


Me neither, so take this with a pinch of salt.

Mostly it's about shuttling around 4 times as much data for each frame, as well as doing 4 times as much processing.

1920x1080 has ~ 2 million pixels. 3840x2160 has ~ 8 million pixels.

Internally, IIRC, this is often done as vectors before being rasterised and having various filters and shaders applied, but that step requires you to store multiple buffers etc. It's the same reason a card that will play a game just comfortably at 1024x768 will run away crying at 1920x1080, I guess.
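
To put rough numbers on the memory part (a back-of-the-envelope sketch of my own, assuming a plain 32-bit RGBA colour buffer with triple buffering, and ignoring depth/stencil buffers, anti-aliasing and textures):

    # Rough framebuffer memory: 1080p vs 4K.
    # Assumes 4 bytes/pixel (RGBA) and 3 colour buffers (triple buffering);
    # real GPUs also carry depth/stencil buffers, MSAA samples, textures, etc.

    def framebuffer_mb(width, height, bytes_per_pixel=4, buffers=3):
        """Approximate colour-buffer memory in MB."""
        return width * height * bytes_per_pixel * buffers / 1024 ** 2

    for name, (w, h) in {"1080p": (1920, 1080), "4K": (3840, 2160)}.items():
        print(f"{name}: {w * h / 1e6:.1f} Mpixels, ~{framebuffer_mb(w, h):.0f} MB of colour buffers")

That prints roughly 2.1 Mpixels / ~24 MB for 1080p versus 8.3 Mpixels / ~95 MB for 4K, so even before shaders and textures the card is moving about four times the data every frame.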


I can see how higher resolutions will require the GPU to have more memory or more FLOPS or both. I guess my question is, how, if at all, are the improvements required to support 4K different from improvements that target something like deep learning performance?


Not quite, on that last part. Clock rates have been more or less stationary over the last decade or so. What Intel has been focusing on for some time is cache, and how to practically always have the right data in the cache at the right time. But yes, having more transistors to work with does allow them to pack more cache onto the die.


Yeah, that perhaps wasn't very clear; I meant that historically they ramped the clock up (back in the P4 days).


And ran into a brick wall, IIRC.


We still need more CPU in the cloud. There's an almost unlimited appetite, and we still often look for optimizations to our programs that reduce CPU load. (At least at Google. Not sure how much I'm allowed to say in more detail, though.)


Touché. I've also been involved in building clouds, but ours was focused on customers building web apps. Those are probably not compute-bound. I can see why data-processing workloads would need more CPU.


> VR and 4K are the drivers for a whole new compute cycle in the home.

My i7 desktop will be four years old in June. I've done the research and all I need to run the Oculus or HTC Vive (or the upcoming games that look good) is a graphics card upgrade.


As a software developer I constantly find myself optimizing and scaling across CPUs. Sure, those programs I optimized 10 years ago run blazingly fast now. But as computers got faster, the possibilities and expectations also increased.

I would like to compare CPUs with roads: roads increase traffic, not the other way around. So if you make a faster CPU, the demand for a faster CPU increases.

So if Intel stops making faster CPUs, the demand will (counterintuitively) go down.


Good point, except Facebook is targeting 6K per eye. People are definitely going to buy hardware, but maybe not in huge numbers.


That's what I've got as well. I'll probably build a new machine (actually, I'll just buy a NUC) this spring. Another big factor in not upgrading is the effort involved in setting up a new Windows machine; I'm sure I'm in for at least 30 hours on that front.



