What really astonishes me though is how you can have these really nice machines and Apple doesn't bother to update the Mac mini. Seriously, an Apple version of this would be a nice desktop.
>What really astonishes me though is how you can have these really nice machines and Apple doesn't bother to update the Mac mini.
It would almost have been better if they didn't update it. The 2012 mini was fantastic; you could take it to a quad i7 and 16GB for under $1000 thanks to user-accessible RAM. For most purposes that's much better than the current $1000 mini.
I also have the Skull Canyon NUC, as a laptop-replacement machine. None of my previous personal computers have had ECC RAM, but none have had more than 16GB either. What is your use case that benefits from error correction, if you please?
The Wikipedia page says:
" Tests show that simple ECC solutions, providing single-error correction and double-error detection (SECDED) capabilities, are not able to correct or detect all observed disturbance errors because some of them include more than two flipped bits per memory word."
http://serverfault.com/questions/674214/what-is-the-rowhamme...
adds
"ECC effectively turns the Row Hammer exploit into a DOS attack. 1bit errors will be corrected by ECC, and as soon as a non-correctable 2bit error is detected the system will halt (assuming SECDED ECC)."
It can mess up _any_ file system. ZFS is, under some conditions, more likely to _tell_ you that you've had a problem.
That said, if a bit flips in the wrong place in RAM, even a checksumming file system will not protect you. The file system will happily write out the corrupted data if the checksum is calculated after the bit flip occurs.
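A contrived illustration of that ordering problem (CRC32 standing in for whatever checksum the filesystem uses):

    import zlib

    data = bytearray(b"important design file contents")
    data[7] ^= 0x10                      # bit flips in RAM first...
    stored_sum = zlib.crc32(data)        # ...then the fs checksums it
    # On every later read, the corrupted data "verifies" perfectly:
    assert zlib.crc32(data) == stored_sum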
In the mini-ITX space, there's also http://asrock.com/mb/Intel/X99E-ITXac/index.us.asp which is DDR4, not SO-DIMMs, and your choice of processor, including V4 Xeons. I have one with an E5-2680V4 + 64GB of RAM in it. (128GB should be feasible with 64GB sticks as far as I know, but as they're about $700 each last I checked, I haven't tried it.)
On my current desktop machine (32GB ECC DRAM) I see a single bit error (corrected) probably every 18 to 20 days. There isn't an easy way to tell where that error hit in memory (you could presumably walk it back from the physical chip to the physical address space to the current page table and then back into virtual space and a process, but that is waaaaaaaay too much work :-) Without ECC those errors would still happen, I'd keep working, and they may or may not cause a visible symptom: a schematic database slightly corrupted when it was saved to disk, a Flash image that isn't quite right, a system daemon that crashes and restarts unexpectedly, etc. Not really something I want to take my chances on.
Of course if you don't have ECC memory, you don't know that your memory is being silently corrupted. Some people don't care; I do.
How are you able to "see" the single bit errors? I had an ECC machine but I couldn't find any indication that ECC was working or that any bit errors happened.
Reading this, I figured the best I could do is trust that a workstation with ECC modules and a Xeon will have functioning ECC:
"Unfortunately, we have found that there is no consistent, conclusive way to determine if ECC RAM is working properly. ...We've actually asked Intel, Kingston and Asus over the years for their recommendations for methods to confirm that ECC is working, but we haven't gotten much more back than a blank stare."
The BIOS has to correct them; you get what is essentially a machine check, the BIOS fixes the error, does the writeback, and in my case logs the fix on the System Management Bus (SMBus), and that shows up in the IPMI system event log[1]. Not sure who the BIOS would tell if there weren't some sort of BMC chip to hear about it; it might keep an internal log.
I'm surprised the board/BIOS manufacturer couldn't answer the question; it's a pretty straightforward system.
[1] They show up as 'SBE' events (single bit error). An 'MBE' event (detectable but not correctable) would presumably result in an unhandled machine check and cause the kernel to reboot.
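If you want to watch for those SBE events programmatically, something like this works against ipmitool (assuming a BMC is present; the "Correctable ECC" match is vendor-dependent event text, so adjust for your board):

    import subprocess

    def corrected_ecc_events():
        # 'ipmitool sel list' dumps the BMC's system event log, one event
        # per line; needs root locally, or -H/-U/-P for a remote BMC.
        out = subprocess.run(["ipmitool", "sel", "list"],
                             capture_output=True, text=True,
                             check=True).stdout
        return [l for l in out.splitlines() if "Correctable ECC" in l]

    for event in corrected_ecc_events():
        print(event)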
That's interesting. Is it possible to use IPMI and log BMC events on a regular workstation from Dell or Lenovo, perhaps via some expansion card? Or is this only a server-grade feature?
They're also logged by some portable mechanism on normal x86 boards, viewable at least using the "mcelog" tool under Linux, and probably logged by the kernel in the normal kernel event log. I think the mce log is supposed to persist over reboots so you could see the multibit error afterwards if you got a machine check induced reboot.
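On Linux the kernel's EDAC driver is another place to look; it keeps per-memory-controller counters in sysfs (the paths below are standard, but they're only populated if an EDAC driver binds to your chipset):

    from pathlib import Path

    # ce_count = corrected errors (SBE), ue_count = uncorrected (MBE)
    for mc in sorted(Path("/sys/devices/system/edac/mc").glob("mc*")):
        ce = (mc / "ce_count").read_text().strip()
        ue = (mc / "ue_count").read_text().strip()
        print("%s: corrected=%s uncorrected=%s" % (mc.name, ce, ue))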
If you think that link is about hairdryers, I really suggest you do some shopping.
You'll find less smoldering, better smell, and less pain if you use an actual hair dryer to dry your hair (rather than a heat gun, which the page is about).
I'm putting money on something to do with ZFS. Turns out bit flips are significantly more common than had been thought in the past, which makes an error-correcting file system somewhat risky to use with normal non-ECC RAM.
In the safety industry everyone knows that bit flips (SEUs) are way more common than laypeople believe.
And that's for embedded microcontrollers and RAM that aren't anywhere near the smallest process nodes.
Let me just quote IEC 61508, where one "FIT" means one failure per 10^9 device-hours:
"Causes of soft errors are: (1) Alpha particles from package decay, (2) Neutrons, (3) external EMI noise, (4) Internal cross-talk. External EMI noise is covered by other requirements of this international standard.

A soft error occurs when a radiation event causes enough of a charge disturbance to reverse or flip the data state of a low energized semiconductor memory cell, register, latch, or flip-flop. The error is called “soft” because the circuit itself is not permanently damaged by the radiation. Soft-errors are classified in Single Bit Upsets (SBU) or Single Event Upsets (SEU) and Multi-Bit Upsets (MBU).

The soft error rate has been reported (see a) and i) below) to be in a range of 700 FIT/Mbit to 1200 FIT/Mbit for (embedded) memories. This is a reference value to be compared with data coming from the silicon process with which the device is implemented. Until recently SBU were considered to be dominant, but the latest forecast (see a) below) reports a growing percentage of MBU of the overall soft-error rate (SER) for technologies from 65 nm down.
The following literature and sources give details about soft-errors:

a) Altitude SEE Test European Platform (ASTEP) and First Results in CMOS 130 nm SRAM. J-L. Autran, P. Roche, C. Sudre et al. IEEE Transactions on Nuclear Science, Volume 54, Issue 4, Aug. 2007, pages 1002-1009

b) Radiation-Induced Soft Errors in Advanced Semiconductor Technologies. Robert C. Baumann, Fellow, IEEE. IEEE Transactions on Device and Materials Reliability, Vol. 5, No. 3, September 2005

c) Soft errors' impact on system reliability. Ritesh Mastipuram and Edwin C. Wee, Cypress Semiconductor, 2004

d) Trends and Challenges in VLSI Circuit Reliability. C. Constantinescu, Intel. IEEE Computer Society, 2003

e) Basic mechanisms and modeling of single-event upset in digital microelectronics. P. E. Dodd and L. W. Massengill. IEEE Trans. Nucl. Sci., vol. 50, no. 3, pp. 583-602, Jun. 2003

f) Destructive single-event effects in semiconductor devices and ICs. F. W. Sexton. IEEE Trans. Nucl. Sci., vol. 50, no. 3, pp. 603-621, Jun. 2003

g) Coming Challenges in Microarchitecture and Architecture. Ronen, Mendelson. Proceedings of the IEEE, Volume 89, Issue 3, Mar 2001, pages 325-340

h) Scaling and Technology Issues for Soft Error Rates. A. Johnston. 4th Annual Research Conference on Reliability, Stanford University, October 2000

i) International Technology Roadmap for Semiconductors (ITRS), several papers."
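To put those figures in context for the 32GB desktop mentioned upthread, here's the back-of-envelope arithmetic (an upper-bound illustration only: the IEC numbers are for embedded memories, and commodity DRAM does far better):

    gb = 32
    mbit = gb * 8 * 1024                      # 32 GB = 262,144 Mbit
    for fit_per_mbit in (700, 1200):
        per_hour = fit_per_mbit * mbit / 1e9  # FIT = failures per 1e9 h
        print("%d FIT/Mbit -> one upset every %.1f hours"
              % (fit_per_mbit, 1 / per_hour))
    # The observed ~1 corrected error per ~19 days on 32GB works out to
    # roughly 8 FIT/Mbit, i.e. about 100x below the quoted range.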
Because you are paranoid and also have a small apartment. I have a mini-ITX home server build with 16GB of RAM and four WD Red 3TB drives. It doesn't run ZFS because I don't have ECC RAM in it, but it's the smallest computer I could buy that would fit the hard disks I already owned. If I were starting from scratch I'd probably use 2.5" drives instead, and if I wasn't working for a non-profit, I'd probably use SSDs now that they are about $0.27/GB.
Mini-itx I can understand, but this box has one drive of 1.5TB max (according to http://www.hp.com/go/Z2mini). You would have to connect any other drives externally. If you do that, the result probably takes more room than that mini-itx.
Not OP, but for me, numerical simulation. I don't have the numbers to 'prove' that corruption is really a serious problem for my use cases, or what the effects would be on my results, but for a few hundred extra, I figured why not go for a Xeon/ECC platform.
>What really astonishes me though is how you can have these really nice machines and Apple doesn't bother to update the Mac mini.
IMHO there is indeed a great deal of pent-up demand for more powerful Mac Minis. I really wish Apple wouldn't just treat them as incidental devices, but presumably there's not much incentive from their perspective since these are lower margin than their other offerings.
>> It would compete with the iMac then, and with their notebooks too.
Companies that try to avoid competing with themselves often fail quickly and spectacularly (after this approach appearing to work well for an extended time).
Apple historically hasn't had a problem competing with itself: iPod mini, iPod nano, iPhone; each of these completely destroyed an existing product or product line.
Apple probably is an exception rather than the rule here; their products are so overpriced that any new product's success easily makes up for previous products' losses.
It might be a bit different for "normal" companies though.
Absolutely - my main computer is an iMac, which will soon be replaced by another iMac (5K), but as someone who builds and upgrades a gaming rig as well, I always look with envy towards the very affordable world of PCs. I'd love to be able to slap macOS on custom-built computers for a fraction of the price that is attached to Apple hardware.
I'm not sure, but I think it's not for me. It always sounded like an option with lots of compromises and obstacles and I'm not really interested in that.
I did some caching/content distribution/load balancing work for HP many years ago (maybe 10 years ago, now).
The names are (or were) geographic and divisional. Their infrastructure is (or was) pretty old school. While they do have load balancing and such, it is often of the form of redirects based on locality or other information. People in Hong Kong or China starting on HP.com in the US would find themselves bounced over to servers in Hong Kong, based on their language and where their traffic was being routed to/from. They mirrored tons of their data automatically with Squid proxies and just additional web servers. So, once a user gets directed to a local HP site, they'll tend to stay on it...and if content doesn't exist in the local cache, it'll pull it from the origin. HP has tons of public server names not just in the wwwN.hp.com space.
So, while I don't have specific information about www8 vs www2, etc. Based on my own experience with their infrastructure, I would assume it is the most boring explanation. They need several servers to provide good service, so they just give various divisions their own servers. This was the case in Hong Kong, Korea, and China, where my work was deployed. They do have load balancing and content distribution and such (and did even back then), but the actual web services were pretty traditional; they just ran on a server somewhere.
I don't know why I find that explanation so much more reassuring than my assumption that it was simply a really sloppy version control hack, to denote upgraded deployments.
Even though there are a bazillion other ways to codify DNS subdomains by region, including some sort of reasonable vernacular for the region, I'm glad it's about locales, regions and perhaps even content delivery or timezones maybe, moreso than version numbers.
Every time I get redirected to something like www2.example.com, I wonder which obstinate jerk refused to move whatever's on www proper, forcing the entire world to suffer eternity in a purgatory named www2.
Hopefully, all those times, it just turned out that I was in North America, region #2, and maybe region #1 was Antarctica for some reason.
Based on a google site: search and an eyeball of the first couple of results pages I'd summarise as
www2 - partnering, alliances education [1]
www3 - investor relations, careers [2]
www4 - central european languages [3]
www5 - japanese [4]
www6 - no results found
www7 - open vms [5]
www8 - seems to be the primary site for english content
That doesn't even make sense. What happens if a server goes down and needs to be replaced? Do you have to wait for someone to remove the old box and insert a new box in the rack? Obviously there needs to be a handoff of the same name because everyone in the world is linking to the "license-plate" URL....
Having seen these at several large corporations, I always figured it was either:
1. a division of labor: each corporate "division"—themselves large enough to have their own subsumed IT departments—gets its own hostname prefix to plan a routing map under.
2. a division in time: creating a new wwwN "prefix" allows them to "throw out" their old route-map and start again for new projects, while still keeping old projects on their existing routes at the old hostname.
My guess leans more toward the second option: corporations large enough to maintain extensive intranets have a particularly faithful dedication to the "cool URLs don't change" philosophy, because they don't want to go around fixing thousands of links built into dozens or hundreds of internal apps, emails, calendars, &c.
Of course, if you know you're going to do this from the beginning, you can just do what the W3 recommends, and begin each newly-defined route with a prefix for the current year+month (see https://www.w3.org/2005/07/13-nsuri).
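For concreteness, the W3 convention just amounts to minting each new namespace with its creation date and never moving it afterwards (the "/reports" path here is a made-up example):

    from datetime import date

    # A route namespace created this month, frozen forever after:
    prefix = "/%s/reports" % date.today().strftime("%Y/%m")
    print(prefix)   # e.g. "/2017/01/reports"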
Edit: The Z2 Mini supports Intel Core i3, i5 and i7 processors, in addition to Xeon E3 processors. It also supports up to 32GB of DDR4 SO-DIMM memory, and the unit can be equipped with Nvidia Quadro M620 graphics with 2GB of memory. (from http://www.tomshardware.com/news/hp-z2-mini-g3-workstation,3...)
Any idea which consumer card the Quadro M620 corresponds to? If it's something halfway decent, then it might also be a much better ultraportable gaming solution for some people than e.g. a NUC. However, that also depends on whether the Quadro drivers are suitable for gaming or not.
Ok thx, that's quite bad. Had expected more, since some smaller notebooks are now getting 1060s. But then this thing would probably be much more expensive.
Your best bet, other than building your own, is probably a Steam Machine. For around the same price as this workstation you can get one with a better video card.
That seems to have gotten better of late, as more systems use ECC sodimms. Poking around seems to suggest a price difference of $5-$10 a stick, and that's pretty constant regardless of memory size.
Sorry, I was being less pedantic than I should have been. Thanks for the correction and have an upvote.
They're "incompatible" in the sense that the pinouts differ, but an ECC-capable DDR3 machine can use both ECC and non-ECC memory; however, the pinout incompatibility means that a non-ECC-capable machine cannot use ECC RAM (but then, who would want to spend the extra $$ for ECC RAM when their machine doesn't support it anyway?).
By pre-defining specs for the device, you can obviously create a sleek box with no wasted space, but I would claim that, just going off physics, it will always be impossible to make something as powerful as a machine four times its size, given that the larger components are still being produced. Otherwise no one would still build towers; everyone would just use laptops. (Of course more people use laptops, but it's not because of performance.)
Since they don't mention specs other than "next gen xeon and nvidia graphics", and the only claim to power is "Building off the success of the HP Z240 SFF, the HP Z2 Mini Workstation is twice as powerful as any commercial mini PC on the market today", I won't hold my breath.
Getting better performance than "other mini PCs" is a pretty low bar for a stationary workstation. I'm unsure who this is for. It's not a laptop, so you can't bring it with you. A midtower would have better performance and can be strapped under your desk so you never see it.
You wouldn't be able to build anything as powerful as these HP offerings, because they can custom-fab parts to pack in RAM and high-end processors at higher density (like laptops) and build a lot of custom heat-dissipation components as well.
Still, there are Thin Mini-ITX boards that can support a decent amount of RAM and power, just not Xeons or high-end video cards.
Four times the size can also be tremendously counter-productive, in terms of having a big empty space that air swirls around in. Cooling is a pain in the arse.
It's a 4-way M.2 NVMe RAID card, which HP has ignorantly decided to populate with seemingly mediocre SSDs instead of the current-gen flagships from Samsung or Intel.
You can remove the included SSDs and populate it with up to four 960 Pro SSDs, for what I imagine would be 14/8 GB/s sequential read/write performance.
Unfortunately there's no way to buy it without the included SSDs, so you're better off with the unpopulated M.2/mini-SAS NVMe RAID cards from Dell, HighPoint, or SuperMicro.
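For what it's worth, the 14/8 figure is just Samsung's rated 960 Pro numbers times four, assuming perfect RAID-0 scaling (which PCIe lane limits may not actually allow, as noted elsewhere in the thread):

    # ~3.5 GB/s sequential read and ~2.1 GB/s write per 960 Pro
    read_gbs, write_gbs, drives = 3.5, 2.1, 4
    print("%.0f / %.0f GB/s" % (read_gbs * drives, write_gbs * drives))
    # -> "14 / 8 GB/s", before any bus bottleneck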
> Unfortunately there's no way to buy it without the included SSDs
Ah, brings back memories of buying HP servers several years ago. The only way to get the servers we needed was to buy servers pre-shipped with inadequate RAM and drives, rip it all out, and then install new drives and new RAM (purchased at the same time from HP, for an inflated price of course). We were left with a pile of essentially useless SAS drives and ECC RAM, and still had to do all the assembly in-house.
I've never had so much trouble getting a company to accept several tens of thousands of dollars before... or since.
That project explicitly specified HP hardware, and as far as I know we never bought HP servers again after that experience.
What shitty vendor is this? Our VAR handles all of that for us. 48 Dimms, 16 SAS drives, P420e, and loads of nics. And they did the firmware updates and checked for defects.
The only thing I needed to do was rack it and kick off the PXE install.
At the time, every HP reseller. They had some kind of gold/platinum/bronze partner system where specific resellers could only order certain parts. I don't know if this is still the case, but honestly the whole thing was an immensely frustrating experience.
I could order direct, but was very limited in what I could customize. We worked with a couple different resellers, but they were also limited (each in different ways), and ultimately did purchase through one of them.
I don't remember all the specifics now, but the main factor was the database servers which had high storage and very high I/O requirements. I think the only way to get the drive config we needed was to get something with 4 or 5x the CPU, and the cost accordingly skyrocketed. These days a couple SSD's could easily beat the performance of the 12 or so drives we had at the time, but in 2009-ish the limited SSD stuff on the market was out of reach.
Do you happen to know what card that is? I couldn't find it in TFA. I'm not sure about getting the full read/write performance with 4 960s, since I think you'll saturate the bus with 2 or 3.
That's the MSI Trident referenced there; HP doesn't say whether or not the Z2 accepts a standard M.2 drive (full specs don't appear to be available yet).
First footnote in the linked press release, justifying them calling it "the world’s first mini workstation[1]"
[1] Based on publicly available information of workstation competitors as of October 3, 2016 with volume of at least 1 million units annually as of October 3, 2016 having < 3 litres volume, professional graphics, Intel® Xeon® quad core processor, ISV certified applications, ECC memory.
It does support ECC, but I don't understand its thermal design. Probably it's like NUCs, a small box for the sake of prettiness and a tiny-n-cheap fan that will be noisy.
It's a shame, because you can passively cool things like a i7-6700T these days without much effort and without ever falling into throttling [1]. Plus, Calyos is about to release a loop heat pipe that will be able to cool massive systems without any noise [2].
I wish someone like Apple or perhaps the new Microsoft had the vision to sell corporate desktops and workstations that did not suck noisewise. I imagine a little fanless Mac Mini running on Xeon would sell like hotcakes and would give Apple the chance to gain a further chunk of enterprise marketshare. But they don't seem to be interested.
They specifically mention acoustics in the (long) text about it. I would assume that means they tried to make it quiet. Whether I believe them or not is up in the air, as my last HP workstation was louder than hell.
Fair enough. My main ergonomic requirement for a workstation is zero noise at rest, and reasonable otherwise. I'm interested in seeing what I can get in form factor though.
What I did (but I'll admit this is not an option in many cases) is to move my workstation into a rack in the basement, with 20m DisplayPort cables connecting the monitors and a 20m usb3 extender cable hooking up a usb hub on the desk for keyboard, mouse, usb storage etc.
Let the computer be loud there - noise has become a non-issue when selecting parts. It has a GTX 1070 GPU which gets quite noisy under load (GPGPU processing). All 'workstations' I had in the past, like from Dell, got quite loud under load, and had (baseline) loud power supplies as well.
I do need access to the power button more often than I thought, so I'll have to wire the front panel power button to a button on my desk.
A similar strategy I am considering. Get a fanless NUC case from Akasa [http://www.akasa.com.tw] and remote desktop to the "real" computer in the basement.
Yeah, I'm pretty sure the rule of thumb is that standard towers can be quieter because, unless the SFF computer is passively cooled, a larger tower can have larger fans spinning at a slower RPM to move the same amount of air, as well as larger radiators.
They probably just wanted to show off the 6-display config for that photo. Stock traders are much more likely to use that particular display arrangement than engineers.
This packet is probably more for their investors than anything else, and investors probably think 6 monitors means powerful. It's smart marketing to target who you are actually advertising to (and here it's business 'leaders').
"world’s first mini workstation" my ass (and no, the footnote doesn't make it so). Here's one that supports 7 4k displays and has no fan: https://airtop-pc.com/
It looks nice, but beyond looks I don't really get the appeal. If you want and/or need the power, and you're already planning to have 3-6 monitors on your desk, why not go for a slightly larger box that can hold reasonable cooling solutions that don't sound like a jet taking off when the PC is under the slightest load? IME, HP is terrible about considering the noise of their designs. Our current work laptops are HP 1040 G3's, and it doesn't take much to get the fans spinning at a noticeable volume (playing a video + a couple instances of VS + the normal outlook/web browsers/etc). The 1040 at least has been an upgrade over our old HP laptops (I forget the model), where the fan seemed to run at 100% all the time.
One upside is that this will fit in your carry-on luggage. Cheaper than a beefy laptop, but I'm not sure portability buys you much if you'd need to think about mouse/keyboard/monitor too.
If they can actually pull it off without the device either melting or spinning its fans fast enough to achieve escape velocity, it does sound like a pretty sweet deal.
If you've got enough displays, cable length can become a factor! This is much easier to solve if you've got your PC on your desk, and it's much easier to fit your PC on your desk if it's smaller rather than larger.
I get the feeling people who have security folks that need 6 monitors will be ordering this. It sure would be a nice replacement for the box we have now. The small size is a big benefit.
One site said the pro version has 4 DisplayPorts.
But I guess that means that 6 displays can only be connected at a limited resolution (Full HD?). I wonder if it could even drive a 4K screen on each DisplayPort.
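Back-of-envelope says a 4K60 stream fits comfortably per port: DisplayPort 1.2 carries roughly 17.3 Gbit/s of usable bandwidth after 8b/10b coding, and (ignoring blanking overhead) an uncompressed 4K60 signal at 24 bpp needs:

    w, h, hz, bpp = 3840, 2160, 60, 24
    print("%.1f Gbit/s" % (w * h * hz * bpp / 1e9))   # ~11.9 Gbit/s

So per-port resolution shouldn't be the limiter, at least for bandwidth.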
Even a Macbook Pro can drive 4 external displays and the internal display at the same time. The Mac Pro that came out in 2013 can drive 6.
The question is really up to the GPU makers to put the right output capacity on their cards, and the integrator to hook up all the ports and make sure the software actually works.
Apples and oranges. Intel's mobile Skylake processors don't support more than 16GB of low-power RAM; Apple didn't have anything to do with that limitation.
I upvoted your response as it's correct, but for anyone who doesn't want to believe, here's a fully referenced post I made a while back with specific references in the Intel documentation:
Now I know why both the new Surface Book and the MacBook Pro have such paltry RAM configurations. Sad times when Intel itself seems to be falling behind.
Intel's mobile processors, like the ones in the MacBook Pro, support up to 64GB of RAM. Their ultra-low-power CPUs only support 16GB, but the line in question goes up to 64GB, even with DDR3L RAM.
This is very nice. I used to have a mac mini and would toss it in my backpack so I could work at home (I had two sets of displays and good keyboards). I can see this working out the same (well I work from home now all the time, but in a general sense I see it used that way too).
Seeing stuff like this on HN brings excitement back into tech for me. New hardware and debatably a challenger to the Mac mini and Intel NUC? Very cool stuff.
I am hungry for news of HP's "The Machine" or whatever they're calling it, which is the big iron that will leverage and be built around the memristor-based memory.
It was a hype machine. The leader of HP Labs just kind of made up a machine that combined all the experimental technologies the labs were working on (terrible idea: make a product that requires N experimental projects to all work out).
It's certainly not an Industry first, nor a "World's First" as the press release claims.
[1]Based on publicly available information of workstation competitors as of October 3, 2016 with volume of at least 1 million units annually as of October 3, 2016 having < 3 litres volume, professional graphics, Intel® Xeon® quad core processor, ISV certified applications, ECC memory.
Ever see something being advertised as "new and improved"? It can't be both.
I learned -- a long time ago -- to immediately ignore about half of anything in a press release or coming from a marketing/sales person. I skipped reading probably half of this press release just because I don't care that the VP of the division responsible for the product thinks it's great (of course he does, he's being paid to say that).
Give me facts: data sheets, technical specifications, etc. Personally, I have a (possibly unhealthy?) extreme level of "contempt" for the average marketing/sales droid.
It has to. A 135W power supply for a device which is 8.5 x 8.5 x 2.3".
There's no way that much heat is going to be quietly removed by those tiny fans.
I'd love some high res photos of the inside so I could look up the data sheet of the fans in use.
HP servers ship with cooling ducts to direct airflow, and this thing doesn't even seem to have them. I mean, look at the CPU exhaust! From the photos it looks like the only vents are at the corners, but they've decided to exhaust the CPU (and presumably the GPU; hard to tell, since it's under a hard drive) straight into the rear I/O ports, which don't have any vents to the outside.
I'm not aware of PC manufacturers making an SFF PC with an internal power supply (look at the Dell Optiplex GX620, and the Steam Machines).
Pretty much any PC small enough to be VESA mountable will have an external power supply.
Apple has made the power supply on the Mac Mini internal since 2010, but previous models had an external power supply. I can't find exact numbers, but I'd be very surprised if the power supply in the current Mac Mini model is rated for more than 60 or 80W output.
Besides, it doesn't really matter whether the power supply is internal or external, other than that it would be a feat of engineering to fit a 135W power supply and a PC in such a small space. The fact is, that volume is quite small to fit the components and a heat sink which can handle a 60-80W TDP (CPU+GPU) without a lot of airflow over it to keep temperatures under control.
Is there an obvious reason that they stick with traditional air cooling for this, rather than using liquid cooling? Here's an example of a compact system with liquid cooled CPU and graphics card built using off-the-shelf parts: http://blog.newegg.com/building-a-mini-itx-gaming-pc/
I'd think that at this small size, you'd need really high air velocity to make it work. Coupled with small fans, this generally means the system would be really noisy. Is liquid cooling still too expensive? Not reliable enough? Not actually an advantage for noise? Or do the expected purchasers just not care about excessive fan noise?
Duh. OP suggests liquid cooling would give some benefit to acoustic performance.
Really, it just adds complexity and bulk; it's not some magic solution to efficiency or increased performance. Especially in a compact form factor.
If volume is not a constraint, sure the availability of added surface area to dissipate the heat collected from a smaller surface area is beneficial. Most high performance, low resonance liquid cooling systems take advantage of increased surface area using oversized radiators and multiple low-speed fans.
I have successfully designed and implemented my own custom water cooling loops to pull concentrated heat from multiple components, but there is most definitely no benefit to the volume occupied by the entire system.
I wasn't suggesting that mini-ITX was a substitute. It's only compact for something that uses standard hardware. What I was wondering was why they stuck with air cooling despite using a custom form factor. I presume there are reasons for this --- I just don't know them.
HP does claim that this machine is "63% quieter than a comparable HP business-class mini PC.": http://www.tomshardware.com/news/hp-z2-mini-g3-workstation,3.... My previous experience with HP machines though is that I hate being in the same room as one even when it is at idle. So even if this is 1/3 as loud, it may still sound like a vacuum cleaner.
The supported E3-1270, 1275, and 1280 are rated at 80W. I'm not sure about the M620 GPU, but since it's the successor to the 45W K620, that might be a good guess. If they need to offer the 200W version, I assume they will occasionally need to handle at least 135W of cooling inside the chassis.
That's also what I'm wondering. For my primary work PC it's: Either it's really portable and has a screen - then it should be small. Or it's not portable - then I don't care as long as it fits on or under the desk.
For some environments however small or invisible hardware and the associated clean look is definitely a bonus: E.g. if it's a PC in your living room (I bought a NUC for that), or maybe for publicly exposed working places (receptionist, ...). But that's not really what is targeted with that hardware.
On a 'living room' PC: I bought a Thinkpad T430s for £150. It's connected to the TV, the amp, and syncs various file storage services to itself then backs them up to various locations with Crashplan.
The best thing about it? If I want to do anything with it, I just open the lid. Screen, keyboard and pointing device built in.
I don't do CAD, but a powerful Linux station in a small package that can power multiple screens is something I would be in the market for. Let's wait for reviews...
I loved that thing. I used it 24/7 for two years and then sold it for just 250 euros below the original price. Overpriced Macs do have their advantages.
I still use the 2.3GHz quad core "server" version as my main system and I love it :) I took one of the 1TB disks out and put an SSD in to make a DIY fusion drive.
I've deployed a hundred or so HP Mini pro units professionally; utter garbage. They bluescreened constantly, and it took 6 months of screaming at their tech support to get them replaced. I can't recommend any HP product after that.
I've got an HP Stream Mini, which I've been perfectly happy with[0]. Granted it's not a business machine and I don't do any dev work on it, it's just my living room HTPC. But I totally dig how easy it was to open up and replace the wifi card, hard-drive, and memory. In fact, my only complaint about it so far is that I stupidly let it update to Windows 10, and now I periodically turn my TV on expecting to be staring at Kodi, but instead I get a login screen because W10 decided to install updates and reboot on me.
"Based on publicly available information of workstation competitors as of October 3, 2016 with volume of at least 1 million units annually as of October 3, 2016 having < 3 litres volume, professional graphics, Intel® Xeon® quad core processor, ISV certified applications, ECC memory."
It's definitely smaller than 12". That's the size of the SFF PC I have on my desk, and this thing, next to the coffee cup and mouse shown, is definitely not as large. The subtitle of the article states, "HP Z2 Mini delivers server-grade power in a 2.3 by 8.5 inch package".
That makes sense. I'd suggest "diagonal" as opposed to "diameter", as "diameter" suggests that the measurement is being taken across the flats. I see what you're saying though. The Mac Mini was, I recall, of roughly the same area as a CD jewel case, which is consistent with the size you describe.
Still, there's room on the case for additional front and back ports, and all the hubs I've used have had problems. It also defeats the clean look of the machine.
anecdata: I've bought more hubs than I care to count for multiple employers and not had issues. Also, the monitor hub route wouldn't affect the "clean look" of the machine any more than having a dozen cables sticking out of it every which way.
What is the etiquette on posting literal company press releases? I'm not asking snarkily, but mean it as a serious question. I don't want to see an HN that's dominated by ads for the latest gadgets, but I understand that this is something of interest to hackers, and perhaps that criterion dominates all others.
Well, every time Apple, MS, or Google launches something, no matter how irrelevant, it ends up on the front page. As a point of principle, if that's OK with folks, why shouldn't it be OK here too? Heck, I remember seeing a _font_ change (by either Apple or Google) end up on the front page.
Yeah, I think the same rules apply as for any other link.
(which the way I see it, as long as you aren't posting a torrent of boring HP links, it would even be fine if you worked at HP and posted the link, so long as it is interesting)
With 500 karma you can 'flag' stories, which is basically a super-downvote in the sense that a flag suggests the story doesn't belong on HN at all. My sense is that it is a much stronger signal than a comment downvote.
I am not complaining. I don't trust myself with the responsibility anyway, but I have 500+ and I cannot downvote stories.
I used to avoid voting on comments on my phone because I was afraid I'd end up downvoting with my fat fingers, but they've fixed it with an 'undown' link when I downvote. Maybe they've relieved me of the responsibility of downvoting stories?
I don't know how I feel about downvoting, because there is always a desire to downvote dissenting opinions.
Per the guidelines: "On-Topic: Anything that good hackers would find interesting." I think that rule should trump any other, regardless of source or topic.
I'm sure they're referring to upgradability which is the last attractive feature of desktops. I can't figure out why anyone would care how big their tower is. If you're going to have an array of 6 monitors, I'm sure you've got an extra square foot or 2 for a standard tower that won't lock you into HP components.