
You're forgetting the cost of fighting IT in a bureaucratic corporation to get them to let you buy/run non-standard hardware

Much easier to spend huge amounts of money on Azure/AWS and politely tell them it's their own fucking fault when they complain about the costs. (What, me? No, I'm not bitter, why do you ask?)




Unfortunately, the fight goes even further than that when you go against the cloud.

Last week I was at an event with the CTOs of many of the hottest startups in America. It was shocking how much money is wasted on the cloud because of inefficiencies, and they simply don't care how much it costs.

I guess since they are not wasting their own money, they can always come up with the same excuse: developers are more expensive than infrastructure. Well... that argument starts to fall apart very quickly when a company spends six figures every month on AWS.

I'm on the other extreme. I run my company's stuff on ten $300 servers I bought on eBay in 2012 and put inside a soundproof rack in my office in NJ, with a 300 Mb FIOS connection using Cloudflare as a proxy / CDN. The servers run Proxmox for private cloud and Ceph for storage. They all have SSDs and some have Optane storage. In 6 years, there were only 3 outages that weren't my fault. All at the cost of office rent ($1000) + FIOS ($359) + Cloudflare costs and S3 for images and backups.

With my infrastructure, I can run 6k requests per minute on the main Rails app (+ Scala backend) at a 40 ms response time, with plenty of resources to spare.
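
For a sense of scale, here is a quick back-of-the-envelope using Little's law, assuming the 6k/min and 40 ms figures above are steady-state averages (a sketch, not a claim about the actual workload):

    # Back-of-the-envelope sizing from the figures quoted above.
    # Little's law: average requests in flight = arrival rate * response time.

    requests_per_minute = 6_000
    response_time_s = 0.040

    arrival_rate = requests_per_minute / 60        # ~100 req/s
    in_flight = arrival_rate * response_time_s     # ~4 requests in flight on average

    print(f"{arrival_rate:.0f} req/s, ~{in_flight:.1f} requests in flight on average")

Peak load, slow endpoints in the long tail, and background work are what actually drive the server count.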


Only a Rails developer would think 6k requests per minute with 40 ms latency is reasonable with all that hardware. If you rewrote it you'd probably only need one server, but you'll probably make an argument about how developer time is more valuable :)


The 6k req/min with 40 ms is just at the front door.

I'm talking about a real application here, with 100s of database and API calls on each web page load. I could make the whole thing in Golang or Scala and it would be at least one order of magnitude faster. But then I would have to throw away all the business knowledge that was added to the Rails app.

For instance, the slowest API call within the 40 ms is one that hits an Elasticsearch cluster with over 1 billion documents and is made to a Scala backend using Apache Thrift. There's a lot of caching, but still, the long tail and customization will kill caching at the top level.


It sounds very similar to the Rails frontend I helped replace at Twitter: no business logic was thrown away, and it was done without any loss in application fidelity. We ended up with approximately 10x fewer servers and 10x less latency after porting it to the JVM. However, without the latency improvement, I don't think we should have done it; fewer servers just isn't as important as the developer time needed to make the change, as you just pointed out, in the same way that using the cloud to simplify things and reduce the effort spent on infrastructure is the main driver of adoption. There is clearly a cross-over point where you decide it is worth it. The CTOs you are speaking of are making that choice, and it probably isn't a silly excuse.


I get Rails people are crazy biased, but I still don't understand your argument.

What type of business that someone would basically run out of an office closet like this would need to service more than 6k requests per minute?


I was talking about reducing the number of servers to serve 6k/minute. No idea if that was a synthetic benchmark or the current load on the service.


It's a matter of dosage. You might be talking about how another tech stack would perform better on this metric, but the price of that is that the company would simply have been unable to ship anything in the time it could afford.

Swinging the conversation beyond the dosages of either side doesn't produce interesting insight.


Exactly. I'm pretty sure if I developed the whole thing in C, it would be really fast. But I would be the slowest part of the development process.


10 servers does seem a bit much. But who knows what their hardware specs are.


>In 6 years, there were only 3 outages that weren't my fault.

How many were there that were your fault? And of those, how many would have been avoided by using Heroku?

>All at the cost of office rent ($1000) + FIOS ($359) + Cloudflare costs and S3 for images and backups.

What about the time spent creating and maintaining this infrastructure?


I'm not sure how many outages I could avoid using Heroku, but I guess at least a few.

One time I was using Docker for a 2 TB MongoDB instance and it messed up the iptables rules. I noticed everything was slow for a few days, until the database disappeared and, when I logged in to check, there was a ransom note.

I flew from Boca Raton to NJ to recover the backup and audit whether that was the only breach. That was the longest outage.

Like Rome, this infrastructure was not created in one day. Adding Optane storage is something more recent, for example. Or adding a remote KVM to make it easier to manage than dealing with multiple DRACs, which I did after I moved to Florida.

But I'm not against using the cloud. I'm actually very in favor. What I'm against is waste.

In my case, being very conservative with my costs while still having a lot of resources available allowed me to try, and keep trying, many different ideas in the search for product/market fit.


Are you using the Optane SSDs for Ceph as you mentioned in your original comment? Curious what benefit you're seeing if not for Ceph, and if for something else, would you mind commenting? We're looking to share best practices with the community we're building over at acceleratewithoptane.com on how to take advantage of Optane SSDs.


If you buy equipment you can only deduct its depreciation costs every year. If you pay for services you can deduct the full cost from your taxes.


Anyone who thinks spending $10 instead of $1 makes sense just so you can book the expense (presumably for a tax write-off which might save you 30%... MAYBE?) needs to stay away from finances.


Correct, unless the org doesn't have the money to make all the capex upfront


You can typically take the full depreciation in the year you purchased the equipment under IRS Section 179, up to a limit which varies depending on which way the wind is blowing in Congress. For 2018 the limit is $1MM. Whether or not it's more beneficial to you to take the depreciation over time is a question for your accountant; technically, if you later sell the equipment you're supposed to recapture the revenue from the sale for tax purposes.
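
A toy illustration of the timing difference (not tax advice; the flat 30% rate and 5-year straight-line schedule are assumptions for the example):

    # Toy comparison: expensing a $3,000 server immediately (Section 179-style)
    # vs. straight-line depreciation over 5 years, at an assumed flat 30% rate.

    cost = 3_000
    tax_rate = 0.30
    years = 5

    expense_now_savings = cost * tax_rate        # $900 of tax saved in year 1
    per_year_savings = cost / years * tax_rate   # $180 of tax saved per year

    print(f"expense now:  ${expense_now_savings:.0f} saved in year 1")
    print(f"depreciate:   ${per_year_savings:.0f}/year for {years} years")
    # The total is the same either way; the difference is purely timing
    # (and therefore the time value of money).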


That's a very passive-aggressive way to deal with it. If that's your only option, you really are a cubicle slave in corporate hell.

In my opinion it's better to escalate upwards with proposals and not back down easily. You just have to frame it correctly and use the right names and terms.

* Usually big companies understand the concept of "a lab" that has infrastructure managed outside the corporate IT. Once you fight the hard fight, you get your own corner and are left alone to do your job and can gradually grow it for other things.

* Asking forgiveness works even for large companies. Sometimes someone is not happy and you 'are in trouble' but not really. You just have to have a personality that does not care if somebody gets mad at you sometimes.


In my experience, the blind drive to the cloud comes from above. It's someone's pet project and you're either on-board or not.

I won't go as far as calling it yet another cyclical IT phase, but it has all the hallmarks of one.


It has to be. Even as irrational as the market can be, I don't see people sustaining this kind of spend on public clouds -- the cultural barriers and perverse incentives that prevent effective use of private / hybrid cloud have to erode eventually.


> have to erode eventually

That's not borne out by history, especially in the face of "enterprise" hardware, software, and support, which, in many large companies, is being replaced by cloud (at lower cost!).

For smaller companies, it may be a different story, especially at the next economic downturn, especially if VC money becomes scarce enough for long enough.


There are armies of salespeople brainwashing executives about "the cloud". At my last job it only took a few years before every single C-level was spouting off recycled cloud propaganda


That's ironic-- back in 2014, every prospect was hanging up on me saying they "weren't ready to go cloud yet" :)


I wanted to put an Ubuntu partition on my work PC for Python deep learning work, as I'm significantly faster and happier on it. When I mentioned it to the sysadmin, he said "I'm not allowing that. Linux is like Wikipedia, any idiot can contribute to it. Windows is made by professionals so it has to be better."


Wow. Sometimes I wonder how these people even get hired. I guess a decent workaround would be if you can just get Docker approved, then you can do what you want.


Is that what people learn in Windows sysadmin school?


And they call themselves a "sysadmin" !


> You just have to have a personality that does not care if somebody gets mad at you sometimes.

THIS


This is absolutely the right answer as to how AWS got so big in the first place. Capex and IT are huge pain points. Starting something up on the free tier isn't. Once something's running and providing value, spending money on it becomes a "necessity" and the obstructionism goes away.


In most companies, AWS just becomes a new front-end to the same old IT bureaucracy, and dev teams are still disallowed from creating their own instances or EMR clusters or setting up new products like their own Redshift instance or ECR deployment solution.


Respectfully, those companies' cloud architects suck.

If someone goes to the trouble to migrate onto cloud, and then replicates pre-devops workflows... wow.


Not every company is a single-page webapp with a simple service portfolio... I work at a place with 3,000+ developers and over 700 applications - there is no way in hell our cloud portfolio would have any standards if we didn't have a robust operations/engineering team making it work. Sometimes, even when you adopt the cloud, you realize that your operations model is even more important, and there is nothing wrong with that.

The real problem is believing that there is a pure model that works for everyone.


The parent's point was that if the ops/deployment engineering team is unresponsive to the needs of developers, it may end up being better to run with no standards in cowboy mode. If the ops team is extremely fast and highly skilled, they will be a boon, full stop. If they are unskilled and politically obstructive, they will be a curse, full stop.


I got a more "you're doing it wrong" vibe...


I work at a Top 25 US company by revenue.

In no way am I responsible for our migration going well. But I do feel if a company this size can do it, then others failing to do so says more about their architecture teams than the endeavor's futility.


This has less to do with cloud vs. own hardware and more with how the company is structured. I've worked in companies before with an ops department: a few people responsible for managing all the cloud servers. All the devs (like me at the time) worked locally and had access to an isolated dev/UAT environment provided by that team. That team had most of the show automated; they weren't really provisioning any machines by hand.

If something bad slipped into prod there was a process in place of how a dev would work together with someone from that team to fix it.

> pre-devops workflows

I guess this is what you are calling a pre-devops workflow? In a lot of fields not all devs are allowed to see/touch the complete production environment. Not everyone can go the netflix way of "everyone pushes to production and we'll just fix it when it breaks".


I dunno, that sounds like a good way to do it.

I kinda shake my head when people call for devs to handle all the infrastructure and everything in prod. Why should a developer concern himself with the details of scaling and tuning Postgres, Elasticsearch or load balancers? If you don't outsource that, it's the job of the ops team. However, if that's the responsibility of the ops team, there's no reason for devs to have access to these systems beyond the application layer. That just seems like a good way to split the work of running and scaling an application.

Now if we're talking about applications, that's something different. In fact, it's the contrary. I am a big proponent of having developers manage and configure their own applications, including prod. It's fast and makes them develop better applications. However, doing this for critical systems in a responsible way requires automation to enforce best practices. At our place, devs don't have root access to production application servers, but they do have permission to configure and use the automation controlling those servers. And it's safe and rather risk-free because the automation does the right thing.

And it's also a different thing if we're talking about test. By all means, deploy a 3-container database cluster to poke around with it. I like experimentation and volatility in test setups and PoCs. Sometimes it's just faster to solve the problem with 3 candidate technologies and go from there. Just don't expect things to go into production just like that. We'll have to think about availability, scaling and automating that properly first.


I’m a machine learning engineer. I’d love it if I only needed to worry about machine learning and application interfaces.

Instead, because ops / infra arbitrarily block me from what I need, I have to be an expert on database internals, network bottlenecks, app security topics, deployment, containers, CI tools, etc., both so that I can “do it myself” when ops refuses to acknowledge, say, some assumption-breaking GPU architecture we need, and so that I can deal exhaustively with every single arch / ops debate or bureaucratic hurdle that comes up, endlessly justifying everything I need to do far beyond any reasonable standard.

For me, managing devops shit myself is a necessary evil. Far better than the case of unresponsive / bureaucratic ops teams, but worse than the unicorn case of an actual customer service oriented ops team that actually cares rather than engages in convenience-for-themselves optimization at every turn.


This is so frustrating to read as a service provider.

Especially because well-done infrastructure scales so much further than the manual style. We're currently dealing with a bit of fallout from a bureaucratic system we had at one point. People are so confused, because to get a custom container build, all they have to do is create a repository with a specific name and a templated Jenkinsfile, and 10-15 minutes later they get a mail with deployment instructions. It's so easy, and no one has to do anything else.


> I guess this is what you are calling a pre-devops workflow? In a lot of fields not all devs are allowed to see/touch the complete production environment. Not everyone can go the netflix way of "everyone pushes to production and we'll just fix it when it breaks"

Unwillingness shouldn't be confused with inability. Most companies can do this if they're not handling PII/PHI. It takes investment in smart people and time, but most companies besides pure software companies see tech as a cost center and avoid investing in better infrastructure and platform systems.


Very much so. When I was arguably doing devops for BT using PR1ME superminis back in the day, a mate who was in operations at an IBM shop was horrified that our team were allowed to write our own JCL.


One of my pet peeves is that there's no mandatory "History of ideas in software development" course in most CS curriculums.


A lot of those companies don't have cloud architects, just Amazon/Microsoft/Google sales reps talking into the ears of MBAs about turning capex into opex.


In my current company (a large bank), our team (~50 people) is trying to leave the on-premise infrastructure and move to the cloud, because the on-prem stuff is managed in such a way that it's hard for us to accomplish anything. It will probably cost a lot more to use the cloud, but we're gladly willing to pay for it if we can shed the bureaucracy this way.

I've read that in medieval times, kings sometimes abandoned their castles for new ones after too much feces had accumulated in them (as people were shitting wherever back then). I feel a strong parallel between that story and our situation.


why not cut out the middle man and just say those companies suck, and the cloud isn't going to fix it for you.


Because that seems overly reductive.

There are things you can do in cloud (especially as an SMB) that weren't possible on-prem.

To look at those new opportunities and say "No, how can we do things exactly the way we were?" seems like the real mistake.


Yep. No one is telling me no anymore and I can write Lambdas to replace cron jobs, use RDS to replace DBs, use S3 and Glacier to replace storage, etc. Fargate is awesome too. No gatekeepers nor bureaucracy, just code and git repos. That's why AWS is so awesome.
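
For what it's worth, a minimal sketch of the cron-to-Lambda pattern mentioned here, assuming a scheduled EventBridge rule triggers the handler (the bucket name and cleanup logic are hypothetical):

    # Minimal "Lambda instead of cron" sketch. Assumes an EventBridge
    # schedule invokes this handler; bucket and prefix are hypothetical.

    import datetime
    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        """Nightly job: delete temp objects older than 7 days."""
        cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=7)
        resp = s3.list_objects_v2(Bucket="example-temp-bucket", Prefix="tmp/")
        for obj in resp.get("Contents", []):
            if obj["LastModified"] < cutoff:
                s3.delete_object(Bucket="example-temp-bucket", Key=obj["Key"])
        return {"deleted_before": cutoff.isoformat()}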

And, I can show exactly how much everything costs. As well as work to reduce those costs.

AWS added to a small, skilled dev team is a huge multiplier.


> use RDS to replace DBs

RDS is just managed databases though...


>>politely tell them it's their own ... fault when they complain about the costs.

They don't care. Neither do their bosses. Not even the CFO or the CEO.

The only people who will care are the investors. And sometimes not even them, they will likely just sell and walk.

The only people who are likely to care are activist investors with large blocks of shares; those types have vested interests in these things.

Part of the reason why there is so much waste everywhere is that organizations are not humans. And most people in authority have no real stake or long-term consequences for their decisions. This is everywhere: religious organizations, companies, governments, etc. Everywhere.


Could be very interesting, when we have another economic downturn, to see if attitudes on this change. It certainly seems more cost-effective to run one's own technical operations rather than offloading onto AWS / Google / Microsoft.


It will only matter if the infrastructure is a large part of their cost of technical operations (as opposed to labor) or, more importantly, overall cost of operations.

In another thread on here (unfortunately I can't recall which), an executive shared a sentiment along the lines of "I don't care if it's 1% or 0.1% of my overall budget".

Perhaps 0.9% would become more significant during a downturn, especially for startups if VC money dries up.


They seriously can't buy a graphics card and slap it in the PCIe slot?


When I worked at Google, I really missed the dual monitor setup I had had at my previous job. I asked my manager how to get a 2nd monitor. Apparently, since I had the larger monitor, I was not allowed to get a 2nd one without all kinds of hassle. I asked if I was allowed to just buy one from Amazon and plug it in, and I was told no. I finally just grabbed an older one that had been sitting in the hallway of the cube-farm next to us for a few days, waiting to be picked up and re-used. I'm sure somebody's inventory sheet finally made sense 2 years later when I quit, and they collected that monitor along with the rest of my stuff.


I have a hilarious story about this from Google:

I wanted a second 30" monitor, so I filed a ticket. They sent me a long email listing reasons why I shouldn't get a second monitor, including (numbers are approximate, employee count from 2013 or so) "If every googler gets an extra monitor, in a year it would be equivalent to driving a Toyota Camry for 18,000 miles."

I'm thinking "this can't possibly be right", so I spend some time calculating, and it turned out to be approximately correct. So I'm thinking, "we should hire one less person and give everyone an extra monitor!". I replied "yes, I understand, go ahead and give me an extra monitor anyway". They replied "we'll require triple management approval!". Me: "please proceed". The first two managers approved the request, then the director emailed me "Why am I being bothered with this?". Me: "they want your approval to give me a second monitor", him: "whatever".

And finally, a few days later, I got a second monitor...


> They sent me long email listing reasons why I shouldn't get a second monitor

I had no idea Google was so cheap. Gourmet breakfast, lunch, and dinner every day? No problem. A couple hundred bucks for a second monitor? Uh... it's not about the money, we're, uh, concerned about the environment.


> Gourmet breakfast, lunch, and dinner every day? No problem.

These eke a few more hours of work out of you per day.

> A couple hundred bucks for a second monitor?

Arguments about productivity aside (I agree, more productive), these don't.


To be fair back then decent 30" monitors were around $800-$1000. (Also, it's just normal food, not "gourmet".)


But it’s certainly a step up from standard corporate catering


When I started at Amazon they gave us dual 22" monitors. I bought dual 27" Dells and a gfx card to back them and plugged it all in. I also explained how much I was swapping with 8 GB and a VM (45 minutes of lost productivity a day), and the RAM was $86. My manager happily expensed RAM for me and the rest of the team. 2 years later that was the standard setup. Now I have 7 monitors and 3 PCs from multiple projects and OSes and hardware re-ups and interns. None funded by me. The Dells are happily on my quad tree at home.

Amazon also allows you to BYOC and image it. Images are easily available. Bias for action gets you far.
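
A rough payback calculation for the RAM anecdote above (the fully-loaded hourly cost is a hypothetical assumption, not from the comment):

    # Back-of-the-envelope payback for the $86 RAM upgrade mentioned above.

    ram_cost = 86                  # USD, as quoted
    lost_minutes_per_day = 45      # time lost to swapping, as quoted
    hourly_cost = 75               # hypothetical fully-loaded developer cost, USD/hour

    daily_loss = lost_minutes_per_day / 60 * hourly_cost   # ~$56/day
    payback_days = ram_cost / daily_loss                   # ~1.5 working days

    print(f"lost productivity ~${daily_loss:.0f}/day; RAM pays for itself in {payback_days:.1f} days")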


A colleague at my office recently brought in a 43" 4K TV to use as a monitor. He was reimbursed immediately.

After a few days I bought one, too, took it home, and took the 2x 23" monitors it replaced to the office.

In a different department another friend systematically acquired new large monitors for her dev team. Her management chain complained about the expense.

Bias for action not only gets you far, it attracts folks to your team inside a company. ;-)


When I worked at Amazon in 2009 I brought my own monitor, chair and keyboard, because the standard ones were, well... sub-standard. :)


TIL Bias for action.


As did I and the excellent rabbit hole that it led to.


At an old job I managed to snag a 30" monitor from a colleague who was moving to a different job. He managed to get it in the brief window that it was offered instead of 26" ones. When I announced that I was moving on, the vultures started circling over who would get the monitor.

The inventory sheet will never be reconciled over that monitor, unless it breaks.


My company has the same policy - one monitor per employee. When I wanted a second one, I bought my own, set it up over the weekend, and when someone from IT finally noticed it a month or so later, I just said "Oh, I found it in the hardware e-waste pile over by the server room"

Not sure if I can take it with me when I leave, but hey, at least now I have my second monitor and it cost less than $300 - I'll probably bequeath it to a fellow employee.


Reminds me of the time when LCD monitors were just taking over. The software team had budgeted and gotten approval for a nice new development system along with a large LCD monitor. As we tried to place the order, the IT manager got hold of it and made a big fuss. Given that the CEO had even approved it, it caught us off guard. Best we could tell, she was butthurt about us getting LCD monitors before her team did.


Wow, no dual/triple monitor setup is just uncivilized.


There was a choice between a "big" single monitor, or a "small" dual monitor setup. I had just not realized how much I used the framing of the 2 monitors to stay organized, and I chose the "big" single monitor setup. That turned out to be a mistake. The point is that it was easier to "dumpster dive" for a second small monitor than it was to get one in a legit way...


At my employer, I happened to get one of the first machines they built with SSDs (in 2014) and one of the last machines they issued as desktop instead of laptop.

A year later they stopped issuing SSDs and I have to do the whole "managerial approval" thing to get any variation from the "standard" machine, so I am more than a year overdue for a replacement.


When decent monitors cost like $100-$150, it's nuts. I've got a quad 27" setup. I'd like to have more, but things get a little dodgy when you run out of real video outputs and have to start using DisplayLink USB adapters.


I always waited until my interns left and then "collected" their monitors. Eventually I had too many monitors (Just managing the asset tags and reconciling things was a huge pain.), and replaced everything with a small 24" 4K.


You're neglecting management costs. IT teams don't buy hardware with corporate credit cards, they have to work through pre-existing requisition processes that properly budget for the hardware, make sure support contracts are in place, etc. You have to migrate whatever workload off the server where you installed the GPU (politically problematic since Murphy promises you that your users will be connecting to the server by its IP address, or the workload is "mission critical" and can't be stopped, or whatever) and reserve the server for GPU work. If you're running something like vCenter to virtualize your datacenter resources, you need to make sure that VMware picks up on the GPU in the new machine, doesn't schedule new VMs without GPU requirements on the server with the new GPU, etc.

Comments like these (why can't I just do X?) remind me of the difference between being single and being in a serious relationship. When you're single, you can do whatever you want. When you're in a serious relationship, you can't just make whatever decisions you want without talking to your partner. Well, big corporate enterprise is like that on steroids, because instead of having one partner, you have several or dozens, and everyone needs to buy in.


However, this cannot justify just any disparity in prices. Data science FTW !

Capex => Opex has a name. It's called "a loan". So let's model cloud usage as a loan. Let's assume you want a machine learning rig for an employee and you set depreciation at 2 years. Let's also assume that the article is correct about the ballpark figures, somewhere around $1400/month for renting, and $3000 for the machine.

And let's ignore the difference in power usage, extra space, bandwidth, ... (which is going to be 2 digit dollars at the very most anyway, since you need all that for the employee in the first place)

So how much interest does the cloud charge for this capex => opex change? You pay $33,600 in rent for a $3,000 machine over two years, which works out to well over 500% per year. Pretty much every bank on the planet will offer very-low-credit-score companies 30% loans; even 10% is very realistic.
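
A sketch of that calculation with the figures quoted above ($1,400/month rented vs. a $3,000 machine over 24 months); the rate below is a simple, non-compounding annualization, so the exact percentage depends on how you count, but it dwarfs any bank loan either way:

    # Treat cloud rent as a "loan" on the machine's purchase price.
    # Simple (non-compounding) annualization of the extra amount paid.

    machine_cost = 3_000       # one-off purchase (capex)
    cloud_monthly = 1_400      # monthly rent for a comparable rig (opex)
    months = 24                # 2-year depreciation horizon

    total_rent = cloud_monthly * months       # $33,600
    extra_paid = total_rent - machine_cost    # $30,600 over 2 years
    simple_annual_rate = extra_paid / machine_cost / (months / 12)

    print(f"total rent: ${total_rent:,}, extra vs. buying: ${extra_paid:,}")
    print(f"implied simple annual rate: {simple_annual_rate:.0%}")   # ~510%/year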

There are no words ... Just give the man his bloody machine. Hell, give him 5, 4 just in case 3 fail, and 1 for Crysis just to be a "nice" guy (you're not really being so nice: you're saving money) and you still come out ahead of the cloud.

We both know why this doesn't happen, the real reason: you don't trust your employees. Letting this employee have that machine would immediately cause a jealousy fight within the company, and cause a major problem. That's of course why the GP comment is right: leave this bloody nightmare of a company, today.


Why should I as a developer worry about this? It's their job.


In most (good) companies, teams/groups have budgets where large portions are at the discretion of the group manager. I would expect group managers to approve such hardware requests. It's much harder to justify $4K bills for cloud services. This is not an IT issue, and I think this thread has gotten a little derailed here.


How often do employers wait for their employees' approval before making a decision?


They seriously can't buy a graphics card and slap it in the PCIe slot?

Well I mean they obviously can...

First of all you're assuming IT will just let you have a decent machine with PCIe slots. It's all about laptops, don't you know. Workstations are so 2012.

Secondly, while they might, after much begging, let me have one graphics card to put into this one old workstation I've scavenged, they certainly won't let me have a machine with top-of-the-line specs with several cards or, heaven forbid, several machines (but they'll happily let me have a laptop that costs just as much for less than half the performance).


We have had luck begging for "just one" to try it out. Then we ran some benchmarks. We were able to show that the one developer with the fast workstation compiled faster, and therefore was more productive. A few numbers with dollar signs and management told IT to stop stalling and get everybody a good workstation - things happened fast.

You need to repeat that every couple years though.


Ahh yes, corporate laptops.

At our enterprise, we have HP ultrabooks or whatever they are called. Windows 7, 32-bit. You know, it has a decent i5 in it, but there is some specific bottleneck making it terrible. Every employee that sits in an office has this piece of crap.

It takes 3 minutes to start IntelliJ. For some years now, ~50 developers have been able to get MacBooks; luckily I am one of them, else I'd just be pissed off every single work day.


Probably some corporate required antivirus package.


Not to mention you'll be on some awful Dell tower with no 6/8-pin power connectors and the case won't fit anything reasonably powerful, and the IT gatekeepers have no idea what you mean when you explain this problem.


Last time I worked in big corporate I had to use their laptop, even if I didn't plug mine into anything, because of security. At some point said laptop got stolen and I sent a buying request for a new one. It took a year without feedback and a heated discussion in which my boss accused me of having been too passive about it. I was using my own laptop by then. At that point you can't even call it shadow IT anymore.

Any buying requests go through the direct supervisor, the financial department, the director of the company, then back to the supervisor for the final signature. And because it's Germany, it was on paper. When I say any buying request, I mean any. From any programming book, to any accounting book for finance or any HR book.

You'd think people at the top have better things to do. You think that, until you hear the department heads discuss which excel format files should be saved in.


They seriously can’t.

Typically if you want to buy some hardware in a big corp you go to some preapproved internal catalog and pick from one or two models of desktop or laptop from vendors like Dell, HP, or Lenovo - whatever was preapproved. Sometimes you can fine-tune the specs, but not in a very wide range.

Buying anything outside of that will require senior-level approval and - heaven forbid - a “vendor approval study” (or similar verbiage).


I worked at a startup that started doing that. The company was only around 250 people at the time, but the VP of IT was hired from a Fortune 100 company, and he instituted the catalog approach of only 2 supported laptop configs (13" or 15") and one desktop.

When I left the company, the engineering director was working out how to let his team go rogue - basically handling all of their own IT support, the only caveat being that they had to install IT's intrusion detection software, but aside from that, an engineer would be able to buy any machine within a certain budget.


In most medium size companies, think Fortune XXX, only IT is allowed to do that, after getting all the required approvals.

And good luck getting the cost center ID for the initial ticket submission to start with.


No, not in most corporate IT shops... the card has to be “procured” through the corporate vendor, support has to be in place, drivers for some shitty obsolete OS flavor have to be installed... and that's assuming you can even find a server in a rack that is recent enough to run it all; if not, then the same shit has to happen with the server.

It's all IT's own fault, frankly - or more like the fault of corporations treating IT as mostly desktop helpdesk support.


Most of the time the standard power supplies and cases of these desktops do not support a 1080 Ti's power consumption.


Hey there, I'm the writer for the article. I have another article planned that will talk about part picking and actual building. For now though, here is the parts list: https://pcpartpicker.com/b/B6LJ7P w/ 1600W psu.


Great read! As an aside your hyperlink to HomebrewAIClub accidentally included a comma.


We just got around them and argued after the fact. It is a pain no matter what. But we actually do both, internal custom builds and cloud computing, so no matter what we are set.


Innovation being sidelined in favor of status quo and 'cover-my-ass' risk management highlights exactly the reason I won't work for a bureaucratic corporation...


Money^, benefits^^, ample PTO^^^, and an 8-5 workday allowing plenty of time for the important things in life like my family are exactly the reasons I do.

^: You know, like real money, not 90k plus 2.5% of something that will probably not exist next year

^^: You know, like real benefits such as low-cost, low-deduct. health insurance, matching 401k, etc.

^^^: As in, real, actual PTO, not this "unlimited" bullshit some startups and now sadly companies are replicating. If you decide to quit and can't "cash out" your PTO, you don't have real PTO.


Low deductible healthcare? Where do you live? Europe?


America, surprisingly, in a southern state no less, where heart disease and diabetes rates are higher. Our work has a grandfathered non-ACA plan (we got to keep our old insurance, which means we miss out on "free" stuff like contraceptives and a breast pump and other banal things, but in exchange we get much better coverage and a lower deductible).


You always have at least two options, and this case is not extreme:

1. You can blow up any amount you like to.

2. Or, you can figure out what you are trying to do, then learn how to do it better. There is a cheaper way to run in the cloud too - https://twitter.com/troyhunt/status/968407559102058496



