
Asset management is definitely a thing. Tag your environments, tag your apps, and give your apps criticality ratings based on how important they are to running the business. Then it's a matter of a query to know which servers can be shut down, and which absolutely must stay up.
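As a rough sketch of what that query can look like (the schema, tag values, and hostnames below are made up; every shop's CMDB looks different):

    # Minimal sketch of an asset-inventory query, assuming servers are tagged
    # with environment, owning app, and a 1-5 criticality rating (1 = most
    # critical). Table and column names are hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE servers (
            hostname     TEXT PRIMARY KEY,
            environment  TEXT,     -- e.g. 'prod', 'staging', 'dev'
            app          TEXT,
            criticality  INTEGER   -- 1 = most critical, 5 = least
        );
        INSERT INTO servers VALUES
            ('web-01',   'prod', 'storefront',    1),
            ('batch-07', 'prod', 'weekly-report', 4),
            ('dev-box',  'dev',  'storefront',    5);
    """)

    # "Which servers can be shut down, and which absolutely must stay up?"
    can_shut = conn.execute(
        "SELECT hostname FROM servers "
        "WHERE environment != 'prod' OR criticality >= 4"
    ).fetchall()
    must_stay = conn.execute(
        "SELECT hostname FROM servers "
        "WHERE environment = 'prod' AND criticality <= 2"
    ).fetchall()
    print("can shut:", [h for (h,) in can_shut])
    print("must stay:", [h for (h,) in must_stay])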



> provide your apps criticality ratings based on how important they are to running the business

In a decentralized, self-service model, you can add "deal with convincing a stakeholder their app is anything less than most-critical."

Although it usually works itself out if higher-criticality imposes ongoing time commitments on them as well (aka stick).


That seems like a poorly run company. Idk. Maybe we’ve worked in very different environments, but devs have almost always been aware of the criticality of the app, so convincing people wasn’t hard. In most places, the answer hinges on “is it customer facing?” and/or “does it break a core part of our business?” If the answer is no to both, it’s not critical, and everyone understands that. There’s always some weird outlier, “well, this runs process B to report on process A, and sends a custom report to the CEO …”, but hopefully those exceptions are rare.


>devs have almost always been aware of the criticality of the app

I'm sure that developers are aware of how important their stuff is to their immediate customer, but they're almost never aware of the relative criticality vis-a-vis stuff they don't own or have any idea about.


I maintain a couple of apps that are pretty much free to break or to revert back to an older build without much consequence, except for one day a week, when half the team uses them instead of just me.

Any other day I can use them to test new base images, new coding or deployment techniques, etc. I just have to put things back by the end of our cycle.


Welcome to University IT, where organizational structures are basically feudal (by law!). Imagine an organization where your president can't order a VP to do something, and you have academia :)


agree. ethbr1 is 100% right about this being a problem; if politics is driving your criticality rating, it's probably being done wrong. it should be as simple as your statement, being mindful of some of those downstream systems that aren't always obviously critical (until they are unavailable for $time)

edit: whoops, maybe I read the meaning backward, but both issues exist!


Some kind of cost shedding to the application owner (in many enterprises this is not the infra owner) is definitely needed; otherwise everything becomes critical.

"Everything is critical" should sound a million alarm bells in the minds of "enterprise architects" but most I've discussed this with are blissfully unaware.


In moments of crisis, immediate measures like physical tagging can be crucial. Yet, a broader challenge looms: our dependency on air conditioning. In Toronto's winter, the missed opportunity to design buildings that work with the climate, rather than defaulting to a universal AC solution, underscores the need for thoughtful asset management tailored to specific environments.


Toronto's climate and winters are changing dramatically; the universal AC solution is almost mandatory because the area is no longer as cold as it once was.


do you have a source for that? my source[1] appears to show that the average temp hasn't changed much in the past quarter century:

https://toronto.weatherstats.ca/metrics/temperature.html


Average temp probably isn’t what you need here - peak temperature and length of high temperature conditions would be more important when figuring out if you need to have artificial cooling available.
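For instance, something like "hours above a cooling threshold" says more about whether you need AC than a yearly average does. A minimal sketch, where the threshold and the hourly readings are made-up placeholders rather than real Toronto data:

    # Count hours above a cooling threshold from hourly temperature readings.
    # The threshold and the sample series are illustrative only.
    COOLING_THRESHOLD_C = 26.0

    hourly_temps_c = [18.0, 22.5, 27.1, 29.4, 30.2, 28.8, 24.0, 19.5]

    hours_needing_cooling = sum(1 for t in hourly_temps_c if t > COOLING_THRESHOLD_C)
    peak = max(hourly_temps_c)
    print(f"peak {peak:.1f} °C, {hours_needing_cooling} h above {COOLING_THRESHOLD_C} °C")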


From the site you cited:

      Month        Max       HR Mean   Mean M/M   Min
      Feb 1 2014    5.5 °C   -7.8 °C    -8.3 °C   -21.3 °C
      Feb 1 2024   15.7 °C    1.3 °C     1.3 °C    -8.2 °C
      Jan 1 2014    7.5 °C   -8.3 °C    -8.6 °C   -24.2 °C
      Jan 1 2024    6.5 °C   -2.3 °C    -2.1 °C   -15.5 °C
      Dec 1 2013   15.6 °C   -4.0 °C    -4.2 °C   -17.8 °C
      Dec 1 2023   13.1 °C    2.7 °C     2.7 °C    -4.9 °C

Looking at the MEANs shows the real story, as does the fact that the MINs are getting nowhere close to what they used to be.

Also, I live here; snow volumes have been low enough that I no longer need a snowblower. I used to build a backyard rink and haven't been able to do it properly the last couple of years because the weather is too mild and I can't get enough sustained cold for it to stay frozen. Public outdoor rinks (that aren't artificially chilled) suffer the same fate and are rarely, if ever, available.

Even in Ottawa, where at this time a decade ago people would have been skating on the Canal for weeks by now, it's still not frozen over and open to the public.


Several data centres in Toronto (including the massive facilities at 151 Front Street West where most of the internet for the province passes through) make use of the deep lake cooling loop that takes water pumped in from Lake Ontario to cool equipment before moving on to other uses. Water is pumped in from a sufficient depth such that the temperature is fairly constant year round.


I think the system just has an isolated loop that heat exchanges with the incoming municipal water supply. Unsure if the whole system cools the loop glycol further or not, but ultimately there’s still a compressor-based aircon system sitting somewhere, probably at each building, that they’re depending on. They’re just not rejecting heat to the air (as much?).

Would love to know if a data centre could get paid for rejecting its heat to the system during what is heating time for other users.


> Would love to know if a data centre could get paid for rejecting its heat to the system during what is heating time for other users.

This is definitely a thing in other parts of the world https://www.datacenterdynamics.com/en/news/stockholm-data-pa...

> IP-Only, Interxion and Advania Data Centers are building data centers on the Kista site, which is connected to Stockholm's district heating system so tenants get paid for their waste heat, which is used to warm local homes and businesses


I upvoted, but I agree so much, I had to comment, too. I wonder how long it'd take to recoup the cost of retrofitting such a system. Despite this story today, this type of problem must be rare. I imagine most of the savings would be found in the electric bill, and it'd probably take a lot of years to recoup the cost.
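Back-of-envelope, if you knew the retrofit cost and the annual electricity savings, it's just a simple payback calculation; both numbers below are invented placeholders:

    # Simple payback estimate for a heat-reuse / free-cooling retrofit.
    # Both inputs are hypothetical placeholders, not real figures.
    retrofit_cost_usd = 2_000_000
    annual_energy_savings_usd = 250_000

    payback_years = retrofit_cost_usd / annual_energy_savings_usd
    print(f"simple payback: {payback_years:.1f} years")  # ~8 years with these inputs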


It's pretty common for hyperscalers actually: https://betterbuildingssolutioncenter.energy.gov/showcase-pr...

https://greenmountain.no/data-centers/cooling/

I vaguely remember some other whole-building DC designs that used a central vent which opened to the outside, depending on the external climate, for some additional free cooling. Can't find the reference now though. But geothermal is pretty common for sure.


You may be thinking about Yahoo’s approach from 2010?

> The Yahoo! approach is to avoid the capital cost and power consumption of chillers entirely by allowing the cold aisle temperatures to rise to 85F to 90F when they are unable to hold the temperature lower. They calculate they will only do this 34 hours a year which is less than 0.4% of the year.

https://perspectives.mvdirona.com/2011/03/yahoo-compute-coop...
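(For reference, 34 hours out of the roughly 8,760 hours in a year is about 0.39%, which is consistent with the quoted "less than 0.4%" figure.)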


No, what I was remembering was a building design for datacenters, but I can't find a reference. Maybe it was only conceptual. The design was to pull in cold exterior air, pass thru the dehumidifiers to bring some of the moisture levels down, and vent heat from a high rise shaft out the top. All controlled to ensure humidity didn't get wrecked.
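A hedged sketch of the control logic as I remember the idea; the setpoints and mode descriptions are guesses for illustration, not the actual design:

    # Toy decision loop for outside-air ("free") cooling with a humidity guard.
    # Setpoints are illustrative only.
    MAX_INTAKE_TEMP_C = 18.0
    MAX_INTAKE_RH_PCT = 80.0

    def choose_cooling_mode(outside_temp_c: float, outside_rh_pct: float) -> str:
        # Use outside air when it is cold and dry enough; otherwise fall back
        # to conventional mechanical cooling.
        if outside_temp_c <= MAX_INTAKE_TEMP_C and outside_rh_pct <= MAX_INTAKE_RH_PCT:
            return "open dampers: dehumidify intake air, vent hot air up the shaft"
        return "close dampers: fall back to mechanical chillers"

    print(choose_cooling_mode(outside_temp_c=5.0, outside_rh_pct=60.0))
    print(choose_cooling_mode(outside_temp_c=28.0, outside_rh_pct=90.0))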


This is done in Facebook's data center in Northern Sweden https://www.theguardian.com/technology/2015/sep/25/facebook-...


I know someone who did that in the Yukon during the winter, just monitor temperatures and crack a window when it got too hot. Seems like a great solution except that they were in a different building so they had to trudge through the snow to close the window if it got too cold.


was that “PC” on the corner of “S” Street by any chance?

didn’t think I’d see Yukon here :)


Having an application, process, and hardware inventory is a must if you are going to have any hope of disaster recovery, along with regular failover exercises to make sure you haven't missed anything.


Good documentation and metadata like this is necessary for corporations to truly be organized.



