Hacker News new | past | comments | ask | show | jobs | submit | more ldoughty's comments login

We offered "usage + minimal base fee" (cost-plus) pricing first, then used that data to determine fixed-rate plans based on initial customer usage. We still offer both, but fixed rate outsells by far because of the predictable bill... though it is generally more expensive for the consumer (since we take on the risk).

We find our customers (mostly education sector) flat out could not get variable pricing approved. There are probably many other teams and sectors in that boat.

That said, flat-rate pricing is sooooo much easier to deal with... especially if you want to pass through the cloud costs rather than use your own "usage determination".
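For illustration, here's a tiny sketch of how a flat rate could be derived from cost-plus usage history. The base fee, percentile, and usage numbers are all made-up assumptions, not our actual pricing:

```python
# Sketch: price the flat plan near the high end of observed monthly
# costs, so the provider (not the customer) absorbs the variance.
# All figures below are illustrative assumptions.
def flat_rate_from_usage(monthly_costs, base_fee=20.0, risk_percentile=0.9):
    costs = sorted(monthly_costs)
    idx = min(int(len(costs) * risk_percentile), len(costs) - 1)
    return base_fee + costs[idx]

# Ten months of observed cost-plus billing for one customer:
usage = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0, 49.0, 58.0, 44.0, 50.0]
print(flat_rate_from_usage(usage))  # base fee + ~90th-percentile month
```

Most months the customer overpays a bit; in the worst months the provider eats the difference, which is exactly the trade the education buyers wanted.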


Distilled water is effectively demineralized... not perfectly... technically it's collected steam from boiling water, so the process kills most bacteria and leaves behind most minerals and other material that can't be carried by steam.


I hadn't considered that steam could carry tiny particles of minerals, but it's obvious now that you point it out. So demineralized water would probably be closer to pure H2O. I've never noticed demineralized water at my grocery store, but maybe that's just because I've never looked. But I will next week!


Microplastics and forever chemicals both show up in clouds and rainwater, so I'm guessing they're both in distilled water too, but I wonder whether at least the microplastics are removed from demineralized water.


In the USA it is almost always labeled distilled water... I've never seen "demineralized water" advertised, as the alternative methods are relatively new (at a competing price point)... they would likely just be called "purified", and you'd have to look at the label.

Purified water may or may not be demineralized, depending on the method used to purify.

Basically any other form of bottled water is about as regulated as homeopathic remedies... Short of killing people, nobody really watches them. This is part of the reason why distilled water is recommended for home medical use like CPAP machines or Neti pots, with purified as second best (though it COULD be equal or better, the term is too generic)


I'm not sure "large percentage" is a claim I'd agree with. My searching skills are failing me; do you have any kind of source for that? I'd be shocked if it was over 5%...


I live near a medium-busy street. I haven't seen actual numbers but it wouldn't surprise me if at peak hours there are over 100 cars passing per minute.

If 5% of those are overly loud, that's a very loud car every 12 seconds on average, and most of them will take somewhere between 5 and 10 seconds to come and finally go away, so there's a loud car audible for much of peak time. If you don't think that's large, we have very different noise thresholds.
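Back-of-the-envelope, using the figures above (100 cars/min, 5% loud) plus an assumed 5-10 s of audibility per loud car:

```python
# Rough noise arithmetic; all input figures are assumptions, not measurements.
cars_per_min = 100
loud_fraction = 0.05

loud_per_min = cars_per_min * loud_fraction   # 5 loud cars per minute
seconds_between = 60 / loud_per_min           # one loud pass every 12 s

avg_audible_s = (5 + 10) / 2                  # midpoint of the 5-10 s range
duty_cycle = loud_per_min * avg_audible_s / 60  # fraction of time a loud car is audible

print(seconds_between, duty_cycle)  # 12.0 0.625
```

So at those rates a loud car would be audible roughly 62% of the time at peak, which feels "large" to me even if the per-car percentage sounds small.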


I guess "large" is subjective. 1-5% is the ballpark I have in my head based on experience, which qualifies as "large" to me when I get passed by thousands of cars a day.

The hard numbers I'm aware of are about motorcycles, which have much higher rates of illegal modifications than other vehicles. This source documents a bunch of other sources, with estimates ranging from 40-70%:

https://noisefree.org/sources-of-noise/motorcycles/


My argument is: weather.

I think it's fine, and better for society, to have applications and models for things we don't fully understand... We can model lots of small aspects of weather, and we have a lot of factors nailed down, but not necessarily all the interactions... and not all of the factors. (Another example for the same reason: gravity.)

Used responsibly, of course. I wouldn't think an AI model designing an airplane that no engineer understands is a good idea :-)

And presumably all of this is followed by people trying to understand the results (expanding potential research areas)


It would be cool to see an airplane made using generative design.



Note that browsers/extensions/settings that hide your referrer might prevent you from encountering this redirect.


Guacamole has supported Windows RDP for at least eight years now. (Probably longer; I started using it in 2016 and it had support already.)


It's not uncommon for businesses to "realign their business strategy" in a way that fires everyone in the union...

In this case, as the parent stated, outsourcing would be a likely outcome


Yes, it's not like there aren't already third-party games testing companies in Eastern Europe and elsewhere. I'm not sure to what degree games companies use agile and DevOps practices, though. Working with an outsourcer for QA is a sure-fire way to increase your lead times and slow down development.


I feel like QA is generally the easiest thing to outsource and will have the least impact.


Working at a very small place, in-house QA was absolutely impactful as the subject matter experts on what the software actually did and was supposed to do. I am not sure how you measure that.


A good QA team might even be the folks who best know how the system actually works and what all the features are. A developer can easily end up working in a single corner, with a few happy paths, whereas QA should touch nearly every surface of the product, and even know most of the specs.


Exactly. We had 1 to 2 QA people compared to 6 to 8 developers; roughly speaking, each developer knew the particular details of about a third of the system, whereas QA knew all the interactions.


That's definitely a fair point. However, that knowledge should also live with other members of the team, and it can be helpful for QA to have less knowledge of the system. How can that be helpful? So they can view the system through the lens of a typical end user.


> For scientists, an ice-free Arctic doesn’t mean there would be zero ice in the water.

> Instead, researchers say the Arctic is ice-free when the ocean has less than 1 million square kilometers (386,000 square miles) of ice

I really, really dislike this. This is going to be the next round of global warming shenanigans, where people latch onto the term, get confused, and point to the "falsehood" to discredit the science. Using such terms really hurts the message and the mission.

Give it another catastrophic-sounding name that isn't easily twisted into being false... "Arctic ice season collapse" or something that doesn't suggest there's no ice, but rather a change in the status quo.


> "Prompt engineering is best done by the model"

Well, that would be ideal, but if I type in "Middle-aged white male in full plate armor standing on a battlefield resting on a full tower shield" I will likely want to further modify the result, or style, or detail level. There almost certainly will continue to be "hacks" to get it stylized as desired. Even if I say "Painting of...", there's still a huge range of options.

I understand and agree that it's desirable to get AI prompts as close to natural language, but how do you quantify a level of stylization in natural language? "A very very very very Michelangelo style painting of a slightly slightly slightly Middle-aged white male..."

I think prompt engineering will change quickly, and to keep up, it could potentially be a 'profession' that is very specific to the model. I don't think that's a bad thing, but I would think/agree that it will likely not employ many people at all.


"Prompt engineering" is just specification writing, but with an LLM instead of a person implementing the spec's requirements. Natural language is imprecise, precisely specifying requirements is hard, and LLMs can't read your mind any more than other humans can.


Maybe it would help if you actually understood the specific qualities of "very very Michelangelo style" and could describe those, instead?


The problem is the source of the reports and display of the reporting.

I'd trust Down Detector a lot more if it was filled with the Hacker News community -- people who understand that there's a difference between "DNS" and "routing", and that your phone can have internet access at home while your home PC does not.

I personally hate Down Detector's graphing because it can make it 'look' like there's an issue when there isn't really one... Facebook with 500,000 reports looked just as down as Google with 1,000 reports. For equally sized/used services, I would not trust that Google is down based on 1,000 reports. I had a coworker ask me what was going on with the internet because "everything is down... Facebook, Google, Gmail, Microsoft!" (after seeing the Down Detector home page).

DD should normalize the graphs against each service's own history in some way. A service shouldn't spike because it went from 30 reports/hour to 100 when it has a history of 100,000+ reports during real outages. The 100 reports are probably mis-reports, but you can't tell until you dig into each service, one by one, with separate page loads.
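Scoring each hour against the service's own report history would go a long way. A rough sketch of what I mean (the numbers and threshold are made up, and this is not anything Down Detector actually does):

```python
from statistics import mean, stdev

def is_anomalous(history, current, min_sigma=3.0):
    """Flag a spike only relative to this service's own report history.

    history: past hourly report counts for the service (assumed data)
    current: this hour's report count
    """
    mu = mean(history)
    sigma = stdev(history) or 1.0  # avoid divide-by-zero on flat history
    return (current - mu) / sigma >= min_sigma

# A huge service whose real outages generate tens of thousands of reports:
big_service_history = [30000, 100, 50, 80, 120, 90]
# A small service that rarely gets reports at all:
small_service_history = [5, 3, 4, 6, 5, 4]

print(is_anomalous(big_service_history, 100))    # False: 100 is noise here
print(is_anomalous(small_service_history, 100))  # True: 100 is a real spike here
```

Same raw count, opposite conclusions, which is exactly the distinction their current graphs hide.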

