
I think you and the parent comment are talking about different scales. A large SaaS company deal could be $300k per month per customer, and the sales process for a company like that can involve changing the software to meet the needs of the customer. A very early lesson is that what the customer says they need is not always the same as what they actually need.

One of the many reasons calls happen is that customers say "I need XYZ feature in order to do this deal," and the salesperson then needs to ask why they need XYZ feature, and what they want to accomplish, and maybe existing ABC feature actually meets their need, or maybe the company needs to develop XYZ feature to secure the contract. Once you get into a complex domain, that is not happening over email.

The article contains good advice for many businesses out there, but it's worth considering the situations where it doesn't apply, too.


It certainly makes sense for a deep dive sales interaction if you're actually going to your product or engineering team to make changes.

But if you're selling what's already on the truck, as most of these companies are, then there is no reason for the "call for pricing" for a standard enterprise plan. Pricing pages should have a separate column for custom/bespoke solutions, where it makes sense to have "schedule a call".


I really like this idea, and would love to subscribe when you’re ready!

I think many commenters have pointed out reasons why this model is not suited to widespread user adoption, but I just want to say that may not be a bad thing. Meme content moderation is not a big problem if your users are not the type to submit or upvote a lot of low effort memes in the first place. A paywall will inherently limit your user growth, but if the users you get end up being people who are happy to create and support high quality content, that seems pretty ideal to me (unless you are looking for maximal growth, VC level returns, and/or an IPO).

If there’s a mailing list or other way to get notified when you’re ready to do a full launch, I’d love to sign up for that!


I agree; I’ve had multiple instances recently of booking through a third party where getting changes or refunds is very slow and clunky, if they will even do it at all. Contrast that to my experience with booking a hotel directly through their website, where I mistakenly booked the wrong dates. One phone call to the hotel and 2 minutes later they changed it with no hassle.


It’s free to the end consumer, but these schemes make money by charging a percentage to merchants. And those merchants will compensate by bumping up their prices a little to cover the transaction costs.

In the best case, these are pointless services that only transfer money to the finance industry. In the worst case, they incentivize people to spend money on things they can’t afford (and also transfer money to the finance industry).


I found this video to be a good explainer: https://m.youtube.com/watch?v=R1JaMRpcDrQ&pp=ygUbQnV5IG5vIHB...

It seems like many of them act like credit cards and charge the merchant a percentage, since they “drive consumption” and encourage people to buy stuff. Of course, this fee will likely be added into the price that all consumers pay, so as these get bigger, we will all be subsidizing interest-free loans to people in the form of 1%-3% higher prices. Much like credit cards are today.


You seem to be the lucky lightning rod comment on this!

I guess it's fitting that the tech world gets particularly up in arms about this; we're certainly a group who enjoys demanding standardization while refusing to change our own practices.

(But also, if you're not using ISO 8601 in a code context, what are you doing?)


I've settled on the explanation that this is just a cultural difference, and anyone arguing from a place of "logic" or "correctness" is refusing to accept that it's all convention, and different people do things differently.

As an analogy, some languages put the verb at the end of the sentence (e.g. Latin, certain German grammatical structures). As an English speaker, this is weird because I don't really know what's going on until the sentence is done, and it feels like I'm putting together a little puzzle in my head. Whereas to a fluent speaker, it presumably just makes sense and you don't really find it difficult. Same thing with dates.

As an American, I like our convention for writing dates. I usually care about the month first. Immediately upon seeing a date, I know the rough time frame. Is it this month? Next month? Around Christmas? Around my birthday? Then the day pins it down to something specific. I will assume a date is referring to the current year unless I see a different year, in which case it's a quick update to my mental model. April 12 flows as "soon, and exactly 18 days away" and September 12 flows as "far away, and the middle part of the month".

I get that computers are a different use case, and there I'm an ISO 8601 advocate.
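
To show what I mean by the code context, here's a tiny sketch using Python's standard library (the specific dates are made-up examples):

from datetime import date, datetime, timezone

# ISO 8601 (YYYY-MM-DD) sorts lexicographically in chronological order,
# which is a big part of why it wins for filenames, logs, and APIs.
d = date(2024, 4, 12)
print(d.isoformat())                      # 2024-04-12

# Full timestamps carry the time plus an explicit UTC offset.
now = datetime.now(timezone.utc)
print(now.isoformat(timespec="seconds"))  # e.g. 2024-04-12T15:30:00+00:00

# Parsing goes the other way.
parsed = datetime.fromisoformat("2024-04-12T15:30:00+00:00")
print(parsed.year, parsed.month, parsed.day)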


And the 1975!

https://m.youtube.com/watch?v=0E_uK01odHg

Based on the context of the title, I had just assumed petrichor was the name of an artist or something. Glad to know the real definition!


There actually is one; I quite liked his debut when it came out.

https://www.discogs.com/de/artist/3536488-Petrichor


Executive summaries are an industry of their own, where people boil down non-fiction books into short pieces of text, e.g. Blinkist (not affiliated, just the first result on Google).

It is somewhat funny that presumably a good amount of work goes into padding the book, and then work goes in on the other side to strip the padding away.


I can tell you that the current focuses of AI implementations are around real, impactful issues: sepsis risk, readmission risk, deterioration index, etc.

The problems with AI in healthcare are:

1) People don’t want it to be a black box - that means quantifying the factors that go into a recommendation

2) Operationalizing AI recommendations is hard. AI tends to give graded information on binary decisions (e.g. there’s a 68% chance this patient is septic. Should someone go check on them? What if it were 49%?). The challenge becomes deciding how that information should be shown to people and what the acceptable false positive and false negative rates are (there's a toy sketch of this after the list).

3) The same problems of AI everywhere. Things like garbage in garbage out, unrealistic user expectations, feeling like it basically tells you what you already know, the challenge of getting insight from a pile of data.
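
To make #2 concrete, here is a toy sketch of how a single alert threshold turns a graded risk score into a yes/no decision; the numbers are invented, not from any real system:

# Hypothetical illustration only: a model emits a sepsis risk probability,
# and the operational question is where to draw the alert line.
patients = [
    # (predicted_risk, actually_septic)
    (0.68, True), (0.49, True), (0.55, False),
    (0.12, False), (0.81, True), (0.40, False),
]

def alert_stats(threshold):
    alerts = [(p, y) for p, y in patients if p >= threshold]
    false_alarms = [p for p, y in alerts if not y]
    missed = [p for p, y in patients if p < threshold and y]
    return len(alerts), len(false_alarms), len(missed)

for t in (0.49, 0.68):
    n_alerts, n_false, n_missed = alert_stats(t)
    print(f"threshold={t}: {n_alerts} alerts, "
          f"{n_false} false alarms, {n_missed} septic patients missed")

# Lowering the threshold catches more septic patients but pages staff more
# often; raising it quiets the pager but misses real cases. Picking that
# point is an operational decision, not something the model answers for you.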


No, the problem with AI in healthcare is that, like much of healthtech, it further reduces the ability of providers (especially in hospital settings) to respond to fluid and evolving situations that may fall outside the dotted lines the AI understands or the scenarios the system allows you to work within. Specifically, it creates further red tape that providers need to worry about, more checkboxes on an iPad to be clicked, more time required per patient on administrivia.

It could be done well, but it will be done poorly: it will increase the burden on front-line workers while making administrators feel like they can say they accomplished a big project this year. At the end of the day, rather than making healthcare more auditable, practitioners will learn to quickly fill in bogus data on the new system so they can go deal with the patient that's coding, and when the AI gives a recommendation a provider doesn't like, they'll just ignore it anyway.

In a good system that wasn't falling apart at the seams, AI in healthcare would be a boon, but in a broken system that's falling apart and failing its front-line workers, it will just serve as a distraction and another burden.


I think what you described falls under #2 in my reply. “Doing it well” is not a trivial option that people are ignoring; “doing it well” is the thing people are trying to solve. AI is not a magic bullet that always makes everything better.


Honestly, that's worse than I thought. I work in the field, particularly in relation to accountable AI, and it's not OK to have models that tell you whether to check on people to make sure they're not dying unless there is also a human checking every case, which I hope is what's going on. How would you like to be different from the training data and deemed "no risk, 100% confidence" when you actually have a life-threatening problem?


In a hospital setting, nurses and doctors round regularly. No one is talking about using AI as a replacement for that, because no one has anything approaching that much trust in predictive models.

Predictive models are most often used as either an alerting mechanism or an additional data point on a dashboard. You need to be careful of alert fatigue, where too many false positives cause humans to disregard all alerts from the model. And even if you don’t get people ignoring alerts, you can waste a lot of people’s time and energy by constantly having them run to check on someone who is actually fine.
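
As a back-of-the-envelope illustration of why that balance is hard (all numbers are made up, not from any real deployment):

# Toy base-rate arithmetic: even a decent model produces mostly false
# alarms when the event it predicts is rare. All numbers are hypothetical.
beds = 400            # hospital census
prevalence = 0.02     # fraction of patients actually deteriorating today
sensitivity = 0.90    # model catches 90% of true cases
specificity = 0.95    # model still flags 5% of patients who are fine

true_cases = beds * prevalence
true_alerts = true_cases * sensitivity
false_alerts = (beds - true_cases) * (1 - specificity)

print(f"{true_alerts:.0f} true alerts vs {false_alerts:.0f} false alerts per day")
print(f"precision: {true_alerts / (true_alerts + false_alerts):.0%}")
# Roughly 7 real alerts buried in about 20 false ones; that is how
# pagers start getting ignored.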

