Unless a site chooses to make RSS, especially full-content feeds, a subscriber benefit. Ars Technica, for example, provides full-content feeds to subscribers, while the general public gets title + preview snippets.
Approaching it this way (similar to paid mailing lists) is the best of both worlds: the site gets subscription revenue, and users who care about the content enough to pay for it get it without ads.
True, but most people don't like to pay. So it's hard to sell people on the idea. Maybe it will be possible in the future as people get used to the idea of paying for content.
> True, but most people don't like to pay. So it's hard to sell people on the idea.
So provide only a title+snippet feed so folks can know when your articles appear on the site, and hopefully go to the 'full' page, which has ads. Anything that can drive traffic can be helpful.
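For concreteness, a title+snippet item in such a feed might look roughly like this; everything below is a made-up example rather than any particular site's feed:

    <item>
      <title>New article title</title>
      <link>https://example.com/articles/new-article</link>
      <description>One-paragraph teaser; the full text lives on the ad-supported page.</description>
      <pubDate>Mon, 06 Sep 2021 12:00:00 GMT</pubDate>
    </item>

The subscriber version would carry the whole article body (e.g. in content:encoded) instead of just the short description.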
I think this is too dichotomous, and misses the mark because of it.
It's not that people don't like to pay; it's that most people don't like to pay what seems like an unjustifiable amount of money for a service which is otherwise nice but not essential.
Especially when the service comes off as "defective by design", in which case rebelliousness kicks in on top of everything.
Problems compounding this effect?
1. The default is ads. This creates the impression that your product isn't really worth that much, if all the user is expected to do is ignore a stupid ad on the page, which they're probably not even seeing in the first place if they have an adblocker. Which is why, when the alternative monetization is $30 monthly or whatever, the reaction is "wat? what for?"
2. Companies rarely let you pay for just what you're using. Instead, it's a subscription, and it's for all the features of the site, including those 500 features you're never going to use. Hence you're asked to pay $30 a month for an RSS feed you'd otherwise have been happy to pay a far more reasonable amount for.
3. Most companies don't make it easy to know how much of the product you're using, or whether you're paying too much, when that would hurt their "milk as much as possible" approach. If you paid cents per RSS feed, and had a free trial for a month to see what a typical month would cost you, I think most people wouldn't bat an eyelid when it came time to pay.
I tend to agree with this. But most importantly, many sites go about this the wrong way by putting content behind a paywall. Substack and Bandcamp are good examples of ways to pay and get paid for content. If you want people to pay for content online, incentivize them.
Do you know how Ars Technica does it? Is there some personalized URL for the subscriber feed? I imagine those are pretty easily shared among a group of people?
People sharing a feed with one or two others isn't going to account for a huge amount, and one person (accidentally|on purpose) sharing their URL with a bunch of people will become visible pretty quickly in analytics. Nobody's making big dollars on illicit Ars Technica RSS feed content, especially when a lot of the value of Ars is in their curated forums and discussions.
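I don't know Ars's exact setup, but the usual pattern is a per-subscriber URL containing an unguessable token, which is also what makes over-sharing visible. A minimal sketch, with every name and value made up:

    # Hypothetical per-subscriber feed handling (not Ars's actual implementation).
    from collections import Counter
    from typing import Optional

    SUBSCRIBER_TOKENS = {"a1b2c3d4": "subscriber-42"}   # token -> account id (made up)
    fetch_counts = Counter()                            # fetches per token, for analytics

    def render_snippet_feed() -> str:
        return "<rss><!-- titles + snippets only --></rss>"

    def render_full_feed(account: str) -> str:
        return f"<rss><!-- full-content feed for {account} --></rss>"

    def serve_feed(token: Optional[str]) -> str:
        account = SUBSCRIBER_TOKENS.get(token) if token else None
        if account is None:
            return render_snippet_feed()    # unknown or missing token: public feed
        fetch_counts[token] += 1            # a widely shared token shows up as a spike here
        return render_full_feed(account)

Revoking a leaked token and issuing a new one is then a one-line change on the subscriber's account.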
Ars makes enough off ads that they encourage people to pay to remove ads, get "et subscriptor" appended to their title, and some minor perks across the site (RSS, a slightly nicer forum experience, etc).
A similar case is LWN (Linux Weekly News). LWN puts its in-depth journalism behind a paywall, marked as such, and offers similar per-subscriber RSS feeds. LWN doesn't have ads.
LWN also allows subscribers to share any article on their behalf; you've probably seen an LWN article or two shared by a subscriber here on HN. They know there are people who generate the share link and dump it into massive aggregators (like HN, Reddit, etc.), but they don't care, because it drives people to go "maybe this is worth $20/year" and toss some cash up.
I had an LWN subscription when I was in Google Summer of Code. It was cool, and I read it far more regularly than I would have otherwise, but I decided to allow the subscription to lapse after I didn't read it daily. With my shift to a new job, I'll probably get a new LWN subscription just because it's becoming more salient to my daily work.
I still buy tons of things from there, but usually only if it's sold by Amazon.com specifically or the known manufacturer selling there. And if I can find it through Target, it'll get here almost as fast with free shipping, from a company with stronger values, so I prefer that when possible.
Second this. Back in the day it was "Fulfilled by Amazon" that I looked for; now that has changed to either known sellers or items sold by Amazon. For everything else, well, for the stuff I look for eBay isn't too bad. Exceptions are, e.g., books: as long as the edition is the right one, you cannot go wrong regardless of seller. I do buy less and less from Amazon, though.
> As long as the edition is the right one, you cannot go wrong regardless of seller.
Haha, oh no, bad news! A lot of the time Amazon will have special "Amazon print" editions, and many times they have had printing problems... no printing, wrong book inside, etc.
I have never personally seen one IRL, but I've seen review pics from people showing the problems with books they bought directly from Amazon.
> CloudShell is intended to be used from the AWS Management Console and does not currently support programmatic interaction
Which unfortunately means I can only access this from a browser window and can't start up a session from my own terminal. Sure would be nice to be able to launch a secure, remote CLI without all the limitations of a web client.
The point of CloudShell is to make it easy to use the AWS CLI without installing it or setting up credentials. To use it from your own terminal, you'd have to install software and then configure credentials, which would be exactly the same as installing the AWS CLI and configuring it.
On the flip side: How is installing a browser and authenticating in it any better than installing openssh and/or awscli and authenticating through them?
I think it's assumed that everyone already has a browser installed. Also, authenticating through openssh and/or awscli will likely require some browser interaction anyway, so that would still require installing a browser if one isn't installed.
MFA and persistence — especially if you use SSO. If you have credentials sitting around in your home directory they can be harvested from a standard location by malware and people are often very slow to rotate them. In contrast, if you're following Amazon's guidelines your console login will already have MFA and be using short-term credentials.
It supports some MFA (though not U2F/FIDO), and not if you use SSO.
The browser profile is harder to exfiltrate, in part because modern OSes have ways to restrict access to particular processes, but that was also only part of the benefit: the main thing is the duration of the session. Tons of people leave AWS keys sitting around in ~/.aws for ages.
You can set up schemes with STS (a sketch follows below), but not everyone remembers to, and with this approach there's a very simple answer: it always uses STS, there's never a credentials file sitting around for someone to accidentally save somewhere they shouldn't, etc.
Nothing here is something you couldn't do on your own — it's just a very easy option with safe defaults.
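For anyone who hasn't wired that up themselves: one such scheme is trading an MFA code for short-lived credentials via STS, so nothing long-lived needs to sit in ~/.aws. A rough boto3 sketch; the MFA device ARN and token code are placeholders:

    # Exchange an MFA code for temporary credentials instead of storing long-term keys.
    import boto3

    sts = boto3.client("sts")
    resp = sts.get_session_token(
        DurationSeconds=3600,                                     # expire after an hour
        SerialNumber="arn:aws:iam::123456789012:mfa/some-user",   # placeholder MFA ARN
        TokenCode="123456",                                       # code from the MFA device
    )
    creds = resp["Credentials"]   # AccessKeyId / SecretAccessKey / SessionToken / Expiration

    # Subsequent calls use the temporary credentials, which self-expire.
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

AssumeRole and SSO-based flows are similar in spirit: short-lived credentials by construction, which is what CloudShell gives you without any of this ceremony.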
I think the issue is that web-based terminals aren't very usable, as they mess with keybindings and line wrapping, for example. At least that is the case with GCP Cloud Shell. It makes it pretty difficult to use for even basic things like running vi or emacs.
I think something like this:

    $ gcloud alpha cloud-shell ssh
I'd love to be able to use my terminal client rather than the browser. This is neat because I don't want to maintain another EC2 instance myself, even if it's in the free tier.
I think this is meant as an alternative to having the CLI installed locally. What is the advantage to you of running a remote AWS CLI session from your terminal?
The same thing you get from AWS WorkSpaces, but in CLI form: a machine that's running within/adjacent to your corporate VPC, with fast high-bandwidth access to all your internal infrastructure, especially things like storage buckets. As opposed to your own machine, running half-way across the country where you might only be able to achieve 10Mbps between you and the AWS datacenter.
Think "I'll run this arbitrary script to batch-process input-bucket X to output-bucket Y, enriching the data by calling out to internal service Foo and external service Bar." The kind of thing Google's Cloud Dataflow is for, but one-off and freeform.
Also, for a lot of people, just the fact that things are running in the cloud means they're running more reliably. If you want to run something that's going to take four days to finish, you don't want to do it on your own workstation. What if the power cuts out in your house? (Just the fact that you can restart or OS-update your local computer and "keep your place" in the remote is nice, too.) You want a remote VM somewhere (preferably with live migration in case of host maintenance) running screen(1) or tmux(1), with your job inside it. Of course, you can just create a regular VM in your VPC and do it on top of that; but a cloud shell abstracts that away, and "garbage collects" after itself if you leave it idle.
Availability seems to be an issue all around. After an early lockdown surge of restaurant additions on DoorDash, listings which appear to be available and open now have a decent chance of rejecting your order, or just dropping off altogether. Restaurants are having a tough time right now, and it remains to be seen if poorly-advised re-openings will have any short-term positive effect on business.