AWS CloudShell (amazon.com)
443 points by jeffbarr on Dec 15, 2020 | 230 comments



I didn't realize AWS didn't have this already! I've been working exclusively in GCP for the past few years, and I assumed the two platforms were at parity. Is AWS starting to lag behind in new features?


They both have features the other doesn't. If you count up the number of features, AWS is way ahead. But yes, GCP has features AWS doesn't have.


I have less experience with GCP than AWS, but counting the number of features/services on AWS is definitely misleading. So many of the AWS services are effectively abandoned and missing critical features that make them completely unfit for production usage, and it can be really hard to find this out before you run into the problems yourself. And then out of nowhere 3 years later they'll pick up development on an old thing again and finally fix that critical issue and make it much better. So I'm not sure I'd call that being way ahead.

My impression is that the stuff that GCP does have tends to be more capable and production ready (although I'm curious if others would disagree with that).


What AWS services do you consider "abandoned"?

The only one I can think of that might be in this category is SimpleDB. AWS recommends you use DynamoDB instead of SimpleDB for new applications.

However, I wouldn't call SimpleDB abandoned. SimpleDB continues to work as it has for many years.

One thing that AWS is amazingly good at is not breaking existing customers and their applications. Did you build an application 10 years ago based on SimpleDB? All the APIs you used 10 years ago are still there and available to your application today. It's really quite amazing how dedicated AWS is to not breaking existing customers.


Abandoned as in not being updated, and therefore falling behind all its competitors.

Fair enough that services often continue to work the way they always have, but I'm thinking more of the case where you're actively developing something, and the AWS service has major bugs or is missing significant features that all the alternatives have which makes your job harder building on top of it.

Elasticsearch was a famous example: it went years without any updates, during which time upstream Elasticsearch itself improved dramatically. Then they picked it back up again once the elastic.co hosted product got good enough that it was a much better alternative.

Another example is ECS, which was out in the wild for a couple of years with a very limited feature set while GKE was completely eating its lunch and upstream Kubernetes got a lot of major improvements. Then AWS released EKS, which sort of seemed to replace ECS for a while, but they have gone back and forth now for a bit, with ECS having some features that EKS didn't have (e.g. Fargate for a long time) and vice versa.

There are all sorts of other support bugs I've stumbled across in their forums, too many to recall. Often years old threads that have never really been addressed.

Edit: another one that comes to mind is cloudwatch/the entire monitoring & logging stack. It's very basic, not really a viable alternative at all to something like splunk. As such an important thing, I always kind of expected it to get better but it just didn't, you had to export your own logs/events to a separate ES cluster, or to Redshift via S3, or something else, etc. Whereas GCP Stackdriver is a much better solution out of the box.


Your argument seems to be that AWS isn't prioritizing the things that you think are most important. I think that's fair. But that doesn't mean the products are abandoned or not being updated.


Yup, today I was looking at some examples of AWS CloudFormation templates and I saw version 2010-09-09. Instantly, I thought the webpage I was reading must be old, so I opened the docs. In the docs, I see "The latest template format version is 2010-09-09 and is currently the only valid value."

The last version was 10 years ago and this service is one of the core AWS services, so it's definitely not abandoned.


That's just the template format version, not the version of the service. Actual features and resources are being continuously added to CloudFormation. See release history: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGui...

Unless I'm confusing your comment.


That's just protocol versioning - it means they haven't made backwards-incompatible breaking changes to the template protocol, not that the features haven't been updated in 10 years.


Please do tell us what changes they should make to their template format such that it would require a version update.


In some circles, this is seen as a mark of stability.


Amazon CloudSearch hasn't had any serious updates in 6 years [1]. It is Apache Solr behind the scenes, but with a proprietary API on top. I suspect everyone with serious search requirements has moved to Elasticsearch or another product by now.

[1] https://docs.aws.amazon.com/cloudsearch/latest/developerguid...


I've worked with Cognito extensively, and it's effectively abandoned. This issue sums up some of the common frustrations https://github.com/aws-amplify/amplify-js/issues/3495


Data Pipeline is abandoned - it is also miserable to work with.

Otherwise yeah, there might be too many ways to do the same thing, but AWS generally has a stellar track record of supporting everything else they have made.


Data pipeline has been superseded by Step Functions.


Did AWS announce it or are you announcing it?

I just searched over *.amazon.com and nothing comes up. The product pages also bear zero indication of that.


No one “announced it”. Why would they? If Data Pipeline meets your needs, it still works. But, if you keep track of AWS and where all of the new features are being added, it’s clear that Step Functions is getting all of the new and shiny and can do everything Data Pipeline can do.

https://aws.amazon.com/blogs/compute/implementing-dynamic-et...


SimpleDB is also not officially deprecated, but Amazon doesn't ever work on it either: https://aws.amazon.com/simpledb/

Amazon are like the anti-Google: rather than killing products, they're happy to have them limp along forever.


CloudFormation. The UI is lacking critical information about the resources that CF operates on and it's been like this for years.


Exactly. It is beyond me how Serverless.com can provide a better experience. I can choose between something slightly broken (hello, existing S3 resources) and 10x the verbosity of JSON programming. I counted the number of lines for a simple Lambda function: it is 1000 lines of JSON. I think I could not use CF without Serverless.


CloudFormation isn't abandoned and is continuously updated as AWS adds services and options in existing services. The AWS Management Console UI for CloudFormation might be largely abandoned, but the AWS Management Console component for X, in general, seems often to be only distantly related to X in terms of support; the main focus often seems to be on using the service via the API, SDKs, or (for setting up resources) CloudFormation, instead of the console UI.


CloudFormation is... fine until you realize it's missing that resource that you need. CF is notorious for lagging behind products in terms of support. It's incredibly painful.


CloudFormation is the best tool for AWS infra-as-code. Nothing is even close to it. It rolls back, no leftovers on delete, fast parallel ops, full control of resource properties. Yes, some things could be better, but the above is priceless. 10 yrs experience here.


Terraform is much, much better.

I used CloudFormation for a few years and ran up against a lot of its limitations. Maximum file size. Maximum resource count. Automatic rollbacks of THE WHOLE STACK on any subsystem failure like an unavailable instance. It has no templating built in, so doing ten things with minor differences means copying and pasting 10x or deploying your own templating solution to generate CF. And I did this back in the ROLLBACK_FAILED days, where if you did something that it couldn't automatically undo you were stuck: no way to roll forward or backward, just abandon in place. The button for "Continue rollback" or whatever that came out 5-10 years ago was huge.

Contrast that with Terraform: all of your points addressed, plus all of mine are non-problems. It lets you do some awesome things that are no doubt inspired by CF, but it takes them to an entirely new level.

There are a few downsides too, but nothing compared to CF. You can build too-complex things much more easily in TF, so you have to be careful not to go overboard. It also bugs me that I can't spec a high-level thing like "compute with 4 cores and 32GB RAM, repeat 10x and put behind a load balancer with DNS name foo" and use it anywhere, I have to say google_compute or azure_load_balancer or aws_dns. That was the biggest disappointment coming out of CF and hearing how awesome this thing was, and then realizing it still left me vendor locked.
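(A minimal sketch of the templating point above: where CF meant copy-pasting ten near-identical resources, Terraform has `count` built in. The variable and names below are invented for illustration.)

    # ten near-identical instances from a single block, no copy-paste
    resource "aws_instance" "worker" {
      count         = 10
      ami           = var.ami_id          # hypothetical variable
      instance_type = "t3.micro"

      tags = {
        Name = "worker-${count.index}"    # worker-0 .. worker-9
      }
    }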


> ran up against a lot of its limitations. Maximum file size. Maximum resource count. Automatic rollbacks of THE WHOLE STACK

I totally can see where this is coming from :) CFN is best used as separate templates/stacks for parts of the solution, not the whole solution rolled into a single template. Reusable is the key word here, and Parameters. Let me try a city example. Have separate templates for a school, fire department and house block. Build all Detroit schools using the same school.yml template, just supply different parameters for each. Don't copy-paste code from school.yml into detroit.yml. Actually, there should be no detroit.yml; leave the city level to the CI/CD job.
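(To make the school.yml idea concrete, a hedged sketch; the parameter and resource below are invented for illustration:)

    AWSTemplateFormatVersion: '2010-09-09'
    Parameters:
      SchoolName:
        Type: String
    Resources:
      RecordsBucket:
        Type: AWS::S3::Bucket
        Properties:
          BucketName: !Sub 'detroit-${SchoolName}-records'

The CI/CD job then deploys it once per school:

    aws cloudformation deploy --template-file school.yml \
        --stack-name school-central-high \
        --parameter-overrides SchoolName=central-high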

> I have to say google_compute or azure_load_balancer or aws_dns

Multicloud? It rarely makes sense. All you get is triple the infra code, triple the monitoring tools, triple the devops competence requirements. A properly designed solution with HA and AZ/regional redundancy is sufficient on a single cloud platform.


I'm not suggesting multicloud as in using more than one at a time, I mean porting to another cloud or defining some OSS infrastructure in $magic_terraform that any cloud user could deploy without translation.

It would be limited to the least common denominator just because of the vagueness of the objects it would support, but the example I suggested would be incredibly useful.


> It would be limited to least-common denominator ..

That's exactly why it is not useful. You design your solution using all the features of the LB; a solution using only the basic ones will be meh.


> Multicloud? It rarely makes sense.

It tends to be badly implemented, but it makes a ton of sense as a strategy.


> Cloudformation is the best tool for aws infra-as-a-code.

Talk about damning with faint praise.

> It rolls back,

Sometimes.

> no leftovers on delete,

Well, on successful delete, maybe. DELETE_FAILED with partial stack deletion is a thing (and a thing CF could fairly trivially avoid in some common cases by simply querying resources for deletion protection.)

> full control of resource properties

Except the AWS resources it doesn't support, and properties of supported resources it doesn't support, because CF always lags the underlying services and their APIs.


Until a rollback due to a failure itself fails on a complex stack and prod is hosed for hours while you try to figure out how to unfuck.

Maybe CF has gotten better, but I'm not sure since I totally jumped ship for GCP.


Not sure what you're on about; wild guess is CFN nested stacks. Yeah, I learned the hard way to never use them. It's a feature that should not be there.


Oh, yeah, it lags and it is painful, but it is continuously moving forward (but chasing a moving target), even if the UI isn't (it's been probably more than a year since I looked at the UI.)


That is historical. The missing feature gaps are extremely small today.


I hadn't used AWS in about 4 years, but I recently started using it for a side project. I needed to process a big dataset and I wanted to use pyspark, so I gave EMR a try. I was impressed by how easy it seemingly was to create clusters in the UI and then run jobs using an ipython notebook.

That is, until I realized that nothing worked. You had to use a version that was 5 versions behind the current one, which is the opposite of what the documentation said; it explicitly said not to use that version. Even then, not everything worked out of the box.


We had to do a security review of EMR recently. I’m amazed it works at all. Hop on one of the cluster nodes and take a look at the processes running.


It was probably an issue specifically with notebooks.


AWS OpsWorks Stacks is several major versions behind on Chef.


Redshift, for instance, is still using the Postgres 8 wire protocol.


Redshift is definitely not abandoned. AWS just announced a bunch of improvements and new features at re:invent in the last 2 weeks[0].

It's not obvious to me why supporting a newer wire format would be a high priority for AWS. I think I would rather they work on things like native JSON/semi-structured data support[1] than a new wire format.

[0]: https://aws.amazon.com/redshift/whats-new/ [1]: https://aws.amazon.com/about-aws/whats-new/2020/12/amazon-re...


Amazon didn't touch Redshift for years and years, until suddenly Snowflake became a thing and then they remembered they had Redshift.


Redshift has constantly been improved. One of Amazon’s major initiatives was to get off of Oracle and improving Redshift was a major part of that initiative.


You will eventually run into issues when client interfaces start to drop support for old versions of the wire format to simplify their codebase.


This is already the case: sqlx now has to add specific support for Redshift; it's not enough to use the Postgres driver as is.


I agree. We usually limit ourselves to a small subset of the features we use from AWS. The newer services tend to be less reliable, which is reasonable. I think part of it is the APIs being too complicated. Have you tried to use the Kinesis low-level API? You know what I am talking about. OK, let's use the high-level API. Well, it is written in Java (and Java only), so you are going to be running a JVM on your end regardless. If you think I am joking:

https://docs.aws.amazon.com/streams/latest/dev/shared-throug...

I would classify this as a critical issue, and this is part of the reason why we stopped using Kinesis entirely.


Is this true? That isn't typically the Google way. Traditionally they build something and then let it linger for years before killing it.

https://killedbygoogle.com/


One of my favorite features of Cloud Shell on GCP is that for pretty much any action in the UI, it will provide the equivalent fully populated cloud shell command. I hope AWS does something similar.


There is a third party extension that does that and also creates CloudFormation code.

https://chrome.google.com/webstore/detail/console-recorder-f...


That extension is also available for Firefox. GitHub link:

https://github.com/iann0036/AWSConsoleRecorder


I've generally seen that AWS produces new services quickly, but features are a little bit different.

From my understanding, each service team within AWS is run pretty much as its own little startup, which sometimes makes features across services, i.e. tagging, inconsistent. It also explains why the UI seems to be so fractured.


One of the first things I noticed when trying out GCP after working on AWS for a few months was how clean and consistent the user interface was. Your comment definitely explains the poor UX on the AWS console!


AWS has always made the expert "UI" the first priority by providing APIs and tooling. The onboarding experience for those who hadn't transitioned to devops yet seems to be a much lower priority for AWS. I find the opposite to be true for Azure and GCE.


This is a confusing sentiment to me. GCP APIs also come first. I don’t even think the UI is possible without them. Can you name a product where this is not the case?


Azure has had this since forever too. You can use either Powershell or Bash (my preference), and it works really well.


AWS has Cloud9 as a separate service, which includes this kind of terminal-in-browser thingy. This is basically a lightweight version of that concept.


The question is not this, but whether this feature is really needed. Maybe it is for the GCP user base; maybe it isn't for the AWS user base. I have personally used AWS for ~9 years and I never needed such a feature. I can achieve exactly the same thing (and quite often do exactly that) by provisioning a small free-tier instance with an instance profile that uses the same policy as the service or resource (a lambda function, for example) that I am debugging.

If AWS lacks anything, it is a "why exactly is this API call failing" feature. It is horrendous to debug a resource that is using other resources when you do not have any means to find out what _exactly_ is missing. Usually you get an error like "s3 threw a 403, bye". The closest thing to a solution is CloudTrail, with a giant amount of JSON entries to go through, or trying to load it into Athena or another database, and because you do not know what exactly you are looking for it is very hard. I usually just ask support to debug it for me because they have internal tooling that can do that. Most of our support tickets fall into this category.


AWS LightSail has always supported a cloud shell.

However, those instances get used in more of a "pet" than "cattle" context.


Are you talking about the button in the LightSail interface to open up an SSH session in a browser window? Cloud Shell on GCP is slightly different, in that it gives you a preloaded, preconfigured machine to perform tasks on the command line that you would normally do with the GUI.


> AWS LightSail has always supported a cloud shell.

The difference being that you have to launch & maintain the server yourself (and pay for its runtime).


AWS does have the AWS CLI, which is what this essentially is, except this shell is in the cloud and not another POSIX shell running locally on your machine.

This probably abstracts away the .aws profile? I can't see much reason to use it since I use the AWS CLI just fine in the however-many terminal tabs I have.


I have found the Cloud Shell on GCP to be much more convenient for a few reasons:

- You have to install CLI components piecemeal on GCP, and sometimes need to opt into beta features. With the Cloud Shell, it's all preinstalled for you.

- You have to "log into" the CLI if you're running it locally, which can be a minor annoyance. (I know the AWS CLI doesn't have this issue, as it doesn't use OAuth for authentication with the console.)

- All of the data transfers in Cloud Shell are happening between machines in Google's data center, so you get gigabit speed file transfers in the Cloud Shell. For example, this is super useful when you need to download a large bucket to a working directory to make edits to multiple files, or if you need to run scripts that pull and push to/from Cloud Storage.

I think a Cloud Shell for AWS is a net positive! It can make some workloads easier and reduce the amount of configuration you need to do.
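(On that last bullet, the in-datacenter bandwidth is what makes one-liners like these practical; the bucket name is a placeholder:)

    # pull a whole bucket into the shell's home directory at datacenter speed
    gsutil -m cp -r gs://my-big-bucket ./work
    # ...edit files in the shell, then push the changes back...
    gsutil -m rsync -r ./work gs://my-big-bucket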


What do you mean by posix shell?


Sorry, I just mean a shell running on, presumably, your local computer.


Azure has had a cloud shell for a while as well


Psssst, we've had transactional consistency in Azure Blob Storage since 2011 too, but S3 only got it 2 weeks ago!

https://aws.amazon.com/about-aws/whats-new/2020/12/amazon-s3... + https://github.com/gaul/are-we-consistent-yet/blob/master/RE...


It seems inevitable that remote dev environments will become ubiquitous.

With increased distribution of systems, increased use of cloud-proprietary infra software that you can't run locally, and now these custom SoCs, companies are just going to give up on local dev environments and force everyone to write code in a browser.


Maybe not in a browser, but in VS Code with custom extensions — definitely happening already.


My primary workflow on side projects right now is VS Code with an ssh connection to a GCP instance. It's very quick, and I can run it across both my desktop and laptop, so if I want to go work in the living room I can get up and get to work immediately on whatever was open in the workspace.

The only downside is that the difference between extensions installed on the remote and locally is a little confusing, but the extension ecosystem in VS Code has satisfied all of my use cases. I've also run into some occasional ssh hangups on weak connections, but I haven't experienced that issue in a few months.

For years I was a Linux guy, and now I see no reason to go back, because I can just remotely access a Linux environment from whatever system I'm already on. Less time spent swapping between operating systems to work on projects, and a consistent environment, are both huge features.


I've been experimenting with VS Code remote for the past few weeks, and I'm really enjoying it. I have a Linux desktop that I run headless in the closet and can connect to it from thin clients. I recently bought the base model new Macbook Air to replace my Linux laptop which I'm hoping will give me the best of both worlds: thin and light laptop with great battery life which I can connect to a powerful Linux server/desktop for development.

This way I don't have to spend so much money on expensive desktop replacement laptops.


This is actually a good idea, I've been meaning to turn one of my old laptops into a media server, but it hadn't occurred to me to use it as a development server as well.

I'm in a similar boat as the Macbook Air, but I was going to go for a Surface Book because I want the detachable tablet features


Does the extension build/test code locally or on the remote system? It would be great to keep my local system free of all the build tools and dependencies.


Build tools run on the remote system.


This has been available since the 90s, 80s if you count telnet. Yes it has improved, but what hasn't in that timeframe?


Timesharing is back! At least it is better than those X Windows terminals I was using in 1994.


Hopefully with another attendant revolutionary operating system


You mean the web browser?


At least VS Code means we'll have options for a half-decent IDE in the browser. (Not that there's anything wrong with vim or emacs over SSH-in-a-browser, but...)


Note you can open files over SSH in Vim. This is better than ssh'ing to a machine that runs Vim, because all the editing is done on your machine. It only uses the connection when you save the file.
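(Concretely, via Vim's built-in netrw; the host and path are placeholders:)

    # note the double slash: it makes the remote path absolute
    vim scp://user@remote-host//etc/nginx/nginx.conf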


Emacs too, of course: https://www.emacswiki.org/emacs/TrampMode

Despite that I still tend to ssh to machines and use a local editor, because I typically don't want to just edit files, I also want to run some commands in-between editing files.


Tramp will run whatever commands you want on the remote server if you're running commands with `M-!`.

I use it for work sometimes, that + vterm and ssh makes for a fairly pleasant remote editing experience in emacs. LSP even works over tramp (kinda)
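(For anyone who hasn't used Tramp, the remote path syntax looks like this; the host and file are placeholders:)

    C-x C-f /ssh:user@remote-host:/etc/hosts    ; open a remote file
    M-! uname -a                                ; runs on the remote host when the
                                                ; buffer's directory is remote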


You can also `ssh -X` and start a GUI app on the remote machine (you can install an X server on both Mac and Windows), although you need low latency/ping or it will lag.
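(For example, assuming an X server is already running locally; the host and app are placeholders:)

    ssh -X user@remote-host xeyes    # the app runs remotely, the window renders locally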


I have tried this setup recently (with XQuartz) but it had abysmal performance compared to a TigerVNC session with a Linux server and a macOS client. I couldn't figure out why. I remember it working better decades ago over 10Mbit/s connections, running on 80486 machines... Maybe it would work better between 2 Linux machines running the same Xorg versions?


I have not studied the X protocol so I am only speculating. I think it would work better with older GUIs that use predefined UI components vs apps that treat the UI as a canvas and re-paint rather than reusing components. That said, when experimenting I have found it works on games too. But high latency (more than 1ms) will kill the experience, so a slow radio like wifi or a mobile modem would not work. It's funny that we have made so many advances in networking and compute, both in reduced latency and increased bandwidth/capacity, yet it's all eaten up by consumer stack layers: 10ms lost due to slow radio, another 10ms putting the image on the screen, and maybe 1-5ms in software layers. And a keyboard or touchscreen with a 100Hz poll rate.


VS Code can do that as well (I think it might be how they implement WSL support too).


> half-decent IDE in the browser

https://aws.amazon.com/cloud9/

It was more than half decent last time I used it.


Personally, my development routine is to use VSCode's "Remote Connections" as my primary way of editing code/interacting with the CLI (via the built-in terminal). It lets me work closer to the shell with a shareable app instance (pointed at a dev.myappname.com domain) that I can share or Slack a coworker at any time.


Could you elaborate a bit on the shareable app instance (or just share links to its documentation)? That sounds really useful but I hadn't heard about this use case before


I assume they are talking about something like https://github.com/cdr/code-server .

It’s vscode in your web browser. Run it locally in your dev environment and set up your web server to proxy vscode.yourdomain.tld to it and boom, you have vscode running in your dev environment that anyone with a link can access.

The fastest way to set it up with tls+auth is probably something like https://hub.docker.com/r/linuxserver/code-server + caddy configured with basicauth.
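(A hedged sketch of the linuxserver image route; the password and config path are placeholders, and the flags are from memory, so check the image docs:)

    docker run -d --name=code-server \
        -e PASSWORD=change-me \
        -p 8443:8443 \
        -v /path/to/config:/config \
        linuxserver/code-server

Then point Caddy at localhost:8443 for TLS and basicauth.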


Looks like this is what he's talking about: https://code.visualstudio.com/docs/remote/ssh

I haven't used the remote SSH feature, but I have been playing around with GitHub Codespaces through VSCode. Once you have the extension, you can select a repository and it will spin up a virtual workspace for you on their servers. It actually works surprisingly well: tasks like installing node packages are faster than on my local computer, and it automatically handles things like setting up a proxy for local web environments.


Looks like I won't be needing that sweet Apple Silicon, after all.


Why? If we indeed go remote dev we need laptops with exceptional energy efficiency for browser workloads.


Not to mention not burning ourselves slaving over a hot laptop.


I dread the companies that will mandate this, no local environment…

(and I know, GCP and Azure had this for quite some time now)


Eventually we're just back to mainframes + dumb terminals, except the dumb terminals are web browsers.


History doesn't repeat, but it certainly rhymes


And swings back and forth. In 10 or 20 or 40 years there will be something about liberating computing from the cloud overlords. The last major swing in that direction: the home/personal computers of the 70s/80s. A higher harmonic: ownCloud and all the current self-hosted services, but we're still swinging towards centralization.


The environment problem does not really allow for the American dream of 1:1 ownership.

I have little sympathy for a generation raised on such notions who see it all as theft. No such legal contract exists.

It makes me wonder how the next generations would feel about a stable environment being taken.

But we won’t be around and have our social orders so ...


For big companies having a completely non-local dev environment actually works pretty amazingly. It's something that I've been trying to set up at home with Eclipse Che. The workflow is great. I can switch between laptops, desktops, etc with no changes. I can run really large jobs from my phone and come back to my desk an hour or two later to see if my code worked right. I can do all of this without installing anything on my local system.


The funny thing is that I remember doing exactly this with PHP in the early 2000’s on a small team. We didn’t even have version control on the project and just did all of the development in vim, relying on its lock files to ensure we didn’t edit the same files at the same time. Deployed changes with rsync.

That project ran in an enterprise environment, customer facing for 10 years with almost no maintenance.


> for 10 years with almost no maintenance.

10 years is a long time in security. Presumably that's what the maintenance was for?


Yep. It was also a project where we rolled our own...everything. This was an AJAX project before Prototype, jQuery or even JSON was popular, so we had a lot of explicitly mapped paths and code that did only exactly what was needed and verified every argument.


Those companies are going to do it anyway, not without reason. If you have high security requirements something like this is important if you don’t want to have two laptops so your development work is isolated (think what keys someone could get with a bad npm/Python package install).


You can still do a bad install inside the cloud IDE and screw up your development environment. I’ve done this on Cloud 9.

But it’s usually easier to trash a cloud IDE and create a fresh instance than it is to unknot a bad Python configuration on a local machine. (Although you can always use Docker or Vagrant)


The main advantage I was thinking was less “I broke my machine and I need to rebuild it” and more “we had everything setup on our jump server but it wasn't documented and now we can't figure out how to rebuild it” or “someone — totally not me — forgot to clear out the admin credentials after they were done and didn't think about it since everything worked”. Ephemeral servers are a great way to keep people honest about things like that.


But you don't need things to be web-based for this, do you?

I have worked in places which say, "no code on laptops! You get a remote machine, all code must live here"

Same security advantages, but you get way more customize-ability -- choose a terminal app, font, fullscreen or many windows, and so on.


It doesn't need to be web-based, of course. This isn't anything you couldn't do before — it's just much easier to setup and has fewer points to get wrong (ever see a jump server whose owner forgot to change the port 22 0.0.0.0/0 allow rule?). It's especially nice because it works for everyone in every context without needing anything setup so you can safely use it for examples, training, someone working with a loaner machine, etc. even if you normally work with a custom configuration.


What's there to be afraid of, so long as your workflows aren't made significantly more painful?

A local development environment should still be possible in many cases. One shouldn't need to call AWS services.


> What's there to be afraid of, so long as your workflows aren't made significantly more painful?

It's more painful.


...more painful workflows


All the workflows my colleagues and I deal with that would be made more painful by this are ones that are on the list of things that should be migrated to machine management. Often because they involve humans touching production systems where no humans actually need to touch production systems.

Can you help me understand what, precisely, this would do in your work that's so dreadful? Perhaps there's something I just don't know.


At least with Azure the new Windows Terminal App has Azure Cloud Shell in it. So you can use it like it was just a different session target and less fuss...


Exactly something that I used to do about 25 years ago when working on UNIX, and keep doing regularly on Windows systems over Citrix or RDP.


Big companies also usually mandate no external IPs, and at least in GCP's case it doesn't work without an external interface on the VM.


Does GCP not let you set up a VPN? All instances can have an interface on the VPN so that you log into the VPN and can hit your instance without it being external.


Yes, you can use a VPN or Direct Connect or a bastion host to hit VMs on private IPs, but their cloud shell thingy couldn't do that last time I checked.


Their cloud shell thingy is actually just a docker container, and you can create your own to load as a replacement.
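(If I remember right, you build on Google's published base image; the image name here is from memory, so verify against the docs:)

    # Dockerfile for a custom Cloud Shell environment
    FROM gcr.io/cloudshell-images/cloudshell:latest
    RUN apt-get update && apt-get install -y htop    # plus whatever tools you need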


I'm already arguing with our CSO about this on Slack!


what are the benefits to mandating this?


For the company: infrastructure management. You don't have a local PC (other than mainly a low cost thing acting as a fairly dumb terminal) that may have parts fail and will otherwise need upgrades every now and then, with local work that may need to be encrypted and backed up, ... You are working in "the cloud", your environment is running on a common set of VMs/containers, a fault in a node just means a new one spins up (or you get shunted onto the still running ones), hardware redundancy is handled at that level reducing single points of failure, local machines don't need to be monitored for data/apps/other they should not have, resource management (does anyone run their dev PC at full tilt 24/7? no? so CPU/IO/other resources can be shared), ...

For the individual: similar concerns of hardware failures losing work go away a bit (there are still ways to lose everything, but less of them), easy moving between environments (desktop, laptop, phone), ...

Though it depends how much is pushed to the "cloud". You may still need some meat on the local resource bones if not pushing any CPU crunching into the sky too.

Essentially we are reinventing the thin client from the 90s/00s, which in turn reinvented many mainframe concepts, not that either ever completely went away, with an eye on much the same benefits.


It's the "servers are pets, not cattle" but applied to local machines. That's sort of how IT has been run for a while now, but only half-hearted and in the worst way possible.

Almost every organization I've worked with has the policy of "if you get malware, we wipe your whole machine and reinstall the gold image" which is quite disruptive because you then have to reconfigure your settings and reinstall all your software packages and regenerate SSH keys etc. It can be a whole day of downtime and then a week of slowly getting back up and running full speed.

But if your hardware and local software are irrelevant, you can just swap your dumb terminal for another dumb terminal without skipping a beat. And with things like Chromebooks or iPads (actual real dumb terminals) the likelihood of getting to a "wipe it and start over" goes down a lot compared to machines running a full-fledged OS with a privileged user account.

If you drop your Chromebook in a lake, you could run to Best Buy and get a new one for $300 and you've only lost an hour or so, and if all your data is stored in OneDrive and your IDE is Codespaces you haven't lost anything of real value.


From a security perspective, you remove the possibility of exfiltration of client data, especially PII or other sensitive data. Many orgs that have to work with PII already have strict controls around them, but that usually means that the company installs crapware on dev machines.


Exactly! Either developer laptops are part of the network that has access to lots of very sensitive data (and get treated accordingly) or they aren't. There's no sane middle ground where developers have infinite free rein and root on their laptops while also doing dumps of PII from production databases.

There are a lot of situations where people tolerate less sane practices because they are convenient, but this isn't a good strategy.


It could cut IT costs down dramatically. Depending on what you're working on you might need a pretty beefy machine, but nobody really wants to deal with the actual hassle of managing a fleet of high powered machines. If you can use commodity hardware + nice monitors and run the machines remotely then the machine itself can be scaled up and down arbitrarily.

This feels like another solution a lot like Stadia. There have been countless other attempts at the same idea, but the problem is always the same. Latency and user interaction between the local and remote hosts always end up being overwhelming constraints.


I have a friend who works for a big 3D animation house. When COVID hit and everyone was remote, people used Teradici to remote into their powerhouse workstations. It is apparently very performant. And touching on the IT thing, no files need to leave their secure home.


The most important one for us is credential management: say you do most of your work using a CI/CD pipeline but you need the AWS CLI to run reports, troubleshoot, etc. If that means you have credentials floating around in ~/.aws/credentials, there's a risk that an attacker could exfiltrate those. If you use a short-term credential system to load them via SSO, you have more infrastructure to maintain. If you set up a bastion host, you need to keep that secured because it's a really high-value target and might allow an attacker to get higher-access credentials than the person they compromised if there are any mistakes in setup (common in what I've seen - internal infrastructure is often neglected compared to production servers).

None of that is something which can't be solved, of course, but this is a nice way to avoid having to deal with O&M yourself, which is the point to a lot of cloud services.


Had to get feature parity with GCP, huh. Is this similar to SSH via SSM in that it could be a security improvement? Can I disable port 22 and remove the SSH client altogether and still use AWS CloudShell on an instance?


> Can I disable port 22 and remove the SSH client altogether and still use AWS CloudShell on an instance?

Yes, provided you whitelist the IP range for Amazon's Instance Connect service. (They don't call it Cloud Shell.) From [0]:

> We recommend that your instance allows inbound SSH traffic from the recommended IP block published for the service.

You have to trawl the giant JSON document that it links to, to find the relevant IP range to permit, where the region matches yours and where you see "service": "EC2_INSTANCE_CONNECT". Then, whitelist the specified IP range (obviously for incoming traffic on TCP port 22).

[0] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-inst...
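Rather than trawling it by hand, something like this should pull out the right range (the region is an example; sanity-check the output before whitelisting):

    curl -s https://ip-ranges.amazonaws.com/ip-ranges.json | jq -r \
      '.prefixes[] | select(.service=="EC2_INSTANCE_CONNECT" and .region=="us-east-1") | .ip_prefix'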



Looks like yes. Here are two blog posts about it. I'm still not clear on the difference.

https://carriagereturn.nl/aws/ec2/ssh/connect/ssm/2019/07/26...

https://ystatit.medium.com/different-between-ec2-instance-co...


SSM connects to the instance through the agent (you're logged in as ssm-user) so you don't need to open port 22 inbound, while Instance Connect does the public key magic and connects you directly over SSH.

From EC2 Instance Connect docs: "[...] it generates a one-time-use SSH public key, pushes the key to the instance where it remains for 60 seconds, and connects the user to the instance. You can use basic SSH/SFTP commands with the Instance Connect CLI."

Disclaimer: I'm a SA at AWS.
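For the terminal-inclined, the two look roughly like this (the instance ID, AZ, and key path are placeholders):

    # Session Manager: goes through the SSM agent, no inbound port 22 needed
    aws ssm start-session --target i-0123456789abcdef0

    # Instance Connect: push a 60-second public key, then plain SSH
    aws ec2-instance-connect send-ssh-public-key \
        --instance-id i-0123456789abcdef0 \
        --instance-os-user ec2-user \
        --availability-zone us-east-1a \
        --ssh-public-key file://~/.ssh/id_rsa.pub
    ssh ec2-user@<instance-address>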


Protip: look into this for PCI.


Can you explain what you mean? You don't "use AWS CloudShell on an instance". This isn't a service meant to be used as a bastion to SSH into your resources or something that is installed on your instances. This is primarily an addition to the AWS Management Console interface that allows you to run AWS CLI commands from your browser.


Ah you're correct, I misunderstood the feature.


One of the nicest features for me in this is the webpage widget to upload and download files into the CloudShell instance.

I've had zero problems doing this with the normal remote instance I've used for this sort of thing in the past, but for whatever reason, walking junior engineers through this process is always one of the most painful things I deal with. Having a GUI way of doing this will make walking them through it easy.

That said, this environment doesn't deal with flaky connections well. A few toggles of my wifi, and now I have multiple bash orphans on my ECS container. I shouldn't be too surprised; it looks like they've repurposed the SSH client from Cloud 9. It'd be nice if they brought in something like a Mosh client.


> CloudShell is intended to be used from the AWS Management Console and does not currently support programmatic interaction

Which unfortunately means I can only access this from a browser window and can't start up a session from my own terminal. Sure would be nice to be able to launch a secure, remote CLI without all the limitations of a web client.


The point of CloudShell is to easily use the AWS CLI without setting it up and configuring credentials. To use this from your own terminal, you would have to install software and then configure credentials, which would be exactly the same as installing the AWS CLI and configuring it.


On the flip side: How is installing a browser and authenticating in it any better than installing openssh and/or awscli and authenticating through them?


I think it's assumed that everyone already has a browser installed. Also, authenticating through openssh and/or awscli will likely require some browser interaction, so that would require installing a browser if one isn't installed.


MFA and persistence — especially if you use SSO. If you have credentials sitting around in your home directory they can be harvested from a standard location by malware and people are often very slow to rotate them. In contrast, if you're following Amazon's guidelines your console login will already have MFA and be using short-term credentials.


awscli directly supports 2fa (https://aws.amazon.com/premiumsupport/knowledge-center/authe... ); I guess having to harvest cookies out of a browser profile is more work, but it seems like a small difference
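(e.g. trading an MFA code for temporary credentials; the account ID and device name are placeholders:)

    aws sts get-session-token \
        --serial-number arn:aws:iam::123456789012:mfa/jdoe \
        --token-code 123456
    # returns a temporary AccessKeyId/SecretAccessKey/SessionToken to export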


It supports some MFA (e.g. not U2F / FIDO) and not if you use SSO.

The browser profile is harder to exfiltrate, in part because modern OSes have ways to restrict access to particular processes, but that was also only part of the benefit: the main thing is the duration of the session. Tons of people leave AWS keys sitting around in ~/.aws for ages.

You can set up schemes with STS, but not everyone remembers that, and with this approach you have a very simple answer: it always uses STS; there's never a file sitting around for someone to accidentally save somewhere they shouldn't, etc.

Nothing here is something you couldn't do on your own — it's just a very easy option with safe defaults.


I think the issue is that web based terminals aren't very usable, as they mess with keybindings and line wrapping, for example. At least that is the case with GCP Cloud Shell. It makes it pretty difficult to use for even basic things like running vi or emacs.


Doesn't saving the link as a PWA make it behave like yet another Electron app with regards to key capture?


You can connect via ssh to a GCP cloudshell instance, you just need to spin it up first.


The problem is that AWS CLI without access to your custom workflows in the form of aliases, scripts and what-not is far less useful.

Or perhaps even entirely useless if you'd normally use it as part of a local build and test process.



to cloudshell?


What would the difference be? Isn't CloudShell just a shell that you can easily access from the browser?


managed vs self-managed among other things


I think something like this:

    gcloud alpha cloud-shell ssh

I'd love to be able to use my terminal client rather than the browser. This is neat because I don't want to maintain another EC2 instance myself, even if it's in the free tier.


I think this is meant as an alternative to having the cli installed locally. What is the advantage to you of running a remote AWS cli session from your terminal?


The same thing you get from AWS WorkSpaces, but in CLI form: a machine that's running within/adjacent to your corporate VPC, with fast high-bandwidth access to all your internal infrastructure, especially things like storage buckets. As opposed to your own machine, running half-way across the country where you might only be able to achieve 10Mbps between you and the AWS datacenter.

Think "I'll run this arbitrary script to batch-process input-bucket X to output-bucket Y, enriching the data by calling out to internal service Foo and external service Bar." The kind of thing Google's Cloud Dataflow is for, but one-off and freeform.

—also, for a lot of people, just the fact that things are running in the cloud, means they're running more reliably. If you want to run something that's going to take four days to finish, you don't want to do it on your own workstation. What if the power cuts out in your house? (Just the fact that you can restart/OS-update your local computer and "keep your place" in the remote is nice, too.) You want a remote VM somewhere (preferably with live migration in case of host maintenance) running screen(1) or tmux(1), with your job inside it. Of course, you can just create a regular VM in your VPC, and do it on top of that; but a cloud shell abstracts that away, and "garbage collects" after itself if you leave it idle.
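(The screen/tmux pattern in question, with a made-up script name:)

    tmux new-session -d -s batch 'python process_buckets.py'    # start the job detached
    tmux attach -t batch                                        # check on it later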


Except it doesn't run in any of your VPCs.


Not yet


You mean like Cloud9?

https://aws.amazon.com/cloud9/


"AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser."

I would say no.


You can easily SSH to a Cloud9 instance, so it would satisfy the criteria given.

Alternatively, one can launch a custom AMI in EC2 to do whatever they want.

Multiple solutions already exist for the given problem, therefore yet another AWS service seems unnecessary.


Except that Cloud9 requires an EC2 instance and associated spend, whereas CloudShell is free.


I think you just described ssh


I am glad to see this as I hate using a web console to try to get actual work done.

But I have to confess I opened this article half hoping it would be about Lambda support for bare bash scripts. Horrifying, yes, but at the same time...


I imagine you could accomplish that via a Lambda custom runtime. The example function given here is a shell script: https://docs.aws.amazon.com/lambda/latest/dg/runtimes-walkth...
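From memory, the bootstrap in that walkthrough is roughly this; treat it as a sketch rather than the exact tutorial code:

    #!/bin/sh
    set -euo pipefail

    while true; do
      HEADERS="$(mktemp)"
      # fetch the next invocation from the Lambda runtime API
      EVENT_DATA=$(curl -sS -LD "$HEADERS" \
        "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
      REQUEST_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" \
        | tr -d '[:space:]' | cut -d: -f2)

      # "handle" the event in plain shell, then post the result back
      RESPONSE="Echoing request: '$EVENT_DATA'"
      curl -sS -X POST -d "$RESPONSE" \
        "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/${REQUEST_ID}/response"
    done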


Back in my day we threw up PHP scripts using Apache and called it a day...

But seriously, custom runtimes are real bastards to get working.


Horrifying sure, but I am curious what is the motivation here. We use more lambdas and for more things than are generally considered kosher... one more couldn't hurt.


Hey, I have written (long ago!) production CGIs in bash. I’m still perfectly comfortable writing bash code but for most things it’s definitely not the right tool for the job.


what is going on with those screenshots, why would you add a torn paper effect to pictures of your high tech product


One of the most frustrating things about HN for me these days is going into the comments to read more about the content of an article, and instead being bombarded by people nitpicking the design of it.


I dislike it when people complain about people disliking the design and presentation of a site, because it devalues the reality that these are important for effective communication.


Comments on presentation are quick and easy to make. They show up soonest on a submission, and since they're at the top, collect upvotes. Comments on content take longer to write, end up below comments on presentation, and receive fewer upvotes.


HN should have randomised ordering of presentation of top posts, with a sort of simulated annealing technique that fades over time, so leaves are sorted to the top based on the branch total vote count after some hours.


Usually that effect is to show that the particular edge it’s applied to has been cropped.


Not only that it has been cropped, but you instantly know that the button is in the upper right-hand side.


It's fine for me.

A shell is not something from the future, no need for fancy graphics.

It also has logical semantic meaning:

straight line = end of the content

"torn" line = content is "cut off"


Probably to indicate that the screenshot was cropped to only show part of the viewport.


I have no idea how a company so large can have such poor design. The new management web interface looks like a hastily made bootstrap theme. My only explanation is that Bezos himself chooses the designs and nobody can object.


Torn paper thing (and the whole blogpost) is prob a marketing team only thing. Actual product interface looks okay to me.


Jeff Barr has used the torn paper effect on screenshots for the AWS blog for many years. It's not a one-off for this article. You can see it on basically every blog post he has made for a decade.


Thanks for being such a long-time reader!


Probably as an affordance to communicate that you cannot click/interact with the screenshot!? ::shrug::


They didn't use transparency either, so it looks extra good in dark mode ;)


It's a blog post, not a PR piece. The only issue I take with it is in the first screenshot, where it says to 'click the CloudShell icon' and then presents you a screenshot that doesn't actually highlight what and where the icon is. You just have to already know that '>_' is some sort of hacker parlance for a shell.

It's just a picture of the corner of the page.


AWS has such obvious contempt for the design profession that no sane designer would choose to work there.


I would say that's the problem with Amazon overall.


I might argue that it’s their recipe to success. Their UIs, from Amazon.com to AWS are pretty intuitive and content-oriented. It’s their style and it works for them.


There are many ways I'd describe Amazon's UIs, but intuitive is certainly not one of them.


Maybe it is a good look with purple hair?


Really glad to see this directly integrated into the AWS console.

I ran workshops when I was at AWS, and using the Cloud 9 shell saved us a ton of time getting a room full of people set up with a functioning AWS CLI. Being able to just click a button to pull up a shell and then paste in a command is so much lower friction.


> Sessions cannot currently connect to resources inside of private VPC subnets, but that’s also on the near-term roadmap.

That should probably have been on the launch roadmap.


Why limit access to an already valuable product?


Sure but it’s likely significantly more work to set up the private connectivity to the VPC. Makes sense to release to some customers IMO.


This would have been great to have last week. I was walking a client through deploying a project I wrote over a video call.

But before we could get started he had to:

- install the AWS CLI

- stop screen sharing while I walked him through creating an access key/secret key from the web console

- walk him through aws configure

- start the screen share back

- install the SAM CLI

- install jq

If he had used this, he could have just run:

   git clone
   aws s3 mb $artifactBucket
   sam package....
   sam deploy
And all of the resources would have been created.


Oh, how many times I have closed these cloud shells with Ctrl-W while editing the command line. Biggest annoyance ever.


The links labeled "AWS CloudShell" in the post just link to the EC2 product page.


https://aws.amazon.com/cloudshell/ is the right link; I am updating the post now!



Nice to see AWS using ECS front and center! The containers might be floating around in Fargate, methinks.

Started a CloudShell session and ran:

    ps aux
    cat /proc/1/cgroup
    echo cool :)

Also, it feels like an EIP IPv4 has now been assigned to my IAM user. Pros and cons seem equal right now in my head. Mmmmm


Hmm, I suppose this is useful for super large orgs? I feel managing the IAM policies around this is pretty much the same level of complexity as managing access to a bastion host to open an SSH tunnel through.


We use GSuite SSO with Context Aware Access and other such policies to gate access to the browser. So that means that we could give out access via CloudShell, and now those commands are gated by those same policies. That's really nice from a security perspective.

In our case, since we do development in a ChromeOS environment, and the browser is relatively isolated from the Linux VM, it also likely prevents classic SSH-hijacking.


It’s an order of magnitude less work to set an IAM policy because that doesn’t require ongoing maintenance commitments. An IAM policy is a one-time setup cost and the limited duration keeps people honest about not accumulating unmanaged local state. It’s also handy for non-administrators to contain a compromise or error - if someone pops a shared system multiple users will be affected.


I'm glad AWS is working on this. It's a big problem and companies are not facing it. Wrote about it here: https://andrios.co/articles/oneoffs

But CloudShell is still too narrow a solution. I'm sure they will improve it over time, but a few problems with today's release:

1) It only tracks bash commands. What if I write a quick Python one-off script and run it from a file? CloudTrail will never get the content of such a script. The script will be lost at the end of my session. What about Git for storing code?

2) Only works in the browser. The browser has its good parts, but during incident resolution speed is critical. Getting a prompt without my local shell history, aliases, binaries, and many other things will make it slower to resolve incidents. One might say it's for a good reason, but we can do better.

3) Only works with AWS. This is a big problem as many companies are in the process of migrating to AWS, with services still running on their own servers. Companies will use CloudShell to investigate edge cases, most of the time during incidents; engineers need fast access to all resources. Using a different solution for each type of resource won't help.

4) Hard to audit. If you ever tried using CloudTrail, you know what I'm talking about. And again, companies will need different solutions if they don't run only in AWS.

5) No review workflow support. If you only allow platform and SRE teams to access infrastructure, this is fine. But if you really want to bring ownership of problems to developers (DevOps), they need a way to get this level of access without risking production. This comes in the form of experts reviewing (instead of running) commands and scripts, faster than the regular GitHub Pull Request workflow.

There are more, but I'm still happy with the product. AWS saying that you need one-off solutions no matter how much automation you have will help us move to a future where companies treat one-off scripts as first class citizens.

If you are interested in a solution that solves the problems I pointed out and many more, check out RunOps: https://www.loom.com/share/ea25027e73c94aa395f3e0ab70b71f0e


How does this compare to using AWS Systems Manager Session Manager (except for a more straightforward naming convention)?


Session Manager is for logging into your instances, this provides an ephemeral "instance." You could (and many places do) accomplish the same thing by having an instance that provides similar functionality, but this removes the need to manage that.


This is unrelated, but currently, I'm doing my own basic web development projects and pushing them to the cloud using netlify. What should be my next step to learn about AWS, devops, and these things in general?



Interesting that AWS went with the "pet vs cattle" terminology in their blog post. I thought it was not very cool to use in 2020, as evidenced by debates on naming convention in K8S.


Makes me wonder if I can install Terraform and Terragrunt on this...

LOL, or run a remote VSCode session on it :D (I know that's not gonna happen, but would be kinda cool nonetheless)


Definitely do-able, Google wrote a blog about how to do exactly that on the GCP Cloud Shell a while back so not unreasonable to do the same on AWS:

https://medium.com/google-cloud/how-to-run-visual-studio-cod...

And Hashicorp have one on using Terraform from GCP Cloud Shell:

https://www.hashicorp.com/blog/kickstart-terraform-on-gcp-wi...


... or once you're in there, ssh out to create a tunnel you can proxy heavy network traffic through on their dime? :D

(Note: I'm sure they would catch this and it would either be a policy violation that gets you shut down or they would just know how to bill you for it)


It gives you sudo access so you can install anything. Although you don't need root to "install" Terraform, you can just download the binary and run it from ${HOME} or wherever.
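For example (no sudo needed; the version number is just whatever is current):

    TF_VERSION=0.14.3    # example version
    curl -sLO "https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip"
    unzip "terraform_${TF_VERSION}_linux_amd64.zip" -d "${HOME}/bin"
    export PATH="${HOME}/bin:${PATH}"
    terraform version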



I think this existed already. Of course the GCP one is what I am more familiar with. The AWS one seems to have 4G with 2G free. GCP, last I checked, only had 1G.


Looks good, but I wish autocomplete would be available.
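(The AWS CLI does ship a bash completer; I haven't checked whether CloudShell pre-enables it, but locally you wire it up like this, with the path depending on your install:)

    complete -C "$(which aws_completer)" aws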


I can think of unlimited uses for this. That said, everything should be (and is) in VPC subnets, so I will keep waiting.


Always amazed to still see Jeff Barr at it.


Why do you say that?


Not OP but for me it's - how does he have this much energy to keep testing, posting and showcasing new features? Maybe he could offer up a t2.small bit of it to some of us.


I'm seriously struggling to think of a use case for why I would want to use web browser to use a CLI tool.


You'd be surprised, maybe horrified, if you knew how many people primarily interact with the Management Console instead of the CLI or the SDKs. In-browser terminal sessions are a real convenience to that special kind of user that hasn't (or won't) take the time to learn a modicum of productive CLI skills, but has the occasional need to SSH in.


On the GCP mobile app, for example, not all features are exposed, but there is a cloud shell, with which you can do pretty much anything you want.


So now I can swap one of my terminal tabs for yet another browser tab where I can only run AWS commands. Great.


Prior: https://news.ycombinator.com/item?id=25431697

Not quite logged in yet - had a "AWS CloudShell is temporarily unavailable because it's being activated" screen for a while now. Fingers crossed!


Finally catching up to Azure


AWS finally catching up with Azure? Yes. That's exactly it. One day AWS will have the depth and breadth of...Azure! :)


How difficult is it for Amazon to get a live human being to read this out?

I hate mechanical voices.


Full circle :D


I find it hilarious that AWS chose the same name as GCP for this tool.

Nonetheless, excited to see it -- it's something that I've complained about with AWS since using Google's CloudShell. It also continues us down the path to easy ops-type work on an iPad (even though you can already have an EC2 instance and use Prompt to access it, being able to have a shell without needing to provision an EC2 instance is chef's kiss).


I’m impressed it actually has a name that describes what it does, instead of something like Walrus or Chalkboard.


They're actually a bit different. AWS's Cloud9[1] is like GCPs' CloudShell[2]. AWS's CloudShell[3] is /just/ a shell.

[1] https://aws.amazon.com/cloud9

[2] https://cloud.google.com/shell

[3] https://aws.amazon.com/cloudshell


I don't think there's a better name... Azure's is Cloud Shell (with a space)


We could always call it AWS SeaShell (C. Shell).


I'd be disappointed if I launched an instance and discovered that /bin/csh [0] (or at least tcsh) wasn't the default!

--

[0]: https://en.wikipedia.org/wiki/C_shell


Which we could then shorten to AWS3!



