It's probably my fault if you haven't heard of Bash-My-AWS.
Bash-My-AWS is a simple but extremely powerful set of CLI commands for managing resources on Amazon Web Services. They harness the power of Amazon's AWSCLI, while abstracting away the verbosity. The project implements some innovative patterns but (arguably) remains simple, beautiful, readable and easily extensible.
The project started in 2014 and while many hundreds of hours have gone into it, far less has gone into promotion.
I'm speaking about it at LinuxConf and have created a documentation site at https://bash-my-aws.org for anyone on this thread who is interested.
I run https://getcommandeer.com, a tool to manage your AWS and IaC infrastructure from a desktop GUI. I love bash-my-aws, as we are about to release Bash, Docker Compose, and Terraform runners. We already have Serverless and Ansible runners. They let you run your command-line tooling from a GUI, so you can instantly switch between AWS accounts/regions and even LocalStack. Because it is a desktop app, under the hood we are really running CLI tools mixed in with some AWS JS SDK.
Yea, it works terribly without a monitor as well. The site does not work headless. What was I thinking? It's a desktop application; the site is where you download the app.
To be a little more constructive with my answer: the website needs JavaScript for our chat service, our newsletter service, the ability for users to purchase licenses for the app, and the night/dark mode switch. The codebase also shares components with our desktop app, as both are Vue, with the desktop app using Electron. We will look at making the site HTML-only at some point, but it is not a focus of our dev team. If you download the app, you will see that it is a first-rate experience for managing many AWS services. The app itself probably has 1,000 or more hours of work put into it. Would love to hear insight into how we can make it even more powerful for the community.
Meta note: All things considered, Amazon has it pretty good. They put out a barely usable, bare-bones, but fully functional tool in awscli. Paying customers of AWS have to perform the engineering effort to make the API more usable, and some even open-source their projects like this. AWS is an incredible business model.
I looked; I couldn't find packages with those two names, although there seem to be some CLIs for controlling Spotify. Were you referring to specific packages somewhere for GitHub and Spotify?
bash-my-github and bash-my-spotify are two things I've made a start on but are not public yet. They've been on the backburner for a while due to competing priorities.
I was able to write a simple command that returned all songs a friend and I had in common in our public playlists.
I forget the exact syntax but it was something roughly along the lines of:
$ sort <(
user-playlists alice | playlist-tracks
user-playlists bob | playlist-tracks
) | uniq --repeated
More charitably, AWS provides a CLI for most resources and commands and (in large measure) either provides backwards compatibility or a long deprecation window. This allows hackers to build syntactic sugar on top of their infrastructure.
I love that a community has an option to build components and share the same. It has made my work much more productive.
But we certainly agree on that last point: it’s an incredible business model.
AWSCLI backward compatibility has been so good, I've never had a Bash-My-AWS command fail due to a change.
AWS CLI v2 previews were released in November 2019, and while v2 may contain some breaking changes, I wouldn't be surprised if all the commands BMA uses continue to work as normal.
Anything above bare bones will be opinionated. IMO this is the best approach for an infrastructure provider: maximum freedom, while still providing a UI for simpler access.
The annoying part about AWS putting so little effort into smoothing out the rough edges in their tooling, UX and services is that the whole ecosystem of helpers/wrappers developed to make the services usable by humans who need to get something done (rather than deep-diving again and again into AWS's many idiosyncratic rabbit holes) inevitably covers only some tiny subset of the services a typical shop is going to use.
Not to take anything away from the author of this project - Bash-my-aws looks fantastic - but it only helps you with a few core services. Same appears to be true of the commandeer tool that has also been mentioned in this thread. And the same is true for localstack, and on and on.
I really wish AWS would devote some resources to filling in these gaps themselves, and comprehensively.
Coming from Google Cloud, I couldn't deal with the atrocity that is awscli, so I eventually implemented the bare minimum of shell wrappers to at least start, stop, ssh into, rsync files to and from, etc., my AWS instances _by name_, not by instance ID. It took me a couple of hours to cobble together.
Google cloud CLI offers all of this out of the box. Why Amazon wants to make such basic commands difficult, I'll never understand.
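For reference, the kind of wrapper described above can be sketched like this (the function names and the ec2-user login are my assumptions, not the parent's actual code):

```shell
# Hypothetical wrappers: act on EC2 instances by Name tag instead of instance ID.

instance-id-by-name() {
  aws ec2 describe-instances \
    --filters "Name=tag:Name,Values=$1" \
    --query 'Reservations[].Instances[].InstanceId' \
    --output text
}

aws-start() { aws ec2 start-instances --instance-ids "$(instance-id-by-name "$1")"; }
aws-stop()  { aws ec2 stop-instances  --instance-ids "$(instance-id-by-name "$1")"; }

aws-ssh() {
  local ip
  ip=$(aws ec2 describe-instances \
    --filters "Name=tag:Name,Values=$1" "Name=instance-state-name,Values=running" \
    --query 'Reservations[].Instances[].PublicIpAddress' \
    --output text)
  ssh "ec2-user@${ip}"
}
```

Usage would then be e.g. `aws-ssh web-1` or `aws-stop web-1`.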
This looks pretty great. While the AWS CLI is very comprehensive, I always struggle to remember which flags each command needs, and it's not very consistent.
One thing I've not been able to work out with bash-my-aws yet is how to easily switch between regions and accounts. I noticed you can use `region` on its own to set the current default region, but I'm often working with multiple regions, and it'd be a pain to run `region us-west-1` separately each time I want to use a different region. I couldn't see a way to specify a region for a given command (e.g. how you'd do `aws ec2 describe-instances --region us-west-1`). I guess you could do this with the environment variable, `AWS_DEFAULT_REGION=us-west-1 instances`, but that's a bit verbose.
Similarly with AWS accounts: I use multiple AWS accounts, accessed with different access keys, which are defined as profiles in my ~/.aws/config. Normally I'd use these with the AWS CLI like `aws ec2 describe-instances --profile production`; I couldn't see any way in the docs to use or set this?
They're good questions. I can tell you how I manage regions and accounts but am interested in learning how people think Bash-my-AWS might better support users in this regard.
The AWSCLI, as well as the SDKs, supports grabbing Regions and account credentials from environment variables.
For Regions, I tend to use the following aliases:
alias au='export AWS_DEFAULT_REGION=ap-southeast-2'
alias us='export AWS_DEFAULT_REGION=us-east-1'
alias dr='export AWS_DEFAULT_REGION=ap-southeast-1'
I normally work in a single Region and swap when required by typing the 2 character alias.
To run a script or command (doesn't have to be Bash-my-AWS) across all Regions I use region-each:
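A minimal version of that pattern, for anyone curious (bash-my-aws ships its own region-each; this sketch just illustrates the idea):

```shell
# Run any command once per region, tagging each output line with the region.
region-each() {
  local region
  for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
    AWS_DEFAULT_REGION="$region" "$@" | sed "s/$/ #${region}/"
  done
}
```

e.g. `region-each stacks` lists CloudFormation stacks across every region.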
For AWS accounts, I type the name of the account and I'm in. For accounts using an IdP (LDAP/AD-backed corporate logins) I generate aliases so I have tab completion and simple naming.
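The generated aliases can be as simple as exporting AWS_PROFILE, which the AWSCLI and SDKs honour (the account names here are hypothetical):

```shell
# Switch the active account by setting AWS_PROFILE for the current shell.
alias production='export AWS_PROFILE=production'
alias staging='export AWS_PROFILE=staging'
```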
In accounts that are only set up to use AWS keys, I use aliases that export credentials kept in GPG-encrypted files. Last time I looked, AWS docs suggested keeping these long-lived credentials in plaintext files readable by your account. That's asking for trouble IMO, especially if they're kept in a known location that a compromised node library could exfiltrate them from.
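A sketch of that approach (the path and alias name are made up; the encrypted file would contain `export AWS_ACCESS_KEY_ID=...` lines):

```shell
# Decrypt credentials on demand and export them into the current shell only.
alias prod-keys='eval "$(gpg --quiet --decrypt ~/.aws/prod-credentials.gpg)"'
```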
AWSCLI v2 beta includes support for SSO so it's probably a good time to look at how BMA could include support for auth.
I would definitely keep the jq dependency there, especially if you plan to expand the code base to provide ECS and Fargate commands. You will quickly run into use cases where JMESPath is not capable of parsing the JSON output of the various task-definition commands.
"For more advanced filtering that you might not be able to do with --query, you can consider jq, a command line JSON processor. You can download it and find the official tutorial at http://stedolan.github.io/jq/."
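A concrete example of where jq goes beyond --query (assuming jq is installed): flattening an EC2-style tag list into a key/value object is a one-liner in jq but awkward in JMESPath:

```shell
# Turn [{"Key": ..., "Value": ...}, ...] into a flat object keyed by tag name.
echo '[{"Key":"Name","Value":"web-1"},{"Key":"Env","Value":"prod"}]' |
  jq -c 'map({(.Key): .Value}) | add'
# → {"Name":"web-1","Env":"prod"}
```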
Just the other day I was looking for an official docker image that includes the AWS CLI. On top of that, and mainly, I was looking to find more documentation or tooling to better automate the deployment of new AWS projects.
Does anyone here have any experience (starting from scratch, with no AWS resources) setting up policies/users/resources/configurations via something similar to the Deployment Managers of GCP and Azure? Preferably something declarative or template-based.
Bash-my-AWS looks like a great step towards the goal I have in mind but I may also just be unaware of other tooling or AWS capabilities.
Have a look at CDK. It's a framework made by AWS on top of CloudFormation that lets you use Python/JavaScript/etc.
I've been trying it out recently to move away from TF, and it's a promising alternative.
Ansible and Serverless are also very powerful IaC tools that let you deploy on top of CloudFormation but give you a much nicer way to do so. Terraform requires state, which is a pain point for some. Ansible lets you just run its playbooks without worrying about state in S3 or DynamoDB.
I'd say it's increasingly common to find that CF simply doesn't support a given resource or pattern.
We've got custom resources _everywhere_ as a result and have only just started moving to TF. CDK is trying to drive up adoption, though I've not used it yet so can't offer an opinion.
I would strongly recommend using CloudFormation through a typed proxy like troposphere. I would also not recommend using Terraform at all, since you will quickly run into warts and fundamental issues. I have done projects with both, and my current blessed workflow is a custom Python driver that uses CF via troposphere with minimal boto3 as glue. (Also, I work at AWS.)
Several of the warts in Terraform were fixed in 0.12.
While I think the HCL DSL was a mistake and prefer CloudFormation's YAML, CloudFormation has its share of warts as well, and the TF community has done better than CF at staying up to date with AWS API changes - which reflects quite poorly on AWS, actually.
> would not recommend to use terraform at all since you will run into warts and fundamental issues
It's not a good look to be employed by the 800 pound gorilla and bash your company's competitor without mentioning specifics.
I just finished a POC that generates 90% of the AWS services I use per client/project/application. The remaining 10% is DNS stuff that I can easily do by hand, but with a few clicks I get everything provisioned with much less human error (buckets, Lambdas, API Gateways, Cloudfront distributions, etc.)
The CloudFormation template is ~1,000 lines of JSON, but it explicitly describes everything I need and it takes in parameters - it's wonderful! Thank you again!
At work we are using Terraform to manage everything related to AWS resources, including accounts, IAM policies and groups. We also used the Serverless framework and CloudFormation, but Terraform is what works for us and I can recommend it as a main IaC tool.
Interacting with individual DNS records in Route 53 is very hard using AWSCLI. I wrote a Python wrapper around the Route 53 API to make command-line records management easier (and also to do dynamic DNS with your own Route 53 hosted domain).
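To illustrate the pain point: even a single-record UPSERT through raw AWSCLI needs a JSON change batch. A hypothetical helper to hide that might look like:

```shell
# Hypothetical: UPSERT one A record (zone ID, record name and IP supplied by caller).
route53-upsert-a() {
  local zone_id=$1 name=$2 ip=$3 batch
  batch=$(printf '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"%s","Type":"A","TTL":300,"ResourceRecords":[{"Value":"%s"}]}}]}' "$name" "$ip")
  aws route53 change-resource-record-sets \
    --hosted-zone-id "$zone_id" \
    --change-batch "$batch"
}
```

e.g. `route53-upsert-a Z1234567890ABC host.example.com 203.0.113.10` (made-up values).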
Bash-My-AWS thinly wraps AWSCLI commands that would otherwise be too long to type. So you're still using AWSCLI and can improve your skill with it by inspecting the source of Bash-My-AWS functions.
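For example, bash's own introspection shows exactly what a function runs (the instances body below is a stand-in; with bash-my-aws sourced, `type instances` prints the real wrapped awscli call):

```shell
# Stand-in for a bash-my-aws listing function, to demonstrate introspection.
instances() { aws ec2 describe-instances --output text; }
type instances   # prints "instances is a function" followed by its body
```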
If you want to easily manipulate your AWS environment from the command line, use the AWS cmdlets for PowerShell. The fact that PowerShell cmdlets work on objects instead of text makes them miles better than this or the AWS CLI, because you don't spend most of your time figuring out how to wrangle text into meaningful output.
Windows users have commented on how Bash-My-AWS's innovative use of Unix streams reminds them of PowerShell.
The listing functions output lines of tokens, with the first token being the resource identifier. Piping that output into functions for that resource type results in only the resource IDs being used.
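That pattern can be sketched with toy stand-ins (widgets and widget-delete are invented names; BMA's real listing and action functions follow the same shape):

```shell
# Toy listing function: emits ID-first lines, like `instances` does.
widgets() {
  printf '%s\n' \
    'w-111  web-1  running' \
    'w-222  db-1   stopped'
}

# Toy action function: accepts IDs as arguments or reads piped lines,
# keeping only the first token of each (the resource ID).
widget-delete() {
  local ids
  if (( $# > 0 )); then
    ids="$*"
  else
    ids=$(awk '{print $1}')
  fi
  echo "deleting: $ids"
}

widgets | grep stopped | widget-delete
# → deleting: w-222
```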
Do you have any insights on how someone used to the text-only world of Bash can transition to using PowerShell cmdlets?
The problem I run into is that it just feels like so much typing to me. I have to read documentation. All the attributes HaveReallyLongNamesThatContainCapitalLetters. By the time I've made my beta version of the command I want to run, I feel like I need to open a text editor to finish it. Maybe add some error checking. Some comments too. Maybe a unit test or three. And now I have an entire project and all I wanted to do was add a line of text to the end of a file.
Part of the problem is my own ignorance of the APIs and what commands are available to me. But it all seems too verbose to use practically. The PowerShell language seems very good for what you would write a shell script to do, but for interactive commands, I have a hard time believing that people use it. It's just so verbose.
I have developed something similar on top of the AWS CLI that incorporates a bunch of integrations with other tools like cloud-init and various bits of Batch-related instrumentation: https://github.com/kislyuk/aegea
The first thing I'd do after installing this is alias an "aws-" prefix onto every command. I like namespacing things, as I suck at remembering names...
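One hypothetical way to generate that namespace after sourcing bash-my-aws (the function list here is shortened; extend it with whatever you use):

```shell
# Create an "aws-" prefixed alias for each bash-my-aws listing function.
for fn in instances stacks buckets keypairs regions; do
  alias "aws-${fn}=${fn}"
done
```

After this, `aws-instances` runs `instances`, and so on.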
What really sells me on this tool is the ability to examine the underlying awscli command and transformations. I’ll be giving this a go in the new year!
https://linux.conf.au/schedule/presentation/144/