Your boss uses Excel and doesn't like it when the reports read like this because you ignored his requirements. Your coworker is also angry at you now: when he scheduled your script to run regularly, the output occasionally turned into gibberish due to inconsistent handling of special characters and error conditions, breaking automations at random.
I spotted several errors in your provided "solution", which ought not to be possible, because I'm someone who only uses Linux briefly every few years.
The problem statement isn't a made-up, artificial, or toy scenario. The few times I have to use Linux, it's to do this kind of activity. "Run a command on all Linux servers in the data centre" is something I had to do just recently, and it turned into a multi-month exercise involving six people. It would have been an hour with Windows. Think "look for log4j", "install an agent", or "report on the kernel version used".
Problems like this seem trivial, but it is precisely the independent nature of those tools, each with its own output format, that makes them hard to compose.
For example, 'ps' has different output and different capabilities on different systems! You can't naively run it across multiple systems, because the output will be an inconsistent mess. You can't then sort, filter, aggregate, or do anything useful with it.
"aux" is the wrong option to use. It'll pretty-print the output, which forces it to truncate long values. It'll return datetimes with inconsistent offsets, depending on each server's individual time zone and regional settings. It'll strip the year off, so if you want the 'date' a process started, it won't be obvious whether it has been running for 1 month or 13 months.
If any servers aren't already in your ssh keychain, then your command will... what? Freeze? Output errors into the report? Prompt for your input thousands of times, one per server? In parallel? How... did you expect this to work!?
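For what it's worth, here is a hedged sketch of how OpenSSH can at least be made to fail fast instead of hanging on a prompt (the host list file name is hypothetical):

    # BatchMode=yes disables interactive prompts (passwords, host key
    # confirmations), so unknown hosts fail instead of blocking the loop;
    # ConnectTimeout keeps dead servers from stalling everything.
    while read -r host; do
      ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" uname -r \
        || echo "$host: unreachable" >&2
    done < servers.txt

That still runs serially, and it still leaves the output-format problem untouched.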
Saying your thousands of servers won't have SSH keys installed when they have arbitrary software installed is disingenuous at best.
Thousands of servers are manageable only with orchestration solutions; this is a solved problem. The Unix world has also evolved beyond a bash for loop that SSHes into servers and reads the output.
I can solve your task in comparable time with pyinfra or Ansible, executing arbitrary Python code on every node, with not just CSV output but whatever you dream up.
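As a rough sketch (the inventory names are placeholders), the kernel-version report from above becomes a single parallel command with either tool:

    # Ansible ad-hoc: run a command on every host in the inventory, 50 in parallel.
    ansible all -i inventory.ini -m command -a 'uname -r' -f 50

    # pyinfra equivalent: execute an arbitrary shell command across the inventory.
    pyinfra inventory.py exec -- uname -r

Output formatting then lives in whatever Python you attach, rather than in screen-scraping terminal output.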