I don't get it. How is this different than starting new threads?
In the article example, it doesn't look like anything is returned from each parallel function call. The main loop just invokes the func for each i, and they print when done. No shared memory, no scheduling or ordering... what's the advantage here?
In the code examples, shared memory & scheduling don't seem to be a thing either. It's more like functional or chain programming: a function calls the next func and passes its output along. Each loop iteration runs independently and asynchronously from the others (see the sketch below).
Reminds me of ECS model in gamedev.
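Something like this minimal Bash sketch of what I mean (the stage functions are made up for illustration, not from the article):
```
# Fire-and-forget, chained style: each iteration runs its own little
# pipeline of functions to completion, with no shared state.
stage_one() { echo $(( $1 * 2 )); }        # produce a value
stage_two() { echo "item $1 -> $2"; }      # consume the previous stage's output

for i in $(seq 1 5); do
  ( stage_two "$i" "$(stage_one "$i")" ) &   # each iteration: its own subshell
done
wait   # reap the background jobs; output order is nondeterministic
```
No locks, no queues; each iteration just runs its chain on its own.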
That's great and all, but it doesn't solve or simplify the intricacies of parallel programming so much as circumvent them, right?
Is the advantage that it's low-level and small?
I think the same "concept" can be done in Bash:
```for i in $(seq 1 100); do fizzbuzz "$i" & done```
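To be fair, for that one-liner to actually run you need a fizzbuzz definition and a wait; here's a self-contained sketch (the fizzbuzz body is just my guess at the obvious implementation):
```
fizzbuzz() {
  local n=$1 out=""
  (( n % 3 == 0 )) && out+="Fizz"
  (( n % 5 == 0 )) && out+="Buzz"
  echo "${out:-$n}"   # print Fizz/Buzz/FizzBuzz, or the number itself
}

for i in $(seq 1 100); do fizzbuzz "$i" & done
wait   # output arrives in whatever order the jobs finish
```
As with the one-liner, the output order is nondeterministic, which is exactly the "no scheduling or ordering" point above.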
If it weren't a toy problem but rather a larger set of rules describing a more substantial algorithm, it would matter more whether you could pour in more facts as data enters the system.
I get your point; I personally do a lot of crude concurrency with POSIX fork() and shell spawns from within suitable programming languages, e.g. Picolisp and Elixir.
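In pure shell terms, the crude pattern I mean is roughly this (worker.sh and the out.* files are hypothetical placeholders):
```
# Spawn each worker as a separate process, collect output via files,
# then reap everything with wait -- the shell equivalent of fork()/waitpid().
for job in a b c; do
  ./worker.sh "$job" > "out.$job" &   # each worker is its own process
done
wait                                  # block until all children exit
cat out.a out.b out.c                 # combine results once everyone is done
```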
Now I want to play devil's advocate and give you the "corporate" perspective:
You say Agile disrupts your "flow", makes you deliver buggy code and just work to close tickets like a drone?
Good!
Who said code needs to be perfect and bug-free? Remember Lean? Just get it out there! We can always fix things later.
From a business perspective, we are spending money to develop functionality, not our people's skills.
Tickets and tasks should result in adding value to the business, not to our codebase.
This is a cruel and unsustainable way of thinking, which will have your best developers leaving the company very quickly. But if you're an enterprise corp, you see them as interchangeable "resources" anyway, and Agile helps you accommodate their leaving by splitting the work into small bits and having no single owner.
> Tickets and tasks should result in adding value to the business
I mean, they should, but it's not always a direct path. Resolving tech debt should speed up delivery of future products and features. Do businesses actually understand that reasoning? Not really, in my experience. This is really just a lack of understanding or appreciation of the nuances of how software is developed, on the part of non-tech companies that have been forced to hire engineers and create engineering departments.
> Agile helps you accommodate their leaving by splitting the work into small bits and having no single owner.
This only holds true if you have a team structure and organization that prevents specialization and keeps everyone a generalist. I don't believe that's actually possible...
Institutional knowledge is massive, and I feel it's often vastly undervalued by companies when engineers decide to leave.
It's a good way to set expectations for what is to be delivered in a Sprint, and it limits going off-piste in too many directions without delivering functionality (which is a problem for creative/broad-skillset developers at times).
It also limits the impact/waste in case the requirements change: you've only worked two weeks in the wrong direction. (And they say changing requirements has always been the biggest problem in software projects.)
However, I find that it often reduces developers to mindless drones, just trying to satisfy acceptance criteria without thinking about the big picture. (You know you're there when Refinement meetings are very quiet and the Architect does most of the talking).
As for estimation/scoring - I find that estimating "complexity" is nonsense. Treating points as "days" is more realistic and leads to better time management from everyone.
A little disappointing, actually! I thought the pattern was a clever way to capture the status of the parachute in the event of failure, so that if all they got was a garbled low-res image before the lander crashed to the surface, they could deduce which section failed. Alas, it was just an easter egg.
I actually clicked on this thread to comment that you could do cool things with some combination of this, spare screen/tablet devices, and Barrier. But you beat me to it.
Thanks! I mentioned Synergy as I used it many years ago and wasn't aware of the license-change implications (that's why I linked to the open-source repo).
Good to know there's a good fork.
Synergy never ceases to amaze me: a few bugs that have been around for years make it a completely broken product for me. I regret paying them.
I had a similar experience after purchasing a license. Admittedly it was a few years back now, but I seemed to encounter issues that would strike at the most inopportune times. I eventually moved to an RDP-based setup and haven't looked back.
For Windows only, there's Mouse Without Borders [0], which works a lot better than Synergy, including clipboard sharing. Of course it's not usable if you're mixing operating systems, but for Windows-to-Windows there's no better solution I'm aware of.