people.sort_by_key(|person| person.name.clone()); // sort_by is also an option...
I think it's worth calling out exactly what is happening in the Go example (a rough sketch follows the list):
- We create a closure that captures the people slice
- We pass the people slice and the closure to the Slice function
- The Slice function mutates the people slice, and because the closure captured the slice it sees these mutations too
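A minimal, self-contained sketch of that pattern (my own illustration, assuming a Person struct with just a Name field, not the article's exact code):

    package main

    import (
        "fmt"
        "sort"
    )

    type Person struct{ Name string }

    func main() {
        people := []Person{{"Carol"}, {"Alice"}, {"Bob"}}
        // The closure captures `people`; sort.Slice swaps elements of that same
        // slice in place, so the closure observes the mutations as it compares by index.
        sort.Slice(people, func(i, j int) bool {
            return people[i].Name < people[j].Name
        })
        fmt.Println(people) // [{Alice} {Bob} {Carol}]
    }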
I get why the Go team wrote sort.Slice like that, and it was perhaps the best they could have done with the language features...But I think we're going to have to agree to disagree on how wonderful it is compared to other languages ;).
No disagreement, having used Python a lot and finding it much nicer for this case. My compliments to Go on this were relative to just how tedious the "implement this sort interface" approach that TFA was describing is. sort.Slice is definitely still pretty rote (I think I write the same Less() function 90% of the time?), but it's at least fewer characters of rote!
It's worth noting, though, that Go's way is just 40 characters more than Python's, even with an inline comparison function and Go's verbosity.
If I need to reverse the order, it looks easier to do the Go way (just reverse the comparison operator) than the Python or Rust way (I guess both have something like an additional "order" parameter).
Rust and Python both feel more elegant but I actually like Go's way.
Go's method is like that because it didn't have generics, but it does have the advantage of allowing you to sort more complicated things, e.g. indexes into other data structures, or computed values.
Sorting by key is a special case (admittedly the most common special case).
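For example (a made-up sketch, not from the thread): sorting a slice of indices by the values they point at, leaving the underlying data untouched.

    package main

    import (
        "fmt"
        "sort"
    )

    func main() {
        scores := []int{42, 7, 19}
        idx := []int{0, 1, 2}
        // Sort the indices by a computed key (the score each index refers to);
        // `scores` itself is never reordered.
        sort.Slice(idx, func(i, j int) bool {
            return scores[idx[i]] < scores[idx[j]]
        })
        fmt.Println(idx) // [1 2 0]
    }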
I like working with Free and Open software much more than proprietary software. I think it's important for society, and I have more fun that way too!
Also the payoff for me has been very good, I can learn emacs once and enjoy using it for the rest of my life for all significant written language tasks on a computer.
Perhaps I could be a little more efficient if I were using a JetBrains IDE, but then I wouldn't like what I was doing as much. Enjoying what I do, even if it may look slightly contrived to others, is important to my achieving results at work.
This argument is not convincing to me, especially considering JetBrains publishes their IDE base as open source.
Every time I have to use VSC to develop TypeScript and Angular, I have problems with finding definitions (works 30% of the time), code search (it takes longer due to the constricted interface), git operations (want to do more than a simple pull and push? good luck), and much more. WebStorm, on the other hand, has far fewer bugs, a more flexible interface, and more features. I am glad that people make an effort to build an IDE instead of an editor with IDE-style features, and I'll gladly pay a very small amount of my salary to them.
Every workshop has higher tooling costs than a software developer's. Imagine a car mechanic propping up a car with a 2-by-4 because they use what's available for free. No, they buy a $30,000 lift because they need it to get their work done quicker.
> Every workshop has higher tooling costs than a software developer's. Imagine a car mechanic propping up a car with a 2-by-4 because they use what's available for free. No, they buy a $30,000 lift because they need it to get their work done quicker.
This is a far from convincing argument. Jetbrains IDEs are not the equivalent of a professional lift and the competing (often free) products are not the equivalent of a 2x4.
Is Jetbrains good? Well, I've used it for Java and was pretty impressed.
Is it $199/year[1] better than the free stuff? Well, many people don't think so. It's fine if you only ever use a single stack, but most of us use multiple languages and multiple stacks; now you're looking at $649/year (see link below) for all tools. Considering that my current personal development computer cost less than that years ago, is it any wonder that the price is considered too much?
I think the problem is that developers are looking at the Jetbrains products and comparing it to the value they get from other development purchases.
Compare:
A single $1k computer will last for many years, do every single development task needed to make money, be used for entertainment, and write all the actual software that will be sold. When it is too slow for dev (in a decade from now), it'll be repurposed for something else.
A single annual payment of $649 to JB results in a tiny increase in dev speed, which will disappear at the end of the year anyway. It won't make the code more robust, it won't help solve business problems any faster, it will only make code navigation faster.
For a dev, look at what $1000 buys, then look at JB for $650, and it doesn't look like such good value for money anymore.
Just wanted to note that if you're talking about "personal development computer" and "developers looking ... and comparing", then it's more like an "individual" license as opposed to the commercial one, which is meant to be evaluated by companies.
The individual-license pack for all of their IDEs would set you back $250 as opposed to the $650 commercial license.
I used to use "IDEA Ultimate", which is 30% cheaper than the All-Pack and supports installation of most of the other language plugins, allowing me to use a single IDE for everything. Nowadays I'm using separate IDEs as that seems to work faster.
I personally find the price worth it, as even a simple "expand selection scope" operation which I use many times per day just doesn't feel right in VS Code.
JetBrains' All-Products Pack costs me only 150€/year + VAT (250€ individual license minus 40% for subscribing for 3+ years) which is also fully tax deductible. In the end it costs me less than my cinema budget for a year of top notch products with proper support and bug fixing where I don't have to wait months for an answer.
I stand by my 2-by-4 comment because they get the work done, just slower and more awkwardly. It's always super amusing how software developers, with one of the highest salaries around the world (yes, even in poorer regions), complain about costs when other jobs require tens of thousands in initial investment.
Currently that shows a cost of 250 euros per year, or about 20 euros per month (excluding VAT which varies).
For most developers, that is indeed a relatively small amount (even I opted for the ultimate package, despite earning in the low 2 figures in Latvia), whereas the 650 euros for commercial licenses would be doable for any organization that cares about their developers' experience.
Personally, whenever I see commercial software or a SaaS/PaaS/IaaS solution, I'm tempted to throw a brick through someone's window (figuratively) because those are likely to result in unreasonable amounts of vendor lock (especially with cloud services around Kubernetes management), but personally I haven't found a better IDE than what JetBrains offer.
For Java, all of the alternatives are worse: Eclipse is buggy and crashes (though some swear by its incremental compiler and integrations), NetBeans is kind of dated and struggles with projects that have 4000+ source files (though it's cool that Apache keeps it alive and there's the whole module enable/disable functionality and their VisualVM integration is great).
For .NET, Rider is easily up there with Visual Studio, even when you're doing something more niche, like working with the Unity game engine (the performance hints are nice), or just working on .NET apps.
For PHP, Ruby, Go, Python and other languages their tools feel competent and oftentimes suggest whatever it is you might want to do, be it setting up your runtimes properly, configuring your dependency management systems, installing all of the dependencies, importing the project config/launch profiles, etc.
For Node/JavaScript I have never found a good IDE, but maybe that's because the language is sometimes a mess to work with - e.g. getting only some very basic completion in some garbage 3000 line AngularJS controller because even the IDE has no idea what the hell is going on there, or having Vue 3 use the <script> tag for adding code imports instead of detecting that I'd like to use <script setup>. But then again, they're pretty speedy with updates, and if you don't do anything too crazy with your projects, then it should be good.
I don't have much experience with their C/C++ offerings, their lightweight text editor (Fleet), or the likes of DataSpell, though their DB management offering, DataGrip, is pretty okay too! You can also configure the individual IDEs like IntelliJ to show hints for most decent frameworks.
Sure, I also like to use emacs since it has stood the test of time, but the point was not that proprietary is better; it was about the unwillingness to support companies that sell products.
If we refuse to buy products then we end up with companies offering just services with vendor lock-in. Well... we already sort of ended up in such a world.
There are significant pockets of non-git users in the developer community ;)
I find hg much more pleasant to use than git, and can generally still work in git based teams while using hg.
I find pijul's approach much more interesting than git's, and look forward to pijul (and others :) pushing forward how we do revision control.
I don't believe git to be the end of the evolution of revision control systems (which for me has looked like cp -> rcs -> cvs -> svn -> git|hg), and find git lacking enough that I look forward to what the next generation does.
Would I argue that you personally should branch outside of your git-centric world? Of course not! Just as I've worked with many devs who live their lives gainfully employed only knowing Java, or sysadmins who only know Microsoft, it is up to you whether you wish to explore different approaches and styles to achieve your end goal.
Sure, there are developer communities around the other SCMs, and many are passionate about those alternatives.
My point is that even if you do get into one of those other communities, you are still going to have to learn how to use git because so much of the world uses it. You will need it either for your job or because the open source project you want to use is on it. You can't skip learning git.
If you don't mind that, then go for it. For me, I don't want to use my limited capacity for learning things on learning a second (or third, since I still have SVN usage somewhere in my brain) SCM.
That’s not true. I’m a gamedev and have never had to learn git. The closest would be downloading a zip from GitHub. Almost all game dev is done on Perforce due to the enormous asset sizes we work with.
Also in gamedev. Git doesn't work for larger AAA projects, as it often chokes on the amount of data stored by production teams, even when LFS is used. Perforce can handle huge binary files relatively easily in comparison.
So you're saying you contribute to absolutely no 3rd party code? The person is saying that if you spend time outside of your VCS bubble, then you will end up using git, even if you don't like it.
Correct. The dev teams working on the ‘client’ part of the game (ie, the game) don’t have time to be contributing code to open source projects, but we don’t use much open source code anyway. But other teams within the company will be using Git for server code, and other non-client side work.
I’m not saying that Git is never used in game dev (it absolutely is), I’m saying that not everyone will have to learn it, or ever touch it.
The thing about Linux in the early 00s was that to a significant portion of developers, it was already obvious where things were headed. Anyone not seeing the potential wasn't really paying attention. And one shouldn't care what people who aren't paying attention think.
How likely is it that Fossil will displace Git? If you believe it is likely, then learning it, and using it for projects, may be time well spent. But even then, it'll take so long that you will still have plenty of time to adapt. So perhaps it is something you should keep an eye on, but not worth investing in now. If you find Git to be a pain, then a better investment right now, which pays off right now, would be to figure out what you can do to get along with Git, and thus most developers.
You have to treat learning as an investment. Which means you have to think about returns on investment. And just like regular investments, it doesn't matter what people who don't pay attention think.
I think it kinda doesn't matter in this context, though. You can pick up enough in an afternoon to be productive in a new VCS. If I worked at a Mercurial shop, I wouldn't disqualify a candidate that had only used git.
But sysadmin'ing Windows vs. Linux is basically two different jobs. You certainly wouldn't hire someone to be a Linux admin who'd only worked on Windows servers before.
Eventually? Sure, nothing is forever. But the replacement won't be by something older like p4 or hg. The real question is: has the replacement been written yet?
The bar is a lot lower; sure, you might want to be able to use git, but if you're only using it enough to interoperate you don't need to know how to do fancy stuff and can get by with a subset of common commands (if you only clone, pull, commit, and push, then you can just about s/hg/git/g) and ugly fixes (why learn to rebase when you can make a fresh clone and hand-pick changes?) without feeling bad about it.
The next big thing will be semantic code versioning. I.e. a tool that understands language syntax and can track when a symbol is changed.
There are already experiments in this area[1], but I don't think any are ready for mass adoption yet.
The fact that it's 2022 and we're still working with red/green line-based diffs is absurd. Our tools should be smarter and we shouldn't have to handhold them as much as we do with Git. It wouldn't surprise me if the next big VCS used ML to track code changes and resolve conflicts. I personally can't wait.
> The next big thing will be semantic code versioning. I.e. a tool that understands language syntax and can track when a symbol is changed.
Hmm maybe. I think this will come, but it may be more broken up through build tools that include this functionality, not a singular VCS. I think aggressive linting is starting to help with whitespace and formatting adding noise to diffs. We're seeing progressively more integrated build tools (Cargo, NPM, vs anything C/Java).
Personally, my bets are on the next VCS being more "workspace" centric as a next-step evolution. Any big change is going to come as we already change how we work. We're starting to see a lot of various tools that are basically VM/Container workspaces that you work out of. Cheap to spin up for each feature, instead of branching/pushing/pulling on one local repo. I think "thin client" reproducible workspaces are the next evolution.
What will VCS look like when everything is always-connected and (maybe?) cloud hosted? Maybe it'll allow "sharing" of workspaces, so you can build big features as a team (instead of feature branches being pushed/pulled). You'd certainly not store the "fat" history of changes locally, and big assets/binaries/etc would be better supported. Maybe it'll be built into one of these virtual workspaces as an overlay-fs, so its transparently auto-saved to a central store instead of manipulating the existing file system like git.
If they ever fix the performance issues and get a little polish, the patch-based DVCSs (pijul et al.) will be a great improvement in usability. If I were guessing, I'd bet on that as the next step (given the understanding that predicting the future like that is a fool's game).
Sorry, I didn't use SVN enough to answer that. Basically, they're distributed VCSs like git, but where git stores commits, that is, snapshots of what the files look like at a specific commit, patch-based VCSs in the darcs "lineage" store diffs. Obviously the end result looks similar - git gives you diffs by comparing commits, darcs replays diffs to instantiate what the files actually look like at any given time - but it (allegedly) makes it much easier to handle arbitrary merges and do things like maintain a branch that's just like another branch but with a single commit on top.
Largely complaints of Rust seem to boil down to the programmer needing to describe object lifetime information in code.
We can, as the original post did, show approaches that are overwhelmingly difficult in Rust because of this but trivial in Go. Alternative approaches, as this post shows, can be relatively straightforward.
In a similar vein, a Python programmer might complain about having to explicitly describe object type information in Go code. One supposes they could show approaches that are overwhelmingly difficult in Go, but trivial in Python.
Python, Go and Rust programs all do have types and object lifetimes. It is just that mistakes in type are not found until runtime in Python, and likewise mistakes in lifetime are not found until runtime in Python and Go.
Personally, after years of Python I came to value describing types in code, and after years of Go I came to value describing lifetimes in code too.
Increasingly, we automate processes and have programs do the work humans once did.
It's extremely helpful and productive, but it has a darker side. The processes are rigid because machines are rigid, and the designers cater to the 99% cases.
But then the 1% happens, and you're left out in the cold.
In the old world of humans and paper, as wasteful as it was, it was easy for exceptions to be made if the clerk was willing, and if they weren't you'd find another clerk, or the clerk's supervisor. The processes tended towards being flexible.
But today, you increasingly don't interact with any humans, or if you do don't be surprised if, in your unusual case, they say "the computer won't let me".
As governments move more and more towards digitization, and embrace machine learning, I expect similar stories might unfold - only it won't be with an opt in social media website.
> As governments move more and more towards digitization, and embrace machine learning
I spent a decade in the public sector digitalisation of Denmark, a country that competes with Estonia over being the most digitalised in the world.
I fully believe we should legislate against automated processes taking decisive actions.
It's inefficient, but what I experienced with regard to laws is that they are way messier than anyone working in digitalisation seems to realise. We built a system that let employees report their business-related driving; in Denmark you get a tax reduction when you drive your own car for work purposes, and the law covering it is basically an A4 page of tax law that seems sort of clear. You have 3 sets of taxation rates you get to deduct from, meant to be used for different types of work-related driving. Simple, right?
Well, it turned out that across 9 different municipalities there were 9 different ways to interpret that A4 page of law text, and more than 100 different union agreements on how to extend or alter the tax law for certain groups of workers.
As hilarious as it was to sit through meetings with different sets of tax people from different municipalities getting into heated arguments about who was breaking the law, it was also sort of eye-opening for me at the time. We made this as an OSS project where we bought the development, which we project managed. My role on the project management team was as a code reviewer/specifier of sorts, and all our estimates simply went out the window when we realised we really had to build all those different ways of interpreting the law, as well as make room for future alterations. In the end it didn't extend the project that much, I think we still delivered it on schedule, but it was a very different product with lots and lots of setup required, because the different municipalities needed to be capable of deciding which rules were turned on for which groups of workers, as well as having control over how the approval system was handled, by everything from tax lawyers going through every submission, to secretaries, to RPA robots simply clicking accept on everything.
The system wasn’t related to decision making automation that couldn’t be easily undone by humans, because it was still a relatively simple system. But if that sort of complexity is what you get from some of the simplest legislation we have, then imagine what it would look like for laws covering thousands of A4 pages of text.
Dehumanisation of essential civic processes is a step towards "cybernetic governance", and is a topic I explore in some detail in Digital Vegan [1]. This is distinct from what most of us still call "e-governance" in subtle ways. I am concerned that people do not yet understand the nuances between processes that can be automated to really improve life and where we cross the line into technofascist dystopias that will tear societies apart.
I share an attitude with Frank Zappa here. Zappa was rather oddly against "Love song lyrics". He said they led to poor mental health by propagating unrealistic expectations of intimate relations.
Similarly, I think that Science Fiction has a lot to answer for. I personally love most SciFi, but, like Orwell's work, the Cyberpunk genre was misinterpreted as a blueprint instead of a warning, and many people carry around distorted, unrealistic and quite damaging ideas of what a "good" technological society should look like.
>I fully believe we should legislate against automated processes taking decisive actions.
There are certainly decisions that should not be fully automated. But this has very little to do with the account recovery issue we're talking about.
I believe that account recovery, and more generally proving your identity, can be done automatically with greater accuracy and far more securely than any process involving humans.
We have secure, electronic, government issued identity documents that are perfectly suitable for automation. Let's just use them! If we must legislate then let's introduce a right to prove our identity using our government issued ID.
There are other issues related to oligopoly accounts that are hard to solve. But proof of identity is not one of them.
> We have secure, electronic, government issued identity documents that are perfectly suitable for automation.
And what do you propose as a solution if your government-provided identity gets lost or stolen or hacked?
Or for people who have a hard time getting such a doc? (note: Sweden currently has a crisis because it can take over 1 year to get a passport).
Or for people who live in countries which don't have these systems?
Are you really ok with uploading a video of you holding your passport every time you want to log onto a service (see "id.me" controversy)?
Now, what might be nice is if the government used a highly secure cryptographic system to allow identity verification, but drivers licenses and passports aren't that.
>And what do you propose as a solution if your government-provided identity gets lost or stolen or hacked?
Report the old one stolen/compromised, get a new one, use it in the account recovery process.
>Or for people who have a hard time getting such a doc? Or for people who live in countries which don't have these systems?
This is a core responsibility of any government. It works well enough in many countries and we should not wait for the last government on earth to get its act together before using it. It can be gradually introduced country by country.
>Are you really ok with uploading a video of you holding your passport every time you want to log onto a service (see "id.me" controversy)?
Having a right to prove your identity using an official ID is not the same as having an obligation to do so. I would only use it with a few key accounts that I trust (and with financial institutions where ID checks are mandatory).
Also, I wouldn't have to hold up my passport at all, nor would I have to do it every time I log in. The platform would read the passport chip once upon registration or during account recovery and check if the picture on the chip matches my face.
>Now, what might be nice is if the government used a highly secure cryptographic system to allow identity verification, but drivers licenses and passports aren't that.
> Having a right to prove your identity using an official ID is not the same as having an obligation to do so.
I'm sceptical as to whether you can avoid it becoming an obligation.
You sign up for $SOCIALNETWORK. Some opaque 'bot detection' process deems your account 'suspicious' and locks it. They offer to unlock your account if you prove your identity using an official ID.
That makes it obligatory in practice, if not in theory.
I share your scepticism, but that's a political decision. Nothing protects us from bad political decisions besides participating in the democratic process.
What's happening right now is that we are sacrificing a lot for the financial benefit of corporations and for politicians' control obsession while we can't use some of the same technologies and capabilities for our own benefit.
We often have an obligation to prove our identity using a government issued ID, but we have no right to do so when we want to.
You can also read this story another way: non-scaled manual processes have accumulated decades or generations of accidental complexities. I've seen this in the example of a central piece of software my university was ordering to manage the records of all grades, achieved credits, registered exams and so forth. Most of this was already managed by a centralized agency (Zentrales Prüfungsamt), but every faculty had slightly different examination regulations and processes. It's not that most of these differences really provide any benefit to the students or the institution – electrical and mechanical engineering are so close to each other that there is no rational way to explain why their registration windows for practical courses can't be open for the same length of time – except that everybody is used to the way it is now and each faculty makes a stand for its right to the status quo.
And in my opinion the reason for most of the conflicts that arose is a failure of expectation management about what the digitization effort can accomplish (on a reasonable budget): software systems are cost efficient only with mostly homogeneous processes. Their development is such an expensive undertaking that it can only compete with individually trained humans when you can amortize the costs over a large number of use cases (cf. https://xkcd.com/1319/ ).
Thus the first step should always be to get everybody on board to give up some of their non-essential individuality. There is no need for car taxation to change from municipality to municipality. (Be aware of the reverse phenomenon as well, though: individual needs getting swept under the rug by systems that are too rigid or simplistic in the wrong places. See all the "falsehoods programmers believe about {names, time, gender, ...}" articles. TFA in my opinion is not an example of that phenomenon, btw: Facebook, like Google, is justifying cost cutting at places which obviously need trained human support with a fetish for technological solutions.)
Of course this homogenization is not something that my parent poster would be in any position to accomplish, so this is not meant as a critique. Also I agree with EnKopVand that automated processes (or even overly rigid bureaucracies) should not take decisive actions on their own.
You should absolutely read it that way, and, you should go even further and point fingers at the legislation itself. In my decade of public service we had five different ministers of “digitalisation” (they had other titles because IT doesn’t win votes) that all put effort into making our laws better suited for digitalisation. I think we even had a prime minister get into it, and every prime minister throughout my entire life has had an ambition of making laws less complicated.
Well, let's just say that while you're completely correct, I don't think we should wait for our countries to become less Kafkaesque, which is why I'm a fan of simply banning automated decision making. Maybe if you hurt the bureaucracy where it matters (cost) we might actually get some officials who deal with the root causes of the issues.
>> In the old world of humans and paper, as wasteful as it was, it was easy for exceptions to be made if the clerk was willing,
One of my first jobs out of college was as an account manager at a big corporation. It was an easy job. Half customer service and half sales. The guy in the cube next to me was always chided for being a dinosaur because his desk (in his words) looked like "a tree just puked on it", because of all the paper copies he had floating around - but damn if he couldn't find a contract six months old or an email with some promise he had made a client a year earlier.
It was his file system and it worked magically. You'd ask him a question and he'd look around, then dive into a stack of copies of contracts, come up with the right one and let the customer know the details, all so quickly. Customers loved him because he could reference things so quickly and was so sharp with conversations and the notes he had taken. No problem if his laptop crashed - he already had a paper copy. He always referred to himself as a go-between for the paper world and the digital world that was quickly consuming his talents.
I heard he retired a few years back - but he was the guy you're talking about to a tee. He was around during the transition to email from everything being paper. Dude still made it work, even when he knew his time had come and gone.
> As governments move more and more towards digitization, and embrace machine learning, I expect similar stories might unfold - only it won't be with an opt in social media website.
It's already arrived, in the form of the Australian government's "Robodebt" scheme: 20,000 automated debt notices per week with minimal oversight [1]. The government denies it killed people, but there are claims that it did [2]. After several years a court eventually stopped the scheme and awarded about $2 billion in compensation, but by then a lot of lives had been ruined.
My wife and most of her friends have all lost their Facebook accounts at least once. They all gave up getting them back. Many tears as most of them use it as their only photo backup for kid pictures.
At this point it’s just routine for them to have their account taken over and lost periodically.
> Many tears as most of them use it as their only photo backup for kid pictures.
Painful as that is, those of us who lean more technical should be helping people understand the nature of social media companies, and encouraging backups in whatever form is supported.
... and then helping push them over the edge to find other ways to communicate with friends, share photos, etc. "Do it all on Facebook!" was novel, 10-12 years ago. Now it's just a lack of creativity and a willingness to help add feet to the next yacht.
My girlfriend's mother had this happen to her recently. They got in and changed her password, profile picture and name. Account recovery options didn't work, and also reporting the profile as stolen / whatever option was most appropriate on the form didn't achieve anything.
I found the motivation quite confusing, as well as the seeming gap in the armour of automated bans.
As far as I know, many people get instantly banned if they attempt to set up a second profile for their Oculus or similar, so I assume the motivation is to get a fake account that has history to avoid this. What surprises me, though, is that changing the password, profile picture and full name in quick succession, plus attempts to recover/report, don't trigger this mechanism or some kind of review process.
It has nothing to do with automation and everything to do with unaccountable centralisation of power.
It does not matter what moderation scheme they use -- automated or bureaucratic. If you gift the town square to a private entity and allow access to it to be determined by their whims, you get this.
Reminds me of the rollout of Obamacare. I went to sign up on the website almost immediately. Ran into a host of errors. Called numerous times and every agent there told me they couldn't help and didn't know what was wrong. The solution: call us back in 3-4 weeks.
I work in a company that's all in with serverless on AWS, but unlike you I can't give a glowing recommendation.
The answer should always be "it depends".
IMHO, the more distributed a system is, the more difficult it is to build correctly. Serverless architectures encourage distribution: by building your service from different AWS components (commonly Lambda, S3, DDB, SQS, etc.), you end up building a distributed system from AWS-provided distributed systems.
System performance is commonly a function of data locality, and serverless typically spreads things out. Lambdas have no persistent state of their own, and though DDB is fastish, it's still multiple network hops away. Speaking of performance, you are also always trying to work within the 15-minute / 10 GB Lambda hard limits, which has made my current role the one with the most performance management work yet!
Lastly, you require a rather high proficiency in AWS, both in general (IAM policies, roles, CloudFormation, CloudWatch, et al.) and in specific services - and each service is its own thing with its own performance traits, consistency guarantees and semantics. It takes a lot to pick it all up.
Meanwhile, single servers, whether you rent an EC2 virtual instance, rent a physical server from OVH, or buy your own, have gotten extraordinarily powerful. And Postgres has proven to be very useful.
I'm not suggesting people ignore AWS altogether, but perhaps using fewer services could serve a lot of people rather well; 1 EC2 instance using Postgres RDS and EBS can do an awful lot. I notice the M6a instances apparently max out at 192 vCPUs, 768 GB RAM, 50 Gbps networking and 40 Gbps of EBS connectivity.
Most people know how to run 1 host: everyone has 1 host they use as their work desktop, plus routers and home servers, and with a single host you get great data locality and a much easier-to-reason-about system.
Some might allege that such a host will have less uptime compared to what someone might build with a serverless architecture. However, I believe most outages are a result of human error, be it in configuration or code, and by simplifying the system to a host, a DB and a SAN volume, one might be able to make it much more straightforward, which also helps recover from outages much quicker. (Serverless observability is nowhere near as good as what one gets with a process running on a Linux server you can SSH into.)
Some might say you will hit hard scaling limits, since such a design scales up rather than out. This is true, but AWS is a minefield of quotas and hard limits, and you have to design carefully around them. A Linux host will also have such limits, but AWS Serverless will be a superset, since at the end of the day AWS Serverless is running your process in a Linux firecracker VM anyway.
I don't mean to be an advocate for any part of the spectrum, just that I don't think there's any free lunch here by picking serverless. Much like picking a programming language, I think it might come down largely to what your existing skills are and your personality.
I can only speak from my own experience, but if you're anywhere close to the limits that Lambda imposes, then you should probably have already converted those functions over to Fargate or your own ECS or EKS clusters a long time ago. Many people abuse Lambda as a way to quickly launch an instance that's supposed to last for a relatively long time instead of one whose lifespan should be measured in terms of milliseconds or maybe just a few seconds.
Lambda gives you way more rope than you need to hang yourself many times over.
Don't get me wrong, it's a great tool when it fits your use case, but there are a lot of people who got started with it without really understanding their serverless use case, and because it worked so well they just never bothered to do their due diligence to find out if it really was the right solution. And even if they did do their due diligence at the time, they just cast that decision in stone and never came back to it.
It is possible to build and run a Tier 1 class service and take dependencies on serverless technologies plus DynamoDB and other Tier 0 or Tier 1 services. More than a few services at AWS are built that way. But you have to be careful when incorporating Lambda into that kind of equation.
I agree with a lot of what you said. Though you may want to look into AWS EFS, as it has opened up a whole bunch of use cases that were previously very difficult with AWS. You can also use Lambda with VPC endpoints to make the network distance to DynamoDB more efficient.
But yeah, if latency between servers is a concern of yours, then Lambda is not the best. But for a different reason than you described: I think it is the lack of ability to have a long-running network connection that is the hurdle there, since handshakes take time.
Didn't mean to imply it is a free lunch either. But I do think it should be most people's default. The goal should not be to be free; it should be to provide high availability and reasonable performance for minimal cost. And in my experience serverless does that very well.
I find interviewing skills are more important than job skills in terms of income.
Trying to get a 20% raise by doing a good job is, I've found, a protracted and difficult task; a 50% raise by interviewing? Much easier.
You do need some job skills, but in my experience being fired for not being good enough is rare, and can take a lot of time - during which you're getting paid.
Another thing I've found is I pick up the most exposure to new ideas in that first 12 months in a job.
These two things together means being able to get a new job easily is very useful.
But getting a job is different from doing a job, don't expect the two activities to have much in common. It could be better, but hey 3-4 months of evenings spent programming to get a big step up in the job market is a pretty good deal if you enjoy programming :)
(Personally I've found leetcode useful at becoming a better programmer, though I didn't approach it as memorizing riddles.)
I want to believe it means one day we will do that TODO, it'll beat the priority of all the other tickets in the queue.
But it never does, there's always some critical new feature sales wants, or something bigger on fire.
I still create the tickets, as a kind of cathartic process, a confession to the jira gods: I have sinned and half-assed something. Please take this ticket as my penance.
> But it never does, there's always some critical new feature sales wants, or something bigger on fire.
Then clearly it is not deserving of your attention according to the powers that be. If it is, then talk in the language that these parties understand better/prefer: bump up the priority of the task, add a "blocked by" or "has to be done after" link to these other issues in the tracker, and tell the rest of your teammates that you'll be working on that piece of technical debt instead of the new feature.
If you can't do that, then either the "TODO" comments/issues aren't important enough, or they're not deemed to be and will/should simply be left to rot until you either have an abundance of time to address them (which may be never), or the project is retired.
If more pushback is necessary, then do it during any estimates (provided that you have any): "Task $FOO will take X% more time due to $BAR not being finished and slowing down development. Consider doing $BAR first and $FOO should become less complex then."
That's definitely the way management wants you to work, and it's worth trying to see how it works out. But in many cases it's not in your personal best interest.
In those cases, there's a way to keep management happy and yourself too:
1) Realize that when management asks for time estimates, they don't want your median estimate, because you'll be late half the time and that messes up their scheduling. Give them an estimate that you'll meet at least 90% of the time (rough numbers after this list).
2) This means in almost half the cases, you'll have extra time. You can give some of that back to management, but use some of the time to fix technical debt. The more debt, the more of the time you spend fixing it.
3) With less technical debt, you can speed up your estimates and still be at that 90% level. Now you're delivering as much as if you let management fill your time in the first place. Your code magically has fewer bugs, there aren't many unexpected delays, and you almost always meet your deadlines. But you still have plenty of free time to improve things even more, develop your skills, etc.
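To put made-up numbers on 1) and 2): if your median estimate for a task is 4 days and your 90th-percentile estimate is 6 days, quote 6. Roughly half the time you'll finish around day 4, leaving a day or two to pay down technical debt or hand back; the other half you still hit the deadline you promised.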
As a bonus, you have more of a sense of agency, which is an important factor in feeling happy at work.
> If more pushback is necessary, then do it during any estimates (provided that you have any): "Task $FOO will take X% more time due to $BAR not being finished and slowing down development. Consider doing $BAR first and $FOO should become less complex then."
Creating those estimates alone is usually more work than the task itself.
Then the problem of convincing others is itself typically more work than the task.
Usually metrics aren't accurate enough to be able to prove these things anyway in my experience.
yeah, it takes more time in jira to create a task with a deadline and get it in a sprint and make sure it has an owner and find it in the backlog and move it through the workflow's steps than "is unused? delete". heck, sometimes the task takes less time than loading jira's website.
task systems are frequently awful at team-owned (not necessarily an individual) lightweight checks.
> Creating those estimates alone is usually more work than the task itself.
If that was true then it would be trivial to just push a pull request and update the ticket to review it.
> Then the problem of convincing others is itself typically more work than the task.
When that happens, that's the universe trying to explain to you that you're wasting your time with stuff that does not add tangible value.
This does not change with petty office politics moves such as ignoring your team's best judgement and continuing to push for something that was already decided to be a waste of time.
> When that happens, that's the universe trying to explain to you that you're wasting your time with stuff that does not add tangible value.
That's not always true, especially if it's tangible value that takes more than a week or two to see.
Tech is very biased to short-term rewards that cost more in the long term.
> This does not change with petty office politics moves such as ignoring your team's best judgement and continuing to push for something that was already decided to be a waste of time.
If this happens, it's usually because that team is conflict avoidant or only minimally addresses surface level concerns in disagreements.
If you just want to get your paycheck, that works fine. If you are interested in improving, this is frustrating because the correct or more correct answers are never worth the time.
> Creating those estimates alone is usually more work than the task itself.
Then simply add those hours spent estimating things to whatever time management solution that you use personally (for insights into where your time goes each quarter/year) or that your company mandates (for basically the same thing). If it's an actual problem, then simply raise it as such later down the road and look for ways to streamline things.
Most of the time you shouldn't care about how much time something takes (exceptions exist, based on the industry/project, but many deadlines are made up anyways), merely that the time to be spent on the task has been approved and that you're not doing something that isn't documented/categorized, e.g. time that "disappears" by being spent on things that don't have any sort of visibility/basis for taking up your time.
If changing how a button works takes 2 hours but writing tests takes 4 and refactoring old code takes another hour, then let the record show exactly that. If unclear requirements cause you to spend another hour or two in meetings and talking to people who didn't document code or explain their changes in merge/pull requests, then let the record show that, too.
Of course, this might vary on a per-company/project/culture basis.
> Then the problem of convincing others is itself typically more work than the task.
If others in the team don't want to fix these things, then they probably aren't as important/annoying as you think they are and therefore probably won't be prioritized so highly anyways. When there are really annoying/brittle API integrations, for example, most devs usually should back you up in your efforts in making things less horrible.
> Usually metrics aren't accurate enough to be able to prove these things anyway in my experience.
You don't actually need metrics that are completely accurate, because finding ones that are is seldom easy to do or even possible, or simply not worth the effort. Having something to the tune of "This API integration can be improved, the last 3 feature requests related to it exceeded their estimates by quite a bit" should be enough, provided that you can get the other people on board.
If you cannot, then it's probably an issue of the actual work environment/culture and/or soft skills.
>Then simply add those hours spent estimating things to whatever time management solution that you use personally (for insights into where your time goes each quarter/year) or that your company mandates (for basically the same thing).
Which sometimes means it doesn't happen because it's not valuable enough at any given instant to go through that multi-hour process, yes.
Lightweight tasks exist. As much as I generally agree with you, following your rules means you either do not ever do them, or you bloat them to multiple times larger than necessary. And correcting a company's task-management process as a whole is not always feasible either.
> Lightweight tasks exist. As much as I generally agree with you, following your rules means you either do not ever do them, or you bloat them to multiple times larger than necessary. And correcting a company's task-management process as a whole is not always feasible either.
Creating a short Jira issue and poking someone about including it in the current sprint should take about 10 minutes in total, if you want a decent description and are okay with occasionally not focusing on the "ceremonies" of sprint grooming too much, since as you say, some tasks are indeed small (and as long as this doesn't lead to scope creep, e.g. fixes/refactoring instead of new features this way).
Of course, that may not always be viable and I see where you're coming from - yours is also a valid stance to take and I see why focusing on an issue tracker too much would be cumbersome. Then again, in my eyes that's a bit like the difference between having almost entirely empty pull/merge requests and having a description of why the changes exist, a summary of what's done (high level overview), additional relevant details and images/GIFs of the changes in action, DB schema diagrams or anything of the sort.
I feel like additional information will always be useful, as long as it doesn't get in the way of getting things done (for an analogy, think along the lines of covering most of the business logic with tests vs doing TDD against a bloated web framework and thus not getting anything done - a few tradeoffs to be made).
> I want to believe it means one day we will do that TODO, it'll beat the priority of all the other tickets in the queue.
> But it never does, there's always some critical new feature sales wants, or something bigger on fire.
Why do you expect that random comments in the code will affect the priority of some tasks? Do you feel that stashing out-of-band info on pending tasks which were deemed not important regarding the project workload changes anything?
Also, if everything is always more important than the TODO item, that is the universe telling you that your TODO item should be deleted and that you should stop wasting your bandwidth with useless and unnecessary tasks.
That TODO item is just the receipt for a few bytes of technical debt you took with the universe. A few bytes is nothing compared to Management's roadmap, so it gets ignored. But have no doubt: the universe always returns to claim its technical debt...
> That TODO item is just the receipt for a few bytes of technical debt you took with the universe.
It really isn't.
A TODO item that you feel does not justify a ticket is just a subjective nice-to-have: a declaration of intent with no intention to deliver, which ultimately only results in noise.
It's not even technical debt. At most, it's a pledge to goldplate something without being able to argue any tangible upside.
> A few bytes is nothing compared to Management's roadmap, so it gets ignored.
Bytes are irrelevant. Tickets also cost bytes. Tickets also track rationale and context. What really matters is allocated resources in order to deliver value.
The only reason your TODO item gets ignored is because the potential value it promises does not justify allocating resources to it.
People need to be smart about how they invest their time and effort. Tracking vague tasks deemed unnecessary or useless in a separate out-of-band source of info is not productive and ends up only creating noise and distractions.
Wanting SQLite in Go touches on something that I think is quite a waste in modern Go circles, but happens everywhere to varying degrees.
There's often (for instance, in Go projects wanting to avoid cgo) a desire for everything to be in the single source language - Go. In what resembles NIH syndrome, there will be clones of existing libraries, offering little over the original except being "Written in Go". From experience this often makes for more bugs, as the Go version is commonly much younger and less used than the existing non-Go library.
The Python world does it a lot less; perhaps the slowness of Python helps encourage using non-Python libraries in Python modules. But that sure does make building and distributing Python projects "fun".
What I'm trying to say is that:
A world where every language community has its own SQLite project because the communities shun code written in other languages just feels like a profound waste.
It's not as simple as NIH. Using cgo means you have to build and link external dependencies in another language; compared to pure Go, that makes it quite a bit more difficult to build and distribute a single binary. But I think the biggest reason is that cgo is slow. Unlike Rust, which uses the C calling convention (correct me if I'm wrong) and has no garbage collector or coroutine stacks to worry about, Go pays a hefty price when calling a C function. For something like sqlite that causes a noticeable slowdown. Typically what I like to do in this case is write a C wrapper around the slow API that lets me batch work or combine multiple calls into one. With sqlite, instead of fetching rows one at a time, I would write a wrapper that takes some arrays and/or buffers to fetch multiple rows at a time. That amortizes the overhead and makes it less significant. When that's not possible, I use unsafe and/or assembly to lift just the problematic C calls into Go. That can sometimes work wonders, but it's also not a magic bullet.
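As a toy illustration of that batching idea (my own sketch, not the SQLite wrapper described above): pass a whole slice across the cgo boundary in one call, so the fixed per-call overhead is amortized over many elements.

    package main

    /*
    #include <stdint.h>

    // Stand-in for a wrapper that does a batch of work per cgo call.
    static int64_t sum_batch(const int64_t *vals, int n) {
        int64_t total = 0;
        for (int i = 0; i < n; i++) {
            total += vals[i];
        }
        return total;
    }
    */
    import "C"

    import (
        "fmt"
        "unsafe"
    )

    func main() {
        vals := []int64{1, 2, 3, 4, 5}
        // One cgo call for the whole batch instead of one call per element.
        total := C.sum_batch((*C.int64_t)(unsafe.Pointer(&vals[0])), C.int(len(vals)))
        fmt.Println(int64(total)) // 15
    }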
Rust can use the C calling convention to call C functions or export functions to C code, but this requires extra annotations. By default, Rust uses its own unstable ABI.
Interesting! I've never considered batching my calls to SQLite from Go that way. Do you have any numbers you can share about performance when doing that?
I don't, but in general it matters most for the cheapest C calls, i.e. the functions doing the least work. Batching those somehow can give big speedups, over 2x, depending on how much of the total time was going into cgo overhead.
To be honest the reason why Go developers want "pure Go" libraries is simply because they can be statically + cross compiled and used without having to carry around an additional library, especially in environments where you hardly have anything other than the binary itself (e.g: Docker containers "FROM scratch" or Gokrazy)
In this case sqlite is bundled as a single C source file. You could just use Zig as your C cross compiler to cross compile alongside Go for almost any platform[1].
It is a bug, and we will fix it, but keep in mind the scope - this is something that affects old versions of glibc. Newer versions of glibc are not affected, and neither is musl libc (often preferable for cross compiling to Linux).
You can target a newer glibc like this: -target x86_64-linux-gnu.2.28
You can target musl libc like this: -target x86_64-linux-musl
I just can't agree with this. It's true that one piece, the compiling, is less painful. But the entire rest of the system, from developing to testing to rooting out implementation bugs, is way, way more painful.
It's ridiculously more painful for a single one-off by a single dev for a single tool, but an actual ecosystem of pure Go reimplementations has popped up where that load can become collectively shared over time, and ultimately using native implementations doesn't push that burden onto the end developer. It has to start somewhere though.
I've seen the same thing in the Java world, I presume for the same reasons as in Go: calling native code in Java is painful, and traditionally Java always had an emphasis on "write once, run everywhere". So the Java community tends to reimplement C code in Java, even when limitations of the language make the code slower and/or more complex (see the classic post "Re: Why Git is so fast" aka "Why is C faster than Java" at https://marc.info/?l=git&m=124111702609723&w=2 or https://public-inbox.org/git/20090430184319.GP23604@spearce.... which mentions things like the lack of unsigned types or value types).
It created a lot of churn, but I think it has been net positive for Go because they now have a huge ecosystem of stuff that can be installed without the black hole of C/C++ dependency installation.
Each call out to a C library consumes 1 OS thread (max 10s of Ks of threads before terrible performance/scheduling issues); each call out to a Go library consumes a goroutine, of which you can have 100s of Ks without much problem.
For SQLite it seems it would be ok, as there's no network traffic, but I've had issues where network glitches (Kafka publisher C library) would cause unrecoverable CPU spikes and an increase in OS threads that never recovered.
So that's the functional reason behind the Go community's desire to write everything in Go. Plus a lot of the people who love Go also tend to be the sort who would enjoy rewriting C libraries in a nice new language.
Rewriting things in Go not only makes using them nicer, it often is a way to get a more correct and stable program. However, we are talking about sqlite here, one of the best-tested and most stable C programs out there. Rewriting it in Go rather raises the chance of bugs, and that would be counter-productive.
It still can be an interesting project and if it proves to be correct and eventually shows some advantages vs. using the C version, it might become a nice alternative.
I guess the reason is that it is easier to cross compile by keeping everything in Go? I have no knowledge of SQLite cross compilation (on Linux targeting Windows, for instance), but I guess it's also possible; it just makes the build process a little more complex.
Node.js can take advantage of WASM which is pretty handy in some cases.
Getting cgo to cross-compile while targeting less popular architectures can be a royal pain: I was trying to use cgo to add official SQLite to a Go app that I had running on a long-abandoned (by the OEM) mips/linux 2.x kernel "IoT" device with an equally ancient libc. It was a Sisyphean task that absolutely nerd-sniped me; I spent way too much time building toolchains and trying to get them to work with cgo. I ended up going with a Go version of SQLite.
Yes, this is the reason. Using cgo to link in C libraries, for example, is slower and also brings up other difficulties if you want to cross compile. Here's an old link (may be out of date) outlining some of them:
The main reason why I personally try to avoid adding non-Go languages to my Go projects is that it tends to make profiling/debugging a bit of a pain. pprof has limited vision into any external C threads, so all you can see is the function call and the time goroutines spent off-CPU while waiting for it to finish. You can obviously supplement some of that with other tools (perf), but sometimes accepting the tradeoffs and using a Go implementation of the package instead just makes more sense.
> a desire for everything to be in the single source language
And that is good.
In particular, this is driven by how TERRIBLE all the dance around C is. SQLite is among the easiest, yet it also causes trouble: suddenly you need to bring in a specific LLVM, Visual Studio tools, etc. And then you HOPE all the other tools use the correct env vars, settings, etc.
And then, you hit a snag, and waste time dancing around C.
A big part of my pain, and the pain I've observed in 15 years of industry, is programming language silos. Too much time is spent on "How do I do X in language Y?" rather than just "How do I do X?"
For example, people want a web socket server, or a syntax highlighting library, in pure Python, or Go, or JavaScript, etc. It's repetitive and drastically increases the amount of code that has to be maintained, and reduces the overall quality of each solution (e.g. think e-mail parsers, video codecs, spam filters, information retrieval libraries, etc.).
There's this tendency of languages to want to be the be-all end-all, i.e. to pretend that they are at the center of the universe. Instead, they should focus on interoperating with other languages (as in the Unix philosophy).
One reason I left Google over 6 years ago was the constant code churn without user visible progress. Somebody wrote a Google+ rant about how Python services should be rewritten in Go so that IDEs would work better. I posted something like <troll> ... Meanwhile other companies are shipping features that users care about </troll>. Google+ itself is probably another example of that inward looking, out of touch view. (which was of course not universal at Google, but definitely there)
I think you need to look deeper - one of the strengths of Go is the runtime and everything they do there to support their internal threading model. When you are calling out to an external language you have memory being allocated and managed outside the Go runtime, and you have opaque blocks of code that aren't going to let the Go runtime do anything else on the same CPU until they exit. Those are more the considerations behind wanting Go-native implementations. Even with SQLite, which is probably one of the most solid and thoroughly debugged pieces of code written since the Apollo program, it would be desirable to minimize the amount of data being copied across the runtime interface, and to allow other goroutines to run while I/O operations are in progress.
It's especially helpful to be pure Go when targeting both iOS and Android (in addition to Linux, Mac, and Windows) with https://github.com/fyne-io/fyne#about
I was a beginner in Go when I wanted to use SQLite with it, and I wanted an easy way to build without a lot of hassle; it looked like cgo was the only solution back then. I really wished there was something I could easily use from Go rather than building with cgo.
I don't think the Go community is particularly susceptible to this. You mention Python; as you say, Python and the dynamic scripting languages are particularly "OK" with having things that drop down to C, because of the huge performance improvements you get from doing as much as possible in C in those languages. Dynamic scripting languages are slow. But these are the exceptions, not the rule.
Most other languages want native libraries not because of some bizarre fear of C, but because of the semantics. Native libraries work with all the features of the language, whatever they may be. A naive native binding to SQLite in Rust may be functional, but it will not, for instance, support Rust's iterators. That's kind of a bummer. Any real Rust library for something as big as SQLite will of course provide them, but as you go further down the list of "popular libraries" the bindings will get more and more foreign.
Also, the design of these dynamic scripting languages was non-trivially bent around treating the ability to bind to C as a first-class concern. I think if they were never designed around that, there are many things that would not look the same. One big one is that Python would be cleanly multithreaded today if it didn't consider that a big deal, because the primary problem with the GIL isn't Python itself, but the C bindings you'd leave behind if you removed it. Go's issue is mostly that it came far enough into C's extremely slow, but steady, decline that it was able to make C a second-class concern instead of a first, and not force the entire language's design and runtime to bend around making C happy.
As it happens, in my other window, I'm writing Go code using GraphViz bindings, and I'm experiencing exactly this problem. It works, yes. But it's very non-idiomatic. I've had to penetrate the abstraction a couple of times to pass parameters down to GraphViz that the wrapper didn't directly support. (Fortunately, it also provided the capability to do so, but that doesn't always happen.) There's a function I have to call to indicate that a particular section is using the HTML-like label support GraphViz has, which in Go takes a string and appears to return the exact same string, but the second string is magical and, if used as the label, will be HTML-like.
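A hypothetical sketch of that "magic string" pattern (MarkHTML and the map are made-up names for illustration, not the real binding's API): the wrapper returns what looks like the same string, but records it so the label can later be handed to GraphViz as HTML-like rather than plain text.

// Hypothetical illustration of the pattern described above, not the real API.
package wrapper

var htmlLabels = map[string]bool{}

// MarkHTML returns what appears to be the same string, but remembers that any
// label set to this value should be treated as an HTML-like GraphViz label.
func MarkHTML(s string) string {
	htmlLabels[s] = true
	return s
}

// isHTMLLabel reports whether a label was previously marked as HTML-like; a
// real binding would consult this when passing the label down to C.
func isHTMLLabel(s string) bool {
	return htmlLabels[s]
}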
This is not special to Go, I've encountered this problem in Python (the Tkinter bindings are a ton of "fun"; the foreign language in this case is Tcl, and if you want to get fancy you'll end up learning some Tcl too!), Perl, several other places. A native library would be much nicer.
Finally, the Go SQLite project isn't its own SQLite. It's actually a C-to-Go compilation, as I understand it. That's not really a separate project.
There are good reasons to avoid C dependencies in Go though. C code is essentially a big black box to the runtime, so you lose some benefits when you go there (no pun intended).
Yes – Go is generally memory safe, with the exception of the `unsafe` package which is (obviously) not. Outside of this, there is no access to raw pointers or pointer arithmetic. The use of goroutines does not affect memory safety.
None of this means that you can't make an absolute mess of concurrency, but that's not a memory safety concern.
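A minimal sketch of where that line sits: outside the `unsafe` package you simply can't express pointer arithmetic, and with it you can, bounds or no bounds.

package main

import (
	"fmt"
	"unsafe"
)

func main() {
	xs := [2]int64{1, 2}
	p := unsafe.Pointer(&xs[0])
	// Pointer arithmetic is only expressible via unsafe; nothing checks that
	// the computed address stays inside the array.
	q := (*int64)(unsafe.Pointer(uintptr(p) + unsafe.Sizeof(xs[0])))
	fmt.Println(*q) // prints 2
}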
https://research.swtch.com/gorace describes a loss of memory safety through data races, but I note it says "In the current Go implementations" and was written in 2010.
I never heard of news that the situation had changed, but if it has I'm most interested in when it did! :)
It really depends on where you draw the line. This is an obviously-incorrect program (`++` isn't atomic):
package main

import (
	"fmt"
	"sync/atomic"
)

func main() {
	var x, y int64
	doneCh := make(chan struct{})
	inc := func() {
		for i := 0; i < 2<<20; i++ {
			x++ // unsynchronized (data race); line 13 in test/main.go
			atomic.AddInt64(&y, 1) // y is updated atomically, so it stays exact
		}
		doneCh <- struct{}{}
		return
	}
	go inc()
	go inc()
	<-doneCh
	<-doneCh
	fmt.Printf("x, y = %v, %v\n", x, y)
}
This prints something like:
$ go run main.go
x, y = 3482626, 4194304
If you run it with the race detector the problem is clear:
$ go run -race main.go
==================
WARNING: DATA RACE
Read at 0x00c0001bc008 by goroutine 7:
main.main.func1()
/home/jrockway/test/main.go:13 +0x50
Previous write at 0x00c0001bc008 by goroutine 8:
main.main.func1()
/home/jrockway/test/main.go:13 +0x64
Goroutine 7 (running) created at:
main.main()
/home/jrockway/test/main.go:19 +0x176
Goroutine 8 (running) created at:
main.main()
/home/jrockway/test/main.go:20 +0x184
==================
x, y = 4189962, 4194304
Found 1 data race(s)
exit status 66
This is not strictly a memory safety problem (it can't crash the runtime), but a program that returns the wrong answer is pretty useless, so there is that. Obviously, if x were going to be used as a pointer through the `unsafe` package, you could have problems. (Though I think that x will always be less than or equal to y, so if you have proved that memory[base+y] is safe to read, then memory[base+x] is safe to read. But it's easy to imagine a scenario where you overcount instead of undercount.)
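For completeness, a minimal fix under the same setup: make the increment of x atomic as well (or guard it with a sync.Mutex), keeping the rest of the program above unchanged.

// in inc(), replace the racy increment:
atomic.AddInt64(&x, 1) // instead of x++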
I'm very confused by this comment. Memory safety doesn't mean "can't crash the runtime"[0]. That would exclude the most basic, canonical examples of memory unsafety: use after free, buffer overwrite/overread, double free, race conditions, etc. The erroneous Go exemplar you posted is literally one of the first examples of memory unsafety on Wikipedia: https://en.wikipedia.org/wiki/Memory_safety. Go is not memory safe by any stretch of the imagination. It doesn't even claim to be. At most it incidentally prevents a narrow subset of unsafe memory uses, compared with C (not a high standard), in the sense of providing an array type (and concomitant string type) rather than just pointer arithmetic.