I have a Google Pixel 4a and planned to use the phone until it physically no longer worked. I loved it, especially with its wired headphone jack and small size.
It has worked for 3+ years and held a charge for 2-3 days easily until the other week when they pushed the battery patch. Now it dies in a few hours with light usage.
I asked Google support what will happen if I get a battery replacement and it's still draining fast. They won't answer.
Google reps at a repair center said a battery replacement is unlikely to fix the drain issue since the drain behavior is attached to an OS update you can't opt out of.
This is really frustrating to be ignored by Google after they essentially bricked a fully working device that I paid for.
This is the type of move that tempts me to de-Google myself entirely, including deleting my YouTube channel with 20k subs that I've been posting to regularly for almost a decade.
My whole business (selling courses and contract work) depends on SEO from Google and YouTube, and I'm close to saying fuck it and destroying all of that on principle over how poorly they are treating folks on this issue. I haven't made that decision yet but it's close to be honest. Close enough that I'm openly posting this message.
Have you considered using CSV files for the back-end?
I ask because, funnily enough, over the weekend I spent 4 hours writing a terminal-based expense tracker with a sole focus on making it easy to see income vs. expenses vs. business expenses totals, so it's easier for me to file quarterly and yearly taxes. It's not a TUI, but it lets me put items into a category and it tallies up the amounts with counts.
It's simple input / output with CSV files that you can edit with whatever editor you want, and that's the main reason I wrote it. I've been using GnuCash for almost 10 years and it's kind of inefficient for inputting data quickly.
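To give a feel for it, the tallying part boils down to something like this (the column layout and file name here are placeholders, not exactly what my tool uses):

    # assume rows like 2024-03-01,business,49.99 in expenses.csv
    # sum the amounts and count the items per category
    awk -F, '{ sum[$2] += $3; count[$2]++ } END {
      for (c in sum) printf "%-20s %10.2f  (%d items)\n", c, sum[c], count[c]
    }' expenses.csv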
It's nice to see your project on the other side of the spectrum (super polished, TUI, etc.). What's it like to quickly add in items or generate income / expense / business expense reports? Ideally I'd like to be able to open my code editor of choice and do whatever operations I need to quickly insert multiple items into multiple categories.
No, it never crossed my mind unfortunately. There are a handful of relations between the tables in my schema design, so I never thought CSV would be viable for my use case.
The TUI is the frontend for the schema. You can do all CRUD with the UI, and I designed it to be able to add multiple records quickly, with input modes and templates!
I would much rather get an honest answer in any scenario, no matter what it is. There are people out there who would take that feedback and reflect on it in a positive way.
How else can you improve or be mindful of things in the future if you don't even know what happened or what went wrong?
I sort of feel like the people who don't give honest feedback, or who ghost, aren't really trying to protect anyone's feelings or avoid conflict. They themselves have something going on internally that bothers them when it comes to giving or receiving feedback. Maybe they're letting unrelated past experiences dictate their current life and decisions.
So now you have 2 types of people: people who want honest feedback and people who never give it. You can't force either one to become the other, so we're always left with 1 side feeling unhappy.
It doesn't make sense to me that this is how most folks are ok with operating.
Interestingly, you exemplify the problem here. You ignore their explanation and examples and instead ascribe some flaw to them. Ironically, this is one of the outcomes of giving honest feedback that makes people not bother.
Do you actually though? A lot of people say they would rather get an honest answer but don't react very well in reality.
> They themselves have something going on internally that bothers them when it comes to giving or receiving feedback.
The idea that some people would judge them like this certainly wouldn't help people to try to be more honest and open, especially if such a person is demanding an "honest answer".
> Do you actually though? A lot of people say they would rather get an honest answer but don't react very well in reality.
For me? Yes, in almost all cases, especially if we're talking about either getting no feedback vs. honest feedback. There's been a number of times where someone said X, I thought about it and either changed or at least internally made a note.
I will say it really depends on the context and situation. For example, if it involves someone you care about then sure an honest 5 minute conversation can help eliminate a lot of assumptions from both sides or uncover unknown tensions from the other side. On the flip side, if nothing gets said then nothing will change.
Also, being honest and transparent doesn't always mean literally saying what's on your mind. It could be about trying to achieve an outcome, such as with a code review. There are lots of ways to provide feedback where you get the other person to self-realize something, without you needing to say it, just by asking questions a certain way. This isn't an easy skill and it's something I'm always trying to improve. It applies outside of coding too.
There's honest and there's too honest. I was once rejected with "you don't have the technical depth needed". I appreciate that's what they honestly felt, but sending it back like that was just too honest. Especially because I felt their technical screening process wasn't really all that brilliant (to put it mildly).
Something like "we're looking for someone with a different skill set" would still be reasonably honest, but also wouldn't make me feel terrible. The notion that you can fully assess someone's technical abilities from a one-hour interview is mistaken anyway, so an honest reply should take that into account.
---
A second scenario is where I did a take-home code test thingy. I went for the "simple but obviously correct and easy to implement" approach. The performance of that seemed more than enough for the stated use case, and I included some benchmarks and a bit of text to justify it. Performance wasn't mentioned in the task, but that seemed like the common-sense thing to cover. After a few weeks I got a one-line "doesn't meet expected performance" rejection. Well, you didn't mention what the "expected performance" is, motherfucker. That's not what I sent back (I didn't reply at all), but what a fucked up way to evaluate and dismiss people.
> I was once rejected with "you don't have the technical depth needed". I appreciate that's what they honestly felt, but sending it back like that was just too honest.
Do you think if they were more specific it could have helped?
As someone who does like honesty, I'd be bothered by that type of response too, because it doesn't feel like an honest reply. It feels like a blanket statement to quickly say something and move on.
If they said something like "when it came to thinking about and writing database queries, we felt like your solutions could have used more thought around performance optimizations and fundamental knowledge about joins".
I'd be really happy with a rejection like that because it's super specific. Now there's 2 action items I can do to improve, such as focusing on query tuning and getting better at joins. These are things you could search for and find tons of content / examples to improve on.
If you think about it like a loop, it's a loop that's complete. You did something poorly, you know what you did poorly, you can level up those specific skills and try again. The problem is when the feedback doesn't let you complete the loop.
> Do you think if they were more specific it could have helped?
To be honest, I think it was just a "bad vibe" or whatever you want to call it, and/or I didn't hit the exact pre-defined approach they wanted during the "systems design" interview, which was quite badly done IMHO: it felt like stumbling around trying to find the answer he was looking for while he went out of his way to drip-feed me information.
But who knows...
But yes, I agree with you: it's non-actionable feedback. And it also came across as quite personal (that is: the difference between "you're a bad coder" and "this is bad code").
Giving good individual feedback requires effort and is low priority.
Things that require effort and are low priority tend not to happen; no complicated feelings need to be involved.
For #4 (divide and conquer), I've found `git bisect` helps a lot. If you have a known good commit and one of dozens or hundreds of commits after that is bad, this can help you identify the bad commit / code in a few steps.
I jumped into a pretty big unknown code base in a live consulting call and we found the problem pretty quickly using this method. Without that, the scope of where things could be broken was too big given the context (unfamiliar code base, multiple people working on it, only able to chat with 1 developer on the project, etc.).
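If you haven't used it, the flow is roughly this (the tag name is just an example):

    git bisect start
    git bisect bad                  # the current commit is broken
    git bisect good v2.3.0          # last known good commit or tag
    # git checks out a commit roughly halfway; test it and mark it:
    git bisect good                 # or: git bisect bad
    # repeat until git names the first bad commit, then clean up:
    git bisect reset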
"git bisect" is why I maintain the discipline that all commits to the "real" branch, however you define that term, should all individually build and pass all (known-at-the-time) tests and generally be deployable in the sense that they would "work" to the best of your knowledge, even if you do not actually want to deploy that literal release. I use this as my #1 principle, above "I should be able to see every keystroke ever written" or "I want every last 'Fixes.' commit" that is sometimes advocated for here, because those principles make bisect useless.
The thing is, I don't even bisect that often... the discipline necessary to maintain that in your source code heavily overlaps with the discipline needed to prevent regressions and bugs in the first place. But when I do finally use it, it can pay for itself in literally one shot once a year, because we get bisect out for the biggest, most mysterious bugs, the ones that I know from experience can involve devs staring at code for potentially weeks. While I'm yet to have a bisect that points at a one-line commit, I've definitely had it hand us multiple days' worth of clue in one shot.
If I was maintaining that discipline just for bisect we might quibble with the cost/benefits, but since there's a lot of other reasons to maintain that discipline anyhow, it's a big win for those sorts of disciplines.
Sometimes you'll find a repo where that isn't true. Fortunately, git bisect has a way to deal with failed builds, etc: three-value logic. The test program that git bisect runs can return an exit value that means that the failure didn't happen, a different value that means that it did, or a third that means that it neither failed nor succeeded. I wrote up an example here:
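The gist is a test script whose exit code tells bisect what happened; 125 is the documented "this commit can't be tested, skip it" code (the script name and build/test commands below are just placeholders):

    #!/usr/bin/env bash
    # used as: git bisect run ./check.sh
    make || exit 125          # build broke: skip this commit, it can't be judged
    ./run-tests && exit 0     # tests pass: mark the commit good
    exit 1                    # tests fail: mark the commit bad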
I do bisecting almost as a last resort. I've used it when all else fails only a few times. Especially as I've never worked on code where it was very easy to just build and deploy a working debug system from a random past commit.
Edit to add: I will study old diffs when there is a bug, particularly for bugs that seem correlated with a new release. Asking "what has changed since this used to work?" often leads to an obvious cause or at least helps narrow where to look. Also asking the person who made those changes for help looking at the bug can be useful, as the code may be more fresh in their mind than in yours.
> why I maintain the discipline that all commits to the "real" branch, however you define that term, should all individually build and pass all (known-at-the-time) tests and generally be deployable in the sense that they would "work" to the best of your knowledge, even if you do not actually want to deploy that literal release
You’re spot on.
However, it’s clearly a missing feature that Git/Mercurial can’t tag diffs as “passes” or “bisectable”.
This is especially annoying when you want to merge a stack of commits and the top passes all tests but the middle does not. It’s a monumental and valueless waste of time to fix the middle of the stack. But it’s required if you want to maintain bisectability.
As someone who doesn't like to see history lost via "rebase" and "squashing" branches, I have had to think through some of these things, since my personal preferences are often trampled on by company policy.
I have only been in one place where "rebase" is used regularly, and now that I'm a little more familiar with it, I don't mind using it to bring changes from a parent branch into a working branch, if the working branch hasn't been pushed to origin. It still weirds me out somewhat, and I don't see why a simple merge can't just be the preferred way.
I have, however, seen "squashing" regularly (and my current position uses it as well as rebasing) -- and I don't particularly like it, because sometimes I put in notes and trials that get "lost" as the task progresses, but nonetheless might be helpful for future work. While it's often standard to delete "squashed" branches, I cannot help but think that, for history-minded folks like me, a good compromise would be to "squash and keep" -- so that the individual commits don't pollute the parent branch, while the details are kept around for anyone needing to review them.
Having said that, I've never been in a position where I felt like I need to "forcibly" push for my preferences. I just figure I might as well just "go with the flow", even if a tiny bit of me dies every time I squash or rebase something, or delete a branch upon merging!
> I cannot help but think that, for history-minded folks like me, a good compromise would be to "squash and keep" -- so that the individual commits don't pollute the parent branch, while the details are kept around for anyone needing to review them.
But then they're not linked together, and those "closed" branches are mixed in with the current ones.
Instead, try out "git merge --no-ff" to merge back into master (forcing a merge commit to exist even if a fast-forward was possible) and "git log --first-parent" to only look at those merge commits. Kinda squash-like, but with all the commits still there.
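Concretely (branch name is just an example):

    git checkout master
    git merge --no-ff my-feature          # always record a merge commit
    git log --first-parent --oneline      # squash-like view: one entry per merge
    git log --oneline                     # full history, every commit still there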
I use git-format-patch to create a list of diffs for the individual commits before the branch gets squashed, and tuck them away in a private directory. Several times have I gone back to peek at those lists to understand my own thoughts later.
I explicitly don’t want squash. The commits are still worth keeping separate. There’s lots of distinct pieces of work. But sometimes you break something and fix it later. Or you add something new but support different environments/platforms later.
But if you don't squash, doesn't this render git bisect almost useless?
I think every commit that gets merged to main should be an atomic believed-to-work thing. Not only does this make bisect way more effective, but it's a much better narrative for others to read. You should write code to be as readable by others as possible, and your git history likewise.
Individual atomic working commits don't necessarily make a full feature. Most of the time I build features up in stages and each commit works on its own, even without completing the feature in question.
Back in the 1990s, while debugging some network configuration issue a wiser older colleague taught me the more general concept that lies behind git bisect, which is "compare the broken system to a working system and systematically eliminate differences to find the fault." This can apply to things other than software or computer hardware. Back in the 90s my friend and I had identical jet-skis on a trailer we shared. When working on one of them, it was nice to have its twin right there to compare it to.
The principle here, "bisection", is a lot more general than just "git bisect" for identifying ranges of commits. It can also be used for partitioning the space of systems. For instance, if a workflow with 10 steps is broken, can you perform some tests to confirm that 5 of the steps functioned correctly? Can you figure out that it's definitely not a hardware issue (or definitely a hardware issue) somewhere?
This is critical to apply in cases where the problem might not even be caused by a code commit in the repo you're bisecting!
Not to complain about bisect, which is great. But IMHO it's really important to distinguish the philosophy and mindspace aspect of this book (the "rules") from the practical advice ("tools").
Someone who thinks about a problem via "which tool do I want" (cf. "git bisect helps a lot"[1]) is going to be at a huge disadvantage to someone else coming at the same decisions via "didn't this used to work?"[2]
The world is filled to the brim with tools. Trying to file away all the tools in your head just leads to madness. Embrace philosophy first.
[1] Also things like "use a time travel debugger", "enable logging", etc...
[2] e.g. "This state is illegal, where did it go wrong?", "What command are we trying to process here?"
I've spent the past two decades working on a time travel debugger so obviously I'm massively biased, but IMO most programmers are not nearly as proficient with the available debug tooling as they should be. Consider how long it takes to pick up a tool so that you at least have a vague understanding of what it can do, and compare that to how much time a programmer spends debugging. Too many just spend hour after hour hammering out printf's.
I find the tools are annoyingly hard to use, particularly when a program uses a build system you aren't familiar with. I love time travelling debuggers, but I've also lost hours to getting large Java or C++ programs into any working debugger along with their debugging symbols (for C++).
This is one area where I've been disappointed by Rust: they cleaned up testing and dependency fetching by bringing them into core, but debugging is still a mess, with several poorly supported cargo extensions, none of which seem to work consistently for me (no insult to their authors, who are providing something better than nothing!).
You can also use divide and conquer when dealing with a complex system.
Like, traffic going from A to B can turn ... complicated with VPNs and such. You kinda have source firewalls, source routing, connectivity of the source to a router, routing on the router, firewalls on the router, various VPN configs that can go wrong, and all of that on the destination side as well. There can easily be 15+ things that can cause the traffic to disappear.
That's why our runbook recommends starting troubleshooting by dumping traffic on the VPN nodes. That's a very low-effort, quick step to figure out which of the six-ish legs of the journey drops traffic - to the VPN, through the VPN, to the destination, back to the VPN node, back through the VPN, back to the source. Then you realize traffic back to the VPN node disappears, and you can dig into that.
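Concretely, that step is something like this on the VPN node (interface names and addresses are placeholders):

    # is traffic arriving from the source, and is it leaving towards the destination?
    tcpdump -ni eth0 host 10.1.2.3 and port 443
    tcpdump -ni tun0 host 10.9.8.7 and port 443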
And this is a powerful concept to think through in system troubleshooting: Can I understand my system as a number of connected tubes, so that I have a simple, low-effort way to pinpoint one tube to look further into?
As another example, for many services the answer here is to look at the requests on the load balancer. That quickly isolates which services are throwing errors or blowing up requests, so you can start looking at those. Or system metrics can help: which services / servers are burning CPU (and thus doing something), and which aren't? Does that pattern make sense? Sometimes this can tell you which step in a pipeline of steps on different systems fails.
git bisect is an absolute power feature everybody should be aware of. I use it maybe once or twice a year at most, but it's the difference between fixing a bug in an hour vs. spending days or weeks spinning your wheels.
When you don't know what is breaking that specific scroll or layout somewhere in the page, you can just remove half the DOM in the dev tools and check if the problem is still there.
Rinse and repeat, it's a basic binary search.
I am often surprised that leetcode black belts are absolutely unable to apply what they learn in the real world, neither in code nor in debugging, which always reminds me of what a useless metric it is for hiring engineers.
Binary search rules. Being systematic about dividing the problem in half, determining which half the issue is in, and then repeating applies to non software problems quite well. I use the strategy all the time while troubleshooting issue with cars, etc.
It doesn't, but if you're putting this into a script, that's OK. You can set the script's shebang to bash, so even if your user's shell is zsh, the script will run with bash.
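For example:

    #!/usr/bin/env bash
    # runs under bash even if the interactive shell is zsh
    set -euo pipefail
    echo "running with bash ${BASH_VERSION}"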
That reminds me of the toilet paper calculator during peak covid.
He put up a site in about half an hour and it ended up getting coverage on TV, which led to ~10 million visitors, and it made $5,000 a day in ads for a while. The core logic of the app is 6 lines of JavaScript that take a few inputs and do basic math on them.
Listened today. Nice work, I really like these kinds of stories (small-time internet success stories). I don't know if your podcast has others, as it seems like you're mostly focused on the tech stacks of sites. Some of my favourite HN posts are the common "What's your tech side hustle?" posts. I love hearing the stories of people making a bit of side cash on small little sites. Some examples I can think of (however, they're much bigger now) are the Vileda Onions guy, bingo card creator and the wedding table name tag site.
Thanks. If I had to estimate, I'd say about 2/3rds of the episodes are solo developers putting up some type of site (SAAS app, selling a product, etc.).
One thing I don't get about the current Rails direction is pushing hard to use SQLite and remove external dependencies while at the same time advocating the use of Docker.
Running Postgres and Redis in Docker is a matter of adding a few lines of YAML to a file once and never thinking about it again. I have done this for 10 years, it is painless.
I'm all for reducing moving parts and would also choose the same strategy if there were no downsides, but DHH is running their apps on dedicated hardware with some of the best-performing SSDs you can get. Most VPS providers have much worse disk performance.
There are also downsides to running your DB (SQLite or Postgres) directly on your single VPS when it comes to uptime. If your server is stateless then you can upgrade it with zero downtime. All you have to do is spin up a new server, provision it, add it to DNS, wait a day or 2 and then decommission the old one. This is nice for resizing instances or doing big OS updates in a zero-risk way. You can just make a new server.
That works because it doesn't matter if the old or new server is writing to your externally hosted DB. If your DB is directly on the host, or on block storage that can only be attached to one instance at a time, then you can't spin up a new server, since one of the DBs will get out of date.
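For Redis it's roughly this much YAML (the image tag and volume name are just examples, and it assumes your app is another service in the same compose file):

    # docker-compose.yml
    services:
      redis:
        image: redis:7
        volumes:
          - redis_data:/data   # optional: persist snapshots to disk
    volumes:
      redis_data: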
Now you have Redis running with your project and you can use "redis" as the hostname to connect from your app. It even persists to disk with a volume if needed.
It's similar for Postgres except you'd also want to set the PG username and password with environment variables so your initial user and DB get set up with reasonable defaults.
Agreed. Adding SQLite is potentially a quick way to make your stateless Rails container/app stateful, while using a proper DBMS as a separate service makes perfect sense.
I've been running a Crucial MX100 256 GB SSD for 10 years. It's at 63% health from a S.M.A.R.T. readout. It's been powered on 125 times over ~10 years and transferred 56 TB in that time. It's my main Windows partition and runs WSL 2, where I've built and run thousands of Docker images. Basically, it hasn't been sitting here unused.
Yep, I have an rsync script which I run from WSL to back up both my WSL home directory and a bunch of files on a separate Windows drive to an external hard drive.
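It's roughly this (the real paths differ, these are placeholders):

    rsync -a --delete ~/ /mnt/f/backups/wsl-home/
    rsync -a --delete /mnt/d/files/ /mnt/f/backups/windows-files/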
It takes something like 8 minutes for rsync to gather the metadata about the files before it starts potentially transferring anything. On native Linux a comparable number of files takes less than 10 seconds.
The worst case scenario is when the data has to go over the WSL border twice: once to be processed by whatever Unix thing you are using, and again when it's written to an NTFS filesystem somewhere else.
My personal happy space is actually Cygwin. It's Unixy enough and doesn't cross the WSL border. Not great because NTFS doesn't perform that well under it, but definitely better than WSL.