The person is a salaried employee. They are getting paid by their employer. The website is a personal portfolio / blog / resume site. Traditionally you're paid in attention for that sort of thing and use it to bolster your salary via opportunities.
Getting a few dollars here and there from a personal site's ads feels cheap and detracts from the article. Tip jar, fine. But ads no. It just feels dirty. Even if they are "ethical".
From my perspective (as an employer) I see this stuff and think holy hell they'll want to stick advertising on everything. Turns me right off.
Author here. I'm sorry you feel this way. The ads are an experiment to see how much money I'm leaving on the table by not doing them. Right now it's just enough to pay for half of my server costs per month. Combined with Patreon, this means that my blog is cashflow positive. It's nowhere near enough to make a living off of, but it gives me a small passive income and pays for all of my video games.
From your perspective as an employer, this should signal that I know what I'm worth and I am more than willing to negotiate for it. If you want the ads gone, please feel free to email me a job opportunity. I'll be more than willing to seriously consider it should you meet my requirements.
I'm seriously not trolling you, please take this as a question in good faith if you can... (I understand that any message prefaced as such is immediately suspect)
How much does it cost to host that website? It can't be much more than 10-15 USD/month, right? If so, is it really worth it to recover such relatively small amounts through serving ads?
Maybe I'm wildly off in my estimation; personally I see adding ads as a pretty heavy thing to do, so the monetary benefits would have to clearly outweigh that...
My website is hosted off a Hetzner dedicated server (Ryzen 5 3600, 64 GB of RAM) in Helsinki. That costs about 70 EUR per month (though I use it for other things like my IRC bouncer, which I've been considering moving to my homelab, and innumerable side projects I've picked up and abandoned over the years), and it also stores all of the random files I've accumulated over the years. It's on another continent, so I have an easy-to-access live backup of some very important files in case my house gets destroyed.
The other expensive parts are AWS Route 53 for DNS, which I moved to after dropping Cloudflare (the cost varies based on popularity, but ranges between USD$7.50 and USD$12.50; this is still cheaper than Cloudflare, which I was paying USD$20 per month), and the CDN I host on fly.io, which costs about USD$5-7 per month including bandwidth overages.
My website gets more traffic than you are probably comfortable with imagining. I have to take performance and hardware efficiency seriously. I can easily do a few hundred gigabytes per month of egress bandwidth. More if I end up posting bangers. This is after doing things like hilariously optimizing images (the stickers can get down to about 10 KB each with AVIF files) and literally having my site load everything into RAM on application boot.
I am told that these performance requirements are not needed for most websites, but I don't seem to be lucky enough to be able to do things the stupid way. I have to actually think about the performance impacts of everything I do because getting on the front page of Hacker News so often means that I need to focus on making things super efficient.
For a while my tower was also a Ryzen 5 3600, so I was doing the moral equivalent of compiling my website with -march=native to unlock _even more performance gains_, but the machine I deploy from is now a homelab node with an Intel Core i5 10600, so I can't do that trick anymore.
Subscribing to me on Patreon gives me infinitely more money than ads ever will, but having multiple income streams means that there's redundancy.
Of course I can afford to pay all of this out of pocket, but everything paying for itself is super fucking nice.
The actual site binary ends up using about 256 MB of ram worst case, the main problem is that in order to get more cores on Hetzner, you need to pay for more ram. I run some virtual machines on that dedi but they don't really add up to much in the RAM/active CPU cores department.
I go into much more detail about this on my blog here:
Those files are backed up in other safe places as added redundancy. Those files aren't visible from the public_html folder of my website. Please have faith that I know what I'm doing.
It's useful for trivial unambiguous tasks where you have your hands full or don't want to touch your device or it's dangerous to. That's all I can muster mine for.
"Hey Siri, add more toilet paper to the shopping list" (while pooping)
"Hey Siri, shuffle my music" (while driving)
"Hey Siri, countdown 10 minutes" (while shoving a pizza in the oven)
Anything else is a shit show. Anything where trust or accuracy is involved i.e. mutating data, spending money, absolutely no way can I trust it at all and never will.
Agreed, but I find for even these simple tasks it's hit-and-miss for accuracy. My Google device will randomly not know what a "shopping list" is, or the interactions go something like this:
"Hey Google, put dishwasher salt on the shopping list"
"OK, I added 'put dishwasher salt'"
(strangely, this particular bug only manifests for dishwasher salt).
Timers are useful, but sometimes they can't be shut off by voice command.
Yeah it doesn't always work well. I say "hey siri add green milk to the shopping list". I want "green milk" added to the shopping list which in the UK is semi-skimmed milk. What does it do? Adds "green" and "milk" because it thinks I'm a weed smoker...
Trust and accuracy are involved in the first and last of your examples - I'd end up having to check that the TP was actually added to the list, and that the timer had actually begun and was set to 10 minutes.
Shuffling music, turning lights on, yes fine - because confirmation that the right thing has happened is instant and effortless. Anything else, I'll use a button or a screen.
Definitely agree with this. You get that confirmation with siri. I mostly use my watch for it and it will show me what it did on the screen without having to touch anything.
Not really - adding toilet paper to a shopping list is not clicking the "buy" button. And if you set up a timer you get quick confirmation that it has been set. If the timer is accidentally set for 100 mins it's easily corrected.
I think the parent meant that you need to check whether these commands are executed properly, otherwise you get into trouble later. For example, if the toilet paper isn't added to the shopping list, and you go shopping with this list the next day trusting it contains everything you need, you're not buying the toilet paper. Similarly, if the timer is accidentally set to 100 minutes, you only notice it after, say, 20 minutes when there's black smoke coming out of the oven.
I just asked Alexa to set a timer for 2 mins, and you're right - she did then ponderously state that a timer for 2 mins was starting. Then she asked me if I'd like to hear tips about using timers? No. Then she told me I had two notifications, would I like to hear them? No.
Then I timed myself setting a timer on my phone, which took 9 seconds from pocket to running.
Adding to a shopping list isn't clicking the "buy" button, no - but if it's not on the list I won't buy it and then I will have no toilet paper. I would not need a list if I could simply remember everything.
Then she asked me if I'd like to hear tips about using timers? No. Then she told me I had two notifications, would I like to hear them? No.
Are you saying this for comedic effect, or does the Alexa really do this? (I'd look it up myself, but good luck with that query...) To each their own, but I'd throw the device into the street if it pulled a stunt like that.
Then I timed myself setting a timer on my phone, which took 9 seconds from pocket to running.
To the Homepod or my Apple Watch: "hey, siri, tea timer for three minutes".
"Three minute tea timer, starting now."
I didn't think a product could screw that up. I would suppose it's a design decision between "assistant" and "servant that carries out my command without backtalk". There are times that I wish the Apple product were more "assistant" than "servant", but the Alexa product just sounds pushy.
I use Alexa for shopping lists, I get a “toilet paper added to your shopping list” confirmation after adding items to my list.
It’s not perfect though; for example, when trying to add fruit and fibre cereal it will often add two items, “fruit” and “fibre”. But it’s close enough that when I get to the store and check the list, I know what I intended to add.
> "Hey Siri, add more toilet paper to the shopping list" (while pooping)
This is the main reason why I have an Echo in my bathroom! The one advantage Alexa has over everything else is that you can voice shop -- "alexa buy more toilet paper" solves the problem that much faster than a reminder for later.
I don't want that to happen because the price variation in toilet paper is huge based on deals and offers available, and Amazon is rarely the cheapest provider these days, so it's actually worth me spending a few minutes on it to save some money.
The reason Alexa exists is to sell you Amazon's prices, not necessarily a good deal.
Also, I think I'd rather just add stuff to my shopping list so that I can at a later time order everything together, rather than have multiple deliveries.
The sort of consumables I might order on Amazon on a regular basis--like those that the Amazon Dash buttons were intended to address--can vary a fair bit in price and quantity. I'm not going to have Amazon just ship whatever.
And it's not even a very frequent thing. Mostly, every few months, I look through what consumables need replenishing and I fill up the car with plus-size packages from Walmart.
I know the point of this article is to demonstrate how to solve a problem, but the premise is a bad one and must not be confused with sound software architecture. In fact it's slightly painful to read, and I really wish people would stop writing articles like this.
The functional requirement here is to take some HTML, parse it, and emit Slack-flavoured Markdown.
Involving WASM / Rust / cross-compiling and fucking around with a build toolchain should not even be discussed anywhere as a solution for this. I mean it's fine as a toy, but the problem is that half the industry doesn't have any idea what's a good idea and what isn't from an engineering perspective when it comes to solving problems, and this will be taken as gospel on how to solve the thing.
What we have here is a Rube Goldberg machine, not a cleanly solved engineering problem.
The article is tagged cursed for a reason. This is not intended to be a good solution, it is intended to be _a solution_ that works better than you'd expect while also explaining the Unix philosophy and going through my entire thought process into making it.
The best kinds of hacks are the ones that look ludicrous but are actually somewhat fine in practice.
It's very clearly a toy example to demonstrate the idea, and it does so well.
> What we have here is a Rube Goldberg machine, not a cleanly solved engineering problem.
And what _we_ have here is unfounded indignation over a perfectly fine way to solve a problem. People have been using linked libraries to re-use code across languages since forever, it's fine. This solution isn't very different, it just makes shipping easier.
As a parting comment, "sound software architecture" is not a decided upon principle which can be empirically determined, and if it was we'd all be out of a job.
> And what _we_ have here is unfounded indignation over a perfectly fine way to solve a problem
Absolutely no way is this a fine way to solve the problem. That is crazy talk.
1. It introduces additional toolchains into the solution when it is unnecessary.
2. It now means you need multiple language specialists to maintain it, along with the associated communication overhead and context switching.
3. More interfaces and integration means more fragility when it comes to debugging, problem solving as well as increasing the problem surface.
4. It massively increases the dependency stack which means there are now multiple sets of supply chain issues and patching required.
This makes no problems easier at all! It's even a bad last resort if that's all you have left to try.
Sound software architecture is very very well defined and this is definitely not it. I have seen entire companies burn by taking this approach to problem solving.
I'm really getting tired of solutions before problems, and this is a fundamental example of it. Give us a real use case instead of manufacturing a problem for it.
> 2. It now means you need multiple language specialists to maintain it and associated communications and context switching.
Sometimes you have that and you need to accept it. In this case the function was really trivial. What if it was a mega-secret algorithm that needs native speed, which you really want to write in Rust because XYZ? Or C++, I don't care. But not Go, which you want to use exclusively for microservices, for example.
> 3. More interfaces and integration means more fragility when it comes to debugging, problem solving as well as increasing the problem surface.
This is inevitable when you want to go that low level. Some companies use C++ for everything, some mix low/high level: how do you do in that case?
This article is about Go, but I could imagine something similar for Python: use something superfast under the hood, Python for the high-level part. It's really a common solution; it requires a few bindings here and there, but it's worth it.
> 4. It massively increases the dependency stack which means there are now multiple sets of supply chain issues and patching required.
This is how software is done today. Maybe 30 years ago you would write everything in C and be happy with it. Today companies use at least 3 programming languages - at least that's my experience.
> This article is about Go, but I could imagine something similar for Python: use something superfast under the hood, python for the high level part. It's really a common solution, which requires a bit of bindings left and right but it's worth it.
That's not a bad idea really, the wasm/wasi option could be a nice intermediate step between "we can't optimize the python any further" and "write a native extension" (or god forbid use cffi or the cursed horror that is ctypes).
> Absolutely no way is this a fine way to solve the problem. That is crazy talk.
I disagree. I see this in a similar way I see Electron and actually you can make the same arguments against using Electron.
But guess what, Electron wins on practicality. It makes creating GUI apps much easier. That wins over any problems associated with the extra baggage of shipping a whole browser. It doesn't win it for everyone, but it does for the majority of people who just want to get shit done.
> People have been using linked libraries to re-use code across languages since forever, it's a fine way to solve problems.
People have been doing it forever, but people have also hated it forever. Twenty years ago you saw SWIG in a build, you'd know you were probably in for a bad time in an exciting new way.
I think SWIG had a different purpose. The author here generates and embeds WebAssembly in Go, to avoid building the same lib for multiple platforms (+ the bindings to call the low level c code). Maybe the tool wasn't good enough? Right now this is just WebAssembly which is proven to work on multiple platforms.
If the API is clear and documented, I don't see why this would be an issue, except for the fact it might be a little bit clunky.
It's not the first solution I would come up with, but the question would be: why not? Just because we're used to older and more traditional patterns, why not just to embed webassembly for low level stuff in your code?
This appears to be a straw man. Nobody is trying to tell you to rewrite your software stack using this technique. The OP demonstrates a cool hack. The site we are currently occupying is a place for cool hacks. I don't see the problem. As far as hacks go, it's far from the most egregious that I've seen, and it even suggests a few thoughtful lessons about the future of FFI beyond the C ABI.
> The functional requirement here is to take some HTML, parse it and emit slack flavoured markdown.
That is already solved, and it's not what the article is about.
The nonfunctional requirement is a self-contained binary that is not dependent on the machine's own libraries or any extra files. That's not just a mental exercise but a feature.
That is what the article is about.
The "proper" engineering solution might be very well "just rewrite that small part in Go", but this approach is nonetheless interesting.
> The nonfunctional requirement is a self-contained binary that is not dependent on the machine's own libraries or any extra files. That's not just a mental exercise but a feature.
Let me add more to this: speed.
With Rust/WebAssembly in the mix you automatically gain speed, as shown in the article.
I honestly don't understand the negative comments.
If you build a lib in C and then use standard binding mechanisms, oh, that's OK. But in this case you leverage a great tool (cargo) to do the heavy lifting for you and, bam, you get a safe binary that gives you better performance and overall better tooling -> and that's bad... why?
I read the original version of the poster's comment and it was way more aggressive and more about "sound architecture". Maybe he/she can explain what exactly is wrong with this approach and what other approach should be taken instead?
> The "proper" engineering solution might be very well "just rewrite that small part in Go", but this approach is nonetheless interesting.
Why? In this case it's a trivial function. What if it's an algorithm that genuinely needs speed? Rust helps there, and it's faster than Go.
For a system architect or senior developer it's very interesting to know I might glue a (not so trivial) Rust codebase into my Go program so easily. For people actually working on Go, Rust, WASM, etc., these are also experiments to evaluate the ergonomics, performance, etc. of all the tooling. For someone who wants to learn how FFIs work, this is a great tutorial.
But I'm certain I will now get at least one mid-level dev or interview candidate actually try to do this, in a similarly trivial case. And I will have to explain yes you already wrote the whole build pipeline but no we're not going to maintain it, and yes the blog post says it's fast but it's not really that fast, and etc. etc. It's bad enough every time I have to hear "it's just one Python script, what could it cost?", but the more complex it gets the more energy it takes to genuinely convince (rather than merely authoritatively declare) people not to do it, and the limit there seems unbounded.
This is a toy example to demonstrate a framework to build with. But I disagree that it's not a cleanly solved engineering problem. Instead of reimplementing xer Rust code in Go, Xe has simply taken the functional Rust code and made Go run it in a cross-platform and portable manner. Minimal effort from the engineer has been expended to solve this, and C was not involved for FFI definitions either.
I don't care for the eccentricities of the Rust community, but this is a good article that demonstrates an effective approach to invoke code from one language to another. The problem is a toy one that's not the focus here. The focus is a better means of doing interop
> Minimal effort from the engineer has been expended
Is the definition of "how do I get promoted in a company where I don't care if it survives the next five years", not "a cleanly solved engineering problem".
The alternative is rewriting the Rust code in Go, or vice versa.
And if the rewrite goes Rust -> Go, there's also the continual effort of porting any bugfixes from the upstream lib.
I dare say the more complex toolchain is the easier and less time-consuming option.
I wouldn't do it in this particular case (converting Mastodon-flavoured HTML is probably simple enough), but it wouldn't be a terrible solution in some cases.
What's more interesting is using the same method for app plugins: now you can compile anything to WASM, and as long as it has the right hooks it can be used in your app as a plugin.
Depends on how well supported those stacks are. WASM is very well supported and likely to be tested/used/improved extensively as the years go by.
I'd rather work on software that depends on three different tech stacks that are well understood and used by many, than software that depends on a single niche tech stack.
I'm not a "stick to a single stack for everything" kind of person, but here we're comparing Go or Rust, to Go + Rust + WASM. The first option is strictly and substantially less risky in this dimension.
I've dealt with pretty much everything from steaming nightmare creeping Cthulhu desktop applications right into back end fintech stuff written in the dark ages over the last 30 years. At no point have I found this solution being applied where it solved a problem. I have seen it applied many times where it created problems!
Author here. My article is not supposed to talk about a good idea. It's meant to bring a bad idea to the table and explain why it works. I designed the function in question with the understanding that it would fire once or twice per 10 minutes. This means that paying a cost like 3.1 megabytes per invocation is okay. If this was intended to run _constantly_ (such as if it was a core part of a run loop), that's different.
With the version of wazero I'm using right now, it's about 0.3 milliseconds to do a single call from Go to Rust and back in the best case on my hardware. I am told that a recent patch to wazero will speed this up by a lot, so I'm going to try upgrading to that patch and see what difference it makes. I still think it will be a bit slower than cgo, BUT the platform independence and strict sandboxing make up for it in my book.
I think the article is absolutely clear about it but also I think people will (are! in this very thread) ignore that part and make enormous messes that will hurt real users and someone else will have to clean up.
I’m not sure how to fix this but I’m also tired of pretending it’s not a chronic problem.
There is constructive criticism and then there is this. Once you use phrases like the above and follow it up by pontificating about how you think the author's house looks like, you have crossed a line, by far.
I know that the use of cat isn't technically required, but I still build bash oneliners step-by-step and find starting with `cat foo` to be a helpful reminder of the format of the file.
A common issue of having a majority of "feature oriented" engineers on your team is that they block any effort to improve the existing codebase, so while you're cranking features out, the code slowly rots and technical debt creeps in.
The counterpart of this is having a majority of "perfection oriented" engineers, in which you have top notch processes and code, but with endless yak shaving and little product work being done :)
I'm personally better off. Not sure that would be the case for you or others.
Re: food- Fresh produce in the PNW is incomparable imo. Never had anything like it in the world. In the UK I was largely limited to Sainsbury's etc., which have pretty poor quality food and produce.
Here in the PNW the variety (and quality) far exceeds what was available to me in the West Midlands, Greater Manchester and Greater London (not sure about other parts of the country).
It's a very pleasant luxury to not be limited to supermarkets.
Increased risk, increased reward. And if I'm fired, laid off, or deported, worst case I just go back to the UK having already earned hundreds of thousands of dollars more than I would have in the UK.
I don't think there's any debate that software engineers are paid much more on average in the US, or that the number of highly paid jobs is much greater in the US.
Even within the same firm, for the same job, it seems like UK engineers are usually paid much less than US ones.
For now. I know someone who was "doing waaay better in the US", took 3 weeks of leave, and came back to a huge series of layoffs, market saturation, and unemployment with no healthcare; then his visa renewal was declined. So he had to spend his last wedge of cash getting back to the UK...
Took a position with 2/3 of the US salary but actual tangible security.
Conversely one of my children has a complex medical condition which would have bankrupted me several times over in the US. And the service has been second to none.
I specified and architected an internal ERP system (PLM/SRM side) to replace a paper system. This was a big Oracle/web thing in the early 00's at the peak of Six Sigma. This was resisted by the engineering teams who had been using the old system since WW2. Rather than use it, there was a mini strike that resulted in nearly 50 engineers taking retirement and led to a national scandal when a huge defence project was delayed and went over budget. It was one of the largest contributing factors in the project's failure.
After doing lots of post-mortem analysis, the paper system was far more capable and had a better audit trail, and most of the objections raised were entirely spot on. But we steamrolled them because we were under the Six Sigma project flag.
After seeing the shit show and reflection, I quit and got a shit job throwing web sites together.
Just like to say a big thank you for this. When ActiveX was no longer acceptable I wrote some desktop integration technology which replaced it for web apps with an http based background messaging protocol and activation via URL handler. This was sold to some large corporates. This made me a fuck load of money over the last decade or so. If you hadn't built ActiveX this wouldn't have been possible.