
A bit like LINQ, only it's a pre-processor pass instead of being integrated into the language like in C#.


It just means that if you change the code of SŌZU you'll have to make your fork public on GitHub. Not a big deal.


If I understood it correctly, if you change the code of SŌZU, you need to open-source not only your fork, but everything that can be reached/connected to by SŌZU.

Google has a policy against using AGPL internally or on company equipment[0].

    The primary risk presented by AGPL is that any product or service that depends on AGPL-licensed code, or includes anything copied or derived from AGPL-licensed code, may be subject to the virality of the AGPL license. This viral effect requires that the complete corresponding source code of the product or service be released to the world under the AGPL license.
[0] https://opensource.google/documentation/reference/using/agpl...


I don't think this is true; the idea is that it triggers the same-license requirement even for network-accessible code, whereas normal GPL only triggers it when the code is distributed, which allowed the SaaS loophole to exist.

As for the spreading-to-other-programs part, yes, that's the entire point: if you use free code, anything built on it should be free too, to continue giving back to the free/open ecosystem.


This seems more like an argument against the object oriented model of C++ than anything else. It would have been more interesting if the performance had been compared to languages like Rust.


I think it could be ok to have a link every once in a while that doesn't talk about rust.


We're talking about C++ and performance, there's no way nobody would mention Rust :P


My key learning is the importance of balancing performance and code cleanliness instead of blindly adhering to clean code principles.


If you optimize for readability, performance will suffer. If you optimize for performance, readability will suffer.

Casey prizes performance over everything else.


In this particular case I find the code optimized for speed (the one using switch) to also be more readable and simpler than the code using virtual dispatch.

The problem with virtual calls in a big project is that there is no good way of knowing what the target of the call is without some additional tooling like an IDE. But in the case of a switch/if, it is pretty obvious what the cases are.
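To make that concrete, here's a minimal sketch (not the code from the article, just an illustration) of why a switch reads as a closed set of cases — every possible target is right there at the call site, instead of hidden behind whatever happens to derive from a base class:

    #include <cstdint>

    enum class ShapeType : uint8_t { Square, Rectangle, Triangle, Circle };

    struct Shape {
        ShapeType type;
        float width;
        float height;
    };

    // All cases are enumerated right here; a reader (or grep) can see
    // every possible branch without chasing a class hierarchy.
    float Area(const Shape& s) {
        switch (s.type) {
            case ShapeType::Square:    return s.width * s.width;
            case ShapeType::Rectangle: return s.width * s.height;
            case ShapeType::Triangle:  return 0.5f * s.width * s.height;
            case ShapeType::Circle:    return 3.14159265f * s.width * s.width;
        }
        return 0.0f;
    }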


Sure, but what happens once you want to start supporting shapes other than the basic ones? Because clean code assumes code will be changed/maintained.

Then you get people writing their own horrible hacks.

Both clean code and performance oriented design have their extremes.

Clean Code has Spring with its Proxy/Method/Factory monsters... and hyper-performance has its extreme in the Story of Mel (i.e. read-and-weep-only code).


> Sure, but what happens once you want to start supporting shapes other than the basic ones? Because clean code assumes code will be changed/maintained.

You get to push back and ask, is it worth the developer time and predicted 1.5 - 5x perf drop across the board (depending on shape specifics)? In some cases, it might not be. In others, it might. But you get to ask the question. And more importantly, whatever the outcome, you're still left with software that's an order of magnitude faster than the "clean code" one.

Clean code "assumes code will be changed/maintained" in a maximally generic, unconstrained way. In a sense, it's the embodiment of "YAGNI" violation: it tries to make any possible change equally easy. At a huge cost to performance, and often readability. More performant code, written with the approach like Casey demonstrated, also assumes code will be changed/maintained - but constraints the directions of changes that are easy.

In the example from the video, as long as you can fit your new shape to the same math as the other ones, the change is trivial and free. A more complex shape may force you to tweak the equation, taking a little more time and incurring a performance penalty on all shapes. An even more complex shape may require you to rewrite the module, costing you a lot of time and possibly performance. But you can probably guess how likely the latter is going to be - you're not making an "abstract shape study tool", but rather a poly mesh renderer, or a non-parametric CAD, or something else that's specific.
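For reference, the kind of table-driven version being described boils down to something roughly like this (a hedged sketch in the spirit of Casey's demo, not his exact code; the names and coefficient values are illustrative). Adding a shape that fits the width * height * coefficient pattern is a one-line table entry; a shape that doesn't fit forces you to rethink the function:

    // One coefficient per shape type, indexed by the enum value.
    // Adding a new shape that fits the w*h*C pattern is a single new entry.
    enum ShapeType { Square, Rectangle, Triangle, Circle, ShapeCount };

    struct Shape {
        ShapeType type;
        float width;
        float height;
    };

    static const float CoefficientTable[ShapeCount] = {
        1.0f,          // Square:    w * h (with w == h)
        1.0f,          // Rectangle: w * h
        0.5f,          // Triangle:  0.5 * w * h
        3.14159265f,   // Circle:    pi * w * h (with w == h == r)
    };

    float Area(const Shape& s) {
        return CoefficientTable[s.type] * s.width * s.height;
    }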


> Sure, but what happens once you want to start supporting shapes other than the basic ones?

Sure, but what happens once you want to start supporting more operations on the shapes?


Add more abstractions of course.


I do agree. I've looked at code bases where the switch was replaced with virtuals and the like. It's kind of hard to navigate around the code when that happens. I think I'd usually rather have a completely separate path through the code than a switch or virtual, making the decision at the highest level possible, usually in the top-level caller.
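Something like this hypothetical sketch, where the branch is hoisted into the caller and each path is just straight-line code over homogeneous data:

    #include <vector>

    struct Circle    { float r; };
    struct Rectangle { float w, h; };

    // Each path is straight-line code with no per-element dispatch;
    // the decision happens once, in the caller, based on what data it has.
    float TotalCircleArea(const std::vector<Circle>& circles) {
        float sum = 0.0f;
        for (const Circle& c : circles) sum += 3.14159265f * c.r * c.r;
        return sum;
    }

    float TotalRectangleArea(const std::vector<Rectangle>& rects) {
        float sum = 0.0f;
        for (const Rectangle& r : rects) sum += r.w * r.h;
        return sum;
    }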


Indeed. If anything, this demo shows how badly C++ polymorphism performs. It doesn't necessarily mean that all OOP languages are created equal. Although I have no data to prove anything, and frankly don't care, because all these arguments about clean vs dirty code are meaningless in the absence of formally defined rules and metrics universally enforced by some authority that can revoke your software dev license or something like that.


> It doesn't necessarily mean that all OOP languages are created equal.

Exactly - unless you're trying very hard, you're unlikely to beat C++ polymorphism with your OOP code in a different language. Which makes Casey's argument that much stronger. C++, with its relatively unsophisticated OOP and minimal overhead on everything, is as fast as you're going to get, so it's good for showing just how slow that still is if you follow the Uncle Bob et al. Clean Code tradition.


> C++, with its relatively unsophisticated OOP and minimal overhead on everything, is as fast as you're going to get

No it isn't. If your C++ compiler isn't devirtualising at all (as implied by the article), it'll get stomped on by anything doing inline caching [0], which will generate the switch-case code. The JVM does that, for example.

[0] https://bibliography.selflanguage.org/_static/pics.pdf
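Roughly speaking, an inline cache speculates on the receiver's concrete type and guards a direct (and therefore inlinable) call on it, which is also what speculative devirtualisation in a JIT does. A hand-written C++ approximation, purely illustrative - real inline caches check the class/vtable pointer at the call site and patch themselves based on observed types - might look like:

    struct Shape {
        virtual ~Shape() = default;
        virtual float Area() const = 0;
    };

    struct Circle final : Shape {
        float r;
        explicit Circle(float radius) : r(radius) {}
        float Area() const override { return 3.14159265f * r * r; }
    };

    // Guess the common concrete type, guard on it, and fall back to the
    // generic virtual call only when the guess is wrong.
    float AreaWithGuess(const Shape& s) {
        if (const Circle* c = dynamic_cast<const Circle*>(&s)) {
            return c->Area();  // direct call; Circle is final, so inlinable
        }
        return s.Area();       // virtual dispatch fallback
    }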


I'm surprised I had to scroll down this far to find the obligatory Rust evangelist comment.


It all boils down to capitalism. Capitalism won't allow you to spend time and money building quality when someone else can do the same job twice as fast and at half the cost. Getting to market first is more important for most than having software that works all of the time.

The only reason other engineering professions do things properly is that bridge building is not governed by free-market capitalism but by government-mandated rules. Just look at how much the GDPR rules in the EU do to actually make companies do the bare minimum to respect privacy.


For personal notes I mostly use Notion. But for a while I used Obsidian with markdown files stored in a Git repository; this mostly worked fine.

In a professional capacity I use Docusaurus for project documentation; this is stored in the same Git repo as the rest of our product's source code (so one monorepo for both code and documentation).


I haven't really heard about anyone going from Notion to Obsidian. What does Notion do right? I find the lack of local files a bit disturbing.


I found that most tooling was overkill and now just use a single <100-line Bash script [0].

[0] https://github.com/TheKnarf/configs/blob/master/setup


Why are you hosting compiled binaries on GitHub and no source code...


GitHub issues are great from a developer perspective, much easier to work with than Jira and most other solutions.

But from a project management perspective it does lack in some areas. This mostly depends on what you need and how you use it. Unlike Jira, GitHub Issues does not come with plugin support and can't be customised as much. This makes it easier for developers to work with but might not give you all the power you want while planning the project. For example, to my knowledge, there's no way of assigning "story points" and having "burn-down charts" in GitHub Issues, as it's not a tool meant for doing Scrum. But if you're not doing Scrum then it doesn't matter. It also doesn't have a concept of sprints and epics. But it does have both tags and releases, which you can use for planning.

And the new Github Issues beta looks to add a lot of features: https://github.com/features/issues


What does this solve that isn't already solved by Vagrant?


I haven't used this, but here's a similar/related project for macOS as the host OS: https://github.com/knazarov/homebrew-qemu-virgl

Vagrant is an orthogonal tool to this; it is a VM orchestrator, not an actual VM. What does this do? Well, qemu doesn't have virgl support merged yet, so you need to go to some lengths to compile it yourself.

All of the other stuff (spice, virtio) is for "it should be a nice user experience and perform well" above and beyond simply being fast enough to use. In other words, you should be able to copy and paste between the host OS and the guest. You should be able to slide your mouse across the border of the VM window and do some clicking around then simply slide it back out and use your mouse with native host-OS windows again. It should have all of the features you expect and not force you to read a bunch of tutorials to find the features you expect from your desktop VM.

These things are all not a given when you use qemu out of the box. I have this intense 25-line "qemu" script for invoking qemu-system properly. It was enlightening, but I'm not sure how much I was enriched by the process of actually figuring all this out.

Quickemu is, I guess, for figuring all this stuff out for you and making it easier to do (and on Linux).


> I have this intense 25-line "qemu" script for invoking qemu-system properly. It was enlightening, but I'm not sure how much I was enriched by the process of actually figuring all this out.

Do you have your script posted publicly somewhere?


Yes! https://github.com/kingdonb/dot/blob/ubuntu-vm/bin/qemu

This expects that you're using the special qemu from the prior link, compiled with Homebrew (else I think there will be no virtio-vga-gl video driver?).

The guest OS is an Ubuntu VM. I think the instructions say to use a recent Fedora/Silverblue for a reason (there are some things that don't quite work right around window resizing.)

Each time I start the VM, it shows up with tiny, tiny pixels and the menubar does not work. I switch to another app, switch back, go to the menu and enable "zoom to fit", and it's off to the races. Another thing to be aware of: if you resize the window it actually scales the pixels (which is OK and doesn't even have any noticeable perf impact, because OpenGL, I guess).


Thanks :)


Or even libvirt/the various virt-* utilities (e.g. virt-install).

I'm not against tooling like this, but as someone already pretty familiar with KVM... I think I'd be quicker with virsh, as that's what I've been operating with.


I haven't tried embedding a browser window in OBS on Mac yet, I'll have to try it.

But you should check whether you have activated hardware encoding. Go to "Settings" and "Output", select "Output Mode: Advanced", and then in the "Encoder" dropdown you should see an option called "Apple VT H264 Hardware Encoder". That should hopefully help somewhat with CPU usage.

