
The most fun I had with (2D) graphics programming was trying to reinvent the whole wheel myself in a high-level language (C# in this case). My approach was to simply output bitmap images, subsequently compressed into JPEG by way of libjpeg-turbo, at a high enough framerate to produce a believable result in a web browser, other clients, or even as input to ffmpeg.
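The core loop is simple enough to sketch. Below is a simplified, hypothetical C# version of the idea, not my actual code: HttpListener stands in for the real server, System.Drawing stands in for the libjpeg-turbo binding, and frames are pushed as a multipart/x-mixed-replace (MJPEG) stream, which is one way to get a browser to display them at framerate (the original transport wasn't specified).

    // Minimal sketch: render a frame into a Bitmap, JPEG-encode it, and push it
    // to the browser as a multipart/x-mixed-replace (MJPEG) stream.
    using System;
    using System.Drawing;
    using System.Drawing.Imaging;
    using System.IO;
    using System.Net;
    using System.Text;
    using System.Threading;

    class MjpegServer
    {
        const string Boundary = "frame";

        static void Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:8080/");
            listener.Start();
            while (true)
            {
                var ctx = listener.GetContext();
                ctx.Response.ContentType = "multipart/x-mixed-replace; boundary=" + Boundary;
                new Thread(() => StreamFrames(ctx.Response)).Start();
            }
        }

        static void StreamFrames(HttpListenerResponse response)
        {
            using var bmp = new Bitmap(1920, 1080, PixelFormat.Format24bppRgb);
            using var g = Graphics.FromImage(bmp);
            var output = response.OutputStream;
            try
            {
                while (true)
                {
                    // "Render": a blank frame, i.e. the baseline case described above.
                    g.Clear(Color.White);

                    using var jpeg = new MemoryStream();
                    bmp.Save(jpeg, ImageFormat.Jpeg);   // a libjpeg-turbo binding would slot in here

                    var header = Encoding.ASCII.GetBytes(
                        "--" + Boundary + "\r\nContent-Type: image/jpeg\r\nContent-Length: " + jpeg.Length + "\r\n\r\n");
                    output.Write(header, 0, header.Length);
                    jpeg.WriteTo(output);
                    output.Write(Encoding.ASCII.GetBytes("\r\n"), 0, 2);
                    output.Flush();

                    Thread.Sleep(16);                   // rough 60 fps pacing
                }
            }
            catch (Exception) { /* client went away */ }
        }
    }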

To my surprise, I was able to get a fairly stable 60fps@1080p (in Chrome) using this approach. Not one line of GPU-accelerated "magic" required. Now, that figure is just for a blank image sent to the client each time. The "fun" part was figuring out how to draw lines, text, etc. without absolutely crippling the performance. I got far enough to determine that a useful business system/UI framework could hypothetically be constructed in this way. Certainly a little too constrained for AAA 3D gaming, but there are many genres that could fit within a smaller perf budget. Controlling the entire code pile is an extremely compelling angle for me and the products I work on.
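Line drawing is a good example of the wheel-reinventing part: plain Bresenham straight into a packed byte[] framebuffer, with no library calls in the hot path. A rough, generic sketch (the tightly packed 24bpp BGR layout here is an assumption, not necessarily what I used):

    using System;

    static class SoftwareRaster
    {
        // Bresenham's line algorithm into a tightly packed 24bpp BGR buffer.
        static void DrawLine(byte[] fb, int width, int height,
                             int x0, int y0, int x1, int y1,
                             byte r, byte g, byte b)
        {
            int dx = Math.Abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
            int dy = -Math.Abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
            int err = dx + dy;
            while (true)
            {
                if (x0 >= 0 && x0 < width && y0 >= 0 && y0 < height)
                {
                    int i = (y0 * width + x0) * 3;   // 3 bytes per pixel, no row padding
                    fb[i] = b; fb[i + 1] = g; fb[i + 2] = r;
                }
                if (x0 == x1 && y0 == y1) break;
                int e2 = 2 * err;
                if (e2 >= dy) { err += dy; x0 += sx; }
                if (e2 <= dx) { err += dx; y0 += sy; }
            }
        }

        static void Main()
        {
            int w = 1920, h = 1080;
            var fb = new byte[w * h * 3];                       // one 1080p frame
            DrawLine(fb, w, h, 0, 0, w - 1, h - 1, 255, 0, 0);  // red diagonal
            Console.WriteLine("diagonal rasterized into the framebuffer");
        }
    }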

I could have started with D3D, Vulkan, et al., but I can guarantee I would not have spent as much time or learned as much down those paths. I have already spent countless hours screwing with these APIs (mostly D3D/OpenGL) over my career, only to end up with a pile of frustrating garbage because I couldn't be bothered to follow all the rules exactly the way they wanted me to. I personally view the current state of graphics APIs and hardware as extremely regrettable and "in the way".

To me, a modern graphics card and its associated proprietary functionality stack is like the most recent Katy Freeway (I-10) expansion in Houston: a totally hopeless attempt at solving a much deeper and more difficult problem (traffic in one case, how to create fun/useful visual experiences in the other).
