gdxhyrd's comments

In almost no situation do you want software, pixel-based access.

Are we really going back 30 years?


To reinforce this: is a pixel actually a well-defined, usable unit? As far as I understand, many modern monitors use a definition of "pixel" quite different from the one of a couple of decades ago.


Where is this all being discussed and developed?


There is a W3C-sponsored Community Group: https://www.w3.org/community/webassembly/.

Development of the spec(s) happens in the open in the various GitHub projects (https://github.com/WebAssembly/). Meeting notes, agendas, proposals, etc. are all kept as Markdown in the Git repositories.


Thanks!


POSIX, SDL and the usual suspects are pretty much cross-platform and run everywhere.


Eh, no. Vendors everywhere will package things for your architecture and give you binaries.

That is how it has always been done and is still done.

Yes, you can do it differently with an IL, but that isn't new either (Java, .NET, etc.).


No, OpenGL is not "on the way out". Please back up your claims.


“Apple deprecates OpenGL across all OSes”: https://www.anandtech.com/show/12894/apple-deprecates-opengl...


Apple is not Khronos.

They haven't supported OpenGL for years anyway, so the fact that they are deprecating it now is irrelevant.


A cross platform library loses its potency if it doesn't work cross platform.


OpenGL does not stop being cross-platform just because it does not work natively on one platform.

In any case, abstraction layers for macOS already exist.

And, going by your definition, if Apple only allows Metal, then no API will ever be cross-platform anyway.


OpenGL was never supported on games consoles, so....


The GL tools rot is already starting. E.g. the Radeon GPU Profiler supports Vulkan, D3D12, OpenCL, but not OpenGL:

https://gpuopen.com/gaming-product/radeon-gpu-profiler-rgp/


That profiler is meant for low-level debugging as its own description says. That is why it does not support DX11 either.


AMD's OpenGL tools don't look too active either. E.g. the last release of CodeXL is from 2018, and that release only updated dependencies or removed functionality; for instance, this nugget from the release notes:

---

* Removal of components which have been replaced by new standalone tools:

  * FrameAnalysis - use https://github.com/GPUOpen-Tools/Radeon-GPUProfiler

---

...and that Radeon GPU Profiler is the tool which only has D3D12 and Vulkan support.

Look around for AMD's recent OpenGL activity: there's not much, which isn't surprising, because they've been lagging behind NVIDIA with their GL drivers since forever. I bet they're eager to close that chapter.

NVIDIA seems more committed to GL still, but without having all GPU vendors on board to continue supporting OpenGL, Khronos won't be able to do much to keep it alive.


> but without having all GPU vendors on board to continue supporting OpenGL, Khronos won't be able to do much to keep it alive

Please don't make things up. OpenGL 4.6 was released in July 2017. According to Wikipedia, modern AMD and NVIDIA cards both gained driver support for it in April 2018. Intel drivers have had support since May 2019.

(https://en.wikipedia.org/wiki/OpenGL#OpenGL_4.6)


AMD has always lacked tools and has never produced much on the software side.

That is not news, and that has nothing to do with the state of OpenGL.

Please, stop spreading misinformation about OpenGL.


OpenGL absolutely is a legacy API today. It has been an awful impedance mismatch to modern GPUs for about a decade now.


It's a high-level API. It went through N generations of graphics hardware for 30 years (40 if you consider IRIS GL).


And it became increasingly less of a good fit with each one. By now, it is ridiculously mismatched to the task it needs to perform.

Just leave it to die.


OpenGL is certainly used by a number of creative apps like Photoshop, but you can't deny that the number of games released with OpenGL is down significantly since say 2010.


A lot of games and apps are being released using OpenGL today.

Then there is WebGL and OpenGL ES, widely used everywhere too.

So, no, it is not going anywhere. In fact, Khronos themselves have said so.


I didn't say that there were no OpenGL games being released in 2019/2020 but that the ratio is definitely skewing away from OpenGL. "A lot of games" seems like a stretch compared with what the numbers used to be.

Also, I think it's obvious but I'll say it anyway: it's not up to Khronos, it's up to developers. It doesn't matter if they continue to support OpenGL if developers move to some combination of DX11/12, Vulkan, and Metal.

WebGL and OpenGL ES aren't OpenGL. Their uptake or lack thereof in other arenas is orthogonal to whether people are moving away from OpenGL for game development.


Considering UE4 and Unity both support OpenGL, yes: a lot of games, if not the majority.

WebGL and OpenGL ES are pretty much OpenGL. Formally they are different standards, but they are all based on OpenGL: if you know one, you know the others pretty well, which is what is important for developers, as you agree in your second paragraph.


Once again, that's support, not actual use. Developers could use OpenGL in Unreal. They could, but they haven't in general. Vulkan is the default, and is the more common choice.

If it were true that lots of games were picking OpenGL, why is it so hard to make a long list of them? It's easy to make such a list for Vulkan.


It depends on what you are trying to do.

Most of these benchmarks are not done to predict performance on a target machine, but to find out which option is faster.


You are speaking as a stats person. :)

While there are dozens of things that can go south in benchmarking, how to solve them is not about applying advanced stats, but about understanding and eliminating the sources of noise.

Most benchmarks in CS can be done quite easily as long as one understands the whole technology stack.
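
To make that concrete, here is a minimal sketch (in Python, with a hypothetical bench helper, not from this thread) of the kind of hygiene I mean: warm up, repeat, and look at the spread before trusting anything. Controlling frequency scaling, background load, and core pinning still has to happen outside the script.

    import time

    def bench(fn, *, warmup=10, runs=50):
        """Time fn() repeatedly after a warm-up phase and report the spread.

        Assumes fn is a side-effect-free workload; frequency scaling,
        background processes, etc. must be controlled outside this script
        (fixed CPU governor, quiet machine, pinned core, ...).
        """
        for _ in range(warmup):   # warm caches, JITs, branch predictors
            fn()
        samples = []
        for _ in range(runs):
            start = time.perf_counter()
            fn()
            samples.append(time.perf_counter() - start)
        best = min(samples)       # the min is least affected by external noise
        spread = (max(samples) - best) / best
        print(f"best={best:.6g}s  spread={spread:.1%}  (n={runs})")
        if spread > 0.05:
            print("warning: >5% spread; find and remove the noise source "
                  "instead of reaching for heavier statistics")
        return best

    # Example usage with a trivial workload:
    bench(lambda: sum(range(100_000)))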


I'm not a stats person! But I have been burned in real projects by doing the wrong thing. I have just learned, the hard way, to honestly appreciate it.


> Please don't do that: they won't necessarily be the outliers in the dataset and your model will converge to the wrong thing.

They are the outliers. These experiments are typically measuring something that is effectively constant with almost zero noise, rather than some complex physical phenomenon.

If you don't get an almost perfect fit, there is something going on that invalidates what you are doing (e.g. cache effects, clock effects, etc.).

In fact, if there are any outliers, I would not trust the benchmark at all. So removing them seems like trying to fix a bad benchmark with statistics.
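
As a toy illustration of what an "almost perfect fit" means here (a hypothetical Python helper, ordinary least squares written out by hand): fit the timings against input size and refuse to proceed if the fit is not essentially exact, instead of dropping points.

    import statistics

    def check_fit(sizes, times, r2_threshold=0.999):
        """Fit time = a*size + b and check the fit quality.

        For a benchmark of a linear-time routine the fit should be
        essentially perfect. A poor R^2 (or visible outliers) means the
        measurement itself is broken (cache effects, frequency scaling,
        interference), not that some points should be discarded.
        """
        mx, my = statistics.fmean(sizes), statistics.fmean(times)
        sxx = sum((x - mx) ** 2 for x in sizes)
        sxy = sum((x - mx) * (y - my) for x, y in zip(sizes, times))
        a = sxy / sxx
        b = my - a * mx
        ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(sizes, times))
        ss_tot = sum((y - my) ** 2 for y in times)
        r2 = 1.0 - ss_res / ss_tot
        if r2 < r2_threshold:
            raise RuntimeError(f"R^2 = {r2:.5f}: fix the benchmark, "
                               "don't drop the 'outliers'")
        return a, b, r2

    # Usage: sizes and times from e.g. timing a linear-time routine
    # a, b, r2 = check_fit(sizes, times)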

> Statistics is hard. Like, really really hard. Stuff goes wrong all the time. Please leave it to the experts.

That is unnecessary gatekeeping. Benchmarking in CS is hard not because the maths/stats that are needed are hard, but because setting up the right experiment is hard and most people don't know all the pitfalls.

Therefore, if anything, you should leave benchmarks to CS/SE experts, rather than a statistician!


Not really, because commits don't go across the entire SVN repository, which is what makes monorepos so powerful.


What do you mean? When you commit to svn the whole repository goes up in version number.


You are right, I was thinking of CVS.

In any case, with SVN you usually do not want to give write permissions to everyone across the whole tree, so you end up with effectively partitioned spaces, or you make several repos instead, or you put another layer on top. With Git, anyone can easily make commits that span the whole tree.


What problems did you encounter with just a few services?

Monorepos should be straightforward unless you are managing the code of >1k engineers.


We’ve run into some nontrivial but totally solvable issues at about 100-200 engineers.

IME, most consternation comes from people adopting a monorepo without adopting a build/dependency graph tool (like Bazel, Buck, or Pants).

An additional source of strain is from people abusing the repo (checking in large binaries, third party dependencies, etc).

A third is when people try to do branch-based feature development, instead of the “correct” practice of only deploying master (or weekly cuts of master).

I think even a simple list of these sort of “gotchas” would be valuable for the aspirational mono repo company.

My impression is that a lot of teams hit these early and painful roadblocks, and imagine that they’ll never go away (they do!!).


Checking in third-party dependencies is not always abuse. It can be a useful habit for certain kinds of reproducible builds. The Buck documentation even endorses keeping your dependencies in your monorepo along with your own sources.


I understand the reasoning, and agree that it's not always abuse. At first blush it's a good idea, but I'd maintain that it's one of the things that balloons your repo size quite quickly. Plus, one has to draw a line somewhere on what to include (a Python interpreter? A Go version? awk and grep?), and third party vs in-house is a fairly robust one imo.

We host a private mirror for third party dependencies, so that “pip install”/“go get” fail on our CI system if the dependency isn't hosted by us. This gives us reproducible builds, while allowing us to hold 3rd party libraries to a higher standard of entry than source code. For certain libraries we pin version numbers in our build system, but in general it allows us to update dependencies transparently. It also keeps our source repo small for developers, and allows for conflicting versions (for example, Kafka X.Y and X.Z) without cluttering the repo with duplicates.

It’s definitely a smaller gotcha than the others I listed, maybe to the point where it’s not a gotcha, but I stand by it :)


If you can do that with 3rd party dependencies, can't you do that with all the code?

This is what confuses me about monorepos. Their design requires an array of confusing processes and complex software to make the process of merging, testing, and releasing code manageable at scale (and "scale" can even be 6 developers working on 2 separate features each across 10 services, in one repo).

But it turns out that you can also develop individual components, version their releases, link their dependencies, and still have a usable system. That's literally how all Linux distros have worked for decades, and how most other language-specific packaging systems work. None of which requires a monorepo.

So what I'd like to know is: of the 3 actual reasons I've heard companies claim they need a monorepo for, is it impossible to do these things with multirepo? If it is indeed "hard" to do, is it "so hard" that it justifies all the complexity inherent to the monorepo? Or is it really just a meme? And are these things even necessary at all, if other systems seem to get away without them?


These are great questions!! :)

> Can you treat all code like 3rd party dependencies?

Yes, but there are trade-offs. Discoverability, enforcing hard deadlines on global changes, style consistency, etc.

> Is it impossible to do these things with multi-repo?

No, but there are trade-offs to consider.

> If it's hard, is it "so hard" that it justifies the complexity?

Hitting the nail on the head; there are trade-offs :)

> Are these things necessary, if other systems get away without it?

There are many stable equilibria; the open source ecosystem evolved one solution and large companies evolved another, because they have been subject to very different constraints. The organization of open source projects is extremely different from the organization of 100+ engineer companies, even if the contributor headcounts are similar.

For me, the semantic distinction between monorepos and multirepos is the same as the distinction between internal and 3rd party dependencies. Does your team want to treat other teams as a 3rd party dependency? The correct answer depends on company culture, etc. It's a set of tradeoffs, including transparency over privacy, consistency over freedom, collaboration over compartmentalization.

With monorepos, you can gain a little privacy, freedom, and compartmentalization by being clever, but get the rest for cheap; vice versa for multirepos. It's trading one set of problems for another. I'd challenge the base assumption that multirepos are "simpler"; they're just more tolerant of chaos, in a way that's very valuable for the open source community.

I hope we've not been talking past each other, I really like the ideas you're raising! :)


I don't think we're talking past each other, and thank you for your responses.

> Does your team want to treat other teams as a 3rd party dependency?

From what I recall, 'true' microservices are supposed to operate totally independently of each other, so one team's microservice really is a 3rd party dependency of another team's (if one depends on the other). OTOH, monolithic services would require much tighter integration between teams. But there's also architecture like SOA that sort of sits in the middle.

To my mind, if the repo structure mimics the communication and workflow of the people writing the code, it feels like the tradeoffs might fit better. But I'd need to make a matrix of all the things (repos, architectures, SDLCs, tradeoffs, etc.) and see some white papers to actually know. If someone feels like writing that book, I'd read it!


> This is what confuses me about monorepos. Their design requires an array of confusing processes and complex software to make the process of merging, testing, and releasing code manageable at scale (and "scale" can even be 6 developers working on 2 separate features each across 10 services, in one repo).

False. It is having multiple repos that creates those problems and a huge graph of versions and dependencies.

What "processes" are you talking about?


> It is having multiple repos that creates those problems and a huge graph of versions and dependencies.

Bazel, the open source version of Google's internal build tool, is built specifically to handle "build dependencies in complex build graphs". With monorepos. If it didn't do that, you'd never know what to test, what to deploy, what service depends on what other thing, etc. Versions and dependencies are inherent to any collection of independently changing "things".
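
As a rough sketch of that point (hypothetical service names, plain Python rather than Bazel's own rule language): the information a build-graph tool works from is just "who depends on whom", and from that it derives what has to be rebuilt and retested after a change. That graph exists whether the components live in one repository or in fifty.

    # service -> services that directly depend on it (reverse dependency edges)
    REVERSE_DEPS = {
        "libauth":  ["api", "billing"],
        "api":      ["frontend"],
        "billing":  [],
        "frontend": [],
    }

    def affected(changed):
        """Return every component that must be rebuilt/retested after a change."""
        todo, seen = list(changed), set(changed)
        while todo:
            current = todo.pop()
            for dependent in REVERSE_DEPS.get(current, []):
                if dependent not in seen:
                    seen.add(dependent)
                    todo.append(dependent)
        return sorted(seen)

    print(affected({"libauth"}))   # ['api', 'billing', 'frontend', 'libauth']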

Even if you build every service you have every time you commit a single line of code to any service, and run every test for every service any time you change a single line of code, the end result of all those newly-built services is still a new version. A change to a line of code still belongs to a particular service, so thinking about "this change to this service" involves "other changes to other services", and you need to be able to refer to one change when you talk about a different change. But they are different changes, with different implications. You may need to go back to a previous "version" of a line of code for one service so that it doesn't negatively impact another "version" of a different line of code in a different service. Every line of code, compared to every other line of code, is a unique version, and you have to track them somehow. You can use commit hashes or you can use semantic versions; it doesn't matter.

So because versions and dependencies are inherent to any collection of code, regardless of whether it's monorepo or multirepo, I don't buy this "it's easier to handle versions/dependencies" claim. In practice it doesn't seem to matter at all.

> What "processes" are you talking about?

Developer A and developer B are working on changes A1 and B1. Both are in review. Change A1 is merged. Now B1 needs to merge A1: it becomes B1.1. Fixing conflicts, running tests, and fixing anything changed finally results in B1.2, which goes into review. Now A develops and merges A2, so B1.2 goes through it all over again to become B1.4.

You can do all of that manually, but it's time-consuming, and the more people and services involved, the more time it takes to manage it all. So you add automated processes to try to speed up as much of it as you can: automatically merging the mainline into any open PRs and running tests, and doing this potentially with a dozen different merged items at once. Hence tools like Bazel, Zuul, etc. So, those processes.


You are conflating language/build issues with VCS issues.

Everything you discuss also applies to multirepo, but worse, because there no one enforces consistency across the whole project, and you will end up with broken interdependencies.


> Plus, one have to draw a line somewhere on what to include (a Python interpreter? A Go version? awk and grep?), and third party vs in-house is a fairly robust one imo.

If your code/project/company uses the dependency in any way in production and it is not part of the base system (which should be reproducibly installed), you include it, either in source or binary form.

Why is the size a problem? Developers should only be checking out once. If your repo hits the many-GiB mark, then you can apply more complex solutions (LFS, sparse checkout, etc.) if it is a burden.


It's a problem if the first step of your build system is a fresh `git pull` :)

Not unsolvable of course, just necessitates an extra layer of complexity.


> IME, most consternation comes from people adopting a monorepo without adopting a build/dependency graph tool (like Bazel, Buck, or Pants).

That seems like a build problem, not a Git problem.

> An additional source of strain is from people abusing the repo (checking in large binaries, third party dependencies, etc).

That is not necessarily abuse. In fact, it is a good practice in many cases!

> A third is when people try to do branch-based feature development, instead of the “correct” practice of only deploying master (or weekly cuts of master).

I am not sure what you mean by branch-based development, but I don't see why that would be a specific problem of monorepos.


How are they straightforward? Like rebuilding a car's engine is straightforward? If you know how they're built, it's easy...


What? I don't understand what that means.

A monorepo is just 1 repo. There is nothing more straightforward than that.

