It has been a decade or so since I read The Singularity is Near. From what I remember, Kurzweil said that technologies will increase exponentially once they become linked to the exponential increase in computing power.
Now that Moore's Law has leveled off, I do wonder what will keep the exponential increase going. For example, LLMs are increasing exponentially in size each generation, but that also involves an exponential increase in cost, and will only be sustainable up to a certain point.
> but that also involves an exponential increase in cost
Not necessarily. For training, yes. But there surely will be custom hardware specialized for the inference of "1-bit" quantized models, which could be orders of magnitude more energy-effective than GPUs.
Hell, you can probably even bake trained models into silicon. Maybe instead of digital logic, use analog circuits. Photonics?
I believe there's so much room for improvement and I feel that GPUs are just the "training wheels" (lol, quite literally) — there will be exponentially more effective hardware for running LLMs.
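To illustrate the "1-bit" point with a toy sketch (not any real kernel, and real quantized inference also rescales activations): with weights constrained to {-1, +1}, the multiply in a dot product disappears entirely, which is exactly what makes dedicated inference hardware attractive.

```typescript
// Toy "1-bit" dot product: each weight is +1 or -1, so the multiply
// degenerates to an add or a subtract. (BitNet-style ternary variants
// also allow 0, which just skips the term.)
function dotSigned(weights: Int8Array, activations: Float32Array): number {
  let acc = 0;
  for (let i = 0; i < weights.length; i++) {
    acc += weights[i] > 0 ? activations[i] : -activations[i];
  }
  return acc;
}

const w = Int8Array.from([1, -1, 1, -1]);
const x = Float32Array.from([0.5, 0.25, 1.0, 0.75]);
console.log(dotSigned(w, x)); // 0.5
```

In hardware terms, that inner loop is just a tree of adders with sign flips, no multiplier arrays needed, which is where the energy savings would come from.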
Even ASICs will, IMO, only give us one step-function jump. There will be a huge jump and then it's mostly back to Moore's-law-ish growth. Maybe the gradient will be a bit steeper, but I doubt it.
If blockchain is any indication, ASICs give two jumps (2 orders of magnitude) over GPUs. But yeah, then further jumps will have to be via parallelism; can we put 100s or 1000s of these ASICs in a single computer?
In Kurzweil's version, Moore's law is not a metric of transistor capacity per dollar, but instead is an exponential growth of the amount of information that humans can leverage. He claims that this has been going for thousands of years and continues (quite vertically) now.
This ex-OpenAI engineer’s recent essay makes the case that LLMs are progressing at 5x the rate of Moore’s Law across a number of dimensions: compute, efficiency, and capability: https://situational-awareness.ai/from-gpt-4-to-agi/
It's surprising compute/$ has increased in a fairly smooth exponential way for more than a century, long before Moore. I guess if the financial incentives are there people find a way.
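As a sanity check on how fast a smooth exponential compounds (the 2-year doubling time here is illustrative, not a measured figure):

```typescript
// Compound growth for a quantity that doubles every `doublingYears`.
function growthFactor(years: number, doublingYears: number): number {
  return 2 ** (years / doublingYears);
}

// A century at a ~2-year doubling time is 50 doublings:
console.log(growthFactor(100, 2).toExponential(2)); // ~1.13e+15
```

Fifteen orders of magnitude in a century is why "people find a way when the incentives are there" is such a striking claim.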
The variant with the 8-core GPU is most likely also a binned version of the 10-core GPU variant. It's a lot easier to enable/disable cores than to produce different silicon for minor spec differences.
It may be that Apple's yield of 10-core GPUs is lower than expected, but those GPUs validate just fine with 9 cores enabled. Or maybe Apple had another marketing reason, such as further differentiating the iPad Pro.
They probably found that the 10th core was systematically defective, and are just disabling it now rather than leaving it a crapshoot as to which shipped products will crash when using it. Much like what they did with the batteries when those went wonky.
Or it's simply that much cheaper to yield a 9-core part than a 10-core one from their production. Gotta stretch that profit margin even more; they don't make enough off Apple users already.
If you make a marketing claim, the government can sue you for not selling what you say you're selling.
This doesn’t apply to puffery claims, though, which is why Apple could have said ‘the new M2, with the best number of cores ever!’ and been fine with whatever.
I have been curious about this for a while, particularly in relation to the increasing cost of training LLMs.
I was recently talking to a friend who works on the AI team for one of the large tech companies, and I asked him this directly. He said that each generation is ~10x the training cost for ~1.5X improvement in performance (and the rate of improvement is tapering off). The current generation is ~$1 billion, and the next generation will be about $10 billion.
The question is whether anyone can afford to spend $100 billion on the next generation after that. Maybe a couple of the tech giants can afford that, but you do rapidly get to unaffordability for anyone smaller than the government of a rich country.
It will likely be possible to continue optimizing models for a while after that, and there is always the possibility of new technology that creates a discontinuity. I think the big question is whether AI is "good enough" by the time we hit the asymptote, where good enough is somewhat defined by the use case, but roughly corresponds to whether AI can either replace humans or improve human efficiency by an order of magnitude.
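A quick back-of-the-envelope from the numbers above (~10x cost per generation for ~1.5x performance, both secondhand figures, so treat this as a sketch):

```typescript
// Project cost and relative performance per generation under the
// quoted scaling: ~10x training cost, ~1.5x capability each step.
interface Generation {
  cost: number;        // training cost in dollars
  performance: number; // performance relative to the current generation
}

function project(generations: number, startCost = 1e9): Generation[] {
  const out: Generation[] = [];
  let cost = startCost;
  let performance = 1;
  for (let i = 0; i < generations; i++) {
    out.push({ cost, performance });
    cost *= 10;
    performance *= 1.5;
  }
  return out;
}

// Two generations past a $1B model: $100B buys only ~2.25x performance.
const gens = project(3);
console.log(gens[2]); // cost: 1e11, performance: 2.25
```

That asymmetry (costs compounding much faster than capability) is the whole affordability argument in two lines of arithmetic.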
I fairly recently used Shadow DOM for a project. We built a widget that was embedded in a web page, and that page could be arbitrarily styled. The outside page was typically a Wordpress theme, which in most cases did all kind of nasty things with CSS.
Even though there is CSS encapsulation, styles can still leak (cascade?) from the parent element into the Shadow DOM. You therefore need to put a style tag on the root that resets all styles.
Even then, there are some things like CSS transformations that will still affect elements within a Shadow DOM.
Another weirdness is that React modals are usually going to break unless you pass them a reference to the Shadow Root. Most popular libraries have been modified to take such a reference.
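For reference, the reset approach looks roughly like this (the specific properties opted back in are illustrative, and the browser-only mounting code is shown as comments since it needs a DOM):

```typescript
// A <style> injected at the shadow root whose :host rule re-initializes
// inheritable properties, so host-page CSS can't cascade in.
const RESET_CSS = `
  :host {
    all: initial;            /* reset everything inherited from the page */
    display: block;          /* 'initial' for display is inline */
    font-family: sans-serif; /* then opt back in to what the widget needs */
  }
`;

// In the browser, mounting looks roughly like:
//   const root = widgetEl.attachShadow({ mode: "open" });
//   const style = document.createElement("style");
//   style.textContent = RESET_CSS;
//   root.appendChild(style);

console.log(RESET_CSS.includes("all: initial")); // true
```

Two caveats worth knowing: `all: initial` does not reset CSS custom properties, which still inherit through the shadow boundary, and as noted above, ancestor transforms still affect the widget because they operate on the box, not the cascade.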
Yes, but it doesn't leak CSS when you want it to. I've never figured out how to set up a CSS-only dark theme using a top-level class name, like Tailwind does, for instance.
A relative of mine works for a large tech company. Her manager has 2 open reqs on the team. However, he has been told that he can’t hire anyone in California or New York. This is not to say that there are no qualified candidates elsewhere, but there are a ton of qualified candidates in the Bay Area that he can’t hire. He will probably eventually fill the roles, but it will take a lot longer.
It could be due to pay. Some companies adjust pay based on where people live, adjusted for COL. CA and NY tend to be on the high end. Just one possible theory, there could be many other reasons.
That's the most likely explanation. Companies don't want to go through a whole hiring process just to have a candidate laugh at the offer. Whether or not companies have any formal "don't hire in CA or NY" policy, I see a lot of non-SV companies informally pulling back from CA: closing offices and the like.
Waiting for phased trials when you understand the research and hypothetical risks is torture. Especially when many drugs have proven safety profiles where the only harm is financial.
This particular drug was just a combination of two already-approved generic drugs. Anyone could have tried them with a doctor's Rx without a change in the law.
The FDA has been 'working' on authorizing sunscreens widely used in the rest of the developed world for the last 20 years, i.e. since the original Dubya administration. This is not a typo. They are glacially, bureaucratically slow and cautious.
I upgraded my 10 year old PC to Ryzen 5000 a couple of years back, and have never really looked back. Sure, it wasn't that long until 7000 supplanted it, but my PC is still just fine, and will likely be for the next 7 years or so (the biggest wild card is Windows support).
The only thing I really regret is getting a B450 Motherboard that only supports 1 NVMe SSD (and only PCIE Gen 3 SSDs). I would focus on making sure your motherboard has enough RAM and SSD expansion room, and then buy a big enough power supply for whatever GPU you want to run. You might want to figure out whether Zen 5 will have a new chipset, and if so, will the IO be significantly better. Nothing else is going to make a huge difference.
It greatly frustrates me that many AMD desktops have broken sleep/suspend modes.
I moved & started actually paying my power bill, and I really wouldn't mind my desktop's 100w draw if I could effectively suspend it and wake-on-lan it as needed (for either remote gaming, or to hop back into an existing tmux session). But if I suspend it, it goes down maybe ok, but never wakes up; I literally have to unplug it to get it back.
The various threads about this give me the sense that I am far from the only one here with these kinds of issues. I tried Windows; I tried turning on every wakeup I could find in the BIOS; I tried turning on every wakeup I could find in Linux. There are two different sleep modes; tried both. I wouldn't mind having lost like 8 hours of time to this issue, except I feel like I'm nowhere: no suspend, and no tools to see what is or isn't happening. And it seems very prevalent on AMD. Frustrating.
Microsoft and Intel have a very overt conspiracy to kill ACPI S3 suspend. They mostly succeeded, so suspend support on recent platforms is now no better than it was in 2005.
Not mentioned in the article (but present in the original source) is that non-plugin hybrids have 26% fewer problems than ICE cars - I wonder why this is.
Yes, a lot of the stats in that article correlate pretty closely to which Manufacturers are generally reliable (Toyota & Honda) and which are not (Chrysler).
You would need to break out the EVs separately from the rest and see how each model compares to really evaluate the state of EVs.
With so few models, it is easy for a few rotten apples to pull down the whole bunch.
Additionally, even if we could remove carbon from the system at scale, it's not like everything would magically reset to where it was. We are seeing changes (e.g. extinctions) that aren't really undoable.
With that said, I'm sure that some of the damage can potentially be reversed, and the planet has some capacity to self-heal over long periods of time.
Hopefully reducing emissions and even partial decarbonization will lead to a less bad outcome in the medium to long-term.
Back when I was in college, people would go dumpster diving for the really old analog oscilloscopes with the green tubes. They would hook them up to their stereos in spectrum analyzer mode.
We would use the ones like this to actually do work in lab.