Hacker News | dlevine's comments

It has been a decade or so since I read The Singularity is Near. From what I remember, Kurzweil said that technologies will increase exponentially once they become linked to the exponential increase in computing power.

Now that Moore's Law has leveled off, I do wonder what will keep the exponential increase going. For example, LLMs are increasing exponentially in size each generation, but that also involves an exponential increase in cost, and will only be sustainable up to a certain point.


> but that also involves an exponential increase in cost

Not necessarily. For training, yes. But there will surely be custom hardware specialized for inference on "1-bit" quantized models, which could be orders of magnitude more energy-efficient than GPUs.

Hell, you could probably even bake trained models into silicon. Maybe instead of digital logic, use analog circuits. Photonics?

I believe there's so much room for improvement and I feel that GPUs are just the "training wheels" (lol, quite literally) — there will be exponentially more effective hardware for running LLMs.


Even ASICs will IMO only give us one step-function jump: there will be a huge jump, and then it's mostly back to Moore's-law-ish scaling. Maybe the gradient will be a bit steeper, but I doubt it.

If blockchain is any indication, ASICs give two jumps (2 orders of magnitude) over GPUs. But yeah, then further jumps will have to be via parallelism; can we put 100s or 1000s of these ASICs in a single computer?

LLMs aren't even relevant in Kurzweil's Singularity vision, they are just a current trend. They may pan out, or something else will.

In Kurzweil's version, Moore's law is not a metric of transistor capacity per dollar, but instead an exponential growth in the amount of information that humans can leverage. He claims that this has been going on for thousands of years and continues (quite vertically) now.

I would argue that the curve will flatten out at some point; we cannot continue exponentially forever.

Unless we discover new physics we will be bound to cubic growth.

This ex-OpenAI engineer’s recent essay makes the case that LLMs are progressing at 5x the rate of Moore’s Law across a number of dimensions: compute, efficiency, and capability: https://situational-awareness.ai/from-gpt-4-to-agi/

I don't know, but in another HN thread today they discuss a way to make these things much more efficient: https://news.ycombinator.com/item?id=40794564

It's surprising that compute/$ has increased in a fairly smooth exponential way for more than a century, long before Moore. I guess if the financial incentives are there, people find a way.


The variant with the 8-core GPU is most likely also a binned version of the 10-core GPU variant. It's a lot easier to enable/disable cores than to produce different silicon for minor spec differences.

It may be that Apple's yield of 10-core GPUs is lower than expected, but those GPUs validate just fine with 9 cores enabled. Or maybe Apple had another marketing reason, such as further differentiating the iPad Pro.


Theoretically, a single defect will impact overall device yield by

  (fraction of device area used by the GPU) * defect density
So, if the GPU cores are 30% of the device area, with a 5% defect density, binning from 10 cores to 9 is going to increase total yield by ~1.6%.

Square that for binning from 10 to 8, so ~3% yield improvement there (~4.6% total).

If the TSMC 3nm process is up at a 10-20% defect density, this starts to get material.
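The arithmetic above can be sketched with a toy binomial yield model. To be clear, `salvage_gain`, the independence assumption, and all the numbers below are my own illustrative choices, not TSMC data:

```python
from math import comb

# Toy binning-yield model. Assumptions (mine, for illustration only):
# defects are independent and uniformly distributed over the die, and a
# die is salvageable if at most `spare` GPU cores are hit.

def salvage_gain(defect_rate, gpu_area_fraction, n_cores, spare):
    """Extra fraction of dies recovered by tolerating `spare` dead cores."""
    p_core = defect_rate * gpu_area_fraction / n_cores  # P(a given core is hit)

    def p_at_most(k):
        # Binomial: P(at most k of the n_cores cores are defective)
        return sum(comb(n_cores, i) * p_core**i * (1 - p_core)**(n_cores - i)
                   for i in range(k + 1))

    return p_at_most(spare) - p_at_most(0)

# 5% defect density, GPU cores at 30% of die area, 10 cores:
gain_9 = salvage_gain(0.05, 0.30, 10, spare=1)  # bin 10 -> 9: ~1.5%
gain_8 = salvage_gain(0.05, 0.30, 10, spare=2)  # bin 10 -> 8
```

Under these independence assumptions the first spare core recovers nearly all of the salvageable dies; clustered or correlated defects would make the second bin more valuable.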


They probably found that the 10th core was systematically defective, and are just disabling it now vs. a crap shoot over which shipped products will crash when using it. Much like what they'd do with the batteries when they went wonky.

Or it's simply that much cheaper to yield a 9-core part than a 10-core one from their production. Gotta stretch that profit margin even more; they don't make enough off Apple users already.


I'll bet it's a yield + don't-want-a-lawsuit combo.


why lawsuit?


Apple gets sued for things all the time. I think there's a whole class action system set up for it.

"iPhone owners get $92 payouts from Apple in phone-throttling settlement"

https://arstechnica.com/tech-policy/2024/01/iphone-owners-ge...

etc...

"Some iPhone users eligible for $349 in lawsuit settlement payout over audio issues"

"Apple Lawsuit: M1 MacBook Pro and Air Have Display Hardware Defects, Causes Screens to Crack"

"AppleCare Class Action: Apple Agrees to $95 Million Settlement for iPhone, iPad Users Given ‘Remanufactured’ Devices"

"Apple Agrees to $50 Million Settlement Over Butterfly Keyboard Complaints"


If you make a marketing claim, the government can sue you for not selling what you say you're selling.

This doesn’t apply to puffery claims, though, which is why Apple could have said "the new M2, with the best number of cores ever!" and been fine regardless.


I have been curious about this for a while, particularly in relation to the increasing cost of training LLMs.

I was recently talking to a friend who works on the AI team for one of the large tech companies, and I asked him this directly. He said that each generation is ~10x the training cost for ~1.5X improvement in performance (and the rate of improvement is tapering off). The current generation is ~$1 billion, and the next generation will be about $10 billion.

The question is whether anyone can afford to spend $100 billion on the next generation after that. Maybe a couple of the tech giants can afford that, but you do rapidly get to unaffordability for anyone smaller than the government of a rich country.
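The figures quoted above imply a simple geometric projection. This is just a sketch of the comment's rough numbers (~10x cost, ~1.5x performance per generation, $1B starting point); none of it is real lab data:

```python
# Geometric projection of the ~10x-cost / ~1.5x-performance-per-generation
# figures quoted above. Purely illustrative; starting values are the
# comment's rough estimates.

def project(generations, start_cost=1e9, start_perf=1.0):
    rows = []
    cost, perf = start_cost, start_perf
    for g in range(generations + 1):
        rows.append((g, cost, perf))
        cost *= 10   # ~10x training cost per generation
        perf *= 1.5  # ~1.5x performance per generation
    return rows

for g, cost, perf in project(3):
    print(f"gen +{g}: cost ~${cost:,.0f}, relative performance ~{perf:.2f}x")
```

The diminishing cost-effectiveness is visible immediately: each additional ~10x of spend buys only ~1.5x of measured improvement.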

It will likely be possible to continue optimizing models for a while after that, and there is always the possibility of new technology that creates a discontinuity. I think the big question is whether AI is "good enough" by the time we hit the asymptote, where good enough is somewhat defined by the use case, but roughly corresponds to whether AI can either replace humans or improve human efficiency by an order of magnitude.


This is the Facebook/Zuck/Yann view. I wonder what Anthropic and OpenAI see as the future.


I fairly recently used Shadow DOM for a project. We built a widget that was embedded in a web page, and that page could be arbitrarily styled. The outside page was typically a WordPress theme, which in most cases did all kinds of nasty things with CSS.

Even though there is CSS encapsulation, inherited styles (fonts, colors, custom properties) still cascade from the parent page into the Shadow DOM. You therefore need a style rule on the shadow root, e.g. :host { all: initial; }, that resets everything.

Even then, some things, like CSS transforms applied to ancestors, will still affect elements within a Shadow DOM.

Another weirdness is that React modals are usually going to break unless you pass them a reference to the Shadow Root. Most popular libraries have been modified to take such a reference.


Yes, but it doesn't leak CSS when you want it to. I've never figured out how to set up a CSS-only dark theme using a top-level classname, like Tailwind does, for instance.


CSS custom properties are what you’re looking for here.

They cascade into the Shadow DOM without any issues.


If you want classnames:

- Put the class on the host element: <my-component class="dark">

- Use :host(.dark) to style inside the component


A relative of mine works for a large tech company. Her manager has 2 open reqs on the team. However, he has been told that he can’t hire anyone in California or New York. This is not to say that there are no qualified candidates elsewhere, but there are a ton of qualified candidates in the Bay Area that he can’t hire. He will probably eventually fill the roles, but it will take a lot longer.


Why not CA or NY?


Non-American here: do those states happen to have better employee protections by any chance?


Yes.


Generally, yes.

CA and NY are also states that mandate pay-range disclosures in job adverts.


It could be due to pay. Some companies adjust pay based on where people live, adjusted for COL. CA and NY tend to be on the high end. Just one possible theory, there could be many other reasons.


That's the most likely explanation. Companies don't want to go through a whole hiring process just to have a candidate laugh at the offer. Whether or not companies have any sort of formal don't-hire-in-CA-or-NY policy, I see a lot of non-SV companies informally pulling back from CA, closing offices and the like.


These are the most expensive job markets and the company is trying to reduce labor costs.


Someone could be willing to work for a lower than average rate if it’s the right company though.


There was a good article on this in the New Yorker a while back: https://www.newyorker.com/magazine/2023/06/26/relyvrio-als-f...

It's sad that the drug didn't work (and I had a cousin recently die of ALS), but it's important to let the FDA do their work.


What work does the FDA do here? Proving safety and efficacy is the work of large investments and initiatives; the FDA just makes decisions.

If anything we live in a world where we need better provisions for these illnesses similar to https://www.congress.gov/bill/118th-congress/senate-bill/190...

Waiting for phased trials when you understand the research and the hypothetical risks is torture, especially when many drugs have proven safety profiles and the only harm is financial.


This particular drug was just a combination of two already-approved generic drugs. Anyone could have tried them with a doctor's Rx without a change in the law.


Ah, so a patent company that wanted to make a profit?

https://www.regulations.gov/docket/FDA-2023-E-2605

Interesting!


Infuriating that government site does not even display without JavaScript.


>but it's important to let the FDA do their work

The FDA has been 'working' on authorizing sunscreens widely used in the rest of the developed world for the last 20 years, i.e. since the original Dubya administration. This is not a typo. They are glacially, bureaucratically slow and cautious.


Isn’t the real problem there that no company will put up the funds to do the trials, since the drugs aren’t patentable at this point?

That said, I feel like there should absolutely be an almost automatic approval for stuff sold in multiple other countries for decades.


I upgraded my 10 year old PC to Ryzen 5000 a couple of years back, and have never really looked back. Sure, it wasn't that long until 7000 supplanted it, but my PC is still just fine, and will likely be for the next 7 years or so (the biggest wild card is Windows support).

The only thing I really regret is getting a B450 motherboard that supports only one NVMe SSD (and only PCIe Gen 3). I would focus on making sure your motherboard has enough RAM and SSD expansion room, and then buy a big enough power supply for whatever GPU you want to run. You might want to figure out whether Zen 5 will have a new chipset and, if so, whether the I/O will be significantly better. Nothing else is going to make a huge difference.


It greatly frustrates me that many AMD desktops have broken sleep/suspend modes.

I moved and started actually paying my power bill, and I really wouldn't mind my desktop's 100 W draw if I could effectively suspend it and wake-on-LAN it as needed (for either remote gaming or to hop back into an existing tmux session). But when I suspend it, it goes down maybe OK, but it never wakes up; I literally have to unplug it to get it back.

The various threads about this give me the sense that I am far from the only one with these kinds of issues. I tried Windows; I tried turning on every wakeup I could find in the BIOS; I tried turning on every wakeup I could find in Linux. There are two different sleep modes; I tried both. I wouldn't mind having lost some 8 hours to this issue, except I feel like I'm nowhere: no suspend, and no tools to see what is or isn't happening. It seems very prevalent on AMD. Frustrating.


Maybe you are having issues with the motherboard drivers/firmware. Most motherboard manufacturers are pretty terrible at this sort of stuff.


Microsoft and Intel have a very overt conspiracy to kill ACPI S3 suspend. They mostly succeeded, so suspend support on recent platforms is now no better than it was in 2005.


Not mentioned in the article (but present in the original source) is that non-plugin hybrids have 26% fewer problems than ICE cars - I wonder why this is.


Because most hybrids are Toyotas.


Yes, a lot of the stats in that article correlate pretty closely with which manufacturers are generally reliable (Toyota and Honda) and which are not (Chrysler).

You would need to break out the EVs separately from the rest and see how each model compares to really evaluate the state of EVs.

With so few models, it is easy for a few rotten apples to pull down the whole bunch.


Additionally, even if we could remove carbon from the system at scale, it's not like everything would magically reset to where it was. We are seeing changes (e.g. extinctions) that aren't really undoable.

With that said, I'm sure that some of the damage can potentially be reversed, and the planet has some capacity to self-heal over long periods of time.

Hopefully reducing emissions and even partial decarbonization will lead to a less bad outcome in the medium to long-term.


Back when I was in college, people would go dumpster diving for the really old analog oscilloscopes with the green tubes. They would hook them up to their stereos in spectrum analyzer mode.

We would use ones like this to actually do work in the lab.

