How DRAM changed the world (micron.com)
136 points by sandwichsphinx 3 months ago | 117 comments



The article doesn't properly explain how DRAM is different from SRAM. DRAM has to constantly refresh itself in order not to 'forget' its contents.


Indeed - the 'dynamic' comes from 'dynamic logic'. Wikipedia: "It is distinguished from the so-called static logic by exploiting temporary storage of information in stray and gate capacitances." What Dennard realised was that you don't actually need to have a separate capacitor to hold the bit value - the bit value is just held on the stray and gate capacitance of the transistor that switches on when that bit's row and column are selected, causing the stray capacitance to discharge through the output line.

Because of that, the act of reading the bit's value means that the data is destroyed. Therefore one of the jobs of the sense amplifier circuit - which converts the tiny voltage from the bit cell to the external voltage - is to recharge the bit.

But that stray capacitance is so small that it naturally discharges through the high, but not infinite, resistance of the transistor when it's 'off'. Hence you have to refresh DRAM, by reading every bit frequently enough that it hasn't discharged before you get to it. In practice you only need to read every row frequently enough, because there's actually a sense amplifier for each column, reading all the bit values in that row, with the column address strobe just selecting which column's bit gets output.


Yes, it totally misses the crucial and non-obvious trade-off which unlocked the benefits. The rest of the system has to take care of periodically rewriting every memory cell so that the charge doesn't dissipate.

In fact it took a bit of time for the CPUs or memory controllers to do it automatically, i.e. without the programmer having to explicitly code the refresh.


It isn't the point of the article, but this is true of every storage medium. It's just a question of milliseconds or years.


Static RAM (as it is normally used) never needs to be refreshed over typical power-on times (hours or days). Current DRAM must be refreshed far more often to be useful.


As in multiple times a second.


Why would we use DRAM, then? It seems better not to have to refresh it all the time.

(I think I more or less know, but I’d rather talk about it than look it up this morning.)


Because SRAM is essentially a flip-flop. It takes at least four transistors to store a single bit in SRAM; most designs use six. And current must continuously flow to keep the transistors in their state, so it's rather power hungry.

One bit of DRAM is just one transistor and one capacitor. Massive density improvements; all the complexity is in the row/column circuitry at the edges of the array. And it only burns power during accesses or refreshes. If you don't need to refresh very often, you can get the power very low. If the array isn't being accessed, the refresh time can be double-digit milliseconds, perhaps triple-digit.

Which of course leads to problems like rowhammer, where rows affected by adjacent accesses don't get additional refreshes like they should (because this has a performance cost -- any cycle spent refreshing is a cycle not spent accessing), and you end up with the RAM reading out different bits than were put in. Which is the most fundamental defect conceivable for a storage device, but the industry is too addicted to performance to tap the brakes and address correctness. Every DDR3/DDR4 chip ever manufactured is defective by design.


A nitpick: if the chip is manufactured in CMOS technology (as it's typically done), then no, current does not have to flow to keep the transistors' state (it's sufficient that a potential difference is maintained), only to change it. There's a tiny leakage current however, which over a few billion transistors adds up.


That makes a lot of sense. Thanks!


The key point is that the refreshes do not need to happen very often. Something like once per 20 ms for each row was doable even by an explicit loop that the CPU had to periodically execute.

And this task soon moved to memory controllers, or at least got done by CPUs automatically without need for explicit coding.


I have always had some questions about these low level details.

Back when it needed to be explicit code, what exactly was the code doing? I tried to find some example of what it might look like online but search is so muddy.


Reading once from every page.

DRAM has destructive reads and is arranged in pages. When you read from a page, the entire contents of the page are read into an SRAM buffer inside the memory chip, the bit(s) selected are written out to the pins, and then the entire contents of the SRAM buffer is written back into DRAM.

For old DRAM, usually half the bits in an address selected the page, and the other half selected the word within the page (actually, often a single bit, extended to a full word by accessing multiple chips in parallel). Wire your address lines so that the page address is in the low-order bits, and any linear read of length 2^(log2(DRAM chip size)/2) (i.e. the square root of the chip size) is sufficient to refresh all of RAM. Many early computers made use of this to do the refresh as a side effect; as an example, IIRC the Apple II was set up so that the chip updating the screen would also refresh the RAM.
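For anyone wondering what the explicit refresh code might have looked like, here's a minimal sketch of the idea in C (the real thing was usually a handful of assembly instructions run from a timer interrupt). The base address, row count and stride are made-up illustrative values, not from any real machine; with the wiring trick above (page/row address on the low-order lines) the stride would simply be 1:

    #include <stdint.h>

    /* Hypothetical memory-mapped DRAM region; the address and geometry
       are made up for illustration. Real values came from the board
       design and the DRAM datasheet. */
    #define DRAM_BASE   ((volatile uint8_t *)0x100000)
    #define NUM_ROWS    128   /* e.g. a 16Kx1 part with 128 rows */
    #define ROW_STRIDE  128   /* row address on the high-order lines;
                                 use 1 if it's wired to the low-order lines */

    /* Call periodically, e.g. from a timer interrupt every couple of ms.
       Reading one address in each row pulls the whole row into the sense
       amplifiers, which then write it back; that write-back is the refresh.
       The value read is simply thrown away. */
    void refresh_dram(void)
    {
        volatile uint8_t sink;
        for (unsigned row = 0; row < NUM_ROWS; row++) {
            sink = DRAM_BASE[(unsigned long)row * ROW_STRIDE];
        }
        (void)sink;
    }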


The inventor of DRAM, Robert Heath Dennard, just died a few months ago and I was reading his obit and his history.

I think the long and short of it is that DRAM is cheap. DRAM needs one transistor per data bit. Competing technologies needed far more. SRAM needed six transistors per bit for example.

Dennard figured out how to vastly cut down complexity, thus costs.


Dennard scaling for SRAM has certainly halted, as demonstrated by TSMC's 3 nm process vs 5 nm.

What’s the likely ETA for DRAM?


Years ago.

DRAM uses a capacitor. With our traditional materials, those capacitors essentially hit a hard limit at around 400MHz a very long time ago. This means that if you need to sequentially read random locations from RAM, you can't do it faster than 400MHz. Our only answer here is better AI prefetchers and less-random memory access patterns in our software (the penalty for not prefetching is so great that theoretically less efficient algorithms can suddenly become more efficient if they are simply more predictable).

As to capacitor sizes, we've been at the volume limit for quite a while. When the capacitor is discharged onto the bit line, we must amplify that tiny charge. That gets harder as the charge gets weaker, and there's a fundamental limit to how small you can go. Right now, each capacitor holds its charge with a mere 40,000 or so electrons. Going lower dramatically increases the complexity of telling the signal from the noise and of dealing with ever-increasing quantum effects.
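For a rough sense of where a number like 40,000 electrons comes from: the stored charge is just capacitance times voltage, divided by the charge of one electron. The cell capacitance and voltage below are assumed round figures for illustration, not datasheet values:

    #include <stdio.h>

    int main(void)
    {
        /* Assumed round numbers, not from any specific part */
        const double cell_cap  = 6.5e-15;   /* ~6.5 fF cell capacitance */
        const double cell_volt = 1.0;       /* ~1 V stored level        */
        const double q_e       = 1.602e-19; /* electron charge, C       */

        double charge    = cell_cap * cell_volt;  /* Q = C * V */
        double electrons = charge / q_e;

        printf("stored charge %.2e C, about %.0f electrons\n",
               charge, electrons);   /* ~6.5e-15 C, roughly 40,000 */
        return 0;
    }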

Getting more capacitors closer means a smaller diameter, but keeping the same volume means making the cylinder longer. You quickly reach a point where even dramatic increases in height (something very complicated to do in silicon) give only minuscule decreases in diameter.


What does “faster than 400MHz” mean in this context? Does that mean you can’t ask for a unit of memory from it more than 400M times a second? If so, what’s the basic unit there, a bit? A word?

I built a little CPU in undergrad but never got around to building RAM and admit it’s still kind of a black box to me.

Bonus question: When I had an Amiga, we’d buy 50 or 60ns RAM. Any idea what that number meant, or what today’s equivalent would be?


The capacitors take time to charge and discharge. You can't do that faster than around 400MHz with current materials. You are correct that it means you can't access the same bit of memory more than ~400M times/sec. This is the same whether you are accessing 1 bit or 1M bits, because the individual capacitors that make up those bits can't be charged/discharged any faster.

When we moved from SDR to DDR1, latencies dropped from 20-25ns to about 15ns too, but if you run the math, we've been at 13-17ns of latency ever since.


Yep, we pretty much hit a wall during the DDR2 reign https://en.wikipedia.org/wiki/CAS_latency#Memory_timing_exam...


No, 7.5 to 9.75ns during the DDR4 reign (DDR4-4266), according to that page.


DDR2-1066 CL4 is also 7.5ns to first data.


But DDR2-1066 is worse in second and later accesses.


Subsequent accesses are measuring bus speed, not actual DRAM cell latency. At that point the data is already loaded into a row of sense amplifiers.


If that's the case, why haven't we switched to SRAM? Isn't it only about 4x the price at any given process node?


That 4x price also explains why it has not happened.


If it were even 20% faster than DRAM, there would be a market for it at the higher price. The post I replied to was asserting that there was a physical limit of 400MHz for DRAM entirely due to the capacitor. If SRAM could run with lower latency, memory-bound workloads would get comparably faster.


Yeah but why don't we have like 500MB of SRAM per 8GB of RAM. Certain addresses could be faster, in the same way /dev/shm/ is faster.


This is sort of the role that L3 cache plays already. Your proposal would be effectively an upgradable L4 cache. No idea if the economics on that are worth it vs bigger DRAM so you have less pressure on the nvme disk.


Cache is not general purpose as far as I know. I want to be able to do whatever I want with it.


Coreboot and some other low-level stuff uses cache-as-RAM during early steps of the boot process.

There was briefly a product called vCage loading a whole secure hypervisor into cache-as-RAM, with a goal of being secure against DRAM-remanence ("cold-boot") attacks where the DIMMs are fast-chilled to slow charge leakage and removed from the target system to dump their contents. Since the whole secure perimeter was on-die in the CPU, it could use memory encryption to treat the DRAM as untrusted.

So, yeah, you can do it. It's funky.


Where’s the market demand for that?


Where is the market demand for faster RAM? This isn't a good question.


Gamers, 3D modelling professionals, IBM, HP, ORACLE, many IT departments, etc… ?


Yeah, you’re basically betting that people will put a lot of effort into trying to out-optimize the hardware and perhaps to some degree the OS. Not a good bet.

When SMP first came out we had one large customer that wanted to manually handle scheduling themselves. That didn’t last long.


Effort? It's not like it's hard to map an SRAM chip to whatever address you want and expose it raw or as a block device. That's a 100 LOC kernel module.


AMD offers CPUs with over 768MiB of cache if you're willing and able to afford them.


There used to be a DRAM with build-in SRAM cache called EDRAM (Enhanced DRAM, not to be confused with eDRAM Embedded DRAM).

• 2Kbit SRAM Cache Memory for 15ns Random Reads Within a Page

• Fast 4Mbit DRAM Array for 35ns Access to Any New Page

• Write Posting Register for 15ns Random Writes and Burst Writes Within a Page (Hit or Miss)

• 256-byte Wide DRAM to SRAM Bus for 7.3 Gigabytes/Sec Cache Fill

• On-chip Cache Hit/Miss Comparators Maintain Cache Coherency on Writes

Afaik only ever manufactured by a single vendor Ramtron https://bitsavers.computerhistory.org/components/ramtron/_da... and only ever used in two products:

- Mylex DAC960 RAID controller

- Octek HIPPO DCA II 486-33 PC motherboard https://theretroweb.com/motherboards/s/octek-hippo-dca-ii-48...


5nm can hold roughly a gigabyte of SRAM on a CPU-sized die; that's around $130/GB, I believe. At some point 5nm will be cheap enough that we can start considering replacing DRAM with SRAM directly on the chip (aka L4 cache). I wonder how big of a latency and bandwidth bonus that'd be. You could even go for a larger node size without losing much capacity, for half the price.


SRAM also requires more power than DRAM. The simple, regular structure of SRAM arrays (compared to other logic) makes it possible to get good yield rates through redundancy and error correction codes, so you could have giant monolithic dies. But information can't exceed the speed of light in a medium: there just isn't enough time for signals to propagate across a big die containing gigabytes of SRAM and still get the latency you expect of an L3 cache. Also, moving all that data around to perform computations without caching would be terribly wasteful, given how much energy is needed just to move the data. Instead you would probably end up with something closer to the compute-in-memory concept, mapping computation onto ALUs close to the data, with an at least two-tier network (on-die, inter-die) to support reductions.


Oh yeah, this would definitely be something like an L4 cache rather than L3 like AMD's X3D CPUs. The expectation is that it would serve as an alternative to DRAM (or as a supplement), kind of like what Xeon Phi did.


5nm will never be that cheap. The performance benefit would be easily 2x or more though.


Even 15 years from now?


Now? Prices have been flat for 15 years and DRAM has been stuck on 10 nm for a while.


That's overstating the flatness of prices. In 2009, the best price recorded here was 10 dollars per gigabyte:

https://jcmit.net/memoryprice.htm

Recently DDR4 RAM is available at well under $2/GB, some closer to $1/GB.


$1/GB? That's around the price SSDs took over from HDDs...


> Dennard scaling for SRAM has certainly halted, as demonstrated by TSMC’s 3nm process vs 5 nm.

I don't think the latter (SRAM capacity remaining the same per area?) has anything to do with Dennard scaling.


Not soon, as DRAM is mostly made on older nodes. But the overall cost reduction of DRAM is moving very, very slowly.


I have a recollection of a design where microprocessor reads were used to refresh DRAM contents. Late 1970s. I thought it was in an early Motorola 6800 book. Can't find it now, nor any mention of the technique. Would slow down program operation for sure. Maybe my recollection is wrong, not sure.


> updated June 2024

> Update: Today, marking the 56th anniversary...1966

Please forgive my pedantry, but 58th. It was a busy year.


I miss RAM. I feel like if you lived through that 90s RAM frenzy, you probably miss RAM too. It was crazy how quickly we moved through SDRAM/DDR; prices dropped and you could make real increases in performance year over year for not much money. I'm sure some of it was the software being able to capture the hw improvements, but that certainly was my fav period in tech so far.


I am confused by this comment. You said "RAM" (contrast to "DRAM" in the article title) but I think you are talking about DRAM sticks? But those have not gone away (other than with some laptops where it's soldered on and not upgradable).

Going from 8MB to 32MB in the 90s is still comparable to going from 8GB to 32GB today.

One difference is just that the price isn't dropping at the same rate anymore [1], so it doesn't make as much sense to buy small and re-buy next year when bigger chips are cheaper (they won't be much cheaper).

Another is that DRAM speed is at the top of an S-curve [2], so there's not that same increase in speed year-over-year, though arguably the early 2000's were when speeds most dramatically increased.

[1] https://aiimpacts.org/trends-in-dram-price-per-gigabyte/

[2] http://blog.logicalincrements.com/2016/03/ultimate-guide-com...


> Going from 8MB to 32MB in the 90s is still comparable to going from 8GB to 32GB today.

This statement makes it difficult to believe you were there.


Nowadays Electron apps will just balloon and take up the slack. Back in the day, it meant you could run 4x more stuff or handle bigger files/models with less thrashing of your very slow HDD.


Of course, the really significant step back then was being able to go from 640 kB (+/-) to more than 1 MB (https://en.wikipedia.org/wiki/Conventional_memory).


Yup.

We maxed our Tandy (Radio Shack) 286 boxen, juiced to a blistering 8MHz, with 2.5MB RAM. I got blisters from stuffing RAM into daughter boards.

https://en.wikipedia.org/wiki/Expanded_memory#Expansion_boar...

Twas early years of PC-based image scanning and processing. We only had 1 image capture board. The boot's POST duration for 2.5MB was ridiculous. So after a capture, my coworker (hi Tim Hall!) would ever so quickly yank the board out of a running box (now doing image processing) to be used in another box.

Multitasking!

As I remember it, the jump to 386 (much delayed by Micron's protectionist racket) was the next biggest step up.


How so?


8GB -> 32GB doesn't really give you a whole lot more than opening more browser tabs or whatnot. Cache some more filesystem into memory I suppose.

8MB -> 32MB enabled entirely new categories of (consumer-level) software. It would be a game changer for performance as you were no longer swapping to (exceedingly slow) disk.

They simply are not comparable, imo. 8MB to 32MB was night and day difference and you would drool over the idea of someday being able to afford such a luxury. 8GB to 32GB was at least until very recently a nice to have for power users.


Going from 4MB to 16MB made Autocad and Orcad Capture under Win 3.1 no longer slow.

I remember a few years before that you'd zoom in on a drawing and do as much as you could without zooming back out because it would take a full minute to redraw the screen. And then another minute to zoom back in somewhere else.


Depends on your definition of "there". Going from 8MB to 32MB did very little for you pre-Windows 95. Every piece of x86 software up to that point was maxing out below 8MB, because RAM was insanely expensive and you needed to populate 4 modules at a time. Motherboards came standard with 4 or 8 RAM slots, limiting your RAM choices to 1, 2, 4, 5, 8, 16, 17, 20 or 32 MB, with the last four costing more than the motherboard and CPU combined.

In 1992 the standard desktop was still a 386 + 4MB, with high-end being a 486 + 8MB. A 1MB SIMM was $30-50. 4MB was $150 in January 1992, dropping to $100 in December 1992, and back up to $130 in December 1994.

Afaik 72-pin SIMMs were first introduced in the 1989 IBM PS/2 (55SX? a proprietary variant) and later, around 1993, in clones. You could run 1, 2, 3 or 4 SIMMs of any size independently. In December 1994 a 2MB 72-pin SIMM was $80, 4MB $150, 8MB $300, 16MB $560, 32MB $1200, 64MB $2800.

A 486DX2-66 itself was ~$300, plus a $100 VLB motherboard, while $1100 got you a Pentium 90MHz with a PCI motherboard. In December 1994, for the price of a 486 with 32MB RAM ($1600) you could have bought a P90 with 16MB ($1660).


Thank you for this. It all sounds correct to me and brought back so many things I’d forgotten – but now I’m really curious how you remember these dates and numbers. My memory for details like these is pretty strong, but this level of recall is remarkable. I’m genuinely impressed.


It's a cut & paste of research I did for a Vogons (retro computing forum) post some time ago, in my "I need to know everything about SIMM RAM" phase :-)

Personally I started with PCs around 1995, and I remember this vividly due to a major mistake* I made :) I had a school friend working weekends at the local computer flea market scrounge for me the cheapest second-hand components possible to build a barebones 386 system (the minimum to play Doom*/Privateer) piece by piece in $50 increments. A tiny motherboard with a soldered Am386DX40 + case came first. The next month's $50 paid for 4MB, an ISA VGA card, a keyboard and an FDD. This got me off the ground once I cobbled together a VGA-to-SCART cable with a special driver to use a TV instead of an expensive VGA monitor. The third installment added another 4MB and a 40MB HDD, and the final $50 concluded the build with a sound card, CD-ROM and a mouse :)

* I will never forget my friend trying to convince me to throw in an extra $10 for a 486SX25 on a VLB board, and me casually saying 'bah bro that's tasty pizza money, thanks but no thanks' :| This fatal f-up, which meant 5-9fps in Doom instead of what one would call fluid at the time (10-15fps on a 486SX25+VLB), motivated me to get into PC building and started my career in IT. 3 years later I was working in a service center for a regional PC components distributor.


Disk speed is definitely a key factor. Solid state is dramatically lower latency than HDDs were, while DRAM latency in absolute terms has been pretty constant for a long time. So even if you're doing something very demanding that works with large amounts of data relative to standard quantities of RAM (a situation that gets harder to come by,) having it all in RAM at the same time is much less important than back in the 90s/00s.


Dunno. Early nineties I had to shut down the X11 server to compile Linux on my 386 with 8MiB RAM, else the machine would thrash. Mid-nineties on a 486 with 16MiB Linux compiled happily in the background while the host remained usable, including GUI (might also have been due to then still rapid improvements in Linux itself).

Now my 8GiB Apple mini feels overworked and swaps a lot (quite noticeably so, due to the spinning rust drive), while my laptop with 32GiB never breaks a sweat.

The real jump was when finally leaving behind my 8bit home computer (upgraded to 64+256KiB RAM of which most were usable only as RAM disk) when I got the 386 with 4MiB (soon maxed out at 8MiB). Now, that was a game changer.


Yeah, the equivalent nowadays would be like going from 256mb to 16gb.

Even most AAA games will still run on 8gb ram just fine.


Resolutions for video might be a good comparison. SD to HD (or 720p) is a very nice jump. HD to 4k, some benefit. 4k to 8k... well, I suppose there are things.

The starting point really does matter.

Resolutions are actually a good example of the scale of things at the (today's) low end: 1920x1080 pixels at 24 bits per pixel = 6,220,800 bytes. So about 6 megs just to store a 16-million-colour screen state.
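The same arithmetic for the other resolutions in the comparison, as a quick sketch (24-bit colour, one uncompressed frame; this ignores chroma subsampling and HDR bit depths, which real video uses):

    #include <stdio.h>

    int main(void)
    {
        /* common resolutions: SD, 720p, 1080p, 4K UHD, 8K UHD */
        const struct { const char *name; long w, h; } res[] = {
            { "SD (576p)", 720, 576 }, { "720p", 1280, 720 },
            { "1080p", 1920, 1080 },   { "4K",   3840, 2160 },
            { "8K",    7680, 4320 },
        };

        for (int i = 0; i < 5; i++) {
            long bytes = res[i].w * res[i].h * 3;  /* 3 bytes = 24 bits/pixel */
            printf("%-10s %4ldx%-4ld %10ld bytes (%.1f MB)\n",
                   res[i].name, res[i].w, res[i].h, bytes, bytes / 1e6);
        }
        return 0;  /* 1080p comes out to the 6,220,800 bytes above */
    }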


4K to 8K (and for that matter 30fps to 60fps+) is REALLY noticeable in VR video.

Normal video, not so much.


Does anyone even consider VR below 90 FPS? Sounds like a miserable experience.

But yes, for resolution the angular pixel size is what matters and since VR covers more of your field of view you also need more pixels for the same results.


Going from an inflatable pool in your backyard to a community pool is world changing.

Going from the Pacific Ocean to the Seven Seas is still lots of water.


Great analogy as both the ocean and RAM today end up being used as the garbage dump of inconsiderate assholes.


Lots of water that you don't really need, so hardly makes a difference...


It's simple, and for context I made both upgrades:

8GB -> 32GB: yeah, it feels a bit snappier, I can finally load multiple VMs and a ton of containers. Which rarely happens.

8MB -> 32MB: Wow! I can finally play Age of Empires 2, and blue screen crashes dropped by half!

The feeling isn't the same even remotely...


Most RAM found in consumer PCs during the 90s was still DRAM, including SDRAM, EDO, and Rambus. I believe OP is just being nostalgic for the period when RAM upgrades were a very exciting thing, as hardware was changing very quickly in that era and each year felt considerably more capable than the prior.


I'm not sure it's easy to understand what a big change there has been in the perceived pace of computer technology development if you weren't there. I'm typing this on a laptop that I purchased 11 years ago, in 2013. It's still my one and only home computer, and it hasn't given me any trouble.

In 1994, though, an 11 year old computer would already be considered vintage. In 1983 the hot new computer was the Commodore 64. In 1994 everyone was upgrading their computers with CD-ROM drives so they could play Myst.


Hilariously enough, you could still purchase a brand new Commodore 64 in 1994... albeit right before Commodore went bankrupt in May of that year. I vaguely remember some local electronics store in Pittsburgh having Commodore 64s for sale on the store shelves for really low prices back in the day. Admittedly, this was an unusual sight to behold in the US, because we had well since moved on to IBM PC Compatibles by then. In Europe, C64s were a tad bit easier to source.

It was definitely more of a curiosity and a toy rather than a serious computer in 1994.


In fact, I'd venture as far as to say that all memory in personal and/or "home" computers, starting with the earliest generation (Apple I & Co), has always been some variation of DRAM - with the possible exception of the external cache memory chips used by some CPUs (e.g. 486, Pentium IIRC).


There were SRAM-based home computers, they just weren't very competitive when DRAM-based ones could offer 4x the RAM for the same price. VIC-20 did well for its day, though.


My first "home" computer (a home-built COSMAC Elf) had 256 bytes of static RAM (in two DIPs). No DRAM at all. I don't think Radio Shack even sold DRAM.


Getting a new stick of RAM was so damn exciting in the 90s.


And putting the old ones in a SIMM-stack to still use them on a new motherboard, because no one would be so crazy as to throw away good DRAM.


Sad indeed. All that was taken away once it became possible to download more ram[0].

0. https://downloadmoreram.com/


I started late, but I remember when I upgraded my system with an additional 64MB stick, I was able to reduce the GTA 3 load time between one island and another from 20 seconds to 1.

And at that time I also learned how critical it was to check your RAM for errors. I reinstalled Win98 and Windows 2000 so many times before I figured this out.


Nah the biggest jump in performance by far was SSDs. It was a huge step so software had no chance to "catch up" initially.


It's happening, Windows gets slower every year to adapt to SSDs.


tinfoil hat on.

I think Microsoft is making windows slow to prepare people for when they move everything on your computer to their cloud.

tinfoil hat off.


That's Wirth's law moved to a conspiracy theory level.


RAM speeds are still improving pretty fast. I'm running DDR5-6000, and DDR5-8300 is available. GDDR7 uses PAM3 to get 40Gbps.


How does that contrast with the increased CAS latency in real-world terms? (Actually asking, not being combative, I don't know.)


Last time I checked, DDR5 and DDR4 latency was basically the same. Very little progress there. Maybe with DRAM and the CPU integrated on the same package, some latency wins will be available just because the wires will be shorter, but nothing groundbreaking.


My DDR4 was C16, but my DDR5 at C30 makes up for that with sheer speed.

Currently sporting this - G.Skill Trident Z5 Neo RGB Black 64GB (2x32GB) PC5-48000 (6000MHz) with a 7800x3d.

Previous kit was G.Skill Trident Z Neo RGB 32GB (2x16GB), PC4-28800 (3600MHz) DDR4, 16-16-16-36 [X2, eventually, for 64 total] with, you guessed it, the 5800x3d, from the 3900xt - my son loves it.


To reiterate the GPs point, in case anyone didn't get it: DDR4-3200 CL16 is equivalent to DDR5-6000 CL30 or DDR5-6400 CL32 in terms of latency. Divide the frequency by the CAS latency and you get the same number for all of those. It was the same situation going from DDR3 to 4. There's some wiggle room if you run above-spec voltages (and depending on the quality of the chips, etc.) but things have stayed roughly where they are latency-wise, gen-to-gen.
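Spelled out as actual first-word latency in nanoseconds (a quick sketch: CL cycles divided by the I/O clock, which runs at half the transfer rate; tRCD and other timings ignored):

    #include <stdio.h>

    /* First-word CAS latency in ns: CL cycles at the I/O clock,
       which is half the transfer rate (that's the "double data rate"). */
    static double cas_ns(double transfer_mts, double cl)
    {
        return cl / (transfer_mts / 2.0) * 1000.0;  /* cycles/MHz -> ns */
    }

    int main(void)
    {
        printf("DDR4-3200 CL16: %.1f ns\n", cas_ns(3200, 16)); /* 10.0 */
        printf("DDR5-6000 CL30: %.1f ns\n", cas_ns(6000, 30)); /* 10.0 */
        printf("DDR5-6400 CL32: %.1f ns\n", cas_ns(6400, 32)); /* 10.0 */
        return 0;
    }

The same formula gives the 7.5 ns figures quoted above for DDR2-1066 CL4 and DDR4-4266 CL16, which is the point: the generations differ in bandwidth far more than in latency.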


True! I realise that I left it unsaid in the numbers. Granted, hertz for hertz my DDR4 3600-CL16 had even better latencies than my DDR5 (4.44ns vs 5.00ns) - but for overall performance the speed then tends to make up for it (assuming a varied workload).

I've actively shopped for low latency RAM - within reasons, but have paid good premiums especially in DDR4 days. For DDR5, there can be surprisingly little price wise to differentiate e.g. CL30 or CL32, so whilst it may not offer the greatest of differences, if you're already paying e.g. $350 (AUD) for a kit at CL32 the improved latency might just be $20 more at the same speed.

(I see that things have moved on a bit from last September when I did my last upgrade; now we have CL32 at higher speeds, so maybe that's the go to now.)


There's another factor, which is the number of independent channels; DDR5 doubles the number of channels per memory stick. This gets more important as CPUs get more and more cores.


Yes, latency has stayed fairly constant going back at least as far as DDR3. It was nudged downward a little from DDR2 to 3, with higher clocks bringing finer grained timings, but I would say any improvement in later generations is pretty negligible in nanosecond terms.


can relate

though I guess the 90s are _the_ best tech era by far and for some time to come, because that's when capable and modular computing machines became a real commodity.


"8K video recording" - does anyone really need this? Seems like for negligible gain in quality people are pushed to sacrifice their storage & battery, and so upgrade their hardware sooner...


Yes, they record at higher resolutions and then the director and the camera operator have greater flexibility later when they realize they need a different framing, or can just fix the cameraman's errors by cutting parts of the picture out. They need the extra pixels/captured area to be able to do this.


I think the studios and anyone doing video production probably would use a 8k toolchain if possible. As others have pointed out, this lets you crop and modify video while still being able to output 4k without having to upscale.


Well for starters 8k video lets you zoom in and crop and still get 4k in the end.


I think 4k is also too much in the vast majority of cases..


It's insufficient for large theaters and barely sufficient for medium theaters. 1080p is plenty for most home theaters (though with the so-so quality of streaming, I wonder to what degree macroblocks are the new pixels)


1080p looks different to 4k. Just because it's 'plenty' doesn't mean 4k doesn't matter.

And it's 2024; I've owned a 4k LG OLED display for years now. Why not leverage it? Just because 1080p is 'plenty'?


The problem with 4k, which 99% of people only experience as a stream, is that it's over-compressed. A 1080p BluRay is 36 Mbit.

Netflix 4k is 15 Mbit.

So unless I see people mentioning the media, I am always wary of the comparison.


I have watched plenty of movies from a Blu-ray.

Nonetheless, I do think that compression from a high-res source looks different / sharper.


Why stream when you can play files from a local media server? Cheaper and I'm sure the Mbit rate is better.


Because you have to maintain that local media server and go through the process of adding in the media you want to see and removing media you are done with if the disk is full.

As opposed to Netflix where you press a button and you're watching something.

People out there aren't much like the people who post here - when they get home from their crappy job they hate, finally make dinner because they can't afford food delivery, kick off their shoes on the old couch they bought at a yard sale and turn on their 15 year old TV they want it to just work. They don't have the interest or energy to fuck around with things that many here find fun and interesting and exciting.

This post is giving me major "why would you need Dropbox when you can rsync?" vibes: https://news.ycombinator.com/item?id=18255896


Because the majority of people don't do that.


With approximately 90% of the HT setups[1] I have seen, I can downscale a 4k video to 1080p and have the owner of the HT setup not be able to successfully ABX the difference.

1: I'm using the term "HT setup" rather broadly, as the primary location in a residence for watching movies as a group; it includes e.g. people who don't own a TV and watch movies on their laptop sitting on a coffee table. Setups where the display covers over 40% of the FOV (where 4k definitely makes a difference) are somewhere in the top 5%.


Then that's even more crop & zoom headroom for 1080.


You are thinking from a consumer point of view, consumer as in Jane taking videos of her cats, for which 8K, even 4K, would be overkill. You can set your recording device to record in 720p or 1080p and so on to suit the purpose.

For commercial purposes it's another story and it makes sense to consider shooting in 8K if possible, thus the option should exist.


Yes why not?

Different use cases exist:

Record 8k text and you can zoom in and read things. Record 8k and crop without quality loss, or 'zoom' in.

Does everyone need this? Probably not, but we are on HN, not at a coffee party.


I need more than 8K. I'm working at microscopic levels when I study minerals, I need as much resolution as I can possibly get, to the limit of optical diffraction.


Are you actually recording movies of them though?

Honest question. I hope I learn something about studying minerals!


Yes. I do record videos for research purposes. Watching how a crystal reacts to various forms of radiation (primarily various bands of UV) gives me an idea of what impurities it might contain, or clues to composition from known emission spectra. That's recorded simultaneously through OBS with my voice doing a voiceover as I perform irradiation of the sample. Mineral goes into a modified integrating sphere, sphere gets sealed, sample is illuminated, study begins.


Huh! That sounds fascinating. I'd love to see something like that.


Sometimes I do live broadcasts for a group of hobbyists and degree-holding geologists. The bandwidth to do compressed 16K already exists for me, just not the tooling at the moment.

Trying to show something that is literally one pixel at 400x magnification at 1080p is no fun. Even a few more pixels helps.


8K is important for VR video; otherwise, not so much. There's a really noticeable step up from 4K in that area.

On a large TV though, it's probably an improvement over 4K for sports where you need to track a small item moving fast.


Yes, it makes post-production SO MUCH EASIER



