Hacker News | stu2b50's comments

Eh, I don’t think it’s the same thing. The gulf between a “Photos user” and a “Pixelmator user” is quite wide, much more so than between “weather app” and “weather app but better”.

In particular, if you gave the average user Pixelmator, they’d be worse off. The same isn’t really true of Weather and Dark Sky - they really do just about the same thing.

We still have iMovie and Final Cut, GarageBand and Logic. Apple has kept two tiers of the same product line before.


Also remember that some of those have been crippled in the past. iMovie used to be far more capable, and older versions of Pages had layout options (pretty basic, but still) that were completely removed.

It's also not impossible that Apple moves a few of Pixelmator's tools into Photos but kills the rest of it, either actively or just by stagnant development.


No. If you’re that kind of power user, the iMac isn’t for you. Get a Mac mini or studio.


A lot of people in this thread are commenting without making an attempt to understand who this is for.


> A lot of people in this thread are commenting without making an attempt to understand who this is for.

Why do you seem defensive? I asked a reasonable question of a group of people I thought might have product knowledge I lacked. I am not a Very Pink Apple Product SME, and stated as much in my question.


This isn't directed at you. I just happened to read a bunch of comments before yours. Stu2b50's response seemed like a good enough place to point out the dynamic.


Many of these are customer service desks which are visible from the side.


The RAM is on the package for more than portability: it’s necessary for fast enough transfer speeds for the iGPU.


Then again, the rest of the industry has figured out a way to make slottable RAM almost as fast and compact as soldered RAM with the new CAMM2/LPCAMM2 standards. The M4 has LPDDR5X-7500 120GB/sec memory and there are already LPCAMM2-7500 120GB/sec modules, with even faster ones on the way: https://www.anandtech.com/show/21390/micron-ships-crucialbra...

Two of those modules working in parallel would hit "M Pro" speeds as well. I doubt Apple will be adopting them though, for the same reason they don't offer standard M.2 SSD slots even on systems that could obviously support them with minimal design compromises.


These are still well below what Apple offers at the high end, and you cannot buy systems like that right now. If you want high memory bandwidth on the CPU today, you will pay a big markup on Epyc/Xeon/Threadripper Pro CPUs and motherboards, rather than on the DRAM.


> Then again, the rest of the industry has figured out a way to make slottable RAM almost as fast and compact as soldered RAM…

Just be patient, the EU will take a large stick and force Apple to allow users to replace their RAM soon too.


Very unlikely. Apple can argue that less than 1% of computer users ever upgrade their memory (which is true). And after all, did the EU intervene when GPUs dropped their slotted memory?


> did the EU intervene when GPUs dropped their slotted memory?

The difference there is that slotted GPU memory is demonstrably impractical, but the memory on the M4 isn’t demonstrably better than the LPCAMM2 module above - it’s literally the exact same spec. Not that I expect the EU to do anything either, given they didn’t act on Apple’s soldered-in SSDs, which definitely aren’t any better than standardized M.2 drives.


Actually, incorrect. In some scenarios you’d need up to four CAMM2 slots to do what Apple does, because CAMM2 maxes out at a 128-bit bus while the M3 Max is currently at 512 bits. Needless to say, battery life would be most affected.

https://news.ycombinator.com/item?id=40287592
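The slot count above follows directly from the bus widths. A quick sketch (bus widths per the comment; the rest is just illustrative arithmetic):

```python
# How many 128-bit CAMM2 modules would be needed to match a 512-bit SoC bus
SOC_BUS_BITS = 512     # M3 Max memory bus width
CAMM2_BUS_BITS = 128   # maximum bus width of a single CAMM2 module

modules_needed = SOC_BUS_BITS // CAMM2_BUS_BITS
print(modules_needed)  # → 4
```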


Yes, the higher-end Max and Ultra chips would still need soldered memory for sure. Two CAMM modules flanking opposite sides of the SoC is probably doable, though, so I think the M Pros could practically have socketed memory.


GPU memory runs at 20+ GT/s, Apple’s at ~6.4 GT/s, and LPCAMM supports ~7.5 GT/s.

Easy heuristic: if your memory transfer rate is more than 1.5x the standard, you can solder RAM. If not, you must use the standard.


For SSD speeds, that was already demystified by iBoff’s new adapter, which makes an M1 MacBook Air upgradable and faster. I wouldn’t be surprised if the same were true for RAM, using the CAMM standard positioned near the CPU. Or maybe even better: slotted memory chips like in the old days, with a memory controller ready to accept multiple chip sizes.


> necessary for fast enough transfer speeds

Source?


When was the last time you saw a GPU with slotted memory?

For transfer speeds, look at the data sheets for the M series: much faster than DDR4 or DDR5 RAM, and in the ballpark of GPU memory.


Would the people who were buying the baseline 8GB model (presumably just for general computing/office work) care about the GPU being slightly slower, though?

I bet the extreme lag when you run out of memory - because you have an Electron app or two, several browser tabs, and something like Excel open - is way more noticeable.

Hardly anyone is using Macs for gaming these days, and almost anyone doing something GPU-intensive would need more than 16GB anyway.


This has been the approach since the M1s.

See: https://www.theregister.com/2020/11/19/apple_m1_high_bandwid...

> The SoC has access to 16GB of unified memory. This uses 4266 MT/s LPDDR4X SDRAM (synchronous DRAM) and is mounted with the SoC using a system-in-package (SiP) design. A SoC is built from a single semiconductor die whereas a SiP connects two or more semiconductor dies.



Source for what? Parallel RAM interfaces have strict timing and electrical requirements. Classic DDR sockets buy modularity at the cost of peak bandwidth and bus width: the wider your bus, the more traces you have to run in parallel from the socket to the compute complex, which becomes harder and harder. You don’t see sockets for HBM or GDDR for a good reason.

LPCAMM solutions mentioned upthread resolve some of this by making the problem more "three dimensional" from what I can tell. They reduce the length of the traces by making the pinout more "square" (as opposed to thin and rectangular) and stacking them closer to the actual dies they connect to. This allows you to cram swappable memory into the same form factor, while retaining the same clock speeds/size/bus width, and without as many design complexities that come from complex socket traces.

In Apple's case they connect their GPU to the same pool of memory that their CPU uses. This is a key piece of the puzzle for their design, because even if the CPU doesn't need 200GB/s of bandwidth, GPUs are a very different story. If you want them to do work, you have to feed them with something, so you need lots of memory bandwidth to do that. Note that Samsung's LPCAMM solutions are only 128-bits wide and reported around 120GB/s. Apple's gone as high as 1024-bit busses with hundreds of GB/s of bandwidth; the M1 Max was released years ago and does 400GB/s. LPCAMM is still useful and a good improvement over the status quo, of course, but I don't think you're even going to see 256-bit or 512-bit versions just so soon.
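Back of the envelope, these bandwidth figures fall straight out of bus width × transfer rate. A sketch (the transfer rates are the commonly quoted per-pin specs, assumed here):

```python
def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_gts: float) -> float:
    """Peak memory bandwidth = (bus width in bytes) * (transfers per second)."""
    return (bus_width_bits / 8) * transfer_rate_gts

# LPCAMM module: 128-bit bus at LPDDR5X-7500 (7.5 GT/s)
print(peak_bandwidth_gbs(128, 7.5))  # 120.0 GB/s
# M1 Max: 512-bit bus at LPDDR5-6400 (6.4 GT/s)
print(peak_bandwidth_gbs(512, 6.4))  # ~409.6 GB/s, i.e. the quoted ~400GB/s
```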

And if your problem can be parallelized, then the wider your bus, the lower your clock speeds can go, so you get lower power while retaining the same level of performance. This same dynamic is how an A100 (5120-bit HBM bus) can smoke a 3090 (384-bit) despite far lower clock speeds and power usage.

There is no magical secret or trick: you will always get better performance and less noise at lower power by directly integrating these components. It's a matter of whether it makes sense given the rest of your design decisions -- like whether your GPU shares the memory pool or not.

There are alternative memory solutions like IBM using serial interfaces for disaggregating RAM and driving the clock speeds higher in the Power10 series, allowing you to kind of "socket-ify" GDDR. But these are mostly unobtainium and nobody is doing them in consumer stuff.


First, there's an official position: https://en.wikipedia.org/wiki/Chief_Official_White_House_Pho...

Secondly, why would it be a coincidence? You don't think... the people in the situation room knew something, perhaps a situation, was happening at the time?

Otherwise, why would they _be_ in the situation room? There's usually a situation going on.


Apologies for my ignorance. I didn’t think photographers were allowed into that room most of the time.


It doesn't. A loss is a loss.


I think being into NFTs and crypto makes you a fool.

But I do not think fools deserve vision damage, or to have harm inflicted on them.


Fools alone might deserve some sympathy. But greedy get-rich-quick fools who derided everyone not participating in the scam? Nah, no sympathy from me, none at all.


RAW video isn’t like RAW photography. The sheer size of raw footage is enormous - it’s normal for cameras to be unable to record RAW footage natively, without an external recorder.

That’s not to say processing isn’t part of it, but even $2k mirrorless cameras don’t record RAW video internally.


That depends on what the goal is. It's worthless if you're trying to convince the other side. But in many cases you're actually trying to convince the audience, and for something like a blog post here, I'd argue that was not only the goal but a successful one, since at minimum we know it got posted to Hacker News and gained traction.


I mean, you'd also probably want to shoot at something other than 26mm. The image quality from the "telephoto" and ultra-wide lenses on the iPhone (and any other flagship) is so bad it's distinctly noticeable even on an iPhone-sized screen compared to the iPhone's main shooter, let alone a dedicated camera.

Traditionally, the "default" focal length - for being closest to the perspective of the human eye - is 50mm, so quite a lot tighter than the iPhone's main shooter.

The iPhone look is a combination of extremely deep depth of field, not particularly pleasing bokeh, and digital sharpening and smoothing. Image stacking can make up for the lack of light on a tiny sensor with a slow lens, but it can't make up for the depth of field.

