
It’s more than that. That’s just shared memory. As you say, if it was just that then it wouldn’t be noteworthy at all


If it quacks like a duck, looks like a duck, it's probably a duck.


Unless it's from Apple, in which case they call it a peacock and everyone gets all hyped about it.


Do we know it's noteworthy? It's not like everything Apple names is really that interesting. Liquid Retina Display, Dynamic Island, etc.


More like "a hundred bucks less". The M1s go for maybe half... if you're lucky. Most are more.


Because there's a newer model, which is now true for the M2 ones too?


M1s took a hit because multiple processors came out each gen. Remember: the M1 was superseded by both the Pro & the Max (and the Ultra, but that’s not in a MacBook).

By the time the M2 came around ppl knew what was up. So you might be able to find a bog standard M1 that low (they do show up) due to this… but it’s rare.


> But others have caught up

Then you proceed to link to one that... hasn't? (it's good, yes, but it's not caught up at all)


Depends. Is it faster? Then it's an upgrade.

Has the CPU industry really managed to pull off its attempt at a bs coup that more cores always === better?

I thought we'd learned our lesson with the silly MHz myth already?


I guess we'll have to wait for benchmarks but I did find this interesting:

Apple's PR release for M2 pro: "up to 20 percent greater performance over M1 Pro"

Apple's announcement for M3 pro: "up to 20 percent faster than M1 Pro" (they didn't bother to compare it to M2 pro)


Sure, that's the title, but at least in this PR they immediately show a graph with a comparison to both.

Presumably it makes more marketing sense to compare to the M1 family up front because most people that bought an M2 last year are probably not going to be upgrading to M3. They are speaking to the people most likely to upgrade.


fwiw, i cant remember the last time i saw a company go back more than a generation in their own comparison. Apple is saying here as much as they're not saying here. M2->M3 may not be a compelling upgrade story.


The vast majority of Mac users go years between upgrades. For any other vendor it might seem weird to show several comparisons going back multiple generations (M1 and x86), but for the macOS ecosystem it makes perfect sense since only a very tiny slice of M2 users will be upgrading.


and what makes you think windows users update their devices every single generation?


Windows has distinct groups: the people who buy whatever costs $700 at Costco every 10 years (or when it breaks) and don’t care, but there’s also a vocal enthusiast community who do upgrade frequently. That group gets more attention since it’s a profitable niche and gaming generates a lot of revenue.


I used to buy a $700 Windows laptop every 18 months in the 2000s. Then I got fed up with them just falling apart and switched to Macbooks. My 2013 purchase is still alive and being used by the kids.


In the 2000s, I went through a wide variety of PC laptops (Lenovo, Toshiba, Dell, Alienware, Sony, etc.) all within the range of $1200-$6500 and they all died within 3 years (except for the cheapest one which was a Lenovo with Linux). Some died within a year.

When my first Macbook lasted for more than 3 or 4 years I was surprised that I was upgrading before it died. I went through many upgrades with almost zero issues (one HDD failure, one battery failure). I still have a 2012 Macbook Pro that I've since installed Linux on.

When I bought the first touchbar Macbook (late 2015?) I spent around $6k maxing out the options, and I was surprised at how totally trash it was. Hardware QC issues were shocking: particles under the screen from manufacturing, keys stuck within the first hour of usage, external monitor issues, touchbar issues...

I haven't bought a laptop since.


Did you buy a macbook for $700? That was a pretty low price back then which meant you were buying devices made to a price. Buying a Macbook is one solution, another would have been to spend more money on a higher quality Wintel system.


No, it was around $1100 IIRC, maybe as much as $1300.


Right, so when you spend twice as much you wind up with a better device. I think this might be only tangentially related to the fact that it was an Apple product; rather, you weren't purchasing the cheapest available device.


Ten years ago Apple was by far the highest quality laptop manufacturer. There was essentially no other option back in the early 2010s. Even now laptops with a "retina" display are not always easy to find for other manufacturers. In retrospect, that was probably the killer feature which induced me to switch.


Yeah, the quality of PC laptops has improved but that really just means you can get closer to equivalent quality at equivalent pricing. I've heard people claim to have saved a ton but every single time I used one there was some noticeable quality decrease, which I find kind of refreshing as a reminder that the market does actually work pretty well.


Did you treat the MB differently because you paid more? If so, that may have yielded longer life in addition to quality design, etc.


Not really. The difference in build quality was night and day; metal vs. plastic, keyboard that doesn't flex, etc.


Windows users buy whatever, from so many brands, that it doesn't matter how often they upgrade; they're likely not to upgrade from the same vendor anyway (so a comparison to that vendor's older generations wouldn't be meaningful in the first place).


> and what makes you think windows users update their devices every single generation?

They don't, but the difference is that Windows users generally don't know or care about processor generations. In contrast, it's common for Mac users to know they have an "old" Intel-based Pro, an M1 Air, etc., and to use that knowledge to help determine when it might be time to upgrade.

You can test this by asking Windows users what CPU they have. For the few who know and who have an Intel CPU, you can ask what their Brand Modifier¹ (i3/i5/i7) is. If they know that, you can ask what the 5-digit number following the Brand Modifier is — the first two digits are the Generation Indicator¹. I'd be surprised if more than 0.01% of Windows users know this.

¹ Intel's name
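As a rough illustration of that decoding, here's a small Python sketch (the helper name is made up, and it only covers the plain i3/i5/i7/i9 scheme, not the newer "Core Ultra" branding):

    import re

    def intel_core_generation(model):
        # e.g. "i7-8550U" -> 8 (4-digit SKU), "i7-10750H" -> 10 (5-digit SKU)
        m = re.match(r"i[3579]-(\d{4,5})", model)
        if not m:
            return None
        sku = m.group(1)
        # older 4-digit SKUs carry the generation in the first digit,
        # 5-digit SKUs in the first two
        return int(sku[:1]) if len(sku) == 4 else int(sku[:2])

    print(intel_core_generation("i7-8550U"))   # 8
    print(intel_core_generation("i7-10750H"))  # 10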


Intel's CPU naming strategy used to drive me nuts when trying to talk to anyone at work who knew "just enough to be dangerous." Why is X so slow on this machine, it's got a [6-year-old, dual-core] i5! It runs fine on my laptop and that's only a [1-year-old, quad-core] i3!


> it's common for Mac users to know they have an "old" Intel-based Pro, an M1 Air, etc., and to use that knowledge to help determine when it might be time to upgrade.

Not at all. I've worked with FANG developers with brand new M1 MBPs that had no idea what 'm1' meant until something broke.


like everything you said could apply to nvidia gpus as well


man, that's a whole lot of mental gymnastics to justify scummy benchmark practices from apple.


How are they scummy? The M3 vs. M2 performance improvements they showed looked pretty modest.

My interpretation while watching the event is that this is a company persuading x86 holdouts to upgrade to Apple Silicon, and maybe some M1 users as well.


It’s absolutely not, and that’s fine. The video has statements that the machines are made to “last for years” and they want to save natural resources by making long-lasting machines.

I’m currently at 4 to 5 years on laptops and 3 to 4 years on phones, and even then I hand them over to kids/friends/family who get a bit more use out of them.


> they want to save natural resources by making long-lasting machines.

Apple always comes from a position of strength. Again, they're saying as much as they're not saying.

Also, if they really cared about long lasting machines: slotted ram and flash please, thanks!


Huh. So they used to do this, but looking at the M series chips it seems like the architecture assumes the CPU-GPU-RAM are all on the same chip and hooked into each other, which enables zero copy. Someone more well versed in hardware could explain if this is even possible.

Expandable internal storage would be nice, yeah. But I get the sealed, very tightly packed chassis they’re going for.


> get the sealed, very tightly packed chassis they’re going for

The Dell XPS 17 is only 0.1 inch thicker yet has fully replaceable RAM and 2(!) M.2 slots. I’m pretty sure what Apple is going for is maximizing profit margins over anything else.


I have an XPS 15. And while I liked that I could bring my own SSD and RAM, the build quality is nowhere near a Macbook Pro... like not even in the same galaxy. I had to have it serviced multiple times within the first few weeks. It had to be sent to Texas, and when it returned, one WiFi antenna wouldn't plug into the card, and the light on the front was permanently broken. I could have demanded Dell fix it - and I'd have been even more weeks without my main work laptop. So, by pure numbers/specs? Sure. By real world quality, no way would I favor Dell.


The issue is often comparing apples (heh) to oranges.

I understand the desire for slotted RAM, but the major limiting factor for nearly 10 years was CPU support for more than 16G of RAM. I had 16G of RAM in 2011 and it was only in 2019 that Intel's 9th-gen laptop CPUs started supporting more.

The Dell XPS 17 itself has so many issues that if it was a Macbook people would be chomping at the bit, including not having reliable suspend and memory issues causing BSODs -- reliability of these devices, at least when it comes to memory, might actually be worse and cause a shorter lifespan than if the memory had been soldered.

Of course it always feels good to buy an underspecced machine and upgrade it a year later, which is what we're trading off.

But it's interesting that we don't seem to have taken issue with BGA CPU mounts in laptops but we did for memory, I think this might be because Apple was one of the first to do it - and we feel a certain way when Apple limits us but not when other companies do.


There’s a lot of flat-out wrong information in this post. For one, even the low-power (U-series) Intel laptop CPUs have supported 32GB+ of memory since at least the 6th generation[1]. Many machines based on these CPUs unofficially support more than that. I have a Thinkpad with an i7-8550u and 64GB of DDR4, and it runs great.

On top of that, the higher-power laptop SKUs have supported 64GB or more since that time as well.

Secondly, it’s silly to claim that having RAM slots somehow makes a computer inherently more unstable. Typically these types of issues are the result of the manufacturer of the machine having bugs in the BIOS/EFI implementation, which are exacerbated by certain brands/types of memory. If you don’t want to mess around with figuring that stuff out, most manufacturers publish a list of officially-tested RAM modules which are not always the cheapest in absolute terms, but are always night-and-day cheaper than Apple’s ridiculous memory pricing.

[1] https://www.intel.com/content/www/us/en/products/sku/88190/i...


Sorry, you're entirely mistaken, there is no business laptop that you could reasonably buy with more than 16G of RAM. I know because I had to buy a high end workstation laptop (Dell Precision 5520 FWIW) because no other laptop was supporting more than 16G of RAM in a thin chassis.

No Dell Latitude, Elitebook, Thinkpad X/T-series or even Fujitsu lifebook supported a CPU that was permitting greater than 16GiB of memory.

I know this because it was something I was looking at intently at the time and was very happy when the restrictions were lifted for commercially viable laptop SKUs.

Citing that something exists presupposes availability and functionality. No sane person is going to be rocking into the room with a Precision 7520 and calling it portable. The thing could be used as a weapon and not much else if you had no power source for more than 2hrs.

Also, socketed anything definitely increases material reliability. I ship desktop PC's internationally pretty often and the movement of shipping unseats components quite easily even with good packing.

I'm talking as if I'm against socketed components; I'm not. But don't pretend there are no downsides and infinite upgradeability as the upside; that's disingenuous. In my experience there are some minor reliability issues (the XPS 17 being an exceptional case, and one I was using to illustrate that sometimes we cherry-pick what one manufacturer is doing with the belief that there were no trade-offs to get there) and some limitations on the hardware side that cap your upgrade potential regardless of soldering.


> Sorry, you're entirely mistaken, there is no business laptop that you could reasonably buy with more than 16G of RAM.

> No Dell Latitude, Elitebook, Thinkpad X/T-series or even Fujitsu lifebook supported a CPU that was permitting greater than 16GiB of memory.

Here are the Lenovo PSRef specs for the Thinkpad T470, which clearly states 32GB as the officially-supported maximum, using a 6th or 7th gen CPU:

https://psref.lenovo.com/syspool/Sys/PDF/ThinkPad/ThinkPad_T...

This is not a behemoth of a laptop; I'm writing this on a T480 right now, which supports 32GB officially and 64GB unofficially, and it weighs 4lbs with the high-capacity battery (the same as the T470).

I can't tell if you're trolling or what, but if you're serious, you clearly didn't look hard enough.

Edit: since you mentioned Latitudes, Elitebooks, and Fujitsu lifebooks:

- Dell Latitude 7480 (6th gen CPUs) officially supports 32GB: https://www.dell.com/support/manuals/en-us/latitude-14-7480-...

- HP Elitebook 840 G3 (6th gen CPUs) officially supports 32GB: https://support.hp.com/us-en/document/c05259054

- For Lifebooks, I couldn't find an older one that supported 32GB, but this U937 uses 7th gen CPUs, and has 4GB soldered and one DIMM slot which supports up to 16GB. This is a total of 20GB, again, breaking the 16GB barrier: https://www.fujitsu.com/tw/Images/ds-LIFEBOOK%20U937.pdf

I believe these are all 14"-class laptops that weigh under 4 pounds.


One more thought: you might be getting confused here with the LPDDR3 limitation, which was a legit thing that existed until the timeframe you're thinking of.

Any laptop which used LPDDR3 (soldered) typically maxed out at 16GB, but as far as I'm aware, this was due to capacity limitations of the RAM chips, not anything to do with the CPUs. For example, the Lenovo X1 Carbon had a 16GB upper limit for a while due to this. I believe the 15" MacBook Pro had the same limitation until moving to DDR4. But this is entirely the result of a design decision on the part of the laptop manufacturer, not the CPU, and as I've shown there were plenty of laptops out there in the ~2014-2016 timeframe which supported 32GB or more.


Intel actually has this documented all on one page: https://www.intel.com/content/www/us/en/support/articles/000...

DDR4 support was introduced with the 6th gen Core (except Core m) in 2016, LPDDR4 support didn't show up until (half of) the 10th gen lineup in 2019. It's just another aspect of their post-Skylake disaster, wherein they kept shipping the same stuff under new names for years on end before finally getting 10nm usable enough for some laptop processors, then a few years later getting it working well enough for desktop processors. In the meantime, they spent years not even trying to design a new memory PHY for the 14nm process that actually worked.


Yeah, this link is helpful, but IMHO doesn’t actually call out the specific problem I was referring to, which is that only laptops that used LPDDR3 had the 16GB limitation. If the laptop used regular DDR3, or DDR4, it could handle 32/64GB. The table lumps everything together per processor model/generation.


They haven't made slotted RAM or storage on their MacBooks since 2012 (Retina MacBooks removed the slotted RAM afaik). It might save on thickness, but I'm not buying the slim chassis argument being the only reason, since they happily made their devices thicker for the M-series CPUs.


> It might save on thickness, but I'm not buying the slim chassis argument being the only reason

Soldered memory allows higher bus frequency much, much easier. From a high frequency perspective, the slots are a nightmare.


It's not soldered. It used to be, but ever since the M1, it's in-CPU. The ram is actually part of the CPU die.

Needless to say it has batshit insane implications for memory bandwidth.

I've got an M1, and the load time for apps is absolutely fucking insane by comparison to my iMac; there's at least one AAA game whose loading time dropped from about 5 minutes on my quad-core intel, to 5 seconds on my mac studio.

There's just a shitload of text-processing and compiling going on any time a large game gets launched. It's been incredibly good for compiling C++ and Node apps, as well.


the ram is not on die, and 5 min to 5 sec is obviously due to other things, if legit


Sounds like the iMac had spinning hard disks rather than SSD storage.


Yup. I’ve been looking at the Framework laptop, and it’s barely any thicker than the current MacBook Pro.


I have no excuse for flash, but memory can't really be slotted anymore since SODIMM is crap. High hopes for CAMM making its way into every other machine in 2024!


Given that there is a legally mandated 2-year warranty period at least in Europe, I would be surprised if any laptops weren’t made to “last for years”.

The problem with Apple, however, is that their hardware will long outlive their software support. So if they really want to save natural resources by making long-lasting machines, they should put much more effort into sustained software support.


Yes my MacBook Pro 2010 is still going strong.

But, drivers are only available for win 7 and macOS High Sierra was the last supported version.

Luckily Linux still works great.


> i cant remember the last time i saw a company go back more than a generation in their own comparison

Apple likes doing that quite frequently while dumping their "up to X% better" stats on you for minutes.


Nvidia did it when they released the RTX 3080 / 3090 because the RTX 2000 series was kind of a dud upgrade from GTX 1060 and 1080 Ti


Apple always does game comparisons like this for their conferences though. The intel era was even worse with this iirc.


In the Intel era there wasn’t much to game; they were using the same chips as all the PC competitors. The PowerPC era, on the other hand…


The majority of MacBooks out there are still intel based. This presentation was mostly aimed at them & M1 owners.


Is it a problem, though? The vast majority of people skip generations and for them the relevant reference point is what they have, which is going to be hardware from a couple of generations ago. M2 -> M3 does not have to be compelling: the people with M2 devices are a tiny fraction of the market anyway.

I find it interesting how people respond to this. On one side, it’s marketing so it should be taken critically. OTOH, if they stress the improvements over the last generation, people say they create artificial demand and things about sheeple; if they compare to generations before people say that it’s deceptive and that they lost their edge. It seems that some vocal people are going to complain regardless.


Given how strongly they emphasised the performance over the Intel base - who have now had their machines for 4 years and are likely to replace them soon (and may be wondering whether to stay with Apple or switch over to PCs) - it is pretty obvious that they also want to target that demographic specifically.


That’s not what it says. Actual quote:

> The 12-core CPU design has six performance cores and six efficiency cores, offering single-threaded performance that is up to 30 percent faster than M1 Pro.


Ok, so then the M3 pro is up to 1.3/1.2=~8% faster than the M2 pro? I can see why they wouldn't use that for marketing.


Depends who they are marketing to I think is the point. If the biggest group of potential buyers are not M2 users, then it makes sense not to market to them directly with these stats.

I've got an M1 Max 64GB and I'm not even tempted to upgrade yet, maybe they'll still be comparing to M1 when the M10 comes out though.


I'm also far from replacing my M1. But if someone from an older generation of Intel Macs considers upgrading the marketing is off as well.


I was referring to the graphic they showed during the announcement that verbatim said the CPU was "up to 20% faster than M1 Pro".

https://images.macrumors.com/t/wMtonfH5PZT9yjQhYNv0uHbpIlM=/...


Plausibly they thought the market is saturated with M1s and targeted this to entice M1 users to switch.


> Depends. Is it faster?

The devil tends to be in the details. More precisely, in the benchmark details. I think Apple provided none other than the marketing blurb. In the meantime, embarrassingly parallel applications do benefit from having more performant cores.


Heh, I recall seeing many posts arguing against benchmarks when all Macs equipped with an M2/8GB/256GB SSD scored much, much lower than the M1/8GB/256GB SSD. People said the synthetic benchmarks were not representative of real world use and you'd never notice the difference. 'Twas a battle of the optimists, pessimists, and realists. In reality, 'twas just Apple cutting costs in their newer product.


> Heh, I recall seeing many posts arguing against benchmarks (...)

It's one thing to argue that some real-world data might not be representative all on itself.

It's an entirely different thing to present no proof at all, and just claim "trust me, bro" on marketing brochures.


oh absolutely, I can't wait to see the benchmarks. Per the (non-numerical data) benchmarks in the video tho - it is faster. So... until other evidence presents itself, that's what we have to go on.


> Has the CPU industry really managed to pull off it's attempt at a bs coup that more cores always === better?

I thought this at first then I realized the cost-performance benefit gained from adding more cores often outweighs just improving the performance of single cores. Even in gaming. I think this is what led AMD to create their Ryzen 9 line of CPUs with 12 cores in 2019.

That being said, I abhor the deceptive marketing which says 50% more performance when in reality, it's at most 50% more performance specifically on perfectly parallel tasks which is not the general performance that the consumer expects.



Few game devs bother optimizing games to take advantage of multiple cores


I find that frustrating with how Intel markets its desktop CPUs. Often the performance enhancements I find amount to directly turning off the efficiency cores...


Faster than what? M1 Pro? Just barely.


Reference should be M2 pro


I suspect it's about equal or perhaps even slower.


Based on what? The event video says it's faster.


M2 Pro was about 20-25% faster than M1 Pro, M3 Pro quotes a similar number. It has faster cores but a weaker distribution of them. Seems like a wash, but we'll see exactly how close when benchmarks are out.


2.5x is "just barely"? lol k.


> 2.5x is "just barely"? lol k.

That's only rendering speed, and M3 Max vs M1 Max (not Pro). M3 Pro is only 30 percent faster:

> The 12-core CPU design has six performance cores and six efficiency cores, offering single-threaded performance that is up to 30 percent faster than M1 Pro.


20%


Let me re-write your post with the opposite view. Both are unconvincing.

<< Depends. Is it faster? Then it's an upgrade. Has the CPU industry really managed to pull off its attempt at a bs coup that more MHz always === better?

I thought we'd learned our lesson with the silly cores Myth already? >>


I think you're misreading the comment you're replying to. Both "more cores is always better" and "more MHz is always better" are myths.


Yup, exactly what I was saying.


Yes, but the number of cores in similar CPUs does provide a good comparison. For example, with the base M2 Pro at 6 P-cores and the base M3 Pro at 5 P-cores, one would want ~20% faster cores to compensate for the lack of one core in parallel-processing scenarios where things scale well. I don't think the M3 brings that. I am waiting to see tests to understand what the new M3s are better for (prob battery life).
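A back-of-the-envelope version of that arithmetic (a Python sketch assuming perfect scaling across P-cores, which real workloads rarely achieve):

    m2_pro_p_cores, m3_pro_p_cores = 6, 5
    # per-core speedup the 5-core part would need just to match the 6-core part
    # on an embarrassingly parallel workload
    required = m2_pro_p_cores / m3_pro_p_cores
    print(f"{(required - 1) * 100:.0f}% faster per core needed")  # 20% faster per core needed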


That's... the same view, just applied to a different metric. Both would be correct.

Your reading comprehension needs work, no wonder you're unconvinced when you don't even understand what is being said.


That makes less sense because the MHz marketing came before the core count marketing.

I agree with GP that we should rely on real measures like "is it faster", but maybe the goal of exchanging performance cores for efficiency cores was to decrease power consumption, not to be faster?


Probably a balance of both tbh, as it appears to be both faster AND around the same performance per watt.


Basically, HDD arcade games are PCs or similar (occasionally customized console hardware) with HDDs attached.

Pretty much anything made post-2002 is that. It started a few years earlier tho.

Before that: custom ROMs (or at least some custom ROMs on that company's standard arcade platform hardware)


Famously, games like Dragon's Lair from the 80s require a CHD for all of the full motion video.


Yeah, they’re originally laserdiscs so a bit of a special case, but same basic idea.

However it famously doesn’t support Dragon's Lair! Not properly (it can play the video but that’s about it). Tho work has been ongoing.

Daphne is the only emulator of Dragon's Lair I know of


While MAME does support some Laserdisc-based games - Firefox, Us vs. Them, Mach 3, Cube Quest, and Time Traveler, along with a couple of others - Dragon's Lair is not among them. It's complicated.


It’s 2.5TB for the software archive alone.

But for games, 1TB is roughly correct. Assuming no deduplication


Their idea is that v1 is when all machines they support are perfectly emulated.

So… it’s asymptotically approaching v1, you could say.

Of course they keep adding more machines… so…


Yep. In fact, the first releases of MAME had 0.01 increments. Not sure when they went to finer steps, but it is pretty much a given that they'll do it again when the current revision gets too close to 1.


Version numbers don't work like that :)

MAME just has a permanent "0" major version, and the minor version continues to be bumped up forever. It's not a decimal number, so "close to 1" doesn't really make any sense.
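A quick sketch of how those version strings compare, component by component rather than as decimal fractions (the version numbers here are just illustrative):

    def newer(a, b):
        # compare dotted versions numerically, one component at a time
        return tuple(map(int, a.split("."))) > tuple(map(int, b.split(".")))

    print(newer("0.260", "0.99"))  # True: minor version 260 > 99
    print(0.260 > 0.99)            # False if you misread them as decimals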


Ehhhh… while there’s no particular decimal cut off, “closer to 1” is always going to be a higher minor version.

It’s not incorrect as such


It’s usually got the “best” everything emulator. As long as by best you mean most accurate.

It may not be the fastest (usually never is) and may not be the most compatible (because it may not be complete vs hacks in other emulators) but what it does emulate is usually pretty close to perfect & will converge on that at least.

Its Genesis emulator is a great example of that: about as perfect as non-hardware can get… but definitely not the fastest or most fancy


Nah, there is a whole cadre of emulators that have pushed emulation to the next step since the early 2010s, kind of spearheaded by Higan. They achieved cycle-accurate emulation of systems by eliminating the use of shortcuts. Behaviours are adequately reproduced; performance comes second. MAME has never held the crown of most accurate emulator, except in its field of expertise, the arcade.

MAME of course has gone in the same direction, but the project is known for its portability. So it has many emulation options that use less cycle-accurate methods to run on a Raspberry Pi, for example. From my limited experience, those far older emulators filled with performance-focused shortcuts are the default in MAME. So if you want the most accurate emulation, MAME isn't really it.


That's not actually correct. MAME intends to be as accurate as possible (we emulate the actual microcontrollers running the actual firmware inside the keyboards for PC, AT, and PS/2 keyboards as one example) but it's not always there.

Sega Genesis is a poor example; MAME runs the entire commercially released library fine, but some homebrew stuff that seeks to stretch or break the hardware doesn't do well (notably the Titan demos).

On the other hand, MAME's 8-bit Apple II emulation is cycle-accurate and you can raster-split mid-scanline by counting cycles (as some people crazier than me have done). So it really depends on the specific driver.


This is literally just wrong. Are you confusing MAME with RetroArch? Because that's a completely different project.


He's right. Having been in the MAME source myself, I'd say it was designed as hardware documentation, not necessarily the most accurate emulator.

Just look at the source for most any 3D polygon rendering platform. Take the Gaelco stuff or the Midway V platform. The CPU might be well emulated (enough), but the GPU is very frequently only simulated at the highest level. Someone might look at the display list generated by the CPU and pick out the vertex format, then copy-paste a generic triangle rasterizer in its place and try to implement whatever features can be figured out. Attention is paid to things such as interrupt timing, DMA timing, FIFO timing and so on only to the extent that it blocks operation or crashes the CPU.

This is not to dismiss the incredible work in MAME. It is frequently the first and only source of documentation at all. But parent is correct that it's common for more specialized emulators to go dig up more details and get more things correct than MAME.
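For a flavour of what "simulated at the highest level" means, here's a toy Python sketch (not MAME code; the display-list format and all names are made up): the display list the CPU built is walked and each triangle is handed to a generic software rasterizer, with no attempt to model the real GPU's timing.

    def edge(a, b, p):
        # signed area test: which side of edge a->b the point p lies on
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

    def fill_triangle(fb, v0, v1, v2, color):
        # generic bounding-box rasterizer, the kind that gets dropped in
        # wholesale instead of emulating the real hardware
        xs, ys = [v0[0], v1[0], v2[0]], [v0[1], v1[1], v2[1]]
        for y in range(min(ys), max(ys) + 1):
            for x in range(min(xs), max(xs) + 1):
                w0 = edge(v1, v2, (x, y))
                w1 = edge(v2, v0, (x, y))
                w2 = edge(v0, v1, (x, y))
                if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                    fb[y][x] = color

    def draw_display_list(display_list, fb):
        for cmd in display_list:
            if cmd["op"] == "triangle":
                # vertex format reverse-engineered from whatever the CPU writes;
                # FIFO/DMA timing is ignored unless a game hangs without it
                v0, v1, v2 = cmd["verts"]
                fill_triangle(fb, v0, v1, v2, cmd["color"])

    fb = [[0] * 8 for _ in range(8)]
    draw_display_list([{"op": "triangle", "verts": [(0, 0), (7, 0), (0, 7)], "color": 1}], fb)
    print("\n".join("".join(str(p) for p in row) for row in fb))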


They definitely are thinking of RetroArch. Its MAME cores suuuuuck too! So it's compounding their confusion, it seems.

The rest of their post is about as accurate too


Yeah, I confused MAME with RetroArch. MOVE ALONG, NOTHING TO SEE.


You appear to be highly confused. None of what you said is accurate


You can't be serious. MAME's Genesis driver is far from perfect and most definitely less accurate than BlastEm.

mametesters.org/view_all_bug_page.php?filter=125692


Try playing Lightning Force aka Thunder Force IV — it’s famous for its heavy metal soundtrack. In Higan it does not work, in Genecyst the sound is crappy (on older emulators on Windows I was also out of luck). In MAME it runs just fine w/o tweaks. Haven’t tried BlastEm though.


BlastEm is great but—despite its vaunted accuracy claims—is buggy as hell too.

It plays more games but that’s not the same thing as accurate.

MAMEs emulation is incomplete, that’s why there are bugs.


They did. The Long Now used it about a decade ago.


But the Long Now project seems to be dormant. I'm looking for a live project that is doing this. Or maybe we could start one.


That’s par for the course for the Long Now. It’s basically a bunch of rich folks circle jerking themselves as a hobby.

Don’t get me wrong: love the goal. Practically tho it’s basically just nerd hobby projects


Given the long term focus of the Long Now project, wouldn't it be expected to look dormant?


One day this tech will be available to regular people.

The Long Now used it for some random codex & then seems to have dropped it. Last I saw, it was mostly used by one single jeweler, bizarrely.

