Hacker News

At a certain point it's more about the semantics of what a "computer" is. I don't know if I'd count an ASIC from a musical greeting card, though; and even within general purpose devices, microcontrollers vs microprocessors are typically delineated by the presence of an MMU.



If I can program it to execute a sequence of arithmetic and logical operations that approximate a Turing machine (with a finite tape), and reprogram it at a later date to execute a different sequence of such operations, that's a computer to me. I wouldn't count ASICs, but the PIC12F508 or the 3-cent microcontroller referenced in the post definitely count.

Though by my definition of requiring reprogrammability and Turing completeness I am purposefully excluding many things that have historically been considered computers, like the many mechanical computers of the 19th and 20th century. From that standpoint I can see how some people might count ASICs as computers, even if I don't think that fits modern usage.


These RISC-V chips are on the order of $0.02 to $0.10 (qty 1):

https://www.wch-ic.com/products/CH32V003.html

The PIC12F508 has just 25 bytes (!) of SRAM. The CH32V003 has 2 KB.

https://www.aliexpress.us/w/wholesale-CH32V003.html?spm=a2g0...


the 6502, 8080, z80, 8085, 8086, 65816, 68000, and 68010 were universally described as microprocessors, not microcontrollers, but did not have mmus built in (and of these only the 68010 could easily have one bolted on, as i understand it)

i think typically the thing that distinguished these from microcontrollers like the 8031, 8051, 8748, 8751, pic1650, etc., is that the microcontrollers had program memory built into them, either rom, eprom, or, starting in the 90s, flash. so they didn't need to be booted, they didn't need program ram, and in fact for a lot of applications they didn't need any external ram at all


Arguing about hard definitions that differentiate microprocessors from microcontrollers based on a single feature is pointless. It's a vague product/marketing category for certain use cases. There will be a group of features that are more or less likely to be included, but for most of them there will be exceptions. And the set of features available in MCUs and microprocessors changes over time: as technology improves, both microcontrollers and microprocessors gain new capabilities.

* MCUs usually have program memory built in. But then there are chips like the RP2040 or ESP32 which, while considered MCUs, are used with external flash memory chips for storing the firmware.

* MCUs usually have built-in RAM. But there are also some capable of directly using external RAM.

* Then there are things like Apple M1 chips: with a lot of stuff built in, you still don't call them MCUs.

* A bunch of ARM application processors/SoCs/microprocessors might have enough resources built in that they could be used as more or less standalone microcontrollers, without external RAM or flash memory.

* Some early microprocessors used external MMUs, and it took some time until processors settled on an architecture closer to how we have things now.

* Early personal computer processors were in a weird category in terms of price and processing power; in a certain time period it wasn't impossible for the same microprocessor chip to be used both as a main computer CPU and in peripheral devices.

The "microprocessor" name in my opinion is slightly outdated at this point. It's not like anyone besides hobbyists is making non-micro processors out of individual relays, transistors, or logic chips.


mostly i agree; it's mostly a marketing distinction rather than a technical one

actually i think non-micro (multi-integrated-circuit) processors are becoming popular again. the 'microprocessor' moniker wasn't coined to distinguish processors built out of discrete transistors from processors built out of integrated circuits; that was the 'second-generation computer' vs. 'third-generation computer' distinction back in the 01960s. what made a microprocessor 'micro' was that it was a chip instead of a circuit board


I believe the 68000 could use an MMU, but the catch was that it couldn't do demand paging, just memory protection and virtual-to-physical translation. I can't find the specific explanation right now, but it's something along the lines of the bus error exception (needed to actually stop the memory cycle) being special in a way that sometimes causes an incorrect PC value to be pushed to the stack. So you could terminate a process on an MMU exception, but resuming it was not reliable.


There was at least one company (Apollo, I think) that implemented demand paging on the 68000 by using two 68000s. You had one, the leader, running as the "real" CPU, with the other, the follower, executing the same code on the same data but delayed by one instruction.

If the leader got a bus error, it would generate an interrupt on the follower to stop it before it executed the bus-erroring instruction.

The leader and follower would then switch roles, and the new leader could deal with the situation that had caused the bus error on the former leader.


That's so clever. What a hack. I'm imagining the slow smile on the face of the person that came up with it. "What if...".


I feel that the semantics are quickly becoming irrelevant. Many everyday items like sports watches, toys, kitchen appliances, alarm clocks, or table radios already have more processing power, more memory and storage, a higher-resolution screen, and better network connectivity than my first desktop. Running Doom on mundane items like key fobs and light bulbs isn't too far away from where we are in 2024.


> Running Doom on mundane items like key fobs and light bulbs isn't too far away from where we are in 2024.

About that... https://www.pcmag.com/news/you-can-run-doom-on-a-chip-from-a...




