Apollo 11 Guidance Computer vs. USB-C Chargers (forrestheller.com)
397 points by indy on Feb 7, 2020 | 205 comments



I'd like to also see an article examining these computing power attributes on the piece-of-garbage embedded computer on the front of gas pumps, where the lag after every keypress is like 800ms and every screen redraw takes about 2000ms. It takes me an extra 60 seconds to get gas now thanks to how slow these things are, and it is nearly universal, whether the devices appear to be 20 years old or brand new.

> Car Wash Y/N? (wait) > Loyalty (wait) > Fuel Rewards (wait) > Alt ID (wait) > 0005550000 (wait for the numbers to appear on screen)

I bet they run Java.


It's exceptionally slow on some days and quite variable. That makes me think it's "phoning home" using the GPRS gateway for every keypress, almost as if the menu items need to be generated dynamically.


I think about this every time I use one of these machines... isn't it in the business's best interest to get people through as fast as possible? I have the same experience at grocery stores and convenience stores; I'm always amazed at how slow the machines are at such seemingly simple tasks.


The machines at brand-new gas stations and self-checkouts are much faster [and also support mobile payment], but performing upgrades to older systems usually isn't worth the cost since they're only at max throughput for a few hours every week, if that.


The new ones play video ads and (some at least) have audio that you can't disable too (spotted in New Mexico). Truly baffling!

Hopefully not all of the machines are that bad. I wish that common point-of-sale interfaces focused on simplicity and ease of use (especially accessibility). Maybe one day!


Some of the unmarked buttons on those machines will mute the ads. Depends on the model.


I find it's the opposite: in the US they weren't this slow when they were rolling out 20 years ago. Probably 10-15 years ago they started to get a lot slower; they went from basically instant to increasingly laggy. Gas stations have been rolling out chip-aware ones for the past couple of years, and overwhelmingly they seem fairly laggy. It seems like there are a couple of brands that are faster, and the rest are timeslicing 10k pumps' inputs on a mainframe in New Jersey (OK, that's a joke, but it wouldn't surprise me if they are JavaScript based and round-tripping over dial-up to a Linux machine fronting an AS/400 or something).


Saving a few seconds is only the immediately measurable effect of such an investment, but as you can see in this post the added friction is a major headache for people, and removing it may have long-term customer loyalty effects.


This reminds me of back in the 90’s when the store I worked at replaced an old electronic cash register with one run by a computer.

It was so slow and I had no idea why. The display was a standard text screen of 40 x 80. The math is dead simple and the only other thing it did was open the cash drawer. The computer was a 486 at 40 MHz, so better than the one I had at home that ran Ultima 7 just fine.


You forgot: Would you like a receipt? (wait)

It makes no sense at all. As soon as you input payment it should output gas. There's really no need for buttons or a screen at all (well, maybe except for entering a zip). If they really have to show all that stuff, it can come after I have my gas.

Note: languages aren't slow, it's the programs that are slow.


As a non-US citizen: What's the deal with entering ZIP codes in some gas stations in the US? Tax reasons?


From what I understand it is for fraud protection. If you steal a credit card you likely do not know what the billing zip code is.

According to this blog[0] they also collect the data for "marketing purposes".

[0]:https://blogs.creditcards.com/2014/05/zip-codes-gas-station-...


It's also extremely annoying when driving in the states as a Canadian, since our postal codes don't follow the same format. Canadian postal codes are six character alphanumeric strings with alternating letters and numbers, so you can't enter them on the keypad.

The trick, which isn't obvious, is that you're supposed to drop the letters from your postal code, leaving three digits, and then append two zeros to the end. For example: K9Z 2P7 -> 92700.
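A minimal sketch of that mapping in C, purely illustrative (the helper name is made up; it assumes a well-formed postal code):

    #include <ctype.h>
    #include <stdio.h>

    /* Hypothetical helper: keep the three digits of a Canadian postal
       code (A1A 1A1), then append two zeros, per the trick above. */
    void postal_to_pump(const char *postal, char out[6]) {
        int n = 0;
        for (const char *p = postal; *p && n < 3; p++) {
            if (isdigit((unsigned char)*p))
                out[n++] = *p;      /* K9Z 2P7 -> '9', '2', '7' */
        }
        out[n++] = '0';             /* append two zeros */
        out[n++] = '0';
        out[n] = '\0';
    }

    int main(void) {
        char code[6];
        postal_to_pump("K9Z 2P7", code);
        printf("%s\n", code);       /* prints 92700 */
        return 0;
    }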

In Canada we don't have to deal with this, since pretty much all our pumps have supported EMV chip cards for years.


I remember a foreigner asking me what he should enter for the zip code when fueling. I didn't know the answer, but I'm wondering how he solved it.


On that note, I just used a pump with EMV for the first time (US, in Delaware), which didn't ask for a zip. I guess they're just starting to get deployed.


I think it's just fraud protection, but data farming makes sense too... It's only required some times at some stations it seems. At the station that I frequent I don't have to enter a ZIP, but other stations owned by the same company just miles away require one.


I've heard it's a substitute for a billing address field in that interface.


Fraud protection


I'm a bit surprised to see the comments here on automated gas station pumps. I've used quite a few different ones in a few different countries of Europe (where I've gone outside Europe, the stations are usually not automated, so I can't compare) and I have never had this kind of slow experience.

Here in Belgium I don't even get asked if I want a receipt; I can just ask for it afterwards if I like (just like for the rewards card). The worst I've had were in-store French supermarket stations, where for a while they had audio ads playing while you were filling up; that was absolutely horrible and I stopped going there. Eventually those things went away.

But the interface itself always asks for the least possible things. Diesel/gasoline 95/gasoline 98 then the card PIN, and you can fill up.


> But the interface itself always asks for the least possible things. Diesel/gasoline 95/gasoline 98 then the card PIN, and you can fill up.

In Finland, they don't even ask what you're going to pump. You just grab the right nozzle and go.


In the US there are separate buttons to select the grade of gasoline (I always push them to see if I can just bypass all the nonsense, but alas it doesn't work that way). The screens are an add-on gatekeeper that disables the grade selection until all its questions have been answered (NO, NO, NO, NO).


My process, ever since pay-at-the-pump machines started to roll out a couple of decades ago, has been: pick up the handle, hit the grade button, attempt to start pumping.

This used to work for probably better than half of them, due to the fact that they had pay-after-pumping policies for cash customers. It hardly ever works anymore, but I keep doing it and sometimes am surprised, while on a road trip out in the country, when it works. OTOH, I also tend to stop at old crotchety gas stations on principle and have been known to make a U-turn if a station has pumps that look like they are older than a couple of decades. A few years ago, I stopped at one where there was a $3 taped over one of the digits. I asked the owner about it; apparently his pumps could only charge $0.01-$1.50 (or some range like that), so every time gas prices went over or under some threshold he had to physically flip some mechanical gear in the pump and get the state to come back out and recertify it. This apparently cost more than the poor guy was making on gas, so he was saying that the next time it happened he would likely just shut down.


I'm surprised to see that you (and obviously lots of other people) were able to effect a reduction in advertising by boycotting ad-playing stations.

I wonder if it would work here... I live in Minnesota and I'm pretty limited on where I can fill up with 98 RON (called 93 Pump here in the US for some reason...). A lot of stations play video ads with sounds while you stand there.


Well, for a long time the population as a whole in the US managed to kill pay-first gas stations. Pretty much everyone just kept driving when they saw the "pay first" signs and pumps that refused to just pump gas. The inconvenience of guessing how much gas you need, giving the attendant a $10 or $20, pumping the gas, then going back to collect your change was a far worse experience than pumping the gas and paying whatever you owe.

It wouldn't surprise me if the ad-based gas stations saw a 10-20% drop in customers wherever there was an alternative nearby without the ads.


I'm in BC, Canada, and I think it's all pay-first here. On the other hand, some places will accept debit cards at the pump, and it just pre-authorizes some set amount before permitting me to fill the tank. Of course, I'm only charged for the amount dispensed.

This strikes me as a reasonable solution, in that it removes the need to guess how much I need beforehand.


I don't know if boycott was the only reason, I wouldn't be surprised to hear they were often vandalized.


I wonder if any of that performance is due to the intrinsic safety requirements, especially the need to select only intrinsically safe components that will never produce enough energy to ignite gasoline vapors under any transient condition - https://en.wikipedia.org/wiki/Intrinsic_safety


Yet, they can process data fast enough to display video ads.

I don't think this is the reason.


Funny you should mention this. I regularly use the gas pumps at our local chain grocery store (Kroger) at two different locations; one with old pumps and one with new pumps. Originally the displays were very snappy and I could get in and out pretty quickly. About a year ago they updated the software on the "new" pumps and it got very, very slow (along with some questionable font choices). It was so bad that I went out of my way to visit the "old" pumps more frequently. Unfortunately, the old pumps were eventually "upgraded" and I now have to deal with crappy gas pump software everywhere.

So, I don't think it's based on the age of the pump. I'm guessing that there is a pretty standard architecture that they all adhere to. So yeah, I bet they run Java.

BTW: I've noticed this same thing with ATM software. I'm pretty sure they run Java as well.


Our local ATMs are slow because they pause to show ads.

"Please wait" is a deliberate feature, not a bug.


Even my car has a touchscreen that requires, on average about 1 second to register a touch.

I mean, a replacement capacitive touch screen for my samsung mobile is like 2000 Rs. (30 USD)

It's frustrating how far we have come and yet how far behind we are.


I doubt your average phone screen would be happy being left in 80+ °C for prolonged periods of time. Automotive grade exists for a reason, as Tesla[1] found out the hard way.

[1]: https://www.thedrive.com/tech/27989/teslas-screen-saga-shows...


That's not an excuse for a shitty experience; that's a materials cost / part performance justification. Think first-gen iPhone: by now it's worse in performance than the better infotainment systems, but infotainment still sucks.


Weren't a lot of those Windows CE back in the day? So regular C/C++/... apps under the hood, just terrible ones, on asthmatic hardware.


Windows Phone 7 was ecstatically phenomenal though, despite being CE based.


> I bet they run Java.

And maybe it runs on a Windows CE device with an anemic CPU..

As an aside: are fuel pumps with embedded computers and payment terminals a US thing? Where I live, you pay at the cashier after filling (and that is the most annoying part, because they try to upsell you a hotdog/coffee or washer fluid and encourage you to join the loyalty program).


"Would you like to join our customer loyalty reward program?"

"No, I'm just naturally loyal. Runs in the family."


Java is quite fast.

In general, for 99% of tasks, if it takes more than a few milliseconds there's a network involved.


I’ve seen a few with the Windows BSOD but not enough info to figure out what’s otherwise going on.


Like ATMs and other common access points: it used to be Windows CE under the hood. Since the new line it can be Windows 10 now; its lightweight/embedded version offers a kiosk mode (pick an app and an account, and at boot it auto-logs-in, auto-starts the app, and refuses anything that takes focus from the app, basically... except when something crashes, then you might see the underlying parts).


I think they run slower the hotter or colder it is outside.


But why? It's not like running at 20 MHz is going to produce an exponential amount of heat in the processor.

I mean, an ESP32 module runs at about 160 MHz, and using it with a capacitive touch screen is almost as smooth as on a mobile.


I think that at least in some cases it has more to do with the LCD display's refresh rate and its response to cold temperatures. "As the liquid in the display freezes the response time slows down. In other words, it takes longer for the numbers and letters on the display to change (Turn ON or Turn OFF)."

Reference: https://focuslcds.com/journals/what-happens-to-the-lcd-in-co...


> I bet they run Java

Probably Electron apps.


probably a remote session to node.js app


Maybe California mandates the delay to discourage gas consumption...


This could actually be a good mechanism; it disincentivises people of (nearly) all incomes better than flat taxes do.


Yes, and the firmware for the USB-C charger wasn't woven into its memory by hand (core rope memory) the way the AGC's was. They did that for reliability purposes. It's amazing the craftsmanship that went into so much of that rocket, by people from all walks of life.

http://www.righto.com/2019/07/software-woven-into-wire-core-...

Here's another great article, on the hand-welds on the F-1. In only a short amount of time we've outsourced so many of these production tasks to other software/machines. But it's really amazing to contemplate and appreciate what a work of human hands Apollo was.

https://www.wired.co.uk/article/f-1-moon-rocket

It's hard for me to grasp it, but I can't help but think it beautiful when I meditate on all the _people_ involved in the moon landing, each person playing a small part in a very complex symphony.


There is a scene in From the Earth to the Moon that takes place after the deaths of the Apollo 1 crew. Frank Borman is testifying before a congressional panel and, in talking about Ed White, he said, "At the plant, Ed saw a group of men off to the side so he went over to talk to them, which never happens. It turned out they were the men who made the tools that build the spacecraft that will take us to the moon." It wasn't just the scientists, engineers, and technicians. Everyone who had a role in the program felt a sense of responsibility and pride in what they did to take these astronauts to the moon and bring them back. It was an amazing time, considering all that was happening at the same time. I feel very fortunate to have been able to witness the whole thing as a kid.


Look up the "Moon Machines" documentary series - it's great storytelling with amazing B-roll. The GNC episode has stuff about the core rope, and the "first stage" episode had the story of the surfers who worked on the tanks.


https://www.youtube.com/watch?v=6zotaRLROtw "Many stones can form an arch, singly none, singly none". This is one of the first songs I learned to perform, along with Somebody Will.


This is awesome! The media loves to say your pocketwatch is more powerful than X from $date. This is a fun technical rundown.

I think at some point we have to start talking about how far from optimized we are. https://en.wikipedia.org/wiki/MenuetOS comes to mind when talking about what can be done in FASM. Are we ever going to get compilers to the point where we can squash things down to that size and efficiency? Is this yet another thing that AI can promise to solve someday?


> Are we ever going to get compilers to the point where we can squash things down to that size and efficiency?

Compilers are already that good.

The problem is economics. If you have an OS that uses up 1% of the available CPU budget, there will be pressure from management & product to spend the additional 99% on additional bells & whistles. Maybe you add fancy animations and transitions to window opening. Maybe you add full-text search and spend additional CPU cycles indexing. Maybe you add transparent cloud storage and spend the additional CPU cycles backing up files. Maybe you add new frameworks for developer productivity, all of which suck up CPU time.

And the thing is - from a business perspective, these are all the right thing to do. Because having a slick, glossy UI and a long list of useful features will sell more copies, while being able to run it on a USB-C charger or 25 year old computer will probably not sell more copies.


Speaking of fun technical rundowns, I've idly wondered:

- Approximately which year's average desktop PC is equivalent to a current Raspberry Pi?

- What year's worldwide computing power is equivalent to a current Raspberry Pi?

I'm sure folks can come up with other comparisons to make. I know it's not very useful to the world, but it's fun to think of just how much astonishing compute power we have these days.


For the first question, probably somewhere in the mid-2000s. Maybe early Core microarchitecture.


I'll give you a mid-2000s Celeron, before core, except 4 of them because it's a quad core, and with a few times more RAM since you can get pis with up to 4GB now. I had a 1.8Ghz single-core celeron with 512MB of RAM in 2005, upgraded to 3.0GHz with 768MB by 2006, then leaped to a Q6600/4GB in 2007. Not sure how my Radeon 9200 and later X1600 would've compared to the Pi GPU though.

This guy puts the rpi 4's GPU at 8 GFLOPS:

https://www.raspberrypi.org/forums/viewtopic.php?t=244519

Looks like the X1600 was around 6 GFLOPS:

https://en.wikipedia.org/wiki/List_of_AMD_graphics_processin...


You could do dual socket on Socket370, don’t know about quads though


- How many Apollo 11 guidance computers would you need to run Doom?


I believe you require a 32-bit computer and I think those were 15-bit. So no Doom :(


Though, where there's a will there's a way...

https://doom.fandom.com/wiki/16-bit

>After the release of Doom II on the PC, the original three episodes were released to some 16-bit consoles that used special 32-bit enhancement hardware. For the Sega Genesis, the 32X allowed for additional address space to enable Doom to run its demanding resources of the time which a 16-bit system wouldn't have handled. Whereas with the SNES, the Super FX 2 chip inside the DOOM cartridge allowed for an internal co-processor of the game cartridge to eliminate the need for bulky addons for the aging SNES.

Would it count if you somehow built in a 32-bit coprocessor for your 15-bit apollo doom port?


The question was whether it could do it, which it can't. But yeah, you could use the Apollo for controller I/O and put the rest on a Raspberry Pi, but that kind of defeats the purpose.


But the data width is the least of the concerns anyway. You should realize a 15-bit computer can compute everything a 32-bit one can?

The bigger problems are more pedestrian like complete lack of suitable IO.


Could you please stop creating accounts for every few comments you post? We ban accounts that do that. This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.

HN is a community and we want it to remain one. For that, users need some identity for others to relate to. Otherwise we may as well have no usernames and no community, and that would be a different kind of forum. https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...

You needn't use your real name, of course.


So you'd need 2.13 of them is what you're saying.


My idle wondering one day was: when in history did the worldwide available compute power (roughly) equal that of my iPhone 11?


If it can be done efficiently in asm then it can be done in C, C++, perhaps even Rust. But there are sacrifices to be made: unmaintainable “clever” code without clean abstractions.


There are architectures that are just fundamentally C-hostile. Perhaps the most famous one is 6502. No efficient way to implement stack based parameter passing, generic pointer arithmetic, and lots of other bits and pieces.

You can still use C on 6502, it's just not very fast.

I guess you could define another language that mostly looks like C, but built around particular CPU architecture quirks.


I think that, partly, it isn't so much that it's C-hostile as that C compilers mostly target other architectures, and that many C programs take too much for granted, such as the existence of huge 16-bit numbers or efficient multiplication.

A compiler/linker combo for the 6502 should, for example, analyse the call graph of the entire program to figure out which return addresses it needs to store on the stack at all. Alternative 1: store the return address at a fixed address in main memory (works in single-threaded code that isn't recursive). Alternative 2: instead of RTS, jump back to the caller (works for functions that get called from only one place; aggressively inlining those is an alternative, but may turn short branches into long ones, with their own problems).

Similarly, a compiler/linker for the 6502 probably should try really hard to move function arguments to fixed addresses, and to reuse such memory between different functions (if foo isn't (indirectly) recursive and doesn't (indirectly) call bar and vice versa, their function arguments can share memory).
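A rough illustration in C of the "arguments at fixed addresses" idea (the names and the scheme here are made up for the sketch; real 6502 compilers have their own conventions):

    #include <stdint.h>

    /* Conventional C: arguments passed on a stack, which the 6502's
       8-bit, page-one stack handles poorly. */
    uint8_t add_stack(uint8_t a, uint8_t b) { return a + b; }

    /* What a 6502-friendly compiler would rather emit: each non-recursive
       function gets fixed "argument slots", reusable by functions that can
       never be live at the same time (determined from the whole call graph). */
    static uint8_t add_arg_a;   /* in practice these would live in zero page */
    static uint8_t add_arg_b;
    static uint8_t add_result;

    void add_fixed(void) { add_result = add_arg_a + add_arg_b; }

    uint8_t caller(void) {
        add_arg_a = 2;          /* "pass" arguments by storing to fixed slots */
        add_arg_b = 3;
        add_fixed();
        return add_result;
    }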

And yes, self-hosting such a compiler would be even more challenging, given the 64kB memory limit.


Ben Eater has a fantastic series of videos where he builds a hello world program on the 6502

https://www.youtube.com/watch?v=LnzuMJLZRdU&list=PLowKtXNTBy...


I wonder if you can write fast C for the 6502, though you'd have to constrain your use of the language considerably?


The 6502 is a very constrained CPU. It only has 6 registers, only one of which (PC) is 16 bits. The stack is fixed at addresses 0x0100 to 0x01FF, and the two index registers are 8 bits in size. So are the accumulator and the stack register (hence the fixed address for the system stack), and there are no stack-based addressing modes. You'll have a severe penalty in doing any 16-bit arithmetic. There are no general multiply or divide instructions. I think the architecture is about as hostile to C as you can get and still be popular.
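As a small illustration (plain C, not 6502 code): without a hardware multiply, even a one-line 16-bit product has to be synthesized as something like this shift-and-add loop, with every 16-bit operation further split into 8-bit pieces.

    #include <stdint.h>

    /* Shift-and-add 16-bit multiply (result modulo 2^16), roughly what a
       compiler or its runtime library has to emit on a CPU with no MUL. */
    uint16_t mul16(uint16_t a, uint16_t b) {
        uint16_t result = 0;
        while (b) {                 /* up to 16 iterations */
            if (b & 1)
                result += a;        /* each 16-bit add is itself two 8-bit adds */
            a <<= 1;
            b >>= 1;
        }
        return result;
    }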


The main problem is that most compilers really need lots of registers to optimize well. Without lots of registers, you're constantly spilling to memory and that makes things slow. If you program in assembly, you have visceral knowledge of when you overrun your registers and you will restructure the entire architecture of your program to avoid that.


What are some languages (other than its own instructions) that 6502 is friendly to?


BASIC, arguably. People often say C is 'high level assembly' but in some ways BASIC is closer with the absence of block structure, scopes, etc. Control flow in a BASIC (basic BASIC, not fancy newer BASIC) program ends up looking a lot like that in an assembly one.


Given that all that high-level control flow is rewritten into gotos anyway, why would that make a difference in general, or specifically for 6502?


All high-level control flow is very definitely not rewritten into gotos. Almost all cpus have some form of function call instruction, and c functions are implemented by using it. Without such an instruction, or even a fast way to model it, C becomes much less efficient to implement.


OP was talking about block structure, not subprograms. In any case, BASIC has them.


FORTH was pretty widely used on 6502-based machines.


Sounds like it was targeting Fortran.


What kind of hardware do pocket (/wrist) watches run? (Not smart watches) How much RAM/ROM/CPU? E.g. those famous Casios with LCD.


God bless Richard Stallman for open source and GitHub for making sharing it easy

https://github.com/carrotIndustries/pluto/blob/master/README...


My TI Chronos "development kit" watch (AKA my normal watch) contains a CC430F6137 mcu, which has 32 kB flash and 4 kB RAM. It contains a MSP430(X) core and a CC1101 RF front-end for communicating with other accessories or a computer.

It's running the (custom/not from TI) open-source firmware openchronos-ng-elf, which is on github. When I get around to it, there are a few features I'd like to add, but it's nice just to know that I can.


Quartz watches use a 32.768 kHz (1/2 * 65536 Hz, i.e. 2^15 Hz) crystal, which means its output, run through binary counters, easily produces clock ticks at convenient fractions of a second. Those can then be fed to decoders driving the LCD, so there's no need for a uC.
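A quick sketch of that arithmetic (simulating the 15-stage divide-by-2 chain in C, purely for illustration):

    #include <stdio.h>

    int main(void) {
        /* A 32,768 Hz crystal followed by 15 divide-by-2 stages
           (a binary ripple counter) gives exactly one tick per second. */
        unsigned freq = 32768;              /* 2^15 Hz */
        for (int stage = 1; stage <= 15; stage++) {
            freq /= 2;
            printf("after stage %2d: %u Hz\n", stage, freq);
        }
        /* final line: after stage 15: 1 Hz */
        return 0;
    }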


Apollo 11 used a heck of a lot more materials and energy than a USB charger.


And it had a lot more redundancy and probably dealt with radiation better than a USB charger. Different designs for different applications.


I note that the Google charger has much lower specs than the others. This is what I would expect from them. I worked for a company that was excited about getting them as a customer for our data processing services. We thought (at least I assume management had high hopes) "wow, Google, they must have tons of data and they will pay us tons of money to deal with it". Nope, they optimized and filtered it to the point where they weren't sending us even 1% of what other customers were, since they were being charged based on volume.


One is a phone charger, the second is a power bank, the third is a charger for 2 laptops, the fourth is a spacecraft navigation system.

Now, if you calculate clock speed per watt, you'll see that the Google charger easily beats the Huawei and the Anker.


What do you know, a data company that knows how to deal with data when it counts.


[flagged]


Why do you see that comment as dumping on Google? I see it as complimenting them for taking the time to optimize the hardware rather than just throwing a larger chip at it.


Obviously Google has a lot of really smart engineers internally, but they still outsource stuff. If you're working in a field that involves data processing, and you are always thinking "this is basically just search, I wonder how Google would do it and why they don't" and then one day you get Google as a customer it realigns your thinking. But then Google turns out to be different from other customers - even when they rely on outside expertise, they seem unusually efficient at it.

I think it's interesting not just if you're interested in Google, but to contemplate what the limitations of talent and intelligence are, and how they can be utilized in unexpected ways. And what synergies exist between the people at the very top of a field and those who aren't.


Doesn’t seem like a disparagement, Google comes off as more efficient than expected.


To me, building a charger with a CPU that couldn't make it to the moon is a sign of old-fashioned quality.


Yeah modern microcontrollers are probably less efficiently used than they could be, but the AGC essentially crashed twice during the lunar landing because it didn't have enough compute available.


The real fun with the AGC on landing happened on Apollo 14, with a flipped bit in the abort landing flag. Percussive maintenance temporarily flipped the bit back, giving enough time for ground control to develop a hack. Didn't they just ignore the warnings on Apollo 11? And I think it was non-synced timing references rather than strictly resource availability that got them on 11.


The docking radar (the find-your-way-back-to-the-mothership radar) was still turned on, so the CPU was receiving the data and having to process it.

The CPU was throwing an "I'm overloaded" signal, but dealing with it, as the process priorities put the docking radar at a lower priority than the landing sequence.

So the CPU was complaining but still doing its job.

All this is from memory :P


Yes, just looked it up.

I was (partially) wrong about the un-synced frequency though. The synchros for the docking radar ran from a different power circuit than the AGC, and whilst they were frequency-locked they weren't in phase, resulting in timings getting thrown off and the priority issues.

Edit: Superb deep dive on AGC in Ars from last week https://arstechnica.com/science/2020/01/a-deep-dive-into-the...


The reason for warnings on 11 was that the landing was done in what was essentially an unsupported configuration. Still, the “kernel” behaved as designed: it didn't run the low-priority task (which was not supposed to be running in the first place) and raised a warning about doing so.


Actually, the radar counter wasn't handled by any software task; it was a hardware interrupt hardwired to steal CPU ALU cycles to increment/decrement a memory location. Abnormally high steal time caused the main navigation job to run too long, and it was getting scheduled again before the previous instance had finished. That led to memory exhaustion and a BAILOUT software restart.

source: https://www.doneyles.com/LM/Tales.html


It didn't crash. It behaved as designed. Too much work to do, and the scheduler had to deal with priority problems.


It ran out of memory for allocating new tasks, which were stacking on top of each other, and triggered a BAILOUT restart. Navigation continued because it was restart-protected, reloading a state checkpoint from a dedicated memory area.


That’s not entirely true. It suffered from a priority inversion.


It suffered from simple CPU overload; in order to have priority inversion you need to have preemptive multitasking and synchronization primitives. The scheduler on the AGC was essentially cooperative.

Side note: that is partly what the 90's term “Network Operating System” is about. Both Cisco and Novell marketed the hell out of the fact that IOS and NetWare are not real multitasking OSes but essentially giant event-driven state machines (nice related buzzword: “run-to-completion scheduler”), and they managed to sell that as a good thing.
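For illustration, a minimal sketch in C of a run-to-completion, priority-ordered dispatcher (not AGC or IOS code; the task names and numbers are made up, this is just the general shape of the idea):

    #include <stdio.h>

    /* Each task is a plain function that runs to completion; there is no
       preemption, so nothing can interrupt a task halfway through. */
    typedef struct {
        const char *name;
        int priority;            /* higher number = more urgent */
        int pending;             /* set by events/interrupt handlers */
        void (*run)(void);
    } Task;

    static void navigation(void) { puts("navigation step"); }
    static void telemetry(void)  { puts("telemetry step"); }

    static Task tasks[] = {
        { "navigation", 10, 1, navigation },
        { "telemetry",   1, 1, telemetry  },
    };

    int main(void) {
        /* The "scheduler" is just a loop: pick the highest-priority pending
           task, run it to completion, repeat. Under overload, low-priority
           tasks simply never get a turn. */
        for (int iteration = 0; iteration < 4; iteration++) {
            Task *next = NULL;
            for (unsigned i = 0; i < sizeof tasks / sizeof tasks[0]; i++)
                if (tasks[i].pending && (!next || tasks[i].priority > next->priority))
                    next = &tasks[i];
            if (!next) break;
            next->pending = 0;
            next->run();
            tasks[0].pending = 1;   /* pretend another navigation event arrived */
        }
        return 0;
    }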


And despite the fact that a wall charger has more compute power than the Saturn V, we can't actually go to the moon today.

We've actually lost the ability to go. No one understands how the engines on the Saturn V work anymore. It would take years to tear them apart and analyze them, and most of the original engineers are dead.

And the guidance systems would have to be built back up from scratch.

Kind of sad if you think about it.


None of that is true. It's really annoying to see it repeated. It's even been covered on HN https://news.ycombinator.com/item?id=7304188 before.

There are parts of the F-1 engine that would be difficult to exactly replicate because while the engineering specs are known the exact fabrication processes weren't all documented. An exact workalike could be made for those parts that fit the specification without a problem. That new engine would need to be entirely requalified for use.

The F-1 is big and expensive and rebuilding them has long been outside of NASA's budget. If they were going to rebuild the F-1 it would be the same expense and effort to just design an entirely new engine.

This is why, for instance, the SLS is using the SSME as its first stage engines. It's a known system with a long and successful (the engine itself) flight history. It's been iterated upon and is well understood.

The design philosophy of rockets has also changed from the 60s when the Saturn was designed. Solid fuel boosters were not considered useful for Saturn but have been proven since then and have a long flight history. There's also the SpaceX/Soyuz model of an array of smaller/simpler engines. A small number of monster engines is just not needed like it was for Saturn.

The expense of the F-1 and the lack of real need has kept NASA from using them, not some loss of technology. The guidance systems...are a long long solved technology. Modern guidance systems are better than the Apollo and Saturn systems by orders of magnitude. They're far smaller which allows redundancy and still lower weight than old systems.

Since Saturn NASA has launched successful probes to every planet in the solar system and several dwarf-planets, landed several extremely long lived rovers on Mars, launched hundreds of satellites, over a hundred Shuttle missions, and assembled one of the most complex machines ever in orbit.

What's sad is the mistaken belief that Apollo was some pinnacle of capability and technological prowess for NASA.


Actually, a solid fueled Saturn V first stage was considered with multiple prototypes built and fired:

"The largest solid rocket motors ever built were Aerojet's three 260 inch monolithic solid motors cast in Florida.[18] Motors 260 SL-1 and SL-2 were 261 inches in diameter, 80 ft 8in long, weighed 1,858,300 pounds and had a maximum thrust of 3.5M pounds. Burn duration was two minutes. The nozzle throat was large enough to walk through standing up. The motor was capable of serving as a 1-to-1 replacement for the 8-engine Saturn 1 liquid-propellant first stage but was never used as such. Motor 260 SL-3 was of similar length and weight but had a maximum 5.4M pounds thrust and a shorter duration."

and

"Between Sept. 25, 1965 and June 17, 1967, three static test firings [of the AJ-260 rocket] were done. SL-1 was fired at night, and the flame was clearly visible from Miami 50 km away, producing over 3 million pounds of thrust. SL-2 was fired with similar success and relatively uneventful. SL-3, the third and what would be the final test rocket, used a partially submerged nozzle and produced 2,670,000 kgf thrust, making it the largest solid-fuel rocket ever."

For more see: https://en.m.wikipedia.org/wiki/Aerojet#Florida_facility_and...


>No one understands how the engines on the Saturn V work anymore.

That's not really true [1]. Although the project is dead AFAIK a fair bit of work went into reviving the F-1 engine in the early 2010s.

>And the guidance systems would have to be built back up from scratch.

As with many things, you can't easily replicate complex old artifacts because you don't even have the tools to make the tools to make the tools... No, we couldn't build an Apollo Guidance Computer today. I doubt you could easily get things like rope core memory or just about any of the components. But you wouldn't want to anyway. (The guidance computer would probably be the least of your challenges; this is a pretty well-understood technology and the company that made it is even still around.)

[1] https://www.thespacereview.com/article/3724/1


> I doubt you could easily get things like rope core memory or just about any of the components.

"Any of the components" is an exaggeration. For example, bipolar transistors are still here, here's the schematics [0] of the switched-mode power supply in the AGC - still perfectly understandable, rebuilding it can a fun weekend project. Also, discrete NOR gates are still around, although the underlying technology is different.

> As with many things, you can't easily replicate complex old artifacts because you don't even have the tools to make the tools to make the tools... No, we couldn't build an Apollo Guidance Computer today.

I think there's a difference between a 1:1 faithful replication and a replication based on the same architecture and/or principle of operation. While it's difficult to build a 1:1 faithful replication, rebuilding a new one based on the same architecture in general is much easier.

So when the original commenter said "no one understands how the engines on the Saturn V work anymore", I think the point to be made is whether the ideas from the Space Age - e.g. the Saturn V architecture - have survived, not whether rebuilding it makes sense (it probably does not).

> The guidance computer would probably be the least of your challenges

+1.

[0] https://www.righto.com/2019/08/reliable-after-50-years-apoll...


Fair enough.

>whether the Saturn V architecture has survived

We understand very well the basic approach that was taken to land on the moon and the hardware we used to do so. We'd have to re-engineer a lot of things that aren't just off-the-shelf to do it again. But we could do so pretty quickly if there were any compelling reason to do so.

As I understand it, current technologies that could enable a moon landing are being worked on in the broader context of going to Mars.


> We'd have to re-engineer a lot of things that aren't just off-the-shelf to do it again.

Yes. I think this is the perspective that people tend to overlook - every engineering project is unique, in the sense that it's optimized to do the job under very specific constraints - costs, production capabilities, available materials, etc. When the circumstances change, sometimes you simply can't pull the same blueprint out of the archive to make another one; redesigning it is sometimes unavoidable, even if there is nothing wrong with the older design from a technical point of view. I think the same applies to software development.


We could easily go to the moon. It's just a completely useless exercise to do so, unless you can find a way to taunt China into another round of the space race.

The scientific case for the moon was always weak, and most anything they could come up with has been done. Just look at the useless lets-grow-salad-in-space-oh-look-its-just-salad timesinks the ISS has been busying itself with.

So, in a way we've become much smarter. And SpaceX, a commercial outfit with solid financials, a good track record on safety, and rather unlimited ambition, is probably the most exciting thing happening since the first moon landing.


> useless lets-grow-salad-in-space-oh-look-its-just-salad timesinks

This is called basic research. It's geared towards building greater knowledge of a study area without specific concerns towards application. That's how science works. We don't just fund the stuff that's immediately profitable (though things are shifting that way).


It might seem 'obvious' that salad is just salad when grown in space, but you don't know that until you try. Understanding how various foods grow in space would seem to be a pretty basic necessity if we're going to attempt any kinds of long-term settlements or research stations where regular resupply from Earth is dicey.


That's the same SpaceX that has launched 240 visible objects into orbit in the last year?

More than doubling the number of visible man-made objects (~200 till the start of the SpaceX Starlink project).

And with FCC approval for another 12,000 and plans for a further 30,000.

It excites me alright, but certainly not in a good way...



I don't believe that's a good portrayal of the situation. I doubt that there's anything all that special about the actual Saturn V engines. Sure, maybe nobody remembers all of the engineering details about those particular engines, and it would be hard to reverse engineer them. But I'm pretty sure we would find it much easier to design and build new engines of similar spec today. SpaceX, Blue Origin, and several other new space companies don't seem to have had all that much trouble designing and manufacturing new, powerful rocket engines.

Ditto guidance systems. Why try to replicate something designed around the constraints of hardware that seems ancient by today's standards? No reason to think we couldn't build something new with equivalent or better features on modern hardware.


The F-1 engine is interesting from this PoV because the thing as documented does not work (and even the documented design is full of weird design decisions that nobody today knows the reason for); it worked by a combination of luck and the institutional knowledge of the actual machinists who built it, not exactly to specifications.

In fact if you take various “high tech” designs from the middle of the 20th century you will see a pattern of things that cannot work if built as designed, but work because of random effects that are certainly outside of what the original designers thought of. (I have seen a large amount of instrumentation electronics where sag on the supply voltage is part of the feedback loop, and it will not work without it. I highly doubt that the original engineers consciously designed that.)


This reminds me of the story about using genetic programming to design hardware, where it would reliably generate designs that worked... somehow. Complete with apparently nonfunctional parts that couldn't be pruned without breaking it.


Your comment reminded me of the story of the MIT "Magic" button [0].

[0] http://catb.org/jargon/html/magic-story.html


Something requiring human labor doesn't make it impossible! There's no lost scientific knowledge. Engineering details may be lost but we can design a new rocket that works just as well. We send unmanned probes to the moon all the time. There's no loss of resources to do it either. We have more money than ever before. We just can't be bothered. That's all.


Even if we could be bothered, I think we've lost the political technology to organize large government projects and complete them on a sane budget/timetable.

The budget plan for the James Webb Space Telescope was $500m with completion in 2007. We're now at $9.6b with completion predicted in 2021. Some of the cost overruns and delays were caused by the unexpected complexity of fabricating the mirrors and assembling them in space, but that underscores my point. The Apollo mission was orders of magnitude more ambitious at the time.

I'm not pointing any fingers because the causes are complex. NASA is underfunded and relies on unreliable funding from Congress, which itself is a source of inefficiency. Defense/aerospace contractors are greedy, sure, but there's a much bigger pie to be had if past successes encouraged even larger-scale projects like moon colonization they could bid for.

I don't know what's gone wrong, but we are failing on these big projects as a species for some reason.


Yes, that does seem to be true. Though I wonder if the incredible amazingness of the first moon trip together with a sense of competition with USSR actually gave workers throughout the project more personal motivation to put in real effort instead of just being minimum-effort drones in a bureaucracy that doesn't matter to them. Doing it all again will never be the same. Even going to Mars won't be such a massive leap.


> we can't actually go to the moon today.

On the same tech we did 50 years ago.


We arguably can't go for entirely different reasons - lack of motivation.

There are things to do on the Moon, right now. Basic science ought to be enough; but applied things like material extraction and tourism would add to that. However, we consider those things a lower priority than some others - and we're getting a lot of short-term bang for our bucks, just not much of a foundation for future developments.

So, today we don't see the reason to apply efforts to get there. Those few who could justify going "in small" are too few and far between to mount a concerted effort. So we're slowly developing such things as cheap LEO launchers and lunar landers - the precursors to a return to the Moon. We'll get back there - in the future, not in the present. In the present we still can't.


The thing is given modern technology it does not make sense to replicate Saturn V 1:1 except for historical reasons.

Other than all the advances that will give you much better performance at a lower cost enumerated in other comments, there is another problem.

When you dive a bit deeper into the history and design of the Saturn V (the NASA history website is a good source) you kinda get the feeling it was hacked together in many places to meet a deadline. Many of its systems were very complicated, labor intensive, or hard to upgrade or extend, just so that they could meet the deadline and beat the Soviet Union to the lunar surface. The unlimited money pass they got for this for a while definitely did not help matters.

This, in my opinion, is also why Saturn V production ended so quickly - it did meet the deadline, but was just too expensive and inflexible, due to all the hacks, for regular budget-conscious "production" use.


I've long imagined that the situation was not so much can't, as that USA/NASA/people have become more risk averse. It was OK that the people strapped to the top of a Saturn V might die when y'all were fighting to "beat the commies", but to put people on the moon, again, ...?

Interestingly, China germinated plants on the moon this year, but that news seems to have been buried in the UK. So getting a payload to the moon, guidance systems and such, isn't the problem.


SpaceX


Sure, you could fly to the moon with 4 USB-C chargers. But how would you charge your phone en route?


5 USB-C chargers


Haven't you watched Apollo 13? You would end up needing to power the spacecraft from the phone.


I hear solar panels can give a pretty solid amount of power.


Asking the important questions


otg


This is brilliant and hilarious, but, as a person who cares about this, I honestly have no idea what to do. Fighting software waste feels like getting in a fistfight with the ocean or something. I could write everything in C or assembly, I don't even think that notion is crazy, but my clients want JavaScript. Not that it's even JavaScript's fault, or any particular language's fault, I just don't know how you convince anyone that maaayybee we could actually take advantage of the hardware in front of us instead of just adding extra waste every 18 months.


As we're in a closed energy loop we're going to have to address this, sooner rather than later.

Every starry eyed AI/ML piece has me thinking, yes but where's the power going to come from? And where's the power on top to encrypt all this?

And quantum computing? Super-cooling at scale? Aye right pal.

The elegance of the AGC and its approach to computing could provide a good starting point to a solution out of our mess.

To quote from a 2004 paper by Don Eyles, one of the AGC programmers:

"When Hal Laning designed the Executive and Waitlist system in the mid 1960's, he made it up from whole cloth with no examples to guide him. The design is still valid today. The allocation of functions among a sensible number of asynchronous processes, under control of a rate- and priority-driven preemptive executive, still represents the state of the art in real-time GN&C computers for spacecraft."

https://www.doneyles.com/LM/Tales.html


Nah. I feel a lot like you, and can't wait for the day optimizations matter again. But realistically, fat chance. It's easier and cheaper to hire 50 code monkeys to throw crap at the wall and make something surprisingly functional than it is to hire a few who know how to wrangle bits NES style. And even as processors max out, it's still cheaper to just throw more CPUs at the problem. There will always be niche markets that demand real understanding and skills, but the general 'coder' market is lost for good.


Software bloat is a problem. But I am not sure this is really an example of that. I would guess that they used a relatively powerful microcontroller because it was cheaper and easier than implementation using custom or discrete components. And processors have become so ridiculously cheap that it is cheaper to implement features in software than hardware. As a programmer I think that is really exciting.

The trend is to use microprocessors for increasingly trivial things. You use that power to implement a UI, or a light bulb, or a charger. That is far less important than driving a spaceship. But isn't that true of most things we develop?

I agree that the trend towards browser based apps is not always efficient. But the browser does a surprising amount in the background to even out hardware differences. It helps you take advantage of the hardware (like a powerful GPU) without having to worry about specifics. That was never exactly easy in developing a conventional native app on a traditional OS.


Interesting, I see absolutely no problem with this. There's no reason to use worse chips in USB-C chargers, because the cost savings would be minimal and possibly outweighed by software development costs.


One of the most important things that is left out of most computing narratives is that the early space programs had a tremendous amount of stuff pre-computed, i.e. cached and memoized through the hard work of mathematicians and physicists back on Earth. It is not like these days we don't have to run those numbers, just that our computers have taken over most of what used to be done by hand. Noting the amount of human computing manpower taken in conjunction with the guidance system would give a much clearer picture.


Interesting... I wonder whether relying on those pre-computed numbers is actually safer than relying on dynamic inputs from sensors (which, as we have seen, can break, ice over, be installed the wrong way around, etc.).


It would be interesting to also compare the power consumption of the embedded ARM processor in USB-C chargers with the power consumption of AGC (Apollo 11 guidance computer).


Wow. I'm a vintage computer enthusiast, specifically Atari, and my beloved Atari 800xl computers only have a 1.79MHz clock speed.

I'll never cease to be amazed by technology. I was born in 1985 and our first computer in the house (aside from an Atari 2600 and NES) was a machine in 1995, although my computer usage began in 1990 with Apple II variants. My first portable (Game Boy aside) ran Windows CE around 1998ish and my first remotely 'smart' phone was a Treo 650 about 14 years ago. I still get plenty of value out of 8-bit Atari computers, yet the USB charger sitting on my desk I'll discard without a thought if it breaks, and it has a clock speed 5.5x that of my beloved Atari 800xl computers. But the 800xl did come with 64k of memory from the factory, so at least there is that, ha.

Just looking at how far the technology has come from 1979 (800xl release date) to 2002 is mind boggling. Had you described an Android or iOS phone to me in 1995 I'd have asked "are you writing a science fiction novel?" and now I have 5 of them I use on a daily basis (multiple accounts on a freemium game) and 2 that I carry on my person. It's crazy and wonderful and terrifying and even a little unbelievable.


*to 2020 is mind boggling.


I find it a bit scary that a microcontroller controls charger voltage. There might be a risk of power glitches or bugs causing output voltages which destroy devices.


Author here.

Thankfully device-side microchips that interface with USB chargers come with overvoltage protection. For example: http://www.ti.com/lit/ds/symlink/tps65983b.pdf

Devices which charge from wall chargers have to undergo testing for power glitches to obtain FCC certification. I forget the name of the test. Somebody else will know what I'm talking about!


There are a bunch of twilight power conditions in embedded systems that can screw up their function, e.g. brownouts that can lead to partially or fully destroying flash memory in a microprocessor system.

They are usually sporadic and no surge protection test will provide a guarantee against them.


I experienced a brownout a few months ago that destroyed three of my in-wall smart switches. I assume it was the flash memory that got corrupted or destroyed.


This somewhat legendary teardown of an Apple charger is a fun read: http://www.righto.com/2012/05/apple-iphone-charger-teardown-...

Basically, they seem to do a pretty good job isolating high- and low-voltage circuits. As far as I understand it, it is air-gapped, and physically impossible to induce dangerously high voltage on the device side without some mechanical failure that bridges the gap.


A USB PD charger could try to put out 18V to a device that's expecting 5V, though, and that can still burn out the device.


Assuming reasonable charger design (ie. at least some overcurrent protection) the internal ESD protection of USB-C port in the device that only ever expects 5V (think SRV05-4) should be more than enough to safely trip the overcurrent protection in the charger (or overload it enough that the destroyed component is the charger and not the device).

On the other hand there are documented cases of USB-C PD between two device-ish things (ie. things that can be both power source and sink) failing in variously spectacular ways.


The problem is that the ESD protection can only stop short duration spikes. The device would have to have a fuse (or other overcurrent protection) to prevent the 20V @ 5A as allowed in the USB spec from destroying the ESD circuit.


A USB port should always have a polyfuse. And active disconnect isn't hard either.


Yes, of course. I was thinking of the risk of death and injury, not broken gadgets. Because it is somewhat disconcerting to think how often we/babies touch/stick in our mouth a cable with an almost direct connection to the grid.


I remember a guy who reviewed a lot of USB-C cables for compliance (NathanK) having a video on how non-compliant cables can wreck devices:

https://www.youtube.com/watch?v=SjeZB12985c

Maybe that's an example of what you mean?



And then there's the Nintendo Switch, a black abyss of USB compliance.

https://switchchargers.com/safety/


I would say the risk is the same as with hardware 'bugs', where for example a passive component that drives a power transistor can exceed its expected tolerance. That's why reliable electronic systems (medical, space, avionics) have some components or circuits redundant.


Some definitely can. There’s a Google Sheet out there listing many different chargers and their levels of compliance. I only remember that Apple did pretty well.


And that's nothing. Wait for AI in power sockets! For now, they only have simple processors in USB-C chargers, but soon there will be more intelligence in there than you can imagine! :-)


I think this is the kind of thing we'll be looking back on like this.

AI in EVERYTHING, because it'll be so cheap in 20 years.

Like maybe those useless automatic taps or hand dryers will finally actually know if hands are under them properly because they'll have the equivalent of a modern day supercomputer of AI power trying to figure it out.


Maybe. 20 years ago Moore's Law was in effect. We've become accustomed to "it'll be so cheap in 20 years" because, from 1947 to 2010, transistors dropped in price by a factor of two every 2 years, while getting faster starting in 1975 due to Dennard scaling. Roughly, for ten thousand transistors or so, the prices (in inflation-adjusted US dollars) and response times varied very roughly as follows:

1950: $1'048'576 (1 μs)

1960: $32'768 (100 ns)

1970: $1024 (100 ns)

1980: $32 (20 ns)

1990: $1 (3 ns)

2000: $0.033 (0.5 ns)

2010: $0.001 (0.2 ns)

2020: $0.0005 (0.1 ns)

This is pretty inaccurate, but close enough to explain my point, which is that most of the humans have lived their entire lives and their parents' entire lives in this Moore's-Law regime, and it's coming to an end. But their cultural expectations, formed by three human generations of Moore's Law, have not yet adjusted; that will take another generation.

But who knows? Maybe some other similar exponential economic trend will come along and make AI cheap. But it isn't going to happen just by virtue of Moore's Law the way it would have in the 1980s or 1990s.


20 years is a long time to work some stuff out though.

I think it's a Bill Gates quote (badly paraphrased, probably) - "people overestimate what can be achieved in a year and underestimate what can be done in a decade."

So 2 decades ought to be something well outside our current expectations.

I hope it will be at least!


Imagine being in 1925 and predicting what the future would be like in 1945, now that the Great War was over so that nobody would even think of starting another major war. You might predict cities built almost entirely underground with kilometer-high skyscrapers http://www.popsci.com/archive-viewer?id=YScDAAAAMBAJ&pg=40 https://futurelapse.blogspot.com/2015/05/underground-living-... since skyscrapers had been rising dramatically in height. Pneumatic tube systems would probably make most deliveries https://www.fastcompany.com/3049647/what-the-city-of-the-fut... since they were already in service in large cities, and it was clear that cities would get denser and denser. Since Television was already being demonstrated, you might imagine doctors diagnosing patients over the radio waves https://www.smithsonianmag.com/history/telemedicine-predicte... using Vidicon tubes that made the house calls for them, along with a Teledactyl that transmitted haptic information to allow doctors to feel the patients; moreover, you could watch theatrical performances from far away. Maybe you'd predict that in 20 years abstract art would be even more influential than it was in 1925, and that the US might institute some kind of social security scheme, and perhaps many more cities would have subways, and maybe more utilities would be publicly owned rather than operated for profit. Given the growth of Communist movements in the 1920s, you might predict they would expand further. Maybe you'd predict that future musical instruments would be electric.

Some of that sort of happened, though mostly later (the electric guitar dates from 1932). But think about what you'd be missing: atomic energy (the Cockcroft–Walton generator wouldn't split the lithium atom until 1932(?)). Manchukuo and the rest of World War II. Alcoholics Anonymous. Mechanical refrigerators. Widespread washing machines and consequently women's liberation. The Little Prince. The jake walk. The Long March and Communist China. Looney Tunes. Airlines. Mass nonviolent civil disobedience. Independent India. The end of the British Empire. Comic books (outside Japan). Neoprene and nylon. The end of dirigibles. Vaginoplasty. The Hoover Dam and the Tennessee Valley Authority. Bugs Bunny. Computers and the theory of computable functions, and the death of Hilbert's decidability program. Universal censorship of mass-market films in the US under the Hays Code. Liquid-fueled rockets and ballistic and cruise missiles. The discovery of the Lascaux cave paintings. The discovery that the universe was less than 20 billion years old. The Moonies. The Great Depression. Perón. Merchandising agreements for children's characters such as Pooh. Radar. Antibiotics. Nineteen Eighty-Four. The end of the gold standard after 2500 years. Pluto. Maybe even talkies — you might have predicted in 1925 that talkies would become a mass phenomenon, but it might have been unimaginable that they would entirely displace silent film.

Maybe you would have predicted one or two of these. But nobody imagined very many of them.

So, what are the major developments you're missing over the next 20 years, which will change the world into which AI-powered hand dryers could potentially be born?


GPUs, FPGAs, TPUs, etc.--combined with open source and public clouds that abstract a lot of the complexity--have helped mitigate the impact of the slowdown of CMOS process scaling especially for the workloads that really need the performance. But it's reasonable to wonder what happens when we start to run out of various "hacks" and performance stops getting cheaper and more power efficient.

Big implications that I'm not sure the tech industry fully appreciates.


A brain the size of the universe and it's there, negotiating voltages with some savage hardware! I sure hope it likes a 10kV surge!


When your charger becomes self-aware because someone put a TPU in it by accident...


While I highly doubt it will become self-aware, I almost surely expect it to send metrics of power usage to Google.


Perhaps connect to your wireless network and snoop traffic.


Forget the AGC: the Cypress CYPD4225's clock rate is higher than that of the PlayStation 1 (48 MHz vs. 33). However, I don't see graphics specs here, so there's room for growth.


Might be hard to compare, since the PS1 paired its MIPS CPU with custom coprocessors designed specifically for 3D graphics. https://en.m.wikipedia.org/wiki/PlayStation_technical_specif...


Clock rate doesn't mean very much across architectures, especially when a specialised architecture is involved.


> The CYPD4225 is definitely not rated for space. I have no idea if it would work in space.

There lies the rub. Even today, computers rated for space are relatively underpowered compared to consumer hardware. Comparing the microcontrollers used in USB-C wall warts to the AGC is like saying your car has a higher top speed than a tank. You wouldn't be wrong, but you'd also be purposely ignoring some key differences in design goals.


This just made me realise that these chargers are probably not running on Free Software.

Is it possible to update their firmware? I.e., is there an equivalent of an "update firmware" button anywhere? If so, Richard Stallman would not approve of using these non-free chargers. We should not even mention them, lest anyone would think it's acceptable to use them for anything.


Neither do pacemakers. God forbid RMS ever needs an arrhythmia corrected...


But is updating the firmware a normal function of pacemakers?

He makes a distinction on the update-firmware level — if the microwave has no such functionality, then he does not consider it a computer. Look for "microwave" at http://stallman.org/stallman-computing.html:

> As for microwave ovens and other appliances, if updating software is not a normal part of use of the device, then it is not a computer. In that case, I think the user need not take cognizance of whether the device contains a processor and software, or is built some other way. However, if it has an "update firmware" button, that means installing different software is a normal part of use, so it is a computer.


I think so. A relative has a similar device with remote monitoring, and it had to be re-swizzled when they switched services.


I've a friend who has a pacemaker, who went into hospital for what he reported as a firmware update.


Neither do AEDs which are becoming increasingly connected to the internet. What could possibly go wrong?


Whenever one of my developers tells me that my phone is too slow when their app lags on my Android 4.1 phone, I tell them the specs of the Apollo 11 computer and scold them for being unable to deliver a smoothly scrolling list of messages on a freaking supercomputer in my palm.


But do you also give them a free hand to implement the entire stack below to meet that goal, along with the budgeting advice "waste anything but time"?


That is overkill, and in the end they manage to deliver acceptable results.

You see, the problem with app developers, especially in big companies, is that they are given top-notch smartphones, so they naturally target such hardware as the benchmark for performance, and that benchmark moves every year. That's why a 2016 Moto Z Play smartphone ran smoothly in 2016 but runs really slowly in 2020: all the regular apps (Gmail, Chrome, FB, etc.) were updated with newer flagship smartphones in mind. "The new build runs really fast on my new Pixel 4," the Gmail developer thinks, "ship it to production!"


Have you considered maybe giving them test phones with the configurations they're supposed to support, instead of "scolding" them like misbehaving children? They're apparently working with the phones they have, but are supposed to magically support phones they don't.


Yes, I have. They all have easy, unrestricted access to a variety of 'old' phones (running the oldest supported OS versions) lying on a special shelf in our office.


Yes because the AGC was well known for its smoothly scrolling message lists on its large high-density display.


Are you actually trying to argue that smooth scrolling is hard, and specifically because of the display size?

On the CPU side, you can figure out how to position a line of text in less than 100 instructions if you store it right. But even if you use 10 million instructions to lay out a handful of lines, you'll never have to hitch.

On the GPU side, the pre-retina A5 had a fill rate of 2 billion pixels per second, while a brand-new high-DPI phone has 100-200 million pixels per second to draw at 60fps. And that's not even comparing like to like: a high-end Qualcomm chip from 2013-2014 would match that fill rate, and a low-end phone would have half the power and half the pixels.

We are in a wonderful age of overpowered GPUs for 2D work. There are no excuses for dropped frames.
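
To put rough numbers on the pixel budget (a back-of-the-envelope sketch assuming a 1080x2340 panel, not any particular phone):

    # pixels per second a phone GPU must fill for full-screen 60 fps scrolling
    width, height, fps = 1080, 2340, 60    # assumed "high-DPI phone" panel
    needed = width * height * fps          # ~152 million pixels/s
    a5_fill_rate = 2_000_000_000           # ~2 Gpix/s, the A5 figure quoted above
    print(f"needed: {needed / 1e6:.0f} Mpix/s")
    print(f"headroom vs. the A5: {a5_fill_rate / needed:.0f}x")

Even if those assumptions are off by a factor of a few, there's still an order of magnitude of headroom for 2D scrolling.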


> Are you actually trying to argue that smooth scrolling is hard, and specifically because of the display size?

What I'm arguing is that the problems have little to do with one another, and I'm guessing so do the means. That's like saying building a viaduct is easy because the pyramids exist.


It's all about quickly responding to input.

Because it's the CPU stuff that's the problem here, and that's not fundamentally different. The display differences are a very separate thing and not the cause of the problem.


No, it is known for smooth landings on the Moon.


But, on its way to the Moon, how would that Cortex M0 fare against cosmic radiation? One bad bit stops the show.

The smaller your transistors, the more susceptible they are to the impact of a particle.


That particular chip, probably badly. But you can buy a rad-hardened M0 with similar specs: https://www.voragotech.com/products/va10820 The price is under a thousand dollars, too.

NASA's also working on a hardened A53 at ~800MHz, and you can get hardened POWER chips at similar frequencies.


Working a bit with FPGAs now, I wonder whether even something like a light switch implemented on an FPGA would count as a computer for open-source purposes.


I admit I didn't understand much of that, but I enjoyed reading it nonetheless.

I wonder about the hardware selected for, say, a charger: is it maybe just that the chosen chip is the most cost-effective option, and its power (CPU, memory) happens to be far more than a USB charger needs?


I'd guess at some point you simply can't buy anything cheaper than 10MHz?

I'm updating my laptop; it'd probably be fully sufficient to have just 16GB of DDR4 (I have just 4GB right now, and it ain't that bad), but I can get a 32GB stick for only about 110 USD, so I might as well go for 32GB to max out the slot and not have to worry about it later on. If the price for 32GB sticks were more like $500, I'd probably not bother (although it'd arguably still be a good investment in our profession, it's just a little harder to justify when you know the price will come down relatively soon and you don't quite need that much RAM in the immediate future anyway).


That's kinda what I'm assuming. For a chip maker, one chip that covers X use cases might be more powerful than most of them need, but it covers all those use cases.


I guess – if you don't mind halving your memory bandwidth.

Most laptops have two memory channels, so if both are not populated with same-size memory modules, at least part of the memory range will provide only 50% of the bandwidth.

For memory-bandwidth-sensitive tasks that can halve performance, although I guess for most workloads the effect is much smaller, perhaps just a 15-20% slowdown.
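
For a rough sense of the numbers (assuming DDR4-3200 and the usual 64-bit channels, not any specific laptop):

    # peak theoretical DDR4-3200 bandwidth, single vs. dual channel
    transfers_per_s = 3200e6      # 3200 MT/s
    bytes_per_transfer = 8        # 64-bit channel width
    single = transfers_per_s * bytes_per_transfer
    dual = 2 * single
    print(f"single channel: {single / 1e9:.1f} GB/s")   # ~25.6 GB/s
    print(f"dual channel:   {dual / 1e9:.1f} GB/s")     # ~51.2 GB/s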


The cheapest MCU I'm aware of, the Padauk PMS150C ($0.03), runs at up to 8MHz. Spending a few cents more can get you a lot more speed and features, of course.


A row comparing the manufacturing cost per unit would make this table complete.


I seem to remember that there were additional computers and a whole group of engineers and scientists on the ground to support the smaller AGS in flight. Kind of like cloud computing.


Is the AGC more a microcontroller than a computer? Did its programmers think more in terms of circuits (parallelism, device control, etc.) than in terms of "computing"?


Reading this was a lot of fun. Great effort!


The ARM M0 Thumb software divide takes 45 cycles or less. It is still faster than the AGC in terms of clock cycles.


I cannot wait for the day my charger comes with CUDA cores or tensor cores and my on-charger AI modulates my power.


It's wild to see how far our technology has come; I enjoyed the retrospective offered here.


TIL there are CPUs in USB chargers. Ain't it crazy?


Super interesting article!


> Program Storage Space

Solar System



