How do you handle client side interactivity? I’m probably an outlier, but all the JS I write is client side, and I sure wish I had a typed language to use in development.

I’m on mobile at the moment so could probably give you a more detailed answer tomorrow, but check out https://htmgo.dev/docs#interactivity-events

There is a lot more to San Francisco's housing crisis than price controls: lengthy permitting processes, environmental reviews, NIMBY community outreach, etc.


San Francisco has always been a crooked city.. fleecing newbies is sport.. they have jokes and murals and parties around it and always have.. in the American era.. source: personal testimony by someone born and raised there around 1900


This is what happens when your phone and all its apps are written in a garbage collected language.


That’s a gross misrepresentation, and far from the whole truth. (Besides, ref counting is also a garbage collection algorithm)

A huge chunk of the OS runs native code. Add to that the fact that Android is less strict about putting apps/processes to sleep, and it becomes a much less black-and-white question.


You are missing the point here – garbage collected languages (the "allocate it and forget about it" style, not refcounting like Objective-C) require much more memory to perform at the same level as a language where memory is explicitly allocated and deallocated.


Yes, I’m fairly familiar with garbage collectors and these are called tracing GCs. My point is that a big chunk of the system doesn’t use these, so you can’t just put the blame on one thing.

Would one percent of the system be responsible for the whole system's bad performance, even if that one percent is "slow"?


Yep. I still use the iPhone 7 Plus as a backup phone; it still gets security updates and runs modern apps fine with 3GB of RAM! It's slow compared to modern phones, but that's not down to the RAM; the memory management is amazing, and I've never had an issue.


It's incredibly refreshing to see them pricing these at volume costs for non-volume buyers.

The insane component prices that we pay in the US for lower volume electronics manufacturing are killing innovation and driving manufacturing to countries with actual price competition.

Looking on Digikey/Mouser, you'll find that other MCUs with a similar spec are selling for at least $4-5 at qty 3000+.

The RP2350 price is a lot closer to what a large volume buyer would pay for MCUs like this, and still appears to have a healthy margin on top of the cost of dies, packaging, test, etc.


I disagree.

Small uCs are almost entirely about peripherals, and the RP2040 and RP2350 barely have any peripherals worth talking about.

It's always seemed like a weird strategy to me. The Raspberry Pi Foundation makes a bare-bones, no-frills chip (most models are missing even flash; fortunately the RP2354 finally has it) in a field where processing power is not the focus.

MPUs like the SAMA5D27 are sub-$10 and Linux-capable, and they're still a bad processing-power-per-dollar proposition because the Raspberry Pi 5 exists, which in turn is bad because AMD Epyc exists. If we go down the processing-power chain, we simply end up in server land.

The traditional use of uCs is to simplify circuit design by having one chip do 90% of any given project. Running BLDC motors? ST has a BLDC motor uC that handles the 250V or 600V needed. (EDIT: the STSPIN32F0602)

You don't need much more than a Cortex-M0+ to drive BLDC either, so all the extra processing power of RP2350 is wasted.

---------

The RP2350 is very MPU-ish to me. It requires additional components, and the reviewers who love it praise features like PSRAM compatibility and talk about how flash is cheap anyway.

But that ignores that your typical uC doesn't need any external components at all, aside from maybe a crystal for communication.


I don't think the Raspberry Pi foundation needs to focus on direct IGBT drive, and there are a lot more tasks in the world than driving BLDC motors.

And yes, there are plenty of highly-specialized microcontrollers that serve a variety of industry niches quite well.

But look at the competitors from ST in the RP2354A's ~$1 price bracket:

RP2354A, $1 at whatever quantity: Dual 150 MHz CM33, 2MB flash, 520KB SRAM

STM32F103, $0.99 at Qty 1K from LCSC: 72 MHz CM3, 128KB flash, 20KB SRAM.

STM32L431, $0.98 at Qty 1K from LCSC: 80 MHz CM4, 256KB flash, 64KB SRAM.

There are several ST clones from China that offer a bit more SRAM, slightly higher clock speeds, or slightly more flash (512KB, say), but nothing even comes close to the RP2350.

PIO more than makes up for the lack of any digital-specific interface that you think you're missing, not to mention you have an entire extra core just sitting around that you could use to bit-bang whatever crazy interface you decide that you need.
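
To make that concrete, here's a minimal sketch (not from any real project; the pins and the toy byte-at-a-time protocol are made up for illustration) of using the Pico SDK to dedicate core 1 to bit-banging a clock/data interface while core 0 does everything else:

    #include "pico/stdlib.h"
    #include "pico/multicore.h"

    // Hypothetical pins for a toy shift-register-style interface.
    #define CLK_PIN  2
    #define DATA_PIN 3

    // Core 1 does nothing but clock bytes out, so core 0 stays free.
    static void core1_bitbang(void) {
        gpio_init(CLK_PIN);  gpio_set_dir(CLK_PIN, GPIO_OUT);
        gpio_init(DATA_PIN); gpio_set_dir(DATA_PIN, GPIO_OUT);
        while (true) {
            uint32_t byte = multicore_fifo_pop_blocking();  // word from core 0
            for (int bit = 7; bit >= 0; bit--) {
                gpio_put(DATA_PIN, (byte >> bit) & 1);
                gpio_put(CLK_PIN, 1);
                sleep_us(1);            // ~500 kHz toy clock
                gpio_put(CLK_PIN, 0);
                sleep_us(1);
            }
        }
    }

    int main(void) {
        multicore_launch_core1(core1_bitbang);
        while (true) {
            multicore_fifo_push_blocking(0xA5);  // hand a byte to core 1
            sleep_ms(10);
        }
    }

A real design would probably move anything timing-critical into PIO, but even the naive GPIO version above is often good enough for slow glue logic.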

The main area where they are significantly lagging their price-matched competitors is the ADC, and analog in general. I suspect that they are prioritizing limited die area for compute over large analog blocks that are hard to characterize and design on whatever process Sony is giving them.

And if you really need certain specific peripherals (and low-power doesn't matter to you) - capacitive/inductive sensing, DACs, etc., you're almost better off buying a low-spec ultra cheap MCU with the features you need, and then running it as a peripheral to the RP2350. I honestly don't think there's a better price/performance chip out there right now.


> And if you really need certain specific peripherals (and low-power doesn't matter to you) - capacitive/inductive sensing, DACs, etc., you're almost better off buying a low-spec ultra cheap MCU with the features you need, and then running it as a peripheral to the RP2350. I honestly don't think there's a better price/performance chip out there right now.

Price/performance? You're not going to beat DDR2 RAM with microcontrollers in terms of price/performance; I'm seeing 128MB for $3 or less these days. Once we start talking about the price/performance of compute, we very, very quickly rise up the escalation scale all the way to servers.

My point is that uCs exist almost purely because of their analog features. Take the STM32L431: it's got 1x OpAmp, 2x DACs, and a 5Msps 12-bit ADC, all for less than $1.

The only other devices that offer as many analog components are other microcontrollers (AVR, PIC, MSP430, MSPM0, etc. etc.). And in most cases, these other microcontrollers have "some niche" that tries to differentiate them. That's also why there are so many of them; some are explicitly designed to be motor controllers, as one example.

--------

What real-world analog task are you looking at that requires 520KB of SRAM? Things are the opposite: real-world analog-sensing tasks are things like temperature sensors (which take many seconds to change), or, at the fastest, things like rotation sensors on a motor (10,000 RPM is about 166Hz, and even a 1MHz 8051 is sufficient to keep up with a 10,000 RPM / 166Hz motor).

In these cases, the most important thing is... having a 2nd DAC. Or having Analog Comparators. Remember that Analog Comparators are really 1-bit ADCs, so those 2x ACs in the STM32L431 kinda-sorta act as an additional independently-running ADC in practice.

If you're using the 12-bit ADC on some important measurement task, it's usually nice to have a 2nd AC and a 3rd AC measuring other attributes. Take, for example, a buck converter: you might use the ADC to measure current (after the x16 gain from the OpAmp, which helps minimize the resistance of the shunt current-sense resistor), then use Analog Comparator #1 as the feedback sense to set the output voltage, and Analog Comparator #2 goes to your temperature sensor for emergency thermal shutdown.

And AC#1 (aka: the output voltage) quite possibly needs to change depending on current, so you'd like that to be programmable. The STM32L431 solves that by routing DAC#1 to AC#1's inverting input, and now you've got a controller that reacts instantly.
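
For what it's worth, here's a rough sketch of how that wiring looks in firmware on an STM32L4 with the ST HAL. The handles are assumed to come from CubeMX-generated init code, and which comparator input maps to what is illustrative, not taken from a real design:

    #include "stm32l4xx_hal.h"

    // Handles assumed to be configured elsewhere (e.g. CubeMX-generated code):
    //   hopamp1: PGA mode, x16 gain, output routed to an ADC channel
    //   hcomp1 : inverting input tied to DAC1_OUT1 (programmable setpoint)
    //   hcomp2 : inverting input on the thermistor divider (thermal shutdown)
    extern OPAMP_HandleTypeDef hopamp1;
    extern COMP_HandleTypeDef  hcomp1, hcomp2;
    extern DAC_HandleTypeDef   hdac1;
    extern ADC_HandleTypeDef   hadc1;

    void buck_monitor_start(uint16_t vout_setpoint_counts)
    {
        HAL_OPAMP_Start(&hopamp1);                      // amplify the shunt voltage
        HAL_DAC_SetValue(&hdac1, DAC_CHANNEL_1,
                         DAC_ALIGN_12B_R, vout_setpoint_counts);
        HAL_DAC_Start(&hdac1, DAC_CHANNEL_1);           // programmable comparator ref
        HAL_COMP_Start(&hcomp1);                        // output-voltage feedback
        HAL_COMP_Start(&hcomp2);                        // over-temperature trip
    }

    uint16_t buck_read_current_counts(void)
    {
        HAL_ADC_Start(&hadc1);                          // single slow conversion
        HAL_ADC_PollForConversion(&hadc1, 10);
        return (uint16_t)HAL_ADC_GetValue(&hadc1);      // raw 12-bit counts
    }

The point being that the analog blocks do the fast work on their own; the CPU only has to poll the ADC occasionally and move the DAC setpoint.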

> capacitive/inductive sensing,

I'm not 100% sure, but I feel like the PIOs could probably implement capacitive sensing. It's really just a timing measurement, and PIOs look like _very_ advanced timers to me.
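
As a rough illustration of the "it's just timing" point, here's a naive GPIO-polling version of charge-time capacitive sensing using the Pico SDK. The pin, the external pull-up value, and the timeout are made up, and a real implementation would push the timing loop into a PIO state machine instead of polling:

    #include "pico/stdlib.h"

    // Hypothetical sense pin: electrode tied to 3V3 through a large resistor
    // (e.g. ~1 M ohm). Touching the pad adds capacitance and slows the charge.
    #define SENSE_PIN 15

    // Return the charge time in microseconds; a bigger value ~= finger present.
    uint32_t cap_sense_read(void)
    {
        // Discharge the electrode by driving it low briefly.
        gpio_init(SENSE_PIN);
        gpio_set_dir(SENSE_PIN, GPIO_OUT);
        gpio_put(SENSE_PIN, 0);
        sleep_us(50);

        // Release the pin and time how long it takes to read back high.
        gpio_set_dir(SENSE_PIN, GPIO_IN);
        uint32_t start = time_us_32();
        while (!gpio_get(SENSE_PIN)) {
            if (time_us_32() - start > 10000) break;   // give up after 10 ms
        }
        return time_us_32() - start;
    }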

------------

Or let's take the 4-20mA protocol as another example. How would you use the RP2354 to make a 4-20mA compatible product? Your one ADC is on the sensor, and you don't have any Analog Comparators (or other analog components...), so you're already out of analog components to implement the protocol.

You can of course use I2C sensors (which frees the ADC from sensing duty so it can do the current-sense duty needed to implement the 4-20mA transmit side), but how many I2C sensors out there are cheaper than these microcontrollers?

Even then: the RP2350 / RP2040 ADC is single-ended, is it not? That makes current sensing low-side only, which limits the applicability. It's not even a good solution.

I know that AVR / Atmel chips offer differential ADCs (aka: sense the voltage of (A-B)), which is instrumental in high-side current sensing. It's not sufficient to assume B is ground in the 4-20mA protocol.

The STM32L431 has an OpAmp, so it wouldn't be too hard to implement (A-B) across two points using the OpAmp (aka: differential amplifier mode) and then report it to the single-ended ADC. So it's different, but ultimately the STM32L431 solves the problem.
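
To make the arithmetic concrete, here's what reading the loop current from a shunt-plus-PGA stage looks like, with illustrative component values (a 10 ohm shunt and x16 gain; these aren't taken from any particular design):

    #include <stdint.h>

    // Illustrative component values (not from any specific design):
    //   10 ohm shunt, OpAmp in PGA mode with x16 gain,
    //   12-bit ADC referenced to 3.3 V.
    #define SHUNT_OHMS    10.0f
    #define PGA_GAIN      16.0f
    #define ADC_VREF      3.3f
    #define ADC_FULLSCALE 4095.0f

    // Convert a raw ADC reading of the amplified shunt voltage to loop current.
    static float loop_current_mA(uint16_t adc_counts)
    {
        float v_amplified = (adc_counts / ADC_FULLSCALE) * ADC_VREF;
        float v_shunt     = v_amplified / PGA_GAIN;
        return 1000.0f * v_shunt / SHUNT_OHMS;   // mA
    }

    // Sanity check: 20 mA -> 0.2 V across the shunt -> 3.2 V at the ADC
    // -> ~3971 counts -> loop_current_mA(3971) comes out to roughly 20.0 mA.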


Adding to my earlier post....

I think RP2xxx is best for screens and GUIs. So a game console and similar.

PIO looks like it's good for a lot of digital interfaces as well. I know DVI and VGA have been proven on PIO, for example, and there are probably all kinds of 80s and 90s chips that need emulators or replacements that RP2xxx would be good for.

So there is definitely a niche for RP2xxx chips. But it's not in power control, motor control, or the like. And since it's not quite strong enough for Linux, it's a hassle to run web communications (wget/curl) on it, either.

------

But traditionally, game consoles were an MPU market not a microcontroller market. And if I were making a modern hobby game console, Linux capabilities would be on the top of my list to make everything more convenient. And it's really not that expensive these days to reach into proper Linux / DDR2 / NAND Flash.


> RP2354 has Flash finally

The flash is QSPI, so it's not really on-die flash with a real flash controller. There is some QSPI cache, but it's really a band-aid solution to not having the real thing. People around the net don't seem to understand the difference, and it can be very misleading.


Hmmm, could you elaborate?

So I know that RP2xxx doesn't execute out of flash ever. I'm pretty sure the architecture of this is all execute out of SRAM.

I'm pretty sure the RP2040 copies out of QSPI and then executes in SRAM, but I'm not entirely sure what the downsides of this could be.

I'm guessing there might be security implications?


Nothing refreshing about this. Their mid-range Raspberry Pi Zero used to be $5; now they are giving you an underpowered microcontroller for the same price. You can get Pico 2-sized individual boards for less than $2 in Asia.


The MCU is $0.80 - $1.10.

$1.10 for a dual-core Cortex-M33 with 2MB of flash and 520KB of SRAM is a bargain in 2024 compared to other MCUs available at this time, at least in terms of prices that you'll be able to get as a regular consumer in the US. The only other chip that even comes close is the ESP32.


I’ve always kept the dedicated BMC network port locked down out of paranoia, but this seems to be an exploit that a rogue BMC or BIOS could use to access the host networking stack.


Rogue BIOS always had total control of the system; it doesn't need exploits.


It doesn't even necessarily need a running/enabled BMC.


It's stopping, like all printers of the era; e.g. https://youtu.be/A_vXA058EDY?&t=41

edit: I see now you were asking about the drum, rather than the paper


NeXT invented Objective-C and encouraged its use, and the object-oriented paradigm was as popular back then as Rust is today. It was just trendy to do so.


They did not invent it; it was developed at PPI (later Stepstone), before NeXT was founded.


To a great degree, NeXT comprised an assemblage of talent and technologies that Steve Jobs put together:

- Mach microkernel --- Avie Tevanian may well be the most heavily recruited computer science student in history with offers from AT&T, IBM, Microsoft, and NeXT

- Interface Builder --- Jean-Marie Hullot originally did a graphical layout system for developing on the Mac

- Display PostScript --- to a great degree, NeXT was responsible for this

- Objective-C --- as noted elsethread this was worked up by Brad Cox at Stepstone

and, of course they licensed Unix from AT&T (and other bits from other sources such as a Pantone color library, or Webster's dictionary for Webster.app, and Mathematica from Wolfram was included early on).

Wish my Cube hadn't stopped booting up...


>- Display PostScript --- to a great degree, NeXT was responsible for this

Confirmed; I got to see a live demo at Comdex ATL back in 1992 (I think?). Mind blown.


Quartz, née Display PDF, is a nice alternative (and is probably even more reliable these days), but I still miss Display PostScript and the ability to program custom fills/strokes and so forth --- a huge potential security hole, as Frank Siegert's "Project Akira" showed, though.


Yes. The Computer History Museum has a two-part interview with Steve Naroff, one of the engineers who worked on Objective-C:

https://youtu.be/ljx0Zh7eidE

https://youtu.be/vrRCY6vwvbU


Memory allocation isn’t that slow (in fact, all the RAM is SRAM, which is typically quite fast); it’s just that you only have 256KB-1MB of RAM in total. This means that any time you save later by trying to fill space now ends up getting wasted when that memory needs to be reclaimed.
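
For context, the usual idiom on parts that small is to size everything statically up front, so the worst case is known at link time and nothing ever has to be reclaimed under pressure. A minimal sketch (the buffer size is arbitrary):

    #include <stdint.h>
    #include <stddef.h>

    // Fixed, statically sized buffer: no heap growth, nothing to reclaim.
    #define RX_RING_SIZE 1024

    static uint8_t rx_ring[RX_RING_SIZE];
    static size_t  rx_head, rx_tail;

    static int rx_push(uint8_t byte)
    {
        size_t next = (rx_head + 1) % RX_RING_SIZE;
        if (next == rx_tail) return -1;   // full: drop instead of growing a heap
        rx_ring[rx_head] = byte;
        rx_head = next;
        return 0;
    }

    static int rx_pop(uint8_t *out)
    {
        if (rx_head == rx_tail) return -1;  // empty
        *out = rx_ring[rx_tail];
        rx_tail = (rx_tail + 1) % RX_RING_SIZE;
        return 0;
    }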


Is that via cash holdings or stock holdings? I know Alphabet has had some buyback activity, but I don't really track Apple's moves here.

I am glad to see Alphabet paying dividends now though, it feels like the sign of a company that knows insane stock price growth can't go on forever.


Think about a company like ADT - they are selling security systems, but the people who really really need security (large clients with large IT budgets) would never buy an ADT system.

So like it or not, you're going to be going door to door and helping smaller clients integrate this into their systems.

I think the right way to approach this would be to better understand the problems your clients would face when trying to integrate this kind of system, and then figure out how to solve them at scale in a way that you make customer acquisition and onboarding easier in the future.

Maybe it's things like creating base Docker images for common services or OS pairings that have your stack already integrated. Maybe it's turnkey integrations with existing cloud identity providers or SSO. Maybe it's Tailscale integration.

In fact, Tailscale is probably a good model to look at here: no large organization with an existing VPN solution is moving to Tailscale, or at least wasn't when they first started. But Tailscale made a hard thing easy, and that's exactly what you're doing here.


Tailscale is a good model for software businesses in general IMO, but they also have another clear advantage over some project like this: they focus on exactly one thing and do it exceptionally well. There's probably a (small) market for out-of-the-box stuff like this, but I'd imagine it has got to be pretty small.


So caveat that I am not a small business in need of any of this stuff, but I'll give you an example of the kinds of things that I have struggled with in the past where I think a solution like this could add a lot of value if extended in the right ways:

- certificate management amongst a plethora of hosts: both SSL/web certificates for external use, and management and installation of self-signed "root" certs for validating internal applications and services

- a keymastering server: an appliance that acts as a genuine root of trust for an organization, using a Yubico HSM for key storage, but providing middleware & admin controls to manage issuance and distribution of intermediate certificates

- AD/LDAP/SSO/etc. user management, key issuance, etc.

If you have a small team and you don't need global redundancy for these functions across a large fleet, then it makes a lot of sense IMO to shell out $5-10k for a set-it-and-forget-it security appliance that makes certificate/key management simple and easy.

I think the biggest challenge is that it's hard to build trust as a startup without open-sourcing your stack, but that makes it a lot harder to get buy-in for an appliance model unless you have some creative dual-licensing ideas.

But "your keys/certs are stored securely on your hardware in the room next door" is a compelling value proposition & probably a much easier pill to swallow for certain companies than a cloud HSM or other solutions which sorta boil down to "trust me bro".


> using a Yubico HSM for key storage, but providing middleware & admin controls

> a compelling value proposition

I completely agree; I'd originally drawn up a design for an offline root CA, then a box with a separate server for an intermediate CA (with an HSM for the intermediate keys) and a second, dedicated secure NTP server (possibly hardware-based) so that certificate expiration times could be kept short.

While all that is easy enough to prototype, the complexity of hardware distribution is better left to a later point in the roadmap.


I wouldn’t use the Yubico HSM, because I think it misses a feature that would IMO add considerable value: an enforced CT-style log. If I were paying for a corporate root of trust, I would want very strong auditability. Set it up so that the HSM does not release a signature until presented with an SCT. Make it impossible for buggy or compromised host software to create bad certificates without being detected.

A hardware HSM is not magic or even especially complex. Java cards can do it (slowly). Yubikeys can do it. Other vendors’ devices can do it. Lots of microcontrollers can do it as long as you don’t need resistance to complex physical attack. A startup in this space should seriously consider building its own.
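
Something like this, in shape. The types and helpers below are hypothetical placeholders, not from any real HSM SDK; the only point is the ordering of the checks, i.e. that the signing key is never used until a valid SCT has been presented:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef struct { const uint8_t *buf; size_t len; } blob;

    /* Hypothetical helpers -- stubbed out so the sketch compiles.  A real
       device would verify the log's signature over the precertificate and
       drive its own signing engine here. */
    static bool sct_is_valid_for(const blob *tbs, const blob *sct,
                                 const uint8_t log_pubkey[32])
    { (void)tbs; (void)sct; (void)log_pubkey; return false; }

    static bool hsm_sign(const blob *tbs, uint8_t sig_out[64])
    { (void)tbs; (void)sig_out; return false; }

    /* Policy: no signature leaves the device unless a CT-style log has
       already committed to the certificate (i.e. a valid SCT is presented). */
    int issue_certificate(const blob *tbs, const blob *sct,
                          const uint8_t log_pubkey[32], uint8_t sig_out[64])
    {
        if (!sct_is_valid_for(tbs, sct, log_pubkey))
            return -1;              /* refuse: unlogged certificate */
        if (!hsm_sign(tbs, sig_out))
            return -2;              /* signing engine error */
        return 0;                   /* signature released only after logging */
    }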


Do you have a recommended vendor that could be made to support hardware signed CT type logs?


Sure, so focus on being a local certificate authority using hardware backed roots. I can see that being valuable, but you'd be competing against companies like Smallstep.


Physical security is probably a bad example; when people need physical security, they end up calling installers. ADT got where they are by making their installers' lives easier, and by making it easier to find those installers.

I am not the market for the OP, because I want the ability to change MFA vendors or to federate, but the strategies of non-software companies are much different IMHO.


What other MFA vendor would you go with? For my own business continuity it might make sense to white-label both YubiKeys and an alternative vendor, but Yubico seems to have the best product unless you want to push MFA to users' phones.


It is more about vendor and technology risk mitigation than selection.

Companies fail, projects degrade, and, like last week, quality can go down.

The better question is: why would my organization couple its success to your wagon? Do you provide a way to get their info out in a portable form?

But there are many reasons to maintain the ability of your IAM to access multiple IDPs.

I do think there is a growing market for products that don't use your private data for their own goals.

But coupling a domain controller to a single 2FA provider just doesn't have any value as you have described it, at least for me.

I am not the entire market, just one potential user, so take this as feedback and not outright dismissal.

Perhaps if you develop the idea more I may be interested in the future.


Physical security like ADT is so yucky to be honest. It violates _all_ the principles we use in IT security. The vendors are super secretive about their specifications and even basic aspects are usually impossible to figure out / considered a business secret.

Like, I was looking for an RFID entry system for a customer. Some of these are advertised as using DES/AES security (implied to be some version of DESFire). Most aren't. Try figuring out whether they actually use DESFire, and whether the handshake is tunneled to the door controller (placed in a secure area) or the card reader (placed in the vulnerable, insecure area) has the keys and is just sending the UID to the controller. Nobody will answer this question. (Presumably because these secure systems are all actually UID-only on the backend, so trivial to bypass if you learn a valid backend UID.)

And even then, you're like "Okay, this sounds interesting. I wanna buy it." - "Oh, you can't. We don't sell these. You need a system integrator / installer." And then you go to one of these and it's super obvious they have essentially no clue how any of the stuff they're system-integrating works, but of course they won't give you admin access to the system they wanna install. "How do I configure this?" - "You don't. Only we do. Using a proprietary software." - "Where's the system manual for this?" - "We have it, we can't and won't give it to you."

I mean, a lot of stuff works like this, usually with incompetent middle-men fucking up products which aren't all that bad (another popular example would be HVAC and heat pumps, especially ASHPs) and manufacturers trying to make a SaaS kind of play with hardware you bought. But for security it feels especially egregious. How do you know the installer doesn't have a master key? Well, they usually do. How do you know the ACLs are set up correctly? Trust me bro. And so on.


Reminds me of a time I had a heat-pump water system installed that came with clearly labeled warnings on the outlets that the covers needed to be removed, and requirements that the fans be sheltered.

None of this was done. It was left out in the sun (the laminate on the control panel fused to the screen), the air intake was left factory-sealed (the system failed after a while), and it was left in the rain after an installer came to remove the covers (the air intake / exhaust are top-facing).

I could have easily solved the issues myself but didn't want to give them the option of pinning liability on the client.

