
As usual, it looks good on paper, but there's no ecosystem and the current stack it comes with is dead on arrival.

Boards need to come with mainstream distro support or with an ecosystem (or both), otherwise they will just die. And with internet-connected devices you can't set-and-forget, so they need to have live support.

Edit: a RockPi S (same SoC) can be had for $35 and a touch LCD for $11... Granted, it doesn't come with the same integration and headers, doesn't have a microphone by default, and has no RISC-V MCU. But the point of integration is a useful product that stays useful, and it doesn't stay useful if there is no ecosystem.

As for 'adding' mainstream support, it takes quite some effort and active ownership. Out of the many Rockchip boards and add-ons, this is the set that made it to some level of standard inclusion: http://ftp.debian.org/debian/dists/stable/main/installer-arm...




Assuming it's small enough for your needs, a used Intel NUC can get you loads more mileage than pretty much any ARM SoC at an equivalent price.

Currently available on eBay:

* $49.95 for an i5-5300U at 2.3GHz, 4GB of RAM, and 128GB of SSD storage

* $89.00 for an i3-7100U at 2.4GHz, 8GB of RAM, and 256GB of SSD storage

* $135.00 for an i7-5557U at 3.1GHz, 16GB of RAM, and 256GB of SSD storage

...all loaded with ample driver support, extensive connectivity options, and bog-standard x86-64 instructions.

Obviously, there will always be use cases that call for a tiny ARM SoC and nothing else, but plenty of the applications for which people will buy an ARM SoC (e.g., Kodi/Jellyfin/Plex/Emby media centers, RetroArch gaming machines, Minecraft servers, streaming servers, web servers, network filtering, seedboxes, etc.) are better served by a used Intel NUC.


Wait, NUCs are that cheap? I would appreciate a link so that I can get my hands on one. I don't use eBay often and filtering feels terrible on their site.


Not a NUC, or your parent poster, but I picked up a Dell Wyse 7020 thin client for ~$35 + shipping from a recycled electronics vendor in Pennsylvania. EPC here in the USA is an electronics liquidator / off-business-lease seller.

https://www.ebay.com/itm/275777770141?epid=519034385&hash=it...

A homegrown website on re-using thin clients: https://www.parkytowers.me.uk/thin/wyse/z/zx0q/

Of course, I'm still trying to figure out exactly how to upgrade it and if it will replace my trusty RPi4, but I prefer fanless designs where possible, especially for 24/7 usage.

Search tips: look for "USFF" (ultra small form factor), or find a refurbishment vendor like EPC ( https://epcglobal.shop/collections/dell-desktops/price_-100-... ) and then track down their eBay page. Specifically, EPC Pennsylvania seemed to have some very inexpensive deals: https://www.ebay.com/str/epcpennsylvania

It can also help to search for specific business computer models on eBay, as small vendors doing electronics liquidation may just read the model number on the unit, take a picture, and fling it up on the store page in the "priced to sell" price bracket.

https://www.servethehome.com/tag/tinyminimicro/ has a list of small servers they have reviewed, so that can help with getting an idea of a computer's capabilities, upgrade paths, or gotchas (e.g. not supporting more than 8 GB of RAM or something).


Here are links to the exact machines I mentioned, in order:

[1] https://www.ebay.com/itm/364239987954

[2] https://www.ebay.com/itm/145134550658

[3] https://www.ebay.com/itm/175698509348

And here's a link to the search query I used to find those machines:

[4] https://www.ebay.com/sch/i.html?_nkw=intel+nuc


It's not too bad. I usually filter to US only and used (but not for parts). eBay also has pretty decent search syntax, like "Intel NUC (i5,i7) -parts", and you can keep adding "-unwanted_terms" to get the query down to something more selective.


Some of the low-price NUC-like things found on eBay are missing the 19 VDC 4.5 A external power supply, so expect that unless it's clearly called out in the auction.


If space is not an issue, used OptiPlexes or ThinkCentres are extremely useful for similar use cases.

$30-150 AUD

The only concern I have is power consumption being orders of magnitude higher than that of modern SoCs.


I purchased a few ARM boards in the past and won't do it again. All became e-waste.

Rockchip notably: great hardware on paper, but try developing something that needs hardware acceleration and it's a paperweight.

It's a common cycle of broken promises by manufacturers to provide drivers; rinse and repeat. The exceptions are Raspberry Pi and Espressif, depending on the application.


I've had the same experience with non-Raspberry Pi boards. They've become useless thanks to endlessly broken software and zero support. My RPis have been useful for years and are still getting regular updates.

I think I too was drawn in by the performance claims made by those other manufacturers. I realized, though, that Raspberry Pis are very powerful computers. We just take the scale of modern computing for granted and our software is the real problem.


Yup. It's RPi or nothing. It's the software that makes these valuable, not hardware specs. If I wanted to sell an ARM board, I'd make it as Pi-compatible as possible.


The VisionFive 2 [0] is the way to go. They've gotten further with upstreaming in just half a year than Raspberry Pi has in its long history.

RISC-V is inevitable.

0. https://rvspace.org/en/project/JH7110_Upstream_Plan


The problem is that Broadcom won't sell you the chips the Pi uses, so you can't.


It's been gathering dust for a while now, but I had great success with my ODROID-C2. At one point I was running a Matrix server with various bridges.


Even Raspberry Pi has struggled with GPU/HW acceleration, but at least its distro is maintained & updated.


The Pi 4's GPU at least supports OpenGL ES 3.1 and Vulkan 1.2 now. That's decent, even if not powerful.


The RPi 4 is massively memory-bandwidth starved. It can't even keep up with drawing the screen in general, much less run games.

The VisionFive 2's GPU (from Imagination Technologies) is supposedly at least 4x as fast.


> ...just take the scale of modern computing for granted and our software is the real problem.

I was thinking how I constantly find myself using GPT and Bard to prettify source code. Surely not the most efficient use of compute.

It's like some crude Parkinson's Law joke played at the expense of more efficient algorithms.


They’re less popular in the hobby world, but the Freescale/NXP i.MX6 family has absolutely fabulous mainline Linux support. There’s still a Freescale tree, but for most functionality you don’t need it. A few years back, a client of mine needed some functionality that wasn’t available in the kernel version Freescale had forked for their tree, and they wanted me to come in and assess what it would take to backport the driver they needed, or basically make our own fork that forward-ported whatever Freescale mojo was required.

As it turned out, I spent a couple of hours messing around a bit with some device tree configuration to make it work with the peripherals on their custom carrier and a bit of `make menuconfig`… and it booted right up on Linus’ tree.


Bang! Spot on! Love NXP. Everyone else here, read the above comment.


I’m still trying to get a Rock 5B to boot from NVMe. By all appearances they included a USB Type-C port that doesn’t work, so the system never pulls sufficient power and just boot-loops. Apparently that’s the case even after a firmware update.

To be totally fair to the up-and-coming boards, the Pi was pretty poorly supported on day 1 and the community did (and still does) a lot of heavy lifting. I’m still waiting for a contender, but I don’t have the time or energy to devote to another SBC.

Despite being a super prolific maintainer of Pi stuff and related projects, I haven’t had a single SBC manufacturer reach out to me and say “hey, what do you need to support this?” They just don’t seem to care beyond shoving product out the door and hoping for the best.


If you have not tried what the other reply suggests (a dumb brick), I have a 5V 4A dumb brick I can loan for a quick test (Seattle area). It has worked on quite a few boards here. Or purchase one from wherever; they are really useful outside of SBCs (though they are usually 5.3V and 4.x A, so watch out with sensitive parts).


What is a dumb brick?


Sounds like a USB power supply that doesn't do any power negotiation and just gives as much current over 5V as it can handle. Those things typically top out at 2.4A, so I'm not really sure where you'd get one that could do much more.


4A, in my measurements.


It is a power supply that, if intelligent negotiation fails, falls back to providing 5V (at whatever current it can); usually (in 2023) over USB-C.


You need to get a dumb brick for that Rock 5B. Looks like you got one of the ones with bad firmware. When it negotiates, the power drops out. There are tons of forum posts about it.


You are correct. I have many that do 5V 4A.


Armbian is a saving grace for boards that it supports.


Excuse my ignorance, but what sort of applications benefit from hardware acceleration on limited SoC resources?


You're actually answering your own question here ;-)

Because a SoC with limited CPU-core resources can't do everything in software, the chip contains many system components (hence System-on-Chip or System-on-a-Chip) that handle things the CPU cores then no longer need to do.

Think protocol handling or memory: instead of spending many clock cycles on handling the USB bus, you can leave that to the USB controller and only deal with what is actually relevant to your USB device.

Same with the VOP (Video Output Processor) block: instead of spending many clock cycles on putting the right bits in the frame buffer, you tell the VOP that you'd like the background to be orange and then only spend time setting the right bits for black text (for example). So instead of having to deal with many millions of bits, you only have to deal with less than 1% of them, because every bit you don't set becomes orange.

For other things like I2C, I2S, DMA, networking, cryptography, SDIO, GPIO, PWM etc. the same applies. Instead of constantly spending time setting the right bit at the right time, many times each second, you just tell a dedicated block on the SoC to do a thing in a certain pattern and it will do it for you, consuming no CPU core resources.

This also allows slow CPU cores that wouldn't be able to decode video in real time to offload the entire decoding to a video decoder block, and then tell the GPU part of the SoC that you're drawing a green rectangle somewhere and that's where it has to put the decoded video frames. Why would one do all of this? Because it's cheap and power-efficient, and that's how you make a big pile of money.
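To make the "tell a dedicated block to do a thing in a certain pattern" part concrete, here's a minimal sketch in Python, assuming a board whose kernel exposes the standard Linux sysfs PWM interface and a controller at /sys/class/pwm/pwmchip0 (the chip and channel numbers are assumptions, check your own board):

  from pathlib import Path

  chip = Path("/sys/class/pwm/pwmchip0")   # assumed controller; needs root or udev rules
  pwm0 = chip / "pwm0"

  # Ask the kernel to expose channel 0 of this PWM controller.
  if not pwm0.exists():
      (chip / "export").write_text("0")

  (pwm0 / "period").write_text("1000000")     # period in nanoseconds: 1 ms -> 1 kHz
  (pwm0 / "duty_cycle").write_text("250000")  # 25% duty cycle, also in nanoseconds
  (pwm0 / "enable").write_text("1")           # the PWM block now runs without the CPU

Once "enable" is written, the PWM block keeps generating that waveform on its own; the CPU isn't involved again until you change the settings.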

I have no idea how accurate or up-to-date this PDF is, but it should at least give you some idea of what a SoC can do without bogging down the CPU cores: https://dl.radxa.com/rockpis/docs/hw/datasheets/Rockchip%20R... Check chapter 9, for example: all of those boxes are things you don't have to spend CPU cycles on. If you did use the CPU, it would be super slow.


Is this in the purview of the panfrost driver stack?


I think that's only for ARM's own GPU IP, not for the VOP (it's more of an LCD controller than an actual GPU -- I'd compare the VOP to, say, Game Boy Color-level graphics capabilities, which is cool but not even close to a bare-bones VESA or UEFI framebuffer).


Inferencing neural networks is a big one. It's hard to get near-real-time performance without it, depending on the use case of course.


Why on earth would you run ML on a potato?


What should we run it on if we want to deploy it on mobile systems with limited access to network and extremely restricted power budgets?

For a hobby project I'm trying to find a solution. The power budget for multiple units is sub-200W. I need to run inference on a lower-resolution video stream (or multiple streams, that would be nice) to do object detection. Cost is a factor because I need multiple angles to determine where the target object is in relation to a mobile platform. I'm looking at the Coral.ai board because RPi-like boards lack the ability to do ML tasks at reasonable FPS, and Nvidia seems to have abandoned the lower-cost side of the market since Jetson Nanos seem to be less and less available. (Not that Coral.ai boards are available at all...)


Check out the Luxonis OAK products. I use an OAK-1 Lite to do real-time, two-stage object detection and recognition (~23 FPS at 1080p, with inference on-device using two custom yolov5n models). With a bit of Python and a Pi (or a Rock64 or similar) you can get it up and running in a day. They also have a decent community and are actively developing the API/SDK and hardware.
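To give a feel for what "a bit of Python" means here, this is a rough single-stage sketch against the DepthAI Python API as I remember it (v2-style; the blob path is a placeholder and a real YOLO setup also needs class/anchor configuration), so treat it as a starting point rather than gospel:

  import depthai as dai

  pipeline = dai.Pipeline()

  # On-device camera, scaled to the network's input size.
  cam = pipeline.create(dai.node.ColorCamera)
  cam.setPreviewSize(416, 416)
  cam.setInterleaved(False)

  # Detection (YOLO decoding) runs on the OAK itself, not on the host.
  nn = pipeline.create(dai.node.YoloDetectionNetwork)
  nn.setBlobPath("yolov5n.blob")        # placeholder: your compiled model
  nn.setConfidenceThreshold(0.5)        # real setups also set classes/anchors/masks
  cam.preview.link(nn.input)

  # Only the detection results get streamed back to the host.
  xout = pipeline.create(dai.node.XLinkOut)
  xout.setStreamName("det")
  nn.out.link(xout.input)

  with dai.Device(pipeline) as device:
      q = device.getOutputQueue("det", maxSize=4, blocking=False)
      while True:
          for d in q.get().detections:
              print(d.label, round(d.confidence, 2), d.xmin, d.ymin, d.xmax, d.ymax)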


Thanks, I've got one of their depth cameras and it's been OK. I didn't realize they'd expanded their line so much. Glad to hear about the API/SDK improving; last time I mucked with it, a year or so ago, it seemed underdeveloped.

Going to have to dig into the sensors they use. I had passable luck with non-ML tasks using dirt-cheap camera modules from laptops running at low resolutions, right up until I started moving the cameras at all; then it became a blurry mess, because the sensors were so small their exposure times were high. (I'm also trying to avoid putting a bunch of illumination near the cameras, so the rig doesn't end up looking like a biblically accurate angel.)


Well, you can control the camera ISP, and they use very decent IMX modules, so it really shouldn't be an issue like it was with the cheapos, as long as you can do the coding for your needs.

EDIT:

* https://docs.luxonis.com/projects/api/en/latest/samples/Colo...


Check out TI’s new low-cost vision AI board: https://www.ti.com/tool/SK-AM62A-LP

They have a home-grown AI accelerator along with a free deep learning SDK.

They also offer a pretty easy-to-use online tool (again free) called TI Edge AI Studio. They are extending the existing AI solutions that come from higher-performance parts like the TDA4 and AM68A. Pretty good, considering a lot of these manufacturers are just buying in third-party AI IP that isn’t performing great instead of investing in their own engineering.


Looks nice. Honestly I am just kind of playing around right now and the Luxonis products are the only ones that seem to have any kind of active development, support, and usable (for me) hardware in the hobbyist (<$200) price range.

Nvidia's platform is just a huge mess. I tried to get their SDK running, and their own documentation was out of date, missing necessary links, and sometimes blatantly wrong. I wasn't going to dump $400+ into that ecosystem.

Google consistently gives up on hardware, has the worst support of any existing software company (effectively zero), and has bungled the AI hand every chance they get.

ARM NPUs I am not going to bother with. I can't even get video encoder acceleration working on a non-Pi ARM SoC except for the Rock64, and that is like 6 years old and was missing that functionality for 4 of those years.

Intel only cares about its corporate partners and doesn't give a crap about hobbyists with regard to AI. But their VPU was (is) decent, and OAK guaranteed supply until at least 2025 or thereabouts and built a usable API so we don't have to mess with OpenVINO.

It's all a mess right now, but I can't say that competition is bad. It would be nice if we dispensed with all the bespoke platforms and agreed on some common architecture for edge devices, but I won't hold my breath.


To give one example, Raspberry Pis are often used to control 3D printers and attached webcams, and there are ML systems that supervise the webcam feed for evidence of a print failure. Being able to run these systems on the SBC would be an advantage, but that's typically not realistic today.


It would probably not run on this potato directly, but on an accelerator (be it in-SoC or a separate thing like a Google Coral). The management and control of a Coral needs to happen on a CPU somewhere, but the actual work doesn't involve the CPU much, if at all.

So if you have a thing where all the data and all the work lives on an engine elsewhere, and you just need a SoC to turn the thing on and off and get the data in and out, that's where a potato-SoC could work. Of course, the potato would need good distro support with an up-to-date kernel, drivers, Python, libraries etc., and if you're connecting it to the network, best make sure it's also getting patched consistently.

So, as far as ML Potatoes go, that's about it. It is a totally valid question by the way, even if asked in jest.
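As a concrete illustration of that division of labour, here's a minimal sketch, assuming a Coral USB accelerator, the tflite_runtime package with libedgetpu installed, and a placeholder Edge-TPU-compiled model file; the host CPU only moves data in and out:

  import numpy as np
  import tflite_runtime.interpreter as tflite

  # Load a placeholder Edge-TPU-compiled model and hand inference to the Coral.
  interpreter = tflite.Interpreter(
      model_path="model_edgetpu.tflite",
      experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
  )
  interpreter.allocate_tensors()

  inp = interpreter.get_input_details()[0]
  out = interpreter.get_output_details()[0]

  # A dummy frame with the model's expected shape/dtype stands in for camera data.
  frame = np.zeros(inp["shape"], dtype=inp["dtype"])
  interpreter.set_tensor(inp["index"], frame)
  interpreter.invoke()                     # the actual work happens on the Edge TPU
  print(interpreter.get_tensor(out["index"]).shape)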


Why wouldn't you?

It's the same situation as all the bitcoin miners from years ago. They mined on the GPU, so the CPU didn't matter as much as how many PCIe slots the motherboard had. The CPU is purely there to drive the operating system and enable the network to communicate with the GPU.

If you aren't using the CPU for anything other than the PCIe slot to enable a Nvidia card for ML, then a very cheap potato makes a lot of sense.


Inference: object detection, for one.


> As usual, it looks good on paper, but there's no ecosystem and the current stack it comes with is dead on arrival.

There is no standard Arm ecosystem to boot against, which is the real issue: every board needs a custom kernel and boot process.

I blame Arm for never cooking up a PC-like spec that defines hardware configuration, firmware and booting. This is why x86 is going to stick around for a while: it's dead simple to bootstrap a random x86 machine (well, almost).


I always wondered how we got there.

The original ARM chips, as in the Archimedes era, obviously ran in things that broadly resembled the other desktop computing platforms of the era.

They had to solve the same problems: deciding which device to boot from, how to initialize the attached grab-bag of peripherals. They also had a long series of models with varying capabilities and integration.

So they must have had something comparable to a BIOS/EFI/Open Firmware: a standardized hardware enumeration and bring-up process.

Somehow that disappeared when ARM moved away from RISC OS "desktop computers" and went into the microcontroller world. Why?

I figured it probably either had to do with licensing, or an assumption that an embedded ARM core in some random device would only ever be used with a particular bag of peripherals, so there was no need to rely on conventions.


I naively assumed that device trees had solved this problem, but clearly they haven't in practice. How come?


For one, you have to provide one specifically for your board to boot at all. There is no "common" one that'll get you to a shell on any board.

The main issue is that most SoC vendors just dump a heavily hacked-up Linux kernel source intended for Android on you, along with binary blobs (firmware, Android HAL services, etc.). The drivers aren't in the kernel per se, just enough of a stub for the proprietary blobs to talk to the hardware.

So if the goal is to run a standard Linux distro on the board, you're pretty screwed unless you have the time and resources (or a community) to reverse-engineer the Android BSP into drivers for the mainline kernel.

The thing that makes the RPi great is that there's actually an entity paying for this development to happen, along with the community. It's certainly far from the fastest ARM board, but even the decade-old RPi 1 still gets kernel updates.


> I blame Arm for never cooking up a PC-like spec that defines hardware configuration, firmware and booting.

ARM isn't to blame; it's that no one company was big enough to set the standard (and let others clone it) for ARM systems, unlike what IBM did with the x86 PC, which is not surprising given how diverse ARM's cores are.


The bottom connector looks like a BBC micro:bit's, so there is an ecosystem, at least on the hardware side.


This. I bought a Cubieboard back in 2013 because everything looked good on paper.

It didn’t take long to realize how useless the thing was with ARM Debian and absolutely no driver support…



