Coral Dev Board Micro (coral.ai)
129 points by kaycebasques on Jan 12, 2024 | 61 comments



Yeah, I didn't understand the purpose of this device when it came out and still don't. It's an interesting system, but it seems like the worst of both worlds: the Coral TPU and M7 aren't low-power enough for battery applications, and it's unclear whether the full 4 TOPS of the Coral is achievable given the MCU's memory bus bandwidth. So to me it looks like a computationally underpowered system that you have to keep plugged into the wall.

Going with the Cortex-A Coral Dev Board, or another SBC with the PCIe or USB standalone Coral TPU, seems like a better bet. You'd get a better camera (e.g. via USB), more processing power and memory, and more full-featured software (full Linux and TFLite instead of bare metal or an embedded OS with TFLite Micro). The price would be higher for this option, but you'd certainly make that up in saved time very quickly by not having to deal with bare-metal programming or an embedded OS.


One thing that springs to mind is a monitor for predator (pest animal) traps in remote bush regions. The system spends most of its time asleep, and is woken and starts consuming power only when a PIR sensor is triggered by a visiting animal's body heat.

Then the battery-sapping stuff happens to analyse video and differentiate between target and non-target species and finally trigger the trap or go back to sleep.
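That duty cycle could be sketched roughly like this; everything here (function names, species list, threshold) is made up for illustration, and real firmware for this board would be TFLite Micro C++ rather than Python:

```python
# Hypothetical sketch of the trap duty cycle described above: stay asleep
# until a PIR trigger, then run classification and either fire the trap
# or go back to sleep. All names and species here are illustrative.
TARGET_SPECIES = {"stoat", "rat", "possum"}  # example pest list
CONFIDENCE_THRESHOLD = 0.8

def handle_pir_trigger(classify_frame, fire_trap):
    """Runs once per PIR wake-up; returns True if the trap was fired."""
    label, confidence = classify_frame()   # the battery-sapping inference step
    if label in TARGET_SPECIES and confidence >= CONFIDENCE_THRESHOLD:
        fire_trap()
        return True
    return False                           # non-target species: sleep again

# Mocked "hardware" for illustration:
fired = []
handle_pir_trigger(lambda: ("stoat", 0.93), lambda: fired.append(True))
print(fired)  # [True]
```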

A system like this would be an ecological game changer in my country.


The animal would probably be long gone by the time this thing boots and loads the model.


It’s got a low power mode where it can still do some processing: https://youtube.com/watch?v=hS-NiaGeeVA


Not sure about that, but animals are cautious and will usually interact with a baited trap for some minutes.


Yeah once you get to this price point it's starting to make more sense to just buy a Jetson nano and throw Linux on it.


Jetson Nano is a lot more expensive. But at the same price point as the Coral you can buy an Orange Pi 5 with 4 GB of RAM and a 6 TOPS NPU (not as well supported as the Coral, but generally more open, with support from many more frameworks) and have a full-blown OS there. The SBC is $66, and the camera is another $15. And you can buy it right now, not "pre-order" something "coming soon". Coral's supply history is a very sad tale. The hardware was impressive 4 years ago, when it first came out. But now we have no second generation in sight, no supply, and no communication from Google that they are still interested in this project.


> just buy a Jetson nano and throw Linux on it

I've been writing gstreamer-based inference pipelines on Jetsons for a couple of years, and in my experience there is never a "just" with any of these, sadly. It is such a painful platform to deal with at a software level... I wish Nvidia had more competition.


Without RidgeRun wiki's working pipelines I'd have suspected gstreamer on Jetson never worked at all. It's beautiful when everything comes together but absolutely chock-full of gotchas.


Plus Nvidia seems to think that an acceptable time frame to support a Jetson model is like 2 years, which makes them effectively abandonware soon after launch. They get one LTS OS distro and that's it.

Compare that to ~15 years of ongoing support that the Pi foundation does for the average Pi.


I concur. I've been in that ecosystem for a few years and I finally had to give up on my Jetson Nano in "favor" of the new Jetson Orin Nano. Which did solve a lot of my issues with software, and I paid for it, too. Just say no to Nvidia SBCs lol, but they're extremely powerful when the software is aligned with your goals.


I struggled with the developer experience for Coral TPUs. It's an uphill battle trying to convert models to run on TFLite.

I wish there were a TFLite backend for ONNX Runtime—then you could use the same API and same model file to run accelerated inferences on Coral, CUDA, CoreML, etc. - https://github.com/microsoft/onnxruntime/issues/10248


I just want them to make more of the USB and PCIe variants. They are so handy for little Frigate NVR boxes—throw one of these on a Pi 4 and you can do at least a few 1080p cameras with person/object detection, or with a Pi 5 I'm able to do a few 4K streams!

Currently the only way to get one is to pay scalper prices like $100+ on eBay or Amazon.


> throw one of these on a Pi 4 and you can do at least a few 1080p cameras with person/object detection, or with a Pi 5 I'm able to do a few 4K streams!

You can take the M.2 A+E version and throw it into an Intel NUC (replacing the wifi+bt card).

For me, the nuisance is that the gasket driver is not upstreamed and distros don't ship it, so you have to build it yourself and fool around with Secure Boot signing keys.

> Currently the only way to get one is to pay scalper prices like $100+ on eBay or Amazon.

Depending on the model, Mouser has them in stock.


I've used them for Frigate as well and agree that they are amazing.

Digikey has a few hundred of USB style in stock:

https://www.digikey.com/en/products/detail/seeed-technology-...


I would really love to pick your brains on this. Did you do anything special with Frigate to do the stream processing ( hardware offloading etc? ) or was it simply plug and play.


IIRC, it's just editing the config file to tell Frigate to use the Coral.

https://docs.frigate.video/configuration/detectors/
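For reference, the detector section of Frigate's config that points it at a USB Coral looks roughly like this (check the linked docs for the authoritative syntax and device options):

```yaml
detectors:
  coral:
    type: edgetpu
    device: usb
```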


Open-source NVR Frigate uses Coral TPU on RPi for security camera monitoring, https://docs.frigate.video/

> Local NVR designed for Home Assistant with AI object detection. Uses OpenCV and Tensorflow to perform realtime object detection locally for IP cameras. Use of a Google Coral Accelerator is optional, but strongly recommended. CPU detection should only be used for testing purposes. The Coral will outperform even the best CPUs and can process 100+ FPS with very little overhead.


It indeed works well. I bought a refurb Dell SFF desktop with an empty M.2 slot for wifi/Bluetooth. I added a Coral TPU in M.2 form factor and it's serving as my NVR now. The integrated Intel GPU also offers hardware accelerated video encode/decode.


This thing works great. It's loaded in my local 5-node(2x3) k8s cluster, with a Coral TPU plugged into a single node. I have 6 4k cameras running, and it does object analysis on all of them with < 10% CPU on the node.


I run this too on k8s, but there's no benefit to running it on k8s because the app doesn't take advantage of multiple pods / clustering. Unless I'm missing something, there's no real advantage over running it on Docker. It would be cool if each camera stream could be spawned off to its own pod so that it could leverage clustering.


Agreed, no advantage; I just run a larger k8s cluster instead of having Docker systems lying around now.


I have a few of these boards. I've been tweaking a recognizer for a tiny conlang to run on one. I want the device to be able to recognize around 100 words for use in an edge computing context where network connectivity is limited. That's stretching the limits of what this device can do, but that makes for an amusing toy.


Warning about Coral products.

I dig the idea behind what Coral is doing in general. However, the one product I got from them, a USB Edge TPU ("USB Accelerator"), hasn't been well supported since it was released. After some digging, it seems that their "python3-pycoral" package doesn't work on Python versions newer than 3.9 (I've read that there is a hack to get it working with 3.10). I'm running Debian stable, not exactly bleeding edge, and it ships with Python 3.11. So basically I have to run a downgraded VM just to use this thing.

These proprietary products live and die based on their support. And every time I use any kind of proprietary product, I get burned this way.


> So basically I have to run a downgraded VM just to use this thing.

Why not just use pyenv and

    pyenv install 3.9.17

?


I may be missing something obvious but I'm not super clear on what this would be used for.

What kinds of projects have people done where this might be used?

[edit]

Or alternatively what would be a good first project that could justify getting one of these?


Limited use case, but Limelight Vision[1] supports Coral boards for their FIRST Robotics cameras. Accelerated vision pipelines for object detection in an FRC game[2].

[1] https://limelightvision.io [2] https://docs.limelightvision.io/docs/docs-limelight/pipeline...


And why is this still for sale given Google's recent layoffs? I.e. how did this survive?


This could change the cat flap business: https://towardsdatascience.com/keep-your-home-mouse-free-wit...


This is definitely a real edge use case!


> Header pins are not included. We recommend soldering header pins onto the board so you can access the serial port via UART.

$80 and you can't include 2 measly header pins (let alone 24)? $2 ESP32 boards from China come with 32 header pins.


Consider it a filter. If you can't be bothered to solder on some header pins, the device will most definitely end up in a drawer somewhere with a dozen old Arduinos, etc.


I was commenting on their decision not to include the pins, not on the need to solder them. Almost every single dev board or sensor module you purchase today comes with pins that you're expected to solder yourself.

I'm just surprised that for $80 they couldn't be bothered to include them in the box. If anything, it's an admission that they know this will end up in most people's drawers.


Without the header pins, it takes up less room in the drawer.


Valid point. You can store twice as many boards in the same space.


Oh, then yes, I agree with you.



Is this new? It says pre-order but then when you click through to mouser it shows a ton in stock?


Appears to be 18 months old, give or take: https://youtu.be/hS-NiaGeeVA


You can buy this device today. I bought a few on Digikey.


Might want to order one before the injunction.

https://www.theregister.com/2024/01/10/google_tpu_patent_dis...


This is the Edge TPU, which is completely different from their datacenter TPUs, despite the similar name (branding).


It'll be interesting to see how this one goes. A lot of this falls under obviousness. Hardware parallelization of NNs will follow similar paths that anyone skilled in the art would follow. Such paths can't be protected by patent, at least, not for long. If Google is forced to invalidate this patent through obviousness, then this could open similar designs to anyone who can afford an ASIC production run.


I wouldn't hold my breath. Patents seem to be granted purely based on the power and persistence of your patent attorney or team of patent attorneys.

Look at the Amazon one-click checkout: extremely obvious, and there was prior art. How you can patent something as obvious as storing the customer's credit card information is beyond me.

The EU patent office at least had the brains to reject the patent for obviousness.


> The EU patent office at least had the brains to reject the patent for obviousness.

This helps, and actually, the one-click patent has shaken up the courts a bit. The company I worked for was sued by a competitor over a baseless patent infringement. I compiled a three inch thick document full of references to prior art, and we hired an industry expert to write this up and act as an expert witness. When they saw the name of the expert witness and knew that we were going straight for invalidation, they folded and we settled out of court. I can't talk to any of the specifics, but it's a much different world now, even with first-to-file. But, you have to be aggressive in your defense.

I don't know the merits of this case. The article is light on details. But, unless the patent covers something quite specific that is definitely being used by the TPUs and isn't obvious, invalidation is a great strategy to bring the case to a favorable out-of-court settlement. Gamble the value of the patent against the value of claiming that you got Google to settle for an "undisclosed amount". Even settling for $1 and an agreement not to sue each other further makes the patent valuable enough to sell to someone else as a defensive patent.


So the win was that you settled out of court? They didn't just drop the suit? You actually paid them money to go away and they got to claim you settled?


I can't get into the specifics, but settling out of court doesn't necessarily mean that the defendant pays anything. Consider that with both sides agreeing to drop the suit, alternative negotiations could be made.

I'll let you read between the lines. The plaintiff wants to sue a highly visible competitor over a patent. The competitor -- the defendant -- makes motions to begin the process of invalidating the plaintiff's patent based on very strong evidence. Suddenly, the case is settled out of court.


Something like it, but with (large) in-memory computation, would be a welcome addition. The ability to run LLMs on edge platforms needs to be addressed; the question is whether it will take 1 year or 10. Sadly, the Coral was already outdated when it was released.


What would you be able to build with this? Is it possible to inference LLMs on these devices?


Hardly.

1) The Coral Edge TPU chip has 8 MB of SRAM; for comparison, Whisper Tiny (https://huggingface.co/openai/whisper-tiny) is 40M parameters, and most LLMs have billions of parameters.

2) I would be surprised if Coral Edge TPU supports all TensorFlow ops required to run Transformers. The chip was designed with convolutional networks in mind.
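The back-of-envelope arithmetic for point 1, assuming int8 quantization (roughly 1 byte per parameter, the usual case for Edge TPU models):

```python
# Can a model's weights fit in the Edge TPU's 8 MB of on-chip SRAM?
# Assumes int8 quantization, i.e. ~1 byte per parameter.
SRAM_BYTES = 8 * 1024 * 1024            # 8 MB of Edge TPU SRAM

whisper_tiny_params = 40_000_000         # ~40M parameters
weight_bytes = whisper_tiny_params * 1   # ~40 MB even fully int8-quantized

print(weight_bytes / SRAM_BYTES)         # ~4.8x larger than the SRAM
```

And that's before activations and scratch memory, and before considering models with billions of parameters.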


Google abandonware, with the fact that it's Google well hidden. TensorFlow lost, or never really got started tbh, for local inference of the stuff everyone's built over the last few years, including LLMs.

My understanding is that if you wanted to build "is it a hot dog" from scratch and deploy it locally, it's great.


Frigate uses Google Coral for AI-based analysis of NVR video using cheap cameras. Works really well.

https://docs.frigate.video


Frigate uses the TPU with an application processor, not this MCU + TPU version, right?


Yeah but their point is still relevant given the direction the thread has gone


It's so weird that 10 years after seeing Andy Barry's drone dodging tree branches using high-FPS low-res cameras (120 FPS at 320x240, via FPGAs), we still don't have cheap enough sensors to do this over USB3/4 today.


Those i.MX RT parts are BEASTS!


What would you be doing on the edge with this? 64 MB of RAM is too small to load any "performing" neural network?


I stopped reading when I got to the camera resolution... 324x324 px. Are they serious? Either it should run at 1k fps or the board should cost $10.

Also, with the whole Edge TPU, the implied promise was that this was a beginning: that there would be more powerful "TPUs" available in the future, and that the current Edge TPU would be available as a generic electronic component one could buy and incorporate into one's own designs for cheap. I never saw one in stock.


Many convolutional neural networks for image classification work on 299x299 inputs. If it had a higher resolution camera, it would just need to scale down every frame.

You can see input sizes of pre-trained models from Google: https://coral.ai/models/image-classification/
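A minimal sketch of that downscaling step using NumPy nearest-neighbor indexing (a real pipeline would use the camera ISP or OpenCV, but the idea is the same; the frame here is a made-up 1080p placeholder):

```python
import numpy as np

def resize_nearest(frame, out_h, out_w):
    """Nearest-neighbor downscale of an HxWxC frame to a model's input size."""
    h, w = frame.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return frame[rows[:, None], cols]      # fancy-index the selected pixels

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # hypothetical 1080p frame
print(resize_nearest(frame, 224, 224).shape)  # (224, 224, 3)
```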


Most listed there are even smaller, 224×224px. These are models for Coral, though, right? It also says "These are not production-quality models; they are for demonstration purposes only."; the resolution may be part of the reason for that.


The CNN architectures are really designed to operate on such small inputs, even if you run them on RTX 4090s. It makes sense: even with 224x224 pixels you would find it easy to identify the subject of an image. Having a 50MP image of a chair doesn't really help you figure out what it is, and even makes it harder if you're only looking at one zoomed in region of the image at a time.

"Not production-quality models" might refer to: Training was not done for as many epochs as you might want to achieve peak accuracy, or different quantization methods might yield better performance, etc. Or, it's just a disclaimer that if you decide to sell a product using one of these models, don't blame them if it is bad at detecting hot dog vs not hot dog.



