The Mythical Non-Roboticist (generalrobots.substack.com)
199 points by robobenjie 5 months ago | 114 comments



I am fond of saying there are only two hard problems in robotics: Perception and Funding. If you have a magical sensor that answers questions about the world, and have a magic box full of near-limitless money, you can easily build any robotic system you want. If perception is "processing data from sensors and users so we can make decisions about it", then there isn't much robotics left.

Got a controls problem? forward predict using the magic sensor.

Got a planning problem? just sense the world as a few matrices and plug it into an ILP or MDP.

What did the user mean? Ask the box.

etc etc. Distilling the world into the kind of input our computers require is immensely difficult, but once that's done "My" problem (being a planning expert) is super easy. I'm often left holding the bag when things go wrong because "my" part is built last (the planning stack), and has the most visible "breaks" (the plan is bad). But it's 90% of the time traceable up to the perception, or a violated assumption about the world.
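To make that concrete: a toy sketch of what "planning" collapses to once a magic sensor hands you a clean occupancy grid. (A textbook grid search rather than a full ILP/MDP, and the grid, start, and goal are made up, but the point stands.)

    # Toy sketch: given a clean occupancy grid from a "magic sensor",
    # planning is a few lines of BFS (0 = free, 1 = occupied).
    from collections import deque

    def plan(grid, start, goal):
        rows, cols = len(grid), len(grid[0])
        parent = {start: None}
        queue = deque([start])
        while queue:
            cell = queue.popleft()
            if cell == goal:
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = parent[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                        and (nr, nc) not in parent:
                    parent[(nr, nc)] = cell
                    queue.append((nr, nc))
        return None  # no path found

    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(plan(grid, (0, 0), (2, 0)))

The hard part was never this loop; it's producing that grid (and trusting it) in the first place.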

TFA is spot on - it's just not clear how to sense the world to make "programming" robotics a thing. In the way you'd "program" your computer to make lines appear on a screen or packets fly across the internet, we'd love to "program" a robot to pick up an object and put it away, but even a specious attempt to define generally what "object" and "put away" mean is still 100s of PhD theses away. So it's like we invent the entire ecosystem from scratch each time we build a new robot.


I love this perspective.

It’s also made me draw parallels to experiences with actual people, especially others in my household. With young children who are at the early stages of the “doing household chores” part of development, there is basically constant refinement on what “clean the floor”, “put things away”, etc. _really_ means. I know my wife and I have different definitions of these things too. Our ability to be clear and exhaustive enough upfront on the definitions to have a complete perception and set of assumptions is basically non-existent. We’re all only human! But our willingness to engage in fixing that with humans is also high. If my kids repeatedly miss a section under some chairs when vacuuming, we talk about it and know it will improve. When my Roomba does it, it sucks and can’t do its job properly. Even thinking about hiring professional tradespeople to come do handiwork, it’s rarely perfect the first time. Not because they’re bad, just because being absolutely precise about things upfront can be so difficult.


Really there are three problems in robotics: Perception, Funding, and Cables :)


Connectors imo :)


And fasteners. I swear any automation system is 90% cables, connectors and fasteners by weight.


Totally. I worked on the electronics in robot arms for a while and EVERY TIME there was a failure in the field - it was the cables.


Only one of them is fun to manage.


Perception, right?


It's so great to read genuine yet experienced insight like this.

Like last night on Twitter I saw an opening for Robotic Behavior Coordinator at Figure. I know for sure, having analyzed this problem with "nothing else" to do for 20 years, I would crush it with humility, and humanity would profit in orders of magnitude.

But they are not set up to hand me control of the rounding error of $40M I'd like [and would pay forward], *nor would their teams listen to me, due to human nature and academ-uenza*.

Such is our loss.

(as you ~say, "reinventing the ecosystem from scratch...")


> humility

> humanity would profit in orders of magnitude


>> touché :)

>> but please believe, I would not risk ostracism on this (my favorite) forum if I were not [approaching] 100% sure.


Ah, sorry if I sounded like a douche.

Have my Y-C idea now.

here we gooooo ..!.. ;)


even a specious attempt to define generally what "object" and "put away" mean is still 100s of PhD theses away

Is this part still true? There are widely available APIs (and even running at home on consumer level hardware to some extent) that can pick an object out of an image, describe what it might be useful for and where it could go.


Imagine you program a robot to "put away" a towel. Then it opens the door and finds there's a cup in the place already. Now what? Or a mouse. Or a piece of paper that looks like a towel in this lighting. Or a child.

Imagine the frustration if the robot kept returning to you saying "I cannot put this away". You'd get rid of the robot quickly. Reasoning at that level is so difficult.

But then imagine it was just a towel all along - oops, your perception system screwed up and now you put the towel in the dishwasher. Maybe this happens 1/1,000,000 times, but that person posts pictures on the internet and your company stock tanks.


Most robotics companies today still use traditional tracking and filtering (e.g. Kalman filters) to help with associating detected objects with tracks (objects over time). Solving this in a fully differentiable / ML-first way for multiple targets is still WIP at most companies, since deepnet-to-detect + filtering is still a strong baseline and there are still challenges to be solved.

Occlusions, short-lived tracks, misassociations, low frame rate + high-rate-of-change features (e.g. flashing lights) are all still very challenging when you get down to brass tacks.
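For anyone who hasn't seen the filtering half of that baseline, here's a bare-bones sketch of a 1-D constant-velocity Kalman filter. All the noise values are made-up placeholders; a real tracker runs something like this per track in 2-D or 3-D, plus an association step.

    # Minimal 1-D constant-velocity Kalman filter (illustrative values only).
    import numpy as np

    dt = 0.1                                # frame period [s]
    F = np.array([[1, dt], [0, 1]])         # state transition (pos, vel)
    H = np.array([[1, 0]])                  # we only measure position
    Q = np.diag([1e-3, 1e-2])               # process noise (assumed)
    R = np.array([[0.25]])                  # measurement noise (assumed)

    x = np.array([[0.0], [0.0]])            # initial state estimate
    P = np.eye(2)                           # initial covariance

    def step(x, P, z):
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with measurement z (e.g. detected position from the deep net)
        y = z - H @ x                       # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        return x, P

    for z in [0.1, 0.22, 0.35, 0.41, 0.58]:  # noisy detections
        x, P = step(x, P, np.array([[z]]))
    print(x.ravel())                         # smoothed position and velocity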


It's definitely not a solved problem in general, especially in realtime.

It's a lot easier to get started on something interesting and maybe even useful than it was even 10 years ago.

A lot of the "ah we can just use X API" falls apart pretty fast when you do risk analysis on a real system. Lots of these APIs do a decent job most of the time under somewhat ideal conditions; beyond that, things get hairy.


> that can pick an object out of an image

You have to do it in real time, from a video feed, and make sure that you're tracking the same unique instance of that object between frames.
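And "same unique instance" is its own little sub-problem. Here's a toy sketch of the greedy IoU association step that most tracking-by-detection pipelines include in some form; the 0.3 threshold is an arbitrary assumption.

    # Toy greedy association of detections to existing tracks by IoU.
    # Boxes are (x1, y1, x2, y2); the threshold is an arbitrary assumption.
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter)

    def associate(tracks, detections, thresh=0.3):
        """Return {track_id: detection_index}; unmatched detections start new tracks."""
        matches, used = {}, set()
        for tid, tbox in tracks.items():
            best, best_iou = None, thresh
            for i, dbox in enumerate(detections):
                score = iou(tbox, dbox)
                if i not in used and score >= best_iou:
                    best, best_iou = i, score
            if best is not None:
                matches[tid] = best
                used.add(best)
        return matches

Occlusions and misdetections are exactly where this simple greedy matching falls over.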


Robots could make a short stop or go slower to process an unclear picture; that is probably not the problem. But the image processing itself is still way too unreliable. Under ideal conditions it mostly works, but add some light fog or strong sunlight to the picture and ... usually it all fails.

Otherwise Teslas would indeed have full self-driving mode, using only cameras.


>Robots could make a short stop or go slower to process an unclear picture

The costs of doing so are hugely dependent on the application. It is not, for example, an attractive strategy for an image-guided missile, though it's probably fine for an autonomous vacuum cleaner.


And then you need to grasp it.


If someone could readily do it using GPT-4V with its apparent sentience, it would already be happening. So far there have been just a few demos, which show obvious signs of manual programming, manual remote operation, and/or even VFX editing in some cases.


That language sounds borne of hair-pulling disbelief.

If they can put ImageNet on a SOC, they can do it. [probably too big/watt]

Better yet: ImageNet bones on SOC, cacheable "Immediate Situation" fed by [the obvious logic programming that everyone glances past :) ]


> This is how Cybernetics starts y'all. <


Cute quote - added to https://github.com/globalcitizen/taoup :)

I would add supply chain, however.


To solve that:

Assumption: Apple's supply chain is gold standard [~max iterative tech envelope push & max known demand]

Hypothesis: This is swiftly re-creatable for any [max believable & max useful] product. "Detroit, waiting".


An honor! Pleased to contribute.


What about transformers for robotics, like ALOHA? They seem to help with learning new tasks.


Loved the TFA.

I've been working on robotics pretty much my whole career and people usually miss how complicated it can get even for simple things once you consider what can go wrong, AND that it's a meeting place for a multitude of areas: hardware, software, mechanical, electrical, machine learning, computer vision, controls, drivers, databases, etc. An issue can hide in between any of those for months before it shows up with bells and whistles.

What is sometimes difficult to get across to people is that building robots is not only difficult per se, but the base of comparison is unusually unfair: if you build an e-commerce website you benchmark it against other e-commerce websites, maybe Amazon, maybe ebay; for robots usually the benchmark is against people, the most adaptable and fault tolerant machine that exists, every robot will suck compared to a human doing the same task, but that's what we compare it to every time.


Last week I stumbled onto a bug in a sensing system that has lived in our codebase for at least 16 months and wasn’t ever triggered. I'm being vague, but the system has been used pretty heavily in a whole bunch of different environments, attached to several different host (mechanical) machines. What tickled it? A counter wraparound in 3rd-party FPGA logic at exactly the wrong moment.

And per Murphy’s law, it happened for the first observed time in a relatively high-stakes situation while there were a lot of eyes on it. Naturally.


I feel you. This week we found a race-condition-induced segmentation fault in our lidar drivers (by Sick) that has always been there, and the only reason we came across it was that we had to customize them and increase the poll frequency for a specific use case. Like you, high visibility and during a crunch.


AKA the demo effect: if you want to show something working (like to a customer, investor, boss), it won't. If you want to show something not working (like to a supplier or co-worker for refund/debugging), it will.


"every robot will suck compared to a human doing the same task, but that's what we compare it to every time"

What about a factory robot, welding together a part of a car?


Those aren't robots, they're industrial automation. :)

As soon as it gets practical it stops being robotics.


A potential corollary: As soon as it gets practical, it also stops looking like a robot.

Once you know how many degrees of freedom are truly needed to solve a problem, you start removing unnecessary parts in the design to lower cost and assembly complexity.

Thus, once your cool new C-3PO has perfected the art of making toast, it's only a matter of time until you re-engineer it into looking like a toaster.


No, I don't think so. We already have toasters, robots are for the tasks that can't be done by such simple machines.


Form factors are so important. I'm right in between robots and toasters. I see a building as the form factor for a 100 year computer.

The best illustration of this subtle difference is how I'm contemplating snow and ice management. I have the solid state idea of installing quartz IR lights around the building to control the ice and snow. I also have been working on using de-icing and pre-icing liquids with hopes of getting some droids to take over the physical part of applying the liquids and brushing away the snow.

I have settled on doing both with the building controller acting as the overall manager of the process.

I looked at the posetree.py that the author wrote and linked to, and it looks like as good a place as any for me to start.

Form factor is critical in assigning human names and communicating use. I find that when organizing a solution to a problem, adopting a form factor too early is a hindrance.


It's a matter of interface: people like the idea of humanoid robots because all the interfaces are already optimized for humans, thus if robots have that form factor they can use the same devices/tools that humans do and we wouldn't have to change the designs of these machines.

The question is: how much information is lost in the process? How many layers of complexity would we add to a machine ensemble so it can operate as a whole at a satisfactory level? The machine learning corollary - that understanding the whole picture of the problem/solution space leads to simpler solutions (because you don't have to optimize further) - applies here. At the end of the day, cost, complexity and practicality will have the final word.


There can also be human handles and touch points and mechanical interfaces for direct manipulation.


True, but all the recent buzz around humanoid robots is about putting robots in the same environments humans use without having to change them at all - not because changing them is impossible, but because it's a lot of effort and every minimal possible interaction needs to be mapped.

There are autonomous forklifts, but a humanoid robot that could sit in a normal forklift, regulations aside, would be almost an insta buy in logistics.


I think it will first be a kit to convert your average forklift to autonomous operation by retrofitting some sensors, motors for pulling cables or changing pressure on hydraulic lines electronically, and a laptop. The humanoid thing will be killer when it's out, but it will be a century-long, boundary-pushing dream to be chased.


I understand I'm just designing for a world that doesn't quite exist but is technically possible.

I'm designing for a future that is just far enough out that I can both see it and achieve it on the scale of an 8-unit apartment building.


A toaster cannot make toast alone. The bread has to be inserted and removed and put on a plate. That's what I want a robot for (one day, after I am sure it will not accidentally toast me).


The question is, is this actually worth it over a toaster with a climate-controlled toast hopper that dispenses directly onto a clean plate from a plate dispenser?

And if that feels too expensive and space-intensive for mere toast, just think of how much worse a robot would be!


> As soon as it gets practical it stops being robotics.

This idea co-evolved in "AI"


That would fall under the "automation" category: a very specialized, customized application of robotics, doing the same set of tasks over and over. This is the kind of application where we can really see the power of robotics, but rest assured that countless hours were spent testing/improving/optimizing/safeguarding these workflows, and after every section in an assembly line there will be manual inspection to flag bad / missing welds and potential servicing of the machinery involved.


Would "single purpose robot" be another reasonable term for welding robots? Just musing.

The earlier "when compared to humans" statement definitely sounds pretty accurate to me, worded as "mutli-purpose robots currently always are less robust than humans at the same set of tasks" (or similar)


>multi-purpose robots are currently always less robust than humans at the same set of tasks

Specialization has tradeoffs. Humans are very optimized generalists but very few of us become specialists at more than one thing. Even in that case a specialized machine/robot can be far faster, depending on the task of course.

Of course humans have a lot of tradeoffs for their abilities as generalists... taking years to mature, requiring sleep, and poor integration with computer systems are just some of them.


Do you like working in robotics? How is the work, pay, environment, and industry?

I've entertained the idea of entering that space as a software engineer. No real experience in robotics though.


Working on robotics software is still exciting to me, unfortunately that is but a small part of working in the field: supporting operations/customer support is something that can take a lot of your time due to a multitude of factors (lack of specialized knowledge, bad designs, bad components/suppliers that are hard to move away from, environmental issues that can take some investigation to uncover, etc.), handling the expectations from (usually not technical) management can be challenging as well.

Projects are usually complex in part due to having a lot of moving parts (hw, sw, mechanical), iterating (bad) designs/components is not practical due to support reasons, so you may be stuck with a known bad stack.

And copying a previous comment from me on another thread: Robotics is very niche and the market is dominated by early-stage startups (since most of them go out of business a few years in), so salaries are average unless you are working specific jobs for FAANG (which is a small pool). Job hopping usually means moving elsewhere, since being physically close to the hardware makes the work much easier, which in turn means it's sometimes not obvious what a competitive salary looks like.

Overall I would say that if you are optimizing for money / career mobility, robotics is not great and you can do better someplace else.


> once they are programming a robot, I feel they become roboticists

Yes! I am not a roboticist (or at least a good one in any sense) but I was having a similar discussion regarding enabling non-technical users to do data analysis. Once they start doing anything more complicated than `SELECT COUNT(*) FROM blah WHERE foo=otherblah` it's going to get real real quick. You can't just give them some cheap point and click stuff because their questions will immediately overrun the extent of what's practicable. Asking interesting questions of data is roughly as difficult as phrasing the questions in SQL (or any other formal query language) and anyone who can do the former can do the latter easily enough.

(or the point and click stuff _is_ really powerful but it's some proprietary non-googleable voodoo that requires a month long training course that costs $5K/week to get a certificate and become middlingly powerful)


Yep. The entire article is the low-code fallacy applied to robot programming.

It will be the same in any branch of programming you look at.


> low-code fallacy

I like it that we have a name for this now. Let's keep calling it the "low-code fallacy", because I'm tired of explaining over and over the same idea that semicolons and for loops are not what makes programming hard.


Exactly. It's that damned JS "class" keyword! ...right?


Actually yes, I think stuff like this makes programming hard. A half-assed implementation of "class", not behaving like a class, brings unnecessary confusion. Programming in the real world is full of these details you have to know to be productive. 0.1 + 0.2 == 0.30000000000000004 in many languages is another one.

(And semicolons are ugly and I avoid them wherever I can get away with it, but no, they are probably not the reason)


I agree that the JS implementation of "class" is bolted-on and obscures the underlying prototypical inheritance, and that this kind of thing makes programming harder. I wish JS had leaned more into the theory of prototypes, possibly discovering new ideas there, instead of pretending like they're using the same inheritance scheme as other languages (although perhaps we should have expected that from a language whose literal name is from bandwagoning Java). The way to reduce this difficulty is by making better programming languages, by improving the underlying theory of programming language design, software engineering, etc. Cleaner, purer languages, closer to the math (being the study of self-consistent systems). This is the opposite direction of "low code". It's more like "high code". Low code is chock full of this kind of poorly thought-out, bolted-on, leaky, inconsistent abstraction, because their entire point is to eschew ivory-tower theory; they avoid the math, and so become internally inconsistent and full of extraneous complexity.

I also agree that 0.1 + 0.2 != 0.3 is another thing that makes programming hard. This is intrinsic complexity, because it is a fundamental limitation in how all computers work. The way around this is -- you guessed it -- better programming languages, that help you "fall into the pit of success". Perhaps floating point equality comparisons should even be a compiler error. Again, low-code goes the opposite direction, by simply pretending this kind of fundamental complexity doesn't exist. You are given no power to avoid it biting you nor to figure out what's going on when it does. Low-code's entire premise is that you shouldn't need to understand how computers work in order to program them, but of course understanding how floating-point numbers are represented is exactly how you avoid this issue.
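For what it's worth, once you know the representation is the problem, the usual remedies are one import away (Python shown here, but most languages have equivalents):

    # 0.1 and 0.2 have no exact binary floating-point representation, so:
    print(0.1 + 0.2 == 0.3)                  # False
    print(0.1 + 0.2)                         # 0.30000000000000004

    # Knowing that, the standard workarounds:
    import math
    print(math.isclose(0.1 + 0.2, 0.3))      # True: tolerance-based comparison

    from decimal import Decimal
    print(Decimal("0.1") + Decimal("0.2"))   # 0.3: exact decimal arithmetic

    from fractions import Fraction
    print(Fraction(1, 10) + Fraction(1, 5))  # 3/10: exact rational arithmetic

None of which a low-code tool will surface; it just hides the float until it bites.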


I think it is pessimistic to say that number precision is a problem fundamental to computing. The bitter lesson gives me hope that someday no one will have to care about non-arbitrary-precision math. Programming could get that simplified by a great platform.


I suspect that if you dive deeply into arbitrary-precision math (although I don't mean to assume you haven't), you'll probably find that a programming language that supports such a thing forces quite a bit more thought into exactly what numbers are and how they work. Arbitrarily precise arithmetic is deeply related to computability theory and even the fundamental nature of math (e.g. constructivism). A language that tried to ignore this connection would fail as soon as someone tried to compare (Pi / 2) * 2 == Pi; such a comparison would run out of memory on all finite computers. In fact it's not clear that such a language could support Pi or the exponential function at all.

A language that was built around the philosophy of constructivist math in order to allow arbitrary precision arithmetic would basically treat every number as a function that takes a desired precision and returns an approximation to within that precision, or something very similar to that. All numbers are constructed up to the precision they're needed, when they're needed. But it would still not be able to evaluate whether (Pi / 2) * 2 == Pi exactly in finite time -- you could only ask if they were equal up to some number of digits (arbitrarily large, but at a computational cost). If you calculate some complex value involving exponentials and cosines and transcendentals using floating point, you can just store the result and pass it off to others to use. If you do it with arbitrary precision, you never can, unless you know ahead of time the precision that they're going to need. There are no numbers: only functions. You could probably even come up with a number that suddenly fails at the 900th digit, which works perfectly fine until someone compares it to a transcendental in a completely different part of the software and it blows up.

This does not sound like it's simplifying anything. Genuinely, a healthily-sized floating point is the simplest way to represent non-integer math; this is why Excel, many programming languages, and most science and engineering software uses it as their only (non-integer) number format. It's actually hard to come up with a situation where arbitrary precision is actually what the users need; if it really seems like you do need it, then you might actually want a symbolic math package like MATLAB or Mathematica/Wolfram Alpha or something.
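To make the "numbers as functions of a requested precision" idea concrete, here's a toy sketch (illustrative only, not a real constructive-reals library; the error bookkeeping is simplified):

    # Toy "constructive real": a number is a function that, given n bits of
    # precision, returns an integer k with |x - k / 2**n| <= 2**-n.
    from fractions import Fraction

    def const(q):
        """Lift an exact rational into a constructive real."""
        q = Fraction(q)
        return lambda n: round(q * 2**n)

    def add(x, y):
        # Ask both operands for two extra bits so the sum still meets the bound.
        return lambda n: round(Fraction(x(n + 2) + y(n + 2), 4))

    def approx(x, n):
        return Fraction(x(n), 2**n)

    half_pi_ish = const(Fraction(157, 100))   # stand-in; a real library would
    x = add(half_pi_ish, half_pi_ish)         # build pi from a series
    print(float(approx(x, 30)))               # ~3.14, to the requested precision
    # Equality of two such numbers can only ever be checked up to n bits --
    # exactly the limitation described above.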


I'm sorry, but 0.1+0.2 != 0.3 is fundamental. It creates difficulty, but you are not capable of doing math in a computer if you don't understand it and why it happens. Even if your environment uses decimals, rationals, or whatever.

The SQL `numeric` makes the right choice here, putting the problem right at the front so you can't ignore it.

That said, I completely agree with your main point. Modern software development is almost completely made of unnecessary complexity.


This is just a phase. The Internet went through this. It was criticized in the early days as requiring "too many PhDs per packet". Eventually, with standardization and automation, we got past that. Now anybody can connect.

Rethink Robotics went bust because they couldn't solve this usability problem. It's a problem at a much higher level than the author is talking about. If you're driving your robot with positional data, that's easy to understand, but a huge pain to set up. Usually, you have very rigid tooling and feeders, so that everything is where it is supposed to be. If it's not, you shut down and call for a human.

What you'd often like to do is an assembly task like this:

- Reach into bin and pull out a part.

- Manipulate part until part is in standard orientation.

- Place part against assembly so that holes align.

- Put in first bolt, leave loose.

- Put in other bolts, leave loose.

- Tighten all bolts to specified torque.

Each of those is a hard but possible robotic task at present. Doing all of those together is even harder. Designing a system where the end user can specify a task at that level of abstraction does not seem to have been done yet.

Somebody will probably crack that problem in the next five years.
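If someone does crack it, my guess is the end-user-facing layer looks something like the sketch below. The API is entirely hypothetical; every call hides a perception, planning, and control problem underneath.

    # Entirely hypothetical task-level API -- no such system exists today.
    def assemble_bracket(robot):
        part = robot.pick_from_bin("bracket_bin")            # bin picking
        part = robot.reorient(part, "standard_pose")         # in-hand manipulation
        robot.place_against(part, "chassis", align="holes")  # mating under uncertainty
        for hole in robot.find_holes("chassis", part):
            robot.insert_fastener("M6_bolt", hole, tighten=False)
        for hole in robot.find_holes("chassis", part):
            robot.torque_fastener(hole, newton_meters=9.0)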


There are only three timeframes for tech forecasts:

- one year (someone is building this)

- five years (no one knows how to solve this problem but a lot of people are working on it and y'know, eventually you get lucky)

- ten years (this isn't forbidden by the laws of physics but it's bloody impossible as far as anyone knows)


It's more that robotics can now mooch off the AI boom. All that money going into adtech and surveillance produces technology that can be used to solve practical problems.


> All that money going into adtech and surveillance produces technology that can be used to solve practical problems.

Problems like "how do we build better automated surveillance robots? it's so inconvenient to have to actually have a human remotely piloting the kill-bots"


Yes please. Just gotta make a convincing case, and ideally make sure all the folks "losing jobs" have a good pivot.

Which is the other, equally shiny part of the coin.

Elder care, anyone? They're as cool as you and me (+30yrs) :)


I'll still be waiting another 10 years for my flying car, but at least CostCo has robots that can automatically wash your hiney hole on sale for just $300 this week.


- twenty years (this is forbidden by the laws of physics)



> Somebody will probably crack that problem in the next five years

Nothing in your list has really changed in the last 5 years. What makes you think we are significantly closer now?

NB: I'm not saying we aren't making strides in robotics. A lot of these problems are really tough though; smart people have been working hard on them for the last 4+ decades, and making some headway. We are definitely enjoying the benefits of that work, but I don't have any reason to think we're "nearly there"

What I do think is much improved in the last decade or so is the infrastructure and vendor ecosystem - you can get a lot done now with commodity and near-commodity components; there is less need to start "from scratch" to do useful things. But the hard problems are still hard.


> Where are we closer?

Vision. Computer vision keeps getting better. Depth sensors are widely available. Interpretation of 3D scenes kind of works. A decade ago, the state of the art was aligning an IC over the right spot on a board and putting it in place.

> What I do think is much improved in the last decade or so is the infrastructure and vendor ecosystem - you can get a lot done now with commodity and near-commodity components; there is less need to start "from scratch" to do useful things.

Very true. Motors with sensors and motor controllers alone used to be expensive, exotic items. I once talked to a sales rep from a small industrial motor company that had just started making controllers. He told me that they'd done that because the motor and the controller cost about the same to make but the controller had 10x the markup.


> Vision. Computer vision keeps getting better. Depth sensors are widely available. Interpretation of 3D scenes kind of works. A decade ago, the state of the art was aligning an IC over the right spot on a board and putting it in place.

I disagree, at least with this as evidence for your 5 year timeline - computer vision has been improving, yes, but nothing earth shattering in the last 5 years that I've seen. We've seen good incremental improvements over 30 years here but they don't seem to be approaching "good enough" yet, at least not in a way that would give me confidence we're at an inflection point. Most of the most recent interesting improvements have been in areas that don't push the boundaries - they make it easier to get closer to state of the art performance with less - fewer sensors, less dimensional & depth info, etc. But state of the art with expensive multiple sensor setups isn't good enough anyway, so getting closer to it isn't going to solve everything.

Same with the 3D scene stuff: people have been plugging away at that for 30 years, and while I think some of the recent stuff is pretty cool, it still has a long way to go. Whenever you start throwing real-world constraints in, the limitations show up fast.


> They make it easier to get closer to state of the art performance with less

Which gets us, for example, cost-effective robotic weeding, and sorting of recyclables. When each sensor only needs about a smartphone's worth of processing capacity, and cameras are cheap, they can be applied in bulk to mundane tasks.


Sure, there are applications where it is a real benefit. Typically (like your examples) where we can manipulate the environment to work around the limitations of the technology. This is a good thing! When the tech gets cheap, it’s easier to apply more broadly.

However it doesn’t really speak to your contention. This is an example of doing less than state of the art perception for much cheaper, but to meet your goal (5 years or otherwise) we need to significantly improve the state of the art.


> computer vision has been improving, yes, but nothing earth shattering in the last 5 years

I totally and completely disagree. Sure, "computer vision" industrial cameras doing edge detection haven't changed much, but the computer vision my phone can do is many orders of magnitude better today than it was 5 years ago.

There's tools now that can take a short video of your bookcase and identify every book. That's serious progress!

Edit: This is the example I was referencing https://simonwillison.net/2024/Feb/21/gemini-pro-video/

Breaking down video into tokens for large language models and asking for structured data out. That's ground breaking compared to any non-LLM style machine vision.


I agree there is cool stuff going on in vision, absolutely. But I wasn’t talking about the field in general.

I just don’t think it moves the needle significantly in this particular area. For example, structured data out of a single camera is way better than it was 5+ years ago, but it isn’t as good as a dedicated multi-sensor setup (i.e. state of the art for robotics), and that in turn isn’t good enough for the problems in the GP post - which was the point.


> Depth sensors are widely available.

Eh, they're better than they were, but there's nothing that can meet the needs of generalizable robotics.

Every depth camera on the market does badly in some common situations. Even the ones that cost as much as a house.

> A decade ago, the state of the art was aligning an IC over the right spot on a board and putting it in place.

Are you sure you don't mean 3-4 decades ago?


You're right, that's why I'm surprised Honda has not done more.

Shoulda teamed up with the Nintendo folks, probably.


The goal of computing is, and has always been, controlling the behavior of machines the same way as, or more easily than, we do with other agents in the world, toward some measurable end.

So, to what level of granularity do you have to specify a system task in order for it to do the thing you want it to do, at the level of accuracy you want to operate at?

That all depends on how accurately you can specify what you want to do,

which means you need a sense of all of the systems that interact with, and impede, the successful completion of the task by the set of systems.

We can build abstraction layers, we can build filters, but at some point somebody has to map a set of actions to a set of inputs and outputs in order to sequentially build this set of tasks, which rolls out into the function of a physical manifestation of some sort.

Add to that the complexities of mobile actuation, complex environments, and just the general state of power, computing, routing, etc., and you have a 15 body problem simply to have anything that someone would look at as a benefit to humanity.

Only a couple of disciplines can totally encapsulate all that, and none of them are available to study anymore; primarily cybernetics, and all of the interactions necessary to fully build a human-machine symbiotic system.


> "you have a 15 body problem simply to have anything..."

I like that! Although...Physics [so gpu] is enough to do it, when supplied with an optimized way to "know" momentary_[intent/\status] as a reduced ongoing string of equations.


I’m not going to pretend I know anything about the field, nor do I intend to insult your comment … but this reads to me exactly like the reduction of the subject that the article mentions.


I was super excited to take a robotics class in college. I'd fallen in love with programming and was excited to take all that magic into the real world.

We all had to buy roombas to program. The final exam was getting it to traverse a maze. It seemed so simple! They even gave us the exact dimensions and layout ahead of time. Just hard-code the path, right? Spin the wheels so many rotations, turn 90 degrees, spin some more.

Except the real world is messy, and tiny errors add up quickly. One of the wheels hits a bump, or slips a little on the tile, and suddenly you're way off course. Without some kind of feedback loop to self-correct, everything falls apart.

My excitement for robotics died quickly. I much prefer the perfectly constrained environment of a CPU.
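The lesson, roughly, in code (the robot API here is imaginary; the point is the feedback term, not the numbers):

    # Imaginary robot API, just to contrast open loop vs. closed loop.
    def drive_open_loop(robot, distance_m):
        # What I tried first: count wheel rotations and hope.
        rotations = distance_m / robot.wheel_circumference
        robot.spin_wheels(rotations)        # bumps and slip go uncorrected

    def drive_closed_loop(robot, distance_m, kp=2.0):
        # What actually works: measure, compare, correct, repeat.
        start = robot.odometry()
        while True:
            traveled = robot.odometry() - start
            error = distance_m - traveled
            if abs(error) < 0.01:           # within 1 cm of target
                break
            robot.set_speed(kp * error)     # proportional correction
        robot.set_speed(0)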


Oh yes, we were building and programming Lego Mindstorms robots in university, also with the goal of going through a simple maze and following a line. Boringly simple, everyone thought. But the first thing we learned was to not trust our sensors. My expectation was that if the ultrasonic sensor said there is 1 m to an obstacle, then there is 1 m of free space. Well, not really. Partly because the sensors were really bad and only worked reliably when the obstacle was at a 90-degree angle, and partly because it is in the nature of sensors to not be perfect.

I am still excited for robots though, but haven't worked on one in quite a while.


Forty years ago in university (while I resided in a residential college) I was also excited to work on robotics.

We were expected to assist in machining parts, building control libraries from scratch, working out algorithms from scratch for path generation, etc.

The goal was to shear a sheep: https://www.youtube.com/watch?v=6ZAh2zv7TMM


> Design your APIs for someone as smart as you, but less tolerant of stupid bulls*t.

This is definitely applicable outside of robotics. For example, I work on a large-scale LLM training framework and tend to think this way when thinking about design decisions.


> That would be great, right? We should make a software framework so that non-roboticists can program robots.

Lol, I work in the field of test automation and this is exactly how no/low-code frameworks get pushed as well. And it rarely plays out the way people think it will.

In fact, having read the entire article, I feel like a lot of it can be applied more broadly. Basically any time people go "X sure is complex, we should make a simple to use framework for non-X folks to use". Not that it will always fail, but I have seen it happen enough to recognize a pattern.


BTW, odd question - assume somebody built a low/less code test automation framework that didn't fail and was very effective but it also wasn't popular.

What could it do to stand out to you? What could it demonstrate in 15-20 seconds for you to think "OK this is different"?


1. Come without vendor lock in.

2. Format is human readable so it can actually be integrated in a proper development cycle (git, etc).

3. Doesn't make it a nightmare to do custom things where needed.

It is not impossible. Off the top of my head, Robot Framework does fit those criteria. But I'd argue that Robot Framework isn't really low code, but rather a coded framework in a low code trench coat.


Thanks, that's very helpful.

1 and 2 make perfect sense and are easy to demonstrate, but 3 seems to me to be incredibly difficult.

I haven't found an easy way to advertise convincingly to somebody who (quite reasonably) grants you a limited amount of attention that custom things won't be a nightmare. It's the kind of thing you only tend to see when you get down into the weeds, and hence people will tend to make assumptions based upon surface details.

This is a problem I'm struggling with.

I think robot/cucumber could require less code if they were better abstractions (and would be more loved), but I find it hard to illustrate that an abstraction is going to be good or bad, particularly to people with limited attention and particularly to people who don't necessarily have the skills to recognize a good abstraction.


> I haven't found an easy way to advertise convincingly to somebody who (quite reasonably) grants you a limited amount of attention that custom things won't be a nightmare.

I'd say, have a highly emphasized set of examples of the things people are most likely to want to customize.

You probably don't want to put the examples inline with your basic description, but link them there.


Thanks!


My personal view, as an industrial control systems engineer, is that so much of the world's production software requires teams of software professionals to monitor it and keep it working. When these same software professionals look at systems which physically interact with the real world on a real time basis then a different dynamic comes into play.


There's certainly that. No one cares about that hot stack when your distributed system needs to work 24/7/365 for a couple decades with 0 SREs.


Traditional machine vision developers: I have 10,000 problems. You can’t do this.

Neural network people: watch this space, I have a shotgun.


As a former traditional machine vision researcher, currently robotics-adjacent software engineer, I agree, but: they've been brandishing that shotgun for a pretty long time before it became standard issue, for varying reasons. To continue your analogy, I'm pretty sure the (pseudo-)reasoning ability of GPT4-level LLMs could solve a lot of hard robotic problems (perception ambiguity, external agent behavior prediction etc), but now it's a stationary missile silo, and we need this weapon on every APC to be a solution.


As someone who runs a medialab at an art school, it is fascinating how many people believe that because they understand the general principle of a thing, it is therefore simple to just do it.

Many people seem to long for a magical technology that you could just pour over things and they will work out in the ways you wanted, while miraculously sensing the ways you didn't.

Those with the edge on the new tech will always be those who have a good understanding of its limitations, because once a new thing comes around they immediately see the possibilities.


As someone that messes with every low-cost robotics thing, this part stuck out as painfully true:

“Oh yeah, if you try to move the robot without calling enable() it segfaults. That's a safety feature… I guess? But also if you call it twice, that also segfaults. Just call it exactly once, ever.”


Well, hate to tell you this, but it’s generally not much different in a lot of the professional world either. The amount of bullshit I’ve had to deal with to make $20k hardware work mostly reliably still boggles the mind.

From the article: Design your APIs for someone as smart as you, but less tolerant of stupid bullshit.

One of the most painful parts of doing this professionally is that the people that work at a few of our vendors are incredibly smart and are selling us hardware that we can’t really get anywhere else, but they’re generally Electrical Engineers or Optical Engineers or Physicists and don’t even realize that the APIs they’re providing are bad. You file a bug, they tell you you’re holding it wrong, you point out the footgun in the API, and they come back and ask what that even means.

…It’s not until you debug their closed source library using Ghidra and tell them they missed a mutex in a specific function that they start treating you as anything more than a moron.

Anyway </rant>


I work as a non-roboticist at a robot company. Most of my job is to enable people with PhDs to do work and to clean up after people who hastily stood up some sort of infrastructure (either actual infra, tooling, or libraries) so that they could go work on the thing they were actually interested in. Occasionally I'll get to work on vaguely actual robot things - drivers, communications frameworks, timing, etc.


I think the real takeaway from this article, which is pretty widely applicable, is that a lot of the time when you get the requirement "make it simple" the actual requirement is "make it intuitive." Taking away functionality doesn't typically make things any more intuitive; indeed it often makes things much less intuitive, because a lot of things end up coupled that a user would not naturally expect to be. Conversely, giving the user tons of options but making sure they are distinct, clearly named, their interfaces are consistent, and the defaults are sensible allows someone who understands the problem they want to use the API to solve to jump right in.


This quote hits close to home. “So I got the arm to match the conveyor speed by monitoring the position and pre-empting the motion command, alternating ‘slow’ and ‘fast’ at 10Hz with a duty cycle depending on how we are tracking our target. The motion is pretty jerky but it works.”

I’ve been through exactly this scenario in two very popular robotics frameworks.

Countless hours were spent by well intentioned framework developers to abstract away the underlying PID loop from the end users. Then countless more hours were spent by the end users to work around this by implementing a PID loop on top of the abstraction.
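For context, the thing end users keep rebuilding on top of the abstraction is about this much code (the gains and the conveyor/arm interfaces here are placeholders):

    # The PID loop everyone ends up reimplementing on top of the framework.
    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = None

        def update(self, error, dt):
            self.integral += error * dt
            derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # pid = PID(kp=1.2, ki=0.1, kd=0.05)
    # Every control tick (e.g. 10 Hz):
    #     error = conveyor.position() - arm.position()
    #     arm.set_velocity(conveyor.velocity() + pid.update(error, dt=0.1))

All the framework needed to expose was the velocity command; instead it hid it, so everyone rebuilds this badly on top of "slow"/"fast".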


I used to work with Benjie at X and he was one of my favorite people. Benjie if you see this it is great to see you doing this kind of writing and I love this article!


So if LLMs are so great, I would think binding black-box robotics hardware with APIs would lead to a revolution in robotics.

That sort of one-step implementation seems to be a sweet spot for LLMs.

Is the problem a lack of available examples for training?


One would think...all of the above.

I think, really, the emperor has had no clothes for quite some time. But -- now that we are here, the optimal path is towards open standards.

All historical business acumen points straight to black-box profit bubble. The enormity of what "Useful Robotics" will bring about has got to transcend that.


...as soon as the MBAs realize "no bubble", in real actuality, Means "very many even bigger bubbles", and usher forth to unleash their true potential... we are set!


This is relevant to many domains, with different degrees of relevance.


s/roboticist/programmer/g and you get an infinite bullshit generator for the world of business and enterprise software, without even firing up ChatGPT!

Having worked in both industries, I concur that robotics is much, much messier, as the system has to engage via hardware with the super-messy physical world as opposed to the comparatively modestly messy world of business transactions, data analysis, or whatever. But if we stop trying to solve for "programming for nonprogrammers" and assume that anyone who uses a language or API is a programmer (because once you start programming, that's what you become, irrespective of what's in your job title), we can remove a whole lot of wasted effort from the industry.


Interestingly, a lot of these things are the same challenges that no-code platforms face


> Design your APIs for someone as smart as you, but less tolerant of stupid bullshit.

I feel like this has been the problem plaguing the ROS navigation stack since move_base and now nav2. They design the API for people a few standard deviations smarter than everyone else on the planet. Billions of parameters that affect each other in unpredictable ways and you're supposed to read the thesis on each one.

Or do what most everyone else does and use the defaults and hope for the best, lmao. You either make an API that the average user will understand or it'll inevitably be used as a black box.


Robotics has been completely transformed in the last six months; check out this Figure video from yesterday:

Robotics is dead. Long live robotics.

https://x.com/Figure_robot/status/1767913661253984474?s=20


It's easy to make a cool looking demo with robots, even one that runs right in front of you in person. Making a video is basically nothing, though, too much cherry-picking. Anyone that works with robots will just assume you have burned a lot of time repeating takes to cover up all of the failures. For the rubber to meet the road you chop that scope way, way, waaaaaaaaaaay down to make a robot do something useful.

Maybe this new ML wave will bring about a more generally useful robot, it certainly feels like it will at least open up a ton of new avenues for R&D.


That's fake man. But yeah, I checked out those jobs ;) They certainly know what to do. Just not how.


Why do you call it fake?


well, CGI as opposed to footage of a physical robot doing those things shown. They forgot the: *for inspirational purposes only


how did I not notice this obvious greenscreen composition!? it is!


I’m willing to bet a million dollars there was no CGI in Figure's recent video. What video are you even talking about??


"I hope you're right ;)"

"They have a better chance at proving or disproving this than we do."



