I was once on 25th street in Midtown, when I saw someone drop a tiny object with a little parachute from a window at least 8 or 9 stories up. Once it had finished slowly gliding down to the street, someone picked it up and used it to enter the building. It was the key - I guess the buzzer didn't work! It was a delightful sight.
Chute to drop a key?? That's too much; something that light should be dropped with a streamer. Less vulnerable to wind and more reliable.
Long, long ago I did model rocketry. Using too much chute was not a good thing because it could go so far off course. The high power rocket guys do it with two chutes--a drogue that deploys high and then the landing chute that deploys low. But that requires electronics and certifications and the like. In the lightweight stuff there's nothing fancy, just a delay built into the engine after which it burns through the top, momentarily exhausting into the interior.
It must have been dramatic, but from a practical point of view wrapping it inside a few layers of paper towel (so that it doesn't kill anyone) would be faster, and easier to target.
Is it even possible for a parachute-retarded key to directly hurt someone? I'd be more worried about it surprising someone driving a car or riding a bike and causing an accident.
I think the GP meant dropping the key wrapped in paper without the parachute. That's what makes it faster and easier to target. Which also answers your question about surprising someone driving a car, since the key won't drift with the wind anymore.
I wouldn't say it's impossible, but when you're driving/biking in dense, busy cities, you encounter all sorts of unexpected dangerous fast-moving obstacles all the time. I can't imagine this would be a bigger problem than any of the other random shit flying around Manhattan at any given moment.
I biked as my only method of transportation in Boston and New York for years, and never did I have to dodge a projectile coming from above, parachute or otherwise.
I didn't say cyclists constantly dodge projectiles--I said fast-moving obstacles, like cars and car doors. Having dodged quite a few car doors in Boston myself, I have a hard time picturing someone who could do that being dangerously thrown off by a little key floating down on a little parachute.
I can’t believe I watched a story of using AI to drop hats on people and calling it drop shipping turn into a debate about parties, buzzkills and the risk of addictive substances vs “annoying” people against hat drop shipping and similar ideas, a discussion on the legal bounds of unwanted hat drop shipping, the effect of stray hats on babies and a quantitative analysis on the environmental impact of objects dropped from apartment windows in NYC. Followed by another debate on the mental effect of objects dropping from apartments in cities with skyscrapers. This is amazing.
Hacker News traditionally had a very hacker-oriented audience. Over the years it's gained a significant portion of people who are just 'in tech'. Some threads really show the clash between the two, IMO.
I agree (though I don’t know if you agree with me)!
I think this post captures what I’d expect from hacker news quite well. A cool single person project messing around, creating something a community can enjoy.
What I don’t like are these weird doom mentality discussions over AGI (as an example), they just make me cringe really badly.
Then again I really can’t complain, all in all I love Hacker News, great topics, great comments!
First there was the excessively dismissive era that spawned the infamous Dropbox comment (Which was widely misunderstood).
Then came the functional programming era, where people worshipped Haskell and frequently got posted to /r/ProgrammingCirclejerk. Eventually we got past that one as people discovered that Haskell isn't really useful for anything besides showing off how you can implement Quicksort in a single line or starting arguments over what the hell a Monad is.
Then there was the needlessly pedantic era, which basically spawned the "ACKCHYUALLY" meme. The pedantry was often a huge distraction, never added anything to the conversation, and often was actually incorrect. If you've ever said "Actually, that's not ray tracing, that's ray casting!", then congrats, you're part of this era.
We're now in the era of being dismissive, not for technical merits like the previous dismissive era, but for being unproductive. Any time a project is done purely for fun or personal reasons (ie, nostalgia), there's someone in the comments talking about how useless it is, and that the time could be better spent Making The World A Better Place(tm)[0].
But we don't know that, and call me naive, but I see this as something done on a very small scale, by people who booked a "dropship" on his website. I don't think you'll find even a single one of those hats on the streets around his apartment. The humor is my flavor, and most of the "imagine this as a viable product" framing I take as such humor.
That's what a message board does, you post a thing and people say things about it. Not necessarily nice things, if only nice things were said that would be kind of pointless.
I wonder if there's a way to get data on this, even if it's only 'sentiment.' A bit hard to validate actual experience on what's effectively anonymous without intentional disclosure or a whole lot of rep.
I've been told repeatedly about the caustic nature of the HN crew, and it makes me wonder what steps could be taken to shift the culture in a healthier direction while not losing... well, HN haha.
I was contemplating whether to write this for quite a while and in general I wouldn't, because it contributes to a bad signal/noise ratio. And I'm not immune to making some semi-snarky remarks that might not contribute too much, myself. But there's a certain... shall we say _behavior_ I recognise from reddit that made me feel it was warranted.
Why? Is that your job? Why do you feel it's your responsibility to inform people when they have strayed from your idea of what someone else's vision of HN is supposed to be? Do you think it's a useful thing to do or that it brings you some benefit?
It's to my benefit because I'd prefer HN not to turn into what reddit ended up as, as then it'd lose its usefulness for me. I don't think it's my job, no, which is why this is the first time I've done this. But knowing that the moderators aren't heavy-handed around here, it falls to the existing community to govern itself to a degree. People are free to flag my message if they feel it's inappropriate.
if i am curious about something and want to learn, i don't want to need to sift through jokes and sarcastic comments. i find joy in learning and people can still be informative and use humor.
I'm frustrated this is getting so much pushback - puns are noise. HN is more enjoyable than reddit precisely because of the higher signal-to-noise ratio. But a significant part of the comments of this post are arguing about how much fun to have in the comments, a complete waste of my time.
We are debating, with hushed academic rigour (well, some of us are), an article where the author is talking about how they designed and implemented a system to drop hats out of a window at passers-by.
Hats.
Out of a Window.
For a joke.
Not a cure for cancer.
Not a peace proposal.
Not a way to get people out of poverty.
Hats. Out. Of. A. Window.
This hushed "no we mustn't pun or mock" type attitude is one of the main drivers of stupid tech fads
It leads people in positions of power to write down phrases like "This product isn't seen by our customers as a bridge to the metaverse". The product being a fucking chat app with a bulletin board built in. At no point did anyone in the room mercilessly rip the piss out of them. And it shows.
If you watch the video, it actually falls several sidewalk tiles away and he has to go pick it up. From the text of the blog, I had assumed he was using AI to actually land it directly on a person’s head, which would’ve been crazy impressive.
I mean, the site is pretty blatant viral marketing for both his drop-shipped-hats-from-china side hustle and (I'm going to go out on a wild limb here and guess) his employer's ML-dataset-management-related startup.
I wish cool stuff like this wasn't always sullied by the slimy feeling from it only being done to draw attention to some startup sitting smack in the middle of the trendiest buzzwords of the month.
OpenCV was not the "AI" here, the "AI" was a computer vision model trained at the roboflow website that he mentioned multiple times and that he used in the line commented with "# Directly pass the frames to the Roboflow model".
I can assure you that if you develop a system to accurately place objects (bombs, say) on top of people and post the code on the open internet for everyone to see, the government will indeed have some critical questions for you.
Accurately placing heavy, aerodynamic objects onto people when you start out directly above them is not very difficult. The hard parts are either placing the object on top of the person from a few hundred or thousand miles away, or - in this case - placing an object that tends to flutter rather than follow a ballistic trajectory.
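To put rough numbers on the "directly above them" case: ignoring drag (a fair approximation for a heavy, aerodynamic object), fall time and wind drift are a back-of-the-envelope calculation. A hypothetical sketch, not anything from the post:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def fall_time(height_m):
    """Free-fall time ignoring drag -- reasonable for a heavy, aerodynamic object."""
    return math.sqrt(2 * height_m / G)

def wind_drift(height_m, wind_mps):
    """Rough horizontal drift if the object simply rides a steady crosswind."""
    return wind_mps * fall_time(height_m)
```

From roughly 30 m up (about 9 stories) the fall takes about 2.5 s, and a 2 m/s breeze shifts the impact point by about 5 m. An object that flutters instead of falling ballistically drifts far more, which is exactly the hard case.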
I can assure you that you have no idea what you're talking about, starting with the fact that you obviously didn't watch the video.
It isn't aiming anything. It isn't adjusting for anything. It's doing so from a stationary point.
The ML isn't used for anything other than a simple "is there the thing I was trained to look for within this area?" It's basically a ML version of something one could pretty easily do in OpenCV.
There's NOTHING about this useful for aerial bombing, which involves dozens of problems much harder than "this is the spot you should aim for."
There are probably dozens of smartphone apps for helping marksmen calculate adjustments that are about a hundred times more complicated, and more useful for (potentially) hurting people, than this.
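For what it's worth, once any detector (a Roboflow model, OpenCV's HOG people detector, whatever) hands you bounding boxes, the "is the thing I was trained to look for within this area?" part really is a few lines of geometry. A hypothetical sketch with made-up coordinates:

```python
# Hypothetical sketch of the "is the target within this area?" check -- not
# the author's actual code. Detections are (x, y, w, h) boxes in pixels,
# produced by whatever detector you like.

DROP_ZONE = (200, 400, 120, 120)  # x, y, w, h of the target sidewalk tile

def center(box):
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def in_zone(box, zone=DROP_ZONE):
    cx, cy = center(box)
    zx, zy, zw, zh = zone
    return zx <= cx <= zx + zw and zy <= cy <= zy + zh

def should_drop(detections):
    # True if any detected person's box center lies inside the drop zone.
    return any(in_zone(d) for d in detections)
```

Everything hard lives in producing `detections`; this part is just geometry.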
I can't stand people who act like it's reasonable for the government to monitor and harass people for stuff like this. The second our government is harassing him or the SMH guy, I'm moving to Canada.
You've replied to somebody talking about "if somebody developed (something not in this blog post)" with a long angry rant as if they had imagined the blog post claimed it had developed that thing.
It is not that they haven't read the article; they are commenting on a thread which is musing about how interested the government would be if (IF!) someone were to develop what the article title implies was developed but wasn't in reality.
The RC plane fandom on youtube has started to manufacture and drop fake bombs onto miniature targets. The bombs even have fins. I kinda wonder how long until they start adding electronics and flaps to start guiding the bomb, and how far they can get before they start to have feds knocking on their doors. I'd be interested in working on it but I'd prefer to keep my TSA precheck clearance.
Now if you had terminal guidance... Put flaps on the hat, and use shape-memory alloy wire and a coin cell to actuate them. The hats follow a laser beam projected by the drop unit. Minimal electronics required in the hat. This is how some "smart bombs" work.
> Imagine using AI to drop an object and it falls perfectly where you want it.
There is a fantasy series that depicts this as a game that two young gods would play together when they were growing up. (Or rather, since one of them had vastly superior foresight to the other one, he'd bully his brother into playing with him.)
This is the best thing I've seen on HN or indeed on the internet in general for quite a long time. Excellent work and thank you for brightening my day.
A lot of states are working on legislation that includes requirements for watermarking AI generated content. But it seldom defines AI with any rigor, making me wonder if soon everyone will need to label everything as made with AI to be on the safe side, kinda like prop 65 warnings.
This is not quite like the "AI" that's hyped in recent years; the key component is OpenCV, and it has been around for decades. A few years ago, this might have been called Machine Learning (ML) instead of Artificial Intelligence (AI).
So it doesn't actually drop hats onto heads and doesn't use what most people would consider AI... I think I could probably rig up something to gracelessly shove an item out of an open window too which is basically what we're left with. It'd take longer to create the app for booking appointments, and to set up everything for payment processing.
You have discovered a secret area of my personalized "pet peeves" level: just a few days ago I saw an article (maybe video) about how "AI" tracks you in a restaurant. Screenshot was from an OpenCV-based app with a bounding box around each person, it counted how many people are in the establishment, who is a waiter and who is a customer, and how long they have been there.
There's an old saying: "Yesterday's AI is today's algorithm". Few would consider A* search for route-planning or Alpha-Beta pruning for game-playing to be "Capital-A Capital-I" today, but they absolutely were back at their inception. Heck, the various modern elaborations on A* are mostly still published at AI venues (AAAI).
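For reference, the A* being discussed still fits comfortably on one screen. A minimal sketch of my own (4-connected grid, unit step costs, Manhattan heuristic), not tied to anything in this thread:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 2D grid (0 = free, 1 = wall), Manhattan heuristic.
    Returns the length of the shortest path, or None if unreachable."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start)]  # (f = g + h, g, cell)
    best = {start: 0}                  # cheapest known cost to each cell
    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur == goal:
            return g
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best.get(nxt, float("inf"))):
                best[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt))
    return None
```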
https://en.wikipedia.org/wiki/AI_effect We got it named already, it just needs to be properly propagated until there's no value left in calling things 'AI'.
This is a fair point, and maybe someone more well-versed can correct me, but pretty much all state-of-the-art image recognition is trained neural networks nowadays, right? A* is still something a human can reasonably code; it seems to me that there is a legitimate distinction between these types of things.
Yes, no more machine code. Everything was to be written in BASIC. ...how we laughed at that outlandish idea. It was so obvious performance would be... well... what we have today pretty much.
IKR? If you can't hand-pick where instructions are located on the drum, you may have to use separate constants, and if that's the case what is even the point?
If you spend a few hours writing a bit of code that has to run for decades, millions or billions of times per day on hundreds of thousands or millions of machines, it seems quite significant to use only the instructions needed to make it work. A few hundred thousand extra cycles seems like a lot. One would imagine other useful things could be done with quintillions or septillions of cycles besides saving a few development hours.
No, in an introduction to data structures and algorithms class. It’s pretty odd behavior to disagree with someone who is simply sharing their lived experience.
Yeah sorry, rereading, that came off as way aggressive for no reason. Rereading the chain, I think I just meant that it’s an algorithm that was frequently taught in AI classes, so at least some profs think it counts, even though it was called an algorithm.
Maybe it is easier to define what isn't AI? Toshiba's handwritten postal code recognizers from the 1970s? Fuzzy logic in washing machines that adjusts the pre-programmed cycle based on laundry weight and dirtiness?
Historically, we often call something AI while we don’t really understand how it works. After that it quietly gets subsumed into machine learning or another area and called X algorithm.
Adding two numbers, each having 100 digits? Reciting the fractional digits of π on and on? I have only seen that done by talented people on TV shows. Seems like AI.
That's my point: legislation seldom defines AI rigorously enough to exclude work like OpenCV. I presume that leaves it to courts or prosecutorial discretion.
Be it "AI" or not, these mostly fall under "AI" legislation, at least in the new EU AI Act. Which is IMHO a better way to legislate than tying laws to specific algorithms du jour.
If Big AI lobbyists get their way, this is exactly the kind of warnings we'll get.
Flood users with warnings on everything and it'll get ignored. Especially if there's no penalty for warning when there isn't a risk.
Big Tobacco must love Prop 65 warnings, because by making it look like everything causes cancer, smokers keep themselves blissfully ignorant at just how large the risk factor is for tobacco compared to most other things.
I fear you're right: cookie banners will soon also come with endless AI disclaimers that, net net, desensitize the end user to any consideration as they seek to skip poorly crafted regulation and get on with their lives.
Poorly enforced regulation. Most of the cookie banners are illegal but businesses, especially large ones, have too much power to be effectively regulated.
The nags are a kind of malicious semi-compliance, partly an effort to make the regulation look bad.
This comment is known to the State of California to contain text that may cause you to ignore warnings which may lead to cancer, reproductive defects, and some other shit that I can't remember because it's been almost a decade since I lived in California and weirdly I can't easily find the full text of one of these online through a quick search (emphasis: quick)
This concept is great, it’s also a brilliant idea for a webcam on a Bourbon St balcony in New Orleans to throw beads at parties below. I am friends with a guy who owns a multistory bar in the middle of the strip and would be open to this, so if OP or someone else is interested in developing an AI/remote control bead thrower, drop some contact info and I’ll reach out
I would hope that we have invented error-free software development by then, though. Otherwise, a small error leading to the wrong coordinates could really ruin your day (or head)... ;)
Or use lasers and tiny gum-shaped smoke bombs to sample and model the local air-column currents, pre-soften and flatten a portion of the gum paper-thin with some sort of wetting/rolling assembly, stage, then let it drop and form its own miniature gum parachute or a replica of one of those whirling propeller seeds that have a built-in wing to slow their fall.
What about a “we will remember it for you wholesale” version of the gum experience - you pay money and are then implanted with memories that are indistinguishable from chewing the gum.
I kinda think this is the end goal for all capitalism - you pay money for nothing.
Apparently the knowledge isn't wide enough, because this is the first I'm hearing of it... Why is gum bad for you? I knew it was in a downward sales trend, but I figured that was just consumer preferences changing over time.
Gum with sugar is bad for your teeth. Gum without sugar has xylitol in it, which is good for your teeth, but may increase your risk of heart attacks and strokes due to it promoting blood clotting[1].
Why does this remind me of something out of a certain old point-and-click adventure game? It was one that had the verb USE apply to every type of action.
click>(GUM)
click>(SELF)
click>(USE)
"You used the GUM on yourself.
Nothing special happens.
You now have 0 GUM."
There was another game in the same genre that did the same, but with the verb OPERATE. As teenagers my friends and I used to laugh way too much at dialogue responses these games would craft, where you would get things like "OPERATE GUM on SELF"
Maybe a receiving chute? Small, portable, and a clearer indication (cannot be confused with a yawn), plus it'll open up the variety of comestibles you can purchase just a mouthful of. No more forks, no more spoons, just a little sloped thing to slow and guide.
Perhaps small guided parachutes that receive an auto-correction location from the RPi and track the mouth? The issue is that the gum will be expensive.
i work on roboflow. seeing all the creative ways people use computer vision is motivating for us. let me know (email in bio) if there's things you'd like to be better.
Slightly unrelated: Did the building owner/landlord complain about that? Is it legal?
A friend of mine was asked by their building to remove a camera they had. It was a camera used only to record the hill view in front of the building, so it wasn't violating any privacy, and it was attached with magnets, so no damage whatsoever.
I was also curious about this. A bunch of BASE-jumping hats dropping off a building is exactly the sort of project I would momentarily think about doing and never seriously entertain, due to being certain that sooner or later someone, somewhere is going to sue me for some marginally harm-like side effect.
I don't know how litigious your region is, but of all the people you know who have been sued, how many got sued for something silly vs. a lower-effort scheme like the classic "throw yourself onto someone's car and claim back pain"? You might be safe to do silly shit on the basis that there are easier and better targets available.
Also curious if they had any grounds for that. I was under the impression that if you have a camera within your apartment (looking through window), nobody should be able to tell you no.
Unless perhaps the camera was attached outside their window (no longer their apartment), in a way that could be deemed unsafe and fall off and hurt someone, whereupon the building owner could be held liable? In that case I would find it reasonable to tell them to remove it.
What if we had, like, a fridge with a glass window and drinks or snacks organized in rows with identifiers for each. You could enter the identifier and make your payment to the fridge, and it would drop the corresponding drink/snack to a slot on the bottom of the fridge.
My immediate response to this was “ew, there’s already so much gum on the street”. Then I realized you meant you want to chew gum while walking down the street and I became enlightened.
What an unexpectedly cool post, I clicked the link thinking it would be "typical dumb", but it ended up being atypically dumb in the greatest way! Fascinating. The author overcame many challenges and wrote about them in a style as if he solved the hardest parts with only a little fiddling. Maybe he's already seasoned in the ML and robotics domains? So much fun to read.
Regarding the Video Object Detection:
Why does inference need to be done via Roboflow SaaS?
Is it because the Pi is too underpowered to run a fully on-device solution such as Frigate [0] or DOODS [1]? And presumably a Coral TPU wasn't considered because the author mostly used stuff he happened to have laying around.
Can anyone comment contrasting experience with Roboflow? Does it perform better than Frigate and DOODS?
Asking for a friend. I totally don't have announcement speakers throughout my house that I want to say "Mom approaching the property", "Package delivered", "Dog spotted on a walk", "Dog owner spotted not picking up after their beast", and so on. That last one will be tricky to pull off. Ah well :)
You are hereby put on notice that the undersigned intends to and henceforth will appropriate for his own further use without attribution to you the phrase “atypically dumb in the greatest way,” and furthermore that the undersigned may modify said phrase by replacing “greatest” with “best.” Any objection by you to said appropriation and/or modification by said undersigned will be and thereby is deemed waived by you, provided you do not respond to this notice within 48 hours. Please redirect your reply, if any, to /dev/null. Thank you.
FWIW you can use Roboflow models on-device as well. detect.roboflow.com is just a hosted version of our inference server (if you run the Docker container somewhere, you can swap out that URL for localhost or wherever your self-hosted one is running). Behind the scenes it's an http interface for our inference[1] Python package, which you can run natively if your app is in Python as well.
Pi inference is pretty slow (probably ~1 fps without an accelerator). Usually folks are using CUDA acceleration with a Jetson for these types of projects if they want to run faster locally.
Some benefits: there are over 100k pre-trained models others have already published to Roboflow Universe[2] that you can start from, support for many of the latest SOTA models (with an extensive library[3] of custom training notebooks), tight integration with the dataset/annotation tools that are at the core of Roboflow for creating custom models, and good support for common downstream tasks via supervision[4].
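To make the hosted-vs-local swap concrete (the path and query shape here are simplified for illustration; see the inference server docs for the exact interface):

```python
from urllib.parse import urlencode

HOSTED = "https://detect.roboflow.com"
SELF_HOSTED = "http://localhost:9001"  # or wherever your Docker inference server listens

def infer_url(base, model_id, api_key):
    # The request is the same either way; only the base URL changes.
    return f"{base}/{model_id}?{urlencode({'api_key': api_key})}"
```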
I believe the whole project, and the talk of stores in particular, is humour. At least that's how I read it. I appreciate not everyone has the same sense of humour so that may have passed you by.
Love this! I play recreational ice hockey in an adult league, and for many years I've wanted to use AI/object recognition to recognize who was out on the ice at what times during the game, to attribute who impacted goals and which players were taking longer-than-usual shifts (every team has those one or two players!).
This may be achievable for me with the current state of AI and GPT to help fill the gaps that my knowledge is lacking in. Thanks for showing what you made and how you did it. It's encouragement to me.
This would be interesting, feel free to email me if you get stuck. If you had a camera at eye level, you could try to train it on recognizing the player jersey numbers.
Facial recognition would be better. Don’t forget that canonically in Mighty Ducks D2 Goldberg and Russ switched jerseys so that Russ could get his infamous “Knuckle Puck” shot off undisputed because everyone thought the puck was passed to Goldberg until the mask came off. So the ML training on jerseys would have missed this critical moment and potentially assigned the score to Goldberg, when really it was Russ (wearing Goldberg’s jersey) who should have gotten the credit.
One might argue that this sort of thing rarely happens so it’s not worth doing more complex facial recognition vis a vis Jersey numbering. But I say that while it may be rare, when it does happen it’s a major event, so no complexity should be spared to ensure we capture it accurately.
I would have multiple camera angles. One GoPro would just be a wide-angle of the bench behind the players, another would be on the game clock, and additional ones would be on-ice footage. Typically my GoPro setup has been behind the goalie (https://www.youtube.com/watch?v=CCavsdzc-OY) and the rinks have LiveBarn feeds (here's one on my YT from 2018: https://www.youtube.com/watch?v=5WEE9y4cAHg), but quality challenges abound.
I play in a rec soccer league and had a similar idea, except to also have everyone on the team wear a smartwatch that could intelligently buzz at you to sub out based on your heartrate and how long you've been in.
Iirc, LiveBarn offers this as a service if your local rink has it set up. Annoyingly, my local rink uses 30 minute video slots so it only ever captures half a game.
> Picture a world where you can walk around New York City and everything you need is falling out of windows onto you. At a moments notice, at the drop of a hat. That's a world I want to live in. That's why I'm teaching you how to do yourself. Remember this as the first place you heard of "Window Shopping."
I truly love the concept of pun-driven development (PDD). As a motivating economic principle, a world where every human being has the resources, time, and personal safety to dedicate absurd amounts of their time to inane levels of pun-driven development is perhaps my favorite definition of utopia.
It can't be the best. It's only one of many positive consequences. Not even a main justification, but only a point of defense for those so irrationally against the concept.
Sometimes I feel we live in a simulation run from a real world a few levels down with universal income or something like that. They got bored, so they had to forget their existence by creating a simulation (or nested simulations).
This is probably a bottom of the barrel idea if you took it in that world where everyone can experiment and execute their ideas. Like, this would probably get you put in jail in that world, it's that lame.
Although I think the idea of nonconsensual hat drops is so fun and fantastic.
I wish I could register myself as being up for any sort of serendipity like this. While I like the idea of a hat randomly dropping onto my head, some people may not.
As a counter point, the hat is a great way to protect against AC water drips.
My biggest fear about walking around any city (but NYC in particular) is an actual AC machine dropping onto my head. Maybe you could offer the choice to drop down a hard hat on streets with high AC unit density (and then pick it up when I leave the area).
Fun demo, but it would work just as well for the customer to tap something on their phone (or even send/reply to an SMS) to trigger the hat-drop, and be much, much simpler, and likely more reliable. It looks like it isn't capable of actually placing the hat on the customer's head (it lands on the ground nearby), so the camera and AI stuff is only acting as a trigger, not a guide.
And presumably if another random person happens to stop inside the right sidewalk tile for at least 3 seconds during the 5-minute window, before the actual customer gets there, they'll get the hat instead!
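That race is inherent to the trigger as described: the drop fires for whoever first dwells in the tile long enough while the window is armed. A hypothetical sketch of that dwell-plus-window logic (the 3-second and 5-minute figures are from the comment above; everything else is made up):

```python
import time

DWELL_SECONDS = 3          # how long someone must hold still in the target tile
WINDOW_SECONDS = 5 * 60    # how long the drop stays armed after booking

def make_trigger(now=time.monotonic):
    """Feed the returned function one bool per frame ("is someone in the tile?").
    It returns True once anyone has been in the tile for DWELL_SECONDS,
    provided the booking window hasn't expired."""
    armed_at = now()
    entered = [None]  # time the current occupant stepped into the tile

    def step(person_in_zone):
        t = now()
        if t - armed_at > WINDOW_SECONDS:
            return False  # window expired, drop disarmed
        if not person_in_zone:
            entered[0] = None  # tile empty: reset the dwell timer
            return False
        if entered[0] is None:
            entered[0] = t
        return t - entered[0] >= DWELL_SECONDS

    return step
```

Note it triggers for whoever happens to be in the tile, customer or not.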
This is so cool and just brings me a lot of joy :)
Also, I've been working on a project (non-commercial) that looks down on people and have found existing models don't work super well from that angle so thank you for publishing your work on Roboflow.
It would be cool to make something similar for a pet feeder. Imagine having two cats (like we do). A skinny one and a fat one. AI would recognize them and dispense more food for the skinny one throughout the day. Hmm... :-)
Also, this would be contrary to GP's comment - it would be the right reason. Imagine if a bald person is walking by and a toupee happens to fall on their head and they can see themselves in a window reflection of a toupee shop that just so happens to be there.
Use some ML/AI to choose the right fit, style, hair color, etc., plus the drop orientation and angle. Throw in some ChatGPT integration to suggest using scalp glue. Combined with OP's marketing skills, they'll be in business in no time!