iPhone lidar with applications for the geosciences (opentopography.org)
162 points by mooreds on March 14, 2022 | 75 comments



This was the idea of Pixel8.earth [0] -- to combine data from many iPhones with lidar to accurately create a 3d map. The company was bought by Snap last year [1].

[0]: https://pixel8earth.medium.com/

[1]: https://techcrunch.com/2021/04/26/snap-has-acquired-pixel8ea...


I still don't understand why Apple has chosen to include such a sophisticated and expensive sensor of so little use to average consumers in its flagship products.

They're probably setting the stage for some future technology, maybe AR related?


Why do you think it has so little use?

One thing that comes to mind is portrait background blur - the purely computational implementations on smartphones that I've seen so far were not very good. I suspect this can improve effects like that a lot, which seems relevant to consumers.

On the forward-looking side I think app developers can also get a lot of value out of this. E.g. (just a thought) changing the way clothes or shoes are bought by letting you scan body parts to determine fit.

And this can generate a lot of data feeding into their vision models + what you already mention about AR/VR.


Well, it doesn't seem to me that Apple has backed this new technology with any significant app that makes use of it. And I don't see any serious app in the App Store that makes use of lidar either.


I think you dismissed the portrait mode thing, which I think is by far the biggest use. Two other things I love -- "measure". It's very accurate. Also, there are several apps that allow you to scan an area and then send that to people. I've used it to show people an area (my office, a cool brewery I went to, etc). "3d scanner app" is one that I've used. Interestingly, I also used it to scan a piece of furniture I liked, which let me get precise measurements at a later date.

Edit: I forgot one super cool use of measure. Let's say you have a basement with an open ceiling. You can use the measure app to, say, measure a pipe. Then, when you walk upstairs, you can still see the pipe in 3D. So you can then understand just exactly under the floor where the pipe is. Very useful for things like pipes and electric wires.


> Edit: I forgot one super cool use of measure. Let's say you have a basement with an open ceiling. You can use the measure app to, say, measure a pipe. Then, when you walk upstairs, you can still see the pipe in 3D. So you can then understand just exactly under the floor where the pipe is. Very useful for things like pipes and electric wires.

That is really interesting. What app/service did you use to do this?


Just "Measure" on the iPhone. For instance, just measured the back wall of a closet. Then went to the next room and could see where the back wall of the closet was in AR.


Samsung, at least the Ultra line, has lidar too and exactly the same type of measure app (on top of some AR doodling). Just tried it and it's actually pretty precise (on the S22 Ultra). I wouldn't build a shelf based on just that data, but otherwise it's a nice little addition to the toolset.


Measure, like he said. It's built in.


Wow, I completely missed that. I reread the comment about 3 times before I recognized it. Need more coffee, I guess.


The last one is pretty cool indeed. Still, I feel like Apple has some "one more thing" for lidar in the future.


They specifically called out the Camera app in all of the promos. It’s the reason I bought the Pro over the “base” model. It’s used for range-finding in the camera, both for portrait and low-light. It’s actually really neat and low-light performance has been drastically improved. (Which was one of my biggest gripes moving from flagship Google phones)

Any app used on half their billboards (Shot on iPhone 13 pro) and featured on the Lock Screen seems “significant” to me. :)


I bought an Apple device specifically because of LIDAR. I've found easy, convenient, "good enough" 3D scanning to be very useful both for recreational and professional purposes.

Other uses of LIDAR (measurement/data collection, AR, photography) are also likely to become more popular over time.


Can I ask, what sort of things are you 3D scanning, and for what purpose? I have an iPhone 12 Pro that I'd like to use for similar purposes, but I can't think of a practical output for the scans.


I'm not always entirely sure why I'm scanning things. I occasionally upload things to Sketchfab. I have vague plans to make a diorama out of pieces to view in VR.

Turning the question round - why take photos of things? A 3D scan isn't that different. It's a way to capture a memory and to show things to other people. It's so quick to capture something that sometimes I just do it on a whim when I find something interesting.

I'd like to make an art piece out of them at some point but I'm still waiting for the right inspiration to come along.


Polycam is used in the 3D modeling / VFX space to photoscan assets--there are some others like it too that all use LiDAR. Another use could be to preserve historical artifacts similar to how it's used in this video: https://youtu.be/k1uXppV6TeA?t=225


IMO phones are so stagnant now that manufacturers just tack on random things to see what sticks. Lidar! Radar! Refresh rate! A 7th camera! Oooh.

Who the hell uses any of that stuff? But it lets them keep releasing new versions and charge huge amounts for new flagships.

Those gimmicks help to prevent the phones from becoming the next race to the bottom that the PC and cheap Android markets have become.


> Lidar! Radar! Refresh rate! A 7th camera! [..] Who the hell uses any of that stuff?

Well... you're right about the lidar, but I absolutely care about the camera and refresh rate. As a lifelong photography geek I basically upgrade for the camera improvements every year.

Camera, messaging, maps and dating apps are basically the only things I use my phone for these days...


Lidar is actually really useful for portraits. The Samsung S22 Ultra uses it extensively and can pick out a single strand of hair and not blur it into the background. Everything looks more realistic compared to pure software processing, where I can spot errors quickly.

As a frequent user of my good old Nikon full frame (D750), I am really astonished by the output these tiny cameras produce. Sure, it's not for big screen/print unless we're talking about a sunny day. But for everything else, it makes fantastic portraits. I got fed up with the need to carry a big heavy pouch around all the time, and missed way too many pictures of my kids to rely on it anymore.

It also sees in the dark much better than I do with my own eyes, also thanks to lidar - the pics I take on night walks where there is basically no light source for hundreds of meters around and I am in the forest (yes, my night walks sometimes take me to interesting places, and I normally don't use any light). A very dark scene becomes full of (usually) true colors and details simply invisible to me.

The 10x zoom on my phone allows me to read signs my eyesight can't (digital 'AI' zoom is very useful up to roughly 30x). I was skeptical too, and it's true full frame is still far ahead, but at what cost - bulk, weight, the need to spend hours postprocessing a batch of photos instead of making quick edits on the phone in a few seconds.


Yeah, portrait mode is basically face ID in reverse and works a lot better than room scanning.

How does the lidar help with night shots though? Maybe focusing, but otherwise isn't it just a long exposure with algorithmic corrections?


Well, that focusing part is pretty important :) The rest is mostly about gathering enough light and compensating for hand shake, probably with some ML.

For a photo you don't need much more. I can tell you that handheld shots are much better and easier compared to a full-frame DSLR.


What do you suggest for a good camera on a tight budget? I used to have a Samsung Galaxy S20 FE and it was just amazing. I'm trying to buy a new Android, preferably from the 'flagship killer' type of phones.


That's cool. I'm glad there are still people interested in photography for its own sake instead of posing for Instagram!


"I basically upgrade for the camera improvements every year"

No wonder the earth is warming... You really need a new phone every year just for the camera? I really don't get why anyone would do that without an actual professional need.


It’s partially because the cameras aren’t good enough one year, but do fulfill an edge case. The following year they fulfill another edge case. This reduces the need for a dedicated camera, especially a bulky interchangeable-lens one; other times they fulfill an edge case that the bulky dedicated camera can’t do well either.

Makes them intriguing and fun to test out creatively. I can see the temptation.

The 13 has some things built in/enabled the 12 doesn’t have, which I had always wished the 12 had, such as video portrait mode, but I personally decided it wasn’t enough to upgrade again, since it’s a safe assumption that future models will also have that.


> The 13 has some things built in/enabled the 12 doesn’t have

Night mode in wide angle! Although it's quite a bit less capable than the other lenses. Maybe in the 14 ; )

And @eole666, of course I don't literally throw the old phone away every year; there's a whole downchain of very willing recipients of lightly used iPhones. Hell, my old iPhone 6 is still being used for QA.


Sometimes I forget some people don't buy things to use them for years, but prefer buying the latest novelties every year. It's kind of a weird consumerist way of life.

Yes, smartphone cameras are awesome now, but it's sad that an awesome product like that will be thrown away after a year of use.


With Apple supporting iPhones with iOS upgrades for years and security updates even longer, you don’t throw away an old iPhone; you either hand it down to someone else or sell it.

Even an iPhone 6s from 2015 with a new battery is faster than most low end Android phones and it is still fully supported.


That's actually why I didn't upgrade from the 12 to the 13, because I wouldn't be doing the trade-ins. They were tempting offers but I hate being locked in to the same device for 24-30 months, which is a condition of most of the trade-in offers. I would rather pay for the device outright and figure out what to do with my prior device, and so this newest iPhone wasn't compelling enough for that.

(for context, my prior upgrades were in a more lenient trade-up program, which by coincidence resulted in me keeping up to date. Not trying to pretend like avoiding a one-year upgrade is some major sacrifice, it's just what happened)


If you own the phone outright, you can either sell it directly or trade it in to Apple.


Yes, that's what I meant by figuring out what to do with my prior device, but that's alongside making sure I have backups of everything.

I've had some apps that stored files within them, and the app was no longer available for a higher version of iOS that a new phone shipped with. Would have been royally screwed if I had sold/traded in my prior phone because I wouldn't have immediately noticed except when I needed it.


Why is 3d scanning any more of a gimmick than photography? When I travel I scan things for the same reason I photograph things.


The lidar plays a trivial role in that. Most of it is just algorithmic photogrammetry. If you turned the laser off you'd probably still get similar results.

And it's a gimmick because it's a tiny niche of people who do that, and I wish I didn't have to pay extra on a phone for lidar. I'd take a fingerprint scanner any day. Or a headphone jack.


> Most of it is just algorithmic photogrammetry. If you turned the laser off you'd probably still get similar results.

This is simply untrue. Most 3D scanning apps on iOS make no use of photogrammetry or they do it as an alternate mode for smaller objects. I don't know of one 3D scanning app that combines the LIDAR with photogrammetry. They use one or the other depending on the scale of the object you're scanning.

> And it's a gimmick because it's a tiny niche of people who do that,

It's not the reason Apple added it. They added it for AR and photography. Arguably the former is a gimmick but the latter is future proofing and might end up being a game-changer.


> Those gimmicks help to prevent the phones from becoming the next race to the bottom that the PC and cheap Android markets have become.

See: it allows them to keep increasing the cost of their devices while avoiding passing the savings from their streamlined manufacturing on to the customer.


Glad they didn't include a laserpointer.


hopefully thermal imaging next


Yeah, that and UV would be cool for looking at plant life.

The Seek and FLIR imagers are cool for IR.


UV seems more possible now that I think about it; thermal would present huge issues with export regulations.


I'm an iOS dev, and there are a few limitations:

1) The biggest one: lidar is only on the iPhone Pro models, in contrast to TrueDepth, which landed in all iPhones after the iPhone X.

2) Lidar has very low resolution.

I wish they added lidar to all iPhones and combined it with TrueDepth (on both the front and back of the iPhone) - TrueDepth has better resolution but is slower (only 30fps, plus higher latency) and only works well at distances under 1 meter.
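
For anyone curious what that looks like from the app side, here's a rough Swift sketch of reading the lidar depth stream via ARKit's scene-depth frame semantics (the ARKit names are real; the DepthReader class is just for illustration). The per-frame depth map is only about 256x192 on current Pro models, which is the low resolution mentioned above:

    import ARKit

    // Rough sketch: stream the lidar-derived depth map that ARKit exposes.
    // Requires a lidar-equipped device; the depth map is a small Float32
    // buffer of distances in meters.
    final class DepthReader: NSObject, ARSessionDelegate {
        let session = ARSession()

        func start() {
            guard ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) else {
                print("No scene depth (lidar) on this device")
                return
            }
            let config = ARWorldTrackingConfiguration()
            config.frameSemantics = .sceneDepth   // or .smoothedSceneDepth
            session.delegate = self
            session.run(config)
        }

        func session(_ session: ARSession, didUpdate frame: ARFrame) {
            guard let depth = frame.sceneDepth else { return }
            let map = depth.depthMap              // CVPixelBuffer of Float32 meters
            let w = CVPixelBufferGetWidth(map)
            let h = CVPixelBufferGetHeight(map)
            print("depth frame \(w)x\(h)")        // tiny compared to the RGB frame
        }
    }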


Focusing a camera in low light is still not great with traditional methods such as phase detection. LiDAR focus is spot on every time.

The quality of the camera is one of the main selling points of the iPhone (e.g. the ad campaign on billboards is “Shot on iPhone”), so I would argue it is a very appropriate feature.


Could the 3D model be used to interpolate the shape of an animal behind a cage?

I'm imagining a panda enclosure, with lots of people taking photos. 2D images are obscured by a line where the cage is. Our eyes can correct for it by moving our heads around, but the photos can't.

There is an opportunity for cage-removal to be solved with software (guessing, or taking multiple photos and cross-stitching) or simpler hardware (camera on bottom of phone instead of top, use parallax like 2 eyes). But the lidar option might work too.


There's some practical stuff, like mapping a room so that you can see wallpaper or paint choices on your own walls. Or getting clothing sizes more accurately, etc.


This is more about laying the foundations for true AR/VR applications in current and future models, as well as probably testing for a consumer headset device with the same plug-and-play architecture. (My prediction.)


Can't AI be used to "fill in" 3D models without having to scan from all directions? That seems the killer app to me. Just use your camera like normal.


This is the first time an iPhone feature has made me want to buy one. I hope we'll find lidar in Android phones soon!


IIRC Samsung Galaxy S10 from three years ago has lidar.


I’d be curious what geoscientists can do with (smaller, non-topographic) 3D scans of geological features!


Is there any comparable non-Apple device for easily capturing RGB-D images?


Seems like photogrammetry with AI is what Android's ARCore does: https://developers.google.com/ar/develop/depth#depth_images

You should be able to get RGB-D images from a phone supporting the ARCore Depth API: https://developers.google.com/ar/devices#google_play_devices


A self-contained device, not really. The closest would be an Azure Kinect or Luxonis OAK-D, both of which must be hooked up to a computer to capture and process images. (It's possible to use a HoloLens or Magic Leap to capture RGB-D with a low-res depth channel, but not without some developer-mode tweaks or custom software.) RIP Google Tango.


My compromise was to go with an iPad (second hand). I didn't own a tablet, and I got access to other iOS apps and features that were mildly interesting (music creation, mainly).

And I didn't have to switch my actual main phone which would have been far too much of a commitment to their walled garden.

The form factor is a bit clunky for scanning and I don't have the "it's in my pocket all the time" advantage you get with a phone.


Samsung S20/S21/S22 phones have a LiDAR sensor.


I thought some of the older ones did (S10/S20), but the newer ones dropped it. Yep: https://www.laserfocusworld.com/blogs/article/14205758/samsu...

Newer Android phones do have an LDAF/ToF sensor, but it's much simpler than LiDAR AFAIK.


How about this: https://www.dotproduct3d.com/dot3dlite.html - it works on Android and Windows. You will need an Intel RealSense camera.


I think those are no longer supported or something. Intel gave up on RealSense AFAIK.


Are there any efforts to 3D-map cave systems? It seems like such a no-brainer application of depth-sensing technologies and yet it seems to me nobody is doing it! What gives?!


The lidar on these phones sucks (I had one and was super disappointed). It has a range of like half a room and terrible resolution, it was slow as heck, and the software was buggy and unstable and the algorithms often butchered all but the simplest scans. It's not like you put the phone on a tripod and get a Ubisoft-like cave in a minute. It's like you get half the room at 20% fidelity after an hour of frustration.

Honestly the more mature photogrammetry techniques that stitch together multiple photos (from video or 360 shots taken successively) seemed much more impressive. Indeed, you can run most of the same apps with lidar off and still get similar results. There is a lot of depth info that a smart algorithm can figure out when you take pictures from different angles. The lidar adds more hinting, but without professional-grade equipment, the optical sensors (cameras) have much higher resolution and range.

The lidars on the i devices are cheap gimmicks.

Edit: just to add a bit more detail, lidar basically gets you a point cloud of depth data. The professional ones have big spinning mirrors that blanket the room in a matrix of powerful dots, like a Kinect on steroids. That takes a lot of power and, usually, moving parts.

The iPhone one uses more modern, smaller solid state technology but has a resolution of a few hundred dots and a range of 5m, vs the millions of pixels on the optical camera and a range of however much ambient light you have.

The lidar is an active emitter. It has to produce its own laser light, limited by its emitters and battery, and have a way to scan its emissions across 2d space (x and y axes) or else have a ton of micro lasers in a grid.

By comparison the optical camera is just a passive receiver, accepting reflected sunlight or light from the weak LED flash. Those light sources are way stronger and more diffuse in terms of range and coverage.

Maybe in a few years the lidar will get better but for now it's just not useful for anything except face ID. Prosumer drones probably have better lidar and also better photogrammetry software.

More deets on i lidar: https://arstechnica.com/cars/2020/10/the-technology-behind-t...
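
To make "a point cloud of depth data" concrete: each depth sample is just a distance along a pixel's viewing ray, so you unproject it using the camera intrinsics. A minimal sketch of that math (generic pinhole model, illustrative names, not Apple's exact pipeline):

    import simd

    // Unproject a grid of depth samples (meters) into 3D camera-space points.
    // fx, fy: focal lengths in pixels; cx, cy: principal point.
    func unproject(depth: [[Float]], fx: Float, fy: Float, cx: Float, cy: Float) -> [SIMD3<Float>] {
        var points: [SIMD3<Float>] = []
        for (v, row) in depth.enumerated() {
            for (u, z) in row.enumerated() where z > 0 {
                let x = (Float(u) - cx) * z / fx   // meters right of the optical axis
                let y = (Float(v) - cy) * z / fy   // meters below the optical axis
                points.append(SIMD3<Float>(x, y, z))
            }
        }
        return points
    }

With so few samples per frame, that cloud stays tiny next to the megapixels the RGB sensor captures, which is the gap being described.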


There's some fantastic results here: https://sketchfab.com/search?q=iphone+lidar&type=models

And these match what I get - they aren't anomalous. It's not as good as photogrammetry but it takes minutes rather than hours.


You really think those are fantastic? The only ones that seem even remotely good are the ones where the camera rotates around a static object (couch, whatever). For the room-scale ones, the results are terrible. Like this one of the Sketchfab HQ: https://sketchfab.com/3d-models/sketchfab-hq-scanned-with-ip...

It looks like something out of a video game dream sequence where your world is falling apart.


They are roughly comparable with results from photogrammetry at a fraction of the capture/processing time. Have you seen much raw photogrammetry output? Before it's been repaired/cleaned-up? You can get better than this with photogrammetry but it takes a lot of time and trial and error. This stuff is as good as my personal attempts at photogrammetry and in some cases better. When photogrammetry fails it generally fails catastrophically. With LIDAR you will always get something.


I think we generally agree. I'm just saying it's not good enough to take into a cave and casually scan it into a 3d model yet, not with the cheap consumer tech at least. I don't think those results are usable for anything (games, sharing, measurements, mapping), just novelty.


Actually I don't think we do agree. "Novelty" is a very low bar and I'm arguing that there's plenty of utility above and beyond that.

Measurement? Yes - it's accurate enough for many purposes like home furnishing and interior design mockups.

Games? Yes. It suffers from similar issues to photogrammetry (high poly count, need for repair and cleanup) but that's proved useful in many aspects of game content creation. There's also the fact that games are a creative medium and people develop new art styles all the time. There's some very creative uses of point clouds and 3d volumetric capture which is often glitchy as hell.

Sharing? Not sure what you mean.

Mapping? It's much better quality than the 3d meshes in Google Maps and they are incredibly useful for visualising and inspecting places you can't visit. Compare with the output from drone photogrammetry. This plugs a gap between large scale mapping via drones and laser site scans. It's better than the former and cheaper than the latter.


OK, then we disagree.


Not terrible, but not good enough for most uses.


What "uses" are you thinking of?

It's not going to replace industrial laser scanners for surveying and site inspection, but it's good enough for many of the things we're currently using photogrammetry for.

And the sheer ease of use could open up new applications that wouldn't have been viable with slower workflows. Speed is a feature in its own right.


The standard cave survey technique is fiberglass tape measure, magnetic compass, and inclinometer, visually sighted point-to-point; passage width and height are recorded at each station in two planes, reflecting the traditional 2D cave map projections of plan and profile.

These days, the most common mapping technique is a similar workflow, but with a ruggedized handheld laser distance meter with digital compass and inclinometer. They may have on-device storage, and they may transmit measurements to a PDA or tablet via Bluetooth. Some people sketch the 2D projections of the cave on that digital lineplot, some use the data to sketch with pencil on paper. A cave is a very hostile environment to electronics, so "caveman technology" is often intentionally preferred.
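
For anyone wondering what that shot data turns into: each measurement is just (distance, azimuth, inclination), and the stations come out of basic trig. A hypothetical minimal sketch, ignoring declination, instrument corrections, and loop-closure adjustment:

    import Foundation

    // Reduce point-to-point survey shots (distance, azimuth, inclination)
    // into x (east), y (north), z (up) station coordinates.
    struct Shot { let distance, azimuthDeg, inclinationDeg: Double }

    func stations(from shots: [Shot]) -> [(x: Double, y: Double, z: Double)] {
        var pos = (x: 0.0, y: 0.0, z: 0.0)
        var out = [pos]
        for s in shots {
            let az  = s.azimuthDeg * .pi / 180
            let inc = s.inclinationDeg * .pi / 180
            pos.x += s.distance * cos(inc) * sin(az)
            pos.y += s.distance * cos(inc) * cos(az)
            pos.z += s.distance * sin(inc)
            out.append(pos)
        }
        return out
    }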

Lidar and photogrammetry are quite difficult in a cave. The surfaces are irregular and in varying wet and muddy conditions whose reflectivity causes problems. And shadows - both visible and line-of-sight shadows - produce "swiss cheese" 3D models in all but the simplest caves. Lava tubes are often conducive to these techniques; their basalt composition wreaks havoc on magnetic compasses, so it's a welcome update.

Having used a $75k FARO lidar device in an admittedly clean and comfy cave before, I sure wouldn't attempt to haul it into the average "wild" cave for fear of trashing it. The promise of SLAM-based devices - a computationally expensive technique for real-time integration of 3D data, building a 3D model as you traverse the cave - is amazing; but the expense is still high, the hardware is still fragile, and caves are still irregular enough to never make this easy.

Stone Aerospace in Texas has done some amazing work with autonomous underwater cave mapping using sonar. Their submersible robot has the same advantage that a quadcopter would have in an air-filled cave, whereas we human mappers are still stuck standing on (or crawling on) the floor.


These guys scanned Cerro Gordo and parts of the mineshafts underneath it.

https://youtu.be/k1uXppV6TeA


I’m still waiting for spider-like drones to supplant most manual exploration


Now we just need environmental sensors and the Tricorder is here!


Are scans of nearby rocks of interest to geoscientists?


Does anyone know if comma.ai can use the lidar to improve its self-driving car algos? In general, is this lidar sensor long-range enough to help cars drive?


The lidar sensor is short range - around 5 meters, according to the article.

This is more in line with parking sensors or something, not the long-range lidar that has been used for some self-driving car research and can spot traffic and obstacles at a distance.


Comma uses an RTOS and a lot of hardware-level tweaks. It would not work on an iPhone.


This type of sensing is so unremote!



