Point Cloud Library (pointclouds.org)
61 points by gjvc on March 13, 2021 | 15 comments



The PCL is amazing for getting started with point cloud processing. There is an immense variety of hugely useful algorithms.

Once you get to a certain point some issues start becoming prominent:

* It is quite bloated and pulls in huge dependencies such as VTK

* Many algorithms are quite unoptimized; for example, NDT registration is rather slow and inaccurate (a sketch of its API follows this list). Some recent, leaner KD-tree implementations, such as nanoflann and libnabo, may also be faster than the FLANN implementation that PCL uses.

* PCL is an old library and carries some evolutionary vestiges, such as relying on Boost smart pointers rather than the std::shared_ptr and std::unique_ptr recommended by modern ISO C++
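
To make the NDT point concrete, here is a minimal sketch of the registration call in question, using PCL's public NDT API; the parameter values are the ones from the PCL tutorial, not tuned, so treat it as illustrative only:

    // Sketch of PCL's NDT registration (the algorithm criticised above).
    // Parameter values come from the PCL tutorial and are not tuned.
    #include <Eigen/Core>
    #include <pcl/point_types.h>
    #include <pcl/point_cloud.h>
    #include <pcl/registration/ndt.h>

    Eigen::Matrix4f align_ndt(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& source,
                              const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& target)
    {
        pcl::NormalDistributionsTransform<pcl::PointXYZ, pcl::PointXYZ> ndt;
        ndt.setTransformationEpsilon(0.01);  // convergence threshold
        ndt.setStepSize(0.1);                // More-Thuente line search step size
        ndt.setResolution(1.0);              // NDT voxel cell size (metres)
        ndt.setMaximumIterations(35);
        ndt.setInputSource(source);
        ndt.setInputTarget(target);

        pcl::PointCloud<pcl::PointXYZ> aligned;
        ndt.align(aligned);                  // an overload also accepts an Eigen::Matrix4f initial guess
        return ndt.getFinalTransformation();
    }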

Meanwhile, for some applications, storing a point cloud as a "struct of vectors" can be faster than a "vector of structs" when using SIMD operations, due to contiguous per-field memory access, i.e.

    struct PointCloud { Eigen::ArrayXf x, y, z; };
may be better than

    struct Point { float x, y, z, c; }; using PointCloud = std::vector<Point>;
Oh well, at least PCL doesn't try to roll its own linear algebra (like OpenCV lol).
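
For illustration, a sketch of what the struct-of-arrays layout buys you: with one contiguous Eigen array per coordinate, a whole-cloud expression like the squared distance below compiles to vectorised loops over contiguous memory (the names PointCloudSoA and squared_distances are made up for this example, not PCL API):

    // Struct-of-arrays layout: one contiguous Eigen array per coordinate.
    // Whole-cloud expressions then vectorise over contiguous memory.
    #include <Eigen/Dense>

    struct PointCloudSoA {
        Eigen::ArrayXf x, y, z;
    };

    // Squared distance of every point to a query point, computed per axis.
    Eigen::ArrayXf squared_distances(const PointCloudSoA& cloud,
                                     float qx, float qy, float qz)
    {
        return (cloud.x - qx).square()
             + (cloud.y - qy).square()
             + (cloud.z - qz).square();
    }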


Glad I’m not the only one - getting research projects replicated was insanely difficult with PCL interacting with external libs. I must’ve spent days getting CMake happy with a cross-compiled setup. I’m seriously considering trying to move the important bits to Rust, but it’s just such a large project.


What’s the best device for capturing point clouds in the sub $500 range?


Azure Kinect DK[1] & Intel RealSense L515[2] are both strong contenders for 3D imaging. The Azure Kinect DK uses time-of-flight depth sensing good to 5 m in low-res mode, and includes a 12 MP RGB camera that can do 2160p30, an accelerometer/gyroscope, and a 7-microphone array. The L515 is a lidar unit that samples faster & extends to 9 m, & includes a 1080p30 RGB camera.

If the goal is generating point clouds, the Azure Kinect's better RGB sensor could be good to have, whereas lidar's advantage (higher-throughput depth sampling, i.e. faster or higher res) isn't going to mean as much, probably? Additionally, the accelerometer/gyroscope on the Kinect could potentially be quite useful for registration (figuring out how the camera is pointed); a rough sketch follows the links below.

[1] https://azure.microsoft.com/en-us/services/kinect-dk/

[2] https://www.intelrealsense.com/lidar-camera-l515/
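
On the registration point, a rough sketch of how an IMU orientation estimate could seed PCL's ICP as an initial guess; the align() overload with a guess matrix is real PCL API, but the IMU input and function names here are assumed for illustration, not a tested pipeline:

    // Rough sketch: use an IMU orientation estimate as the initial guess for
    // PCL's ICP, so the optimiser only refines translation and small rotation
    // errors. How the quaternion is obtained from the IMU is assumed, not shown.
    #include <Eigen/Geometry>
    #include <pcl/point_types.h>
    #include <pcl/point_cloud.h>
    #include <pcl/registration/icp.h>

    Eigen::Matrix4f register_with_imu_prior(
            const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& source,
            const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& target,
            const Eigen::Quaternionf& imu_orientation)   // hypothetical IMU-derived orientation
    {
        Eigen::Matrix4f guess = Eigen::Matrix4f::Identity();
        guess.topLeftCorner<3, 3>() = imu_orientation.toRotationMatrix();

        pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
        icp.setInputSource(source);
        icp.setInputTarget(target);
        icp.setMaximumIterations(50);

        pcl::PointCloud<pcl::PointXYZ> aligned;
        icp.align(aligned, guess);            // align() accepts an initial transform guess
        return icp.getFinalTransformation();
    }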


I just built an app for this, if you have a LiDAR iOS device. https://keke.dev/blog/2021/03/10/Stray-Scanner.html

The iPhone costs more than $500, but you can also use it for other stuff.


Hi, that looks interesting. Where can I make a bug report? Can’t see the red button to start recording.


Email me at hello@keke.dev. Let's figure it out.


Great! I sent an email.


I can recommend the Intel RealSense L515. Also the old Kinect and Primesense/Xtion cameras are still pretty capable.

And, shameless plug: if you need software to turn the RGB-D streams into stitched point clouds, check out Dot3D by DotProduct (disclaimer: I'm the founder).


The most accessible device these days is probably the RealSense.


I had good results with the Flexx series from pmdtec. It's super small and light. https://pmdtec.com/picofamily/flexx/


Isn’t Kinect the obvious choice?


yeah but they asked for best, not most well known


This was news to me and seems very useful, thanks!

The tutorial image [1] seems very similar to the icon in Ikea's build instructions [2]; that might be worth investigating to avoid problems.

[1]: https://pointclouds.org/assets/images/tutorials.png

[2]: https://www.google.com/search?q=ikea+man+instructions&prmd=i...


Related: Potree WebGL point cloud renderer https://potree.github.io/



