Photogrammetry & image processing, anything to do with dimension reduction (basically every field with real-world sensor data), ontology/semantic data processing...


You could interpret it more generously as: be not concerned over that which you do not control.


Agree.

> I don’t believe that any supernatural force is at play, certainly not an all caring and loving one, for there is great and needless suffering in our world.

Consider an alternate reading, doubling down on unfolding:

“And whether or not it is clear to you, no doubt the universe is unfolding as it will. Therefore be at peace with entropy, whatever you conceive it to be.”

One can read the whole and happily strive to bring local order to local chaos as a vocation, both suspecting that the effort may be futile over the infinite yet feeling fulfilled by it locally, and being at peace with entropy.


Are you aware of solutions like Alicevision/Meshroom (MPLv2)?

https://alicevision.org/



With just a splash of row_to_json and json_agg, you can JSON-encode your entire query result in PG.

  SELECT json_agg(row_to_json(t))
  FROM information_schema.tables as t;


The dealbreaker here is having to do this dance for every nested field on every entity, and having to write a separate backend query for each separate front-end use-case for a single entity.

It just isn’t feasible to write everything out when the schema has >30 entities, some with N:M relations, and when queries routinely go several levels deep to fetch dozens of joined fields at every level. The boilerplate overhead is too much.
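To make the dance concrete, here is roughly what a single level of nesting already looks like by hand (hypothetical authors/posts tables, just a sketch):

  SELECT json_agg(json_build_object(
    'name', a.name,
    -- correlated subquery: one JSON array of posts per author
    'posts', (
      SELECT coalesce(json_agg(json_build_object(
        'title', p.title,
        'created_at', p.created_at
      )), '[]'::json)
      FROM posts AS p
      WHERE p.author_id = a.id
    )
  ))
  FROM authors AS a;

Now multiply that by every nesting level, every joined field, and every front-end view of the entity.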

A natively GraphQL database makes such queries an order of magnitude less verbose to write out, and all the queries can stay in the frontend (or become persisted queries on a GraphQL "middle-end" server).


Hasura, a GraphQL server, uses exactly this technique, transforming final query results into JSON inside PG before bringing them back into local memory.


Shameless plug: I'm working on a project in this space (see my other comment on this post). I'd love to hear about your preferred workflow for ingesting 3D data into your editor.

Put another way, what output formats are most convenient for your use case? Are raw point clouds (XYZRGB) useful to you, e.g. for "prototyping volumes", or do you require meshes as a starting point? If the latter, how would you quantify how much retouching is acceptable (holes, suboptimal meshing, irregularities...) for an urban model?


If you'll allow a shameless plug: I'm about to publish a Python package to help do just that, by combining photogrammetric projects.

The workflow: take a couple hundred overlapping pictures, compute a model using an existing backend, and store the image/3D data. Rinse and repeat. Once you have a few neighbouring projects, we recompute submodels at the boundaries and merge the involved 3D models by reprojection.

I'm currently wrangling some issues around packaging; a BSD-licensed functional prototype should be out sometime next week. I'd love to get some users or eyes on it, so do get in touch!


Plus Tor hidden services (or onion services, in the current nomenclature) offer a fairly robust and painless authentication story in this "sorta VPN" scenario. Generate a couple of extra keys and invite a friend and his bots.

https://community.torproject.org/onion-services/advanced/cli...
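For the curious, a minimal sketch of what v3 client auth boils down to: two small files, one on each side (file names and key material below are placeholders):

  # Server side: one file per authorized client, e.g.
  # /var/lib/tor/hidden_service/authorized_clients/alice.auth
  descriptor:x25519:<base32-encoded-x25519-public-key>

  # Client side: point torrc at a directory of private keys
  ClientOnionAuthDir /var/lib/tor/onion_auth
  # ...containing e.g. onion_auth/alice.auth_private with one line:
  <onion-address-without-.onion>:descriptor:x25519:<base32-encoded-x25519-private-key>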


I use the client auth feature. The chances of someone stumbling across my hidden service are pretty low, I think, but not zero. With auth set up, I don't think a Tor client can even get the ID of the rendezvous server without the correct key.


I've been meaning to take a crack at a software version of this for insects/birds/plants. Do you know of any good, ideally libre, sources for the data?


I'd be willing to help you. The largest dichotomous key [1] is partially digitised, but we could transform it into a proper dataset. Copyright no longer applies to parts of the Flora. There are many other sources; I remember a dichotomous key in HyperCard [2].

Scanning 81 million science paper PDFs will yield most of the data in [1] and [2]. It would be possible to get a grant to make these transformations and add them to Plantnet and iNaturalist.

[1] https://en.wikipedia.org/wiki/Flora_Europaea

[2] http://www.etibioinformatics.nl/faq/


I am away from home atm but very interested in making this happen. I'll look into it a bit and get in touch by week's end! Thanks for the links.


It is also possible to add DNA material to field observations. A DNA scanner costs around $1000 and attaches to laptops and cellphones, so it could be used in the field. I'm not sure amateurs can handle the software yet, but that could be fixed.

The most progress could come from my speciality: data mining scientific papers. This requires a few hundred terabytes of hard disks and a fast computer, plus 2000 hours of programming. That means at least $20K in grants, crowdfunding or donations, but it would yield an enormous boost of data.

Another addition could be scanning all the herbaria and botanical gardens, both photos and DNA samples. Crowdsourcing by thousands of citizen scientists would be the way to go.


I've downloaded almost 2 million photos from several bird, insect and plant databases this weekend and (partially) compared them with the Plantnet and iNaturalist datasets. I also looked at the dozens of other databases with mosses, lichens, bacteria, etc. Still, nowhere near the >8.7 million species of the world are in those databases. The most complete list would be the plants of Europe (only a few thousand out of 400K species); most other regions and kingdoms are only partially identified.

As we can see from dozens of papers, new species have been found through iNaturalist collecting. Better software and more data would certainly boost the rate of new species discovery. And that is vital, as we are in the middle of the 6th mass extinction event of the last 570 million years. Species go extinct before we have even photographed or identified them!

I also have my dad's flora research photos from 30 years of field trips, plus my own, of which 3000 plants were identified with the Flora Europaea dichotomous key (a day's work per flower).


Yes, I know of quite a list of libre sources, only some of which are digitised and online. So (continuously) exporting data from the hundreds of small observation or collection databases, or going out and scanning the old offline materials, would be what you could organise with better software.

There are also ways to increase the collection of libre data. You could auto-generate field trips for holidaymakers or school trips. Give them an auto-generated itinerary leading them past places with flowers, insects or birds, with a list of things to look out for and photograph or determine with keys. Make it into a game, like geocaching or a treasure hunt. You generate it based on individual tastes (walking, bicycle, car tour, camper, bus tour), individual or group size, age and knowledge level, temperature, season, time of day, climate, location, etc.


The US Fish and Wildlife Service has a really useful tool called the Feather Atlas for identifying bird feathers by their color, size, and shape:

https://www.fws.gov/lab/featheratlas/idtool.php


I'm working on a collaborative photogrammetry solution (think async/distributed 3D mapping from overlapping pictures) that shares data via IPFS. Flattering myself heavily, I believe this sort of public-data-consuming application fits IPFS like nothing else.


You were going to contact me by the end of this week (morphle at ziggo dot nl) about the plant/species identification software: https://news.ycombinator.com/item?id=31537487.

Your collaborative photogrammetry could be combined with the open and free species identification API, my custom OpenStreetMap data extensions, and KartaView/OpenStreetCam/OpenStreetView to get tighter photogrammetry location integration and more free, crowdsourced open data to feed into the photogrammetry. A demo of Seadragon/Photosynth [1] inspired me to work on this.

[1] https://www.ted.com/talks/blaise_aguera_y_arcas_how_photosyn...


That sounds fascinating! Can you elaborate or point me to more information on it? I would love to hear your perspective from a real use case.


With pleasure, drop me a mail and I'll get back to you next week (last three letters of my username here @ rest of my username dot artificial intelligence). I haven't put anything online yet though, sorry!


I actually got into software/systems engineering working on internal software for "minor" player groups in EVE. The level of integration[0] those orgs had 10+ years ago outranks that of 80% of my current real-world clients in terms of discoverability, documentation and depth.

I mean, that's what you get from a bunch of EVE-playing nerds committing to a labor of love. Best projects I've ever had.

[0] Real-time updating mining & trading boards by location, 100bn+ ISK inventory tracking down to the cent across many corps, characters, inventories & contracts, ... Killboard & market data feeds, risk evaluation, resource allocation optimisers... Absolutely impeccable archives & backups going back 5+ years, on top of Postgres & Flask, IIRC.

edit: And audit logs EVERYWHERE. Seriously, I've seen better CYA/KYC/chains of responsibility from EVE recruitment corps than from some actual for-profit entities.


Also, the combination of (mostly) player-run economy + substantial no-consequence PvP zones + large player organizations seems to attract a different sort of person.

There are still your average gamers, but EVE had a much more adept playerbase (in terms of real-world skills) than any other MMORPG I've played.


As the saying goes: "When an EVE player quits EVE and moves to WoW, the average IQ of both games goes up." Incredible game; it took up far too much of my time.


This is wild. I had no idea the game was like this. Now I’m feeling inspired to finish that browser strategy game I started a few years ago.

