Self Hosting a Google Maps Alternative with OpenStreetMap (wcedmisten.fyi)
365 points by thunderbong on Nov 22, 2022 | 119 comments



How come these map projects still use raster tiles? Are there no open source map projects that render the OSM data to vector tiles and draw that vector data on the client's GPU? Maybe raster tiles are better at something I'm missing, but vector maps are easier to style[0] and localize, they're sharper, and they're easier to rotate and zoom smoothly. Maybe it's harder than I think to render them on all sorts of clients, including mobile?

While writing this, I found out that MapTiler[1] is maintaining MapLibre GL JS[2], a fork of a Mapbox project that does just that (a minimal usage sketch follows the links below). It would be interesting to see self-hosted raster and vector maps compared, with the pros and cons of each. You can even render raster tiles from vector tiles on the server if the client needs it[3].

[0] https://openmaptiles.org/docs/style/mapbox-gl-style-spec/

[1] https://www.maptiler.com/open-source/

[2] https://github.com/MapLibre/maplibre-gl-js

[3] https://openmaptiles.org/docs/host/tileserver-gl/
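Getting a vector map on screen with MapLibre GL JS is only a few lines. A minimal sketch, assuming you self-host a style.json (e.g. generated from OpenMapTiles); the URL is a placeholder:

  import maplibregl from 'maplibre-gl';

  // 'container' is the id of a <div> on the page; the style URL is a
  // hypothetical self-hosted style.json pointing at your vector tiles.
  const map = new maplibregl.Map({
    container: 'map',
    style: 'https://tiles.example.com/style.json',
    center: [11.57, 48.14], // lng, lat
    zoom: 12,
  });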


Just to set the record straight: MapTiler is *not* the maintainer of MapLibre. MapTiler is *one of* many participating companies in the MapLibre organization, which has a board and a charter [1].

Amazon has donated $300,000 to MapLibre, Facebook $80,000, TomTom $10,000, and many more companies have donated. Most of the code maintenance is done by various volunteers and companies; e.g. Amazon has hired a whole team of engineers to work on MapLibre Native.

P.S. I came up with the name MapLibre, called people to unite around this effort, and am on the board of the MapLibre organization. [2]

[1]: https://maplibre.org/charter/ [2]: https://twitter.com/nyuriks/status/1336493514646052864


I'm almost too afraid to ask, but is WebGL rendering really necessary for displaying an interactive 2D vector map as opposed to using SVG or Canvas?

Is it because something like a full street network layer would degrade performance too quickly?


If you want to display infinite fractional zoom levels, WebGL is mandatory. That's what most consumers will expect from a high fidelity map application, because that's the way Google and Apple Maps work.


High fidelity... pity the road names don't zoom up to some degree. It is so hard to read the road names that, as an old bugger, I need a magnifying glass.


Not vector but raster with leaflet: would https://www.osmap.uk/larger-text/ (as an example) help in reading road names?


Infinite fractional zoom levels, right, thanks.


I think running stuff like that on the GPU is just more efficient than running it on anything else.


You’re absolutely right and here’s an open source self-hosted project for MBTiles: https://github.com/akhenakh/kvtiles

More info from the author here: https://blog.nobugware.com/post/2019/self_hosted_world_maps/

The vector tiles are much smaller in file size than pixel data, and the rendering technology is much better. Almost all the styling is done on the client side, so the server is basically just querying a spatial database.


A few reasons:

1) Some of the most popular map library frontends like Leaflet support only raster tile layers out of the box.

2) Quality vector generalization (feature-dropping) is much more difficult than raster generalization.

3) A large fraction of use cases just need a "good enough" basemap without those extra features.


Seems like MapTiler is maintaining an open source full-stack vector alternative, and OpenLayers[0] looks good as well, so maybe it's time for legacy libraries to add vector support, or for users to switch libraries? There are even bindings from MapLibre GL to Leaflet[1].

I at least would find it interesting to see the two compared by someone other than me ;).

[0] https://openlayers.org/

[1] https://github.com/maplibre/maplibre-gl-leaflet


Well, I wrote one already for Leaflet ;)

https://github.com/protomaps/protomaps.js

TBH, the blocker is not demand for vector rendering but the technical difficulty and performance constraints of doing rasterization in the browser.


Nice! Only on Hacker News can you make a comment about some kind of hypothetical programming project and have the reply be "Yes, I've made one of those already".

Do you think it's still too early to switch to vector map data for all clients then? Maybe render raster tiles on demand on the server to those clients that can't handle it?


The barriers to adopting vector-everywhere are social and commercial, not technical.

There are a couple of great public raster services, like osm.org's default style and http://maps.stamen.com. These are 100% free to use, so they get used everywhere, but they incur significant expense for the organizations running (and paying for) them.

There aren't equivalent solutions in vector-land yet... I wrote a bit about this previously: https://protomaps.com/blog/free-tier-maps


https://maps.qwant.com uses vector tiles, although polygon simplification at lower zoom levels could be improved.

Their stack is open source, described here: https://github.com/Qwant/qwantmaps#tile-server


Thanks for this. I remember using MVTs on HTML Canvas with Leaflet via plugin way back in 2017.[1] It wasn't perfect, but browser rendering has definitely been the bigger issue than anything with Leaflet.

[1]: https://github.com/SpatialServer/Leaflet.MapboxVectorTile


Can't you just convert the data to SVG and let the browser display that?


SVG will become sluggish past a few thousand features, which is typical for viewing an urban area on the map.


>> SVG will become sluggish past a few thousand features, which is typical for viewing an urban area on the map.

So let's optimize the browser's SVG rendering?


I'm surprised I don't see more implementations caching and serving vector tiles. I know some (most?) large mapping companies do this.

I maintain a heavily used map of the US, and I found that storing and serving data in vector format seems to be the most efficient (on AWS S3 / DO Spaces), and converting to raster in the browser or with a service call / Lambda really opens the door to a lot more use cases. I suspect the parent can say more about this than I can.


Vector tiles are years old now... there are a dozen decent implementations. Why would Leaflet not support vector tiles? Is there some commercial agreement behind closed doors keeping Leaflet crippled?


Because Leaflet is an extensible library for building map UIs in the browser, with DOM elements as grid cells and SVG in some cases for overlay geometry and user interaction.

Things that are mandatory for map display, like:

- Label priority and positioning

- Display of internationalized, RTL, and Bidi text

- Geometry rasterization and styling

are outside the scope of Leaflet's core design, and can be implemented by plugins, one of which I have linked in a sibling comment.


Small note of clarification about MapLibre. It's a vendor-neutral, community-led project supported by a lot of different industry participants (including AWS & Meta).

https://maplibre.org


> MapLibre GL JS[2]

Here's a nice example of the maps in action (though the other links also have showcases): https://openmaptiles.org/docs/website/maplibre-gl-js/

The transitions seem really smooth and the user experience is pleasant... until I open them on an Android device that came out just 2 years ago (one of those rugged models), in Firefox.

I'm sure that the slowness has something to do with rendering vector data, because all of a sudden those maps become unusable: slow zooming, slow loading (possibly the data is fetched reasonably quickly, just slow to display), slow scrolling. In a word, laggy. When looking at an area of my choice, using the vector maps (even after letting them load fully), I get probably less than 10 frames per second, so panning feels choppy.

On the other hand, the raster maps are basically what you'd get when panning across an image - the latter experience remains reasonably smooth. There, the only disadvantage is the typical swapping out of raster tiles as you zoom in or out, but that's a small nuisance at best, apart from the visual difference in quality.

Have a look at OpenStreetMap, which the first link uses for its data: https://www.openstreetmap.org

As it currently stands, if we all used only vector data, I feel like a lot of people on lower spec devices would be left out. I wonder whether any serious tests have been made on the battery usage or performance of both types, but even tests this simple show that there are certain problems with the current implementations of vector maps.

I actually did a blog post about this just now, with a video example of the two types of maps running side by side: https://blog.kronis.dev/everything%20is%20broken/vector-maps...

Note: this isn't meant to be a super accurate comparison, but rather a quick look into some of the immediately apparent practical differences in what the user experience is like.


> The transitions seem really smooth and the user experience is pleasant... until I open them on an Android device that came out just 2 years ago (one of those rugged models), in Firefox.

My daily driver is a Samsung Galaxy S4 (2014) with LineageOS and Firefox (Fennec F-Droid, no google services).

Your link works pretty well, it took a few seconds to load the map, but zooming and panning is quite smooth, it seems.

Edit: panning is much less smooth on openstreetmap. It's faster to bring in new features when zooming, but you have to endure blurry text and less detail for half a second or so. Also, you can't rotate the view. IIRC Google Maps renders text locally to solve that.


> Your link works pretty well, it took a few seconds to load the map, but zooming and panning is quite smooth, it seems.

This is exactly why I recorded a video and put one in that blog post - because sometimes people that are running even the same software will have vastly different experiences. At the end of the day "But it works/doesn't work on my machine." is just the reality that we need to deal with - which will vary for someone with a weaker or more powerful device, certain browser, certain drivers and so on.

> Panning is much less smooth on openstreetmap.

I'd say that from the best to worst for me it'd go a bit like this: Google Maps (application) > OpenStreetMap (web) > Google Maps (web) > MapLibre GL JS (web)

> It's faster to bring in new features when zooming, but you have to endure blurry text and less detail for half a second or so.

Same here!

> Also, you can't rotate the view.

I can't rotate the view in Google Maps (web) either, only in the native app.

For me, the web maps also seem to use raster tiles, whereas the application seems to use vector data, added more info in another comment: https://news.ycombinator.com/item?id=33712150


That's strange; it works flawlessly on my 2018 phone, although that has a Snapdragon 845, which was considered the fastest SoC available for Android phones at the time. How is the Google Maps, OsmAnd, or Mapbox[0] experience on your phone? They also use vector map data.

If you need to support old underpowered devices, you can serve raster tiles to them. I don't know how to automatically detect which type to send to a device; one way would be to time how long it takes to render a frame of vector data and switch to raster maps if the device is too slow (sketched below), or at least present the user with the option.

[0] https://www.mapbox.com/maps/streets (Press "Preview Style")
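To make the timing idea concrete, here's a rough, untested sketch: sample a few frame times during an animation after load and swap styles if they're too slow. Both style URLs and the 30 ms threshold are made up:

  import maplibregl from 'maplibre-gl';

  const VECTOR_STYLE = 'https://tiles.example.com/vector/style.json'; // placeholder
  const RASTER_STYLE = 'https://tiles.example.com/raster/style.json'; // placeholder

  const map = new maplibregl.Map({ container: 'map', style: VECTOR_STYLE });

  map.once('load', () => {
    // Animate the camera so frames are actually rendered while sampling.
    map.easeTo({ zoom: map.getZoom() + 0.5, duration: 1000 });

    const samples: number[] = [];
    let last = performance.now();
    const measure = () => {
      const now = performance.now();
      samples.push(now - last);
      last = now;
      if (samples.length < 30) {
        requestAnimationFrame(measure);
        return;
      }
      const avg = samples.reduce((a, b) => a + b, 0) / samples.length;
      if (avg > 30) map.setStyle(RASTER_STYLE); // under ~33 fps: fall back
    };
    requestAnimationFrame(measure);
  });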


> How is the Google Maps, OsmAnd, or Mapbox experience on your phone? They also use vector map data.

Google Maps (in Firefox) seems to be slow when loading data in, but decent when panning around. Though it does appear that they're using raster tiles (I can quite literally see pixels when I zoom in, before the new tile loads and replaces the one from the previous zoom level).

Google Maps (Android app) seems to be really smooth when panning around, but also similarly slow when loading data in. Zooming is a bit sluggish, but not as bad as vector tiles viewed through the browser.

I also remember using the HERE Maps application a long time ago on an even slower device; somehow it performed way better than the Google Maps application.

My suspicions: a browser might have greater overhead, and maybe not everything plays nicely with Firefox or my hardware. The slow data loading seems to be caused by the CPU/GPU rather than the network connection, because if new tiles are loaded in while I'm panning around, the movement also lags.

> If you need to support old underpowered devices, you can serve raster tiles to them. I don't know how to automatically detect which type to send to a device; one way would be to time how long it takes to render a frame of vector data and switch to raster maps if the device is too slow, or at least present the user with the option.

This is a good idea, provided that the people serving the maps would be interested in supporting both formats.


Firefox seems to have very inconsistent behaviour on Mapbox/MapLibre sites. I deal with a lot of bug reports with no pattern to them, except that Firefox is slow while, on the same device, Chrome is perfect...


I would also like to throw in Planetiler[0] and tileserver-gl[1], which I am happily running to serve vector tiles for some regions in Germany.

[0]: https://github.com/onthegomap/planetiler [1]: https://github.com/maptiler/tileserver-gl


In addition to the links others have given, I'd note that serving dynamically generated Mapbox Vector Tiles (MVT) can be done very easily from a very standard PostGIS server these days [0] (see the sketch after the links). For me this is the no-brainer way to go. Your main problem is likely to be converting OSM's labyrinthine data structure into something easier to serve.

[0] For the explanation and minimal version, see https://www.youtube.com/watch?v=t8eVmNwqh7M (source code here: https://github.com/pramsey/minimal-mvt). For a full-fledged package, see https://github.com/maplibre/martin or https://github.com/CrunchyData/pg_tileserv
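To give an idea of how little code the dynamic approach needs, here's a sketch of an Express handler in the spirit of minimal-mvt; the `roads` table and its columns are hypothetical, and the geometry is assumed to already be in Web Mercator (EPSG:3857):

  import express from 'express';
  import { Pool } from 'pg';

  const pool = new Pool({ connectionString: process.env.DATABASE_URL });
  const app = express();

  // ST_TileEnvelope (PostGIS >= 3.0) computes the tile bounds,
  // ST_AsMVTGeom clips geometries into tile coordinates, and
  // ST_AsMVT packs the rows into a binary Mapbox Vector Tile.
  app.get('/tiles/:z/:x/:y.mvt', async (req, res) => {
    const z = Number(req.params.z), x = Number(req.params.x), y = Number(req.params.y);
    const { rows } = await pool.query(
      `WITH mvtgeom AS (
         SELECT ST_AsMVTGeom(geom, ST_TileEnvelope($1, $2, $3)) AS geom, name
         FROM roads -- hypothetical table imported from OSM
         WHERE geom && ST_TileEnvelope($1, $2, $3)
       )
       SELECT ST_AsMVT(mvtgeom.*, 'roads') AS tile FROM mvtgeom`,
      [z, x, y]
    );
    res.setHeader('Content-Type', 'application/vnd.mapbox-vector-tile');
    res.send(rows[0].tile);
  });

  app.listen(8080);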


The zoom is especially bad with raster OSM right now since they only support zoom up to level 19, which results in a lot of pixelation if you want to go closer than seeing an entire street.


But with vector tiles you (normally) start overzooming at level 15, resulting in either much less detail or really large file sizes for a zoom 14 vector tile that would have to carry all the data to be used in overzooming.

So raster tiles load much faster and OpenStreetMap retina tiles help against pixelation.

And you can - though not advisable for large areas - render raster tiles at much higher zoom levels, e.g. SomeOneElse renders his raster style up to level 28 for the UK.


Couldn't you take this into consideration in the vector renderer? A lot of roads and paths in OSM have width and surface tags; you could render those using some appropriate texture at high zoom levels instead of the usual dotted lines for paths. Forest and grass don't have to be solid green fields, crosswalks and traffic lights could be rendered, contour lines can have user-specified intervals, and so on. There's lots of additional detail you can render parametrically from the data.
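For instance, MapLibre's data-driven style expressions can already express something like this. A sketch, assuming an initialized map whose tile schema exposes a numeric `width` attribute (in meters) on a `roads` source layer; the source and layer names are made up:

  // Given an initialized maplibregl.Map instance `map`:
  map.addLayer({
    id: 'roads-true-width',
    type: 'line',
    source: 'osm',           // hypothetical vector tile source
    'source-layer': 'roads', // hypothetical source layer
    paint: {
      // Use the tagged width where present (fall back to 5 m) and
      // scale meters to pixels as the camera zooms in.
      'line-width': [
        'interpolate', ['exponential', 2], ['zoom'],
        14, ['*', ['coalesce', ['get', 'width'], 5], 0.25],
        20, ['*', ['coalesce', ['get', 'width'], 5], 16],
      ],
    },
  });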


We're working on rendering more detail about roads for high-zoom cases: https://github.com/a-b-street/osm2streets. Try https://a-b-street.github.io/osm2streets/?test=st_georges_cy... and press "Generate details"


Well, that is true; vector tiles do have the opposite problem of having too much data when zoomed out. But using raster beyond level 21 is just completely wasteful in terms of storage required: most of the tiles will just be one colour. I'm not sure how level 28 is even possible without taking up multiple HDDs.

I still think that the more proper solution would be to instead cap out the vector tiles on the other end, not going lower than say level 5.


Both of your arguments are quite true. I also don't think going beyond zoom level 20 or 21 is advisable within a raster tile stack (and I just checked: the UK map I mentioned earlier goes up to zoom 24, not 28 as I stated, though the software stack, in this case mod_tile & renderd, is prepared for up to zoom 28).

I also don't intend to argue against a vector tile stack; both vector and raster have their use cases and their pros and cons. Vector tiles bring much more flexibility but less support in some use cases; raster normally loads/displays much faster (if served pre-rendered) and has universal support on the client side.


Yeah, I mean, if you ask me what the optimal solution is, it would have to be a combination of the two: more specifically, vector data overlaid on top of satellite imagery.

Unfortunately I'm not really aware of any FOSS dataset for that yet, but that may change in the coming decades if the price of mass to orbit goes down.


My experience with it is that vector tiles can become very CPU-heavy on the client, especially when moving into 3D space like with MapLibre's terrain layers.

It's less computationally expensive to just fetch a pre-rendered tile and drape it onto the map, which keeps rotating, panning and tilting very performant.

It doesn't look as nice, so there's the tradeoff. It can also be easier to just cache tiles on the end device and fall back to them instead of constantly reassembling the vector layers.


Raster tiles are smaller in terms of the amount of data sent over the wire, and work better for users with less powerful computers/mobiles.

Really, if you aren't interacting with the data on the client, I would stick with good old rasters. It's harder to get set up with them, but they perform way better.


Ignorant question: is this why OsmAnd always looked and performed worse than expected?


OsmAnd uses offline vector tiles as standard; they support multiple different styles, including custom ones. It performs fine on my phone, and it shows off many of the benefits of vector data. It's easy to choose which kinds of points of interest you want displayed, hike and bike routes can be colored in, and I think it uses much less storage space on the phone. You can add raster maps to OsmAnd if you want to, though.

https://osmand.net/docs/user/map/vector-maps/

https://osmand.net/docs/technical/osmand-file-formats/osmand...

https://osmand.net/docs/user/map/raster-maps/

https://anygis.ru/Web/Html/Osmand_en


Thx; appreciate the detail - I guess I'll try it again!


A vector map of the whole Earth in PMTiles format is only ~65GB[1] and doesn't need any server or database; it's just a static file which you can host wherever you want.

bdon (author of PMTiles) already commented on this thread. I recommend taking a look at https://protomaps.com/docs - compared to this, raster tile servers sound like ancient technology. (A sketch of the MapLibre wiring follows below.)

[1]: https://app.protomaps.com/store/planet-z14
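Wiring a PMTiles archive into MapLibre is only a few lines; a sketch, with a placeholder archive URL and the style layers left out:

  import maplibregl from 'maplibre-gl';
  import { Protocol } from 'pmtiles';

  // Register the pmtiles:// protocol so MapLibre fetches tiles via
  // HTTP range requests into the single static archive.
  const protocol = new Protocol();
  maplibregl.addProtocol('pmtiles', protocol.tile);

  const map = new maplibregl.Map({
    container: 'map',
    style: {
      version: 8,
      sources: {
        osm: {
          type: 'vector',
          url: 'pmtiles://https://example.com/planet.pmtiles', // placeholder
        },
      },
      layers: [], // add styling layers for the source here
    },
  });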


Google Maps with its massive Places dataset is just too good. It's expensive, but I find it consistently up to date and mostly reliable. I have found some bugs with the Places API, which I reported, such as it not working for some queries that are off by 0.001 lat/lng.


Of course Google is going to be more up to date and complete for POIs when these POIs need customers to survive and customers use Google search to find info. Google uses its monopoly in the search market to also become dominant in the geo data market. There's no competing with a hundred million shop owners contributing and updating their own data.

Hiking trails and other more niche things, though. Compare the data quality there :)


I work in e-commerce, and some of my clients have physical stores. Those clients want their site to have a store locator and a map to show the closest physical stores to customers. Over the years, I've looked into doing it with a self-hosted OSM solution, but never got too far, as it wasn't clear what had to be done or run. Everyone seems to be OK with using a Google Maps product in the end (live or static map).

I'll favorite this submission so I'll have something to reference if anyone balks at Google Maps ripping them off, and has a spare server around.


> Everyone seems to be OK with using a Google Maps product in the end (live or static map).

Everyone is also okay with driving combustion vehicles for the time being, even while realizing that it might not be the cleanest option. Supporting Google similarly feels dirty when necessary, but yes it's the norm.

For some things like traffic info, there also just isn't much of an alternative that works in most of the world with one simple thing to implement. There are things where OSM or TomTom just can't compete, because they would require tracking half a billion people's locations in real time. And showing up with your opening hours in Google's search monopoly by existing as a POI on TomTom or OSM will also just never happen, whereas existing as a POI on Google Maps is enough (using dominance in one market to influence another market). I know "everyone" is fine with using these products, but that's not why nerds get enthusiastic about seeing OSM in different places :)


> Even on this smaller extract, these services already use a large portion of my 1TB SSD (667 GB total). Assuming the usage scales proportionally, I would need around 3.7TB of storage for the entire planet. Not to mention the RAM requirements also scaling.

Why? 20TB consumer storage is a thing. https://www.amazon.com/Western-Digital-20TB-Internal-Drive/d... ($420, heh)

What in the implementation makes the RAM usage explode?


Serving large data sets at acceptable latency typically requires some proportional amount of RAM to keep frequently accessed files and indexes in memory (especially if you're serving off of HDDs).

It depends on the service, but we frequently find geo services are RAM limited up to a certain point (at which point there's enough cached to make it CPU limited).


I'm curious about self-hosting for personal usage. An offline Google Maps, just on my laptop, or hosted somewhere but serving very few users.


If it's just local--depending on the exact service--you probably wouldn't need a lot of RAM. Expect memory requirements to scale somewhat sub-linearly with diversity of regions simultaneously accessed.


Valhalla in particular is an in-memory router, so it's holding the whole transportation graph in memory. Not sure about the tiling and geocoding pieces. Most organizations with a "need to serve the world" requirement presumably don't balk at buying all-the-RAMs. Amazingly, the max-RAM-in-a-machine number has grown faster than the data-in-the-world number, and just being memory bound is a perfectly reasonable design decision, given the performance and complexity wins.


OSRM is in-memory as well. We were serving most of Europe and Asia from a box with 128 GB RAM. It can use an mmapped file, but that's too slow, even on an NVMe drive. If you route anything more than 300 km, it will take a few seconds and then time out.


Unfortunately, I am not aware of any open source project that covers the one feature that I still use Google Maps for. Reviews. Does anyone know of any?

I am wondering now if that's something that could be federated well.


https://mangrove.reviews/ exists but has almost no users. As far as I know, there's currently only the main instance, but the software is open source and they provide regular database dumps.


>The nuance is in the value that Google Maps brings: expertise, scalability, and proprietary map data.

Note that this proprietary map data may completely beat OSM or be vastly inferior, depending on your use case.


Sometimes even the scalability may be vastly inferior. I've implemented geocoding/reverse geocoding on OSM data, which for our use was 10-50x faster, and our usage would have chewed through the first-tier plan in about an hour. Instead, our single server costs about as much as that first tier.


Hey, I have a wildly awesome idea for improving OSM. Can anyone build this and help the community? I would help by giving my time.

Here is the idea.

A ride-sharing app uses OSM for routing and such. Fine. The driver follows the map, but sometimes there are new turns or changes to the road that are not in the map, or there are some improvements.

My idea: the ride-sharing app has the historical "actual" rides that have already taken place.

We build a tool that takes points A & B from the given route and plots its own normal route. The actual trip and the "expected" trip are overlaid, and if there are deviations, the software (after some adjustment for errors) can give us a high-level idea of how the map compares to the actual routes. This could also tell mappers which routes to fix.

There is one tool that does this manually, but I'd like it for large numbers of routes, matched 1:1.


Grab (https://www.grab.com/my/), Amazon Logistics, Apple (afaik) and some other mobile apps already do this. Full-time staff are correcting OpenStreetMap with one-way restrictions, speed limits, and road widths. Strava lets OpenStreetMap mappers see where people walked/cycled, to identify missing paths. Facebook is tracing new roads using machine learning on satellite imagery (https://wiki.openstreetmap.org/wiki/RapiD).


FYI there's a list of all the companies doing stuff with OSM on the wiki

https://wiki.openstreetmap.org/wiki/Organised_Editing/Activi...


I get that. I am saying such a tool should exist for people to do the same thing.


I notice an absence of transit directions despite an otherwise robust set of options. I would love to see OpenTripPlanner[0] included in this!

[0] http://www.opentripplanner.org


Yes, I was disappointed that the "bus" routing option appears to be for driving a bus.

The Valhalla pedestrian routing engine does include ferry routes, though it completely ignores ferry schedules. (Ferry use can be discouraged in the routing options... the interface for fine-tuning options for the different routing profiles is quite impressive.)
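For reference, those knobs are plain fields in the request JSON. A sketch of a pedestrian route that discourages ferries, assuming a self-hosted Valhalla instance on its default port; the coordinates are arbitrary:

  // use_ferry ranges from 0 (strongly avoid) to 1 (no penalty).
  const response = await fetch('http://localhost:8002/route', {
    method: 'POST',
    body: JSON.stringify({
      locations: [
        { lat: 47.6062, lon: -122.3321 },
        { lat: 47.6205, lon: -122.3493 },
      ],
      costing: 'pedestrian',
      costing_options: { pedestrian: { use_ferry: 0 } },
    }),
  });
  const route = await response.json();
  console.log(route.trip.summary.length); // route length, km by default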


GraphHopper supports loading GTFS public transit data.


For businesses that don't want to self-host maps but still want to migrate away from Google Maps (or other major cloud providers' offerings), there are quite a few options (full disclosure, I run one of them):

- https://mapbox.com

- https://stadiamaps.com

- https://maptiler.com

- https://jawg.io

- https://thunderforest.com

- https://locationiq.com


As a solution in between, for those who want to include an OSM map on their website but don't want to put load on the official OSM tile servers, a simple tile proxy may be a good idea.

I have been using something based on https://wiki.openstreetmap.org/wiki/ProxySimplePHP5 for a long time already, and since I don't need up-to-date tiles every day, I set the TTL to a few weeks.

Together with Leaflet this works beautifully.
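The same cache-or-fetch idea ports to any language. A Node/TypeScript sketch, assuming Node 18+ for the global fetch; the cache path, TTL, and contact address are placeholders, and the OSM tile usage policy (descriptive User-Agent, modest volume) still applies:

  import express from 'express';
  import * as fs from 'fs/promises';
  import * as path from 'path';

  const CACHE_DIR = '/var/cache/tiles';    // placeholder
  const TTL_MS = 21 * 24 * 60 * 60 * 1000; // ~3 weeks

  const app = express();

  app.get('/tiles/:z/:x/:y.png', async (req, res) => {
    const { z, x, y } = req.params;
    const file = path.join(CACHE_DIR, z, x, `${y}.png`);
    try {
      const stat = await fs.stat(file);
      if (Date.now() - stat.mtimeMs < TTL_MS) {
        return res.type('png').send(await fs.readFile(file));
      }
    } catch {} // cache miss: fall through and fetch from upstream

    const upstream = await fetch(`https://tile.openstreetmap.org/${z}/${x}/${y}.png`, {
      headers: { 'User-Agent': 'my-tile-proxy/1.0 (you@example.com)' },
    });
    const buf = Buffer.from(await upstream.arrayBuffer());
    await fs.mkdir(path.dirname(file), { recursive: true }); // no error if it exists
    await fs.writeFile(file, buf);
    res.type('png').send(buf);
  });

  app.listen(3000);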


Out of curiosity, what’s the motivation to error suppress the mkdir call here (my PHP is rusty)? Anyway, this is great. Thank you for sharing!


It emits a warning if the directory already exists (similar to Python, where os.mkdir raises an error), and the @ suppresses that warning.


While we're on the topic, in Python there's a nice way to deal with this:

  os.makedirs(path, exist_ok=True)


For sure! There's a lot of in-between solutions (depending on what services/regions/update frequency you need).


Ex-Mapbox customer and power user here. They became pretty terrible after being pretty amazing five years ago.


I'd love to hear your thoughts directly on this one. They're a primary competitor, and we would be happy to learn how to better serve your needs! (Email in bio.)



Happy Stadia Maps customer here! Good prices and solid service.


Thanks, Durkie! :)


This is cool and all, but I don't quite understand the point. You're not serving maps as part of a business case, so what is your use case? I use OSM exclusively, for car navigation, topographical information, waterways and some custom overlay data, and I run OsmAnd on my phone and store all the data for the entire US locally. If you're just planning trips and navigating around in your car, why run a server for it, especially one that powerful?


I did a similar thing but with GraphHopper for routing, since it supports GTFS for public transit. I was just barely able to host tiles and travel routing for Sweden on a 16GB M1 MBP. It wasn't fun though: GraphHopper felt janky and the travel time isochrones looked very jagged and not quite right. OSM-style tiles were fine to host with enough disk. Routing is the tricky part, especially public transit.


> GraphHopper felt janky

Can you explain this in more detail? Slow? Is this slow for you too: https://graphhopper.com/maps/ ?

> I was just barely able to host tiles and travel routing for Sweden

Hosting tiles with GraphHopper? Do you mean the vector tiles we serve only for debugging purposes?

> the travel time isochrones looked very jagged and not quite right

For public transit? The GTFS does not give much room to interpret this. Or do you mean road vehicles?

> Routing is the tricky part, especially public transit.

Exactly. And this is the reason your comment is tricky to interpret: the original post was about road routing, and for the isochrones you probably also meant road routing, but in your case you included GTFS, and then the initial setup is a lot more demanding. So not really comparing apples with apples...


I don't remember in detail, but it wasn't obvious to me how to set it up or which version to use, and it was difficult to reason about error messages, etc.

I used the docker image overv/openstreetmap-tile-server for the tiles.

The isochrones looked jagged for both public transit and road as far as i remember.

What's tricky to interpret about my comment? I wanted to share that it's possible to host Google Maps alternatives on a laptop with free software, but that it's tricky to set up and not a smooth and fun experience, especially if you also want public transport routing, which is a hard problem.


Thank you! It's on my todo-list to set up such a server and your work will really help with it.

Here is another excellent write-up from Stefan Erhardt, the founder of OpenTopoMap [0][1], which gives step-by-step instructions [2] on how to set up a tile server with custom tile rendering, also based on OSM data.

Also worth mentioning is GeoServer [3]

[0] https://opentopomap.org/

[1] https://github.com/der-stefan/OpenTopoMap

[2] https://github.com/der-stefan/OpenTopoMap/blob/master/mapnik...

[3] https://geoserver.org/


For those interested in deploying their own OSM stack including all the bells and whistles like Overpass, TagInfo etcetera, have a look at osmseed: https://github.com/developmentseed/osm-seed


Great writing, thanks for sharing! Special kudos to the author for improving the interface and contributing it upstream.

Concerning hardware: I would not compare real hardware with cloud virtual servers. For a task like this I would rent a bare-metal server. For example, Hetzner offers a 6-core AMD Ryzen 5 server with 128GB RAM and 2TB NVMe for only 90 EUR/month [1], albeit in Germany. I believe it is possible to find a similar offer in the US for a comparable price.

[1] https://www.hetzner.com/dedicated-rootserver?ram_from=128&ra...


> Finally, to get external traffic from the internet connected to my computer (and specifically to Nginx), I had to set up port-forwarding on my router.
FYI: most residential ISPs will not allow this and will disconnect your service for it.


What exactly do you mean by most? None of the ISPs I've used have ever done this.


Every one I've ever looked at; here's the policy from my current provider: https://www.spectrum.com/policies/internet-use-policy#:~:tex....

I have run a personal web server open to the Internet. But a good 15 years ago, I actually tried running a public website. It lasted for a few months, then they disconnected me until I called and agreed to turn it off.

I'm willing to bet your provider has something similar; enforcement may be lax, but it's not something you can count on.


This would be very unusual in Australia. Perhaps it varies by country. Here's my current ISP's policy: https://www.tpg.com.au/forms/FINAL%20-%20TPG%20Acceptable%20...


That's great if true. However, they have the catch-all "non-ordinary use" clause. Obviously if you try to run a massive data center they'll shut you down, which raises the question of where the cutoff is going to be.


The only reason they have this in the contract is so they can cancel your service over it; I can guarantee they're not monitoring this at all any more. If they wanted to block hosting services, they'd block the ports on their side.


When it comes to running in the "cloud", I'd probably go for Hetzner: way, way cheaper than the examples mentioned in this post. I guess the main question is how much privacy is really worth to you, or what other alternatives (e.g. Mapbox) meet your demands.

Think I'll give this a go; I've always wanted to run my own instance of OSM, to see how performant it is and how much traffic you can handle on a single server.


Also the quality, though. I don't know how many times we got email notifications that our VPS will be rebooted or our storage unavailable due to maintenance this or that. Don't know if that's normal but it's another argument, in my situation at least, to keep hosting my modest requirements at home or from the office if it's work stuff.


AWS just goes down without sending emails and without even acknowledging the outage on the status page.


Might be worth noting that Nextcloud offers an OSM maps plugin that handles a lot of this stuff with a single click. I haven't used the service too much, but I was surprised how easy it was to set up.


Nextcloud only offers a user interface to external services.


In a similar vein, there is maps.earth / Headway, aiming to be easy to self-host and very open to contributions.

https://github.com/headwaymaps/headway

https://about.maps.earth/

https://news.ycombinator.com/item?id=32551273


I was marveling at my new Garmin watch yesterday that has topographical street maps for the entire world in only eight gigabytes.

A watch having more than enough space for that is amazing in itself but really that's quite the compression for a seriously intense dataset.

Technically everyone could have that pre-installed on their PC or phone these days; the problem, of course, is that it's a commercial dataset, but OpenStreetMap solves that part.


Personally I'd really like to see an OsmAnd-based tile server. OsmAnd has a lovely file format that's compact and kept up to date.


> Based on the cheapest instance that matches my own PC's specs

I wonder how much compute and RAM you need for the routing part.

When I worked on a maps stack (no routing, and in the cloud) we would only provision a big compute instance for importing data and seeding (pre-caching) the new tiles. The server itself could be less powerful, though you might have to pay extra for more hard drive space.


How easy is it to sync with the latest OSM?


Author here: the docker project for the tile server allows setting up automatic syncing fairly easily:

https://github.com/Overv/openstreetmap-tile-server#enabling-...

I'm not sure about Nominatim or Valhalla's ability to auto-sync the latest changes, at least with the dockerized version, but it may be possible.


I think the official Nominatim server has minutely updates so I'd be very surprised if the open source version isn't the same and can't do that as well. I've never set it up though, so I'm not sure where to find those docs.


Yes, you can do minutely updates with Nominatim, instructions are here: https://nominatim.org/release-docs/latest/admin/Update/


If you just want map tiles, you don't really need much of anything. I run a map service on the cheapest Hetzner box, costing €4.50/month. Sure, it's not massive scale, but it would cost about $1000/month on Mapbox (about 2M tiles). Just nginx + tileserver-gl + certbot.


I do, and my web interface works with noscript/basic (x)html browsers.


Link?



This is awesome, this is why HN rocks! Thanks for sharing!


Nice little article about hosting OSM on a desktop.

Including a GitHub repo.


> Because I don't have a static IP address through my Internet Service Provider, I needed to use a Dynamic Domain Name Service client, which updates my DNS records to point at the current IP address of my router.

I'm guessing that your router is available through the Internet directly, then? I wonder in how many places that's the case, because it isn't in my country (Latvia) with my ISP (LMT, wireless connection in the countryside). It seems like they're using CGNAT or something, and any inbound traffic ended up getting dropped, at least the last time I tried: https://en.wikipedia.org/wiki/Carrier-grade_NAT

What I eventually did was rent a few cheap VPSes and set up WireGuard on them, so that my homelab servers could be accessed through the VPSes. I wrote a vague tutorial on my blog (though I should add the disclaimer that you probably don't want to forward almost all the ports if you're not lazy): https://blog.kronis.dev/tutorials/how-to-publicly-access-you...

Either way, this seems like a cool project!

> Even on this smaller extract, these services already use a large portion of my 1TB SSD (667 GB total). Assuming the usage scales proportionally, I would need around 3.7TB of storage for the entire planet. Not to mention the RAM requirements also scaling.

Though I can't help but wonder why it wouldn't be possible to decrease the maximum level of detail that the map files contain, i.e. the zoom levels available. Even in the demo environment you get the lowest level of detail (land contours) for the whole world, even if you only have a part of it available in the higher fidelity.

> DigitalOcean Droplet (Memory Optimized): Total - $832.00 / mo

> AWS r6a.4xlarge EC2 Instance: Total - $735.42 / mo

> Azure D32as v5 VM Instance: Total - $1127.36 / mo

As for the cost aspect, someone suggested Hetzner, which does both cloud VPSes and dedicated ones: https://www.hetzner.com/

  Example of Hetzner VPS with 32 vCPU cores, 128 GB of RAM (CCX51, dedicated vCPU): 353.31 Euros a month + storage costs (600 GB by default)
  Example of Hetzner dedicated server with 16 CPU cores, 128 GB of RAM (from auction, not guaranteed): 46.05 Euros a month + storage costs (2 TB by default)
Then there's Contabo, which also has prices that are generally on the more affordable side: https://contabo.com/en/

  Example of Contabo VDS with 12 CPU cores, 96 GB of RAM (Cloud VDS XXL): 149.99 Euros a month + storage costs (720 GB by default)
  Example of Contabo dedicated server with 20 CPU cores, 256 GB of RAM (Intel Dual 10-Core): 149.44 Euros a month + storage costs (depends)
Apart from that, some people might also suggest that you look at https://lowendbox.com/ or similar sites for good deals, though typically the focus of such sites won't be on beefier specs.

Depending on exactly how many resources you need, a smaller piece of hardware might be suitable. There can sometimes be regional providers; for example, I use Time4VPS for most of my hosting: https://www.time4vps.com/?affid=5294 (affiliate link, feel free to remove affid; they do have a sale going on now, though)

Apart from all that, I wonder why we don't have a "champion" project here that does pretty much everything in a single offering. Like for mail servers there's Mail-in-a-Box, and for file storage there's Nextcloud (though some prefer Seafile). Such offerings are seldom ideal for all use cases, but they're good for mass adoption.


Personally, I decided to use CloudFlare tunnels to open the services that I need to the internet.

But lately I've been using Tailscale instead; no need for a VPS then, just the client on each device that needs to have access to the machine (plus it can handle SSH now).

I realized I spend enough time on different projects on my home lab that the "open service to be accessible everywhere" part isn't worth my time or interest anymore. I'm glad for Tailscale and CloudFlare.


> Though I can't help but wonder why it wouldn't be possible to decrease the maximum level of detail that the map files contain, i.e. the zoom levels available

Note that processing to reduce accuracy will require even more resources, and what someone wants to preserve or drop depends on the person.

And in the end, at least sometimes you will want to zoom in to the highest detail when using map data.


> Note that processing to reduce accuracy will require even more resources, and what someone wants to preserve or drop depends on the person.

This is an excellent point, thanks!

In practice it probably depends on whether the party providing the map files for download (and preparing them) is okay with the bandwidth costs and/or the costs of the alternative: downsampling (cutting out zoom levels).


Because your ISP, LMT, decided that if a client wants their own IP they should upgrade to a business plan; that's all there is to it. They probably don't have enough IPv4 addresses to hand out to each client and use NAT instead (many, many clients under the same external IP), so obviously incoming connections get dropped. I find it extra interesting that they decided to do exactly the same for IPv6 addresses, which they surely have enough of, so most likely the business plan upgrade is the primary reason here.


I have the same problem with my ISP (Orange, France). In the router configuration (NAT / PAT something) it's possible to open a port on one machine that will be internet-accessible. It also lets you link it to a No-IP account, so your IP stays synced with it.


I find that most cloud providers really earn their money from transit/bandwidth costs, whilst oversubscribing their compute resources to then sell them at a discount - does this not make the WireGuard proxying prohibitively expensive?


> I find that most cloud providers really earn their money from transit/bandwidth costs, whilst oversubscribing their compute resources to then sell them at a discount - does this not make the WireGuard proxying prohibitively expensive?

Oh, my VPSes don't have data transfer costs as a dynamic component (Time4VPS, linked above). I just pay a fixed fee for a given amount of bandwidth and if I exceed it, then the speed is reduced for that VPS until the end of the month.

Here's the relevant bit from their FAQ:

> We reduce your VPS server’s port speed 10 times until the new month starts. No worries, we won’t charge any extra fees or suspend your services.

That said, they're definitely not the only platform that does something similar; many other VPS providers also have a certain amount of data transfer included, Hetzner and Contabo among them.

I actually have to say that Hetzner is perhaps the best billing wise, because if you just need a VPS for an hour or something, you can also order it for that amount of time, instead of a full month like with many other providers.

Either way, in my case I don't need to worry about the bandwidth too much, because nothing that I want to expose publicly is that popular, at least from my homelab nodes - mostly test environments and such to show to friends/colleagues and so on. Most of the stuff that can generate a bit more traffic (for example, my blog) I host in the data center that gives me all my other VPSes as well, just to not overwhelm my residential connection.


I have a lot of stuff running on a Raspberry Pi using DuckDNS and Let's Encrypt.

What problem do you have with this setup?


> What problem do you have with this setup?

With my old setup: the fact that my public IP address wasn't "mine". It didn't lead to my router, so I couldn't do port forwarding and expose anything publicly; instead it routed to my ISP's infrastructure, and thus any inbound traffic that I wanted to reach my servers at home was dropped.

Thus I used WireGuard to make a tunnel between my local homelab server (outgoing connection) and a VPS that I rented, which could then forward any traffic it receives on port X to the same (or a different) port on my local server through the tunnel. Of course, the wording I use could use some work; networking isn't my forte.

I also use some dynamic DNS (ddclient is great) in places and Let's Encrypt for TLS certificate renewal, no complaints there.


I found that Organic Maps (https://organicmaps.app) is a good open-source OSM-based alternative to Google Maps. Please check it out.



