Subject: Overpass API development
- From: Roland Olbricht <>
- Subject: Re: [overpass] compressed database info
- Date: Thu, 26 Feb 2015 06:13:01 +0100
Noticed on earlier thread a mention of a compressed database.
Just looking for some introduction information on this.
How much space does it save? Does it have a performance impact? How bleeding
edge is the code?
The code is up to date. It has been running non-public test updates for some weeks without any problems. Nonetheless, there are other known bugs, so please consider merging the "minor_issues" branch. The "repair_attic_updates" branch only affects you if you use attic data.
The whole world database, including attic data, has a compressed size of 200 GB, as opposed to 500 GB for the uncompressed database.
The performance of the code is not known yet, but first tests suggest that it is similar to that of the uncompressed code.
Both the performance and the compression rate can likely be improved by adjusting the values in settings.cc, but I haven't tried that yet. If you see mostly processor load, you could replace every 512*1024 by 256*1024 to reduce the amount of data to process. If performance is limited by disk latency, it may instead help to increase that value to 1024*1024. Likewise, doubling the divisor (currently 4, in get_block_size() and get_max_size()) may improve the compression rate but raise latency.
In general, these adjustment tests are a lot of work, because each one requires a database rebuild.
Just wondering, as I have an Ireland-only, hourly updated Overpass instance and
I'm getting tight on disk space ;-)
I encourage you to try the compressed version if you can live a day or so without the database, because of the necessary database rebuild.
- [overpass] compressed database info, donal.diamond, 02/24/2015
- Re: [overpass] compressed database info, Roland Olbricht, 02/26/2015