
overpass - Re: [overpass] Setup question

Subject: Overpass API development




  • From: Zecke <>
  • To:
  • Subject: Re: [overpass] Setup question
  • Date: Sat, 05 Nov 2016 11:47:42 +0100

On 05.11.2016 at 10:30, mmd wrote:

Thanks for your detailed comments.

> Hi,
>
> I hope you don't mind if I answer in English, so that the post is of
> interest to a wider audience. I will also try to translate some of your
> questions.

That's perfectly fine with me. Thanks, I was just lazy...

> 36 hours processing time for a full planet import still sounds
> reasonable. In my tests release 0.7.53 took about 20 hours on fast SSD
> and using lz4 compression ("configure --enable-lz4").

I also think that's ok. I have no SSD, and did not use lz4.

> As you've probably also noticed in the documentation, there's an option
> to start with a clone db instead. This would save you some time for the
> first import.

Must have missed this one.
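For the record, the clone route is scripted. A hedged sketch, assuming the download_clone.sh script shipped with the release and the dev.overpass-api.de clone source described in the public docs; the paths are placeholders matching the install layout discussed in this thread:

```shell
# Sketch: bootstrap the database from a clone instead of a planet import.
# SOURCE and the flags follow the dev.overpass-api.de documentation;
# BIN_DIR/DB_DIR are assumptions - adjust to your own setup.
SOURCE=http://dev.overpass-api.de/api_drolbr/
DB_DIR=/opt/osm3s/db
BIN_DIR=/opt/osm3s/bin/bin

if [ -x "$BIN_DIR/download_clone.sh" ]; then
  # --meta=yes corresponds to the --meta flag used with init_osm3s.sh
  "$BIN_DIR/download_clone.sh" --db-dir="$DB_DIR" --source="$SOURCE" --meta=yes
else
  echo "download_clone.sh not found in $BIN_DIR - skipping"
fi
```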

> Regarding hourly diffs: do you want to continue processing hourly diffs
> in the future as well? Did you set the replicate_id file to a value
> which matches your planet file? It is very important that there's no gap
> between the planet file's last edit (use osmconvert --out-statistics to
> get that timestamp) and the first edit of your hourly diff.

I used the planet from 27 Oct, 53G. I downloaded it from the gwdg.de mirror, so the dates may differ. (I see it having a date of 26 Oct, 23:37h on planet.osm.org.)
I started hourly diffs with sequence no. 36136 as the replicate id, which dates 25 min earlier than the planet, so that should be ok?
Can I see the seqno of the planet somewhere?
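One way to cross-check a sequence number against the planet is via the replication state files, whose nine-digit directory layout is shared by the minute/hour/day feeds on planet.openstreetmap.org. A sketch, assuming that layout:

```shell
# Map an hourly replication sequence number to its state file path,
# e.g. 36136 -> 000/036/136, then build the state-file URL.
seq_to_path() {
  printf '%09d' "$1" | sed 's|^\(...\)\(...\)\(...\)$|\1/\2/\3|'
}

SEQ=36136
STATE_URL="http://planet.openstreetmap.org/replication/hour/$(seq_to_path "$SEQ").state.txt"
echo "$STATE_URL"

# To compare against the planet file's last edit (osmconvert cannot read
# bz2 directly, hence the bzcat pipe):
#   bzcat planet-latest.osm.bz2 | osmconvert - --out-statistics | grep 'timestamp max'
#   curl -s "$STATE_URL" | grep timestamp
```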

> btw: from previous experience, I recommend to do a backup after a first
> successful full import and only then start with (minutely/hourly/daily)
> diffs. That way you can recover easily in case things go wrong during
> subsequent db updates.

Good idea! :-)
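The backup itself can be a plain tar of the database directory, taken while the dispatcher and update scripts are stopped so the files are consistent. A sketch, demonstrated on a throwaway directory so it runs anywhere; in the setup above the real directory would be /opt/osm3s/db:

```shell
# Snapshot a db directory before applying diffs.
WORK=$(mktemp -d)
mkdir -p "$WORK/db"
echo demo > "$WORK/db/nodes.bin"   # stand-in for the real database files

# -C keeps the archive paths relative (db/...), easing restores elsewhere.
tar czf "$WORK/db-backup.tar.gz" -C "$WORK" db
tar tzf "$WORK/db-backup.tar.gz"
```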

> Lots of missing node messages is always a sign that something went wrong
> somewhere before. I guess those messages were created when applying
> hourly diffs, rather than during your first full import? Most likely,
> there's some issue with the replicate_id value you've chosen before
> starting db updates.

Those messages already appeared during planet initialization.

That's my command history so far (as non-root):

nohup time wget http://ftp5.gwdg.de/pub/misc/openstreetmap/planet.openstreetmap.org/planet/planet-latest.osm.bz2 > /tmp/wget.log 2>&1 &
nohup ./bin/bin/init_osm3s.sh /opt/osm3s/planet/planet-latest.osm.bz2 /opt/osm3s/db /opt/osm3s/bin --meta &
sudo start overpass
nohup /opt/osm3s/bin/bin/fetch_osc.sh 36136 http://planet.openstreetmap.org/replication/hour/ /opt/osm3s/diffs/ &
cd bin/bin
nohup ./apply_osc_to_db.sh /opt/osm3s/diffs/ 36136 --meta &

> In any case, all Overpass processes should run as a dedicated non-root
> user, and maybe www-data for the Apache CGI. The effects you see are all
> caused by running the dispatcher as root.

I supposed that already.

> I guess most of the long processing time is really caused by writing
> lots of "Node ... used in way ... not found" messages. But in any case,
> I don't think it makes sense to continue this process without fixing
> those messages in the first place.

You're saying they should not appear during planet initialisation - what should I do if they appear nonetheless?

Another question regarding the dispatcher: I realized that osm3s_query only works when the dispatcher is stopped (otherwise there are error messages). If the dispatcher is stopped, queries run fine. On the other hand, it is said the dispatcher is needed for continuously applying diffs?
I think I don't need a web interface, so maybe I don't need the dispatcher at all? What's the best way to run queries from a) a script and b) a Java application?
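For scripted use, a hedged sketch: assuming osm3s_query's --db-dir option makes it read the database files directly instead of going through the dispatcher, a query can be piped in on stdin. Paths and the query itself are placeholders:

```shell
# Run an Overpass query from a script without the dispatcher.
# Assumption: --db-dir bypasses the dispatcher's shared-memory interface.
BIN_DIR=/opt/osm3s/bin/bin
DB_DIR=/opt/osm3s/db
QUERY='[out:json];node["amenity"="post_box"](49.29,6.98,49.31,7.00);out;'

if [ -x "$BIN_DIR/osm3s_query" ]; then
  echo "$QUERY" | "$BIN_DIR/osm3s_query" --db-dir="$DB_DIR"
else
  echo "osm3s_query not found in $BIN_DIR"
fi
```

From Java, under the same assumption, the simplest route would be to launch osm3s_query as a subprocess (e.g. via ProcessBuilder) and read its stdout. One caveat: if diffs are being applied concurrently, direct --db-dir reads can see the database mid-update; coordinating readers and the updater is, as far as I understand, what the dispatcher is for.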

PS: Is it possible we come from the same village? I faintly remember having a tagging discussion with you some years ago about a cycling facility in Riegelsberg?

Best regards,
Carsten


