
Re: [overpass] batch query result to lot of 504 error

Chronological Thread 
  • From: Roland Olbricht <>
  • To: , Yann Guillerm <>
  • Subject: Re: [overpass] batch query result to lot of 504 error
  • Date: Mon, 9 Apr 2018 17:04:26 +0200

Hi Yann,

I have already changed the Apache timeout to 3600, but that does not help.

I think that the number of concurrent calls is the problem.
In my script, if I limit the concurrent calls to 20, my test program works.
If I try the same script with a limit of 40, I get 504 errors.

- the processor is at 25% (one full core)
- memory stays flat at 24-30%
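One common way to enforce such a concurrency cap in a batch script is GNU xargs' -P option. A minimal sketch, where `echo` stands in for the actual `curl` call against the instance's /api/interpreter endpoint, so the mechanism can be tried without a running server (the limit of 20 matches the value that worked above; nothing else here is from the original mail):

```shell
#!/bin/sh
# Sketch: keep at most 20 worker processes in flight with xargs -P.
# In real use, replace `echo request` with something like
#   curl -s --data-binary @{} http://localhost/api/interpreter
# reading one query file name per input line.
seq 1 100 | xargs -P 20 -n 1 echo request | wc -l   # prints 100
```

All 100 "requests" complete, but never more than 20 run at once, which is exactly the throttling the script above does by hand.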

Thank you for asking. This helps me to understand how people use non-public instances. The Overpass dispatcher uses two internal limits, "space" and "time", to determine whether it is under load. Please run

dispatcher --osm-base --status

An example of the status output:

Number of not yet opened connections: 0
Number of connected clients: 19
Rate limit: 2
Total available space: 12884901888
Total claimed space: 4831838208
Average claimed space: 7416928460
Total available time units: 262144
Total claimed time units: 2415
Average claimed time units: 3387
Counter of started requests: 69180046
Counter of finished requests: 69171436
[...] [list of running and pending processes]
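To see at a glance how close the claimed space is to the limit, the relevant lines can be pulled out with awk. A sketch using the example numbers above (in real use you would pipe `dispatcher --osm-base --status` into awk instead of the hard-coded string):

```shell
#!/bin/sh
# Sketch: compute claimed/available space from two status lines.
status='Total available space: 12884901888
Total claimed space: 4831838208'
avail=$(printf '%s\n' "$status" | awk -F': ' '/available space/ {print $2}')
claimed=$(printf '%s\n' "$status" | awk -F': ' '/claimed space/ {print $2}')
echo "space used: $(( claimed * 100 / avail ))%"   # prints "space used: 37%"
```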

If the used space is close to the maximum space then raise the space:
dispatcher --osm-base --space=20000000000

Likewise, if the claimed time units are close to the available time units, raise the time limit, e.g. by doubling the default:
dispatcher --osm-base --time=524288

Neither of the two will help if the disk is the limiting factor. The best tool I know to check disk load is "iotop". It should suffice to cross-check whether disk performance matters.

I expect that pinning queries to particular CPU cores will not help. The queries are distinct processes, and virtually any operating system is good at distributing processes across CPUs. In contrast, I have run into two other systematic bottlenecks:
- super-slow swap: the OS will not warn when RAM runs short and instead starts to swap if too many processes perform actual work in parallel. Swap is horribly slow in such a setting.
- disk deadlock: similarly, the OS lets the disk jump between many concurrent processes so often that most time is spent seeking. I have not systematically investigated whether the problem persists with an SSD.

Given that, in addition, most requests on the public instance have a very short runtime (90% of all successful queries take less than a second), I have designed the system to delay further requests if many more than the number of CPU cores are already running, represented by the above-mentioned limits. You can identify this phenomenon quite clearly if the log files show many requests with HTTP 504 and 15 seconds of runtime. The number of parallel requests is essentially
(space - 1 GiB) / 512 MiB, i.e. for 12 GiB, the default value, it is 22 concurrent requests.
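The rule of thumb can be checked with plain shell arithmetic (a sketch of the formula above, nothing Overpass-specific):

```shell
#!/bin/sh
# Sketch of the slot formula above: slots = (space - 1 GiB) / 512 MiB.
GIB=$(( 1024 * 1024 * 1024 ))
space=$(( 12 * GIB ))                   # the 12 GiB default
slots=$(( (space - GIB) / (GIB / 2) )) # 512 MiB = GIB / 2
echo "$slots"                           # prints 22
```

Raising --space to 20000000000 as suggested above would accordingly lift the cap to roughly (20e9 - 1 GiB) / 512 MiB, i.e. about 35 slots.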

Best regards,

Roland
