
Re: [overpass] std::bad_alloc runtime error


  • From: Pierre Béland <>
  • To: <>
  • Subject: Re: [overpass] std::bad_alloc runtime error
  • Date: Sun, 17 Apr 2016 11:50:53 +0000 (UTC)

Hi Roland,

> Please report if requests still fail. I'm open to raising the limits even
> further if there is good reason to do so.

Thanks, API requests are successful again.

In my case, I am doing data gardening to ensure that highway names and the related addr:street tags have exactly the same content. Geofabrik's OSM Inspector shows that there are numerous inconsistencies and incomplete entries. I also make sure that street names follow toponymic rules (i.e. a capital letter for the first letter of generic terms such as boulevard, street, etc., the street number placed before the generic name, and so on).
 
I cover a large region (the relation for the province of Quebec) and filter with regex rules, which saves a lot of time when making corrections; a minimal sketch of this kind of query follows.
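
To give an idea of the approach, here is a minimal sketch of such a query. The area selection and the regular expression are only illustrative assumptions, not my exact query; since the generic term should begin the name with a capital letter, matching a lowercase generic flags candidates for correction.

/* Illustrative sketch only: find ways in the province of Quebec whose
   street name starts with a lowercase generic term. The area filter and
   the list of generic terms are assumptions. */
[out:json][timeout:300];
area["boundary"="administrative"]["admin_level"="4"]["name"="Québec"]->.qc;
way(area.qc)["highway"]["name"~"^(rue|avenue|boulevard|chemin) "];
out tags center;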

And thanks again to you and the developers for this fantastic tool. I am always pleased to present the advantages of the Overpass query tool, whether for validating information or for extracting specific layers of data.

Best,
 
Pierre



From: Roland Olbricht <>
To:
Sent: Sunday, 17 April 2016, 04:07
Subject: Re: [overpass] std::bad_alloc runtime error

Hi Pierre,

> runtime error: Query failed with the exception: std::bad_alloc
> Is this a bug? Or are there any options I should add to request memory or
> other resources?

Thank you for reporting the issue. It is essentially a bug introduced by a
recent change. I've fixed it now. Please try again.

This is a consequence of a new sanity check. For the curious:
https://github.com/drolbr/Overpass-API/commit/260684bdc7270dd38fecdca649c1eefd3b3ef133

The rationale behind that sanity check is that about once every 100 million
requests we had a query that used so much memory that it damaged server
operations, e.g. by trying to use 30 GB of RAM. In such a situation the
operating system kills a process to free memory, and it doesn't always
choose the right one. And even before that point, such a request has
already steamrolled the entire disk cache.

While each of these events is a software bug in its own right, I would
prefer to catch those runaways before they disturb server operation. So we
need at least some loose hard limits to detect when a query has definitely
run away.

Hence, all queries are now limited by the operating system in the processor
time and memory they are allowed to use. It turns out that the limits I
expected to be loose aren't loose enough. For that reason I've now adjusted
the memory limit from
  2 x requested memory + 128 MB
to
  2 x requested memory + 1 GB

The time limit has been raised for a similar reason, from
  2 x requested time + 15 seconds
to
  2 x requested time + 60 seconds
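
For reference, the "requested" time and memory are what a query declares in its header via the timeout and maxsize settings (seconds and bytes, respectively). A small illustration with arbitrary numbers, just to make the arithmetic concrete:

/* Illustration only, with made-up numbers: this query requests 180 s and
   512 MB (536870912 bytes). Under the new limits it would be cut off only
   beyond 2 x 180 s + 60 s = 420 s of processor time or
   2 x 512 MB + 1 GB = 2 GB of memory. */
[timeout:180][maxsize:536870912];
way["highway"](45.40,-73.70,45.60,-73.40);
out tags;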

Please report if requests still fail. I'm open to raising the limits even
further if there is good reason to do so.

Best regards,

Roland






