Data API rate limits

Keeping an eye on your Data API rate limits for the best performance.


Rate limiting is a method of counting the requests your application makes to our server. We currently use two methods to decide when to block incoming requests. The first, count, is based on the number of requests received; there is currently no distinction between read and write requests. The second, latency, uses the duration of each request. Both methods apply at the same time, so even if you send only a few requests, you will still receive rate limiter errors if those requests take very long to complete. Each application and each sandbox has its own separate limits.
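The two methods can be pictured as two independent budgets that are both checked for every request. A minimal sketch, assuming made-up thresholds (the class and its fields are illustrative, not the platform's actual implementation):

```python
# Illustrative sketch of a limiter that applies the count and latency
# methods at the same time. Thresholds here are example values only.

class DualLimiter:
    def __init__(self, max_count, max_latency_seconds):
        self.max_count = max_count              # count method: number of requests
        self.max_latency = max_latency_seconds  # latency method: summed duration
        self.count = 0
        self.latency = 0.0

    def allow(self, duration_seconds):
        """Record one request; return False once either budget is exhausted."""
        if self.count >= self.max_count or self.latency >= self.max_latency:
            return False
        self.count += 1
        self.latency += duration_seconds
        return True

limiter = DualLimiter(max_count=100, max_latency_seconds=30)
# A few slow requests exhaust the latency budget long before the count budget:
for _ in range(3):
    limiter.allow(duration_seconds=12.0)
print(limiter.allow(0.1))  # False: 36 s of latency already recorded
```

This mirrors the behaviour described above: few but slow requests still trigger rate limiter errors, because the latency budget runs out even though the count budget has plenty left.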


To count and check the number of requests you make, and to make sure applications don't make too many, requests are collected in something we call a bucket.

There are multiple buckets for each method, and each bucket enforces a separate limit over a specific time span.

In the new setup, three buckets are defined: one that refreshes every minute, one every hour, and one every day. For example, the 1-hour bucket for the count method allows 2600 requests, meaning you can make 2600 requests every hour.

The bucket that refreshes most frequently is always used first. For example, in addition to the hourly bucket mentioned above, there is a 1-minute bucket allowing 100 requests. Incoming requests are taken from the 1-minute bucket first; once the minute bucket is empty, the hour bucket is used. When a minute passes, the minute bucket refreshes and requests are taken from it first again.
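The cascade above can be sketched as follows. Bucket sizes follow the 1-minute (100 requests) and 1-hour (2600 requests) examples in the text; the classes themselves are illustrative:

```python
# Illustrative sketch of the bucket cascade: requests are always taken
# from the most frequently refreshing non-empty bucket.

class Bucket:
    def __init__(self, capacity):
        self.capacity = capacity
        self.remaining = capacity

    def refresh(self):
        self.remaining = self.capacity

def take(buckets):
    """Consume one request from the first non-empty bucket.

    `buckets` is ordered from the most frequently refreshing (minute)
    to the least frequently refreshing (day).
    """
    for bucket in buckets:
        if bucket.remaining > 0:
            bucket.remaining -= 1
            return True
    return False  # every bucket empty: the request is rate limited

minute, hour = Bucket(100), Bucket(2600)
for _ in range(150):                     # 150 requests within one minute...
    take([minute, hour])
print(minute.remaining, hour.remaining)  # 0 2550: overflow came from the hour bucket
minute.refresh()                         # a minute passes
print(minute.remaining)                  # 100 again
```

Note how the 50 requests beyond the minute bucket's capacity were deducted from the hour bucket, and how the minute bucket is full again after its refresh.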

Why this method?

Using this method favours small requests and penalises large ones. It lets us fine-tune the rate limiter for the overall use of your app, while keeping the platform more resistant to attacks and accidental overloading. With this change we are also adding the remaining requests per limit to the logs, so builders can see when a limit is nearly reached. This can be found in the IDE under the Logs tab.

Default limits

Bucket refresh rate | Count | Latency
1 minute | 100 requests | 30 seconds
1 hour | 2600 requests | 390 seconds
1 day | 1150 requests | 172.5 seconds

In sandbox applications, these limits are divided by 2. Sandboxes are more likely to contain inefficient pages and are used to test new code setups, which can slow down the server. We also want to discourage using sandboxes in production.

Future plans

After rolling out this new rate limiter, we will add a method that automatically increases or decreases the number of requests you can send to your app. So if your app's usage keeps growing, the number of requests you are allowed to make will go up automatically, until a predefined limit is reached.


What happens if app XYZ reaches the limit? What impact does it have on that app or on the whole cluster?

If app XYZ hits the limit, all requests of that app are blocked until one of the buckets refreshes. One of the main reasons we have the limit is to prevent impact on the application cluster your app runs on.

The limits seem lower than before. Doesn’t that mean we will encounter more rate limiting errors?

This change was introduced to reduce the amount of rate limiting errors. Usually these errors occur during peak times, for instance in the morning when your users start their working day. The new system is better suited to handle such peaks.

When exactly do buckets get refreshed?

All buckets refresh when their time span passes. So an hourly bucket refreshes at, for instance, 13:00 or 14:00, and a daily bucket refreshes at midnight in the server's timezone.
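Since refreshes happen on wall-clock boundaries rather than on a rolling window, the next refresh moment can be computed like this (a sketch in server-local time; the function name is illustrative):

```python
from datetime import datetime, timedelta

def next_refresh(now, bucket):
    """Next wall-clock refresh for a bucket, e.g. 13:47 -> 14:00 for 'hour'."""
    if bucket == "minute":
        start = now.replace(second=0, microsecond=0)
        return start + timedelta(minutes=1)
    if bucket == "hour":
        start = now.replace(minute=0, second=0, microsecond=0)
        return start + timedelta(hours=1)
    if bucket == "day":  # midnight in the server's timezone
        start = now.replace(hour=0, minute=0, second=0, microsecond=0)
        return start + timedelta(days=1)
    raise ValueError(bucket)

now = datetime(2024, 1, 15, 13, 59, 30)
print(next_refresh(now, "hour"))  # 2024-01-15 14:00:00
print(next_refresh(now, "day"))   # 2024-01-16 00:00:00
```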

Does this affect my on-premise or private cloud application?

No, if your application runs on-premise or in a private cloud, the limits will be increased accordingly.

Can the limits be configured dynamically or without any downtime?

Yes, updating the limits for an app will happen without downtime.

Suppose I send 2600 requests at 13:59, exhausting the hour-rate limit. Will I be able to send another 2600 at 14:01?

Yes, but only 2600 within that new hour, plus the number of requests the minute bucket allows (the minute bucket is drawn from first).

How many requests can I do per day at a maximum?

The maximum number of requests you can make per day is (100 * 60 * 24) + (2600 * 24) + 1150 = 207,550, a bit more than 200k requests per day.
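The arithmetic behind that figure: the minute bucket refreshes 60 * 24 times a day, the hour bucket 24 times, and the day bucket once.

```python
minute_bucket = 100 * 60 * 24  # 144,000 requests from the minute bucket per day
hour_bucket = 2600 * 24        #  62,400 requests from the hour bucket per day
day_bucket = 1150              #   1,150 requests from the day bucket
print(minute_bucket + hour_bucket + day_bucket)  # 207550
```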

Are both count and latency calculated as totals?

Yes, they are. So if 25 requests are sent whose durations sum to 15 seconds, the limit is reached. Conversely, you could perform 99 requests in a minute with a total latency of 14 seconds.

Is this rate limiting also applied to my actions?

This rate limit only applies to the part of the API that is called by pages and custom frontends. Actions have their own, separate rate limiter.

Note: this rate limiting method only applies to the public Data API. This API is used by the Page Builder, where only data can be read; mutations (update/delete/create) are not supported.
