Rate Limiting
The openPIM API implements a score-based rate-limiting mechanism that goes beyond simple request counts. Instead of counting requests, it assigns each GraphQL request a cost derived from its complexity and resource usage. This scoring metric ensures fair use of API resources and protects the server from abuse and overload.
How the Scoring Metric Works
When a client makes a GraphQL request, the API calculates a score for the request based on various factors such as:
- Complexity of the query: More complex queries with nested fields or deep traversals may have a higher score.
- Resource usage: Requests that involve heavy data processing or retrieval of large datasets may incur a higher score.
- Depth of query: Queries that request multiple levels of nested data may be assigned a higher score.
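The exact formula is internal to openPIM, but the factors above can be illustrated with a small sketch that charges more for field count and nesting depth. The weights, the query representation, and the function name below are illustrative assumptions, not openPIM's actual algorithm:

```python
# Hypothetical query scorer: each selected field has a base cost that
# grows with nesting depth, so deep traversals cost more. All constants
# here are made up for illustration.

BASE_FIELD_COST = 1
DEPTH_MULTIPLIER = 2

def score_selection(selection: dict, depth: int = 1) -> int:
    """Score a query represented as nested dicts of selected fields."""
    total = 0
    for field, children in selection.items():
        # A field at depth d costs BASE_FIELD_COST * DEPTH_MULTIPLIER^(d-1).
        total += BASE_FIELD_COST * DEPTH_MULTIPLIER ** (depth - 1)
        if children:
            total += score_selection(children, depth + 1)
    return total

# { items { name, children { name } } } as a nested dict:
query = {"items": {"name": {}, "children": {"name": {}}}}
print(score_selection(query))  # → 9
```

Under this toy scheme, a flat query of top-level fields stays cheap, while each extra level of nesting doubles the per-field cost.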
Deduction from Rate Limit Quota
The score assigned to each GraphQL request is deducted from the client's rate limit quota, so more expensive requests consume more of the quota. Clients should monitor their rate limit usage and adjust their request rates accordingly to avoid exhausting their quotas.
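A minimal sketch of this deduction model (illustrative only; the class and its parameters are assumptions, not openPIM's implementation):

```python
import time

class QuotaWindow:
    """Score-based quota deduction sketch: each request's score is
    subtracted from the remaining quota, which is restored when the
    time window elapses."""

    def __init__(self, quota: int, window_seconds: float):
        self.quota = quota
        self.window = window_seconds
        self.remaining = quota
        self.reset_at = time.monotonic() + window_seconds

    def try_consume(self, score: int) -> bool:
        now = time.monotonic()
        if now >= self.reset_at:
            # Window elapsed: restore the full quota.
            self.remaining = self.quota
            self.reset_at = now + self.window
        if score > self.remaining:
            return False  # the server would respond with HTTP 429 here
        self.remaining -= score
        return True

bucket = QuotaWindow(quota=100, window_seconds=1.0)
print(bucket.try_consume(60))  # True
print(bucket.try_consume(60))  # False: only 40 quota left in this window
```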
Rate Limiting Policies
The rate limiting policies for the openPIM API are as follows:
- Rate Limit: This is a rate limit for all requests, resetting every second. It caps the cost of individual requests to the API and is typically encountered during development when designing queries.
- Burst Rate Limit: This is a rate limit for all requests, resetting every 5 seconds. It accommodates bursts that consume more quota than the standard Rate Limit allows.
- Total Rate Limit: This is the total rate limit quota for a user resetting every hour.
- Email Verification Rate Limit: This is a rate limit for email verification requests.
- Login Rate Limit: This is a rate limit for login requests.
When a client exceeds the allowed quota within a window, further requests from the client are temporarily rejected with an HTTP status code of 429.
Rate Limit Headers
The API includes rate limit information in response headers so clients can adjust their request rates before being throttled.
- X-Rate-Limit-Remaining: This header indicates the rate limit quota remaining before the rate limit is reached within the current rate limit window. For example, if the rate limit is 1000 quotas per hour and a client has used 950 quotas so far, the value of this header would be 50.
- X-Rate-Limit-Reset: This header specifies the epoch time in milliseconds when the rate limit window will reset. After this time, the rate limit counters will be reset, and the client can make new requests without being limited by the previous rate limit window.
- X-Rate-Limit-Consumed: This header indicates the total quota consumed by the client within the current rate limit window, i.e. the sum of the scores of all requests made since the window started.
- X-Rate-Limit-First-In-Duration: This boolean header (true or false) indicates whether the current request is the first one the client has made within the current rate limit window. Clients can use it to detect the start of a new window and adjust their request rates dynamically.
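On the client side, these headers can be read into a small structure. The header names come from the list above; the parsing code itself is a sketch (it assumes exact-case header keys for brevity):

```python
from dataclasses import dataclass

@dataclass
class RateLimitInfo:
    remaining: int        # quota left in the current window
    reset_epoch_ms: int   # epoch time (ms) when the window resets
    consumed: int         # quota already spent in the current window
    first_in_duration: bool

def parse_rate_limit_headers(headers: dict) -> RateLimitInfo:
    """Parse the rate limit headers documented above from a response's
    header mapping."""
    return RateLimitInfo(
        remaining=int(headers["X-Rate-Limit-Remaining"]),
        reset_epoch_ms=int(headers["X-Rate-Limit-Reset"]),
        consumed=int(headers["X-Rate-Limit-Consumed"]),
        first_in_duration=headers["X-Rate-Limit-First-In-Duration"] == "true",
    )

info = parse_rate_limit_headers({
    "X-Rate-Limit-Remaining": "20000",
    "X-Rate-Limit-Reset": "1625241600000",
    "X-Rate-Limit-Consumed": "10000",
    "X-Rate-Limit-First-In-Duration": "false",
})
print(info.remaining)  # 20000
```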
Example Rate Limit Response Headers:
X-Rate-Limit-Remaining: 20000
X-Rate-Limit-Reset: 1625241600000
X-Rate-Limit-Consumed: 10000
X-Rate-Limit-First-In-Duration: false
In this example, the client is allowed a quota of 30,000 within the Total Rate Limit window. Their requests so far have a total score of 10,000, leaving 20,000 quota remaining, and the rate limit window will reset at the specified time.
Handling Rate Limit Exceeded Errors
When a client exceeds the rate limit, the API returns an HTTP status code of 429 along with relevant rate limit headers in the response. Clients should implement logic to handle rate limit exceeded errors gracefully, such as backing off and retrying requests after the rate limit window resets.
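A minimal retry sketch along these lines (the `send` callable and `FakeResponse` class are hypothetical stand-ins for your HTTP client, not part of the openPIM API):

```python
import time

def request_with_backoff(send, max_retries=5, base_delay=1.0):
    """Retry a request on HTTP 429 with exponential backoff.

    `send` is any callable returning an object with `status_code`
    and a `headers` dict."""
    for attempt in range(max_retries):
        response = send()
        if response.status_code != 429:
            return response
        # Prefer the server's reset time when present, otherwise
        # back off exponentially: base_delay, 2x, 4x, ...
        reset_ms = response.headers.get("X-Rate-Limit-Reset")
        if reset_ms is not None:
            delay = max(0.0, int(reset_ms) / 1000 - time.time())
        else:
            delay = base_delay * 2 ** attempt
        time.sleep(delay)
    raise RuntimeError("rate limit still exceeded after retries")

class FakeResponse:
    def __init__(self, status_code, headers=None):
        self.status_code = status_code
        self.headers = headers or {}

# Simulate a request that is throttled once, then succeeds:
responses = iter([FakeResponse(429), FakeResponse(200)])
resp = request_with_backoff(lambda: next(responses), base_delay=0.01)
print(resp.status_code)  # 200
```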
Best Practices
To ensure smooth integration with the openPIM API, consider the following best practices:
- Monitor rate limit headers in API responses to avoid exceeding rate limits.
- Implement exponential backoff and retry mechanisms to handle rate limit exceeded errors gracefully.
- Cache responses where appropriate to reduce the number of requests made to the API.
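For the caching suggestion, a tiny time-to-live cache can avoid spending quota on repeated identical queries (an illustrative sketch, not an openPIM client feature):

```python
import time

class TTLCache:
    """Reuse a recent response for an identical query instead of
    spending rate limit quota on a fresh request."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # query string -> (expires_at, response)

    def get(self, query: str):
        """Return the cached response, or None if absent or expired."""
        entry = self._store.get(query)
        if entry and entry[0] > time.monotonic():
            return entry[1]
        return None

    def put(self, query: str, response) -> None:
        self._store[query] = (time.monotonic() + self.ttl, response)

cache = TTLCache(ttl_seconds=60)
cache.put("{ items { name } }", {"data": {"items": []}})
print(cache.get("{ items { name } }"))  # {'data': {'items': []}}
```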
Note: openPIM's rate-limiting algorithm is still being fine-tuned. Currently we try to accommodate all loads, though in the future openPIM will likely offer a subscription plan for users with heavier rate limit needs. In the meantime, if you encounter any issues or have questions, feel free to contact support for assistance.