The RUCKUS One APIs enforce rate limiting to ensure fair usage and maintain system stability. Rate limiting controls the number of API calls that can be made within a specific time period. If the rate limit is exceeded, requests will be throttled and the API will return a 429 Too Many Requests status code.
The RUCKUS One APIs implement a token bucket algorithm for rate limiting. Rate limiting is applied per tenant, and all API requests associated with the same tenant share the same rate limit quota.
Each tenant is allowed 2000 API calls per minute. The rate limiting configuration is determined by three parameters, which are configurable per tenant:
- Capacity: The maximum number of tokens (API calls) available in the bucket at any given time.
- Refill Rate: The number of tokens added to the bucket during each refill interval.
- Restoration Rate: After the rate limit is exceeded, API calls are restored at a fixed rate to prevent request flooding, gradually restoring access rather than immediately resetting the limit.
Tokens are consumed when API requests are made. When the bucket is empty (no tokens available), requests will be throttled with a 429 Too Many Requests status code. Tokens are automatically refilled at the configured interval, allowing requests to proceed once tokens become available again.
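The following Python sketch models the token bucket behavior described above from the client's point of view. The capacity, refill amount, and refill interval shown here are illustrative placeholders, not the actual per-tenant settings, which are enforced by the RUCKUS One service.

```python
import time

class TokenBucket:
    """Illustrative client-side model of the token bucket described above.
    The numeric values passed in are placeholders; the real per-tenant
    limits are enforced server-side by RUCKUS One."""

    def __init__(self, capacity, refill_amount, refill_interval_s):
        self.capacity = capacity              # max tokens available at any time
        self.tokens = capacity                # bucket starts full
        self.refill_amount = refill_amount    # tokens added per refill interval
        self.refill_interval_s = refill_interval_s
        self.last_refill = time.monotonic()

    def _refill(self):
        # Add refill_amount tokens for each interval that has elapsed,
        # without ever exceeding the bucket's capacity.
        now = time.monotonic()
        intervals = int((now - self.last_refill) / self.refill_interval_s)
        if intervals > 0:
            self.tokens = min(self.capacity,
                              self.tokens + intervals * self.refill_amount)
            self.last_refill += intervals * self.refill_interval_s

    def try_consume(self, tokens=1):
        # Returns True if the request may proceed; False models the case
        # where a real request would be answered with 429 Too Many Requests.
        self._refill()
        if self.tokens >= tokens:
            self.tokens -= tokens
            return True
        return False
```

Each call to try_consume represents one API request: when it returns False the bucket is empty, and a real request at that moment would be throttled with 429 Too Many Requests until the next refill.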
When you receive a 429 Too Many Requests status code, your application should:
- Implement exponential backoff: Wait progressively longer between retry attempts (e.g., 1 second, 2 seconds, 4 seconds); see the sketch after this list
- Cache responses: Store frequently accessed data locally to reduce the number of API calls
- Monitor request patterns: Review your application's API usage to identify opportunities for optimization
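As a rough sketch of the backoff guidance above, the Python function below retries a GET request with exponentially increasing delays whenever a 429 is returned. The endpoint URL and bearer token in the usage comment are placeholders, and honoring a Retry-After header is a common HTTP convention assumed here rather than documented RUCKUS One behavior.

```python
import time
import requests

def get_with_backoff(url, headers, max_retries=5, base_delay_s=1.0):
    """Retry GET requests that are throttled with 429, waiting
    1s, 2s, 4s, ... between attempts (exponential backoff)."""
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers, timeout=30)
        if response.status_code != 429:
            return response  # success or a non-rate-limit error; caller decides

        # Default to exponential backoff; prefer a numeric Retry-After hint
        # if the server sends one (assumed, not guaranteed by the API).
        delay = base_delay_s * (2 ** attempt)
        retry_after = response.headers.get("Retry-After")
        if retry_after and retry_after.isdigit():
            delay = float(retry_after)
        time.sleep(delay)

    return response  # still throttled after max_retries attempts

# Usage with placeholder values (hypothetical endpoint and token variable):
# response = get_with_backoff(
#     "https://api.example.com/venues",
#     headers={"Authorization": f"Bearer {access_token}"},
# )
```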
To avoid rate limiting and optimize API usage:
- Implement retry logic: Use exponential backoff when retrying failed requests
- Cache frequently accessed data: Store responses locally to minimize redundant API calls (a minimal caching sketch follows this list)
- Use webhooks and event subscriptions: When available, subscribe to events instead of polling APIs
- Batch operations: When possible, combine multiple operations into a single API call
- Monitor your usage: Track your API call patterns to ensure you stay within rate limits
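The caching recommendation can be as simple as keeping recent responses in memory for a short time-to-live so that repeated reads of the same resource do not consume additional API calls. The sketch below is minimal; the TTL value and the fetch_venues helper in the usage comment are hypothetical placeholders.

```python
import time

class TTLCache:
    """Minimal in-memory cache: reuse a stored response until it expires,
    so repeated reads of the same resource don't spend rate-limit quota."""

    def __init__(self, ttl_s=60.0):           # placeholder TTL
        self.ttl_s = ttl_s
        self._entries = {}                     # key -> (expiry time, value)

    def get_or_fetch(self, key, fetch):
        entry = self._entries.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]                    # cache hit: no API call made
        value = fetch()                        # cache miss: one API call
        self._entries[key] = (time.monotonic() + self.ttl_s, value)
        return value

# Usage with placeholder names; fetch_venues() would wrap the actual API call.
# cache = TTLCache(ttl_s=60)
# venues = cache.get_or_fetch("venues", fetch_venues)
```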