Understanding API Rate Limits on VerseDB
What Are Rate Limits?
Rate limits control how many API requests you can make within a specific time window. They help ensure fair use, prevent abuse, and maintain optimal performance for all VerseDB API users.
Your account has a set number of API calls it can make per hour, which varies based on your user tier and the type of endpoint.
Rate Limits by User Tier
| User Tier | Hourly Limit | Best For |
|---|---|---|
| Free | 300 requests/hour | Personal projects, light automation |
| Pro | 1,000 requests/hour | Heavy use, production apps, integrations |
Additional benefits for Pro users:
- Higher limits across all endpoint types
- Priority API access during high traffic
- Early access to new API features
- Premium support for API issues
Rate Limits by Endpoint Type
Different types of operations have different rate limits to balance system resources:
1. Read-Only Endpoints (GET requests)
Free users: 300/hour
Pro users: 1,000/hour
Examples:
- GET /api/publishers - Browse publishers
- GET /api/series/{id} - Get series details
- GET /api/issues - List issues
- GET /api/characters/{id} - Get character info
- GET /api/user/collection - View your collection
- GET /api/user/wishlist - View your wishlist
Why higher limits? Read operations are less resource-intensive and don't modify data.
2. Write Operations (POST, PUT, DELETE, PATCH)
Free users: 150/hour
Pro users: 500/hour
Examples:
- POST /api/user/collection - Add to collection
- PUT /api/user/collection/{id} - Update collection item
- DELETE /api/user/wishlist/{id} - Remove from wishlist
- POST /api/user/pulllist - Add series to pull list
- PATCH /api/user/read/{id} - Mark issue as read
Why lower limits? Write operations modify data and require more validation and processing.
3. Search Endpoints
Free users: 60/hour
Pro users: 120/hour
Examples:
- GET /api/search?q=spider-man - Global search
- GET /api/series/search?title=batman - Series search
- GET /api/characters/search?name=wolverine - Character search
- GET /api/creators/search?name=stan+lee - Creator search
Why even lower? Search queries are computationally expensive, especially with complex filters.
4. Bulk Operations
Free users: 30/hour
Pro users: 60/hour
Examples:
- POST /api/user/collection/bulk - Add multiple issues at once
- DELETE /api/user/wishlist/bulk - Remove multiple items
- POST /api/user/read/bulk - Mark multiple issues as read
Why the lowest? Bulk operations process multiple items per request and consume significant resources.
Rate Limit Headers
Every API response includes headers showing your current rate limit status:
X-RateLimit-Limit: 300
X-RateLimit-Remaining: 247
X-RateLimit-Reset: 1712869200
Header Descriptions:
- X-RateLimit-Limit: Your maximum requests per hour for this endpoint type
- X-RateLimit-Remaining: How many requests you have left in the current window
- X-RateLimit-Reset: Unix timestamp when your limit resets (UTC)
Example: Reading Headers in Code
JavaScript:
fetch('https://versedb.com/api/v1/series/123')
  .then(response => {
    console.log('Limit:', response.headers.get('X-RateLimit-Limit'));
    console.log('Remaining:', response.headers.get('X-RateLimit-Remaining'));
    // Header value is a string; parse it before converting to milliseconds
    const reset = parseInt(response.headers.get('X-RateLimit-Reset'), 10);
    console.log('Resets at:', new Date(reset * 1000));
    return response.json();
  });
Python:
import requests

response = requests.get('https://versedb.com/api/v1/series/123')
print(f"Limit: {response.headers['X-RateLimit-Limit']}")
print(f"Remaining: {response.headers['X-RateLimit-Remaining']}")
print(f"Resets: {response.headers['X-RateLimit-Reset']}")
What Happens When You Hit the Limit?
If you exceed your rate limit, the API will return a 429 Too Many Requests error:
HTTP Status Code: 429
Response Body:
{
"error": "Too Many Requests",
"message": "API rate limit exceeded. Please try again later.",
"retry_after": 3600,
"limit": 300,
"reset_at": "2025-04-12T15:00:00Z"
}
Response Headers:
HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 300
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1712869200
Retry-After: 3600
- retry_after: Seconds until you can make requests again
- reset_at: ISO 8601 timestamp when the limit resets
Best Practices for Managing Rate Limits
1. Implement Exponential Backoff
When you receive a 429 error, wait before retrying:
async function fetchWithRetry(url, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const response = await fetch(url);
    if (response.status === 429) {
      // Retry-After is in seconds; fall back to exponential backoff (1s, 2s, 4s, ...)
      const retryAfterSec = parseInt(response.headers.get('Retry-After'), 10) || Math.pow(2, i);
      await new Promise(resolve => setTimeout(resolve, retryAfterSec * 1000));
      continue;
    }
    return response;
  }
  throw new Error('Max retries exceeded');
}
2. Cache API Responses
Store frequently accessed data locally:
- Cache series details for 24 hours
- Cache character info for 1 week
- Cache public profiles for 1 hour
- Invalidate cache when data changes
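The caching guidance above can be sketched with a small in-memory TTL cache. This is a minimal illustration, not a VerseDB library; the TTL values come from the list above, and fetch_fn stands in for whatever function actually calls the API:

```python
import time

class TTLCache:
    """Minimal in-memory cache with per-entry time-to-live (seconds)."""

    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() >= expires_at:
            del self._store[key]  # expired: drop the entry and report a miss
            return None
        return value

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.time() + ttl_seconds)

    def invalidate(self, key):
        self._store.pop(key, None)

# Suggested TTLs from the list above, in seconds
TTL = {"series": 24 * 3600, "character": 7 * 24 * 3600, "profile": 3600}

cache = TTLCache()

def get_series(series_id, fetch_fn):
    """Return cached series details, calling fetch_fn only on a cache miss."""
    key = f"series:{series_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached
    data = fetch_fn(series_id)  # one API request instead of many repeats
    cache.set(key, data, TTL["series"])
    return data
```

Calling get_series twice with the same id costs only one request against your hourly limit; call cache.invalidate("series:123") after a write so stale data is not served.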
3. Use Bulk Endpoints
Instead of 100 individual requests, use one bulk request:
// ❌ Bad: 100 separate requests
for (const issueId of issueIds) {
  await addToCollection(issueId);
}

// ✅ Good: 1 bulk request
await addToCollectionBulk(issueIds);
4. Monitor Rate Limit Headers
Check remaining requests before making calls:
// delay is a small helper: const delay = ms => new Promise(resolve => setTimeout(resolve, ms));
if (parseInt(response.headers.get('X-RateLimit-Remaining'), 10) < 10) {
  console.warn('Approaching rate limit, slowing down requests');
  await delay(1000);
}
5. Implement Request Queuing
Spread requests over time instead of bursts:
// RequestQueue is an illustrative helper, not part of the VerseDB API
const queue = new RequestQueue({ requestsPerSecond: 5 });
queue.add(() => fetch('/api/series/1'));
queue.add(() => fetch('/api/series/2'));
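A queue like the one above can be approximated with a simple interval-based rate limiter. This is a sketch, not part of any VerseDB SDK; the queued callables stand in for real request functions:

```python
import time
from collections import deque

class RateLimitedQueue:
    """Run queued callables at no more than `requests_per_second`."""

    def __init__(self, requests_per_second):
        self.min_interval = 1.0 / requests_per_second
        self._queue = deque()
        self._last_sent = 0.0

    def add(self, fn):
        self._queue.append(fn)

    def drain(self):
        """Execute all queued calls, sleeping as needed to honor the rate."""
        results = []
        while self._queue:
            wait = self.min_interval - (time.monotonic() - self._last_sent)
            if wait > 0:
                time.sleep(wait)  # spread requests out instead of bursting
            self._last_sent = time.monotonic()
            results.append(self._queue.popleft()())
        return results
```

At 5 requests per second, draining 100 queued calls takes about 20 seconds but never trips a burst-sensitive limit.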
6. Use Webhooks (Pro Feature)
Instead of polling for updates, subscribe to webhooks for push notifications.
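A webhook receiver can be as small as a single HTTP handler. The sketch below uses only the Python standard library; the payload shape ({"event": ..., "data": ...}) and port are assumptions for illustration, not a documented VerseDB format:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_event(body: bytes):
    """Decode a webhook payload; the {'event': ..., 'data': ...} shape is assumed."""
    payload = json.loads(body)
    return payload.get("event"), payload.get("data")

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event, data = parse_event(self.rfile.read(length))
        # React to the pushed event instead of polling the API for changes
        print(f"Received {event}: {data}")
        self.send_response(204)  # acknowledge quickly; no response body needed
        self.end_headers()

# To listen locally (port is hypothetical):
# HTTPServer(("", 8080), WebhookHandler).serve_forever()
```

Each pushed event replaces however many polling requests you would otherwise have spent checking for that change.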
7. Optimize Queries
- Use filters to reduce response size
- Request only fields you need
- Paginate results appropriately
- Combine related requests when possible
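The points above can be combined into a single well-formed query URL. Note that the parameter names used here (publisher, fields, page, per_page) are illustrative examples, not confirmed VerseDB query parameters:

```python
from urllib.parse import urlencode

BASE = "https://versedb.com/api/v1"

def build_issues_url(publisher=None, fields=None, page=1, per_page=50):
    """Build a filtered, paginated issues query.

    The parameter names (publisher, fields, page, per_page) are
    hypothetical; check the endpoint reference for the real ones.
    """
    params = {"page": page, "per_page": per_page}  # paginate appropriately
    if publisher:
        params["publisher"] = publisher           # filter to shrink responses
    if fields:
        params["fields"] = ",".join(fields)       # request only needed fields
    return f"{BASE}/issues?{urlencode(params)}"
```

One filtered, field-limited, paginated request often replaces several broad ones, which stretches your hourly budget further.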
When to Upgrade to Pro
Consider upgrading if you:
- Regularly hit the 300/hour free tier limit
- Run automated scripts or bots
- Build production applications or integrations
- Need search-heavy operations
- Use bulk endpoints frequently
- Require higher reliability and priority access
Pro tier benefits:
- 3.3x the read requests (1,000 vs 300)
- 3.3x the write operations (500 vs 150)
- 2x the search requests (120 vs 60)
- 2x the bulk operations (60 vs 30)
- Priority support for API issues
Tips & Troubleshooting
Debugging Rate Limit Issues
- Always log rate limit headers in development
- Monitor usage patterns to identify bottlenecks
- Set up alerts when approaching limits
- Test with lower limits in staging environments
Common Mistakes to Avoid
❌ Ignoring rate limit headers
❌ Not implementing retry logic
❌ Making requests in tight loops
❌ Caching nothing (repeated identical requests)
❌ Using individual requests when bulk endpoints exist
Calculating Time Until Reset
const resetTime = parseInt(response.headers.get('X-RateLimit-Reset'));
const now = Math.floor(Date.now() / 1000);
const secondsUntilReset = resetTime - now;
console.log(`Rate limit resets in ${secondsUntilReset} seconds`);
Next: Learn about available endpoints in "Exploring API Endpoints & Structure"