
Rate Limiting Implementation in Lovable Applications
In the fast-paced world of vibe coding, Lovable has emerged as a powerful platform for quickly building functional applications with minimal technical expertise. However, this ease of development often comes with hidden security risks, particularly when it comes to protecting your APIs from abuse. Without proper rate limiting, your Lovable applications can be vulnerable to brute force attacks, denial of service (DoS) attempts, and resource exhaustion.
When developers instruct Lovable to “create an API endpoint” or “add user authentication,” the resulting code often prioritizes functionality over security. This leads to applications with no request throttling mechanisms, making them prime targets for attackers. In this blog post, we’ll examine common rate limiting vulnerabilities in Lovable applications and provide specific prompts you can use to generate secure code that prevents these vulnerabilities.
Common Rate Limiting Vulnerabilities in Lovable
1. Missing Rate Limiting on Authentication Endpoints
One of the most dangerous yet surprisingly common vulnerabilities in Lovable-generated applications is the complete absence of rate limiting on authentication endpoints:
// Vulnerable Lovable-generated code
app.post('/api/login', async (req, res) => {
  try {
    const { email, password } = req.body;

    // Find user in database
    const user = await User.findOne({ email });

    // Check if user exists and password is correct
    if (!user || !await bcrypt.compare(password, user.password)) {
      return res.status(401).json({ message: 'Invalid credentials' });
    }

    // Generate JWT token
    const token = jwt.sign({ userId: user._id }, process.env.JWT_SECRET, { expiresIn: '1h' });

    res.status(200).json({ token });
  } catch (error) {
    res.status(500).json({ message: 'Server error' });
  }
});
This approach is problematic because:
- Attackers can make unlimited login attempts to brute force passwords
- It enables credential stuffing attacks across multiple user accounts
- Server resources can be exhausted by a high volume of authentication requests
- It provides no protection against distributed attacks
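The fix is conceptually simple: count attempts per key and reject once a threshold is hit within a time window. The core idea can be sketched with no dependencies (a simplified fixed-window counter for illustration only; the class name and parameters are ours, and production code should use a library like express-rate-limit, as shown in the prompts below):

```javascript
// Minimal fixed-window counter: at most maxAttempts per windowMs, per key.
class LoginAttemptLimiter {
  constructor(maxAttempts = 5, windowMs = 60_000) {
    this.maxAttempts = maxAttempts;
    this.windowMs = windowMs;
    this.buckets = new Map(); // key -> { count, windowStart }
  }

  // Returns true if the attempt is allowed, false if the key is throttled.
  allow(key, now = Date.now()) {
    const bucket = this.buckets.get(key);
    // No bucket yet, or the old window has expired: start a fresh window.
    if (!bucket || now - bucket.windowStart >= this.windowMs) {
      this.buckets.set(key, { count: 1, windowStart: now });
      return true;
    }
    bucket.count += 1;
    return bucket.count <= this.maxAttempts;
  }
}
```

Keying on IP alone is a starting point; as we'll see, keying on the target account as well blocks attackers who rotate IPs.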
2. Global API Endpoints Without Request Throttling
Another common vulnerability is creating API endpoints without any form of request throttling:
// Vulnerable Lovable-generated code
app.get('/api/products', async (req, res) => {
  try {
    const products = await Product.find();
    res.status(200).json(products);
  } catch (error) {
    res.status(500).json({ message: 'Server error' });
  }
});

app.get('/api/products/:id', async (req, res) => {
  try {
    const product = await Product.findById(req.params.id);
    if (!product) {
      return res.status(404).json({ message: 'Product not found' });
    }
    res.status(200).json(product);
  } catch (error) {
    res.status(500).json({ message: 'Server error' });
  }
});
This approach is risky because:
- Public endpoints can be scraped aggressively by bots
- Database resources can be overwhelmed by excessive queries
- Legitimate users may experience degraded performance during attacks
- No differentiation between normal and abusive traffic patterns
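For public read endpoints like these, a token bucket is a common throttling choice: it tolerates short bursts (e.g. a page load firing several requests) while capping the sustained rate. A dependency-free sketch of the algorithm, with illustrative names and parameters:

```javascript
// Token bucket: holds up to `capacity` tokens, refilled at `ratePerSec`;
// each request consumes one token, so bursts up to `capacity` are allowed.
class TokenBucket {
  constructor(capacity, ratePerSec, now = Date.now()) {
    this.capacity = capacity;
    this.ratePerSec = ratePerSec;
    this.tokens = capacity; // start full
    this.lastRefill = now;
  }

  allow(now = Date.now()) {
    // Refill proportionally to elapsed time, capped at capacity.
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.ratePerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

A server would hold one bucket per client key (IP or user ID) and reject requests when `allow()` returns false.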
3. Inadequate IP-Based Rate Limiting
Even when Lovable generates code with rate limiting, it often implements only basic IP-based limiting without considering more sophisticated approaches:
// Vulnerable Lovable-generated code with basic rate limiting
const rateLimit = require('express-rate-limit');

const apiLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // limit each IP to 100 requests per windowMs
  message: 'Too many requests from this IP, please try again after 15 minutes'
});

// Apply rate limiting to all requests
app.use(apiLimiter);
This approach is insufficient because:
- It applies the same limits to all endpoints regardless of sensitivity
- It doesn’t account for distributed attacks from multiple IPs
- It treats all users the same, regardless of their authentication status
- It lacks adaptive response mechanisms for different traffic patterns
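Fixing these gaps mostly means varying two things per request: the limit (by endpoint sensitivity) and the key (user ID when authenticated, IP otherwise). A minimal sketch of that selection logic, with an illustrative route table and field names:

```javascript
// Per-endpoint limits: first matching prefix wins, so order from most to least specific.
const ENDPOINT_LIMITS = [
  { prefix: '/api/login', maxPerMinute: 5 },  // sensitive: strict
  { prefix: '/api/admin', maxPerMinute: 30 },
  { prefix: '/api', maxPerMinute: 100 }       // public default
];

function limitFor(path) {
  const rule = ENDPOINT_LIMITS.find((r) => path.startsWith(r.prefix));
  return rule ? rule.maxPerMinute : 60;
}

// Key on the authenticated user when possible, so users behind a shared NAT
// don't share a bucket and an attacker can't reset an account-scoped limit
// by rotating IPs.
function keyFor(req) {
  return req.user ? `user:${req.user.id}` : `ip:${req.ip}`;
}
```

The prompts below show how to express the same idea with express-rate-limit's per-route middleware and keyGenerator option.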
Prompts for Implementing Secure Rate Limiting in Lovable
Now that we’ve identified common vulnerabilities, let’s look at specific prompts you can give Lovable to generate more secure code with proper rate limiting.
Prompt 1: Implementing Tiered Rate Limiting
Create an Express.js application with tiered rate limiting that implements:
1. A strict rate limiter for authentication endpoints (5 requests per minute)
2. A moderate rate limiter for authenticated API endpoints (30 requests per minute)
3. A basic rate limiter for public endpoints (100 requests per minute)
Use express-rate-limit and Redis for storage to handle distributed deployments. Include proper error responses with retry-after headers and implement logging for rate limit violations.
This prompt will generate code similar to:
const express = require('express');
const { rateLimit } = require('express-rate-limit');
const { RedisStore } = require('rate-limit-redis');
const Redis = require('ioredis');
const jwt = require('jsonwebtoken');

const app = express();
app.use(express.json()); // parse JSON bodies so the auth keyGenerator can read req.body.email

// Redis client for rate limiting
const redisClient = new Redis({
  host: process.env.REDIS_HOST || 'localhost',
  port: process.env.REDIS_PORT || 6379,
  password: process.env.REDIS_PASSWORD,
  enableOfflineQueue: false
});

// Log rate limit violations (registered before the limiters so 429 responses are observed)
app.use((req, res, next) => {
  res.on('finish', () => {
    if (res.statusCode === 429) {
      console.log(`Rate limit exceeded: ${req.method} ${req.originalUrl} - IP: ${req.ip}`);
    }
  });
  next();
});

// Configure rate limiters with different tiers; each gets its own Redis key
// prefix so counters from different tiers never collide
const authLimiter = rateLimit({
  store: new RedisStore({
    sendCommand: (...args) => redisClient.call(...args),
    prefix: 'rl:auth:'
  }),
  windowMs: 60 * 1000, // 1 minute
  max: 5, // 5 requests per minute
  standardHeaders: true,
  legacyHeaders: false,
  message: {
    status: 429,
    message: 'Too many authentication attempts, please try again after a minute'
  },
  keyGenerator: (req) => {
    // Key on IP plus email domain so one IP cannot spray attempts across many accounts
    return `${req.ip}:${req.body.email ? req.body.email.split('@')[1] : 'auth'}`;
  }
});

const authenticatedLimiter = rateLimit({
  store: new RedisStore({
    sendCommand: (...args) => redisClient.call(...args),
    prefix: 'rl:user:'
  }),
  windowMs: 60 * 1000, // 1 minute
  max: 30, // 30 requests per minute
  standardHeaders: true,
  legacyHeaders: false,
  message: {
    status: 429,
    message: 'Rate limit exceeded, please slow down your requests'
  },
  keyGenerator: (req) => {
    // Use user ID from JWT token for authenticated users, falling back to IP
    const token = req.headers.authorization?.split(' ')[1];
    if (token) {
      try {
        const decoded = jwt.verify(token, process.env.JWT_SECRET);
        return `user:${decoded.userId}`;
      } catch (err) {
        return req.ip;
      }
    }
    return req.ip;
  }
});

const publicLimiter = rateLimit({
  store: new RedisStore({
    sendCommand: (...args) => redisClient.call(...args),
    prefix: 'rl:pub:'
  }),
  windowMs: 60 * 1000, // 1 minute
  max: 100, // 100 requests per minute
  standardHeaders: true,
  legacyHeaders: false,
  message: {
    status: 429,
    message: 'Too many requests, please try again later'
  }
});

// Apply rate limiters to different routes
app.use('/api/auth', authLimiter);
app.use('/api/login', authLimiter);
app.use('/api/register', authLimiter);
app.use('/api/reset-password', authLimiter);

// Protected routes with authenticated limiter
app.use('/api/users', authenticatedLimiter);
app.use('/api/orders', authenticatedLimiter);
app.use('/api/profile', authenticatedLimiter);

// Public routes with basic limiter (note: the specific routes above also pass through it)
app.use('/api', publicLimiter);

// Routes implementation...
Prompt 2: Implementing Adaptive Rate Limiting
Create a Node.js Express application with adaptive rate limiting that:
1. Starts with generous limits but automatically reduces them when suspicious activity is detected
2. Implements a sliding window algorithm instead of fixed windows
3. Uses different rate limiting strategies based on user behavior and endpoint sensitivity
4. Includes proper monitoring and alerting for rate limit violations
Use express-rate-limit with a custom store implementation that can track suspicious patterns.
This prompt will generate code similar to:
const express = require('express');
const Redis = require('ioredis');

const app = express();

// Redis client for rate limiting
const redisClient = new Redis({
  host: process.env.REDIS_HOST || 'localhost',
  port: process.env.REDIS_PORT || 6379,
  password: process.env.REDIS_PASSWORD
});

// Custom store with adaptive rate limiting
class AdaptiveStore {
  constructor(redisClient) {
    this.redis = redisClient;
    this.prefix = 'rl:';
    this.suspiciousPrefix = 'suspicious:';
    this.windowSize = 60; // 60 seconds sliding window
  }

  // Increment and check if rate limit is hit
  async increment(key) {
    const now = Date.now();
    const windowStart = now - (this.windowSize * 1000);

    // Add the current timestamp to the sorted set (the random suffix keeps
    // members unique when two requests land in the same millisecond)
    await this.redis.zadd(`${this.prefix}${key}`, now, `${now}-${Math.random()}`);

    // Remove timestamps outside the current window
    await this.redis.zremrangebyscore(`${this.prefix}${key}`, 0, windowStart);

    // Count requests in the current window
    const requestCount = await this.redis.zcard(`${this.prefix}${key}`);

    // Check for suspicious patterns (rapid succession of requests)
    const recentRequests = await this.redis.zcount(
      `${this.prefix}${key}`,
      now - 5000, // Last 5 seconds
      now
    );

    // If more than 10 requests in 5 seconds, mark as suspicious
    if (recentRequests > 10) {
      await this.redis.set(`${this.suspiciousPrefix}${key}`, 1, 'EX', 3600); // Mark as suspicious for 1 hour
    }

    // Check if this key is marked as suspicious
    const isSuspicious = await this.redis.exists(`${this.suspiciousPrefix}${key}`);

    // Set TTL for cleanup
    await this.redis.expire(`${this.prefix}${key}`, this.windowSize * 2);

    // Return count and whether the key is suspicious
    return {
      totalHits: requestCount,
      resetTime: now + (this.windowSize * 1000),
      isSuspicious: isSuspicious === 1
    };
  }

  // Reset key
  async resetKey(key) {
    return this.redis.del(`${this.prefix}${key}`);
  }
}

// Middleware to detect and adapt to suspicious behavior
const adaptiveRateLimiter = (options) => {
  const store = new AdaptiveStore(redisClient);

  // Normal and strict limits
  const normalLimit = options.normalLimit || 60;
  const strictLimit = options.strictLimit || 10;

  return async (req, res, next) => {
    const key = options.keyGenerator ? options.keyGenerator(req) : req.ip;

    try {
      const result = await store.increment(key);
      const limit = result.isSuspicious ? strictLimit : normalLimit;

      // Set draft-standard RateLimit headers (Reset is seconds until the window resets)
      res.setHeader('RateLimit-Limit', limit);
      res.setHeader('RateLimit-Remaining', Math.max(limit - result.totalHits, 0));
      res.setHeader('RateLimit-Reset', Math.ceil((result.resetTime - Date.now()) / 1000));

      if (result.totalHits > limit) {
        // Log rate limit violation
        console.log(`Rate limit exceeded: ${req.method} ${req.originalUrl} - Key: ${key} - Suspicious: ${result.isSuspicious}`);

        // Send alert for repeated violations
        if (result.totalHits > limit * 2) {
          // In a real app, you would send this to your monitoring system
          console.error(`ALERT: Possible attack detected from ${key} on ${req.originalUrl}`);
        }

        return res.status(429).json({
          status: 429,
          message: 'Too many requests, please try again later',
          retryAfter: Math.ceil((result.resetTime - Date.now()) / 1000)
        });
      }

      next();
    } catch (err) {
      // If Redis is down, fail open but log the error
      console.error('Rate limiting error:', err);
      next();
    }
  };
};

// Apply adaptive rate limiting to different routes
app.use('/api/auth', adaptiveRateLimiter({
  normalLimit: 20,
  strictLimit: 5,
  keyGenerator: (req) => `auth:${req.ip}`
}));

app.use('/api', adaptiveRateLimiter({
  normalLimit: 60,
  strictLimit: 20,
  keyGenerator: (req) => {
    // Use user ID for authenticated users, IP otherwise
    const userId = req.user?.id;
    return userId ? `user:${userId}` : `ip:${req.ip}`;
  }
}));

// Routes implementation...
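The sorted-set logic in AdaptiveStore is a sliding-window log: keep one timestamp per request and count only those still inside the window. The same algorithm can be unit-tested without Redis using an in-memory equivalent (a sketch to illustrate the algorithm, not a drop-in replacement for the store above):

```javascript
// In-memory sliding window log: keeps request timestamps per key and
// counts those within the window, mirroring zadd + zremrangebyscore + zcard.
class SlidingWindowLog {
  constructor(windowMs = 60_000) {
    this.windowMs = windowMs;
    this.logs = new Map(); // key -> array of timestamps
  }

  // Record a request and return how many fall in the current window.
  increment(key, now = Date.now()) {
    const fresh = (this.logs.get(key) || []).filter((t) => t > now - this.windowMs);
    fresh.push(now);
    this.logs.set(key, fresh);
    return fresh.length; // includes the request just recorded
  }
}
```

Unlike a fixed window, this never allows a burst of 2x the limit straddling a window boundary, which is why it is preferred for sensitive endpoints.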
Prompt 3: Implementing User-Based Rate Limiting with Slow-Down
Create a Node.js Express application that implements a comprehensive rate limiting strategy with:
1. User-based rate limiting that applies different limits based on user roles and subscription tiers
2. Progressive slow-down for authenticated users instead of hard blocking
3. Hard blocking for unauthenticated requests that exceed limits
4. Proper bypass mechanisms for internal services and admin users
Use express-rate-limit and express-slow-down together with proper error handling and response headers.
This prompt will generate code similar to:
const express = require('express');
const { rateLimit } = require('express-rate-limit');
const slowDown = require('express-slow-down');
const jwt = require('jsonwebtoken');

const app = express();

// Middleware to extract user from JWT
const extractUser = (req, res, next) => {
  const token = req.headers.authorization?.split(' ')[1];
  if (token) {
    try {
      const decoded = jwt.verify(token, process.env.JWT_SECRET);
      req.user = decoded;
    } catch (err) {
      // Invalid token, continue as unauthenticated
    }
  }
  next();
};

// Apply user extraction middleware
app.use(extractUser);

// Define rate limit tiers based on user roles and subscription
const getRateLimit = (req) => {
  // Default limits for unauthenticated users
  let limit = 30;
  const windowMs = 60 * 1000; // 1 minute

  if (req.user) {
    // Authenticated user with role-based limits
    switch (req.user.role) {
      case 'admin':
        // Admins get higher limits
        limit = 300;
        break;
      case 'premium':
        // Premium users get higher limits
        limit = 120;
        break;
      case 'basic':
        // Basic users get standard limits
        limit = 60;
        break;
      default:
        // Default authenticated user limits
        limit = 60;
    }

    // Adjust based on subscription tier if available
    if (req.user.subscription) {
      switch (req.user.subscription) {
        case 'enterprise':
          limit *= 3;
          break;
        case 'professional':
          limit *= 2;
          break;
        // Free tier uses default limit
      }
    }
  }

  return { limit, windowMs };
};

// Dynamic rate limiter middleware
const dynamicRateLimiter = (req, res, next) => {
  // Skip rate limiting for internal services
  if (req.headers['x-api-key'] === process.env.INTERNAL_API_KEY) {
    return next();
  }

  // Get appropriate limits for this user/request
  const { limit, windowMs } = getRateLimit(req);

  // Generate appropriate key based on authentication status
  const key = req.user ? `user:${req.user.id}` : `ip:${req.ip}`;

  // NOTE: limiters are built per request here for readability. With the default
  // in-memory store each new instance starts empty, so in production create one
  // limiter per tier up front (or use a shared Redis store) and reuse it.

  // For authenticated users, use slow-down approach
  if (req.user) {
    const speedLimiter = slowDown({
      windowMs,
      delayAfter: Math.floor(limit * 0.7), // Start slowing down at 70% of limit
      delayMs: (used) => (used - Math.floor(limit * 0.7)) * 100, // 100ms extra delay per request over the threshold
      maxDelayMs: 2000, // Maximum delay of 2 seconds
      keyGenerator: () => key,
      skip: (req) => req.user.role === 'admin' // Admins bypass slow-down
    });
    return speedLimiter(req, res, next);
  }

  // For unauthenticated users, use hard blocking
  const hardLimiter = rateLimit({
    windowMs,
    max: limit,
    standardHeaders: true,
    legacyHeaders: false,
    keyGenerator: () => key,
    message: {
      status: 429,
      message: 'Too many requests, please try again later or log in for higher limits'
    },
    handler: (req, res, next, options) => {
      console.log(`Rate limit exceeded for IP ${req.ip} on ${req.originalUrl}`);
      res.status(429).json(options.message);
    }
  });
  return hardLimiter(req, res, next);
};

// Apply dynamic rate limiting to all API routes
app.use('/api', dynamicRateLimiter);

// Routes implementation...
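One caveat worth emphasizing: because dynamicRateLimiter above builds a fresh rateLimit/slowDown instance on every request, the default in-memory store starts at zero hits each time and counts never accumulate. The practical fix is to build one limiter per distinct tier and reuse it; a small memoization helper (illustrative names, factory supplied by the caller) captures the idea:

```javascript
// Cache one limiter instance per distinct (limit, window) tier so hit
// counts persist across requests instead of resetting on every call.
const limiterCache = new Map();

function limiterForTier(limit, windowMs, factory) {
  const cacheKey = `${limit}:${windowMs}`;
  if (!limiterCache.has(cacheKey)) {
    // Only build a limiter the first time this tier is seen.
    limiterCache.set(cacheKey, factory(limit, windowMs));
  }
  return limiterCache.get(cacheKey);
}
```

Inside the middleware you would then call something like `limiterForTier(limit, windowMs, (max, windowMs) => rateLimit({ max, windowMs }))` instead of constructing the limiter inline.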
Best Practices for Rate Limiting in Lovable Applications
When implementing rate limiting in your Lovable applications, keep these best practices in mind:
- Use Different Limits for Different Endpoints: Apply stricter limits to sensitive endpoints like authentication, password reset, and payment processing.
- Consider User Context: Implement different rate limits based on authentication status, user roles, and subscription tiers.
- Use Distributed Rate Limiting: For applications running on multiple servers, use Redis or another shared storage mechanism to track rate limits across instances.
- Implement Progressive Responses: Start with response delays before hard blocking to provide a better user experience for legitimate users who occasionally exceed limits.
- Add Proper Headers: Include standard rate limit headers (RateLimit-Limit, RateLimit-Remaining, RateLimit-Reset) to help clients properly handle limits.
- Monitor and Alert: Log rate limit violations and set up alerts for unusual patterns that might indicate an attack.
- Provide Clear Error Messages: When users hit rate limits, provide clear error messages with information about when they can retry.
- Consider IP Reputation: Implement more aggressive rate limiting for IPs with suspicious behavior patterns.
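On the client side, the headers recommended above tell callers exactly how long to back off. A small helper for computing the retry delay from a 429 response (a sketch assuming Retry-After is sent in seconds rather than as an HTTP date, and RateLimit-Reset as delta-seconds; header names are lowercased as Node's http module delivers them):

```javascript
// Compute how long a client should wait after receiving a 429 response.
function retryDelayMs(headers, defaultMs = 1000) {
  // Retry-After (in seconds) takes precedence when the server sends it.
  if (headers['retry-after'] !== undefined) {
    return Number(headers['retry-after']) * 1000;
  }
  // Draft RateLimit-Reset header: seconds until the current window resets.
  if (headers['ratelimit-reset'] !== undefined) {
    return Number(headers['ratelimit-reset']) * 1000;
  }
  return defaultMs; // conservative fallback when no header is present
}
```

A well-behaved client sleeps for this duration (ideally with jitter) before retrying, rather than hammering the endpoint until the limit clears.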
Conclusion
Rate limiting is a critical security feature that’s often overlooked in vibe-coded Lovable applications. By using the prompts provided in this blog post, you can generate code that implements sophisticated rate limiting strategies to protect your applications from abuse while maintaining a good experience for legitimate users.
Remember that rate limiting is just one layer of your application’s security strategy. It should be combined with other security measures like proper authentication, input validation, and monitoring to create a comprehensive defense against potential attacks.
By taking the time to implement proper rate limiting in your Lovable applications, you’re not only protecting your server resources but also ensuring a consistent and reliable experience for all your users.