Rate Limit
Protect your authentication endpoints from abuse with built-in rate limiting capabilities.
Light-Auth includes a built-in mechanism to automatically limit the number of authentication requests a user can make within a specified time frame.
This helps prevent abuse of your authentication endpoints, such as brute-force attacks or excessive login attempts.
What is Rate Limiting?
Rate limiting restricts how many requests a client can make to an endpoint within a given time window. Applied to authentication, it helps protect against:
- Brute-force attacks on login endpoints
- Denial-of-Service (DoS) attacks
- API abuse and resource exhaustion
- Automated bot attacks
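Conceptually, this boils down to a fixed-window counter: track requests per key and reset the count once the window expires. A minimal standalone sketch (for illustration only; this is not Light-Auth's internal implementation):

```typescript
// Minimal fixed-window rate limiter sketch (illustration only,
// not Light-Auth's internal implementation).
type WindowState = { count: number; windowStart: number };

const windows = new Map<string, WindowState>();

function isAllowed(key: string, maxRequests: number, windowMs: number, now: number = Date.now()): boolean {
  const state = windows.get(key);
  // Start a fresh window if none exists or the previous one has expired
  if (!state || now - state.windowStart >= windowMs) {
    windows.set(key, { count: 1, windowStart: now });
    return true;
  }
  // Inside the current window: allow until the cap is reached
  if (state.count >= maxRequests) return false;
  state.count += 1;
  return true;
}
```

The Redis example at the end of this page follows the same pattern, with the counter stored in shared storage instead of process memory.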
Enabling Rate Limit
Configuring your Light-Auth setup to support rate limiting
Configure rate limiting options in your Light-Auth setup. You can specify the maximum number of requests allowed per time window, the duration of the time window, and the error message to return when the limit is exceeded.
Light-Auth provides a built-in rate limiter via the createLightAuthRateLimiter() function, which you can easily integrate into your authentication flow.
import { CreateLightAuth } from "@light-auth/nextjs";
import { createLightAuthRateLimiter } from "@light-auth/core";
export const { providers, handlers, signIn, signOut, getAuthSession, getUser } = CreateLightAuth<MyLightAuthSession, MyLightAuthUser>({
providers: [googleProvider, microsoftProvider],
rateLimiter: createLightAuthRateLimiter({
maxRequestsPerTimeWindowsMs: 100,
timeWindowMs: 1000, // 1 second
errorMessage: "Too many requests, please try again later.",
statusCode: 429,
}),
});
For production, consider using a distributed cache like Redis or Memcached.
You will find an example of using Redis for rate limiting later in this documentation.
Client-Side Rate Limit Handling
How to handle a rate limit error in your client-side components
"use client";
import { useState } from "react";
import { CreateLightAuthClient } from "@light-auth/nextjs/client";
const { signIn } = CreateLightAuthClient();
export function LoginButton() {
const [isLoading, setIsLoading] = useState(false);
const [error, setError] = useState<string | null>(null);
const handleLogin = async () => {
try {
setIsLoading(true);
setError(null);
await signIn("google");
} catch (err: any) {
if (err.status === 429) {
// Rate limit exceeded
const retryAfter = err.headers?.['retry-after'];
setError(
retryAfter
? `Too many login attempts. Please try again in ${retryAfter} seconds.`
: "Too many login attempts. Please try again shortly."
);
} else {
setError("Login failed. Please try again.");
}
} finally {
setIsLoading(false);
}
};
return (
<div>
<button
onClick={handleLogin}
disabled={isLoading}
className="px-4 py-2 bg-blue-500 text-white rounded disabled:opacity-50"
>
{isLoading ? "Signing in..." : "Sign in with Google"}
</button>
{error && (
<p className="text-red-500 text-sm mt-2">{error}</p>
)}
</div>
);
}
Server-Side Rate Limit Handling
How to handle a rate limit error in your server-side actions
"use server";
import { signIn } from "@/lib/auth";
import { redirect } from "next/navigation";
export async function loginAction(provider: string) {
try {
await signIn(provider);
} catch (error: any) {
if (error.status === 429) {
// Rate limit exceeded - redirect with error
redirect(`/login?error=rate_limit&retry_after=${error.retryAfter}`);
}
// Handle other errors
redirect("/login?error=auth_failed");
}
}
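On the receiving end, the /login page can turn those query parameters into a user-facing message. A hedged sketch (the parameter names match the redirects above; whether retry_after is expressed in seconds or milliseconds depends on your limiter, and this sketch assumes seconds):

```typescript
// Hypothetical helper for a /login page: map the query parameters set by
// loginAction above to a user-facing message. Assumes retry_after is in
// seconds; adjust if your limiter reports milliseconds.
function loginErrorMessage(searchParams: { error?: string; retry_after?: string }): string | null {
  if (searchParams.error === "rate_limit") {
    const seconds = Number.parseInt(searchParams.retry_after ?? "", 10);
    return Number.isFinite(seconds) && seconds > 0
      ? `Too many attempts. Please try again in ${seconds} seconds.`
      : "Too many attempts. Please try again later.";
  }
  if (searchParams.error === "auth_failed") return "Login failed. Please try again.";
  return null; // no error query parameter present
}
```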
Best Practices for Rate Limiting
Use Different Limits for Different Endpoints
Login attempts should have stricter limits than token refresh or callback endpoints.
Consider User Experience
Provide clear error messages and indicate when users can try again.
Monitor Rate Limit Metrics
Track rate limit hits to identify potential attacks or adjust limits.
Use IP-Based and User-Based Limiting
Consider implementing both IP-based and user-based rate limiting for better security.
Test Your Limits
Test rate limiting in development to ensure it works as expected without blocking legitimate users.
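Per-endpoint limits can be built with the shouldApplyRateLimit option (the Redis example later on this page calls it before counting a request). As a hedged sketch, assuming your login route lives under /api/auth/login (an assumption about your route layout, not a Light-Auth guarantee), a stricter limiter scoped to login requests might look like:

```typescript
import { createLightAuthRateLimiter } from "@light-auth/core";

// Hedged sketch: a stricter limiter that only applies to login routes.
// The "/api/auth/login" path is an assumed route layout; adjust it to
// match your actual endpoints.
export const loginRateLimiter = createLightAuthRateLimiter({
  maxRequestsPerTimeWindowsMs: 5,
  timeWindowMs: 60_000, // 1 minute
  errorMessage: "Too many login attempts, please try again later.",
  statusCode: 429,
  // Skip rate limiting for any request that is not a login request
  shouldApplyRateLimit: ({ url }) => url.includes("/api/auth/login"),
});
```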
Security Considerations
Distributed Rate Limiting
In production with multiple server instances, consider using a shared store (Redis, database) for rate limiting to ensure limits are enforced across all instances.
Proxy and Load Balancer Considerations
When behind proxies or load balancers, ensure the real client IP is properly forwarded and used for rate limiting. Configure your proxy to set appropriate headers like X-Forwarded-For.
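For illustration, here is a standalone sketch of how a client IP is typically derived from such headers (getClientIp from @light-auth/core handles this for you; this helper is a hypothetical stand-in):

```typescript
// Illustrative sketch: take the first (left-most) entry of X-Forwarded-For,
// which is conventionally the original client IP, falling back to X-Real-IP.
// Only trust these headers when they are set by a proxy you control.
function clientIpFromHeaders(headers: Headers): string | null {
  const forwarded = headers.get("x-forwarded-for");
  if (forwarded) return forwarded.split(",")[0].trim();
  return headers.get("x-real-ip");
}
```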
Bypass Prevention
Be aware that sophisticated attackers might try to bypass rate limiting by:
- Using multiple IP addresses
- Rotating user agents
- Using proxy networks
Consider implementing additional security measures like CAPTCHA for repeated failures.
Redis Rate Limit Example
If you want to use Redis for rate limiting, you can adapt the following createLightAuthRedisRateLimiter() example.
For distributed applications with multiple server instances, Redis provides shared storage to ensure rate limits are enforced consistently across all instances.
// lib/redis-rate-limiter.ts
import Redis from "ioredis";
import { getClientIp, type LightAuthRateLimit, type LightAuthRateLimiter, type LightAuthRateLimitOptions } from "@light-auth/core";
const redis = new Redis(process.env.REDIS_URL || "redis://localhost:6379");
export const createLightAuthRedisRateLimiter = (options: LightAuthRateLimitOptions): LightAuthRateLimiter => {
return {
async onRateLimit(args) {
const { url, headers } = args;
// Skip rate limiting if the function returns false
if (options.shouldApplyRateLimit && !options.shouldApplyRateLimit(args)) {
return undefined;
}
// Define the key for the rate limit based on the request IP and URL
const rateLimitKey = getClientIp(headers) + url;
const now = Date.now();
// Use a Redis pipeline to batch the write operations below
// (note: this read-modify-write sequence is not fully atomic; for strict
// guarantees, consider a Lua script or the INCR + EXPIRE pattern)
const pipeline = redis.pipeline();
// Get current rate limit data
const rateLimit = await redis.hgetall(rateLimitKey);
const count = parseInt(rateLimit.count || "0", 10);
const lastRequestDateTime = parseInt(rateLimit.lastRequestDateTime || "0", 10);
// Check if we're outside the time window - reset if so
if (lastRequestDateTime + options.timeWindowMs < now) {
// Reset the count and update timestamp
pipeline.hset(rateLimitKey, {
count: "1",
lastRequestDateTime: now.toString(),
});
pipeline.expire(rateLimitKey, Math.ceil(options.timeWindowMs / 1000));
await pipeline.exec();
return undefined; // No rate limit exceeded
}
// Calculate retry after time
const retryAfter = lastRequestDateTime + options.timeWindowMs - now;
// Check if rate limit is exceeded
const isRateLimitExceeded = count >= options.maxRequestsPerTimeWindowsMs;
if (!isRateLimitExceeded) {
// Increment count and update timestamp
pipeline.hset(rateLimitKey, {
count: (count + 1).toString(),
lastRequestDateTime: now.toString(),
});
pipeline.expire(rateLimitKey, Math.ceil(options.timeWindowMs / 1000));
await pipeline.exec();
return undefined; // No rate limit exceeded
}
// If the rate limit is exceeded, return a response
const data =
typeof options.errorMessage === "string"
? { message: options.errorMessage }
: options.errorMessage ?? { message: "Too Many Requests" };
return {
data,
init: {
status: options.statusCode,
statusText: "Too Many Requests",
headers: {
"Content-Type": "application/json",
"Content-Length": Buffer.byteLength(JSON.stringify(data)).toString(),
"X-Retry-After": Math.ceil(retryAfter / 1000).toString(),
},
},
};
},
};
};
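To wire this limiter into your setup, pass it as the rateLimiter option just like the built-in one (googleProvider and microsoftProvider are assumed to be defined as in the earlier example):

```typescript
// Usage sketch: plugging the Redis limiter into Light-Auth, same shape as
// the built-in limiter configuration shown earlier on this page.
import { CreateLightAuth } from "@light-auth/nextjs";
import { createLightAuthRedisRateLimiter } from "@/lib/redis-rate-limiter";

export const { providers, handlers, signIn, signOut, getAuthSession, getUser } = CreateLightAuth({
  providers: [googleProvider, microsoftProvider],
  rateLimiter: createLightAuthRedisRateLimiter({
    maxRequestsPerTimeWindowsMs: 100,
    timeWindowMs: 1000, // 1 second
    errorMessage: "Too many requests, please try again later.",
    statusCode: 429,
  }),
});
```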