Trypema Rate Limiter
Trypema is a Rust rate-limiting library for enforcing per-key request rates. It offers ultra-fast local state, Redis-backed shared state, or a hybrid provider that buffers counts locally to avoid per-check network calls and flushes them to Redis on a configurable interval. It implements a sliding-window algorithm (Absolute) and a graceful degradation mode (Suppressed), with practical details like fractional rates, retry hints, and a cleanup loop for high-cardinality keys.

Name Inspiration
The name “Trypema” is inspired by the Koine Greek word “trypematos” ("hole/opening"), from the phrase “through the eye of a needle” (Matthew 19:24; Mark 10:25; Luke 18:25). It’s a reminder that good rate limiting is about narrowing the gate under load: letting the right work through while keeping systems stable.
Overview
Trypema provides three providers (Local, Redis, and Hybrid) and two strategies (Absolute and Suppressed) so you can enforce per-key request rates with low overhead and clear behavior near and over capacity. The Absolute strategy uses a sliding-window algorithm to make enforcement predictable and fair across time boundaries. For distributed setups, Redis operations are implemented as atomic scripts so the admission check and increment happen together, making it suitable for multi-instance APIs and worker fleets. The distributed limiter approach is inspired by Ably's write-up: https://ably.com/blog/distributed-rate-limiting-scale-your-platform
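To make the sliding-window idea concrete, here is a minimal, self-contained sketch of the common weighted sliding-window counter: the previous fixed window's count is weighted by how much of it still overlaps the sliding window. This illustrates the algorithm family, not Trypema's actual internals, and the struct and field names are invented for the example.

```rust
use std::time::Duration;

/// Approximate sliding-window counter: the previous fixed window's
/// count is weighted by how much of it still overlaps the sliding
/// window ending at "now".
struct SlidingWindow {
    window: Duration,
    limit: f64,
    prev_count: f64,
    curr_count: f64,
    curr_start_ms: u64,
}

impl SlidingWindow {
    fn new(limit: f64, window: Duration) -> Self {
        Self { window, limit, prev_count: 0.0, curr_count: 0.0, curr_start_ms: 0 }
    }

    /// Admit or reject one request arriving at `now_ms`.
    fn check(&mut self, now_ms: u64) -> bool {
        let win = self.window.as_millis() as u64;
        let bucket = now_ms / win;
        let curr_bucket = self.curr_start_ms / win;
        if bucket == curr_bucket + 1 {
            // Moved into the next fixed window: current becomes "previous".
            self.prev_count = self.curr_count;
            self.curr_count = 0.0;
            self.curr_start_ms = bucket * win;
        } else if bucket > curr_bucket + 1 {
            // Skipped at least one full window: nothing overlaps any more.
            self.prev_count = 0.0;
            self.curr_count = 0.0;
            self.curr_start_ms = bucket * win;
        }
        // Fraction of the previous window still inside the sliding window.
        let overlap = 1.0 - (now_ms - self.curr_start_ms) as f64 / win as f64;
        let weighted = self.prev_count * overlap + self.curr_count;
        if weighted + 1.0 <= self.limit {
            self.curr_count += 1.0;
            true
        } else {
            false
        }
    }
}
```

The weighting is what makes enforcement smooth across window boundaries: a burst that filled the previous window keeps counting against the limit, decaying linearly, instead of resetting to zero at the boundary the way a fixed-window counter does.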
Hybrid Provider
The hybrid provider exists because of two constraints. First, every pure Redis admission check requires a network round trip, and that latency becomes the ceiling on throughput no matter how efficient the server-side logic is. Second, even with an optimized implementation, it is hard to compete with something like redis-cell at high concurrency. Instead of treating that as a defeat, Trypema takes a different approach: buffer counts locally to avoid per-check network calls, then flush to Redis on a configurable interval. This keeps the system paced and dramatically reduces the number of times you are waiting on the network.
The tradeoff is a small window of looseness at flush boundaries, which for most real-world rate limiting scenarios is a reasonable and acceptable compromise. Aggressive flushing turns out to make things worse, not better. The key insight was pacing the flush interval properly so Redis never becomes a bottleneck.
If you need strict distributed enforcement, use the Redis provider. If you need high throughput with distributed state, the hybrid provider is where Trypema has a genuine edge.
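The buffering half of the hybrid approach can be sketched in a few lines: increments accumulate in an in-memory map, and a flush tick drains the whole map so the caller can ship it to Redis in one batched write. This is an illustration of the pattern, assuming nothing about Trypema's real internals; the type and method names are invented here.

```rust
use std::collections::HashMap;

/// Local count buffer that defers remote writes: increments accumulate
/// in memory and are drained to the shared store on each flush tick.
struct HybridBuffer {
    pending: HashMap<String, u64>,
}

impl HybridBuffer {
    fn new() -> Self {
        Self { pending: HashMap::new() }
    }

    /// Record hits locally; no network call happens here.
    fn inc(&mut self, key: &str, n: u64) {
        *self.pending.entry(key.to_string()).or_insert(0) += n;
    }

    /// Drain all buffered counts. The caller ships the batch to Redis
    /// in a single round trip (e.g. a pipeline of INCRBY commands).
    fn flush(&mut self) -> HashMap<String, u64> {
        std::mem::take(&mut self.pending)
    }
}
```

The looseness window mentioned above falls directly out of this design: between two flush ticks, other instances cannot see the counts sitting in `pending`, so the limit can be briefly exceeded cluster-wide by at most the sum of the unflushed buffers.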
Benchmarks
Local vs Governor
On a hot-key workload (single key, 1,000 ops/s limit), Trypema's local provider reaches 3.49M ops/s against Governor's 3.88M ops/s, within 12% of each other. Governor holds a slight edge on P99 latency (25µs vs 33µs). On a uniform-key workload (100k keys), the gap flips: Trypema hits 9.64M ops/s versus Governor's 6.28M ops/s, around 53% higher throughput.
Redis vs Hybrid (Distributed Rate Limiting)
Both the Redis and Hybrid providers are designed for distributed rate limiting, where limits need to be enforced consistently across multiple instances. The difference is how they interact with Redis. The pure Redis provider runs every admission check against Redis directly, which means throughput is fundamentally capped by network latency. On a hot-key workload, redis-cell reaches 64k ops/s and Trypema's Redis provider reaches 47.6k ops/s, both network-bound with P99 latency around 370-500µs. The hybrid provider removes the per-check network cost entirely by buffering counts locally and flushing on an interval: 11.36M ops/s with a P99 of 1µs. That is roughly 170x higher throughput than redis-cell on the same workload, at the cost of a small looseness window at flush boundaries.
These benchmarks were run locally. A lot of things can mess with the results at that level: CPU scheduling, background processes, thermal throttling, even ambient temperature. The numbers held up across multiple runs, but a controlled environment like an EC2 instance would give the cleanest picture. If you run them there, publish the results.
Basic usage
Local (in-process)
[dependencies]
trypema = "1"
use trypema::{RateLimit, RateLimitDecision};
// `rl`: a shared `RateLimiter` created once at startup
let rate = RateLimit::try_from(5.0).unwrap();
// Example key: per-user or per-IP
let key = "user:123";
match rl.local().absolute().inc(key, &rate, 1) {
    RateLimitDecision::Allowed => {
        // proceed
    }
    RateLimitDecision::Rejected { retry_after_ms, .. } => {
        // reject (e.g. HTTP 429) and set a Retry-After hint
        let _ = retry_after_ms;
    }
    RateLimitDecision::Suppressed { .. } => unreachable!("absolute never suppresses"),
}
Redis (distributed)
[dependencies]
trypema = { version = "1", features = ["redis-tokio"] }
redis = { version = "0.27", features = ["aio", "tokio-comp"] }
tokio = { version = "1", features = ["rt", "time"] }
use trypema::{RateLimit, RateLimitDecision};
use trypema::redis::RedisKey;
let rate = RateLimit::try_from(10.0).unwrap();
// Note: RedisKey is restricted to a safe character set (e.g. ':' is rejected)
let key = RedisKey::try_from("ip_203.0.113.10".to_string()).unwrap();
// Atomic admission check + increment via Redis Lua script
match rl.redis().absolute().inc(&key, &rate, 1).await.unwrap() {
    RateLimitDecision::Allowed => {
        // proceed
    }
    RateLimitDecision::Rejected { retry_after_ms, .. } => {
        let _ = retry_after_ms;
    }
    RateLimitDecision::Suppressed { .. } => unreachable!("absolute never suppresses"),
}
Graceful degradation (Suppressed)
use trypema::{RateLimit, RateLimitDecision};
let rate = RateLimit::try_from(25.0).unwrap();
let key = "tenant:42:search";
match rl.local().suppressed().inc(key, &rate, 1) {
    RateLimitDecision::Allowed => {
        // normal path
    }
    RateLimitDecision::Suppressed { is_allowed: true, .. } => {
        // near capacity: allow, but consider cheaper/faster code paths
    }
    RateLimitDecision::Suppressed { is_allowed: false, .. } => {
        // shed load (e.g. serve cached response)
    }
    RateLimitDecision::Rejected { .. } => {
        // hard cutoff (over hard limit)
    }
}
Highlights
Providers
- Local provider for ultra-low latency in-process limiting
- Redis provider for shared limits across multiple instances
- Hybrid provider: buffer counts locally to eliminate per-check network calls, flush to Redis on a configurable interval
Strategies
- Absolute: deterministic sliding-window enforcement
- Suppressed: graceful degradation near capacity
Ergonomics
- Fractional rates (f64)
- Retry hints (e.g. retry_after_ms)
- Cleanup loop for stale keys
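Fractional rates and retry hints go together: a rate below 1.0 means requests must be spaced more than a second apart, and the retry hint tells the caller how long until the next slot opens. A minimal sketch of that arithmetic, with a hypothetical helper name (this is not Trypema's API, just the underlying calculation):

```rust
/// Hypothetical helper: for a fractional rate of `rate` requests per
/// second, admitted requests must be spaced at least 1000/rate ms
/// apart. The retry hint is the time remaining until the next slot.
fn retry_after_ms(rate: f64, last_allowed_ms: u64, now_ms: u64) -> u64 {
    // e.g. rate = 0.5 req/s -> one admitted request every 2000 ms
    let spacing_ms = (1000.0 / rate).ceil() as u64;
    (last_allowed_ms + spacing_ms).saturating_sub(now_ms)
}
```

A rate of 0.5 with the last admission at t=0 yields a 1500 ms hint when queried at t=500, which is exactly the kind of value you would surface in a Retry-After header.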
Use cases
- HTTP middleware (per user/IP/route)
- Worker queues (per tenant/job type)
- Downstream protection for internal clients
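For the HTTP middleware case, the decision-to-response mapping is the same regardless of framework. The sketch below uses a stand-in enum rather than Trypema's real `RateLimitDecision` (real middleware would match on that type directly), and shows the one subtlety worth getting right: the Retry-After header is specified in whole seconds, so a millisecond hint should be rounded up.

```rust
// Hypothetical stand-in for the library's decision type, used only to
// show the HTTP mapping without a web-framework dependency.
enum Decision {
    Allowed,
    Rejected { retry_after_ms: u64 },
}

/// Map a rate-limit decision to an HTTP status code and an optional
/// Retry-After header value (whole seconds, rounded up).
fn to_http(decision: Decision) -> (u16, Option<String>) {
    match decision {
        Decision::Allowed => (200, None),
        Decision::Rejected { retry_after_ms } => {
            // Round up so the client never retries too early.
            let secs = (retry_after_ms + 999) / 1000;
            (429, Some(secs.to_string()))
        }
    }
}
```

The same mapping works for the worker-queue case, except the rejected branch typically re-enqueues the job with a delay equal to the hint instead of returning a status code.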
Performance
- Local: 9.64M ops/s on uniform keys, ~53% faster than Governor
- Local: 3.49M ops/s on hot-key (within 12% of Governor)
- Hybrid: 11.36M ops/s vs redis-cell's 64k ops/s (~170x faster)
- Hybrid P99: 1µs vs 370-500µs for pure Redis providers
Documentation
- Docs website with guides, concepts, strategies, and provider reference
- Quickstarts for Local, Redis, and Hybrid setups