# Current Limitations
This document provides a detailed analysis of the current Sunday architecture, its limitations, and the trade-offs made for rapid development.
## Executive Summary
Sunday is built with Next.js and optimized for developer productivity, prioritizing fast iteration over production scalability. These limitations must be addressed before scaling to production.
| Area | Current | Production Requirement |
|---|---|---|
| Rate Limiting | In-memory Map | Distributed (Redis) |
| Database | Single connection | Connection pooling |
| State | Client-side Zustand | Server-side caching |
| Sessions | JWT only | Distributed sessions |
| File Storage | Vercel Blob | Multi-region S3 |
| Search | Basic text | Full-text search |
| Background Jobs | None | Queue-based |
## 1. In-Memory Rate Limiting

### Current Implementation
File: middleware.ts

```ts
// Simple in-memory rate limiter for Edge runtime
const rateLimit = new Map<string, { count: number; resetAt: number }>()

function checkRateLimit(
  key: string,
  maxRequests: number,
  windowMs: number
): { allowed: boolean; remaining: number; resetAt: number } {
  const now = Date.now()
  const data = rateLimit.get(key)
  // ... rate limiting logic
}
```

File: lib/rate-limit.ts

```ts
// In-memory store for rate limiting (use Redis in production for distributed systems)
const store = new Map<string, TokenBucket>()
```

### Problem
- Not distributed - Each server instance has its own Map
- Memory leaks - No TTL-based cleanup of old entries (see the sweep sketch after this list)
- Lost on restart - All rate limit data cleared on redeploy
- Edge runtime limitation - Can't use Redis directly in Edge
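Until a distributed store is in place, the unbounded-growth issue can be mitigated with an opportunistic sweep. A minimal sketch, assuming the `rateLimit` Map shown above; it does nothing for the distribution or restart problems:

```ts
// Hypothetical interim mitigation: evict expired entries opportunistically
// (e.g. from checkRateLimit on a small fraction of calls) so the Map
// cannot grow without bound between deploys.
function sweepExpired(now: number = Date.now()): void {
  for (const [key, entry] of rateLimit) {
    if (entry.resetAt <= now) {
      rateLimit.delete(key)
    }
  }
}
```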
### Impact
| Scenario | Issue |
|---|---|
| Multiple instances | Rate limits not shared, user can bypass by hitting different instances |
| High traffic | Map grows unbounded, memory exhaustion |
| Restart/Deploy | Rate limits reset, allows burst requests |
### Solution
Use Redis with TTL:
```ts
import { Redis } from '@upstash/redis'

const redis = Redis.fromEnv()

// Fixed-window counter: INCR the key, set a TTL on the first hit.
// `window` is in seconds (EXPIRE semantics).
async function checkRateLimit(key: string, limit: number, window: number) {
  const count = await redis.incr(key)
  if (count === 1) {
    await redis.expire(key, window)
  }
  return count <= limit
}
```
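Because @upstash/redis speaks HTTP rather than the Redis wire protocol, this limiter also runs in the Edge runtime, removing the "can't use Redis in Edge" constraint. A sketch of how the middleware might call it, keyed by client IP (header choice and limits are illustrative):

```ts
import { NextResponse } from 'next/server'
import type { NextRequest } from 'next/server'

// Sketch: allow at most 100 requests per IP per 60-second window.
export async function middleware(request: NextRequest) {
  const ip = request.headers.get('x-forwarded-for') ?? 'unknown'
  const allowed = await checkRateLimit(`rl:${ip}`, 100, 60)
  if (!allowed) {
    return new NextResponse('Too Many Requests', { status: 429 })
  }
  return NextResponse.next()
}
```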
## 2. Single MongoDB Connection

### Current Implementation
File: lib/mongodb.ts

```ts
let cachedClient: MongoClient | null = null
let cachedDb: Db | null = null

export async function connectToDatabase() {
  if (cachedClient && cachedDb) {
    return { client: cachedClient, db: cachedDb }
  }

  const { uri, dbName } = getMongoConfig()

  if (!cachedClient) {
    cachedClient = new MongoClient(uri)
    await cachedClient.connect()
  }

  cachedDb = cachedClient.db(dbName)
  return { client: cachedClient, db: cachedDb }
}
```

### Problem
- No connection pooling configuration - Uses MongoDB driver defaults
- Module-level caching - Works in serverless but not optimal
- No health checks - Connection could be stale (see the ping sketch after this list)
- No read replicas - All reads go to primary
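A minimal health-check sketch, assuming the `cachedClient`/`cachedDb` module state shown above — verify the cached connection with a cheap `ping` and reconnect when it has gone stale:

```ts
// Sketch: validate the cached connection before reuse.
export async function connectToDatabase() {
  if (cachedClient && cachedDb) {
    try {
      // Cheap round-trip to confirm the connection is still alive
      await cachedDb.command({ ping: 1 })
      return { client: cachedClient, db: cachedDb }
    } catch {
      // Stale connection: drop the cache and reconnect below
      cachedClient = null
      cachedDb = null
    }
  }
  // ... fall through to the connection logic shown above
}
```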
### Impact
| Scenario | Issue |
|---|---|
| High concurrency | Connection pool exhaustion |
| Network issues | No automatic reconnection handling |
| Read-heavy workload | Primary overloaded |
### Solution
Configure connection pooling and read preference:
```ts
const client = new MongoClient(uri, {
  maxPoolSize: 50,                      // cap concurrent connections per instance
  minPoolSize: 10,                      // keep warm connections ready
  maxIdleTimeMS: 30000,                 // recycle idle connections
  serverSelectionTimeoutMS: 5000,       // fail fast when the cluster is unreachable
  readPreference: 'secondaryPreferred', // offload reads to replicas
  retryWrites: true,
  w: 'majority'                         // acknowledge writes on a majority of nodes
})
```
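One caveat on the read preference: `secondaryPreferred` trades read-your-own-writes consistency for throughput, since a replica may briefly lag the primary. Flows that read immediately after a write (e.g. creating a task and then rendering it) should override the read preference for that specific query.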
## 3. Client-Side State Management

### Current Implementation
File: lib/store.ts (1,534 lines)
```ts
export const useAppStore = create<AppState>((set, get) => ({
  // All data loaded into client memory
  workspaces: [],
  boards: [],
  tasks: [],
  users,
  notifications: initialNotifications,
  automations: initialAutomations,
  // ... 1500+ lines of state and actions
}))
```

The store loads all data on page load:
```ts
loadAppData: async () => {
  // Fetches ALL workspaces, boards, tasks for the user
  const [workspacesRes, boardsRes, tasksRes] = await Promise.all([
    fetch('/api/workspaces', ...),
    fetch('/api/boards', ...),
    fetch('/api/tasks', ...)
  ])
}
```

### Problem
- Large initial payload - All data fetched at once
- No pagination - Entire dataset in memory
- No server-side caching - Every page load hits DB
- Memory pressure - Large boards can exhaust browser memory
- Stale data - No real-time sync mechanism
### Impact
| Scenario | Issue |
|---|---|
| 1000+ tasks | Slow page load, browser lag |
| Multiple users | No real-time updates |
| Mobile devices | Memory constraints |
### Solution
Implement server-side pagination and caching:
```ts
import { useInfiniteQuery } from '@tanstack/react-query'

// API with pagination:
//   GET /api/tasks?boardId=xxx&page=1&limit=50

// React Query for caching
const { data, fetchNextPage } = useInfiniteQuery({
  queryKey: ['tasks', boardId],
  queryFn: ({ pageParam = 1 }) => fetchTasks(boardId, pageParam),
  getNextPageParam: (lastPage) => lastPage.nextPage
})
```
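The server side of that contract might look like the following route handler. A sketch: parameter names follow the example URL above, `getDatabase` is the existing helper, and the `nextPage` cursor shape feeds `getNextPageParam`:

```ts
// app/api/tasks/route.ts (sketch): page through tasks with skip/limit.
export async function GET(request: Request) {
  const { searchParams } = new URL(request.url)
  const boardId = searchParams.get('boardId')
  const page = Math.max(1, Number(searchParams.get('page') ?? 1))
  const limit = Math.min(100, Number(searchParams.get('limit') ?? 50))

  const db = await getDatabase()
  const tasks = await db
    .collection('tasks')
    .find({ boardId })
    .skip((page - 1) * limit)
    .limit(limit + 1) // fetch one extra row to detect whether another page exists
    .toArray()

  const hasMore = tasks.length > limit
  return Response.json({
    tasks: tasks.slice(0, limit),
    nextPage: hasMore ? page + 1 : null
  })
}
```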
## 4. No Caching Layer

### Current Implementation
Every API request queries the database directly:
```ts
// Example from API route
export async function GET(request: Request) {
  const db = await getDatabase()
  const tasks = await db.collection('tasks').find({ boardId }).toArray()
  return Response.json({ tasks })
}
```

### Problem
- No Redis/Memcached - All reads hit MongoDB
- No HTTP caching - No Cache-Control headers (see the sketch after this list)
- No query result caching - Identical queries repeat work
- No CDN caching - Static assets not edge-cached
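HTTP caching is the cheapest of these layers to add. A minimal sketch for a read-mostly route, mirroring the handler above (TTLs are illustrative):

```ts
// Sketch: let the CDN and browser serve repeat reads without touching MongoDB.
export async function GET(request: Request) {
  const db = await getDatabase()
  const tasks = await db.collection('tasks').find({ boardId }).toArray()

  return Response.json(
    { tasks },
    {
      headers: {
        // Cache at the edge for 60s; serve stale for up to 5 min while revalidating
        'Cache-Control': 'public, s-maxage=60, stale-while-revalidate=300'
      }
    }
  )
}
```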
### Impact
| Scenario | Issue |
|---|---|
| Repeated queries | Unnecessary DB load |
| Board view | Columns/groups fetched repeatedly |
| High traffic | Database becomes bottleneck |
### Solution
Implement multi-layer caching:
```ts
import { Redis } from 'ioredis'

const redis = new Redis(process.env.REDIS_URL)

async function getBoard(boardId: string) {
  // Check cache first
  const cached = await redis.get(`board:${boardId}`)
  if (cached) return JSON.parse(cached)

  // Query DB
  const db = await getDatabase()
  const board = await db.collection('boards').findOne({ _id: boardId })

  // Cache for 5 minutes
  await redis.setex(`board:${boardId}`, 300, JSON.stringify(board))
  return board
}
```
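Cached reads need a matching invalidation path on writes; otherwise the 5-minute TTL serves stale boards after an update. A sketch using the same key scheme:

```ts
// Sketch: drop the cached copy whenever a board changes, so the next
// read repopulates the cache with fresh data.
async function updateBoard(boardId: string, updates: Record<string, unknown>) {
  const db = await getDatabase()
  await db.collection('boards').updateOne({ _id: boardId }, { $set: updates })

  // Invalidate after the write commits
  await redis.del(`board:${boardId}`)
}
```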
## 5. No Background Job Processing

### Current Implementation
All operations are synchronous in request handlers:
```ts
// Automation execution - synchronous
if (automation.trigger.config.toStatus === updates.status) {
  automation.actions.forEach((action) => {
    if (action.type === 'notify') {
      get().addNotification(...) // Synchronous
    }
    if (action.type === 'send_email') {
      // Would block request
    }
  })
}
```

### Problem
- No queue - Slow operations block requests
- No retry - Failed operations lost
- No scheduling - No delayed/recurring jobs
- No email sending - Would timeout in serverless
### Impact
| Scenario | Issue |
|---|---|
| Send email | Request timeout |
| Bulk operations | Long response times |
| Automation chains | Cascading delays |
### Solution

Implement an SQS- or Redis-based queue:
```ts
import { SQSClient, SendMessageCommand } from '@aws-sdk/client-sqs'

const sqs = new SQSClient({ region: 'us-east-1' })

async function queueEmail(to: string, subject: string, body: string) {
  await sqs.send(new SendMessageCommand({
    QueueUrl: process.env.EMAIL_QUEUE_URL,
    MessageBody: JSON.stringify({ to, subject, body })
  }))
}
```
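On the consuming side, a worker drains the queue outside the request path. A sketch as an AWS Lambda handler triggered by the queue; `sendEmail` is a hypothetical placeholder for the real provider call:

```ts
import type { SQSEvent } from 'aws-lambda'

// Sketch: each SQS record is one queued email. Throwing leaves the
// message on the queue, so SQS retries it (and eventually dead-letters it).
export const handler = async (event: SQSEvent) => {
  for (const record of event.Records) {
    const { to, subject, body } = JSON.parse(record.body)
    await sendEmail(to, subject, body) // placeholder provider call (SES, Resend, ...)
  }
}
```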
## 6. File Storage Coupling

### Current Implementation
Tightly coupled to Vercel Blob:
```ts
import { put } from '@vercel/blob'

const blob = await put(filename, file, {
  access: 'public'
})
```

### Problem
- Vendor lock-in - Vercel Blob only
- Single region - No multi-region redundancy
- No CDN control - Limited caching configuration
- Cost - Vercel Blob pricing may not scale
### Solution

Abstract storage behind an S3-compatible interface:
```ts
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'

const s3 = new S3Client({ region: 'us-east-1' })

async function uploadFile(key: string, body: Buffer) {
  await s3.send(new PutObjectCommand({
    Bucket: process.env.S3_BUCKET,
    Key: key,
    Body: body
  }))
  return `https://${process.env.CDN_DOMAIN}/${key}`
}
```
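To make the decoupling real, callers can depend on a small interface rather than on a specific SDK. A sketch with illustrative names, reusing the `s3` client and `PutObjectCommand` import above:

```ts
import { DeleteObjectCommand } from '@aws-sdk/client-s3'

// Sketch: swap providers (Vercel Blob, S3, R2, ...) behind one interface.
interface StorageProvider {
  upload(key: string, body: Buffer): Promise<string> // returns a public URL
  delete(key: string): Promise<void>
}

class S3StorageProvider implements StorageProvider {
  async upload(key: string, body: Buffer) {
    await s3.send(new PutObjectCommand({
      Bucket: process.env.S3_BUCKET,
      Key: key,
      Body: body
    }))
    return `https://${process.env.CDN_DOMAIN}/${key}`
  }

  async delete(key: string) {
    await s3.send(new DeleteObjectCommand({
      Bucket: process.env.S3_BUCKET,
      Key: key
    }))
  }
}
```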
## 7. JWT-Only Sessions

### Current Implementation
File: lib/auth.ts
```ts
export function generateToken(userId: string, email: string): string {
  return sign({ userId, email }, jwtSecret, { expiresIn: "7d" })
}

export function verifyToken(token: string) {
  try {
    return verify(token, jwtSecret) as { userId: string; email: string }
  } catch {
    return null
  }
}
```

### Problem
- No session revocation - Can't invalidate JWT before expiry
- No session tracking - Can't see active sessions
- Token stored in localStorage - XSS-vulnerable (see the cookie sketch after this list)
- Long expiry - 7 days without refresh tokens
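The localStorage exposure can be fixed independently of the session store: deliver the credential as an httpOnly cookie so page scripts can never read it. A sketch for a Next.js response (option values are illustrative):

```ts
import { NextResponse } from 'next/server'

// Sketch: attach the session id as an httpOnly cookie instead of returning
// it to client-side JavaScript.
export function withSessionCookie(response: NextResponse, sessionId: string) {
  response.cookies.set('session', sessionId, {
    httpOnly: true,       // invisible to document.cookie, blunting XSS token theft
    secure: true,         // HTTPS only
    sameSite: 'lax',      // basic CSRF mitigation
    maxAge: 60 * 60 * 24, // 1 day, matching the Redis session TTL below
    path: '/'
  })
  return response
}
```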
### Impact
| Scenario | Issue |
|---|---|
| Password change | Old tokens still valid |
| Account compromise | Can't force logout |
| Multiple devices | No visibility |
### Solution

Implement a Redis session store:
```ts
// Store session in Redis
async function createSession(userId: string) {
  const sessionId = crypto.randomUUID()
  await redis.setex(`session:${sessionId}`, 86400, userId)
  return sessionId
}

// Revoke all sessions (note: KEYS scans the whole keyspace - see below)
async function revokeUserSessions(userId: string) {
  const keys = await redis.keys(`session:*`)
  for (const key of keys) {
    const storedUserId = await redis.get(key)
    if (storedUserId === userId) await redis.del(key)
  }
}
```
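`KEYS` is O(N) over every key and blocks Redis, so the scan above degrades as the session count grows. A sketch of an indexed variant that keeps a set of session ids per user:

```ts
// Sketch: maintain a per-user index so revocation touches only that user's
// sessions instead of scanning the keyspace. (Expired ids linger in the set
// until the next revocation; harmless for this sketch.)
async function createSession(userId: string) {
  const sessionId = crypto.randomUUID()
  await redis.setex(`session:${sessionId}`, 86400, userId)
  await redis.sadd(`user-sessions:${userId}`, sessionId)
  return sessionId
}

async function revokeUserSessions(userId: string) {
  const sessionIds = await redis.smembers(`user-sessions:${userId}`)
  for (const sessionId of sessionIds) {
    await redis.del(`session:${sessionId}`)
  }
  await redis.del(`user-sessions:${userId}`)
}
```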
## 8. Basic Text Search

### Current Implementation
Simple regex/includes matching:
```ts
const filtered = tasks.filter(task =>
  task.name.toLowerCase().includes(query.toLowerCase())
)
```

### Problem
- No full-text search - Exact matching only
- No fuzzy matching - Typos break search
- No relevance ranking - Results unordered
- Client-side only - Entire dataset loaded
### Solution
Use MongoDB Atlas Search or OpenSearch:
```ts
// MongoDB Atlas Search
const results = await db.collection('tasks').aggregate([
  {
    $search: {
      index: 'tasks_search',
      text: {
        query: searchTerm,
        path: ['name', 'description'],
        fuzzy: { maxEdits: 1 }
      }
    }
  },
  { $limit: 20 }
]).toArray()
```
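Where Atlas Search is unavailable (e.g. self-hosted MongoDB), a classic text index is a middle ground: server-side and relevance-ranked, though without fuzzy matching. A sketch:

```ts
// Sketch: standard MongoDB text index - ranked full-text search, no fuzziness.
await db.collection('tasks').createIndex({ name: 'text', description: 'text' })

const results = await db
  .collection('tasks')
  .find(
    { $text: { $search: searchTerm } },
    { projection: { score: { $meta: 'textScore' } } }
  )
  .sort({ score: { $meta: 'textScore' } })
  .limit(20)
  .toArray()
```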
## Summary

These limitations are intentional trade-offs for development speed. Address them systematically using:
- Scalability Guide - Code changes required
- AWS Deployment - Production architecture