Reference Video: Database Connection Pooling Explained by Husain Nasir
This project demonstrates the performance difference between creating new database connections for each request versus using a connection pool in Node.js with PostgreSQL.
postgresnode-pool/
├── api.js # Express server with connection pool endpoints
├── load-test.js # Load testing script for performance comparison
├── docker-compose.yaml # PostgreSQL database configuration
├── package.json # Project dependencies and scripts
├── pnpm-lock.yaml # Lock file for pnpm
└── README.md # This file
Database connection pooling is a technique that maintains a cache of database connections that can be reused for future requests. Instead of opening and closing a new connection for every database operation, applications can borrow a connection from the pool, use it, and return it to the pool when done.
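As a concrete illustration, the borrow/use/return cycle looks like this with the pg library the demo uses (a minimal sketch; the actual queries live in api.js):

const { Pool } = require("pg");

const pool = new Pool({ /* connection settings, shown later in this README */ });

// Borrow a connection, use it, and return it to the pool
async function getEmployees() {
  const client = await pool.connect(); // checkout: reuses an idle connection if one exists
  try {
    const res = await client.query("SELECT * FROM employees");
    return res.rows;
  } finally {
    client.release(); // return the connection to the pool instead of closing it
  }
}

For a single one-off query, pool.query() performs this checkout-and-release cycle for you.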
In this demo, you'll see that reusing connections from a pool is dramatically faster than opening a new connection for every request (often by 10-20x, as the load test below shows).
Install dependencies:

pnpm install

Start the database:

docker-compose up -d
This will start a PostgreSQL container with:
- Database: mydb
- User: marco
- Password: secret
- Port: 5432

Connect to the database and create a test table:
docker ps # Find the container name, usually something like 'db-pooling_db_1'
docker exec -it <container_name> psql -U marco -d mydb
Then run:
CREATE TABLE employees (
id SERIAL PRIMARY KEY,
name VARCHAR(100),
department VARCHAR(100)
);
INSERT INTO employees (name, department) VALUES
('John Doe', 'Engineering'),
('Jane Smith', 'Marketing'),
('Bob Johnson', 'Sales');
Start the server:

node api.js
The server will start on port 9000.
The server exposes two endpoints:

- GET /old - uses a new connection for each request (slow)
- GET /pool - uses the connection pool (fast)

A sketch of both handlers follows.
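Roughly, the two handlers differ only in how they obtain a connection. This is a sketch under the pool configuration shown later in this README; the real api.js may differ in details:

const express = require("express");
const { Client, Pool } = require("pg");

const config = {
  host: "localhost",
  port: 5432,
  user: "marco",
  password: "secret",
  database: "mydb",
};
const pool = new Pool({ ...config, min: 2, max: 20 });
const app = express();

// Slow path: full connect/disconnect on every request
app.get("/old", async (req, res) => {
  const client = new Client(config);
  await client.connect();
  const { rows } = await client.query("SELECT * FROM employees");
  await client.end();
  res.json(rows);
});

// Fast path: borrow an already-open connection from the pool
app.get("/pool", async (req, res) => {
  const { rows } = await pool.query("SELECT * FROM employees");
  res.json(rows);
});

app.listen(9000, () => console.log("Listening on 9000"));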
Run the load test to compare performance:

pnpm test

OR

npm test

OR

node load-test.js
This will:
- run 10,000 requests against the /old endpoint with a concurrency of 25
- run 10,000 requests against the /pool endpoint with a concurrency of 25

You can also test manually from the browser console:
// Test old method (100 requests)
for (let i = 0; i < 100; i++) {
fetch("http://localhost:9000/old")
.then(res => res.json())
.then(console.log);
}
// Test pool method (100 requests)
for (let i = 0; i < 100; i++) {
fetch("http://localhost:9000/pool")
.then(res => res.json())
.then(console.log);
}
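To get a rough end-to-end timing from the console, you can also fire the requests together and await them all (a sketch; same endpoint and port as above):

// Time 100 concurrent requests against the pooled endpoint
const t0 = performance.now();
await Promise.all(
  Array.from({ length: 100 }, () => fetch("http://localhost:9000/pool"))
);
console.log(`100 requests took ${(performance.now() - t0).toFixed(1)} ms`);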
When you run the load test, you should see output similar to:
🔥 Warming up connection pool...
🚀 Starting test for /old
Total requests: 10000
Concurrency : 25
✅ /old TEST COMPLETE
Success requests : 10000
Failed requests : 0
Avg latency : 45.67 ms
Min latency : 12.34 ms
Max latency : 123.45 ms
🚀 Starting test for /pool
Total requests: 10000
Concurrency : 25
✅ /pool TEST COMPLETE
Success requests : 10000
Failed requests : 0
Avg latency : 2.34 ms
Min latency : 0.56 ms
Max latency : 15.67 ms
🎯 LOAD TEST FINISHED
The pooled version should be significantly faster (often a 10-20x improvement), since every /old request pays for a fresh TCP handshake and PostgreSQL authentication before it can run the query, while /pool reuses connections that are already open.
In api.js, the pool is configured with:
const pool = new Pool({
host: "localhost",
port: 5432,
user: "marco",
password: "secret",
database: "mydb",
min: 2, // Minimum connections in pool
max: 20, // Maximum connections in pool
});
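If you want to watch the pool while a test runs, node-postgres exposes a few live counters on the Pool instance; this logging snippet is illustrative and not part of api.js:

// Print pool utilization once per second (illustrative, not in api.js)
setInterval(() => {
  console.log(
    `total=${pool.totalCount} idle=${pool.idleCount} waiting=${pool.waitingCount}`
  );
}, 1000);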
In load-test.js, you can adjust:
const TOTAL_REQUESTS = 10000; // Total requests per test
const CONCURRENCY = 25; // Concurrent requests
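The actual implementation lives in load-test.js; a minimal sketch of the same idea, with CONCURRENCY workers draining a shared request budget (the helper names here are hypothetical), looks like:

// Minimal sketch of a concurrency-limited load loop (not the actual load-test.js)
async function runTest(url, totalRequests, concurrency) {
  let sent = 0;
  const latencies = [];

  // Each worker keeps issuing requests until the shared budget is used up
  async function worker() {
    while (sent < totalRequests) {
      sent++;
      const start = performance.now();
      await fetch(url); // fetch and performance are global in Node 18+
      latencies.push(performance.now() - start);
    }
  }

  await Promise.all(Array.from({ length: concurrency }, () => worker()));

  const avg = latencies.reduce((a, b) => a + b, 0) / latencies.length;
  console.log(`${url}: avg latency ${avg.toFixed(2)} ms over ${latencies.length} requests`);
}

// await runTest("http://localhost:9000/pool", TOTAL_REQUESTS, CONCURRENCY);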
Stop the database:
docker-compose down
Remove database volume (optional):
docker-compose down -v
A .gitignore file is included to exclude node_modules/, logs, and other common files.