- It's completely okay even if your Redis data is lost, because at the end of the day it was just cached data. Your primary DB still has all your data, which can easily be re-cached on subsequent requests once Redis is up and running again.
Think of an in-memory data structure store as a super-fast database that stores data in your system’s RAM instead of on disk. Since RAM is much faster than traditional storage (like an SSD or hard drive), retrieving and updating data happens almost instantly.
However, working in memory normally means the data would be lost when the system restarts. To prevent data loss, Redis (a popular in-memory database) provides two ways to persist data:
RDB takes a snapshot (a full copy) of all the data at specific time intervals. This is like taking a backup photo of your entire database at regular times.
save 900 1 # Save every 900 seconds (15 min) if at least 1 key changed
save 300 10 # Save every 300 seconds (5 min) if at least 10 keys changed
save 60 10000 # Save every 60 seconds (1 min) if at least 10,000 keys changed
- 🔹 The more often Redis saves, the less data you lose in case of a crash.
- 🔹 But frequent saving can slow performance slightly because it takes a full snapshot each time.
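The `save` rules above can be read as a predicate: take a snapshot when, for any rule, enough time has passed since the last save *and* at least that many keys changed. A rough JavaScript sketch of that logic (an illustration only, not Redis's actual implementation):

```javascript
// Simplified illustration of Redis RDB "save <seconds> <changes>" rules.
// A snapshot fires if ANY rule's window has elapsed AND its change
// threshold was met.
const saveRules = [
  { seconds: 900, changes: 1 },     // save 900 1
  { seconds: 300, changes: 10 },    // save 300 10
  { seconds: 60, changes: 10000 },  // save 60 10000
];

function shouldSnapshot(rules, secondsSinceLastSave, keysChanged) {
  return rules.some(
    (r) => secondsSinceLastSave >= r.seconds && keysChanged >= r.changes
  );
}

console.log(shouldSnapshot(saveRules, 70, 15000)); // true  (60s rule met)
console.log(shouldSnapshot(saveRules, 70, 5));     // false (no rule met)
console.log(shouldSnapshot(saveRules, 901, 1));    // true  (900s rule met)
```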
AOF works differently. Instead of taking a full snapshot, it logs every write operation (like adding, updating, or deleting data) one by one.
- Every time data changes, Redis adds (or appends) that change to a file.
- If the server crashes, Redis can replay this file to restore all operations step by step.
- 🔹 AOF keeps more recent changes compared to RDB.
- 🔹 But the file can grow large over time because every single action is recorded.
- ✅ RDB is better if you want less disk usage and periodic backups (faster recovery, but you might lose recent changes).
- ✅ AOF is better if you want every single change saved (more reliable, but uses more disk space).
- ✅ Many systems use both RDB and AOF together for a balance of speed and safety.
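The AOF replay idea can be sketched with a plain array as the log and a `Map` as the dataset. This is a toy model for intuition, not how Redis actually implements AOF:

```javascript
// Toy model of AOF: every write is appended to a log, and replaying
// the log from the start reconstructs the full dataset.
const aofLog = [];

function execute(store, op) {
  if (op.cmd === "SET") store.set(op.key, op.value);
  if (op.cmd === "DEL") store.delete(op.key);
}

function write(store, op) {
  aofLog.push(op);    // persist the operation (append-only)
  execute(store, op); // apply it to the in-memory store
}

// Normal operation:
const db = new Map();
write(db, { cmd: "SET", key: "bike:1", value: "Deimos" });
write(db, { cmd: "SET", key: "bike:2", value: "Ares" });
write(db, { cmd: "DEL", key: "bike:2" });

// After a "crash", replay the log into a fresh store:
const recovered = new Map();
for (const op of aofLog) execute(recovered, op);
console.log(recovered.get("bike:1")); // "Deimos"
console.log(recovered.has("bike:2")); // false
```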
docker run -d --name redis-stack -p 6379:6379 -p 8001:8001 redis/redis-stack:latest
- Here we are running Redis in Docker.
- Port `6379` is where the actual Redis server is running.
- Port `8001` is where the Redis GUI (RedisInsight) is running.
- Run `docker exec -it 3af54f82e390 redis-cli` (use your own container ID) to switch to redis-cli.
- To test, run `PING`; you will get `PONG`.
- Now you're connected to your Redis server.
Redis strings store sequences of bytes (text, serialized objects, binary arrays, images, etc.). They are the simplest value type in Redis and are often used for caching. Keys in Redis are also strings, so using string values means mapping one string to another.
- ✔ Text ("hello world")
- ✔ Numbers ("100", "3.14")
- ✔ Serialized objects (JSON, XML, etc.): `JSON.stringify()` converts a JavaScript object into a string representation.
- ✔ Binary data (images, audio, video, etc.)
SET key value
GET key
✔ Example:
SET bike:1 Deimos
GET bike:1
📌 Explanation:
- SET assigns a value (Deimos) to a key (bike:1).
- GET retrieves the value associated with the key.
⚠️ Important: If the key already exists, SET will replace the existing value.
SET key value NX # Set only if key does NOT exist
SET key value XX # Set only if key ALREADY exists
✔ Example:
SET bike:1 "bike" NX # Won't set if bike:1 already exists
SET bike:1 "bike" XX # Will only set if bike:1 exists
📌 Explanation:
- NX (No eXist) ensures the key is only set if it doesn’t exist.
- XX (eXist) ensures the key is only set if it already exists.
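The NX/XX semantics are easy to model on a plain `Map`. This sketch is only an illustration of the rules above, not Redis internals:

```javascript
// Simplified semantics of SET with NX / XX flags on a plain Map.
function set(store, key, value, flag) {
  if (flag === "NX" && store.has(key)) return null;  // key exists: refuse
  if (flag === "XX" && !store.has(key)) return null; // key missing: refuse
  store.set(key, value);
  return "OK";
}

const db = new Map();
console.log(set(db, "bike:1", "Deimos", "NX")); // "OK"  (key was absent)
console.log(set(db, "bike:1", "Ares", "NX"));   // null  (key already exists)
console.log(set(db, "bike:1", "Ares", "XX"));   // "OK"  (key exists)
console.log(set(db, "bike:2", "Vanth", "XX"));  // null  (key absent)
```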
MSET key1 value1 key2 value2 key3 value3
MGET key1 key2 key3
✔ Example:
MSET bike:1 "Deimos" bike:2 "Ares" bike:3 "Vanth"
MGET bike:1 bike:2 bike:3
📌 Explanation:
- MSET allows setting multiple key-value pairs in one command.
- MGET retrieves multiple values at once, reducing latency.
INCR key # Increments value by 1
INCRBY key N # Increments value by N
✔ Example:
SET total_crashes 0
INCR total_crashes # 1
INCRBY total_crashes 10 # 11
📌 Explanation:
- INCR converts the string into an integer and increases it.
- INCRBY allows incrementing by any number.
- This operation is atomic, meaning multiple clients won't cause race conditions.
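A toy model of what INCR/INCRBY do to the stored string (real Redis additionally guarantees atomicity because commands execute one at a time on a single thread):

```javascript
// INCR/INCRBY parse the stored string as an integer, add N, and store
// the result back as a string. Non-integer values are an error.
function incrBy(store, key, n) {
  const raw = store.get(key) ?? "0";
  if (!/^-?\d+$/.test(raw)) throw new Error("value is not an integer");
  const next = parseInt(raw, 10) + n;
  store.set(key, String(next));
  return next;
}

const counters = new Map();
counters.set("total_crashes", "0");
console.log(incrBy(counters, "total_crashes", 1));  // 1
console.log(incrBy(counters, "total_crashes", 10)); // 11
```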
DECR key # Decrements value by 1
DECRBY key N # Decrements value by N
✔ Example:
DECR stock_count
DECRBY stock_count 5
📌 Explanation:
- Works similarly to INCR, but decreases values.
🔹 GETSET – Updating a Value While Retrieving the Old One
GETSET key new_value
✔ Example:
SET visitor_count 100
GETSET visitor_count 0 # Returns 100 and sets visitor_count to 0
- Useful for resetting counters while capturing the old value.
- ✔ Maximum string size: 512 MB
- ✔ Most string operations are O(1) (constant time), making them fast.
- ✔ Be careful with random-access commands like SUBSTR, GETRANGE, SETRANGE, as they can be O(n) and cause performance issues with large strings.
- Connect the client at `./client.js`.
const { Redis } = require('ioredis');

// If no options are given, ioredis connects to 127.0.0.1:6379 by default:
// const client = new Redis(6379);

// Or configure host and port explicitly:
const client = new Redis({
host: process.env.REDIS_HOST || '127.0.0.1',
port: process.env.REDIS_PORT || 6379,
});

module.exports = { client };
- Use the client for specific operations.
const { client } = require('./client');
const res1 = await client.set("bike:1", "Deimos");
console.log(res1); // OK
const res2 = await client.get("bike:1");
console.log(res2); // Deimos
- You can also make a key expire after a given time period.
await client.expire("bike:1", 10); // expire this particular key after 10 seconds
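One way to picture TTLs is an expiry timestamp stored next to each value and checked on every read. This is a simplified sketch, not Redis's real implementation (which also runs an active expiration cycle in the background):

```javascript
// Toy model of EXPIRE: store an expiry timestamp next to each value
// and treat the key as missing once that moment has passed.
function setWithTTL(store, key, value, ttlMs, now = Date.now()) {
  store.set(key, { value, expiresAt: now + ttlMs });
}

function get(store, key, now = Date.now()) {
  const entry = store.get(key);
  if (!entry) return null;
  if (now >= entry.expiresAt) {
    store.delete(key); // lazily evict the expired key on access
    return null;
  }
  return entry.value;
}

const kv = new Map();
setWithTTL(kv, "bike:1", "Deimos", 10_000, 0); // expires at t = 10s
console.log(get(kv, "bike:1", 5_000));  // "Deimos" (still alive)
console.log(get(kv, "bike:1", 10_000)); // null (expired)
```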
Redis lists are linked lists of string values. They are commonly used for:
- Implementing stacks and queues.
- Building queue management for background worker systems.
- `LPUSH` adds an element to the head (left) of a list.
- `RPUSH` adds an element to the tail (right) of a list.
LPUSH mylist "A"
RPUSH mylist "B"
LRANGE mylist 0 -1
Output: ["A", "B"]
LPUSH mylist "X"
RPUSH mylist "Y"
LRANGE mylist 0 -1
Output: ["X", "A", "B", "Y"]
- `LPOP` removes and returns an element from the head.
- `RPOP` removes and returns an element from the tail.
LPUSH mylist "A" "B" "C"
LPOP mylist
Output: "C"
RPUSH mylist "X" "Y" "Z"
RPOP mylist
Output: "Z"
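The head/tail semantics above can be mimicked with a JavaScript array where index 0 is the head. A sketch for intuition only, not how Redis stores lists:

```javascript
// Array-based sketch of list push/pop semantics:
// head = index 0 (left), tail = last index (right).
const mylist = [];

const lpush = (list, ...vals) => vals.forEach((v) => list.unshift(v));
const rpush = (list, ...vals) => vals.forEach((v) => list.push(v));
const lpop = (list) => list.shift();
const rpop = (list) => list.pop();

lpush(mylist, "A", "B", "C"); // each value is inserted at the head in turn
console.log(mylist);          // ["C", "B", "A"]
console.log(lpop(mylist));    // "C"

rpush(mylist, "X", "Y", "Z");
console.log(rpop(mylist));    // "Z"
```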
- `LLEN` returns the number of elements in a list.
RPUSH mylist "A" "B" "C"
LLEN mylist
Output: 3
LPUSH newlist "X" "Y"
LLEN newlist
Output: 2
- `LMOVE` atomically moves an element from one list to another.
- You choose which end to take from and which end to push to (LEFT or RIGHT on each side, in any combination).
LPUSH source "A" "B"
LMOVE source destination LEFT LEFT
Output: Moves "B" from source to destination.
RPUSH source "X" "Y"
LMOVE source destination RIGHT RIGHT
Output: Moves "Y" from source to destination.
- `LRANGE list start end` extracts a range of elements.
- Negative indexes can be used to count from the end.
RPUSH mylist "A" "B" "C" "D"
LRANGE mylist 1 2
Output: ["B", "C"]
LRANGE mylist -2 -1
Output: ["C", "D"]
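Negative-index handling can be sketched like this (an illustration of the semantics, not Redis internals):

```javascript
// LRANGE with negative indexes: -1 is the last element, -2 the
// second-to-last, and the range is inclusive on both ends.
function lrange(list, start, stop) {
  const n = list.length;
  if (start < 0) start = Math.max(n + start, 0);
  if (stop < 0) stop = n + stop;
  return list.slice(start, stop + 1); // slice's end is exclusive, so +1
}

const items = ["A", "B", "C", "D"];
console.log(lrange(items, 1, 2));   // ["B", "C"]
console.log(lrange(items, -2, -1)); // ["C", "D"]
console.log(lrange(items, 0, -1));  // ["A", "B", "C", "D"] (the whole list)
```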
- `LTRIM` keeps only the elements within a given range and removes the rest.
RPUSH mylist "A" "B" "C" "D"
LTRIM mylist 1 2
List becomes: ["B", "C"]
RPUSH numbers "1" "2" "3" "4"
LTRIM numbers 0 1
List becomes: ["1", "2"]
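A common real-world use of LTRIM is capping a list at the N most recent entries, e.g. a "latest events" feed. A sketch of that pattern with a plain array standing in for the Redis list:

```javascript
// Pattern: LPUSH newest-first, then LTRIM 0 N-1 to drop everything older.
function logEvent(list, event, maxEntries) {
  list.unshift(event);     // LPUSH list event
  list.splice(maxEntries); // LTRIM list 0 maxEntries-1
}

const recent = [];
for (const e of ["e1", "e2", "e3", "e4", "e5"]) logEvent(recent, e, 3);
console.log(recent); // ["e5", "e4", "e3"]: only the 3 newest survive
```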
- `BLPOP` waits (blocks) until an element is available in the list, removing and returning it from the head as soon as one is inserted within the given time frame; if the timeout elapses first, it returns nil.
- `BRPOP` does the same from the tail.
BLPOP mylist 10
Waits for 10 seconds to pop an element from the head.
BRPOP mylist 5
Waits for 5 seconds to pop an element from the tail.
- `BLMOVE` moves an element between lists, blocking if the source is empty.
BLMOVE source destination LEFT RIGHT 10
Waits up to 10 seconds to move an element from source to destination.
BLMOVE queue processed RIGHT LEFT 5
Waits up to 5 seconds to move an element from queue to processed.
- Implemented as linked lists, ensuring constant-time insertions and deletions.
- Fast for queue and stack operations but slower for indexed access.
- Efficient memory usage when handling large lists.
RPUSH queue "task1" "task2"
LPUSH queue "urgent_task"
LRANGE queue 0 -1
Output: ["urgent_task", "task1", "task2"]
LPOP queue
Output: "urgent_task"
RPOP queue
Output: "task2"
LPUSH pending "taskA"
LMOVE pending processing LEFT LEFT
Output: Moves "taskA" from pending to processing.
RPUSH numbers "1" "2" "3" "4"
LTRIM numbers 0 2
List becomes: ["1", "2", "3"]
Redis Lists provide a fast, flexible way to manage ordered collections of elements. They are particularly useful for queues, stacks, and messaging systems. Mastering the key commands helps optimize performance and memory usage in Redis-based applications.
6. Redis Set :- read here
7. Redis Hashmap :- read here
8. Redis PriorityQueue :- read here
9. Redis Stream :- read here : Used to store fast-changing data like driver locations or sensor temperature readings.
10. Redis Geospatial data :- read here : Helps find anything near you within a radius of X km.
app.get("/", async (req, res) => {
const cachedValue = await client.get("todos");
if (cachedValue) return res.json(JSON.parse(cachedValue)); // cache hit
const { data } = await axios.get('https://jsonplaceholder.typicode.com/todos'); // cache miss: fetch from origin
await client.set("todos", JSON.stringify(data));
await client.expire("todos", 30); // keep the cached copy for 30 seconds
return res.json(data);
});
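The route above implements the cache-aside pattern: check the cache first, fall back to the source of truth on a miss, then populate the cache for later requests. A minimal self-contained sketch of that flow, with a plain `Map` and a stubbed `fetchTodos()` (both made up for illustration) standing in for Redis and the real API:

```javascript
// Cache-aside in miniature. `cache` and `fetchTodos` are stand-ins
// for the Redis client and the upstream API.
const cache = new Map();

async function fetchTodos() {
  return [{ id: 1, title: "buy milk" }]; // pretend this hits the DB/API
}

async function getTodos() {
  const hit = cache.get("todos");
  if (hit) return { source: "cache", data: JSON.parse(hit) };

  const data = await fetchTodos();
  cache.set("todos", JSON.stringify(data)); // with Redis: SET todos <json> EX 30
  return { source: "origin", data };
}

getTodos()
  .then((r1) => {
    console.log(r1.source); // "origin": the first call misses
    return getTodos();
  })
  .then((r2) => {
    console.log(r2.source); // "cache": the second call hits
  });
```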
connection.js
- We create a separate `connection.js` file because in Node, importing a file also executes it. If the connection code lived in `index.js` or `worker.js` and you imported it from there, importing would run that whole file.
- For example, if you put the connection code in `worker.js` and import it in `index.js`, then running index.js (whose only goal is to add tasks to the queue) would also start the worker, because importing worker.js executes it. The worker should only start when you run worker.js directly.
import Redis from "ioredis"
export const connection = new Redis(process.env.REDIS_URI, { maxRetriesPerRequest: null }); // e.g. an Upstash rediss:// URL; keep credentials in env vars, never hard-coded in source
`index.js`: Adds tasks to the queue.
import { Queue } from "bullmq";
import { connection } from "./connection.js";
const logQueue = new Queue("logQueue", {connection});
async function init(){
for(let i=67; i<71; i++){
console.log(`Adding logger ID : ${i} into QUEUE`);
await logQueue.add("logQueue", { loggerId: i});
}
}
init();
`worker.js`: Picks up tasks from the queue and processes them.
import { Worker } from "bullmq";
import {connection} from "./connection.js"
new Worker(
"logQueue",
async (job) => {
console.log(`Processing job ${job.id}: logging for Logger ID ${job.data.loggerId}`);
return new Promise((resolve)=>{
setTimeout(()=>{
resolve(`resolved ${job.data.loggerId}`)
}, 3000)
}).then((resolvedValue)=>console.log(`INSIDE .THEN > ${resolvedValue}`));
},
{ connection }
);
queue.ts
import { prisma } from "./config/db";
import { sendOrderMail } from "./mailer";
import { Queue, Worker } from "bullmq";
import Redis from "ioredis"
const connection = new Redis(process.env.REDIS_URI || "",
{
maxRetriesPerRequest: null,
retryStrategy: (times) => {
console.log(`Redis reconnect attempt #${times}`);
return Math.min(times * 200, 5000); // Retry with backoff
},
reconnectOnError: (err) => {
console.error("Redis error, reconnecting...", err);
return true;
}
},
)
connection.ping()
.then(() => console.log("Redis connected successfully"))
.catch((err) => console.error("Redis connection failed", err));
export const emailQueue = new Queue("emailQueue", {connection});
const worker = new Worker(
"emailQueue",
async (job) => {
try {
const orderId = job.data.orderId;
console.log(`Sending mail for OrderId: ${orderId}`);
await sendOrderMail(orderId);
console.log(`✅ Mail sent for OrderId: ${orderId}`);
} catch (error) {
console.error(`❌ Error processing job ${job.id} for OrderId ${job.data.orderId}:`, error);
throw error;
}
},
{ connection}
);
worker.on("failed", (job, err) => {
console.error(`❌ Job ${job?.id} failed after retries:`, err);
});
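The `retryStrategy` option in the connection config above controls how long ioredis waits before each reconnect attempt. The function implements linear backoff capped at 5 seconds; isolated, it behaves like this:

```javascript
// Linear backoff: attempt #1 waits 200ms, #2 waits 400ms, ...,
// and the delay never exceeds 5000ms.
const retryStrategy = (times) => Math.min(times * 200, 5000);

console.log(retryStrategy(1));  // 200
console.log(retryStrategy(10)); // 2000
console.log(retryStrategy(50)); // 5000 (capped)
```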