Questions
What is the primary difference between Redis and a traditional relational database like PostgreSQL?
A. Redis supports SQL queries, while PostgreSQL does not.
B. Redis stores data on disk by default, while PostgreSQL stores data in RAM.
C. Redis stores data in-memory (RAM) for high speed, while PostgreSQL primarily stores data on disk.
D. Redis is slower but more reliable than PostgreSQL.
Which Redis data structure is best suited for implementing a “Leaderboard” or “Ranking” system?
A. Strings
B. Sets
C. Sorted Sets (ZSets)
D. Lists
In the “Cache-Aside” pattern, what happens when a “Cache Miss” occurs?
A. The application returns an error to the user immediately.
B. The cache automatically fetches data from the database in the background.
C. The application queries the database, writes the result to the cache with a TTL, and then returns the data.
D. The application waits for the cache to be manually updated by an admin.
Why is it critical to set a “Time To Live” (TTL) for cached data?
A. To ensure the data is encrypted.
B. To prevent the cache from filling up with old/stale data and causing memory leaks or high costs.
C. To make the data load faster.
D. To allow multiple users to access the same key.
What is the main benefit of “Redis Serverless” compared to provisioning fixed-size nodes?
A. It is always free.
B. You pay based on actual usage (storage + requests) rather than idle instance time.
C. It runs on your own local computer.
D. It does not support TTLs.
When caching “Chat History” for an LLM application, why might we use the LTRIM command?
A. To delete the entire conversation.
B. To keep only the most recent N messages (Context Window) and remove older ones to save memory/tokens.
C. To encrypt the messages.
D. To translate the messages into another language.
How does “Connection Pooling” optimize Redis performance?
A. It creates a new connection for every single request to ensure freshness.
B. It reuses a set of established connections, reducing the overhead of constantly opening and closing handshakes.
C. It allows unlimited connections to the server.
D. It disconnects users who are idle for too long.
What is a “Cache Hit”?
A. When the requested data is NOT found in the cache.
B. When the cache server crashes.
C. When the requested data IS found in the cache and returned immediately.
D. When the user manually clears their browser cache.
Which scenario is a good candidate for a “Long TTL” (e.g., hours or days)?
A. Real-time stock prices.
B. User session active status.
C. Reference data like “Country Codes” or “Product Categories” that rarely change.
D. The exact location of a delivery driver.
What is the risk of the “Write-Behind” caching strategy?
A. It is too slow for real-time applications.
B. If the cache crashes before the data is asynchronously synced to the database, data loss can occur.
C. It blocks the main thread.
D. It requires more RAM than Write-Through.
In a RAG application, what does “Semantic Caching” store?
A. The exact string match of the user’s question.
B. The vector embedding of the query, allowing retrieval of answers to similar (not just identical) questions.
C. The raw PDF documents.
D. The user’s password.
For AWS ElastiCache, why does failing to use TTLs result in higher costs?
A. AWS charges extra for keys without TTLs.
B. You are charged for every GET request.
C. Stale data consumes RAM, forcing you to vertically scale to larger, more expensive node types (e.g., moving from cache.t3.micro to cache.r6g.large).
D. It slows down the internet connection.
Answer Key
C - Speed vs. Persistence trade-off.
C - ZSets store a score with each member, perfect for ranking.
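A minimal sketch of how a Sorted Set powers a leaderboard. In real Redis you would use ZADD to set scores and ZREVRANGE (or ZRANGE ... REV) to read the top N; here a plain dict stands in for the ZSet, and the function names are illustrative, not a real client API.

```python
def zadd(board: dict, member: str, score: float) -> None:
    """Set or update a member's score, like Redis ZADD."""
    board[member] = score

def top_n(board: dict, n: int) -> list:
    """Return the top-n (member, score) pairs, highest score first,
    mimicking ZREVRANGE leaderboard 0 n-1 WITHSCORES."""
    return sorted(board.items(), key=lambda kv: kv[1], reverse=True)[:n]

leaderboard = {}
zadd(leaderboard, "alice", 420)
zadd(leaderboard, "bob", 900)
zadd(leaderboard, "carol", 750)
print(top_n(leaderboard, 2))  # [('bob', 900), ('carol', 750)]
```

Because the ZSet keeps members ordered by score, a top-N read is cheap no matter how many players exist.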
C - The application “lazily” loads data only when needed.
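A cache-aside sketch of that flow. The `cache` dict (key to `(value, expiry)`) stands in for Redis and `fake_db` stands in for PostgreSQL; both names and the TTL value are illustrative assumptions.

```python
import time

fake_db = {"user:1": {"name": "Ada"}}   # stand-in for the database
cache = {}                               # stand-in for Redis: key -> (value, expiry)
TTL_SECONDS = 60

def get_user(key: str):
    entry = cache.get(key)
    if entry and entry[1] > time.time():             # cache hit, not expired
        return entry[0]
    value = fake_db.get(key)                         # cache miss: query the DB
    cache[key] = (value, time.time() + TTL_SECONDS)  # write result back with a TTL
    return value

print(get_user("user:1"))  # miss: hits the DB, then populates the cache
print(get_user("user:1"))  # hit: served straight from the cache
```

The first call pays the database round-trip; subsequent calls within the TTL are served from memory.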
B - Memory is expensive; cleaning up OLD data is essential.
B - Serverless scales down to zero cost (mostly) when idle.
B - Optimizes for the limited context window of LLMs.
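A sketch of the context-window trim, mimicking `LPUSH chat:123 <msg>` followed by `LTRIM chat:123 0 N-1`. A Python list stands in for the Redis list; `MAX_MESSAGES` is an illustrative limit.

```python
MAX_MESSAGES = 3

def push_message(history: list, msg: str) -> None:
    history.insert(0, msg)       # LPUSH: newest message at the head
    del history[MAX_MESSAGES:]   # LTRIM 0 MAX_MESSAGES-1: drop anything older

chat = []
for m in ["hi", "hello", "how are you?", "fine, thanks"]:
    push_message(chat, m)
print(chat)  # only the 3 most recent messages survive, newest first
```

Capping the list this way bounds both Redis memory per conversation and the tokens fed back into the LLM prompt.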
B - Handshakes are expensive; reuse is efficient.
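A toy pool illustrating why reuse beats reconnecting: the expensive "handshake" runs once per pooled connection, not once per request. `Connection` is a stand-in for a real Redis client connection, not an actual client library.

```python
from collections import deque

class Connection:
    handshakes = 0
    def __init__(self):
        Connection.handshakes += 1   # pretend this is a costly TCP/TLS handshake

class Pool:
    def __init__(self, size: int):
        self._free = deque(Connection() for _ in range(size))
    def acquire(self) -> Connection:
        return self._free.popleft()  # hand out an already-established connection
    def release(self, conn: Connection) -> None:
        self._free.append(conn)      # return it for the next request to reuse

pool = Pool(size=2)
for _ in range(100):                 # 100 requests...
    conn = pool.acquire()
    pool.release(conn)
print(Connection.handshakes)         # ...but only 2 handshakes ever happened
```

Real clients such as redis-py expose the same idea through a connection-pool object shared across requests.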
C - The happy path!
C - Static data doesn’t degrade quickly.
B - Durability is compromised for write speed.
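A write-behind sketch showing the at-risk window: writes land in the cache immediately and are queued for asynchronous flushing to the database, so anything still queued when the cache dies is lost. All names here are illustrative stand-ins.

```python
cache = {}     # stand-in for Redis
db = {}        # stand-in for the database
pending = []   # writes accepted but not yet persisted

def write(key, value):
    cache[key] = value            # fast path: cache updated right away
    pending.append((key, value))  # durability deferred to a later flush

def flush():
    while pending:
        k, v = pending.pop(0)
        db[k] = v                 # slow path: persist asynchronously, in batches

write("score:alice", 10)
# At this moment the write exists only in the cache and the queue --
# a crash here loses it. flush() closes the window:
flush()
print(db["score:alice"])  # 10
```

Write-through avoids this risk by persisting synchronously, at the cost of slower writes.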
B - Semantic = Meaning. Vector search finds similar meanings.
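A semantic-cache sketch: queries are matched by embedding similarity rather than exact string equality. The 2-D "embeddings", threshold, and cached entry below are hand-made toys; a real system would use an embedding model and a vector index.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

semantic_cache = [
    # (query embedding, cached answer)
    ((0.9, 0.1), "Redis is an in-memory data store."),
]
THRESHOLD = 0.95

def lookup(query_embedding):
    for emb, answer in semantic_cache:
        if cosine(query_embedding, emb) >= THRESHOLD:
            return answer   # similar enough: semantic cache hit
    return None             # miss: fall through to the full LLM/RAG pipeline

print(lookup((0.88, 0.12)))  # a paraphrased query still hits the cache
print(lookup((0.10, 0.90)))  # an unrelated query misses -> None
```

Because paraphrases of a cached question embed near it, the expensive LLM call is skipped far more often than with exact-match caching.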
C - You pay for provisioned capacity (RAM/CPU), not just data size.