Questions

  1. What is the primary difference between Redis and a traditional relational database like PostgreSQL?

    • A. Redis supports SQL queries, while PostgreSQL does not.

    • B. Redis stores data on disk by default, while PostgreSQL stores data in RAM.

    • C. Redis stores data in memory (RAM) for high speed, while PostgreSQL primarily stores data on disk.

    • D. Redis is slower but more reliable than PostgreSQL.

  2. Which Redis data structure is best suited for implementing a “Leaderboard” or “Ranking” system?

    • A. Strings

    • B. Sets

    • C. Sorted Sets (ZSets)

    • D. Lists

  3. In the “Cache-Aside” pattern, what happens when a “Cache Miss” occurs?

    • A. The application returns an error to the user immediately.

    • B. The cache automatically fetches data from the database in the background.

    • C. The application queries the database, writes the result to the cache with a TTL, and then returns the data.

    • D. The application waits for the cache to be manually updated by an admin.

  4. Why is it critical to set a “Time To Live” (TTL) for cached data?

    • A. To ensure the data is encrypted.

    • B. To prevent the cache from filling up with old/stale data and causing memory leaks or high costs.

    • C. To make the data load faster.

    • D. To allow multiple users to access the same key.

  5. What is the main benefit of “Redis Serverless” compared to provisioning fixed-size nodes?

    • A. It is always free.

    • B. You pay based on actual usage (storage + requests) rather than idle instance time.

    • C. It runs on your own local computer.

    • D. It does not support TTLs.

  6. When caching “Chat History” for an LLM application, why might we use the LTRIM command?

    • A. To delete the entire conversation.

    • B. To keep only the most recent N messages (Context Window) and remove older ones to save memory/tokens.

    • C. To encrypt the messages.

    • D. To translate the messages into another language.

  7. How does “Connection Pooling” optimize Redis performance?

    • A. It creates a new connection for every single request to ensure freshness.

    • B. It reuses a set of established connections, reducing the overhead of constantly opening and closing handshakes.

    • C. It allows unlimited connections to the server.

    • D. It disconnects users who are idle for too long.

  8. What is a “Cache Hit”?

    • A. When the requested data is NOT found in the cache.

    • B. When the cache server crashes.

    • C. When the requested data IS found in the cache and returned immediately.

    • D. When the user manually clears their browser cache.

  9. Which scenario is a good candidate for a “Long TTL” (e.g., hours or days)?

    • A. Real-time stock prices.

    • B. User session active status.

    • C. Reference data like “Country Codes” or “Product Categories” that rarely change.

    • D. The exact location of a delivery driver.

  10. What is the risk of the “Write-Behind” caching strategy?

    • A. It is too slow for real-time applications.

    • B. If the cache crashes before the data is asynchronously synced to the database, data loss can occur.

    • C. It blocks the main thread.

    • D. It requires more RAM than Write-Through.

  11. In a RAG application, what does “Semantic Caching” store?

    • A. The exact string match of the user’s question.

    • B. The vector embedding of the query, allowing retrieval of answers to similar (not just identical) questions.

    • C. The raw PDF documents.

    • D. The user’s password.

  12. For AWS ElastiCache, why does failing to use TTLs result in higher costs?

    • A. AWS charges extra for keys without TTLs.

    • B. You are charged for every GET request.

    • C. Stale data consumes RAM, forcing you to vertically scale to larger, more expensive node types (e.g., moving from cache.t3.micro to cache.r6g.large).

    • D. It slows down the internet connection.

Answer Key

  1. C - Speed vs. Persistence trade-off.

  2. C - ZSets store a score with each member, perfect for ranking.

  3. C - The application “lazily” loads data only when needed.

  4. B - Memory is expensive; cleaning up OLD data is essential.

  5. B - Serverless scales cost down to (nearly) zero when idle.

  6. B - Optimizes for the limited context window of LLMs.

  7. B - Handshakes are expensive; reuse is efficient.

  8. C - The happy path!

  9. C - Static data doesn’t degrade quickly.

  10. B - Durability is compromised for write speed.

  11. B - Semantic = Meaning. Vector search finds similar meanings.

  12. C - You pay for provisioned capacity (RAM/CPU), not just data size.
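The cache-aside flow from Q3 and Q4 (and the hit/miss vocabulary from Q8) can be sketched in a few lines. This is a minimal in-process illustration: the `TTLCache` class is a stand-in for Redis `GET`/`SETEX` semantics, and `get_user`/`db_lookup` are hypothetical names, not a real client API.

```python
import time

class TTLCache:
    """Tiny in-process stand-in for Redis GET/SETEX semantics."""
    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:  # TTL elapsed: behaves like a miss
            del self._store[key]
            return None
        return value

    def setex(self, key, ttl_seconds, value):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

def get_user(cache, db_lookup, user_id, ttl=300):
    key = f"user:{user_id}"
    cached = cache.get(key)       # 1. try the cache first
    if cached is not None:
        return cached             # cache hit: return immediately
    value = db_lookup(user_id)    # 2. cache miss: query the database
    cache.setex(key, ttl, value)  # 3. write the result back with a TTL
    return value
```

The TTL (Q4) is what keeps stale entries from accumulating: once it elapses, the next read falls through to the database again.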
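For Q2, the reason Sorted Sets fit leaderboards is that every member carries a score and Redis keeps members ordered by it. A pure-Python sketch of the `ZADD` / `ZREVRANGE ... WITHSCORES` behavior (tie-breaking simplified; function names are illustrative, not redis-py calls):

```python
def zadd(board, member, score):
    board[member] = score  # like ZADD: insert or update the member's score

def zrevrange_withscores(board, start, stop):
    """Like ZREVRANGE start stop WITHSCORES: highest score first."""
    ranked = sorted(board.items(), key=lambda kv: (-kv[1], kv[0]))
    return ranked[start:stop + 1]

board = {}
zadd(board, "alice", 120)
zadd(board, "bob", 95)
zadd(board, "carol", 150)
top3 = zrevrange_withscores(board, 0, 2)
```

With a real Redis sorted set, the ranking stays sorted on every write, so fetching the top N is O(log N + N) rather than a full re-sort.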
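The chat-history trimming in Q6 is typically `LPUSH` (newest message at the head) followed by `LTRIM key 0 N-1` (keep only the newest N). A minimal Python-list sketch of that pair, with `append_message` as an illustrative name:

```python
def append_message(history, message, max_messages=20):
    """LPUSH + LTRIM 0 max_messages-1: keep only the newest messages."""
    history.insert(0, message)  # LPUSH: newest message goes to the head
    del history[max_messages:]  # LTRIM: drop everything past the window
    return history
```

Capping the list bounds both Redis memory use and the token count sent to the LLM as context.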
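Q7's connection pooling can be shown generically: pay the connection/handshake cost once up front, then check connections in and out. A sketch using the standard-library queue (a real deployment would use a client's built-in pool, e.g. redis-py's `ConnectionPool`; this `factory`-based class is illustrative):

```python
import queue

class ConnectionPool:
    """Hand out pre-opened connections instead of dialing per request."""
    def __init__(self, factory, size):
        self._idle = queue.Queue()
        for _ in range(size):       # handshake cost paid once, at startup
            self._idle.put(factory())

    def acquire(self):
        return self._idle.get()     # reuse an established connection

    def release(self, conn):
        self._idle.put(conn)        # return it for the next caller
```

Option A in the question is the anti-pattern: a fresh TCP (plus TLS/AUTH) handshake per request can cost more than the Redis command itself.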
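Q11's semantic cache keys on the *embedding* of the query, so "near-duplicate" questions can hit. A toy sketch with cosine similarity over hand-made vectors (in practice the embeddings come from a model and the search runs in a vector index; `semantic_lookup` and the threshold value are illustrative assumptions):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def semantic_lookup(cache, query_vec, threshold=0.9):
    """Return the cached answer whose stored query embedding is most
    similar to query_vec, if the similarity clears the threshold."""
    best_answer, best_sim = None, threshold
    for stored_vec, answer in cache:
        sim = cosine(stored_vec, query_vec)
        if sim >= best_sim:
            best_answer, best_sim = answer, sim
    return best_answer
```

An exact-string cache (option A) would miss "What's France's capital?" after caching "What is the capital of France?"; a semantic cache can serve both from one entry.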