# Basic AI Fundamentals Quiz

Each item below lists the training unit, lecture, topic, difficulty level and mark, the question with answer options A–D, the correct answer, and an explanation.

## 1. RAG Architecture
Unit 1: Basic AI Fundamentals, Lec1 (Medium, 1 mark)

What is RAG (Retrieval-Augmented Generation), a hybrid AI architecture, designed to do?

- A. Increase the speed of natural language processing
- B. Reduce the cost of training language models
- C. Enhance the quality and reliability of Large Language Models
- D. Increase the creativity of language models

Answer: C

Explanation: RAG is designed to enhance the quality and reliability of Large Language Models (LLMs) by integrating an information retrieval step from an external knowledge base before the LLM generates text.

## 2. RAG Core Problems
Unit 1: Basic AI Fundamentals, Lec1 (Easy, 1 mark)

What is one of the core technical problems that RAG solves?

- A. Reduce hallucination (making up information)
- B. Improve data retrieval speed
- C. Increase data storage capacity
- D. Enhance information security

Answer: A

Explanation: RAG addresses limitations of traditional LLMs such as hallucination, outdated knowledge, lack of transparency, and difficulty accessing specialized knowledge.

## 3. RAG vs Fine-tuning
Unit 1: Basic AI Fundamentals, Lec1 (Medium, 1 mark)

What is the advantage of RAG over fine-tuning when updating knowledge for LLMs?

- A. RAG is only suitable for unstructured data
- B. RAG requires greater computing resources
- C. RAG has lower transparency
- D. RAG allows faster knowledge updates

Answer: D

Explanation: RAG allows quick, nearly instant knowledge updates by updating the vector database, while fine-tuning requires retraining the model, which is expensive and slower.

## 4. RAG Use Cases
Unit 1: Basic AI Fundamentals, Lec1 (Medium, 1 mark)

When should you choose RAG instead of fine-tuning an LLM?

- A. When you need to add factual knowledge and answer questions based on new data
- B. When you need to reduce model operating costs
- C. When you need to enhance the model’s reasoning ability
- D. When you need to adjust the model’s behavior and style

Answer: A

Explanation: RAG is suitable when you need to add factual knowledge and answer questions based on new data, while fine-tuning is appropriate when you need to adjust behavior or style, or to learn a new skill.

## 5. RAG Pipeline
Unit 1: Basic AI Fundamentals, Lec1 (Easy, 1 mark)

In the RAG architecture, which phase occurs once or periodically to prepare data?

- A. Query vectorization phase
- B. Similarity search phase
- C. Retrieval and answer generation phase (online)
- D. Data indexing phase (offline)

Answer: D

Explanation: The data indexing phase runs offline, once or periodically, to prepare the data that RAG will later retrieve from.
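The two phases can be sketched end to end. This is a minimal illustration only: the character-count `embed` function is a toy stand-in for a real embedding model, and the documents are invented.

```python
# Toy embedding: letter-frequency vector (stand-in for a real embedding model).
def embed(text):
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

# Offline indexing phase: runs once or periodically to prepare the data.
documents = ["RAG retrieves documents", "Cats sleep all day"]
index = [(doc, embed(doc)) for doc in documents]

# Online phase: vectorize the query, then search the index by similarity.
def retrieve(query, index, top_k=1):
    qv = embed(query)
    ranked = sorted(index, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:top_k]]
```

The key point the question tests is visible in the structure: building `index` is separated from answering queries, so documents can be re-indexed on their own schedule.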

## 6. Chunking
Unit 1: Basic AI Fundamentals, Lec1 (Easy, 1 mark)

What is the purpose of dividing data into smaller text chunks in the ‘Load and Chunk’ step?

- A. To ensure semantics are not lost and optimize for searching
- B. To simplify the vectorization process
- C. To reduce the storage capacity of data
- D. To speed up data loading into the system

Answer: A

Explanation: Chunking ensures that semantics are not lost and optimizes the text for searching.
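The ‘Load and Chunk’ step can be illustrated with the simplest strategy, fixed-size chunks with overlap; real systems often split at semantic boundaries instead (see question 29), and the character-based sizes here are arbitrary choices for the sketch.

```python
def chunk_text(text, chunk_size=50, overlap=10):
    """Split text into overlapping chunks (character-based for simplicity).

    The overlap keeps a sentence that straddles a boundary from being
    split in a way that loses its meaning entirely."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks
```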

## 7. Vector Similarity
Unit 1: Basic AI Fundamentals, Lec1 (Medium, 1 mark)

What is the most common method for measuring similarity between query vectors and document vectors in a Vector Database?

- A. Manhattan distance
- B. Jaccard similarity
- C. Cosine Similarity
- D. Euclidean distance

Answer: C

Explanation: Cosine Similarity, which measures the cosine of the angle between two vectors, is the most common method.
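Cosine similarity follows directly from its definition, the dot product of the two vectors divided by the product of their norms:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between u and v: dot(u, v) / (|u| * |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Identical directions score 1.0; orthogonal vectors score 0.0.
```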

## 8. RAG Online Phase
Unit 1: Basic AI Fundamentals, Lec1 (Easy, 1 mark)

What happens to the user’s question in the first step of the ‘Retrieval and Answer Generation’ phase?

- A. The question is stored in the database
- B. The question is divided into smaller chunks
- C. The question is translated to another language
- D. The question is vectorized using an Embedding model

Answer: D

Explanation: The user’s question is vectorized using an Embedding model.

## 9. Embedding Quality
Unit 1: Basic AI Fundamentals, Lec1 (Medium, 1 mark)

The quality of which component directly affects the effectiveness of the entire RAG system?

- A. Embedding model
- B. Similarity search method
- C. Vector database
- D. Prompting technique

Answer: A

Explanation: The quality of the Embedding model directly affects the effectiveness of the entire system.

## 10. Softmax Function
Unit 1: Basic AI Fundamentals, Lec1 (Hard, 1 mark)

In an LLM, what is the role of the Softmax function?

- A. Convert scores (logits) into a probability distribution to select the most likely word
- B. Filter out irrelevant sentences or information in text chunks
- C. Calculate scores (logits) for all words in the vocabulary
- D. Search for suitable text chunks

Answer: A

Explanation: The Softmax function converts scores (logits) into a probability distribution, helping the model select the most likely word to appear.
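The logit-to-probability conversion can be written in a few lines; subtracting the maximum logit before exponentiating is a standard trick for numerical stability and does not change the result.

```python
import math

def softmax(logits):
    """Convert raw scores (logits) into a probability distribution."""
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# The highest logit maps to the highest probability, and the
# probabilities sum to 1 -- which is what lets the model pick
# (or sample) the most likely next word.
```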

## 11. HyDE Technique
Unit 1: Basic AI Fundamentals, Lec1 (Hard, 1 mark)

What is the HyDE (Hypothetical Document Embeddings) technique used for?

- A. Expand the input query to improve retrieval results
- B. Re-evaluate the relevance of each (question, chunk) pair
- C. Filter out irrelevant information in text chunks
- D. Combine the power of keyword search and vector search

Answer: A

Explanation: HyDE uses a small LLM to generate a hypothetical document containing the answer, then uses this document’s vector for searching, improving retrieval results.

## 12. Hybrid Search
Unit 1: Basic AI Fundamentals, Lec1 (Medium, 1 mark)

What is Hybrid Search?

- A. A method that combines the power of keyword search and vector search
- B. A method that re-evaluates the relevance of each (question, chunk) pair
- C. A method that transforms questions to improve retrieval results
- D. A method that compresses context before putting it into the prompt

Answer: A

Explanation: Hybrid Search combines keyword search (e.g., BM25) and vector search to achieve more comprehensive results.
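One common way to combine the two rankings is Reciprocal Rank Fusion (RRF). The two ranked lists below are hypothetical inputs standing in for BM25 and vector-search results; `k=60` is the conventional default constant.

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document ids into one ranking.

    Each document scores sum(1 / (k + rank)) over every list it
    appears in, so documents ranked well by BOTH searches rise."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc3", "doc1", "doc4"]   # hypothetical BM25 ranking
vector_hits = ["doc1", "doc2", "doc3"]    # hypothetical vector-search ranking
fused = reciprocal_rank_fusion([keyword_hits, vector_hits])
```

Here `doc1` wins the fused ranking because it ranks highly in both lists, even though neither search ranked it first by itself.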

## 13. Context Compression
Unit 1: Basic AI Fundamentals, Lec1 (Medium, 1 mark)

What is the purpose of Context Compression?

- A. Rearrange potential candidates to select the top quality chunks
- B. Transform input questions to improve retrieval results
- C. Improve the accuracy of information retrieval
- D. Reduce prompt length and help LLM focus on core information

Answer: D

Explanation: Context Compression filters out irrelevant information, reducing prompt length and helping the LLM focus on core information.

## 14. Re-ranker
Unit 1: Basic AI Fundamentals, Lec1 (Medium, 1 mark)

What is the role of a Re-ranker in the RAG process?

- A. Compress text chunks to reduce prompt length
- B. Transform the original question to improve retrieval results
- C. Re-evaluate the relevance of each (question, chunk) pair and reorder them
- D. Search for text chunks based on keywords

Answer: C

Explanation: A Re-ranker re-evaluates the relevance of each (question, chunk) pair and reorders them to select the top-quality chunks.
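The re-ranking step can be sketched as follows. The word-overlap scorer is a toy stand-in: production re-rankers score each (question, chunk) pair with a cross-encoder model, but the reorder-and-truncate logic around it is the same.

```python
def rerank(question, chunks, score_fn, top_n=3):
    """Score each (question, chunk) pair and return the top_n chunks,
    best first."""
    scored = [(score_fn(question, chunk), chunk) for chunk in chunks]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [chunk for _, chunk in scored[:top_n]]

# Toy relevance score: shared-word count (a real re-ranker uses a model).
def word_overlap(question, chunk):
    q_words = set(question.lower().split())
    c_words = set(chunk.lower().split())
    return len(q_words & c_words)

chunks = ["the cat sat", "RAG retrieves relevant chunks", "hello world"]
best = rerank("what does RAG retrieve", chunks, word_overlap, top_n=1)
```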

## 15. Retriever Failure
Unit 1: Basic AI Fundamentals, Lec1 (Medium, 1 mark)

What happens if the retrieval system (retriever) does not find accurate documents in the RAG system?

- A. The system will automatically adjust retrieval parameters to find more suitable documents
- B. The Large Language Model (LLM) cannot answer correctly
- C. The Large Language Model (LLM) will search for information from external sources to compensate for missing data
- D. The Large Language Model (LLM) can still generate accurate answers based on prior knowledge

Answer: B

Explanation: If the retriever does not find the correct documents, no matter how smart the LLM is, it cannot answer correctly.

## 16. Lost in the Middle
Unit 1: Basic AI Fundamentals, Lec1 (Hard, 1 mark)

What does the ‘Lost in the Middle’ syndrome in RAG systems refer to?

- A. The tendency of LLMs to focus on information at the beginning and end of long contexts, ignoring information in the middle
- B. Text chunks having duplicate information in the middle, causing noise in processing
- C. Difficulty integrating LLMs in the middle of the retrieval and generation process
- D. Delays in information retrieval when relevant documents are in the middle position in the database

Answer: A

Explanation: When prompts contain long contexts, LLMs tend to focus only on information at the beginning and end, easily ignoring important details in the middle.
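A common mitigation is to reorder retrieved chunks so the most relevant ones sit at the edges of the prompt, where the model attends best. A sketch, assuming the input list is already sorted best-first:

```python
def reorder_for_long_context(chunks_by_relevance):
    """Place the most relevant chunks at the edges of the context,
    pushing the least relevant toward the middle."""
    front, back = [], []
    for i, chunk in enumerate(chunks_by_relevance):
        if i % 2 == 0:
            front.append(chunk)        # 1st, 3rd, 5th ... go to the front
        else:
            back.insert(0, chunk)      # 2nd, 4th ... go to the back
    return front + back
```

With five chunks ranked c1..c5, this yields the order c1, c3, c5, c4, c2: the two strongest chunks end up first and last, and the weakest sits in the middle where it is most likely to be skimmed over.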

## 17. Faithfulness Evaluation
Unit 1: Basic AI Fundamentals, Lec1 (Medium, 1 mark)

What does ‘Faithfulness’ evaluation in RAG systems measure?

- A. The degree to which the generated answer adheres to the provided context
- B. The speed of processing and generating answers by the system
- C. The relevance of the answer to the user’s question
- D. The system’s ability to retrieve information from different sources

Answer: A

Explanation: Faithfulness measures the degree to which the generated answer adheres to the provided context, i.e., whether the system adds information on its own.

## 18. Attention Mechanism
Unit 1: Basic AI Fundamentals, Lec1 (Hard, 1 mark)

What role does the Attention Mechanism play in the Transformer architecture of RAG systems?

- A. Improve the model’s parallel processing capability, helping to speed up computation
- B. Reduce dependence on fully connected layers in the model
- C. Allow the model to weigh the importance of different words in the input sequence for deep context understanding
- D. Enhance the ability to encode input information into semantic vectors

Answer: C

Explanation: The Attention Mechanism allows the model to weigh the importance of different words in the input sequence for deep context understanding.

## 19. MRR Metric
Unit 1: Basic AI Fundamentals, Lec1 (Hard, 1 mark)

What does the Mean Reciprocal Rank (MRR) metric measure in Retrieval Evaluation?

- A. Measure the system’s ability to synthesize information from different sources
- B. Measure the relevance between the question and the generated answer
- C. Measure the position of the first correct chunk in the returned result list
- D. Measure the percentage of questions for which the system retrieves at least one chunk containing correct answer information

Answer: C

Explanation: Mean Reciprocal Rank (MRR) measures the position of the first correct chunk in the returned result list; the earlier that chunk appears, the higher the MRR score.
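MRR is straightforward to compute from ranked relevance judgments: each query contributes 1/rank of its first correct chunk (or 0 if none is correct), averaged over all queries.

```python
def mean_reciprocal_rank(results_per_query):
    """results_per_query: one list per query, of booleans marking whether
    each returned chunk (in ranked order) is correct."""
    total = 0.0
    for hits in results_per_query:
        for rank, is_correct in enumerate(hits, start=1):
            if is_correct:
                total += 1.0 / rank
                break                  # only the FIRST correct chunk counts
    return total / len(results_per_query)

# Three queries: first correct at rank 1, rank 2, and never.
# MRR = (1/1 + 1/2 + 0) / 3 = 0.5
```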

## 20. Value in RAG
Unit 1: Basic AI Fundamentals, Lec1 (Medium, 1 mark)

In the RAG model, which element represents the actual extracted information?

- A. Key
- B. Query
- C. Key vector dimension (d_k)
- D. Value

Answer: D

Explanation: In the attention mechanism underlying the LLM, the Value represents the actual information that is extracted, while the Query and Key are used to compute relevance weights.

## 21. Multimodal RAG
Unit 1: Basic AI Fundamentals, Lec1 (Easy, 1 mark)

Which RAG development direction allows retrieving information from different types of data such as images, audio, and text?

- A. Multimodal RAG
- B. Internal RAG system
- C. Agentic RAG
- D. RAG Chatbot

Answer: A

Explanation: Multimodal RAG allows retrieving information from different data sources, not just text.

## 22. Agentic RAG
Unit 1: Basic AI Fundamentals, Lec1 (Medium, 1 mark)

Which type of RAG application has the ability to ask sub-questions and interact with external tools to gather information?

- A. Internal document RAG system
- B. Agentic RAG
- C. Multimodal RAG
- D. RAG Chatbot

Answer: B

Explanation: Agentic RAG is more proactive in gathering information by asking sub-questions and interacting with external tools.

## 23. Enterprise RAG
Unit 1: Basic AI Fundamentals, Lec1 (Easy, 1 mark)

Which RAG application helps employees search for information in the company’s internal documents quickly and accurately?

- A. Multimodal RAG
- B. Research and specialized analysis assistant
- C. Smart customer support chatbots
- D. Enterprise internal document RAG system

Answer: D

Explanation: Enterprise internal document RAG systems help employees search for information quickly and accurately.

## 24. Interactive Learning
Unit 1: Basic AI Fundamentals, Lec1 (Medium, 1 mark)

What problem do RAG (Retrieval-Augmented Generation) applications solve in interactive learning?

- A. Limited access to learning materials
- B. Inaccurate assessment of learning outcomes
- C. Boredom and passivity when learning through textbooks
- D. Lack of updated information in textbooks

Answer: C

Explanation: RAG creates interactive tools that allow students to engage with learning materials more actively than by reading traditional textbooks.

## 25. Financial RAG
Unit 1: Basic AI Fundamentals, Lec1 (Medium, 1 mark)

In the financial field, how can RAG support analysts?

- A. Summarize and analyze risks from long financial reports
- B. Manage personal investment portfolios
- C. Predict stock market fluctuations
- D. Automatically create financial reports

Answer: A

Explanation: RAG can summarize and analyze risks from long financial reports, helping analysts save time and make decisions faster.

## 26. E-commerce RAG
Unit 1: Basic AI Fundamentals, Lec1 (Medium, 1 mark)

How does RAG improve product recommendation systems on e-commerce sites?

- A. Retrieve information from detailed descriptions, product reviews, and technical specifications
- B. Optimize product prices based on competitors
- C. Provide 24/7 online customer support services
- D. Enhance the ability to predict customer needs

Answer: A

Explanation: RAG retrieves information from detailed descriptions, product reviews, and technical specifications to provide personalized recommendations, rather than relying solely on click history.

## 27. RAG Distinctive Feature
Unit 1: Basic AI Fundamentals, Lec1 (Medium, 1 mark)

What is the distinctive feature of RAG compared to traditional generative AI systems?

- A. Integration with cloud platforms to increase scalability
- B. Using the most advanced deep learning algorithms
- C. Ability to automatically adjust parameters to optimize performance
- D. Combining the deep language capabilities of LLMs with the accuracy of external knowledge bases

Answer: D

Explanation: RAG combines the language capabilities of LLMs with the accuracy and up-to-date nature of external knowledge bases, creating more reliable and transparent AI applications.

## 28. Vector Database
Unit 1: Basic AI Fundamentals, Lec1 (Easy, 1 mark)

What is the primary purpose of a Vector Database in a RAG system?

- A. Store raw text documents for quick retrieval
- B. Store and efficiently search through vector embeddings
- C. Manage user authentication and access control
- D. Cache frequently asked questions and answers

Answer: B

Explanation: A Vector Database is specifically designed to store and efficiently search through vector embeddings, enabling fast similarity searches in the RAG pipeline.

## 29. Chunking Strategies
Unit 1: Basic AI Fundamentals, Lec1 (Medium, 1 mark)

Which chunking strategy maintains the logical structure of a document by splitting at natural boundaries?

- A. Fixed-size chunking
- B. Random chunking
- C. Semantic chunking
- D. Overlapping chunking

Answer: C

Explanation: Semantic chunking splits documents at natural boundaries (paragraphs, sentences, sections) to maintain logical structure and preserve meaning within each chunk.

## 30. Top-K Retrieval
Unit 1: Basic AI Fundamentals, Lec1 (Easy, 1 mark)

What does the ‘Top-K’ parameter control in RAG retrieval?

- A. The number of most similar documents to retrieve
- B. The maximum length of each chunk
- C. The threshold for similarity scores
- D. The number of re-ranking iterations

Answer: A

Explanation: The Top-K parameter controls how many of the most similar documents are retrieved from the vector database to provide context for the LLM.
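Selecting the Top-K results reduces to sorting similarity scores and keeping the first K; a minimal sketch:

```python
def top_k(similarities, k):
    """Return the indices of the k highest similarity scores, best first."""
    ranked = sorted(range(len(similarities)),
                    key=lambda i: similarities[i], reverse=True)
    return ranked[:k]

# top_k([0.2, 0.9, 0.5, 0.7], 2) keeps the documents at indices 1 and 3.
```

In practice K trades recall against noise: a larger K makes it more likely the right chunk is included, but also lengthens the prompt and dilutes it with less relevant text.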

## 31. Prompt Engineering
Unit 1: Basic AI Fundamentals, Lec1 (Medium, 1 mark)

In RAG systems, what is the role of the system prompt when generating answers?

- A. To store retrieved documents permanently
- B. To instruct the LLM on how to use the retrieved context to generate answers
- C. To perform the similarity search in the vector database
- D. To convert user queries into embeddings

Answer: B

Explanation: The system prompt instructs the LLM on how to use the retrieved context to generate accurate, grounded answers and may include formatting guidelines and constraints.

## 32. Answer Relevance
Unit 1: Basic AI Fundamentals, Lec1 (Medium, 1 mark)

What does ‘Answer Relevance’ measure in RAG evaluation?

- A. How fast the system generates responses
- B. The accuracy of the embedding model
- C. How well the generated answer addresses the user’s original question
- D. The number of retrieved documents used

Answer: C

Explanation: Answer Relevance measures how well the generated answer addresses the user’s original question, ensuring the response is pertinent and useful.

## 33. Context Window
Unit 1: Basic AI Fundamentals, Lec1 (Hard, 1 mark)

What limitation does the ‘context window’ impose on RAG systems?

- A. The maximum number of documents that can be stored
- B. The time limit for generating responses
- C. The minimum similarity score for retrieval
- D. The maximum amount of text that can be processed by the LLM at once

Answer: D

Explanation: The context window limits the maximum amount of text (retrieved chunks + query + system prompt) that can be processed by the LLM at once, requiring careful management of chunk sizes.
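One way to respect the context window is to trim the retrieved chunks to a token budget, keeping the highest-ranked chunks that fit. The whitespace word counter here is a naive stand-in for the model's real tokenizer.

```python
def fit_to_context_window(chunks, budget,
                          count_tokens=lambda s: len(s.split())):
    """Greedily keep the highest-ranked chunks that fit in the token budget.

    chunks are assumed to be sorted by relevance, best first; count_tokens
    should be the model's own tokenizer in a real system."""
    kept, used = [], 0
    for chunk in chunks:
        cost = count_tokens(chunk)
        if used + cost <= budget:
            kept.append(chunk)
            used += cost
    return kept
```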

## 34. Metadata Filtering
Unit 1: Basic AI Fundamentals, Lec1 (Medium, 1 mark)

What is the benefit of using metadata filtering in RAG retrieval?

- A. Narrow down search results based on document attributes before semantic search
- B. Increase the size of the vector database
- C. Speed up the embedding generation process
- D. Reduce the cost of LLM API calls

Answer: A

Explanation: Metadata filtering allows narrowing down search results based on document attributes (date, source, category) before or during semantic search, improving retrieval precision.
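Filtering on metadata before similarity ranking can be sketched as follows; the index layout, field names, and dot-product similarity are illustrative assumptions, not any particular vector database's API.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def filter_then_search(index, filters, query_vec, similarity, top_k=2):
    """index: list of (vector, metadata, text) entries. Apply the metadata
    filters first, then rank only the survivors by vector similarity."""
    survivors = [
        (vec, meta, text) for vec, meta, text in index
        if all(meta.get(key) == value for key, value in filters.items())
    ]
    survivors.sort(key=lambda item: similarity(query_vec, item[0]),
                   reverse=True)
    return [text for _, _, text in survivors[:top_k]]

index = [
    ([1.0, 0.0], {"source": "handbook", "year": 2024}, "vacation policy"),
    ([0.9, 0.1], {"source": "blog", "year": 2021}, "old blog post"),
    ([0.0, 1.0], {"source": "handbook", "year": 2024}, "expense policy"),
]
# The near-identical blog vector is excluded before similarity is even computed.
results = filter_then_search(index, {"source": "handbook"}, [1.0, 0.0], dot, top_k=1)
```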

## 35. Hallucination Prevention
Unit 1: Basic AI Fundamentals, Lec1 (Hard, 1 mark)

Which technique helps prevent hallucination in RAG systems by ensuring answers are grounded in retrieved content?

- A. Increasing the temperature parameter
- B. Instructing the LLM to only use information from the provided context
- C. Using larger embedding dimensions
- D. Reducing the Top-K value to 1

Answer: B

Explanation: Instructing the LLM through the system prompt to only use information from the provided context, and to say “I don’t know” when information is not available, helps prevent hallucination and ensures answers are grounded in retrieved content.