# Final Exam

| No. | Training Unit | Lecture | Training content | Question | Level | Mark | Answer | Answer Option A | Answer Option B | Answer Option C | Answer Option D | Explanation |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | LangGraph & Agentic AI | Lec1 | State Management | What is the core field used for ALL input/output from nodes in a LangGraph State? | Easy | 1 | C | | | `messages` | | The `messages` field is the standard channel for all node input/output. |
| 2 | LangGraph & Agentic AI | Lec1 | State Management | Which concept allows LangGraph to support complex workflows compared to standard LangChain chains? | Easy | 1 | B | Linear flows only | Cyclic flows and conditional routing | Stateless operations | Basic sequential pipelines | Extends basic chains with cyclic flows and conditional routing for loops and complex logic. |
| 3 | LangGraph & Agentic AI | Lec1 | State Management | What is the role of the `add_messages` reducer in a LangGraph State? | Easy | 1 | A | Appending new messages and handling deduplication | Deleting old messages automatically | Summarizing long conversations | Replacing the current message list with a new one | `add_messages` appends updates to the message list and deduplicates by message ID. |
| 4 | LangGraph & Agentic AI | Lec1 | State Management | Which of the following is NOT a standard LangChain message type used in LangGraph? | Easy | 1 | D | | | | | Standard types are `HumanMessage`, `AIMessage`, `SystemMessage`, and `ToolMessage`. |
| 5 | LangGraph & Agentic AI | Lec1 | State Management | In LangGraph’s State structure, what should non-conversational context (such as user or session metadata) be used for? | Easy | 1 | B | Sent directly to the LLM response | Storing configuration and metadata | Replacing the standard message history | Caching LLM tokens | Context fields are meant for metadata and configuration, not standard I/O messages. |
| 6 | LangGraph & Agentic AI | Lec1 | State Management | Which object serves as the core director engine orchestrating LLM workflows in LangGraph? | Easy | 1 | D | | | | `StateGraph` | The compiled `StateGraph` directs node execution and routing across the workflow. |
| 7 | LangGraph & Agentic AI | Lec1 | State Management | How does LangGraph handle context injection before starting the graph execution? | Medium | 1 | C | By loading it from an external JSON file automatically. | By sending a special system message first. | By initializing the state with context variables when calling `invoke()`. | Context cannot be injected; the LLM must generate it. | Context is provided in the initial state dictionary passed to `invoke()`. |
| 8 | LangGraph & Agentic AI | Lec1 | State Management | When building a multi-agent system, how do different agents (nodes) share findings with one another in a messages-centric pattern? | Medium | 1 | A | By appending `AIMessage` results to the shared `messages` list. | By modifying the global Python environment. | By resetting the `messages` list between nodes. | By sending direct peer-to-peer API calls bypassing the state. | Agents append named `AIMessage` entries to the shared state, making findings visible to every other node. |
| 9 | LangGraph & Agentic AI | Lec1 | State Management | What is the primary purpose of adding nodes and edges to a `StateGraph`? | Medium | 1 | D | To train a new deep learning model. | To clean the data before input into a LangChain chain. | To replace the standard LLM reasoning layers. | To map out functions as nodes and execution paths as edges. | Nodes represent functions/agents; edges dictate the workflow paths and conditionals. |
| 10 | LangGraph & Agentic AI | Lec1 | State Management | If an LLM node returns a new `messages` value on a field defined without a reducer, what happens to the state? | Medium | 1 | B | It merges the new message safely. | It overwrites the existing message list. | It throws a syntax error. | It drops the message entirely. | Without a reducer like `add_messages`, the default behavior is to overwrite the field rather than merge. |
| 11 | LangGraph & Agentic AI | Lec1 | State Management | According to LangGraph Best Practices, why should conversational data (I/O) be kept strictly in `messages`? | Hard | 1 | B | Because LangChain parsers crash if state contains integers. | It enables robust State Persistence (Checkpointers) which rely on deterministic, append-only message histories. | It saves tokens directly since context fields are automatically hidden from the LLM. | Context fields are only valid in the first node. | Checkpointers reconstruct and replay the state efficiently when conversational history relies on the standardized, append-only messages slice. |
| 12 | LangGraph & Agentic AI | Lec1 | State Management | How can conditional routing leverage the State to decide whether to call a tool or end the workflow? | Hard | 1 | A | By inspecting `tool_calls` on the last message in state. | By manually polling an external database at every node. | By counting the number of characters in the previous message. | By throwing an exception when the state is exhausted. | The conditional edge function looks at the last message to see if the LLM populated `tool_calls`; if not, it routes to `END`. |
| 13 | LangGraph & Agentic AI | Lec2 | Agentic Patterns | What does the ReAct pattern stand for in agentic workflows? | Easy | 1 | B | Refresh and Activate | Reason and Act | Respond and Acknowledge | Request and Action | ReAct combines explicit reasoning (Think) before acting (Tool Use) in a loop. |
| 14 | LangGraph & Agentic AI | Lec2 | Agentic Patterns | Why is a Multi-Expert pattern generally preferred over a single generic web search tool for complex research? | Easy | 1 | A | It provides specialized domain knowledge and structured reasoning. | It uses fewer tokens. | It operates completely offline. | It requires zero prompt engineering. | Specialized LLMs acting as tools provide better domain insights and consistent reasoning. |
| 15 | LangGraph & Agentic AI | Lec2 | Agentic Patterns | What is the purpose of the prebuilt `ToolNode` in LangGraph? | Easy | 1 | D | To prompt the LLM to generate code. | To browse the internet using a headless browser. | To compress message history. | To automatically handle the parsing and execution of multiple tools. | `ToolNode` parses `tool_calls` from the last message, executes each tool, and appends the results as `ToolMessage`s. |
| 16 | LangGraph & Agentic AI | Lec2 | Agentic Patterns | In a ReAct loop, what is the sequence of steps the coordinator LLM usually follows? | Easy | 1 | C | Act → Think → Stop | Observe → Act → Think | Think → Act → Observe | Stop → Observe → Think | The standard ReAct loop is: Think (Reason), Act (Call Tool), Observe (Tool Result), and Repeat. |
| 17 | LangGraph & Agentic AI | Lec2 | Agentic Patterns | What is a common way to prevent an agent from getting trapped in an infinite ReAct loop? | Easy | 1 | B | Disabling all tools permanently. | Adding an iteration counter to the state and checking it in a conditional edge. | Forcing the LLM to answer in 10 words or less. | Unplugging the server. | Checking an iteration limit in the conditional edge is best practice to stop runaway loops. |
| 18 | LangGraph & Agentic AI | Lec2 | Agentic Patterns | How do Multi-Expert Tools differ technically from standard external API tools (like web search) inside a LangGraph setup? | Easy | 1 | C | They don’t use the `@tool` decorator. | They execute JavaScript code. | They are themselves LLM invocations with specialized system prompts. | They bypass the `ToolNode` entirely. | Expert tools invoke another instance of an LLM primed with a specific expert persona. |
| 19 | LangGraph & Agentic AI | Lec2 | Agentic Patterns | If an agent is deciding which expert to call during the “Act” phase, what enables the LLM to provide structured function calls automatically? | Medium | 1 | B | Regular Expressions parsing. | Using `bind_tools()` so the model emits structured tool-call objects. | Writing manual JSON format instructions in the prompt. | Training a custom fine-tuned router model. | `bind_tools()` supplies the tool schemas to the model, enabling native structured function calling. |
| 20 | LangGraph & Agentic AI | Lec2 | Agentic Patterns | What is the main architectural upgrade introduced when adding a Planning Agent to a simple ReAct flow? | Medium | 1 | A | The Coordinator is relieved of analyzing the user’s initial message; a separate Planner handles decomposition first. | Tools are executed synchronously without LLM intervention. | The agent switches to using a completely different model provider. | State management is no longer required. | A Planner separates the complex task of understanding and task decomposition from the execution/coordinator task. |
| 21 | LangGraph & Agentic AI | Lec2 | Agentic Patterns | During the “Observe” phase of standard ReAct with LangGraph, how does a tool result re-enter the loop? | Medium | 1 | D | | | | As a `ToolMessage` appended to `messages`. | After executing a tool, `ToolNode` appends a `ToolMessage` with the result to `messages`, which the LLM observes on the next cycle. |
| 22 | LangGraph & Agentic AI | Lec2 | Agentic Patterns | What happens if multiple expert tools are called simultaneously by the Coordinator LLM? | Medium | 1 | B | They are ignored and skipped. | The `ToolNode` executes each call and appends a separate `ToolMessage` per result. | The graph crashes due to a concurrency error. | Only the first tool is executed. | Modern models can return multiple tool calls at once, which `ToolNode` executes and reports back individually. |
| 23 | LangGraph & Agentic AI | Lec2 | Agentic Patterns | In a robust production-ready Multi-Expert Research agent, how should tool execution failures be handled? | Hard | 1 | D | By shutting down the LangGraph server. | By letting the unhandled exception crash the application so developers can debug. | By automatically switching model providers mid-workflow. | By catching the exception inside the tool or custom node and returning a `ToolMessage` describing the error. | Returning the error as a string message allows the Coordinator LLM to “Reason” about the failure and take alternative action. |
| 24 | LangGraph & Agentic AI | Lec2 | Agentic Patterns | Why does a Multi-Expert ReAct pattern consume significantly more tokens than a simple linear agent? | Hard | 1 | C | Because it stores all memory in a vector database. | Because LangGraph adds a large metadata overhead to every variable. | The complete conversation history (`messages`) is re-sent to the LLM on every loop iteration. | Because expert LLMs generate longer responses to simple questions. | In ReAct loops, the context window grows each cycle as new `AIMessage` and `ToolMessage` entries accumulate. |
| 25 | LangGraph & Agentic AI | Lec3 | Tool Calling | What is the main difference between traditional LLM prompts and Tool Calling capabilities? | Easy | 1 | D | Prompts use more tokens. | Tool Calling avoids external APIs. | Tool Calling is only available in open-source models. | Tool Calling enables the model to issue structured JSON parameters to invoke external code automatically. | Structured return formats from the LLM via defined JSON schemas are the core innovation in Tool Calling. |
| 26 | LangGraph & Agentic AI | Lec3 | Tool Calling | Which terminology specifically refers to OpenAI’s native API parameter for passing a JSON schema? | Easy | 1 | A | `functions` | | | | OpenAI specifically categorizes the schema object passing under “Function Calling.” |
| 27 | LangGraph & Agentic AI | Lec3 | Tool Calling | Which Python decorator is used in LangChain to easily convert a standard Python function into a Tool? | Easy | 1 | C | | | `@tool` | | The `@tool` decorator converts a function into a Tool, inferring its schema from the signature and docstring. |
| 28 | LangGraph & Agentic AI | Lec3 | Tool Calling | What makes Tavily Search specifically optimized for AI applications compared to standard generic web search APIs? | Easy | 1 | B | It is slower but cheaper. | It pre-formats results for LLMs, filters noise, and provides context for RAG. | It only searches Wikipedia. | It bypasses the internet using a local database. | Tavily removes clutter (HTML/Ads) and extracts clean content structured for immediate LLM context window ingestion. |
| 29 | LangGraph & Agentic AI | Lec3 | Tool Calling | What is a common best practice regarding Tool Descriptions in the code? | Easy | 1 | A | They should be highly detailed so the LLM knows exactly when and how to call the tool. | They are ignored by the LLM, so they can be left blank. | They must be written in JSON. | They should be under 5 words to save tokens. | High-quality descriptions help the model “Reason” appropriately about when the tool is useful. |
| 30 | LangGraph & Agentic AI | Lec3 | Tool Calling | What is “Tool Chaining”? | Easy | 1 | D | Storing tool outputs in a blockchain. | Running the same tool 100 times to check consistency. | Restricting tool execution to an administrator. | Using the output of one tool as the direct input argument for another tool recursively. | A common pattern is having one tool’s result guide the parameter execution of the next tool (like extracting a company name, then passing a stock ticker to a finance tool). |
| 31 | LangGraph & Agentic AI | Lec3 | Tool Calling | How should developers securely manage API keys (like `TAVILY_API_KEY`) in agent code? | Medium | 1 | B | Hardcoding them at the top of the Python script. | Using Environment Variables or a Secret Management service (like Azure KeyVault). | Passing them directly inside the user prompt. | Storing them inside the graph State. | Best practice is loading secrets via environment variables (e.g., `os.environ`) or a vault service, never hardcoding them. |
| 32 | LangGraph & Agentic AI | Lec3 | Tool Calling | When handling tool execution errors (such as network timeouts or API failures), what is the recommended fallback strategy? | Medium | 1 | C | Raising a fatal exception to stop the script immediately. | Silently ignoring the error and proceeding with an empty string. | Catching the exception and returning a descriptive error string to the LLM. | Switching to an older language model automatically. | Returning the exception as a string in the tool output lets the agent reason about the failure and try an alternative. |
| 33 | LangGraph & Agentic AI | Lec3 | Tool Calling | What optimization technique can significantly reduce duplicate external API calls from tools? | Medium | 1 | A | Implementing a caching layer (e.g., an in-memory cache keyed on the query). | Disabling the `ToolNode`. | Limiting the LLM to 1 iteration entirely. | Removing the system prompt. | Caching recent tool queries locally drastically saves external latency and cost for repeated inquiries. |
| 34 | LangGraph & Agentic AI | Lec3 | Tool Calling | If you want to use a Custom Tool class in LangChain instead of a decorator, which base class must you inherit from? | Medium | 1 | D | | | | `BaseTool` | Class-based tools inherit from `BaseTool` and implement `_run` (and optionally `_arun`). |
| 35 | LangGraph & Agentic AI | Lec3 | Tool Calling | How does the Tavily API `search_depth="advanced"` option differ from a basic search? | Hard | 1 | C | It executes SQL queries on the backend instead. | It forces the agent to ask the user permission. | It performs a multi-step semantic search to extract comprehensive answers rather than returning simple link snippets. | It parses local PDF files instead of the web. | Advanced depth leverages an AI sub-agent during search to synthesize answers and return higher-quality textual analysis. |
| 36 | LangGraph & Agentic AI | Lec3 | Tool Calling | When building an architecture where an Orchestrator routes tasks, why would you implement a specific “Web Search Agent” rather than just giving the generic tools directly to the primary assistant? | Hard | 1 | B | Because the primary assistant cannot accept tool-format APIs. | To separate concerns: a specialized agent can execute multi-step tool queries recursively without overloading the main router’s prompt context. | Because Tavily Search restricts execution to sub-nodes by design. | Web Search agents use zero tokens. | Sub-agents handle the cognitive load of browsing, reading snippets, and re-searching autonomously, returning only polished synthesis to the main router. |
| 37 | LangGraph & Agentic AI | Lec4 | Multi-Agent Collab | What is the main structural advantage of a Hierarchical (Supervisor) multi-agent system? | Easy | 1 | A | A Primary Assistant coordinates user intent and cleanly routes requests to specialized sub-agents. | Every agent talks to every other agent at the same time. | It prevents the use of external APIs. | It runs on a single linear LangChain pipeline. | Supervisors manage the workflow orchestration cleanly while sub-agents handle specific deep domains. |
| 38 | LangGraph & Agentic AI | Lec4 | Multi-Agent Collab | Why would a system designer choose multi-agent architectures over a single sophisticated LLM? | Easy | 1 | C | Single LLMs cannot use Python code. | A single LLM always hallucinates. | It promotes specialization, modularity, parallel processing, and avoids prompt overloading. | Multi-agent systems guarantee faster latency in all scenarios. | Splitting into separate specialized models (e.g., Architect, Coder, Reviewer) improves accuracy and creates maintainable codebases. |
| 39 | LangGraph & Agentic AI | Lec4 | Multi-Agent Collab | What does a Network (Peer-to-Peer) coordination pattern imply? | Easy | 1 | C | Agents are executed manually by humans. | All agents must report back to a supervisor before interacting. | Agents can communicate with each other directly without central supervision. | It is a centralized routing protocol. | Unlike supervisors, peer-to-peer agents message each other directly to resolve tasks. |
| 40 | LangGraph & Agentic AI | Lec4 | Multi-Agent Collab | In a Hierarchical system, how does a Sub-Agent signal that its task is complete and it wishes to return control to the Primary Assistant? | Easy | 1 | D | By crashing the program. | By calling the end user via SMS. | By erasing the shared state’s message list. | By executing a “CompleteOrEscalate” tool call, signaling the workflow to pop the dialog stack. | The common pattern relies on returning a specific signal (like a `CompleteOrEscalate` tool call) that the router uses to pop the dialog stack. |
| 41 | LangGraph & Agentic AI | Lec4 | Multi-Agent Collab | In multi-agent LangGraph architectures, what prevents agents from losing the overarching conversation context? | Easy | 1 | B | They read the local filesystem. | They all read and append to a centralized shared `messages` state. | The developer manually pastes the JSON transcript into each prompt. | They query a vector database at every step. | A shared TypedDict State containing `messages` keeps every agent grounded in the full conversation. |
| 42 | LangGraph & Agentic AI | Lec4 | Multi-Agent Collab | What is the purpose of the `dialog_state` stack in hierarchical multi-agent systems? | Easy | 1 | A | To push and pop agent identifiers corresponding to the current active agent in the conversation tree. | To log errors to a debugging console. | To translate different languages. | To count the number of LLM tokens used. | The dialog stack (`dialog_state`) records which agent is currently active, enabling push/pop handoffs. |
| 43 | LangGraph & Agentic AI | Lec4 | Multi-Agent Collab | What is “Context Injection” referring to in multi-agent tool execution? | Medium | 1 | D | Injecting system prompts into the vector database. | Overriding the user’s internet connection. | Re-training the model mid-conversation. | Automatically supplying known session metadata (like user IDs) into tool arguments without asking the LLM for it. | Context fields defined in the State are injected into tool calls at runtime, keeping reliable data out of the LLM’s hands. |
| 44 | LangGraph & Agentic AI | Lec4 | Multi-Agent Collab | How do routing functions (conditional edges) decide to shift execution from the Primary Assistant to a designated Sub-Agent? | Medium | 1 | C | The user types “Route” in the chat window. | A random hash evaluates to true. | By inspecting the `tool_calls` in the Assistant’s latest message to see which handoff tool was selected. | They execute raw SQL queries tracking agent status. | Standard routers look at the Assistant’s final `tool_calls` to determine the next node. |
| 45 | LangGraph & Agentic AI | Lec4 | Multi-Agent Collab | Why might an agentic architecture include an “Entry Node” when transitioning to a child agent? | Medium | 1 | B | To charge the user additional credits. | To silently append a `ToolMessage` confirming the handoff and instructing the incoming sub-agent on its role. | To block external API requests permanently. | To delete previous session checkpoints. | Entry nodes serve as a trampoline, providing localized instructions to the incoming sub-agent without confusing the Primary Assistant’s prompt. |
| 46 | LangGraph & Agentic AI | Lec4 | Multi-Agent Collab | During multi-agent fallback, what happens when a tool execution fails inside an agent’s subgraph? | Medium | 1 | A | A custom fallback node catches the exception and returns a `ToolMessage` describing the error. | The `ToolNode` retries indefinitely. | The system crashes. | It switches out the open-source LLM for an OpenAI model. | A structured fallback catcher prevents silent failures or crashes and turns exceptions into conversational events the agent can rectify. |
| 47 | LangGraph & Agentic AI | Lec4 | Multi-Agent Collab | In a highly complex Competitive multi-agent arrangement, how do agents ultimately converge on a single answer? | Hard | 1 | C | They execute a random dice roll. | The graph hangs infinitely until restarted. | A separate Evaluator/Synthesizer agent compares the outputs of all competing agents and selects or merges the best response into the final message. | Only the agent that responds first is recorded in state. | Competitive architectures require downstream synthesis nodes that “Observe” multiple paths and judge the optimal conclusion analytically. |
| 48 | LangGraph & Agentic AI | Lec4 | Multi-Agent Collab | Consider a `dialog_state` stack with a custom reducer: what happens when a node returns the update value “pop”? | Hard | 1 | B | It adds a third string to the stack. | It returns the list to its previous state by removing the last element. | It deletes the entire stack. | It loops infinitely within the current agent. | The custom reducer pops the last active element (the current agent) from the stack, returning control to the parent. |
| 49 | LangGraph & Agentic AI | Lec5 | Human-in-the-Loop | Why is a “Human-in-the-Loop” (HITL) step strongly recommended for applications performing financial transactions? | Easy | 1 | A | They involve irreversible critical actions that require human oversight to prevent costly AI mistakes. | It accelerates the transaction speed natively. | Models cannot do math. | HITL is an obsolete pattern replaced by GPT-4. | Financial transactions are high-stakes operations requiring human intervention and compliance audit trails before final execution. |
| 50 | LangGraph & Agentic AI | Lec5 | Human-in-the-Loop | In LangGraph, what prevents all computation from being lost when an agent pauses to wait for human input? | Easy | 1 | C | Writing logs to a simple text file. | LangChain’s built-in `ConversationBufferMemory`. | LangGraph’s native Checkpointing mechanism (e.g., `MemorySaver`). | Caching the prompt on the client side. | Checkpointers serialize the exact graph state, allowing it to rest safely in memory or a DB until resumed. |
| 51 | LangGraph & Agentic AI | Lec5 | Human-in-the-Loop | How does passing `interrupt_before` when compiling the graph affect execution? | Easy | 1 | B | It forces the node to timeout after 3 seconds. | It suspends execution right before the specified node executes, returning control back to the application. | It skips the node altogether. | It triggers an infinite loop of human questions. | `interrupt_before` pauses the run just before the named node; the checkpointer preserves state so execution can resume later. |
| 52 | LangGraph & Agentic AI | Lec5 | Human-in-the-Loop | What is the main drawback of using `MemorySaver` in production? | Easy | 1 | D | It requires setting up a massive cluster. | It runs too slowly for modern models. | It writes to a file that fills up the hard drive instantly. | Checkpoints disappear completely when the Python process drops or the server restarts. | `MemorySaver` keeps checkpoints only in process memory, so they vanish when the process exits. |
| 53 | LangGraph & Agentic AI | Lec5 | Human-in-the-Loop | Which checkpointer is recommended for a scalable, production-grade distributed LangGraph service? | Easy | 1 | C | | | `PostgresSaver` | | `PostgresSaver` offers durable, concurrent checkpoint storage suited to distributed deployments. |
| 54 | LangGraph & Agentic AI | Lec5 | Human-in-the-Loop | How does LangGraph distinguish parallel user conversations hitting the same graph application simultaneously? | Easy | 1 | B | By creating separate Python processes. | By assigning each conversation a unique `thread_id` in the run config. | By deleting the older users’ conversations. | By using separate API keys. | The `thread_id` passed via `configurable` scopes checkpoints to a single conversation. |
| 55 | LangGraph & Agentic AI | Lec5 | Human-in-the-Loop | What information does LangGraph’s `get_state_history()` return? | Medium | 1 | A | A complete historical log of all checkpointed states, parent markers, and metadata modifications across a conversation. | Only the very first checkpoint. | The system prompt token usage. | Live streaming characters from the LLM. | Pulling state history allows time-travel debugging and viewing the explicit step-by-step data modification over the thread’s lifespan. |
| 56 | LangGraph & Agentic AI | Lec5 | Human-in-the-Loop | Given a graph paused before a “Publishing” node, what code pattern can update the state manually before resuming? | Medium | 1 | C | | Modifying the global variables inside the Python script. | Calling `graph.update_state(config, {...})` and then resuming with `invoke(None, config)`. | Redefining the TypedDict. | `update_state` writes a new checkpoint containing the manual edits; invoking with `None` resumes from it. |
| 57 | LangGraph & Agentic AI | Lec5 | Human-in-the-Loop | Why would a multi-agent framework require separate short-term Checkpointers vs explicit long-term external vector databases? | Medium | 1 | D | Because LangChain deprecates long-term storage natively. | Short-term databases always truncate after 1 megabyte. | To prevent open-source models from scraping data. | Checkpointers handle immediate conversational state securely per thread, while Vector stores aggregate historical knowledge and profiles persistently across unrelated sessions. | Checkpointers = Thread-scoped conversational state. VectorDB = Global user-scoped background context fetching. |
| 58 | LangGraph & Agentic AI | Lec5 | Human-in-the-Loop | How does the checkpoint database handle a state update applied to an earlier (historical) checkpoint? | Medium | 1 | B | It overwrites the database completely. | It creates a new checkpoint branch whose parent is the earlier snapshot. | It throws a primary key error. | It switches back to `MemorySaver`. | The DB schema retains parent-child snapshot ID graphs, effectively allowing true non-destructive time travel. |
| 59 | LangGraph & Agentic AI | Lec5 | Human-in-the-Loop | If an agent architecture applies an “As-Node” state update (`update_state(..., as_node="X")`), how does the graph resume? | Hard | 1 | C | The app skips ahead 10 checkpoints automatically. | The update is discarded silently because the node was skipped. | It behaves as if the actual node `X` had produced the update, and routing continues from there. | The agent loops forever. | `as_node` attributes the update to the named node, so downstream conditional edges fire as if that node had run. |
| 60 | LangGraph & Agentic AI | Lec5 | Human-in-the-Loop | In a scenario where an AI is suggesting medical treatment protocols, how might interrupts be applied to guarantee human oversight? | Hard | 1 | A | Pausing after the proposal node generates a full recommendation so a human doctor reviews it before any action executes. | Halting the system if the internet disconnects. | Interrupting the LLM mid-token generation. | Making the LLM stream results to a text-to-speech engine. | This allows the state to fully materialize the AI’s proposal, giving the human doctor a complete object to assess before continuing. |
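The append-and-deduplicate behavior of the `add_messages` reducer tested in the Lec1 questions can be sketched in plain Python with no LangGraph dependency; the message dicts and `id` key are illustrative stand-ins for LangChain message objects:

```python
def add_messages(existing, updates):
    """Reducer sketch: append new messages, replace any whose id already
    exists (deduplication), never drop prior history."""
    merged = list(existing)
    index = {m["id"]: i for i, m in enumerate(merged)}
    for msg in updates:
        if msg["id"] in index:
            merged[index[msg["id"]]] = msg  # same id -> replace in place
        else:
            merged.append(msg)              # new id -> append
    return merged
```

Without such a reducer, a node's return value simply overwrites the field — the failure mode the exam's question on reducer-less fields describes.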
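The Lec2 conditional-edge logic — route to tools only when the last message carries `tool_calls`, and cap iterations to stop runaway ReAct loops — reduces to one small function. The dict-based state and the `"tools"`/`"end"` labels are illustrative, not LangGraph's API:

```python
MAX_ITERATIONS = 5

def route_after_llm(state):
    """Conditional-edge sketch: inspect the last message and an iteration
    counter to choose between calling tools and terminating."""
    if state.get("iterations", 0) >= MAX_ITERATIONS:
        return "end"               # safety valve against infinite loops
    last = state["messages"][-1]
    if last.get("tool_calls"):
        return "tools"             # the LLM requested an action -> Act phase
    return "end"                   # plain answer -> finish
```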
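Two Lec3 tool-hardening practices — return failures as strings the LLM can reason about, and cache repeated queries — can be combined in one sketch. `flaky_search_api` is a hypothetical stand-in for a real external API:

```python
from functools import lru_cache

def flaky_search_api(query):
    """Stand-in for an external search API that can fail."""
    if not query:
        raise ValueError("empty query")
    return f"results for {query!r}"

@lru_cache(maxsize=128)            # dedupes repeated identical queries
def safe_search(query):
    """Catch exceptions and hand the error back as text, so the
    coordinator LLM can re-plan instead of the graph crashing."""
    try:
        return flaky_search_api(query)
    except Exception as exc:
        return f"Tool error: {exc}. Try rephrasing the query."
```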
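The Lec4 dialog-stack mechanics amount to a push/pop reducer. This plain-Python sketch mirrors the pattern, with the `"pop"` sentinel matching the convention described in the table; everything else is illustrative:

```python
def update_dialog_stack(stack, update):
    """Reducer sketch for a dialog stack: push an incoming agent name,
    pop the current agent on the "pop" sentinel, ignore None."""
    if update is None:
        return stack
    if update == "pop":
        return stack[:-1]       # hand control back to the parent agent
    return stack + [update]     # a new sub-agent takes over
```

A sub-agent's "CompleteOrEscalate" signal is what typically causes a node to emit `"pop"` into this reducer.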
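The Lec5 checkpointing questions hinge on two properties: snapshots are keyed by `thread_id`, and history is append-only so earlier states can be revisited. A minimal in-memory sketch of those properties (not the real `MemorySaver` API):

```python
class InMemoryCheckpointer:
    """Toy checkpointer: per-thread, append-only snapshot history."""

    def __init__(self):
        self._snapshots = {}  # thread_id -> list of state dicts

    def save(self, thread_id, state):
        # Append-only: earlier checkpoints are never destroyed,
        # which is what makes time-travel debugging possible.
        self._snapshots.setdefault(thread_id, []).append(dict(state))

    def latest(self, thread_id):
        return self._snapshots[thread_id][-1]

    def history(self, thread_id):
        return list(self._snapshots[thread_id])
```

Like `MemorySaver`, everything here vanishes when the process exits; a production service swaps in a durable backend such as Postgres.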