Questions 1 to 40 Flashcards
- In LangChain, which retriever search type is used to balance between relevancy and diversity?
* topk
* mmr
* similarity_score_threshold
* similarity
MMR
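For context, a minimal LangChain sketch of requesting the MMR search type (assumes the langchain-community and faiss-cpu packages are installed; FakeEmbeddings stands in for a real embedding model):

```python
from langchain_community.vectorstores import FAISS
from langchain_community.embeddings import FakeEmbeddings  # stand-in for a real embedding model

# Build a tiny vector store, then request an MMR retriever, which balances
# relevance to the query against diversity among the returned documents.
vectorstore = FAISS.from_texts(
    ["OCI offers GPU clusters.", "RDMA networks connect GPU nodes.", "LangChain exposes retrievers."],
    FakeEmbeddings(size=128),
)
retriever = vectorstore.as_retriever(
    search_type="mmr",                     # alternatives: "similarity", "similarity_score_threshold"
    search_kwargs={"k": 2, "fetch_k": 3},  # fetch 3 candidates, keep the 2 most diverse
)
docs = retriever.get_relevant_documents("OCI GPU clusters")
```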
- What does a dedicated RDMA cluster network do during model fine-tuning and inference?
* It leads to higher latency in model inference.
* It enables the deployment of multiple fine-tuned models within a single cluster.
* It limits the number of fine-tuned models deployable on the same GPU cluster.
* It increases GPU memory requirements for model deployment.
It enables the deployment of multiple fine-tuned models within a single cluster.
- Which role does a “model endpoint” serve in the inference workflow of the OCI Generative AI service?
* Hosts the training data for fine-tuning custom models
* Evaluates the performance metrics of the custom models
* Serves as a designated point for user requests and model responses
* Updates the weights of the base model during the fine-tuning process
Serves as a designated point for user requests and model responses
- Which is a distinguishing feature of “Parameter-Efficient Fine-tuning (PEFT)” as opposed to classic “Fine-tuning” in
Large Language Model training?
* PEFT involves only a few or new parameters and uses labeled, task-specific data.
* PEFT modifies all parameters and uses unlabeled, task-agnostic data.
* PEFT does not modify any parameters but uses soft prompting with unlabeled data.
* PEFT modifies all parameters and is typically used when no training data exists.
PEFT involves only a few or new parameters and uses labeled, task-specific data.
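A hedged illustration using the Hugging Face transformers and peft libraries (assumed installed; gpt2 is just a small placeholder base model): LoRA attaches a few new low-rank adapter weights and trains only those on labeled, task-specific data.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Wrap a small base model with LoRA adapters; only the adapter weights are trainable.
base = AutoModelForCausalLM.from_pretrained("gpt2")
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM")
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # trainable params are a tiny fraction of the total
```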
- How does the Retrieval-Augmented Generation (RAG) Token technique differ from RAG Sequence when generating a model’s response?
* Unlike RAG Sequence, RAG Token generates the entire response at once without considering individual parts.
* RAG Token does not use document retrieval but generates responses based on pre-existing knowledge only.
* RAG Token retrieves documents only at the beginning of the response generation and uses those for the entire content.
* RAG Token retrieves relevant documents for each part of the response and constructs the answer incrementally.
RAG Token retrieves relevant documents for each part of the response and constructs the answer incrementally.
- Which component of Retrieval-Augmented Generation (RAG) evaluates and prioritizes the information retrieved by
the retrieval system?
* Retriever
* Encoder-decoder
* Ranker
* Generator
Ranker
- Which statement describes the difference between “Top k” and “Top p” in selecting the next token in the OCI
Generative AI Generation models?
* Top k selects the next token based on its position in the list of probable tokens, whereas “Top p” selects based on the cumulative probability of the top tokens.
* Top k considers the sum of probabilities of the top tokens, whereas “Top p” selects from the “Top k” tokens sorted by probability.
* Top k and “Top p” both select from the same set of tokens but use different methods to prioritize them based on frequency.
* Top k and “Top p” are identical in their approach to token selection but differ in their application of penalties to tokens.
Top k selects the next token based on its position in the list of probable tokens, whereas “Top p” selects based on the cumulative probability of the top tokens.
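To make the difference concrete, a toy plain-Python sketch (illustrative only, not the OCI SDK):

```python
# Toy next-token distribution, sorted by probability.
probs = {"the": 0.40, "a": 0.25, "cat": 0.15, "dog": 0.12, "sat": 0.08}
ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)

# Top k = 3: keep the 3 highest-ranked tokens, by position in the list.
top_k_tokens = [tok for tok, _ in ranked[:3]]

# Top p = 0.8: keep tokens until their cumulative probability reaches 0.8.
top_p_tokens, cumulative = [], 0.0
for tok, p in ranked:
    top_p_tokens.append(tok)
    cumulative += p
    if cumulative >= 0.8:
        break

print(top_k_tokens)  # ['the', 'a', 'cat']
print(top_p_tokens)  # ['the', 'a', 'cat']  (0.40 + 0.25 + 0.15 = 0.80)
```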
- Which statement is true about the “Top p” parameter of the OCI Generative AI Generation models?
* Top p assigns penalties to frequently occurring tokens.
* Top p determines the maximum number of tokens per response.
* Top p limits token selection based on the sum of their probabilities.
* Top p selects tokens from the “Top k” tokens sorted by probability.
Top p limits token selection based on the sum of their probabilities.
- What is the primary function of the “temperature” parameter in the OCI Generative AI Generation models?
* Determines the maximum number of tokens the model can generate per response
* Specifies a string that tells the model to stop generating more content
* Assigns a penalty to tokens that have already appeared in the preceding text
* Controls the randomness of the model’s output, affecting its creativity
Controls the randomness of the model's output, affecting its creativity.
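As a toy illustration (plain Python, not the OCI SDK), temperature rescales the logits before softmax, so a lower value sharpens the distribution and a higher value flattens it:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by the temperature, then apply softmax.
    scaled = [l / temperature for l in logits]
    exps = [math.exp(s) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.5))  # peaked: the top token dominates
print(softmax_with_temperature(logits, 2.0))  # flatter: sampling becomes more random/creative
```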
- What distinguishes the Cohere Embed v3 model from its predecessors in the OCI Generative AI service?
* Improved retrievals for Retrieval-Augmented Generation (RAG) systems
* Capacity to translate text in over 20 languages
* Support for tokenizing longer sentences
* Emphasis on syntactic clustering of word embeddings
Improved retrievals for Retrieval-Augmented Generation (RAG) systems
- What is the purpose of the “stop sequence” parameter in the OCI Generative AI Generation models?
* It controls the randomness of the model’s output, affecting its creativity.
* It specifies a string that tells the model to stop generating more content.
* It assigns a penalty to frequently occurring tokens to reduce repetitive text.
* It determines the maximum number of tokens the model can generate per response.
It specifies a string that tells the model to stop generating more content.
- What does a higher number assigned to a token signify in the “Show Likelihoods” feature of the language model token generation?
* The token is less likely to follow the current token.
* The token is more likely to follow the current token.
* The token is unrelated to the current token and will not be used.
* The token will be the only one considered in the next generation step.
The token is more likely to follow the current token
- Given the following code:
PromptTemplate(input_variables=["human_input", "city"], template=template)
Which statement is true about Prompt Template in relation to input_variables?
* PromptTemplate requires a minimum of two variables to function properly.
* PromptTemplate can support only a single variable at a time.
* PromptTemplate supports any number of variables, including the possibility of having none.
* PromptTemplate is unable to use any variables.
PromptTemplate supports any number of variables, including the possibility of having none.
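A short LangChain sketch (assuming the langchain package is installed) showing that any number of input variables works, including none:

```python
from langchain.prompts import PromptTemplate

# Two input variables:
two_vars = PromptTemplate(
    input_variables=["human_input", "city"],
    template="Answer {human_input} for someone living in {city}.",
)
print(two_vars.format(human_input="what to visit", city="Austin"))

# Zero input variables is equally valid:
no_vars = PromptTemplate(input_variables=[], template="Tell me a joke.")
print(no_vars.format())
```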
- Which is NOT a built-in memory type in LangChain?
* ConversationTokenBufferMemory
* ConversationImageMemory
* ConversationBufferMemory
* ConversationSummaryMemory
ConversationImageMemory
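For reference, a minimal sketch of one of the real built-in memory classes (assumes the langchain package is installed):

```python
from langchain.memory import ConversationBufferMemory

# ConversationBufferMemory keeps the raw chat history; ConversationSummaryMemory and
# ConversationTokenBufferMemory are also built in, but there is no ConversationImageMemory.
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
memory.save_context({"input": "Hi there"}, {"output": "Hello! How can I help?"})
print(memory.load_memory_variables({}))
```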
- Given the following code:
chain = prompt | llm
Which statement is true about LangChain Expression Language (LCEL)?
* LCEL is a programming language used to write documentation for LangChain.
* LCEL is a legacy method for creating chains in LangChain.
* LCEL is a declarative and preferred way to compose chains together.
LCEL is a declarative and preferred way to compose chains together.
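A minimal runnable LCEL sketch (FakeListLLM from langchain-community stands in for a real model):

```python
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_community.llms.fake import FakeListLLM  # stand-in model for illustration

prompt = PromptTemplate.from_template("Summarize: {text}")
llm = FakeListLLM(responses=["LCEL composes runnables with the | operator."])

# The pipe operator is LCEL: it declaratively composes prompt -> model -> parser.
chain = prompt | llm | StrOutputParser()
print(chain.invoke({"text": "LangChain Expression Language"}))
```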
- Given a block of code:
qa = ConversationalRetrievalChain.from_llm(llm, retriever=retv, memory=memory)
When does a chain typically interact with memory during execution?
* Continuously throughout the entire chain execution process
* Only after the output has been generated
* After user input but before chain execution, and again after core logic but before output
* Before user input and after chain execution
After user input but before chain execution, and again after core logic but before output