This Generative AI E1 Assessment guide includes 26 questions and answers, designed to help you quickly grasp the material and pass your course. The questions are based on Fresco Play, making preparation easier. After taking the assessment, let us know whether this guide helped you, so we can update the article and improve it.
Generative AI E1 Course | Questions (1 to 5)
1 In the context of fine-tuning an LLM for a specific application, why might one opt to use a lower temperature setting during inference?
Answer: To reduce hallucinations and generate more precise responses
2 What is the primary issue with the “bias amplification” phenomenon in AI systems?
Answer: It leads to the reinforcement and exaggeration of existing biases in the data.
3 How does a high temperature value affect the probability distribution of the next token in LLM outputs?
Answer: It flattens the distribution, making low-probability tokens more likely.
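The effect is easy to see with a toy softmax. The sketch below is illustrative only (the logits are made-up values for three candidate tokens, not from any real model):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature before softmax.

    A higher temperature flattens the distribution (rare tokens
    become more likely); a lower temperature sharpens it toward
    the top token.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up logits for three candidate next tokens
logits = [4.0, 2.0, 1.0]

low = softmax_with_temperature(logits, 0.5)   # sharp: top token dominates
high = softmax_with_temperature(logits, 2.0)  # flat: low-probability tokens gain mass
```

Comparing `low` and `high` shows why low temperature reduces hallucinations (the model rarely strays from its top choice) while high temperature increases variety.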
4 Which of the following approaches is best suited for optimizing prompts to ensure that a language model generates responses that are both concise and contextually relevant?
Answer: Iteratively refining the prompt based on generated outputs.
5 What is a potential drawback of few-shot prompting that practitioners should be aware of?
Answer: Risk of overfitting to the examples in the prompt
Generative AI E1 Direct Assessment Questions (6 to 17)
6 When optimizing prompts for generating structured outputs (like JSON), which of the following modifications can significantly improve the model’s accuracy in producing the desired structure?
Answer: Adding explicit instructions to the prompt
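As a sketch of what “explicit instructions” can look like in practice, the prompt text and validator below are illustrative examples, not part of the course material:

```python
import json

# Illustrative prompt: it names the exact schema and forbids extra prose.
prompt = (
    "Extract the person's name and age from the text below.\n"
    "Respond with ONLY a JSON object in exactly this shape:\n"
    '{"name": "<string>", "age": <integer>}\n\n'
    "Text: Alice is 30 years old."
)

def matches_requested_shape(output: str) -> bool:
    """Check a model reply against the structure the prompt asked for."""
    try:
        data = json.loads(output)
    except json.JSONDecodeError:
        return False
    return isinstance(data.get("name"), str) and isinstance(data.get("age"), int)
```

Pairing explicit instructions with a validator like this lets an application detect and retry malformed replies instead of passing them downstream.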
7 Which of the following strategies is most effective for reducing the length of responses generated by a language model without significantly compromising on the quality of the response?
Answer: Setting a maximum token limit for the response.
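Hosted LLM APIs generally expose this cap as a request parameter and enforce it server-side during generation. The sketch below shows the same idea client-side, using whitespace splitting as a stand-in for a real subword tokenizer:

```python
def truncate_to_token_limit(text: str, max_tokens: int) -> str:
    """Keep at most max_tokens tokens of a generated response.

    Whitespace splitting is an illustrative stand-in; real models
    count subword tokens, and real APIs stop generating once the
    limit is reached rather than trimming afterwards.
    """
    tokens = text.split()
    return " ".join(tokens[:max_tokens])
```

Stopping generation at the limit is what preserves quality: the model compresses its answer rather than having a good answer cut off mid-sentence.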
8 In what ways can the efficacy of prompts in multilingual models be improved?
Answer: Applying language-specific nuances
9 In what ways can the efficacy of prompts in multilingual models be improved?
Answer: Applying language-specific nuances
Providing examples in multiple languages
10 How can developers ensure generative AI avoids spreading misinformation?
Answer: Implementing cross-referencing mechanisms within the model
11 Which of the following statements is true about the licensing of open-source LLMs?
Answer: Open-source LLMs can have a variety of licenses, some of which may impose specific usage restrictions
12 In the context of preventing hallucinations in generative AI models, what does “model distillation” refer to?
Answer: Reducing the model size by approximating a larger model
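The core of distillation is training the smaller student model to match the larger teacher's softened output distribution. A minimal sketch of that objective follows; the logits and temperature are illustrative, and a real implementation would compute this over batches with autograd:

```python
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened
    distribution -- the central training signal in knowledge distillation."""
    t = softmax([l / temperature for l in teacher_logits])
    s = softmax([l / temperature for l in student_logits])
    return -sum(ti * math.log(si) for ti, si in zip(t, s))
```

The loss is smallest when the student reproduces the teacher's distribution exactly, which is how the approximation in the answer above is learned.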
13 What practice would help reduce hallucinations in an LLM giving factual advice?
Answer: Providing specific source references in prompts
14 In a customer recommendation system, how can hallucination errors be minimized?
Answer: Process real customer feedback in updates
15 Which scenario best exemplifies the use of one-shot prompting?
Answer: Giving one example of a complex task and expecting the model to generalize
16 A generative AI used for educational content sometimes includes outdated information. What methods can address this?
Answer: Regular updates with the latest academic research
Cross-verifying outputs with up-to-date references
Incorporate a feedback mechanism for educators
17 In one-shot prompting, the primary goal is to
Answer: Generate a desired response from a model with one example in the prompt
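Concretely, a one-shot prompt pairs a single worked example with the new input. The template below is a made-up illustration of the pattern:

```python
# A one-shot prompt: exactly one worked example, then the real task.
one_shot_prompt = (
    "Convert the sentence to past tense.\n\n"
    "Input: She walks to school.\n"
    "Output: She walked to school.\n\n"
    "Input: He eats an apple.\n"
    "Output:"
)
```

Ending the prompt at `Output:` invites the model to complete the pattern established by the single example.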
Generative AI E1 Direct Assessment Questions (18 to 23)
18 Which of the following scenarios would most benefit from using a higher temperature setting for an LLM?
Answer: Generating poetry or creative writing.
19 Which of the following is a key difference between the development communities of open-source and closed-source LLMs?
Answer: Open-source communities typically involve contributions from a wide range of independent developers and organizations
20 Which of the following scenarios would most benefit from using a higher temperature setting for an LLM?
Answer: Generating poetry or creative writing.
21 Which scenario best exemplifies the use of one-shot prompting?
Answer: Giving one example of a complex task and expecting the model to generalize
22 A generative AI used for educational content sometimes includes outdated information. What methods can address this?
Answer: Regular updates with the latest academic research
Cross-verifying outputs with up-to-date references
Incorporate a feedback mechanism for educators
23 What steps can be taken to ensure LLMs provide culturally sensitive outputs?
Answer: Curate culture-specific datasets with diverse perspectives
Employ region-specific context in prompts
Validate outputs with cultural experts
Generative AI E1 Course Questions (24 to 26)
24 What is the primary issue with the “bias amplification” phenomenon in AI systems?
Answer: It leads to the reinforcement and exaggeration of existing biases in the data.
25 What practice would help reduce hallucinations in an LLM giving factual advice?
Answer: Providing specific source references in prompts
26 Which of the following strategies is least effective in reducing hallucinations in language models?
Answer: Using a smaller dataset for training