Self-assessment quiz
Presentation & objectives
The following quizzes help you check that you have understood the articles you had to study. At the end of a quiz, you will be given explanations for your answers. If any of them are wrong, you can click on the question you missed to try again.
These quizzes are provided for self-assessment and will not be graded or stored.
Don’t hesitate to reach out on the Discord server if you need any clarification or further explanation!
Quizzes
What is a problem?
# How is a problem defined in computer science?
- [x] As a function from a domain to a codomain
> ✅ A problem can be represented as a function mapping inputs (domain) to outputs (codomain).
- [ ] As a series of steps to solve a specific task
> ❌ This describes an algorithm, not a problem. A problem is the abstract function the algorithm solves.
- [ ] As a decision tree
> ❌ A decision tree is a specific model used for classification or decision-making, not a general definition of a problem.
- [x] As a function that can be partial or total
> ✅ Problems can be partial (defined only for some inputs) or total (defined for all inputs in the domain); the sketch after this question shows one of each.
- [ ] As a set of constraints
> ❌ While constraints may define specific problems, they are not the general definition of a problem.
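To make this concrete, here is a minimal Python sketch (the names `exact_sqrt` and `square` are illustrative, not taken from the articles): one partial problem and one total problem, each a function from a domain to a codomain.

```python
from math import isqrt

def exact_sqrt(n: int) -> int:
    """A partial problem: only defined for perfect squares."""
    root = isqrt(n)
    if root * root != n:
        raise ValueError(f"{n} is outside the problem's domain")
    return root

def square(n: int) -> int:
    """A total problem: defined for every integer in the domain."""
    return n * n

print(square(7))       # 49
print(exact_sqrt(49))  # 7
# exact_sqrt(50) raises ValueError: 50 is not in the domain
```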
# What is a decision problem?
- [x] A problem with a boolean codomain
> ✅ A decision problem maps inputs to a boolean codomain, returning `true` or `false` as the solution.
- [ ] A problem with multiple possible solutions
> ❌ This describes a search problem, not a decision problem.
- [ ] A problem that cannot be solved by an algorithm
> ❌ Decision problems can often be solved by algorithms, provided they are well-defined.
- [x] A problem that answers yes or no to a question
> ✅ Decision problems are designed to answer binary (yes/no) questions, as illustrated below.
- [ ] A problem with an infinite domain
> ❌ While decision problems can have infinite domains, this is not a defining characteristic.
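As an illustration, a decision problem is just a function with a boolean codomain; the sketch below uses primality as an illustrative yes/no question.

```python
def is_prime(n: int) -> bool:
    """Decision problem: maps each integer to True ("yes") or False ("no")."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, n))

print(is_prime(13))  # True  -> "yes, 13 is prime"
print(is_prime(12))  # False -> "no, 12 is not prime"
```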
# What distinguishes a search problem from a decision problem?
- [x] A search problem may have multiple solutions
> ✅ Search problems allow for multiple possible solutions, unlike decision problems, which return a single boolean value.
- [ ] A search problem always has exactly one solution
> ❌ Search problems can have one, many, or no solutions, depending on the input.
- [ ] A decision problem is always computationally harder than a search problem
> ❌ This depends on the specific problem; decision problems are not inherently harder or easier than search problems.
- [x] A search problem can be defined using relations rather than functions
> ✅ Search problems are often defined using relations, as they may produce sets of solutions rather than a single output (see the sketch below).
- [ ] Search problems cannot be solved by algorithms
> ❌ Search problems can often be solved algorithmically, though finding all solutions may not always be feasible in finite time.
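By contrast with a decision problem, a search problem may return several solutions. A minimal sketch, using divisor-finding as an illustrative problem, where the relation maps each input to a set of solutions:

```python
def divisors(n: int) -> set[int]:
    """Search problem: an input can relate to many solutions."""
    return {d for d in range(1, n + 1) if n % d == 0}

print(divisors(12))  # {1, 2, 3, 4, 6, 12} -- six solutions
print(divisors(13))  # {1, 13}             -- only two
```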
# What is an optimization problem?
- [x] A refinement of a search problem where the best solution is desired
> ✅ Optimization problems aim to find the best solution according to a cost function.
- [ ] A problem with boolean outputs
> ❌ Boolean outputs characterize decision problems, not optimization problems.
- [x] A problem defined by a cost function
> ✅ Optimization problems use a cost function to evaluate and identify the best solution among many.
- [ ] A problem that always has a unique solution
> ❌ Optimization problems may have multiple optimal solutions depending on the cost function and constraints.
- [x] A problem where solutions are evaluated to minimize or maximize a value
> ✅ Optimization problems involve minimizing or maximizing a value based on a given cost function.
# What is the purpose of a cost function in optimization problems?
- [x] To evaluate the quality of a solution
> ✅ The cost function assigns a value to each solution, allowing the algorithm to evaluate its quality.
- [ ] To determine if a problem has no solution
> ❌ The cost function evaluates solutions, not the existence of a solution.
- [x] To find the best solution among possible solutions
> ✅ The cost function helps identify the optimal solution by comparing values across all possible solutions.
- [ ] To simplify the problem into a decision problem
> ❌ The cost function does not simplify the problem but helps refine the search for an optimal solution.
- [x] To minimize or maximize a specific value in the problem
> ✅ Optimization problems aim to minimize or maximize the cost function to determine the best solution, as in the example below.
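Putting the last two questions together, here is a minimal optimization sketch: an illustrative cost function (squared distance to a target point) is minimized over a handful of candidate solutions.

```python
candidates = [(0, 0), (3, 4), (1, 1), (6, 8)]

def cost(point: tuple[int, int]) -> int:
    """Cost function: squared distance to the target (2, 3)."""
    x, y = point
    return (x - 2) ** 2 + (y - 3) ** 2

best = min(candidates, key=cost)  # keep the candidate with minimal cost
print(best, cost(best))           # (3, 4) 2
```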
Divide and conquer
# What are the three main steps in the divide-and-conquer strategy?
1. [x] Divide, Conquer, Combine
> ✅ Divide-and-conquer involves dividing the problem into smaller subproblems, solving them independently, and combining their solutions (see the merge-sort sketch below).
1. [ ] Divide, Solve, Iterate
> ❌ The correct steps are divide, conquer, and combine, not "solve" or "iterate."
1. [ ] Divide, Conquer, Repeat
> ❌ While repetition may occur, the third step is "combine," not "repeat."
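For instance, merge sort follows the three steps literally; a minimal sketch:

```python
def merge(a: list[int], b: list[int]) -> list[int]:
    """Combine step: merge two already-sorted lists."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    return out + a[i:] + b[j:]

def merge_sort(xs: list[int]) -> list[int]:
    if len(xs) <= 1:             # trivially solved subproblem
        return xs
    mid = len(xs) // 2           # divide into two halves
    left = merge_sort(xs[:mid])  # conquer each half recursively
    right = merge_sort(xs[mid:])
    return merge(left, right)    # combine the partial results

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```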
# What inefficiency can arise in recursive implementations of divide-and-conquer?
- [x] Redundant computations due to overlapping subproblems
> ✅ Overlapping subproblems can cause the same computations to be performed multiple times, leading to inefficiency.
- [ ] The need to rewrite the problem recursively
> ❌ The issue is redundant computations, not the need to rewrite the problem.
- [x] Exponential time complexity in some cases
> ✅ Without optimization, recursive divide-and-conquer can lead to exponential time complexity.
- [ ] Difficulty combining subproblem solutions
> ❌ While combining solutions can be complex, the primary inefficiency stems from redundant computations.
- [ ] Problems must always be evenly divided
> ❌ The size of subproblems can vary; even division is not a strict requirement.
# How does memoization improve recursive algorithms?
- [x] By storing and reusing results of previously computed subproblems
> ✅ Memoization avoids redundant computations by caching results of solved subproblems (illustrated in the sketch below).
- [ ] By executing all subproblems simultaneously
> ❌ Memoization does not involve parallel execution; it involves caching results to avoid recomputation.
- [x] By reducing the time complexity of overlapping subproblems
> ✅ Memoization reduces time complexity by preventing the recomputation of overlapping subproblems.
- [ ] By reducing the space complexity of recursive calls
> ❌ Memoization often increases space complexity due to the storage of results.
- [ ] By eliminating the need for recursion
> ❌ Memoization works with recursion; it does not eliminate it.
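A minimal memoization sketch, using the classic Fibonacci example and Python's standard-library cache decorator:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # cache each fib(n) the first time it is computed
def fib(n: int) -> int:
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)  # overlapping subproblems, solved once

print(fib(50))  # 12586269025, instantly; naive recursion needs exponentially many calls
```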
# What distinguishes dynamic programming from memoization?
- [x] Dynamic programming solves problems in a bottom-up manner
> ✅ Dynamic programming processes subproblems iteratively, starting from the smallest, rather than recursively.
- [ ] Dynamic programming always uses less memory
> ❌ Dynamic programming may use more memory depending on the problem and implementation.
- [x] Dynamic programming avoids recursion by iterating through subproblems
> ✅ Unlike memoization, dynamic programming avoids recursion by solving subproblems iteratively.
- [ ] Memoization is faster than dynamic programming
> ❌ Neither is inherently faster: it depends on the problem, though bottom-up dynamic programming often has lower overhead because it avoids recursive calls.
- [x] Dynamic programming uses a precomputed table for subproblem solutions
> ✅ Dynamic programming uses a table to store and look up subproblem solutions in a structured manner (see the bottom-up sketch below).
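For comparison, a minimal bottom-up version of the same Fibonacci computation: the table is filled iteratively from the smallest subproblem, with no recursion.

```python
def fib_dp(n: int) -> int:
    table = [0] * (n + 1)  # table[i] will hold fib(i)
    if n >= 1:
        table[1] = 1
    for i in range(2, n + 1):                   # smallest subproblems first
        table[i] = table[i - 1] + table[i - 2]  # look up stored solutions
    return table[n]

print(fib_dp(50))  # 12586269025, same result as the memoized version
```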
# What is the main advantage of dynamic programming?
- [x] It ensures that each subproblem is solved only once
> ✅ Dynamic programming avoids recomputing subproblems, ensuring efficiency by solving each only once.
- [ ] It guarantees constant time complexity for all problems
> ❌ Dynamic programming does not guarantee constant time complexity; its efficiency depends on the problem.
- [x] It reduces time complexity compared to naive recursion
> ✅ Dynamic programming significantly reduces time complexity by avoiding redundant computations.
- [ ] It requires no additional memory
> ❌ Dynamic programming often requires additional memory to store solutions in a table.
- [ ] It eliminates the need for subproblem dependencies
> ❌ Dynamic programming explicitly relies on solving subproblem dependencies in a structured manner.
# What is the subproblem graph in dynamic programming?
- [x] A directed acyclic graph showing dependencies between subproblems
> ✅ The subproblem graph represents dependencies, with edges indicating which subproblems are required to solve others.
- [ ] A tree that repeats the same subproblems
> ❌ The subproblem graph avoids repetition by merging nodes with the same labels.
- [x] A compacted version of the recursive call tree
> ✅ The subproblem graph is a compact representation that avoids redundant nodes by merging duplicate subproblems (the example below makes this concrete).
- [ ] A structure that guarantees linear time complexity
> ❌ While the subproblem graph supports efficiency, time complexity depends on the number of nodes and edges.
- [ ] A graph that eliminates dependencies between subproblems
> ❌ The subproblem graph explicitly shows dependencies to guide the solution process.
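A small sketch contrasting the two structures for Fibonacci: the naive recursive call tree repeats subproblems, while the subproblem graph merges the duplicates into just n + 1 nodes.

```python
def call_tree_size(n: int) -> int:
    """Number of nodes in the naive recursive call tree of fib(n)."""
    if n < 2:
        return 1
    return 1 + call_tree_size(n - 1) + call_tree_size(n - 2)

n = 20
print(call_tree_size(n))  # 21891 nodes: the tree grows exponentially
print(n + 1)              # 21 nodes in the subproblem graph: fib(0)..fib(n)
```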
From optimization to approximate approaches
# What is the key characteristic of dynamic programming in optimization problems?
- [x] It avoids recomputation by storing intermediate results
> ✅ Dynamic programming uses memoization or tabulation to store intermediate results and prevent redundant calculations.
- [ ] It uses randomness to explore multiple solutions
> ❌ Dynamic programming is deterministic and does not rely on randomness.
- [x] It breaks problems into overlapping subproblems
> ✅ Dynamic programming is effective for problems with overlapping subproblems that can be solved recursively.
- [ ] It guarantees constant time complexity for all problems
> ❌ The time complexity of dynamic programming depends on the problem's size and structure; it is not constant.
- [ ] It always produces approximate solutions
> ❌ Dynamic programming aims to find exact solutions, not approximations.
# What does "optimal substructure" mean in the context of dynamic programming?
- [x] The optimal solution can be built from the solutions to its subproblems
> ✅ Optimal substructure means that the solution to the entire problem can be composed of solutions to smaller subproblems.
- [ ] Each subproblem must have the same size
> ❌ Subproblems may vary in size; the key is that their solutions combine to solve the overall problem.
- [ ] The problem must always be split into two parts
> ❌ Dynamic programming does not require the problem to be split into exactly two parts; it depends on the problem structure.
- [x] The solution to a subproblem does not affect other subproblems
> ✅ Subproblems must be independent: solving one does not change the solution of another, which makes their solutions reusable.
- [ ] The algorithm runs in logarithmic time
> ❌ Optimal substructure does not guarantee logarithmic time complexity.
# What is the greedy approach to solving problems?
- [x] Making locally optimal choices at each step
> ✅ The greedy approach chooses the best option at each step, aiming for a global optimum.
- [ ] Exploring all possible solutions exhaustively
> ❌ This describes brute-force methods, not greedy algorithms.
- [x] Prioritizing simplicity and efficiency over guaranteed optimality
> ✅ Greedy algorithms are efficient and simple but may not always guarantee the optimal solution.
- [ ] Ensuring all subproblems are solved before combining solutions
> ❌ This approach describes dynamic programming, not greedy algorithms.
- [x] Sometimes providing exact solutions depending on the problem structure
> ✅ Greedy algorithms can produce exact solutions for problems that exhibit the greedy-choice property (see the coin-change sketch below).
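A minimal greedy sketch using coin change: picking the largest coin first is locally optimal and happens to be exact for canonical coin systems (such as euro-like denominations), but can miss the global optimum for others.

```python
def greedy_change(amount: int, coins: list[int]) -> list[int]:
    taken = []
    for coin in sorted(coins, reverse=True):  # locally best: largest coin
        while amount >= coin:
            taken.append(coin)
            amount -= coin
    return taken

print(greedy_change(48, [1, 2, 5, 10, 20]))  # [20, 20, 5, 2, 1] -- optimal
print(greedy_change(6, [1, 3, 4]))           # [4, 1, 1], but [3, 3] is better
```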
# What is the main drawback of the greedy approach?
- [x] It may miss the global optimum
> ✅ Greedy algorithms focus on local decisions and may overlook better global solutions.
- [ ] It always requires more computational resources than dynamic programming
> ❌ Greedy algorithms are typically more resource-efficient than dynamic programming.
- [x] It assumes local optima lead to global optima
> ✅ Greedy algorithms assume that locally optimal choices will lead to globally optimal solutions, which is not always true.
- [ ] It cannot handle optimization problems
> ❌ Greedy algorithms can solve optimization problems, but their effectiveness depends on the problem structure.
- [ ] It requires overlapping subproblems
> ❌ Overlapping subproblems are a characteristic of dynamic programming, not greedy algorithms.