NEOCODE

Advanced Graph Algorithms MCQs

1. All-Pairs Shortest Paths

Which algorithm is used for all-pairs shortest paths?

Correct Answer: c) Floyd-Warshall

Explanation:
The Floyd-Warshall algorithm is specifically designed for finding shortest paths between all pairs of vertices in a weighted graph. It works with both positive and negative edge weights (but no negative cycles), and is particularly efficient for dense graphs represented as adjacency matrices.

The time complexity of Floyd-Warshall is:

Correct Answer: b) O(V³)

Explanation:
The Floyd-Warshall algorithm uses three nested loops over all vertices, resulting in O(V³) time complexity. The space complexity is O(V²) for storing the distance matrix. While this seems high, it's often the best choice for dense graphs where E ≈ V².
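The three nested loops and the O(V²) distance matrix can be seen directly in a minimal Python sketch (assuming an adjacency-matrix input with float('inf') for missing edges):

```python
INF = float('inf')

def floyd_warshall(dist):
    """dist: V x V matrix; dist[i][j] = weight of edge i->j, INF if absent,
    0 on the diagonal. Returns the all-pairs shortest-path matrix."""
    n = len(dist)
    d = [row[:] for row in dist]      # O(V^2) space for the distance matrix
    for k in range(n):                # allowed intermediate vertex
        for i in range(n):            # source
            for j in range(n):        # destination
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

g = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]
print(floyd_warshall(g))  # shortest 0->2 becomes 3 + 1 = 4
```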

2. Maximum Flow & Lower-Bound Theory
Maximum-Flow Problem

The Ford-Fulkerson algorithm solves:

Correct Answer: b) Maximum flow

Explanation:
The Ford-Fulkerson method is used to compute the maximum flow in a flow network. It works by finding augmenting paths from the source to the sink and increasing the flow along these paths until no more augmenting paths can be found. The max-flow min-cut theorem guarantees its correctness.

The Edmonds-Karp algorithm improves Ford-Fulkerson using:

Correct Answer: b) BFS

Explanation:
Edmonds-Karp is a specific implementation of Ford-Fulkerson that uses BFS to find the shortest augmenting path in each iteration. This guarantees a polynomial time complexity of O(VE²), unlike the basic Ford-Fulkerson which might not terminate for irrational capacities when using DFS.
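A compact Edmonds-Karp sketch (assuming an n×n residual-capacity matrix, which the function mutates) shows the BFS-for-shortest-augmenting-path structure:

```python
from collections import deque

def edmonds_karp(cap, s, t):
    """cap: n x n residual-capacity matrix (mutated in place).
    Returns the maximum flow from s to t."""
    n = len(cap)
    flow = 0
    while True:
        # BFS: shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:          # no augmenting path left: flow is maximal
            return flow
        # Bottleneck capacity along the path found
        bottleneck, v = float('inf'), t
        while v != s:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        # Augment: shrink forward residuals, grow reverse ones
        v = t
        while v != s:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck  # reverse edge lets flow be undone later
            v = u
        flow += bottleneck

cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
print(edmonds_karp(cap, 0, 3))  # → 5
```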

Lower-Bound Theory

Lower-bound theory helps determine:

Correct Answer: b) Minimum time required for any algorithm

Explanation:
Lower-bound theory establishes the minimum number of operations needed to solve a problem in the worst case, regardless of the algorithm used. It helps determine when an algorithm is optimal (when its upper bound matches the problem's lower bound) and identifies problems that are inherently difficult.

A problem with a known lower bound of Ω(n log n) is:

Correct Answer: a) Sorting

Explanation:
Comparison-based sorting algorithms have a lower bound of Ω(n log n), meaning no comparison-based algorithm can sort in better than O(n log n) time in the worst case. This is derived from the decision tree model, where sorting requires at least log₂(n!) ≈ n log n comparisons.
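The decision-tree bound can be checked numerically: a comparison sort must distinguish all n! orderings, so it needs at least ⌈log₂(n!)⌉ comparisons in the worst case, which grows like n log n:

```python
import math

# ceil(log2(n!)) is the decision-tree lower bound on worst-case comparisons;
# compare it against n * log2(n) to see the Theta(n log n) growth.
for n in (10, 100, 1000):
    bound = math.ceil(math.log2(math.factorial(n)))
    print(n, bound, round(n * math.log2(n)))
```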

3. Additional Advanced Questions

Which problem cannot be solved using Greedy approach?

Correct Answer: c) 0/1 Knapsack

Explanation:
The 0/1 Knapsack problem cannot be optimally solved with a greedy approach because it doesn't have the greedy-choice property. Unlike the fractional version where we can take parts of items, the 0/1 version requires either taking or leaving whole items, making dynamic programming necessary for optimal solutions.

The space complexity of the DP solution for the 0/1 Knapsack problem is:

Correct Answer: b) O(nW)

Explanation:
The standard DP solution for 0/1 Knapsack uses a 2D table of size n×W, where n is the number of items and W is the knapsack capacity. This gives O(nW) space complexity. Note that this is pseudo-polynomial since W is a numeric value, not input size. Space can be optimized to O(W) using a 1D array.
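The O(W) space optimization mentioned above can be sketched as follows; iterating capacities downward is what keeps each item used at most once:

```python
def knapsack_01(weights, values, W):
    """Space-optimized 0/1 knapsack: a 1-D DP array of size W+1 (O(W) space).
    dp[c] = best value achievable with capacity c using the items seen so far."""
    dp = [0] * (W + 1)
    for w, v in zip(weights, values):
        # Downward iteration: dp[c - w] still refers to the previous item row
        for c in range(W, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[W]

print(knapsack_01([1, 3, 4], [15, 20, 30], 4))  # → 35 (take weights 1 and 3)
```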

The Bellman-Ford algorithm is preferred over Dijkstra's for graphs with:

Correct Answer: a) Negative weights

Explanation:
Bellman-Ford can handle graphs with negative edge weights (and can detect negative cycles), while Dijkstra's cannot. However, Bellman-Ford has higher time complexity (O(VE) vs Dijkstra's O(E + V log V)), so Dijkstra's is preferred for graphs with non-negative weights.
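A minimal Bellman-Ford sketch (edge-list input assumed) shows both the V−1 relaxation passes behind the O(VE) bound and the extra pass used for negative-cycle detection:

```python
def bellman_ford(n, edges, src):
    """edges: list of (u, v, w) triples. Returns the distance list from src,
    or None if a negative cycle is reachable from src."""
    INF = float('inf')
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):               # V-1 passes over all E edges: O(VE)
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One extra pass: any further improvement implies a negative cycle
    for u, v, w in edges:
        if dist[u] != INF and dist[u] + w < dist[v]:
            return None
    return dist

print(bellman_ford(3, [(0, 1, 4), (0, 2, 5), (1, 2, -3)], 0))  # → [0, 4, 1]
```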

In Prim's algorithm, the graph must be:

Correct Answer: b) Weighted and connected

Explanation:
Prim's algorithm finds a minimum spanning tree (MST) for a connected, undirected graph with weighted edges. The weights are necessary to determine the "minimum" spanning tree, and connectivity is required to ensure a spanning tree exists (a spanning tree connects all vertices).
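A short Prim sketch (assuming an adjacency-list dict for a connected, undirected, weighted graph) makes the "grow the tree by the cheapest crossing edge" idea concrete:

```python
import heapq

def prim_mst(adj):
    """adj: {u: [(weight, v), ...]} for a connected undirected weighted graph.
    Returns the total MST weight; O(E log V) with a binary heap."""
    start = next(iter(adj))
    visited = {start}
    heap = list(adj[start])
    heapq.heapify(heap)
    total = 0
    while heap and len(visited) < len(adj):
        w, v = heapq.heappop(heap)       # cheapest edge leaving the tree
        if v in visited:
            continue                     # stale entry: both ends already in tree
        visited.add(v)
        total += w
        for edge in adj[v]:
            if edge[1] not in visited:
                heapq.heappush(heap, edge)
    return total

g = {0: [(1, 1), (4, 2)], 1: [(1, 0), (2, 2)], 2: [(4, 0), (2, 1)]}
print(prim_mst(g))  # → 3 (edges 0-1 and 1-2)
```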

The greedy choice property is used in:

Correct Answer: c) Greedy algorithms

Explanation:
The greedy choice property is fundamental to greedy algorithms. It means that a locally optimal choice at each step leads to a globally optimal solution. Problems suitable for greedy approaches must have both this property and optimal substructure (like Huffman coding or Dijkstra's algorithm).

The time complexity of Kruskal's algorithm depends on:

Correct Answer: c) Both (a) and (b)

Explanation:
Kruskal's algorithm's time complexity has two main components:
1. Sorting all edges: O(E log E)
2. Union-Find operations: O(E α(V)), where α is the inverse Ackermann function
Since E log E dominates, the overall complexity is O(E log E), or equivalently O(E log V).
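Both components appear in a minimal Kruskal sketch (assuming a connected graph given as a weighted edge list):

```python
def kruskal_mst(n, edges):
    """edges: list of (weight, u, v) triples for a connected graph.
    Returns the total MST weight. Sorting dominates: O(E log E)."""
    parent = list(range(n))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = 0
    for w, u, v in sorted(edges):     # edges in increasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                  # skip edges that would close a cycle
            parent[ru] = rv
            total += w
    return total

print(kruskal_mst(3, [(1, 0, 1), (2, 1, 2), (4, 0, 2)]))  # → 3
```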

Which algorithm guarantees optimal substructure?

Correct Answer: c) Both (a) and (b)

Explanation:
Both greedy algorithms and dynamic programming require problems to have optimal substructure - the property that an optimal solution to the problem contains optimal solutions to subproblems. The difference is that greedy algorithms make irrevocable choices while DP explores all possibilities.

The memory function technique is used in:

Correct Answer: a) DP to store intermediate results

Explanation:
Memory function (or memoization) is a technique where we store the results of expensive function calls and return the cached result when the same inputs occur again. It's primarily used in dynamic programming to avoid recomputation, effectively trading space for time.
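The space-for-time trade is easy to see with Fibonacci: the naive recursion is exponential, while caching each subproblem the first time it is computed (here via the standard-library lru_cache) makes it linear:

```python
from functools import lru_cache

@lru_cache(maxsize=None)   # the "memory function": cache results by argument
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)   # each subproblem is computed only once

print(fib(50))  # → 12586269025, instantly; the uncached version is O(2^n)
```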

The Traveling Salesman Problem (TSP) can be solved using:

Correct Answer: a) Only DP

Explanation:
The exact solution to TSP can be found using dynamic programming (Held-Karp algorithm) with O(n²2ⁿ) time. Greedy approaches (like nearest neighbor) can provide approximate solutions but don't guarantee optimality. TSP is NP-hard, so no known polynomial-time exact algorithm exists for large n.
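The Held-Karp recurrence (cheapest path visiting a subset S and ending at city j) can be sketched in a few lines; the dict over subsets is what gives the O(n²2ⁿ) cost:

```python
from itertools import combinations

def held_karp(dist):
    """Exact TSP via DP over subsets (Held-Karp), O(n^2 * 2^n) time.
    dist: n x n distance matrix; the tour starts and ends at city 0."""
    n = len(dist)
    # dp[(S, j)] = cheapest path from 0 through exactly the set S, ending at j
    dp = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            fs = frozenset(S)
            for j in S:
                dp[(fs, j)] = min(dp[(fs - {j}, k)] + dist[k][j]
                                  for k in S if k != j)
    full = frozenset(range(1, n))
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

d = [[0, 1, 15, 6],
     [2, 0, 7, 3],
     [9, 6, 0, 12],
     [10, 4, 8, 0]]
print(held_karp(d))  # optimal tour cost for this small instance
```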