You are given a triangle represented as a nested list of integers. The structure of the triangle is such that the first row contains one element, the second row contains two elements, and so on. For example:
[
     [2],
    [3,4],
   [6,5,7],
  [4,1,8,3]
]
Your task is to find the minimum path sum from the top of the triangle to the bottom. At each step, you may only move to an adjacent number in the row below: from index j in the current row, you can move to index j or index j + 1 in the next row.
For the example above, the minimum path sum is 11 (2 -> 3 -> 5 -> 1). Your function should return this minimum sum.
Clarifying questions:
Can you provide an algorithm to solve this problem efficiently?
The most straightforward approach is to use recursion to explore all possible paths from the top to the bottom of the triangle. At each step there are two choices: move to the element directly below (same column index) or to the element one position to its right (index + 1). We recursively explore both options and return the minimum path sum.
def min_path_sum_recursive(triangle, row, col):
    # Base case: an element in the last row is its own minimum path sum.
    if row == len(triangle) - 1:
        return triangle[row][col]
    # Current value plus the cheaper of the two adjacent paths in the row below.
    return triangle[row][col] + min(
        min_path_sum_recursive(triangle, row + 1, col),
        min_path_sum_recursive(triangle, row + 1, col + 1),
    )

def triangle_min_path(triangle):
    return min_path_sum_recursive(triangle, 0, 0)
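As a quick sanity check, calling this on the example triangle from the problem statement should return 11:

triangle = [
    [2],
    [3, 4],
    [6, 5, 7],
    [4, 1, 8, 3],
]
print(triangle_min_path(triangle))  # 11 (2 -> 3 -> 5 -> 1)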
The time complexity of this recursive approach is O(2^n), where n is the number of rows in the triangle. This is because each call branches into two further recursive calls, and the same subproblems are recomputed many times.
The space complexity is O(n) due to the call stack of the recursive function, where n is the number of rows.
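Since the exponential cost comes entirely from recomputing the same (row, col) subproblems, one way to see the jump to the dynamic programming solution below is to memoize the recursion. The following is a minimal sketch; the name triangle_min_path_memo and the use of functools.lru_cache are illustrative, not part of the solutions above:

from functools import lru_cache

def triangle_min_path_memo(triangle):
    n = len(triangle)

    @lru_cache(maxsize=None)
    def best(row, col):
        # Same recursion as before, but each (row, col) is computed only once.
        if row == n - 1:
            return triangle[row][col]
        return triangle[row][col] + min(best(row + 1, col), best(row + 1, col + 1))

    return best(0, 0)

With memoization, each of the O(n^2) cells is solved once, so the time drops to O(n^2) at the cost of O(n^2) cached results, which is what the bottom-up approach below achieves more directly.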
A more efficient approach is to use dynamic programming. We can build a DP table (or modify the input triangle in place) to store the minimum path sum to reach each element in the triangle. We start from the bottom of the triangle and work our way up. For each element, the minimum path sum is the element's value plus the minimum of the path sums of its two children (elements in the row below).
def triangle_min_path_dp(triangle):
    n = len(triangle)
    # Start from the second-to-last row and iterate upwards
    for i in range(n - 2, -1, -1):
        for j in range(len(triangle[i])):
            triangle[i][j] += min(triangle[i + 1][j], triangle[i + 1][j + 1])
    # The minimum path sum will be at the top of the triangle
    return triangle[0][0]
The time complexity of this dynamic programming approach is O(n^2), where n is the number of rows in the triangle. We iterate through each element of the triangle once.
The space complexity of this approach is O(1) if we modify the input triangle in place. If we create a separate DP table, the space complexity would be O(n^2).
The same edge cases as the recursive solution apply here: a triangle with a single row simply returns its only element, and an empty input would need to be handled separately.
We can reduce the extra space to O(n) by using a 1D array that stores the minimum path sums for the row below the one currently being processed. We iterate from the bottom row upward, updating this array in place at each step. This keeps the O(n^2) running time of the DP approach while also leaving the input triangle unmodified.
def triangle_min_path_dp_optimized(triangle):
    n = len(triangle)
    # Copy the last row so the input triangle is left unmodified;
    # dp[j] holds the minimum path sum from column j of the row below to the bottom.
    dp = list(triangle[-1])
    for i in range(n - 2, -1, -1):
        for j in range(len(triangle[i])):
            dp[j] = triangle[i][j] + min(dp[j], dp[j + 1])
    return dp[0]
The time complexity remains O(n^2).
The space complexity is reduced to O(n).
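Assuming the three function names defined above, a short check on the example triangle confirms they agree; note that triangle_min_path_dp modifies its argument in place, so it is given its own copy:

import copy

triangle = [
    [2],
    [3, 4],
    [6, 5, 7],
    [4, 1, 8, 3],
]

print(triangle_min_path(triangle))                    # 11
print(triangle_min_path_dp(copy.deepcopy(triangle)))  # 11
print(triangle_min_path_dp_optimized(triangle))       # 11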