A frog is crossing a river. The river is divided into some number of units, and at each unit, there may or may not exist a stone. The frog can jump on a stone, but it must not jump into the water.
Given a list of the stones' positions (in units) in sorted ascending order, determine whether the frog can cross the river by landing on the last stone. Initially, the frog is on the first stone, and the first jump must be exactly 1 unit.
If the frog's last jump was k units, its next jump must be either k - 1, k, or k + 1 units. The frog can only jump in the forward direction.
For example:

Input: stones = [0,1,3,5,6,8,12,17]
Output: true
Explanation: The frog can jump to the last stone by jumping 1 unit to the 2nd stone, then 2 units to the 3rd stone, then 2 units to the 4th stone, then 3 units to the 6th stone, 4 units to the 7th stone, and 5 units to the 8th stone.

Input: stones = [0,1,2,3,4,8,9,11]
Output: false
Explanation: There is no way to jump to the last stone, as the gap between the 5th and 6th stones is too large.
Constraints:
2 <= stones.length <= 2000
0 <= stones[i] <= 2^31 - 1
stones[0] == 0
stones is sorted in strictly increasing order.
Here's the breakdown of how to solve this frog jumping problem:
We're given a list of stone positions along a river, sorted in ascending order. A frog starts at the first stone (position 0) and wants to reach the last stone. The frog's first jump is 1 unit. After that, if the frog's last jump was k units, the next jump must be k - 1, k, or k + 1 units. The frog can only jump forward.
The task is to determine if the frog can reach the last stone.
A brute-force approach would involve exploring all possible jump sequences from the starting stone. This can be done recursively. At each stone, we try all possible jump lengths (k - 1, k, and k + 1) and check whether each one lands on another stone. If we reach the last stone, we return true. If we exhaust all possibilities without reaching the last stone, we return false.
However, this approach is highly inefficient due to the overlapping subproblems. We'd end up recalculating the same paths multiple times, leading to exponential time complexity.
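As a sketch, the brute-force recursion might look like the following (the function and helper names here are illustrative, not part of the original solution):

```python
def can_cross_bf(stones):
    """Brute force: try every allowed jump from every stone.

    Exponential in the worst case, because the same
    (position, last-jump) states are revisited many times.
    """
    positions = set(stones)
    last = stones[-1]

    def jump(pos, k):
        if pos == last:
            return True
        # From the current stone, try jumps of k - 1, k, and k + 1 units.
        for step in (k - 1, k, k + 1):
            if step > 0 and pos + step in positions:
                if jump(pos + step, step):
                    return True
        return False

    # The frog starts at the first stone with a "last jump" of 0,
    # which forces the first real jump to be 1 unit.
    return jump(stones[0], 0)
```

Memoizing `jump` on `(pos, k)` (e.g. with `functools.lru_cache`) is exactly the optimization the dynamic-programming solution below formalizes.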
We can use dynamic programming to optimize this solution. We can use a hash map to store all the possible jump sizes that can reach a particular stone. The keys of the map are the stone positions, and the values are sets of jump sizes that can reach that stone.
Here's the algorithm:

1. Initialize a hash map dp where keys are stone positions and values are sets of possible jump sizes, and add the initial jump size 0 to dp[0].
2. For each stone, and for each jump size k in its set, calculate the next possible jump lengths: k - 1, k, and k + 1.
3. For each candidate jump length that is positive and lands on a stone, add that jump length to the landing stone's set.
4. At the end, the frog can reach the last stone if and only if the last stone's set is non-empty.

```python
def canCross(stones):
    dp = {stone: set() for stone in stones}
    dp[0].add(0)
    for stone in stones:
        for k in dp[stone]:
            for step in range(k - 1, k + 2):
                if step > 0 and stone + step in dp:
                    dp[stone + step].add(step)
    return len(dp[stones[-1]]) > 0
```
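To sanity-check the function, we can run it on the two sample inputs from the problem statement (the function body is repeated here so the snippet runs standalone):

```python
def canCross(stones):
    """Return True if the frog can reach the last stone (hash-map DP)."""
    dp = {stone: set() for stone in stones}
    dp[0].add(0)
    for stone in stones:
        for k in dp[stone]:
            for step in range(k - 1, k + 2):
                if step > 0 and stone + step in dp:
                    dp[stone + step].add(step)
    return len(dp[stones[-1]]) > 0

print(canCross([0, 1, 3, 5, 6, 8, 12, 17]))  # True
print(canCross([0, 1, 2, 3, 4, 8, 9, 11]))   # False
```

Note that the inner loop only ever adds jump sizes to stones strictly ahead of the current one (since step > 0), so iterating over dp[stone] while the map is being filled in is safe.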
Here's an example of how the dp map evolves with the input stones = [0,1,3,5,6,8,12,17]:

After initialization:
dp = {0: {0}, 1: set(), 3: set(), 5: set(), 6: set(), 8: set(), 12: set(), 17: set()}
After processing stone 0:
dp = {0: {0}, 1: {1}, 3: set(), 5: set(), 6: set(), 8: set(), 12: set(), 17: set()}
After processing stone 1 (jumps of 0, 1, and 2 units are tried; only the 2-unit jump lands on a stone):
dp = {0: {0}, 1: {1}, 3: {2}, 5: set(), 6: set(), 8: set(), 12: set(), 17: set()}
After processing the remaining stones, the map ends up as:
dp = {0: {0}, 1: {1}, 3: {2}, 5: {2}, 6: {1, 3}, 8: {2, 3}, 12: {4}, 17: {5}}

Since dp[17] is not empty, the frog can reach the last stone.

The time complexity of this solution is O(N^2), where N is the number of stones. This is because, in the worst case, for each stone, we iterate through all possible jump sizes, which can be at most N.
Specifically: there are N stones, and the set of jump sizes stored for a single stone can hold at most N distinct values, since each of the earlier stones can contribute at most a few jump sizes to it. For each stone we therefore do at most O(N) work trying the three candidate jumps per stored size. Thus, the overall time complexity is O(N * N) = O(N^2).
The space complexity of this solution is also O(N^2), where N is the number of stones. The dp hash map stores the stone positions as keys and sets of jump sizes as values. In the worst case, the set for a stone can hold a number of jump sizes proportional to N, so storing the dp map requires O(N * N) = O(N^2) space.
A few edge cases worth handling before building the dp map:

- If stones is null or has fewer than 2 stones, return false, because the frog needs at least two stones to make a jump.
- If the first stone is not at position 0, return false, because the frog always starts at position 0 (the constraints guarantee stones[0] == 0, but the check is cheap).
- If stones is exactly [0], you could instead return true, as the frog is already on the last stone (though this input violates the problem's constraint that there are at least 2 stones).

By using dynamic programming, we can efficiently solve the frog jumping problem, handling various edge cases and keeping the time and space complexity manageable.
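Putting the guards and the DP together, a defensive variant might look like this (the guard clauses and the function name are additions for illustration; the core loop is the same hash-map DP, and here the single-stone input [0] is resolved as true):

```python
def can_cross_safe(stones):
    """Hash-map DP solution with the edge-case guards described above."""
    if not stones:               # null/empty input: nothing to stand on
        return False
    if stones[0] != 0:           # the frog must start at position 0
        return False
    if len(stones) == 1:         # already standing on the last stone
        return True
    dp = {stone: set() for stone in stones}
    dp[0].add(0)
    for stone in stones:
        for k in dp[stone]:
            for step in (k - 1, k, k + 1):
                if step > 0 and stone + step in dp:
                    dp[stone + step].add(step)
    return bool(dp[stones[-1]])
```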