Given an array of asynchronous functions `functions`, return a promise that resolves when all of them have resolved. More than one function can be running concurrently, but no more than `n` at any one time.
setTimeout is often used to simulate time-consuming tasks.
Example 1:
Input:
functions = [
  () => new Promise(resolve => setTimeout(() => resolve(5), 200)),
  () => new Promise(resolve => setTimeout(() => resolve(10), 100)),
  () => new Promise(resolve => setTimeout(() => resolve(15), 300))
]
n = 2
Output: 20
Explanation: Since the pool size is 2, the first two functions start immediately. The second (100ms) finishes at t = 100, freeing a slot, so the third (300ms) starts at t = 100 and finishes at t = 400. The first finishes at t = 200. The last promise finishes at t = 400, so the entire pool finishes after 400ms.
Example 2:
Input:
functions = [
  () => new Promise(resolve => setTimeout(() => resolve(1), 200)),
  () => new Promise(resolve => setTimeout(() => resolve(2), 100)),
  () => new Promise(resolve => setTimeout(() => resolve(3), 300))
]
n = 1
Output: 6
Explanation: With a pool size of 1, the promises run one at a time: 200ms + 100ms + 300ms = 600ms in total. The function returns when the last promise resolves.
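The completion times in these examples can be double-checked with a small event-driven simulation. The sketch below (a Python stand-in with illustrative names; durations in ms, assuming tasks start in array order and the next task begins the moment a slot frees up) tracks running tasks with a min-heap of end times:

```python
import heapq

def pool_finish_time(durations, pool_size):
    # Min-heap of end times for the tasks currently occupying pool slots.
    end_times = []
    for duration in durations:
        if len(end_times) == pool_size:
            # All slots busy: this task starts when the earliest one finishes.
            start = heapq.heappop(end_times)
        else:
            # A slot is still free during the initial fill: start at t = 0.
            start = 0
        heapq.heappush(end_times, start + duration)
    # The pool finishes when the last running task does.
    return max(end_times) if end_times else 0
```

For the durations above, `pool_finish_time([200, 100, 300], 2)` returns 400 (the third task starts at t = 100, when the 100ms task finishes), and `pool_finish_time([200, 100, 300], 1)` returns 600, matching the sequential schedule of Example 2.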
Constraints:
0 <= functions.length <= 10
1 <= n <= 10

When you get asked this question in a real-life environment, it will often be ambiguous (especially at FAANG). Make sure to ask clarifying questions in that case.
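To pin down the behavior being asked for, here is a minimal asyncio sketch (a Python stand-in for the JavaScript version; `promise_pool` and `make_task` are illustrative names, and an `asyncio.Semaphore` is one common way to cap concurrency):

```python
import asyncio

async def promise_pool(functions, n):
    # A semaphore lets at most n coroutines past `acquire` at once,
    # which is exactly the pool's concurrency limit.
    semaphore = asyncio.Semaphore(n)

    async def gated(fn):
        async with semaphore:
            return await fn()

    # Schedule every function; the semaphore decides when each actually runs.
    return await asyncio.gather(*(gated(fn) for fn in functions))

def make_task(value, delay):
    # Helper mirroring the example inputs: resolve `value` after `delay` seconds.
    async def task():
        await asyncio.sleep(delay)
        return value
    return task

results = asyncio.run(
    promise_pool([make_task(5, 0.02), make_task(10, 0.01), make_task(15, 0.03)], 2)
)
```

Here `results` is `[5, 10, 15]`: `asyncio.gather` preserves input order even though the tasks finish at different times.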
The brute force approach to the Promise Pool problem involves trying to run every possible combination of tasks concurrently. We essentially explore all possible schedules for running the tasks within the concurrency limit, making sure we eventually execute them all. This approach guarantees finding the correct result, though potentially inefficiently.
Here's what that looks like in code:
import itertools

def promise_pool_brute_force(tasks, concurrency_limit):
    best_schedule = None
    min_completion_time = float('inf')
    # Generate all possible schedules.
    for schedule in generate_schedules(tasks, concurrency_limit):
        completion_time = calculate_completion_time(schedule)
        if completion_time < min_completion_time:
            min_completion_time = completion_time
            best_schedule = schedule
    return min_completion_time

def generate_schedules(tasks, concurrency_limit):
    schedules = []
    queue = [([], tasks)]  # (current_schedule, remaining_tasks)
    while queue:
        current_schedule, remaining_tasks = queue.pop(0)
        if not remaining_tasks:
            schedules.append(current_schedule)
            continue
        # Iterate over all possible combinations of tasks to run concurrently.
        for task_combination in combinations(remaining_tasks, concurrency_limit):
            new_schedule = current_schedule + [list(task_combination)]
            new_remaining_tasks = [task for task in remaining_tasks if task not in task_combination]
            queue.append((new_schedule, new_remaining_tasks))
    return schedules

def combinations(tasks, concurrency_limit):
    # Generate non-empty combinations that respect the concurrency limit.
    # Starting at size 1 (not 0) avoids re-enqueueing an unchanged state forever.
    valid_combinations = []
    for i in range(1, min(len(tasks), concurrency_limit) + 1):
        for combination in itertools.combinations(tasks, i):
            valid_combinations.append(combination)
    return valid_combinations

def calculate_completion_time(schedule):
    end_times = {}
    current_time = 0
    running_tasks = set()
    for time_slot in schedule:
        # The next slot begins when the earliest running task finishes.
        if running_tasks:
            current_time = min(end_times[task_id] for task_id in running_tasks)
            running_tasks = set()
        # Start tasks for the current time slot.
        for task in time_slot:
            task_id, task_duration = task
            end_times[task_id] = current_time + task_duration
            running_tasks.add(task_id)
    # The overall completion time is when the very last task finishes.
    return max(end_times.values()) if end_times else 0

The Promise Pool problem is about limiting how many tasks run at the same time. We want to start tasks as soon as possible, but not overload the system. The trick is to keep track of which tasks are running and start new ones only when old ones finish, until all tasks are done.
Here's what that looks like in code:
import asyncio

async def promise_pool(task_list, pool_size):
    running_tasks = set()
    completed_tasks = 0
    task_iterator = iter(task_list)

    async def run_task(task):
        nonlocal completed_tasks
        try:
            # Each entry is an async function, so call it to start the work.
            await task()
        finally:
            running_tasks.remove(task)
            completed_tasks += 1
            # Start new tasks after completion.
            await start_new_tasks()

    async def start_new_tasks():
        while len(running_tasks) < pool_size:
            try:
                task = next(task_iterator)
            except StopIteration:
                break
            running_tasks.add(task)
            # Ensure tasks run concurrently.
            asyncio.create_task(run_task(task))

    # Initialize the pool with initial tasks.
    await start_new_tasks()
    # Wait for all tasks to complete.
    while completed_tasks < len(task_list):
        await asyncio.sleep(0.01)

| Case | How to Handle |
|---|---|
| Empty input array of promise-generating functions | Return a promise that resolves to an empty array since there are no promises to fulfill. |
| Concurrency limit of 0 or negative value | Throw an error, as it's impossible to execute promises with non-positive concurrency. |
| Promise-generating functions that throw errors | Catch the rejected promise and propagate the error to reject the overall promise pool. |
| Large input array causing potential memory exhaustion | Implement a streaming approach to process promises in chunks if the number of promises can cause memory issues. |
| Promises that never resolve or reject (hanging promises) | Implement a timeout mechanism to reject promises that take too long to prevent indefinite blocking. |
| Concurrency limit exceeding the number of input promises | Treat the concurrency limit as the size of the input array to avoid unnecessary overhead. |
| Input array contains non-function elements | Check whether each element is a function before calling it, and throw an error if not. |
| Promises resolving with different data types | Handle whatever data types the promises resolve with, without making type-specific assumptions. |
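Several rows of this table translate directly into guard clauses. A hedged sketch of how they might look (building on the semaphore idea in Python; `promise_pool_safe` and the error messages are illustrative, not from any standard API):

```python
import asyncio

async def promise_pool_safe(functions, n):
    # Guard: a non-positive concurrency limit can make no progress.
    if n <= 0:
        raise ValueError("concurrency limit must be a positive integer")
    # Guard: every element must be callable before we try to invoke it.
    for fn in functions:
        if not callable(fn):
            raise TypeError("all elements must be promise-generating functions")
    # Empty input: resolve immediately with an empty result list.
    if not functions:
        return []
    # Cap the limit at the number of tasks to avoid useless semaphore slots.
    semaphore = asyncio.Semaphore(min(n, len(functions)))

    async def gated(fn):
        async with semaphore:
            return await fn()

    # gather propagates the first exception, rejecting the whole pool.
    return await asyncio.gather(*(gated(fn) for fn in functions))
```

For example, `asyncio.run(promise_pool_safe([], 3))` resolves to `[]`, and a limit of 0 raises `ValueError` before any task is scheduled.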