There are two mice and n different types of cheese. Each type of cheese must be eaten by exactly one mouse. The points for the cheese with index i (0-indexed) are reward1[i] if the first mouse eats it, and reward2[i] if the second mouse eats it. You are given a positive integer array reward1, a positive integer array reward2, and a non-negative integer k. Return the maximum points the mice can achieve if the first mouse eats exactly k types of cheese.
For example:

reward1 = [1,1,3,4], reward2 = [4,4,1,1], k = 2. Output: 15. The first mouse eats the 2nd and 3rd types of cheese, and the second mouse eats the 0th and 1st types of cheese. The total points are 4 + 4 + 3 + 4 = 15.

reward1 = [1,1], reward2 = [1,1], k = 2. Output: 2. The first mouse eats the 0th and 1st types of cheese, and the second mouse does not eat any cheese. The total points are 1 + 1 = 2.
Constraints:
1 <= n == reward1.length == reward2.length <= 10^5
1 <= reward1[i], reward2[i] <= 1000
0 <= k <= n
You are given two mice and n different types of cheese. Each type of cheese must be eaten by exactly one mouse. For each cheese i, the first mouse gets reward1[i] points if it eats the cheese, and the second mouse gets reward2[i] points if it eats the cheese. You are also given an integer k, representing the number of cheese types the first mouse must eat. The objective is to maximize the total points earned by both mice.
A brute-force solution would try every combination of k cheeses for the first mouse and assign the remaining cheeses to the second mouse, computing the total points for each combination and returning the maximum. This means generating all C(n, k) subsets of size k from a set of size n, which is prohibitively slow for n up to 10^5.
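As a minimal sketch of that idea (workable only for tiny inputs; the function name max_points_brute_force is illustrative, not from the original write-up):

    from itertools import combinations

    def max_points_brute_force(reward1, reward2, k):
        n = len(reward1)
        best = 0
        # Try every subset of exactly k cheeses for the first mouse.
        for chosen in combinations(range(n), k):
            chosen_set = set(chosen)
            total = sum(reward1[i] for i in chosen_set)
            total += sum(reward2[i] for i in range(n) if i not in chosen_set)
            best = max(best, total)
        return best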
A more efficient approach uses a greedy algorithm combined with sorting. We prioritize the cheeses for which the difference between reward1[i] and reward2[i] is greatest: intuitively, the first mouse should eat the cheeses where it gains significantly more points than the second mouse would. Here's the algorithm:
1. Compute diff[i] = reward1[i] - reward2[i] for each cheese.
2. Sort the cheese indices by diff[i] in descending order.
3. Assign the first k cheeses (after sorting) to the first mouse.
4. Assign the remaining n - k cheeses to the second mouse.

Edge cases worth noting:

- k = 0: the first mouse eats no cheese, so the total score is the sum of reward2.
- k = n: the first mouse eats all the cheese, so the total score is the sum of reward1.
- reward1 and reward2 are identical: every diff[i] is zero, so the order of selection doesn't matter beyond the first mouse eating exactly k cheeses.

In Python:

    def max_points(reward1, reward2, k):
        n = len(reward1)
        # Extra points gained if the first mouse eats cheese i instead of the second.
        diff = [reward1[i] - reward2[i] for i in range(n)]
        # Cheese indices ordered so the largest differences come first.
        indices = sorted(range(n), key=lambda i: diff[i], reverse=True)
        total_points = 0
        # The first mouse takes the k cheeses where it outscores the second mouse most.
        for i in range(k):
            total_points += reward1[indices[i]]
        # The second mouse takes everything else.
        for i in range(k, n):
            total_points += reward2[indices[i]]
        return total_points
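Running the function on the examples above, plus the two edge cases, reproduces the expected totals:

    print(max_points([1, 1, 3, 4], [4, 4, 1, 1], 2))  # 15
    print(max_points([1, 1], [1, 1], 2))              # 2
    print(max_points([1, 1, 3, 4], [4, 4, 1, 1], 0))  # 10, the sum of reward2
    print(max_points([1, 1, 3, 4], [4, 4, 1, 1], 4))  # 9, the sum of reward1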
The sorting step dominates, so the algorithm runs in O(n log n) time and uses O(n) extra space for the diff array and the indices array.
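As a design variation (a sketch, not part of the original solution), the same greedy idea can skip sorting indices altogether: start from the second mouse eating everything, then hand the first mouse the k cheeses with the largest diff. Using heapq.nlargest keeps the selection at O(n log k):

    import heapq

    def max_points_heap(reward1, reward2, k):
        # Baseline: the second mouse eats every cheese.
        total = sum(reward2)
        # Reassign the k cheeses where switching to the first mouse gains the most.
        gains = (r1 - r2 for r1, r2 in zip(reward1, reward2))
        return total + sum(heapq.nlargest(k, gains))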
Let reward1 = [1, 1, 3, 4], reward2 = [4, 4, 1, 1], and k = 2. Here's how the algorithm works:
diff = [-3, -3, 2, 3]
indices = [3, 2, 0, 1] (cheese indices sorted by diff in descending order)
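As a quick sanity check, these first two steps of the trace can be reproduced directly (a throwaway snippet, not part of the original walkthrough):

    reward1 = [1, 1, 3, 4]
    reward2 = [4, 4, 1, 1]
    diff = [r1 - r2 for r1, r2 in zip(reward1, reward2)]
    print(diff)  # [-3, -3, 2, 3]
    indices = sorted(range(len(diff)), key=lambda i: diff[i], reverse=True)
    print(indices)  # [3, 2, 0, 1], since Python's sort is stable for the tied -3s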