Editorial for DMOPC '20 Contest 3 P4 - Bob and Typography
Submitting an official solution before solving the problem yourself is a bannable offence.
Author:
The first subtask can be solved using brute-force. We can generate all possible splits recursively and choose the longest pretty one.
Time complexity: $\mathcal{O}(2^N \cdot N)$
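For concreteness, here is a minimal brute-force sketch in C++ (my own code, not the official solution; it assumes a pretty split is one whose line lengths are non-increasing and then non-decreasing, matching the pasting step described below, and that the word lengths have already been read into `w`):

```cpp
#include <bits/stdc++.h>
using namespace std;

int n;                    // number of words
vector<long long> w;      // w[i] = letters in word i (0-indexed)
int best = 0;             // most lines in any pretty split seen so far

// lines = lengths of the lines built so far, covering words [0, pos)
void rec(int pos, vector<long long>& lines) {
    if (pos == n) {
        // pretty check: lengths non-increasing, then non-decreasing
        int k = 0, m = (int)lines.size();
        while (k + 1 < m && lines[k] >= lines[k + 1]) k++;
        while (k + 1 < m && lines[k] <= lines[k + 1]) k++;
        if (k == m - 1) best = max(best, m);
        return;
    }
    long long len = 0;
    for (int end = pos; end < n; end++) {  // next line = words pos..end
        len += w[end];
        lines.push_back(len);
        rec(end + 1, lines);
        lines.pop_back();
    }
}
```

Calling `rec(0, lines)` with an empty `lines` leaves the answer in `best`; each of the $2^{N-1}$ splits is generated once and checked in $\mathcal{O}(N)$.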
The second subtask can be solved using dynamic programming. For simplicity, let's consider purely non-increasing splits only.

Let $dp[i][j]$ be the longest non-increasing split whose last line starts at word $i$ and ends at word $j$, and let $\text{sum}(i, j)$ be the total number of letters in the words between indices $i$ and $j$.

To calculate $dp[i][j]$, we have $i - 1$ choices for the first word $k$ in the second-last line, and we want to consider only the $k$s for which $\text{sum}(k, i - 1) \ge \text{sum}(i, j)$ (in order to maintain non-increasingness). Thus, $dp[i][j] = 1 + \max_{\text{sum}(k, i - 1) \ge \text{sum}(i, j)} dp[k][i - 1]$, with the base case $dp[1][j] = 1$. Our longest non-increasing split ending at word $j$ is then $\max_{1 \le i \le j} dp[i][j]$. We can find all these $dp$ values in $\mathcal{O}(N^4)$ total, since each of the $\mathcal{O}(N^2)$ states tries $\mathcal{O}(N)$ values of $k$ and each $\text{sum}$ takes $\mathcal{O}(N)$ to evaluate.
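A sketch of this DP follows (again my own code and naming, with 1-indexed words; `bestEndOf` returns $\max_i dp[i][j]$ for every $j$, which the pasting step below reuses):

```cpp
#include <bits/stdc++.h>
using namespace std;

// dp[i][j] = most lines in a non-increasing split of words 1..j whose last
// line is exactly words i..j, or 0 if no such split exists.
// bestEnd[j] = max over i of dp[i][j].
vector<int> bestEndOf(const vector<long long>& w) {   // w[1..n], w[0] unused
    int n = (int)w.size() - 1;
    auto sum = [&](int a, int b) {                    // naive O(N) range sum
        long long s = 0;
        for (int x = a; x <= b; x++) s += w[x];
        return s;
    };
    vector<vector<int>> dp(n + 1, vector<int>(n + 1, 0));
    vector<int> bestEnd(n + 1, 0);
    for (int j = 1; j <= n; j++)
        for (int i = 1; i <= j; i++) {
            if (i == 1) dp[i][j] = 1;                 // words 1..j on a single line
            else
                for (int k = 1; k <= i - 1; k++)      // second-last line = words k..i-1
                    if (dp[k][i - 1] > 0 && sum(k, i - 1) >= sum(i, j))
                        dp[i][j] = max(dp[i][j], dp[k][i - 1] + 1);
            bestEnd[j] = max(bestEnd[j], dp[i][j]);
        }
    return bestEnd;                                   // O(N^4) overall
}
```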
What about the longest pretty split? Well, we can find the longest non-decreasing split whose first line starts at word $i$ simply by running the above algorithm backwards. Then we can paste the non-increasing split of words $1..i-1$ and the non-decreasing split of words $i..N$ together to get a pretty split; no constraint is needed at the seam, since whichever of the two adjacent lines is shorter can serve as the bottom of the valley. Taking the longest of these pretty splits over all choices of $i$ will give us our final answer.
Time complexity: $\mathcal{O}(N^4)$
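The pasting step might look as follows, with `bestEndOf` being the helper sketched above; reversing the word lengths turns a non-decreasing split of a suffix into a non-increasing split of a prefix, so the same routine can be reused:

```cpp
#include <bits/stdc++.h>
using namespace std;

vector<int> bestEndOf(const vector<long long>& w);  // the DP sketched above

int longestPretty(const vector<long long>& w) {     // w[1..n]
    int n = (int)w.size() - 1;
    vector<int> bestEnd = bestEndOf(w);             // non-increasing splits of prefixes

    // Reverse the words: a non-increasing split of the reversed text ending
    // at reversed word n+1-i is a non-decreasing split of words i..n.
    vector<long long> r(n + 1);
    for (int i = 1; i <= n; i++) r[i] = w[n + 1 - i];
    vector<int> revBest = bestEndOf(r);
    vector<int> bestStart(n + 2, 0);                // non-decreasing splits of suffixes
    for (int i = 1; i <= n; i++) bestStart[i] = revBest[n + 1 - i];

    // Paste at every word boundary; either half may be empty.
    int ans = 0;
    for (int i = 1; i <= n + 1; i++)
        ans = max(ans, bestEnd[i - 1] + bestStart[i]);
    return ans;
}
```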
To solve the third subtask, we can optimize the above solution by using a prefix sum array to find each $\text{sum}(i, j)$ in $\mathcal{O}(1)$.
Time complexity: $\mathcal{O}(N^3)$
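For instance (assuming the input format is $N$ followed by the $N$ word lengths, which is a guess on my part):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<long long> w(n + 1), pre(n + 1, 0);
    for (int i = 1; i <= n; i++) {
        cin >> w[i];
        pre[i] = pre[i - 1] + w[i];  // pre[i] = w[1] + ... + w[i]
    }
    // sum(a, b) is now an O(1) lookup instead of an O(N) loop
    auto sum = [&](int a, int b) { return pre[b] - pre[a - 1]; };
    cout << sum(1, n) << '\n';       // e.g. letters in the whole text
}
```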
We can solve the fourth subtask by making further optimizations. Notice that when we iterate $k$ from $1$ to $i - 1$, $\text{sum}(k, i - 1)$ decreases monotonically. Hence, for each choice of $i$ and $j$, there exists a "crossing point" $c_{i,j}$ such that $\text{sum}(k, i - 1) \ge \text{sum}(i, j)$ for all $k \le c_{i,j}$ and $\text{sum}(k, i - 1) < \text{sum}(i, j)$ for all $k > c_{i,j}$. We can precompute all $c_{i,j}$s using a "two pointers" method in $\mathcal{O}(N^2)$ total. Alternatively, we can use binary search to compute them in $\mathcal{O}(N^2 \log N)$.
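A sketch of the two-pointers precomputation (the function name and the convention that $c_{i,j} = 0$ means "no valid $k$" are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// c[i][j] = largest k in [1, i-1] with sum(k, i-1) >= sum(i, j), or 0 if none.
// For fixed i, sum(i, j) grows with j, so the crossing point only moves left:
// the pointer k never moves right, giving O(N) work per i and O(N^2) overall.
vector<vector<int>> crossingPoints(const vector<long long>& pre, int n) {
    auto sum = [&](int a, int b) { return pre[b] - pre[a - 1]; };
    vector<vector<int>> c(n + 1, vector<int>(n + 1, 0));
    for (int i = 2; i <= n; i++) {
        int k = i - 1;
        for (int j = i; j <= n; j++) {
            while (k >= 1 && sum(k, i - 1) < sum(i, j)) k--;
            c[i][j] = k;
        }
    }
    return c;
}
```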
Our DP calculation then becomes $dp[i][j] = 1 + \max_{k \le c_{i,j}} dp[k][i - 1]$. We can compute each maximum in $\mathcal{O}(1)$ using a prefix max array for a total time complexity of $\mathcal{O}(N^2)$.
Time complexity: $\mathcal{O}(N^2)$
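Putting the crossing points and the prefix max array together (helper names are mine; `crossingPoints` is the sketch above):

```cpp
#include <bits/stdc++.h>
using namespace std;

vector<vector<int>> crossingPoints(const vector<long long>& pre, int n);  // sketched above

// bestEnd[j] = most lines in a non-increasing split of words 1..j, in O(N^2):
// pmax[j][i] = max(dp[1][j], ..., dp[i][j]) answers each transition in O(1).
vector<int> bestEndFast(const vector<long long>& w) {  // w[1..n]
    int n = (int)w.size() - 1;
    vector<long long> pre(n + 1, 0);
    for (int i = 1; i <= n; i++) pre[i] = pre[i - 1] + w[i];
    vector<vector<int>> c = crossingPoints(pre, n);

    const int NONE = INT_MIN / 2;  // marks states with no valid split
    vector<vector<int>> dp(n + 1, vector<int>(n + 1, NONE));
    vector<vector<int>> pmax(n + 1, vector<int>(n + 1, NONE));
    vector<int> bestEnd(n + 1, 0);
    for (int j = 1; j <= n; j++)
        for (int i = 1; i <= j; i++) {
            if (i == 1) dp[i][j] = 1;                        // whole prefix on one line
            else if (c[i][j] >= 1 && pmax[i - 1][c[i][j]] > 0)
                dp[i][j] = pmax[i - 1][c[i][j]] + 1;         // best dp[k][i-1], k <= c[i][j]
            pmax[j][i] = max(pmax[j][i - 1], dp[i][j]);
            bestEnd[j] = max(bestEnd[j], dp[i][j]);
        }
    return bestEnd;
}
```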
The fastest solution is based on assuming sparsity of the DP state table.
Observe that if $dp[i'][j] \le dp[i][j]$ and $\text{sum}(i', j) \le \text{sum}(i, j)$, there is no need to keep the $(i', j)$ state at all: any subsequent configuration on it works just as well on $(i, j)$ instead.
So we keep only these values (known as the Pareto optimums) in our DP table (using vectors), and transit only on these values.
This can be further optimized using the following two observations:
- By considering the transitions into each ending word $j$ in increasing order of $\text{sum}(i, j)$, we only need to check whether the new value has a higher $dp$ value than the previous Pareto optimum.
- The maximum DP value associated with each last-line length $\text{sum}(i, j)$ is monotonically decreasing as $\text{sum}(i, j)$ increases. So if we binary search for the max extent that a transition can reach, we can walk downward with a pointer until the first location where the new DP value is no longer useful.
The first optimization brings the memory down to the number of Pareto optimums, and is already sufficient for passing the last batch (with a runtime of about 1s). The second optimization gets the runtime down to $\mathcal{O}(\log N)$ times the number of Pareto optimums as well, and takes about 0.3s on the data we generated.
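To make the idea concrete, here is a rough sketch of the sparse DP with just the dominance pruning (no attempt at the two optimizations above; the `offer` routine, the per-word `front` vectors, and all names are my own guesses rather than the official code):

```cpp
#include <bits/stdc++.h>
using namespace std;

using State = pair<long long, int>;   // (length of last line, lines so far)

// Keep f sorted by increasing last-line length; the stored line counts then
// strictly decrease, since an entry larger in both coordinates dominates.
void offer(vector<State>& f, long long s, int v) {
    auto it = lower_bound(f.begin(), f.end(), State(s, INT_MIN));
    if (it != f.end() && it->second >= v) return;          // (s, v) is dominated
    if (it != f.end() && it->first == s) it = f.erase(it); // same length, fewer lines
    while (it != f.begin() && prev(it)->second <= v)       // erase what (s, v) dominates
        it = f.erase(prev(it));
    f.insert(it, State(s, v));
}

// Longest purely non-increasing split of w[1..n], visiting only kept states.
int longestNonIncreasingSparse(const vector<long long>& w) {
    int n = (int)w.size() - 1;
    vector<long long> pre(n + 1, 0);
    for (int i = 1; i <= n; i++) pre[i] = pre[i - 1] + w[i];
    vector<vector<State>> front(n + 1);
    for (int j = 1; j <= n; j++) offer(front[j], pre[j], 1); // words 1..j on one line
    for (int j = 1; j < n; j++)
        for (auto [s, v] : front[j])                         // transit on Pareto states only
            for (int m = j + 1; m <= n && pre[m] - pre[j] <= s; m++)
                offer(front[m], pre[m] - pre[j], v + 1);     // next line = words j+1..m
    int ans = 0;
    for (auto [s, v] : front[n]) ans = max(ans, v);
    return ans;
}
```

The pretty answer is then assembled exactly as before, by running the same routine on the reversed words and pasting; the total work is proportional to the number of (state, extension) pairs actually visited, which is what the sparsity assumption is about.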
Time complexity: ??????
- If you find a case that TLEs all judge + in-contest AC codes before Lunar New Year '21 (Feb 12), rpeng will curbside-deliver a cooked goose to a location of your choice within 100 km driving of Highway 401 Exit 299.
- If you prove a subquadratic runtime bound, rpeng will help you write a paper.