Editorial for Baltic OI '19 P3 - Alpine Valley
Submitting an official solution before solving the problem yourself is a bannable offence.
Subtask 1
(the tree is a path: vertex $i$ is connected to vertex $i + 1$)

Removing the edge between vertices $i$ and $i + 1$ splits the graph into two parts: the vertices with indices $1, \dots, i$ and those with indices $i + 1, \dots, N$. It should be easy to check whether $R$ and $E$ are on the same "side". If they are, the answer is `escaped`. Otherwise, it is fairly straightforward to iterate over all shop vertices on the "side" of $R$ and calculate distances to find the closest one.
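To make this concrete, here is a minimal C++ sketch of the above, assuming the path's $i$-th edge joins vertices $i$ and $i + 1$ and that prefix sums of the edge weights have been computed; all names (`answerQuery`, `prefix`, `isShop`) are ours rather than anything from the official solution.

```cpp
#include <bits/stdc++.h>
using namespace std;

// prefix[v] = distance from vertex 1 to vertex v along the path,
// i.e. prefix[1] = 0 and prefix[v] = w[1] + ... + w[v - 1].
// One query: edge `i` (joining i and i+1) is blocked and we stand at vertex r;
// e is the exit vertex, isShop marks shop vertices.
// Returns -1 for "escaped", LLONG_MAX if no shop is reachable, the distance otherwise.
long long answerQuery(int i, int r, int e, int n,
                      const vector<long long>& prefix,
                      const vector<bool>& isShop) {
    bool rLeft = (r <= i), eLeft = (e <= i);
    if (rLeft == eLeft) return -1;                   // R and E on the same side
    int lo = rLeft ? 1 : i + 1, hi = rLeft ? i : n;  // R's side of the cut
    long long best = LLONG_MAX;
    for (int v = lo; v <= hi; v++)
        if (isShop[v]) best = min(best, llabs(prefix[r] - prefix[v]));
    return best;
}

int main() {
    // Toy path 1 - 2 - 3 - 4 with edge weights 5, 2, 4; shop at vertex 1; exit e = 4.
    int n = 4, e = 4;
    vector<long long> prefix = {0, 0, 5, 7, 11};     // 1-indexed, prefix[1] = 0
    vector<bool> isShop = {false, true, false, false, false};
    // Block edge 2 (between vertices 2 and 3) and stand at vertex 2: the exit is
    // cut off, and the closest shop is vertex 1 at distance 5.
    cout << answerQuery(2, 2, e, n, prefix, isShop) << '\n';   // prints 5
    return 0;
}
```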
Subtask 2
($N$ and $Q$ are small)
The simplest (and typically insufficient) solution to any problem is to simply do what the problem statement tells us to do, without giving it any more thought. This subtask can be solved by doing exactly that.
Suppose a query $(I, R)$ comes along. Then, we:

1. traverse the graph (pretending that the edge $I$ does not exist) to calculate, for each vertex $v$, the distance from $R$ to $v$;
2. if the distance from $R$ to $E$ is not $\infty$, announce that it is possible to reach $E$ from $R$;
3. otherwise, iterate over all shop vertices to find the one closest to $R$.

And this is done for all queries, separately.
Steps 2 and 3 should be straightforward to implement. There are many slightly different ways to do step 1. What makes it simpler is the fact that in a tree, there is exactly one path (that doesn't repeat vertices) between any two vertices. Thus the distance from $R$ to $v$ (for any vertex $v$) is the length of that unique path.
The idea is as follows. We know that the distance from $R$ to $R$ is $0$. From this, we can calculate the distance from $R$ to its neighbours. Knowing the distance from $R$ to its neighbours, we can calculate the distance from $R$ to those neighbours' other neighbours. And so on. Formalizing and implementing this gives us something like the following:
Algorithm 1: Breadth-first search to answer queries
queue ← a first-in, first-out queue, initially empty
dist ← an array, initially dist[v] = ∞ for any v
add R to the back of queue
dist[R] ← 0
while queue is not empty do
u ← first element of queue
remove the first element from queue
for each edge (u, v) from u do
if dist[v] = ∞ (i.e. we have not visited v) and (u, v) ≠ I then
add v to the back of queue
dist[v] ← dist[u] + weight(u, v)
end
end
end
Here, $\mathrm{weight}(u, v)$ denotes the weight of the edge between $u$ and $v$.
A practical note: the answer to a query can be as large as the sum of all the edge weights, which does not fit in a 32-bit integer under the given constraints. Thus, using `int` (in C++ and Java) will lead to overflow. Use `long` (in Java) or `long long` (in C++) instead.
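For illustration, here is a possible C++ implementation of this per-query traversal; the adjacency-list layout with `(neighbour, edge index, weight)` triples and all names are our own assumptions, not taken from the official solution.

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long INF = LLONG_MAX;   // "infinity": vertex not reached

// adj[u] holds triples (v, edgeIndex, weight) for every edge incident to u.
// Returns the distances from r in the tree with edge `blocked` removed.
vector<long long> distancesFrom(int r, int blocked,
                                const vector<vector<tuple<int, int, long long>>>& adj) {
    int n = (int)adj.size();
    vector<long long> dist(n, INF);
    queue<int> q;                  // first-in, first-out queue, as in algorithm 1
    dist[r] = 0;
    q.push(r);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        for (auto [v, id, w] : adj[u]) {
            if (id == blocked || dist[v] != INF) continue;  // removed edge / already visited
            dist[v] = dist[u] + w;   // unique path in a tree, so this value is final
            q.push(v);
        }
    }
    return dist;
}
```

Step 2 then amounts to checking whether `dist[E]` is still `INF`, and step 3 to a loop over the shop vertices taking the minimum of `dist`.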
Let's calculate the complexity of this solution. In algorithm 1, we visit each vertex and each edge at most once. Thus the complexity of step 1 is $\mathcal{O}(N)$. Steps 2 and 3 also take no more than this. As such, we spend $\mathcal{O}(N)$ time on each query, for a total complexity of $\mathcal{O}(NQ)$.
If we only had one query, this would be optimal. But we are neglecting the fact that all those queries take place on the same graph…
Subtask 3
(all vertices are shop vertices)
From this point on, we consider our graph as a rooted tree, rooted at the exit vertex $E$.

Among the neighbours of a vertex, we distinguish between its parent (the neighbour on the path towards $E$) and its children (all the other neighbours).

Why is this representation convenient? Suppose we erased an edge $I$, and let $v$ be its "lower vertex", i.e. the endpoint farther from the root $E$. Then we can verify that the answer is `escaped` for the vertices outside the subtree of $v$, and something else for the vertices inside the subtree of $v$. It is easy to confirm that this holds for any query $(I, R)$: the answer is `escaped` if and only if $R$ does not lie in the subtree of the "lower vertex" of the edge $I$.
In this subtask, this is enough! If we can't escape the valley, then the starting vertex $R$ is itself a shop vertex. Therefore the answer to each query is either $0$ or `escaped`. Now we only need a way to quickly tell whether one vertex is in the subtree of another. There are various approaches; we describe a particularly elegant one.
Algorithm 2: A recursive function
Function dfs(u)
print u
for each child v of u do
call dfs(v)
end
print u
end
Consider the function in algorithm 2. It recursively explores the subtree of a vertex, printing the index of the current vertex upon "entering" and "exiting" that subtree. Running $\mathrm{dfs}(E)$ therefore prints every vertex exactly twice: once when its subtree is entered and once when it is left.

Let's look at the output of $\mathrm{dfs}(E)$. It is easy to see that $u$ is in the subtree of $v$ if and only if the first occurrence of $u$ happens after the first occurrence of $v$ and the last occurrence of $u$ happens before the last occurrence of $v$. Indeed, if $u$ is truly in the subtree of $v$, then "entering" and "exiting" the vertex $u$ must occur while we are in the subtree of $v$, that is, between "entering" and "exiting" $v$.
After running $\mathrm{dfs}(E)$ once, we can answer queries in $\mathcal{O}(1)$ time using this criterion. Running $\mathrm{dfs}(E)$ takes $\mathcal{O}(N)$ time, thus the total time complexity is $\mathcal{O}(N + Q)$.
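In code, instead of literally printing the sequence, one typically records the first and last occurrence of each vertex as entry and exit times. A possible C++ sketch; the names `tin`/`tout` and the use of globals are our own choices.

```cpp
#include <bits/stdc++.h>
using namespace std;

vector<vector<int>> children;   // children[u] in the tree rooted at E
vector<int> tin, tout;          // entry ("first occurrence") and exit ("last occurrence") times
int timer_ = 0;

// The recursive function of algorithm 2, recording times instead of printing.
void dfs(int u) {
    tin[u] = timer_++;
    for (int v : children[u]) dfs(v);
    tout[u] = timer_++;
}

// u is in the subtree of v iff u is entered after v and exited before v.
bool inSubtree(int u, int v) {
    return tin[v] <= tin[u] && tout[u] <= tout[v];
}
```

For very deep trees, the recursion may have to be replaced by an explicit stack (or the stack limit raised), but the criterion itself is unchanged.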
Subtask 4
(no additional constraints)
We already know how to tell whether the answer to a query is `escaped`. Thus, in this section we only focus on calculating the answer in the other case. Suppose we have a query $(I, R)$. Let $v$ be the "lower vertex" of $I$ and suppose $R$ is in the subtree of $v$. Then we need to find the shop vertex closest to $R$ within the subtree of $v$.
Let $\mathrm{lca}(a, b)$ denote the lowest common ancestor of vertices $a$ and $b$, and let $\mathrm{dist}_E[u]$ be the distance from $E$ to $u$. We can calculate the array $\mathrm{dist}_E$ like we did in subtask 2. Notice that the distance between $a$ and $b$ is:

$$\mathrm{dist}_E[a] + \mathrm{dist}_E[b] - 2 \cdot \mathrm{dist}_E[\mathrm{lca}(a, b)].$$
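As a quick sanity check (this derivation is ours, not spelled out in the original), write $l = \mathrm{lca}(a, b)$ and split the unique $a$-$b$ path at $l$:

$$\mathrm{dist}(a, b) = \underbrace{\left(\mathrm{dist}_E[a] - \mathrm{dist}_E[l]\right)}_{\text{from } a \text{ up to } l} + \underbrace{\left(\mathrm{dist}_E[b] - \mathrm{dist}_E[l]\right)}_{\text{from } l \text{ down to } b} = \mathrm{dist}_E[a] + \mathrm{dist}_E[b] - 2 \cdot \mathrm{dist}_E[l].$$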
Let $c$ be the shop vertex closest to $R$ within the subtree of $v$. Let's pretend we don't know where $c$ is, but somehow know $\mathrm{lca}(R, c)$. Then we can calculate the answer to the query as:

$$\mathrm{dist}_E[R] + \min_c \left( \mathrm{dist}_E[c] - 2 \cdot \mathrm{dist}_E[\mathrm{lca}(R, c)] \right),$$

where the minimum is taken over all shop vertices $c$ in the subtree of $v$. We define:

$$\mathrm{magic}[u] = \left( \min_{\text{shop vertices } c \text{ in the subtree of } u} \mathrm{dist}_E[c] \right) - 2 \cdot \mathrm{dist}_E[u].$$
Algorithm 3 provides a dynamic programming solution to calculate the array $\mathrm{magic}$.
Algorithm 3: Calculating magic.
Function buildMagic(u)
for each child v of u do
call buildMagic(v)
end
if u is a shop vertex then
magic[u] ← distE[u]
else
magic[u] ← ∞
end
for each child v of u do
magic[u] ← min(magic[u], magic[v])
end
end
call buildMagic(E)
for each vertex u do
magic[u] ← magic[u] - 2 · distE[u]
end
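A fairly direct C++ transcription of algorithm 3 might look as follows; `children`, `distE` and `isShop` are assumed to be filled elsewhere, and using a large constant in place of $\infty$ is our own choice.

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long INF = 4e18;       // stands in for "infinity"

vector<vector<int>> children;     // children[u] in the tree rooted at E
vector<long long> distE, magic;   // distE as in subtask 2, magic as defined above
vector<bool> isShop;

// First pass: magic[u] = min of distE[c] over shop vertices c in the subtree of u.
void buildMagic(int u) {
    magic[u] = isShop[u] ? distE[u] : INF;
    for (int v : children[u]) {
        buildMagic(v);
        magic[u] = min(magic[u], magic[v]);
    }
}

// Second pass (run after buildMagic(E)): subtract 2 * distE[u] from every entry.
// Entries that are still INF (no shop in the subtree) are left untouched.
void finishMagic(int n) {
    for (int u = 0; u < n; u++)
        if (magic[u] != INF) magic[u] -= 2 * distE[u];
}
```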
Calculating $\mathrm{dist}_E[R] + \mathrm{magic}[u]$ gives the length of the shortest path from $R$ to a shop, provided that $u$ is the "uppermost" vertex on that path. The "uppermost" vertex of the optimal path lies on the path from $R$ to $v$. Thus, the answer to the query is:

$$\mathrm{dist}_E[R] + \min_u \mathrm{magic}[u],$$

where the minimum is taken over all vertices $u$ on the path from $R$ to $v$ (including both $R$ and $v$). (For any other vertex $u$ on that path, $\mathrm{dist}_E[R] + \mathrm{magic}[u]$ is either infinite or the length of some valid walk from $R$ to a shop inside the subtree of $v$, so it can only overestimate the answer; taking the minimum over the whole path is therefore safe.)
Now we only need a way to calculate this path minimum quickly. There are many ways to take minimums over paths of a tree; we describe one called "binary lifting". We initialize two 2D arrays: $\mathrm{jumpVertex}$ and $\mathrm{jumpMagic}$. We want $\mathrm{jumpVertex}[u][k]$ to be the $2^k$-th ancestor of $u$; that is, the result of moving $2^k$ steps towards $E$ from $u$. And $\mathrm{jumpMagic}[u][k]$ should be the minimum of $\mathrm{magic}$ on the path between $u$ and the $2^k$-th ancestor of $u$ (including $u$, but excluding the ancestor). One can verify that the following relations hold:

$$\mathrm{jumpVertex}[u][k] = \mathrm{jumpVertex}\big[\mathrm{jumpVertex}[u][k-1]\big][k-1],$$
$$\mathrm{jumpMagic}[u][k] = \min\big(\mathrm{jumpMagic}[u][k-1],\ \mathrm{jumpMagic}\big[\mathrm{jumpVertex}[u][k-1]\big][k-1]\big).$$

For example, the 8th ancestor of a vertex is the 4th ancestor of its 4th ancestor. We can use these equations to initialize the arrays $\mathrm{jumpVertex}$ and $\mathrm{jumpMagic}$.
How can we use these arrays to take the minimum of $\mathrm{magic}$ on the path from $R$ to $v$? We start at $R$, then use $\mathrm{jumpVertex}$ to jump up as far as possible without passing over $v$. The corresponding value of $\mathrm{jumpMagic}$ is the minimum of $\mathrm{magic}$ over all the vertices we skipped over. We keep jumping up until we reach $v$. Algorithm 4 provides the details.
Algorithm 4: Binary lifting
Function buildLifting(u, p) /* p is the parent of u; for the root E, take p = E */
jumpVertex[u][0] ← p
jumpMagic[u][0] ← magic[u]
for k ← 1 to ⌈log₂ N⌉ do
/* jumping 2^⌈log₂ N⌉ ≥ N steps is enough to get us to the root */
jumpVertex[u][k] ← jumpVertex[jumpVertex[u][k - 1]][k - 1]
jumpMagic[u][k] ← min(jumpMagic[u][k - 1], jumpMagic[jumpVertex[u][k - 1]][k - 1])
end
for each child v of u do
call buildLifting(v, u)
end
end
Function minPath(R, p)
/* calculates the minimum of magic over the path from R to p */
u ← R
answer ← ∞
for k ← ⌈log₂ N⌉ down to 0 do
if jumpVertex[u][k] is in the subtree of p then
answer ← min(answer, jumpMagic[u][k])
u ← jumpVertex[u][k]
end
end
answer ← min(answer, magic[p])
return answer
end
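In C++, the two routines of algorithm 4 might look like the following. The fixed `LOG`, the table layout, and reusing the `tin`/`tout` subtree test from subtask 3 are our own choices, not something prescribed by the official solution.

```cpp
#include <bits/stdc++.h>
using namespace std;

const int LOG = 17;               // at least ceil(log2(N)) for the intended constraints
const long long INF = 4e18;

vector<vector<int>> children;                 // tree rooted at E
vector<long long> magic;                      // as computed by algorithm 3
vector<array<int, LOG + 1>> jumpVertex;
vector<array<long long, LOG + 1>> jumpMagic;
vector<int> tin, tout;                        // entry/exit times from subtask 3

bool inSubtree(int u, int v) {                // is u inside the subtree of v?
    return tin[v] <= tin[u] && tout[u] <= tout[v];
}

void buildLifting(int u, int p) {             // p is the parent of u; the root is its own parent
    jumpVertex[u][0] = p;
    jumpMagic[u][0] = magic[u];
    for (int k = 1; k <= LOG; k++) {
        int mid = jumpVertex[u][k - 1];       // the 2^(k-1)-th ancestor of u
        jumpVertex[u][k] = jumpVertex[mid][k - 1];
        jumpMagic[u][k] = min(jumpMagic[u][k - 1], jumpMagic[mid][k - 1]);
    }
    for (int v : children[u]) buildLifting(v, u);
}

// Minimum of magic over the path from r up to its ancestor p (both included).
long long minPath(int r, int p) {
    long long answer = INF;
    int u = r;
    for (int k = LOG; k >= 0; k--) {
        if (inSubtree(jumpVertex[u][k], p)) { // the jump does not leave p's subtree
            answer = min(answer, jumpMagic[u][k]);
            u = jumpVertex[u][k];
        }
    }
    return min(answer, magic[p]);
}
```

Because each parent's table rows are filled before its children are visited, every entry `buildLifting` reads has already been computed; the `tin`/`tout` test keeps each subtree check $\mathcal{O}(1)$, which the complexity analysis below relies on.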
To summarize:

1. we call $\mathrm{buildMagic}(E)$ and the other auxiliary pre-processing functions;
2. we call $\mathrm{buildLifting}(E, E)$ to initialize the binary lifting tables;
3. for each query $(I, R)$, where $v$ is the "lower vertex" of $I$, we answer the query as $\mathrm{dist}_E[R] + \mathrm{minPath}(R, v)$ (or `escaped` if $R$ is not in the subtree of $v$).
Step 1 should not take more than $\mathcal{O}(N)$ time. By analyzing the pseudocode we can see that step 2 takes $\mathcal{O}(N \log N)$ time and step 3 takes up to $\mathcal{O}(\log N)$ time for each query, i.e. $\mathcal{O}(Q \log N)$ in total. The total complexity is $\mathcal{O}((N + Q) \log N)$.
Credits
- Task: Lukas Michel (Germany)
- Solutions and tests: Tähvend Uustalu, Andres Unt (Estonia)