mirror of
https://github.com/janishutz/eth-summaries.git
synced 2025-11-25 10:34:23 +00:00
[AW] Update summary to new version of helpers
Submodule latex updated: 496ac27f02...565b600ef0
Binary file not shown.
@@ -74,8 +74,6 @@
\input{parts/algorithms/flows/max-flow-min-cut.tex}
\input{parts/algorithms/flows/algorithms.tex}
\input{parts/algorithms/flows/examples.tex}
\input{parts/algorithms/}

\input{parts/coding.tex}

@@ -16,20 +16,20 @@ Most algorithms for the max-flow problem use a residual network, where $\mathcal
\end{algorithmic}
\end{algorithm}

-The problem with this algorithm is that it may not terminate for irrational capacities. If we however only consider integral networks without bidirectional edges, it can be easily seen that if we denote $U \in \N$ the upper bound for capacities, the time complexity of this algorithm is \tco{nUm} where \tco{m} is the time complexity for constructing residual network.
+The problem with this algorithm is that it may not terminate for irrational capacities. If, however, we only consider integral networks without bidirectional edges and denote by $U \in \N$ an upper bound on the capacities, it is easy to see that the time complexity of this algorithm is $\tco{nUm}$, where $\tco{m}$ is the time complexity of constructing the residual network.
\begin{theorem}[]{Max-Flow Algorithm}
-If in a network without bidirectional edges and all capacities integral and no larger than $U$, there is an integral max-flow we can compute in \tco{mnU}, whereas $m$ is the number of edges and $n$ the number of vertices in the network.
+In a network without bidirectional edges in which all capacities are integral and no larger than $U$, there is an integral max-flow that we can compute in $\tco{mnU}$, where $m$ is the number of edges and $n$ the number of vertices in the network.
\end{theorem}

There are more advanced algorithms than this one that can calculate solutions to this problem faster, or also for irrational capacities.
For the following two propositions, $m = |E|$ and $n = |V|$, i.e. $m$ is the number of edges and $n$ the number of vertices.

\begin{proposition}[]{Capacity-Scaling}
-If in a network all capacities are integral and at most $U$, there exists an integral max-flow that can be computed in \tco{mn(1 + \log(U))}
+If in a network all capacities are integral and at most $U$, there exists an integral max-flow that can be computed in $\tco{mn(1 + \log(U))}$
\end{proposition}

\begin{proposition}[]{Dynamic-Trees}
-The max-flow of a flow in a network can be calculated in \tco{mn\log(n)}
+The max-flow of a network can be calculated in $\tco{mn\log(n)}$
\end{proposition}
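The augmenting-path scheme behind these bounds can be sketched in a few lines. This is a hedged sketch, not the script's algorithm: all names and the edge-capacity encoding (a dict mapping `(u, v)` pairs to integral capacities) are my own, and the path search is a plain DFS on the residual network.

```python
from collections import defaultdict

def max_flow(cap, s, t):
    """Augmenting-path max-flow on integral capacities.

    With integral capacities bounded by U, at most n*U augmentations
    occur, each costing one traversal of the residual network."""
    residual = defaultdict(int)
    graph = defaultdict(set)
    for (u, v), c in cap.items():
        residual[(u, v)] += c
        graph[u].add(v)
        graph[v].add(u)  # reverse edges belong to the residual network

    def find_path():
        # DFS for an s-t path with positive residual capacity
        stack, parent = [s], {s: None}
        while stack:
            u = stack.pop()
            if u == t:
                path = []
                while parent[u] is not None:
                    path.append((parent[u], u))
                    u = parent[u]
                return path
            for v in graph[u]:
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    stack.append(v)
        return None

    flow = 0
    while (path := find_path()) is not None:
        delta = min(residual[e] for e in path)  # bottleneck capacity
        for (u, v) in path:
            residual[(u, v)] -= delta
            residual[(v, u)] += delta
        flow += delta
    return flow
```

On an integral network the loop augments by at least $1$ per iteration, which is exactly where the $U$-dependent bound above comes from.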
@@ -91,7 +91,7 @@ In the following section we use \textit{multigraphs}.

We define $\mu(G)$ to be the cardinality of the \textit{min-cut} (computing it is the problem).
This problem is similar to the min-cut problem for flows, except that we now have a multigraph. We can, however, replace multiple edges with a single weighted edge, allowing us to use the algorithms discussed above.
-Since we need to compute $(n - 1)$ $s$-$t$-cuts, our total time complexity is \tco{n^4 \log(n)}, since we can compute $s$-$t$-cuts in \tco{n^3\log(n)} = \tco{n\cdot m\log(n)}
+Since we need to compute $(n - 1)$ $s$-$t$-cuts, our total time complexity is $\tco{n^4 \log(n)}$, since we can compute a single $s$-$t$-cut in $\tco{n^3\log(n)} = \tco{n\cdot m\log(n)}$
@@ -123,7 +123,7 @@ Of note is that there is a bijection: $\text{Edges in G without the ones between
\end{algorithmic}
\end{algorithm}

-If we assume that we can perform edge contraction in \tco{n} and we can choose a uniformly random edge in $G$ in \tco{n} as well, it is evident that we can compute \textsc{Cut}($G$) in \tco{n^2}
+If we assume that we can perform an edge contraction in $\tco{n}$ and choose a uniformly random edge of $G$ in $\tco{n}$ as well, it is evident that we can compute \textsc{Cut}($G$) in $\tco{n^2}$

\begin{lemma}[]{Random edge contraction}
If $e$ is uniformly randomly chosen from the edges of a multigraph $G$, then we have
@@ -155,10 +155,10 @@ Thus, we repeat the algorithm \textsc{Cut}$(G)$ $\lambda {n \choose 2}$ times fo
\begin{theorem}[]{\textsc{Cut}$(G)$}
For the algorithm that runs \textsc{Cut}$(G)$ $\lambda{n \choose 2}$ times, we have the following properties:
\begin{enumerate}[label=(\arabic*)]
-\item Time complexity: \tco{\lambda n^4}
+\item Time complexity: $\tco{\lambda n^4}$
\item The smallest found value is with probability at least $1 - e^{-\lambda}$ equal to $\mu(G)$
\end{enumerate}
\end{theorem}
-If we choose $\lambda = \ln(n)$, we have time complexity \tco{n^4 \ln(n)} with error probability \textit{at most} $\frac{1}{n}$
+If we choose $\lambda = \ln(n)$, we have time complexity $\tco{n^4 \ln(n)}$ with error probability \textit{at most} $\frac{1}{n}$

Of note is that for small $n$, it can be worth it to simply determine the min-cut deterministically
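The contraction algorithm and its repetition can be sketched as follows; this is an illustrative sketch, not the script's pseudocode, and the union-find bookkeeping merely stands in for the assumed $\tco{n}$ contraction primitive.

```python
import random

def contract_cut(edge_list, n):
    """One run of Cut(G): contract uniformly random edges of the
    multigraph (vertices 0..n-1) until two super-vertices remain;
    the surviving edges form a cut of the original graph."""
    parent = list(range(n))

    def find(x):  # union-find tracks which super-vertex x belongs to
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    vertices = n
    edges = list(edge_list)
    while vertices > 2:
        u, v = random.choice(edges)  # uniform over remaining edges
        parent[find(u)] = find(v)
        vertices -= 1
        # self-loops created by the contraction are dropped
        edges = [e for e in edges if find(e[0]) != find(e[1])]
    return len(edges)

def min_cut(edge_list, n, lam=3):
    """Repeat Cut(G) lambda * C(n, 2) times; the smallest value found
    equals mu(G) with probability at least 1 - e^(-lambda)."""
    repeats = lam * n * (n - 1) // 2
    return min(contract_cut(edge_list, n) for _ in range(repeats))
```

Every single run returns the size of some cut, so the minimum over all runs is always an upper bound for $\mu(G)$; the repetition only boosts the probability that it is exactly $\mu(G)$.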
@@ -18,10 +18,10 @@ The graph $G'$ fulfills the above implication because
Let's assume $v_1 = v$ (the vertex removed during construction). However, $\langle \hat{v_2}, v_2, \ldots, \hat{v_n}, v_n \rangle$ is a path of length $n$
\item Let $\langle u_0, u_1, \ldots, u_n \rangle$ be a path of length $n$ in $G'$ and let $\deg(u_i) \geq 2 \smallhspace \forall i \in \{1, \ldots, n - 1\}$. These vertices hence have to be the $n - 1$ remaining vertices of $G$, thus $u_0 = \hat{w_i}$ and $u_n = \hat{w_j}$ are two distinct new vertices of degree $1$ in $G'$. Thus, we have $u_1 = w_i$ and $u_{n - 1} = w_j$, and we have $\langle v, u_1, \ldots, u_{n - 1}, v \rangle$, which is a Hamiltonian cycle in $G$
\end{enumerate}
-Due to the construction of the graph $G'$ we can generate it from $G$ in \tco{n^2} steps. We thus have:
+Due to the construction of the graph $G'$, we can generate it from $G$ in $\tco{n^2}$ steps. We thus have:

\begin{theorem}[]{Long Path Problem}
-If we can find a \textit{long-path} in a graph with $n$ vertices in time $t(n)$, we can decide if a graph with $n$ vertices has a Hamiltonian cycle in $t(2n - 2) + \text{\tco{n^2}}$
+If we can find a \textit{long path} in a graph with $n$ vertices in time $t(n)$, we can decide if a graph with $n$ vertices has a Hamiltonian cycle in $t(2n - 2) + \tco{n^2}$
\end{theorem}

@@ -73,7 +73,7 @@ For the algorithm, we need to also define $N(v)$ which returns the neighbours of
\EndProcedure
\end{algorithmic}
\end{algorithm}
-The time complexity of this algorithm is \tco{2^k km}. If we now have $k = \text{\tco{\log(n)}}$, the algorithm is polynomial.
+The time complexity of this algorithm is $\tco{2^k km}$. If we now have $k = \tco{\log(n)}$, the algorithm is polynomial.

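The dynamic program behind the $\tco{2^k km}$ bound can be sketched as follows, assuming a colouring with $k$ colours is already given. The function names and the encoding of colour sets as bitmasks are my own.

```python
def colourful_path_exists(n, edges, colour, k):
    """reach[v] holds all colour sets S (bitmasks) such that some path
    ending at v uses exactly the colours in S. Growing paths one vertex
    at a time gives the O(2^k * k * m) bound."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # a single vertex is a path using exactly its own colour
    reach = [{1 << colour[v]} for v in range(n)]
    for _ in range(k - 1):
        new = [set(s) for s in reach]
        for u in range(n):
            for S in reach[u]:
                for v in adj[u]:
                    if not S & (1 << colour[v]):  # colour of v unused so far
                        new[v].add(S | (1 << colour[v]))
        reach = new
    full = (1 << k) - 1
    # a path using all k colours has k vertices, i.e. length k - 1
    return any(full in reach[v] for v in range(n))
```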
\shade{ForestGreen}{Random colouring}
@@ -91,7 +91,7 @@ For our algorithm we choose a $\lambda > 1 \in \R$ and we repeat the test at mos

\begin{theorem}[]{Random Colouring Algorithm}
\begin{itemize}
-\item Time complexity: \tco{\lambda(2e)^k km}
+\item Time complexity: $\tco{\lambda(2e)^k km}$
\item If we return ``\textsc{Yes}'', the graph is \textit{guaranteed} to contain a path of length $k - 1$
\item If we return ``\textsc{No}'', the probability of a false negative is at most $e^{-\lambda}$
\end{itemize}
@@ -27,7 +27,7 @@ The following algorithm correctly computes \textit{\textbf{a}} valid colouring.
\begin{align*}
\mathscr{X}(G) \leq C(G) \leq \Delta(G) + 1
\end{align*}
-where $\Delta(G) := \max_{v\in V}\deg(v)$ is the maximum degree of a vertex in $G$. If the graph is stored as an adjacency list, the algorithm finds a colouring \tco{|E|}
+where $\Delta(G) := \max_{v\in V}\deg(v)$ is the maximum degree of a vertex in $G$. If the graph is stored as an adjacency list, the algorithm finds a colouring in $\tco{|E|}$
\end{theorem}

\begin{algorithm}
@@ -46,12 +46,12 @@ The following algorithm correctly computes \textit{\textbf{a}} valid colouring.
\begin{align*}
\mathscr{X}(G) \leq \Delta(G)
\end{align*}
-and there is an algorithm that colours the graph using $\Delta(G)$ colours in \tco{|E|}. Otherwise $\mathscr{X}(G) \leq \Delta(G) + 1$
+and there is an algorithm that colours the graph using $\Delta(G)$ colours in $\tco{|E|}$. Otherwise $\mathscr{X}(G) \leq \Delta(G) + 1$
\end{theorem}
Of note is that a cycle with an even number of vertices is not an odd cycle, so by the theorem above, for an incomplete connected graph that is not an odd cycle, we always have $\mathscr{X}(G) \leq \Delta(G)$

\begin{theorem}[]{Maximum degree}
-Let $G$ be a graph and $k \in \N$ the number representing the maximum degree of any vertex of any induced subgraph of $G$. Then we have $\mathscr{X}(G) \leq k + 1$ and a $(k + 1)$-coloring can be found in \tco{|E|}
+Let $G$ be a graph and let $k \in \N$ be such that every induced subgraph of $G$ contains a vertex of degree at most $k$. Then we have $\mathscr{X}(G) \leq k + 1$ and a $(k + 1)$-colouring can be found in $\tco{|E|}$
\end{theorem}

\begin{theorem}[]{Mycielski-Construction}
@@ -63,6 +63,6 @@ To conclude this section, one last problem:

We are given a graph $G$ of which we are told that $\mathscr{X}(G) = 3$. This means that there exists an order of processing for the \textsc{Greedy-Colouring} algorithm that only uses three colours. We don't know the colours, but we can find an upper bound for the number of colours needed
\begin{theorem}[]{$3$-colourable graphs}
-Every $3$-colourable graph $G$ can be coloured in time \tco{|E|} using at most \tco{\sqrt{|V|}} colours
+Every $3$-colourable graph $G$ can be coloured in time $\tco{|E|}$ using at most $\tco{\sqrt{|V|}}$ colours
\end{theorem}
Since the neighbourhood of every vertex has to be bipartite (the neighbours of $v$ can only use the two colours different from $v$'s, as the graph is $3$-colourable), we can $2$-colour it using BFS in linear time. The algorithm works as follows: for the vertices of largest degree, we colour each of their (bipartite) neighbourhoods with two fresh colours; to the remaining vertices of smaller degree, we apply Brooks' theorem.

@@ -74,7 +74,7 @@ $v$ is an articulation point $\Leftrightarrow$ ($v = s$ and $s$ has degree at le

\stepcounter{all}
\begin{theorem}[]{Articulation points Computation}
-For a connected graph $G = (V, E)$ that is stored using an adjacency list, we can compute all articulation points in \tco{|E|}
+For a connected graph $G = (V, E)$ that is stored using an adjacency list, we can compute all articulation points in $\tco{|E|}$
\end{theorem}

@@ -104,7 +104,7 @@ The idea now is that every vertex contained in a bridge is either an articulatio
\end{center}

\begin{theorem}[]{Bridges Computation}
-For a connected graph $G = (V, E)$ that is stored using an adjacency list, we can compute all bridges and articulation points in \tco{|E|}
+For a connected graph $G = (V, E)$ that is stored using an adjacency list, we can compute all bridges and articulation points in $\tco{|E|}$
\end{theorem}

\subsubsection{Block-Decomposition}
@@ -41,7 +41,7 @@ If we combine the entirety of the explanations of pages 43-45 in the script, we
\begin{theorem}[]{Eulerian Graph}
\begin{enumerate}[label=\alph*)]
\item A connected graph $G$ is eulerian if and only if the degree of all vertices is even
-\item In a connected eulerian graph, we can find a eulerian cycle in \tco{|E|}
+\item In a connected eulerian graph, we can find an eulerian cycle in $\tco{|E|}$
\end{enumerate}
\end{theorem}

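The linear-time bound in b) is usually realised with Hierholzer's algorithm; a minimal sketch (names mine; the list-based `remove` is $\tco{\deg}$ per step rather than the constant time a careful implementation achieves):

```python
def euler_cycle(adj):
    """Hierholzer's algorithm: adj maps each vertex to a list of its
    neighbours (one entry per parallel edge). Returns a closed walk
    using every edge exactly once, assuming the graph is eulerian."""
    adj = {v: list(ns) for v, ns in adj.items()}  # local, mutable copy
    start = next(iter(adj))
    stack, cycle = [start], []
    while stack:
        v = stack[-1]
        if adj[v]:
            u = adj[v].pop()      # walk along an unused edge v-u
            adj[u].remove(v)      # delete its reverse entry
            stack.append(u)
        else:
            cycle.append(stack.pop())  # backtrack: v is exhausted
    return cycle
```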
@@ -58,7 +58,7 @@ The issue with Hamiltonian cycles is that the problem is $\mathcal{N}\mathcal{P}

\stepcounter{all}
\begin{theorem}[]{Hamiltonian Cycle Algorithm}
-The algorithm \textsc{HamiltonianCycle} is correct and has space complexity \tco{n \cdot 2^n} and time complexity \tco{n^2 \cdot 2^n}, where $n = |V|$
+The algorithm \textsc{HamiltonianCycle} is correct and has space complexity $\tco{n \cdot 2^n}$ and time complexity $\tco{n^2 \cdot 2^n}$, where $n = |V|$
\end{theorem}

In the algorithm below, $G = (V, E)$ is a graph with $V = [n]$, $N(v)$ denotes, as usual, the neighbours of $v$, and we define $S$ as a subset of the vertices of $G$ with $1 \in S$.
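The subset dynamic program behind the $\tco{n \cdot 2^n}$ space / $\tco{n^2 \cdot 2^n}$ time bound can be sketched as follows (names mine; vertices are indexed from $0$ instead of $1$):

```python
def hamiltonian_cycle_exists(n, adj):
    """dp[S][v] is True iff some path starts at vertex 0, visits exactly
    the vertex set S (encoded as a bitmask containing 0) and ends at v.
    Space O(n * 2^n), time O(n^2 * 2^n)."""
    full = (1 << n) - 1
    dp = [[False] * n for _ in range(1 << n)]
    dp[1][0] = True  # the trivial path consisting of vertex 0 only
    for S in range(1 << n):
        if not S & 1:  # every considered set must contain vertex 0
            continue
        for v in range(n):
            if dp[S][v]:
                for u in adj[v]:
                    if not S & (1 << u):  # extend the path by u
                        dp[S | (1 << u)][u] = True
    # close the cycle: a spanning path ending at a neighbour of 0
    return any(dp[full][v] and 0 in adj[v] for v in range(1, n))
```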
@@ -137,10 +137,10 @@ We thus reach the following algorithm:
\end{algorithm}

\begin{theorem}[]{Count Hamiltonian Cycles Algorithm}
-The algorithm computes the number of Hamiltonian cycles in $G$ with space complexity \tco{n^2} and time complexity \tco{n^{2.81}\log(n) \cdot 2^n}, where $n = |V|$
+The algorithm computes the number of Hamiltonian cycles in $G$ with space complexity $\tco{n^2}$ and time complexity $\tco{n^{2.81}\log(n) \cdot 2^n}$, where $n = |V|$
\end{theorem}
-The time complexity bound comes from the fact that we need \tco{\log(n)} matrix multiplications to compute $|W_S|$, which can be found in entry $(s, s)$ in $(A_S)^n$, where $A_S$ is the adjacency matrix of the induced subgraph $G[V\backslash S]$.
-Each matrix multiplication can be done in \tco{n^{2.81}} using Strassen's Algorithm.
+The time complexity bound comes from the fact that we need $\tco{\log(n)}$ matrix multiplications to compute $|W_S|$, which can be found in entry $(s, s)$ of $(A_S)^n$, where $A_S$ is the adjacency matrix of the induced subgraph $G[V\backslash S]$.
+Each matrix multiplication can be done in $\tco{n^{2.81}}$ using Strassen's Algorithm.
The $2^n$ is given by the fact that we have that many subsets to consider.

@@ -187,14 +187,14 @@ In words, we are looking for the hamiltonian cycle with the shortest length amon

\stepcounter{all}
\begin{theorem}[]{Travelling Salesman Problem}
-If there exists for $\alpha > 1$ a $\alpha$-approximation algorithm for the travelling salesman problem with time complexity \tco{f(n)}, there also exists an algorithm that can decide if a graph with $n$ vertices is Hamiltonian in \tco{f(n)}.
+If there exists, for $\alpha > 1$, an $\alpha$-approximation algorithm for the travelling salesman problem with time complexity $\tco{f(n)}$, there also exists an algorithm that can decide if a graph with $n$ vertices is Hamiltonian in $\tco{f(n)}$.
\end{theorem}
This obviously means that this problem is also $\mathcal{N}\mathcal{P}$-complete.
If we however use the triangle inequality $l(\{x, z\}) \leq l(\{x, y\}) + l(\{y, z\})$, which in essence says that the direct connection between two vertices is never longer than a detour via a third vertex (which intuitively makes sense),
we reach the metric travelling salesman problem, where, given a graph $K_n$ and a function $l$ (as above, but this time respecting the triangle inequality), we are again looking for the same answer as for the non-metric problem.

\begin{theorem}[]{Metric Travelling Salesman Problem}
-There exists a $2$-approximation algorithm with time complexity \tco{n^2} for the metric travelling salesman problem.
+There exists a $2$-approximation algorithm with time complexity $\tco{n^2}$ for the metric travelling salesman problem.
\end{theorem}

\shortproof This algorithm works as follows: Assume we have an MST and we walk around the outside of it.
@@ -202,4 +202,5 @@ Thus, the length of our path is $2$ \verb|mst|($K_n, l$).
If we now use the triangle inequality, we can skip a few already visited vertices and at least not lengthen our journey around the outside of the MST.
Any Hamiltonian cycle can be transformed into a spanning tree by removing an arbitrary edge from it.
Thus, for the optimal (minimal) length of a Hamiltonian cycle, we have $\text{opt}(K_n, l) \geq \verb|mst|(K_n, l)$.
-If we now double the edge set (by duplicating each edge), then, since for $l(C) = \sum_{e \in C} l(e)$ for our Hamiltonian cycle $C$, we have $l(C) \leq 2 \text{opt}(K_n, l)$, we can simply find a eulerian cycle in the graph in \tco{n}, and since it takes \tco{n^2} to compute an MST, our time complexity is \tco{n^2}
+If we now double the edge set (by duplicating each edge), then, since $l(C) = \sum_{e \in C} l(e)$ for our Hamiltonian cycle $C$ and $l(C) \leq 2\,\text{opt}(K_n, l)$, we can simply find an eulerian cycle in the doubled graph in $\tco{n}$; since it takes $\tco{n^2}$ to compute an MST, our total time complexity is $\tco{n^2}$

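The MST-doubling argument translates into a short program. This sketch (names mine) uses Euclidean points as the metric and a preorder walk of the tree, which is equivalent to walking the doubled MST and shortcutting repeated vertices.

```python
import math

def tsp_2_approx(points):
    """2-approximation for metric TSP: build an MST with Prim's
    algorithm in O(n^2), then return its preorder walk as the tour."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    in_tree = [False] * n
    best = [math.inf] * n     # cheapest connection into the tree
    parent = [0] * n
    best[0] = 0.0
    children = [[] for _ in range(n)]
    for _ in range(n):
        v = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[v] = True
        if v != 0:
            children[parent[v]].append(v)
        for u in range(n):
            if not in_tree[u] and dist(v, u) < best[u]:
                best[u], parent[u] = dist(v, u), v
    # preorder walk of the MST = Hamiltonian cycle after shortcutting
    tour, stack = [], [0]
    while stack:
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(children[v]))
    return tour + [0]
```

By the argument above, the returned tour has length at most $2\,\text{opt}$.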
@@ -36,7 +36,7 @@ That system will have to fulfill the performance requirements of the task, but p
\end{algorithm}
The above algorithm doesn't return a maximum matching, just a maximal one
\begin{theorem}[]{Greedy-Matching}
-The \textsc{Greedy-Matching} determines a maximal matching $M_{Greedy}$ in \tco{|E|} for which we have
+The \textsc{Greedy-Matching} algorithm determines a maximal matching $M_{Greedy}$ in $\tco{|E|}$ for which we have
\begin{align*}
|M_{Greedy}| \geq \frac{1}{2} |M_{\max}|
\end{align*}
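The greedy scheme is a one-liner in spirit; a minimal sketch (names mine):

```python
def greedy_matching(edges):
    """Scan the edges once and pick any edge whose endpoints are both
    still free. The result is a maximal matching, computed in O(|E|),
    of size at least half a maximum matching."""
    covered = set()
    matching = []
    for u, v in edges:
        if u not in covered and v not in covered:
            matching.append((u, v))
            covered.update((u, v))
    return matching
```

On the path $1$-$2$-$3$-$4$, scanning the edges in the order $(2,3), (1,2), (3,4)$ yields only one edge, which still meets the $\frac{1}{2}$ guarantee.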
@@ -48,7 +48,7 @@ The above algorithm doesn't return the maximum matching, just a matching
\end{theorem}
\inlineproof If $M$ is not a maximum matching, there exists a matching $M'$ of higher cardinality, and $M \oplus M'$ ($M$ xor $M'$) has a connected component that contains more edges of $M'$ than of $M$. Said connected component is an augmenting path for $M$

-This idea leads to an algorithm to determine a maximum matching: As long as a matching isn't a maximum matching, there exists an augmenting path that allows us to expand the matching. After \textit{at most} $\frac{|V|}{2} - 1$ steps, we have a maximum matching. For bipartite graphs, we can use modified BFS with time complexity \tco{(|V| + |E|) \cdot |E|} to determine the augmenting paths.
+This idea leads to an algorithm to determine a maximum matching: As long as a matching isn't a maximum matching, there exists an augmenting path that allows us to expand the matching. After \textit{at most} $\frac{|V|}{2} - 1$ steps, we have a maximum matching. For bipartite graphs, we can use a modified BFS with time complexity $\tco{(|V| + |E|) \cdot |E|}$ to determine the augmenting paths.
\begin{algorithm}
\caption{\textsc{AugmentingPath}$(G = (A \uplus B, E), M)$}
\begin{algorithmic}[1]
@@ -100,11 +100,11 @@ The algorithm discussed above uses layers $L_i$ to find the augmenting paths. Ea
\end{algorithm}
To find the shortest augmenting path, we observe that if the last layer has more than one non-covered vertex, we can potentially (actually, likely) find more than one augmenting path.
We find one first, remove it from the data structure and find more augmenting paths by inverting the tree structure (i.e. cast \textit{flippendo} on the edges) and using DFS to find all augmenting paths.
-We always delete each visited vertex and we thus have time complexity \tco{|V| + |E|}, since we only visit each vertex and edge once.
+We always delete each visited vertex and thus have time complexity $\tco{|V| + |E|}$, since we only visit each vertex and edge once.

\begin{theorem}[]{Hopcroft and Karp Algorithm}
-The algorithm of Hopcroft and Karp's while loop is only executed \tco{\sqrt{|V|}} times.
-Hence, the maximum matching is computed in \tco{\sqrt{|V|} \cdot (|V| + |E|)}
+The while loop of Hopcroft and Karp's algorithm is only executed $\tco{\sqrt{|V|}}$ times.
+Hence, the maximum matching is computed in $\tco{\sqrt{|V|} \cdot (|V| + |E|)}$
\end{theorem}

\newpage
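For contrast with Hopcroft and Karp's layered approach, here is the simpler augmenting-path method for bipartite graphs sketched in code: one DFS per left vertex, giving $\tco{|V| \cdot |E|}$ rather than the $\tco{\sqrt{|V|} \cdot (|V| + |E|)}$ above. All names are mine.

```python
def bipartite_max_matching(left, adj):
    """Maximum matching via repeated augmenting-path search.
    left: iterable of left-side vertices; adj[u]: right-side neighbours."""
    match = {}  # right vertex -> left vertex currently matched to it

    def try_augment(u, visited):
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # take v if it is free, or if its current partner can be
            # rematched along some other augmenting path
            if v not in match or try_augment(match[v], visited):
                match[v] = u
                return True
        return False

    return sum(try_augment(u, set()) for u in left)
```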
@@ -113,14 +113,14 @@ We always delete each visited vertex and we thus have time complexity \tco{|V| +
In Section 4, using flows to compute matchings is discussed.

\begin{theorem}[]{Weighted Matching problem}
-Let $n$ be even and $l: {[n] \choose 2} \rightarrow \N_0$ be a weight function of a complete graph $K_n$. Then, we can compute, in time \tco{n^3}, a minimum perfect matching with
+Let $n$ be even and $l: {[n] \choose 2} \rightarrow \N_0$ be a weight function on a complete graph $K_n$. Then we can compute, in time $\tco{n^3}$, a minimum perfect matching with
\begin{align*}
\sum_{e \in M} l(e) = \min\left\{ \sum_{e \in M'} l(e) \smallhspace \Big| \smallhspace M' \text{ is a perfect matching in } K_n \right\}
\end{align*}
\end{theorem}

\begin{theorem}[]{MTSP with approximation}
-There is a $\frac{3}{2}$-approximation algorithm with time complexity \tco{n^3} for the metric travelling salesman problem
+There is a $\frac{3}{2}$-approximation algorithm with time complexity $\tco{n^3}$ for the metric travelling salesman problem
\end{theorem}

\subsubsection{Hall's Theorem}
@@ -137,7 +137,7 @@ The following theorem follows from Hall's Theorem immediately. We remember that
\end{theorem}

\begin{theorem}[]{Algorithm for the problem}
-If $G$ is a $2^k$-regular bipartite graph, we can find a perfect matching in time \tco{|E|}
+If $G$ is a $2^k$-regular bipartite graph, we can find a perfect matching in time $\tco{|E|}$
\end{theorem}
It is important to note that the algorithms to determine a perfect matching in bipartite graphs do not work for non-bipartite graphs: when we remove every other edge from the eulerian cycle, the graph may become disconnected. While this is no issue for bipartite graphs (we can simply run the algorithm on each connected component), for $k = 1$ such a connected component could contain an odd number of vertices, so there would be an eulerian cycle of odd length from which we cannot delete every other edge. Since that component has odd length, no perfect matching can exist.

@@ -81,7 +81,7 @@ The QuickSort algorithm is a well-known example of a Las-Vegas algorithm. It is
\begin{recall}[]{QuickSort}
As covered in the Algorithms \& Data Structures lecture, here are some important facts
\begin{itemize}
-\item Time complexity: \tcl{n \log(n)}, \tct{n \log(n)}, \tco{n^2}
+\item Time complexity: $\tcl{n \log(n)}$, $\tct{n \log(n)}$, $\tco{n^2}$
\item Performance depends on the selection of the pivot: the closer its \textit{value} is to the median, the better; its current position in the array is irrelevant
\item In the algorithm below, \textit{ordering} refers to the operation where all elements smaller than the pivot are moved to its left and all larger ones to its right
\end{itemize}
@@ -104,18 +104,18 @@ The QuickSort algorithm is a well-known example of a Las-Vegas algorithm. It is

\newcommand{\qsv}{\mathcal{T}_{i, j}}
We call $\qsv$ the random variable describing the number of comparisons executed during the execution of \textsc{QuickSort}($A, l, r$).
-To prove that the average case of time complexity in fact is \tct{n \log(n)}, we need to show that
+To prove that the average-case time complexity is in fact $\tct{n \log(n)}$, we need to show that
\begin{align*}
-	\E[\qsv] \leq 2(n + 1) \ln(n) + \text{\tco{n}}
+	\E[\qsv] \leq 2(n + 1) \ln(n) + \tco{n}
\end{align*}
which can be achieved using the linearity of the expected value and an induction proof (script, p. 154)


\fhlc{Cyan}{Selection problem}

-For this problem, we want to find the $k$-th smallest value in a sequence $A[1], \ldots, A[n]$. An easy option would be to simply sort the sequence and then return the $k$-th element of the sorted array. The only problem: \tco{n \log(n)} is the time complexity of sorting.
+For this problem, we want to find the $k$-th smallest value in a sequence $A[1], \ldots, A[n]$. An easy option would be to simply sort the sequence and then return the $k$-th element of the sorted array. The only problem: $\tco{n \log(n)}$ is the time complexity of sorting.

-Now, the \textsc{QuickSelect} algorithm can solve that problem in \tco{n}
+Now, the \textsc{QuickSelect} algorithm can solve that problem in $\tco{n}$
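A compact sketch of the idea (not the script's pseudocode; names mine, and the three-way partition handles duplicates):

```python
import random

def quickselect(a, k):
    """Return the k-th smallest element (1-indexed) in expected O(n):
    partition around a random pivot and recurse into one side only."""
    a = list(a)
    while True:
        pivot = random.choice(a)
        smaller = [x for x in a if x < pivot]
        equal = [x for x in a if x == pivot]
        if k <= len(smaller):
            a = smaller
        elif k <= len(smaller) + len(equal):
            return pivot
        else:
            k -= len(smaller) + len(equal)
            a = [x for x in a if x > pivot]
```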
\begin{algorithm}
\caption{\textsc{QuickSelect}}
\begin{algorithmic}[1]
@@ -136,11 +136,12 @@ Now, the \textsc{QuickSelect} algorithm can solve that problem in \tco{n}


\subsubsection{Primality test}
-Deterministically testing for primality is very expensive if we use a simple algorithm, namely \tco{\sqrt{n}}. There are nowadays deterministic algorithms that can achieve this in polynomial time, but they are very complex.
+Deterministically testing for primality is very expensive if we use a simple algorithm, namely $\tco{\sqrt{n}}$. There are nowadays deterministic algorithms that achieve this in polynomial time, but they are very complex.

Thus, randomized algorithms to the rescue, as they are much easier to implement and also much faster. With the right precautions, they can also be very accurate, see theorem 2.74 for example.

-A simple randomized algorithm would be to randomly pick a number on the interval $[2, \sqrt{n}]$ and checking if that number is a divisor of $n$. The problem: The probability that we find a \textit{certificate} for the composition of $n$ is very low (\tco{\frac{1}{n}}). Looking back at modular arithmetic in Discrete Maths, we find a solution to the problem:
+A simple randomized algorithm would be to randomly pick a number in the interval $[2, \sqrt{n}]$ and check whether that number is a divisor of $n$.
+The problem: The probability that we find a \textit{certificate} for the compositeness of $n$ is very low ($\tco{\frac{1}{n}}$). Looking back at modular arithmetic in Discrete Maths, we find a solution to the problem:

\begin{theorem}[]{Fermat's little theorem}
If $n \in \N$ is prime, for all numbers $0 < a < n$ we have
@@ -148,7 +149,7 @@ A simple randomized algorithm would be to randomly pick a number on the interval
a^{n - 1} \equiv 1 \texttt{ mod } n
\end{align*}
\end{theorem}
-Using exponentiation by squaring, we can calculate $a^{n - 1} \texttt{ mod } n$ in \tco{k^3}.
+Using exponentiation by squaring, we can calculate $a^{n - 1} \texttt{ mod } n$ in $\tco{k^3}$.

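Exponentiation by squaring and the resulting Fermat test can be sketched as follows (names mine; Python's built-in `pow(a, e, n)` does the same modular exponentiation):

```python
def power_mod(a, e, n):
    """Compute a^e mod n with O(log e) multiplications by squaring."""
    result = 1
    a %= n
    while e:
        if e & 1:            # current binary digit of e is 1
            result = result * a % n
        a = a * a % n        # square for the next digit
        e >>= 1
    return result

def fermat_test(n, witnesses=(2, 3, 5)):
    """If a^(n-1) != 1 (mod n) for some witness a, n is composite.
    Passing the test does not prove primality (Carmichael numbers),
    which is why Miller-Rabin refines this idea below."""
    return all(power_mod(a, n - 1, n) == 1 for a in witnesses)
```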
\begin{algorithm}
\caption{\textsc{Miller-Rabin-Primality-Test}}\label{alg:miller-rabin-primality-test}
@@ -178,13 +179,13 @@ Using exponentiation by squaring, we can calculate $a^{n - 1} \texttt{ mod } n$
\EndProcedure
\end{algorithmic}
\end{algorithm}
-This algorithm has time complexity \tco{\ln(n)}. If $n$ is prime, the algorithm always returns \texttt{true}. If $n$ is composed, the algorithm returns \texttt{false} with probability at least $\frac{3}{4}$.
+This algorithm has time complexity $\tco{\ln(n)}$. If $n$ is prime, the algorithm always returns \texttt{true}. If $n$ is composite, the algorithm returns \texttt{false} with probability at least $\frac{3}{4}$.

\newpage

\fhlc{Cyan}{Notes} We can determine $k, d \in \Z$ with $n - 1 = d2^k$ and $d$ odd easily using the following algorithm
\begin{algorithm}
-\caption{Get $d$ and $k$ easily}\label{alg:get-d-k}
+\caption{Get $d$ and $k$ easily}
\begin{algorithmic}[1]
\State $k \gets 1$
\State $d \gets n - 1$
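The same decomposition in a few lines of code; this variant (names mine) starts the counter at $0$ and halves $d$ until it is odd:

```python
def split_power_of_two(n):
    """Write n - 1 = d * 2^k with d odd, as needed by Miller-Rabin."""
    d, k = n - 1, 0
    while d % 2 == 0:
        d //= 2
        k += 1
    return d, k
```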
@@ -74,7 +74,7 @@ With that, let's determine
\[
\E[\mathcal{X}] = \sum_{i = 1}^{n} \E[\mathcal{X}_i] = \sum_{i = 1}^{n} \frac{n}{n - i + 1} = n \cdot \sum_{i = 1}^{n} \frac{1}{i} = n \cdot H_n
\]
-where $H_n := \sum_{i = 1}^{n} \frac{1}{i}$ is the $n$th harmonic number, which we know (from Analysis) is $H_n = \ln(n) +$\tco{1}, thus we have $\E[\mathcal{X}] = n \cdot \ln(n) +$\tco{n}.
+where $H_n := \sum_{i = 1}^{n} \frac{1}{i}$ is the $n$th harmonic number, which we know (from Analysis) is $H_n = \ln(n) + \tco{1}$, thus we have $\E[\mathcal{X}] = n \cdot \ln(n) + \tco{n}$.

The idea of the transformation is to reverse the $(n - i + 1)$, i.e. to count up instead of down, which massively simplifies the sum; we then extract the $n$ and use the known value of $H_n$ to fully simplify

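A quick numerical sanity check of $H_n = \ln(n) + \tco{1}$: the difference $H_n - \ln(n)$ converges to the Euler-Mascheroni constant, roughly $0.5772$.

```python
import math

def harmonic(n):
    # H_n = sum of 1/i for i = 1..n
    return sum(1.0 / i for i in range(1, n + 1))

# for n = 10^5 the difference is already close to 0.5772...
diff = harmonic(10**5) - math.log(10**5)
```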
Binary file not shown.
@@ -1,2 +1,9 @@

\newsection
\subsection{Working with Matrices}
\begin{tables}{lcccc}{Name & Operation & Mult & Add & Complexity}
Scalar product & $x^H y$ & $n$ & $n - 1$ & $\tco{n}$ \\
Tensor product & $x y^H$ & $nm$ & $0$ & $\tco{mn}$ \\
Matrix $\times$ vector & $Ax$ & $mn$ & $(n - 1)m$ & $\tco{mn}$ \\
Matrix product & $AB$ & $mnp$ & $(n - 1)mp$ & $\tco{mnp}$ \\
\end{tables}
The matrix product can be computed with Strassen's algorithm, using block partitioning, in $\tco{n^{\log_2(7)}}$

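The operation counts in the table can be verified by instrumenting a naive matrix product (names mine; for an $(m \times n)$ times $(n \times p)$ product there are $mnp$ multiplications and $(n - 1)mp$ additions):

```python
def matmul_with_count(A, B):
    """Naive matrix product that also counts its multiplications
    and additions, matching the complexity table above."""
    m, n, p = len(A), len(A[0]), len(B[0])
    mults = adds = 0
    C = [[0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            s = A[i][0] * B[0][j]
            mults += 1
            for l in range(1, n):
                s += A[i][l] * B[l][j]
                mults += 1
                adds += 1
            C[i][j] = s
    return C, mults, adds
```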
@@ -42,7 +42,7 @@ If, however, the evaluation of $\text{Im}(f(x_0 + ih))$ is not exact here, ...
\begin{align*}
4 d\left(\frac{h}{2}\right) - d(h) & = 4 f'(x) + \frac{1}{6} f'''(x) h^2 + \frac{1}{480} f^{(5)}(x) h^4 + \ldots - f'(x) - \frac{1}{6} f'''(x) h^2 - \frac{1}{120} f^{(5)}(x) h^4 - \ldots \\
& = 3 f'(x) + \tco{h^4} \\
\Leftrightarrow \quad f'(x) & = \frac{4 d\left(\frac{h}{2}\right) - d(h)}{3} + \tco{h^4}
\end{align*}

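A numeric check of this extrapolation step, assuming $d(h)$ is the central difference quotient with the expansion $d(h) = f'(x) + \frac{1}{6} f'''(x) h^2 + \frac{1}{120} f^{(5)}(x) h^4 + \ldots$ (names mine):

```python
import math

def central_diff(f, x, h):
    # d(h) = (f(x+h) - f(x-h)) / (2h) = f'(x) + h^2/6 f'''(x) + O(h^4)
    return (f(x + h) - f(x - h)) / (2 * h)

def extrapolated_diff(f, x, h):
    # (4 d(h/2) - d(h)) / 3 cancels the h^2 term, leaving O(h^4) error
    return (4 * central_diff(f, x, h / 2) - central_diff(f, x, h)) / 3
```

For $f = \sin$ at $x = 1$ with $h = 0.1$, the plain quotient is off by about $10^{-3}$ while the extrapolated value is off by about $10^{-7}$.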
@@ -1,7 +1,7 @@

\newsection
\subsection{Computational Cost}
In NumCS, the number of elementary operations such as addition, multiplication, etc.\ is used to describe the computational cost.
-As in Algorithms \& *, \tco{\ldots} again denotes the worst case.
+As in Algorithms \& *, $\tco{\ldots}$ again denotes the worst case.
Sometimes, other functions such as $\sin, \cos, \sqrt{\ldots}, \ldots$ are also counted.

The Basic Linear Algebra Subprograms (BLAS), i.e. the fundamental operations of linear algebra, have already been heavily optimized and should be used whenever possible; one should never implement them oneself.
