\newpage
\subsection{Sort}
Sorted data proved to be much quicker to search through, but how do we sort efficiently?

First, how do we check whether an array is sorted at all? This can be done in linear time:
\begin{algorithm}
\begin{spacing}{1.2}
\caption{\textsc{sorted(A)}}
\begin{algorithmic}[1]
\For{$i \gets 1, 2, \ldots, n - 1$}
\If{$A[i] > A[i + 1]$} \Return false \Comment{Item is unsorted}
\EndIf
\EndFor
\State \Return true
\end{algorithmic}
\end{spacing}
\end{algorithm}
\tc{n}
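As a minimal, 0-indexed Python sketch of the same check (illustrative, not lecture code):

```python
def is_sorted(a):
    """Check in linear time whether the list a is sorted in ascending order."""
    for i in range(len(a) - 1):
        if a[i] > a[i + 1]:
            return False  # found an unsorted adjacent pair
    return True
```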
\subsubsection{Bubble Sort}
\begin{algorithm}
\begin{spacing}{1.2}
\caption{\textsc{bubbleSort(A)}}
\begin{algorithmic}[1]
\For{$i \gets 1, 2, \ldots, n - 1$}
\For{$j \gets 1, 2, \ldots, n - i$} \Comment{The last $i - 1$ elements are already in place}
\If{$A[j] > A[j + 1]$}
\State exchange $A[j]$ and $A[j + 1]$ \Comment{Causes the element to ``bubble up''}
\EndIf
\EndFor
\EndFor
\end{algorithmic}
\end{spacing}
\end{algorithm}
\tc{n^2}
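A 0-indexed Python sketch of the pseudocode (illustrative only):

```python
def bubble_sort(a):
    """In-place bubble sort; after pass i the last i elements are final."""
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]  # let the larger element bubble up
    return a
```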
% ────────────────────────────────────────────────────────────────────
\subsubsection{Selection Sort}
The idea is to select, in each iteration, the largest element of the still-unsorted prefix $A[1, \ldots, i]$ and swap it with the last item of that prefix, thus moving the largest elements to the back whilst the smallest move to the front. The concept is similar to bubble sort, but it saves some runtime by drastically reducing the number of swaps that have to be made.
\begin{algorithm}
\begin{spacing}{1.2}
\caption{\textsc{selectionSort(A)}}
\begin{algorithmic}[1]
\For{$i \gets n, n - 1, \ldots, 1$}
\State $k \gets$ Index of maximum element in $A[1, \ldots, i]$ \Comment{Runtime: $O(n)$}
\State exchange $A[k]$ and $A[i]$
\EndFor
\end{algorithmic}
\end{spacing}
\end{algorithm}
\tc{n^2}, since the search for the maximal entry costs \tco{n} and the loop runs \tco{n} times. Compared to bubble sort we nevertheless save some runtime on write operations, which is not visible in the asymptotic time complexity.
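A 0-indexed Python sketch (illustrative; `max` over indices is just a compact way to find the position of the maximum):

```python
def selection_sort(a):
    """In-place selection sort: repeatedly swap the maximum of the
    unsorted prefix a[0..i] to position i."""
    for i in range(len(a) - 1, 0, -1):
        k = max(range(i + 1), key=lambda idx: a[idx])  # index of the maximum, O(n)
        a[k], a[i] = a[i], a[k]
    return a
```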
% ────────────────────────────────────────────────────────────────────
\newpage
\subsubsection{Insertion Sort}
\begin{definition}[]{Insertion Sort}
Insertion Sort is a simple sorting algorithm that builds the final sorted array one element at a time. The first element alone is taken as the initial sorted portion. The algorithm then repeatedly picks the next element and inserts it into its correct position within the sorted portion, shifting every larger element one position to the right, i.e. moving an element from $A[i]$ to $A[i + 1]$. This is repeated until the entire list is sorted.
\end{definition}
\begin{properties}[]{Characteristics and Performance}
\begin{itemize}
\item \textbf{Efficiency:} Works well for small datasets or nearly sorted arrays.
\item \textbf{Time Complexity:}
\begin{itemize}
\item Best case (already sorted): \tcl{n}
\item Worst case (reversed order): \tco{n^2}
\item Average case: \tct{n^2}
\end{itemize}
\item \textbf{Limitations:} Inefficient on large datasets due to its \tct{n^2} time complexity and requires additional effort for linked list implementations.
\end{itemize}
\end{properties}
\begin{algorithm}
\begin{spacing}{1.2}
\caption{\textsc{insertionSort(A)}}
\begin{algorithmic}[1]
\Procedure{InsertionSort}{$A$}
\For{$i \gets 2$ to $n$} \Comment{Iterate over the array}
\State $key \gets A[i]$ \Comment{Element to be inserted}
\State $j \gets i - 1$
\While{$j > 0$ and $A[j] > key$}
\State $A[j+1] \gets A[j]$ \Comment{Shift elements}
\State $j \gets j - 1$
\EndWhile
\State $A[j+1] \gets key$ \Comment{Insert element}
\EndFor
\EndProcedure
\end{algorithmic}
\end{spacing}
\end{algorithm}
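The same procedure as a 0-indexed Python sketch (illustrative only):

```python
def insertion_sort(a):
    """In-place insertion sort, mirroring the pseudocode (0-indexed)."""
    for i in range(1, len(a)):
        key = a[i]            # element to be inserted
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]   # shift larger elements one position right
            j -= 1
        a[j + 1] = key        # insert the element
    return a
```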
% ────────────────────────────────────────────────────────────────────
\newpage
\subsubsection{Merge Sort}
\begin{definition}[]{Definition of Merge Sort}
Merge Sort is a divide-and-conquer algorithm that splits the input array into two halves, recursively sorts each half, and then merges the two sorted halves into a single sorted array. This process continues until the base case of a single element or an empty array is reached, as these are inherently sorted.
\end{definition}
\begin{properties}[]{Characteristics and Performance of Merge Sort}
\begin{itemize}
\item \textbf{Efficiency:} Suitable for large datasets due to its predictable time complexity.
\item \textbf{Time Complexity:}
\begin{itemize}
\item Best case: \tcl{n \log n}
\item Worst case: \tco{n \log n}
\item Average case: \tct{n \log n}
\end{itemize}
\item \textbf{Space Complexity:} Requires additional memory for temporary arrays, typically \tct{n}.
\item \textbf{Limitations:} Not in-place, and memory overhead can be significant for large datasets.
\end{itemize}
\end{properties}
\begin{algorithm}
\begin{spacing}{1.2}
\caption{Merge Sort}
\begin{algorithmic}[1]
\Procedure{MergeSort}{$A[1..n], l, r$}
\If{$l \geq r$}
\State \Return $A$ \Comment{Base case: already sorted}
\EndIf
\State $m \gets \floor{(l + r)/2}$
\State $\Call{MergeSort}{A, l, m}$ \Comment{Recursive sort on left half}
\State $\Call{MergeSort}{A, m + 1, r}$ \Comment{Recursive sort on right half}
\State \Call{Merge}{$A, l, m, r$}
\EndProcedure

\Procedure{Merge}{$A[1..n], l, m, r$} \Comment{Runtime: \tco{n}}
\State $result \gets$ new array of size $r - l + 1$
\State $i \gets l$
\State $j \gets m + 1$
\State $k \gets 1$
\While{$i \leq m$ and $j \leq r$}
\If{$A[i] \leq A[j]$}
\State $result[k] \gets A[i]$
\State $i \gets i + 1$
\Else
\State $result[k] \gets A[j]$
\State $j \gets j + 1$
\EndIf
\State $k \gets k + 1$
\EndWhile
\State Append remaining elements of left / right side to $result$
\State Copy $result$ to $A[l, \ldots, r]$
\EndProcedure
\end{algorithmic}
\end{spacing}
\end{algorithm}
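A compact Python sketch of the same idea (illustrative; it returns a sorted copy rather than merging in place):

```python
def merge_sort(a):
    """Return a sorted copy of a; uses O(n) extra space for the merge."""
    if len(a) <= 1:
        return a  # base case: already sorted
    m = len(a) // 2
    left = merge_sort(a[:m])    # recursive sort on left half
    right = merge_sort(a[m:])   # recursive sort on right half
    # Merge the two sorted halves in O(n).
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    return result + left[i:] + right[j:]  # append remaining elements
```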
\begin{table}[h!]
\centering
\begin{tabular}{lccccc}
\toprule
\textbf{Algorithm} & \textbf{Comparisons} & \textbf{Operations} & \textbf{Space Complexity} & \textbf{Locality} & \textbf{Time complexity}\\
\midrule
\textit{Bubble-Sort} & \tco{n^2} & \tco{n^2} & \tco{1} & good & \tco{n^2}\\
\textit{Selection-Sort} & \tco{n^2} & \tco{n} & \tco{1} & good & \tco{n^2}\\
\textit{Insertion-Sort} & \tco{n \cdot \log(n)} & \tco{n^2} & \tco{1} & good & \tco{n^2}\\
\textit{Merge-Sort} & \tco{n \cdot \log(n)} & \tco{n \cdot \log(n)} & \tco{n} & good & \tco{n \cdot \log(n)}\\
\bottomrule
\end{tabular}
\caption{Comparison of the four comparison-based sorting algorithms discussed in the lecture. ``Operations'' denotes the number of write operations to RAM.}
\end{table}
\newpage
\subsubsection{Heap Sort}
\begin{definition}[]{Heap Sort}
Heap Sort is a comparison-based sorting algorithm that uses a binary heap data structure. It builds a max-heap (or min-heap) from the input array and repeatedly extracts the largest (or smallest) element to place it in the correct position in the sorted array.
\end{definition}
\begin{properties}[]{Characteristics and Performance}
\begin{itemize}
\item \textbf{Efficiency:} Excellent for in-place sorting with predictable performance.
\item \textbf{Time Complexity:}
\begin{itemize}
\item Best case: \tcl{n \log n}
\item Worst case: \tco{n \log n}
\item Average case: \tct{n \log n}
\end{itemize}
\item \textbf{Space Complexity:} In-place sorting requires \tct{1} additional space.
\item \textbf{Limitations:} Typically slower than Quick Sort on most practical datasets.
\end{itemize}
\end{properties}
\begin{algorithm}
\begin{spacing}{1.2}
\caption{Heap Sort}
\begin{algorithmic}[1]
\Procedure{HeapSort}{$A$}
\State $H \gets \Call{Heapify}{A}$ \Comment{Build a max-heap from $A$}
\For{$i \gets n, n - 1, \ldots, 1$}
\State $A[i] \gets \Call{ExtractMax}{H}$ \Comment{Largest remaining element goes to position $i$}
\EndFor
\EndProcedure
\end{algorithmic}
\end{spacing}
\end{algorithm}
The lecture does not cover the implementation of a heap tree here. See the dedicated Section~\ref{sec:heap-trees} on heap trees.
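Python's standard-library heap (\texttt{heapq}, a \emph{min}-heap rather than the max-heap used above) allows a very short illustrative sketch of the same idea; repeated extraction of the minimum yields the elements in ascending order:

```python
import heapq

def heap_sort(a):
    """Heap sort via the standard-library min-heap (heapq)."""
    h = list(a)
    heapq.heapify(h)  # build the heap in O(n)
    # n extractions of O(log n) each -> O(n log n) overall
    return [heapq.heappop(h) for _ in range(len(h))]
```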
\newpage
\subsubsection{Bucket Sort}
\begin{definition}[]{Bucket Sort}
Bucket Sort is a distribution-based sorting algorithm that divides the input into a fixed number of buckets, sorts the elements within each bucket (using another sorting algorithm, typically Insertion Sort), and then concatenates the buckets to produce the sorted array.
\end{definition}
\begin{properties}[]{Characteristics and Performance}
\begin{itemize}
\item \textbf{Efficiency:} Performs well for uniformly distributed datasets.
\item \textbf{Time Complexity:}
\begin{itemize}
\item Best case: \tcl{n + k} (for uniform distribution and $k$ buckets)
\item Worst case: \tco{n^2} (when all elements fall into a single bucket)
\item Average case: \tct{n + k}
\end{itemize}
\item \textbf{Space Complexity:} Requires \tct{n + k} additional space.
\item \textbf{Limitations:} Performance depends on the choice of bucket size and distribution of input elements.
\end{itemize}
\end{properties}
\begin{algorithm}
\begin{spacing}{1.2}
\caption{Bucket Sort}
\begin{algorithmic}[1]
\Procedure{BucketSort}{$A, k$} \Comment{Keys are integers in $\{1, \ldots, k\}$}
\State $B[1..k] \gets [0, 0, \ldots, 0]$
\For{$j \gets 1, 2, \ldots, n$}
\State $B[A[j]] \gets B[A[j]] + 1$ \Comment{Count in $B[i]$ how many times $i$ occurs}
\EndFor
\State $p \gets 1$
\For{$i \gets 1, 2, \ldots, k$}
\State $A[p, \ldots, p + B[i] - 1] \gets [i, i, \ldots, i]$ \Comment{Write $B[i]$ times the value $i$ into $A$}
\State $p \gets p + B[i]$ \Comment{$A$ is filled until position $p - 1$}
\EndFor
\EndProcedure
\end{algorithmic}
\end{spacing}
\end{algorithm}
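Note that this pseudocode is the counting variant of bucket sort, where each possible key value gets its own bucket. A Python sketch of that variant (illustrative only):

```python
def bucket_sort(a, k):
    """Sort integers from {1, ..., k} by counting occurrences
    (the counting variant of bucket sort)."""
    counts = [0] * (k + 1)  # counts[i] = number of times i occurs; index 0 unused
    for x in a:
        counts[x] += 1
    out = []
    for i in range(1, k + 1):
        out.extend([i] * counts[i])  # write the value i, counts[i] times
    return out
```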
\newpage
\subsection{Heap trees}
\label{sec:heap-trees}
\subsubsection{Min/Max-Heap}
\begin{definition}[]{Min-/Max-Heap}
A Min-Heap is a complete binary tree where the value of each node is less than or equal to the values of its children.
Conversely, a Max-Heap is a complete binary tree where the value of each node is greater than or equal to the values of its children.
In the characteristics below, $A$ is an array storing the values of the elements.
\end{definition}
\begin{properties}[]{Characteristics}
\begin{itemize}
\item \textbf{Heap Property:}
\begin{itemize}
\item Min-Heap: $A[parent] \leq A[child]$ for all nodes.
\item Max-Heap: $A[parent] \geq A[child]$ for all nodes.
\end{itemize}
\item \textbf{Operations:} Both Min-Heaps and Max-Heaps support:
\begin{itemize}
\item \textbf{Insert:} Add an element to the heap and adjust to maintain the heap property.
\item \textbf{Extract Min/Max:} Remove the root element (minimum or maximum), replace it with the last element (the bottom-rightmost element), and adjust the heap.
\end{itemize}
\item \textbf{Time Complexity:}
\begin{itemize}
\item Insert: \tct{\log n}.
\item Extract Min/Max: \tct{\log n}.
\item Build Heap: \tct{n}.
\end{itemize}
\end{itemize}
\end{properties}
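Since a complete binary tree can be stored in a 1-indexed array (children of node $i$ sit at $2i$ and $2i + 1$), both operations can be sketched in Python as follows (a minimal illustrative Min-Heap, all names hypothetical):

```python
class MinHeap:
    """1-indexed array-backed Min-Heap: A[parent] <= A[child]."""

    def __init__(self):
        self.a = [None]  # index 0 unused, so children of i are 2i and 2i + 1

    def insert(self, x):
        self.a.append(x)
        i = len(self.a) - 1
        while i > 1 and self.a[i // 2] > self.a[i]:  # sift up until the heap property holds
            self.a[i // 2], self.a[i] = self.a[i], self.a[i // 2]
            i //= 2

    def extract_min(self):
        root = self.a[1]
        self.a[1] = self.a[-1]  # move the bottom-rightmost element to the root
        self.a.pop()
        i, n = 1, len(self.a) - 1
        while True:             # sift down: swap with the smaller child
            c = 2 * i
            if c > n:
                break
            if c + 1 <= n and self.a[c + 1] < self.a[c]:
                c += 1
            if self.a[i] <= self.a[c]:
                break
            self.a[i], self.a[c] = self.a[c], self.a[i]
            i = c
        return root
```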
\begin{example}[]{Min-Heap}
The following illustrates a Min-Heap with seven elements:
\begin{center}
\begin{forest}
for tree={
circle, draw, fill=blue!20, minimum size=10mm, inner sep=0pt, % Node style
s sep=15mm, % Sibling separation
l sep=15mm % Level separation
}
[2
[4
[8]
[10]
]
[6
[14]
[18]
]
]
\end{forest}
\end{center}
\end{example}
\newpage
\subsubsection{Quick Sort}
\begin{definition}[]{Quick Sort}
Quick Sort is a divide-and-conquer algorithm that selects a pivot element from the array, partitions the other elements into two subarrays according to whether they are less than or greater than the pivot, and then recursively sorts the subarrays. The process continues until the base case of an empty or single-element array is reached.
\end{definition}
\begin{properties}[]{Characteristics and Performance}
\begin{itemize}
\item \textbf{Efficiency:} Performs well on average and for in-place sorting but can degrade on specific inputs.
\item \textbf{Time Complexity:}
\begin{itemize}
\item Best case: \tcl{n \log n}
\item Worst case: \tco{n^2} (when the pivot is poorly chosen)
\item Average case: \tct{n \log n}
\end{itemize}
\item \textbf{Space Complexity:} In-place sorting typically requires \tct{\log n} additional space for recursion.
\item \textbf{Limitations:} Performance depends heavily on pivot selection.
\end{itemize}
\end{properties}
\begin{algorithm}
\begin{spacing}{1.2}
\caption{Quick Sort}
\begin{algorithmic}[1]
\Procedure{QuickSort}{$A, l, r$}
\If{$l < r$}
\State $k \gets \Call{Partition}{A, l, r}$
\State \Call{QuickSort}{$A, l, k - 1$} \Comment{Sort left group}
\State \Call{QuickSort}{$A, k + 1, r$} \Comment{Sort right group}
\EndIf
\EndProcedure

\Procedure{Partition}{$A, l, r$}
\State $i \gets l$
\State $j \gets r - 1$
\State $p \gets A[r]$ \Comment{Pivot element}
\Repeat
\While{$i < r$ and $A[i] \leq p$}
\State $i \gets i + 1$ \Comment{Search next element for left group}
\EndWhile
\While{$j > l$ and $A[j] > p$}
\State $j \gets j - 1$ \Comment{Search next element for right group}
\EndWhile
\If{$i < j$}
\State Exchange $A[i]$ and $A[j]$
\EndIf
\Until{$i \geq j$} \Comment{Loop ends when $i$ and $j$ meet}
\State Swap $A[i]$ and $A[r]$ \Comment{Move pivot element to correct position}
\State \Return $i$
\EndProcedure
\end{algorithmic}
\end{spacing}
\end{algorithm}
The tests $i < r$ and $j > l$ in the while loops of \textsc{Partition} catch the cases where no element can be added to the left or right group.
The correct position for the pivot element is $k = j + 1 = i$, since all elements on the left hand side are smaller and all on the right hand side larger than $p$.
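A 0-indexed Python sketch of the partition scheme and the recursion (illustrative only; pivot is the last element of the range):

```python
def partition(a, l, r):
    """Partition a[l..r] around the pivot a[r]; return the pivot's final index."""
    p = a[r]
    i, j = l, r - 1
    while True:
        while i < r and a[i] <= p:
            i += 1            # next element for the left group
        while j > l and a[j] > p:
            j -= 1            # next element for the right group
        if i < j:
            a[i], a[j] = a[j], a[i]
        else:
            break             # i and j have met / crossed
    a[i], a[r] = a[r], a[i]   # move pivot to its correct position
    return i

def quick_sort(a, l, r):
    if l < r:
        k = partition(a, l, r)
        quick_sort(a, l, k - 1)   # sort left group
        quick_sort(a, k + 1, r)   # sort right group
    return a
```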
\newsection
\section{Search \& Sort}
\subsection{Search}
\subsubsection{Linear search}
Linear search, as the name implies, searches through the entire array and has linear runtime in the worst case, i.e. $\Theta(n)$.

It works by simply iterating over an iterable object (usually an array) and returning the first element that matches the search pattern (it can also be modified to return \textit{all} matching elements).
\tc{n}
\subsubsection{Binary search}
If we want to search in a sorted array, however, we can use what is known as binary search, improving the runtime to logarithmic, i.e. $\Theta(\log(n))$.
It works by divide and conquer: we pick the middle element of the current range (at $m = \floor{\frac{l + r}{2}}$) and compare it to our search query $b$; if $A[m] < b$ we continue in the right half, otherwise in the left half. This is repeated until we have homed in on $b$. Pseudo-Code:
\begin{algorithm}
\begin{spacing}{1.2}
\caption{\textsc{binarySearch(b)}}
\begin{algorithmic}[1]
\State $l \gets 1$, $r \gets n$ \Comment{\textit{Left and right bound}}
\While{$l \leq r$}
\State $m \gets \floor{\frac{l + r}{2}}$
\If{$A[m] = b$} \Return $m$ \Comment{\textit{Element found}}
\ElsIf{$A[m] > b$} $r \gets m - 1$ \Comment{\textit{Search to the left}}
\Else \hspace{0.2em} $l \gets m + 1$ \Comment{\textit{Search to the right}}
\EndIf
\EndWhile
\State \Return "Not found"
\end{algorithmic}
\end{spacing}
\end{algorithm}
\tc{\log(n)}
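A 0-indexed Python sketch of the pseudocode (illustrative; returns $-1$ instead of "Not found"):

```python
def binary_search(a, b):
    """Return an index i with a[i] == b in the sorted list a, or -1 if absent."""
    l, r = 0, len(a) - 1
    while l <= r:
        m = (l + r) // 2
        if a[m] == b:
            return m          # element found
        elif a[m] > b:
            r = m - 1         # search to the left
        else:
            l = m + 1         # search to the right
    return -1                 # not found
```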
Proving runtime lower bounds (worst-case runtime) for this kind of algorithm is done using a decision tree; the bound is in fact $\Omega(\log(n))$.
% INFO: If =0, then there is an issue with math environment in the algorithm