\newpage
\subsection{Virtual Memory}

Conceptually, Assembly operations treat memory as one very large contiguous array of bytes: each byte has an individual address.

\begin{minted}{gas}
movl (%rcx), %eax # Refers to a Virtual Address
\end{minted}

In reality, this is an abstraction over the memory hierarchy. Actual allocation is handled by the compiler \& OS.

The main advantages are:
\begin{itemize}
\item Efficient use of (limited) RAM: keep only the active areas of the virtual address space in memory
\item Simplified memory management for programmers
\item Isolated address spaces: processes cannot interfere with other processes
\end{itemize}

\subsubsection{Address Translation}

Address translation happens in a dedicated hardware component: the Memory Management Unit (MMU).

Virtual and physical addresses share the same structure, but the VPN is usually far longer than the PPN, since the virtual address space is far larger. The offsets are identical.

\begin{multicols}{2}
\begin{center}
Virtual:

\begin{tabular}{|c|c|}
\hline
V. Page Number & V. Page Offset \\
\hline
\end{tabular}
\end{center}
\columnbreak
\begin{center}
Physical:

\begin{tabular}{|c|c|}
\hline
P. Page Number & P. Page Offset \\
\hline
\end{tabular}
\end{center}
\end{multicols}
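The address structure above can be illustrated with a little bit arithmetic. This is a minimal sketch; the $6$-bit offset and the function name \verb|split_va| are assumptions for illustration (they match the example further below):

\begin{minted}{python}
# Split a virtual address into VPN and page offset (bit-arithmetic sketch).
# OFFSET_BITS = 6 is an assumed example value; real systems use e.g. 12 bits.
OFFSET_BITS = 6

def split_va(va: int) -> tuple[int, int]:
    vpn = va >> OFFSET_BITS                  # upper bits: virtual page number
    offset = va & ((1 << OFFSET_BITS) - 1)   # lower bits: page offset
    return vpn, offset

vpn, offset = split_va(0x03D4)  # 14-bit address 0b00111101 010100
print(hex(vpn), hex(offset))    # VPN = 0xf, offset = 0x14
\end{minted}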

The Page Table, whose base address is held in a special register (the Page Table Base Register, PTBR), contains the mapping $\text{VPN} \mapsto \text{PPN}$. Page Table Entries (PTEs) are cached in the L1 cache like any other memory word.

The Translation Lookaside Buffer (TLB) is a small hardware cache inside the MMU; a TLB hit is faster than an L1 hit.\footnote{In practice, most address translations actually hit the TLB.}
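The $\text{VPN} \mapsto \text{PPN}$ lookup can be sketched as a toy translation function; the page-table contents and the $6$-bit offset here are invented illustration values, not real entries:

\begin{minted}{python}
# Toy page-table translation: look up the PPN for a VPN, keep the offset.
# The table contents and bit widths are made up for illustration.
OFFSET_BITS = 6
page_table = {0x0F: 0x0D, 0x2A: 0x11}    # hypothetical VPN -> PPN entries

def translate(va: int) -> int:
    vpn = va >> OFFSET_BITS
    offset = va & ((1 << OFFSET_BITS) - 1)
    ppn = page_table[vpn]                 # a miss here would be a page fault
    return (ppn << OFFSET_BITS) | offset

print(hex(translate(0x03D4)))             # VPN 0xf maps to PPN 0xd -> 0x354
\end{minted}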

\content{Example} We consider $N = 14$ bit virtual addresses and $M = 12$ bit physical addresses. The offset takes $6$ bits.\footnote{The images in this example are from the SPCA lecture notes for FS25.}

If we assume a TLB with $16$ entries and $4$-way associativity, the VPN translates like this:

\begin{center}
\includegraphics[width=0.7\linewidth]{images/VPN-to-TLB.png}
\end{center}
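The field split for this TLB follows from the numbers in the example: $16$ entries at $4$-way associativity give $4$ sets, hence $2$ set-index bits, and the remaining $6$ bits of the $8$-bit VPN form the tag. A sketch (the function name is ours):

\begin{minted}{python}
# TLB lookup fields for the example: 8-bit VPN, 16 entries, 4-way associative
# -> 16/4 = 4 sets -> 2 set-index bits; the remaining 6 VPN bits are the tag.
TLB_SETS = 4
INDEX_BITS = 2                       # log2(TLB_SETS)

def tlb_fields(vpn: int) -> tuple[int, int]:
    index = vpn & (TLB_SETS - 1)     # low VPN bits select the set
    tag = vpn >> INDEX_BITS          # high VPN bits are compared as the tag
    return tag, index

print(tlb_fields(0x0F))              # tag = 0b000011, index = 0b11
\end{minted}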

Similarly, if we assume a direct-mapped $16$-line cache with $4$ byte blocks:

\begin{center}
\includegraphics[width=0.65\linewidth]{images/PPN-to-Cache.png}
\end{center}
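The cache-side split also follows from the example: $4$ byte blocks give $2$ block-offset bits, $16$ lines give $4$ index bits, and the remaining $6$ bits of the $12$-bit physical address are the tag. A sketch (the function name is ours):

\begin{minted}{python}
# Physical-address cache fields for the example: 12-bit PA, direct-mapped,
# 16 lines (4 index bits), 4-byte blocks (2 block-offset bits), 6 tag bits.
def cache_fields(pa: int) -> tuple[int, int, int]:
    block_offset = pa & 0b11       # lowest 2 bits: byte within the block
    index = (pa >> 2) & 0b1111     # next 4 bits select the cache line
    tag = pa >> 6                  # remaining 6 bits are compared as the tag
    return tag, index, block_offset

print(cache_fields(0x354))         # tag = 13, index = 5, offset = 0
\end{minted}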

Multi-level page tables add further steps to this process: instead of a single Page Table, a Page Directory Table holds the addresses of separate Page Tables. The top bits of the VPN index into the Page Directory and the following bits index the selected Page Table; this scheme extends naturally to any depth of page tables.
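A two-level walk can be sketched with the same toy numbers as before; the $4$/$4$ split of the $8$-bit VPN and all table contents are assumptions for illustration:

\begin{minted}{python}
# Two-level walk sketch: top VPN bits index the page directory, the next
# bits index the selected page table. All tables and widths are made up.
OFFSET_BITS = 6
PT_BITS = 4                                 # assumed split of the 8-bit VPN
page_directory = {0b0000: {0b1111: 0x0D}}   # dir index -> page table

def walk(va: int) -> int:
    vpn = va >> OFFSET_BITS
    dir_idx = vpn >> PT_BITS                # top bits: page directory index
    pt_idx = vpn & ((1 << PT_BITS) - 1)     # next bits: page table index
    ppn = page_directory[dir_idx][pt_idx]
    return (ppn << OFFSET_BITS) | (va & ((1 << OFFSET_BITS) - 1))

print(hex(walk(0x03D4)))                    # same result as a one-level walk
\end{minted}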

\subsubsection{x86 Virtual Memory}

In \verb|x86-64|, virtual addresses are $48$ bits long, yielding an address space of $256$TB.\\
Physical addresses are $52$ bits, with $40$ bit PPNs, yielding a page size of $4$KB.
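These numbers are easy to sanity-check with a few lines of arithmetic:

\begin{minted}{python}
# Sanity-check the x86-64 figures: 48-bit VAs give 2**48 bytes = 256 TB,
# and 52-bit PAs with 40-bit PPNs leave 12 offset bits = 4 KB pages.
TB = 2 ** 40
assert 2 ** 48 == 256 * TB          # virtual address space: 256 TB
assert 2 ** (52 - 40) == 4 * 1024   # page size: 4 KB
print("checks pass")
\end{minted}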
|