Memory Management in Computer Systems
What is Memory Management
Types of Memory Management
Problems Relating to Memory Management
Solutions to Memory Management Problems
Preferred Operating System
What is Memory Management
Memory management is concerned with how data is stored in a computer and how those data bits are interpreted and given meaning. It also deals with identifying and enforcing the rules for doing so, and it therefore has a tight connection to database administration as well. Although there is no agreed-upon division, the main difference is that database management is primarily concerned with data that must persist for periods longer than a single program’s execution, while memory management is primarily concerned with data used within a program for periods up to the duration of the program’s execution in the processor and memory. (Memory management | Encyclopedia of Computer Science, 2022)
In the virtual address space, we have our numbered pages. The Page Table structure records, for each page, whether it is held in physical memory or on a hard drive, and each page table entry (PTE) indicates where that page is located in physical memory. The hardware component that performs address translation is the Memory Management Unit (MMU), which is part of the CPU. The OS, on the other hand, is in charge of transferring pages between RAM and the hard drive. Let’s observe how they interact during translation. Consider a program (say, written in the C programming language) containing a straightforward instruction that assigns a value to a variable a in memory. When we ask the OS to execute the program, it hands it to the CPU. When the CPU reads this instruction, it asks the MMU for the physical address of a. The MMU examines the virtual address and extracts the part that contains the page number. For speed, it uses this virtual page number to query the Translation Lookaside Buffer (TLB), a cache of a small portion of the Page Table held inside the MMU chip, to retrieve the physical page number. If the page is not found in the TLB, the MMU consults the full Page Table in memory. Once it has the Page Table entry, the MMU can produce the physical address so that the CPU can access it. But what if that page is not in physical memory at all? In that case, the MMU raises a page fault exception, which the OS handles by retrieving the page from the hard drive. When this occurs, the OS employs a page replacement algorithm to pick another page that has not been used recently and can be transferred to the hard drive to make room. (“Addressing”, 2022; “Virtual Memory Address”, 2022)
Types of Memory Management
There are three perspectives on memory management: hardware memory management, operating system memory management, and application memory management. In hardware memory management, hardware such as the processor and the memory management unit manages the memory. The processor generates a virtual address for each memory access it makes, which must be converted to a physical address (Hailperin, 2019). The key hardware structure is the TLB (Translation Lookaside Buffer); there may be a hierarchy of TLBs, similar to the cache hierarchy. The MMU typically loads page table entries into the TLB on demand. In other words, when a memory access causes a TLB miss, the MMU loads the relevant translation from the page table into the TLB so that subsequent accesses to the same page will result in TLB hits. The main distinction across computer architectures is whether the MMU performs this TLB loading independently or with significant assistance from operating system software running on the processor.
Operating system memory management works in collaboration with hardware memory management. Hardware designers give operating system designers various tools to lower the demand for TLB entries. For instance, if individual TLB entries can map pages of different sizes, the operating system can map large, contiguously allocated structures with fewer TLB entries while still retaining flexible allocation for the rest of virtual memory. To lower TLB entry consumption, operating system designers must employ techniques such as variable page sizes. Even if all application processes use only small pages (4 KiB), the operating system itself can at least use larger pages. Given the importance of TLB pressure as a performance factor, operating system design decisions must take it into account. The typical page size is one clear illustration. Another, less visible, example is the size of the scheduler’s time slices: even if the TLB doesn’t need to be flushed at every process switch, switching processes frequently would increase TLB pressure and harm performance (Hailperin, 2019). The operating system must also allocate memory to user programs and reclaim it for reuse by other programs when it is no longer needed. Virtual memory systems additionally allow the operating system to make the computer appear to have more memory than it actually does and to give each program the illusion of exclusive access to the system’s memory (“1. Overview — Memory Management Reference 4.0 documentation”, 2022).
The application has a part to play as well. Programs with strong locality of reference will run significantly more efficiently thanks to the cache hierarchy as well as the TLB. When a program exceeds the TLB’s reach, performance typically drops off rather suddenly. Different data structures are intrinsically more or less TLB-friendly. For instance, a huge, sparsely populated table can perform significantly worse than a smaller, densely populated one. Theoretical analyses of algorithms may be misleading in this sense if they assume that all memory operations take a constant amount of time (Hailperin, 2019).
Application memory management entails allocating, from the limited resources available, the memory needed for a program’s objects and data structures, and recycling that memory for reuse when it is no longer needed. Application programs need additional code to manage their fluctuating memory requirements because they can rarely foresee in advance how much memory they will use. Application memory management involves two related tasks. The first is allocation: when the program demands a block of memory, the memory manager must carve it out of the larger blocks it has obtained from the operating system; the component that performs this function is called the allocator. The second is recycling: memory blocks that have been allocated, but whose data the program no longer needs, can be reclaimed for reuse. Memory recycling can be done in one of two ways: either the programmer must decide when memory can be reused (manual memory management), or the memory manager must be able to figure it out (known as automatic memory management) (Hailperin, 2019; “1. Overview — Memory Management Reference 4.0 documentation”, 2022).
Problems Relating to Memory Management
Some problems that may arise in memory management are slowdowns caused by memory access and translation, premature frees, dangling pointers, memory leaks, external fragmentation, rigid design, and difficult interfaces.
The CPU must perform a page table lookup for each and every memory access. Even if the page table were represented very efficiently, that lookup would cost at least one additional memory access, so there would be at least twice as many memory accesses: one page-table access for every genuine access. Because memory speed is frequently the bottleneck in modern computer systems, virtual memory could make applications run half as quickly if the page table lookup could not be mostly avoided (Hailperin, 2019).
Many programs release memory, yet later try to use it, and then crash or behave erratically. The remaining memory reference in this situation is called a dangling pointer, and the condition is known as a premature free. Usually this is limited to manual memory management. Some programs continually allocate memory and never release it, which eventually leads to memory exhaustion; this condition is known as a memory leak. A poor allocator can also do its job of handing out and taking back memory blocks so badly that it can no longer supply sufficiently large blocks even though ample memory is free in total; this happens when the free memory is split into numerous small blocks separated by blocks still in use, a situation called external fragmentation. If memory managers are used in ways other than those they were designed for, they can also cause serious performance problems. Every memory management system tends to make assumptions about how the application will use memory, such as typical block sizes, reference patterns, or object lifetimes, and if these assumptions are wrong, the memory manager may spend far more time keeping track of everything. Finally, if objects are passed between modules, the interface design needs to take memory management into account (“1. Overview — Memory Management Reference 4.0 documentation”, 2022).
Solutions to Memory Management Problems
The MMU (memory management unit) addresses the slowdown problem through the concept of locality: recently accessed addresses will probably be accessed again shortly, and adjacent addresses will probably be accessed soon as well. The MMU takes advantage of this locality by retaining an easily accessible copy of a small number of recently used virtual-to-physical translations. It keeps only a finite number of pairs, each containing a page number and its corresponding page frame number. This collection of pairs is called the Translation Lookaside Buffer (TLB). Because the majority of memory accesses use page numbers found in the TLB, the MMU can produce the relevant page frame number without having to consult the page table. The design of the TLB hardware also affects how severe the slowdown is (Hailperin, 2019).
The solution to fragmentation is paged segmentation. The paged segmentation approach divides the segment table into pages, which helps keep the table small; here, the segments themselves are split into pages. Paged segmentation is a dual concept in which paging is applied to segmentation. In paging, a form of automatic overlaying, the address space is divided into equal-sized blocks known as pages, whereas segmentation is a distinct form of virtual memory that allows several one-dimensional address spaces to exist concurrently. This makes it possible to manage tables, stacks, and other data structures as logical entities that grow independently. Under segmentation, each process is divided into segments, and the size of each segment varies. Segments are loaded into the logical address space, which consists of a collection of segments of different lengths. Each segment has a name and a length. To execute a segment, its logical address is mapped into the physical memory area. In most cases, we refer to a segment by its number rather than its name, so the segment’s number and an offset are needed in order to access it. The number serves as an index into the operating system’s segment table, and the offset’s value is checked against the segment’s limit. The physical address is formed by combining the base address found in the segment table entry with the offset value. (“Memory management | Encyclopedia of Computer Science”, 2022; “Memory protection | Encyclopedia of Computer Science”, 2022; Science, 2022; “Segmented Paging vs. Paged Segmentation”, 2022)
To fix a dangling pointer in C, for example, set the pointer “ptr” to NULL after freeing the memory it points to; the pointer then no longer points anywhere, and later code can test it against NULL. Memory leaks can be prevented by using garbage collection. Most widely used programming languages today come with tools that help programmers manage memory automatically: they provide a garbage collector, which releases memory that the application no longer requires. Writing code that releases unused resources is also a crucial step in preventing leaks, because almost all languages have resource kinds that are not released automatically, such as file handles and network connections. (“Memory leak detection — How to find, eliminate, and avoid | Raygun Blog”, 2022)
Preferred Operating System
Windows’s virtual memory data structure is a tree. Each node of the tree is called a Virtual Address Descriptor (VAD), and each node is marked as free, committed, or reserved. Committed nodes are currently in use; free nodes are unused; reserved nodes cannot be used until the reservation is released. Windows uses cluster demand paging. An address in Windows has two components: the page number and the page offset.
Linux uses linked list data structures. It keeps a list of VM area structs, which is searched every time a page has to be located; each entry also records the address range, the protection setting, and the growth direction (up or down). If there are more than 32 elements, Linux converts the linked list into a tree, so it employs different data structures in different circumstances. Linux uses only demand paging; pre-paging is not used. A linear address has four components: the Global Directory, the Middle Directory, the Page Table, and the Offset.
(“Compare the memory management of windows with linux”, 2022)
UNIX is simple and elegant yet still modern, in contrast to the sophisticated, convoluted code base that Windows has evolved into. As a result, Windows has more features but is harder for developers to maintain and extend, whereas UNIX has fewer features but is simpler to maintain and develop. For the end user, Windows is likely to operate better, despite occasionally crashing. (Mác, 2022)
1. Overview — Memory Management Reference 4.0 documentation. Memorymanagement.org. (2022). Retrieved 2 July 2022, from https://www.memorymanagement.org/mmref/begin.html.
Addressing. Dl.acm.org. (2022). Retrieved 1 July 2022, from https://dl.acm.org/doi/epdf/10.5555/1074100.1074110.
Compare the memory management of windows with linux. Ukessays.com. (2022). Retrieved 2 July 2022, from https://www.ukessays.com/essays/engineering/compare-the-memory-management.php.
Hailperin, M. (2019). Ch. 6 Operating systems and middleware : supporting controlled interaction. Thomson Learning, Inc.
Mác, E. (2022). Comparison of Memory Management of Windows With LINUX | PDF | Kernel (Operating System) | Computer Engineering. Scribd. Retrieved 2 July 2022, from https://www.scribd.com/doc/42786606/Comparison-of-memory-management-of-windows-with-LINUX.
Memory leak detection — How to find, eliminate, and avoid | Raygun Blog. Raygun Blog. (2022). Retrieved 2 July 2022, from https://raygun.com/blog/memory-leak-detection/.
Memory management | Encyclopedia of Computer Science. DL Books. (2022). Retrieved 1 July 2022, from https://dl.acm.org/doi/10.5555/1074100.1074591.
Memory protection | Encyclopedia of Computer Science. DL Books. (2022). Retrieved 1 July 2022, from https://dl.acm.org/doi/10.5555/1074100.1074595.
Science, C. (2022). Youtube.com. Retrieved 1 July 2022, from https://www.youtube.com/watch?v=p9yZNLeOj4s.
Segmented Paging vs. Paged Segmentation. Baeldung. (2022). Retrieved 1 July 2022, from https://www.baeldung.com/cs/segmented-paging-vs-paged-segmentation.
Virtual Memory Address. Baeldung. (2022). Retrieved 1 July 2022, from https://www.baeldung.com/cs/virtual-memory-address.