Virtual memory is a technique that allows processes to run even when they do not fully fit in physical RAM, and it makes it practical to write programs larger than physical memory.
What Is Virtual Memory and How Is It Used in the Windows Operating System?
In addition, virtual memory provides a memory abstraction that separates the logical address space seen by the user from physical memory, which greatly simplifies things for programmers: they no longer have to worry about physical memory limitations.
Virtual memory implementations rely on the fact that a program need not be entirely in memory while it executes: only the code and data required at a given moment are loaded, not the whole program.
It is this distinction between the logical memory the user sees and physical RAM that makes the technique possible. Although virtual memory can also be applied in a system with segmentation, it is usually implemented through demand paging.
When the system runs low on memory, a swap file (on Windows, the page file) is created on disk and functions as an extension of main memory.
When many applications are running on a Windows system and RAM is exhausted, the system uses this file to move data between the hard disk and RAM and vice versa.
This frees space in physical memory for current operations, but it clearly slows the system down.
Thus, even if a computer does not have 4 GB of RAM, virtual memory lets us simulate a 4 GB address space and still give multiple applications good execution capacity.
Most computers have four kinds of storage: CPU registers, cache, physical memory, and the hard drive.
Many applications need access to more information than can be held in physical memory at once. This is especially true when the operating system allows multiple processes and applications to run simultaneously.
One solution to needing more memory than is available is for applications to keep some of their information on disk and move it into main memory when needed. There are several ways to do this.
One option is to make the application itself responsible for deciding which information to keep in each memory segment, fetching it and moving it out as needed.
The disadvantage, beyond the difficulty of designing and implementing such a program, is that the memory demands of two or more programs can conflict. Ideally, every programmer should be able to design a program as if it were the only one running on the system.
The alternative is to use virtual memory, where a combination of special hardware and the operating system uses main and secondary memory to make it appear that the computer has much more main memory than it actually does.
This method is transparent to processes. The maximum amount of memory a process can see depends on the processor's address width.
For example, on a 32-bit system the maximum addressable space is 4 GB. All the movement of data between the different memory areas is handled automatically, which makes the application programmer's job much easier.
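The 4 GB figure follows directly from the address width; a quick check of the arithmetic (the values below assume a 32-bit address):

```python
# A 32-bit address can name 2**32 distinct bytes.
ADDRESS_BITS = 32
addressable_bytes = 2 ** ADDRESS_BITS
print(addressable_bytes)                # 4294967296
print(addressable_bytes // 2 ** 30)     # 4 (GiB)
```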
Although virtual memory could be implemented purely in operating system software, in practice it is almost always implemented with hardware and software working together.
How Does It Work?
The translation of virtual addresses to physical addresses is done by a Memory Management Unit (MMU).
The operating system is responsible for deciding which parts of a program's memory are kept in physical memory.
It also maintains the address translation tables that record the relationships between virtual and physical addresses for use by the MMU.
Finally, when a page-fault exception occurs, the operating system is responsible for finding a physical memory space to store the missing information, retrieving that information from disk, updating the translation tables, and finally resuming execution of the program that triggered the fault.
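Those page-fault steps can be sketched in miniature; the frame pool, backing store, and page size below are illustrative assumptions, not how Windows actually lays things out:

```python
PAGE_SIZE = 4096
free_frames = [5, 6]                  # physical frames not yet in use (made up)
disk = {2: b"x" * PAGE_SIZE}          # backing store: virtual page -> contents
page_table = {}                       # virtual page -> physical frame
ram = {}                              # physical frame -> contents

def handle_page_fault(vpn):
    frame = free_frames.pop()         # 1. find a free physical frame
    ram[frame] = disk[vpn]            # 2. read the missing page from disk
    page_table[vpn] = frame           # 3. update the translation table
    return frame                      # 4. the faulting instruction can now retry

print(handle_page_fault(2))  # -> 6
```

When no free frame remains, step 1 must first evict a resident page, which is where the replacement policies discussed later come in.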
On most computers, the address translation tables are themselves stored in physical memory.
This means that a virtual memory reference needs one or two extra references to find the entry in the translation table, plus one more reference to complete the access itself.
To speed this up, most CPUs include the MMU on the same chip and maintain a small table of recently used virtual-to-physical address translations called the TLB (Translation Lookaside Buffer).
When a translation is found in this buffer, no additional memory references are required, which saves time.
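The lookup order just described, TLB first and the in-memory page table only on a miss, can be sketched as follows; the page size, table contents, and addresses are made-up illustrative values:

```python
# Illustrative sketch of a TLB in front of a page table (assumed 4 KiB pages).
PAGE_SIZE = 4096

page_table = {0: 7, 1: 3, 2: 9}   # virtual page -> physical frame (made up)
tlb = {}                          # small cache of recent translations

def translate(vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn in tlb:                      # TLB hit: no page-table walk needed
        frame = tlb[vpn]
    else:                               # TLB miss: walk the in-memory table
        frame = page_table[vpn]         # a KeyError here would be a page fault
        tlb[vpn] = frame                # refill the TLB for next time
    return frame * PAGE_SIZE + offset

print(translate(4100))  # virtual page 1, offset 4 -> 3*4096 + 4 = 12292
```

A second access to the same page would hit the `tlb` dictionary and skip the table lookup entirely.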
In some processors, TLB misses are handled entirely by hardware.
In others, the operating system's help is needed: an exception occurs, the operating system replaces one of the TLB entries with an entry from the translation table, and the instruction that made the original memory reference is executed again.
Hardware with virtual memory support usually also provides memory protection.
The MMU can change its behavior according to the type of memory reference (read, write, or execute) and the privilege mode the CPU is in when the reference is made.
This allows the operating system to protect its own code and data from corruption by an application, and to protect applications from interfering with each other.
Whenever memory is used, that is, when an address is read or written by the CPU, hardware inside the computer translates the memory address generated by the software, yielding either the actual physical address or an indication that the address is not currently in main memory.
In the first case, the reference completes as if virtual memory were not involved, and the software accesses what it needs and continues to operate normally.
In the second case, the operating system is invoked to handle the situation, and the program is resumed or stopped depending on the circumstances.
Virtual memory, then, is a technique for simulating much more memory space than the machine physically has. This allows programs to run regardless of the actual size of physical memory.
The illusion of a large memory is supported by the address translation mechanism together with a large amount of fast hard disk storage.
Thus the whole address space remains usable: a small part of it is in real memory, and the rest is stored on disk where it can still be referenced.
Since only the part of the address space held in main memory is accessible to the CPU, and the locality of memory references shifts while a program runs, parts of the address space must be brought from disk into main memory as needed. In short, virtual memory has become an essential component of most modern operating systems.
At any given moment more processes can be kept in memory, since only a few parts of each particular process need to be resident.
Moreover, time is saved because unused parts are never loaded into memory, or are removed from it. However, the operating system must know how to manage this scheme.
It also simplifies loading a program for execution, a property called relocation: the same program can run at any location in physical memory.
In a steady state, almost all of main memory is filled with process parts, so the processor and operating system have direct access to as many processes as possible; when the operating system brings one part into memory, it must remove another.
If it removes a part just before that part is used, it must bring it right back.
Too many of these wasted swaps lead to what is known as thrashing (hyper-paging), where the processor spends more time exchanging parts than executing user instructions.
To avoid this, the operating system tries to predict which parts are least likely to be needed soon, based on which have been used least recently.
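One common form of that prediction is least-recently-used (LRU) replacement: evict the page that has gone untouched the longest. A minimal sketch, assuming a three-frame memory and a made-up page reference string:

```python
from collections import OrderedDict

FRAMES = 3  # assumed physical memory size, in pages

def simulate_lru(references):
    """Count page faults for a reference string under LRU replacement."""
    memory = OrderedDict()   # page -> None, ordered oldest-first
    faults = 0
    for page in references:
        if page in memory:
            memory.move_to_end(page)          # hit: mark as most recently used
        else:
            faults += 1                       # miss: page fault
            if len(memory) >= FRAMES:
                memory.popitem(last=False)    # evict least recently used page
            memory[page] = None
    return faults

print(simulate_lru([1, 2, 3, 1, 4, 2]))  # -> 5
```

If the program exhibits good locality, hits dominate and the fault count stays low; a reference string with no locality at all is exactly the thrashing scenario described above.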
The above arguments rest on the principle of locality of reference, which states that data and instruction references within a process tend to cluster together.
Therefore, the assumption is that only a few parts of a process will be needed over any short period of time.
One way to validate the locality principle is to examine the performance of processes in a virtual memory environment; the principle suggests that such memory schemes can work. In addition, two components are needed for virtual memory to be practical and effective.
First, there must be hardware support; second, the operating system must include software to manage the movement of pages or segments between secondary memory and main memory.
Once the physical address is obtained, the cache is searched before main memory is consulted; if the data is among the most recently used, the search succeeds. If it fails, main memory is consulted, or, in the worst case, the disk.
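That lookup order, cache first, then main memory, then disk, amounts to walking a chain of progressively slower levels; the addresses and contents below are illustrative assumptions:

```python
# Illustrative lookup chain: cache -> main memory -> disk (made-up contents).
cache = {0x10: "a"}
main_memory = {0x10: "a", 0x20: "b"}
disk = {0x10: "a", 0x20: "b", 0x30: "c"}

def read(addr):
    # Try each level in order, fastest first; return the data and the
    # level that satisfied the request.
    for level, store in (("cache", cache), ("RAM", main_memory), ("disk", disk)):
        if addr in store:
            return store[addr], level
    raise MemoryError("invalid address")

print(read(0x20))  # missed the cache, found in RAM: ('b', 'RAM')
```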
Virtual memory is usually implemented using paging.
In paging, the least significant bits of the virtual address are preserved and used directly as the least significant bits of the physical memory address.
The most significant bits are used as a key into one or more address translation tables to find the rest of the physical address.
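At the bit level this split looks as follows; the 4 KiB page size and the table entry are assumed values for illustration:

```python
OFFSET_BITS = 12                 # assumed 4 KiB pages: 2**12 bytes per page
page_table = {0x12345: 0x00ABC}  # virtual page -> physical frame (made up)

vaddr = (0x12345 << OFFSET_BITS) | 0x67    # virtual page 0x12345, offset 0x67
vpn = vaddr >> OFFSET_BITS                 # high bits: key into the table
offset = vaddr & ((1 << OFFSET_BITS) - 1)  # low bits: copied through unchanged
paddr = (page_table[vpn] << OFFSET_BITS) | offset

print(hex(paddr))  # 0xabc067
```

Note that the offset bits pass through untranslated; only the page-number bits go through the table (or, in practice, the TLB).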