An operating system (OS) is a set of programs that manages a computer system's basic processes and allows other programs to run normally.
What Is an Operating System, and What Are Its Types?
An operating system is a set of programs designed to provide communication between the user and the machine comfortably and efficiently. It is responsible for managing the computer's resources, including the hardware at the most basic level.
In the early days of computing, the programmer needed deep knowledge of the machine and had to deal with the hardware directly when a program failed. They would examine the computer's indicator lights and status panels to determine the cause of the error, set the system up again, and correct the program.
The importance of operating systems emerged historically from the way first-generation computers were operated through plugboards, starting in the 1950s. Then, with batch collections in the second generation, operation improved significantly: the operator always performed a series of repetitive steps, which is one of the defining features of a program. In other words, it became clear that the operator's tasks could, over time, be embodied in a program called an "operating system," despite its enormous complexity. The first operating systems include the Fortran Monitor System (FMS) and IBSYS.
Each computer system is generally divided into two parts: hardware and software.
The software makes the hardware useful and can be divided into two classes: system programs (basic software), which manage the computer's operation, and application programs, which perform useful actions for users.
Among the system programs are operating systems, compilers, interpreters, and editors. Every general-purpose computer must have an operating system for other programs to run.
Operating systems have historically been tied to the architecture of the computers they run on, so their history parallels that of the hardware.
Operating systems have undergone a series of revolutionary changes called generations.
On the hardware side, generations are marked by major advances in components: from vacuum tubes (first generation) to transistors (second generation), integrated circuits (third generation), and large- and very-large-scale integrated circuits (fourth generation).
Each successive hardware generation brought significant reductions in cost, size, heat emission, and power consumption, along with significant increases in speed and capacity. Today there is a wide variety of systems, such as Windows 10 and Linux.
In this decade, batch processing systems emerged, in which jobs were collected in groups, or batches.
While a job was running, it had full control of the machine. At the end of each job, control returned to the OS, which cleaned up, read in, and started the next job.
The concept of system file names appeared, achieving a degree of data independence.
General Motors' research laboratory made history by running the first operating system, for the IBM 701.
Multiprogrammed shared systems were developed in this generation, in which several processors were used in a single system to increase the machine's processing power.
A program simply stated that a file would be written to a tape drive with a certain number of tracks and a certain density; the OS then found a tape drive with the desired features and instructed the operator to mount the tape on that drive.
In this period, the IBM System/360 family of computers was created. Designed as general-purpose systems that had to process large volumes of different types of data, it led to a new evolution: multimode systems supporting concurrent batch processing, time-sharing, real-time processing, and multiprocessing.
The systems of the current period are considered fourth-generation systems.
With the widespread use of computer networks and online transactions, it became possible to access geographically remote computers through various types of terminals.
In these systems, the concept of the virtual machine appears, decoupled from the hardware of the computer the user connects to; instead, the user sees a graphical interface created by the system.
The system consists of a set of software packages that can be used to manage interactions with the hardware.
The kernel is the core that provides the system's basic functions, such as process management, files, main input/output, and communication.
The command interpreter, which provides communication with the OS through a control language, allows the user to control peripherals without knowing the features of the hardware in use or managing physical addresses.
The file system is what allows files to be stored in a tree structure.
In short, operating systems are an interface between operators, application developers, system programmers, programs, hardware, and users.
The OS manages the distribution of the processor among different programs via a scheduling algorithm.
The type of scheduler used depends entirely on the OS and the desired goal.
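As an illustration, one widely used scheduling algorithm, round robin, can be sketched in a few lines of Python (the job names and burst times here are made up for the example):

```python
from collections import deque

def round_robin(jobs, quantum):
    """Simulate round-robin CPU scheduling.

    jobs: dict mapping job name -> remaining burst time.
    quantum: the time slice each job receives per turn.
    Returns the order in which jobs finish.
    """
    queue = deque(jobs.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        if remaining > quantum:
            # The job used its whole slice; re-queue it with the time left.
            queue.append((name, remaining - quantum))
        else:
            # The job completes within this slice.
            finished.append(name)
    return finished

print(round_robin({"A": 5, "B": 2, "C": 4}, quantum=2))  # → ['B', 'C', 'A']
```

Each program gets the processor for at most one quantum before control returns to the scheduler, which is what gives interactive users the impression of simultaneous execution.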
Random Access Memory Management
The OS is responsible for managing the memory space reserved for each application and user, if appropriate.
When physical memory runs low, the OS can create a memory area on the hard drive called virtual memory.
Virtual memory allows applications to run that require more memory than the system's RAM, although this memory is much slower.
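A minimal sketch of the address translation behind virtual memory, assuming a hypothetical page table and a 4 KB page size (the frame numbers below are invented for the example):

```python
PAGE_SIZE = 4096  # assumed page size in bytes

# Hypothetical page table: virtual page number -> physical frame number.
# Pages absent from the table have been moved out to disk (swap space).
page_table = {0: 7, 1: 3, 4: 2}

def translate(virtual_addr):
    """Translate a virtual address to a physical one, or signal a page fault."""
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    if vpn not in page_table:
        # A real OS would now load the page back from disk into a free frame.
        return None  # page fault
    return page_table[vpn] * PAGE_SIZE + offset

print(translate(4100))   # page 1, offset 4 -> frame 3 -> 12292
print(translate(8192))   # page 2 is on disk -> None (page fault)
```

The slowness mentioned above comes from the page-fault path: servicing it requires a disk access, which is orders of magnitude slower than a RAM access.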
Through device drivers, the OS coordinates and controls programs' access to hardware resources.
Application Execution Management
Allows applications to run smoothly by allocating the resources they need to perform their functions.
It is responsible for the security of program execution and guarantees that resources are used only by programs and users with the appropriate authority.
It manages read and write operations on the file system, and access rights to application and user files.
It provides a set of indicators that can be used to diagnose the correct operation of the computer.
It makes using a computer easier.
It provides the most efficient use of computer resources.
It must be designed to allow the development, testing, and introduction of new system functions without disrupting service.
It is responsible for managing the computer's hardware resources; that is, it assigns each program a portion of processor time so that resources are shared. The OS communicates with peripheral devices on the user's behalf when required, and it organizes data for fast and secure access.
It allows the user to easily manage everything related to the setup and use of computer networks.
Providing Input and Output
It should make it easier for the user to access and manage the computer’s input/output devices.
It prevents users from blocking each other and reports when a resource is occupied by another user.
It enables the sharing of hardware and data among users.
Another task of an OS is to manage the resources of the computer when there are two or more programs running at the same time and which need to use the same resource.
In addition, in a multi-user system, beyond sharing physical devices, sharing information is often necessary or appropriate.
Security issues must also be taken into account: for example, only authorized users should access confidential information, and no user may overwrite critical areas of the system.
In short, the OS should keep track of who is using which resources, grant resources to those who request them, and arbitrate conflicting requests.
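This arbitration can be illustrated with a mutual-exclusion lock, one common mechanism for serializing access to a shared resource (the "printer" here is a hypothetical example):

```python
import threading

# A shared resource (say, a printer) that only one program may use at a time.
lock = threading.Lock()
log = []

def use_printer(job):
    # The lock arbitrates conflicting requests: only one thread may enter
    # the critical section, so each job's output is never interleaved.
    with lock:
        log.append(f"{job} started")
        log.append(f"{job} finished")

threads = [threading.Thread(target=use_printer, args=(f"job{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(log)  # each "started" is immediately followed by that job's "finished"
```

Without the lock, two jobs could interleave their lines; with it, each request either gets the resource or waits, exactly the arbitration role described above.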
- It provides the user interface with the system.
- It shares hardware resources among users.
- It allows users to share their data with each other.
- Prevents a user’s activities from interfering with other users’ activities.
- Facilitates access to I/O devices.
- Manages errors.
- It has control over the use of resources.
The resources it manages are processors, storage, input/output devices, and data.
A classification became necessary as systems evolved; considering the differences between their components, we can classify them as follows:
1) Batch Systems
Batch systems require that jobs be collected in batches.
Jobs are processed on a first-come, first-served basis, in order of acceptance. In these systems, memory is divided into two zones.
One is occupied by the OS; the other is used to load programs temporarily for execution. When a program's execution completes, a new program is loaded into the same memory area.
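A minimal sketch of this first-come, first-served behavior, using invented job names:

```python
from collections import deque

def run_batch(jobs):
    """Process jobs first come, first served, as in a simple batch system:
    one program occupies the user memory area at a time and runs to completion."""
    queue = deque(jobs)          # jobs wait in order of acceptance
    completed = []
    while queue:
        job = queue.popleft()    # load the next program into the free area
        completed.append(job)    # it runs to completion, then frees the area
    return completed

print(run_batch(["payroll", "billing", "report"]))  # → ['payroll', 'billing', 'report']
```

Note the contrast with the round-robin scheduler shown earlier: here no job is preempted, so completion order always matches arrival order.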
2) Multiprogramming Systems
Multiprogramming systems can support two or more simultaneous processes and allow instructions and data from two or more processes to reside in primary memory at the same time.
These systems use multiprocessing for information management. They are primarily characterized by a large number of simultaneously active programs competing for system resources such as processors, memory, and I/O devices.
These systems monitor the status of all active programs and system resources.
3) Multi-User Systems
Multi-user systems provide simultaneous access to a computer system through two or more terminals.
This type of OS is essential in the management of computer networks today.
4) Time-Sharing Systems
Time-sharing systems try to share common resources equally, to give each user the impression of having a separate computer.
In these systems, the memory manager often ensures the isolation and protection of programs, since they do not need to communicate with each other.
The I/O control subsystem is responsible for providing or removing device mappings, in order to maintain system integrity and serve all users.
The file manager provides protection and control over access to information, given the possibility of contention and conflict when files are accessed.
5) Real-Time Systems
These systems aim to provide faster response times and to process information without interruption.
In these systems, the memory manager is relatively less demanded as many processes are permanently in memory.
The file manager is usually found on large real-time systems; its primary purpose is fast access rather than efficient use of secondary storage.