The operating system

Reading time: 30 min

In brief

Article summary

In this lecture, we look at the history of operating systems (OS) and outline their main functionalities. We then describe the modern OS that is most probably running on your own computer.

Main takeaways

  • The first computers needed human operators to manually assign tasks and schedule them sequentially.

  • As computing power grew and more users could run more programs, logging, accounting, and payment were progressively automated.

  • Operating systems moved from single-user, single-task systems to multi-user, multitasking environments.

  • Modern operating systems still follow these principles. They consist of a kernel, which manages the hardware components and resources, and a user interface or shell (graphical or not) that allows user interaction (e.g., running programs).

  • Operating systems also implement various security mechanisms to ensure that users can only access the files and resources they are permitted to. They can also log activities for auditing and accountability, which is equally important for security.

Article contents

We have defined in the previous course what a computer is and given a description of its main components. We have seen that a computer is basically an electronic device that can be programmed to automatically carry out sequences of arithmetic or logical operations. However, we concluded that it would be cumbersome to use such a computer if every program had to deal with the full hardware specification, from the CPU’s instruction set to the peripheral drivers.

To prevent that, a special program, the operating system (OS), acts as an intermediary between users and the computer hardware.

1 — History of Operating Systems

1.1 — Early computers (1940s – 1950s)

The earliest computers were mainframes that lacked any form of operating system. Machines like the ENIAC, briefly mentioned in the last course, required manual operation, with programmers inputting machine code directly using switches and patch panels. To improve efficiency, the concept of batch processing emerged, where jobs (programs) were collected, grouped, and processed sequentially. Systems like the IBM 701 used punch cards and magnetic tape for job input and magnetic drums for secondary storage. Programs were generally debugged via a control panel using dials, toggle switches, and panel lights.

As computers became more powerful, the time needed to run programs decreased, making the time spent switching between users more significant. This led to automated logging for usage accounting and payment, replacing manual checks. Job queues also evolved from physical lines of people or stacks of punch cards to the machine managing its own job sequences. Programmers no longer had direct access to the physical machine; they were replaced at the console by dedicated machine operators who looked after the computer and its peripherals.

To prevent data tampering and operational errors, vendors enhanced runtime libraries, and automated monitoring was introduced for various resources. Security features were added to operating systems to track program access and prevent unauthorized file use. Eventually, the runtime libraries evolved into a unified program that started before the first customer job. This program could read the customer’s job, manage its execution, log its usage, reassign hardware resources once the job completed, and seamlessly proceed to process the next job. This led to a first kind of operating system, called monitors. Interestingly, these monitors were developed not by the computer vendors but by their clients, such as the GM-NAA I/O monitor created by General Motors and North American Aviation for the IBM 704.

1.2 — Mainframe era (1960s)

During the 1960s, the era of mainframe computers saw significant advancements in operating systems, laying the groundwork for many modern concepts in computing. This period marked a transition from single-user, single-task systems to multi-user, multitasking environments, driven by the needs of large organizations for more efficient and powerful computing solutions.

As computing technology progressed, the concept of time-sharing emerged, allowing multiple users to interact with a computer simultaneously. Time-sharing systems, such as the Compatible Time-Sharing System (CTSS) and Multics (Multiplexed Information and Computing Service), were developed to allow users to access the mainframe through terminals. These systems allocated a small time slice of the CPU to each user, creating the illusion that each user had their own dedicated machine. This innovation was crucial for academic and research institutions, where many users needed to access the computer for various tasks.

Example

Some early operating systems:

  • IBM OS/360: One of the most influential operating systems of the 1960s was IBM’s OS/360. Designed for the System/360 mainframe family, OS/360 introduced many features that became standard in later operating systems, such as a hierarchical file system, job control language (JCL), and support for multiple programming languages. It was a robust, versatile system that could handle a wide range of tasks from scientific calculations to business data processing.

  • MULTICS: The Multics project, initiated by MIT, Bell Labs, and General Electric, aimed to create a highly secure, reliable, and capable time-sharing system. While Multics was complex and had a significant influence on later operating systems, its most notable legacy is the inspiration it provided for the creation of Unix.

The growing complexity of mainframe operations and the increasing number of users necessitated advancements in security and resource management. Operating systems began incorporating features to prevent unauthorized access and misuse of resources. They implemented access control mechanisms, audit trails, and resource accounting to ensure that each user’s activities were properly monitored and restricted to their assigned privileges.

1.3 — Unix and early minicomputers (1970s)

The 1970s saw the rise of minicomputers, smaller and more affordable machines compared to the mainframes of the previous decade. These minicomputers, like the Digital Equipment Corporation (DEC) PDP series, became popular in businesses, universities, and laboratories due to their cost-effectiveness and versatility. The growing demand for computing power in smaller environments spurred significant advancements in operating systems.

One of the most influential developments of this era was the creation of the Unix operating system. Unix was initially developed in the late 1960s and early 1970s at AT&T’s Bell Labs by Ken Thompson and Dennis Ritchie. Its design principles and features left a lasting impact on the world of computing.

Key features of Unix:

  • Portability – Unix was written in the C programming language, which made it easier to port to different hardware platforms. This was a revolutionary shift from the assembly language used in most earlier operating systems.

  • Multiuser and Multitasking – Unix supported multiple users and multitasking, allowing several programs to run concurrently. This capability made Unix suitable for both personal and shared computing environments.

  • File system – Unix introduced a hierarchical file system with directories and subdirectories, providing a structured way to organize and access files.

  • Simple and powerful tools – Unix came with a set of small, simple, and powerful command-line utilities that could be combined to perform complex tasks. This philosophy of combining small tools became a hallmark of Unix and its derivatives.

  • Security and permissions – Unix implemented a robust security model with file permissions and user authentication, ensuring that users could only access resources they were authorized to use.

In addition to Unix, the 1970s saw the development of several other operating systems for minicomputers. These systems were tailored to the needs of smaller organizations and offered features similar to those found in mainframe operating systems but scaled down for less powerful hardware.

Example

Some examples of these OS are DEC’s RT-11 and RSX-11 for the PDP-11 family, TOPS-10 for the PDP-10, and VMS, released in 1977 for the VAX.

1.4 — Personal computers (1980s)

The 1980s marked a revolutionary period in the history of computing with the advent of personal computers (PCs). This era saw the transition of computing from large, centralized mainframes and minicomputers to affordable, smaller, and user-friendly machines that could fit on a desk, making computing accessible to individuals and small businesses.

Key milestones:

  • Apple II (1977) – Although released in the late 1970s, the Apple II became one of the most successful and influential early personal computers. It was designed by Steve Wozniak and marketed by Apple, making computing accessible to the masses with its user-friendly design and wide range of software.

  • IBM PC (1981) – The release of the IBM PC (Model 5150) in 1981 was a pivotal moment in the history of personal computing. The IBM PC set the standard for personal computers, and its open architecture allowed other manufacturers to produce compatible systems, leading to the proliferation of “IBM-compatible” PCs.

The rise of personal computers necessitated the development of operating systems that could manage hardware resources and provide a user-friendly interface for non-expert users. Unlike mainframes and minicomputers, personal computers were designed for individual use, so their operating systems needed to be simpler, more intuitive, and capable of running on less powerful hardware.

Example

Examples of operating systems and their features:

  • PC DOS/MS DOS (1981) – Microsoft Disk Operating System (MS-DOS) was one of the most important operating systems of the 1980s, especially in the IBM-compatible PC market. MS-DOS was a command-line interface (CLI) operating system that allowed users to interact with the computer by typing commands.

  • Apple Macintosh (1984) and the Graphical User Interface (GUI) – The Macintosh was the first commercially successful personal computer with a graphical user interface (GUI). The Mac’s GUI was revolutionary because it allowed users to interact with the computer using a mouse and visual icons, making computing far more accessible to the average person. The original Macintosh operating system, known as System Software, featured a desktop metaphor with windows, icons, menus, and a trash can for deleted files. This user-friendly interface made the Mac popular in creative industries such as graphic design and publishing.

In response to the growing popularity of GUIs, Microsoft released Windows 1.0 in 1985 as a graphical extension to MS-DOS. Although it was not an operating system in the full sense, Windows 1.0 provided a windowed environment that allowed users to run multiple MS-DOS applications simultaneously.

1.5 — Rise of modern operating systems (1990s)

The 1990s was a transformative decade for operating systems, marking the transition from the early days of personal computing to the more sophisticated, user-friendly, and powerful systems that we recognize today. This period saw significant developments in both consumer and enterprise-level operating systems, as well as the rise of the internet, which profoundly influenced the design and functionality of operating systems.

1.5.1 — Microsoft Windows – From 3.0 to 95 and beyond

Released in 1990 and 1992, respectively, Windows 3.0 and 3.1 were pivotal in establishing Microsoft Windows as the dominant desktop operating system. They were followed by Windows 95, which combined the GUI and ease of use of the previous Windows versions with the more advanced features of a modern operating system. It introduced the Start menu, taskbar, and a more integrated file management system, which became the foundation for all subsequent Windows versions.

1.5.2 — GNU/Linux – The open-source revolution

GNU (GNU’s Not Unix!) is a project started by Richard Stallman in 1983 with the goal of creating a free and open-source Unix-like operating system. The GNU project developed many essential software components, such as the GNU Compiler Collection (gcc), GNU C Library (glibc), and various utilities and tools that make up the operating system. However, the GNU project initially lacked a kernel, which is the core part of an operating system that manages hardware and system resources.

Information

The GNU project started the development of its own completely free kernel, called the Hurd, in 1990. However, due to the fast adoption of Linux as a kernel, its development never really caught up.

In 1991, Linus Torvalds, a Finnish computer science student, released the first version of the Linux kernel, which he had developed as a hobby project. The kernel is responsible for managing hardware, memory, processes, and system calls. While Linux is often referred to as an operating system, it is technically just the kernel, not a complete operating system on its own.

Combined with the Linux kernel, the GNU tools and utilities form a complete operating system, commonly referred to as “GNU/Linux”. This operating system is what most people mean when they refer to “Linux” in general conversation.

Information

When looking for alternative operating systems, you may encounter another Unix-like family called the Berkeley Software Distribution (BSD), including FreeBSD, OpenBSD, NetBSD, and DragonFly BSD. These OS are mainly used for specific tasks such as security appliances, web servers, and file servers. As a side note, large parts of the FreeBSD code base were incorporated into Apple’s MacOS operating system.

1.5.3 — MacOS – From System 7 to Mac OS X

In the early 1990s, Apple’s operating system was still known as System 7. It continued to build on the graphical interface pioneered by earlier versions, with features like multitasking, virtual memory, and a more refined user experience. However, by the mid-1990s, Apple faced increasing competition from Microsoft and struggled to innovate at the same pace.

The late 1990s were a period of significant transition for Apple. After acquiring NeXT in 1997, Apple began developing a new operating system based on NeXTSTEP, the operating system created by Steve Jobs’ company NeXT. This led to the development of Mac OS X, which was released in 2001. Mac OS X was a radical departure from previous versions, featuring a Unix-based core, a modern graphical interface called Aqua, and advanced features like preemptive multitasking, protected memory, and a more robust architecture.

1.5.4 — The rise of the internet

The 1990s also saw the rapid growth of the internet, which influenced the development of operating systems in significant ways. Operating systems increasingly included built-in support for networking (TCP/IP protocols for example), internet browsing, and email. Microsoft included Internet Explorer with Windows 95, and Apple introduced Cyberdog as part of its OpenDoc framework. Networked file systems, remote access tools, and security features became standard as the internet became an integral part of daily computing.

1.6 — Modern era (2000s – Present)

The modern era of operating systems, spanning from the 2000s to the present, has been characterized by significant advancements in computing technology, changes in user expectations, and the emergence of new paradigms in software development and deployment.

1.6.1 — Rise of Open Source and Linux Dominance

By the 2000s, Linux had firmly established itself as a dominant force in the server market, powering everything from web servers to supercomputers. Its open-source nature allowed for rapid development and adaptation to various needs, leading to its widespread adoption in enterprises, academia, and even on personal computers.

Examples of Modern Linux distributions
  • Debian is a widely used, free and open-source operating system based on the Linux kernel. It is one of the oldest and most influential Linux distributions, known for its stability, extensive software repository, and commitment to open-source principles.

  • Ubuntu is a popular Linux-based operating system derived from Debian. It is designed to be user-friendly, accessible, and suitable for a wide range of devices, including desktops, servers, and IoT devices. Ubuntu is developed and maintained by Canonical Ltd., which provides commercial support and services for the OS.

  • Arch Linux is a lightweight and flexible Linux distribution designed for advanced users who value simplicity, transparency, and control over their operating system. Unlike many other Linux distributions, Arch Linux follows a “rolling release” model, which means that it is continuously updated, and users receive the latest software versions without needing to upgrade to a new version of the OS.

  • Fedora Linux is a community-driven Linux distribution developed by the Fedora Project and sponsored by Red Hat, a major enterprise open-source software company. It is often considered a “testing ground” for new features that might eventually be included in Red Hat Enterprise Linux (RHEL).

  • And many more! Due to the open-source nature of Linux, you could even build your own distribution with a guide such as Linux From Scratch, which gives you all the instructions needed to build a Linux system from source, or use a distribution such as Gentoo Linux, which is highly customizable and performance-oriented, aimed at users who want complete control over their system.

1.7 — Advances in Windows Operating Systems

Windows XP, released in 2001, marked a turning point for Microsoft. It combined stability and usability, becoming one of the most widely used operating systems in history. Successive versions, including Windows Vista, 7, 8, 10, and now 11, continued to refine the Windows experience, with improvements in security, user interface design, and integration with cloud services.

1.8 — Mobile Operating Systems

The 2000s and 2010s saw the explosive growth of mobile operating systems. Android, based on Linux, and Apple’s iOS, derived from MacOS, became the dominant platforms. These operating systems not only transformed mobile phones into powerful computing devices but also led to the creation of a vast ecosystem of apps and services.

1.9 — Cloud Computing and Virtualization

The 2000s saw the rise of virtualization technology, allowing multiple operating systems to run concurrently on a single physical machine. VMware, Xen, and later, KVM became key players, revolutionizing data centers by improving resource utilization and enabling cloud computing.

The emergence of cloud computing led to the development of operating system images designed specifically for cloud environments. Pre-built OS images such as Amazon’s AWS Machine Images (AMIs), together with platforms like Google Cloud and Microsoft Azure, became integral to modern IT infrastructure.

1.10 — The Internet of Things (IoT)

The proliferation of connected devices led to the development of specialized operating systems for the Internet of Things (IoT). These OSes were designed to run on a wide range of devices, from smart home appliances to industrial sensors.

For applications where timing and reliability are critical, real-time operating systems became more prevalent. These systems are used in embedded devices, medical equipment, automotive systems, and other areas where precise control is essential.

2 — Operating Systems components and features

Many of the features of modern operating systems were presented in the previous history section. However, as operating systems become more and more diverse, it becomes increasingly difficult to give a definitive definition. Nevertheless, there are some key elements that define them.

2.1 — Resource Management/Kernel

The OS ensures the management of the hardware components and resources, especially those of the von Neumann architecture: the Central Processing Unit (CPU), the memory, and the inputs/outputs, to which file storage management is added. More specifically, it is responsible for allocating these resources to the other programs that run on the computer.

These tasks are usually taken care of by the core component of the system, called the Kernel, which is one of the first programs loaded on startup (after the bootloader). The code of the Kernel is usually loaded in a separate area of memory that other applications cannot access, called the Kernel space, in order to protect the computer’s memory and hardware from malicious or buggy software. The area of memory where all the other programs live is called the User space.

Its key functions are given below:

  • Process Management – The kernel handles the creation, execution, and termination of processes. It schedules CPU time among processes, ensuring efficient multitasking.

  • Memory Management – The kernel manages the system’s memory, allocating space to processes and ensuring that they do not overwrite each other’s data. It also handles virtual memory, which allows the system to use more memory than is physically available.

  • Device Management – The kernel communicates with hardware devices through device drivers, which are specialized software components that translate generic commands into hardware-specific instructions.

  • File System Management – The kernel manages files and directories on storage devices, handling file creation, deletion, reading, and writing. It also enforces access permissions to ensure data security.

  • Inter-Process Communication (IPC) – The kernel facilitates communication between processes, allowing them to share data and synchronize their actions.

  • Security and Access Control – The kernel enforces security policies, such as user authentication and permission management, to protect the system from unauthorized access and misuse.
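
To make these responsibilities more concrete, here is a minimal Python sketch (assuming a Unix-like system; the file path is purely illustrative) showing that an ordinary program never touches the hardware directly: every operation below is a request made to the kernel through a system call.

```python
import os

# Process management: ask the kernel for our process ID and create a
# child process; fork() is a direct system call on Unix-like systems.
parent_pid = os.getpid()
child = os.fork()

if child == 0:
    # The child runs in its own address space set up by the kernel.
    print(f"child {os.getpid()} created by parent {parent_pid}")
    os._exit(0)
else:
    # Scheduling and termination are handled by the kernel; the parent
    # simply waits for the child to finish.
    os.waitpid(child, 0)

# File system management: these calls are thin wrappers around the
# open/write/close system calls; access permissions are enforced by the
# kernel, not by Python. The path is only an example.
fd = os.open("/tmp/kernel_demo.txt", os.O_CREAT | os.O_WRONLY, 0o600)
os.write(fd, b"hello from user space\n")
os.close(fd)
```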

Kernels can be of different types:

  • Monolithic Kernel – In this design, all the basic functions of the OS (such as device management, file system management, and memory management) are integrated into a single, large kernel. While this can offer performance advantages, it may be more prone to bugs and less modular. Examples include the Linux and Unix kernels.

  • Microkernel – This type of kernel minimizes the functions performed in kernel mode, often limiting them to basic process and memory management. Other services, like device drivers and file systems, run in user space. Microkernels are more modular and can be more stable, but they may involve more overhead due to the increased context switching between user space and kernel space. An example is the MINIX kernel.

  • Hybrid Kernel – A hybrid kernel is a compromise between monolithic and microkernels. It runs some services in kernel space for performance reasons, while others are run in user space for modularity and stability. Examples include the Windows NT kernel (The actual Kernel of Windows 11) and the MacOS XNU kernel.

2.2 — User Interface (UI)/SHELLS

The user interface, sometimes referred to as the shell, is what allows the user to interact with the operating system. It acts as an intermediary between the user and the system’s kernel, enabling the execution of commands, scripts, and programs.

This shell can be of two different types:

  • Command-line shells are text-based interfaces that allow users to interact with the operating system by typing commands. Unlike graphical interfaces, which rely on visual elements like windows, buttons, and icons, command-line shells require users to type textual commands to perform tasks such as file management, process control, and system configuration.

    Command-line shells allow users to write scripts, which are sequences of commands stored in a file; these scripts can automate repetitive tasks, simplify complex processes, and perform batch processing. Shells also manage a set of environment variables that define the context in which commands run, including information like the current working directory, user preferences, and system configuration details. Finally, command-line shells provide robust job control features, allowing users to manage multiple processes (jobs) simultaneously: jobs can be started, stopped, paused, or resumed, and run in the foreground or background. A toy sketch of such a command loop is given after this list.

    Examples of command-line shells
    • Bourne shell (sh) is one of the earliest and most influential Unix command-line interpreters, developed by Stephen Bourne at Bell Labs in the late 1970s. It was the default shell for Unix systems before being replaced by more advanced shells in later Unix versions.

    • Bash (the Bourne-Again shell) is the default shell on many Unix-like systems, including most Linux distributions. It is known for its compatibility with older Bourne shell scripts, along with numerous enhancements like improved scripting capabilities, command history, and brace expansion.

    • Zsh is an extended version of Bash with more features, such as advanced autocompletion, globbing (pattern matching for file names), and plugin support. Zsh is highly customizable and is the default shell in MacOS as of version 10.15 (Catalina).

    • COMMAND.COM/cmd.exe – COMMAND.COM was the command-line interpreter for DOS (Disk Operating System) and the early versions of Windows. It has since been replaced by cmd.exe, the command-line interpreter for Windows NT-based operating systems (including Windows 2000, XP, Vista, 7, 8, 10, and 11). The language used by cmd.exe is called Batch script (or simply Batch).

    • PowerShell, developed by Microsoft, is used primarily on Windows but is also available on Linux and MacOS. Unlike traditional Unix-like shells, PowerShell is object-oriented, meaning it passes objects rather than plain text between commands. The language used by PowerShell is called the PowerShell Scripting Language (or simply PowerShell).

  • Graphical shells are a type of user interface that gives users access to the functions of the operating system without having to type commands in a CLI. Most graphical shells provide users with windows to display information, buttons and menu bars, as well as icons and a mouse pointer. Most desktop-oriented OS, such as Windows, MacOS, and many Linux distributions, use this kind of graphical shell. They are usually built on the concept of an “electronic desktop,” where data files are depicted as if they were paper documents on a physical desk, and application programs have visual icons rather than being started through command-line instructions.

    These graphical shells are often built on top of what is called a windowing system, such as the X Window System for Linux. We will not elaborate on these in this lecture, but feel free to look them up.

    Examples of graphical shells
    • GNOME Shell is the graphical shell of the GNOME desktop environment for Linux. This widely used, user-friendly environment focuses on simplicity and ease of use, and it is the default environment for many Linux distributions like Ubuntu and Fedora.

    • KDE Plasma is a graphical shell for Linux known for its customizability and rich features. It offers a more traditional desktop experience with a wide range of options for users to tailor their environment.

    • Windows shell is the graphical user interface provided by Microsoft Windows. It encompasses various elements that users interact with to operate their computers, such as the desktop, taskbar, Start menu, file explorer, and system tray.

    • Aqua is the graphical user interface and visual theme of MacOS. It integrates seamlessly with MacOS’s underlying Unix-based architecture, offering a cohesive and visually appealing user experience. The Dock, Finder, and Mission Control are key components of the Aqua interface, providing users with easy access to applications, files, and system management tools.
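
As announced above, here is a toy Python sketch of what a command-line shell fundamentally does (an illustration only, not how Bash, Zsh, or cmd.exe are actually implemented): read a command, handle a built-in or two, and ask the operating system to run everything else as a child process that inherits the shell’s environment.

```python
import os
import shlex
import subprocess

# A toy command-line "shell": real shells add pipes, redirection,
# job control and scripting on top of this same basic loop.
while True:
    try:
        line = input(f"{os.getcwd()}$ ")
    except EOFError:
        break
    line = line.strip()
    if not line:
        continue
    if line == "exit":
        break
    args = shlex.split(line)
    if args[0] == "cd":
        # 'cd' must be a shell built-in: it changes the shell's own working
        # directory, something a child process could never do for it.
        os.chdir(args[1] if len(args) > 1 else os.path.expanduser("~"))
        continue
    try:
        # Everything else runs as a child process that inherits the
        # shell's environment variables.
        subprocess.run(args, env=os.environ)
    except FileNotFoundError:
        print(f"{args[0]}: command not found")
```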

3 — Other components and functionalities

We have presented the main components of the Operating System, from the kernel to the UI (whether it be a CLI or a GUI). However, on modern operating systems there are many more components that are needed to properly use the computer. Here we intend to summarize them and other important functionalities in order to give you a brief overview of the inner workings of your own machine.

3.1 — Init

We already mentioned that, upon startup, the kernel is one of the first programs to run on the computer. However, there are a few more steps between its startup and a fully working environment. In most systems, these steps are the responsibility of a specific program that is in charge of launching the other ones. Under Linux, this program is usually called init, in reference to the original Unix init program that was in charge of launching the various shells. This program, however, can differ from one OS to another. Under Linux, two init programs are commonly used:

  • The historical SysV init.

  • The systemd program, which has mostly replaced SysV init in modern distributions.

launchd assumes this role on MacOS, and the Service Control Manager (SCM) can be seen as its equivalent under Windows.
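
As a quick check on your own machine, the short Python sketch below (Linux-specific, assuming the /proc pseudo-filesystem is available) prints which program is running as process 1, which tells you whether your distribution uses systemd, SysV init, or something else.

```python
from pathlib import Path

# On Linux, the init program always runs as process ID 1.
# /proc/1/comm contains the name of the executable behind PID 1,
# e.g. "systemd" on most modern distributions.
print("PID 1 is:", Path("/proc/1/comm").read_text().strip())
```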

3.2 — Daemons and services

If you take a look at all the processes that are running when you use your OS, you will most certainly find that init and the shell are not the only programs running. Daemons and services are both background processes in operating systems that perform tasks without direct user interaction. They are crucial for managing various system functions, handling requests, and supporting the overall operation of the OS. While the terms are sometimes used interchangeably, they have specific connotations depending on the context or the operating system.

  • Daemons – In Unix-like operating systems (e.g., Linux, MacOS), a daemon is a background process that runs continuously and performs specific functions. Daemons are often started during the boot process and continue running in the background, waiting for requests to come in or performing periodic tasks. Daemon names often end with the letter “d”; examples include daemons that manage network connections (ftpd, sshd), log system events (syslogd), or schedule tasks (cron). A minimal sketch of how a daemon detaches itself is given after this list.

  • Services – The term service is more commonly used in the context of Windows operating systems, though it can apply to any OS. A service is essentially a program or process that runs in the background, similar to a daemon, and provides specific functionality or support to other programs and system components. Services on Windows can be managed using the Services control panel or via command-line tools like sc (Service Control). There is no specific naming convention; nevertheless, you can usually find the term service, agent, or manager in their names.
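
To illustrate how a daemon detaches itself from the session it was launched from, here is a classic double-fork sketch in Python (Unix-like systems only; the log path is just an example, and stdio redirection is omitted for brevity). It is a teaching sketch, not a production-ready service.

```python
import os
import sys
import time

def daemonize():
    # First fork: the parent exits, so the shell gets its prompt back
    # and the child is re-parented to init/systemd.
    if os.fork() > 0:
        sys.exit(0)
    os.setsid()              # start a new session, detach from the terminal
    # Second fork: make sure the process can never re-acquire a
    # controlling terminal.
    if os.fork() > 0:
        sys.exit(0)
    os.chdir("/")            # do not keep any directory busy
    os.umask(0)

if __name__ == "__main__":
    daemonize()
    # The "service" itself: append a heartbeat line every 10 seconds.
    while True:
        with open("/tmp/toy_daemon.log", "a") as log:
            log.write(f"alive at {time.ctime()}\n")
        time.sleep(10)
```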

3.3 — Users

In an operating system (OS), users are entities that interact with the system, typically representing individuals or processes that access and utilize the OS’s resources and services. Users are categorized in different ways depending on their roles, permissions, and how they interact with the OS.

  • Human Users:

    • Regular Users are individual accounts created for everyday tasks. They have limited privileges and can only access their files, programs, and data.
    • Superuser (Administrator or Root) has elevated privileges that allow them to perform administrative tasks such as installing software, changing system settings, and managing other user accounts. In Unix-like systems (e.g., Linux, MacOS), the superuser is called root. In Windows, it’s often referred to as an Administrator.
  • Service Accounts:

    • System Processes are special-purpose accounts used by the OS to run services, daemons, and background processes. They typically do not have login capabilities and are used internally by the OS to manage system-level tasks.
    • Guest Users are accounts with very limited permissions, usually used for temporary access. These accounts cannot make system-wide changes or access other users’ data.

Users have different permission levels, which help maintain the security of the system. The OS ensures that users can only access the files and resources they are permitted to, preventing unauthorized actions. The OS tracks which user is using which resources (CPU, memory, files) and manages them accordingly to ensure efficient operation. By associating actions with specific users, the OS can log activities for auditing and accountability, which is important for both security and system administration.
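
The small Python sketch below (Unix-like systems; /etc/passwd is used only because it exists on virtually every Unix system) shows how this per-user model appears on the file system: every file has an owner and permission bits, and the kernel checks them on every access.

```python
import os
import pwd
import stat

path = "/etc/passwd"   # world-readable but owned by root on Unix systems

info = os.stat(path)
owner = pwd.getpwuid(info.st_uid).pw_name     # map the numeric UID to a user name
mode = stat.filemode(info.st_mode)            # e.g. "-rw-r--r--"
me = pwd.getpwuid(os.getuid()).pw_name

print(f"{path} is owned by {owner} with permissions {mode}")
print(f"{me} can read it:  {os.access(path, os.R_OK)}")
print(f"{me} can write it: {os.access(path, os.W_OK)}")
```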

To go further

Important

The content of this section is optional. It contains additional material for you to consolidate your understanding of the current topic.

4 — Complete boot sequence

In this lecture, we have presented the main components of the operating system. However, in order to start, the computer generally cannot load the system kernel directly. The boot sequence of a computer is the process that occurs when a computer is powered on or restarted, initializing the hardware and loading the operating system. Here’s a detailed overview of the steps involved in the boot sequence:

  1. Power On – When the computer is powered on, the power supply unit (PSU) sends power to the motherboard and other components. The CPU receives power and starts executing instructions.

  2. POST (Power-On Self-Test) – The BIOS/UEFI firmware, stored on a chip on the motherboard, runs a POST to check the basic hardware components (CPU, RAM, keyboard, storage devices, etc.) to ensure they are functioning properly. If an error is detected, the POST will usually signal this with a series of beeps or error codes.

  3. Load BIOS/UEFI – After POST, the BIOS/UEFI initializes the hardware, identifies connected devices (such as hard drives, SSDs, USB drives), and prepares the system to load the operating system.

  4. Boot Device Selection – The BIOS/UEFI looks for a bootable device based on the boot order configured in the settings (usually the internal hard drive or SSD first). It then searches the selected device for a Master Boot Record (MBR) or GUID Partition Table (GPT) that contains the bootloader.

  5. Bootloader Execution – The bootloader (e.g., GRUB for Linux, Windows Boot Manager for Windows) is loaded into memory from the boot device. The bootloader’s role is to load the operating system kernel into memory. It may provide a menu to select different operating systems or recovery options if multiple are installed.

  6. Kernel Loading – The bootloader loads the operating system kernel into RAM. The kernel is the core of the operating system, responsible for managing hardware, memory, processes, and system calls.

  7. Kernel Initialization – The kernel initializes the rest of the operating system, including device drivers, memory management, and system processes. It also mounts the root filesystem.

  8. Starting System Processes – The operating system starts the initial processes, such as init or systemd in Linux, or the Session Manager Subsystem (SMSS) in Windows. These processes initialize other services and user interfaces.

  9. User Authentication – Finally, the system presents a login screen or prompt for user authentication. After the user logs in, the operating system loads the user environment (desktop, applications, etc.).

  10. User Environment Loaded – Once the user environment is loaded, the computer is fully booted and ready for use.

This entire sequence typically occurs within seconds to a minute, depending on the hardware and operating system.
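
If you want to look for traces of this sequence on a running Linux machine, the Python sketch below (Linux-specific paths, given as an assumption) reports whether the firmware booted in UEFI or legacy BIOS mode, the command line the bootloader passed to the kernel, and the kernel that was eventually loaded.

```python
import os
from pathlib import Path

# Steps 2-4: /sys/firmware/efi exists only when the firmware booted in UEFI mode.
firmware = "UEFI" if Path("/sys/firmware/efi").exists() else "legacy BIOS"

# Steps 5-6: the bootloader passes a command line to the kernel, kept in /proc/cmdline.
cmdline = Path("/proc/cmdline").read_text().strip()

# Steps 6-7: the loaded kernel identifies itself via uname.
kernel = os.uname()

print(f"firmware mode : {firmware}")
print(f"kernel        : {kernel.sysname} {kernel.release}")
print(f"boot cmdline  : {cmdline}")
```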

To go beyond

Important

The content of this section is very optional. We suggest directions to explore if you wish to go deeper into the current topic.

  • Protection ring.

    Protection rings are a security architecture used by operating systems to manage access to system resources and isolate different levels of privilege.

  • Authentication.

    Debian authentication and access control page