In the context you’ve described, these are all Operating System terms – “process” in this context has nothing to do with human activities in areas such as product development, and it also has nothing to do with industrial process control and management.
May I suggest standard references on operating system theory – good places to start searching for these works are the reference lists associated with the Wikipedia articles on “Operating system”, “Kernel” and “Process”.
Besides the Wikipedia entries suggested by others, consider that a “process” is the unit the OS manages (each process contains one or more threads), and it is managed at multiple levels – including but not limited to the CPU firmware and the OS itself (perhaps consider looking at the OSI 7-layer model for the layers above the processor). Nowadays, I’d consider systemd and cgroups the main ways openSUSE manages processes and process trees at the OS level. Note that there is also a broad category of “unmanaged” processes which are simply instantiated, run, and then terminated when no longer needed – these can be considered not specially managed by anything.
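As a minimal illustration of that lifecycle (instantiate, run, terminate, reap), here is a Python sketch – the child command is just a stand-in for any program the OS would place in the parent’s process tree:

```python
import os
import subprocess
import sys

# Spawn a child process: the OS assigns it a PID and hangs it off the
# parent in the process tree (visible with `ps --forest` or `systemd-cgls`).
child = subprocess.Popen(
    [sys.executable, "-c", "import os; print(os.getpid())"],
    stdout=subprocess.PIPE, text=True,
)

child_pid_reported = int(child.stdout.read())
child.wait()  # reap the child so it does not linger as a zombie

print(f"parent PID: {os.getpid()}")
print(f"child  PID: {child.pid} (child itself reported {child_pid_reported})")
print(f"child exit status: {child.returncode}")
```

The PID the parent sees and the PID the child reports about itself agree, because the kernel, not either program, owns that identity.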
Similar to processes, memory is managed at multiple levels, although for practical purposes most users will find it more useful to understand how memory is managed relative to, and as close to, the application layer as possible, because at that level memory and other resources need to support the specific things the user is trying to do. I recommend starting with an understanding of how the OS maps memory into different regions. An important foundational concept is the distinction between physical memory and virtual memory: although a machine might have very limited physical RAM, the virtual address space available to applications is enormous, and generally speaking, even when multiple applications run simultaneously, each application behaves as though it were the <only> application running on the machine.
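A small Python sketch of the demand-paging idea described above – the 256 MiB figure is arbitrary, chosen only to be much larger than the memory the snippet actually uses:

```python
import mmap

# Reserve 256 MiB of anonymous virtual memory. On a demand-paged OS this
# call returns almost immediately and consumes essentially no physical RAM:
# the kernel hands out virtual addresses now, and physical page frames only
# when pages are actually touched.
SIZE = 256 * 1024 * 1024
region = mmap.mmap(-1, SIZE)

region[0] = 0xFF          # touching one byte commits just one page (~4 KiB)
first_byte = region[0]
length = len(region)

print(f"reserved {length // (1024 * 1024)} MiB of virtual address space")
print(f"first byte after write: {first_byte:#x}")
region.close()
```

This is why dozens of applications can each “see” a huge private address space on a machine with a few GiB of RAM.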
Secondary Disk Scheduling Management
I don’t know if you mean something different than just Disk I/O Scheduling; there are not only Wikipedia entries but also documentation embedded in your OS, as well as numerous system optimization guides across the Internet, which delve into disk I/O scheduling. Simply put, although the disk firmware does plenty of optimization, the OS can modify the method used for queuing disk reads/writes to lessen pressure on the disk I/O queue.
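As a hedged sketch (Linux-specific, and assuming sysfs is mounted at /sys – on other systems it simply finds nothing), this Python snippet lists each block device’s available I/O schedulers; the kernel marks the active one in brackets:

```python
from pathlib import Path

def block_device_schedulers():
    """Return {device: scheduler line} for every block device that
    exposes an I/O scheduler knob under sysfs (Linux only)."""
    schedulers = {}
    for sched_file in Path("/sys/block").glob("*/queue/scheduler"):
        try:
            # Typical content: "mq-deadline kyber [bfq] none", where the
            # bracketed entry is the scheduler currently in effect.
            schedulers[sched_file.parts[3]] = sched_file.read_text().strip()
        except OSError:
            pass  # some virtual devices refuse reads; skip them
    return schedulers

for device, line in block_device_schedulers().items():
    print(f"{device}: {line}")
```

Switching schedulers is a matter of writing one of the listed names back into the same file as root, e.g. `echo bfq > /sys/block/sda/queue/scheduler`.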
There’s plenty written about this everywhere as well, and as with the above, deadlocks can occur at different levels. Deadlocks can happen for a number of reasons, but generally arise when more than one process attempts to access a resource (a file, a memory address, etc.), either violating access rules or doing so while that resource is changing. The result is an unhandled contention: the entities attempting to access the resource have no rule to guide their next step and therefore become “deadlocked.” Ordinary applications often deadlock during their early development, and applications like database management systems, which place a high premium on data integrity, contain plenty of managed code to address anything like a deadlock that might result in data corruption.
Alternatively, buy this reference work: “Operating Systems: Design and Implementation” (https://en.wikipedia.org/wiki/Operating_Systems:_Design_and_Implementation)
Author: Andrew S. Tanenbaum
Publisher: Prentice Hall
If you’re getting into OS design, you’ll need the thing on your bookshelf anyway.
Alternatively: begin reading a fully commented copy of the Linux kernel’s source code.
It could be an excellent primer, but be aware that Linux (and other operating systems) has undergone a revolution in how it works since 2006 (possibly that book’s last update), after very little change for most of the previous decades.
In the context of what ‘zosh’ has for an assignment, I beg to differ.
What she or he has to explain are mostly scheduler functions – and those, AFAICS, haven’t really changed so much over the years.
Virtual memory management will remain one of the more difficult concepts that need to be understood.
The concept of process multi-threading and the associated locking issues will also remain difficult to grasp.
At a deeper level, the effects of CPU pipelining will also remain difficult to analyse and understand.
Yes, what has changed is the way “User Space” and “System Space” are being handled by the Virtual Memory routines and, the way system routines are using “User Space” instead of “System Space”.
Yes, the CPU instruction sets have changed considerably since 2006, and 64-bit address and instruction spaces have replaced the 32-bit spaces which were prevalent in 2006.
The classic Linux scheduler allocated time slices on the order of 100 ms per user process – in other words, when a user process began to execute, it was allocated roughly 100 ms of execution time before a decision was made either to allow another process to begin execution or to allow the given process to continue. The CFS scheduler used since kernel 2.6.23 no longer uses fixed time slices – it divides CPU time fairly among runnable processes – but the basic preempt-and-dispatch cycle is the same.
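The fixed-quantum model described above can be sketched as a toy round-robin scheduler – process names and CPU-burst times here are made up purely for illustration:

```python
from collections import deque

def round_robin(processes, quantum=100):
    """Toy round-robin scheduler: each process runs for at most `quantum`
    ms before the next runnable process is dispatched.
    `processes` maps process name -> total CPU time it needs (ms)."""
    queue = deque(processes.items())
    timeline = []                        # (clock at dispatch, process name)
    clock = 0
    while queue:
        name, remaining = queue.popleft()
        timeline.append((clock, name))
        clock += min(quantum, remaining) # run until done or preempted
        if remaining > quantum:          # preempted: back of the queue
            queue.append((name, remaining - quantum))
    return timeline

schedule = round_robin({"editor": 250, "compiler": 120, "daemon": 80})
for start, name in schedule:
    print(f"t={start:4d} ms  run {name}")
```

Each process that needs more than one quantum is preempted and requeued, which is exactly the “decide every slice whether to switch” behaviour the post describes.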
And, despite the “last issue date” of Tanenbaum’s book being 2006, what is taught there are the basics one needs to understand how operating systems work.