Operating System Functions
At the simplest level, an operating system does two things:
- It manages the hardware and software resources of the system. In a desktop computer, these resources include the processor, memory, disk space and more. On a cell phone, they include the keypad, the screen, the address book, the phone dialer, the battery and the network connection.
- It provides a stable, consistent way for applications to deal with the hardware without having to know all the details of the hardware.
The first task, managing the hardware and software resources, is very important, as various programs and input methods compete for the attention of the central processing unit (CPU) and demand memory, storage and input/output (I/O) bandwidth for their own purposes. In this capacity, the operating system plays the role of the good parent, making sure that each application gets the necessary resources while playing nicely with all the other applications, as well as husbanding the limited capacity of the system to the greatest good of all the users and applications.
The second task, providing a consistent application interface, is especially important if there is to be more than one of a particular type of computer using the operating system, or if the hardware making up the computer is ever open to change. A consistent application program interface (API) allows a software developer to write an application on one computer and have a high level of confidence that it will run on another computer of the same type, even if the amount of memory or the quantity of storage is different on the two machines.
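This kind of API-level portability can be sketched in a few lines. The example below is illustrative, not a description of any particular operating system's interface: the program never names a device driver or disk controller, yet it runs unchanged on any machine with a Python interpreter, because the OS mediates every hardware detail behind the standard-library calls.

```python
import os
import tempfile

# Create a file through the portable interface the OS exposes.
fd, path = tempfile.mkstemp()
try:
    with os.fdopen(fd, "w") as f:
        f.write("hello, hardware-independent world")
    with open(path) as f:          # read it back the same portable way
        data = f.read()
finally:
    os.remove(path)                # clean up via the same abstraction
```

Whether the file lands on a spinning disk, an SSD, or flash storage is invisible to the program; that is exactly the consistency the API provides.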
Even if a particular computer is unique, an operating system can ensure that applications continue to run when hardware upgrades and updates occur. This is because the operating system -- not the application -- is charged with managing the hardware and the distribution of its resources. One of the challenges facing developers is keeping their operating systems flexible enough to run hardware from the thousands of vendors manufacturing computer equipment. Today's systems can accommodate thousands of different printers, disk drives and special peripherals in any possible combination.
Memory hierarchy:
[Diagram of the computer memory hierarchy]
The term memory hierarchy is used in computer architecture when discussing performance issues in computer architectural design, algorithm predictions, and lower-level programming constructs involving locality of reference. A memory hierarchy in computer storage distinguishes each level in the hierarchy by response time. Since response time, complexity, and capacity are related,[1] the levels may also be distinguished by their controlling technology.
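Locality of reference can be made concrete with a short sketch. This is a pure-Python illustration with made-up sizes; in C the effect is much stronger, because C arrays are contiguous in memory.

```python
import time

N = 500
matrix = [[1] * N for _ in range(N)]

def sum_rows(m):
    # Sequential access: consecutive elements of each inner list,
    # so successive reads exhibit good spatial locality.
    return sum(x for row in m for x in row)

def sum_cols(m):
    # Strided access: one element from each row per step, jumping
    # across the structure instead of walking it in order.
    return sum(m[i][j] for j in range(N) for i in range(N))

t0 = time.perf_counter(); a = sum_rows(matrix); t1 = time.perf_counter()
b = sum_cols(matrix); t2 = time.perf_counter()
assert a == b == N * N   # both traversals compute the same total
```

On most machines the row-wise pass finishes faster, because the caches near the top of the hierarchy reward programs that touch nearby addresses in sequence.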
The many trade-offs in designing for high performance will include the structure of the memory hierarchy, i.e. the size and technology of each component. So the various components can be viewed as forming a hierarchy of memories (m₁, m₂, ..., mₙ) in which each member mᵢ is in a sense subordinate to the next highest member mᵢ₋₁ of the hierarchy. To limit waiting by higher levels, a lower level will respond by filling a buffer and then signaling to activate the transfer.
There are four major storage levels.[1]
- Internal – Processor registers and cache.
- Main – the system RAM and controller cards.
- On-line mass storage – Secondary storage.
- Off-line bulk storage – Tertiary and Off-line storage.
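The interaction between a fast level and the slower level beneath it can be sketched as a toy two-level memory. All names, sizes, and the least-recently-used fill policy below are illustrative assumptions, not a description of real hardware.

```python
from collections import OrderedDict

class TwoLevelMemory:
    def __init__(self, cache_size):
        self.cache = OrderedDict()   # fast level, limited capacity
        self.main = {}               # slow level, effectively unbounded
        self.cache_size = cache_size
        self.hits = self.misses = 0

    def write(self, addr, value):
        self.main[addr] = value

    def read(self, addr):
        if addr in self.cache:       # hit: served by the fast level
            self.hits += 1
            self.cache.move_to_end(addr)
            return self.cache[addr]
        self.misses += 1             # miss: fetch from the slow level
        value = self.main[addr]
        self.cache[addr] = value     # fill the fast level, evicting
        if len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)   # the least recently used
        return value

mem = TwoLevelMemory(cache_size=4)
for addr in range(8):
    mem.write(addr, addr * 10)
for addr in [0, 1, 0, 1, 2, 0]:     # a reuse-heavy access pattern
    mem.read(addr)
print(mem.hits, mem.misses)          # prints "3 3"
```

Because the access pattern reuses a few addresses, half the reads are satisfied by the small fast level, which is the whole point of layering the hierarchy this way.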
This is a very general structuring of the memory hierarchy. Many other structures are useful. For example, a paging algorithm may be considered as a level for virtual memory when designing a computer architecture.
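The paging remark above can be sketched as a page-replacement policy acting as one level of a virtual-memory hierarchy. The first-in-first-out (FIFO) policy and the three-frame memory below are chosen for illustration, and the reference string is made up; real systems use more sophisticated policies.

```python
from collections import deque

def fifo_page_faults(references, frames):
    """Count page faults under a FIFO replacement policy."""
    resident = deque()               # pages currently in physical frames
    faults = 0
    for page in references:
        if page not in resident:
            faults += 1
            if len(resident) == frames:
                resident.popleft()   # evict the oldest resident page
            resident.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_page_faults(refs, frames=3))   # prints 9
```

Each fault models a trip to the slower level below (e.g. the disk), so a paging policy really does behave like one more rung of the memory hierarchy.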