Obviously memory accesses and memory management are a very important part of modern computer operation. Every instruction has to be fetched from memory before it can be executed, and most instructions involve retrieving data from memory, storing data in memory, or both.

The advent of multi-tasking OSes compounds the complexity of memory management, because as processes are swapped in and out of the CPU, so must their code and data be swapped in and out of memory, all at high speed and without interfering with any other processes. Shared memory, virtual memory, the classification of memory as read-only versus read-write, and concepts like copy-on-write forking all further complicate the issue.

It should be noted that from the memory chips' point of view, all memory accesses are equivalent. The memory hardware doesn't know what a particular part of memory is being used for, nor does it care. This is almost true of the OS as well, although not entirely.

The CPU can only access its registers and main memory. It cannot, for example, make direct access to the hard drive, so any data stored there must first be transferred into the main memory chips before the CPU can work with it. (Device drivers communicate with their hardware via interrupts and "memory" accesses, sending short instructions, for example, to transfer data from the hard drive to a specified location in main memory. The disk controller monitors the bus for such instructions, transfers the data, and then notifies the CPU that the data is there with another interrupt, but the CPU never gets direct access to the disk.)

Memory accesses to registers are very fast, generally one clock tick, and a CPU may be able to execute more than one machine instruction per clock tick. Memory accesses to main memory are comparatively slow, and may take a number of clock ticks to complete. This would require intolerable waiting by the CPU if it were not for an intermediary fast memory cache built into most modern CPUs. The basic idea of the cache is to transfer chunks of memory at a time from main memory to the cache, and then to access individual memory locations one at a time from the cache.

User processes must be restricted so that they only access memory locations that "belong" to that particular process. This is usually implemented using a base register and a limit register for each process, as shown in Figures 8.1 and 8.2 below.

Figure 8.1 - A base and a limit register define a logical address space

Figure 8.2 - Hardware address protection with base and limit registers

Every memory access made by a user process is checked against these two registers, and if a memory access is attempted outside the valid range, then a fatal error is generated. The OS obviously has access to all existing memory locations, as this is necessary to swap users' code and data in and out of memory. It should also be obvious that changing the contents of the base and limit registers is a privileged activity, allowed only to the OS kernel.

8.1.2 Address Binding

User programs typically refer to memory addresses with symbolic names such as "i", "count", and "averageTemperature". These symbolic names must be mapped or bound to physical memory addresses, which typically occurs in several stages:

Compile Time - If it is known at compile time where a program will reside in physical memory, then absolute code can be generated by the compiler, containing actual physical addresses. However, if the load address changes at some later time, then the program will have to be recompiled.

Load Time - If the location at which a program will be loaded is not known at compile time, then the compiler must generate relocatable code, which references addresses relative to the start of the program. If that starting address changes, then the program must be reloaded but not recompiled.
Abraham Silberschatz, Greg Gagne, and Peter Baer Galvin, "Operating System Concepts, Ninth Edition", Chapter 8.