Addressing and Relocation
  1. Memory hierarchy.
    1. Smaller, faster storage on the left; larger, cheaper storage on the right.
    2. Keep recently used data in the faster storage.
    3. Goal: create the illusion of one large, fast, cheap storage system.
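The "keep recent data in faster storage" idea can be sketched as a tiny LRU cache sitting in front of a larger, slower store. This is only an illustration; the class name, capacity, and data values are invented:

```python
from collections import OrderedDict

# Sketch: a small, fast cache in front of a large, slow backing store.
# Keeping recently used data in the small cache is what makes the
# hierarchy behave like one large, fast memory.

class TinyCache:
    def __init__(self, capacity, backing):
        self.capacity = capacity          # the "fast" storage is small
        self.backing = backing            # the "slow" storage is large
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, addr):
        if addr in self.cache:
            self.hits += 1
            self.cache.move_to_end(addr)  # mark as recently used
        else:
            self.misses += 1
            self.cache[addr] = self.backing[addr]
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict least recently used
        return self.cache[addr]

backing = {a: a * 10 for a in range(100)}   # large, cheap store
c = TinyCache(capacity=4, backing=backing)
for addr in [1, 2, 1, 3, 1, 2]:             # recent addresses repeat
    c.read(addr)
print(c.hits, c.misses)  # prints: 3 3 -- the repeats hit in the cache
```

Because programs tend to revisit recent addresses, most reads are served from the small fast level.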
  2. Address Mapping
    1. Programs refer to memory locations by address.
      MOV eax, [ebx]   ; load the word at the address held in ebx into eax
    2. The CPU uses addresses to refer to locations in the memory system.
    3. Only in the simplest systems are these actually the same: Addresses generated by the program are usually translated by the CPU before being sent to the memory unit.
      1. Addresses used by the hardware are real (physical) addresses.
      2. Addresses used by the software are virtual addresses.
      3. Each time the program does a store or fetch, the virtual address is translated to a real address.
      4. User mode code usually has no way to know what real addresses exist or how any virtual ones translate.
    4. The relocation problem.
      1. For multiprogramming, multiple programs must be loaded into memory at the same time.
      2. They will need to occupy different locations on different occasions.
      3. Since this location is not known at compile time, it must be possible to relocate a program after compilation.
    5. Software Relocation.
      1. Compiler generates object program starting at zero.
      2. Object format must distinguish pointer values (relative symbols) from others (absolute symbols).
      3. When the program is copied from disk to memory, the software adds its memory location to each relative symbol.
      4. All addresses are now adjusted and the program can run wherever it is located.
      5. Static relocation
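The load-time fixup above can be sketched in Python, assuming an object format that simply lists which word offsets hold relative symbols (the format, code words, and load address are invented for illustration):

```python
# Sketch of static relocation: the object file marks which words are
# pointer values (relative symbols); the loader adds the load address to
# each of those words and leaves absolute symbols untouched.

object_code = [0, 40, 7, 12, 99]    # words, compiled to start at address 0
relative = {1, 3}                   # word offsets holding relative symbols

def load(code, rel, base):
    # Copy the program into "memory" at the given base, adjusting each
    # relative symbol; absolute symbols are copied unchanged.
    return [w + base if i in rel else w for i, w in enumerate(code)]

image = load(object_code, relative, base=1000)
print(image)  # prints: [0, 1040, 7, 1012, 99]
```

Once loaded, the distinction between relative and absolute symbols is gone, which is exactly why the image cannot later be moved (or swapped back to a different location) without hardware help.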
    6. Hardware: Base and Limit Registers
      1. Compiler (still) generates object program starting at zero.
      2. On each memory reference, the hardware adds the base address to the address generated by the instruction.
      3. User programs are written or compiled to load at address zero.
      4. The user program has no way to generate addresses which are not adjusted by the hardware.
      5. Usually, a limit register is included to trap references that are beyond a specified region.
      6. The user program generally has no access to the base and limit registers, so it cannot change them or even tell what its region is.
      7. Dynamic relocation
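The base-and-limit check can be sketched as a small Python function; `MMUFault` stands in for the hardware trap, and the register values are invented for illustration:

```python
# Sketch of dynamic relocation: on every memory reference, the hardware
# checks the program's address against the limit register, then adds the
# base register. The program never sees the base or limit values.

class MMUFault(Exception):
    pass

def translate(vaddr, base, limit):
    if not (0 <= vaddr < limit):
        raise MMUFault(f"address {vaddr} outside partition")  # trap to OS
    return base + vaddr               # real address sent to the memory unit

print(translate(12, base=5000, limit=100))   # prints: 5012
try:
    translate(250, base=5000, limit=100)     # beyond the region: trapped
except MMUFault as e:
    print(e)
```

Moving the partition later only requires copying the image and loading a new base value; the program's own addresses never change.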
    7. The set of usable memory addresses is an address space.
    8. Swapping
      1. If memory is over-committed, some process is removed.
      2. The image is copied to disk.
      3. It is returned later after some other process(es) have exited.
      4. Without hardware relocation, it must be returned to the exact same location.
        1. The loaded image no longer marks which symbols are absolute and which are relative.
        2. The relative symbols have already been adjusted, so the original object file on disk is no help.
  3. Partition Management. The region of memory occupied by a running program is called a partition.
    1. Fixed partitions. Largely obsolete.
      1. Memory is divided at boot time, and new jobs are placed where they will fit.
      2. Space is wasted when a job is smaller than the partition it occupies.
      3. If the only available partition is too big, we face a dilemma: delay the job or waste the extra space.
      4. The waste from this unneeded space is called internal fragmentation.
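A small worked example of internal fragmentation under fixed partitions, with invented partition and job sizes:

```python
# Sketch: with fixed partitions, each job takes the smallest free
# partition it fits in; the leftover space inside that partition is
# internal fragmentation.

partitions = [100, 200, 400, 400]   # sizes fixed at boot (assumed values)
jobs = [90, 210, 150]               # arriving job sizes (assumed values)

wasted = 0
free = sorted(partitions)
for job in jobs:
    fit = next(p for p in free if p >= job)  # smallest partition that fits
    free.remove(fit)
    wasted += fit - job                       # internal fragmentation

print(wasted)  # prints: 250 = (100-90) + (400-210) + (200-150)
```

The 210-unit job illustrates the dilemma: the 200-unit partition is just too small, so the job must take a 400-unit partition and waste nearly half of it.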
    2. Variable partitions.
      1. Partitions are created or destroyed as jobs have need.
      2. Free partitions can be kept in a linked list.
        1. Simply link free blocks together into a list.
        2. The empty partitions themselves are the nodes, so the space is “free”.
        3. Can be expensive to search.
        4. Finding adjacent free blocks can be expensive.
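A first-fit allocator over a free list can be sketched in Python. For simplicity the nodes here are `(start, size)` tuples rather than the free blocks themselves, and the initial hole layout is invented:

```python
# Sketch of a free list for variable partitions: each free block is
# recorded as (start, size); first-fit allocation walks the list and
# splits the first block that is big enough.

free_list = [(0, 300), (500, 200), (900, 1000)]  # assumed initial holes

def first_fit(size):
    for i, (start, blk) in enumerate(free_list):
        if blk >= size:
            if blk == size:
                free_list.pop(i)                   # exact fit: drop the node
            else:
                free_list[i] = (start + size, blk - size)  # split the hole
            return start
    return None                                    # no hole big enough

print(first_fit(250))  # prints: 0   -- carved from the first hole
print(first_fit(400))  # prints: 900 -- only the last hole is big enough
print(free_list)       # remaining holes; freed neighbors are NOT merged
```

Note that freeing a block would require searching the list for adjacent holes to coalesce, which is the expensive operation the outline mentions.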
      3. Free partitions can be kept in a bitmap, one bit per fixed-size allocation unit.
        1. Allocation still requires a linear search, though over bits rather than list nodes.
        2. Unlike the free list, the bitmap itself consumes space that is not free.
        3. Main advantage: easy to find adjacent groups of free units.
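A sketch of the bitmap approach, with an invented 8-unit memory. Adjacent free units are simply runs of zero bits, so coalescing is automatic, but allocation is still a linear scan:

```python
# Sketch of a bitmap allocator: one bit per fixed-size allocation unit
# (0 = free, 1 = in use). Allocating n units means scanning for a run
# of n zero bits and setting them.

bitmap = [1, 1, 0, 0, 0, 1, 0, 0]   # 8 units, assumed initial state

def alloc(units):
    run = 0
    for i, bit in enumerate(bitmap):
        run = run + 1 if bit == 0 else 0
        if run == units:                  # found a long-enough free run
            start = i - units + 1
            for j in range(start, i + 1):
                bitmap[j] = 1             # mark the units as in use
            return start
    return None                           # no run of free units fits

print(alloc(3))  # prints: 2 -- the first run of three zero bits
print(bitmap)    # prints: [1, 1, 1, 1, 1, 1, 0, 0]
```

Freeing is just clearing bits; no list surgery or neighbor search is needed, which is the "adjacent groups are easy" advantage.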
      4. External Fragmentation.
        1. System tends to accumulate empty slots too small to use.
          1. When a new partition is needed, the chance that it fits some hole exactly is very small.
          2. If the new partition is a little too big for a hole, it must go elsewhere (or cannot be stored at all).
          3. If it is a bit smaller than the hole, a useless fragment is left behind.
          4. Even if the leftover is large enough to be a useful partition, it will just become a fragment the next time around.
        2. Compacting is possible (with hardware relocation), but expensive.
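Compaction can be sketched as sliding every occupied partition down and recomputing its base. With base registers, only the register values change from each program's point of view; copying every image is what makes it expensive. The partition layout below is invented:

```python
# Sketch of compaction under hardware relocation: occupied partitions are
# slid down to the bottom of memory, squeezing all the holes into one big
# free region at the top. Each process's base register is updated; the
# programs themselves notice nothing.

partitions = [(0, 100), (300, 50), (700, 200)]   # (base, size), assumed

def compact(parts):
    new_base = 0
    moved = []
    for base, size in sorted(parts):
        moved.append((new_base, size))   # copy the image down (expensive!)
        new_base += size
    return moved                          # holes merged above new_base

print(compact(partitions))  # prints: [(0, 100), (100, 50), (150, 200)]
```

Afterward the three scattered holes become one 650-unit free region, at the cost of copying 350 units of live data.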
  4. Internal organization.
    1. A partition may have extra space to allow for growth.
    2. It is typical to let the heap and stack grow toward each other from opposite ends of the partition, so we don't have to predict which will grow more.
    3. Some architectures will allocate separate partitions for stack and heap, even though they belong to the same process.