1. What is a thread, anyway?
    1. A process has several resources.
      1. Memory region, including a stack.
      2. Registers, including the PC.
      3. System resources: open files, network connections, etc.
    2. Divide the spoils.
      1. A process holds the memory region and system resources.
      2. A thread resides in a process.
      3. A thread has a stack and registers, including the PC.
      4. A process may contain one or more threads.
      5. Multiprogramming occurs among the threads, not only among processes.
    3. Terrible terminology.
      1. The original, undivided kind is a traditional process; the divided kind is a threaded process or modern process.
      2. A traditional process is a modern process with exactly one thread.
      3. Some authors (not Tanenbaum) use task for our modern process.
  2. What is a thread good for?
    1. GUIs: handling the keyboard, screen, and disk concurrently.
    2. Servers: listening for new connections while serving multiple clients.
    3. Performance: using multiple CPUs (cores) simultaneously.
  3. Threads v. Processes
    1. Can use either.
    2. Threads are cheaper to create, destroy, and switch between.
    3. Threads can communicate more efficiently.
    4. Threads can clobber each other (same memory space).
    5. Shared variables.
      1. When multiple threads share data, any can update at any time.
      2. Final results may depend on how the threads happen to be scheduled: a race condition.
      3. Subject of the next section.
  4. Implementing threads.
    1. User space (library).
      1. Kernel does not know about the threads.
      2. Kernel schedules processes.
      3. When a process runs, the library schedules the threads.
    2. Kernel.
      1. Kernel knows about the threads.
      2. Kernel schedules threads, not processes.
    3. Compare.
      1. Easier to write a library than to modify the kernel.
      2. Library is more portable.
      3. Thread switching is faster without entering the kernel.
      4. Library can be tailored to the application.
      5. Kernel scheduling can be fairer.
        Consider processes with different numbers of threads.
      6. When one thread in a library-scheduled process makes a blocking system call, the kernel blocks the whole process, so all its threads block.
        1. Possible to re-implement I/O calls using non-blocking I/O.
        2. But then you have to re-implement all the basic I/O calls to support threading.
      7. Only one thread per process can be running at a time, even under SMP.
    4. Hybrid.
      1. Kernel supports threads.
      2. Each kernel thread is multiplexed among several user-level threads.
      3. One kernel thread per CPU can be a reasonable model.
    5. Scheduler Activations.
      1. Kernel creates threads, but lets the library schedule execution on them.
      2. Kernel notifies the library when a thread must block. Library can schedule another.
      3. This requires an up-call: Sort of a system call in reverse, where the O/S calls a specified function in the scheduler.
        1. The kernel loads a state onto the CPU that starts the library monitor from a known PC value.
        2. The monitor can then check if another thread is ready and start it.
    6. Pop-up thread.
      1. System creates a thread upon the arrival of a network message.
      2. Useful way to serve web requests.
      3. Must pre-arrange the context in which the thread will run.
  5. Simple examples of threaded programs.
    1. C/Pthreads.
    2. C++ (2011 Standard).
    3. Java.
  6. Re-writing a program to use threads can be difficult.
    1. Static shared variables create race conditions.
    2. Traditional libraries often use these.
    3. Most threading systems support thread-local storage.
      1. A block of storage attached to the thread.
      2. Global to the function invocations within the thread, but local to the thread.
      3. Can be used for these globals, but requires coding changes.
      4. Usually no direct support in languages.
  7. Support.
    1. Unix and friends.
      1. Designed with traditional processes.
      2. Threads were tacked on later: POSIX threads (Pthreads).
    2. Windows: Designed with threading in mind.