Chapter 1: Introduction
  1. Organization and architecture.
    1. Computer organization: how parts are combined to make a working whole.
      1. Physical aspects of the computer.
      2. Types of memory and other components.
      3. Signals and control protocols.
    2. Computer architecture: how the organization is presented to the programmer.
      1. Instruction sets and codes
      2. How memory is addressed
      3. How I/O is accomplished
      4. Instruction Set Architecture (ISA): The interface between software and hardware.
  2. Hardware and software
    1. Any program is executed by some interpreter.
    2. That interpreter may be software or hardware.
      1. Of course, you have to reach hardware eventually to have something you can run.
      2. Could you build a JVM in hardware?
    3. Any task done by software could be done by hardware instead.
    4. Any task done by hardware could be done by software instead.
      1. Though the software still needs an interpreter, so you can apply the rule recursively.
      2. Since recursion needs a base case, you'll have some hardware eventually.
      3. Hardware solutions usually run faster.
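The software/hardware interchangeability above can be made concrete with a tiny software interpreter. This is a sketch with an invented three-instruction language, not any real instruction set; the same behavior could, in principle, be wired directly into hardware.

```python
# Minimal software interpreter for a hypothetical accumulator language.
# Each instruction is a (opcode, operand) pair; the interpreter itself
# is a program, so it in turn needs an interpreter, down to real hardware.

def run(program):
    """Interpret a list of (opcode, operand) pairs; return the accumulator."""
    acc = 0
    for op, arg in program:
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "MUL":
            acc *= arg
        else:
            raise ValueError(f"unknown opcode: {op}")
    return acc

# Computes (3 + 4) * 2
print(run([("LOAD", 3), ("ADD", 4), ("MUL", 2)]))  # prints 14
```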
  3. A computer has
    1. Processor(s) to interpret and execute instructions
    2. Memory to store both data and programs.
    3. Mechanism to transfer data to and from the outside.
  4. Computer Guts.
    1. Sizes.
      1. Memory is always in binary: 1K = 1024, 1M = 1024² = 1,048,576, etc.
      2. Disk sizes
        1. Often in binary
        2. Formatting consumes a good bit of the raw capacity.
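The binary prefixes above are easy to check in code. The "500 GB drive" below is an invented example showing why a capacity quoted with decimal prefixes reads smaller when reported in binary units; actual manufacturer conventions vary.

```python
# Binary size prefixes: each step is a factor of 2**10 = 1024,
# vs. decimal prefixes, where each step is a factor of 1000.

KI = 2**10   # 1024
MI = 2**20   # 1024**2 = 1,048,576
GI = 2**30   # 1024**3

# Hypothetical drive marketed as "500 GB" using decimal prefixes:
disk_decimal_bytes = 500 * 10**9
print(disk_decimal_bytes / GI)   # about 465.66 in binary GiB
```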
    2. Processors.
      1. Typically multi-core these days: multiple CPUs on one chip.
      2. Clock speed in the GHz range.
        1. The clock is a circuit which generates a pulse at a regular rate.
        2. Makes the CPU do the next thing, whatever that is.
        3. Chapter 3.
      3. Chapters 4 and 5.
    3. Main Memory. Typically SDRAM.
      1. Synchronous Dynamic Random Access Memory.
      2. Chapter 6.
    4. System bus. Connects the CPU, memory, and some other parts. Chapter 4.
    5. Cache memory.
      1. Holds a copy of a portion of the main memory.
      2. Operates much faster than main memory.
      3. Speeds up a large percentage of memory operations, since they can be satisfied from cache.
      4. Chapter 6.
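The cache speedup can be quantified with the standard average-memory-access-time formula. The nanosecond figures below are invented for illustration; the point is that a high hit rate keeps the average close to cache speed.

```python
# Average memory access time (AMAT) = hit_time + miss_rate * miss_penalty.
# With a high hit rate, most accesses are satisfied from the fast cache,
# so the average stays near cache speed even though memory is much slower.

def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    return hit_time_ns + miss_rate * miss_penalty_ns

cache_ns, memory_ns = 2.0, 100.0
for hit_rate in (0.80, 0.95, 0.99):
    print(f"hit rate {hit_rate:.0%}: AMAT = {amat(cache_ns, 1 - hit_rate, memory_ns):.1f} ns")
```

At a 95% hit rate the average is 7 ns, far closer to the 2 ns cache than to the 100 ns memory.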
    6. Storage
      1. I/O bus speed.
        1. Moves data to and from I/O devices.
        2. Separate from the system bus.
      2. Hard drive: rotational speed matters.
      3. Solid state (flash) storage: Very different properties from a hard drive.
      4. Optical
      5. Chapter 7.
    7. (Other) peripheral devices.
      1. Display
      2. Graphics processor. (Yes, it's an extra processor.)
      3. Network Interface Card (NIC). (Which usually isn't a separate card anymore.)
  5. Standards Organizations. Standards allow various components to work together, even when made by different manufacturers.
    1. Institute of Electrical and Electronics Engineers (IEEE). Buses, low-level networking, many others.
    2. International Telecommunications Union (ITU).
    3. International Organization for Standardization (ISO).
      1. Coordinates national standards bodies.
      2. The US member body is the American National Standards Institute (ANSI).
    4. Organizations specialized around specific standards.
      1. InterNational Committee for Information Technology Standards (INCITS). Disk I/O bus standards, and some others.
      2. The USB Implementers Forum.
  6. Generations.
    1. Gen. 0. Mechanical computation.
      1. Abacus. Really a memory system.
      2. Pascal's calculator. Invented 1642. Basic design still used in the 1950s.
      3. Babbage
        1. Difference engine. 1822. A calculator.
        2. Analytical engine. A computer.
      4. Punched cards.
        1. Early use: Jacquard Loom. Controlled patterns woven into the cloth.
        2. Babbage used them to control his equipment.
        3. Hollerith cards.
          1. Created for 1890 census.
          2. Standard for computing into the (gak!) 1980's.
      5. Electro-mechanical: relays.
        1. Konrad Zuse, Germany, during WWII.
        2. Harvard Mark I (started 1939).
    2. Gen. 1. Vacuum tubes.
      1. Tubes must be heated. Use lots of power, leave lots of heat.
      2. Routinely burn out and must be replaced.
      3. Konrad Zuse again. Design only, though; he couldn't get the tubes.
      4. Atanasoff-Berry (ABC).
      5. Mauchly and Eckert, ENIAC.
        1. More than a lab toy.
        2. 17,468 vacuum tubes.
        3. Funded by the army to calculate ballistic tables.
    3. Gen. 2. Transistors.
      1. Bell Labs: Bardeen, Brattain, Shockley.
      2. Solid state (v. ionized) version of the vacuum tube, but
        1. Uses much less power.
        2. Generates much less heat.
        3. Doesn't burn out.
      3. Start of a large commercial computing industry.
    4. Gen. 3. Integrated circuits.
      1. Complete transistor circuits (containing many transistors) fabricated at once in a single device.
        1. Dozens at first.
        2. Constantly increasing.
      2. Allowed small computers to be cheaper, expanding use beyond government and the largest companies and universities.
      3. Allowed the largest computers to be larger. Rise of “supercomputers.”
    5. The electric valve.
      1. Essentially a switch controlled by current on another circuit.
      2. Relays, vacuum tubes, and transistors are all such controlled switches.
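The controlled-switch idea can be modeled as a function, and composing switches yields logic gates. This is an abstract sketch, not tied to any particular device (relay, tube, or transistor).

```python
# A "valve" abstracted as a controlled switch: the signal passes through
# the output circuit only when the control input is on. Relays, vacuum
# tubes, and transistors all behave this way.

def switch(control: bool, signal: bool) -> bool:
    return signal if control else False

# Two switches in series implement AND, the seed of all digital logic.
def and_gate(a: bool, b: bool) -> bool:
    return switch(b, switch(a, True))

print(and_gate(True, True), and_gate(True, False))  # True False
```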
    6. Gen. 4. Very-large-scale integrated circuits.
      1. Chips in excess of 10,000 components.
      2. Or millions. Or billions.
      3. Particularly, a complete CPU.
      4. Allows the creation of the PC, and later portable computing.
    7. Moore's Law.
      1. The density of silicon chips doubles every 18 months.
      2. Not some law of nature; will hold as long as engineers keep figuring out how to make it hold.
      3. Originally a prediction for 10 years out. Has held for 40.
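Moore's Law is just geometric growth, which a one-line function makes plain. Density is normalized to 1 at year zero; the 18-month doubling period is the version stated above (other formulations use one or two years).

```python
# Moore's Law as geometric growth: density doubles every 18 months (1.5 years).

def density_after(years, doubling_period_years=1.5, start=1.0):
    return start * 2 ** (years / doubling_period_years)

print(density_after(15))   # 2**10 = 1024x growth in 15 years
```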
  7. Layered Design.
    1. I think the figure in the book mixes some different things.
    2. Machine instructions are bit patterns stored in the computer's memory.
    3. The Instruction Set Architecture (ISA) describes the format and meaning of the instructions.
    4. The ISA is the interface between software and hardware.
    5. Assembly language is a symbolic form of machine instructions.
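The relationship between assembly language and machine instructions can be shown with a toy encoding. The instruction format and opcode values below are invented for illustration; they are not any real ISA.

```python
# A toy 8-bit instruction format (hypothetical): high 4 bits = opcode,
# low 4 bits = operand. An assembler translates each symbolic instruction
# into exactly this kind of bit pattern stored in memory.

OPCODES = {"LOAD": 0x1, "ADD": 0x2, "STORE": 0x3}

def assemble(mnemonic, operand):
    """Translate one assembly instruction into its machine bit pattern."""
    return (OPCODES[mnemonic] << 4) | (operand & 0xF)

word = assemble("ADD", 5)
print(f"{word:08b}")   # 00100101: opcode 0010 (ADD), operand 0101 (5)
```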
  8. Von Neumann Architecture
    1. Stored program: Program stored in memory along with the data.
    2. Sequential processing of instructions.
    3. Single path between CPU and memory: The Von Neumann Bottleneck.
    4. Fetch-execute cycle.
      1. CPU fetches the instruction denoted by the PC.
      2. CPU decodes the fetched instruction.
      3. Any needed operands are fetched.
      4. The operation is performed, and any results are stored.
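The four steps above can be sketched as a loop over a toy stored-program machine. The instruction set is invented for illustration; note that program and data share the single memory, per the stored-program principle.

```python
# Fetch-decode-execute on a toy Von Neumann machine: one memory holds
# both instructions (addresses 0-2) and data (addresses 4-5).

memory = [
    ("LOAD", 4),   # 0: copy contents of address 4 into the accumulator
    ("ADD", 5),    # 1: add contents of address 5 to the accumulator
    ("HALT", 0),   # 2: stop
    0,             # 3: (unused)
    10,            # 4: data
    32,            # 5: data
]

pc, acc, running = 0, 0, True
while running:
    opcode, addr = memory[pc]   # fetch the instruction the PC denotes
    pc += 1                     # advance the program counter
    if opcode == "LOAD":        # decode, fetch operand, execute, store
        acc = memory[addr]
    elif opcode == "ADD":
        acc += memory[addr]
    elif opcode == "HALT":
        running = False

print(acc)   # prints 42
```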
    5. Some parallel systems discard the Von Neumann design; common ones just combine several Von Neumann machines.
      1. Clustering.
      2. Multicomputers.
      3. Multi-core computers.
    6. What's not Von Neumann?
      1. Not minor variations like multiple buses or a small number of separate memories; those aren't very different.
      2. Send the operation to the data, not the other way 'round.
        1. Instructions applied in parallel to a set of data.
        2. Adding computing power to memory, including hardware neural networks.