A supercomputer is a computer that performs at or near the highest operational rate currently achieved by computers. Supercomputers are usually built from many physical computers that do parallel processing.

Parallel processing has been achieved by several means. Here are the main approaches, in roughly historical order.

  • Interleaved execution. This is starting a second job while the first is still executing, provided the jobs use different parts of the system. EG: Running an I/O-bound job (like printing) alongside a processor-intensive job (like an app).
  • Vector processing. This allows a single instruction to operate on entire arrays or matrices instead of processing each element sequentially.
  • Multithreading. This is executing different parts (threads) of a task/job/program simultaneously. The threads must not interfere with each other.
    • Hyper-threading. Aka SMT (Simultaneous MultiThreading). Older multithreading required multiprocessing; otherwise the threads were executed serially instead of in parallel. Intel's Hyper-Threading can run multiple threads on a single physical processor.
  • Multitasking. Aka multiprogramming, time sharing, or time slicing. This is executing different tasks/jobs/programs/apps seemingly simultaneously with a single processor. Multiple jobs alternate turns at the processor: job 1 works for a slice of time, then job 2 works for a slice of time, and so on, giving the appearance that multiple jobs are running simultaneously. Problems can arise from resource conflicts, which may result in deadlocks.
    • Cooperative multitasking. The jobs must know to share the processor, otherwise an uncooperative job may hog the processor. EGs: Mac OS 8-9.2 with MultiFinder, Win 3.x.
    • Preemptive multitasking. The operating system allots processor time to each job according to its priority and can interrupt (preempt) a job when its time slice expires. EGs: Unix, Windows 95+, Mac OS X, OS/2, Amiga.
  • Multiprocessing. This is executing different tasks/jobs/programs/apps simultaneously with two or more processors. In such a system, typically one processor was the master and the others were the slaves; the master would assign work to the slaves.
    • SMP. Symmetric MultiProcessing. Aka tightly coupled or shared everything systems. SMP systems have multiple processors but a single OS, memory and I/O bus. An SMP system balances the workload across up to 16 processors. SMP is considered good for OLTP with many users accessing the same database.
      • NUMA. Non-Uniform Memory Access. This is an SMP where each processor has its own local memory as well as a group memory. The term non-uniform is used because local memory is faster than the group memory. Groups of up to 16 processors can act as one, and by linking such groups NUMA systems can scale to 256 processors.
    • MPP. Massively Parallel Processing. Aka loosely coupled, shared nothing, grid, or distributed systems. MPP systems have multiple processors, each with its own OS, memory, and I/O bus. A messaging system must interconnect the processors, route requests to the appropriate database, and divide up the work to be done. Dozens, hundreds, or even thousands of processors can be used for a single MPP app. An MPP system can even be built from PlayStations running Linux! MPP is considered good for OLAP or AI, with a user accessing multiple databases.
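The vector-processing idea above can be sketched with NumPy (a library choice for illustration, not something the text names): one expression replaces an element-by-element loop.

```python
import numpy as np

# Scalar approach: one multiply per loop iteration.
a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
scalar_result = [x * y for x, y in zip(a, b)]

# Vector approach: a single expression applies the multiply across
# whole arrays at once; NumPy dispatches to optimized C loops that
# can use the CPU's SIMD (vector) instructions.
vector_result = (np.array(a) * np.array(b)).tolist()

print(scalar_result)  # [10.0, 40.0, 90.0, 160.0]
print(vector_result)  # [10.0, 40.0, 90.0, 160.0]
```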
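Multithreading can be sketched in Python's threading module: threads blocked on I/O genuinely overlap (though CPython's GIL keeps pure computation from overlapping). Four simulated I/O waits finish in roughly the time of one.

```python
import threading
import time

results = {}

def worker(name, delay):
    # Simulate an I/O-bound task (e.g. waiting on a network reply).
    time.sleep(delay)
    results[name] = delay

start = time.perf_counter()
threads = [threading.Thread(target=worker, args=(f"t{i}", 0.2))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                      # wait for every thread to finish
elapsed = time.perf_counter() - start

# Four 0.2 s waits overlap, so the total is near 0.2 s, not 0.8 s.
print(len(results), elapsed < 0.7)
```

The threads here write to different keys of `results`, satisfying the rule that threads must not interfere with each other.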
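The time-slicing behind multitasking can be mimicked with a minimal round-robin scheduler over Python generators. Because each job must voluntarily yield the processor, this is a sketch of the cooperative flavor; a preemptive OS would interrupt jobs itself.

```python
from collections import deque

def job(name, steps):
    # A job that does one unit of work, then yields the processor.
    for i in range(steps):
        print(f"{name} step {i}")
        yield                       # cooperative: give up the CPU

def run(jobs):
    # Round-robin scheduler: each job gets one "time slice" per turn.
    queue = deque(jobs)
    while queue:
        current = queue.popleft()
        try:
            next(current)           # run one slice
            queue.append(current)   # back of the line for the next turn
        except StopIteration:
            pass                    # job finished; drop it

run([job("A", 2), job("B", 3)])
# Interleaved: A step 0, B step 0, A step 1, B step 1, B step 2
```

An uncooperative job (one that never yields) would hog the scheduler forever, which is exactly the weakness of cooperative multitasking noted above.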
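The master/slave division of work maps onto Python's multiprocessing module: a parent process farms pieces of a task out to a pool of worker processes, each with its own memory, a shared-nothing arrangement closer in spirit to MPP than to SMP.

```python
from multiprocessing import Pool

def work(n):
    # Each worker process computes one piece independently.
    return n * n

if __name__ == "__main__":
    # The parent ("master") assigns work to 4 worker processes
    # ("slaves") and gathers the results back in order.
    with Pool(processes=4) as pool:
        results = pool.map(work, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```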

The next level of supercomputing may utilize different technologies.

  • DNA computing. DNA replicates billions of bits of data in seconds through parallel processing.
  • Quantum computing. Quantum computers in theory have near infinite computing capability. A "qubit" can be both zero and one at once, a "qubyte" can hold all values from 0 to 255 simultaneously, and so on exponentially. Keywords: quantum dot.
  • Neural networks. A system of programs and data structures that approximates the operation of the human brain. Keywords: fuzzy logic, knowledge layers, feed-forward systems.


