Underneath the layers of software abstraction, the physical world must ultimately implement the computations we wish to perform. How do we organize our physical building blocks to carry out a computation, particularly a programmable or universal one? How much of which resources does a particular computation require? How do we deploy our physical resources to support a computation efficiently? What do we do when given billions of transistors? These are some of the questions that motivate this course.

We are witnessing an unprecedented expansion of computational capability, fueled both by advances in fabrication technology and by architectural innovations that exploit it. As a result, automatic computation is reshaping the way we live, work, and communicate, and especially how we do science and engineering. If anything, our biggest limitation today is not the raw capacity available in the hardware but our ability to fully exploit that capacity. The design space for computing devices is large and fascinating, with efficiencies that vary by orders of magnitude. Further, the limitations of our underlying building blocks keep changing, so the "right answers" of the past will almost certainly become stale. A good engineer must understand the fundamental tradeoffs well enough to re-evaluate solutions as the underlying technology changes.

Who should study this?

This course is a must for anyone interested in the design of computers, System-on-a-Chip ICs, or modern embedded systems (including multimedia, communications, signal processing, and control). It is also of great value to anyone who will design "systems" built from these components (e.g., telecom, robotics and autonomous vehicles, instrumentation, control, electronic appliances). Finally, a deep understanding of computer organization benefits anyone who hopes to develop high-performance software, since it reveals the capabilities of a machine and how to get the most out of it.