The paper extrapolates several basic trends in computer architecture:
- Device scaling model (DevM): area, frequency, and power requirements at future technology nodes through 2024.
- Core scaling model (CorM): single-core power/performance and area/performance Pareto frontiers derived from a large set of diverse microprocessor designs.
- Multicore scaling model (CmpM): area, power, and performance of any application on "any" chip topology, for both CPU-like and GPU-like multicores (a rough sketch of this style of bound follows the list).
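To make the flavor of the multicore model concrete, here is a minimal sketch of the kind of power-limited, Amdahl's-law-style speedup bound such a model builds on. This is not the paper's calibrated CmpM; the function and every number in it are placeholders for illustration.

```python
# Rough illustration (not the paper's CmpM): an Amdahl's-law-style speedup
# bound when the power budget, not die area, limits how many cores run at once.

def power_limited_speedup(parallel_fraction, total_cores,
                          power_budget_w, watts_per_core):
    # Cores we can afford to power on simultaneously within the budget.
    powered_cores = max(1, min(total_cores, int(power_budget_w / watts_per_core)))
    # Classic Amdahl's law, applied only to the powered-on cores.
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / powered_cores)

# Hypothetical example: a 256-core chip whose 80 W budget lets only 64 cores run.
print(power_limited_speedup(0.99, 256, 80.0, 1.25))  # ~39x, vs ~72x if all 256 ran
```

The paper's real models are far more detailed (they rest on measured Pareto frontiers and benchmark profiles), but the power-gating step above is the essential mechanism.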
The point of the exercise is that, once the numbers and projections are worked out carefully, it becomes clear that the power required to run all of a chip's transistors is growing faster than application software's ability to use those transistors effectively.
The result, the authors predict, is that we will be facing the practical end of Moore's law much sooner than others have suggested:
The study shows that regardless of chip organization and topology, multicore scaling is power limited to a degree not widely appreciated by the computing community. Even at 22 nm (just one year from now), 21% of a fixed-size chip must be powered off, and at 8 nm, this number grows to more than 50%. Through 2024, only 7.9× average speedup is possible across commonly used parallel workloads, leaving a nearly 24-fold gap from a target of doubled performance per generation.
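The arithmetic behind those dark-silicon percentages is easy to sketch. With Dennard scaling broken, each process generation roughly doubles the transistors that fit on a fixed-size die, but the power each transistor draws no longer halves in step, so the fraction of the chip that can be switched on within a fixed power budget shrinks every generation. Here is a back-of-the-envelope version; the per-node scaling factors are assumptions for illustration, not the paper's calibrated DevM numbers.

```python
# Back-of-the-envelope dark silicon estimate. The per-generation scaling
# factors below are illustrative assumptions, not the paper's DevM values.
AREA_SCALING = 0.5    # transistor area halves each node -> 2x transistors per die
POWER_SCALING = 0.7   # but energy per switch falls by only ~30% per node

def active_fraction(generations):
    """Fraction of a fixed-area, fixed-power-budget chip that can be lit up
    'generations' process nodes from now."""
    transistors = (1.0 / AREA_SCALING) ** generations       # 2x per node
    power_per_transistor = POWER_SCALING ** generations
    power_if_everything_on = transistors * power_per_transistor
    return min(1.0, 1.0 / power_if_everything_on)

for g in range(1, 6):
    print(f"{g} node(s) out: {active_fraction(g):.0%} of the chip can be active")
# -> 71%, 51%, 36%, 26%, 19%: the dark fraction grows every generation.
```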
Since the essential problem is that hardware designs are moving to extreme multicore techniques much faster than software has been able to adapt, the only solution the authors can see is to redouble our efforts to improve our software techniques and learn to build software that can effectively use these highly parallel machines:
On the other hand, left to the multicore path, we may hit a “transistor utility economics” wall in as few as three to five years, at which point Moore’s Law may end, creating massive disruptions in our industry. Hitting a wall from one of these two directions appears inevitable. There is a silver lining for architects, however: At that point, the onus will be on computer architects – and computer architects only – to deliver performance and efficiency gains that can work across a wide range of problems. It promises to be an exciting time.
From my perspective, the challenge of building software that more effectively uses multi-core hardware is not insuperable. I think many software authors have been thinking about this, but haven't been sufficiently motivated to act on it because Moore's law just keeps on delivering.
It's not that we can't write efficient, power-sensitive, highly parallel code; it's just that if we don't need to, we won't, because we can spend that time adding more features and building more interesting kinds of applications.
When that changes, we'll change, too.
Haven't read the paper yet, but I did see this video with one of the authors. Very interesting.
http://youtu.be/CQVd68U_Y8U
The paper also investigated parallelism and concluded that it is another important source of dark silicon (see Sec. 10.3 on page 9).
For earlier work on dark silicon, see http://darksilicon.org