Is a higher core count or higher clock speed more beneficial to a computer's performance? [closed]

There are two basic situations to consider:

  1. The processor is used with a computer that solely does calculations for a single program

  2. The processor is used for multiple programs running at the same time

The first situation is where processor 'speed' is more important, as the user wants the ability to make calculations quickly and efficiently. These are typically calculation-intensive workloads, e.g. calculating prime numbers for encryption/decryption.
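
As a rough sketch of that first situation (Python used purely for illustration; the naive trial-division primality test is an arbitrary stand-in for any calculation-heavy workload), this kind of work runs in a single thread, so only one core is ever busy and a faster core finishes sooner:

```python
# Count primes below a limit with naive trial division.
# Purely CPU-bound, single-threaded work: it can only ever
# keep one core busy, so per-core speed decides how fast it runs.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def count_primes(limit: int) -> int:
    return sum(1 for n in range(limit) if is_prime(n))

if __name__ == "__main__":
    # Extra cores sit idle while this runs; a higher-clocked core finishes sooner.
    print(count_primes(200_000))
```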

The second is where multiple cores come in handy, as each program can be assigned to a separate core, preventing the programs from bottlenecking each other. In today's world, the average user runs multiple programs at a time, which makes multi-core processors desirable.

However, multi-core != faster speeds or higher performance in all cases. Since most programs are written for single-core processing*, clock speed is still important to look at. A combination of both must be taken into consideration (along with many other factors).


*Some programs can already use multiple cores at the same time, and hopefully more will be written that way. The future of software lies in this "parallel programming":

Software developers can no longer rely on increasing clock speeds alone to speed up single-threaded applications; instead, to gain a competitive advantage, developers must learn how to properly design their applications to run in a threaded environment. Multi-core architectures have a single processor package that contains two or more processor "execution cores," or computational engines, and deliver—with appropriate software—fully parallel execution of multiple software threads.

-Intel
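
As a minimal sketch of what designing for parallel execution can look like (Python's multiprocessing module is used here only as one example, and the way the range is chunked is an arbitrary choice), the prime-counting work from situation 1 can be split across all available cores:

```python
# A minimal sketch of splitting the same prime-counting work
# across one worker process per core.
from multiprocessing import Pool, cpu_count

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def count_primes_in_range(bounds):
    start, stop = bounds
    return sum(1 for n in range(start, stop) if is_prime(n))

if __name__ == "__main__":
    limit = 200_000
    workers = cpu_count()
    step = limit // workers
    # Split [0, limit) into one chunk per core; the last chunk takes the remainder.
    chunks = [(i * step, limit if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(count_primes_in_range, chunks))
    print(total)
```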


I personally think core count is the way to go. Software development has shifted toward networked systems, so local resources are no longer the only resources available to you. The most important factor in how you work now is what network you are part of.

Notice the shift to mobile broadband, constant connectivity, remote access, and so on. Constant connectivity, in turn, demands battery life. While it is debatable which CPU factors are better for battery life (it's a classic optimization trade-off of work done versus time), if I had to pick one, I'd pick more cores.

Intel now allows cores to be powered up and down on demand. While not as efficient as having no extra cores to put to sleep, having the option to use more cores gives you the flexibility to run more applications on the same hardware platform.


As ChrisF mentions in a comment, it depends. But since answers like that aren't really answers, I'll try to sketch some scenarios where one will be more beneficial than the other:

In most of the common processes you mention, the number of cores isn't going to matter very much, since most of the work is done in a single thread which can only execute on a single core (at a time). For such processes, a single but very powerful core will perform better than a couple of slower cores. Both encryption and file compression could be exceptions to this, but it depends a lot on which algorithms are used and whether they can be executed in parallel.

However, you have forgotten one of the most common tasks performed on computers today: browsing. Several popular browsers open each tab in a separate process (Chrome being the only one I'm sure does this, since it's the one I use), meaning that if you have four tabs open on a quad-core system, each browsing window can (in theory) have a core "to itself" (ignoring OS threads and such) and be as fast as if there were no other browser tabs/windows open. For people who browse with many tabs open at a time, this can be a serious performance improvement without having to build extremely fast CPU cores.
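
A rough sketch of that process-per-tab idea (hypothetical: four independent worker processes stand in for four tabs, and the busy loop is an arbitrary placeholder for a tab's work); the operating system is then free to schedule each process onto its own core:

```python
# Launch several completely independent OS processes, the way a
# multi-process browser gives each tab its own process. The OS is
# then free to schedule each one on a different core.
import subprocess
import sys

BUSY_TAB = "sum(i * i for i in range(5_000_000))"  # placeholder for a tab's work

if __name__ == "__main__":
    tabs = [subprocess.Popen([sys.executable, "-c", BUSY_TAB]) for _ in range(4)]
    for tab in tabs:
        tab.wait()  # on a quad-core machine the four "tabs" can run concurrently
```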

The key to knowing whether a multi-core system with slower cores will be faster than a single-core system with a fast core is knowing whether you do many different things simultaneously or a few heavy things. Since this differs a lot from user to user, so does the answer to your question.


The other answers make a couple of important points too:

  • processor performance isn't all about clock speed or number of cores anymore - other parts of the processor are becoming bottlenecks as clock speed and core count improve.
  • for most users, processor performance isn't even the bottleneck to begin with. If you're spending your time in hosted applications like Google Docs, the speed of your network card is going to matter more than the speed of your processor core(s). If you're watching or editing high-res movie material, hard disk performance is going to matter more. Etc...

First of all, single-core speeds have not really gone down that much. The only reason Intel's current Sandy Bridge lineup does not top single-core Pentium 4s in terms of megahertz is that Intel lacks competition, so they don't have to push that hard.

Second, clock speed is not everything, even on a single core. Looking at application performance, again against the Pentium 4, the current Intel lineup is around 50% faster per clock cycle. The reasons why Sandy Bridge is faster per clock cycle than the Pentium 4 (Prescott being its last incarnation) are numerous, but an intelligent prefetching memory controller, having the memory controller on the same die as the CPU, and higher instruction-level parallelism (ILP) all contribute.

Instruction-level parallelism basically means that the processor looks at the instructions and their dependencies; if two instructions do not depend on each other, the CPU can start loading data for both at the same time, and possibly reorder the instructions if the data for one of them arrives before the other.
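
A toy illustration of the dependency part (written in Python purely as notation; the actual overlapping happens in hardware on compiled machine code, not in the interpreter):

```python
x, y, z = 3, 5, 7

# Dependent chain: each line needs the previous line's result,
# so the three operations have to complete one after another.
a = x + 1
b = a * 2
c = b - 3

# Independent operations: none uses another's result, so a
# superscalar CPU can fetch, overlap and reorder them freely.
d = x + 1
e = y * 2
f = z - 3
```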

Third, some applications indeed benefit very nicely from multiple cores. For example, Photoshop almost always prefers more cores over a higher operating frequency. That is, even a slow quad-core almost always beats any dual-core chip, and any dual-core beats any single-core chip. Triple-cores are a mixed bag: they often win over dual-cores, but not always.

Generally, applications that do the same kind of operation on lots of different sets of data benefit most from parallelism. For example, video compression or photo editing can often be parallelized quite easily. On the other hand, computer games have proved hard to parallelize. Their graphics of course parallelize very well, but that part is executed on the GPU, not the CPU. The remaining physics, game-world bookkeeping and AI parallelize less easily.
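
As a sketch of why that kind of work splits up so well (a hypothetical brightness adjustment over fake pixel data; the row-wise chunking is just one possible way to divide it), each worker applies the same operation to its own independent slice:

```python
# Apply the same operation to many independent chunks of data:
# each worker brightens its own block of pixel rows.
from concurrent.futures import ProcessPoolExecutor

def brighten_rows(rows):
    # Same simple operation on every pixel; no row depends on any other row.
    return [[min(255, p + 30) for p in row] for row in rows]

if __name__ == "__main__":
    image = [[(r * c) % 256 for c in range(640)] for r in range(480)]  # fake pixel data
    n_chunks = 4
    step = len(image) // n_chunks
    chunks = [image[i * step:(i + 1) * step] for i in range(n_chunks)]
    with ProcessPoolExecutor(max_workers=n_chunks) as pool:
        brightened = [row for block in pool.map(brighten_rows, chunks) for row in block]
    print(len(brightened), "rows processed")
```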