Are CPU clock ticks strictly periodic?

Like any complicated thing, you can describe the way a CPU operates at various levels.

At the most fundamental level, a CPU is driven by an accurate clock. The frequency of the clock can change; think Intel’s SpeedStep. But at all times the CPU is absolutely 100% locked to the clock signal.
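If you're curious what that clock is doing right now, the operating system usually exposes its current estimate. Here's a minimal sketch, assuming a Linux machine with the cpufreq subsystem; the sysfs path below is the usual location but isn't guaranteed to exist on every system:

    /* Minimal sketch: read the kernel's report of the current core-0 clock
     * frequency on a Linux system with the cpufreq subsystem. The sysfs path
     * is the usual location but may differ or be absent on some machines. */
    #include <stdio.h>

    int main(void)
    {
        const char *path = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq";
        FILE *f = fopen(path, "r");
        if (!f) {
            perror("fopen");
            return 1;
        }

        unsigned long khz = 0;
        if (fscanf(f, "%lu", &khz) == 1)
            printf("cpu0 is currently clocked at about %.3f MHz\n", khz / 1000.0);
        fclose(f);
        return 0;
    }

Run it a few times under varying load and you'll see the reported frequency move around, which is SpeedStep and friends at work.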

CPU instructions operate at a much higher level. A single instruction is a complex thing and can take anywhere from less than one cycle to thousands of cycles to complete, as explained here on Wikipedia.

So basically an instruction will consume some number of clock cycles. In modern CPUs, due to technologies like multiple cores, HyperThreading, pipelining, caching, out-of-order and speculative execution, the exact number of clock cycles for a single instruction is not guaranteed, and will vary each time you issue such an instruction!
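You can see this for yourself by timing the same short stretch of code over and over and watching the cycle counts jitter. Here's a minimal sketch (x86 only, GCC or Clang, using the RDTSC time-stamp counter; RDTSC counts reference cycles rather than core cycles, but the jitter it shows is exactly the point):

    /* Minimal sketch (x86, GCC/Clang): time the same short instruction
     * sequence many times with RDTSC and print the spread of cycle counts.
     * The numbers will jitter with frequency scaling, interrupts and cache
     * state -- which is exactly what is being illustrated. */
    #include <stdio.h>
    #include <stdint.h>
    #include <x86intrin.h>

    #define RUNS 1000

    int main(void)
    {
        volatile uint64_t sink = 0;
        uint64_t best = UINT64_MAX, worst = 0;

        for (int i = 0; i < RUNS; i++) {
            uint64_t start = __rdtsc();

            /* the code under test: a small chain of additions
             * (the compiler may simplify it; the jitter remains) */
            uint64_t x = sink;
            for (int j = 0; j < 100; j++)
                x += j;
            sink = x;

            uint64_t delta = __rdtsc() - start;
            if (delta < best)  best = delta;
            if (delta > worst) worst = delta;
        }

        printf("best: %llu cycles, worst: %llu cycles\n",
               (unsigned long long)best, (unsigned long long)worst);
        return 0;
    }

The best and worst counts will typically differ by a large factor, even though the exact same instructions were executed every time.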

EDIT

is there any information available about the variance for a specific CPU?

Yes and no. 99.99% of end-users are interested in overall performance, which can be quantified by running various benchmarks.

What you're asking for is highly technical information. Intel does not publish complete or accurate information about CPU instruction latency/throughput.

There are researchers who have taken it upon themselves to try to figure this out. Here are two PDFs that may be of interest:

  • instruction_tables.pdf
  • x86-timing.pdf

Unfortunately it's hard to get variance data. Quoting from the first PDF:

numbers listed are minimum values. Cache misses, misalignment, and exceptions may increase the clock counts considerably.

Interesting reading nevertheless!
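As a rough illustration of why those published numbers are minimums, here is a sketch (x86, GCC or Clang; the buffer size and the cache-flush trick are arbitrary choices of mine) that times the same loop of loads once with a deliberately flushed cache and once with a warm cache:

    /* Minimal sketch (x86, GCC/Clang): sum the same 1 MiB array twice and
     * time each pass with RDTSC. The first pass runs with a flushed cache,
     * the second with a warm one; the gap illustrates why per-instruction
     * tables only give minimum cycle counts. */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <x86intrin.h>

    #define N (256 * 1024)   /* 1 MiB of uint32_t: fits in a typical L2/L3 */

    static uint64_t timed_sum(const uint32_t *buf, uint64_t *cycles)
    {
        uint64_t start = __rdtsc();
        uint64_t sum = 0;
        for (size_t i = 0; i < N; i++)
            sum += buf[i];
        *cycles = __rdtsc() - start;
        return sum;
    }

    int main(void)
    {
        uint32_t *buf = malloc(N * sizeof *buf);
        if (!buf) return 1;
        for (size_t i = 0; i < N; i++)
            buf[i] = (uint32_t)i;

        /* flush the buffer from all cache levels so the first pass is cold */
        for (size_t i = 0; i < N; i += 16)      /* 64-byte cache lines */
            _mm_clflush(&buf[i]);
        _mm_mfence();

        uint64_t cold, warm;
        uint64_t s1 = timed_sum(buf, &cold);    /* mostly cache misses */
        uint64_t s2 = timed_sum(buf, &warm);    /* mostly cache hits */

        printf("cold pass: %llu cycles, warm pass: %llu cycles (sums %llu/%llu)\n",
               (unsigned long long)cold, (unsigned long long)warm,
               (unsigned long long)s1, (unsigned long long)s2);
        free(buf);
        return 0;
    }

The instructions are identical in both passes; only the cache state differs, and that alone can change the cycle count by a large factor.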


Are CPU clock ticks strictly periodic in nature?

Of course not. Even the very, very best clocks aren't strictly periodic. The laws of thermodynamics say otherwise:

  • Zeroth law: There's a nasty little game the universe plays on you.
  • First law: You can't win.
  • Second law: But you just might break even, on a very cold day.
  • Third law: It never gets that cold.

The developers of the very, very best clocks try very, very hard to overcome the laws of thermodynamics. They can't win, but they do come very, very close to breaking even. The clock on your CPU? It's garbage in comparison to those best atomic clocks. This is why the Network Time Protocol exists.
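On Linux you can actually watch NTP disciplining that "garbage" oscillator: the kernel keeps a running frequency correction and error estimate that any program can read via adjtimex(). A minimal sketch (Linux-specific; the fields printed are the standard struct timex ones, and the numbers are only meaningful if an NTP daemon is running):

    /* Minimal sketch (Linux): ask the kernel how the local clock is currently
     * being disciplined. When an NTP daemon is running, it continuously nudges
     * the clock's frequency and tracks an estimated error -- evidence that the
     * hardware oscillator on its own is neither strictly periodic nor accurate. */
    #include <stdio.h>
    #include <sys/timex.h>

    int main(void)
    {
        struct timex tx = { 0 };      /* modes == 0 means "read only" */

        if (adjtimex(&tx) == -1) {
            perror("adjtimex");
            return 1;
        }

        /* tx.freq is the frequency correction in ppm, scaled by 2^16 */
        printf("frequency correction: %.3f ppm\n", tx.freq / 65536.0);
        printf("estimated error:      %ld us\n", (long)tx.esterror);
        printf("maximum error:        %ld us\n", (long)tx.maxerror);
        return 0;
    }

A frequency correction of tens of ppm is perfectly normal; that's how far off your CPU's crystal would drift per unit time if nothing corrected it.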


Prediction: We will once again see a bit of chaos when the best atomic clocks in the world go from 2015 30 June 23:59:59 UTC to 2015 30 June 23:59:60 UTC to 2015 1 July 00:00:00 UTC. Too many systems don't recognize leap seconds and have their securelevel set to two (which prevents a time change of more than one second). The clock jitter in those systems means that the Network Time Protocol leap second will be rejected. A number of computers will go belly up, just like they did in 2012.


Around 2000, when clock speeds of CPUs started to get into the range where mobile phones also operated, it became common to add a variation to the actual clock speed. The reason is simple: if the CPU clock is exactly 900 MHz, all the electronic interference is generated at that frequency. Vary the clock frequency a bit, between 895 and 905 MHz, and the interference is also distributed over that range.

This was possible because modern CPUs are heat-limited. They have no problem running slightly faster for a short period of time, as they can cool down when the clock is slowed down again later.
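To see why spreading the clock helps, here is a small numeric sketch with made-up, scaled-down frequencies (900 "Hz" standing in for 900 MHz, a ±5 sweep standing in for ±5 MHz). It compares how much signal energy lands exactly at the nominal frequency for a fixed clock versus a slowly swept one; compile with -lm.

    /* Small numeric sketch of the spread-spectrum idea, using made-up,
     * scaled-down frequencies so it runs instantly: compare how much energy
     * sits exactly at the nominal frequency for a fixed "clock" versus one
     * whose frequency is slowly swept around the same nominal value. */
    #include <stdio.h>
    #include <math.h>

    #define FS   8000.0    /* sample rate (arbitrary units)      */
    #define N    16000     /* number of samples                  */
    #define F0   900.0     /* nominal clock frequency            */
    #define DEV  5.0       /* +/- deviation of the swept clock   */
    #define FM   0.5       /* how fast the sweep itself repeats  */

    /* magnitude of the DFT evaluated at exactly one frequency */
    static double bin_magnitude(const double *x, double freq)
    {
        double re = 0.0, im = 0.0;
        for (int n = 0; n < N; n++) {
            double w = 2.0 * M_PI * freq * n / FS;
            re += x[n] * cos(w);
            im -= x[n] * sin(w);
        }
        return sqrt(re * re + im * im);
    }

    int main(void)
    {
        static double fixed[N], swept[N];

        for (int n = 0; n < N; n++) {
            double t = n / FS;
            /* fixed clock: constant frequency F0 */
            fixed[n] = sin(2.0 * M_PI * F0 * t);
            /* swept clock: instantaneous frequency F0 + DEV*sin(2*pi*FM*t);
             * its phase is the integral of that frequency over time */
            double phase = 2.0 * M_PI * F0 * t
                         + (DEV / FM) * (1.0 - cos(2.0 * M_PI * FM * t));
            swept[n] = sin(phase);
        }

        printf("energy at %.0f: fixed clock %.1f, swept clock %.1f\n",
               F0, bin_magnitude(fixed, F0), bin_magnitude(swept, F0));
        return 0;
    }

The swept version puts only a fraction of its energy at the nominal frequency; the rest is smeared into sidebands across the sweep range, which is exactly what spread-spectrum clocking is after.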