Solution 1:

Apparently this is by design:

If the local clock time of the client is less than three minutes ahead of the time on the server, W32Time will quarter or halve the clock frequency for long enough to bring the clocks into sync. If the client is less than 15 seconds ahead, it will halve the frequency; otherwise, it will quarter the frequency. The amount of time the clock spends running at an unusual frequency depends on the size of the offset that is being corrected.

http://blogs.msmvps.com/acefekay/2009/09/18/configuring-the-windows-time-service-for-windows-server/
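
To make the quoted rules concrete, here is a minimal sketch in Python. It is NOT the actual W32Time implementation, just a model of the policy described above, using the default thresholds from the quote. The client-behind case (stepped forward immediately) comes from the broader W32Time documentation rather than the quote itself. It also works out why the slew time depends on the size of the offset: a clock running at a fraction of nominal speed loses (1 - fraction) seconds per real second.

def choose_correction(offset: float) -> str:
    """Pick a correction strategy for a given offset, per the quoted
    default thresholds. Offset is in seconds; positive means the
    client clock is ahead of the server."""
    if offset <= 0:
        # Client behind the server: per the wider W32Time documentation
        # (not the quote above), the clock is stepped forward immediately.
        return "step clock forward immediately"
    if offset >= 3 * 60:
        # 3+ minutes ahead: stepping is preferred over risking
        # Kerberos authentication failures.
        return "step (reset) clock immediately"
    if offset < 15:
        return "slew: run clock at half frequency"
    return "slew: run clock at quarter frequency"


def slew_duration(offset: float, rate_factor: float) -> float:
    """Real seconds needed to absorb `offset` while the clock runs at
    `rate_factor` times nominal speed (0.5 = halved, 0.25 = quartered).
    The clock falls back (1 - rate_factor) seconds per real second."""
    return offset / (1.0 - rate_factor)


if __name__ == "__main__":
    for offset in (-10, 5, 60, 200):
        print(f"{offset:>4} s: {choose_correction(offset)}")
    # Why slew time depends on offset size:
    print(slew_duration(10, 0.5))   # 10 s ahead at half speed    -> 20 s
    print(slew_duration(60, 0.25))  # 60 s ahead at quarter speed -> 80 s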

I can't find a reference that explicitly explains WHY it is designed this way, but my guess is that the aim is to avoid sudden jumps in case other applications are using the system clock to time operations. So if the offset is small, the 'convergence' method is used: the local clock rate is adjusted until the clocks agree. If the difference is 3 minutes or greater, the risk of Kerberos authentication failing (which by default happens at a clock difference of more than 5 minutes) is considered more serious than the risks involved in 'jumping' the local clock, so the clock is simply reset rather than converged.
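
That step-versus-slew boundary is configurable. As a minimal sketch, assuming a Windows host and Python's standard winreg module, this reads the W32Time registry values that govern it: MaxAllowedPhaseOffset (the largest offset, in seconds, that W32Time will correct by adjusting the clock rate rather than setting the clock directly) and MaxPosPhaseCorrection / MaxNegPhaseCorrection (the largest corrections it will apply at all):

import winreg  # Windows-only standard library module

CONFIG_KEY = r"SYSTEM\CurrentControlSet\Services\W32Time\Config"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, CONFIG_KEY) as key:
    for name in ("MaxAllowedPhaseOffset",
                 "MaxPosPhaseCorrection",
                 "MaxNegPhaseCorrection"):
        try:
            value, _ = winreg.QueryValueEx(key, name)
            print(f"{name} = {value} seconds")
        except FileNotFoundError:
            # Value absent: the service's built-in default applies.
            print(f"{name} not set (service default applies)")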