Increase rsyslog/syslog precision to include fractional seconds?
Most modern Syslog daemons (including rsyslog and syslog-ng) support high precision timestamping. If you're using one of these tools, you should have no difficulty configuring it.
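In rsyslog, for example, switching the default file template is usually all it takes (a minimal sketch; the built-in RSYSLOG_FileFormat template is what produces the RFC 3339 high-precision timestamps):

```
# /etc/rsyslog.conf
# Use the built-in high-precision template (RFC 3339 timestamps
# with fractional seconds, e.g. 2013-04-01T13:22:04.132563+02:00)
# for all file outputs instead of the traditional low-precision format.
$ActionFileDefaultTemplate RSYSLOG_FileFormat
```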
As to its accuracy. . . it depends. First of all, it's going to be dependent on the hardware. Most modern hardware supports high precision time, but not all. Assuming the hardware supports it, there are still a few challenges. Top priority will be making sure that your clock is set accurately, and that any other machines that are logging to it have matching time (assuming you're shipping logs from all your systems to a central loghost). ntpd is the standard tool for maintaining accurate clock time (usually synced to the ntp.org pool).
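A typical ntpd setup pointed at the public pool looks something like this (a sketch; distributions often ship vendor-specific pool hostnames instead):

```
# /etc/ntp.conf
# Sync against the public ntp.org pool; "iburst" speeds up the
# initial synchronization after a daemon restart.
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
server 3.pool.ntp.org iburst
```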
Finally, we come to the event itself. The short answer is there will almost always be at least a little bit of lag, even if it is very slight. There is still going to be some variability here, too, depending on other factors. A lot will depend on where the event comes from, and how it gets picked up. For example, if I have an application that does foo, and then sends a log to syslog saying that the app did foo, it may take 100 ms between the completion of foo and sending the log. It may take another 20 ms for the syslog() system call to complete.
I don't remember the low-level details of syslog, but I don't believe the event is timestamped when it is submitted to syslog, I think it is timestamped by the syslog daemon when it is picked up. That adds another few milliseconds to the mix.
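For what it's worth, rsyslog actually tracks both timestamps: the one carried inside the message (timereported) and the one taken when the daemon received it (timegenerated). A custom template can log either; a sketch in legacy template syntax (the template name and log path here are just examples):

```
# /etc/rsyslog.conf
# Log the daemon's reception time (%timegenerated%) instead of the
# timestamp carried inside the message (%timereported%).
$template ReceptionTime,"%timegenerated% %HOSTNAME% %syslogtag%%msg%\n"
*.* /var/log/receptiontime.log;ReceptionTime
```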
Basically, unless you're dealing with a realtime system with realtime logging facilities, I don't think you'll ever get 100% perfect accuracy. Even then, you'd likely have (microscopic) levels of drift, but at least you'd have constraints to know your margin of error. At the same time, unless you have that level of requirement, your timestamp is probably going to be accurate enough.
Christopher covered the technical considerations: run ntpd, point it at a reliable source, and be cognizant of drift. I wanted to add a couple notes:
- The timestamp is set by the syslog() libc function call. Here's uClibc's implementation. A typical syslog() invocation will complete in well under 1 ms. Here's the Ruby VM wrapping syslog() in 0.15 ms:

  Benchmark.measure { logger.info 'narf' }.real
  => 0.000149965286254883

- While syslog may be suitable for millisecond precision, many Web app frameworks buffer logs and then either flush the buffer periodically or output all logs for a request after the request ends. Nearly all Web servers do the same (that's how the W3C common logfile format knows bytes transmitted). The impact is that millisecond precision is useful in fewer situations than it might seem.

- Clock drift is endemic on VMs. From anecdotal mentions, 3 seconds per day is typical. Running ntpdate as a nightly cron job is not good enough.

Context usually makes order pretty obvious. If you haven't encountered a case where millisecond precision was the only way to know which event happened first, it's probably not worth the time.
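The sub-millisecond figure is easy to check in other languages too. A rough Python equivalent of the Ruby benchmark above (the ident string is just an arbitrary example):

```python
import syslog
import time

# Open a connection to the local syslog socket.
syslog.openlog("narf-demo")

# Time a single syslog() submission.
start = time.perf_counter()
syslog.syslog(syslog.LOG_INFO, "narf")
elapsed = time.perf_counter() - start

syslog.closelog()

# On a typical Linux box this comes out well under a millisecond.
print(f"syslog() call took {elapsed * 1000:.3f} ms")
```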