How should unix timestamps be stored in int columns?

I have a logging table that will receive millions of writes for statistical purposes. All the columns are int foreign keys. I am also going to add a timestamp column to each row. Given that DATETIME takes 8 bytes, I will be using int(10) unsigned (4 bytes) to cut the storage space (and the index on that column) in half.
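
For reference, the table will look roughly like this (table and column names here are just illustrative):

    CREATE TABLE request_log (
        user_id    INT UNSIGNED NOT NULL,      -- foreign key
        action_id  INT UNSIGNED NOT NULL,      -- foreign key
        created_at INT(10) UNSIGNED NOT NULL,  -- unix timestamp
        KEY idx_created_at (created_at)
    );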

However, I'm wondering when this column will stop working. At 3:14:07 AM on 19th January 2038 the value 9,999,999,999 will be a problem for UNIX timestamps - but an unsigned int in MySQL only holds up to 4,294,967,295, and the timestamp 4294967295 shows up as invalid in my PHP application.

So what does this mean? Is the end of storing int timestamps in MySQL going to come sometime in 2021, since the column can't make it all the way to 9999999999?

Answer:

  1. The timestamp for January 2038 is 2147483647 (not 9999999999), so there is no problem.
  2. unsigned isn't needed, since 2147483647 fits fine in a signed MySQL int.
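
A quick check (a minimal sketch - FROM_UNIXTIME converts in the session time zone, so it's pinned to UTC here):

    SET time_zone = '+00:00';
    SELECT FROM_UNIXTIME(2147483647);  -- 2038-01-19 03:14:07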

Solution 1:

Standard UNIX timestamps are a signed 32-bit integer, which in MySQL is a regular "int" column. There's no way you could store 9,999,999,999, as that's well outside the representable range - the highest any 32-bit int can go is 4,294,967,295 (unsigned), and the highest a signed 32-bit int goes is 2,147,483,647.
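
You can check the column limits directly - with strict SQL mode enabled, MySQL rejects out-of-range values rather than clamping them (table name is illustrative):

    CREATE TABLE ts_demo (signed_ts INT, unsigned_ts INT UNSIGNED);
    INSERT INTO ts_demo VALUES (2147483647, 4294967295);  -- both fit
    INSERT INTO ts_demo VALUES (9999999999, 9999999999);  -- out-of-range error in strict mode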

If/when UNIX timestamps move to a 64-bit data type, you'll have to use a MySQL "bigint" to store them.
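
If that happens, the change is just the column type - a sketch, with an illustrative table name:

    CREATE TABLE future_log (
        created_at BIGINT UNSIGNED NOT NULL  -- 8 bytes, room for 64-bit timestamps
    );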

As for int(10), the (10) portion is merely a display width. MySQL still uses a full 32 bits (4 bytes) internally to store the number; the (10) only affects how the value is padded when displayed.
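
You can see this with ZEROFILL, the one case where the display width has a visible effect (note that integer display widths are deprecated as of MySQL 8.0.17):

    CREATE TABLE width_demo (n INT(10) UNSIGNED ZEROFILL);
    INSERT INTO width_demo VALUES (42);
    SELECT n FROM width_demo;  -- displays as 0000000042, but still stored in 4 bytes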