How to Store Historical Data [closed]
Supporting historical data directly within an operational system will make your application much more complex than it would otherwise be. Generally, I would not recommend doing it unless you have a hard requirement to manipulate historical versions of a record within the system.
If you look closely, most requirements for historical data fall into one of two categories:
Audit logging: This is better off done with audit tables. It's fairly easy to write a tool that generates scripts to create audit log tables and triggers by reading metadata from the system data dictionary (a sketch of the sort of output such a tool would generate follows this list). This type of tool can be used to retrofit audit logging onto most systems. You can also use this subsystem for change data capture if you want to implement a data warehouse (see below).
Historical reporting: Reporting on historical state, 'as-at' positions, or analytical reporting over time. It may be possible to fulfil simple historical reporting requirements by querying audit logging tables of the sort described above. If you have more complex requirements, it may be more economical to implement a data mart for the reporting than to try to integrate history directly into the operational system.
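For illustration, here is a minimal sketch (Oracle syntax) of the kind of audit table and trigger such a generator might emit. The table, column, and trigger names are made up; a real tool would derive the column list from the data dictionary rather than hard-coding it:

CREATE TABLE widgets_audit (
    audit_id   NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    widget_id  NUMBER        NOT NULL,
    old_name   VARCHAR2(100),
    new_name   VARCHAR2(100),
    changed_by VARCHAR2(128) DEFAULT USER         NOT NULL,
    changed_at TIMESTAMP     DEFAULT SYSTIMESTAMP NOT NULL
);

CREATE OR REPLACE TRIGGER widgets_audit_trg
AFTER UPDATE OF name ON widgets
FOR EACH ROW
BEGIN
    -- one audit row per changed master row
    INSERT INTO widgets_audit (widget_id, old_name, new_name)
    VALUES (:OLD.id, :OLD.name, :NEW.name);
END;
/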
Slowly changing dimensions are by far the simplest mechanism for tracking and querying historical state and much of the history tracking can be automated. Generic handlers aren't that hard to write. Generally, historical reporting does not have to use up-to-the-minute data, so a batched refresh mechanism is normally fine. This keeps your core and reporting system architecture relatively simple.
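As a rough sketch (the table and columns here are illustrative, not from any particular system), a Type 2 slowly changing dimension keeps one row per version of an entity with a validity interval, which makes 'as-at' queries straightforward:

CREATE TABLE customer_dim (
    customer_sk INTEGER PRIMARY KEY,  -- surrogate key, one per version
    customer_id INTEGER NOT NULL,     -- natural/business key
    name        VARCHAR(100),
    valid_from  DATE NOT NULL,
    valid_to    DATE NOT NULL,        -- e.g. 9999-12-31 for the current row
    is_current  CHAR(1) NOT NULL      -- 'Y' or 'N'
);

-- 'as-at' report: the state of every customer on a given date
SELECT *
FROM customer_dim
WHERE DATE '2020-06-30' >= valid_from
  AND DATE '2020-06-30' <  valid_to;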
If your requirements fall into one of these two categories, you are probably better off not storing historical data in your operational system. Separating the historical functionality into another subsystem will probably be less effort overall and produce transactional and audit/reporting databases that work much better for their intended purpose.
I don't think there is a particular standard way of doing it, but I thought I would throw in a possible method. I work in Oracle, with an in-house web application framework that uses XML to store application data.
We use something called a Master-Detail model that at its simplest consists of:
Master Table, for example called Widgets,
often just containing an ID. It will typically hold the data that won't change over time / isn't historical.
Detail / History Table, for example called Widget_Details,
containing at least:
- ID - primary key. Detail/historical ID
- MASTER_ID - for example in this case called 'WIDGET_ID', this is the FK to the Master record
- START_DATETIME - timestamp indicating when this version of the row became effective
- END_DATETIME - timestamp indicating when this version of the row stopped being effective
- STATUS_CONTROL - single-char column indicating the status of the row. 'C' indicates current; NULL or 'A' would be historical/archived. We only use this because we can't index on END_DATETIME being NULL
- CREATED_BY_WUA_ID - stores the ID of the account that caused the row to be created
- XMLDATA - stores the actual data
So essentially, an entity starts with one row in the master and one row in the detail, the detail row having a NULL END_DATETIME and a STATUS_CONTROL of 'C'. When an update occurs, the current row is updated with an END_DATETIME of the current time and STATUS_CONTROL is set to NULL (or 'A' if preferred). A new row is then created in the detail table, still linked to the same master, with STATUS_CONTROL 'C', the ID of the person making the update, and the new data stored in the XMLDATA column.
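In DDL terms, the two tables might look something like this (a sketch based on the column list above; the exact types and constraints are assumptions):

CREATE TABLE widgets (
    id NUMBER PRIMARY KEY
);

CREATE TABLE widget_details (
    id                NUMBER PRIMARY KEY,
    widget_id         NUMBER NOT NULL REFERENCES widgets (id),
    start_datetime    TIMESTAMP NOT NULL,
    end_datetime      TIMESTAMP,   -- NULL while the row is current
    status_control    CHAR(1),     -- 'C' = current, NULL/'A' = archived
    created_by_wua_id NUMBER NOT NULL,
    xmldata           XMLTYPE
);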
This is the basis of our historical model. The create/update logic is handled in an Oracle PL/SQL package, so you simply pass the function the current ID, your user ID, and the new XML data, and internally it does all the updating/inserting of rows to represent that in the historical model. The start and end times indicate the period during which that row in the table is active.
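The package itself isn't shown in the answer, but its core would be something along these lines (a hedged sketch; the procedure and sequence names are hypothetical):

CREATE OR REPLACE PROCEDURE update_widget (
    p_widget_id IN NUMBER,
    p_wua_id    IN NUMBER,
    p_xmldata   IN XMLTYPE
) AS
BEGIN
    -- close off the current detail row
    UPDATE widget_details
       SET end_datetime   = SYSTIMESTAMP,
           status_control = NULL
     WHERE widget_id      = p_widget_id
       AND status_control = 'C';

    -- insert the new current version, linked to the same master
    INSERT INTO widget_details
           (id, widget_id, start_datetime, end_datetime,
            status_control, created_by_wua_id, xmldata)
    VALUES (widget_details_seq.NEXTVAL, p_widget_id, SYSTIMESTAMP, NULL,
            'C', p_wua_id, p_xmldata);
END update_widget;
/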
Storage is cheap, so we don't generally DELETE data and prefer to keep an audit trail. This allows us to see what our data looked like at any given time. With an index on STATUS_CONTROL = 'C', or by using a view, clutter isn't really a problem. Obviously your queries need to take into account that you should always use the current (NULL END_DATETIME and STATUS_CONTROL = 'C') version of a record.
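To make that access path concrete (index and view names here are made up): Oracle b-tree indexes don't store entries whose indexed values are all NULL, which is why END_DATETIME IS NULL can't be indexed directly, and why STATUS_CONTROL exists. A sketch:

-- if archived rows keep a NULL STATUS_CONTROL, they are not stored in this
-- single-column index at all, so it stays small and finds current rows quickly
CREATE INDEX widget_details_current_ix ON widget_details (status_control);

-- a view over the current versions keeps queries honest
CREATE OR REPLACE VIEW current_widget_details AS
SELECT *
FROM   widget_details
WHERE  status_control = 'C';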
I think your approach is correct. The historical table should be a copy of the main table without indexes; make sure you also have an update timestamp in the table.
If you try the other approach, you will soon face problems:
- maintenance overhead
- more flags in selects
- slower queries
- growth of tables and indexes
In SQL Server 2016 and above, there is a feature called Temporal Tables that aims to solve this challenge with minimal effort from the developer. The concept of a temporal table is similar to Change Data Capture (CDC), with the difference that a temporal table abstracts away most of the things you had to do manually when using CDC.
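For example (a minimal sketch; the table and its columns are made up for illustration), you declare the period columns and turn versioning on, and SQL Server maintains the history table for you:

CREATE TABLE dbo.Widget
(
    WidgetId INT IDENTITY PRIMARY KEY,
    Name     NVARCHAR(100) NOT NULL,
    SysStart DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEnd   DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStart, SysEnd)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.WidgetHistory));

-- point-in-time ('as-at') query, no extra bookkeeping required
SELECT *
FROM dbo.Widget
FOR SYSTEM_TIME AS OF '2020-06-30T00:00:00'
WHERE WidgetId = 42;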
Just wanted to add an option that I started using because I use Azure SQL and the multiple-table thing was way too cumbersome for me. I added an insert/update/delete trigger on my table and then converted the before/after state of the record to JSON using the "FOR JSON AUTO" feature.
DECLARE @beforeJson NVARCHAR(MAX), @afterJson NVARCHAR(MAX);
SET @beforeJson = (SELECT * FROM DELETED FOR JSON AUTO);
SET @afterJson = (SELECT * FROM INSERTED FOR JSON AUTO);
That returns a JSON representation of the record before/after the change. I then store those values in a history table with a timestamp of when the change occurred (I also store the ID of the record concerned). Because the data is serialized, I can control how it is backfilled if the schema changes.
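Put together, the trigger might look roughly like this (a sketch under assumptions: the base table is dbo.Widget with key WidgetId, neither of which is given in the answer, and, like the approach described, it effectively treats changes as single-row):

CREATE TABLE dbo.WidgetChangeLog
(
    LogId      INT IDENTITY PRIMARY KEY,
    WidgetId   INT NULL,
    BeforeJson NVARCHAR(MAX) NULL,   -- NULL on insert
    AfterJson  NVARCHAR(MAX) NULL,   -- NULL on delete
    ChangedAt  DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO
CREATE TRIGGER dbo.Widget_ChangeLog
ON dbo.Widget
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @beforeJson NVARCHAR(MAX) = (SELECT * FROM DELETED FOR JSON AUTO);
    DECLARE @afterJson  NVARCHAR(MAX) = (SELECT * FROM INSERTED FOR JSON AUTO);

    -- note: a multi-row statement produces a JSON array here, and the single
    -- WidgetId below then only captures one of the affected records
    DECLARE @widgetId INT = COALESCE((SELECT TOP (1) WidgetId FROM INSERTED),
                                     (SELECT TOP (1) WidgetId FROM DELETED));

    INSERT INTO dbo.WidgetChangeLog (WidgetId, BeforeJson, AfterJson)
    VALUES (@widgetId, @beforeJson, @afterJson);
END;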
I learned about this from this link here