How to version control a record in a database

Database Design, Architecture, Versioning, Auditing

Database Design Problem Overview


Let's say that I have a record in the database and that both admin and normal users can do updates.

Can anyone suggest a good approach/architecture on how to version control every change in this table so it's possible to roll back a record to a previous revision?

Database Design Solutions


Solution 1 - Database Design

Let's say you have a FOO table that admins and users can update. Most of the time you can write queries against the FOO table. Happy days.

Then, I would create a FOO_HISTORY table. This has all the columns of the FOO table. The primary key is the same as FOO's plus a RevisionNumber column. There is a foreign key from FOO_HISTORY to FOO. You might also add columns related to the revision, such as the UserId and RevisionDate. Populate the RevisionNumbers in an ever-increasing fashion across all the *_HISTORY tables (e.g. from an Oracle sequence or equivalent). Do not rely on there only being one change in a second (i.e. do not put RevisionDate into the primary key).

Now, every time you update FOO, just before you do the update you insert the old values into FOO_HISTORY. You do this at some fundamental level in your design so that programmers can't accidentally miss this step.
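A minimal sketch of this pattern in Python with SQLite. The table columns and the `update_foo` helper are hypothetical, and since SQLite has no sequences, this uses a per-row MAX + 1 revision number rather than the global sequence the answer recommends:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE FOO (
    id    INTEGER PRIMARY KEY,
    name  TEXT,
    value TEXT
);
CREATE TABLE FOO_HISTORY (
    id              INTEGER,
    revision_number INTEGER,
    name            TEXT,
    value           TEXT,
    user_id         TEXT,
    revision_date   TEXT DEFAULT (datetime('now')),
    PRIMARY KEY (id, revision_number),
    FOREIGN KEY (id) REFERENCES FOO (id)
);
""")

def update_foo(foo_id, new_value, user_id):
    """Copy the current row into FOO_HISTORY, then apply the update."""
    with conn:  # one transaction, so the snapshot and the update stay together
        conn.execute("""
            INSERT INTO FOO_HISTORY (id, revision_number, name, value, user_id)
            SELECT id,
                   COALESCE((SELECT MAX(revision_number) + 1
                             FROM FOO_HISTORY WHERE id = FOO.id), 1),
                   name, value, ?
            FROM FOO WHERE id = ?
        """, (user_id, foo_id))
        conn.execute("UPDATE FOO SET value = ? WHERE id = ?", (new_value, foo_id))

conn.execute("INSERT INTO FOO (id, name, value) VALUES (1, 'widget', 'v1')")
update_foo(1, "v2", "admin")
update_foo(1, "v3", "alice")
print(conn.execute("SELECT value FROM FOO WHERE id = 1").fetchone())  # ('v3',)
print(conn.execute("SELECT revision_number, value FROM FOO_HISTORY "
                   "ORDER BY revision_number").fetchall())  # [(1, 'v1'), (2, 'v2')]
```

Wrapping the snapshot and the update in one transaction is the "fundamental level" the answer describes: callers use `update_foo` and cannot skip the history insert.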

If you want to delete a row from FOO you have some choices. Either cascade and delete all the history, or perform a logical delete by flagging FOO as deleted.

This solution is good when you are largely interested in the current values and only occasionally in the history. If you always need the history then you can put effective start and end dates and keep all the records in FOO itself. Every query then needs to check those dates.

Solution 2 - Database Design

I think you are looking to version the content of database records (as StackOverflow does when someone edits a question or answer). A good starting point might be looking at a database model that uses revision tracking.

The best example that comes to mind is MediaWiki, the Wikipedia engine. Compare the database diagram here, particularly the revision table.

Depending on what technologies you're using, you'll have to find some good diff/merge algorithms.

Check this question if it's for .NET.

Solution 3 - Database Design

In the BI world, you could accomplish this by adding a startDate and endDate to the table you want to version. When you insert the first record into the table, the startDate is populated, but the endDate is null. When you insert the second record, you also update the endDate of the first record with the startDate of the second record.

When you want to view the current record, you select the one where endDate is null.

This is sometimes called a type 2 Slowly Changing Dimension. See also TupleVersioning.
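The startDate/endDate scheme above can be sketched in Python with SQLite; the `plan` table and its columns are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE plan (
    id         INTEGER,
    rate       REAL,
    start_date TEXT,
    end_date   TEXT    -- NULL marks the current version
)
""")

def insert_version(plan_id, rate, as_of):
    """Close out the current version (if any), then insert the new one."""
    with conn:
        conn.execute(
            "UPDATE plan SET end_date = ? WHERE id = ? AND end_date IS NULL",
            (as_of, plan_id))
        conn.execute(
            "INSERT INTO plan (id, rate, start_date, end_date) VALUES (?, ?, ?, NULL)",
            (plan_id, rate, as_of))

insert_version(1, 9.99, "2024-01-01")
insert_version(1, 12.50, "2024-06-01")

# Current record: the one whose end_date is NULL.
print(conn.execute(
    "SELECT rate FROM plan WHERE id = 1 AND end_date IS NULL").fetchone())  # (12.5,)

# Value as of a past date: start_date <= d < end_date.
print(conn.execute("""
    SELECT rate FROM plan
    WHERE id = 1 AND start_date <= '2024-03-15'
      AND (end_date IS NULL OR end_date > '2024-03-15')
""").fetchone())  # (9.99,)
```

The second query shows the payoff of a type 2 dimension: any past point in time can be answered with a simple range check on the two dates.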

Solution 4 - Database Design

Upgrade to SQL Server 2008.

Try using Change Tracking in SQL Server 2008. Instead of timestamping and tombstone-column hacks, you can use this new feature to track changes to the data in your database.

MSDN SQL 2008 Change Tracking

Solution 5 - Database Design

Just wanted to add that one good solution to this problem is to use a temporal database (https://en.wikipedia.org/wiki/Temporal_database). Many database vendors offer this feature either out of the box or via an extension. I've successfully used the temporal_tables extension (http://pgxn.org/dist/temporal_tables/1.0.0/) with PostgreSQL, but others have it too. Whenever you update a record in the database, the database holds on to the previous version of that record too.

Solution 6 - Database Design

Two options:

  1. Have a history table - insert the old data into this history table whenever the original is updated.
  2. Have an audit table - store the before and after values, but only for the modified columns, along with other information such as who made the update and when.

Solution 7 - Database Design

You can perform auditing on a SQL table via SQL triggers. From a trigger you can access two special tables (inserted and deleted). These tables contain the exact rows that were inserted or deleted each time the table is modified. In the trigger SQL you can take these modified rows and insert them into the audit table. This approach means your auditing is transparent to the programmer, requiring no effort from them and no implementation knowledge.

The added bonus of this approach is that the auditing occurs regardless of whether the SQL operation took place via your data access DLLs or via a manual SQL query, as the auditing is performed on the server itself.
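The inserted and deleted tables are SQL Server specific, but the same idea can be sketched in SQLite, which exposes the before/after row as OLD and NEW inside a trigger (the `account` table and its columns are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL);
CREATE TABLE account_audit (
    id          INTEGER,
    old_balance REAL,
    new_balance REAL,
    changed_at  TEXT DEFAULT (datetime('now'))
);
-- The trigger fires for every UPDATE, no matter which client issued it.
CREATE TRIGGER account_update_audit AFTER UPDATE ON account
BEGIN
    INSERT INTO account_audit (id, old_balance, new_balance)
    VALUES (OLD.id, OLD.balance, NEW.balance);
END;
""")

conn.execute("INSERT INTO account VALUES (1, 100.0)")
conn.execute("UPDATE account SET balance = 75.0 WHERE id = 1")
conn.execute("UPDATE account SET balance = 50.0 WHERE id = 1")
print(conn.execute("SELECT old_balance, new_balance FROM account_audit "
                   "ORDER BY rowid").fetchall())
# [(100.0, 75.0), (75.0, 50.0)]
```

Note that the application code above never mentions auditing; the trail is produced entirely on the database side, which is exactly the transparency the answer describes.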

Solution 8 - Database Design

Alok suggested an audit table above; I would like to expand on it in my post.

I adopted this schema-less, single-table design on my project.

Schema:

  • id - INTEGER AUTO INCREMENT
  • username - STRING
  • tablename - STRING
  • oldvalue - TEXT / JSON
  • newvalue - TEXT / JSON
  • createdon - DATETIME

This table can hold historical records for each table all in one place, with the complete object history in one record. It can be populated using triggers / hooks wherever data changes, storing old- and new-value snapshots of the target row.
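A sketch of this single audit table in Python with SQLite, using an application-level hook rather than a trigger; the `audited_update` helper and the `product` table are hypothetical:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT, price REAL);
CREATE TABLE audit_log (
    id        INTEGER PRIMARY KEY AUTOINCREMENT,
    username  TEXT,
    tablename TEXT,
    oldvalue  TEXT,   -- JSON snapshot of the row before the change
    newvalue  TEXT,   -- JSON snapshot of the row after the change
    createdon TEXT DEFAULT (datetime('now'))
);
""")

def audited_update(table, row_id, changes, username):
    """Apply an update and record old/new JSON snapshots in audit_log."""
    cols = [c[1] for c in conn.execute(f"PRAGMA table_info({table})")]
    old = dict(zip(cols, conn.execute(
        f"SELECT * FROM {table} WHERE id = ?", (row_id,)).fetchone()))
    new = {**old, **changes}
    with conn:
        assigns = ", ".join(f"{k} = ?" for k in changes)
        conn.execute(f"UPDATE {table} SET {assigns} WHERE id = ?",
                     (*changes.values(), row_id))
        conn.execute(
            "INSERT INTO audit_log (username, tablename, oldvalue, newvalue) "
            "VALUES (?, ?, ?, ?)",
            (username, table, json.dumps(old), json.dumps(new)))

conn.execute("INSERT INTO product VALUES (1, 'gizmo', 5.0)")
audited_update("product", 1, {"price": 7.5}, "admin")
row = conn.execute("SELECT tablename, oldvalue, newvalue FROM audit_log").fetchone()
print(row[0], json.loads(row[1])["price"], json.loads(row[2])["price"])
# product 5.0 7.5
```

Because the snapshots are JSON, one `audit_log` table serves every audited table regardless of its schema, which is the "schema-less" property being claimed.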

Pros with this design:

  • Fewer tables to manage for history tracking.
  • Stores a full snapshot of each row's old and new state.
  • Easy to search per table.
  • Can be partitioned by table.
  • A data retention policy can be defined per table.

Cons with this design:

  • Data size can grow large if the system has frequent changes.

Solution 9 - Database Design

You don't say which database, and I don't see it in the post tags. If it's Oracle, I can recommend the approach that is built into Oracle Designer: use journal tables. If it's any other database, well, I basically recommend the same approach, too...

The way it works, in case you want to replicate it in another DB (or just understand it), is that for each table a shadow table is created too: a normal database table with the same field specs, plus some extra fields, such as the last action taken (a string, typically "INS" for insert, "UPD" for update, and "DEL" for delete), a datetime for when the action took place, and a user id for who did it.

Through triggers, every action to any row in the table inserts a new row in the journal table with the new values, what action was taken, when, and by what user. You don't ever delete any rows (at least not for the last few months). Yes it'll grow big, easily millions of rows, but you can easily track the value for any record at any point in time since the journaling started or the old journal rows got last purged, and who made the last change.
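A rough SQLite imitation of such journal triggers; the column names are loosely modeled on the description above, not on Oracle Designer's actual generated code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE emp_jn (            -- shadow "journal" table: same fields plus metadata
    id INTEGER, name TEXT,
    jn_action   TEXT,            -- 'INS', 'UPD' or 'DEL'
    jn_datetime TEXT DEFAULT (datetime('now'))
);
CREATE TRIGGER emp_jn_ins AFTER INSERT ON emp
BEGIN INSERT INTO emp_jn (id, name, jn_action) VALUES (NEW.id, NEW.name, 'INS'); END;
CREATE TRIGGER emp_jn_upd AFTER UPDATE ON emp
BEGIN INSERT INTO emp_jn (id, name, jn_action) VALUES (NEW.id, NEW.name, 'UPD'); END;
CREATE TRIGGER emp_jn_del AFTER DELETE ON emp
BEGIN INSERT INTO emp_jn (id, name, jn_action) VALUES (OLD.id, OLD.name, 'DEL'); END;
""")

conn.execute("INSERT INTO emp VALUES (1, 'Ann')")
conn.execute("UPDATE emp SET name = 'Anne' WHERE id = 1")
conn.execute("DELETE FROM emp WHERE id = 1")
print(conn.execute("SELECT jn_action, name FROM emp_jn ORDER BY rowid").fetchall())
# [('INS', 'Ann'), ('UPD', 'Anne'), ('DEL', 'Anne')]
```

Even after the row is deleted from `emp`, its full history (including who last saw it as 'Anne') survives in `emp_jn`, which is the point of never purging the journal.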

In Oracle everything you need is generated automatically as SQL code, all you have to do is to compile/run it; and it comes with a basic CRUD application (actually only "R") to inspect it.

Solution 10 - Database Design

I am also doing the same thing. I am making a database for lesson plans. These plans need atomic change versioning flexibility. In other words, each change, no matter how small, to the lesson plans needs to be allowed but the old version needs to be kept intact as well. That way, lesson creators can edit lesson plans while students are using them.

The way it would work is that once a student has done a lesson, their results are attached to the version they completed. If a change is made, their results will always point to their version.

In this way, if a lesson criterion is deleted or moved, their results won't change.

The way I am currently doing this is by handling all the data in one table. Normally I would just have one id field, but with this system, I am using an id and a sub_id. The sub_id always stays with the row, through updates and deletes. The id is auto-incremented. The lesson plan software will link to the newest sub_id. The student results will link to the id. I have also included a timestamp for tracking when changes happened, but it isn't necessary to handle the versioning.
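The id/sub_id scheme can be sketched like this (Python with SQLite; the `lesson` table and column names follow the description above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE lesson (
    id     INTEGER PRIMARY KEY AUTOINCREMENT,  -- one id per version
    sub_id INTEGER,                            -- stable identity across versions
    body   TEXT
)
""")

def save_version(sub_id, body):
    """Every save is an insert; old versions are never touched."""
    conn.execute("INSERT INTO lesson (sub_id, body) VALUES (?, ?)", (sub_id, body))

save_version(100, "draft")
save_version(100, "revised")
save_version(100, "final")

# The lesson-plan software follows the newest version of sub_id 100 ...
newest = conn.execute(
    "SELECT body FROM lesson WHERE sub_id = 100 AND id = "
    "(SELECT MAX(id) FROM lesson WHERE sub_id = 100)").fetchone()
# ... while a student's result stays pinned to the exact id they completed.
pinned = conn.execute("SELECT body FROM lesson WHERE id = 1").fetchone()
print(newest, pinned)  # ('final',) ('draft',)
```

The `MAX(id)` subquery is the lookup cost the author mentions; the endDate-is-null variant trades that subquery for an extra column and an extra write per update.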

One thing I might change, once I've tested it, is that I might use the previously mentioned endDate-null idea. In my system, to find the newest version, I have to find the max(id). The other system just looks for endDate = null. Not sure if the benefits outweigh having another date field.

My two cents.

Solution 11 - Database Design

While @WW.'s answer is a good one, another way is to add a version column and keep all your versions in the same table.

For the single-table approach you either:

  • Use a flag to indicate the latest version, à la WordPress,
  • OR do a nasty greater-than-revision outer join.

An example SQL of the outer join method using revision numbers is:

SELECT tc.*
FROM text_content tc
LEFT OUTER JOIN text_content mc ON tc.path = mc.path
AND mc.revision > tc.revision
WHERE mc.revision is NULL 
AND tc.path = '/stuff' -- path in this case is our natural id.

The bad news is that the above requires an outer join, and outer joins can be slow. The good news is that creating new entries is theoretically cheaper because you can do it in one write operation without transactions (assuming your database is atomic).

An example making a new revision for '/stuff' might be:

INSERT INTO text_content (id, path, data, revision, revision_comment, enabled, create_time, update_time)
(
SELECT
(md5(random()::text)) -- {id}
, tc.path
, 'NEW' -- {data}
, (tc.revision + 1)
, 'UPDATE' -- {comment}
, 't' -- {enabled}
, tc.create_time
, now() 
FROM text_content tc
LEFT OUTER JOIN text_content mc ON tc.path = mc.path
AND mc.revision > tc.revision
WHERE mc.revision is NULL 
AND tc.path = '/stuff' -- {path}
)

We insert by using the old data. This is particularly useful if, say, you only want to update one column while avoiding optimistic locking and/or transactions.

The flag approach and the history-table approach both require two rows to be inserted/updated.

The other advantage of the outer-join revision-number approach is that you can always refactor to the multiple-table approach later with triggers, because your trigger would essentially do something like the above.

Solution 12 - Database Design

As an additional step to the answers above, I would suggest giving each generated change a unique ID, likely something built from the date/time plus a unique counter for each day (so that multiple updates within a second don't collide). I would include an action-type code within this ID, e.g. "9129128213939REPLACE". This provides enough robustness to sanity-check that your history mechanism is working correctly.
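One possible shape for such an ID generator; this is a sketch, and the exact format and counter width are arbitrary choices rather than anything from the answer:

```python
from datetime import datetime, timezone
from itertools import count

# One counter per calendar day, so IDs within a day are strictly increasing
# even when several changes land in the same second.
_counters = {}

def change_id(action, now=None):
    """Build an ID like 20240601120000-000001-REPLACE."""
    now = now or datetime.now(timezone.utc)
    day = now.strftime("%Y%m%d")
    seq = _counters.setdefault(day, count(1))
    return f"{day}{now.strftime('%H%M%S')}-{next(seq):06d}-{action}"

a = change_id("REPLACE")
b = change_id("REPLACE")
print(a)
print(b)
assert a != b                      # distinct even within the same second
assert a.endswith("REPLACE")       # action code is recoverable from the ID
```

In a real system the counter would live in the database (or a sequence) rather than process memory, so that IDs stay unique across application restarts and multiple servers.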

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | Niels Bosma | View Question on Stackoverflow
Solution 1 - Database Design | WW. | View Answer on Stackoverflow
Solution 2 - Database Design | Christian C. Salvadó | View Answer on Stackoverflow
Solution 3 - Database Design | Dave Neeley | View Answer on Stackoverflow
Solution 4 - Database Design | D3vtr0n | View Answer on Stackoverflow
Solution 5 - Database Design | wuher | View Answer on Stackoverflow
Solution 6 - Database Design | alok | View Answer on Stackoverflow
Solution 7 - Database Design | Doctor Jones | View Answer on Stackoverflow
Solution 8 - Database Design | Hassan Farid | View Answer on Stackoverflow
Solution 9 - Database Design | bart | View Answer on Stackoverflow
Solution 10 - Database Design | Jordan | View Answer on Stackoverflow
Solution 11 - Database Design | Adam Gent | View Answer on Stackoverflow
Solution 12 - Database Design | James | View Answer on Stackoverflow