Audit-trail for ORM change tracking
Since most CRUD operations are centered within the scope of our mORMot server, we implemented in the ORM an integrated means of tracking changes (aka Audit Trail) of any TSQLRecord class.
In short, our ORM is transformed into a time-machine, just like the good old DeLorean!
Keeping track of the history of business objects is a very common need for software modeling, and a must-have for any accurate data modeling, like Domain-Driven Design.
By default, as expected by the OOP model, any change to an object will forget any previous state of this object. But thanks to mORMot's exclusive change-tracking feature, you can persist the history of your objects.
By default, this change-tracking feature is disabled, saving performance and disk space.
But you can enable change tracking for any class, by calling the following method, on server side:
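aServer.TrackChanges([TSQLInvoice]);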
This single line will let aServer: TSQLRestServer monitor all CRUD operations, and store all changes of the TSQLInvoice table within the TSQLRecordHistory table.
Since all content changes will be stored in this single table by default (note that the TrackChanges() method accepts an array of classes as parameters, and can be called several times), it may be handy to define several tables for history storage. Later on, an external database engine may be defined to store the history, e.g. on cheap hardware (and big hard drives), whereas your main database may be powered by high-end hardware (and small SSDs) - see External database.
To do so, you define your custom class for history storage, then supply it as a parameter:
type
  TSQLRecordSecondaryHistory = class(TSQLRecordHistory);
(...)
aServer.TrackChanges([TSQLInvoice],TSQLRecordSecondaryHistory);
Then, all history will be stored in this TSQLRecordSecondaryHistory class (in its own table named SecondaryHistory), and not the default TSQLRecordHistory class (in its History table).
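Following the hardware tiering idea above, such a dedicated history class can then be mapped to an external engine. Here is a minimal sketch, assuming the VirtualTableExternalRegister() function from mORMotDB.pas, and an already created TSQLDBConnectionProperties instance - the aModel and aExternalDB variables are illustrative:

// map the history table to an external database, before server creation
VirtualTableExternalRegister(aModel,TSQLRecordSecondaryHistory,aExternalDB,'SecondaryHistory');
aServer := TSQLRestServerDB.Create(aModel,'main.db3');
aServer.CreateMissingTables;
aServer.TrackChanges([TSQLInvoice],TSQLRecordSecondaryHistory);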
A true Time Machine for your objects
Once the object changes are tracked, you can later on browse the history of the object, by using the TSQLRecordHistory.CreateHistory() constructor, then its HistoryCount and HistoryGet() methods:
var aHist: TSQLRecordSecondaryHistory;
    aInvoice: TSQLInvoice;
    aEvent: TSQLEvent;
    aTimeStamp: TModTime;
    i: integer;
(...)
aInvoice := TSQLInvoice.Create;
aHist := TSQLRecordSecondaryHistory.CreateHistory(aClient,TSQLInvoice,400);
try
  writeln('Number of items in the record history: ',aHist.HistoryCount);
  for i := 0 to aHist.HistoryCount-1 do begin
    aHist.HistoryGet(i,aEvent,aTimeStamp,aInvoice);
    writeln;
    writeln('Event: ',GetEnumName(TypeInfo(TSQLEvent),ord(aEvent))^);
    writeln('TimeStamp: ',TTimeLogBits(aTimeStamp).ToText);
    writeln('Value: ',aInvoice.GetJSONValues(true,true,soSelect));
  end;
finally
  aHist.Free;
  aInvoice.Free;
end;
As a result, our ORM is also transformed into a true time machine, for the objects which need it.
This feature will be available on both client and server sides, via the TSQLRecordHistory table.
Automatic history packing
The TSQLRecordHistory class will in fact create a History table in the main database, defined as such:
TSQLRecordHistory = class(TSQLRecord)
(...)
published
  /// identifies the modified record
  property ModifiedRecord: PtrInt read fModifiedRecord;
  /// the kind of modification stored
  property Event: TSQLEvent read fEvent;
  /// for seAdd/seUpdate, the data stored as JSON
  property SentDataJSON: RawUTF8 index 4000 read fSentData;
  /// when the modification was recorded
  property TimeStamp: TModTime read fTimeStamp;
  /// after some events are written as individual SentData content, they
  // will be gathered and compressed within one BLOB field
  property History: TSQLRawBlob read fHistory;
end;
In short, any modification via the ORM will be stored in the TSQLRecordHistory table, as a JSON object of the changed fields, in its SentDataJSON property.
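For instance, updating the total amount of a given invoice could append a history row whose SentDataJSON property contains something like the following (the Total field name is purely illustrative, not part of the framework):

{"Total":250}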
By design, direct SQL changes are not handled. If some SQL statements like DELETE FROM ... or UPDATE ... SET ... are executed within your application or from any external program, then the History table won't be updated.
In fact, the ORM does not set any DB triggers to track low-level changes: it would slow down the process, and void the persistence agnosticism paradigm we want to follow, e.g. allowing the use of a NoSQL database like MongoDB.
When the history grows, the JSON content may become huge, and fill the disk space with a lot of duplicated content. In order to save disk space, when a record reaches a defined number of JSON data rows, all this JSON content is gathered and compressed into a BLOB, in the History field.
You can force this packing process by calling TSQLRestServer.TrackChangesFlush() manually in your code. Calling this method will also have a welcome side effect: it will read the actual content of the record from the database, then add a fake seUpdate event in the history if the field values do not match the ones computed from the tracked changes, to ensure that the audit trail will be correct. As a consequence, the history will always stay synchronized with the actual data persisted in the database, even if some external SQL did by-pass the CRUD methods of the ORM, via unsafe DELETE FROM ... or UPDATE ... SET ... statements.
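For instance, a minimal sketch of such a manual flush, assuming change tracking was registered with the TSQLRecordSecondaryHistory class as above:

// force all pending JSON history rows to be packed into the BLOB field
aServer.TrackChangesFlush(TSQLRecordSecondaryHistory);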
You can tune how packing is defined for a given
table, by using some optional parameters to the registering method:
procedure TrackChanges(const aTable: array of TSQLRecordClass;
  aTableHistory: TSQLRecordHistoryClass=nil;
  aMaxHistoryRowBeforeBlob: integer=1000;
  aMaxHistoryRowPerRecord: integer=10;
  aMaxUncompressedBlobSize: integer=64*1024); virtual;
Take a look at the documentation of this method (or the comments in its
declaration code) for further information.
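For instance, a sketch of a more aggressive packing setup (the threshold values below are arbitrary, for illustration only):

// pack into the BLOB after 500 rows, 8 JSON rows per record, 32 KB uncompressed max
aServer.TrackChanges([TSQLInvoice],TSQLRecordSecondaryHistory,500,8,32*1024);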
Default options will let TrackChangesFlush() be called after 1000 individual SentDataJSON rows are stored, then will compress them into a BLOB once 10 JSON rows are available for a given record, ensuring that the uncompressed BLOB size for a single record won't use more than 64 KB of memory (but probably much less in the database, since it is stored with a very high compression rate).
Feedback is welcome on our forum, as usual.