Since all content changes will be stored in this single table by default
(note that the TrackChanges() method accepts an array of classes as
parameters, and can be called several times), it may be handy to define
several tables for history storage. Later on, an external database engine
may be defined to store the history, e.g. on cheap hardware (and big hard
drives), whereas your main database may be powered by high-end hardware (and
small SSDs) - see External database access.
To do so, define your custom class for history storage, then supply it as a
parameter:
type
  TSQLRecordSecondaryHistory = class(TSQLRecordHistory);
(...)
aServer.TrackChanges([TSQLInvoice],TSQLRecordSecondaryHistory);
Then, all history will be stored in this TSQLRecordSecondaryHistory class
(in its own table named SecondaryHistory), and not in the default
TSQLRecordHistory class (in its History table).
A true Time Machine for your objects
Once the object changes are tracked, you can later browse the history of
the object, by using the TSQLRecordHistory.CreateHistory() constructor, then
the HistoryGetLast, HistoryCount, and HistoryGet() methods:
var
  aHist: TSQLRecordSecondaryHistory;
  aInvoice: TSQLInvoice;
  aEvent: TSQLEvent;
  aTimeStamp: TModTime;
  i: integer;
(...)
aInvoice := TSQLInvoice.Create;
aHist := TSQLRecordSecondaryHistory.CreateHistory(aClient,TSQLInvoice,400);
try
  writeln('Number of items in the record history: ',aHist.HistoryCount);
  for i := 0 to aHist.HistoryCount-1 do begin
    aHist.HistoryGet(i,aEvent,aTimeStamp,aInvoice);
    writeln;
    writeln('Event: ',GetEnumName(TypeInfo(TSQLEvent),ord(aEvent))^);
    writeln('TimeStamp: ',TTimeLogBits(aTimeStamp).ToText);
    writeln('Value: ',aInvoice.GetJSONValues(true,true,soSelect));
  end;
finally
  aHist.Free;
  aInvoice.Free;
end;
As a result, our ORM is also transformed into a true time machine, for the
objects which need it.
This feature is available on both client and server sides, via the
TSQLRecordHistory table.
Automatic history packing
This TSQLRecordHistory class will in fact create a History table in the
main database, defined as such:
TSQLRecordHistory = class(TSQLRecord)
(...)
published
  /// identifies the modified record
  property ModifiedRecord: PtrInt read fModifiedRecord;
  /// the kind of modification stored
  property Event: TSQLEvent read fEvent;
  /// for seAdd/seUpdate, the data stored as JSON
  property SentDataJSON: RawUTF8 index 4000 read fSentData;
  /// when the modification was recorded
  property TimeStamp: TModTime read fTimeStamp;
  /// after some events are written as individual SentData content, they
  // will be gathered and compressed within one BLOB field
  property History: TSQLRawBlob read fHistory;
end;
In short, any modification via the ORM will be stored in the
TSQLRecordHistory table, as a JSON object of the changed fields, in
TSQLRecordHistory.SentDataJSON.
By design, direct SQL changes are not handled. If SQL statements like
DELETE FROM ... or UPDATE ... SET ... are executed within your application
or from any external program, the History table won't be updated.
In fact, the ORM does not set any DB triggers to track low-level changes: it
would slow down the process, and void the persistence agnosticism
paradigm we want to follow, e.g. allowing the use of a NoSQL database like MongoDB.
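The consequence of hooking at the persistence layer rather than via triggers can be illustrated with a minimal, language-agnostic sketch (shown here in Python with made-up names, not the actual mORMot classes): only writes going through the ORM's CRUD path are logged, while a "raw SQL" mutation leaves the audit trail untouched.

```python
import json

class TrackedTable:
    """Tiny sketch: the history row is written by the ORM-level update()
    call itself, not by any database trigger."""
    def __init__(self):
        self.rows = {}      # stands in for the database table
        self.history = []   # stands in for the History audit table

    def update(self, row_id, fields):
        # the ORM CRUD path: persist the change AND record it as JSON
        self.rows.setdefault(row_id, {}).update(fields)
        self.history.append((row_id, 'seUpdate', json.dumps(fields)))

t = TrackedTable()
t.update(1, {'Total': 100})   # tracked: produces one history row
t.rows[1]['Total'] = 999      # like raw SQL: bypasses update(), no history row
```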
When the history grows, the JSON content may become huge, and fill the disk
space with a lot of duplicated content. In order to save disk space, when a
record reaches a defined number of JSON data rows, all this JSON content is
gathered and compressed into a BLOB, in TSQLRecordHistory.History.
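The packing idea itself is simple, and a sketch in Python (with a hypothetical row layout and threshold, not the actual mORMot storage format) shows why it saves space: the repeated field names in the individual JSON rows compress very well once gathered into a single blob.

```python
import json
import zlib

# Illustrative threshold, standing in for aMaxHistoryRowPerRecord
MAX_ROWS_BEFORE_PACK = 10

def pack_history(rows):
    """Gather individual JSON rows and compress them into one blob."""
    payload = json.dumps(rows).encode('utf-8')
    return zlib.compress(payload, 9)

def unpack_history(blob):
    """Restore the original JSON rows from the compressed blob."""
    return json.loads(zlib.decompress(blob).decode('utf-8'))

rows = [{'Event': 'seUpdate', 'Total': n} for n in range(MAX_ROWS_BEFORE_PACK)]
blob = pack_history(rows)
# The blob round-trips losslessly, and is smaller than the raw JSON rows
```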
You can force this packing process by calling
TSQLRestServer.TrackChangesFlush() manually in your code. Calling
this method will also have a welcome side effect: it will read the actual
content of the record from the database, then add a fake seUpdate
event in the history if the field values do not match the ones computed from
the tracked changes, to ensure that the audit trail will be correct. As a
consequence, the history will always stay synchronized with the actual data
persisted in the database, even if external SQL did bypass the CRUD methods of
the ORM, via unsafe DELETE FROM ... or UPDATE ... SET ...
statements.
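This reconciliation step can be sketched as follows (again a conceptual Python illustration with invented structures; the real TrackChangesFlush() works on TSQLRecordHistory rows): replay the tracked events, compare with the actual row, and append a synthetic seUpdate event when they differ.

```python
import json

def replay(history):
    """Rebuild the expected record state from the tracked change events."""
    state = {}
    for event, changes in history:
        if event == 'seDelete':
            state = {}
        else:  # seAdd / seUpdate store the changed fields as JSON
            state.update(json.loads(changes))
    return state

def flush(history, actual_record):
    """If the actual DB row differs from the replayed history (e.g. after an
    untracked UPDATE ... SET), append a synthetic seUpdate event so that
    the audit trail matches reality again."""
    if replay(history) != actual_record:
        history.append(('seUpdate', json.dumps(actual_record)))
    return history

hist = [('seAdd', '{"Total": 100}')]
# Some external SQL changed Total to 250 behind the ORM's back:
flush(hist, {'Total': 250})
# The history now replays to the actual persisted state
```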
You can tune how packing is performed for a given TSQLRecord table, by
using some optional parameters of the registering method:
procedure TrackChanges(const aTable: array of TSQLRecordClass;
  aTableHistory: TSQLRecordHistoryClass=nil;
  aMaxHistoryRowBeforeBlob: integer=1000;
  aMaxHistoryRowPerRecord: integer=10;
  aMaxUncompressedBlobSize: integer=64*1024); virtual;
Take a look at the documentation of this method (or the comments in its
declaration code) for further information.
Default options will let TSQLRestServer.TrackChangesFlush() be called after
1000 individual TSQLRecordHistory.SentDataJSON rows are stored, then will
compress them into a BLOB once 10 JSON rows are available for a given
record, ensuring that the uncompressed BLOB size for a single record won't
use more than 64 KB of memory (but probably much less in the database, since
it is stored with a very high compression ratio).
Feedback is welcome on our forum, as usual.