When logging is not enough
By default, any interface-based service execution is logged by the framework - see Framework log integration - using the dedicated sllServiceCall and sllServiceReturn log levels.
You may see output similar to the following:
18:03:18 Enter mORMot.TSQLRestServerFullMemory(024500A0).URI(POST root/DomUserQuery.SelectByLogonName/1 inlen=7)
18:03:18 Service call mORMot.TSQLRestServerFullMemory(024500A0) DomUserQuery.SelectByLogonName["979"]
18:03:18 Server mORMot.TSQLRestServerFullMemory(024500A0) POST root/DomUserQuery.SelectByLogonName SOA-Interface -> 200 with outlen=21 in 16 us
18:03:18 Service return mORMot.TSQLRestServerFullMemory(024500A0) {"result":[0],"id":1}
18:03:18 Leave 00.000.017
The above lines match the execution of the following method, as defined in dddDomUserCQRS.pas:
IDomUserQuery = interface(ICQRSService)
  ['{198C01D6-5189-4B74-AAF4-C322237D7D53}']
  /// would select a single TUser from its logon name
  // - then use Get() method to retrieve its content
  function SelectByLogonName(const aLogonName: RawUTF8): TCQRSResult;
  ...
The actual execution was:
IDomUserQuery.SelectByLogonName('979') -> cqrsSuccess
Here cqrsSuccess is the first item of the enumeration result, returned as an integer JSON value "result":[0] by the method:
TCQRSResult = (cqrsSuccess, cqrsSuccessWithMoreData, cqrsUnspecifiedError, cqrsBadRequest, cqrsNotFound, ...
This detailed log (including micro-second timing on the "Leave" rows) is very helpful for support, especially when investigating errors on a production server.
But it is not enough - or, on the contrary, provides "too much information which kills the information" - when monitoring the process at a higher level, especially on a server with a lot of concurrent activity.
Tracing Service Methods
The framework can optionally store each SOA method execution in a database, together with its input and output parameters and accurate timing.
You can enable this automated process:
- Either at the service level, using TServiceFactoryServer.SetServiceLog();
- Or for all services of a TSQLRestServer.ServiceContainer instance, via TServiceContainerServer.SetServiceLog().
For instance, you may enable it for a whole REST server:
(aRestSOAServer.ServiceContainer as TServiceContainerServer).SetServiceLog(
  aRestLogServer, TSQLRecordServiceLog);
This single command creates an Audit Trail of all service calls made on aRestSOAServer, stored in the TSQLRecordServiceLog ORM class of aRestLogServer.
Keeping a dedicated REST server for the log entries would reduce the overhead
on the main server, and ease its maintenance.
Actual storage takes place within a class inheriting from TSQLRecordServiceLog:
TSQLRecordServiceLog = class(TSQLRecord)
...
published
  /// the 'interface.method' identifier of this call
  // - this column will be indexed, for fast SQL queries, with the MicroSec
  // column (for performance tuning)
  property Method: RawUTF8 read fMethod write fMethod;
  /// the input parameters, as a JSON document
  // - will be stored in JSON_OPTIONS_FAST_EXTENDED format, i.e. with
  // shortened field names, for smaller TEXT storage
  // - content may be searched using JsonGet/JsonHas SQL functions on a
  // SQLite3 storage, or with direct document query under MongoDB/PostgreSQL
  property Input: variant read fInput write fInput;
  /// the output parameters, as a JSON document, including result: for a function
  // - will be stored in JSON_OPTIONS_FAST_EXTENDED format, i.e. with
  // shortened field names, for smaller TEXT storage
  // - content may be searched using JsonGet/JsonHas SQL functions on a
  // SQLite3 storage, or with direct document query under MongoDB/PostgreSQL
  property Output: variant read fOutput write fOutput;
  /// the Session ID, if there is any
  property Session: integer read fSession write fSession;
  /// the User ID, if there is an identified Session
  property User: integer read fUser write fUser;
  /// will be filled by the ORM when this record is written in the database
  property Time: TModTime read fTime write fTime;
  /// execution time of this method, in micro seconds
  property MicroSec: integer read fMicroSec write fMicroSec;
end;
As you can see, all input and output parameters are part of the record, as two TDocVariant instances. Since they are stored as JSON/TEXT, you can query their content directly, especially if the actual storage takes place in a MongoDB database: you may even define dedicated indexes on the parameter values, and/or run advanced map/reduce queries.
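As an illustration of such content queries, here is a small sketch using Python's sqlite3 module against a mock table shaped like TSQLRecordServiceLog. Plain SQLite exposes the built-in json_extract() function, which plays a role similar to the JsonGet() SQL function available on a mORMot SQLite3 storage; the table layout and sample values below are illustrative assumptions, not the framework's actual schema.

```python
import sqlite3

# In-memory mock of a ServiceLog-like table, with Input/Output stored as JSON TEXT
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE ServiceLog (Method TEXT, Input TEXT, Output TEXT, MicroSec INTEGER)")
con.executemany("INSERT INTO ServiceLog VALUES (?,?,?,?)", [
    ("DomUserQuery.SelectByLogonName", '{"aLogonName":"979"}', '{"result":[0]}', 16),
    ("DomUserQuery.SelectByLogonName", '{"aLogonName":"abc"}', '{"result":[4]}', 25),
])

# Search inside the JSON Input column, much as JsonGet() would allow server-side
found = con.execute(
    "SELECT Method, MicroSec FROM ServiceLog "
    "WHERE json_extract(Input, '$.aLogonName') = ?", ("979",)).fetchall()
print(found)  # [('DomUserQuery.SelectByLogonName', 16)]
```

The same kind of per-parameter filtering is what a dedicated index on a MongoDB collection would accelerate for large audit trails.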
Since very accurate timing, with a micro-second resolution, is part of the information, you can filter or compute advanced statistics using simple SQL clauses. It has never been easier to monitor your SOA system and identify potential issues.
You may easily extract this information from your database and feed a real-time visual monitoring chart system, for instance. Or identify and watch for unusual execution patterns (e.g. unexpected timing or recurring error codes) matching some SQL queries: those SQL statements may be run automatically on a regular basis, to detect any problem before it actually happens.
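The kind of statistics and error-pattern queries meant here can be sketched as follows, again with Python's sqlite3 against an illustrative ServiceLog-shaped table. Recall from the log example above that the enumeration result is serialized as "result":[n], with 0 meaning cqrsSuccess, so any recurring non-zero code hints at a repeated error; column and table names remain assumptions.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ServiceLog (Method TEXT, Output TEXT, MicroSec INTEGER)")
con.executemany("INSERT INTO ServiceLog VALUES (?,?,?)", [
    ("DomUserQuery.SelectByLogonName", '{"result":[0]}', 16),
    ("DomUserQuery.SelectByLogonName", '{"result":[0]}', 30),
    ("DomUserQuery.SelectByLogonName", '{"result":[4]}', 210),  # non-success code
])

# Per-method call count and timing statistics, in micro seconds
stats = con.execute(
    "SELECT Method, COUNT(*), AVG(MicroSec), MAX(MicroSec) "
    "FROM ServiceLog GROUP BY Method").fetchall()

# Spot recurring non-success result codes (0 = cqrsSuccess)
errors = con.execute(
    "SELECT Method, json_extract(Output, '$.result[0]') AS Code, COUNT(*) "
    "FROM ServiceLog WHERE json_extract(Output, '$.result[0]') <> 0 "
    "GROUP BY Method, Code").fetchall()
```

Run on a schedule, such queries could feed a monitoring dashboard or trigger alerts when an error code suddenly becomes frequent.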
Tracing Asynchronous External Calls
Sometimes, your server may be the client of another process. In an SOA environment, you may interface with a third-party REST service for an external process, e.g. sending a real-time notification.
On the REST client instance, you can execute the TServiceFactoryClient.SendNotifications() method for a given service:
aNotificationClientService.SendNotifications(aServicesLogRest, TSQLRecordServiceNotifications, fSettings.NotificationsRetrySeconds);
This single command creates an Audit Trail of all notification calls sent to aNotificationClientService, stored in the TSQLRecordServiceNotifications ORM class of aServicesLogRest.
You may use the following TSQLRecordServiceNotifications class:
TSQLRecordServiceNotifications = class(TSQLRecordServiceLog)
...
published
  /// when this notification has been sent
  // - equals 0 until it was actually notified
  property Sent: TTimeLog read fSent write fSent;
end;
The additional Sent property contains the TTimeLog time-stamp at which the notification actually took place.
In fact, all methods executed via this notification service are first stored in this table, then the remote HTTP notifications take place asynchronously in the background.
Transmission is in order (first-in-first-out): in case of any connection problem (e.g. if the remote server does not return an HTTP 200 SUCCESS status code), the process does not skip to the next entry, but retries the same one after the NotificationsRetrySeconds period, as supplied to the SendNotifications() method.
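The first-in-first-out retry semantics described above can be modelled with a short sketch. This is a simplified Python illustration of the behaviour, not mORMot's actual implementation: the send function and the simulated clock are assumptions introduced so the retry delay can be exercised without real HTTP calls.

```python
from collections import deque

def process_queue(queue, send, retry_seconds, now, deadline):
    """Drain a FIFO of pending notifications; on failure, keep the head
    entry and retry it after retry_seconds, never skipping ahead."""
    sent = []
    t = now
    while queue and t <= deadline:
        entry = queue[0]              # always look at the oldest entry first
        if send(entry):               # e.g. an HTTP 200 SUCCESS from the remote server
            sent.append((entry, t))   # record when it went out (like the Sent column)
            queue.popleft()           # only now move on to the next entry
        else:
            t += retry_seconds        # wait, then retry the very same entry
    return sent

# Usage: the first attempt for 'a' fails, so 'a' is retried (not skipped),
# then 'b' follows in order
attempts = {"count": 0}
def flaky_send(entry):
    attempts["count"] += 1
    return attempts["count"] > 1      # fail once, succeed afterwards

q = deque(["a", "b"])
log = process_queue(q, flaky_send, retry_seconds=30, now=0, deadline=120)
print(log)  # [('a', 30), ('b', 30)]
```

The key point mirrored from the text is that a failing entry blocks the queue: ordering is preserved at the cost of latency, which suits notifications that must arrive in sequence.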
Of course, you may define your own sub-class, to customize the destination Audit Trail table:
type TSQLMyNotifications = class(TSQLRecordServiceNotifications);
Thanks to those TSQLRecordServiceLog classes, high-level support and analysis has never been easier. The actual implementation of those features has been tuned to minimize the impact on the main service performance, e.g. by using delayed write operations via BATCH sequences for adding/updating/deleting records, or a dedicated background thread for the asynchronous notification process.
For updated information, take a look at our official documentation!