Thread-safety of mORMot
We tried to make mORMot at the same time fast and safe, and able to
scale with the best possible performance on the hardware it runs on.
Multi-threading is the key to better usage of modern multi-core CPUs, and also client responsiveness.
As a result, on the Server side, our framework was designed to be thread-safe.
On typical production use, the mORMot HTTP server will run on its
own optimized thread pool, then call the TSQLRestServer.URI
method. This method is therefore expected to be thread-safe, e.g. when called from the
TSQLHttpServer.Request method. Thanks to the RESTful approach of
our framework, this method is the only one which needs to be thread-safe,
since it is the single entry point of the whole server. This KISS design ensures
better test coverage.
Let us see now how this works, and publish some benchmarks to test how efficiently it has been implemented.
In order to achieve this thread-safety without sacrificing performance, the
following rules were applied in TSQLRestServer.URI:
- Most of this method's logic is to process the incoming parameters,
so it is thread-safe by design (e.g. RecordProps access does not change during the process);
- The SQLite3 engine access is protected at SQL/JSON cache level;
- The main TSQLRestServerStatic methods (EngineList, EngineRetrieve, EngineAdd, EngineUpdate, EngineDelete, EngineRetrieveBlob, EngineUpdateBlob) are thread-safe: e.g. TSQLRestServerStaticInMemory uses a per-Table Critical Section;
- TSQLRestServerCallBack methods (i.e. published methods of the inherited
TSQLRestServer class) must be implemented to be thread-safe;
- Interface-based services have several execution modes, including thread-safe automated options (see
TServiceMethodOption), or a manual thread-safety expectation, for better scaling;
- A protected fSessionCriticalSection is used to protect shared
fSession access between clients;
- Remote external tables use thread-safe connections and statements when accessing the databases via SQL;
- Access to fStats was not made thread-safe, since this data is indicative only: a mutex was not used to protect this resource.
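The per-Table Critical Section mentioned above can be sketched in a language-neutral way. The following Python snippet is not mORMot code, just an illustration of the locking granularity: one lock per table lets clients working on different tables run in parallel, while access to any single table stays serialized.

```python
import threading

class InMemoryStore:
    """Hypothetical in-memory store mimicking a per-Table Critical Section."""
    def __init__(self, tables):
        # one lock per table, instead of one giant lock for the whole store
        self._locks = {t: threading.Lock() for t in tables}
        self._rows = {t: [] for t in tables}

    def add(self, table, row):
        with self._locks[table]:   # only this table is blocked during the write
            rows = self._rows[table]
            rows.append(dict(row))
            return len(rows)       # the new row ID

store = InMemoryStore(['People', 'Orders'])
print(store.add('People', {'FirstName': 'Ada'}))   # -> 1
print(store.add('Orders', {'Item': 'book'}))       # -> 1 (independent table)
```

A writer on 'People' never waits for a writer on 'Orders', which is the point of keeping the Critical Sections "relative to a table only".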
We tried to make the internal Critical Sections as short as possible, or
relative to a table only (e.g. the per-Table Critical Section of
TSQLRestServerStaticInMemory).
There is some kind of "giant lock" at the SQLite3 engine level, so
all requests will be queued.
This was not found to be a major issue (see the benchmark results below), since the internal SQL/JSON cache implementation needs such a global lock, and since most of the SQLite3 resource use consists in hard disk access, which gains from being queued.
It also allows use of the SQLite3 engine in lmExclusive locking
mode if needed, with both benefits of high performance and multi-thread
safety.
From the Client-side, the REST core of the framework is safe by design, therefore perfectly thread-safe: this is one benefit of the stateless architecture.
When we are talking about thread-safety, nothing compares to a dedicated
stress test program.
An average human brain (like ours) is not good enough to ensure proper design of such a complex process.
So we have to prove the abilities of our little mORMot.
In the supplied regression tests, we designed a whole class of multi-thread
testing, named TTestMultiThreadProcess.
Its methods will run each and every Client-Server protocol available (direct access via
TSQLRestServerDB or TSQLRestClientDB, GDI
messages, named pipes, and both HTTP servers - i.e. http.sys based or WinSock based).
Each protocol will execute in parallel a list of INSERTs - i.e.
TSQLRest.Add() - followed by a list of SELECTs - i.e.
TSQLRest.Retrieve().
Those requests will be performed in 1 thread, then 2, 5, 10, 30 and 50 concurrent threads.
The very same SQLite3 database (in lmExclusive locking
mode) is accessed at once by all those clients.
Then the IDs generated by each thread are compared together, to ensure no cross-insertion did occur during the process.
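This cross-insertion check can be expressed in a few lines. The sketch below (illustrative Python, not the test suite's Pascal code) simulates several client threads drawing IDs from one shared sequence, then verifies that the union of all per-thread ID lists contains no duplicate:

```python
import threading

def check_no_cross_insertion(ids_per_thread):
    """True if no ID was handed out to two different INSERTs."""
    all_ids = [i for ids in ids_per_thread for i in ids]
    return len(all_ids) == len(set(all_ids))

# simulate 3 client threads inserting into one shared auto-ID sequence
lock, next_id, results = threading.Lock(), [0], []

def client(n):
    ids = []
    for _ in range(n):
        with lock:               # the server-side serialization point
            next_id[0] += 1
            ids.append(next_id[0])
    results.append(ids)

threads = [threading.Thread(target=client, args=(100,)) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(check_no_cross_insertion(results))   # -> True
```

If the server-side locking were broken, two threads could receive the same ID, and the final uniqueness check would fail.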
Those automated tests already revealed some issues in the initial
implementation of the framework. We fixed any encountered problems as soon as
they were identified.
Feel free to send us any feedback, with code to reproduce the issue: but do not forget that multi-threading is also difficult to test - problems may occur not in the framework, but in the testing code itself!
When setting OperationCount to 1000 instead of the default 200,
i.e. running 1000 INSERTions and 1000 SELECTs in concurrent threads, the
numbers are the following, on the local machine (compiled with Delphi XE4):
 Multi thread process:
 - Create thread pool: 1 assertion passed  3.11ms
 - TSQLRestServerDB: 24,061 assertions passed  903.31ms
   1=41986/s  2=24466/s  5=14041/s  10=9212/s  30=10376/s  50=10028/s
 - TSQLRestClientDB: 24,062 assertions passed  374.93ms
   1=38606/s  2=35823/s  5=30083/s  10=32739/s  30=33454/s  50=30905/s
 - TSQLRestClientURINamedPipe: 12,012 assertions passed  1.68s
   1=4562/s  2=5002/s  5=3177/s
 - TSQLRestClientURIMessage: 16,022 assertions passed  616.00ms
   1=16129/s  2=24873/s  5=8613/s  10=11857/s
 - TSQLHttpClientWinHTTP_HTTPAPI: 24,056 assertions passed  1.63s
   1=5352/s  2=7441/s  5=7563/s  10=7903/s  30=8413/s  50=9106/s
 - TSQLHttpClientWinSock_WinSock: 24,061 assertions passed  1.10s
   1=11528/s  2=10941/s  5=12014/s  10=12039/s  30=9443/s  50=10831/s
 Total failed: 0 / 124,275  - Multi thread process PASSED  6.31s
For direct in-process access,
TSQLRestClientDB sounds the best
candidate: its abstraction layer is very thin, and much more multi-thread
friendly than straight TSQLRestServerDB access.
It also features a cache, on need.
And it will allow your code to switch between all kinds of
TSQLRest classes, thanks to their shared abstract methods.
Named pipes and GDI messages are a bit constrained in highly parallel mode,
but HTTP performs pretty well.
The server based on
http.sys (HTTP API) is even impressive: the
more clients, the more responsive it is.
It is known to scale much better than the supplied WinSock-based class, which shines with one unique local client (i.e. in the context of those in-process regression tests), but sounds less reliable in production.
Check yourself before you wreck yourself
In addition, you can form your own opinion by running the "21 - HTTP Client-Server performance" sample programs, locally or over a network, to check the mORMot abilities to scale and to serve a lot of clients with as few resources as possible.
Compile both client and server projects, then launch the server, which will execute as a console window.
This Server will define the same
TSQLRecordPeople as used
during our multi-thread regression tests, that is:
type
  TSQLRecordPeople = class(TSQLRecord)
  private
    fFirstName: RawUTF8;
    fLastName: RawUTF8;
    fYearOfBirth: integer;
    fYearOfDeath: word;
  published
    property FirstName: RawUTF8 read fFirstName write fFirstName;
    property LastName: RawUTF8 read fLastName write fLastName;
    property YearOfBirth: integer read fYearOfBirth write fYearOfBirth;
    property YearOfDeath: word read fYearOfDeath write fYearOfDeath;
  end;
The server main block is just the following:
aModel := TSQLModel.Create([TSQLRecordPeople]);
try
  aDatabaseFile := ChangeFileExt(paramstr(0),'.db3');
  DeleteFile(aDatabaseFile);
  aServer := TSQLRestServerDB.Create(aModel,aDatabaseFile);
  try
    aServer.DB.Synchronous := smOff;
    aServer.DB.LockingMode := lmExclusive;
    aServer.NoAJAXJSON := true;
    aServer.CreateMissingTables;
    // launch the server
    aHTTPServer := TSQLHttpServer.Create('888',[aServer]);
    try
      writeln(#13#10'Background server is running at http://localhost:888'#13#10+
        #13#10'Press [Enter] to close the server.');
      ConsoleWaitForEnterKey;
      with TSQLLog.Family do
        if not (sllInfo in Level) then // let global server stats be logged
          Level := Level+[sllInfo];
    finally
      aHTTPServer.Free;
    end;
  finally
    aServer.Free;
  end;
finally
  aModel.Free;
end;
It will give CRUD access to the
TSQLRecordPeople table over HTTP. Note that we set
Synchronous := smOff and
LockingMode := lmExclusive to achieve the best
possible performance.
Our purpose here is not to have true ACID behavior, but to test concurrent remote access.
The Client is just a RAD form which will execute the very same code as
during the regression tests, i.e. a TTestMultiThreadProcess
instance, as shown by the following code:
Tests := TSynTestsLogged.Create;
Test := TTestMultiThreadProcess.Create(Tests);
try
  Test.ClientOnlyServerIP := StringToAnsi7(lbledtServerAddress.Text);
  Test.MinThreads := ThreadCount;
  Test.MaxThreads := ThreadCount;
  Test.OperationCount := OperationCount;
  Test.ClientPerThread := ClientPerThread;
  Test.CreateThreadPool;
  txt := Format(
    '%s'#13#10#13#10'Test started with %d threads, %d client(s) per thread '+
    'and %d rows to be inserted...',
    [txt,ThreadCount,ClientPerThread,OperationCount]);
  mmoInfo.Text := txt;
  Timer.Start;
  Test._TSQLHttpClientWinHTTP_HTTPAPI;
  txt := mmoInfo.Text+Format(#13#10'Assertion(s) failed: %d / %d'+
    #13#10'Number of clients connected at once: %d'+
    #13#10'Time to process: %s'#13#10'Operation per second: %d',
    [Test.AssertionsFailed,Test.Assertions,
     ThreadCount*ClientPerThread,Timer.Stop,Timer.PerSec(OperationCount*2)]);
  mmoInfo.Text := txt;
finally
  Test.Free;
  Tests.Free;
end;
Each thread of the thread pool will create its own HTTP connection, then
loop to insert (via the
Add ORM method) and retrieve (via the
Retrieve ORM method) a fixed number of objects - checking that
the retrieved object fields match the inserted values. Then all generated IDs
of all threads are checked for consistency, to ensure no race condition did
occur.
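The per-thread insert-then-verify loop just described can be sketched as follows. This is a hedged Python illustration with fake add/retrieve callbacks standing in for the ORM methods; the names and record fields are assumptions for the sketch, not the sample's actual code.

```python
def client_thread_run(add, retrieve, operation_count):
    """Insert a fixed number of objects, read each back, verify the fields."""
    inserted = {}
    for n in range(operation_count):
        rec = {'FirstName': 'Name%d' % n, 'YearOfBirth': 1900 + n}
        rec_id = add(rec)                 # stands in for TSQLRest.Add()
        inserted[rec_id] = rec
    for rec_id, rec in inserted.items():  # stands in for TSQLRest.Retrieve()
        assert retrieve(rec_id) == rec, 'retrieved fields do not match'
    return list(inserted)                 # IDs, cross-checked later per thread

# tiny in-process fake REST backend, just for the sketch
db = {}
def add(rec):
    db[len(db) + 1] = dict(rec)
    return len(db)
def retrieve(rec_id):
    return db[rec_id]

print(client_thread_run(add, retrieve, 3))   # -> [1, 2, 3]
```

Each real thread would run this loop over its own HTTP connection; the returned ID lists feed the cross-thread consistency check.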
The input parameters are therefore the following:
- Remote HTTP server IP (port is 888);
- Number of client threads;
- Number of client instances per thread;
- Number of objects inserted per thread (i.e. OperationCount).
When running over the following hardware configuration:
- Server is a Core i7 Notebook, with SSD, under Windows 7;
- Client is a Core 2 Duo Workstation, with regular hard-drive (not used), under Windows 7;
- Communicating over a somewhat slow 100 Mb network with a low priced Ethernet HUB.
Typical results are the following:
During all tests, no assertion failed, meaning that no concurrency problem
did occur, nor was any remote command lost.
It is worth noting that when run several times in a row, the same set of input parameters gives the very same speed results: it indicates that the architecture is pretty stable and could be considered as safe.
The system is even able to serve 50000 connected clients at once, with no data loss - in this case, performance is lower (2152 insert/second in the above table), but we clearly reached the CPU and network limit of our client hardware configuration; in the meanwhile, server resources on the Notebook have still some potential.
Average performance is pretty good, even more so if we consider that we are
inserting one object per request, with no transaction.
In fact, it sounds like our little SQLite3 server is faster than most database servers, even when accessed in highly concurrent mode! In batch mode, we may achieve even more amazing results.
Feel free to send your own benchmark results and feedback, e.g. with concurrent clients on several workstations, or long-running tests, on our forums.