MongoDB + mORMot ORM = ODM
MongoDB (from "humongous") is
a cross-platform document-oriented database system, and certainly the best
known NoSQL database.
According to http://db-engines.com in April 2014, MongoDB ranks 5th among the most popular database management systems of any kind, and 1st among NoSQL database management systems.
Our mORMot gives premium access to this database, featuring full NoSQL and Object-Document Mapping (ODM) abilities to the framework.
Integration is made at two levels:
- Direct low-level access to the MongoDB server, in the SynMongoDB.pas unit;
- Close integration with our ORM (which becomes de facto an ODM), in the mORMotMongoDB.pas unit.
MongoDB eschews the traditional table-based relational database structure in favor of JSON-like documents with dynamic schemas (MongoDB calls the format BSON), which matches perfectly mORMot's RESTful approach.
This second article will focus on integration of MongoDB with our ORM.
MongoDB + ORM = ODM
The mORMotMongoDB.pas unit lets any TSQLRecord class be persisted in a remote MongoDB collection.
As a result, our ORM can be used as a NoSQL and Object-Document Mapping (ODM) framework, with almost no code change.
Any MongoDB database can be accessed via RESTful commands, using JSON over HTTP.
This integration benefits from the other parts of the framework (e.g. our UTF-8 dedicated process, which is also the native encoding for BSON), so you can easily mix SQL and NoSQL databases with the exact same code, and are still able to tune any SQL or MongoDB request in your code, if necessary.
Register the TSQLRecord class
In the database model, we define a TSQLRecord class as follows:
type
  TSQLORM = class(TSQLRecord)
  private
    fAge: integer;
    fName: RawUTF8;
    fDate: TDateTime;
    fValue: variant;
    fInts: TIntegerDynArray;
    fCreateTime: TCreateTime;
    fData: TSQLRawBlob;
  published
    property Name: RawUTF8 read fName write fName stored AS_UNIQUE;
    property Age: integer read fAge write fAge;
    property Date: TDateTime read fDate write fDate;
    property Value: variant read fValue write fValue;
    property Ints: TIntegerDynArray index 1 read fInts write fInts;
    property Data: TSQLRawBlob read fData write fData;
    property CreateTime: TCreateTime read fCreateTime write fCreateTime;
  end;
Note that we did not define any ... values for the RawUTF8 property, as we would need for external SQL databases, since MongoDB does not expect any restriction on text field length.
The property values will be stored in the native MongoDB layout, i.e. with a better coverage than the SQL types:
|Delphi||MongoDB||Remarks|
|cardinal||N/A||you should use Int64 instead|
|enumeration||int32||store the ordinal value of the enumerated item (i.e. starting at 0 for the first element)|
|set||int64||each bit corresponding to an enumerated item (therefore a set of up to 64 elements can be stored in such a field)|
|currency||double||safely converted to/from the fixed-decimals currency type, without rounding error|
|RawUTF8||UTF-8||this is the preferred field type for storing some textual content in the ORM|
|WinAnsiString||UTF-8||WinAnsi char-set (code page 1252) in Delphi|
|RawUnicode||UTF-8||UCS2 char-set in Delphi, as AnsiString|
|WideString||UTF-8||UCS2 char-set, as COM BSTR type (Unicode in all versions of Delphi)|
|SynUnicode||UTF-8||will be either WideString before Delphi 2009, or UnicodeString since|
|string||UTF-8||not to be used before Delphi 2009 (unless you may lose some data during conversion)|
|TDateTime||datetime||ISO 8601 encoded date time|
|TTimeLog||int64||as proprietary fast Int64 date time|
|TModTime||int64||the server date time will be stored when a record is modified (as proprietary fast Int64)|
|TCreateTime||int64||the server date time will be stored when a record is created (as proprietary fast Int64)|
|TSQLRecordMany||nothing||data is stored in a separate pivot table; for MongoDB, you should better use data sharding and an embedded sub-document|
|TPersistent||object||BSON object (from ObjectToJSON)|
|TCollection||array||BSON array of objects (from ObjectToJSON)|
|TObjectList||array||BSON array of objects (from ObjectToJSON)|
|TStrings||array||BSON array of strings (from ObjectToJSON)|
|TRawUTF8List||array||BSON array of strings (from ObjectToJSON)|
|TSQLRawBlob||binary||BSON binary for blob storage - note that you need to retrieve such fields explicitly, since they are not part of the default simple fields|
|dynamic array||array or binary||if the dynamic array can be saved as true JSON, it will be stored as a BSON array - otherwise, it will be stored in the TDynArray.SaveTo binary format|
|variant||any||BSON number, text, object or array, depending on the TDocVariant or TBSONVariant custom variant type|
|record||binary or object||BSON as defined in code by overriding TSQLRecord.InternalRegisterCustomProperties|
On the server side (there won't be any difference for the client), you create your TMongoClient, and assign one of its databases to a given TSQLRecord class, via the StaticMongoDBRegister() function:

MongoClient := TMongoClient.Create('localhost',27017);
DB := MongoClient.Database['dbname'];
Model := TSQLModel.Create([TSQLORM]);
Client := TSQLRestClientDB.Create(Model,nil,':memory:',TSQLRestServerDB);
if StaticMongoDBRegister(TSQLORM,Client.Server,DB,'collectionname')=nil then
  raise Exception.Create('Error');
And... that's all!
You can then use any ORM command, as usual:
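For instance, reading one document back could look like this (a minimal sketch, reusing the Client instance and TSQLORM class from the registration code above; the stored property values are illustrative):

```pascal
// retrieve the document with _id=1 from the mapped collection,
// exactly as you would fetch a row from a SQL back-end
R := TSQLORM.Create(Client,1);
try
  writeln('Name=',R.Name,' Age=',R.Age);
finally
  R.Free;
end;
```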
As with external databases, you can specify the field names mapping between the objects and the MongoDB collection.
By default, the TSQLRecord.ID property is mapped to the MongoDB _id field, and the ORM will populate this _id field with a sequence of integer values, just like the RowID of any SQL table.
You can specify your own mapping, using for instance:
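As a sketch, assuming the ExternalDB.MapField() method used for external SQL databases also applies to the MongoDB mapping, a shorter field name could be defined like this:

```pascal
// map the ORM "Name" property to a shorter "n" field name in the
// MongoDB collection (sketch - Model is the TSQLModel created above)
Model.Props[TSQLORM].ExternalDB.MapField('Name','n');
```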
Since the field names are stored within the document itself, it may be a good idea to use shorter naming for the MongoDB collection. It may save some storage space, when working with a huge number of documents.
Once a TSQLRecord is mapped to a MongoDB collection, you can always get direct access to the corresponding TMongoCollection instance later on, by calling:

(aServer.StaticDataServer[aClass] as TSQLRestServerStaticMongoDB).Collection
This may allow any specific task, including any tuned query or process.
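For instance, a tuned query could run a native MongoDB criteria directly against the collection (a sketch, assuming the TMongoCollection.FindDoc() method and the TSQLORM mapping shown above; the Age>30 filter is illustrative):

```pascal
// retrieve, as a TDocVariant array, all documents matching a native
// MongoDB query - something the ORM WHERE clause alone cannot express
docs := (aServer.StaticDataServer[TSQLORM] as TSQLRestServerStaticMongoDB).
  Collection.FindDoc('{Age:{$gt:?}}',[30]);
```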
ORM/ODM CRUD methods
You can add documents with the standard CRUD methods of the ORM, as usual:
R := TSQLORM.Create;
try
  for i := 1 to COLL_COUNT do
  begin
    R.Name := 'Name '+Int32ToUTF8(i);
    R.Age := i;
    R.Date := 1.0*(30000+i);
    R.Value := _ObjFast(['num',i]);
    R.Ints := nil;
    R.DynArray(1).Add(i);
    assert(Client.Add(R,True)=i);
  end;
finally
  R.Free;
end;
As we already saw, the framework is able to handle any kind of properties, including complex types like dynamic arrays or variant documents.
In the above code, a TDocVariant document has been stored in R.Value, and a dynamic array of integer values is accessed via its index 1 shortcut and the TSQLRecord.DynArray() method.
The usual Retrieve() method is available for reading the documents back:
R := TSQLORM.Create;
try
  for i := 1 to COLL_COUNT do
  begin
    Check(Client.Retrieve(i,R));
    // here R instance contains all values of one document, excluding BLOBs
  end;
finally
  R.Free;
end;
You can define a WHERE clause, as if the back-end were a regular SQL database:
R := TSQLORM.CreateAndFillPrepare(Client,'ID=?',[i]);
try
  ...
The current implementation understands a single condition over one field, with the = > >= < <= IN operators. More advanced queries are possible, but they won't be handled as SQL: use direct access to the underlying TMongoCollection instead.
To perform a query and retrieve the content of several documents, you can use a regular CreateAndFillPrepare loop:
R := TSQLORM.CreateAndFillPrepare(Client,'');
try
  n := 0;
  while R.FillOne do
  begin
    // here R instance contains all values of one document, excluding BLOBs
    inc(n);
  end;
  assert(n=COLL_COUNT);
finally
  R.Free;
end;
A WHERE clause can also be defined for CreateAndFillPrepare().
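For instance (a sketch, reusing the TSQLORM sample class; the Age>=30 threshold is arbitrary):

```pascal
// loop over all documents with Age>=30, as if querying a SQL table
R := TSQLORM.CreateAndFillPrepare(Client,'Age>=?',[30]);
try
  while R.FillOne do
    writeln(R.Name,' is ',R.Age,' years old');
finally
  R.Free;
end;
```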
In addition to individual CRUD operations, our MongoDB back-end is able to run in BATCH mode for adding or deleting documents.
You can write the exact same code as with any SQL back-end:
Client.BatchStart(TSQLORM);
R := TSQLORM.Create;
try
  for i := 1 to COLL_COUNT do
  begin
    R.Name := 'Name '+Int32ToUTF8(i);
    R.Age := i;
    R.Date := 1.0*(30000+i);
    R.Value := _ObjFast(['num',i]);
    R.Ints := nil;
    R.DynArray(1).Add(i);
    assert(Client.BatchAdd(R,True)>=0);
  end;
finally
  R.Free;
end;
assert(Client.BatchSend(IDs)=HTML_SUCCESS);
Or for deletion:
Client.BatchStart(TSQLORM);
for i := 5 to COLL_COUNT do
  if i mod 5=0 then
    assert(Client.BatchDelete(i)>=0);
assert(Client.BatchSend(IDs)=HTML_SUCCESS);
The speed benefit may be huge compared to individual Add/Delete operations, even on a local MongoDB server. We will see some benchmark numbers now.
You can take a look at Data access benchmark to compare MongoDB as back-end for our ORM classes.
Compared to external SQL engines, it features very high speed and low CPU use, with almost no difference in usage. We interfaced the BatchAdd() and BatchDelete() methods to benefit from the MongoDB bulk process, and avoided most memory allocations along the way.
Here are some numbers, extracted from the sample, which reflect the performance of our ORM/ODM, depending on the Write Concern mode used:
2.1. ORM with acknowledge:
 - Connect to local server: 6 assertions passed  18.65ms
 - Insert: 5,002 assertions passed  521.25ms
    5000 rows inserted in 520.65ms i.e. 9603/s, aver. 104us, 2.9 MB/s
 - Insert in batch mode: 5,004 assertions passed  65.37ms
    5000 rows inserted in 65.07ms i.e. 76836/s, aver. 13us, 8.4 MB/s
 - Retrieve: 45,001 assertions passed  640.95ms
    5000 rows retrieved in 640.75ms i.e. 7803/s, aver. 128us, 2.1 MB/s
 - Retrieve all: 40,001 assertions passed  20.79ms
    5000 rows retrieved in 20.33ms i.e. 245941/s, aver. 4us, 27.1 MB/s
 - Retrieve one with where clause: 45,410 assertions passed  673.01ms
    5000 rows retrieved in 667.17ms i.e. 7494/s, aver. 133us, 2.0 MB/s
 - Update: 40,002 assertions passed  681.31ms
    5000 rows updated in 660.85ms i.e. 7565/s, aver. 132us, 2.4 MB/s
 - Blobs: 125,003 assertions passed  2.16s
    5000 rows updated in 525.97ms i.e. 9506/s, aver. 105us, 2.4 MB/s
 - Delete: 38,003 assertions passed  175.86ms
    1000 rows deleted in 91.37ms i.e. 10944/s, aver. 91us, 2.3 MB/s
 - Delete in batch mode: 33,003 assertions passed  34.71ms
    1000 rows deleted in 14.90ms i.e. 67078/s, aver. 14us, 597 KB/s
Total failed: 0 / 376,435  - ORM with acknowledge PASSED  5.00s
2.2. ORM without acknowledge:
 - Connect to local server: 6 assertions passed  16.83ms
 - Insert: 5,002 assertions passed  179.79ms
    5000 rows inserted in 179.15ms i.e. 27908/s, aver. 35us, 3.9 MB/s
 - Insert in batch mode: 5,004 assertions passed  66.30ms
    5000 rows inserted in 31.46ms i.e. 158891/s, aver. 6us, 17.5 MB/s
 - Retrieve: 45,001 assertions passed  642.05ms
    5000 rows retrieved in 641.85ms i.e. 7789/s, aver. 128us, 2.1 MB/s
 - Retrieve all: 40,001 assertions passed  20.68ms
    5000 rows retrieved in 20.26ms i.e. 246718/s, aver. 4us, 27.2 MB/s
 - Retrieve one with where clause: 45,410 assertions passed  680.99ms
    5000 rows retrieved in 675.24ms i.e. 7404/s, aver. 135us, 2.0 MB/s
 - Update: 40,002 assertions passed  231.75ms
    5000 rows updated in 193.74ms i.e. 25807/s, aver. 38us, 3.6 MB/s
 - Blobs: 125,003 assertions passed  1.44s
    5000 rows updated in 150.58ms i.e. 33202/s, aver. 30us, 2.6 MB/s
 - Delete: 38,003 assertions passed  103.57ms
    1000 rows deleted in 19.73ms i.e. 50668/s, aver. 19us, 2.4 MB/s
 - Delete in batch mode: 33,003 assertions passed  47.50ms
    1000 rows deleted in 364us i.e. 2747252/s, aver. 0us, 23.4 MB/s
Total failed: 0 / 376,435  - ORM without acknowledge PASSED  3.44s
As for direct MongoDB access, the unacknowledged Write Concern mode is not to be used in production, but may be very useful in some particular scenarios.
As expected, the reading process is not impacted by the Write Concern mode set.
You can take a look at the previous blog article, about low-level MongoDB direct access.
Or check out more complete benchmark results, to compare MongoDB with other SQL back-end handled by our ORM.
Feedback is welcome on our forum, as usual!