Some new TSQLRestClientURI methods have been added to implement BATCH sequences, which speed up database modifications: after a call to BatchStart, database modification statements are added to the sequence via BatchAdd / BatchUpdate / BatchDelete, then all statements are sent to the remote server at once via BatchSend. This is MUCH faster than individual calls to Add / Update / Delete over a slow remote connection (typically HTTP over the Internet).

Since the statements are performed all at once, you can't receive the result (e.g. the ID of an added row) at the same time as you append the request to the BATCH sequence. You'll have to wait for the BatchSend method to retrieve all results, at once, in a dynamic array of integer.
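For instance, BatchAdd returns the position of the statement within the sequence, not the ID of the new row. Here is a minimal sketch of this deferred retrieval (Client and V are illustrative names for a connected TSQLRestClientURI instance and a TSQLRecordPeople instance):

var Results: TIntegerDynArray;
    AddIndex: integer;
...
  Client.BatchStart(TSQLRecordPeople);
  AddIndex := Client.BatchAdd(V,true); // index in the sequence, NOT the new ID
  if Client.BatchSend(Results)=200 then
    // the ID of the added row only becomes available now, from Results[]
    writeln('New row ID = ',Results[AddIndex]);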

As you may guess, it's also a good idea to wrap the whole process in a transaction. By default, the BATCH sequence is not embedded into a transaction: it's up to the caller to use a TransactionBegin ... try ... Commit ... except RollBack ... end block.
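In outline, such a block may look like this (a minimal sketch, assuming a connected TSQLRestClientURI instance named Client working on the TSQLRecordPeople table):

var Results: TIntegerDynArray;
...
if Client.TransactionBegin(TSQLRecordPeople) then
try
  Client.BatchStart(TSQLRecordPeople);
  // ... add statements to the sequence via BatchAdd / BatchUpdate / BatchDelete ...
  if Client.BatchSend(Results)<>200 then
    raise Exception.Create('BATCH sending failed');
  // in case of success, commit the transaction
  Client.Commit;
except
  // in case of error, rollback the transaction
  Client.RollBack;
end;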

Here is a typical use (extracted from the regression tests in SQLite3.pas):

// start the transaction
if ClientDist.TransactionBegin(TSQLRecordPeople) then
try
  // start the BATCH sequence
  Check(ClientDist.BatchStart(TSQLRecordPeople));
  // delete some elements
  for i := 0 to n-1 do
    Check(ClientDist.BatchDelete(IntArray[i])=i);
  // update some elements
  nupd := 0;
  for i := 0 to aStatic.Count-1 do
  if i and 7<>0 then
  begin // not yet deleted in BATCH mode
    Check(ClientDist.Retrieve(aStatic.ID[i],V));
    V.YearOfBirth := 1800+nupd;
    Check(ClientDist.BatchUpdate(V)=nupd+n);
    inc(nupd);
  end;
  // add some elements
  V.LastName := 'New';
  for i := 0 to 1000 do
  begin
    V.FirstName := RandomUTF8(10);
    V.YearOfBirth := i+1000;
    Check(ClientDist.BatchAdd(V,true)=n+nupd+i);
  end;
  // send the BATCH sequence to the server
  Check(ClientDist.BatchSend(Results)=200);
  // now Results[] contains the results of every BATCH statement...
  Check(Length(Results)=n+nupd+1001);
  // Results[0] to Results[n-1] should be 200 = deletion OK
  // Results[n] to Results[n+nupd-1] should be 200 = update OK
  // Results[n+nupd] to Results[high(Results)] are the IDs of each added record
  for i := 0 to n-1 do
    Check(not ClientDist.Retrieve(IntArray[i],V),'BatchDelete');
  for i := 0 to high(Results) do
    if i<nupd+n then
      Check(Results[i]=200) else
    begin
      Check(Results[i]>0);
      ndx := aStatic.IDToIndex(Results[i]);
      Check(ndx>=0);
      with TSQLRecordPeople(aStatic.Items[ndx]) do
      begin
        Check(LastName='New','BatchAdd');
        Check(YearOfBirth=1000+i-nupd-n);
      end;
    end;
  // in case of success, commit the transaction
  ClientDist.Commit;
except
  // in case of error, rollback the transaction
  ClientDist.RollBack;
end;

Here is a typical JSON stream sent to the server:

{"People":["DELETE":2,"DELETE":13,"DELETE":24,
(...)  all DELETE actions
,"DELETE":11010,
"PUT":{"RowID":3,"FirstName":"Sergei1","LastName":"Rachmaninoff","YearOfBirth":1800, "YearOfDeath":1943},
"PUT":{"RowID":4,"FirstName":"Alexandre1","LastName":"Dumas","YearOfBirth":1801, "YearOfDeath":1870},
(...)  all PUT = update actions
"PUT":{"RowID":11012,"FirstName":"Leonard","LastName":"da Vinçi","YearOfBirth":9025, "YearOfDeath":1519},
"POST":{"FirstName":"‚@•Å"H†m£ g","LastName":"New","YearOfBirth":1000, "YearOfDeath":1519},
"POST":{"FirstName":"@…,KA½à #¶f","LastName":"New","YearOfBirth":1001, "YearOfDeath":1519},
(...)  all POST = add actions
"POST":{"FirstName":"+tqCXW3Â\"","LastName":"New","YearOfBirth":2000, "YearOfDeath":1519}
]}

Here is a typical JSON stream received from the server, on success:

 [200,200,...]

All the JSON generation (client-side) and parsing (server-side) is highly optimized and very fast. With the new internal SynLZ compression (available by default in our HTTP Client-Server classes), the bandwidth used is minimal.

Thanks to these methods, most time is now spent in the database engine itself, rather than in the communication layer.

Feedback and comments are welcome in our forum.