What about automated testing?
You know that testing is (almost) everything if you want to avoid regression problems in your application.
How can you be confident that any change made to your software code won't create an error in another part of the software?
So automated unit testing is a good candidate for this.
And even better is test-driven coding:
0. write a void implementation of a feature, that is, code the interface with no
implementation;
1. write the test code;
2. launch the test - it must fail;
3. implement the feature;
4. launch the test - it must pass;
5. add some features, and repeat all the previous tests every time you add a new
feature.
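For illustration, here is a minimal sketch of steps 0 to 2, written with the TSynTestCase class presented below - the Divide() function and the TTestNumbersDividing test case are purely hypothetical:

// step 0: void implementation - the interface compiles, but does nothing useful yet
function Divide(A,B: double): double;
begin
  result := 0; // to be implemented at step 3
end;

// step 1: the test code, written against the expected behaviour
procedure TTestNumbersDividing.TestDoubleDivide;
begin
  CheckSame(2.0,Divide(10,5)); // step 2: launching the test now must fail
end;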
It could sound like a waste of time, but such coding improves your code quality a lot and, at least, it helps you write and optimize every feature's implementation.
But don't forget that unit testing is not enough: you also have to test your
real application, performing tasks like any user would, in order to validate that
it works as expected.
That's why we added the writing and cross-referencing of test protocols in
our SynProject
documentation tool.
So how is testing implemented in our framework?
We could have used DUnit - http://sourceforge.net/projects/dunit
But we didn't like the fact that it relies on the IDE and creates separate units
for testing.
We find it useful to write tests in pure code, in the same unit which implements
the features under test. The smart-linking feature of the Delphi compiler won't put the testing
code in your final application, so it won't inflate your executable size.
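As a sketch of what this looks like in practice - the NumbersUnit name and its content are just an illustration, not part of the framework:

unit NumbersUnit; // hypothetical unit: the feature and its tests live side by side

interface

uses
  SynCommons; // framework unit declaring TSynTestCase (assumption - adjust to your revision)

function Add(A,B: integer): integer;

type
  TTestNumbers = class(TSynTestCase)
  published
    procedure TestIntegerAdd;
  end;

implementation

function Add(A,B: integer): integer;
begin
  result := A+B;
end;

procedure TTestNumbers.TestIntegerAdd;
begin
  Check(Add(1,2)=3); // if TTestNumbers is never registered via AddCase(),
                     // smart-linking drops it from the release executable
end;

end.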
And we don't like visual interfaces with red or green lights... we prefer text
files and the command line.
And DUnit's code base is bigger than ours, and we don't need so many options,
following the KISS principle we like so much.
All this is a matter of taste - you may not agree, and that's fine.
Among other features of these classes, the main one is that they are pretty well integrated with other cross-cutting features of the mORMot framework:
- It can be easily integrated with our logging classes (there is a dedicated "fail" log category, for instance);
- It is integrated when stubbing/mocking interface services;
- It shares a lot of code with SynCommons.pas.
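For instance, a single assignment routes all the testing events - including the dedicated "fail" category - to the very same log file as the rest of the framework. This is a sketch: the TSynLogTestLog global variable and the TSQLLog class are assumptions to check against your revision of the framework:

uses
  SynTests, // where the TSynLogTestLog global variable is declared (assumption)
  mORMot;   // where the TSQLLog class is declared (assumption)

...
  // share the same logging class between the tests and the ORM,
  // so that "fail" events appear in the same log file
  TSynLogTestLog := TSQLLog;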
So what about using RTTI for adding tests to your program?
In order to define tests, some TSynTestCase
children must be
defined, and will be launched by a TSynTests
instance to perform
all the tests. A text report is created on the current console, providing
statistics and a Pass/Fail status.
First steps in testing
Here are the functions we want to test:
function Add(A,B: double): Double; overload;
begin
  result := A+B;
end;

function Add(A,B: integer): integer; overload;
begin
  result := A+B;
end;

function Multiply(A,B: double): Double; overload;
begin
  result := A*B;
end;

function Multiply(A,B: integer): integer; overload;
begin
  result := A*B;
end;
So we create three classes: one for the whole test suite, one for testing addition, and one for testing multiplication:
type
  TTestNumbersAdding = class(TSynTestCase)
  published
    procedure TestIntegerAdd;
    procedure TestDoubleAdd;
  end;

  TTestNumbersMultiplying = class(TSynTestCase)
  published
    procedure TestIntegerMultiply;
    procedure TestDoubleMultiply;
  end;

  TTestSuit = class(TSynTests)
  published
    procedure MyTestSuit;
  end;
The trick is to create published methods, each containing some tests to process.
Here is how one of these test methods is implemented (I'll let you guess the others):
procedure TTestNumbersAdding.TestDoubleAdd;
var A,B: double;
    i: integer;
begin
  for i := 1 to 1000 do
  begin
    A := Random;
    B := Random;
    CheckSame(A+B,Add(A,B));
  end;
end;
The CheckSame() call is necessary because of floating-point precision problems:
we can't trust the plain = operator (i.e. Check(A+B=Add(A,B)) would fail because
of rounding issues).
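If the default tolerance does not fit your algorithm, CheckSame() also accepts an explicit precision - a sketch, where the extra parameters and their default values are assumptions to check against your revision of the framework:

// accept a 1E-9 difference instead of the default precision
CheckSame(A+B,Add(A,B),1E-9,'addition drifted by more than 1E-9');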
And here is the test suite implementation:
procedure TTestSuit.MyTestSuit;
begin
  AddCase([TTestNumbersAdding,TTestNumbersMultiplying]);
end;
And the main program (this .dpr is expected to be compiled as a console application):
with TTestSuit.Create do
try
  ToConsole := @Output; // so we will see something on screen
  Run;
  readln;
finally
  Free;
end;
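If you start from scratch, the whole .dpr may look like the following sketch - the program and unit names are hypothetical, and the {$APPTYPE CONSOLE} directive is what makes Output a valid console text file:

program TestSuit;

{$APPTYPE CONSOLE}

uses
  SynCommons,
  NumbersTest; // hypothetical unit declaring TTestSuit and the two test cases

begin
  with TTestSuit.Create do
  try
    ToConsole := @Output; // so we will see something on screen
    Run;
    readln;
  finally
    Free;
  end;
end.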
Just run this program, and you'll get:
   Suit
  ------
1. My test suit
 1.1. Numbers adding:
  - Test integer add: 1,000 assertions passed  92us
  - Test double add: 1,000 assertions passed  125us
  Total failed: 0 / 2,000  - Numbers adding PASSED  360us
 1.2. Numbers multiplying:
  - Test integer multiply: 1,000 assertions passed  73us
  - Test double multiply: 1,000 assertions passed  117us
  Total failed: 0 / 2,000  - Numbers multiplying PASSED  324us
Generated with: Delphi 7 compiler
Time elapsed for all tests: 1.51ms
Tests performed at 25/03/2014 10:59:33

Total assertions failed for all test suits: 0 / 4,000
All tests passed successfully.
You can see that all the text on screen was created by "UnCamelCasing" the
method names (thanks to our good old Camel), and that the test suite just
follows the order defined when registering the classes.
Each method has its own timing, which is pretty convenient for tracking performance
regressions.
This test program has been uploaded to the SQLite3\Sample\07 - SynTest
folder of the Source Code Repository.
You can post comments and get feedback in our forum.