Here is the definition of "cache", as stated by Wikipedia:
In computer engineering, a cache is a component that transparently
stores data so that future requests for that data can be served faster. The
data that is stored within a cache might be values that have been computed
earlier or duplicates of original values that are stored elsewhere. If
requested data is contained in the cache (cache hit), this request can be
served by simply reading the cache, which is comparatively faster. Otherwise
(cache miss), the data has to be recomputed or fetched from its original
storage location, which is comparatively slower. Hence, the greater the number
of requests that can be served from the cache, the faster the overall system
performance becomes.
To be cost efficient and to enable an efficient use of data, caches are
relatively small. Nevertheless, caches have proven themselves in many areas of
computing because access patterns in typical computer applications have
locality of reference. References exhibit temporal locality if data is
requested again that has been recently requested already. References exhibit
spatial locality if data is requested that is physically stored close to data
that has been requested already.
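The hit/miss behaviour and the benefit of temporal locality described above can be sketched in a few lines of Python; the class and names here are purely illustrative, not part of any framework:

```python
class SimpleCache:
    """Minimal cache illustrating hits and misses (illustrative sketch)."""
    def __init__(self, compute):
        self.compute = compute  # fallback used on a cache miss
        self.store = {}         # cached key -> value pairs
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.store:       # cache hit: served by reading the cache
            self.hits += 1
            return self.store[key]
        self.misses += 1            # cache miss: recompute, then remember
        value = self.compute(key)
        self.store[key] = value
        return value

cache = SimpleCache(lambda n: n * n)
for key in (3, 3, 3, 5):    # temporal locality: 3 is requested repeatedly
    cache.get(key)
print(cache.hits, cache.misses)  # → 2 2
```

The more the access pattern repeats recent keys, the higher the hit count grows relative to the miss count, which is exactly why a small cache pays off.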
In our ORM framework, since performance has been a design goal from the
beginning, caching has been implemented at four levels:
- Statement cache for reuse of SQL prepared statements and bound parameters
on the fly - note that this cache is available not only for the
SQLite3 database engine, but also for any external engine;
- Global JSON result cache at the database level, which is flushed globally
on any INSERT / UPDATE;
- Tuned record cache at the CRUD/RESTful level for specified tables or
records on the server side;
- Tuned record cache at the CRUD/RESTful level for specified tables or
records on the client side.
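The second level - a global result cache invalidated by any write - can be illustrated with the following Python sketch. The class, method names, and SQL statements are invented for the example and do not reflect the framework's actual API:

```python
class ResultCache:
    """Global SQL -> JSON result cache, flushed on any write (sketch)."""
    def __init__(self, run_query):
        self.run_query = run_query  # executes SQL against the database
        self.results = {}           # SQL text -> cached JSON result

    def select(self, sql):
        if sql not in self.results:     # miss: reach the database once
            self.results[sql] = self.run_query(sql)
        return self.results[sql]        # hit: reuse the cached JSON

    def execute(self, sql):
        self.run_query(sql)
        self.results.clear()  # any INSERT / UPDATE flushes the whole cache

calls = []
def fake_db(sql):
    calls.append(sql)
    return '[{"ID":1}]'

db = ResultCache(fake_db)
db.select("SELECT * FROM People")                   # reaches the database
db.select("SELECT * FROM People")                   # served from the cache
db.execute("UPDATE People SET Name='X' WHERE ID=1") # flushes the cache
db.select("SELECT * FROM People")                   # reaches the database again
print(len(calls))  # → 3
```

Flushing the whole cache on any write is a deliberately coarse policy: it guarantees consistency cheaply, at the cost of losing all cached results even for unrelated tables, which is why the finer-grained per-table and per-record caches of the last two levels exist alongside it.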