1. Delphi interface types already allow you to write such short code

First of all, it is worth saying that you can use interfaces to write perfectly safe and short code, without ARC, just with the current version of Delphi.
See for instance how this GDI+ library is implemented.

Implicit try..finally Free blocks are already generated by the compiler.

See also our reference article about interfaces.
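For illustration, here is a minimal sketch of the pattern (the IDocument/TDocument names are hypothetical). As soon as an instance is held through an interface variable, the compiler emits the reference counting, and the hidden try..finally, for you:

```pascal
type
  IDocument = interface
    ['{D91A3C25-6E1B-4C0F-9D22-8A5B7E3F1042}']
    procedure Save(const FileName: string);
  end;

  // TInterfacedObject supplies the reference counting implementation
  TDocument = class(TInterfacedObject, IDocument)
  public
    procedure Save(const FileName: string);
  end;

procedure TDocument.Save(const FileName: string);
begin
  // actual work would go here
end;

procedure UseDocument;
var
  Doc: IDocument;
begin
  Doc := TDocument.Create; // reference count = 1
  Doc.Save('test.txt');
end; // count drops to 0 here: the instance is freed, no explicit try..finally needed
```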

2. The Delphi Owner property already allows you to manage object lifetime in the VCL

The whole lifetime of Delphi components is based on ownership.
A form owns its sub-components.
A module owns its sub-components.
And so on...

When the owner is freed, all its components are released as well.

Nice and easy.
Safe and efficient.
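As a reminder of how this looks in practice, here is a minimal sketch using standard VCL classes (TMyForm is a hypothetical form class):

```pascal
procedure TMyForm.FormCreate(Sender: TObject);
var
  Btn: TButton;
begin
  // the form is passed as Owner: no manual Free is needed,
  // since the button is released when the form is destroyed
  Btn := TButton.Create(Self);
  Btn.Parent := Self;          // Parent rules display, Owner rules lifetime
  Btn.Caption := 'Click me';
end;
```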

3. Code length does not make code less readable

We all know that we read much more code than we write.

So the first priority is to make the code as readable as possible.
IMHO try…finally Free patterns do not pollute the readability of the code.
On the contrary, to my taste, they show the exact scope of a class instance's lifetime.
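This is the classic pattern under discussion, sketched here with a TStringList:

```pascal
var
  List: TStringList;
begin
  List := TStringList.Create;
  try
    List.Add('one');
    List.Add('two');
    // the instance clearly lives from Create to Free: the scope is explicit
  finally
    List.Free; // always executed, even if an exception was raised above
  end;
end;
```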

4. Managing object lifetimes can help you write better code

When you manage the object lifetime yourself, you are not tempted to re-create the same object again and again. You factorize and optimize your code. And it results in faster code.
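As a minimal sketch of what such factorization can look like (Process is a hypothetical routine), hoisting a single instance out of a loop instead of re-allocating it on each iteration:

```pascal
// naive version: one heap allocation and deallocation per iteration
for i := 1 to Count do
begin
  List := TStringList.Create;
  try
    Process(List, i);
  finally
    List.Free;
  end;
end;

// refactored version: one instance, reused across iterations
List := TStringList.Create;
try
  for i := 1 to Count do
  begin
    List.Clear;
    Process(List, i);
  end;
finally
  List.Free;
end;
```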

I have seen plenty of Java and C# programmers who do not have a clue about memory allocation and internal processing: they write code which works, without asking themselves what will actually be executed.
Then, especially on the server side, performance scaling is a nightmare.
It works on the developer's PC, but won't pass the first performance tests.

Having to manage memory lifetime by hand does not bother me.
On the contrary, it made me a better (less bad) lazy programmer.
And it helped me write faster code, whose behavior at execution I actually understand.

Of course, it is not automatic.
You can just write try…finally blocks and weep, without looking for refactoring opportunities.
I have seen Delphi code written like that, especially from programmers with a GC-model background.

So do not be afraid to learn how to manage your memory!

5. Managing object lifetimes is worth learning

I was a bit afraid of managing memory when I came from old BASIC and ASM programming on 8-bit systems (good old days!).
A time when there was no heap, only static allocation, with less than 64 KB of RAM.
It worked well. And such programs can run for years without any memory leak!

But managing lifetime is a good way of knowing how your objects are implemented.
When calling an object method, you are not just getting the right result: you may be triggering a lot of processing.
It is worth looking at the internals.

In practice, to write efficient code in a GC world, you will have to learn a lot of unofficial information from the runtime designers, in order to know how the GC is implemented.
As a consequence, performance may vary from one revision of the runtime engine to another.
If you manage your object life time by hand, you know what you are doing.

The ARC model sits in the middle.
But it introduces some issues of its own, like the need for weak references and zeroing weak pointers.
AFAIK the RTL implements weak references with a global lock using the very slow TMonitor, which will slow down the whole process a lot, especially in multi-threaded code (whereas the weak pointer implementation in the mORMot core is much more scalable, by the way).
BTW, in October last year I was already speaking about this global-lock implementation issue, when I discovered a pre-version of it in the XE3 RTL source code. And the version shipped with XE4 did not improve anything.
And can they still claim that performance is a concern for them? Forcing the use of immutable strings for performance is just a joke, when you look at the current RTL.

6. Source code size has nothing to do with execution speed

In practice, the more work the compiler magic or the runtime executes under the hood, the slower it will be.
So shorter code is, most of the time, slower code.

Of course, I know that using some high-level structures (like a hashed dictionary or an optimized sort) can be much faster than a manual search (with a for ... loop) or a naive bubble sort.
This does not mean that more verbose code is always faster.
But my point is that if you rely on some hidden low-level mechanisms, like memory handling, auto-generated structures (like closures), or some RTTI-based features, you will probably write less code, but it will be slower, or less stable.
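For instance, replacing a linear scan with a hashed lookup, sketched here with the standard Generics.Collections unit ('someName' is just a placeholder key):

```pascal
uses
  System.Generics.Collections;

var
  Names: TArray<string>;
  Index: TDictionary<string, Integer>;
  i, Found: Integer;
begin
  // build the hash table once: O(n)
  Index := TDictionary<string, Integer>.Create;
  try
    for i := 0 to High(Names) do
      Index.AddOrSetValue(Names[i], i);
    // each lookup is then O(1) amortized, instead of an O(n) for loop
    if not Index.TryGetValue('someName', Found) then
      Found := -1;
  finally
    Index.Free; // manual lifetime: we know exactly when the table is released
  end;
end;
```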

If you do not handle memory yourself, you are not able to tune the execution process when needed.
It is not for nothing that the most speed-effective Java projects just use POJOs and statically allocated instances, to bypass the GC.

Worth some minutes thinking about...