About the Mac OS X stack alignment, that's a fact I knew about... but since this
kind of code is very low-level, and depends on how the original EMB RTL is
written and used, I'd rather wait for the official EMB implementation before
getting into tuning it.
For example, they may change the way a record is copied, by using inline code
generated by the compiler at compilation time (this is possible, because the
type of the record is already known and fixed at this point) instead of the
slower _CopyRecord routine, which has to resolve the type information at run
time. This would be the most efficient and elegant way of implementing it.
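To make the difference concrete, here is a minimal sketch (hypothetical type and procedure names; the exact RTL helper name varies between compiler versions) of the two copy strategies the compiler can choose between:

```pascal
type
  TPlainRec = record
    A, B: Integer;    // no managed fields
  end;
  TManagedRec = record
    Name: string;     // managed field: needs reference counting
    Value: Integer;
  end;

procedure CopyDemo;
var
  P1, P2: TPlainRec;
  M1, M2: TManagedRec;
begin
  P1.A := 1; P1.B := 2;
  M1.Name := 'abc'; M1.Value := 3;
  // plain record: the compiler emits a simple inline move of the bytes
  P2 := P1;
  // managed record: the compiler emits a call to an RTL helper
  // (System._CopyRecord in Delphi), which walks the record's type
  // information at run time to copy each managed field correctly
  M2 := M1;
end;
```

Since the field layout of TManagedRec is fully known at compile time, the compiler could just as well emit the reference-counting calls inline, which is the optimization suggested above.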
About cross-platform support, the Free Pascal
Compiler has a better approach than EMB, and is now much more advanced than
Delphi's own compiler in that regard. IMHO the FPC way of evolving the compiler
is better: it doesn't force you to pay for a new version, and they maintain
backward compatibility. I really find the EMB Unicode approach not worth it:
from the VCL point of view, it was needed, but from the language and compiler
point of view, it's a mess.
With FPC, the same code can be compiled (and sometimes cross-compiled) on Mac
OS X, Win32, Win64, and Linux 32 or 64, on CPUs ranging from the latest x86-64
down to tiny ARM7 cores... you can mix OS, CPU register width (32 or 64 bit),
even endianness... that is quite a challenge!
So here is my proposal: why couldn't the Embarcadero people use the Free Pascal Compiler as their internal compiler? If EMB sells the IDE, frameworks and support, why reinvent the existing wheel, since FPC is there, alive and working? Didn't they already include the Oxygene compiler technology in their catalog? Why not include FPC in Delphi 2011? (and release it in 2010... not 2012...)
About cross-platform asm tuning, I think the right approach is PUREPASCAL. That is, whenever you want to code anything in asm, first code it in optimized pascal, then optimize it by hand if it's worth it (that is, if your profiler identifies that code section as a real bottleneck for your application). But always leave the original pascal code between {$ifdef PUREPASCAL} conditionals. In all cases, this original pascal code will perform well. For the fast pointer arithmetic needed in this kind of tuned pascal code, some new low-level types and defines (like PtrInt or PtrUInt for 64-bit CPUs) will adapt your code to whatever CPU it runs on. That's how we implemented it in our SQLite3 framework.
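As a minimal sketch of that pattern (a hypothetical function, not code from the framework; FPC defines PtrUInt natively, so the type declaration below is only for Delphi):

```pascal
{$ifndef FPC}
type // FPC defines PtrInt/PtrUInt natively; declare them for Delphi
  PtrUInt = {$ifdef CPU64}NativeUInt{$else}cardinal{$endif};
{$endif}

{$define PUREPASCAL} // no asm version written yet for this target

// sum all bytes of a buffer: the portable pascal version stays between
// the conditionals, so an asm variant can be swapped in per target
function SumBytes(Data: pointer; Count: PtrUInt): PtrUInt;
{$ifdef PUREPASCAL}
var i: PtrUInt;
begin
  result := 0;
  if Count = 0 then
    exit;
  for i := 0 to Count-1 do
    // cast the pointer into a pointer-sized integer for arithmetic,
    // so the same code compiles on 32-bit and 64-bit targets
    inc(result, PByte(PtrUInt(Data)+i)^);
end;
{$else}
asm
  // a hand-tuned x86 version would go here, only for targets where
  // the profiler proved this routine to be a real bottleneck
end;
{$endif}
```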
In most parts of your software, optimized pascal code is the key to efficiency. If you want something fast, code it with pointers, and use Alt-F2 to inspect the generated asm. For your software core, avoid very high-level constructs (like generics or TList), and write tuned pieces of code. In one word: know what you are doing (i.e. how it will compile and be executed by the CPU), and know what you are coding for (i.e. what the purpose of your code is).
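For instance, here is a hypothetical illustration of the same loop written with indexed access and with a pointer walk - comparing the two in the Alt-F2 CPU view shows how the generated code differs:

```pascal
function SumIndexed(const A: array of integer): integer;
var i: integer;
begin
  result := 0;
  for i := 0 to high(A) do
    // indexed access: base + index*SizeOf(integer) addressing each time
    result := result + A[i];
end;

function SumPointer(const A: array of integer): integer;
var P: PInteger;
    n: integer;
begin
  result := 0;
  if length(A) = 0 then
    exit;
  P := @A[0];
  for n := 1 to length(A) do begin
    inc(result, P^);
    inc(P); // a single pointer increment instead of indexed addressing
  end;
end;
```

Of course, the optimizer may generate similar code for both in simple cases like this one, which is exactly why the profiler must have the last word.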
And don't speak about security and pointers. That's marketing. There are plenty of security issues with huge frameworks like Java or DotNet. You can always use code injection or configuration overwriting, even on a "secure" virtual machine. Just do some Google searches, and be honest. Security is about code quality, best practices and algorithms. Security is not a "magic" built-in feature, whatever the marketing people tell you.
Think about algorithms and data structures, take a pen and a sheet of paper, sketch some design drawings, go grab a coffee and/or run a few miles with some good music on your headphones (as I like to), before sitting down at your keyboard. You will write much more efficient code.
3 reactions
1 From A. Bouchez - 28/03/2010, 12:52
Another point about Unicode:
I really find the EMB Unicode approach not worth it: from the VCL point of view, it was needed, but from the language and compiler point of view, it's a mess.
I really didn't get their point: why didn't they just use UTF-8 encoding instead of UTF-16? And don't speak about character indexing in UnicodeString; if you know what Unicode is, you know about diacritics, surrogates, UTF-32 and such. Saying Windows is natively "Unicode" is another true lie: it is UTF-16. Converting from/to UTF-8 at the API boundary is barely noticeable (thanks to the L1 cache of modern CPUs), certainly no worse than the hidden slow conversions introduced with Delphi 2009. I really don't like the hidden API calls, like WideCharToMultiByte() and such, introduced in the Delphi 2009 RTL. Using UTF-8 encoding as the default would have been enough for the VCL to be "Unicode-ready", as marketing says...
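As a concrete illustration of why character indexing is misleading whatever the encoding (a sketch; both constants below render as the very same glyph):

```pascal
const
  // 'é' in precomposed form: code point U+00E9 = one UTF-16 unit
  Precomposed: UnicodeString = #$00E9;
  // 'é' in decomposed form: 'e' + combining acute accent U+0301
  // = two UTF-16 units, yet the same user-perceived character
  Decomposed: UnicodeString = 'e'#$0301;
// length(Precomposed) = 1 but length(Decomposed) = 2, so S[i] does
// not count characters even in UTF-16; and any code point outside
// the BMP needs a surrogate pair, i.e. two UTF-16 units anyway
```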
2 From Chris - 04/04/2010, 01:33
'Saying Windows is natively "Unicode" is another true lie: it is UTF-16.'
The term 'unicode' is not a synonym for UTF-8 - rather, it's an umbrella term that encompasses UTF-8, UTF-16LE, UTF-16BE, UCS-4... More exactly, these are all 'unicode encodings' - none is more 'unicode' than the others.
'About cross platform, Free Pascal Compiler has a better approach than EMB, and is now much more advanced than the Delphi owner for that.'
What do you mean by 'more advanced'? Surely 'supports a wider range of platforms' isn't the only yardstick for this. Does FPC support closures, for example? Generally, my impression is that FPC and DCC have just evolved in different ways.
'So here is my proposal: why couldn't Embarcadero people use the Free Pascal Compiler as their internal compiler?'
IMO, the key strengths of Delphi from the off were made possible by Borland's complete control over the compiler. By which I mean, pretty much every change between the old Borland Pascal dialect and the new Delphi one was made to make the VCL and its associated design-time support possible (e.g. the new object model, exceptions, RTTI - all stuff FPC simply copied). To say 'goodbye' to doing something similar in the future would lose a key commercial advantage - cf. Microsoft with C#, or even Apple with Objective-C - being able to control your own language is hardly an outmoded idea. (I'm assuming, of course, that the FPC folks would not simply accept Embarcadero requesting the language evolve in a specific direction, or even have a certain feature added to it, simply because this fitted Embarcadero's needs.)
Personally, I also disagree with you about D2009+'s vs. FPC's contrasting approaches to Unicode - to me, it's FPC's approach that is a mess. The rights and wrongs of Embarcadero's decisions here have been debated to death already though...
3 From A. Bouchez - 05/04/2010, 13:16
Thanks Chris for your comment back!
About "Unicode", that was exactly my point about the "true lie"... Unicode is not only an encoding (i.e. how code points are mapped to bytes), but a whole standard. That's why I spoke about diacritics.
Delegates are easy to use for user interface code, and very pleasant to write in a blog code sample, but I'm not sure they are such a good fit for multi-threading, for example - for a real app, I mean... classes and methods are a cleaner approach if you have a lot of parameters to pass to your thread procedure, or want to share the same data among threads.
I suggest you at least take a look at the Free Pascal forum archive, and all the debates about the EMB way of evolving the language. The discussions there always sounded less marketing-driven and more technical to me than the approach used by CodeGear/Embarcadero. Saying the FPC team just follows EMB is not true. FPC was one of the first 64-bit compilers available for the Win64 platform. And what about the new multi-threaded FPC heap, available by default since version 2.4? Or their WideString approach (see next paragraph)? Or their package system for units?
It seems to me that EMB's greatest successes came from embedding open source or third-party components (like FastMM4, FastCode, PNG, GIF, the ribbon...). DataSnap is powerful, but you can get the same with FPC, and perhaps even more...
Why do you say FPC's approach to Unicode is a mess? From the compiler point of view, it is not. There is no possible hidden conversion like with Delphi 2009/2010. WideString in FPC has been handled in a "Delphi 2009/2010 UnicodeString" way since the beginning... IMHO the FPC team was right not to follow CodeGear in mapping their WideString to OLE strings... In fact, I think the FPC WideString made it the first truly fast Unicode compiler for the object pascal language, with 100% compatibility with AnsiString, some years before Delphi 2009 was released.
I didn't mention the Delphi 2010 RTTI. Why didn't EMB make the compiler generate better code (like for the CopyRecord system procedure - see http://blog.synopse.info/post/2010/... ), instead of making the app code size grow so much? For our ORM approach - see http://blog.synopse.info/category/O... - we would like wider RTTI support than what has been available since Delphi 7, but not at the expense of code size.