There are whole books about optimization, but few of them mention when the right moment for optimization actually is.
There are several perspectives with fundamental differences:
- during development
- at the end of the development cycle
- never
What is the correct answer? In fact, there are only answers that are less correct or more inadequate.
During development

This answer has the most potential to be wrong. Even though it is the most common approach and shouldn’t be dismissed as wrong outright, it has the potential to cause issues. When can it cause issues? When micro-optimization is used in excess instead of actual optimization.
Micro-optimization can come in many shapes, depending on how the project is developed. Some write “bulk” or procedural code; others use frameworks and ORMs. When you deviate from the general rules of writing code in order to optimize, you should think about the consequences. It will be harder for the next person who looks at your code to understand what’s going on, and if you make a rule out of it, the project will become indecipherable over time.
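A hypothetical sketch (the function and names are illustrative, not from the article) of how excessive micro-optimization buries intent. Both functions do the same thing; only the first one tells the reader what that thing is:

```python
def count_large_orders_readable(orders, threshold):
    """Straightforward version: the intent is obvious at a glance."""
    return sum(1 for amount in orders if amount > threshold)


def count_large_orders_micro(orders, threshold):
    """“Optimized” version: manual loop, aliasing, coercion tricks.
    Any speed gain is marginal, and the intent is now buried."""
    n = 0
    t = threshold          # alias “for speed”
    for a in orders:
        n += a > t         # relies on bool-to-int coercion
    return n
```

Multiply the second style across a whole codebase and you get the indecipherable project described above.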
For instance, if you are using an ORM and you start writing raw SQL, you are already defeating the purpose of the ORM. In an ORM there are several steps for each query:

Build the object query -> Parse to SQL -> Execute query -> Parse result -> Load the resulting objects

With raw SQL there are fewer:

Build the SQL query -> Execute query -> Parse result

The steps may vary depending on the implementation.
A lot of the time the second approach looks simpler, and it is certainly faster. So why use the first one? For the architectural advantages: you can attach triggers when properties are accessed or set, for instance.
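The two pipelines can be sketched side by side. This is a minimal illustration with a toy ORM-style class (the names are hypothetical, not a real ORM library), showing the kind of trigger on property assignment that raw SQL tuples cannot give you:

```python
import sqlite3


class User:
    """Toy ORM-style object: the 'name' property lets us attach a hook
    (a trigger) that fires whenever the attribute is set."""

    def __init__(self, user_id, name):
        self.user_id = user_id
        self._name = name
        self.dirty = False  # trigger side effect: track unsaved changes

    @property
    def name(self):
        return self._name

    @name.setter
    def name(self, value):
        self._name = value
        self.dirty = True  # the hook fires on every assignment


def load_users_orm(conn):
    # Build query -> execute -> parse rows -> load objects
    rows = conn.execute("SELECT id, name FROM users").fetchall()
    return [User(row[0], row[1]) for row in rows]


def load_users_raw(conn):
    # Build SQL -> execute -> parse result (plain tuples, no hooks possible)
    return conn.execute("SELECT id, name FROM users").fetchall()


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada')")

users = load_users_orm(conn)
users[0].name = "Grace"  # the setter trigger marks the object as dirty
```

The raw version returns bare tuples faster, but the object version can react to changes, which is exactly the architectural advantage the extra steps buy.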
A mature developer is one who writes “readable” code, not just optimal code.
At the end of the development cycle
The advantage is that there are no architectural compromises during development.
This is usually the best method: you have the finished product, developed without compromises, and you can see which points should be optimized. When you have all the components, it is much simpler to reorganize them than during development, when changes may still appear, which in turn can generate code redundancy, for instance.
The disadvantage is that at the end it is sometimes difficult to find the weak points of the application.
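One common way to find those weak points is to profile the finished application. A minimal sketch with Python’s standard-library profiler, using a deliberately slow hypothetical workload so the hot spot stands out:

```python
import cProfile
import io
import pstats


def slow_part():
    """Deliberately heavy loop: the weak point we want the profiler to find."""
    total = 0
    for i in range(200_000):
        total += i * i
    return total


def fast_part():
    return sum(range(1000))


def application():
    # Stand-in for the finished product whose weak points are unknown.
    slow_part()
    fast_part()


profiler = cProfile.Profile()
profiler.enable()
application()
profiler.disable()

# Sort by cumulative time: the weak point (slow_part) rises to the top.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
```

Measuring like this on the complete product is what makes end-of-cycle optimization targeted instead of guesswork.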
Never

First of all, let’s make it clear: by this I mean “serious” projects.
There is a general rule that says “hardware is cheap, programmers are expensive”. More broadly, this means that a lot of the time it is easier to scale an application with hardware than to make major compromises in the code.
A lot of companies and projects support this perspective. Correctly applied, this principle yields well-organized code that is easy to read and contains few hacks. The advantage shows during development: fewer hacks make a project more organized (in theory) and easier to extend.
Unfortunately, it seems there is also a different interpretation: “if it works, it doesn’t have to be perfect”. Where can this lead? Basically, it is the best excuse for dirty code. The major difference is that you never optimize, yet the code ends up looking just as bad as when you micro-optimize excessively.
In general, projects that use micro-optimization excessively have a great potential to be rewritten often, either partially or fully, because there is another rule that says “rather than repair, a lot of the time it is easier to rewrite”. Unfortunately, projects with badly written code suffer the same fate.
A major disadvantage of bad code is that it slows down the development cycle; in other words, minor tasks take longer and longer to accomplish.
Projects don’t always have to be optimized. But when we do have to optimize, compromises regarding the architecture must be kept to a minimum.