-
I’ve been working with Magento for a long time, and the platform has changed a lot over the years. I would like to share my personal thoughts on it.
How Magento 2 learned from the past to make some brand-new mistakes
I had my first encounter with Magento in 2011 and, back then, all I knew about it was that it was based on Zend Framework and that it was doing e-commerce (obviously).
I was coming from the Symfony framework world, with its ample documentation, great community and rock-solid implementation. This was still Symfony 1; Symfony 2 was just coming out.
By comparison, Magento had next to no documentation, barely any community, and an implementation with a plethora of bugs. In my first few days on the job I saw a lot of people debugging deep in the Core, and I was perplexed: I almost never needed to debug Symfony itself, let alone actually find bugs in it.
Some may argue that Symfony is a framework and Magento is a platform and the argument is as solid as a guinea pig.
The reason for this situation was simple: they didn’t expect the platform to be so successful. But it was, for an equally simple reason: it was pretty much the only platform in the PHP world at that time built with modularity in mind, period! There were other platforms, but definitely none as versatile, fully featured and modular as Magento.
Moving forward, they spent a lot of time and effort building ample documentation for Magento 1, focused on quality and rock-solid stability, all paired with exceptional support for enterprise partners!
Psych! No, they did not! The documentation for M1 was always sparse and of very poor quality. Basically, the best resources were Ben Marks’ “Fundamentals of Magento Development” course, a book (about which I’m not entitled to an opinion, since I didn’t read it), some blog posts from individuals or companies that worked with the platform, and a lot of StackOverflow questions and answers. All this was paired with exceptionally bad support for enterprise users, extremely slow and low-quality solutions and, to top it off, plenty of Core bugs. It wasn’t unusual to find non-standard, half-baked implementations and bugs in the Core. But the Core was simple enough that, with Xdebug and a lot of patience, everything was more or less fixable.
Fixes coming from their support team were close to a myth. If you didn’t want to fix a bug, or just didn’t feel like working, you could open a ticket, and in most cases it took days or even weeks to get a proper response. By that time you would usually have fixed it already and (in some cases) even sent the fix to their support so they could include it in the Core.
And along comes Magento 2
With the burden of the popularity-driven growth of Magento 1, Magento 2 came to the rescue.
It took a long time for Magento 2 to be ready, and I think that is a story in itself, but some actions were taken to prevent some of the issues that the first version had.
Let’s look at some of the platform’s issues:
- classes could only be extended by overwriting;
- the frontend was based on the Prototype framework.
There were many other issues, but I think these were among the most important ones.
And now let’s see how Magento 2 managed to fix all these issues
It is good practice to start with something nice, so I will start with the documentation: Magento 2 had documentation from the start, and it was both useful and well made!
Regarding class overwriting, two approaches were implemented:
- dependency injection (from Symfony);
- plugins, using the interceptor pattern.
Dependency injection really helped with the ability to substitute classes and functionality, and the plugins helped a lot with a non-invasive way of extending the functionality. Before plugins, only observers were able to modify data in a non-invasive way, and the big problem was that (in many cases) there just weren’t observers everywhere you needed them to be.
Unfortunately, this created a lot of overhead. Maybe in M1 you could just put a breakpoint and look at the stack trace to see which methods were changing what, but now… it is just a lot more complex: each method has an interceptor class that triggers the plugin mechanism. Each plugin can be triggered before, after or around the method, and an around plugin can even prevent the actual method from being called. In short, it is a lot more code, with a lot more methods and a lot more code to follow. When it works it is a lot more elegant, but when it doesn’t it is a lot worse to debug.
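To give an idea of what this looks like in practice, here is a rough sketch of a plugin class (a minimal, hypothetical example: the module namespace, the GreetingService class and its greet() method are made up, and the plugin would still have to be declared as a plugin entry in the module’s etc/di.xml for Magento to generate the interceptor):

<?php
declare(strict_types=1);

namespace Acme\Example\Plugin;

// Hypothetical class being intercepted, used here only for illustration.
use Acme\Example\Model\GreetingService;

class GreetingServicePlugin
{
    // A "before" plugin receives the original arguments and may rewrite them.
    public function beforeGreet(GreetingService $subject, string $name): array
    {
        return [trim($name)];
    }

    // An "after" plugin receives the result of the original method and may rewrite it.
    public function afterGreet(GreetingService $subject, string $result): string
    {
        return $result . '!';
    }

    // An "around" plugin wraps the call; the original method only runs when $proceed()
    // is invoked, which is how a plugin can prevent it from being executed at all.
    public function aroundGreet(GreetingService $subject, callable $proceed, string $name): string
    {
        if ($name === '') {
            return 'Hello, stranger';
        }

        return $proceed($name);
    }
}

Every one of these hooks goes through the generated interceptor class, which is exactly the extra indirection described above.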
In a dramatic twist, many of the classes were not implemented properly in the Core. You see, when Magento 2 was released, a lot of compromises had to be made, and some of the Core modules were just “made to work” in the new framework. Later, when everyone was expecting the cleanup to happen… it didn’t. The reason is simple: why fix the Core if doing so would break compatibility with the modules already created?
This issue even made Magento Core developers suggest that you should follow the guidelines, not use the Core as an example: do what we say, not what we do.
The frontend was fixed even better! You may ask yourself what that “Prototype” library is and why anyone in their right mind would choose it. It made a lot of sense back then, when Magento 1 was created and there were several popular JS libraries, mainly jQuery, Prototype and MooTools. There were a lot more than these three, but these were the popular ones. There was a war between jQuery and Prototype, just like there is now between React, Angular and Vue. The first had a new approach with almost a new language, while the second was aiming at extending the browser capabilities in a more discreet fashion. We now know who won, but back then it wasn’t obvious. As a small note, I was also a fan of Prototype back then.
The Magento 2 team realized over time that it was a mistake and, to fix it, they promised to make something more flexible. The new, more flexible solution didn’t involve the Prototype library, but instead included jQuery, Knockout.js and RequireJS, and that is just at the top level, without taking their dependencies into consideration.
The idea was to separate the frontend and the backend, to have the ability to build completely new frontend apps. And, as before, this was never properly finished, and now there is a highly complicated system, partially separated, partially single-page app and partially… multi-page, implemented in a variety of styles. And this is only on the storefront; the store backend has a slightly different, more complex system of XML, PHTML, HTML and JS files.
As a backend developer I can truly say that it is a lot harder to debug (or just to understand how the frontend part works) now than before. The XML part of generating grids is probably the hardest: it is extremely difficult to debug anything in it and, if you mess something up, you get no warnings at all. You have to find the class that is building the grid and see if an exception is triggered there, and that is not an obvious solution at all…
And, of course, the entire thing is a lot slower because of all these “new engineering features”. When all the code is generated, the links to static resources are in place and everything is set to production mode, it isn’t slow; it isn’t exactly fast, but it isn’t terribly slow either. But when you don’t have cache, you haven’t deployed the static resources and you are in developer mode, it is just awfully slow: it can take a few minutes to load a simple product page! It is just ridiculous if you ask me. It isn’t an operating system or a video game, it is a shopping cart; why in the world would it take 5 minutes to display a page, even without a lot of extra modules?
Keep in mind that Magento 2 didn’t come with a lot of new features; most of the functionality was already in M1, only more “engineered”. So those 5 minutes of code generation, linking and whatever magic is going on in there aren’t adding a lot of new features, just a lot of refactoring of the old system.
The Magento Cloud
Magento offered a Cloud. I don’t know if they still do and I don’t care; nobody really cares about it anymore, mostly because it wasn’t what everybody wanted.
People want simple things: you have an app, you push it to the clouds and money starts raining down. That should be it, fewer things to worry about.
Demandware, which doesn’t have a nice open source core, is doing exactly this: you don’t have the same amount of control, but you don’t take care of your website, the gods in the clouds do! The developer only develops and doesn’t need to care about what magic operations are doing, because he is not ops, he is dev!
The Magento Commerce Cloud was aimed at exactly that: just push this monster into the cloud and some very smart ops people will scale it for you! But it wasn’t like that; it was never easy, nor fast. And, on top of that, other hosting providers started building hosting solutions for Magento that were better at scaling than the official one, which is just ridiculous.
A positive note to end on
There is an interesting core underneath it all. There are plenty of very smart features that make the platform so popular. Now you also have tests, so you can even do TDD.
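As a small aside on the testing part: since the framework now ships with PHPUnit support, a change can start from a failing unit test. Below is a minimal, hypothetical sketch (both classes are invented for illustration and kept in one file for brevity; a real module would split them and use the framework’s own test setup):

<?php
declare(strict_types=1);

use PHPUnit\Framework\TestCase;

// Hypothetical class under test; in a real module it would live in its own file.
class PriceFormatter
{
    public function format(float $price, string $currency = 'USD'): string
    {
        return number_format($price, 2) . ' ' . $currency;
    }
}

// Written first, watched fail, then made green - the usual TDD loop.
class PriceFormatterTest extends TestCase
{
    public function testFormatsPriceWithTwoDecimalsAndCurrency(): void
    {
        $formatter = new PriceFormatter();

        self::assertSame('19.99 USD', $formatter->format(19.99));
        self::assertSame('5.00 EUR', $formatter->format(5, 'EUR'));
    }
}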
Even with all these (over)engineering challenges, there are still plenty of passionate developers out there who are able and willing to overcome them.
There are also a lot of tools developed by the community to overcome some of the shortcomings, like generators for the huge number of files required for a model, MSP_DevTools to help with debugging the frontend, or n98-magerun2 to help with crons, and lots and lots more.
And lastly, there are still a lot of very brilliant and passionate developers out there that are willing to figure out a way to develop, scale and make sales for one more day!
-
Ah, the holiday spirit.
Inspired by a post in Perl Advent, I decided that it would be nice to see a similar example done with PHP and Go. Please note that this post is heavily based on the one mentioned above.
Let’s say you want to wish others a happy holiday using the speed of Go, but you are using PHP. What is there to be done?
This is a great time in the PHP world to do something like this, as the newly released PHP 7.4 comes with “Foreign Function Interface” (FFI for short).
Equipped with this new power, I installed PHP 7.4 and got to work.
Let’s start with the super incredible Go greeting part by creating a file “greeting.go”:
package main

import (
    "fmt"
)

func main() {
    WishMerryChristmas()
}

func WishMerryChristmas() {
    fmt.Println("We wish you a Merry Christmas!")
}
Now let’s run it with:
$ go run greeting.go
It should display:
We wish you a Merry Christmas!
Great stuff so far! Now this is nice and fast and everything, but it needs to be a service. Let’s see how that would look:
package main

import (
    "C"
    "fmt"
)

func main() {}

//export WishMerryChristmas
func WishMerryChristmas() {
    fmt.Println("We wish you a Merry Christmas!")
}
As you can see, there are several differences:
- we also imported “C”;
- we removed the function call from main();
- we added a comment to export the function.
To compile the code, run:
$ go build -o greeting.so -buildmode=c-shared
Note that this should be run each time the Go file is modified.
The output should be two files: “greeting.so” and “greeting.h”.
The header file “greeting.h” contains the type and function definitions. If you come from the C world you are probably already familiar with this kind of file. Normally, all we need to do now is import the header file using FFI and use the function!
For this I’ve created a file titled “greeting.php”:
<?php

$ffi = FFI::load("greeting.h");
$ffi->WishMerryChristmas();
Sure looks simple enough; you just have to run it with:
$ php greeting.php
PHP Fatal error:  Uncaught FFI\ParserException: undefined C type '__SIZE_TYPE__' at line 43 in /home/claudiu/php-go/greeting.php:3
Stack trace:
#0 /home/claudiu/php-go/greeting.php(3): FFI::load()
#1 {main}

Next FFI\Exception: Failed loading 'greeting.h' in /home/claudiu/php-go/greeting.php:3
Stack trace:
#0 /home/claudiu/php-go/greeting.php(3): FFI::load()
#1 {main}
  thrown in /home/claudiu/php-go/greeting.php on line 3
Not exactly the greeting that I was hoping for…
After some digging, I found this one on the manual page:
C preprocessor directives are not supported, i.e. #include, #define and CPP macros do not work.
Because of this, we unfortunately can’t really use the header file, or at least I don’t know how.
On the bright side, we can use FFI::cdef(), which allows specifying the function definitions directly. If I lost you on the way, what I’m trying to do is just tell PHP which function definitions it can use from “greeting.so”.
The new code will become:
<?php
$ffi = FFI::cdef("
void WishMerryChristmas();
", __DIR__ . "/greeting.so");

$ffi->WishMerryChristmas();
And if we run it:
$ php greeting.php
We wish you a Merry Christmas!
We are making great progress; the service is doing a great job!
Adding an int parameter
The greeting is nice and fast and all but it would be nice to be able to specify how many times to run it.
To achieve this, I’m modifying the previous example method to specify how many times to display the greeting in the file greeting.go:
//export WishMerryChristmas
func WishMerryChristmas(number int) {
    for i := 0; i < number; i++ {
        fmt.Println("We wish you a Merry Christmas!")
    }
}
Run the compilation as before and everything should be fine.
In the PHP script we need to modify the function definition. To see what we should use we can take a hint from the “greeting.h” file. The new function definition in my file is:
extern void WishMerryChristmas(GoInt p0);
“GoInt”? What magic is that? Well, if we look in the file, there are the following definitions:
...
typedef long long GoInt64;
...
typedef GoInt64 GoInt;
...
With this in mind, we can change the PHP file to:
<?php
$ffi = FFI::cdef("
void WishMerryChristmas(long);
", __DIR__ . "/greeting.so");

$ffi->WishMerryChristmas(3);
Run it and you should see:
$ php greeting.php
We wish you a Merry Christmas!
We wish you a Merry Christmas!
We wish you a Merry Christmas!
Ah, it’s beginning to feel a lot like Christmas!
Adding a string parameter
Displaying a greeting multiple times is quite nice, but it would be nicer to add a name to it.
The new greeting function in Go will be:
//export WishMerryChristmas
func WishMerryChristmas(name string, number int) {
    for i := 0; i < number; i++ {
        fmt.Printf("We wish you a Merry Christmas, %s!\n", name)
    }
}
Don’t forget to compile and let’s get to the interesting part.
Looking into the “greeting.h” file, the new function definition is:
extern void WishMerryChristmas(GoString p0, GoInt p1);
We already got GoInt, but GoString is a bit trickier. After several substitutions I was able to see that the structure is:
typedef struct { char* p; long n; } GoString;
It is essentially a pointer to a list of characters and a length.
This means that, in the PHP file, the new definition is going to be:
$ffi = FFI::cdef("
typedef struct { char* p; long n; } GoString;
typedef long GoInt;
void WishMerryChristmas(GoString p0, GoInt p1);
", __DIR__ . "/greeting.so");
p0 and p1 are optional, but I’ve added them for a closer resemblance to the header file. On the same note, GoInt is basically a long, but I left the type there for the same reason.
Building a GoString was a bit of a challenge. The main reason is that I didn’t find a way to create a “char *” and initialize it. My alternative was to create an array of “char” and cast it, like this:
$name = "reader";
$strChar = str_split($name);

$c = FFI::new('char[' . count($strChar) . ']');
foreach ($strChar as $i => $char) {
    $c[$i] = $char;
}

$goStr = $ffi->new("GoString");
$goStr->p = FFI::cast(FFI::type('char *'), $c);
$goStr->n = count($strChar);

$ffi->WishMerryChristmas($goStr, 2);
And let’s try it out:
$ php greeting.php
We wish you a Merry Christmas, reader!
We wish you a Merry Christmas, reader!
Success!
At this point I would like to move the GoString creation into a new function, just for the sake of code clean-up.
And the new code is:
$name = "reader";

$goStr = stringToGoString($ffi->new("GoString"), $name);

$ffi->WishMerryChristmas($goStr, 2);

function stringToGoString($goStr, $name) {
    $strChar = str_split($name);

    $c = FFI::new('char[' . count($strChar) . ']');
    foreach ($strChar as $i => $char) {
        $c[$i] = $char;
    }

    $goStr->p = FFI::cast(FFI::type('char *'), $c);
    $goStr->n = count($strChar);

    return $goStr;
}
And let’s try it:
$ php greeting.php
We wish you a Merry Christmas, ��!
We wish you a Merry Christmas, ��!
That’s not right… it seems like it’s displaying some junk memory. But why?
Looking into the documentation for FFI::new, I saw a second parameter, “bool $owned = TRUE”:
Whether to create owned (i.e. managed) or unmanaged data. Managed data lives together with the returned FFI\CData object, and is released when the last reference to that object is released by regular PHP reference counting or GC. Unmanaged data should be released by calling FFI::free(), when no longer needed.
This means that, when the function returns, the GC clears the memory backing the string. This is very likely a bug, but there is a very simple fix: just modify the char array creation to pass “false”:
$c = FFI::new('char[' . count($strChar) . ']', false);
Let’s try it again:
$ php greeting.php
We wish you a Merry Christmas, reader!
We wish you a Merry Christmas, reader!
And it’s working!
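Since the character buffer is now unmanaged, the documentation quoted above also implies that releasing it is up to us. Here is a minimal sketch of how that could look (it assumes the same “greeting.so” and definitions as before and simply keeps a reference to the buffer so it can be freed once the call is done):

<?php
$ffi = FFI::cdef("
typedef struct { char* p; long n; } GoString;
typedef long GoInt;
void WishMerryChristmas(GoString p0, GoInt p1);
", __DIR__ . "/greeting.so");

$name = "reader";

// Unmanaged buffer: the GC will not touch it, so freeing it is up to us.
$buffer = FFI::new('char[' . strlen($name) . ']', false);
foreach (str_split($name) as $i => $char) {
    $buffer[$i] = $char;
}

$goStr = $ffi->new("GoString");
$goStr->p = FFI::cast(FFI::type('char *'), $buffer);
$goStr->n = strlen($name);

$ffi->WishMerryChristmas($goStr, 2);

// The buffer is no longer needed, so release it explicitly.
FFI::free($buffer);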
Conclusion
Maybe it’s not as easy as importing a header file when trying to run Go libraries from PHP, but with a little patience it is certainly possible! A big advantage to this is that a library built in Go, or other programming languages that allow it, can be used by a language like PHP without the need to reimplement the logic!
And, on this last positive remark, I would like to wish you happy holidays!
-
How to make use of the Xiaomi Air Conditioning Companion in Home Assistant in only 20 easy steps!
Step 1:
Buy a Xiaomi Air Conditioning Companion without first researching how well it is supported by Home Assistant.
Step 2:
Realize that the Chinese power socket for 16A is different from the 10A one.
Step 3:
Realize that nobody is selling a 16A socket adapter for a 10A China power outlet in Romania.
Step 4:
Order a 16A power socket from China.
Step 5:
Wait 2 months for both the socket and the device to arrive from China.
Step 6:
Realize that there is no adapter for the wall outlet from China either.
Step 7:
Find the only seller that will sell a modular outlet that matches your outlet box, probably by mistake.
Step 8:
Wait for them to tell you that it will be delivered in a month.
Step 9:
If it has not arrived yet, wait for the socket module to be delivered.
Step 10:
Install the wall module and the power socket.
Step 11:
Connect the Xiaomi Air Conditioning Companion and the AC for the first time.
Step 12:
Realize that the Xiaomi Mi Home App has been updated in the meantime and there is no working tutorial on how to get the password for Home Assistant.
Step 13:
Figure out (after many tries) how to get the gateway password.
Step 14:
Add it to Home Assistant and realize that the gateway is not well supported and that the only things you can do from Home Assistant are to sound the alarm and change its volume.
Step 15:
Find the xiaomi_airconditioningcompanion module and realize that you didn’t need the gateway password in the first place.
Step 16:
Downgrade the Xiaomi Mi Home App to get the token as specified in the instructions: https://www.home-assistant.io/integrations/vacuum.xiaomi_miio#retrieving-the-access-token
Step 17:
Realize that you don’t have the right version of Hass.io to use the module.
Step 18:
Move from Hass.io to Hassbian.
Step 19:
Finally, install the extension and set up the module.
Step 20:
Now the only thing that remains is to enjoy the comfort of controlling your air conditioning from a few feet away, without ever using the remote!
-
Disclaimer
Please note that this solution is tailored specifically to my needs. Your needs may vary, but don’t worry: everything is on GitHub, so feel free to take what you need.
I would like to add that this is not a “you’ve been using Docker with Magento 2 wrong, this is how it’s done” kind of post; I just want to share what I’m using and how. It may not be the best fit for you, but maybe you will find something useful.
Intro
For almost two years I’ve been using Magento 2 in Docker containers. I had used Docker before that, but I must admit it was because I had to, not because I had seen the light, I mean the advantages.
As you may know, Magento 2 is not exactly a small and light app; it’s quite heavy on resources, especially during development.
Compared to a VM, with Docker you get:
- Speed: I think speed is one of the biggest advantages; you can stop and start containers very fast. Only the first build will take time, after that it will be very fast;
- Light on resources: Compared to a VM, the container does not need to include the entire operating system, so it will not take a lot of space on disk and will not use a lot of processing power, because it’s not an entire OS doing… well… OS stuff, it’s just a server most of the time.
The downsides:
- Learning curve: if you don’t know Docker and Docker Compose, it will be less intuitive at first;
- First setup: it is harder to set up at first; if you have been using a VM for a long time, you will feel that you are going against the tide, but I assure you, in the long term it will be a lot simpler this way.
Taking the above into consideration, I would like to say that when I started with this setup I was using Linux with 8 GB of RAM. One of my colleagues even wished me good luck installing Magento 2 on an ultraportable 8 GB RAM system. He wasn’t even sarcastic, more like pitying me for my bad workstation selection.
One of the requirements was that I needed some isolation and per-project configuration; I couldn’t just install a server and be done with it.
Previously I had been using Vagrant and VirtualBox, a great fit and very easy to use (most of the time). However, for Magento 2 I realised that it was heavy enough on its own and it was making me run out of resources fast.
Also, I wanted it to be easy to use; I don’t like having to remember and type out a three-word command, I just want to press some tabs and get it over with.
The requirements
There were some specific requirements:
- nginx config – it should work out of the box; the Magento nginx configuration isn’t exactly small, and I wanted to make use of it with ease;
- SSL – the domain has to also work with HTTPS, mostly because some APIs require it; the certificates don’t need to be valid;
- bash – the Magento command should run as the system user, not as root (as containers usually do). This is required because I don’t want the files generated by Magento to be owned by root (and therefore only removable with root rights);
- Xdebug – it must work out of the box and be easily integrated with an IDE.
The implementation and usage
Magento 2 offered a Docker container to work with. I will not say anything about it, since it wasn’t at all something I needed.
My main source of inspiration was: https://github.com/markoshust/docker-magento. The project changed a lot since I’ve started, so I definitely think you should check it out.
The starting point is: https://github.com/claudiu-persoiu/magento2-docker-compose
The relevant files are:
- magento2 – it should contain a folder html with the project;
- dkc_short – it can reside anywhere, but it should be sourced from ~/.bash_profile or ~/.bashrc. This file contains shortcuts; it’s not necessary, but I like it because it makes my life easier;
- docker-compose.yml – it contains all the mappings and relevant containers.
NOTE: I think I should point out that the commands on the PHP container can run in two ways: as the system user or as root. This is a limitation of the Linux implementation; please make a note of it, as I will refer to it later.
Step 1:
What you should do when starting a new project with an existing Magento2 repository:
$ git clone https://github.com/claudiu-persoiu/magento2-docker-compose.git project_name
$ cd project_name
$ git clone your_own_magento2_repository magento2/html
Step 2 (optional):
Copy the shortcuts to your bash console:
$ cp dkc_short ~/
$ echo "source ~/dkc_short" >> ~/.bash_profile
$ source ~/.bash_profile
NOTE: If you don’t have the file ~/.bash_profile on your computer, just use ~/.bashrc
Step 3:
Start the setup:
$ dkc-up -d
It will take a bit of time the first time, but it will be a lot faster next time you run it.
Step 4:
Run composer install:
$ dkc-php-run composer install
That’s about it.
What is this dkc stuff?
Well, I like to use tab completion when running a command, so I added some aliases that allow me to run a Magento command without typing everything out: I just type dkc[tab]p[tab]-[tab] and the command. I just love bash autocomplete.
The command list is very simple:
- dkc-up -d – start the containers in the background
- dkc-down – stop all containers
- dkc-mag [command] – run a Magento2 command
- dkc-clean – clear the cache
- dkc-php-run – run a bash command inside the php container, like composer in the previous example. NOTE: This command runs as the system user, not as root.
- dkc-exec phpfpm [command] – this is the same as above, but running as root. You should almost always use the command above instead.
- dkc-exec [container] [command] – this command needs a bit more explanation:
  - the container can be:
    - app – for the Nginx server,
    - phpfpm – for the PHP container,
    - db – for the database,
    - cache or fpc – for the cache containers;
  - the command can be anything that applies to that container, like “bash” or “bash composer”, etc.
I know the commands seem like “one more thing to learn”, but most of the time you will only use the first 4 commands.
How does the magic work?
Well, to see what the above commands translate to, just check the “dkc_short” file.
There are only 2 other interesting repositories:
- https://github.com/claudiu-persoiu/magento2-docker-php – it contains phpfpm,
- https://github.com/claudiu-persoiu/magento2-docker-nginx – it contains the nginx server.
The repositories are pretty small and not very hard to understand.
If you need to modify anything, just feel free to fork the repositories.
The conclusion
That’s about all you need to know about it; I’ve been using this setup for almost two years.
For me, it works like a charm and I was able to use Magento 2 on an ultraportable laptop with 8 GB of RAM without any issues.
The (happy) end!
-
Don’t be fooled, this post is about programming and system architecture, but mostly about using a heating system.
If you don’t know how you can talk about programming without programming, you should check out the book “The Passionate Programmer” by Chad Fowler, a great read. The jazz stories from this book inspired me to write this post.
The story begins with a new apartment in an old building. Or at least it’s new to me.
The building has its own heating system, very old and extremely inefficient. After long consideration I decided that it was time to install my own heating system and disconnect from the building’s central one.
So far, nothing interesting, there are many people that do this, partly because of the added comfort and partly to optimize expenses.
With this being said, I had a project and needed a developer. Or in other words, I had a heating system to build and I was in need of somebody to do it.
I’ve asked around for that “good” developer.
Like in anything else, there are a lot of people that come up with bad solutions. There are devs that make great offers but are unable to finish the project, or they write very bad code that is not scalable and, even worse, unmaintainable.
Since what I know about heating systems could be learned by anyone with the patience to google the subject for a couple of hours or so, I wanted somebody I could trust, so I was looking for the passionate kind of developer!
I had a couple of recommendations. The first one told me that he had to put all the pipes close to the ceiling. After convincing him that I didn’t want my house to look like a factory full of pipes, he said that he would definitely need to replace at least one of the radiators, because he could not fit a pipe behind it. I could fit my palm behind that radiator, so with this in mind I knew I wanted somebody who could fit a pipe behind my radiator.
It was clear that he wasn’t a good developer. A good developer must work with the requirements; at the very least, a project should respect most of the client’s requirements. If it doesn’t, there can be several explanations: he can’t because he doesn’t know how, or he doesn’t want to because he knows it’s hard and doesn’t want to go that extra mile. In some cases that’s not a tragedy; maybe it will be cheaper and faster, and in his case it was. Unfortunately for him, I aimed for quality.
Then there was the passionate developer. He never mentioned anything about not being able to do something; it was always a cost and maybe a consequence. The deal with better developers is that they are more expensive and everything about them is expensive: they will want better servers for hosting, better tools and sometimes more time for things like testing and maintenance. In other words, sometimes the cost is bigger not just up front, but also in the long run. A quality project takes time and money.
This is my resulting project:
If you have never seen an apartment heating system before, you should know that, except for the pipes, nothing else is actually required.
It’s all just passion!
For instance, the pump on the lower right is there just to force the water to move faster through the system. Think of it as Redis: it will have a good effect on your system, but most systems will happily work without it. Of course, at some point that pump may need maintenance and can even cause issues, like this Magento 2 issue: https://github.com/magento/magento2/issues/10002. Every system has its own cost.
The expansion tank in the lower left was unnecessary (in the sense that the system already has one built in), but it’s not a bad idea to have an extra one. Think of it as that extra storage space, RAM or CPU that you don’t actually use. Your server should never go above a certain load; that headroom is the expansion tank you should take into consideration.
The water intake filter is like your firewall: you need it, it’s your protection. Maybe most of the time it will be useless, but when there are issues you will be glad you have it, because it will have filtered them out.
The good thing with passionate developers is that other developers understand and appreciate their work. That is very important, no matter the industry of the “developer”.
The only one who had anything to say against the system was the ISCIR-certified technician who initialized the heating system (ISCIR in Romania is a special authorization needed for this exact thing). You could tell that he wasn’t passionate; he just wanted to say something bad about it in order to make a good impression on me.
Unfortunately for him, he made some very stupid comments and then made me a maintenance offer. This guy was the consultant: he didn’t do the project and he didn’t want to work on it, but he definitely wanted to make some money on it without actually doing anything.
I guess the conclusion is that, no matter the developer, quality and passion transcend the industry.