-
Intro
In 2011 I was at my first hackathon event. I was very excited: this big, successful company was coming to our country and hosting a hackathon! The company was Yahoo! and, I know, things don't look so shiny and bright today, but it was a big deal back then.
During that event I developed (together with some friends) a fun little snake multiplayer game which, like all hackathon by-products, was promptly abandoned the day after.
The game was written in Node.js, and it was the first time I had written backend JS, even though I was a true JS enthusiast back then.
A couple of years later I wanted to revisit the game, just to see how bad it really was. Unfortunately, I had an issue: we had never pinned the package versions, so everything was outdated and nothing worked, and back then breaking changes from one package version to the next were common. I decided it was time to rebuild it, just for fun.
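In hindsight, the lesson was to pin dependency versions; with npm that can be as simple as this (a sketch using npm's standard flags, with express as a stand-in package):
# save the exact version in package.json instead of a ^range
npm install --save-exact express
# or snapshot the whole resolved tree (modern npm does this automatically via package-lock.json)
npm shrinkwrap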
The refactor
It was the end of 2015: the Paris agreement was going to stop pollution, Volkswagen had been found guilty of cheating emissions tests, and I was refactoring "Tequila worms".
Node.js was no longer a novelty but a well-established technology, and alternatives to JS, or should I say ECMAScript, that compile down to it were gaining traction. Two of the main options were TypeScript and CoffeeScript.
While TypeScript offered static types, which allowed it to find some bugs at compile time instead of runtime, CoffeeScript had a much more "code"-oriented approach and allowed for a simpler, nicer syntax. I agree, it was a bit confusing at first, but it looked great!
The decision was made: the backend would be built in CoffeeScript, an obviously better choice than TypeScript.
For the frontend there were several options, but I was not going to use jQuery as there were all these new shiny frameworks available. The question was: should I use React or Angular?
I strongly believed that a frontend library should not need a backend for compilation. Angular, at version 1, which denoted stability, was the favorite! It had a stable version, so it wasn't going to change any time soon, it had a good community behind it, and it was built by Google!
React, on the other hand, wasn't that popular. It also needed a backend to compile, which is kind of weird if you think about it; it was built by Facebook, and it had a very different approach from the more popular MVC.
Angular (version 1) and CoffeeScript were the winners! I can say that it was a pleasure refactoring this silly game and, above all, I got to use these cutting-edge technologies, making the project future-proof!
You can find the end result here: https://github.com/claudiu-persoiu/tequila-worms/
8 years later
The Paris agreement didn't really change the world as radically as we needed it to, Volkswagen is making electric and hybrid cars now, and my app no longer looks like the state of the art.
As Node.js has matured, its dependencies have also become increasingly complex and in need of constant attention.
On the other hand, Angular was completely rewritten in version 2, and my dream of building a frontend without the need for a backend is becoming a literal dream.
CoffeeScript was slowly overtaken by TypeScript and is now closer to a piece of history for enthusiasts than to a go-to language.
On CoffeeScript's Wikipedia page, under "Adoption", there's a phrase that describes very well what has become of it:
On September 13, 2012, Dropbox announced that their browser-side code base had been rewritten from JavaScript to CoffeeScript, however it was migrated to TypeScript in 2017.
Conclusion
While Node.js was the only good pick in the project's stack, the rest of it only proves that you should not take technical advice from people on the Internet, in this particular case me. I was just so far off...
On the other hand, none of the technologies were a bad decision at the time, and building a frontend without the need for a backend wasn't the eccentric desire it may seem today.
CoffeeScript was just unlucky: while it made the syntax nicer, it also made the code a bit harder to understand for people coming from C-family languages.
TypeScript's syntax is not nicer, but it does help with static type checking.
I guess the main takeaways of this story are that there's no such thing as future-proof when you rely on frameworks, and that you shouldn't trust anyone's predictions, especially not mine.
--
-
Intro
Recently I was faced with a strange endeavor: I received a Windows laptop. The strange part is that I was the only one in the entire technical department with a Windows laptop; everyone else had a MacBook. Of course, the project was not made to work on Windows.
The only people who had Windows machines before me had them for about a week (as temporary workstations), but as the days passed it became clear that my situation was not the same.
Considering this is 2021 and we have a full-blown chip shortage, I had to work with what was made available to me.
My first choice was to simply use Docker for Windows and Git for Windows, but that didn't prove to be a very good idea. The issue is that Windows apps want Windows line endings (\r\n), while Linux and macOS work with Unix-style line endings (\n), and figuring out where to use one or the other proved to be a huge hassle. Even after I figured it out, I still could not commit all the files with the different line endings and had to stash them before every pull.
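For reference, the standard knobs for this are Git's own line-ending settings; something like the following is the usual mitigation (it tames the conversion, though it didn't make my particular problem disappear):
# check out files with LF and only convert CRLF on the way in
git config --global core.autocrlf input
# or enforce it per repository with a .gitattributes entry such as: * text=auto eol=lf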
How it's done
-
Download and install WSL2 using instructions from https://docs.microsoft.com/en-us/windows/wsl/install-win10
-
Restart Windows (because that's what Windows users routinely do)
-
From the "Microsoft Store" install a Linux distribution (in this tutorial I will use Ubuntu)
-
After the distro finishes installing, the setup will ask you to create a username and password, so choose something appropriate; these credentials are only for the distro
-
Open a PowerShell with admin rights and run the following commands:
wsl --set-version Ubuntu 2
wsl --set-default Ubuntu
-
To check if the above steps ran successfully, run in the same shell:
wsl -l -v
and you should see an entry with Ubuntu and version 2
-
Now you are ready to install Docker inside the distro; follow the steps from https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository
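At the time of writing, those steps boiled down to roughly this (a sketch; always check the linked page for the current keys and repository):
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release
# add Docker's official GPG key and apt repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# install the Docker engine itself
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io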
-
Now the work is almost done; the only remaining part is fixing the fact that the Docker service does not start automatically;
-
Click on "Start" and search for "Task Scheduler";
-
Click on "Actions" > "Create Basic Task...";
-
Give it a name and click Next;
-
In the Trigger section select "When I log on" and click Next;
-
In the Action section select "Start a program" and click Next;
-
In the "Start a program" screen the value for "Program/script:" is "C:\Windows\System32\wsl.exe" and in "Add arguments (optional)" add "-u root service docker start" and click Next and then Finish;
This should be all; now (after a restart) the Docker daemon will start automatically.
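If you prefer the command line to clicking through the wizard, the same scheduled task can be created with schtasks (a sketch; run it from an elevated shell and check the schtasks documentation for the exact flags):
schtasks /create /tn "Start Docker in WSL" /sc onlogon /tr "C:\Windows\System32\wsl.exe -u root service docker start"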
I noticed that sometimes Windows doesn't execute the startup tasks if the laptop is not plugged in. If you are faced with this issue, or Docker just didn't start, run this inside the distro:
service docker start
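Alternatively, newer WSL builds (on Windows 11) support a [boot] section in the distro's /etc/wsl.conf that can start the service without a scheduled task; I haven't verified this on every setup, so treat it as an option to explore:
# run once inside the distro; requires a WSL version that supports [boot] in wsl.conf
printf '[boot]\ncommand = service docker start\n' | sudo tee -a /etc/wsl.conf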
Just a suggestion
If you are using the computer only for development, you should really consider switching to a user-friendly Linux distro like Ubuntu. Tools like Docker and Git run a lot better on Linux, and there is plenty of support for development tools like IntelliJ and VS Code. If you've never tried it before, you might be surprised at how user-friendly it has become.
Using Docker on anything other than Linux is a compromise, even on a Mac, and especially on Apple Silicon hardware, which (for now) is even worse than Windows at this.
I've been using Ubuntu on my personal computer for many years and, with very few exceptions, I've never needed anything else.
-
-
I've had this blog for more than 12 years.
In 2008, when I started this blog, WordPress was the most popular open source blogging platform.
As time went by, many security flaws were discovered.
The plugins, the main driver for extending the platform and one of the main reasons why the platform was so popular, had many architectural concerns raised about them.
Today, after all these years, and with all these security and design considerations, WordPress ended up being, well... probably the most popular PHP platform on the web, with all the legacy code still working on the new versions and many of the plugins from 2008 still going strong.
As time went by and I was writing less and less, it became obvious that I was spending more time keeping WordPress up to date than actually using it to write.
So today I finally decided to move to Hugo.
Hugo is a static site generator, built in Golang. Everything is static when building with Hugo and, even more curiously, there is no database; there are only config and Markdown files. The end result for me is a lot less work spent updating the platform.
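To give you an idea of how little there is to it, the whole workflow is a handful of commands (a sketch using Hugo's standard CLI):
# create a new site skeleton (config plus empty content directories)
hugo new site blog
cd blog
# add a post as a Markdown file with front matter
hugo new posts/hello-world.md
# preview locally, including drafts
hugo server -D
# build the static site into the public/ directory
hugo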
With the new platform comes a new theme, which gives the blog a more modern look, along with the very important feature of night mode!
Jekyll is the most popular static site generator, but it is built with Ruby, and I like Hugo better because the templates are Go Template files. It is a matter of taste more than anything else.
And with this update, please enjoy my now static blog!
-
Something interesting happened to me recently: my access to a repository that required authentication was no longer valid. The problem in my case was a failure on the repository's side, but it could just as easily have been that I had lost my credentials or some other similar cause. My access was due to be restored soon. As we all know, "soon" in IT can be anything from minutes to the end of life as we know it, and I needed to make a deployment before then.
And another thing, my local install was working.
For a while I wondered why my local environment was working but, if you read the title, you already know why: my local cache was still valid.
I searched for a while for a way to get my local packages to the remote machine that was making the build, and there are ways, but I didn't want to waste days on an issue that might fix itself before I figured it out anyway.
The solution is very simple:
- make a copy of your local cache; if you are using Linux it should be in "~/.composer";
- put the copy on the server in question, in a location of your choice (let's say /tmp/composer_cache);
- export the COMPOSER_CACHE_DIR variable (“export COMPOSER_CACHE_DIR=/tmp/composer_cache”);
- run composer as usual.
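Put together, it looks something like this (the host and exact paths are just examples; depending on your setup the cache may live in ~/.composer/cache or ~/.cache/composer):
# on your machine: ship the local cache to the build server
scp -r ~/.composer/cache user@build-server:/tmp/composer_cache
# on the server: point Composer at the copied cache and run it as usual
export COMPOSER_CACHE_DIR=/tmp/composer_cache
composer install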
That’s it, you are now using your local cache on a remote server. It’s not the most elegant solution out there, but a quick and dirty hack that gets the job done easily.
-
I've been working with Magento for a long time, and I can say that the platform has changed a lot over time; I would like to share my personal thoughts on it.
How Magento 2 learned from the past to make some brand-new mistakes
I had my first encounter with Magento in 2011 and, back then, all I knew about it was that it was based on Zend Framework and that it was doing e-commerce (obviously).
I was coming from the Symfony framework world, with a lot of documentation, a great community and a rock-solid implementation. This was still Symfony 1; Symfony 2 was just coming out.
By comparison, Magento had next to no documentation, barely any community, and the implementation had a plethora of bugs. In my first few days on the job I saw a lot of people debugging deep in the Core, and I was just perplexed; I almost never needed to debug Symfony, let alone actually find bugs in it.
Some may argue that Symfony is a framework and Magento is a platform and the argument is as solid as a guinea pig.
The reason for this situation was simple: they didn't expect the platform to have so much success. But it did, and the reason was simple too: it was pretty much the only platform in the PHP world built with modularity in mind at that time, period! There were other platforms, but definitely not as versatile, fully featured and modular as Magento.
Moving forward, they spent a lot of time and effort building ample documentation for Magento 1, focused on quality and rock-solid stability, all paired with exceptional support for enterprise partners!
Psych! No, they did not! The documentation for M1 was always sparse and of very poor quality. Basically, the best resources were Ben Marks' "Fundamentals of Magento Development" course, a book (about which I am not entitled to an opinion, since I didn't read it), some blog posts from individuals or companies that worked with the platform, and a lot of StackOverflow questions and answers. All this was paired with exceptionally bad support for enterprise users, extremely slow and low-quality solutions and, to top it off, plenty of Core bugs. It wasn't unusual to find non-standard, half-baked implementations and bugs in the Core. But the Core was simple enough that, with Xdebug and a lot of patience, everything was more or less fixable.
Fixes coming from their support team were close to a myth. If you didn't want to fix a bug, or just didn't feel like working, you could open a ticket, and in most cases it took days or even weeks to get a proper response. By that time you would usually have fixed it yourself and (in some cases) even sent the fix to their support so they could include it in the Core.
And along comes Magento 2
With the burden of the popularity-driven growth of Magento 1, Magento 2 came to the rescue.
It took a long time for Magento 2 to be ready, and I think that is a story in itself, but some actions were taken to prevent some of the issues that the first version had.
Let's look at some of the platform's issues:
- classes could only be extended by overwriting;
- the frontend was based on the Prototype framework.
There were many other changes but I think these were among the most important ones.
And now let’s see how Magento 2 managed to fix all these issues
It is good practice to start with something nice, so I will start with the documentation: Magento 2 had documentation from the start, and it was both useful and well made!
Regarding the class overwrites, two approaches were implemented:
- dependency injection (from Symfony);
- plugins, using the interceptor pattern.
Dependency injection really helped with the ability to substitute classes and functionality, and the plugins helped a lot by providing a non-invasive way of extending functionality. Before plugins, only observers could modify data in a non-invasive way, and the big problem was that (in many cases) there just weren't observers everywhere you needed them to be.
Unfortunately, this created a lot of overhead. In M1 you could just put a breakpoint and look at the stack trace to see which methods change what, but now... it is just a lot more complex: each method has an interceptor class that triggers the plugin mechanism, and each plugin can be triggered before, after or around the method, with the around plugin even able to prevent the actual method from being called. In short, it is a lot more code, with a lot more methods to follow. When it works it is a lot more elegant, but when it doesn't it is a lot worse to debug.
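When untangling all of this, one thing that helps is the standard CLI, which can list the preferences and plugins attached to a class (available in recent 2.x versions, if I remember correctly):
# show the DI configuration and plugins for a given class
bin/magento dev:di:info "Magento\Catalog\Model\Product"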
In a dramatic twist, many of the classes were not implemented properly in the Core itself. You see, when Magento 2 was released, a lot of compromises had to be made, and some of the Core modules were just "made to work" in the new framework. Later, when everyone was expecting the cleanup to happen... it didn't. The reason is simple: why fix the Core when doing so would break compatibility with the modules already created?
This issue even made Magento Core developers suggest that you should follow the guidelines and not use the Core as an example: do as we say, not as we do.
The frontend was fixed even better! You may ask yourself what that "Prototype" library is and why anyone in their right mind would choose it. It made a lot of sense back when Magento 1 was created: there were several popular JS libraries at the time, mainly jQuery, Prototype and MooTools. There were a lot more than these three, but these were the popular ones. There was a war between jQuery and Prototype, just like there is now between React, Angular and Vue. The first had a new approach with almost a new language, while the second aimed at extending the browser's capabilities in a more discreet fashion. We now know who won, but back then it wasn't obvious. As a small note, I was also a fan of Prototype back then.
The Magento 2 team realized over time that it was a mistake and, to fix it, they promised to make something more flexible. The new, more flexible solution didn't involve the Prototype library, but instead included jQuery, Knockout.js and Require.js just at the top level, without even counting their dependencies.
The idea was to separate the frontend and the backend, to make it possible to build completely new frontend apps. And, as before, this was never properly finished, and now there is a highly complicated system: partially separated, partially single-page app and partially... multi-page, implemented in a variety of styles. And this is only the store front; the store backend has a slightly different, more complex system of XML, PHTML, HTML and JS files.
As a backend developer I can truly say that it is a lot harder now than before to debug, or just to understand, how the frontend part works. The XML part of generating grids is probably the hardest: it is extremely difficult to debug anything in it and, if you mess something up, you get no warnings at all. You have to find the class that builds the grid and see if an exception is triggered there, which is not an obvious solution at all...
And, of course, the entire thing is a lot slower because of all these "new engineering features". When all the code is generated, the links to static resources are in place and everything is set to production mode, it isn't slow; it isn't exactly fast, but it isn't terribly slow either. But when you don't have cache, haven't deployed the static resources and are in developer mode, it is just awfully slow; it can take a few minutes to load a simple product page! It is just ridiculous if you ask me. It isn't an operating system or a video game, it is a shopping cart; why in the world would it take five minutes to display a page, even without a lot of extra modules?
Keep in mind that Magento 2 didn't come with a lot of new features; most of the functionality was already in M1, only more "engineered". So those five minutes of code generation and linking and whatever magic is going on in there aren't adding a lot of new features, just a lot of refactoring of the old system.
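For reference, the gap between the two worlds is visible in the standard CLI; production mode pre-generates everything up front:
# switch to production mode (compiles DI code and deploys static content)
bin/magento deploy:mode:set production
# or run the expensive steps individually
bin/magento setup:di:compile
bin/magento setup:static-content:deploy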
The Magento Cloud
Magento offered a Cloud. I don't know if they still do and I don't care; nobody really cares about it anymore, mostly because it wasn't what everybody wanted.
People want simple things: you have an app, you push it to the clouds, and money starts raining down. That should be it, fewer things to worry about.
Demandware, which doesn't have a nice open source core, is doing something exactly like this: you don't have the same amount of control, but you don't take care of your website, the gods in the clouds do! The developer only develops and doesn't need to care about what magic operations are doing, because he is not ops, he is dev!
The Magento Commerce Cloud aimed to do just that: push this monster into the cloud and some very smart ops will scale it for you! But it wasn't like this; it was never easy, nor fast. And, on top of that, other hosting providers started making hosting solutions for Magento that were better at scaling than the official one, which is just ridiculous.
A positive note to end on
There is an interesting core underneath it all. There are plenty of very smart features that make the platform so popular. Now you also have tests, so you can even do TDD.
Even with all these (over)engineering challenges, there are still plenty of passionate developers out there who are able and willing to overcome them.
There are also a lot of tools developed by the community to overcome some of the shortcomings, like generators for the huge number of files required for a model, or MSP_DevTools to help with debugging the frontend, or n98-magerun2 to help with crons and lots and lots more.
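To give you an idea, n98-magerun2 turns a lot of these chores into one-liners (two examples from its standard command set):
# list the configured cron jobs
n98-magerun2 sys:cron:list
# flush all caches
n98-magerun2 cache:flush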
And lastly, there are still a lot of brilliant and passionate developers out there who are willing to figure out a way to develop, scale and make sales for one more day!