Intro
Recently I was faced with a strange endeavor: I received a Windows laptop. The strange part is that I was the only one in the entire technical department with a Windows laptop; everyone else had a MacBook. And, of course, the project was not made to work on Windows.
The only guys who had Windows before me had it for about a week (as a temporary workstation), but as the days passed it became clear that my situation was not the same.
Considering this is 2021 and we have a full-blown chip shortage, I had to work with what was made available to me.
My first choice was to simply use Docker for Windows and Git for Windows, but that didn't prove to be a very good idea. The issue is that Windows apps want Windows line endings (\r\n), while Linux and macOS work with Unix-like line endings (\n), and figuring out where to use one or the other proved to be a huge hassle. Even once I figured it out, I still could not commit the files with the different line endings, and I had to stash them before every pull.
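For reference, a common way to tame this is to pin Git's line-ending conversion; a minimal sketch (one of several possible setups, not necessarily the one your project needs):
$ git config --global core.autocrlf input   # convert CRLF to LF on commit, never convert on checkout
# or pin it per repository by committing a .gitattributes file containing:
# * text=auto eol=lf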
How it's done
- Download and install WSL2 using the instructions from https://docs.microsoft.com/en-us/windows/wsl/install-win10;
- Restart Windows (because that's what Windows users routinely do);
- From the "Microsoft Store" install a Linux distribution (in this tutorial I will use Ubuntu);
- After the distro finishes installing, the setup will ask you to create a username and password, so choose something appropriate; these credentials are only for the distro;
- Open a PowerShell with admin rights and run the following commands:
wsl --set-version Ubuntu 2
wsl --set-default Ubuntu
- To check that the above steps ran successfully, run in the same shell:
wsl -l -v
You should see an entry with Ubuntu and version 2;
- Now you are ready to install Docker inside the distro, following the steps from https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository;
- The work is almost done; the only remaining part is fixing the fact that the Docker service does not start automatically:
- Click on "Start" and search for "Task Scheduler";
- Click on "Actions" > "Create Basic Task...";
- Give it a name and click Next;
- In the Trigger section select "When I log on" and click Next;
- In the Action section select "Start a program" and click Next;
- In the "Start a program" screen, set "Program/script:" to "C:\Windows\System32\wsl.exe", put "-u root service docker start" in "Add arguments (optional)", then click Next and Finish.
This should be all: from now on (after a restart), the Docker daemon will start automatically.
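If you prefer the command line over clicking through the wizard, the same scheduled task can be created in one go; a sketch (run from an elevated shell, the task name is arbitrary):
schtasks /Create /SC ONLOGON /TN "Start Docker in WSL" /TR "C:\Windows\System32\wsl.exe -u root service docker start"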
I noticed that sometimes Windows doesn't execute the startup tasks if the laptop is not plugged in. If you are faced with this issue, or Docker just didn't start, run inside the distro:
service docker start
Just a suggestion
If you are using the computer only for development, you should really consider switching to a user-friendly Linux distro like Ubuntu. Tools like Docker and Git run a lot better on Linux, and there is plenty of support for development tools like IntelliJ and VS Code. If you've never tried it before, you might be surprised at how user-friendly it now is.
Using Docker on anything other than Linux is a compromise, even on a Mac, and especially on Apple Silicon hardware, which (for now) is even worse than Windows at this.
I've been using Ubuntu on my personal computer for many years and, with very few exceptions, I have never needed anything else.
-
I've had this blog for more than 12 years.
In 2008, when I was starting this blog, WordPress was the most popular open source blogging platform.
As time went by, many security flaws were discovered.
The plugins, the main driver for extending the platform and one of the main reasons why it was so popular, raised many architectural concerns.
Today, after all these years, and with all these security and design considerations, WordPress ended up being, well... probably the most popular PHP platform on the web, with all the legacy code still working on the new versions and many of the plugins from 2008 still going strong.
As time went by and I was writing less and less, it became obvious that I was spending more time keeping WordPress up to date than actually using it to write.
So today I finally decided to move to Hugo.
Hugo is a static site generator written in Go. Everything built with Hugo is static and, even more curious, there is no database; there are only config files and Markdown files. So the end result for me is a lot less work spent updating the platform.
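For the curious, the entire workflow boils down to a handful of commands; a sketch (the site and post names are placeholders, not my actual setup):
$ hugo new site myblog            # scaffold the config and content directories
$ hugo new posts/first-post.md    # a post is just a Markdown file with front matter
$ hugo                            # render the whole site as static files into ./public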
With the new platform also comes a new theme, which gives the blog a more modern look, along with the very important feature of night mode!
Jekyll is the most popular static site generator, but it is built with Ruby, and I like Hugo better because the templates are Go templates. It is a matter of taste more than anything else.
And with this update, please enjoy my now static blog!
-
Something interesting happened to me recently: my access to a repository that required authentication was no longer valid. The problem in my case was a failure of the repository, but it could just as easily have been that I had lost my credentials, or some other similar cause. My access was due to be restored soon. As we all know, “soon” in IT can be anything from minutes to the end of life as we know it, and I needed to make a deployment before then.
And another thing: my local install was working.
For a while I wondered why my local was working but, if you read the title, you already know why: my local cache was still valid.
I searched for a while for a way to get my local packages to the remote machine that was making the build, and there are ways, but I didn’t want to waste days on an issue that might fix itself before I figured it out anyway.
The solution is very simple:
- make a copy of your local cache; if you are using Linux, it should be in “~/.composer”;
- put the copy on the server of interest, in a location of your preference (let’s say /tmp/composer_cache);
- export the COMPOSER_CACHE_DIR variable (“export COMPOSER_CACHE_DIR=/tmp/composer_cache”);
- run composer as usual.
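Put together, the whole hack is just a few commands; a sketch (paths as in the steps above, the server name is a placeholder):
$ scp -r ~/.composer user@build-server:/tmp/composer_cache   # ship the local cache to the server
$ ssh user@build-server
$ export COMPOSER_CACHE_DIR=/tmp/composer_cache              # point Composer at the copied cache
$ composer install                                           # packages now resolve from the cache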
That’s it: you are now using your local cache on a remote server. It’s not the most elegant solution out there, but it’s a quick and dirty hack that gets the job done.
-
I’ve been working with Magento for a long time, and the platform has changed a lot over that time, so I would like to share my personal thoughts on it.
How Magento 2 learned from the past to make some brand-new mistakes
I had my first encounter with Magento in 2011 and, back then, all I knew about it was that it was based on Zend Framework and that it was doing e-commerce (obviously).
I was coming from the Symfony framework world, with a lot of documentation, a great community and a rock-solid implementation. This was still Symfony 1; Symfony 2 was just coming out.
By comparison, Magento had next to no documentation, barely any community, and an implementation with a plethora of bugs. In my first few days on the job I saw a lot of people debugging deep in the Core, and I was just perplexed: I almost never needed to debug Symfony, let alone actually find bugs in it.
Some may argue that Symfony is a framework and Magento is a platform, and that argument is about as solid as a guinea pig.
The reason for this situation was simple: they didn’t expect the platform to have so much success. But it did, and the reason was simple too: it was pretty much the only platform in the PHP world at that time built with modularity in mind, period! There were other platforms, but definitely none as versatile, fully featured and modular as Magento.
Moving forward, they spent a lot of time and effort building ample documentation for Magento 1, focused on quality and rock-solid stability, all paired with exceptional support for enterprise partners!
Psych! No, they did not! The documentation for M1 was always sparse and of very poor quality. Basically, the best resources were Ben Marks’ “Fundamentals of Magento Development” course, a book (about which I am not entitled to an opinion, since I didn’t read it), some blog posts from individuals or companies that worked with the platform, and a lot of StackOverflow questions and answers. All this was paired with exceptionally bad support for enterprise users: extremely slow, low-quality solutions and, to top it off, plenty of Core bugs. It wasn’t unusual to find non-standard, half-baked implementations and bugs in the Core. But the Core was simple enough that, with Xdebug and a lot of patience, everything was more or less fixable.
Fixes coming from their support team were close to a myth. If you didn’t want to fix a bug, or just didn’t feel like working, you could open a ticket, and in most cases it took days or even weeks to get a proper response. By that time you would usually have fixed it already and (in some cases) even sent the fix to their support so they could include it in the Core.
And along comes Magento 2
With the burden of the popularity-driven growth of Magento 1, Magento 2 came to the rescue.
It took a long time for Magento 2 to be ready, and I think that is a story in itself, but some actions were taken to prevent some of the issues that the first version had.
Let’s look at some of the issues the first platform had:
- classes could only be extended by overwriting them;
- the frontend was based on the Prototype framework.
There were many other issues, but I think these were among the most important ones.
And now let’s see how Magento 2 managed to fix these issues.
It is good practice to start with something nice, so I will start with the documentation: Magento 2 had documentation from the start, and it was both useful and well made!
Regarding class overwrites, two approaches were implemented:
- dependency injection (inspired by Symfony);
- plugins, using the interceptor pattern.
Dependency injection really helped with the ability to substitute classes and functionality, and plugins helped a lot by offering a non-invasive way of extending functionality. Before plugins, only observers were able to modify data in a non-invasive way, and the big problem was that (in many cases) there just weren’t observers everywhere you needed them to be.
Unfortunately, this created a lot of overhead. In M1 you could just put a breakpoint and look at the stack trace to see which methods change what, but now... it is just a lot more complex: each method has an interceptor class that triggers the plugin mechanism. Each plugin can be triggered before, after or around the method, and an around plugin can even prevent the actual method from being called. In short, it is a lot more code with a lot more methods and a lot more calls to follow. When it works, it is a lot more elegant; when it doesn’t, it is a lot worse to debug.
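To make the mechanism more concrete, here is a minimal sketch of an “after” plugin (the module, class and target method are made up for illustration; the registration that would live in the module’s etc/di.xml is shown as a comment):
<?php
// Hypothetical module Acme_Example. Registration in etc/di.xml would be:
//   <type name="Magento\Catalog\Model\Product">
//       <plugin name="acme_product_name" type="Acme\Example\Plugin\ProductNamePlugin"/>
//   </type>
namespace Acme\Example\Plugin;

use Magento\Catalog\Model\Product;

class ProductNamePlugin
{
    // Runs after every call to Product::getName(); $result is the original
    // return value, which we can change without touching the Core class.
    public function afterGetName(Product $subject, $result)
    {
        return $result . ' (on sale)';
    }
}
A before plugin would receive and could rewrite the method’s arguments instead, while an around plugin receives a callable to the original method, which it may or may not invoke.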
In a dramatic twist, many of the classes were not implemented properly in the Core itself. You see, when Magento 2 was released, a lot of compromises had to be made, and some of the Core modules were just “made to work” in the new framework. Later, when everyone was expecting the cleanup to happen... it didn’t. The reason is simple: why fix the Core, when doing so would break compatibility with the modules already created?
This issue even made Magento Core developers suggest that you should follow the guidelines and not use the Core as an example: do as we say, not as we do.
The frontend was “fixed” even better! You may ask yourself what that Prototype library is and why anyone in their right mind would choose it. It made a lot of sense back then, when Magento 1 was created and there were several popular JS libraries, mainly jQuery, Prototype and MooTools. There were a lot more than these three, but these were the popular ones. There was a war between jQuery and Prototype, just like there is now between React, Angular and Vue. The first had a new approach with almost a new language, while the second aimed at extending the browser’s capabilities in a more discreet fashion. We now know who won, but back then it wasn’t obvious. As a small note, I was also a fan of Prototype back then.
The Magento 2 team realized over time that it was a mistake and, to fix it, they promised to build something more flexible. The new, more flexible solution didn’t involve the Prototype library; instead it included jQuery, Knockout.js and Require.js, and that’s just at the top level, without taking dependencies into consideration.
The idea was to separate the frontend from the backend, to have the ability to build completely new frontend apps. And, as before, this was never properly finished, and now there is a highly complicated system: partially separated, partially single-page app and partially... multi-page, implemented in a variety of styles. And this is only the store front; the store backend has a slightly different, more complex system of XML, PHTML, HTML and JS files.
As a backend developer I can truly say that it is a lot harder now than before to debug the frontend part, or just to understand how it works. The XML part of generating grids is probably the hardest: it is extremely difficult to debug anything in it and, if you mess something up, you get no warnings at all. You have to find the class that is building the grid and see if an exception is triggered there, which is not an obvious solution at all...
And, of course, the entire thing is a lot slower because of all these “new engineering features”. When all the code is generated, the links to static resources are in place and everything is set to production mode, it isn’t slow; it isn’t exactly fast, but it isn’t terribly slow either. But when you have no cache, haven’t deployed the static resources and are in developer mode, it is just awfully slow; it can take a few minutes to load a simple product page! It is just ridiculous if you ask me. It isn’t an operating system or a video game, it is a shopping cart; why in the world would it take five minutes to display a page, even without a lot of extra modules?
Keep in mind that Magento 2 didn’t come with a lot of new features; most of the functionality was already in M1, only more “engineered”. So those five minutes of code generation and linking and whatever magic is going on in there aren’t adding a lot of new features, just a lot of refactoring of the old system.
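For the record, escaping that slow developer-mode experience is its own little ritual of stock CLI commands, run from the Magento root:
$ bin/magento deploy:mode:set production    # switch modes; this alone already triggers compilation
$ bin/magento setup:di:compile              # generate interceptors, factories and proxies
$ bin/magento setup:static-content:deploy   # pre-build the static frontend assets
$ bin/magento cache:flush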
The Magento Cloud
Magento offered a cloud. I don’t know if they still do, and I don’t care; nobody really cares about it anymore, mostly because it wasn’t what everybody wanted.
People want simple things: you have an app, you push it to the clouds and money starts raining down. That should be it; fewer things to worry about.
Demandware, which doesn’t have a nice open source core, is doing something exactly like this: you don’t have the same amount of control, but you don’t take care of your website either, the gods in the clouds do! The developer only develops and doesn’t need to care about what magic the operations people are doing, because he is not ops, he is dev!
The Magento Commerce Cloud aimed to do exactly that: just push this monster into the cloud and some very smart ops will scale it for you! But it wasn’t like that; it was never easy, nor fast. And, on top of that, other hosting providers started building Magento hosting solutions that were better at scaling than the official one, which is just ridiculous.
A positive note to end on
There is an interesting core underneath it all. There are plenty of very smart features that make the platform so popular. Now you also have tests, so you can even do TDD.
Even with all these (over)engineering challenges, there are still plenty of passionate developers out there who are able and willing to overcome them.
There are also a lot of tools developed by the community to overcome some of the shortcomings, like generators for the huge number of files required for a model, MSP_DevTools to help with debugging the frontend, or n98-magerun2 to help with crons and lots and lots more.
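For instance (I am quoting these from memory, so check “n98-magerun2 list” for the current command set):
$ n98-magerun2 sys:cron:list   # list the cron jobs known to the store
$ n98-magerun2 cache:flush     # flush the caches without touching the admin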
And lastly, there are still a lot of brilliant and passionate developers out there who are willing to figure out a way to develop, scale and make sales for one more day!
-
Ah, the holiday spirit.
Inspired by a post in Perl Advent, I decided that it would be nice to see a similar example done with PHP and Go. Please note that this post is heavily inspired by the one mentioned above.
Let’s say you want to wish others a happy holiday using the speed of Go, but you are using PHP. What is there to be done?
This is a great time in the PHP world to do something like this, as the newly released PHP 7.4 comes with a “Foreign Function Interface” (FFI for short).
Equipped with this new power, I installed PHP 7.4 and got to work.
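Before anything else, it’s worth checking that the extension is actually available (depending on your distro, it may ship as a separate package or need enabling through the ffi.enable ini directive):
$ php -v         # confirm you are on PHP 7.4 or newer
$ php --ri FFI   # print the FFI extension info; an error here means it is not loaded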
Let’s start with the super incredible Go greeting part by creating a file “greeting.go”:
package main

import (
    "fmt"
)

func main() {
    WishMerryChristmas()
}

func WishMerryChristmas() {
    fmt.Println("We wish you a Merry Christmas!")
}
Now let’s run it with:
$ go run greeting.go
It should display:
We wish you a Merry Christmas!
Great stuff so far! Now this is nice and fast and everything, but it needs to be a service. Let’s see how that would look:
package main

import (
    "C"
    "fmt"
)

func main() {}

//export WishMerryChristmas
func WishMerryChristmas() {
    fmt.Println("We wish you a Merry Christmas!")
}
As you can see, there are several differences:
- we also imported “C”;
- we removed the function call from main();
- we added a comment to export the function.
To compile the code, run:
$ go build -o greeting.so -buildmode=c-shared
Note that this should be run each time the Go file is modified.
The output should be two files: “greeting.so” and “greeting.h”.
The header file “greeting.h” contains the type and function definitions. If you come from the C world, you are probably already familiar with this kind of file. Normally, all we need to do now is import the header file using FFI and use the function!
For this I’ve created a file named “greeting.php”:
<?php

$ffi = FFI::load("greeting.h");
$ffi->WishMerryChristmas();
Sure looks simple enough; you just have to run it with:
$ php greeting.php
PHP Fatal error:  Uncaught FFI\ParserException: undefined C type '__SIZE_TYPE__' at line 43 in /home/claudiu/php-go/greeting.php:3
Stack trace:
#0 /home/claudiu/php-go/greeting.php(3): FFI::load()
#1 {main}

Next FFI\Exception: Failed loading 'greeting.h' in /home/claudiu/php-go/greeting.php:3
Stack trace:
#0 /home/claudiu/php-go/greeting.php(3): FFI::load()
#1 {main}
  thrown in /home/claudiu/php-go/greeting.php on line 3
Not exactly the greeting that I was hoping for…
After some digging, I found this note on the manual page:
“C preprocessor directives are not supported, i.e. #include, #define and CPP macros do not work.”
Because of this, we unfortunately can’t really use the header file, or at least I don’t know how.
On the bright side, we can use FFI::cdef(), which lets us specify the function definitions ourselves. If I lost you on the way: all I’m trying to do is tell PHP which function definitions it can use from “greeting.so”.
The new code will become:
<?php

$ffi = FFI::cdef("
void WishMerryChristmas();
", __DIR__ . "/greeting.so");

$ffi->WishMerryChristmas();
And if we run it:
$ php greeting.php
We wish you a Merry Christmas!
We are making great progress; the service is doing a great job!
Adding an int parameter
The greeting is nice and fast and all, but it would be nice to be able to specify how many times to run it.
To achieve this, I’m modifying the previous method in greeting.go to take the number of times to display the greeting:
//export WishMerryChristmas
func WishMerryChristmas(number int) {
    for i := 0; i < number; i++ {
        fmt.Println("We wish you a Merry Christmas!")
    }
}
Run the compilation as before and everything should be fine.
In the PHP script we need to modify the function definition. To see what we should use, we can take a hint from the “greeting.h” file. The new function definition in my file is:
extern void WishMerryChristmas(GoInt p0);
“GoInt”? What magic is that? Well, if we look in the file, we find the following definitions:
...
typedef long long GoInt64;
...
typedef GoInt64 GoInt;
...
With this in mind, we can change the PHP file to:
<?php

$ffi = FFI::cdef("
void WishMerryChristmas(long);
", __DIR__ . "/greeting.so");

$ffi->WishMerryChristmas(3);
Run it and you should see:
$ php greeting.php
We wish you a Merry Christmas!
We wish you a Merry Christmas!
We wish you a Merry Christmas!
Ah, it’s beginning to feel a lot like Christmas!
Adding a string parameter
Displaying a greeting multiple times is quite nice, but it would be nicer to add a name to it.
The new greeting function in Go will be:
//export WishMerryChristmas
func WishMerryChristmas(name string, number int) {
    for i := 0; i < number; i++ {
        fmt.Printf("We wish you a Merry Christmas, %s!\n", name)
    }
}
Don’t forget to compile and let’s get to the interesting part.
Looking into the “greeting.h” file, the new function definition is:
extern void WishMerryChristmas(GoString p0, GoInt p1);
We already know GoInt, but GoString is a bit trickier. After several substitutions I was able to see that the structure is:
typedef struct { char* p; long n; } GoString;
It is essentially a pointer to a character array, plus a length.
This means that, in the PHP file, the new definition is going to be:
$ffi = FFI::cdef("
typedef struct { char* p; long n; } GoString;
typedef long GoInt;
void WishMerryChristmas(GoString p0, GoInt p1);
", __DIR__ . "/greeting.so");
p0 and p1 are optional, but I’ve added them for a closer resemblance to the header file. On the same note, GoInt is basically a long, but I kept the typedef for the same reason.
Building a GoString was a bit of a challenge. The main reason is that I didn’t find a way to create a “char *” and initialize it directly. My alternative was to create an array of “char” and cast it, like this:
$name = "reader";
$strChar = str_split($name);

$c = FFI::new('char[' . count($strChar) . ']');
foreach ($strChar as $i => $char) {
    $c[$i] = $char;
}

$goStr = $ffi->new("GoString");
$goStr->p = FFI::cast(FFI::type('char *'), $c);
$goStr->n = count($strChar);

$ffi->WishMerryChristmas($goStr, 2);
And let’s try it out:
$ php greeting.php
We wish you a Merry Christmas, reader!
We wish you a Merry Christmas, reader!
Success!
At this point I would like to move the GoString creation into its own function, just for the sake of code clean-up.
And the new code is:
$name = "reader";

$goStr = stringToGoString($ffi->new("GoString"), $name);

$ffi->WishMerryChristmas($goStr, 2);

function stringToGoString($goStr, $name) {
    $strChar = str_split($name);

    $c = FFI::new('char[' . count($strChar) . ']');
    foreach ($strChar as $i => $char) {
        $c[$i] = $char;
    }

    $goStr->p = FFI::cast(FFI::type('char *'), $c);
    $goStr->n = count($strChar);

    return $goStr;
}
And let’s try it:
$ php greeting.php
We wish you a Merry Christmas, ��!
We wish you a Merry Christmas, ��!
That’s not right… it seems like it’s displaying some junk memory. But why?
Looking into the documentation for FFI::new, I noticed a second parameter, “bool $owned = TRUE”:
“Whether to create owned (i.e. managed) or unmanaged data. Managed data lives together with the returned FFI\CData object, and is released when the last reference to that object is released by regular PHP reference counting or GC. Unmanaged data should be released by calling FFI::free(), when no longer needed.”
This means that, when the function returned, the GC cleared the memory holding the string while the Go side still pointed to it. This is very likely a bug, but there is a very simple fix: just create the char array as unmanaged by passing “false”:
$c = FFI::new('char[' . count($strChar) . ']', false);
Let’s try it again:
$ php greeting.php
We wish you a Merry Christmas, reader!
We wish you a Merry Christmas, reader!
And it’s working!
Conclusion
Maybe running Go libraries from PHP is not as easy as just importing a header file, but with a little patience it is certainly possible! A big advantage is that a library built in Go, or in any other language that allows this, can be used from a language like PHP without the need to reimplement the logic!
And, on this last positive remark, I would like to wish you happy holidays!