Discovering on Mac OS X: vi -> vim

Ok, I admit it, I’ve only just now found out! Seems the vi terminal command on Mac OS X just points to vim anyway and always has done:

$ ls -l /usr/bin/vi
lrwxr-xr-x  1 root  wheel  3  9 Nov  2015 /usr/bin/vi -> vim

I started using a Mac as my main machine for software development in 2009 and have mostly used one since. The number of times I have typed a completely redundant m, multiplied by the number of years I have been doing it, makes for some frightening numbers.

As a very quick and rough calculation, I made a Python doodle estimating how many times I probably opened vim each day in each year (trying to judge how heavily I might have typed vim per day based on my role and environment at the time). Here is the basic estimation:

Kind of simplistic and assuming:

  1. Approximation of per day opening frequency
  2. Coding 235 full days a year
    (based on: work days + some weekend days + some evenings – holiday days)
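The doodle itself did not survive the copy-paste; a minimal Python sketch of the estimate, with per-year frequencies that are my own guesses (tuned to reproduce the figures quoted below), would be:

```python
# Per-day vim opens, per year, 2009-2016 -- these figures are my own
# reconstruction, chosen to land on the totals quoted in the post.
opens_per_day = {
    2009: 150, 2010: 150, 2011: 140, 2012: 130,
    2013: 130, 2014: 120, 2015: 100, 2016: 90,
}

CODING_DAYS_PER_YEAR = 235  # work days + some weekend days + evenings - holidays

# One redundant "m" keystroke per vim invocation
redundant_keystrokes = sum(opens_per_day.values()) * CODING_DAYS_PER_YEAR
print(redundant_keystrokes)  # -> 237350

# At ~20,000 keystrokes per hour (roughly 80 wpm)
print(round(redundant_keystrokes / 20000, 1))  # -> 11.9 (hours)
```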

This, depressingly, outputs:

237350 keystrokes

Which is a lot of redundant keystrokes! Assuming I type the command at a rate of 80 wpm, according to this website that is a kph (keystrokes per hour) rate of around 20,000.

This, even more depressingly, equates to:

11.9 hours

The equivalent of a long day wasted …to add to the many I’ve encountered as a Software Developer.


Easily Install Apache Tomcat on Mac OS X El Capitan

Please like or share this article if it helps you. Any problems, ask in comments!

By far the easiest way to install and configure an Apache Tomcat server on a Mac is using the open-source Homebrew package management suite. If you’re not already using Homebrew, check out its popularity on GitHub. It makes open-source package management on a Mac 100 times cleaner than doing it manually (everything is stored in one place, and packages are easy to remove, upgrade and find configs for).

There is a good tutorial here on installing homebrew if you do not already have it.

1)  –  Install Tomcat Server

Install Tomcat with the brew install command in Terminal (as a normal user, not root):

$ brew install tomcat

This will take care of the downloading, installation and configuration of Tomcat, and manage its dependencies as well. Take note of the output; brew commands are typically really good at displaying concise but useful info, error messages and help.

Homebrew keeps packages (known as kegs) in the Cellar, where you can check config and data files. It is a directory located at:

$ ls /usr/local/Cellar/

Verify the Tomcat installation using homebrew’s handy “services” utility:

$ brew services list

Tomcat should now be listed here. brew services is really useful for managing system services; type $ brew services --help for more info.
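For example, Tomcat can be started and stopped as a managed service like this (assuming your Homebrew is reasonably up to date):

```shell
$ brew services start tomcat
$ brew services stop tomcat
```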

2)  –  Run Tomcat Server

We are going to start the server by executing Tomcat’s catalina command with the “run” parameter, as follows:

$ ls /usr/local/Cellar/tomcat/

$ /usr/local/Cellar/tomcat/8.5.3/bin/catalina run

or more generally:

$ /usr/local/Cellar/tomcat/[version]/bin/catalina run

With [version] replaced with your installed version.

The version number and installation directory will have been listed by homebrew at the end of the installation output (typically the last line with a beer symbol in front). Catalina can also be set to start on system launch – although for security reasons we prefer to only run when needed (either using this command or more commonly via an IDE plugin).

Once the server is running you can navigate to the host page at:

http://localhost:8080/

 

3)  –  Configure Tomcat Server

To add and manage applications running on the server you will also need to edit a configuration file:

$ vim /usr/local/Cellar/tomcat/[version]/libexec/conf/tomcat-users.xml

With [version] again replaced with your installed version.

Towards the bottom of this short config file you will see a selection of users – all commented out by default. You need to uncomment one of these and give it the extra role “manager-gui” (preferably also changing the username and password for security). The resultant user entry should look something like this:

<user username="admin" password="password" roles="tomcat,manager-gui" />

After this you can navigate to the page (or click the “Manager App” link on the main Tomcat Server page):

http://localhost:8080/manager/html

Here you can view or delete the included sample application and deploy your own. It is usually easiest to deploy applications in a dev / testing environment using an IDE like PhpStorm or NetBeans; however, Tomcat’s web interface is also useful. For reference, deployed applications are usually located under the directory:

/usr/local/Cellar/tomcat/[version]/libexec/webapps/

 


Abstracting logic with AngularJS controller inheritance

As with most frameworks, it is good practice in AngularJS apps to abstract common elements out of your application’s different controllers. Whilst injecting custom services often represents a good way to achieve this, another complementary approach is to make use of AngularJS’s controller inheritance. As an example, consider the following AngularJS controllers:
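The original snippets appear to have been lost from this copy of the post; a reconstruction that would produce the output shown below might look like this (module and controller names are my own, with the parent inherited via the injected $controller service):

```javascript
angular.module('app', [])
  .controller('BaseController', function ($scope) {
    // Common default, available to all child controllers
    $scope.value = 'Default Value';
  })
  .controller('Menu1Controller', function ($scope, $controller) {
    // Inherit everything from BaseController, then override the value
    $controller('BaseController', { $scope: $scope });
    $scope.value = 'Menu1 value';
  })
  .controller('Menu2Controller', function ($scope, $controller) {
    $controller('BaseController', { $scope: $scope });
    $scope.value = 'Menu2 value';
  })
  .controller('Menu3Controller', function ($scope, $controller) {
    // No override -- keeps the parent's default value
    $controller('BaseController', { $scope: $scope });
  });
```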

And the following html:
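Also reconstructed (the original markup is missing here), the html could simply be three elements, each bound to one of the hypothetical controllers:

```html
<div ng-app="app">
  <div ng-controller="Menu1Controller">{{ value }}</div>
  <div ng-controller="Menu2Controller">{{ value }}</div>
  <div ng-controller="Menu3Controller">{{ value }}</div>
</div>
```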

This will give the following output:

Menu1 value
Menu2 value
Default Value

This technique can be combined with injected services to allow complex data structures and common initialisation logic to be abstracted to parent controllers and then overridden where desired.

Automating timestamps with Doctrine ORM

The Doctrine ORM includes a robust Event system enabling timestamp fields to be set automatically, without any explicit method calls. This also works great when utilising the many smart RESTful design patterns for Symfony, Laravel and other frameworks which can implement Doctrine.

To have Doctrine recognise event hooks the HasLifecycleCallbacks() annotation should be added to the Entity class:


/**
* @HasLifecycleCallbacks()
* @Entity
*/
class User {
}

Typically Doctrine’s mapping namespace will be imported into the file as something like ORM (use Doctrine\ORM\Mapping as ORM;), so the full annotations will be:


/**
* @ORM\HasLifecycleCallbacks()
* @ORM\Entity
*/
class User {
}

Now add the columns that will be utilised to the database (either add them to the database manually, use the relevant Doctrine console commands, or, even better, a Doctrine Migration):
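The column definitions were not preserved in this copy; assuming annotation mapping and these (hypothetical) property names, they might look like:

```php
/** @ORM\Column(type="datetime", nullable=true) */
private $created;

/** @ORM\Column(type="datetime", nullable=true) */
private $updated;

/** @ORM\Column(type="datetime", nullable=true) */
private $deleted;
```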

All persisted objects of this class will now have a timestamp automatically set when created, updated or deleted.

A more complete example of a resulting Entity class (with typical id & name fields for testing) is:
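The original example is missing from this copy; a sketch of such an entity, with hypothetical property names and the lifecycle hooks wired up, could be:

```php
<?php

use Doctrine\ORM\Mapping as ORM;

/**
 * @ORM\Entity
 * @ORM\HasLifecycleCallbacks()
 */
class User
{
    /**
     * @ORM\Id
     * @ORM\Column(type="integer")
     * @ORM\GeneratedValue(strategy="AUTO")
     */
    private $id;

    /** @ORM\Column(type="string") */
    private $name;

    /** @ORM\Column(type="datetime", nullable=true) */
    private $created;

    /** @ORM\Column(type="datetime", nullable=true) */
    private $updated;

    /** @ORM\Column(type="datetime", nullable=true) */
    private $deleted;

    public function setName($name)
    {
        $this->name = $name;
    }

    /** @ORM\PrePersist */
    public function onPrePersist()
    {
        // Fired once, when the entity is first persisted and flushed
        $this->created = new \DateTime();
    }

    /** @ORM\PreUpdate */
    public function onPreUpdate()
    {
        // Fired whenever a change to the entity is flushed
        $this->updated = new \DateTime();
    }

    /** @ORM\PreRemove */
    public function onPreRemove()
    {
        // Fired when the entity is removed and flushed
        $this->deleted = new \DateTime();
    }
}
```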

After creating, persisting and flushing a new instance, the created timestamp is automatically set. After selecting the previous row, changing the name and flushing, the updated timestamp also gets automatically set. After selecting the previous row and deleting it, the deleted timestamp gets automatically set.

Please note, the deleted timestamp assumes your database is retaining the data in a “deleted” state, using triggers or other such database functionality to handle Doctrine’s instruction to delete the row. If you are not using this, either ignore the code or remove it.

Simpler Doctrine classes by convention

Adhering to Doctrine’s conventions leads to simpler entity classes, with no picky join-column specifications. A standard bi-directional one-to-many join becomes simply:

The “One” Entity:
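The snippet is missing from this copy; judging by the console commands below, the “One” side was a Product entity, which might have looked something like:

```php
<?php

use Doctrine\Common\Collections\ArrayCollection;
use Doctrine\ORM\Mapping as ORM;

/** @ORM\Entity */
class Product
{
    /**
     * @ORM\Id
     * @ORM\Column(type="integer")
     * @ORM\GeneratedValue(strategy="AUTO")
     */
    private $id;

    /** @ORM\Column(type="string") */
    private $name;

    /**
     * @ORM\OneToMany(targetEntity="Feature", mappedBy="product")
     */
    private $features;

    public function __construct()
    {
        $this->features = new ArrayCollection();
    }
}
```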

The “Many” Entity:
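Again reconstructed from the console commands below, the “Many” side would be a Feature entity along these lines; note there is no @JoinColumn annotation, as Doctrine defaults the join column to product_id by convention:

```php
<?php

use Doctrine\ORM\Mapping as ORM;

/** @ORM\Entity */
class Feature
{
    /**
     * @ORM\Id
     * @ORM\Column(type="integer")
     * @ORM\GeneratedValue(strategy="AUTO")
     */
    private $id;

    /** @ORM\Column(type="string") */
    private $description;

    /**
     * @ORM\ManyToOne(targetEntity="Product", inversedBy="features")
     */
    private $product;
}
```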

Nice and straightforward.

The getters & setters can be auto-generated with the following console commands:

$ php app/console doctrine:generate:entities AppBundle:Product

and

$ php app/console doctrine:generate:entities AppBundle:Feature

If desired, now is the time to add any Doctrine Events such as Lifecycle Callbacks, which we do manually until they get better integrated into Doctrine’s console generator commands.

After everything is added, it can be a good style convention to leave a few lines between the field declarations and the member functions (including the constructor): the latter are just boring plumbing, and the field declarations should be the focus.

Git Workflow for Sprint projects

It was a surprise to find a lack of clear diagrams for using Git properly in Sprint / Agile / Scrum programming environments, so we thought we would take a moment to plot how we’ve always understood the best-practice Sprint git workflow:

[Diagram: Sprint Git workflow]

As usual, develop is the ongoing central branch, with master being the live, most stable branch, which is only merged to via release branches and (when necessary) hot-fix branches for quickly fixing critical bugs on the live system. The key difference is that a new branch is created from develop for each Sprint cycle. Features / stories / tasks (from here on known as stories) dished out to team members are then created as branches from that Sprint# branch.

If any stories are not completed within the Sprint timebox (e.g. 2 weeks) their branches can be carried over to the next Sprint as necessary. It should also be noted story feature branches can be merged, for example when a story relies on code from another story in the same Sprint. To do this run from the destination branch:

> git pull origin another-feature-branch

During the Sprint, completed stories / branches are tested and, if all’s good, a pull-request is created. However, only during the Sprint Review at the end of the Sprint are the pull-requests (branches) merged back into the Sprint branch; at the end of the review, the Sprint branch itself is merged back into develop.
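The story → Sprint → develop flow can be sketched against a throwaway repository (branch names here are purely illustrative):

```shell
set -e
# Throwaway repository purely to illustrate the branch flow
cd "$(mktemp -d)"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "initial"

git checkout -q -b develop                # ongoing central branch
git checkout -q -b sprint-1 develop       # one branch per Sprint cycle
git checkout -q -b story-login sprint-1   # one branch per story
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "story: login form"

# Sprint Review: merge the reviewed story back into the Sprint branch...
git checkout -q sprint-1
git -c user.name=demo -c user.email=demo@example.com merge -q --no-ff -m "merge story-login" story-login

# ...then merge the Sprint branch itself back into develop
git checkout -q develop
git -c user.name=demo -c user.email=demo@example.com merge -q --no-ff -m "merge sprint-1" sprint-1

git log --oneline --graph   # shows the story -> sprint -> develop merge shape
```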

If a release is to happen after the Sprint (and it frequently should if following agile methodology) then the develop branch is tagged and a new “release branch” created from this tag. It is this release branch which is then merged back to master in what is in effect the release of a new version of your software (along with merging back to develop to keep everything in sync). The master branch can then be tagged to create an official “released” version, this tagging should also happen after hot-fix merges.

As is the general idea with Sprint workflows, a new release can be made after every Sprint (maybe skipping the occasional one if the software isn’t ready). We usually make each release a new increment of the .# number, so x.1, x.2 … x.25, x.26 etc. Hot-fixes from master lead to x.x.# increments, so x.x.1, x.x.2 … etc.

Often during release candidate testing, minor bugs and tweaks will arise. Major bugs might require abandoning the release branch and carrying out new sprint work, but for quick fixes a hot-fix branch can be created off the release branch. This hot-fix branch can be merged straight back into the release branch (and shouldn’t affect the release version number once released).

Modern source-hosting environments like Bitbucket and GitHub make this whole process nice and GUI-based, especially with inbuilt issue-tracking systems. However, this can all be done purely from the command line, perhaps tying in with a standalone issue-tracker like Trac or Zendesk or, if you want to be really old-school, a simple Excel spreadsheet ;-).

Any questions or comments, please let us know.

Switching databases in Symfony upon application events

We’re big fans of the PHP Symfony2 framework and Doctrine2 ORM, but a recent project involved a REST API for a multi-client system where each client wanted their database fully separated from the others, and we only knew which client database to connect to once a user had logged in (so no easy config.yml and parameters.yml configuration).

The login gets handled via JSON Web Tokens using the excellent Lexik JWT Authentication bundle. So the first call to /auth/login_check with a username and password retrieves a token along with an id of the client (added using the onAuthenticationSuccessResponse listener documented here). Subsequent requests supply an HTTP Authorization header with the token and a custom “client” header specifying the id of the client (also documented in the above link).

This gave us stateless authentication for our API, but we still had to actually switch the Doctrine dbal connection to point to the right database. The solution (inspired by this Stack Overflow question) involved an onKernelRequest() event listener to switch the database from a base database (which holds login / authentication data and is referenced in the default connection settings – or whichever connection you’re overriding – of your config.yml file) to the desired database, and update the tokenStorage object accordingly. As a tip though, it’s best to keep the base and individual databases as schematically identical as possible, for reasons explained later.

First create the Event Listener:
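The original listener code is missing from this copy. A sketch of the idea, with hypothetical class, service and parameter names, using the connection re-construction trick from the Stack Overflow answer mentioned above (the tokenStorage update is omitted for brevity), might be:

```php
<?php

namespace AppBundle\EventListener;

use Doctrine\DBAL\Connection;
use Symfony\Component\HttpKernel\Event\GetResponseEvent;

/**
 * Switches the default Doctrine connection over to a client-specific
 * database, based on the "client" header supplied with each request.
 */
class DatabaseSwitcherListener
{
    private $connection;
    private $clientDatabases;

    public function __construct(Connection $connection, array $clientDatabases)
    {
        $this->connection      = $connection;
        $this->clientDatabases = $clientDatabases;
    }

    public function onKernelRequest(GetResponseEvent $event)
    {
        $clientId = $event->getRequest()->headers->get('client');
        if (!$clientId || !isset($this->clientDatabases[$clientId])) {
            return; // no client header: stay on the base database
        }

        // Take the base connection's params and swap in the client database
        $params           = $this->connection->getParams();
        $params['dbname'] = $this->clientDatabases[$clientId];

        if ($this->connection->isConnected()) {
            $this->connection->close();
        }

        // Re-open the connection against the client database
        $this->connection->__construct(
            $params,
            $this->connection->getDriver(),
            $this->connection->getConfiguration(),
            $this->connection->getEventManager()
        );
    }
}
```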

Then add this to the services.yml file:
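That snippet is also lost; a hypothetical service definition tagging such a listener onto the kernel.request event could look like:

```yaml
services:
    app.listener.database_switcher:
        class: AppBundle\EventListener\DatabaseSwitcherListener
        arguments: ['@doctrine.dbal.default_connection', '%client_databases%']
        tags:
            - { name: kernel.event_listener, event: kernel.request, method: onKernelRequest }
```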

Then add the client databases to the end of the parameters-dist.yml file:
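Again missing from this copy; the shape would be a simple map from client id to database name (all names here hypothetical):

```yaml
parameters:
    # ... existing parameters ...
    client_databases:
        1: client_one_db
        2: client_two_db
```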

The security.yml file looks something like:
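The original is lost here too; a typical Lexik JWT setup of that era (per the bundle’s documentation, with the paths adjusted to the /auth routes above) looked roughly like:

```yaml
security:
    firewalls:
        login:
            pattern:   ^/auth/login
            stateless: true
            anonymous: true
            form_login:
                check_path:               /auth/login_check
                success_handler:          lexik_jwt_authentication.handler.authentication_success
                failure_handler:          lexik_jwt_authentication.handler.authentication_failure
                require_previous_session: false

        api:
            pattern:   ^/
            stateless: true
            lexik_jwt: ~

    access_control:
        - { path: ^/auth/login, roles: IS_AUTHENTICATED_ANONYMOUSLY }
        - { path: ^/,           roles: IS_AUTHENTICATED_FULLY }
```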

Conclusion:
This solution works fairly well in providing multi-database connectivity based on a site event (in this case user login), with the occasional bit of peering underneath Symfony’s hood. As mentioned above, keeping the base database schematically identical (i.e. same table structure, differing data) to your individual databases is very highly advisable. Some Symfony logic occurs before the onKernelRequest event gets fired (looking at you, forms), so functionality limitations can otherwise occur there. Also, parsers like the Nelmio Api Documentation bundle typically build pre-authentication (so whilst the base database is still selected).

Doctrine Migrations can also be enabled with a little tinkering (I may write a new post on how to do this if there is demand) but again, it helps if the schemas are identical (although isn’t vital).

Saving a root owned file in vim when not opened using sudo

A very common Linux administration annoyance is opening a file in vim, making loads of changes, then going to save and quit the file (using :wq) only to receive the message:

E45: 'readonly' option is set (add ! to override)

But as only root can write to the file, adding ! won’t work either. Before pulling your hair out, try this handy little command:

:w !sudo tee %

This pipes the buffer’s contents (:w !{cmd}) into sudo tee %, where % expands to the current file name, so the buffer gets written to the file with root privileges. Afterwards, quit with :q!.

Deleting folders beginning with a dash “-” on Linux

If you have a folder or file which starts with a dash (e.g. “-someFolder”) then simply running rmdir (or rm for a file) will produce the following error:

> rmdir -someFolder
rmdir: illegal option -- s
usage: rmdir [-p] directory ...

A neat way to delete such a folder is by prefixing the path with ./ so the name no longer starts with a dash:

> rmdir ./-someFolder

The folder should now be deleted. Files work exactly the same way; just replace rmdir with rm.
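Another standard option is the -- end-of-options marker, which most commands accept. Both approaches in full (folder name illustrative):

```shell
set -e
cd "$(mktemp -d)"

mkdir -- -someFolder     # "--" marks the end of options, so -someFolder is a name
rmdir -- -someFolder     # works for deletion too

mkdir ./-someFolder      # the ./ prefix works just the same
rmdir ./-someFolder

ls -A                    # prints nothing: both directories were created and removed
```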

Doctrine createQueryBuilder() – EntityManager vs. EntityRepository

EntityRepository and EntityManager have slightly different versions of the createQueryBuilder() function. Whereas the EntityManager’s version takes no arguments, EntityRepository’s version expects an $alias parameter. What is going on?!

The EntityRepository class wraps the EntityManager’s call, as such:

From /lib/Doctrine/ORM/EntityRepository.php
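The quoted source is missing from this copy; in a Doctrine 2.x checkout the method is essentially the following (the exact signature and phpdoc vary slightly between versions):

```php
/**
 * Creates a new QueryBuilder instance that is pre-populated for this entity name.
 *
 * @param string $alias
 *
 * @return QueryBuilder
 */
public function createQueryBuilder($alias)
{
    return $this->_em->createQueryBuilder()
        ->select($alias)
        ->from($this->_entityName, $alias);
}
```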

The function’s phpdoc reveals the reasoning behind the difference in parameter counts: EntityRepository returns a version of createQueryBuilder() customised for itself. We no longer need to specify the primary table we’re selecting from; instead we must supply an $alias parameter, which would usually be passed later to the from() function.

Also note the above means all columns of the entity are selected by default.