Changing the default keyboard shortcut of a Chrome Extension

Changing the keyboard shortcuts of some Chrome web browser extensions can be a pain when the preferences option is non-editable in chrome://settings. A somewhat long-winded workaround is to edit the source code of the extension directly, which is often available somewhere like GitHub.

To change a Chrome extension keyboard shortcut via its source code, the steps are as follows:

  1. Clone the extension source code to your computer
  2. Open the manifest.json file in the extension’s root directory. Edit or add this section: “omnibox”: { “keyword”: “key” }

Where “key” is the keyboard shortcut key itself. As an FYI, Chrome likes to call its address bar the “omnibox”.
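For reference, a minimal manifest.json with the omnibox section in place might look like the following sketch (the extension name, version and keyword are illustrative, not from any particular extension):

```json
{
  "name": "Example Extension",
  "version": "1.0",
  "manifest_version": 2,
  "omnibox": { "keyword": "ex" }
}
```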

  3. In Chrome, go to: chrome://extensions/
  4. Delete the old extension (being careful not to delete important data if relevant)
  5. Switch on Developer mode in the top right of the page
  6. Select Load unpacked extension... and navigate to the cloned directory
  7. Verify the extension has loaded in Chrome and works correctly
  8. It is a security risk to browse the Internet with Chrome Extensions Developer mode left on. Solve this by packing the extension into a single .crx file (a type of .zip file)
  9. Delete the newly created “unpacked” extension and click Pack extension... (you can sign the file with a .pem key but this is not necessary)
  10. Switch off Developer mode in the top right of the page
  11. Navigate to the directory above the cloned directory
  12. Drag the .crx file produced in the packing step onto the Chrome window
  13. The new extension should now appear on the chrome://extensions/ page
  14. Verify the correct result by navigating to chrome://settings/ -> Manage search engines...
  15. At the bottom of the page, under “Search engines added by extensions”, the keyboard shortcut should be updated and the extension should appear only once


You now have a modified keyboard shortcut for a Google Chrome extension. Some installations may experience problems with the browser disabling the extension unless it is officially released through the store (private repos are available). If the extension is disabled, a restart of the web browser usually suffices to make it work again.


Checkout Specific Git Branches in Composer.json

Composer allows different branches of a Git repository to be checked out for use as a third-party library. To do so, first add the Git repository to your composer.json file:

 "repositories": [
     {
         "type": "vcs",
         "url": "git@bitbucket.org:org/repo.git"
     }
 ]


Now the repository is known, add the branch to the composer.json require section:

"require": {
    "org/repo": "dev-branch1"
}


You can even choose a specific commit:

"require": {
    "org/repo": "dev-branch1#ec457d0a974c48d5685a7efa03d137dc8bbde7e3"
}


Examples

As explained on the getcomposer.org website, login details can be provided to access private repositories. For example, using SSH:

{
    "require": {
        "org/repo": "dev-branch1"
    },
    "repositories": [
        {
            "type": "composer",
            "url": "ssh2.sftp://bitbucket.org",
            "options": {
                "ssh2": {
                    "username": "composer",
                    "pubkey_file": "/home/username/.ssh/id_rsa.pub",
                    "privkey_file": "/home/username/.ssh/id_rsa"
                }
            }
        }
    ]
}


Or using an SSL certificate key:

{
    "require": {
        "org/repo": "dev-branch1"
    },
    "repositories": [
        {
            "type": "vcs",
            "url": "https://bitbucket.org/org/repo.git",
            "options": {
                "ssl": {
                    "local_cert": "/home/username/.ssl/composer.pem"
                }
            }
        }
    ]
}


Or using HTTP Basic authentication:

{
    "require": {
        "org/repo": "dev-branch1"
    },
    "repositories": [
        {
            "type": "vcs",
            "url": "https://username:password@bitbucket.org/com/repo.git"
        }
    ]
}


This provides fairly good SSL-encrypted security, although it’s a good idea to remove the username & password credentials from the Git repository and instead place them in a file named auth.json within the COMPOSER_HOME directory, as such:

{
    "http-basic": {
        "bitbucket.org": {
            "username": "username",
            "password": "password"
        }
    }
}


You can also specify HTTP headers directly, which enables our preferred method for stateless authentication, JWT (JSON Web Tokens):

{
    "repositories": [
        {
            "type": "vcs",
            "url": "https://bitbucket.org/com/repo.git",
            "options":  {
                "http": {
                    "header": [
                        "authorization: authenticated.jwt.token"
                    ]
                }
            }
        }
    ]
}


Conclusion

As can be seen from the above examples, Composer enables a flexible range of options with regard to the selection and integration of Git repositories along with their respective branches and versions.

Make MySQL databases default to UTF8

By default, MySQL databases are created with one of the latin charsets – as if globalisation never happened – just asking for future internationalisation and charset issues. UTF8 is the generally accepted way forward (for now) and there are few occasions where it isn’t the best charset to use.

Ideally, some sort of framework config system will take care of this, but otherwise, for existing MySQL databases, you can find out their current charset by entering the MySQL terminal and typing:

SELECT default_character_set_name FROM information_schema.SCHEMATA WHERE schema_name = "myDatabase";

The problem is that changing the character set of an in-use database can be troublesome, and you have to remember to specify UTF8 as the charset every time you create a new database. Also, many database tools (e.g. PHP’s Doctrine) do not currently support creating a database with a non-default charset.

The best solution is to set the default charset to UTF8 straight after installing MySQL and avoid the problems before they occur. This can be done simply by modifying /etc/mysql/my.cnf (note: my.cnf may be located elsewhere) and adding the following lines to the [mysqld] section:

collation-server = utf8_general_ci
character-set-server = utf8

Then restart the MySQL service:

sudo service mysql restart

New databases will now be created using the UTF8 character set by default.
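You can verify the change by creating a fresh database and checking its charset, for example (the database name here is illustrative):

```sql
CREATE DATABASE myNewDatabase;

-- Should now report utf8 rather than a latin charset
SELECT default_character_set_name FROM information_schema.SCHEMATA
WHERE schema_name = 'myNewDatabase';
```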

Discovering on Mac OS X: vi -> vim

OK, I admit it, I’ve only just now found out! It seems the vi terminal command on Mac OS X just points to vim anyway and always has done:

$ ls -l /usr/bin/vi
lrwxr-xr-x  1 root  wheel  3  9 Nov  2015 /usr/bin/vi -> vim

I started using a Mac as my main machine for software development in 2009 and have mostly used one since. The number of times I have typed a completely redundant m, multiplied by the number of years I have been doing it, makes for some frightening numbers.

As a very quick and rough calculation, I made a Python doodle estimating how many times I probably opened vim every day in each year (trying to judge how heavily I might have typed vim per day based on role, environment etc.). Here is the basic estimation:

Kind of simplistic and assuming:

  1. Approximation of per day opening frequency
  2. Coding 235 full days a year
    (based on: work days + some weekend days + some evenings – holiday days)
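The doodle would have looked something like this. Note the per-day figures below are illustrative guesses (the original numbers are not preserved), chosen per year to reproduce the total that follows:

```python
# Rough estimate of redundant "m" keystrokes from typing "vim" instead of "vi".
# NOTE: opens_per_day values are illustrative reconstructions, not the
# original estimates.

CODING_DAYS_PER_YEAR = 235  # work days + some weekend days + some evenings - holidays

opens_per_day = {
    2009: 90, 2010: 110, 2011: 120, 2012: 135,
    2013: 135, 2014: 140, 2015: 140, 2016: 140,
}

# One redundant "m" keystroke per vim invocation
total = sum(freq * CODING_DAYS_PER_YEAR for freq in opens_per_day.values())
print(total)
```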

This, depressingly, outputs:

237350 keystrokes

Which is a lot of redundant keystrokes! Assuming I type the command at a rate of 80 wpm, according to this website that is a kph (keystrokes per hour) rate of around 20,000.

This, even more depressingly, equates to:

11.9 hours

The equivalent of a long day wasted, to add to the many I’ve encountered as a Software Developer.

Easily Install Apache Tomcat on Mac OS X El Capitan

Please like or share this article if it helps you. Any problems, ask in comments!

By far the easiest way to install and configure an Apache Tomcat server on a Mac is using the open-source Homebrew package management suite. If you’re not already using Homebrew, check out its popularity on GitHub. It makes open-source package management on a Mac 100 times cleaner than doing it manually (everything is stored in one place; packages are easy to remove, upgrade and find configs for).

There is a good tutorial here on installing homebrew if you do not already have it.

1)  –  Install Tomcat Server

Install Tomcat with brew install in the terminal (as a normal user, not root):

$ brew install tomcat

This will take care of downloading, installing and configuring Tomcat, and will manage its dependencies as well. Take note of the output; brew commands are typically really good at displaying concise but useful info, error messages and help.

Homebrew keeps packages (known as kegs) in the Cellar, a directory where you can check config and data files. It is located at:

$ ls /usr/local/Cellar/

Verify the Tomcat installation using homebrew’s handy “services” utility:

$ brew services list

Tomcat should now be listed here. brew services is really useful for managing system services; type $ brew services --help for more info.

2)  –  Run Tomcat Server

We are going to start the server by executing Tomcat’s Catalina command with the “run” parameter as such:

$ ls /usr/local/Cellar/tomcat/

$ /usr/local/Cellar/tomcat/8.5.3/bin/catalina run

or more generally:

$ /usr/local/Cellar/tomcat/[version]/bin/catalina run

With [version] replaced with your installed version.

The version number and installation directory will have been listed by Homebrew at the end of the installation output (typically the last line, with a beer symbol in front). Catalina can also be set to start on system launch, although for security reasons we prefer to run it only when needed (either using this command or, more commonly, via an IDE plugin).

Once the server is running you can navigate to the host page at:

http://localhost:8080/


3)  –  Configure Tomcat Server

To add and manage applications running on the server you will also need to edit a configuration file:

$ vim /usr/local/Cellar/tomcat/[version]/libexec/conf/tomcat-users.xml

With [version] again replaced with your installed version.

Towards the bottom of this short config file you will see a selection of users, all commented out by default. You need to uncomment one of these and give it the extra role “manager-gui” (preferably also changing the username and password for security). The resultant user entry should look something like this:

<user username="admin" password="password" roles="tomcat,manager-gui" />

After this you can navigate to the page (or click the “Manager App” link on the main Tomcat Server page):

http://localhost:8080/manager/html

Here you can view or delete the included sample application and deploy your own. Usually it’s easiest to deploy applications in a dev / testing environment using an IDE like PHPStorm or NetBeans; however, Tomcat’s web interface is also useful. For reference, deployed applications are usually located under the directory:

/usr/local/Cellar/tomcat/[version]/libexec/webapps/


Abstracting logic with AngularJS controller inheritance

As with most frameworks, it’s good practice in AngularJS apps to abstract common elements out of your application’s different controllers. Whilst injecting custom Services often represents a good way to achieve this, another complementary approach is to make use of AngularJS’s controller inheritance. As an example, consider the following AngularJS controllers:
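A minimal sketch of the idea, with assumed controller names matching the output shown below: a parent DefaultCtrl sets the default value, and the child controllers invoke it via AngularJS’s $controller service before overriding (this uses pre-1.3-style global controller functions for brevity):

```javascript
// Parent controller: sets the default value on the scope.
function DefaultCtrl($scope) {
  $scope.value = 'Default Value';
}

// Child controllers "inherit" by invoking the parent controller against
// their own $scope via the $controller service, then overriding as needed.
function Menu1Ctrl($scope, $controller) {
  $controller(DefaultCtrl, { $scope: $scope }); // run parent initialisation
  $scope.value = 'Menu1 value';                 // override the default
}

function Menu2Ctrl($scope, $controller) {
  $controller(DefaultCtrl, { $scope: $scope });
  $scope.value = 'Menu2 value';
}
```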

And the following html:
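A sketch of the markup, again with assumed controller names, each controller attached to its own element:

```html
<div ng-app>
  <div ng-controller="Menu1Ctrl">{{ value }}</div>
  <div ng-controller="Menu2Ctrl">{{ value }}</div>
  <div ng-controller="DefaultCtrl">{{ value }}</div>
</div>
```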

This will give the following output:

Menu1 value
Menu2 value
Default Value

This technique can be combined with injected services to allow complex data structures and common initialisation logic to be abstracted to parent controllers and then overridden where desired.

Automating timestamps with Doctrine ORM

The Doctrine ORM includes a robust Event system enabling timestamp fields to be set automatically without any explicit method calls during object instantiation. This also works great when utilising the many smart RESTful design patterns for Symfony, Laravel and other frameworks which can implement Doctrine.

To have Doctrine recognise event hooks, the HasLifecycleCallbacks() annotation should be added to the Entity class:


/**
* @HasLifecycleCallbacks()
* @Entity
*/
class User {
}

Typically Doctrine will be imported into the file as something like @ORM, so the full annotation will be:


/**
* @ORM\HasLifecycleCallbacks()
* @ORM\Entity
*/
class User {
}

Now add the columns to the database that will be utilised (either add them to the database manually, use the relevant Doctrine console commands, or, even better, a Doctrine Migration):
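A sketch of the columns and their event hooks inside the entity, assuming field names created, updated and deleted (the field and method names here are illustrative):

```php
/** @ORM\Column(type="datetime") */
private $created;

/** @ORM\Column(type="datetime", nullable=true) */
private $updated;

/** @ORM\Column(type="datetime", nullable=true) */
private $deleted;

/** @ORM\PrePersist */
public function onPrePersist()
{
    $this->created = new \DateTime();
}

/** @ORM\PreUpdate */
public function onPreUpdate()
{
    $this->updated = new \DateTime();
}

/** @ORM\PreRemove */
public function onPreRemove()
{
    $this->deleted = new \DateTime();
}
```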

All persisted objects of this class will now have a timestamp automatically set when created, updated or deleted.

A more complete example of a resulting Entity class (with typical id & name fields for testing) is:
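A sketch of such a class (field names and types assumed for illustration):

```php
<?php
use Doctrine\ORM\Mapping as ORM;

/**
 * @ORM\Entity
 * @ORM\HasLifecycleCallbacks()
 */
class User
{
    /** @ORM\Id @ORM\Column(type="integer") @ORM\GeneratedValue */
    private $id;

    /** @ORM\Column(type="string") */
    private $name;

    /** @ORM\Column(type="datetime") */
    private $created;

    /** @ORM\Column(type="datetime", nullable=true) */
    private $updated;

    /** @ORM\Column(type="datetime", nullable=true) */
    private $deleted;

    public function setName($name) { $this->name = $name; }

    /** @ORM\PrePersist */
    public function onPrePersist() { $this->created = new \DateTime(); }

    /** @ORM\PreUpdate */
    public function onPreUpdate() { $this->updated = new \DateTime(); }

    /** @ORM\PreRemove */
    public function onPreRemove() { $this->deleted = new \DateTime(); }
}
```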

After creating, persisting and flushing a new instance, the timestamps are automatically set. After selecting the previous row, changing the name and flushing, the updated timestamp also gets automatically set. After selecting the previous row and deleting it, a deleted timestamp gets automatically set.

Please note, the deleted timestamp assumes your database is retaining the data in a “deleted” state, using triggers or other such database functionality to handle Doctrine’s instruction to delete the row. If you are not using this, either ignore the code or remove it.

Simpler Doctrine classes by convention

Adhering to Doctrine’s conventions leads to simpler entity classes, with no picky join column specification. A standard bi-directional one-to-many join becomes simply:

The “One” Entity:
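A sketch, using the Product entity referenced by the console commands below (annotations assumed):

```php
<?php
use Doctrine\Common\Collections\ArrayCollection;
use Doctrine\ORM\Mapping as ORM;

/** @ORM\Entity */
class Product
{
    /** @ORM\Id @ORM\Column(type="integer") @ORM\GeneratedValue */
    private $id;

    /**
     * The "one" side: no join details needed, just the mapped property.
     * @ORM\OneToMany(targetEntity="Feature", mappedBy="product")
     */
    private $features;

    public function __construct()
    {
        $this->features = new ArrayCollection();
    }
}
```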

The “Many” Entity:
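And a sketch of the Feature entity on the “many” side (annotations assumed). Note there is no @JoinColumn; by convention Doctrine defaults to a product_id column:

```php
<?php
use Doctrine\ORM\Mapping as ORM;

/** @ORM\Entity */
class Feature
{
    /** @ORM\Id @ORM\Column(type="integer") @ORM\GeneratedValue */
    private $id;

    /**
     * The "many" side: Doctrine infers the product_id join column.
     * @ORM\ManyToOne(targetEntity="Product", inversedBy="features")
     */
    private $product;
}
```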

Nice and straightforward.

The getters & setters can be auto-generated with the following console commands:

$ php app/console doctrine:generate:entities AppBundle:Product

and

$ php app/console doctrine:generate:entities AppBundle:Feature

If desired, now is the time to add any Doctrine Events such as Lifecycle Callbacks, which we do manually until they are better integrated into Doctrine’s console generator commands.

After everything is added, it can be a good style convention to leave a few lines between the field declarations and the member functions (including the constructor); the latter are just boring plumbing, and the field declarations should be the focus.

Git Workflow for Sprint projects

It was a surprise to find a lack of clear diagrams for using Git properly in Sprint / Agile / Scrum programming environments. Therefore we thought we would take a moment to plot how we’ve always understood the best-practice Sprint Git workflow:

[Diagram: Git Sprint workflow]

As usual, develop is the ongoing central branch, with master being the live, most stable branch, which is only merged to via release branches and (when necessary) hot-fix branches for quickly fixing critical bugs on the live system. The key difference is that a new branch is created from develop for each Sprint cycle. Features / stories / tasks (from here on, stories) dished out to team members are then created as branches from that Sprint# branch.

If any stories are not completed within the Sprint timebox (e.g. 2 weeks) their branches can be carried over to the next Sprint as necessary. It should also be noted story feature branches can be merged, for example when a story relies on code from another story in the same Sprint. To do this run from the destination branch:

$ git pull origin another-feature-branch

During the Sprint, completed stories / branches are tested and, if all’s good, a pull-request is created; but only during the Sprint Review at the end of the Sprint are the pull-requests (branches) merged back into the Sprint branch, and then, at the end of the review, the Sprint branch itself is merged back into develop.
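The Sprint-time flow described above can be sketched end-to-end in a throwaway repository (the Sprint and story branch names are illustrative):

```shell
set -e
# Throwaway demo repository (branch names are examples)
repo=$(mktemp -d); cd "$repo"; git init -q
git config user.name demo; git config user.email demo@example.com
git commit -q --allow-empty -m "initial commit"

git checkout -qb develop                 # ongoing central branch
git checkout -qb sprint-12               # one branch per Sprint cycle
git checkout -qb story-login-form        # a story branch off the Sprint branch
git commit -q --allow-empty -m "story work"

git checkout -q sprint-12
git merge -q --no-ff -m "Sprint Review: merge story" story-login-form
git checkout -q develop
git merge -q --no-ff -m "end of Sprint: merge sprint-12" sprint-12
```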

If a release is to happen after the Sprint (and it frequently should if following agile methodology), then the develop branch is tagged and a new “release branch” created from this tag. It is this release branch which is then merged back to master in what is, in effect, the release of a new version of your software (along with merging back to develop to keep everything in sync). The master branch can then be tagged to create an official “released” version; this tagging should also happen after hot-fix merges.
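The release steps can likewise be sketched in a throwaway repository (tag and version numbers are illustrative):

```shell
set -e
# Throwaway demo repository (tag and branch names are examples)
repo=$(mktemp -d); cd "$repo"; git init -q
git config user.name demo; git config user.email demo@example.com
git commit -q --allow-empty -m "initial commit"
git branch -q -M master

git checkout -qb develop
git commit -q --allow-empty -m "Sprint work"
git tag sprint-12-end                        # tag develop at end of Sprint
git checkout -qb release-1.4 sprint-12-end   # release branch from the tag
git commit -q --allow-empty -m "release-candidate fix"

git checkout -q master
git merge -q --no-ff -m "release 1.4" release-1.4
git tag v1.4                                 # official released version
git checkout -q develop
git merge -q --no-ff -m "sync release back into develop" release-1.4
```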

As is the general idea with Sprint workflows, a new release can be made after every Sprint (maybe skipping the occasional one if the software isn’t ready). We usually make these new releases increments of the .# numbers, so x.1, x.2 … x.25, x.26 etc. Hot-fixes from master lead to x.x.# increments, so x.x.1, x.x.2 … etc.

Often during release-candidate testing, minor bugs and tweaks will arise. Major bugs might require abandoning the release branch and carrying out new Sprint work, but for quick fixes a hot-fix branch can be created off the release branch. This hot-fix branch can be merged straight back into the release branch (and shouldn’t affect the release version number once released).

Modern source hosting environments like Bitbucket and GitHub make this whole process nice and GUI-based, especially with inbuilt issue-tracking systems; however, this can all be done purely from the command line, perhaps tying in with a standalone issue-tracker like Trac or Zendesk or, if you want to be really old-school, a simple Excel spreadsheet ;-).

Any questions or comments, please let us know.