Containerised app on Google Kubernetes Engine - the easy way!

There are multiple ways to deploy an application to Google Kubernetes Engine (GKE), and with limited experience it can become overwhelming to work through the documentation. Hence, I compiled this step-by-step guide, hoping it will help people who, like me, had trouble getting started with GKE deployments.

This guide assumes that you have a containerised application ready for deployment.

If you would like to learn the basics of Kubernetes, I would recommend the blog Kubernetes 101 (my absolute favourite!).

Here’s my 5 step guide:

  1. Create a project in Google Cloud Platform (GCP) with billing enabled.
  2. Install/enable prerequisite packages.
  3. Prepare your application and push it to GitHub.
  4. Set up Google Cloud Build.
  5. Deploy the container artefact to a Kubernetes cluster.

Create a GCP Project with billing

Visit the create a project page on GCP, create a project and enable billing for it.


Install/enable prerequisite packages

This guide assumes you have set up Docker locally and are able to build and run a Docker image on your machine.

1. gcloud SDK: install the gcloud SDK on your computer (more details on the Google Cloud page). Verify that you can run this command in your shell:

gcloud --help

2. Kubernetes CLI (kubectl)

brew install kubernetes-cli

Verify installation using:

kubectl version

3. Enable the gcloud kubectl component. This lets kubectl work together with gcloud:

gcloud components install kubectl

4. Set your gcloud config to your project and compute zone:

gcloud config set project rahul-my
gcloud config set compute/zone asia-southeast1-b

Enable the Kubernetes Engine API from the GCP console; this may take up to a minute. Let it run in the background.

Meanwhile, validate that you can see the correct project_id and zone by running gcloud config list.

After this step your computer is ready to create clusters, deploy applications and scale your Kubernetes clusters using the kubectl command.

Prepare your application and push it to GitHub

Let’s set up our application and push it to GitHub. You should verify that you can run your application locally using the docker run command. I have compiled a quick demo application with Nodejs here; feel free to use it or deploy your own. The rest of the tutorial works with deploying this sample application.

You can test whether the application’s Docker image works locally using the following commands:

git clone

cd nestjs-api

docker build -t nestjs-api .    

docker run -it -p 3000:3000 --name=nestjs-api nestjs-api

Visit http://localhost:3000/docs to check that your API documentation is accessible in the browser.

After this step, your application is ready to go live! Just push it to your GitHub account.

Set up Google Cloud Build

Enable the Cloud Build API, then go to Triggers and create a trigger. Select GitHub; you may be asked to authenticate and grant Google Cloud Platform access to your GitHub repositories.


Then select the repository you would like to set a build trigger for; in our case it’s nestjs-api.


Next, the trigger settings. Here we set a trigger to build the Docker image every time there is a push to the master branch. You can name it anything (I named mine Push to master branch), select master as the branch that triggers the build, and choose Dockerfile as the build configuration.


Click on Create Trigger

A few important things to note here:

  1. The image name is created from the repo path, and the tag version is set to the latest commit_sha in your git repository. So if at any time you would like to deploy an older version, you can do so using an older commit_sha from your repository, which makes it pretty easy to roll back in case a bad build goes to production.
  2. You can manually trigger a build by clicking on the Run Trigger button in the Google cloud build triggers menu.
  3. In the history section you can see list of all the builds.
  4. You can see the list of all successfully built Docker containers in the Google Container Registry (GCR). From there you can get the docker pull command by clicking on the build name of your latest container, which we will deploy to our Kubernetes cluster.


  5. Keep a copy of the docker pull image path. Note: you can also copy this path from the artefacts section of the Cloud Build history.

At this step your application artefact is ready for deployment.

Deploy the container artefact to a Kubernetes cluster

Finally, you have everything you need to take your application to a Kubernetes cluster. We will follow these steps:

  1. Create the smallest available cluster, with 3 micro instances, on GCP with the name api-cluster (you can name it anything). This step may take several minutes to complete.

    gcloud container clusters create api-cluster --zone asia-southeast1-b --num-nodes 3 --machine-type f1-micro

    It should output the name of the cluster with the master_ip and node status on completion. You can verify that 3 micro nodes are running in the cluster by running:

    gcloud compute instances list
  2. Create a kubectl deployment from the built container. Here <IMAGE_PATH> is a placeholder for the docker pull image path you copied from GCR:

    kubectl create deployment nestjs-api --image=<IMAGE_PATH>

    It should output deployment.apps/nestjs-api created

  3. Expose your app to the internet using:

    kubectl expose deployment nestjs-api --type=LoadBalancer --port 80 --target-port 3000

    It should output service/nestjs-api exposed. This step creates a Cloud Load Balancer and deploys the service onto it, which is then exposed to the public Internet using an external IP on port 80.

That’s it!

It will take a few seconds before your application is available on the Internet.

To check running pods and services you can use the following commands respectively:

kubectl get pods
kubectl get service


From here, you can copy the External IP of your service nestjs-api, paste it in your browser, and you should be able to hit your API. In my case it outputs this:


Deploying a new build version

Once you have a new version of your application, it is pretty simple to deploy using the following command (<IMAGE_PATH> again stands for the image path from GCR):

kubectl set image deployment nestjs-api nestjs-api=<IMAGE_PATH>:$NEW_COMMIT_SHA

It takes a few seconds before your new application version starts serving, which can be checked by running

kubectl describe deployments

which tells you whether the latest image version is deployed.

Cleaning up your deployment

  1. Delete the Service: This step will deallocate the Cloud Load Balancer created for your Service:

    kubectl delete service nestjs-api
  2. Delete the container cluster: This step will delete the resources that make up the container cluster, such as the compute instances, disks and network resources.

    gcloud container clusters delete api-cluster

Further steps: the deployment steps can be easily automated using a CI/CD tool, where the tool gets the latest master branch version and runs the kubectl set image command with the latest COMMIT_SHA on every successful Google Cloud Build.

Future of the web is fast, immersive and usable


There has never been a more exciting time for the web than now. While much cool stuff was launched at Google IO ‘18, there is a lot for web developers to cheer about. Some of the most exciting announcements are around AMP (Accelerated Mobile Pages), PWA (Progressive Web Apps) and WebXR (immersive web: AR + VR). These modern web technologies sit right at the centre of the future of the web. Let’s look at where these technologies stand!

AMP (Accelerated Mobile Pages)

AMP is a library that helps create web pages that are compelling, smooth, and load near-instantaneously for users; learn more. AMP is built with 3 core components:

  1. AMP HTML (HTML with some restrictions)
  2. AMP JS (fast rendering JavaScript library)
  3. AMP Cache (Google cache server)

To learn more about these AMP core-components or to get started with AMP development visit:

On the technical front, AMP leverages 2 main aspects to deliver content that loads fast and renders equally fast:

Fast Content Delivery:

AMP caches a valid AMP page on the Google AMP cache servers (multiple edge server locations) and delivers it to consumers from the nearest location, which greatly reduces network time. On top of that, the Google Search Engine Result Page (SERP) preloads AMP pages in the background, keeping them ready to be consumed directly from the result page; hence the page loads instantaneously.

Note: the median load time of an AMP page is under 1 second worldwide.

Fast Page Rendering

The AMP JavaScript library is designed to render fast by only allowing asynchronous scripts, cutting out all third-party extensions and JavaScript, and only allowing them to be loaded in <amp-iframe>. It also enforces static sizing of all resources so that the layout of the page can be calculated before the resources are downloaded. All CSS is inlined and limited to 50 KB. Learn more.

The result of the above is a stunning webpage that loads fast! Here’s an example from popular news sites like the Guardian and the BBC:

Though AMP pages work great and load amazingly fast, they were designed with a little caveat, which is more of a trade-off: when an AMP page is loaded from a SERP it is served directly from the AMP cache edge server, so the URL shown in the browser is that of the AMP cache server.

Challenge and trade-off

The rationale behind AMP’s design was to allow instant loading of a page without sacrificing the privacy of users. AMP preloads the cached AMP pages in the background while showing the search results, which is what makes AMP pages load instantly. Letting publishers know what users have been searching for would be a major privacy flaw, so the pages are loaded directly from the cache, which results in the URL being shown with the cache’s prefix.

At the Google IO ‘18, it was announced that AMP has found a solution to this trade-off using Web-Packaging.

Web Packaging

Web Packaging is a standard that allows a publisher to sign an HTTP exchange (a request/response pair) using a certificate; when the exchange is delivered to the browser as a package, the browser can show the publisher’s origin URL. The caching server does the work of actually delivering that exchange to the browser, and the package is sent over HTTPS to the cache server. This feature is still in an experimental phase and can be enabled in Chrome via the flag chrome://flags#enable-signed-http-exchange. AMP has also launched the AMP Packager to make it easier for developers to build signed packages and push them to the AMP cache server with ease. It is still a very early build but ready for experimentation, and can be found here

Key details about AMP packager

  • Signs the AMP pages that can be consumed by AMP Cache
  • Package is signed with a certificate
  • That certificate’s origin URL will be shown in the browser URL bar, which solves the biggest trade-off of AMP
  • Max lifetime of the package is 7 days
  • The AMP packager is an HTTP server that sits behind a Frontend server which fetches and signs AMP documents as requested by the AMP Cache.

How to use the AMP packager

  • Configure the frontend server to serve the certificate and AMP packages

  • Set up a URL mapping between the AMP URL and the corresponding package URL; in the instance below a .htxg extension has been used for the package file:


  • The frontend then reverse-proxies such a request to the AMP packager.



  • AMP packages will contain a certUrl that indicates the certificate that can be used to validate the package. The certUrl may be on any domain, and it may be HTTP or HTTPS, but it will have a path of the form

    /amppkg/cert/<base64 encoding of a hash of the public certificate>


The packager itself:

  • Receives reverse-proxied requests from the frontend
  • Makes an outgoing connection to on port 443
  • Sends the package to the AMP Cache

Signed HTTP Exchange Enabled Browser

  • Loads the AMP package from the AMP cache edge server
  • Displays the package’s signed origin URL on the browser address bar.

More info at

AMP Stories

Another interesting product launched at IO ‘18 was AMP Stories, which lets developers and publishers immerse readers in tappable, full-screen content built on top of AMP technology, fast and open. It brings an Instagram/Snapchat-like full-screen visual storytelling experience to the web, conveying information using images, videos, graphics, audio, and more.

AMP story components


An HTML page can contain a single AMP story <amp-story>, which can house multiple <amp-story-page> elements, and each page can have multiple <amp-story-grid-layer> elements, which provide a building block for HTML/AMP elements to fit in a template.

A basic HTML page with AMP story code snippet will look like this:

<!doctype html>
<html ⚡>
  <head>
    <meta charset="utf-8">
    <title>My Awesome AMP story</title>
    <script async src=""></script>
    <script async custom-element="amp-story" src=""></script>
  </head>
  <body>
    <amp-story standalone
        title="Joy of Pets"
        publisher="AMP tutorials">
      <amp-story-page id="cover">
        <amp-story-grid-layer template="fill">
          <amp-img src=""
              width="720" height="1280">
          </amp-img>
        </amp-story-grid-layer>
        <amp-story-grid-layer template="vertical">
          <h1>The Joy of Pets</h1>
          <p>By AMP Tutorials</p>
        </amp-story-grid-layer>
      </amp-story-page>
    </amp-story>
  </body>
</html>

You can check out a sample amp story I created from some of the moments I captured at Google IO’18.

PWA (Progressive Web Apps)

PWA, in my opinion, is one of the most significant improvements to the web in recent history. PWAs provide a reliable, fast and engaging experience on the web: they can beat slow internet speeds by allowing websites to work offline, and improve availability by letting users add them to the home screen. PWAs are fast, run reliably in different network conditions and allow app-like smooth animations. They also support a full-screen experience and push notifications to engage users.

The core of a PWA is a service worker, the basic architecture of service workers looks like this:


Service workers live in the browser and work with various APIs, like the Cache API, to cache important assets locally and help the website load fast, which is one of many features of service workers. The key features of service workers in the context of PWAs are:

  • Is a JavaScript worker, so it can’t access the DOM directly.
  • Uses Promises.
  • Runs in the background, separate from a web page.
  • Supports push notifications and background sync.
  • Acts like a programmable network proxy.
  • Is terminated when not in use and restarted when needed.
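The “programmable network proxy” idea can be sketched as a cache-first fetch strategy. This is my own simplified illustration, not code from the talk: in a real service worker the Cache API and fetch come from the worker’s global scope, so here they are passed in as parameters just to keep the sketch self-contained:

```javascript
// A cache-first strategy: answer from the cache when possible,
// otherwise fetch from the network and remember the response.
// `cache` mimics the Cache API (match/put); `fetchFn` mimics fetch.
async function cacheFirst(cache, fetchFn, request) {
  const cached = await cache.match(request);
  if (cached) return cached;               // cache hit: no network round trip
  const response = await fetchFn(request); // cache miss: go to the network
  // Note: a real Response body can only be read once, so a live
  // service worker should store response.clone() instead.
  await cache.put(request, response);
  return response;
}
```

In a real service worker this logic would sit inside a fetch event listener, along the lines of `self.addEventListener('fetch', e => e.respondWith(...))`.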

Initially, service workers were only available in Google Chrome; however, just a couple of months back, in March, Apple announced support for service workers in Safari 11.4 on iOS and MacOS. Around Google IO ‘18, Microsoft also took a step forward and shipped support for service workers in the Microsoft Edge browser. That means web developers can now develop future-proof PWAs for their users without worrying much about support in the major modern browsers. More information on support for service workers can be found here.

New features in PWA

At Google IO’18, PWA for desktop was announced for the first time, which means developers can now build a similar app-like experience for desktop users too. As intriguing as it might sound, it is also pragmatic, as some of the facts shared at Google IO’18 (which can be seen in the video here) show. Summarised here:

The number of global users shows desktop users are still increasing.

Here are each platform’s average daily impressions by hour.

And desktop usage during average working hours shows an upward trend.

These figures clearly show there is a great opportunity for PWAs on the desktop. PWAs can provide an edge for productivity tools, games and quick work utilities that don’t require heavy installation files on a computer, offering instead a simple, easy-to-install add-to-home-screen icon that lets users discover apps easily.

Other key PWA features were also announced at Google IO 2018. Chrome will no longer display the installation prompt by itself; instead, developers have to listen for the beforeinstallprompt event on window to make the app installable. The scope attribute has been made compulsory in the manifest.json file in order to keep users within the scope of the PWA application; if the user navigates outside the scope, it returns to a normal web page inside a browser tab/window. Currently, PWA for desktop is an experimental feature available in Google Chrome Canary. To play around with it, enable the Chrome flag chrome://flags/#enable-desktop-pwas

I tried playing around with a PWA as below, and it just works!


WebXR (Web AR & Web VR)

XR => AR + VR: Augmented Reality or Virtual Reality

AR = brings the virtual world to you; VR = brings you to the virtual world.

AR and VR are not new technologies; however, they have mostly been available through native apps on platforms like Android and iOS. With the advent of immersive web technologies, led by major web companies like Mozilla and Google, the future of the web looks more immersive than ever before. Mozilla has been doing great work in this field for many years, and at Google IO’18 Google announced and showcased various AR and VR web samples that were not only cool but also very useful. They render as smoothly as native apps and are widely accessible using common smartphone hardware. In my opinion, AR and VR content fits right in on the web, as it makes consumption of such experiences natural and accessible. Think of a scenario where you are researching or learning about something new on a news website: while reading, you can also dig deeper and immerse yourself in the AR world, without needing to install huge mobile apps, wait for them to finish downloading, or (even worse) sign up before you can experience any AR content. Such a concept was showcased at Google IO: Chacmool AR in Chrome Canary. AR fits neatly into the conventional web ecosystem; around 100 million phones and tablets are capable of playing AR web content, and most of them (if not all) have a web browser, and that’s pretty much all that is required to consume immersive content on a website.

What’s new in WebXR?

  • Serves as a foundation for immersive web
  • Replaces the deprecated WebVR API
  • Allows more and more people to be able to use WebVR
    • not just limited to VR devices
    • Enables AR functionality
  • Magic Windows support for VR
  • Enables browsers to add more optimization

Compared with WebVR, WebXR can pack 2x the pixels at the same frame rate: the default frame buffer size on a Pixel XL for WebVR is 1145 x 1807 (~2M pixels), but with WebXR it is possible to get 1603 x 2529 (~4M pixels).
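The arithmetic behind those numbers checks out:

```javascript
// Pixels per frame for the default framebuffers quoted above.
const webvrPixels = 1145 * 1807; // ~2.07 million
const webxrPixels = 1603 * 2529; // ~4.05 million
const ratio = webxrPixels / webvrPixels; // ~1.96, i.e. roughly 2x
```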

WebXR for VR supports magic windows, which allow users to view 360º content without the need for a VR headset. A device’s gyroscope or panning support can be used to consume VR content on smartphones. It can also show a monoscopic view of a 3D scene and update it based on the device’s orientation sensors.

To get started

  • Early builds are supported on Chrome Canary >= v69.0.3453
  • chrome://flags/#webxr
    • Enable #webxr and #webxr-hit-test flags

Sample code.

WebXR has great potential and can be used to create web experiences for various use-cases like shopping, education, entertainment and much more.

With the latest advancements in web technologies, it’s very apparent that the future of the web is fast, immersive and usable!

Helpful resources and credits:

Making web beautiful and high performing with AMP

When speed matters, AMP is the way to go: it makes the web much faster and provides a better user experience with a lower bounce rate. So switch to AMP and get higher customer satisfaction, better ranking and hence more conversions.

What is AMP

Accelerated Mobile Pages, or AMP for short, is a project from Google and Twitter designed to make mobile web pages load almost instantaneously, much like Facebook’s Instant Articles and Apple News. Technically, it’s an HTML page on a diet: it contains only a subset of HTML tags (elements) to keep it lightweight. Google aims to make the future of websites better and faster, as they state on their official AMP project page:

The AMP Project is an open-source initiative aiming to make the web better for all. The project enables the creation of websites and ads that are consistently fast, beautiful and high-performing across devices and distribution platforms.

Once an AMP page is indexed by Google it will appear on the Google SERP (Search Engine Result Page) with a lightning bolt (⚡) icon, and it will be served from the Google AMP cache and load instantly.

Wonder why AMP pages load so fast?

A typical AMP page construct looks like this:

<!doctype html>
<html ⚡>
  <head>
    <meta charset="utf-8">
    <link rel="canonical" href="hello-world.html">
    <meta name="viewport" content="width=device-width,minimum-scale=1,initial-scale=1">
    <style amp-boilerplate>body{-webkit-animation:-amp-start 8s steps(1,end) 0s 1 normal both;-moz-animation:-amp-start 8s steps(1,end) 0s 1 normal both;-ms-animation:-amp-start 8s steps(1,end) 0s 1 normal both;animation:-amp-start 8s steps(1,end) 0s 1 normal both}@-webkit-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@-moz-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@-ms-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@-o-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}</style><noscript><style amp-boilerplate>body{-webkit-animation:none;-moz-animation:none;-ms-animation:none;animation:none}</style></noscript>
    <script async src=""></script>
  </head>
  <body>Hello World!</body>
</html>

AMP is built with 3 core components which are the main reason why AMP loads instantly.


AMP HTML is basically an extension of traditional HTML with custom AMP properties. Some HTML tags are replaced with AMP-specific tags: for instance, <img> is replaced with <amp-img> and <iframe> with <amp-iframe>, to name a few. While keeping most regular HTML tags, AMP also strips out some tags that would not provide any speed benefit. The full list of AMP HTML tags can be found here.

AMP JavaScript

Google puts a strict cap on the use of JavaScript: the only JS that can be used in an AMP page is the AMP JS library, which implements AMP’s best performance practices. In essence, AMP makes everything that comes from external resources asynchronous, so nothing in the page can block rendering. Other performance techniques include the sandboxing of all iframes, the pre-calculation of the layout of every element on the page before resources are loaded, and the disabling of slow CSS selectors.

Google AMP Cache

The Google AMP Cache is a proxy-based content delivery network for delivering all valid AMP documents. It fetches AMP HTML pages, caches them, and improves page performance automatically. When using the Google AMP Cache, the document, all JS files and all images load from the same origin, which uses HTTP/2 for maximum efficiency. Every cached AMP page, once indexed, is then served by the AMP cache directly from the SERP.

So in essence AMP pages are fast because:

  • HTML and CSS are stripped down,
  • no custom JavaScript is allowed,
  • images are lazily loaded,
  • the above-the-fold part of an AMP page is prerendered, while below-the-fold content is loaded asynchronously,
  • and most importantly, pages are heavily cached, which avoids the need to fetch them from the web server.

Impact on SEO and page ranking

Google’s announcement of the mobile-first index clearly states: “Although our search index will continue to be a single index of websites and apps, our algorithms will eventually primarily use the mobile version of a site’s content to rank pages from that site, to understand structured data, and to show snippets from those pages in our results. Of course, while our index will be built from mobile documents, we’re going to continue to build a great search experience for all users, whether they come from mobile or desktop devices.” It means that AMP, apart from providing indirect SEO benefits due to great UX and speed, will have a direct benefit on page ranking due to mobile-first indexing.

Hands on with AMP

To get quick hands-on experience with AMP, I have created a microsite using Grow. It can be found in my git repo amp-ground-zero. This is a quick demo of how easily AMP can be incorporated into a website. Here the amp-html has been included in base.html, which is the master file for the website. The example leverages the basic AMP page skeleton along with two commonly used components, amp-img and amp-sidebar, and the page is AMP-valid (which can be tested by appending #development=1 to the URL).

Screenshot from amp-ground-zero:


Further research

Going forward, I will try out different AMP components for developing a complete production-ready website. The starting point of my research will be ampbyexample, which provides a succinct user interface to play around with all AMP offerings.

Optimising a website’s above-the-fold content

Above-the-fold content is how you make your first impression on your users. Make sure it is a lasting one.

One of the most common enemies of a website is a slow network: it is always a challenge to avoid long loading times where all users can see is a blank screen for several seconds. Remember, every extra second of waiting is a conversion opportunity wasted. However, if the above-the-fold content (the critical path) is properly prioritised and all the non-essential content is loaded without blocking rendering, you get not only great speed but subtle UX: users will not see the blank page for long, and in the best cases will not see it at all.

Before continuing, do read a basic frontend optimisation guide.


Before the browser can render a page it has to build the DOM tree by parsing the HTML markup. During this process, whenever the parser encounters a script it has to stop and execute it before it can continue parsing the HTML. In the case of an external script, the parser is also forced to wait for the resource to download, which may incur one or more network round trips and delay the time to first render of the page.

The above-the-fold content of your website plays a pivotal role in driving user traffic. With the need for speed in page loading, it is also important to know what to load and when to load it. The general rule of thumb is to load above-the-fold content in a single request. There are various ways to do that; the most elementary one is to inline all the resources required to render the above-the-fold content in the HTML itself. As easy as it may sound, it requires quite some work to achieve, especially if you already have an existing website that loads multiple resources from external servers, like fonts, CSS, jQuery and other JavaScript libraries. Let us discuss some of the best practices to rock your website’s above-the-fold.

Identifying render blocking content

This is the most important part of the exercise: knowing which parts of the above-the-fold content are essential helps us efficiently improve the overall speed of the website. Once the critical path is identified, the rest of the resources should be deferred to load after the page load is complete. You can use the Google PageSpeed Insights tool to check which resources are causing render blocking.

Critical and non-critical CSS

First, the critical-path CSS should be separated and inlined into the HTML itself so that it becomes part of a single request. You can either manually identify and separate it, or use packages that will do it automatically, like:

Online tool criticalpathcssgenerator: using this tool you can build the critical path from the complete CSS file. The generated critical-path CSS can then be inlined into an HTML <style> block, and the rest of the complete CSS file can be loaded asynchronously.

Node module Critical: the Critical node module sets up an end-to-end, fully automated solution for generating critical-path CSS. It can be implemented easily.


$ npm install --save critical


var critical = require('critical');

critical.generate({
    // Inline the generated critical-path CSS
    // - true generates HTML
    // - false generates CSS
    inline: true,

    // Your base directory
    base: 'dist/',

    // HTML source
    html: '<html>...</html>',

    // HTML source file
    src: 'index.html',

    // Your CSS Files (optional)
    css: ['dist/styles/main.css'],

    // Viewport width
    width: 1300,

    // Viewport height
    height: 900,

    // Target for final HTML output.
    // use some CSS file when the inline option is not set
    dest: 'index-critical.html',

    // Minify critical-path CSS when inlining
    minify: true,

    // Extract inlined styles from referenced stylesheets
    extract: true,

    // Complete Timeout for Operation
    timeout: 30000,

    // Prefix for asset directory
    pathPrefix: '/MySubfolderDocrot',

    // ignore CSS rules
    ignore: ['font-face',/some-regexp/],

    // overwrite default options
    ignoreOptions: {}
});

more explanation here.

Laravel package critical-css: for more sophisticated websites built with Laravel, this package provides an end-to-end solution, from building critical-path CSS to inlining it in the view. It’s built on top of Laravel as a wrapper for the Critical package. Here are the basic steps to get started with criticalcss in a Laravel project:

Install npm package:

$ npm install critical --save

Install Composer package (composer.json)

$ composer require krisawzm/critical-css
composer install

Set up the Service Provider

Add the following to the providers key in config/app.php:

'providers' => [

To get access to the criticalcss:clear and criticalcss:make commands, add the following to the $commands property in app/Console/Kernel.php:

protected $commands = [

Prepare the config file: generate a template for the config/criticalcss.php file by running:

$ php artisan vendor:publish

To generate critical CSS, use the command php artisan criticalcss:make; to inline the generated style in a view, use the @criticalcss directive provided by the CriticalCssServiceProvider, as below:


In some special cases you might notice that the criticalcss package generates duplicate styles, if the input array $cssPaths contains files with duplicate styles. This is common when you compile CSS for production with a task runner like gulp: if you have 5 compiled CSS files that contain the same styles and they are part of the above-the-fold content, the generated critical path will duplicate those styles 5 times. To resolve this issue I created a fork of the above repository. It allows you to specify a single CSS file to use for a particular Laravel route. On top of removing duplicates, it also builds the critical path faster, as it works from a much smaller stylesheet. To use this fork, update your composer.json as follows:

"require": {
    "krisawzm/critical-css": "dev-master"
}

and specify the VCS to my repository like this:

"repositories": [
    {
        "type": "vcs",
        "url": ""
    }
]

What to do with the non-critical CSS?

The non-critical CSS should be loaded asynchronously, triggering the download only after DOM processing is complete. Google’s recommended way of doing this is discussed here, and it can be implemented in a Laravel project in a similar fashion.
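A minimal framework-agnostic sketch of that idea (my own illustration; the stylesheet path below is a placeholder) is to inject the stylesheet link only after the page has loaded:

```javascript
// Append a stylesheet <link> to <head>; calling this from a
// window "load" handler keeps the CSS off the critical path.
function loadCss(href) {
  var link = document.createElement('link');
  link.rel = 'stylesheet';
  link.href = href;
  document.head.appendChild(link);
  return link;
}
```

In the page you would call it once the load event fires, e.g. `window.addEventListener('load', function () { loadCss('css/non-critical.css'); });`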

Asynchronously loading Javascript and jQuery

JavaScript undoubtedly creates the most overhead when it comes to rendering the page, especially when the website is built with jQuery. If your JavaScript code is based on jQuery, it becomes difficult to load jQuery asynchronously, as the scripts depending on it will break while the page renders. But there is a workaround: this article neatly explains different ways to load jQuery asynchronously. My preferred way is to asynchronously load the external jQuery resource, check that it is loaded, and only then execute my jQuery-dependent code, as follows:

<script async src=""></script>

// loaded anywhere on the page asynchronously.
(function jqIsReady() {
    if (typeof $ === "undefined" || typeof jQuery === "undefined") {
        // jQuery has not loaded yet; poll again in 10ms.
        setTimeout(jqIsReady, 10);
        return;
    }
    var async = async || []; // queue of ["ready"|"load", callback] pairs
    while (async.length) { // there is some syncing to be done
        var obj = async.shift();
        if (obj[0] == "ready") {
            $(obj[1]);
        } else if (obj[0] == "load") {
            $(window).load(obj[1]);
        }
    }
    // From here on, pushed callbacks run immediately since jQuery is ready.
    async = {
        push: function (param) {
            if (param[0] == "ready") {
                $(param[1]);
            } else if (param[0] == "load") {
                $(window).load(param[1]);
            }
        }
    };
    /* your jquery dependent code here */
})();
This can easily be added to your gulp task using the gulp-wrap package, which automatically takes care of any new code you may add in the future. You can pipe gulp-wrap with this code as below:

return gulp
    .src('resources/js/**/*.js') // adjust to your JS sources
    .pipe(wrap('(function jqIsReady(){if(typeof $==="undefined" || typeof jQuery==="undefined")' +
            '{setTimeout(jqIsReady,10);return;} var async=async || []; while(async.length){var obj=async.shift(); ' +
            'if (obj[0]=="ready") { $(obj[1]); } else if (obj[0] == "load") { $(window).load(obj[1]); } } ' +
            'async = { push: function (param) { if (param[0] == "ready") { $(param[1]); } ' +
            'else if (param[0] == "load") { $(window).load(param[1]); } } }; \n<%= contents %>\n })()'))
    .pipe(gulp.dest('public/js')); // adjust to your output directory

Read more about gulp-wrap here.

The rest of the non-critical JavaScript can be loaded async or deferred as follows:

Async: <script async src="script.js"></script>
Defer: <script defer src="script.js"></script>

Read more about async and defer here.

Loading iframes

Iframes are render-blocking resources; if they are not handled well they can keep the page loading forever, as the browser will keep waiting until the iframe has completely loaded. The best practice is to tell the browser to load the iframe in the background and not wait for it, unless it genuinely needs to load with the page. This can be done by loading the iframe only after the page load completes, as below:

    (function (d) {
        // Create an empty iframe and load the real content (a script here)
        // only once the iframe's own onload fires, so it never blocks the
        // parent page. The script URL is left blank, as in the original.
        var iframe = d.body.appendChild(d.createElement('iframe')),
            doc = iframe.contentWindow.document;

        doc.open().write('<body onload="' +
            'var d = document;d.getElementsByTagName(\'head\')[0].' +
            'appendChild(d.createElement(\'script\')).src' +
            '=\'\'">');
        doc.close();
    })(document);

With all the optimisations we discussed, I could boost the Google PageSpeed score from 70 to 89 on mobile and from 77 to 94 on desktop (GooglePageSpeed-criticalpath.png).

Other than that, the DOMContentLoaded time came down to 727 ms from 2.41 s, while page load time improved by almost 1 second under a regular 3G network simulated on an Apple iPhone 6.

Kickass your website front-end speed

The bounce rate on your website is directly proportional to its speed, and for that matter conversion depends a lot on speed too. If your website takes one extra second to load the important content, people will leave it and move on to another one, especially on mobile devices, which contribute almost half of the traffic nowadays.

How can I check what needs to be fixed?

There are various tools you can leverage to get insights about your website's quality and speed, but I trust you won't need any more than these two:

  • Google PageSpeed Insights gives you a whole lot of things you can optimise on your website. We will cover the ones that count the most.

  • Google Chrome developer tools Google Chrome provides a fantastic set of tools to monitor and analyse the speed and resource request lifecycle.
    Hint: Look under Network and Timeline tabs

The most important problems these monitoring tools will point out are:

  • Asset minification
  • HTML minification
  • Image Optimisation
  • Slow image downloads

Asset Minification

One of the most practical and easy-to-implement solutions is to minify all the assets (CSS and JS) to decrease the amount of data the browser needs to download. An efficient way to automate this process is to write a gulp task, which would look something like this:

var gulp = require('gulp');
var concat = require('gulp-concat');
var minify = require('gulp-minify');
var cleanCss = require('gulp-clean-css');

gulp.task('js-task', function () {
    return gulp.src(['resources/js/lib/*.js', 'resources/js/main.js'])
        .pipe(concat('all.js'))
        .pipe(minify())
        .pipe(gulp.dest('public/js')); // adjust to your output directory
});

gulp.task('css-task', function () {
    return gulp.src(['resources/css/style1.css', 'assets/css/style2.css'])
        .pipe(concat('all.css'))
        .pipe(cleanCss())
        .pipe(gulp.dest('public/css')); // adjust to your output directory
});

gulp.task('default', ['css-task', 'js-task']);

Learn more about setting up gulp tasks here:

HTML minification

There are many ways to minify HTML, from using online apps to writing your own; the right one depends on the platform the application is built upon. If you can use the Node platform, then one of the packages recommended by Google itself is kangax/html-minifier, which provides a highly configurable environment for minifying HTML.

Another way I would recommend is to write your own minifier that runs as a middleware: it collects the final response, minifies it, and sends the result to the browser. This is simply a string-manipulation program that filters out unnecessary spaces and tabs, which can considerably decrease the size of the HTML. Something like this in PHP:

$response->setContent(preg_replace('/\s+/S', ' ', $response->getContent()));
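The same idea can be sketched in JavaScript (a deliberately naive version of my own; a real minifier must avoid touching pre blocks, textareas, and inline scripts):

```javascript
// Naive whitespace minifier: collapse runs of whitespace and drop the
// gaps between adjacent tags. Illustrative only; not safe for markup
// containing <pre>, <textarea>, or inline scripts.
function minifyHtml(html) {
    return html
        .replace(/\s+/g, ' ')    // collapse runs of whitespace to one space
        .replace(/>\s+</g, '><') // remove whitespace between adjacent tags
        .trim();
}
```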

Figure 1. Left: before HTML minification; right: after HTML minification. (html_minify_comparison.png)

Image Optimisation

Optimising images is the most important part of frontend optimisation, as images make up more than 64% of the web's content, according to this interesting report.

  • Image compression

Images play an important role in loading a webpage; if they are not optimised they can degrade the entire website's performance. The Google PageSpeed tool's recommendations can be found here. The main idea is to keep images as compressed as possible.

  • Lazy load images

Lazy loading images can provide great user experience and speed, since in practice it makes the above-the-fold content download much faster. One recommended library can be found here.
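If you prefer not to pull in a library, the core of the technique can be sketched like this (the IntersectionObserver wiring is my own addition and assumes a reasonably modern browser; images are written with a data-src placeholder instead of src):

```javascript
// Swap an image's data-src placeholder for its real src. Kept pure so
// the swap logic is easy to test outside a browser.
function activateImage(img) {
    if (img.dataset && img.dataset.src) {
        img.src = img.dataset.src;
        delete img.dataset.src;
    }
    return img;
}

// In the browser, load each <img data-src="..."> only when it nears the viewport.
if (typeof IntersectionObserver !== 'undefined') {
    var observer = new IntersectionObserver(function (entries) {
        entries.forEach(function (entry) {
            if (entry.isIntersecting) {
                activateImage(entry.target);
                observer.unobserve(entry.target);
            }
        });
    });
    document.querySelectorAll('img[data-src]').forEach(function (img) {
        observer.observe(img);
    });
}
```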

  • Image sprites

Loading multiple image files (like icons) through multiple requests is discouraged by best practices. The better way is to club them together, download them all in a single request, and access each one through the CSS background-position property. This process can be easily automated via grunt, gulp or node. In a nutshell, the spritesheet (the concatenated image) is created by an image-processing program (the most popular ones are compass and ImageMagick). Spritesmith is one of the popular packages available to automate sprite generation with grunt, gulp and webpack.
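To make the background-position idea concrete, here is a small hypothetical helper that emits the CSS rule selecting one icon out of a spritesheet (the sprite.png name and all coordinates are placeholders; sprite generators like Spritesmith emit equivalent CSS for you):

```javascript
// Given an icon's top-left coordinates inside the spritesheet, build the
// CSS rule that shows just that icon via background-position.
function spriteRule(selector, x, y, width, height) {
    return selector + ' { width: ' + width + 'px; height: ' + height + 'px; ' +
        'background: url(sprite.png) no-repeat -' + x + 'px -' + y + 'px; }';
}
```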

Using CDN for serving assets

A CDN is essentially a network of geographically dispersed servers. Each CDN node (also called an edge server) caches the static content of a site: images, CSS/JS files and other structural components. A CDN provides great speed and stability when it comes to serving assets over the network. The primary end-user benefit is high speed, which a CDN achieves through the following:

  1. High speed servers
  2. Low network latency
  3. Multiple edge servers