Deploying Web Applications using Juju – (Part 3/3)
Canonical
on 7 November 2013
The goal of this tutorial series is to demonstrate the power of Juju service orchestration for deploying web applications and infrastructure services.
Juju is a service orchestration framework designed to let application designers deploy their applications in a simple, repeatable and logical manner without all that tedious mucking about with custom configurations.
Part 1
In part 1 we set up the Juju environment, the Juju GUI and the Nagios monitoring service. Then we deployed a web application, wordpress backed by a MySQL database, and added it to the monitoring server.
Part 2
In part 2 we set up the rsyslog log server, munin graphing server and landscape-client for managing packages. Then we added relationships to the mysql and wordpress services so they are logged, graphed and managed.
Part 3
In this part, I will show you how to add a front-end load balancer to the wordpress site and then scale out the wordpress application servers to simulate what happens when you need to scale out due to an increase in demand.
Should I use a Load Balancer as a Service?
There are several ways to load balance a web service. Many providers offer a load balancing as a service (LBaaS) API which allows you to load balance your web application. The trouble with these services is that they are not currently standardised, so you are likely to end up locked in to a particular provider or LBaaS service. This would then restrict your ability to migrate your services to other clouds based on Openstack or any other provider endpoint that Juju can use. That list now includes Amazon EC2, HP Cloud, Openstack, Microsoft Azure (as of Juju version 1.16) and Linux Containers, and in the future it will include the burgeoning number of public cloud providers based on Openstack.
On top of that, a provider is currently in testing that allows servers to be added to a Juju environment using SSH access alone, albeit manually, which would let you add any Ubuntu server to a Juju environment.
So, in order to be able to migrate our services to any provider we need to use our own Load Balancer service that we deploy just like any other service running on an instance in the cloud.
In this case, I am going to use HAProxy as the front-end load balancer, but there are other choices such as Apache, Varnish, etc., and the process is very similar for each.
Adding a Load Balancer Front End
So, let’s add our load balancer. You know the drill by now.
juju deploy haproxy
Once deployed then add the relation to the wordpress application using:
juju add-relation haproxy wordpress
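Behind the scenes, the haproxy charm’s relation hooks pick up the address and port of each wordpress unit and write them into the HAProxy backend configuration. If you want to see this for yourself, a quick check (assuming the charm writes to the stock Ubuntu config path) is:
juju ssh haproxy/0
cat /etc/haproxy/haproxy.cfg
You should see the wordpress unit listed as a backend server once the relation hooks have finished running.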
Your environment should now look like this:
As we are now going to connect to wordpress through the HAProxy front end, we need to unexpose wordpress and expose HAProxy:
juju unexpose wordpress
juju expose haproxy
To connect to your wordpress blog you now use the hostname of the HAProxy service, which you can find in the service details in juju-gui once deployment is complete.
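If you prefer the command line, the same address appears in the output of juju status. This is a sketch assuming the standard Juju 1.x YAML status output:
juju status
Look for the public-address field under the haproxy/0 unit and point your browser at that hostname.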
So, now we have a front end load balancer, we can access the WordPress blog using the load balancer’s web address.
Adding Relationships to the Load Balancer
Now we have a load balancer, we need to make sure that it is monitored and kept up to date, so let’s add relationships to all the infrastructure services as follows:
juju add-relation nagios:nagios haproxy
juju add-relation nrpe haproxy
juju add-relation rsyslog-forwarder haproxy
juju add-relation munin-node haproxy
juju add-relation landscape-client:container haproxy
This will add the haproxy service to all the infrastructure services.
Your environment should now look like this:
If you then check the Munin and Nagios servers, you should see the haproxy service automatically added to their configuration after a short time.
You will also see a new registration waiting to be approved when you log in to your Landscape account.
Scaling Out on Demand
Let’s say you are now using your blog to post your thoughts on the web, someone links to you from a major news outlet, and suddenly your small blog is struggling under huge demand. First, how would you know? You should be able to see that your services are in trouble from the Nagios alerts that trigger under heavy load, and the Munin graphs should show your resources being heavily used. You will also be logging any problems via the rsyslog server, which you can check automatically using a log analysis tool to find critical problems.
So, once we have decided that we want to scale our service to meet the increased demand, it is simple to do with Juju.
From the command line you just add units to cope with the new demand:
juju add-unit --num-units 6 wordpress
From the juju-gui you would click on the wordpress service, then click on the number of units and change it to request as many units as you see fit.
So, let’s use the GUI: type 7 into the num-units box and press Enter.
You can edit the constraints, such as CPU and memory, if you want to make sure you have large enough servers to cope with the new demand. Then click on Confirm.
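If you prefer to stay on the command line, constraints can also be set on the service before adding units. This is a sketch with illustrative values; check juju help set-constraints for the exact syntax in your Juju version:
juju set-constraints --service wordpress mem=2G cpu-cores=2
Any units added after this will request machines that satisfy those constraints.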
The service now scales to 7 servers: the new instances start to deploy and you can see the 1 active server and 6 pending servers.
These new servers will take a few minutes to deploy and once deployed they will be automatically added to the infrastructure to be monitored by nagios, graphed by munin, logged by rsyslog and managed by Landscape (pending approval in the web interface).
The scaling process will complete once the new instances are deployed, the services are installed and configured, and all the relationships are added, so it should take about 10 minutes in total.
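To follow progress from the command line you can run juju status and watch the new wordpress units come up; the field names here assume the standard Juju 1.x status output:
juju status
Each new unit appears under the wordpress service with agent-state: pending, changing to started once installation and the relation hooks have completed.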
In the juju-gui you will then see the wordpress charm’s bar go green to show that all units are deployed and running.
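Scaling back down when the rush is over works the same way in reverse. As a sketch, using illustrative unit names taken from juju status (the Juju 1.x CLI calls this destroy-unit; later releases rename it remove-unit):
juju destroy-unit wordpress/6 wordpress/5
The relation hooks then update HAProxy so it stops sending traffic to the removed units.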
Any Cloud – The Same Process
In this tutorial I focused on using the dominant player in the marketplace, Amazon EC2, as our target audience is likely to already have accounts there.
However, the process followed in this tutorial for deploying services with Juju is the same if you are deploying to Amazon EC2, HP Cloud, Microsoft Azure, Openstack, MAAS or even local containers using LXC.
The only difference is how you set up the provider environment initially, as you need to get an account and configure the parameters for that provider.
The documentation on how to do this for each provider is available at https://juju.ubuntu.com/docs/.
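As an illustration, this is roughly what an Amazon EC2 entry in ~/.juju/environments.yaml looked like for the Juju 1.x series; the key names follow the documentation of the time and the values are placeholders for your own credentials:
environments:
  amazon:
    type: ec2
    access-key: <your AWS access key>
    secret-key: <your AWS secret key>
    control-bucket: juju-<a globally unique bucket name>
    admin-secret: <a password of your choosing>
    default-series: precise
Once that is in place, you bootstrap against that environment and every command in this series works unchanged.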
Summary
Here we can finally see the full power of Juju. It isn’t in deployment, as there are many tools that do that job adequately; it is in the orchestration of services in large, complex environments. With other tools you would have to do much of that work on your own, without the aid of quality charms maintained by the very developers who design and write the services being deployed.
Scaling becomes a simple and repeatable process, even in a complex orchestration environment, and it is displayed graphically for easy understanding and collaboration between teams of developers and administrators.
Well, that ends this three-part series, and good luck to you on your Juju journey.
Don’t forget to check out the Juju site at:
https://juju.ubuntu.com/
and the detailed Juju documentation available at:
https://juju.ubuntu.com/docs/
and the Juju charm store at:
https://jujucharms.com/
Darryl Weaver
Cloud Sales Engineer, EMEA
Canonical