Category: DevOps

TerraKube – Kubernetes on OpenStack

November 14, 2015

TerraKube is the simplest way to get started with Kubernetes on OpenStack.

TerraKube is a simple tool to provision a Kubernetes cluster on top of OpenStack using HashiCorp's Terraform. If you are unfamiliar with it, Terraform is a declarative tool for building, changing, and versioning infrastructure: you describe the desired state of your infrastructure in a configuration file, and Terraform builds a plan and executes it to reach that state. If you are familiar with AWS CloudFormation or OpenStack Heat, here is how Terraform compares to Heat: Terraform or Heat
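For a taste of the declarative style, here is a minimal sketch of a Terraform plan that boots a single OpenStack instance (all names and credentials below are illustrative):

# Illustrative only: boot one instance with the OpenStack provider
provider "openstack" {
  user_name   = "demo"
  tenant_name = "demo"
  password    = "secret"
  auth_url    = "http://keystone.example.com:5000/v2.0"
}

resource "openstack_compute_instance_v2" "node" {
  name        = "example-node"
  image_name  = "CoreOs"
  flavor_name = "m1.small"
  key_pair    = "mykey"
}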

TerraKube is a project that I started a few months ago while I was evaluating Kubernetes and needed a simple, quick, and repeatable way to install it on OpenStack. Keep in mind that this is a work in progress.

For the sake of this tutorial, I will assume you already have some familiarity with OpenStack and know how to use the OpenStack command line.

TerraKube Overview

So what we are going to do here is install Terraform on a node, typically your workstation. TerraKube itself is just a Terraform configuration file, called a plan. We will apply the plan, which in turn talks to OpenStack, launches instances, and configures them with Kubernetes.

The Kubernetes cluster will consist of one Kubernetes master and n Kubernetes nodes:

Kubernetes Master: CoreOS, etcd, kube-apiserver, kube-scheduler, kube-controller-manager
Kubernetes Nodes: CoreOS, etcd, kubelet, kube-proxy, flannel, docker

Installation

1. Install Terraform
Follow the instructions from here: https://www.terraform.io/intro/getting-started/install.html
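At the time of writing, Terraform ships as a zip of binaries, so the install boils down to something like this (the version number is illustrative; check the downloads page for the current one):

wget https://releases.hashicorp.com/terraform/0.6.6/terraform_0.6.6_linux_amd64.zip
unzip terraform_0.6.6_linux_amd64.zip -d ~/terraform
export PATH=$PATH:$HOME/terraform
terraform --version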

2. Upload CoreOS Image to OpenStack Glance
TerraKube deploys Kubernetes on top of CoreOS instances on OpenStack, so first we need to upload a CoreOS image to the OpenStack Image Service.

wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2
bunzip2 coreos_production_openstack_image.img.bz2
glance image-create --name CoreOs --container-format bare --disk-format qcow2 --file coreos_production_openstack_image.img --is-public True

3. Configure TerraKube

git clone https://github.com/sacharya/terrakube
cd terrakube
mv terraform.tfvars.example terraform.tfvars

Edit terraform.tfvars with your configuration info. Most of the configuration should be pretty straightforward.
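As a rough sketch, the file ends up looking something like this; the exact variable names are defined in the repo's variables.tf, so treat these as placeholders:

# Placeholder values; see variables.tf in the repo for the real variable names
auth_url           = "http://keystone.example.com:5000/v2.0"
tenant_name        = "demo"
username           = "demo"
password           = "secret"
image_name         = "CoreOs"
worker_count       = "2"
etcd_discovery_url = "https://discovery.etcd.io/<token>"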

Please note that you have to get a new etcd_discovery_url for every new cluster. Take a look at restart.sh for an example, where the etcd_discovery_url in the terraform.tfvars file is updated with a new value before the Terraform plan is applied.
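A fresh discovery URL comes from the public etcd discovery service; for example, for a cluster expecting three etcd members:

curl -s https://discovery.etcd.io/new?size=3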

4. Using TerraKube
Show the execution plan

terraform plan

Execute the plan

terraform apply

Once you apply the plan and wait a few minutes, you should get output like:

Outputs:
  master_ip  = 10.0.0.50
  worker_ips = 10.0.0.51

The master_ip is the Kubernetes Master and worker_ips is a list of Kubernetes nodes.

Log in to the master and make sure all services are up and Kubernetes is functioning properly.

ssh core@10.0.0.50
cd /opt/kubernetes/server/bin
./kubectl cluster-info
./kubectl get nodes
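If something looks off, remember that everything runs as systemd units on CoreOS, so you can inspect the services directly. The exact unit names depend on the TerraKube version, so list them first:

# List the etcd/Kubernetes units, then tail the log of any unhealthy one, e.g.:
systemctl list-units --type=service | grep -iE 'etcd|kube'
journalctl -u etcd2.service --no-pager | tail -n 20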

5. Running some examples
Kubernetes comes with a lot of examples that you can try out. Note that many of them are configured to run on top of Google Container Engine (GKE) and may not run on top of OpenStack without some tweaking, but the manifests are a pretty good starting point for learning how to deploy apps on Kubernetes.

git clone https://github.com/kubernetes/kubernetes ~/kubernetes

There are plenty of example applications under the examples directory; examples/guestbook is a good place to start.
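For instance, assuming the checkout above, you can launch the guestbook from the master by pointing kubectl at its manifest directory (the path reflects the repo layout at the time of writing):

./kubectl create -f ~/kubernetes/examples/guestbook/
./kubectl get pods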

Dynamic Inventory with Ansible and Rackspace Cloud

March 4, 2014

Typically, with Ansible you create one or more hosts files, which it calls inventory files, and Ansible picks the servers from a hosts file and runs your playbooks against them. This is simple and straightforward. However, if you are using the cloud, it is very likely that your applications are creating and deleting servers based on some other logic, and maintaining a static inventory file becomes impractical. In that case, Ansible can talk directly to your cloud (AWS, Rackspace, OpenStack, etc.) or to a dynamic source (Cobbler, etc.) through what it calls dynamic inventory plugins, without you having to maintain a static list of servers.
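For reference, a static inventory file is just an INI-style list of groups and hosts, which you then pass to Ansible with -i:

# hosts: a static inventory file
[webservers]
web1.example.com
web2.example.com

[dbservers]
db1.example.com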

Here, I will go through the process of using the Rackspace Public Cloud Dynamic Inventory Plugin with Ansible.

Install Ansible
First of all, if you have not already installed Ansible, go ahead and do so. I like to install Ansible inside a virtualenv using pip.

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install python-dev python-virtualenv
virtualenv env
source env/bin/activate
pip install ansible

Install Rax Dynamic Inventory Plugin
Ansible maintains an external RAX inventory script in its repository (I am not sure why these plugins are not bundled with the Ansible package). The rax.py script depends on the pyrax module, which is the Python client binding for the Rackspace Cloud.

pip install pyrax
wget https://raw.github.com/ansible/ansible/devel/plugins/inventory/rax.py
chmod +x rax.py

The script needs a configuration file named ~/.rackspace_cloud_credentials, which stores your auth credentials for the Rackspace Cloud.

cat ~/.rackspace_cloud_credentials
[rackspace_cloud]
username = <username>
api_key = <apikey>

Run rax.py
As you can see, rax.py is a very simple script that provides a couple of options to list and show the servers in your cloud. By default, it grabs the servers in all Rackspace regions. If you are interested in only one region, you can specify it with the RAX_REGION environment variable.

./rax.py --list
RAX_REGION=DFW ./rax.py --list
RAX_REGION=DFW ./rax.py --host some-cloud-server
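The --list output is JSON in Ansible's dynamic inventory format: group names (one per region, plus one per metadata group) mapped to lists of servers, roughly like the sketch below, while --host prints the variables for a single server:

{
  "DFW": ["staging-apache1"],
  "staging-apache": ["staging-apache1"]
}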

Create Cloud Servers
Since you already have pyrax installed as a dependency of the rax.py inventory plugin, you can use the command line to create a cloud server named 'staging-apache1' and tag it as part of the staging-apache group using the metadata key-value feature.

export OS_USERNAME=<username>
export OS_PASSWORD=<apikey>
export OS_TENANT_NAME=<username>
export OS_AUTH_SYSTEM=rackspace
export OS_REGION_NAME=DFW
export OS_AUTH_URL=https://identity.api.rackspacecloud.com/v2.0/
ssh-keygen
nova keypair-add --pub-key ~/.ssh/id_rsa.pub stagingkey
nova boot --image 80fbcb55-b206-41f9-9bc2-2dd7aac6c061 --flavor 2 --meta group=staging-apache --key-name stagingkey staging-apache1

If you want to install Apache on more staging servers, you would create a server named staging-apache2 and tag it with the same group name, staging-apache.
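That is just the same boot command with the name changed:

nova boot --image 80fbcb55-b206-41f9-9bc2-2dd7aac6c061 --flavor 2 --meta group=staging-apache --key-name stagingkey staging-apache2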

Also note that we are injecting SSH keys into the servers on creation, so Ansible will be able to log in over SSH without a password. With Ansible, you also have the option of using a username and password if you prefer.

Once the server is booted, let's make sure Ansible can ping all the servers tagged with the group staging-apache.

ansible -i rax.py staging-apache -u root -m ping
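Each server in the group should answer with a pong, which looks something like:

staging-apache1 | success >> {
    "changed": false,
    "ping": "pong"
}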

Run a sample playbook
Now, let's create a very simple playbook to install Apache on the inventory.

$ cat apache.yml
- hosts: staging-apache
  tasks:
      - name: Installs apache web server
        apt: pkg=apache2 state=installed update_cache=true

Let's run the Apache playbook against all RAX servers in the DFW region that match the hosts in the group staging-apache.

RAX_REGION=DFW ansible-playbook -i rax.py apache.yml

With a static inventory, you'd be doing this instead, and manually updating the hosts file:

ansible-playbook -i hosts apache.yml

Now you can ssh into the staging-apache1 server and make sure everything is configured as per your playbook.

ssh -i ~/.ssh/id_rsa root@staging-apache1

You may add more servers to the staging-apache group, and on the next run Ansible will detect the updated inventory dynamically and run the playbooks against them.

The Rackspace Public Cloud is based on OpenStack Nova, so the nova.py inventory script should work pretty much the same way. You can look at the complete list of dynamic inventory plugins here. Adding a new inventory plugin that isn't already there, say for Razor, would be fairly simple.