Linux Networking Fundamentals Part 2

This article continues from the previous one, but I'm going to start from scratch.

I'm going to build:

  1. Two datacenters, named DC1 and DC2, by creating two different Vagrant VM networks
  2. Two racks per datacenter: DC1-RC1, DC1-RC2 and DC2-RC1, DC2-RC2
  3. Each rack connected through a gateway
  4. Each datacenter connected through a router
  5. Finally, OpenVPN to connect the two datacenters

[Figure: distributed system architecture (distributedsystemarch1)]

All the node and device provisioning is done via shell scripts, Ruby, and Vagrant configuration.

I assume anyone interested in following along already understands the basics of networking, Ruby, shell scripting, Vagrant, and Docker.

Before moving ahead, I need a simple utility to generate the IP address range for a given CIDR, so I wrote a small Ruby module that does exactly that.

# Generate IPs in a given range
# ip_list = Nodemanager.convertIPrange('192.168.1.2', '192.168.1.20')

module Nodemanager

  # Generates the range of IPs from first to last.
  # Assumes IPv4 addresses only.
  def self.convertIPrange(first, last)
    first, last = [first, last].map { |s| s.split(".").inject(0) { |acc, octet| 256 * acc + octet.to_i } }
    (first..last).map do |q|
      a = []
      (q, r = q.divmod(256)) && a.unshift(r) until q.zero?
      a.join(".")
    end
  end

end
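A quick way to sanity-check the helper. The module is inlined here so the snippet runs standalone; note that defining the method with `self.` is what makes the `Nodemanager.convertIPrange(...)` call style used later in the Vagrantfile work.

```ruby
# Same helper as above, inlined so this snippet runs on its own.
module Nodemanager
  def self.convertIPrange(first, last)
    # Convert dotted quads to integers, enumerate, convert back.
    first, last = [first, last].map { |s| s.split(".").inject(0) { |acc, octet| 256 * acc + octet.to_i } }
    (first..last).map do |q|
      a = []
      (q, r = q.divmod(256)) && a.unshift(r) until q.zero?
      a.join(".")
    end
  end
end

ips = Nodemanager.convertIPrange('192.168.1.2', '192.168.1.5')
puts ips.inspect  # ["192.168.1.2", "192.168.1.3", "192.168.1.4", "192.168.1.5"]
```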

Now I need to declare all dependencies in my Berksfile. A Berksfile is the dependency manifest for Chef (a provisioning tool).

It can be compared with Maven/Gradle (Java), NuGet (.NET), Composer (PHP), Bundler (Ruby), or npm (Node.js).

name             'basedatacenter'
maintainer       'Ashwin Rayaprolu'
maintainer_email 'ashwin.rayaprolu@gmail.com'
license          'All rights reserved'
description      'Installs/Configures Distributed Workplace'
long_description 'Installs/Configures Distributed Workplace'
version          '1.0.0'


depends 'apt', '~> 2.9'
depends 'firewall', '~> 2.4'
depends 'apache2', '~> 3.2.2'
depends 'mysql', '~> 8.0'  
depends 'mysql2_chef_gem', '~> 1.0'
depends 'database', '~> 5.1'  
depends 'java', '~> 1.42.0'
depends 'users', '~> 3.0.0'
depends 'tarball'


Before moving ahead, I want to list my base environment. I have two host machines, one on CentOS 7 and the other on CentOS 6.


[ashwin@localhost distributed-workplace]$ uname -r
3.10.0-327.22.2.el7.x86_64
[ashwin@localhost distributed-workplace]$ vboxmanage --version
5.1.2r108956
[ashwin@localhost distributed-workplace]$ berks --version
4.3.5
[ashwin@localhost distributed-workplace]$ vagrant --version
Vagrant 1.8.5
[ashwin@localhost distributed-workplace]$ ruby --version
ruby 2.3.1p112 (2016-04-26 revision 54768) [x86_64-linux]
[ashwin@localhost distributed-workplace]$ vagrant plugin list
vagrant-berkshelf (5.0.0)
vagrant-hostmanager (1.8.5)
vagrant-omnibus (1.5.0)
vagrant-share (1.1.5, system)

Now let me write a basic Vagrantfile to start my VMs.

# -*- mode: ruby -*-
# vi: set ft=ruby :

require_relative 'modules/Nodemanager'

include Nodemanager

@IPAddressNodeHash = Hash.new {|h,k| h[k] = Array.new }
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = '2'

Vagrant.require_version '>= 1.5.0'

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

  # Create a shared folder for exchanging files with the guests
  config.vm.synced_folder "share/", "/usr/devenv/share/", disabled: false
  # Disable the default Vagrant share
  config.vm.synced_folder ".", "/vagrant", disabled: true

  # Setup resource requirements
  config.vm.provider "virtualbox" do |v|
    v.memory = 2048
    v.cpus = 2
  end

  # vagrant plugin install vagrant-hostmanager
  config.hostmanager.enabled = false
  config.hostmanager.manage_host = false
  config.hostmanager.manage_guest = true
  config.hostmanager.ignore_private_ip = false
  config.hostmanager.include_offline = true

  # NOTE: You will need to install the vagrant-omnibus plugin:
  #
  #   $ vagrant plugin install vagrant-omnibus
  #
  if Vagrant.has_plugin?("vagrant-omnibus")
    config.omnibus.chef_version = '12.13.37'
  end

  config.vm.box = 'bento/ubuntu-16.04'
  config.vm.network :private_network, type: 'dhcp'
  config.berkshelf.enabled = true

  # Assumes that the Vagrantfile is in the root of our
  # Chef repository.
  root_dir = File.dirname(File.expand_path(__FILE__))

  # Assumes that the node definitions are in the nodes
  # subfolder
  nodetypes = Dir[File.join(root_dir,'nodes','*.json')]

  ipindex = 0
  # Iterate over each of the JSON node definition files
  nodetypes.each do |file|
    puts "parsing #{file}"
    node_json = JSON.parse(File.read(file))

    # Only process the node if it has a vagrant section
    if node_json["vagrant"]
      @IPAddressNodeHash[node_json["vagrant"]["name"]] =
        Nodemanager.convertIPrange(node_json["vagrant"]["start_ip"], node_json["vagrant"]["end_ip"])

      1.upto(node_json["NumberOfNodes"]) do |nodeIndex|
        ipindex += 1

        # Allow us to remove certain items from the run_list if we're
        # using vagrant. Useful for things like networking configuration
        # which may not apply.
        if exclusions = node_json["vagrant"]["exclusions"]
          exclusions.each do |exclusion|
            if node_json["run_list"].delete(exclusion)
              puts "removed #{exclusion} from the run list"
            end
          end
        end

        vagrant_name = node_json["vagrant"]["name"] + "-#{nodeIndex}"
        is_public = node_json["vagrant"]["is_public"]
        vagrant_ip = @IPAddressNodeHash[node_json["vagrant"]["name"]][nodeIndex - 1]

        config.vm.define vagrant_name, autostart: true do |vagrant|
          vagrant.vm.hostname = vagrant_name
          puts "Working with host #{vagrant_name} with IP: #{vagrant_ip}"

          # Only use private networking if we specified an IP;
          # otherwise fall back to DHCP.
          # The 255.255.255.240 netmask is /28 in CIDR notation.
          if vagrant_ip
            vagrant.vm.network :private_network, ip: vagrant_ip, netmask: "255.255.255.240"
          end

          if is_public
            config.vm.network "public_network", type: "dhcp", bridge: "em1"
          end

          # hostmanager provisioner
          config.vm.provision :hostmanager

          vagrant.vm.provision :chef_solo do |chef|
            chef.data_bags_path = "data_bags"
            chef.json = node_json
          end
        end # end of VM definition
      end # end of per-node iteration
    end # end of vagrant section check
  end # end of each node type file

end

Finally, run vagrant up; sample output is attached below. I'm creating two VMs for the two racks and one VM for the gateway, so three VMs end up running. Notice that all of them sit on a private network that is unreachable from the outside world except through the gateway node, which has two Ethernet devices: one connected to the private network and the other to the host network. I've marked the specific lines that define the kind of network that gets created.


# Only use private networking if we specified an
# IP; otherwise fall back to DHCP.
# The 255.255.255.240 netmask is /28 in CIDR notation.
if vagrant_ip
  vagrant.vm.network :private_network, ip: vagrant_ip, netmask: "255.255.255.240"
end

if is_public
  config.vm.network "public_network", type: "dhcp", bridge: "em1"
end
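As an aside, the netmask 255.255.255.240 used above is the dotted-quad form of /28, which leaves 4 host bits and therefore 16 addresses per block. Ruby's standard ipaddr library can confirm the correspondence:

```ruby
require 'ipaddr'

# /28 leaves 32 - 28 = 4 host bits, i.e. 2**4 = 16 addresses per subnet.
net = IPAddr.new("192.168.1.0/28")
puts net.to_range.to_a.size  # 16 addresses in the block

# 255.255.255.240 in binary is 28 one-bits followed by 4 zero-bits.
puts IPAddr.new("255.255.255.240").to_i.to_s(2).count("1")  # 28
```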

Sample output from vagrant up:

[Screenshot: VagrantUpOutput.jpg]

I define node configuration in JSON files to keep things simple. Attached are sample node-type definitions for both the rack node and the gateway node; first is the definition for a rack, with comments explaining each field.

If you look at the node definition below, the config file carries the node-name prefix as well as the from/to IP range, and it also defines which Chef recipes need to be loaded for this specific node type.


{
  "NumberOfNodes":2,
  "environment":"production",
  "authorization": {
    "sudo": {
      // the deploy user specifically gets sudo rights
      // if you're using vagrant it's worth adding "vagrant"
      // to this array
      // The password for the deploy user is set in data_bags/users/deploy.json
      // and should be generated using:
      // openssl passwd -1 "plaintextpassword"
      "users": ["deploy", "vagrant"]
    }
  },
  // See http://www.talkingquickly.co.uk/2014/08/auto-generate-vagrant-machines-from-chef-node-definitions/ for more on this
  "vagrant" : {
    "exclusions" : [],
    "name" : "dc1-rc",
    "ip" : "192.168.1.2",
    "start_ip":"192.168.1.2",
    "end_ip":"192.168.1.3"
  },
  "mysql": {
      "server_root_password": "rootpass",
      "server_debian_password": "debpass",
      "server_repl_password": "replpass"
  },
  "data_bags_path":"data_bags",
  "run_list":
  [
    "recipe[basedatacenter::platform]",
    "recipe[basedatacenter::users]",
    "recipe[basedatacenter::docker]"
   
  ]
}
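To make the wiring concrete, here is a small standalone Ruby sketch of what the Vagrantfile does with such a definition. The inline JSON is a trimmed-down rack definition; note that JSON.parse rejects // comments, so the real files must omit them. The inline IP list stands in for Nodemanager.convertIPrange.

```ruby
require 'json'

# Trimmed-down rack definition (comments stripped: JSON.parse
# does not accept the // comments shown in the article).
node_json = JSON.parse(<<~JSON)
  {
    "NumberOfNodes": 2,
    "vagrant": {
      "name": "dc1-rc",
      "start_ip": "192.168.1.2",
      "end_ip": "192.168.1.3"
    }
  }
JSON

# Stand-in for Nodemanager.convertIPrange(start_ip, end_ip).
ips = (2..3).map { |last_octet| "192.168.1.#{last_octet}" }

# Mirror the Vagrantfile loop: derive one VM name + IP per node.
nodes = (1..node_json["NumberOfNodes"]).map do |i|
  { name: "#{node_json["vagrant"]["name"]}-#{i}", ip: ips[i - 1] }
end

puts nodes.inspect
```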

Below is the node definition for the gateway.


{
  "NumberOfNodes":1,
  "environment":"production",
  "authorization": {
    "sudo": {
      // the deploy user specifically gets sudo rights
      // if you're using vagrant it's worth adding "vagrant"
      // to this array
      // The password for the deploy user is set in data_bags/users/deploy.json
      // and should be generated using:
      // openssl passwd -1 "plaintextpassword"
      "users": ["deploy", "vagrant"]
    }
  },
  // See http://www.talkingquickly.co.uk/2014/08/auto-generate-vagrant-machines-from-chef-node-definitions/ for more on this
  "vagrant" : {
    "exclusions" : [],
    "name" : "dc1-gw",
    "ip" : "192.168.1.5",
    "start_ip":"192.168.1.4",
    "end_ip":"192.168.1.4",
    "is_public":true
  },
  "mysql": {
      "server_root_password": "rootpass",
      "server_debian_password": "debpass",
      "server_repl_password": "replpass"
  },
  "data_bags_path":"data_bags",
  "run_list":
  [
    "recipe[basedatacenter::platform]"
  ]
}

 

Before moving on to the next step, I need to install five nodes on each rack, which is taken care of by Docker. Docker is a containerization tool that mimics VMs while staying very lightweight, and we use Docker containers to mimic real-world nodes. The shell script below (install.sh) installs Docker on each rack.


apt-get install -y curl &&
apt-get install  -y  apt-transport-https ca-certificates &&
apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D &&
touch /etc/apt/sources.list.d/docker.list &&
echo "deb https://apt.dockerproject.org/repo ubuntu-xenial main" >> /etc/apt/sources.list.d/docker.list  &&
apt-get update &&
apt-get purge lxc-docker &&
apt-get install -y linux-image-extra-$(uname -r) linux-image-extra-virtual &&
apt-get update &&
apt-get install -y docker-engine &&
curl -L https://github.com/docker/machine/releases/download/v0.7.0/docker-machine-`uname -s`-`uname -m` > /usr/local/bin/docker-machine && 
chmod +x /usr/local/bin/docker-machine &&
curl -L https://github.com/docker/compose/releases/download/1.8.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose &&
chmod +x /usr/local/bin/docker-compose &&
sudo usermod -aG docker docker

Once Docker is set up on all racks (the install.sh script above is run by Vagrant/Chef on each rack), we need to create the nodes. My next step is to set up containers on each rack so that we can replicate the multiple-datacenter, multiple-rack scenario.

I'm going to create five containers per rack, each using Ubuntu Xenial as the base OS, and install the Oracle JDK 7 on all of them, since my distributed-architecture use case is built around HDFS and Cassandra and therefore needs Java first. Below is the base version of the Dockerfile I use for the node image.


FROM ubuntu:16.04
MAINTAINER Ashwin Rayaprolu

RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get -y dist-upgrade

RUN DEBIAN_FRONTEND=noninteractive apt-get -y install \
    python-software-properties \
    software-properties-common \
    byobu curl git htop man unzip vim wget

# Install Java.
RUN \
  echo oracle-java7-installer shared/accepted-oracle-license-v1-1 select true | debconf-set-selections && \
  add-apt-repository -y ppa:webupd8team/java && \
  apt-get update && \
  apt-get install -y oracle-java7-installer && \
  rm -rf /var/lib/apt/lists/* && \
  rm -rf /var/cache/oracle-jdk7-installer
  
  
# Install networking utilities:
# net-tools for ifconfig, iputils-ping for ping,
# inetutils-traceroute for traceroute
RUN apt-get update && apt-get install -y \
    net-tools \
    iputils-ping \
    inetutils-traceroute



# Define working directory.
WORKDIR /data

# Define commonly used JAVA_HOME variable
ENV JAVA_HOME /usr/lib/jvm/java-7-oracle

# Define default command.
CMD ["bash"]


 

Docker has a very elegant way of creating networks. Our rack network is on 192.168.1.0/24, and we want the node network carved out of 10.18.0.0/16 in /28 blocks (for example 10.18.1.0/28 on the first rack).

Docker supports multiple network drivers; I'll go with bridge networking here and discuss the other options in a later article. Assuming a bridge network, below are the commands to create a network and attach it to a container.

We need to make sure each rack in each datacenter gets a different IP range so that the ranges never overlap.
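To double-check that the four rack ranges below really are adjacent, non-overlapping /28 blocks, here is a small Ruby sketch using the stdlib ipaddr (the rack labels are just illustrative):

```ruby
require 'socket'
require 'ipaddr'

# Each /28 holds 16 addresses; carve consecutive /28 blocks for the
# racks out of the 10.18.1.0 region of the 10.18.0.0/16 subnet.
racks = %w[dc1-rack1 dc1-rack2 dc2-rack1 dc2-rack2]
blocks = racks.each_with_index.map do |rack, i|
  base  = IPAddr.new("10.18.1.0").to_i + i * 16  # 16 addresses per /28
  first = IPAddr.new(base, Socket::AF_INET)
  last  = IPAddr.new(base + 15, Socket::AF_INET)
  [rack, "#{first}/28", "#{first} - #{last}"]
end

blocks.each { |rack, cidr, range| puts "#{rack}: #{cidr} (#{range})" }
# dc1-rack1: 10.18.1.0/28 (10.18.1.0 - 10.18.1.15)
# dc1-rack2: 10.18.1.16/28 (10.18.1.16 - 10.18.1.31)
# dc2-rack1: 10.18.1.32/28 (10.18.1.32 - 10.18.1.47)
# dc2-rack2: 10.18.1.48/28 (10.18.1.48 - 10.18.1.63)
```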

# Create a network in our desired range on dc1-rack1
# (10.18.1.0 to 10.18.1.15)
docker network create -d bridge \
  --subnet=10.18.0.0/16 \
  --gateway=10.18.1.1 \
  --ip-range=10.18.1.0/28 \
  my-multihost-network

# On dc1-rack2 (10.18.1.16 to 10.18.1.31)
docker network create -d bridge \
  --subnet=10.18.0.0/16 \
  --gateway=10.18.1.1 \
  --ip-range=10.18.1.16/28 \
  my-multihost-network

# On dc2-rack1 (10.18.1.32 to 10.18.1.47)
docker network create -d bridge \
  --subnet=10.18.0.0/16 \
  --gateway=10.18.1.1 \
  --ip-range=10.18.1.32/28 \
  my-multihost-network

# On dc2-rack2 (10.18.1.48 to 10.18.1.63)
docker network create -d bridge \
  --subnet=10.18.0.0/16 \
  --gateway=10.18.1.1 \
  --ip-range=10.18.1.48/28 \
  my-multihost-network
  
# -i keeps STDIN open, -t allocates a pseudo-TTY, -d runs in the background
docker run -itd multinode_node1

# Connect each node's container to the newly created network.
docker network connect my-multihost-network docker_node_name

 

I'll write code to automate all of the above tasks in subsequent articles. I'm going to use docker-compose to build the individual nodes in each rack.

A very basic compose file would look like this:

version: '2'
services:
  node1:
    build: node1/
  node2:
    build: node2/
  node3:
    build: node3/

You can check out the first version of the code from

https://github.com/ashwinrayaprolu1984/distributed-workplace.git

 
