Amazon Kindle Hack: Downloading Books as PDF/Images

I usually try to hack/break websites by automating page traversals.
There are different levels of security in websites:

  1. Data security, which is generally taken care of by HTTPS
  2. User authentication, for which we generally have session tokens like JSESSIONID, PHPSESSID, or ASP session IDs, or highly scalable approaches like JSON Web Tokens
  3. Authorization, which is mostly custom built, though there are some standard protocols like OAuth or SAML
  4. Code security, which is generally taken care of by URL masking and obfuscation of JavaScript/HTML elements
  5. Hidden form elements that maintain state exchanged between server and client
  6. Apart from these, other kinds of security that take care of same-origin scripts

Even after taking care of all these, websites that publish copyrighted content face a different kind of issue:

7. How to prevent users from downloading/copying/saving the copyrighted material to their machines.

My current topic relates to this issue. I assumed Google, Amazon, and other providers that publish copyrighted material online had already taken care of it.

I just wanted to build a simple test case, and found that none of them are secure.

So, authors: beware of hackers with a similar skill set.

I've used simple utilities that are well known in the automation world:

Selenium and CasperJS. Testing with these, I found it takes maybe 1-2 hours to break them.

The problem with these sites is that they are not designed to cover the 7th point from all angles.

Below is sample code that downloads books from Kindle and saves them as a PDF.

Note: I'm capturing screenshots as images, which increases the size of the PDF. I could have converted pages directly to PDF and merged them, but efficiency is not my goal here.


package com.thoughtlane.experiments.kindlehack;

import java.io.File;
import java.io.FileOutputStream;
import java.io.FilenameFilter;
import java.io.IOException;
import java.nio.charset.Charset;
import java.util.List;

import org.apache.commons.io.FileUtils;
import org.openqa.selenium.By;
import org.openqa.selenium.Dimension;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

import com.itextpdf.text.Document;
import com.itextpdf.text.DocumentException;
import com.itextpdf.text.Image;
import com.itextpdf.text.pdf.PdfWriter;

/**
 * @author ashwinrayaprolu
 *
 */
public class AmazonKindleBookDownloader {
	public static final String DEST = "results/pdf/multiple_images.pdf";

	/**
	 * @param args
	 */
	public static void main(String... args) {
		System.setProperty("webdriver.chrome.driver", new File("driver/chromedriver").getAbsolutePath());
		File resourcesFolder = new File("resources");

		WebDriver driver = new ChromeDriver();

		driver.manage().window().maximize();
		driver.manage().window().setSize(new Dimension(1279, 682));
		driver.navigate().to("http://read.amazon.com");

		waitSomeTime();

		String appTitle = driver.getTitle();
		System.out.println("Application title is :: " + appTitle);

		driver.findElement(By.id("ap_email")).sendKeys("AMAZON_USERNAME");
		driver.findElement(By.id("ap_password")).sendKeys("AMAZON_PASSWORD");
		driver.findElement(By.id("signInSubmit-input")).click();

		waitSomeTime();

		File screenshot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
		try {
			FileUtils.copyFile(screenshot, new File(resourcesFolder, "HomePage.jpg"));
		} catch (IOException e) {
			// TODO Auto-generated catch block
			e.printStackTrace();
		}

		driver.switchTo().frame("KindleLibraryIFrame");

		WebElement element = driver.findElement(By.cssSelector("span#kindle_dialog_firstRun_button.chrome_btn"));

		element.click();

		waitSomeTime();

		screenshot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
		try {
			FileUtils.copyFile(screenshot, new File(resourcesFolder, "HomePage2.jpg"));
		} catch (IOException e) {
			// TODO Auto-generated catch block
			e.printStackTrace();
		}

		List<WebElement> books = driver.findElements(By.cssSelector("img.book_image.book_click_area"));

		
		/***
		 * Iterate over all books on the dashboard/home page
		 */
		for (WebElement book : books) {

			// Trim long titles, guarding against titles shorter than 24 characters
			String bookTitle = book.getAttribute("title");
			bookTitle = bookTitle.substring(0, Math.min(24, bookTitle.length()));
			book.click();

			File bookFolder = new File(resourcesFolder, "" + bookTitle.replaceAll(" ", ""));
			bookFolder.mkdirs();

			try {
				FileUtils.writeStringToFile(new File(resourcesFolder, "HomePage.html"), driver.getPageSource(), Charset.defaultCharset());
			} catch (IOException e1) {
				// TODO Auto-generated catch block
				e1.printStackTrace();
			}

			driver.switchTo().parentFrame();
			driver.switchTo().frame("KindleReaderIFrame");

			// WebElement menuLink =
			// driver.findElement(By.cssSelector("div#kindleReader_button_goto.header_bar_icon"));

			// Actions actions = new Actions(driver);
			// actions.moveToElement(menuLink);

			// menuLink.click();
			waitSomeTime();

			// Goto Cover Page to start capturing
			// WebElement coverLink =
			// driver.findElement(By.cssSelector("div#kindleReader_goToMenuItem_goToCover"));
			// coverLink.click();

			// waitSomeTime();

			screenshot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
			try {
				FileUtils.copyFile(screenshot, new File(bookFolder, "Cover.jpg"));
			} catch (IOException e) {
				// TODO Auto-generated catch block
				e.printStackTrace();
			}

			int pageNumber = 0;
			while (true) {
				pageNumber = pageNumber + 1;
				try {
					// findElement throws NoSuchElementException once the next-page arrow
					// disappears; the catch below ends the loop
					WebElement nextArrow = driver.findElement(By.cssSelector("div#kindleReader_pageTurnAreaRight.kindleReader_pageTurnArea.pageArrow"));

					nextArrow.click();

					waitSomeTime();

					screenshot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
					try {
						FileUtils.copyFile(screenshot, new File(bookFolder, pageNumber + ".jpg"));
					} catch (IOException e) {
						// TODO Auto-generated catch block
						e.printStackTrace();
					}

				} catch (Exception e) {
					e.printStackTrace();
					break;
				}
			}

			try {
				// Write one PDF per book folder; reusing the shared DEST path for
				// every book would overwrite the previous book's PDF
				createPdfFromImages(bookFolder.getAbsolutePath(), new File(bookFolder, "book.pdf").getAbsolutePath());
			} catch (IOException | DocumentException e) {
				e.printStackTrace();
			}

		}

		waitSomeTime(9000);

		screenshot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
		try {
			FileUtils.copyFile(screenshot, new File(resourcesFolder, "SelectedBook.jpg"));
		} catch (IOException e) {
			// TODO Auto-generated catch block
			e.printStackTrace();
		}

		driver.quit();
	}

	/**
	 * @param sourceFolder
	 * @param dest
	 * @throws IOException
	 * @throws DocumentException
	 */
	private static void createPdfFromImages(String sourceFolder, String dest) throws IOException, DocumentException {
		File folder = new File(sourceFolder);

		File[] allFiles = folder.listFiles(new FilenameFilter() {

			@Override
			public boolean accept(File dir, String name) {
				if (name.endsWith("jpg")) {
					return true;
				}
				return false;
			}
		});

		Image img = null;
		Document document = new Document();
		PdfWriter.getInstance(document, new FileOutputStream(dest));
		document.open();
		// Note: File.listFiles() makes no ordering guarantee, so pages may need
		// an explicit numeric sort to come out in reading order
		for (File fileObj : allFiles) {
			img = Image.getInstance(fileObj.getAbsolutePath());
			document.setPageSize(img);
			document.newPage();
			img.setAbsolutePosition(0, 0);
			document.add(img);
		}
		document.close();
	}

	/**
	 * 
	 */
	private static void waitSomeTime() {
		try {
			Thread.sleep(7000);
		} catch (InterruptedException e) {
			// TODO Auto-generated catch block
			e.printStackTrace();
		}
	}

	/**
	 * @param milliseconds
	 */
	private static void waitSomeTime(int milliseconds) {
		try {
			Thread.sleep(milliseconds);
		} catch (InterruptedException e) {
			// TODO Auto-generated catch block
			e.printStackTrace();
		}
	}

}
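As a side note, the fixed Thread.sleep pauses above are the most fragile part of this script. Selenium's explicit waits poll until a condition holds and proceed as soon as it does. A minimal sketch of the same sign-in click using them (the 15-second timeout is my own choice):

import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// Wait up to 15 seconds for the sign-in button instead of sleeping a fixed 7 seconds
WebDriverWait wait = new WebDriverWait(driver, 15);
WebElement signInButton = wait.until(ExpectedConditions.elementToBeClickable(By.id("signInSubmit-input")));
signInButton.click();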


Below is sample (incomplete) code in CasperJS. I've also added code to handle multiple captcha attempts.


var casper = require('casper').create({
	verbose : true,
	logLevel : 'debug'
});
var system = require('system');
var mouse = require("mouse").create(casper);
var utils = require('utils');

casper.options.viewportSize = {
	width : 1366,
	height : 667
};

var last, list = [ 0, 1, 2, 3, 4, 5, 6, 7, 8 ];

var userAgent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36';
casper.userAgent(userAgent);


var fs = require('fs');
var myFolder = fs.list(".");

// Remove previously generated PDF captures from the working directory
try {
	for (var i = 0; i < myFolder.length; i++) {
		console.log(myFolder[i]);
		if (myFolder[i].indexOf("pdf") != -1) {
			fs.remove(myFolder[i]);
		}
	}
} catch (e) {
	console.log(e);
}


/*******************************************************************************
 * Iterate over all options
 */
var cliOptions = casper.cli.options;

// casper.clear();


casper.on("remote.message", function(msg) {
	console.log(msg);
});

casper.start();

var userName = "AMAZON_USERNAME";
var password = "AMAZON_PASSWORD";

casper.thenOpen('https://read.amazon.com', function() {
	this.echo("Login Page ");
});

casper.each(list, function(self, i) {
	self.wait(700, function() {
		last = i;
		this.echo('Using this.wait ' + i);
	});
});

casper.then(function() {
	// casper.exit();
	try {
		casper.evaluate(function(userName, password) {
			document.getElementById("ap_email").value = userName;
			document.getElementById("ap_password").value = password;

			// document.aspnetForm.submit();
		}, userName, password);

	} catch (e) {
		console.log(e);
	}

	this.capture("LoginPage.pdf");
	// this.click('input#signInSubmit-input');
	this.mouse.click("input#signInSubmit-input");
});

casper.each(list, function(self, i) {
	self.wait(700, function() {
		last = i;
		this.echo('Using this.wait ' + i);
	});
});

casper.then(function() {

	// var html = this.getHTML();
	var html = this.getPageContent();
	var f = fs.open('HomePage.html', 'w');
	f.write(html);
	f.close();
	// fs.write('path/to/file', 'your string', 'w');
	this.capture("PostLoginPage.pdf");
	system.stdout.writeLine('Has CaptchaCode?: ');
	var hasCaptcha = system.stdin.readLine();
	
	if (hasCaptcha === 'y') {
		
		system.stdout.writeLine('Enter captcha?: ');
		var captcha = system.stdin.readLine();
		this.echo("Using Captcha " + captcha);
		
		try {
			casper.evaluate(function(userName, password,captcha) {
				document.getElementById("ap_email").value = userName;
				document.getElementById("ap_password").value = password;
				document.getElementById("ap_captcha_guess").value = captcha;
				// document.aspnetForm.submit();
			}, userName, password,captcha);

		} catch (e) {
			console.log(e);
		}
		
		this.capture("PostSecondTry.pdf");
		
		this.echo("Clicking Submit Button");
		this.mouse.click("input#signInSubmit-input");
		
		
	}else{
		this.capture("HomePage.pdf");
	}

	
});

casper.each(list, function(self, i) {
	self.wait(700, function() {
		last = i;
		this.echo('Using this.wait ' + i);
	});
});



casper.then(function() {

	// var html = this.getHTML();
	var html = this.getPageContent();
	var f = fs.open('HomePage2.html', 'w');
	f.write(html);
	f.close();
	// fs.write('path/to/file', 'your string', 'w');
	this.capture("PostLoginPage2.pdf");
	system.stdout.writeLine('Has CaptchaCode2?: ');
	var hasCaptcha = system.stdin.readLine();
	
	if (hasCaptcha === 'y') {
		
		system.stdout.writeLine('Enter captcha2?: ');
		var captcha = system.stdin.readLine();
		this.echo("Using Captcha2 " + captcha);
		
		try {
			casper.evaluate(function(userName, password,captcha) {
				document.getElementById("ap_email").value = userName;
				document.getElementById("ap_password").value = password;
				document.getElementById("ap_captcha_guess").value = captcha;
				// document.aspnetForm.submit();
			}, userName, password,captcha);

		} catch (e) {
			console.log(e);
		}
		
		this.capture("PostSecondTry2.pdf");
		
		this.echo("Clicking Submit Button");
		this.mouse.click("input#signInSubmit-input");
		
		
	}else{
		this.capture("HomePage2.pdf");
	}

	
});

casper.each(list, function(self, i) {
	self.wait(700, function() {
		last = i;
		this.echo('Using this.wait ' + i);
	});
});




casper.then(function() {

	// var html = this.getHTML();
	var html = this.getPageContent();
	var f = fs.open('HomePage3.html', 'w');
	f.write(html);
	f.close();
	// fs.write('path/to/file', 'your string', 'w');
	this.capture("PostLoginPage3.pdf");
	system.stdout.writeLine('Has CaptchaCode3?: ');
	var hasCaptcha = system.stdin.readLine();
	
	if (hasCaptcha === 'y') {
		
		system.stdout.writeLine('Enter captcha3?: ');
		var captcha = system.stdin.readLine();
		this.echo("Using Captcha3 " + captcha);
		
		try {
			casper.evaluate(function(userName, password,captcha) {
				document.getElementById("ap_email").value = userName;
				document.getElementById("ap_password").value = password;
				document.getElementById("ap_captcha_guess").value = captcha;
				// document.aspnetForm.submit();
			}, userName, password,captcha);

		} catch (e) {
			console.log(e);
		}
		
		this.capture("PostSecondTry3.pdf");
		
		this.echo("Clicking Submit Button");
		this.mouse.click("input#signInSubmit-input");
		
		
	}else{
		this.capture("HomePage3.pdf");
	}

	
});

casper.each(list, function(self, i) {
	self.wait(700, function() {
		last = i;
		this.echo('Using this.wait ' + i);
	});
});





casper.then(function() {
	var listItems = [];
	listItems = this.evaluate(function() {
		var nodes = document.querySelectorAll('span');
		return [].map.call(nodes, function(node) {
			return node.textContent;
		});
	});

	//this.echo(listItems);

	this.capture("HomePageFinal.pdf");
	// this.mouse.click("span#kindle_dialog_firstRun_button");

});

casper.run();



Merge Overlapping Intervals

I recently came across the problem of merging intervals with overlapping lower and upper bounds. I tried multiple approaches.

The code below uses a doubly linked list approach.


/**
 * 1. For the original DataNode2 we need to make sure we merge nodes whenever we see an overlap
 *
 * @author Ashwin Rayaprolu
 *
 */
public class MergeInsertIntervals {

	/**
	 * @param args
	 */
	public static void main(String[] args) {
		int[][] originalIntervals = { { 94230, 94299 }, { 94289, 94699 }, { 94200, 94240 }, { 94133, 94133 } };
		// Sort our array based on lower bound number
		DataNode2LinkedList2 linkedList = new DataNode2LinkedList2();
		// Insert each interval into sorted position (O(n) worst case per insert)
		for(int[] data:originalIntervals){
			linkedList.insert(new DataNode2(data));
		}

		System.out.println("---------Sorted?merged Intervals----------");
		int[][] sortedIntervals = linkedList.traverse();

		System.out.println("---------Merged Intervals----------");

	}

}

/**
 * @author Ashwin Rayaprolu
 *
 */
class DataNode2LinkedList2 {
	DataNode2 firstNode;
	DataNode2 lastNode;

	int size = 0;
	// O(n) worst-case operation: linear scan to the insertion point
	void insert(DataNode2 newNode) {
		if (firstNode == null) {
			firstNode = newNode;
			lastNode = newNode;
			return;
		}

		// Keep interval on left if lower bound is < than tempPointer
		DataNode2 tempPointer = firstNode;

		if (newNode.data[0] < tempPointer.data[0]) {
			while (tempPointer.leftPointer != null && newNode.data[0] < tempPointer.data[0]) {
				tempPointer = tempPointer.leftPointer;
			}

			//If new node is overlapping then merge with current node and return
			if(newNode.data[1]>=tempPointer.data[0]){
				//tempPointer.data[1]=
				tempPointer.data[0] = newNode.data[0];
				return;
			}

			newNode.rightPointer = tempPointer;

			if (tempPointer.leftPointer == null) {
				firstNode = newNode;
			}

			tempPointer.leftPointer = newNode;
			++size;

		} else {
			while (tempPointer.rightPointer != null && newNode.data[0] >= tempPointer.data[0]) {
				tempPointer = tempPointer.rightPointer;
			}

			//If new node is overlapping then merge with current node and return
			if(tempPointer.data[1]>=newNode.data[0]){
				//tempPointer.data[1]=
				tempPointer.data[1] = newNode.data[1];
				return;
			}

			newNode.leftPointer = tempPointer;

			if (tempPointer.rightPointer == null) {
				lastNode = newNode;
			}

			tempPointer.rightPointer = newNode;
			++size;

		}

	}

	int[][] traverse() {
		DataNode2 tempPointer = firstNode;
		int[][] sortedArray = new int[size + 1][2];
		int index = 0;
		while (tempPointer != null) {
			sortedArray[index] = tempPointer.data;
			++index;
			System.out.println("{" + tempPointer.data[0] + "," + tempPointer.data[1] + "}");
			tempPointer = tempPointer.rightPointer;
		}
		return sortedArray;
	}
}

/**
 * Data Node used for sorting
 *
 * @author Ashwin Rayaprolu
 *
 */
class DataNode2 {
	int[] data = {};
	DataNode2 leftPointer;
	DataNode2 rightPointer;

	public DataNode2(int[] data) {
		this.data = data;
	}

}

The code below uses a binary tree approach.



/**
 * 1. For the original MergeBinaryInsertDataNode we need to make sure we merge nodes whenever we see an overlap
 * 
 * @author Ashwin Rayaprolu
 *
 */
public class MergeBinaryInsertInterval {

	/**
	 * @param args
	 */
	public static void main(String[] args) {
		int[][] originalIntervals = { { 94230, 94299 }, { 94289, 94699 }, { 94200, 94240 }, { 94133, 94133 } };
		// Sort our array based on lower bound number
		MergeBinaryInsertIntervalDataNode linkedList = new MergeBinaryInsertIntervalDataNode();
		//O(n log n) operation
		for(int[] data:originalIntervals){
			linkedList.insert(linkedList.rootNode,data);
		}
		
		
		System.out.println("---------Sorted?merged Intervals----------");
		linkedList.traverse(linkedList.rootNode);

		System.out.println("---------Merged Intervals----------");

		
	}

}

/**
 * @author Ashwin Rayaprolu
 *
 */
class MergeBinaryInsertIntervalDataNode {
	MergeBinaryInsertDataNode rootNode;
	

	int size = 0;
	
	
	// Inserting or pushing data to binary tree
		public void insert(MergeBinaryInsertDataNode currentNode, int[] newData) {
			MergeBinaryInsertDataNode tempNode = new MergeBinaryInsertDataNode(newData);
			// If first Node then make it root node
			if (rootNode == null) {
				rootNode = tempNode;
				return;
			}
			

			// If new node data >= root node data move to right
			if (currentNode.data[0] <= tempNode.data[0]) {
				
				//If new node is overlapping then merge with current node and return
				if(currentNode.data[1]>=tempNode.data[0]){
					//tempPointer.data[1]=
					currentNode.data[1] = tempNode.data[1];
					return;
				}
				
				
				if (currentNode.rightPointer == null) {
					currentNode.rightPointer= tempNode;
				} else {
					insert(currentNode.rightPointer, newData);
				}
			} else {
				//If new node is overlapping then merge with current node and return
				if(tempNode.data[1]>=currentNode.data[0]){
					//tempPointer.data[1]=
					currentNode.data[0] = tempNode.data[0];
					return;
				}
				
				if (currentNode.leftPointer == null) {
					currentNode.leftPointer = tempNode;
				} else {
					insert(currentNode.leftPointer, newData);
				}
			}

		}
		
		
		/**
		 * @param currentNode
		 */
		public void traverse(MergeBinaryInsertDataNode currentNode) {
			if (currentNode == null) {
				return;
			}

			traverse(currentNode.leftPointer);
			System.out.println("{"+currentNode.data[0]+","+currentNode.data[1]+"}");
			traverse(currentNode.rightPointer);

		}
	
	
	
	

}

/**
 * Data Node used for sorting
 * 
 * @author Ashwin Rayaprolu
 *
 */
class MergeBinaryInsertDataNode {
	int[] data = {};
	MergeBinaryInsertDataNode leftPointer;
	MergeBinaryInsertDataNode rightPointer;

	public MergeBinaryInsertDataNode(int[] data) {
		this.data = data;
	}

}
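For comparison, the textbook approach to this problem sorts the intervals by lower bound and then merges adjacent overlaps in a single pass. A minimal reference sketch using the same sample data as above:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class MergeIntervalsSorted {

	public static void main(String[] args) {
		int[][] intervals = { { 94230, 94299 }, { 94289, 94699 }, { 94200, 94240 }, { 94133, 94133 } };

		// Sort by lower bound: O(n log n)
		Arrays.sort(intervals, new Comparator<int[]>() {
			@Override
			public int compare(int[] a, int[] b) {
				return Integer.compare(a[0], b[0]);
			}
		});

		// Single pass: either extend the last merged interval or start a new one
		List<int[]> merged = new ArrayList<int[]>();
		for (int[] interval : intervals) {
			int[] last = merged.isEmpty() ? null : merged.get(merged.size() - 1);
			if (last != null && interval[0] <= last[1]) {
				last[1] = Math.max(last[1], interval[1]);
			} else {
				merged.add(interval);
			}
		}

		for (int[] interval : merged) {
			System.out.println("{" + interval[0] + "," + interval[1] + "}");
		}
	}
}

With the sample data this prints {94133,94133} followed by {94200,94699}.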

Linux Box as Router

In continuation of my previous posts, where I'm creating a distributed cloud infrastructure, I need to connect VMs on multiple host machines.

Prior knowledge of routing (iptables) and networking is required for the material below. Technically, this is what container engines like Docker or software routers do internally when they need to connect 2 different networks.

[Figure: distributed host routing diagram]

Let's assume:

HostMachine1 has VMs on network 10.0.0.1/24
HostMachine2 has VMs on network 192.168.0.1/24

Our Gateway1 and Gateway2 each have 2 network interfaces/NIC cards.

Gateway1 and Gateway2 are connected by a switch, hence on the same network, and each is also on its respective VM network through its second NIC.

Let's assume Gateway1 has IP 10.0.0.2
Let's assume Gateway2 has IP 192.168.0.2

Both Gateway1 and Gateway2 can connect to each other, as they are directly connected.

My current configuration on Gateway1, which is our target router, has the network interfaces below:


enp0s9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:9d:08:3f brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.169/24 brd 10.0.0.255 scope global enp0s9
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe9d:83f/64 scope link
       valid_lft forever preferred_lft forever
 enp0s10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:7b:12:89 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.120/16 brd 192.168.255.255 scope global enp0s10
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe7b:1289/64 scope link 

Now we need to add routing table configuration on either Gateway1 or Gateway2 to forward packets from one network to another.

We can do this in 2 ways:

either by creating a network bridge,


# Install bridge-utils, which gives us easy commands to create a bridge network
apt-get install bridge-utils
or
yum install bridge-utils

# Create new Bridge
brctl addbr br0

# Enable Spanning Tree Support if you need
brctl stp br0 on

# Bring your devices down before creating the bridge and explicitly assign 0.0.0.0 to make sure they lose their IPs
ifconfig enp0s9 0.0.0.0 down
ifconfig enp0s10 0.0.0.0 down

# Add them to our newly created bridge network
brctl addif br0 enp0s9
brctl addif br0 enp0s10

# Finally get all interfaces up.
ifconfig enp0s9 up
ifconfig enp0s10 up
ifconfig br0 up

or

by modifying the routing table. I'm explaining the second approach here.

Enable forwarding in the kernel:


echo 1 > /proc/sys/net/ipv4/ip_forward

To set this value on boot, uncomment this line in /etc/sysctl.conf:

#net.ipv4.ip_forward=1

Now I need to route traffic from one interface to another using iptables.
Below are the statements that do that:


# Always accept loopback traffic
iptables -A INPUT -i lo -j ACCEPT

# We allow traffic from the HostMachine1 side
iptables -A INPUT -i enp0s9  -j ACCEPT

# We allow traffic from the HostMachine2 side
iptables -A INPUT -i enp0s10  -j ACCEPT

######################################################################
#
#                         ROUTING
#
######################################################################

# enp0s9 is HostMachine1 Network
# enp0s10 is HostMachine2 Network

# Allow established connections
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Masquerade traffic leaving toward the HostMachine2 network
iptables -t nat -A POSTROUTING -o enp0s10 -j MASQUERADE

# Forwarding: allow replies coming back in from the HostMachine2 side
iptables -A FORWARD -i enp0s10 -o enp0s9 -m state --state RELATED,ESTABLISHED -j ACCEPT

# Allow outgoing connections from the HostMachine1 side
iptables -A FORWARD -i enp0s9 -o enp0s10 -j ACCEPT


# Repeat the same steps on the reverse route
iptables -t nat -A POSTROUTING -o enp0s9 -j MASQUERADE

iptables -A FORWARD -i enp0s9 -o enp0s10 -m state --state RELATED,ESTABLISHED -j ACCEPT

iptables -A FORWARD -i enp0s10 -o enp0s9 -j ACCEPT

Finally, on HostMachine1, route all traffic for the 192.168.1.1/24 subnet to Gateway1.

I use a Mac as one of my host machines, hence the command below:

sudo route -n add -net 192.168.1.1/24 10.0.0.193

If you are using a Linux system, the equivalent command would be (fill in your egress interface after dev):

ip route add 192.168.1.1/24 via 10.0.0.169 dev <interface>

Linux Networking Fundamentals Part 2

This is in continuation of the previous article. I'm going to start from scratch.

I'm going to build:

  1. Two datacenters named DC1 & DC2, by creating 2 different Vagrant VM networks
  2. Two racks per datacenter, say DC1-RC1, DC1-RC2 and DC2-RC1, DC2-RC2
  3. Each rack is connected by a gateway
  4. Each datacenter is connected by a router
  5. Finally, OpenVPN to connect both datacenters

[Figure: distributed system architecture diagram]

All the hardware node and device cooking is mostly done via shell scripts, Ruby, and Vagrant code.

I'm assuming whoever is interested in following along understands the basics of networking, Ruby, shell scripting, and Vagrant and Docker environments.

Before moving ahead, I need a simple utility to generate an IP address range for a given CIDR.

I wrote basic Ruby code that generates it.

# Generate IPs in a given range
# IpList = Nodemanager.convertIPrange('192.168.1.2', '192.168.1.20')

module Nodemanager

  # Generates a range of IPs from first to last. Assumes IPv4 addresses only.
  def convertIPrange first, last
    first, last = [first, last].map{|s| s.split(".").inject(0){|i, s| i = 256 * i + s.to_i}}
    (first..last).map do |q|
      a = []
      (q, r = q.divmod(256)) && a.unshift(r) until q.zero?
      a.join(".")
    end
  end

end

Now I need to load all dependencies via my Berksfile. A Berksfile is like a dependency manager for Chef (a provisioning tool).

It can be compared with Maven/Gradle (Java), NuGet (.NET), Composer (PHP), Bundler (Ruby), or Grunt/Gulp (NodeJS).

name             'basedatacenter'
maintainer       'Ashwin Rayaprolu'
maintainer_email 'ashwin.rayaprolu@gmail.com'
license          'All rights reserved'
description      'Installs/Configures Distributed Workplace'
long_description 'Installs/Configures Distributed Workplace'
version          '1.0.0'


depends 'apt', '~> 2.9'
depends 'firewall', '~> 2.4'
depends 'apache2', '~> 3.2.2'
depends 'mysql', '~> 8.0'  
depends 'mysql2_chef_gem', '~> 1.0'
depends 'database', '~> 5.1'  
depends 'java', '~> 1.42.0'
depends 'users', '~> 3.0.0'
depends 'tarball'


Before moving ahead, I want to list my base environment. I have 2 host machines: one on CentOS 7 and the other on CentOS 6.


[ashwin@localhost distributed-workplace]$ uname -r
3.10.0-327.22.2.el7.x86_64
[ashwin@localhost distributed-workplace]$ vboxmanage --version
5.1.2r108956
[ashwin@localhost distributed-workplace]$ berks --version
4.3.5
[ashwin@localhost distributed-workplace]$ vagrant --version
Vagrant 1.8.5
[ashwin@localhost distributed-workplace]$ ruby --version
ruby 2.3.1p112 (2016-04-26 revision 54768) [x86_64-linux]
[ashwin@localhost distributed-workplace]$ vagrant plugin list
vagrant-berkshelf (5.0.0)
vagrant-hostmanager (1.8.5)
vagrant-omnibus (1.5.0)
vagrant-share (1.1.5, system)

Now let me write a basic Vagrantfile to start my VMs.

# -*- mode: ruby -*-
# vi: set ft=ruby :

require './modules/Nodemanager.rb'

include Nodemanager

@IPAddressNodeHash = Hash.new {|h,k| h[k] = Array.new }
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = '2'

Vagrant.require_version '&gt;= 1.5.0'

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

  # Create Share for us to Share some files
  config.vm.synced_folder &quot;share/&quot;, &quot;/usr/devenv/share/&quot;, disabled: false
  # Disable Default Vagrant Share
  config.vm.synced_folder &quot;.&quot;, &quot;/vagrant&quot;, disabled: true

  # Setup resource requirements
  config.vm.provider &quot;virtualbox&quot; do |v|
    v.memory = 2048
    v.cpus = 2
  end

  # vagrant plugin install vagrant-hostmanager
  config.hostmanager.enabled = false
  config.hostmanager.manage_host = false
  config.hostmanager.manage_guest = true
  config.hostmanager.ignore_private_ip = false
  config.hostmanager.include_offline = true

  # NOTE: You will need to install the vagrant-omnibus plugin:
  #
  #   $ vagrant plugin install vagrant-omnibus
  #
  if Vagrant.has_plugin?(&quot;vagrant-omnibus&quot;)
    config.omnibus.chef_version = '12.13.37'
  end

  config.vm.box = 'bento/ubuntu-16.04'
  config.vm.network :private_network, type: 'dhcp'
  config.berkshelf.enabled = true

  # Assumes that the Vagrantfile is in the root of our
  # Chef repository.
  root_dir = File.dirname(File.expand_path(__FILE__))

  # Assumes that the node definitions are in the nodes
  # subfolder
  nodetypes = Dir[File.join(root_dir,'nodes','*.json')]

  ipindex = 0
  # Iterate over each of the JSON files
  nodetypes.each do |file|
    puts &quot;parsing #{file}&quot;
        node_json = JSON.parse(File.read(file))

        # Only process the node if it has a vagrant section
        if(node_json[&quot;vagrant&quot;])
          @IPAddressNodeHash[node_json[&quot;vagrant&quot;][&quot;name&quot;]] = Nodemanager.convertIPrange(node_json[&quot;vagrant&quot;][&quot;start_ip&quot;], node_json[&quot;vagrant&quot;][&quot;end_ip&quot;])

          1.upto(node_json[&quot;NumberOfNodes&quot;]) do |nodeIndex| 

            ipindex = ipindex + 1

            # Allow us to remove certain items from the run_list if we're
            # using vagrant. Useful for things like networking configuration
            # which may not apply.
            if exclusions = node_json[&quot;vagrant&quot;][&quot;exclusions&quot;]
              exclusions.each do |exclusion|
                if node_json[&quot;run_list&quot;].delete(exclusion)
                  puts &quot;removed #{exclusion} from the run list&quot;
                end
              end
            end

            vagrant_name = node_json[&quot;vagrant&quot;][&quot;name&quot;] + &quot;-#{nodeIndex}&quot;
            is_public = node_json[&quot;vagrant&quot;][&quot;is_public&quot;]
            #vagrant_ip = node_json[&quot;vagrant&quot;][&quot;ip&quot;]
            vagrant_ip = @IPAddressNodeHash[node_json[&quot;vagrant&quot;][&quot;name&quot;]][nodeIndex-1]
            config.vm.define vagrant_name, autostart: true  do |vagrant|

              vagrant.vm.hostname = vagrant_name
              puts  &quot;Working with host #{vagrant_name} with IP : #{vagrant_ip}&quot; 

              # Only use private networking if we specified an
              # IP. Otherwise fallback to DHCP
              # IP/28 is CIDR
              if vagrant_ip
                vagrant.vm.network :private_network, ip: vagrant_ip,  :netmask =&gt; &quot;255.255.255.240&quot;
              end

              if is_public
                config.vm.network &quot;public_network&quot;, type: &quot;dhcp&quot;, bridge: &quot;em1&quot;
              end

              # hostmanager provisioner
              config.vm.provision :hostmanager

              vagrant.vm.provision :chef_solo do |chef|
                chef.data_bags_path = &quot;data_bags&quot;
                chef.json = node_json
              end        

            end  # End of VM Config

          end # End of node interation on count
        end  #End of vagrant found
      end # End of each node type file

end

Finally, run vagrant up. Sample output is attached below. I'm creating 2 VMs for the 2 racks and 1 VM for the gateway, so there are now 3 VMs up and running: 2 VMs represent our 2 virtual racks, and the third is a gateway. Notice that all of them run on a private IP network that is inaccessible from the external world, except for our gateway node, which has 2 ethernet devices: one connecting the private network and the other connecting the host network. I've marked the specific lines that define the kind of network that gets created.


# Only use private networking if we specified an
              # IP. Otherwise fallback to DHCP
              # IP/28 is CIDR
              if vagrant_ip
                vagrant.vm.network :private_network, ip: vagrant_ip,  :netmask => "255.255.255.240"
              end

              if is_public
                config.vm.network "public_network", type: "dhcp", bridge: "em1"
              end

Sample output on vagrant up:

[Screenshot: VagrantUpOutput.jpg]

I define node configurations in JSON files to keep things simple. Attached are sample node-type JSONs for both the Gateway node and the Rack node.
Below is the definition for a Rack. I tried to add as many comments as possible to explain each field.

If you observe the node definitions below, I've given the node name prefix in the config file, along with the from and to range for the IPs. Apart from that, I define the recipes that Chef needs to load for this specific node type.


{
  "NumberOfNodes":2,
  "environment":"production",
  "authorization": {
    "sudo": {
      // the deploy user specifically gets sudo rights
      // if you're using vagrant it's worth adding "vagrant"
      // to this array
      // The password for the deploy user is set in data_bags/users/deploy.json
      // and should be generated using:
      // openssl passwd -1 "plaintextpassword"
      "users": ["deploy", "vagrant"]
    }
  },
  // See http://www.talkingquickly.co.uk/2014/08/auto-generate-vagrant-machines-from-chef-node-definitions/ for more on this
  "vagrant" : {
    "exclusions" : [],
    "name" : "dc1-rc",
    "ip" : "192.168.1.2",
    "start_ip":"192.168.1.2",
    "end_ip":"192.168.1.3"
  },
  "mysql": {
      "server_root_password": "rootpass",
      "server_debian_password": "debpass",
      "server_repl_password": "replpass"
  },
  "data_bags_path":"data_bags",
  "run_list":
  [
    "recipe[basedatacenter::platform]",
    "recipe[basedatacenter::users]",
    "recipe[basedatacenter::docker]"
   
  ]
}

Below is the node definition for the Gateway.


{
  "NumberOfNodes":1,
  "environment":"production",
  "authorization": {
    "sudo": {
      // the deploy user specifically gets sudo rights
      // if you're using vagrant it's worth adding "vagrant"
      // to this array
      // The password for the deploy user is set in data_bags/users/deploy.json
      // and should be generated using:
      // openssl passwd -1 "plaintextpassword"
      "users": ["deploy", "vagrant"]
    }
  },
  // See http://www.talkingquickly.co.uk/2014/08/auto-generate-vagrant-machines-from-chef-node-definitions/ for more on this
  "vagrant" : {
    "exclusions" : [],
    "name" : "dc1-gw",
    "ip" : "192.168.1.5",
    "start_ip":"192.168.1.4",
    "end_ip":"192.168.1.4",
    "is_public":true
  },
  "mysql": {
      "server_root_password": "rootpass",
      "server_debian_password": "debpass",
      "server_repl_password": "replpass"
  },
  "data_bags_path":"data_bags",
  "run_list":
  [
    "recipe[basedatacenter::platform]"
  ]
}

Before moving on to the next step, I need to install 5 nodes on each rack, which is taken care of by Docker. Docker is a containerization tool that mimics VMs but is very lightweight. We are using Docker containers to mimic real-world nodes.


apt-get install -y curl &&
apt-get install  -y  apt-transport-https ca-certificates &&
apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D &&
touch /etc/apt/sources.list.d/docker.list &&
echo "deb https://apt.dockerproject.org/repo ubuntu-xenial main" >> /etc/apt/sources.list.d/docker.list  &&
apt-get update &&
apt-get purge lxc-docker &&
apt-get install -y linux-image-extra-$(uname -r) linux-image-extra-virtual &&
apt-get update &&
apt-get install -y docker-engine &&
curl -L https://github.com/docker/machine/releases/download/v0.7.0/docker-machine-`uname -s`-`uname -m` > /usr/local/bin/docker-machine && 
chmod +x /usr/local/bin/docker-machine &&
curl -L https://github.com/docker/compose/releases/download/1.8.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose &&
chmod +x /usr/local/bin/docker-compose &&
sudo usermod -aG docker docker

Once Docker is set up on all racks, we need to install all the nodes.
My next step is to set up containers on each rack so that we can replicate multiple-datacenter and multiple-rack scenarios.

I'm going to create 5 containers on each rack, each using Ubuntu Xenial as the base OS, and install the Oracle JDK 7 on all of them.

My use case for distributed architecture is based on an HDFS/Cassandra setup, hence I need to install Java first. An install.sh script run by Vagrant/Chef installs Docker on each rack; the base version of the Dockerfile below defines the node image.


FROM ubuntu:16.04
MAINTAINER Ashwin Rayaprolu

RUN apt-get update
RUN apt-get dist-upgrade -y

RUN DEBIAN_FRONTEND=noninteractive apt-get -y dist-upgrade
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install python-software-properties
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install software-properties-common
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install byobu curl git htop man unzip vim wget

# Install Java.
RUN \
  echo oracle-java7-installer shared/accepted-oracle-license-v1-1 select true | debconf-set-selections && \
  add-apt-repository -y ppa:webupd8team/java && \
  apt-get update && \
  apt-get install -y oracle-java7-installer && \
  rm -rf /var/lib/apt/lists/* && \
  rm -rf /var/cache/oracle-jdk7-installer
  
  
# Install InetUtils for Ping/traceroute/ifconfig
RUN apt-get update
# For Ifconfig and other commands
RUN apt-get install -y net-tools
# For ping command
RUN apt-get install -y iputils-ping 
# For Traceroute
RUN apt-get install -y inetutils-traceroute



# Define working directory.
WORKDIR /data

# Define commonly used JAVA_HOME variable
ENV JAVA_HOME /usr/lib/jvm/java-7-oracle

# Define default command.
CMD ["bash"]


 

Docker has a very elegant way of creating network’s. As our
Rack Network is on 192.168.1.*
We want
Node Network on 10.18.1.2/28

We have multiple options to create network in docker. I would like to go with bridge networking. Will discuss on those specific topic later. For now assuming we are using bridge network below is code to create network and attach to some container

We need to make sure we have different range of network on each rack and each datacenter so that we don’t overlap IP’s between different rack’s and datacenter’s

# Below command will create a network in our desired range (dc1-rack1)
# 10.18.1.0  to 10.18.1.15
docker network create -d bridge \
  --subnet=10.18.0.0/16 \
  --gateway=10.18.1.1 \
  --ip-range=10.18.1.2/28 \
  my-multihost-network 

# Below command will create a network in our desired range (dc1-rack2)
# From 10.18.1.16  to 10.18.1.31
docker network create -d bridge \
  --subnet=10.18.0.0/16 \
  --gateway=10.18.1.1 \
  --ip-range=10.18.1.19/28 \
  my-multihost-network    

# Below command will create a network in our desired range (dc2-rack1)
# 10.18.1.32  to 10.18.1.47
docker network create -d bridge \
  --subnet=10.18.0.0/16 \
  --gateway=10.18.1.1 \
  --ip-range=10.18.1.36/28 \
  my-multihost-network    

# Below command will create a network in our desired range (dc2-rack2)
# 10.18.1.48  to 10.18.1.63
docker network create -d bridge \
  --subnet=10.18.0.0/16 \
  --gateway=10.18.1.1 \
  --ip-range=10.18.1.55/28 \
  my-multihost-network      
  
# -i keeps STDIN open, -t allocates a pseudo-TTY, -d runs the container in the background
docker run -itd multinode_node1

# Connect the newly created network to the container by name
docker network connect my-multihost-network docker_node_name

I'll write code to automate all the above tasks in subsequent articles. I'm going to use docker-compose to build the individual nodes in each rack.

Very basic code would look like this:

version: '2'
services:
  node1:
    build: node1/
  node2:
    build: node2/
  node3:
    build: node3/

You can check out the first version of the code from:

https://github.com/ashwinrayaprolu1984/distributed-workplace.git

Simple Cassandra Template Integrated with ObjectPool

Below is a Cassandra template integrated with the ObjectPool created in the previous post.

This has been tested with 30 parallel sessions opened to Cassandra.

We create one Cluster object per application, but we can create as many Sessions as the system and configuration permit.

Note: The best way to create a Session is without binding it to any keyspace, so that the same session/connections can be reused across multiple threads.

/**
 *
 */
package com.linkwithweb.products.daolayer.cassandra;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.core.env.Environment;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.HostDistance;
import com.datastax.driver.core.PoolingOptions;
import com.datastax.driver.core.QueryOptions;
import com.datastax.driver.core.Session;
import com.linkwithweb.products.daolayer.ObjectPool;

/**
 * @author ashwinrayaprolu
 *
 */
@Configuration
@PropertySource(value = { "classpath:cassandra.properties" })
public class CassandraTemplate extends ObjectPool<Session> {
	private static final Log LOG = LogFactory.getLog(CassandraTemplate.class);

	@Autowired
	private Environment env;

	public CassandraTemplate() {
		this(4, 7, 5, 40);
	}

	/**
	 * @param minIdle
	 * @param maxIdle
	 * @param validationInterval
	 * @param maxConnections
	 */
	public CassandraTemplate(int minIdle, int maxIdle, long validationInterval, int maxConnections) {
		super(minIdle, maxIdle, validationInterval, maxConnections);
	}

	/**
	 * @return
	 */
	@Bean
	public Cluster cassandraCluster() {
		Cluster cluster = null;
		try {
			PoolingOptions poolingOptions = new PoolingOptions();

			poolingOptions.setCoreConnectionsPerHost(HostDistance.LOCAL, 4).setMaxConnectionsPerHost(HostDistance.LOCAL, 10)
					.setCoreConnectionsPerHost(HostDistance.REMOTE, 2).setMaxConnectionsPerHost(HostDistance.REMOTE, 4)
					.setHeartbeatIntervalSeconds(60);

			cluster = Cluster.builder()
					.addContactPoint(env.getProperty("cassandra.contactpoints"))
					.withQueryOptions(new QueryOptions().setFetchSize(2000))
					.withPoolingOptions(poolingOptions).build();
		} catch (Exception e) {
			e.printStackTrace();
		}

		return cluster;
	}

	/**
	 * @return
	 * @throws Exception
	 */
	public Session cassandraSession() throws Exception {
		Session session = cassandraCluster().connect();
		return session;
	}

	@Override
	protected Session create() {
		try {
			return cassandraSession();
		} catch (Exception e) {
			e.printStackTrace();
		}
		return null;
	}

	@Override
	public void close(Session object) {
		object.close();
	}
}
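A minimal usage sketch, assuming the template bean is retrieved from a Spring ApplicationContext (the context variable and the query here are illustrative only):

CassandraTemplate template = context.getBean(CassandraTemplate.class);
Session session = template.borrowObject();
if (session != null) {
	try {
		// Fully qualify the table since the session is not bound to a keyspace
		session.execute("SELECT release_version FROM system.local");
	} finally {
		template.returnObject(session);
	}
}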


Generic Object Pool in Java

I recently had to create a generic object pool with minimal code. I didn't want to add overhead by pulling in third-party jars; moreover, I wanted code that can be applied to the creation of any kind of object. Below is an implementation of a bounded object pool.


package com.linkwithweb.products.daolayer;

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * @author ashwinrayaprolu
 *
 * @param <T>
 */
public abstract class ObjectPool<T> {
	private Queue<T> pool;

	/**
	 * Stores the number of objects that are currently in use
	 */
	private final AtomicInteger usageCount = new AtomicInteger(0);

	// Maximum number of connections that can be open. Defaulted to 20
	private int maxConnections = 20;

	private ScheduledExecutorService executorService;

	/**
	 * Creates the pool.
	 *
	 * @param minIdle minimum number of objects residing in the pool
	 */
	public ObjectPool(final int minIdle, final int maxConnections) {
		this.maxConnections = maxConnections;
		initialize(minIdle);
	}

	/**
	 * Creates the pool.
	 *
	 * @param minIdle minimum number of objects residing in the pool
	 * @param maxIdle maximum number of objects residing in the pool
	 * @param validationInterval time in seconds for periodical checking of the
	 *            minIdle / maxIdle conditions in a separate thread.
	 *            When the number of objects is less than minIdle, missing
	 *            instances will be created.
	 *            When the number of objects is greater than maxIdle, surplus
	 *            instances will be removed.
	 */
	public ObjectPool(final int minIdle, final int maxIdle, final long validationInterval, final int maxConnections) {
		this.maxConnections = maxConnections;
		initialize(minIdle);

		// check pool conditions in a separate thread
		executorService = Executors.newSingleThreadScheduledExecutor();
		executorService.scheduleWithFixedDelay(new Runnable() {
			@Override
			public void run() {
				int size = pool.size();
				if (size < minIdle) {
					// Don't grow the pool once every connection is in use
					if (usageCount.get() >= maxConnections) {
						return;
					}
					int sizeToBeAdded = minIdle - size;
					for (int i = 0; i < sizeToBeAdded; i++) {
						System.out.println("Background Thread Creating Objects");
						pool.add(create());
					}
				} else if (size > maxIdle) {
					int sizeToBeRemoved = size - maxIdle;
					for (int i = 0; i < sizeToBeRemoved; i++) {
						System.out.println("Background Thread dumping Objects");
						pool.poll();
					}
				}
			}
		}, validationInterval, validationInterval, TimeUnit.SECONDS);
	}

	/**
	 * Gets the next free object from the pool. If the pool doesn't contain any
	 * objects, a new object will be created and given to the caller.
	 *
	 * @return T borrowed object, or null if the pool is exhausted
	 */
	public T borrowObject() {
		T object;

		// Refuse to hand out more than maxConnections objects
		if (usageCount.get() >= maxConnections) {
			return null;
		}

		if ((object = pool.poll()) == null) {
			object = create();
		}
		usageCount.incrementAndGet();

		return object;
	}

	/**
	 * Returns an object back to the pool.
	 *
	 * @param object object to be returned
	 */
	public void returnObject(T object) {
		if (object == null) {
			return;
		}
		this.pool.offer(object);
		usageCount.decrementAndGet();
	}

	/**
	 * Shutdown this pool.
	 */
	public void shutdown() {
		if (executorService != null) {
			executorService.shutdown();
		}
	}

	/**
	 * Creates a new object.
	 *
	 * @return T new object
	 */
	protected abstract T create();

	protected abstract void close(T object);

	private void initialize(final int minIdle) {
		pool = new ConcurrentLinkedQueue<T>();

		for (int i = 0; i < minIdle; i++) {
			pool.add(create());
		}
	}
}
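A minimal usage sketch: subclass ObjectPool with concrete create()/close() implementations, then borrow and return around each unit of work (StringBuilder here is just a stand-in for an expensive resource such as a connection):

ObjectPool<StringBuilder> pool = new ObjectPool<StringBuilder>(2, 5, 10, 20) {
	@Override
	protected StringBuilder create() {
		return new StringBuilder();
	}

	@Override
	protected void close(StringBuilder object) {
		// nothing to release for a StringBuilder
	}
};

StringBuilder buffer = pool.borrowObject();
if (buffer != null) {
	try {
		buffer.append("hello");
	} finally {
		pool.returnObject(buffer);
	}
}
pool.shutdown();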


II_INSTALLATION must be set before the configuration utility is run

Rebooting a server that is having problems can lead to a corrupt Ingres installation.

This leads to all kinds of problems. When starting the Ingres Visual Manager, you get the error: "II_INSTALLATION must be set before the configuration utility is run".

To fix this problem, you need to restore the symbols.tbl file located in the ingres\files directory. If you want to restore the settings manually, you need to know the original values; you can set them back with the ingsetenv.exe utility.

If you restore the file, make sure Ingres is down, and after the restore run the following commands as they appear in your Install.log:

"C:\Program Files\CA\Ingres [II]\ingres\bin\ingsetenv.exe" II_LANGUAGE ENGLISH
"C:\Program Files\CA\Ingres [II]\ingres\bin\ingsetenv.exe" II_TIMEZONE_NAME NA-EASTERN
"C:\Program Files\CA\Ingres [II]\ingres\bin\ingsetenv.exe" TERM_INGRES IBMPCD
"C:\Program Files\CA\Ingres [II]\ingres\bin\ingsetenv.exe" II_INSTALLATION II
"C:\Program Files\CA\Ingres [II]\ingres\bin\ingsetenv.exe" II_CHARSETII WIN1252
"C:\Program Files\CA\Ingres [II]\ingres\bin\ingunset.exe" II_CHARSET
"C:\Program Files\CA\Ingres [II]\ingres\bin\ingsetenv.exe" II_DATE_FORMAT US
"C:\Program Files\CA\Ingres [II]\ingres\bin\ingsetenv.exe" II_MONEY_FORMAT L:$
"C:\Program Files\CA\Ingres [II]\ingres\bin\ingsetenv.exe" II_DECIMAL .
"C:\Program Files\CA\Ingres [II]\ingres\bin\ingsetenv.exe" II_TEMPORARY "C:\Program Files\CA\Ingres [II]\ingres\temp"
"C:\Program Files\CA\Ingres [II]\ingres\bin\ingsetenv.exe" II_CONFIG "C:\Program Files\CA\Ingres [II]\ingres\files"
"C:\Program Files\CA\Ingres [II]\ingres\bin\ingsetenv.exe" II_GCNII_LCL_VNODE "<YOUR_COMPUTER_NAME>"

If you do not know the exact settings, you can take a look at the install.log; the settings of the Ingres environment are also mentioned there.