C# Windows Prompt Dialog Box

One tool I think Microsoft should have shipped with the Windows Forms libraries is the old VB6 input box. No problem; here is a simple, inline way of making one with a single small class.

using System.Windows.Forms;

public static class Prompt
{
    public static string ShowDialog(string text, string caption)
    {
        Form prompt = new Form();
        prompt.Width = 500;
        prompt.Height = 150;
        prompt.Text = caption;
        prompt.StartPosition = FormStartPosition.CenterScreen;
        Label textLabel = new Label() { Left = 50, Top = 20, Width = 400, Text = text };
        TextBox textBox = new TextBox() { Left = 50, Top = 50, Width = 400 };
        Button confirmation = new Button() { Text = "Ok", Left = 350, Width = 100, Top = 70 };
        confirmation.Click += (sender, e) => { prompt.Close(); };
        prompt.AcceptButton = confirmation; // lets the Enter key confirm the dialog
        prompt.Controls.Add(confirmation);
        prompt.Controls.Add(textLabel);
        prompt.Controls.Add(textBox);
        prompt.ShowDialog();
        return textBox.Text;
    }
}
Once you have that class referenced somewhere, you can show a prompt with the following.
Prompt.ShowDialog("Please enter the name of your server.", "Enter Server Name");
See, a simple way to do a prompt in a C# Windows Forms app.
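Since ShowDialog hands back whatever was typed in the box, you will usually want to capture and check the result. Here is a minimal sketch; the server-name prompt is just an example:
string serverName = Prompt.ShowDialog("Please enter the name of your server.", "Enter Server Name");
if (string.IsNullOrWhiteSpace(serverName))
{
    // nothing was entered, so bail out or ask again
    MessageBox.Show("No server name was entered.");
}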

Wiring a LP Furnace to work with Outdoor Wood Burner / Boiler

So I installed an outdoor wood burner and in doing so needed to connect it to the gas furnace in my house.  How the outdoor burner works is it sits about fifty feet from the house and burns wood in a large firebox surrounded by a tank of water.  The fire heats the water, which is then piped into the house underground and circulated through a radiator in your furnace.  So when your house needs heat, it blows air through the radiator (heat exchanger) and pushes hot air through the house.
This is all rather simple to do, though the one piece that wasn't obvious to me was wiring the furnace.  Now, my house has zones and 3 thermostats that independently control louvers in the ducts.  I didn't want a second thermostat to override the gas operation, so instead I purchased a Honeywell aquastat like this one from Lowe's.  
It attaches directly to the hot water line going into the furnace.  From there it has 3 wires.  

Now a furnace has several terminals, or wires, coming out of the control board.  They are as follows:

R= Red wire, power from furnace control transformer for heat function.
RC= power from furnace control transformer for cooling function.
C= power return to furnace control transformer for thermostat operation. Could be either blue, black or another color different from the other terminals.
W= White wire, power from thermostat to heat function of furnace.
G= Green wire, power from thermostat to manual blower operation of furnace.
Y= Yellow wire, power from thermostat to furnace for cooling function.

What I did to get this all working was simple: I unhooked the gas wire from the control board and hooked it directly to my aquastat's R terminal, then ran the aquastat green to the gas and the aquastat W to the fan. So if the water is cold, it turns on the gas; otherwise it just runs the fan. I left the thermostat's fan control in place in case I want to run the fan manually. It seems to work great; I went the whole winter like this and saw it fail over to gas when the burner was low, and otherwise it just blew through the exchanger. After discussing this technique with my furnace guy, we decided we should turn off the breaker for the AC in case this method might kick it on. We didn't want both running at the same time, so we just left that breaker off for the winter to be safe.

Basic Hadoop User and HDFS Folder Commands

So I am new to Hadoop and trying to figure out how best to interact with it programmatically.  However, it's hard to do that without first understanding how to do simple operations on it.  

Hadoop is basically a cluster of inexpensive servers that work together to provide relatively cheap data storage and processing power.  It is essentially a distributed file system with applications on top of it for accessing the data.  Today I am looking at the file system half, called HDFS, and how to work inside of it.

Now HDFS is a file system that runs across several servers, so in its simplest form, accessing it requires an account with permissions to the file system on the Linux box.  For this example, I created a user called hdfsuser, and here are a few basic operations with that user.

1. Create directory named data
sudo -u hdfs hadoop fs -mkdir /data
This command is rather straightforward.  You use sudo to run the command as the user (-u) hdfs (the general overall HDFS superuser account), then hadoop (the basic hadoop command), fs (file system), -mkdir (the familiar Linux directory-creation command), and /data (the folder to make).
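To check that the directory really exists, you can list the root of HDFS:
sudo -u hdfs hadoop fs -ls /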

2. Allow everyone permission to the folder
sudo -u hdfs hadoop fs -chmod 777 /data
This command is the same as the last, except it uses the Linux command chmod to change the permissions on the /data folder to 777, which gives everyone access.

3. Creating new folder for new account
su hdfsuser
sudo -u hdfs hadoop fs -mkdir /user/hdfsuser
The first command switches to the new user, and the second is the same as number 1, creating a directory, except the directory is now /user/hdfsuser.  For each user you create, you should create a directory in the /user/ folder.
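This assumes hdfsuser already exists as a Linux account on the box; if it does not, create it first:
sudo useradd hdfsuser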

4. Giving user access to that folder
sudo -u hdfs hadoop fs -chown hdfsuser /user/hdfsuser
Again, this command is similar to one of the others, this time number 2.  It gives the user hdfsuser ownership of the folder /user/hdfsuser.

5. Create a place for the user to put different sorts of files
su hdfsuser
hadoop fs -mkdir /user/hdfsuser/data
hadoop fs -copyFromLocal datafile /user/hdfsuser/data
hadoop fs -cat /user/hdfsuser/data/*
So this set of commands will: switch to the user hdfsuser, create a new directory under /user/hdfsuser for a certain type of data, load a local file called datafile into that directory, and finally read it back out to check it.
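To confirm the file actually landed in HDFS, you can also list the directory:
hadoop fs -ls /user/hdfsuser/data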

So there you have it: the basic folder-management and user-setup commands to create directories and load local files.  I will jump more into Hadoop as I progress, but that's enough for today.
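As a quick bonus reference, here are a few other everyday HDFS commands you will likely want (the paths are just examples):
hadoop fs -ls /user/hdfsuser --list the contents of a directory
hadoop fs -copyToLocal /user/hdfsuser/data/datafile . --copy a file from HDFS back to the local file system
hadoop fs -rm -r /user/hdfsuser/data --remove a directory and everything in it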

Cloudera CDH4 Install on Cluster

So I talked about getting a cluster ready for the free Cloudera CDH4 install yesterday; now I am going to talk about actually installing Hadoop on the cluster.  You can see the setup article here.  Again, these are basically my install notes; if I do it again, I will take screenshots to share.

1. Set up a proxy so Yum can download (optional)
vi /etc/yum.conf --add the following line:
proxy=http://blah.blah.com:3128
export http_proxy=http://blah.blah.com:3128 --also set it as an environment variable within Linux so wget can use it
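To confirm the environment variable is set in your current shell:
echo $http_proxy --should print the proxy URL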

2. Download Cloudera CDH4
wget http://archive-primary.cloudera.com/cm4/installer/latest/cloudera-manager-installer.bin

3. Install CDH4
chmod u+x cloudera-manager-installer.bin
./cloudera-manager-installer.bin
Wait for the install to finish.

4. Log into the web portal at the link provided by the installer
username admin
password admin

5. Choose the Cloudera Standard Install (Free)
Click Continue on the installer detail screen.
Specify hosts for your CDH cluster installation; list each node on a new line with its fully qualified name.
Click Search; if there are any errors, go fix your hosts files.
Click Continue if there are no errors.

6. Choose packages
Use packages if you are just downloading initially.
Use parcels if you want your own repository with a named/saved version, so you can add single nodes later without upgrading everything.  With this option you need to pick a version, or download a copy and host it yourself to upgrade from.
Choose the versions you want, or leave the defaults.
Click Continue

7. Cluster Installation
Choose root or another user and enter your password.
Click Continue

8. Watch Install
Watch the spinning circles and pray nothing goes poorly.
Hit Continue.

9. Look at errors you have to fix

10. Inspect Role assignments
Set your name node.
Set your secondary name node.
Set at least 3 ZooKeeper servers.
Set all nodes that aren't name nodes to TaskTracker and DataNode.
The Gateway roles go on the 2 name nodes.
The JobTracker goes on the secondary name node.
The Hive Metastore goes on the name node.
Push HiveServer, Hue, Cloudera Manager, the Service Monitor, and all alerts onto the name node.
All other services you probably won't use go on the secondary name node.
Click Continue when you think it is configured correctly.

11. Database Setup
Choose your database types; the default is PostgreSQL.
Save the usernames and passwords for later.
Test the connection.
Click Continue.

12. Review Server Configurations
Check the data directories; these are the drives HDFS will use on each machine.
Check the volumes on the data nodes, name node, and secondary name node.
Ideally you don't have to change much here, just look for errors.
Click Continue.

13. Starting cluster services
It proceeds through setting up the cluster services; this takes a while and is the final step.

14. Now you should be at the Cloudera Manager Dashboard

Setting up a Hadoop Cluster for Hadoop Install

A project I have recently been a part of was setting up a POC Hadoop cluster for our organization.  We did a simple 6-node cluster, all using CentOS 6.4 and the free Cloudera CDH4.  I had installed this on a single node easily enough with the online tutorials, but felt a comprehensive list of tasks needed to prepare the cluster for install wasn't clearly laid out anywhere.  This is how we did it.  For now it's just a list of instructions; I will come back and add screenshots if I ever do it again, since this was mostly done through PuTTY with vi for editing.

1. Disable SELinux on all nodes.
cd /etc/selinux --go to the folder
vi config --edit the config file
SELINUX=disabled --disables SELinux so the ports can open up and the nodes can talk to each other
setenforce 0 --applies the change right away; the config edit takes effect on the next reboot
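You can verify it took effect with:
getenforce --should report Permissive now (or Disabled after a reboot)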

2. Generate an SSH Key - as root on the name node
ssh-keygen -t rsa --press Enter a few times, key generated
cd .ssh --go to the ssh files generated
cp id_rsa.pub authorized_keys --copy the public key into the authorized keys file
ssh-copy-id hdfsnode2 --copy the key to the node hdfsnode2 (the computer's network name)
ssh hdfsnode2 --should now allow login to that machine without a password since the key is installed
repeat on all nodes in the cluster
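If you have several nodes, a quick loop saves some typing (the node names here are just examples):
for node in hdfsnode2 hdfsnode3 hdfsnode4; do ssh-copy-id $node; done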

3. IP Table Services
service iptables status
--check the firewall status; it should show the firewall is not running (if it is, stop it with service iptables stop)
chkconfig iptables off --now the nodes will not have this service start on boot
repeat on each node
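To double-check that the firewall will stay off across reboots, list the runlevels for the service:
chkconfig --list iptables --every runlevel should show off
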
4. Make sure the hosts file on each node lists every node
vi /etc/hosts --update the host file to have a reference to every node, including itself: ip fullname alias
scp /etc/hosts hdfsnode2:/etc/ --copy the host file to each node
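For example, the entries might look like this (hypothetical IPs and host names):
192.168.1.101   hdfsnode1.mydomain.local   hdfsnode1
192.168.1.102   hdfsnode2.mydomain.local   hdfsnode2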

5. Restart all servers
ssh hdfsnode2 --log in to the other server
init 6 --reboots it remotely