My Raspberry Pi Cluster

After reading all these articles about Raspberry Pi clusters, I wanted to add my two cents and put one together for myself.  First off, I was not going to spend the amount of money that some of these projects did, so I settled on six R Pis for my cluster.  I also wanted mine to be a little more portable than the ones I was looking at, so I could take it places without worrying that it would fall apart.  Here is the list of parts I used to build it.

  • 6   Model B Raspberry Pis
  • 1   10-port powered USB hub
  • 1   5V USB-powered fan
  • 1   project case
  • 6   micro USB cables
  • 1   8-port D-Link Gigabit switch
  • 1   3-plug electrical extension cord
  • 6   Cat 5e cables
  • 6   video card heat sinks
  • 6   4 GB micro SD cards with SD card adapters
  • 12  motherboard standoffs

Now I didn’t go out and buy everything here.  Some of the stuff I found in the chest where I store computer parts from other projects and broken computers.  I spent about $320 on everything except the switch, Cat 5e cables, and micro USB cables.  Here is what my R Pi cluster ended up looking like.

[Photos of the finished cluster]

OK, it is not the prettiest thing out there, but I can hook up one Cat 5e cable and plug in one power cord to use the computer.  It has a power switch on the USB hub, which is something I wanted, and I played with several ideas before I came to this setup.  The fan is obviously there to cool the computers inside the box, and I attached the video card heat sinks to help with cooling.  The Pis all run at around 40 degrees Celsius, and the fan has a speed control to tweak it for your cooling needs.

So I imaged Raspbian onto one of the SD cards for the OS that I would be running on the cluster.  I installed updates to the OS and then installed Ansible, which I downloaded from the Ansible website and installed using make.  Once that was completed I created an image of the card using ImageWriter, since I did all of my work on a Windows 7 machine.

Now that I had an image, I wrote it to one of the node SD cards and configured it.  To configure it, I created a hosts file that had all of the cluster IP addresses in it.  I also added a virtual NIC to the setup so I could still SSH into each node in the cluster.  To add the virtual NIC, go to /etc/network and use nano, vi, or any other text editor you want to modify the interfaces file, then add an alias entry for the Ethernet interface.
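As a sketch, an alias entry in /etc/network/interfaces can look like the following (the interface name eth0 and all of the addresses here are made up for illustration; keep your own static cluster address and adjust the alias to match your network):

```
auto eth0
iface eth0 inet static
    address 192.168.100.121
    netmask 255.255.255.0

auto eth0:0
iface eth0:0 inet static
    address 10.0.0.121
    netmask 255.255.255.0
```

The eth0:0 alias gives each Pi a second address on a separate subnet, which is what lets you keep an SSH path to every node even while the cluster addresses are in use.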

Then I created the Ansible hosts.pi file.  This file simply defines every node's role in the cluster.  It should have the controller and the compute nodes separated by bracketed group names, and look like the following entry.

[headnode]
192.168.100.120
[Computenodes]
192.168.100.121
192.168.100.122
etc…

Once this was configured I made another image of the compute node card and imaged the rest of the SD cards.  Once I had that finished I went back through each one and set the IP address and host name for each compute node, and then booted up the Raspberry Pis.  Next you have to set up a passwordless SSH connection so the nodes can talk to the controller node.
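I did my imaging with ImageWriter on Windows 7, but on a Linux box the same capture-and-clone steps can be sketched with dd (the device and file names here are hypothetical; check with lsblk first, since writing to the wrong device will wipe it):

```shell
# Capture an image of the configured node card (assumes the card shows up as /dev/sdb)
sudo dd if=/dev/sdb of=node.img bs=4M

# Write that image back out to each of the remaining cards
sudo dd if=node.img of=/dev/sdb bs=4M
sync
```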

The first thing is to create the account you want to use for the Ansible connection on each node.  I created an account named connect.  Then run  ssh-keygen -t rsa  to generate the key on each node.  You will get output like the following:

Generating public/private rsa key pair.
Enter file in which to save the key (/home/a/.ssh/id_rsa): 
Created directory '/home/a/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/a/.ssh/id_rsa.
Your public key has been saved in /home/a/.ssh/id_rsa.pub.
The key fingerprint is:
3e:4f:05:79:3a:9f:96:7c:3b:ad:e9:58:37:bc:37:e4

Or you may get a text-based picture, depending on the version you have installed.  Then, on the controller node, enter the following commands to copy the controller's key out to each node.

ssh connect@(hostname) mkdir -p .ssh

It will ask for the account password.  Then enter:

cat .ssh/id_rsa.pub | ssh connect@(hostname) 'cat >> .ssh/authorized_keys'

It will ask for the password again.  Finally, enter  ssh connect@(hostname) hostname  to test the connection.  If you are having trouble, I used the following site to set up my passwordless SSH connection.

http://www.linuxproblem.org/art_9.html
  

Now that SSH is set up, run the command  ansible all -m ping .  If everything worked, every node should come back with a successful pong response.
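The output of  ansible all -m ping  looks roughly like this; the exact formatting varies between Ansible versions, and the addresses shown are the ones from the hosts.pi example:

```
192.168.100.120 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
192.168.100.121 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```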

I wrote several scripts for cluster administration as well, such as cluster shutdown, reboot, and update.  With a functioning cluster you can install OpenMPI, SLURM, or other message-passing software for the full multi-node supercomputer.  The NVIDIA cluster used SLURM and the Beowulf cluster used MPI, so the choice is up to you.  I will write another post once I have everything finished and running as I would like it to.  I hope this helps get you started.  I have had fun getting this working and into a form that can be easily transported, and I am sure I will refine my design and make it better.  Let me know what you think, or if you have problems I will help you as best I can.  Here are the links to the NVIDIA and Beowulf cluster websites.
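The admin scripts themselves are not shown here, but shutdown, reboot, and update jobs like the ones mentioned can be sketched as Ansible ad-hoc commands.  The group names come from the hosts.pi example, and the exact privilege-escalation flags depend on your Ansible version, so treat these as illustrations rather than my actual scripts:

```shell
# Update every node in the cluster (Raspbian uses apt-get)
ansible all -i hosts.pi -u connect --sudo -m shell -a "apt-get update && apt-get -y upgrade"

# Reboot just the compute nodes
ansible Computenodes -i hosts.pi -u connect --sudo -m shell -a "reboot"

# Shut the whole cluster down
ansible all -i hosts.pi -u connect --sudo -m shell -a "shutdown -h now"
```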

http://blogs.nvidia.com/blog/2013/07/19/secret-recipe-for-raspberry-pi-server-cluster-unleashed/

http://coen.boisestate.edu/ece/raspberry-pi/

Good Luck


Portable DHCP and DNS with Dual Server

Recently I needed a quick DHCP server for some testing and didn’t want to build a full DHCP server.  So I went to the Internet and found a handy tool that is both a DHCP and a DNS server in one.  It is called Dual Server, and it did everything I needed and more.  Here is a list of features:

  • Either DHCP or DNS or Both Services can be used.
  • DHCP hosts automatically added to DNS, If both services used
  • DHCP Supports 125 ranges, all options, range specific options
  • DNS Supports Zone Transfer and Zone Replication
  • DHCP Supports BOOTP Relay Agents, PXE Boot, BOOTP
  • Dynamically Detects Listening Interfaces, can listen on 125 interfaces
  • HTTP Interface for Lease Status
  • Filtering of Ranges by Mac Range, Vendor Class and User Class
  • Very easy configuration, no Zone files required
  • Allows Replicated operations for DHCP and DNS
  • Very Low Memory and CPU use
  • Can be installed and used by person not having DNS/DHCP Concepts
  • Designed to run as Replicated Load sharing Duplex Operation

I extracted the files and ran it from a flash drive, which makes it portable and handy for a quick DHCP server on the fly.  It has a pretty straightforward configuration using an INI file, and an HTML page to monitor the leases that have been handed out.  It can be run as a service or from a command terminal.  Here are a few pics of the interface.

[Screenshots: the web interface and the command terminal]

I am sure you can use this on a more permanent basis, but I only used it for a short while, so I don’t know how well it performs for a full network of devices.  However, it performed well for my use and I added it to my tool bag for future use.  To get it up and running, just run the EXE file, which will extract the files.  Then open the DualServer.ini file and put in your range and the machine IP address to listen on.  There are a ton of other options that can be configured, such as the domain you’re on, replication to other DNS servers, and the level of logging you want.  These are just a few things you can do.  The INI file is loaded with options, and I am sure it can be configured to fit your needs.
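As a sketch, the minimal DualServer.ini edits are a listening address and a DHCP range.  The section and key names below follow the style of the sample INI that ships with Dual Server, but check the comments in your copy for the exact spellings your version uses; the addresses are hypothetical:

```
[LISTEN_ON]
192.168.1.10

[RANGE_SET]
DHCPRange=192.168.1.100-192.168.1.200
SubnetMask=255.255.255.0
Router=192.168.1.1
```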

So if you need a quick DHCP server, or you don’t have a server OS and need a simple-to-use option, look into Dual Server.  It can be downloaded from http://sourceforge.net/projects/dhcp-dns-server/ and the Dual Server website is at http://dhcp-dns-server.sourceforge.net/

 


Custom Linux Live CD using Tiny Core Linux

 

Recently I needed to develop an easy way to get a PXE image pushed out to 400 or more clients for temporary operations.  I already had one built, but it tended to be too large and would take a while to boot all the machines.  Tiny Core and its small file size really make a big difference: my original image was 250 MB, and with Tiny Core it was whittled down to about 12 MB.

Modifying and building a custom live CD is rather easy.  First you should start with a computer that you can install Core Plus onto.  This version is about 64 MB and has everything you need to remaster from the get-go.  Plus, using a hard drive lets you move the files to an area that will not be cleared as soon as you reboot the machine.

I booted up Core Plus and installed it to the hard drive; that way I didn’t have to keep reloading, and I could save the changes I made.  Once it is installed you can use the ezremaster app.  This is pretty easy to use and allows you to add packages in and, of course, take them out.

From there you can change the location you want the files extracted to, and the core.tgz file you want to use to build the image.  Next you get to pick the packages you want; I only selected the ones that were already installed and then removed the wireless and ezremaster apps, since I didn’t need them for the PXE image I planned to use.  Then I opened the extracted file directory and began adding the Linux binaries I needed to the bin directory.  After that was completed I wrote a few scripts that would auto-launch once the system fully loaded.  This distro loads into the tc user by default, so to make things auto-launch you need to add the scripts to the .X.d directory.  You will have to do a little testing; I had to use the sudo command in several locations for it to work correctly.  Once that was in place, my script launched automatically at the login screen.
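Auto-launch works because anything placed in the tc user's .X.d directory is run when the X session starts.  A minimal sketch of such a script follows; the file name, terminal emulator, and target script are all hypothetical:

```shell
# /home/tc/.X.d/autolaunch.sh  (hypothetical name; any script in .X.d runs at X startup)
# Open a terminal and run the custom task script; sudo was needed in several places
aterm -e sudo /home/tc/mytask.sh &
```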

Now that all of my editing was complete, I opened the ezremaster app back up and created an ISO of the extracted files.  Then I opened the extracted files and created a directory in /mnt for my ISO to be auto-mounted.  I copied the ISO into the mnt directory and edited my auto-launch script to mount the ISO to /mnt/ISO.  This way I could remove the CD once it booted up and use it somewhere else if needed.  Then I ran ezremaster once again to create the ISO I would use to test.
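The mount step in the auto-launch script can be sketched like this (the ISO file name is hypothetical; /mnt/ISO matches the directory described above):

```shell
# Create the mount point and loop-mount the ISO that was copied into the image
sudo mkdir -p /mnt/ISO
sudo mount -o loop /mnt/mydisk.iso /mnt/ISO
```

Loop-mounting the copied ISO is what frees up the physical CD once the machine has booted.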

I burned a disk and booted it up, and it worked great.  I would load the disk, start the control panel, launch the mount tool, and launch the terminal server.  Then I would unmount the CD and configure the terminal server, pointing it to /mnt/ISO.  Then of course I would finish the IP scheme and the rest of the terminal server configuration.  Once all of this was done, I could eject the CD and begin testing the PXE boot on other machines.  It worked and auto-launched right into the script.  I did find one problem though: once my script was done running, the windows would exit, and I didn’t want that to happen.  It turned out my script had an exit statement that needed to be removed, so I had to repeat the entire process to finally get my fully functional disk.

We used the disk the very next day on some 430 workstations and laptops without any problems.  Plus you still have the ability to run the disk by itself to accomplish the same task on individual machines if needed.

This had all come about because the previous disk I had made was around 250 MB and had problems with VLANs.  This disk fixed both issues: it shoots across the network really fast and boots even faster, and the VLANs were never a problem because you can use IP space that is on that VLAN.  So if you find a task that needs to be accomplished with a PXE boot, you may want to look into Core Linux to build a small, fully configurable image to accomplish your task.