My Raspberry Pi Cluster


After reading all these articles about Raspberry Pi clusters, I wanted to add my two cents and put one together for myself.  First off, I was not going to spend the kind of money some of these projects cost, so I settled on six Raspberry Pis for my cluster.  I also wanted mine a little more portable than the ones I had seen, so I could take it places without worrying that it would fall apart.  Here is the list of parts I used to build it.

  • 6   Model B Raspberry Pis
  • 1   10-port powered USB hub
  • 1   5 V USB-powered fan
  • 1   project case
  • 6   micro USB cables
  • 1   8-port D-Link Gigabit switch
  • 1   3-plug electrical extension cord
  • 6   Cat 5e cables
  • 6   video card heat sinks
  • 6   4 GB micro SD cards with SD card adapters
  • 12  motherboard standoffs

Now I didn’t go out and buy everything here.  Some of it came from the chest where I store computer parts from other projects and broken machines.  I spent about $320 on everything except the switch, the Cat 5e cables, and the micro USB cables.  Here is what my Raspberry Pi cluster ended up looking like.



OK, it is not the prettiest thing out there, but I can hook up one Cat 5e cable and plug in one power cord to use the computer.  It has a power switch on the USB hub, which is something I wanted, and I played with several ideas before I settled on this setup.  The fan cools the computers inside the box, and I attached the video card heat sinks to the Pis to help with cooling.  They all run at around 40 degrees Celsius, and the fan has a speed control so you can tweak it for your cooling needs.

So I imaged Raspbian onto one of the SD cards as the OS I will be running on the cluster.  I installed updates to the OS and then installed Ansible, which I downloaded from the Ansible website and built with make.  Once that was completed, I created an image of the card using ImageWriter, because I did all of my work on a Windows 7 machine.

Now that I had an image, I wrote it to one of the node SD cards and configured it.  To configure it, I created a hosts file that had all of the cluster IP addresses in it.  I also added a virtual NIC to the setup so I could still SSH into each node in the cluster.  To add the virtual NIC, edit the interfaces file under /etc/network with nano, vi, or any other text editor you want, and add an entry like the following.

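A minimal sketch of such an interfaces entry is below.  The addresses and interface names here are placeholders; substitute your own cluster subnet.

```
# /etc/network/interfaces -- example entry (addresses are made up)
auto eth0
iface eth0 inet static
    address 192.168.1.101
    netmask 255.255.255.0
    gateway 192.168.1.1

# Virtual NIC (alias interface) so the node stays reachable over SSH
auto eth0:0
iface eth0:0 inet static
    address 192.168.1.201
    netmask 255.255.255.0
```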

Then I created the Ansible hosts.pi file.  This file simply defines every node's role in the cluster.  It should have the controller and the slave nodes separated under bracketed group names, like the following entry.
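A sketch of what that inventory file might look like; the hostnames here are placeholders for your own node names.

```
# hosts.pi -- example Ansible inventory (hostnames are placeholders)
[controller]
pi01

[slaves]
pi02
pi03
pi04
pi05
pi06
```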


Once this was configured, I made another image of the compute node and imaged the rest of the SD cards.  Then I went back through each one and set the IP address and the hostname for each compute node, and booted up the Raspberry Pis.  Next you have to set up a passwordless SSH connection for the nodes to talk to the controller node.

The first thing is to create the account you want to use for the Ansible connection on each node.  I created an account named connect.  Then run  ssh-keygen -t rsa  to generate the key on each node.  You will get output like the following:

Generating public/private rsa key pair.
Enter file in which to save the key (/home/a/.ssh/id_rsa): 
Created directory '/home/a/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/a/.ssh/id_rsa.
Your public key has been saved in /home/a/.ssh/id_rsa.pub.
The key fingerprint is:
3e:4f:05:79:3a:9f:96:7c:3b:ad:e9:58:37:bc:37:e4

You may instead get a text-based randomart picture, depending on the version you have installed.  Then, on the controller node, enter the following commands to import the key for each node.  First run  ssh connect@(hostname) mkdir -p .ssh ; it will ask for the account password.  Then enter  cat .ssh/id_rsa.pub | ssh connect@(hostname) 'cat >> .ssh/authorized_keys' ; it will ask for the password again.  Finally, enter  ssh connect@(hostname) hostname  to test the connection.  If you are having trouble, I used the following site to set up my passwordless SSH connection.
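As a possible shortcut, most OpenSSH installs also ship ssh-copy-id, which does the mkdir and authorized_keys append in one step.  The hostname below is a placeholder.

```
# Copies your public key to the node and sets up authorized_keys
ssh-copy-id connect@pi02
```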

Now that SSH is set up, run the command  ansible all -m ping .  If everything worked, you should get output like the following.

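Each node should respond with a pong.  The exact format depends on your Ansible version, but it will look something like this (hostnames are placeholders):

```
pi02 | success >> {
    "changed": false,
    "ping": "pong"
}
```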

I wrote several scripts for cluster administration as well, such as cluster shutdown, reboot, and update.  With a functioning cluster you can install OpenMPI, SLURM, or any other message passing software for a full multi-node supercomputer.  The NVIDIA cluster used SLURM and the Beowulf cluster used MPI, so the choice is up to you.  I will write another post once I have everything finished and running the way I would like.  I hope this helps get you started.  I have had fun getting this working in a form that can be easily transported, and I am sure I will refine my design and make it better.  Let me know what you think, and if you have problems I will help you as best I can.  Here are the links to the NVIDIA and Beowulf cluster websites.
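An administration script along those lines can be a one-liner over Ansible.  This is a sketch, assuming the hosts.pi inventory file described earlier; on newer Ansible releases the escalation flag is --become, while older ones used --sudo.

```
#!/bin/sh
# Example cluster-wide shutdown using Ansible and the hosts.pi inventory
ansible all -i hosts.pi -m shell -a "shutdown -h now" --become
```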

Good Luck
