Linux Management with Spacewalk


I recently built a Spacewalk 2.1 server to automate certain administration functions for my Linux machines. Installation was pretty straightforward as long as you don't hit any problems, and once I got it running it turned out to be a great way to manage my systems. First, let's get a little info on the product.

Spacewalk is an open source Linux systems management solution. Spacewalk is the upstream community project from which the Red Hat Satellite product is derived. Spacewalk manages software content updates for Red Hat derived distributions such as Fedora, CentOS, and Scientific Linux, within your firewall. You can stage software content through different environments, manage the deployment of updates to systems, and see which update level any given system is at across your deployment. A central web interface allows viewing of systems and their associated software update status, and lets you initiate update actions. Spacewalk provides provisioning and monitoring capabilities, allowing you to manage your systems throughout their lifecycle. Via provisioning, Spacewalk enables you to kickstart systems and manage and deploy configuration files. The monitoring feature allows you to view the status of your systems alongside their software update status.

Here is a pic of the WebUI.

[Screenshot: the Spacewalk WebUI]

With all of that out of the way, I can talk about the installation process. You will need a base OS; for this I used CentOS 6 with no GUI. This helps lower the overhead, and you will not have much need for a GUI on the OS, though you can install one if you want. The Spacewalk wiki has the install information and anything else you may need to research.

First you will need to add the Spacewalk and EPEL repos to the server. The Spacewalk repo is located at http://yum.spacewalkproject.org/ if you want to download the packages and do this manually. To install the repos, enter the following commands.

rpm -Uvh http://yum.spacewalkproject.org/2.1/RHEL/6/x86_64/spacewalk-repo-2.1-2.el6.noarch.rpm

rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

You will also need to add the jpackage repo with the following command. The link to the repo is http://www.jpackage.org/mirroring.php

cat > /etc/yum.repos.d/jpackage-generic.repo << EOF
[jpackage-generic]
name=JPackage generic
#baseurl=http://mirrors.dotsrc.org/pub/jpackage/5.0/generic/free/
mirrorlist=http://www.jpackage.org/mirrorlist.php?dist=generic&type=free&release=5.0
enabled=1
gpgcheck=1
gpgkey=http://www.jpackage.org/jpackage.asc
EOF

Now you need to decide what type of database you want to use. You can use Oracle XE or PostgreSQL; I have built this server with both Oracle 11g and PostgreSQL. If you use a separate Oracle server, you need to make sure the database permissions are set exactly as Spacewalk says, otherwise you will run into all kinds of problems right from the get-go. Oracle setup can be found at the following link: https://fedorahosted.org/spacewalk/wiki/FullOracleSetup. For this build we will use PostgreSQL because it is easier to set up and you can use yum without downloading the Oracle packages.

yum install spacewalk-setup-postgresql

Now that the database server is installed we can move on to installing Spacewalk itself. For PostgreSQL we will use the following command.

yum install spacewalk-postgresql

This will install the Spacewalk packages and set everything up to use PostgreSQL. Spacewalk needs a fully qualified domain name (FQDN) that resolves, and you can use the hosts file or DNS to accomplish this.
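If you are not using DNS, a hosts file entry like the following is enough; the IP address and hostname here are only placeholders, so adjust them to your environment.

echo "192.168.1.50  spacewalk.example.com  spacewalk" >> /etc/hosts   # example address and hostname

Once name resolution works, start the Spacewalk install and configuration by entering the following command.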

spacewalk-setup --disconnected

You will see output similar to the following (note that the example below used an Oracle backend).

* Setting up Oracle environment.
* Setting up database.
** Database: Setting up database connection for Oracle backend.
Database service name (SID)? XE
Username? spacewalk
Password?
** Database: Testing database connection.
** Database: Populating database.
*** Progress: ####
* Setting up users and groups.
** GPG: Initializing GPG and importing key.
** GPG: Creating /root/.gnupg directory
You must enter an email address.
Admin Email Address? root@localhost
* Performing initial configuration.
* Activating Spacewalk.
** Loading Spacewalk Certificate.
** Verifying certificate locally.
** Activating Spacewalk.
* Enabling Monitoring.
* Configuring apache SSL virtual host.
Should setup configure apache’s default ssl server for you (saves original ssl.conf) [Y]?
** /etc/httpd/conf.d/ssl.conf has been backed up to ssl.conf-swsave
* Configuring tomcat.
** /etc/tomcat5/tomcat5.conf has been backed up to tomcat5.conf-swsave
** /etc/tomcat5/server.xml has been backed up to server.xml-swsave
** /etc/tomcat5/web.xml has been backed up to web.xml-swsave
* Configuring jabberd.
* Creating SSL certificates.
CA certificate password?
Re-enter CA certificate password?
Organization? Fedora
Organization Unit [spacewalk.server.com]? Spacewalk Unit
Email Address [root@localhost]?
City? Brno
State? CZ
Country code (Examples: “US”, “JP”, “IN”, or type “?” to see a list)? CZ
** SSL: Generating CA certificate.
** SSL: Deploying CA certificate.
** SSL: Generating server certificate.
** SSL: Storing SSL certificates.
* Deploying configuration files.
* Update configuration in database.
* Setting up Cobbler..
Cobbler requires tftp and xinetd services be turned on for PXE provisioning functionality. Enable these services [Y/n]?
cobblerd does not appear to be running/accessible
* Restarting services.
Installation complete.
Visit https://spacewalk.server.com to create the Spacewalk administrator account.
 
Once this is complete you will be able to access the Spacewalk web page. There is still some more work that needs to be done. One item is setting up iptables for the system, unless you decide to disable iptables altogether. Here are the commands you need to open the ports.
iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 5222 -j ACCEPT
iptables -A OUTPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
iptables -A OUTPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
iptables -A OUTPUT -m state --state NEW -m tcp -p tcp --dport 4545 -j ACCEPT
iptables -A OUTPUT -m state --state NEW -m tcp -p tcp --dport 1521 -j ACCEPT
iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 1521 -j ACCEPT
service iptables save

To summarize the ports:

  • Outbound: 80, 443, and 4545 (4545 only if you want to enable monitoring).
  • Inbound: 80, 443, 5222 (5222 only if you want to push actions to client machines), 5269 (only for pushing actions to a Spacewalk Proxy), and 69/udp if you want to use tftp.

The server is functional, but you will need to do several more things to make it useful. You will need to configure users, which can be done in the Users tab. You will also need to create channels for the packages. Creating channels is easy, but you need to decide whether you want repos for these channels. This is accomplished in the Channels tab by going to Manage Software Channels. You will need to create at least one channel, but I would create a channel for each OS and architecture you will be managing. Then create a repo for each mirror repo you plan to have on the server. Next, assign the repos to the channels you want them attached to: click on the channel, go to Repositories, and check the box. Then you can select Sync and Spacewalk will download the packages for you. This will take some time depending on the size of the repo and when Spacewalk picks up the task.
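If you prefer the command line, the same sync can be kicked off right away with spacewalk-repo-sync (it shows up again in the command list at the end of this post). The channel label and mirror URL below are only examples, so substitute your own.

spacewalk-repo-sync --channel centos6-x86_64 --url http://mirror.centos.org/centos/6/os/x86_64/   # example channel label and mirror URL

This starts the download immediately instead of waiting for the scheduled task to run.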

Another way you can upload packages is using the physical media or an ISO of the OS, mounted on the operating system. Spacewalk will not connect directly to Red Hat, so if you manage RHEL systems you will have to upload packages using this method. So if my DVD is mounted in /media and is Red Hat 6, I would use the following command.

rhnpush -vvv --channel=rhel6_x86_64 --server=http://localhost --dir=/media/Packages

The only things you would change are the channel name and the exact directory where the packages are located. The -vvv switch gives you very verbose output, and http://localhost is required for this to work correctly. This process will take a long time depending on the number of packages and the speed of the system.
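If you are working from an ISO file rather than physical media, a loopback mount works the same way; the ISO path below is just a placeholder for wherever your image lives.

mount -o loop /path/to/rhel-server-6-x86_64-dvd.iso /media   # replace with the path to your ISO

Then point the rhnpush --dir switch at the Packages directory under the mount point, as shown above.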

Now that we have packages and users, we will want to register systems to the Spacewalk server. You will need to create an activation key using the Spacewalk WebUI. On the overview screen click on Manage Activation Keys, then Create New Key. You can have it auto-generate the key or enter a string yourself; I would enter something simple of your own, since the auto-generated key is a long alphanumeric string. Now we will install the Spacewalk client repo, install some packages, and register the systems. This is done with the following commands. Remember to change the link based on the type of OS you are using.

rpm -Uvh http://yum.spacewalkproject.org/2.1-client/RHEL/6/x86_64/spacewalk-client-repo-2.1-2.el5.noarch.rpm

There are two dependencies that may or may not be present: jabberpy and python-hashlib for RHEL 5 based OSes, and just jabberpy for RHEL 6 based systems. You can install them as part of the yum transaction if they are in a repo you have installed, or as standalone RPM files. Enter the following command to install the client packages.

yum install rhn-client-tools rhn-check rhn-setup rhnsd m2crypto yum-rhn-plugin osad
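If yum cannot resolve jabberpy (or python-hashlib on RHEL 5), and they are available in one of the repos you added earlier, you can pull them in explicitly; otherwise install the standalone RPM files with rpm -Uvh first.

yum install jabberpy python-hashlib   # only needed if they were not pulled in automatically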

Once the packages are installed you can register the system using the following command.

rhnreg_ks --serverUrl=http://<yourSpacewalkserveraddress>/XMLRPC --activationkey=1-<youractivationkey>

Then check in the Spacewalk WebUI to see if the system shows up.

We now have systems and packages, so you can try installing something to see if it is working. The following are some CLI commands that can be used to accomplish certain tasks.

spacewalk-service - can be used with start|stop|status to control the Spacewalk services.

rhn_check - forces the OS to check in with the Spacewalk server. Spacewalk-managed systems check in around every 4 hours, so if you want something done now you need to run this command. Using the -vvv switch will also help you troubleshoot any problems.

spacewalk-repo-sync --channel <yourchannel> --url <url of the repo to sync from> - runs the repo sync now instead of waiting for Spacewalk.

rhnmd - forces the monitoring tasks to run now.

This is a good start for building your Spacewalk server. There are tons of features that can be added, such as monitoring system resources, running OpenSCAP scans on the systems, configuration management, and Errata for different OSes. I had to stumble my way through a great deal of the setup because there is no single document that shows it all, so I hope this helps you on your way. I use this server every day and it makes tasks so much easier. Good luck.


Hyper-V 2012 Server Core


I have been studying for my upgrade test for Microsoft Server 2012 and ran across Hyper-V 2012 Server Core. So I downloaded it and set up a little lab where I could test its features and functionality. I used two HP 8400 workstations as the servers and a Dell M90 as the domain controller, with another laptop for access to the Hyper-V servers.

The download was about 2 GB and installed really quickly, so in about 30 minutes I had two virtual servers. The DC was Windows Server 2012, and you will really need a domain to make some of the features work correctly. With the domain up and the servers all connected, I began the task of finishing configuration, but if you have ever used a Windows Core server you know there is no real interface. To make this process a great deal easier you should download a tool called Corefig. This adds an easy-to-use interface that makes working with a Core server much simpler. Here is a pic of the interface.

[Screenshot: the Corefig interface]

Corefig also makes things a lot easier if you don't want to run this on a domain, though you will still have some problems. Anyhow, once you get the machines built and the basic configuration out of the way, you can proceed with building your virtual machines. Using ISO files makes this a great deal easier; physical media will be fine too, but you will need to put the discs in the server's CD drive. You will also need a Windows machine to access Hyper-V Manager so you can control the VMs with a GUI. I used Windows 8 for this; Windows 7 will work as well, but you will have to download the Remote Server Administration Tools from Microsoft and install Hyper-V Manager.

With all of that complete you can simply connect to the Hyper-V servers and start the process of building the VMs. I'm not going to go through building the VMs because it is pretty straightforward and easy to do. The things I really like about Hyper-V 3 are the replication and failover features that are built in. First I will go over replication, which is also pretty easy to do. The machines I am using are not part of a cluster and they don't have any kind of shared storage, but you can simply click on your VM and select replication. My machines are on a domain, so this process may or may not work with two standalone machines. The replication menu is also easy to use: you just enter the Hyper-V server to replicate to and it will begin. After the initial replication of the VM, it will update the copy on the other machine every few minutes. If you have a large number of VMs replicating, this may take a little while, but I didn't see any lag from this process. Once replication is up and working you can click on the replication tab and select failover. This will make sure the replicated VM is up to date and power that machine on. So if a server fails, or you just need to do some work on a server, you can move VMs back and forth with very little effort. This feature is usually not part of a free hypervisor. Here is a pic of the replication menu in Hyper-V Manager.

[Screenshot: the replication menu in Hyper-V Manager]

Another feature is the ability to move VMs from one server to another while they are still running, without shared storage. This one takes a little more setup time and will not work without a domain. The first thing I did was create a security group and add my Hyper-V servers to it. Then I added two delegations to the Hyper-V computers in Active Directory. For each of the Hyper-V servers in Active Directory, I selected Properties and then Delegation. In the delegation menu I selected "Trust this computer for delegation to specified services only" and selected the "Use Kerberos only" radio button. Then in the delegated services box I selected Add and added the following services for the other server you will be moving VMs to:

  • cifs
  • Microsoft Virtual System Migration Service

With these delegations added you should now be able to move live VMs from one server to another. The tab looks like the following.

[Screenshot: the Delegation tab in the computer's Active Directory properties]

Now with everything set up you should be able to right-click on the VM in Hyper-V Manager and select Move. It will ask which server to move it to, along with a few other questions about how you want this to be performed: whether you have shared storage, what to do with the VM files, and what drive on the other server you want to move the VM to. Once you answer these questions you select Finish and it will begin moving the machine to the other server. If you want to see that it is still live, just log into the VM and you can watch it remain functional. It will also close the RDP window and reopen it against the other server once the move is complete. All in all this is a nice feature that the average person can use, and it makes it a lot easier to do maintenance and install updates on servers without any downtime.

So this version of Hyper-V is really a big step forward in features and reliability. I was not a fan of the previous versions, but Hyper-V 3 is really something to take a look at. I have also considered moving my test lab over to this platform for the replication feature. Finally, to wrap everything up, I have been playing around with Altaro's Hyper-V Backup. They have a free edition, and it would make a test lab complete with Hyper-V's failover features plus the ability to back up VMs for the just-in-case scenario. If you want to use shared storage, Hyper-V supports iSCSI and is supposed to work with SMB 3.0, so you could add an inexpensive NAS to the servers to give the system true high availability. So check it out, you just might like it; I know I have.

 

 


My Raspberry Pi Cluster


After reading all these articles about R Pi clusters I wanted to add my two cents and put one together for myself. First off, I was not going to spend the amount of money that some of these projects did, so I settled on 6 R Pis for my cluster. I also wanted to make mine a little more portable than the ones I was looking at, so I could take it places without worrying that it would fall apart. Here is the list of parts I used to build it.

  • 6   version B Raspberry Pis
  • 1   10 port powered USB hub
  • 1   5v USB powered fan
  • 1   project case
  • 6   micro USB cables
  • 1    8 port D Link Gigabit switch
  • 1    3 plug electrical extension cord
  • 6   Cat 5e cables
  • 6    video card heat sinks
  • 6    4 Gig micro SD cards with SD card adapters
  • 12   motherboard standoffs

Now I didn't go out and buy everything here; some of the parts came out of the chest where I store computer parts from other projects or broken computers. I spent about $320 on everything except the switch, Cat 5 cables, and micro USB cables. Here is what my R Pi cluster ended up looking like.

[Photos: the assembled Raspberry Pi cluster]

OK, it is not the prettiest thing out there, but I can hook up one Cat 5 cable and plug in one cord to use the computer. It has a power switch on the USB hub, which is something I wanted, and I played with several ideas before I came to this setup. The fan is there to cool the computers inside the box, and I attached the video card heat sinks to help with cooling. They all run at around 40 degrees Celsius, and the fan has a speed control to tweak it for your cooling needs.

So I imaged Raspbian onto one of the SD cards as the OS that I will be running on the cluster. I installed updates to the OS and then installed Ansible, which I downloaded from their website and installed using make. Once that was completed I created an image of the card using ImageWriter, because I did all of my work on a Windows 7 machine.
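If you are doing the imaging from a Linux machine instead, dd does the same job; the image filename and device below are examples, and you should double check the device name with lsblk first, since dd will happily overwrite the wrong disk.

dd if=raspbian.img of=/dev/sdX bs=4M   # sdX is a placeholder, verify the SD card device first
sync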

Now that I have an image, I created one of the node SD cards and configured it. To configure it I created a hosts file that had all of the cluster IP addresses in it. I also added a virtual NIC to the setup so I could still SSH into each node in the cluster. To add the virtual NIC, just go to /etc/network and use nano, vi, or any other text editor you want to modify the interfaces file, then add an entry like the one shown below.

[Screenshot: the virtual NIC entry in /etc/network/interfaces]
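The entry is just an alias interface with its own static address. A sketch of what it boils down to, using the 192.168.100.x addressing from the inventory below (the exact address and netmask will depend on your network):

# example alias interface; adjust the address and netmask to your network
auto eth0:0
iface eth0:0 inet static
    address 192.168.100.121
    netmask 255.255.255.0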

Then I created the Ansible hosts.pi file. This file simply defines every node's role in the cluster. It should have the controller and the slave nodes separated by bracketed group names, and should look like the following entry.

[headnode]
192.168.100.120
[Computenodes]
192.168.100.121
192.168.100.122
etc…
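One note: Ansible reads /etc/ansible/hosts by default, so if you keep the inventory as a separate hosts.pi file you either need to copy it into place or point Ansible at it with the -i switch, for example:

ansible all -i hosts.pi -m ping   # or copy hosts.pi over /etc/ansible/hosts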

Once this was configured, I made another image of the compute node and imaged the rest of the SD cards. When that was finished I had to go back through each one and set the IP and hostname for each compute node. Then I booted up the Raspberry Pis. Now you have to set up a passwordless SSH connection between the controller node and each node.

The first thing is to create the account you want to use for the Ansible connection on each node; I created an account named connect. Then run ssh-keygen -t rsa to generate the key. You will get output like the following:

Generating public/private rsa key pair.
Enter file in which to save the key (/home/a/.ssh/id_rsa): 
Created directory '/home/a/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/a/.ssh/id_rsa.
Your public key has been saved in /home/a/.ssh/id_rsa.pub.
The key fingerprint is:
3e:4f:05:79:3a:9f:96:7c:3b:ad:e9:58:37:bc:37:e4

Or you may get a text-based picture, depending on the version you have installed. Then, on the controller node, enter the following commands for each node to copy the key over.

ssh connect@(hostname) mkdir -p .ssh

It will ask for the account password. Then enter:

cat .ssh/id_rsa.pub | ssh connect@(hostname) 'cat >> .ssh/authorized_keys'

It will ask for the password again. Finally, enter ssh connect@(hostname) hostname to test the connection. If you are having trouble, I used the following site to set up my passwordless SSH connection.

http://www.linuxproblem.org/art_9.html
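As a shortcut, most distributions also ship ssh-copy-id, which does the same mkdir-and-append steps in one command; it is not what I used, but it gets you to the same place.

ssh-copy-id connect@(hostname)   # optional alternative to the manual steps above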
  

Now that SSH is set up, run the command ansible all -m ping. If everything worked you should see a successful response from every node, as in the pic below.

[Screenshot: output of ansible all -m ping]

I wrote several scripts for cluster administration as well, such as cluster shutdown, reboot, and update (a sketch of the kind of ad-hoc commands they wrap is below the links). With a functioning cluster you can install OpenMPI, SLURM, or any other message passing programs for a full multi-node supercomputer. The NVIDIA cluster used SLURM and the Beowulf cluster used MPI, so the choice is up to you. I will write another post once I have everything finished and running the way I want. I hope this helps get you started; I have had fun getting this working and into a form that can be easily transported, and I am sure I will refine my design and make it better. Let me know what you think, or if you have problems I will help you as best I can. Here are the links to the NVIDIA and Beowulf cluster websites.

http://blogs.nvidia.com/blog/2013/07/19/secret-recipe-for-raspberry-pi-server-cluster-unleashed/

http://coen.boisestate.edu/ece/raspberry-pi/
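As promised, here is the kind of ad-hoc command my admin scripts wrap. This is only a sketch: it assumes the connect account can sudo on the nodes, and depending on your Ansible version the privilege escalation flag will be --sudo or --become.

# assumes the connect user can sudo; on newer Ansible use --become instead of --sudo
ansible Computenodes -u connect -m shell -a "apt-get update && apt-get -y upgrade" --sudo
ansible Computenodes -u connect -m shell -a "shutdown -h now" --sudo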

Good Luck
