VMware ESXi on Gigabyte Brix
Recently, in an effort to lower my power consumption, I decided to rebuild my virtualization hosts on micro PCs. I had done something similar with Proxmox VE, but I didn't want to rebuild all of the systems I use every day. So when two Gigabyte Brix Pro i7-5575R micro PCs went on sale, I bought them. I had been running everything on an eight-core AMD FX system with no problems; it just used too much power. After doing some research on how I could get ESXi to work on these boxes, I jumped in.
I used VMware PowerCLI to build an install image with all the drivers these systems needed, except the USB NIC driver. I stuck with vSphere 5.5 Update 3 because I was already running it and didn't want to jump to vSphere 6 quite yet. I did have some problems at the beginning because I used a CD-ROM for the initial install: the installer would boot but then freeze at "initializing storage stack." I could not get past that point no matter how much I played around. Finally I built a bootable USB flash drive instead, and it installed like a champ. Even after the install I still had trouble getting the host to boot. After more research, I had to change a BIOS setting to disable CSM. The BIOS on these boxes is pretty basic; all I had to do was change the OS setting from Windows 7 to Windows 8. Once I did that, it worked great.

I also needed another NIC, and I found a driver and an install procedure at virtuallyghetto.com (the link is at the bottom of the post). The Brix has USB 3.0, so the speed is just fine. The only drawback is having to use the CLI to assign the NIC to a vSwitch, but it works great, and I use this NIC as the storage connection.
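For reference, the CLI step looks roughly like this. This is a sketch based on the virtuallyghetto procedure; the uplink name `vusb0` and the vSwitch name `vSwitch1` are assumptions — check the actual NIC name on your host first:

```
# List NICs to find the USB adapter's name (typically vusb0 with this driver)
esxcli network nic list

# Create a separate vSwitch for storage traffic (the name is an example)
esxcli network vswitch standard add --vswitch-name=vSwitch1

# Attach the USB NIC as an uplink to that vSwitch
esxcli network vswitch standard uplink add --uplink-name=vusb0 --vswitch-name=vSwitch1
```

After that, the vSwitch and its port groups can be managed normally from the vSphere client.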
So I now have my two i7 Brix systems installed and connected to my LAN, with a separate switch connecting the storage NICs to a Seagate NAS. I use this NAS as the destination for vSphere Data Protection backups of the VMs. I also have one VM running from the NAS to see whether I will be able to move more VMs onto it and allow for vMotion. I used the VMware Standalone Converter to copy all of the VMs I wanted to keep over to the ESXi hosts. I also tried something I had not used before: I installed the vCenter Server Appliance instead of using a Windows server as the vCenter server. Once it was configured, you can't tell the difference — you get the same web interface and can still connect with the vSphere Client if you want. I did make one modification, though: I reduced the amount of memory, because the appliance gets eight gigs by default. I have a very small environment, and eight gigs is too much of my resources to spend on a server that will not use that much memory. I gave it four gigs and have not seen a problem yet.
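If you want to script that memory change, a rough PowerCLI sketch looks like this. The host name `esxi01.local` and the appliance VM name `vcenter` are placeholders for whatever yours are called, and the appliance has to be shut down before the memory can be changed:

```
# Connect to the host running the appliance (hostname is an example)
Connect-VIServer -Server esxi01.local

# Shut the appliance down cleanly
Get-VM -Name "vcenter" | Shutdown-VMGuest -Confirm:$false

# ...wait for the guest to power off, then lower the memory and start it back up
Get-VM -Name "vcenter" | Set-VM -MemoryGB 4 -Confirm:$false
Get-VM -Name "vcenter" | Start-VM
```

The same change can of course be made by editing the VM's settings in the vSphere client.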
I did have to put some thought into how to place the VMs on the two hosts. They only have sixteen gigs of memory each, so I had to think about performance. I put two high-resource servers on each host and then sprinkled in the others. I have 14 VMs running on these two hosts, and they still have plenty of resources to add a few more if needed. Until I add another virtual host to the mix, vMotion will not be an option, but I can use vSphere Data Protection to keep good backups in case something goes down.
The best part is that I lowered my electricity consumption by 60 to 110 watts, and running 24/7, that adds up. The computer room is also a lot quieter and cooler; the old system would get loud when all the fans kicked in, and it was one of the main culprits for heat. It took me about a day of messing around to figure out what would work. Feel free to ask questions, and I will add pics of my setup once I get a chance.
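To put rough numbers on that: a continuous 60 to 110 watt reduction works out to somewhere between about 500 and 1,000 kWh per year. A quick back-of-the-envelope calculation (the $0.12/kWh rate is just an example assumption, not my actual rate):

```shell
#!/bin/sh
# Annual kWh and dollar savings for a continuous wattage reduction.
# The $0.12/kWh rate is an example assumption; plug in your own.
for watts in 60 110; do
  awk -v w="$watts" 'BEGIN {
    kwh = w * 24 * 365 / 1000   # watt-hours per year, converted to kWh
    printf "%d W saved -> %.0f kWh/year -> about $%.0f/year at $0.12/kWh\n",
           w, kwh, kwh * 0.12
  }'
done
```

So even at the low end, the two Brix boxes pay back a noticeable chunk of their cost every year.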
Below are links to the sites where I got drivers or information on modifying my ESXi install. Some of the info is for older systems, but it is still valid for the most part.