Setting up a home learning lab: 2018 edition


Back in 2013 I published a blog post about setting up a home learning lab. (You can find it here.) To this day it’s still a very popular post on our site.

I was thinking about what I’d written and what I’d do differently today, five years later. The question arises because the public cloud vendors now make access to virtual machines very affordable.

I still maintain a desktop for running VMs, and I still use it. So the short answer is “yes”: I still own hardware in my home for this purpose, but I use it for different things. Thanks to the public cloud, the way I experiment has changed.

I use the public cloud almost as much as my VM host, arguably even more. If I want to try “something” on SQL Server, I almost always use the public cloud. Why? It’s easier and faster. Installing a new VM and SQL Server on it takes at least an hour. (Yes, that could be automated, but automating it would take me several hours and leave me with scripts to maintain.)
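To give a sense of what “easier and faster” looks like, here is a rough sketch of standing up a SQL database in the cloud with the Azure CLI. The resource group, server and database names are made-up examples, not anything from this post, and the same idea applies to AWS or GCP:

```shell
# Sketch: stand up a cloud SQL database in minutes instead of
# installing a VM and SQL Server by hand. All names are examples.
az group create --name lab-rg --location eastus

az sql server create \
  --name lab-sql-server \
  --resource-group lab-rg \
  --location eastus \
  --admin-user labadmin \
  --admin-password 'S0me$trongPass!'

az sql db create \
  --resource-group lab-rg \
  --server lab-sql-server \
  --name lab-db \
  --service-objective S0

# When the experiment is done, one command tears it all down:
az group delete --name lab-rg --yes
```

Deleting the resource group when you’re finished is what keeps this cheap — you only pay while the experiment is running.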

I also used to maintain several versions of database server VMs, checkpointed at the time of a fresh install. I no longer do that either.

However, if I want to work on larger, more complex tech stacks, I use my local server. Implementing several nodes of a Hadoop or Cassandra cluster and then adding more machines eats my credits really fast. It’s not cost-efficient for me to use the public cloud for this.

The exception to the “try something in the cloud” rule comes when I decide I want to learn how to install or configure a particular piece of software, but this is rare.

I do build and maintain development environments locally, including a “user developer” workstation in my dev domain.

That said, I also use PaaS offerings in the cloud, more and more, but that only sort of counts, as it’s not like I can fire up BigQuery in my home learning lab.

In terms of which hardware to buy, there are two strategies I see my team using. Some buy surplus rack-mount servers. There’s a lot to choose from, easily available for any budget. The reasons I haven’t gone with this strategy are noise, size and the cost of parts. The most notable factor for me is size, since I live in a very small home.

The other strategy (and the one I employ) is to buy “big” desktops: an i7 and as much RAM as you can afford. I have an SSD that I use for the host OS plus the occasional VM that needs to be fast, and then multiple 7200 RPM hybrid drives in 2 TB and 4 TB sizes, and I try to spread the I/O load between them.

The host VM software hasn’t changed from my original post. I find myself going between Hyper-V and VirtualBox. Both have their strong points.

In Hyper-V, I like the automatic RAM management, and the “enhanced session” mode corrected the UI issues — but note that this only applies to Windows VMs, not Linux.

In VirtualBox, I really like the interface and the ability to tune the hosts, but if your priority is stacking a lot of VMs running simultaneously, I think Hyper-V is the better choice.

How about you folks? Has this changed for you? Are you using the cloud exclusively for your learning purposes or do you have dedicated hardware?




About the Author

Chris Presley loves order—making him a premier Microsoft SQL Server expert. Not only has he programmed and administered SQL Server, but he has also shared his expertise and passion with budding DBAs as a SQL Server instructor at Conestoga College in Kitchener, Ontario. Drawing on his strong disaster-recovery skills, he monitors production environments to swiftly detect and resolve problems before they escalate. A self-described adrenaline junkie, Chris likes tackling the biggest database problems and putting out the toughest fires—and hitting the road on his motorcycle.

1 Comment

Hi Chris,

I’m a sucker for home lab stories, so here is mine:

I used to have bigger i7-type machines with a lot of RAM (usually 64 GB) and run either vSphere or some variant of Xen/KVM. These days I’ve moved away from the bigger machines and use Intel NUCs instead, with 32 GB RAM and ~500 GB NVMe + 500 GB SSDs.
I still run vSphere (off a USB stick) and then use my Synology NAS for bulk capacity (ISOs/templates etc.). I typically run VMs on the internal NVMe/SSD drives.
At the moment I have 3 NUCs, divided into 1 ‘management’ host and 2 compute hosts. The management host runs things like Jenkins, Bitbucket, the ELK stack and something to kickstart new hosts, and then I have various database systems (Oracle/CockroachDB/MySQL/Hadoop etc.) on the other 2.
Pretty much everything is automated, so if something crashes I just rebuild. Most of the automation is done using Ansible.
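That rebuild-on-crash workflow could look something like the playbook below — a minimal sketch, not the commenter’s actual code; the `compute` host group, the MySQL package/service names and the NAS backup path are all assumptions for illustration:

```yaml
# Minimal Ansible sketch: rebuild a database host from scratch.
# The "compute" group, package/service names and backup path are assumptions.
- hosts: compute
  become: true
  tasks:
    - name: Install the MySQL server package
      ansible.builtin.package:
        name: mysql-server
        state: present

    - name: Ensure the service is running and starts on boot
      ansible.builtin.service:
        name: mysqld
        state: started
        enabled: true

    - name: Restore the latest schema dump from the NAS
      ansible.builtin.command: mysql -e "source /mnt/nas/backups/latest.sql"
```

Because the tasks are idempotent, rerunning the playbook against a freshly kickstarted host converges it to the same state every time, which is what makes “just rebuild” a reasonable recovery strategy.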

The nice thing about automation is that I can use the same code if I need to test something at a larger scale (AWS, Azure etc.), and in this case I typically use Terraform & Ansible.

On the laptop side, I use a combination of Vagrant/VirtualBox & Docker.

Great post!

(I’m also a fan of the Datascape Podcast, and it would be really interesting if you could get someone to talk about CockroachDB, which has got to be one of the more interesting DBs to come out in a very long time.)

