Building a Miniature Cloud Platform

Posts in this series
  1. Building a Miniature Cloud Platform
  2. Miniature Cloud Project - Providing Power
  3. Miniature Cloud Project - Networking

Having worked at all sorts of organizations large and small, I have come across more than my fair share of scenarios where local resources were not available for trying new things.  That is one of the big draws of IaaS platforms, but they still don't cover platform-design experimentation, and the components are so costly that experimenting is simply not allowed in all but the most extreme cases.  The Intel OSIC lab is an edge case, but even it has a static hardware footprint.  Fortunately for me, I recently won the Big Bounties for Big Ideas contest at the OpenStack Summit in Austin, Texas this May and received a nice prize.

[Photos: mini-cloud build, images 1 through 5]

Pardon my dust.  Literally.


My Goal

  • To create a tiny setup that can run hundreds to thousands of instances with the smallest, cheapest footprint possible, and to contribute lessons learned and any bugs/issues back to the community.


The Plan

To build an OpenStack data center on my desk using:

  • MaaS for hardware management
  • Juju for application deployment
  • LXD & Libvirt as the hypervisors (Libvirt in case I decide Windoze has value for some reason, but I have to sacrifice an entire blade…)
  • Canonical Kubernetes for advanced container network management
  • OpenStack Newton (the current stable release)
  • OpenStack Neutron with the VXLAN ML2 driver for network virtualization/overlays
  • OpenStack Swift object storage cluster, 1 replica
  • OpenStack Cinder block storage w/volume migration and replication
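Since Juju drives the application deployment, the stack above translates roughly into a bundle.  A minimal sketch follows; the charm names are the upstream OpenStack charms, but the unit counts and options are my assumptions for this build, not a tested configuration:

```yaml
# Illustrative Juju bundle fragment for the planned stack.
# Unit counts and options are assumptions, not a verified config.
series: xenial
services:
  nova-compute:
    charm: cs:nova-compute
    num_units: 3
    options:
      virt-type: lxd        # LXD as the hypervisor; one blade may run kvm/libvirt instead
  neutron-gateway:
    charm: cs:neutron-gateway
    num_units: 1            # the controller NUC doubles as the Neutron router
  swift-proxy:
    charm: cs:swift-proxy
    options:
      replicas: 1           # single-replica object ring, per the plan
  swift-storage:
    charm: cs:swift-storage
    num_units: 3            # the three Raspberry Pi storage nodes
  cinder:
    charm: cs:cinder
    num_units: 3            # co-located with the NUC hypervisors
```

Relations between the services (compute to gateway, proxy to storage, and so on) would be added once the machine mapping to MaaS-managed hardware is sorted out.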

The Hardware

  • 3X Intel NUC5i5MYBE hypervisors running Libvirt and LXD, doubling as Cinder servers, w/USB3 500GB SATA drives for Cinder volume storage, a USB wireless dongle, & a 64GB USB flash drive (competition prize) – the Raspberry Pi is a poor hypervisor board. 🙂
  • 1X Intel NUC5i5MYBE OpenStack controller/Neutron router connected over 10/100 to the core router via CAT6 patch cord, an Intel wifi PCI expansion card, & a Patriot Ignite M.2 240GB SSD expansion card (competition prize)
  • 3X Raspberry Pi Model 3B Swift controller, storage, and API servers w/64GB Class 10 microSD cards ($60 ea.)
  • 1X Raspberry Pi Model 3B MaaS controller w/64GB Class 10 microSD card ($60) – may convert to a Hardkernel ODROID-XU4 to add horsepower for additional management functions.
  • Thrift store wireless N router running DD-WRT ($2.93)
  • 3X 500GB SATA drives (these were just lying around after my NAS expansion to 12TB) w/Evercool HDD fan modules and 12V USB3 SATA adapters ($21.99 + $9.99 ea.)
  • Multiple 5V to 12V DC transformers for powering the router, HDD fans, and SATA adapters via USB (about $20 for everything)
  • 3X Magic-T 5-port 10A USB power blocks ($7.99 ea.)
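With three Pi storage nodes and the single-replica plan, each node would appear once in the Swift object ring.  A sketch of the ring build is below; the IP addresses, port, zone layout, and device names are assumptions for illustration:

```shell
# Sketch: build a one-replica object ring across the three Pi storage nodes.
# IP addresses, zone layout, and device names are assumptions.
swift-ring-builder object.builder create 10 1 1   # 2^10 partitions, 1 replica, 1h min move interval
swift-ring-builder object.builder add r1z1-10.0.0.11:6000/sda1 100
swift-ring-builder object.builder add r1z2-10.0.0.12:6000/sda1 100
swift-ring-builder object.builder add r1z3-10.0.0.13:6000/sda1 100
swift-ring-builder object.builder rebalance
```

With one replica there is no redundancy at all: losing a Pi (or its SD card) loses a third of the objects, which is acceptable here only because this is a lab, not production.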


The Challenges

  • ARMHF boards do not support PXE, despite my joining the effort to lobby for it.
  • What the hell is Kubernetes again?
  • Running Swift components on ARMHF is theoretically possible, but are the packages actually there?
  • I might need a faster network switch.  I wonder what the Salvation Army has, and if I should just keep this guy and try to find an 8-port desktop switch to add in.
  • How to manually add nodes to MaaS and hand them off to Juju without PXE support is not readily apparent.
  • Providing this many different types of power is going to be a mess using the vendor-supplied power packs and cables.  I think I have this one worked out now.
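For the PXE-less Pis, the MAAS 2.0 CLI does at least allow machines to be registered by hand rather than discovered via network boot.  A hedged sketch, where the profile name, URL, MAC address, and hostname are all my placeholders:

```shell
# Sketch: manually enlist a non-PXE ARM board in MaaS via the CLI.
# Profile name, API URL, MAC address, and hostname are assumptions.
maas login admin http://10.0.0.2:5240/MAAS/api/2.0 "$API_KEY"
maas admin machines create \
    hostname=swift-pi-1 \
    architecture=armhf/generic \
    mac_addresses=b8:27:eb:00:00:01 \
    power_type=manual
```

With `power_type=manual`, MaaS cannot power-cycle the board itself, so commissioning and deployment require physically plugging the Pi in at the right moment, which is part of what makes the Juju handoff awkward.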


This post documents my journey from idea to complete mini-data center on a table top.  I will continue to update as I make progress and as time allows.  Stay tuned.
