Are you working at a company with a multicloud presence? Is your team responsible for clusters, virtual machines, databases, and storage across several regions, used by dozens, hundreds, or thousands of team members, subcontractors, and clients? Are you asked weekly, "What the hell is this m7g.16xlarge EC2 instance in the me-central-1 region, who built it, when, and why?" Who is using this GKE cluster in GCP's asia-east1-a zone, and what the hell for? If this rings a bell, you are reliving my personal hell from years ago.

I was, among other things, in charge of spinning up and managing Kubernetes and OpenShift clusters for dozens of developers, QA testers, product managers, and UI designers. Doing this daily for anything other than a fixed, well-groomed production cluster, across a wide array of configurations and custom deployments (Kubernetes versions, cloud providers, container runtimes, node configurations and counts), is a daunting undertaking. So, as the tech-savvy and reasonable developer I strive to be, I started automating these deployments into a seamless web application. The application was a big success and saved many people within the organization a lot of time and money.

Although I fortunately (or perhaps not) moved on to another company and an altogether different position, that particular fire still burned inside me, so I decided to pursue this set of problems in an open-source setting. Having talked to quite a few people in the industry, I still feel there's a lot of complexity involved in managing non-production Kubernetes clusters for development and testing. If you need a short-lived cluster on multiple cloud providers, you need a level of expertise that already over-strained DevOps personnel may not possess.
So here comes Trolley: an open-source solution that lets you build a Kubernetes cluster on the three major cloud providers (AWS/GCP/Azure*). Out of the box, the application helps you spin up a test/dev cluster on those providers with the least amount of configuration possible. The cluster operates within a set timeframe (from 1 hour up to 7 days). You can create teams with users and allow them to build and manage their own clusters, and see exactly what everyone uses and where (regions, providers, etc.). You can also create clients and assign them clusters if you build clusters for promotional, development, or testing purposes. The app does quite a few additional things and will do even more in the near future, but for now I'd like to focus on the core functionality that has been well tested and looks ready to roll.
So what are the main parts of this thing? Let's do a head-first dive, shall we?
Registration
The registration screen should be self-explanatory, I guess. I'd like to replace it with something more off-the-shelf (who said Auth0?) in the near future, but for the moment, this'll do.
It's important to note that the first user to register will, rightfully, become the system admin and be tagged as a member of the IT team (congrats and good luck!).
Settings Menu
The next step is the Settings menu. Here you will be asked to add your AWS/GCP/Azure credentials, which Trolley uses to build and scan Kubernetes clusters (EKS/GKE/AKS*) on those respective clouds. You will also be asked to provide your forked Trolley repo and a GitHub token: all the build triggering and syncing is done through GitHub Actions, so you need to fork Trolley and generate a GitHub token for these options to work. Note that the Build options will be blocked in the GUI if you don't provide valid GitHub parameters.
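For a sense of what the token is for: triggering a build in your fork boils down to a `workflow_dispatch` call against the GitHub API. A minimal sketch — the owner, repo, and workflow file name below are placeholders, so check the actual workflow names in your forked Trolley repo:

```shell
# Placeholders -- substitute your own fork's details.
OWNER="your-github-user"
REPO="trolley"
WORKFLOW="build-gke.yml"
URL="https://api.github.com/repos/$OWNER/$REPO/actions/workflows/$WORKFLOW/dispatches"

# The actual trigger (needs a token with repo/workflow scopes):
# curl -X POST \
#   -H "Accept: application/vnd.github+json" \
#   -H "Authorization: Bearer $GITHUB_TOKEN" \
#   "$URL" -d '{"ref":"main"}'
echo "$URL"
```

This is the same endpoint the GUI presumably drives for you, which is why the Build options stay locked without valid GitHub parameters.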
Build menu
Next up is the Build menu. I tried to keep things here to the absolute minimum, but I plan to expand the compute type and storage values with additional options.
At the moment I use the following default values:
*Note that at the time of writing I am having some Azure issues, so your mileage may vary.
GKE:
--machine-type e2-medium
--disk-type pd-standard
--disk-size 100
EKS:
--machine-type m5.large
Azure:
TBD
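For context, the GKE defaults above map directly onto gcloud's flags; on the EKS side, eksctl's equivalent node flag is `--node-type`. A sketch of the roughly equivalent manual commands — cluster names here are illustrative, and Trolley drives these builds through GitHub Actions rather than your local shell:

```shell
# GKE: the defaults above are gcloud's own flags (cluster name is made up)
gcloud container clusters create trolley-test \
  --machine-type e2-medium --disk-type pd-standard --disk-size 100

# EKS: with eksctl, the node machine type is passed as --node-type
eksctl create cluster --name trolley-test --node-type m5.large
```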
Management menu
One of the nice things about Trolley is that it uses the credentials you provided to scan your clouds. A few minutes after you add the credentials, you should start seeing clusters that were already built under your account. If for some reason this doesn't work, press the Scan button.
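A scan amounts to listing managed clusters per provider. Roughly the same information you'd get from the cloud CLIs — illustrative only, since Trolley talks to the cloud SDKs directly rather than shelling out:

```shell
# GKE: list clusters with their locations
gcloud container clusters list --format="value(name,location)"

# EKS: list clusters in a given region
aws eks list-clusters --region me-central-1

# AKS: list clusters with name and location
az aks list --query "[].{name:name,location:location}" -o table
```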
The menus reveal some info about your clusters (even moar info soon). They also let you tag other members of your organization, or clients, so you can offload some of your mental load onto someone else. Note that if you are an admin you'll still see everything, because you are the man now, dog! If you feel like it, you can also delete clusters here; whether you succeed depends on your credentials' permissions in GCP/Azure/AWS. More on that in future blog posts.
There are many additional features here that I'll dive into once they materialize and mature.
Managing Users and Clients Menus
Assigning teams and clients in a multicloud context is another feature you won't get from your friendly cloud provider. You can let users and clients build and manage clusters on their desired provider, and all of this will be logged and monitored by you and any other assigned admins. You will be able to track who built what and where, and force a timed (TBD) or immediate deletion. You can also move ownership between people and clients. This is super convenient when a dev hands off a feature on a cluster to testing and back; in my experience it's super useful!
Get this thing into my VPC, now! But how?
So how does one simply obtain Trolley? Well, you can clone and fork it from here. Follow the steps I outlined there to spin it up locally using docker-compose and give it a go. I also plan to write up Kubernetes/App Engine/ECS(?) and VM-based deployments soon enough. If you have any ideas, suggestions, bugs, and yes, business enquiries as well (I gots kids to feed), please ping me at zagalsky@gmail.com.
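The local spin-up is the usual clone-and-compose dance. A hedged sketch — `<your-fork-url>` is a placeholder for your fork's clone URL, and the compose file name may differ in the actual repo, so defer to its README:

```shell
# Clone your fork of Trolley and bring it up locally.
git clone <your-fork-url> trolley
cd trolley
docker-compose up -d     # or: docker compose up -d, on newer Docker versions
```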
I've got a demo page as well, so def ping me if you just wanna see and not touch!