breadNET

Goals

  • Have a lab environment to learn tech
  • Have everything self-contained
  • Everything (or as close to it as possible) should be defined as code
  • FOSS

Technology used

I'd love to put the usual badges here, but since these are all custom I couldn't find or make them. The stack includes:

  • Xen Orchestra
  • DigitalOcean
  • Terraform
  • GitLab
  • Gitea
  • Linux
  • Nginx
  • Postfix
  • Dovecot
  • MySQL
  • BookStack
  • Wikimedia
  • Ansible
  • XCP-ng

Project Breakdown

So as I mentioned under goals, I wanted everything to be self-contained, so that if the lab broke it wouldn't wipe out my production environment.

I used XCP-ng with Xen Orchestra (XOA) as the on-premise virtualization platform, plus a golden image running Ubuntu 18.04 with my SSH keys baked in; Ansible then deployed the applications and the monitoring stack.
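
As a rough sketch (the group name, roles, and tasks here are illustrative, not my actual playbooks), an Ansible play configuring hosts cloned from the golden image might look like:

```yaml
# Hypothetical playbook sketch - group and package names are made up for illustration.
- name: Configure application hosts cloned from the golden image
  hosts: app_servers
  become: true
  tasks:
    - name: Ensure Nginx is installed
      apt:
        name: nginx
        state: present
        update_cache: true

    - name: Ensure the Zabbix agent is running and enabled
      service:
        name: zabbix-agent
        state: started
        enabled: true
```

Because the golden image already carries the SSH keys, Ansible can reach freshly cloned VMs straight away with no manual bootstrap step.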

For cloud deployment I used DigitalOcean and OVH, both managed via Terraform.
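
A minimal Terraform sketch for the DigitalOcean side (the resource name, region, and size are assumptions, not my actual config) looks roughly like:

```hcl
# Hypothetical example - names, region, and droplet size are illustrative only.
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
    }
  }
}

variable "do_token" {}
variable "ssh_key_fingerprint" {}

provider "digitalocean" {
  token = var.do_token
}

resource "digitalocean_droplet" "web" {
  name     = "web-1"
  region   = "lon1"
  size     = "s-1vcpu-1gb"
  image    = "ubuntu-18-04-x64"
  ssh_keys = [var.ssh_key_fingerprint]
}
```

Passing the same SSH key fingerprint used in the golden image means the cloud droplets drop straight into the existing Ansible workflow.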

Sadly, after moving countless times, I have had to sell off the lab (blog post coming soon).


What I did

I personally was responsible for:

  • Purchased the equipment
  • Scoped out the internal IP space to ensure that running k8s wouldn't cause conflicts (e.g. not using 10.0.0.0/8)
  • Set up the network and servers
  • Built golden images
  • Deployed VMs in bulk
  • Built Ansible playbooks
  • Deployed Ansible playbooks to hosts
  • Built an AWX server to update servers on a schedule
  • Integrated the on-premise AWX with GitHub to pull playbooks and run them against servers after each git push
  • Built a local code repo (Gitea, later migrated to GitLab)
  • Built a reverse proxy server
  • Built a mail server
  • Started the website
  • Built a local Kubernetes cluster
  • Migrated to the cloud
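
The scheduled update job above can be sketched as a simple patching playbook (an illustrative reconstruction, not my original) that AWX runs on a recurring schedule:

```yaml
# Hypothetical patching playbook - run by AWX against all hosts on a schedule.
- name: Apply pending package updates
  hosts: all
  become: true
  tasks:
    - name: Upgrade all packages on Ubuntu hosts
      apt:
        upgrade: dist
        update_cache: true

    - name: Check whether a reboot is required
      stat:
        path: /var/run/reboot-required
      register: reboot_required

    - name: Reboot if a restart is required
      reboot:
      when: reboot_required.stat.exists
```

Keeping the playbook in Git and letting AWX pull it on each run is what made the git-push-to-deploy integration possible.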


Issues I had to overcome

Server Monitoring

After running for a while, I started to realise that troubleshooting servers was a pain, and I had no good way to pinpoint where an issue actually was.

I set up Zabbix on my hardware to monitor the servers using the agent, and the network devices over SNMP.
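
On the server side, the agent needs very little configuration. A minimal `zabbix_agentd.conf` (the addresses and hostname here are placeholders, not my real values) is roughly:

```ini
# Hypothetical zabbix_agentd.conf fragment - addresses are placeholders.
Server=192.0.2.10          # Zabbix server allowed to poll this agent (passive checks)
ServerActive=192.0.2.10    # Server the agent pushes active checks to
Hostname=web-1.lab.local   # Must match the host name configured in the Zabbix frontend
```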

This was quite effective for many years, and I never looked back.