I want to be able to declaratively and consistently reproduce the virtual machines necessary for hosting my personal site and projects. Previously, I constructed my infrastructure by hand, submitting forms through the cloud vendor's web interface.
This post summarizes one day of experimentation. It contains my approach to spinning up a small virtual machine and its surrounding infrastructure, as well as the post-infrastructure scripts that I run to get a minimal, secure website public facing. Minimal and secure consists of:
a 2 vCPU, 40GB disk, 4096MB memory virtual machine provisioned through Cybera's OpenStack offering
instance attached to internal private network, as well as public network (ipv4 & ipv6)
security groups for restrictive inbound and outbound connections (a Terraform sketch follows this list)
additional 80GB storage volume for application data
various operating system and SSH hardening configurations applied to the instance
a basic NGINX instance up and running, serving a static website with Let's Encrypt SSL
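To give a flavour of what this looks like declared as code, here is a rough sketch of such a security group using the Terraform OpenStack provider; the names and rules are illustrative rather than the exact ones in my repository, and the remaining resources are covered in the Terraform section below.

# illustrative security group; additional rules (SSH, HTTP, egress) follow the same pattern
resource "openstack_networking_secgroup_v2" "web" {
  name        = "web"
  description = "restrictive rules for the public-facing web server"
}

resource "openstack_networking_secgroup_rule_v2" "ingress_https" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 443
  port_range_max    = 443
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.web.id
}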
This is a transitional process: I have not decommissioned my old server, and will continue to proxy-pass various functionality to it until the migration is fully complete. The ideal end state is that all my applications are declared as code. Source code is available at git.sr.ht/~udia/infra.
Terraform
I am using Terraform to specify my infrastructure as code, relying on the OpenStack Provider to interact with the resources OpenStack offers.
Although I currently only have one machine, I wanted to set it up on a private network with a subnet and router in addition to the default network provided by Cybera. The networking documentation/wiki serves as a useful starting point, although it is geared towards using the openstack cli.
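A minimal sketch of that private network, subnet, and router in HCL might look like the following (the external network name and the CIDR are placeholder assumptions, not necessarily what is in my repository):

# assumption: the external/provider network is named "public" on Cybera's cloud
data "openstack_networking_network_v2" "external" {
  name = "public"
}

resource "openstack_networking_network_v2" "private" {
  name           = "private"
  admin_state_up = true
}

resource "openstack_networking_subnet_v2" "private" {
  name       = "private-subnet"
  network_id = openstack_networking_network_v2.private.id
  cidr       = "10.0.10.0/24"   # placeholder CIDR
  ip_version = 4
}

resource "openstack_networking_router_v2" "router" {
  name                = "router"
  admin_state_up      = true
  external_network_id = data.openstack_networking_network_v2.external.id
}

resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.private.id
}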
The instance will be running Ubuntu 20.04 (the other provided options are older Ubuntu LTS releases and CentOS 7/8). TODO: Investigate using Packer to generate my own Debian image for VM provisioning.
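Similarly, the instance and the extra data volume can be sketched like this (the image, flavor, key pair, and volume names are placeholders; check what Cybera actually exposes):

resource "openstack_compute_instance_v2" "web" {
  name            = "web"
  image_name      = "Ubuntu 20.04"   # placeholder; use the image name Cybera provides
  flavor_name     = "m1.medium"      # placeholder for the 2 vCPU / 4096MB flavor
  key_pair        = "alex"           # placeholder key pair name
  security_groups = ["default", "web"]

  # a second network block can attach the instance to the public network as well
  network {
    name = openstack_networking_network_v2.private.name
  }
}

resource "openstack_blockstorage_volume_v3" "data" {
  name = "data"
  size = 80
}

resource "openstack_compute_volume_attach_v2" "data" {
  instance_id = openstack_compute_instance_v2.web.id
  volume_id   = openstack_blockstorage_volume_v3.data.id
}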
The remote-exec and local-exec provisioning steps are commented out here, as I manually assign a floating IP. (I don't want to put this into Terraform quite yet; I have a lot of Cloudflare DNS records set up that point at my fixed public ipv4 address, and I don't want to accidentally lose it to an errant terraform destroy.)
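The manual floating IP assignment itself is just a couple of OpenStack CLI calls (the server name and address below are placeholders):

openstack floating ip list                        # find the existing public ipv4 address
openstack server add floating ip web <PUBLIC_IPV4>   # attach it to the new instance ("web" is a placeholder name)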
After the Terraform script finishes, I use terraform show to obtain the network allocated IP addresses and ensure that my resources have been created to specification.
terraform plan -out tfplan
terraform apply tfplan
terraform show
Look for the lines indicating the fixed ipv4 address on the public network, as well as the ID of the storage volume.
Ensure that SSH-ing into this new instance occurs without any errors. This can be done by manually assigning the public floating ipv4 address to the new instance or proxy jumping through an existing instance (old server that I’m trying to axe). I use a .ssh/config entry to make this process less cumbersome (see Useful SSH).
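The entry looks roughly like this (host aliases, user, and addresses are illustrative):

Host old-web
    HostName udia.ca            # existing public-facing instance
    User ubuntu                 # placeholder user
    IdentityFile ~/.ssh/id_ed25519

Host new-web
    HostName 10.0.10.5          # placeholder private address of the new instance
    User ubuntu
    IdentityFile ~/.ssh/id_ed25519
    ProxyJump old-web           # jump through the old server until the floating ip moves over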
Ansible
I automated the configuration of my new instance using Red Hat Ansible.
(env) $ ansible-galaxy list
# /home/alexander/.ansible/roles
- nginxinc.nginx_config, 0.3.3
- nginxinc.nginx, 0.19.1
[WARNING]: - the configured path /usr/share/ansible/roles does not exist.
[WARNING]: - the configured path /etc/ansible/roles does not exist.
(env) $ ansible --version
ansible 2.10.6
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/alexander/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/alexander/sandbox/src/git.udia.ca/alex/udia-infra/env/lib/python3.7/site-packages/ansible
  executable location = /home/alexander/sandbox/src/git.udia.ca/alex/udia-infra/env/bin/ansible
  python version = 3.7.3 (default, Jul 25 2020, 13:03:44) [GCC 8.3.0]
Basic Operating System Setup
This play configures the new instance with a passwordless root user and my SSH key, and sets up automatic updates.
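The tasks amount to roughly the following (a sketch only; the authoritative play is in the repository, and the key path here is a placeholder):

- name: Ensure the root account has no usable password
  ansible.builtin.user:
    name: root
    password_lock: true

- name: Add my public key to the authorized keys
  ansible.posix.authorized_key:
    user: root
    state: present
    key: "{{ lookup('file', '~/.ssh/id_ed25519.pub') }}"   # placeholder key path

- name: Install unattended-upgrades for automatic security updates
  ansible.builtin.apt:
    name: unattended-upgrades
    state: present
    update_cache: true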
The volume mounting and binding has been commented out because the disk is not guaranteed to appear at /dev/sdc, and the UUID listed in the Terraform output does not appear to be associated with the disk. A possible workaround is sketched after the commented-out tasks below.
# - name: Create an ext4 filesystem on /dev/sdc
#   community.general.filesystem:
#     fstype: ext4
#     dev: /dev/sdc

# - name: Mount /dev/sdc to the server as /mnt/chocolate
#   ansible.posix.mount:
#     path: /mnt/chocolate
#     src: /dev/sdc
#     fstype: ext4
#     state: mounted
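One possible workaround, assuming the usual virtio convention holds on Cybera's cloud (an attached volume shows up under /dev/disk/by-id/virtio-<first 20 characters of the volume ID>, which is stable across reboots), would be something like:

- name: Create an ext4 filesystem on the attached volume
  community.general.filesystem:
    fstype: ext4
    # assumption: stable by-id path; the suffix is the first 20 chars of the volume ID (placeholder below)
    dev: /dev/disk/by-id/virtio-0123456789abcdef0123

- name: Mount the volume to the server as /mnt/chocolate
  ansible.posix.mount:
    path: /mnt/chocolate
    src: /dev/disk/by-id/virtio-0123456789abcdef0123
    fstype: ext4
    state: mounted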
NGINX Setup
This is a quick and dirty script for setting up the NGINX web server. I am currently proxy passing all of the traffic for udia.ca and the relevant static files back to my old instance, but this should be modified to serve the static files directly.
NOTE: The current configuration is modified in place by Let's Encrypt; any subsequent Ansible changes to the NGINX configuration will remove the Let's Encrypt SSL configuration. This needs to be made more robust.
I am not too happy with the approach used here to provision certificates. The operating-system-provided version of certbot has explicitly been deprecated in the official Let's Encrypt documentation, but I do not want to install snapd (their recommended route).
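For reference, the apt-based route boils down to something like the following (Ubuntu 20.04 package names; substitute your own domains):

sudo apt install certbot python3-certbot-nginx   # distribution-packaged certbot and its NGINX plugin
sudo certbot --nginx -d udia.ca                  # obtain a certificate and update the NGINX config in place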
This is an operational and optional next step, following the ansible-collection-hardening roles. I loosely inspected the tasks applied to the system; they are reasonable, but modifications to the SSH defaults were necessary to keep the proxy-pass to my existing server working (a limitation of having one ipv4 address).
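As a sketch, applying the roles with an override looks something like this (role names are from the dev-sec ansible-collection-hardening collection; the ssh_allow_tcp_forwarding variable is my assumption about which hardened default needs relaxing, and should be checked against the role's documentation):

- hosts: all
  become: true
  vars:
    # assumption: dev-sec ssh_hardening variable; relaxing it keeps SSH forwarding
    # (and therefore the jump/proxy path to the old server) working
    ssh_allow_tcp_forwarding: 'yes'
  roles:
    - devsec.hardening.os_hardening
    - devsec.hardening.ssh_hardening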
It was reasonably quick to spin up a new server and set up its configuration using Ansible. The experience of using Terraform is certainly more enjoyable than using Ansible, as many of the more advanced features in Ansible are provided by community-maintained packages (hardening, nginx). That being said, I do not see this eliminating the need for server administration; additional manual effort (and learning) still remains for effective use of these tools.