Nomad Cluster Notes
Last Updated: November 1st, 2021
Nomad CNI Plugin setup
The following is an example of using the CNI plugin to give one of your applications a dedicated IP address.
See this blog post for more information: https://www.hashicorp.com/blog/multi-interface-networking-and-cni-plugins-in-nomad-0-12.
In the steps below, replace ens192 with the name of your main network interface.
Download CNI Plugins
curl -L -o cni-plugins.tgz "https://github.com/containernetworking/plugins/releases/download/v1.0.0/cni-plugins-linux-$( [ "$(uname -m)" = aarch64 ] && echo arm64 || echo amd64)-v1.0.0.tgz"
sudo mkdir -p /opt/cni/bin
sudo tar -C /opt/cni/bin -xzf cni-plugins.tgz
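Since the download URL above is built from an inline arch check, it can be handy to factor that logic into a small function and print the URL before fetching it. This is just a sketch of the same arch-detection logic; the CNI_VERSION variable and function name are my own additions:

```shell
# Assumption: pin the release you want here (v1.0.0 matches the notes above).
CNI_VERSION="v1.0.0"

cni_url() {
  # $1 is the machine architecture as reported by `uname -m`.
  # aarch64 maps to the arm64 build; everything else falls back to amd64.
  local arch
  [ "$1" = "aarch64" ] && arch=arm64 || arch=amd64
  echo "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-${arch}-${CNI_VERSION}.tgz"
}

# Print the URL for this machine so it can be eyeballed before running curl.
cni_url "$(uname -m)"
```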
See https://www.nomadproject.io/docs/integrations/consul-connect#cni-plugins.
/etc/nomad.d/nomad.hcl
Specify the cni_path and cni_config_dir in the client stanza.
client {
  enabled        = true
  cni_path       = "/opt/cni/bin"
  cni_config_dir = "/opt/cni/config"
}
/opt/cni/config/unifi.conflist
{
  "cniVersion": "0.4.0",
  "name": "unifi",
  "args": {
    "cni": {
      "ips": ["10.0.10.128/24"]
    }
  },
  "plugins": [
    {
      "type": "macvlan",
      "master": "ens192",
      "ipam": {
        "type": "static",
        "addresses": [
          {
            "address": "10.0.10.128/24",
            "gateway": "10.0.10.1"
          }
        ],
        "routes": [
          { "dst": "0.0.0.0/0" }
        ],
        "dns": {
          "nameservers": ["10.0.0.2", "10.0.0.3"],
          "domain": "techstormpc.net",
          "search": ["techstormpc.net"]
        }
      }
    }
  ]
}
unifi.nomad jobspec
Under the job > group > network stanza, add:
network {
mode = "cni/unifi"
}
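For context, here is a minimal sketch of where that network block sits in a full jobspec. The job/group/task names, datacenter, and image are illustrative placeholders, not from the original notes:

```hcl
job "unifi" {
  datacenters = ["dc1"]  # assumption: replace with your datacenter name

  group "unifi" {
    # "cni/unifi" references the "unifi" conflist above, so the allocation
    # gets the dedicated IP (10.0.10.128) instead of a bridge-mode port map.
    network {
      mode = "cni/unifi"
    }

    task "unifi" {
      driver = "docker"
      config {
        image = "jacobalberty/unifi:latest"  # illustrative image only
      }
    }
  }
}
```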
Steps needed to ping the virtual IPs from the same server
By default, a host cannot reach macvlan interfaces attached to its own NIC, so create a host-side macvlan interface in bridge mode:
ip link add mac0 link ens192 type macvlan mode bridge
ip link set mac0 up
ip route add 10.0.10.128/26 dev mac0
Sharing storage across Nomad clients
This is an example of using NFS to share application storage across clients.
Add this line to /etc/fstab, replacing <NFS_SERVER> with your NFS server's IP:
<NFS_SERVER>:/mnt/SSDPool/appdata /mnt/appdata nfs auto,nofail,noatime,intr,tcp,bg,_netdev 0 0
You can also use this ansible task.
---
- name: Set up NFS Client
  hosts: nomad-clients
  remote_user: root
  tasks:
    - name: Install nfs
      apt:
        name: nfs-common
        state: present
    - name: Create mount dir
      file:
        path: /mnt/appdata
        state: directory
    - name: Mount an NFS volume
      ansible.posix.mount:
        src: <NFS_SERVER>:/mnt/SSDPool/appdata
        path: /mnt/appdata
        opts: auto,nofail,noatime,intr,tcp,bg,_netdev
        state: mounted
        fstype: nfs
Allowing Linux capabilities
Some containers require extra permissions (such as sending ICMP traffic); in my case I had to add the net_raw capability.
In the /etc/nomad.d/nomad.hcl file, add the following under the docker plugin:
plugin "docker" {
config {
allow_caps = ["audit_write", "chown", "dac_override", "fowner", "fsetid", "kill", "mknod",
"net_bind_service", "setfcap", "setgid", "setpcap", "setuid", "sys_chroot", "net_raw"]
}
}
In your jobspec, add the cap_add line under your task config:
task "librenms_cron" {
driver = "docker"
config {
image = "librenms/librenms:22.2.1"
cap_add = ["net_raw"]
}
}
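To confirm the capability actually took effect, you can read the CapEff bitmask from /proc/1/status inside the running container (e.g. via nomad alloc exec) and check bit 13, which is CAP_NET_RAW. The decoding helper below is my own sketch; the example mask is a typical Docker default effective-capability set:

```shell
# Decode a hex CapEff value and report whether CAP_NET_RAW (capability 13) is set.
has_net_raw() {
  # $1 is the hex mask, e.g. from: grep CapEff /proc/1/status
  local mask=$(( 16#$1 ))
  if (( (mask >> 13) & 1 )); then
    echo yes
  else
    echo no
  fi
}

# Inside the container (hypothetical alloc id):
#   nomad alloc exec <alloc_id> grep CapEff /proc/1/status
# might print something like:  CapEff: 00000000a80425fb
has_net_raw 00000000a80425fb
```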