Deploy Snipe It with Kubernetes

In the previous post, we covered Snipe-IT and how to deploy it with Docker. Since it is a simple product, let's see how to deploy this simple tool with Kubernetes.

First let’s introduce Kubernetes.

Kubernetes is an open-source platform for managing containerized workloads and services. Kubernetes manages containers resiliently: it takes care of scaling and failover for your application, provides deployment patterns, and more.

Kubernetes works with several components:

When you deploy Kubernetes, you get a cluster: a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.

The worker node(s) host the Pods that are the components of the application workload. The control plane manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple computers and a cluster usually runs multiple nodes, providing fault-tolerance and high availability.

Components of Kubernetes

You will find all the information needed at https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/

Now that we know more about Kubernetes, let's see how to deploy Snipe-IT on k8s.

The Snipe-IT Docker installation is documented at https://snipe-it.readme.io/docs/docker. We need two containers, MySQL and Snipe-IT, for the app to work.

That means we will have running pods for Snipe-IT and MySQL.

Let's create a folder app with all the component files; the first one will be for MySQL.

The file mariadb_deployments.yml will create two components for the database.

The first one is a Deployment, which provides declarative updates for Pods. The second one is a PersistentVolumeClaim (PVC), which is a request for storage by a user. It is similar to a Pod: Pods consume node resources and PVCs consume PersistentVolume (PV) resources. Pods can request specific levels of resources (CPU and memory); claims can request a specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany).

Let's take a look at the file:

apiVersion: apps/v1
kind: Deployment

metadata:
  name: snipedb
  labels:
    app: snipe
spec:
  selector:
    matchLabels:
      app: snipe
      tier: snipedb
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: snipe
        tier: snipedb
    spec:
      containers:
        - name: snipedb 
          image: mariadb
          resources:
            limits:
              memory: 512Mi
              cpu: "1"
            requests:
              memory: 256Mi
              cpu: "0.2"
          ports:
            - containerPort: 3306 
          volumeMounts:
            - name: snipedb-pvolume
              mountPath: /var/lib/mysql   # MariaDB data directory
          env: 
            - name: MYSQL_ROOT_PASSWORD
              value: "mysql_root_password"
            - name: MYSQL_DATABASE
              value: "snipe"
            - name: MYSQL_USER
              value: "snipeit"
            - name: MYSQL_PASSWORD
              value: "mysql_password"
      volumes:
        - name: snipedb-pvolume
          persistentVolumeClaim:
            claimName: snipedb-pvolume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: snipedb-pvolume
  labels:
    app: snipedb
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

The Deployment metadata name is snipedb and it's tagged with the label app: snipe. Labels are key/value pairs that are attached to objects, such as Pods. Labels can be used to organize and to select subsets of objects.
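
For example, once the pods are running, these labels let you select objects from the command line (a quick sketch; the label values come from the manifests above):

kubectl get pods -l app=snipe               # every pod of the application
kubectl get pods -l app=snipe,tier=snipedb  # only the database pod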

The .spec.strategy.type==Recreate setting will kill all the existing pods before creating new ones.

In .spec.template.spec we find all the configuration for the pods: we declare the volume snipedb-pvolume, which uses the PersistentVolumeClaim snipedb-pvolume. The container mounts this volume through .spec.template.spec.containers[].volumeMounts, which references the volume declared in .spec.template.spec.volumes.

The PersistentVolumeClaim .spec.accessModes is set to ReadWriteOnce. That means the volume can be mounted as read-write by a single node; multiple pods on that same node can still access the volume.

As with MySQL, we will create a Deployment and a PersistentVolumeClaim for the Snipe-IT app.

Now let's check the snipe_deployments.yml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: snipe
spec:
  selector:
    matchLabels:
      app: snipe 
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: snipe
        tier: frontend
    spec:
      containers:
        - name: snipe
          image: snipe/snipe-it:latest
          livenessProbe:
            httpGet:
              port: 80
          resources:
            limits:
              memory: 512Mi
              cpu: "1"
            requests:
              memory: 256Mi
              cpu: "0.2"
          ports:
            - containerPort: 80
          volumeMounts:
            - name: snipe-pvolume
              mountPath: /var/lib/snipeit     
          env:
            - name: APP_ENV
              value: "preproduction"
            - name: APP_DEBUG
              value: "true"
            - name: APP_KEY
              value: "base64:D5oGA+zhFSVA3VwuoZoQ21RAcwBtJv/RGiqOcZ7BUvI="
            - name: APP_URL
              value: "http://127.0.0.1:9000"
            - name: APP_TIMEZONE
              value: "Europe/Paris"
            - name: APP_LOCALE
              value: "en"
            - name: DB_CONNECTION
              value: "mysql"
            - name: DB_HOST
              value: "snipedb"
            - name: DB_DATABASE
              value: "snipe"
            - name: DB_USERNAME
              value: "snipeit"
            - name: DB_PASSWORD
              value: ""
            - name: DB_PORT
              value: "3306"
            - name: MAIL_PORT_587_TCP_ADDR
              value: "smtp.gmail.com"
            - name: MAIL_PORT_587_TCP_PORT
              value: "587"
            - name: MAIL_ENV_FROM_ADDR
              value: "mail@domain.com"
            - name: MAIL_ENV_FROM_NAME
              value: ""
            - name: MAIL_ENV_ENCRYPTION
              value: "tls"
            - name: MAIL_ENV_USERNAME
              value: "mail@domain.com" 
            - name: MAIL_ENV_PASSWORD
              value: ""         
      volumes:
        - name: snipe-pvolume
          persistentVolumeClaim:
            claimName: snipe-pvolume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: snipe-pvolume
  labels:
    app: snipe
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

The container image snipe/snipe-it:latest is used.

The .spec.template.spec.containers[].livenessProbe is used to know when to restart a container. With httpGet, the path defaults to / when only the port is given.
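
Snipe-IT can take a while to start, so you may want to give the probe some slack; a possible variant (the timing values below are illustrative, not part of the original manifest):

          livenessProbe:
            httpGet:
              path: /                  # "/" is also the default when path is omitted
              port: 80
            initialDelaySeconds: 30    # give the app time to boot before the first check
            periodSeconds: 10          # then probe every 10 seconds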

With snipe_deployments.yml and mariadb_deployments.yml we have the Deployments ready.

We now have to deploy Services to expose an application running on a set of Pods as a network service.

The snipe_services.yml file will be used to expose the Snipe-IT app port 80 and the database port 3306.

apiVersion: v1
kind: Service
metadata:
  name: snipe-entrypoint
  labels:
    app: snipe
spec:
  ports:
    - port: 80
  selector:
    app: snipe
    tier: frontend
  clusterIP: None

---
apiVersion: v1
kind: Service
metadata:
  name: snipedb
  labels:
    app: snipedb
spec:
  ports:
    - port: 3306
  selector:
    app: snipe
    tier: snipedb
  clusterIP: None
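
Both Services are headless (clusterIP: None). The snipedb Service is what makes the DB_HOST=snipedb hostname used by the Snipe-IT pod resolvable inside the cluster. A quick way to verify the DNS record from a throwaway pod (the busybox image is just an example):

kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup snipedb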

Now we should have a folder with three files snipe_deployments.yml, mariadb_deployments.yml and snipe_services.yml.

To create the application, run the following command from the command line:

kubectl apply -f ./

This command will apply all the files and create all the components.

Finally, add a port forward to the service with the command below; the APP_URL we configured (http://127.0.0.1:9000) matches this forwarded port:

kubectl port-forward service/snipe-entrypoint 9000:80

Use the commands kubectl get and kubectl describe to get more information on the running apps.
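
For example (a quick sketch; the resource names come from the manifests above):

kubectl get deployments,pods,pvc,services
kubectl describe deployment snipe
kubectl logs deployment/snipe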

https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands

Pritunl Zero installation and configuration

Install and configure Pritunl Zero

Pritunl Zero is a zero trust system that provides secure authenticated access to internal services from untrusted networks without the use of a VPN.

A service can be SSH or web; in this article we will see how to implement Pritunl Zero in an environment with Docker and Traefik.

Pritunl Zero installation 

Our environment is a hosted web server with Traefik as a proxy; Pritunl Zero will be installed in a container with docker-compose.

Let's take a look at the docker-compose file:

version: "3.7"
services:
 traefik:
    image: traefik
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "./traefik/letsencrypt:/letsencrypt"
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.mysmartserver.acme.dnschallenge=true"
      - "--certificatesresolvers.mysmartserver.acme.dnschallenge.provider=provider"
    environment:
      - "PROVIDER_ENDPOINT="
      - "PROVIDER_APPLICATION_KEY="
      - "PROVIDER_APPLICATION_SECRET="
      - "PROVIDER_CONSUMER_KEY="
 pritunl:
   image: "pritunl/pritunl-zero:latest"
   links:
     - pritunldb
   environment:
     - "MONGO_URI=mongodb://pritunldb:27017/pritunl-zero"
     - "NODE_ID=5b8e11e4610f990034635e98"
   ports:
     - '81:80'
     - '4444:4444'
     - '444:443'
 
   labels:
     - "traefik.enable=true"
     - "traefik.http.routers.traefikforward.tls.certresolver=mysmartserver"
     - "traefik.http.routers.pritunl.entrypoints=websecure,web"
     - "traefik.http.routers.pritunl.rule=Host(`zero.mysmartserver.com`)"
     - "traefik.http.routers.pritunl.rule=Host(`zero-users.mysmartserver.com`)"
 
 pritunldb:
   image: 'mongo'
   container_name: 'pritunldb'
   environment:
     - MONGO_INITDB_DATABASE="pritunl-zero"
     - MONGO_USER="mongo"
     - MONGO_PASS="password"
   ports:
     - '27017:27017'
   volumes:
     - mongo-data:/data/db
 
 
volumes:
 mongo-data:

The Traefik container listens on the HTTP and HTTPS ports of the server and also generates the SSL certificates with Let's Encrypt.

In the environment section we set the DNS provider credentials for the Let's Encrypt DNS challenge.

Pritunl Zero should be available on the HTTP and HTTPS ports, but they are already used by Traefik.

We therefore publish these ports as 81 and 444; port 4444 is not required, but we will use it later.

The container is linked to a MongoDB database where we create a pritunl-zero db; NODE_ID identifies the Pritunl Zero instance.

The labels section is managed by Traefik; we add two routes to reach the server:

  • zero.mysmartserver.com
  • zero-users.mysmartserver.com

That means we will create two SSL certificates for these two records.

Finally, the pritunldb database is hosted in a MongoDB container available on the classic port 27017.

We store the database in a volume on the host.

Configure Web service with Pritunl Zero

Once docker-compose is up, the Pritunl instance is available at zero.mysmartserver.com:444.

Pritunl Zero asks for a login and password.

Generate the default password with the command pritunl-zero default-password; with Docker, connect to the instance using docker exec.
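
A minimal sketch, assuming the compose service is named pritunl as above and the pritunl-zero binary is on the container's PATH:

docker-compose exec pritunl pritunl-zero default-password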

Connect to the interface and click on Certificates to set the certificates used by Pritunl.

Note: at this step your Pritunl instance serves an invalid certificate.

The purpose is to generate certificates for the admin console, but also for the service or the user interface for SSH access.

If you are using Let's Encrypt directly from Traefik, you can extract the certificates from acme.json and upload them to Pritunl.

The jq command will help you extract the certificate and the key:

cat acme.json | jq -r '.[].Certificates[] | select(.domain.main=="zero.mysmartserver.com") | .certificate' | base64 -d > zero.mysmartserver.com.crt

cat acme.json | jq -r '.[].Certificates[] | select(.domain.main=="zero.mysmartserver.com") | .key' | base64 -d > zero.mysmartserver.com.key

Once the certificate and the key are generated, copy them to the instance.

Two options are available for the certificate: use Let's Encrypt directly from Pritunl, or copy the certificate onto the instance as above.

For Let's Encrypt, your server will have to be reachable on ports 80 and 443.

Once pasted, the certificate and the key will appear in the Certificates text fields.

The next step is to configure the node parameters:

  • Management is the Pritunl admin console.
  • User is used if you want to connect to an SSH server through Pritunl.
  • Proxy allows you to reach internal HTTP/HTTPS resources from the web.
  • Bastion adds a layer to access your servers over SSH.

It's a secure way to access your servers without a VPN, and it also allows you to add MFA for contextual response in a zero trust environment.

For the example, we chose to set up a proxy service to reach an internal resource over HTTP/HTTPS.

Enable Management and Proxy and enter the management Name.

For this example I changed the management port to 4444 to align it with the container port.

My server will thus be reachable on 4444 instead of 443, which is already used by Traefik.

Add the generated certificate and save. 

You should now reach your server on port 4444 at the management URL.

The generated certificates are now correctly served by the server.

Go to the Services tab from the admin console and click on New

Indicate an external domain (depending on your DNS record, but it's not mandatory to indicate a host).

In the internal server field, indicate the internal resource you want to reach; for example, I add the container IP and the port I want to reach.

Add a role and click on save.

Back in the Nodes tab, select the service, add it and save your configuration.

Finally, go to the Users tab and add a user with the same role created on the service.

You have to enter the same roles as the service to allow the user.

The user type can be a user from an IDP provider; IDP users can use MFA (note that SSO and MFA are not free on Pritunl Zero, see https://zero.pritunl.com/#pricing).

Once saved, go to the service's external domain; you should land on a Pritunl login page with the correct SSL certificate.

Log in with the user previously created; you should be redirected to the target server.

Your internal service is now available through Pritunl Zero from the internet. If you want to add policies or MFA rules, you can create policies and assign them to specific roles and services.

A policy can help you restrict the service to certain networks or add specific parameters that must match for a service.

Snipe It Docker-Compose deployment

Snipe-IT is a lovely Free and Open Source (FOSS) project built on Laravel 5.7. Snipe-IT easily allows you to manage your IT assets with a powerful API.

Let's take a look at the docker-compose.yml file.

version: '3.7'
services:
  snipe:
    image: snipe/snipe-it:latest
    ports:
      - 8000:80
    env_file: ./snipe/env-file
    volumes:
      - './data:/var/lib/snipeit'
    links:
      - snipedb:snipedb
  snipedb:
    image: mysql:5.6
    env_file: ./snipe/env-file

NB: we used mysql:5.6, but for the latest version of Snipe-IT you can also use MariaDB.
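
For instance, the database service could be swapped for MariaDB like this (a sketch; the image tag and the host path for the data volume are assumptions, adjust them to your setup):

  snipedb:
    image: mariadb:10.5
    env_file: ./snipe/env-file
    volumes:
      - './db:/var/lib/mysql'   # persist the database outside the container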

The env file contains all the parameters you need to configure the DB and your Snipe-IT options; you will find an example below:

Env File Parameters

 #Mysql Parameters
 MYSQL_ROOT_PASSWORD=awesomepassword
 MYSQL_DATABASE=snipe
 MYSQL_USER=snipeit
 MYSQL_PASSWORD=awesomepassword

 # Email Parameters
 # - the hostname/IP address of your mailserver
 MAIL_PORT_587_TCP_ADDR=smtp server 
 #the port for the mailserver (probably 587, could be another)
 MAIL_PORT_587_TCP_PORT=587
 # the default from address, and from name for emails
 MAIL_ENV_FROM_ADDR=yourmail@domain.com
 MAIL_ENV_FROM_NAME=Snipe Alerting
 # - pick 'tls' for SMTP-over-SSL, 'tcp' for unencrypted
 MAIL_ENV_ENCRYPTION=tls
 # SMTP username and password
 MAIL_ENV_USERNAME=your mail login
 MAIL_ENV_PASSWORD=your mail password


 # Snipe-IT Settings
 APP_ENV=production
 APP_DEBUG=false
 APP_KEY=base64:
 APP_URL=url of your snipe it 
 APP_TIMEZONE=Europe/Paris
 APP_LOCALE=en
 DB_CONNECTION=mysql
 DB_HOST=snipedb
 DB_DATABASE=snipe
 DB_USERNAME=snipeit
 DB_PASSWORD=awesomepassword
 DB_PORT=3306
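
With the compose file and the env file in place, a minimal way to bring the stack up and follow its startup (run from the folder containing docker-compose.yml):

docker-compose up -d
docker-compose logs -f snipe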