Cloudflare Access and Argo tunnel configuration

Cloudflare Access replaces corporate VPNs with Cloudflare's network. Instead of placing internal tools on a private network, customers deploy them in any environment, including hybrid or multi-cloud models, and secure them consistently through Cloudflare.

Cloudflare Access is one-half of the Cloudflare for Teams suite of products. 

In this article, we will see how to implement Cloudflare Access and Argo Tunnel with an IdP, both from the Cloudflare dashboard and with Terraform.

For the example we chose Okta as the IdP; Okta is one of the leaders in IAM technology.

Argo Tunnel installation and configuration

In our case, we want to reach internal resources without a VPN. Argo Tunnel runs directly on an internal server and forwards the traffic to the targeted resources.

Install cloudflared on the server that will run the Argo Tunnel. The package installer is available directly from https://github.com/cloudflare/cloudflared/releases

Once installed, run the following command to log your cloudflared instance in to your Cloudflare tenant:

cloudflared tunnel login

Once validated, Cloudflare returns a cert.pem that allows you to create and delete tunnels and manage DNS records directly with cloudflared.

Create your first Argo Tunnel with the command:

cloudflared tunnel create <NAME>

Once created, you can list your tunnels with the command:

cloudflared tunnel list

You should see your newly created tunnel in the returned list.

Argo tunnel configuration

At this point you have created your Argo Tunnel, but you still have to configure it.

cloudflared reads a YAML config file at startup; the config file is generally located in ~/.cloudflared, /etc/cloudflared or /usr/local/etc/cloudflared

The config file should contain the tunnel ID and the path to the credentials file generated when the tunnel was created.

Here is an example of the config file:

tunnel: <tunnel UUID>
credentials-file: /root/.cloudflared/<tunnel UUID>.json
logfile: /var/log/cloudflared.log
hello-world: true
loglevel: debug
autoupdate-freq: 2h
 
ingress:
  - hostname: ssh.gitlab.domain.com
    service: ssh://localhost:22
  - hostname: web.gitlab.domain.com
    service: https://localhost:443
    # This "catch-all" rule doesn't have a hostname/path, so it matches everything
  - service: http_status:404

With this file, the tunnel lets us reach the target over SSH and HTTPS using two different hostnames.

Several flags are available in the config; let's take a look at the ones used here:

  • logfile → logs the tunnel activity
  • hello-world → runs a test server for validating the Argo Tunnel setup
  • loglevel → the verbosity of the logs; expected values are trace, debug, info, warn, error, fatal, panic
  • autoupdate-freq → the autoupdate frequency; the default is 24h

Ingress rules allow you to route traffic from multiple hostnames to multiple services through cloudflared and the Argo Tunnel.

With the previous file, we will access the GitLab web interface through web.gitlab.domain.com and SSH through ssh.gitlab.domain.com.

A catch-all service is required as the last rule; in this example, we return http_status:404.

Note that you can also add a path to a hostname if you want, as in the sketch below.
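
For instance, a minimal sketch of a path-based rule (the hostname and path below are illustrative; the path value is a regular expression):

ingress:
  # Hypothetical rule: only URLs under /admin on this hostname match
  - hostname: web.gitlab.domain.com
    path: /admin/.*
    service: https://localhost:443
  # Everything else still falls through to the catch-all rule
  - service: http_status:404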

The list of supported protocols is available in the Cloudflare ingress rules documentation.

You can validate your configuration and ingress rules with the command:

cloudflared tunnel ingress validate

This command verifies that the ingress rules specified in the file are valid.

cloudflared tunnel ingress rule https://web.gitlab.domain.com

This command tests the URL and shows which ingress rule matches it.

Route DNS traffic

As we saw previously, we will reach our target by hostname through cloudflared. That means we have to route the traffic from the Cloudflare DNS records to the Argo Tunnel instance.

We have two ways to do it; let's take a look at both:

Cloudflare Dashboard

From the Cloudflare dashboard, select the DNS tab and add a new CNAME record. The record will point to the target <tunnel UUID>.cfargotunnel.com, which is a domain available only through Cloudflare.

Click save to register.

CLI

As we saw previously, we can manage our Cloudflare DNS records once cloudflared is logged in with the certificate.

To add the record, simply use the following command:

cloudflared tunnel route dns <UUID or NAME> web.gitlab.domain.com
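
One step the commands above don't show is actually starting the tunnel. A minimal sketch, assuming the config file and tunnel name created earlier:

# Run the tunnel in the foreground using the config file created earlier
cloudflared tunnel run <NAME>

# Alternatively, install cloudflared as a system service so the tunnel starts automatically
sudo cloudflared service install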

Cloudflare Access configuration

At this point we have a working tunnel and DNS records to reach internal applications. What we want now is to gate access to the applications behind validation by our IdP and some access policies.

Go to the Cloudflare Teams dashboard and set up the IdP following https://developers.cloudflare.com/cloudflare-one/identity/idp-integration

Note that we will use Okta as the IdP in this example.

Once the IdP is added, go to the Applications tab, click Add an application, and select Self-hosted.

Enter an application name and the hostname created in the cloudflared config, and select the identity provider.

Click Next to continue to the policy rules.

Select the rule action Allow and include the Okta group test.

This group is an existing group on our IdP.

Click Next and then Add application; your application should now appear in the list.

Cloudflare app launcher

Now let's configure the App Launcher; this portal provides a dashboard listing the available Cloudflare Access applications.

From Authentication, select App Launcher and click Edit Access App Launcher.

Create a rule and add your IdP group.

You can rename your App Launcher using the auth domain.

Once configured, connect to your app with its hostname; you will be redirected to the App Launcher.

Click on the IdP logo to sign in.

Once logged in through the IdP, the user is redirected to the target.

To reach the target through SSH, the user has to install cloudflared on their computer and update their SSH configuration as described here: https://developers.cloudflare.com/cloudflare-one/applications/non-HTTP/ssh/ssh-connections#1-update-your-ssh-configuration-settings
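
As a minimal sketch of what that client-side configuration usually looks like (the path to cloudflared may differ on your system), the ~/.ssh/config entry proxies the connection through cloudflared:

Host ssh.gitlab.domain.com
  ProxyCommand /usr/local/bin/cloudflared access ssh --hostname %h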

Once configured, a classic SSH connection from the computer to the hostname redirects the user to the App Launcher in a browser; once the identity is validated, a token is returned to allow the connection through SSH.

Cloudflare Access configuration from Terraform

It is possible to manage your Cloudflare Access configuration directly from Terraform instead of the dashboard.

You will find below an example of the configuration in Terraform:

terraform {
 required_providers {
   cloudflare = {
     source  = "cloudflare/cloudflare"
     version = "~> 2.0"
   }
 }
}
# oauth Okta
resource "cloudflare_access_identity_provider" "okta_oauth" {
 account_id = "Cloudflare account id"
 name       = "Okta"
 type       = "okta"
 config {
   client_id     = "Client ID Okta app"
   client_secret = "secret Okta app"
   okta_account = "https://tenant.okta.com"
 }
}
 
resource "cloudflare_access_application" "gitlab_ssh" {
 zone_id                   = var.cloudflare_zone_id
 name                      = "Gitlab ssh"
 domain                    = "ssh.gitlab.domain.com"
 session_duration          = "24h"
 auto_redirect_to_identity = false
}
 
resource "cloudflare_access_application" "gitlab_web" {
 zone_id                   = var.cloudflare_zone_id
 name                      = "Gitlab web"
 domain                    = "web.gitlab.domain.com"
 session_duration          = "24h"
 auto_redirect_to_identity = false
}
 
 
resource "cloudflare_access_policy" "gitlab_ssh_policy" {
 application_id = cloudflare_access_application.gitlab_ssh.id
 zone_id        = var.cloudflare_zone_id
 name           = "gitlab ssh policy"
 precedence     = "1"
 decision       = "allow"
 
 include {
   okta {
     name                 = ["test"]
     identity_provider_id = cloudflare_access_identity_provider.okta_oauth.id
   }
 }
}
 
resource "cloudflare_access_policy" "gitlab_web_policy" {
 application_id = cloudflare_access_application.gitlab_web.id
 zone_id        = var.cloudflare_zone_id
 name           = "gitlab web policy"
 precedence     = "1"
 decision       = "allow"
 
 include {
   okta {
     name                 = ["test"]
     identity_provider_id = cloudflare_access_identity_provider.okta_oauth.id
   }
 }
}

In the previous example, the first block, cloudflare_access_identity_provider, configures the IdP.

The cloudflare_access_application blocks define the application configuration, and cloudflare_access_policy handles the policy configuration.

It is possible to export information from these three resources with Terraform output variables, as sketched below.
The Cloudflare Terraform provider documentation is available at https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs.
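
A minimal sketch of such outputs (the output names below are illustrative):

# Expose the IDs of the created resources so other modules or scripts can reuse them
output "okta_idp_id" {
 value = cloudflare_access_identity_provider.okta_oauth.id
}

output "gitlab_web_app_id" {
 value = cloudflare_access_application.gitlab_web.id
}

output "gitlab_web_policy_id" {
 value = cloudflare_access_policy.gitlab_web_policy.id
}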

Pritunl Zero installation and configuration

Install and configure Pritunl Zero

Pritunl Zero is a zero trust system that provides secure authenticated access to internal services from untrusted networks without the use of a VPN.

Services can be SSH or web; in this article we will see how to implement Pritunl Zero in an environment with Docker and Traefik.

Pritunl Zero installation 

Our environment is a hosted web server with Traefik as a proxy; Pritunl will be installed in a container with docker-compose.

Let's take a look at the docker-compose file:

version: "3.7"
services:
  traefik:
    image: traefik
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "./traefik/letsencrypt:/letsencrypt"
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.mysmartserver.acme.dnschallenge=true"
      - "--certificatesresolvers.mysmartserver.acme.dnschallenge.provider=provider"
      # Persist ACME data in the mounted volume so the certificates can be reused later
      - "--certificatesresolvers.mysmartserver.acme.storage=/letsencrypt/acme.json"
    environment:
      - "PROVIDER_ENDPOINT="
      - "PROVIDER_APPLICATION_KEY="
      - "PROVIDER_APPLICATION_SECRET="
      - "PROVIDER_CONSUMER_KEY="

  pritunl:
    image: "pritunl/pritunl-zero:latest"
    links:
      - pritunldb
    environment:
      - "MONGO_URI=mongodb://pritunldb:27017/pritunl-zero"
      - "NODE_ID=5b8e11e4610f990034635e98"
    ports:
      - "81:80"
      - "4444:4444"
      - "444:443"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.pritunl.tls.certresolver=mysmartserver"
      - "traefik.http.routers.pritunl.entrypoints=websecure,web"
      # One rule matching both hostnames (defining the same label twice would keep only the last value)
      - "traefik.http.routers.pritunl.rule=Host(`zero.mysmartserver.com`) || Host(`zero-users.mysmartserver.com`)"

  pritunldb:
    image: mongo
    container_name: pritunldb
    environment:
      - MONGO_INITDB_DATABASE=pritunl-zero
      - MONGO_USER=mongo
      - MONGO_PASS=password
    ports:
      - "27017:27017"
    volumes:
      - mongo-data:/data/db

volumes:
  mongo-data:

The Traefik container listens on the HTTP and HTTPS ports of the server and also generates the SSL certificates with Let's Encrypt.

In the environment section, we set the DNS provider credentials used for the Let's Encrypt DNS challenge.

Pritunl should also be exposed on HTTP and HTTPS, but those ports are already used by Traefik.

We therefore map these ports to 81 and 444; port 4444 is not required yet, but we will use it later.

The container is linked to a MongoDB database where we create a pritunl-zero database; NODE_ID identifies the Pritunl Zero instance.

The labels section is managed by Traefik; we add two routes to reach the server:

  • zero.mysmartserver.com
  • zero-users.mysmartserver.com

That means Traefik will create two SSL certificates, one for each record.

Finally, pritunldb is hosted in a MongoDB container available on the standard port 27017.

We store the database in a volume on the host.

Configure Web service with Pritunl Zero

Once docker-compose is up, the Pritunl instance is available at zero.mysmartserver.com:444.

Pritunl Zero asks for a login and password.

Generate the default password with the command pritunl-zero default-password; for a Docker setup, run it inside the container with docker exec, as in the sketch below.
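
A minimal sketch, assuming the compose file above is saved as docker-compose.yml (the container name is hypothetical; check docker ps for the real one):

# Bring the stack up in the background
docker-compose up -d

# Print the default admin credentials from inside the Pritunl Zero container
docker exec <pritunl container name> pritunl-zero default-password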

Log in to the interface and click on Certificates to set the certificates used by Pritunl.

Note that at this step your Pritunl instance still serves an invalid certificate.

The goal is to provide certificates for the admin console, but also for the services and the user interface used for SSH access.

If you are using Let's Encrypt directly from Traefik, you can extract the certificates from acme.json and upload them to Pritunl.

The jq command will help you extract the certificate and the key:

cat acme.json | jq -r '.[].Certificates[] | select(.domain.main=="zero.mysmartserver.com") | .certificate' | base64 -d > zero.mysmartserver.com.crt

cat acme.json | jq -r '.[].Certificates[] | select(.domain.main=="zero.mysmartserver.com") | .key' | base64 -d > zero.mysmartserver.com.key

Once the certificate and the key are extracted, copy them to the instance.

Two options are available for certificates: use Let's Encrypt from Pritunl, or copy the certificates directly onto the instance.

For Let's Encrypt, your server has to be reachable on ports 80 and 443.

Once pasted, the certificates appear in the Certificates tab.

The next step is to configure the node parameters:

  • Management → the Pritunl admin console
  • User → used if you want to connect to SSH servers through Pritunl
  • Proxy → allows you to reach internal HTTP/HTTPS resources from the web
  • Bastion → adds a layer in front of SSH access to your servers

Bastion is a secure way to access your servers without a VPN, and it also lets you add MFA as a contextual response in a zero trust environment.

For this example, we chose to set up a proxy service to reach an internal resource over HTTP/HTTPS.

Enable Management and Proxy and enter the management name.

For this example, I changed the management port to 4444 to align it with the container port mapping.

The management console is then available on port 4444 instead of 443, which is already used by Traefik.

Add the generated certificate and save. 

You should now reach your server on port 4444 at the management URL.

The generated certificates are now correctly served by the server.

Go to the Services tab in the admin console and click New.

Indicate an external domain (depending on your DNS record; it is not mandatory to indicate a host).

In the internal server field, indicate the internal resource you want to reach; for example, add the container IP and the port.

Add a role and click on save.

Back in the Nodes tab, select the service, add it, and save your configuration.

Finally, go to the Users tab and add a user with the same role created on the service.

You have to assign the same roles as the service to allow the user.

The user type can be a user from an IdP; IdP users can use MFA (note that SSO and MFA are not free on Pritunl Zero: https://zero.pritunl.com/#pricing).

Once saved, go to the service's external domain; you should land on a Pritunl login page with the correct SSL certificate.

Log in with the user previously created; you should be redirected to the target server.

Your internal service is now available from Pritunl Zero over the internet. If you want to add policies or MFA rules, you can create policies and assign them to specific roles and services.

A policy can help you restrict the service to specific networks or add specific parameters that must match for a service.

Zero Trust Approach

BYOD, mobile users and devices, and the cloud are now part of the IT landscape.

Classic security models viewed the network perimeter, often protected by firewalls and other on-prem solutions, as the ultimate line of defense. Users inside the enterprise network were considered trustworthy and given free rein to access company data and resources. Users outside the perimeter were considered untrustworthy and could not access resources.

This model is called castle-and-moat. In castle-and-moat security, it is hard to obtain access from outside the network, but everyone inside the network is trusted by default.

The problem with this approach is that once an attacker gains access to the network, they have free rein over everything inside.

Zero trust is a concept built to address this issue. Instead of considering people inside the network trustworthy by default, we trust no one: identity verification is required for every person and device trying to reach resources on the network.

Zero trust components 

Zero trust is not a specific technology; it is more a concept or an approach. That means we can build zero trust in many ways, but the tools mostly share the same functions.

As said previously, zero trust requires strong identity verification, so IAM is one of the main components. IAM (identity and access management) is a way to tell who users are and what they are allowed to do.

Identity management (IdM), also known as identity and access management (IAM or IdAM), is a framework of policies and technologies for ensuring that the proper people in an enterprise have the appropriate access to technology resources.

With IAM you can implement MFA (multi-factor authentication) in addition to a password, but also track access within the company.

IAM helps prevent identity-based attacks and data breaches that come from privilege escalation (when a user gains more access than they are authorized to have).

MFA, which requires more than one piece of evidence to authenticate a user, is a core element of zero trust.

IAM policies are built on user context (who the users are, whether they are in a risky user group) and application context (which application the user is trying to access).

It is also important to use microsegmentation, a method of creating zones in data centers and cloud environments to isolate workloads from one another and secure them individually.

Organizations use microsegmentation to reduce the network attack surface, improve breach containment and strengthen regulatory compliance.

A zero trust system also needs to monitor location context (where the user is logging in from, analysing impossible travel, checking for a new city/state/country).

Device context is a strict control on device access (Is it a new device? Is it a managed device?). An MDM can help you enforce a strong and efficient device context policy.

For example, you can require a computer enrolled (through BYOD or zero-touch) and compliant in your MDM in order to access the application.

Once these contexts are verified, you push a contextual response (allow access, require MFA, or block access).

Build a zero trust system 

Starting from an existing, classical infrastructure, you begin the zero trust implementation with an audit of your existing resources.

Analyse which applications can be moved to a cloud model (e.g. a classic file server, laptop identity).

You can publish internal services over HTTPS with tools like Pritunl Zero, Okta Access Gateway, Cloudflare Access or Azure Application Proxy, for example.

Improve or implement IAM for your identity system with an IdP (e.g. Google, Azure, Okta, FusionAuth).

Once IAM is implemented, it is important to create access policies and build the application and user context described previously.

To build a zero trust policy you will mainly rely on proxy-like tools; they give you access to the application securely over HTTPS while redirecting traffic through a controlled entry point.

Example of how Cloudflare Access works with an IdP

Build a device policy with an MDM system (mobile device management) for company devices and BYOD devices (Mosyle, Intune, Jamf, Workspace ONE).

Several IdPs (Okta, Azure or Google) can already communicate with these MDM systems.

Okta FastPass is linked to the MDM and can check registered devices.

NOTE: For Azure, communication with the MDM is done through Intune and is based on Conditional Access.

Intune devices can be verified in a Conditional Access policy.

Conclusion 

With this article we have seen the main components of the zero trust approach, and how we can consider moving our infrastructure to zero trust.

In the next articles, we will try to go deeper with concrete examples of zero trust implementations.