Tunnel deployment instructions

Prepare id and certificate

  1. Generate a new unique id for the customer's tunnel by running the following in Python:

    import uuid
    print(uuid.uuid4().hex)
  2. Download the tunnelproxy-ca.pem file from AWS Secrets Manager in the il-central-1 region of either the dev or production AWS account

The following steps will require the generated tunnel id and the downloaded tunnelproxy-ca.pem file, so make sure to keep them until the deployment is completed
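If helpful, the id generation from step 1 can be wrapped in a quick sanity check (a minimal sketch; the assertion just confirms the 32-character lowercase-hex format that uuid4().hex produces):

```python
import re
import uuid

# Generate the tunnel id exactly as in the preparation step.
tunnel_id = uuid.uuid4().hex

# uuid4().hex is always 32 lowercase hex characters with no dashes.
assert re.fullmatch(r"[0-9a-f]{32}", tunnel_id)
print(tunnel_id)
```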

Create a new Cloudflare tunnel for the customer

  1. Go to Cloudflare tunnels dashboard
  2. Click 'Create a tunnel'
  3. Choose type 'Cloudflared'
  4. Enter the tunnel id as the tunnel name and save
  5. Copy the tunnel access token after the tunnel is created
  6. Click next and under 'Published applications' tab add the following:
    • Subdomain - the tunnel id
    • Domain - legion-tunnel.com
    • Service type - http
    • Service URL - localhost:8080
  7. Click 'Complete setup' to finish creating the tunnel
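For reference, the values entered in step 6 combine into the tunnel's public hostname (a sketch; the id below is only an example value):

```python
tunnel_id = "0f8fad5bd9cb469fa1655f8ce966a8e5"  # example id from the preparation step

# Subdomain (the tunnel id) plus the Domain from step 6 form the public hostname;
# cloudflared forwards requests for it to the local service URL.
hostname = f"{tunnel_id}.legion-tunnel.com"
service_url = "http://localhost:8080"

assert hostname == "0f8fad5bd9cb469fa1655f8ce966a8e5.legion-tunnel.com"
```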

The following steps will require the tunnel access token, so make sure to keep it until the deployment is completed

Update Cloudflare Zero Trust policy

  1. Go to the Backend public IPs list in Cloudflare
  2. Add the 2 public IP addresses of our backend in the relevant AWS region (if they're not already there) to the list and save the list

Create dedicated customer proxy instance

  1. In the infra repository's __main__.py server infra file, find the list of proxy_instances for the customer's region and add a proxy instance for the customer. The instance name should be the Legion customer id (org_...).

  2. Merge the change and deploy the infra

  3. In the AWS portal, navigate to EC2 -> Instances under our AWS production account and the customer's region, and select the tunnel proxy EC2 instance by its name. Note down the instance's private DNS name

  4. Click 'Connect', switch to 'Session Manager' tab and click 'Connect' to connect to the instance

  5. (Optional but recommended: run bash in the opened terminal to use a bash shell instead of the default, less convenient Ubuntu shell)

  6. Create a folder for the proxy's files under /usr/proxy by running:

    sudo mkdir /usr/proxy
    cd /usr/proxy
  7. Copy the following files to the /usr/proxy folder:

    • The mitm_script.py, mitmproxy.service and .env files from the /tunnel_proxy folder in this repository. Make sure to update the values in the .env file: the tunnel DNS host is taken from the tunnel creation step (it should be <tunnel-id>.legion-tunnel.com), and the client id and secret are taken from AWS Secrets Manager in the il-central-1 region under 'infra_secrets' as TUNNEL_PROXY_CLIENT_ID and TUNNEL_PROXY_CLIENT_SECRET.
    • The tunnelproxy-ca.pem file (downloaded from AWS Secrets Manager in the preparation step)

    Verify that all files were copied by ensuring ls -a /usr/proxy shows the 4 files (the -a flag is needed because .env is hidden).

    (The fastest way to copy the files is to copy them as text - open them in a text editor on your machine, copy all text, run sudo nano <filename> in the EC2 instance, right click, paste, save)

  8. Prepare the expected folder structure and permissions:

    sudo mkdir /usr/proxy/certs
    sudo mv /usr/proxy/tunnelproxy-ca.pem /usr/proxy/certs/mitmproxy-ca.pem
    sudo chown -R ssm-user:ssm-user .
  9. Install mitmproxy:

    sudo apt update
    sudo apt install python3-pip libffi-dev libssl-dev -y
    pip3 install --upgrade pip --break-system-packages
    pip3 install --no-cache-dir mitmproxy==11.0.2 --break-system-packages
  10. Configure mitmproxy to run automatically on system boot (in case the VM restarts):

    sudo mv mitmproxy.service /etc/systemd/system/mitmproxy.service
    sudo systemctl daemon-reexec
    sudo systemctl daemon-reload
    sudo systemctl enable mitmproxy
    sudo systemctl start mitmproxy
  11. Verify everything is configured correctly by checking the service status and ensuring it is 'active (running)' (written in green). If there's an issue, the errors should be shown in the command output:

    sudo systemctl status mitmproxy
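The file layout produced by steps 6-8 can also be sanity-checked with a short script before starting the service (a sketch, assuming the /usr/proxy paths described above):

```python
import os

# Expected contents of /usr/proxy after steps 6-8: the three copied files
# plus the certs folder holding the renamed CA file.
REQUIRED = {"mitm_script.py", "mitmproxy.service", ".env", "certs"}

def missing_entries(proxy_dir="/usr/proxy"):
    """Return the expected entries that are missing from the proxy folder."""
    present = set(os.listdir(proxy_dir))
    missing = sorted(REQUIRED - present)
    cert = os.path.join(proxy_dir, "certs", "mitmproxy-ca.pem")
    if "certs" not in missing and not os.path.isfile(cert):
        missing.append("certs/mitmproxy-ca.pem")
    return missing
```

An empty result means the layout matches; anything listed still needs to be copied or moved.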

If the service is showing errors or failed to load, there are a few ways to debug the issue:

  1. Copy the exec command from the mitmproxy.service file and run it directly from the terminal. (Note: this execution method doesn't load environment variables, so ignore errors related to them)
  2. View and tail service logs by running sudo journalctl -u mitmproxy.service -f

After making any changes run sudo systemctl restart mitmproxy to restart the service and run the status command again to see if the issue was resolved.

The following steps will require the EC2 instance's private DNS name, so make sure to keep it until the deployment is completed

Prepare files to provide the customer

  1. Prepare the following folder structure to provide the customer:

    • config folder containing:

      • dlp_config.json file, exported from the /settings API for the customer organization. It should contain only a list of rules with 'mask', 'regex' and 'flags' fields, and can be exported by running dev_tools/export_customer_dlp_settings.py in the backend repo. Example output:

        [
          {
            "mask": "Clinical Trial",
            "regex": "\\b(NCT|nct)\\d{8}\\b",
            "flags": ["g"]
          }
        ]

        Important: for the following rules, regexes must use (?:^|\\s)\\b as prefix and \\b as suffix:

        • Credit Card Number
        • US Fax Number
        • US Phone Number
        • Med License
        • Med Record

        Otherwise they can match random strings in JS files returned through the tunnel and break flows.

    • .env file with: (fill in the tunnel access token from tunnel creation step)

      TUNNEL_TOKEN=<tunnel-access-token>
      DLP_CONFIG_FILE_PATH=/config/dlp_config.json

      Optionally, if the customer has internal servers we need to call over HTTPS that use certificates from a custom CA, the customer can create a folder with the cer/crt files of the custom CA and add the following to the .env file (any path is ok as long as it's volume-mounted to the container):

      EXTERNAL_TRUSTED_CERTS_DIR=/config/custom_ca

      Optionally, if the customer's upstream requires mutual TLS (client certificate authentication), set the following environment variable and mount the referenced folder to include per-profile certificate/key files:

      CLIENT_CERTS_DIR=/config/client_certs

      Expected structure example:

      config/client_certs/<profile-name>/cert.crt
      config/client_certs/<profile-name>/key.pem
  2. Zip the folder and provide it to the customer securely (since it contains the access token and a certificate with its private key)
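The regex anchoring requirement above can be illustrated with Python's re module (a sketch; the pattern below is a made-up 16-digit example in the required shape, not the production Credit Card rule):

```python
import re

# A fragment similar to what minified JS returned through the tunnel can contain.
js_fragment = 'var h="ab4111111111111111cd";'

unanchored = r"\d{16}"
anchored = r"(?:^|\s)\b\d{16}\b"  # required prefix (?:^|\s)\b and suffix \b

# The unanchored pattern matches inside the random string and would get masked,
# corrupting the returned JS; the anchored pattern matches only standalone values.
assert re.search(unanchored, js_fragment)
assert not re.search(anchored, js_fragment)
assert re.search(anchored, "card 4111111111111111 on file")
```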

Tunnel execution on the customer side

  1. The tunnel should be run by executing the following from the unzipped folder:

    sudo docker login legiontunnel.azurecr.io --username <LegionTunnelAcrCustomerPull-application id> --password <LegionTunnelAcrCustomerPull-application secret>

    sudo docker run -d \
    --env-file .env \
    -v $(pwd)/config:/config \
    --name legion-tunnel \
    --pull always \
    --restart unless-stopped \
    legiontunnel.azurecr.io/legion-tunnel:latest

    Notes:

    • Docker installation instructions by OS can be found in the Docker documentation.
    • Application ID and secret for LegionTunnelAcrCustomerPull can be found in 1Password under 'LegionTunnelAcrCustomerPull'.
  2. After the customer has successfully run the docker container, verify the container is running correctly by:

    • Having the customer run sudo docker ps -a to verify the container started correctly and is running
    • Having the customer run sudo docker logs <container id> (with the container id from the docker ps command) to verify that there are no error logs from the container
    • Checking the tunnel status in Cloudflare tunnels dashboard - it should be marked with a green 'Healthy' state

    These checks will verify the tunnel is connected and the container is running, but we have no way to verify the DLP proxy until we try to run an automation through it

If there are issues with the tunnel, we can ask the customer to add VERBOSITY=debug to the .env file and restart the container, so that docker logs will show much more verbose output for us to debug the issue.

Enable proxy for autonomous investigations​

  1. Update MongoDB with the customer's tool-to-proxy mapping so the relevant tools in autonomous mode will go through the created proxy. Do this by running the dev_tools/add_proxy.py file and specifying:

    • The customer id
    • The relevant tool that needs the proxy (run script more than once for multiple tools)
    • The EC2 instance's private DNS name as the proxy url, which is the DNS name with an 'http://' prefix and port 1380
    • Proxy type 'tunnel'.

    For example:

    add_proxy(
        customer_id="org_customerid",
        tool="TheHive",
        type="tunnel",
        proxy_url="http://ip-172-31-28-15.il-central-1.compute.internal:1380",
    )

    The next autonomous run using a skill from this tool should go through the tunnel.

    Verify in the tunnel and worker logs that the run was successful, or find and fix issues according to the error logs
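The proxy url passed to the script is derived mechanically from the private DNS name noted in the EC2 step; a small sketch (the mapping dict below is a hypothetical illustration of the values involved, not the actual schema written by dev_tools/add_proxy.py):

```python
# Private DNS name noted down in the EC2 step (example value from above).
private_dns = "ip-172-31-28-15.il-central-1.compute.internal"

# The proxy url is the DNS name with an 'http://' prefix and port 1380.
proxy_url = f"http://{private_dns}:1380"

# Hypothetical shape of the resulting tool-to-proxy mapping (illustrative only).
mapping = {
    "customer_id": "org_customerid",
    "tool": "TheHive",
    "type": "tunnel",
    "proxy_url": proxy_url,
}

assert proxy_url == "http://ip-172-31-28-15.il-central-1.compute.internal:1380"
```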