visionone-file-security-helm

TrendMicro Vision One™ File Security Helm Chart User Guide

Prerequisites

Installation

To use the containerized scanner, deploy the Helm chart containing the scanner server and management service in your Kubernetes cluster by following these steps:

1. Create secrets with the registration token

Create the secrets using the following commands (replace _your-v1-registration-token_ with your Vision One registration token):

kubectl create namespace visionone-filesecurity
kubectl create secret generic token-secret --from-literal=registration-token="_your-v1-registration-token_" -n visionone-filesecurity
kubectl create secret generic device-token-secret -n visionone-filesecurity

Validate the secrets

Get the secrets using the following command; the output should include the two secrets token-secret and device-token-secret:

kubectl get secret -n visionone-filesecurity

2. Download the Helm chart containing the scanner from the GitHub repository:

helm repo add visionone-filesecurity https://trendmicro.github.io/visionone-file-security-helm/
helm repo update

3. Verify that the Helm chart has been signed and is valid:

Download the public key file and import:

curl -o public-key.asc https://trendmicro.github.io/visionone-file-security-helm/public-key.asc

gpg --import public-key.asc

[!WARNING] GnuPG v2 stores your public keyring in a new format (kbx) at the default location ~/.gnupg/pubring.kbx. Use the following command to convert your keyring to the legacy gpg format (reference: Helm Provenance and Integrity):

gpg --export >~/.gnupg/pubring.gpg

Verify that the chart has been signed and is valid:

helm pull --verify visionone-filesecurity/visionone-filesecurity

4. Install the Helm Chart

Install the chart with the release name my-release:

helm install my-release visionone-filesecurity/visionone-filesecurity -n visionone-filesecurity

This deploys the scanner, scan cache, backend communicator, and Management Service components into the visionone-filesecurity namespace.

Download and install a File Security SDK or the File Security CLI

You need to install the SDK or CLI to retrieve the scan results from the scanner. For more information on installing the SDKs and CLI, see:

Verify that the scanner is working using the CLI

Scan a file from another pod using the Trend Micro File Security CLI, with the scanner service name as the endpoint:

./tmfs scan file:./eicar.com.txt --tls=false --endpoint=my-release-visionone-filesecurity-scanner:50051
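The eicar.com.txt file used above is the standard EICAR anti-malware test file. If you do not already have one, you can generate it locally; the string is harmless by design:

```shell
# Create the 68-byte EICAR test file.
# Single quotes keep the shell from expanding the $-sequences in the string.
printf '%s' 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*' > eicar.com.txt
```

Any up-to-date scanner should flag this file, which makes it a safe way to confirm detection end to end.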

Notes:

Uninstalling the Helm Chart

Uninstall Helm Chart

helm uninstall my-release -n visionone-filesecurity

Optional Configuration

Scanner Configuration Management

The scanner service uses a ConfigMap to store its configuration settings. The Management Service provides a CLISH interface to dynamically update scanner configurations without requiring pod restarts.

Scanner ConfigMap

The scanner configuration is stored in a ConfigMap specified in the values.yaml:

visiononeFilesecurity:
  scanner:
    configMapName: scanner-config

This ConfigMap is automatically created during Helm installation and can be managed through the Management Service CLISH interface.

Dynamic Configuration Updates

The Management Service can update scanner configurations in real-time:

Example configuration management via Management Service CLISH:

# exec into the management service pod
kubectl exec -it deploy/scanner-visionone-filesecurity-management-service -- bash

# then modify scan policy by using clish
clish scanner scan-policy modify --max-decompression-layer=10

Plugin Configuration

When plugins are enabled, additional ConfigMaps and Secrets are created for plugin-specific configurations:

visiononeFilesecurity:
  management:
    plugins:
      - name: ontap-agent
        enabled: true
        configMapName: ontap-agent-config
        securitySecretName: ontap-agent-security
        jwtSecretName: ontap-agent-jwt

These resources are managed by the Management Service and allow for secure storage of plugin credentials and settings.

Resource Configuration

The Helm chart includes default resource requests for each pod component that define the minimum resource requirements. These values are set based on typical workload requirements but can be adjusted based on your specific needs.

Current Default Resource Requests

The following default resource requests are configured in values.yaml:

Scanner Pod (scanner.resources.requests):

Scan Cache Pod (scanCache.resources.requests):

Backend Communicator Pod (backendCommunicator.resources.requests):

Management Service Pod (managementService.resources.requests):

Customizing Resource Requests

You can modify the resource requests during installation by using the --set flags. You can also modify the requests by overriding the values in your custom values.yaml file:

# Example: Increase scanner resources for high-throughput scanning
scanner:
  resources:
    requests:
      cpu: 1200m
      memory: 4Gi

# Example: Increase scan cache memory for larger cache requirements
scanCache:
  resources:
    requests:
      cpu: 500m
      memory: 512Mi

# Example: Increase management service resources for high-traffic management operations
managementService:
  resources:
    requests:
      cpu: 500m
      memory: 512Mi
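
The backend communicator pod accepts the same override shape; the figures below are illustrative, not recommended minimums:

```yaml
# Example: increase backend communicator resources (illustrative values)
backendCommunicator:
  resources:
    requests:
      cpu: 500m
      memory: 512Mi
```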

Or by using the Helm command line:

helm install my-release visionone-filesecurity/visionone-filesecurity \
  -n visionone-filesecurity \
  --set scanner.resources.requests.cpu=1200m \
  --set scanner.resources.requests.memory=4Gi

Note: It is recommended to only increase these resource requests from the defaults. Decreasing them below the minimum requirements may result in performance issues or pod failures.

Autoscaling Configuration

The Helm chart supports Horizontal Pod Autoscaling (HPA) for the scanner component to automatically scale the number of scanner pods based on CPU and memory utilization.

Autoscaling works best when combined with appropriate resource requests and limits. Monitor your workload patterns to determine optimal scaling thresholds.

Prerequisites for Autoscaling

Horizontal Pod Autoscaling requires the Kubernetes Metrics Server (or another Metrics API provider) in the cluster so the HPA can read pod CPU and memory utilization.

Default Autoscaling Settings

Autoscaling is disabled by default. The following settings are configured in values.yaml:

Scanner Autoscaling (scanner.autoscaling):

Enabling Autoscaling

You can enable autoscaling during installation by using the --set flags. You can also enable autoscaling by overriding the values in your custom values.yaml file:

# Example: Enable autoscaling with custom thresholds
scanner:
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 10
    targetCPUUtilizationPercentage: 70
    targetMemoryUtilizationPercentage: 80

Or by using the Helm command line:

helm install my-release visionone-filesecurity/visionone-filesecurity \
  -n visionone-filesecurity \
  --set scanner.autoscaling.enabled=true \
  --set scanner.autoscaling.maxReplicas=10 \
  --set scanner.autoscaling.targetCPUUtilizationPercentage=70

Monitoring Autoscaling

You can monitor the HPA status using:

# Check HPA status
kubectl get hpa -n visionone-filesecurity

# Get detailed HPA information
kubectl describe hpa -n visionone-filesecurity

Proxy Configuration

visiononeFilesecurity.proxyUrl

visiononeFilesecurity.noProxy

Using a Custom values.yaml File

Customizing the values.yaml file allows you to configure optional settings such as proxyUrl. Download the default values.yaml file, modify it, and use it during the Helm chart installation.

Steps:

  1. Download the default values.yaml file:
    helm show values visionone-filesecurity/visionone-filesecurity > values.yaml
    
  2. Open the values.yaml file in a text editor and modify the optional configurations. For example:
    visiononeFilesecurity:
      proxyUrl: "http://proxy.example.com:8080"
      noProxy: "localhost,127.0.0.1,.svc.cluster.local"
    
  3. Save the modified values.yaml file.

  4. Install the Helm chart using the modified values.yaml file:
    helm install my-release visionone-filesecurity/visionone-filesecurity -n visionone-filesecurity -f values.yaml
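
As an alternative to editing the full defaults file, you can keep a minimal override file containing only the settings you change; Helm merges it over the chart defaults. The file name here is illustrative, and the values repeat the proxy example from step 2:

```shell
# Write a minimal override file with only the changed settings
cat > proxy-overrides.yaml <<'EOF'
visiononeFilesecurity:
  proxyUrl: "http://proxy.example.com:8080"
  noProxy: "localhost,127.0.0.1,.svc.cluster.local"
EOF
```

Pass it to Helm with -f proxy-overrides.yaml exactly as in step 4.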
    

Scanner Service Ingress Setup Guide

Trend Micro provides step-by-step guides for exposing the scanner service via Kubernetes Ingress, for both local development environments and AWS EKS production deployments. Each guide covers prerequisites, configuration, deployment, and testing instructions.

The scanner service can be configured with a dedicated host or share a host with the Management Service. When sharing a host, the Management Service uses the /ontap path while the scanner service handles all other traffic.

Shared Host Configuration

When both scanner and Management Service use the same host, configure the ingress paths as follows:

scanner:
  ingress:
    enabled: true
    hosts:
      - host: scanner.example.com
        paths:
          - path: /
            pathType: Prefix

managementService:
  ingress:
    enabled: true
    hosts:
      - host: scanner.example.com
        paths:
          - path: /ontap
            pathType: Prefix

The ingress controller will prioritize the longer /ontap path for Management Service requests, while routing all other traffic to the scanner service.

Environment-Specific Setup Guides

Choose the appropriate guide for your use case:

Note: Both guides can be adapted to support the shared host configuration by applying the ingress settings shown above.

Management Service

The Helm chart includes a Management Service component that provides WebSocket endpoints for plugin integrations. This service exposes WebSocket connections for real-time communication with third-party integrations.

Management Service Features

Configuration

The Management Service is configured in the values.yaml file under the managementService section:

managementService:
  replicaCount: 1
  image:
    repository: your-registry/management-service
    tag: "1.3.4"
  service:
    type: ClusterIP
    port: 8080          # HTTP management port (internal use only)
    ontapWsPort: 8081   # WebSocket port for ONTAP agent plugin
  resources:
    requests:
      cpu: 250m
      memory: 256Mi

Ingress Configuration for Management Service

The Management Service can share the same ingress host as the scanner service by using different paths. The ingress controller will route requests based on path matching, with longer paths taking priority.

Shared Host Configuration

Configure both services to use the same host with different paths:

# Scanner service - handles all traffic not matching /ontap
scanner:
  ingress:
    enabled: true
    hosts:
      - host: scanner.example.com
        paths:
          - path: /
            pathType: Prefix

# Management service - handles /ontap traffic
managementService:
  ingress:
    enabled: true
    hosts:
      - host: scanner.example.com
        paths:
          - path: /ontap
            pathType: Prefix

With this configuration, requests to /ontap on scanner.example.com are routed to the Management Service, while all other traffic on the host is routed to the scanner service.

Management Service Plugin Support

The Management Service supports plugins for third-party integrations. Configure plugins in the values.yaml:

visiononeFilesecurity:
  management:
    dbEnabled: true
    plugins:
      - name: ontap-agent
        enabled: true
        configMapName: ontap-agent-config
        securitySecretName: ontap-agent-security
        jwtSecretName: ontap-agent-jwt

When plugins are enabled, the Management Service will automatically create the necessary ConfigMaps and Secrets for plugin configuration.

Database Storage Configuration

The chart can run the Management Service with or without the bundled PostgreSQL database:

Turn on the managed database by setting the value under the same hierarchy you already use for other Vision One options:

visiononeFilesecurity:
  management:
    dbEnabled: true

Once the flag is true, also review the databaseContainer block to choose the persistence mode that matches your cluster.

[WARNING] Immutable Configuration: The database uses a Kubernetes StatefulSet with immutable fields. Once deployed, the following values.yaml settings cannot be changed without deleting and recreating the StatefulSet:

databaseContainer:
  storageClass:
    hostPath:
  persistence:
    storageClassName:
    size: 

Quick start: local development (hostPath)

The defaults in values.yaml already ship with a StorageClass that binds to a hostPath on the worker node. Enable the DB and install:

helm upgrade --install v1fs ./visionone-filesecurity \
  --namespace visionone-filesecurity \
  --create-namespace \
  --set visiononeFilesecurity.management.dbEnabled=true

This path keeps everything local to the node, so data disappears if the node is deleted; it is suitable for demos only. The precise location on disk is controlled by the databaseContainer.storageClass.hostPath setting in values.yaml:

databaseContainer:
  storageClass:
    create: true
    name: visionone-filesecurity-storage
    hostPath: /mnt/data/postgres  # change this if your nodes expose a different path

If you run Kubernetes locally, update the hostPath to a directory that exists on every worker node before enabling the database.
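
For example, a minimal override that keeps the bundled StorageClass but points it at a different node directory (the path below is illustrative):

```yaml
databaseContainer:
  storageClass:
    create: true
    name: visionone-filesecurity-storage
    hostPath: /var/lib/v1fs-postgres  # must exist on every worker node
```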

Quick start: AWS EKS (EBS gp3)

EKS clusters typically already have an EBS-backed StorageClass such as gp3. Reuse it by turning off the bundled hostPath StorageClass and pointing the PVC at gp3:

visiononeFilesecurity:
  management:
    dbEnabled: true

databaseContainer:
  storageClass:
    create: false          # reuse an existing EBS StorageClass
  persistence:
    storageClassName: gp3  # any EBS-backed StorageClass name works
    size: 100Gi

What the booleans mean:

databaseContainer.storageClass.create: when true, the chart creates its bundled hostPath-backed StorageClass; when false, the PVC uses an existing StorageClass already present in the cluster.

visiononeFilesecurity.management.dbEnabled: when true, the chart deploys the bundled PostgreSQL database alongside the Management Service.

Equivalent Helm CLI example:

helm upgrade --install v1fs ./visionone-filesecurity \
  --namespace visionone-filesecurity \
  --create-namespace \
  --set visiononeFilesecurity.management.dbEnabled=true \
  --set databaseContainer.storageClass.create=false \
  --set databaseContainer.persistence.storageClassName=gp3 \
  --set databaseContainer.persistence.size=100Gi

Checklist before enabling on EKS

See the AWS EKS Storage Setup Guide for step-by-step driver installation, IAM policy snippets, and troubleshooting tips (PVC Pending, IAM errors, AZ mismatch, etc.).

Accessing the Management Interface

Once deployed with ingress enabled, you can access the management interface at:

ICAP Scanner Service Setup Guide

To make the ICAP Scanner Service accessible outside your Kubernetes cluster, you can expose it using a Kubernetes LoadBalancer service type. This is useful for integrating with external applications or security gateways that require ICAP protocol support. The setup process varies depending on your environment.

For detailed, step-by-step instructions, refer to the appropriate guide for your deployment:

Basic Testing with ICAP Scanner Service

You can test the ICAP server functionality by port forwarding the service to your local machine:

# Forward the ICAP port to your local machine
kubectl port-forward -n visionone-filesecurity svc/my-release-visionone-filesecurity-scanner 1344:1344

In a separate terminal, install the ICAP client and test the connection:

# Install c-icap-client (if not already installed)
sudo apt-get install c-icap

# Test file scanning
c-icap-client -i localhost -p 1344 -s scan -v -f sample.txt -x "X-scan-file-name: sample.txt"

Releases

You can find the latest release notes here.