To use our containerized scanner, deploy our Helm chart containing the scanner server and management service in your Kubernetes cluster by following these steps:
Create secrets using the following commands (replace _your-v1-registration-token_ with your v1 registration token):
kubectl create namespace visionone-filesecurity
kubectl create secret generic token-secret --from-literal=registration-token="_your-v1-registration-token_" -n visionone-filesecurity
kubectl create secret generic device-token-secret -n visionone-filesecurity
Get the secrets using the following command (the output should include the two secrets token-secret and device-token-secret):
kubectl get secret -n visionone-filesecurity
helm repo add visionone-filesecurity https://trendmicro.github.io/visionone-file-security-helm/
helm repo update
Download the public key file and import it:
curl -o public-key.asc https://trendmicro.github.io/visionone-file-security-helm/public-key.asc
gpg --import public-key.asc
[!WARNING] GnuPG v2 stores your keyring in a new format (kbx) at the default location ~/.gnupg/pubring.kbx. Use the following command to convert your keyring to the legacy gpg format (reference: Helm Provenance and Integrity):
gpg --export >~/.gnupg/pubring.gpg
Verify that the chart has been signed and is valid:
helm pull --verify visionone-filesecurity/visionone-filesecurity
Install the chart with the release name my-release:
helm install my-release visionone-filesecurity/visionone-filesecurity -n visionone-filesecurity
This will deploy the scanner, scan cache, backend communicator, and management service components into the visionone-filesecurity namespace.
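You can confirm the release is up by listing its pods and services; this is a quick check assuming the default namespace and release name used in the steps above:
# list the pods and services created by the chart
kubectl get pods -n visionone-filesecurity
kubectl get svc -n visionone-filesecurity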
You need to install the SDK or CLI to retrieve the scan results from the scanner. For more information on installing the SDKs and CLI, see:
Scan a file from another pod using the Trend Micro File Security CLI with the scanner service name as the endpoint:
./tmfs scan file:./eicar.com.txt --tls=false --endpoint=my-release-visionone-filesecurity-scanner:50051
Note that my-release-visionone-filesecurity-scanner:50051 is only accessible within the same Kubernetes namespace (visionone-filesecurity) by default. The management service exposes my-release-visionone-filesecurity-management-service:8081 for ONTAP agent WebSocket connections.
To uninstall the chart:
helm uninstall my-release -n visionone-filesecurity
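If you want to try a scan from your workstation instead of another pod, one option is to port-forward the scanner service and point the CLI at localhost. This is a sketch assuming the default release and namespace names above:
# forward the gRPC scan port to your local machine
kubectl port-forward -n visionone-filesecurity svc/my-release-visionone-filesecurity-scanner 50051:50051
# in another terminal, run the same CLI scan against the forwarded port
./tmfs scan file:./eicar.com.txt --tls=false --endpoint=localhost:50051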
The scanner service uses a ConfigMap to store its configuration settings. The Management Service provides a CLISH interface to dynamically update scanner configurations without requiring pod restarts.
The scanner configuration is stored in a ConfigMap specified in the values.yaml:
visiononeFilesecurity:
  scanner:
    configMapName: scanner-config
This ConfigMap is automatically created during Helm installation and can be managed through the Management Service.
The Management Service can update scanner configurations in real-time:
Example configuration management via Management Service CLISH:
# open a shell in the management-service pod
kubectl exec -it deploy/my-release-visionone-filesecurity-management-service -n visionone-filesecurity -- bash
# then modify the scan policy using clish
clish scanner scan-policy modify --max-decompression-layer=10
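To inspect the current scanner configuration directly, you can also read the ConfigMap itself; this assumes the default configMapName shown above:
kubectl get configmap scanner-config -n visionone-filesecurity -o yaml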
When plugins are enabled, additional ConfigMaps and Secrets are created for plugin-specific configurations:
visiononeFilesecurity:
  management:
    plugins:
      - name: ontap-agent
        enabled: true
        configMapName: ontap-agent-config
        securitySecretName: ontap-agent-security
        jwtSecretName: ontap-agent-jwt
These resources are managed by the Management Service and allow for secure storage of plugin credentials and settings.
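You can verify that these plugin resources exist after installation; a quick check assuming the ontap-agent names from the example above:
kubectl get configmap,secret -n visionone-filesecurity | grep ontap-agent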
The Helm chart includes default resource requests for each pod component that define the minimum resource requirements. These values are set based on typical workload requirements but can be adjusted based on your specific needs.
The following default resource requests are configured in values.yaml:
Scanner Pod (scanner.resources.requests): CPU 800m, memory 2Gi
Scan Cache Pod (scanCache.resources.requests): CPU 250m, memory 128Mi
Backend Communicator Pod (backendCommunicator.resources.requests): CPU 250m, memory 128Mi
Management Service Pod (managementService.resources.requests): CPU 250m, memory 256Mi

You can modify the resource requests during installation by using the --set flags. You can also modify the requests by overriding the values in your custom values.yaml file:
# Example: Increase scanner resources for high-throughput scanning
scanner:
  resources:
    requests:
      cpu: 1200m
      memory: 4Gi

# Example: Increase scan cache memory for larger cache requirements
scanCache:
  resources:
    requests:
      cpu: 500m
      memory: 512Mi

# Example: Increase management service resources for high-traffic management operations
managementService:
  resources:
    requests:
      cpu: 500m
      memory: 512Mi
Or by using the Helm command line:
helm install my-release visionone-filesecurity/visionone-filesecurity \
-n visionone-filesecurity \
--set scanner.resources.requests.cpu=1200m \
--set scanner.resources.requests.memory=4Gi
Note: It is recommended to only increase these resource requests from the defaults. Decreasing them below the minimum requirements may result in performance issues or pod failures.
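To confirm which requests are actually applied to the running pods, you can query them with kubectl; the column expressions below are one possible way to surface the values:
kubectl get pods -n visionone-filesecurity \
  -o custom-columns=NAME:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu,MEM_REQ:.spec.containers[*].resources.requests.memory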
The Helm chart supports Horizontal Pod Autoscaling (HPA) for the scanner component to automatically scale the number of scanner pods based on CPU and memory utilization.
Autoscaling works best when combined with appropriate resource requests and limits. Monitor your workload patterns to determine optimal scaling thresholds.
Autoscaling relies on the Kubernetes Metrics Server to read CPU and memory metrics. Install it if it is not already present in your cluster:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
Verify the Metrics Server is running:
kubectl get deployment metrics-server -n kube-system
kubectl top nodes # This should return node metrics
Autoscaling is disabled by default. The following settings are configured in values.yaml:
Scanner Autoscaling (scanner.autoscaling):
enabled: false (autoscaling is disabled by default)
minReplicas: 1 (minimum number of scanner pods)
maxReplicas: 5 (maximum number of scanner pods)
targetCPUUtilizationPercentage: 80 (target CPU utilization to trigger scaling)
targetMemoryUtilizationPercentage: 80 (target memory utilization to trigger scaling)

You can enable autoscaling during installation by using the --set flags. You can also enable autoscaling by overriding the values in your custom values.yaml file:
# Example: Enable autoscaling with custom thresholds
scanner:
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 10
    targetCPUUtilizationPercentage: 70
    targetMemoryUtilizationPercentage: 80
Or by using the Helm command line:
helm install my-release visionone-filesecurity/visionone-filesecurity \
-n visionone-filesecurity \
--set scanner.autoscaling.enabled=true \
--set scanner.autoscaling.maxReplicas=10 \
--set scanner.autoscaling.targetCPUUtilizationPercentage=70
You can monitor the HPA status using:
# Check HPA status
kubectl get hpa -n visionone-filesecurity
# Get detailed HPA information
kubectl describe hpa -n visionone-filesecurity
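To watch scaling activity as it happens, you can follow the HPA and its recent events:
# watch replica counts change in real time
kubectl get hpa -n visionone-filesecurity -w
# review recent scaling decisions
kubectl get events -n visionone-filesecurity --field-selector involvedObject.kind=HorizontalPodAutoscaler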
The visiononeFilesecurity.proxyUrl setting supports HTTP (http://) and SOCKS5 (socks5://) proxies. The default is "" (no proxy is used). This setting applies the HTTPS_PROXY environment variable to all pods in the Helm chart. Examples (HTTP and SOCKS5 proxies):
visiononeFilesecurity:
  proxyUrl: "http://proxy.example.com:8080"
visiononeFilesecurity:
  proxyUrl: "socks5://proxy.example.com:1080"
No proxy is used when proxyUrl is left empty. Use http://username:password@proxy.example.com:port or socks5://username:password@proxy.example.com:port for authenticated proxies.

The visiononeFilesecurity.noProxy setting defaults to "localhost,127.0.0.1,.svc.cluster.local". It applies the NO_PROXY environment variable to all pods in the Helm chart and ensures that internal Kubernetes services and local traffic bypass the proxy. Example:
visiononeFilesecurity:
  noProxy: "localhost,127.0.0.1,.svc.cluster.local,.example.com"
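The same proxy settings can also be passed on the Helm command line; note that commas inside --set values must be escaped with a backslash:
helm install my-release visionone-filesecurity/visionone-filesecurity \
  -n visionone-filesecurity \
  --set visiononeFilesecurity.proxyUrl="http://proxy.example.com:8080" \
  --set visiononeFilesecurity.noProxy="localhost\,127.0.0.1\,.svc.cluster.local"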
Some in-cluster requests target internal service IPs (for example, 172.20.0.1). Make sure the cluster service CIDR (for example, 172.20.0.0/16) is included in your noProxy setting to prevent these internal requests from being routed through the proxy.

Customizing the values.yaml File
Customizing the values.yaml file allows you to configure optional settings such as proxyUrl. Download the default values.yaml file, modify it, and use it during the Helm chart installation.
Download the default values.yaml file:
helm show values visionone-filesecurity/visionone-filesecurity > values.yaml
Open the values.yaml file in a text editor and modify the optional configurations. For example:
visiononeFilesecurity:
  proxyUrl: "http://proxy.example.com:8080"
  noProxy: "localhost,127.0.0.1,.svc.cluster.local"
Save the modified values.yaml file.
Install the Helm chart using the modified values.yaml file:
helm install my-release visionone-filesecurity/visionone-filesecurity -n visionone-filesecurity -f values.yaml
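You can confirm which overrides are active on the release at any time:
helm get values my-release -n visionone-filesecurity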
Trend Micro provides step-by-step guides for exposing the scanner service via Kubernetes Ingress, for both local development environments and AWS EKS production deployments. Each guide covers prerequisites, configuration, deployment, and testing instructions.
The scanner service can be configured with a dedicated host or share a host with the Management Service. When sharing a host, the Management Service uses the /ontap path while the scanner service handles all other traffic.
When both scanner and Management Service use the same host, configure the ingress paths as follows:
scanner:
  ingress:
    enabled: true
    hosts:
      - host: scanner.example.com
        paths:
          - path: /
            pathType: Prefix

managementService:
  ingress:
    enabled: true
    hosts:
      - host: scanner.example.com
        paths:
          - path: /ontap
            pathType: Prefix
The ingress controller will prioritize the longer /ontap path for Management Service requests, while routing all other traffic to the scanner service.
Choose the appropriate guide for your use case:
Note: Both guides can be adapted to support the shared host configuration by applying the ingress settings shown above.
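After applying either guide, a quick way to confirm the ingress resources and their hosts and paths is:
kubectl get ingress -n visionone-filesecurity
kubectl describe ingress -n visionone-filesecurity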
The Helm chart includes a Management Service component that provides WebSocket endpoints for plugin integrations. This service exposes WebSocket connections for real-time communication with third-party integrations.
The Management Service is configured in the values.yaml file under the managementService section:
managementService:
  replicaCount: 1
  image:
    repository: your-registry/management-service
    tag: "1.3.4"
  service:
    type: ClusterIP
    port: 8080 # HTTP management port (internal use only)
    ontapWsPort: 8081 # WebSocket port for ONTAP agent plugin
  resources:
    requests:
      cpu: 250m
      memory: 256Mi
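For a quick local check of the WebSocket port without ingress, you can port-forward the Management Service; this is a sketch assuming the default service name pattern used earlier in this guide:
kubectl port-forward -n visionone-filesecurity svc/my-release-visionone-filesecurity-management-service 8081:8081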
The Management Service can share the same ingress host as the scanner service by using different paths. The ingress controller will route requests based on path matching, with longer paths taking priority.
Configure both services to use the same host with different paths:
# Scanner service - handles all traffic not matching /ontap
scanner:
  ingress:
    enabled: true
    hosts:
      - host: scanner.example.com
        paths:
          - path: /
            pathType: Prefix

# Management service - handles /ontap traffic
managementService:
  ingress:
    enabled: true
    hosts:
      - host: scanner.example.com
        paths:
          - path: /ontap
            pathType: Prefix
With this configuration:
Requests to https://scanner.example.com/ontap/* will be routed to the Management Service WebSocket endpoint.
Requests to https://scanner.example.com/* (not matching /ontap) will be routed to the Scanner Service.
The longer path takes priority, so /ontap requests are handled correctly.

The Management Service supports plugins for third-party integrations. Configure plugins in the values.yaml:
visiononeFilesecurity:
  management:
    dbEnabled: true
    plugins:
      - name: ontap-agent
        enabled: true
        configMapName: ontap-agent-config
        securitySecretName: ontap-agent-security
        jwtSecretName: ontap-agent-jwt
When plugins are enabled, the Management Service will automatically create the necessary ConfigMaps and Secrets for plugin configuration.
The chart can run the Management Service with or without the bundled PostgreSQL database:
Disabled (dbEnabled: false) – no PostgreSQL objects (StatefulSet, PVC, Service, Secret, StorageClass) are created. Use this only for stateless tests or when another team deploys and wires up a database outside this chart.
Enabled (dbEnabled: true) – Helm deploys the PostgreSQL StatefulSet, exposes it through an internal ClusterIP Service, and provisions persistent storage for you.

Turn on the managed database by setting the value under the same hierarchy you already use for other Vision One options:
visiononeFilesecurity:
  management:
    dbEnabled: true
Once the flag is true, also review the databaseContainer block to choose the persistence mode that matches your cluster.
[!WARNING] Immutable Configuration: The database uses a Kubernetes StatefulSet with immutable fields. Once deployed, the following values.yaml settings cannot be changed without deleting and recreating the StatefulSet:
databaseContainer.storageClass.hostPath
databaseContainer.persistence.storageClassName
databaseContainer.persistence.size
The defaults in values.yaml already ship with a StorageClass that binds to a hostPath on the worker node. Enable the DB and install:
helm upgrade --install v1fs ./visionone-filesecurity \
--namespace visionone-filesecurity \
--create-namespace \
--set visiononeFilesecurity.management.dbEnabled=true
This path keeps everything local to the node, so data disappears if the node is deleted—suitable for demos only.
The precise location on disk is controlled by the databaseContainer.storageClass.hostPath
setting in values.yaml:
databaseContainer:
  storageClass:
    create: true
    name: visionone-filesecurity-storage
    hostPath: /mnt/data/postgres # change this if your nodes expose a different path
If you run Kubernetes locally, update the hostPath to a directory that exists on every worker node before enabling the database.
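For example, if you keep the default path from the snippet above, create it on every worker node before installing:
# run on each worker node (adjust the path if you changed hostPath)
sudo mkdir -p /mnt/data/postgres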
EKS clusters typically already have an EBS-backed StorageClass such as gp3. Reuse it by turning off the bundled hostPath StorageClass and pointing the PVC at gp3:
visiononeFilesecurity:
  management:
    dbEnabled: true

databaseContainer:
  storageClass:
    create: false # reuse an existing EBS StorageClass
  persistence:
    storageClassName: gp3 # any EBS-backed StorageClass name works
    size: 100Gi
What the booleans mean:
databaseContainer.storageClass.create: true – Helm installs the helper StorageClass definition from templates/database-container/storageclass.yaml (hostPath-based by default). Use this when you do not already have a suitable StorageClass in the cluster.
databaseContainer.storageClass.create: false – skip creating that helper resource and reuse an existing StorageClass (EBS gp3, NFS, Ceph, etc.) referenced by databaseContainer.persistence.storageClassName.

Equivalent Helm CLI example:
helm upgrade --install v1fs ./visionone-filesecurity \
--namespace visionone-filesecurity \
--create-namespace \
--set visiononeFilesecurity.management.dbEnabled=true \
--set databaseContainer.storageClass.create=false \
--set databaseContainer.persistence.storageClassName=gp3 \
--set databaseContainer.persistence.size=100Gi
Checklist before enabling on EKS
The AWS EBS CSI driver is installed (kubectl get storageclass shows an ebs.csi.aws.com provisioner).
An EBS-backed StorageClass (gp3, gp2, etc.) already exists; create it if needed.

See the AWS EKS Storage Setup Guide for step-by-step driver installation, IAM policy snippets, and troubleshooting tips (PVC Pending, IAM errors, AZ mismatch, etc.).
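Once the database is enabled, you can check that the volume bound and the StatefulSet is ready:
kubectl get pvc -n visionone-filesecurity
kubectl get statefulset -n visionone-filesecurity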
Once deployed with ingress enabled, you can access the management interface at:
wss://scanner.example.com/ontap/ (for ONTAP agent)

To make the ICAP Scanner Service accessible outside your Kubernetes cluster, you can expose it using a Kubernetes LoadBalancer service type. This is useful for integrating with external applications or security gateways that require ICAP protocol support. The setup process varies depending on your environment.
For detailed, step-by-step instructions, refer to the appropriate guide for your deployment:
You can test the ICAP server functionality by port forwarding the service to your local machine:
# Forward the ICAP port to your local machine
kubectl port-forward -n visionone-filesecurity svc/my-release-visionone-filesecurity-scanner 1344:1344
In a separate terminal, install the ICAP client and test the connection:
# Install c-icap-client (if not already installed)
sudo apt-get install c-icap
# Test file scanning
c-icap-client -i localhost -p 1344 -s scan -v -f sample.txt -x "X-scan-file-name: sample.txt"
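If you need a harmless detection sample for the scan tests in this guide (such as sample.txt or eicar.com.txt), you can create one from the industry-standard EICAR test string:
# create the EICAR antivirus test file (a harmless, standardized test string)
printf '%s' 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*' > sample.txt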
You can find the latest release notes here.