Installation with Kubernetes

Kubernetes is a supported deployment target for Txture. We provide container images that can be deployed on any Kubernetes cluster. This guide describes a production-oriented deployment: no hostPath volumes, no broad permissions, and security-hardened containers.

Sizing and hardware requirements are detailed in the System Requirements.

tip

For parameterized and repeatable deployments, consider using the Helm chart instead.

I. Prerequisites

Before you begin, ensure you have the following:

  • Kubernetes cluster: Kubernetes 1.27+ with at least one available StorageClass.
  • kubectl: Install and configure kubectl against your cluster.
  • envsubst: Available on most Linux/macOS systems (part of gettext). Used to substitute parameters into the manifests.
  • Permissions: Cluster permissions to create Namespaces, Secrets, PVCs, Deployments, and Services.
  • Txture Home Directory Contents: You need the initial contents of the txture_home directory. These are provided by your Txture contact.
caution

Some commands in this step-by-step guide need to be adjusted before use: placeholders must be replaced with actual values for your environment.


II. Installation Steps

Having fulfilled all prerequisites, follow the steps below to set up and run Txture on Kubernetes. You provide a parameter file (txture.env) and use envsubst to inject the values into the manifests before applying them.

1. Create namespace

Create a dedicated namespace for Txture:

kubectl create namespace txture

2. Create parameter file

Create a file named txture.env on your local machine. This file controls credentials and sizing, similar to the .env file in the Docker guide. Refer to the System Requirements to determine the correct sizing for your workload.

Template for txture.env
# Database Credentials
POSTGRES_USER=txture
POSTGRES_PASSWORD=YOUR_SECURE_PASSWORD_HERE
POSTGRES_DB=txture

# PostgreSQL hostname (use the bundled service name, or set to your external database host)
POSTGRES_HOST=postgres

# Txture Sizing (32GB RAM / 10 Cores example)
TXTURE_MEMORY_LIMIT=32Gi
TXTURE_MEMORY_REQUEST=16Gi
TXTURE_CPU_REQUEST=2000m
TXTURE_CPU_LIMIT=10000m

# PostgreSQL Sizing
POSTGRES_MEMORY_REQUEST=256Mi
POSTGRES_MEMORY_LIMIT=1Gi
POSTGRES_CPU_REQUEST=100m
POSTGRES_CPU_LIMIT=1000m

# Storage
TXTURE_STORAGE_SIZE=130Gi
POSTGRES_STORAGE_SIZE=20Gi
caution

All parameters must be defined in your txture.env file. Unlike shell scripts, envsubst does not support default values — undefined parameters result in empty strings, which cause deployment errors.
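Because envsubst silently turns undefined parameters into empty strings, it can help to verify the parameter file up front. The following is a small sketch (not part of the official tooling); the variable list mirrors the one passed to envsubst in step 4 and must be extended if you add parameters:

```shell
# Sketch: check that every parameter the manifests expect is present and
# non-empty in a parameter file, before running envsubst.
check_env_file() {
  # $1: path to the parameter file (e.g. txture.env)
  required="POSTGRES_USER POSTGRES_PASSWORD POSTGRES_DB POSTGRES_HOST \
TXTURE_MEMORY_LIMIT TXTURE_MEMORY_REQUEST TXTURE_CPU_REQUEST TXTURE_CPU_LIMIT \
TXTURE_STORAGE_SIZE POSTGRES_STORAGE_SIZE \
POSTGRES_MEMORY_REQUEST POSTGRES_MEMORY_LIMIT POSTGRES_CPU_REQUEST POSTGRES_CPU_LIMIT"
  missing=0
  for v in $required; do
    # A parameter counts as set only if the file has a line VAR=<non-empty value>.
    if ! grep -Eq "^${v}=.+" "$1"; then
      echo "missing or empty: $v"
      missing=1
    fi
  done
  return $missing
}

# Usage: check_env_file txture.env && echo "OK to deploy"
```

Run it against your txture.env before step 4; it prints each missing or empty parameter and returns a non-zero exit code if any are found.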

3. Create the Kubernetes manifests

Create a file named txture-manifests.yaml with the following content. This file defines all resources (Secret, PVCs, Deployments, Services) and uses ${VARIABLE} placeholders that are filled in by envsubst during deployment. You generally do not need to edit this file.

# Database Credentials Secret
apiVersion: v1
kind: Secret
metadata:
  name: txture-db-credentials
  namespace: txture
stringData:
  POSTGRES_USER: "${POSTGRES_USER}"
  POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}"
  POSTGRES_DB: "${POSTGRES_DB}"
---
# PersistentVolumeClaims
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: txture-home
  namespace: txture
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: ${TXTURE_STORAGE_SIZE}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: txture-postgres
  namespace: txture
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: ${POSTGRES_STORAGE_SIZE}
---
# PostgreSQL Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: txture
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: postgres
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 999
        fsGroup: 999
      containers:
        - name: postgres
          image: postgres:15-bullseye
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: txture-db-credentials
                  key: POSTGRES_USER
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: txture-db-credentials
                  key: POSTGRES_PASSWORD
            - name: POSTGRES_DB
              valueFrom:
                secretKeyRef:
                  name: txture-db-credentials
                  key: POSTGRES_DB
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          ports:
            - containerPort: 5432
              name: postgres
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
          resources:
            requests:
              memory: ${POSTGRES_MEMORY_REQUEST}
              cpu: ${POSTGRES_CPU_REQUEST}
            limits:
              memory: ${POSTGRES_MEMORY_LIMIT}
              cpu: ${POSTGRES_CPU_LIMIT}
          readinessProbe:
            exec:
              command: ["/bin/sh", "-c", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
            initialDelaySeconds: 5
            periodSeconds: 5
          livenessProbe:
            exec:
              command: ["/bin/sh", "-c", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
            initialDelaySeconds: 15
            periodSeconds: 20
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: txture-postgres
---
# PostgreSQL Service
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: txture
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
      name: postgres
---
# Txture Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: txture
  namespace: txture
spec:
  replicas: 1
  selector:
    matchLabels:
      app: txture
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: txture
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
        runAsNonRoot: true
      initContainers:
        - name: setup
          image: postgres:15-bullseye
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
          env:
            - name: PGHOST
              value: "${POSTGRES_HOST}"
            - name: PGUSER
              valueFrom:
                secretKeyRef:
                  name: txture-db-credentials
                  key: POSTGRES_USER
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: txture-db-credentials
                  key: POSTGRES_PASSWORD
            - name: PGDATABASE
              valueFrom:
                secretKeyRef:
                  name: txture-db-credentials
                  key: POSTGRES_DB
          command: ['/bin/bash', '-c']
          args:
            - |
              echo "=== Txture Init: waiting for data ==="

              echo "Waiting for upload signal (/opt/txture_home/.ready) ..."
              while [ ! -f /opt/txture_home/.ready ]; do
                sleep 10
              done
              echo "Upload signal received."

              echo "Waiting for PostgreSQL at ${PGHOST}:5432 ..."
              until pg_isready -q; do
                echo "  not ready — retrying in 5s ..."
                sleep 5
              done
              echo "PostgreSQL is ready."

              DUMP=$(find /opt/txture_home -maxdepth 1 -name '*_dump.gz' -print -quit)
              if [ -n "$DUMP" ]; then
                echo "Found database dump: $DUMP"
                echo "Restoring database ..."
                gunzip -c "$DUMP" | psql
                echo "Restore complete. Removing dump file from PVC."
                rm "$DUMP"
              else
                echo "No dump file found on PVC, skipping restore."
              fi

              echo "=== Init complete ==="
          volumeMounts:
            - name: txture-home
              mountPath: /opt/txture_home
      containers:
        - name: txture
          image: europe-docker.pkg.dev/txture/production/txture:latest
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
          env:
            - name: CATALINA_OPTS
              value: >-
                -DtxtureHome=/opt/txture_home -XX:MaxRAMPercentage=75.0 -XX:InitialRAMPercentage=75.0 -server
                -Dio.grpc.netty.shaded.io.netty.transport.noNative=true
            - name: TXTURE_DB_JDBC_URL
              value: "jdbc:postgresql://${POSTGRES_HOST}:5432/${POSTGRES_DB}?user=${POSTGRES_USER}&password=${POSTGRES_PASSWORD}"
          ports:
            - containerPort: 8080
              name: http
          volumeMounts:
            - name: txture-home
              mountPath: /opt/txture_home
          resources:
            requests:
              memory: ${TXTURE_MEMORY_REQUEST}
              cpu: ${TXTURE_CPU_REQUEST}
            limits:
              memory: ${TXTURE_MEMORY_LIMIT}
              cpu: ${TXTURE_CPU_LIMIT}
          startupProbe:
            httpGet:
              path: /
              port: 8080
            failureThreshold: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /
              port: 8080
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /
              port: 8080
            periodSeconds: 30
      volumes:
        - name: txture-home
          persistentVolumeClaim:
            claimName: txture-home
---
# Txture Service
apiVersion: v1
kind: Service
metadata:
  name: txture
  namespace: txture
spec:
  selector:
    app: txture
  ports:
    - port: 8080
      targetPort: 8080
      name: http
note

The manifests use :latest image tags for convenience. For production deployments, it is recommended to pin images to a specific version tag or digest to ensure reproducible deployments. Your Txture contact can provide the appropriate image tag for your release.

note

The manifests use ${VARIABLE} placeholders that are filled in by envsubst from the values in your txture.env file. All parameters must be defined — envsubst does not support default values.

4. Deploy resources

Source the parameter file and apply the manifests using envsubst:

set -a && source txture.env && set +a
envsubst '$POSTGRES_USER $POSTGRES_PASSWORD $POSTGRES_DB $POSTGRES_HOST
$TXTURE_MEMORY_LIMIT $TXTURE_MEMORY_REQUEST $TXTURE_CPU_REQUEST $TXTURE_CPU_LIMIT
$TXTURE_STORAGE_SIZE $POSTGRES_STORAGE_SIZE
$POSTGRES_MEMORY_REQUEST $POSTGRES_MEMORY_LIMIT $POSTGRES_CPU_REQUEST $POSTGRES_CPU_LIMIT' \
< txture-manifests.yaml | kubectl apply -f -
caution

The explicit variable list passed to envsubst is required. Without it, envsubst also replaces shell variables used inside the init container script (e.g. $DUMP), which breaks the deployment.

This creates all resources and starts both PostgreSQL and Txture. The Txture pod's init container will wait for data before the application starts — no manual scaling is needed.

Wait until PostgreSQL is running before continuing:

kubectl get pods -n txture -l app=postgres -w

Ensure the pod shows 1/1 Running before proceeding. The Txture pod will show Init:0/1 — this means the init container is waiting for data, which is the expected state.

5. Prepare data

The Txture pod's init container is waiting for data. First, get the pod name:

kubectl get pod -n txture -l app=txture -o name

Replace <pod-name> in the commands below with the output (e.g. txture-7b9c8d4e5-xxxxx; you can omit the pod/ prefix).

Copy the contents of your local txture_home directory onto the PVC. Note the trailing /. — this copies the directory contents, not the directory itself:

kubectl cp /path/to/local/txture_home/. <pod-name>:/opt/txture_home/ -c setup -n txture
caution

Make sure to copy the contents of your txture_home directory, not the directory itself. Files like txture.properties, modelDefinition.json, and the database dump must be directly under /opt/txture_home/.

Signal readiness

Once the copy is complete, signal the init container that the data is ready:

kubectl exec <pod-name> -c setup -n txture -- touch /opt/txture_home/.ready

The init container will then automatically:

  1. Wait for PostgreSQL to be ready.
  2. Restore the database dump if a *_dump.gz file is found on the PVC (and remove it after a successful restore).
  3. Exit, allowing the main Txture container to start.
tip

You can follow the init container's progress with:

kubectl logs -n txture -l app=txture -c setup -f

6. Wait for Txture

Watch the pod until Txture is running and ready:

kubectl get pods -n txture -l app=txture -w

First startup can take a few minutes after the init container exits. Wait until the pod shows 1/1 Running.


III. Verify the Installation

Once Txture has started, verify that the workloads are running correctly.

  1. Check pod status
    Run kubectl get pods -n txture to see the list of pods. The output should be similar to this:

    NAME                       READY   STATUS    RESTARTS   AGE
    postgres-6d8f9b5c4-xxxxx   1/1     Running   0          10m
    txture-7b9c8d4e5-xxxxx     1/1     Running   0          5m

    This confirms that both PostgreSQL and Txture are up and running.

  2. Expose Txture via NodePort
    Change the Txture Service to NodePort so it is reachable without keeping a terminal open:

    kubectl patch svc txture -n txture -p '{"spec": {"type": "NodePort", "ports": [{"port": 8080, "targetPort": 8080, "nodePort": 30080}]}}'

    Then open http://<node-ip>:30080 in your browser.

    tip

    For a quick one-off check, you can also use port forwarding instead:

    kubectl port-forward -n txture svc/txture 8080:8080

    This requires an active terminal session and is mainly useful for debugging.


IV. Managing Txture

Here are some useful commands for managing your Txture instance on Kubernetes.

  • View logs: To view the logs from the Txture container in real-time, use:

    kubectl logs -n txture -l app=txture -f

    Press Ctrl + C to stop viewing the logs.

  • Restore database (subsequent restores): For restores after the initial setup, pipe the dump directly from your local machine into the PostgreSQL pod without copying files:

    gzip -cd /path/to/local/dump.gz | kubectl exec -n txture deployment/postgres -i -- psql -U txture -d txture

    This avoids placing files inside the container. Scale down Txture before restoring and scale it back up afterwards.

  • Update Txture image: See the Upgrading Txture guide for instructions on how to upgrade to a newer version.

  • List resources: To see all resources in the namespace:
    kubectl get all -n txture

  • Access Txture: If you exposed Txture via NodePort (see Verify the Installation), open http://<node-ip>:30080 in your browser. For quick debugging, you can also use port forwarding:
    kubectl port-forward -n txture svc/txture 8080:8080


V. Configuration

Database connection

Txture gets the database connection from the TXTURE_DB_JDBC_URL environment variable, which is constructed automatically from the POSTGRES_* parameters in your txture.env file. You typically do not need to configure this manually.
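As an illustration of how the URL is assembled from those parameters (example values only; your real values come from txture.env via envsubst):

```shell
# Example values only — replace with your actual txture.env parameters.
POSTGRES_HOST=postgres
POSTGRES_DB=txture
POSTGRES_USER=txture
POSTGRES_PASSWORD=YOUR_SECURE_PASSWORD_HERE

# This mirrors the value the manifests set on the Txture container:
TXTURE_DB_JDBC_URL="jdbc:postgresql://${POSTGRES_HOST}:5432/${POSTGRES_DB}?user=${POSTGRES_USER}&password=${POSTGRES_PASSWORD}"
echo "$TXTURE_DB_JDBC_URL"
# jdbc:postgresql://postgres:5432/txture?user=txture&password=YOUR_SECURE_PASSWORD_HERE
```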

note

After envsubst processing, the database password is embedded in the TXTURE_DB_JDBC_URL environment variable value and stored in the Deployment object in etcd. This is an acceptable trade-off for most environments since access to Deployment specs requires cluster-level privileges. If your security policy requires stricter secret handling, consider constructing the JDBC URL at runtime using environment variables sourced from the Secret.

Using an external PostgreSQL

This guide deploys PostgreSQL alongside Txture for convenience, but you can use an external PostgreSQL instance instead. To do so:

  1. Remove the PostgreSQL Deployment, Service, and PVC from the manifests (or do not apply them).
  2. Set POSTGRES_HOST in your txture.env to the external hostname (e.g. POSTGRES_HOST=db.example.com). This configures both the JDBC connection and the init container's readiness check automatically.
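For example, the database section of a txture.env pointing at an external instance might look like this (hostname and credentials are placeholders):

```shell
# External database connection (placeholders — use your real values)
POSTGRES_HOST=db.example.com
POSTGRES_USER=txture
POSTGRES_PASSWORD=YOUR_SECURE_PASSWORD_HERE
POSTGRES_DB=txture
```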

Memory allocation

Memory is configured via two parameters in your txture.env file:

  • TXTURE_MEMORY_LIMIT: Sets the container memory limit (e.g. 32Gi).
  • TXTURE_MEMORY_REQUEST: Sets the memory request, i.e. how much memory the scheduler reserves on the node (e.g. 16Gi).

The JVM heap size is set automatically to 75% of the container memory limit via -XX:MaxRAMPercentage=75.0. You do not need to configure this manually. By default, the memory request is half of the limit, which gives the pod Burstable QoS and avoids over-reserving resources on shared clusters. Set TXTURE_MEMORY_REQUEST equal to TXTURE_MEMORY_LIMIT for Guaranteed QoS on dedicated nodes.
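As a quick sanity check of the arithmetic for the 32Gi example above:

```shell
# With a 32Gi container limit, -XX:MaxRAMPercentage=75.0 caps the JVM heap
# at 75% of the limit; the remaining 25% is headroom for metaspace, thread
# stacks, and off-heap buffers.
limit_gib=32
heap_gib=$((limit_gib * 75 / 100))
echo "container limit: ${limit_gib}Gi -> max heap: ${heap_gib}Gi"
# container limit: 32Gi -> max heap: 24Gi
```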

Outbound proxy

To route Txture's outgoing traffic through a proxy, add the following to your txture.properties file (in the Txture home directory):

txture.http.outbound.proxy.server.host=http://your.proxy.server:80

This can also be configured in the Txture UI if it is not set in the properties file.


VI. Production Setup Recommendations

Exposing Txture

The NodePort service from the installation steps provides basic access but does not offer TLS or hostname-based routing. For production, consider one of the following:

  • Ingress with TLS (recommended): Use your cluster's Ingress controller (e.g. NGINX Ingress, Traefik) with a TLS certificate to expose Txture on a proper hostname. This is the standard Kubernetes approach for HTTP services.
  • LoadBalancer Service: Change the Service type to LoadBalancer to get an external IP directly. On cloud providers this provisions a cloud load balancer; on bare-metal or edge clusters, tools like MetalLB or k3s ServiceLB provide the same functionality.
  • External reverse proxy: Place an nginx or Caddy instance in front of the NodePort or use port forwarding to a local reverse proxy that handles TLS termination. See our example reverse proxy configuration for more details.
important

Regardless of approach, we strongly recommend TLS encryption for all traffic to your Txture instance in production.
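As a minimal sketch, an Ingress for the txture Service could look like the following. The hostname, ingressClassName, and TLS secret name are placeholders you must adapt to your cluster; the Service name and port match the manifests from the installation steps:

```yaml
# Example only — adjust host, ingressClassName, and the TLS secret.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: txture
  namespace: txture
spec:
  ingressClassName: nginx
  tls:
    - hosts: [txture.example.com]
      secretName: txture-tls
  rules:
    - host: txture.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: txture
                port:
                  number: 8080
```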

Additionally, it is recommended to set up SMTP credentials to enable email functionalities like notifications, user or survey invites.

Further production considerations:

Storage

  • Use a cluster StorageClass and avoid hostPath.
  • Enable volume snapshots for backups where supported.
  • Consider separate storage tiers or performance classes for database vs. application data.

Security

  • Enable Pod Security Standards (at least baseline, ideally restricted) on the txture namespace.
  • The manifests already set allowPrivilegeEscalation: false, capabilities.drop: ["ALL"], and runAsNonRoot: true.
  • Consider NetworkPolicies to restrict traffic between namespaces.
  • Store database passwords in a secret manager or sealed secrets rather than on the command line.

High availability

  • Use a multi-node cluster for resilience.
  • Use pod anti-affinity so Txture and PostgreSQL pods are spread across nodes.
  • For database HA, consider a managed PostgreSQL operator (e.g. CloudNativePG, Crunchy Data) or a cloud-managed database service.

Monitoring and backups

  • Back up the PostgreSQL database (e.g. pg_dump or volume snapshots) and the txture_home PVC regularly.
  • Test restore procedures in a non-production environment.

VII. Tested System Configurations

The instructions work on any Kubernetes 1.27+ cluster with a suitable StorageClass. The manifests have been tested on:

  • Google Kubernetes Engine (GKE) 1.29
  • Amazon Elastic Kubernetes Service (EKS) 1.29
  • Azure Kubernetes Service (AKS) 1.29
  • k3s on Debian 13 (Trixie), Kubernetes 1.31
  • kubeadm on Ubuntu 22.04, Kubernetes 1.28

VIII. Troubleshooting

PVCs stay in Pending

Run kubectl get storageclass and ensure a default StorageClass exists and that the provisioner is healthy.

Txture pod stays in Init:0/1

The init container is waiting for data. Check its logs with kubectl logs -n txture -l app=txture -c setup to see what it is waiting for:

  • "Waiting for upload signal": The kubectl cp in step 5 has not completed yet, or the signal command (touch .ready) has not been run.
  • "Waiting for PostgreSQL": The PostgreSQL pod is not running. Check with kubectl get pods -n txture -l app=postgres.

Txture reports missing txture.properties

Ensure txture_home contents (including txture.properties) are at the root of the volume, not inside an extra subdirectory, and that the Txture deployment mounts the same PVC at /opt/txture_home.

Txture pod fails to start (WAR / permission errors)

If the Txture pod crashes with The main resource set specified [/usr/local/tomcat/webapps/txture] is not a directory or war file, the container is running with the wrong UID. Ensure the pod security context sets runAsUser: 1000 and fsGroup: 1000.

Database connection failures

Confirm PostgreSQL is running: kubectl get pods -n txture -l app=postgres. Check that the TXTURE_DB_JDBC_URL environment variable is correct:

kubectl exec -n txture deployment/txture -- env | grep JDBC

It should match the PostgreSQL service (postgres:5432) and the credentials in your txture.env.

envsubst not found

Install gettext on your system:

  • macOS: brew install gettext
  • Ubuntu/Debian: sudo apt install gettext-base
  • Fedora/RHEL: sudo dnf install gettext

IX. FAQ

1. Can I use hostPath or manual PVs for storage?
This guide targets production and therefore uses a StorageClass and PVCs only. In lab or edge environments without a CSI driver, you may need to create PersistentVolumes manually; avoid hostPath and broad permissions (e.g. chmod 777) in production.

2. Why use envsubst instead of Helm or Kustomize?
envsubst is available on any system and requires no additional tooling. For parameterized and repeatable deployments, we recommend the Helm chart instead.

3. What does the init container do?
The setup init container on the Txture Deployment serves three purposes: it acts as a target for kubectl cp (to upload txture_home onto the PVC), it waits for PostgreSQL to be ready, and it automatically restores the database dump if one is found on the PVC. On subsequent pod restarts (e.g. after an upgrade or node reboot), the init container finds the .ready signal file immediately, confirms PostgreSQL is ready, finds no dump file, and exits within seconds.

4. How do I re-upload data after a failed upload or crash?
Delete the Txture pod so the deployment creates a fresh one with a new init container:

kubectl delete pod -n txture -l app=txture

If you already removed .ready from the PVC, the new init container will wait for data again (Init:0/1). If .ready still exists, remove it first so the init container waits:

POD=$(kubectl get pod -n txture -l app=txture -o jsonpath='{.items[0].metadata.name}')
kubectl exec $POD -c setup -n txture -- rm -f /opt/txture_home/.ready

Then delete the pod, re-upload your data, and signal .ready as described in step 5.

5. How do I access Txture?
The installation steps expose Txture via a NodePort service on port 30080. Open http://<node-ip>:30080 in your browser. For production, set up an Ingress with TLS or a reverse proxy as described in the Production Setup Recommendations.