Installation with Helm
Helm is the recommended way to deploy Txture on Kubernetes when you want parameterized, repeatable, and upgradable deployments.
The Txture Helm chart packages all required Kubernetes resources and lets you configure them through a single values.yaml file.
Sizing and hardware requirements are detailed in the System Requirements.
This chart works on OpenShift with adjustments (PostgreSQL image, security context constraints, Routes instead of Ingress). For OpenShift production deployments, use the dedicated OpenShift deployment guide instead.
I. Prerequisites
Before you begin, ensure you have the following:
- Kubernetes cluster: Kubernetes 1.27+ with at least one available StorageClass.
- kubectl: Install and configure kubectl against your cluster.
- Helm 3.8+: Install Helm version 3.8 or later (Helm 4 is also supported).
- Txture Helm Chart: Provided by your Txture contact as a ZIP archive.
- Txture Home Directory Contents: You need the initial contents for the txture_home directory. This is provided by your Txture contact.
II. Installation Steps
1. Obtain the chart
The Txture Helm chart is provided by your Txture contact as a ZIP archive. Extract it to your working directory:
unzip txture-helm.zip
This creates a txture/ directory containing Chart.yaml, values.yaml, and the templates/ folder.
The commands below assume you run them from the directory containing the extracted txture/ folder.
2. Create your values file
Create a file named custom-values.yaml with your configuration overrides.
Only values you want to change from the defaults need to be listed.
Minimal example (only the required password):
```yaml
postgres:
  password: YOUR_SECURE_PASSWORD_HERE
```
Full example (with sizing):
```yaml
postgres:
  user: txture
  password: YOUR_SECURE_PASSWORD_HERE
  database: txture
  resources:
    requests:
      memory: 256Mi
      cpu: 100m
    limits:
      memory: 1Gi
      cpu: 1000m
  storage:
    size: 20Gi

txture:
  resources:
    requests:
      memory: 16Gi
      cpu: 2000m
    limits:
      memory: 32Gi
      cpu: 10000m
  storage:
    size: 130Gi
```
3. Install the chart
helm install txture ./txture -f custom-values.yaml -n txture --create-namespace
This creates the namespace, Secret, PVCs, Deployments, and Services. The Txture pod's init container will wait for data before the application starts.
4. Prepare data
The data upload flow is the same as for the Kubernetes deployment.
Get the pod name:
kubectl get pod -n txture -l app=txture -o name
Copy the contents of your local txture_home directory onto the PVC.
Note the trailing /. — this copies the directory contents, not the directory itself:
kubectl cp /path/to/local/txture_home/. <pod-name>:/opt/txture_home/ -c setup -n txture
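The trailing /. convention is the same one plain cp -r uses: it copies the directory's contents into the target rather than nesting the directory itself. A quick local demonstration (with cp; kubectl cp follows the same rule for the source path, and the file name below is purely illustrative):

```shell
# Demonstrates the trailing /. convention with plain cp -r.
SRC=$(mktemp -d); WITH_DOT=$(mktemp -d); WITHOUT_DOT=$(mktemp -d)
mkdir -p "$SRC/txture_home"
touch "$SRC/txture_home/txture.properties"   # hypothetical file, for illustration only

cp -r "$SRC/txture_home/." "$WITH_DOT/"      # contents land directly in the target
cp -r "$SRC/txture_home"   "$WITHOUT_DOT/"   # the directory itself is nested in the target

ls "$WITH_DOT"                 # txture.properties
ls "$WITHOUT_DOT/txture_home"  # txture.properties
```

Forgetting the trailing /. would leave the files under /opt/txture_home/txture_home/ on the PVC, where the application will not find them.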
Signal readiness once the upload is complete:
kubectl exec <pod-name> -c setup -n txture -- touch /opt/txture_home/.ready
The init container will then wait for PostgreSQL, restore the database dump (if present), and exit.
Follow the init container's progress:
kubectl logs -n txture -l app=txture -c setup -f
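Conceptually, the init container's gating step is a simple wait loop on the .ready marker. The following is a local sketch of that assumed logic (the chart's actual template script may differ; mktemp stands in for /opt/txture_home):

```shell
# Sketch of the init container's readiness gate (assumed logic, not the chart's exact script):
# block until the .ready marker appears, then continue with pg_isready and the restore.
HOME_DIR=$(mktemp -d)                       # stands in for /opt/txture_home
( sleep 1; touch "$HOME_DIR/.ready" ) &     # stands in for the kubectl exec ... touch above

until [ -f "$HOME_DIR/.ready" ]; do
  echo "Waiting for $HOME_DIR/.ready ..."
  sleep 1
done
echo "Ready marker found, proceeding."
wait
```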
5. Wait for Txture
kubectl get pods -n txture -l app=txture -w
Wait until the pod shows 1/1 Running.
First startup can take a few minutes.
6. Access Txture
kubectl port-forward -n txture svc/txture 8080:8080
Open http://localhost:8080 in your browser.
III. Values Reference
| Parameter | Default | Description |
|---|---|---|
| postgres.user | txture | Database user |
| postgres.password | "" (required) | Database password |
| postgres.database | txture | Database name |
| postgres.host | postgres | PostgreSQL hostname (change for external DB) |
| postgres.image.repository | postgres | PostgreSQL image |
| postgres.image.tag | 15-bullseye | PostgreSQL image tag |
| postgres.resources.requests.memory | 256Mi | PostgreSQL memory request |
| postgres.resources.requests.cpu | 100m | PostgreSQL CPU request |
| postgres.resources.limits.memory | 1Gi | PostgreSQL memory limit |
| postgres.resources.limits.cpu | 1000m | PostgreSQL CPU limit |
| postgres.storage.size | 20Gi | PostgreSQL PVC size |
| postgres.storage.storageClass | "" (cluster default) | StorageClass for PostgreSQL PVC |
| txture.image.repository | europe-docker.pkg.dev/txture/production/txture | Txture image |
| txture.image.tag | latest | Txture image tag |
| txture.resources.requests.memory | 16Gi | Txture memory request |
| txture.resources.requests.cpu | 2000m | Txture CPU request |
| txture.resources.limits.memory | 32Gi | Txture memory limit |
| txture.resources.limits.cpu | 10000m | Txture CPU limit |
| txture.storage.size | 130Gi | Txture Home PVC size |
| txture.storage.storageClass | "" (cluster default) | StorageClass for Txture PVC |
| ingress.enabled | false | Enable Ingress resource |
| ingress.className | "" | Ingress class name |
| ingress.annotations | {} | Ingress annotations |
| ingress.hostname | txture.example.com | Ingress hostname |
| ingress.tls | [] | Ingress TLS configuration |
The JVM heap size is set automatically to 75% of the container memory limit. No manual heap configuration is needed.
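For example, with the default 32Gi memory limit, the resulting heap setting works out as follows (a back-of-the-envelope sketch; the chart derives this value automatically):

```shell
# 75% of a 32Gi container memory limit, expressed as a JVM -Xmx flag.
LIMIT_GI=32
HEAP_MB=$(( LIMIT_GI * 1024 * 75 / 100 ))
echo "-Xmx${HEAP_MB}m"   # 24576 MB = 24 Gi heap
```

If you raise txture.resources.limits.memory, the heap grows proportionally with no further configuration.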
IV. Upgrading
To upgrade Txture to a new version, update the image tag in your values file and run:
helm upgrade txture ./txture -f custom-values.yaml -n txture
Or set the tag directly:
helm upgrade txture ./txture -n txture --set txture.image.tag=<new_version>
Data is stored on PersistentVolumeClaims, so pods can be replaced without data loss. See the Upgrading Txture guide for more details.
V. Using an External PostgreSQL
To use an external PostgreSQL instance instead of the bundled one:
- Set postgres.host to your external database hostname in custom-values.yaml.
- Scale the bundled PostgreSQL deployment to zero after install:

kubectl scale deployment postgres -n txture --replicas=0
The init container's pg_isready check and the JDBC connection string will automatically point to the external host.
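Based on the values reference above, an override for an external database could look like this in custom-values.yaml (the hostname is a placeholder; use your own):

```yaml
postgres:
  host: db.example.internal   # placeholder: your external PostgreSQL hostname
  user: txture
  password: YOUR_SECURE_PASSWORD_HERE
  database: txture
```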
VI. Enabling Ingress
By default, Ingress is disabled and you access Txture via port forwarding. For production, we recommend using a reverse proxy with TLS (see reverse proxy configuration).
If you prefer using a Kubernetes Ingress controller, enable it in your values:
```yaml
ingress:
  enabled: true
  className: nginx
  hostname: txture.example.com
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "2g"
  tls:
    - secretName: txture-tls
      hosts:
        - txture.example.com
```
VII. Uninstalling
To remove all Txture resources:
helm uninstall txture -n txture
This does not delete PersistentVolumeClaims. To also delete the data, run:
kubectl delete pvc txture-home txture-postgres -n txture