Installation with Helm

Helm is the recommended way to deploy Txture on Kubernetes when you want parameterized, repeatable, and upgradable deployments. The Txture Helm chart packages all required Kubernetes resources and lets you configure them through a single values.yaml file.

Sizing and hardware requirements are detailed in the System Requirements.

OpenShift

This chart works on OpenShift with adjustments (PostgreSQL image, security context constraints, Routes instead of Ingress). For OpenShift production deployments, use the dedicated OpenShift deployment guide instead.

I. Prerequisites

Before you begin, ensure you have the following:

  • Kubernetes cluster: Kubernetes 1.27+ with at least one available StorageClass.
  • kubectl: Install and configure kubectl against your cluster.
  • Helm: Install Helm version 3.8 or later (Helm 4 is also supported).
  • Txture Helm Chart: Provided by your Txture contact as a ZIP archive.
  • Txture Home Directory Contents: You need the initial contents for the txture_home directory. This is provided by your Txture contact.
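The version requirements above can be checked from the command line. A small sketch (the version_ge helper is our own, not part of Helm or kubectl; in practice you would feed it the output of helm version --template '{{.Version}}'):

```shell
# version_ge A B -- succeeds when version string A >= B.
# Relies on sort's version comparison (-V) plus order check (-C);
# both are available in GNU coreutils and modern BSD sort.
version_ge() {
  printf '%s\n%s\n' "$2" "$1" | sort -C -V
}

# Illustrative check against the Helm 3.8 minimum. On a real machine,
# replace the literal with: $(helm version --template '{{.Version}}')
if version_ge "v3.14.0" "v3.8.0"; then
  echo "Helm version OK"
fi
```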

II. Installation Steps

1. Obtain the chart

The Txture Helm chart is provided by your Txture contact as a ZIP archive. Extract it to your working directory:

unzip txture-helm.zip

This creates a txture/ directory containing Chart.yaml, values.yaml, and the templates/ folder. The commands below assume you run them from the directory containing the extracted txture/ folder.

2. Create your values file

Create a file named custom-values.yaml with your configuration overrides. Only values you want to change from the defaults need to be listed.

Minimal example (only the required password):

postgres:
  password: YOUR_SECURE_PASSWORD_HERE

Full example (with sizing):

postgres:
  user: txture
  password: YOUR_SECURE_PASSWORD_HERE
  database: txture
  resources:
    requests:
      memory: 256Mi
      cpu: 100m
    limits:
      memory: 1Gi
      cpu: 1000m
  storage:
    size: 20Gi

txture:
  resources:
    requests:
      memory: 16Gi
      cpu: 2000m
    limits:
      memory: 32Gi
      cpu: 10000m
  storage:
    size: 130Gi
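Since postgres.password is the only required value, it is also the easiest one to forget. A rough grep-based check for it (our own helper, not part of the chart; it assumes the simple layout shown above rather than parsing full YAML):

```shell
# has_postgres_password FILE
# Succeeds if FILE contains an indented, non-empty "password:" entry.
# A rough grep-based check, not a YAML parser -- it assumes the simple
# two-level layout used in the examples above.
has_postgres_password() {
  grep -Eq '^[[:space:]]+password:[[:space:]]*[^[:space:]]+' "$1"
}

# Usage:
#   has_postgres_password custom-values.yaml || echo "set postgres.password!"
```

For a broader structural check, helm lint ./txture -f custom-values.yaml validates the chart together with your overrides.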

3. Install the chart

helm install txture ./txture -f custom-values.yaml -n txture --create-namespace

This creates the namespace, Secret, PVCs, Deployments, and Services. The Txture pod's init container will wait for data before the application starts.

4. Prepare data

The data upload flow is the same as for the Kubernetes deployment.

Get the pod name:

kubectl get pod -n txture -l app=txture -o name

Copy the contents of your local txture_home directory onto the PVC. Note the trailing /. — this copies the directory contents, not the directory itself:

kubectl cp /path/to/local/txture_home/. <pod-name>:/opt/txture_home/ -c setup -n txture

Signal readiness once the upload is complete:

kubectl exec <pod-name> -c setup -n txture -- touch /opt/txture_home/.ready

The init container will then wait for PostgreSQL, restore the database dump (if present), and exit.
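Conceptually, the setup container's remaining work looks like this (pseudocode only — the real script ships with the chart, and the dump detection and restore details here are assumptions):

```
wait until /opt/txture_home/.ready exists      # created by you in the step above
wait until PostgreSQL answers pg_isready
if a database dump is present in txture_home:
    restore it into the configured database
exit                                           # the main Txture container now starts
```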

tip

Follow the init container's progress:

kubectl logs -n txture -l app=txture -c setup -f

5. Wait for Txture

kubectl get pods -n txture -l app=txture -w

Wait until the pod shows 1/1 Running. First startup can take a few minutes.

6. Access Txture

kubectl port-forward -n txture svc/txture 8080:8080

Open http://localhost:8080 in your browser.


III. Values Reference

| Parameter | Default | Description |
| --- | --- | --- |
| postgres.user | txture | Database user |
| postgres.password | "" (required) | Database password |
| postgres.database | txture | Database name |
| postgres.host | postgres | PostgreSQL hostname (change for external DB) |
| postgres.image.repository | postgres | PostgreSQL image |
| postgres.image.tag | 15-bullseye | PostgreSQL image tag |
| postgres.resources.requests.memory | 256Mi | PostgreSQL memory request |
| postgres.resources.requests.cpu | 100m | PostgreSQL CPU request |
| postgres.resources.limits.memory | 1Gi | PostgreSQL memory limit |
| postgres.resources.limits.cpu | 1000m | PostgreSQL CPU limit |
| postgres.storage.size | 20Gi | PostgreSQL PVC size |
| postgres.storage.storageClass | "" (cluster default) | StorageClass for PostgreSQL PVC |
| txture.image.repository | europe-docker.pkg.dev/txture/production/txture | Txture image |
| txture.image.tag | latest | Txture image tag |
| txture.resources.requests.memory | 16Gi | Txture memory request |
| txture.resources.requests.cpu | 2000m | Txture CPU request |
| txture.resources.limits.memory | 32Gi | Txture memory limit |
| txture.resources.limits.cpu | 10000m | Txture CPU limit |
| txture.storage.size | 130Gi | Txture Home PVC size |
| txture.storage.storageClass | "" (cluster default) | StorageClass for Txture PVC |
| ingress.enabled | false | Enable Ingress resource |
| ingress.className | "" | Ingress class name |
| ingress.annotations | {} | Ingress annotations |
| ingress.hostname | txture.example.com | Ingress hostname |
| ingress.tls | [] | Ingress TLS configuration |

The JVM heap size is set automatically to 75% of the container memory limit. No manual heap configuration is needed.
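As a quick sanity check of that rule: 75% of the default 32Gi limit is 24Gi. The arithmetic spelled out in shell (our own illustration, not something the chart runs):

```shell
# 75% of the container memory limit, in whole Gi. The chart applies
# this rule automatically; this only spells out the arithmetic.
limit_gi=32
heap_gi=$(( limit_gi * 75 / 100 ))
echo "${heap_gi}Gi"
```

With the default 32Gi limit this prints 24Gi; with a 16Gi limit it would print 12Gi.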


IV. Upgrading

To upgrade Txture to a new version, update the image tag in your values file and run:

helm upgrade txture ./txture -f custom-values.yaml -n txture

Or set the tag directly:

helm upgrade txture ./txture -n txture --set txture.image.tag=<new_version>

Data is stored on PersistentVolumeClaims, so pods can be replaced without data loss. See the Upgrading Txture guide for more details.


V. Using an External PostgreSQL

To use an external PostgreSQL instance instead of the bundled one:

  1. Set postgres.host to your external database hostname in custom-values.yaml.
  2. Scale the bundled PostgreSQL deployment to zero after install:
    kubectl scale deployment postgres -n txture --replicas=0

The init container's pg_isready check and the JDBC connection string will automatically point to the external host.
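A values fragment for this setup might look as follows (the hostname is a placeholder for your own instance; the remaining keys mirror the defaults from the values reference above):

```yaml
postgres:
  host: db.internal.example.com   # external PostgreSQL hostname (placeholder)
  user: txture
  password: YOUR_SECURE_PASSWORD_HERE
  database: txture
```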


VI. Enabling Ingress

By default, Ingress is disabled and you access Txture via port forwarding. For production, we recommend using a reverse proxy with TLS (see reverse proxy configuration).

If you prefer using a Kubernetes Ingress controller, enable it in your values:

ingress:
  enabled: true
  className: nginx
  hostname: txture.example.com
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "2g"
  tls:
    - secretName: txture-tls
      hosts:
        - txture.example.com

VII. Uninstalling

To remove all Txture resources:

helm uninstall txture -n txture

caution

This does not delete PersistentVolumeClaims. To also delete the data, run:

kubectl delete pvc txture-home txture-postgres -n txture