Installation with OpenShift

OpenShift Container Platform is a supported deployment target for Txture. We provide container images that run on OpenShift just as they do on Kubernetes, complemented by OpenShift-specific resources such as Routes and Security Context Constraints (SCCs). This guide describes a production-oriented deployment: no hostPath volumes, no broad SCC grants, and no excessive permissions.

Sizing and hardware requirements are detailed in the System Requirements.

I. Prerequisites

Before you begin, ensure you have the following:

  • OpenShift cluster: OpenShift 4.14+ with at least one available StorageClass (e.g. OpenShift Data Foundation, LVM Storage, or NFS provisioner).
  • oc CLI: Install and configure the OpenShift CLI (oc) against your cluster.
  • Permissions: Cluster permissions to create projects, Secrets, PVCs, Deployments, Services, and Routes in a dedicated project.
  • Txture Home Directory Contents: You need the initial contents for the txture_home directory. This is provided by your Txture contact.
caution

Some commands in this step-by-step guide need to be adjusted: replace the placeholders with your actual values before running them.


II. Installation Steps

Having fulfilled all prerequisites, follow the steps below to set up and run Txture on OpenShift. We use an OpenShift Template that defines the entire deployment. You only need to provide a parameter file (txture.env) to customize it.

1. Create project

Create a dedicated project for Txture:

oc new-project txture

2. Create parameter file

Create a file named txture.env on your local machine. This file controls credentials and sizing, similar to the .env file in the Docker guide. Refer to the System Requirements to determine the correct sizing for your workload.

Template for txture.env
# Database Credentials
POSTGRES_USER=txture
POSTGRES_PASSWORD=YOUR_SECURE_PASSWORD_HERE
POSTGRES_DB=txture

# Txture Sizing (32GB RAM / 10 Cores example)
TXTURE_MEMORY_LIMIT=32Gi
TXTURE_CPU_REQUEST=2000m
TXTURE_CPU_LIMIT=10000m

# Txture Memory Request (default: half of limit; lower = easier scheduling on shared clusters)
# TXTURE_MEMORY_REQUEST=16Gi

# PostgreSQL Sizing (Defaults shown, adjust if needed)
# POSTGRES_MEMORY_REQUEST=256Mi
# POSTGRES_MEMORY_LIMIT=1Gi
# POSTGRES_CPU_REQUEST=100m
# POSTGRES_CPU_LIMIT=1000m

# Storage (Defaults: 130Gi / 20Gi)
# TXTURE_STORAGE_SIZE=130Gi
# POSTGRES_STORAGE_SIZE=20Gi

# StorageClass (optional; to use it, also uncomment the storageClassName lines in the PersistentVolumeClaims of the YAML template)
# STORAGE_CLASS=ocs-storagecluster-ceph-rbd

# Route hostname (leave empty for auto-generated)
# ROUTE_HOSTNAME=txture.apps.example.com
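
If you need a strong value for POSTGRES_PASSWORD, one option (assuming openssl is available on your machine) is:

openssl rand -base64 24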

3. Create the OpenShift Template

Create a file named txture-template.yaml with the following content. This template defines all resources (Secret, PVCs, ServiceAccount, Deployments, Services, Route) and uses the parameters from your txture.env file. You generally do not need to edit this file.

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: txture-deployment
objects:
  # Database Credentials Secret
  - apiVersion: v1
    kind: Secret
    metadata:
      name: txture-db-credentials
    stringData:
      POSTGRESQL_USER: "${POSTGRES_USER}"
      POSTGRESQL_PASSWORD: "${POSTGRES_PASSWORD}"
      POSTGRESQL_DATABASE: "${POSTGRES_DB}"

  # PersistentVolumeClaims
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: txture-home
    spec:
      # storageClassName: ${STORAGE_CLASS}
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: ${TXTURE_STORAGE_SIZE}
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: txture-postgres
    spec:
      # storageClassName: ${STORAGE_CLASS}
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: ${POSTGRES_STORAGE_SIZE}

  # Txture ServiceAccount
  - apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: txture-app

  # PostgreSQL ServiceAccount
  - apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: txture-postgres

  # PostgreSQL Deployment & Service (RHEL9 image, OpenShift-compatible)
  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: postgres
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: postgres
      template:
        metadata:
          labels:
            app: postgres
        spec:
          serviceAccountName: txture-postgres
          securityContext:
            runAsNonRoot: true
            seccompProfile:
              type: RuntimeDefault
          containers:
            - name: postgres
              image: registry.redhat.io/rhel9/postgresql-15:latest
              securityContext:
                allowPrivilegeEscalation: false
                capabilities:
                  drop: ["ALL"]
              env:
                - name: POSTGRESQL_USER
                  valueFrom:
                    secretKeyRef:
                      name: txture-db-credentials
                      key: POSTGRESQL_USER
                - name: POSTGRESQL_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: txture-db-credentials
                      key: POSTGRESQL_PASSWORD
                - name: POSTGRESQL_DATABASE
                  valueFrom:
                    secretKeyRef:
                      name: txture-db-credentials
                      key: POSTGRESQL_DATABASE
              ports:
                - containerPort: 5432
                  name: postgres
              volumeMounts:
                - name: postgres-storage
                  mountPath: /var/lib/pgsql/data
              resources:
                requests:
                  memory: ${POSTGRES_MEMORY_REQUEST}
                  cpu: ${POSTGRES_CPU_REQUEST}
                limits:
                  memory: ${POSTGRES_MEMORY_LIMIT}
                  cpu: ${POSTGRES_CPU_LIMIT}
              readinessProbe:
                exec:
                  # pg_isready is lightweight and does not require authentication.
                  # ${POSTGRES_USER} and ${POSTGRES_DB} are replaced by oc process at deploy time.
                  command: ["/bin/sh", "-c", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
                initialDelaySeconds: 5
                periodSeconds: 5
              livenessProbe:
                exec:
                  command: ["/bin/sh", "-c", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
                initialDelaySeconds: 15
                periodSeconds: 20
          volumes:
            - name: postgres-storage
              persistentVolumeClaim:
                claimName: txture-postgres
  - apiVersion: v1
    kind: Service
    metadata:
      name: postgres
    spec:
      selector:
        app: postgres
      ports:
        - port: 5432
          targetPort: 5432
          name: postgres

  # Txture Deployment
  # Note: memory request < limit gives Burstable QoS, which is suitable for shared clusters.
  # Set TXTURE_MEMORY_REQUEST = TXTURE_MEMORY_LIMIT for Guaranteed QoS on dedicated nodes.
  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: txture
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: txture
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: txture
        spec:
          serviceAccountName: txture-app
          securityContext:
            runAsUser: 1000
            runAsGroup: 1000
            fsGroup: 1000
            runAsNonRoot: true
            seccompProfile:
              type: RuntimeDefault
          initContainers:
            - name: setup
              image: registry.redhat.io/rhel9/postgresql-15:latest
              securityContext:
                allowPrivilegeEscalation: false
                capabilities:
                  drop: ["ALL"]
              env:
                - name: PGHOST
                  value: "${POSTGRES_HOST}"
                - name: PGUSER
                  valueFrom:
                    secretKeyRef:
                      name: txture-db-credentials
                      key: POSTGRESQL_USER
                - name: PGPASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: txture-db-credentials
                      key: POSTGRESQL_PASSWORD
                - name: PGDATABASE
                  valueFrom:
                    secretKeyRef:
                      name: txture-db-credentials
                      key: POSTGRESQL_DATABASE
              command: ['/bin/bash', '-c']
              args:
                - |
                  echo "=== Txture Init: waiting for data ==="

                  echo "Waiting for upload signal (/opt/txture_home/.ready) ..."
                  while [ ! -f /opt/txture_home/.ready ]; do
                    sleep 10
                  done
                  echo "Upload signal received."

                  echo "Checking for archive to extract ..."
                  ARCHIVE=$(find /opt/txture_home -maxdepth 1 \( -name '*.zip' -o -name '*.tar.gz' \) -print -quit)
                  if [ -n "$ARCHIVE" ]; then
                    echo "Found archive: $ARCHIVE"
                    case "$ARCHIVE" in
                      *.zip) unzip -o "$ARCHIVE" -d /opt/txture_home ;;
                      *.tar.gz) tar xzf "$ARCHIVE" -C /opt/txture_home ;;
                    esac
                    rm "$ARCHIVE"
                    # If the archive contained a single wrapper directory, flatten it
                    SUBDIRS=$(find /opt/txture_home -mindepth 1 -maxdepth 1 -type d ! -name lost+found)
                    if [ -n "$SUBDIRS" ] && [ "$(echo "$SUBDIRS" | wc -l)" -eq 1 ] && [ ! -f /opt/txture_home/txture.properties ]; then
                      echo "Flattening wrapper directory: $SUBDIRS"
                      mv "$SUBDIRS"/* "$SUBDIRS"/.* /opt/txture_home/ 2>/dev/null || true
                      rmdir "$SUBDIRS" 2>/dev/null || true
                    fi
                    echo "Extraction complete."
                  fi

                  echo "Waiting for PostgreSQL at ${PGHOST}:5432 ..."
                  until pg_isready -q; do
                    echo "  not ready, retrying in 5s ..."
                    sleep 5
                  done
                  echo "PostgreSQL is ready."

                  DUMP=$(find /opt/txture_home -maxdepth 1 -name '*_dump.gz' -print -quit)
                  if [ -n "$DUMP" ]; then
                    echo "Found database dump: $DUMP"
                    echo "Restoring database ..."
                    gunzip -c "$DUMP" | psql
                    echo "Restore complete. Removing dump file from PVC."
                    rm "$DUMP"
                  else
                    echo "No dump file found on PVC, skipping restore."
                  fi

                  echo "=== Init complete ==="
              volumeMounts:
                - name: txture-home
                  mountPath: /opt/txture_home
          containers:
            - name: txture
              image: europe-docker.pkg.dev/txture/production/txture:latest
              securityContext:
                allowPrivilegeEscalation: false
                capabilities:
                  drop: ["ALL"]
              env:
                - name: CATALINA_OPTS
                  value: >-
                    -DtxtureHome=/opt/txture_home -XX:MaxRAMPercentage=75.0 -XX:InitialRAMPercentage=75.0 -server
                    -Dio.grpc.netty.shaded.io.netty.transport.noNative=true
                - name: TXTURE_DB_JDBC_URL
                  value: "jdbc:postgresql://${POSTGRES_HOST}:5432/${POSTGRES_DB}?user=${POSTGRES_USER}&password=${POSTGRES_PASSWORD}"
              ports:
                - containerPort: 8080
                  name: http
              volumeMounts:
                - name: txture-home
                  mountPath: /opt/txture_home
              resources:
                requests:
                  memory: ${TXTURE_MEMORY_REQUEST}
                  cpu: ${TXTURE_CPU_REQUEST}
                limits:
                  memory: ${TXTURE_MEMORY_LIMIT}
                  cpu: ${TXTURE_CPU_LIMIT}
              startupProbe:
                httpGet:
                  path: /
                  port: 8080
                failureThreshold: 30
                periodSeconds: 10
              readinessProbe:
                httpGet:
                  path: /
                  port: 8080
                periodSeconds: 10
              livenessProbe:
                httpGet:
                  path: /
                  port: 8080
                periodSeconds: 30
          volumes:
            - name: txture-home
              persistentVolumeClaim:
                claimName: txture-home

  # Txture Service & Route
  - apiVersion: v1
    kind: Service
    metadata:
      name: txture
    spec:
      selector:
        app: txture
      ports:
        - port: 8080
          targetPort: 8080
          name: http
  - apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: txture
    spec:
      host: "${ROUTE_HOSTNAME}"
      to:
        kind: Service
        name: txture
      port:
        targetPort: http
      tls:
        termination: edge
        insecureEdgeTerminationPolicy: Redirect

parameters:
  - name: POSTGRES_USER
    description: "Database user."
    value: txture
  - name: POSTGRES_PASSWORD
    description: "Database password."
    required: true
  - name: POSTGRES_DB
    description: "Database name."
    value: txture
  - name: POSTGRES_HOST
    description: "PostgreSQL hostname. Set to your external database host if not using the bundled PostgreSQL."
    value: postgres
  - name: TXTURE_MEMORY_LIMIT
    description: "Memory limit for Txture pod (e.g. 32Gi). Should match your sizing."
    value: 32Gi
  - name: TXTURE_MEMORY_REQUEST
    description: "Memory request for Txture (default: half of limit). Lower values make scheduling easier on shared clusters but give Burstable QoS. Set equal to TXTURE_MEMORY_LIMIT for Guaranteed QoS on dedicated nodes."
    value: 16Gi
  - name: TXTURE_CPU_REQUEST
    description: "CPU request for Txture (e.g. 2000m = 2 cores)."
    value: 2000m
  - name: TXTURE_CPU_LIMIT
    description: "CPU limit for Txture (e.g. 10000m = 10 cores)."
    value: 10000m
  - name: TXTURE_STORAGE_SIZE
    description: "Persistent storage for Txture Home."
    value: 130Gi
  - name: POSTGRES_MEMORY_REQUEST
    description: "Memory request for PostgreSQL."
    value: 256Mi
  - name: POSTGRES_MEMORY_LIMIT
    description: "Memory limit for PostgreSQL."
    value: 1Gi
  - name: POSTGRES_CPU_REQUEST
    description: "CPU request for PostgreSQL."
    value: 100m
  - name: POSTGRES_CPU_LIMIT
    description: "CPU limit for PostgreSQL."
    value: 1000m
  - name: POSTGRES_STORAGE_SIZE
    description: "Persistent storage for PostgreSQL."
    value: 20Gi
  - name: STORAGE_CLASS
    description: "StorageClass for PersistentVolumeClaims (block storage, RWO). Leave empty to use the cluster default."
    value: ""
  - name: ROUTE_HOSTNAME
    description: "Custom hostname for the Txture Route (e.g. txture.apps.example.com). Leave empty for auto-generated hostname (<route>-<project>.apps.<domain>)."
    value: ""
note

The template uses :latest image tags for convenience. For production deployments, it is recommended to pin images to a specific version tag or digest to ensure reproducible deployments. Your Txture contact can provide the appropriate image tag for your release.

4. Deploy resources

Apply the template using your parameter file. This creates all resources and starts both PostgreSQL and Txture. The Txture pod's init container will wait for data before the application starts — no manual scaling is needed.

oc process -f txture-template.yaml --param-file=txture.env | oc apply -f -
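
If you want to check what the template renders before creating anything, you can run a server-side dry run first; it is the same pipeline with --dry-run=server added:

oc process -f txture-template.yaml --param-file=txture.env | oc apply --dry-run=server -f -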

5. Configure permissions

The Txture container image defines a non-root user (txture, UID 1000) that owns the application files. OpenShift's default restricted-v2 SCC assigns a UID from the namespace range instead, which prevents Tomcat from unpacking the application. Grant the nonroot-v2 SCC to the Txture ServiceAccount so the pod can run as UID 1000:

oc adm policy add-scc-to-user nonroot-v2 system:serviceaccount:txture:txture-app
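
To verify which SCC the pod was actually admitted under once it has been (re)created, you can inspect the openshift.io/scc annotation; after the grant it should report nonroot-v2:

oc get pod -n txture -l app=txture -o jsonpath='{.items[0].metadata.annotations.openshift\.io/scc}'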
note

The Txture pod may restart once before the SCC takes effect. This is expected — once the SCC is applied, the pod will start and the init container will begin waiting for data.

note

PostgreSQL uses the Red Hat rhel9/postgresql-15 image, which is designed for OpenShift and runs with any UID assigned by the cluster. It does not need an SCC grant beyond the default restricted-v2.

Wait until PostgreSQL is running before continuing:

oc get pods -n txture -l app=postgres -w

Ensure the pod shows 1/1 Running before proceeding. The Txture pod will show Init:0/1 — this means the init container is waiting for data, which is the expected state.

6. Prepare data

The Txture pod's init container is waiting for data. First, get the pod name:

oc get pod -n txture -l app=txture -o name

Replace <pod-name> in the commands below with the output (e.g. txture-7b9c8d4e5-xxxxx; you can omit the pod/ prefix).

Option A — Upload as ZIP archive (recommended for large data sets)

Copy a single ZIP file containing your txture_home contents onto the PVC:

oc cp /path/to/txture_home.zip <pod-name>:/opt/txture_home/txture_home.zip -c setup -n txture

After you signal readiness (see below), the init container automatically detects and extracts ZIP (.zip) or tarball (.tar.gz) archives from /opt/txture_home/. If the archive contains a single wrapper directory, it is automatically flattened. This is significantly faster than copying many small files.
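
If you still need to create the archive, zip the directory contents (not the directory itself) so that txture.properties ends up at the archive root, for example:

cd /path/to/local/txture_home && zip -r ../txture_home.zip .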

Option B — Upload directory contents

Alternatively, copy the contents of your local txture_home directory directly. Note the trailing /. — this copies the directory contents, not the directory itself:

oc cp /path/to/local/txture_home/. <pod-name>:/opt/txture_home/ -c setup -n txture
caution

Make sure to copy the contents of your txture_home directory, not the directory itself. Files like txture.properties, modelDefinition.json, and the database dump must be directly under /opt/txture_home/.

Signal readiness

Once the copy is complete, signal the init container that the data is ready:

oc exec <pod-name> -c setup -n txture -- touch /opt/txture_home/.ready

The init container will then automatically:

  1. Extract any ZIP or tar.gz archive found on the PVC (and flatten a wrapper directory if present).
  2. Wait for PostgreSQL to be ready.
  3. Restore the database dump if a *_dump.gz file is found on the PVC (and remove it after a successful restore).
  4. Exit, allowing the main Txture container to start.
tip

You can follow the init container's progress with:

oc logs -n txture -l app=txture -c setup -f

7. Wait for Txture

Watch the pod until Txture is running and ready:

oc get pods -n txture -l app=txture -w

First startup can take a few minutes after the init container exits. Wait until the pod shows 1/1 Running.


III. Verify the Installation

Once Txture has started, verify that the workloads are running correctly.

  1. Check pod status
    Run oc get pods -n txture to see the list of pods. The output should be similar to this:

    NAME                       READY   STATUS    RESTARTS   AGE
    postgres-6d8f9b5c4-xxxxx   1/1     Running   0          10m
    txture-7b9c8d4e5-xxxxx     1/1     Running   0          5m

    This confirms that both PostgreSQL and Txture are up and running.

  2. Access Txture
    Run oc get route txture -n txture to get the Route hostname. Txture will be available at the reported URL (e.g. https://txture-txture.apps.your-cluster-domain.tld) in your web browser.
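
To print just the hostname, for example for use in scripts:

oc get route txture -n txture -o jsonpath='{.spec.host}'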


IV. Managing Txture

Here are some useful commands for managing your Txture instance on OpenShift.

  • View logs: To view the logs from the Txture container in real-time, use:

    oc logs -n txture -l app=txture -f

    Press Ctrl + C to stop viewing the logs.

  • Restore database (subsequent restores): For restores after the initial setup, pipe the dump directly from your local machine into the PostgreSQL pod without copying files:

    gzip -cd /path/to/local/dump.gz | oc exec -n txture deployment/postgres -i -- psql -U txture -d txture

    This avoids placing files inside the container. Scale down Txture before restoring and scale it back up afterwards.
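
    A full restore sequence might look like this (a sketch; adjust the local dump path to your environment):

    oc scale deployment/txture -n txture --replicas=0
    gzip -cd /path/to/local/dump.gz | oc exec -n txture deployment/postgres -i -- psql -U txture -d txture
    oc scale deployment/txture -n txture --replicas=1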

  • Update Txture image: See the Upgrading Txture guide for instructions on how to upgrade to a newer version.

  • List resources: To see all resources in the project:
    oc get all -n txture

  • Port forward (optional): For direct access without a Route:
    oc port-forward -n txture svc/txture 8080:8080
    Then open http://localhost:8080 in your browser.


V. Configuration

Database connection

Txture gets the database connection from the TXTURE_DB_JDBC_URL environment variable, which is constructed automatically by the OpenShift Template using the POSTGRES_* parameters in your txture.env file. You typically do not need to configure this manually.

note

After template processing, the database password is embedded in the TXTURE_DB_JDBC_URL environment variable value and stored in the Deployment object in etcd. This is an acceptable trade-off for most environments, since reading Deployment specs requires RBAC read access to the namespace. If your security policy requires stricter secret handling, consider constructing the JDBC URL at runtime from environment variables sourced from the Secret.
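
A minimal sketch of that stricter approach, relying on Kubernetes' $(VAR) expansion in env values (the referenced variables must be declared earlier in the same container; oc process leaves $(...) untouched because it only substitutes ${...} parameters). The DB_USER and DB_PASSWORD names are illustrative, not part of the template:

# Hypothetical replacement for the TXTURE_DB_JDBC_URL entry in the Txture container:
env:
  - name: DB_USER
    valueFrom:
      secretKeyRef:
        name: txture-db-credentials
        key: POSTGRESQL_USER
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: txture-db-credentials
        key: POSTGRESQL_PASSWORD
  - name: TXTURE_DB_JDBC_URL
    value: "jdbc:postgresql://postgres:5432/txture?user=$(DB_USER)&password=$(DB_PASSWORD)"

The password then lives only in the Secret rather than in the Deployment spec, though it remains visible in the running container's environment.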

Using an external PostgreSQL

This guide deploys PostgreSQL alongside Txture for convenience, but you can use an external PostgreSQL instance instead. To do so:

  1. Remove the PostgreSQL Deployment, Service, PVC, and txture-postgres ServiceAccount from the template (or simply do not apply them).
  2. Set POSTGRES_HOST in your txture.env to the external hostname (e.g. POSTGRES_HOST=db.example.com). This configures both the JDBC connection and the init container's readiness check automatically.
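
For example, with a hypothetical external host db.example.com, the relevant txture.env lines would be:

POSTGRES_HOST=db.example.com
POSTGRES_USER=txture
POSTGRES_PASSWORD=YOUR_SECURE_PASSWORD_HERE
POSTGRES_DB=txture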

Memory allocation

Memory is configured via two parameters in your txture.env file:

  • TXTURE_MEMORY_LIMIT: Sets the container memory limit (e.g. 32Gi).
  • TXTURE_MEMORY_REQUEST: Sets the memory request, i.e. how much memory the scheduler reserves on the node (default: 16Gi).

The JVM heap size is set automatically to 75% of the container memory limit via -XX:MaxRAMPercentage=75.0. You do not need to configure this manually. By default, the memory request is half of the limit, which gives the pod Burstable QoS and avoids over-reserving resources on shared clusters. Set TXTURE_MEMORY_REQUEST equal to TXTURE_MEMORY_LIMIT for Guaranteed QoS on dedicated nodes.
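
For example, with the sizing used above:

TXTURE_MEMORY_LIMIT=32Gi    ->  maximum JVM heap = 75% of 32Gi = 24Gi
TXTURE_MEMORY_REQUEST=16Gi  ->  the scheduler reserves 16Gi on the node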

Outbound proxy

To route Txture's outgoing traffic through a proxy, add the following to your txture.properties file (in the Txture home directory):

txture.http.outbound.proxy.server.host=http://your.proxy.server:80

This can also be configured in the Txture UI if it is not set in the properties file.


VI. Production Setup Recommendations

The template already configures TLS edge termination on the Route, using the cluster's default wildcard certificate. If you need a custom certificate, update the Route with your own certificate, key, and caCertificate fields, or use termination: reencrypt. See our example reverse proxy configuration for more details. Additionally, it is recommended to set up SMTP credentials to enable email functionality such as notifications and user or survey invites.

Further production considerations:

Storage

  • Use a cluster StorageClass (e.g. OpenShift Data Foundation, LVM Storage, or NFS) and avoid hostPath.
  • Enable volume snapshots for backups where supported.
  • Consider separate storage tiers or performance classes for database vs. application data.

High availability

  • Prefer a multi-node cluster (e.g. 3 control plane + 3 worker nodes).
  • Use pod anti-affinity so Txture and PostgreSQL pods are spread across nodes (see the sketch after this list).
  • For database HA, consider a managed PostgreSQL operator (e.g. Crunchy Data) or StatefulSet with replication.
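
A minimal sketch of the anti-affinity mentioned above, assuming the app labels from the template; it would go under the Txture Deployment's pod template spec (spec.template.spec):

# Prefer scheduling the Txture pod on a node that does not already run PostgreSQL.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: postgres
          topologyKey: kubernetes.io/hostname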

Security

  • Do not grant anyuid or other broad SCCs to the default ServiceAccount.
  • Use a dedicated ServiceAccount per workload and the most restrictive SCC that works.
  • Only the Txture ServiceAccount (txture-app) needs the nonroot-v2 SCC; PostgreSQL (RHEL9 image) runs fine with the default restricted-v2.
  • Consider NetworkPolicies to restrict traffic between namespaces (see the sketch after this list).
  • Store database passwords in a secret manager or sealed secrets rather than on the command line.
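
As a sketch of the NetworkPolicy idea, the following restricts ingress to PostgreSQL so that only Txture pods in the project can reach it (pod labels as defined in the template); Route traffic to the Txture pod itself is unaffected:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: postgres-allow-txture-only
  namespace: txture
spec:
  podSelector:
    matchLabels:
      app: postgres
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: txture
      ports:
        - protocol: TCP
          port: 5432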

Monitoring and backups

  • Back up the PostgreSQL database (e.g. pg_dump or volume snapshots) and the txture_home PVC regularly (see the example after this list).
  • Test restore procedures in a non-production environment.
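
A sketch of a logical backup streamed to your local machine; whether pg_dump needs a password here depends on the image's local authentication setup, so you may need to set PGPASSWORD inside the container. Naming the file *_dump.gz keeps it compatible with the init container's restore logic:

oc exec -n txture deployment/postgres -- pg_dump -U txture -d txture | gzip > txture_$(date +%F)_dump.gz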

VII. Tested System Configurations

The instructions have been tested on the configurations below, though any OpenShift 4.14+ cluster with a suitable StorageClass should work.

  • OpenShift 4.18 Single Node OpenShift (SNO), OpenShift Data Foundation (Ceph RBD)

VIII. Troubleshooting

PVCs stay in Pending

Run oc get storageclass and ensure a default or chosen StorageClass exists and that the provisioner is healthy.

Txture pod fails to start (WAR / permission errors)

If the Txture pod crashes with The main resource set specified [/usr/local/tomcat/webapps/txture] is not a directory or war file, the container is running with the wrong UID. Ensure that nonroot-v2 SCC is granted to txture-app and that the pod security context sets runAsUser: 1000 and fsGroup: 1000.

Image pull errors (short-name enforcement)

OpenShift may enforce fully qualified image names. Use:

  • registry.redhat.io/rhel9/postgresql-15:latest
  • europe-docker.pkg.dev/txture/production/txture:latest

Txture pod stays in Init:0/1

The init container is waiting for data. Check its logs with oc logs -n txture -l app=txture -c setup to see what it is waiting for:

  • "Waiting for upload signal": The oc cp in step 6 has not completed yet, or the signal command (touch .ready) has not been run.
  • "Waiting for PostgreSQL": The PostgreSQL pod is not running. Check with oc get pods -n txture -l app=postgres.

Txture reports missing txture.properties

Ensure txture_home contents (including txture.properties) are at the root of the volume, not inside an extra subdirectory, and that the Txture deployment mounts the same PVC at /opt/txture_home.

Database connection failures

Confirm PostgreSQL is running: oc get pods -n txture -l app=postgres. Check that the TXTURE_DB_JDBC_URL environment variable is correct: oc exec -n txture deployment/txture -- env | grep JDBC. It should match the PostgreSQL Service (postgres:5432) and the credentials in your txture.env.


IX. FAQ

1. Can I use hostPath or manual PVs for storage?
This guide targets production and therefore uses a StorageClass and PVCs only. In lab or edge environments without a CSI driver, you may need to create PersistentVolumes manually; avoid hostPath and broad permissions (e.g. chmod 777) in production.

2. Why does Txture need the nonroot-v2 SCC but PostgreSQL does not?
The Txture Docker image defines a user txture (UID 1000) that owns the Tomcat application files. Without the nonroot-v2 SCC, OpenShift assigns a UID from the namespace range that cannot write to those directories. The RHEL9 PostgreSQL image is designed for OpenShift and works with any UID, so the default restricted-v2 SCC is sufficient.

3. What does the init container do?
The setup init container on the Txture Deployment serves four purposes: it acts as a target for oc cp (to upload txture_home onto the PVC), it extracts ZIP or tar.gz archives if present, it waits for PostgreSQL to be ready, and it automatically restores the database dump if one is found on the PVC. On subsequent pod restarts (e.g. after an upgrade or node reboot), the init container finds the .ready signal file immediately, finds no archive to extract, confirms PostgreSQL is ready, finds no dump file, and exits within seconds.

4. How do I re-upload data after a failed upload or crash?
Delete the Txture pod so the deployment creates a fresh one with a new init container:

oc delete pod -n txture -l app=txture

If you already removed .ready from the PVC, the new init container will wait for data again (Init:0/1). If .ready still exists, remove it first so the init container waits:

POD=$(oc get pod -n txture -l app=txture -o jsonpath='{.items[0].metadata.name}')
oc exec $POD -c setup -n txture -- rm -f /opt/txture_home/.ready

Then delete the pod, re-upload your data, and signal .ready as described in step 6.

5. How do I get the application URL?
Run oc get route txture -n txture and open the reported hostname in your browser. The Route is configured with TLS edge termination, so use HTTPS.