Source code can be found here:
This Helm chart installs Midaz, a high-performance, open-source ledger.
The default installation is similar to the one provided in the Midaz repo.
To install Midaz using Helm, run:
$ helm install midaz oci://registry-1.docker.io/lerianstudio/midaz-helm --version <version> -n midaz --create-namespace
Replace <version> with the desired chart version. This creates a namespace called midaz if it doesn’t already exist and deploys the Midaz Helm chart.
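To customize the installation, you can pass your own values file with the -f flag (for example, helm install midaz ... -f my-values.yaml). A minimal sketch of such an override file, with purely illustrative values drawn from the parameter tables below, might look like this:

# my-values.yaml - illustrative overrides only; adjust to your environment
onboarding:
  replicaCount: 3      # scale the onboarding service
transaction:
  image:
    tag: "3.3.4"       # pin a specific image tag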
After installation, you can verify that the release was successful by listing the Helm releases in the midaz namespace:
$ helm list -n midaz
The Midaz Helm Chart supports different Ingress Controllers for exposing services. You can enable Ingress for the following services: Transaction, Onboarding, and Console. Below are configurations for commonly used controllers.
Note: Before configuring Ingress, ensure that you have an Ingress Controller installed in your cluster. The Ingress Controller manages external access to services. Examples include NGINX, AWS ALB, and Traefik.
To use the NGINX Ingress Controller, configure the values.yaml as follows:
ingress:
  enabled: true
  className: "nginx"
  # The `annotations` field adds custom metadata to the Ingress resource.
  # Annotations are key-value pairs that augment the behavior of the Ingress resource.
  # See https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md
  annotations: {}
  hosts:
    - host: midaz.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: midaz-tls # Ensure this secret exists or is managed by cert-manager
      hosts:
        - midaz.example.com
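If you are not using cert-manager, the midaz-tls secret referenced above can be created as a standard kubernetes.io/tls Secret in the midaz namespace. A minimal sketch (the certificate and key values are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: midaz-tls
  namespace: midaz
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>  # placeholder
  tls.key: <base64-encoded private key>  # placeholder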
For AWS ALB Ingress Controller, use the following configuration:
ingress:
  enabled: true
  className: "alb"
  annotations:
    alb.ingress.kubernetes.io/scheme: internal # Use "internet-facing" for public ALB
    alb.ingress.kubernetes.io/target-type: ip # Use "instance" if targeting EC2 instances
    alb.ingress.kubernetes.io/group.name: "midaz" # Group ALB resources under this name
    alb.ingress.kubernetes.io/healthcheck-path: "/healthz" # Health check path
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]' # Listen on HTTP and HTTPS
  hosts:
    - host: midaz.example.com
      paths:
        - path: /
          pathType: Prefix
  tls: [] # TLS is managed by the ALB using ACM certificates
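When the ALB terminates TLS, you typically also reference an ACM certificate and force HTTPS. The annotations below are standard AWS Load Balancer Controller annotations, shown as a sketch with a placeholder certificate ARN; add them to the annotations block above if they apply to your setup:

ingress:
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm:us-east-1:123456789012:certificate/example-id" # placeholder ARN
    alb.ingress.kubernetes.io/ssl-redirect: "443" # redirect HTTP requests to HTTPS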
For Traefik, configure the values.yaml as follows:
ingress:
  enabled: true
  className: "traefik"
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: "web, websecure" # Entrypoints defined in Traefik
    traefik.ingress.kubernetes.io/router.tls: "true" # Enable TLS for this route
  hosts:
    - host: midaz.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: midaz-tls # Ensure this secret exists and contains the TLS certificate
      hosts:
        - midaz.example.com
Midaz deploys the following core services:
| Parameter | Description | Default Value |
|---|---|---|
| onboarding.name | Service name. | "onboarding" |
| onboarding.replicaCount | Number of replicas for the onboarding service. | 2 |
| onboarding.image.repository | Repository for the onboarding service container image. | "lerianstudio/midaz-onboarding" |
| onboarding.image.pullPolicy | Image pull policy. | "IfNotPresent" |
| onboarding.image.tag | Image tag used for deployment. | "3.3.4" |
| onboarding.imagePullSecrets | Secrets for pulling images from a private registry. | [] |
| onboarding.nameOverride | Overrides the default generated name by Helm. | "" |
| onboarding.fullnameOverride | Overrides the full name generated by Helm. | "" |
| onboarding.podAnnotations | Pod annotations for additional metadata. | {} |
| onboarding.podSecurityContext | Security context applied at the pod level. | {} |
| onboarding.securityContext.* | Defines security context settings for the container. | See values.yaml |
| onboarding.pdb.enabled | Specifies whether PodDisruptionBudget is enabled. | true |
| onboarding.pdb.minAvailable | Minimum number of available pods. | 1 |
| onboarding.pdb.maxUnavailable | Maximum number of unavailable pods. | 1 |
| onboarding.pdb.annotations | Annotations for the PodDisruptionBudget. | {} |
| onboarding.deploymentUpdate.* | Deployment update strategy. | See values.yaml |
| onboarding.service.type | Kubernetes service type. | "ClusterIP" |
| onboarding.service.port | Port for the HTTP API. | 3000 |
| onboarding.service.annotations | Annotations for the service. | {} |
| onboarding.ingress.enabled | Specifies whether Ingress is enabled. | false |
| onboarding.ingress.className | Ingress class name. | "" |
| onboarding.ingress.annotations | Additional ingress annotations. | {} |
| onboarding.ingress.hosts | Configured hosts for Ingress and associated paths. | "" |
| onboarding.ingress.tls | TLS configurations for Ingress. | [] |
| onboarding.resources.* | CPU/Memory resource requests/limits. | See values.yaml |
| onboarding.autoscaling.enabled | Specifies whether autoscaling is enabled. | true |
| onboarding.autoscaling.minReplicas | Minimum number of replicas for autoscaling. | 2 |
| onboarding.autoscaling.maxReplicas | Maximum number of replicas for autoscaling. | 5 |
| onboarding.autoscaling.targetCPUUtilizationPercentage | Target CPU utilization percentage for autoscaling. | 80 |
| onboarding.autoscaling.targetMemoryUtilizationPercentage | Target memory utilization percentage for autoscaling. | 80 |
| onboarding.nodeSelector | Node selectors for pod scheduling. | {} |
| onboarding.tolerations | Tolerations for pod scheduling. | {} |
| onboarding.affinity | Affinity rules for pod scheduling. | {} |
| onboarding.configmap.* | Environment variables for the service. | See values.yaml |
| onboarding.secrets.* | Secrets for the service. | See values.yaml |
| onboarding.useExistingSecret | Use an existing secret instead of creating a new one. | false |
| onboarding.existingSecretName | The name of the existing secret to use. | "" |
| onboarding.extraEnvVars | A list of extra environment variables. | [] |
| onboarding.serviceAccount.create | Specifies whether the service account should be created. | true |
| onboarding.serviceAccount.annotations | Annotations for the service account. | {} |
| onboarding.serviceAccount.name | Service account name. If not defined, it will be generated automatically. | "" |
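The resource and autoscaling parameters are usually the first ones to tune. A hedged example of overriding them for the onboarding service in values.yaml (the figures are illustrative, not recommendations):

onboarding:
  resources:
    requests:
      cpu: 100m        # illustrative request
      memory: 128Mi
    limits:
      cpu: 500m        # illustrative limit
      memory: 512Mi
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 5
    targetCPUUtilizationPercentage: 80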
If you want to use an existing Kubernetes Secret for the onboarding service, you can create it manually with the following command:
kubectl create secret generic midaz-onboarding \
--from-literal=MONGO_PASSWORD='<your-mongo-password>' \
--from-literal=DB_PASSWORD='<your-db-password>' \
--from-literal=DB_REPLICA_PASSWORD='<your-db-replica-password>' \
--from-literal=RABBITMQ_DEFAULT_PASS='<your-rabbitmq-password>' \
--from-literal=REDIS_PASSWORD='<your-redis-password>' \
-n midaz
Then configure the onboarding service to use this existing secret:
onboarding:
  useExistingSecret: true
  existingSecretName: "midaz-onboarding"
| Parameter | Description | Default Value |
|---|---|---|
| transaction.name | Service name. | "transaction" |
| transaction.replicaCount | Number of replicas for the transaction service. | 1 |
| transaction.image.repository | Repository for the transaction service container image. | "lerianstudio/midaz-transaction" |
| transaction.image.pullPolicy | Image pull policy. | "IfNotPresent" |
| transaction.image.tag | Image tag used for deployment. | "3.3.4" |
| transaction.imagePullSecrets | Secrets for pulling images from a private registry. | [] |
| transaction.nameOverride | Overrides the default generated name by Helm. | "" |
| transaction.fullnameOverride | Overrides the full name generated by Helm. | "" |
| transaction.podAnnotations | Pod annotations for additional metadata. | {} |
| transaction.podSecurityContext | Security context for the pod. | {} |
| transaction.securityContext.* | Defines security context settings for the container. | See values.yaml |
| transaction.pdb.enabled | Enable or disable PodDisruptionBudget. | true |
| transaction.pdb.minAvailable | Minimum number of available pods. | 2 |
| transaction.pdb.maxUnavailable | Maximum number of unavailable pods. | 1 |
| transaction.pdb.annotations | Annotations for the PodDisruptionBudget. | {} |
| transaction.deploymentUpdate.* | Deployment update strategy. | See values.yaml |
| transaction.service.type | Kubernetes service type. | "ClusterIP" |
| transaction.service.port | Port for the HTTP API. | 3001 |
| transaction.service.annotations | Annotations for the service. | {} |
| transaction.ingress.enabled | Enable or disable ingress. | false |
| transaction.ingress.className | Ingress class name. | "" |
| transaction.ingress.annotations | Additional ingress annotations. | {} |
| transaction.ingress.hosts | Configured hosts for ingress and associated paths. | [] |
| transaction.ingress.tls | TLS configuration for ingress. | [] |
| transaction.resources.* | CPU/Memory resource requests/limits. | See values.yaml |
| transaction.autoscaling.enabled | Enable or disable horizontal pod autoscaling. | true |
| transaction.autoscaling.minReplicas | Minimum number of replicas for autoscaling. | 1 |
| transaction.autoscaling.maxReplicas | Maximum number of replicas for autoscaling. | 5 |
| transaction.autoscaling.targetCPUUtilizationPercentage | Target CPU utilization percentage for autoscaling. | 80 |
| transaction.autoscaling.targetMemoryUtilizationPercentage | Target memory utilization percentage for autoscaling. | 80 |
| transaction.nodeSelector | Node selector for scheduling pods on specific nodes. | {} |
| transaction.tolerations | Tolerations for scheduling on tainted nodes. | {} |
| transaction.affinity | Affinity rules for pod scheduling. | {} |
| transaction.configmap.* | Environment variables for the service. | See values.yaml |
| transaction.secrets.* | Secrets for the service. | See values.yaml |
| transaction.useExistingSecret | Use an existing secret instead of creating a new one. | false |
| transaction.existingSecretName | The name of the existing secret to use. | "" |
| transaction.extraEnvVars | A list of extra environment variables. | [] |
| transaction.serviceAccount.create | Specifies whether a ServiceAccount should be created. | true |
| transaction.serviceAccount.annotations | Annotations for the ServiceAccount. | {} |
| transaction.serviceAccount.name | Name of the service account. | "" |
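As an illustration, the per-service ingress parameters above can be used to expose the transaction service on its own host (hostname and class are illustrative):

transaction:
  ingress:
    enabled: true
    className: "nginx"
    hosts:
      - host: transaction.midaz.example.com # illustrative hostname
        paths:
          - path: /
            pathType: Prefix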
If you want to use an existing Kubernetes Secret for the transaction service, you can create it manually with the following command:
kubectl create secret generic midaz-transaction \
--from-literal=MONGO_PASSWORD='<your-mongo-password>' \
--from-literal=DB_PASSWORD='<your-db-password>' \
--from-literal=DB_REPLICA_PASSWORD='<your-db-replica-password>' \
--from-literal=RABBITMQ_DEFAULT_PASS='<your-rabbitmq-password>' \
--from-literal=RABBITMQ_CONSUMER_PASS='<your-rabbitmq-consumer-password>' \
--from-literal=REDIS_PASSWORD='<your-redis-password>' \
-n midaz
Note: The transaction service requires an additional secret key RABBITMQ_CONSUMER_PASS compared to onboarding.
Then configure the transaction service to use this existing secret:
transaction:
  useExistingSecret: true
  existingSecretName: "midaz-transaction"
The ledger service combines onboarding and transaction modules into a single deployment. Use this service for new installations. It will become mandatory in future releases.
Important: When ledger.enabled is set to true, the onboarding and transaction services are automatically disabled (unless migration.allowAllServices is set to true for testing purposes).
| Parameter | Description | Default Value |
|---|---|---|
| ledger.enabled | Enable or disable the ledger service. | false |
| ledger.name | Service name. | "ledger" |
| ledger.replicaCount | Number of replicas for the ledger service. | 1 |
| ledger.image.repository | Repository for the ledger service container image. | "lerianstudio/midaz-ledger" |
| ledger.image.pullPolicy | Image pull policy. | "IfNotPresent" |
| ledger.image.tag | Image tag used for deployment. | "" (defaults to Chart.AppVersion) |
| ledger.imagePullSecrets | Secrets for pulling images from a private registry. | [] |
| ledger.nameOverride | Overrides the default generated name by Helm. | "" |
| ledger.fullnameOverride | Overrides the full name generated by Helm. | "" |
| ledger.podAnnotations | Pod annotations for additional metadata. | {} |
| ledger.podSecurityContext | Security context applied at the pod level. | {} |
| ledger.securityContext.* | Defines security context settings for the container. | See values.yaml |
| ledger.pdb.enabled | Specifies whether PodDisruptionBudget is enabled. | true |
| ledger.pdb.minAvailable | Minimum number of available pods. | 1 |
| ledger.pdb.maxUnavailable | Maximum number of unavailable pods. | 1 |
| ledger.pdb.annotations | Annotations for the PodDisruptionBudget. | {} |
| ledger.deploymentUpdate.* | Deployment update strategy. | See values.yaml |
| ledger.service.type | Kubernetes service type. | "ClusterIP" |
| ledger.service.port | Port for the HTTP API. | 3000 |
| ledger.service.annotations | Annotations for the service. | {} |
| ledger.ingress.enabled | Specifies whether Ingress is enabled. | false |
| ledger.ingress.className | Ingress class name. | "" |
| ledger.ingress.annotations | Additional ingress annotations. | {} |
| ledger.ingress.hosts | Configured hosts for Ingress and associated paths. | [] |
| ledger.ingress.tls | TLS configurations for Ingress. | [] |
| ledger.resources.* | CPU/Memory resource requests/limits. | See values.yaml |
| ledger.autoscaling.enabled | Specifies whether autoscaling is enabled. | true |
| ledger.autoscaling.minReplicas | Minimum number of replicas for autoscaling. | 2 |
| ledger.autoscaling.maxReplicas | Maximum number of replicas for autoscaling. | 5 |
| ledger.autoscaling.targetCPUUtilizationPercentage | Target CPU utilization percentage for autoscaling. | 80 |
| ledger.autoscaling.targetMemoryUtilizationPercentage | Target memory utilization percentage for autoscaling. | 80 |
| ledger.nodeSelector | Node selectors for pod scheduling. | {} |
| ledger.tolerations | Tolerations for pod scheduling. | {} |
| ledger.affinity | Affinity rules for pod scheduling. | {} |
| ledger.configmap.* | Environment variables for the service. | See values.yaml |
| ledger.secrets.* | Secrets for the service. | See values.yaml |
| ledger.useExistingSecret | Use an existing secret instead of creating a new one. | false |
| ledger.existingSecretName | The name of the existing secret to use. | "" |
| ledger.extraEnvVars | A list of extra environment variables. | [] |
| ledger.serviceAccount.create | Specifies whether the service account should be created. | true |
| ledger.serviceAccount.annotations | Annotations for the service account. | {} |
| ledger.serviceAccount.name | Service account name. If not defined, it will be generated automatically. | "" |
If you want to use an existing Kubernetes Secret for the ledger service, you can create it manually with the following command:
kubectl create secret generic midaz-ledger \
--from-literal=DB_ONBOARDING_PASSWORD='<your-db-onboarding-password>' \
--from-literal=DB_ONBOARDING_REPLICA_PASSWORD='<your-db-onboarding-replica-password>' \
--from-literal=MONGO_ONBOARDING_PASSWORD='<your-mongo-onboarding-password>' \
--from-literal=DB_TRANSACTION_PASSWORD='<your-db-transaction-password>' \
--from-literal=DB_TRANSACTION_REPLICA_PASSWORD='<your-db-transaction-replica-password>' \
--from-literal=MONGO_TRANSACTION_PASSWORD='<your-mongo-transaction-password>' \
--from-literal=REDIS_PASSWORD='<your-redis-password>' \
--from-literal=RABBITMQ_DEFAULT_PASS='<your-rabbitmq-password>' \
--from-literal=RABBITMQ_CONSUMER_PASS='<your-rabbitmq-consumer-password>' \
-n midaz
Note: The ledger service uses module-specific database credentials (onboarding and transaction) since it combines both modules.
Then configure the ledger service to use this existing secret:
ledger:
  enabled: true
  useExistingSecret: true
  existingSecretName: "midaz-ledger"
To enable the ledger service and disable the separate onboarding/transaction services:
ledger:
  enabled: true
onboarding:
  enabled: false
transaction:
  enabled: false
When ledger is enabled, the onboarding and transaction ingresses will automatically redirect traffic to the ledger service, maintaining backward compatibility with existing DNS configurations.
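A sketch of that scenario, assuming the chart keeps rendering the per-service ingresses in this mode (the hostname is illustrative):

ledger:
  enabled: true
onboarding:
  enabled: false
  ingress:
    enabled: true # existing hostname stays configured and now routes to the ledger service
    hosts:
      - host: onboarding.midaz.example.com # illustrative existing hostname
        paths:
          - path: /
            pathType: Prefix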
The crm service provides APIs for managing holder data and their relationships with ledger accounts. Previously available as a separate chart (plugin-crm) deployed in the midaz-plugins namespace, the CRM is being migrated to become a core component of Midaz, now deployed in the midaz namespace.
For more details, refer to the official documentation: CRM Documentation
Migration Note: If you are currently using plugin-crm in the midaz-plugins namespace, we recommend migrating to this new integrated CRM workload. See the Upgrade Guide for migration steps.
| Parameter | Description | Default Value |
|---|---|---|
| crm.enabled | Enable or disable the CRM service. | false |
| crm.name | Service name. | "crm" |
| crm.replicaCount | Number of replicas for the CRM service. | 1 |
| crm.image.repository | Repository for the CRM service container image. | "ghcr.io/lerianstudio/midaz-crm" |
| crm.image.pullPolicy | Image pull policy. | "Always" |
| crm.image.tag | Image tag used for deployment. | "3.5.0" |
| crm.imagePullSecrets | Secrets for pulling images from a private registry. | [] |
| crm.nameOverride | Overrides the default generated name by Helm. | "" |
| crm.fullnameOverride | Overrides the full name generated by Helm. | "" |
| crm.podAnnotations | Pod annotations for additional metadata. | {} |
| crm.podSecurityContext | Security context applied at the pod level. | {} |
| crm.securityContext.* | Defines security context settings for the container. | See values.yaml |
| crm.pdb.enabled | Specifies whether PodDisruptionBudget is enabled. | true |
| crm.pdb.minAvailable | Minimum number of available pods. | 1 |
| crm.pdb.maxUnavailable | Maximum number of unavailable pods. | 1 |
| crm.pdb.annotations | Annotations for the PodDisruptionBudget. | {} |
| crm.deploymentUpdate.type | Type of deployment strategy. | "RollingUpdate" |
| crm.deploymentUpdate.maxSurge | Maximum number of pods that can be created over the desired number of pods. | 1 |
| crm.deploymentUpdate.maxUnavailable | Maximum number of pods that can be unavailable during the update. | 1 |
| crm.service.type | Kubernetes service type. | "ClusterIP" |
| crm.service.port | Service port. | 4003 |
| crm.ingress.enabled | Specifies whether Ingress is enabled. | false |
| crm.ingress.className | Ingress class name. | "" |
| crm.ingress.annotations | Additional ingress annotations. | {} |
| crm.ingress.hosts | Configured hosts for Ingress and associated paths. | [] |
| crm.ingress.tls | TLS configurations for Ingress. | [] |
| crm.resources.* | CPU/Memory resource requests/limits. | See values.yaml |
| crm.autoscaling.enabled | Specifies whether autoscaling is enabled. | true |
| crm.autoscaling.minReplicas | Minimum number of replicas for autoscaling. | 1 |
| crm.autoscaling.maxReplicas | Maximum number of replicas for autoscaling. | 3 |
| crm.autoscaling.targetCPUUtilizationPercentage | Target CPU utilization percentage for autoscaling. | 80 |
| crm.autoscaling.targetMemoryUtilizationPercentage | Target memory utilization percentage for autoscaling. | 80 |
| crm.nodeSelector | Node selectors for pod scheduling. | {} |
| crm.tolerations | Tolerations for pod scheduling. | {} |
| crm.affinity | Affinity rules for pod scheduling. | {} |
| crm.configmap.* | Environment variables for the service. | See values.yaml |
| crm.secrets.* | Secrets for the service. | See values.yaml |
| crm.useExistingSecret | Use an existing secret instead of creating a new one. | false |
| crm.existingSecretName | The name of the existing secret to use. | "" |
| crm.extraEnvVars | A list of extra environment variables. | {} |
If you want to use an existing Kubernetes Secret for the CRM service, you can create it manually with the following command:
kubectl create secret generic midaz-crm \
--from-literal=LCRYPTO_HASH_SECRET_KEY='<your-hash-secret-key>' \
--from-literal=LCRYPTO_ENCRYPT_SECRET_KEY='<your-encrypt-secret-key>' \
--from-literal=MONGO_PASSWORD='<your-mongo-password>' \
-n midaz
Then configure the CRM service to use this existing secret:
crm:
  enabled: true
  useExistingSecret: true
  existingSecretName: "midaz-crm"
To enable the CRM service:
crm:
  enabled: true
  configmap:
    MONGO_HOST: "midaz-mongodb" # Use your MongoDB host
    MONGO_NAME: "crm"
    MONGO_USER: "midaz"
  secrets:
    MONGO_PASSWORD: "lerian"
Midaz uses Grafana Docker OpenTelemetry LGTM for observability. This component collects, processes, and exports telemetry data such as traces and metrics.
You can access the observability dashboard in two ways: by port-forwarding the Grafana service, or by exposing it through an Ingress. To port-forward:
$ kubectl port-forward svc/midaz-grafana 3000:3000 -n midaz
Then, open your browser and navigate to http://localhost:3000.
If you want to access the observability dashboard internally using a custom DNS (e.g., within your Kubernetes cluster or private network), you can enable and configure the Ingress for the grafana component in the values.yaml file. Here’s an example configuration for an internal Ingress:
grafana:
  enabled: true
  name: grafana
  ingress:
    enabled: true
    className: "nginx" # Use an internal Ingress class (e.g., nginx-internal)
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /
      # Optional: Use the following annotation to restrict access to internal networks
      nginx.ingress.kubernetes.io/whitelist-source-range: ""
    hosts:
      - host: "midaz-ote.example.com" # Replace with your custom internal DNS
        paths:
          - path: /
            pathType: Prefix
    tls: [] # TLS is optional for internal access
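For example, to restrict the dashboard to private networks, the whitelist annotation shown above can be filled with your internal CIDR ranges (the ranges below are examples):

grafana:
  ingress:
    annotations:
      nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8,192.168.0.0/16" # example internal ranges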
If necessary, the deployment of this component can be disabled by setting grafana.enabled to false in the values file:

grafana:
  enabled: false
This chart includes the following dependencies for the default installation. All dependencies are enabled by default.
How to disable: Set valkey.enabled to false in the values file.

Note: If you have an existing Valkey or Redis instance, you can disable this dependency and configure Midaz Components to use your external instance, like this:
onboarding:
  configmap:
    REDIS_HOST: { your-host }:{ your-host-port }
  secrets:
    REDIS_PASSWORD: { your-host-pass }

transaction:
  configmap:
    REDIS_HOST: { your-host }:{ your-host-port }
  secrets:
    REDIS_PASSWORD: { your-host-pass }
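Putting the pieces together, a sketch of disabling the bundled Valkey and pointing both services at an external instance (host and password are placeholders):

valkey:
  enabled: false # use an external Valkey/Redis instead of the bundled one

onboarding:
  configmap:
    REDIS_HOST: "redis.internal.example.com:6379" # placeholder host:port
  secrets:
    REDIS_PASSWORD: "change-me" # placeholder password

transaction:
  configmap:
    REDIS_HOST: "redis.internal.example.com:6379"
  secrets:
    REDIS_PASSWORD: "change-me"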
How to disable: Set postgresql.enabled to false in the values file.

Note: If you have an existing PostgreSQL instance, you can disable this dependency and configure Midaz Components to use your external PostgreSQL, like this:
onboarding:
  configmap:
    DB_HOST: { your-host }
    DB_USER: { your-host-user }
    DB_PORT: { your-host-port }
    ## DB Replication
    DB_REPLICA_HOST: { your-replication-host }
    DB_REPLICA_USER: { your-replication-host-user }
    DB_REPLICA_PORT: { your-replication-host-port }
  secrets:
    DB_PASSWORD: { your-host-pass }
    DB_REPLICA_PASSWORD: { your-replication-host-pass }

transaction:
  configmap:
    DB_HOST: { your-host }
    DB_USER: { your-host-user }
    DB_PORT: { your-host-port }
    ## DB Replication
    DB_REPLICA_HOST: { your-replication-host }
    DB_REPLICA_USER: { your-replication-host-user }
    DB_REPLICA_PORT: { your-replication-host-port }
  secrets:
    DB_PASSWORD: { your-host-pass }
    DB_REPLICA_PASSWORD: { your-replication-host-pass }
When using an external PostgreSQL (i.e., postgresql.enabled: false), this chart provides a one-shot bootstrap Job that:
- Creates the onboarding and transaction databases if they do not exist.
- Creates the midaz role/user if it does not exist and sets its password.
- Grants public schema permissions so midaz can create tables.
- Is idempotent: if everything already exists, it prints a message and exits.
- Template: charts/midaz/templates/bootstrap-postgres.yaml
- Job name: midaz-bootstrap-postgres

Configure in values.yaml:
postgresql:
  enabled: false # disable bundled PostgreSQL to use an external one

global:
  externalPostgresDefinitions:
    enabled: true
    connection:
      host: "your-postgres-host"
      port: "5432"
    postgresAdminLogin:
      # Option A: Use an existing Secret (recommended)
      # Required keys: DB_USER_ADMIN, DB_ADMIN_PASSWORD
      useExistingSecret:
        name: "my-postgres-admin-secret"
      # Option B: Inline credentials (not recommended in production)
      # username: "postgres"
      # password: "s3cret"
    midazCredentials:
      # Option A: Use an existing Secret (recommended)
      # Required key: DB_PASSWORD_MIDAZ
      useExistingSecret:
        name: "my-midaz-credentials-secret"
      # Option B: Inline password (not recommended in production)
      # password: "midaz-password"
Notes:
How to disable: Set mongodb.enabled to false in the values file.

Note: If you have an existing MongoDB instance, you can disable this dependency and configure Midaz Components to use your external MongoDB, like this:
onboarding:
  configmap:
    MONGO_HOST: { your-host }
    MONGO_NAME: { your-host-name }
    MONGO_USER: { your-host-user }
    MONGO_PORT: { your-host-port }
  secrets:
    MONGO_PASSWORD: { your-host-pass }

transaction:
  configmap:
    MONGO_HOST: { your-host }
    MONGO_NAME: { your-host-name }
    MONGO_USER: { your-host-user }
    MONGO_PORT: { your-host-port }
  secrets:
    MONGO_PASSWORD: { your-host-pass }
How to disable: Set rabbitmq.enabled to false in the values file.
Important: When using an external RabbitMQ instance, it is essential to load the RabbitMQ definitions from the load_definitions.json file. These definitions contain crucial configurations (users, queues, exchanges, bindings) required for Midaz Components to function correctly. Without these definitions, Midaz Components will not operate as expected.
Automatically: Enable the bootstrap job in your values.yaml to automatically apply the RabbitMQ definitions to your external instance:
global:
  externalRabbitmqDefinitions:
    enabled: true
Manually: You can also manually apply the definitions using RabbitMQ’s HTTP API with the following command:
curl -u {user}:{pass} -X POST -H "Content-Type: application/json" \
-d @load_definitions.json http://{host}:{port}/api/definitions
The load_definitions.json file is located at:
charts/midaz/files/rabbitmq/load_definitions.json
To streamline external RabbitMQ setup, this chart provides a one-shot Job that:
- Applies the RabbitMQ definitions file (charts/midaz/files/rabbitmq/load_definitions.json) via the HTTP API.
- Creates the transaction and consumer users with custom passwords.
- Is idempotent: if the users already exist, it skips them and exits.
- Template: charts/midaz/templates/bootstrap-rabbitmq.yaml
- Job name: midaz-bootstrap-rabbitmq

Configure in values.yaml:
rabbitmq:
  enabled: false # disable bundled RabbitMQ to use an external one

global:
  externalRabbitmqDefinitions:
    enabled: true
    connection:
      protocol: "http" # http or https
      host: "your-rabbitmq-host"
      port: "15672" # HTTP management port
      portAmqp: "5672" # AMQP port (for connectivity check)
    rabbitmqAdminLogin:
      # Option A: Use an existing Secret (recommended)
      # Required keys: RABBITMQ_ADMIN_USER, RABBITMQ_ADMIN_PASS
      useExistingSecret:
        name: "my-rabbitmq-admin-secret"
      # Option B: Inline credentials (not recommended in production)
      # username: "admin"
      # password: "s3cret"
    appCredentials:
      # Option A: Use an existing Secret (recommended)
      # Required keys: RABBITMQ_DEFAULT_PASS, RABBITMQ_CONSUMER_PASS
      useExistingSecret:
        name: "my-rabbitmq-app-credentials"
      # Option B: Inline passwords (not recommended in production)
      # transactionPassword: "transaction-pass"
      # consumerPassword: "consumer-pass"
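As with the PostgreSQL bootstrap, Option A expects existing Secrets containing the listed keys. A sketch of both Secrets (values are placeholders, and the midaz namespace is assumed):

apiVersion: v1
kind: Secret
metadata:
  name: my-rabbitmq-admin-secret
  namespace: midaz
type: Opaque
stringData:
  RABBITMQ_ADMIN_USER: "admin" # placeholder management user
  RABBITMQ_ADMIN_PASS: "change-me" # placeholder management password
---
apiVersion: v1
kind: Secret
metadata:
  name: my-rabbitmq-app-credentials
  namespace: midaz
type: Opaque
stringData:
  RABBITMQ_DEFAULT_PASS: "change-me" # placeholder password for the transaction user
  RABBITMQ_CONSUMER_PASS: "change-me" # placeholder password for the consumer user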
Notes:
- The definitions include the following users: midaz (admin), transaction, and consumer.

If your RabbitMQ server requires TLS/SSL, update the client environment variables to use secure protocols:
onboarding:
  configmap:
    RABBITMQ_URI: "amqps" # was "amqp"
    RABBITMQ_PROTOCOL: "https" # was "http"

transaction:
  configmap:
    RABBITMQ_URI: "amqps" # was "amqp"
    RABBITMQ_PROTOCOL: "https" # was "http"
How to enable: Set otel-collector-lerian.enabled to true in the values file:

otel-collector-lerian:
  enabled: true