OpenSearch is a distributed search and analytics engine used for web search, log monitoring, and real-time analytics, which makes it well suited to big-data applications.
Trademarks: This software listing is packaged by Benoît Pourre. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.
$ helm install my-release oci://registry-1.docker.io/captnbp/opensearch
This chart bootstraps an OpenSearch deployment on a Kubernetes cluster using the Helm package manager.
- Kubernetes 1.19+
- Helm 3.2.0+
- PV provisioner support in the underlying infrastructure
- cert-manager
To install the chart with the release name `my-release`:

$ helm install my-release oci://registry-1.docker.io/captnbp/opensearch
This command deploys OpenSearch on the Kubernetes cluster with the default configuration. The Parameters section lists the parameters that can be configured during installation.
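Parameters from the tables below can be overridden at install time, either with `--set` flags or with a values file. A minimal sketch (the password and replica count shown are illustrative values, not chart defaults):

```shell
# Override individual parameters on the command line
helm install my-release \
  --set security.opensearchPassword=MyAdminPassword123 \
  --set cluster_manager.replicaCount=3 \
  oci://registry-1.docker.io/captnbp/opensearch

# Or collect overrides in a values file and pass it with -f
cat > my-values.yaml <<'EOF'
security:
  opensearchPassword: MyAdminPassword123
data:
  replicaCount: 3
EOF
helm install my-release -f my-values.yaml oci://registry-1.docker.io/captnbp/opensearch
```

A values file is usually preferable once more than a couple of parameters are involved, since it can be versioned alongside the rest of your configuration.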
Tip: List all releases using `helm list`.
To uninstall/delete the `my-release` release:

$ helm delete my-release
The command removes all the Kubernetes components associated with the chart and deletes the release. Note that with Helm 3, `helm delete` is an alias of `helm uninstall`, which also removes the release history by default; the Helm 2 `--purge` flag no longer exists, so no extra option is needed:

$ helm uninstall my-release
Global parameters

Name | Description | Value |
---|---|---|
global.imageRegistry | Global Docker image registry | "" |
global.imagePullSecrets | Global Docker registry secret names as an array | [] |
global.storageClass | Global StorageClass for Persistent Volume(s) | "" |
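As an example of the global parameters above, the following points every image at a private registry mirror and sets a cluster-wide storage class (the registry host, secret name, and storage class are placeholders, not chart defaults):

```shell
helm install my-release oci://registry-1.docker.io/captnbp/opensearch \
  --set global.imageRegistry=registry.example.com \
  --set 'global.imagePullSecrets[0]=my-pull-secret' \
  --set global.storageClass=standard
```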
Common parameters

Name | Description | Value |
---|---|---|
nameOverride | String to partially override common.names.fullname template (will maintain the release name) | "" |
fullnameOverride | String to fully override common.names.fullname template | "" |
clusterDomain | Kubernetes cluster domain | cluster.local |
commonLabels | Labels to add to all deployed objects | {} |
commonAnnotations | Annotations to add to all deployed objects | {} |
extraDeploy | Array of extra objects to deploy with the release | [] |
diagnosticMode.enabled | Enable diagnostic mode (all probes will be disabled and the command will be overridden) | false |
diagnosticMode.command | Command to override all containers in the deployment | ["sleep"] |
diagnosticMode.args | Args to override all containers in the deployment | ["infinity"] |
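To sketch how the diagnostic-mode parameters fit together: enabling diagnostic mode disables all probes and replaces every container's command with `sleep infinity`, so you can open a shell in a pod that would otherwise crash-loop (the pod name below is illustrative, not a guaranteed name):

```shell
# Re-deploy with diagnostic mode on: probes off, containers run "sleep infinity"
helm upgrade my-release oci://registry-1.docker.io/captnbp/opensearch \
  --set diagnosticMode.enabled=true

# Then exec into one of the (now idle) pods to investigate
kubectl exec -it my-release-opensearch-data-0 -- /bin/bash
```

Remember to disable diagnostic mode again once the investigation is done, since the cluster does not serve traffic while it is active.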
OpenSearch parameters

Name | Description | Value |
---|---|---|
image.registry | OpenSearch image registry | docker.io |
image.repository | OpenSearch image repository | opensearchproject/opensearch |
image.tag | OpenSearch image tag (immutable tags are recommended) | 2.15.0 |
image.pullPolicy | OpenSearch image pull policy | IfNotPresent |
image.pullSecrets | OpenSearch image pull secrets | [] |
image.debug | Enable image debug mode | false |
security.opensearchPassword | Password for the 'admin' user | "" |
security.dashboardsPassword | Password for the reserved 'dashboard' user | "" |
security.monitoringPassword | Password for the reserved 'monitoring' user | "" |
security.existingSecret | Name of the existing secret containing the OpenSearch passwords | "" |
security.tls.http.autoGenerated | Create cert-manager signed TLS certificates | true |
security.tls.http.existingRootCASecret | Existing secret containing the tls.key and tls.crt of the Root CA that will sign HTTP certs. This will be used in the cert-manager Issuer. | root-ca-tls |
security.tls.http.algorithm | Algorithm of the private key. Allowed values are RSA, Ed25519, or ECDSA. | RSA |
security.tls.http.size | Key bit size of the corresponding private key for this certificate. If algorithm is RSA, valid values are 2048, 4096, or 8192 (default 2048). If algorithm is ECDSA, valid values are 256, 384, or 521 (default 256). If algorithm is Ed25519, size is ignored. No other values are allowed. | 2048 |
security.tls.http.cluster_manager.existingSecret | Existing secret containing the certificates for the cluster_manager nodes | "" |
security.tls.http.data.existingSecret | Existing secret containing the certificates for the data nodes | "" |
security.tls.http.ingest.existingSecret | Existing secret containing the certificates for the ingest nodes | "" |
security.tls.http.coordinating.existingSecret | Existing secret containing the certificates for the coordinating nodes | "" |
security.tls.http.keyPassword | Password to access the PEM key when it is password-protected | "" |
security.tls.http.subject.organizations | Subject's organizations | example |
security.tls.http.subject.countries | Subject's countries | com |
security.tls.http.issuerRef.existingIssuerName | Name of an existing cert-manager HTTP issuer. If provided, a default one won't be created. | "" |
security.tls.http.issuerRef.kind | Kind of the cert-manager issuer resource (defaults to "Issuer") | Issuer |
security.tls.http.issuerRef.group | Group of the cert-manager issuer resource (defaults to "cert-manager.io") | cert-manager.io |
security.tls.transport.enforceHostnameVerification | Whether to verify hostnames on the transport layer | false |
security.tls.transport.resolveHostname | Whether to resolve hostnames against DNS on the transport layer. Optional. Default is true. Only works if hostname verification is also enabled. | true |
security.tls.transport.autoGenerated | Create cert-manager signed TLS certificates | true |
security.tls.transport.existingRootCASecret | Existing secret containing the tls.key and tls.crt of the Root CA that will sign transport certs. This will be used in the cert-manager Issuer. | root-ca-tls |
security.tls.transport.algorithm | Algorithm of the private key. Allowed values are RSA, Ed25519, or ECDSA. | RSA |
security.tls.transport.size | Key bit size of the corresponding private key for this certificate. If algorithm is RSA, valid values are 2048, 4096, or 8192 (default 2048). If algorithm is ECDSA, valid values are 256, 384, or 521 (default 256). If algorithm is Ed25519, size is ignored. No other values are allowed. | 2048 |
security.tls.transport.cluster_manager.existingSecret | Existing secret containing the certificates for the cluster_manager nodes | "" |
security.tls.transport.data.existingSecret | Existing secret containing the certificates for the data nodes | "" |
security.tls.transport.ingest.existingSecret | Existing secret containing the certificates for the ingest nodes | "" |
security.tls.transport.coordinating.existingSecret | Existing secret containing the certificates for the coordinating nodes | "" |
security.tls.transport.keyPassword | Password to access the PEM key when it is password-protected | "" |
security.tls.transport.subject.organizations | Subject's organizations | example |
security.tls.transport.subject.countries | Subject's countries | com |
security.tls.transport.issuerRef.existingIssuerName | Name of an existing cert-manager transport issuer. If provided, a default one won't be created. | "" |
security.tls.transport.issuerRef.kind | Kind of the cert-manager issuer resource (defaults to "Issuer") | Issuer |
security.tls.transport.issuerRef.group | Group of the cert-manager issuer resource (defaults to "cert-manager.io") | cert-manager.io |
security.tls.truststore.extraCACerts | Add extra PEM CA certs to the Java truststore | {} |
security.audit.type | Audit logs let you track access to your OpenSearch cluster and are useful for compliance purposes or in the aftermath of a security breach. You can configure the categories to be logged, the detail level of the logged messages, and where to store the logs. Possible values include debug and internal_opensearch. | internal_opensearch |
security.audit.index | Specify the target index for the storage types internal_opensearch or external_opensearch | 'security-auditlog-'YYYY.MM.dd |
security.audit.ignore_requests | Exclude certain requests from being logged completely, by configuring actions (for transport requests) and/or HTTP request paths (REST) | [] |
security.audit.ignore_users | By default, the security plugin logs events from all users, but excludes the internal OpenSearch Dashboards server users dashboard and monitoring | [] |
security.audit.enable_rest | By default, the security plugin logs events on the REST layer | true |
security.audit.enable_transport | By default, the security plugin logs events on the transport layer | true |
security.audit.resolve_indices | By default, the security plugin logs all indices affected by a request. Because index names can be aliases and contain wildcards/date patterns, the security plugin logs the index name that the user submitted and the actual index name to which it resolves. | true |
security.audit.config | Configure audit settings | {} |
clusterName | OpenSearch cluster name | opensearch |
containerPorts.restAPI | OpenSearch REST API port | 9200 |
containerPorts.transport | OpenSearch transport port | 9300 |
plugins | Comma-, semicolon-, or space-separated list of plugins to install at initialization | repository-s3,https://github.com/aiven/prometheus-exporter-plugin-for-opensearch/releases/download/2.15.0.0/prometheus-exporter-2.15.0.0.zip |
networkHost | Network interface to bind (e.g. "0.0.0.0", "::" [local, site]) | 0.0.0.0 |
config | Override OpenSearch configuration | {} |
allocationAwareness.enabled | Enable allocation awareness | false |
allocationAwareness.topologyKey | Node label used for topologyKey | topology.kubernetes.io/zone |
allocationAwareness.forceZones.enabled | Require that primary and replica shards are never allocated to the same zone | false |
allocationAwareness.forceZones.zones | To configure forced awareness, specify all the possible values for your zone attributes | [] |
extraConfig | Append extra configuration to the OpenSearch node configuration | {} |
extraVolumes | A list of volumes to be added to the pod | [] |
extraVolumeMounts | A list of volume mounts to be added to the pod | [] |
extraEnvVars | Array containing extra env vars to be added to all pods (evaluated as a template) | [] |
extraEnvVarsConfigMap | ConfigMap containing extra env vars to be added to all pods (evaluated as a template) | "" |
extraEnvVarsSecret | Secret containing extra env vars to be added to all pods (evaluated as a template) | "" |
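A minimal sketch of the security-related values above, assuming cert-manager is installed and a Root CA secret named `root-ca-tls` already exists in the namespace (the passwords are placeholders you should replace):

```shell
# Write the security overrides to a values file, then install/upgrade with it
cat > security-values.yaml <<'EOF'
security:
  opensearchPassword: ChangeMeAdmin123
  dashboardsPassword: ChangeMeDash123
  monitoringPassword: ChangeMeMon123
  tls:
    http:
      autoGenerated: true               # let cert-manager issue the HTTP certs
      existingRootCASecret: root-ca-tls # pre-created Root CA secret (assumption)
      algorithm: RSA
      size: 4096
    transport:
      autoGenerated: true
      existingRootCASecret: root-ca-tls
EOF
helm upgrade --install my-release -f security-values.yaml \
  oci://registry-1.docker.io/captnbp/opensearch
```

Alternatively, set `security.existingSecret` to reference a pre-created secret instead of putting passwords in the values file.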
Cluster Manager parameters

Name | Description | Value |
---|---|---|
cluster_manager.fullnameOverride | String to fully override opensearch.cluster_manager.fullname template | "" |
cluster_manager.replicaCount | Desired number of OpenSearch cluster_manager nodes. Consider using an odd number of cluster_manager nodes to prevent a "split brain" situation. See: https://opensearch.org/docs/latest/opensearch/cluster/ | 3 |
cluster_manager.updateStrategy.type | Update strategy for Master statefulset | RollingUpdate |
cluster_manager.hostAliases | Add deployment host aliases | [] |
cluster_manager.schedulerName | Name of the k8s scheduler (other than default) | "" |
cluster_manager.heapSize | Master-eligible node heap size | 128m |
cluster_manager.podAnnotations | Annotations for cluster_manager pods | {} |
cluster_manager.podLabels | Extra labels to add to Pod | {} |
cluster_manager.securityContext.enabled | Enable security context for cluster_manager pods | true |
cluster_manager.securityContext.fsGroup | Group ID for the container for cluster_manager pods | 1000 |
cluster_manager.securityContext.runAsUser | User ID for the container for cluster_manager pods | 1000 |
cluster_manager.podSecurityContext.enabled | Enable security context for cluster_manager pods | false |
cluster_manager.podSecurityContext.fsGroup | Group ID for the container for cluster_manager pods | 1000 |
cluster_manager.containerSecurityContext.enabled | Enable security context for the main container | true |
cluster_manager.containerSecurityContext.runAsUser | User ID for the main container | 1000 |
cluster_manager.containerSecurityContext.runAsNonRoot | Indicates that the container must run as a non-root user | true |
cluster_manager.containerSecurityContext.allowPrivilegeEscalation | Controls whether a process can gain more privileges than its parent process | false |
cluster_manager.podAffinityPreset | Master-eligible Pod affinity preset. Ignored if affinity is set. Allowed values: soft or hard | "" |
cluster_manager.podAntiAffinityPreset | Master-eligible Pod anti-affinity preset. Ignored if affinity is set. Allowed values: soft or hard | "" |
cluster_manager.nodeAffinityPreset.type | Master-eligible Node affinity preset type. Ignored if affinity is set. Allowed values: soft or hard | "" |
cluster_manager.nodeAffinityPreset.key | Master-eligible Node label key to match. Ignored if affinity is set. | "" |
cluster_manager.nodeAffinityPreset.values | Master-eligible Node label values to match. Ignored if affinity is set. | [] |
cluster_manager.affinity | Master-eligible Affinity for pod assignment | {} |
cluster_manager.priorityClassName | Master pods Priority Class Name | "" |
cluster_manager.nodeSelector | Master-eligible Node labels for pod assignment | {} |
cluster_manager.tolerations | Master-eligible Tolerations for pod assignment | [] |
cluster_manager.topologySpreadConstraints | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | [] |
cluster_manager.resources.limits | The resources limits for the container | {} |
cluster_manager.resources.requests | The requested resources for the container | {} |
cluster_manager.startupProbe.enabled | Enable/disable the startup probe (cluster_manager nodes pod) | false |
cluster_manager.startupProbe.initialDelaySeconds | Delay before startup probe is initiated (cluster_manager nodes pod) | 90 |
cluster_manager.startupProbe.periodSeconds | How often to perform the probe (cluster_manager nodes pod) | 10 |
cluster_manager.startupProbe.timeoutSeconds | When the probe times out (cluster_manager nodes pod) | 5 |
cluster_manager.startupProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed (cluster_manager nodes pod) | 1 |
cluster_manager.startupProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 5 |
cluster_manager.livenessProbe.enabled | Enable/disable the liveness probe (cluster_manager nodes pod) | false |
cluster_manager.livenessProbe.initialDelaySeconds | Delay before liveness probe is initiated (cluster_manager nodes pod) | 90 |
cluster_manager.livenessProbe.periodSeconds | How often to perform the probe (cluster_manager nodes pod) | 10 |
cluster_manager.livenessProbe.timeoutSeconds | When the probe times out (cluster_manager nodes pod) | 5 |
cluster_manager.livenessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed (cluster_manager nodes pod) | 1 |
cluster_manager.livenessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 5 |
cluster_manager.readinessProbe.enabled | Enable/disable the readiness probe (cluster_manager nodes pod) | true |
cluster_manager.readinessProbe.initialDelaySeconds | Delay before readiness probe is initiated (cluster_manager nodes pod) | 30 |
cluster_manager.readinessProbe.periodSeconds | How often to perform the probe (cluster_manager nodes pod) | 10 |
cluster_manager.readinessProbe.timeoutSeconds | When the probe times out (cluster_manager nodes pod) | 5 |
cluster_manager.readinessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed (cluster_manager nodes pod) | 1 |
cluster_manager.readinessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 5 |
cluster_manager.customStartupProbe | Override default startup probe | {} |
cluster_manager.customLivenessProbe | Override default liveness probe | {} |
cluster_manager.customReadinessProbe | Override default readiness probe | {} |
cluster_manager.initContainers | Extra init containers to add to the OpenSearch cluster_manager pod(s) | [] |
cluster_manager.sidecars | Extra sidecar containers to add to the OpenSearch cluster_manager pod(s) | [] |
cluster_manager.persistence.enabled | Enable persistence using a PersistentVolumeClaim | true |
cluster_manager.persistence.storageClass | Persistent Volume Storage Class | "" |
cluster_manager.persistence.existingClaim | Existing Persistent Volume Claim | "" |
cluster_manager.persistence.existingVolume | Existing Persistent Volume for use as volume match label selector to the volumeClaimTemplate. Ignored when cluster_manager.persistence.selector is set. | "" |
cluster_manager.persistence.selector | Configure custom selector for existing Persistent Volume. Overwrites cluster_manager.persistence.existingVolume | {} |
cluster_manager.persistence.annotations | Persistent Volume Claim annotations | {} |
cluster_manager.persistence.accessModes | Persistent Volume Access Modes | ["ReadWriteOnce"] |
cluster_manager.persistence.size | Persistent Volume Size | 8Gi |
cluster_manager.service.type | Kubernetes Service type (cluster_manager nodes) | ClusterIP |
cluster_manager.service.ports.restAPI | OpenSearch service REST API port | 9200 |
cluster_manager.service.ports.transport | OpenSearch service transport port | 9300 |
cluster_manager.service.nodePort | Kubernetes Service nodePort (cluster_manager nodes) | "" |
cluster_manager.service.annotations | Annotations for cluster_manager nodes service | {} |
cluster_manager.service.loadBalancerIP | loadBalancerIP if cluster_manager nodes service type is LoadBalancer | "" |
cluster_manager.service.ipFamilyPolicy | ipFamilyPolicy for cluster_manager nodes service | PreferDualStack |
cluster_manager.serviceAccount.create | Specifies whether a ServiceAccount should be created | true |
cluster_manager.serviceAccount.name | Name of the service account to use. If not set and create is true, a name is generated using the fullname template. | "" |
cluster_manager.serviceAccount.automountServiceAccountToken | Automount service account token for the server service account | false |
cluster_manager.serviceAccount.annotations | Annotations for service account. Evaluated as a template. Only used if create is true. | {} |
cluster_manager.autoscaling.enabled | Whether to enable horizontal pod autoscaling | false |
cluster_manager.autoscaling.minReplicas | Configure a minimum amount of pods | 3 |
cluster_manager.autoscaling.maxReplicas | Configure a maximum amount of pods | 11 |
cluster_manager.autoscaling.targetCPU | Define the CPU target to trigger the scaling actions (utilization percentage) | "" |
cluster_manager.autoscaling.targetMemory | Define the memory target to trigger the scaling actions (utilization percentage) | "" |
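For example, to keep an odd cluster_manager quorum while letting the horizontal pod autoscaler scale within explicit bounds (the CPU target and bounds are illustrative, not recommendations):

```shell
helm upgrade my-release oci://registry-1.docker.io/captnbp/opensearch \
  --set cluster_manager.replicaCount=3 \
  --set cluster_manager.autoscaling.enabled=true \
  --set cluster_manager.autoscaling.minReplicas=3 \
  --set cluster_manager.autoscaling.maxReplicas=5 \
  --set cluster_manager.autoscaling.targetCPU=75
```

Keeping minReplicas at an odd value matters here for the same "split brain" reason noted for cluster_manager.replicaCount.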
Coordinating parameters

Name | Description | Value |
---|---|---|
coordinating.fullnameOverride | String to fully override opensearch.coordinating.fullname template | "" |
coordinating.replicaCount | Desired number of OpenSearch coordinating nodes | 2 |
coordinating.hostAliases | Add deployment host aliases | [] |
coordinating.schedulerName | Name of the k8s scheduler (other than default) | "" |
coordinating.updateStrategy.type | Update strategy for Coordinating statefulset | RollingUpdate |
coordinating.heapSize | Coordinating node heap size | 128m |
coordinating.podAnnotations | Annotations for coordinating pods | {} |
coordinating.podLabels | Extra labels to add to Pod | {} |
coordinating.securityContext.enabled | Enable security context for coordinating pods | true |
coordinating.securityContext.fsGroup | Group ID for the container for coordinating pods | 1000 |
coordinating.securityContext.runAsUser | User ID for the container for coordinating pods | 1000 |
coordinating.podSecurityContext.enabled | Enable security context for coordinating pods | false |
coordinating.podSecurityContext.fsGroup | Group ID for the container for coordinating pods | 1000 |
coordinating.containerSecurityContext.enabled | Enable security context for the main container | true |
coordinating.containerSecurityContext.runAsUser | User ID for the main container | 1000 |
coordinating.containerSecurityContext.runAsNonRoot | Indicates that the container must run as a non-root user | true |
coordinating.containerSecurityContext.allowPrivilegeEscalation | Controls whether a process can gain more privileges than its parent process | false |
coordinating.podAffinityPreset | Coordinating Pod affinity preset. Ignored if affinity is set. Allowed values: soft or hard | "" |
coordinating.podAntiAffinityPreset | Coordinating Pod anti-affinity preset. Ignored if affinity is set. Allowed values: soft or hard | "" |
coordinating.nodeAffinityPreset.type | Coordinating Node affinity preset type. Ignored if affinity is set. Allowed values: soft or hard | "" |
coordinating.nodeAffinityPreset.key | Coordinating Node label key to match. Ignored if affinity is set. | "" |
coordinating.nodeAffinityPreset.values | Coordinating Node label values to match. Ignored if affinity is set. | [] |
coordinating.affinity | Coordinating Affinity for pod assignment | {} |
coordinating.priorityClassName | Coordinating pods Priority Class Name | "" |
coordinating.nodeSelector | Coordinating Node labels for pod assignment | {} |
coordinating.tolerations | Coordinating Tolerations for pod assignment | [] |
coordinating.topologySpreadConstraints | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | [] |
coordinating.resources.limits | The resources limits for the container | {} |
coordinating.resources.requests | The requested resources for the container | {} |
coordinating.startupProbe.enabled | Enable/disable the startup probe (coordinating nodes pod) | false |
coordinating.startupProbe.initialDelaySeconds | Delay before startup probe is initiated (coordinating nodes pod) | 90 |
coordinating.startupProbe.periodSeconds | How often to perform the probe (coordinating nodes pod) | 10 |
coordinating.startupProbe.timeoutSeconds | When the probe times out (coordinating nodes pod) | 5 |
coordinating.startupProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 5 |
coordinating.startupProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed (coordinating nodes pod) | 1 |
coordinating.livenessProbe.enabled | Enable/disable the liveness probe (coordinating nodes pod) | false |
coordinating.livenessProbe.initialDelaySeconds | Delay before liveness probe is initiated (coordinating nodes pod) | 90 |
coordinating.livenessProbe.periodSeconds | How often to perform the probe (coordinating nodes pod) | 10 |
coordinating.livenessProbe.timeoutSeconds | When the probe times out (coordinating nodes pod) | 5 |
coordinating.livenessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 5 |
coordinating.livenessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed (coordinating nodes pod) | 1 |
coordinating.readinessProbe.enabled | Enable/disable the readiness probe (coordinating nodes pod) | true |
coordinating.readinessProbe.initialDelaySeconds | Delay before readiness probe is initiated (coordinating nodes pod) | 30 |
coordinating.readinessProbe.periodSeconds | How often to perform the probe (coordinating nodes pod) | 10 |
coordinating.readinessProbe.timeoutSeconds | When the probe times out (coordinating nodes pod) | 5 |
coordinating.readinessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 5 |
coordinating.readinessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed (coordinating nodes pod) | 1 |
coordinating.customStartupProbe | Override default startup probe | {} |
coordinating.customLivenessProbe | Override default liveness probe | {} |
coordinating.customReadinessProbe | Override default readiness probe | {} |
coordinating.initContainers | Extra init containers to add to the OpenSearch coordinating pod(s) | [] |
coordinating.sidecars | Extra sidecar containers to add to the OpenSearch coordinating pod(s) | [] |
coordinating.service.type | Kubernetes Service type (coordinating nodes) | ClusterIP |
coordinating.service.ports.restAPI | OpenSearch service REST API port | 9200 |
coordinating.service.ports.transport | OpenSearch service transport port | 9300 |
coordinating.service.nodePort | Kubernetes Service nodePort (coordinating nodes) | "" |
coordinating.service.annotations | Annotations for coordinating nodes service | {} |
coordinating.service.loadBalancerIP | loadBalancerIP if coordinating nodes service type is LoadBalancer | "" |
coordinating.service.externalTrafficPolicy | Enable client source IP preservation with externalTrafficPolicy: Local | Cluster |
coordinating.service.ipFamilyPolicy | ipFamilyPolicy for coordinating nodes service | PreferDualStack |
coordinating.serviceAccount.create | Specifies whether a ServiceAccount should be created | true |
coordinating.serviceAccount.name | Name of the service account to use. If not set and create is true, a name is generated using the fullname template. | "" |
coordinating.serviceAccount.automountServiceAccountToken | Automount service account token for the server service account | false |
coordinating.serviceAccount.annotations | Annotations for service account. Evaluated as a template. Only used if create is true. | {} |
coordinating.autoscaling.enabled | Whether to enable horizontal pod autoscaling | false |
coordinating.autoscaling.minReplicas | Configure a minimum amount of pods | 3 |
coordinating.autoscaling.maxReplicas | Configure a maximum amount of pods | 11 |
coordinating.autoscaling.targetCPU | Define the CPU target to trigger the scaling actions (utilization percentage) | "" |
coordinating.autoscaling.targetMemory | Define the memory target to trigger the scaling actions (utilization percentage) | "" |
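Client traffic typically enters through the coordinating nodes; as an example of the service parameters above, the following exposes their service via a cloud load balancer and preserves client source IPs (whether a LoadBalancer is appropriate depends on your environment):

```shell
helm upgrade my-release oci://registry-1.docker.io/captnbp/opensearch \
  --set coordinating.service.type=LoadBalancer \
  --set coordinating.service.externalTrafficPolicy=Local
```

Note that externalTrafficPolicy: Local keeps client IPs visible to OpenSearch but routes traffic only to nodes running a coordinating pod.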
Data parameters

Name | Description | Value |
---|---|---|
data.fullnameOverride | String to fully override opensearch.data.fullname template | "" |
data.replicaCount | Desired number of OpenSearch data nodes | 2 |
data.hostAliases | Add deployment host aliases | [] |
data.schedulerName | Name of the k8s scheduler (other than default) | "" |
data.updateStrategy.type | Update strategy for Data statefulset | RollingUpdate |
data.updateStrategy.rollingUpdatePartition | Partition update strategy for Data statefulset | "" |
data.heapSize | Data node heap size | 1024m |
data.podAnnotations | Annotations for data pods | {} |
data.podLabels | Extra labels to add to Pod | {} |
data.securityContext.enabled | Enable security context for data pods | true |
data.securityContext.fsGroup | Group ID for the container for data pods | 1000 |
data.securityContext.runAsUser | User ID for the container for data pods | 1000 |
data.podSecurityContext.enabled | Enable security context for data pods | true |
data.podSecurityContext.fsGroup | Group ID for the container for data pods | 1000 |
data.containerSecurityContext.enabled | Enable security context for the main container | true |
data.containerSecurityContext.runAsUser | User ID for the main container | 1000 |
data.containerSecurityContext.runAsNonRoot | Indicates that the container must run as a non-root user | true |
data.containerSecurityContext.allowPrivilegeEscalation | Controls whether a process can gain more privileges than its parent process | false |
data.podAffinityPreset | Data Pod affinity preset. Ignored if affinity is set. Allowed values: soft or hard | "" |
data.podAntiAffinityPreset | Data Pod anti-affinity preset. Ignored if affinity is set. Allowed values: soft or hard | "" |
data.nodeAffinityPreset.type | Data Node affinity preset type. Ignored if affinity is set. Allowed values: soft or hard | "" |
data.nodeAffinityPreset.key | Data Node label key to match. Ignored if affinity is set. | "" |
data.nodeAffinityPreset.values | Data Node label values to match. Ignored if affinity is set. | [] |
data.affinity | Data Affinity for pod assignment | {} |
data.priorityClassName | Data pods Priority Class Name | "" |
data.nodeSelector | Data Node labels for pod assignment | {} |
data.tolerations | Data Tolerations for pod assignment | [] |
data.topologySpreadConstraints | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | [] |
data.resources.limits | The resources limits for the container | {} |
data.resources.requests | The requested resources for the container | {} |
data.startupProbe.enabled | Enable/disable the startup probe (data nodes pod) | false |
data.startupProbe.initialDelaySeconds | Delay before startup probe is initiated (data nodes pod) | 90 |
data.startupProbe.periodSeconds | How often to perform the probe (data nodes pod) | 10 |
data.startupProbe.timeoutSeconds | When the probe times out (data nodes pod) | 5 |
data.startupProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 5 |
data.startupProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed (data nodes pod) | 1 |
data.livenessProbe.enabled | Enable/disable the liveness probe (data nodes pod) | false |
data.livenessProbe.initialDelaySeconds | Delay before liveness probe is initiated (data nodes pod) | 90 |
data.livenessProbe.periodSeconds | How often to perform the probe (data nodes pod) | 10 |
data.livenessProbe.timeoutSeconds | When the probe times out (data nodes pod) | 5 |
data.livenessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 5 |
data.livenessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed (data nodes pod) | 1 |
data.readinessProbe.enabled | Enable/disable the readiness probe (data nodes pod) | true |
data.readinessProbe.initialDelaySeconds | Delay before readiness probe is initiated (data nodes pod) | 30 |
data.readinessProbe.periodSeconds | How often to perform the probe (data nodes pod) | 10 |
data.readinessProbe.timeoutSeconds | When the probe times out (data nodes pod) | 5 |
data.readinessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 5 |
data.readinessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed (data nodes pod) | 1 |
data.customStartupProbe | Override default startup probe | {} |
data.customLivenessProbe | Override default liveness probe | {} |
data.customReadinessProbe | Override default readiness probe | {} |
data.initContainers | Extra init containers to add to the OpenSearch data pod(s) | [] |
data.sidecars | Extra sidecar containers to add to the OpenSearch data pod(s) | [] |
data.service.annotations | Annotations for data-eligible nodes service | {} |
data.service.ipFamilyPolicy | ipFamilyPolicy for data nodes service | PreferDualStack |
data.persistence.enabled | Enable persistence using a PersistentVolumeClaim | true |
data.persistence.storageClass | Persistent Volume Storage Class | "" |
data.persistence.existingClaim | Existing Persistent Volume Claim | "" |
data.persistence.existingVolume | Existing Persistent Volume for use as volume match label selector to the volumeClaimTemplate. Ignored when data.persistence.selector is set. | "" |
data.persistence.selector | Configure custom selector for existing Persistent Volume. Overwrites data.persistence.existingVolume | {} |
data.persistence.annotations | Persistent Volume Claim annotations | {} |
data.persistence.accessModes | Persistent Volume Access Modes | ["ReadWriteOnce"] |
data.persistence.size | Persistent Volume Size | 8Gi |
data.serviceAccount.create | Specifies whether a ServiceAccount should be created | true |
data.serviceAccount.name | Name of the service account to use. If not set and create is true, a name is generated using the fullname template. | "" |
data.serviceAccount.automountServiceAccountToken | Automount service account token for the server service account | false |
data.serviceAccount.annotations | Annotations for service account. Evaluated as a template. Only used if create is true. | {} |
data.autoscaling.enabled | Whether to enable horizontal pod autoscaling | false |
data.autoscaling.minReplicas | Configure a minimum amount of pods | 3 |
data.autoscaling.maxReplicas | Configure a maximum amount of pods | 11 |
data.autoscaling.targetCPU | Define the CPU target to trigger the scaling actions (utilization percentage) | "" |
data.autoscaling.targetMemory | Define the memory target to trigger the scaling actions (utilization percentage) | "" |
| Name | Description | Value |
| ---- | ----------- | ----- |
| `ingest.enabled` | Enable ingest nodes | `true` |
| `ingest.fullnameOverride` | String to fully override `opensearch.ingest.fullname` template | `""` |
| `ingest.replicaCount` | Desired number of Opensearch ingest nodes | `2` |
| `ingest.updateStrategy.type` | Update strategy for the ingest statefulset | `RollingUpdate` |
| `ingest.heapSize` | Ingest node heap size | `128m` |
| `ingest.podAnnotations` | Annotations for ingest pods | `{}` |
| `ingest.hostAliases` | Add deployment host aliases | `[]` |
| `ingest.schedulerName` | Name of the k8s scheduler (other than default) | `""` |
| `ingest.podLabels` | Extra labels to add to Pod | `{}` |
| `ingest.securityContext.enabled` | Enable security context for ingest pods | `true` |
| `ingest.securityContext.fsGroup` | Group ID for the container for ingest pods | `1000` |
| `ingest.securityContext.runAsUser` | User ID for the container for ingest pods | `1000` |
| `ingest.podSecurityContext.enabled` | Enable security context for ingest pods | `true` |
| `ingest.podSecurityContext.fsGroup` | Group ID for the container for ingest pods | `1000` |
| `ingest.containerSecurityContext.enabled` | Enable security context for the main container | `true` |
| `ingest.containerSecurityContext.runAsUser` | User ID for the main container | `1000` |
| `ingest.containerSecurityContext.runAsNonRoot` | Indicates that the container must run as a non-root user | `true` |
| `ingest.containerSecurityContext.allowPrivilegeEscalation` | Controls whether a process can gain more privileges than its parent process | `false` |
| `ingest.podAffinityPreset` | Ingest Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `ingest.podAntiAffinityPreset` | Ingest Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `ingest.nodeAffinityPreset.type` | Ingest Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `ingest.nodeAffinityPreset.key` | Ingest Node label key to match. Ignored if `affinity` is set. | `""` |
| `ingest.nodeAffinityPreset.values` | Ingest Node label values to match. Ignored if `affinity` is set. | `[]` |
| `ingest.affinity` | Ingest Affinity for pod assignment | `{}` |
| `ingest.priorityClassName` | Ingest pods Priority Class Name | `""` |
| `ingest.nodeSelector` | Ingest Node labels for pod assignment | `{}` |
| `ingest.tolerations` | Ingest Tolerations for pod assignment | `[]` |
| `ingest.topologySpreadConstraints` | Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template | `[]` |
| `ingest.resources.limits` | The resources limits for the container | `{}` |
| `ingest.resources.requests` | The requested resources for the container | `{}` |
| `ingest.startupProbe.enabled` | Enable/disable the startup probe (ingest nodes pod) | `false` |
| `ingest.startupProbe.initialDelaySeconds` | Delay before startup probe is initiated (ingest nodes pod) | `90` |
| `ingest.startupProbe.periodSeconds` | How often to perform the probe (ingest nodes pod) | `10` |
| `ingest.startupProbe.timeoutSeconds` | When the probe times out (ingest nodes pod) | `5` |
| `ingest.startupProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded (ingest nodes pod) | `5` |
| `ingest.startupProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed (ingest nodes pod) | `1` |
| `ingest.livenessProbe.enabled` | Enable/disable the liveness probe (ingest nodes pod) | `false` |
| `ingest.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated (ingest nodes pod) | `90` |
| `ingest.livenessProbe.periodSeconds` | How often to perform the probe (ingest nodes pod) | `10` |
| `ingest.livenessProbe.timeoutSeconds` | When the probe times out (ingest nodes pod) | `5` |
| `ingest.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded (ingest nodes pod) | `5` |
| `ingest.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed (ingest nodes pod) | `1` |
| `ingest.readinessProbe.enabled` | Enable/disable the readiness probe (ingest nodes pod) | `true` |
| `ingest.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated (ingest nodes pod) | `30` |
| `ingest.readinessProbe.periodSeconds` | How often to perform the probe (ingest nodes pod) | `10` |
| `ingest.readinessProbe.timeoutSeconds` | When the probe times out (ingest nodes pod) | `5` |
| `ingest.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded (ingest nodes pod) | `5` |
| `ingest.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed (ingest nodes pod) | `1` |
| `ingest.customStartupProbe` | Override default startup probe | `{}` |
| `ingest.customLivenessProbe` | Override default liveness probe | `{}` |
| `ingest.customReadinessProbe` | Override default readiness probe | `{}` |
| `ingest.initContainers` | Extra init containers to add to the Opensearch ingest pod(s) | `[]` |
| `ingest.sidecars` | Extra sidecar containers to add to the Opensearch ingest pod(s) | `[]` |
| `ingest.service.type` | Kubernetes Service type (ingest nodes) | `LoadBalancer` |
| `ingest.service.ports.restAPI` | Opensearch service REST API port | `9200` |
| `ingest.service.ports.transport` | Opensearch service transport port | `9300` |
| `ingest.service.nodePorts.restAPI` | Node port for REST API | `""` |
| `ingest.service.nodePorts.transport` | Node port for transport | `""` |
| `ingest.service.annotations` | Annotations for ingest nodes service | `{}` |
| `ingest.service.loadBalancerIP` | loadBalancerIP if ingest nodes service type is `LoadBalancer` | `""` |
| `ingest.service.ipFamilyPolicy` | ipFamilyPolicy for ingest nodes service | `PreferDualStack` |
| `ingest.serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true` |
| `ingest.serviceAccount.name` | Name of the service account to use. If not set and `create` is `true`, a name is generated using the fullname template. | `""` |
| `ingest.serviceAccount.automountServiceAccountToken` | Automount service account token for the server service account | `false` |
| `ingest.serviceAccount.annotations` | Annotations for the service account. Evaluated as a template. Only used if `create` is `true`. | `{}` |
| `ingest.autoscaling.enabled` | Whether to enable horizontal pod autoscaling | `false` |
| `ingest.autoscaling.minReplicas` | Minimum number of pods | `3` |
| `ingest.autoscaling.maxReplicas` | Maximum number of pods | `11` |
| `ingest.autoscaling.targetCPU` | CPU target to trigger scaling actions (utilization percentage) | `""` |
| `ingest.autoscaling.targetMemory` | Memory target to trigger scaling actions (utilization percentage) | `""` |
| Name | Description | Value |
| ---- | ----------- | ----- |
| `metrics.enabled` | Enable Prometheus exporter | `false` |
| `metrics.serviceMonitor.enabled` | Create a ServiceMonitor resource for scraping metrics using the Prometheus Operator | `false` |
| `metrics.serviceMonitor.namespace` | Namespace in which Prometheus is running | `""` |
| `metrics.serviceMonitor.jobLabel` | The name of the label on the target service to use as the job name in Prometheus | `""` |
| `metrics.serviceMonitor.interval` | Interval at which metrics should be scraped | `""` |
| `metrics.serviceMonitor.scrapeTimeout` | Timeout after which the scrape is ended | `""` |
| `metrics.serviceMonitor.relabelings` | RelabelConfigs to apply to samples before scraping | `[]` |
| `metrics.serviceMonitor.metricRelabelings` | MetricRelabelConfigs to apply to samples before ingestion | `[]` |
| `metrics.serviceMonitor.selector` | ServiceMonitor selector labels | `{}` |
| `metrics.serviceMonitor.labels` | Extra labels for the ServiceMonitor | `{}` |
| `metrics.serviceMonitor.honorLabels` | honorLabels chooses the metric's labels on collisions with target labels | `false` |
| `metrics.prometheusRule.enabled` | Create a Prometheus Operator PrometheusRule (also requires `metrics.enabled` to be `true` and `metrics.prometheusRule.rules`) | `false` |
| `metrics.prometheusRule.namespace` | Namespace for the PrometheusRule resource (defaults to the Release Namespace) | `""` |
| `metrics.prometheusRule.additionalLabels` | Additional labels so the PrometheusRule will be discovered by Prometheus | `{}` |
| `metrics.prometheusRule.rules` | Prometheus Rule definitions | `[]` |
| Name | Description | Value |
| ---- | ----------- | ----- |
| `sysctlImage.enabled` | Enable kernel settings modifier image | `true` |
| `sysctlImage.registry` | Kernel settings modifier image registry | `docker.io` |
| `sysctlImage.repository` | Kernel settings modifier image repository | `bitnami/bitnami-shell` |
| `sysctlImage.tag` | Kernel settings modifier image tag | `10-debian-10-r328` |
| `sysctlImage.pullPolicy` | Kernel settings modifier image pull policy | `IfNotPresent` |
| `sysctlImage.pullSecrets` | Kernel settings modifier image pull secrets | `[]` |
| `sysctlImage.resources.limits` | The resources limits for the container | `{}` |
| `sysctlImage.resources.requests` | The requested resources for the container | `{}` |
| Name | Description | Value |
| ---- | ----------- | ----- |
| `volumePermissions.enabled` | Enable init container that changes volume permissions in the data directory (for cases where the default k8s `runAsUser` and `fsUser` values do not work) | `false` |
| `volumePermissions.image.registry` | Init container volume-permissions image registry | `docker.io` |
| `volumePermissions.image.repository` | Init container volume-permissions image name | `bitnami/bitnami-shell` |
| `volumePermissions.image.tag` | Init container volume-permissions image tag | `10-debian-10-r328` |
| `volumePermissions.image.pullPolicy` | Init container volume-permissions image pull policy | `IfNotPresent` |
| `volumePermissions.image.pullSecrets` | Init container volume-permissions image pull secrets | `[]` |
| `volumePermissions.resources.limits` | The resources limits for the container | `{}` |
| `volumePermissions.resources.requests` | The requested resources for the container | `{}` |
| Name | Description | Value |
| ---- | ----------- | ----- |
| `ingress.enabled` | Enable ingress controller resource | `false` |
| `ingress.pathType` | Ingress Path type | `ImplementationSpecific` |
| `ingress.apiVersion` | Override API Version (automatically detected if not set) | `""` |
| `ingress.hostname` | Default host for the ingress resource. If specified as `"*"`, no host rule is configured | `opensearch.local` |
| `ingress.path` | The Path to Dashboard. You may need to set this to `'/*'` in order to use this with ALB ingress controllers. | `/` |
| `ingress.annotations` | Additional annotations for the Ingress resource. To enable certificate autogeneration, place your cert-manager annotations here. | `{}` |
| `ingress.tls` | Enable TLS configuration for the hostname defined at the `ingress.hostname` parameter | `false` |
| `ingress.selfSigned` | Create a TLS secret for this ingress record using self-signed certificates generated by Helm | `false` |
| `ingress.extraHosts` | The list of additional hostnames to be covered by this ingress record | `[]` |
| `ingress.extraPaths` | Additional arbitrary path/backend objects | `[]` |
| `ingress.extraTls` | The TLS configuration for additional hostnames to be covered by this ingress record | `[]` |
| `ingress.secrets` | If you're providing your own certificates, use this to add the certificates as secrets | `[]` |
| `ingress.ingressClassName` | IngressClass that will be used to implement the Ingress (Kubernetes 1.18+) | `""` |
| `ingress.extraRules` | The list of additional rules to be added to this ingress record. Evaluated as a template | `[]` |
| Name | Description | Value |
| ---- | ----------- | ----- |
| `securityadmin.enabled` | Enable Opensearch SecurityAdmin hook job | `true` |
| `securityadmin.schedulerName` | Name of the k8s scheduler (other than default) | `""` |
| `securityadmin.podAnnotations` | Annotations to add to the pod | `{}` |
| `securityadmin.podLabels` | Extra labels to add to Pod | `{}` |
| `securityadmin.initContainers` | Extra init containers to add to the SecurityAdmin pod(s) | `[]` |
| `securityadmin.sidecars` | Extra sidecar containers to add to the SecurityAdmin pod(s) | `[]` |
| `securityadmin.affinity` | SecurityAdmin Affinity for pod assignment | `{}` |
| `securityadmin.nodeSelector` | SecurityAdmin Node labels for pod assignment | `{}` |
| `securityadmin.tolerations` | SecurityAdmin Tolerations for pod assignment | `[]` |
| `securityadmin.serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true` |
| `securityadmin.serviceAccount.name` | Name of the service account to use. If not set and `create` is `true`, a name is generated using the fullname template. | `""` |
| `securityadmin.serviceAccount.automountServiceAccountToken` | Automount service account token for the server service account | `false` |
| `securityadmin.serviceAccount.annotations` | Annotations for the service account. Evaluated as a template. Only used if `create` is `true`. | `{}` |
| `securityadmin.securityContext.enabled` | Enable security context for securityadmin pods | `true` |
| `securityadmin.securityContext.fsGroup` | Group ID for the container for securityadmin pods | `1000` |
| `securityadmin.securityContext.runAsUser` | User ID for the container for securityadmin pods | `1000` |
| `securityadmin.podSecurityContext.enabled` | Enable security context for securityadmin pods | `false` |
| `securityadmin.podSecurityContext.fsGroup` | Group ID for the container for securityadmin pods | `1000` |
| `securityadmin.containerSecurityContext.enabled` | Enable security context for the main container | `true` |
| `securityadmin.containerSecurityContext.runAsUser` | User ID for the main container | `1000` |
| `securityadmin.containerSecurityContext.runAsNonRoot` | Indicates that the container must run as a non-root user | `true` |
| `securityadmin.containerSecurityContext.allowPrivilegeEscalation` | Controls whether a process can gain more privileges than its parent process | `false` |
| `securityadmin.resources.limits` | The resources limits for the container | `{}` |
| `securityadmin.resources.requests` | The requested resources for the container | `{}` |
| `securityadmin.priorityClassName` | SecurityAdmin Pods Priority Class Name | `""` |
| `securityadmin.extraVolumes` | Extra volumes | `[]` |
| `securityadmin.extraVolumeMounts` | Mount extra volume(s) | `[]` |
| `securityadmin.securityConfig.path` | Base path for security YAML files | `/usr/share/opensearch/plugins/opensearch-security/securityconfig` |
| `securityadmin.securityConfig.internal_users` | This file contains any initial users that you want to add to the security plugin's internal user database. | `{}` |
| `securityadmin.securityConfig.allowlist` | Use allowlist.yml to add endpoints and HTTP requests to a list of allowed endpoints and requests. If enabled, all users except the super admin are allowed access only to the specified endpoints and HTTP requests; all other HTTP requests associated with the endpoint are denied. For example, if `GET _cluster/settings` is added to the allow list, users cannot submit `PUT` requests to `_cluster/settings` to update cluster settings. | `{}` |
| `securityadmin.securityConfig.config.dynamic.kibana.multitenancy_enabled` | Enable or disable multi-tenancy | `true` |
| `securityadmin.securityConfig.config.dynamic.kibana.server_username` | Must match the name of the OpenSearch Dashboards server user from `opensearch_dashboards.yml` | `dashboards` |
| `securityadmin.securityConfig.config.dynamic.kibana.index` | Must match the name of the OpenSearch Dashboards index from `opensearch_dashboards.yml` | `.opensearch_dashboards` |
| `securityadmin.securityConfig.config.dynamic.authc.basic_internal_auth_domain.http_enabled` | | `true` |
| `securityadmin.securityConfig.config.dynamic.authc.basic_internal_auth_domain.transport_enabled` | | `true` |
| `securityadmin.securityConfig.config.dynamic.authc.basic_internal_auth_domain.order` | | `0` |
| `securityadmin.securityConfig.config.dynamic.authc.basic_internal_auth_domain.http_authenticator.type` | HTTP basic authentication. No additional configuration is needed. | `basic` |
| `securityadmin.securityConfig.config.dynamic.authc.basic_internal_auth_domain.http_authenticator.challenge` | In most cases, you set the challenge flag to `true`. The flag defines the behavior of the security plugin if the `Authorization` field in the HTTP header is not set. | `false` |
| `securityadmin.securityConfig.config.dynamic.authc.basic_internal_auth_domain.authentication_backend.type` | Use the users and roles defined in internal_users.yml for authentication. | `internal` |
| `securityadmin.securityConfig.config.dynamic.authc.clientcert_auth_domain.http_enabled` | | `true` |
| `securityadmin.securityConfig.config.dynamic.authc.clientcert_auth_domain.transport_enabled` | | `true` |
| `securityadmin.securityConfig.config.dynamic.authc.clientcert_auth_domain.order` | | `1` |
| `securityadmin.securityConfig.config.dynamic.authc.clientcert_auth_domain.http_authenticator.type` | TLS client cert authentication. No additional configuration is needed. | `clientcert` |
| `securityadmin.securityConfig.config.dynamic.authc.clientcert_auth_domain.http_authenticator.config.username_attribute` | TLS cert username attribute for role matching. If omitted, the DN becomes the username. | `cn` |
| `securityadmin.securityConfig.config.dynamic.authc.clientcert_auth_domain.http_authenticator.challenge` | In most cases, you set the challenge flag to `true`. The flag defines the behavior of the security plugin if the `Authorization` field in the HTTP header is not set. | `false` |
| `securityadmin.securityConfig.config.dynamic.authc.clientcert_auth_domain.authentication_backend.type` | | `noop` |
| `securityadmin.securityConfig.roles` | This file contains any initial roles that you want to add to the security plugin. Aside from some metadata, the default file is empty, because the security plugin has a number of static roles that it adds automatically. | `{}` |
| `securityadmin.securityConfig.roles_mapping` | This file contains any initial role mappings | `{}` |
| `securityadmin.securityConfig.action_groups` | This file contains any initial action groups that you want to add to the security plugin. | `{}` |
| `securityadmin.securityConfig.tenants` | Use this file to specify and add any number of OpenSearch Dashboards tenants to your OpenSearch cluster. | `{}` |
| `securityadmin.securityConfig.nodes_dn` | This file lets you add certificates' distinguished names (DNs) to an allow list to enable communication between any number of nodes and/or clusters. | `{}` |
| Name | Description | Value |
| ---- | ----------- | ----- |
| `s3Snapshots.enabled` | Enable Opensearch S3 snapshots | `false` |
| `s3Snapshots.config.s3.client.default.access_key` | S3 Access key | `""` |
| `s3Snapshots.config.s3.client.default.secret_key` | S3 Secret key | `""` |
| `s3Snapshots.config.s3.client.default.existingSecret` | Name of an existing secret resource containing the S3 access and secret key | `""` |
| `s3Snapshots.config.s3.client.default.existingSecretAccessKey` | Name of an existing secret key containing the S3 access key | `access_key` |
| `s3Snapshots.config.s3.client.default.existingSecretSecretKey` | Name of an existing secret key containing the S3 secret key | `secret_key` |
| `s3Snapshots.config.s3.client.default.endpoint` | S3 endpoint | `s3.amazonaws.com` |
| `s3Snapshots.config.s3.client.default.bucket` | S3 bucket | `opensearch` |
| `s3Snapshots.config.s3.client.default.base_path` | S3 path in bucket | `snapshots` |
| `s3Snapshots.config.s3.client.default.max_retries` | Number of retries if a request fails | `3` |
| `s3Snapshots.config.s3.client.default.path_style_access` | Whether to use the deprecated path-style bucket URLs | `false` |
| `s3Snapshots.config.s3.client.default.protocol` | `http` or `https` | `https` |
| `s3Snapshots.config.s3.client.default.read_timeout` | The S3 connection timeout | `50s` |
| `s3Snapshots.config.s3.client.default.use_throttle_retries` | Whether the client should wait a progressively longer amount of time (exponential backoff) between each successive retry | `true` |
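As a sketch of how these parameters fit together, the following writes a values file that enables S3 snapshots using a pre-existing Kubernetes secret for the credentials. The secret name `my-s3-credentials` is illustrative; the bucket, endpoint, and path values are the chart defaults shown above.

```shell
# Hypothetical values sketch: enable S3 snapshots with credentials taken
# from an existing secret (secret name is an assumption, not a chart default).
cat > s3-snapshots-values.yaml <<'EOF'
s3Snapshots:
  enabled: true
  config:
    s3:
      client:
        default:
          existingSecret: my-s3-credentials  # assumed pre-created secret
          endpoint: s3.amazonaws.com
          bucket: opensearch
          base_path: snapshots
          protocol: https
EOF
# Apply with:
# helm upgrade --install my-release \
#   oci://registry-1.docker.io/captnbp/opensearch -f s3-snapshots-values.yaml
```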
| Name | Description | Value |
| ---- | ----------- | ----- |
| `extraSecretsKeystore.existingSecret` | Name of an existing secret containing the entries to add to the Opensearch keystore | `""` |
| `extraSecretsKeystore.secrets` | Dict of K/V entries to add to the Opensearch keystore | `{}` |
To modify the Opensearch version used in this chart, specify a valid image tag using the `image.tag` parameter. For example, `image.tag=X.Y.Z`. This approach also applies to other images, such as exporters.
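The parameter can be passed directly on the command line; `X.Y.Z` below stands for any valid published tag:

```shell
# Pin the Opensearch image version at install time
# (substitute a valid published tag for X.Y.Z).
helm install my-release oci://registry-1.docker.io/captnbp/opensearch \
  --set image.tag=X.Y.Z
```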
Currently, Opensearch requires some changes to the kernel of the host machine to work as expected. If those values are not set in the underlying operating system, the Opensearch containers fail to boot with ERROR messages. More information about these requirements can be found in the links below.
This chart uses a privileged initContainer to change those settings in the kernel by running: `sysctl -w vm.max_map_count=262144 && sysctl -w fs.file-max=65536`.
You can disable the initContainer using the `sysctlImage.enabled=false` parameter.
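If you disable the initContainer, the same kernel settings must already be in place on every Kubernetes node that can schedule Opensearch pods. A sketch, run as root on each node (the sysctl.d file name is illustrative):

```shell
# Apply the kernel settings the chart's initContainer would otherwise set.
sudo sysctl -w vm.max_map_count=262144
sudo sysctl -w fs.file-max=65536
# Persist them across reboots (file name is an arbitrary choice):
printf 'vm.max_map_count=262144\nfs.file-max=65536\n' | \
  sudo tee /etc/sysctl.d/99-opensearch.conf
```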
This Opensearch chart includes Opensearch Dashboards as a subchart; you can enable it by setting the `global.kibanaEnabled=true` parameter.
To see the notes with operational instructions from the Opensearch Dashboards chart, add the `--render-subchart-notes` flag to your `helm install` command; that way, both the Opensearch Dashboards and the Opensearch notes are shown in your terminal.
When enabling the bundled Dashboards subchart, there are a few gotchas that you should be aware of, listed below.
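For example, a single install command that enables the subchart and renders its notes:

```shell
# Deploy Opensearch together with the bundled Dashboards subchart,
# printing the notes from both charts.
helm install my-release oci://registry-1.docker.io/captnbp/opensearch \
  --set global.kibanaEnabled=true \
  --render-subchart-notes
```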
TLS is enabled by default for the transport and REST layers.
This chart relies by default on Cert-Manager with the CA Issuer.
If you do not provide your own issuers, the chart will create a self-signed issuer to issue 2 CAs:
- Transport CA
- HTTP CA
As described in the official documentation, it is necessary to register a snapshot repository before you can perform snapshot and restore operations.
This chart allows you to configure Opensearch to use a shared file system to store snapshots. To do so, mount a RWX volume on every Opensearch node and set the `snapshotRepoPath` parameter to the path where the volume is mounted. The example below shows the values to set when using an NFS Persistent Volume:
extraVolumes:
  - name: snapshot-repository
    nfs:
      server: nfs.example.com # Please change this to your NFS server
      path: /share1
extraVolumeMounts:
  - name: snapshot-repository
    mountPath: /snapshots
snapshotRepoPath: "/snapshots"
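Once the cluster is up with the shared volume mounted, the repository still has to be registered through the snapshot REST API. A hedged sketch: the host, port, and credentials depend on your deployment (admin basic-auth credentials and `my_fs_repo` are illustrative, not chart defaults):

```shell
# Register the shared file system as an "fs" snapshot repository.
# Adjust the URL and credentials to match your cluster.
curl -k -u admin:admin -X PUT "https://localhost:9200/_snapshot/my_fs_repo" \
  -H 'Content-Type: application/json' \
  -d '{"type": "fs", "settings": {"location": "/snapshots"}}'
```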
If you need additional containers to run within the same pod as the Opensearch components (e.g. an additional metrics or logging exporter), you can do so via the `XXX.sidecars` parameter(s), where `XXX` is a placeholder you replace with the actual component. Simply define your container according to the Kubernetes container spec.
sidecars:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234
Similarly, you can add extra init containers using the `initContainers` parameter.
initContainers:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234
This chart allows you to set your custom affinity using the `XXX.affinity` parameter(s). Find more information about Pod affinity in the Kubernetes documentation.
As an alternative, you can use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available in the bitnami/common chart. To do so, set the `XXX.podAffinityPreset`, `XXX.podAntiAffinityPreset`, or `XXX.nodeAffinityPreset` parameters.
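For instance, the presets can be combined in a values file; the label key `kubernetes.io/arch` below is an illustrative choice, not a chart default:

```shell
# Hypothetical sketch: hard anti-affinity between ingest pods, plus a soft
# node-affinity preference for amd64 nodes.
cat > affinity-values.yaml <<'EOF'
ingest:
  podAntiAffinityPreset: hard
  nodeAffinityPreset:
    type: soft
    key: kubernetes.io/arch  # illustrative node label key
    values:
      - amd64
EOF
# Apply with:
# helm upgrade --install my-release \
#   oci://registry-1.docker.io/captnbp/opensearch -f affinity-values.yaml
```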
The Opensearch image stores the Opensearch data at the `/usr/share/opensearch/data` path of the container.
By default, the chart mounts a Persistent Volume at this location. The volume is created using dynamic volume provisioning. See the Parameters section to configure the PVC.
As the image runs as non-root by default, it is necessary to adjust the ownership of the persistent volume so that the container can write data to it.
By default, the chart is configured to use a Kubernetes Security Context to automatically change the ownership of the volume. However, this feature does not work in all Kubernetes distributions. As an alternative, this chart supports using an initContainer to change the ownership of the volume before mounting it in its final destination.
You can enable this initContainer by setting `volumePermissions.enabled` to `true`.
This chart uses the Elasticsearch Prometheus exporter (https://github.com/prometheus-community/elasticsearch_exporter), which may have issues collecting some Opensearch metrics.
You can use the following Grafana dashboard: https://grafana.com/grafana/dashboards/2322
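To enable the exporter, and a ServiceMonitor if the Prometheus Operator is installed, set the corresponding parameters from the metrics table above:

```shell
# Enable the Prometheus exporter and a ServiceMonitor to scrape it
# (the ServiceMonitor requires the Prometheus Operator CRDs).
helm install my-release oci://registry-1.docker.io/captnbp/opensearch \
  --set metrics.enabled=true \
  --set metrics.serviceMonitor.enabled=true
```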
Find more information about how to deal with common errors related to Opensearch in this troubleshooting guide.
MIT License
Copyright (c) 2022 Benoît Pourre
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.