Compare commits


3 Commits

SHA1 Message Date
00dfee338c externalise config map name 2023-01-31 11:04:24 +00:00
35fb1119fb trunc to 63 characters 2023-01-27 12:26:36 +00:00
8396c0de20 Don't use mnt for nexus-data 2023-01-27 11:45:26 +00:00
5 changed files with 92 additions and 18 deletions


@@ -14,11 +14,13 @@
 -->
 # ⚠️ Archive Notice
-As of October 24, 2023, we will be making the following changes to this repository and the available helm charts:
-1. We will no longer update or support the [Helm Chart for Single-Instance Kubernetes Deployments using OrientDB](https://github.com/sonatype/nxrm3-helm-repository/tree/main/nexus-repository-manager). This is because deploying Nexus Repository in containers with an embedded database has been known to corrupt the database under some circumstances. We strongly recommend that you use an external PostgreSQL database for Kubernetes deployments.
-2. There is not nor do we anticipate their being a Helm chart available for single-instance Kubernetes deployments using PostgreSQL.
-3. The only Helm chart we will support is the [Helm Chart for Resilient AWS deployments using EKS](https://github.com/sonatype/nxrm3-helm-repository/tree/main/nxrm-aws-resiliency), which allows you to deploy Nexus Repository in an EKS cluster as described in our [resilient deployment options documentation](https://help.sonatype.com/repomanager3/planning-your-implementation/resiliency-and-high-availability/single-node-cloud-resilient-deployment-example-using-aws).
+As of October 24, 2023, we will no longer update or support the [Single-Instance OSS/Pro Helm Chart](https://github.com/sonatype/nxrm3-helm-repository/tree/main/nexus-repository-manager).
+Deploying Nexus Repository in containers with an embedded database has been known to corrupt the database under some circumstances. We strongly recommend that you use an external PostgreSQL database for Kubernetes deployments.
+If you are deploying in AWS, you can use our [AWS Helm chart](https://github.com/sonatype/nxrm3-helm-repository/tree/main/nxrm-aws-resiliency) to deploy Nexus Repository in an EKS cluster.
+We do not currently provide Helm charts for on-premises deployments using PostgreSQL. For those wishing to deploy on premises, see our [Single Data Center On-Premises Deployment Example Using Kubernetes documentation](https://help.sonatype.com/repomanager3/planning-your-implementation/resiliency-and-high-availability/single-data-center-on-premises-deployment-example-using-kubernetes) for information and sample YAMLs to help you plan a resilient on-premises deployment.
 ## Helm Charts for Sonatype Nexus Repository Manager 3
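The notice above steers Kubernetes users toward an external PostgreSQL database. As a minimal sketch (not part of this diff), external PostgreSQL is typically wired in through Nexus Repository's documented datastore environment variables on the container; the host, database, and Secret names below are illustrative:

env:
  - name: NEXUS_DATASTORE_NEXUS_JDBCURL
    value: jdbc:postgresql://my-postgres.example.internal:5432/nexus  # illustrative host/database
  - name: NEXUS_DATASTORE_NEXUS_USERNAME
    value: nexus
  - name: NEXUS_DATASTORE_NEXUS_PASSWORD
    valueFrom:
      secretKeyRef:
        name: nexus-db-secret  # illustrative Secret name
        key: password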


@@ -12,14 +12,15 @@
 Eclipse Foundation. All other trademarks are the property of their respective owners.
 -->
 # ⚠️ Archive Notice
-As of October 24, 2023, we will be making the following changes to this repository and the available helm charts:
-1. We will no longer update or support this Helm chart for Kubernetes deployments using OrientDB. This is because deploying Nexus Repository in containers with an embedded database has been known to corrupt the database under some circumstances. We strongly recommend that you use an external PostgreSQL database for Kubernetes deployments.
-2. There is not nor do we anticipate their being a Helm chart available for single-instance Kubernetes deployments using PostgreSQL.
-3. The only Helm chart we will support is the [Helm Chart for Resilient AWS deployments using EKS](https://github.com/sonatype/nxrm3-helm-repository/tree/main/nxrm-aws-resiliency), which allows you to deploy Nexus Repository in an EKS cluster as described in our [resilient deployment options documentation](https://help.sonatype.com/repomanager3/planning-your-implementation/resiliency-and-high-availability/single-node-cloud-resilient-deployment-example-using-aws).
+As of October 24, 2023, we will no longer update or support this Helm chart.
+Deploying Nexus Repository in containers with an embedded database has been known to corrupt the database under some circumstances. We strongly recommend that you use an external PostgreSQL database for Kubernetes deployments.
+If you are deploying in AWS, you can use our [AWS Helm chart](https://github.com/sonatype/nxrm3-helm-repository/tree/main/nxrm-aws-resiliency) to deploy Nexus Repository in an EKS cluster.
+We do not currently provide Helm charts for on-premises deployments using PostgreSQL. For those wishing to deploy on premises, see our [Single Data Center On-Premises Deployment Example Using Kubernetes documentation](https://help.sonatype.com/repomanager3/planning-your-implementation/resiliency-and-high-availability/single-data-center-on-premises-deployment-example-using-kubernetes) for information and sample YAMLs to help you plan a resilient on-premises deployment.
 # Nexus Repository


@@ -0,0 +1,11 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: {{ .Values.workdir.configmap.name }}
+  namespace: {{ .Values.namespaces.nexusNs }}
+data:
+  create-nexus-work-dir.sh: |
+    #!/bin/bash
+    # Make Nexus Repository Manager work directory
+    mkdir -p /nexus-repo-mgr-work-dir/work
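For reference, with the workdir values introduced later in this change, the template above would render roughly as follows; this is a sketch rather than captured helm template output, and the namespace value is illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: create-nexus-workdir-config
  namespace: nexusrepo  # illustrative; comes from .Values.namespaces.nexusNs
data:
  create-nexus-work-dir.sh: |
    #!/bin/bash
    # Make Nexus Repository Manager work directory
    mkdir -p /nexus-repo-mgr-work-dir/work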


@@ -0,0 +1,51 @@
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: {{ .Values.workdir.daemonset.name }}
+  namespace: {{ .Values.namespaces.nexusNs }}
+spec:
+  selector:
+    matchLabels:
+      job: dircreator
+  template:
+    metadata:
+      labels:
+        job: dircreator
+    spec:
+      hostPID: true
+      restartPolicy: Always
+      initContainers:
+        # Copy file for creating nexus work directory over and execute it on host
+        - name: create-nexus-work-dir
+          image: ubuntu:23.04
+          command: [/bin/sh]
+          args:
+            - -c
+            - >-
+              cp /tmp/create-nexus-work-dir.sh /host-dir &&
+              /usr/bin/nsenter -m/proc/1/ns/mnt -- chmod u+x /tmp/install/create-nexus-work-dir.sh &&
+              /usr/bin/nsenter -m/proc/1/ns/mnt /tmp/install/create-nexus-work-dir.sh
+          securityContext:
+            privileged: true
+          volumeMounts:
+            - name: create-nexus-work-dir-script
+              mountPath: /tmp
+            - name: host-mnt
+              mountPath: /host-dir
+      containers:
+        - name: directory-creator
+          image: busybox:1.33.1
+          command: ["/bin/sh"]
+          args:
+            - -c
+            - >-
+              tail -f /dev/null
+          securityContext:
+            privileged: true
+      volumes:
+        - name: create-nexus-work-dir-script
+          configMap:
+            name: {{ .Values.workdir.configmap.name }}
+        - name: host-mnt
+          hostPath:
+            path: /tmp/install
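The init container relies on hostPID: true and privileged mode to reach the node's own mount namespace, so the work directory is created on the host filesystem rather than inside the container, matching the new pv.path default. Unrolled, its one-liner amounts to the following (a paraphrase of the commands above; /tmp/install is the hostPath mount shown in the manifest):

# copy the script from the ConfigMap mount into the host's /tmp/install via /host-dir
cp /tmp/create-nexus-work-dir.sh /host-dir
# enter the host's mount namespace (PID 1 is the host's init because of hostPID: true)
/usr/bin/nsenter -m/proc/1/ns/mnt -- chmod u+x /tmp/install/create-nexus-work-dir.sh
# run the script against the host filesystem, creating /nexus-repo-mgr-work-dir/work
/usr/bin/nsenter -m/proc/1/ns/mnt /tmp/install/create-nexus-work-dir.sh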


@@ -12,6 +12,7 @@ deployment:
   clusterName: nxrm-nexus
   logsRegion: us-east-1
   fluentBitVersion: 2.28.0
+  replicaCount: 1
   initContainer:
     image:
       repository: busybox
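The new replicaCount value presumably feeds the Deployment's replica setting; a hedged sketch of how such a value is typically consumed in the chart's deployment template (the template itself is not part of this diff):

spec:
  replicas: {{ .Values.deployment.replicaCount }}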
@@ -19,7 +20,7 @@
   container:
     image:
       repository: sonatype/nexus3
-      tag: 3.41.1
+      tag: 3.45.1
    containerPort: 8081
    pullPolicy: IfNotPresent
    env:
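To pick up the bumped image without editing the chart, the tag can also be overridden at install time; the release name, chart path, and namespace below are illustrative:

helm upgrade --install nxrm ./nxrm-aws-resiliency \
  --namespace nexusrepo \
  --set deployment.container.image.tag=3.45.1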
@@ -48,27 +49,35 @@ ingress:
   #host: "example.com" #host to apply this ingress rule to. Uncomment this in your values.yaml and set it as you wish
   annotations:
     kubernetes.io/ingress.class: alb
+    alb.ingress.kubernetes.io/healthcheck-path: /service/rest/v1/status
     alb.ingress.kubernetes.io/scheme: internal # scheme
     alb.ingress.kubernetes.io/subnets: subnet-1,subnet-2 #comma separated list of subnet ids
-    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
-    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:0000000000000:certificate/00000000-1111-2222-3333-444444444444 # The AWS Certificate Manager ARN for your HTTPS certificate
+    #alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]' uncomment for https
+    #alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:0000000000000:certificate/00000000-1111-2222-3333-444444444444 # Uncomment for https. The AWS Certificate Manager ARN for your HTTPS certificate
 dockerIngress: #Ingress for Docker Connector - comment out if you don't use docker repositories
   annotations:
     kubernetes.io/ingress.class: alb # comment out if you don't use docker repositories
     alb.ingress.kubernetes.io/scheme: internal # scheme comment out if you don't use docker repositories
     alb.ingress.kubernetes.io/subnets: subnet-1,subnet-2 #comma separated list of subnet ids, comment out if you don't use docker repositories
-    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]' #comment out if you don't use docker repositories
-    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:0000000000000:certificate/00000000-1111-2222-3333-444444444444 # Comment out if you don't use docker repositories - The AWS Certificate Manager ARN for your HTTPS certificate
-    external-dns.alpha.kubernetes.io/hostname: dockerrepo1.example.com, dockerrepo2.example.com, dockerrepo3.example.com # Add more docker subdomains using dockerrepoName.example.com othereise comment out if you don't use docker repositories
+    # alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]' #uncomment if you use docker repositories
+    # alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:0000000000000:certificate/00000000-1111-2222-3333-444444444444 # Uncomment if you use docker repositories - The AWS Certificate Manager ARN for your HTTPS certificate
+    # external-dns.alpha.kubernetes.io/hostname: dockerrepo1.example.com, dockerrepo2.example.com, dockerrepo3.example.com # Add more docker subdomains using dockerrepoName.example.com othereise comment out if you don't use docker repositories
+workdir:
+  configmap:
+    name: create-nexus-workdir-config
+  daemonset:
+    name: create-nexus-work-dir
+storageClass:
+  iopsPerGB: "10" #Note: aws plugin multiplies this by the size of the requested volumne to compute IOPS of the volumne and caps it a 20, 000 IOPS
 pv:
   storage: 120Gi
   volumeMode: Filesystem
   accessModes: ReadWriteOnce
   reclaimPolicy: Retain
-  path: /mnt
+  path: /nexus-repo-mgr-work-dir/work
   zones:
-    zone1: us-east-1a
-    zone2: us-east-1b
+    - us-east-1a
+    - us-east-1b
 pvc:
   accessModes: ReadWriteOnce
   storage: 100Gi
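The zones change replaces the named zone1/zone2 keys with a plain YAML list, which lets a template iterate over any number of zones. A hedged sketch of that pattern (not the chart's actual template) stamping out one local PersistentVolume per zone with the new pv.path:

{{- range $zone := .Values.pv.zones }}
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nexus-repo-pv-{{ $zone }}
spec:
  capacity:
    storage: {{ $.Values.pv.storage }}
  volumeMode: {{ $.Values.pv.volumeMode }}
  accessModes:
    - {{ $.Values.pv.accessModes }}
  persistentVolumeReclaimPolicy: {{ $.Values.pv.reclaimPolicy }}
  local:
    path: {{ $.Values.pv.path }}
  nodeAffinity:  # local PVs must pin to nodes; here, by zone label
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - {{ $zone }}
{{- end }}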