Compare commits


3 Commits

SHA1 Message Date
511d7ab6ac Update README.md 2023-01-25 12:50:17 -05:00
5a3923f645 Update README.md 2023-01-25 12:49:52 -05:00
338565b3d8 Update README.md 2023-01-25 12:47:52 -05:00
51 changed files with 43 additions and 133 deletions

View File

@ -14,13 +14,11 @@
-->
# ⚠️ Archive Notice
As of October 24, 2023, we will no longer update or support the [Single-Instance OSS/Pro Helm Chart](https://github.com/sonatype/nxrm3-helm-repository/tree/main/nexus-repository-manager).
As of October 24, 2023, we will be making the following changes to this repository and the available helm charts:
Deploying Nexus Repository in containers with an embedded database has been known to corrupt the database under some circumstances. We strongly recommend that you use an external PostgreSQL database for Kubernetes deployments.
If you are deploying in AWS, you can use our [AWS Helm chart](https://github.com/sonatype/nxrm3-helm-repository/tree/main/nxrm-aws-resiliency) to deploy Nexus Repository in an EKS cluster.
We do not currently provide Helm charts for on-premises deployments using PostgreSQL. For those wishing to deploy on premises, see our [Single Data Center On-Premises Deployment Example Using Kubernetes documentation](https://help.sonatype.com/repomanager3/planning-your-implementation/resiliency-and-high-availability/single-data-center-on-premises-deployment-example-using-kubernetes) for information and sample YAMLs to help you plan a resilient on-premises deployment.
1. We will no longer update or support the [Helm Chart for Single-Instance Kubernetes Deployments using OrientDB](https://github.com/sonatype/nxrm3-helm-repository/tree/main/nexus-repository-manager). This is because deploying Nexus Repository in containers with an embedded database has been known to corrupt the database under some circumstances. We strongly recommend that you use an external PostgreSQL database for Kubernetes deployments.
2. There is not, nor do we anticipate there being, a Helm chart available for single-instance Kubernetes deployments using PostgreSQL.
3. The only Helm chart we will support is the [Helm Chart for Resilient AWS deployments using EKS](https://github.com/sonatype/nxrm3-helm-repository/tree/main/nxrm-aws-resiliency), which allows you to deploy Nexus Repository in an EKS cluster as described in our [resilient deployment options documentation](https://help.sonatype.com/repomanager3/planning-your-implementation/resiliency-and-high-availability/single-node-cloud-resilient-deployment-example-using-aws).
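
For item 3 above, a minimal sketch of installing the supported AWS chart from a local clone of this repository; the release name `nxrm`, the values file name, and the assumption that `kubectl` already points at the target EKS cluster are illustrative, not taken from this diff:

```bash
# Clone the repository and install the AWS resiliency chart into an existing EKS cluster.
git clone https://github.com/sonatype/nxrm3-helm-repository.git
cd nxrm3-helm-repository

# "nxrm" and my-values.yaml are placeholder names for this example.
helm upgrade --install nxrm ./nxrm-aws-resiliency -f my-values.yaml
```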
## Helm Charts for Sonatype Nexus Repository Manager 3

View File

@ -12,7 +12,7 @@
# Eclipse Foundation. All other trademarks are the property of their respective owners.
#
helm plugin install --version "0.2.11" https://github.com/quintush/helm-unittest
helm plugin install https://github.com/quintush/helm-unittest
set -e
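
The script above installs the helm-unittest plugin before running the chart tests. A hedged sketch of invoking it by hand against the archived single-instance chart (the chart path is taken from this repository's layout; exact plugin flags can vary between versions, so verify before relying on this):

```bash
# Install the plugin (pinned, as one side of this diff does) and run the chart's unit tests.
helm plugin install --version "0.2.11" https://github.com/quintush/helm-unittest

# Execute the test suites bundled with the chart directory.
helm unittest ./nexus-repository-manager
```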

18 binary files not shown.

View File

@ -3,10 +3,10 @@ name: nexus-repository-manager
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
version: 59.0.0
version: 45.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application.
appVersion: 3.59.0
appVersion: 3.45.1
description: Sonatype Nexus Repository Manager - Universal Binary repository
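
A quick, hedged way to confirm which chart and application versions a checked-out copy of this chart declares (plain `helm show chart` against the chart directory; the directory name comes from the hunk header above):

```bash
# Print the chart metadata and pick out the two version fields changed in this diff.
helm show chart ./nexus-repository-manager | grep -E '^(version|appVersion):'
```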

View File

@ -12,15 +12,14 @@
Eclipse Foundation. All other trademarks are the property of their respective owners.
-->
# ⚠️ Archive Notice
As of October 24, 2023, we will no longer update or support this Helm chart.
As of October 24, 2023, we will be making the following changes to this repository and the available helm charts:
Deploying Nexus Repository in containers with an embedded database has been known to corrupt the database under some circumstances. We strongly recommend that you use an external PostgreSQL database for Kubernetes deployments.
If you are deploying in AWS, you can use our [AWS Helm chart](https://github.com/sonatype/nxrm3-helm-repository/tree/main/nxrm-aws-resiliency) to deploy Nexus Repository in an EKS cluster.
We do not currently provide Helm charts for on-premises deployments using PostgreSQL. For those wishing to deploy on premises, see our [Single Data Center On-Premises Deployment Example Using Kubernetes documentation](https://help.sonatype.com/repomanager3/planning-your-implementation/resiliency-and-high-availability/single-data-center-on-premises-deployment-example-using-kubernetes) for information and sample YAMLs to help you plan a resilient on-premises deployment.
1. We will no longer update or support this Helm chart for Kubernetes deployments using OrientDB. This is because deploying Nexus Repository in containers with an embedded database has been known to corrupt the database under some circumstances. We strongly recommend that you use an external PostgreSQL database for Kubernetes deployments.
2. There is not, nor do we anticipate there being, a Helm chart available for single-instance Kubernetes deployments using PostgreSQL.
3. The only Helm chart we will support is the [Helm Chart for Resilient AWS deployments using EKS](https://github.com/sonatype/nxrm3-helm-repository/tree/main/nxrm-aws-resiliency), which allows you to deploy Nexus Repository in an EKS cluster as described in our [resilient deployment options documentation](https://help.sonatype.com/repomanager3/planning-your-implementation/resiliency-and-high-availability/single-node-cloud-resilient-deployment-example-using-aws).
# Nexus Repository

View File

@ -6,7 +6,7 @@ deploymentStrategy: Recreate
image:
# Sonatype Official Public Image
repository: sonatype/nexus3
tag: 3.59.0
tag: 3.45.1
pullPolicy: IfNotPresent
imagePullSecrets:
# for image registries that require login, specify the name of the existing
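
The truncated comment above refers to an existing registry credential. A hedged sketch of creating one and pointing the chart at it; the secret name, registry details, and the list-of-names shape of `imagePullSecrets` follow the common Kubernetes convention and are assumptions, not confirmed by this diff:

```bash
# Create a docker-registry secret in the namespace where the chart will be installed.
kubectl create secret docker-registry my-registry-cred \
  --docker-server=registry.example.com \
  --docker-username=deployer \
  --docker-password='<password>'

# Reference it at install time; the exact values path for imagePullSecrets is an assumption.
helm upgrade --install nexus-rm ./nexus-repository-manager \
  --set image.tag=3.45.1 \
  --set 'imagePullSecrets[0].name=my-registry-cred'
```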

View File

@ -15,13 +15,13 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 59.0.0
version: 45.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: 3.59.0
appVersion: 3.45.1
keywords:
- artifacts

View File

@ -1,4 +1,4 @@
{{- if .Values.externaldns.enabled }}
# comment out sa if it was previously created
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
@ -64,4 +64,3 @@ spec:
env:
- name: AWS_DEFAULT_REGION
value: {{ .Values.deployment.clusterRegion }}
{{- end }}
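
This hunk toggles a `{{- if .Values.externaldns.enabled }}` / `{{- end }}` guard around the external-dns resources. A hedged sketch of enabling them at install time, using only keys visible in the values.yaml hunk later in this diff (release and chart path names are illustrative):

```bash
# Enable the external-dns resources and supply the root domain and hosted zone type.
helm upgrade --install nxrm ./nxrm-aws-resiliency \
  --set externaldns.enabled=true \
  --set externaldns.domainFilter=example.com \
  --set externaldns.awsZoneType=private
```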

View File

@ -1,4 +1,3 @@
{{- if .Values.fluentbit.enabled -}}
apiVersion: v1
kind: ServiceAccount
metadata:
@ -78,7 +77,7 @@ data:
[INPUT]
Name tail
Tag nexus.nexus-log
Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-{{ .Values.deployment.name }}*{{ .Values.namespaces.nexusNs }}_nxrm-app-*.log
Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-nxrm.deployment*{{ .Values.namespaces.nexusNs }}_nxrm-app-*.log
Parser docker
DB /var/fluent-bit/state/flb_container.db
Mem_Buf_Limit 5MB
@ -113,7 +112,7 @@ data:
[INPUT]
Name tail
Tag nexus.request-log
Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-{{ .Values.deployment.name }}*{{ .Values.namespaces.nexusNs }}_request-log-*.log
Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-nxrm.deployment*{{ .Values.namespaces.nexusNs }}_request-log-*.log
Parser docker
DB /var/fluent-bit/state/flb_container.db
Mem_Buf_Limit 5MB
@ -148,7 +147,7 @@ data:
[INPUT]
Name tail
Tag nexus.audit-log
Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-{{ .Values.deployment.name }}*{{ .Values.namespaces.nexusNs }}_audit-log-*.log
Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-nxrm.deployment*{{ .Values.namespaces.nexusNs }}_audit-log-*.log
Parser docker
DB /var/fluent-bit/state/flb_container.db
Mem_Buf_Limit 5MB
@ -183,7 +182,7 @@ data:
[INPUT]
Name tail
Tag nexus.tasks-log
Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-{{ .Values.deployment.name }}*{{ .Values.namespaces.nexusNs }}_tasks-log-*.log
Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-nxrm.deployment*{{ .Values.namespaces.nexusNs }}_tasks-log-*.log
Parser docker
DB /var/fluent-bit/state/flb_container.db
Mem_Buf_Limit 5MB
@ -358,5 +357,4 @@ spec:
- operator: "Exists"
effect: "NoExecute"
- operator: "Exists"
effect: "NoSchedule"
{{- end }}
effect: "NoSchedule"

View File

@ -24,7 +24,6 @@ spec:
port:
number: {{ .Values.service.nexus.port }}
---
{{- if .Values.ingress.dockerIngress.enabled -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
@ -50,4 +49,3 @@ spec:
name: {{ .Chart.Name }}-docker-service
port:
number: {{ .Values.service.docker.port }}
{{- end }}

View File

@ -3,16 +3,13 @@ kind: Namespace
metadata:
name: {{ .Values.namespaces.nexusNs }}
---
{{- if .Values.fluentbit.enabled }}
apiVersion: v1
kind: Namespace
metadata:
name: {{ .Values.namespaces.cloudwatchNs }}
{{- end }}
---
{{- if .Values.externaldns.enabled }}
apiVersion: v1
kind: Namespace
metadata:
name: {{ .Values.namespaces.externaldnsNs }}
{{- end }}
---

View File

@ -6,7 +6,6 @@ metadata:
annotations:
eks.amazonaws.com/role-arn: {{ .Values.serviceAccount.role }}
---
{{- if .Values.externaldns.enabled }}
apiVersion: v1
kind: ServiceAccount
metadata:
@ -14,4 +13,4 @@ metadata:
namespace: {{ .Values.namespaces.externaldnsNs }}
annotations:
eks.amazonaws.com/role-arn: {{ .Values.serviceAccount.externaldns.role }}
{{- end }}
---

View File

@ -14,7 +14,6 @@ spec:
port: {{ .Values.service.nexus.port }}
targetPort: {{ .Values.service.nexus.targetPort }}
---
{{- if .Values.service.docker.enabled -}}
apiVersion: v1
kind: Service
metadata:
@ -31,4 +30,3 @@ spec:
protocol: {{ .Values.service.docker.protocol }}
port: {{ .Values.service.docker.port }}
targetPort: {{ .Values.service.docker.targetPort }}
{{- end }}
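
The Docker Ingress and the Docker NodePort Service in this diff are both gated by values flags on one side of the comparison (`ingress.dockerIngress.enabled` and `service.docker.enabled`). A hedged sketch of switching the Docker connector on, using only keys visible in the values.yaml hunk below (release and chart names are illustrative):

```bash
# Expose the Docker connector: a NodePort service on 9090 plus its own ALB ingress.
helm upgrade --install nxrm ./nxrm-aws-resiliency \
  --set service.docker.enabled=true \
  --set service.docker.port=9090 \
  --set ingress.dockerIngress.enabled=true
```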

View File

@ -1,11 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Values.workdir.configmap.name }}
namespace: {{ .Values.namespaces.nexusNs }}
data:
create-nexus-work-dir.sh: |
#!/bin/bash
# Make Nexus Repository Manager work directory
mkdir -p /nexus-repo-mgr-work-dir/work

View File

@ -1,51 +0,0 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ .Values.workdir.daemonset.name }}
namespace: {{ .Values.namespaces.nexusNs }}
spec:
selector:
matchLabels:
job: dircreator
template:
metadata:
labels:
job: dircreator
spec:
hostPID: true
restartPolicy: Always
initContainers:
# Copy file for creating nexus work directory over and execute it on host
- name: create-nexus-work-dir
image: ubuntu:23.04
command: [/bin/sh]
args:
- -c
- >-
cp /tmp/create-nexus-work-dir.sh /host-dir &&
/usr/bin/nsenter -m/proc/1/ns/mnt -- chmod u+x /tmp/install/create-nexus-work-dir.sh &&
/usr/bin/nsenter -m/proc/1/ns/mnt /tmp/install/create-nexus-work-dir.sh
securityContext:
privileged: true
volumeMounts:
- name: create-nexus-work-dir-script
mountPath: /tmp
- name: host-mnt
mountPath: /host-dir
containers:
- name: directory-creator
image: busybox:1.33.1
command: ["/bin/sh"]
args:
- -c
- >-
tail -f /dev/null
securityContext:
privileged: true
volumes:
- name: create-nexus-work-dir-script
configMap:
name: {{ .Values.workdir.configmap.name }}
- name: host-mnt
hostPath:
path: /tmp/install

View File

@ -4,18 +4,14 @@ namespaces:
cloudwatchNs: amazon-cloudwatch
externaldnsNs: nexus-externaldns
externaldns:
enabled: false
domainFilter: example.com #your root domain e.g example.com
awsZoneType: private # hosted zone to look at (valid values are public, private or no value for both)
fluentbit:
enabled: false
deployment:
clusterRegion: us-east-1
name: nxrm.deployment
clusterName: nxrm-nexus
logsRegion: us-east-1
fluentBitVersion: 2.28.0
replicaCount: 1
initContainer:
image:
repository: busybox
@ -23,7 +19,7 @@ deployment:
container:
image:
repository: sonatype/nexus3
tag: 3.45.1
tag: 3.41.1
containerPort: 8081
pullPolicy: IfNotPresent
env:
@ -52,33 +48,24 @@ ingress:
#host: "example.com" #host to apply this ingress rule to. Uncomment this in your values.yaml and set it as you wish
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/healthcheck-path: /service/rest/v1/status
alb.ingress.kubernetes.io/scheme: internal # scheme
alb.ingress.kubernetes.io/subnets: subnet-1,subnet-2 #comma separated list of subnet ids
#alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]' uncomment for https
#alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:0000000000000:certificate/00000000-1111-2222-3333-444444444444 # Uncomment for https. The AWS Certificate Manager ARN for your HTTPS certificate
dockerIngress: #Ingress for Docker Connector - comment out if you don't use docker repositories
enabled: false
alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:0000000000000:certificate/00000000-1111-2222-3333-444444444444 # The AWS Certificate Manager ARN for your HTTPS certificate
dockerIngress: #Ingress for Docker Connector - comment out if you don't use docker repositories
annotations:
kubernetes.io/ingress.class: alb # comment out if you don't use docker repositories
alb.ingress.kubernetes.io/scheme: internal # scheme comment out if you don't use docker repositories
alb.ingress.kubernetes.io/subnets: subnet-1,subnet-2 #comma separated list of subnet ids, comment out if you don't use docker repositories
# alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]' #uncomment if you use docker repositories
# alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:0000000000000:certificate/00000000-1111-2222-3333-444444444444 # Uncomment if you use docker repositories - The AWS Certificate Manager ARN for your HTTPS certificate
# external-dns.alpha.kubernetes.io/hostname: dockerrepo1.example.com, dockerrepo2.example.com, dockerrepo3.example.com # Add more docker subdomains using dockerrepoName.example.com; otherwise comment out if you don't use docker repositories
workdir:
configmap:
name: create-nexus-workdir-config
daemonset:
name: create-nexus-work-dir
storageClass:
iopsPerGB: "10" #Note: aws plugin multiplies this by the size of the requested volume to compute IOPS of the volume and caps it at 20,000 IOPS
alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]' #comment out if you don't use docker repositories
alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:0000000000000:certificate/00000000-1111-2222-3333-444444444444 # Comment out if you don't use docker repositories - The AWS Certificate Manager ARN for your HTTPS certificate
external-dns.alpha.kubernetes.io/hostname: dockerrepo1.example.com, dockerrepo2.example.com, dockerrepo3.example.com # Add more docker subdomains using dockerrepoName.example.com; otherwise comment out if you don't use docker repositories
pv:
storage: 120Gi
volumeMode: Filesystem
accessModes: ReadWriteOnce
reclaimPolicy: Retain
path: /nexus-repo-mgr-work-dir/work
path: /mnt
zones:
zone1: us-east-1a
zone2: us-east-1b
@ -86,22 +73,21 @@ pvc:
accessModes: ReadWriteOnce
storage: 100Gi
service: #Nexus Repo NodePort Service
service: #Nexus Repo NodePort Service
nexus:
type: NodePort
protocol: TCP
port: 80
targetPort: 8081
docker: #Nodeport Service for Docker Service
enabled: false
type: NodePort
protocol: TCP
port: 9090
targetPort: 8081
type: NodePort
protocol: TCP
port: 80
targetPort: 8081
docker: #Nodeport Service for Docker Service
type: NodePort
protocol: TCP
port: 9090
targetPort: 8081
secret:
license:
arn: arn:aws:secretsmanager:us-east-1:000000000000:secret:nxrm-nexus-license
alias: nxrm-license.lic
arn: arn:aws:secretsmanager:us-east-1:000000000000:secret:nxrm-nexus-license
alias: nxrm-license.lic
rds:
arn: arn:aws:secretsmanager:us-east-1:000000000000:secret:nxrmrds-cred-nexus
adminpassword:
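
As a worked example of the `iopsPerGB` note above: with `iopsPerGB: "10"` and the 120Gi persistent volume shown in these values, the EBS plugin would provision roughly 10 × 120 = 1,200 IOPS, well under the 20,000 IOPS cap the comment mentions. A hedged sketch of overriding those storage settings at install time (release and chart names are illustrative, and the `storageClass` block exists only on one side of this comparison):

```bash
# Raise provisioned IOPS per GB and the PV size; 20 IOPS/GB x 200Gi gives roughly 4,000 IOPS,
# still below the 20,000 IOPS cap.
helm upgrade --install nxrm ./nxrm-aws-resiliency \
  --set-string storageClass.iopsPerGB=20 \
  --set pv.storage=200Gi \
  --set pvc.storage=100Gi
```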