Compare commits


16 Commits

Author SHA1 Message Date
02c19b39ee Release Update for 41.1.3 2022-08-30 15:16:56 +00:00
f351b8e244 Merge pull request #14 from sonatype/NEXUS-34871-configure-external-dns-to-create-docker-sub-domain-records-and-https-listener
NEXUS-34871 - Create A records for docker sub domain and configure HTTPS listener for ALB
2022-08-30 15:49:43 +01:00
4902991b0c Add more comments to values.yaml 2022-08-30 15:49:21 +01:00
0734d746eb Associate sub domains with docker ingress 2022-08-27 00:59:07 +01:00
c7c527174f Associate sub domains with docker ingress 2022-08-27 00:38:36 +01:00
595db96ef1 license 2022-08-26 22:07:25 +01:00
97dfe39202 parameterise hosted zone 2022-08-26 21:35:49 +01:00
10ee4a5efb parameterise hosted zone 2022-08-26 21:33:53 +01:00
1e5ce73111 specify examples for docker subdomain and cert manager arn 2022-08-26 21:01:10 +01:00
862f179251 Configure external-dns to create docker sub domain rcords and https listener 2022-08-26 20:47:16 +01:00
769c3b7f7c revert 2022-08-26 20:44:42 +01:00
e3af231002 Configure external-dns to create docker sub domain rcords and https listener 2022-08-26 20:42:13 +01:00
a0318927b0 Merge pull request #13 from sonatype/fix-typo
fix numbering
2022-08-26 12:33:05 +01:00
702f846cb2 fix numbering 2022-08-26 12:31:05 +01:00
53b1ba9fcb Merge pull request #12 from sonatype/NEXUS-34129-Update-Broken-links-and-readme
NEXUS-34129 - Update-Broken-links-and-readme
2022-08-26 12:28:54 +01:00
1cddb6982b Update Broken links and readme 2022-08-26 11:48:48 +01:00
15 changed files with 158 additions and 73 deletions

LICENSE

@@ -1,21 +1,13 @@
-MIT License
+Copyright (c) 2020-present Sonatype, Inc.
-Copyright (c) 2020 Sonatype
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
+http://www.apache.org/licenses/LICENSE-2.0
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.


@@ -17,12 +17,12 @@
 We provide Helm charts for two different deployment scenarios:
-See the [AWS Single-Instance Resiliency Chart](https://github.com/sonatype/nxrm3-helm-repository/tree/main/aws-single-instance-resiliency) if you are doing the following:
+See the [AWS Single-Instance Resiliency Chart](https://github.com/sonatype/nxrm3-helm-repository/tree/main/nxrm-aws-resiliency) if you are doing the following:
 * Deploying Nexus Repository Pro to an AWS cloud environment with the desire for automatic failover across Availability Zones (AZs) within a single region
 * Planning to configure a single Nexus Repository Pro instance within your Kubernetes/EKS cluster with two or more nodes spread across different AZs within an AWS region
 * Using an external PostgreSQL database (required)
-See the [Single-Instance OSS/Pro Kubernetes Chart](https://github.com/sonatype/nxrm3-helm-repository/tree/main/single-inst-oss-pro-kubernetes) if you are doing the following:
+See the [Single-Instance OSS/Pro Kubernetes Chart](https://github.com/sonatype/nxrm3-helm-repository/tree/main/nexus-repository-manager) if you are doing the following:
 * Using embedded OrientDB (required)
 * Deploying either Nexus Repository Pro or OSS to an on-premises environment with bare metal/VM server (Node)
 * Deploying a single Nexus Repository instance within a Kubernetes cluster that has a single Node configured

Binary file not shown.

Binary file not shown.

Binary file not shown.


@@ -3,7 +3,7 @@ name: nexus-repository-manager
 # This is the chart version. This version number should be incremented each time you make changes
 # to the chart and its templates, including the app version.
-version: 41.1.2
+version: 41.1.3
 # This is the version number of the application being deployed. This version number should be
 # incremented each time you make changes to the application.
 appVersion: 3.41.1


@@ -15,7 +15,7 @@ type: application
 # This is the chart version. This version number should be incremented each time you make changes
 # to the chart and its templates, including the app version.
 # Versions are expected to follow Semantic Versioning (https://semver.org/)
-version: 41.1.2
+version: 41.1.3
 # This is the version number of the application being deployed. This version number should be
 # incremented each time you make changes to the application. Versions are not expected to


@@ -63,13 +63,14 @@ You will also need to complete the steps below. See the referenced AWS documentation
 ---
 ## Deployment
-1. Pull the [nxrm-resiliency-aws-helmchart](https://github.com/sonatype/nxrm3-helm-repository/blob/main/aws-single-instance-resiliency/Chart.yaml).
+1. Add the sonatype repo to your helm:
+```helm repo add sonatype https://sonatype.github.io/helm3-charts/ ```
 2. Ensure you have updated your values.yaml with appropriate values for your environment.
 3. Install the chart using the following:
-```helm install nxrm nexus/nxrm-aws-resiliency --values values.yaml```
+```helm install nxrm sonatype/nxrm-aws-resiliency -f values.yaml```
-3. Get the Nexus Repository link using the following:
+4. Get the Nexus Repository link using the following:
 ```kubectl get ingresses -n nexusrepo```
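Step 2 above assumes a values.yaml tailored to your environment. A minimal override sketch, drawn from the keys this compare adds to values.yaml — every ARN, subnet, and domain below is a placeholder, not a real resource:

```yaml
# Illustrative values.yaml overrides only; substitute your own account, roles, subnets, and zone.
deployment:
  clusterName: nxrm-nexus
  clusterRegion: us-east-1   # region hosting the EKS cluster
  logsRegion: us-east-1      # region receiving CloudWatch logs
serviceAccount:
  role: arn:aws:iam::000000000000:role/nxrm-nexus-role   # IAM role with secretsmanager permissions
  externaldns:
    role: arn:aws:iam::000000000000:role/nexusrepo-external-dns-irsa-role   # IAM role with route53 permissions
externaldns:
  domainFilter: example.com   # your root hosted zone
  awsZoneType: private
ingress:
  annotations:
    alb.ingress.kubernetes.io/subnets: subnet-1,subnet-2   # your ALB subnets
```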


@@ -0,0 +1,66 @@
+# comment out sa if it was previously created
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: external-dns
+  namespace: {{ .Values.namespaces.externaldnsNs }}
+  labels:
+    app.kubernetes.io/name: external-dns
+rules:
+  - apiGroups: [""]
+    resources: ["services","endpoints","pods","nodes"]
+    verbs: ["get","watch","list"]
+  - apiGroups: ["extensions","networking.k8s.io"]
+    resources: ["ingresses"]
+    verbs: ["get","watch","list"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: external-dns-viewer
+  namespace: {{ .Values.namespaces.externaldnsNs }}
+  labels:
+    app.kubernetes.io/name: external-dns
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: external-dns
+subjects:
+  - kind: ServiceAccount
+    name: {{ .Values.serviceAccount.externaldns.name }}
+    namespace: {{ .Values.namespaces.externaldnsNs }}
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: external-dns
+  namespace: {{ .Values.namespaces.externaldnsNs }}
+  labels:
+    app.kubernetes.io/name: external-dns
+spec:
+  strategy:
+    type: Recreate
+  selector:
+    matchLabels:
+      app.kubernetes.io/name: external-dns
+  template:
+    metadata:
+      labels:
+        app.kubernetes.io/name: external-dns
+    spec:
+      serviceAccountName: external-dns
+      containers:
+        - name: external-dns
+          image: k8s.gcr.io/external-dns/external-dns:v0.11.0
+          args:
+            - --source=service
+            - --source=ingress
+            - --domain-filter={{ .Values.externaldns.domainFilter }} # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
+            - --provider=aws
+            - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
+            - --aws-zone-type={{ .Values.externaldns.awsZoneType }} # only look at public hosted zones (valid values are public, private or no value for both)
+            - --registry=txt
+            - --txt-owner-id=external-dns
+          env:
+            - name: AWS_DEFAULT_REGION
+              value: {{ .Values.deployment.clusterRegion }}
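With the sample values this compare adds to values.yaml (`domainFilter: example.com`, `awsZoneType: private`, `clusterRegion: us-east-1` — sample values, not requirements), the templated container arguments above would render roughly as:

```yaml
# Rendered sketch of the external-dns container, assuming the sample values.yaml.
args:
  - --source=service
  - --source=ingress
  - --domain-filter=example.com   # only hosted zones matching the root domain
  - --provider=aws
  - --policy=upsert-only          # never delete existing records
  - --aws-zone-type=private       # private hosted zones only
  - --registry=txt
  - --txt-owner-id=external-dns
env:
  - name: AWS_DEFAULT_REGION
    value: us-east-1
```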


@@ -77,7 +77,7 @@ data:
 [INPUT]
 Name tail
 Tag nexus.nexus-log
-Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-nxrm.deployment-*-*_{{ .Values.namespaces.nexusNs }}_nxrm-app-*.log
+Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-nxrm.deployment*{{ .Values.namespaces.nexusNs }}_nxrm-app-*.log
 Parser docker
 DB /var/fluent-bit/state/flb_container.db
 Mem_Buf_Limit 5MB
@@ -112,7 +112,7 @@ data:
 [INPUT]
 Name tail
 Tag nexus.request-log
-Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-nxrm.deployment-*-*_{{ .Values.namespaces.nexusNs }}_request-log-*.log
+Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-nxrm.deployment*{{ .Values.namespaces.nexusNs }}_request-log-*.log
 Parser docker
 DB /var/fluent-bit/state/flb_container.db
 Mem_Buf_Limit 5MB
@@ -147,7 +147,7 @@ data:
 [INPUT]
 Name tail
 Tag nexus.audit-log
-Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-nxrm.deployment-*-*_{{ .Values.namespaces.nexusNs }}_audit-log-*.log
+Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-nxrm.deployment*{{ .Values.namespaces.nexusNs }}_audit-log-*.log
 Parser docker
 DB /var/fluent-bit/state/flb_container.db
 Mem_Buf_Limit 5MB
@@ -182,7 +182,7 @@ data:
 [INPUT]
 Name tail
 Tag nexus.tasks-log
-Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-nxrm.deployment-*-*_{{ .Values.namespaces.nexusNs }}_tasks-log-*.log
+Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-nxrm.deployment*{{ .Values.namespaces.nexusNs }}_tasks-log-*.log
 Parser docker
 DB /var/fluent-bit/state/flb_container.db
 Mem_Buf_Limit 5MB
@@ -263,7 +263,7 @@ spec:
     spec:
       containers:
         - name: fluent-bit
-          image: amazon/aws-for-fluent-bit:2.10.0
+          image: amazon/aws-for-fluent-bit:{{ .Values.deployment.fluentBitVersion }}
           imagePullPolicy: Always
           env:
            - name: AWS_REGION
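The Path changes above loosen the container-log glob: the old pattern required a `-<hash>-<hash>_` shape between `deployment` and the namespace, which misses pods whose names carry a single generated suffix. A quick sketch with Python's `fnmatch` — the file names below are hypothetical, shaped like `/var/log/containers` entries with the Helm template already rendered (`nexusNs: nexusrepo`, chart `nexus-repository-manager-41.1.3`, release `nxrm`):

```python
# Compare the old and new fluent-bit Path globs against two hypothetical
# container-log file names; only the new glob matches both.
from fnmatch import fnmatchcase

old_glob = "*-nxrm.deployment-*-*_nexusrepo_nxrm-app-*.log"
new_glob = "*-nxrm.deployment*nexusrepo_nxrm-app-*.log"

# Usual Deployment pod name: ReplicaSet hash plus pod hash -> both globs match.
full = ("nexus-repository-manager-41.1.3.nxrm-nxrm.deployment-7f9c6bd58-x2x9z"
        "_nexusrepo_nxrm-app-0123456789abcdef.log")
# Single generated suffix -> only the loosened glob matches.
short = ("nexus-repository-manager-41.1.3.nxrm-nxrm.deployment-x2x9z"
         "_nexusrepo_nxrm-app-0123456789abcdef.log")

print(fnmatchcase(full, old_glob), fnmatchcase(full, new_glob))    # True True
print(fnmatchcase(short, old_glob), fnmatchcase(short, new_glob))  # False True
```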


@@ -48,4 +48,4 @@ spec:
   service:
     name: {{ .Chart.Name }}-docker-service
     port:
-      number: {{ .Values.ingress.dockerIngress.port }}
+      number: {{ .Values.service.docker.port }}


@@ -8,3 +8,8 @@ kind: Namespace
 metadata:
   name: {{ .Values.namespaces.cloudwatchNs }}
 ---
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: {{ .Values.namespaces.externaldnsNs }}
+---


@@ -5,3 +5,12 @@ metadata:
   namespace: {{ .Values.namespaces.nexusNs }}
   annotations:
     eks.amazonaws.com/role-arn: {{ .Values.serviceAccount.role }}
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: {{ .Values.serviceAccount.externaldns.name }}
+  namespace: {{ .Values.namespaces.externaldnsNs }}
+  annotations:
+    eks.amazonaws.com/role-arn: {{ .Values.serviceAccount.externaldns.role }}
+---


@@ -26,7 +26,7 @@ spec:
   selector:
     app: nxrm
   ports:
-    - name: docker-connector
+    - name: docker-service
      protocol: {{ .Values.service.docker.protocol }}
      port: {{ .Values.service.docker.port }}
      targetPort: {{ .Values.service.docker.targetPort }}


@@ -1,52 +1,64 @@
 # Declare variables to be passed into your templates.
 namespaces:
-  nexusNs: nexusrepo
-  cloudwatchNs: amazon-cloudwatch
+  nexusNs: nexusrepo
+  cloudwatchNs: amazon-cloudwatch
+  externaldnsNs: nexus-externaldns
+externaldns:
+  domainFilter: example.com #your root domain e.g example.com
+  awsZoneType: private # hosted zone to look at (valid values are public, private or no value for both)
 deployment:
-  name: nxrm.deployment
-  clusterName: nxrm-nexus
-  logsRegion: us-east-1
-  initContainer:
-    image:
-      repository: busybox
-      tag: 1.33.1
-  container:
-    image:
-      repository: sonatype/nexus3
-      tag: 3.41.1
-      containerPort: 8081
-      pullPolicy: IfNotPresent
-    env:
-      nexusDBName: nexus
-      nexusDBPort: 3306
-  requestLogContainer:
-    image:
-      repository: busybox
-      tag: 1.33.1
-  auditLogContainer:
-    image:
-      repository: busybox
-      tag: 1.33.1
-  taskLogContainer:
-    image:
-      repository: busybox
-      tag: 1.33.1
+  clusterRegion: us-east-1
+  name: nxrm.deployment
+  clusterName: nxrm-nexus
+  logsRegion: us-east-1
+  fluentBitVersion: 2.28.0
+  initContainer:
+    image:
+      repository: busybox
+      tag: 1.33.1
+  container:
+    image:
+      repository: sonatype/nexus3
+      tag: 3.41.1
+      containerPort: 8081
+      pullPolicy: IfNotPresent
+    env:
+      nexusDBName: nexus
+      nexusDBPort: 3306
+  requestLogContainer:
+    image:
+      repository: busybox
+      tag: 1.33.1
+  auditLogContainer:
+    image:
+      repository: busybox
+      tag: 1.33.1
+  taskLogContainer:
+    image:
+      repository: busybox
+      tag: 1.33.1
 serviceAccount:
-  name: nexus-repository-deployment-sa #This SA is created as part of steps under "AWS Secrets Manager"
-  role: arn:aws:iam::000000000000:role/nxrm-nexus-role #Role with secretsmanager permissions
+  name: nexus-repository-deployment-sa #This SA is created as part of steps under "AWS Secrets Manager"
+  role: arn:aws:iam::000000000000:role/nxrm-nexus-role #Role with secretsmanager permissions
+  externaldns:
+    name: external-dns
+    role: arn:aws:iam::000000000000:role/nexusrepo-external-dns-irsa-role #Role with route53 permissions needed by external-dns
 ingress:
-  #host: "nexus.ingress.rule.host" #host to apply this ingress rule to. Uncomment this in your values.yaml and set it as you wish
+  #host: "example.com" #host to apply this ingress rule to. Uncomment this in your values.yaml and set it as you wish
   annotations:
     kubernetes.io/ingress.class: alb
     alb.ingress.kubernetes.io/scheme: internal # scheme
     alb.ingress.kubernetes.io/subnets: subnet-1,subnet-2 #comma separated list of subnet ids
+    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
+    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:0000000000000:certificate/00000000-1111-2222-3333-444444444444 # The AWS Certificate Manager ARN for your HTTPS certificate
-  dockerIngress: #Ingress for Docker Connector
-    #host: "docker.ingress.rule.host" #host to apply this ingress rule to. Uncomment this in your values.yaml and set it as you wish
+  dockerIngress: #Ingress for Docker Connector - comment out if you don't use docker repositories
     annotations:
-      kubernetes.io/ingress.class: alb
-      alb.ingress.kubernetes.io/scheme: internal # scheme
-      alb.ingress.kubernetes.io/subnets: subnet-1,subnet-2 #comma separated list of subnet ids
-    port: 9090
+      kubernetes.io/ingress.class: alb # comment out if you don't use docker repositories
+      alb.ingress.kubernetes.io/scheme: internal # scheme, comment out if you don't use docker repositories
+      alb.ingress.kubernetes.io/subnets: subnet-1,subnet-2 #comma separated list of subnet ids, comment out if you don't use docker repositories
+      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]' #comment out if you don't use docker repositories
+      alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:0000000000000:certificate/00000000-1111-2222-3333-444444444444 # Comment out if you don't use docker repositories - The AWS Certificate Manager ARN for your HTTPS certificate
+      external-dns.alpha.kubernetes.io/hostname: dockerrepo1.example.com, dockerrepo2.example.com, dockerrepo3.example.com # Add more docker subdomains using dockerrepoName.example.com, otherwise comment out if you don't use docker repositories
 pv:
   storage: 120Gi
   volumeMode: Filesystem
@@ -66,11 +78,11 @@ service: #Nexus Repo NodePort Service
   protocol: TCP
   port: 80
   targetPort: 8081
-  docker: #Nodeport Service for Docker connector
+  docker: #Nodeport Service for Docker Service
     type: NodePort
     protocol: TCP
    port: 9090
-    targetPort: 9090
+    targetPort: 8081
 secret:
   license:
     arn: arn:aws:secretsmanager:us-east-1:000000000000:secret:nxrm-nexus-license