Compare commits

..

3 Commits

Author SHA1 Message Date
511d7ab6ac Update README.md 2023-01-25 12:50:17 -05:00
5a3923f645 Update README.md 2023-01-25 12:49:52 -05:00
338565b3d8 Update README.md 2023-01-25 12:47:52 -05:00
66 changed files with 256 additions and 154 deletions

View File

@@ -14,10 +14,22 @@
-->
# ⚠️ Archive Notice
As of October 24, 2023, we will no longer update or support the [Single-Instance OSS/Pro Helm Chart](https://github.com/sonatype/nxrm3-helm-repository/tree/main/nexus-repository-manager).
As of October 24, 2023, we will be making the following changes to this repository and the available helm charts:
Deploying Nexus Repository in containers with an embedded database has been known to corrupt the database under some circumstances. We strongly recommend that you use an external PostgreSQL database for Kubernetes deployments.
1. We will no longer update or support the [Helm Chart for Single-Instance Kubernetes Deployments using OrientDB](https://github.com/sonatype/nxrm3-helm-repository/tree/main/nexus-repository-manager). This is because deploying Nexus Repository in containers with an embedded database has been known to corrupt the database under some circumstances. We strongly recommend that you use an external PostgreSQL database for Kubernetes deployments.
2. There is not, nor do we anticipate there being, a Helm chart available for single-instance Kubernetes deployments using PostgreSQL.
3. The only Helm chart we will support is the [Helm Chart for Resilient AWS deployments using EKS](https://github.com/sonatype/nxrm3-helm-repository/tree/main/nxrm-aws-resiliency), which allows you to deploy Nexus Repository in an EKS cluster as described in our [resilient deployment options documentation](https://help.sonatype.com/repomanager3/planning-your-implementation/resiliency-and-high-availability/single-node-cloud-resilient-deployment-example-using-aws).
## Helm Charts for Sonatype Nexus Repository Manager 3
We now provide one [HA/Resiliency Helm Chart](https://github.com/sonatype/nxrm3-ha-repository/tree/main/nxrm-ha) that supports both high availability and resilient deployments in AWS, Azure, or on-premises in a Kubernetes cluster. This is our only supported Helm chart for deploying Sonatype Nexus Repository; it requires a PostgreSQL database.
We provide Helm charts for two different deployment scenarios:
See the [AWS Single-Instance Resiliency Chart](https://github.com/sonatype/nxrm3-helm-repository/tree/main/nxrm-aws-resiliency) if you are doing the following:
* Deploying Nexus Repository Pro to an AWS cloud environment with the desire for automatic failover across Availability Zones (AZs) within a single region
* Planning to configure a single Nexus Repository Pro instance within your Kubernetes/EKS cluster with two or more nodes spread across different AZs within an AWS region
* Using an external PostgreSQL database (required)
See the [Single-Instance OSS/Pro Helm Chart](https://github.com/sonatype/nxrm3-helm-repository/tree/main/nexus-repository-manager) if you are doing the following:
* Using embedded OrientDB (required)
* Deploying either Nexus Repository Pro or OSS to an on-premises environment with bare metal/VM server (Node)
* Deploying a single Nexus Repository instance within a Kubernetes cluster that has a single Node configured

View File

@@ -12,7 +12,7 @@
# Eclipse Foundation. All other trademarks are the property of their respective owners.
#
helm plugin install --version "0.2.11" https://github.com/quintush/helm-unittest
helm plugin install https://github.com/quintush/helm-unittest
set -e

25 binary files not shown.

View File

@@ -1,16 +1,14 @@
apiVersion: v2
name: nexus-repository-manager
# The nexus-repository-manager chart is deprecated and no longer maintained
deprecated: true
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
version: 64.1.0
version: 45.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application.
appVersion: 3.64.0
appVersion: 3.45.1
description: DEPRECATED Sonatype Nexus Repository Manager - Universal Binary repository
description: Sonatype Nexus Repository Manager - Universal Binary repository
# A chart can be either an 'application' or a 'library' chart.
#
@@ -37,3 +35,6 @@ home: https://www.sonatype.com/nexus-repository-oss
icon: https://sonatype.github.io/helm3-charts/NexusRepo_Vertical.svg
sources:
- https://github.com/sonatype/nexus-public
maintainers:
- email: support@sonatype.com
name: Sonatype

View File

@@ -12,15 +12,14 @@
Eclipse Foundation. All other trademarks are the property of their respective owners.
-->
# ⚠️ Archive Notice
As of October 24, 2023, we will no longer update or support this Helm chart.
As of October 24, 2023, we will be making the following changes to this repository and the available helm charts:
Deploying Nexus Repository in containers with an embedded database has been known to corrupt the database under some circumstances. We strongly recommend that you use an external PostgreSQL database for Kubernetes deployments.
If you are deploying in AWS, you can use our [AWS Helm chart](https://github.com/sonatype/nxrm3-helm-repository/tree/main/nxrm-aws-resiliency) to deploy Nexus Repository in an EKS cluster.
We do not currently provide Helm charts for on-premises deployments using PostgreSQL. For those wishing to deploy on premises, see our [Single Data Center On-Premises Deployment Example Using Kubernetes documentation](https://help.sonatype.com/repomanager3/planning-your-implementation/resiliency-and-high-availability/single-data-center-on-premises-deployment-example-using-kubernetes) for information and sample YAMLs to help you plan a resilient on-premises deployment.
1. We will no longer update or support this Helm chart for Kubernetes deployments using OrientDB. This is because deploying Nexus Repository in containers with an embedded database has been known to corrupt the database under some circumstances. We strongly recommend that you use an external PostgreSQL database for Kubernetes deployments.
2. There is not, nor do we anticipate there being, a Helm chart available for single-instance Kubernetes deployments using PostgreSQL.
3. The only Helm chart we will support is the [Helm Chart for Resilient AWS deployments using EKS](https://github.com/sonatype/nxrm3-helm-repository/tree/main/nxrm-aws-resiliency), which allows you to deploy Nexus Repository in an EKS cluster as described in our [resilient deployment options documentation](https://help.sonatype.com/repomanager3/planning-your-implementation/resiliency-and-high-availability/single-node-cloud-resilient-deployment-example-using-aws).
# Nexus Repository

View File

@@ -6,7 +6,7 @@ deploymentStrategy: Recreate
image:
# Sonatype Official Public Image
repository: sonatype/nexus3
tag: 3.64.0
tag: 3.45.1
pullPolicy: IfNotPresent
imagePullSecrets:
# for image registries that require login, specify the name of the existing

View File

@@ -1,8 +1,6 @@
apiVersion: v2
name: nxrm-aws-resiliency
# The nxrm-aws-resiliency chart is deprecated and no longer maintained
deprecated: true
description: DEPRECATED Resilient AWS Deployment of Sonatype Nexus Repository Manager - Universal Binary repository
description: Resilient AWS Deployment of Sonatype Nexus Repository Manager - Universal Binary repository
# A chart can be either an 'application' or a 'library' chart.
#
@@ -17,13 +15,13 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 64.1.0
version: 45.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: 3.64.0
appVersion: 3.45.1
keywords:
- artifacts
@@ -38,4 +36,6 @@ keywords:
- nexus3
home: https://www.sonatype.com/nexus-repository-oss
icon: https://sonatype.github.io/helm3-charts/NexusRepo_Vertical.svg
maintainers:
- name: Sonatype

View File

@@ -12,24 +12,201 @@
Eclipse Foundation. All other trademarks are the property of their respective owners.
-->
# ⚠️ Archive Notice
As of February 9, 2024, we now provide one [HA/Resiliency Helm Chart](https://github.com/sonatype/nxrm3-ha-repository/tree/main/nxrm-ha) that supports both high availability and resilient deployments in AWS, Azure, or on-premises in a Kubernetes cluster. This is our only supported Helm chart for deploying Sonatype Nexus Repository; it requires a PostgreSQL database and a Pro license.
# Helm Chart for a Resilient Nexus Repository Deployment in AWS
# Helm Chart Instructions
This Helm chart configures the Kubernetes resources that are needed for a resilient Nexus Repository deployment on AWS as described in our documented [single-node cloud resilient deployment example using AWS](https://help.sonatype.com/repomanager3/planning-your-implementation/resiliency-and-high-availability/single-node-cloud-resilient-deployment-example-using-aws).
See the [HA/Resiliency Helm Chart in GitHub](https://github.com/sonatype/nxrm3-ha-repository/tree/main/nxrm-ha) for details on the new combined Helm chart.
Detailed Help instructions are also available at the following locations:
* [Single-Node Cloud Resilient Example Using AWS](https://help.sonatype.com/en/single-node-cloud-resilient-deployment-example-using-aws.html)
* [Single-Node Cloud Resilient Example Using Azure](https://help.sonatype.com/en/single-node-cloud-resilient-deployment-example-using-azure.html)
* [Single Data Center On-Premises Resilient Example Using Kubernetes](https://help.sonatype.com/en/single-data-center-on-premises-deployment-example-using-kubernetes.html)
* [High Availability Deployment in AWS](https://help.sonatype.com/en/option-3---high-availability-deployment-in-amazon-web-services--aws-.html)
* [High Availability Deployment in Azure](https://help.sonatype.com/en/option-4---high-availability-deployment-in-azure.html)
* [On-Premises High Availability Deployment Using Kubernetes](https://help.sonatype.com/en/option-2---on-premises-high-availability-deployment-using-kubernetes.html)
Use the checklist below to determine if this Helm chart is suitable for your deployment needs.
---
## When to Use This Helm Chart
Use this Helm chart if you are doing any of the following:
- Deploying Nexus Repository Pro to an AWS cloud environment with the desire for automatic failover across Availability Zones (AZs) within a single region
- Planning to configure a single Nexus Repository Pro instance within your Kubernetes/EKS cluster with two or more nodes spread across different AZs within an AWS region
- Using an external PostgreSQL database
> **Note**: A Nexus Repository Pro license is required for our resilient deployment options. Your Nexus Repository Pro license file must be stored externally and mounted from AWS Secrets Manager (required).
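As a hedged illustration of that requirement, the sketch below uploads a license file to AWS Secrets Manager as a binary secret. The secret name and file name mirror the defaults under ```secret.license``` in this chart's values.yaml; the region and file path are placeholders to adapt:
```
# Sketch only: store the Pro license as binary secret data in AWS Secrets Manager.
# 'nxrm-nexus-license' and 'nxrm-license.lic' mirror the chart's default
# secret.license.arn and secret.license.alias values; substitute your own.
aws secretsmanager create-secret \
  --name nxrm-nexus-license \
  --secret-binary fileb://nxrm-license.lic \
  --region us-east-1
# Note the ARN in the command output and set it as secret.license.arn in values.yaml.
```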
---
## Prerequisites for This Chart
In order to set up an environment like the one illustrated above and described in this section, you will need the following:
- Kubernetes 1.19+
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
- [Helm 3](https://helm.sh/docs/intro/install/)
- A Nexus Repository Pro license
- An AWS account with permissions for accessing the following AWS services:
- Elastic Kubernetes Service (EKS)
- Relational Database Service (RDS) for PostgreSQL
- Application Load Balancer (ALB)
- CloudWatch
- Simple Storage Service (S3)
- Secrets Manager
You will also need to complete the steps below. See the referenced AWS documentation for detailed configuration steps. Also see [our resiliency documentation](https://help.sonatype.com/repomanager3/planning-your-implementation/resiliency-and-high-availability/single-node-cloud-resilient-deployment-example-using-aws) for more details about why these steps are necessary and how each AWS solution functions within a resilient deployment:
1. Configure an EKS cluster - [AWS documentation for managed nodes (i.e., EC2)](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html)
2. Create an Aurora database cluster - [AWS documentation for creating an Aurora database cluster](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.CreateInstance.html)
3. Deploy the AWS Load Balancer Controller (LBC) to your EKS cluster - [AWS documentation for deploying the AWS LBC to your EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html)
4. Install AWS Secrets Store CSI drivers - You need to create an IAM service account using the ```eksctl create iamserviceaccount``` command. Before proceeding, read the points below as they contain important required steps to ensure this helm chart will work for you: <br>
- **You must include two additional command parameters when running the command**: ```--role-only``` and ```--namespace <nexusrepo namespace>```
- It is important to include the ```--role-only``` option in the ```eksctl create iamserviceaccount``` command so that the helm chart manages the Kubernetes service account. <br>
- **The namespace you specify to the ```eksctl create iamserviceaccount``` must be the same namespace into which you will deploy the Nexus Repository pod.** <br>
- Although the namespace does not exist at this point, you must specify it as part of the command. **Do not create that namespace manually beforehand**; the helm chart will create and manage it.
- You should specify this same namespace as the value of ```nexusNs``` in your values.yaml. <br>
- Follow the instructions provided in the [AWS Secrets Store CSI drivers documentation](https://github.com/aws/secrets-store-csi-driver-provider-aws/blob/main/README.md) to install the AWS Secrets Store CSI drivers; ensure that you follow the additional instructions in the bullets above when you reach the ```eksctl create iamserviceaccount``` command on that page. A sketch of the full command appears after this list.
5. Ensure that your EKS nodes are granted the CloudWatchFullAccess and CloudWatchAgentServerPolicy IAM policies (the sketch after this list shows example attach commands). This Helm chart configures Fluent Bit to send logs to CloudWatch.
- [AWS documentation for setting up Fluentbit](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/deploy-container-insights-EKS.html)
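For reference, here is a minimal sketch of the commands from steps 4 and 5. Every concrete value in it is an assumption: the cluster name, service-account name, secrets-access policy ARN, and node role are placeholders, and ```nexusrepo``` stands in for whatever namespace you set as ```nexusNs``` in your values.yaml.
```
# Step 4 sketch: create the IAM role for the chart-managed service account.
# Do not create the 'nexusrepo' namespace beforehand; the helm chart manages it.
eksctl create iamserviceaccount \
  --name nexus-repository-secrets-sa \
  --cluster my-eks-cluster \
  --namespace nexusrepo \
  --attach-policy-arn arn:aws:iam::000000000000:policy/nexus-secrets-access \
  --role-only \
  --approve

# Step 5 sketch: attach the CloudWatch policies to your EKS node role.
# 'my-eks-node-role' is a placeholder for your cluster's node instance role.
aws iam attach-role-policy --role-name my-eks-node-role \
  --policy-arn arn:aws:iam::aws:policy/CloudWatchFullAccess
aws iam attach-role-policy --role-name my-eks-node-role \
  --policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy
```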
---
## External-dns
This helm chart uses [external-dns](https://github.com/kubernetes-sigs/external-dns) to create 'A' records in AWS Route 53 for our [Docker subdomain feature](https://help.sonatype.com/repomanager3/nexus-repository-administration/formats/docker-registry/docker-subdomain-connector).
See the ```external-dns.alpha.kubernetes.io/hostname``` annotation in the dockerIngress resource in the values.yaml.
### Permissions for external-dns
Open a terminal that has connectivity to your EKS cluster and run the following commands:
```
cat <<'EOF' >> external-dns-r53-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:ListResourceRecordSets"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
EOF
aws iam create-policy --policy-name "AllowExternalDNSUpdates" --policy-document file://external-dns-r53-policy.json
POLICY_ARN=$(aws iam list-policies --query 'Policies[?PolicyName==`AllowExternalDNSUpdates`].Arn' --output text)
EKS_CLUSTER_NAME=<Your EKS Cluster Name>
aws eks describe-cluster --name $EKS_CLUSTER_NAME --query "cluster.identity.oidc.issuer" --output text
eksctl utils associate-iam-oidc-provider --cluster $EKS_CLUSTER_NAME --approve
ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
OIDC_PROVIDER=$(aws eks describe-cluster --name $EKS_CLUSTER_NAME --query "cluster.identity.oidc.issuer" --output text | sed -e 's|^https://||')
```
Note: The value you assign to the 'EXTERNALDNS_NS' variable below should be the same as the one you specify in your values.yaml for namespaces.externaldnsNs
```
EXTERNALDNS_NS=nexus-externaldns
cat <<-EOF > externaldns-trust.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::$ACCOUNT_ID:oidc-provider/$OIDC_PROVIDER"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "$OIDC_PROVIDER:sub": "system:serviceaccount:${EXTERNALDNS_NS}:external-dns",
          "$OIDC_PROVIDER:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
EOF
IRSA_ROLE="nexusrepo-external-dns-irsa-role"
aws iam create-role --role-name $IRSA_ROLE --assume-role-policy-document file://externaldns-trust.json
aws iam attach-role-policy --role-name $IRSA_ROLE --policy-arn $POLICY_ARN
ROLE_ARN=$(aws iam get-role --role-name $IRSA_ROLE --query Role.Arn --output text)
echo $ROLE_ARN
```
Take note of the ROLE_ARN output by the last command above and specify it in your values.yaml for serviceAccount.externaldns.role
## Deployment
1. Add the sonatype repo to your helm:
```helm repo add sonatype https://sonatype.github.io/helm3-charts/ ```
2. Ensure you have updated your values.yaml with appropriate values for your environment.
- Note that you can specify Ingress annotations via the values.yaml.
- If you wish to add [Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/), you can do so via kubectl. See the [kubectl Cheat Sheet](https://kubernetes.io/docs/reference/kubectl/cheatsheet/) for specific commands.
3. Install the chart using the following:
```helm install nxrm sonatype/nxrm-aws-resiliency -f values.yaml```
4. Get the Nexus Repository link using the following:
```kubectl get ingresses -n nexusrepo```
---
## Health Check
You can use the following commands to perform various health checks:
See a list of releases:
```helm list```
Check pods using the following:
```kubectl get pods -n nexusrepo```
Check the Nexus Repository logs with the following:
```kubectl logs <pod_name> -n nexusrepo nxrm-app```
Check if the pod is OK by using the following; you shouldn't see any error/warning messages:
```kubectl describe pod <pod_name> -n nexusrepo```
Check if ingress is OK using the following:
```kubectl describe ingress <ingress_name> -n nexusrepo```
Check that the Fluent Bit pod is sending events to CloudWatch using the following:
```kubectl logs -n amazon-cloudwatch <fluent-bit pod id>```
If the above returns without error, then check CloudWatch for the ```/aws/containerinsights/<eks cluster name>/nexus-logs``` log group, which should contain four log streams.
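If you prefer to verify this from the terminal rather than the CloudWatch console, here is a minimal sketch using the AWS CLI; it assumes the default cluster name ```nxrm-nexus``` from this chart's values.yaml:
```
# List the log streams in the Container Insights log group for the cluster.
# 'nxrm-nexus' mirrors deployment.clusterName in the default values.yaml.
aws logs describe-log-streams \
  --log-group-name /aws/containerinsights/nxrm-nexus/nexus-logs \
  --query 'logStreams[].logStreamName' --output text
```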
---
## Uninstall
To uninstall the deployment, use the following:
```helm uninstall nxrm```
After removing the deployment, ensure that the namespace is deleted and that Nexus Repository is not listed when using the following:
```helm list```
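A hedged sketch of those post-uninstall checks; ```nexusrepo``` is a placeholder for the namespace you set as ```nexusNs```:
```
# Confirm the nxrm release no longer appears.
helm list
# Confirm the chart-managed namespaces are gone; delete any that linger.
kubectl get namespaces
kubectl delete namespace nexusrepo
```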

View File

@@ -1,4 +1,4 @@
{{- if .Values.externaldns.enabled }}
# comment out sa if it was previously created
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
@@ -64,4 +64,3 @@ spec:
env:
- name: AWS_DEFAULT_REGION
value: {{ .Values.deployment.clusterRegion }}
{{- end }}

View File

@@ -1,4 +1,3 @@
{{- if .Values.fluentbit.enabled -}}
apiVersion: v1
kind: ServiceAccount
metadata:
@@ -78,7 +77,7 @@ data:
[INPUT]
Name tail
Tag nexus.nexus-log
Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-{{ .Values.deployment.name }}*{{ .Values.namespaces.nexusNs }}_nxrm-app-*.log
Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-nxrm.deployment*{{ .Values.namespaces.nexusNs }}_nxrm-app-*.log
Parser docker
DB /var/fluent-bit/state/flb_container.db
Mem_Buf_Limit 5MB
@@ -113,7 +112,7 @@ data:
[INPUT]
Name tail
Tag nexus.request-log
Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-{{ .Values.deployment.name }}*{{ .Values.namespaces.nexusNs }}_request-log-*.log
Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-nxrm.deployment*{{ .Values.namespaces.nexusNs }}_request-log-*.log
Parser docker
DB /var/fluent-bit/state/flb_container.db
Mem_Buf_Limit 5MB
@@ -148,7 +147,7 @@ data:
[INPUT]
Name tail
Tag nexus.audit-log
Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-{{ .Values.deployment.name }}*{{ .Values.namespaces.nexusNs }}_audit-log-*.log
Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-nxrm.deployment*{{ .Values.namespaces.nexusNs }}_audit-log-*.log
Parser docker
DB /var/fluent-bit/state/flb_container.db
Mem_Buf_Limit 5MB
@@ -183,7 +182,7 @@ data:
[INPUT]
Name tail
Tag nexus.tasks-log
Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-{{ .Values.deployment.name }}*{{ .Values.namespaces.nexusNs }}_tasks-log-*.log
Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-nxrm.deployment*{{ .Values.namespaces.nexusNs }}_tasks-log-*.log
Parser docker
DB /var/fluent-bit/state/flb_container.db
Mem_Buf_Limit 5MB
@@ -358,5 +357,4 @@ spec:
- operator: "Exists"
effect: "NoExecute"
- operator: "Exists"
effect: "NoSchedule"
{{- end }}
effect: "NoSchedule"

View File

@@ -24,7 +24,6 @@ spec:
port:
number: {{ .Values.service.nexus.port }}
---
{{- if .Values.ingress.dockerIngress.enabled -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
@@ -50,4 +49,3 @@ spec:
name: {{ .Chart.Name }}-docker-service
port:
number: {{ .Values.service.docker.port }}
{{- end }}

View File

@@ -3,16 +3,13 @@ kind: Namespace
metadata:
name: {{ .Values.namespaces.nexusNs }}
---
{{- if .Values.fluentbit.enabled }}
apiVersion: v1
kind: Namespace
metadata:
name: {{ .Values.namespaces.cloudwatchNs }}
{{- end }}
---
{{- if .Values.externaldns.enabled }}
apiVersion: v1
kind: Namespace
metadata:
name: {{ .Values.namespaces.externaldnsNs }}
{{- end }}
---

View File

@@ -6,7 +6,6 @@ metadata:
annotations:
eks.amazonaws.com/role-arn: {{ .Values.serviceAccount.role }}
---
{{- if .Values.externaldns.enabled }}
apiVersion: v1
kind: ServiceAccount
metadata:
@@ -14,4 +13,4 @@ metadata:
namespace: {{ .Values.namespaces.externaldnsNs }}
annotations:
eks.amazonaws.com/role-arn: {{ .Values.serviceAccount.externaldns.role }}
{{- end }}
---

View File

@@ -14,7 +14,6 @@ spec:
port: {{ .Values.service.nexus.port }}
targetPort: {{ .Values.service.nexus.targetPort }}
---
{{- if .Values.service.docker.enabled -}}
apiVersion: v1
kind: Service
metadata:
@@ -31,4 +30,3 @@ spec:
protocol: {{ .Values.service.docker.protocol }}
port: {{ .Values.service.docker.port }}
targetPort: {{ .Values.service.docker.targetPort }}
{{- end }}

View File

@@ -1,11 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Values.workdir.configmap.name }}
namespace: {{ .Values.namespaces.nexusNs }}
data:
create-nexus-work-dir.sh: |
#!/bin/bash
# Make Nexus Repository Manager work directory
mkdir -p /nexus-repo-mgr-work-dir/work

View File

@@ -1,51 +0,0 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ .Values.workdir.daemonset.name }}
namespace: {{ .Values.namespaces.nexusNs }}
spec:
selector:
matchLabels:
job: dircreator
template:
metadata:
labels:
job: dircreator
spec:
hostPID: true
restartPolicy: Always
initContainers:
# Copy file for creating nexus work directory over and execute it on host
- name: create-nexus-work-dir
image: ubuntu:23.04
command: [/bin/sh]
args:
- -c
- >-
cp /tmp/create-nexus-work-dir.sh /host-dir &&
/usr/bin/nsenter -m/proc/1/ns/mnt -- chmod u+x /tmp/install/create-nexus-work-dir.sh &&
/usr/bin/nsenter -m/proc/1/ns/mnt /tmp/install/create-nexus-work-dir.sh
securityContext:
privileged: true
volumeMounts:
- name: create-nexus-work-dir-script
mountPath: /tmp
- name: host-mnt
mountPath: /host-dir
containers:
- name: directory-creator
image: busybox:1.33.1
command: ["/bin/sh"]
args:
- -c
- >-
tail -f /dev/null
securityContext:
privileged: true
volumes:
- name: create-nexus-work-dir-script
configMap:
name: {{ .Values.workdir.configmap.name }}
- name: host-mnt
hostPath:
path: /tmp/install

View File

@@ -4,18 +4,14 @@ namespaces:
cloudwatchNs: amazon-cloudwatch
externaldnsNs: nexus-externaldns
externaldns:
enabled: false
domainFilter: example.com #your root domain e.g example.com
awsZoneType: private # hosted zone to look at (valid values are public, private or no value for both)
fluentbit:
enabled: false
deployment:
clusterRegion: us-east-1
name: nxrm.deployment
clusterName: nxrm-nexus
logsRegion: us-east-1
fluentBitVersion: 2.28.0
replicaCount: 1
initContainer:
image:
repository: busybox
@@ -23,7 +19,7 @@ deployment:
container:
image:
repository: sonatype/nexus3
tag: 3.45.1
tag: 3.41.1
containerPort: 8081
pullPolicy: IfNotPresent
env:
@@ -52,33 +48,24 @@ ingress:
#host: "example.com" #host to apply this ingress rule to. Uncomment this in your values.yaml and set it as you wish
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/healthcheck-path: /service/rest/v1/status
alb.ingress.kubernetes.io/scheme: internal # scheme
alb.ingress.kubernetes.io/subnets: subnet-1,subnet-2 #comma separated list of subnet ids
#alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]' uncomment for https
#alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:0000000000000:certificate/00000000-1111-2222-3333-444444444444 # Uncomment for https. The AWS Certificate Manager ARN for your HTTPS certificate
dockerIngress: #Ingress for Docker Connector - comment out if you don't use docker repositories
enabled: false
alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:0000000000000:certificate/00000000-1111-2222-3333-444444444444 # The AWS Certificate Manager ARN for your HTTPS certificate
dockerIngress: #Ingress for Docker Connector - comment out if you don't use docker repositories
annotations:
kubernetes.io/ingress.class: alb # comment out if you don't use docker repositories
alb.ingress.kubernetes.io/scheme: internal # scheme comment out if you don't use docker repositories
alb.ingress.kubernetes.io/subnets: subnet-1,subnet-2 #comma separated list of subnet ids, comment out if you don't use docker repositories
# alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]' #uncomment if you use docker repositories
# alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:0000000000000:certificate/00000000-1111-2222-3333-444444444444 # Uncomment if you use docker repositories - The AWS Certificate Manager ARN for your HTTPS certificate
# external-dns.alpha.kubernetes.io/hostname: dockerrepo1.example.com, dockerrepo2.example.com, dockerrepo3.example.com # Add more docker subdomains using dockerrepoName.example.com; otherwise comment out if you don't use docker repositories
workdir:
configmap:
name: create-nexus-workdir-config
daemonset:
name: create-nexus-work-dir
storageClass:
iopsPerGB: "10" #Note: the aws plugin multiplies this by the size of the requested volume to compute the IOPS of the volume and caps it at 20,000 IOPS
alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]' #comment out if you don't use docker repositories
alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:0000000000000:certificate/00000000-1111-2222-3333-444444444444 # Comment out if you don't use docker repositories - The AWS Certificate Manager ARN for your HTTPS certificate
external-dns.alpha.kubernetes.io/hostname: dockerrepo1.example.com, dockerrepo2.example.com, dockerrepo3.example.com # Add more docker subdomains using dockerrepoName.example.com; otherwise comment out if you don't use docker repositories
pv:
storage: 120Gi
volumeMode: Filesystem
accessModes: ReadWriteOnce
reclaimPolicy: Retain
path: /nexus-repo-mgr-work-dir/work
path: /mnt
zones:
zone1: us-east-1a
zone2: us-east-1b
@@ -86,22 +73,21 @@ pvc:
accessModes: ReadWriteOnce
storage: 100Gi
service: #Nexus Repo NodePort Service
service: #Nexus Repo NodePort Service
nexus:
type: NodePort
protocol: TCP
port: 80
targetPort: 8081
docker: #Nodeport Service for Docker Service
enabled: false
type: NodePort
protocol: TCP
port: 9090
targetPort: 8081
type: NodePort
protocol: TCP
port: 80
targetPort: 8081
docker: #Nodeport Service for Docker Service
type: NodePort
protocol: TCP
port: 9090
targetPort: 8081
secret:
license:
arn: arn:aws:secretsmanager:us-east-1:000000000000:secret:nxrm-nexus-license
alias: nxrm-license.lic
arn: arn:aws:secretsmanager:us-east-1:000000000000:secret:nxrm-nexus-license
alias: nxrm-license.lic
rds:
arn: arn:aws:secretsmanager:us-east-1:000000000000:secret:nxrmrds-cred-nexus
adminpassword: