Creating public repository

This commit is contained in:
Mike Oliverio 2022-07-05 15:25:48 -04:00
parent 33c60ee0d0
commit 06fce09493
45 changed files with 2456 additions and 1 deletion

17
Dockerfile Normal file

@ -0,0 +1,17 @@
#
# Copyright (c) 2019-present Sonatype, Inc. All rights reserved.
# Includes the third-party code listed at http://links.sonatype.com/products/clm/attributions.
# "Sonatype" is a trademark of Sonatype, Inc.
#
# FROM docker-all.repo.sonatype.com/alpine:latest
# LABEL maintainer="operations-group@sonatype.com"
# RUN apk update
# WORKDIR /app
# COPY ./src ./
# EXPOSE 8080
# CMD ["./runit"]

21
LICENSE Normal file

@ -0,0 +1,21 @@
MIT License
Copyright (c) 2020 Sonatype
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

65
OPSDOC.md Normal file

@ -0,0 +1,65 @@
<!--
Copyright (c) 2019-present Sonatype, Inc. All rights reserved.
Includes the third-party code listed at http://links.sonatype.com/products/clm/attributions.
"Sonatype" is a trademark of Sonatype, Inc.
-->
## Overview
Overview of the service: what is it, why do we have it, who are the primary
contacts, how to report bugs, links to design docs and other relevant
information.
### Public Facing Endpoints
The URLs (or IPs) and ports used by the service and what they are used for
(ALB? SSH? FTP?) and notes about any certificates and their location.
## Monitoring
Monitoring dashboards, logging, introspection, and observability info.
### Runbooks
A list of every alert your monitoring system may generate for this service and
a step-by-step "what to do when..." for each of them.
### SLO
Service Level Objectives in a succinct format: a target value or range of
values for a service level that is measured by an SLI. A natural structure for
SLOs is thus SLI ≤ target, or lower bound ≤ SLI ≤ upper bound. For example, we
might decide that we will return Shakespeare search results "quickly," adopting
an SLO that our average search request latency should be less than 100
milliseconds.
For more detailed information, please check out the Service Level Objectives
doc. If you're still unsure of what your SLOs should be, please reach out to
the SREs at #ops-sre-chat.
Optional but recommended: include a section on monitoring and dashboards for SLO
tracking (see the auth-service OpsDoc for example dashboards).
## Build
How to build the software that makes the service. Where to download it from,
where the source code repository is, steps for building and making a package or
other distribution mechanisms. If it is software that you modify in any way
(open source project you contribute to or a local project) include instructions
for how a new developer gets started. Ideally the end result is a package that
can be copied to other machines for installation.
## Deploy
How to deploy the service. How to build something from scratch: RAM/disk
requirements, OS version and configuration, what packages to install, and so
on. If this is automated with a configuration management tool such as Ansible,
say so.
## Common Tasks
Step-by-step instructions for common things like provisioning
(add/change/delete), common problems and their solutions, and so on.
## DR
Where are backups of data stored? What are disaster / data recovery
procedures?

114
README.md

@ -1 +1,113 @@
# nxrm3-helm-charts ![Lint and Test Charts](https://github.com/sonatype/helm3-charts/workflows/Lint%20and%20Test%20Charts/badge.svg)
## Helm3 Charts for Sonatype Nexus Repository Manager (NXRM3) Products
These charts are designed to work out of the box with minikube using both the ingress and ingress-dns addons.
The current releases have been tested on minikube v1.12.3 running k8s v1.18.3
### User Documentation
See `docs/index.md`, which is also published at https://sonatype.github.io/nxrm-helm-repository/.
### Contributing
See the [contributing document](./CONTRIBUTING.md) for details.
For Sonatypers, note that external contributors must sign the CLA and
the Dev-Ex team must verify this prior to accepting any PR.
### Updating Charts
Charts for NXRM can be updated in the `charts/<deployment>/` directories.
The most common updates will be to use new application images and to bump
chart versions for release.
There should likely be no reason to update anything in `docs/` by hand.
Test a chart in a local k8s cluster (like minikube) by installing the local copy
from within each charts directory:
```
helm install --generate-name ./
```
### Packaging and Indexing
*Sonatype CI build will package, commit, and publish to the official helm repository.*
Upon update of `charts/`, run `build.sh` from the project root to
create `.tgz` packages of the latest chart changes and regenerate the `index.yaml`
file in the `docs/` directory, which is the root of the
[repo site](https://sonatype.github.io/nxrm-helm-repository/).
The build process requires Helm 3.
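As a rough sketch, packaging and indexing with Helm 3 boils down to the following; the loop, glob, and URL below are assumptions, and `build.sh` is the source of truth:
```bash
# Package every chart into docs/ and regenerate the repo index.
for chart in charts/*/*/; do
  helm package "$chart" --destination docs/
done
helm repo index docs/ --url https://sonatype.github.io/nxrm-helm-repository/
```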
### Testing the Helm Charts
To test Helm charts locally, follow these steps:
1. Install docker, helm, kubectl, and [minikube](https://minikube.sigs.k8s.io/docs/start/), if you don't already have them on your local workstation.
* You could also use Docker with Kubernetes enabled instead of minikube. You don't need both.
2. Start up minikube: `minikube start`
3. Confirm minikube is up and running: `minikube status`
4. List the existing pods in the cluster: `kubectl get pods` (There should not be anything listed at this point.)
5. Install the helm chart in any of these ways:
* From a copy of the source: `helm install iq path/to/your/nxrm-helm-repository/charts/<deployment>/nexus-iq --wait`
* From our production online repo: Add our helm repo locally as instructed at https://sonatype.github.io/nxrm-helm-repository/<deployment>/
6. List installed servers with helm: `helm list`
7. Watch the server start in kubernetes by running: `kubectl get pods`
8. Use the pod name you get from last command to follow the console logs: `kubectl logs -f iq-nexus-iq-server-xxx`
9. Confirm expected version numbers in those logs.
10. Forward a localhost port to a port on the running pod: `kubectl port-forward iq-nexus-iq-server-xxx 8070`
11. Connect and check that your new server is running: `http://localhost:8070/`
12. Uninstall the server with helm: `helm delete iq`
13. Confirm it's gone: `helm list && kubectl get pods`
14. Shutdown minikube: `minikube stop`
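Putting the steps above together, a full local test session might look like the following sketch (the release name `iq`, the chart path, and the pod name are illustrative; substitute the pod name reported by `kubectl get pods`):
```bash
minikube start && minikube status
kubectl get pods                           # should be empty at this point
helm install iq ./charts/<deployment>/nexus-iq --wait
helm list
kubectl get pods                           # note the pod name, e.g. iq-nexus-iq-server-xxx
kubectl logs -f iq-nexus-iq-server-xxx     # confirm expected version numbers
kubectl port-forward iq-nexus-iq-server-xxx 8070 &
curl -I http://localhost:8070/             # the fresh server should respond
helm delete iq
helm list && kubectl get pods              # confirm it's gone
minikube stop
```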
### Running Unit Tests
To unit test the Helm charts, follow these steps:
1. Install the unittest plugin for Helm: https://github.com/quintush/helm-unittest
2. Run the tests for each individual chart:
* `cd charts/<deployment>/nexus-repository-manager; helm unittest -3 -t junit -o test-output.xml .`
### Running Integration Tests
You can run the integration tests for the Helm charts with the commands below.
Before running the integration tests:
* Install docker, helm, kubectl, and [minikube](https://minikube.sigs.k8s.io/docs/start/), if you don't already have them on your local workstation.
* You could also use Docker with Kubernetes enabled instead of minikube.
* The integration tests are executed against a running cluster. Each test creates a new pod that connects to the server installed by our
helm chart. See the [Helm chart tests documentation](https://helm.sh/docs/topics/chart_tests/) for details.
Running integration tests for Nexus Repository Manager:
1. From source code: `helm install nxrm ./charts/<deployment>/nexus-repository-manager --wait`
2. Run the tests: `helm test nxrm`
### Further Notes on Usage
#### Resolver File and Ingress-DNS
Get the default `values.yaml` for each chart.
- Nexus Repository: `helm show values sonatype/nxrm3-helm-repository > repo-values.yaml`
Edit the values file you just downloaded to enable ingress support, and install the chart
with those values:
- Nexus Repository: `helm install nexus-repo sonatype/nxrm3-helm-repository -f repo-values.yaml`
If you want the apps exposed on a local `*.demo` domain for the demo environment,
use the custom values file and create a resolver file.
On a Mac it's `/etc/resolver/minikube-minikube-demo` with the following entries:
```
domain demo
nameserver 192.168.64.8
search_order 1
timeout 5
```
You'll need to update the IP address to match the running minikube instance's IP
address; use `minikube ip` to get it.
Docs for the ingress-dns addon are at
https://github.com/kubernetes/minikube/tree/master/deploy/addons/ingress-dns
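On macOS you can generate that resolver file directly from the running cluster; a small sketch (file path and entries as above):
```bash
sudo mkdir -p /etc/resolver
printf 'domain demo\nnameserver %s\nsearch_order 1\ntimeout 5\n' "$(minikube ip)" \
  | sudo tee /etc/resolver/minikube-minikube-demo
```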

80
SECURITY.md Normal file

@ -0,0 +1,80 @@
<!--
Copyright (c) 2019-present Sonatype, Inc. All rights reserved.
Includes the third-party code listed at http://links.sonatype.com/products/clm/attributions.
"Sonatype" is a trademark of Sonatype, Inc.
-->
# Reporting Security Vulnerabilities
## When to report
First check
[Important advisories of known security vulnerabilities in Sonatype products](https://support.sonatype.com/hc/en-us/sections/203012668-Security-Advisories)
to see if this has been previously reported.
## How to report
Please email reports regarding security-related issues you find to [security@sonatype.com](mailto:security@sonatype.com).
Use our public key below to keep your message safe.
## What to include
Please include the following in your report:
- A descriptive subject line.
- Your name and/or affiliation.
- A detailed technical description of the vulnerability, the attack scenario, and,
where possible, how we can reproduce your findings.
- A secure way for us to respond.
## What to expect
Your email will be acknowledged within 1-2 business days, and you'll receive a
more detailed response within 7 business days.
We ask that everyone follow responsible disclosure practices and allow
time for us to release a fix prior to public disclosure.
Once an issue is reported, Sonatype uses the following disclosure process:
When a report is received, we confirm the issue and determine its severity.
If third-party services or software require mitigation before publication, those
projects will be notified.
## Our public key
```console
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQENBFF+a9ABCADQWSAAU7w9i71Zn3TQ6k7lT9x57cRdtX7V709oeN/c/1it+gCw
onmmCyf4ypor6XcPSOasp/x0s3hVuf6YfMbI0tSwJUWWihrmoPGIXtmiSOotQE0Q
Sav41xs3YyI9LzQB4ngZR/nhp4YhioD1dVorD6LGXk08rvl2ikoqHwTagbEXZJY7
3VYhW6JHbZTLwCsfyg6uaSYF1qXfUxHPOiHYKNbhK/tM3giX+9ld/7xi+9f4zEFQ
eX9wcRTdgdDOAqDOK7MV30KXagSqvW0MgEYtKX6q4KjjRzBYjkiTdFW/yMXub/Bs
5UckxHTCuAmvpr5J0HIUeLtXi1QCkijyn8HJABEBAAG0KVNvbmF0eXBlIFNlY3Vy
aXR5IDxzZWN1cml0eUBzb25hdHlwZS5jb20+iQE4BBMBAgAiBQJRfmvQAhsDBgsJ
CAcDAgYVCAIJCgsEFgIDAQIeAQIXgAAKCRAgkmxsNtgwfUzbCACLtCgieq1kJOqo
2i136ND5ZOj31zIzNENLn8dhSg5zQwTHOcntWAtS8uCNq4fSlslwvlbPYWTLD7fE
iJn1z7BCU8gBk+pkAJJFWEPweMVt+9bYQ4HfKceGbJeuwBBhS34SK9ZIp9gfxxfA
oTm0aGYwKR5wH3sqL/mrhwKhPt9wXR4qwlE635STEX8wzJ5SBqf3ArJUtCp1rzgR
Dx+DiZed5HE1pOI2Kyb6O80bm485WThPXxpvp3bfzTNYoGzeLi/F7WkmgggkXxsT
Pyd0sSx0B/MO4lJtQvEBlIHDFno9mXa30fKl+rzp2geG5UxNHJUjaC5JhfWLEXEX
wV0ErBsmuQENBFF+a9ABCADXj04+GLIz8VCaZH554nUHEhaKoiIXH3Tj7UiMZDqy
o4WIw2RFaCQNA8T0R5Q0yxINU146JQMbA2SN59AGcGYZcajyEvTR7tLG0meMO6S0
JWpkX7s3xaC0s+5SJ/ba00oHGzW0aotgzG9BWA5OniNHK7zZKMVu7M80M/wB1RvK
x775hAeJ+8F9MDJ+ijydBtaOfDdkbg+0kU1xR6Io+vVLPk38ghlWU8QFP4/B0oWi
jK4xiDqK6cG7kyH9kC9nau+ckH8MrJ/RzEpsc4GRwqS4IEnvHWe7XbgydWS1bCp6
8uP5ma3d02elQmSEa+PABIPKnZcAf1YKLr9O/+IzEdOhABEBAAGJAR8EGAECAAkF
AlF+a9ACGwwACgkQIJJsbDbYMH3WzAf/XOm4YQZFOgG2h9d03m8me8d1vrYico+0
pBYU9iCozLgamM4er9Efb+XzfLvNVKuqyR0cgvGszukIPQYeX58DMrZ07C+E0wDZ
bG+ZAYXT5GqsHkSVnMCVIfyJNLjR4sbVzykyVtnccBL6bP3jxbCP1jJdT7bwiKre
1jQjvyoL0yIegdiN/oEdmx52Fqjt4NkQsp4sk625UBFTVISr22bnf60ZIGgrRbAP
DU1XMdIrmqmhEEQcXMp4CeflDMksOmaIeAUkZY7eddnXMwQDJTnz5ziCal+1r0R3
dh0XISRG0NkiLEXeGkrs7Sn7BAAsTsaH/1zU6YbvoWlMlHYT6EarFQ==
=sFGt
-----END PGP PUBLIC KEY BLOCK-----
```


@ -0,0 +1,24 @@
apiVersion: v2
name: nxrm-aws-resiliency
description: Helm chart for a Resilient Nexus Repository deployment in AWS
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 40.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "3.40.1"


@ -0,0 +1,21 @@
MIT License
Copyright (c) 2020 Sonatype
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@ -0,0 +1,101 @@
# Helm Chart for a Resilient Nexus Repository Deployment in AWS
This Helm chart configures the Kubernetes resources that are needed for a resilient Nexus Repository deployment on AWS as described in our documented [single-node cloud resilient deployment example using AWS](https://help.sonatype.com/repomanager3/planning-your-implementation/resiliency-and-high-availability/single-node-cloud-resilient-deployment-example-using-aws).
Use the checklist below to determine if this Helm chart is suitable for your deployment needs.
---
## When to Use This Helm Chart
Use this Helm chart if you are doing any of the following:
- Deploying Nexus Repository Pro to an AWS cloud environment with the desire for automatic failover across Availability Zones (AZs) within a single region
- Planning to configure a single Nexus Repository Pro instance within your Kubernetes/EKS cluster with two or more nodes spread across different AZs within an AWS region
- Using an external PostgreSQL database
> **Note**: A Nexus Repository Pro license is required for our resilient deployment options. Your Nexus Repository Pro license file must be stored externally in AWS Secrets Manager and mounted from there (required).
---
## Prerequisites for This Chart
In order to set up an environment as described in this section, you will need the following:
- Kubernetes 1.19+
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
- [Helm 3](https://helm.sh/docs/intro/install/)
- A Nexus Repository Pro license
- An AWS account with permissions for accessing the following AWS services:
- Elastic Kubernetes Service (EKS)
- Relational Database Service (RDS) for PostgreSQL
- Application Load Balancer (ALB)
- CloudWatch
- Simple Storage Service (S3)
- Secrets Manager
You will also need to complete the steps below. See the referenced AWS documentation for detailed configuration steps. Also see [our resiliency documentation](https://help.sonatype.com/repomanager3/planning-your-implementation/resiliency-and-high-availability/single-node-cloud-resilient-deployment-example-using-aws) for more details about why these steps are necessary and how each AWS solution functions within a resilient deployment:
1. Configure an EKS cluster - [AWS documentation for managed nodes (i.e., EC2)](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html)
2. Create an Aurora database cluster - [AWS documentation for creating an Aurora database cluster](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.CreateInstance.html)
3. Deploy the AWS Load Balancer Controller (LBC) to your EKS cluster - [AWS documentation for deploying the AWS LBC to your EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html)
4. Install AWS Secrets Store CSI drivers - You need to create an IAM service account using the ```eksctl create iamserviceaccount``` command. Before proceeding, read the points below as they contain important required steps to ensure this helm chart will work for you (an illustrative command follows this list):
   - **You must include two additional command parameters when running the command**: ```--role-only``` and ```--namespace <nexusrepo namespace>```
   - It is important to include the ```--role-only``` option in the ```eksctl create iamserviceaccount``` command so that the helm chart manages the Kubernetes service account.
   - **The namespace you specify to the ```eksctl create iamserviceaccount``` must be the same namespace into which you will deploy the Nexus Repository pod.**
     - Although the namespace does not exist at this point, you must specify it as part of the command. **Do not create that namespace manually beforehand**; the helm chart will create and manage it.
     - You should specify this same namespace as the value of ```nexusNs``` in your values.yaml.
   - Follow the instructions provided in the [AWS Secrets Store CSI drivers documentation](https://github.com/aws/secrets-store-csi-driver-provider-aws/blob/main/README.md) to install the AWS Secrets Store CSI drivers; ensure that you follow the additional instructions in the bullets above when you reach the ```eksctl create iamserviceaccount``` command on that page.
5. Ensure that your EKS nodes are granted the CloudWatchFullAccess and CloudWatchAgentServerPolicy IAM policies. This Helm chart configures Fluent Bit for log externalisation to CloudWatch.
   - [AWS documentation for setting up Fluent Bit](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/deploy-container-insights-EKS.html)
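For step 4 above, an illustrative command might look like the sketch below. The cluster name, policy ARN, and account ID are placeholders; the service account name and namespace match this chart's default values.yaml:
```bash
# Illustrative only -- substitute your own cluster, account, and policy values.
eksctl create iamserviceaccount \
  --name nexus-repository-deployment-sa \
  --namespace nexusrepo \
  --cluster nxrm-nexus \
  --attach-policy-arn arn:aws:iam::000000000000:policy/<your-secrets-access-policy> \
  --role-name nxrm-nexus-role \
  --role-only \
  --approve
```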
---
## Deployment
1. Pull the [nxrm-resiliency-aws-helmchart](https://github.com/sonatype/nxrm-resiliency-aws-helmchart).
2. Ensure you have updated your values.yaml with appropriate values for your environment.
3. Install the chart using the following:
```helm install nxrm nexus/nxrm-aws-resiliency --values values.yaml```
4. Get the Nexus Repository link using the following:
```kubectl get ingresses -n nexusrepo```
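For step 2, the values you will typically need to adapt to your environment are drawn from this chart's `values.yaml`; a sketch follows, in which all names, subnets, and ARNs are placeholders:
```yaml
deployment:
  clusterName: nxrm-nexus   # your EKS cluster name
  logsRegion: us-east-1     # region for CloudWatch logs
serviceAccount:
  role: arn:aws:iam::000000000000:role/nxrm-nexus-role   # role created in step 4 above
ingress:
  nxrmIngress:
    subnets: subnet-000000  # comma-separated list of subnets
secret:
  license:
    arn: arn:aws:secretsmanager:us-east-1:000000000000:secret:nxrm-nexus-license
  rds:
    arn: arn:aws:secretsmanager:us-east-1:000000000000:secret:nxrmrds-cred-nexus
  adminpassword:
    arn: arn:aws:secretsmanager:us-east-1:000000000000:secret:admin-nxrm-password
```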
---
## Health Check
You can use the following commands to perform various health checks:
See a list of releases:
```helm list```
Check pods using the following:
```kubectl get pods -n nexusrepo```
Check the Nexus Repository logs with the following:
```kubectl logs <pod_name> -n nexusrepo nxrm-app```
Check if the pod is OK by using the following; you shouldn't see any error/warning messages:
```kubectl describe pod <pod_name> -n nexusrepo```
Check if ingress is OK using the following:
```kubectl describe ingress <ingress_name> -n nexusrepo```
Check that the Fluent Bit pod is sending events to CloudWatch using the following:
```kubectl logs -n amazon-cloudwatch <fluent-bit pod id>```
If the above returns without error, then check CloudWatch for the ```/aws/containerinsights/<eks cluster name>/nexus-logs``` log group, which should contain four log streams.
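If you'd rather check from the command line, a sketch using the AWS CLI (log group name as above; region is a placeholder):
```bash
aws logs describe-log-streams \
  --log-group-name "/aws/containerinsights/<eks cluster name>/nexus-logs" \
  --region us-east-1
```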
---
## Uninstall
To uninstall the deployment, use the following:
```helm uninstall nxrm```
After removing the deployment, ensure that the namespace is deleted and that Nexus Repository is not listed when using the following:
```helm list```


@ -0,0 +1 @@
Thank you for installing {{ .Chart.Name }}.


@ -0,0 +1,120 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-{{ .Values.deployment.name }}
  namespace: {{ .Values.namespaces.nexusNs }}
  labels:
    app: nxrm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nxrm
  template:
    metadata:
      labels:
        app: nxrm
    spec:
      serviceAccountName: {{ .Values.serviceAccount.name }}
      initContainers:
        # chown nexus-data to the 'nexus' user and initialize log directories/files for a new pod;
        # otherwise the sidecar containers crash and back off a few times while waiting
        # for nxrm-app to start, which increases the total start-up time.
        - name: chown-nexusdata-owner-to-nexus-and-init-log-dir
          image: {{ .Values.deployment.initContainer.image.repository }}:{{ .Values.deployment.initContainer.image.tag }}
          command: [/bin/sh]
          args:
            - -c
            - >-
              mkdir -p /nexus-data/etc/logback &&
              mkdir -p /nexus-data/log/tasks &&
              mkdir -p /nexus-data/log/audit &&
              touch -a /nexus-data/log/tasks/allTasks.log &&
              touch -a /nexus-data/log/audit/audit.log &&
              touch -a /nexus-data/log/request.log &&
              chown -R '200:200' /nexus-data
          volumeMounts:
            - name: nexusdata
              mountPath: /nexus-data
      containers:
        - name: nxrm-app
          image: {{ .Values.deployment.container.image.repository }}:{{ .Values.deployment.container.image.tag }}
          securityContext:
            runAsUser: 200
          imagePullPolicy: {{ .Values.deployment.container.pullPolicy }}
          ports:
            - containerPort: {{ .Values.deployment.container.containerPort }}
          env:
            - name: DB_NAME
              value: "{{ .Values.deployment.container.env.nexusDBName }}"
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: nxrm-db-secret
                  key: db-password
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: nxrm-db-secret
                  key: db-user
            - name: DB_HOST
              valueFrom:
                secretKeyRef:
                  name: nxrm-db-secret
                  key: db-host
            - name: NEXUS_SECURITY_INITIAL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: nxrm-admin-secret
                  key: nexus-admin-password
            - name: NEXUS_SECURITY_RANDOMPASSWORD
              value: "false"
            - name: INSTALL4J_ADD_VM_PARAMS
              value: "-Xms2703m -Xmx2703m -XX:MaxDirectMemorySize=2703m -Dnexus.licenseFile=/nxrm-secrets/{{ .Values.secret.license.alias }} \
                -Dnexus.datastore.enabled=true -Djava.util.prefs.userRoot=${NEXUS_DATA}/javaprefs \
                -Dnexus.datastore.nexus.jdbcUrl=jdbc:postgresql://${DB_HOST}:{{ .Values.deployment.container.env.nexusDBPort }}/${DB_NAME} \
                -Dnexus.datastore.nexus.username=${DB_USER} \
                -Dnexus.datastore.nexus.password=${DB_PASSWORD}"
          volumeMounts:
            - mountPath: /nxrm-secrets
              name: nxrm-secrets
            - name: nexusdata
              mountPath: /nexus-data
            - name: logback-tasklogfile-override
              mountPath: /nexus-data/etc/logback/logback-tasklogfile-appender-override.xml
              subPath: logback-tasklogfile-appender-override.xml
        - name: request-log
          image: {{ .Values.deployment.requestLogContainer.image.repository }}:{{ .Values.deployment.requestLogContainer.image.tag }}
          args: [/bin/sh, -c, 'tail -n+1 -F /nexus-data/log/request.log']
          volumeMounts:
            - name: nexusdata
              mountPath: /nexus-data
        - name: audit-log
          image: {{ .Values.deployment.auditLogContainer.image.repository }}:{{ .Values.deployment.auditLogContainer.image.tag }}
          args: [/bin/sh, -c, 'tail -n+1 -F /nexus-data/log/audit/audit.log']
          volumeMounts:
            - name: nexusdata
              mountPath: /nexus-data
        - name: tasks-log
          image: {{ .Values.deployment.taskLogContainer.image.repository }}:{{ .Values.deployment.taskLogContainer.image.tag }}
          args: [/bin/sh, -c, 'tail -n+1 -F /nexus-data/log/tasks/allTasks.log']
          volumeMounts:
            - name: nexusdata
              mountPath: /nexus-data
      volumes:
        - name: nexusdata
          persistentVolumeClaim:
            claimName: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-ebs-claim
        - name: nxrm-secrets
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-secret
            fsType: ext4
        - name: logback-tasklogfile-override
          configMap:
            name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-logback-tasklogfile-override
            items:
              - key: logback-tasklogfile-appender-override.xml
                path: logback-tasklogfile-appender-override.xml


@ -0,0 +1,360 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-fluent-bit
  namespace: {{ .Values.namespaces.cloudwatchNs }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-fluent-bit-role
rules:
  - nonResourceURLs:
      - /metrics
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - namespaces
      - pods
      - pods/logs
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-fluent-bit-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-fluent-bit-role
subjects:
  - kind: ServiceAccount
    name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-fluent-bit
    namespace: {{ .Values.namespaces.cloudwatchNs }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-cluster-info
  namespace: {{ .Values.namespaces.cloudwatchNs }}
data:
  cluster.name: {{ .Values.deployment.clusterName }}
  http.server: "On"
  http.port: "2020"
  read.head: "Off"
  read.tail: "On"
  logs.region: {{ .Values.deployment.logsRegion }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-fluent-bit-config
  namespace: {{ .Values.namespaces.cloudwatchNs }}
  labels:
    k8s-app: fluent-bit
data:
  fluent-bit.conf: |
    [SERVICE]
        Flush                     5
        Log_Level                 info
        Daemon                    off
        Parsers_File              parsers.conf
        HTTP_Server               ${HTTP_SERVER}
        HTTP_Listen               0.0.0.0
        HTTP_Port                 ${HTTP_PORT}
        storage.path              /var/fluent-bit/state/flb-storage/
        storage.sync              normal
        storage.checksum          off
        storage.backlog.mem_limit 5M

    @INCLUDE nexus-log.conf
    @INCLUDE nexus-request-log.conf
    @INCLUDE nexus-audit-log.conf
    @INCLUDE nexus-tasks-log.conf

  nexus-log.conf: |
    [INPUT]
        Name                tail
        Tag                 nexus.nexus-log
        Path                /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-nxrm.deployment-*-*_{{ .Values.namespaces.nexusNs }}_nxrm-app-*.log
        Parser              docker
        DB                  /var/fluent-bit/state/flb_container.db
        Mem_Buf_Limit       5MB
        Skip_Long_Lines     Off
        Refresh_Interval    10
        Rotate_Wait         30
        storage.type        filesystem
        Read_from_Head      ${READ_FROM_HEAD}

    [FILTER]
        Name                kubernetes
        Match               nexus.nexus-log
        Kube_URL            https://kubernetes.default.svc:443
        Kube_Tag_Prefix     application.var.log.containers.
        Merge_Log           On
        Merge_Log_Key       log_processed
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off
        Labels              Off
        Annotations         Off

    [OUTPUT]
        Name                cloudwatch_logs
        Match               nexus.nexus-log
        region              ${AWS_REGION}
        log_group_name      /aws/containerinsights/${CLUSTER_NAME}/nexus-logs
        log_stream_prefix   ${HOST_NAME}-nexus.log-
        auto_create_group   true
        extra_user_agent    container-insights

  nexus-request-log.conf: |
    [INPUT]
        Name                tail
        Tag                 nexus.request-log
        Path                /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-nxrm.deployment-*-*_{{ .Values.namespaces.nexusNs }}_request-log-*.log
        Parser              docker
        DB                  /var/fluent-bit/state/flb_container.db
        Mem_Buf_Limit       5MB
        Skip_Long_Lines     Off
        Refresh_Interval    10
        Rotate_Wait         30
        storage.type        filesystem
        Read_from_Head      ${READ_FROM_HEAD}

    [FILTER]
        Name                kubernetes
        Match               nexus.request-log
        Kube_URL            https://kubernetes.default.svc:443
        Kube_Tag_Prefix     application.var.log.containers.
        Merge_Log           On
        Merge_Log_Key       log_processed
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off
        Labels              Off
        Annotations         Off

    [OUTPUT]
        Name                cloudwatch_logs
        Match               nexus.request-log
        region              ${AWS_REGION}
        log_group_name      /aws/containerinsights/${CLUSTER_NAME}/nexus-logs
        log_stream_prefix   ${HOST_NAME}-request.log-
        auto_create_group   true
        extra_user_agent    container-insights

  nexus-audit-log.conf: |
    [INPUT]
        Name                tail
        Tag                 nexus.audit-log
        Path                /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-nxrm.deployment-*-*_{{ .Values.namespaces.nexusNs }}_audit-log-*.log
        Parser              docker
        DB                  /var/fluent-bit/state/flb_container.db
        Mem_Buf_Limit       5MB
        Skip_Long_Lines     Off
        Refresh_Interval    10
        Rotate_Wait         30
        storage.type        filesystem
        Read_from_Head      ${READ_FROM_HEAD}

    [FILTER]
        Name                kubernetes
        Match               nexus.audit-log
        Kube_URL            https://kubernetes.default.svc:443
        Kube_Tag_Prefix     application.var.log.containers.
        Merge_Log           On
        Merge_Log_Key       log_processed
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off
        Labels              Off
        Annotations         Off

    [OUTPUT]
        Name                cloudwatch_logs
        Match               nexus.audit-log
        region              ${AWS_REGION}
        log_group_name      /aws/containerinsights/${CLUSTER_NAME}/nexus-logs
        log_stream_prefix   ${HOST_NAME}-audit.log-
        auto_create_group   true
        extra_user_agent    container-insights

  nexus-tasks-log.conf: |
    [INPUT]
        Name                tail
        Tag                 nexus.tasks-log
        Path                /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-nxrm.deployment-*-*_{{ .Values.namespaces.nexusNs }}_tasks-log-*.log
        Parser              docker
        DB                  /var/fluent-bit/state/flb_container.db
        Mem_Buf_Limit       5MB
        Skip_Long_Lines     Off
        Refresh_Interval    10
        Rotate_Wait         30
        storage.type        filesystem
        Read_from_Head      ${READ_FROM_HEAD}

    [FILTER]
        Name                kubernetes
        Match               nexus.tasks-log
        Kube_URL            https://kubernetes.default.svc:443
        Kube_Tag_Prefix     application.var.log.containers.
        Merge_Log           On
        Merge_Log_Key       log_processed
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off
        Labels              Off
        Annotations         Off

    [OUTPUT]
        Name                cloudwatch_logs
        Match               nexus.tasks-log
        region              ${AWS_REGION}
        log_group_name      /aws/containerinsights/${CLUSTER_NAME}/nexus-logs
        log_stream_prefix   ${HOST_NAME}-tasks.log-
        auto_create_group   true
        extra_user_agent    container-insights

  parsers.conf: |
    [PARSER]
        Name        docker
        Format      json
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%LZ

    [PARSER]
        Name        syslog
        Format      regex
        Regex       ^(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$
        Time_Key    time
        Time_Format %b %d %H:%M:%S

    [PARSER]
        Name        container_firstline
        Format      regex
        Regex       (?<log>(?<="log":")\S(?!\.).*?)(?<!\\)".*(?<stream>(?<="stream":").*?)".*(?<time>\d{4}-\d{1,2}-\d{1,2}T\d{2}:\d{2}:\d{2}\.\w*).*(?=})
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%LZ

    [PARSER]
        Name        cwagent_firstline
        Format      regex
        Regex       (?<log>(?<="log":")\d{4}[\/-]\d{1,2}[\/-]\d{1,2}[ T]\d{2}:\d{2}:\d{2}(?!\.).*?)(?<!\\)".*(?<stream>(?<="stream":").*?)".*(?<time>\d{4}-\d{1,2}-\d{1,2}T\d{2}:\d{2}:\d{2}\.\w*).*(?=})
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%LZ
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-fluent-bit
  namespace: {{ .Values.namespaces.cloudwatchNs }}
  labels:
    k8s-app: fluent-bit
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: fluent-bit
  template:
    metadata:
      labels:
        k8s-app: fluent-bit
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
        - name: fluent-bit
          image: amazon/aws-for-fluent-bit:2.10.0
          imagePullPolicy: Always
          env:
            - name: AWS_REGION
              valueFrom:
                configMapKeyRef:
                  name: fluent-bit-cluster-info
                  key: logs.region
            - name: CLUSTER_NAME
              valueFrom:
                configMapKeyRef:
                  name: fluent-bit-cluster-info
                  key: cluster.name
            - name: HTTP_SERVER
              valueFrom:
                configMapKeyRef:
                  name: fluent-bit-cluster-info
                  key: http.server
            - name: HTTP_PORT
              valueFrom:
                configMapKeyRef:
                  name: fluent-bit-cluster-info
                  key: http.port
            - name: READ_FROM_HEAD
              valueFrom:
                configMapKeyRef:
                  name: fluent-bit-cluster-info
                  key: read.head
            - name: READ_FROM_TAIL
              valueFrom:
                configMapKeyRef:
                  name: fluent-bit-cluster-info
                  key: read.tail
            - name: HOST_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CI_VERSION
              value: "k8s/1.3.7"
            # the below var is just to force DaemonSet restarts when changing configuration stored in ConfigMap above
            - name: FOO_VERSION
              value: "16"
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 500m
              memory: 100Mi
          volumeMounts:
            # Please don't change below read-only permissions
            - name: fluentbitstate
              mountPath: /var/fluent-bit/state
            - name: varlog
              mountPath: /var/log
              readOnly: true
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: fluent-bit-config
              mountPath: /fluent-bit/etc/
            - name: runlogjournal
              mountPath: /run/log/journal
              readOnly: true
            - name: dmesg
              mountPath: /var/log/dmesg
              readOnly: true
      terminationGracePeriodSeconds: 120
      volumes:
        - name: fluentbitstate
          hostPath:
            path: /var/fluent-bit/state
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: fluent-bit-config
          configMap:
            name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-fluent-bit-config
        - name: runlogjournal
          hostPath:
            path: /run/log/journal
        - name: dmesg
          hostPath:
            path: /var/log/dmesg
      serviceAccountName: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-fluent-bit
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
        - operator: "Exists"
          effect: "NoExecute"
        - operator: "Exists"
          effect: "NoSchedule"


@ -0,0 +1,41 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: {{ .Values.namespaces.nexusNs }}
  name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: {{ .Values.ingress.nxrmIngress.scheme }}
    alb.ingress.kubernetes.io/subnets: "{{ .Values.ingress.nxrmIngress.subnets }}"
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ .Chart.Name }}-service
                port:
                  number: {{ .Values.service.nexus.port }}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: {{ .Values.namespaces.nexusNs }}
  name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-ingress-nxrm-docker
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: {{ .Values.ingress.dockerIngress.scheme }}
    alb.ingress.kubernetes.io/subnets: {{ .Values.ingress.dockerIngress.subnets }}
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ .Chart.Name }}-docker-service
                port:
                  number: {{ .Values.ingress.dockerIngress.port }}


@ -0,0 +1,10 @@
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.namespaces.nexusNs }}
---
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.namespaces.cloudwatchNs }}
---


@ -0,0 +1,21 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-logback-tasklogfile-override
  namespace: {{ .Values.namespaces.nexusNs }}
data:
  logback-tasklogfile-appender-override.xml: |
    <included>
      <appender name="tasklogfile" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <File>${karaf.data}/log/tasks/allTasks.log</File>
        <filter class="org.sonatype.nexus.pax.logging.TaskLogsFilter" />
        <Append>true</Append>
        <encoder class="org.sonatype.nexus.pax.logging.NexusLayoutEncoder">
          <pattern>%d{"yyyy-MM-dd HH:mm:ss,SSSZ"} %-5p [%thread] %node %mdc{userId:-*SYSTEM} %c - %m%n</pattern>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
          <fileNamePattern>${karaf.data}/log/tasks/allTasks-%d{yyyy-MM-dd}.log.gz</fileNamePattern>
          <maxHistory>1</maxHistory>
        </rollingPolicy>
      </appender>
    </included>


@ -0,0 +1,28 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-ebs-pv
spec:
  capacity:
    storage: {{ .Values.pv.storage }}
  volumeMode: Filesystem
  accessModes:
    - {{ .Values.pv.accessModes }}
  persistentVolumeReclaimPolicy: {{ .Values.pv.reclaimPolicy }}
  storageClassName: local-storage
  local:
    path: {{ .Values.pv.path }}
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                {{- range $zone := .Values.pv.zones }}
                - {{ $zone }}
                {{- end }}


@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-ebs-claim
  namespace: {{ .Values.namespaces.nexusNs }}
spec:
  accessModes:
    - {{ .Values.pvc.accessModes }}
  storageClassName: local-storage
  resources:
    requests:
      storage: {{ .Values.pvc.storage }}


@ -0,0 +1,38 @@
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  namespace: {{ .Values.namespaces.nexusNs }}
  name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-secret
spec:
  provider: aws
  secretObjects:
    - data:
        - key: db-user
          objectName: nxrm-db-user
        - key: db-password
          objectName: nxrm-db-password
        - key: db-host
          objectName: nxrm-db-host
      secretName: nxrm-db-secret
      type: Opaque
    - data:
        - key: nexus-admin-password
          objectName: nxrm-admin-password
      secretName: nxrm-admin-secret
      type: Opaque
  parameters:
    objects: |
      - objectName: "{{ .Values.secret.license.arn }}"
        objectAlias: "{{ .Values.secret.license.alias }}"
      - objectName: "{{ .Values.secret.rds.arn }}"
        jmesPath:
          - path: "username"
            objectAlias: "nxrm-db-user"
          - path: "password"
            objectAlias: "nxrm-db-password"
          - path: "host"
            objectAlias: "nxrm-db-host"
      - objectName: "{{ .Values.secret.adminpassword.arn }}"
        jmesPath:
          - path: "admin_nxrm_password"
            objectAlias: "nxrm-admin-password"


@ -0,0 +1,7 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Values.serviceAccount.name }}
  namespace: {{ .Values.namespaces.nexusNs }}
  annotations:
    eks.amazonaws.com/role-arn: {{ .Values.serviceAccount.role }}


@ -0,0 +1,32 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ .Chart.Name }}-service
  namespace: {{ .Values.namespaces.nexusNs }}
  labels:
    app: nxrm
spec:
  type: {{ .Values.service.nexus.type }}
  selector:
    app: nxrm
  ports:
    - protocol: {{ .Values.service.nexus.protocol }}
      port: {{ .Values.service.nexus.port }}
      targetPort: {{ .Values.service.nexus.targetPort }}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ .Chart.Name }}-docker-service
  namespace: {{ .Values.namespaces.nexusNs }}
  labels:
    app: nxrm
spec:
  type: {{ .Values.service.docker.type }}
  selector:
    app: nxrm
  ports:
    - name: docker-connector
      protocol: {{ .Values.service.docker.protocol }}
      port: {{ .Values.service.docker.port }}
      targetPort: {{ .Values.service.docker.targetPort }}


@ -0,0 +1,7 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-local-storage
  namespace: {{ .Values.namespaces.nexusNs }}
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer


@ -0,0 +1,77 @@
# Declare variables to be passed into your templates.
namespaces:
  nexusNs: nexusrepo
  cloudwatchNs: amazon-cloudwatch
deployment:
  name: nxrm.deployment
  clusterName: nxrm-nexus
  logsRegion: us-east-1
  initContainer:
    image:
      repository: busybox
      tag: 1.33.1
  container:
    image:
      repository: sonatype/nexus3
      tag: 3.40.1
    containerPort: 8081
    pullPolicy: IfNotPresent
    env:
      nexusDBName: nexus
      nexusDBPort: 3306 # must match the port of your PostgreSQL endpoint (PostgreSQL's default is 5432)
  requestLogContainer:
    image:
      repository: busybox
      tag: 1.33.1
  auditLogContainer:
    image:
      repository: busybox
      tag: 1.33.1
  taskLogContainer:
    image:
      repository: busybox
      tag: 1.33.1
serviceAccount:
  name: nexus-repository-deployment-sa # This SA is created as part of steps under "AWS Secrets Manager"
  role: arn:aws:iam::000000000000:role/nxrm-nexus-role # Role with secretsmanager permissions
ingress:
  nxrmIngress:
    scheme: internal
    port: 9090
    subnets: subnet-000000 # comma-separated list of subnets
  dockerIngress: # ingress for the Docker connector
    scheme: internal
    port: 9090
    subnets: subnet-000000 # comma-separated list of subnets
pv:
  storage: 120Gi
  volumeMode: Filesystem
  accessModes: ReadWriteOnce
  reclaimPolicy: Retain
  path: /mnt
  zones:
    zone1: us-east-1a
    zone2: us-east-1b
pvc:
  accessModes: ReadWriteOnce
  storage: 100Gi
service: # Nexus Repository NodePort service
  nexus:
    type: NodePort
    protocol: TCP
    port: 80
    targetPort: 8081
  docker: # NodePort service for the Docker connector
    type: NodePort
    protocol: TCP
    port: 9090
    targetPort: 9090
secret:
  license:
    arn: arn:aws:secretsmanager:us-east-1:000000000000:secret:nxrm-nexus-license
    alias: nxrm-license.lic
  rds:
    arn: arn:aws:secretsmanager:us-east-1:000000000000:secret:nxrmrds-cred-nexus
  adminpassword:
    arn: arn:aws:secretsmanager:us-east-1:000000000000:secret:admin-nxrm-password

3
header.txt Normal file

@ -0,0 +1,3 @@
Copyright (c) 2019-present Sonatype, Inc. All rights reserved.
Includes the third-party code listed at http://links.sonatype.com/products/clm/attributions.
"Sonatype" is a trademark of Sonatype, Inc.

BIN
single-inst-oss-pro-kubernetes/.DS_Store vendored Normal file

Binary file not shown.


@ -0,0 +1,40 @@
apiVersion: v2
name: nexus-repository-manager
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
version: 38.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application.
appVersion: 3.38.1
description: Sonatype Nexus Repository Manager - Universal Binary repository
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
keywords:
- artifacts
- dependency
- management
- sonatype
- nexus
- repository
- quickstart
- ci
- repository-manager
- nexus3
home: https://www.sonatype.com/nexus-repository-oss
icon: https://sonatype.github.io/helm3-charts/NexusRepo_Vertical.svg
sources:
- https://github.com/sonatype/nexus-public
maintainers:
- email: support@sonatype.com
  name: Sonatype


@ -0,0 +1,13 @@
Copyright (c) 2020-present Sonatype, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@ -0,0 +1,188 @@
# Nexus Repository
[Nexus Repository OSS](https://www.sonatype.com/nexus-repository-oss) provides universal support for all major build tools.
- Store and distribute Maven/Java, npm, NuGet, Helm, Docker, p2, OBR, APT, Go, R, Conan components and more.
- Manage components from dev through delivery: binaries, containers, assemblies, and finished goods.
- Support for the Java Virtual Machine (JVM) ecosystem, including Gradle, Ant, Maven, and Ivy.
- Compatible with popular tools like Eclipse, IntelliJ, Hudson, Jenkins, Puppet, Chef, Docker, and more.
*Efficiency and Flexibility to Empower Development Teams*
- Streamline productivity by sharing components internally.
- Gain insight into component security, license, and quality issues.
- Build off-line with remote package availability.
- Integrate with industry-leading build tools.
---
## Introduction
This chart installs a single Nexus Repository instance within a Kubernetes cluster that has a single node (server) configured. It is not appropriate for a resilient Nexus Repository deployment. Refer to our [resiliency documentation](https://help.sonatype.com/repomanager3/planning-your-implementation/resiliency-and-high-availability) for information about resilient Nexus Repository deployment options.
Use the checklist below to determine if this Helm chart is suitable for your deployment needs.
### When to Use This Helm Chart
Use this Helm chart if you are doing any of the following:
- Deploying either Nexus Repository Pro or OSS to an on-premises environment with bare metal/VM server (Node)
- Deploying a single Nexus Repository instance within a Kubernetes cluster that has a single Node configured
> **Note**: If you are using Nexus Repository Pro, your license file and embedded database will reside on the node and be mounted on the container as a Persistent Volume (required).
### When Not to Use This Helm Chart
Do not use this Helm chart and, instead, refer to our [resiliency documentation](https://help.sonatype.com/repomanager3/planning-your-implementation/resiliency-and-high-availability) if you are doing any of the following:
- Deploying Nexus Repository Pro to a cloud environment with the desire for automatic failover across Availability Zones (AZs) within a single region
- Planning to configure a single Nexus Repository Pro instance within your Kubernetes/EKS cluster with two or more nodes spread across different AZs within an AWS region
- Using an external PostgreSQL database
> **Note**: A Nexus Repository Pro license is required for our resilient deployment options. Your Nexus Repository Pro license file must be stored externally as either mounted from AWS Secrets/Azure Key Vault in AWS/Azure deployments or mounted using Kustomize for on-premises deployments (required).
> **Note**: We do not currently provide Helm charts for our resilient deployment options.
---
## Prerequisites for This Chart
- Kubernetes 1.19+
- PV provisioner support in the underlying infrastructure
- Helm 3
### With Open Docker Image
By default, this Chart uses Sonatype's Public Docker image. If you want to use a different image, run with the following: `--set nexus.imageName=<my>/<image>`.
### With Red Hat Certified container
If you're looking to run our Certified Red Hat image in an OpenShift 4 environment, there is a Certified Operator in OperatorHub.
---
## Adding the repo
To add as a Helm Repo, use the following:
```helm repo add sonatype https://sonatype.github.io/helm3-charts/```
---
## Testing the Chart
To test the chart, use the following:
```bash
$ helm install --dry-run --debug --generate-name ./
```
To test the chart with your own values, use the following:
```bash
$ helm install --dry-run --debug --generate-name -f myvalues.yaml ./
```
---
## Installing the Chart
To install the chart, use the following:
```bash
$ helm install nexus-rm sonatype/nexus-repository-manager [ --version v29.2.0 ]
```
The above command deploys Nexus Repository on the Kubernetes cluster in the default configuration.
You can pass custom configuration values as follows:
```bash
$ helm install -f myvalues.yaml sonatype-nexus ./
```
The default admin password is randomized and can be found in `/nexus-data/admin.password`, or you can get the initial static password (admin/admin123)
by setting the environment variable `NEXUS_SECURITY_RANDOMPASSWORD` to `false` in your `values.yaml`.
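A minimal sketch of that override, assuming the chart passes `nexus.env` entries through as standard container environment variables:
```yaml
# myvalues.yaml -- sketch; restores the static admin/admin123 initial password
nexus:
  env:
    - name: NEXUS_SECURITY_RANDOMPASSWORD
      value: "false"
```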
---
## Uninstalling the Chart
To uninstall/delete the deployment, use the following:
```bash
$ helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
plinking-gopher default 1 2021-03-10 15:44:57.301847 -0800 PST deployed nexus-repository-manager-29.2.0 3.29.2
$ helm delete plinking-gopher
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
---
## Configuration
The following table lists the configurable parameters of the Nexus chart and their default values.
| Parameter | Description | Default |
|--------------------------------------------|----------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------|
| `deploymentStrategy` | Deployment Strategy | `Recreate` |
| `nexus.imagePullPolicy` | Nexus Repository image pull policy | `IfNotPresent` |
| `nexus.imagePullSecrets` | Secret to download Nexus Repository image from private registry | `nil` |
| `nexus.docker.enabled` | Enable/disable Docker support | `false` |
| `nexus.docker.registries` | Support multiple Docker registries | (see below) |
| `nexus.docker.registries[0].host` | Host for the Docker registry | `cluster.local` |
| `nexus.docker.registries[0].port` | Port for the Docker registry | `5000` |
| `nexus.docker.registries[0].secretName` | TLS Secret Name for the ingress | `registrySecret` |
| `nexus.env` | Nexus Repository environment variables | `[{INSTALL4J_ADD_VM_PARAMS: -Xms1200M -Xmx1200M -XX:MaxDirectMemorySize=2G -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap}]` |
| `nexus.resources` | Nexus Repository resource requests and limits | `{}` |
| `nexus.nexusPort` | Internal port for Nexus Repository service | `8081` |
| `nexus.securityContext` | Security Context (for enabling official image use `fsGroup: 2000`) | `{}` |
| `nexus.labels` | Service labels | `{}` |
| `nexus.podAnnotations` | Pod Annotations | `{}` |
| `nexus.livenessProbe.initialDelaySeconds` | LivenessProbe initial delay | 30 |
| `nexus.livenessProbe.periodSeconds` | Seconds between polls | 30 |
| `nexus.livenessProbe.failureThreshold` | Number of attempts before failure | 6 |
| `nexus.livenessProbe.timeoutSeconds` | Time in seconds after liveness probe times out | `nil` |
| `nexus.livenessProbe.path` | Path for LivenessProbe | / |
| `nexus.readinessProbe.initialDelaySeconds` | ReadinessProbe initial delay | 30 |
| `nexus.readinessProbe.periodSeconds` | Seconds between polls | 30 |
| `nexus.readinessProbe.failureThreshold` | Number of attempts before failure | 6 |
| `nexus.readinessProbe.timeoutSeconds` | Time in seconds after readiness probe times out | `nil` |
| `nexus.readinessProbe.path` | Path for ReadinessProbe | / |
| `nexus.hostAliases` | Aliases for IPs in /etc/hosts | [] |
| `nexus.properties.override` | Set to true to override default nexus.properties | `false` |
| `nexus.properties.data` | A map of custom nexus properties if `override` is set to true | `nexus.scripts.allowCreation: true` |
| `ingress.enabled` | Create an ingress for Nexus Repository | `true` |
| `ingress.annotations` | Annotations to enhance ingress configuration | `{kubernetes.io/ingress.class: nginx}` |
| `ingress.tls.secretName` | Name of the secret storing TLS cert, `false` to use the Ingress' default certificate | `nexus-tls` |
| `ingress.path` | Path for ingress rules. GCP users should set to `/*`. | `/` |
| `tolerations` | tolerations list | `[]` |
| `config.enabled` | Enable configmap | `false` |
| `config.mountPath` | Path to mount the config | `/sonatype-nexus-conf` |
| `config.data` | Configmap data | `nil` |
| `deployment.annotations` | Annotations to enhance deployment configuration | `{}` |
| `deployment.initContainers` | Init containers to run before main containers | `nil` |
| `deployment.postStart.command` | Command to run after starting the container | `nil` |
| `deployment.terminationGracePeriodSeconds` | Update termination grace period (in seconds) | 120s |
| `deployment.additionalContainers` | Add additional Container | `nil` |
| `deployment.additionalVolumes` | Add additional Volumes | `nil` |
| `deployment.additionalVolumeMounts` | Add additional Volume mounts | `nil` |
| `secret.enabled` | Enable secret | `false` |
| `secret.mountPath` | Path to mount the secret | `/etc/secret-volume` |
| `secret.readOnly` | Secret readonly state | `true` |
| `secret.data` | Secret data | `nil` |
| `service.enabled` | Enable additional service | `true` |
| `service.name` | Service name | `nexus3` |
| `service.labels` | Service labels | `nil` |
| `service.annotations` | Service annotations | `nil` |
| `service.type` | Service Type | `ClusterIP` |
| `route.enabled` | Set to true to create route for additional service | `false` |
| `route.name` | Name of route | `docker` |
| `route.portName` | Target port name of service | `docker` |
| `route.labels` | Labels to be added to route | `{}` |
| `route.annotations` | Annotations to be added to route | `{}` |
| `route.path` | Host name of Route e.g. jenkins.example.com | nil |
| `serviceAccount.create` | Set to true to create ServiceAccount | `true` |
| `serviceAccount.annotations` | Set annotations for ServiceAccount | `{}` |
| `serviceAccount.name` | The name of the service account to use. Auto-generate if not set and create is true. | `{}` |
| `persistence.enabled` | Set false to eliminate persistent storage | `true` |
| `persistence.existingClaim` | Specify the name of an existing persistent volume claim to use instead of creating a new one | nil |
| `persistence.storageSize` | Size of the storage the chart will request | `8Gi` |
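For example, to enable the Docker registry support described in the table above, a `myvalues.yaml` might contain the following sketch (host and secret name are illustrative):
```yaml
nexus:
  docker:
    enabled: true
    registries:
      - host: docker.example.com    # hostname served by the ingress
        port: 5000                  # Docker connector port
        secretName: registrySecret  # TLS secret for the ingress
```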
### Persistence
By default, a `PersistentVolumeClaim` is created and mounted into the `/nexus-data` directory. In order to disable this functionality, you can change the `values.yaml` to disable persistence, which will use an `emptyDir` instead.
> *"An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever."*

Binary file not shown.


@ -0,0 +1,27 @@
{{- if .Values.ingress.enabled }}
1. Your ingresses are available here:
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $.Values.ingress.hostRepo }}{{ $.Values.ingress.hostPath }}
{{- if $.Values.nexus.docker.enabled }}
{{- range $registry := .Values.nexus.docker.registries }}
https://{{ $registry.host }}/
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
1. Get the application URL by running these commands:
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "nexus.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
Your application is available at http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
1. Get the application URL by running these commands:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "nexus.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "nexus.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
{{- range $index, $port := .Values.service.ports }}
Your application is available at http://$SERVICE_IP:{{ $port }}
{{- end }}
{{- else if contains "ClusterIP" .Values.service.type }}
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "nexus.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8081:80
Your application is available at http://127.0.0.1:8081
{{- end }}


@ -0,0 +1,63 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "nexus.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "nexus.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "nexus.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "nexus.labels" -}}
helm.sh/chart: {{ include "nexus.chart" . }}
{{ include "nexus.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
{{/*
Selector labels
*/}}
{{- define "nexus.selectorLabels" -}}
app.kubernetes.io/name: {{ include "nexus.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "nexus.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "nexus.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}

@@ -0,0 +1,17 @@
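{{- /* Renders .Values.nexus.properties.data as a nexus.properties file; only emitted when nexus.properties.override is true */}}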
{{- if .Values.nexus.properties.override -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "nexus.name" . }}-properties
labels: {{- include "nexus.labels" . | nindent 4 }}
{{- if .Values.nexus.extraLabels }}
{{- with .Values.nexus.extraLabels }}
{{ toYaml . | indent 4 }}
{{- end }}
{{- end }}
data:
nexus.properties: |
{{- range $k, $v := .Values.nexus.properties.data }}
{{ $k }}={{ $v }}
{{- end }}
{{- end }}

@@ -0,0 +1,15 @@
{{- if .Values.config.enabled -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "nexus.name" . }}-conf
labels:
{{ include "nexus.labels" . | indent 4 }}
{{- if .Values.nexus.extraLabels }}
{{- with .Values.nexus.extraLabels }}
{{ toYaml . | indent 4 }}
{{- end }}
{{- end }}
data:
{{ toYaml .Values.config.data | indent 2 }}
{{- end }}

@@ -0,0 +1,163 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "nexus.fullname" . }}
labels:
{{ include "nexus.labels" . | indent 4 }}
{{- if .Values.nexus.extraLabels }}
{{- with .Values.nexus.extraLabels }}
{{ toYaml . | indent 4 }}
{{- end }}
{{- end }}
{{- if .Values.deployment.annotations }}
annotations:
{{ toYaml .Values.deployment.annotations | nindent 4 }}
{{- end }}
spec:
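{{- /* Single replica by design: Nexus keeps its state in /nexus-data, which is mounted from a ReadWriteOnce volume */}}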
replicas: 1
strategy:
type: {{ .Values.deploymentStrategy }}
selector:
matchLabels:
{{- include "nexus.selectorLabels" . | nindent 6 }}
{{- if .Values.nexus.extraSelectorLabels }}
{{- with .Values.nexus.extraSelectorLabels }}
{{ toYaml . | indent 6 }}
{{- end }}
{{- end }}
template:
metadata:
annotations:
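{{- /* A change to nexus.properties changes this checksum, which rolls the pods so they pick up the new config */}}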
checksum/configmap-properties: {{ include (print .Template.BasePath "/configmap-properties.yaml") $ | sha256sum }}
{{- if .Values.nexus.podAnnotations }}
{{ toYaml .Values.nexus.podAnnotations | nindent 8}}
{{- end }}
labels:
{{- include "nexus.selectorLabels" . | nindent 8 }}
spec:
serviceAccountName: {{ include "nexus.serviceAccountName" . }}
{{- if .Values.deployment.initContainers }}
initContainers:
{{ toYaml .Values.deployment.initContainers | nindent 6 }}
{{- end }}
{{- if .Values.nexus.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nexus.nodeSelector | nindent 8 }}
{{- end }}
{{- if .Values.nexus.hostAliases }}
hostAliases:
{{ toYaml .Values.nexus.hostAliases | nindent 8 }}
{{- end }}
{{- if .Values.nexus.imagePullSecrets }}
imagePullSecrets:
{{- toYaml .Values.nexus.imagePullSecrets | nindent 8 }}
{{- end }}
{{- if .Values.deployment.terminationGracePeriodSeconds }}
terminationGracePeriodSeconds: {{ .Values.deployment.terminationGracePeriodSeconds }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
lifecycle:
{{- if .Values.deployment.postStart.command }}
postStart:
exec:
command: {{ .Values.deployment.postStart.command }}
{{- end }}
env:
{{ toYaml .Values.nexus.env | nindent 12 }}
envFrom:
{{ toYaml .Values.nexus.envFrom | nindent 12 }}
resources:
{{ toYaml .Values.nexus.resources | nindent 12 }}
ports:
- name: nexus-ui
containerPort: {{ .Values.nexus.nexusPort }}
{{- if .Values.nexus.docker.enabled }}
{{- range .Values.nexus.docker.registries }}
- name: docker-{{ .port }}
containerPort: {{ .port }}
{{- end }}
{{- end }}
livenessProbe:
httpGet:
path: {{ .Values.nexus.livenessProbe.path }}
port: {{ .Values.nexus.nexusPort }}
initialDelaySeconds: {{ .Values.nexus.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.nexus.livenessProbe.periodSeconds }}
failureThreshold: {{ .Values.nexus.livenessProbe.failureThreshold }}
{{- if .Values.nexus.livenessProbe.timeoutSeconds }}
timeoutSeconds: {{ .Values.nexus.livenessProbe.timeoutSeconds }}
{{- end }}
readinessProbe:
httpGet:
path: {{ .Values.nexus.readinessProbe.path }}
port: {{ .Values.nexus.nexusPort }}
initialDelaySeconds: {{ .Values.nexus.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.nexus.readinessProbe.periodSeconds }}
failureThreshold: {{ .Values.nexus.readinessProbe.failureThreshold }}
{{- if .Values.nexus.readinessProbe.timeoutSeconds }}
timeoutSeconds: {{ .Values.nexus.readinessProbe.timeoutSeconds }}
{{- end }}
volumeMounts:
- mountPath: /nexus-data
name: {{ template "nexus.name" . }}-data
{{- if .Values.config.enabled }}
- mountPath: {{ .Values.config.mountPath }}
name: {{ template "nexus.name" . }}-conf
{{- end }}
{{- if .Values.nexus.properties.override }}
- mountPath: /nexus-data/etc/nexus.properties
name: {{ template "nexus.name" . }}-properties
subPath: nexus.properties
{{- end }}
{{- if .Values.secret.enabled }}
- mountPath: {{ .Values.secret.mountPath }}
name: {{ template "nexus.name" . }}-secret
readOnly: {{ .Values.secret.readOnly }}
{{- end }}
{{- if .Values.deployment.additionalVolumeMounts}}
{{ toYaml .Values.deployment.additionalVolumeMounts | nindent 12 }}
{{- end }}
{{- if .Values.deployment.additionalContainers }}
{{ toYaml .Values.deployment.additionalContainers | nindent 8 }}
{{- end }}
{{- if .Values.nexus.securityContext }}
securityContext:
{{ toYaml .Values.nexus.securityContext | nindent 8 }}
{{- end }}
volumes:
- name: {{ template "nexus.name" . }}-data
{{- if .Values.persistence.enabled }}
persistentVolumeClaim:
claimName: {{ .Values.persistence.existingClaim | default (printf "%s-%s" (include "nexus.fullname" .) "data") }}
{{- else }}
emptyDir: {}
{{- end }}
{{- if .Values.config.enabled }}
- name: {{ template "nexus.name" . }}-conf
configMap:
name: {{ template "nexus.name" . }}-conf
{{- end }}
{{- if .Values.nexus.properties.override }}
- name: {{ template "nexus.name" . }}-properties
configMap:
name: {{ template "nexus.name" . }}-properties
items:
- key: nexus.properties
path: nexus.properties
{{- end }}
{{- if .Values.secret.enabled }}
- name: {{ template "nexus.name" . }}-secret
secret:
secretName: {{ template "nexus.name" . }}-secret
{{- end }}
{{- if .Values.deployment.additionalVolumes }}
{{ toYaml .Values.deployment.additionalVolumes | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | nindent 8 }}
{{- end }}

@@ -0,0 +1,82 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "nexus.fullname" . -}}
{{- $svcPort := .Values.nexus.nexusPort -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
{{- include "nexus.labels" . | nindent 4 }}
{{- if .Values.nexus.extraLabels }}
{{- with .Values.nexus.extraLabels }}
{{ toYaml . | indent 4 }}
{{- end }}
{{- end }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.ingressClassName }}
ingressClassName: {{ .Values.ingress.ingressClassName }}
{{- end }}
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
- host: {{ .Values.ingress.hostRepo }}
http:
paths:
- path: {{ .Values.ingress.hostPath }}
pathType: Prefix
backend:
service:
name: {{ $fullName }}
port:
number: {{ $svcPort }}
{{ if .Values.nexus.docker.enabled }}
{{ range $registry := .Values.nexus.docker.registries }}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ $fullName | trunc 49 }}-docker-{{ $registry.port }}
labels:
{{- include "nexus.labels" $ | nindent 4 }}
{{- if $.Values.nexus.extraLabels }}
{{- with $.Values.nexus.extraLabels }}
{{ toYaml . | indent 4 }}
{{- end }}
{{- end }}
{{- with $.Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
tls:
- hosts:
- {{ $registry.host | quote }}
secretName: {{ $registry.secretName }}
rules:
- host: {{ $registry.host }}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: {{ $fullName | trunc 49 }}-docker-{{ $registry.port }}
port:
number: {{ $registry.port }}
{{- end }} {{- /* range of nexus.docker.registries */ -}}
{{- end }} {{- /* nexus.docker.enabled */ -}}
{{- end }} {{- /* ingress.enabled */ -}}

@@ -0,0 +1,23 @@
{{- if .Values.nexusProxyRoute.enabled }}
apiVersion: route.openshift.io/v1
kind: Route
metadata:
name: {{ template "nexus.fullname" . }}
{{- with .Values.nexusProxyRoute.labels }}
labels: {{- toYaml . | nindent 4 }}
{{- end }}
annotations:
{{- range $key, $value := .Values.nexusProxyRoute.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
spec:
host: {{ .Values.nexusProxyRoute.path }}
port:
targetPort: nexus-ui  # must match the UI port name on the chart's Service
tls:
insecureEdgeTerminationPolicy: Redirect
termination: edge
to:
kind: Service
name: {{ template "nexus.fullname" . }}
weight: 100
wildcardPolicy: None
{{- end }}

@@ -0,0 +1,26 @@
{{- if not .Values.statefulset.enabled }}
{{- if .Values.persistence.pdName -}}
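{{- /* Optional pre-provisioned GCE persistent disk, bound to the chart's data PVC via claimRef */}}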
apiVersion: v1
kind: PersistentVolume
metadata:
name: {{ .Values.persistence.pdName }}
labels:
{{ include "nexus.labels" . | indent 4 }}
{{- if .Values.nexus.extraLabels }}
{{- with .Values.nexus.extraLabels }}
{{ toYaml . | indent 4 }}
{{- end }}
{{- end }}
spec:
capacity:
storage: {{ .Values.persistence.storageSize }}
accessModes:
- ReadWriteOnce
claimRef:
namespace: {{ .Release.Namespace }}
name: {{ template "nexus.fullname" . }}-data
gcePersistentDisk:
pdName: {{ .Values.persistence.pdName }}
fsType: {{ .Values.persistence.fsType }}
{{- end }}
{{- end }}

@@ -0,0 +1,30 @@
{{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "nexus.fullname" . }}-data
labels:
{{ include "nexus.labels" . | indent 4 }}
{{- if .Values.nexus.extraLabels }}
{{- with .Values.nexus.extraLabels }}
{{ toYaml . | indent 4 }}
{{- end }}
{{- end }}
{{- if .Values.persistence.annotations }}
annotations:
{{ toYaml .Values.persistence.annotations | indent 4 }}
{{- end }}
spec:
accessModes:
- {{ .Values.persistence.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.storageSize | quote }}
{{- if .Values.persistence.storageClass }}
{{- if (eq "-" .Values.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistence.storageClass }}"
{{- end }}
{{- end }}
{{- end }}

@@ -0,0 +1,27 @@
{{- if .Values.route.enabled }}
apiVersion: route.openshift.io/v1
kind: Route
metadata:
name: {{ .Values.route.name }}
{{- with .Values.route.labels }}
labels: {{- toYaml . | nindent 4 }}
{{- end }}
annotations:
{{- range $key, $value := .Values.route.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
spec:
host: {{ .Values.route.path }}
port:
targetPort: {{ .Values.service.portName }}
tls:
insecureEdgeTerminationPolicy: Redirect
termination: edge
to:
kind: Service
{{- if .Values.service.name }}
name: {{ .Values.service.name }}
{{- else }}
name: {{ template "nexus.fullname" . }}
{{- end }}
weight: 100
wildcardPolicy: None
{{- end }}

@@ -0,0 +1,15 @@
{{- if .Values.secret.enabled -}}
apiVersion: v1
kind: Secret
metadata:
name: {{ template "nexus.name" . }}-secret
labels:
{{ include "nexus.labels" . | indent 4 }}
{{- if .Values.nexus.extraLabels }}
{{- with .Values.nexus.extraLabels }}
{{ toYaml . | indent 4 }}
{{- end }}
{{- end }}
data:
{{ toYaml .Values.secret.data | indent 2 }}
{{- end}}

@@ -0,0 +1,66 @@
{{- if .Values.service.enabled -}}
---
apiVersion: v1
kind: Service
metadata:
name: {{ include "nexus.fullname" . }}
{{- if .Values.service.annotations }}
annotations:
{{ toYaml .Values.service.annotations | indent 4 }}
{{- end }}
labels:
{{- include "nexus.labels" . | nindent 4 }}
{{- if .Values.nexus.extraLabels }}
{{- with .Values.nexus.extraLabels }}
{{ toYaml . | indent 4 }}
{{- end }}
{{- end }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.nexus.nexusPort }}
protocol: TCP
name: nexus-ui
selector:
{{- include "nexus.selectorLabels" . | nindent 4 }}
{{- if .Values.nexus.extraSelectorLabels }}
{{- with .Values.nexus.extraSelectorLabels }}
{{ toYaml . | indent 4 }}
{{- end }}
{{- end }}
{{- if .Values.nexus.docker.enabled }}
{{- range $registry := .Values.nexus.docker.registries }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ include "nexus.fullname" $ | trunc 49 }}-docker-{{ $registry.port }}
{{- if $.Values.service.annotations }}
annotations:
{{ toYaml $.Values.service.annotations | indent 4 }}
{{- end }}
labels:
{{- include "nexus.labels" $ | nindent 4 }}
{{- if $.Values.nexus.extraLabels }}
{{- with $.Values.nexus.extraLabels }}
{{ toYaml . | indent 4 }}
{{- end }}
{{- end }}
spec:
type: {{ $.Values.service.type }}
ports:
- port: {{ $registry.port }}
protocol: TCP
name: docker-{{ $registry.port }}
selector:
{{- include "nexus.selectorLabels" $ | nindent 4 }}
{{- if $.Values.nexus.extraSelectorLabels }}
{{- with $.Values.nexus.extraSelectorLabels }}
{{ toYaml . | indent 4 }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

@@ -0,0 +1,15 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "nexus.serviceAccountName" . }}
labels: {{- include "nexus.labels" . | nindent 4 }}
{{- if .Values.nexus.extraLabels }}
{{- with .Values.nexus.extraLabels }}
{{ toYaml . | indent 4 }}
{{- end }}
{{- end }}
{{- with .Values.serviceAccount.annotations }}
annotations: {{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}

@@ -0,0 +1,25 @@
# This test checks the logs to confirm the running app version is the same as the chart app version
# It runs only when persistence.enabled is set to true in the values.yaml file
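# Run the chart's hook tests after install with: helm test <release-name>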
{{- if .Values.persistence.enabled }}
apiVersion: v1
kind: Pod
metadata:
name: "{{ .Release.Name }}-test-check-logs"
annotations:
"helm.sh/hook": test
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed
spec:
containers:
- name: {{ .Release.Name }}-test-check-logs
image: busybox
command: ["/bin/sh"]
args: ["-c", "cat /nexus-data/log/nexus.log | grep {{ .Chart.AppVersion }} || exit 1"]
volumeMounts:
- mountPath: /nexus-data
name: {{ template "nexus.name" . }}-data
volumes:
- name: {{ template "nexus.name" . }}-data
persistentVolumeClaim:
claimName: {{ .Values.persistence.existingClaim | default (printf "%s-%s" (include "nexus.fullname" .) "data") }}
restartPolicy: Never
{{- end }}

@@ -0,0 +1,15 @@
# This test checks that the server is up and running by making a wget request against the service
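# It resolves the service's in-cluster DNS name, so run it via 'helm test <release-name>' once the release is up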
apiVersion: v1
kind: Pod
metadata:
name: "{{ .Release.Name }}-test-connection"
annotations:
"helm.sh/hook": test
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed
spec:
containers:
- name: {{ .Release.Name }}-test-connection
image: busybox
command: ['wget']
args: ['{{ include "nexus.fullname" . }}:{{ .Values.nexus.nexusPort }}']
restartPolicy: Never

@@ -0,0 +1,85 @@
suite: deployment
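# These assertions are executed by the helm-unittest plugin (e.g. 'helm unittest .' from the chart directory)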
templates:
- deployment.yaml
- configmap-properties.yaml
tests:
- it: renders with defaults
template: deployment.yaml
asserts:
- hasDocuments:
count: 1
- isKind:
of: Deployment
- equal:
path: apiVersion
value: apps/v1
- equal:
path: metadata.name
value: RELEASE-NAME-nexus-repository-manager
- matchRegex:
path: metadata.labels.[app.kubernetes.io/name]
pattern: nexus-repository-manager
- matchRegex:
path: metadata.labels.[app.kubernetes.io/version]
pattern: 3\.\d+\.\d+
- matchRegex:
path: spec.template.metadata.annotations.[checksum/configmap-properties]
pattern: .+
- equal:
path: spec.replicas
value: 1
- equal:
path: spec.strategy.type
value: Recreate
- matchRegex:
path: spec.template.spec.containers[0].image
pattern: sonatype/nexus3:3\.\d+\.\d+
- equal:
path: spec.template.spec.containers[0].securityContext
value: null
- equal:
path: spec.template.spec.containers[0].imagePullPolicy
value: IfNotPresent
- equal:
path: spec.template.spec.containers[0].env
value:
- name: INSTALL4J_ADD_VM_PARAMS
value: -Xms2703M -Xmx2703M -XX:MaxDirectMemorySize=2703M -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
- name: NEXUS_SECURITY_RANDOMPASSWORD
value: "true"
- equal:
path: spec.template.spec.containers[0].ports
value:
- containerPort: 8081
name: nexus-ui
- equal:
path: spec.template.spec.containers[0].livenessProbe
value:
failureThreshold: 6
httpGet:
path: /
port: 8081
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 10
- equal:
path: spec.template.spec.containers[0].readinessProbe
value:
failureThreshold: 6
httpGet:
path: /
port: 8081
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 10
- equal:
path: spec.template.spec.containers[0].volumeMounts
value:
- mountPath: /nexus-data
name: nexus-repository-manager-data
- equal:
path: spec.template.spec.volumes
value:
- name: nexus-repository-manager-data
persistentVolumeClaim:
claimName: RELEASE-NAME-nexus-repository-manager-data

@@ -0,0 +1,144 @@
suite: ingress
templates:
- ingress.yaml
tests:
- it: renders with defaults
set:
ingress:
enabled: true
asserts:
- hasDocuments:
count: 1
- isKind:
of: Ingress
- equal:
path: apiVersion
value: networking.k8s.io/v1
- equal:
path: metadata.labels.[app.kubernetes.io/instance]
value: RELEASE-NAME
- equal:
path: metadata.labels.[app.kubernetes.io/managed-by]
value: Helm
- matchRegex:
path: metadata.labels.[app.kubernetes.io/version]
pattern: \d+\.\d+\.\d+
- matchRegex:
path: metadata.labels.[helm.sh/chart]
pattern: nexus-repository-manager-\d+\.\d+\.\d+
- equal:
path: metadata.labels.[app.kubernetes.io/name]
value: nexus-repository-manager
- equal:
path: metadata.annotations
value:
nginx.ingress.kubernetes.io/proxy-body-size: "0"
- documentIndex: 0
equal:
path: metadata.name
value: RELEASE-NAME-nexus-repository-manager
- documentIndex: 0
equal:
path: spec
value:
ingressClassName: nginx
rules:
- host: repo.demo
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: RELEASE-NAME-nexus-repository-manager
port:
number: 8081
- it: renders a second docker ingress
set:
ingress:
enabled: true
nexus:
docker:
enabled: true
registries:
- host: docker.repo.demo
port: 5000
secretName: registry-secret
asserts:
- hasDocuments:
count: 2
- isKind:
of: Ingress
- equal:
path: apiVersion
value: networking.k8s.io/v1
- equal:
path: metadata.labels.[app.kubernetes.io/instance]
value: RELEASE-NAME
- equal:
path: metadata.labels.[app.kubernetes.io/managed-by]
value: Helm
- matchRegex:
path: metadata.labels.[app.kubernetes.io/version]
pattern: \d+\.\d+\.\d+
- matchRegex:
path: metadata.labels.[helm.sh/chart]
pattern: nexus-repository-manager-\d+\.\d+\.\d+
- equal:
path: metadata.labels.[app.kubernetes.io/name]
value: nexus-repository-manager
- equal:
path: metadata.annotations
value:
nginx.ingress.kubernetes.io/proxy-body-size: "0"
- documentIndex: 0
equal:
path: metadata.name
value: RELEASE-NAME-nexus-repository-manager
- documentIndex: 1
equal:
path: metadata.name
value: RELEASE-NAME-nexus-repository-manager-docker-5000
- documentIndex: 1
equal:
path: spec
value:
rules:
- host: docker.repo.demo
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: RELEASE-NAME-nexus-repository-manager-docker-5000
port:
number: 5000
tls:
- hosts:
- docker.repo.demo
secretName: registry-secret
- it: is disabled by default
asserts:
- hasDocuments:
count: 0
- it: renders with tls config when provided
set:
ingress:
enabled: true
tls:
- secretName: nexus-tls-local
hosts:
- repo.host
asserts:
- equal:
path: spec.tls
value:
- secretName: nexus-tls-local
hosts:
- repo.host

@@ -0,0 +1,178 @@
---
statefulset:
# Deploying as a StatefulSet is not supported; keep this disabled
enabled: false
# Kubernetes' default strategy is RollingUpdate (maxSurge and maxUnavailable of 25%); this chart sets the type to Recreate so the ReadWriteOnce data volume can be released before a replacement pod starts. Adjust to your usage.
deploymentStrategy: Recreate
image:
# Sonatype Official Public Image
repository: sonatype/nexus3
tag: 3.38.1
pullPolicy: IfNotPresent
nexus:
docker:
enabled: false
# registries:
# - host: chart.local
# port: 5000
# secretName: registrySecret
env:
# minimum recommended memory settings for a small, personal instance, from
# https://help.sonatype.com/repomanager3/product-information/system-requirements
- name: INSTALL4J_ADD_VM_PARAMS
value: "-Xms2703M -Xmx2703M -XX:MaxDirectMemorySize=2703M -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap"
- name: NEXUS_SECURITY_RANDOMPASSWORD
value: "true"
properties:
override: false
data:
nexus.scripts.allowCreation: true
# See this article for LDAP configuration options: https://support.sonatype.com/hc/en-us/articles/216597138-Setting-Advanced-LDAP-Connection-Properties-in-Nexus-Repository-Manager
# nexus.ldap.env.java.naming.security.authentication: simple
# nodeSelector:
# cloud.google.com/gke-nodepool: default-pool
resources:
# minimum recommended memory settings for a small, personal instance, from
# https://help.sonatype.com/repomanager3/product-information/system-requirements
# requests:
# cpu: 4
# memory: 8Gi
# limits:
# cpu: 4
# memory: 8Gi
# This port should only be changed if the nexus image listens on a different one
nexusPort: 8081
# Default the pod's UID and GID to match the nexus3 container.
# Customize or remove these values from the securityContext as appropriate for
# your deployment environment.
securityContext:
runAsUser: 200
runAsGroup: 200
fsGroup: 200
podAnnotations: {}
livenessProbe:
initialDelaySeconds: 30
periodSeconds: 30
failureThreshold: 6
timeoutSeconds: 10
path: /
readinessProbe:
initialDelaySeconds: 30
periodSeconds: 30
failureThreshold: 6
timeoutSeconds: 10
path: /
# hostAliases allows the modification of the hosts file inside a container
hostAliases: []
# - ip: "192.168.1.10"
# hostnames:
# - "example.com"
# - "www.example.com"
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
deployment:
# Annotations to add to the Deployment metadata
annotations: {}
# Init containers, e.g. to give Nexus the right permissions on /nexus-data.
# Add your own init container or uncomment and modify the given example.
initContainers:
# - name: fmp-volume-permission
# image: busybox
# imagePullPolicy: IfNotPresent
# command: ['chown','-R', '200', '/nexus-data']
# volumeMounts:
# - name: nexus-data
# mountPath: /nexus-data
# Uncomment and modify this to run a command after starting the nexus container (only postStart is wired into the deployment template).
postStart:
command: # '["/bin/sh", "-c", "ls"]'
preStart:
command: # '["/bin/rm", "-f", "/path/to/lockfile"]'
terminationGracePeriodSeconds: 120
additionalContainers:
additionalVolumes:
additionalVolumeMounts:
ingress:
enabled: false
ingressClassName: nginx
annotations:
nginx.ingress.kubernetes.io/proxy-body-size: "0"
hostPath: /
hostRepo: repo.demo
# tls:
# - secretName: nexus-local-tls
# hosts:
# - repo.demo
service:
name: nexus3
enabled: true
labels: {}
annotations: {}
type: ClusterIP
route:
enabled: false
name: docker
portName: docker
labels:
annotations:
# path: /docker
nexusProxyRoute:
enabled: false
labels:
annotations:
# path: /nexus
persistence:
enabled: true
accessMode: ReadWriteOnce
## If defined, storageClass: <storageClass>
## If set to "-", storageClass: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClass spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# existingClaim:
# annotations:
# "helm.sh/resource-policy": keep
# storageClass: "-"
storageSize: 8Gi
# If a PersistentDisk already exists, you can create a PV for it by setting the two following key-value pairs.
# pdName: nexus-data-disk
# fsType: ext4
tolerations: []
# Enable the ConfigMap and add data to it
config:
enabled: false
mountPath: /sonatype-nexus-conf
data: []
# To mount an additional Secret, set enabled to true and add data
secret:
enabled: false
mountPath: /etc/secret-volume
readOnly: true
data: []
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""