Creating public repository

This commit is contained in:
Mike Oliverio
2022-07-05 15:25:48 -04:00
parent 33c60ee0d0
commit 06fce09493
45 changed files with 2456 additions and 1 deletion

View File

@@ -0,0 +1,24 @@
apiVersion: v2
name: nxrm-aws-resiliency
description: Helm chart for a Resilient Nexus Repository deployment in AWS
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 40.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "3.40.1"

View File

@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2020 Sonatype

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -0,0 +1,101 @@
# Helm Chart for a Resilient Nexus Repository Deployment in AWS
This Helm chart configures the Kubernetes resources that are needed for a resilient Nexus Repository deployment on AWS as described in our documented [single-node cloud resilient deployment example using AWS](https://help.sonatype.com/repomanager3/planning-your-implementation/resiliency-and-high-availability/single-node-cloud-resilient-deployment-example-using-aws).
Use the checklist below to determine if this Helm chart is suitable for your deployment needs.
---
## When to Use This Helm Chart
Use this Helm chart if you are doing any of the following:
- Deploying Nexus Repository Pro to an AWS cloud environment and requiring automatic failover across Availability Zones (AZs) within a single region
- Planning to configure a single Nexus Repository Pro instance within your Kubernetes/EKS cluster with two or more nodes spread across different AZs within an AWS region
- Using an external PostgreSQL database
> **Note**: A Nexus Repository Pro license is required for our resilient deployment options. Your Nexus Repository Pro license file must be stored externally and mounted from AWS Secrets Manager (required).
---
## Prerequisites for This Chart
To set up an environment like the one described above, you will need the following:
- Kubernetes 1.19+
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
- [Helm 3](https://helm.sh/docs/intro/install/)
- A Nexus Repository Pro license
- An AWS account with permissions for accessing the following AWS services:
- Elastic Kubernetes Service (EKS)
- Relational Database Service (RDS) for PostgreSQL
- Application Load Balancer (ALB)
- CloudWatch
- Simple Storage Service (S3)
- Secrets Manager
You will also need to complete the steps below. See the referenced AWS documentation for detailed configuration steps. Also see [our resiliency documentation](https://help.sonatype.com/repomanager3/planning-your-implementation/resiliency-and-high-availability/single-node-cloud-resilient-deployment-example-using-aws) for more details about why these steps are necessary and how each AWS solution functions within a resilient deployment:
1. Configure an EKS cluster - [AWS documentation for managed nodes (i.e., EC2)](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html)
2. Create an Aurora database cluster - [AWS documentation for creating an Aurora database cluster](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.CreateInstance.html)
3. Deploy the AWS Load Balancer Controller (LBC) to your EKS cluster - [AWS documentation for deploying the AWS LBC to your EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html)
4. Install the AWS Secrets Store CSI drivers - You need to create an IAM service account using the ```eksctl create iamserviceaccount``` command. Before proceeding, read the points below; they contain required steps for this Helm chart to work for you (an example command is shown after this list): <br>
- **You must include two additional command parameters when running the command**: ```--role-only``` and ```--namespace <nexusrepo namespace>```
- It is important to include the ```--role-only``` option in the ```eksctl create iamserviceaccount``` command so that the Helm chart manages the Kubernetes service account. <br>
- **The namespace you specify to the ```eksctl create iamserviceaccount``` command must be the same namespace into which you will deploy the Nexus Repository pod.** <br>
- Although the namespace does not exist at this point, you must specify it as part of the command. **Do not create that namespace manually beforehand**; the Helm chart will create and manage it.
- You should specify this same namespace as the value of ```nexusNs``` in your values.yaml. <br>
- Follow the instructions in the [AWS Secrets Store CSI drivers documentation](https://github.com/aws/secrets-store-csi-driver-provider-aws/blob/main/README.md) to install the drivers; ensure that you follow the additional instructions in the bullets above when you reach the ```eksctl create iamserviceaccount``` command on that page.
5. Ensure that your EKS nodes are granted the CloudWatchFullAccess and CloudWatchAgentServerPolicy IAM policies. This Helm chart configures Fluent Bit for log externalization to CloudWatch.
- [AWS documentation for setting up Fluentbit](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/deploy-container-insights-EKS.html)
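For reference, the service account creation in step 4 might look like the sketch below. The cluster name, AWS account ID, and policy name are placeholders you must replace with your own values; the service account name, namespace, and role name simply mirror the defaults in this chart's values.yaml:
```
# Sketch only: cluster name, account ID, and policy name are placeholders.
# --namespace must match the nexusNs value in your values.yaml (do not create the namespace beforehand),
# and --role-only is required so that this Helm chart manages the Kubernetes service account itself.
eksctl create iamserviceaccount \
  --name nexus-repository-deployment-sa \
  --namespace nexusrepo \
  --cluster my-eks-cluster \
  --attach-policy-arn arn:aws:iam::000000000000:policy/nxrm-secrets-access \
  --role-name nxrm-nexus-role \
  --role-only \
  --approve
```
The ARN of the IAM role created by this command is the value you would then set for ```serviceAccount.role``` in your values.yaml.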
---
## Deployment
1. Pull the [nxrm-resiliency-aws-helmchart](https://github.com/sonatype/nxrm-resiliency-aws-helmchart).
2. Ensure you have updated your values.yaml with appropriate values for your environment (a sketch of the keys you will typically override follows these steps).
3. Install the chart using the following:
```helm install nxrm nexus/nxrm-aws-resiliency --values values.yaml```
4. Get the Nexus Repository link using the following:
```kubectl get ingresses -n nexusrepo```
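For step 2, a minimal override typically touches the keys shown in the sketch below; the cluster name, subnet IDs, role ARN, and secret ARNs are placeholders that mirror the defaults in this chart's values.yaml:
```
deployment:
  clusterName: my-eks-cluster   # EKS cluster name (placeholder)
  logsRegion: us-east-1         # region used for the CloudWatch log group
serviceAccount:
  role: arn:aws:iam::000000000000:role/nxrm-nexus-role   # role created via eksctl create iamserviceaccount
ingress:
  nxrmIngress:
    subnets: subnet-aaaa1111,subnet-bbbb2222   # placeholder subnet IDs for the ALB
  dockerIngress:
    subnets: subnet-aaaa1111,subnet-bbbb2222
secret:
  license:
    arn: arn:aws:secretsmanager:us-east-1:000000000000:secret:nxrm-nexus-license   # placeholder ARNs
  rds:
    arn: arn:aws:secretsmanager:us-east-1:000000000000:secret:nxrmrds-cred-nexus
  adminpassword:
    arn: arn:aws:secretsmanager:us-east-1:000000000000:secret:admin-nxrm-password
```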
---
## Health Check
You can use the following commands to perform various health checks:
See a list of releases:
```helm list```
Check pods using the following:
```kubectl get pods -n nexusrepo```
Check the Nexus Repository logs with the following:
```kubectl logs <pod_name> -n nexusrepo nxrm-app```
Check if the pod is OK by using the following; you shouldn't see any error/warning messages:
```kubectl describe pod <pod_name> -n nexusrepo```
Check if ingress is OK using the following:
```kubectl describe ingress <ingress_name> -n nexusrepo```
Check that the Fluent Bit pod is sending events to CloudWatch using the following:
```kubectl logs -n amazon-cloudwatch <fluent-bit pod id>```
If the above returns without error, then check CloudWatch for the ```/aws/containerinsights/<eks cluster name>/nexus-logs``` log group, which should contain four log streams.
---
## Uninstall
To uninstall the deployment, use the following:
```helm uninstall nxrm```
After removing the deployment, ensure that the namespace was deleted and that the Nexus Repository release is no longer listed when you run the following:
```helm list```
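For example, assuming the default ```nexusrepo``` namespace from values.yaml, the checks below should show no ```nxrm``` release and report the namespace as not found once deletion completes:
```
helm list                          # the nxrm release should no longer appear
kubectl get namespace nexusrepo    # should eventually return "Error from server (NotFound)"
```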

View File

@@ -0,0 +1 @@
Thank you for installing {{ .Chart.Name }}.

View File

@@ -0,0 +1,120 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-{{ .Values.deployment.name }}
namespace: {{ .Values.namespaces.nexusNs }}
labels:
app: nxrm
spec:
replicas: 1
selector:
matchLabels:
app: nxrm
template:
metadata:
labels:
app: nxrm
spec:
serviceAccountName: {{ .Values.serviceAccount.name }}
initContainers:
# chown nexus-data to 'nexus' user and init log directories/files for a new pod
# otherwise the side car containers will crash a couple of times and backoff whilst waiting
# for nxrm-app to start and this increases the total start up time.
- name: chown-nexusdata-owner-to-nexus-and-init-log-dir
image: {{ .Values.deployment.initContainer.image.repository }}:{{ .Values.deployment.initContainer.image.tag }}
command: [/bin/sh]
args:
- -c
- >-
mkdir -p /nexus-data/etc/logback &&
mkdir -p /nexus-data/log/tasks &&
mkdir -p /nexus-data/log/audit &&
touch -a /nexus-data/log/tasks/allTasks.log &&
touch -a /nexus-data/log/audit/audit.log &&
touch -a /nexus-data/log/request.log &&
chown -R '200:200' /nexus-data
volumeMounts:
- name: nexusdata
mountPath: /nexus-data
containers:
- name: nxrm-app
image: {{ .Values.deployment.container.image.repository }}:{{ .Values.deployment.container.image.tag }}
securityContext:
runAsUser: 200
imagePullPolicy: {{ .Values.deployment.container.pullPolicy }}
ports:
- containerPort: {{ .Values.deployment.container.containerPort }}
env:
- name: DB_NAME
value: "{{ .Values.deployment.container.env.nexusDBName }}"
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: nxrm-db-secret
key: db-password
- name: DB_USER
valueFrom:
secretKeyRef:
name: nxrm-db-secret
key: db-user
- name: DB_HOST
valueFrom:
secretKeyRef:
name: nxrm-db-secret
key: db-host
- name: NEXUS_SECURITY_INITIAL_PASSWORD
valueFrom:
secretKeyRef:
name: nxrm-admin-secret
key: nexus-admin-password
- name: NEXUS_SECURITY_RANDOMPASSWORD
value: "false"
- name: INSTALL4J_ADD_VM_PARAMS
value: "-Xms2703m -Xmx2703m -XX:MaxDirectMemorySize=2703m -Dnexus.licenseFile=/nxrm-secrets/{{ .Values.secret.license.alias }} \
-Dnexus.datastore.enabled=true -Djava.util.prefs.userRoot=${NEXUS_DATA}/javaprefs \
-Dnexus.datastore.nexus.jdbcUrl=jdbc:postgresql://${DB_HOST}:{{ .Values.deployment.container.env.nexusDBPort }}/${DB_NAME} \
-Dnexus.datastore.nexus.username=${DB_USER} \
-Dnexus.datastore.nexus.password=${DB_PASSWORD}"
volumeMounts:
- mountPath: /nxrm-secrets
name: nxrm-secrets
- name: nexusdata
mountPath: /nexus-data
- name: logback-tasklogfile-override
mountPath: /nexus-data/etc/logback/logback-tasklogfile-appender-override.xml
subPath: logback-tasklogfile-appender-override.xml
- name: request-log
image: {{ .Values.deployment.requestLogContainer.image.repository }}:{{ .Values.deployment.requestLogContainer.image.tag }}
args: [/bin/sh, -c, 'tail -n+1 -F /nexus-data/log/request.log']
volumeMounts:
- name: nexusdata
mountPath: /nexus-data
- name: audit-log
image: {{ .Values.deployment.auditLogContainer.image.repository }}:{{ .Values.deployment.auditLogContainer.image.tag }}
args: [/bin/sh, -c, 'tail -n+1 -F /nexus-data/log/audit/audit.log']
volumeMounts:
- name: nexusdata
mountPath: /nexus-data
- name: tasks-log
image: {{ .Values.deployment.taskLogContainer.image.repository }}:{{ .Values.deployment.taskLogContainer.image.tag }}
args: [/bin/sh, -c, 'tail -n+1 -F /nexus-data/log/tasks/allTasks.log']
volumeMounts:
- name: nexusdata
mountPath: /nexus-data
volumes:
- name: nexusdata
persistentVolumeClaim:
claimName: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-ebs-claim
- name: nxrm-secrets
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-secret
fsType: ext4
- name: logback-tasklogfile-override
configMap:
name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-logback-tasklogfile-override
items:
- key: logback-tasklogfile-appender-override.xml
path: logback-tasklogfile-appender-override.xml

View File

@@ -0,0 +1,360 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-fluent-bit
namespace: {{ .Values.namespaces.cloudwatchNs }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-fluent-bit-role
rules:
- nonResourceURLs:
- /metrics
verbs:
- get
- apiGroups: [""]
resources:
- namespaces
- pods
- pods/logs
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-fluent-bit-role-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-fluent-bit-role
subjects:
- kind: ServiceAccount
name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-fluent-bit
namespace: {{ .Values.namespaces.cloudwatchNs }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: fluent-bit-cluster-info
namespace: {{ .Values.namespaces.cloudwatchNs }}
data:
cluster.name: {{ .Values.deployment.clusterName }}
http.server: "On"
http.port: "2020"
read.head: "Off"
read.tail: "On"
logs.region: {{ .Values.deployment.logsRegion }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-fluent-bit-config
namespace: {{ .Values.namespaces.cloudwatchNs }}
labels:
k8s-app: fluent-bit
data:
fluent-bit.conf: |
[SERVICE]
Flush 5
Log_Level info
Daemon off
Parsers_File parsers.conf
HTTP_Server ${HTTP_SERVER}
HTTP_Listen 0.0.0.0
HTTP_Port ${HTTP_PORT}
storage.path /var/fluent-bit/state/flb-storage/
storage.sync normal
storage.checksum off
storage.backlog.mem_limit 5M
@INCLUDE nexus-log.conf
@INCLUDE nexus-request-log.conf
@INCLUDE nexus-audit-log.conf
@INCLUDE nexus-tasks-log.conf
nexus-log.conf: |
[INPUT]
Name tail
Tag nexus.nexus-log
Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-nxrm.deployment-*-*_{{ .Values.namespaces.nexusNs }}_nxrm-app-*.log
Parser docker
DB /var/fluent-bit/state/flb_container.db
Mem_Buf_Limit 5MB
Skip_Long_Lines Off
Refresh_Interval 10
Rotate_Wait 30
storage.type filesystem
Read_from_Head ${READ_FROM_HEAD}
[FILTER]
Name kubernetes
Match nexus.nexus-log
Kube_URL https://kubernetes.default.svc:443
Kube_Tag_Prefix application.var.log.containers.
Merge_Log On
Merge_Log_Key log_processed
K8S-Logging.Parser On
K8S-Logging.Exclude Off
Labels Off
Annotations Off
[OUTPUT]
Name cloudwatch_logs
Match nexus.nexus-log
region ${AWS_REGION}
log_group_name /aws/containerinsights/${CLUSTER_NAME}/nexus-logs
log_stream_prefix ${HOST_NAME}-nexus.log-
auto_create_group true
extra_user_agent container-insights
nexus-request-log.conf: |
[INPUT]
Name tail
Tag nexus.request-log
Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-nxrm.deployment-*-*_{{ .Values.namespaces.nexusNs }}_request-log-*.log
Parser docker
DB /var/fluent-bit/state/flb_container.db
Mem_Buf_Limit 5MB
Skip_Long_Lines Off
Refresh_Interval 10
Rotate_Wait 30
storage.type filesystem
Read_from_Head ${READ_FROM_HEAD}
[FILTER]
Name kubernetes
Match nexus.request-log
Kube_URL https://kubernetes.default.svc:443
Kube_Tag_Prefix application.var.log.containers.
Merge_Log On
Merge_Log_Key log_processed
K8S-Logging.Parser On
K8S-Logging.Exclude Off
Labels Off
Annotations Off
[OUTPUT]
Name cloudwatch_logs
Match nexus.request-log
region ${AWS_REGION}
log_group_name /aws/containerinsights/${CLUSTER_NAME}/nexus-logs
log_stream_prefix ${HOST_NAME}-request.log-
auto_create_group true
extra_user_agent container-insights
nexus-audit-log.conf: |
[INPUT]
Name tail
Tag nexus.audit-log
Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-nxrm.deployment-*-*_{{ .Values.namespaces.nexusNs }}_audit-log-*.log
Parser docker
DB /var/fluent-bit/state/flb_container.db
Mem_Buf_Limit 5MB
Skip_Long_Lines Off
Refresh_Interval 10
Rotate_Wait 30
storage.type filesystem
Read_from_Head ${READ_FROM_HEAD}
[FILTER]
Name kubernetes
Match nexus.audit-log
Kube_URL https://kubernetes.default.svc:443
Kube_Tag_Prefix application.var.log.containers.
Merge_Log On
Merge_Log_Key log_processed
K8S-Logging.Parser On
K8S-Logging.Exclude Off
Labels Off
Annotations Off
[OUTPUT]
Name cloudwatch_logs
Match nexus.audit-log
region ${AWS_REGION}
log_group_name /aws/containerinsights/${CLUSTER_NAME}/nexus-logs
log_stream_prefix ${HOST_NAME}-audit.log-
auto_create_group true
extra_user_agent container-insights
nexus-tasks-log.conf: |
[INPUT]
Name tail
Tag nexus.tasks-log
Path /var/log/containers/{{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-nxrm.deployment-*-*_{{ .Values.namespaces.nexusNs }}_tasks-log-*.log
Parser docker
DB /var/fluent-bit/state/flb_container.db
Mem_Buf_Limit 5MB
Skip_Long_Lines Off
Refresh_Interval 10
Rotate_Wait 30
storage.type filesystem
Read_from_Head ${READ_FROM_HEAD}
[FILTER]
Name kubernetes
Match nexus.tasks-log
Kube_URL https://kubernetes.default.svc:443
Kube_Tag_Prefix application.var.log.containers.
Merge_Log On
Merge_Log_Key log_processed
K8S-Logging.Parser On
K8S-Logging.Exclude Off
Labels Off
Annotations Off
[OUTPUT]
Name cloudwatch_logs
Match nexus.tasks-log
region ${AWS_REGION}
log_group_name /aws/containerinsights/${CLUSTER_NAME}/nexus-logs
log_stream_prefix ${HOST_NAME}-tasks.log-
auto_create_group true
extra_user_agent container-insights
parsers.conf: |
[PARSER]
Name docker
Format json
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%LZ
[PARSER]
Name syslog
Format regex
Regex ^(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$
Time_Key time
Time_Format %b %d %H:%M:%S
[PARSER]
Name container_firstline
Format regex
Regex (?<log>(?<="log":")\S(?!\.).*?)(?<!\\)".*(?<stream>(?<="stream":").*?)".*(?<time>\d{4}-\d{1,2}-\d{1,2}T\d{2}:\d{2}:\d{2}\.\w*).*(?=})
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%LZ
[PARSER]
Name cwagent_firstline
Format regex
Regex (?<log>(?<="log":")\d{4}[\/-]\d{1,2}[\/-]\d{1,2}[ T]\d{2}:\d{2}:\d{2}(?!\.).*?)(?<!\\)".*(?<stream>(?<="stream":").*?)".*(?<time>\d{4}-\d{1,2}-\d{1,2}T\d{2}:\d{2}:\d{2}\.\w*).*(?=})
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%LZ
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-fluent-bit
namespace: {{ .Values.namespaces.cloudwatchNs }}
labels:
k8s-app: fluent-bit
version: v1
kubernetes.io/cluster-service: "true"
spec:
selector:
matchLabels:
k8s-app: fluent-bit
template:
metadata:
labels:
k8s-app: fluent-bit
version: v1
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: fluent-bit
image: amazon/aws-for-fluent-bit:2.10.0
imagePullPolicy: Always
env:
- name: AWS_REGION
valueFrom:
configMapKeyRef:
name: fluent-bit-cluster-info
key: logs.region
- name: CLUSTER_NAME
valueFrom:
configMapKeyRef:
name: fluent-bit-cluster-info
key: cluster.name
- name: HTTP_SERVER
valueFrom:
configMapKeyRef:
name: fluent-bit-cluster-info
key: http.server
- name: HTTP_PORT
valueFrom:
configMapKeyRef:
name: fluent-bit-cluster-info
key: http.port
- name: READ_FROM_HEAD
valueFrom:
configMapKeyRef:
name: fluent-bit-cluster-info
key: read.head
- name: READ_FROM_TAIL
valueFrom:
configMapKeyRef:
name: fluent-bit-cluster-info
key: read.tail
- name: HOST_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: CI_VERSION
value: "k8s/1.3.7"
# the below var is just to force DaemonSet restarts when changing configuration stored in ConfigMap above
- name: FOO_VERSION
value: "16"
resources:
limits:
memory: 200Mi
requests:
cpu: 500m
memory: 100Mi
volumeMounts:
# Please don't change below read-only permissions
- name: fluentbitstate
mountPath: /var/fluent-bit/state
- name: varlog
mountPath: /var/log
readOnly: true
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: fluent-bit-config
mountPath: /fluent-bit/etc/
- name: runlogjournal
mountPath: /run/log/journal
readOnly: true
- name: dmesg
mountPath: /var/log/dmesg
readOnly: true
terminationGracePeriodSeconds: 120
volumes:
- name: fluentbitstate
hostPath:
path: /var/fluent-bit/state
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: fluent-bit-config
configMap:
name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-fluent-bit-config
- name: runlogjournal
hostPath:
path: /run/log/journal
- name: dmesg
hostPath:
path: /var/log/dmesg
serviceAccountName: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-fluent-bit
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
- operator: "Exists"
effect: "NoExecute"
- operator: "Exists"
effect: "NoSchedule"

View File

@@ -0,0 +1,41 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
namespace: {{ .Values.namespaces.nexusNs }}
name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-ingress
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: {{ .Values.ingress.nxrmIngress.scheme }}
alb.ingress.kubernetes.io/subnets: "{{ .Values.ingress.nxrmIngress.subnets }}"
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: {{ .Chart.Name }}-service
port:
number: {{ .Values.service.nexus.port }}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
namespace: {{ .Values.namespaces.nexusNs }}
name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-ingress-nxrm-docker
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: {{ .Values.ingress.dockerIngress.scheme }}
alb.ingress.kubernetes.io/subnets: {{ .Values.ingress.dockerIngress.subnets }}
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: {{ .Chart.Name }}-docker-service
port:
number: {{ .Values.ingress.dockerIngress.port }}

View File

@@ -0,0 +1,10 @@
apiVersion: v1
kind: Namespace
metadata:
name: {{ .Values.namespaces.nexusNs }}
---
apiVersion: v1
kind: Namespace
metadata:
name: {{ .Values.namespaces.cloudwatchNs }}
---

View File

@@ -0,0 +1,21 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-logback-tasklogfile-override
namespace: {{ .Values.namespaces.nexusNs }}
data:
logback-tasklogfile-appender-override.xml: |
<included>
<appender name="tasklogfile" class="ch.qos.logback.core.rolling.RollingFileAppender">
<File>${karaf.data}/log/tasks/allTasks.log</File>
<filter class="org.sonatype.nexus.pax.logging.TaskLogsFilter" />
<Append>true</Append>
<encoder class="org.sonatype.nexus.pax.logging.NexusLayoutEncoder">
<pattern>%d{"yyyy-MM-dd HH:mm:ss,SSSZ"} %-5p [%thread] %node %mdc{userId:-*SYSTEM} %c - %m%n</pattern>
</encoder>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>${karaf.data}/log/tasks/allTasks-%d{yyyy-MM-dd}.log.gz</fileNamePattern>
<maxHistory>1</maxHistory>
</rollingPolicy>
</appender>
</included>

View File

@@ -0,0 +1,28 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-ebs-pv
spec:
capacity:
storage: {{ .Values.pv.storage }}
volumeMode: Filesystem
accessModes:
- {{ .Values.pv.accessModes }}
persistentVolumeReclaimPolicy: {{ .Values.pv.reclaimPolicy }}
storageClassName: local-storage
local:
path: {{ .Values.pv.path }}
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: topology.kubernetes.io/zone
operator: In
values:
{{- range $zone := .Values.pv.zones }}
- {{ $zone }}
{{- end }}

View File

@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-ebs-claim
namespace: {{ .Values.namespaces.nexusNs }}
spec:
accessModes:
- {{ .Values.pvc.accessModes }}
storageClassName: local-storage
resources:
requests:
storage: {{ .Values.pvc.storage }}

View File

@@ -0,0 +1,38 @@
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
namespace: {{ .Values.namespaces.nexusNs }}
name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-secret
spec:
provider: aws
secretObjects:
- data:
- key: db-user
objectName: nxrm-db-user
- key: db-password
objectName: nxrm-db-password
- key: db-host
objectName: nxrm-db-host
secretName: nxrm-db-secret
type: Opaque
- data:
- key: nexus-admin-password
objectName: nxrm-admin-password
secretName: nxrm-admin-secret
type: Opaque
parameters:
objects: |
- objectName: "{{ .Values.secret.license.arn }}"
objectAlias: "{{ .Values.secret.license.alias }}"
- objectName: "{{ .Values.secret.rds.arn }}"
jmesPath:
- path: "username"
objectAlias: "nxrm-db-user"
- path: "password"
objectAlias: "nxrm-db-password"
- path: "host"
objectAlias: "nxrm-db-host"
- objectName: "{{ .Values.secret.adminpassword.arn }}"
jmesPath:
- path: "admin_nxrm_password"
objectAlias: "nxrm-admin-password"

View File

@@ -0,0 +1,7 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Values.serviceAccount.name }}
namespace: {{ .Values.namespaces.nexusNs }}
annotations:
eks.amazonaws.com/role-arn: {{ .Values.serviceAccount.role }}

View File

@@ -0,0 +1,32 @@
apiVersion: v1
kind: Service
metadata:
name: {{ .Chart.Name }}-service
namespace: {{ .Values.namespaces.nexusNs }}
labels:
app: nxrm
spec:
type: {{ .Values.service.nexus.type }}
selector:
app: nxrm
ports:
- protocol: {{ .Values.service.nexus.protocol }}
port: {{ .Values.service.nexus.port }}
targetPort: {{ .Values.service.nexus.targetPort }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ .Chart.Name }}-docker-service
namespace: {{ .Values.namespaces.nexusNs }}
labels:
app: nxrm
spec:
type: {{ .Values.service.docker.type }}
selector:
app: nxrm
ports:
- name: docker-connector
protocol: {{ .Values.service.docker.protocol }}
port: {{ .Values.service.docker.port }}
targetPort: {{ .Values.service.docker.targetPort }}

View File

@@ -0,0 +1,7 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: {{ .Chart.Name }}-{{ .Chart.Version }}.{{ .Release.Name }}-local-storage
namespace: {{ .Values.namespaces.nexusNs }}
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

View File

@@ -0,0 +1,77 @@
# Declare variables to be passed into your templates.
namespaces:
nexusNs: nexusrepo
cloudwatchNs: amazon-cloudwatch
deployment:
name: nxrm.deployment
clusterName: nxrm-nexus
logsRegion: us-east-1
initContainer:
image:
repository: busybox
tag: 1.33.1
container:
image:
repository: sonatype/nexus3
tag: 3.40.1
containerPort: 8081
pullPolicy: IfNotPresent
env:
nexusDBName: nexus
nexusDBPort: 3306
requestLogContainer:
image:
repository: busybox
tag: 1.33.1
auditLogContainer:
image:
repository: busybox
tag: 1.33.1
taskLogContainer:
image:
repository: busybox
tag: 1.33.1
serviceAccount:
name: nexus-repository-deployment-sa #This SA is created as part of steps under "AWS Secrets Manager"
role: arn:aws:iam::000000000000:role/nxrm-nexus-role #Role with secretsmanager permissions
ingress:
nxrmIngress:
scheme: internal
port: 9090
subnets: subnet-000000 #comma separated list of Subnets
dockerIngress: #Ingress for Docker Connector
scheme: internal
port: 9090
subnets: subnet-000000 #comma separated list of Subnets
pv:
storage: 120Gi
volumeMode: Filesystem
accessModes: ReadWriteOnce
reclaimPolicy: Retain
path: /mnt
zones:
zone1: us-east-1a
zone2: us-east-1b
pvc:
accessModes: ReadWriteOnce
storage: 100Gi
service: #Nexus Repo NodePort Service
nexus:
type: NodePort
protocol: TCP
port: 80
targetPort: 8081
docker: #Nodeport Service for Docker connector
type: NodePort
protocol: TCP
port: 9090
targetPort: 9090
secret:
license:
arn: arn:aws:secretsmanager:us-east-1:000000000000:secret:nxrm-nexus-license
alias: nxrm-license.lic
rds:
arn: arn:aws:secretsmanager:us-east-1:000000000000:secret:nxrmrds-cred-nexus
adminpassword:
arn: arn:aws:secretsmanager:us-east-1:000000000000:secret:admin-nxrm-password