Merge pull request #1 from jfrog/master

Catch up to Jfrog
This commit is contained in:
Serienmorder
2020-10-06 06:16:28 -07:00
committed by GitHub
214 changed files with 12242 additions and 380 deletions

28
.github/workflows/cla.yml vendored Normal file
View File

@@ -0,0 +1,28 @@
name: "CLA Assistant"
on:
issue_comment:
types: [created]
pull_request_target:
types: [opened,synchronize]
jobs:
CLAssistant:
runs-on: ubuntu-latest
steps:
- name: "CLA Assistant"
if: (github.event.comment.body == 'recheckcla' || github.event.comment.body == 'I have read the CLA Document and I hereby sign the CLA') || github.event_name == 'pull_request_target'
# Alpha Release
uses: cla-assistant/github-action@v2.0.1-alpha
env:
# Generated and maintained by github
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# JFrog organization secret
PERSONAL_ACCESS_TOKEN : ${{ secrets.CLA_SIGN_TOKEN }}
with:
path-to-signatures: 'signed_clas.json'
path-to-cla-document: 'https://jfrog.com/cla/'
remote-organization-name: 'jfrog'
remote-repository-name: 'jfrog-signed-clas'
# branch should not be protected
branch: 'master'
allowlist: bot*

11
Ansible/CHANGELOG.md Normal file
View File

@@ -0,0 +1,11 @@
# Changelog
All notable changes to this project will be documented in this file.
## [1.1.0] - 2020-09-27
- Validated for Artifactory 7.7.8 and Xray 3.8.6.
- Added offline support for Artifactory and Xray.
- Added support for configurable Postgres pg_hba.conf.
- Misc fixes due to Artifactory 7.7.8.
- Published 1.1.0 to [Ansible Galaxy](https://galaxy.ansible.com/jfrog/installers).

View File

@@ -12,6 +12,11 @@ This Ansible directory consists of the following directories that support the JF
| collection_version | artifactory_version | xray_version |
|--------------------|---------------------|--------------|
| 1.1.0 | 7.7.8 | 3.8.6 |
| 1.0.9 | 7.7.3 | 3.8.0 |
| 1.0.8 | 7.7.3 | 3.8.0 |
| 1.0.8 | 7.7.1 | 3.5.2 |
| 1.0.8 | 7.6.1 | 3.5.2 |
| 1.0.7 | 7.6.1 | 3.5.2 |
| 1.0.6 | 7.5.0 | 3.3.0 |
| 1.0.6 | 7.4.3 | 3.3.0 |
@@ -45,14 +50,14 @@ This Ansible directory consists of the following directories that support the JF
5. Then execute the following command to provision the JFrog software with Ansible. Variables can also be passed in at the command-line.
```
-ansible-playbook -i hosts.yml playbook.yml --extra-vars "master_key=$(openssl rand -hex 16) join_key=$(openssl rand -hex 16)"
+ansible-playbook -i hosts.yml playbook.yml --extra-vars "master_key=$(openssl rand -hex 32) join_key=$(openssl rand -hex 32)"
```
## Autogenerating Master and Join Keys
You may want to auto-generate your master and join keys and apply them to all the nodes.
```
-ansible-playbook -i hosts.yml playbook.yml --extra-vars "master_key=$(openssl rand -hex 16) join_key=$(openssl rand -hex 16)"
+ansible-playbook -i hosts.yml playbook.yml --extra-vars "master_key=$(openssl rand -hex 32) join_key=$(openssl rand -hex 32)"
```
## Using [Ansible Vault](https://docs.ansible.com/ansible/latest/user_guide/vault.html) to Encrypt Vars ## Using [Ansible Vault](https://docs.ansible.com/ansible/latest/user_guide/vault.html) to Encrypt Vars
@@ -84,11 +89,29 @@ ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -A us
eg.
ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -A ubuntu@{{ azureDeployment.deployment.outputs.lbIp.value }} -W %h:%p"'
```
## Upgrades
The Artifactory and Xray roles support software updates. To use a role to perform a software update only, use the _artifactory_upgrade_only_ or _xray_upgrade_only_ variable and specify the version. See the following example.
```
- hosts: artifactory
vars:
artifactory_version: "{{ lookup('env', 'artifactory_version_upgrade') }}"
artifactory_upgrade_only: true
roles:
- artifactory
- hosts: xray
vars:
xray_version: "{{ lookup('env', 'xray_version_upgrade') }}"
xray_upgrade_only: true
roles:
- xray
```
## Building the Collection Archive
1. Go to the [ansible_collections/jfrog/installers directory](ansible_collections/jfrog/installers).
2. Update the galaxy.yml meta file as needed. Update the version.
-3. Build the archive.
+3. Build the archive. (Requires Ansible 2.9+)
```
ansible-galaxy collection build
```

View File

@@ -9,7 +9,7 @@ namespace: "jfrog"
name: "installers" name: "installers"
# The version of the collection. Must be compatible with semantic versioning # The version of the collection. Must be compatible with semantic versioning
version: "1.0.9" version: "1.1.0"
# The path to the Markdown (.md) readme file. This path is relative to the root of the collection # The path to the Markdown (.md) readme file. This path is relative to the root of the collection
readme: "README.md" readme: "README.md"

View File

@@ -12,7 +12,7 @@ The artifactory role installs the Artifactory Pro software onto the host. Per th
* _db_user_: The database user to configure. eg. "artifactory"
* _db_password_: The database password to configure. eg. "Art1fact0ry"
* _server_name_: This is the server name. eg. "artifactory.54.175.51.178.xip.io"
-* _system_file_: Your own [system YAML](https://www.jfrog.com/confluence/display/JFROG/System+YAML+Configuration+File) file can be specified and used. **If specified, this file will be used rather than constructing a file from the parameters above.**
+* _artifactory_system_yaml_: Your own [system YAML](https://www.jfrog.com/confluence/display/JFROG/System+YAML+Configuration+File) file can be specified and used. **If specified, this file will be used rather than constructing a file from the parameters above.**
* _binary_store_file_: Your own [binary store file](https://www.jfrog.com/confluence/display/JFROG/Configuring+the+Filestore) can be used. If specified, the default cluster-file-system will not be used.
* _artifactory_upgrade_only_: Perform a software upgrade only. Default is false.
@@ -24,6 +24,8 @@ The artifactory role installs the Artifactory Pro software onto the host. Per th
### secondary vars (vars used by the secondary Artifactory server)
* _artifactory_is_primary_: For the secondary node(s) this must be set to **false**.
Additional variables can be found in [defaults/main.yml](./defaults/main.yml).
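For example, a minimal sketch of pointing the role at a pre-built system YAML might look like the following; the file path is a placeholder, not something shipped with the role:
```yaml
---
- hosts: artifactory
  vars:
    # hypothetical path; when artifactory_system_yaml is set, the role copies this
    # file into place instead of rendering its bundled system.yaml.j2 template
    artifactory_system_yaml: files/my-artifactory-system.yaml
  roles:
    - artifactory
```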
## Example Playbook
```
---

View File

@@ -4,7 +4,7 @@
ansible_marketplace: standalone
# The version of Artifactory to install
-artifactory_version: 7.7.3
+artifactory_version: 7.7.8
# licenses file - specify a licenses file or specify up to 5 licenses
artifactory_license1:
@@ -29,7 +29,7 @@ artifactory_file_store_dir: /data
artifactory_flavour: pro
extra_java_opts: -server -Xms2g -Xmx14g -Xss256k -XX:+UseG1GC
-artifactory_system_yaml: system.yaml.j2
+artifactory_system_yaml_template: system.yaml.j2
artifactory_tar: https://dl.bintray.com/jfrog/artifactory-pro/org/artifactory/pro/jfrog-artifactory-pro/{{ artifactory_version }}/jfrog-artifactory-pro-{{ artifactory_version }}-linux.tar.gz
artifactory_home: "{{ jfrog_home_directory }}/artifactory"
artifactory_untar_home: "{{ jfrog_home_directory }}/artifactory-{{ artifactory_flavour }}-{{ artifactory_version }}"

View File

@@ -64,6 +64,14 @@
group: "{{ artifactory_group }}" group: "{{ artifactory_group }}"
become: yes become: yes
- name: ensure data exists
file:
path: "{{ artifactory_home }}/var/data"
state: directory
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
become: yes
- name: ensure etc exists
file:
path: "{{ artifactory_home }}/var/etc"
@@ -74,17 +82,17 @@
- name: use specified system yaml
copy:
src: "{{ system_file }}"
dest: "{{ artifactory_home }}/var/etc/system.yaml"
become: yes
when: system_file is defined
- name: configure system yaml
template:
src: "{{ artifactory_system_yaml }}" src: "{{ artifactory_system_yaml }}"
dest: "{{ artifactory_home }}/var/etc/system.yaml" dest: "{{ artifactory_home }}/var/etc/system.yaml"
become: yes become: yes
when: system_file is not defined when: artifactory_system_yaml is defined
- name: configure system yaml template
template:
src: "{{ artifactory_system_yaml_template }}"
dest: "{{ artifactory_home }}/var/etc/system.yaml"
become: yes
when: artifactory_system_yaml is not defined
- name: ensure {{ artifactory_home }}/var/etc/security/ exists
file:
@@ -181,7 +189,7 @@
- name: start and enable the primary node
service:
name: artifactory
-state: restarted
+state: started
become: yes
when: artifactory_is_primary == true
@@ -193,6 +201,6 @@
- name: start and enable the secondary nodes
service:
name: artifactory
-state: restarted
+state: started
become: yes
when: artifactory_is_primary == false

View File

@@ -1,7 +1,9 @@
---
-- name: Nginx Install Block
+- name: install nginx
block:
-- name: install nginx
+- debug:
msg: "Attempting nginx installation without dependencies for potential offline mode."
- name: install nginx without dependencies
package:
name: nginx
state: present
@@ -11,9 +13,11 @@
become: yes
until: package_res is success
rescue:
-- name: perform dependency installation
+- debug:
msg: "Attempting nginx installation with dependencies for potential online mode."
- name: install dependencies
include_tasks: "{{ ansible_os_family }}.yml"
-- name: install nginx
+- name: install nginx after dependency installation
package:
name: nginx
state: present

View File

@@ -5,6 +5,17 @@ The postgres role will install Postgresql software and configure a database and
* _db_users_: This is a list of database users to create. eg. db_users: - { db_user: "artifactory", db_password: "Art1fAct0ry" }
* _dbs_: This is the database to create. eg. dbs: - { db_name: "artifactory", db_owner: "artifactory" }
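Written out as playbook variables, the two inline examples above look like this sketch; the xray entries mirror the values used in the test playbooks elsewhere in this change:
```yaml
db_users:
  - { db_user: "artifactory", db_password: "Art1fAct0ry" }
  - { db_user: "xray", db_password: "xray" }
dbs:
  - { db_name: "artifactory", db_owner: "artifactory" }
  - { db_name: "xraydb", db_owner: "xray" }
```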
By default, the [_pg_hba.conf_](https://www.postgresql.org/docs/9.1/auth-pg-hba-conf.html) client authentication file is configured for open access for development purposes through the _postgres_allowed_hosts_ variable:
```
postgres_allowed_hosts:
- { type: "host", database: "all", user: "all", address: "0.0.0.0/0", method: "trust"}
```
**THIS SHOULD NOT BE USED FOR PRODUCTION.**
**Update this variable to only allow access from Artifactory and Xray.**
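For a production-leaning setup, a tighter override might look like the sketch below; the addresses are placeholders for the subnets or hosts where Artifactory and Xray actually run:
```yaml
postgres_allowed_hosts:
  # placeholder CIDRs - list only the hosts that need database access
  - { type: "host", database: "artifactory", user: "artifactory", address: "10.0.1.10/32", method: "md5" }
  - { type: "host", database: "xraydb", user: "xray", address: "10.0.1.20/32", method: "md5" }
```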
## Example Playbook
```
---

View File

@@ -82,3 +82,8 @@ postgres_server_auto_explain_log_min_duration: -1
# Whether or not to use EXPLAIN ANALYZE.
postgres_server_auto_explain_log_analyze: true
# Sets the hosts that can access the database
postgres_allowed_hosts:
- { type: "host", database: "all", user: "all", address: "0.0.0.0/0", method: "trust"}

View File

@@ -4,12 +4,14 @@
name: python-psycopg2
update_cache: yes
become: yes
ignore_errors: yes
- name: install python3 psycopg2
apt:
name: python3-psycopg2
update_cache: yes
become: yes
ignore_errors: yes
- name: add postgres apt key
apt_key:

View File

@@ -4,4 +4,8 @@ local all all peer
host all all 127.0.0.1/32 md5
host all all ::1/128 md5
## remote connections IPv4
-host all all 0.0.0.0/0 trust
+{% if postgres_allowed_hosts and postgres_allowed_hosts is iterable %}
{% for host in postgres_allowed_hosts %}
{{ host.type | default('host') }} {{ host.database | default('all') }} {{ host.user | default('all') }} {{ host.address | default('0.0.0.0/0') }} {{ host.method | default('trust') }}
{% endfor %}
{% endif %}

View File

@@ -11,9 +11,10 @@ The xray role will install Xray software onto the host. An Artifactory server an
* _db_url_: This is the database url. eg. "postgres://10.0.0.59:5432/xraydb?sslmode=disable"
* _db_user_: The database user to configure. eg. "xray"
* _db_password_: The database password to configure. eg. "xray"
-* _system_file_: Your own [system YAML](https://www.jfrog.com/confluence/display/JFROG/System+YAML+Configuration+File) file can be specified and used. If specified, this file will be used rather than constructing a file from the parameters above.
+* _xray_system_yaml_: Your own [system YAML](https://www.jfrog.com/confluence/display/JFROG/System+YAML+Configuration+File) file can be specified and used. If specified, this file will be used rather than constructing a file from the parameters above.
* _xray_upgrade_only_: Perform a software upgrade only. Default is false.
Additional variables can be found in [defaults/main.yml](./defaults/main.yml).
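As with the artifactory role, a minimal sketch of supplying a pre-built system YAML to this role might look like the following; the file path is a placeholder:
```yaml
---
- hosts: xray
  vars:
    # hypothetical path; when xray_system_yaml is set, the role copies this file
    # instead of rendering the bundled system.yaml.j2 template
    xray_system_yaml: files/my-xray-system.yaml
  roles:
    - xray
```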
## Example Playbook
```
---

View File

@@ -4,7 +4,7 @@
ansible_marketplace: standalone
# The version of xray to install
-xray_version: 3.5.2
+xray_version: 3.8.6
# whether to enable HA
xray_ha_enabled: true
@@ -24,4 +24,6 @@ xray_user: xray
xray_group: xray
# if this is an upgrade
xray_upgrade_only: false
xray_system_yaml_template: system.yaml.j2

View File

@@ -27,10 +27,16 @@
name: libwxbase3.0-0v5
update_cache: yes
state: present
ignore_errors: yes
become: yes
-- name: Install erlang
+- name: Install erlang 21.2.1-1
apt:
deb: "{{ xray_home }}/app/third-party/rabbitmq/esl-erlang_21.2.1-1~ubuntu~xenial_amd64.deb"
when: xray_version is version("3.8.0","<")
become: yes
- name: Install erlang 22.3.4.1-1
apt:
deb: "{{ xray_home }}/app/third-party/rabbitmq/esl-erlang_22.3.4.1-1_ubuntu_xenial_amd64.deb"
when: xray_version is version("3.8.0",">=")
become: yes

View File

@@ -11,8 +11,16 @@
state: present
become: yes
-- name: Install erlang
+- name: Install erlang 21.1.4-1
yum:
name: "{{ xray_home }}/app/third-party/rabbitmq/erlang-21.1.4-1.el7.centos.x86_64.rpm"
state: present
when: xray_version is version("3.8.0","<")
become: yes
- name: Install erlang 22.3.4.1-1
yum:
name: "{{ xray_home }}/app/third-party/rabbitmq/erlang-22.3.4.1-1.el7.centos.x86_64.rpm"
state: present
when: xray_version is version("3.8.0",">=")
become: yes

View File

@@ -52,11 +52,19 @@
group: "{{ xray_group }}" group: "{{ xray_group }}"
become: yes become: yes
- name: configure system yaml - name: use specified system yaml
template: copy:
src: system.yaml.j2 src: "{{ xray_system_yaml }}"
dest: "{{ xray_home }}/var/etc/system.yaml" dest: "{{ xray_home }}/var/etc/system.yaml"
become: yes become: yes
when: xray_system_yaml is defined
- name: configure system yaml template
template:
src: "{{ xray_system_yaml_template }}"
dest: "{{ xray_home }}/var/etc/system.yaml"
become: yes
when: xray_system_yaml is not defined
- name: ensure {{ xray_home }}/var/etc/security/ exists
file:

View File

@@ -5,7 +5,7 @@ resources:
gitProvider: jefferyfryGithub
path: jefferyfry/JFrog-Cloud-Installers
pipelines:
-- name: ansible_aws_azure_automation_pipeline
+- name: ansible_automation_pipeline
steps:
- name: execute_aws_ansible_playbook
type: Bash
@@ -53,58 +53,6 @@ pipelines:
- ls
- eval $(ssh-agent -s)
- ssh-add <(echo "$int_ansiblePrivateKey_key")
-- ansible-playbook Ansible/test/aws/playbook.yaml
+- ansible-playbook Ansible/test/aws/playbook-ha-install.yaml
onComplete:
- echo "AWS Ansible playbook complete."
- name: execute_azure_ansible_playbook
type: Bash
configuration:
runtime:
type: image
image:
auto:
language: java
versions:
- "8"
integrations:
- name: ansibleAzureKeys
- name: ansibleEnvVars
- name: ansiblePrivateKey
inputResources:
- name: ansibleRepo
execution:
onStart:
- echo "Executing Azure Ansible playbook..."
onExecute:
- sudo apt-get update
- sudo apt-get install gnupg2
- sudo apt-get install software-properties-common
- sudo apt-add-repository --yes --update ppa:ansible/ansible
- sudo apt -y --allow-unauthenticated install ansible
- sudo pip install packaging
- sudo pip install msrestazure
- sudo pip install ansible[azure]
- cd dependencyState/resources/ansibleRepo
- echo 'Setting environment variables...'
- export artifactory_version="$int_ansibleEnvVars_artifactory_version"
- export xray_version="$int_ansibleEnvVars_xray_version"
- export artifactory_license1="$int_ansibleEnvVars_artifactory_license1"
- export artifactory_license2="$int_ansibleEnvVars_artifactory_license2"
- export artifactory_license3="$int_ansibleEnvVars_artifactory_license3"
- export master_key="$int_ansibleEnvVars_master_key"
- export join_key="$int_ansibleEnvVars_join_key"
- export ssh_public_key="$int_ansibleEnvVars_ssh_public_key"
- export arm_template="$int_ansibleEnvVars_arm_template"
- export azure_resource_group="$int_ansibleEnvVars_azure_resource_group"
- export clientId="$int_ansibleAzureKeys_appId"
- export clientSecret="$int_ansibleAzureKeys_password"
- export tenantId="$int_ansibleAzureKeys_tenant"
- printenv
- pwd
- ls
- eval $(ssh-agent -s)
- ssh-add <(echo "$int_ansiblePrivateKey_key")
- az login --service-principal -u "$clientId" -p "$clientSecret" --tenant "$tenantId"
- ansible-playbook Ansible/test/azure/playbook.yaml
onComplete:
- echo "Azure Ansible playbook complete."

View File

@@ -84,6 +84,11 @@
- { db_name: "xraydb", db_owner: "xray" } - { db_name: "xraydb", db_owner: "xray" }
groups: database groups: database
- name: Set up test environment file
copy:
src: ../tests/src/test/resources/testenv_tpl.yaml
dest: ../tests/src/test/resources/testenv.yaml
- name: Set up test environment url
replace:
path: ../tests/src/test/resources/testenv.yaml
@@ -140,12 +145,7 @@
- name: Test
hosts: localhost
tasks:
- name: Run tests
shell:
cmd: ./gradlew clean unified_test
chdir: ../tests/
- name: Cleanup and delete stack
cloudformation:
stack_name: "{{ lookup('env', 'stack_name') }}"
region: "us-east-1"
state: "absent"

View File

@@ -0,0 +1,172 @@
---
- name: Provision AWS test infrastructure
hosts: localhost
tasks:
- shell: 'pwd'
register: cmd
- debug:
msg: "{{ cmd.stdout }}"
- name: Create AWS test system
cloudformation:
stack_name: "{{ lookup('env', 'stack_name') }}"
state: "present"
region: "us-east-1"
disable_rollback: true
template: "{{ lookup('env', 'cfn_template') }}"
template_parameters:
SSHKeyName: "{{ lookup('env', 'ssh_public_key_name') }}"
tags:
Stack: "{{ lookup('env', 'stack_name') }}"
register: AWSDeployment
- name: Get AWS deployment details
debug:
var: AWSDeployment
- name: Add bastion
add_host:
hostname: "{{ AWSDeployment.stack_outputs.BastionInstancePublic }}"
groups: bastion
ansible_user: "ubuntu"
- name: Add new RT primary to host group
add_host:
hostname: "{{ AWSDeployment.stack_outputs.RTPriInstancePrivate }}"
ansible_user: "ubuntu"
ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -A ubuntu@{{ AWSDeployment.stack_outputs.BastionInstancePublic }} -W %h:%p"'
artifactory_version: "{{ lookup('env', 'artifactory_version') }}"
db_url: "jdbc:postgresql://{{ AWSDeployment.stack_outputs.DBInstancePrivate }}:5432/artifactory"
server_name: "{{ AWSDeployment.stack_outputs.ALBHostName }}"
artifactory_is_primary: true
artifactory_license_file: "{{ lookup('env', 'artifactory_license_file') }}"
groups:
- artifactory
- name: Add RT secondaries to host group
add_host:
hostname: "{{ AWSDeployment.stack_outputs.RTSecInstancePrivate }}"
ansible_user: "ubuntu"
ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -A ubuntu@{{ AWSDeployment.stack_outputs.BastionInstancePublic }} -W %h:%p"'
artifactory_version: "{{ lookup('env', 'artifactory_version') }}"
db_url: "jdbc:postgresql://{{ AWSDeployment.stack_outputs.DBInstancePrivate }}:5432/artifactory"
server_name: "{{ AWSDeployment.stack_outputs.ALBHostName }}"
artifactory_is_primary: false
groups:
- artifactory
- name: Add xrays to host group
add_host:
hostname: "{{ AWSDeployment.stack_outputs.XrayInstancePrivate }}"
ansible_user: "ubuntu"
ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -A ubuntu@{{ AWSDeployment.stack_outputs.BastionInstancePublic }} -W %h:%p"'
xray_version: "{{ lookup('env', 'xray_version') }}"
jfrog_url: "http://{{ AWSDeployment.stack_outputs.ALBHostName }}"
master_key: "{{ lookup('env', 'master_key') }}"
join_key: "{{ lookup('env', 'join_key') }}"
db_type: "postgresql"
db_driver: "org.postgresql.Driver"
db_user: "xray"
db_password: "xray"
db_url: "postgres://{{ AWSDeployment.stack_outputs.DBInstancePrivate }}:5432/xraydb?sslmode=disable"
groups: xray
- name: Add DBs to host group
add_host:
hostname: "{{ AWSDeployment.stack_outputs.DBInstancePrivate }}"
ansible_user: "ubuntu"
ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -A ubuntu@{{ AWSDeployment.stack_outputs.BastionInstancePublic }} -W %h:%p"'
db_users:
- { db_user: "artifactory", db_password: "Art1fAct0ry" }
- { db_user: "xray", db_password: "xray" }
dbs:
- { db_name: "artifactory", db_owner: "artifactory" }
- { db_name: "xraydb", db_owner: "xray" }
groups: database
- name: Set up test environment file
copy:
src: ../tests/src/test/resources/testenv_tpl.yaml
dest: ../tests/src/test/resources/testenv.yaml
- name: Set up test environment url
replace:
path: ../tests/src/test/resources/testenv.yaml
regexp: 'urlval'
replace: "http://{{ AWSDeployment.stack_outputs.ALBHostName }}"
- name: Set up test environment external_ip
replace:
path: ../tests/src/test/resources/testenv.yaml
regexp: 'ipval'
replace: "{{ AWSDeployment.stack_outputs.ALBHostName }}"
- name: Set up test environment rt_password
replace:
path: ../tests/src/test/resources/testenv.yaml
regexp: 'passval'
replace: "password"
- name: show testenv.yaml
debug: var=item
with_file:
- ../tests/src/test/resources/testenv.yaml
- name: Wait 300 seconds for port 22
wait_for:
port: 22
host: "{{ AWSDeployment.stack_outputs.BastionInstancePublic }}"
delay: 10
- debug:
msg: "Unified URL is at http://{{ AWSDeployment.stack_outputs.ALBHostName }}"
# apply roles to install software
- hosts: database
roles:
- postgres
- hosts: artifactory
vars:
artifactory_ha_enabled: true
master_key: "{{ lookup('env', 'master_key') }}"
join_key: "{{ lookup('env', 'join_key') }}"
db_download_url: "https://jdbc.postgresql.org/download/postgresql-42.2.12.jar"
db_type: "postgresql"
db_driver: "org.postgresql.Driver"
db_user: "artifactory"
db_password: "Art1fAct0ry"
roles:
- artifactory
- hosts: xray
roles:
- xray
- name: Test
hosts: localhost
tasks:
- name: Run tests
shell:
cmd: ./gradlew clean unified_test
chdir: ../tests/
# Now upgrade
- name: Upgrade
hosts: localhost
tasks:
- pause:
prompt: "Proceed to upgrade?"
minutes: 5
- hosts: artifactory
vars:
artifactory_version: "{{ lookup('env', 'artifactory_version_upgrade') }}"
artifactory_upgrade_only: true
roles:
- artifactory
- hosts: xray
vars:
xray_version: "{{ lookup('env', 'xray_version_upgrade') }}"
xray_upgrade_only: true
roles:
- xray

View File

@@ -1,3 +1,12 @@
#!/usr/bin/env bash
-ansible-playbook Ansible/test/aws/playbook.yaml
+export stack_name=$1
export cfn_template="~/git/JFrog-Cloud-Installers/Ansible/infra/aws/lb-rt-xray-ha-ubuntu16.json"
export ssh_public_key_name=jeff-ansible
export artifactory_license_file="~/Desktop/artifactory.cluster.license"
export master_key=d8c19a03036f83ea45f2c658e22fdd60
export join_key=d8c19a03036f83ea45f2c658e22fdd61
export ansible_user=ubuntu
export artifactory_version="7.4.3"
export xray_version="3.4.0"
ansible-playbook Ansible/test/aws/playbook-ha-install.yaml

View File

@@ -0,0 +1,14 @@
#!/usr/bin/env bash
export stack_name=$1
export cfn_template="~/git/JFrog-Cloud-Installers/Ansible/infra/aws/lb-rt-xray-ha-ubuntu16.json"
export ssh_public_key_name=jeff-ansible
export artifactory_license_file="~/Desktop/artifactory.cluster.license"
export master_key=d8c19a03036f83ea45f2c658e22fdd60
export join_key=d8c19a03036f83ea45f2c658e22fdd61
export ansible_user=ubuntu
export artifactory_version="7.4.3"
export xray_version="3.4.0"
export artifactory_version_upgrade="7.6.1"
export xray_version_upgrade="3.5.2"
ansible-playbook Ansible/test/aws/playbook-ha-upgrade.yaml

View File

@@ -1,6 +1,6 @@
artifactory:
-url: urlval
+url: http://Ansib-Appli-1NLZU3V2AGK49-291976964.us-east-1.elb.amazonaws.com
-external_ip: ipval
+external_ip: Ansib-Appli-1NLZU3V2AGK49-291976964.us-east-1.elb.amazonaws.com
distribution: artifactory_ha
rt_username: admin
-rt_password: passval
+rt_password: password

View File

@@ -0,0 +1,6 @@
artifactory:
url: urlval
external_ip: ipval
distribution: artifactory_ha
rt_username: admin
rt_password: passval

View File

@@ -123,7 +123,7 @@
"name": "xrayVersion", "name": "xrayVersion",
"type": "Microsoft.Common.DropDown", "type": "Microsoft.Common.DropDown",
"label": "Xray-vm image version to deploy.", "label": "Xray-vm image version to deploy.",
"defaultValue": "3.8.2", "defaultValue": "3.8.5",
"toolTip": "Version of Xray to deploy", "toolTip": "Version of Xray to deploy",
"constraints": { "constraints": {
"allowedValues": [ "allowedValues": [
@@ -134,6 +134,10 @@
{
"label": "3.8.2",
"value": "0.0.4"
},
{
"label": "3.8.5",
"value": "0.0.5"
}
],
"required": true

View File

@@ -19,10 +19,11 @@
},
"xrayVersion": {
"type": "string",
-"defaultValue": "0.0.4",
+"defaultValue": "0.0.5",
"allowedValues": [
"0.0.3",
-"0.0.4"
+"0.0.4",
"0.0.5"
],
"metadata": {
"description": "Xray-vm image version to deploy."
@@ -182,6 +183,7 @@
"publicIPAddressType": "Dynamic", "publicIPAddressType": "Dynamic",
"db_server": "[parameters('db_server')]", "db_server": "[parameters('db_server')]",
"db_user": "[concat(parameters('db_user'), '@', parameters('db_server'))]", "db_user": "[concat(parameters('db_user'), '@', parameters('db_server'))]",
"actual_db_user": "[parameters('db_user')]",
"db_password": "[parameters('db_password')]", "db_password": "[parameters('db_password')]",
"db_location": "[parameters('location')]", "db_location": "[parameters('location')]",
"db_name": "[parameters('databases').properties[0].name]", "db_name": "[parameters('databases').properties[0].name]",
@@ -351,7 +353,7 @@
"computerNamePrefix": "[variables('namingInfix')]", "computerNamePrefix": "[variables('namingInfix')]",
"adminUsername": "[parameters('adminUsername')]", "adminUsername": "[parameters('adminUsername')]",
"adminPassword": "[parameters('adminPassword')]", "adminPassword": "[parameters('adminPassword')]",
"customData": "[base64(concat('#INSTALL SCRIPT INPUTS\nXRAY_VERSION=', parameters('xrayVersion'),'\nARTIFACTORY_URL=',variables('artifactoryURL'),'\nDB_SERVER=',variables('db_server'),'\nDB_NAME=',variables('db_name'),'\nDB_ADMIN_USER=',variables('db_user'),'\nDB_ADMIN_PASSWD=',variables('db_password'),'\nMASTER_KEY=',variables('masterKey'),'\nJOIN_KEY=',variables('joinKey'),'\n'))]" "customData": "[base64(concat('#INSTALL SCRIPT INPUTS\nXRAY_VERSION=', parameters('xrayVersion'),'\nARTIFACTORY_URL=',variables('artifactoryURL'),'\nDB_SERVER=',variables('db_server'),'\nDB_NAME=',variables('db_name'),'\nDB_ADMIN_USER=',variables('db_user'),'\nACTUAL_DB_ADMIN_USER=',variables('actual_db_user'),'\nDB_ADMIN_PASSWD=',variables('db_password'),'\nMASTER_KEY=',variables('masterKey'),'\nJOIN_KEY=',variables('joinKey'),'\n'))]"
}, },
"networkProfile": { "networkProfile": {
"networkInterfaceConfigurations": [ "networkInterfaceConfigurations": [

View File

@@ -1,6 +1,7 @@
#!/bin/bash
DB_NAME=$(cat /var/lib/cloud/instance/user-data.txt | grep "^DB_NAME=" | sed "s/DB_NAME=//")
DB_USER=$(cat /var/lib/cloud/instance/user-data.txt | grep "^DB_ADMIN_USER=" | sed "s/DB_ADMIN_USER=//")
ACTUAL_DB_USER=$(cat /var/lib/cloud/instance/user-data.txt | grep "^ACTUAL_DB_ADMIN_USER=" | sed "s/ACTUAL_DB_ADMIN_USER=//")
DB_PASSWORD=$(cat /var/lib/cloud/instance/user-data.txt | grep "^DB_ADMIN_PASSWD=" | sed "s/DB_ADMIN_PASSWD=//")
DB_SERVER=$(cat /var/lib/cloud/instance/user-data.txt | grep "^DB_SERVER=" | sed "s/DB_SERVER=//")
MASTER_KEY=$(cat /var/lib/cloud/instance/user-data.txt | grep "^MASTER_KEY=" | sed "s/MASTER_KEY=//")
@@ -25,6 +26,7 @@ EOF
HOSTNAME=$(hostname -i)
yq w -i /var/opt/jfrog/xray/etc/system.yaml shared.database.url postgres://${DB_SERVER}.postgres.database.azure.com:5432/${DB_NAME}?sslmode=disable
yq w -i /var/opt/jfrog/xray/etc/system.yaml shared.database.username ${DB_USER}
yq w -i /var/opt/jfrog/xray/etc/system.yaml shared.database.actualUsername ${ACTUAL_DB_USER}
yq w -i /var/opt/jfrog/xray/etc/system.yaml shared.database.password ${DB_PASSWORD}
yq w -i /var/opt/jfrog/xray/etc/system.yaml shared.rabbitMq.password JFXR_RABBITMQ_COOKIE
yq w -i /var/opt/jfrog/xray/etc/system.yaml shared.jfrogUrl ${ARTIFACTORY_URL}

View File

@@ -14,7 +14,7 @@ This template can help you setup [JFrog Xray](https://jfrog.com/xray/) on Azure
2. Deployed Postgresql instance (if "existing DB" is selected as a parameter).
## Postgresql deployment
-Xray could fail to connect to "out of the box" Azure Postgresql. You can deploy a compatible Postgresql instance using this link:
+You can deploy a compatible Postgresql instance using this link:
<a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fjfrog%2FJFrog-Cloud-Installers%2Farm-xray%2FAzureResourceManager%2FPostgresql%2FazurePostgresDBDeploy.json" target="_blank">
<img src="https://aka.ms/deploytoazurebutton"/>
@@ -33,16 +33,6 @@ In the Databases field, use the object:
]
}
```
Before deploying Xray, please do following steps:
1. Use the admin role given by Azure that you initially connected with to PSDB (for example xray) - Remember the password of this role to connect when setting up with Xray.
2. Create a new role named xray@{hostname}, where {hostname} is a DB server name.
3. Add xray@{hostname} membership to the base Azure user. In the client tab (PgAdmin for example) right click on properties of role "azure_pg_admin" and under Membership tab, add the relevant "xray@{hostname}", click on the checkbox on the tag, save.
4. Change ownership of Xray database. Right click On the name of the database and change owner to "xray@{hostname}"
After these steps are done, run Xray deployment.
## Installation
1. Click "Deploy to Azure" button. If you don't have an Azure subscription, it will guide you on how to sign up for a free trial.

View File

@@ -19,10 +19,11 @@
},
"xrayVersion": {
"type": "string",
-"defaultValue": "0.0.4",
+"defaultValue": "0.0.5",
"allowedValues": [
"0.0.3",
-"0.0.4"
+"0.0.4",
"0.0.5"
],
"metadata": {
"description": "Xray-vm image version to deploy."
@@ -182,6 +183,7 @@
"publicIPAddressType": "Dynamic", "publicIPAddressType": "Dynamic",
"db_server": "[parameters('db_server')]", "db_server": "[parameters('db_server')]",
"db_user": "[concat(parameters('db_user'), '@', parameters('db_server'))]", "db_user": "[concat(parameters('db_user'), '@', parameters('db_server'))]",
"actual_db_user": "[parameters('db_user')]",
"db_password": "[parameters('db_password')]", "db_password": "[parameters('db_password')]",
"db_location": "[parameters('location')]", "db_location": "[parameters('location')]",
"db_name": "[parameters('databases').properties[0].name]", "db_name": "[parameters('databases').properties[0].name]",
@@ -189,7 +191,7 @@
"joinKey": "[parameters('joinKey')]", "joinKey": "[parameters('joinKey')]",
"osType": { "osType": {
"publisher": "jfrog", "publisher": "jfrog",
"offer": "x-ray-vm-preview", "offer": "x-ray-vm",
"sku": "x-ray-vm", "sku": "x-ray-vm",
"version": "[parameters('xrayVersion')]" "version": "[parameters('xrayVersion')]"
}, },
@@ -325,7 +327,7 @@
"plan": { "plan": {
"name": "x-ray-vm", "name": "x-ray-vm",
"publisher": "jfrog", "publisher": "jfrog",
"product": "x-ray-vm-preview" "product": "x-ray-vm"
}, },
"sku": { "sku": {
"name": "[parameters('virtualMachineSize')]", "name": "[parameters('virtualMachineSize')]",
@@ -351,7 +353,7 @@
"computerNamePrefix": "[variables('namingInfix')]", "computerNamePrefix": "[variables('namingInfix')]",
"adminUsername": "[parameters('adminUsername')]", "adminUsername": "[parameters('adminUsername')]",
"adminPassword": "[parameters('adminPassword')]", "adminPassword": "[parameters('adminPassword')]",
"customData": "[base64(concat('#INSTALL SCRIPT INPUTS\nXRAY_VERSION=', parameters('xrayVersion'),'\nARTIFACTORY_URL=',variables('artifactoryURL'),'\nDB_SERVER=',variables('db_server'),'\nDB_NAME=',variables('db_name'),'\nDB_ADMIN_USER=',variables('db_user'),'\nDB_ADMIN_PASSWD=',variables('db_password'),'\nMASTER_KEY=',variables('masterKey'),'\nJOIN_KEY=',variables('joinKey'),'\n'))]" "customData": "[base64(concat('#INSTALL SCRIPT INPUTS\nXRAY_VERSION=', parameters('xrayVersion'),'\nARTIFACTORY_URL=',variables('artifactoryURL'),'\nDB_SERVER=',variables('db_server'),'\nDB_NAME=',variables('db_name'),'\nDB_ADMIN_USER=',variables('db_user'),'\nACTUAL_DB_ADMIN_USER=',variables('actual_db_user'),'\nDB_ADMIN_PASSWD=',variables('db_password'),'\nMASTER_KEY=',variables('masterKey'),'\nJOIN_KEY=',variables('joinKey'),'\n'))]"
}, },
"networkProfile": { "networkProfile": {
"networkInterfaceConfigurations": [ "networkInterfaceConfigurations": [

View File

@@ -1,6 +1,7 @@
#!/bin/bash
DB_NAME=$(cat /var/lib/cloud/instance/user-data.txt | grep "^DB_NAME=" | sed "s/DB_NAME=//")
DB_USER=$(cat /var/lib/cloud/instance/user-data.txt | grep "^DB_ADMIN_USER=" | sed "s/DB_ADMIN_USER=//")
ACTUAL_DB_USER=$(cat /var/lib/cloud/instance/user-data.txt | grep "^ACTUAL_DB_ADMIN_USER=" | sed "s/ACTUAL_DB_ADMIN_USER=//")
DB_PASSWORD=$(cat /var/lib/cloud/instance/user-data.txt | grep "^DB_ADMIN_PASSWD=" | sed "s/DB_ADMIN_PASSWD=//")
DB_SERVER=$(cat /var/lib/cloud/instance/user-data.txt | grep "^DB_SERVER=" | sed "s/DB_SERVER=//")
MASTER_KEY=$(cat /var/lib/cloud/instance/user-data.txt | grep "^MASTER_KEY=" | sed "s/MASTER_KEY=//")
@@ -25,6 +26,7 @@ EOF
HOSTNAME=$(hostname -i)
yq w -i /var/opt/jfrog/xray/etc/system.yaml shared.database.url postgres://${DB_SERVER}.postgres.database.azure.com:5432/${DB_NAME}?sslmode=disable
yq w -i /var/opt/jfrog/xray/etc/system.yaml shared.database.username ${DB_USER}
yq w -i /var/opt/jfrog/xray/etc/system.yaml shared.database.actualUsername ${ACTUAL_DB_USER}
yq w -i /var/opt/jfrog/xray/etc/system.yaml shared.database.password ${DB_PASSWORD}
yq w -i /var/opt/jfrog/xray/etc/system.yaml shared.rabbitMq.password JFXR_RABBITMQ_COOKIE
yq w -i /var/opt/jfrog/xray/etc/system.yaml shared.jfrogUrl ${ARTIFACTORY_URL}

View File

@@ -1,7 +1,7 @@
#!/bin/bash
# Upgrade version for every release
-XRAY_VERSION=3.8.2
+XRAY_VERSION=3.8.5
export DEBIAN_FRONTEND=noninteractive

View File

@@ -1,6 +1,12 @@
# JFrog Openshift Artifactory-ha Chart Changelog
All changes to this chart will be documented in this file.
## [4.1.0] - Sept 30, 2020
* Updating to latest jfrog/artifactory-ha helm chart version 4.1.0 artifactory version 7.9.0
## [3.1.0] - Aug 17, 2020
* Updating to latest jfrog/artifactory-ha helm chart version 3.1.0 artifactory version 7.7.3
## [3.0.5] - Jul 16, 2020
* Updating to latest jfrog/artifactory helm chart version 3.0.5 artifactory version 7.6.3

View File

@@ -1,5 +1,5 @@
apiVersion: v1
-appVersion: 7.6.3
+appVersion: 7.9.0
description: Openshift JFrog Artifactory HA subcharting Artifactory HA to work in Openshift environment
home: https://www.jfrog.com/artifactory/
icon: https://raw.githubusercontent.com/jfrog/charts/master/stable/artifactory-ha/logo/artifactory-logo.png
@@ -16,4 +16,4 @@ name: openshift-artifactory-ha
sources:
- https://bintray.com/jfrog/product/JFrog-Artifactory-Pro/view
- https://github.com/jfrog/charts
-version: 3.0.5
+version: 4.1.0

View File

@@ -51,5 +51,7 @@ helm install artifactory-ha . \
--set artifactory-ha.database.driver=org.postgresql.Driver \
--set artifactory-ha.database.url=jdbc:postgresql://postgres-postgresql:5432/artifactory \
--set artifactory-ha.database.user=artifactory \
---set artifactory-ha.database.password=password
+--set artifactory-ha.database.password=password \
--set artifactory-ha.artifactory.joinKey=EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE \
--set artifactory-ha.artifactory.masterKey=FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
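The joinKey and masterKey values above are placeholders; one way to generate values of the same length, mirroring the openssl commands used in the Ansible README, is the sketch below:
```bash
# 32-hex-char join key and 64-hex-char master key, matching the placeholder lengths above
JOIN_KEY=$(openssl rand -hex 16)
MASTER_KEY=$(openssl rand -hex 32)
# then pass them to the helm install shown above via:
#   --set artifactory-ha.artifactory.joinKey=$JOIN_KEY \
#   --set artifactory-ha.artifactory.masterKey=$MASTER_KEY
```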

View File

@@ -0,0 +1,6 @@
dependencies:
- name: artifactory-ha
repository: https://charts.jfrog.io/
version: 4.1.0
digest: sha256:8df1fd70eeabbb7687da0dd534d2161a413389ec40f331d5eb8e95ae50119222
generated: "2020-09-30T12:30:08.142288-07:00"

View File

@@ -1,4 +1,4 @@
dependencies:
- name: artifactory-ha
-version: 3.0.5
+version: 4.1.0
repository: https://charts.jfrog.io/

View File

@@ -12,41 +12,31 @@ artifactory-ha:
url: "OVERRIDE" url: "OVERRIDE"
user: "OVERRIDE" user: "OVERRIDE"
password: "OVERRIDE" password: "OVERRIDE"
initContainerImage: registry.redhat.io/ubi8-minimal initContainerImage: registry.connect.redhat.com/jfrog/init:1.0.1
waitForDatabase: false waitForDatabase: true
installerInfo: '{ "productId": "Openshift_artifactory-ha/{{ .Chart.Version }}", "features": [ { "featureId": "ArtifactoryVersion/{{ default .Chart.AppVersion .Values.artifactory.image.version }}" }, { "featureId": "{{ if .Values.postgresql.enabled }}postgresql{{ else }}{{ .Values.database.type }}{{ end }}/0.0.0" }, { "featureId": "Platform/Openshift" }, { "featureId": "Partner/ACC-006983" }, { "featureId": "Channel/Openshift" } ] }' installerInfo: '{ "productId": "Openshift_artifactory-ha/{{ .Chart.Version }}", "features": [ { "featureId": "ArtifactoryVersion/{{ default .Chart.AppVersion .Values.artifactory.image.version }}" }, { "featureId": "{{ if .Values.postgresql.enabled }}postgresql{{ else }}{{ .Values.database.type }}{{ end }}/0.0.0" }, { "featureId": "Platform/Openshift" }, { "featureId": "Partner/ACC-006983" }, { "featureId": "Channel/Openshift" } ] }'
artifactory: artifactory:
## Add custom init containers execution before predefined init containers uid: "1000721030"
customInitContainersBegin: |
- name: "redhat-custom-setup"
#image: "{{ .Values.initContainerImage }}"
image: {{ index .Values "initContainerImage" }}
imagePullPolicy: "{{ .Values.artifactory.image.pullPolicy }}"
command:
- 'sh'
- '-c'
- 'chown -R 1030:1030 {{ .Values.artifactory.persistence.mountPath }}'
securityContext:
runAsUser: 0
volumeMounts:
- mountPath: "{{ .Values.artifactory.persistence.mountPath }}"
name: volume
## Change to use RH UBI images
image:
-repository: registry.connect.redhat.com/jfrog/artifactory-pro
+registry: registry.connect.redhat.com
-version: 7.6.3
+repository: jfrog/artifactory-pro
tag: 7.9.0
node:
replicaCount: 2
waitForPrimaryStartup:
enabled: false
-masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
+masterKey: "OVERRIDE"
-joinKey: EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE
+joinKey: "OVERRIDE"
postgresql:
enabled: false
nginx:
uid: "1000720104"
gid: "1000720107"
image:
-repository: registry.redhat.io/rhel8/nginx-116
+registry: registry.redhat.io
-version: latest
+repository: rhel8/nginx-116
tag: latest
## K8S secret name for the TLS secret to be used for SSL
tlsSecretName: "OVERRIDE"
service:

View File

@@ -0,0 +1,5 @@
# JFrog Openshift Pipelines Chart Changelog
All changes to this chart will be documented in this file.
## [1.4.5] Sept 21, 2020
* Adding Openshift Pipelines helm chart version 1.4.5 app version 1.7.2

View File

@@ -0,0 +1,16 @@
apiVersion: v1
appVersion: 1.7.2
description: A Helm chart for JFrog Pipelines
home: https://jfrog.com/pipelines/
icon: https://raw.githubusercontent.com/jfrog/charts/master/stable/pipelines/icon/pipelines-logo.png
keywords:
- pipelines
- jfrog
- devops
maintainers:
- email: vinaya@jfrog.com
name: Vinay Aggarwal
- email: johnp@jfrog.com
name: John Peterson
name: openshift-pipelines
version: 1.4.5

View File

@@ -0,0 +1,223 @@
# JFrog Pipelines on Kubernetes Helm Chart
[JFrog Pipelines](https://jfrog.com/pipelines/)
## Prerequisites Details
* Kubernetes 1.12+
## Chart Details
This chart will do the following:
- Deploy PostgreSQL (optionally with an external PostgreSQL instance)
- Deploy RabbitMQ (optionally as an HA cluster)
- Deploy Redis (optionally as an HA cluster)
- Deploy Vault (optionally as an HA cluster)
- Deploy JFrog Pipelines
## Requirements
- A running Kubernetes cluster
- Dynamic storage provisioning enabled
- Default StorageClass set to allow services using the default StorageClass for persistent storage
- A running Artifactory 7.7.x with Enterprise+ License
- Pre-created repository `jfrogpipelines` in Artifactory, of type `Generic`, with layout `maven-2-default`
- [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) installed and setup to use the cluster
- [Helm](https://helm.sh/) v2 or v3 installed
## Install JFrog Pipelines
### Add ChartCenter Helm repository
Before installing JFrog helm charts, you need to add the [ChartCenter helm repository](https://chartcenter.io) to your helm client
```bash
helm repo add center https://repo.chartcenter.io
helm repo update
```
### Artifactory Connection Details
To connect Pipelines to your Artifactory installation, you must provide a Join Key and a JFrog URL; both are *MANDATORY*. Here's how to retrieve them:
Retrieve the connection details of your Artifactory installation from the UI: https://www.jfrog.com/confluence/display/JFROG/General+Security+Settings#GeneralSecuritySettings-ViewingtheJoinKey.
### Install Pipelines Chart with Ingress
#### Pre-requisites
Before deploying Pipelines you need to have the following
- A running Kubernetes cluster
- An [Artifactory](https://hub.helm.sh/charts/jfrog/artifactory) or [Artifactory HA](https://hub.helm.sh/charts/jfrog/artifactory-ha) with Enterprise+ License
- Pre-created repository `jfrogpipelines` in Artifactory, of type `Generic`, with layout `maven-2-default`
- Deployed [Nginx-ingress controller](https://hub.helm.sh/charts/stable/nginx-ingress)
- [Optional] Deployed [Cert-manager](https://hub.helm.sh/charts/jetstack/cert-manager) for automatic management of TLS certificates with [Lets Encrypt](https://letsencrypt.org/)
- [Optional] TLS secret needed for https access
#### Prepare configurations
Fetch the JFrog Pipelines helm chart to get the needed configuration files
```bash
helm fetch center/jfrog/pipelines --untar
```
Edit local copies of `values-ingress.yaml`, `values-ingress-passwords.yaml` and `values-ingress-external-secret.yaml` with the needed configuration values
- URLs in `values-ingress.yaml`
- Artifactory URL
- Ingress hosts
- Ingress tls secrets
- Passwords `uiUserPassword`, `postgresqlPassword` and `rabbitmq.password` must be set, as must `masterKey` and `joinKey`, in `values-ingress-passwords.yaml`
#### Install JFrog Pipelines
Install JFrog Pipelines
```bash
kubectl create ns pipelines
helm upgrade --install pipelines --namespace pipelines center/jfrog/pipelines -f pipelines/values-ingress.yaml -f pipelines/values-ingress-passwords.yaml
```
### Use external secret
**Note:** Best practice is to use external secrets instead of storing passwords in `values.yaml` files.
Don't forget to **update** URLs in `values-ingress-external-secret.yaml` file.
Fill in all required passwords, `masterKey` and `joinKey` in `values-ingress-passwords.yaml` and then create and install the external secret.
**Note:** Helm release name for secrets generation and `helm install` must be set the same, in this case it is `pipelines`.
With Helm v2:
```bash
## Generate pipelines-system-yaml secret
helm template --name-template pipelines pipelines/ -x templates/pipelines-system-yaml.yaml \
-f pipelines/values-ingress-external-secret.yaml -f pipelines/values-ingress-passwords.yaml | kubectl apply --namespace pipelines -f -
## Generate pipelines-database secret
helm template --name-template pipelines pipelines/ -x templates/database-secret.yaml \
-f pipelines/values-ingress-passwords.yaml | kubectl apply --namespace pipelines -f -
## Generate pipelines-rabbitmq-secret secret
helm template --name-template pipelines pipelines/ -x templates/rabbitmq-secret.yaml \
-f pipelines/values-ingress-passwords.yaml | kubectl apply --namespace pipelines -f -
```
With Helm v3:
```bash
## Generate pipelines-system-yaml secret
helm template --name-template pipelines pipelines/ -s templates/pipelines-system-yaml.yaml \
-f pipelines/values-ingress-external-secret.yaml -f pipelines/values-ingress-passwords.yaml | kubectl apply --namespace pipelines -f -
## Generate pipelines-database secret
helm template --name-template pipelines pipelines/ -s templates/database-secret.yaml \
-f pipelines/values-ingress-passwords.yaml | kubectl apply --namespace pipelines -f -
## Generate pipelines-rabbitmq-secret secret
helm template --name-template pipelines pipelines/ -s templates/rabbitmq-secret.yaml \
-f pipelines/values-ingress-passwords.yaml | kubectl apply --namespace pipelines -f -
```
Install JFrog Pipelines:
```bash
helm upgrade --install pipelines --namespace pipelines center/jfrog/pipelines -f values-ingress-external-secret.yaml
```
### Using external Rabbitmq
If you want to use an external RabbitMQ, set `rabbitmq.enabled=false` and create `values-external-rabbitmq.yaml` with the YAML configuration below
```yaml
rabbitmq:
enabled: false
internal_ip: "{{ .Release.Name }}-rabbitmq"
msg_hostname: "{{ .Release.Name }}-rabbitmq"
port: 5672
manager_port: 15672
ms_username: admin
ms_password: password
cp_username: admin
cp_password: password
build_username: admin
build_password: password
root_vhost_exchange_name: rootvhost
erlang_cookie: secretcookie
build_vhost_name: pipelines
root_vhost_name: pipelinesRoot
protocol: amqp
```
```bash
helm upgrade --install pipelines --namespace pipelines center/jfrog/pipelines -f values-external-rabbitmq.yaml
```
### Using external Vault
If you want to use an external Vault, set `vault.enabled=false` and create `values-external-vault.yaml` with the YAML configuration below
```yaml
vault:
enabled: false
global:
vault:
host: vault_url
port: vault_port
token: vault_token
## Set Vault token using existing secret
# existingSecret: vault-secret
```
If you store external Vault token in a pre-existing Kubernetes Secret, you can specify it via `existingSecret`.
To create a secret containing the Vault token:
```bash
kubectl create secret generic vault-secret --from-literal=token=${VAULT_TOKEN}
```
```bash
helm upgrade --install pipelines --namespace pipelines center/jfrog/pipelines -f values-external-vault.yaml
```
### Status
See the status of deployed **helm** release:
With Helm v2:
```bash
helm status pipelines
```
With Helm v3:
```bash
helm status pipelines --namespace pipelines
```
### Pipelines Version
- By default, the pipelines images use the `appVersion` value in the Chart.yaml. This can be overridden by adding `version` to the pipelines section of the values.yaml, for example:
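A minimal sketch of that override (the version value here is only illustrative):
```yaml
pipelines:
  # hypothetical pin; when omitted, the chart's appVersion is used
  version: 1.7.2
```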
### Build Plane
#### Build Plane with static and dynamic node-pool VMs
To start using Pipelines you need to setup a Build Plane:
- For Static VMs Node-pool setup, please read [Managing Node Pools](https://www.jfrog.com/confluence/display/JFROG/Managing+Pipelines+Node+Pools#ManagingPipelinesNodePools-static-node-poolsAdministeringStaticNodePools).
- For Dynamic VMs Node-pool setup, please read [Managing Dynamic Node Pools](https://www.jfrog.com/confluence/display/JFROG/Managing+Pipelines+Node+Pools#ManagingPipelinesNodePools-dynamic-node-poolsAdministeringDynamicNodePools).
- For Kubernetes Node-pool setup, please read [Managing Dynamic Node Pools](https://www.jfrog.com/confluence/display/JFROG/Managing+Pipelines+Node+Pools#ManagingPipelinesNodePools-dynamic-node-poolsAdministeringDynamicNodePools).
## Useful links
- https://www.jfrog.com/confluence/display/JFROG/Pipelines+Quickstart
- https://www.jfrog.com/confluence/display/JFROG/Using+Pipelines
- https://www.jfrog.com/confluence/display/JFROG/Managing+Runtimes

View File

@@ -0,0 +1,35 @@
#!/usr/bin/env bash
echo "Installing Pipelines"
if [ -z "$MASTER_KEY" ]
then
MASTER_KEY=FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
fi
if [ -z "$JOIN_KEY" ]
then
JOIN_KEY=EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE
fi
helm upgrade --install pipelines . \
--set pipelines.pipelines.jfrogUrl=http://openshiftartifactoryha-nginx \
--set pipelines.pipelines.jfrogUrlUI=http://openshiftartifactoryha-nginx \
--set pipelines.pipelines.masterKey=$MASTER_KEY \
--set pipelines.pipelines.joinKey=$JOIN_KEY \
--set pipelines.pipelines.accessControlAllowOrigins_0=http://openshiftartifactoryha-nginx \
--set pipelines.pipelines.accessControlAllowOrigins_1=http://openshiftartifactoryha-nginx \
--set pipelines.pipelines.msg.uiUser=monitor \
--set pipelines.pipelines.msg.uiUserPassword=monitor \
--set pipelines.postgresql.enabled=false \
--set pipelines.global.postgresql.host=postgres-postgresql \
--set pipelines.global.postgresql.port=5432 \
--set pipelines.global.postgresql.database=pipelinesdb \
--set pipelines.global.postgresql.user=artifactory \
--set pipelines.global.postgresql.password=password \
--set pipelines.global.postgresql.ssl=false \
--set pipelines.rabbitmq.rabbitmq.username=user \
--set pipelines.rabbitmq.rabbitmq.password=bitnami \
--set pipelines.rabbitmq.externalUrl=amqps://pipelines-rabbit.jfrog.tech \
--set pipelines.pipelines.api.externalUrl=http://pipelines-api.jfrog.tech \
--set pipelines.pipelines.www.externalUrl=http://pipelines-www.jfrog.tech

View File

@@ -0,0 +1,6 @@
dependencies:
- name: pipelines
repository: https://charts.jfrog.io/
version: 1.4.5
digest: sha256:83b0fa740797074925e7f237762ff493727faf58476c3884f247acc44428202b
generated: "2020-09-21T10:32:37.846331-07:00"

View File

@@ -0,0 +1,4 @@
dependencies:
- name: pipelines
version: 1.4.5
repository: https://charts.jfrog.io/

File diff suppressed because it is too large.

View File

@@ -1,6 +1,12 @@
# JFrog Openshift Artifactory-Xray Chart Changelog # JFrog Openshift Artifactory-Xray Chart Changelog
All changes to this chart will be documented in this file. All changes to this chart will be documented in this file.
## [6.0.6] Oct 1st, 2020
* Updating to Xray chart version 6.0.6 and Xray app version 3.8.8
## [4.2.0] Aug 17, 2020
* Updating to Xray chart version 4.2.0 and Xray app version 3.8.0
## [4.1.2] July 28, 2020 ## [4.1.2] July 28, 2020
* Updating to Xray chart version 4.1.2 and Xray app version 3.6.2 * Updating to Xray chart version 4.1.2 and Xray app version 3.6.2

View File

@@ -1,5 +1,5 @@
apiVersion: v1 apiVersion: v1
appVersion: 3.6.2 appVersion: 3.8.8
description: Universal component scan for security and license inventory and impact analysis description: Universal component scan for security and license inventory and impact analysis
sources: sources:
- https://bintray.com/jfrog/product/xray/view - https://bintray.com/jfrog/product/xray/view
@@ -13,4 +13,4 @@ maintainers:
- email: johnp@jfrog.com - email: johnp@jfrog.com
name: John Peterson name: John Peterson
name: openshift-xray name: openshift-xray
version: 4.1.2 version: 6.0.6

View File

@@ -57,7 +57,10 @@ fi
JFROGURL="" JFROGURL=""
if [[ -z "$4" ]] if [[ -z "$4" ]]
then then
JFROGURL="http://openshiftartifactoryha-nginx" # HELM
JFROGURL="http://artifactory-ha-nginx"
# OPERATOR
# JFROGURL="http://openshiftartifactoryha-nginx"
else else
JFROGURL=$4 JFROGURL=$4
fi fi
@@ -68,4 +71,6 @@ helm install xray . \
--set xray.database.url=$DBURL \ --set xray.database.url=$DBURL \
--set xray.database.user=$DBUSER \ --set xray.database.user=$DBUSER \
--set xray.database.password=$DBPASS \ --set xray.database.password=$DBPASS \
--set xray.xray.jfrogUrl=$JFROGURL --set xray.xray.jfrogUrl=$JFROGURL \
--set xray.xray.joinKey=EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE \
--set xray.xray.masterKey=FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF

View File

@@ -16,10 +16,10 @@ spec:
app: rabbitmq app: rabbitmq
spec: spec:
containers: containers:
- image: quay.io/jfrog/xray-rabbitmq-rh:3.8.0 - image: registry.connect.redhat.com/jfrog/xray-rabbitmq:3.8.9
imagePullPolicy: "Always" imagePullPolicy: "Always"
name: xray-rabbitmq name: xray-rabbitmq
ports: ports:
- containerPort: 4369 - containerPort: 4369
- containerPort: 5672 - containerPort: 5672
- containerPort: 25672 - containerPort: 15672

View File

@@ -8,17 +8,17 @@ spec:
selector: selector:
app: rabbitmq app: rabbitmq
ports: ports:
- name: port1 - name: epmd
protocol: TCP protocol: TCP
port: 4369 port: 4369
targetPort: 4369 targetPort: 4369
- name: port3 - name: ampq
protocol: TCP protocol: TCP
port: 5672 port: 5672
targetPort: 5672 targetPort: 5672
- name: port4 - name: management
protocol: TCP protocol: TCP
port: 25672 port: 15672
targetPort: 25672 targetPort: 25672
type: ClusterIP type: ClusterIP

View File

@@ -0,0 +1,6 @@
dependencies:
- name: xray
repository: https://charts.jfrog.io/
version: 6.0.6
digest: sha256:339b5ec4e309ce2970ed34ebc700d6fe8f436d6cbe8dd5d352f0b080401752af
generated: "2020-10-01T15:04:29.008985-07:00"

View File

@@ -1,4 +1,4 @@
dependencies: dependencies:
- name: xray - name: xray
version: 4.1.2 version: 6.0.6
repository: https://charts.jfrog.io/ repository: https://charts.jfrog.io/

View File

@@ -0,0 +1,101 @@
# Openshift Jfrog Xray
xray:
unifiedUpgradeAllowed: true
replicaCount: 1
xray:
masterKey: "OVERRIDE"
joinKey: "OVERRIDE"
consoleLog: false
jfrogUrl: "OVERRIDE"
postgresql:
enabled: false
database:
url: "OVERRIDE"
user: "OVERRIDE"
password: "OVERRIDE"
common:
xrayUserId: "1000721035"
xrayGroupId: "1000721035"
analysis:
name: xray-analysis
image:
registry: registry.connect.redhat.com
repository: jfrog/xray-analysis
tag: 3.8.8
updateStrategy: RollingUpdate
podManagementPolicy: Parallel
preStartCommand:
indexer:
name: xray-indexer
image:
registry: registry.connect.redhat.com
repository: jfrog/xray-indexer
tag: 3.8.8
updateStrategy: RollingUpdate
podManagementPolicy: Parallel
persist:
name: xray-persist
image:
registry: registry.connect.redhat.com
repository: jfrog/xray-persist
tag: 3.8.8
updateStrategy: RollingUpdate
podManagementPolicy: Parallel
persistence:
size: 10Gi
preStartCommand:
server:
name: xray-server
image:
registry: registry.connect.redhat.com
repository: jfrog/xray-server
tag: 3.8.8
updateStrategy: RollingUpdate
podManagementPolicy: Parallel
replicaCount: 1
router:
name: router
image:
registry: registry.connect.redhat.com
repository: jfrog/xray-router
tag: 1.4.3
imagePullPolicy: IfNotPresent
rabbitmq-ha:
enabled: true
replicaCount: 1
image:
repository: registry.connect.redhat.com/jfrog/xray-rabbitmq
tag: 3.8.9
rabbitmqEpmdPort: 4369
rabbitmqNodePort: 5672
rabbitmqManagerPort: 15672
rabbitmqUsername: guest
rabbitmqPassword: guest
managementUsername: management
managementPassword: management
initContainer:
enabled: false
securityContext:
fsGroup: 1000721035
runAsUser: 1000721035
runAsGroup: 1000721035
livenessProbe:
initialDelaySeconds: 120
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
exec:
command:
- /bin/sh
- -c
- 'rabbitmqctl status'
readinessProbe:
initialDelaySeconds: 20
periodSeconds: 5
timeoutSeconds: 3
failureThreshold: 6
exec:
command:
- /bin/sh
- -c
- 'rabbitmqctl status'

View File

@@ -1,6 +0,0 @@
dependencies:
- name: artifactory-ha
repository: https://charts.jfrog.io/
version: 3.0.5
digest: sha256:59deb56ee27e8a629a22f48cc051453e774999228ece09c77584d95c8c54ce6d
generated: "2020-07-16T14:29:16.129919-07:00"

View File

@@ -1,6 +0,0 @@
dependencies:
- name: xray
repository: https://charts.jfrog.io/
version: 4.1.2
digest: sha256:79e535f41be683f61d7f181a094d91f2688df43b7c3511be0c5c3216a6ce342b
generated: "2020-07-28T11:11:46.534466-07:00"

View File

@@ -1,78 +0,0 @@
# Openshift Jfrog Xray
xray:
unifiedUpgradeAllowed: true
replicaCount: 1
xray:
masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
joinKey: EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE
consoleLog: false
jfrogUrl: "OVERRIDE"
postgresql:
enabled: false
database:
url: "OVERRIDE"
user: "OVERRIDE"
password: "OVERRIDE"
rabbitmq-ha:
enabled: true
replicaCount: 1
image:
tag: 3.7.21-alpine
rabbitmqUsername: guest
rabbitmqPassword: ""
persistentVolume:
enabled: true
size: 20Gi
rbac:
create: true
preStartCommand:
global:
postgresqlTlsSecret:
analysis:
name: xray-analysis
image:
repository: registry.connect.redhat.com/jfrog/xray-analysis
version: 3.6.2
updateStrategy: RollingUpdate
podManagementPolicy: Parallel
preStartCommand:
indexer:
name: xray-indexer
image:
repository: registry.connect.redhat.com/jfrog/xray-indexer
version: 3.6.2
updateStrategy: RollingUpdate
podManagementPolicy: Parallel
persist:
name: xray-persist
image:
repository: registry.connect.redhat.com/jfrog/xray-persist
version: 3.6.2
updateStrategy: RollingUpdate
podManagementPolicy: Parallel
persistence:
size: 10Gi
preStartCommand:
server:
name: xray-server
image:
repository: registry.connect.redhat.com/jfrog/xray-server
version: 3.6.2
updateStrategy: RollingUpdate
podManagementPolicy: Parallel
replicaCount: 1
router:
name: router
image:
repository: registry.connect.redhat.com/jfrog/xray-router
version: 1.4.2
imagePullPolicy: IfNotPresent
rabbitmq-ha:
enabled: true
replicaCount: 1
image:
repository: registry.connect.redhat.com/jfrog/xray-rabbitmq
tag: 3.8.0
rabbitmqEpmdPort: 4369
rabbitmqNodePort: 5672
rabbitmqManagerPort: 15672

View File

@@ -0,0 +1,24 @@
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
bin
# Test binary, build with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
# Kubernetes Generated files - skip generated files, except for vendored files
!vendor/**/zz_generated.*
# editor and IDE paraphernalia
.idea
*.swp
*.swo
*~

View File

@@ -7,4 +7,4 @@ scorecard:
- olm: - olm:
cr-manifest: cr-manifest:
- "deploy/crds/charts.helm.k8s.io_v1alpha1_openshiftartifactoryha_cr.yaml" - "deploy/crds/charts.helm.k8s.io_v1alpha1_openshiftartifactoryha_cr.yaml"
csv-path: "deploy/olm-catalog/artifactory-ha-operator/1.0.2/artifactory-ha-operator.v1.0.2.clusterserviceversion.yaml" csv-path: "deploy/olm-catalog/artifactory-ha-operator/1.0.3/artifactory-ha-operator.v1.0.3.clusterserviceversion.yaml"

View File

@@ -0,0 +1,29 @@
# JFrog Openshift Artifactory-ha Chart Changelog
All changes to this chart will be documented in this file.
## [1.1.0] - Sept 30, 2020
* Updating Operator to latest jfrog/artifactory-ha helm chart version 4.1.0 artifactory version 7.9.0
## [1.0.3] - Aug 17, 2020
* Updating Operator to latest jfrog/artifactory-ha helm chart version 3.1.0 artifactory version 7.7.3
## [1.0.2] - July 16, 2020
* Updating Operator to latest jfrog/artifactory-ha helm chart version 3.0.5 artifactory version 7.6.3
## [1.0.1] - June 29, 2020
* Updating to latest jfrog/artifactory-ha helm chart version 2.6.0 artifactory version 7.6.1
## [1.0.0] - May 12, 2020
* Updating to latest jfrog/artifactory-ha helm chart version 2.4.6 artifactory version 7.4.3
## [0.4.0] - April 13, 2020
* Updating to latest jfrog/artifactory-ha helm chart version 2.3.0
## [0.3.0] - April 11, 2020
* Fixed issues with master key
## [0.2.0] - March 17, 2020
* Updated Artifactory version to 7.3.2
## [0.1.0] - March 09, 2020
* Updated Artifactory version to 7.2.1

View File

@@ -0,0 +1,13 @@
# Build the manager binary
FROM quay.io/operator-framework/helm-operator:v1.0.1
LABEL name="JFrog Artifactory Enterprise Operator" \
description="Openshift operator to deploy JFrog Artifactory Enterprise based on the Red Hat Universal Base Image." \
vendor="JFrog" \
summary="JFrog Artifactory Enterprise Operator" \
com.jfrog.license_terms="https://jfrog.com/artifactory/eula/"
COPY licenses/ /licenses
ENV HOME=/opt/helm
COPY watches.yaml ${HOME}/watches.yaml
COPY helm-charts ${HOME}/helm-charts
WORKDIR ${HOME}

View File

@@ -0,0 +1,92 @@
# Current Operator version
VERSION ?= 0.0.1
# Default bundle image tag
BUNDLE_IMG ?= controller-bundle:$(VERSION)
# Options for 'bundle-build'
ifneq ($(origin CHANNELS), undefined)
BUNDLE_CHANNELS := --channels=$(CHANNELS)
endif
ifneq ($(origin DEFAULT_CHANNEL), undefined)
BUNDLE_DEFAULT_CHANNEL := --default-channel=$(DEFAULT_CHANNEL)
endif
BUNDLE_METADATA_OPTS ?= $(BUNDLE_CHANNELS) $(BUNDLE_DEFAULT_CHANNEL)
# Image URL to use all building/pushing image targets
IMG ?= controller:latest
all: docker-build
# Run against the configured Kubernetes cluster in ~/.kube/config
run: helm-operator
$(HELM_OPERATOR) run
# Install CRDs into a cluster
install: kustomize
$(KUSTOMIZE) build config/crd | kubectl apply -f -
# Uninstall CRDs from a cluster
uninstall: kustomize
$(KUSTOMIZE) build config/crd | kubectl delete -f -
# Deploy controller in the configured Kubernetes cluster in ~/.kube/config
deploy: kustomize
cd config/manager && $(KUSTOMIZE) edit set image controller=${IMG}
$(KUSTOMIZE) build config/default | kubectl apply -f -
# Undeploy controller in the configured Kubernetes cluster in ~/.kube/config
undeploy: kustomize
$(KUSTOMIZE) build config/default | kubectl delete -f -
# Build the docker image
docker-build:
docker build . -t ${IMG}
# Push the docker image
docker-push:
docker push ${IMG}
PATH := $(PATH):$(PWD)/bin
SHELL := env PATH=$(PATH) /bin/sh
OS = $(shell uname -s | tr '[:upper:]' '[:lower:]')
ARCH = $(shell uname -m | sed 's/x86_64/amd64/')
OSOPER = $(shell uname -s | tr '[:upper:]' '[:lower:]' | sed 's/darwin/apple-darwin/' | sed 's/linux/linux-gnu/')
ARCHOPER = $(shell uname -m )
kustomize:
ifeq (, $(shell which kustomize 2>/dev/null))
@{ \
set -e ;\
mkdir -p bin ;\
curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v3.5.4/kustomize_v3.5.4_$(OS)_$(ARCH).tar.gz | tar xzf - -C bin/ ;\
}
KUSTOMIZE=$(realpath ./bin/kustomize)
else
KUSTOMIZE=$(shell which kustomize)
endif
helm-operator:
ifeq (, $(shell which helm-operator 2>/dev/null))
@{ \
set -e ;\
mkdir -p bin ;\
curl -LO https://github.com/operator-framework/operator-sdk/releases/download/v1.0.1/helm-operator-v1.0.1-$(ARCHOPER)-$(OSOPER) ;\
mv helm-operator-v1.0.1-$(ARCHOPER)-$(OSOPER) ./bin/helm-operator ;\
chmod +x ./bin/helm-operator ;\
}
HELM_OPERATOR=$(realpath ./bin/helm-operator)
else
HELM_OPERATOR=$(shell which helm-operator)
endif
# Generate bundle manifests and metadata, then validate generated files.
.PHONY: bundle
bundle: kustomize
operator-sdk generate kustomize manifests -q
cd config/manager && $(KUSTOMIZE) edit set image controller=$(IMG)
$(KUSTOMIZE) build config/manifests | operator-sdk generate bundle -q --overwrite --version $(VERSION) $(BUNDLE_METADATA_OPTS)
operator-sdk bundle validate ./bundle
# Build the bundle image.
.PHONY: bundle-build
bundle-build:
docker build -f bundle.Dockerfile -t $(BUNDLE_IMG) .

View File

@@ -0,0 +1,8 @@
domain: jfrog.com
layout: helm.sdk.operatorframework.io/v1
projectName: artifactory-ha-operator
resources:
- group: cache
kind: OpenshiftArtifactoryHa
version: v1alpha1
version: 3-alpha

View File

@@ -4,6 +4,12 @@ This code base is intended to deploy Artifactory HA as an operator to an Openshi
Openshift OperatorHub has the latest official supported Cluster Service Version (CSV) for the OLM catalog. Openshift OperatorHub has the latest official supported Cluster Service Version (CSV) for the OLM catalog.
# Breaking Changes
```
v1.1.0 breaks existing upgrade path due to base helm chart breaking changes
```
## Getting Started ## Getting Started
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system. These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system.
@@ -141,4 +147,4 @@ We use [SemVer](http://semver.org/) for versioning. For the versions available,
## Contact ## Contact
Github Issues Github Issues

View File

@@ -0,0 +1,29 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
creationTimestamp: null
name: openshiftartifactoryhas.charts.helm.k8s.io
spec:
group: charts.helm.k8s.io
names:
kind: OpenshiftArtifactoryHa
listKind: OpenshiftArtifactoryHaList
plural: openshiftartifactoryhas
singular: openshiftartifactoryha
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
type: object
version: v1alpha1
versions:
- name: v1alpha1
served: true
storage: true
status:
acceptedNames:
kind: ''
plural: ''
conditions: null
storedVersions: null

View File

@@ -0,0 +1,7 @@
annotations:
operators.operatorframework.io.bundle.channel.default.v1: alpha
operators.operatorframework.io.bundle.channels.v1: alpha
operators.operatorframework.io.bundle.manifests.v1: manifests/
operators.operatorframework.io.bundle.mediatype.v1: registry+v1
operators.operatorframework.io.bundle.metadata.v1: metadata/
operators.operatorframework.io.bundle.package.v1: openshiftartifactoryha-operator

View File

@@ -0,0 +1,29 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
creationTimestamp: null
name: openshiftartifactoryhas.charts.helm.k8s.io
spec:
group: charts.helm.k8s.io
names:
kind: OpenshiftArtifactoryHa
listKind: OpenshiftArtifactoryHaList
plural: openshiftartifactoryhas
singular: openshiftartifactoryha
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
type: object
version: v1alpha1
versions:
- name: v1alpha1
served: true
storage: true
status:
acceptedNames:
kind: ''
plural: ''
conditions: null
storedVersions: null

View File

@@ -0,0 +1,29 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
creationTimestamp: null
name: openshiftartifactoryhas.charts.helm.k8s.io
spec:
group: charts.helm.k8s.io
names:
kind: OpenshiftArtifactoryHa
listKind: OpenshiftArtifactoryHaList
plural: openshiftartifactoryhas
singular: openshiftartifactoryha
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
type: object
version: v1alpha1
versions:
- name: v1alpha1
served: true
storage: true
status:
acceptedNames:
kind: ''
plural: ''
conditions: null
storedVersions: null

View File

@@ -0,0 +1,7 @@
annotations:
operators.operatorframework.io.bundle.channel.default.v1: alpha
operators.operatorframework.io.bundle.channels.v1: alpha
operators.operatorframework.io.bundle.manifests.v1: manifests/
operators.operatorframework.io.bundle.mediatype.v1: registry+v1
operators.operatorframework.io.bundle.metadata.v1: metadata/
operators.operatorframework.io.bundle.package.v1: openshiftartifactoryha-operator

View File

@@ -0,0 +1,29 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
creationTimestamp: null
name: openshiftartifactoryhas.charts.helm.k8s.io
spec:
group: charts.helm.k8s.io
names:
kind: OpenshiftArtifactoryHa
listKind: OpenshiftArtifactoryHaList
plural: openshiftartifactoryhas
singular: openshiftartifactoryha
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
type: object
version: v1alpha1
versions:
- name: v1alpha1
served: true
storage: true
status:
acceptedNames:
kind: ''
plural: ''
conditions: null
storedVersions: null

View File

@@ -0,0 +1,29 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
creationTimestamp: null
name: openshiftartifactoryhas.charts.helm.k8s.io
spec:
group: charts.helm.k8s.io
names:
kind: OpenshiftArtifactoryHa
listKind: OpenshiftArtifactoryHaList
plural: openshiftartifactoryhas
singular: openshiftartifactoryha
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
type: object
version: v1alpha1
versions:
- name: v1alpha1
served: true
storage: true
status:
acceptedNames:
kind: ''
plural: ''
conditions: null
storedVersions: null

View File

@@ -0,0 +1,7 @@
annotations:
operators.operatorframework.io.bundle.channel.default.v1: alpha
operators.operatorframework.io.bundle.channels.v1: alpha
operators.operatorframework.io.bundle.manifests.v1: manifests/
operators.operatorframework.io.bundle.mediatype.v1: registry+v1
operators.operatorframework.io.bundle.metadata.v1: metadata/
operators.operatorframework.io.bundle.package.v1: openshiftartifactoryha-operator

View File

@@ -0,0 +1,29 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
creationTimestamp: null
name: openshiftartifactoryhas.charts.helm.k8s.io
spec:
group: charts.helm.k8s.io
names:
kind: OpenshiftArtifactoryHa
listKind: OpenshiftArtifactoryHaList
plural: openshiftartifactoryhas
singular: openshiftartifactoryha
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
type: object
version: v1alpha1
versions:
- name: v1alpha1
served: true
storage: true
status:
acceptedNames:
kind: ''
plural: ''
conditions: null
storedVersions: null

View File

@@ -0,0 +1,29 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
creationTimestamp: null
name: openshiftartifactoryhas.charts.helm.k8s.io
spec:
group: charts.helm.k8s.io
names:
kind: OpenshiftArtifactoryHa
listKind: OpenshiftArtifactoryHaList
plural: openshiftartifactoryhas
singular: openshiftartifactoryha
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
type: object
version: v1alpha1
versions:
- name: v1alpha1
served: true
storage: true
status:
acceptedNames:
kind: ''
plural: ''
conditions: null
storedVersions: null

View File

@@ -0,0 +1,7 @@
annotations:
operators.operatorframework.io.bundle.channel.default.v1: alpha
operators.operatorframework.io.bundle.channels.v1: alpha
operators.operatorframework.io.bundle.manifests.v1: manifests/
operators.operatorframework.io.bundle.mediatype.v1: registry+v1
operators.operatorframework.io.bundle.metadata.v1: metadata/
operators.operatorframework.io.bundle.package.v1: openshiftartifactoryha-operator

View File

@@ -0,0 +1,29 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
creationTimestamp: null
name: openshiftartifactoryhas.charts.helm.k8s.io
spec:
group: charts.helm.k8s.io
names:
kind: OpenshiftArtifactoryHa
listKind: OpenshiftArtifactoryHaList
plural: openshiftartifactoryhas
singular: openshiftartifactoryha
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
type: object
version: v1alpha1
versions:
- name: v1alpha1
served: true
storage: true
status:
acceptedNames:
kind: ''
plural: ''
conditions: null
storedVersions: null

Some files were not shown because too many files have changed in this diff.