[ansible] JFrog Platform 7.18.5 release (#106)

* [ansible] JFrog Platform 7.18.5 release
This commit is contained in:
Ram Mohan Rao Chukka
2021-05-03 21:11:56 +05:30
committed by GitHub
parent 94b2752d7d
commit ab2644dd80
226 changed files with 3815 additions and 6212 deletions


@@ -1,8 +0,0 @@
#
# Ansible managed
#
exclude_paths:
- ./meta/version.yml
- ./meta/exception.yml
- ./meta/preferences.yml
- ./molecule/default/verify.yml


@@ -1,12 +0,0 @@
---
extends: default
rules:
braces:
max-spaces-inside: 1
level: error
brackets:
max-spaces-inside: 1
level: error
line-length: disable
truthy: disable


@@ -1,89 +0,0 @@
# JFrog Ansible Installers Collection
## Getting Started
1. Install this collection from Ansible Galaxy. The collection is also available in Red Hat Automation Hub.
```
ansible-galaxy collection install jfrog.installers
```
Ensure you reference the collection in your playbook when using these roles.
```
---
- hosts: xray
collections:
- jfrog.installers
roles:
- xray
```
2. Ansible uses SSH to connect to hosts. Ensure that your SSH private key is on your client and the public keys are installed on your Ansible hosts.
3. Create your inventory file. Use one of the examples from the [examples directory](https://github.com/jfrog/JFrog-Cloud-Installers/tree/master/Ansible/examples) to construct an inventory file (hosts.yml) with the host addresses and variables.
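For reference, a minimal hosts.yml might look like the following sketch (group names and addresses are placeholders — adapt them from the examples directory):
```
all:
  children:
    artifactory:
      hosts:
        10.0.0.10:
    xray:
      hosts:
        10.0.0.20:
```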
4. Create your playbook. Use one of the examples from the [examples directory](https://github.com/jfrog/JFrog-Cloud-Installers/tree/master/Ansible/examples) to construct a playbook using the JFrog Ansible roles. These roles will be applied to your inventory and provision software.
5. Execute the following command to provision the JFrog software with Ansible. Variables can also be passed in at the command line.
```
ansible-playbook -i hosts.yml playbook.yml --extra-vars "master_key=$(openssl rand -hex 16) join_key=$(openssl rand -hex 16)"
```
## Autogenerating Master and Join Keys
You may want to auto-generate your master and join keys and apply them to all the nodes.
```
ansible-playbook -i hosts.yml playbook.yml --extra-vars "master_key=$(openssl rand -hex 16) join_key=$(openssl rand -hex 16)"
```
## Using [Ansible Vault](https://docs.ansible.com/ansible/latest/user_guide/vault.html) to Encrypt Vars
Some vars you may want to keep secret. You may put these vars into a separate file and encrypt them using [Ansible Vault](https://docs.ansible.com/ansible/latest/user_guide/vault.html).
```
ansible-vault encrypt secret-vars.yml --vault-password-file ~/.vault_pass.txt
```
then in your playbook include the secret vars file.
```
- hosts: primary
vars_files:
- ./vars/secret-vars.yml
- ./vars/vars.yml
roles:
- artifactory
```
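As a sketch, secret-vars.yml could hold the sensitive values (the variable names below are illustrative, not required by the roles):
```
---
master_key: "<your-master-key>"
join_key: "<your-join-key>"
db_password: "<your-db-password>"
```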
## Bastion Hosts
In many cases, you may want to run this Ansible collection through a Bastion host to provision JFrog servers. You can include the following Var for a host or group of hosts:
```
ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -A user@host -W %h:%p"'
# eg.
ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -A ubuntu@{{ azureDeployment.deployment.outputs.lbIp.value }} -W %h:%p"'
```
## Upgrades
The Artifactory and Xray roles support software upgrades. To use a role to perform a software upgrade only, use the _artifactory_upgrade_only_ or _xray_upgrade_only_ variables and specify the version. See the following example.
```
- hosts: artifactory
vars:
artifactory_version: "{{ lookup('env', 'artifactory_version_upgrade') }}"
artifactory_upgrade_only: true
roles:
- artifactory
- hosts: xray
vars:
xray_version: "{{ lookup('env', 'xray_version_upgrade') }}"
xray_upgrade_only: true
roles:
- xray
```


@@ -1,49 +0,0 @@
# artifactory
The artifactory role installs the Artifactory Pro software onto the host. Per the Vars below, it will configure a node as primary or secondary. This role uses the secondary role artifactory_nginx to install nginx.
Version 1.1.1 contains breaking changes. To mitigate them, run the role once before performing any upgrades so it migrates the path changes, then run it again with your upgrade.
## Role Variables
* _artifactory_version_: The version of Artifactory to install. eg. "7.4.1"
* _master_key_: This is the Artifactory [Master Key](https://www.jfrog.com/confluence/display/JFROG/Managing+Keys). See below to [autogenerate this key](#autogenerating-master-and-join-keys).
* _join_key_: This is the Artifactory [Join Key](https://www.jfrog.com/confluence/display/JFROG/Managing+Keys). See below to [autogenerate this key](#autogenerating-master-and-join-keys).
* _db_download_url_: This is the download URL for the JDBC driver for your database. eg. "https://jdbc.postgresql.org/download/postgresql-42.2.12.jar"
* _db_type_: This is the database type. eg. "postgresql"
* _db_driver_: This is the JDBC driver class. eg. "org.postgresql.Driver"
* _db_url_: This is the JDBC database url. eg. "jdbc:postgresql://10.0.0.120:5432/artifactory"
* _db_user_: The database user to configure. eg. "artifactory"
* _db_password_: The database password to configure. eg. "Art1fact0ry"
* _server_name_: This is the server name. eg. "artifactory.54.175.51.178.xip.io"
* _artifactory_system_yaml_: Your own [system YAML](https://www.jfrog.com/confluence/display/JFROG/System+YAML+Configuration+File) file can be specified and used. **If specified, this file will be used rather than constructing a file from the parameters above.**
* _binary_store_file_: Your own [binary store file](https://www.jfrog.com/confluence/display/JFROG/Configuring+the+Filestore) can be used. If specified, the default cluster-file-system will not be used.
* _artifactory_upgrade_only_: Perform a software upgrade only. Default is false.
### primary vars (vars used by the primary Artifactory server)
* _artifactory_is_primary_: For the primary node this must be set to **true**.
* _artifactory_license1 - 5_: These are the cluster licenses.
* _artifactory_license_file_: Your own license file can be used. **If specified, a license file constructed from the licenses above will not be used.**
### secondary vars (vars used by the secondary Artifactory server)
* _artifactory_is_primary_: For the secondary node(s) this must be set to **false**.
Additional variables can be found in [defaults/main.yml](./defaults/main.yml).
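Putting the variables above together, a play might set them like this (all values are placeholders for illustration, not defaults):
```
---
- hosts: primary
  vars:
    artifactory_version: "7.4.1"
    db_type: "postgresql"
    db_driver: "org.postgresql.Driver"
    db_url: "jdbc:postgresql://10.0.0.120:5432/artifactory"
    db_user: "artifactory"
    db_password: "Art1fact0ry"
    artifactory_is_primary: true
  roles:
    - artifactory
```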
## Example Playbook
```
---
- hosts: primary
roles:
- artifactory
```
## Upgrades
The Artifactory role supports software upgrades. To use a role to perform a software upgrade only, use the _artifactory_upgrade_only_ variable and specify the version. See the following example.
```
- hosts: artifactory
vars:
artifactory_version: "{{ lookup('env', 'artifactory_version_upgrade') }}"
artifactory_upgrade_only: true
roles:
- artifactory
```


@@ -1,56 +0,0 @@
---
# defaults file for artifactory
# indicates where this collection was downloaded from (galaxy, automation_hub, standalone)
ansible_marketplace: standalone
# The version of Artifactory to install
artifactory_version: 7.10.2
# licenses file - specify a licenses file or specify up to 5 licenses
artifactory_license1:
artifactory_license2:
artifactory_license3:
artifactory_license4:
artifactory_license5:
# whether to enable HA
artifactory_ha_enabled: true
# value for whether a host is primary. this should be set in host vars
artifactory_is_primary: true
# The location where Artifactory should install.
jfrog_home_directory: /opt/jfrog
# The location where Artifactory should store data.
artifactory_file_store_dir: /data
# Pick the Artifactory flavour to install; can also be cpp-ce, jcr, or pro.
artifactory_flavour: pro
extra_java_opts: -server -Xms2g -Xmx14g -Xss256k -XX:+UseG1GC
artifactory_system_yaml_template: system.yaml.j2
artifactory_tar: https://dl.bintray.com/jfrog/artifactory-pro/org/artifactory/pro/jfrog-artifactory-pro/{{ artifactory_version }}/jfrog-artifactory-pro-{{ artifactory_version }}-linux.tar.gz
artifactory_home: "{{ jfrog_home_directory }}/artifactory"
artifactory_untar_home: "{{ jfrog_home_directory }}/artifactory-{{ artifactory_flavour }}-{{ artifactory_version }}"
artifactory_user: artifactory
artifactory_group: artifactory
# Set the parameters required for the service.
service_list:
- name: artifactory
description: Start script for Artifactory
start_command: "{{ artifactory_home }}/bin/artifactory.sh start"
stop_command: "{{ artifactory_home }}/bin/artifactory.sh stop"
type: forking
status_pattern: artifactory
user_name: "{{ artifactory_user }}"
group_name: "{{ artifactory_group }}"
# if this is an upgrade
artifactory_upgrade_only: false
#default username and password
artifactory_app_username: admin
artifactory_app_user_pass: password


@@ -1,10 +0,0 @@
---
# handlers file for artifactory
- name: systemctl daemon-reload
systemd:
daemon_reload: yes
- name: restart artifactory
service:
name: artifactory
state: restarted


@@ -1,228 +0,0 @@
---
- debug:
msg: "Performing installation of Artifactory..."
- name: install nginx
include_role:
name: artifactory_nginx
- name: create group for artifactory
group:
name: "{{ artifactory_group }}"
state: present
become: yes
- name: create user for artifactory
user:
name: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
system: yes
become: yes
- name: ensure jfrog_home_directory exists
file:
path: "{{ jfrog_home_directory }}"
state: directory
become: yes
- name: Local Copy artifactory
unarchive:
src: "{{ local_artifactory_tar }}"
dest: "{{ jfrog_home_directory }}"
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
creates: "{{ artifactory_untar_home }}"
become: yes
when: local_artifactory_tar is defined
register: downloadartifactory
until: downloadartifactory is succeeded
retries: 3
- name: download artifactory
unarchive:
src: "{{ artifactory_tar }}"
dest: "{{ jfrog_home_directory }}"
remote_src: yes
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
creates: "{{ artifactory_untar_home }}"
become: yes
when: artifactory_tar is defined
register: downloadartifactory
until: downloadartifactory is succeeded
retries: 3
- name: Create artifactory home folder
file:
state: directory
path: "{{ artifactory_home }}"
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
become: yes
- name: Create Symlinks for var folder
file:
state: link
src: "{{ artifactory_untar_home }}/var"
dest: "{{ artifactory_home }}/var"
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
become: yes
- name: Create Symlinks for app folder
file:
state: link
src: "{{ artifactory_untar_home }}/app"
dest: "{{ artifactory_home }}/app"
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
become: yes
- name: ensure artifactory_file_store_dir exists
file:
path: "{{ artifactory_file_store_dir }}"
state: directory
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
become: yes
- name: ensure data exists
file:
path: "{{ artifactory_home }}/var/data"
state: directory
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
become: yes
- name: ensure etc exists
file:
path: "{{ artifactory_home }}/var/etc"
state: directory
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
become: yes
- name: use specified system yaml
copy:
src: "{{ artifactory_system_yaml }}"
dest: "{{ artifactory_home }}/var/etc/system.yaml"
become: yes
when: artifactory_system_yaml is defined
- name: configure system yaml template
template:
src: "{{ artifactory_system_yaml_template }}"
dest: "{{ artifactory_home }}/var/etc/system.yaml"
become: yes
when: artifactory_system_yaml is not defined
- name: ensure {{ artifactory_home }}/var/etc/security/ exists
file:
path: "{{ artifactory_home }}/var/etc/security/"
state: directory
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
become: yes
- name: configure master key
template:
src: master.key.j2
dest: "{{ artifactory_home }}/var/etc/security/master.key"
become: yes
- name: configure join key
template:
src: join.key.j2
dest: "{{ artifactory_home }}/var/etc/security/join.key"
become: yes
- name: ensure {{ artifactory_home }}/var/etc/artifactory/info/ exists
file:
path: "{{ artifactory_home }}/var/etc/artifactory/info/"
state: directory
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
become: yes
- name: configure installer info
template:
src: installer-info.json.j2
dest: "{{ artifactory_home }}/var/etc/artifactory/info/installer-info.json"
become: yes
- name: use specified binary store
copy:
src: "{{ binary_store_file }}"
dest: "{{ artifactory_home }}/var/etc/binarystore.xml"
become: yes
when: binary_store_file is defined
- name: use default binary store
template:
src: binarystore.xml.j2
dest: "{{ artifactory_home }}/var/etc/binarystore.xml"
become: yes
when: binary_store_file is not defined
- name: use license file
copy:
src: "{{ artifactory_license_file }}"
dest: "{{ artifactory_home }}/var/etc/artifactory/artifactory.cluster.license"
become: yes
when: artifactory_license_file is defined and artifactory_is_primary == true
- name: use license strings
template:
src: artifactory.cluster.license.j2
dest: "{{ artifactory_home }}/var/etc/artifactory/artifactory.cluster.license"
become: yes
when: artifactory_license_file is not defined and artifactory_is_primary == true
- name: Copy local database driver
copy:
src: "{{ db_local_location }}"
dest: "{{ artifactory_home }}/var/bootstrap/artifactory/tomcat/lib"
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
when: db_local_location is defined
become: yes
- name: download database driver
get_url:
url: "{{ db_download_url }}"
dest: "{{ artifactory_home }}/var/bootstrap/artifactory/tomcat/lib"
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
when: db_download_url is defined
become: yes
- name: create artifactory service
shell: "{{ artifactory_home }}/app/bin/installService.sh"
become: yes
- name: Ensure permissions are correct
file:
path: "{{ jfrog_home_directory }}"
group: "{{ artifactory_group }}"
owner: "{{ artifactory_user }}"
recurse: yes
become: yes
- name: start and enable the primary node
service:
name: artifactory
state: started
become: yes
when: artifactory_is_primary == true
- name: random wait before restarting to prevent secondary nodes from hitting DB first
pause:
seconds: "{{ 120 | random + 10}}"
when: artifactory_is_primary == false
- name: start and enable the secondary nodes
service:
name: artifactory
state: started
become: yes
when: artifactory_is_primary == false


@@ -1,34 +0,0 @@
---
- name: Move artifactory home to temp untar home
command: "mv {{ artifactory_home }} {{ temp_untar_home }}"
become: yes
- name: Ensure untar home permissions are correct
file:
state: directory
path: "{{ temp_untar_home }}"
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
become: yes
- name: Create artifactory home folder
file:
state: directory
path: "{{ artifactory_home }}"
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
become: yes
- name: Create Symlinks for var folder
file:
state: link
src: "{{ temp_untar_home }}/var"
dest: "{{ artifactory_home }}/var"
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
become: yes
- name: Create Symlinks for app folder
file:
state: link
src: "{{ temp_untar_home }}/app"
dest: "{{ artifactory_home }}/app"
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
become: yes


@@ -1,44 +0,0 @@
- name: Rectify Legacy Installation Block
block:
- name: Check to see if artifactory has a service and stop it
service:
name: artifactory
state: stopped
become: yes
- name: Check symlink method
stat:
path: /opt/jfrog/artifactory/app
register: newMethod
- name: Check artifactory version
uri:
url: "{{ web_method }}://{{ artifactory_server_url }}:{{ url_port }}/artifactory/api/system/version"
url_username: "{{ artifactory_app_username }}"
url_password: "{{ artifactory_app_user_pass }}"
register: artifactory_installed_version
- name: Debug defunct installation
debug:
var: artifactory_installed_version.json.version
- name: Setup temporary untar home
set_fact:
      temp_untar_home: "{{ jfrog_home_directory }}/artifactory-{{ artifactory_flavour }}-{{ artifactory_installed_version.json.version }}"
- name: Rectify legacy installation
include_tasks: "legacy_migration.yml"
when: (not newMethod.stat.islnk) and newMethod.stat.exists
rescue:
- name: Check to see if artifactory has a service and stop it
service:
name: artifactory
state: stopped
- name: Setup temporary untar home (assuming version is set var for version)
set_fact:
temp_untar_home: "{{ jfrog_home_directory }}/artifactory-{{ artifactory_flavour }}-{{ artifactory_version }}"
- name: Rectify legacy installation
include_tasks: "legacy_migration.yml"
when: (not newMethod.stat.islnk) and newMethod.stat.exists
always:
- name: perform installation
include_tasks: "install.yml"
when: not artifactory_upgrade_only
- name: perform upgrade
include_tasks: "upgrade.yml"
when: artifactory_upgrade_only


@@ -1,94 +0,0 @@
---
- debug:
msg: "Performing upgrade of Artifactory..."
- name: stop artifactory
service:
name: artifactory
state: stopped
become: yes
- name: ensure jfrog_home_directory exists
file:
path: "{{ jfrog_home_directory }}"
state: directory
become: yes
- name: Local Copy artifactory
unarchive:
src: "{{ local_artifactory_tar }}"
dest: "{{ jfrog_home_directory }}"
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
creates: "{{ artifactory_untar_home }}"
become: yes
when: local_artifactory_tar is defined
register: downloadartifactory
until: downloadartifactory is succeeded
retries: 3
- name: download artifactory
unarchive:
src: "{{ artifactory_tar }}"
dest: "{{ jfrog_home_directory }}"
remote_src: yes
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
creates: "{{ artifactory_untar_home }}"
become: yes
when: artifactory_tar is defined
register: downloadartifactory
until: downloadartifactory is succeeded
retries: 3
#- name: Delete artifactory app
# file:
# path: "{{ artifactory_home }}/app"
# state: absent
# become: yes
#- name: CP new app to artifactory app
# command: "cp -r {{ artifactory_untar_home }}/app {{ artifactory_home }}/app"
# become: yes
#- name: Delete untar directory
# file:
# path: "{{ artifactory_untar_home }}"
# state: absent
# become: yes
- name: Create Symlinks for app folder
file:
state: link
src: "{{ artifactory_untar_home }}/app"
dest: "{{ artifactory_home }}/app"
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
become: yes
- name: Ensure permissions are correct
file:
path: "{{ jfrog_home_directory }}"
group: "{{ artifactory_group }}"
owner: "{{ artifactory_user }}"
recurse: yes
become: yes
- name: start and enable the primary node
service:
name: artifactory
state: restarted
become: yes
when: artifactory_is_primary == true
- name: random wait before restarting to prevent secondary nodes from hitting DB first
pause:
seconds: "{{ 120 | random + 10}}"
when: artifactory_is_primary == false
- name: start and enable the secondary nodes
service:
name: artifactory
state: restarted
become: yes
when: artifactory_is_primary == false


@@ -1,31 +0,0 @@
{% if artifactory_license1 %}
{% if artifactory_license1|length %}
{{ artifactory_license1 }}
{% endif %}
{% endif %}
{% if artifactory_license2 %}
{% if artifactory_license2|length %}
{{ artifactory_license2 }}
{% endif %}
{% endif %}
{% if artifactory_license3 %}
{% if artifactory_license3|length %}
{{ artifactory_license3 }}
{% endif %}
{% endif %}
{% if artifactory_license4 %}
{% if artifactory_license4|length %}
{{ artifactory_license4 }}
{% endif %}
{% endif %}
{% if artifactory_license5 %}
{% if artifactory_license5|length %}
{{ artifactory_license5 }}
{% endif %}
{% endif %}


@@ -1,12 +0,0 @@
{
"productId": "Ansible_artifactory/1.0.0",
"features": [
{
"featureId": "Partner/ACC-006973"
},
{
"featureId": "Channel/{{ ansible_marketplace }}"
}
]
}


@@ -1,44 +0,0 @@
## @formatter:off
## JFROG ARTIFACTORY SYSTEM CONFIGURATION FILE
## HOW TO USE: uncomment any field and keep the correct yaml indentation by deleting only the leading '#' character.
configVersion: 1
## NOTE: JFROG_HOME is a placeholder for the JFrog root directory containing the deployed product, the home directory for all JFrog products.
## Replace JFROG_HOME with the real path! For example, in RPM install, JFROG_HOME=/opt/jfrog
## NOTE: Sensitive information such as passwords and join key are encrypted on first read.
## NOTE: The provided commented key and value is the default.
## SHARED CONFIGURATIONS
## A shared section for keys across all services in this config
shared:
## Node Settings
node:
## A unique id to identify this node.
## Default: auto generated at startup.
id: {{ ansible_machine_id }}
## Sets this node as primary in HA installation
primary: {{ artifactory_is_primary }}
## Sets this node as part of HA installation
haEnabled: {{ artifactory_ha_enabled }}
## Database Configuration
database:
## One of: mysql, oracle, mssql, postgresql, mariadb
## Default: Embedded derby
## Example for mysql/postgresql
type: "{{ db_type }}"
{%+ if db_type == 'derby' -%}
# driver: "{{ db_driver }}"
# url: "{{ db_url }}"
# username: "{{ db_user }}"
{%+ else -%}
driver: "{{ db_driver }}"
url: "{{ db_url }}"
username: "{{ db_user }}"
{%+ endif -%}
password: "{{ db_password }}"


@@ -1,29 +0,0 @@
---
language: python
python: "2.7"
# Use the new container infrastructure
sudo: false
# Install ansible
addons:
apt:
packages:
- python-pip
install:
# Install ansible
- pip install ansible
# Check ansible version
- ansible --version
# Create ansible.cfg with correct roles_path
- printf '[defaults]\nroles_path=../' >ansible.cfg
script:
# Basic role syntax check
- ansible-playbook tests/test.yml -i tests/inventory --syntax-check
notifications:
webhooks: https://galaxy.ansible.com/api/v1/notifications/


@@ -1,2 +0,0 @@
---
# defaults file for artifactory_nginx


@@ -1,2 +0,0 @@
---
# handlers file for artifactory_nginx


@@ -1,53 +0,0 @@
---
- name: install nginx
block:
- debug:
msg: "Attempting nginx installation without dependencies for potential offline mode."
- name: install nginx without dependencies
package:
name: nginx
state: present
register: package_res
retries: 5
delay: 60
become: yes
until: package_res is success
rescue:
- debug:
msg: "Attempting nginx installation with dependencies for potential online mode."
- name: install dependencies
include_tasks: "{{ ansible_os_family }}.yml"
- name: install nginx after dependency installation
package:
name: nginx
state: present
register: package_res
retries: 5
delay: 60
become: yes
until: package_res is success
- name: configure main nginx conf file.
copy:
src: nginx.conf
dest: /etc/nginx/nginx.conf
owner: root
group: root
mode: '0755'
become: yes
- name: configure the artifactory nginx conf
template:
src: artifactory.conf.j2
dest: /etc/nginx/conf.d/artifactory.conf
owner: root
group: root
mode: '0755'
become: yes
- name: restart nginx
service:
name: nginx
state: restarted
enabled: yes
become: yes


@@ -1,29 +0,0 @@
---
language: python
python: "2.7"
# Use the new container infrastructure
sudo: false
# Install ansible
addons:
apt:
packages:
- python-pip
install:
# Install ansible
- pip install ansible
# Check ansible version
- ansible --version
# Create ansible.cfg with correct roles_path
- printf '[defaults]\nroles_path=../' >ansible.cfg
script:
# Basic role syntax check
- ansible-playbook tests/test.yml -i tests/inventory --syntax-check
notifications:
webhooks: https://galaxy.ansible.com/api/v1/notifications/


@@ -1,2 +0,0 @@
---
# defaults file for artifactory_nginx


@@ -1,2 +0,0 @@
---
# handlers file for artifactory_nginx


@@ -1,30 +0,0 @@
---
language: python
services:
- docker
env:
global:
- DEBUG=--debug
matrix:
- MOLECULE_DISTRO=centos7 MOLECULE_SCENARIO=default
- MOLECULE_DISTRO=centos7 MOLECULE_SCENARIO=version11
# - MOLECULE_DISTRO: fedora27
# - MOLECULE_DISTRO: fedora29
- MOLECULE_DISTRO=ubuntu1604 MOLECULE_SCENARIO=default
- MOLECULE_DISTRO=ubuntu1604 MOLECULE_SCENARIO=version11
- MOLECULE_DISTRO=ubuntu1804 MOLECULE_SCENARIO=default
- MOLECULE_DISTRO=ubuntu1804 MOLECULE_SCENARIO=version11
# - MOLECULE_DISTRO: debian9
before_install:
- sudo apt-get -qq update
- sudo apt-get install -y net-tools
install:
- pip install molecule docker-py
script:
- molecule --version
- ansible --version
- molecule $DEBUG test -s $MOLECULE_SCENARIO


@@ -1,25 +0,0 @@
# postgres
The postgres role installs the PostgreSQL software and configures a database and user to support an Artifactory or Xray server.
### Role Variables
* _db_users_: This is a list of database users to create. eg. db_users: - { db_user: "artifactory", db_password: "Art1fAct0ry" }
* _dbs_: This is the database to create. eg. dbs: - { db_name: "artifactory", db_owner: "artifactory" }
By default, the [_pg_hba.conf_](https://www.postgresql.org/docs/9.1/auth-pg-hba-conf.html) client authentication file is configured for open access for development purposes through the _postgres_allowed_hosts_ variable:
```
postgres_allowed_hosts:
- { type: "host", database: "all", user: "all", address: "0.0.0.0/0", method: "trust"}
```
**THIS SHOULD NOT BE USED FOR PRODUCTION.**
**Update this variable to only allow access from Artifactory and Xray.**
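For example, a more restrictive configuration might limit access to the application subnet and require password authentication (the address below is a placeholder for your own network range):
```
postgres_allowed_hosts:
  - { type: "host", database: "all", user: "all", address: "10.0.0.0/24", method: "md5"}
```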
## Example Playbook
```
---
- hosts: database
roles:
- postgres
```


@@ -1,4 +0,0 @@
---
- name: restart postgres
systemd: name={{ postgres_server_service_name }} state=restarted


@@ -1,105 +0,0 @@
---
- name: define distribution-specific variables
include_vars: "{{ ansible_os_family }}.yml"
- name: create directory for bind mount if necessary
file:
path: "{{ postgres_server_bind_mount_var_lib_pgsql_target }}"
state: directory
become: yes
when: postgres_server_bind_mount_var_lib_pgsql
- name: perform bind mount if necessary
mount:
path: "/var/lib/pgsql"
src: "{{ postgres_server_bind_mount_var_lib_pgsql_target }}"
opts: bind
state: mounted
fstype: none
become: yes
when: postgres_server_bind_mount_var_lib_pgsql
- name: perform installation
include_tasks: "{{ ansible_os_family }}.yml"
- name: extend path
copy:
dest: /etc/profile.d/postgres-path.sh
mode: a=rx
content: "export PATH=$PATH:/usr/pgsql-{{ postgres_server_version }}/bin"
become: yes
- name: initialize PostgreSQL database cluster
environment:
LC_ALL: "en_US.UTF-8"
vars:
ansible_become: "{{ postgres_server_initdb_become }}"
ansible_become_user: "{{ postgres_server_user }}"
command: "{{ postgres_server_cmd_initdb }} {{ postgres_server_data_location }}"
args:
creates: "{{ postgres_server_data_location }}/PG_VERSION"
- name: install postgres configuration
template:
src: "{{ item }}.j2"
dest: "{{ postgres_server_config_location }}/{{ item }}"
owner: postgres
group: postgres
mode: u=rw,go=r
vars:
ansible_become: "{{ postgres_server_initdb_become }}"
ansible_become_user: "{{ postgres_server_user }}"
loop:
- pg_hba.conf
- postgresql.conf
- name: enable postgres service
systemd:
name: "{{ postgres_server_service_name }}"
state: started
enabled: yes
become: yes
- name: Hold until PostgreSQL is up and running
wait_for:
port: 5432
- name: Create users
become_user: postgres
become: yes
postgresql_user:
name: "{{ item.db_user }}"
password: "{{ item.db_password }}"
conn_limit: "-1"
loop: "{{ db_users|default([]) }}"
no_log: true # secret passwords
- name: Create a database
become_user: postgres
become: yes
postgresql_db:
name: "{{ item.db_name }}"
owner: "{{ item.db_owner }}"
encoding: UTF-8
loop: "{{ dbs|default([]) }}"
- name: Grant privs on db
become_user: postgres
become: yes
postgresql_privs:
database: "{{ item.db_name }}"
role: "{{ item.db_owner }}"
state: present
privs: ALL
type: database
loop: "{{ dbs|default([]) }}"
- name: restart postgres
service:
name: "{{ postgres_server_service_name }}"
state: restarted
become: yes
- debug:
msg: "Restarted postgres service {{ postgres_server_service_name }}"


@@ -1,12 +0,0 @@
---
postgres_server_cmd_initdb: /usr/lib/postgresql/{{ postgres_server_version }}/bin/initdb -D
postgres_server_initdb_become: yes
postgres_server_data_location: /var/lib/postgresql/{{ postgres_server_version }}/main
postgres_server_config_location: /etc/postgresql/{{ postgres_server_version }}/main
postgres_server_service_name: postgresql@{{ postgres_server_version }}-main
postgres_server_config_data_directory: "/var/lib/postgresql/{{ postgres_server_version }}/main"
postgres_server_config_hba_file: "/etc/postgresql/{{ postgres_server_version }}/main/pg_hba.conf"
postgres_server_config_ident_file: "/etc/postgresql/{{ postgres_server_version }}/main/pg_ident.conf"
postgres_server_config_external_pid_file: "/var/run/postgresql/{{ postgres_server_version }}-main.pid"


@@ -1,11 +0,0 @@
---
postgres_server_cmd_initdb: /usr/pgsql-{{ postgres_server_version }}/bin/postgresql{{ postgres_server_pkg_version }}-setup initdb -D
postgres_server_data_location: /var/lib/pgsql/{{ postgres_server_version }}/data
postgres_server_config_location: "{{ postgres_server_data_location }}"
postgres_server_service_name: postgresql-{{ postgres_server_version }}
postgres_server_config_data_directory: null
postgres_server_config_hba_file: null
postgres_server_config_ident_file: null
postgres_server_config_external_pid_file: null


@@ -1,4 +0,0 @@
---
postgres_server_cmd_initdb: /usr/pgsql-{{ postgres_server_version }}/bin/postgresql{{ postgres_server_pkg_version }}-setup initdb
postgres_server_initdb_become: false


@@ -1,4 +0,0 @@
---
postgres_server_cmd_initdb: /usr/pgsql-{{ postgres_server_version }}/bin/initdb -D /var/lib/pgsql/{{ postgres_server_version }}/data
postgres_server_initdb_become: yes


@@ -1,29 +0,0 @@
---
language: python
python: "2.7"
# Use the new container infrastructure
sudo: false
# Install ansible
addons:
apt:
packages:
- python-pip
install:
# Install ansible
- pip install ansible
# Check ansible version
- ansible --version
# Create ansible.cfg with correct roles_path
- printf '[defaults]\nroles_path=../' >ansible.cfg
script:
# Basic role syntax check
- ansible-playbook tests/test.yml -i tests/inventory --syntax-check
notifications:
webhooks: https://galaxy.ansible.com/api/v1/notifications/


@@ -1,36 +0,0 @@
# xray
The xray role installs the Xray software onto the host. An Artifactory server and a PostgreSQL database are required.
### Role Variables
* _xray_version_: The version of Xray to install. eg. "3.3.0"
* _jfrog_url_: This is the URL to the Artifactory base URL. eg. "http://ec2-54-237-207-135.compute-1.amazonaws.com"
* _master_key_: This is the Artifactory [Master Key](https://www.jfrog.com/confluence/display/JFROG/Managing+Keys). See below to [autogenerate this key](#autogenerating-master-and-join-keys).
* _join_key_: This is the Artifactory [Join Key](https://www.jfrog.com/confluence/display/JFROG/Managing+Keys). See below to [autogenerate this key](#autogenerating-master-and-join-keys).
* _db_type_: This is the database type. eg. "postgresql"
* _db_driver_: This is the JDBC driver class. eg. "org.postgresql.Driver"
* _db_url_: This is the database url. eg. "postgres://10.0.0.59:5432/xraydb?sslmode=disable"
* _db_user_: The database user to configure. eg. "xray"
* _db_password_: The database password to configure. eg. "xray"
* _xray_system_yaml_: Your own [system YAML](https://www.jfrog.com/confluence/display/JFROG/System+YAML+Configuration+File) file can be specified and used. If specified, this file will be used rather than constructing a file from the parameters above.
* _xray_upgrade_only_: Perform a software upgrade only. Default is false.
Additional variables can be found in [defaults/main.yml](./defaults/main.yml).
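Putting the variables above together, a play might set them like this (all values are placeholders for illustration, not defaults):
```
---
- hosts: xray
  vars:
    xray_version: "3.3.0"
    jfrog_url: "http://ec2-54-237-207-135.compute-1.amazonaws.com"
    db_type: "postgresql"
    db_url: "postgres://10.0.0.59:5432/xraydb?sslmode=disable"
    db_user: "xray"
    db_password: "xray"
  roles:
    - xray
```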
## Example Playbook
```
---
- hosts: xray
roles:
- xray
```
## Upgrades
The Xray role supports software upgrades. To use a role to perform a software upgrade only, use the _xray_upgrade_only_ variable and specify the version. See the following example.
```
- hosts: xray
vars:
xray_version: "{{ lookup('env', 'xray_version_upgrade') }}"
xray_upgrade_only: true
roles:
- xray
```

View File

@@ -1,29 +0,0 @@
---
# defaults file for xray
# indicates where this collection was downloaded from (galaxy, automation_hub, standalone)
ansible_marketplace: standalone
# The version of xray to install
xray_version: 3.10.3
# whether to enable HA
xray_ha_enabled: true
# The location where xray should install.
jfrog_home_directory: /opt/jfrog
# The remote xray download file
xray_tar: https://dl.bintray.com/jfrog/jfrog-xray/xray-linux/{{ xray_version }}/jfrog-xray-{{ xray_version }}-linux.tar.gz
# The xray install directory
xray_untar_home: "{{ jfrog_home_directory }}/jfrog-xray-{{ xray_version }}-linux"
xray_home: "{{ jfrog_home_directory }}/xray"
# xray users and groups
xray_user: xray
xray_group: xray
# if this is an upgrade
xray_upgrade_only: false
xray_system_yaml_template: system.yaml.j2

View File

@@ -1,2 +0,0 @@
---
# handlers file for xray

View File

@@ -1,42 +0,0 @@
---
- name: Install db5.3-util
apt:
deb: "{{ xray_home }}/app/third-party/misc/db5.3-util_5.3.28-3ubuntu3_amd64.deb"
ignore_errors: yes
become: yes
- name: Install db-util
apt:
deb: "{{ xray_home }}/app/third-party/misc/db-util_1_3a5.3.21exp1ubuntu1_all.deb"
ignore_errors: yes
become: yes
- name: Install libssl
apt:
deb: "{{ xray_home }}/app/third-party/rabbitmq/libssl1.1_1.1.0j-1_deb9u1_amd64.deb"
ignore_errors: yes
become: yes
- name: Install socat
apt:
deb: "{{ xray_home }}/app/third-party/rabbitmq/socat_1.7.3.1-2+deb9u1_amd64.deb"
become: yes
- name: Install libwxbase3.0-0v5
apt:
name: libwxbase3.0-0v5
update_cache: yes
state: present
become: yes
- name: Install erlang 21.2.1-1
apt:
deb: "{{ xray_home }}/app/third-party/rabbitmq/esl-erlang_21.2.1-1~ubuntu~xenial_amd64.deb"
when: xray_version is version("3.8.0","<")
become: yes
- name: Install erlang 22.3.4.1-1
apt:
deb: "{{ xray_home }}/app/third-party/rabbitmq/esl-erlang_22.3.4.1-1_ubuntu_xenial_amd64.deb"
when: xray_version is version("3.8.0",">=")
become: yes

View File

@@ -1,26 +0,0 @@
---
- name: Install db-util
yum:
name: "{{ xray_home }}/app/third-party/misc/libdb-utils-5.3.21-19.el7.x86_64.rpm"
state: present
become: yes
- name: Install socat
yum:
name: "{{ xray_home }}/app/third-party/rabbitmq/socat-1.7.3.2-2.el7.x86_64.rpm"
state: present
become: yes
- name: Install erlang 21.1.4-1
yum:
name: "{{ xray_home }}/app/third-party/rabbitmq/erlang-21.1.4-1.el7.centos.x86_64.rpm"
state: present
when: xray_version is version("3.8.0","<")
become: yes
- name: Install erlang 22.3.4.1-1
yum:
name: "{{ xray_home }}/app/third-party/rabbitmq/erlang-22.3.4.1-1.el7.centos.x86_64.rpm"
state: present
when: xray_version is version("3.8.0",">=")
become: yes

View File

@@ -1,111 +0,0 @@
---
- debug:
msg: "Performing installation of Xray..."
- name: create group for xray
group:
name: "{{ xray_group }}"
state: present
become: yes
- name: create user for xray
user:
name: "{{ xray_user }}"
group: "{{ xray_group }}"
system: yes
become: yes
- name: ensure jfrog_home_directory exists
file:
path: "{{ jfrog_home_directory }}"
state: directory
become: yes
- name: download xray
unarchive:
src: "{{ xray_tar }}"
dest: "{{ jfrog_home_directory }}"
remote_src: yes
owner: "{{ xray_user }}"
group: "{{ xray_group }}"
creates: "{{ xray_untar_home }}"
become: yes
register: downloadxray
until: downloadxray is succeeded
retries: 3
- name: Move untar directory to xray home
command: "mv {{ xray_untar_home }} {{ xray_home }}"
become: yes
- debug:
msg: "Running dependency installation for {{ ansible_os_family }}"
- name: perform dependency installation
include_tasks: "{{ ansible_os_family }}.yml"
- name: ensure etc exists
file:
path: "{{ xray_home }}/var/etc"
state: directory
owner: "{{ xray_user }}"
group: "{{ xray_group }}"
become: yes
- name: use specified system yaml
copy:
src: "{{ xray_system_yaml }}"
dest: "{{ xray_home }}/var/etc/system.yaml"
become: yes
when: xray_system_yaml is defined
- name: configure system yaml template
template:
src: "{{ xray_system_yaml_template }}"
dest: "{{ xray_home }}/var/etc/system.yaml"
become: yes
when: xray_system_yaml is not defined
- name: ensure {{ xray_home }}/var/etc/security/ exists
file:
path: "{{ xray_home }}/var/etc/security/"
state: directory
owner: "{{ xray_user }}"
group: "{{ xray_group }}"
become: yes
- name: configure master key
template:
src: master.key.j2
dest: "{{ xray_home }}/var/etc/security/master.key"
become: yes
- name: configure join key
template:
src: join.key.j2
dest: "{{ xray_home }}/var/etc/security/join.key"
become: yes
- name: ensure {{ xray_home }}/var/etc/info/ exists
file:
path: "{{ xray_home }}/var/etc/info/"
state: directory
owner: "{{ xray_user }}"
group: "{{ xray_group }}"
become: yes
- name: configure installer info
template:
src: installer-info.json.j2
dest: "{{ xray_home }}/var/etc/info/installer-info.json"
become: yes
- name: create xray service
shell: "{{ xray_home }}/app/bin/installService.sh"
become: yes
- name: start and enable xray
service:
name: xray
state: restarted
enabled: yes
become: yes

View File

@@ -1,54 +0,0 @@
---
- debug:
msg: "Performing upgrade of Xray..."
- name: stop xray
service:
name: xray
state: stopped
become: yes
- name: ensure jfrog_home_directory exists
file:
path: "{{ jfrog_home_directory }}"
state: directory
become: yes
- name: download xray
unarchive:
src: "{{ xray_tar }}"
dest: "{{ jfrog_home_directory }}"
remote_src: yes
owner: "{{ xray_user }}"
group: "{{ xray_group }}"
creates: "{{ xray_untar_home }}"
become: yes
register: downloadxray
until: downloadxray is succeeded
retries: 3
- name: Delete xray app
file:
path: "{{ xray_home }}/app"
state: absent
become: yes
- name: Copy new app to xray app
command: "cp -r {{ xray_untar_home }}/app {{ xray_home }}/app"
become: yes
- name: Delete untar directory
file:
path: "{{ xray_untar_home }}"
state: absent
become: yes
- name: create xray service
shell: "{{ xray_home }}/app/bin/installService.sh"
become: yes
- name: start and enable xray
service:
name: xray
state: restarted
enabled: yes
become: yes

View File

@@ -1,11 +0,0 @@
{
"productId": "Ansible_artifactory/1.0.0",
"features": [
{
"featureId": "Partner/ACC-006973"
},
{
"featureId": "Channel/{{ ansible_marketplace }}"
}
]
}

View File

@@ -1,36 +0,0 @@
## @formatter:off
## JFROG ARTIFACTORY SYSTEM CONFIGURATION FILE
## HOW TO USE: uncomment any field by deleting only the leading '#' character, keeping the correct yaml indentation.
configVersion: 1
## NOTE: JFROG_HOME is a placeholder for the JFrog root directory containing the deployed product, the home directory for all JFrog products.
## Replace JFROG_HOME with the real path! For example, in RPM install, JFROG_HOME=/opt/jfrog
## NOTE: Sensitive information such as passwords and join key are encrypted on first read.
## NOTE: The provided commented key and value is the default.
## SHARED CONFIGURATIONS
## A shared section for keys across all services in this config
shared:
## Base URL of the JFrog Platform Deployment (JPD)
## This is the URL to the machine where JFrog Artifactory is deployed, or the load balancer pointing to it. It is recommended to use DNS names rather than direct IPs.
## Examples: "http://jfrog.acme.com" or "http://10.20.30.40:8082"
jfrogUrl: {{ jfrog_url }}
## Node Settings
node:
## A unique id to identify this node.
## Default: auto generated at startup.
id: {{ ansible_machine_id }}
## Database Configuration
database:
## One of: mysql, oracle, mssql, postgresql, mariadb
## Default: Embedded derby
## Example for mysql/postgresql
type: "{{ db_type }}"
driver: "{{ db_driver }}"
url: "{{ db_url }}"
username: "{{ db_user }}"
password: "{{ db_password }}"

View File

@@ -0,0 +1,112 @@
# JFrog Platform Ansible Collection
This Ansible directory consists of the following directories that support the JFrog Platform collection.
* [ansible_collections directory](ansible_collections) - This directory contains the Ansible collection package that has the Ansible roles for Artifactory, Distribution, Mission Control and Xray. See the roles' README for details on the product roles and variables.
* [examples directory](examples) - This directory contains example playbooks for various architectures.
## Getting Started
1. Install this collection from Ansible Galaxy. This collection is also available in RedHat Automation Hub.
```
ansible-galaxy collection install jfrog.platform
```
Ensure you reference the collection in your playbook when using these roles.
```
---
- hosts: artifactory_servers
collections:
- jfrog.platform
roles:
- artifactory
```
2. Ansible uses SSH to connect to hosts. Ensure that your SSH private key is on your client and the public keys are installed on your Ansible hosts.
3. Create your inventory file. Use one of the examples from the [examples directory](examples) to construct an inventory file (hosts.ini) with the host addresses.
4. Create your playbook. Use one of the examples from the [examples directory](examples) to construct a playbook using the JFrog Ansible roles. These roles will be applied to your inventory and provision software.
5. Then execute with the following command to provision the JFrog Platform with Ansible.
```
ansible-playbook -vv platform.yml -i hosts.ini
```
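The command above assumes a hosts.ini inventory built from the examples directory; a trimmed sketch (host names and addresses are illustrative) might look like:
```
[postgres_servers]
postgres-1 ansible_host=10.0.0.5 private_ip=10.0.0.5
[artifactory_servers]
artifactory-1 ansible_host=10.0.0.6 private_ip=10.0.0.6
[xray_servers]
xray-1 ansible_host=10.0.0.7 private_ip=10.0.0.7
```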
## Generating Master and Join Keys
**Note**: If you don't provide these keys, they will be set to defaults (see the group_vars/all/vars.yml file).
For production deployments, you may want to generate your own master and join keys and apply them to all the nodes.
**IMPORTANT**: Save the master and join keys generated below for future upgrades.
```
MASTER_KEY_VALUE=$(openssl rand -hex 32)
JOIN_KEY_VALUE=$(openssl rand -hex 32)
ansible-playbook -vv platform.yml -i hosts.ini --extra-vars "master_key=$MASTER_KEY_VALUE join_key=$JOIN_KEY_VALUE"
```
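To reuse the same keys on later upgrade runs, they can be written to local files at generation time (the file names here are illustrative, not part of the collection):
```shell
# 32 random bytes rendered as 64 hex characters, the format expected above.
MASTER_KEY_VALUE=$(openssl rand -hex 32)
JOIN_KEY_VALUE=$(openssl rand -hex 32)

# Keep these files safe; the same values must be supplied on upgrades.
printf '%s\n' "$MASTER_KEY_VALUE" > jfrog_master.key
printf '%s\n' "$JOIN_KEY_VALUE" > jfrog_join.key
```
On an upgrade, pass the stored values back with `--extra-vars "master_key=$(cat jfrog_master.key) join_key=$(cat jfrog_join.key)"`.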
## Using [Ansible Vault](https://docs.ansible.com/ansible/latest/user_guide/vault.html) to Encrypt Vars
Some vars you may want to keep secret. You may put these vars into a separate file and encrypt them using [Ansible Vault](https://docs.ansible.com/ansible/latest/user_guide/vault.html).
```
ansible-vault encrypt secret-vars.yml --vault-password-file ~/.vault_pass.txt
```
then in your playbook include the secret vars file.
```
- hosts: artifactory_servers
vars_files:
- ./vars/secret-vars.yml
- ./vars/vars.yml
roles:
- artifactory
```
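The encrypted ./vars/secret-vars.yml would typically hold only the sensitive values. A sketch, where the keys match role variables used elsewhere in this collection and the values are placeholders:
```
master_key: "<generated master key>"
join_key: "<generated join key>"
artifactory_db_password: "<database password>"
xray_db_password: "<database password>"
```
Run the playbook with `--vault-password-file ~/.vault_pass.txt` (or `--ask-vault-pass`) so Ansible can decrypt the file.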
## Upgrades
All JFrog product roles support software upgrades. To use a role to perform a software upgrade only, use the _<product>_upgrade_only_ variable and specify the version. See the following example.
```
- hosts: artifactory_servers
vars:
artifactory_version: "{{ lookup('env', 'artifactory_version_upgrade') }}"
artifactory_upgrade_only: true
roles:
- artifactory
- hosts: xray_servers
vars:
xray_version: "{{ lookup('env', 'xray_version_upgrade') }}"
xray_upgrade_only: true
roles:
- xray
```
## Building the Collection Archive
1. Go to the [ansible_collections/jfrog/platform directory](ansible_collections/jfrog/platform).
2. Update the galaxy.yml meta file as needed. Update the version.
3. Build the archive. (Requires Ansible 2.9+)
```
ansible-galaxy collection build
```
## OS support
The JFrog Platform Ansible Collection can be installed on the following operating systems:
* Ubuntu LTS versions (16.04/18.04/20.04)
* CentOS/RHEL 7.x/8.x
* Debian 9.x/10.x
## Known issues
* Refer [here](https://github.com/jfrog/JFrog-Cloud-Installers/issues?q=is%3Aopen+is%3Aissue+label%3AAnsible)
* By default, `ansible_python_interpreter: "/usr/bin/python3"` is used. For CentOS/RHEL 7, set this to `"/usr/bin/python"`. For example:
```
ansible-playbook -vv platform.yml -i hosts.ini -e 'ansible_python_interpreter=/usr/bin/python'
```

View File

@@ -0,0 +1,6 @@
[defaults]
host_key_checking = false
stdout_callback = debug
remote_tmp = /tmp/.ansible/tmp
private_key_file=~/.ssh/ansible-jfrog.key
timeout = 20

View File

@@ -0,0 +1,4 @@
---
- hosts: artifactory_servers
roles:
- artifactory

View File

@@ -0,0 +1,4 @@
---
- hosts: distribution_servers
roles:
- distribution

View File

@@ -6,10 +6,10 @@
namespace: "jfrog"
# The name of the collection. Has the same character restrictions as 'namespace'
name: "installers"
name: "platform"
# The version of the collection. Must be compatible with semantic versioning
version: "1.1.2"
version: "7.18.5"
# The path to the Markdown (.md) readme file. This path is relative to the root of the collection
readme: "README.md"
@@ -17,13 +17,13 @@ readme: "README.md"
# A list of the collection's content authors. Can be just the name or in the format 'Full Name <email> (url)
# @nicks:irc/im.site#channel'
authors:
- "Jeff Fry <jefff@jfrog.com>"
- "JFrog Maintainers Team <installers@jfrog.com>"
### OPTIONAL but strongly recommended
# A short summary description of the collection
description: "This collection provides roles for installing Artifactory and Xray. Additionally, it provides optional SSL and Postgresql roles if these are needed for your deployment."
description: "This collection provides roles for installing JFrog Platform which includes Artifactory, Distribution, Mission-control and Xray. Additionally, it provides optional SSL and Postgresql roles if these are needed for your deployment."
# Either a single license or a list of licenses for content inside of a collection. Ansible Galaxy currently only
# accepts L(SPDX,https://spdx.org/licenses/) licenses. This key is mutually exclusive with 'license_file'
@@ -37,10 +37,14 @@ license_file: ""
# A list of tags you want to associate with the collection for indexing/searching. A tag name has the same character
# requirements as 'namespace' and 'name'
tags:
- artifactory
- xray
- jfrog
- platform
- devops
- application
- artifactory
- distribution
- missioncontrol
- xray
# Collections that this collection requires to be installed for it to be usable. The key of the dict is the
# collection label 'namespace.name'. The value is a version range
@@ -49,13 +53,13 @@ tags:
dependencies: {}
# The URL of the originating SCM repository
repository: "https://github.com/jfrog/JFrog-Cloud-Installers/"
repository: "https://github.com/jfrog/JFrog-Cloud-Installers/Ansible"
# The URL to any online docs
documentation: "https://github.com/jfrog/JFrog-Cloud-Installers/blob/master/Ansible/README.md"
# The URL to the homepage of the collection/project
homepage: "https://github.com/jfrog/JFrog-Cloud-Installers/"
homepage: "https://github.com/jfrog/JFrog-Cloud-Installers/Ansible"
# The URL to the collection issue tracker
issues: "https://github.com/jfrog/JFrog-Cloud-Installers/issues"

View File

@@ -0,0 +1,8 @@
# The version of products to install
artifactory_version: 7.18.5
xray_version: 3.24.2
distribution_version: 2.7.1
missioncontrol_version: 4.7.3
# platform collection version
platform_collection_version: 7.18.5

View File

@@ -0,0 +1,74 @@
---
# Defaults
## Note: These values are global and can be overridden in the role/<product>/defaults/main.yaml file
## For production deployments, you may want to generate your own master and join keys and apply them to all the nodes.
master_key: ee69d96880726d3abf6b42b97d2ae589111ea95c2a8bd5876ec5cd9e8ee34f86
join_key: 83da88eaaa08dfed5b86888fcec85f19ace0c3ff8747bcefcec2c9769ad4043d
jfrog_url: >-
{%- for host in groups['artifactory_servers'] -%}
"http://{{ hostvars[host]['ansible_host'] }}:8082"
{%- endfor -%}
# Artifactory DB details
artifactory_db_type: postgresql
artifactory_db_driver: org.postgresql.Driver
artifactory_db_name: artifactory
artifactory_db_user: artifactory
artifactory_db_password: password
artifactory_db_url: >-
{%- for item in groups['postgres_servers'] -%}
jdbc:postgresql://{{ hostvars[item]['ansible_host'] }}:5432/{{ artifactory_db_name }}
{%- endfor -%}
# Xray DB details
xray_db_type: postgresql
xray_db_driver: org.postgresql.Driver
xray_db_name: xray
xray_db_user: xray
xray_db_password: password
xray_db_url: >-
{%- for item in groups['postgres_servers'] -%}
postgres://{{ hostvars[item]['ansible_host'] }}:5432/{{ xray_db_name }}?sslmode=disable
{%- endfor -%}
# Distribution DB details
distribution_db_type: postgresql
distribution_db_driver: org.postgresql.Driver
distribution_db_name: distribution
distribution_db_user: distribution
distribution_db_password: password
distribution_db_url: >-
{%- for item in groups['postgres_servers'] -%}
jdbc:postgresql://{{ hostvars[item]['ansible_host'] }}:5432/{{ distribution_db_name }}?sslmode=disable
{%- endfor -%}
# MissionControl DB details
mc_db_type: postgresql
mc_db_driver: org.postgresql.Driver
mc_db_name: mc
mc_db_user: mc
mc_db_password: password
mc_db_url: >-
{%- for item in groups['postgres_servers'] -%}
jdbc:postgresql://{{ hostvars[item]['ansible_host'] }}:5432/{{ mc_db_name }}?sslmode=disable
{%- endfor -%}
# Postgresql users and databases/schemas
db_users:
- { db_user: "{{ artifactory_db_user }}", db_password: "{{ artifactory_db_password }}" }
- { db_user: "{{ xray_db_user }}", db_password: "{{ xray_db_password }}" }
- { db_user: "{{ distribution_db_user }}", db_password: "{{ distribution_db_password }}" }
- { db_user: "{{ mc_db_user }}", db_password: "{{ mc_db_password }}" }
dbs:
- { db_name: "{{ artifactory_db_name }}", db_owner: "{{ artifactory_db_user }}" }
- { db_name: "{{ xray_db_name }}", db_owner: "{{ xray_db_user }}" }
- { db_name: "{{ distribution_db_name }}", db_owner: "{{ distribution_db_user }}" }
- { db_name: "{{ mc_db_name }}", db_owner: "{{ mc_db_user }}" }
mc_schemas:
- jfmc_server
- insight_server
- insight_scheduler
# For CentOS/RHEL 7, set this to "/usr/bin/python"
ansible_python_interpreter: "/usr/bin/python3"
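The db_url values above are derived from the postgres_servers inventory group. When the databases live on an external or managed PostgreSQL instance instead, the URLs can simply be overridden, keeping the same formats shown above (the endpoint below is hypothetical):
```
artifactory_db_url: jdbc:postgresql://db.example.internal:5432/artifactory
xray_db_url: postgres://db.example.internal:5432/xray?sslmode=disable
distribution_db_url: jdbc:postgresql://db.example.internal:5432/distribution?sslmode=disable
mc_db_url: jdbc:postgresql://db.example.internal:5432/mc?sslmode=disable
```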

View File

@@ -0,0 +1,23 @@
[postgres_servers]
postgres-1 ansible_host=10.70.64.85 private_ip=10.70.64.85
[artifactory_servers]
artifactory-1 ansible_host=10.70.64.84 private_ip=10.70.64.84
[xray_servers]
xray-1 ansible_host=10.70.64.83 private_ip=10.70.64.83
[distribution_servers]
distribution-1 ansible_host=10.70.64.82 private_ip=10.70.64.82
[missionControl_servers]
missionControl-1 ansible_host=10.70.64.79 private_ip=10.70.64.79
[xray_secondary_servers]
xray-2 ansible_host=0.0.0.0 private_ip=0.0.0.0
[distribution_secondary_servers]
distribution-2 ansible_host=0.0.0.0 private_ip=0.0.0.0
[missionControl_secondary_servers]
missionControl-2 ansible_host=0.0.0.0 private_ip=0.0.0.0

View File

@@ -0,0 +1,4 @@
---
- hosts: missioncontrol_servers
roles:
- missioncontrol

View File

@@ -0,0 +1,16 @@
---
- hosts: postgres_servers
roles:
- postgres
- hosts: artifactory_servers
roles:
- artifactory
- hosts: xray_servers
roles:
- xray
- hosts: distribution_servers
roles:
- distribution
- hosts: missioncontrol_servers
roles:
- missioncontrol

View File

@@ -0,0 +1,31 @@
# Collections Plugins Directory
This directory can be used to ship various plugins inside an Ansible collection. Each plugin is placed in a folder that
is named after the type of plugin it is in. It can also include the `module_utils` and `modules` directory that
would contain module utils and modules respectively.
Here is an example directory of the majority of plugins currently supported by Ansible:
```
└── plugins
├── action
├── become
├── cache
├── callback
├── cliconf
├── connection
├── filter
├── httpapi
├── inventory
├── lookup
├── module_utils
├── modules
├── netconf
├── shell
├── strategy
├── terminal
├── test
└── vars
```
A full list of plugin types can be found at [Working With Plugins](https://docs.ansible.com/ansible/2.9/plugins/plugins.html).

View File

@@ -0,0 +1,4 @@
---
- hosts: postgres
roles:
- postgres

View File

@@ -0,0 +1,28 @@
# artifactory
The artifactory role installs the Artifactory Pro software onto the host. Per the vars below, it will configure a node as primary or secondary. This role uses the secondary role artifactory_nginx to install NGINX.
## Role Variables
* _server_name_: **mandatory** This is the server name. eg. "artifactory.54.175.51.178.xip.io"
* _artifactory_upgrade_only_: Perform a software upgrade only. Default is false.
Additional variables can be found in [defaults/main.yml](./defaults/main.yml).
## Example Playbook
```
---
- hosts: artifactory_servers
roles:
- artifactory
```
## Upgrades
The Artifactory role supports software upgrades. To use a role to perform a software upgrade only, use the _artifactory_upgrade_only_ variable and specify the version. See the following example.
```
- hosts: artifactory_servers
vars:
artifactory_version: "{{ lookup('env', 'artifactory_version_upgrade') }}"
artifactory_upgrade_only: true
roles:
- artifactory
```

View File

@@ -0,0 +1,57 @@
---
# defaults file for artifactory
# indicates where this collection was downloaded from (galaxy, automation_hub, standalone)
ansible_marketplace: standalone
# Set this to true when SSL is enabled (to use the artifactory_nginx_ssl role); defaults to false (artifactory uses the artifactory_nginx role)
artifactory_nginx_ssl_enabled: false
# Provide single node license
# artifactory_single_license:
# Provide individual (HA) licenses file separated by new line and set artifactory_ha_enabled: true.
# Example:
# artifactory_licenses: |-
# <license_1>
# <license_2>
# <license_3>
# To enable HA, set to true
artifactory_ha_enabled: false
# By default, all nodes are primary (CNHA) - https://www.jfrog.com/confluence/display/JFROG/High+Availability#HighAvailability-Cloud-NativeHighAvailability
artifactory_taskAffinity: any
# The location where Artifactory should install.
jfrog_home_directory: /opt/jfrog
# The location where Artifactory should store data.
artifactory_file_store_dir: /data
# Pick the Artifactory flavour to install; can also be cpp-ce, jcr or pro.
artifactory_flavour: pro
artifactory_extra_java_opts: -server -Xms512m -Xmx2g -Xss256k -XX:+UseG1GC
artifactory_system_yaml_template: system.yaml.j2
artifactory_tar: https://releases.jfrog.io/artifactory/artifactory-pro/org/artifactory/pro/jfrog-artifactory-pro/{{ artifactory_version }}/jfrog-artifactory-pro-{{ artifactory_version }}-linux.tar.gz
artifactory_home: "{{ jfrog_home_directory }}/artifactory"
artifactory_untar_home: "{{ jfrog_home_directory }}/artifactory-{{ artifactory_flavour }}-{{ artifactory_version }}"
postgres_driver_download_url: https://repo1.maven.org/maven2/org/postgresql/postgresql/42.2.20/postgresql-42.2.20.jar
artifactory_user: artifactory
artifactory_group: artifactory
artifactory_daemon: artifactory
artifactory_uid: 1030
artifactory_gid: 1030
# if this is an upgrade
artifactory_upgrade_only: false
#default username and password
artifactory_admin_username: admin
artifactory_admin_password: password
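For air-gapped hosts, the two download URLs in these defaults can point at an internal mirror instead of the public endpoints; a hypothetical override in inventory or group vars:
```
artifactory_tar: https://mirror.example.internal/jfrog-artifactory-pro-{{ artifactory_version }}-linux.tar.gz
postgres_driver_download_url: https://mirror.example.internal/postgresql-42.2.20.jar
```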

View File

@@ -0,0 +1,7 @@
---
# handlers file for artifactory
- name: restart artifactory
become: yes
systemd:
name: "{{ artifactory_daemon }}"
state: restarted

View File

@@ -1,5 +1,5 @@
galaxy_info:
author: "Jeff Fry <jefff@jfrog.com>"
author: "JFrog Maintainers Team <installers@jfrog.com>"
description: "The artifactory role installs the Artifactory Pro software onto the host. Per the Vars below, it will configure a node as primary or secondary. This role uses secondary roles artifactory_nginx to install nginx."
company: JFrog

View File

@@ -0,0 +1,161 @@
---
- debug:
msg: "Performing installation of Artifactory version : {{ artifactory_version }} "
- name: install nginx
include_role:
name: artifactory_nginx
when: not artifactory_nginx_ssl_enabled
- name: install nginx with SSL
include_role:
name: artifactory_nginx_ssl
when: artifactory_nginx_ssl_enabled
- name: Ensure group artifactory exist
become: yes
group:
name: "{{ artifactory_group }}"
gid: "{{ artifactory_gid }}"
state: present
- name: Ensure user artifactory exist
become: yes
user:
uid: "{{ artifactory_uid }}"
name: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
create_home: yes
home: "{{ artifactory_home }}"
shell: /bin/bash
state: present
- name: Download artifactory
become: yes
unarchive:
src: "{{ artifactory_tar }}"
dest: "{{ jfrog_home_directory }}"
remote_src: yes
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
creates: "{{ artifactory_untar_home }}"
when: artifactory_tar is defined
register: downloadartifactory
until: downloadartifactory is succeeded
retries: 3
- name: Check if app directory exists
become: yes
stat:
path: "{{ artifactory_home }}/app"
register: app_dir_check
- name: Copy untar directory to artifactory home
become: yes
command: "cp -r {{ artifactory_untar_home }}/. {{ artifactory_home }}"
when: not app_dir_check.stat.exists
- name: Create required directories
become: yes
file:
path: "{{ item }}"
state: directory
recurse: yes
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
loop:
- "{{ artifactory_file_store_dir }}"
- "{{ artifactory_home }}/var/data"
- "{{ artifactory_home }}/var/etc"
- "{{ artifactory_home }}/var/etc/security/"
- "{{ artifactory_home }}/var/etc/artifactory/info/"
- name: Configure systemyaml
become: yes
template:
src: "{{ artifactory_system_yaml_template }}"
dest: "{{ artifactory_home }}/var/etc/system.yaml"
notify: restart artifactory
- name: Configure master key
become: yes
copy:
dest: "{{ artifactory_home }}/var/etc/security/master.key"
content: |
{{ master_key }}
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
mode: 0640
- name: Configure join key
become: yes
copy:
dest: "{{ artifactory_home }}/var/etc/security/join.key"
content: |
{{ join_key }}
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
mode: 0640
notify: restart artifactory
- name: Configure installer info
become: yes
template:
src: installer-info.json.j2
dest: "{{ artifactory_home }}/var/etc/artifactory/info/installer-info.json"
notify: restart artifactory
- name: Configure binary store
become: yes
template:
src: binarystore.xml.j2
dest: "{{ artifactory_home }}/var/etc/artifactory/binarystore.xml"
notify: restart artifactory
- name: Configure single license
become: yes
template:
src: artifactory.lic.j2
dest: "{{ artifactory_home }}/var/etc/artifactory/artifactory.lic"
when: artifactory_single_license is defined
notify: restart artifactory
- name: Configure HA licenses
become: yes
template:
src: artifactory.cluster.license.j2
dest: "{{ artifactory_home }}/var/etc/artifactory/artifactory.cluster.license"
when: artifactory_licenses is defined
notify: restart artifactory
- name: Download database driver
become: yes
get_url:
url: "{{ postgres_driver_download_url }}"
dest: "{{ artifactory_home }}/var/bootstrap/artifactory/tomcat/lib"
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
when: postgres_driver_download_url is defined
notify: restart artifactory
- name: Create artifactory service
become: yes
shell: "{{ artifactory_home }}/app/bin/installService.sh"
- name: Ensure permissions are correct
become: yes
file:
path: "{{ jfrog_home_directory }}"
group: "{{ artifactory_group }}"
owner: "{{ artifactory_user }}"
recurse: yes
- name: Restart artifactory
meta: flush_handlers
- name: Wait for artifactory to be fully deployed
uri: url=http://127.0.0.1:8082/router/api/v1/system/health timeout=130
register: result
until: result.status == 200
retries: 25
delay: 5

View File

@@ -0,0 +1,6 @@
- name: perform installation
include_tasks: "install.yml"
when: not artifactory_upgrade_only
- name: perform upgrade
include_tasks: "upgrade.yml"
when: artifactory_upgrade_only

View File

@@ -0,0 +1,105 @@
---
- debug:
msg: "Performing upgrade of Artifactory version to : {{ artifactory_version }} "
- name: Stop artifactory
become: yes
systemd:
name: "{{ artifactory_daemon }}"
state: stopped
- name: Ensure jfrog_home_directory exists
become: yes
file:
path: "{{ jfrog_home_directory }}"
state: directory
- name: Download artifactory for upgrade
become: yes
unarchive:
src: "{{ artifactory_tar }}"
dest: "{{ jfrog_home_directory }}"
remote_src: yes
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
creates: "{{ artifactory_untar_home }}"
when: artifactory_tar is defined
register: downloadartifactory
until: downloadartifactory is succeeded
retries: 3
- name: Delete artifactory app directory
become: yes
file:
path: "{{ artifactory_home }}/app"
state: absent
- name: Copy new app to artifactory app
become: yes
command: "cp -r {{ artifactory_untar_home }}/app/. {{ artifactory_home }}/app"
- name: Configure join key
become: yes
copy:
dest: "{{ artifactory_home }}/var/etc/security/join.key"
content: |
{{ join_key }}
owner: "{{ artifactory_user }}"
group: "{{ artifactory_group }}"
mode: 0640
notify: restart artifactory
- name: Configure single license
become: yes
template:
src: artifactory.lic.j2
dest: "{{ artifactory_home }}/var/etc/artifactory/artifactory.lic"
when: artifactory_single_license is defined
notify: restart artifactory
- name: Configure HA licenses
become: yes
template:
src: artifactory.cluster.license.j2
dest: "{{ artifactory_home }}/var/etc/artifactory/artifactory.cluster.license"
when: artifactory_licenses is defined
notify: restart artifactory
- name: Configure installer info
become: yes
template:
src: installer-info.json.j2
dest: "{{ artifactory_home }}/var/etc/artifactory/info/installer-info.json"
notify: restart artifactory
- name: Configure binary store
become: yes
template:
src: binarystore.xml.j2
dest: "{{ artifactory_home }}/var/etc/artifactory/binarystore.xml"
notify: restart artifactory
- name: Configure systemyaml
become: yes
template:
src: "{{ artifactory_system_yaml_template }}"
dest: "{{ artifactory_home }}/var/etc/system.yaml"
notify: restart artifactory
- name: Ensure permissions are correct
become: yes
file:
path: "{{ jfrog_home_directory }}"
group: "{{ artifactory_group }}"
owner: "{{ artifactory_user }}"
recurse: yes
- name: Restart artifactory
meta: flush_handlers
- name: Wait for artifactory to be fully deployed
uri: url=http://127.0.0.1:8082/router/api/v1/system/health timeout=130
register: result
until: result.status == 200
retries: 25
delay: 5

View File

@@ -0,0 +1,3 @@
{% if (artifactory_licenses) and (artifactory_licenses|length > 0) %}
{{ artifactory_licenses }}
{% endif %}

View File

@@ -0,0 +1,3 @@
{% if (artifactory_single_license) and (artifactory_single_license|length > 0) %}
{{ artifactory_single_license }}
{% endif %}

View File

@@ -1,4 +1,4 @@
<?xml version="1.0" encoding="UTF-8"?>
<config version="2">
<chain template="cluster-file-system"/>
</config>
</config>

View File

@@ -0,0 +1,9 @@
{{ ansible_managed | comment }}
{
"productId": "Ansible_Artifactory/{{ platform_collection_version }}-{{ artifactory_version }}",
"features": [
{
"featureId": "Channel/{{ ansible_marketplace }}"
}
]
}

View File

@@ -0,0 +1,17 @@
configVersion: 1
shared:
extraJavaOpts: "{{ artifactory_extra_java_opts }}"
node:
id: {{ ansible_date_time.iso8601_micro | to_uuid }}
ip: {{ ansible_host }}
taskAffinity: {{ artifactory_taskAffinity }}
haEnabled: {{ artifactory_ha_enabled }}
database:
type: "{{ artifactory_db_type }}"
driver: "{{ artifactory_db_driver }}"
url: "{{ artifactory_db_url }}"
username: "{{ artifactory_db_user }}"
password: "{{ artifactory_db_password }}"
router:
entrypoints:
internalPort: 8046

View File

@@ -2,4 +2,4 @@
This role installs NGINX for artifactory. This role is automatically called by the artifactory role and isn't intended to be used separately.
## Role Variables
* _server_name_: This is the server name. eg. "artifactory.54.175.51.178.xip.io"
* _server_name_: **mandatory** This is the server name. eg. "artifactory.54.175.51.178.xip.io"

View File

@@ -0,0 +1,7 @@
---
# defaults file for artifactory_nginx
## For production deployments, you SHOULD change it.
server_name: test.artifactory.com
nginx_daemon: nginx

View File

@@ -0,0 +1,8 @@
---
# handlers file for artifactory_nginx
- name: restart nginx
become: yes
systemd:
name: "{{ nginx_daemon }}"
state: restarted
enabled: yes

View File

@@ -1,5 +1,5 @@
galaxy_info:
author: "Jeff Fry <jefff@jfrog.com>"
author: "JFrog Maintainers Team <installers@jfrog.com>"
description: "This role installs NGINX for artifactory. This role is automatically called by the artifactory role and isn't intended to be used separately."
company: JFrog

View File

@@ -1,9 +1,9 @@
---
- name: apt-get update
become: yes
apt:
update_cache: yes
register: package_res
retries: 5
delay: 60
become: yes
until: package_res is success

View File

@@ -1,6 +1,6 @@
---
- name: epel-release
become: yes
yum:
name: epel-release
state: present
become: yes
state: present

View File

@@ -0,0 +1,35 @@
---
- name: Install dependencies
include_tasks: "{{ ansible_os_family }}.yml"
- name: Install nginx after dependency installation
become: yes
package:
name: nginx
state: present
register: package_res
retries: 5
delay: 60
until: package_res is success
- name: Configure main nginx conf file.
become: yes
copy:
src: nginx.conf
dest: /etc/nginx/nginx.conf
owner: root
group: root
mode: '0755'
- name: Configure the artifactory nginx conf
become: yes
template:
src: artifactory.conf.j2
dest: /etc/nginx/conf.d/artifactory.conf
owner: root
group: root
mode: '0755'
notify: restart nginx
- name: Restart nginx
meta: flush_handlers

View File

@@ -1,6 +1,6 @@
###########################################################
## this configuration was generated by JFrog Artifactory ##
###########################################################
###########################################################
## add HA entries when HA is configured
upstream artifactory {

View File

@@ -5,12 +5,3 @@ The artifactory_nginx_ssl role installs and configures nginx for SSL.
* _server_name_: This is the server name. eg. "artifactory.54.175.51.178.xip.io"
* _certificate_: This is the SSL cert.
* _certificate_key_: This is the SSL private key.
## Example Playbook
```
---
- hosts: primary
roles:
- artifactory
- artifactory_nginx_ssl
```

View File

@@ -0,0 +1,7 @@
---
# defaults file for artifactory_nginx
## For production deployments, you SHOULD change this.
# server_name: test.artifactory.com
nginx_daemon: nginx

View File

@@ -0,0 +1,8 @@
---
# handlers file for artifactory_nginx_ssl
- name: restart nginx
become: yes
systemd:
name: "{{ nginx_daemon }}"
state: restarted
enabled: yes

View File

@@ -1,5 +1,5 @@
galaxy_info:
author: "Jeff Fry <jefff@jfrog.com>"
author: "JFrog Maintainers Team <installers@jfrog.com>"
description: "The artifactory_nginx_ssl role installs and configures nginx for SSL."
company: JFrog

View File

@@ -1,41 +1,40 @@
---
# tasks file for artifactory_nginx
- name: configure the artifactory nginx conf
- name: Configure the artifactory nginx conf
become: yes
template:
src: artifactory.conf.j2
dest: /etc/nginx/conf.d/artifactory.conf
owner: root
group: root
mode: '0755'
become: yes
notify: restart nginx
- name: ensure nginx dir exists
- name: Ensure nginx dir exists
become: yes
file:
path: "/var/opt/jfrog/nginx/ssl"
state: directory
become: yes
- name: configure certificate
- name: Configure certificate
become: yes
template:
src: certificate.pem.j2
dest: "/var/opt/jfrog/nginx/ssl/cert.pem"
become: yes
notify: restart nginx
- name: ensure pki exists
- name: Ensure pki exists
become: yes
file:
path: "/etc/pki/tls"
state: directory
become: yes
- name: configure key
- name: Configure key
become: yes
template:
src: certificate.key.j2
dest: "/etc/pki/tls/cert.key"
become: yes
notify: restart nginx
- name: restart nginx
service:
name: nginx
state: restarted
enabled: yes
become: yes
- name: Restart nginx
meta: flush_handlers

View File

@@ -1,6 +1,6 @@
###########################################################
## this configuration was generated by JFrog Artifactory ##
###########################################################
###########################################################
## add HA entries when HA is configured
upstream artifactory {

View File

@@ -0,0 +1,26 @@
# Distribution
The Distribution role installs the Distribution software onto the host. An Artifactory server and a PostgreSQL database are required.
### Role Variables
* _distribution_upgrade_only_: Performs a software upgrade only. Default is false.
Additional variables can be found in [defaults/main.yml](./defaults/main.yml).
## Example Playbook
```
---
- hosts: distribution_servers
roles:
- distribution
```
## Upgrades
The distribution role supports software upgrades. To use the role to perform a software upgrade only, set the _distribution_upgrade_only_ variable and specify the version. See the following example.
```
- hosts: distributionservers
vars:
distribution_version: "{{ lookup('env', 'distribution_version_upgrade') }}"
distribution_upgrade_only: true
roles:
- distribution
```

View File

@@ -0,0 +1,43 @@
---
# defaults file for distribution
# indicates where this collection was downloaded from (galaxy, automation_hub, standalone)
ansible_marketplace: standalone
# whether to enable HA
distribution_ha_enabled: false
distribution_ha_node_type: master
# The location where distribution should install.
jfrog_home_directory: /opt/jfrog
# The remote distribution download file
distribution_tar: https://releases.jfrog.io/artifactory/jfrog-distribution/distribution-linux/{{ distribution_version }}/jfrog-distribution-{{ distribution_version }}-linux.tar.gz
# The distribution install directory
distribution_untar_home: "{{ jfrog_home_directory }}/jfrog-distribution-{{ distribution_version }}-linux"
distribution_home: "{{ jfrog_home_directory }}/distribution"
distribution_install_script_path: "{{ distribution_home }}/app/bin"
distribution_thirdparty_path: "{{ distribution_home }}/app/third-party"
distribution_archive_service_cmd: "{{ distribution_install_script_path }}/installService.sh"
# distribution users and groups
distribution_user: distribution
distribution_group: distribution
distribution_uid: 1040
distribution_gid: 1040
distribution_daemon: distribution
flow_type: archive
# Redis details
distribution_redis_url: "redis://localhost:6379"
distribution_redis_password: password
# if this is an upgrade
distribution_upgrade_only: false
distribution_system_yaml_template: system.yaml.j2

View File

@@ -0,0 +1,7 @@
---
# handlers file for distribution
- name: restart distribution
become: yes
systemd:
name: "{{ distribution_daemon }}"
state: restarted

View File

@@ -0,0 +1,16 @@
galaxy_info:
author: "JFrog Maintainers Team <installers@jfrog.com>"
description: "The distribution role installs the Distribution software onto the host. An Artifactory server and a PostgreSQL database are required."
company: JFrog
issue_tracker_url: "https://github.com/jfrog/JFrog-Cloud-Installers/issues"
license: license (Apache-2.0)
min_ansible_version: 2.9
galaxy_tags:
- distribution
- jfrog
dependencies: []

View File

@@ -0,0 +1,44 @@
- name: Prepare expect scenario script
set_fact:
expect_scenario: |
set timeout 300
spawn {{ exp_executable_cmd }}
expect_before timeout { exit 1 }
set CYCLE_END 0
set count 0
while { $CYCLE_END == 0 } {
expect {
{% for each_request in exp_scenarios %}
-nocase -re {{ '{' }}{{ each_request.expecting }}.*} {
send "{{ each_request.sending }}\n"
}
{% endfor %}
eof {
set CYCLE_END 1
}
}
set count "[expr $count + 1]"
if { $count > 16} {
exit 128
}
}
expect eof
lassign [wait] pid spawnid os_error_flag value
puts "INSTALLER_EXIT_STATUS-$value"
- name: Interactive with expect
become: yes
ignore_errors: yes
shell: |
{{ expect_scenario }}
args:
executable: /usr/bin/expect
chdir: "{{ exp_dir }}"
register: exp_result
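The `set_fact` task above templates an expect(1) script out of the `exp_scenarios` list: each entry becomes a `-nocase -re` clause that matches an installer prompt and sends a reply. A rough Python illustration of that rendering step (the scenario entries here are hypothetical, not the real installer prompts):

```python
# Hypothetical entries mirroring the shape of exp_scenarios items.
scenarios = [
    {"expecting": "Are you adding an additional node", "sending": "n"},
    {"expecting": "Continue", "sending": "y"},
]

# Each entry renders to one -nocase -re clause, as in the Jinja for-loop above;
# %-formatting is used so the literal expect braces need no escaping.
clauses = "\n".join(
    '-nocase -re {%s.*} {\n  send "%s\\n"\n}' % (s["expecting"], s["sending"])
    for s in scenarios
)
script = "set timeout 300\nexpect {\n%s\neof { set CYCLE_END 1 }\n}" % clauses
```

The real task additionally wraps the `expect` block in a retry loop and reports the spawned installer's exit status, but the clause generation is the part driven by role data.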

View File

@@ -0,0 +1,155 @@
---
- debug:
msg: "Performing installation of Distribution version - {{ distribution_version }}"
- name: Install expect dependency
yum:
name: expect
state: present
become: yes
when: ansible_os_family == 'RedHat'
- name: Install expect dependency
apt:
name: expect
state: present
update_cache: yes
become: yes
when: ansible_os_family == 'Debian'
- name: Ensure distribution group exists
become: yes
group:
name: "{{ distribution_group }}"
gid: "{{ distribution_gid }}"
state: present
- name: Ensure distribution user exists
become: yes
user:
uid: "{{ distribution_uid }}"
name: "{{ distribution_user }}"
group: "{{ distribution_group }}"
create_home: yes
home: "{{ distribution_home }}"
shell: /bin/bash
state: present
- name: Download distribution
become: yes
unarchive:
src: "{{ distribution_tar }}"
dest: "{{ jfrog_home_directory }}"
remote_src: yes
owner: "{{ distribution_user }}"
group: "{{ distribution_group }}"
creates: "{{ distribution_untar_home }}"
register: downloaddistribution
until: downloaddistribution is succeeded
retries: 3
- name: Check if app directory exists
become: yes
stat:
path: "{{ distribution_home }}/app"
register: app_dir_check
- name: Copy untar directory to distribution home
become: yes
command: "cp -r {{ distribution_untar_home }}/. {{ distribution_home }}"
when: not app_dir_check.stat.exists
- name: Create required directories
become: yes
file:
path: "{{ item }}"
state: directory
recurse: yes
owner: "{{ distribution_user }}"
group: "{{ distribution_group }}"
loop:
- "{{ distribution_home }}/var/etc"
- "{{ distribution_home }}/var/etc/security/"
- "{{ distribution_home }}/var/etc/info/"
- "{{ distribution_home }}/var/etc/redis/"
- name: Configure master key
become: yes
copy:
dest: "{{ distribution_home }}/var/etc/security/master.key"
content: |
{{ master_key }}
owner: "{{ distribution_user }}"
group: "{{ distribution_group }}"
mode: 0640
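The `master_key` written above is expected to be a 128-bit hex string; the collection README generates it with `openssl rand -hex 16`. An equivalent using only Python's standard library:

```python
import secrets

# Equivalent of `openssl rand -hex 16`: 16 random bytes as 32 hex characters.
master_key = secrets.token_hex(16)
```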
- name: Check if install.sh wrapper script exists
become: yes
stat:
path: "{{ distribution_install_script_path }}/install.sh"
register: install_wrapper_script
- name: Include interactive installer scripts
include_vars: script/archive.yml
- name: Install Distribution
include_tasks: expect.yml
vars:
exp_executable_cmd: "./install.sh -u {{ distribution_user }} -g {{ distribution_group }}"
exp_dir: "{{ distribution_install_script_path }}"
exp_scenarios: "{{ distribution_installer_scenario['main'] }}"
args:
apply:
environment:
YQ_PATH: "{{ distribution_thirdparty_path }}/yq"
when: install_wrapper_script.stat.exists
- name: Configure redis config
become: yes
template:
src: "redis.conf.j2"
dest: "{{ distribution_home }}/var/etc/redis/redis.conf"
notify: restart distribution
- name: Configure systemyaml
become: yes
template:
src: "{{ distribution_system_yaml_template }}"
dest: "{{ distribution_home }}/var/etc/system.yaml"
notify: restart distribution
- name: Configure installer info
become: yes
template:
src: installer-info.json.j2
dest: "{{ distribution_home }}/var/etc/info/installer-info.json"
notify: restart distribution
- name: Update distribution permissions
become: yes
file:
path: "{{ distribution_home }}"
state: directory
recurse: yes
owner: "{{ distribution_user }}"
group: "{{ distribution_group }}"
mode: '0755'
- name: Install Distribution as a service
become: yes
shell: |
{{ distribution_archive_service_cmd }}
args:
chdir: "{{ distribution_install_script_path }}"
register: check_service_status_result
ignore_errors: yes
- name: Restart distribution
meta: flush_handlers
- name: Wait for distribution to be fully deployed
uri: url=http://127.0.0.1:8082/router/api/v1/system/health timeout=130
register: result
until: result.status == 200
retries: 25
delay: 5
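The final task polls the router health endpoint until it returns HTTP 200, retrying up to 25 times with a 5-second delay. The same loop sketched in Python, with the HTTP call injected as a callable so the logic can be exercised without a live server:

```python
import time
from typing import Callable

def wait_for_health(fetch: Callable[[], int],
                    retries: int = 25, delay: float = 5.0) -> bool:
    """Poll until fetch() reports HTTP 200, mirroring the until/retries loop."""
    for _ in range(retries):
        if fetch() == 200:
            return True
        time.sleep(delay)
    return False
```

In the role, `fetch` corresponds to a GET against `http://127.0.0.1:8082/router/api/v1/system/health` with a 130-second request timeout.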

View File

@@ -0,0 +1,6 @@
- name: Perform installation
include_tasks: "install.yml"
when: not distribution_upgrade_only
- name: Perform upgrade
include_tasks: "upgrade.yml"
when: distribution_upgrade_only

View File

@@ -0,0 +1,111 @@
---
- debug:
msg: "Performing upgrade of Distribution to version {{ distribution_version }}"
- name: Stop distribution
become: yes
systemd:
name: "{{ distribution_daemon }}"
state: stopped
- name: Download distribution for upgrade
become: yes
unarchive:
src: "{{ distribution_tar }}"
dest: "{{ jfrog_home_directory }}"
remote_src: yes
owner: "{{ distribution_user }}"
group: "{{ distribution_group }}"
creates: "{{ distribution_untar_home }}"
register: downloaddistribution
until: downloaddistribution is succeeded
retries: 3
- name: Delete distribution app
become: yes
file:
path: "{{ distribution_home }}/app"
state: absent
- name: Copy new app to distribution app
become: yes
command: "cp -r {{ distribution_untar_home }}/app/. {{ distribution_home }}/app"
- name: Check if install.sh wrapper script exists
become: yes
stat:
path: "{{ distribution_install_script_path }}/install.sh"
register: install_wrapper_script
- name: Include interactive installer scripts
include_vars: script/archive.yml
- name: Install Distribution
include_tasks: expect.yml
vars:
exp_executable_cmd: "./install.sh -u {{ distribution_user }} -g {{ distribution_group }}"
exp_dir: "{{ distribution_install_script_path }}"
exp_scenarios: "{{ distribution_installer_scenario['main'] }}"
args:
apply:
environment:
YQ_PATH: "{{ distribution_thirdparty_path }}/yq"
when: install_wrapper_script.stat.exists
- name: Ensure {{ distribution_home }}/var/etc/redis exists
become: yes
file:
path: "{{ distribution_home }}/var/etc/redis/"
state: directory
owner: "{{ distribution_user }}"
group: "{{ distribution_group }}"
- name: Configure redis config
become: yes
template:
src: "redis.conf.j2"
dest: "{{ distribution_home }}/var/etc/redis/redis.conf"
notify: restart distribution
- name: Configure installer info
become: yes
template:
src: installer-info.json.j2
dest: "{{ distribution_home }}/var/etc/info/installer-info.json"
notify: restart distribution
- name: Configure systemyaml
become: yes
template:
src: "{{ distribution_system_yaml_template }}"
dest: "{{ distribution_home }}/var/etc/system.yaml"
notify: restart distribution
- name: Update Distribution base dir owner and group
become: yes
file:
path: "{{ distribution_home }}"
state: directory
recurse: yes
owner: "{{ distribution_user }}"
group: "{{ distribution_group }}"
mode: '0755'
- name: Install Distribution as a service
become: yes
shell: |
{{ distribution_archive_service_cmd }}
args:
chdir: "{{ distribution_install_script_path }}"
register: check_service_status_result
ignore_errors: yes
- name: Restart distribution
meta: flush_handlers
- name: Wait for distribution to be fully deployed
uri: url=http://127.0.0.1:8082/router/api/v1/system/health timeout=130
register: result
until: result.status == 200
retries: 25
delay: 5

View File

@@ -0,0 +1,9 @@
{{ ansible_managed | comment }}
{
"productId": "Ansible_Distribution/{{ platform_collection_version }}-{{ distribution_version }}",
"features": [
{
"featureId": "Channel/{{ ansible_marketplace }}"
}
]
}

View File

@@ -0,0 +1,15 @@
{{ ansible_managed | comment }}
# Redis configuration file
# data directory for redis
dir {{ distribution_home }}/var/data/redis
# log directory for redis
logfile {{ distribution_home }}/var/log/redis/redis.log
# pid file location for redis
pidfile {{ distribution_home }}/app/run/redis.pid
# password for redis
# if changed, the same should be set as value for shared.redis.password in JF_PRODUCT_HOME/var/etc/system.yaml
requirepass {{ distribution_redis_password }}

View File

@@ -0,0 +1,20 @@
configVersion: 1
shared:
jfrogUrl: {{ jfrog_url }}
node:
ip: {{ ansible_host }}
id: {{ ansible_date_time.iso8601_micro | to_uuid }}
database:
type: "{{ distribution_db_type }}"
driver: "{{ distribution_db_driver }}"
url: "{{ distribution_db_url }}"
username: "{{ distribution_db_user }}"
password: "{{ distribution_db_password }}"
redis:
connectionString: "{{ distribution_redis_url }}"
password: "{{ distribution_redis_password }}"
security:
joinKey: {{ join_key }}
router:
entrypoints:
internalPort: 8046

Some files were not shown because too many files have changed in this diff Show More