diff --git a/Ansible/CHANGELOG.md b/Ansible/CHANGELOG.md deleted file mode 100644 index ece5411..0000000 --- a/Ansible/CHANGELOG.md +++ /dev/null @@ -1,21 +0,0 @@ -# Changelog - -All notable changes to this project will be documented in this file. - -## [1.1.2] - 2020-10-29 -- Updated default versions to RT 7.10.2 and Xray 3.10.3. -- Removed obsolete Gradle tests. - -## [1.1.1] - 2020-10-15 -- Added idempotence to the Artifactory installer. -- Added a fix for Derby deployments. -- The migration that reduces changes during playbook runs contains breaking changes. You must either run the playbook once before upgrading, or provide the playbook with valid credentials to access version information for it to perform properly. -- First-time installers need not worry about the above. - -## [1.1.0] - 2020-09-27 - -- Validated for Artifactory 7.7.8 and Xray 3.8.6. -- Added offline support for Artifactory and Xray. -- Added support for configurable Postgres pg_hba.conf. -- Misc fixes due to Artifactory 7.7.8. -- Published 1.1.0 to [Ansible Galaxy](https://galaxy.ansible.com/jfrog/installers). diff --git a/Ansible/README.md b/Ansible/README.md deleted file mode 100644 index d60a876..0000000 --- a/Ansible/README.md +++ /dev/null @@ -1,124 +0,0 @@ -# JFrog Ansible Installers Collection - -This Ansible directory consists of the following directories that support the JFrog Ansible collection. - - * [ansible_collections directory](ansible_collections) - This directory contains the Ansible collection package that has the Ansible roles for Artifactory and Xray. See the collection [README](ansible_collections/README.md) for details on the available roles and variables. - * [examples directory](examples) - This directory contains example playbooks for various architectures, from single Artifactory (RT) deployments to high-availability setups. - * [infra directory](infra) - This directory contains example infrastructure templates that can be used for testing and as example deployments. - * [test directory](test) - This directory contains Gradle tests that can be used to verify a deployment. It also has Ansible playbooks for creating infrastructure, provisioning software and testing with Gradle. - - ## Tested Artifactory and Xray Versions - The following versions of Artifactory and Xray have been validated with this collection. Other versions and combinations may also work. - - -| collection_version | artifactory_version | xray_version | -|--------------------|---------------------|--------------| -| 1.1.2 | 7.10.2 | 3.10.3 | -| 1.1.1 | 7.10.2 | 3.9.1 | -| 1.1.0 | 7.7.8 | 3.8.6 | -| 1.0.9 | 7.7.3 | 3.8.0 | -| 1.0.8 | 7.7.3 | 3.8.0 | -| 1.0.8 | 7.7.1 | 3.5.2 | -| 1.0.8 | 7.6.1 | 3.5.2 | -| 1.0.7 | 7.6.1 | 3.5.2 | -| 1.0.6 | 7.5.0 | 3.3.0 | -| 1.0.6 | 7.4.3 | 3.3.0 | - - ## Getting Started - - 1. Install this collection from Ansible Galaxy. This collection is also available in RedHat Automation Hub. - - ``` - ansible-galaxy collection install jfrog.installers - ``` - - Ensure you reference the collection in your playbook when using these roles. - - ``` - --- - - hosts: xray - collections: - - jfrog.installers - roles: - - xray - - ``` - - 2. Ansible uses SSH to connect to hosts. Ensure that your SSH private key is on your client and the public keys are installed on your Ansible hosts. - - 3. Create your inventory file. Use one of the examples from the [examples directory](examples) to construct an inventory file (hosts.yml) with the host addresses and variables. - - 4. Create your playbook.
Use one of the examples from the [examples directory](examples) to construct a playbook using the JFrog Ansible roles. These roles will be applied to your inventory and provision software. - - 5. Execute the following command to provision the JFrog software with Ansible. Variables can also be passed in at the command line. - -``` -ansible-playbook -i hosts.yml playbook.yml --extra-vars "master_key=$(openssl rand -hex 32) join_key=$(openssl rand -hex 32)" -``` - -## Autogenerating Master and Join Keys -You may want to auto-generate your master and join keys and apply them to all the nodes. - -``` -ansible-playbook -i hosts.yml playbook.yml --extra-vars "master_key=$(openssl rand -hex 32) join_key=$(openssl rand -hex 32)" -``` - -## Using [Ansible Vault](https://docs.ansible.com/ansible/latest/user_guide/vault.html) to Encrypt Vars -You may want to keep some vars secret. Put these vars into a separate file and encrypt them using [Ansible Vault](https://docs.ansible.com/ansible/latest/user_guide/vault.html). - -``` -ansible-vault encrypt secret-vars.yml --vault-password-file ~/.vault_pass.txt -``` - -Then include the secret vars file in your playbook. - -``` -- hosts: primary - - vars_files: - - ./vars/secret-vars.yml - - ./vars/vars.yml - - roles: - - artifactory -``` - -## Bastion Hosts -In many cases, you may want to run this Ansible collection through a bastion host to provision JFrog servers. You can include the following var for a host or group of hosts: - -``` -ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -A user@host -W %h:%p"' - -eg. -ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -A ubuntu@{{ azureDeployment.deployment.outputs.lbIp.value }} -W %h:%p"' -``` -## Upgrades -The Artifactory and Xray roles support software upgrades. To use a role to perform a software upgrade only, use the _artifactory_upgrade_only_ or _xray_upgrade_only_ variable and specify the version. See the following example. - -``` -- hosts: artifactory - vars: - artifactory_version: "{{ lookup('env', 'artifactory_version_upgrade') }}" - artifactory_upgrade_only: true - roles: - - artifactory - -- hosts: xray - vars: - xray_version: "{{ lookup('env', 'xray_version_upgrade') }}" - xray_upgrade_only: true - roles: - - xray -``` - -## Building the Collection Archive -1. Go to the [ansible_collections/jfrog/installers directory](ansible_collections/jfrog/installers). -2. Update the galaxy.yml meta file as needed. Update the version. -3. Build the archive.
(Requires Ansible 2.9+) -``` -ansible-galaxy collection build -``` - -## OS support -* The current Ansible collection only supports Ubuntu and its flavours. -* CentOS/RHEL and SELinux support is coming soon, stay tuned :) diff --git a/Ansible/ansible_collections/.ansible-lint b/Ansible/ansible_collections/.ansible-lint deleted file mode 100644 index a59f903..0000000 --- a/Ansible/ansible_collections/.ansible-lint +++ /dev/null @@ -1,8 +0,0 @@ -# -# Ansible managed -# -exclude_paths: - - ./meta/version.yml - - ./meta/exception.yml - - ./meta/preferences.yml - - ./molecule/default/verify.yml diff --git a/Ansible/ansible_collections/.yamllint b/Ansible/ansible_collections/.yamllint deleted file mode 100644 index c5ae64b..0000000 --- a/Ansible/ansible_collections/.yamllint +++ /dev/null @@ -1,12 +0,0 @@ ---- -extends: default - -rules: - braces: - max-spaces-inside: 1 - level: error - brackets: - max-spaces-inside: 1 - level: error - line-length: disable - truthy: disable diff --git a/Ansible/ansible_collections/jfrog/installers/README.md b/Ansible/ansible_collections/jfrog/installers/README.md deleted file mode 100644 index 88f2bdf..0000000 --- a/Ansible/ansible_collections/jfrog/installers/README.md +++ /dev/null @@ -1,89 +0,0 @@ -# JFrog Ansible Installers Collection - -## Getting Started - - 1. Install this collection from Ansible Galaxy. This collection is also available in RedHat Automation Hub. - - ``` - ansible-galaxy collection install jfrog.installers - ``` - - Ensure you reference the collection in your playbook when using these roles. - - ``` - --- - - hosts: xray - collections: - - jfrog.installers - roles: - - xray - - ``` - - 2. Ansible uses SSH to connect to hosts. Ensure that your SSH private key is on your client and the public keys are installed on your Ansible hosts. - - 3. Create your inventory file. Use one of the examples from the [examples directory](https://github.com/jfrog/JFrog-Cloud-Installers/tree/master/Ansible/examples) to construct an inventory file (hosts.yml) with the host addresses and variables. - - 4. Create your playbook. Use one of the examples from the [examples directory](https://github.com/jfrog/JFrog-Cloud-Installers/tree/master/Ansible/examples) to construct a playbook using the JFrog Ansible roles. These roles will be applied to your inventory and provision software. - - 5. Execute the following command to provision the JFrog software with Ansible. Variables can also be passed in at the command line. - - ``` -ansible-playbook -i hosts.yml playbook.yml --extra-vars "master_key=$(openssl rand -hex 16) join_key=$(openssl rand -hex 16)" -``` - -## Autogenerating Master and Join Keys -You may want to auto-generate your master and join keys and apply them to all the nodes. - -``` -ansible-playbook -i hosts.yml playbook.yml --extra-vars "master_key=$(openssl rand -hex 16) join_key=$(openssl rand -hex 16)" -``` - -## Using [Ansible Vault](https://docs.ansible.com/ansible/latest/user_guide/vault.html) to Encrypt Vars -You may want to keep some vars secret. Put these vars into a separate file and encrypt them using [Ansible Vault](https://docs.ansible.com/ansible/latest/user_guide/vault.html). - -``` -ansible-vault encrypt secret-vars.yml --vault-password-file ~/.vault_pass.txt -``` - -Then include the secret vars file in your playbook.
- -``` -- hosts: primary - - vars_files: - - ./vars/secret-vars.yml - - ./vars/vars.yml - - roles: - - artifactory -``` - -## Bastion Hosts -In many cases, you may want to run this Ansible collection through a bastion host to provision JFrog servers. You can include the following var for a host or group of hosts: - -``` -ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -A user@host -W %h:%p"' - -eg. -ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -A ubuntu@{{ azureDeployment.deployment.outputs.lbIp.value }} -W %h:%p"' -``` - -## Upgrades -The Artifactory and Xray roles support software upgrades. To use a role to perform a software upgrade only, use the _artifactory_upgrade_only_ or _xray_upgrade_only_ variables and specify the version. See the following example. - -``` -- hosts: artifactory - vars: - artifactory_version: "{{ lookup('env', 'artifactory_version_upgrade') }}" - artifactory_upgrade_only: true - roles: - - artifactory - -- hosts: xray - vars: - xray_version: "{{ lookup('env', 'xray_version_upgrade') }}" - xray_upgrade_only: true - roles: - - xray -``` \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/installers/jfrog-installers-1.1.1.tar.gz b/Ansible/ansible_collections/jfrog/installers/jfrog-installers-1.1.1.tar.gz deleted file mode 100644 index 02495d5..0000000 Binary files a/Ansible/ansible_collections/jfrog/installers/jfrog-installers-1.1.1.tar.gz and /dev/null differ diff --git a/Ansible/ansible_collections/jfrog/installers/jfrog-installers-1.1.2.tar.gz b/Ansible/ansible_collections/jfrog/installers/jfrog-installers-1.1.2.tar.gz deleted file mode 100644 index a9fdeb6..0000000 Binary files a/Ansible/ansible_collections/jfrog/installers/jfrog-installers-1.1.2.tar.gz and /dev/null differ diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/README.md b/Ansible/ansible_collections/jfrog/installers/roles/artifactory/README.md deleted file mode 100644 index 50ecaf6..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/README.md +++ /dev/null @@ -1,49 +0,0 @@ -# artifactory -The artifactory role installs the Artifactory Pro software onto the host. Per the vars below, it will configure a node as primary or secondary. This role uses the secondary role artifactory_nginx to install nginx. - -Version 1.1.1 contains breaking changes. To mitigate this, run the role once before doing any upgrades, let it handle the path changes, and then run it again with your upgrade. - -## Role Variables -* _artifactory_version_: The version of Artifactory to install. eg. "7.4.1" -* _master_key_: This is the Artifactory [Master Key](https://www.jfrog.com/confluence/display/JFROG/Managing+Keys). See below to [autogenerate this key](#autogenerating-master-and-join-keys). -* _join_key_: This is the Artifactory [Join Key](https://www.jfrog.com/confluence/display/JFROG/Managing+Keys). See below to [autogenerate this key](#autogenerating-master-and-join-keys). -* _db_download_url_: This is the download URL for the JDBC driver for your database. eg. "https://jdbc.postgresql.org/download/postgresql-42.2.12.jar" -* _db_type_: This is the database type. eg. "postgresql" -* _db_driver_: This is the JDBC driver class. eg. "org.postgresql.Driver" -* _db_url_: This is the JDBC database url. eg. "jdbc:postgresql://10.0.0.120:5432/artifactory" -* _db_user_: The database user to configure. eg. "artifactory" -* _db_password_: The database password to configure. eg.
"Art1fact0ry" -* _server_name_: This is the server name. eg. "artifactory.54.175.51.178.xip.io" -* _artifactory_system_yaml_: Your own [system YAML](https://www.jfrog.com/confluence/display/JFROG/System+YAML+Configuration+File) file can be specified and used. **If specified, this file will be used rather than constructing a file from the parameters above.** -* _binary_store_file_: Your own [binary store file](https://www.jfrog.com/confluence/display/JFROG/Configuring+the+Filestore) can be used. If specified, the default cluster-file-system will not be used. -* _artifactory_upgrade_only_: Perform an software upgrade only. Default is false. - -### primary vars (vars used by the primary Artifactory server) -* _artifactory_is_primary_: For the primary node this must be set to **true**. -* _artifactory_license1 - 5_: These are the cluster licenses. -* _artifactory_license_file_: Your own license file can be used. **If specified, a license file constructed from the licenses above will not be used.** - -### secondary vars (vars used by the secondary Artifactory server) -* _artifactory_is_primary_: For the secondary node(s) this must be set to **false**. - -Additional variables can be found in [defaults/main.yml](./defaults/main.yml). - -## Example Playbook -``` ---- -- hosts: primary - roles: - - artifactory -``` - -## Upgrades -The Artifactory role supports software upgrades. To use a role to perform a software upgrade only, use the _artifactory_upgrade_only_ variable and specify the version. See the following example. - -``` -- hosts: artifactory - vars: - artifactory_version: "{{ lookup('env', 'artifactory_version_upgrade') }}" - artifactory_upgrade_only: true - roles: - - artifactory -``` \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/defaults/main.yml b/Ansible/ansible_collections/jfrog/installers/roles/artifactory/defaults/main.yml deleted file mode 100644 index 1a8ca54..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/defaults/main.yml +++ /dev/null @@ -1,56 +0,0 @@ ---- -# defaults file for artifactory -# indicates were this collection was downlaoded from (galaxy, automation_hub, standalone) -ansible_marketplace: standalone - -# The version of Artifactory to install -artifactory_version: 7.10.2 - -# licenses file - specify a licenses file or specify up to 5 licenses -artifactory_license1: -artifactory_license2: -artifactory_license3: -artifactory_license4: -artifactory_license5: - -# whether to enable HA -artifactory_ha_enabled: true - -# value for whether a host is primary. this should be set in host vars -artifactory_is_primary: true - -# The location where Artifactory should install. -jfrog_home_directory: /opt/jfrog - -# The location where Artifactory should store data. -artifactory_file_store_dir: /data - -# Pick the Artifactory flavour to install, can be also cpp-ce, jcr, pro. -artifactory_flavour: pro - -extra_java_opts: -server -Xms2g -Xmx14g -Xss256k -XX:+UseG1GC -artifactory_system_yaml_template: system.yaml.j2 -artifactory_tar: https://dl.bintray.com/jfrog/artifactory-pro/org/artifactory/pro/jfrog-artifactory-pro/{{ artifactory_version }}/jfrog-artifactory-pro-{{ artifactory_version }}-linux.tar.gz -artifactory_home: "{{ jfrog_home_directory }}/artifactory" -artifactory_untar_home: "{{ jfrog_home_directory }}/artifactory-{{ artifactory_flavour }}-{{ artifactory_version }}" - -artifactory_user: artifactory -artifactory_group: artifactory - -# Set the parameters required for the service. 
-service_list: - - name: artifactory - description: Start script for Artifactory - start_command: "{{ artifactory_home }}/bin/artifactory.sh start" - stop_command: "{{ artifactory_home }}/bin/artifactory.sh stop" - type: forking - status_pattern: artifactory - user_name: "{{ artifactory_user }}" - group_name: "{{ artifactory_group }}" - -# if this is an upgrade -artifactory_upgrade_only: false - -#default username and password -artifactory_app_username: admin -artifactory_app_user_pass: password diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/handlers/main.yml b/Ansible/ansible_collections/jfrog/installers/roles/artifactory/handlers/main.yml deleted file mode 100644 index 6f8fcda..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/handlers/main.yml +++ /dev/null @@ -1,10 +0,0 @@ ---- -# handlers file for artifactory -- name: systemctl daemon-reload - systemd: - daemon_reload: yes - -- name: restart artifactory - service: - name: artifactory - state: restarted diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/tasks/install.yml b/Ansible/ansible_collections/jfrog/installers/roles/artifactory/tasks/install.yml deleted file mode 100644 index 9537459..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/tasks/install.yml +++ /dev/null @@ -1,228 +0,0 @@ ---- -- debug: - msg: "Performing installation of Artifactory..." - -- name: install nginx - include_role: - name: artifactory_nginx - -- name: create group for artifactory - group: - name: "{{ artifactory_group }}" - state: present - become: yes - -- name: create user for artifactory - user: - name: "{{ artifactory_user }}" - group: "{{ artifactory_group }}" - system: yes - become: yes - -- name: ensure jfrog_home_directory exists - file: - path: "{{ jfrog_home_directory }}" - state: directory - become: yes - -- name: Local Copy artifactory - unarchive: - src: "{{ local_artifactory_tar }}" - dest: "{{ jfrog_home_directory }}" - owner: "{{ artifactory_user }}" - group: "{{ artifactory_group }}" - creates: "{{ artifactory_untar_home }}" - become: yes - when: local_artifactory_tar is defined - register: downloadartifactory - until: downloadartifactory is succeeded - retries: 3 - -- name: download artifactory - unarchive: - src: "{{ artifactory_tar }}" - dest: "{{ jfrog_home_directory }}" - remote_src: yes - owner: "{{ artifactory_user }}" - group: "{{ artifactory_group }}" - creates: "{{ artifactory_untar_home }}" - become: yes - when: artifactory_tar is defined - register: downloadartifactory - until: downloadartifactory is succeeded - retries: 3 - -- name: Create artifactory home folder - file: - state: directory - path: "{{ artifactory_home }}" - owner: "{{ artifactory_user }}" - group: "{{ artifactory_group }}" - become: yes - -- name: Create Symlinks for var folder - file: - state: link - src: "{{ artifactory_untar_home }}/var" - dest: "{{ artifactory_home }}/var" - owner: "{{ artifactory_user }}" - group: "{{ artifactory_group }}" - become: yes - -- name: Create Symlinks for app folder - file: - state: link - src: "{{ artifactory_untar_home }}/app" - dest: "{{ artifactory_home }}/app" - owner: "{{ artifactory_user }}" - group: "{{ artifactory_group }}" - become: yes - -- name: ensure artifactory_file_store_dir exists - file: - path: "{{ artifactory_file_store_dir }}" - state: directory - owner: "{{ artifactory_user }}" - group: "{{ artifactory_group }}" - become: yes - -- name: ensure data exists - file: - path: "{{ artifactory_home 
}}/var/data" - state: directory - owner: "{{ artifactory_user }}" - group: "{{ artifactory_group }}" - become: yes - -- name: ensure etc exists - file: - path: "{{ artifactory_home }}/var/etc" - state: directory - owner: "{{ artifactory_user }}" - group: "{{ artifactory_group }}" - become: yes - -- name: use specified system yaml - copy: - src: "{{ artifactory_system_yaml }}" - dest: "{{ artifactory_home }}/var/etc/system.yaml" - become: yes - when: artifactory_system_yaml is defined - -- name: configure system yaml template - template: - src: "{{ artifactory_system_yaml_template }}" - dest: "{{ artifactory_home }}/var/etc/system.yaml" - become: yes - when: artifactory_system_yaml is not defined - -- name: ensure {{ artifactory_home }}/var/etc/security/ exists - file: - path: "{{ artifactory_home }}/var/etc/security/" - state: directory - owner: "{{ artifactory_user }}" - group: "{{ artifactory_group }}" - become: yes - -- name: configure master key - template: - src: master.key.j2 - dest: "{{ artifactory_home }}/var/etc/security/master.key" - become: yes - -- name: configure join key - template: - src: join.key.j2 - dest: "{{ artifactory_home }}/var/etc/security/join.key" - become: yes - -- name: ensure {{ artifactory_home }}/var/etc/artifactory/info/ exists - file: - path: "{{ artifactory_home }}/var/etc/artifactory/info/" - state: directory - owner: "{{ artifactory_user }}" - group: "{{ artifactory_group }}" - become: yes - -- name: configure installer info - template: - src: installer-info.json.j2 - dest: "{{ artifactory_home }}/var/etc/artifactory/info/installer-info.json" - become: yes - -- name: use specified binary store - copy: - src: "{{ binary_store_file }}" - dest: "{{ artifactory_home }}/var/etc/binarystore.xml" - become: yes - when: binary_store_file is defined - -- name: use default binary store - template: - src: binarystore.xml.j2 - dest: "{{ artifactory_home }}/var/etc/binarystore.xml" - become: yes - when: binary_store_file is not defined - -- name: use license file - copy: - src: "{{ artifactory_license_file }}" - dest: "{{ artifactory_home }}/var/etc/artifactory/artifactory.cluster.license" - become: yes - when: artifactory_license_file is defined and artifactory_is_primary == true - -- name: use license strings - template: - src: artifactory.cluster.license.j2 - dest: "{{ artifactory_home }}/var/etc/artifactory/artifactory.cluster.license" - become: yes - when: artifactory_license_file is not defined and artifactory_is_primary == true - -- name: Copy local database driver - copy: - src: "{{ db_local_location }}" - dest: "{{ artifactory_home }}/var/bootstrap/artifactory/tomcat/lib" - owner: "{{ artifactory_user }}" - group: "{{ artifactory_group }}" - when: db_local_location is defined - become: yes - -- name: download database driver - get_url: - url: "{{ db_download_url }}" - dest: "{{ artifactory_home }}/var/bootstrap/artifactory/tomcat/lib" - owner: "{{ artifactory_user }}" - group: "{{ artifactory_group }}" - when: db_download_url is defined - become: yes - -- name: create artifactory service - shell: "{{ artifactory_home }}/app/bin/installService.sh" - become: yes - -- name: Ensure permissions are correct - file: - path: "{{ jfrog_home_directory }}" - group: "{{ artifactory_group }}" - owner: "{{ artifactory_user }}" - recurse: yes - become: yes - -- name: start and enable the primary node - service: - name: artifactory - state: started - become: yes - when: artifactory_is_primary == true - -- name: random wait before restarting to prevent secondary nodes from 
hitting DB first - pause: - seconds: "{{ 120 | random + 10}}" - when: artifactory_is_primary == false - -- name: start and enable the secondary nodes - service: - name: artifactory - state: started - become: yes - when: artifactory_is_primary == false diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/tasks/legacy_migration.yml b/Ansible/ansible_collections/jfrog/installers/roles/artifactory/tasks/legacy_migration.yml deleted file mode 100644 index e3e15d5..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/tasks/legacy_migration.yml +++ /dev/null @@ -1,34 +0,0 @@ ---- -- name: MV artifactory home to artifactory untar home - command: "mv {{ artifactory_home }} {{ temp_untar_home }}" - become: yes -- name: Ensure untar home permissions are correct - file: - state: directory - path: "{{ temp_untar_home }}" - owner: "{{ artifactory_user }}" - group: "{{ artifactory_group }}" - become: yes -- name: Create artifactory home folder - file: - state: directory - path: "{{ artifactory_home }}" - owner: "{{ artifactory_user }}" - group: "{{ artifactory_group }}" - become: yes -- name: Create Symlinks for var folder - file: - state: link - src: "{{ temp_untar_home }}/var" - dest: "{{ artifactory_home }}/var" - owner: "{{ artifactory_user }}" - group: "{{ artifactory_group }}" - become: yes -- name: Create Symlinks for app folder - file: - state: link - src: "{{ temp_untar_home }}/app" - dest: "{{ artifactory_home }}/app" - owner: "{{ artifactory_user }}" - group: "{{ artifactory_group }}" - become: yes diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/tasks/main.yml b/Ansible/ansible_collections/jfrog/installers/roles/artifactory/tasks/main.yml deleted file mode 100644 index 65728de..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/tasks/main.yml +++ /dev/null @@ -1,44 +0,0 @@ -- name: Rectify Legacy Installation Block - block: - - name: Check to see if artifactory has a service and stop it - service: - name: artifactory - state: stopped - become: yes - - name: Check symlink method - stat: - path: /opt/jfrog/artifactory/app - register: newMethod - - name: Check artifactory version - uri: - url: "{{ web_method }}://{{ artifactory_server_url }}:{{ url_port }}/artifactory/api/system/version" - url_username: "{{ artifactory_app_username }}" - url_password: "{{ artifactory_app_user_pass }}" - register: artifactory_installed_version - - name: Debug defunct installation - debug: - var: artifactory_installed_version.json.version - - name: Setup temporary untar home - set_fact: - temp_untar_home: "{{ jfrog_home_directory }}/artifactory-{{ artifactory_flavour }}-{{ artifactory_installed_version.json.version }}" - - name: Rectify legacy installation - include_tasks: "legacy_migration.yml" - when: (not newMethod.stat.islnk) and newMethod.stat.exists - rescue: - - name: Check to see if artifactory has a service and stop it - service: - name: artifactory - state: stopped - - name: Setup temporary untar home (assuming version is set var for version) - set_fact: - temp_untar_home: "{{ jfrog_home_directory }}/artifactory-{{ artifactory_flavour }}-{{ artifactory_version }}" - - name: Rectify legacy installation - include_tasks: "legacy_migration.yml" - when: (not newMethod.stat.islnk) and newMethod.stat.exists - always: - - name: perform installation - include_tasks: "install.yml" - when: not artifactory_upgrade_only - - name: perform upgrade - include_tasks: "upgrade.yml" - when: artifactory_upgrade_only \ No newline at end of
file diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/tasks/upgrade.yml b/Ansible/ansible_collections/jfrog/installers/roles/artifactory/tasks/upgrade.yml deleted file mode 100644 index 5ac0fd8..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/tasks/upgrade.yml +++ /dev/null @@ -1,94 +0,0 @@ ---- -- debug: - msg: "Performing upgrade of Artifactory..." - -- name: stop artifactory - service: - name: artifactory - state: stopped - become: yes - -- name: ensure jfrog_home_directory exists - file: - path: "{{ jfrog_home_directory }}" - state: directory - become: yes - -- name: Local Copy artifactory - unarchive: - src: "{{ local_artifactory_tar }}" - dest: "{{ jfrog_home_directory }}" - owner: "{{ artifactory_user }}" - group: "{{ artifactory_group }}" - creates: "{{ artifactory_untar_home }}" - become: yes - when: local_artifactory_tar is defined - register: downloadartifactory - until: downloadartifactory is succeeded - retries: 3 - -- name: download artifactory - unarchive: - src: "{{ artifactory_tar }}" - dest: "{{ jfrog_home_directory }}" - remote_src: yes - owner: "{{ artifactory_user }}" - group: "{{ artifactory_group }}" - creates: "{{ artifactory_untar_home }}" - become: yes - when: artifactory_tar is defined - register: downloadartifactory - until: downloadartifactory is succeeded - retries: 3 - -#- name: Delete artifactory app -# file: -# path: "{{ artifactory_home }}/app" -# state: absent -# become: yes - -#- name: CP new app to artifactory app -# command: "cp -r {{ artifactory_untar_home }}/app {{ artifactory_home }}/app" -# become: yes - -#- name: Delete untar directory -# file: -# path: "{{ artifactory_untar_home }}" -# state: absent -# become: yes - -- name: Create Symlinks for app folder - file: - state: link - src: "{{ artifactory_untar_home }}/app" - dest: "{{ artifactory_home }}/app" - owner: "{{ artifactory_user }}" - group: "{{ artifactory_group }}" - become: yes - -- name: Ensure permissions are correct - file: - path: "{{ jfrog_home_directory }}" - group: "{{ artifactory_group }}" - owner: "{{ artifactory_user }}" - recurse: yes - become: yes - -- name: start and enable the primary node - service: - name: artifactory - state: restarted - become: yes - when: artifactory_is_primary == true - -- name: random wait before restarting to prevent secondary nodes from hitting DB first - pause: - seconds: "{{ 120 | random + 10}}" - when: artifactory_is_primary == false - -- name: start and enable the secondary nodes - service: - name: artifactory - state: restarted - become: yes - when: artifactory_is_primary == false diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/templates/artifactory.cluster.license.j2 b/Ansible/ansible_collections/jfrog/installers/roles/artifactory/templates/artifactory.cluster.license.j2 deleted file mode 100644 index 3f674f6..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/templates/artifactory.cluster.license.j2 +++ /dev/null @@ -1,31 +0,0 @@ -{% if artifactory_license1 %} -{% if artifactory_license1|length %} -{{ artifactory_license1 }} -{% endif %} -{% endif %} -{% if artifactory_license2 %} - - -{% if artifactory_license2|length %} -{{ artifactory_license2 }} -{% endif %} -{% endif %} -{% if artifactory_license3 %} - - -{% if artifactory_license3|length %} -{{ artifactory_license3 }} -{% endif %} -{% endif %} -{% if artifactory_license4 %} - -{% if artifactory_license4|length %} -{{ artifactory_license4 }} -{% endif %} -{% endif %} -{% if 
artifactory_license5 %} - -{% if artifactory_license5|length %} -{{ artifactory_license5 }} -{% endif %} -{% endif %} diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/templates/installer-info.json.j2 b/Ansible/ansible_collections/jfrog/installers/roles/artifactory/templates/installer-info.json.j2 deleted file mode 100644 index f475256..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/templates/installer-info.json.j2 +++ /dev/null @@ -1,12 +0,0 @@ -{ - "productId": "Ansible_artifactory/1.0.0", - "features": [ - { - "featureId": "Partner/ACC-006973" - }, - { - "featureId": "Channel/{{ ansible_marketplace }}" - } - ] -} - diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/templates/join.key.j2 b/Ansible/ansible_collections/jfrog/installers/roles/artifactory/templates/join.key.j2 deleted file mode 100644 index 17d05d2..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/templates/join.key.j2 +++ /dev/null @@ -1 +0,0 @@ -{{ join_key }} \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/templates/master.key.j2 b/Ansible/ansible_collections/jfrog/installers/roles/artifactory/templates/master.key.j2 deleted file mode 100644 index 0462a64..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/templates/master.key.j2 +++ /dev/null @@ -1 +0,0 @@ -{{ master_key }} \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/templates/system.yaml.j2 b/Ansible/ansible_collections/jfrog/installers/roles/artifactory/templates/system.yaml.j2 deleted file mode 100644 index a7fede0..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/templates/system.yaml.j2 +++ /dev/null @@ -1,44 +0,0 @@ -## @formatter:off -## JFROG ARTIFACTORY SYSTEM CONFIGURATION FILE -## HOW TO USE: comment-out any field and keep the correct yaml indentation by deleting only the leading '#' character. -configVersion: 1 - -## NOTE: JFROG_HOME is a place holder for the JFrog root directory containing the deployed product, the home directory for all JFrog products. -## Replace JFROG_HOME with the real path! For example, in RPM install, JFROG_HOME=/opt/jfrog - -## NOTE: Sensitive information such as passwords and join key are encrypted on first read. -## NOTE: The provided commented key and value is the default. - -## SHARED CONFIGURATIONS -## A shared section for keys across all services in this config -shared: - - ## Node Settings - node: - ## A unique id to identify this node. - ## Default: auto generated at startup. 
- id: {{ ansible_machine_id }} - - ## Sets this node as primary in HA installation - primary: {{ artifactory_is_primary }} - - ## Sets this node as part of HA installation - haEnabled: {{ artifactory_ha_enabled }} - - ## Database Configuration - database: - ## One of: mysql, oracle, mssql, postgresql, mariadb - ## Default: Embedded derby - - ## Example for mysql/postgresql - type: "{{ db_type }}" -{%+ if db_type == 'derby' -%} -# driver: "{{ db_driver }}" -# url: "{{ db_url }}" -# username: "{{ db_user }}" -{%+ else -%} - driver: "{{ db_driver }}" - url: "{{ db_url }}" - username: "{{ db_user }}" -{%+ endif -%} - password: "{{ db_password }}" \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/.travis.yml b/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/.travis.yml deleted file mode 100644 index 36bbf62..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/.travis.yml +++ /dev/null @@ -1,29 +0,0 @@ ---- -language: python -python: "2.7" - -# Use the new container infrastructure -sudo: false - -# Install ansible -addons: - apt: - packages: - - python-pip - -install: - # Install ansible - - pip install ansible - - # Check ansible version - - ansible --version - - # Create ansible.cfg with correct roles_path - - printf '[defaults]\nroles_path=../' >ansible.cfg - -script: - # Basic role syntax check - - ansible-playbook tests/test.yml -i tests/inventory --syntax-check - -notifications: - webhooks: https://galaxy.ansible.com/api/v1/notifications/ \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/defaults/main.yml b/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/defaults/main.yml deleted file mode 100644 index 5818d2b..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/defaults/main.yml +++ /dev/null @@ -1,2 +0,0 @@ ---- -# defaults file for artifactory_nginx \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/handlers/main.yml b/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/handlers/main.yml deleted file mode 100644 index f07f4d4..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/handlers/main.yml +++ /dev/null @@ -1,2 +0,0 @@ ---- -# handlers file for artifactory_nginx \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/tasks/main.yml b/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/tasks/main.yml deleted file mode 100644 index fba3324..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/tasks/main.yml +++ /dev/null @@ -1,53 +0,0 @@ ---- -- name: install nginx - block: - - debug: - msg: "Attempting nginx installation without dependencies for potential offline mode." - - name: install nginx without dependencies - package: - name: nginx - state: present - register: package_res - retries: 5 - delay: 60 - become: yes - until: package_res is success - rescue: - - debug: - msg: "Attempting nginx installation with dependencies for potential online mode." - - name: install dependencies - include_tasks: "{{ ansible_os_family }}.yml" - - name: install nginx after dependency installation - package: - name: nginx - state: present - register: package_res - retries: 5 - delay: 60 - become: yes - until: package_res is success - -- name: configure main nginx conf file. 
- copy: - src: nginx.conf - dest: /etc/nginx/nginx.conf - owner: root - group: root - mode: '0755' - become: yes - -- name: configure the artifactory nginx conf - template: - src: artifactory.conf.j2 - dest: /etc/nginx/conf.d/artifactory.conf - owner: root - group: root - mode: '0755' - become: yes - -- name: restart nginx - service: - name: nginx - state: restarted - enabled: yes - become: yes diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/.travis.yml b/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/.travis.yml deleted file mode 100644 index 36bbf62..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/.travis.yml +++ /dev/null @@ -1,29 +0,0 @@ ---- -language: python -python: "2.7" - -# Use the new container infrastructure -sudo: false - -# Install ansible -addons: - apt: - packages: - - python-pip - -install: - # Install ansible - - pip install ansible - - # Check ansible version - - ansible --version - - # Create ansible.cfg with correct roles_path - - printf '[defaults]\nroles_path=../' >ansible.cfg - -script: - # Basic role syntax check - - ansible-playbook tests/test.yml -i tests/inventory --syntax-check - -notifications: - webhooks: https://galaxy.ansible.com/api/v1/notifications/ \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/defaults/main.yml b/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/defaults/main.yml deleted file mode 100644 index 5818d2b..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/defaults/main.yml +++ /dev/null @@ -1,2 +0,0 @@ ---- -# defaults file for artifactory_nginx \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/handlers/main.yml b/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/handlers/main.yml deleted file mode 100644 index f07f4d4..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/handlers/main.yml +++ /dev/null @@ -1,2 +0,0 @@ ---- -# handlers file for artifactory_nginx \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/installers/roles/postgres/.travis.yml b/Ansible/ansible_collections/jfrog/installers/roles/postgres/.travis.yml deleted file mode 100644 index 9d4d136..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/postgres/.travis.yml +++ /dev/null @@ -1,30 +0,0 @@ ---- -language: python - -services: - - docker - -env: - global: - - DEBUG=--debug - matrix: - - MOLECULE_DISTRO=centos7 MOLECULE_SCENARIO=default - - MOLECULE_DISTRO=centos7 MOLECULE_SCENARIO=version11 - # - MOLECULE_DISTRO: fedora27 - # - MOLECULE_DISTRO: fedora29 - - MOLECULE_DISTRO=ubuntu1604 MOLECULE_SCENARIO=default - - MOLECULE_DISTRO=ubuntu1604 MOLECULE_SCENARIO=version11 - - MOLECULE_DISTRO=ubuntu1804 MOLECULE_SCENARIO=default - - MOLECULE_DISTRO=ubuntu1804 MOLECULE_SCENARIO=version11 - # - MOLECULE_DISTRO: debian9 - -before_install: - - sudo apt-get -qq update - - sudo apt-get install -y net-tools -install: - - pip install molecule docker-py - -script: - - molecule --version - - ansible --version - - molecule $DEBUG test -s $MOLECULE_SCENARIO diff --git a/Ansible/ansible_collections/jfrog/installers/roles/postgres/README.md b/Ansible/ansible_collections/jfrog/installers/roles/postgres/README.md deleted file mode 100644 index eccb452..0000000 --- 
a/Ansible/ansible_collections/jfrog/installers/roles/postgres/README.md +++ /dev/null @@ -1,25 +0,0 @@ -# postgres -The postgres role will install Postgresql software and configure a database and user to support an Artifactory or Xray server. - -### Role Variables -* _db_users_: This is a list of database users to create. eg. db_users: - { db_user: "artifactory", db_password: "Art1fAct0ry" } -* _dbs_: This is the database to create. eg. dbs: - { db_name: "artifactory", db_owner: "artifactory" } - -By default, the [_pg_hba.conf_](https://www.postgresql.org/docs/9.1/auth-pg-hba-conf.html) client authentication file is configured for open access for development purposes through the _postgres_allowed_hosts_ variable: - -``` -postgres_allowed_hosts: - - { type: "host", database: "all", user: "all", address: "0.0.0.0/0", method: "trust"} -``` - -**THIS SHOULD NOT BE USED FOR PRODUCTION.** - -**Update this variable to only allow access from Artifactory and Xray.** - -## Example Playbook -``` ---- -- hosts: database - roles: - - postgres -``` \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/installers/roles/postgres/handlers/main.yml b/Ansible/ansible_collections/jfrog/installers/roles/postgres/handlers/main.yml deleted file mode 100644 index 5341b3d..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/postgres/handlers/main.yml +++ /dev/null @@ -1,4 +0,0 @@ ---- - -- name: restart postgres - systemd: name={{ postgres_server_service_name }} state=restarted diff --git a/Ansible/ansible_collections/jfrog/installers/roles/postgres/tasks/main.yml b/Ansible/ansible_collections/jfrog/installers/roles/postgres/tasks/main.yml deleted file mode 100644 index c267ba9..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/postgres/tasks/main.yml +++ /dev/null @@ -1,105 +0,0 @@ ---- -- name: define distribution-specific variables - include_vars: "{{ ansible_os_family }}.yml" - -- name: create directory for bind mount if necessary - file: - path: "{{ postgres_server_bind_mount_var_lib_pgsql_target }}" - state: directory - become: yes - when: postgres_server_bind_mount_var_lib_pgsql - - -- name: perform bind mount if necessary - mount: - path: "/var/lib/pgsql" - src: "{{ postgres_server_bind_mount_var_lib_pgsql_target }}" - opts: bind - state: mounted - fstype: none - become: yes - when: postgres_server_bind_mount_var_lib_pgsql - -- name: perform installation - include_tasks: "{{ ansible_os_family }}.yml" - -- name: extend path - copy: - dest: /etc/profile.d/postgres-path.sh - mode: a=rx - content: "export PATH=$PATH:/usr/pgsql-{{ postgres_server_version }}/bin" - become: yes - -- name: initialize PostgreSQL database cluster - environment: - LC_ALL: "en_US.UTF-8" - vars: - ansible_become: "{{ postgres_server_initdb_become }}" - ansible_become_user: "{{ postgres_server_user }}" - command: "{{ postgres_server_cmd_initdb }} {{ postgres_server_data_location }}" - args: - creates: "{{ postgres_server_data_location }}/PG_VERSION" - -- name: install postgres configuration - template: - src: "{{ item }}.j2" - dest: "{{ postgres_server_config_location }}/{{ item }}" - owner: postgres - group: postgres - mode: u=rw,go=r - vars: - ansible_become: "{{ postgres_server_initdb_become }}" - ansible_become_user: "{{ postgres_server_user }}" - loop: - - pg_hba.conf - - postgresql.conf - -- name: enable postgres service - systemd: - name: "{{ postgres_server_service_name }}" - state: started - enabled: yes - become: yes - -- name: Hold until Postgresql is up and running - wait_for: 
- port: 5432 - -- name: Create users - become_user: postgres - become: yes - postgresql_user: - name: "{{ item.db_user }}" - password: "{{ item.db_password }}" - conn_limit: "-1" - loop: "{{ db_users|default([]) }}" - no_log: true # secret passwords - -- name: Create a database - become_user: postgres - become: yes - postgresql_db: - name: "{{ item.db_name }}" - owner: "{{ item.db_owner }}" - encoding: UTF-8 - loop: "{{ dbs|default([]) }}" - -- name: Grant privs on db - become_user: postgres - become: yes - postgresql_privs: - database: "{{ item.db_name }}" - role: "{{ item.db_owner }}" - state: present - privs: ALL - type: database - loop: "{{ dbs|default([]) }}" - -- name: restart postgres - service: - name: "{{ postgres_server_service_name }}" - state: restarted - become: yes - -- debug: - msg: "Restarted postgres service {{ postgres_server_service_name }}" \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/installers/roles/postgres/vars/Debian.yml b/Ansible/ansible_collections/jfrog/installers/roles/postgres/vars/Debian.yml deleted file mode 100644 index 1c1a7f4..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/postgres/vars/Debian.yml +++ /dev/null @@ -1,12 +0,0 @@ ---- - -postgres_server_cmd_initdb: /usr/lib/postgresql/{{ postgres_server_version }}/bin/initdb -D -postgres_server_initdb_become: yes -postgres_server_data_location: /var/lib/postgresql/{{ postgres_server_version }}/main -postgres_server_config_location: /etc/postgresql/{{ postgres_server_version }}/main -postgres_server_service_name: postgresql@{{ postgres_server_version }}-main - -postgres_server_config_data_directory: "/var/lib/postgresql/{{ postgres_server_version }}/main" -postgres_server_config_hba_file: "/etc/postgresql/{{ postgres_server_version }}/main/pg_hba.conf" -postgres_server_config_ident_file: "/etc/postgresql/{{ postgres_server_version }}/main/pg_ident.conf" -postgres_server_config_external_pid_file: "/var/run/postgresql/{{ postgres_server_version }}-main.pid" diff --git a/Ansible/ansible_collections/jfrog/installers/roles/postgres/vars/RedHat.yml b/Ansible/ansible_collections/jfrog/installers/roles/postgres/vars/RedHat.yml deleted file mode 100644 index f6faafd..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/postgres/vars/RedHat.yml +++ /dev/null @@ -1,11 +0,0 @@ ---- - -postgres_server_cmd_initdb: /usr/pgsql-{{ postgres_server_version }}/bin/postgresql{{ postgres_server_pkg_version }}-setup initdb -D -postgres_server_data_location: /var/lib/pgsql/{{ postgres_server_version }}/data -postgres_server_config_location: "{{ postgres_server_data_location }}" -postgres_server_service_name: postgresql-{{ postgres_server_version }} - -postgres_server_config_data_directory: null -postgres_server_config_hba_file: null -postgres_server_config_ident_file: null -postgres_server_config_external_pid_file: null diff --git a/Ansible/ansible_collections/jfrog/installers/roles/postgres/vars/RedHat_pg-9.6.yml b/Ansible/ansible_collections/jfrog/installers/roles/postgres/vars/RedHat_pg-9.6.yml deleted file mode 100644 index 56d0263..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/postgres/vars/RedHat_pg-9.6.yml +++ /dev/null @@ -1,4 +0,0 @@ ---- - -postgres_server_cmd_initdb: /usr/pgsql-{{ postgres_server_version }}/bin/postgresql{{ postgres_server_pkg_version }}-setup initdb -postgres_server_initdb_become: false diff --git a/Ansible/ansible_collections/jfrog/installers/roles/postgres/vars/RedHat_pg-default.yml 
b/Ansible/ansible_collections/jfrog/installers/roles/postgres/vars/RedHat_pg-default.yml deleted file mode 100644 index 3d974c2..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/postgres/vars/RedHat_pg-default.yml +++ /dev/null @@ -1,4 +0,0 @@ ---- - -postgres_server_cmd_initdb: /usr/pgsql-{{ postgres_server_version }}/bin/initdb -D /var/lib/pgsql/{{ postgres_server_version }}/data -postgres_server_initdb_become: yes diff --git a/Ansible/ansible_collections/jfrog/installers/roles/xray/.travis.yml b/Ansible/ansible_collections/jfrog/installers/roles/xray/.travis.yml deleted file mode 100644 index 36bbf62..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/xray/.travis.yml +++ /dev/null @@ -1,29 +0,0 @@ ---- -language: python -python: "2.7" - -# Use the new container infrastructure -sudo: false - -# Install ansible -addons: - apt: - packages: - - python-pip - -install: - # Install ansible - - pip install ansible - - # Check ansible version - - ansible --version - - # Create ansible.cfg with correct roles_path - - printf '[defaults]\nroles_path=../' >ansible.cfg - -script: - # Basic role syntax check - - ansible-playbook tests/test.yml -i tests/inventory --syntax-check - -notifications: - webhooks: https://galaxy.ansible.com/api/v1/notifications/ \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/installers/roles/xray/README.md b/Ansible/ansible_collections/jfrog/installers/roles/xray/README.md deleted file mode 100644 index 2604b26..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/xray/README.md +++ /dev/null @@ -1,36 +0,0 @@ -# xray -The xray role will install Xray software onto the host. An Artifactory server and a Postgres database are required. - -### Role Variables -* _xray_version_: The version of Xray to install. eg. "3.3.0" -* _jfrog_url_: This is the Artifactory base URL. eg. "http://ec2-54-237-207-135.compute-1.amazonaws.com" -* _master_key_: This is the Artifactory [Master Key](https://www.jfrog.com/confluence/display/JFROG/Managing+Keys). See below to [autogenerate this key](#autogenerating-master-and-join-keys). -* _join_key_: This is the Artifactory [Join Key](https://www.jfrog.com/confluence/display/JFROG/Managing+Keys). See below to [autogenerate this key](#autogenerating-master-and-join-keys). -* _db_type_: This is the database type. eg. "postgresql" -* _db_driver_: This is the JDBC driver class. eg. "org.postgresql.Driver" -* _db_url_: This is the database url. eg. "postgres://10.0.0.59:5432/xraydb?sslmode=disable" -* _db_user_: The database user to configure. eg. "xray" -* _db_password_: The database password to configure. eg. "xray" -* _xray_system_yaml_: Your own [system YAML](https://www.jfrog.com/confluence/display/JFROG/System+YAML+Configuration+File) file can be specified and used. If specified, this file will be used rather than constructing a file from the parameters above. -* _xray_upgrade_only_: Perform a software upgrade only. Default is false. - -Additional variables can be found in [defaults/main.yml](./defaults/main.yml). -## Example Playbook -``` ---- -- hosts: xray - roles: - - xray -``` - -## Upgrades -The Xray role supports software upgrades. To use a role to perform a software upgrade only, use the _xray_upgrade_only_ variable and specify the version. See the following example.
- -``` -- hosts: xray - vars: - xray_version: "{{ lookup('env', 'xray_version_upgrade') }}" - xray_upgrade_only: true - roles: - - xray -``` \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/installers/roles/xray/defaults/main.yml b/Ansible/ansible_collections/jfrog/installers/roles/xray/defaults/main.yml deleted file mode 100644 index fd674bf..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/xray/defaults/main.yml +++ /dev/null @@ -1,29 +0,0 @@ ---- -# defaults file for xray -# indicates where this collection was downloaded from (galaxy, automation_hub, standalone) -ansible_marketplace: standalone - -# The version of xray to install -xray_version: 3.10.3 - -# whether to enable HA -xray_ha_enabled: true - -# The location where xray should install. -jfrog_home_directory: /opt/jfrog - -# The remote xray download file -xray_tar: https://dl.bintray.com/jfrog/jfrog-xray/xray-linux/{{ xray_version }}/jfrog-xray-{{ xray_version }}-linux.tar.gz - -# The xray install directory -xray_untar_home: "{{ jfrog_home_directory }}/jfrog-xray-{{ xray_version }}-linux" -xray_home: "{{ jfrog_home_directory }}/xray" - -# xray users and groups -xray_user: xray -xray_group: xray - -# if this is an upgrade -xray_upgrade_only: false - -xray_system_yaml_template: system.yaml.j2 diff --git a/Ansible/ansible_collections/jfrog/installers/roles/xray/handlers/main.yml b/Ansible/ansible_collections/jfrog/installers/roles/xray/handlers/main.yml deleted file mode 100644 index f236fe3..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/xray/handlers/main.yml +++ /dev/null @@ -1,2 +0,0 @@ ---- -# handlers file for xray \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/installers/roles/xray/tasks/Debian.yml b/Ansible/ansible_collections/jfrog/installers/roles/xray/tasks/Debian.yml deleted file mode 100644 index ec28e0a..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/xray/tasks/Debian.yml +++ /dev/null @@ -1,42 +0,0 @@ ---- -- name: Install db5.3-util - apt: - deb: "{{ xray_home }}/app/third-party/misc/db5.3-util_5.3.28-3ubuntu3_amd64.deb" - ignore_errors: yes - become: yes - -- name: Install db-util - apt: - deb: "{{ xray_home }}/app/third-party/misc/db-util_1_3a5.3.21exp1ubuntu1_all.deb" - ignore_errors: yes - become: yes - -- name: Install libssl - apt: - deb: "{{ xray_home }}/app/third-party/rabbitmq/libssl1.1_1.1.0j-1_deb9u1_amd64.deb" - ignore_errors: yes - become: yes - -- name: Install socat - apt: - deb: "{{ xray_home }}/app/third-party/rabbitmq/socat_1.7.3.1-2+deb9u1_amd64.deb" - become: yes - -- name: Install libwxbase3.0-0v5 - apt: - name: libwxbase3.0-0v5 - update_cache: yes - state: present - become: yes - -- name: Install erlang 21.2.1-1 - apt: - deb: "{{ xray_home }}/app/third-party/rabbitmq/esl-erlang_21.2.1-1~ubuntu~xenial_amd64.deb" - when: xray_version is version("3.8.0","<") - become: yes - -- name: Install erlang 22.3.4.1-1 - apt: - deb: "{{ xray_home }}/app/third-party/rabbitmq/esl-erlang_22.3.4.1-1_ubuntu_xenial_amd64.deb" - when: xray_version is version("3.8.0",">=") - become: yes \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/installers/roles/xray/tasks/RedHat.yml b/Ansible/ansible_collections/jfrog/installers/roles/xray/tasks/RedHat.yml deleted file mode 100644 index a24f774..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/xray/tasks/RedHat.yml +++ /dev/null @@ -1,26 +0,0 @@ ---- -- name: Install db-util - yum: - name: "{{ xray_home
}}/app/third-party/misc/libdb-utils-5.3.21-19.el7.x86_64.rpm" - state: present - become: yes - -- name: Install socat - yum: - name: "{{ xray_home }}/app/third-party/rabbitmq/socat-1.7.3.2-2.el7.x86_64.rpm" - state: present - become: yes - -- name: Install erlang 21.1.4-1 - yum: - name: "{{ xray_home }}/app/third-party/rabbitmq/erlang-21.1.4-1.el7.centos.x86_64.rpm" - state: present - when: xray_version is version("3.8.0","<") - become: yes - -- name: Install erlang 22.3.4.1-1 - yum: - name: "{{ xray_home }}/app/third-party/rabbitmq/erlang-22.3.4.1-1.el7.centos.x86_64.rpm" - state: present - when: xray_version is version("3.8.0",">=") - become: yes \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/installers/roles/xray/tasks/install.yml b/Ansible/ansible_collections/jfrog/installers/roles/xray/tasks/install.yml deleted file mode 100644 index 64155c8..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/xray/tasks/install.yml +++ /dev/null @@ -1,111 +0,0 @@ ---- -- debug: - msg: "Performing installation of Xray..." - -- name: create group for xray - group: - name: "{{ xray_group }}" - state: present - become: yes - -- name: create user for xray - user: - name: "{{ xray_user }}" - group: "{{ xray_group }}" - system: yes - become: yes - -- name: ensure jfrog_home_directory exists - file: - path: "{{ jfrog_home_directory }}" - state: directory - become: yes - -- name: download xray - unarchive: - src: "{{ xray_tar }}" - dest: "{{ jfrog_home_directory }}" - remote_src: yes - owner: "{{ xray_user }}" - group: "{{ xray_group }}" - creates: "{{ xray_untar_home }}" - become: yes - register: downloadxray - until: downloadxray is succeeded - retries: 3 - -- name: MV untar directory to xray home - command: "mv {{ xray_untar_home }} {{ xray_home }}" - become: yes - -- debug: - msg: "Running dependency installation for {{ ansible_os_family }}" - -- name: perform dependency installation - include_tasks: "{{ ansible_os_family }}.yml" - -- name: ensure etc exists - file: - path: "{{ xray_home }}/var/etc" - state: directory - owner: "{{ xray_user }}" - group: "{{ xray_group }}" - become: yes - -- name: use specified system yaml - copy: - src: "{{ xray_system_yaml }}" - dest: "{{ xray_home }}/var/etc/system.yaml" - become: yes - when: xray_system_yaml is defined - -- name: configure system yaml template - template: - src: "{{ xray_system_yaml_template }}" - dest: "{{ xray_home }}/var/etc/system.yaml" - become: yes - when: xray_system_yaml is not defined - -- name: ensure {{ xray_home }}/var/etc/security/ exists - file: - path: "{{ xray_home }}/var/etc/security/" - state: directory - owner: "{{ xray_user }}" - group: "{{ xray_group }}" - become: yes - -- name: configure master key - template: - src: master.key.j2 - dest: "{{ xray_home }}/var/etc/security/master.key" - become: yes - -- name: configure join key - template: - src: join.key.j2 - dest: "{{ xray_home }}/var/etc/security/join.key" - become: yes - -- name: ensure {{ xray_home }}/var/etc/info/ exists - file: - path: "{{ xray_home }}/var/etc/info/" - state: directory - owner: "{{ xray_user }}" - group: "{{ xray_group }}" - become: yes - -- name: configure installer info - template: - src: installer-info.json.j2 - dest: "{{ xray_home }}/var/etc/info/installer-info.json" - become: yes - -- name: create xray service - shell: "{{ xray_home }}/app/bin/installService.sh" - become: yes - -- name: start and enable xray - service: - name: xray - state: restarted - become: yes \ No newline at end of file diff --git 
a/Ansible/ansible_collections/jfrog/installers/roles/xray/tasks/upgrade.yml b/Ansible/ansible_collections/jfrog/installers/roles/xray/tasks/upgrade.yml deleted file mode 100644 index 623661c..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/xray/tasks/upgrade.yml +++ /dev/null @@ -1,54 +0,0 @@ ---- -- debug: - msg: "Performing upgrade of Xray..." - -- name: stop xray - service: - name: xray - state: stopped - become: yes - -- name: ensure jfrog_home_directory exists - file: - path: "{{ jfrog_home_directory }}" - state: directory - become: yes - -- name: download xray - unarchive: - src: "{{ xray_tar }}" - dest: "{{ jfrog_home_directory }}" - remote_src: yes - owner: "{{ xray_user }}" - group: "{{ xray_group }}" - creates: "{{ xray_untar_home }}" - become: yes - register: downloadxray - until: downloadxray is succeeded - retries: 3 - -- name: Delete xray app - file: - path: "{{ xray_home }}/app" - state: absent - become: yes - -- name: CP new app to xray app - command: "cp -r {{ xray_untar_home }}/app {{ xray_home }}/app" - become: yes - -- name: Delete untar directory - file: - path: "{{ xray_untar_home }}" - state: absent - become: yes - -- name: create xray service - shell: "{{ xray_home }}/app/bin/installService.sh" - become: yes - -- name: start and enable xray - service: - name: xray - state: restarted - become: yes \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/installers/roles/xray/templates/installer-info.json.j2 b/Ansible/ansible_collections/jfrog/installers/roles/xray/templates/installer-info.json.j2 deleted file mode 100644 index a76c88c..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/xray/templates/installer-info.json.j2 +++ /dev/null @@ -1,11 +0,0 @@ -{ - "productId": "Ansible_artifactory/1.0.0", - "features": [ - { - "featureId": "Partner/ACC-006973" - }, - { - "featureId": "Channel/{{ ansible_marketplace }}" - } - ] -} \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/installers/roles/xray/templates/join.key.j2 b/Ansible/ansible_collections/jfrog/installers/roles/xray/templates/join.key.j2 deleted file mode 100644 index 17d05d2..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/xray/templates/join.key.j2 +++ /dev/null @@ -1 +0,0 @@ -{{ join_key }} \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/installers/roles/xray/templates/master.key.j2 b/Ansible/ansible_collections/jfrog/installers/roles/xray/templates/master.key.j2 deleted file mode 100644 index 0462a64..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/xray/templates/master.key.j2 +++ /dev/null @@ -1 +0,0 @@ -{{ master_key }} \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/installers/roles/xray/templates/system.yaml.j2 b/Ansible/ansible_collections/jfrog/installers/roles/xray/templates/system.yaml.j2 deleted file mode 100644 index 206eb77..0000000 --- a/Ansible/ansible_collections/jfrog/installers/roles/xray/templates/system.yaml.j2 +++ /dev/null @@ -1,36 +0,0 @@ -## @formatter:off -## JFROG ARTIFACTORY SYSTEM CONFIGURATION FILE -## HOW TO USE: comment-out any field and keep the correct yaml indentation by deleting only the leading '#' character. -configVersion: 1 - -## NOTE: JFROG_HOME is a place holder for the JFrog root directory containing the deployed product, the home directory for all JFrog products. -## Replace JFROG_HOME with the real path! 
For example, in RPM install, JFROG_HOME=/opt/jfrog - -## NOTE: Sensitive information such as passwords and join key are encrypted on first read. -## NOTE: The provided commented key and value is the default. - -## SHARED CONFIGURATIONS -## A shared section for keys across all services in this config -shared: - ## Base URL of the JFrog Platform Deployment (JPD) - ## This is the URL to the machine where JFrog Artifactory is deployed, or the load balancer pointing to it. It is recommended to use DNS names rather than direct IPs. - ## Examples: "http://jfrog.acme.com" or "http://10.20.30.40:8082" - jfrogUrl: {{ jfrog_url }} - - ## Node Settings - node: - ## A unique id to identify this node. - ## Default: auto generated at startup. - id: {{ ansible_machine_id }} - - ## Database Configuration - database: - ## One of: mysql, oracle, mssql, postgresql, mariadb - ## Default: Embedded derby - - ## Example for mysql/postgresql - type: "{{ db_type }}" - driver: "{{ db_driver }}" - url: "{{ db_url }}" - username: "{{ db_user }}" - password: "{{ db_password }}" \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/README.md b/Ansible/ansible_collections/jfrog/platform/README.md new file mode 100644 index 0000000..1d24d83 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/README.md @@ -0,0 +1,112 @@ +# JFrog Platform Ansible Collection + +This Ansible directory consists of the following directories that support the JFrog Platform collection. + + * [ansible_collections directory](ansible_collections) - This directory contains the Ansible collection package that has the Ansible roles for Artifactory, Distribution, Mission Control and Xray. See the roles README for details on the product roles and variables. + * [examples directory](examples) - This directory contains example playbooks for various architectures. + + + ## Getting Started + + 1. Install this collection from Ansible Galaxy. This collection is also available in RedHat Automation Hub. + + ``` + ansible-galaxy collection install jfrog.platform + ``` + + Ensure you reference the collection in your playbook when using these roles. + + ``` + --- + - hosts: artifactory_servers + collections: + - jfrog.platform + roles: + - artifactory + + ``` + + 2. Ansible uses SSH to connect to hosts. Ensure that your SSH private key is on your client and the public keys are installed on your Ansible hosts. + + 3. Create your inventory file. Use one of the examples from the [examples directory](examples) to construct an inventory file (hosts.ini) with the host addresses. + + 4. Create your playbook. Use one of the examples from the [examples directory](examples) to construct a playbook using the JFrog Ansible roles. These roles will be applied to your inventory and provision software. + + 5. Then execute the following command to provision the JFrog Platform with Ansible. + +``` +ansible-playbook -vv platform.yml -i hosts.ini +``` + +## Generating Master and Join Keys +**Note**: If you don't provide these keys, they will be set to defaults (check the group_vars/all/vars.yml file). +For production deployments, you may want to generate your master and join keys and apply them to all the nodes.
+**IMPORTANT**: Save the generated master and join keys below for future upgrades. + +``` +MASTER_KEY_VALUE=$(openssl rand -hex 32) +JOIN_KEY_VALUE=$(openssl rand -hex 32) +ansible-playbook -vv platform.yml -i hosts.ini --extra-vars "master_key=$MASTER_KEY_VALUE join_key=$JOIN_KEY_VALUE" +``` + +## Using [Ansible Vault](https://docs.ansible.com/ansible/latest/user_guide/vault.html) to Encrypt Vars +You may want to keep some vars secret. Put these vars into a separate file and encrypt them using [Ansible Vault](https://docs.ansible.com/ansible/latest/user_guide/vault.html). + +``` +ansible-vault encrypt secret-vars.yml --vault-password-file ~/.vault_pass.txt +``` + +Then include the secret vars file in your playbook. + +``` +- hosts: artifactory_servers + + vars_files: + - ./vars/secret-vars.yml + - ./vars/vars.yml + + roles: + - artifactory +``` + +## Upgrades +All JFrog product roles support software updates. To use a role to perform a software update only, use the corresponding _<product>_upgrade_only_ variable and specify the version. See the following example. + +``` +- hosts: artifactory_servers + vars: + artifactory_version: "{{ lookup('env', 'artifactory_version_upgrade') }}" + artifactory_upgrade_only: true + roles: + - artifactory + +- hosts: xray_servers + vars: + xray_version: "{{ lookup('env', 'xray_version_upgrade') }}" + xray_upgrade_only: true + roles: + - xray +``` + +## Building the Collection Archive +1. Go to the [ansible_collections/jfrog/platform directory](ansible_collections/jfrog/platform). +2. Update the galaxy.yml meta file as needed. Update the version. +3. Build the archive. (Requires Ansible 2.9+) +``` +ansible-galaxy collection build +``` + +## OS support +The JFrog Platform Ansible Collection can be installed on the following operating systems: + +* Ubuntu LTS versions (16.04/18.04/20.04) +* CentOS/RHEL 7.x/8.x +* Debian 9.x/10.x + +## Known issues +* Refer [here](https://github.com/jfrog/JFrog-Cloud-Installers/issues?q=is%3Aopen+is%3Aissue+label%3AAnsible) +* By default, ansible_python_interpreter: "/usr/bin/python3" is used. For CentOS/RHEL 7, set this to "/usr/bin/python".
For example +``` +ansible-playbook -vv platform.yml -i hosts.ini -e 'ansible_python_interpreter=/usr/bin/python' +``` + diff --git a/Ansible/ansible_collections/jfrog/platform/ansible.cfg b/Ansible/ansible_collections/jfrog/platform/ansible.cfg new file mode 100644 index 0000000..5c2352e --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/ansible.cfg @@ -0,0 +1,6 @@ +[defaults] +host_key_checking = false +stdout_callback = debug +remote_tmp = /tmp/.ansible/tmp +private_key_file=~/.ssh/ansible-jfrog.key +timeout = 20 diff --git a/Ansible/ansible_collections/jfrog/platform/artifactory.yml b/Ansible/ansible_collections/jfrog/platform/artifactory.yml new file mode 100644 index 0000000..b0a9eef --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/artifactory.yml @@ -0,0 +1,4 @@ +--- +- hosts: artifactory_servers + roles: + - artifactory diff --git a/Ansible/ansible_collections/jfrog/platform/distribution.yml b/Ansible/ansible_collections/jfrog/platform/distribution.yml new file mode 100644 index 0000000..d1e90e5 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/distribution.yml @@ -0,0 +1,4 @@ +--- +- hosts: distribution_servers + roles: + - distribution diff --git a/Ansible/ansible_collections/jfrog/installers/galaxy.yml b/Ansible/ansible_collections/jfrog/platform/galaxy.yml similarity index 80% rename from Ansible/ansible_collections/jfrog/installers/galaxy.yml rename to Ansible/ansible_collections/jfrog/platform/galaxy.yml index d407f59..c649f5d 100644 --- a/Ansible/ansible_collections/jfrog/installers/galaxy.yml +++ b/Ansible/ansible_collections/jfrog/platform/galaxy.yml @@ -6,10 +6,10 @@ namespace: "jfrog" # The name of the collection. Has the same character restrictions as 'namespace' -name: "installers" +name: "platform" # The version of the collection. Must be compatible with semantic versioning -version: "1.1.2" +version: "7.18.5" # The path to the Markdown (.md) readme file. This path is relative to the root of the collection readme: "README.md" @@ -17,13 +17,13 @@ readme: "README.md" # A list of the collection's content authors. Can be just the name or in the format 'Full Name (url) # @nicks:irc/im.site#channel' authors: - - "Jeff Fry " + - "JFrog Maintainers Team " ### OPTIONAL but strongly recommended # A short summary description of the collection -description: "This collection provides roles for installing Artifactory and Xray. Additionally, it provides optional SSL and Postgresql roles if these are needed for your deployment." +description: "This collection provides roles for installing JFrog Platform which includes Artifactory, Distribution, Mission-control and Xray. Additionally, it provides optional SSL and Postgresql roles if these are needed for your deployment." # Either a single license or a list of licenses for content inside of a collection. Ansible Galaxy currently only # accepts L(SPDX,https://spdx.org/licenses/) licenses. This key is mutually exclusive with 'license_file' @@ -37,10 +37,14 @@ license_file: "" # A list of tags you want to associate with the collection for indexing/searching. A tag name has the same character # requirements as 'namespace' and 'name' tags: - - artifactory - - xray - jfrog + - platform + - devops - application + - artifactory + - distribution + - missioncontrol + - xray # Collections that this collection requires to be installed for it to be usable. The key of the dict is the # collection label 'namespace.name'. 
The value is a version range @@ -49,13 +53,13 @@ tags: dependencies: {} # The URL of the originating SCM repository -repository: "https://github.com/jfrog/JFrog-Cloud-Installers/" +repository: "https://github.com/jfrog/JFrog-Cloud-Installers/Ansible" # The URL to any online docs documentation: "https://github.com/jfrog/JFrog-Cloud-Installers/blob/master/Ansible/README.md" # The URL to the homepage of the collection/project -homepage: "https://github.com/jfrog/JFrog-Cloud-Installers/" +homepage: "https://github.com/jfrog/JFrog-Cloud-Installers/Ansible" # The URL to the collection issue tracker issues: "https://github.com/jfrog/JFrog-Cloud-Installers/issues" diff --git a/Ansible/ansible_collections/jfrog/platform/group_vars/all/package_version.yml b/Ansible/ansible_collections/jfrog/platform/group_vars/all/package_version.yml new file mode 100644 index 0000000..7000464 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/group_vars/all/package_version.yml @@ -0,0 +1,8 @@ +# The version of products to install +artifactory_version: 7.18.5 +xray_version: 3.24.2 +distribution_version: 2.7.1 +missioncontrol_version: 4.7.3 + +# platform collection version +platform_collection_version: 7.18.5 diff --git a/Ansible/ansible_collections/jfrog/platform/group_vars/all/vars.yml b/Ansible/ansible_collections/jfrog/platform/group_vars/all/vars.yml new file mode 100755 index 0000000..d9701dd --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/group_vars/all/vars.yml @@ -0,0 +1,74 @@ +--- +# Defaults +## Note: These values are global and can be overridden in a role's defaults/main.yml file. +## For production deployments, you may want to generate your master and join keys and apply them to all the nodes. +master_key: ee69d96880726d3abf6b42b97d2ae589111ea95c2a8bd5876ec5cd9e8ee34f86 +join_key: 83da88eaaa08dfed5b86888fcec85f19ace0c3ff8747bcefcec2c9769ad4043d + +jfrog_url: >- + {%- for host in groups['artifactory_servers'] -%} + "http://{{ hostvars[host]['ansible_host'] }}:8082" + {%- endfor -%} + +# Artifactory DB details +artifactory_db_type: postgresql +artifactory_db_driver: org.postgresql.Driver +artifactory_db_name: artifactory +artifactory_db_user: artifactory +artifactory_db_password: password +artifactory_db_url: >- + {%- for item in groups['postgres_servers'] -%} + jdbc:postgresql://{{ hostvars[item]['ansible_host'] }}:5432/{{ artifactory_db_name }} + {%- endfor -%} + +# Xray DB details +xray_db_type: postgresql +xray_db_driver: org.postgresql.Driver +xray_db_name: xray +xray_db_user: xray +xray_db_password: password +xray_db_url: >- + {%- for item in groups['postgres_servers'] -%} + postgres://{{ hostvars[item]['ansible_host'] }}:5432/{{ xray_db_name }}?sslmode=disable + {%- endfor -%} + +# Distribution DB details +distribution_db_type: postgresql +distribution_db_driver: org.postgresql.Driver +distribution_db_name: distribution +distribution_db_user: distribution +distribution_db_password: password +distribution_db_url: >- + {%- for item in groups['postgres_servers'] -%} + jdbc:postgresql://{{ hostvars[item]['ansible_host'] }}:5432/{{ distribution_db_name }}?sslmode=disable + {%- endfor -%} + +# MissionControl DB details +mc_db_type: postgresql +mc_db_driver: org.postgresql.Driver +mc_db_name: mc +mc_db_user: mc +mc_db_password: password +mc_db_url: >- + {%- for item in groups['postgres_servers'] -%} + jdbc:postgresql://{{ hostvars[item]['ansible_host'] }}:5432/{{ mc_db_name }}?sslmode=disable + {%- endfor -%} + +# Postgresql users and databases/schemas +db_users: + - { db_user: "{{
artifactory_db_user }}", db_password: "{{ artifactory_db_password }}" } + - { db_user: "{{ xray_db_user }}", db_password: "{{ xray_db_password }}" } + - { db_user: "{{ distribution_db_user }}", db_password: "{{ distribution_db_password }}" } + - { db_user: "{{ mc_db_user }}", db_password: "{{ mc_db_password }}" } +dbs: + - { db_name: "{{ artifactory_db_name }}", db_owner: "{{ artifactory_db_user }}" } + - { db_name: "{{ xray_db_name }}", db_owner: "{{ xray_db_user }}" } + - { db_name: "{{ distribution_db_name }}", db_owner: "{{ distribution_db_user }}" } + - { db_name: "{{ mc_db_name }}", db_owner: "{{ mc_db_user }}" } +mc_schemas: + - jfmc_server + - insight_server + - insight_scheduler + +# For Centos/RHEL-7, Set this to "/usr/bin/python" +ansible_python_interpreter: "/usr/bin/python3" \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/hosts.ini b/Ansible/ansible_collections/jfrog/platform/hosts.ini new file mode 100644 index 0000000..b721a54 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/hosts.ini @@ -0,0 +1,23 @@ +[postgres_servers] +postgres-1 ansible_host=10.70.64.85 private_ip=10.70.64.85 + +[artifactory_servers] +artifactory-1 ansible_host=10.70.64.84 private_ip=10.70.64.84 + +[xray_servers] +xray-1 ansible_host=10.70.64.83 private_ip=10.70.64.83 + +[distribution_servers] +distribution-1 ansible_host=10.70.64.82 private_ip=10.70.64.82 + +[missionControl_servers] +missionControl-1 ansible_host=10.70.64.79 private_ip=10.70.64.79 + +[xray_secondary_servers] +xray-2 ansible_host=0.0.0.0 private_ip=0.0.0.0 + +[distribution_secondary_servers] +distribution-2 ansible_host=0.0.0.0 private_ip=0.0.0.0 + +[missionControl_secondary_servers] +missionControl-2 ansible_host=0.0.0.0 private_ip=0.0.0.0 \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/jfrog-platform-7.18.5.tar.gz b/Ansible/ansible_collections/jfrog/platform/jfrog-platform-7.18.5.tar.gz new file mode 100644 index 0000000..c565760 Binary files /dev/null and b/Ansible/ansible_collections/jfrog/platform/jfrog-platform-7.18.5.tar.gz differ diff --git a/Ansible/ansible_collections/jfrog/platform/missionControl.yml b/Ansible/ansible_collections/jfrog/platform/missionControl.yml new file mode 100644 index 0000000..ebd69f5 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/missionControl.yml @@ -0,0 +1,4 @@ +--- +- hosts: missioncontrol_servers + roles: + - missioncontrol diff --git a/Ansible/ansible_collections/jfrog/platform/platform.yml b/Ansible/ansible_collections/jfrog/platform/platform.yml new file mode 100644 index 0000000..ac116cf --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/platform.yml @@ -0,0 +1,16 @@ +--- +- hosts: postgres_servers + roles: + - postgres +- hosts: artifactory_servers + roles: + - artifactory +- hosts: xray_servers + roles: + - xray +- hosts: distribution_servers + roles: + - distribution +- hosts: missioncontrol_servers + roles: + - missioncontrol diff --git a/Ansible/ansible_collections/jfrog/installers/plugins/README.md b/Ansible/ansible_collections/jfrog/platform/plugins/README.md similarity index 100% rename from Ansible/ansible_collections/jfrog/installers/plugins/README.md rename to Ansible/ansible_collections/jfrog/platform/plugins/README.md diff --git a/Ansible/ansible_collections/jfrog/platform/plugins/callback/README.md b/Ansible/ansible_collections/jfrog/platform/plugins/callback/README.md new file mode 100644 index 0000000..6541cf7 --- /dev/null +++ 
b/Ansible/ansible_collections/jfrog/platform/plugins/callback/README.md @@ -0,0 +1,31 @@ +# Collections Plugins Directory + +This directory can be used to ship various plugins inside an Ansible collection. Each plugin is placed in a folder that +is named after the type of plugin it is in. It can also include the `module_utils` and `modules` directory that +would contain module utils and modules respectively. + +Here is an example directory of the majority of plugins currently supported by Ansible: + +``` +└── plugins + ├── action + ├── become + ├── cache + ├── callback + ├── cliconf + ├── connection + ├── filter + ├── httpapi + ├── inventory + ├── lookup + ├── module_utils + ├── modules + ├── netconf + ├── shell + ├── strategy + ├── terminal + ├── test + └── vars +``` + +A full list of plugin types can be found at [Working With Plugins](https://docs.ansible.com/ansible/2.9/plugins/plugins.html). \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/postgres.yml b/Ansible/ansible_collections/jfrog/platform/postgres.yml new file mode 100644 index 0000000..0696185 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/postgres.yml @@ -0,0 +1,4 @@ +--- +- hosts: postgres_servers + roles: + - postgres diff --git a/Ansible/ansible_collections/jfrog/platform/roles/artifactory/README.md b/Ansible/ansible_collections/jfrog/platform/roles/artifactory/README.md new file mode 100644 index 0000000..c5d211b --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/artifactory/README.md @@ -0,0 +1,28 @@ +# artifactory +The artifactory role installs the Artifactory Pro software onto the host. Per the Vars below, it will configure a node as primary or secondary. This role uses the secondary role artifactory_nginx to install nginx. + +## Role Variables +* _server_name_: **mandatory** This is the server name. e.g. "artifactory.54.175.51.178.xip.io" +* _artifactory_upgrade_only_: Perform a software upgrade only. Default is false. + +Additional variables can be found in [defaults/main.yml](./defaults/main.yml). + +## Example Playbook +``` +--- +- hosts: artifactory_servers + roles: + - artifactory +``` + +## Upgrades +The Artifactory role supports software upgrades. To use a role to perform a software upgrade only, use the _artifactory_upgrade_only_ variable and specify the version. See the following example. + +``` +- hosts: artifactory_servers + vars: + artifactory_version: "{{ lookup('env', 'artifactory_version_upgrade') }}" + artifactory_upgrade_only: true + roles: + - artifactory +``` \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/roles/artifactory/defaults/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/artifactory/defaults/main.yml new file mode 100644 index 0000000..db4b5fb --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/artifactory/defaults/main.yml @@ -0,0 +1,57 @@ +--- +# defaults file for artifactory +# indicates where this collection was downloaded from (galaxy, automation_hub, standalone) +ansible_marketplace: standalone + +# Set this to true when SSL is enabled (to use the artifactory_nginx_ssl role); defaults to false (implying Artifactory uses the artifactory_nginx role) +artifactory_nginx_ssl_enabled: false + +# Provide a single-node license +# artifactory_single_license: + +# Provide individual (HA) licenses separated by a new line and set artifactory_ha_enabled: true.
+ +# Example: +# artifactory_licenses: |- +# <license_1> + +# <license_2> + +# <license_3> + +# To enable HA, set to true +artifactory_ha_enabled: false + +# By default, all nodes are primary (CNHA) - https://www.jfrog.com/confluence/display/JFROG/High+Availability#HighAvailability-Cloud-NativeHighAvailability +artifactory_taskAffinity: any + +# The location where Artifactory should install. +jfrog_home_directory: /opt/jfrog + +# The location where Artifactory should store data. +artifactory_file_store_dir: /data + +# Pick the Artifactory flavour to install; can also be cpp-ce, jcr, or pro. +artifactory_flavour: pro + +artifactory_extra_java_opts: -server -Xms512m -Xmx2g -Xss256k -XX:+UseG1GC +artifactory_system_yaml_template: system.yaml.j2 +artifactory_tar: https://releases.jfrog.io/artifactory/artifactory-pro/org/artifactory/pro/jfrog-artifactory-pro/{{ artifactory_version }}/jfrog-artifactory-pro-{{ artifactory_version }}-linux.tar.gz +artifactory_home: "{{ jfrog_home_directory }}/artifactory" +artifactory_untar_home: "{{ jfrog_home_directory }}/artifactory-{{ artifactory_flavour }}-{{ artifactory_version }}" + +postgres_driver_download_url: https://repo1.maven.org/maven2/org/postgresql/postgresql/42.2.20/postgresql-42.2.20.jar + +artifactory_user: artifactory +artifactory_group: artifactory + +artifactory_daemon: artifactory + +artifactory_uid: 1030 +artifactory_gid: 1030 + +# if this is an upgrade +artifactory_upgrade_only: false + +# default username and password +artifactory_admin_username: admin +artifactory_admin_password: password diff --git a/Ansible/ansible_collections/jfrog/platform/roles/artifactory/handlers/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/artifactory/handlers/main.yml new file mode 100644 index 0000000..4cd96cc --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/artifactory/handlers/main.yml @@ -0,0 +1,7 @@ +--- +# handlers file for artifactory +- name: restart artifactory + become: yes + systemd: + name: "{{ artifactory_daemon }}" + state: restarted diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/meta/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/artifactory/meta/main.yml similarity index 88% rename from Ansible/ansible_collections/jfrog/installers/roles/artifactory/meta/main.yml rename to Ansible/ansible_collections/jfrog/platform/roles/artifactory/meta/main.yml index c128393..e604dfc 100644 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/meta/main.yml +++ b/Ansible/ansible_collections/jfrog/platform/roles/artifactory/meta/main.yml @@ -1,5 +1,5 @@ galaxy_info: - author: "Jeff Fry " + author: "JFrog Maintainers Team " description: "The artifactory role installs the Artifactory Pro software onto the host. Per the Vars below, it will configure a node as primary or secondary. This role uses secondary roles artifactory_nginx to install nginx."
company: JFrog diff --git a/Ansible/ansible_collections/jfrog/platform/roles/artifactory/tasks/install.yml b/Ansible/ansible_collections/jfrog/platform/roles/artifactory/tasks/install.yml new file mode 100644 index 0000000..74aec64 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/artifactory/tasks/install.yml @@ -0,0 +1,161 @@ +--- +- debug: + msg: "Performing installation of Artifactory version : {{ artifactory_version }} " + +- name: install nginx + include_role: + name: artifactory_nginx + when: artifactory_nginx_ssl_enabled == false + +- name: install nginx with SSL + include_role: + name: artifactory_nginx_ssl + when: artifactory_nginx_ssl_enabled == true + +- name: Ensure group artifactory exist + become: yes + group: + name: "{{ artifactory_group }}" + gid: "{{ artifactory_gid }}" + state: present + +- name: Ensure user artifactory exist + become: yes + user: + uid: "{{ artifactory_uid }}" + name: "{{ artifactory_user }}" + group: "{{ artifactory_group }}" + create_home: yes + home: "{{ artifactory_home }}" + shell: /bin/bash + state: present + +- name: Download artifactory + become: yes + unarchive: + src: "{{ artifactory_tar }}" + dest: "{{ jfrog_home_directory }}" + remote_src: yes + owner: "{{ artifactory_user }}" + group: "{{ artifactory_group }}" + creates: "{{ artifactory_untar_home }}" + when: artifactory_tar is defined + register: downloadartifactory + until: downloadartifactory is succeeded + retries: 3 + +- name: Check if app directory exists + become: yes + stat: + path: "{{ artifactory_home }}/app" + register: app_dir_check + +- name: Copy untar directory to artifactory home + become: yes + command: "cp -r {{ artifactory_untar_home }}/. {{ artifactory_home }}" + when: not app_dir_check.stat.exists + +- name: Create required directories + become: yes + file: + path: "{{ item }}" + state: directory + recurse: yes + owner: "{{ artifactory_user }}" + group: "{{ artifactory_group }}" + loop: + - "{{ artifactory_file_store_dir }}" + - "{{ artifactory_home }}/var/data" + - "{{ artifactory_home }}/var/etc" + - "{{ artifactory_home }}/var/etc/security/" + - "{{ artifactory_home }}/var/etc/artifactory/info/" + +- name: Configure systemyaml + become: yes + template: + src: "{{ artifactory_system_yaml_template }}" + dest: "{{ artifactory_home }}/var/etc/system.yaml" + notify: restart artifactory + +- name: Configure master key + become: yes + copy: + dest: "{{ artifactory_home }}/var/etc/security/master.key" + content: | + {{ master_key }} + owner: "{{ artifactory_user }}" + group: "{{ artifactory_group }}" + mode: 0640 + +- name: Configure join key + become: yes + copy: + dest: "{{ artifactory_home }}/var/etc/security/join.key" + content: | + {{ join_key }} + owner: "{{ artifactory_user }}" + group: "{{ artifactory_group }}" + mode: 0640 + notify: restart artifactory + +- name: Configure installer info + become: yes + template: + src: installer-info.json.j2 + dest: "{{ artifactory_home }}/var/etc/artifactory/info/installer-info.json" + notify: restart artifactory + +- name: Configure binary store + become: yes + template: + src: binarystore.xml.j2 + dest: "{{ artifactory_home }}/var/etc/artifactory/binarystore.xml" + notify: restart artifactory + +- name: Configure single license + become: yes + template: + src: artifactory.lic.j2 + dest: "{{ artifactory_home }}/var/etc/artifactory/artifactory.lic" + when: artifactory_single_license is defined + notify: restart artifactory + +- name: Configure HA licenses + become: yes + template: + src: 
artifactory.cluster.license.j2 + dest: "{{ artifactory_home }}/var/etc/artifactory/artifactory.cluster.license" + when: artifactory_licenses is defined + notify: restart artifactory + +- name: Download database driver + become: yes + get_url: + url: "{{ postgres_driver_download_url }}" + dest: "{{ artifactory_home }}/var/bootstrap/artifactory/tomcat/lib" + owner: "{{ artifactory_user }}" + group: "{{ artifactory_group }}" + when: postgres_driver_download_url is defined + notify: restart artifactory + +- name: Create artifactory service + become: yes + shell: "{{ artifactory_home }}/app/bin/installService.sh" + +- name: Ensure permissions are correct + become: yes + file: + path: "{{ jfrog_home_directory }}" + group: "{{ artifactory_group }}" + owner: "{{ artifactory_user }}" + recurse: yes + +- name: Restart artifactory + meta: flush_handlers + +- name : Wait for artifactory to be fully deployed + uri: url=http://127.0.0.1:8082/router/api/v1/system/health timeout=130 + register: result + until: result.status == 200 + retries: 25 + delay: 5 diff --git a/Ansible/ansible_collections/jfrog/platform/roles/artifactory/tasks/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/artifactory/tasks/main.yml new file mode 100644 index 0000000..3afccb3 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/artifactory/tasks/main.yml @@ -0,0 +1,6 @@ +- name: perform installation + include_tasks: "install.yml" + when: not artifactory_upgrade_only +- name: perform upgrade + include_tasks: "upgrade.yml" + when: artifactory_upgrade_only \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/roles/artifactory/tasks/upgrade.yml b/Ansible/ansible_collections/jfrog/platform/roles/artifactory/tasks/upgrade.yml new file mode 100644 index 0000000..547c41d --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/artifactory/tasks/upgrade.yml @@ -0,0 +1,105 @@ +--- +- debug: + msg: "Performing upgrade of Artifactory version to : {{ artifactory_version }} " + +- name: Stop artifactory + become: yes + systemd: + name: "{{ artifactory_daemon }}" + state: stopped + +- name: Ensure jfrog_home_directory exists + become: yes + file: + path: "{{ jfrog_home_directory }}" + state: directory + +- name: Download artifactory for upgrade + become: yes + unarchive: + src: "{{ artifactory_tar }}" + dest: "{{ jfrog_home_directory }}" + remote_src: yes + owner: "{{ artifactory_user }}" + group: "{{ artifactory_group }}" + creates: "{{ artifactory_untar_home }}" + when: artifactory_tar is defined + register: downloadartifactory + until: downloadartifactory is succeeded + retries: 3 + +- name: Delete artifactory app directory + become: yes + file: + path: "{{ artifactory_home }}/app" + state: absent + +- name: Copy new app to artifactory app + become: yes + command: "cp -r {{ artifactory_untar_home }}/app/. 
{{ artifactory_home }}/app" + +- name: Configure join key + become: yes + copy: + dest: "{{ artifactory_home }}/var/etc/security/join.key" + content: | + {{ join_key }} + owner: "{{ artifactory_user }}" + group: "{{ artifactory_group }}" + mode: 0640 + notify: restart artifactory + +- name: Configure single license + become: yes + template: + src: artifactory.lic.j2 + dest: "{{ artifactory_home }}/var/etc/artifactory/artifactory.lic" + when: artifactory_single_license is defined + notify: restart artifactory + +- name: Configure HA licenses + become: yes + template: + src: artifactory.cluster.license.j2 + dest: "{{ artifactory_home }}/var/etc/artifactory/artifactory.cluster.license" + when: artifactory_licenses is defined + notify: restart artifactory + +- name: Configure installer info + become: yes + template: + src: installer-info.json.j2 + dest: "{{ artifactory_home }}/var/etc/artifactory/info/installer-info.json" + notify: restart artifactory + +- name: Configure binary store + become: yes + template: + src: binarystore.xml.j2 + dest: "{{ artifactory_home }}/var/etc/artifactory/binarystore.xml" + notify: restart artifactory + +- name: Configure systemyaml + become: yes + template: + src: "{{ artifactory_system_yaml_template }}" + dest: "{{ artifactory_home }}/var/etc/system.yaml" + notify: restart artifactory + +- name: Ensure permissions are correct + become: yes + file: + path: "{{ jfrog_home_directory }}" + group: "{{ artifactory_group }}" + owner: "{{ artifactory_user }}" + recurse: yes + +- name: Restart artifactory + meta: flush_handlers + +- name : Wait for artifactory to be fully deployed + uri: url=http://127.0.0.1:8082/router/api/v1/system/health timeout=130 + register: result + until: result.status == 200 + retries: 25 + delay: 5 diff --git a/Ansible/ansible_collections/jfrog/platform/roles/artifactory/templates/artifactory.cluster.license.j2 b/Ansible/ansible_collections/jfrog/platform/roles/artifactory/templates/artifactory.cluster.license.j2 new file mode 100644 index 0000000..8fa3367 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/artifactory/templates/artifactory.cluster.license.j2 @@ -0,0 +1,3 @@ +{% if (artifactory_licenses) and (artifactory_licenses|length > 0) %} +{{ artifactory_licenses }} +{% endif %} diff --git a/Ansible/ansible_collections/jfrog/platform/roles/artifactory/templates/artifactory.lic.j2 b/Ansible/ansible_collections/jfrog/platform/roles/artifactory/templates/artifactory.lic.j2 new file mode 100644 index 0000000..49fa0ca --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/artifactory/templates/artifactory.lic.j2 @@ -0,0 +1,3 @@ +{% if (artifactory_single_license) and (artifactory_single_license|length > 0) %} +{{ artifactory_single_license }} +{% endif %} diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/templates/binarystore.xml.j2 b/Ansible/ansible_collections/jfrog/platform/roles/artifactory/templates/binarystore.xml.j2 similarity index 91% rename from Ansible/ansible_collections/jfrog/installers/roles/artifactory/templates/binarystore.xml.j2 rename to Ansible/ansible_collections/jfrog/platform/roles/artifactory/templates/binarystore.xml.j2 index f85f16f..a06e211 100644 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory/templates/binarystore.xml.j2 +++ b/Ansible/ansible_collections/jfrog/platform/roles/artifactory/templates/binarystore.xml.j2 @@ -1,4 +1,4 @@ - + \ No newline at end of file diff --git 
a/Ansible/ansible_collections/jfrog/platform/roles/artifactory/templates/installer-info.json.j2 b/Ansible/ansible_collections/jfrog/platform/roles/artifactory/templates/installer-info.json.j2 new file mode 100644 index 0000000..639e741 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/artifactory/templates/installer-info.json.j2 @@ -0,0 +1,9 @@ +{{ ansible_managed | comment }} +{ + "productId": "Ansible_Artifactory/{{ platform_collection_version }}-{{ artifactory_version }}", + "features": [ + { + "featureId": "Channel/{{ ansible_marketplace }}" + } + ] +} \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/roles/artifactory/templates/system.yaml.j2 b/Ansible/ansible_collections/jfrog/platform/roles/artifactory/templates/system.yaml.j2 new file mode 100644 index 0000000..876afd5 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/artifactory/templates/system.yaml.j2 @@ -0,0 +1,17 @@ +configVersion: 1 +shared: + extraJavaOpts: "{{ artifactory_extra_java_opts }}" + node: + id: {{ ansible_date_time.iso8601_micro | to_uuid }} + ip: {{ ansible_host }} + taskAffinity: {{ artifactory_taskAffinity }} + haEnabled: {{ artifactory_ha_enabled }} + database: + type: "{{ artifactory_db_type }}" + driver: "{{ artifactory_db_driver }}" + url: "{{ artifactory_db_url }}" + username: "{{ artifactory_db_user }}" + password: "{{ artifactory_db_password }}" +router: + entrypoints: + internalPort: 8046 \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/README.md b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/README.md similarity index 65% rename from Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/README.md rename to Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/README.md index 6a6cb60..75da2e8 100644 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/README.md +++ b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/README.md @@ -2,4 +2,4 @@ This role installs NGINX for artifactory. This role is automatically called by the artifactory role and isn't intended to be used separately. ## Role Variables -* _server_name_: This is the server name. eg. "artifactory.54.175.51.178.xip.io" \ No newline at end of file +* _server_name_: **mandatory** This is the server name. eg. "artifactory.54.175.51.178.xip.io" \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/defaults/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/defaults/main.yml new file mode 100644 index 0000000..72c1819 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/defaults/main.yml @@ -0,0 +1,7 @@ +--- +# defaults file for artifactory_nginx +## For production deployments,You SHOULD change it. 
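+# A minimal sketch of overriding this default (hedged: platform.yml and hosts.ini are the playbook and inventory used elsewhere in this collection; artifactory.example.com is a made-up placeholder): +# ansible-playbook -vv platform.yml -i hosts.ini -e 'server_name=artifactory.example.com'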
+server_name: test.artifactory.com + +nginx_daemon: nginx + diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/files/nginx.conf b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/files/nginx.conf similarity index 100% rename from Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/files/nginx.conf rename to Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/files/nginx.conf diff --git a/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/handlers/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/handlers/main.yml new file mode 100644 index 0000000..ddfbae0 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/handlers/main.yml @@ -0,0 +1,8 @@ +--- +# handlers file for artifactory_nginx +- name: restart nginx + become: yes + systemd: + name: "{{ nginx_daemon }}" + state: restarted + enabled: yes diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/meta/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/meta/main.yml similarity index 87% rename from Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/meta/main.yml rename to Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/meta/main.yml index bb133f7..5dbaba7 100644 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/meta/main.yml +++ b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/meta/main.yml @@ -1,5 +1,5 @@ galaxy_info: - author: "Jeff Fry " + author: "JFrog Maintainers Team " description: "This role installs NGINX for artifactory. This role is automatically called by the artifactory role and isn't intended to be used separately." 
company: JFrog diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/tasks/Debian.yml b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/tasks/Debian.yml similarity index 100% rename from Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/tasks/Debian.yml rename to Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/tasks/Debian.yml index cc41ad0..5ab7957 100644 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/tasks/Debian.yml +++ b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/tasks/Debian.yml @@ -1,9 +1,9 @@ --- - name: apt-get update + become: yes apt: update_cache: yes register: package_res retries: 5 delay: 60 - become: yes until: package_res is success diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/tasks/RedHat.yml b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/tasks/RedHat.yml similarity index 63% rename from Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/tasks/RedHat.yml rename to Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/tasks/RedHat.yml index 93c4168..9d806fa 100644 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/tasks/RedHat.yml +++ b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/tasks/RedHat.yml @@ -1,6 +1,6 @@ --- - name: epel-release + become: yes yum: name: epel-release - state: present - become: yes \ No newline at end of file + state: present \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/tasks/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/tasks/main.yml new file mode 100644 index 0000000..7a2a319 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/tasks/main.yml @@ -0,0 +1,35 @@ +--- +- name: Install dependencies + include_tasks: "{{ ansible_os_family }}.yml" + +- name: Install nginx after dependency installation + become: yes + package: + name: nginx + state: present + register: package_res + retries: 5 + delay: 60 + until: package_res is success + +- name: Configure main nginx conf file. 
+ become: yes + copy: + src: nginx.conf + dest: /etc/nginx/nginx.conf + owner: root + group: root + mode: '0755' + +- name: Configure the artifactory nginx conf + become: yes + template: + src: artifactory.conf.j2 + dest: /etc/nginx/conf.d/artifactory.conf + owner: root + group: root + mode: '0755' + notify: restart nginx + +- name: Restart nginx + meta: flush_handlers \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/templates/artifactory.conf.j2 b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/templates/artifactory.conf.j2 similarity index 95% rename from Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/templates/artifactory.conf.j2 rename to Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/templates/artifactory.conf.j2 index 58280d9..a3f6eb1 100644 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/templates/artifactory.conf.j2 +++ b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/templates/artifactory.conf.j2 @@ -1,6 +1,6 @@ ########################################################### ## this configuration was generated by JFrog Artifactory ## - ########################################################### +########################################################### ## add HA entries when ha is configure upstream artifactory { diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/vars/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/vars/main.yml similarity index 100% rename from Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx/vars/main.yml rename to Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx/vars/main.yml diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/README.md b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx_ssl/README.md similarity index 72% rename from Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/README.md rename to Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx_ssl/README.md index 9a32719..cb43b09 100644 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/README.md +++ b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx_ssl/README.md @@ -5,12 +5,3 @@ The artifactory_nginx_ssl role installs and configures nginx for SSL. * _server_name_: This is the server name. eg. "artifactory.54.175.51.178.xip.io" * _certificate_: This is the SSL cert. * _certificate_key_: This is the SSL private key. - -## Example Playbook -``` ---- -- hosts: primary - roles: - - artifactory - - artifactory_nginx_ssl -``` diff --git a/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx_ssl/defaults/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx_ssl/defaults/main.yml new file mode 100644 index 0000000..8dea698 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx_ssl/defaults/main.yml @@ -0,0 +1,7 @@ +--- +# defaults file for artifactory_nginx + +## For production deployments,You SHOULD change it. 
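+# A minimal sketch (placeholder values only, not a definitive layout): alongside server_name, this role consumes the certificate / certificate_key vars described in its README, which could be kept in a vaulted vars file, e.g. +# server_name: artifactory.example.com +# certificate: "{{ lookup('file', 'files/cert.pem') }}" +# certificate_key: "{{ lookup('file', 'files/cert.key') }}"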
+# server_name: test.artifactory.com + +nginx_daemon: nginx diff --git a/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx_ssl/handlers/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx_ssl/handlers/main.yml new file mode 100644 index 0000000..ac1192c --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx_ssl/handlers/main.yml @@ -0,0 +1,8 @@ +--- +# handlers file for artifactory_nginx_ssl +- name: restart nginx + become: yes + systemd: + name: "{{ nginx_daemon }}" + state: restarted + enabled: yes diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/meta/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx_ssl/meta/main.yml similarity index 84% rename from Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/meta/main.yml rename to Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx_ssl/meta/main.yml index 5715d56..64dff56 100644 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/meta/main.yml +++ b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx_ssl/meta/main.yml @@ -1,5 +1,5 @@ galaxy_info: - author: "Jeff Fry " + author: "JFrog Maintainers Team " description: "The artifactory_nginx_ssl role installs and configures nginx for SSL." company: JFrog diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/tasks/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx_ssl/tasks/main.yml similarity index 66% rename from Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/tasks/main.yml rename to Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx_ssl/tasks/main.yml index ea18fe2..447699d 100644 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/tasks/main.yml +++ b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx_ssl/tasks/main.yml @@ -1,41 +1,40 @@ --- # tasks file for artifactory_nginx -- name: configure the artifactory nginx conf +- name: Configure the artifactory nginx conf + become: yes template: src: artifactory.conf.j2 dest: /etc/nginx/conf.d/artifactory.conf owner: root group: root mode: '0755' - become: yes + notify: restart nginx -- name: ensure nginx dir exists +- name: Ensure nginx dir exists + become: yes file: path: "/var/opt/jfrog/nginx/ssl" state: directory - become: yes -- name: configure certificate +- name: Configure certificate + become: yes template: src: certificate.pem.j2 dest: "/var/opt/jfrog/nginx/ssl/cert.pem" - become: yes + notify: restart nginx -- name: ensure pki exists +- name: Ensure pki exists + become: yes file: path: "/etc/pki/tls" state: directory - become: yes -- name: configure key +- name: Configure key + become: yes template: src: certificate.key.j2 dest: "/etc/pki/tls/cert.key" - become: yes + notify: restart nginx -- name: restart nginx - service: - name: nginx - state: restarted - enabled: yes - become: yes \ No newline at end of file +- name: Restart nginx + meta: flush_handlers diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/templates/artifactory.conf.j2 b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx_ssl/templates/artifactory.conf.j2 similarity index 96% rename from Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/templates/artifactory.conf.j2 rename to 
Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx_ssl/templates/artifactory.conf.j2 index 315a601..20df8db 100644 --- a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/templates/artifactory.conf.j2 +++ b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx_ssl/templates/artifactory.conf.j2 @@ -1,6 +1,6 @@ ########################################################### ## this configuration was generated by JFrog Artifactory ## - ########################################################### +########################################################### ## add HA entries when ha is configure upstream artifactory { diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/templates/certificate.key.j2 b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx_ssl/templates/certificate.key.j2 similarity index 100% rename from Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/templates/certificate.key.j2 rename to Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx_ssl/templates/certificate.key.j2 diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/templates/certificate.pem.j2 b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx_ssl/templates/certificate.pem.j2 similarity index 100% rename from Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/templates/certificate.pem.j2 rename to Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx_ssl/templates/certificate.pem.j2 diff --git a/Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/vars/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx_ssl/vars/main.yml similarity index 100% rename from Ansible/ansible_collections/jfrog/installers/roles/artifactory_nginx_ssl/vars/main.yml rename to Ansible/ansible_collections/jfrog/platform/roles/artifactory_nginx_ssl/vars/main.yml diff --git a/Ansible/ansible_collections/jfrog/platform/roles/distribution/README.md b/Ansible/ansible_collections/jfrog/platform/roles/distribution/README.md new file mode 100644 index 0000000..d805f00 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/distribution/README.md @@ -0,0 +1,26 @@ +# Distribution +The Distribution role installs the Distribution software onto the host. An Artifactory server and a Postgres database are required. + +### Role Variables +* _distribution_upgrade_only_: Perform a software upgrade only. Default is false. + +Additional variables can be found in [defaults/main.yml](./defaults/main.yml). +## Example Playbook +``` +--- +- hosts: distribution_servers + roles: + - distribution +``` + +## Upgrades +The Distribution role supports software upgrades. To use a role to perform a software upgrade only, use the _distribution_upgrade_only_ variable and specify the version. See the following example.
+ +``` +- hosts: distribution_servers + vars: + distribution_version: "{{ lookup('env', 'distribution_version_upgrade') }}" + distribution_upgrade_only: true + roles: + - distribution +``` \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/roles/distribution/defaults/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/distribution/defaults/main.yml new file mode 100644 index 0000000..31e2c0a --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/distribution/defaults/main.yml @@ -0,0 +1,43 @@ +--- +# defaults file for distribution +# indicates where this collection was downloaded from (galaxy, automation_hub, standalone) +ansible_marketplace: standalone + +# whether to enable HA +distribution_ha_enabled: false + +distribution_ha_node_type: master + +# The location where distribution should install. +jfrog_home_directory: /opt/jfrog + +# The remote distribution download file +distribution_tar: https://releases.jfrog.io/artifactory/jfrog-distribution/distribution-linux/{{ distribution_version }}/jfrog-distribution-{{ distribution_version }}-linux.tar.gz + +# The distribution install directory +distribution_untar_home: "{{ jfrog_home_directory }}/jfrog-distribution-{{ distribution_version }}-linux" +distribution_home: "{{ jfrog_home_directory }}/distribution" + +distribution_install_script_path: "{{ distribution_home }}/app/bin" +distribution_thirdparty_path: "{{ distribution_home }}/app/third-party" +distribution_archive_service_cmd: "{{ distribution_install_script_path }}/installService.sh" + +# distribution users and groups +distribution_user: distribution +distribution_group: distribution + +distribution_uid: 1040 +distribution_gid: 1040 + +distribution_daemon: distribution + +flow_type: archive + +# Redis details +distribution_redis_url: "redis://localhost:6379" +distribution_redis_password: password + +# if this is an upgrade +distribution_upgrade_only: false + +distribution_system_yaml_template: system.yaml.j2 diff --git a/Ansible/ansible_collections/jfrog/platform/roles/distribution/handlers/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/distribution/handlers/main.yml new file mode 100644 index 0000000..702c6ae --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/distribution/handlers/main.yml @@ -0,0 +1,7 @@ +--- +# handlers file for distribution +- name: restart distribution + become: yes + systemd: + name: "{{ distribution_daemon }}" + state: restarted diff --git a/Ansible/ansible_collections/jfrog/platform/roles/distribution/meta/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/distribution/meta/main.yml new file mode 100644 index 0000000..b760917 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/distribution/meta/main.yml @@ -0,0 +1,16 @@ +galaxy_info: + author: "JFrog Maintainers Team " + description: "The Distribution role installs the Distribution software onto the host. An Artifactory server and a Postgres database are required."
+ company: JFrog + + issue_tracker_url: "https://github.com/jfrog/JFrog-Cloud-Installers/issues" + + license: license (Apache-2.0) + + min_ansible_version: 2.9 + + galaxy_tags: + - distribution + - jfrog + +dependencies: [] \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/roles/distribution/tasks/expect.yml b/Ansible/ansible_collections/jfrog/platform/roles/distribution/tasks/expect.yml new file mode 100644 index 0000000..06f61dc --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/distribution/tasks/expect.yml @@ -0,0 +1,44 @@ +- name: Prepare expect scenario script + set_fact: + expect_scenario: | + set timeout 300 + spawn {{ exp_executable_cmd }} + expect_before timeout { exit 1 } + set CYCLE_END 0 + set count 0 + + while { $CYCLE_END == 0 } { + expect { + {% for each_request in exp_scenarios %} + -nocase -re {{ '{' }}{{ each_request.expecting }}.*} { + send "{{ each_request.sending }}\n" + } + {% endfor %} + eof { + set CYCLE_END 1 + } + } + set count "[expr $count + 1]" + if { $count > 16} { + exit 128 + } + } + + expect eof + lassign [wait] pid spawnid os_error_flag value + + if {$os_error_flag == 0} { + puts "INSTALLER_EXIT_STATUS-$value" + } else { + puts "INSTALLER_EXIT_STATUS-$value" + } + +- name: Interact with installer via expect + become: yes + ignore_errors: yes + shell: | + {{ expect_scenario }} + args: + executable: /usr/bin/expect + chdir: "{{ exp_dir }}" + register: exp_result diff --git a/Ansible/ansible_collections/jfrog/platform/roles/distribution/tasks/install.yml b/Ansible/ansible_collections/jfrog/platform/roles/distribution/tasks/install.yml new file mode 100644 index 0000000..7e6124b --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/distribution/tasks/install.yml @@ -0,0 +1,155 @@ +--- +- debug: + msg: "Performing installation of Distribution version - {{ distribution_version }}" + +- name: Install expect dependency + yum: + name: expect + state: present + become: yes + when: ansible_os_family == 'RedHat' + +- name: Install expect dependency + apt: + name: expect + state: present + update_cache: yes + become: yes + when: ansible_os_family == 'Debian' + +- name: Ensure group distribution exists + become: yes + group: + name: "{{ distribution_group }}" + gid: "{{ distribution_gid }}" + state: present + +- name: Ensure user distribution exists + become: yes + user: + uid: "{{ distribution_uid }}" + name: "{{ distribution_user }}" + group: "{{ distribution_group }}" + create_home: yes + home: "{{ distribution_home }}" + shell: /bin/bash + state: present + +- name: Download distribution + become: yes + unarchive: + src: "{{ distribution_tar }}" + dest: "{{ jfrog_home_directory }}" + remote_src: yes + owner: "{{ distribution_user }}" + group: "{{ distribution_group }}" + creates: "{{ distribution_untar_home }}" + register: downloaddistribution + until: downloaddistribution is succeeded + retries: 3 + +- name: Check if app directory exists + become: yes + stat: + path: "{{ distribution_home }}/app" + register: app_dir_check + +- name: Copy untar directory to distribution home + become: yes + command: "cp -r {{ distribution_untar_home }}/.
{{ distribution_home }}" + when: not app_dir_check.stat.exists + +- name: Create required directories + become: yes + file: + path: "{{ item }}" + state: directory + recurse: yes + owner: "{{ distribution_user }}" + group: "{{ distribution_group }}" + loop: + - "{{ distribution_home }}/var/etc" + - "{{ distribution_home }}/var/etc/security/" + - "{{ distribution_home }}/var/etc/info/" + - "{{ distribution_home }}/var/etc/redis/" + +- name: Configure master key + become: yes + copy: + dest: "{{ distribution_home }}/var/etc/security/master.key" + content: | + {{ master_key }} + owner: "{{ distribution_user }}" + group: "{{ distribution_group }}" + mode: 0640 + +- name: Check if install.sh wrapper script exist + become: yes + stat: + path: "{{ distribution_install_script_path }}/install.sh" + register: install_wrapper_script + +- name: Include interactive installer scripts + include_vars: script/archive.yml + +- name: Install Distribution + include_tasks: expect.yml + vars: + exp_executable_cmd: "./install.sh -u {{ distribution_user }} -g {{ distribution_group }}" + exp_dir: "{{ distribution_install_script_path }}" + exp_scenarios: "{{ distribution_installer_scenario['main'] }}" + args: + apply: + environment: + YQ_PATH: "{{ distribution_thirdparty_path }}/yq" + when: install_wrapper_script.stat.exists + +- name: Configure redis config + become: yes + template: + src: "redis.conf.j2" + dest: "{{ distribution_home }}/var/etc/redis/redis.conf" + notify: restart distribution + +- name: Configure systemyaml + become: yes + template: + src: "{{ distribution_system_yaml_template }}" + dest: "{{ distribution_home }}/var/etc/system.yaml" + notify: restart distribution + +- name: Configure installer info + become: yes + template: + src: installer-info.json.j2 + dest: "{{ distribution_home }}/var/etc/info/installer-info.json" + notify: restart distribution + +- name: Update distribution permissions + become: yes + file: + path: "{{ distribution_home }}" + state: directory + recurse: yes + owner: "{{ distribution_user }}" + group: "{{ distribution_group }}" + mode: '0755' + +- name: Install Distribution as a service + become: yes + shell: | + {{ distribution_archive_service_cmd }} + args: + chdir: "{{ distribution_install_script_path }}" + register: check_service_status_result + ignore_errors: yes + +- name: Restart distribution + meta: flush_handlers + +- name : Wait for distribution to be fully deployed + uri: url=http://127.0.0.1:8082/router/api/v1/system/health timeout=130 + register: result + until: result.status == 200 + retries: 25 + delay: 5 diff --git a/Ansible/ansible_collections/jfrog/platform/roles/distribution/tasks/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/distribution/tasks/main.yml new file mode 100644 index 0000000..841c88b --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/distribution/tasks/main.yml @@ -0,0 +1,6 @@ +- name: perform installation + include_tasks: "install.yml" + when: not distribution_upgrade_only +- name: perform upgrade + include_tasks: "upgrade.yml" + when: distribution_upgrade_only \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/roles/distribution/tasks/upgrade.yml b/Ansible/ansible_collections/jfrog/platform/roles/distribution/tasks/upgrade.yml new file mode 100644 index 0000000..4e83e9e --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/distribution/tasks/upgrade.yml @@ -0,0 +1,111 @@ +--- +- debug: + msg: "Performing upgrade of Distribution version to {{ distribution_version 
}} " + +- name: Stop distribution + become: yes + systemd: + name: "{{ distribution_daemon }}" + state: stopped + +- name: Download distribution for upgrade + become: yes + unarchive: + src: "{{ distribution_tar }}" + dest: "{{ jfrog_home_directory }}" + remote_src: yes + owner: "{{ distribution_user }}" + group: "{{ distribution_group }}" + creates: "{{ distribution_untar_home }}" + register: downloaddistribution + until: downloaddistribution is succeeded + retries: 3 + +- name: Delete distribution app + become: yes + file: + path: "{{ distribution_home }}/app" + state: absent + +- name: Copy new app to distribution app + become: yes + command: "cp -r {{ distribution_untar_home }}/app/. {{ distribution_home }}/app" + +- name: Check if install.sh wrapper script exist + become: yes + stat: + path: "{{ distribution_install_script_path }}/install.sh" + register: install_wrapper_script + +- name: Include interactive installer scripts + include_vars: script/archive.yml + +- name: Install Distribution + include_tasks: expect.yml + vars: + exp_executable_cmd: "./install.sh -u {{ distribution_user }} -g {{ distribution_group }}" + exp_dir: "{{ distribution_install_script_path }}" + exp_scenarios: "{{ distribution_installer_scenario['main'] }}" + args: + apply: + environment: + YQ_PATH: "{{ distribution_thirdparty_path }}/yq" + when: install_wrapper_script.stat.exists + +- name: Ensure {{ distribution_home }}/var/etc/redis exists + become: yes + file: + path: "{{ distribution_home }}/var/etc/redis/" + state: directory + owner: "{{ distribution_user }}" + group: "{{ distribution_group }}" + +- name: Configure redis config + become: yes + template: + src: "redis.conf.j2" + dest: "{{ distribution_home }}/var/etc/redis/redis.conf" + notify: restart distribution + +- name: Configure installer info + become: yes + template: + src: installer-info.json.j2 + dest: "{{ distribution_home }}/var/etc/info/installer-info.json" + notify: restart distribution + +- name: Configure systemyaml + become: yes + template: + src: "{{ distribution_system_yaml_template }}" + dest: "{{ distribution_home }}/var/etc/system.yaml" + notify: restart distribution + +- name: Update Distribution base dir owner and group + become: yes + file: + path: "{{ distribution_home }}" + state: directory + recurse: yes + owner: "{{ distribution_user }}" + group: "{{ distribution_group }}" + mode: '0755' + +- name: Install Distribution as a service + become: yes + shell: | + {{ distribution_archive_service_cmd }} + args: + chdir: "{{ distribution_install_script_path }}" + register: check_service_status_result + ignore_errors: yes + +- name: Restart distribution + meta: flush_handlers + +- name : Wait for distribution to be fully deployed + uri: url=http://127.0.0.1:8082/router/api/v1/system/health timeout=130 + register: result + until: result.status == 200 + retries: 25 + delay: 5 diff --git a/Ansible/ansible_collections/jfrog/platform/roles/distribution/templates/installer-info.json.j2 b/Ansible/ansible_collections/jfrog/platform/roles/distribution/templates/installer-info.json.j2 new file mode 100644 index 0000000..906e994 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/distribution/templates/installer-info.json.j2 @@ -0,0 +1,9 @@ +{{ ansible_managed | comment }} +{ + "productId": "Ansible_Distribution/{{ platform_collection_version }}-{{ distribution_version }}", + "features": [ + { + "featureId": "Channel/{{ ansible_marketplace }}" + } + ] +} \ No newline at end of file diff --git 
a/Ansible/ansible_collections/jfrog/platform/roles/distribution/templates/redis.conf.j2 b/Ansible/ansible_collections/jfrog/platform/roles/distribution/templates/redis.conf.j2 new file mode 100644 index 0000000..a1d083f --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/distribution/templates/redis.conf.j2 @@ -0,0 +1,15 @@ +{{ ansible_managed | comment }} +# Redis configuration file + +# data directory for redis +dir {{ distribution_home }}/var/data/redis + +# log directory for redis +logfile {{ distribution_home }}/var/log/redis/redis.log + +# pid file location for redis +pidfile {{ distribution_home }}/app/run/redis.pid + +# password for redis +# if changed, the same should be set as value for shared.redis.password in JF_PRODUCT_HOME/var/etc/system.yaml +requirepass {{ distribution_redis_password }} diff --git a/Ansible/ansible_collections/jfrog/platform/roles/distribution/templates/system.yaml.j2 b/Ansible/ansible_collections/jfrog/platform/roles/distribution/templates/system.yaml.j2 new file mode 100644 index 0000000..79fa4a7 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/distribution/templates/system.yaml.j2 @@ -0,0 +1,20 @@ +configVersion: 1 +shared: + jfrogUrl: {{ jfrog_url }} + node: + ip: {{ ansible_host }} + id: {{ ansible_date_time.iso8601_micro | to_uuid }} + database: + type: "{{ distribution_db_type }}" + driver: "{{ distribution_db_driver }}" + url: "{{ distribution_db_url }}" + username: "{{ distribution_db_user }}" + password: "{{ distribution_db_password }}" + redis: + connectionString: "{{ distribution_redis_url }}" + password: "{{ distribution_redis_password }}" + security: + joinKey: {{ join_key }} +router: + entrypoints: + internalPort: 8046 \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/roles/distribution/vars/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/distribution/vars/main.yml new file mode 100644 index 0000000..cd21505 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/distribution/vars/main.yml @@ -0,0 +1,2 @@ +--- + diff --git a/Ansible/ansible_collections/jfrog/platform/roles/distribution/vars/script/archive.yml b/Ansible/ansible_collections/jfrog/platform/roles/distribution/vars/script/archive.yml new file mode 100644 index 0000000..0f3c195 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/distribution/vars/script/archive.yml @@ -0,0 +1,42 @@ +distribution_installer_scenario: + main: + - { + "expecting": "(data|installation) directory \\(", + "sending": "{{ distribution_home }}" + } + - { + "expecting": "join key.*:", + "sending": "{{ join_key }}" + } + - { + "expecting": "jfrog url:", + "sending": "{{ jfrog_url }}" + } + - { + "expecting": "do you want to continue", + "sending": "y" + } + - { + "expecting": "please specify the ip address of this machine", + "sending": "{{ ansible_host }}" + } + - { + "expecting": "are you adding an additional node", + "sending": "{% if distribution_ha_node_type is defined and distribution_ha_node_type == 'master' %}n{% else %}y{% endif %}" + } + - { + "expecting": "do you want to install postgresql", + "sending": "n" + } + - { + "expecting": "postgresql url.*example", + "sending": "{{ distribution_db_url }}" + } + - { + "expecting": "(postgresql|database)?\\s?username.*", + "sending": "{{ distribution_db_user }}" + } + - { + "expecting":
"(confirm\\s?)?(postgresql|database)?\\s?password.*:", + "sending": "{{ distribution_db_password }}" + } diff --git a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/README.md b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/README.md new file mode 100644 index 0000000..2f0e3ce --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/README.md @@ -0,0 +1,26 @@ +# MissionControl +The missioncontrol role will install missioncontrol software onto the host. An Artifactory server and Postgress database is required. + +### Role Variables +* _mc_upgrade_only_: Perform an software upgrade only. Default is false. + +Additional variables can be found in [defaults/main.yml](./defaults/main.yml). +## Example Playbook +``` +--- +- hosts: missioncontrol_servers + roles: + - missioncontrol +``` + +## Upgrades +The missioncontrol role supports software upgrades. To use a role to perform a software upgrade only, use the _xray_upgrade_only_ variables and specify the version. See the following example. + +``` +- hosts: missioncontrol_servers + vars: + missioncontrol_version: "{{ lookup('env', 'missioncontrol_version_upgrade') }}" + mc_upgrade_only: true + roles: + - missioncontrol +``` \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/defaults/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/defaults/main.yml new file mode 100644 index 0000000..f1bd22b --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/defaults/main.yml @@ -0,0 +1,58 @@ +--- +# defaults file for mc +# indicates were this collection was downlaoded from (galaxy, automation_hub, standalone) +ansible_marketplace: standalone + +# whether to enable HA +mc_ha_enabled: false + +mc_ha_node_type : master + +# The location where mc should install. 
+jfrog_home_directory: /opt/jfrog + +# The remote mc download file +mc_tar: https://releases.jfrog.io/artifactory/jfrog-mc/linux/{{ missionControl_version }}/jfrog-mc-{{ missionControl_version }}-linux.tar.gz + + +#The mc install directory +mc_untar_home: "{{ jfrog_home_directory }}/jfrog-mc-{{ missionControl_version }}-linux" +mc_home: "{{ jfrog_home_directory }}/mc" + +mc_install_script_path: "{{ mc_home }}/app/bin" +mc_thirdparty_path: "{{ mc_home }}/app/third-party" +mc_archive_service_cmd: "{{ mc_install_script_path }}/installService.sh" + +#mc users and groups +mc_user: jfmc +mc_group: jfmc + +mc_uid: 1050 +mc_gid: 1050 + +mc_daemon: mc + +# MissionContol ElasticSearch Details +es_uid: 1060 +es_gid: 1060 + +mc_es_conf_base: "/etc/elasticsearch" +mc_es_user: admin +mc_es_password: admin +mc_es_url: "http://localhost:8082" +mc_es_base_url: "http://localhost:8082/elasticsearch" +mc_es_transport_port: 9300 + +mc_es_home: "/usr/share/elasticsearch" +mc_es_data_dir: "/var/lib/elasticsearch" +mc_es_log_dir: "/var/log/elasticsearch" +mc_es_java_home: "/usr/share/elasticsearch/jdk" +mc_es_script_path: "/usr/share/elasticsearch/bin" +mc_es_searchgaurd_home: "/usr/share/elasticsearch/plugins/search-guard-7" + +flow_type: archive + +# if this is an upgrade +mc_upgrade_only: false + +mc_system_yaml_template: system.yaml.j2 diff --git a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/files/searchguard/localhost.key b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/files/searchguard/localhost.key new file mode 100644 index 0000000..229172c --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/files/searchguard/localhost.key @@ -0,0 +1,28 @@ +-----BEGIN PRIVATE KEY----- +MIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQDY1nDD1cW5ykZV +rTXrAMJeLuZknW9tg+4s8R+XYrzRMTNr9tAXEYNEa+T92HtqrKaVZtdGiQ6NmS95 +EYezEgVmyGQEuuVlY8ChcX8XgpBsJPBV4+XIRju+RSyEW+ZNkT3EWTRKab+KSgN2 +aZ2OT16UqfJd3JjATZw//xXHRWhCQhchX3nNyzkIgENPtdtSweSLG4NjOHY08U7g +Zee21MCqa/58NVECJXlqK/Tfw/3SPgCmSHLLCyybWfClLmXXIjBuSTtSOLDPj4pw +VrZeR0aePs7ZNJnX/tUICNSZeNzs7+n9QUoAiKYSNKSdDw270Lbo5GQdWuM7nkrc +2txeH8wvAgMBAAECggEAGzbuzZAVp40nlAvlPyrH5PeQmwLXarCq7Uu7Yfir0hA8 +Gp9429cALqThXKrAR/yF+9eodTCGebxxejR6X5MyHQWm5/Znts307fjyBoqwgveF +N9fJOIBNce1PT7K+Y5szBrhbbmt59Wqh/J6iKQD1J0YdJoKlTp1vBZPdBoxDhZfN +TgayY4e71ox7Vew+QrxDXzMA3J+EbbBXFL2yOmpNI/FPpEtbCE9arjSa7oZXJAvd +Aenc6GYctkdbtjpX7zHXz5kHzaAEdmorR+q3w6k8cDHBvc+UoRYgLz3fBaVhhQca +rP4PYp04ztIn3qcOpVoisUkpsQcev2cJrWeFW0WgAQKBgQD7ZFsGH8cE84zFzOKk +ee53zjlmIvXqjQWzSkmxy9UmDnYxEOZbn6epK2I5dtCbU9ZZ3f4KM8TTAM5GCOB+ +j4cN/rqM7MdhkgGL/Dgw+yxGVlwkSsQMil16vqdCIRhEhqjChc7KaixuaBNtIBV0 ++9ZRfoS5fEjrctX4/lULwS6EAQKBgQDcz/C6PV3mXk8u7B48kGAJaKbafh8S3BnF +V0zA7qI/aQHuxmLGIQ7hNfihdZwFgYG4h5bXvBKGsxwu0JGvYDNL44R9zXuztsVX +PEixV572Bx87+mrVEt3bwj3lhbohzorjSF2nnJuFA+FZ0r4sQwudyZ2c8yCqRVhI +mfj36FWQLwKBgHNw1zfNuee1K6zddCpRb8eGZOdZIJJv5fE6KPNDhgLu2ymW+CGV +BDn0GSwIOq1JZ4JnJbRrp3O5x/9zLhwQLtWnZuU2CiztDlbJIMilXuSB3dgwmSyl +EV4/VLFSX0GAkNia96YN8Y9Vra4L8K6Cwx0zOyGuSBIO7uFjcYxvTrwBAoGAWeYn +AgweAL6Ayn/DR7EYCHydAfO7PvhxXZDPZPVDBUIBUW9fo36uCi7pDQNPBEbXw4Mg +fLDLch/V55Fu3tHx0IHO3VEdfet5qKyYg+tCgrQfmVG40QsfXGtWu+2X/E+U6Df8 +OVNfVeZghytv1aFuR01gaBfsQqZ87QITBQuIWm0CgYAKdzhETd+jBBLYyOCaS8mh +zQr/ljIkrZIwPUlBkj6TAsmTJTbh7O6lf50CQMEHyE0MNFOHrvkKn89BObXcmwtV +92parLTR7RAeaPMRxCZs4Xd/oABYVGFjMa7TVNA2S6HReDqqTpJrCCkyVuYkr1f2 +OflnwX2RlaWl45n0qkwkTw== +-----END PRIVATE KEY----- \ No newline at end of file diff --git 
a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/files/searchguard/localhost.pem b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/files/searchguard/localhost.pem new file mode 100644 index 0000000..d1ee43f --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/files/searchguard/localhost.pem @@ -0,0 +1,51 @@ +-----BEGIN CERTIFICATE----- +MIIEcjCCA1qgAwIBAgIGAXY81RkkMA0GCSqGSIb3DQEBCwUAMG4xEzARBgoJkiaJ +k/IsZAEZFgNjb20xFTATBgoJkiaJk/IsZAEZFgVqZnJvZzEUMBIGA1UECgwLamZy +b2csIEluYy4xCzAJBgNVBAsMAkNBMR0wGwYDVQQDDBRzaWduaW5nLmNhLmpmcm9n +LmNvbTAeFw0yMDEyMDcxMDUyNDhaFw0zMDEyMDUxMDUyNDhaMGwxEzARBgoJkiaJ +k/IsZAEZFgNjb20xGTAXBgoJkiaJk/IsZAEZFglsb2NhbGhvc3QxGDAWBgNVBAoM +D2xvY2FsaG9zdCwgSW5jLjEMMAoGA1UECwwDT3BzMRIwEAYDVQQDDAlsb2NhbGhv +c3QwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDY1nDD1cW5ykZVrTXr +AMJeLuZknW9tg+4s8R+XYrzRMTNr9tAXEYNEa+T92HtqrKaVZtdGiQ6NmS95EYez +EgVmyGQEuuVlY8ChcX8XgpBsJPBV4+XIRju+RSyEW+ZNkT3EWTRKab+KSgN2aZ2O +T16UqfJd3JjATZw//xXHRWhCQhchX3nNyzkIgENPtdtSweSLG4NjOHY08U7gZee2 +1MCqa/58NVECJXlqK/Tfw/3SPgCmSHLLCyybWfClLmXXIjBuSTtSOLDPj4pwVrZe +R0aePs7ZNJnX/tUICNSZeNzs7+n9QUoAiKYSNKSdDw270Lbo5GQdWuM7nkrc2txe +H8wvAgMBAAGjggEWMIIBEjCBmgYDVR0jBIGSMIGPgBSh7peJvc4Im3WkR6/FaUD/ +aYDa8qF0pHIwcDETMBEGCgmSJomT8ixkARkWA2NvbTEaMBgGCgmSJomT8ixkARkW +Cmpmcm9namZyb2cxFDASBgNVBAoMC0pGcm9nLCBJbmMuMQswCQYDVQQLDAJDQTEa +MBgGA1UEAwwRcm9vdC5jYS5qZnJvZy5jb22CAQIwHQYDVR0OBBYEFIuWN8D/hFhl +w0bdSyG+PmymjpVUMAwGA1UdEwEB/wQCMAAwDgYDVR0PAQH/BAQDAgXgMCAGA1Ud +JQEB/wQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAUBgNVHREEDTALgglsb2NhbGhv +c3QwDQYJKoZIhvcNAQELBQADggEBAJQJljyNH/bpvmiYO0+d8El+BdaU7FI2Q2Sq +1xBz/qBQSVmUB0iIeblTdQ58nYW6A/pvh8EnTWE7tRPXw3WQR4it8ldGSDQe2zHt +9U0hcC7DSzYGxlHLm0UI/LNwzdRy0kY8LArE/zGDSQ+6hp2Op21IHtzGfJnILG5G +OZdDWOB/e4cQw2/AcnsrapJU4MJCx28l0N9aSx4wr7SNosHuYOO8CimAdsqQukVt +rcrJZyHNvG5eQUVuQnZRywXDX6tLj8HQHfYLRaMqD57GMU0dg/kvYTYrYR/krbcG +Qf1D/9GCsn081fYblSfSSRRxrbhdYcoI/6xNHIC2y7bO8ZJD9zw= +-----END CERTIFICATE----- +-----BEGIN CERTIFICATE----- +MIIEPTCCAyWgAwIBAgIBAjANBgkqhkiG9w0BAQsFADBwMRMwEQYKCZImiZPyLGQB +GRYDY29tMRowGAYKCZImiZPyLGQBGRYKamZyb2dqZnJvZzEUMBIGA1UECgwLSkZy +b2csIEluYy4xCzAJBgNVBAsMAkNBMRowGAYDVQQDDBFyb290LmNhLmpmcm9nLmNv +bTAeFw0yMDEyMDcxMDUyNDhaFw0zMDEyMDUxMDUyNDhaMG4xEzARBgoJkiaJk/Is +ZAEZFgNjb20xFTATBgoJkiaJk/IsZAEZFgVqZnJvZzEUMBIGA1UECgwLamZyb2cs +IEluYy4xCzAJBgNVBAsMAkNBMR0wGwYDVQQDDBRzaWduaW5nLmNhLmpmcm9nLmNv +bTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALCe74VmqSryFPESO/oq +bgspiOSwGheG/AbUf/2XXPLZNbZJ/hhuI6T+iSW5FYy3jETwwODDlF8GBN6R33+U +gNCjXIMBDUOWkETe1fD2zj1HMTC6angykKJy2Xkw+sWniELbYfTu+SLHsBMPQnVI +jFwDLcbSMbs7ieU/IuQTEnEZxPiKcokOaF7vPntfPwdvRoGwMR0VuX7h+20Af1Il +3ntOuoasoV66K6KuiBRkSBcsV2ercCRQlpXCvIsTJVWASpSTNrpKy8zejjePw/xs +ieMGSo6WIxnIJnOLTJWnrw8sZt0tiNrLbB8npSvP67uUMDGhrZ3Tnro9JtujquOE +zMUCAwEAAaOB4zCB4DASBgNVHRMBAf8ECDAGAQH/AgEAMIGaBgNVHSMEgZIwgY+A +FBX3TQRxJRItQ/hi81MA3eZggFs7oXSkcjBwMRMwEQYKCZImiZPyLGQBGRYDY29t +MRowGAYKCZImiZPyLGQBGRYKamZyb2dqZnJvZzEUMBIGA1UECgwLSkZyb2csIElu +Yy4xCzAJBgNVBAsMAkNBMRowGAYDVQQDDBFyb290LmNhLmpmcm9nLmNvbYIBATAd +BgNVHQ4EFgQUoe6Xib3OCJt1pEevxWlA/2mA2vIwDgYDVR0PAQH/BAQDAgGGMA0G +CSqGSIb3DQEBCwUAA4IBAQAzkcvT1tTjnjguRH4jGPxP1fidiM0DWiWZQlRT9Evt +BkltRwkqOZIdrBLy/KJbOxRSCRaKpxyIYd5bWrCDCWvXArBFDY9j3jGGk8kqXb0/ +VajEKDjHXzJM7HXAzyJO2hKVK4/OoPlzhKqR1ZbZF1F8Omzo7+oNwPqf5Y5hnn2M +qrUWxE216mWE8v7gvbfu39w9XKTFH1/RPgAAJet2dunyLbz3W5NgyBbCWGj/qJCz +TUDD9I8az/XX73HLpkXbcEY5/qrPV6EQWzf+ec4EcgrEi0f8gTKzl7OQaqYDxObk +yixmONVlwYD2FpWqJYAfg04u/CRQMXPPCdUQh/eKrHUg +-----END CERTIFICATE----- 
\ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/files/searchguard/root-ca.pem b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/files/searchguard/root-ca.pem new file mode 100644 index 0000000..3672009 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/files/searchguard/root-ca.pem @@ -0,0 +1,23 @@ +-----BEGIN CERTIFICATE----- +MIIDvjCCAqagAwIBAgIBATANBgkqhkiG9w0BAQsFADBwMRMwEQYKCZImiZPyLGQB +GRYDY29tMRowGAYKCZImiZPyLGQBGRYKamZyb2dqZnJvZzEUMBIGA1UECgwLSkZy +b2csIEluYy4xCzAJBgNVBAsMAkNBMRowGAYDVQQDDBFyb290LmNhLmpmcm9nLmNv +bTAeFw0yMDEyMDcxMDUyNDdaFw0zMDEyMDUxMDUyNDdaMHAxEzARBgoJkiaJk/Is +ZAEZFgNjb20xGjAYBgoJkiaJk/IsZAEZFgpqZnJvZ2pmcm9nMRQwEgYDVQQKDAtK +RnJvZywgSW5jLjELMAkGA1UECwwCQ0ExGjAYBgNVBAMMEXJvb3QuY2EuamZyb2cu +Y29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAxyTSYCbGefbdAHgW +zxXhCh7gvOUzyThaC6bcvY7yMqVu3YPxMAV1LEz+J0VMeGvu5HzONyGq89TaIKtr +AyZKxM957Q/TK0NPi0HUIT1wZKPuH89DeH79gfBjyv8XMUhFzKxAaosEa4rhkAMe +B4ukk9twfGotKU1y4j6m1V1gckeDZDRIW4tNzQbEBsL+ZcxDnCeSAAHW3Djb5yzQ +Yj3LPIRN0yu0fL8oN4yVn5tysAfXTum7HIuyKp3gfxhQgSXGVIDHd7Z1HcLrUe2o +2Z7dlsrFCUgHPccOxyFzxGI8bCPFYU75QqbxP699L1chma0It/2D0YxcrXhRkzzg +wzrBFwIDAQABo2MwYTAPBgNVHRMBAf8EBTADAQH/MB8GA1UdIwQYMBaAFBX3TQRx +JRItQ/hi81MA3eZggFs7MB0GA1UdDgQWBBQV900EcSUSLUP4YvNTAN3mYIBbOzAO +BgNVHQ8BAf8EBAMCAYYwDQYJKoZIhvcNAQELBQADggEBAH5XYiOBvHdd3bRfyHeo +Y2i7+u59VU3HDdOm/FVI0JqkzFAp6DLk6Ow5w/2MXbasga03lJ9SpHvKVne+VOaH +Df7xEqCIZeQVofNyOfsl4NOu6NgPSlQx0FZ6lPToZDBGp7D6ftnJcUujGk0W9y7k +GwxojLnP1f/KyjYTCCK6sDXwSn3fZGF5WmnHlzZEyKlLQoLNoEZ1uTjg2CRsa/RU +QxobwNzHGbrLZw5pfeoiF7G27RGoUA/S6mfVFQJVDP5Y3/xJRii56tMaJPwPh0sN +QPLbNvNgeU1dET1msMBnZvzNUko2fmBc2+pU7PyrL9V2pgfHq981Db1ShkNYtMhD +bMw= +-----END CERTIFICATE----- \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/files/searchguard/sg_roles.yml b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/files/searchguard/sg_roles.yml new file mode 100644 index 0000000..a659b12 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/files/searchguard/sg_roles.yml @@ -0,0 +1,7 @@ +_sg_meta: + type: "roles" + config_version: 2 + +sg_anonymous: + cluster_permissions: + - cluster:monitor/health diff --git a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/files/searchguard/sg_roles_mapping.yml b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/files/searchguard/sg_roles_mapping.yml new file mode 100644 index 0000000..f7abca6 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/files/searchguard/sg_roles_mapping.yml @@ -0,0 +1,48 @@ +# In this file users, backendroles and hosts can be mapped to Search Guard roles. 
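+# For example (illustrative only): adding a backend role name such as "auditors" under the backend_roles list of SGS_READALL below would grant read-only access to users carrying that backend role.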
+# Permissions for Search Guard roles are configured in sg_roles.yml +_sg_meta: + type: "rolesmapping" + config_version: 2 + +## Demo roles mapping +SGS_ALL_ACCESS: + description: "Maps admin to SGS_ALL_ACCESS" + reserved: true + backend_roles: + - "admin" + +SGS_OWN_INDEX: + description: "Allow full access to an index named like the username" + reserved: false + users: + - "*" + +SGS_LOGSTASH: + reserved: false + backend_roles: + - "logstash" + +SGS_KIBANA_USER: + description: "Maps kibanauser to SGS_KIBANA_USER" + reserved: false + backend_roles: + - "kibanauser" + +SGS_READALL: + reserved: true + backend_roles: + - "readall" + +SGS_MANAGE_SNAPSHOTS: + reserved: true + backend_roles: + - "snapshotrestore" + +SGS_KIBANA_SERVER: + reserved: true + users: + - "kibanaserver" + +sg_anonymous: + backend_roles: + - sg_anonymous_backendrole diff --git a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/files/searchguard/sgadmin.key b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/files/searchguard/sgadmin.key new file mode 100644 index 0000000..61192d9 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/files/searchguard/sgadmin.key @@ -0,0 +1,28 @@ +-----BEGIN PRIVATE KEY----- +MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCa3GuNbI30EdRs +S2Dmq87i/4Y7QeOldogzmNYH3m7GMjPFJcJg11Yc2HsAbBYs86fW6gGvO+68bFmY +X5kYvPN+L8KRUCSvmvjHCGf7ULmxiG2Wh7RPzQaAdvqqkMGW1QDwwxA25tP9KfZv +nP/08CPmboP8rcCEhX6HCVh0Im+WT3BBxkikjhVaVru2cLPtKtgtBX7a3HY7XMfp +DRYhXZNf+ZxfWewLQhNNndHwjtuJooLHdtX4WEXUhsrXS7/I+M7BdL/fB0ptwfvg +x1WvC2JnvNnvgdMBoUevlHjugWBVGo4AhOpFqAmQ8MxXZUhPGinDxjFvwrHYwYm0 +w7tVAnTbAgMBAAECggEAAr7esZKzD5ilnWx7RkKMikAvFyKUkJXvnq6RXXFZoZKm +/5tPtABEOKbYekoU3SPgeWkLseK568YBbqXM9ySsLerpSIvVIq1T660pHsowP32/ +8MoRkmYOPRj6WgcX/UetEan7r66ktfT9AJpM6gDgzFm5Zgz0knvFawJ7w8Yzqmks +8JqjA1E433xEUtc00Qm4z7You1I5eyrz1zKxBPZATVM6ScbDq2WXqwgIGUbnAHG2 +6PADvOPP+8Kl0/JNC+SkE8J+KvfCYnJIDZaWTCjdd4cjkFAAHXi16BvF6PY3veel +/LT2nr1/YmcADCt4wuWGn+1HRF+mJgjqTVcfQSJrbQKBgQDJG45Hmku7fnNAn/A9 +FPHmo7CpymxXpg12yf7BuKr4irpJpa6WmXB6EsxCy91rffQTDEh8TnpJG6yj5vyJ +b0dEt3u8RtBfx49UhKG/pDYi9mnUuazH0u6BHu+w4fRi3Cju7sY4qM4aj8rnAlU0 +2DnXWEKIfhd+1cXDwyI8DyuvfwKBgQDFIV7ZgI1weZv7EnNiIKs65y4NWG4uG7jB +Z+Wx8xx9n5OKVxw21NPt2pZzzW3Y3+pRXypcjH13XPrZxfaUt1Y8ylC3/DHFgsid +iXyfjmit4TWiW9busC09Q8YwFZZbMWj/Wd1PRav3/zDICf3B1QRXEqqpYfUtAbXf +SaanZNGopQKBgQDFwO77weHOkN1MIvndVoc4QKYrj/1Rgtuif6afX7Pfiqr8WIuB +U4iiwXFSDZ3BYa1sPZvZgGIHGct9sFmL23y9OZ/W19t3E4kBlxpmlFcXsi8HGz2n +kOcu2Pjheo8R12P475rDhFqHC/Z9inG28RiPhR6HkVYRRqydf3hejpxqiQKBgEJw +ZM9ZjFIEKpYMOecwq4VGtTa6Pyg7H6HPqpK3JTsRtWBCy7ePM35O1bZh3kvh689R +C631i7PXGpSbK+gjgmUqqtnXnc67rXGrDN2Z2Z4A8VqvKVl490ZWuU0reWly1bh6 +SSSWjsceswo4k9XoPXY7TFmaMk/g67M913VDfYYhAoGAXp6HYCZga72N6RdB38TY +i08c/O/xksfkNVo0SuVqr99uQ5TN+d2+o+t5H9Fekl1y9jUSK6q6q6+Vp8zSiyzV +GwAWk9u8dBGoNiWs4cOtQAdyeLbGDIHbIv4jeRqqSl87H6R6wJY4+fWdfm9/KEG7 +N957kwur+XYzE0RfG5wgS3o= +-----END PRIVATE KEY----- \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/files/searchguard/sgadmin.pem b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/files/searchguard/sgadmin.pem new file mode 100644 index 0000000..9c672f6 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/files/searchguard/sgadmin.pem @@ -0,0 +1,50 @@ +-----BEGIN CERTIFICATE----- +MIIESjCCAzKgAwIBAgIGAXY81RknMA0GCSqGSIb3DQEBCwUAMG4xEzARBgoJkiaJ +k/IsZAEZFgNjb20xFTATBgoJkiaJk/IsZAEZFgVqZnJvZzEUMBIGA1UECgwLamZy 
+b2csIEluYy4xCzAJBgNVBAsMAkNBMR0wGwYDVQQDDBRzaWduaW5nLmNhLmpmcm9n +LmNvbTAeFw0yMDEyMDcxMDUyNDlaFw0zMDEyMDUxMDUyNDlaMGYxEzARBgoJkiaJ +k/IsZAEZFgNjb20xFzAVBgoJkiaJk/IsZAEZFgdzZ2FkbWluMRYwFAYDVQQKDA1z +Z2FkbWluLCBJbmMuMQwwCgYDVQQLDANPcHMxEDAOBgNVBAMMB3NnYWRtaW4wggEi +MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCa3GuNbI30EdRsS2Dmq87i/4Y7 +QeOldogzmNYH3m7GMjPFJcJg11Yc2HsAbBYs86fW6gGvO+68bFmYX5kYvPN+L8KR +UCSvmvjHCGf7ULmxiG2Wh7RPzQaAdvqqkMGW1QDwwxA25tP9KfZvnP/08CPmboP8 +rcCEhX6HCVh0Im+WT3BBxkikjhVaVru2cLPtKtgtBX7a3HY7XMfpDRYhXZNf+Zxf +WewLQhNNndHwjtuJooLHdtX4WEXUhsrXS7/I+M7BdL/fB0ptwfvgx1WvC2JnvNnv +gdMBoUevlHjugWBVGo4AhOpFqAmQ8MxXZUhPGinDxjFvwrHYwYm0w7tVAnTbAgMB +AAGjgfUwgfIwgZoGA1UdIwSBkjCBj4AUoe6Xib3OCJt1pEevxWlA/2mA2vKhdKRy +MHAxEzARBgoJkiaJk/IsZAEZFgNjb20xGjAYBgoJkiaJk/IsZAEZFgpqZnJvZ2pm +cm9nMRQwEgYDVQQKDAtKRnJvZywgSW5jLjELMAkGA1UECwwCQ0ExGjAYBgNVBAMM +EXJvb3QuY2EuamZyb2cuY29tggECMB0GA1UdDgQWBBSSIpvK2db0wJf7bw1mhYt8 +A0JUQTAMBgNVHRMBAf8EAjAAMA4GA1UdDwEB/wQEAwIF4DAWBgNVHSUBAf8EDDAK +BggrBgEFBQcDAjANBgkqhkiG9w0BAQsFAAOCAQEAn3cM0PDh8vTJS8zZ7HylMpZl +SaZwd3sxshhBKx4JEc85WQPp60nVADqVhnkVa1rfQQURaMP87hqmzf9eOcesnjn6 +17eSVpDpZ0B1qV46hJd15yYKqFLavqtFpy0ePpk4EoanwJUikphT3yuIB6v3gqfY +h20t7/XmkjEwfGkmgmXOZNb9uOpKjkotWRR/IslSMxoozsdWYQLaqA0De/7Tqpmi +mortmVTOtZCX/ZChuN2XzqUnWZT+xIJomAj4ZCOlw03Yd9eUhrDZBmrYHiUmS4VO +wWFDER3zhwncjg0X2rOqL6N5P8TIfqpVgf1VuDhTAj/GY1ZKrXol28WwQQCA9w== +-----END CERTIFICATE----- +-----BEGIN CERTIFICATE----- +MIIEPTCCAyWgAwIBAgIBAjANBgkqhkiG9w0BAQsFADBwMRMwEQYKCZImiZPyLGQB +GRYDY29tMRowGAYKCZImiZPyLGQBGRYKamZyb2dqZnJvZzEUMBIGA1UECgwLSkZy +b2csIEluYy4xCzAJBgNVBAsMAkNBMRowGAYDVQQDDBFyb290LmNhLmpmcm9nLmNv +bTAeFw0yMDEyMDcxMDUyNDhaFw0zMDEyMDUxMDUyNDhaMG4xEzARBgoJkiaJk/Is +ZAEZFgNjb20xFTATBgoJkiaJk/IsZAEZFgVqZnJvZzEUMBIGA1UECgwLamZyb2cs +IEluYy4xCzAJBgNVBAsMAkNBMR0wGwYDVQQDDBRzaWduaW5nLmNhLmpmcm9nLmNv +bTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALCe74VmqSryFPESO/oq +bgspiOSwGheG/AbUf/2XXPLZNbZJ/hhuI6T+iSW5FYy3jETwwODDlF8GBN6R33+U +gNCjXIMBDUOWkETe1fD2zj1HMTC6angykKJy2Xkw+sWniELbYfTu+SLHsBMPQnVI +jFwDLcbSMbs7ieU/IuQTEnEZxPiKcokOaF7vPntfPwdvRoGwMR0VuX7h+20Af1Il +3ntOuoasoV66K6KuiBRkSBcsV2ercCRQlpXCvIsTJVWASpSTNrpKy8zejjePw/xs +ieMGSo6WIxnIJnOLTJWnrw8sZt0tiNrLbB8npSvP67uUMDGhrZ3Tnro9JtujquOE +zMUCAwEAAaOB4zCB4DASBgNVHRMBAf8ECDAGAQH/AgEAMIGaBgNVHSMEgZIwgY+A +FBX3TQRxJRItQ/hi81MA3eZggFs7oXSkcjBwMRMwEQYKCZImiZPyLGQBGRYDY29t +MRowGAYKCZImiZPyLGQBGRYKamZyb2dqZnJvZzEUMBIGA1UECgwLSkZyb2csIElu +Yy4xCzAJBgNVBAsMAkNBMRowGAYDVQQDDBFyb290LmNhLmpmcm9nLmNvbYIBATAd +BgNVHQ4EFgQUoe6Xib3OCJt1pEevxWlA/2mA2vIwDgYDVR0PAQH/BAQDAgGGMA0G +CSqGSIb3DQEBCwUAA4IBAQAzkcvT1tTjnjguRH4jGPxP1fidiM0DWiWZQlRT9Evt +BkltRwkqOZIdrBLy/KJbOxRSCRaKpxyIYd5bWrCDCWvXArBFDY9j3jGGk8kqXb0/ +VajEKDjHXzJM7HXAzyJO2hKVK4/OoPlzhKqR1ZbZF1F8Omzo7+oNwPqf5Y5hnn2M +qrUWxE216mWE8v7gvbfu39w9XKTFH1/RPgAAJet2dunyLbz3W5NgyBbCWGj/qJCz +TUDD9I8az/XX73HLpkXbcEY5/qrPV6EQWzf+ec4EcgrEi0f8gTKzl7OQaqYDxObk +yixmONVlwYD2FpWqJYAfg04u/CRQMXPPCdUQh/eKrHUg +-----END CERTIFICATE----- \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/handlers/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/handlers/main.yml new file mode 100644 index 0000000..016570c --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/handlers/main.yml @@ -0,0 +1,7 @@ +--- +# handlers file for missioncontrol +- name: restart missioncontrol + become: yes + systemd: + name: "{{ mc_daemon }}" + state: restarted diff --git 
a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/meta/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/meta/main.yml new file mode 100644 index 0000000..2a11e72 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/meta/main.yml @@ -0,0 +1,16 @@ +galaxy_info: + author: "JFrog Maintainers Team" + description: "The missionControl role will install missionControl software onto the host. An Artifactory server and a Postgres database are required." + company: JFrog + + issue_tracker_url: "https://github.com/jfrog/JFrog-Cloud-Installers/issues" + + license: Apache-2.0 + + min_ansible_version: 2.9 + + galaxy_tags: + - missionControl + - jfrog + +dependencies: [] \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/tasks/expect.yml b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/tasks/expect.yml new file mode 100644 index 0000000..06f61dc --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/tasks/expect.yml @@ -0,0 +1,44 @@ +- name: Prepare expect scenario script + set_fact: + expect_scenario: | + set timeout 300 + spawn {{ exp_executable_cmd }} + expect_before timeout { exit 1 } + set CYCLE_END 0 + set count 0 + + while { $CYCLE_END == 0 } { + expect { + {% for each_request in exp_scenarios %} + -nocase -re {{ '{' }}{{ each_request.expecting }}.*} { + send "{{ each_request.sending }}\n" + } + {% endfor %} + eof { + set CYCLE_END 1 + } + } + set count "[expr $count + 1]" + if { $count > 16} { + exit 128 + } + } + + expect eof + lassign [wait] pid spawnid os_error_flag value + + if {$os_error_flag == 0} { + puts "INSTALLER_EXIT_STATUS-$value" + } else { + puts "INSTALLER_EXIT_STATUS-$value" + } + +- name: Run interactive installer with expect + become: yes + ignore_errors: yes + shell: | + {{ expect_scenario }} + args: + executable: /usr/bin/expect + chdir: "{{ exp_dir }}" + register: exp_result diff --git a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/tasks/install.yml b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/tasks/install.yml new file mode 100644 index 0000000..14b2c30 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/tasks/install.yml @@ -0,0 +1,150 @@ +--- +- debug: + msg: "Performing installation of missionControl version - {{ missionControl_version }}" + +- name: Install expect dependency + become: yes + yum: + name: expect + state: present + when: ansible_os_family == 'RedHat' + +- name: Install expect dependency + become: yes + apt: + name: expect + state: present + update_cache: yes + when: ansible_os_family == 'Debian' + +- name: Ensure jfmc group exists + become: yes + group: + name: "{{ mc_group }}" + gid: "{{ mc_gid }}" + state: present + +- name: Ensure jfmc user exists + become: yes + user: + uid: "{{ mc_uid }}" + name: "{{ mc_user }}" + group: "{{ mc_group }}" + create_home: yes + home: "{{ mc_home }}" + shell: /bin/bash + state: present + +- name: Download mc + become: yes + unarchive: + src: "{{ mc_tar }}" + dest: "{{ jfrog_home_directory }}" + remote_src: yes + owner: "{{ mc_user }}" + group: "{{ mc_group }}" + creates: "{{ mc_untar_home }}" + register: downloadmc + until: downloadmc is succeeded + retries: 3 + +- name: Check if app directory exists + become: yes + stat: + path: "{{ mc_home }}/app" + register: app_dir_check + +- name: Copy untar directory to mc home + become: yes + command: "cp -r {{ mc_untar_home }}/.
{{ mc_home }}" + when: not app_dir_check.stat.exists + +- name: Create required directories + become: yes + file: + path: "{{ item }}" + state: directory + recurse: yes + owner: "{{ mc_user }}" + group: "{{ mc_group }}" + loop: + - "{{ mc_home }}/var/etc" + - "{{ mc_home }}/var/etc/security/" + - "{{ mc_home }}/var/etc/info/" + +- name: Configure master key + become: yes + copy: + dest: "{{ mc_home }}/var/etc/security/master.key" + content: | + {{ master_key }} + owner: "{{ mc_user }}" + group: "{{ mc_group }}" + mode: 0640 + +- name: Setup elasticsearch + import_tasks: setup-elasticsearch.yml + +- name: Check if install.sh wrapper script exist + become: yes + stat: + path: "{{ mc_install_script_path }}/install.sh" + register: install_wrapper_script + +- name: Include interactive installer scripts + include_vars: script/archive.yml + +- name: Install JFMC + include_tasks: expect.yml + vars: + exp_executable_cmd: "./install.sh -u {{ mc_user }} -g {{ mc_group }}" + exp_dir: "{{ mc_install_script_path }}" + exp_scenarios: "{{ mc_installer_scenario['main'] }}" + args: + apply: + environment: + YQ_PATH: "{{ mc_thirdparty_path }}/yq" + when: install_wrapper_script.stat.exists + +- name: Configure installer info + become: yes + template: + src: installer-info.json.j2 + dest: "{{ mc_home }}/var/etc/info/installer-info.json" + notify: restart missioncontrol + +- name: Configure systemyaml + become: yes + template: + src: "{{ mc_system_yaml_template }}" + dest: "{{ mc_home }}/var/etc/system.yaml" + notify: restart missioncontrol + +- name: Update correct permissions + become: yes + file: + path: "{{ mc_home }}" + state: directory + recurse: yes + owner: "{{ mc_user }}" + group: "{{ mc_group }}" + mode: '0755' + +- name: Install mc as a service + become: yes + shell: | + {{ mc_archive_service_cmd }} + args: + chdir: "{{ mc_install_script_path }}" + register: check_service_status_result + ignore_errors: yes + +- name: Restart missioncontrol + meta: flush_handlers + +- name : Wait for missionControl to be fully deployed + uri: url=http://127.0.0.1:8082/router/api/v1/system/health timeout=130 + register: result + until: result.status == 200 + retries: 25 + delay: 5 \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/tasks/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/tasks/main.yml new file mode 100644 index 0000000..6786b82 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/tasks/main.yml @@ -0,0 +1,6 @@ +- name: perform installation + include_tasks: "install.yml" + when: not mc_upgrade_only +- name: perform upgrade + include_tasks: "upgrade.yml" + when: mc_upgrade_only \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/tasks/setup-elasticsearch.yml b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/tasks/setup-elasticsearch.yml new file mode 100644 index 0000000..768e508 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/tasks/setup-elasticsearch.yml @@ -0,0 +1,179 @@ +- name: Ensure group elasticsearch exists + become: yes + group: + name: elasticsearch + gid: "{{ es_gid }}" + state: present + +- name: Ensure user elasticsearch exists + become: yes + user: + name: elasticsearch + uid: "{{ es_uid }}" + group: elasticsearch + create_home: yes + home: "{{ mc_es_home }}" + shell: /bin/bash + state: present + +- name: Create required directories + become: yes + file: + path: "{{ item }}" + 
state: directory + mode: 0755 + recurse: yes + owner: elasticsearch + group: elasticsearch + loop: + - "{{ mc_es_conf_base }}" + - "{{ mc_es_data_dir }}" + - "{{ mc_es_log_dir }}" + - "{{ mc_es_home }}" + +- name: Set max file descriptors limit + become: yes + pam_limits: + domain: 'elasticsearch' + limit_type: '-' + limit_item: nofile + value: '65536' + +- name: Update nproc limit + become: yes + pam_limits: + domain: 'elasticsearch' + limit_type: '-' + limit_item: nproc + value: '4096' + +- name: Setting sysctl values + become: yes + sysctl: name={{ item.name }} value={{ item.value }} sysctl_set=yes + loop: + - { name: "vm.max_map_count", value: 262144} + ignore_errors: yes + +- name: Find elasticsearch package + become: yes + find: + paths: "{{ mc_home }}/app/third-party/elasticsearch" + patterns: "^elasticsearch-oss-.+\\.tar.gz$" + use_regex: yes + file_type: file + register: check_elasticsearch_package_result + +- name: Set elasticsearch package file name + set_fact: + mc_elasticsearch_package: "{{ check_elasticsearch_package_result.files[0].path }}" + when: check_elasticsearch_package_result.matched > 0 + +- name: Ensure /usr/share/elasticsearch exists + file: + path: "{{ mc_es_home }}" + state: directory + owner: elasticsearch + group: elasticsearch + become: yes + +- name: Extract elasticsearch package + become: yes + become_user: elasticsearch + ignore_errors: yes + unarchive: + src: "{{ mc_elasticsearch_package }}" + dest: "{{ mc_es_home }}" + remote_src: yes + extra_opts: + - --strip-components=1 + owner: elasticsearch + group: elasticsearch + register: unarchive_result + when: check_elasticsearch_package_result.matched > 0 + +- name: Copy elasticsearch config files to ES_PATH_CONF dir + become: yes + command: "cp -r {{ mc_es_home }}/config/. 
{{ mc_es_conf_base }}/" + +- name: Remove elasticsearch config dir + become: yes + file: + path: "{{ mc_es_home }}/config" + state: absent + +- name: Generate HA elasticsearch.yml template file + become: yes + ignore_errors: yes + template: + src: templates/ha/{{ mc_ha_node_type }}.elasticsearch.yml.j2 + dest: "{{ mc_es_conf_base }}/elasticsearch.yml" + owner: elasticsearch + group: elasticsearch + when: + - unarchive_result.extract_results.rc | default(128) == 0 + - flow_type in ["ha-cluster", "ha-upgrade"] + +- name: Generate elasticsearch.yml template file + become: yes + template: + src: templates/elasticsearch.yml.j2 + dest: "{{ mc_es_conf_base }}/elasticsearch.yml" + owner: elasticsearch + group: elasticsearch + when: + - unarchive_result.extract_results.rc | default(128) == 0 + - flow_type in ["archive", "upgrade"] + +- name: Create empty unicast_hosts.txt file + become: yes + file: + path: "{{ mc_es_conf_base }}/unicast_hosts.txt" + state: touch + mode: 0664 + owner: elasticsearch + group: elasticsearch + +- name: Setup searchguard plugin + import_tasks: setup-searchguard.yml + +- name: Update directories permissions + become: yes + file: + path: "{{ item }}" + state: directory + mode: 0755 + recurse: yes + owner: elasticsearch + group: elasticsearch + loop: + - "{{ mc_es_conf_base }}" + - "{{ mc_es_data_dir }}" + - "{{ mc_es_log_dir }}" + - "{{ mc_es_home }}" + +- name: Start elasticsearch + become: yes + become_user: elasticsearch + shell: "{{ mc_es_script_path }}/elasticsearch -d" + environment: + JAVA_HOME: "{{ mc_es_java_home }}" + ES_PATH_CONF: "{{ mc_es_conf_base }}/" + register: start_elasticsearch_result + when: unarchive_result.extract_results.rc | default(128) == 0 + +- name: Wait for elasticsearch to start + pause: + seconds: 15 + +- name: Init searchguard plugin + become: yes + become_user: elasticsearch + shell: | + ./sgadmin.sh -p {{ mc_es_transport_port }} -cacert root-ca.pem \ + -cert sgadmin.pem -key sgadmin.key -cd {{ mc_es_searchgaurd_home }}/sgconfig/ -nhnv -icl + args: + chdir: "{{ mc_es_searchgaurd_home }}/tools/" + environment: + JAVA_HOME: "{{ mc_es_java_home }}" + register: install_searchguard_result + when: check_searchguard_bundle_result.matched == 1 \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/tasks/setup-searchguard.yml b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/tasks/setup-searchguard.yml new file mode 100644 index 0000000..54fcaaf --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/tasks/setup-searchguard.yml @@ -0,0 +1,100 @@ +- name: Copy elasticsearch certificate + become: yes + copy: + mode: 0600 + src: files/searchguard/localhost.pem + dest: "{{ mc_es_conf_base }}/localhost.pem" + owner: elasticsearch + group: elasticsearch + +- name: Copy elasticsearch private key + become: yes + copy: + mode: 0600 + src: files/searchguard/localhost.key + dest: "{{ mc_es_conf_base }}/localhost.key" + owner: elasticsearch + group: elasticsearch + +- name: Copy searchguard root ca + become: yes + copy: + mode: 0600 + src: files/searchguard/root-ca.pem + dest: "{{ mc_es_conf_base }}/root-ca.pem" + owner: elasticsearch + group: elasticsearch + +- name: Find searchguard bundle + find: + paths: "{{ mc_home }}/app/third-party/elasticsearch/" + patterns: "^search-guard-.+\\.zip$" + use_regex: yes + file_type: file + register: check_searchguard_bundle_result + +- name: Install searchguard plugin + become: yes + become_user: elasticsearch + ignore_errors: 
yes + shell: | + {{ mc_es_script_path }}/elasticsearch-plugin install \ + -b file://{{ check_searchguard_bundle_result.files[0].path }} + environment: + JAVA_HOME: "{{ mc_es_java_home }}" + ES_PATH_CONF: "{{ mc_es_conf_base }}/" + register: install_searchguard_result + when: check_searchguard_bundle_result.matched == 1 + +- name: Copy searchguard admin certificate + become: yes + copy: + mode: 0600 + src: files/searchguard/sgadmin.pem + dest: "{{ mc_es_searchgaurd_home }}/tools/sgadmin.pem" + owner: elasticsearch + group: elasticsearch + +- name: Copy searchguard admin private key + become: yes + copy: + mode: 0600 + src: files/searchguard/sgadmin.key + dest: "{{ mc_es_searchgaurd_home }}/tools/sgadmin.key" + owner: elasticsearch + group: elasticsearch + +- name: Copy searchguard root ca + become: yes + copy: + mode: 0600 + src: files/searchguard/root-ca.pem + dest: "{{ mc_es_searchgaurd_home }}/tools/root-ca.pem" + owner: elasticsearch + group: elasticsearch + +- name: Copy roles template + become: yes + copy: + mode: 0600 + src: files/searchguard/sg_roles.yml + dest: "{{ mc_es_searchgaurd_home }}/sgconfig/sg_roles.yml" + owner: elasticsearch + group: elasticsearch + +- name: Copy roles template + become: yes + copy: + mode: 0600 + src: files/searchguard/sg_roles_mapping.yml + dest: "{{ mc_es_searchgaurd_home }}/sgconfig/sg_roles_mapping.yml" + owner: elasticsearch + group: elasticsearch + +- name: Check execution bit + become: yes + file: + path: "{{ mc_es_searchgaurd_home }}/tools/sgadmin.sh" + owner: elasticsearch + group: elasticsearch + mode: 0700 diff --git a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/tasks/upgrade-elasticsearch.yml b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/tasks/upgrade-elasticsearch.yml new file mode 100644 index 0000000..527284e --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/tasks/upgrade-elasticsearch.yml @@ -0,0 +1,113 @@ +- name: Get elasticsearch pid + shell: "ps -ef | grep -v grep | grep -w elasticsearch | awk '{print $2}'" + register: elasticsearch_pid + +- name: Stop elasticsearch before upgrade + become: yes + shell: kill -9 {{ elasticsearch_pid.stdout }} + when: elasticsearch_pid.stdout | length > 0 + +- name: Waiting until all running processes are killed + wait_for: + path: "/proc/{{ elasticsearch_pid.stdout }}/status" + state: absent + when: elasticsearch_pid.stdout | length > 0 + +- name: Find searchguard bundle for removal + become: yes + find: + paths: "{{ mc_home }}/app/third-party/elasticsearch/" + patterns: "^search-guard-.+\\.zip$" + use_regex: yes + file_type: file + register: check_searchguard_bundle_result + +- name: Remove searchguard plugin + become: yes + become_user: elasticsearch + ignore_errors: yes + shell: | + {{ mc_es_script_path }}/elasticsearch-plugin remove {{ check_searchguard_bundle_result.files[0].path }} + environment: + JAVA_HOME: "{{ mc_es_java_home }}" + ES_PATH_CONF: "{{ mc_es_conf_base }}/config" + register: remove_searchguard_result + when: check_searchguard_bundle_result.matched == 1 + +- name: Delete elasticsearch home dir + become: yes + file: + path: "{{ mc_es_home }}" + state: absent + +- name: Create elasticsearch home dir + become: yes + file: + path: "{{ mc_es_home }}" + state: directory + mode: 0755 + owner: elasticsearch + group: elasticsearch + +- name: Find elasticsearch package + become: yes + find: + paths: "{{ mc_home }}/app/third-party/elasticsearch" + patterns: "^elasticsearch-oss-.+\\.tar.gz$" + use_regex: yes + 
file_type: file + register: check_elasticsearch_package_result + +- name: Set elasticsearch package file name + set_fact: + mc_elasticsearch_package: "{{ check_elasticsearch_package_result.files[0].path }}" + when: check_elasticsearch_package_result.matched > 0 + +- name: Extract elasticsearch package + become: yes + become_user: elasticsearch + ignore_errors: yes + unarchive: + src: "{{ mc_elasticsearch_package }}" + dest: "{{ mc_es_home }}" + remote_src: yes + extra_opts: + - --strip-components=1 + - --exclude=config + owner: elasticsearch + group: elasticsearch + register: unarchive_result + when: check_elasticsearch_package_result.matched > 0 + +- name: Generate HA elasticsearch.yml template file + become: yes + ignore_errors: yes + template: + src: templates/ha/{{ mc_ha_node_type }}.elasticsearch.yml.j2 + dest: "{{ mc_es_conf_base }}/elasticsearch.yml" + owner: elasticsearch + group: elasticsearch + when: unarchive_result.extract_results.rc | default(128) == 0 + +- name: Create empty unicast_hosts.txt file + become: yes + file: + path: "{{ mc_es_conf_base }}/unicast_hosts.txt" + state: touch + mode: 0644 + owner: elasticsearch + group: elasticsearch + +- name: Upgrade searchguard plugin + import_tasks: upgrade-searchguard.yml + +- name: Start elasticsearch + become: yes + become_user: elasticsearch + ignore_errors: yes + shell: "{{ mc_es_script_path }}/elasticsearch -d" + environment: + JAVA_HOME: "{{ mc_es_java_home }}" + ES_PATH_CONF: "{{ mc_es_conf_base }}/" + when: unarchive_result.extract_results.rc | default(128) == 0 + register: start_elastcsearch_upgraded diff --git a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/tasks/upgrade-searchguard.yml b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/tasks/upgrade-searchguard.yml new file mode 100644 index 0000000..cde3228 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/tasks/upgrade-searchguard.yml @@ -0,0 +1,100 @@ +- name: Create elasticsearch config path folder + become: yes + file: + path: "{{ mc_es_conf_base }}/searchguard" + state: directory + mode: 0755 + owner: elasticsearch + group: elasticsearch + +- name: Copy elasticsearch certificate + become: yes + copy: + mode: 0600 + src: files/searchguard/localhost.pem + dest: "{{ mc_es_conf_base }}/localhost.pem" + owner: elasticsearch + group: elasticsearch + +- name: Copy elasticsearch private key + become: yes + copy: + mode: 0600 + src: files/searchguard/localhost.key + dest: "{{ mc_es_conf_base }}/localhost.key" + owner: elasticsearch + group: elasticsearch + +- name: Copy searchguard admin certificate + become: yes + copy: + mode: 0600 + src: files/searchguard/sgadmin.pem + dest: "{{ mc_es_conf_base }}/searchguard/sgadmin.pem" + owner: elasticsearch + group: elasticsearch + +- name: Copy searchguard admin private key + become: yes + copy: + mode: 0600 + src: files/searchguard/sgadmin.key + dest: "{{ mc_es_conf_base }}/searchguard/sgadmin.key" + owner: elasticsearch + group: elasticsearch + +- name: Copy searchguard root ca + become: yes + copy: + mode: 0600 + src: files/searchguard/root-ca.pem + dest: "{{ mc_es_conf_base }}/root-ca.pem" + owner: elasticsearch + group: elasticsearch + +- name: Find searchguard bundle + find: + paths: "{{ mc_home }}/app/third-party/elasticsearch/" + patterns: "^search-guard-.+\\.zip$" + use_regex: yes + file_type: file + register: check_searchguard_bundle_result + +- name: Install searchguard plugin + become: yes + become_user: elasticsearch + ignore_errors: yes + shell: | + {{ 
mc_es_script_path }}/elasticsearch-plugin install \ + -b file://{{ check_searchguard_bundle_result.files[0].path }} + environment: + JAVA_HOME: "{{ mc_es_java_home }}" + ES_PATH_CONF: "{{ mc_es_conf_base }}/" + register: install_searchguard_result + when: check_searchguard_bundle_result.matched == 1 + +- name: Copy roles template + become: yes + copy: + mode: 0600 + src: files/searchguard/sg_roles.yml + dest: "{{ mc_es_home }}/plugins/search-guard-7/sgconfig/sg_roles.yml" + owner: elasticsearch + group: elasticsearch + +- name: Copy roles mapping template + become: yes + copy: + mode: 0600 + src: files/searchguard/sg_roles_mapping.yml + dest: "{{ mc_es_home }}/plugins/search-guard-7/sgconfig/sg_roles_mapping.yml" + owner: elasticsearch + group: elasticsearch + +- name: Set execution bit on sgadmin.sh + become: yes + file: + path: "{{ mc_es_home }}/plugins/search-guard-7/tools/sgadmin.sh" + owner: elasticsearch + group: elasticsearch + mode: 0700 diff --git a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/tasks/upgrade.yml b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/tasks/upgrade.yml new file mode 100644 index 0000000..b988568 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/tasks/upgrade.yml @@ -0,0 +1,96 @@ +--- +- debug: + msg: "Performing upgrade of missionControl version - {{ missionControl_version }}" + +- name: Stop mc service + become: yes + systemd: + name: "{{ mc_daemon }}" + state: stopped + +- name: Download mc for upgrade + unarchive: + src: "{{ mc_tar }}" + dest: "{{ jfrog_home_directory }}" + remote_src: yes + owner: "{{ mc_user }}" + group: "{{ mc_group }}" + creates: "{{ mc_untar_home }}" + become: yes + register: downloadmc + until: downloadmc is succeeded + retries: 3 + +- name: Delete current app folder + become: yes + file: + path: "{{ mc_home }}/app" + state: absent + +- name: Copy new app to mc app + command: "cp -r {{ mc_untar_home }}/app/.
{{ mc_home }}/app" + become: yes + +- name: Delete untar directory + file: + path: "{{ mc_untar_home }}" + state: absent + become: yes + +- name: Upgrade elasticsearch + import_tasks: upgrade-elasticsearch.yml + +- name: Check if install.sh wrapper script exist + become: yes + stat: + path: "{{ mc_install_script_path }}/install.sh" + register: upgrade_wrapper_script + +- name: Include interactive installer scripts + include_vars: script/archive.yml + +- name: Upgrade JFMC + include_tasks: expect.yml + vars: + exp_executable_cmd: "./install.sh -u {{ mc_user }} -g {{ mc_group }}" + exp_dir: "{{ mc_install_script_path }}" + exp_scenarios: "{{ mc_installer_scenario['main'] }}" + args: + apply: + environment: + YQ_PATH: "{{ mc_thirdparty_path }}/yq" + when: upgrade_wrapper_script.stat.exists + +- name: Configure installer info + become: yes + template: + src: installer-info.json.j2 + dest: "{{ mc_home }}/var/etc/info/installer-info.json" + notify: restart missioncontrol + +- name: Configure systemyaml + template: + src: "{{ mc_system_yaml_template }}" + dest: "{{ mc_home }}/var/etc/system.yaml" + become: yes + notify: restart missioncontrol + +- name: Update correct permissions + become: yes + file: + path: "{{ mc_home }}" + state: directory + recurse: yes + owner: "{{ mc_user }}" + group: "{{ mc_group }}" + mode: '0755' + +- name: Restart missioncontrol + meta: flush_handlers + +- name : Wait for missionControl to be fully deployed + uri: url=http://127.0.0.1:8082/router/api/v1/system/health timeout=130 + register: result + until: result.status == 200 + retries: 25 + delay: 5 \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/templates/elasticsearch.yml.j2 b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/templates/elasticsearch.yml.j2 new file mode 100644 index 0000000..f755a30 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/templates/elasticsearch.yml.j2 @@ -0,0 +1,21 @@ +discovery.seed_providers: file +transport.port: {{ mc_es_transport_port }} +transport.host: 0.0.0.0 +transport.publish_host: {{ ansible_host }} +network.host: 0.0.0.0 +node.name: {{ ansible_host }} +cluster.initial_master_nodes: {{ ansible_host }} +bootstrap.memory_lock: false +path.data: {{ mc_es_data_dir }} +path.logs: {{ mc_es_log_dir }} + +searchguard.ssl.transport.pemcert_filepath: localhost.pem +searchguard.ssl.transport.pemkey_filepath: localhost.key +searchguard.ssl.transport.pemtrustedcas_filepath: root-ca.pem +searchguard.ssl.transport.enforce_hostname_verification: false +searchguard.ssl.transport.resolve_hostname: false +searchguard.nodes_dn: +- CN=localhost,OU=Ops,O=localhost\, Inc.,DC=localhost,DC=com +searchguard.authcz.admin_dn: +- CN=sgadmin,OU=Ops,O=sgadmin\, Inc.,DC=sgadmin,DC=com +searchguard.enterprise_modules_enabled: false diff --git a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/templates/ha/master.elasticsearch.yml.j2 b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/templates/ha/master.elasticsearch.yml.j2 new file mode 100644 index 0000000..e5ff5c2 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/templates/ha/master.elasticsearch.yml.j2 @@ -0,0 +1,14 @@ +discovery.seed_providers: file + +{% if mc_elasticsearch_package | regex_search(".*oss-7.*") %} +cluster.initial_master_nodes: {{ ansible_host }} +{% endif %} + +path.data: {{ mc_es_home }}/data +path.logs: {{ mc_es_home }}/logs + +network.host: 0.0.0.0 +node.name: {{ 
ansible_host }} +transport.host: 0.0.0.0 +transport.port: 9300 +transport.publish_host: {{ ansible_host }} diff --git a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/templates/ha/master.system.yaml.j2 b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/templates/ha/master.system.yaml.j2 new file mode 100644 index 0000000..f1a60cc --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/templates/ha/master.system.yaml.j2 @@ -0,0 +1,21 @@ +configVersion: 1 +shared: + jfrogUrl: {{ jfrog_url }} + node: + ip: {{ ansible_host }} + database: + type: "{{ mc_db_type }}" + driver: "{{ mc_db_driver }}" + url: "{{ mc_db_url }}" + username: "{{ mc_db_user }}" + password: "{{ mc_db_password }}" + elasticsearch: + unicastFile: {{ mc_es_conf_base }}/unicast_hosts.txt + password: {{ mc_es_password }} + url: {{ mc_es_url }} + username: {{ mc_es_user }} + security: + joinKey: {{ join_key }} +router: + entrypoints: + internalPort: 8046 diff --git a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/templates/ha/slave.elasticsearch.yml.j2 b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/templates/ha/slave.elasticsearch.yml.j2 new file mode 100644 index 0000000..8c6f135 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/templates/ha/slave.elasticsearch.yml.j2 @@ -0,0 +1,11 @@ +#bootstrap.memory_lock: true +discovery.seed_providers: file + +path.data: {{ mc_es_home }}/data +path.logs: {{ mc_es_home }}/logs + +network.host: 0.0.0.0 +node.name: {{ ansible_host }} +transport.host: 0.0.0.0 +transport.port: 9300 +transport.publish_host: {{ ansible_host }} diff --git a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/templates/ha/slave.system.yaml.j2 b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/templates/ha/slave.system.yaml.j2 new file mode 100644 index 0000000..d10c44d --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/templates/ha/slave.system.yaml.j2 @@ -0,0 +1,22 @@ +configVersion: 1 +shared: + jfrogUrl: {{ jfrog_url }} + node: + ip: {{ ansible_host }} + database: + type: "{{ mc_db_type }}" + driver: "{{ mc_db_driver }}" + url: "{{ mc_db_url }}" + username: "{{ mc_db_user }}" + password: "{{ mc_db_password }}" + elasticsearch: + unicastFile: {{ mc_es_conf_base }}/unicast_hosts.txt + clusterSetup: YES + password: {{ mc_es_password }} + url: {{ mc_es_url }} + username: {{ mc_es_user }} + security: + joinKey: {{ join_key }} +router: + entrypoints: + internalPort: 8046 diff --git a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/templates/installer-info.json.j2 b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/templates/installer-info.json.j2 new file mode 100644 index 0000000..5e02d5b --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/templates/installer-info.json.j2 @@ -0,0 +1,9 @@ +{{ ansible_managed | comment }} +{ + "productId": "Ansible_MissionControl/{{ platform_collection_version }}-{{ missionControl_version }}", + "features": [ + { + "featureId": "Channel/{{ ansible_marketplace }}" + } + ] +} \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/templates/system.yaml.j2 b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/templates/system.yaml.j2 new file mode 100644 index 0000000..d6b3b33 --- /dev/null +++ 
b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/templates/system.yaml.j2 @@ -0,0 +1,35 @@ +configVersion: 1 +shared: + jfrogUrl: {{ jfrog_url }} + node: + ip: {{ mc_primary_ip }} + id: {{ ansible_date_time.iso8601_micro | to_uuid }} + database: + type: "{{ mc_db_type }}" + driver: "{{ mc_db_driver }}" + url: "{{ mc_db_url }}" + elasticsearch: + unicastFile: {{ mc_es_conf_base }}/config/unicast_hosts.txt + password: {{ mc_es_password }} + url: {{ mc_es_url }} + username: {{ mc_es_user }} + security: + joinKey: {{ join_key }} +mc: + database: + username: "{{ mc_db_user }}" + password: "{{ mc_db_password }}" + schema: "jfmc_server" +insight-scheduler: + database: + username: "{{ mc_db_user }}" + password: "{{ mc_db_password }}" + schema: "insight_scheduler" +insight-server: + database: + username: "{{ mc_db_user }}" + password: "{{ mc_db_password }}" + schema: "insight_server" +router: + entrypoints: + internalPort: 8046 diff --git a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/vars/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/vars/main.yml new file mode 100644 index 0000000..ed97d53 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/vars/main.yml @@ -0,0 +1 @@ +--- diff --git a/Ansible/ansible_collections/jfrog/platform/roles/missionControl/vars/script/archive.yml b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/vars/script/archive.yml new file mode 100644 index 0000000..6d66540 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/missionControl/vars/script/archive.yml @@ -0,0 +1,58 @@ +mc_installer_scenario: + main: + - { + "expecting": "(data|installation) directory \\(", + "sending": "{{ mc_home }}" + } + - { + "expecting": "jfrog url( \\(.+\\))?:(?!.*Skipping prompt)", + "sending": "{{ jfrog_url }}" + } + - { + "expecting": "join key:(?!.*Skipping prompt)", + "sending": "{{ join_key }}" + } + - { + "expecting": "please specify the ip address of this machine(?!.*Skipping prompt)", + "sending": "{{ ansible_host }}" + } + - { + "expecting": "are you adding an additional node", + "sending": "{% if mc_ha_node_type is defined and mc_ha_node_type == 'master' %}n{% else %}y{% endif %}" + } + - { + "expecting": "do you want to install postgresql", + "sending": "n" + } + - { + "expecting": "do you want to install elasticsearch", + "sending": "n" + } + - { + "expecting": "(postgresql|database) url.+\\[jdbc:postgresql.+\\]:", + "sending": "{{ mc_db_url }}" + } + - { + "expecting": "(postgresql|database) password", + "sending": "{{ mc_db_password }}" + } + - { + "expecting": "(postgresql|database) username", + "sending": "{{ mc_db_user }}" + } + - { + "expecting": "confirm database password", + "sending": "{{ mc_db_password }}" + } + - { + "expecting": "elasticsearch url:(?!.*Skipping prompt)", + "sending": "{{ mc_es_url }}" + } + - { + "expecting": "elasticsearch username:", + "sending": "{{ mc_es_user }}" + } + - { + "expecting": "elasticsearch password:", + "sending": "{{ mc_es_password }}" + } diff --git a/Ansible/ansible_collections/jfrog/platform/roles/postgres/README.md b/Ansible/ansible_collections/jfrog/platform/roles/postgres/README.md new file mode 100644 index 0000000..05389ce --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/postgres/README.md @@ -0,0 +1,23 @@ +# postgres +The postgres role will install Postgresql software and 
configure a database and user to support an Artifactory or Xray server. + +### Role Variables + +By default, the [_pg_hba.conf_](https://www.postgresql.org/docs/13/auth-pg-hba-conf.html) client authentication file is configured for open access for development purposes through the _postgres_allowed_hosts_ variable: + +``` +postgres_allowed_hosts: + - { type: "host", database: "all", user: "all", address: "0.0.0.0/0", method: "trust"} +``` + +**THIS SHOULD NOT BE USED FOR PRODUCTION.** + +**Update this variable to only allow access from Artifactory, Distribution, MissionControl, and Xray.** A hardened example is sketched below. + +## Example Playbook +``` +--- +- hosts: postgres_servers + roles: + - postgres +``` \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/installers/roles/postgres/defaults/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/postgres/defaults/main.yml similarity index 82% rename from Ansible/ansible_collections/jfrog/installers/roles/postgres/defaults/main.yml rename to Ansible/ansible_collections/jfrog/platform/roles/postgres/defaults/main.yml index e980ceb..67a999f 100644 --- a/Ansible/ansible_collections/jfrog/installers/roles/postgres/defaults/main.yml +++ b/Ansible/ansible_collections/jfrog/platform/roles/postgres/defaults/main.yml @@ -1,21 +1,21 @@ --- -# Put database into alternative location with a bind mount. -postgres_server_bind_mount_var_lib_pgsql: false - -# Where to put database. -postgres_server_bind_mount_var_lib_pgsql_target: "" - # Default version of Postgres server to install. -postgres_server_version: "9.6" +postgres_version: 13 + +# Default listen_addresses of Postgres server +postgres_listen_addresses: 0.0.0.0 + +# Default port of Postgres server +postgres_port: 5432 # Server version in package: -postgres_server_pkg_version: "{{ postgres_server_version|replace('.', '') }}" +postgres_server_pkg_version: "{{ postgres_version|replace('.', '') }}" # Whether or not the files are on ZFS. postgres_server_volume_is_zfs: false # Postgres setting max_connections. -postgres_server_max_connections: 100 +postgres_server_max_connections: 1000 # Postgres setting shared_buffers. postgres_server_shared_buffers: 128MB @@ -48,8 +48,9 @@ postgres_server_max_locks_per_transaction: 64 postgres_server_random_page_cost: "4.0" # User name that the postgres user runs as. -postgres_server_user: postgres +postgres_user: postgres +postgres_locale: "en_US.UTF-8" # Whether or not to lock checkpoints. 
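As a concrete sketch of the hardening the postgres README above calls for (the addresses and databases here are placeholders, not values shipped in this change), an inventory override might look like:

```
# Placeholder CIDRs; substitute the real Artifactory/Distribution/MissionControl/Xray hosts.
postgres_allowed_hosts:
  - { type: "host", database: "artifactory", user: "artifactory", address: "10.0.0.10/32", method: "md5" }
  - { type: "host", database: "xraydb", user: "xray", address: "10.0.0.11/32", method: "md5" }
```

Using `md5` instead of `trust` means the passwords created by the role's user tasks are actually enforced.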
postgres_server_log_checkpoints: false @@ -85,5 +86,10 @@ postgres_server_auto_explain_log_analyze: true # Sets the hosts that can access the database postgres_allowed_hosts: - - { type: "host", database: "all", user: "all", address: "0.0.0.0/0", method: "trust"} - + - { + type: "host", + database: "all", + user: "all", + address: "0.0.0.0/0", + method: "trust", + } diff --git a/Ansible/ansible_collections/jfrog/platform/roles/postgres/handlers/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/postgres/handlers/main.yml new file mode 100644 index 0000000..fc9ffec --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/postgres/handlers/main.yml @@ -0,0 +1,6 @@ +--- +- name: restart postgresql + become: yes + systemd: + name: "{{ postgresql_daemon }}" + state: restarted diff --git a/Ansible/ansible_collections/jfrog/installers/roles/postgres/meta/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/postgres/meta/main.yml similarity index 86% rename from Ansible/ansible_collections/jfrog/installers/roles/postgres/meta/main.yml rename to Ansible/ansible_collections/jfrog/platform/roles/postgres/meta/main.yml index 674d197..e6f64de 100644 --- a/Ansible/ansible_collections/jfrog/installers/roles/postgres/meta/main.yml +++ b/Ansible/ansible_collections/jfrog/platform/roles/postgres/meta/main.yml @@ -1,5 +1,5 @@ galaxy_info: - author: "Jeff Fry " + author: "JFrog Maintainers Team " description: "The postgres role will install Postgresql software and configure a database and user to support an Artifactory or Xray server." company: JFrog diff --git a/Ansible/ansible_collections/jfrog/installers/roles/postgres/tasks/Debian.yml b/Ansible/ansible_collections/jfrog/platform/roles/postgres/tasks/Debian.yml similarity index 59% rename from Ansible/ansible_collections/jfrog/installers/roles/postgres/tasks/Debian.yml rename to Ansible/ansible_collections/jfrog/platform/roles/postgres/tasks/Debian.yml index 04c9e91..948ac74 100644 --- a/Ansible/ansible_collections/jfrog/installers/roles/postgres/tasks/Debian.yml +++ b/Ansible/ansible_collections/jfrog/platform/roles/postgres/tasks/Debian.yml @@ -1,37 +1,33 @@ --- -- name: install python2 psycopg2 - apt: - name: python-psycopg2 - update_cache: yes +- name: install acl, python3-psycopg2 become: yes - ignore_errors: yes - -- name: install python3 psycopg2 apt: - name: python3-psycopg2 + name: + - acl + - python3-psycopg2 + state: present update_cache: yes - become: yes ignore_errors: yes - name: add postgres apt key + become: yes apt_key: url: https://www.postgresql.org/media/keys/ACCC4CF8.asc id: "0x7FCC7D46ACCC4CF8" + validate_certs: no state: present - become: yes - name: register APT repository + become: yes apt_repository: repo: deb http://apt.postgresql.org/pub/repos/apt/ {{ ansible_distribution_release }}-pgdg main state: present filename: pgdg - become: yes - name: install postgres packages + become: yes apt: name: - - postgresql-{{ postgres_server_version }} - - postgresql-server-dev-{{ postgres_server_version }} - - postgresql-contrib-{{ postgres_server_version }} + - postgresql-{{ postgres_version }} + - postgresql-contrib-{{ postgres_version }} state: present - become: yes diff --git a/Ansible/ansible_collections/jfrog/installers/roles/postgres/tasks/RedHat.yml b/Ansible/ansible_collections/jfrog/platform/roles/postgres/tasks/RedHat.yml similarity index 65% rename from Ansible/ansible_collections/jfrog/installers/roles/postgres/tasks/RedHat.yml rename to 
Ansible/ansible_collections/jfrog/platform/roles/postgres/tasks/RedHat.yml index a30eba9..d535bf1 100644 --- a/Ansible/ansible_collections/jfrog/installers/roles/postgres/tasks/RedHat.yml +++ b/Ansible/ansible_collections/jfrog/platform/roles/postgres/tasks/RedHat.yml @@ -1,31 +1,41 @@ --- - name: install EPEL repository + become: yes yum: name=epel-release state=present when: > # not for Fedora ansible_distribution == 'CentOS' or ansible_distribution == 'Red Hat Enterprise Linux' - become: yes -- name: install python2 psycopg2 +- name: install acl + become: yes yum: name: - - python-psycopg2 + - acl - sudo - wget - perl state: present + ignore_errors: yes -- name: install python3 psycopg2 +- name: install python3-psycopg2 + become: yes yum: name: - python3-psycopg2 - - sudo - - wget - - perl state: present + when: ansible_distribution_major_version == '8' + +- name: install python2-psycopg2 + become: yes + yum: + name: + - python-psycopg2 + state: present + when: ansible_distribution_major_version == '7' - name: fixup some locale issues + become: yes lineinfile: dest: /etc/default/locale line: 'LANGUAGE="{{ item }}"' @@ -38,11 +48,11 @@ - name: get latest version vars: base: http://download.postgresql.org/pub/repos/yum - ver: "{{ ansible_distribution_version }}" + ver: "{{ ansible_distribution_major_version }}" shell: | set -eo pipefail - wget -O - {{ base }}/{{ postgres_server_version }}/redhat/rhel-{{ ver }}-x86_64/ 2>/dev/null | \ - grep 'pgdg-redhat' | \ + wget -O - {{ base }}/reporpms/EL-{{ ver }}-x86_64/ 2>/dev/null | \ + grep 'pgdg-redhat-repo-latest' | \ perl -pe 's/^.*rpm">//g' | \ perl -pe 's/<\/a>.*//g' | \ tail -n 1 @@ -51,22 +61,21 @@ changed_when: false check_mode: false register: latest_version - tags: [skip_ansible_lint] # yes, I want wget here + tags: [skip_ansible_lint] - name: config postgres repository + become: yes vars: base: http://download.postgresql.org/pub/repos/yum - ver: "{{ ansible_distribution_version }}" + ver: "{{ ansible_distribution_major_version }}" yum: - name: "{{ base }}/{{ postgres_server_version }}/redhat/rhel-{{ ver }}-x86_64/{{ latest_version.stdout }}" + name: "{{ base }}/reporpms/EL-{{ ver }}-x86_64/{{ latest_version.stdout }}" state: present - become: yes - name: install postgres packages + become: yes yum: name: - postgresql{{ postgres_server_pkg_version }}-server - postgresql{{ postgres_server_pkg_version }}-contrib - - postgresql{{ postgres_server_pkg_version }}-devel - state: present - become: yes + state: present \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/roles/postgres/tasks/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/postgres/tasks/main.yml new file mode 100644 index 0000000..59612e5 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/postgres/tasks/main.yml @@ -0,0 +1,118 @@ +--- +- name: define OS-specific variables + include_vars: "{{ ansible_os_family }}.yml" + +- name: perform installation + include_tasks: "{{ ansible_os_family }}.yml" + +- name: Set PostgreSQL environment variables. + become: yes + template: + src: postgres.sh.j2 + dest: /etc/profile.d/postgres.sh + mode: 0644 + notify: restart postgresql + +- name: Ensure PostgreSQL data directory exists. 
+ become: yes + become_user: postgres + file: + path: "{{ postgresql_data_dir }}" + owner: postgres + group: postgres + state: directory + mode: 0700 + +- name: Initialize PostgreSQL database cluster + become: yes + become_user: postgres + command: "{{ postgresql_bin_path }}/initdb -D {{ postgresql_data_dir }}" + args: + creates: "{{ postgresql_data_dir }}/PG_VERSION" + environment: + LC_ALL: "{{ postgres_locale }}" + +- name: Setup postgres configuration files + become: yes + become_user: postgres + template: + src: "{{ item }}.j2" + dest: "{{ postgresql_config_path }}/{{ item }}" + owner: postgres + group: postgres + mode: u=rw,go=r + loop: + - pg_hba.conf + - postgresql.conf + notify: restart postgresql + +- name: Ensure PostgreSQL is started and enabled on boot + become: yes + systemd: + name: "{{ postgresql_daemon }}" + state: started + enabled: yes + +- name: Hold until PostgreSQL is up and running + wait_for: + port: "{{ postgres_port }}" + +- name: Create users + become: yes + become_user: postgres + postgresql_user: + name: "{{ item.db_user }}" + password: "{{ item.db_password }}" + conn_limit: "-1" + loop: "{{ db_users|default([]) }}" + no_log: true # secret passwords + +- name: Create databases + become: yes + become_user: postgres + postgresql_db: + name: "{{ item.db_name }}" + owner: "{{ item.db_owner }}" + encoding: UTF-8 + lc_collate: "{{ postgres_locale }}" + lc_ctype: "{{ postgres_locale }}" + template: template0 + loop: "{{ dbs|default([]) }}" + +- name: Check if MC schemas already exist + become: yes + become_user: postgres + command: psql -d {{ mc_db_name }} -t -c "\dn" + register: mc_schemas_loaded + +- name: Create schemas for mission-control + become: yes + become_user: postgres + command: psql -d {{ mc_db_name }} -c 'CREATE SCHEMA {{ item }} authorization {{ mc_db_user }}' + loop: "{{ mc_schemas|default([]) }}" + when: mc_schemas_loaded.stdout is defined and item not in mc_schemas_loaded.stdout + +- name: Grant all privileges to mc user on its schema + become: yes + become_user: postgres + postgresql_privs: + database: "{{ mc_db_name }}" + privs: ALL + type: schema + roles: "{{ mc_db_user }}" + objs: "{{ item }}" + loop: "{{ mc_schemas|default([]) }}" + +- name: Grant privs on db + become: yes + become_user: postgres + postgresql_privs: + database: "{{ item.db_name }}" + role: "{{ item.db_owner }}" + state: present + privs: ALL + type: database + loop: "{{ dbs|default([]) }}" + +- debug: + msg: "Restarted postgres systemd {{ postgresql_daemon }}" diff --git a/Ansible/ansible_collections/jfrog/installers/roles/postgres/templates/pg_hba.conf.j2 b/Ansible/ansible_collections/jfrog/platform/roles/postgres/templates/pg_hba.conf.j2 similarity index 76% rename from Ansible/ansible_collections/jfrog/installers/roles/postgres/templates/pg_hba.conf.j2 rename to Ansible/ansible_collections/jfrog/platform/roles/postgres/templates/pg_hba.conf.j2 index d051806..b861022 100644 --- a/Ansible/ansible_collections/jfrog/installers/roles/postgres/templates/pg_hba.conf.j2 +++ b/Ansible/ansible_collections/jfrog/platform/roles/postgres/templates/pg_hba.conf.j2 @@ -1,3 +1,9 @@ +{{ ansible_managed | comment }} +# PostgreSQL Client Authentication Configuration File +# =================================================== +# +# See: https://www.postgresql.org/docs/current/static/auth-pg-hba-conf.html + # TYPE DATABASE USER ADDRESS METHOD ## localhost connections through Unix port (user name), IPv4, IPv6 (MD5 pw). 
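The user, database, and schema tasks above are driven entirely by inventory variables; the role defines none of them. A minimal sketch of their expected shape, with placeholder names and env lookups modeled on the example inventories removed elsewhere in this change:

```
db_users:
  - { db_user: "artifactory", db_password: "{{ lookup('env', 'artifactory_password') }}" }
dbs:
  - { db_name: "artifactory", db_owner: "artifactory" }
# Only needed when provisioning Mission Control; schema names assumed from its system.yaml template.
mc_schemas:
  - jfmc_server
  - insight_scheduler
  - insight_server
```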
local all all peer @@ -8,4 +14,5 @@ host all all ::1/128 md5 {% for host in postgres_allowed_hosts %} {{ host.type | default('host') }} {{ host.database | default('all') }} {{ host.user | default('all') }} {{ host.address | default('0.0.0.0/0') }} {{ host.method | default('trust') }} {% endfor %} -{% endif %} \ No newline at end of file +{% endif %} + diff --git a/Ansible/ansible_collections/jfrog/platform/roles/postgres/templates/postgres.sh.j2 b/Ansible/ansible_collections/jfrog/platform/roles/postgres/templates/postgres.sh.j2 new file mode 100644 index 0000000..ecb4eea --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/postgres/templates/postgres.sh.j2 @@ -0,0 +1,4 @@ +{{ ansible_managed | comment }} +export PGDATA={{ postgresql_data_dir }} +export LC_ALL={{ postgres_locale }} +export PATH=$PATH:{{ postgresql_bin_path }} diff --git a/Ansible/ansible_collections/jfrog/installers/roles/postgres/templates/postgresql.conf.j2 b/Ansible/ansible_collections/jfrog/platform/roles/postgres/templates/postgresql.conf.j2 similarity index 87% rename from Ansible/ansible_collections/jfrog/installers/roles/postgres/templates/postgresql.conf.j2 rename to Ansible/ansible_collections/jfrog/platform/roles/postgres/templates/postgresql.conf.j2 index c213a99..3fd1cda 100644 --- a/Ansible/ansible_collections/jfrog/installers/roles/postgres/templates/postgresql.conf.j2 +++ b/Ansible/ansible_collections/jfrog/platform/roles/postgres/templates/postgresql.conf.j2 @@ -1,3 +1,4 @@ +{{ ansible_managed | comment }} # ----------------------------- # PostgreSQL configuration file # ----------------------------- @@ -16,9 +17,9 @@ # # This file is read on server startup and when the server receives a SIGHUP # signal. If you edit the file on a running system, you have to SIGHUP the -# server for the changes to take effect, or use "pg_ctl reload". Some -# parameters, which are marked below, require a server shutdown and restart to -# take effect. +# server for the changes to take effect, run "pg_ctl reload", or execute +# "SELECT pg_reload_conf()". Some parameters, which are marked below, +# require a server shutdown and restart to take effect. # # Any parameter can also be given as a command-line option to the server, e.g., # "postgres -c log_connections=on". Some parameters can be changed at run time @@ -38,35 +39,16 @@ # The default values of these variables are driven from the -D command-line # option or PGDATA environment variable, represented here as ConfigDir. 
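For orientation while reading the file-location settings templated just below: the paths come from the per-OS vars files added later in this diff. Assuming the default `postgres_version: 13` on a Debian host, they resolve roughly to the following (illustration only):

```
postgresql_data_dir: /var/lib/postgresql/13/main
postgresql_config_path: /etc/postgresql/13/main
postgresql_bin_path: /usr/lib/postgresql/13/bin
postgresql_daemon: postgresql@13-main
```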
-{% if postgres_server_config_data_directory is not none %} -data_directory = '{{ postgres_server_config_data_directory }}' -{% else %} -#data_directory = 'ConfigDir' # use data in another directory +data_directory = '{{ postgresql_data_dir }}' # use data in another directory # (change requires restart) -{% endif %} - -{% if postgres_server_config_data_directory %} -hba_file = '{{ postgres_server_config_hba_file }}' -{% else %} -#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file +hba_file = '{{ postgresql_config_path }}/pg_hba.conf' # host-based authentication file # (change requires restart) -{% endif %} - -{% if postgres_server_config_data_directory %} -ident_file = '{{ postgres_server_config_ident_file }}' -{% else %} -#ident_file = 'ConfigDir/pg_ident.conf' # host-based authentication file +ident_file = '{{ postgresql_config_path }}/pg_ident.conf' # ident configuration file # (change requires restart) -{% endif %} -{% if postgres_server_config_external_pid_file %} -external_pid_file = '{{ postgres_server_config_external_pid_file }}' -{% else %} # If external_pid_file is not explicitly set, no extra PID file is written. -#external_pid_file = '' # write an extra PID file +external_pid_file = '{{ postgresql_external_pid_file }}' # write an extra PID file # (change requires restart) -{% endif %} - #------------------------------------------------------------------------------ # CONNECTIONS AND AUTHENTICATION @@ -74,14 +56,14 @@ external_pid_file = '{{ postgres_server_config_external_pid_file }}' # - Connection Settings - -listen_addresses = '0.0.0.0' # what IP address(es) to listen on; +listen_addresses = '{{ postgres_listen_addresses }}' # what IP address(es) to listen on; # comma-separated list of addresses; # defaults to 'localhost'; use '*' for all # (change requires restart) -#port = 5432 # (change requires restart) +port = {{ postgres_port }} # (change requires restart) max_connections = {{ postgres_server_max_connections }} # (change requires restart) #superuser_reserved_connections = 3 # (change requires restart) -#unix_socket_directories = '/var/run/postgresql, /tmp' # comma-separated list of directories +#unix_socket_directories = '/var/run/postgresql' # comma-separated list of directories # (change requires restart) #unix_socket_group = '' # (change requires restart) #unix_socket_permissions = 0777 # begin with 0 to use octal notation @@ -91,7 +73,19 @@ max_connections = {{ postgres_server_max_connections }} # (change requires res #bonjour_name = '' # defaults to the computer name # (change requires restart) -# - Security and Authentication - +# - TCP settings - +# see "man 7 tcp" for details + +#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds; + # 0 selects the system default +#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds; + # 0 selects the system default +#tcp_keepalives_count = 0 # TCP_KEEPCNT; + # 0 selects the system default +#tcp_user_timeout = 0 # TCP_USER_TIMEOUT, in milliseconds; + # 0 selects the system default + +# - Authentication - #authentication_timeout = 1min # 1s-600s #ssl = off # (change requires restart) @@ -111,16 +105,6 @@ max_connections = {{ postgres_server_max_connections }} # (change requires res #krb_server_keyfile = '' #krb_caseins_users = off -# - TCP Keepalives - -# see "man 7 tcp" for details - -#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds; - # 0 selects the system default -#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds; - # 0 selects the system default -#tcp_keepalives_count = 0 # TCP_KEEPCNT; - # 0 
selects the system default - #------------------------------------------------------------------------------ # RESOURCE USAGE (except WAL) @@ -186,7 +170,7 @@ max_parallel_workers_per_gather = {{ postgres_server_max_parallel_workers_per_g #old_snapshot_threshold = -1 # 1min-60d; -1 disables; 0 is immediate # (change requires restart) #backend_flush_after = 0 # measured in pages, 0 disables -{% if postgres_server_version|string != "9.6" %} +{% if postgres_version|string != "9.6" %} parallel_leader_participation = {{ "on" if postgres_server_parallel_leader_participation else "off" }} max_parallel_maintenance_workers = {{ postgres_server_max_parallel_maintenance_workers }} {% endif %} @@ -244,6 +228,41 @@ checkpoint_completion_target = 0.8 # checkpoint target duration, 0.0 - 1.0 #archive_timeout = 0 # force a logfile segment switch after this # number of seconds; 0 disables +# - Archive Recovery - + +# These are only used in recovery mode. + +#restore_command = '' # command to use to restore an archived logfile segment + # placeholders: %p = path of file to restore + # %f = file name only + # e.g. 'cp /mnt/server/archivedir/%f %p' + # (change requires restart) +#archive_cleanup_command = '' # command to execute at every restartpoint +#recovery_end_command = '' # command to execute at completion of recovery + +# - Recovery Target - + +# Set these only when performing a targeted recovery. + +#recovery_target = '' # 'immediate' to end recovery as soon as a + # consistent state is reached + # (change requires restart) +#recovery_target_name = '' # the named restore point to which recovery will proceed + # (change requires restart) +#recovery_target_time = '' # the time stamp up to which recovery will proceed + # (change requires restart) +#recovery_target_xid = '' # the transaction ID up to which recovery will proceed + # (change requires restart) +#recovery_target_lsn = '' # the WAL LSN up to which recovery will proceed + # (change requires restart) +#recovery_target_inclusive = on # Specifies whether to stop: + # just after the specified recovery target (on) + # just before the recovery target (off) + # (change requires restart) +#recovery_target_timeline = 'latest' # 'current', 'latest', or timeline ID + # (change requires restart) +#recovery_target_action = 'pause' # 'pause', 'promote', 'shutdown' + # (change requires restart) #------------------------------------------------------------------------------ # REPLICATION @@ -294,7 +313,6 @@ checkpoint_completion_target = 0.8 # checkpoint target duration, 0.0 - 1.0 #wal_retrieve_retry_interval = 5s # time to wait before retrying to # retrieve WAL after a failed attempt - #------------------------------------------------------------------------------ # QUERY TUNING #------------------------------------------------------------------------------ @@ -309,9 +327,14 @@ checkpoint_completion_target = 0.8 # checkpoint target duration, 0.0 - 1.0 #enable_material = on #enable_mergejoin = on #enable_nestloop = on +#enable_parallel_append = on #enable_seqscan = on #enable_sort = on #enable_tidscan = on +#enable_partitionwise_join = off +#enable_partitionwise_aggregate = off +#enable_parallel_hash = on +#enable_partition_pruning = on # - Planner Cost Constants - @@ -322,7 +345,18 @@ random_page_cost = {{ postgres_server_random_page_cost }} #cpu_operator_cost = 0.0025 # same scale as above #parallel_tuple_cost = 0.1 # same scale as above #parallel_setup_cost = 1000.0 # same scale as above -#min_parallel_relation_size = 8MB + +#jit_above_cost = 100000 # 
perform JIT compilation if available + # and query more expensive than this; + # -1 disables +#jit_inline_above_cost = 500000 # inline small functions if query is + # more expensive than this; -1 disables +#jit_optimize_above_cost = 500000 # use expensive JIT optimizations if + # query is more expensive than this; + # -1 disables + +#min_parallel_table_scan_size = 8MB +#min_parallel_index_scan_size = 512kB #effective_cache_size = 4GB # - Genetic Query Optimizer - @@ -344,6 +378,9 @@ random_page_cost = {{ postgres_server_random_page_cost }} #join_collapse_limit = 8 # 1 disables collapsing of explicit # JOIN clauses #force_parallel_mode = off +#jit = on # allow JIT compilation +#plan_cache_mode = auto # auto, force_generic_plan or + # force_custom_plan #------------------------------------------------------------------------------ @@ -480,7 +517,7 @@ log_statement = '{{ postgres_server_log_statements }}' # none, ddl, mod, all #log_temp_files = -1 # log temporary files equal or larger # than the specified size in kilobytes; # -1 disables, 0 logs all temp files -log_timezone = 'Europe/Berlin' +log_timezone = 'Etc/UTC' # - Process Title - diff --git a/Ansible/ansible_collections/jfrog/platform/roles/postgres/vars/Debian.yml b/Ansible/ansible_collections/jfrog/platform/roles/postgres/vars/Debian.yml new file mode 100644 index 0000000..122f95f --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/postgres/vars/Debian.yml @@ -0,0 +1,6 @@ +--- +postgresql_data_dir: "/var/lib/postgresql/{{ postgres_version }}/main" +postgresql_bin_path: "/usr/lib/postgresql/{{ postgres_version }}/bin" +postgresql_config_path: "/etc/postgresql/{{ postgres_version }}/main" +postgresql_daemon: postgresql@{{ postgres_version }}-main +postgresql_external_pid_file: "/var/run/postgresql/{{ postgres_version }}-main.pid" diff --git a/Ansible/ansible_collections/jfrog/platform/roles/postgres/vars/RedHat.yml b/Ansible/ansible_collections/jfrog/platform/roles/postgres/vars/RedHat.yml new file mode 100644 index 0000000..a4a5f37 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/postgres/vars/RedHat.yml @@ -0,0 +1,6 @@ +--- +postgresql_bin_path: "/usr/pgsql-{{ postgres_version }}/bin" +postgresql_data_dir: "/var/lib/pgsql/{{ postgres_version }}/data" +postgresql_config_path: "/var/lib/pgsql/{{ postgres_version }}/data" +postgresql_daemon: postgresql-{{ postgres_version }}.service +postgresql_external_pid_file: "/var/run/postgresql/{{ postgres_version }}-main.pid" diff --git a/Ansible/ansible_collections/jfrog/platform/roles/xray/README.md b/Ansible/ansible_collections/jfrog/platform/roles/xray/README.md new file mode 100644 index 0000000..c53af51 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/xray/README.md @@ -0,0 +1,26 @@ +# Xray +The xray role will install Xray software onto the host. An Artifactory server and a PostgreSQL database are required. + +### Role Variables +* _xray_upgrade_only_: Perform a software upgrade only. Default is false. + +Additional variables can be found in [defaults/main.yml](./defaults/main.yml). +## Example Playbook +``` +--- +- hosts: xray_servers + roles: + - xray +``` + +## Upgrades +The Xray role supports software upgrades. To use a role to perform a software upgrade only, use the _xray_upgrade_only_ variable and specify the version. See the following example. 
+ +``` +- hosts: xray_servers + vars: + xray_version: "{{ lookup('env', 'xray_version_upgrade') }}" + xray_upgrade_only: true + roles: + - xray +``` \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/roles/xray/defaults/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/xray/defaults/main.yml new file mode 100644 index 0000000..c57c008 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/xray/defaults/main.yml @@ -0,0 +1,77 @@ +--- +# defaults file for xray +# indicates where this collection was downloaded from (galaxy, automation_hub, standalone) +ansible_marketplace: standalone + +# whether to enable HA +xray_ha_enabled: false + +xray_ha_node_type: master + +# The location where xray should install. +jfrog_home_directory: /opt/jfrog + +# The remote xray download file +xray_tar: https://releases.jfrog.io/artifactory/jfrog-xray/xray-linux/{{ xray_version }}/jfrog-xray-{{ xray_version }}-linux.tar.gz + +# The xray install directory +xray_untar_home: "{{ jfrog_home_directory }}/jfrog-xray-{{ xray_version }}-linux" +xray_home: "{{ jfrog_home_directory }}/xray" + +xray_install_script_path: "{{ xray_home }}/app/bin" +xray_thirdparty_path: "{{ xray_home }}/app/third-party" +xray_archive_service_cmd: "{{ xray_install_script_path }}/installService.sh" + +# xray users and groups +xray_user: xray +xray_group: xray + +xray_uid: 1035 +xray_gid: 1035 + +xray_daemon: xray + +flow_type: archive + +# rabbitmq user +xray_rabbitmq_user: guest +xray_rabbitmq_password: guest +xray_rabbitmq_url: "amqp://localhost:5672/" +xray_rabbitmq_default_cookie: "XRAY_RABBITMQ_COOKIE" + +# if this is an upgrade +xray_upgrade_only: false + +xray_system_yaml_template: system.yaml.j2 + +linux_distro: "{{ ansible_distribution | lower }}{{ ansible_distribution_major_version }}" + +xray_db_util_search_filter: + ubuntu16: + db5: 'db5.3-util.*ubuntu.*amd64\.deb' + db: 'db-util.*ubuntu.*all.deb' + ubuntu18: + db5: 'db5.3-util.*ubuntu.*amd64\.deb' + db: 'db-util.*ubuntu.*all.deb' + ubuntu20: + db5: 'db5.3-util.*ubuntu.*amd64\.deb' + db: 'db-util.*ubuntu.*all.deb' + debian8: + db5: 'db5.3-util.*deb8.*amd64\.deb' + db: 'db-util_([0-9]{1,3}\.?){3}_all\.deb' + debian9: + db5: 'db5.3-util.*deb9.*amd64\.deb' + db: 'db-util_([0-9]{1,3}\.?){3}_all\.deb' + debian10: + db5: 'TBD' + db: 'db-util_([0-9]{1,3}\.?){3}.*nmu1_all\.deb' + + +yum_python_interpreter: >- + {%- if linux_distro is not defined -%} + /usr/bin/python3 + {%- elif linux_distro in ['centos7', 'rhel7'] -%} + /usr/bin/python + {%- else -%} + /usr/bin/python3 + {%- endif -%} diff --git a/Ansible/ansible_collections/jfrog/platform/roles/xray/handlers/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/xray/handlers/main.yml new file mode 100644 index 0000000..9af3a06 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/xray/handlers/main.yml @@ -0,0 +1,7 @@ +--- +# handlers file for xray +- name: restart xray + become: yes + systemd: + name: "{{ xray_daemon }}" + state: restarted diff --git a/Ansible/ansible_collections/jfrog/installers/roles/xray/meta/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/xray/meta/main.yml similarity index 86% rename from Ansible/ansible_collections/jfrog/installers/roles/xray/meta/main.yml rename to Ansible/ansible_collections/jfrog/platform/roles/xray/meta/main.yml index b2be45e..c01401f 100644 --- a/Ansible/ansible_collections/jfrog/installers/roles/xray/meta/main.yml +++ b/Ansible/ansible_collections/jfrog/platform/roles/xray/meta/main.yml @@ -1,5 +1,5 @@ 
galaxy_info: - author: "Jeff Fry " + author: "JFrog Maintainers Team " description: "The xray role will install Xray software onto the host. An Artifactory server and Postgress database is required." company: JFrog diff --git a/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/expect.yml b/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/expect.yml new file mode 100644 index 0000000..06f61dc --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/expect.yml @@ -0,0 +1,44 @@ +- name: Prepare expect scenario script + set_fact: + expect_scenario: | + set timeout 300 + spawn {{ exp_executable_cmd }} + expect_before timeout { exit 1 } + set CYCLE_END 0 + set count 0 + + while { $CYCLE_END == 0 } { + expect { + {% for each_request in exp_scenarios %} + -nocase -re {{ '{' }}{{ each_request.expecting }}.*} { + send "{{ each_request.sending }}\n" + } + {% endfor %} + eof { + set CYCLE_END 1 + } + } + set count "[expr $count + 1]" + if { $count > 16} { + exit 128 + } + } + + expect eof + lassign [wait] pid spawnid os_error_flag value + + if {$os_error_flag == 0} { + puts "INSTALLER_EXIT_STATUS-$value" + } else { + puts "INSTALLER_EXIT_STATUS-$value" + } + +- name: Interactive with expect + become: yes + ignore_errors: yes + shell: | + {{ expect_scenario }} + args: + executable: /usr/bin/expect + chdir: "{{ exp_dir }}" + register: exp_result diff --git a/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/install.yml b/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/install.yml new file mode 100644 index 0000000..d279367 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/install.yml @@ -0,0 +1,165 @@ +--- +- debug: + msg: "Performing installation of Xray version : {{ xray_version }}" + +- debug: + msg: "ansible_os_family: {{ ansible_os_family }}" + +- name: Install expect dependency + become: yes + yum: + name: expect + state: present + when: ansible_os_family == 'RedHat' + +- name: Install expect dependency + become: yes + apt: + name: expect + state: present + update_cache: yes + when: ansible_os_family == 'Debian' + +- name: Ensure group xray exist + become: yes + group: + name: "{{ xray_group }}" + gid: "{{ xray_gid }}" + state: present + +- name: Ensure user xray exist + become: yes + user: + uid: "{{ xray_uid }}" + name: "{{ xray_user }}" + group: "{{ xray_group }}" + create_home: yes + home: "{{ xray_home }}" + shell: /bin/bash + state: present + +- name: Download xray + become: yes + unarchive: + src: "{{ xray_tar }}" + dest: "{{ jfrog_home_directory }}" + remote_src: yes + owner: "{{ xray_user }}" + group: "{{ xray_group }}" + creates: "{{ xray_untar_home }}" + register: downloadxray + until: downloadxray is succeeded + retries: 3 + +- name: Check if app directory exists + become: yes + stat: + path: "{{ xray_home }}/app" + register: app_dir_check + +- name: Copy untar directory to xray home + become: yes + command: "cp -r {{ xray_untar_home }}/. 
{{ xray_home }}" + when: not app_dir_check.stat.exists + +- name: Create required directories + become: yes + file: + path: "{{ item }}" + state: directory + recurse: yes + owner: "{{ xray_user }}" + group: "{{ xray_group }}" + loop: + - "{{ xray_home }}/var/etc" + - "{{ xray_home }}/var/etc/info/" + - "{{ xray_home }}/var/etc/security/" + +- name: Configure master key + become: yes + copy: + dest: "{{ xray_home }}/var/etc/security/master.key" + content: | + {{ master_key }} + owner: "{{ xray_user }}" + group: "{{ xray_group }}" + mode: 0640 + +- name: Setup rabbitmq + import_tasks: rabbitmq/setup/RedHat.yml + when: ansible_os_family == 'RedHat' + +- name: Setup rabbitmq + import_tasks: rabbitmq/setup/Debian.yml + when: ansible_os_family == 'Debian' + +- name: Check if install.sh wrapper script exist + become: yes + stat: + path: "{{ xray_install_script_path }}/install.sh" + register: install_wrapper_script + +- name: Include interactive installer scripts + include_vars: script/archive.yml + +- name: Install xray + include_tasks: expect.yml + vars: + exp_executable_cmd: "./install.sh -u {{ xray_user }} -g {{ xray_group }}" + exp_dir: "{{ xray_install_script_path }}" + exp_scenarios: "{{ xray_installer_scenario['main'] }}" + args: + apply: + environment: + YQ_PATH: "{{ xray_thirdparty_path }}/yq" + when: install_wrapper_script.stat.exists + ignore_errors: yes + +- name: Configure rabbitmq config + become: yes + template: + src: "rabbitmq.conf.j2" + dest: "{{ xray_home }}/app/bin/rabbitmq/rabbitmq.conf" + notify: restart xray + +- name: Configure systemyaml + become: yes + template: + src: "{{ xray_system_yaml_template }}" + dest: "{{ xray_home }}/var/etc/system.yaml" + notify: restart xray + +- name: Configure installer info + become: yes + template: + src: installer-info.json.j2 + dest: "{{ xray_home }}/var/etc/info/installer-info.json" + notify: restart xray + +- name: Ensure permissions are correct + become: yes + file: + path: "{{ jfrog_home_directory }}" + state: directory + owner: "{{ xray_user }}" + group: "{{ xray_group }}" + recurse: yes + +- name: Install xray as a service + become: yes + shell: | + {{ xray_archive_service_cmd }} + args: + chdir: "{{ xray_install_script_path }}" + register: check_service_status_result + ignore_errors: yes + +- name: Restart xray + meta: flush_handlers + +- name : Wait for xray to be fully deployed + uri: url=http://127.0.0.1:8082/router/api/v1/system/health timeout=130 + register: result + until: result.status == 200 + retries: 25 + delay: 5 diff --git a/Ansible/ansible_collections/jfrog/installers/roles/xray/tasks/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/main.yml similarity index 100% rename from Ansible/ansible_collections/jfrog/installers/roles/xray/tasks/main.yml rename to Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/main.yml diff --git a/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/rabbitmq/check/archive.yml b/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/rabbitmq/check/archive.yml new file mode 100644 index 0000000..528b474 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/rabbitmq/check/archive.yml @@ -0,0 +1,63 @@ +- name: Check rabbitmq cluster_keepalive_interval option + become: yes + ignore_errors: yes + shell: | + ./rabbitmqctl --erlang-cookie {{ xray_rabbitmq_default_cookie }} eval \ + 'application:get_env(rabbit, cluster_keepalive_interval).' 
\ + | tr -d '}{' | cut -d ',' -f2 + args: + chdir: "{{ xray_home }}/app/third-party/rabbitmq/sbin/" + environment: + LC_ALL: en_US.UTF-8 + LC_CTYPE: en_US.UTF-8 + register: cluster_keepalive_interval_value + +- name: Check rabbitmq handshake_timeout option + become: yes + ignore_errors: yes + shell: | + ./rabbitmqctl --erlang-cookie {{ xray_rabbitmq_default_cookie }} eval \ + 'application:get_env(rabbit, handshake_timeout).' \ + | tr -d '}{' | cut -d ',' -f2 + args: + chdir: "{{ xray_home }}/app/third-party/rabbitmq/sbin/" + environment: + LC_ALL: en_US.UTF-8 + LC_CTYPE: en_US.UTF-8 + register: handshake_timeout_value + +- name: Check rabbitmq vm_memory_high_watermark.relative option + become: yes + ignore_errors: yes + shell: | + ./rabbitmqctl --erlang-cookie {{ xray_rabbitmq_default_cookie }} eval \ + 'application:get_env(rabbit, vm_memory_high_watermark).' \ + | tr -d '}{' | cut -d ',' -f2 + args: + chdir: "{{ xray_home }}/app/third-party/rabbitmq/sbin/" + environment: + LC_ALL: en_US.UTF-8 + LC_CTYPE: en_US.UTF-8 + register: vm_memory_high_watermark_relative_value + +- name: Store result + include_role: + name: report + vars: + stop_testing_if_fail: false + test_description: "{{ test_ext_description }}Check rabbitmq custom options values. INST-775" + test_host: "{{ inventory_hostname }}" + test_result: >- + {{ + vm_memory_high_watermark_relative_value.stdout == rabbitmq_custom_values['vm_memory_high_watermark'] + and cluster_keepalive_interval_value.stdout == rabbitmq_custom_values['cluster_keepalive_interval'] + and handshake_timeout_value.stdout == rabbitmq_custom_values['handshake_timeout'] + }} + report_action: "store-result" + log_result: >- + {{ + {} + | combine({'handshake_timeout': {'real': handshake_timeout_value.stdout, 'expected': rabbitmq_custom_values.handshake_timeout}}) + | combine({'vm_memory_high_watermark': {'real': vm_memory_high_watermark_relative_value.stdout, 'expected': rabbitmq_custom_values.vm_memory_high_watermark}}) + | combine({'cluster_keepalive_interval': {'real': cluster_keepalive_interval_value.stdout, 'expected': rabbitmq_custom_values.cluster_keepalive_interval}}) + }} diff --git a/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/rabbitmq/setup/Debian.yml b/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/rabbitmq/setup/Debian.yml new file mode 100644 index 0000000..ca527e8 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/rabbitmq/setup/Debian.yml @@ -0,0 +1,102 @@ +- name: Find libssl package + find: + paths: "{{ xray_home }}/app/third-party/rabbitmq/" + patterns: "^libssl.+\\.deb$" + use_regex: yes + file_type: file + register: check_libssl_package_result + +- name: Set libssl package file name + set_fact: + xray_libssl_package: "{{ check_libssl_package_result.files[0].path }}" + when: check_libssl_package_result.matched > 0 + +- name: Install libssl package + become: yes + apt: + deb: "{{ xray_libssl_package }}" + register: install_libssl_package_result + when: + - ansible_distribution_release == 'xenial' + - check_libssl_package_result.matched > 0 + +- name: Find socat package + find: + paths: "{{ xray_home }}/app/third-party/rabbitmq/" + patterns: "^socat.+\\.deb$" + use_regex: yes + file_type: file + register: check_socat_package_result + +- name: Set socat package file name + set_fact: + xray_socat_package: "{{ check_socat_package_result.files[0].path }}" + when: check_socat_package_result.matched > 0 + +- name: Install socat package + become: yes + ignore_errors: yes + apt: + deb: "{{ 
xray_socat_package }}" + register: install_socat_package_result + when: check_socat_package_result.matched > 0 + +- name: Find erlang package + find: + paths: "{{ xray_home }}/app/third-party/rabbitmq/" + patterns: "^(esl-)?erlang.+{{ ansible_distribution_release }}.+\\.deb$" + use_regex: yes + file_type: file + register: check_erlang_package_result + +- name: Set erlang package file name + set_fact: + xray_erlang_package: "{{ check_erlang_package_result.files[0].path }}" + when: check_erlang_package_result.matched > 0 + +- name: Install erlang package + become: yes + apt: + deb: "{{ xray_erlang_package }}" + register: install_erlang_package_result + when: check_erlang_package_result.matched > 0 + +- name: Find db5-util package + find: + paths: "{{ xray_home }}/app/third-party/misc/" + patterns: ["{{ xray_db_util_search_filter[linux_distro]['db5'] }}"] + use_regex: yes + file_type: file + register: check_db5_util_package_result + +- name: Set db5-util package file name + set_fact: + xray_db5_util_package: "{{ check_db5_util_package_result.files[0].path }}" + when: check_db5_util_package_result.matched > 0 + +- name: Install db5-util package + become: yes + apt: + deb: "{{ xray_db5_util_package }}" + register: install_db5_util_package_result + when: check_db5_util_package_result.matched > 0 + +- name: Find db-util package + find: + paths: "{{ xray_home }}/app/third-party/misc/" + patterns: ["{{ xray_db_util_search_filter[linux_distro]['db'] }}"] + use_regex: yes + file_type: file + register: check_db_util_package_result + +- name: Set db-util package file name + set_fact: + xray_db_util_package: "{{ check_db_util_package_result.files[0].path }}" + when: check_db_util_package_result.matched > 0 + +- name: Install db-util package + become: yes + apt: + deb: "{{ xray_db_util_package }}" + register: install_db_util_package_result + when: check_db_util_package_result.matched > 0 diff --git a/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/rabbitmq/setup/RedHat.yml b/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/rabbitmq/setup/RedHat.yml new file mode 100644 index 0000000..89fde95 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/rabbitmq/setup/RedHat.yml @@ -0,0 +1,59 @@ +- name: Set package prefix + set_fact: + rhel_package_prefix: >- + {%- if linux_distro in ['centos7','rhel7'] -%} + el7 + {%- elif linux_distro in ['centos8','rhel8'] -%} + el8 + {%- endif -%} + +- debug: + msg: "rhel_package_prefix: {{ rhel_package_prefix }}" + +- name: Find socat package + become: yes + find: + paths: "{{ xray_home }}/app/third-party/rabbitmq/" + patterns: "^socat.+{{ rhel_package_prefix }}.+\\.rpm$" + use_regex: yes + file_type: file + register: check_socat_package_result + +- name: Set socat package file name + set_fact: + xray_socat_package: "{{ check_socat_package_result.files[0].path }}" + when: check_socat_package_result.matched > 0 + +- name: Install socat package + become: yes + yum: + name: "{{ xray_socat_package }}" + state: present + vars: + ansible_python_interpreter: "{{ yum_python_interpreter }}" + register: install_socat_package_result + when: check_socat_package_result.matched > 0 + +- name: Find erlang package + become: yes + find: + paths: "{{ xray_home }}/app/third-party/rabbitmq/" + patterns: "^(esl-)?erlang.+{{ rhel_package_prefix }}.+\\.rpm$" + use_regex: yes + file_type: file + register: check_erlang_package_result + +- name: Set erlang package file name + set_fact: + xray_erlang_package: "{{ 
check_erlang_package_result.files[0].path }}" + when: check_erlang_package_result.matched > 0 + +- name: Install erlang package + become: yes + yum: + name: "{{ xray_erlang_package }}" + state: present + vars: + ansible_python_interpreter: "{{ yum_python_interpreter }}" + register: install_erlang_package_result + when: check_erlang_package_result.matched > 0 diff --git a/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/rabbitmq/status/archive.yml b/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/rabbitmq/status/archive.yml new file mode 100644 index 0000000..3567e4a --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/rabbitmq/status/archive.yml @@ -0,0 +1,12 @@ +- name: Get rabbitmq ha cluster status + become: yes + ignore_errors: yes + shell: | + ./rabbitmqctl --erlang-cookie {{ xray_rabbitmq_default_cookie }} \ + --formatter json cluster_status | jq . + args: + chdir: "{{ xray_home }}/app/third-party/rabbitmq/sbin/" + environment: + LC_ALL: en_US.UTF-8 + LC_CTYPE: en_US.UTF-8 + register: ha_rabbitmq_cluster_status diff --git a/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/rabbitmq/upgrade/Debian.yml b/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/rabbitmq/upgrade/Debian.yml new file mode 100644 index 0000000..4441abc --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/rabbitmq/upgrade/Debian.yml @@ -0,0 +1,20 @@ +- name: Find erlang package + find: + paths: "{{ xray_home }}/app/third-party/rabbitmq/" + patterns: "^(esl-)?erlang.+{{ ansible_distribution_release }}.+\\.deb$" + use_regex: yes + file_type: file + register: check_erlang_package_result + +- name: Set erlang package file name + set_fact: + xray_erlang_package: "{{ check_erlang_package_result.files[0].path }}" + when: check_erlang_package_result.matched > 0 + +- name: Install erlang package + become: yes + apt: + deb: "{{ xray_erlang_package }}" + state: present + register: install_erlang_package_result + when: check_erlang_package_result.matched > 0 diff --git a/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/rabbitmq/upgrade/RedHat.yml b/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/rabbitmq/upgrade/RedHat.yml new file mode 100644 index 0000000..4bb43ce --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/rabbitmq/upgrade/RedHat.yml @@ -0,0 +1,32 @@ +- name: Set package prefix + set_fact: + rhel_package_prefix: >- + {%- if linux_distro in ['centos7','rhel7'] -%} + el7 + {%- elif linux_distro in ['centos8','rhel8'] -%} + el8 + {%- endif -%} + +- name: Find erlang package + become: yes + find: + paths: "{{ xray_home }}/app/third-party/rabbitmq/" + patterns: "^(esl-)?erlang.+{{ rhel_package_prefix }}.+\\.rpm$" + use_regex: yes + file_type: file + register: check_erlang_package_result + +- name: Set erlang package file name + set_fact: + xray_erlang_package: "{{ check_erlang_package_result.files[0].path }}" + when: check_erlang_package_result.matched > 0 + +- name: Install erlang package + become: yes + yum: + name: "{{ xray_erlang_package }}" + state: present + vars: + ansible_python_interpreter: "{{ yum_python_interpreter }}" + register: install_erlang_package_result + when: check_erlang_package_result.matched > 0 diff --git a/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/upgrade.yml b/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/upgrade.yml new file mode 100644 index 0000000..31279b4 --- /dev/null +++ 
b/Ansible/ansible_collections/jfrog/platform/roles/xray/tasks/upgrade.yml @@ -0,0 +1,111 @@ +--- +- debug: + msg: "Performing upgrade of Xray version to {{ xray_version }}..." + +- name: Stop xray + become: yes + systemd: + name: "{{ xray_daemon }}" + state: stopped + +- name: Download xray for upgrade + become: yes + unarchive: + src: "{{ xray_tar }}" + dest: "{{ jfrog_home_directory }}" + remote_src: yes + owner: "{{ xray_user }}" + group: "{{ xray_group }}" + creates: "{{ xray_untar_home }}" + register: downloadxray + until: downloadxray is succeeded + retries: 3 + +- name: Delete xray app + become: yes + file: + path: "{{ xray_home }}/app" + state: absent + +- name: Copy new app to xray app + become: yes + command: "cp -r {{ xray_untar_home }}/app/. {{ xray_home }}/app" + +- name: Upgrade rabbitmq + import_tasks: rabbitmq/upgrade/RedHat.yml + when: ansible_os_family == 'RedHat' + +- name: Upgrade rabbitmq + import_tasks: rabbitmq/upgrade/Debian.yml + when: ansible_os_family == 'Debian' + +- name: Check if install.sh wrapper script exists + become: yes + stat: + path: "{{ xray_install_script_path }}/install.sh" + register: install_wrapper_script + +- name: Include interactive installer scripts + include_vars: script/archive.yml + +- name: Upgrade xray + include_tasks: expect.yml + vars: + exp_executable_cmd: "./install.sh -u {{ xray_user }} -g {{ xray_group }}" + exp_dir: "{{ xray_install_script_path }}" + exp_scenarios: "{{ xray_installer_scenario['main'] }}" + args: + apply: + environment: + YQ_PATH: "{{ xray_thirdparty_path }}/yq" + when: install_wrapper_script.stat.exists + ignore_errors: yes + +- name: Configure rabbitmq config + become: yes + template: + src: "rabbitmq.conf.j2" + dest: "{{ xray_home }}/app/bin/rabbitmq/rabbitmq.conf" + notify: restart xray + +- name: Configure systemyaml + become: yes + template: + src: "{{ xray_system_yaml_template }}" + dest: "{{ xray_home }}/var/etc/system.yaml" + notify: restart xray + +- name: Configure installer info + become: yes + template: + src: installer-info.json.j2 + dest: "{{ xray_home }}/var/etc/info/installer-info.json" + notify: restart xray + +- name: Ensure permissions are correct + become: yes + file: + path: "{{ jfrog_home_directory }}" + state: directory + owner: "{{ xray_user }}" + group: "{{ xray_group }}" + recurse: yes + +- name: Install xray as a service + become: yes + shell: | + {{ xray_archive_service_cmd }} + args: + chdir: "{{ xray_install_script_path }}" + register: check_service_status_result + ignore_errors: yes + +- name: Restart xray + meta: flush_handlers + +- name: Wait for xray to be fully deployed + uri: url=http://127.0.0.1:8082/router/api/v1/system/health timeout=130 + register: result + until: result.status == 200 + retries: 25 + delay: 5 \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/roles/xray/templates/installer-info.json.j2 b/Ansible/ansible_collections/jfrog/platform/roles/xray/templates/installer-info.json.j2 new file mode 100644 index 0000000..00c97cb --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/xray/templates/installer-info.json.j2 @@ -0,0 +1,9 @@ +{{ ansible_managed | comment }} +{ + "productId": "Ansible_Xray/{{ platform_collection_version }}-{{ xray_version }}", + "features": [ + { + "featureId": "Channel/{{ ansible_marketplace }}" + } + ] +} \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/platform/roles/xray/templates/rabbitmq.conf.j2 
b/Ansible/ansible_collections/jfrog/platform/roles/xray/templates/rabbitmq.conf.j2 new file mode 100644 index 0000000..2acca65 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/xray/templates/rabbitmq.conf.j2 @@ -0,0 +1,8 @@ +loopback_users.guest = false +listeners.tcp.default = 5672 +hipe_compile = false +management.listener.port = 15672 +management.listener.ssl = false +cluster_partition_handling = autoheal +default_user = {{ xray_rabbitmq_user }} +default_pass = {{ xray_rabbitmq_password }} diff --git a/Ansible/ansible_collections/jfrog/platform/roles/xray/templates/system.yaml.j2 b/Ansible/ansible_collections/jfrog/platform/roles/xray/templates/system.yaml.j2 new file mode 100644 index 0000000..e51192b --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/xray/templates/system.yaml.j2 @@ -0,0 +1,24 @@ +configVersion: 1 +shared: + jfrogUrl: {{ jfrog_url }} + node: + ip: {{ ansible_host }} + id: {{ ansible_date_time.iso8601_micro | to_uuid }} + database: + type: "{{ xray_db_type }}" + driver: "{{ xray_db_driver }}" + url: "{{ xray_db_url }}" + username: "{{ xray_db_user }}" + password: "{{ xray_db_password }}" + rabbitMq: + autoStop: true + erlangCookie: + value: "{{ xray_rabbitmq_default_cookie }}" + url: "{{ xray_rabbitmq_url }}" + username: "{{ xray_rabbitmq_user }}" + password: "{{ xray_rabbitmq_password }}" + security: + joinKey: {{ join_key }} +router: + entrypoints: + internalPort: 8046 \ No newline at end of file diff --git a/Ansible/ansible_collections/jfrog/installers/roles/xray/vars/main.yml b/Ansible/ansible_collections/jfrog/platform/roles/xray/vars/main.yml similarity index 100% rename from Ansible/ansible_collections/jfrog/installers/roles/xray/vars/main.yml rename to Ansible/ansible_collections/jfrog/platform/roles/xray/vars/main.yml diff --git a/Ansible/ansible_collections/jfrog/platform/roles/xray/vars/script/archive.yml b/Ansible/ansible_collections/jfrog/platform/roles/xray/vars/script/archive.yml new file mode 100644 index 0000000..1a84f91 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/roles/xray/vars/script/archive.yml @@ -0,0 +1,54 @@ +xray_installer_scenario: + main: + - { + "expecting": "have you disconnected artifactory xray pairings", + "sending": "y" + } + - { + "expecting": "(data|installation) directory \\(", + "sending": "{{ xray_home }}" + } + - { + "expecting": "jfrog url( \\(.+\\))?:(?!.*Skipping prompt)", + "sending": "{{ jfrog_url }}" + } + - { + "expecting": "join key:(?!.*Skipping prompt)", + "sending": "{{ join_key }}" + } + - { + "expecting": "please specify the ip address of this machine(?!.*Skipping prompt)", + "sending": "{{ ansible_host }}" + } + - { + "expecting": "are you adding an additional node", + "sending": "{% if xray_ha_node_type is defined and xray_ha_node_type == 'master' %}n{% else %}y{% endif %}" + } + - { + "expecting": "do you want to install postgresql", + "sending": "n" + } + - { + "expecting": "(postgresql|database) url", + "sending": "{{ xray_db_url }}" + } + - { + "expecting": "(postgresql|database) password", + "sending": "{{ xray_db_password }}" + } + - { + "expecting": "(postgresql|database) username", + "sending": "{{ xray_db_user }}" + } + - { + "expecting": "confirm database password", + "sending": "{{ xray_db_password }}" + } + - { + "expecting": "rabbitmq active node name:", + "sending": "{{ ansible_machine_id }}" + } + - { + "expecting": "rabbitmq active 
node ip:", + "sending": "{{ ansible_host }}" + } diff --git a/Ansible/ansible_collections/jfrog/platform/xray.yml b/Ansible/ansible_collections/jfrog/platform/xray.yml new file mode 100644 index 0000000..62d7ca9 --- /dev/null +++ b/Ansible/ansible_collections/jfrog/platform/xray.yml @@ -0,0 +1,4 @@ +--- +- hosts: xray_servers + roles: + - xray diff --git a/Ansible/examples/host_vars/rt-ha/hosts.yml b/Ansible/examples/host_vars/rt-ha/hosts.yml deleted file mode 100644 index 5a702ac..0000000 --- a/Ansible/examples/host_vars/rt-ha/hosts.yml +++ /dev/null @@ -1,52 +0,0 @@ ---- -all: - vars: - ansible_user: "ubuntu" - ansible_ssh_private_key_file: "{{ lookup('env', 'ansible_key') }}" - children: - database: - hosts: - #artifactory database - 52.86.32.79: - db_users: - - { db_user: "artifactory", db_password: "{{ lookup('env', 'artifactory_password') }}" } - dbs: - - { db_name: "artifactory", db_owner: "artifactory" } - artifactory: - vars: - artifactory_version: 7.4.1 - artifactory_ha_enabled: true - master_key: "c97b862469de0d94fbb7d48130637a5a" - join_key: "9bcca98f375c0728d907cc6ee39d4f02" - db_download_url: "https://jdbc.postgresql.org/download/postgresql-42.2.12.jar" - db_type: "postgresql" - db_driver: "org.postgresql.Driver" - db_url: "jdbc:postgresql://10.0.0.160:5432/artifactory" - db_user: "artifactory" - db_password: "{{ lookup('env', 'artifactory_password') }}" - server_name: "ec2-100-25-104-198.compute-1.amazonaws.com" - certificate: | - -----BEGIN CERTIFICATE----- - x - -----END CERTIFICATE----- - certificate_key: | - -----BEGIN PRIVATE KEY----- - x - -----END PRIVATE KEY----- - children: - primary: - hosts: - 100.25.104.198: - artifactory_is_primary: true - artifactory_license1: x - artifactory_license2: x - artifactory_license3: x - artifactory_license4: x - artifactory_license5: x - secondary: - hosts: - 54.160.107.157: - 35.153.79.44: - vars: - artifactory_is_primary: false - diff --git a/Ansible/examples/host_vars/rt-xray-ha/hosts.yml b/Ansible/examples/host_vars/rt-xray-ha/hosts.yml deleted file mode 100644 index cbb3ef7..0000000 --- a/Ansible/examples/host_vars/rt-xray-ha/hosts.yml +++ /dev/null @@ -1,57 +0,0 @@ ---- -all: - vars: - ansible_user: "ubuntu" - ansible_ssh_private_key_file: "{{ lookup('env', 'ansible_key') }}" - children: - database: - hosts: - #artifactory database - 52.86.32.79: - dbs: - - { db_name: "artifactory", db_owner: "artifactory" } - db_users: - - { db_user: "artifactory", db_password: "{{ lookup('env', 'artifactory_password') }}" } - #xray database - 100.25.152.93: - dbs: - - { db_name: "xraydb", db_owner: "xray" } - db_users: - - { db_user: "xray", db_password: "{{ lookup('env', 'xray_password') }}" } - artifactory: - vars: - artifactory_version: 7.4.1 - artifactory_ha_enabled: true - master_key: "c97b862469de0d94fbb7d48130637a5a" - join_key: "9bcca98f375c0728d907cc6ee39d4f02" - db_download_url: "https://jdbc.postgresql.org/download/postgresql-42.2.12.jar" - db_type: "postgresql" - db_driver: "org.postgresql.Driver" - db_url: "jdbc:postgresql://10.0.0.51:5432/artifactory" - db_user: "artifactory" - db_password: "{{ lookup('env', 'artifactory_password') }}" - server_name: "ec2-18-210-33-94.compute-1.amazonaws.com" - children: - primary: - hosts: - 18.210.33.94: - artifactory_is_primary: true - artifactory_license1: x - artifactory_license2: x - artifactory_license3: x - artifactory_license4: x - artifactory_license5: x - xray: - vars: - xray_version: 3.3.0 - jfrog_url: http://ec2-18-210-33-94.compute-1.amazonaws.com - master_key: 
"c97b862469de0d94fbb7d48130637a5a" - join_key: "9bcca98f375c0728d907cc6ee39d4f02" - db_type: "postgresql" - db_driver: "org.postgresql.Driver" - db_url: "postgres://10.0.0.5:5432/xraydb?sslmode=disable" - db_user: "xray" - db_password: "{{ lookup('env', 'xray_password') }}" - hosts: -# 34.229.56.166: - 54.237.68.180 diff --git a/Ansible/examples/host_vars/rt-xray/hosts.yml b/Ansible/examples/host_vars/rt-xray/hosts.yml deleted file mode 100644 index 8a844a5..0000000 --- a/Ansible/examples/host_vars/rt-xray/hosts.yml +++ /dev/null @@ -1,45 +0,0 @@ ---- -all: - vars: - ansible_user: "ubuntu" - ansible_ssh_private_key_file: "{{ lookup('env', 'ansible_key') }}" - children: - database: - hosts: - 34.239.107.0: - dbs: - - { db_name: "artifactory", db_owner: "artifactory" } - - { db_name: "xraydb", db_owner: "xray" } - db_users: - - { db_user: "artifactory", db_password: "{{ lookup('env', 'artifactory_password') }}" } - - { db_user: "xray", db_password: "{{ lookup('env', 'xray_password') }}" } - artifactory: - hosts: - 54.237.207.135: - artifactory_version: 7.4.1 - artifactory_license1: x - artifactory_license2: x - artifactory_license3: x - artifactory_license4: x - artifactory_license5: x - master_key: "c97b862469de0d94fbb7d48130637a5a" - join_key: "9bcca98f375c0728d907cc6ee39d4f02" - db_download_url: "https://jdbc.postgresql.org/download/postgresql-42.2.12.jar" - db_type: "postgresql" - db_driver: "org.postgresql.Driver" - db_url: "jdbc:postgresql://10.0.0.59:5432/artifactory" - db_user: "artifactory" - db_password: "{{ lookup('env', 'artifactory_password') }}" - server_name: "ec2-54-237-207-135.compute-1.amazonaws.com" - xray: - hosts: - 100.25.104.174: - xray_version: 3.3.0 - jfrog_url: "http://ec2-54-237-207-135.compute-1.amazonaws.com" - master_key: "c97b862469de0d94fbb7d48130637a5a" - join_key: "9bcca98f375c0728d907cc6ee39d4f02" - db_type: "postgresql" - db_driver: "org.postgresql.Driver" - db_url: "postgres://10.0.0.59:5432/xraydb?sslmode=disable" - db_user: "xray" - db_password: "{{ lookup('env', 'xray_password') }}" diff --git a/Ansible/examples/host_vars/rt/hosts.yml b/Ansible/examples/host_vars/rt/hosts.yml deleted file mode 100644 index f030ff6..0000000 --- a/Ansible/examples/host_vars/rt/hosts.yml +++ /dev/null @@ -1,25 +0,0 @@ ---- -all: - vars: - ansible_user: "ubuntu" - children: - database: - hosts: - 54.83.163.100: - db_users: - - { db_user: "artifactory", db_password: "{{ lookup('env', 'artifactory_password') }}" } - dbs: - - { db_name: "artifactory", db_owner: "artifactory" } - primary: - hosts: - 54.165.47.191: - artifactory_version: 7.4.1 - artifactory_is_primary: true - artifactory_license_file: "{{ lookup('env', 'artifactory_license_file') }}" - db_download_url: "https://jdbc.postgresql.org/download/postgresql-42.2.12.jar" - db_type: "postgresql" - db_driver: "org.postgresql.Driver" - db_url: "jdbc:postgresql://10.0.0.219:5432/artifactory" - db_user: "artifactory" - db_password: "{{ lookup('env', 'artifactory_password') }}" - server_name: "ec2-54-165-47-191.compute-1.amazonaws.com" \ No newline at end of file diff --git a/Ansible/examples/host_vars/ssl/hosts.yml b/Ansible/examples/host_vars/ssl/hosts.yml deleted file mode 100644 index eaf20e1..0000000 --- a/Ansible/examples/host_vars/ssl/hosts.yml +++ /dev/null @@ -1,40 +0,0 @@ ---- -all: - vars: - ansible_user: "ubuntu" - ansible_ssh_private_key_file: "{{ lookup('env', 'ansible_key') }}" - children: - database: - hosts: - 52.86.32.79: - db_users: - - { db_user: "artifactory", db_password: "{{ lookup('env', 
'artifactory_password') }}" } - dbs: - { db_name: "artifactory", db_owner: "artifactory" } - primary: - hosts: - 100.25.104.198: - artifactory_version: 7.4.1 - artifactory_is_primary: true - artifactory_license1: x - artifactory_license2: x - artifactory_license3: x - artifactory_license4: x - artifactory_license5: x - master_key: "c97b862469de0d94fbb7d48130637a5a" - join_key: "9bcca98f375c0728d907cc6ee39d4f02" - db_download_url: "https://jdbc.postgresql.org/download/postgresql-42.2.12.jar" - db_type: "postgresql" - db_driver: "org.postgresql.Driver" - db_url: "jdbc:postgresql://10.0.0.160:5432/artifactory" - db_user: "artifactory" - db_password: "{{ lookup('env', 'artifactory_password') }}" - server_name: "ec2-100-25-104-198.compute-1.amazonaws.com" - certificate: | - -----BEGIN CERTIFICATE----- - x - -----END CERTIFICATE----- - certificate_key: | - -----BEGIN PRIVATE KEY----- - x - -----END PRIVATE KEY----- diff --git a/Ansible/examples/host_vars/xray/hosts.yml b/Ansible/examples/host_vars/xray/hosts.yml deleted file mode 100644 index e48a9fd..0000000 --- a/Ansible/examples/host_vars/xray/hosts.yml +++ /dev/null @@ -1,18 +0,0 @@ ---- -all: - vars: - ansible_user: "centos" - children: - xray: - vars: - xray_version: 3.3.0 - jfrog_url: http://ec2-18-210-33-94.compute-1.amazonaws.com - master_key: "c97b862469de0d94fbb7d48130637a5a" - join_key: "9bcca98f375c0728d907cc6ee39d4f02" - db_type: "postgresql" - db_driver: "org.postgresql.Driver" - db_url: "postgres://10.0.0.5:5432/xraydb?sslmode=disable" - db_user: "xray" - db_password: "{{ lookup('env', 'xray_password') }}" - hosts: - 3.17.132.222 diff --git a/Ansible/examples/inventory/platform/hosts.ini b/Ansible/examples/inventory/platform/hosts.ini new file mode 100644 index 0000000..ae5095a --- /dev/null +++ b/Ansible/examples/inventory/platform/hosts.ini @@ -0,0 +1,15 @@ +# Replace x.x.x.x with public IPs of servers +[postgres_servers] +postgres-1 ansible_host=x.x.x.x + +[artifactory_servers] +artifactory-1 ansible_host=x.x.x.x + +[xray_servers] +xray-1 ansible_host=x.x.x.x + +[distribution_servers] +distribution-1 ansible_host=x.x.x.x + +[missioncontrol_servers] +missioncontrol-1 ansible_host=x.x.x.x diff --git a/Ansible/examples/inventory/rt-xray/hosts.ini b/Ansible/examples/inventory/rt-xray/hosts.ini new file mode 100644 index 0000000..139157a --- /dev/null +++ b/Ansible/examples/inventory/rt-xray/hosts.ini @@ -0,0 +1,9 @@ +# Replace x.x.x.x with public IPs of servers +[postgres_servers] +postgres-1 ansible_host=x.x.x.x + +[artifactory_servers] +artifactory-1 ansible_host=x.x.x.x + +[xray_servers] +xray-1 ansible_host=x.x.x.x diff --git a/Ansible/examples/inventory/rt/hosts.ini b/Ansible/examples/inventory/rt/hosts.ini new file mode 100644 index 0000000..02aef40 --- /dev/null +++ b/Ansible/examples/inventory/rt/hosts.ini @@ -0,0 +1,6 @@ +# Replace x.x.x.x with public IPs of servers +[postgres_servers] +postgres-1 ansible_host=x.x.x.x + +[artifactory_servers] +artifactory-1 ansible_host=x.x.x.x diff --git a/Ansible/examples/inventory/xray/hosts.ini b/Ansible/examples/inventory/xray/hosts.ini new file mode 100644 index 0000000..94f6a8f --- /dev/null +++ b/Ansible/examples/inventory/xray/hosts.ini @@ -0,0 +1,6 @@ +# Replace x.x.x.x with public IPs of servers +[postgres_servers] +postgres-1 ansible_host=x.x.x.x + +[xray_servers] +xray-1 ansible_host=x.x.x.x diff --git a/Ansible/examples/playbook-platform.yml b/Ansible/examples/playbook-platform.yml new file mode 100644 index 0000000..f9c6f0e --- /dev/null +++
b/Ansible/examples/playbook-platform.yml @@ -0,0 +1,30 @@ +--- +- hosts: postgres_servers + collections: + - jfrog.platform + roles: + - postgres + +- hosts: artifactory_servers + collections: + - jfrog.platform + roles: + - artifactory + +- hosts: xray_servers + collections: + - jfrog.platform + roles: + - xray + +- hosts: distribution_servers + collections: + - jfrog.platform + roles: + - distribution + +- hosts: missioncontrol_servers + collections: + - jfrog.platform + roles: + - missioncontrol diff --git a/Ansible/examples/playbook-rt-ha.yml b/Ansible/examples/playbook-rt-ha.yml deleted file mode 100644 index 57fc65c..0000000 --- a/Ansible/examples/playbook-rt-ha.yml +++ /dev/null @@ -1,13 +0,0 @@ ---- -- hosts: database - collections: - - jfrog.installers - roles: - - postgres - -- hosts: primary:secondary - collections: - - jfrog.installers - roles: - - artifactory - - artifactory_nginx_ssl \ No newline at end of file diff --git a/Ansible/examples/playbook-rt-xray.yml b/Ansible/examples/playbook-rt-xray.yml index 6081198..6a1b9b3 100644 --- a/Ansible/examples/playbook-rt-xray.yml +++ b/Ansible/examples/playbook-rt-xray.yml @@ -1,18 +1,18 @@ --- -- hosts: database +- hosts: postgres_servers collections: - - jfrog.installers + - jfrog.platform roles: - postgres -- hosts: artifactory +- hosts: artifactory_servers collections: - - jfrog.installers + - jfrog.platform roles: - artifactory -- hosts: xray +- hosts: xray_servers collections: - - jfrog.installers + - jfrog.platform roles: - xray \ No newline at end of file diff --git a/Ansible/examples/playbook-rt.yml b/Ansible/examples/playbook-rt.yml index 72ffbec..1761a97 100644 --- a/Ansible/examples/playbook-rt.yml +++ b/Ansible/examples/playbook-rt.yml @@ -1,12 +1,12 @@ --- -- hosts: database +- hosts: postgres_servers collections: - - jfrog.installers + - jfrog.platform roles: - postgres -- hosts: primary +- hosts: artifactory_servers collections: - - jfrog.installers + - jfrog.platform roles: - artifactory diff --git a/Ansible/examples/playbook-ssl.yml b/Ansible/examples/playbook-ssl.yml deleted file mode 100644 index 7370111..0000000 --- a/Ansible/examples/playbook-ssl.yml +++ /dev/null @@ -1,13 +0,0 @@ ---- -- hosts: database - collections: - - jfrog.installers - roles: - - postgres - -- hosts: primary - collections: - - jfrog.installers - roles: - - artifactory - - artifactory_nginx_ssl diff --git a/Ansible/examples/playbook-xray.yml b/Ansible/examples/playbook-xray.yml index 3f0e5a4..ba6a8b6 100644 --- a/Ansible/examples/playbook-xray.yml +++ b/Ansible/examples/playbook-xray.yml @@ -1,6 +1,12 @@ --- -- hosts: xray +- hosts: postgres_servers collections: - - jfrog.installers + - jfrog.platform roles: - - xray \ No newline at end of file + - postgres + +- hosts: xray_servers + collections: + - jfrog.platform + roles: + - xray diff --git a/Ansible/infra/aws/lb-rt-xray-ha-centos78.json b/Ansible/infra/aws/lb-rt-xray-ha-centos78.json deleted file mode 100644 index 73859a8..0000000 --- a/Ansible/infra/aws/lb-rt-xray-ha-centos78.json +++ /dev/null @@ -1,769 +0,0 @@ -{ - "Description": "This template deploys a VPC, with a pair of public and private subnets spread across two Availability Zones. It deploys an internet gateway, with a default route on the public subnets.
It deploys a pair of NAT gateways (one in each AZ), and default routes for them in the private subnets.", - "Parameters": { - "SSHKeyName": { - "Description": "Name of the ec2 key you need one to use this template", - "Type": "AWS::EC2::KeyPair::KeyName", - "Default": "choose-key" - }, - "EnvironmentName": { - "Description": "An environment name that is prefixed to resource names", - "Type": "String", - "Default": "Ansible" - }, - "VpcCIDR": { - "Description": "Please enter the IP range (CIDR notation) for this VPC", - "Type": "String", - "Default": "10.192.0.0/16" - }, - "PublicSubnet1CIDR": { - "Description": "Please enter the IP range (CIDR notation) for the public subnet in the first Availability Zone", - "Type": "String", - "Default": "10.192.10.0/24" - }, - "PublicSubnet2CIDR": { - "Description": "Please enter the IP range (CIDR notation) for the public subnet in the second Availability Zone", - "Type": "String", - "Default": "10.192.11.0/24" - }, - "PrivateSubnet1CIDR": { - "Description": "Please enter the IP range (CIDR notation) for the private subnet in the first Availability Zone", - "Type": "String", - "Default": "10.192.20.0/24" - }, - "PrivateSubnet2CIDR": { - "Description": "Please enter the IP range (CIDR notation) for the private subnet in the second Availability Zone", - "Type": "String", - "Default": "10.192.21.0/24" - } - }, - "Mappings": { - "RegionToAmazonAMI": { - "us-east-1": { - "HVM64": "ami-02e98f78" - }, - "us-east-2": { - "HVM64": "ami-01e36b7901e884a10" - }, - "us-west-1": { - "HVM64": "ami-074e2d6769f445be5" - }, - "us-west-2": { - "HVM64": "ami-01ed306a12b7d1c96" - } - } - }, - "Resources": { - "VPC": { - "Type": "AWS::EC2::VPC", - "Properties": { - "CidrBlock": { - "Ref": "VpcCIDR" - }, - "EnableDnsSupport": true, - "EnableDnsHostnames": true, - "Tags": [ - { - "Key": "Name", - "Value": { - "Ref": "EnvironmentName" - } - } - ] - } - }, - "InternetGateway": { - "Type": "AWS::EC2::InternetGateway", - "Properties": { - "Tags": [ - { - "Key": "Name", - "Value": { - "Ref": "EnvironmentName" - } - } - ] - } - }, - "InternetGatewayAttachment": { - "Type": "AWS::EC2::VPCGatewayAttachment", - "Properties": { - "InternetGatewayId": { - "Ref": "InternetGateway" - }, - "VpcId": { - "Ref": "VPC" - } - } - }, - "PublicSubnet1": { - "Type": "AWS::EC2::Subnet", - "Properties": { - "VpcId": { - "Ref": "VPC" - }, - "AvailabilityZone": { - "Fn::Select": [ - 0, - { - "Fn::GetAZs": "" - } - ] - }, - "CidrBlock": { - "Ref": "PublicSubnet1CIDR" - }, - "MapPublicIpOnLaunch": true, - "Tags": [ - { - "Key": "Name", - "Value": { - "Fn::Sub": "${EnvironmentName} Public Subnet (AZ1)" - } - } - ] - } - }, - "PublicSubnet2": { - "Type": "AWS::EC2::Subnet", - "Properties": { - "VpcId": { - "Ref": "VPC" - }, - "AvailabilityZone": { - "Fn::Select": [ - 1, - { - "Fn::GetAZs": "" - } - ] - }, - "CidrBlock": { - "Ref": "PublicSubnet2CIDR" - }, - "MapPublicIpOnLaunch": true, - "Tags": [ - { - "Key": "Name", - "Value": { - "Fn::Sub": "${EnvironmentName} Public Subnet (AZ2)" - } - } - ] - } - }, - "PrivateSubnet1": { - "Type": "AWS::EC2::Subnet", - "Properties": { - "VpcId": { - "Ref": "VPC" - }, - "AvailabilityZone": { - "Fn::Select": [ - 0, - { - "Fn::GetAZs": "" - } - ] - }, - "CidrBlock": { - "Ref": "PrivateSubnet1CIDR" - }, - "MapPublicIpOnLaunch": false, - "Tags": [ - { - "Key": "Name", - "Value": { - "Fn::Sub": "${EnvironmentName} Private Subnet (AZ1)" - } - } - ] - } - }, - "PrivateSubnet2": { - "Type": "AWS::EC2::Subnet", - "Properties": { - "VpcId": { - "Ref": "VPC" - }, - 
"AvailabilityZone": { - "Fn::Select": [ - 1, - { - "Fn::GetAZs": "" - } - ] - }, - "CidrBlock": { - "Ref": "PrivateSubnet2CIDR" - }, - "MapPublicIpOnLaunch": false, - "Tags": [ - { - "Key": "Name", - "Value": { - "Fn::Sub": "${EnvironmentName} Private Subnet (AZ2)" - } - } - ] - } - }, - "NatGateway1EIP": { - "Type": "AWS::EC2::EIP", - "DependsOn": "InternetGatewayAttachment", - "Properties": { - "Domain": "vpc" - } - }, - "NatGateway2EIP": { - "Type": "AWS::EC2::EIP", - "DependsOn": "InternetGatewayAttachment", - "Properties": { - "Domain": "vpc" - } - }, - "NatGateway1": { - "Type": "AWS::EC2::NatGateway", - "Properties": { - "AllocationId": { - "Fn::GetAtt": [ - "NatGateway1EIP", - "AllocationId" - ] - }, - "SubnetId": { - "Ref": "PublicSubnet1" - } - } - }, - "NatGateway2": { - "Type": "AWS::EC2::NatGateway", - "Properties": { - "AllocationId": { - "Fn::GetAtt": [ - "NatGateway2EIP", - "AllocationId" - ] - }, - "SubnetId": { - "Ref": "PublicSubnet2" - } - } - }, - "PublicRouteTable": { - "Type": "AWS::EC2::RouteTable", - "Properties": { - "VpcId": { - "Ref": "VPC" - }, - "Tags": [ - { - "Key": "Name", - "Value": { - "Fn::Sub": "${EnvironmentName} Public Routes" - } - } - ] - } - }, - "DefaultPublicRoute": { - "Type": "AWS::EC2::Route", - "DependsOn": "InternetGatewayAttachment", - "Properties": { - "RouteTableId": { - "Ref": "PublicRouteTable" - }, - "DestinationCidrBlock": "0.0.0.0/0", - "GatewayId": { - "Ref": "InternetGateway" - } - } - }, - "PublicSubnet1RouteTableAssociation": { - "Type": "AWS::EC2::SubnetRouteTableAssociation", - "Properties": { - "RouteTableId": { - "Ref": "PublicRouteTable" - }, - "SubnetId": { - "Ref": "PublicSubnet1" - } - } - }, - "PublicSubnet2RouteTableAssociation": { - "Type": "AWS::EC2::SubnetRouteTableAssociation", - "Properties": { - "RouteTableId": { - "Ref": "PublicRouteTable" - }, - "SubnetId": { - "Ref": "PublicSubnet2" - } - } - }, - "PrivateRouteTable1": { - "Type": "AWS::EC2::RouteTable", - "Properties": { - "VpcId": { - "Ref": "VPC" - }, - "Tags": [ - { - "Key": "Name", - "Value": { - "Fn::Sub": "${EnvironmentName} Private Routes (AZ1)" - } - } - ] - } - }, - "DefaultPrivateRoute1": { - "Type": "AWS::EC2::Route", - "Properties": { - "RouteTableId": { - "Ref": "PrivateRouteTable1" - }, - "DestinationCidrBlock": "0.0.0.0/0", - "NatGatewayId": { - "Ref": "NatGateway1" - } - } - }, - "PrivateSubnet1RouteTableAssociation": { - "Type": "AWS::EC2::SubnetRouteTableAssociation", - "Properties": { - "RouteTableId": { - "Ref": "PrivateRouteTable1" - }, - "SubnetId": { - "Ref": "PrivateSubnet1" - } - } - }, - "PrivateRouteTable2": { - "Type": "AWS::EC2::RouteTable", - "Properties": { - "VpcId": { - "Ref": "VPC" - }, - "Tags": [ - { - "Key": "Name", - "Value": { - "Fn::Sub": "${EnvironmentName} Private Routes (AZ2)" - } - } - ] - } - }, - "DefaultPrivateRoute2": { - "Type": "AWS::EC2::Route", - "Properties": { - "RouteTableId": { - "Ref": "PrivateRouteTable2" - }, - "DestinationCidrBlock": "0.0.0.0/0", - "NatGatewayId": { - "Ref": "NatGateway2" - } - } - }, - "PrivateSubnet2RouteTableAssociation": { - "Type": "AWS::EC2::SubnetRouteTableAssociation", - "Properties": { - "RouteTableId": { - "Ref": "PrivateRouteTable2" - }, - "SubnetId": { - "Ref": "PrivateSubnet2" - } - } - }, - "EC2SecurityGroup": { - "Type": "AWS::EC2::SecurityGroup", - "Properties": { - "GroupDescription": "SSH, Port 80, Database", - "VpcId": { - "Ref": "VPC" - }, - "SecurityGroupIngress": [ - { - "IpProtocol": "tcp", - "FromPort": 22, - "ToPort": 22, - "CidrIp": "0.0.0.0/0" - }, - { - 
"IpProtocol": "tcp", - "FromPort": 5432, - "ToPort": 5432, - "CidrIp": "0.0.0.0/0" - }, - { - "IpProtocol": "tcp", - "FromPort": 8082, - "ToPort": 8082, - "CidrIp": "0.0.0.0/0" - }, - { - "IpProtocol": "tcp", - "FromPort": 80, - "ToPort": 80, - "SourceSecurityGroupId": { - "Ref": "ELBSecurityGroup" - } - } - ] - } - }, - "ELBSecurityGroup": { - "Type": "AWS::EC2::SecurityGroup", - "Properties": { - "GroupDescription": "SSH and Port 80", - "VpcId": { - "Ref": "VPC" - }, - "SecurityGroupIngress": [ - { - "IpProtocol": "tcp", - "FromPort": 80, - "ToPort": 80, - "CidrIp": "0.0.0.0/0" - } - ] - } - }, - "BastionInstance": { - "Type": "AWS::EC2::Instance", - "Properties": { - "ImageId": { - "Fn::FindInMap": [ - "RegionToAmazonAMI", - { - "Ref": "AWS::Region" - }, - "HVM64" - ] - }, - "InstanceInitiatedShutdownBehavior": "stop", - "InstanceType": "t2.medium", - "KeyName": { - "Ref": "SSHKeyName" - }, - "Monitoring": "true", - "NetworkInterfaces": [ - { - "AssociatePublicIpAddress": "true", - "DeviceIndex": "0", - "GroupSet": [ - { - "Ref": "EC2SecurityGroup" - } - ], - "SubnetId": { - "Ref": "PublicSubnet1" - } - } - ], - "Tags": [ - { - "Key": "Name", - "Value": "bastion" - } - ], - "Tenancy": "default" - } - }, - "RTPriInstance": { - "Type": "AWS::EC2::Instance", - "Properties": { - "ImageId": { - "Fn::FindInMap": [ - "RegionToAmazonAMI", - { - "Ref": "AWS::Region" - }, - "HVM64" - ] - }, - "InstanceInitiatedShutdownBehavior": "stop", - "InstanceType": "t2.medium", - "KeyName": { - "Ref": "SSHKeyName" - }, - "Monitoring": "true", - "NetworkInterfaces": [ - { - "AssociatePublicIpAddress": "false", - "DeviceIndex": "0", - "GroupSet": [ - { - "Ref": "EC2SecurityGroup" - } - ], - "SubnetId": { - "Ref": "PrivateSubnet1" - } - } - ], - "Tags": [ - { - "Key": "Name", - "Value": "rtprimary" - } - ], - "Tenancy": "default" - } - }, - "RTSecInstance": { - "Type": "AWS::EC2::Instance", - "Properties": { - "ImageId": { - "Fn::FindInMap": [ - "RegionToAmazonAMI", - { - "Ref": "AWS::Region" - }, - "HVM64" - ] - }, - "InstanceInitiatedShutdownBehavior": "stop", - "InstanceType": "t2.medium", - "KeyName": { - "Ref": "SSHKeyName" - }, - "Monitoring": "true", - "NetworkInterfaces": [ - { - "AssociatePublicIpAddress": "false", - "DeviceIndex": "0", - "GroupSet": [ - { - "Ref": "EC2SecurityGroup" - } - ], - "SubnetId": { - "Ref": "PrivateSubnet2" - } - } - ], - "Tags": [ - { - "Key": "Name", - "Value": "rtsecondary" - } - ], - "Tenancy": "default" - } - }, - "XrayInstance": { - "Type": "AWS::EC2::Instance", - "Properties": { - "ImageId": { - "Fn::FindInMap": [ - "RegionToAmazonAMI", - { - "Ref": "AWS::Region" - }, - "HVM64" - ] - }, - "InstanceInitiatedShutdownBehavior": "stop", - "InstanceType": "t2.medium", - "KeyName": { - "Ref": "SSHKeyName" - }, - "Monitoring": "true", - "NetworkInterfaces": [ - { - "AssociatePublicIpAddress": "false", - "DeviceIndex": "0", - "GroupSet": [ - { - "Ref": "EC2SecurityGroup" - } - ], - "SubnetId": { - "Ref": "PrivateSubnet1" - } - } - ], - "Tags": [ - { - "Key": "Name", - "Value": "xray" - } - ], - "Tenancy": "default" - } - }, - "DBInstance": { - "Type": "AWS::EC2::Instance", - "Properties": { - "ImageId": { - "Fn::FindInMap": [ - "RegionToAmazonAMI", - { - "Ref": "AWS::Region" - }, - "HVM64" - ] - }, - "InstanceInitiatedShutdownBehavior": "stop", - "InstanceType": "t2.medium", - "KeyName": { - "Ref": "SSHKeyName" - }, - "Monitoring": "true", - "NetworkInterfaces": [ - { - "AssociatePublicIpAddress": "false", - "DeviceIndex": "0", - "GroupSet": [ - { - "Ref": "EC2SecurityGroup" 
- } - ], - "SubnetId": { - "Ref": "PrivateSubnet1" - } - } - ], - "Tags": [ - { - "Key": "Name", - "Value": "database" - } - ], - "Tenancy": "default" - } - }, - "EC2TargetGroup": { - "Type": "AWS::ElasticLoadBalancingV2::TargetGroup", - "Properties": { - "HealthCheckIntervalSeconds": 30, - "HealthCheckProtocol": "HTTP", - "HealthCheckTimeoutSeconds": 15, - "HealthyThresholdCount": 2, - "Matcher": { - "HttpCode": "200,302" - }, - "Name": "EC2TargetGroup", - "Port": 80, - "Protocol": "HTTP", - "TargetGroupAttributes": [ - { - "Key": "deregistration_delay.timeout_seconds", - "Value": "20" - } - ], - "Targets": [ - { - "Id": { - "Ref": "RTPriInstance" - } - }, - { - "Id": { - "Ref": "RTSecInstance" - }, - "Port": 80 - } - ], - "UnhealthyThresholdCount": 3, - "VpcId": { - "Ref": "VPC" - }, - "Tags": [ - { - "Key": "Name", - "Value": "EC2TargetGroup" - }, - { - "Key": "Port", - "Value": 80 - } - ] - } - }, - "ALBListener": { - "Type": "AWS::ElasticLoadBalancingV2::Listener", - "Properties": { - "DefaultActions": [ - { - "Type": "forward", - "TargetGroupArn": { - "Ref": "EC2TargetGroup" - } - } - ], - "LoadBalancerArn": { - "Ref": "ApplicationLoadBalancer" - }, - "Port": 80, - "Protocol": "HTTP" - } - }, - "ApplicationLoadBalancer": { - "Type": "AWS::ElasticLoadBalancingV2::LoadBalancer", - "Properties": { - "Scheme": "internet-facing", - "Subnets": [ - { - "Ref": "PublicSubnet1" - }, - { - "Ref": "PublicSubnet2" - } - ], - "SecurityGroups": [ - { - "Ref": "ELBSecurityGroup" - } - ] - } - } - }, - - "Outputs": { - "VPC": { - "Description": "Virtual Private Cloud", - "Value": { - "Ref": "VPC" - } - }, - "ALBHostName": { - "Description": "Application Load Balancer Hostname", - "Value": { - "Fn::GetAtt": [ - "ApplicationLoadBalancer", - "DNSName" - ] - } - }, - "BastionInstancePublic": { - "Description": "Bastion", - "Value": { "Fn::GetAtt" : [ "BastionInstance", "PublicIp" ]} - }, - "BastionInstancePrivate": { - "Description": "Bastion", - "Value": { "Fn::GetAtt" : [ "BastionInstance", "PrivateIp" ]} - }, - "RTPriInstancePrivate": { - "Description": "RTPriInstance", - "Value": { "Fn::GetAtt" : [ "RTPriInstance", "PrivateIp" ]} - }, - "RTSecInstancePrivate": { - "Description": "RTSecInstance", - "Value": { "Fn::GetAtt" : [ "RTSecInstance", "PrivateIp" ]} - }, - "XrayInstancePrivate": { - "Description": "XrayInstance", - "Value": { "Fn::GetAtt" : [ "XrayInstance", "PrivateIp" ]} - }, - "DBInstancePrivate": { - "Description": "DBInstance", - "Value": { "Fn::GetAtt" : [ "DBInstance", "PrivateIp" ]} - } - } -} \ No newline at end of file diff --git a/Ansible/infra/aws/lb-rt-xray-ha-ubuntu16.json b/Ansible/infra/aws/lb-rt-xray-ha-ubuntu16.json deleted file mode 100644 index 867e1df..0000000 --- a/Ansible/infra/aws/lb-rt-xray-ha-ubuntu16.json +++ /dev/null @@ -1,769 +0,0 @@ -{ - "Description": "This template deploys a VPC, with a pair of public and private subnets spread across two Availability Zones. It deploys an internet gateway, with a default route on the public subnets. 
It deploys a pair of NAT gateways (one in each AZ), and default routes for them in the private subnets.", - "Parameters": { - "SSHKeyName": { - "Description": "Name of the ec2 key you need one to use this template", - "Type": "AWS::EC2::KeyPair::KeyName", - "Default": "choose-key" - }, - "EnvironmentName": { - "Description": "An environment name that is prefixed to resource names", - "Type": "String", - "Default": "Ansible" - }, - "VpcCIDR": { - "Description": "Please enter the IP range (CIDR notation) for this VPC", - "Type": "String", - "Default": "10.192.0.0/16" - }, - "PublicSubnet1CIDR": { - "Description": "Please enter the IP range (CIDR notation) for the public subnet in the first Availability Zone", - "Type": "String", - "Default": "10.192.10.0/24" - }, - "PublicSubnet2CIDR": { - "Description": "Please enter the IP range (CIDR notation) for the public subnet in the second Availability Zone", - "Type": "String", - "Default": "10.192.11.0/24" - }, - "PrivateSubnet1CIDR": { - "Description": "Please enter the IP range (CIDR notation) for the private subnet in the first Availability Zone", - "Type": "String", - "Default": "10.192.20.0/24" - }, - "PrivateSubnet2CIDR": { - "Description": "Please enter the IP range (CIDR notation) for the private subnet in the second Availability Zone", - "Type": "String", - "Default": "10.192.21.0/24" - } - }, - "Mappings": { - "RegionToAmazonAMI": { - "us-east-1": { - "HVM64": "ami-03e33c1cefd1d3d74" - }, - "us-east-2": { - "HVM64": "ami-07d9419c80dc1113c" - }, - "us-west-1": { - "HVM64": "ami-0ee1a20d6b0c6a347" - }, - "us-west-2": { - "HVM64": "ami-0813245c0939ab3ca" - } - } - }, - "Resources": { - "VPC": { - "Type": "AWS::EC2::VPC", - "Properties": { - "CidrBlock": { - "Ref": "VpcCIDR" - }, - "EnableDnsSupport": true, - "EnableDnsHostnames": true, - "Tags": [ - { - "Key": "Name", - "Value": { - "Ref": "EnvironmentName" - } - } - ] - } - }, - "InternetGateway": { - "Type": "AWS::EC2::InternetGateway", - "Properties": { - "Tags": [ - { - "Key": "Name", - "Value": { - "Ref": "EnvironmentName" - } - } - ] - } - }, - "InternetGatewayAttachment": { - "Type": "AWS::EC2::VPCGatewayAttachment", - "Properties": { - "InternetGatewayId": { - "Ref": "InternetGateway" - }, - "VpcId": { - "Ref": "VPC" - } - } - }, - "PublicSubnet1": { - "Type": "AWS::EC2::Subnet", - "Properties": { - "VpcId": { - "Ref": "VPC" - }, - "AvailabilityZone": { - "Fn::Select": [ - 0, - { - "Fn::GetAZs": "" - } - ] - }, - "CidrBlock": { - "Ref": "PublicSubnet1CIDR" - }, - "MapPublicIpOnLaunch": true, - "Tags": [ - { - "Key": "Name", - "Value": { - "Fn::Sub": "${EnvironmentName} Public Subnet (AZ1)" - } - } - ] - } - }, - "PublicSubnet2": { - "Type": "AWS::EC2::Subnet", - "Properties": { - "VpcId": { - "Ref": "VPC" - }, - "AvailabilityZone": { - "Fn::Select": [ - 1, - { - "Fn::GetAZs": "" - } - ] - }, - "CidrBlock": { - "Ref": "PublicSubnet2CIDR" - }, - "MapPublicIpOnLaunch": true, - "Tags": [ - { - "Key": "Name", - "Value": { - "Fn::Sub": "${EnvironmentName} Public Subnet (AZ2)" - } - } - ] - } - }, - "PrivateSubnet1": { - "Type": "AWS::EC2::Subnet", - "Properties": { - "VpcId": { - "Ref": "VPC" - }, - "AvailabilityZone": { - "Fn::Select": [ - 0, - { - "Fn::GetAZs": "" - } - ] - }, - "CidrBlock": { - "Ref": "PrivateSubnet1CIDR" - }, - "MapPublicIpOnLaunch": false, - "Tags": [ - { - "Key": "Name", - "Value": { - "Fn::Sub": "${EnvironmentName} Private Subnet (AZ1)" - } - } - ] - } - }, - "PrivateSubnet2": { - "Type": "AWS::EC2::Subnet", - "Properties": { - "VpcId": { - "Ref": "VPC" - }, - 
"AvailabilityZone": { - "Fn::Select": [ - 1, - { - "Fn::GetAZs": "" - } - ] - }, - "CidrBlock": { - "Ref": "PrivateSubnet2CIDR" - }, - "MapPublicIpOnLaunch": false, - "Tags": [ - { - "Key": "Name", - "Value": { - "Fn::Sub": "${EnvironmentName} Private Subnet (AZ2)" - } - } - ] - } - }, - "NatGateway1EIP": { - "Type": "AWS::EC2::EIP", - "DependsOn": "InternetGatewayAttachment", - "Properties": { - "Domain": "vpc" - } - }, - "NatGateway2EIP": { - "Type": "AWS::EC2::EIP", - "DependsOn": "InternetGatewayAttachment", - "Properties": { - "Domain": "vpc" - } - }, - "NatGateway1": { - "Type": "AWS::EC2::NatGateway", - "Properties": { - "AllocationId": { - "Fn::GetAtt": [ - "NatGateway1EIP", - "AllocationId" - ] - }, - "SubnetId": { - "Ref": "PublicSubnet1" - } - } - }, - "NatGateway2": { - "Type": "AWS::EC2::NatGateway", - "Properties": { - "AllocationId": { - "Fn::GetAtt": [ - "NatGateway2EIP", - "AllocationId" - ] - }, - "SubnetId": { - "Ref": "PublicSubnet2" - } - } - }, - "PublicRouteTable": { - "Type": "AWS::EC2::RouteTable", - "Properties": { - "VpcId": { - "Ref": "VPC" - }, - "Tags": [ - { - "Key": "Name", - "Value": { - "Fn::Sub": "${EnvironmentName} Public Routes" - } - } - ] - } - }, - "DefaultPublicRoute": { - "Type": "AWS::EC2::Route", - "DependsOn": "InternetGatewayAttachment", - "Properties": { - "RouteTableId": { - "Ref": "PublicRouteTable" - }, - "DestinationCidrBlock": "0.0.0.0/0", - "GatewayId": { - "Ref": "InternetGateway" - } - } - }, - "PublicSubnet1RouteTableAssociation": { - "Type": "AWS::EC2::SubnetRouteTableAssociation", - "Properties": { - "RouteTableId": { - "Ref": "PublicRouteTable" - }, - "SubnetId": { - "Ref": "PublicSubnet1" - } - } - }, - "PublicSubnet2RouteTableAssociation": { - "Type": "AWS::EC2::SubnetRouteTableAssociation", - "Properties": { - "RouteTableId": { - "Ref": "PublicRouteTable" - }, - "SubnetId": { - "Ref": "PublicSubnet2" - } - } - }, - "PrivateRouteTable1": { - "Type": "AWS::EC2::RouteTable", - "Properties": { - "VpcId": { - "Ref": "VPC" - }, - "Tags": [ - { - "Key": "Name", - "Value": { - "Fn::Sub": "${EnvironmentName} Private Routes (AZ1)" - } - } - ] - } - }, - "DefaultPrivateRoute1": { - "Type": "AWS::EC2::Route", - "Properties": { - "RouteTableId": { - "Ref": "PrivateRouteTable1" - }, - "DestinationCidrBlock": "0.0.0.0/0", - "NatGatewayId": { - "Ref": "NatGateway1" - } - } - }, - "PrivateSubnet1RouteTableAssociation": { - "Type": "AWS::EC2::SubnetRouteTableAssociation", - "Properties": { - "RouteTableId": { - "Ref": "PrivateRouteTable1" - }, - "SubnetId": { - "Ref": "PrivateSubnet1" - } - } - }, - "PrivateRouteTable2": { - "Type": "AWS::EC2::RouteTable", - "Properties": { - "VpcId": { - "Ref": "VPC" - }, - "Tags": [ - { - "Key": "Name", - "Value": { - "Fn::Sub": "${EnvironmentName} Private Routes (AZ2)" - } - } - ] - } - }, - "DefaultPrivateRoute2": { - "Type": "AWS::EC2::Route", - "Properties": { - "RouteTableId": { - "Ref": "PrivateRouteTable2" - }, - "DestinationCidrBlock": "0.0.0.0/0", - "NatGatewayId": { - "Ref": "NatGateway2" - } - } - }, - "PrivateSubnet2RouteTableAssociation": { - "Type": "AWS::EC2::SubnetRouteTableAssociation", - "Properties": { - "RouteTableId": { - "Ref": "PrivateRouteTable2" - }, - "SubnetId": { - "Ref": "PrivateSubnet2" - } - } - }, - "EC2SecurityGroup": { - "Type": "AWS::EC2::SecurityGroup", - "Properties": { - "GroupDescription": "SSH, Port 80, Database", - "VpcId": { - "Ref": "VPC" - }, - "SecurityGroupIngress": [ - { - "IpProtocol": "tcp", - "FromPort": 22, - "ToPort": 22, - "CidrIp": "0.0.0.0/0" - }, - { - 
"IpProtocol": "tcp", - "FromPort": 5432, - "ToPort": 5432, - "CidrIp": "0.0.0.0/0" - }, - { - "IpProtocol": "tcp", - "FromPort": 8082, - "ToPort": 8082, - "CidrIp": "0.0.0.0/0" - }, - { - "IpProtocol": "tcp", - "FromPort": 80, - "ToPort": 80, - "SourceSecurityGroupId": { - "Ref": "ELBSecurityGroup" - } - } - ] - } - }, - "ELBSecurityGroup": { - "Type": "AWS::EC2::SecurityGroup", - "Properties": { - "GroupDescription": "SSH and Port 80", - "VpcId": { - "Ref": "VPC" - }, - "SecurityGroupIngress": [ - { - "IpProtocol": "tcp", - "FromPort": 80, - "ToPort": 80, - "CidrIp": "0.0.0.0/0" - } - ] - } - }, - "BastionInstance": { - "Type": "AWS::EC2::Instance", - "Properties": { - "ImageId": { - "Fn::FindInMap": [ - "RegionToAmazonAMI", - { - "Ref": "AWS::Region" - }, - "HVM64" - ] - }, - "InstanceInitiatedShutdownBehavior": "stop", - "InstanceType": "t2.medium", - "KeyName": { - "Ref": "SSHKeyName" - }, - "Monitoring": "true", - "NetworkInterfaces": [ - { - "AssociatePublicIpAddress": "true", - "DeviceIndex": "0", - "GroupSet": [ - { - "Ref": "EC2SecurityGroup" - } - ], - "SubnetId": { - "Ref": "PublicSubnet1" - } - } - ], - "Tags": [ - { - "Key": "Name", - "Value": "bastion" - } - ], - "Tenancy": "default" - } - }, - "RTPriInstance": { - "Type": "AWS::EC2::Instance", - "Properties": { - "ImageId": { - "Fn::FindInMap": [ - "RegionToAmazonAMI", - { - "Ref": "AWS::Region" - }, - "HVM64" - ] - }, - "InstanceInitiatedShutdownBehavior": "stop", - "InstanceType": "t2.medium", - "KeyName": { - "Ref": "SSHKeyName" - }, - "Monitoring": "true", - "NetworkInterfaces": [ - { - "AssociatePublicIpAddress": "false", - "DeviceIndex": "0", - "GroupSet": [ - { - "Ref": "EC2SecurityGroup" - } - ], - "SubnetId": { - "Ref": "PrivateSubnet1" - } - } - ], - "Tags": [ - { - "Key": "Name", - "Value": "rtprimary" - } - ], - "Tenancy": "default" - } - }, - "RTSecInstance": { - "Type": "AWS::EC2::Instance", - "Properties": { - "ImageId": { - "Fn::FindInMap": [ - "RegionToAmazonAMI", - { - "Ref": "AWS::Region" - }, - "HVM64" - ] - }, - "InstanceInitiatedShutdownBehavior": "stop", - "InstanceType": "t2.medium", - "KeyName": { - "Ref": "SSHKeyName" - }, - "Monitoring": "true", - "NetworkInterfaces": [ - { - "AssociatePublicIpAddress": "false", - "DeviceIndex": "0", - "GroupSet": [ - { - "Ref": "EC2SecurityGroup" - } - ], - "SubnetId": { - "Ref": "PrivateSubnet2" - } - } - ], - "Tags": [ - { - "Key": "Name", - "Value": "rtsecondary" - } - ], - "Tenancy": "default" - } - }, - "XrayInstance": { - "Type": "AWS::EC2::Instance", - "Properties": { - "ImageId": { - "Fn::FindInMap": [ - "RegionToAmazonAMI", - { - "Ref": "AWS::Region" - }, - "HVM64" - ] - }, - "InstanceInitiatedShutdownBehavior": "stop", - "InstanceType": "t2.medium", - "KeyName": { - "Ref": "SSHKeyName" - }, - "Monitoring": "true", - "NetworkInterfaces": [ - { - "AssociatePublicIpAddress": "false", - "DeviceIndex": "0", - "GroupSet": [ - { - "Ref": "EC2SecurityGroup" - } - ], - "SubnetId": { - "Ref": "PrivateSubnet1" - } - } - ], - "Tags": [ - { - "Key": "Name", - "Value": "xray" - } - ], - "Tenancy": "default" - } - }, - "DBInstance": { - "Type": "AWS::EC2::Instance", - "Properties": { - "ImageId": { - "Fn::FindInMap": [ - "RegionToAmazonAMI", - { - "Ref": "AWS::Region" - }, - "HVM64" - ] - }, - "InstanceInitiatedShutdownBehavior": "stop", - "InstanceType": "t2.medium", - "KeyName": { - "Ref": "SSHKeyName" - }, - "Monitoring": "true", - "NetworkInterfaces": [ - { - "AssociatePublicIpAddress": "false", - "DeviceIndex": "0", - "GroupSet": [ - { - "Ref": "EC2SecurityGroup" 
- } - ], - "SubnetId": { - "Ref": "PrivateSubnet1" - } - } - ], - "Tags": [ - { - "Key": "Name", - "Value": "database" - } - ], - "Tenancy": "default" - } - }, - "EC2TargetGroup": { - "Type": "AWS::ElasticLoadBalancingV2::TargetGroup", - "Properties": { - "HealthCheckIntervalSeconds": 30, - "HealthCheckProtocol": "HTTP", - "HealthCheckTimeoutSeconds": 15, - "HealthyThresholdCount": 2, - "Matcher": { - "HttpCode": "200,302" - }, - "Name": "EC2TargetGroup", - "Port": 80, - "Protocol": "HTTP", - "TargetGroupAttributes": [ - { - "Key": "deregistration_delay.timeout_seconds", - "Value": "20" - } - ], - "Targets": [ - { - "Id": { - "Ref": "RTPriInstance" - } - }, - { - "Id": { - "Ref": "RTSecInstance" - }, - "Port": 80 - } - ], - "UnhealthyThresholdCount": 3, - "VpcId": { - "Ref": "VPC" - }, - "Tags": [ - { - "Key": "Name", - "Value": "EC2TargetGroup" - }, - { - "Key": "Port", - "Value": 80 - } - ] - } - }, - "ALBListener": { - "Type": "AWS::ElasticLoadBalancingV2::Listener", - "Properties": { - "DefaultActions": [ - { - "Type": "forward", - "TargetGroupArn": { - "Ref": "EC2TargetGroup" - } - } - ], - "LoadBalancerArn": { - "Ref": "ApplicationLoadBalancer" - }, - "Port": 80, - "Protocol": "HTTP" - } - }, - "ApplicationLoadBalancer": { - "Type": "AWS::ElasticLoadBalancingV2::LoadBalancer", - "Properties": { - "Scheme": "internet-facing", - "Subnets": [ - { - "Ref": "PublicSubnet1" - }, - { - "Ref": "PublicSubnet2" - } - ], - "SecurityGroups": [ - { - "Ref": "ELBSecurityGroup" - } - ] - } - } - }, - - "Outputs": { - "VPC": { - "Description": "Virtual Private Cloud", - "Value": { - "Ref": "VPC" - } - }, - "ALBHostName": { - "Description": "Application Load Balancer Hostname", - "Value": { - "Fn::GetAtt": [ - "ApplicationLoadBalancer", - "DNSName" - ] - } - }, - "BastionInstancePublic": { - "Description": "Bastion", - "Value": { "Fn::GetAtt" : [ "BastionInstance", "PublicIp" ]} - }, - "BastionInstancePrivate": { - "Description": "Bastion", - "Value": { "Fn::GetAtt" : [ "BastionInstance", "PrivateIp" ]} - }, - "RTPriInstancePrivate": { - "Description": "RTPriInstance", - "Value": { "Fn::GetAtt" : [ "RTPriInstance", "PrivateIp" ]} - }, - "RTSecInstancePrivate": { - "Description": "RTSecInstance", - "Value": { "Fn::GetAtt" : [ "RTSecInstance", "PrivateIp" ]} - }, - "XrayInstancePrivate": { - "Description": "XrayInstance", - "Value": { "Fn::GetAtt" : [ "XrayInstance", "PrivateIp" ]} - }, - "DBInstancePrivate": { - "Description": "DBInstance", - "Value": { "Fn::GetAtt" : [ "DBInstance", "PrivateIp" ]} - } - } -} \ No newline at end of file diff --git a/Ansible/infra/azure/lb-rt-xray-ha.json b/Ansible/infra/azure/lb-rt-xray-ha.json deleted file mode 100644 index 1211d17..0000000 --- a/Ansible/infra/azure/lb-rt-xray-ha.json +++ /dev/null @@ -1,679 +0,0 @@ -{ - "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "vnetName": { - "type": "string", - "defaultValue": "vnet01", - "metadata": { - "description": "Name of new vnet to deploy into." - } - }, - "vnetAddressRange": { - "type": "string", - "defaultValue": "10.0.0.0/16", - "metadata": { - "description": "IP prefix for available addresses in vnet address space." - } - }, - "subnetAddressRange": { - "type": "string", - "defaultValue": "10.0.0.0/24", - "metadata": { - "description": "Subnet IP prefix MUST be within vnet IP prefix address space." 
- } - }, - "location": { - "type": "string", - "defaultValue": "[resourceGroup().location]", - "metadata": { - "description": "Location for all resources." - } - }, - "adminPublicKey": { - "type": "string", - "metadata": { - "description": "The ssh public key for the VMs." - } - }, - "sizeOfDiskInGB": { - "type": "int", - "defaultValue": 128, - "minValue": 128, - "maxValue": 1024, - "metadata": { - "description": "Size of data disk in GB 128-1024" - } - }, - "vmSize": { - "type": "string", - "defaultValue": "Standard_D2s_v3", - "metadata": { - "description": "Size of the VMs" - } - }, - "numberOfArtifactory": { - "type": "int", - "defaultValue": 1, - "minValue": 1, - "maxValue": 5, - "metadata": { - "description": "Number of Artifactory servers." - } - }, - "numberOfXray": { - "type": "int", - "defaultValue": 1, - "minValue": 1, - "maxValue": 5, - "metadata": { - "description": "Number of Xray servers." - } - }, - "numberOfDb": { - "type": "int", - "defaultValue": 1, - "minValue": 1, - "maxValue": 2, - "metadata": { - "description": "Number of database servers." - } - } - }, - "variables": { - "vnetName": "[parameters('vnetName')]", - "vnetAddressRange": "[parameters('vnetAddressRange')]", - "subnetAddressRange": "[parameters('subnetAddressRange')]", - "subnetName": "mainSubnet", - "loadBalancerName": "LB", - "loadBalancerIp": "lbIp", - "numberOfArtifactory": "[parameters('numberOfArtifactory')]", - "numberOfXray": "[parameters('numberOfXray')]", - "numberOfDb": "[parameters('numberOfDb')]", - "availabilitySetName": "availSet", - "vmArtPri": "vmArtPri", - "vmArtSec": "vmArtSec", - "vmXray": "vmXray", - "vmDb": "vmDb", - "storageAccountNameDiag": "[concat('diag',uniqueString(resourceGroup().id))]", - "subnet-id": "[resourceId('Microsoft.Network/virtualNetworks/subnets',variables('vnetName'),variables('subnetName'))]", - "imagePublisher": "Canonical", - "imageOffer": "UbuntuServer", - "imageSku": "16.04-LTS", - "mainNsg": "mainNsg", - "adminUsername": "ubuntu" - }, - "resources": [ - { - "apiVersion": "2019-08-01", - "type": "Microsoft.Network/publicIPAddresses", - "name": "[variables('loadBalancerIp')]", - "location": "[parameters('location')]", - "properties": { - "publicIPAllocationMethod": "Static" - } - }, - { - "type": "Microsoft.Compute/availabilitySets", - "name": "[variables('availabilitySetName')]", - "apiVersion": "2019-12-01", - "location": "[parameters('location')]", - "sku": { - "name": "Aligned" - }, - "properties": { - "platformFaultDomainCount": 2, - "platformUpdateDomainCount": 2 - } - }, - { - "apiVersion": "2019-06-01", - "type": "Microsoft.Storage/storageAccounts", - "name": "[variables('storageAccountNameDiag')]", - "location": "[parameters('location')]", - "kind": "StorageV2", - "sku": { - "name": "Standard_LRS" - } - }, - { - "comments": "Simple Network Security Group for subnet [Subnet]", - "type": "Microsoft.Network/networkSecurityGroups", - "apiVersion": "2019-08-01", - "name": "[variables('mainNsg')]", - "location": "[parameters('location')]", - "properties": { - "securityRules": [ - { - "name": "allow-ssh", - "properties": { - "description": "Allow SSH", - "protocol": "TCP", - "sourcePortRange": "*", - "destinationPortRange": "22", - "sourceAddressPrefix": "*", - "destinationAddressPrefix": "*", - "access": "Allow", - "priority": 100, - "direction": "Inbound", - "sourcePortRanges": [], - "destinationPortRanges": [], - "sourceAddressPrefixes": [], - "destinationAddressPrefixes": [] - } - }, - { - "name": "allow-http", - "properties": { - "description": "Allow 
HTTP", - "protocol": "TCP", - "sourcePortRange": "*", - "destinationPortRange": "80", - "sourceAddressPrefix": "*", - "destinationAddressPrefix": "*", - "access": "Allow", - "priority": 110, - "direction": "Inbound", - "sourcePortRanges": [], - "destinationPortRanges": [], - "sourceAddressPrefixes": [], - "destinationAddressPrefixes": [] - } - } - ] - } - }, - { - "apiVersion": "2019-08-01", - "type": "Microsoft.Network/virtualNetworks", - "name": "[variables('vnetName')]", - "location": "[parameters('location')]", - "dependsOn": [ - "[resourceId('Microsoft.Network/networkSecurityGroups', variables('mainNsg'))]" - ], - "properties": { - "addressSpace": { - "addressPrefixes": [ - "[variables('vnetAddressRange')]" - ] - }, - "subnets": [ - { - "name": "[variables('subnetName')]", - "properties": { - "addressPrefix": "[variables('subnetAddressRange')]", - "networkSecurityGroup": { - "id": "[resourceId('Microsoft.Network/networkSecurityGroups', variables('mainNsg'))]" - } - } - } - ] - } - }, - { - "apiVersion": "2018-10-01", - "name": "[variables('loadBalancerName')]", - "type": "Microsoft.Network/loadBalancers", - "location": "[parameters('location')]", - "dependsOn": [ - "[concat('Microsoft.Network/publicIPAddresses/',variables('loadBalancerIp'))]" - ], - "properties": { - "frontendIpConfigurations": [ - { - "name": "LBFE", - "properties": { - "publicIPAddress": { - "id": "[resourceId('Microsoft.Network/publicIPAddresses',variables('loadBalancerIp'))]" - } - } - } - ], - "backendAddressPools": [ - { - "name": "LBArt" - } - ], - "inboundNatRules": [ - { - "name": "ssh", - "properties": { - "frontendIPConfiguration": { - "id": "[resourceId('Microsoft.Network/loadBalancers/frontendIPConfigurations',variables('loadBalancerName'),'LBFE')]" - }, - "frontendPort": 22, - "backendPort": 22, - "enableFloatingIP": false, - "idleTimeoutInMinutes": 4, - "protocol": "Tcp", - "enableTcpReset": false - } - } - ], - "loadBalancingRules": [ - { - "properties": { - "frontendIPConfiguration": { - "id": "[resourceId('Microsoft.Network/loadBalancers/frontendIPConfigurations', variables('loadBalancerName'), 'LBFE')]" - }, - "backendAddressPool": { - "id": "[resourceId('Microsoft.Network/loadBalancers/backendAddressPools', variables('loadBalancerName'), 'LBArt')]" - }, - "probe": { - "id": "[resourceId('Microsoft.Network/loadBalancers/probes', variables('loadBalancerName'), 'lbprobe')]" - }, - "protocol": "Tcp", - "frontendPort": 80, - "backendPort": 80, - "idleTimeoutInMinutes": 15 - }, - "name": "lbrule" - } - ], - "probes": [ - { - "properties": { - "protocol": "Tcp", - "port": 80, - "intervalInSeconds": 15, - "numberOfProbes": 2 - }, - "name": "lbprobe" - } - ] - } - }, - { - "apiVersion": "2019-08-01", - "type": "Microsoft.Network/networkInterfaces", - "name": "[variables('vmArtPri')]", - "location": "[parameters('location')]", - "dependsOn": [ - "[variables('vnetName')]", - "[variables('loadBalancerName')]" - ], - "properties": { - "ipConfigurations": [ - { - "name": "ipconfig", - "properties": { - "privateIPAllocationMethod": "Dynamic", - "subnet": { - "id": "[variables('subnet-id')]" - }, - "loadBalancerBackendAddressPools": [ - { - "id": "[resourceId('Microsoft.Network/loadBalancers/backendAddressPools',variables('loadBalancerName'),'LBArt')]" - } - ], - "loadBalancerInboundNatRules": [ - { - "id": "[resourceId('Microsoft.Network/loadBalancers/inboundNatRules', variables('loadBalancerName'), 'ssh')]" - } - ] - } - } - ] - } - }, - { - "apiVersion": "2019-08-01", - "type": 
"Microsoft.Network/networkInterfaces", - "name": "[concat(variables('vmArtSec'),copyindex())]", - "copy": { - "name": "netIntLoop", - "count": "[sub(variables('numberOfArtifactory'),1)]" - }, - "location": "[parameters('location')]", - "dependsOn": [ - "[variables('vnetName')]", - "[variables('loadBalancerName')]" - ], - "properties": { - "ipConfigurations": [ - { - "name": "ipconfig", - "properties": { - "privateIPAllocationMethod": "Dynamic", - "subnet": { - "id": "[variables('subnet-id')]" - }, - "loadBalancerBackendAddressPools": [ - { - "id": "[resourceId('Microsoft.Network/loadBalancers/backendAddressPools',variables('loadBalancerName'),'LBArt')]" - } - ] - } - } - ] - } - }, - { - "apiVersion": "2019-08-01", - "type": "Microsoft.Network/networkInterfaces", - "name": "[concat(variables('vmXray'),copyindex())]", - "copy": { - "name": "netXrLoop", - "count": "[variables('numberOfXray')]" - }, - "location": "[parameters('location')]", - "dependsOn": [ - "[variables('vnetName')]" - ], - "properties": { - "ipConfigurations": [ - { - "name": "ipconfig", - "properties": { - "privateIPAllocationMethod": "Dynamic", - "subnet": { - "id": "[variables('subnet-id')]" - } - } - } - ] - } - }, - { - "apiVersion": "2019-08-01", - "type": "Microsoft.Network/networkInterfaces", - "name": "[concat(variables('vmDb'),copyindex())]", - "copy": { - "name": "netDbLoop", - "count": "[variables('numberOfDb')]" - }, - "location": "[parameters('location')]", - "dependsOn": [ - "[variables('vnetName')]" - ], - "properties": { - "ipConfigurations": [ - { - "name": "ipconfig", - "properties": { - "privateIPAllocationMethod": "Dynamic", - "subnet": { - "id": "[variables('subnet-id')]" - } - } - } - ] - } - }, - { - "apiVersion": "2019-12-01", - "type": "Microsoft.Compute/virtualMachines", - "name": "[variables('vmArtPri')]", - "location": "[parameters('location')]", - "dependsOn": [ - "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountNameDiag'))]", - "[resourceId('Microsoft.Network/networkInterfaces', variables('vmArtPri'))]", - "[resourceId('Microsoft.Compute/availabilitySets', variables('availabilitySetName'))]" - ], - "properties": { - "availabilitySet": { - "id": "[resourceId('Microsoft.Compute/availabilitySets', variables('availabilitySetName'))]" - }, - "hardwareProfile": { - "vmSize": "[parameters('vmSize')]" - }, - "osProfile": { - "computerName": "[variables('vmArtPri')]", - "adminUsername": "[variables('adminUsername')]", - "linuxConfiguration": { - "disablePasswordAuthentication": true, - "ssh": { - "publicKeys": [ - { - "path": "[concat('/home/', variables('adminUsername'), '/.ssh/authorized_keys')]", - "keyData": "[parameters('adminPublicKey')]" - } - ] - } - } - }, - "storageProfile": { - "imageReference": { - "publisher": "[variables('imagePublisher')]", - "offer": "[variables('imageOffer')]", - "sku": "[variables('imageSku')]", - "version": "latest" - }, - "osDisk": { - "createOption": "FromImage" - } - }, - "networkProfile": { - "networkInterfaces": [ - { - "id": "[resourceId('Microsoft.Network/networkInterfaces',variables('vmArtPri'))]" - } - ] - }, - "diagnosticsProfile": { - "bootDiagnostics": { - "enabled": true, - "storageUri": "[reference(variables('storageAccountNameDiag'), '2019-06-01').primaryEndpoints.blob]" - } - } - } - }, - { - "apiVersion": "2019-12-01", - "type": "Microsoft.Compute/virtualMachines", - "name": "[concat(variables('vmArtSec'), copyindex())]", - "copy": { - "name": "virtualMachineLoop", - "count": "[sub(variables('numberOfArtifactory'),1)]" - }, 
- "location": "[parameters('location')]", - "dependsOn": [ - "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountNameDiag'))]", - "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmArtSec'),copyindex()))]", - "[resourceId('Microsoft.Compute/availabilitySets', variables('availabilitySetName'))]" - ], - "properties": { - "availabilitySet": { - "id": "[resourceId('Microsoft.Compute/availabilitySets', variables('availabilitySetName'))]" - }, - "hardwareProfile": { - "vmSize": "[parameters('vmSize')]" - }, - "osProfile": { - "computerName": "[concat(variables('vmArtSec'), copyindex())]", - "adminUsername": "[variables('adminUsername')]", - "linuxConfiguration": { - "disablePasswordAuthentication": true, - "ssh": { - "publicKeys": [ - { - "path": "[concat('/home/', variables('adminUsername'), '/.ssh/authorized_keys')]", - "keyData": "[parameters('adminPublicKey')]" - } - ] - } - } - }, - "storageProfile": { - "imageReference": { - "publisher": "[variables('imagePublisher')]", - "offer": "[variables('imageOffer')]", - "sku": "[variables('imageSku')]", - "version": "latest" - }, - "osDisk": { - "createOption": "FromImage" - } - }, - "networkProfile": { - "networkInterfaces": [ - { - "id": "[resourceId('Microsoft.Network/networkInterfaces',concat(variables('vmArtSec'),copyindex()))]" - } - ] - }, - "diagnosticsProfile": { - "bootDiagnostics": { - "enabled": true, - "storageUri": "[reference(variables('storageAccountNameDiag'), '2019-06-01').primaryEndpoints.blob]" - } - } - } - }, - { - "apiVersion": "2019-12-01", - "type": "Microsoft.Compute/virtualMachines", - "name": "[concat(variables('vmXray'), copyindex())]", - "copy": { - "name": "virtualMachineLoop", - "count": "[variables('numberOfXray')]" - }, - "location": "[parameters('location')]", - "dependsOn": [ - "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountNameDiag'))]", - "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmXray'),copyindex()))]", - "[resourceId('Microsoft.Compute/availabilitySets', variables('availabilitySetName'))]" - ], - "properties": { - "availabilitySet": { - "id": "[resourceId('Microsoft.Compute/availabilitySets', variables('availabilitySetName'))]" - }, - "hardwareProfile": { - "vmSize": "[parameters('vmSize')]" - }, - "osProfile": { - "computerName": "[concat(variables('vmXray'), copyindex())]", - "adminUsername": "[variables('adminUsername')]", - "linuxConfiguration": { - "disablePasswordAuthentication": true, - "ssh": { - "publicKeys": [ - { - "path": "[concat('/home/', variables('adminUsername'), '/.ssh/authorized_keys')]", - "keyData": "[parameters('adminPublicKey')]" - } - ] - } - } - }, - "storageProfile": { - "imageReference": { - "publisher": "[variables('imagePublisher')]", - "offer": "[variables('imageOffer')]", - "sku": "[variables('imageSku')]", - "version": "latest" - }, - "osDisk": { - "createOption": "FromImage" - } - }, - "networkProfile": { - "networkInterfaces": [ - { - "id": "[resourceId('Microsoft.Network/networkInterfaces',concat(variables('vmXray'),copyindex()))]" - } - ] - }, - "diagnosticsProfile": { - "bootDiagnostics": { - "enabled": true, - "storageUri": "[reference(variables('storageAccountNameDiag'), '2019-06-01').primaryEndpoints.blob]" - } - } - } - }, - { - "apiVersion": "2019-12-01", - "type": "Microsoft.Compute/virtualMachines", - "name": "[concat(variables('vmDb'), copyindex())]", - "copy": { - "name": "virtualMachineLoop", - "count": "[variables('numberOfDb')]" - }, - "location": 
"[parameters('location')]", - "dependsOn": [ - "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountNameDiag'))]", - "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmDb'),copyindex()))]", - "[resourceId('Microsoft.Compute/availabilitySets', variables('availabilitySetName'))]" - ], - "properties": { - "availabilitySet": { - "id": "[resourceId('Microsoft.Compute/availabilitySets', variables('availabilitySetName'))]" - }, - "hardwareProfile": { - "vmSize": "[parameters('vmSize')]" - }, - "osProfile": { - "computerName": "[concat(variables('vmDb'), copyindex())]", - "adminUsername": "[variables('adminUsername')]", - "linuxConfiguration": { - "disablePasswordAuthentication": true, - "ssh": { - "publicKeys": [ - { - "path": "[concat('/home/', variables('adminUsername'), '/.ssh/authorized_keys')]", - "keyData": "[parameters('adminPublicKey')]" - } - ] - } - } - }, - "storageProfile": { - "imageReference": { - "publisher": "[variables('imagePublisher')]", - "offer": "[variables('imageOffer')]", - "sku": "[variables('imageSku')]", - "version": "latest" - }, - "osDisk": { - "createOption": "FromImage" - } - }, - "networkProfile": { - "networkInterfaces": [ - { - "id": "[resourceId('Microsoft.Network/networkInterfaces',concat(variables('vmDb'),copyindex()))]" - } - ] - }, - "diagnosticsProfile": { - "bootDiagnostics": { - "enabled": true, - "storageUri": "[reference(variables('storageAccountNameDiag'), '2019-06-01').primaryEndpoints.blob]" - } - } - } - } - ], - "outputs": { - "lbIp": { - "type": "string", - "value": "[reference(resourceId('Microsoft.Network/publicIPAddresses', variables('loadBalancerIp'))).ipAddress]" - }, - "vmArtPriIp": { - "type": "string", - "value": "[reference(resourceId('Microsoft.Network/networkInterfaces', variables('vmArtPri'))).ipConfigurations[0].properties.privateIPAddress]" - }, - "vmArtSecArrIp": { - "type": "array", - "copy": { - "count": "[sub(variables('numberOfArtifactory'),1)]", - "input": "[reference(resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmArtSec'),copyindex()))).ipConfigurations[0].properties.privateIPAddress]" - } - }, - "vmXrayArrIp": { - "type": "array", - "copy": { - "count": "[variables('numberOfXray')]", - "input": "[reference(resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmXray'),copyindex()))).ipConfigurations[0].properties.privateIPAddress]" - } - }, - "vmDbArrIp": { - "type": "array", - "copy": { - "count": "[variables('numberOfDb')]", - "input": "[reference(resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmDb'),copyindex()))).ipConfigurations[0].properties.privateIPAddress]" - } - } - } -} \ No newline at end of file diff --git a/Ansible/pipelines.yaml b/Ansible/pipelines.yaml deleted file mode 100644 index 83fb517..0000000 --- a/Ansible/pipelines.yaml +++ /dev/null @@ -1,58 +0,0 @@ -resources: - - name: ansibleRepo - type: GitRepo - configuration: - gitProvider: jefferyfryGithub - path: jefferyfry/JFrog-Cloud-Installers -pipelines: - - name: ansible_automation_pipeline - steps: - - name: execute_aws_ansible_playbook - type: Bash - configuration: - runtime: - type: image - image: - auto: - language: java - versions: - - "8" - integrations: - - name: ansibleAwsKeys - - name: ansibleEnvVars - - name: ansiblePrivateKey - inputResources: - - name: ansibleRepo - execution: - onStart: - - echo "Executing AWS Ansible playbook..." 
- onExecute: - - sudo apt-get update - - sudo apt-get install gnupg2 - - sudo apt-get install software-properties-common - - sudo apt-add-repository --yes --update ppa:ansible/ansible - - sudo apt -y --allow-unauthenticated install ansible - - sudo pip install packaging - - sudo pip install boto3 botocore - - cd dependencyState/resources/ansibleRepo - - echo 'Setting environment variables...' - - export artifactory_version="$int_ansibleEnvVars_artifactory_version" - - export xray_version="$int_ansibleEnvVars_xray_version" - - export artifactory_license1="$int_ansibleEnvVars_artifactory_license1" - - export artifactory_license2="$int_ansibleEnvVars_artifactory_license2" - - export artifactory_license3="$int_ansibleEnvVars_artifactory_license3" - - export master_key="$int_ansibleEnvVars_master_key" - - export join_key="$int_ansibleEnvVars_join_key" - - export ssh_public_key_name="$int_ansibleEnvVars_ssh_public_key_name" - - export cfn_template="$int_ansibleEnvVars_cfn_template" - - export stack_name="$int_ansibleEnvVars_stack_name" - - export AWS_ACCESS_KEY_ID="$int_ansibleEnvVars_AWS_ACCESS_KEY_ID" - - export AWS_SECRET_KEY="$int_ansibleEnvVars_AWS_SECRET_KEY" - - printenv - - pwd - - ls - - eval $(ssh-agent -s) - - ssh-add <(echo "$int_ansiblePrivateKey_key") - - ansible-playbook Ansible/test/aws/playbook-ha-install.yaml - onComplete: - - echo "AWS Ansible playbook complete." \ No newline at end of file diff --git a/Ansible/test/aws/playbook-ha-install.yaml b/Ansible/test/aws/playbook-ha-install.yaml deleted file mode 100644 index 0587e30..0000000 --- a/Ansible/test/aws/playbook-ha-install.yaml +++ /dev/null @@ -1,151 +0,0 @@ ---- -- name: Provision AWS test infrastructure - hosts: localhost - tasks: - - shell: 'pwd' - register: cmd - - - debug: - msg: "{{ cmd.stdout }}" - - name: Create AWS test system - cloudformation: - stack_name: "{{ lookup('env', 'stack_name') }}" - state: "present" - region: "us-east-1" - disable_rollback: true - template: "{{ lookup('env', 'cfn_template') }}" - template_parameters: - SSHKeyName: "{{ lookup('env', 'ssh_public_key_name') }}" - tags: - Stack: "{{ lookup('env', 'stack_name') }}" - register: AWSDeployment - - name: Get AWS deployment details - debug: - var: AWSDeployment - - - name: Add bastion - add_host: - hostname: "{{ AWSDeployment.stack_outputs.BastionInstancePublic }}" - groups: bastion - ansible_user: "ubuntu" - - name: Add new RT primary to host group - add_host: - hostname: "{{ AWSDeployment.stack_outputs.RTPriInstancePrivate }}" - ansible_user: "ubuntu" - ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -A ubuntu@{{ AWSDeployment.stack_outputs.BastionInstancePublic }} -W %h:%p"' - artifactory_version: "{{ lookup('env', 'artifactory_version') }}" - db_url: "jdbc:postgresql://{{ AWSDeployment.stack_outputs.DBInstancePrivate }}:5432/artifactory" - server_name: "{{ AWSDeployment.stack_outputs.ALBHostName }}" - artifactory_is_primary: true - artifactory_license1: "{{ lookup('env', 'artifactory_license1') }}" - artifactory_license2: "{{ lookup('env', 'artifactory_license2') }}" - artifactory_license3: "{{ lookup('env', 'artifactory_license3') }}" - groups: - - artifactory - - - name: Add RT secondaries to host group - add_host: - hostname: "{{ AWSDeployment.stack_outputs.RTSecInstancePrivate }}" - ansible_user: "ubuntu" - ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -A ubuntu@{{ AWSDeployment.stack_outputs.BastionInstancePublic }} -W %h:%p"' - artifactory_version: "{{ lookup('env', 
'artifactory_version') }}" - db_url: "jdbc:postgresql://{{ AWSDeployment.stack_outputs.DBInstancePrivate }}:5432/artifactory" - server_name: "{{ AWSDeployment.stack_outputs.ALBHostName }}" - artifactory_is_primary: false - groups: - - artifactory - - - name: Add xrays to host group - add_host: - hostname: "{{ AWSDeployment.stack_outputs.XrayInstancePrivate }}" - ansible_user: "ubuntu" - ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -A ubuntu@{{ AWSDeployment.stack_outputs.BastionInstancePublic }} -W %h:%p"' - xray_version: "{{ lookup('env', 'xray_version') }}" - jfrog_url: "http://{{ AWSDeployment.stack_outputs.ALBHostName }}" - master_key: "{{ lookup('env', 'master_key') }}" - join_key: "{{ lookup('env', 'join_key') }}" - db_type: "postgresql" - db_driver: "org.postgresql.Driver" - db_user: "xray" - db_password: "xray" - db_url: "postgres://{{ AWSDeployment.stack_outputs.DBInstancePrivate }}:5432/xraydb?sslmode=disable" - groups: xray - - - name: Add DBs to host group - add_host: - hostname: "{{ AWSDeployment.stack_outputs.DBInstancePrivate }}" - ansible_user: "ubuntu" - ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -A ubuntu@{{ AWSDeployment.stack_outputs.BastionInstancePublic }} -W %h:%p"' - db_users: - - { db_user: "artifactory", db_password: "Art1fAct0ry" } - - { db_user: "xray", db_password: "xray" } - dbs: - - { db_name: "artifactory", db_owner: "artifactory" } - - { db_name: "xraydb", db_owner: "xray" } - groups: database - - - name: Set up test environment file - copy: - src: ../tests/src/test/resources/testenv_tpl.yaml - dest: ../tests/src/test/resources/testenv.yaml - - - name: Set up test environment url - replace: - path: ../tests/src/test/resources/testenv.yaml - regexp: 'urlval' - replace: "http://{{ AWSDeployment.stack_outputs.ALBHostName }}" - - - name: Set up test environment external_ip - replace: - path: ../tests/src/test/resources/testenv.yaml - regexp: 'ipval' - replace: "{{ AWSDeployment.stack_outputs.ALBHostName }}" - - - name: Set up test environment rt_password - replace: - path: ../tests/src/test/resources/testenv.yaml - regexp: 'passval' - replace: "password" - - - name: show testenv.yaml - debug: var=item - with_file: - - ../tests/src/test/resources/testenv.yaml - - - name: Wait 300 seconds for port 22 - wait_for: - port: 22 - host: "{{ AWSDeployment.stack_outputs.BastionInstancePublic }}" - delay: 10 - - - debug: - msg: "Unified URL is at http://{{ AWSDeployment.stack_outputs.ALBHostName }}" - -- hosts: database - roles: - - postgres - -- hosts: artifactory - vars: - artifactory_ha_enabled: true - master_key: "{{ lookup('env', 'master_key') }}" - join_key: "{{ lookup('env', 'join_key') }}" - db_download_url: "https://jdbc.postgresql.org/download/postgresql-42.2.12.jar" - db_type: "postgresql" - db_driver: "org.postgresql.Driver" - db_user: "artifactory" - db_password: "Art1fAct0ry" - roles: - - artifactory - -- hosts: xray - roles: - - xray - -- name: Test - hosts: localhost - tasks: - - name: Run tests - shell: - cmd: ./gradlew clean unified_test - chdir: ../tests/ \ No newline at end of file diff --git a/Ansible/test/aws/playbook-ha-upgrade.yaml b/Ansible/test/aws/playbook-ha-upgrade.yaml deleted file mode 100644 index fa97c16..0000000 --- a/Ansible/test/aws/playbook-ha-upgrade.yaml +++ /dev/null @@ -1,172 +0,0 @@ ---- -- name: Provision AWS test infrastructure - hosts: localhost - tasks: - - shell: 'pwd' - register: cmd - - - debug: - msg: "{{ cmd.stdout }}" - - name: Create AWS test system - 
cloudformation: - stack_name: "{{ lookup('env', 'stack_name') }}" - state: "present" - region: "us-east-1" - disable_rollback: true - template: "{{ lookup('env', 'cfn_template') }}" - template_parameters: - SSHKeyName: "{{ lookup('env', 'ssh_public_key_name') }}" - tags: - Stack: "{{ lookup('env', 'stack_name') }}" - register: AWSDeployment - - name: Get AWS deployment details - debug: - var: AWSDeployment - - - name: Add bastion - add_host: - hostname: "{{ AWSDeployment.stack_outputs.BastionInstancePublic }}" - groups: bastion - ansible_user: "ubuntu" - - name: Add new RT primary to host group - add_host: - hostname: "{{ AWSDeployment.stack_outputs.RTPriInstancePrivate }}" - ansible_user: "ubuntu" - ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -A ubuntu@{{ AWSDeployment.stack_outputs.BastionInstancePublic }} -W %h:%p"' - artifactory_version: "{{ lookup('env', 'artifactory_version') }}" - db_url: "jdbc:postgresql://{{ AWSDeployment.stack_outputs.DBInstancePrivate }}:5432/artifactory" - server_name: "{{ AWSDeployment.stack_outputs.ALBHostName }}" - artifactory_is_primary: true - artifactory_license_file: "{{ lookup('env', 'artifactory_license_file') }}" - groups: - - artifactory - - - name: Add RT secondaries to host group - add_host: - hostname: "{{ AWSDeployment.stack_outputs.RTSecInstancePrivate }}" - ansible_user: "ubuntu" - ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -A ubuntu@{{ AWSDeployment.stack_outputs.BastionInstancePublic }} -W %h:%p"' - artifactory_version: "{{ lookup('env', 'artifactory_version') }}" - db_url: "jdbc:postgresql://{{ AWSDeployment.stack_outputs.DBInstancePrivate }}:5432/artifactory" - server_name: "{{ AWSDeployment.stack_outputs.ALBHostName }}" - artifactory_is_primary: false - groups: - - artifactory - - - name: Add xrays to host group - add_host: - hostname: "{{ AWSDeployment.stack_outputs.XrayInstancePrivate }}" - ansible_user: "ubuntu" - ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -A ubuntu@{{ AWSDeployment.stack_outputs.BastionInstancePublic }} -W %h:%p"' - xray_version: "{{ lookup('env', 'xray_version') }}" - jfrog_url: "http://{{ AWSDeployment.stack_outputs.ALBHostName }}" - master_key: "{{ lookup('env', 'master_key') }}" - join_key: "{{ lookup('env', 'join_key') }}" - db_type: "postgresql" - db_driver: "org.postgresql.Driver" - db_user: "xray" - db_password: "xray" - db_url: "postgres://{{ AWSDeployment.stack_outputs.DBInstancePrivate }}:5432/xraydb?sslmode=disable" - groups: xray - - - name: Add DBs to host group - add_host: - hostname: "{{ AWSDeployment.stack_outputs.DBInstancePrivate }}" - ansible_user: "ubuntu" - ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -A ubuntu@{{ AWSDeployment.stack_outputs.BastionInstancePublic }} -W %h:%p"' - db_users: - - { db_user: "artifactory", db_password: "Art1fAct0ry" } - - { db_user: "xray", db_password: "xray" } - dbs: - - { db_name: "artifactory", db_owner: "artifactory" } - - { db_name: "xraydb", db_owner: "xray" } - groups: database - - - name: Set up test environment file - copy: - src: ../tests/src/test/resources/testenv_tpl.yaml - dest: ../tests/src/test/resources/testenv.yaml - - - name: Set up test environment url - replace: - path: ../tests/src/test/resources/testenv.yaml - regexp: 'urlval' - replace: "http://{{ AWSDeployment.stack_outputs.ALBHostName }}" - - - name: Set up test environment external_ip - replace: - path: ../tests/src/test/resources/testenv.yaml - regexp: 'ipval' - 
replace: "{{ AWSDeployment.stack_outputs.ALBHostName }}" - - - name: Set up test environment rt_password - replace: - path: ../tests/src/test/resources/testenv.yaml - regexp: 'passval' - replace: "password" - - - name: show testenv.yaml - debug: var=item - with_file: - - ../tests/src/test/resources/testenv.yaml - - - name: Wait 300 seconds for port 22 - wait_for: - port: 22 - host: "{{ AWSDeployment.stack_outputs.BastionInstancePublic }}" - delay: 10 - - - debug: - msg: "Unified URL is at http://{{ AWSDeployment.stack_outputs.ALBHostName }}" - -# apply roles to install software -- hosts: database - roles: - - postgres - -- hosts: artifactory - vars: - artifactory_ha_enabled: true - master_key: "{{ lookup('env', 'master_key') }}" - join_key: "{{ lookup('env', 'join_key') }}" - db_download_url: "https://jdbc.postgresql.org/download/postgresql-42.2.12.jar" - db_type: "postgresql" - db_driver: "org.postgresql.Driver" - db_user: "artifactory" - db_password: "Art1fAct0ry" - roles: - - artifactory - -- hosts: xray - roles: - - xray - -- name: Test - hosts: localhost - tasks: - - name: Run tests - shell: - cmd: ./gradlew clean unified_test - chdir: ../tests/ - -# Now upgrade -- name: Upgrade - hosts: localhost - tasks: - - pause: - prompt: "Proceed to upgrade?" - minutes: 5 - -- hosts: artifactory - vars: - artifactory_version: "{{ lookup('env', 'artifactory_version_upgrade') }}" - artifactory_upgrade_only: true - roles: - - artifactory - -- hosts: xray - vars: - xray_version: "{{ lookup('env', 'xray_version_upgrade') }}" - xray_upgrade_only: true - roles: - - xray \ No newline at end of file diff --git a/Ansible/test/aws/runAwsInstall.sh b/Ansible/test/aws/runAwsInstall.sh deleted file mode 100755 index 6b1a735..0000000 --- a/Ansible/test/aws/runAwsInstall.sh +++ /dev/null @@ -1,12 +0,0 @@ -#!/usr/bin/env bash - -export stack_name=$1 -export cfn_template="~/git/JFrog-Cloud-Installers/Ansible/infra/aws/lb-rt-xray-ha-ubuntu16.json" -export ssh_public_key_name=jeff-ansible -export artifactory_license_file="~/Desktop/artifactory.cluster.license" -export master_key=d8c19a03036f83ea45f2c658e22fdd60 -export join_key=d8c19a03036f83ea45f2c658e22fdd61 -export ansible_user=ubuntu -export artifactory_version="7.4.3" -export xray_version="3.4.0" -ansible-playbook Ansible/test/aws/playbook-ha-install.yaml \ No newline at end of file diff --git a/Ansible/test/aws/runAwsUpgrade.sh b/Ansible/test/aws/runAwsUpgrade.sh deleted file mode 100755 index 191fe97..0000000 --- a/Ansible/test/aws/runAwsUpgrade.sh +++ /dev/null @@ -1,14 +0,0 @@ -#!/usr/bin/env bash - -export stack_name=$1 -export cfn_template="~/git/JFrog-Cloud-Installers/Ansible/infra/aws/lb-rt-xray-ha-ubuntu16.json" -export ssh_public_key_name=jeff-ansible -export artifactory_license_file="~/Desktop/artifactory.cluster.license" -export master_key=d8c19a03036f83ea45f2c658e22fdd60 -export join_key=d8c19a03036f83ea45f2c658e22fdd61 -export ansible_user=ubuntu -export artifactory_version="7.4.3" -export xray_version="3.4.0" -export artifactory_version_upgrade="7.6.1" -export xray_version_upgrade="3.5.2" -ansible-playbook Ansible/test/aws/playbook-ha-upgrade.yaml \ No newline at end of file diff --git a/Ansible/test/azure/playbook-ha-install.yaml b/Ansible/test/azure/playbook-ha-install.yaml deleted file mode 100644 index 6304319..0000000 --- a/Ansible/test/azure/playbook-ha-install.yaml +++ /dev/null @@ -1,165 +0,0 @@ ---- -- name: Provision Azure test infrastructure - hosts: localhost - tasks: - - name: Create azure test system - azure_rm_deployment: - 
resource_group: "{{ lookup('env', 'azure_resource_group') }}" - location: eastus - name: AzureAnsibleInfra - parameters: - vnetName: - value: "vnetAnsible" - vnetAddressRange: - value: "10.0.0.0/16" - subnetAddressRange: - value: "10.0.0.0/24" - location: - value: "eastus" - adminPublicKey: - value: "{{ lookup('env', 'ssh_public_key') }}" - sizeOfDiskInGB: - value: 128 - vmSize: - value: Standard_D2s_v3 - numberOfArtifactory: - value: 2 - numberOfXray: - value: 1 - numberOfDb: - value: 1 - template_link: "{{ lookup('env', 'arm_template') }}" - register: azureDeployment - - name: Get Azure deployment details - debug: - var: azureDeployment - - - name: Add bastion - add_host: - hostname: "{{ azureDeployment.deployment.outputs.lbIp.value }}" - groups: bastion - ansible_user: "ubuntu" - - name: Add new RT primary to host group - add_host: - hostname: "{{ azureDeployment.deployment.outputs.vmArtPriIp.value }}" - ansible_user: "ubuntu" - ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -A ubuntu@{{ azureDeployment.deployment.outputs.lbIp.value }} -W %h:%p"' - artifactory_version: "{{ lookup('env', 'artifactory_version') }}" - db_url: "jdbc:postgresql://{{ azureDeployment.deployment.outputs.vmDbArrIp.value[0] }}:5432/artifactory" - server_name: "rt.{{ azureDeployment.deployment.outputs.lbIp.value }}.xip.io" - artifactory_is_primary: true - artifactory_license1: "{{ lookup('env', 'artifactory_license1') }}" - artifactory_license2: "{{ lookup('env', 'artifactory_license2') }}" - artifactory_license3: "{{ lookup('env', 'artifactory_license3') }}" - groups: - - artifactory - - - name: Add RT secondaries to host group - add_host: - hostname: "{{ item }}" - ansible_user: "ubuntu" - ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -A ubuntu@{{ azureDeployment.deployment.outputs.lbIp.value }} -W %h:%p"' - artifactory_version: "{{ lookup('env', 'artifactory_version') }}" - db_url: "jdbc:postgresql://{{ azureDeployment.deployment.outputs.vmDbArrIp.value[0] }}:5432/artifactory" - server_name: "rt.{{ azureDeployment.deployment.outputs.lbIp.value }}.xip.io" - artifactory_is_primary: false - groups: - - artifactory - loop: "{{ azureDeployment.deployment.outputs.vmArtSecArrIp.value }}" - - - name: Add xrays to host group - add_host: - hostname: "{{ item }}" - ansible_user: "ubuntu" - ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -A ubuntu@{{ azureDeployment.deployment.outputs.lbIp.value }} -W %h:%p"' - xray_version: "{{ lookup('env', 'xray_version') }}" - jfrog_url: "http://rt.{{ azureDeployment.deployment.outputs.lbIp.value }}.xip.io" - master_key: "{{ lookup('env', 'master_key') }}" - join_key: "{{ lookup('env', 'join_key') }}" - db_type: "postgresql" - db_driver: "org.postgresql.Driver" - db_user: "xray" - db_password: "xray" - db_url: "postgres://{{ azureDeployment.deployment.outputs.vmDbArrIp.value[0] }}:5432/xraydb?sslmode=disable" - groups: xray - loop: "{{ azureDeployment.deployment.outputs.vmXrayArrIp.value }}" - - - name: Add DBs to host group - add_host: - hostname: "{{ item }}" - ansible_user: "ubuntu" - ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -A ubuntu@{{ azureDeployment.deployment.outputs.lbIp.value }} -W %h:%p"' - db_users: - - { db_user: "artifactory", db_password: "Art1fAct0ry" } - - { db_user: "xray", db_password: "xray" } - dbs: - - { db_name: "artifactory", db_owner: "artifactory" } - - { db_name: "xraydb", db_owner: "xray" } - groups: database - loop: "{{ 
azureDeployment.deployment.outputs.vmDbArrIp.value }}" - - - name: Set up test environment url - replace: - path: ../tests/src/test/resources/testenv.yaml - regexp: 'urlval' - replace: "http://rt.{{ azureDeployment.deployment.outputs.lbIp.value }}.xip.io" - - - name: Set up test environment external_ip - replace: - path: ../tests/src/test/resources/testenv.yaml - regexp: 'ipval' - replace: "{{ azureDeployment.deployment.outputs.lbIp.value }}" - - - name: Set up test environment rt_password - replace: - path: ../tests/src/test/resources/testenv.yaml - regexp: 'passval' - replace: "password" - - - name: show testenv.yaml - debug: var=item - with_file: - - ../tests/src/test/resources/testenv.yaml - - - name: Wait 300 seconds for port 22 - wait_for: - port: 22 - host: "{{ azureDeployment.deployment.outputs.lbIp.value }}" - delay: 10 - - - debug: - msg: "Unified URL is at http://rt.{{ azureDeployment.deployment.outputs.lbIp.value }}.xip.io" - -- hosts: database - roles: - - postgres - -- hosts: artifactory - vars: - artifactory_ha_enabled: true - master_key: "{{ lookup('env', 'master_key') }}" - join_key: "{{ lookup('env', 'join_key') }}" - db_download_url: "https://jdbc.postgresql.org/download/postgresql-42.2.12.jar" - db_type: "postgresql" - db_driver: "org.postgresql.Driver" - db_user: "artifactory" - db_password: "Art1fAct0ry" - roles: - - artifactory - -- hosts: xray - roles: - - xray - -- name: Test - hosts: localhost - tasks: - - name: Run tests - shell: - cmd: ./gradlew clean unified_test - chdir: ../tests/ - - name: Cleanup and delete resource group - azure_rm_resourcegroup: - name: "{{ lookup('env', 'azure_resource_group') }}" - force_delete_nonempty: yes - state: absent \ No newline at end of file diff --git a/Ansible/test/azure/runAzure.sh b/Ansible/test/azure/runAzure.sh deleted file mode 100755 index c9d7e80..0000000 --- a/Ansible/test/azure/runAzure.sh +++ /dev/null @@ -1,3 +0,0 @@ -#!/usr/bin/env bash - -ansible-playbook Ansible/test/azure/playbook.yaml \ No newline at end of file diff --git a/Ansible/test/tests/.gradle/6.5/executionHistory/executionHistory.bin b/Ansible/test/tests/.gradle/6.5/executionHistory/executionHistory.bin deleted file mode 100644 index e2cad61..0000000 Binary files a/Ansible/test/tests/.gradle/6.5/executionHistory/executionHistory.bin and /dev/null differ diff --git a/Ansible/test/tests/.gradle/6.5/executionHistory/executionHistory.lock b/Ansible/test/tests/.gradle/6.5/executionHistory/executionHistory.lock deleted file mode 100644 index d7a4c5f..0000000 Binary files a/Ansible/test/tests/.gradle/6.5/executionHistory/executionHistory.lock and /dev/null differ diff --git a/Ansible/test/tests/.gradle/6.5/fileChanges/last-build.bin b/Ansible/test/tests/.gradle/6.5/fileChanges/last-build.bin deleted file mode 100644 index f76dd23..0000000 Binary files a/Ansible/test/tests/.gradle/6.5/fileChanges/last-build.bin and /dev/null differ diff --git a/Ansible/test/tests/.gradle/6.5/fileContent/fileContent.lock b/Ansible/test/tests/.gradle/6.5/fileContent/fileContent.lock deleted file mode 100644 index 1f397f7..0000000 Binary files a/Ansible/test/tests/.gradle/6.5/fileContent/fileContent.lock and /dev/null differ diff --git a/Ansible/test/tests/.gradle/6.5/fileHashes/fileHashes.bin b/Ansible/test/tests/.gradle/6.5/fileHashes/fileHashes.bin deleted file mode 100644 index 4782416..0000000 Binary files a/Ansible/test/tests/.gradle/6.5/fileHashes/fileHashes.bin and /dev/null differ diff --git a/Ansible/test/tests/.gradle/6.5/fileHashes/fileHashes.lock 
b/Ansible/test/tests/.gradle/6.5/fileHashes/fileHashes.lock deleted file mode 100644 index 88d74f8..0000000 Binary files a/Ansible/test/tests/.gradle/6.5/fileHashes/fileHashes.lock and /dev/null differ diff --git a/Ansible/test/tests/.gradle/6.5/gc.properties b/Ansible/test/tests/.gradle/6.5/gc.properties deleted file mode 100644 index e69de29..0000000 diff --git a/Ansible/test/tests/.gradle/buildOutputCleanup/buildOutputCleanup.lock b/Ansible/test/tests/.gradle/buildOutputCleanup/buildOutputCleanup.lock deleted file mode 100644 index cad006e..0000000 Binary files a/Ansible/test/tests/.gradle/buildOutputCleanup/buildOutputCleanup.lock and /dev/null differ diff --git a/Ansible/test/tests/.gradle/buildOutputCleanup/cache.properties b/Ansible/test/tests/.gradle/buildOutputCleanup/cache.properties deleted file mode 100644 index 9d7456f..0000000 --- a/Ansible/test/tests/.gradle/buildOutputCleanup/cache.properties +++ /dev/null @@ -1,2 +0,0 @@ -#Thu Jun 18 12:50:09 PDT 2020 -gradle.version=6.5 diff --git a/Ansible/test/tests/.gradle/buildOutputCleanup/outputFiles.bin b/Ansible/test/tests/.gradle/buildOutputCleanup/outputFiles.bin deleted file mode 100644 index b2b9c92..0000000 Binary files a/Ansible/test/tests/.gradle/buildOutputCleanup/outputFiles.bin and /dev/null differ diff --git a/Ansible/test/tests/.gradle/checksums/checksums.lock b/Ansible/test/tests/.gradle/checksums/checksums.lock deleted file mode 100644 index 19bc257..0000000 Binary files a/Ansible/test/tests/.gradle/checksums/checksums.lock and /dev/null differ diff --git a/Ansible/test/tests/.gradle/vcs-1/gc.properties b/Ansible/test/tests/.gradle/vcs-1/gc.properties deleted file mode 100644 index e69de29..0000000 diff --git a/Ansible/test/tests/README.md b/Ansible/test/tests/README.md deleted file mode 100755 index 21db3cf..0000000 --- a/Ansible/test/tests/README.md +++ /dev/null @@ -1,19 +0,0 @@ -## Test framework - -### How to run it locally - -``` -./gradlew clean commonTests -``` - -### Adding new tests - -### Gradle cleanup. 
Delete the folder: -``` - ~/.gradle/caches/ - ./gradlew clean -``` -### Or run -``` - ./gradlew clean -``` \ No newline at end of file diff --git a/Ansible/test/tests/build.gradle b/Ansible/test/tests/build.gradle deleted file mode 100644 index 1a41ee3..0000000 --- a/Ansible/test/tests/build.gradle +++ /dev/null @@ -1,63 +0,0 @@ -plugins { - id 'groovy' -} - -group 'org.example' -version '1.0-SNAPSHOT' - -repositories { - mavenCentral() -} - -dependencies { - compile 'org.codehaus.groovy:groovy-all:3.0.0' - testCompile 'io.rest-assured:rest-assured:4.1.1' - testCompile 'org.testng:testng:6.14.3' - testCompile 'org.yaml:snakeyaml:1.17' -} - -test { - outputs.upToDateWhen { false } - useTestNG(){ - suites("src/test/groovy/testng.xml") - } - //maxParallelForks = Runtime.runtime.availableProcessors().intdiv(2) ?: 1 - testLogging { - showStandardStreams = true - } - -} - -task artifactory_jcr_test(type: Test) { - useTestNG() { - useDefaultListeners = true - suites 'src/test/groovy/testng.xml' - includeGroups ('common', 'jcr') - } - testLogging { - showStandardStreams = true - } -} - -task artifactory_ha_test(type: Test) { - useTestNG() { - useDefaultListeners = true - suites 'src/test/groovy/testng.xml' - includeGroups('common','pro') - } - testLogging { - showStandardStreams = true - } -} - -task unified_test(type: Test) { - useTestNG() { - useDefaultListeners = true - suites 'src/test/groovy/testng.xml' - includeGroups('common','pro','xray') - } - testLogging { - showStandardStreams = true - } -} - diff --git a/Ansible/test/tests/gradle/wrapper/gradle-wrapper.jar b/Ansible/test/tests/gradle/wrapper/gradle-wrapper.jar deleted file mode 100644 index 87b738c..0000000 Binary files a/Ansible/test/tests/gradle/wrapper/gradle-wrapper.jar and /dev/null differ diff --git a/Ansible/test/tests/gradle/wrapper/gradle-wrapper.properties b/Ansible/test/tests/gradle/wrapper/gradle-wrapper.properties deleted file mode 100644 index f9f4003..0000000 --- a/Ansible/test/tests/gradle/wrapper/gradle-wrapper.properties +++ /dev/null @@ -1,6 +0,0 @@ -#Wed Feb 12 10:23:21 PST 2020 -distributionUrl=https\://services.gradle.org/distributions/gradle-6.5-all.zip -distributionBase=GRADLE_USER_HOME -distributionPath=wrapper/dists -zipStorePath=wrapper/dists -zipStoreBase=GRADLE_USER_HOME diff --git a/Ansible/test/tests/gradlew b/Ansible/test/tests/gradlew deleted file mode 100755 index af6708f..0000000 --- a/Ansible/test/tests/gradlew +++ /dev/null @@ -1,172 +0,0 @@ -#!/usr/bin/env sh - -############################################################################## -## -## Gradle start up script for UN*X -## -############################################################################## - -# Attempt to set APP_HOME -# Resolve links: $0 may be a link -PRG="$0" -# Need this for relative symlinks. -while [ -h "$PRG" ] ; do - ls=`ls -ld "$PRG"` - link=`expr "$ls" : '.*-> \(.*\)$'` - if expr "$link" : '/.*' > /dev/null; then - PRG="$link" - else - PRG=`dirname "$PRG"`"/$link" - fi -done -SAVED="`pwd`" -cd "`dirname \"$PRG\"`/" >/dev/null -APP_HOME="`pwd -P`" -cd "$SAVED" >/dev/null - -APP_NAME="Gradle" -APP_BASE_NAME=`basename "$0"` - -# Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script. -DEFAULT_JVM_OPTS='"-Xmx64m"' - -# Use the maximum available, or set MAX_FD != -1 to use that value. -MAX_FD="maximum" - -warn () { - echo "$*" -} - -die () { - echo - echo "$*" - echo - exit 1 -} - -# OS specific support (must be 'true' or 'false'). 
-cygwin=false -msys=false -darwin=false -nonstop=false -case "`uname`" in - CYGWIN* ) - cygwin=true - ;; - Darwin* ) - darwin=true - ;; - MINGW* ) - msys=true - ;; - NONSTOP* ) - nonstop=true - ;; -esac - -CLASSPATH=$APP_HOME/gradle/wrapper/gradle-wrapper.jar - -# Determine the Java command to use to start the JVM. -if [ -n "$JAVA_HOME" ] ; then - if [ -x "$JAVA_HOME/jre/sh/java" ] ; then - # IBM's JDK on AIX uses strange locations for the executables - JAVACMD="$JAVA_HOME/jre/sh/java" - else - JAVACMD="$JAVA_HOME/bin/java" - fi - if [ ! -x "$JAVACMD" ] ; then - die "ERROR: JAVA_HOME is set to an invalid directory: $JAVA_HOME - -Please set the JAVA_HOME variable in your environment to match the -location of your Java installation." - fi -else - JAVACMD="java" - which java >/dev/null 2>&1 || die "ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH. - -Please set the JAVA_HOME variable in your environment to match the -location of your Java installation." -fi - -# Increase the maximum file descriptors if we can. -if [ "$cygwin" = "false" -a "$darwin" = "false" -a "$nonstop" = "false" ] ; then - MAX_FD_LIMIT=`ulimit -H -n` - if [ $? -eq 0 ] ; then - if [ "$MAX_FD" = "maximum" -o "$MAX_FD" = "max" ] ; then - MAX_FD="$MAX_FD_LIMIT" - fi - ulimit -n $MAX_FD - if [ $? -ne 0 ] ; then - warn "Could not set maximum file descriptor limit: $MAX_FD" - fi - else - warn "Could not query maximum file descriptor limit: $MAX_FD_LIMIT" - fi -fi - -# For Darwin, add options to specify how the application appears in the dock -if $darwin; then - GRADLE_OPTS="$GRADLE_OPTS \"-Xdock:name=$APP_NAME\" \"-Xdock:icon=$APP_HOME/media/gradle.icns\"" -fi - -# For Cygwin, switch paths to Windows format before running java -if $cygwin ; then - APP_HOME=`cygpath --path --mixed "$APP_HOME"` - CLASSPATH=`cygpath --path --mixed "$CLASSPATH"` - JAVACMD=`cygpath --unix "$JAVACMD"` - - # We build the pattern for arguments to be converted via cygpath - ROOTDIRSRAW=`find -L / -maxdepth 1 -mindepth 1 -type d 2>/dev/null` - SEP="" - for dir in $ROOTDIRSRAW ; do - ROOTDIRS="$ROOTDIRS$SEP$dir" - SEP="|" - done - OURCYGPATTERN="(^($ROOTDIRS))" - # Add a user-defined pattern to the cygpath arguments - if [ "$GRADLE_CYGPATTERN" != "" ] ; then - OURCYGPATTERN="$OURCYGPATTERN|($GRADLE_CYGPATTERN)" - fi - # Now convert the arguments - kludge to limit ourselves to /bin/sh - i=0 - for arg in "$@" ; do - CHECK=`echo "$arg"|egrep -c "$OURCYGPATTERN" -` - CHECK2=`echo "$arg"|egrep -c "^-"` ### Determine if an option - - if [ $CHECK -ne 0 ] && [ $CHECK2 -eq 0 ] ; then ### Added a condition - eval `echo args$i`=`cygpath --path --ignore --mixed "$arg"` - else - eval `echo args$i`="\"$arg\"" - fi - i=$((i+1)) - done - case $i in - (0) set -- ;; - (1) set -- "$args0" ;; - (2) set -- "$args0" "$args1" ;; - (3) set -- "$args0" "$args1" "$args2" ;; - (4) set -- "$args0" "$args1" "$args2" "$args3" ;; - (5) set -- "$args0" "$args1" "$args2" "$args3" "$args4" ;; - (6) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" ;; - (7) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" ;; - (8) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" ;; - (9) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" "$args8" ;; - esac -fi - -# Escape application args -save () { - for i do printf %s\\n "$i" | sed "s/'/'\\\\''/g;1s/^/'/;\$s/\$/' \\\\/" ; done - echo " " -} -APP_ARGS=$(save "$@") - -# Collect all arguments for the java command, following the shell quoting 
and substitution rules -eval set -- $DEFAULT_JVM_OPTS $JAVA_OPTS $GRADLE_OPTS "\"-Dorg.gradle.appname=$APP_BASE_NAME\"" -classpath "\"$CLASSPATH\"" org.gradle.wrapper.GradleWrapperMain "$APP_ARGS" - -# by default we should be in the correct project dir, but when run from Finder on Mac, the cwd is wrong -if [ "$(uname)" = "Darwin" ] && [ "$HOME" = "$PWD" ]; then - cd "$(dirname "$0")" -fi - -exec "$JAVACMD" "$@" diff --git a/Ansible/test/tests/gradlew.bat b/Ansible/test/tests/gradlew.bat deleted file mode 100644 index 6d57edc..0000000 --- a/Ansible/test/tests/gradlew.bat +++ /dev/null @@ -1,84 +0,0 @@ -@if "%DEBUG%" == "" @echo off -@rem ########################################################################## -@rem -@rem Gradle startup script for Windows -@rem -@rem ########################################################################## - -@rem Set local scope for the variables with windows NT shell -if "%OS%"=="Windows_NT" setlocal - -set DIRNAME=%~dp0 -if "%DIRNAME%" == "" set DIRNAME=. -set APP_BASE_NAME=%~n0 -set APP_HOME=%DIRNAME% - -@rem Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script. -set DEFAULT_JVM_OPTS="-Xmx64m" - -@rem Find java.exe -if defined JAVA_HOME goto findJavaFromJavaHome - -set JAVA_EXE=java.exe -%JAVA_EXE% -version >NUL 2>&1 -if "%ERRORLEVEL%" == "0" goto init - -echo. -echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH. -echo. -echo Please set the JAVA_HOME variable in your environment to match the -echo location of your Java installation. - -goto fail - -:findJavaFromJavaHome -set JAVA_HOME=%JAVA_HOME:"=% -set JAVA_EXE=%JAVA_HOME%/bin/java.exe - -if exist "%JAVA_EXE%" goto init - -echo. -echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME% -echo. -echo Please set the JAVA_HOME variable in your environment to match the -echo location of your Java installation. - -goto fail - -:init -@rem Get command-line arguments, handling Windows variants - -if not "%OS%" == "Windows_NT" goto win9xME_args - -:win9xME_args -@rem Slurp the command line arguments. -set CMD_LINE_ARGS= -set _SKIP=2 - -:win9xME_args_slurp -if "x%~1" == "x" goto execute - -set CMD_LINE_ARGS=%* - -:execute -@rem Setup the command line - -set CLASSPATH=%APP_HOME%\gradle\wrapper\gradle-wrapper.jar - -@rem Execute Gradle -"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %GRADLE_OPTS% "-Dorg.gradle.appname=%APP_BASE_NAME%" -classpath "%CLASSPATH%" org.gradle.wrapper.GradleWrapperMain %CMD_LINE_ARGS% - -:end -@rem End local scope for the variables with windows NT shell -if "%ERRORLEVEL%"=="0" goto mainEnd - -:fail -rem Set variable GRADLE_EXIT_CONSOLE if you need the _script_ return code instead of -rem the _cmd.exe /c_ return code! 
-if not "" == "%GRADLE_EXIT_CONSOLE%" exit 1 -exit /b 1 - -:mainEnd -if "%OS%"=="Windows_NT" endlocal - -:omega diff --git a/Ansible/test/tests/settings.gradle b/Ansible/test/tests/settings.gradle deleted file mode 100644 index d30fe0d..0000000 --- a/Ansible/test/tests/settings.gradle +++ /dev/null @@ -1,2 +0,0 @@ -rootProject.name = 'fozzie_jfrog_tests' - diff --git a/Ansible/test/tests/src/test/groovy/steps/RepositorySteps.groovy b/Ansible/test/tests/src/test/groovy/steps/RepositorySteps.groovy deleted file mode 100644 index 3ae8b07..0000000 --- a/Ansible/test/tests/src/test/groovy/steps/RepositorySteps.groovy +++ /dev/null @@ -1,139 +0,0 @@ -package steps - - -import static io.restassured.RestAssured.given - -class RepositorySteps { - - def getHealthCheckResponse(artifactoryURL) { - return given() - .when() - .get("http://" + artifactoryURL + "/router/api/v1/system/health") - .then() - .extract().response() - } - - def ping() { - return given() - .when() - .get("/api/system/ping") - .then() - .extract().response() - } - - def createRepositories(File body, username, password) { - return given() - .auth() - .preemptive() - .basic("${username}", "${password}") - .header("Cache-Control", "no-cache") - .header("content-Type", "application/yaml") - .body(body) - .when() - .patch("/api/system/configuration") - .then() - .extract().response() - } - // https://www.jfrog.com/confluence/display/JFROG/Artifactory+REST+API#ArtifactoryRESTAPI-GetRepositories - def getRepos() { - return given() - .header("Cache-Control", "no-cache") - .header("content-Type", "application/yaml") - .when() - .get("/api/repositories") - .then() - .extract().response() - - } - // https://www.jfrog.com/confluence/display/JFROG/Artifactory+REST+API#ArtifactoryRESTAPI-DeleteRepository - def deleteRepository(repoName, username, password) { - return given() - .auth() - .preemptive() - .basic("${username}", "${password}") - .header("Cache-Control", "no-cache") - .header("content-Type", "application/yaml") - .when() - .delete("/api/repositories/" + repoName) - .then() - .extract().response() - - } - - def createDirectory(repoName, directoryName) { - return given() - .header("Cache-Control", "no-cache") - .header("content-Type", "application/yaml") - .when() - .put("/" + repoName + "/" + directoryName) - .then() - .extract().response() - - } - - def deployArtifact(repoName, directoryName, artifact, filename) { - return given() - .header("Cache-Control", "no-cache") - .header("Content-Type", "application/json") - .body(artifact) - .when() - .put("/" + repoName + "/" + directoryName + "/" + filename) - .then() - .extract().response() - - } - - def deleteItem(repoName, directoryName, artifact, filename) { - return given() - .header("Cache-Control", "no-cache") - .header("Content-Type", "application/json") - .body(artifact) - .when() - .delete("/" + repoName + "/" + directoryName + "/" + filename) - .then() - .extract().response() - - } - - def getInfo(repoName, directoryName, artifact, filename) { - return given() - .header("Cache-Control", "no-cache") - .header("Content-Type", "application/json") - .body(artifact) - .when() - .get("/api/storage/" + repoName + "/" + directoryName + "/" + filename) - .then() - .extract().response() - - } - - def createSupportBundle(name, startDate, endDate) { - return given() - .header("Cache-Control", "no-cache") - .header("Content-Type", "application/json") - .body("{ \n" + - " \"name\":\"${name}\",\n" + - " \"description\":\"desc\",\n" + - " \"parameters\":{ \n" + - " \"configuration\": 
\"true\",\n" + - " \"system\": \"true\", \n" + - " \"logs\":{ \n" + - " \"include\": \"true\", \n" + - " \"start_date\":\"${startDate}\",\n" + - " \"end_date\":\"${endDate}\"\n" + - " },\n" + - " \"thread_dump\":{ \n" + - " \"count\": 1,\n" + - " \"interval\": 0\n" + - " }\n" + - " }\n" + - "}") - .when() - .post("/api/system/support/bundle") - .then() - .extract().response() - - } - - -} \ No newline at end of file diff --git a/Ansible/test/tests/src/test/groovy/testng.xml b/Ansible/test/tests/src/test/groovy/testng.xml deleted file mode 100644 index cee4eb7..0000000 --- a/Ansible/test/tests/src/test/groovy/testng.xml +++ /dev/null @@ -1,10 +0,0 @@ - - - - - - - - - - diff --git a/Ansible/test/tests/src/test/groovy/tests/HealthCheckTest.groovy b/Ansible/test/tests/src/test/groovy/tests/HealthCheckTest.groovy deleted file mode 100644 index 4e7fcce..0000000 --- a/Ansible/test/tests/src/test/groovy/tests/HealthCheckTest.groovy +++ /dev/null @@ -1,57 +0,0 @@ -package tests - -import io.restassured.RestAssured -import io.restassured.path.json.JsonPath -import io.restassured.response.Response -import org.hamcrest.Matchers -import org.testng.Reporter -import org.testng.annotations.BeforeSuite -import org.testng.annotations.Test -import steps.RepositorySteps -import org.yaml.snakeyaml.Yaml -import utils.Shell - -class HealthCheckTest extends RepositorySteps{ - Yaml yaml = new Yaml() - def configFile = new File("./src/test/resources/testenv.yaml") - def config = yaml.load(configFile.text) - def artifactoryURL - - - @BeforeSuite(alwaysRun = true) - def setUp() { - artifactoryURL = config.artifactory.external_ip - RestAssured.baseURI = "http://${artifactoryURL}/artifactory" - } - - - @Test(priority=0, groups="common", testName = "Health check for all 4 services") - void healthCheckTest(){ - Response response = getHealthCheckResponse(artifactoryURL) - response.then().assertThat().statusCode(200). - body("router.state", Matchers.equalTo("HEALTHY")) - - int bodySize = response.body().jsonPath().getList("services").size() - for (int i = 0; i < bodySize; i++) { - JsonPath jsonPathEvaluator = response.jsonPath() - String serviceID = jsonPathEvaluator.getString("services[" + i + "].service_id") - String nodeID = jsonPathEvaluator.getString("services[" + i + "].node_id") - response.then(). - body("services[" + i + "].state", Matchers.equalTo("HEALTHY")) - - Reporter.log("- Health check. Service \"" + serviceID + "\" on node \"" + nodeID + "\" is healthy", true) - } - - } - - @Test(priority=1, groups=["ping","common"], testName = "Ping (In HA 200 only when licences were added)") - void pingTest() { - Response response = ping() - response.then().assertThat().statusCode(200). - body(Matchers.hasToString("OK")) - Reporter.log("- Ping test. 
Service is OK", true) - } - - - -} diff --git a/Ansible/test/tests/src/test/groovy/utils/ConfigurationUtil.groovy b/Ansible/test/tests/src/test/groovy/utils/ConfigurationUtil.groovy deleted file mode 100644 index 208bf29..0000000 --- a/Ansible/test/tests/src/test/groovy/utils/ConfigurationUtil.groovy +++ /dev/null @@ -1,19 +0,0 @@ -package utils - -class ConfigurationUtil { - - static def getEnvironmentVariableValue(def name) { - def value = System.getProperty(name) - if (value == null) { - value = System.getenv(name) - if (value == null) { - throw new Exception("Environment variable $name not set!"); - } - } - return value - } - - - - -} diff --git a/Ansible/test/tests/src/test/groovy/utils/DSL.groovy b/Ansible/test/tests/src/test/groovy/utils/DSL.groovy deleted file mode 100644 index 6a14f48..0000000 --- a/Ansible/test/tests/src/test/groovy/utils/DSL.groovy +++ /dev/null @@ -1,81 +0,0 @@ -package utils - -import utils.ProcessOutputStream - -/** - * Created by eliom on 6/19/18. - */ -class DSL { - - /** - * Run shell command - */ - static def sh = { command, outputBuffer = null, folder = null, silent = false, customEnvVariables = null, errorBuffer = null -> - //def workdir = ConfigurationUtil.getEnvironmentVariableValue("KERMIT_WORKSPACE_DIR") - def workdir = "/Users/danielmi/projects/soldev/.kermit-workspace" - def commandFolder - if (folder != null) { - commandFolder = new File(folder, workdir) - } else { - commandFolder = workdir - } - - if (!silent) { - println "Running command at ${commandFolder}: $command" - } - def proc = null - try { - def env = System.getenv().collect { k, v -> "$k=$v" } - - if (customEnvVariables != null) { - env.addAll( customEnvVariables.collect { k, v -> "$k=$v" }) - } - - if (command instanceof List && command.size() > 0 && command[0] instanceof List) { - //Pipe Commands - command.each { - if (proc != null) { - proc = proc | it.execute(env, commandFolder) - } else { - proc = it.execute(env, commandFolder) - } - } - } else { - proc = command.execute(env, commandFolder) - } - } catch (IOException e) { - println "Failed to execute command: ${e.getMessage()}" - return -1 - } - def processOutput = new ProcessOutputStream(silent, outputBuffer == null) - def errorOutput = processOutput - if (errorBuffer != null) { - errorOutput = new ProcessOutputStream(silent, errorBuffer == null) - } - - proc.consumeProcessOutput(processOutput, errorOutput) - def exitStatus = proc.waitFor() - if (!silent) { - println "Exit: $exitStatus" - } - if (outputBuffer != null) { - outputBuffer.append(processOutput.toString()) - } - - processOutput.close() - - if (errorBuffer != null) { - errorBuffer.append(errorOutput.toString()) - errorOutput.close() - } - - return exitStatus - } - - - //... 
- - - - -} diff --git a/Ansible/test/tests/src/test/groovy/utils/EnvironmentConfig.groovy b/Ansible/test/tests/src/test/groovy/utils/EnvironmentConfig.groovy deleted file mode 100644 index db7d017..0000000 --- a/Ansible/test/tests/src/test/groovy/utils/EnvironmentConfig.groovy +++ /dev/null @@ -1,10 +0,0 @@ -package utils - - - -class EnvironmentConfig { - - - - -} diff --git a/Ansible/test/tests/src/test/groovy/utils/ProcessOutputStream.groovy b/Ansible/test/tests/src/test/groovy/utils/ProcessOutputStream.groovy deleted file mode 100644 index aff11c2..0000000 --- a/Ansible/test/tests/src/test/groovy/utils/ProcessOutputStream.groovy +++ /dev/null @@ -1,32 +0,0 @@ -package utils - -public class ProcessOutputStream extends ByteArrayOutputStream{ - - private boolean silent = false; - private boolean discardOutput = false; - - public ProcessOutputStream(boolean silent, boolean discardOutput) { - this.silent = silent; - this.discardOutput = discardOutput; - } - - @Override - public synchronized void write(int b) { - if (!silent) { - System.out.write(b); - } - if (!discardOutput) { - super.write(b); - } - } - - @Override - public synchronized void write(byte[] b, int off, int len) { - if (!silent) { - System.out.write(b, off, len); - } - if (!discardOutput) { - super.write(b, off, len); - } - } -} \ No newline at end of file diff --git a/Ansible/test/tests/src/test/groovy/utils/Shell.groovy b/Ansible/test/tests/src/test/groovy/utils/Shell.groovy deleted file mode 100644 index 87d4aaf..0000000 --- a/Ansible/test/tests/src/test/groovy/utils/Shell.groovy +++ /dev/null @@ -1,17 +0,0 @@ -package utils - -class Shell { - - def executeProc(cmd) { - println(cmd) - def proc = cmd.execute() - - proc.in.eachLine {line -> - println line - } - - println proc.err.text - - proc.exitValue() - } -} diff --git a/Ansible/test/tests/src/test/groovy/utils/WorkSpaceManager.groovy b/Ansible/test/tests/src/test/groovy/utils/WorkSpaceManager.groovy deleted file mode 100644 index f1262bb..0000000 --- a/Ansible/test/tests/src/test/groovy/utils/WorkSpaceManager.groovy +++ /dev/null @@ -1,32 +0,0 @@ -package utils - -/** - * Created by eliom on 6/26/18. 
- */ -class WorkspaceManager { - - //TODO: Make it Thread safe - def static currentPath = [] - - def static pushPath(path) { - currentPath.push(path) - } - - def static popPath() { - currentPath.pop() - } - - def static getCurrentDir() { - def workspaceRoot = ConfigurationUtil.getWorkspaceDir() - if (currentPath.size() > 0) { - def currentDir = new File(currentPath.join('/'), workspaceRoot) - if (!currentDir.exists()) { - currentDir.mkdirs() - } - return currentDir - } else { - return workspaceRoot - } - } - -} diff --git a/Ansible/test/tests/src/test/resources/enableRabbitMQ.json b/Ansible/test/tests/src/test/resources/enableRabbitMQ.json deleted file mode 100644 index c6e54d4..0000000 --- a/Ansible/test/tests/src/test/resources/enableRabbitMQ.json +++ /dev/null @@ -1,11 +0,0 @@ -{ - "sslInsecure": false, - "maxDiskDataUsage": 80, - "monitorSamplingInterval": 300, - "mailNoSsl": false, - "messageMaxTTL": 7, - "jobInterval": 86400, - "allowSendingAnalytics": true, - "httpsPort": 443, - "enableTlsConnectionToRabbitMQ": true -} \ No newline at end of file diff --git a/Ansible/test/tests/src/test/resources/integration.json b/Ansible/test/tests/src/test/resources/integration.json deleted file mode 100644 index 08ba303..0000000 --- a/Ansible/test/tests/src/test/resources/integration.json +++ /dev/null @@ -1,9 +0,0 @@ -{ - "vendor": "whitesource5", - "api_key": "12345", - "enabled": true, - "context": "project_id", - "url": "https://saas.whitesourcesoftware.com/xray", - "description": "WhiteSource provides a simple yet powerful open source security and licenses management solution. More details at http://www.whitesourcesoftware.com.", - "test_url": "https://saas.whitesourcesoftware.com/xray/api/checkauth" -} \ No newline at end of file diff --git a/Ansible/test/tests/src/test/resources/repositories/CreateDefault.yaml b/Ansible/test/tests/src/test/resources/repositories/CreateDefault.yaml deleted file mode 100644 index ef1f5fd..0000000 --- a/Ansible/test/tests/src/test/resources/repositories/CreateDefault.yaml +++ /dev/null @@ -1,554 +0,0 @@ -localRepositories: - libs-release-local: - type: maven - description: "production deployment" - repoLayout: maven-2-default - xray: - enabled: true - libs-snapshot-local: - type: maven - description: "snapshot deployment" - repoLayout: maven-2-default - xray: - enabled: true - maven-prod-local: - type: maven - description: "production release deployment" - repoLayout: maven-2-default - xray: - enabled: true - maven-dev-local: - type: maven - description: "development release deployment" - repoLayout: maven-2-default - xray: - enabled: true - maven-release-local: - type: maven - description: "development release deployment" - repoLayout: maven-2-default - xray: - enabled: true - maven-snapshot-local: - type: maven - description: "development release deployment" - repoLayout: maven-2-default - xray: - enabled: true - gradle-prod-local: - type: gradle - description: "production deployment" - repoLayout: gradle-default - xray: - enabled: true - gradle-dev-local: - type: gradle - description: "development deployment" - repoLayout: gradle-default - xray: - enabled: true - tomcat-local: - type: generic - description: "used by demo" - repoLayout: simple-default - xray: - enabled: true - generic-prod-local: - type: generic - description: "production deployment" - repoLayout: simple-default - xray: - enabled: true - generic-dev-local: - type: generic - description: "development deployment" - repoLayout: simple-default - xray: - enabled: true - ivy-prod-local: - type: 
ivy - description: "production deployment" - repoLayout: "ivy-default" - xray: - enabled: true - ivy-dev-local: - type: ivy - description: "development deployment" - repoLayout: ivy-default - xray: - enabled: true - helm-prod-local: - type: helm - description: "production deployment" - repoLayout: simple-default - xray: - enabled: true - helm-dev-local: - type: helm - description: "development deployment" - repoLayout: simple-default - xray: - enabled: true - sbt-prod-local: - type: sbt - description: "production deployment" - repoLayout: sbt-default - xray: - enabled: true - sbt-dev-local: - type: sbt - description: "development deployment" - repoLayout: sbt-default - xray: - enabled: true - nuget-prod-local: - type: nuget - description: "production deployment" - repoLayout: nuget-default - xray: - enabled: true - nuget-dev-local: - type: nuget - description: "development deployment" - repoLayout: nuget-default - xray: - enabled: true - gems-prod-local: - type: gems - description: "production deployment" - repoLayout: simple-default - xray: - enabled: true - gems-dev-local: - type: gems - description: "development deployment" - repoLayout: simple-default - xray: - enabled: true - npm-prod-local: - type: npm - description: "production deployment" - repoLayout: npm-default - xray: - enabled: true - npm-dev-local: - type: npm - description: "development deployment" - repoLayout: npm-default - xray: - enabled: true - bower-prod-local: - type: bower - description: "production deployment" - repoLayout: bower-default - xray: - enabled: true - bower-dev-local: - type: bower - description: "development deployment" - repoLayout: bower-default - xray: - enabled: true - debian-prod-local: - type: debian - description: "production deployment" - repoLayout: simple-default - xray: - enabled: true - debian-dev-local: - type: debian - description: "development deployment" - repoLayout: simple-default - xray: - enabled: true - php-prod-local: - type: composer - description: "production deployment" - repoLayout: composer-default - xray: - enabled: true - php-dev-local: - type: composer - description: "development deployment" - repoLayout: composer-default - xray: - enabled: true - pypi-prod-local: - type: pypi - description: "production deployment" - repoLayout: simple-default - xray: - enabled: true - pypi-dev-local: - type: pypi - description: "development deployment" - repoLayout: simple-default - xray: - enabled: true - docker-prod-local: - type: docker - description: "production deployment" - repoLayout: simple-default - xray: - enabled: true - docker-stage-local: - type: docker - description: "stage deployment" - repoLayout: simple-default - xray: - enabled: true - docker-dev-local: - type: docker - description: "development deployment" - repoLayout: simple-default - xray: - enabled: true - docker-local: - type: docker - description: "docker deployment" - repoLayout: simple-default - xray: - enabled: true - docker-push: - type: docker - description: "docker push repo for push replication testing" - repoLayout: simple-default - xray: - enabled: true - vagrant-prod-local: - type: vagrant - description: "production deployment" - repoLayout: simple-default - xray: - enabled: true - vagrant-dev-local: - type: vagrant - description: "development deployment" - repoLayout: simple-default - xray: - enabled: true - gitlfs-prod-local: - type: gitlfs - description: "production deployment" - repoLayout: simple-default - xray: - enabled: true - gitlfs-dev-local: - type: gitlfs - description: "development 
deployment" - repoLayout: simple-default - xray: - enabled: true - rpm-prod-local: - type: yum - description: "production deployment" - repoLayout: simple-default - xray: - enabled: true - rpm-dev-local: - type: yum - description: "development deployment" - repoLayout: simple-default - xray: - enabled: true - conan-prod-local: - type: conan - description: "production deployment" - repoLayout: conan-default - xray: - enabled: true - conan-dev-local: - type: conan - description: "development deployment" - repoLayout: conan-default - xray: - enabled: true - chef-prod-local: - type: chef - description: "production deployment" - repoLayout: simple-default - xray: - enabled: true - chef-dev-local: - type: chef - description: "development deployment" - repoLayout: simple-default - xray: - enabled: true - puppet-prod-local: - type: puppet - description: "production deployment" - repoLayout: puppet-default - xray: - enabled: true - puppet-dev-local: - type: puppet - description: "development deployment" - repoLayout: puppet-default - xray: - enabled: true - go-prod-local: - type: go - description: "production deployment" - repoLayout: go-default - xray: - enabled: true - go-staging-local: - type: go - description: "production deployment" - repoLayout: go-default - xray: - enabled: true -remoteRepositories: - docker-remote: - type: docker - url: https://registry-1.docker.io - repoLayout: simple-default - enableTokenAuthentication: true - xray: - enabled: true - helm-remote: - type: helm - url: https://storage.googleapis.com/kubernetes-charts - repoLayout: simple-default - xray: - enabled: true - jcenter: - type: maven - url: https://jcenter.bintray.com - repoLayout: maven-2-default - xray: - enabled: true - npm-remote: - type: npm - url: https://registry.npmjs.org - repoLayout: npm-default - xray: - enabled: true - nuget-remote: - type: nuget - url: https://www.nuget.org/ - repoLayout: nuget-default - xray: - enabled: true - bower-remote: - type: bower - url: https://github.com/ - repoLayout: bower-default - xray: - enabled: true - gems-remote: - type: gems - url: https://rubygems.org/ - repoLayout: simple-default - xray: - enabled: true - debian-remote: - type: debian - url: http://archive.ubuntu.com/ubuntu/ - repoLayout: simple-default - xray: - enabled: true - php-remote: - type: composer - url: https://github.com/ - repoLayout: composer-default - xray: - enabled: true - pypi-remote: - type: pypi - url: https://files.pythonhosted.org - repoLayout: simple-default - xray: - enabled: true - rpm-remote: - type: yum - url: http://mirror.centos.org/centos/ - repoLayout: simple-default - xray: - enabled: true - chef-remote: - type: chef - url: https://supermarket.chef.io - repoLayout: simple-default - xray: - enabled: true - puppet-remote: - type: puppet - url: https://forgeapi.puppetlabs.com/ - repoLayout: puppet-default - xray: - enabled: true -virtualRepositories: - maven-release-virtual: - type: maven - repositories: - - maven-prod-local - - jcenter - - maven-release-local - - libs-release-local - description: "maven release virtual repositories" - defaultDeploymentRepo: maven-release-local - maven-snapshot-virtual: - type: maven - repositories: - - maven-snapshot-local - - jcenter - - maven-dev-local - - libs-snapshot-local - description: "maven snapshot virtual repositories" - defaultDeploymentRepo: maven-snapshot-local - gradle-virtual: - type: gradle - repositories: - - gradle-dev-local - - jcenter - - gradle-prod-local - - libs-release-local - description: "gradle virtual repositories" - 
defaultDeploymentRepo: gradle-dev-local - docker-PLACEHOLDERFORBUILDSTEP: - type: docker - repositories: - - docker-local - - docker-remote - - docker-dev-local - - docker-prod-local - - docker-stage-local - - docker-push - description: "docker virtual" - defaultDeploymentRepo: docker-stage-local - docker-virtual: - type: docker - repositories: - - docker-local - - docker-remote - - docker-dev-local - - docker-prod-local - - docker-stage-local - - docker-push - description: "docker virtual" - defaultDeploymentRepo: docker-stage-local - libs-release: - type: maven - repositories: - - libs-release-local - - jcenter - description: "maven libraries virtual" - defaultDeploymentRepo: libs-release-local - libs-snapshot: - type: maven - repositories: - - libs-snapshot-local - - jcenter - description: "maven libraries virtual" - defaultDeploymentRepo: libs-snapshot-local - ivy-virtual: - type: ivy - repositories: - - ivy-prod-local - - ivy-dev-local - - jcenter - description: "ivy virtual" - defaultDeploymentRepo: ivy-dev-local - generic-virtual: - type: generic - repositories: - - generic-prod-local - - generic-dev-local - description: "generic virtual" - defaultDeploymentRepo: generic-dev-local - helm-virtual: - type: helm - repositories: - - helm-prod-local - - helm-dev-local - - helm-remote - description: "helm virtual" - defaultDeploymentRepo: helm-dev-local - nuget-virtual: - type: nuget - repositories: - - nuget-prod-local - - nuget-dev-local - - nuget-remote - description: "nuget virtual" - defaultDeploymentRepo: nuget-dev-local - npm-virtual: - type: npm - repositories: - - npm-dev-local - - npm-remote - - npm-prod-local - description: "npm virtual" - defaultDeploymentRepo: npm-dev-local - chef-virtual: - type: chef - repositories: - - chef-dev-local - - chef-remote - - chef-prod-local - description: "chef virtual" - defaultDeploymentRepo: chef-dev-local - puppet-virtual: - type: puppet - repositories: - - puppet-dev-local - - puppet-remote - - puppet-prod-local - description: "puppet virtual" - defaultDeploymentRepo: puppet-dev-local - rpm-virtual: - type: yum - repositories: - - rpm-dev-local - - rpm-remote - - rpm-prod-local - description: "rpm virtual" - defaultDeploymentRepo: rpm-dev-local - gitlfs-virtual: - type: gitlfs - repositories: - - gitlfs-dev-local - - gitlfs-prod-local - description: "gitlfs virtual" - defaultDeploymentRepo: gitlfs-dev-local - pypi-virtual: - type: pypi - repositories: - - pypi-dev-local - - pypi-prod-local - - pypi-remote - description: "pypi virtual" - defaultDeploymentRepo: pypi-dev-local - bower-virtual: - type: bower - repositories: - - bower-dev-local - - bower-prod-local - - bower-remote - description: "bower virtual" - defaultDeploymentRepo: bower-dev-local - gems-virtual: - type: gems - repositories: - - gems-dev-local - - gems-prod-local - - gems-remote - description: "gems virtual" - defaultDeploymentRepo: gems-dev-local - sbt-virtual: - type: sbt - repositories: - - sbt-dev-local - - sbt-prod-local - - jcenter - description: "sbt virtual" - defaultDeploymentRepo: sbt-dev-local - go-staging: - type: go - repositories: - - go-staging-local - - go-prod-local - description: "go virtual" - defaultDeploymentRepo: go-staging-local diff --git a/Ansible/test/tests/src/test/resources/repositories/CreateJCR.yaml b/Ansible/test/tests/src/test/resources/repositories/CreateJCR.yaml deleted file mode 100644 index 38a5feb..0000000 --- a/Ansible/test/tests/src/test/resources/repositories/CreateJCR.yaml +++ /dev/null @@ -1,119 +0,0 @@ -localRepositories: - 
tomcat-local: - type: generic - description: "used by demo" - repoLayout: simple-default - xray: - enabled: true - generic-prod-local: - type: generic - description: "production deployment" - repoLayout: simple-default - xray: - enabled: true - generic-dev-local: - type: generic - description: "development deployment" - repoLayout: simple-default - xray: - enabled: true - helm-prod-local: - type: helm - description: "production deployment" - repoLayout: simple-default - xray: - enabled: true - helm-dev-local: - type: helm - description: "development deployment" - repoLayout: simple-default - xray: - enabled: true - docker-generator: - type: docker - description: "docker generator repo for generation testing" - repoLayout: simple-default - xray: - enabled: true - docker-prod-local: - type: docker - description: "production deployment" - repoLayout: simple-default - xray: - enabled: true - docker-stage-local: - type: docker - description: "stage deployment" - repoLayout: simple-default - xray: - enabled: true - docker-dev-local: - type: docker - description: "development deployment" - repoLayout: simple-default - xray: - enabled: true - docker-local: - type: docker - description: "docker deployment" - repoLayout: simple-default - xray: - enabled: true - docker-push: - type: docker - description: "docker push repo for push replication testing" - repoLayout: simple-default - xray: - enabled: true -virtualRepositories: - generic-virtual: - type: generic - repositories: - - generic-prod-local - - generic-dev-local - description: "generic virtual" - defaultDeploymentRepo: generic-dev-local - helm-virtual: - type: helm - repositories: - - helm-prod-local - - helm-dev-local - - helm-remote - description: "helm virtual" - defaultDeploymentRepo: helm-dev-local - docker-PLACEHOLDERFORBUILDSTEP: - type: docker - repositories: - - docker-local - - docker-remote - - docker-dev-local - - docker-prod-local - - docker-stage-local - - docker-push - description: "docker virtual" - defaultDeploymentRepo: docker-stage-local - docker-virtual: - type: docker - repositories: - - docker-local - - docker-remote - - docker-dev-local - - docker-prod-local - - docker-stage-local - - docker-push - description: "docker virtual" - defaultDeploymentRepo: docker-stage-local -remoteRepositories: - helm-remote: - type: helm - url: https://storage.googleapis.com/kubernetes-charts - repoLayout: simple-default - xray: - enabled: true - docker-remote: - type: docker - url: https://registry-1.docker.io - repoLayout: simple-default - enableTokenAuthentication: true - xray: - enabled: true \ No newline at end of file diff --git a/Ansible/test/tests/src/test/resources/repositories/artifact.zip b/Ansible/test/tests/src/test/resources/repositories/artifact.zip deleted file mode 100644 index 0e86cb5..0000000 Binary files a/Ansible/test/tests/src/test/resources/repositories/artifact.zip and /dev/null differ diff --git a/Ansible/test/tests/src/test/resources/testenv.yaml b/Ansible/test/tests/src/test/resources/testenv.yaml deleted file mode 100644 index 3480812..0000000 --- a/Ansible/test/tests/src/test/resources/testenv.yaml +++ /dev/null @@ -1,6 +0,0 @@ -artifactory: - url: http://Ansib-Appli-1NLZU3V2AGK49-291976964.us-east-1.elb.amazonaws.com - external_ip: Ansib-Appli-1NLZU3V2AGK49-291976964.us-east-1.elb.amazonaws.com - distribution: artifactory_ha - rt_username: admin - rt_password: password \ No newline at end of file diff --git a/Ansible/test/tests/src/test/resources/testenv_tpl.yaml 
b/Ansible/test/tests/src/test/resources/testenv_tpl.yaml deleted file mode 100644 index 55ff648..0000000 --- a/Ansible/test/tests/src/test/resources/testenv_tpl.yaml +++ /dev/null @@ -1,6 +0,0 @@ -artifactory: - url: urlval - external_ip: ipval - distribution: artifactory_ha - rt_username: admin - rt_password: passval \ No newline at end of file diff --git a/ansible.cfg b/ansible.cfg deleted file mode 100644 index 3d08974..0000000 --- a/ansible.cfg +++ /dev/null @@ -1,10 +0,0 @@ -[defaults] -# Installs collections into [current dir]/ansible_collections/namespace/collection_name -collections_paths = ~/.ansible/collections:/usr/share/ansible/collections:collection - -# Installs roles into [current dir]/roles/namespace.rolename -roles_path = Ansible/ansible_collections/jfrog/installers/roles - -host_key_checking = false - -deprecation_warnings=False \ No newline at end of file
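The ansible.cfg above pins `roles_path` inside the collection checkout, which is why the test playbooks in this diff can reference the bare role names `artifactory`, `xray`, and `postgres`. Those playbooks build their inventory on the fly with `add_host`; to run the same roles against machines that already exist, a static inventory with the same group names would look roughly like this (the addresses below are placeholders, not values from the templates):

```
# Hypothetical hosts.yml mirroring the groups the add_host tasks create.
all:
  vars:
    ansible_user: ubuntu
  children:
    database:
      hosts:
        10.0.0.10:
    artifactory:
      hosts:
        10.0.0.20:
          artifactory_is_primary: true
        10.0.0.21:
          artifactory_is_primary: false
    xray:
      hosts:
        10.0.0.30:
```

Because the group names match the `groups:` values used by the add_host tasks, the `- hosts: database`, `- hosts: artifactory`, and `- hosts: xray` plays from the test playbooks would apply to this inventory unchanged.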