Add the ability to disable local authentication
SUMMARY
When an external authentication system is enabled, users would like the ability to disable local authentication for enhanced security.
related #4553
TODO
- create a configure-Tower-in-Tower setting, DISABLE_LOCAL_AUTH
- expose the setting in the settings UI
- be able to query out all local-only users
  - User.objects.filter(Q(profile__isnull=True) | Q(profile__ldap_dn=''), enterprise_auth__isnull=True, social_auth__isnull=True)
  - see: awx/main/utils/common.py, get_external_account
- write a thin wrapper around the Django model-based auth backend (see the sketch after this list)
- update the UI tests to include the new setting
- be able to trigger a side-effect when this setting changes
  - revoke all OAuth2 tokens for users that do not have a remote auth backend associated with them
  - revoke sessions for local-only users
  - ultimately I did this by adding new middleware that checks the value of this new setting and force-logs-out any local-only user making a request after it is enabled
- settings API endpoint raises a validation error if there are no external users or auth sources configured
  - the remote-user existence validation has been removed, since ultimately we can't know for sure whether a sysadmin-level user will still have access to the UI; this is being dealt with by using a confirmation modal, see below
- add a modal asking the user to confirm that they want to turn this setting on
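A minimal sketch of that thin wrapper, assuming the setting is read live from Django settings; the class name is illustrative, not necessarily what actually shipped:

```python
# A minimal sketch of the thin wrapper around Django's model-based auth
# backend; the class name is illustrative.
from django.conf import settings
from django.contrib.auth.backends import ModelBackend


class LocalAuthAwareModelBackend(ModelBackend):
    """Refuse username/password auth when local auth is disabled."""

    def authenticate(self, request, username=None, password=None, **kwargs):
        if getattr(settings, 'DISABLE_LOCAL_AUTH', False):
            return None  # fall through to the external auth backends
        return super().authenticate(
            request, username=username, password=password, **kwargs
        )
```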
ISSUE TYPE
Feature Pull Request
COMPONENT NAME
API
UI
AWX VERSION
Reviewed-by: Jeff Bradberry <None>
Reviewed-by: Bianca Henderson <beeankha@gmail.com>
Reviewed-by: Mat Wilson <mawilson@redhat.com>
Reviewed-by: Michael Abashian <None>
Reviewed-by: Chris Meyers <None>
- Passing this parameter in receptor_params in the AWXReceptorJob class
- Removed AWX_CONTAINER_GROUP_POD_LAUNCH_RETRIES from defaults.py as it was not being used anywhere
- Removed AWX_CONTAINER_GROUP_POD_LAUNCH_RETRY_DELAY from defaults.py as it was not being used anywhere
The middleware logs local-only users out when the DISABLE_LOCAL_AUTH setting is set. This avoids the ugliness of getting a SuspiciousOperation error for any request/response cycles that are in flight when a user gets bounced.
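A hedged sketch of that force-logout middleware; is_local_only() mirrors the local-only-users queryset from the TODO list above, and all names here are illustrative rather than AWX's actual ones:

```python
# Hedged sketch of the force-logout middleware; names are illustrative.
from django.conf import settings
from django.contrib.auth import logout


def is_local_only(user):
    # Mirrors the "local-only users" queryset from the TODO list above.
    profile = getattr(user, 'profile', None)
    return (
        (profile is None or profile.ldap_dn == '')
        and not user.enterprise_auth.exists()
        and not user.social_auth.exists()
    )


class DisableLocalAuthMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if (
            getattr(settings, 'DISABLE_LOCAL_AUTH', False)
            and request.user.is_authenticated
            and is_local_only(request.user)
        ):
            logout(request)  # flush the session before the view runs
        return self.get_response(request)
```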
Isolated removal
SUMMARY
Removal of the isolated nodes feature.
ISSUE TYPE
Feature Pull Request
COMPONENT NAME
API
AWX VERSION
Reviewed-by: Alan Rominger <arominge@redhat.com>
Reviewed-by: Jeff Bradberry <None>
Reviewed-by: Elyézer Rezende <None>
Reviewed-by: Bianca Henderson <beeankha@gmail.com>
With the change to use pk-based interval slicing for the job events
table, we need analytics.gather to be the code that manages all of the
"expensive" collector slicing. While we are at it, let's ship each
chunked tarball file as we produce it.
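An illustrative sketch of the pk-interval slicing with each chunk shipped as soon as it is written; write_tarball() and ship() are hypothetical stand-ins for the collector and shipping code:

```python
# Illustrative only: pk-interval slicing over the job events table, with
# each chunked tarball shipped as it is produced. write_tarball() and
# ship() are hypothetical stand-ins for the collector/shipping code.
def gather_in_slices(qs, since_pk, slice_size=100_000):
    last_pk = qs.order_by('-pk').values_list('pk', flat=True).first() or since_pk
    low = since_pk
    while low < last_pk:
        high = min(low + slice_size, last_pk)
        tarball = write_tarball(qs.filter(pk__gt=low, pk__lte=high))
        ship(tarball)  # don't wait for the whole gather to finish
        low = high
```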
- Adds a Metrics() class that can track data such as the number of events the callback receiver inserted into the database.
- Exposes this metric data at the api/v2/metrics/ endpoint; the data is Prometheus-friendly.
- Metric data is stored in memory, then periodically saved to Redis (see the sketch after this list).
- Metric data is periodically broadcast to other nodes in the cluster, so that each node has a copy of the most recent metric data collected.
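A minimal sketch of that pattern, assuming a local Redis and the redis-py client; the real class tracks more fields and also handles the cluster broadcast:

```python
# Minimal sketch: in-memory counters periodically persisted to Redis.
# Field and key names are illustrative.
import json
import time

import redis


class Metrics:
    def __init__(self, save_interval=2.0):
        self.conn = redis.Redis()
        self.data = {'events_inserted': 0}  # illustrative field name
        self.save_interval = save_interval
        self.last_save = time.time()

    def inc(self, field, by=1):
        # Counters live in process memory between saves.
        self.data[field] += by
        if time.time() - self.last_save >= self.save_interval:
            self.save()

    def save(self):
        # Periodically persist the in-memory counters to Redis.
        self.conn.set('awx_metrics', json.dumps(self.data))
        self.last_save = time.time()
```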
- In K8S-based installs, only container groups are intended to be used for playbook execution (JTs, adhoc, inventory updates), so in this scenario, other job types have a task impact of zero.
- In K8S-based installs, traditional instances have *zero* capacity (because they're only members of the control plane, where services such as http/s and local control-plane execution run).
- This commit also includes some changes that allow the task manager to launch tasks with task_impact=0 on instances that have capacity=0 (previously, an instance with zero capacity would never be selected as the "execution node"; see the sketch below).
This means that when IS_K8S=True, any Job Template associated with an Instance Group will never actually go from pending -> running (because there's no capacity; all playbooks must run through Container Groups). For an improved UX, our intention is to introduce logic into the operator install process such that the *default* group that's created at install time is a *Container Group* that's configured to point at the K8S cluster where awx itself is deployed.
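A minimal sketch of the relaxed fit check; the function and field names are illustrative:

```python
# Minimal sketch of the relaxed fit check: zero-impact tasks may now be
# placed on zero-capacity (control plane) instances. Names are illustrative.
def instance_fits(instance, task_impact):
    if task_impact == 0:
        return True  # control-plane work consumes no execution capacity
    return (instance.capacity - instance.consumed_capacity) >= task_impact
```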
- a new unique name field on EE
- a new configure-Tower-in-Tower setting DEFAULT_EXECUTION_ENVIRONMENT
- an Org-level execution_environment_admin_role
- a default_environment field on Project
- a new Container Registry credential type
- order EEs by reverse of the created timestamp
- a method to resolve which EE to use on jobs (see the sketch after this list)
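A sketch of one plausible resolution order, using only the fields named in the list above; this is illustrative, not the exact implementation:

```python
# Illustrative resolution order: EE set on the job itself, then the
# project's default_environment, then the global setting.
from django.conf import settings


def resolve_execution_environment(job):
    if getattr(job, 'execution_environment', None):
        return job.execution_environment  # explicitly set on the job
    project = getattr(job, 'project', None)
    if project is not None and project.default_environment:
        return project.default_environment  # Project-level default
    return settings.DEFAULT_EXECUTION_ENVIRONMENT  # global fallback
```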
- removes local_docker installer and points community users to our development environment (make docker-compose)
- provides a migration path from Local Docker Compose installations --> the dev environment
- the dev env can now be configured to use an external database
- consolidated the Local Docker and dev env docker-compose.yml files into one template file, used by the dockerfile role
- added a 'sources' role to template out config files
- the postgres data dir is no longer a bind-mount; it is a docker volume
- the redis socket is no longer a bind-mount; it is a docker volume
- the local_settings.py.docker-compose file no longer needs to be copied over in the dev env
- Create tmp rsyslog.conf in the rsyslog volume to avoid cross-linking. Previously, the tmp code-generated rsyslog.conf was being written to /tmp (by default); as a result, we were attempting to shutil.move() across volumes (see the sketch after this list).
- move k8s image build and push roles under tools/ansible
- See tools/docker-compose/README.md for usage of these changes
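A sketch of the rsyslog fix, assuming an illustrative volume path; the point is that the temp file and its destination now live on the same filesystem, so the move is a cheap rename:

```python
# Sketch of the fix: create the temporary rsyslog.conf inside the rsyslog
# volume itself so shutil.move() is a same-filesystem rename rather than
# a copy across volumes. The volume path is an assumption.
import os
import shutil
import tempfile

RSYSLOG_DIR = '/var/lib/awx/rsyslog'  # assumption: the mounted rsyslog volume
generated_config = '# rendered rsyslog configuration goes here\n'

fd, tmp_path = tempfile.mkstemp(dir=RSYSLOG_DIR, suffix='.conf')
with os.fdopen(fd, 'w') as f:
    f.write(generated_config)
shutil.move(tmp_path, os.path.join(RSYSLOG_DIR, 'rsyslog.conf'))
```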
this middleware already existed, and we were trying to log this data, but it was not working.
The hope is that these logs can be shipped via external logging so we can use Kibana to track the response time of different endpoints.
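A hedged sketch of such request-timing middleware; the logger name and the extra fields are illustrative:

```python
# Hedged sketch of request-timing middleware; logged this way, the records
# can be shipped via external logging and graphed per-endpoint in Kibana.
import logging
import time

logger = logging.getLogger('awx.api.request_timing')  # illustrative name


class TimingMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        start = time.time()
        response = self.get_response(request)
        logger.info(
            'request finished',
            extra={
                'path': request.path,
                'status': response.status_code,
                'elapsed': time.time() - start,
            },
        )
        return response
```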
Various points (e.g. created, running, processing events) are structured into JSON format and output to /var/log/tower/job_lifecycle.log.
As part of this work, the DependencyGraph is reworked to return
which job object is doing the blocking, rather than a boolean.
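A hedged sketch of the reworked contract, with illustrative method and attribute names (the conflict rule here is a placeholder):

```python
# Sketch: return the blocking job object (or None) instead of a boolean,
# so callers can record *what* is doing the blocking. Names and the
# conflict rule are illustrative.
class DependencyGraph:
    def __init__(self):
        self.running_tasks = []

    def conflicts(self, running, task):
        # Placeholder conflict rule: same inventory means mutual blocking.
        return running.inventory_id == task.inventory_id

    def task_blocked_by(self, task):
        for running in self.running_tasks:
            if self.conflicts(running, task):
                return running  # previously: return True
        return None  # previously: return False
```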