This reverts 8beed7f4, which says:
Disable ControlPersist for ad hoc commands, should avoid any
issues with proot or needing to clean up sockets afterwards.
Given we've switched to the much less finicky bwrap for process
isolation, along with runner-based process killing, this probably
isn't needed any more.
update test data files
Adopt official vendor location
openstack not published yet
Add collections to show paths
Add collections loc to installer settings
Add vendored collections to show path again
there's a race condition if we do this pre-commit: the correct
value isn't actually *persisted* to the database yet, and we end up
saving the *prior* setting values
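For illustration, a minimal sketch of the post-commit pattern this points at, using Django's `transaction.on_commit` (the handler and helper names here are hypothetical, not AWX's actual ones):

```python
from django.db import transaction

def on_setting_saved(sender, instance, **kwargs):
    # reading settings here (pre-commit) races with the transaction:
    # the new row isn't persisted yet, so we'd snapshot the *prior*
    # values. defer the work until after COMMIT instead.
    transaction.on_commit(lambda: propagate_setting(instance.key))

def propagate_setting(key):
    # hypothetical helper: by the time this runs, the transaction has
    # committed, so a fresh read returns the persisted value
    ...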
- Add a placeholder rsyslog.conf so it doesn't fail on start
- Create an access-restricted directory for the unix socket to be created in
- Create RSyslogHandler to exit early when the logging socket doesn't exist (see the sketch after this list)
- Write updated logging settings when the dispatcher comes up and restart rsyslog so they take effect
- Move rsyslogd to the web container and create rpc supervisor.sock
- Add env var for supervisor.conf path
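A minimal sketch of such a handler, assuming rsyslog listens on a unix datagram socket (the socket path and the details are illustrative, not necessarily AWX's implementation):

```python
import logging
import os
import socket

class RSyslogHandler(logging.Handler):
    """drop records (instead of raising) while the rsyslog socket is absent"""

    def __init__(self, address='/var/run/awx-rsyslog/rsyslog.sock'):  # illustrative path
        super().__init__()
        self.address = address

    def emit(self, record):
        if not os.path.exists(self.address):
            # rsyslog hasn't created its unix socket yet (e.g. still
            # starting up); exit early rather than erroring on every log
            return
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
                sock.connect(self.address)
                sock.send(self.format(record).encode('utf-8'))
        except OSError:
            self.handleError(record)
```

Opening a socket per record is deliberately naive here; the point is the early exit when the socket is absent.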
- this change adds rsyslog (https://github.com/rsyslog/rsyslog) as
a new service that runs on every AWX node (managed by supervisord);
in particular, this feature requires a recent version (v8.38+) of
rsyslog that supports the omhttp module
(https://github.com/rsyslog/rsyslog-doc/pull/750)
- the "external_logger" handler in AWX is now a SysLogHandler that ships
logs to the local UDP port where rsyslog is configured to listen (by
default, 51414); see the sketch after this list
- every time a LOG_AGGREGATOR_* setting is changed, every AWX node
reconfigures and restarts its local instance of rsyslog so that its
forwarding settings match what has been configured in AWX
- unlike the prior implementation, if the external logging aggregator
(splunk/logstash) goes temporarily offline, rsyslog will retain the
messages and ship them when the log aggregator is back online
- 4xx or 5xx level errors are recorded at /var/log/tower/external.err
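For illustration, a sketch of pointing a logger at the local rsyslog UDP listener described above (the port matches the stated default; everything else is illustrative):

```python
import logging
import logging.handlers

# rsyslog listens on udp/51414 locally (the default named above) and
# forwards to the configured aggregator (splunk/logstash) via omhttp
handler = logging.handlers.SysLogHandler(address=('localhost', 51414))
handler.setFormatter(logging.Formatter('%(message)s'))

logger = logging.getLogger('external_logger')
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info('{"event": "status_change", "status": "successful"}')
```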
success/failure notifications for *playbooks* include summary data about
the hosts based on the contents of the playbook_on_stats event
the current implementation suffers from a number of race conditions that
can sometimes cause that data to be missing or incomplete; this change
makes it so that for *playbooks* we build (and send) the notification in
response to the playbook_on_stats event, not the EOF event
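A sketch of deriving that per-host summary from a playbook_on_stats event's data; the bucket names are Ansible's standard stats keys, while the helper itself is hypothetical:

```python
def hosts_summary(event_data):
    # playbook_on_stats ships one dict per bucket, keyed by host name
    buckets = ('ok', 'changed', 'failures', 'dark', 'skipped')
    hosts = set()
    for bucket in buckets:
        hosts.update(event_data.get(bucket, {}))
    return {
        host: {b: event_data.get(b, {}).get(host, 0) for b in buckets}
        for host in sorted(hosts)
    }
```

Building the notification body from this map when playbook_on_stats is saved removes the dependency on the EOF event arriving last.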
postgres has a limitation on its notify message size (8k), and the
messages we generate for deep copying functionality easily go over this
limit; instead of passing a giant nested data structure across the
message bus, this change makes it so that we temporarily store the JSON
structure in memcached, and look it up from *within* the task
see: https://github.com/ansible/tower/issues/4162
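In sketch form, assuming Django's cache API backed by memcached (the key scheme, TTL, and helper names are hypothetical):

```python
import json
import uuid
from django.core.cache import cache

def stash_copy_graph(copy_graph):
    # the nested copy structure can exceed postgres' ~8k NOTIFY limit,
    # so stash it in the cache and send only the small key over the bus
    key = 'deep-copy-{}'.format(uuid.uuid4())
    cache.set(key, json.dumps(copy_graph), timeout=3600)
    return key  # pass this key, not the payload, through notify/listen

def run_deep_copy(cache_key):
    # inside the task: retrieve (and clean up) the stashed structure
    copy_graph = json.loads(cache.get(cache_key))
    cache.delete(cache_key)
    ...
```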
* Under the new postgres-backed notify/listen message queue, this never
actually worked. Without using the database to store state, we cannot
provide an at-most-once delivery mechanism with multiple readers.
* With this change, work is done ONLY on the node that requested the
work. Under rabbitmq, the node that was first to get the message off
the queue would do the work; presumably the least busy node.
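A sketch of what node-local routing can look like, assuming queue-per-hostname routing in a Celery-style dispatcher (the helper is hypothetical):

```python
import socket

def dispatch_here(task, **kwargs):
    # route the message to a queue named after *this* node so only the
    # requesting node's workers can consume it; a single reader per
    # queue sidesteps the at-most-once problem pg notify/listen can't
    # solve without storing state in the database
    task.apply_async(kwargs=kwargs, queue=socket.gethostname())
```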
Scenario - a job is launched and spawns an inventory update and a project update.
If the inventory update fails, it will fail the job and the project update,
even if the project update already ran and was successful.
This change no longer fails the project update if it has already run successfully.
In cases where other jobs depend on that project update (but not on the failed
inventory update), we don't want those jobs to fail.
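In sketch form (the model and status names are illustrative):

```python
def fail_dependencies(failed_dep, all_deps):
    for dep in all_deps:
        if dep is failed_dep:
            continue
        if dep.status == 'successful':
            # this dependency already ran and succeeded; leave it alone
            # so jobs that depend only on it can still proceed
            continue
        dep.status = 'failed'
        dep.save(update_fields=['status'])
```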
additionally, optimize away several per-event host lookups and
changed/failed propagation lookups
we've always performed these (fairly expensive) queries *on every event
save* - if you're processing tens of thousands of events in short
bursts, this is way too slow
this commit also introduces a new command for profiling the insertion
rate of events, `awx-manage callback_stats`
see: https://github.com/ansible/awx/issues/5514
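A sketch of the caching idea behind avoiding per-event host lookups (names are hypothetical): resolve the job's host-name-to-id map once and reuse it for the whole burst:

```python
_host_ids = {}  # per-process cache: job id -> {host name: host id}

def host_id_for(job, host_name):
    # one query per job instead of one per saved event; a burst of tens
    # of thousands of events reuses the same map
    if job.id not in _host_ids:
        _host_ids[job.id] = dict(
            job.inventory.hosts.values_list('name', 'id')
        )
    return _host_ids[job.id].get(host_name)
```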
* update inventory path to be in tmp project clone
* copy project folder for inventory scm launch type
* Optionally accept inventory collection paths from ansible.cfg