To clarify the role of the actor, the request data that initiates an event has
been separated out. The ActorRecord is pared down to just the username. This
eliminates confusion about where event-related data should be added.
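
For illustration, the pared-down record amounts to no more than this (a
minimal sketch; the exact json tag is assumed):

    // ActorRecord carries only the identity of the user that initiated
    // the event; all request data now lives elsewhere.
    type ActorRecord struct {
        // Name is the username of the actor.
        Name string `json:"name,omitempty"`
    }
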
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Endpoints are now created at application startup time, using the notification
configuration. The instances are then added to a Broadcaster instance, which
becomes the main event sink for the application. At request time, an event
bridge is configured to listen to repository method calls. The actor and source
of the eventBridge are created from the request context and application,
respectively. The result is that notifications are dispatched with calls to the
context's Repository instance and are queued to each endpoint via the
broadcaster.
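
A rough sketch of that wiring, with constructor and field names assumed
rather than quoted from the code:

    // At startup: build an endpoint sink per configured notification
    // target and fan them into a single broadcaster.
    var sinks []notifications.Sink
    for _, ep := range config.Notifications.Endpoints {
        sinks = append(sinks, notifications.NewEndpoint(ep.Name, ep.URL,
            notifications.EndpointConfig{
                Timeout:   ep.Timeout,
                Threshold: ep.Threshold,
                Backoff:   ep.Backoff,
            }))
    }
    app.events.sink = notifications.NewBroadcaster(sinks...)

    // At request time: decorate the context's repository so that its
    // method calls emit events through a bridge into the broadcaster.
    actor := notifications.ActorRecord{Name: username(ctx)}
    request := notifications.NewRequestRecord(requestID, r)
    repo = notifications.Listen(repo,
        notifications.NewBridge(urlBuilder, app.events.source, actor,
            request, app.events.sink))
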
This commit also adds the concept of a RequestID and App.InstanceID. The
RequestID uniquely identifies each request and the InstanceID uniquely
identifies a run of the registry. These identifiers can be used in the future
to correlate log messages with generated events to support rich debugging.
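
For example, minting both identifiers and stamping them onto log output
might look like this (the uuid library and field names are chosen purely
for illustration):

    package main

    import (
        "log"

        "github.com/google/uuid" // any unique-ID source would do
    )

    // instanceID identifies this run of the registry; generated once
    // at application startup.
    var instanceID = uuid.NewString()

    func handleRequest() {
        // A fresh RequestID is minted as each request arrives.
        requestID := uuid.NewString()

        // Log lines stamped with both IDs can later be matched against
        // the corresponding fields on generated events.
        log.Printf("instance.id=%s request.id=%s handling request",
            instanceID, requestID)
    }
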
The fields of the app were slightly reorganized for clarity and a few horrid
util functions have been removed.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
This changeset implements webhook notification endpoints for dispatching
registry events. Repository instances can be decorated by a listener that
converts calls into context-aware events, using a bridge. Events generated in
the bridge are written to a sink. Implementations of Sink include a broadcaster
and an endpoint sink, which can be used to configure event dispatch. Endpoints
represent a webhook notification target, with queueing and retries built in.
They can be added to a Broadcaster, which is a simple sink that writes a block
of events to several sinks, to provide a complete dispatch mechanism.
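
All of these pieces share one small interface; a sketch of its likely shape
(method set assumed):

    // Sink accepts blocks of events for delivery. Broadcasters and
    // endpoints both satisfy this interface, so they compose freely.
    type Sink interface {
        // Write commits the events, returning an error if delivery
        // cannot be guaranteed.
        Write(events ...Event) error

        // Close shuts down the sink, flushing any pending events.
        Close() error
    }

Because a Broadcaster is itself a Sink, adding endpoints to one yields a
single object that fans each block of events out to every webhook target.
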
The main caveat to the current approach is that all unsent notifications are
held in memory. A best effort is made to ensure that notifications are not
dropped, to the point where queues may back up on faulty endpoints. If the
endpoint is fixed, the events will be retried and all messages will go through.
Internally, this functionality is all made up of Sink objects. The queuing
functionality is implemented with an eventQueue sink and retries are
implemented with retryingSink. Replacing the in-memory queuing with something
persistent should be as simple as replacing the broadcaster with a remote
queue; the remaining sinks then become local workers listening to that remote
queue.
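
As a sketch of that layering (constructor names follow the description
above; exact signatures are assumed):

    // Events flow through a chain of sinks before reaching the remote
    // endpoint; each layer adds one behavior:
    //
    //   Broadcaster -> eventQueue -> retryingSink -> httpSink -> webhook
    var sink Sink
    sink = newHTTPSink(endpoint.URL)                 // POSTs event blocks
    sink = newRetryingSink(sink, threshold, backoff) // retries failures
    sink = newEventQueue(sink)                       // buffers in memory, in order
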
Metrics are kept for each endpoint and exported via expvar. This may not be a
permanent approach but should provide enough information for troubleshooting
notification problems.
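
A minimal sketch of an expvar export along these lines (the counter names
are illustrative):

    package main

    import (
        "expvar"
        "sync"
    )

    // endpointMetrics tracks delivery counters for one endpoint.
    type endpointMetrics struct {
        mu                           sync.Mutex
        pending, successes, failures int
    }

    // snapshot returns a copy of the counters, safe for serialization.
    func (m *endpointMetrics) snapshot() map[string]int {
        m.mu.Lock()
        defer m.mu.Unlock()
        return map[string]int{
            "pending":   m.pending,
            "successes": m.successes,
            "failures":  m.failures,
        }
    }

    func init() {
        metrics := &endpointMetrics{}

        // Exposes the counters at /debug/vars under "notifications".
        expvar.Publish("notifications", expvar.Func(func() interface{} {
            return metrics.snapshot()
        }))
    }
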
Signed-off-by: Stephen J Day <stephen.day@docker.com>