Add support for service containers (#1949)

* Support services (#42)

Removed createSimpleContainerName and AutoRemove flag

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Jason Song <i@wolfogre.com>
Reviewed-on: https://gitea.com/gitea/act/pulls/42
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>

* Support services options (#45)

Reviewed-on: https://gitea.com/gitea/act/pulls/45
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>

* Support interpolation for `env` of `services` (#47)
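
For example, expressions in a service's `env` map are now evaluated before the service container is created (the secret name below is illustrative):

```yaml
services:
  postgres:
    image: postgres:12
    env:
      POSTGRES_USER: ${{ github.actor }}
      POSTGRES_PASSWORD: ${{ secrets.DB_PASSWORD }}
```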

Reviewed-on: https://gitea.com/gitea/act/pulls/47
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>

* Support services `credentials` (#51)

If a service's image comes from a container registry that requires authentication, `act_runner` needs `credentials` to pull the image; see the [documentation](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idservicesservice_idcredentials).
Previously, `act_runner` incorrectly used the `credentials` of `containers` to pull the services' images, and the `credentials` of the services themselves were never used; see the related code: 0c1f2edb99/pkg/runner/run_context.go (L228-L269)
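
Per the GitHub Actions workflow syntax, `credentials` are declared on the service itself and may reference secrets; for example (the registry host and secret names below are illustrative):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    services:
      redis:
        image: registry.example.com/redis:latest
        credentials:
          username: ${{ secrets.REGISTRY_USERNAME }}
          password: ${{ secrets.REGISTRY_PASSWORD }}
```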

Co-authored-by: Jason Song <i@wolfogre.com>
Reviewed-on: https://gitea.com/gitea/act/pulls/51
Reviewed-by: Jason Song <i@wolfogre.com>
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>

* Add ContainerMaxLifetime and ContainerNetworkMode options

from: b9c20dcaa4

* Fix container network issue (#56)

Follow: https://gitea.com/gitea/act_runner/pulls/184
Close https://gitea.com/gitea/act_runner/issues/177

- `act` creates new networks only if the value of `NeedCreateNetwork` is true, and removes these networks at the end. `NeedCreateNetwork` is passed by `act_runner` and is true only if `container.network` in the `act_runner` configuration file is empty (see the configuration sketch after this list).
- In the `docker create` phase, specify the network to which containers will connect. If no network is specified, containers connect to the `bridge` network that Docker creates automatically.
  - If the network is a user-defined network (the value of `container.network` is empty or `<custom-network>`; the network created by `act` is also user-defined), also specify an alias via `--network-alias`. The alias of a service is its `<service-id>`, so service containers can be reached at `<service-id>:<port>` from the steps of the job.
- No longer run `docker network connect` after `docker start`:
  - On the one hand, `docker network connect` applies only to user-defined networks; running `docker network connect host <container-name>` returns an error.
  - On the other hand, specifying the network at the `docker create` stage achieves the same effect.
- No longer remove containers and networks before the `docker start` stage, because their names will not be duplicated.
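
As context, a minimal sketch of the `act_runner` configuration option this behavior keys off; the comments are assumptions based on the description above, and the exact file layout may differ between versions:

```yaml
# act_runner config.yaml (sketch)
container:
  # The network to which job and service containers connect:
  # "host", "bridge", or the name of a custom network.
  # If empty, a job-scoped network is created automatically
  # and removed when the job finishes.
  network: ""
```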

Co-authored-by: Jason Song <i@wolfogre.com>
Reviewed-on: https://gitea.com/gitea/act/pulls/56
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: sillyguodong <gedong_1994@163.com>
Co-committed-by: sillyguodong <gedong_1994@163.com>

* Check volumes (#60)

This PR adds a `ValidVolumes` config option that lets users specify which volumes (including bind mounts) may be mounted into containers (see the configuration sketch at the end of this entry).

Options related to volumes:
- [jobs.<job_id>.container.volumes](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idcontainervolumes)
- [jobs.<job_id>.services.<service_id>.volumes](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idservicesservice_idvolumes)

In addition, volumes specified by `options` will also be checked.

Currently, the following default volumes (see a72822b3f8/pkg/runner/run_context.go (L116-L166)) will be added to `ValidVolumes`:
- `act-toolcache`
- `<container-name>` and `<container-name>-env`
- `/var/run/docker.sock` (a separate configuration option is still needed to control whether the Docker daemon socket can be mounted)
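
A sketch of how this could look in the `act_runner` configuration; the `valid_volumes` key and glob support are assumptions based on this description:

```yaml
container:
  # Volumes (including bind mounts) that may be mounted into containers.
  # Glob syntax is supported; an empty list allows no extra volumes.
  valid_volumes:
    - data
    - /src/*.json
```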

Co-authored-by: Jason Song <i@wolfogre.com>
Reviewed-on: https://gitea.com/gitea/act/pulls/60
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>

* Remove ContainerMaxLifetime; fix lint

* Remove unused ValidVolumes

* Remove ConnectToNetwork

* Add docker stubs

* Close docker clients to prevent file descriptor leaks

* Fix the error when removing network in self-hosted mode (#69)

Fixes https://gitea.com/gitea/act_runner/issues/255

Reviewed-on: https://gitea.com/gitea/act/pulls/69
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>

* Move service container and network cleanup to rc.cleanUpJobContainer

* Add --network flag; default to host when service containers are not used and no network is set explicitly

* Correctly close executor to prevent fd leak

* Revert to tail instead of full path

* fix network duplication

* backport networkingConfig for aliases

* don't hardcode netMode host

* Convert services test to table driven tests

* Add failing tests for services

* Expose service container ports onto the host

* Set container network mode in artifacts server test to host mode

* Log container network mode when creating/starting a container

* fix: Correctly handle ContainerNetworkMode

* fix: missing service container network

* Always remove service containers

Although we usually keep containers running if the workflow errored
(unless `--rm` is given) in order to facilitate debugging and we have
a flag (`--reuse`) to always keep containers running in order to speed
up repeated `act` invocations, I believe that these should only apply
to job containers and not service containers, because changing the
network settings on a service container requires re-creating it anyway.

* Remove networks only if no active endpoints exist

* Ensure job containers are stopped before starting a new job

* fix: go build -tags WITHOUT_DOCKER

---------

Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Jason Song <i@wolfogre.com>
Co-authored-by: sillyguodong <gedong_1994@163.com>
Co-authored-by: ChristopherHX <christopher.homberger@web.de>
Co-authored-by: ZauberNerd <zaubernerd@zaubernerd.de>
Sam Foo 2023-10-19 02:24:52 -07:00 committed by GitHub
parent ace4cd47c7
commit ceeb6c160c
14 changed files with 469 additions and 100 deletions


@ -56,6 +56,7 @@ type Input struct {
matrix []string
actionCachePath string
logPrefixJobID bool
networkName string
}
func (i *Input) resolve(path string) string {


@ -14,6 +14,7 @@ import (
"github.com/AlecAivazis/survey/v2"
"github.com/adrg/xdg"
"github.com/andreaskoch/go-fswatch"
docker_container "github.com/docker/docker/api/types/container"
"github.com/joho/godotenv"
gitignore "github.com/sabhiram/go-gitignore"
log "github.com/sirupsen/logrus"
@ -96,6 +97,7 @@ func Execute(ctx context.Context, version string) {
rootCmd.PersistentFlags().StringVarP(&input.cacheServerAddr, "cache-server-addr", "", common.GetOutboundIP().String(), "Defines the address to which the cache server binds.")
rootCmd.PersistentFlags().Uint16VarP(&input.cacheServerPort, "cache-server-port", "", 0, "Defines the port where the artifact server listens. 0 means a randomly available port.")
rootCmd.PersistentFlags().StringVarP(&input.actionCachePath, "action-cache-path", "", filepath.Join(CacheHomeDir, "act"), "Defines the path where the actions get cached and host workspaces created.")
rootCmd.PersistentFlags().StringVarP(&input.networkName, "network", "", "host", "Sets a docker network name. Defaults to host.")
rootCmd.SetArgs(args())
if err := rootCmd.Execute(); err != nil {
@ -612,6 +614,7 @@ func newRunCommand(ctx context.Context, input *Input) func(*cobra.Command, []str
ReplaceGheActionWithGithubCom: input.replaceGheActionWithGithubCom,
ReplaceGheActionTokenWithGithubCom: input.replaceGheActionTokenWithGithubCom,
Matrix: matrixes,
ContainerNetworkMode: docker_container.NetworkMode(input.networkName),
}
r, err := runner.New(config)
if err != nil {


@ -4,28 +4,32 @@ import (
"context"
"io"
"github.com/docker/go-connections/nat"
"github.com/nektos/act/pkg/common"
)
// NewContainerInput the input for the New function
type NewContainerInput struct {
Image string
Username string
Password string
Entrypoint []string
Cmd []string
WorkingDir string
Env []string
Binds []string
Mounts map[string]string
Name string
Stdout io.Writer
Stderr io.Writer
NetworkMode string
Privileged bool
UsernsMode string
Platform string
Options string
Image string
Username string
Password string
Entrypoint []string
Cmd []string
WorkingDir string
Env []string
Binds []string
Mounts map[string]string
Name string
Stdout io.Writer
Stderr io.Writer
NetworkMode string
Privileged bool
UsernsMode string
Platform string
Options string
NetworkAliases []string
ExposedPorts nat.PortSet
PortBindings nat.PortMap
}
// FileEntry is a file to copy to a container


@ -0,0 +1,79 @@
//go:build !(WITHOUT_DOCKER || !(linux || darwin || windows))
package container
import (
"context"
"github.com/docker/docker/api/types"
"github.com/nektos/act/pkg/common"
)
func NewDockerNetworkCreateExecutor(name string) common.Executor {
return func(ctx context.Context) error {
cli, err := GetDockerClient(ctx)
if err != nil {
return err
}
defer cli.Close()
// Only create the network if it doesn't exist
networks, err := cli.NetworkList(ctx, types.NetworkListOptions{})
if err != nil {
return err
}
common.Logger(ctx).Debugf("%v", networks)
for _, network := range networks {
if network.Name == name {
common.Logger(ctx).Debugf("Network %v exists", name)
return nil
}
}
_, err = cli.NetworkCreate(ctx, name, types.NetworkCreate{
Driver: "bridge",
Scope: "local",
})
if err != nil {
return err
}
return nil
}
}
func NewDockerNetworkRemoveExecutor(name string) common.Executor {
return func(ctx context.Context) error {
cli, err := GetDockerClient(ctx)
if err != nil {
return err
}
defer cli.Close()
// Make sure that all networks with the specified name are removed
// cli.NetworkRemove refuses to remove a network if there are duplicates
networks, err := cli.NetworkList(ctx, types.NetworkListOptions{})
if err != nil {
return err
}
common.Logger(ctx).Debugf("%v", networks)
for _, network := range networks {
if network.Name == name {
result, err := cli.NetworkInspect(ctx, network.ID, types.NetworkInspectOptions{})
if err != nil {
return err
}
if len(result.Containers) == 0 {
if err = cli.NetworkRemove(ctx, network.ID); err != nil {
common.Logger(ctx).Debugf("%v", err)
}
} else {
common.Logger(ctx).Debugf("Refusing to remove network %v because it still has active endpoints", name)
}
}
}
return err
}
}


@ -29,6 +29,7 @@ import (
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/mount"
"github.com/docker/docker/api/types/network"
"github.com/docker/docker/client"
"github.com/docker/docker/pkg/stdcopy"
specs "github.com/opencontainers/image-spec/specs-go/v1"
@ -66,7 +67,7 @@ func supportsContainerImagePlatform(ctx context.Context, cli client.APIClient) b
func (cr *containerReference) Create(capAdd []string, capDrop []string) common.Executor {
return common.
NewInfoExecutor("%sdocker create image=%s platform=%s entrypoint=%+q cmd=%+q", logPrefix, cr.input.Image, cr.input.Platform, cr.input.Entrypoint, cr.input.Cmd).
NewInfoExecutor("%sdocker create image=%s platform=%s entrypoint=%+q cmd=%+q network=%+q", logPrefix, cr.input.Image, cr.input.Platform, cr.input.Entrypoint, cr.input.Cmd, cr.input.NetworkMode).
Then(
common.NewPipelineExecutor(
cr.connect(),
@ -78,7 +79,7 @@ func (cr *containerReference) Create(capAdd []string, capDrop []string) common.E
func (cr *containerReference) Start(attach bool) common.Executor {
return common.
NewInfoExecutor("%sdocker run image=%s platform=%s entrypoint=%+q cmd=%+q", logPrefix, cr.input.Image, cr.input.Platform, cr.input.Entrypoint, cr.input.Cmd).
NewInfoExecutor("%sdocker run image=%s platform=%s entrypoint=%+q cmd=%+q network=%+q", logPrefix, cr.input.Image, cr.input.Platform, cr.input.Entrypoint, cr.input.Cmd, cr.input.NetworkMode).
Then(
common.NewPipelineExecutor(
cr.connect(),
@ -346,8 +347,8 @@ func (cr *containerReference) mergeContainerConfigs(ctx context.Context, config
}
if len(copts.netMode.Value()) == 0 {
if err = copts.netMode.Set("host"); err != nil {
return nil, nil, fmt.Errorf("Cannot parse networkmode=host. This is an internal error and should not happen: '%w'", err)
if err = copts.netMode.Set(cr.input.NetworkMode); err != nil {
return nil, nil, fmt.Errorf("Cannot parse networkmode=%s. This is an internal error and should not happen: '%w'", cr.input.NetworkMode, err)
}
}
@ -391,10 +392,11 @@ func (cr *containerReference) create(capAdd []string, capDrop []string) common.E
input := cr.input
config := &container.Config{
Image: input.Image,
WorkingDir: input.WorkingDir,
Env: input.Env,
Tty: isTerminal,
Image: input.Image,
WorkingDir: input.WorkingDir,
Env: input.Env,
ExposedPorts: input.ExposedPorts,
Tty: isTerminal,
}
logger.Debugf("Common container.Config ==> %+v", config)
@ -430,13 +432,14 @@ func (cr *containerReference) create(capAdd []string, capDrop []string) common.E
}
hostConfig := &container.HostConfig{
CapAdd: capAdd,
CapDrop: capDrop,
Binds: input.Binds,
Mounts: mounts,
NetworkMode: container.NetworkMode(input.NetworkMode),
Privileged: input.Privileged,
UsernsMode: container.UsernsMode(input.UsernsMode),
CapAdd: capAdd,
CapDrop: capDrop,
Binds: input.Binds,
Mounts: mounts,
NetworkMode: container.NetworkMode(input.NetworkMode),
Privileged: input.Privileged,
UsernsMode: container.UsernsMode(input.UsernsMode),
PortBindings: input.PortBindings,
}
logger.Debugf("Common container.HostConfig ==> %+v", hostConfig)
@ -445,7 +448,22 @@ func (cr *containerReference) create(capAdd []string, capDrop []string) common.E
return err
}
resp, err := cr.cli.ContainerCreate(ctx, config, hostConfig, nil, platSpecs, input.Name)
var networkingConfig *network.NetworkingConfig
logger.Debugf("input.NetworkAliases ==> %v", input.NetworkAliases)
if hostConfig.NetworkMode.IsUserDefined() && len(input.NetworkAliases) > 0 {
endpointConfig := &network.EndpointSettings{
Aliases: input.NetworkAliases,
}
networkingConfig = &network.NetworkingConfig{
EndpointsConfig: map[string]*network.EndpointSettings{
input.NetworkMode: endpointConfig,
},
}
} else {
logger.Debugf("not a use defined config??")
}
resp, err := cr.cli.ContainerCreate(ctx, config, hostConfig, networkingConfig, platSpecs, input.Name)
if err != nil {
return fmt.Errorf("failed to create container: '%w'", err)
}


@ -19,6 +19,7 @@ func TestDocker(t *testing.T) {
ctx := context.Background()
client, err := GetDockerClient(ctx)
assert.NoError(t, err)
defer client.Close()
dockerBuild := NewDockerBuildExecutor(NewDockerBuildExecutorInput{
ContextDir: "testdata",


@ -55,3 +55,15 @@ func NewDockerVolumeRemoveExecutor(volume string, force bool) common.Executor {
return nil
}
}
func NewDockerNetworkCreateExecutor(name string) common.Executor {
return func(ctx context.Context) error {
return nil
}
}
func NewDockerNetworkRemoveExecutor(name string) common.Executor {
return func(ctx context.Context) error {
return nil
}
}


@ -19,6 +19,7 @@ type jobInfo interface {
result(result string)
}
//nolint:contextcheck,gocyclo
func newJobExecutor(info jobInfo, sf stepFactory, rc *RunContext) common.Executor {
steps := make([]common.Executor, 0)
preSteps := make([]common.Executor, 0)
@ -87,7 +88,7 @@ func newJobExecutor(info jobInfo, sf stepFactory, rc *RunContext) common.Executo
postExec := useStepLogger(rc, stepModel, stepStagePost, step.post())
if postExecutor != nil {
// run the post exector in reverse order
// run the post executor in reverse order
postExecutor = postExec.Finally(postExecutor)
} else {
postExecutor = postExec
@ -101,7 +102,12 @@ func newJobExecutor(info jobInfo, sf stepFactory, rc *RunContext) common.Executo
// always allow 1 min for stopping and removing the runner, even if we were cancelled
ctx, cancel := context.WithTimeout(common.WithLogger(context.Background(), common.Logger(ctx)), time.Minute)
defer cancel()
err = info.stopContainer()(ctx) //nolint:contextcheck
logger := common.Logger(ctx)
logger.Infof("Cleaning up container for job %s", rc.JobName)
if err = info.stopContainer()(ctx); err != nil {
logger.Errorf("Error while stop job container: %v", err)
}
}
setJobResult(ctx, info, rc, jobError == nil)
setJobOutputs(ctx, rc)


@ -17,12 +17,12 @@ import (
"runtime"
"strings"
"github.com/opencontainers/selinux/go-selinux"
"github.com/docker/go-connections/nat"
"github.com/nektos/act/pkg/common"
"github.com/nektos/act/pkg/container"
"github.com/nektos/act/pkg/exprparser"
"github.com/nektos/act/pkg/model"
"github.com/opencontainers/selinux/go-selinux"
)
// RunContext contains info about current job
@ -40,6 +40,7 @@ type RunContext struct {
IntraActionState map[string]map[string]string
ExprEval ExpressionEvaluator
JobContainer container.ExecutionsEnvironment
ServiceContainers []container.ExecutionsEnvironment
OutputMappings map[MappableOutput]MappableOutput
JobName string
ActionPath string
@ -87,6 +88,18 @@ func (rc *RunContext) jobContainerName() string {
return createContainerName("act", rc.String())
}
// networkName returns the name of the network that `act` will create automatically for the job;
// the network is only created when service containers are used
func (rc *RunContext) networkName() (string, bool) {
if len(rc.Run.Job().Services) > 0 {
return fmt.Sprintf("%s-%s-network", rc.jobContainerName(), rc.Run.JobID), true
}
if rc.Config.ContainerNetworkMode == "" {
return "host", false
}
return string(rc.Config.ContainerNetworkMode), false
}
func getDockerDaemonSocketMountPath(daemonPath string) string {
if protoIndex := strings.Index(daemonPath, "://"); protoIndex != -1 {
scheme := daemonPath[:protoIndex]
@ -226,6 +239,7 @@ func (rc *RunContext) startHostEnvironment() common.Executor {
}
}
//nolint:gocyclo
func (rc *RunContext) startJobContainer() common.Executor {
return func(ctx context.Context) error {
logger := common.Logger(ctx)
@ -259,41 +273,126 @@ func (rc *RunContext) startJobContainer() common.Executor {
ext := container.LinuxContainerEnvironmentExtensions{}
binds, mounts := rc.GetBindsAndMounts()
// specify the network to which the container will connect at the `docker create` stage (like the command line: docker create --network <networkName> <image>).
// if service containers are used, a new network will be created for them
// and removed at the end.
networkName, createAndDeleteNetwork := rc.networkName()
// add service containers
for serviceID, spec := range rc.Run.Job().Services {
// interpolate env
interpolatedEnvs := make(map[string]string, len(spec.Env))
for k, v := range spec.Env {
interpolatedEnvs[k] = rc.ExprEval.Interpolate(ctx, v)
}
envs := make([]string, 0, len(interpolatedEnvs))
for k, v := range interpolatedEnvs {
envs = append(envs, fmt.Sprintf("%s=%s", k, v))
}
username, password, err = rc.handleServiceCredentials(ctx, spec.Credentials)
if err != nil {
return fmt.Errorf("failed to handle service %s credentials: %w", serviceID, err)
}
serviceBinds, serviceMounts := rc.GetServiceBindsAndMounts(spec.Volumes)
exposedPorts, portBindings, err := nat.ParsePortSpecs(spec.Ports)
if err != nil {
return fmt.Errorf("failed to parse service %s ports: %w", serviceID, err)
}
serviceContainerName := createContainerName(rc.jobContainerName(), serviceID)
c := container.NewContainer(&container.NewContainerInput{
Name: serviceContainerName,
WorkingDir: ext.ToContainerPath(rc.Config.Workdir),
Image: spec.Image,
Username: username,
Password: password,
Env: envs,
Mounts: serviceMounts,
Binds: serviceBinds,
Stdout: logWriter,
Stderr: logWriter,
Privileged: rc.Config.Privileged,
UsernsMode: rc.Config.UsernsMode,
Platform: rc.Config.ContainerArchitecture,
Options: spec.Options,
NetworkMode: networkName,
NetworkAliases: []string{serviceID},
ExposedPorts: exposedPorts,
PortBindings: portBindings,
})
rc.ServiceContainers = append(rc.ServiceContainers, c)
}
rc.cleanUpJobContainer = func(ctx context.Context) error {
if rc.JobContainer != nil && !rc.Config.ReuseContainers {
return rc.JobContainer.Remove().
Then(container.NewDockerVolumeRemoveExecutor(rc.jobContainerName(), false)).
Then(container.NewDockerVolumeRemoveExecutor(rc.jobContainerName()+"-env", false))(ctx)
reuseJobContainer := func(ctx context.Context) bool {
return rc.Config.ReuseContainers
}
if rc.JobContainer != nil {
return rc.JobContainer.Remove().IfNot(reuseJobContainer).
Then(container.NewDockerVolumeRemoveExecutor(rc.jobContainerName(), false)).IfNot(reuseJobContainer).
Then(container.NewDockerVolumeRemoveExecutor(rc.jobContainerName()+"-env", false)).IfNot(reuseJobContainer).
Then(func(ctx context.Context) error {
if len(rc.ServiceContainers) > 0 {
logger.Infof("Cleaning up services for job %s", rc.JobName)
if err := rc.stopServiceContainers()(ctx); err != nil {
logger.Errorf("Error while cleaning services: %v", err)
}
if createAndDeleteNetwork {
// clean up the network if it was created by act:
// when service containers are used, the network to which the containers
// connect was created by `act_runner`, so it should be removed at the end.
logger.Infof("Cleaning up network for job %s, and network name is: %s", rc.JobName, networkName)
if err := container.NewDockerNetworkRemoveExecutor(networkName)(ctx); err != nil {
logger.Errorf("Error while cleaning network: %v", err)
}
}
}
return nil
})(ctx)
}
return nil
}
jobContainerNetwork := rc.Config.ContainerNetworkMode.NetworkName()
if rc.containerImage(ctx) != "" {
jobContainerNetwork = networkName
} else if jobContainerNetwork == "" {
jobContainerNetwork = "host"
}
rc.JobContainer = container.NewContainer(&container.NewContainerInput{
Cmd: nil,
Entrypoint: []string{"tail", "-f", "/dev/null"},
WorkingDir: ext.ToContainerPath(rc.Config.Workdir),
Image: image,
Username: username,
Password: password,
Name: name,
Env: envList,
Mounts: mounts,
NetworkMode: "host",
Binds: binds,
Stdout: logWriter,
Stderr: logWriter,
Privileged: rc.Config.Privileged,
UsernsMode: rc.Config.UsernsMode,
Platform: rc.Config.ContainerArchitecture,
Options: rc.options(ctx),
Cmd: nil,
Entrypoint: []string{"tail", "-f", "/dev/null"},
WorkingDir: ext.ToContainerPath(rc.Config.Workdir),
Image: image,
Username: username,
Password: password,
Name: name,
Env: envList,
Mounts: mounts,
NetworkMode: jobContainerNetwork,
NetworkAliases: []string{rc.Name},
Binds: binds,
Stdout: logWriter,
Stderr: logWriter,
Privileged: rc.Config.Privileged,
UsernsMode: rc.Config.UsernsMode,
Platform: rc.Config.ContainerArchitecture,
Options: rc.options(ctx),
})
if rc.JobContainer == nil {
return errors.New("Failed to create job container")
}
return common.NewPipelineExecutor(
rc.pullServicesImages(rc.Config.ForcePull),
rc.JobContainer.Pull(rc.Config.ForcePull),
rc.stopJobContainer(),
container.NewDockerNetworkCreateExecutor(networkName).IfBool(createAndDeleteNetwork),
rc.startServiceContainers(networkName),
rc.JobContainer.Create(rc.Config.ContainerCapAdd, rc.Config.ContainerCapDrop),
rc.JobContainer.Start(false),
rc.JobContainer.Copy(rc.JobContainer.GetActPath()+"/", &container.FileEntry{
@ -369,16 +468,50 @@ func (rc *RunContext) UpdateExtraPath(ctx context.Context, githubEnvPath string)
return nil
}
// stopJobContainer removes the job container (if it exists) and its volume (if it exists) if !rc.Config.ReuseContainers
// stopJobContainer removes the job container (if it exists) and its volume (if it exists)
func (rc *RunContext) stopJobContainer() common.Executor {
return func(ctx context.Context) error {
if rc.cleanUpJobContainer != nil && !rc.Config.ReuseContainers {
if rc.cleanUpJobContainer != nil {
return rc.cleanUpJobContainer(ctx)
}
return nil
}
}
func (rc *RunContext) pullServicesImages(forcePull bool) common.Executor {
return func(ctx context.Context) error {
execs := []common.Executor{}
for _, c := range rc.ServiceContainers {
execs = append(execs, c.Pull(forcePull))
}
return common.NewParallelExecutor(len(execs), execs...)(ctx)
}
}
func (rc *RunContext) startServiceContainers(_ string) common.Executor {
return func(ctx context.Context) error {
execs := []common.Executor{}
for _, c := range rc.ServiceContainers {
execs = append(execs, common.NewPipelineExecutor(
c.Pull(false),
c.Create(rc.Config.ContainerCapAdd, rc.Config.ContainerCapDrop),
c.Start(false),
))
}
return common.NewParallelExecutor(len(execs), execs...)(ctx)
}
}
func (rc *RunContext) stopServiceContainers() common.Executor {
return func(ctx context.Context) error {
execs := []common.Executor{}
for _, c := range rc.ServiceContainers {
execs = append(execs, c.Remove().Finally(c.Close()))
}
return common.NewParallelExecutor(len(execs), execs...)(ctx)
}
}
// Prepare the mounts and binds for the worker
// ActionCacheDir is for rc
@ -853,3 +986,53 @@ func (rc *RunContext) handleCredentials(ctx context.Context) (string, string, er
return username, password, nil
}
func (rc *RunContext) handleServiceCredentials(ctx context.Context, creds map[string]string) (username, password string, err error) {
if creds == nil {
return
}
if len(creds) != 2 {
err = fmt.Errorf("invalid property count for key 'credentials:'")
return
}
ee := rc.NewExpressionEvaluator(ctx)
if username = ee.Interpolate(ctx, creds["username"]); username == "" {
err = fmt.Errorf("failed to interpolate credentials.username")
return
}
if password = ee.Interpolate(ctx, creds["password"]); password == "" {
err = fmt.Errorf("failed to interpolate credentials.password")
return
}
return
}
// GetServiceBindsAndMounts returns the binds and mounts for the service container, resolving paths as appropriate
func (rc *RunContext) GetServiceBindsAndMounts(svcVolumes []string) ([]string, map[string]string) {
if rc.Config.ContainerDaemonSocket == "" {
rc.Config.ContainerDaemonSocket = "/var/run/docker.sock"
}
binds := []string{}
if rc.Config.ContainerDaemonSocket != "-" {
daemonPath := getDockerDaemonSocketMountPath(rc.Config.ContainerDaemonSocket)
binds = append(binds, fmt.Sprintf("%s:%s", daemonPath, "/var/run/docker.sock"))
}
mounts := map[string]string{}
for _, v := range svcVolumes {
if !strings.Contains(v, ":") || filepath.IsAbs(v) {
// Bind anonymous volume or host file.
binds = append(binds, v)
} else {
// Mount existing volume.
paths := strings.SplitN(v, ":", 2)
mounts[paths[0]] = paths[1]
}
}
return binds, mounts
}


@ -7,10 +7,10 @@ import (
"os"
"runtime"
log "github.com/sirupsen/logrus"
docker_container "github.com/docker/docker/api/types/container"
"github.com/nektos/act/pkg/common"
"github.com/nektos/act/pkg/model"
log "github.com/sirupsen/logrus"
)
// Runner provides capabilities to run GitHub actions
@ -20,44 +20,45 @@ type Runner interface {
// Config contains the config for a new runner
type Config struct {
Actor string // the user that triggered the event
Workdir string // path to working directory
ActionCacheDir string // path used for caching action contents
BindWorkdir bool // bind the workdir to the job container
EventName string // name of event to run
EventPath string // path to JSON file to use for event.json in containers
DefaultBranch string // name of the main branch for this repository
ReuseContainers bool // reuse containers to maintain state
ForcePull bool // force pulling of the image, even if already present
ForceRebuild bool // force rebuilding local docker image action
LogOutput bool // log the output from docker run
JSONLogger bool // use json or text logger
LogPrefixJobID bool // switches from the full job name to the job id
Env map[string]string // env for containers
Inputs map[string]string // manually passed action inputs
Secrets map[string]string // list of secrets
Vars map[string]string // list of vars
Token string // GitHub token
InsecureSecrets bool // switch hiding output when printing to terminal
Platforms map[string]string // list of platforms
Privileged bool // use privileged mode
UsernsMode string // user namespace to use
ContainerArchitecture string // Desired OS/architecture platform for running containers
ContainerDaemonSocket string // Path to Docker daemon socket
ContainerOptions string // Options for the job container
UseGitIgnore bool // controls if paths in .gitignore should not be copied into container, default true
GitHubInstance string // GitHub instance to use, default "github.com"
ContainerCapAdd []string // list of kernel capabilities to add to the containers
ContainerCapDrop []string // list of kernel capabilities to remove from the containers
AutoRemove bool // controls if the container is automatically removed upon workflow completion
ArtifactServerPath string // the path where the artifact server stores uploads
ArtifactServerAddr string // the address the artifact server binds to
ArtifactServerPort string // the port the artifact server binds to
NoSkipCheckout bool // do not skip actions/checkout
RemoteName string // remote name in local git repo config
ReplaceGheActionWithGithubCom []string // Use actions from GitHub Enterprise instance to GitHub
ReplaceGheActionTokenWithGithubCom string // Token of private action repo on GitHub.
Matrix map[string]map[string]bool // Matrix config to run
Actor string // the user that triggered the event
Workdir string // path to working directory
ActionCacheDir string // path used for caching action contents
BindWorkdir bool // bind the workdir to the job container
EventName string // name of event to run
EventPath string // path to JSON file to use for event.json in containers
DefaultBranch string // name of the main branch for this repository
ReuseContainers bool // reuse containers to maintain state
ForcePull bool // force pulling of the image, even if already present
ForceRebuild bool // force rebuilding local docker image action
LogOutput bool // log the output from docker run
JSONLogger bool // use json or text logger
LogPrefixJobID bool // switches from the full job name to the job id
Env map[string]string // env for containers
Inputs map[string]string // manually passed action inputs
Secrets map[string]string // list of secrets
Vars map[string]string // list of vars
Token string // GitHub token
InsecureSecrets bool // switch hiding output when printing to terminal
Platforms map[string]string // list of platforms
Privileged bool // use privileged mode
UsernsMode string // user namespace to use
ContainerArchitecture string // Desired OS/architecture platform for running containers
ContainerDaemonSocket string // Path to Docker daemon socket
ContainerOptions string // Options for the job container
UseGitIgnore bool // controls if paths in .gitignore should not be copied into container, default true
GitHubInstance string // GitHub instance to use, default "github.com"
ContainerCapAdd []string // list of kernel capabilities to add to the containers
ContainerCapDrop []string // list of kernel capabilities to remove from the containers
AutoRemove bool // controls if the container is automatically removed upon workflow completion
ArtifactServerPath string // the path where the artifact server stores uploads
ArtifactServerAddr string // the address the artifact server binds to
ArtifactServerPort string // the port the artifact server binds to
NoSkipCheckout bool // do not skip actions/checkout
RemoteName string // remote name in local git repo config
ReplaceGheActionWithGithubCom []string // Use actions from GitHub Enterprise instance to GitHub
ReplaceGheActionTokenWithGithubCom string // Token of private action repo on GitHub.
Matrix map[string]map[string]bool // Matrix config to run
ContainerNetworkMode docker_container.NetworkMode // the network mode of job containers (the value of --network)
}
type caller struct {


@ -302,6 +302,11 @@ func TestRunEvent(t *testing.T) {
{workdir, "set-env-step-env-override", "push", "", platforms, secrets},
{workdir, "set-env-new-env-file-per-step", "push", "", platforms, secrets},
{workdir, "no-panic-on-invalid-composite-action", "push", "jobs failed due to invalid action", platforms, secrets},
// services
{workdir, "services", "push", "", platforms, secrets},
{workdir, "services-host-network", "push", "", platforms, secrets},
{workdir, "services-with-container", "push", "", platforms, secrets},
}
for _, table := range tables {


@ -0,0 +1,14 @@
name: services-host-network
on: push
jobs:
services-host-network:
runs-on: ubuntu-latest
services:
nginx:
image: "nginx:latest"
ports:
- "8080:80"
steps:
- run: apt-get -qq update && apt-get -yqq install --no-install-recommends curl net-tools
- run: netstat -tlpen
- run: curl -v http://localhost:8080


@ -0,0 +1,16 @@
name: services-with-containers
on: push
jobs:
services-with-containers:
runs-on: ubuntu-latest
# https://docs.github.com/en/actions/using-containerized-services/about-service-containers#running-jobs-in-a-container
container:
image: "ubuntu:latest"
services:
nginx:
image: "nginx:latest"
ports:
- "8080:80"
steps:
- run: apt-get -qq update && apt-get -yqq install --no-install-recommends curl
- run: curl -v http://nginx:80

pkg/runner/testdata/services/push.yaml

@ -0,0 +1,26 @@
name: services
on: push
jobs:
services:
name: Reproduction of failing Services interpolation
runs-on: ubuntu-latest
services:
postgres:
image: postgres:12
env:
POSTGRES_USER: runner
POSTGRES_PASSWORD: mysecretdbpass
POSTGRES_DB: mydb
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
ports:
- 5432:5432
steps:
- name: Echo the Postgres service ID / Network / Ports
run: |
echo "id: ${{ job.services.postgres.id }}"
echo "network: ${{ job.services.postgres.network }}"
echo "ports: ${{ job.services.postgres.ports }}"