
Add dind-rootless #165

Closed
AkihiroSuda wants to merge 1 commit into docker-library:master from AkihiroSuda:rootless

Conversation

@AkihiroSuda
Contributor

@AkihiroSuda AkihiroSuda commented Jul 11, 2019

Usage:

$ docker build -t dind-rootless .
$ docker run -d --name dind-rootless --privileged dind-rootless
$ docker exec dind-rootless docker info
  • The daemon runs as an unprivileged user with UID 1000
  • `--privileged` is still required due to seccomp, AppArmor, procfs, and sysfs restrictions

Signed-off-by: Akihiro Suda akihiro.suda.cz@hco.ntt.co.jp

@AkihiroSuda
Contributor Author

@AkihiroSuda AkihiroSuda force-pushed the rootless branch 2 times, most recently from d71b5c3 to a4ed75c Compare July 11, 2019 04:49
@tao12345666333
Contributor

A quick test:

(MoeLove) ➜  rl docker run --rm -d  --privileged local/docker:rootles
47b1342c2f0b321aeb08bcee52639adb8f8a09690bf9cfbc08150a9c3b620e0c
(MoeLove) ➜  rl docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS                                  NAMES
47b1342c2f0b        local/docker:rootles   "dockerd-rootless.sh…"   3 seconds ago       Up 1 second         2375/tcp                               frosty_black
(MoeLove) ➜  rl docker exec -it 47b1342c2f0b sh
/ $ docker version
Client: Docker Engine - Community
 Version:           19.03.0-rc3
 API version:       1.40
 Go version:        go1.12.5
 Git commit:        27fcb77
 Built:             Thu Jun 20 01:59:14 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.0-rc3
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.5
  Git commit:       27fcb77
  Built:            Thu Jun 20 02:06:58 2019
  OS/Arch:          linux/amd64
  Experimental:     true
 containerd:
  Version:          v1.2.6
  GitCommit:        894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc:
  Version:          1.0.0-rc8
  GitCommit:        425e105d5a03fabd737a126ad93d62a9eeede87f
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
/ $ docker run --rm -it alpine 
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
921b31ab772b: Pull complete 
Digest: sha256:ca1c944a4f8486a153024d9965aafbe24f5723c1d5c02f4964c045a16d19dc54
Status: Downloaded newer image for alpine:latest
docker: Error response from daemon: error creating overlay mount to /home/user/.local/share/docker/overlay2/e675c667068bee97a945a947e01c281519833b643f95e50294ca2840d45682b2-init/merged: operation not permitted.
See 'docker run --help'.
@AkihiroSuda
Contributor Author

What's your host kernel?
You might need `--storage-driver vfs`.

@tao12345666333
Contributor

Kernel Version: 5.1.11-200.fc29.x86_64
 Operating System: Fedora 29 (Workstation Edition)
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 15.56GiB

@tao12345666333
Contributor

You are right! 👍 That solves the error.

But I hit another one:

(MoeLove) ➜  rl docker run --rm  -d  --privileged local/docker:rootles --storage-driver vfs 
1939f33a7be652d20207c0612565e5819d85e8fa21963ea3f319bfeb60e77b28
(MoeLove) ➜  rl docker exec -it $(docker ps -ql) sh
/ $ docker version
Client: Docker Engine - Community
 Version:           19.03.0-rc3
 API version:       1.40
 Go version:        go1.12.5
 Git commit:        27fcb77
 Built:             Thu Jun 20 01:59:14 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.0-rc3
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.5
  Git commit:       27fcb77
  Built:            Thu Jun 20 02:06:58 2019
  OS/Arch:          linux/amd64
  Experimental:     true
 containerd:
  Version:          v1.2.6
  GitCommit:        894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc:
  Version:          1.0.0-rc8
  GitCommit:        425e105d5a03fabd737a126ad93d62a9eeede87f
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
/ $ docker run --rm -it alpine 
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
921b31ab772b: Pull complete 
Digest: sha256:ca1c944a4f8486a153024d9965aafbe24f5723c1d5c02f4964c045a16d19dc54
Status: Downloaded newer image for alpine:latest
/ # docker: Error response from daemon: cgroups: cgroup deleted: unknown.
@tao12345666333
Contributor

tao12345666333 commented Jul 11, 2019

`systemd-cgls` info:

 ├─docker-1939f33a7be652d20207c0612565e5819d85e8fa21963ea3f319bfeb60e77b28.scope
  │ ├─6240 rootlesskit --net=vpnkit --mtu=1500 --disable-host-loopback --port-driver=builtin --copy-up=/etc --copy-up=/run /usr/local/bin/dockerd-rootless.sh --experimental --storage-driver vfs
  │ ├─6286 /proc/self/exe --net=vpnkit --mtu=1500 --disable-host-loopback --port-driver=builtin --copy-up=/etc --copy-up=/run /usr/local/bin/dockerd-rootless.sh --experimental --storage-driver vfs
  │ ├─6299 vpnkit --ethernet /tmp/rootlesskit815445915/vpnkit-ethernet.sock --mtu 1500 --host-ip 0.0.0.0
  │ ├─6343 dockerd --experimental --storage-driver vfs
  │ ├─6395 containerd --config /run/user/1000/docker/containerd/containerd.toml --log-level info
  │ └─7567 containerd-shim -namespace moby -workdir /home/user/.local/share/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/57c8d92cef0f64ba649c4cd781fa036b61f44348ee1aaaddd23e14e83e8fc30b -address
Contributor

@tao12345666333 tao12345666333 left a comment


In short, I think there is a problem with my environment; I will continue to debug it.

For this change, LGTM.
Thanks 👍

@AkihiroSuda
Contributor Author

The cgroup error is weird. Does `docker info` contain `Cgroup Driver: none`?

Usage:

  $ docker build -t dind-rootless .
  $ docker run -d --name dind-rootless --privileged dind-rootless
  $ docker exec dind-rootless docker info

* The daemon runs as an unprivileged user with UID 1000
* `--privileged` is still required due to seccomp, AppArmor, procfs, and sysfs restrictions
* `-H tcp://....` will be supported soon: moby/moby#39493

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
@AkihiroSuda
Contributor Author

AkihiroSuda commented Jul 11, 2019

Updated the PR to add an entrypoint script that automatically falls back to the vfs storage driver.
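A fallback like the one the entrypoint performs can be sketched roughly as follows. This is a hypothetical `select_storage_driver` helper for illustration only, not the actual entrypoint script; the file to inspect is a parameter so the logic can be exercised against a sample file.

```shell
#!/bin/sh
# Hypothetical sketch, NOT the real entrypoint: pick overlay2 only when the
# kernel lists overlay in /proc/filesystems, otherwise fall back to vfs.
select_storage_driver() {
	filesystems="${1:-/proc/filesystems}"
	if grep -qw overlay "$filesystems" 2>/dev/null; then
		echo overlay2
	else
		echo vfs
	fi
}
```

Note that even when overlay is listed, a rootless daemon on some kernels (as in the Fedora report above) still cannot mount it, so probing by actually attempting a mount would be more robust than this simple check.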

@tao12345666333
Contributor

The cgroup error is weird. Does `docker info` contain `Cgroup Driver: none`?

Yes, the full output is here:

Server:   
 Containers: 0                                          
  Running: 0                                            
  Paused: 0   
  Stopped: 0         
 Images: 1
 Server Version: 19.03.0-rc3 
 Storage Driver: vfs
 Logging Driver: json-file                                                                                       
 Cgroup Driver: none                                    
 Plugins:         
  Volume: local                                         
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive     
 Runtimes: runc
 Default Runtime: runc                                  
 Init Binary: docker-init                               
 containerd version: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
  rootless
 Kernel Version: 5.1.11-200.fc29.x86_64
 Operating System: Alpine Linux v3.10 (containerized)
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 15.56GiB
 Name: 1939f33a7be6
 ID: VAFX:UT42:PMXT:BCEE:BN5J:LQ73:RH2R:P32D:ZXJV:QAWF:DYEJ:AGMH
 Docker Root Dir: /home/user/.local/share/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: true
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine

WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

@tao12345666333
Contributor

Updated the PR to add an entrypoint script that automatically falls back to the vfs storage driver.

I will give it a try ASAP. (I'm on holiday.)

@tao12345666333
Contributor

I built the image from the latest update and ran it.

I get the same error.

Here is the container's log:

+ '[' -w /run/user/1000 ]
+ '[' -w /home/user ]
+ rootlesskit=
+ which docker-rootlesskit
+ which rootlesskit
+ rootlesskit=rootlesskit
+ break
+ '[' -z rootlesskit ]
+ : 
+ : 
+ net=
+ mtu=
+ '[' -z ]
+ which slirp4netns
+ '[' -z ]
+ which vpnkit
+ net=vpnkit
+ '[' -z ]
+ mtu=1500
+ '[' -z ]
+ _DOCKERD_ROOTLESS_CHILD=1
+ export _DOCKERD_ROOTLESS_CHILD
+ exec rootlesskit '--net=vpnkit' '--mtu=1500' --disable-host-loopback '--port-driver=builtin' '--copy-up=/etc' '--copy-up=/run' /usr/local/bin/dockerd-rootless.sh --experimental '--storage-driver=vfs'
time="2019-07-15T10:18:12Z" level=warning msg="\"builtin\" port driver is experimental"
+ '[' -w /run/user/1000 ]
+ '[' -w /home/user ]
+ rootlesskit=
+ which docker-rootlesskit
+ which rootlesskit
+ rootlesskit=rootlesskit
+ break
+ '[' -z rootlesskit ]
+ : 
+ : 
+ net=
+ mtu=
+ '[' -z ]
+ which slirp4netns
+ '[' -z ]
+ which vpnkit
+ net=vpnkit
+ '[' -z ]
+ mtu=1500
+ '[' -z 1 ]
+ '[' 1 '=' 1 ]
+ rm -f /run/docker /run/xtables.lock
+ exec dockerd --experimental '--storage-driver=vfs'
time="2019-07-15T10:18:12.704206692Z" level=info msg="Starting up"
time="2019-07-15T10:18:12.704257560Z" level=warning msg="Running experimental build"
time="2019-07-15T10:18:12.704268669Z" level=warning msg="Running in rootless mode. Cgroups, AppArmor, and CRIU are disabled."
time="2019-07-15T10:18:12.704277825Z" level=info msg="Running with RootlessKit integration"
time="2019-07-15T10:18:12.707677975Z" level=warning msg="could not change group /run/user/1000/docker.sock to docker: group docker not found"
time="2019-07-15T10:18:12.709249023Z" level=info msg="libcontainerd: started new containerd process" pid=89
time="2019-07-15T10:18:12.709309157Z" level=info msg="parsed scheme: \"unix\"" module=grpc
time="2019-07-15T10:18:12.709322199Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
time="2019-07-15T10:18:12.709360626Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/user/1000/docker/containerd/containerd.sock 0  <nil>}] }" module=grpc
time="2019-07-15T10:18:12.709404577Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
time="2019-07-15T10:18:12.709521125Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00014fdc0, CONNECTING" module=grpc
time="2019-07-15T10:18:12.735324178Z" level=info msg="starting containerd" revision=894b81a4b802e4eb2a91d1ce216b8817763c29fb version=v1.2.6 
time="2019-07-15T10:18:12.735951800Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1 
time="2019-07-15T10:18:12.736201811Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1 
time="2019-07-15T10:18:12.736747565Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /home/user/.local/share/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" 
time="2019-07-15T10:18:12.736824698Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1 
time="2019-07-15T10:18:12.744558548Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "Device \"aufs\" does not exist.\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n": exit status 1" 
time="2019-07-15T10:18:12.744585880Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1 
time="2019-07-15T10:18:12.744874458Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1 
time="2019-07-15T10:18:12.745167830Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1 
time="2019-07-15T10:18:12.745645910Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /home/user/.local/share/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter" 
time="2019-07-15T10:18:12.745664647Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1 
time="2019-07-15T10:18:12.745770960Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /home/user/.local/share/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter" 
time="2019-07-15T10:18:12.745785015Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /home/user/.local/share/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" 
time="2019-07-15T10:18:12.745835467Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "Device \"aufs\" does not exist.\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n": exit status 1" 
time="2019-07-15T10:18:12.752879615Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1 
time="2019-07-15T10:18:12.752915804Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1 
time="2019-07-15T10:18:12.753038244Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1 
time="2019-07-15T10:18:12.753097313Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1 
time="2019-07-15T10:18:12.753165387Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1 
time="2019-07-15T10:18:12.753218376Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1 
time="2019-07-15T10:18:12.753277662Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1 
time="2019-07-15T10:18:12.753295820Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1 
time="2019-07-15T10:18:12.753320542Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1 
time="2019-07-15T10:18:12.753338845Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1 
time="2019-07-15T10:18:12.753591269Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2 
time="2019-07-15T10:18:12.753785260Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1 
time="2019-07-15T10:18:12.754218836Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1 
time="2019-07-15T10:18:12.754248661Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1 
time="2019-07-15T10:18:12.754365147Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1 
time="2019-07-15T10:18:12.754417344Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1 
time="2019-07-15T10:18:12.754437918Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1 
time="2019-07-15T10:18:12.754461811Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1 
time="2019-07-15T10:18:12.754512448Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1 
time="2019-07-15T10:18:12.754530454Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1 
time="2019-07-15T10:18:12.754568578Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1 
time="2019-07-15T10:18:12.754603056Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1 
time="2019-07-15T10:18:12.754621424Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1 
time="2019-07-15T10:18:12.754727748Z" level=warning msg="failed to load plugin io.containerd.internal.v1.opt" error="mkdir /opt/containerd: permission denied" 
time="2019-07-15T10:18:12.754749236Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1 
time="2019-07-15T10:18:12.754768044Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1 
time="2019-07-15T10:18:12.754803353Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1 
time="2019-07-15T10:18:12.754873935Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1 
time="2019-07-15T10:18:12.755147248Z" level=info msg=serving... address="/run/user/1000/docker/containerd/containerd-debug.sock" 
time="2019-07-15T10:18:12.755266859Z" level=info msg=serving... address="/run/user/1000/docker/containerd/containerd.sock" 
time="2019-07-15T10:18:12.755281989Z" level=info msg="containerd successfully booted in 0.020750s" 
time="2019-07-15T10:18:12.761548581Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00014fdc0, READY" module=grpc
time="2019-07-15T10:18:12.766153583Z" level=info msg="parsed scheme: \"unix\"" module=grpc
time="2019-07-15T10:18:12.766251880Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
time="2019-07-15T10:18:12.766285326Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/user/1000/docker/containerd/containerd.sock 0  <nil>}] }" module=grpc
time="2019-07-15T10:18:12.766317779Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
time="2019-07-15T10:18:12.766452355Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000730460, CONNECTING" module=grpc
time="2019-07-15T10:18:12.766485750Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc
time="2019-07-15T10:18:12.766861769Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000730460, READY" module=grpc
time="2019-07-15T10:18:12.767644511Z" level=info msg="parsed scheme: \"unix\"" module=grpc
time="2019-07-15T10:18:12.767676528Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
time="2019-07-15T10:18:12.767706489Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/user/1000/docker/containerd/containerd.sock 0  <nil>}] }" module=grpc
time="2019-07-15T10:18:12.767793004Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
time="2019-07-15T10:18:12.767962920Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0007b31c0, CONNECTING" module=grpc
time="2019-07-15T10:18:12.769242187Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0007b31c0, READY" module=grpc
time="2019-07-15T10:18:12.798616493Z" level=warning msg="Your kernel does not support cgroup rt period"
time="2019-07-15T10:18:12.798652557Z" level=warning msg="Your kernel does not support cgroup rt runtime"
time="2019-07-15T10:18:12.798665343Z" level=warning msg="Your kernel does not support cgroup blkio weight"
time="2019-07-15T10:18:12.798673700Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
time="2019-07-15T10:18:12.798885292Z" level=info msg="Loading containers: start."
time="2019-07-15T10:18:12.809223710Z" level=warning msg="Running modprobe bridge br_netfilter failed with message: Device \"bridge\" does not exist.\nbridge                204800  1 br_netfilter\nstp                    16384  1 bridge\nllc                    16384  2 bridge,stp\nDevice \"br_netfilter\" does not exist.\nbr_netfilter           28672  0 \nbridge                204800  1 br_netfilter\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n, error: exit status 1"
time="2019-07-15T10:18:12.814524658Z" level=warning msg="Running modprobe nf_nat failed with message: `Device \"nf_nat\" does not exist.\nnf_nat                 49152  5 xt_REDIRECT,xt_nat,ipt_MASQUERADE,ip6table_nat,iptable_nat\nnf_conntrack          147456  9 xt_REDIRECT,xt_nat,nf_conntrack_netlink,ipt_MASQUERADE,nf_conntrack_netbios_ns,nf_conntrack_broadcast,xt_CT,xt_conntrack,nf_nat\nlibcrc32c              16384  2 nf_nat,nf_conntrack\nmodprobe: can't change directory to '/lib/modules': No such file or directory`, error: exit status 1"
time="2019-07-15T10:18:12.824037405Z" level=warning msg="Running modprobe xt_conntrack failed with message: `Device \"xt_conntrack\" does not exist.\nxt_conntrack           16384 23 \nnf_conntrack          147456  9 xt_REDIRECT,xt_nat,nf_conntrack_netlink,ipt_MASQUERADE,nf_conntrack_netbios_ns,nf_conntrack_broadcast,xt_CT,xt_conntrack,nf_nat\nmodprobe: can't change directory to '/lib/modules': No such file or directory`, error: exit status 1"
time="2019-07-15T10:18:12.894200117Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
time="2019-07-15T10:18:12.929827868Z" level=info msg="Loading containers: done."
time="2019-07-15T10:18:12.935458853Z" level=info msg="Docker daemon" commit=27fcb77 graphdriver(s)=vfs version=19.03.0-rc3
time="2019-07-15T10:18:12.935658988Z" level=info msg="Daemon has completed initialization"
time="2019-07-15T10:18:12.989006670Z" level=info msg="API listen on /run/user/1000/docker.sock"
time="2019-07-15T10:19:03.878829230Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/068a953a4fd51645b9f2ae902f4a7e3c390832f7c3434d10839b5d0a9f9cbe3a/shim.sock" debug=false pid=271 
time="2019-07-15T10:19:04.099318989Z" level=error msg="stream copy error: read /proc/self/fd/24: file already closed"
time="2019-07-15T10:19:04.109637597Z" level=error msg="failed to delete task after fail start" container=068a953a4fd51645b9f2ae902f4a7e3c390832f7c3434d10839b5d0a9f9cbe3a error="task must be stopped before deletion: running: failed precondition" module=libcontainerd namespace=moby
time="2019-07-15T10:19:04.136752369Z" level=error msg="failed to delete failed start container" container=068a953a4fd51645b9f2ae902f4a7e3c390832f7c3434d10839b5d0a9f9cbe3a error="cannot delete running task 068a953a4fd51645b9f2ae902f4a7e3c390832f7c3434d10839b5d0a9f9cbe3a: failed precondition"
time="2019-07-15T10:19:04.168813771Z" level=warning msg="Ignoring Exit Event, no such exec command found" container=068a953a4fd51645b9f2ae902f4a7e3c390832f7c3434d10839b5d0a9f9cbe3a exec-id=068a953a4fd51645b9f2ae902f4a7e3c390832f7c3434d10839b5d0a9f9cbe3a exec-pid=289
time="2019-07-15T10:19:04.179587476Z" level=warning msg="068a953a4fd51645b9f2ae902f4a7e3c390832f7c3434d10839b5d0a9f9cbe3a cleanup: failed to unmount IPC: umount /home/user/.local/share/docker/containers/068a953a4fd51645b9f2ae902f4a7e3c390832f7c3434d10839b5d0a9f9cbe3a/mounts/shm, flags: 0x2: no such file or directory"
time="2019-07-15T10:19:04.191709962Z" level=error msg="068a953a4fd51645b9f2ae902f4a7e3c390832f7c3434d10839b5d0a9f9cbe3a cleanup: failed to delete container from containerd: cannot delete running task 068a953a4fd51645b9f2ae902f4a7e3c390832f7c3434d10839b5d0a9f9cbe3a: failed precondition"
time="2019-07-15T10:19:04.222439488Z" level=error msg="Handler for POST /v1.40/containers/068a953a4fd51645b9f2ae902f4a7e3c390832f7c3434d10839b5d0a9f9cbe3a/start returned error: cgroups: cgroup deleted: unknown"

@AkihiroSuda
Contributor Author

AkihiroSuda commented Jul 16, 2019

Is the failure specific to dind?

@tao12345666333
Contributor

It's specific to dind-rootless.

docker:dind works fine.

@AkihiroSuda
Contributor Author

Does non-dind rootless work?

@tao12345666333
Contributor

Does non-dind rootless work?

Yes, it works well.

Its Cgroup Driver is also `none`.

Server:                                                                                                          
 Containers: 0                                                                                                   
  Running: 0                                            
  Paused: 0                 
  Stopped: 0         
 Images: 0                                                                                                       
 Server Version: master-dockerproject-2019-07-15        
 Storage Driver: vfs                                    
 Logging Driver: json-file                                                                                       
 Cgroup Driver: none                                    
 Plugins:         
  Volume: local                                         
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive   
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 85f6aa58b8a3170aec9824568f7a31832878b603
 runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
 init version: fec3683
 Security Options:                                      
  seccomp                                               
   Profile: default                                     
  rootless                                              
 Kernel Version: 5.1.11-200.fc29.x86_64                                                                          
 Operating System: Fedora 29 (Workstation Edition)
 OSType: linux              
 Architecture: x86_64
 CPUs: 4                                                                                                         
 Total Memory: 15.56GiB                                 
 Name: localhost.localdomain 
 ID: MEWH:DOBY:ZDM7:QHCG:ID3D:MPUW:NWO4:3G2B:PD7Y:QPIP:BFJV:HTII
 Docker Root Dir: /home/tao/.local/share/docker
 Debug Mode: false
 Username: taobeier                                     
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: true
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
(MoeLove) ➜  bin docker run --rm -it alpine
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
050382585609: Pull complete 
Digest: sha256:6a92cd1fcdc8d8cdec60f33dda4db2cb1fcdcacf3410a8e05b3741f44a9b5998
Status: Downloaded newer image for alpine:latest
/ # 
/ # ls
bin    etc    lib    mnt    proc   run    srv    tmp    var
dev    home   media  opt    root   sbin   sys    usr
/ # cat /etc/os-release 
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.10.1
PRETTY_NAME="Alpine Linux v3.10"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"

@AkihiroSuda
Contributor Author

AkihiroSuda commented Jul 16, 2019

I confirmed that the issue can be avoided by applying the following patch to the containerd daemon, but I'm not sure why it is specific to Fedora; I couldn't reproduce it on Ubuntu.

diff --git a/runtime/v1/linux/task.go b/runtime/v1/linux/task.go
index e13255e9..80eb0d64 100644
--- a/runtime/v1/linux/task.go
+++ b/runtime/v1/linux/task.go
@@ -124,7 +124,7 @@ func (t *Task) Start(ctx context.Context) error {
        t.pid = int(r.Pid)
        if !hasCgroup {
                cg, err := cgroups.Load(cgroups.V1, cgroups.PidPath(t.pid))
-               if err != nil {
+               if err != nil && err != cgroups.ErrCgroupDeleted {
                        return err
                }
                t.mu.Lock()
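In isolation, the pattern the patch applies — treating one sentinel error as non-fatal while still failing on anything else — can be sketched like this (hypothetical names, not containerd's actual API):

```go
package main

import (
	"errors"
	"fmt"
)

// ErrCgroupDeleted stands in for the containerd/cgroups sentinel value.
var ErrCgroupDeleted = errors.New("cgroups: cgroup deleted")

// loadCgroup is a hypothetical loader: it may find the cgroup already gone.
func loadCgroup(deleted bool) (string, error) {
	if deleted {
		return "", ErrCgroupDeleted
	}
	return "cgroup-handle", nil
}

// startTask mirrors the patched check: ErrCgroupDeleted is tolerated,
// any other load error still aborts the start.
func startTask(deleted bool) error {
	cg, err := loadCgroup(deleted)
	if err != nil && err != ErrCgroupDeleted {
		return err
	}
	_ = cg // proceed without cgroup metrics if the cgroup was already deleted
	return nil
}

func main() {
	fmt.Println(startTask(true))  // tolerated
	fmt.Println(startTask(false)) // normal path
}
```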

EDIT: the ErrCgroupDeleted error doesn't happen on Ubuntu because Ubuntu has /sys/fs/cgroup/rdma but the parent Docker doesn't use the rdma cgroup.

On Ubuntu, as /proc/PID/cgroup always contains 10:rdma:/, containerd/cgroups.Load always detects rdma as an active subsystem, even though it is not currently writable in rootless mode.

EDIT: PR containerd/containerd#3419
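The /proc/PID/cgroup detection described above hinges on that file's `hierarchy-ID:controller-list:path` line format. A small Python sketch (illustrative only, not containerd's code) of how a subsystem such as rdma ends up listed as "active":

```python
# Illustrative sketch: parse /proc/<pid>/cgroup text such as "10:rdma:/"
# to list the subsystems the kernel reports for a process.
def active_subsystems(cgroup_text):
    subsystems = []
    for line in cgroup_text.strip().splitlines():
        # each line: hierarchy-ID:controller-list:cgroup-path
        _, controllers, _ = line.split(":", 2)
        for name in controllers.split(","):
            if name:  # skip the empty controller of the cgroup v2 line "0::/"
                subsystems.append(name)
    return subsystems

sample = "11:cpuset:/\n10:rdma:/\n0::/user.slice"
print(active_subsystems(sample))  # ['cpuset', 'rdma']
```

Under this view, rdma is reported as active on Ubuntu even when the rootless daemon cannot actually write to it, which is what trips the cgroup load later.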

@tao12345666333
Contributor

Thanks. Let me give it a try.

AkihiroSuda added a commit to AkihiroSuda/containerd that referenced this pull request Jul 17, 2019
Fix a Rootless Docker-in-Docker issue on Fedora 30: docker-library/docker#165 (comment)
Related: containerd#1598

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
AkihiroSuda added a commit to AkihiroSuda/containerd that referenced this pull request Jul 18, 2019
Fix a Rootless Docker-in-Docker issue on Fedora 30: docker-library/docker#165 (comment)
Related: containerd#1598

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit fab016c)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
@AkihiroSuda
Contributor Author

Closing in favor of #174.

tussennet pushed a commit to tussennet/containerd that referenced this pull request Sep 11, 2020
Fix a Rootless Docker-in-Docker issue on Fedora 30: docker-library/docker#165 (comment)
Related: containerd#1598

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>

Labels

None yet

2 participants