All service tasks on a specific node in a swarm cluster restart simultaneously
Created 2022-02-07, Last modified 2022-02-14
Problem
In a swarm cluster with three manager nodes, I found that all service tasks deployed on one specific node were redeployed at the same time. To narrow this down, I checked whether the same thing happened on other nodes, and whether only tasks deployed as services were redeployed.
Did it occur only on a specific node? => The full node list of the swarm cluster is shown below; the issue was confirmed to have occurred only on that one node.
$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
lo9isvanmlq88qq23ns6e1eur * fs-store-repo Ready Active Leader 20.10.5
4xb1pzaf14ronrzcggfgv70pn fs-testbed-60 Ready Active Reachable 20.10.5
q2555giglkrp82c8ny7l8d26x txstgfin Ready Active Reachable 20.10.5
Were only service tasks redeployed, or standalone containers as well? => At that time, every service task deployed in the swarm cluster was redeployed, but only on that node (standalone containers were not affected).
$ docker stack ps -f "node=fs-store-repo" monitoring_stack_server
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
y61hla6heibv monitoring_stack_server_cassandra.1 harbor.danal.co.kr/docker_hub/library/cassandra:3.11.11 fs-store-repo Running Running 26 hours ago
7jcjqojhei3w \_ monitoring_stack_server_cassandra.1 harbor.danal.co.kr/docker_hub/library/cassandra:3.11.11 fs-store-repo Shutdown Shutdown 29 hours ago
bhc880yzmsbh monitoring_stack_server_consul.1 harbor.danal.co.kr/docker_hub/library/consul:1.10.4 fs-store-repo Shutdown Rejected 23 hours ago "No such image: harbor.danal.c…"
nveu6wvxiwgm \_ monitoring_stack_server_consul.1 harbor.danal.co.kr/docker_hub/library/consul:1.10.4 fs-store-repo Shutdown Rejected 26 hours ago "No such image: harbor.danal.c…"
ku0w3xxzt6k5 \_ monitoring_stack_server_consul.1 harbor.danal.co.kr/docker_hub/library/consul:1.10.4 fs-store-repo Shutdown Rejected 26 hours ago "No such image: harbor.danal.c…"
1sxww9yru4su monitoring_stack_server_cortex.2 harbor.danal.co.kr/quay_io/cortexproject/cortex:v1.11.0 fs-store-repo Running Running 26 hours ago
h4c02q3uqe46 \_ monitoring_stack_server_cortex.2 harbor.danal.co.kr/quay_io/cortexproject/cortex:v1.11.0 fs-store-repo Shutdown Failed 29 hours ago "task: non-zero exit (1)"
q5czcqbq8bk4 \_ monitoring_stack_server_cortex.2 harbor.danal.co.kr/quay_io/cortexproject/cortex:v1.11.0 fs-store-repo Shutdown Failed 29 hours ago "task: non-zero exit (1)"
men9glmx7jtl \_ monitoring_stack_server_cortex.2 harbor.danal.co.kr/quay_io/cortexproject/cortex:v1.11.0 fs-store-repo Shutdown Shutdown 29 hours ago
k4c9ddaie5z8 monitoring_stack_server_grafana.1 harbor.danal.co.kr/docker_hub/grafana/grafana-oss:8.3.3 fs-store-repo Shutdown Failed 29 hours ago "task: non-zero exit (1)"
nny6zj8pox5b \_ monitoring_stack_server_grafana.1 harbor.danal.co.kr/docker_hub/grafana/grafana-oss:8.3.3 fs-store-repo Shutdown Failed 29 hours ago "task: non-zero exit (1)"
ma6q08supcs1 \_ monitoring_stack_server_grafana.1 harbor.danal.co.kr/docker_hub/grafana/grafana-oss:8.3.3 fs-store-repo Shutdown Failed 29 hours ago "task: non-zero exit (1)"
q0tmv7upvjwo \_ monitoring_stack_server_grafana.1 harbor.danal.co.kr/docker_hub/grafana/grafana-oss:8.3.3 fs-store-repo Shutdown Failed 29 hours ago "task: non-zero exit (1)"
lmszbxi5p6tj \_ monitoring_stack_server_grafana.1 harbor.danal.co.kr/docker_hub/grafana/grafana-oss:8.3.3 fs-store-repo Shutdown Failed 29 hours ago "task: non-zero exit (1)"
vf3hm2cjy0a3 \_ monitoring_stack_server_grafana.1 harbor.danal.co.kr/docker_hub/grafana/grafana-oss:8.3.3 fs-store-repo Shutdown Shutdown 29 hours ago
5yrv28cz9ws4 monitoring_stack_server_prometheus.2 harbor.danal.co.kr/docker_hub/prom/prometheus:v2.32.0 fs-store-repo Running Running 26 hours ago
kvoenmvugw0h \_ monitoring_stack_server_prometheus.2 harbor.danal.co.kr/docker_hub/prom/prometheus:v2.32.0 fs-store-repo Shutdown Shutdown 29 hours ago
$ docker ps -a -f "name=monitoring_stack_server"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
acec0d3c49f9 harbor.danal.co.kr/docker_hub/grafana/grafana-oss:8.3.3 "/run.sh" 29 hours ago Exited (1) 29 hours ago monitoring_stack_server_grafana.1.k4c9ddaie5z8909fkof4y3n16
c9f5e96b7bd2 harbor.danal.co.kr/docker_hub/grafana/grafana-oss:8.3.3 "/run.sh" 29 hours ago Exited (1) 29 hours ago monitoring_stack_server_grafana.1.nny6zj8pox5b1fsobh8dt7279
4782a2484581 harbor.danal.co.kr/quay_io/cortexproject/cortex:v1.11.0 "/bin/cortex -config…" 29 hours ago Up 29 hours 80/tcp monitoring_stack_server_cortex.2.1sxww9yru4suzung344w07skv
35888cd3025d harbor.danal.co.kr/docker_hub/grafana/grafana-oss:8.3.3 "/run.sh" 29 hours ago Exited (1) 29 hours ago monitoring_stack_server_grafana.1.ma6q08supcs1gullyjy9dn2y3
d25c66d7037a harbor.danal.co.kr/docker_hub/grafana/grafana-oss:8.3.3 "/run.sh" 29 hours ago Exited (1) 29 hours ago monitoring_stack_server_grafana.1.q0tmv7upvjwot49mwn58ennkd
b8b00aaaf420 harbor.danal.co.kr/quay_io/cortexproject/cortex:v1.11.0 "/bin/cortex -config…" 29 hours ago Exited (1) 29 hours ago monitoring_stack_server_cortex.2.h4c02q3uqe465d29j5x8cdalv
ed1493f11979 harbor.danal.co.kr/docker_hub/prom/prometheus:v2.32.0 "/bin/prometheus --c…" 29 hours ago Up 29 hours 0.0.0.0:9090->9090/tcp monitoring_stack_server_prometheus.2.5yrv28cz9ws4qjkygrraxln34
fa1ad77cebf4 harbor.danal.co.kr/docker_hub/grafana/grafana-oss:8.3.3 "/run.sh" 29 hours ago Exited (1) 29 hours ago monitoring_stack_server_grafana.1.lmszbxi5p6tjuuxuoicjtlb9w
ad0faf8d0463 harbor.danal.co.kr/quay_io/cortexproject/cortex:v1.11.0 "/bin/cortex -config…" 29 hours ago Exited (1) 29 hours ago monitoring_stack_server_cortex.2.q5czcqbq8bk44y0j0sbittt9w
c6b6fdf455f6 harbor.danal.co.kr/docker_hub/library/cassandra:3.11.11 "./cassandra-init.sh" 29 hours ago Up 29 hours 7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp monitoring_stack_server_cassandra.1.y61hla6heibv6219t3uusetop
9bfc9bba6959 harbor.danal.co.kr/docker_hub/grafana/grafana-oss:8.3.3 "/run.sh" 4 days ago Exited (0) 29 hours ago monitoring_stack_server_grafana.1.vf3hm2cjy0a3lthq7v6wh41l2
cffbd9a7c80d harbor.danal.co.kr/docker_hub/library/cassandra:3.11.11 "./cassandra-init.sh" 4 days ago Exited (143) 29 hours ago monitoring_stack_server_cassandra.1.7jcjqojhei3w8akqg8j2m59ng
f9104ce653ee harbor.danal.co.kr/docker_hub/prom/prometheus:v2.32.0 "/bin/prometheus --c…" 4 days ago Exited (0) 29 hours ago monitoring_stack_server_prometheus.2.kvoenmvugw0he383y7bki36nk
7299ba281c4a harbor.danal.co.kr/quay_io/cortexproject/cortex:v1.11.0 "/bin/cortex -config…" 3 weeks ago Exited (137) 29 hours ago
Conclusion: in a swarm cluster made up of three manager nodes, at a specific point in time, every service task on one node (192.168.136.61, fs-store-repo) was redeployed, while standalone containers on the same node were unaffected.
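One way to confirm that the restarts clustered around a single point in time is to group container creation timestamps from `docker ps -a --format '{{.Names}}\t{{.CreatedAt}}'`. The helper below is a hypothetical sketch; the names and timestamps in the sample are illustrative, not the real ones from this incident.

```python
from datetime import datetime, timedelta

# Illustrative output resembling `docker ps -a --format '{{.Names}}\t{{.CreatedAt}}'`;
# these names and timestamps are made up, not the real ones from this incident.
sample = (
    "monitoring_stack_server_cassandra.1.aaaa\t2022-02-02 01:00:00 +0900 KST\n"
    "monitoring_stack_server_grafana.1.bbbb\t2022-02-06 04:58:24 +0900 KST\n"
    "monitoring_stack_server_cortex.2.cccc\t2022-02-06 04:58:40 +0900 KST\n"
    "monitoring_stack_server_prometheus.2.dddd\t2022-02-06 04:58:41 +0900 KST\n"
)

def cluster_restarts(text, window=timedelta(minutes=5)):
    """Group (created_at, name) events whose timestamps fall within `window`
    of the previous event; a large group indicates a simultaneous restart burst."""
    events = []
    for line in text.strip().splitlines():
        name, created = line.split("\t")
        # Drop the trailing timezone abbreviation ("KST") before parsing.
        ts = datetime.strptime(created.rsplit(" ", 1)[0], "%Y-%m-%d %H:%M:%S %z")
        events.append((ts, name))
    events.sort()
    groups, current = [], [events[0]]
    for ev in events[1:]:
        if ev[0] - current[-1][0] <= window:
            current.append(ev)
        else:
            groups.append(current)
            current = [ev]
    groups.append(current)
    return groups

groups = cluster_restarts(sample)
# The three containers created around 04:58 fall into one burst; the
# cassandra container from four days earlier forms its own group.
```

In the real output above, the same pattern is visible by eye: every restarted container shows "29 hours ago", while untouched ones show "4 days ago" or older.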
Analysis
To find out why every service task on one particular node, rather than a single container, was redeployed, I examined the docker daemon logs on each node.
Node where the issue did not occur (192.168.136.10, txstgfin)
# journalctl -u docker -S "2022-02-06 04:00" -U "2022-02-06 05:00"
...(omitted)...
Feb 06 04:55:42 txstgfin dockerd[349591]: time="2022-02-06T04:55:42.880256768+09:00" level=info msg="NetworkDB stats txstgfin(43496003523c) - netID:0e9wx1tt26ur9uaidj5bas2cf leaving:false netPeers:3 entries:12 Queue qLen:0 netMsg/s:0"
Feb 06 04:55:42 txstgfin dockerd[349591]: time="2022-02-06T04:55:42.880288291+09:00" level=info msg="NetworkDB stats txstgfin(43496003523c) - netID:sodw73b5hbh2y0dfyenvuux7u leaving:false netPeers:3 entries:11 Queue qLen:0 netMsg/s:0"
Feb 06 04:58:00 txstgfin dockerd[349591]: time="2022-02-06T04:58:00.952318073+09:00" level=error msg="error receiving response" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Feb 06 04:58:02 txstgfin dockerd[349591]: time="2022-02-06T04:58:02.480257999+09:00" level=info msg="memberlist: Suspect b8a14f25127c has failed, no acks received"
Feb 06 04:58:02 txstgfin dockerd[349591]: time="2022-02-06T04:58:02.952694244+09:00" level=error msg="error receiving response" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Feb 06 04:58:04 txstgfin dockerd[349591]: time="2022-02-06T04:58:04.481218489+09:00" level=info msg="memberlist: Suspect b8a14f25127c has failed, no acks received"
Feb 06 04:58:04 txstgfin dockerd[349591]: time="2022-02-06T04:58:04.627705970+09:00" level=info msg="memberlist: Marking b8a14f25127c as failed, suspect timeout reached (0 peer confirmations)"
Feb 06 04:58:04 txstgfin dockerd[349591]: time="2022-02-06T04:58:04.630085505+09:00" level=info msg="Node b8a14f25127c/192.168.136.61, left gossip cluster"
Feb 06 04:58:04 txstgfin dockerd[349591]: time="2022-02-06T04:58:04.634415617+09:00" level=info msg="Node b8a14f25127c change state NodeActive --> NodeFailed"
Feb 06 04:58:04 txstgfin dockerd[349591]: time="2022-02-06T04:58:04.659908328+09:00" level=info msg="Node b8a14f25127c/192.168.136.61, added to failed nodes list"
Feb 06 04:58:04 txstgfin dockerd[349591]: time="2022-02-06T04:58:04.953035169+09:00" level=error msg="error receiving response" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Feb 06 04:58:06 txstgfin dockerd[349591]: time="2022-02-06T04:58:06.991076338+09:00" level=error msg="error receiving response" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Feb 06 04:58:08 txstgfin dockerd[349591]: time="2022-02-06T04:58:08.991374466+09:00" level=error msg="error receiving response" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Feb 06 04:58:09 txstgfin dockerd[349591]: time="2022-02-06T04:58:09.907004228+09:00" level=error msg="error while reading from stream" error="rpc error: code = Canceled desc = context canceled"
Feb 06 04:58:10 txstgfin dockerd[349591]: time="2022-02-06T04:58:10.227326496+09:00" level=warning msg="memberlist: Refuting a suspect message (from: b7a3bb9ca95d)"
Feb 06 04:58:10 txstgfin dockerd[349591]: time="2022-02-06T04:58:10.228009303+09:00" level=info msg="Node b8a14f25127c/192.168.136.61, joined gossip cluster"
Feb 06 04:58:10 txstgfin dockerd[349591]: time="2022-02-06T04:58:10.228077498+09:00" level=info msg="Node b8a14f25127c change state NodeFailed --> NodeActive"
Feb 06 04:58:10 txstgfin dockerd[349591]: time="2022-02-06T04:58:10.991829405+09:00" level=error msg="error receiving response" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Feb 06 04:58:12 txstgfin dockerd[349591]: time="2022-02-06T04:58:12.339077733+09:00" level=info msg="worker lo9isvanmlq88qq23ns6e1eur was successfully registered" method="(*Dispatcher).register"
Feb 06 04:58:12 txstgfin dockerd[349591]: time="2022-02-06T04:58:12.992106444+09:00" level=error msg="error receiving response" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Feb 06 04:58:14 txstgfin dockerd[349591]: time="2022-02-06T04:58:14.992428810+09:00" level=error msg="error receiving response" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Feb 06 04:58:16 txstgfin dockerd[349591]: time="2022-02-06T04:58:16.697882426+09:00" level=warning msg="rmServiceBinding 48c15a6819341f57c4c22b037b412873a4ddb596e9a2cc33c1ca02cbd136d2fe possible transient state ok:false entries:0 set:false "
Feb 06 04:58:16 txstgfin dockerd[349591]: time="2022-02-06T04:58:16.895395886+09:00" level=warning msg="rmServiceBinding a23b02259714bb72024ead4e13b650b908d47f4e8363839acd59882b72d9d87f possible transient state ok:false entries:0 set:false "
Feb 06 04:58:17 txstgfin dockerd[349591]: time="2022-02-06T04:58:17.698577541+09:00" level=warning msg="rmServiceBinding 9343adfc22e0e131a59ed4ece8a5f9d247a6b2bf48c7b4e6769c4fafa394ec5f possible transient state ok:false entries:0 set:false "
Feb 06 04:58:17 txstgfin dockerd[349591]: time="2022-02-06T04:58:17.895610042+09:00" level=warning msg="rmServiceBinding ea678c9ebc72740fe510f81a3075a523d38815712893d81a3a3e1c82ef862d83 possible transient state ok:false entries:0 set:false "
Feb 06 04:58:20 txstgfin dockerd[349591]: time="2022-02-06T04:58:20.094486586+09:00" level=warning msg="rmServiceBinding d6380d6a985ace4a5f33586ff98e90fabaf13cab325c151879093b22779cec8e possible transient state ok:false entries:0 set:false "
Feb 06 04:58:21 txstgfin dockerd[349591]: time="2022-02-06T04:58:21.096368212+09:00" level=warning msg="rmServiceBinding a2a45edb725f811c037888bc57ad4c8b3cae7ae970d069258f419c1ab4e4c62d possible transient state ok:false entries:0 set:false "
Feb 06 04:58:26 txstgfin dockerd[349591]: time="2022-02-06T04:58:26.293903524+09:00" level=warning msg="rmServiceBinding 954f50fe9c02726006300504e5220dd17719b191a0ce2f652473e0b657dc0268 possible transient state ok:false entries:0 set:false "
Feb 06 04:59:13 txstgfin dockerd[349591]: time="2022-02-06T04:59:13.121483635+09:00" level=warning msg="underweighting node lo9isvanmlq88qq23ns6e1eur for service u5f5n764qwwr1x8dexanrjmoi because it experienced 5 failures or rejections within 5m0s" module=node node.id=q2555giglkrp82c8ny7l8d26x
Feb 06 05:00:42 txstgfin dockerd[349591]: time="2022-02-06T05:00:42.883442420+09:00" level=info msg="NetworkDB stats txstgfin(43496003523c) - netID:z9jn6yve4s5h49lw92anpjnhs leaving:false netPeers:3 entries:19 Queue qLen:0 netMsg/s:0"
Feb 06 05:00:42 txstgfin dockerd[349591]: time="2022-02-06T05:00:42.883561671+09:00" level=info msg="NetworkDB stats txstgfin(43496003523c) - netID:0e9wx1tt26ur9uaidj5bas2cf leaving:false netPeers:3 entries:22 Queue qLen:0 netMsg/s:0"
...(omitted)...
04:58:00 ~ 04:58:04 (lines 5~8): two heartbeat probes to a memberlist member (b8a14f25127c, 192.168.136.61) received no acks, so that node is suspected of having failed.
04:58:04 ~ 04:58:04 (lines 9~12): the member that failed the heartbeats (b8a14f25127c) is judged to have left the gossip cluster; its state is changed (NodeActive --> NodeFailed) and it is added to the failed nodes list.
04:58:04 ~ 04:58:09 (lines 13~16): ??
04:58:10 ~ 04:58:10 (lines 17~19): a refutation of the suspicion raised above (a suspect message from b7a3bb9ca95d, 192.168.136.60); that is, since b8a14f25127c has rejoined the gossip cluster, its state is changed back (NodeFailed -> NodeActive).
04:58:10 ~ 04:58:14 (lines 20~23): ??
04:58:16 ~ 04:59:13 (lines 24~31): logs related to the redeployment of the service tasks on member b8a14f25127c.
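The node-state transitions described above (NodeActive --> NodeFailed on suspect timeout, NodeFailed --> NodeActive on rejoin/refute) can be sketched as a small table-driven state machine. This is only an illustration of the behavior observable in the logs, not Docker's actual implementation.

```python
# Node states exactly as they appear in the dockerd log messages.
ACTIVE, FAILED = "NodeActive", "NodeFailed"

# Transition table: (current state, event) -> next state.
TRANSITIONS = {
    (ACTIVE, "suspect_timeout"): FAILED,  # marked failed, left gossip cluster
    (FAILED, "rejoin"): ACTIVE,           # join/refute restores the node
}

def apply_event(state, event):
    # Unknown (state, event) pairs leave the state unchanged, e.g. a
    # refutation that arrives while the node is still NodeActive.
    return TRANSITIONS.get((state, event), state)

# Replaying the 04:58:04 suspect timeout and the 04:58:10 rejoin brings
# the node back to NodeActive, matching the log lines above.
state = ACTIVE
for event in ("suspect_timeout", "rejoin"):
    state = apply_event(state, event)
```

The key point for this incident is the round trip: the node was considered failed for roughly six seconds before the refutation restored it, which is long enough for the managers to start rescheduling its tasks.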
Node where the issue did not occur (192.168.136.60, fs-testbed-60)
# journalctl -u docker -S "2022-02-06 04:00" -U "2022-02-06 05:00"
...(omitted)...
Feb 06 04:57:34 fs-testbed-60 dockerd[2747094]: time="2022-02-06T04:57:34.219160381+09:00" level=info msg="NetworkDB stats fs-testbed-60(b7a3bb9ca95d) - netID:sodw73b5hbh2y0dfyenvuux7u leaving:false netPeers:3 entries:11 Queue qLen:0 netMsg/s:0"
Feb 06 04:57:34 fs-testbed-60 dockerd[2747094]: time="2022-02-06T04:57:34.219183816+09:00" level=info msg="NetworkDB stats fs-testbed-60(b7a3bb9ca95d) - netID:mfcm4gkvqocgr5s5wo3rix9y7 leaving:false netPeers:2 entries:16 Queue qLen:0 netMsg/s:0"
Feb 06 04:58:00 fs-testbed-60 dockerd[2747094]: time="2022-02-06T04:58:00.619482598+09:00" level=info msg="memberlist: Suspect b8a14f25127c has failed, no acks received"
Feb 06 04:58:02 fs-testbed-60 dockerd[2747094]: time="2022-02-06T04:58:02.619313363+09:00" level=info msg="memberlist: Suspect b8a14f25127c has failed, no acks received"
Feb 06 04:58:04 fs-testbed-60 dockerd[2747094]: time="2022-02-06T04:58:04.622912049+09:00" level=info msg="memberlist: Marking b8a14f25127c as failed, suspect timeout reached (0 peer confirmations)"
Feb 06 04:58:04 fs-testbed-60 dockerd[2747094]: time="2022-02-06T04:58:04.623047486+09:00" level=info msg="Node b8a14f25127c/192.168.136.61, left gossip cluster"
Feb 06 04:58:04 fs-testbed-60 dockerd[2747094]: time="2022-02-06T04:58:04.623088771+09:00" level=info msg="Node b8a14f25127c change state NodeActive --> NodeFailed"
Feb 06 04:58:04 fs-testbed-60 dockerd[2747094]: time="2022-02-06T04:58:04.623754424+09:00" level=info msg="memberlist: Suspect b8a14f25127c has failed, no acks received"
Feb 06 04:58:04 fs-testbed-60 dockerd[2747094]: time="2022-02-06T04:58:04.630359485+09:00" level=info msg="Node b8a14f25127c/192.168.136.61, added to failed nodes list"
Feb 06 04:58:10 fs-testbed-60 dockerd[2747094]: time="2022-02-06T04:58:10.215205223+09:00" level=info msg="Node b8a14f25127c/192.168.136.61, joined gossip cluster"
Feb 06 04:58:10 fs-testbed-60 dockerd[2747094]: time="2022-02-06T04:58:10.215331839+09:00" level=info msg="Node b8a14f25127c change state NodeFailed --> NodeActive"
Feb 06 04:58:10 fs-testbed-60 dockerd[2747094]: time="2022-02-06T04:58:10.215390753+09:00" level=warning msg="memberlist: Refuting a suspect message (from: b7a3bb9ca95d)"
Feb 06 04:58:16 fs-testbed-60 dockerd[2747094]: time="2022-02-06T04:58:16.843181979+09:00" level=warning msg="rmServiceBinding 48c15a6819341f57c4c22b037b412873a4ddb596e9a2cc33c1ca02cbd136d2fe possible transient state ok:false entries:0 set:false "
Feb 06 04:58:17 fs-testbed-60 dockerd[2747094]: time="2022-02-06T04:58:17.027007355+09:00" level=warning msg="rmServiceBinding a23b02259714bb72024ead4e13b650b908d47f4e8363839acd59882b72d9d87f possible transient state ok:false entries:0 set:false "
Feb 06 04:58:17 fs-testbed-60 dockerd[2747094]: time="2022-02-06T04:58:17.698610621+09:00" level=warning msg="rmServiceBinding 9343adfc22e0e131a59ed4ece8a5f9d247a6b2bf48c7b4e6769c4fafa394ec5f possible transient state ok:false entries:0 set:false "
Feb 06 04:58:17 fs-testbed-60 dockerd[2747094]: time="2022-02-06T04:58:17.895821078+09:00" level=warning msg="rmServiceBinding ea678c9ebc72740fe510f81a3075a523d38815712893d81a3a3e1c82ef862d83 possible transient state ok:false entries:0 set:false "
Feb 06 04:58:19 fs-testbed-60 dockerd[2747094]: time="2022-02-06T04:58:19.502650748+09:00" level=warning msg="rmServiceBinding 8a4e5dd1a9640b55f850c21cc6724ab6c46cac23f6b63f1eef7873b7c247c1a5 possible transient state ok:false entries:0 set:false "
Feb 06 04:58:20 fs-testbed-60 dockerd[2747094]: time="2022-02-06T04:58:20.093895730+09:00" level=warning msg="rmServiceBinding d6380d6a985ace4a5f33586ff98e90fabaf13cab325c151879093b22779cec8e possible transient state ok:false entries:0 set:false "
Feb 06 04:58:21 fs-testbed-60 dockerd[2747094]: time="2022-02-06T04:58:21.096292487+09:00" level=warning msg="rmServiceBinding a2a45edb725f811c037888bc57ad4c8b3cae7ae970d069258f419c1ab4e4c62d possible transient state ok:false entries:0 set:false "
Feb 06 04:58:21 fs-testbed-60 dockerd[2747094]: time="2022-02-06T04:58:21.915898794+09:00" level=warning msg="rmServiceBinding 994781b3ccb17ce8d30649a78719f7b2a05788ce30290b12a9eaaeac237e38d1 possible transient state ok:false entries:0 set:false "
Feb 06 04:58:26 fs-testbed-60 dockerd[2747094]: time="2022-02-06T04:58:26.293937508+09:00" level=warning msg="rmServiceBinding 954f50fe9c02726006300504e5220dd17719b191a0ce2f652473e0b657dc0268 possible transient state ok:false entries:0 set:false "
Feb 06 04:58:26 fs-testbed-60 dockerd[2747094]: time="2022-02-06T04:58:26.295071603+09:00" level=warning msg="rmServiceBinding 2a78333e072935b501ca8ec917bd43d9d9b535746e11980b01834990de26ee0d possible transient state ok:false entries:0 set:false "
Feb 06 05:02:34 fs-testbed-60 dockerd[2747094]: time="2022-02-06T05:02:34.419129003+09:00" level=info msg="NetworkDB stats fs-testbed-60(b7a3bb9ca95d) - netID:mfcm4gkvqocgr5s5wo3rix9y7 leaving:false netPeers:2 entries:36 Queue qLen:0 netMsg/s:0"
Feb 06 05:02:34 fs-testbed-60 dockerd[2747094]: time="2022-02-06T05:02:34.419297372+09:00" level=info msg="NetworkDB stats fs-testbed-60(b7a3bb9ca95d) - netID:z9jn6yve4s5h49lw92anpjnhs leaving:false netPeers:3 entries:19 Queue qLen:0 netMsg/s:0"
...(omitted)...
04:58:00 ~ 04:58:04 (lines 5~8): two heartbeat probes to a memberlist member (b8a14f25127c, 192.168.136.61) received no acks, so that node is suspected of having failed.
04:58:04 ~ 04:58:04 (lines 9~11): the member that failed the heartbeats (b8a14f25127c) is judged to have left the gossip cluster; its state is changed (NodeActive --> NodeFailed) and it is added to the failed nodes list.
04:58:10 ~ 04:58:10 (lines 12~14): since b8a14f25127c has rejoined the gossip cluster, its state is changed back (NodeFailed -> NodeActive) and the suspicion raised above is refuted.
04:58:16 ~ 04:58:26 (lines 15~24): logs related to the redeployment of the service tasks on member b8a14f25127c.
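On this node the gap between the first suspect message (04:58:00) and the node being marked failed (04:58:04) is exactly four seconds. That is consistent with hashicorp/memberlist's suspicion timeout under its default LAN profile (SuspicionMult=4, ProbeInterval=1s); treating those values as an assumption here, since I have not verified which configuration Docker's networkdb actually uses, the timeout can be sketched as:

```python
import math

def suspicion_timeout(suspicion_mult, n, probe_interval_s):
    """Sketch of memberlist's suspicion timeout: the multiplier is scaled by
    log10 of the cluster size (floored at 1.0) times the probe interval."""
    node_scale = max(1.0, math.log10(max(1.0, float(n))))
    return suspicion_mult * node_scale * probe_interval_s

# For this 3-node cluster with the assumed defaults: 4 * 1.0 * 1s = 4 seconds,
# matching the 04:58:00 -> 04:58:04 window in the log above.
timeout = suspicion_timeout(4, 3, 1.0)
```

In small clusters the log10 scaling floors at 1.0, so any network stall longer than about four seconds is enough for a member to be declared failed with zero peer confirmations, as the "(0 peer confirmations)" log lines show.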
Node where the issue occurred (192.168.136.61, fs-store-repo)
# journalctl -u docker -S "2022-02-06 04:00" -U "2022-02-06 05:00"
...(omitted)...
Feb 06 04:54:18 fs-store-repo dockerd[18696]: time="2022-02-06T04:54:18.502786888+09:00" level=info msg="NetworkDB stats fs-store-repo(b8a14f25127c) - netID:z9jn6yve4s5h49lw92anpjnhs leaving:false netPeers:3 entries:15 Queue qLen:0 netMsg/s:0"
Feb 06 04:54:18 fs-store-repo dockerd[18696]: time="2022-02-06T04:54:18.502831431+09:00" level=info msg="NetworkDB stats fs-store-repo(b8a14f25127c) - netID:0e9wx1tt26ur9uaidj5bas2cf leaving:false netPeers:3 entries:12 Queue qLen:0 netMsg/s:0"
Feb 06 04:58:00 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:00.503198446+09:00" level=info msg="memberlist: Suspect b7a3bb9ca95d has failed, no acks received"
Feb 06 04:58:00 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:00.819624891+09:00" level=warning msg="memberlist: Refuting a suspect message (from: b7a3bb9ca95d)"
Feb 06 04:58:00 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:00.948743760+09:00" level=error msg="error receiving response" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Feb 06 04:58:02 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:02.503916326+09:00" level=info msg="memberlist: Suspect 43496003523c has failed, no acks received"
Feb 06 04:58:04 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:04.503985144+09:00" level=info msg="memberlist: Marking b7a3bb9ca95d as failed, suspect timeout reached (0 peer confirmations)"
Feb 06 04:58:04 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:04.504094327+09:00" level=info msg="Node b7a3bb9ca95d/192.168.136.60, left gossip cluster"
Feb 06 04:58:04 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:04.508489769+09:00" level=info msg="Node b7a3bb9ca95d change state NodeActive --> NodeFailed"
Feb 06 04:58:04 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:04.524060466+09:00" level=info msg="Node b7a3bb9ca95d/192.168.136.60, added to failed nodes list"
Feb 06 04:58:04 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:04.928461811+09:00" level=error msg="heartbeat to manager { } failed" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded" method="(*session).heartbeat" module=node/agent node.id=lo9isvanmlq88qq23ns6e1eur session.id=4xtpor0zxt7hww5bdqq9s3tbx sessionID=4xtpor0zxt7hww5bdqq9s3tbx
Feb 06 04:58:04 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:04.928594010+09:00" level=error msg="agent: session failed" backoff=100ms error="rpc error: code = DeadlineExceeded desc = context deadline exceeded" module=node/agent node.id=lo9isvanmlq88qq23ns6e1eur
Feb 06 04:58:04 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:04.928679447+09:00" level=info msg="manager selected by agent for new session: { }" module=node/agent node.id=lo9isvanmlq88qq23ns6e1eur
Feb 06 04:58:04 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:04.928736543+09:00" level=info msg="waiting 91.21106ms before registering session" module=node/agent node.id=lo9isvanmlq88qq23ns6e1eur
Feb 06 04:58:06 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:06.505813934+09:00" level=info msg="memberlist: Marking 43496003523c as failed, suspect timeout reached (0 peer confirmations)"
Feb 06 04:58:06 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:06.506002617+09:00" level=info msg="Node 43496003523c/192.168.136.10, left gossip cluster"
Feb 06 04:58:06 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:06.506023661+09:00" level=info msg="memberlist: Suspect 43496003523c has failed, no acks received"
Feb 06 04:58:06 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:06.506038559+09:00" level=info msg="Node 43496003523c change state NodeActive --> NodeFailed"
Feb 06 04:58:06 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:06.507046577+09:00" level=info msg="Node 43496003523c/192.168.136.10, added to failed nodes list"
Feb 06 04:58:10 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:10.020324250+09:00" level=error msg="agent: session failed" backoff=300ms error="session initiation timed out" module=node/agent node.id=lo9isvanmlq88qq23ns6e1eur
Feb 06 04:58:10 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:10.020431463+09:00" level=info msg="manager selected by agent for new session: { }" module=node/agent node.id=lo9isvanmlq88qq23ns6e1eur
Feb 06 04:58:10 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:10.020474383+09:00" level=info msg="waiting 280.226328ms before registering session" module=node/agent node.id=lo9isvanmlq88qq23ns6e1eur
Feb 06 04:58:10 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:10.214535905+09:00" level=error msg="node: b7a3bb9ca95d is unknown to memberlist"
Feb 06 04:58:10 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:10.227064558+09:00" level=info msg="Node b7a3bb9ca95d/192.168.136.60, joined gossip cluster"
Feb 06 04:58:10 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:10.227129422+09:00" level=info msg="Node b7a3bb9ca95d change state NodeFailed --> NodeActive"
Feb 06 04:58:10 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:10.285071491+09:00" level=info msg="Node 43496003523c/192.168.136.10, joined gossip cluster"
Feb 06 04:58:10 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:10.285153640+09:00" level=info msg="Node 43496003523c change state NodeFailed --> NodeActive"
Feb 06 04:58:12 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:12.074027643+09:00" level=error msg="error while reading from stream" error="rpc error: code = Canceled desc = context canceled"
Feb 06 04:58:12 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:12.074095732+09:00" level=error msg="error while reading from stream" error="rpc error: code = Canceled desc = context canceled"
Feb 06 04:58:12 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:12.074147352+09:00" level=error msg="error while reading from stream" error="rpc error: code = Canceled desc = context canceled"
Feb 06 04:58:12 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:12.074230081+09:00" level=error msg="error while reading from stream" error="rpc error: code = Canceled desc = context canceled"
Feb 06 04:58:14 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:14+09:00" level=warning msg="file does not exist: /proc/sys/net/ipv6/conf/all/disable_ipv6 : stat /proc/sys/net/ipv6/conf/all/disable_ipv6: no such file or directory Has IPv6 been disabled in this node's kernel?"
Feb 06 04:58:15 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:15+09:00" level=warning msg="file does not exist: /proc/sys/net/ipv6/conf/all/disable_ipv6 : stat /proc/sys/net/ipv6/conf/all/disable_ipv6: no such file or directory Has IPv6 been disabled in this node's kernel?"
Feb 06 04:58:15 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:15+09:00" level=warning msg="file does not exist: /proc/sys/net/ipv6/conf/all/disable_ipv6 : stat /proc/sys/net/ipv6/conf/all/disable_ipv6: no such file or directory Has IPv6 been disabled in this node's kernel?"
Feb 06 04:58:16 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:16.539484690+09:00" level=info msg="ignoring event" container=bcad1203ec090bad1e3d64363930f2fcb8a51043dc695908695ffdd4eb0933f1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 04:58:16 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:16.570506528+09:00" level=warning msg="rmServiceBinding 48c15a6819341f57c4c22b037b412873a4ddb596e9a2cc33c1ca02cbd136d2fe possible transient state ok:false entries:0 set:false "
Feb 06 04:58:16 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:16.681751115+09:00" level=info msg="ignoring event" container=9bfc9bba695978a145cfa0b4f8f7e43f9ff96f0b14bf97af118c98aa28da55f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 04:58:16 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:16.693241748+09:00" level=warning msg="rmServiceBinding 5cbb22f0c0d32918e4e4a299207608b81afd7d035434913f35f07b869c6b8953 possible transient state ok:false entries:0 set:false "
Feb 06 04:58:16 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:16.711195304+09:00" level=info msg="ignoring event" container=0005895afd15def1aa0037e299f867637cb7064d4c2aa7b92d83dbf7416ff1c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 04:58:16 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:16.737463185+09:00" level=warning msg="rmServiceBinding a23b02259714bb72024ead4e13b650b908d47f4e8363839acd59882b72d9d87f possible transient state ok:false entries:0 set:false "
Feb 06 04:58:16 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:16.879846065+09:00" level=info msg="ignoring event" container=9df995066c12afefbb54e49d32578743acad767e2f1c91d3857e0c462635f3e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 04:58:17 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:17+09:00" level=warning msg="file does not exist: /proc/sys/net/ipv6/conf/all/disable_ipv6 : stat /proc/sys/net/ipv6/conf/all/disable_ipv6: no such file or directory Has IPv6 been disabled in this node's kernel?"
Feb 06 04:58:17 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:17.687399189+09:00" level=info msg="ignoring event" container=3fd417d811bd32af03723de33fe5146f56c2ed3ba36caedf11e8d699a2195945 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 04:58:17 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:17.689163249+09:00" level=warning msg="rmServiceBinding 9343adfc22e0e131a59ed4ece8a5f9d247a6b2bf48c7b4e6769c4fafa394ec5f possible transient state ok:false entries:0 set:false "
Feb 06 04:58:17 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:17.699362619+09:00" level=warning msg="rmServiceBinding ea678c9ebc72740fe510f81a3075a523d38815712893d81a3a3e1c82ef862d83 possible transient state ok:false entries:0 set:false "
Feb 06 04:58:19 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:19.378186052+09:00" level=info msg="ignoring event" container=f9104ce653ee6ca84062e3acec5bca21659bb6b3a40887b184b0b3721f862c7e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 04:58:19 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:19.382062869+09:00" level=warning msg="rmServiceBinding 8a4e5dd1a9640b55f850c21cc6724ab6c46cac23f6b63f1eef7873b7c247c1a5 possible transient state ok:false entries:0 set:false "
Feb 06 04:58:19 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:19.467585689+09:00" level=info msg="ignoring event" container=3e8e1fdc30c9dac23e824b4be3ff4e550d0303ca034eaf6e91c1dabfcdb77791 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 04:58:20 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:20.039417940+09:00" level=info msg="ignoring event" container=e09da9760a513247aa924519c38face9ec7dfbd9001348b8f62ef76099a47b42 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 04:58:20 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:20.046151358+09:00" level=warning msg="rmServiceBinding d6380d6a985ace4a5f33586ff98e90fabaf13cab325c151879093b22779cec8e possible transient state ok:false entries:0 set:false "
Feb 06 04:58:20 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:20.774997165+09:00" level=info msg="ignoring event" container=cffbd9a7c80d38699d05ce01725dae0a00431afb1da764aa114a387d3e76f62d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 04:58:21 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:21.081300121+09:00" level=warning msg="rmServiceBinding a2a45edb725f811c037888bc57ad4c8b3cae7ae970d069258f419c1ab4e4c62d possible transient state ok:false entries:0 set:false "
Feb 06 04:58:21 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:21+09:00" level=warning msg="file does not exist: /proc/sys/net/ipv6/conf/all/disable_ipv6 : stat /proc/sys/net/ipv6/conf/all/disable_ipv6: no such file or directory Has IPv6 been disabled in this node's kernel?"
Feb 06 04:58:21 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:21.845340448+09:00" level=warning msg="rmServiceBinding 994781b3ccb17ce8d30649a78719f7b2a05788ce30290b12a9eaaeac237e38d1 possible transient state ok:false entries:0 set:false "
Feb 06 04:58:22 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:22+09:00" level=warning msg="file does not exist: /proc/sys/net/ipv6/conf/all/disable_ipv6 : stat /proc/sys/net/ipv6/conf/all/disable_ipv6: no such file or directory Has IPv6 been disabled in this node's kernel?"
Feb 06 04:58:23 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:23+09:00" level=warning msg="file does not exist: /proc/sys/net/ipv6/conf/all/disable_ipv6 : stat /proc/sys/net/ipv6/conf/all/disable_ipv6: no such file or directory Has IPv6 been disabled in this node's kernel?"
Feb 06 04:58:23 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:23+09:00" level=warning msg="file does not exist: /proc/sys/net/ipv6/conf/all/disable_ipv6 : stat /proc/sys/net/ipv6/conf/all/disable_ipv6: no such file or directory Has IPv6 been disabled in this node's kernel?"
Feb 06 04:58:23 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:23.723545777+09:00" level=info msg="ignoring event" container=ad0faf8d0463a116d553a2d45ff06ad7086b38c1473ff58c233a1c4ed74dccc1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 04:58:23 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:23.848108917+09:00" level=info msg="ignoring event" container=fa1ad77cebf4f21e96d7da9e5c5c19c779bd3f5a6f6208d983534dbf36ff5d3b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 04:58:24 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:24.572199994+09:00" level=error msg="fatal task error" error="task: non-zero exit (1)" module=node/agent/taskmanager node.id=lo9isvanmlq88qq23ns6e1eur service.id=orau74u0ih9t0ty5bbwxykx7v task.id=q5czcqbq8bk44y0j0sbittt9w
Feb 06 04:58:24 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:24.825547445+09:00" level=error msg="fatal task error" error="task: non-zero exit (1)" module=node/agent/taskmanager node.id=lo9isvanmlq88qq23ns6e1eur service.id=u5f5n764qwwr1x8dexanrjmoi task.id=lmszbxi5p6tjuuxuoicjtlb9w
Feb 06 04:58:25 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:25.980337315+09:00" level=info msg="Container 7299ba281c4a0f9a45de66f58c43d2f08c2e8a4ba16d5cf0f327d710a48158c8 failed to exit within 10 seconds of signal 15 - using the force"
Feb 06 04:58:26 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:26.214436416+09:00" level=info msg="ignoring event" container=7299ba281c4a0f9a45de66f58c43d2f08c2e8a4ba16d5cf0f327d710a48158c8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 04:58:26 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:26.217913128+09:00" level=warning msg="rmServiceBinding 954f50fe9c02726006300504e5220dd17719b191a0ce2f652473e0b657dc0268 possible transient state ok:false entries:0 set:false "
Feb 06 04:58:26 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:26.258981933+09:00" level=warning msg="rmServiceBinding 2a78333e072935b501ca8ec917bd43d9d9b535746e11980b01834990de26ee0d possible transient state ok:false entries:0 set:false "
Feb 06 04:58:35 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:35+09:00" level=warning msg="file does not exist: /proc/sys/net/ipv6/conf/all/disable_ipv6 : stat /proc/sys/net/ipv6/conf/all/disable_ipv6: no such file or directory Has IPv6 been disabled in this node's kernel?"
Feb 06 04:58:35 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:35+09:00" level=warning msg="file does not exist: /proc/sys/net/ipv6/conf/all/disable_ipv6 : stat /proc/sys/net/ipv6/conf/all/disable_ipv6: no such file or directory Has IPv6 been disabled in this node's kernel?"
Feb 06 04:58:36 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:36.850587080+09:00" level=info msg="ignoring event" container=d25c66d7037a73a6d47df4401bf15a19369f15b7acc23d46298dd25ab4168e97 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 04:58:37 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:37.185075511+09:00" level=info msg="ignoring event" container=b8b00aaaf42005d80248a23bcb4e5e2a265f7d1dcfd959a88e5a09db9d558bbc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 04:58:37 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:37.576504943+09:00" level=error msg="fatal task error" error="task: non-zero exit (1)" module=node/agent/taskmanager node.id=lo9isvanmlq88qq23ns6e1eur service.id=u5f5n764qwwr1x8dexanrjmoi task.id=q0tmv7upvjwot49mwn58ennkd
Feb 06 04:58:37 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:37.728695953+09:00" level=error msg="fatal task error" error="task: non-zero exit (1)" module=node/agent/taskmanager node.id=lo9isvanmlq88qq23ns6e1eur service.id=orau74u0ih9t0ty5bbwxykx7v task.id=h4c02q3uqe465d29j5x8cdalv
Feb 06 04:58:48 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:48+09:00" level=warning msg="file does not exist: /proc/sys/net/ipv6/conf/all/disable_ipv6 : stat /proc/sys/net/ipv6/conf/all/disable_ipv6: no such file or directory Has IPv6 been disabled in this node's kernel?"
Feb 06 04:58:48 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:48+09:00" level=warning msg="file does not exist: /proc/sys/net/ipv6/conf/all/disable_ipv6 : stat /proc/sys/net/ipv6/conf/all/disable_ipv6: no such file or directory Has IPv6 been disabled in this node's kernel?"
Feb 06 04:58:49 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:49.100784471+09:00" level=info msg="ignoring event" container=35888cd3025d2c4aadf2f30c1c6cd9b5fc1cb6b8820267c24dd3d2ead0e16566 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 04:58:49 fs-store-repo dockerd[18696]: time="2022-02-06T04:58:49.499243772+09:00" level=error msg="fatal task error" error="task: non-zero exit (1)" module=node/agent/taskmanager node.id=lo9isvanmlq88qq23ns6e1eur service.id=u5f5n764qwwr1x8dexanrjmoi task.id=ma6q08supcs1gullyjy9dn2y3
Feb 06 04:59:00 fs-store-repo dockerd[18696]: time="2022-02-06T04:59:00+09:00" level=warning msg="file does not exist: /proc/sys/net/ipv6/conf/all/disable_ipv6 : stat /proc/sys/net/ipv6/conf/all/disable_ipv6: no such file or directory Has IPv6 been disabled in this node's kernel?"
Feb 06 04:59:00 fs-store-repo dockerd[18696]: time="2022-02-06T04:59:00.854311112+09:00" level=info msg="ignoring event" container=c9f5e96b7bd23c72ce0b34621ff76d16fc6d4fd33a40c5267d5a9b75357acb37 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 04:59:01 fs-store-repo dockerd[18696]: time="2022-02-06T04:59:01.248452496+09:00" level=error msg="fatal task error" error="task: non-zero exit (1)" module=node/agent/taskmanager node.id=lo9isvanmlq88qq23ns6e1eur service.id=u5f5n764qwwr1x8dexanrjmoi task.id=nny6zj8pox5b1fsobh8dt7279
Feb 06 04:59:11 fs-store-repo dockerd[18696]: time="2022-02-06T04:59:11+09:00" level=warning msg="file does not exist: /proc/sys/net/ipv6/conf/all/disable_ipv6 : stat /proc/sys/net/ipv6/conf/all/disable_ipv6: no such file or directory Has IPv6 been disabled in this node's kernel?"
Feb 06 04:59:12 fs-store-repo dockerd[18696]: time="2022-02-06T04:59:12.449535992+09:00" level=info msg="ignoring event" container=acec0d3c49f9a3a385ece3ad8486c3521ab180cbb6830689a1b56d8abb0899cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 06 04:59:13 fs-store-repo dockerd[18696]: time="2022-02-06T04:59:13.003956459+09:00" level=error msg="fatal task error" error="task: non-zero exit (1)" module=node/agent/taskmanager node.id=lo9isvanmlq88qq23ns6e1eur service.id=u5f5n764qwwr1x8dexanrjmoi task.id=k4c9ddaie5z8909fkof4y3n16
Feb 06 04:59:18 fs-store-repo dockerd[18696]: time="2022-02-06T04:59:18.692748023+09:00" level=info msg="NetworkDB stats fs-store-repo(b8a14f25127c) - netID:mfcm4gkvqocgr5s5wo3rix9y7 leaving:false netPeers:2 entries:36 Queue qLen:0 netMsg/s:0"
Feb 06 04:59:18 fs-store-repo dockerd[18696]: time="2022-02-06T04:59:18.692895699+09:00" level=info msg="NetworkDB stats fs-store-repo(b8a14f25127c) - netID:z9jn6yve4s5h49lw92anpjnhs leaving:false netPeers:3 entries:19 Queue qLen:0 netMsg/s:0"
...(omitted)...
04:58:00 ~ 04:58:10 (lines 5~24): heartbeats to certain memberlist members in the cluster ((b7a3bb9ca95d, 192.168.136.60) and (43496003523c, 192.168.136.10)) failed, so their state was changed (NodeActive -> NodeFailed) and they were added to the failed-nodes list.
04:58:10 ~ 04:58:10 (lines 25~29): the members (b7a3bb9ca95d, 43496003523c) rejoined the gossip cluster, so their state was changed back (NodeFailed -> NodeActive).
04:58:12 ~ 04:58:15 (lines 30~36): ??
04:58:16 ~ 04:59:13 (lines 37~83): logs related to the redeployment of service tasks on this member (b8a14f25127c).
Conclusion: for some reason (not yet identified), only this node (192.168.136.61, fs-store-repo) failed heartbeats with the other nodes several times and ended up forming a partitioned cluster (04:58:00 ~ 04:58:10). When heartbeats succeeded again it rejoined the cluster, and the service tasks were presumably redeployed to bring their state back up to date.
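The restart window above can be pulled out of the daemon logs mechanically. A minimal sketch: the heredoc below is a hypothetical two-line stand-in for the real `journalctl -u docker` output shown earlier, and the pipeline counts "fatal task error" lines per service to show which services were restarted in the window.

```shell
# Hypothetical sample standing in for real output of:
#   journalctl -u docker --since "04:57:00" --until "05:00:00"
cat > /tmp/dockerd.log <<'EOF'
Feb 06 04:58:24 fs-store-repo dockerd[18696]: level=error msg="fatal task error" error="task: non-zero exit (1)" service.id=orau74u0ih9t0ty5bbwxykx7v task.id=q5czcqbq8bk44y0j0sbittt9w
Feb 06 04:58:37 fs-store-repo dockerd[18696]: level=error msg="fatal task error" error="task: non-zero exit (1)" service.id=u5f5n764qwwr1x8dexanrjmoi task.id=q0tmv7upvjwot49mwn58ennkd
EOF

# Count fatal-task-error lines per service: one output line per
# affected service, prefixed with its restart count.
grep 'fatal task error' /tmp/dockerd.log \
  | grep -o 'service\.id=[a-z0-9]*' | sort | uniq -c
```

The same pipeline run against the full journal for 04:58~04:59 is what showed every service on fs-store-repo was hit, not just one.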
Resolution
The exact root cause was not identified, but searching the issues on GitHub turns up other reports of the same symptom.
In at least one of those cases the problem was resolved by adjusting the heartbeat period used for node health checks in the swarm cluster (the --dispatcher-heartbeat flag of docker swarm update). We decided to increase the period with this flag and monitor whether the problem recurs.
$ docker swarm update --dispatcher-heartbeat 20s
Conclusion: after raising --dispatcher-heartbeat in the swarm cluster from the default 5s to 20s as above, the issue did not recur for one week.
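There is no dedicated CLI field that echoes the new period back, but the Engine API's `GET /swarm` endpoint exposes it as `Spec.Dispatcher.HeartbeatPeriod`. A sketch of verifying the change, assuming it is run on a manager node with the default unix socket (the API path `v1.41` matches Engine 20.10):

```shell
# Verify the change took effect: GET /swarm reports
# Spec.Dispatcher.HeartbeatPeriod in nanoseconds, so 20s
# should come back as 20000000000. Run on a manager node.
curl -s --unix-socket /var/run/docker.sock http://localhost/v1.41/swarm \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["Spec"]["Dispatcher"]["HeartbeatPeriod"])'
```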
Etc
In the docker daemon logs, the swarm gossip lines look consul-related (likely because swarm's NetworkDB gossip is built on the same HashiCorp memberlist library that Consul uses); the internals of consul and swarm need a closer look for a more precise analysis.
Besides the docker daemon logs, the docker raft logs (swarm logs) should also be checked.
We know the situation was triggered by a heartbeat timeout, but we could not determine exactly why the timeout occurred; this needs to be investigated at the network level.
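As a first step for that network-level check, the gossip port itself can be probed from the affected node. A hypothetical sketch (the `probe_peers` helper and log destination are made up for illustration): swarm's memberlist gossip uses port 7946, and the peer IPs below are the other two nodes from `docker node ls`.

```shell
# Hypothetical probe: report any peer whose swarm gossip port
# (7946/tcp) is unreachable from this node. Could be run from cron
# around the incident window to catch intermittent connectivity loss.
probe_peers() {
  for p in "$@"; do
    # /dev/tcp is a bash feature, so invoke bash explicitly.
    if ! timeout 2 bash -c "exec 3<>/dev/tcp/$p/7946" 2>/dev/null; then
      echo "$(date -Is) gossip port unreachable: $p"
    fi
  done
}

probe_peers 192.168.136.60 192.168.136.10
```

A packet capture on the same port (`tcpdump -i any -n 'port 7946'`) during the window would be the next step if the probe shows intermittent failures.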