Installing Zadig with minikube on a Mac M1

Minikube is already installed on a MacBook M1 Max, running Kubernetes 1.23.8. Following the quick-start installation guide for an existing Kubernetes cluster, the steps are as follows:
export IP=127.0.0.1
export PORT=30000

# Quick start:
curl -LO https://github.com/koderover/zadig/releases/download/v1.13.0/install_quickstart.sh
chmod +x ./install_quickstart.sh
./install_quickstart.sh

The aslan, cron, dind, warpdrive, and ingress pods fail to run and end up in CrashLoopBackOff.
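A quick way to see which pods are failing and why (a minimal sketch; the pod name is from this particular run and will differ on other installs):

# List all Zadig pods with status and restart counts
kubectl get pods -n zadig

# Show the events and last state of one failing pod
kubectl describe pod aslan-ccc968cbf-sd8ct -n zadig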

The error log from the aslan pod is as follows:
kubectl logs -f aslan-ccc968cbf-sd8ct -n zadig

Defaulted container "nsqd" out of: nsqd, aslan

runtime: failed to create new OS thread (have 2 already; errno=22)

fatal error: newosproc

runtime stack:

runtime.throw(0x884241, 0x9)

/usr/local/Cellar/go/1.8/libexec/src/runtime/panic.go:596 +0x95

runtime.newosproc(0xc42002c000, 0xc42003c000)

/usr/local/Cellar/go/1.8/libexec/src/runtime/os_linux.go:163 +0x18c

runtime.newm(0x89e450, 0x0)

/usr/local/Cellar/go/1.8/libexec/src/runtime/proc.go:1628 +0x137

runtime.main.func1()

/usr/local/Cellar/go/1.8/libexec/src/runtime/proc.go:126 +0x36

runtime.systemstack(0xa8f900)

/usr/local/Cellar/go/1.8/libexec/src/runtime/asm_amd64.s:327 +0x79

runtime.mstart()

/usr/local/Cellar/go/1.8/libexec/src/runtime/proc.go:1132

goroutine 1 [running]:

runtime.systemstack_switch()

/usr/local/Cellar/go/1.8/libexec/src/runtime/asm_amd64.s:281 fp=0xc420028788 sp=0xc420028780

runtime.main()

/usr/local/Cellar/go/1.8/libexec/src/runtime/proc.go:127 +0x6c fp=0xc4200287e0 sp=0xc420028788

runtime.goexit()

/usr/local/Cellar/go/1.8/libexec/src/runtime/asm_amd64.s:2197 +0x1 fp=0xc4200287e8 sp=0xc4200287e0
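Note that the trace above comes from the nsqd sidecar, since kubectl defaulted to it ("Defaulted container nsqd out of: nsqd, aslan"). The application container can be selected explicitly, for example:

# Follow logs of the aslan container instead of the defaulted nsqd sidecar
kubectl logs -f aslan-ccc968cbf-sd8ct -c aslan -n zadig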

The error log from the cron component is as follows:
kubectl logs -f cron-8486646f5d-76rtw -n zadig

Defaulted container "nsqd" out of: nsqd, cron

runtime: failed to create new OS thread (have 2 already; errno=22)

fatal error: newosproc

runtime stack:

runtime.throw(0x884241, 0x9)

/usr/local/Cellar/go/1.8/libexec/src/runtime/panic.go:596 +0x95

runtime.newosproc(0xc42002c000, 0xc42003c000)

/usr/local/Cellar/go/1.8/libexec/src/runtime/os_linux.go:163 +0x18c

runtime.newm(0x89e450, 0x0)

/usr/local/Cellar/go/1.8/libexec/src/runtime/proc.go:1628 +0x137

runtime.main.func1()

/usr/local/Cellar/go/1.8/libexec/src/runtime/proc.go:126 +0x36

runtime.systemstack(0xa8f900)

/usr/local/Cellar/go/1.8/libexec/src/runtime/asm_amd64.s:327 +0x79

runtime.mstart()

/usr/local/Cellar/go/1.8/libexec/src/runtime/proc.go:1132

goroutine 1 [running]:

runtime.systemstack_switch()

/usr/local/Cellar/go/1.8/libexec/src/runtime/asm_amd64.s:281 fp=0xc420028788 sp=0xc420028780

runtime.main()

/usr/local/Cellar/go/1.8/libexec/src/runtime/proc.go:127 +0x6c fp=0xc4200287e0 sp=0xc420028788

runtime.goexit()

/usr/local/Cellar/go/1.8/libexec/src/runtime/asm_amd64.s:2197 +0x1 fp=0xc4200287e8 sp=0xc4200287e0

The error log from dind is as follows:
kubectl logs -f dind-0 -n zadig

ip: can't find device 'ip_tables'

modprobe: can't change directory to '/lib/modules': No such file or directory

time="2022-07-13T10:24:14.318727055Z" level=info msg="Starting up"

time="2022-07-13T10:24:14.340618555Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"

time="2022-07-13T10:24:14.342389722Z" level=warning msg="Binding to IP address without --tlsverify is insecure and gives root access on this machine to everyone who has access to your network." host="tcp://0.0.0.0:2375"

time="2022-07-13T10:24:14.342665805Z" level=warning msg="Binding to an IP address, even on localhost, can also give access to scripts run in a browser. Be safe out there!" host="tcp://0.0.0.0:2375"

time="2022-07-13T10:24:15.343032500Z" level=warning msg="Binding to an IP address without --tlsverify is deprecated. Startup is intentionally being slowed down to show this message" host="tcp://0.0.0.0:2375"

time="2022-07-13T10:24:15.343241291Z" level=warning msg="Please consider generating tls certificates with client validation to prevent exposing unauthenticated root access to your network" host="tcp://0.0.0.0:2375"

time="2022-07-13T10:24:15.343795916Z" level=warning msg="You can override this by explicitly specifying '--tls=false' or '--tlsverify=false'" host="tcp://0.0.0.0:2375"

time="2022-07-13T10:24:15.343945875Z" level=warning msg="Support for listening on TCP without authentication or explicit intent to run without authentication will be removed in the next release" host="tcp://0.0.0.0:2375"

time="2022-07-13T10:24:30.362414632Z" level=info msg="libcontainerd: started new containerd process" pid=90

time="2022-07-13T10:24:30.364997215Z" level=info msg="parsed scheme: \"unix\"" module=grpc

time="2022-07-13T10:24:30.365220340Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc

time="2022-07-13T10:24:30.366073424Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc

time="2022-07-13T10:24:30.366383840Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc

time="2022-07-13T10:24:30Z" level=warning msg="deprecated version : 1, please switch to version 2"

time="2022-07-13T10:24:30.622392757Z" level=info msg="starting containerd" revision=3df54a852345ae127d1fa3092b95168e4a88e2f8 version=v1.5.11

time="2022-07-13T10:24:30.769701507Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1

time="2022-07-13T10:24:30.770827590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1

time="2022-07-13T10:24:30.832115632Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"ip: can't find device 'aufs'\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n\"): skip plugin" type=io.containerd.snapshotter.v1

time="2022-07-13T10:24:30.832754924Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1

time="2022-07-13T10:24:30.834020632Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1

time="2022-07-13T10:24:30.834193257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1

time="2022-07-13T10:24:30.834593174Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"

time="2022-07-13T10:24:30.834745507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1

time="2022-07-13T10:24:30.835164424Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1

time="2022-07-13T10:24:30.838367965Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1

time="2022-07-13T10:24:30.839155382Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1

time="2022-07-13T10:24:30.839286340Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1

time="2022-07-13T10:24:30.839780174Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"

time="2022-07-13T10:24:30.839970840Z" level=info msg="metadata content store policy set" policy=shared

time="2022-07-13T10:24:30.852987591Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1

time="2022-07-13T10:24:30.853311049Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1

time="2022-07-13T10:24:30.854730049Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1

time="2022-07-13T10:24:30.855950382Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1

time="2022-07-13T10:24:30.856115216Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1

time="2022-07-13T10:24:30.856293257Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1

time="2022-07-13T10:24:30.856482924Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1

time="2022-07-13T10:24:30.856677882Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1

time="2022-07-13T10:24:30.856829799Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1

time="2022-07-13T10:24:30.856967757Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1

time="2022-07-13T10:24:30.857192841Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1

time="2022-07-13T10:24:30.857945382Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2

time="2022-07-13T10:24:30.859215299Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1

time="2022-07-13T10:24:30.861816049Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1

time="2022-07-13T10:24:30.862346382Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1

time="2022-07-13T10:24:30.864160382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1

time="2022-07-13T10:24:30.864718882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1

time="2022-07-13T10:24:30.865307507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1

time="2022-07-13T10:24:30.865846257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1

time="2022-07-13T10:24:30.866477591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1

time="2022-07-13T10:24:30.866689757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1

time="2022-07-13T10:24:30.866830591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1

time="2022-07-13T10:24:30.866966882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1

time="2022-07-13T10:24:30.867088882Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1

time="2022-07-13T10:24:30.868796382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1

time="2022-07-13T10:24:30.869004924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1

time="2022-07-13T10:24:30.869151382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1

time="2022-07-13T10:24:30.869235757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1

time="2022-07-13T10:24:30.873150591Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock

time="2022-07-13T10:24:30.873428507Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc

time="2022-07-13T10:24:30.874543424Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock

time="2022-07-13T10:24:30.877454757Z" level=info msg="containerd successfully booted in 0.293065s"

time="2022-07-13T10:24:30.938329716Z" level=info msg="parsed scheme: \"unix\"" module=grpc

time="2022-07-13T10:24:30.938480091Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc

time="2022-07-13T10:24:30.938669257Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc

time="2022-07-13T10:24:30.938766257Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc

time="2022-07-13T10:24:30.946153674Z" level=info msg="parsed scheme: \"unix\"" module=grpc

time="2022-07-13T10:24:30.946246466Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc

time="2022-07-13T10:24:30.946319674Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc

time="2022-07-13T10:24:30.946381924Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc

time="2022-07-13T10:24:31.136632591Z" level=info msg="Loading containers: start."

time="2022-07-13T10:24:31.156003216Z" level=warning msg="Running iptables --wait -t nat -L -n failed with message: `iptables v1.8.7 (legacy): can't initialize iptables table `nat': iptables who? (do you need to insmod?)\nPerhaps iptables or your kernel needs to be upgraded.`, error: exit status 3"

time="2022-07-13T10:24:31.771967508Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby

time="2022-07-13T10:24:31.816175091Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd

time="2022-07-13T10:24:31.819194049Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby

failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables -t nat -N DOCKER: iptables v1.8.7 (legacy): can't initialize iptables table `nat': iptables who? (do you need to insmod?)

Perhaps iptables or your kernel needs to be upgraded.

(exit status 3)
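The dind failure is about the iptables nat table not being usable inside the pod, which usually points at kernel modules that are not available on the minikube node (note the modprobe error about /lib/modules above). A rough way to check this from the host (a sketch; output depends on the minikube driver in use):

# Open a shell on the minikube node and look for the nat table and ip_tables modules
minikube ssh
sudo iptables -t nat -L -n | head
lsmod | grep -E 'ip_tables|iptable_nat'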

The error log from nsqlookup is as follows:
kubectl logs -f nsqlookup-0 -n zadig

runtime: failed to create new OS thread (have 2 already; errno=22)

fatal error: newosproc

runtime stack:

runtime.throw(0x819c36, 0x9)

/usr/local/Cellar/go/1.8/libexec/src/runtime/panic.go:596 +0x95

runtime.newosproc(0xc42002a000, 0xc42003a000)

/usr/local/Cellar/go/1.8/libexec/src/runtime/os_linux.go:163 +0x18c

runtime.newm(0x830ef0, 0x0)

/usr/local/Cellar/go/1.8/libexec/src/runtime/proc.go:1628 +0x137

runtime.main.func1()

/usr/local/Cellar/go/1.8/libexec/src/runtime/proc.go:126 +0x36

runtime.systemstack(0x9fbe00)

/usr/local/Cellar/go/1.8/libexec/src/runtime/asm_amd64.s:327 +0x79

runtime.mstart()

/usr/local/Cellar/go/1.8/libexec/src/runtime/proc.go:1132

goroutine 1 [running]:

runtime.systemstack_switch()

/usr/local/Cellar/go/1.8/libexec/src/runtime/asm_amd64.s:281 fp=0xc420026788 sp=0xc420026780

runtime.main()

/usr/local/Cellar/go/1.8/libexec/src/runtime/proc.go:127 +0x6c fp=0xc4200267e0 sp=0xc420026788

runtime.goexit()

/usr/local/Cellar/go/1.8/libexec/src/runtime/asm_amd64.s:2197 +0x1 fp=0xc4200267e8 sp=0xc4200267e0

The error log from warpdrive is as follows:
kubectl logs -f warpdrive-9bd8bf57b-cp72d -n zadig

Defaulted container "nsqd" out of: nsqd, warpdrive

runtime: failed to create new OS thread (have 2 already; errno=22)

fatal error: newosproc

runtime stack:

runtime.throw(0x884241, 0x9)

/usr/local/Cellar/go/1.8/libexec/src/runtime/panic.go:596 +0x95

runtime.newosproc(0xc42002c000, 0xc42003c000)

/usr/local/Cellar/go/1.8/libexec/src/runtime/os_linux.go:163 +0x18c

runtime.newm(0x89e450, 0x0)

/usr/local/Cellar/go/1.8/libexec/src/runtime/proc.go:1628 +0x137

runtime.main.func1()

/usr/local/Cellar/go/1.8/libexec/src/runtime/proc.go:126 +0x36

runtime.systemstack(0xa8f900)

/usr/local/Cellar/go/1.8/libexec/src/runtime/asm_amd64.s:327 +0x79

runtime.mstart()

/usr/local/Cellar/go/1.8/libexec/src/runtime/proc.go:1132

goroutine 1 [running]:

runtime.systemstack_switch()

/usr/local/Cellar/go/1.8/libexec/src/runtime/asm_amd64.s:281 fp=0xc420028788 sp=0xc420028780

runtime.main()

/usr/local/Cellar/go/1.8/libexec/src/runtime/proc.go:127 +0x6c fp=0xc4200287e0 sp=0xc420028788

runtime.goexit()

/usr/local/Cellar/go/1.8/libexec/src/runtime/asm_amd64.s:2197 +0x1 fp=0xc4200287e8 sp=0xc4200287e0

Events for the aslan pod:
Events:
Type     Reason       Age                    From               Message
----     ------       ----                   ----               -------
Normal   Scheduled    23m                    default-scheduler  Successfully assigned zadig/aslan-ccc968cbf-sd8ct to minikube
Warning  FailedMount  23m                    kubelet            MountVolume.SetUp failed for volume "aes-key" : failed to sync secret cache: timed out waiting for the condition
Normal   Pulled       23m                    kubelet            Successfully pulled image "ccr.ccs.tencentyun.com/koderover-public/nsqio-nsq:v1.0.0-compat" in 804.758417ms
Normal   Pulling      23m                    kubelet            Pulling image "ccr.ccs.tencentyun.com/koderover-rc/aslan:1.13.0-amd64"
Normal   Pulled       23m                    kubelet            Successfully pulled image "ccr.ccs.tencentyun.com/koderover-rc/aslan:1.13.0-amd64" in 1.686635167s
Normal   Created      23m                    kubelet            Created container aslan
Normal   Started      23m                    kubelet            Started container aslan
Normal   Created      23m (x2 over 23m)      kubelet            Created container nsqd
Normal   Started      23m (x2 over 23m)      kubelet            Started container nsqd
Normal   Pulled       23m                    kubelet            Successfully pulled image "ccr.ccs.tencentyun.com/koderover-public/nsqio-nsq:v1.0.0-compat" in 6.800429961s
Normal   Pulling      23m (x3 over 23m)      kubelet            Pulling image "ccr.ccs.tencentyun.com/koderover-public/nsqio-nsq:v1.0.0-compat"
Warning  Unhealthy    23m (x9 over 23m)      kubelet            Readiness probe failed: Get "http://172.17.0.15:25000/api/health": dial tcp 172.17.0.15:25000: connect: connection refused
Warning  BackOff      3m39s (x124 over 23m)  kubelet            Back-off restarting failed container
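The probe failures and back-offs above can also be watched at the namespace level (a small sketch):

# Show events in the zadig namespace, newest last
kubectl get events -n zadig --sort-by=.metadata.creationTimestamp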

The M1 chip uses the ARM64 architecture, which Zadig does not support yet: the quick-start images are amd64-only builds (note the aslan:1.13.0-amd64 tag in the events above), so their containers cannot run properly on an ARM64 node.
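A quick way to confirm the mismatch (a sketch; it assumes Docker is installed locally, reuses the image reference from the events above, and the registry may require authentication):

# The minikube node reports its architecture (arm64 on an M1)
kubectl get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture

# Inspect which architecture the quick-start image actually provides
docker manifest inspect --verbose ccr.ccs.tencentyun.com/koderover-rc/aslan:1.13.0-amd64 | grep -i architecture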