I have a fairly standard Kubernetes installation running as a single-node cluster on Ubuntu. I am trying to configure CoreDNS to resolve all of the internal services in my Kubernetes cluster plus certain external domains. So far I am only experimenting. I started by creating a busybox pod as described here: https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
Everything worked as described in the guide until I changed the Corefile. I see a couple of problems. I ran

kubectl -n kube-system edit configmap coredns

and replaced .:53 with cluster.local:53. After waiting, things looked promising: google.com resolution began failing (expected, since CoreDNS no longer serves that zone), while kubernetes.default.svc.cluster.local continued to succeed. However, kubernetes.default resolution began failing too. Why is that? There is still a search entry for svc.cluster.local in the busybox pod's /etc/resolv.conf; all that changed was the Corefile.

I then tried to add an additional stanza/block to the Corefile (again, by editing the ConfigMap). I added a simple block:
.:53 {
    log
}
It seems that the Corefile fails to compile, or something like that. The pods appear healthy and report no errors in their logs, but every request now hangs and fails.
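As background for the kubernetes.default question above: before a query ever reaches CoreDNS, the pod's stub resolver expands short names using the search list in /etc/resolv.conf. A minimal Python sketch of that glibc-style expansion (the expand_query helper is illustrative, and the search list shown is the one a pod in the default namespace typically gets, not pulled from your cluster):

```python
def expand_query(name, search_domains, ndots=5):
    """Candidate FQDNs a glibc-style stub resolver tries for `name`,
    driven by the resolv.conf `search` list and the `ndots` option."""
    if name.endswith("."):
        return [name]  # already absolute: no search-list expansion
    suffixed = [f"{name}.{d}." for d in search_domains]
    absolute = [name + "."]
    # names with fewer than `ndots` dots try the search suffixes first
    if name.count(".") < ndots:
        return suffixed + absolute
    return absolute + suffixed

# typical search list for a pod in the default namespace
search = ["default.svc.cluster.local", "svc.cluster.local", "cluster.local"]
for candidate in expand_query("kubernetes.default", search):
    print(candidate)
```

So kubernetes.default is retried as kubernetes.default.svc.cluster.local., which falls inside the cluster.local zone; if that also fails, the server itself is probably not answering at all rather than the search path being wrong.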
I also tried adding the log plugin, but it didn't work, either because a plugin only applies to queries that match its server block and the names didn't match, or because the Corefile is broken.
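For what it's worth, log, like every CoreDNS plugin, only takes effect inside the server block where it is declared. A sketch of a Corefile that would log both cluster and external traffic therefore needs the plugin in each block (a hypothetical layout for illustration, not your exact config):

```
cluster.local:53 {
    log
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
    }
}
.:53 {
    log
    forward . /etc/resolv.conf
    cache 30
}
```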
For transparency, here is my new Corefile:
cluster.local:53 {
    errors
    log
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        upstream
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
It looks like your Corefile got corrupted while it was being edited through the "kubectl edit ..." command. That may be the fault of the default text editor, even though editing it that way is perfectly valid.

I suggest saving the manifest below as coredns_cm.yaml and replacing the current ConfigMap with it (you can first dump the existing one with kubectl get -n kube-system cm/coredns -o yaml as a backup):

kubectl replace -n kube-system -f coredns_cm.yaml
# coredns_cm.yaml
apiVersion: v1
data:
  Corefile: |
    cluster.local:53 {
        log
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: coredns