ok so cilium's hubble tool is genuinely useful, even the non-enterprise version
it has been a huge help for troubleshooting network policies
but can we maybe talk about how it renders the connections?
like, what in the heck? #Kubernetes #Cilium
Nico Vibert, my beloved,,,,
is Cilium native routing mode supposed to publish pod IPs on the interfaces in the host network namespace?
That would make sense to me, since the whole point of native routing is to lean on the network's own layer 2/3 routing.
Or am I required to turn on SNAT using the IP masquerading feature?
Pods are getting valid IPv6 GUAs in the LAN/host subnet, but of course nothing can return to them...
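For context, this is roughly what I'm setting (Helm values; the key names are from memory and shift between Cilium versions, so it's a sketch rather than a known-good config):

```yaml
# Sketch of Cilium Helm values for IPv6 native routing (unverified key names)
ipv6:
  enabled: true
routingMode: native                       # older charts used: tunnel: disabled
ipv6NativeRoutingCIDR: "2001:db8:1::/64"  # example prefix; destinations inside it are not masqueraded
enableIPv6Masquerade: false               # the SNAT question above
autoDirectNodeRoutes: true                # nodes install routes to each other's pod CIDRs when on the same L2
```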
Kubernetes DNS question:
Couldn't the CNI actually manage DNS instead of CoreDNS?
I mean, in-cluster records would potentially be a lot of data to push into eBPF, but the CNI is already distributing routing state anyway.
It could also enforce the upstream resolvers for all pods by DNATing their DNS traffic - assuming DNSSEC remains a niche concern.
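To be fair, Cilium already interposes on DNS when you use DNS-aware policy rules - the agent runs a transparent DNS proxy for them. Rough sketch of what that looks like (from memory, so check the docs before trusting the field names):

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: dns-aware-egress
spec:
  endpointSelector:
    matchLabels:
      app: demo            # hypothetical workload
  egress:
    # Allow DNS to kube-dns and let Cilium's DNS proxy observe the answers
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchPattern: "*"
    # Then only allow egress to names that resolved under this pattern
    - toFQDNs:
        - matchPattern: "*.example.com"
```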
...in my defence, I never said my ideas were good...
Ok, serious question, what do I replace tailscale with? Can I run ipsec in a k8s pod?
I know #cilium can do multicluster but it wasn't fun. Managing another CA and making sure policies are multicluster-aware sucks. And I've hit a few issues where I had to restart the cilium node agent until it'd finally catch up (was a while ago, so maybe a non-issue nowadays).
What I want is a k8s service in cluster A that resolves to a k8s pod in cluster B. It's almost HTTP-only traffic, but not quite.
I guess I could get away with setting up an LB pool in both clusters and running a host-to-host wireguard or ipsec tunnel to bridge those LB pools. Still not ideal, as it'd be harder to firewall everything off.
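(For the record, the Cluster Mesh answer to that is a "global" service: the same Service name and namespace in both clusters plus an annotation. Roughly this, from memory:)

```yaml
# Sketch: deployed with the same name/namespace in both clusters; Cilium
# Cluster Mesh then load-balances across the endpoints of both.
apiVersion: v1
kind: Service
metadata:
  name: my-app             # hypothetical example service
  namespace: prod
  annotations:
    service.cilium.io/global: "true"
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```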
A race condition in the #Cilium agent can cause the agent to ignore labels that should be applied to a node. This could in turn cause CiliumClusterwideNetworkPolicies intended for nodes with the ignored label to not apply, leading to policy bypass.
#ebpf
https://github.com/cilium/cilium/security/advisories/GHSA-q7w8-72mr-vpgw
Using Cilium Hubble Exporter to log blocked egress traffic on Azure Kubernetes Service https://www.danielstechblog.io/using-cilium-hubble-exporter-to-log-blocked-egress-traffic-on-azure-kubernetes-service/ #Azure #AKS #AzureKubernetesService #Kubernetes #Cilium
Up early studying the Cilium docs. Hopefully some great wisdom will be revealed about IPv6 native routing and whatever is holding me up
I also ended up replacing my Ubiquiti routers with #mikrotik. I want working load balancing in this cluster, so I tried out #metallb. However, I couldn't get it to work in L2 mode with either #calico or #cilium. So, I've overhauled my home network to support BGP in order to have more options. This project keeps getting bigger.
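The Cilium side of the BGP setup is roughly a CiliumBGPPeeringPolicy like the one below (this was the v2alpha1 CRD at the time; the API has since been reworked and the field names here are from memory, so treat it as a sketch):

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: homelab-bgp
spec:
  nodeSelector:
    matchLabels:
      bgp: enabled                        # only nodes carrying this label peer
  virtualRouters:
    - localASN: 64512
      exportPodCIDR: false                # advertise LoadBalancer IPs, not pod CIDRs
      serviceSelector:                    # "match everything" trick for services
        matchExpressions:
          - {key: ignore, operator: NotIn, values: ["nothing"]}
      neighbors:
        - peerAddress: "192.168.88.1/32"  # hypothetical MikroTik address
          peerASN: 64513
```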
Note to self: when converting a Kubernetes cluster running Cilium as the CNI from MetalLB to Cilium's new L2 announcements, you need to tweak a few settings in your Cilium installation: in particular, enable Cilium as a kube-proxy replacement (if you are not already doing so) and enable the L2 announcements feature. Which also means kube-proxy itself needs to be disabled in k3s.
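Roughly what that looks like, from memory (Helm values for the chart plus the two CRDs; key names shift a bit between Cilium versions, and k3s itself gets started with --disable-kube-proxy):

```yaml
# Sketch of Cilium Helm values (not verified against a specific chart version)
kubeProxyReplacement: true     # older charts expect "strict" here instead of a bool
k8sServiceHost: 192.168.0.10   # hypothetical API server address, needed once kube-proxy is gone
k8sServicePort: 6443
l2announcements:
  enabled: true
```

```yaml
# IP pool for LoadBalancer services plus the L2 announcement policy
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: homelab-pool
spec:
  blocks:                       # earlier versions call this field "cidrs"
    - cidr: 192.168.0.240/28
---
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: announce-loadbalancers
spec:
  loadBalancerIPs: true
  interfaces:
    - ^eth0$                    # the Raspberry Pi's LAN interface
```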
In other words: k3s on my Raspi4 is now providing a loadbalancer to blocky using Cilium's L2 announcements...