#cilium

James Wynn 🧐
I'm tempted to break everything by switching my cluster to #cilium CNI today. Surely it couldn't be as disastrous as when I deployed #multus, right?

#Kubernetes #selfhosted #homelab #networking

Blint
#Question

What #CNI do you recommend for #HomeLab?

I have experience with:
- #Istio
- #Calico
- #Cilium *
- #Flannel *

*: I didn't configure them manually; they came with #k3s and I left them on defaults.

I finally managed to create a new cluster with #Talos, and want to configure a CNI for it. 😊

JulianCalaby
Cursed homelab update:

Server #2 is very sick with two semi-dead HDDs, which is stretching its RAID6 system disk to the limit. This has also taken out the OSD on that server, so everything is a little broken, with Server #1 taking the brunt of the extra load.

Disk "F" died, by which I mean the SAS HBA gave up on it, locking up a bunch of threads in the kernel and userspace, but a reboot brought it back.

Then disk "H" dropped off the face of the earth, which means it's probably power, as that's what happened a couple of times when the homelab's hardware was more cursed.

The solution is to buy ~4-5 NAS-class HDDs, replace disks "F" and "H", throw one in the spare bay in server #3, and use the last one as "bulk" storage on my gaming machine, freeing up 1-2 HDDs which can then get recycled into servers #1 and #2.

Beyond that, Server #2 has been running perfectly for the past month, which clearly means that the future is using low-power ITX motherboards as servers. (I really need some time to flesh out an idea for a poor man's blade server which is perilously close to this.)

I've also configured OPNsense and Cilium to do some BGP magic and conjure up a private IP range (technically ~5+) for services in Kubernetes.

So now I just need to build a bridge to go from Terraform to Ansible so I can spin up the container in Terraform, then configure the VMs and stuff to use it without Ansible needing to know the details in advance. (It's currently going the other way: Ansible knows the details, tells Terraform, then Terraform sets up the Kubernetes services to match.)

#homelab #kubernetes #opnsense #cilium #ansible #terraform #CursedHomelab

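(For context on the "BGP magic": Cilium's BGP control plane can announce LoadBalancer IPs to an upstream router such as OPNsense. A rough sketch using the older CiliumBGPPeeringPolicy API; every name, ASN and address below is a placeholder, and the Helm option bgpControlPlane.enabled=true has to be set on the Cilium side.)

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: opnsense-peering            # placeholder name
spec:
  nodeSelector:
    matchLabels:
      kubernetes.io/os: linux       # peer from every Linux node
  virtualRouters:
  - localASN: 64512                 # placeholder private ASN for the cluster
    exportPodCIDR: false            # only announce service IPs, not pod ranges
    serviceSelector:                # "match every service" trick from the Cilium docs
      matchExpressions:
      - key: somekey
        operator: NotIn
        values: ["never-used-value"]
    neighbors:
    - peerAddress: "192.168.1.1/32" # placeholder: the OPNsense router
      peerASN: 64513                # placeholder ASN on the router side
```
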
Xavier
Hubble is up and running! Great way to visualize traffic in your cluster.
#DevOps #k8s #learnk8s #cncf #100DaysOfCode #100DaysOfDevOps #kubernetes #cilium

Rafał Rudewicz
Newsletter #177 has landed in inboxes! 📬

Inside:

🌐 BUM in ACI
🔍 IP vs CLNP
📘 The (unofficial) CCNP-SP handbook

Check out the Fortigate firewall ranking and find out why Cilium is the #1 CNI for K8s! #newsletter #IT #Fortigate #Cilium #K8S

Is Cilium's native routing mode supposed to publish pod IPs on the interfaces in the host network namespace?

That would make sense to me, since it would just be using the network's native layer 2/3 routing.

Or am I required to turn on SNAT using the IP masquerading feature?

Pods are getting valid IPv6 GUAs in the LAN/host subnet, but of course nothing can return to them...
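
For reference, native routing on its own doesn't make pod IPs reachable from the LAN: the pod CIDR still has to be routed to the nodes somehow (static routes on the router, BGP, or falling back to masquerading). A sketch of the Helm values involved, with a placeholder prefix and option names as in recent chart versions:

```yaml
# values.yaml sketch -- verify option names against your Cilium chart version
routingMode: native
autoDirectNodeRoutes: true        # nodes on the same L2 install routes to each other's pod CIDRs
ipv6:
  enabled: true
ipv6NativeRoutingCIDR: "2001:db8:1::/64"   # placeholder; destinations inside it are not masqueraded
enableIPv6Masquerade: false                # keep pod GUAs visible instead of SNATing to the node address
```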

Kubernetes DNS question:

Couldn't the CNI actually manage DNS instead of CoreDNS?

I mean it'd be potentially a lot of data to throw at eBPF for in-cluster records. It's already distributing information for routing.

It could also enforce upstream resolvers for all pods by using DNAT, assuming DNSSEC remains a niche concern.

...in my defence, I never said my ideas were good...
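
Cilium does get partway there already: once an L7 DNS rule applies to a pod, the agent's transparent DNS proxy intercepts its lookups (and still forwards them to CoreDNS), which is how FQDN policies and DNS visibility work. A sketch along the lines of the upstream docs, with placeholder names and a catch-all selector:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: intercept-dns               # placeholder name
spec:
  endpointSelector: {}              # every pod in this namespace
  egress:
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"         # allow all names; this rule is what pulls DNS through the proxy
  - toEntities:
    - all                           # keep other egress open; the DNS rule above is just for visibility
```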

OK, serious question: what do I replace Tailscale with? Can I run IPsec in a k8s pod?

I know #cilium can do multicluster, but it wasn't fun. Managing another CA and making sure policies are multicluster-aware sucks. And I've hit a few issues where I had to restart the Cilium node agent until it'd finally catch up (that was a while ago, so maybe it's a non-issue nowadays).

What I want is a k8s service in cluster A that resolves to a k8s pod in cluster B. It's almost HTTP-only, but not quite.

I guess I could get away with setting up an LB pool in both clusters and doing a host-to-host WireGuard or IPsec tunnel to bridge those LB pools over. Still not ideal, as it'd be harder to firewall everything off.
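
For the record, the Cluster Mesh answer to "a service in cluster A that resolves to pods in cluster B" is a Service declared with the same name and namespace in both clusters and marked global; it still drags in the mesh setup, the shared CA, and the policy caveats above. A sketch with placeholder names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-api                          # placeholder; must be identical in both clusters
  namespace: default
  annotations:
    service.cilium.io/global: "true"    # back this service with endpoints from all meshed clusters
    service.cilium.io/affinity: "local" # prefer local backends, fail over to the remote cluster
spec:
  selector:
    app: my-api                         # placeholder label
  ports:
  - port: 80
    targetPort: 8080
```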

Note to self: When converting a Kubernetes cluster with Cilium as its CNI to replace MetalLB with Cilium's new L2 announcements, you need to tweak some settings in your Cilium installation: in particular, enabling Cilium to act as a kube-proxy replacement (if you aren't already doing so) and enabling the L2 announcements, which in turn means kube-proxy needs to be disabled in k3s.

In other words: k3s on my Raspi4 is now providing a load balancer for blocky using Cilium's L2 announcements...
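
A sketch of the pieces that conversion involves; the names and the IP range are placeholders, the pool field is blocks in recent Cilium versions (cidrs in older ones), and on the k3s side the server needs --disable-kube-proxy (plus, typically, --flannel-backend=none and --disable=servicelb when Cilium handles networking and load balancing):

```yaml
# Helm values for the Cilium install (option names per recent chart versions)
kubeProxyReplacement: true
k8sServiceHost: 192.168.1.10      # placeholder: API server address, needed once kube-proxy is gone
k8sServicePort: 6443
l2announcements:
  enabled: true
---
# Which services get announced, and on which interfaces
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: default-l2                # placeholder name
spec:
  loadBalancerIPs: true
  interfaces:
  - ^eth[0-9]+                    # placeholder: match the node's LAN interface
---
# Replacement for MetalLB's address pool
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: lan-pool                  # placeholder name
spec:
  blocks:
  - cidr: 192.168.1.240/28        # placeholder range from the LAN
```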