Comments:
This setup is pure gold. Within a day I've understood how MetalLB is the load-balancer alternative for self-hosted / bare-metal Kubernetes deployments, and this playbook has saved me many, many expensive hours that would have been needed to get my test lab up. Can't thank you enough!
This is my situation: I have a few machines with Docker running. I can't install anything on them. Is there an option for me to have a Kubernetes cluster? Where should I start?
Inconceivable!! This worked really well right out of the box, as promised. I had a k3s 3-node Raspberry Pi cluster up and running in minutes, and I love the Ansible add-in. I was vaguely familiar with Ansible from an introduction about a year ago, but this took my understanding to a whole new level. Thank you very much!
So after some time, some aggravation, a bunch of dependencies, a lot of googling, and a bunch of extra steps, I finally got this up and running on my 3 Pi 4s.
Thanks for the Git repo! --- old message below ---
I tried this setup on 3 Pi 4s, and ran into an issue.
Original error message: No filter named 'split' found.
Come to find out, Stack Overflow says this is a missing-Ansible-feature issue.
So I reinstalled Ansible, etc., then got a Python import error.
I really wanted this to work, but it does not work with Pi 4s on Ubuntu. I will try Pi OS 64-bit, but I am unsure that will work either.
Thank you so much for your assistance in setting up k3s using Ansible. Could you possibly create an updated video on how to install Rancher along with Traefik + cert-manager? Additionally, could you demonstrate how to use this k3s cluster with a GitLab CI/CD pipeline? It would be of great help.
Although this setup was great and worked really fast, I don't think I want to use it right now. The reason is that I won't learn anything from it, so if something goes wrong or I'd like to tweak something, I wouldn't have any clue how to do it. For example, I noticed in the settings that there were entries for setting up BGP and an ASN; since I use pfSense, I figured that would be a great way of getting the routing for the k3s cluster working. But no matter what I tried, I could not get it working. Still, it has inspired me to actually start learning Kubernetes and build my first k3s cluster from scratch.
It would actually be great if you would break down, in a more in-depth video or series, the different parts you used to get this running and the options one could use to tweak things. Because it is really cool how fast the cluster comes up.
Thanks for the great video content, but please, does this setup work with VMware Workstation, and if it does, what parameters should be changed?
How do I expose the k3s API to the internet?
My issue was the wrong LAN network! Mine is a 10.x network, not a 192.x one. The three masters would work and join, but it would hang on joining the agents. I changed everything, including the VIP and Flannel IP ranges, to match my LAN and it worked like a charm! Also, I was using -K for the become password, but I tried it without that and it worked. Hope this helps anyone out who may have the same problem. Thanks for the work to get this going!!!
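The fix described above amounts to making the playbook's address variables match your actual LAN. A hypothetical group_vars fragment for a 10.0.0.0/24 network (variable names assumed from common k3s-ansible layouts; the addresses are placeholders, not values from the repo):

```yaml
# All of these must live on (or route within) your real LAN:
apiserver_endpoint: "10.0.0.222"            # kube-vip VIP for the k8s API
metal_lb_ip_range: "10.0.0.80-10.0.0.90"    # pool MetalLB hands out to Services
```

If the VIP or the MetalLB pool sits on a subnet your hosts are not actually on, the masters can still form a cluster, but agents (and you) cannot reach the control plane through the VIP, which matches the hang described above.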
So quick! Only spent a week to get it to work in a few minutes 😂😅😂
This truly is pure gold. The only thing I would add is forks for different hypervisors. Ansible is very friendly with all hypervisors and can create the k3s VMs automagically.
I use Christian Lempa's Ubuntu Packer and Proxmox Terraform setups along with this. Works like a charm, close to a cloud-like k8s deployment, though some steps are still manual and non-ephemeral, for example IP addressing.
Hardware Haven sent me
Thanks!
For MetalLB, do I need to do anything on my router side? I use a Dream Machine Pro, and I'm not able to hit the IP address from services.
Hardware Haven sent me to a real expert 👍
Just want to provide a testament to how good and useful Tim's work is. I messed up my K3s cluster pretty badly, so I decided to reset from scratch. 15 minutes and 2 commands ("ansible-playbook reset.yml -i inventory/my-cluster/hosts.ini" and then "ansible-playbook site.yml -i inventory/my-cluster/hosts.ini") and I have a fresh start. Another 15 minutes of "kubectl apply -f" to reinstate my deployment YAMLs and everything is back to its original working state.
Thanks a lot mate!
👍
Can I use the same node as both master and worker in your playbook?
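Worth noting (general k3s behavior, not specific to this playbook): k3s server nodes are schedulable by default, so a master already runs workloads unless you taint it. A hypothetical hosts.ini (group names as in common k3s-ansible inventories; IPs are placeholders) would therefore keep each host in only one group rather than listing it twice:

```ini
[master]
10.0.0.11

[node]
10.0.0.12
10.0.0.13

[k3s_cluster:children]
master
node
```

Whether the playbook tolerates the same IP appearing under both [master] and [node] is not confirmed here; the safer route is to rely on servers being schedulable.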
Love the video, Tim. I'm struggling to get this up and running on 5 Ubuntu 22.04 machines. I've noticed that the args in your video don't match what's in your repo. Any reason for the change, or are the original args listed anywhere? Wasn't sure if I should open an issue on GH or not.
Wow, I set up this cluster having almost no idea what to do with it.
After setting up the cluster I relied on various blogs to get services running. I'm now at a point where I've set up services using only docker documentation, docker-compose files, and Kompose.
My latest project has been delving into using BGP on MetalLB to be able to direct traffic from certain pods to my VPN.
Thank you so much Tim!!!
When you start hearing the background music you can't un-hear it, and then the rest of the video is lost ... bummer.
Really appreciate this video. I definitely need to research your blogs and understand them; I know what I want, but the order of execution eludes me. I've got an HA SQL cluster already (so I want to use that instead of etcd), and I do want Longhorn, Rancher, and Traefik 2. If I'm right, I can just add the datastore param to the global_vars and it should use that SQL DB, but how do I stop it from installing etcd? And I'd assume the best order of events would be the Ansible playbook, then Longhorn, Rancher, and Traefik 2 (in that order). As for cert-manager, I guess between Longhorn and Rancher?
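On the etcd part of the question: k3s only starts embedded etcd when no external datastore is configured, so pointing the servers at the SQL cluster is itself what prevents etcd from being used; there is no separate "disable etcd" step. A hypothetical group_vars fragment (the variable name is assumed from typical k3s-ansible setups that pass extra server args through to k3s, and the credentials/host are placeholders):

```yaml
extra_server_args: >-
  --datastore-endpoint='mysql://k3s:CHANGEME@tcp(sql-vip.example:3306)/k3s'
```

With this set, every server node talks to the external datastore instead of forming an etcd quorum.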
HA = high availability, presumably via automatic failover.
k3s = a lightweight Kubernetes distribution: a single small binary with fewer dependencies, not literally a 10x-faster k8s.
k8s = Kubernetes, the container orchestrator itself.
Ansible = a YAML-based automation tool for installing and configuring software. I hear Terraform is preferred for provisioning because it figures out execution order on its own, though the two solve different problems.
Sample group vars file not matching video. For example "kubelet-arg node-status-update-frequency=5s" is missing.
I don't really understand how this is HA. If one application doesn't run, the application is down and not accessible anymore, isn't it?
So, so awesome. Just tried this out and it works so well. Thanks for the supporting documentation as well!
Awesome video.
Can this setup also upgrade k3s cluster versions without downtime to apps? (I currently use system upgrade controller for that)
Hi Tim, is there an easy way to add more master nodes with etcd later on? Thanks so much.
Why use kube-vip or MetalLB instead of just Traefik everywhere? Doesn't k3s ship with Traefik by default?
I gotta take some time to debug this for my use case, as the kube-vip LB is not working on my ODROID-C2 (Armbian arm64) cluster. But thanks for the hard work putting it all together.
I just opened an issue on GitHub. I'm going with a default setup, so when the remaining two master nodes try to connect to the master that's running kube-vip, they error on missing CA certs: "starting kubernetes: preparing server: failed to get CA certs". Thanks!
I managed to install it with SSH; what an absolute head f*ck that was. After 5 complete VM and cloud-init removals, I finally got it up and running.
I have no expertise in using any of these programs, and my networking knowledge is little to none. Basic networking setups.
I must say, if you could provide a section for SSH setup in the tutorial write-up, that would be great for others with no expertise.
So thank you very much; I'm having great pleasure making your face do weird pause poses as I get through this video :)
Hi, first of all, thanks a lot for such a great tutorial! Can you please elaborate on why the netaddr dependency is needed? Where exactly is it used?
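For context (my own sketch, not confirmed against the repo): Ansible's `ipaddr`-style filters are backed by the `netaddr` Python package, which is why it tends to appear in playbook requirements; those filters validate addresses and derive ranges when templating things like a VIP or a load-balancer pool. The standard-library `ipaddress` module can illustrate the kind of check involved (the addresses below are placeholders):

```python
import ipaddress

# Sanity-check that a chosen VIP falls inside the cluster's LAN subnet --
# roughly the sort of validation the ipaddr filters perform at template time.
lan = ipaddress.ip_network("192.168.30.0/24")
vip = ipaddress.ip_address("192.168.30.222")
print(vip in lan)  # → True
```

If a templated address fell outside the subnet, a check like this is what would surface the mistake before any node is touched.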
Hi Tim, I discovered your channel a year ago, have been following it since, and have basically watched all your videos. So well explained every time!
If I were to try your Ansible script to test things out at a small scale in a first time, would the script work if I were to put the same few IP addresses both as Masters and as Workers?
(I know it's not best).
Also, one thing I always notice in your video is how many IP addresses you have, more precisely all the different subnets you use. It would be very useful to get a video on the segmentation logic you use. Because in the case of deploying this script, I really don't have a clue on which IP (and ranges) to use so that it does not interfere with other devices, VMs, services, etc. and so that I don't have to redo the whole deployment in the future.
Thank you.
So Rancher is not needed in this setup anymore?
Will you do a follow-up video on how to set up Rancher on this freshly deployed k3s cluster without the integrated Traefik? Your "High Availability Rancher on Kubernetes" video misses some details, as far as I can tell 🙂
Any thoughts on why I would be getting "unable to create VM 8000 - VM 8000 already exists on node 'pve'"? I don't have a VM 8000, and never have.
I'm trying to deploy this cluster on a testing server, but when I start to install Rancher I don't have a DNS name to assign. How can I do this with this infrastructure?
What a newbie. If someone said that this is magic, please do "LEARN MORE".
Helpful video, Tim. Thank you so much. I have a question: is this prod-ready or for testing only? Should I adjust some params for a prod deployment?
Thanks for the vid, and appreciate you publishing your repo! :) Very helpful, and I was able to use it along with the k3s-ansible upstream, Lempa's vid, and the k3s docs to pull it all apart, figure it out, and get my own k3s setup codified. However, I skipped all the MetalLB parts, as I found it trivial to get kube-vip to work as the load balancer for both the control plane and services. Curious as to what you got caught up on?
It's great! I have a question about site.yml: what is the purpose of the raspberrypi role in there? I install k3s on CentOS, so should I remove it?
ОтветитьWelp, you've done it now, Tim. Great job!
# TIL
I'm trying to get this to work, but the VIP never comes up, and the step that waits for all the servers to join the cluster times out because it ends up trying to access the control plane via the VIP. Oh, and my VMs are all based off the focal-server-cloudimg-amd64 image, whose partition and filesystem I grew by 32 GiB.
This is great content, but could you tell me how I could create a k3s cluster with the Cilium CNI instead of Flannel using this setup?
Nice content, man, thank you! Although the constant cutting in your speech almost makes it seem like you are lagging...
I keep getting an error for my masters when attempting to provision the cluster, something about 'no usable temporary directory found in /tmp, /var/tmp, /usr/tmp or my home/user directory'. The directories exist; not sure what it means by "not usable". I tried pasting the full error here, but my post keeps getting deleted.
Any idea what might be causing this and how to resolve it?