3 Node Hyperconverged Proxmox cluster: Failure testing, Ceph performance, 10Gb mesh network

ElectronicsWizardry

1 year ago

35,458 views


Comments:

Craig MacPherson
Craig MacPherson - 07.11.2023 16:54

Since you asked, how about a reliable, VL intensive OLTP database using no-data-loss log shipping and very fast failover in a multi-node active/passive HA cluster config, with enterprise-class database products like Oracle and HANA? Hit it hard with every server hardware, OS, network, database, heartbeat, corruption, simulated WAN, DC environment, and disaster failure scenario you can come up with. Show that this product can compete in enterprise environments. Perhaps it can. Enjoy the challenge. I look forward to viewing more of your videos. Amazing talent you have; loved the chickens.

Marty Wise
Marty Wise - 25.10.2023 13:54

Thanks! Super vid! Searching for parts and planning construction of my own PVE cluster.

Eric Blom
Eric Blom - 22.10.2023 03:55

Great content. I'd really like to watch a deep dive on network setup that covers separate networks for Ceph (>=10Gb), VM access outside of the cluster, and an intra-cluster management network (<=1Gb).
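For reference, a minimal sketch of what such a split could look like in /etc/network/interfaces on each node; the interface names, addresses, and jumbo-frame MTU are assumptions, not anything from the video:

    auto lo
    iface lo inet loopback

    # <=1Gb management / corosync network (eno1 is an assumed interface name)
    auto eno1
    iface eno1 inet static
        address 192.168.1.11/24
        gateway 192.168.1.1

    # >=10Gb Ceph network, kept separate from the LAN (enp3s0f0 is assumed)
    auto enp3s0f0
    iface enp3s0f0 inet static
        address 10.10.10.11/24
        mtu 9000

    # Bridge for VM traffic leaving the cluster (enp3s0f1 is assumed)
    auto vmbr0
    iface vmbr0 inet manual
        bridge-ports enp3s0f1
        bridge-stp off
        bridge-fd 0

Ceph is then pinned to the fast network by setting public_network (and optionally cluster_network) to 10.10.10.0/24 in /etc/pve/ceph.conf.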

Left Blank
Left Blank - 17.10.2023 05:37

Yea, but has it eliminated side fumbling?

The Technology Studio
The Technology Studio - 23.09.2023 04:20

Could you post a network diagram so I can build the same setup? I am not sure how to do the mesh network.
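For reference, the topology is simply each node's two 10Gb ports cabled directly to the other two nodes with DACs, so no switch is involved:

            node1
           /     \
      DAC /       \ DAC
         /         \
      node2 ------- node3
              DAC

The Proxmox wiki's "Full Mesh Network for Ceph Server" article describes several ways to configure this; a minimal sketch of the broadcast-bond variant for one node follows, with interface names and addresses as assumptions (the other two nodes would use .2 and .3):

    # /etc/network/interfaces on node1
    auto enp1s0f0
    iface enp1s0f0 inet manual

    auto enp1s0f1
    iface enp1s0f1 inet manual

    auto bond0
    iface bond0 inet static
        address 10.15.15.1/24
        bond-slaves enp1s0f0 enp1s0f1
        bond-mode broadcast
        bond-miimon 100

Because each link is point-to-point and nothing is bridged between them, there is no L2 loop to worry about.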

TheOnlyEpsilonAlpha
TheOnlyEpsilonAlpha - 06.09.2023 00:22

Impressive. I wonder: you called up that WebUI over a direct IP, right? A reasonable addition, to make that fault-tolerant as well, would be to set up load balancing for the Web UI, so you would have a DNS name for your interface that routes to a functional node at all times.

Or do you already have something like a vIP running, which routes to a functional node?
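For reference, a floating IP for the WebUI can be done with keepalived/VRRP on all three nodes; a minimal sketch, with the instance name, interface, priorities, and addresses all being assumptions:

    # /etc/keepalived/keepalived.conf on node1
    # (the other two nodes use state BACKUP and lower priorities, e.g. 100 and 50)
    vrrp_instance PVE_UI {
        state MASTER
        interface vmbr0
        virtual_router_id 51
        priority 150
        advert_int 1
        virtual_ipaddress {
            192.168.1.10/24
        }
    }

Point a DNS name at 192.168.1.10 and the WebUI on port 8006 stays reachable as long as at least one node is up to hold the vIP.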

rm -r *
rm -r * - 27.08.2023 23:52

Imagine finding one of those in the far corner of a dark server room. Brrr. Creepy.

Klango Bra
Klango Bra - 26.08.2023 02:36

Thanks!

TheBlaser55
TheBlaser55 - 22.08.2023 03:29

WOW I have been looking at something like this for a while.

Weave
Weave - 04.08.2023 15:01

Nice. Usually I watch instructional videos at 1.25x or 1.5x -- yours is the first one I thought I was going to have to run at lower than 1x!

Allards
Allards - 03.08.2023 09:03

Thank you for this video; I had never heard of the Proxmox full mesh network Ceph feature before.

I recently bought three mini PCs for the purpose of building a Proxmox HA cluster. I was planning on getting a small 2.5 GbE switch for the storage.

Since the mini PCs have two 2.5 GbE ports, I will instead use them in a full mesh network and buy separate USB-C to Ethernet adapters for the LAN connectivity.

For my homelab such a setup is more than powerful enough.
Going to have a lot of fun (and frustration 😅) with an advanced Proxmox setup and a Kubernetes cluster on top of it.

Syav7
Syav7 - 28.07.2023 05:40

Very nice 👍

Pankaj Joshi
Pankaj Joshi - 27.07.2023 14:15

1. Does it increase speed?
2. How do I connect more than one? Please show.
I have 20 old dual-core PCs in my lab; how can I use these processors in parallel?

Thank you
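For reference, joining machines into a Proxmox cluster pools their management and lets VMs run on, and migrate between, the members; it does not combine 20 dual-core CPUs into one big processor. A minimal sketch of the standard commands, with the cluster name and IP as assumptions:

    # on the first node: create the cluster
    pvecm create labcluster

    # on each additional node: join using the IP of an existing member
    pvecm add 192.168.1.11

    # verify membership
    pvecm status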

Mickey Mishra
Mickey Mishra - 17.07.2023 14:34

I love it when old hardware gets used. Sure, it may take more power, and in my experience mixing and matching can be hard to do, but it's overall a better idea for uptime. The chances that 3 sets of gear from different product lines all fail at the same time? Yeah, not going to happen!

It's wonderful that more people are using DAC cables. I stepped away from home server stuff years ago, but it's nice seeing other folks keep the hobby alive.

Doug Jenkins
Doug Jenkins - 13.07.2023 04:47

Great video. A suggestion to test your setup is to simulate a power outage and see how the cluster responds. I have a 3-node Proxmox cluster running Ceph, and I am setting up an extra cheap PC to run NUT to manage the UPS. My goal is to simulate a power outage (unplug the UPS) and have the cluster gracefully shut down and restart when power is restored.
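For reference, a minimal sketch of the NUT client side of that plan, where each Proxmox node runs upsmon against the box that owns the UPS; the UPS name, host name, and credentials below are assumptions:

    # /etc/nut/nut.conf on each Proxmox node
    MODE=netclient

    # /etc/nut/upsmon.conf on each Proxmox node
    MONITOR myups@nut-host 1 upsmon secretpass secondary
    SHUTDOWNCMD "/sbin/shutdown -h +0"

When the UPS goes on battery and reaches low charge, upsmon runs SHUTDOWNCMD for a clean shutdown; automatic restart when power returns is typically handled by a "restore on AC power" BIOS setting.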

Joshua Maserow
Joshua Maserow - 08.07.2023 18:06

Well done dude. You leveled up your game. Glad I subscribed.

ZimTachyon
ZimTachyon - 26.06.2023 03:30

You are genuinely great at presenting this content. You first hinted at not using a switch which caught my attention right away. Then you showed the triangle configuration to answer how you anticipate it would work. Finally you asked and answered the same questions I had like how do you avoid loops. Excellent presentation and extremely valuable.

P. S.
P. S. - 12.06.2023 14:45

Is there a way to kill one node (for example, the power goes out) and still have a working VM without any interruption? Or does fencing happen first, and only after that does the VM restart and load up, so it usually takes around 1-2 minutes? I would like to know if there are any suggestions for getting zero downtime, since the data is on shared Ceph.
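For reference, with Proxmox HA an unplanned node loss always means fencing followed by a restart of the VM on a surviving node, so the roughly 1-2 minute gap is expected even with the disks on shared Ceph; zero downtime is only available for planned moves via live migration. A minimal sketch, with the VM ID and target node name as assumptions:

    # register VM 100 with the HA manager so it restarts after its node is fenced
    ha-manager add vm:100 --state started

    # for planned maintenance, live-migrate it off the node first (no downtime)
    qm migrate 100 pve2 --online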

Lieb Johnson
Lieb Johnson - 11.06.2023 16:08

Putting together a three-node Ceph cluster which needs to be power efficient, quiet, and have ~50 TB of available storage. Would love a parts recommendation.

Orange Fish
Orange Fish - 10.06.2023 05:27

You're talking a little fast to follow... but I get it... that means you're passionate about the subject!
