You'll have to wait and see.
Jan 29 09:00:26 n3 kernel: [259644.705892] CPU: 14 PID: 12511 Comm: kvm Tainted: P O L 4.4.35-1-pve #1
Looks like you kernel-panicked on a kernel that is known to have OOM issues, which is fixed (rather, the offending commits were reverted) in 4.4.35-2, which you apparently rebooted into...
My advice would be to NOT use Proxmox to learn how Ceph works. Go to Ceph's website and bring up a Ceph RBD cluster using the quick start. It's super easy. It'll help you realize that Proxmox just writes wrappers around Ceph commands, and it'll help you understand where you are failing.
Your of...
Sounds like you have a port filter somewhere between you and the Proxmox host.
Run an nmap from the source computer against port 8006 (nmap -p 8006 ${proxmox_host}) and see if it shows as open. At the same time, run a tcpdump on the Proxmox host and see if you can capture packets hitting port 8006...
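A rough sketch of the client-side check; PROXMOX_HOST is a placeholder, and the bash /dev/tcp fallback is my own addition for machines that don't have nmap installed:

```shell
#!/usr/bin/env bash
# PROXMOX_HOST is a placeholder -- substitute the real address of your host.
PROXMOX_HOST=${PROXMOX_HOST:-192.0.2.10}

# check_port: succeed if a TCP connect to host:port completes within 2s.
# Uses bash's built-in /dev/tcp, so it works even without nmap.
check_port() {
  local host=$1 port=$2
  timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

if check_port "$PROXMOX_HOST" 8006; then
  echo "8006 reachable"
else
  # If it looks filtered, capture on the Proxmox host side with:
  #   tcpdump -ni any tcp port 8006
  echo "8006 blocked or closed"
fi
```

If nmap reports the port open but tcpdump on the host sees nothing, the filter is somewhere in between.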
Check your BIOS for the hardware watchdog and turn it off. I had that ghost-reboot nonsense happen to me due to the hardware watchdog on Supermicro X8s and X9s. The ZFS angle is interesting, though: since I run all SSDs (Samsung 840s), I turn my ARC down to 2GB, as I'm more concerned with having memory...
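For reference, capping the ARC is done via a ZFS module option; the 2 GiB value below mirrors what I use and 2147483648 is just 2 * 1024^3 bytes:

```shell
# Persist the cap across reboots (Proxmox loads ZFS from the initramfs):
echo "options zfs zfs_arc_max=2147483648" >> /etc/modprobe.d/zfs.conf
update-initramfs -u

# Or apply it immediately without a reboot:
echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max
```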
The VM doesn't know that it's a ZFS vol; it just knows it's a block device, so just go about your normal business with mounting it...
I think you're over thinking it
So fstab is formatted like this:
If, say, you want to mount /dev/sdb, you'll want to use its UUID instead of /dev/sdb, as that...
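A minimal sketch, assuming the disk shows up as /dev/sdb1 inside the VM with an ext4 filesystem mounted at /mnt/data (both names are examples, not from the original post):

```shell
# Get the filesystem UUID (device name is an example):
blkid /dev/sdb1

# Then reference that UUID in /etc/fstab rather than the /dev/sdX name:
#   UUID=<uuid-from-blkid>   /mnt/data   ext4   defaults   0   2
# fields: device  mountpoint  fstype  options  dump  fsck-order
```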
Is there an option somewhere that I'm missing which allows you to move a selection of VMs off a node? Something where I can tick a checkbox for each VM and migrate them in one go instead of clicking on each one and moving them one by one?
Note:
- Yes, I know there is a "migrate all"...
Sounds like you're trying to pass the zpool filesystem through to the VM and attach it from within the VM? Akin to an iSCSI target/NFS share? I'm not quite sure it works like that in this implementation, though I could be mistaken.
When you create a zpool, add it to proxmox as...
tldr;
If you start a migration of a VM whose disks reside on a ZFS volume and, for whatever reason, cancel it mid-progress, Proxmox will not properly clean up after itself, resulting in failures of subsequent attempts to migrate that same VM. The solution is to manually remove the ZFS snapshots...
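A sketch of the manual cleanup; the pool, dataset, and snapshot names below are examples, not the exact names Proxmox will use on your system:

```shell
# List snapshots under the VM's dataset (rpool/data and vmid 100 are examples):
zfs list -t snapshot -r rpool/data

# Destroy the stale snapshot the canceled migration left behind:
zfs destroy rpool/data/vm-100-disk-0@__migration__
```

After the stale snapshot is gone, re-running the migration should proceed normally.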
Might want to check that you're not dropping packets somewhere or that the link is flapping. Also run the command manually on the node that keeps failing and see if you can get more verbose output.
What are the exact VM settings to get the VM to even boot?
I've literally copy-pasted this into the vm.conf and it won't take.
numa0: cpus=0;2;4;6,hostnodes=0,memory=4096,policy=bind
8 cores
4 vcpu
numa enabled
I've noticed toggling CPU and memory hotplug doesn't play well, so that's off...
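For context, the surrounding vm.conf stanza looks roughly like this (a sketch of the layout, not a known-working config; whether it boots is exactly the question):

```
cores: 8
sockets: 1
memory: 4096
numa: 1
numa0: cpus=0;2;4;6,hostnodes=0,memory=4096,policy=bind
```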
v4 gives you a "Migrate all" option.
But yes, I agree it lacks visibility in the clustering area. Luckily you can get around this by running Zabbix or Zenoss or any other monitoring platform.
Make options for all of them that you can toggle via a setting. This is great for capacity planning in large clusters. For example, on our current KVM environment, all hosts generate a total of ~2k write IOPS across all local disks, which helps me capacity-plan my Ceph storage cluster. It...
I'd include node communication stats (latency between nodes). I'll be running a 35+ node cluster, and from what I've seen so far, the only way to tell if the cluster is in a healthy state is pvecm status. I'd also include the services critical to cluster health, such as corosync...
Where do you get that from? Multicast is required for corosync operation. Multicast is not synonymous with IPv6. IPv6 does rely on multicast for NDP, a protocol that replaces ARP with multicast operations at the link layer. IPv4 also supports multicast, depending on your switching gear.
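A quick way to verify that multicast actually works between cluster nodes is omping; run it on every node at the same time (node names are placeholders):

```shell
# 600 one-second probes with a quiet summary; each node should report
# multicast responses from all the others with low loss:
omping -c 600 -i 1 -q node1 node2 node3
```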