Search results

  1. Mouse not working on console (novnc)?

    ahhh haaaaaaaaaa!!! Works!
  2. Kernel Panic

    You'll have to wait and see. Jan 29 09:00:26 n3 kernel: [259644.705892] CPU: 14 PID: 12511 Comm: kvm Tainted: P O L 4.4.35-1-pve #1. Looks like your kernel panicked on a kernel version known to have OOM issues; the fix (really a revert of the offending commits) landed in 4.4.35-2, which you apparently rebooted to...
  3. Understanding Ceph

    My advice would be to NOT use Proxmox to learn how Ceph works. Go to Ceph's website and bring up a Ceph RBD cluster using the quick start. It's super easy. It'll help you realize that Proxmox just writes wrappers around Ceph commands, and help you understand where you are failing. Your of...
  4. Fresh 4.4 install -- Can SSH, no Web interface (

    Sounds like you have a port filter going on between you and the Proxmox host. Run an nmap from the source computer against port 8006 (nmap -p 8006 ${proxmox_host}) and see if it shows as open. At the same time, run a tcpdump on the Proxmox host and see if you can capture packets hitting port 8006...
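
    The two checks described above might look like this; the host address 192.0.2.10 is a placeholder, not a value from the thread:

    ```shell
    # From the client machine: probe the Proxmox web UI port (8006).
    # 192.0.2.10 is a placeholder; substitute your node's address.
    nmap -p 8006 192.0.2.10

    # On the Proxmox host at the same time: watch for packets on 8006.
    # -n disables DNS lookups, -i any listens on every interface.
    tcpdump -n -i any tcp port 8006
    ```

    If nmap reports the port as filtered while tcpdump sees no traffic arriving, something in between is dropping the packets.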
  5. Random Restarting

    Check your BIOS for the hardware watchdog and turn it off. I had that ghost-reboot nonsense happen to me due to the hardware watchdog on Supermicro X8s and X9s. Though the ZFS stuff is interesting: since I run all SSDs (Samsung 840s), I turn my ARC down to 2 GB, as I'm more concerned with having memory...
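
    Capping the ARC the way the post describes is usually done through the zfs kernel module options; a sketch assuming a 2 GiB limit (2 * 1024^3 bytes):

    ```text
    # /etc/modprobe.d/zfs.conf -- limit the ZFS ARC to 2 GiB
    options zfs zfs_arc_max=2147483648
    ```

    On Proxmox the new limit typically takes effect after running update-initramfs -u and rebooting.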
  6. Mount zfs pool on node from linux vm for data storage

    The VM doesn't know that it's a ZFS vol; it just knows it's a block device, so just go about your normal business with mounting it... I think you're overthinking it. So fstab is formatted like this: if you say you want to mount /dev/sdb, you'll want to use its UUID instead of /dev/sdb, as that...
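
    The fstab layout the post refers to, with a made-up UUID (run blkid /dev/sdb inside the guest to find the real one):

    ```text
    # /etc/fstab -- fields: device  mountpoint  type  options  dump  pass
    UUID=0a1b2c3d-1111-2222-3333-444455556666  /mnt/data  ext4  defaults  0  2
    ```

    The UUID form keeps the mount stable even if the kernel renames the device, e.g. /dev/sdb becomes /dev/sdc after a reboot.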
  7. Batch migration??

    Is there an option somewhere that I'm missing which allows you to move a selection of VMs off a node? Something where I can tick a checkbox and migrate those VMs in one go, instead of clicking on each one and moving them one by one? Note: yes, I know there is a "migrate all"...
  8. Mount zfs pool on node from linux vm for data storage

    Sounds like you're trying to pass the zpool filesystem through to the VM and attach it from within the VM, akin to an iSCSI target or NFS share? Not quite sure it works like that in this implementation, though I could be mistaken. When you create a zpool, add it to Proxmox as...
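
    The suggested approach, sketched with placeholder names (pool tank, storage ID tank-storage, VM ID 100 are all assumptions):

    ```shell
    # Register an existing zpool with Proxmox as a storage backend.
    pvesm add zfspool tank-storage -pool tank

    # Give VM 100 a new 32 GiB virtual disk backed by that storage;
    # the guest just sees an ordinary block device.
    qm set 100 -scsi1 tank-storage:32
    ```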
  9. Noticed a proxmox-zfs bug

    tl;dr: If you start a migration of a VM whose disks reside on a ZFS volume and cancel it mid-progress for whatever reason, Proxmox will not clean up after itself properly, and subsequent attempts to migrate that same VM will fail. The solution is to manually remove the ZFS snapshots...
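
    The manual cleanup might look like this; the pool name, VM ID, and snapshot name below are placeholders, not values from the report:

    ```shell
    # List snapshots left behind by the aborted migration.
    zfs list -t snapshot -o name,used | grep vm-100

    # Destroy the stale snapshot so the next migration can start cleanly
    # (use the exact name printed by the listing above).
    zfs destroy rpool/data/vm-100-disk-1@stale-migration-snap
    ```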
  10. NFS Share problem

    Might want to check that you're not dropping packets somewhere or that the link is flapping. Also run the command manually on the node that keeps failing and see if you can get more verbose output.
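
    Some hands-on checks along those lines, with a placeholder server name:

    ```shell
    # Ask the NFS server which exports it offers.
    showmount -e nfs.example.com

    # Proxmox's own scan of the same server, run from the failing node.
    pvesm nfsscan nfs.example.com

    # Look for loss or latency spikes on the path to the server.
    ping -c 100 nfs.example.com | tail -n 2
    ```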
  11. [SOLVED] How to specify numa cpus in vm.conf?

    What are the exact VM settings to get the VM to even boot? I've literally copy-pasted this into the vm.conf and it won't take: numa0: cpus=0;2;4;6,hostnodes=0,memory=4096,policy=bind (8 cores, 4 vCPUs, NUMA enabled). I've noticed that toggling CPU and memory hotplug doesn't play well, so that's off...
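
    One internally consistent set of vm.conf lines (an assumption, not a verified answer from the thread): the cpus= list references guest cores, so a line naming core 6 needs the VM to define at least 7 cores, and the NUMA node's memory must fit within the VM's memory.

    ```text
    # /etc/pve/qemu-server/<vmid>.conf (sketch)
    cores: 8
    sockets: 1
    memory: 4096
    numa: 1
    numa0: cpus=0;2;4;6,hostnodes=0,memory=4096,policy=bind
    ```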
  12. Preview, Feedback wanted - Cluster Dashboard

    v4 gives you a "migrate all" option. But yes, I agree it lacks visibility in the clustering area. Luckily you can get around this by running Zabbix or Zenoss or any other monitoring platform.
  13. Preview, Feedback wanted - Cluster Dashboard

    Make options for all of them that you can toggle via a setting. This is great for capacity-planning large clusters. For example, on our current KVM environment, all hosts together generate ~2k write IOPS across all local disks. This helps me capacity-plan my Ceph storage cluster. It...
  14. Preview, Feedback wanted - Cluster Dashboard

    I'd include node communication stats (latency between nodes). I'll be running a 35+ node cluster, and from what I've seen so far, the only way to tell whether the cluster is in a healthy state is cephcm status. I'd also include the services critical to cluster health, such as corosync...
  15. Cluster "Flapping"

    https://pve.proxmox.com/wiki/Multicast_notes ?
  16. Nodes getting offline

    Tcpdump your primary NICs; also check for packet loss.
  17. Nodes getting offline

    Where do you get that from? Multicast is required for corosync operations. Multicast is not synonymous with IPv6. IPv6 does rely on multicast for NDP, the protocol that replaces ARP with multicast operations at the link layer. IPv4 also supports multicast, depending on your switching gear.
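
    The Multicast notes wiki page linked earlier in these results suggests omping for verifying multicast between cluster nodes; a sketch with placeholder hostnames:

    ```shell
    # Run the same command on every node at roughly the same time;
    # node1/node2/node3 are placeholders for your cluster hostnames.
    omping -c 600 -i 1 -q node1 node2 node3

    # Then confirm corosync itself reports a healthy cluster.
    pvecm status
    ```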

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!
