Recent content by czechsys

  1. WARNING: CPU: 0 PID: ... [openvswitch]

    Same on an HP DL380p G8 with an Intel CPU. Probably fixed by the kernel patch mentioned in https://github.com/openshift/okd/issues/1189.
  2. Slow restore backup to DRBD9 storage.

    Proxmox doesn't support DRBD. Ask Linbit.
  3. Add server as non-voting member of cluster?

    Only disk migration? Use shared/network storage or something like ZFS replication.
  4. Proxmox HA cluster with ceph - need help with network topology and storage

    ad 1]
    a] 1 corosync link on the ceph bond + 1 corosync link in the mesh
    b] 1 corosync link on a 1 Gbps link on a dedicated switch + 1 corosync link in the mesh
    c] 1 corosync link on 10 Gbps + 1 corosync link on 10 Gbps, all without a mesh, etc.
    tl;dr: split the corosync links so they aren't dependent on one logical...
  5. 3 site Ceph cluster

    Check https://pve.proxmox.com/wiki/Cluster_Manager#_cluster_network and, for other network-related things, https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster
  6. Replacing a node in a cluster - LVM to ZFS

    You can use the same name. Search the PVE docs for removing/adding a node. Non-UEFI boot: https://aaronlauterer.com/blog/2021/move-grub-and-boot-to-other-disk/ (or search Google).
  7. Enterprise SSD, Dell R730xd servers, 20Gbps link, still Ceph iops is too low

    Do you really understand what you are benchmarking? You can't calculate Ceph IOPS from raw SSD IOPS. Read this: https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2020-09-hyper-converged-with-nvme.76516/ .
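To measure what the cluster actually delivers, rather than extrapolating from per-SSD numbers, a cluster-level benchmark along these lines helps; a minimal sketch using the standard `rados bench` tool, where the pool name `testpool` is an assumption (use a throwaway pool, not production data):

```shell
# Benchmark the pool itself, not the underlying SSDs.
# 60-second 4K write test with 16 concurrent operations:
rados bench -p testpool 60 write -b 4096 -t 16 --no-cleanup

# Random-read test against the objects written above:
rados bench -p testpool 60 rand -t 16

# Remove the benchmark objects afterwards:
rados -p testpool cleanup
```

Replication, network latency, and CPU overhead all shape the result, which is why these numbers land well below the raw SSD spec sheet.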
  8. Cluster problems after removal and re-adding the same node

    Cluster communication is based on SSH, so fix the SSH setup (keys, authorized_keys) on every node (the /root/.ssh and /etc/pve directories).
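On PVE the usual shortcut for this is `pvecm updatecerts`, which refreshes the cluster-wide known_hosts and node certificates; a hedged sketch (the node name `node2` is a placeholder):

```shell
# Run on the re-added node to refresh the cluster SSH
# known_hosts entries and node certificates:
pvecm updatecerts

# Then verify passwordless root SSH works to every other node:
ssh root@node2 true && echo "ssh to node2 OK"
```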
  9. Ceph pool disk size increase.

    OSD size maps to the OSD's CRUSH weight parameter. To change OSDs, don't remove nodes from the cluster, just replace the disks. Check the documentation (PVE, Ceph).
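If the goal is to tell CRUSH about a new, larger disk, the knob in question is the OSD's CRUSH weight; a minimal sketch, where the OSD id (5) and the new size are hypothetical:

```shell
# Inspect current CRUSH weights per OSD:
ceph osd tree

# Set the CRUSH weight of osd.5 to match its new capacity
# (by convention the weight is the capacity in TiB, so a
# 4 TB disk is roughly 3.64):
ceph osd crush reweight osd.5 3.64
```

Expect data rebalancing to start once the weight changes.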
  10. Stopped VM raise zabbix notification of the network interface due backup

    Hi, I'm running a backup of a stopped VM regularly, and when it coincides with a Zabbix check it raises a notification, because the VM's interface on PVE goes up/down: PVE: INFO: Finished Backup of VM 101 (00:00:10) INFO: Backup finished at 2022-02-11 02:00:35 INFO: Starting Backup of VM 102 (qemu) INFO...
  11. [SOLVED] Hardware RAID Notification/Status

    There are so many HW controllers that PVE doesn't support this; you need your own monitoring. SMART checks the state of a disk, but that's not the same as the state of the disk within the array. Result: use your own monitoring.
  12. PVE 7.1 DMAR: DRHD errors - ilo4 problems on HP DL3xx G8

    Currently testing this fix; short-term it made iLO4 usable, we'll see in the mid-term. GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=off intremap=off"
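For reference, applying that kernel command line on a GRUB-booted PVE node looks roughly like this (a sketch assuming legacy/GRUB boot; systems booting via proxmox-boot-tool, e.g. ZFS-on-root, run `proxmox-boot-tool refresh` instead of `update-grub`):

```shell
# /etc/default/grub - disable the Intel IOMMU and interrupt remapping:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=off intremap=off"

# Regenerate the GRUB config and reboot to apply:
update-grub
reboot
```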
