Search results

  1. 3 site Ceph cluster

    Check https://pve.proxmox.com/wiki/Cluster_Manager#_cluster_network and other network-related things: https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster
  2. Replacing a node in a cluster - LVM to ZFS

    You can use the same name. Search the PVE docs for removing/adding a node. Non-UEFI boot way: https://aaronlauterer.com/blog/2021/move-grub-and-boot-to-other-disk/ (or search Google).
  3. Enterprise SSD, Dell R730xd servers, 20Gbps link, still Ceph iops is too low

    Do you really understand what you are benchmarking? You can't calculate Ceph IOPS from SSD IOPS. Read this: https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2020-09-hyper-converged-with-nvme.76516/ .
  4. Cluster problems after removal and re-adding the same node

    Cluster communication is based on SSH, so fix SSH (keys, authorized_keys) on every node (the /root/.ssh and /etc/pve directories).
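
    A minimal sketch of the usual repair steps, assuming a standard PVE 7 cluster (command names, paths and the placeholder hostnames below should be checked against your own setup):

      # regenerate node certificates and merge SSH keys into the cluster-wide files
      pvecm updatecerts
      # the cluster-wide SSH material lives on the pmxcfs mount:
      #   /etc/pve/priv/authorized_keys  (symlinked from /root/.ssh/authorized_keys)
      #   /etc/pve/priv/known_hosts
      # if a re-added node still fails, drop its stale host key and reconnect once by hand
      ssh-keygen -R <nodename> -f /etc/pve/priv/known_hosts
      ssh root@<nodename>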
  5. Ceph pool disk size increase.

    OSD size maps to the OSD weight parameter. Don't remove nodes from the cluster, just change the disks. Check the documentation (PVE, Ceph).
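
    A hedged illustration of the point about weights (the OSD id and weight below are placeholders; check the Ceph docs for your release):

      # after swapping in the larger disk and recreating the OSD, its CRUSH weight
      # should roughly track the new disk size in TiB; it can also be set by hand
      ceph osd crush reweight osd.3 3.49
      ceph osd df tree    # verify weights and watch rebalancing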
  6. Stopped VM raise zabbix notification of the network interface due backup

    Hi, I run a backup of a stopped VM regularly, and when it coincides with a Zabbix check it raises a notification, because the VM's interface on PVE goes up/down during the backup: PVE: INFO: Finished Backup of VM 101 (00:00:10) INFO: Backup finished at 2022-02-11 02:00:35 INFO: Starting Backup of VM 102 (qemu) INFO...
  7. [SOLVED] Hardware RAID Notification/Status

    There are so many HW controllers that PVE doesn't support this; you need your own monitoring. SMART is for checking the state of a disk, but that's not the same as the state of a disk in an array. Result: use your own monitoring.
  8. PVE 7.1 DMAR: DRHD errors - ilo4 problems on HP DL3xx G8

    Currently testing this fix; short-term it made iLO4 workable, we'll see in the mid-term. GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=off intremap=off"
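
    For reference, a sketch of how such a kernel command line change is usually applied on a GRUB-booted PVE node (not systemd-boot / ZFS-on-UEFI setups):

      # /etc/default/grub
      GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=off intremap=off"

      # regenerate the GRUB config and reboot for it to take effect
      update-grub
      reboot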
  9. PVE 7.1 DMAR: DRHD errors - ilo4 problems on HP DL3xx G8

    We have all HP DL3xx G8 on PVE 7.1; versions from the last upgrade below: proxmox-ve: 7.1-1 (running kernel: 5.13.19-3-pve) pve-manager: 7.1-10 (running version: 7.1-10/6ddebafe) pve-kernel-helper: 7.1-8 pve-kernel-5.13: 7.1-6 pve-kernel-5.13.19-3-pve: 5.13.19-7 ceph: 15.2.15-pve1 ceph-fuse...
  10. 2 Node Cluster HA DRBD or CEPH?

    DRBD isn't officially supported. You are on your own.
  11. Installation on R510

    Fix /etc/network/interfaces for your needs.
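
    Purely as an illustration, a minimal /etc/network/interfaces for a single-NIC PVE node with one bridge (interface names and addresses are placeholders, not taken from the thread):

      auto lo
      iface lo inet loopback

      iface eno1 inet manual

      auto vmbr0
      iface vmbr0 inet static
          address 192.168.1.10/24
          gateway 192.168.1.1
          bridge-ports eno1
          bridge-stp off
          bridge-fd 0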
  12. pveperf fsync performance slower with raid10 than raid1?

    2x 3.84T in zfs r1 = 1TB? what? HD SIZE: 1026.72 GB (raid1-ssd-pool)
    4x 3.84T in zfs r10 = 820GB? wtf? HD SIZE: 820.30 GB (raid10-ssd-pool)
  13. [SOLVED] PVE 7.1.8 - notes formatting

    The Notes tab has broken formatting. I restored a VM from PVE 6.4 to 7.1 with these notes: In the edit panel those lines appear line by line. root IP vg0 - root 8G, swap 2G v20210914 In the view panel those lines are all on one line. root IP vg0 - root 8G, swap 2G v20210914 Clearing the notes to empty->save->re-enter...
  14. 6.4 to 7.0 didn't work

    Uncomment:
    # deb http://ftp.us.debian.org/debian bullseye main contrib
    # deb http://ftp.us.debian.org/debian bullseye-updates main contrib
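
    I.e. after uncommenting, /etc/apt/sources.list should contain these lines (bullseye being the Debian release behind PVE 7), followed by a refresh of the package lists:

      deb http://ftp.us.debian.org/debian bullseye main contrib
      deb http://ftp.us.debian.org/debian bullseye-updates main contrib

      apt update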
  15. Monitors won't start after upgrading.

    So you upgraded one node to PVE7 and upgraded Ceph to Octopus too. There's the problem. Before the PVE team replies, my possible theoretical solutions: 1] downgrade Ceph on the PVE7 node, or 2] stop VMs, back up VMs, upgrade the rest of the cluster. No warranty from me for any point written above.
  16. Cluster migration NFS

    Easy way: just disable the NFS storage on the old cluster.
  17. HA or migration of VMs that are turned off on a node that is shut down or rebooted

    https://pve.proxmox.com/wiki/High_Availability#ha_manager_start_failure_policy -> Shutdown policy
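
    For reference, the shutdown policy is a datacenter-wide option; a hedged example of /etc/pve/datacenter.cfg (valid values are listed in the HA chapter of the PVE docs):

      # /etc/pve/datacenter.cfg
      ha: shutdown_policy=migrate

    Note that only guests managed as HA resources are affected by this policy.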