Search results

  1. Ceph pool disk size increase.

    OSD size maps to the OSD weight parameter. For an OSD change, don't remove nodes from the cluster, just change the disks. Check the documentation (PVE, Ceph).
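
    As a rough sketch of the weight side of this (the OSD id 12 and the 3.84 TB disk size are placeholders; 3.84 TB is roughly 3.49 TiB):

      # show current OSD weights in the CRUSH map
      ceph osd tree

      # set the CRUSH weight of osd.12 to roughly its size in TiB
      ceph osd crush reweight osd.12 3.49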
  2. Stopped VM raises Zabbix notification on the network interface due to backup

    Hi, I run a backup of a stopped VM regularly, and when it coincides with a Zabbix check it raises a notification, because the VM interface on PVE goes up/down: PVE: INFO: Finished Backup of VM 101 (00:00:10) INFO: Backup finished at 2022-02-11 02:00:35 INFO: Starting Backup of VM 102 (qemu) INFO...
  3. [SOLVED] Hardware RAID Notification/Status

    There are so many HW controllers that PVE doesn't support this; you need your own monitoring. SMART checks the state of a disk, but that's not the same as the state of the disk in the array. Result: use your own monitoring.
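
    For example, on HP Smart Array controllers this can be polled with the HPE ssacli tool (a sketch; the tool and the slot number are assumptions, adjust for your vendor):

      # overall controller, array and cache status
      ssacli ctrl all show status

      # per-disk status behind the controller in slot 0
      ssacli ctrl slot=0 pd all show status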
  4. PVE 7.1 DMAR: DRHD errors - ilo4 problems on HP DL3xx G8

    Currently testing this fix; short-term it made ilo4 workable, we'll see in the mid-term. GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=off intremap=off"
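
    To apply that on a GRUB-booted node, the usual steps are (a sketch under that assumption):

      # /etc/default/grub
      GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=off intremap=off"

      # regenerate the boot configuration, then reboot
      update-grub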
  5. PVE 7.1 DMAR: DRHD errors - ilo4 problems on HP DL3xx G8

    We have all HP DL3xx G8 on PVE 7.1, versions from the last upgrade below: proxmox-ve: 7.1-1 (running kernel: 5.13.19-3-pve) pve-manager: 7.1-10 (running version: 7.1-10/6ddebafe) pve-kernel-helper: 7.1-8 pve-kernel-5.13: 7.1-6 pve-kernel-5.13.19-3-pve: 5.13.19-7 ceph: 15.2.15-pve1 ceph-fuse...
  6. 2 Node Cluster HA DRBD or CEPH?

    DRBD isn't officially supported. You are on your own.
  7. Installation on R510

    Adjust /etc/network/interfaces to your needs.
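
    A minimal sketch of what that file usually looks like on PVE with a single bridge (the interface name eno1 and the 192.0.2.10/24 addressing are placeholders):

      auto lo
      iface lo inet loopback

      iface eno1 inet manual

      auto vmbr0
      iface vmbr0 inet static
          address 192.0.2.10/24
          gateway 192.0.2.1
          bridge-ports eno1
          bridge-stp off
          bridge-fd 0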
  8. pveperf fsync performance slower with raid10 than raid1?

    2x 3.84T in ZFS RAID1 = 1 TB? What? HD SIZE: 1026.72 GB (raid1-ssd-pool)
    4x 3.84T in ZFS RAID10 = 820 GB? What? HD SIZE: 820.30 GB (raid10-ssd-pool)
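
    To see where those numbers come from, compare what ZFS itself reports with what pveperf prints (a sketch using the pool names from the post; that HD SIZE roughly matches USED+AVAIL is my assumption):

      # pool layout and capacity as ZFS accounts for it
      zpool list -v raid10-ssd-pool

      # usable space on the dataset
      zfs list -o name,used,avail raid10-ssd-pool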
  9. [SOLVED] PVE 7.1.8 - notes formatting

    The Notes tab has broken formatting. I restored a VM from PVE 6.4 to 7.1 with notes like this: In the edit panel those lines appear line by line. root IP vg0 - root 8G, swap 2G v20210914 In the view panel those lines are all on one line. root IP vg0 - root 8G, swap 2G v20210914 Clearing the notes to empty -> save -> re-enter...
  10. 6.4 to 7.0 didn't work

    Uncomment:
    # deb http://ftp.us.debian.org/debian bullseye main contrib
    # deb http://ftp.us.debian.org/debian bullseye-updates main contrib
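
    After uncommenting (and usually adding the security repository), /etc/apt/sources.list for Bullseye typically ends up like this (a sketch; the mirror hostname is whatever you already use):

      deb http://ftp.us.debian.org/debian bullseye main contrib
      deb http://ftp.us.debian.org/debian bullseye-updates main contrib
      deb http://security.debian.org/debian-security bullseye-security main contrib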
  11. Monitors won't start after upgrading.

    So you upgraded one node to PVE7 and upgraded Ceph to Octopus too. There's the problem. Before the PVE team replies, my possible theoretical solutions: 1] downgrade Ceph on the PVE7 node, or 2] stop VMs, back up VMs, upgrade the rest of the cluster. No warranty from me for any point written above.
  12. Cluster migration NFS

    Easy way: just disable the NFS storage on the old cluster.
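
    From the CLI on the old cluster that would be something like (the storage ID nfs-old is a placeholder):

      # set the disable flag for the storage in /etc/pve/storage.cfg
      pvesm set nfs-old --disable 1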
  13. HA or migration of VMs that are turned off on a node that is shut down or rebooted

    https://pve.proxmox.com/wiki/High_Availability#ha_manager_start_failure_policy -> Shutdown policy
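
    The policy described at that link is stored in /etc/pve/datacenter.cfg; a sketch with the migrate policy (the other values are conditional, failover and freeze):

      # /etc/pve/datacenter.cfg
      ha: shutdown_policy=migrate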
  14. Proxmox with Ceph - Disk crashed rate is too high

    P440ar? It's not a real HBA controller; maybe the problem is there...
  15. proxmox management interface matters?

    Create a Datacenter -> Storage item using that 10G subnet and select the backup content option.
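
    As a sketch of what that produces in /etc/pve/storage.cfg (the storage ID, the NFS export and the 10.10.10.50 address on the 10G subnet are placeholders):

      nfs: backup-10g
          export /export/pve-backup
          server 10.10.10.50
          path /mnt/pve/backup-10g
          content backup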
  16. Integrating PMG and Setting up Certificates & DNS Records

    DMARC etc. need to point to the server that is sending the mail, so the mailservers. There is no cost to having PMG in those records too anyway. For certificates, the right way is whatever works in the long run.
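
    As a sketch of the kind of records meant here (example.com, the pmg host name and the policy values are placeholders):

      example.com.         IN TXT "v=spf1 mx a:pmg.example.com -all"
      _dmarc.example.com.  IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"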
  17. Extremely SLOW Ceph Storage from over 60% usage ???

    You can't avoid swap being used while swap is present. You can set the swappiness parameter, remove swap, add RAM, or debug the problem further.
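
    A sketch of the first two options (the value 10 is only an example):

      # make the kernel less eager to swap, and persist the setting
      sysctl vm.swappiness=10
      echo 'vm.swappiness = 10' > /etc/sysctl.d/99-swappiness.conf

      # or drop swap entirely (also remove the entry from /etc/fstab)
      swapoff -a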
  18. MTU-size, CEPH and public network

    You can run tests. From my point of view, mixing 1500/9000 MTU on the same interface is asking for problems. I tried something like this before Ceph was even in PVE, and it was a mess. Network latency will have a higher performance impact than a 9k MTU.
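
    A quick test for whether 9000-byte frames actually pass end-to-end (the peer address is a placeholder; 8972 = 9000 minus 20 bytes of IP and 8 bytes of ICMP header):

      # send a non-fragmentable jumbo-sized ping to the Ceph peer
      ping -M do -s 8972 192.0.2.20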