Search results

  1. [SOLVED] One by one upgrade from v6.4 to v7.

    Yes, it's the other way to upgrade (i.e., the more complicated one).
  2. Building a separate ceph storage cluster

    For OP: 4. Rook or any configuration management solution. Anyway, we are also planning external Ceph storage for our PVE. PVE staff, are there any requirements for external clusters? For example, version differences etc.? The PVE documentation mainly covers hyperconverged setups and their updates.
  3. Do cluster with many VMs

    Read the documentation. Search the forum. Analyze your requirements. Test your setup. And so on. Do your job or pay a skilled engineer.
  4. Is it possible for HA to simply monitor a network link ?

    Not from the PVE side. You need to implement something like STONITH yourself (locally or remotely), for example shutting down the affected node.
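    A minimal sketch of such a local check, assuming a hypothetical interface name (eno1) and that powering the node off is the desired reaction:

        #!/bin/sh
        # Hypothetical link watchdog (not a PVE feature): power the node off if the monitored NIC loses carrier.
        IFACE=eno1   # assumed interface name
        if [ "$(cat /sys/class/net/$IFACE/carrier 2>/dev/null)" != "1" ]; then
            logger "link on $IFACE is down, powering off this node"
            systemctl poweroff
        fi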
  5. [SOLVED] Recommendation small Ceph setup

    If you aren't skilled with Ceph, the better way is 1 OSD per SSD.
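    As a rough sketch, creating one OSD per SSD on a PVE node looks like this (device names are placeholders):

        pveceph osd create /dev/sdb
        pveceph osd create /dev/sdc
        pveceph osd create /dev/sdd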
  6. [SOLVED] Backup to NFS Input/output error

    NFS works there.
    INFO: starting new backup job: vzdump 101 --node proxmox-backup-01 --storage nfs-backup --remove 0 --compress zstd --mode snapshot
    INFO: Starting Backup of VM 101 (qemu)
    INFO: Backup started at 2021-10-21 12:19:19
    INFO: status = running
    INFO: VM Name: REDACTED
    INFO: include...
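    A quick way to check that storage before re-running the job (storage name and VMID taken from the log above; everything else is an assumption):

        # Is the NFS storage mounted and active?
        pvesm status
        pvesm list nfs-backup
        # Reproduce the failing job manually:
        vzdump 101 --storage nfs-backup --mode snapshot --compress zstd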
  7. Upgrade 6.2 to 6.4 - high disk utilization in VMs

    The previous image is the standard Zabbix disk template. Even netdata shows something crazy for the VM system disk and for the VM DB data disk (graphs in the original post). Both disks are on this PVE host, on a dedicated RAID for VM images; the PVE OS has its own RAID diskset. All VMs are Debian 10. The upgrade was done only at the PVE host level.
  8. Upgrade 6.2 to 6.4 - high disk utilization in VMs

    Hi, we upgraded our PVE cluster (very old HP G7 and a 3-year-old Dell R940) from 6.2 to 6.4, and disk utilization in the VMs rose sharply from its previous baseline. The problem is the same for VMs on:
    - NFS SSD storage (raw files), default (no cache)
    - local SSD disks (LVM thick), default (no cache)
    The change depends on VM...
  9. [SOLVED] Failed node and recovery in cluster

    Thanks. I used way 1]. Everything went without a hiccup.
  10. [SOLVED] Failed node and recovery in cluster

    Hi, one of my cluster nodes failed hard due to failed disks in its RAID. Since the cluster is on 6.2, we decided to upgrade to 6.4 (a required step for 7). The reinstalled node will have the same FQDN as the failed node. Now I have two possible ways: 1] remove the failed node from the cluster (i.e., clean it up) and add the reinstalled...
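    A rough sketch of way 1] (node name and IP below are placeholders):

        # On a healthy cluster member: remove the dead node from the cluster.
        pvecm delnode failed-node
        # On the freshly reinstalled node: join the existing cluster again.
        pvecm add <ip-of-a-remaining-cluster-node>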
  11. OMG I mistakenly pass-thru NVME chip on M/B

    Use the install ISO or any ISO with a recovery/repair mode.
  12. Problem with Disk on RAID6 more 10Tb

    Your disk/RAID setup is strange.
    1] /dev/sdb as RAID6 on the PERC
    2] the fio filename is /dev/sdd <<--- ???
    3] the errors are from /dev/sdd and sde <<--- ???
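    For reference, a fio run that actually targets the RAID6 virtual disk mentioned in 1] would look roughly like this (all parameters except the device name are assumptions):

        fio --name=raid6-test --filename=/dev/sdb --direct=1 --rw=randwrite \
            --bs=4k --ioengine=libaio --iodepth=32 --runtime=60 --time_based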
  13. [SOLVED] Cannot start newly migrated VM : conf file format error

    You need to add the new VM datastore name, since you removed the original local-zfs datastore name.
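    The kind of edit meant here, in the VM's config under /etc/pve/qemu-server/ (storage and disk names below are placeholders):

        before (points at the removed local-zfs datastore):
            scsi0: local-zfs:vm-100-disk-0,size=32G
        after (points at the datastore the disk now lives on):
            scsi0: new-storage:vm-100-disk-0,size=32G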
  14. Backup Job Error

    You misread the backup log. The backup failed because the VM is not running, not the other way around. Check the logs.
  15. Open vSwitch across physical servers

    node1->node2 & node1->node3 vs. dual 10G LACP give in theory the same throughput. You can move PVE management to an ens19 & ens20 bond and use the 2x 1 Gbps links unbonded for corosync. You can use a mesh with 3 nodes, but if you want to add more nodes in the future, it will limit you.
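    A rough /etc/network/interfaces sketch of that split, with management on an ens19+ens20 bond (addresses and the bond mode are assumptions):

        auto bond0
        iface bond0 inet manual
                bond-slaves ens19 ens20
                bond-miimon 100
                bond-mode 802.3ad

        auto vmbr0
        iface vmbr0 inet static
                address 192.0.2.10/24
                gateway 192.0.2.1
                bridge-ports bond0
                bridge-stp off
                bridge-fd 0

        # the 2x 1 Gbps links stay unbonded, one address per corosync link, e.g.:
        auto eno1
        iface eno1 inet static
                address 10.10.10.1/24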
  16. Some notes and questions about Proxmox Cluster networking

    Hiding the default port behind a different number is security by obscurity. I think the root password is needed for the cluster join, if I remember correctly (it's been quite some time since I created a new cluster).