Search results

  1.

    Add new OSD to existing CEPH POOL

    Hi all, I have a 4-node Proxmox cluster with Ceph; only three of the nodes are monitors. Each node has 3 SSDs and 2 HDDs (spinning disks), and there are two different pools: one for SSD and one for HDD. Now I'm adding one OSD per node, to be added to the existing HDD pool, but it's taking more time than I expected. This is the...
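
    For reference, a hedged sketch of checking that a newly added OSD actually lands in the HDD pool via its CRUSH device class (the OSD id is only an example):

      ceph osd tree                                # the CLASS column shows hdd/ssd per OSD
      # only needed if the class was auto-detected wrongly:
      ceph osd crush rm-device-class osd.12
      ceph osd crush set-device-class hdd osd.12
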
  2.

    MTU Warning

    Hello, is anybody having the same problem?
  3.

    MTU Warning

    The error is always reported on the interface that does not have its MTU set to 9000, but I added that interface to the cluster configuration as a failover link. That's why I set the MTU in corosync: in theory, that forces the traffic between nodes down to 1500.
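
    A quick way to verify what actually fits on each link is a don't-fragment ping (a minimal sketch; the addresses are placeholders):

      # 1472-byte payload + 28 bytes of ICMP/IP headers = a 1500-byte packet
      ping -M do -s 1472 <node-ip-on-the-1500-link>
      # 8972 + 28 = 9000 for the jumbo-frame link
      ping -M do -s 8972 <node-ip-on-the-jumbo-link>
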
  4.

    MTU Warning

    I've already done it.
  5.

    MTU Warning

    Yes, I can create a VLAN, but I don't understand your suggestion. I have this configuration:

      auto eth0
      iface eth0 inet manual
          mtu 9000
          #vRACK - In USE
      auto eth1
      iface eth1 inet manual
          #Public - In USE
      auto vmbr0
      iface vmbr0 inet static
          address 1.1.1.1/24
          gateway 10.10.10.10
      ...
  6.

    MTU Warning

    Yes, of course. My config version number is 5, all nodes have it, and it was replicated into /etc/corosync/corosync.conf correctly by itself.
  7.

    MTU Warning

    Hi all, I have some nodes at OVH. Every node has 2 physical network interfaces:
    - one does not support MTU 9000 --> vmbr1
    - one supports MTU 9000 (vRACK) --> vmbr0
    I've set up my cluster with link 0 on vmbr0 (for every node) and link 1 on vmbr1 (for every node). I've also inserted into corosync.conf the...
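
    For context, a minimal sketch of how two links typically appear in the corosync.conf nodelist (names and addresses are placeholders, not the poster's actual values):

      nodelist {
        node {
          name: node1
          nodeid: 1
          # link 0
          ring0_addr: <node1-ip-on-vmbr0>
          # link 1
          ring1_addr: <node1-ip-on-vmbr1>
        }
        # ...one block per node...
      }
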
  8.

    Proxmox PVE-ZSYNC

    Ah OK, so I need to use this: zfs send <pool>/[<path>/]vm-<VMID>-disk-<number>@<last_snapshot> | [ssh root@<destination>] zfs receive <pool>/<path>/vm-<VMID>-disk-<number> but without the [ssh root@<destination>] part. Thanks, now I understand!
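
    For example, a minimal sketch run directly on the host that already holds the replica, reusing the dataset names quoted below (the destination dataset name is only an illustration):

      zfs send myzpool/zsync/vm-146-disk-0@rep_test2_2021-09-17_13:30:01 | zfs receive myzpool/vm-146-disk-0
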
  9.

    Proxmox PVE-ZSYNC

    Thanks for your answer! But I don't understand. I cannot use this: zfs send myzpool/zsync/vm-146-disk-0@rep_test2_2021-09-17_13:30:01 | ssh 192.168.20.141 zfs receive myzpool/vm-123-disk-7 because I'm using pve-zsync to replicate data from one host to the other, so if I lose the source host I...
  10.

    Proxmox PVE-ZSYNC

    Hi all, I recently implemented pve-zsync in my environment, but I have questions about recovery. Right now, on the target host I have... But in case I need to start up the VM, which at the moment has this config..., how do I merge the last replica with the disk? The last replica is incremental only...
  11.

    Backup of System Backup PBS

    Hi all, has anyone made a secure copy of the VM configuration in case of failure? ... Just to understand how to have a backup of PBS, or the possibility to set up HA for it. Thanks
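
    As a minimal sketch (assuming the goal is just to preserve the PBS configuration under /etc/proxmox-backup/; the datastore contents would still need their own copy, for example via a sync job to a second PBS):

      tar czf /root/pbs-config-$(date +%F).tar.gz /etc/proxmox-backup
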
  12.

    Proxmox VE Storage Replication

    Hi all, is it possible to create a replication job between two nodes without the job starting immediately? I'm creating a replication job that should run on a scheduled day and hour, but as soon as I activate the job it starts. Proxmox VE version: 6.4-13
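
    For reference, a hedged sketch with the pvesr CLI (job ID, target node and schedule are illustrative; whether a freshly enabled job also runs once immediately may depend on the version):

      pvesr create-local-job 100-0 nodeB --schedule 'sat 03:00'
      pvesr disable 100-0    # keep it paused until the intended start
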
  13.

    storage migration failed: block job (mirror) error

    Hi, I have the same problem now... From NFS to local ZFS I get this error:

      drive mirror is starting for drive-scsi2
      drive-scsi2: Cancelling block job
      drive-scsi2: Done.
      TASK ERROR: storage migration failed: block job (mirror) error: drive-scsi2: 'mirror' has been cancelled

    I've migrated...
  14.

    Shutdown of the Hyper-Converged Cluster (CEPH)

    Ok, perfect. Thanks for your time.
  15.

    Shutdown of the Hyper-Converged Cluster (CEPH)

    Thanks. But about "ceph osd set noout", Alwin said: "I don't recommend setting this, since all nodes will boot again and may or may not start properly." So, do I need to do it or not? Thanks
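
    For context, a minimal sketch of the flag being discussed (whether to use it for a full-cluster shutdown is exactly the open question here):

      ceph osd set noout      # before powering the cluster off, so OSDs are not marked out
      ceph osd unset noout    # after all nodes are back up and the OSDs have rejoined
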
  16.

    Shutdown of the Hyper-Converged Cluster (CEPH)

    Sorry, but I don't understand. What do you mean? Thanks
  17.

    Shutdown of the Hyper-Converged Cluster (CEPH)

    Hi, has anyone found the answer? What is the best procedure? Thanks
  18.

    Adding new roles

    Hi, is it possible to add the permission to manage "networks" to the Helpdesk role? The role is very handy, but I wouldn't want to create an Administrator user, because Helpdesk is closer to my case. Thanks
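
    A possible sketch with pveum, assuming Sys.Modify is the privilege that gates node network configuration; the role name is hypothetical and the other privileges are placeholders for whatever the existing role grants:

      pveum role add HelpdeskNet -privs "Sys.Audit Sys.Modify VM.Audit"
      pveum aclmod / -user someuser@pve -role HelpdeskNet
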
  19.

    Proxmox VE monitoring

    Hi all, I'm looking for monitoring software for my Proxmox cluster, covering VMs and performance, with alerting capabilities. I've tested Zabbix (but it doesn't autodiscover VMs) and Graphite with Grafana or with InfluxDB, but I wasn't able to create alerts. Has anyone found the best tool...
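
    As one reference point, Proxmox VE can push its metrics to an external server; a minimal sketch of /etc/pve/status.cfg for an InfluxDB target, with name, server and port as placeholders, alerting then being handled in Grafana on top of that data:

      influxdb: monitoring
          server 192.168.1.50
          port 8089
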
  20.

    Problem RAM - Ballooning - Windows Server

    Sorry, but I don't understand. What do you mean?