Recent content by PaulVM

  1. failed Import ZFS pool

    Same results on production servers with the latest updates: # pveversion pve-manager/8.0.9/fd1a0ae1b385cdcd (running kernel: 6.2.16-19-pve) If it can be useful, from journalctl -r I have: Nov 19 12:44:58 pveprod01 systemd[1]: Reached target zfs-volumes.target - ZFS volumes are ready. Nov 19 12:44:58...
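
    A minimal diagnostic sketch for this kind of failure (the pool name zp01 is assumed from another post in this thread; adjust to yours):

      # systemctl status zfs-import@zp01.service
      # journalctl -b -u zfs-import@zp01.service
      # zpool status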
  2. sshfs on proxmox VE8

    Clarification: is this answer valid also for ceph-fuse? dpkg: fuse: dependency problems, but removing anyway as you requested: pve-cluster depends on fuse. ceph-fuse depends on fuse. And, out of curiosity: I am not a developer, so my knowledge is limited, but the first question that comes to me is...
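
    To verify the dependency chain that warning refers to, a quick check with standard apt tooling (not from the thread):

      # apt-cache depends pve-cluster | grep -i fuse
      # apt-cache depends ceph-fuse | grep -i fuse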
  3. dpkg: fuse: dependency problems, but removing anyway as you requested

    I simply installed sshfs and got this warning. It refers to pve-cluster and ceph-fuse, so I ask if this is something to be considered. I suppose the Proxmox team had its good reasons for keeping the "old" fuse instead of fuse3. # apt install sshfs Reading package lists... Done Building...
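
    A cautious way to see in advance what apt would remove is a dry run; -s only simulates, nothing is actually installed or removed:

      # apt install -s sshfs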
  4. failed Import ZFS pool

    Reinstalled the cluster and recreated the zpools. Same problems. I have read other threads that report the same "problem", but no one got a real answer. I tried to simply disable the services with: systemctl disable zfs-import@zfs1.service as suggested in...
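
    For context, a sketch of that workaround plus the cache-based import it falls back on (pool name zfs1 as in the post; this is an assumption, not the thread's confirmed fix):

      # systemctl disable zfs-import@zfs1.service
      # zpool set cachefile=/etc/zfs/zpool.cache zfs1
      # systemctl enable zfs-import-cache.service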
  5. failed Import ZFS pool

    I am able to migrate a couple of VMs, but if I try a CT, either running or switched off, I have: trying to acquire lock... TASK ERROR: can't lock file '/var/lock/pve-manager/pve-migrate-41200' - got timeout I simply activated replication of the VMs/CTs to the other nodes and added the VMs/CTs...
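
    A hedged way to check for a stale lock (the path comes from the error above; verify nothing is actually holding it before removing anything):

      # fuser -v /var/lock/pve-manager/pve-migrate-41200
      # ls -l /var/lock/pve-manager/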
  6. failed Import ZFS pool

    Playing with a test cluster. After disconnecting one node to test HA migration (VM replication active), I have some corrupted VMs (they hang on boot as if the disks were corrupted). From systemctl: ● zfs-import@zp01.service loaded failed failed...
  7. cluster and replication doubt

    So, migration and replication are done using only 1 network by default (the default WAN network)? And I can only optionally choose which network to use? I supposed that configuring multiple networks in the cluster configuration would automatically make it use them as needed. Thanks, P.
  8. cluster and replication doubt

    I can't understand why replication stops if I disconnect the cable on the "local" network, while it continues if I disconnect the "internet" network. I supposed it used the "local" link for replication and the alternate link if the "local" one fails. I have this test cluster (3 nodes, 2 NICs)...
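
    For reference, a dedicated network for migration traffic can be set in /etc/pve/datacenter.cfg (as far as I know, storage replication goes over the same channel); the subnet below is only an example:

      migration: secure,network=10.10.10.0/24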
  9. move VM from ext4 (PVE1) to ZFS (PVE2)

    :-( Time consuming ... No. pve1 is in the old cluster, pve2 is in the new cluster
  10. move VM from ext4 (PVE1) to ZFS (PVE2)

    Hi, is there some fast way/trick to move VMs from an old PVE 7.0, where they are stored as normal files, to a new PVE 8.0 using ZFS (block device)? I suppose the simplest answer may be to use backup/restore, but I am looking for something like directly copying the image files, because I have about 3...
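
    One possible approach, sketched with example names (VMID 100, the default directory storage path and the target storage "local-zfs" are all assumptions): copy the image file over, then import it on the new node, where the VM config already exists:

      # scp /var/lib/vz/images/100/vm-100-disk-0.qcow2 pve2:/tmp/
      # qm importdisk 100 /tmp/vm-100-disk-0.qcow2 local-zfs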
  11. ceph pool(s) size/number

    Thanks for the explanation. I obviously had read some documentation, so some of these concepts are familiar to me; what I lack is practical experience. My focus was on evaluating the specific situation: 3 TB of data, 15 TB of disk space (4 x 3.84). We are at 20% of physical space. With 2 disks/OSD we...
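
    The rough math behind those figures, assuming the usual 3-way replication:

      3 nodes x 4 OSDs x 3.84 TB = 46.08 TB raw
      46.08 TB / 3 replicas      = 15.36 TB usable
      3 TB / 15.36 TB            = ~20% used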
  12. ceph pool(s) size/number

    These are my first experiences with Ceph, so I don't know it very well. In my limited experience with standard file systems and/or storage, I usually prefer to have separate, smaller "pools" if possible, so if something goes wrong it is simpler and faster to recover the one that has problems. I don't...
  13. ceph pool(s) size/number

    Is my question so complicated, or so stupid? Thanks, P.
  14. ceph pool(s) size/number

    I have a 3-node Ceph cluster; each node has 4x3.84 TB disks to use as OSDs. VM images are about 3 TB in total. Is it better to create one pool with all the 4x3.84 TB on each node (15 TB), or to create 2 pools of 2x3.84 TB on each node (7.5 TB)? Thanks, P.
  15. cluster: mix PVE 7 & 8

    Now that I am implementing the new nodes, I have decided that my final goal is an 8.x cluster with 3 new nodes (with Ceph), and the only things I need to carry over from the old cluster to the new one (apart from VMs & CTs) are the firewall rules and the PBS host. For the...
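
    A sketch of carrying those two items over (hostnames, storage ID, datastore and fingerprint are placeholders, not from the thread): the cluster-wide firewall rules live in /etc/pve/firewall/cluster.fw, and the PBS datastore can be re-attached on the new cluster with pvesm:

      # scp oldnode:/etc/pve/firewall/cluster.fw /etc/pve/firewall/cluster.fw
      # pvesm add pbs pbs-store --server pbs.example.com --datastore store1 --username backup@pbs --fingerprint '<fp>'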