Search results

  1. Module 'volumes' has failed dependency: /lib/python3/dist-packages/cephfs.cpython-37m-x86_64-linux-gnu.so: undefined symbol: ceph_abort_conn

    Hi, after upgrading to 6.4.13 on a 3-node cluster I get a health warning: Module 'volumes' has failed dependency: /lib/python3/dist-packages/cephfs.cpython-37m-x86_64-linux-gnu.so: undefined symbol: ceph_abort_conn. What am I missing? I have already checked that python-cephfs as well as...
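
    A plausible first check for this warning (my sketch, not from the thread): all Ceph-related packages should come from the same release, and the manager must be restarted after they are realigned. The mgr instance name is assumed to be the short hostname here:

      # All Ceph-related package versions should match after the upgrade
      dpkg -l | grep -E 'ceph|rados'
      # Once versions are aligned, restart the manager so the 'volumes'
      # module reloads against the fixed library
      systemctl restart ceph-mgr@$(hostname -s).service
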
  2. VM Replication, Cluster Quorum and Node via unstable Network Link

    Hi, thanks for clearing that up. May I ask why ZFS is the limitation? Isn't there another way to sync online? Like creating a VM snapshot, as one does during backup? One could even use the dirty-bitmaps feature as used with PBS, but instead of adding chunks, update a VM disk... Maybe PBS could...
  3. VM Replication, Cluster Quorum and Node via unstable Network Link

    Hi, thank you for the hint with pve-zsync. I tried that today, without success: pve-zsync sync --source 120 --dest 192.168.10.71:120 --Verbose --maxsnap 2 --limit 512 ERROR Message: ERROR: in path The source VM lies on a Ceph storage; the destination storage is a "normal" path. What I was not...
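
    Background (my reading, not stated in the thread): pve-zsync replicates ZFS snapshots, so both ends must be ZFS datasets; a Ceph-backed source cannot work. A sketch of the same call under the assumption that VM 120's disks were first moved to a ZFS-backed storage:

      # pve-zsync snapshots the ZFS dataset and sends it over SSH
      pve-zsync sync --source 120 --dest 192.168.10.71:120 --verbose --maxsnap 2 --limit 512
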
  4. VM Replication, Cluster Quorum and Node via unstable Network Link

    Good morning everyone! I would like to use the VM replication feature. The replication target node is within a LAN segment that is connected only via an outdoor WiFi P2P link. Although the average throughput of that link is about 30 MB/s, I consider it "unstable" as the link is, for one, used...
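
    For reference, the built-in replication (pvesr) requires local ZFS storage on both nodes. A sketch of such a job; the node name, VMID and rate cap are examples, not from the post:

      # Replicate VM 100 to node 'pve-remote' every 15 minutes,
      # capped at 10 MB/s to leave headroom on the WiFi link
      pvesr create-local-job 100-0 pve-remote --schedule '*/15' --rate 10
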
  5. /dev/tape/by-id/scsi-10WT044700-sg failed - decode mode sense failed - wrong mode_data_len

    I know, but for the sake of the product, one has to test it! And what better way to test it than from the perspective of it already being enterprise-ready... then one can say what's still missing!
  6. /dev/tape/by-id/scsi-10WT044700-sg failed - decode mode sense failed - wrong mode_data_len

    What is the estimated time span from the testing to the "production" repo? Related question: do you rely on /etc/stinit.def to initialize the tape drive correctly, or do you use a PBS-internal way to initialize the drive?
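
    For readers unfamiliar with it: /etc/stinit.def holds per-drive stanzas that the st driver's stinit tool applies at boot. An illustrative stanza; the parameter values are generic LTO guesses, not settings recommended in this thread:

      # Matched against the drive's SCSI inquiry data by stinit
      manufacturer=IBM model="ULTRIUM-HH7" {
        scsi2logical=1
        can-bsr=1
        auto-lock=0
        mode1 blocksize=0 compression=1
      }
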
  7. Tapespeed

    It's all good! I understood! I've been working in this business long enough! I have some super-duper Samsung enterprise-class 4 TB SSDs lying around as spares for my Ceph, which I want to "abuse" to test the difference, but that will take some time! As I migrated from vSphere + Veeam, I thought about a lot...
  8. Tapespeed

    This is rather cynical, isn't it? It will not help address the actual problem nor improve the product, but well... sometimes cynicism is the only way to cope with things you cannot change anyway!
  9. Tapespeed

    Where do you want me to change the chunk size? In Veeam or in PBS? I have no idea where to change it in either product. Regards, Felix
  10. Tapespeed

    OK, I'll take that as it is - I'm used to working with what I have, and that's an HDD RAID that reads with dd at up to 300 MB/s, which is, from my point of view, good enough at this point. Though I can consider SSDs. My thinking is like this: if I can write a big (1 TB) file from the same...
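
    How such a figure is typically measured (my assumption; the file path is a placeholder): a plain sequential dd read of a large file on the RAID, with the page cache dropped first so RAM does not skew the result:

      # Drop cached data so dd measures the disks, not memory
      sync && echo 3 > /proc/sys/vm/drop_caches
      # Sequential read of a large test file; ~300 MB/s expected here
      dd if=/mnt/raid/testfile of=/dev/null bs=1M status=progress
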
  11. Tapespeed

    Hi Thomas! Okay, here are the problems: first, I think that swapping a well and fast working HDD RAID for SSDs is not a solution but a workaround. Secondly, I think that to mitigate the seek operations during I/O, one could simply create a file system in a top-level container, whatever sort of...
  12. Tapespeed

    I'm coming from vSphere + Veeam B&R. PVE is working like a charm, as does PBS. So far I can see two general disadvantages, which I want to address in the hope that they can be fixed in the near future: as I migrated the vSphere setup 1:1 to PVE and use the dirty-bitmap feature plus the dedupe of PBS, I would have...
  13. /dev/tape/by-id/scsi-10WT044700-sg failed - decode mode sense failed - wrong mode_data_len

    No doubt! My tape drive... I'm also chatty! I'll wait for the patch.
  14. /dev/tape/by-id/scsi-10WT044700-sg failed - decode mode sense failed - wrong mode_data_len

    OK, I just realized the above sg_raw.out.gz was made without a tape loaded. So attached to this post is now the sg_raw output with a tape loaded.
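
    For reference, how such a dump is typically produced (the exact CDB is my guess at what was requested: a MODE SENSE(10) for all pages with a 1 KiB allocation length; the device path is the one from the thread title):

      # MODE SENSE(10), all pages (0x3f); save the raw reply to a file
      sg_raw -r 1024 -o sg_raw.out /dev/tape/by-id/scsi-10WT044700-sg 5a 00 3f 00 00 00 00 04 00 00
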
  15. /dev/tape/by-id/scsi-10WT044700-sg failed - decode mode sense failed - wrong mode_data_len

    Good morning, sorry for the delay; another backup was in progress. Attached are the screenshot of the task as well as the output from sg_raw. Regards, Felix
  16. /dev/tape/by-id/scsi-10WT044700-sg failed - decode mode sense failed - wrong mode_data_len

    Hi, I want to try out the new tech preview of the PBS tape backup - so far I have not come very far, as I get the above message when I try to label a tape. This is my tape drive (tapeinfo -f /dev/st0): Product Type: Tape Drive Vendor ID: 'IBM ' Product ID: 'ULTRIUM-HH7 ' Revision: 'G9Q1'...
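
    For completeness, the step that triggers the error would look roughly like this (the label text is an example, and the drive name is assumed to match the PBS drive configuration):

      # Writes a label to the loaded tape via the configured drive
      proxmox-tape label --label-text tape-001 --drive drive0
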
  17. Windows VMs stop on RDP or local console

    Any news on when this patch will be available for the current branches? It would be unfortunate if I upgraded from 5.3.9 (for which you sent me the patch) to 5.4 and the VMs crashed again! Regards, Felix
  18. Windows VMs stop on RDP or local console

    Hi, good morning. Sorry I couldn't get back any sooner. Yes, so far there have not been any further issues, and the system has been undergoing an extreme phase of logins/logoffs as well as local console administration to ensure stability on the admin side too. In my opinion, your patch solved the...
  19. Windows VMs stop on RDP or local console

    I have applied the patch. Prior to that patch, the system would crash approximately every 2 days; however, it also ran up to 11 days without incident, depending on how many logins there were. So I have to wait a while (up to 2 weeks of usage) until it's safe to say that this patch fixed the problem...