Search results

  1. Slow live migration for hosts with long uptime.

    I know there was already a post about it, but it was unresolved. I had to live migrate a lot of VMs recently (and still have to migrate more) and noticed the relationship between long uptime and the live migration time. I have a Proxmox cluster 6.1-5 and an external Ceph cluster for external...
  2. Updating from 6.1-5 to 6.3

    No, I have not started yet, just prepping. Wanted to make sure that if I update node 1 I will not have to update nodes 2, 3 and 4 right away and will be able to do this the next day. One node at a time; I need to move the VMs.
  3. Updating from 6.1-5 to 6.3

    Is there a time limit to update all nodes? I can update one node and in two days update another, then two days later the 3rd node, etc. Should I anticipate any issues, or is it like the update when the corosync version changed (which I don't see in the release notes) and there was a limited time to update...
  4. Failed to migrate disk (Device /dev/dm-xx not initialized in udev database even after waiting 10000

    Had the same issue on PVE 6.2-4 while updating, which coincidentally was right after I imported a drive via the "qm importdisk" command. "udevadm trigger" worked for me. Thx
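
    A minimal sketch of that fix, keeping /dev/dm-xx from the thread title as a placeholder:

        # re-trigger udev events so device nodes get registered in the udev database
        udevadm trigger
        # wait until the udev event queue has been processed
        udevadm settle
        # optionally inspect what udev now knows about the device (placeholder name)
        udevadm info --name=/dev/dm-xx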
  5. ZFS on SSDs with replication

    Thanks, finally finished that project. Everything works great.
  6. Migrating from QNAP Virtualization Station to PVE

    I am trying to migrate a virtual machine, a Win Server, from QNAP to PVE. The QNAP backup is just one file, backup.img; there are two drives there. When I open this backup with an app like PowerISO I see 3 files: 0.ntfs, 1.ntfs (which seems to be the main file taking 99.999% of the space), and 2 just say...
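
    In case the extracted pieces turn out to be full raw disk images rather than partition dumps (not confirmed by the post), a hedged sketch of importing one into a prepared VM; the VM ID 100, the storage name local-lvm, and the resulting disk name are assumptions:

        # import the image as an unused disk of VM 100 (IDs, paths and storage are placeholders)
        qm importdisk 100 1.ntfs local-lvm --format raw
        # attach the imported disk to the VM (disk name is whatever importdisk reports)
        qm set 100 --sata0 local-lvm:vm-100-disk-0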
  7. The server certificate /etc/pve/local/pve-ssl.pem is not yet active

    Had an issue with this in the log: Aug 03 18:35:02 pve1-weha pveproxy[1728]: proxy detected vanished client connection Aug 03 18:35:02 pve1-weha pveproxy[1729]: '/etc/pve/nodes/pve2-weha/pve-ssl.pem' does not exist! Aug 03 18:35:32 pve1-weha pveproxy[1729]: proxy detected vanished client...
  8. ZFS on SSDs with replication

    I have a simple setup with PVE installed on regular HDDs (RAID) and am planning a ZFS pool for a VM. I have 2 enterprise mixed-use Samsung SSDs with 3 DWPD, 800GB, per server. There is going to be only one virtual machine of about 150-200GB running on this pool. Is the default GUI setup...
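
    Not the GUI flow being asked about, but a hedged CLI sketch of an equivalent two-SSD mirror for comparison; the pool name ssdpool and the device paths are placeholders:

        # create a mirrored pool on the two SSDs (device paths are placeholders)
        zpool create -o ashift=12 ssdpool mirror /dev/disk/by-id/ssd-A /dev/disk/by-id/ssd-B
        # register it in PVE as ZFS storage for VM disks
        pvesm add zfspool ssdpool --pool ssdpool --content images,rootdir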
  9. 3 server cluster without external storage.

    Good point about the 3rd node for Ceph; I guess I have no choice but to go with replication. I need a third physical node; it will have local storage and, in case of a total disaster, it can serve as a PVE server with a restored backup, with the expected -1. Is it possible to connect bonded interfaces...
  10. 3 server cluster without external storage.

    I have 3 servers, one for keeping quorum and two production servers. I am planning to put in some SSDs for guest VMs, but I was wondering if I should go with storage replication or Ceph on the two production nodes. It is a relatively simple setup and very few VMs will be running there, 2 or 3, 4...
  11. CEPH : SSD wearout

    Thx, will give it a try.
  12. CEPH : SSD wearout

    Nice graph, I assume this is Zabbix. Did you have to install the agent on the Proxmox nodes to get that info from SMART? BTW my SSDs on Ceph installed on Proxmox say N/A under Wearout. Not sure if this is a bug, or if they say N/A because there is no wearout so far. I thought with no wearout it...
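
    One way to cross-check the GUI's Wearout column is to read SMART directly on a node; a small sketch, assuming smartmontools is installed and /dev/sda is one of the Ceph SSDs (placeholder):

        # dump all SMART attributes; wear typically shows up as an attribute such as
        # Wear_Leveling_Count or Percentage Used, depending on the drive model
        smartctl -a /dev/sda
        # NVMe drives report wear in the NVMe health log instead
        smartctl -a /dev/nvme0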
  13. Removing a deleted LVM/LG from proxmox webgui

    Did you remove it from the cluster storage? If not, please try that, and also fdisk /dev/sdx.
  14. Removing a deleted LVM/LG from proxmox webgui

    You can use lvdisplay, vgdisplay and pvdisplay to list everything that is related to LVM, then remove accordingly with the remove commands (vgremove, pvremove, etc.); after that you might have to use fdisk or blkdiscard.
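
    A hedged sketch of that cleanup sequence; the VG/LV names and /dev/sdx are placeholders:

        # list what LVM still knows about
        pvdisplay
        vgdisplay
        lvdisplay
        # remove the leftovers top-down (names are placeholders)
        lvremove oldvg/oldlv
        vgremove oldvg
        pvremove /dev/sdx
        # then clear the partition table with fdisk, or discard the whole SSD
        blkdiscard /dev/sdx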
  15. Dedicated ceph servers/cluster i/o delay

    Thank you for that, Wolfgang, no problems so far. It is just strange. We updated both our clusters (PVE and Ceph on PVE) from 5.x, and before I saw the opposite: the I/O wait was half the CPU usage; now it is the CPU usage that is half of the I/O wait. Is anybody else experiencing this? Thank you
  16. Local directory storage with LVM vs. LVM-thin

    Thank you, will try to test when I have time.
  17. Removing a deleted LVM/LG from proxmox webgui

    Did you do vgremove as well? After that you might also need to use fdisk /dev/sdx to remove the partition. I was actually doing this a few times recently; it was annoying but easy enough to reset the drive so it could be used for something else. If you have an SSD you might want to use blkdiscard...
  18. Dedicated ceph servers/cluster i/o delay

    I am running a dedicated 4-node Ceph cluster with 10Gbps networks for the Ceph cluster and Ceph public networks over bonded interfaces: proxmox-ve: 6.1-2 (running kernel: 5.3.13-1-pve) pve-manager: 6.1-5 (running version: 6.1-5/9bf06119) pve-kernel-5.3: 6.1-1 pve-kernel-helper: 6.1-1...
  19. Local directory storage with LVM vs. LVM-thin

    I used to do this from the CLI on Proxmox 4.x, but after reinstalling to the new 6.1 version I used the web interface and added local storage of type directory to the system. I used LVM-thin. Is there a performance difference between LVM-thin and LVM volumes when mounted as directories? Thank you
  20. no such cluster node 'nodename' (500) [SOLVED]

    Got that issue too on 6.1.1; restarting corosync on the affected node fixed the issue.
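
    For reference, a minimal sketch of that fix, run on the affected node:

        # restart the cluster communication stack
        systemctl restart corosync
        # verify the node sees the rest of the cluster again
        pvecm status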