Search results

  1. Incoherent disk size on NAS and in proxmox

    Thanks for your replies. I think I got my questions answered by the VM itself. It just stopped responding when right-clicking on the disk, and Disk Manager wouldn't open. Just great on a Friday afternoon :D
  2. Incoherent disk size on NAS and in proxmox

    Here is the qcow metadata:
      image: vm-100-disk-0.qcow2
      file format: qcow2
      virtual size: 33.3 TiB (36614596198400 bytes)
      disk size: 18.7 TiB
      cluster_size: 65536
      Format specific information:
          compat: 1.1
          compression type: zlib
          lazy refcounts: false
          refcount bits: 16
          corrupt...
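
    This layout matches the output of qemu-img info; the gap between virtual size and disk size is normal for a sparsely allocated qcow2. A minimal way to reproduce it (the mount path below is an assumption, not from the thread):

      # Print qcow2 metadata: virtual size vs. actual allocation on the NFS share
      qemu-img info /mnt/pve/<nfs-storage>/images/100/vm-100-disk-0.qcow2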
  3. Incoherent disk size on NAS and in proxmox

    Hi. I have a backup VM (Veeam) with a variety of disks, one of which is located on an NFS share. It was about 20 TB, residing on a NAS. I decided to expand the pool on the NAS in order to be able to expand the disk. I expanded it from about 20 TB to 41 TB. When I got around to expand...
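
    A sketch of the expansion step being described, assuming vmid 100 and a SATA disk key (both placeholders, not from the thread):

      # Grow the guest disk by 21 TiB through Proxmox; the qcow2 file
      # on the NFS share is enlarged in place
      qm resize 100 sata0 +21T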
  4. Remove nodes from cluster - reinstall with same hostname and IP?

    Thanks for elaborating. I really don't see it as reasonable to render a used IP unusable after it's been on a host once, either. That subnet is a dedicated PVE management net. Well, I guess I can get a new IP address within the subnet, but the nodename shows which blade chassis position the...
  5. Remove nodes from cluster - reinstall with same hostname and IP?

    Hi. I'm about to remove two nodes from my 13-node cluster to form a new cluster (with a QDevice). The new cluster will be handled by multiportal, to see if it works out for us. Can I just remove them (per the documentation), reinstall them, and reuse the IP addresses and nodenames, or will this cause...
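
    The documented removal looks roughly like this, run from a node that stays in the old cluster (the nodename is an example):

      # Permanently remove the departing node from the cluster
      pvecm delnode pve-node12
      # Verify the remaining membership before reinstalling
      pvecm nodes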
  6. rx-usecs 0 overhead on CPU

    Hi. I have an HPE blade chassis with BL460c's in it, with Intel 82599 10 Gb dual-port NICs in them. Running PVE with LVM on iSCSI multipath-connected LUNs, I got "Spurious native interrupt!" in the syslogs, and I was searching Google to find anything on this. I found someone issuing...
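
    The coalescing tweak referred to in the title is presumably along these lines (the interface name is an assumption):

      # rx-usecs 0 disables RX interrupt coalescing: one interrupt per packet,
      # lower latency but noticeably more CPU time spent in interrupt handling
      ethtool -C ens1f0 rx-usecs 0
      # Show the current coalescing settings
      ethtool -c ens1f0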
  7. LVM on iSCSI from NFS

    Thanks @bbgeek17 and @Kingneutron, much appreciated. I've read the Blockbridge link multiple times, but I thought I'd just ask... :) I cannot live-migrate to LVM on iSCSI when the disks are set to io_uring; the GUI just refuses to migrate the disks with a note that says "TASK ERROR: storage...
  8. LVM on iSCSI from NFS

    Hi. I'm about to move from NFS to LVM on multipath iSCSI as shared storage in our Proxmox cluster. I understand that I need to change the disk definitions on the VMs to use async IO native instead of the default io_uring, and I need to restart the VMs to do that. We have about 230 VMs to...
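
    Per VM, the change amounts to redefining the disk; a minimal sketch (vmid, disk key, and volume name are placeholders; the new aio setting only applies after a full stop/start):

      # Switch the disk from the default io_uring to native AIO;
      # aio=native is commonly paired with cache=none
      qm set 100 --scsi0 lvm-iscsi:vm-100-disk-0,aio=native,cache=none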
  9. Local backup job and offsite sync

    Unfortunately, it is desirable when switching backup solutions. But thanks, I'll keep an eye on the backups and remove them manually.
  10. Local backup job and offsite sync

    Hi. I have a PBS server running a backup job of our PVE environment. It has a job which keeps the last 7 days and then prunes. This job syncs every day to an offsite storage, and it works very well. The problem is, I need to have 7 days of retention on the local backup, and 30 days + 1 monthly for 6...
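
    Since a sync job only copies snapshots, retention can be set independently on each side; a sketch assuming a PBS version with prune jobs (datastore and job names are placeholders):

      # Local side: keep 7 daily backups
      proxmox-backup-manager prune-job create local-keep --store local-ds --schedule daily --keep-daily 7
      # Offsite side: longer retention, applied after the sync pulls snapshots in
      proxmox-backup-manager prune-job create offsite-keep --store offsite-ds --schedule daily --keep-daily 30 --keep-monthly 6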
  11. Multiple Proxmox clusters on the same subnet

    Will the VMs be on the same subnet as well? In that case you might want to set the MAC address prefix to different values; otherwise you could find yourself having a spanning-tree shutdown on switch interfaces if two VMs get the same MAC address. Not very likely, but a real risk. I don't see...
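
    The knob in question lives in datacenter.cfg; giving each cluster its own locally administered prefix keeps auto-generated VM MACs disjoint (the prefix values below are examples):

      # /etc/pve/datacenter.cfg on cluster A
      mac_prefix: 02:A1:00
      # /etc/pve/datacenter.cfg on cluster B
      mac_prefix: 02:B1:00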
  12. [SOLVED] Change backup jobs

    Thanks. Yeah, I read up on it a little more, just to make sure. I created a job for the entire cluster instead, and can exclude a VM no matter what host it's on. Thanks for your reply.
  13. [SOLVED] Change backup jobs

    Hi. I have a PVE cluster of 7 nodes. Today I am running one backup job per server, but I need to change that in order to be able to exclude a VM on a cluster-wide scale. VMs might migrate to other hosts, and since I can only exclude VMs on the host they're running on, that is a problem. If I...
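
    With a single cluster-wide job, the exclusion follows the VM wherever it migrates; the job boils down to vzdump options like these (VMID and storage name are examples):

      # Back up everything except VM 123, regardless of which node it runs on
      vzdump --all --exclude 123 --storage pbs-store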
  14. [SOLVED] VMs with disks without unit (K, M, G, T, etc.)

    For reference: I could not "rescan entities" while adding a host to Veeam Backup & Replication, and the following error occurred in the Veeam logfile: 2024-11-08 15:20:56.4722 00090 [16596] ERROR | [ProxmoxNode][hostname][00000000-0000-0000-0000-ac1f6bed8a5c][0HN8001R7RINV:0000000C]: Failed...
  15. [SOLVED] VMs with disks without unit (K, M, G, T, etc.)

    It worked, thanks a lot! Now I can add the hosts to Veeam, but I'm going to tell them that it's still a valid qm config, so they should support it.
  16. [SOLVED] VMs with disks without unit (K, M, G, T, etc.)

    Yeah, I was rather surprised when the support told me this, and a bit upset... But I could just as well resize it to 2726295569K then, and not have to worry about the last 512 bytes getting lost, right? Thanks for the intelligent reply, I wish I'd thought of it myself :D I believe that qm...
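
    The arithmetic checks out: 2791726662144 B / 1024 = 2726295568.5 KiB, so rounding up to a whole number of KiB grows the disk by 512 B instead of truncating it. A sketch (the vmid is a placeholder):

      # Resize to an absolute, whole-KiB size;
      # 2726295569 * 1024 = 2791726662656 B, i.e. 512 B more than before
      qm resize <vmid> sata0 2726295569K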
  17. [SOLVED] VMs with disks without unit (K, M, G, T, etc.)

    Hi. I have two VMs which have disks with no unit specified. The disks are qcow2 on an NFS share, imported from ESX and converted from VMDK, attached to the VM as SATA. One VM has a disk with size 2791726662144, like this: sata0: <storage>:vmid/vm-vmid-disk-0.qcow2,size=2791726662144 The problem...
  18. Is this a reasonable setup?

    Hi. We have a customer that requests a solution for VMs with replication. All VMs will fit on a single node. My thought is to install two identical nodes with the same number of disks, use the disks in a local ZFS pool, and just set up VM replication between the hosts - preferably I can connect the...
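
    A sketch of the replication wiring, assuming ZFS-backed local storage on both nodes (VMID, node name, and schedule are examples):

      # Replicate VM 100 to the second node every 15 minutes
      pvesr create-local-job 100-0 nodeB --schedule '*/15'
      # List replication jobs and their status
      pvesr list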
  19. Bonding and VLAN

    Sorry, I'll try to be more clear. I had the idea that I could maybe tag VLANs directly on the physical interfaces, instead of using the bond, in order to be able to utilize both NICs for multipathed shared storage. At the same time, I'd like to keep the bond for redundancy of the network...
  20. Bonding and VLAN

    Hi. I have the following network config, where VLANs 250-251 are for storage and 254 is for migration. Would it be OK to set the vlan-raw-device to enp65s0f0 for vlan250 and enp65s0f1 for vlan251, in order to use both network cards for storage, or are there any disadvantages? auto enp65s0f0...
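
    A sketch of what the proposed split could look like in /etc/network/interfaces (interface names taken from the post; addresses are examples):

      auto vlan250
      iface vlan250 inet static
          address 10.0.250.11/24
          vlan-raw-device enp65s0f0

      auto vlan251
      iface vlan251 inet static
          address 10.0.251.11/24
          vlan-raw-device enp65s0f1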