Search results

  1. job failed with err -5 - Input/output error (BTRFS) transfer/backup/clone

    Does the Proxmox VM datastore fully support btrfs? On https://pve.proxmox.com/wiki/BTRFS it does say it can be used as the "root file system".
  2. job failed with err -5 - Input/output error (BTRFS) transfer/backup/clone

    Which commands? I am not sure whether repairing would cause more problems.
  3. job failed with err -5 - Input/output error (BTRFS) transfer/backup/clone

    I'll run some tests, but I don't think that's the case; from what I have been reading, this is very common with BTRFS. Also, trying to copy all the virtual machines with

      find /my-storage -type f -exec cp {} /dev/null \;

    gives:

      cp: error reading '/my-storage/images/152/vm-152-disk-0/disk.raw': Input/output...
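    A read-only scrub is a less risky first step than any repair command; a minimal sketch, assuming the filesystem from the post is mounted at /my-storage:

      # Verify every checksum on the mounted filesystem; -B stays in the foreground and prints a summary
      btrfs scrub start -B /my-storage
      # Per-device counters of read/write/corruption errors seen so far
      btrfs device stats /my-storage
      # The kernel log names the inodes whose checksums failed
      dmesg | grep -i 'csum failed'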
  4. job failed with err -5 - Input/output error (BTRFS) transfer/backup/clone

    To add, syslog contains:

      Sep 26 20:15:02 HOSTNAME vzdump[3822900]: INFO: Starting Backup of VM 152 (qemu)
      Sep 26 20:15:32 HOSTNAME kernel: [272661.568777] BTRFS warning (device nvme0n1): csum failed root 445 ino 257 off 10338160640 csum 0xd2eda394 expected csum 0x8ba91653 mirror 1
      Sep 26 20:15:32...
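    That warning pinpoints the damaged file by subvolume (root 445) and inode (ino 257); a sketch of mapping those back to a path, assuming the filesystem is mounted at /my-storage (mount point and subvolume path are assumptions):

      # Find the path of the subvolume whose id is 445
      btrfs subvolume list /my-storage | grep 'ID 445'
      # Resolve inode 257 to a filename inside that subvolume
      btrfs inspect-internal inode-resolve 257 /my-storage/images/152/vm-152-disk-0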
  5. job failed with err -5 - Input/output error (BTRFS) transfer/backup/clone

    After testing btrfs for a few days, it seems every day more virtual machines are having this same issue when trying to create a backup. Transfer (BTRFS > ZFS):

      2021-09-25 22:16:24 starting migration of VM 520 to node 'NODE1' (192.168.0.120)
      2021-09-25 22:16:24 found local disk...
  6. grub_is_lockdown after upgrade to 7

    After upgrading from 6 to 7 and then rebooting, my OVH node shows the following error:

      error: symbol 'grub_is_lockdown' not found.
      Entering rescue mode...
      grub rescue>

    I currently only have access through rescue. Changing the boot order straight to the hard drive shows the following on boot
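    The usual recovery for this kind of GRUB symbol mismatch is to reinstall the bootloader from the rescue system; a minimal sketch, assuming the installed root filesystem is on /dev/sda1 (device names are assumptions and will differ per machine):

      # From the OVH rescue system: mount the installed system and reinstall GRUB inside it
      mount /dev/sda1 /mnt
      for d in dev proc sys; do mount --bind /$d /mnt/$d; done
      chroot /mnt grub-install /dev/sda
      chroot /mnt update-grub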
  7. btrfs on Proxmox 6? Cluster

    Thank you. What happens if I upgrade without migrating/stopping the virtual machines?
  8. btrfs on Proxmox 6? Cluster

    OK, thanks for the reply. Is it necessary to power down all virtual machines during the upgrade? What are the real risks of leaving them running? (6.4 > 7)
  9. btrfs on Proxmox 6? Cluster

    I connected my new Proxmox 7 node with btrfs into a cluster that uses Proxmox 6; however, there is no option to add btrfs via the GUI in 6. Is there another way?
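    Storage can also be added from the CLI or the cluster-wide config file instead of the GUI; a sketch, assuming PVE 7's btrfs storage type and a filesystem mounted at /my-storage (storage id, path, and content types are assumptions):

      # Define a btrfs-type storage entry from any PVE 7 node's shell
      pvesm add btrfs my-btrfs --path /my-storage --content images,rootdir
      # Equivalent entry in /etc/pve/storage.cfg:
      #   btrfs: my-btrfs
      #         path /my-storage
      #         content images,rootdir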
  10. Route IPv6 through another interface? OVH

    I am using OVH vRACK on vmbr1:

      iface vmbr0 inet6 static
          address 2001:41d0:803:8400::1/128
          gateway fe80::1
          up ip -6 route add 2001:41d0:803:8400::/56 dev vmbr0

      auto vmbr1
      iface vmbr1 inet static
          address 192.168.0.122/16
          bridge-ports bond1
          bridge-stp off
          bridge-fd 0...
  11. IPv6 stopped working after upgrading to 7

    Looks like there is an issue with the eth1 interface:

      auto vmbr0
      iface vmbr0 inet static
          address 51.195.234.xxx/24
          gateway 51.195.234.254
          bridge-ports eth0
          bridge-stp off
          bridge-fd 0

      iface vmbr0 inet6 static
          address 2001:41d0:803:8400::1/128
          gateway fe80::1

    works...
  12. IPv6 stopped working after upgrading to 7

    Yes, it is installed. If I modify the setting and apply it, it works until reboot.
  13. MAC of multiple virtual machines change

    Didn't see that, but no, I don't think so. I have another server with OVH with nothing on it, and it returns

      ? (51.195.234.254) at fe:ed:de:ad:be:ef [ether] on vmbr0

    so perhaps it's an issue on OVH's end.
  14. IPv6 stopped working after upgrading to 7

    After upgrading to PVE 7, IPv6 has stopped working. If I restart the network (ifup -a), 2001:41d0:803:8400::1/128 does start pinging until the node reboots again, and the virtual machines then start pinging too.

      auto lo
      iface lo inet loopback

      auto eth0
      iface eth0 inet manual

      auto eth1...
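    Since ifup -a brings the address back, one workaround worth trying (an assumption, not a confirmed fix) is re-applying the IPv6 default route after the bridge comes up:

      # /etc/network/interfaces: force the v6 default route once vmbr0 is up
      iface vmbr0 inet6 static
          address 2001:41d0:803:8400::1/128
          gateway fe80::1
          post-up ip -6 route replace default via fe80::1 dev vmbr0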
  15. MAC of multiple virtual machines change

    arp -a output shows the following:

      ipxxx.ip-51-89-201.eu (51.89.201.xxx) at fe:ed:de:ad:be:ef [ether] on vmbr0

    After rebooting the VM, the MAC returns correctly:

      ipxxx.ip-51-89-201.eu (51.89.201.xxx) at 02:00:00:9b:81:48 [ether] on vmbr0

    What could be the cause of this? It seems to happen to...
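    A way to see which machine is actually answering ARP for that address (the IP below is a placeholder, since the post elides the last octet):

      # Watch ARP traffic on the bridge and note which source MAC claims the VM's IP
      tcpdump -eni vmbr0 arp
      # Inspect and, if stale, flush the node's own neighbour entry
      ip neigh show 51.89.201.10
      ip neigh flush to 51.89.201.10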
  16. Restart all virtual machines network?

    Some of my virtual machines have lost network connectivity (I believe from the node reaching 100% RAM before it swapped). Is there a way to easily disconnect/reconnect their network? systemctl restart networking on the node does not work. The only way I have found so far is just to...
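    One per-VM option that avoids touching the node's own networking is bouncing the VM's host-side tap device; a sketch, assuming VM 152's first NIC (PVE names these tap<vmid>i<n>):

      # Toggle the host-side interface for VM 152, NIC 0
      ip link set tap152i0 down
      ip link set tap152i0 up

    The GUI equivalent is the Disconnect checkbox on the VM's network device, which sets link_down=1 on the NIC.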
  17. btrfs RAID0 3 disks

    Thanks for the input, RAID0 it is then, although I have read that single allocates in 64K chunks. What do you think about putting the metadata as RAID1?

      mkfs.btrfs -m raid1 -d raid0 -L My-Storage /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1

    Good or bad for performance? I am expecting 5x more writes than reads...
  18. zram on a NVMe overkill?

    I have the following NVMe PCIe 3 drive that I will be using as swap: SAMSUNG MZQLB1T9HAJR-00007 https://www.scan.co.uk/products/192tb-samsung-pm983-25-u2-ssd-pcie-30-x4-nvme-mlc-3d-v-nand-3000mb-s-read-1900mb-s-write-540k-50k-io The question is, do I really need to use zram? What kind of...
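    For reference, a minimal zram swap setup looks like this (size, algorithm, and priority are assumptions; the higher priority makes the kernel drain zram before the NVMe swap):

      # Create a compressed RAM-backed block device and use it as high-priority swap
      modprobe zram
      zramctl --find --size 8G --algorithm zstd   # prints the allocated device, e.g. /dev/zram0
      mkswap /dev/zram0
      swapon --priority 100 /dev/zram0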
  19. btrfs RAID0 3 disks

    I came across the following on creating btrfs. I am curious, what do you suggest if I want performance? What are the pros/cons of each?

      # Create a filesystem across three drives (metadata mirrored, linear data allocation)
      mkfs.btrfs -d single /dev/sdb /dev/sdc /dev/sdd
      # Stripe the data...
  20. CPU limit is being ignored?

    Well, maybe you should ask why that feature even exists; there are lots of use cases, I am sure. Thanks for the info about KVM64. I have not noticed it doing this on virtual machines running with the host CPU flag.
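    For context, the limit under discussion is set per VM; a sketch, assuming VM 152 and a cap of two cores' worth of CPU time (VM id and value are assumptions):

      # Cap VM 152 at the equivalent of 2 full cores, independent of its vCPU count
      qm set 152 --cpulimit 2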