Search results

  1. Upgrading zfs to 2.2.0-rc1 ?

    Thanks for the confirmation, @LnxBil, and yes, datasets/snapshots in LXC are clearly a benefit!
  2. Upgrading zfs to 2.2.0-rc1 ?

    Congratulations everyone on the ZFS 2.2.0 release! Given that ZFS 2.2.0 itself brings a substantial number of optimisations, I'm curious whether the native (zoned) ZFS filesystem for LXC will be implemented in Proxmox. I think this could have some good benefits, for example: - probably some...
  3. [SOLVED] TASK ERROR: timeout: no zvol device link for 'vm-700-disk-0' found after 300 sec found.

    Thank you. I've had the same issue. I just exported the pool without importing it. A single command, zpool export POOLNAME, and it started.
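
    For reference, a minimal sketch of that fix; the pool name "tank" is a hypothetical placeholder:

        # Export the pool without re-importing it by hand; per the post above,
        # the VM then started and the missing zvol device link issue was gone.
        zpool export tank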
  4. [SOLVED] Mount via loop device in container

    Yes, apparently it changed (cgroup -> cgroup2):
    lxc.apparmor.profile: lxc-container-default-cgns-with-mounting
    lxc.cgroup2.devices.allow = b 7:* rwm
    lxc.cgroup2.devices.allow = c 10:237 rwm
    lxc.mount.entry = /dev/loop0 dev/loop0 none bind,create=file 0 0
    lxc.mount.entry = /dev/loop1 dev/loop1...
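
    As a sketch of where such overrides would live, assuming container ID 100 (hypothetical) and Proxmox-style raw LXC keys:

        # /etc/pve/lxc/100.conf (the CTID is an assumption)
        lxc.apparmor.profile: lxc-container-default-cgns-with-mounting
        lxc.cgroup2.devices.allow: b 7:* rwm      # loop block devices (major 7)
        lxc.cgroup2.devices.allow: c 10:237 rwm   # /dev/loop-control (char 10:237)
        lxc.mount.entry: /dev/loop0 dev/loop0 none bind,create=file 0 0
        lxc.mount.entry: /dev/loop1 dev/loop1 none bind,create=file 0 0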
  5. ZFS sync=disabled safeness

    That shouldn't have happened according to the documentation, but probably something went wrong. Thanks!
  6. ZFS sync=disabled safeness

    Thanks for your answer, Fabian. Sorry, I didn't make it clear. I understand that when we are talking about a distributed system, it totally depends on the application. If the app can't handle that situation, then of course one node may be confused about the state of the other node. Totally agree...
  7. ZFS sync=disabled safeness

    It's interesting to read. Has anybody really experienced data corruption of any kind, or a corrupted snapshot, during a power loss with sync=disabled? To my understanding, the consequences would be exactly the same as if the power loss had happened ~5 seconds earlier with sync=standard. Am I wrong?
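
    For context, sync is a per-dataset ZFS property, and the ~5 second figure matches the default OpenZFS transaction-group interval (the zfs_txg_timeout tunable). A sketch, with tank/vmdata as a hypothetical dataset:

        zfs get sync tank/vmdata            # standard | always | disabled
        zfs set sync=disabled tank/vmdata   # acknowledge sync writes immediately;
                                            # on power loss, up to roughly one txg
                                            # (~5 s by default) of writes is lost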
  8. [SOLVED] Can't destroy ZVOL from pool "dataset is busy" -> Solution: LVM picked up on VG/PV inside. Need "filter" in lvm.conf

    In my case, the pveproxy process and 3 pveproxy worker processes were using the volume, according to this command: Restarting pveproxy helped. (PVE 7.3-4)
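
    As the thread title says, the underlying fix is an LVM filter so the host never scans PVs/VGs that live inside guest volumes; a sketch for /etc/lvm/lvm.conf, assuming the default /dev/zd* naming for zvols:

        devices {
            # Reject ZFS zvols so host-side LVM cannot activate a PV/VG
            # inside a guest disk and keep the zvol busy.
            global_filter = [ "r|/dev/zd.*|" ]
        }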
  9. DRBD9: both nodes outdated

    drbdadm -- --overwrite-data-of-peer primary vm-221-disk-1
    drbdsetup primary --force vm-221-disk-1
    Neither of these commands worked for me until I brought the resource down on the other nodes. After that, the first one succeeded; I didn't try the second.
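
    A sketch of that order of operations, using the resource name from the post:

        # On the other node(s), take the resource down first:
        drbdadm down vm-221-disk-1
        # Then, on the node that should become primary (the command that succeeded here):
        drbdadm -- --overwrite-data-of-peer primary vm-221-disk-1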
  10. [SOLVED] Attach a raw image as a USB disk to the VM

    The VM args in shantanu's post didn't work for me; they are probably outdated. I use:
    args: -drive id=stick,if=none,format=raw,file=/home/stick.img -device nec-usb-xhci,id=xhci -device usb-storage,bus=xhci.0,drive=stick
    Source: https://qemu-project.gitlab.io/qemu/system/devices/usb.html
    If you use...
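
    The args line can also be set from the CLI; a sketch, assuming VMID 700 and the image path from the post:

        # Writes the raw 'args' option into the VM config (/etc/pve/qemu-server/700.conf)
        qm set 700 --args "-drive id=stick,if=none,format=raw,file=/home/stick.img -device nec-usb-xhci,id=xhci -device usb-storage,bus=xhci.0,drive=stick"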
  11. Can QEMU-KVM execute in LXC on PVE?

    I suppose the TS is talking about this fix:
    lxc.cgroup.devices.allow: c 10:232 rwm
    should be changed to
    lxc.cgroup2.devices.allow: c 10:232 rwm
  12. Online migration / disk move problem

    Thank you for the clarification, Fabian. Unfortunately we can't use DRBD until the size issue is fixed. Hope it will be fixed soon. Best regards, Albert
  13. Online migration / disk move problem

    Thanks for the link, Fabian. Currently we can live with offline migration, but what has really blown my mind is that the migration process alters data! I mean the disk size. If I understand right, it was done deliberately for the debugging process, wasn't it? So it is not related to DRBD and I...
  14. Online migration / disk move problem

    This really is a shame. I can't even migrate to DRBD offline because of the inconsistency in size after migration:
    On ZFS:
    root@dc2:~# blockdev --getsize64 /dev/sda
    2151677952
    After moving to DRBD:
    root@dc2:~# blockdev --getsize64 /dev/sda
    2155372544 (+3694592 bytes = 3608K)
    Back to ZFS...
  15. [SOLVED] Running GitLab in LXC

    I just uncommented/edited this line in /etc/gitlab/gitlab.rb:
    package['modify_kernel_parameters'] = false
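
    A sketch of the full change, assuming the Omnibus package; a reconfigure run applies it:

        # /etc/gitlab/gitlab.rb -- an unprivileged container cannot change host
        # kernel parameters, so tell the package to skip them:
        package['modify_kernel_parameters'] = false

    followed by gitlab-ctl reconfigure.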
  16. Removing vlan id 1 from a trunk

    Hi, you can assign different VLAN tags to the different VM groups you want to isolate, as well as use 'trunks='. I've never come across such a use case myself, but for cutting off untagged traffic completely, your patch is a nice solution.
  17. Shutdown of the Hyper-Converged Cluster (CEPH)

    Thanks for the advice, it might be reasonable in some circumstances. Currently I have to set it even if some nodes fail to start, and remove the flag only during the period of minimum cluster load. You are not serious, are you? :) There are numerous reasons why you would need to shut down the cluster...
  18. Proxmox VE 6 does not boot ubuntu cloud images with cloud-init for VM.

    Hi @saalih416, I actually did it wrong: I used the .img from the .tar.gz file. The names and sizes were the same, as far as I remember, so I thought it was the same image. The README in the currently available .tar.gz archives points out that the image in the archive is a raw ext4 partition. Probably I missed the README...
  19. Shutdown of the Hyper-Converged Cluster (CEPH)

    Hi, can someone explain how to shut down the hyper-converged cluster properly? I suppose the steps should be as follows:
    1. Shut down all VMs on every node
    2. Set the following flags:
    # ceph osd set noout
    # ceph osd set nobackfill
    # ceph osd set norecover
    3. Shut down the nodes only after All...
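
    A sketch of the flag handling around such a shutdown, with the matching unset sequence for startup (the set commands are from the post; the unset order simply mirrors them):

        # Before shutdown, once all VMs are stopped:
        ceph osd set noout
        ceph osd set nobackfill
        ceph osd set norecover
        # ...power off the nodes, later power them back on...
        # Once the cluster is up and healthy again, clear the flags:
        ceph osd unset norecover
        ceph osd unset nobackfill
        ceph osd unset noout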
  20. False message of 2nd authentication step to this forum

    It's done already. Of course it confirms the network, that's clear. I need to confirm the host.
