Search results

  1. elurex

    Moving /var/lib/vz

    Do it as a script: mv /var/lib/vz /var/lib/vz.bak; zfs create [new pool]/var-lib-vz; zfs set mountpoint=/var/lib/vz [new pool]/var-lib-vz; zfs mount -a
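    A minimal sketch of that sequence, assuming the new pool is named "tank" (a placeholder) and that no guests are writing to /var/lib/vz during the move:

    ```sh
    # Park the old directory, create a dataset mounted at /var/lib/vz,
    # then let ZFS mount everything. "tank" is a placeholder pool name.
    mv /var/lib/vz /var/lib/vz.bak
    zfs create tank/var-lib-vz
    zfs set mountpoint=/var/lib/vz tank/var-lib-vz
    zfs mount -a

    # Optionally copy the old contents back and remove the backup:
    # cp -a /var/lib/vz.bak/. /var/lib/vz/ && rm -rf /var/lib/vz.bak
    ```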
  2. elurex

    Ceph User Survey 2019

    Done... I really wish there were a way bcache could be used with Ceph Nautilus. dm-cache really underperformed.
  3. elurex

    IO delay

    You might need to trim your SATA SSD pool... but beware: trimming shortens your SSD's life. After a trim you free up more SSD space and it definitely helps with performance, but on the other hand it wears the SSD faster.
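    Assuming the pool is ZFS (zpool trim ships with the ZFS 0.8 in PVE 6.x), a manual trim would look roughly like this; "ssdpool" is a placeholder name:

    ```sh
    # Start a manual TRIM of the SSD vdevs and watch its progress
    zpool trim ssdpool
    zpool status -t ssdpool   # -t adds per-vdev trim status

    # Optional: trim continuously instead of in occasional big batches
    zpool set autotrim=on ssdpool
    ```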
  4. elurex

    [TUTORIAL] Cockpit ZFS Manager

    I came across Cockpit and its ZFS manager, which works on a PVE 6.x fresh install and also on a PVE 6.x upgrade via pve5to6 with a few ln -s links. It can take care of 98% of the zfs CLI, such as trim, attach, detach, replace, snapshot, rollback, clone, etc... To install Cockpit on PVE 6.0, echo "deb...
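    The excerpt is cut off at the echo line; a sketch of a typical Cockpit install on the Debian Buster base of PVE 6.x, assuming the truncated command adds the buster-backports repository:

    ```sh
    # Assumption: the echo adds Debian buster-backports, which carries
    # a current Cockpit for the Buster base that PVE 6.x runs on.
    echo "deb http://deb.debian.org/debian buster-backports main" \
        > /etc/apt/sources.list.d/backports.list
    apt update
    apt install -t buster-backports cockpit
    ```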
  5. elurex

    I Love Proxmox

    Buy a subscription and show your love.
  6. elurex

    How do I license MS Server 2016 standard and assign cores?

    Same here... I did a lot of V2P just for Windows 10.
  7. elurex

    How do I license MS Server 2016 standard and assign cores?

    100% correct... please be careful with any Windows system virtualization: its licensing is very tricky and varies a lot between OS versions and packages! Right now it is almost impossible to virtualize Windows 10 due to the VDA requirements.
  8. elurex

    How do I license MS Server 2016 standard and assign cores?

    Server Standard: 2 Windows Server VMs <--- this is wrong; MS only allows you to do this if your hypervisor is Hyper-V.
  9. elurex

    [SOLVED] Nvidia GPU passthrough code 43

    I am using Proxmox VE 6.0, and by default its CPU args already include the following: -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=proxmox,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,hv_synic,hv_stimer,hv_ipi,kvm=off' The following is my VM conf: agent: 1 bios...
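    To confirm which -cpu flags PVE actually generates for a guest, the full QEMU command line can be printed without starting the VM; the VM ID 101 below is only an example:

    ```sh
    # Dump the QEMU command PVE would run and isolate the -cpu argument
    qm showcmd 101 | tr ' ' '\n' | grep -A1 '^-cpu$'
    ```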
  10. elurex

    [SOLVED] Nvidia GPU passthrough code 43

    I would suggest the following config for your QEMU VM if you are using PVE 6.0:
        hostpci0: 04:00,pcie=1,x-vga=1
        machine: pc-q35-3.1
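    A sketch of applying those two settings from the CLI instead of editing the conf file by hand; the VM ID 104 is a placeholder:

    ```sh
    # Pin the q35 3.1 machine type and pass the GPU through as primary VGA
    qm set 104 --machine pc-q35-3.1
    qm set 104 --hostpci0 04:00,pcie=1,x-vga=1
    ```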
  11. elurex

    Ceph rbd error - sysfs write failed

    No, I don't....
        root@gpu01:/etc/pve/priv/ceph# ls -al
        total 0
        drwx------ 2 root www-data 0 Aug 12 05:10 .
        drwx------ 2 root www-data 0 May  3 05:02 ..
    A temporary workaround for this issue is to reboot the PVE host, but then it stops working again later; mounting an additional rbd disk fails with sysfs write...
  12. elurex

    Ceph rbd error - sysfs write failed

    no... cephx is disabled
  13. elurex

    Ceph rbd error - sysfs write failed

    root@gpu01:~# rbd showmapped
    id  pool  namespace  image          snap  device
    0   rbd              vm-101-disk-1  -     /dev/rbd0
    1   rbd              vm-101-disk-2  -     /dev/rbd1
    2   rbd              vm-101-disk-0  -     /dev/rbd2
    3   rbd              vm-104-disk-1  -     /dev/rbd3
    4   rbd              vm-104-disk-2  -...
  14. elurex

    Ceph rbd error - sysfs write failed

    Does not work........
    root@gpu05:~# rbd feature enable rbd/vm-104-disk-0 exclusive-lock
    root@gpu05:~# rbd feature enable rbd/vm-104-disk-1 exclusive-lock
    root@gpu05:~# rbd feature enable rbd/vm-104-disk-2 exclusive-lock
    /dev/rbd9 /dev/rbd10 /dev/rbd11
    rbd: sysfs write failed
    can't unmap rbd...
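    When an unmap hangs like this, it can help to check whether the image still has a watcher and, as a last resort, force the unmap; the image and device names below are the ones from the post, and -o force is a standard krbd unmap option:

    ```sh
    # Show clients still watching the image (a stale watcher blocks unmap)
    rbd status rbd/vm-104-disk-0

    # Force-unmap the stuck device (use with care)
    rbd unmap -o force /dev/rbd9
    ```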
  15. elurex

    Proxmox Cluster

    Yes you can, but since you already have an existing running VM setup, when you add the 2nd and 3rd nodes to the cluster you must first make sure /etc/pve/qemu-server/ and /etc/pve/lxc/ have no config files (move all *.conf to /root/qemu-server and /root/lxc), then also make sure all your...
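    A sketch of that preparation on a node about to join, assuming the existing cluster is reachable at 192.168.1.10 (a placeholder address):

    ```sh
    # Park the local guest configs first; joining replaces /etc/pve
    # with the cluster-wide database.
    mkdir -p /root/qemu-server /root/lxc
    mv /etc/pve/qemu-server/*.conf /root/qemu-server/ 2>/dev/null
    mv /etc/pve/lxc/*.conf /root/lxc/ 2>/dev/null

    # Join the existing cluster (placeholder IP), then move the configs
    # back once the VMIDs are confirmed not to collide.
    pvecm add 192.168.1.10
    ```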
  16. elurex

    Ceph rbd error - sysfs write failed

    rbd image 'vm-104-disk-1':
        size 60 GiB in 15360 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 433efbc6fb9b03
        block_name_prefix: rbd_data.433efbc6fb9b03
        format: 2
        features: layering
        op_features:
        flags...
  17. elurex

    Ceph rbd error - sysfs write failed

    root@gpu01:~# ceph versions
    {
        "mon": {
            "ceph version 14.2.2 (a887fe9a5d3d97fe349065d3c1c9dbd7b8870855) nautilus (stable)": 3
        },
        "mgr": {
            "ceph version 14.2.2 (a887fe9a5d3d97fe349065d3c1c9dbd7b8870855) nautilus (stable)": 1
        },
        "osd": {
            "ceph version...
  18. elurex

    (Linux) GPU passthrough issues

    please try pc-q35-3.1 instead of q35 or i440fx if you are using PVE 6.0
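    To see which q35 machine versions the installed QEMU build actually offers before pinning one, the machine list can be queried directly (kvm is the QEMU binary PVE ships):

    ```sh
    # List the q35 machine types known to this QEMU build
    kvm -machine help | grep q35
    ```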
  19. elurex

    Ceph rbd error - sysfs write failed

    I am running PVE 6.0, and on this PVE node I have already started 2 VMs with rbd disks on my Ceph storage. However, sometimes when I need to start a third VM using an rbd disk, PVE errors out with the following message. It can only be solved by rebooting the PVE node, after which I can start 3 VMs or more using rbd...
