Search results

  1. elurex

    Degraded Windows VM-Performance on SSD with Ceph-Cluster

    What version of Ceph is running? And may I know the SSD model?
  2. elurex

    [SOLVED] Infiniband Device to LXC

    I solved my own problem. For the LXC conf, add the following:
    lxc.cgroup.devices.allow: a 231:* rwm
    lxc.cgroup.devices.allow: a 10:54 rwm
    lxc.cgroup.devices.allow: a 10:229 rwm
    lxc.autodev: 1
    lxc.hook.autodev: /var/lib/lxc/302/mount-hook.sh
    lxc.apparmor.profile: unconfined
    lxc.mount.auto: cgroup:rw...
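    The conf above references an autodev hook at /var/lib/lxc/302/mount-hook.sh whose contents the excerpt does not show. A minimal sketch of what such a hook might look like, assuming the goal is to recreate InfiniBand device nodes inside the container; the node names and major/minor numbers are examples and must be checked against ls -l /dev/infiniband on the host:

      #!/bin/bash
      # Hypothetical lxc.hook.autodev script: LXC exposes the container's root
      # mount via $LXC_ROOTFS_MOUNT, so device nodes are created under its /dev.
      mkdir -p "${LXC_ROOTFS_MOUNT}/dev/infiniband"
      # Major/minor numbers are examples; verify them on the host first.
      mknod -m 666 "${LXC_ROOTFS_MOUNT}/dev/infiniband/uverbs0" c 231 192
      mknod -m 666 "${LXC_ROOTFS_MOUNT}/dev/infiniband/rdma_cm" c 10 54

    The lxc.cgroup.devices.allow lines above are what then permit the container to open those nodes.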
  3. elurex

    [SOLVED] Infiniband Device to LXC

    Currently I am following the references from https://linuxcontainers.org/lxc/manpages/man5/lxc.container.conf.5.html#lbAT and https://blog.blaok.me/pass-a-device-to-a-linux-container/
  4. elurex

    [SOLVED] Infiniband Device to LXC

    How do I pass through an InfiniBand device to LXC? According to https://lxd.readthedocs.io/en/stable-3.0/containers/#device-types and https://github.com/lxc/lxd/issues/3983...
  5. elurex

    [SOLVED] Moving /var/lib/vz

    Do it as a script:
    mv /var/lib/vz /var/lib/vz.bak
    zfs create [new pool]/var-lib-vz
    zfs set mountpoint=/var/lib/vz [new pool]/var-lib-vz
    zfs mount -a
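    The excerpt stops after zfs mount -a; presumably the data saved in /var/lib/vz.bak still has to be moved back into the new dataset. A minimal sketch of that final step (verify the new dataset is actually mounted at /var/lib/vz first):

      # copy the saved data into the new dataset, then clean up
      cp -a /var/lib/vz.bak/. /var/lib/vz/
      rm -rf /var/lib/vz.bak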
  6. elurex

    Ceph User Survey 2019

    Done... I really wish there were a way bcache could be used with Ceph Nautilus. dm-cache really underperformed.
  7. elurex

    IO delay

    You might need to trim your SATA SSD pool... but beware that trimming shortens your SSD life. After a trim you release more SSD space, and it definitely helps with performance, but on the other hand it will shorten your SSD life.
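    For reference, a rough sketch of how the trim could be issued, depending on whether the pool is ZFS or a regular mounted filesystem; the pool name rpool is a placeholder:

      # ZFS 0.8+ (as shipped with PVE 6.x)
      zpool trim rpool
      zpool status -t rpool   # shows trim progress
      # mounted filesystems that support discard
      fstrim -av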
  8. elurex

    [TUTORIAL] Cockpit ZFS Manager

    I came across Cockpit and its ZFS manager, which works on a PVE 6.x fresh install and also on a PVE 6.x upgrade from pve5to6 with a few ln -s fixes. It can take care of about 98% of the zfs CLI, such as trim, attach, detach, replace, snapshot, rollback, clone, etc... To install Cockpit on PVE 6.0, echo "deb...
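    The repository line in the post is cut off. As a hedged reconstruction, Cockpit for PVE 6.x (which is Debian Buster based) is typically installed from buster-backports, so the steps presumably look something like this; the exact repo line is an assumption, not taken from the original post:

      # assumed reconstruction; the original post's repo line is truncated
      echo "deb http://deb.debian.org/debian buster-backports main" > /etc/apt/sources.list.d/backports.list
      apt update
      apt install -t buster-backports cockpit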
  9. elurex

    I Love Proxmox

    Buy a subscription and show your love.
  10. elurex

    How do I license MS Server 2016 standard and assign cores?

    Same here... I did a lot of V2P just for Windows 10.
  11. elurex

    How do I license MS Server 2016 standard and assign cores?

    100% correct... please be careful with any Windows system virtualization: its licensing is very tricky and varies a lot between OS versions and packages! Right now it is almost impossible to virtualize Windows 10 due to VDA requirements.
  12. elurex

    How do I license MS Server 2016 standard and assign cores?

    Server Standard: 2 Windows Server VMs <--- this is wrong; MS only allows you to do this if your hypervisor is Hyper-V.
  13. elurex

    [SOLVED] Nvidia GPU passthrough code 43

    I am using Proxmox VE 6.0, and by default its CPU args already include the following: -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=proxmox,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,hv_synic,hv_stimer,hv_ipi,kvm=off' The following is my VM conf: agent: 1 bios...
  14. elurex

    [SOLVED] Nvidia GPU passthrough code 43

    I would suggest the following config for your QEMU VM if you are using PVE 6.0:
    hostpci0: 04:00,pcie=1,x-vga=1
    machine: pc-q35-3.1
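    A minimal sketch of how those lines might sit in the VM's config file; the vmid (100), the OVMF line, and the cpu line are illustrative additions, not taken from the original post:

      # /etc/pve/qemu-server/100.conf (illustrative excerpt only)
      bios: ovmf
      machine: pc-q35-3.1
      hostpci0: 04:00,pcie=1,x-vga=1
      cpu: host,hidden=1   # hides the KVM signature from the guest, which helps avoid code 43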
  15. elurex

    Ceph rbd error - sysfs write failed

    No, I don't....
    root@gpu01:/etc/pve/priv/ceph# ls -al
    total 0
    drwx------ 2 root www-data 0 Aug 12 05:10 .
    drwx------ 2 root www-data 0 May  3 05:02 ..
    A temporary workaround for this issue is to reboot the PVE host; then it will stop working again when mounting an additional rbd disk with sysfs write...
  16. elurex

    Ceph rbd error - sysfs write failed

    no... cephx is disabled
  17. elurex

    Ceph rbd error - sysfs write failed

    root@gpu01:~# rbd showmapped
    id  pool  namespace  image          snap  device
    0   rbd              vm-101-disk-1  -     /dev/rbd0
    1   rbd              vm-101-disk-2  -     /dev/rbd1
    2   rbd              vm-101-disk-0  -     /dev/rbd2
    3   rbd              vm-104-disk-1  -     /dev/rbd3
    4   rbd              vm-104-disk-2  -...
  18. elurex

    Ceph rbd error - sysfs write failed

    Does not work........
    root@gpu05:~# rbd feature enable rbd/vm-104-disk-0 exclusive-lock
    root@gpu05:~# rbd feature enable rbd/vm-104-disk-1 exclusive-lock
    root@gpu05:~# rbd feature enable rbd/vm-104-disk-2 exclusive-lock
    /dev/rbd9
    /dev/rbd10
    /dev/rbd11
    rbd: sysfs write failed
    can't unmap rbd...
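    For context only: when krbd refuses to map or unmap an image, the image features object-map, fast-diff, and deep-flatten are the usual suspects on older kernels. A hedged sketch of how that is typically checked (a generic krbd workaround, not necessarily the fix for this thread; image and device names copied from the excerpt above):

      rbd info rbd/vm-104-disk-0 | grep features
      # disable features the kernel client cannot handle, then retry the map
      rbd feature disable rbd/vm-104-disk-0 object-map fast-diff deep-flatten
      rbd unmap -o force /dev/rbd9   # force-unmap a stuck mapping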
  19. elurex

    Proxmox Cluster

    Yes you can, but since you already have an existing running VM setup, when you add the 2nd and 3rd nodes to the cluster you must first make sure /etc/pve/qemu-server/ and /etc/pve/lxc/ have no config files (move all *.conf to /root/qemu-server and /etc/lxc/), then also make sure all your...
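    A sketch of the sequence being described; the backup directories and the 192.168.1.10 address of an existing cluster node are placeholders:

      mkdir -p /root/qemu-server /root/lxc
      mv /etc/pve/qemu-server/*.conf /root/qemu-server/
      mv /etc/pve/lxc/*.conf /root/lxc/
      pvecm add 192.168.1.10   # IP/hostname of a node already in the cluster
      # once the node has joined, move the .conf files back under /etc/pve/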
