Search results

  1. J

    `/var/lib/ceph/osd/ceph-<ID>/keyring` is gone.

    Solved:
    ```
    # /usr/sbin/ceph-volume-systemd lvm-{osd_id}-{lvm_name(?)}
    # systemctl start ceph-osd@{osd_id}
    ```
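    The fix above relies on ceph-volume's systemd unit naming, which follows the `lvm-<osd_id>-<osd_fsid>` pattern. A minimal sketch of building that unit name; the id and fsid values below are placeholders (on a real node they come from `ceph-volume lvm list`):

    ```shell
    #!/bin/sh
    # Build the ceph-volume unit name from placeholder values; substitute the
    # real OSD id and OSD fsid reported by `ceph-volume lvm list`.
    OSD_ID=2
    OSD_FSID=0b1c2d3e-aaaa-bbbb-cccc-1234567890ab
    UNIT="ceph-volume@lvm-${OSD_ID}-${OSD_FSID}.service"
    echo "$UNIT"
    # Then, as root (not executed here):
    #   systemctl start "$UNIT"
    #   systemctl start "ceph-osd@${OSD_ID}.service"
    ```

    The ceph-volume unit re-creates the OSD's tmpfs contents (including the keyring) before the `ceph-osd@` unit is started.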
  2. J

    `/var/lib/ceph/osd/ceph-<ID>/keyring` is gone.

    The Ceph OSD is dead and there is no keyring. Why? The keyring for a few OSDs on a few nodes is gone.
    ```
    /var/lib/ceph/osd# find .
    ./ceph-3
    ./ceph-3/lockbox.keyring
    ./ceph-2
    ./ceph-2/lockbox.keyring
    ```
    LOG: Jan 21 22:53:30 pve-node-stor-08101 systemd[1]: Starting Ceph object storage daemon osd.2... Jan...
  3. J

    notice: RRDC/RRD update error

    As you said, there were several pvestatd processes.
    ```
    # killall pvestatd
    # systemctl stop rrdcached
    # rm -rf /var/lib/rrdcached/db
    # systemctl start rrdcached
    # systemctl start pvestatd
    ```
    The noisy log has quieted down, but I'm still getting errors. Jan 21 22:22:31 pve-node-stor-08101 pvestatd[1449724]...
  4. J

    notice: RRDC/RRD update error

    Not all methods on the forum work.
    1. NTP sync (now, all nodes are synced to < 100 ms)
    2. `systemctl stop rrdcached; rm -rf /var/lib/rrdcached/db; systemctl start rrdcached; systemctl restart pve-cluster`
    3. The RTC clock is also synchronized.
    What's strange is that nodes that are themselves are marked...
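    Combining the steps quoted in these two posts, a dry-run sketch of the reset sequence; it only prints the commands, since the real run stops services and deletes `/var/lib/rrdcached/db` and should be reviewed and executed as root on the affected node:

    ```shell
    #!/bin/sh
    # Print the RRD reset sequence instead of executing it; run the commands
    # manually as root once reviewed (one of them deletes the rrdcached db).
    STEPS='systemctl stop pvestatd
    systemctl stop rrdcached
    rm -rf /var/lib/rrdcached/db
    systemctl start rrdcached
    systemctl restart pve-cluster
    systemctl start pvestatd'
    printf '%s\n' "$STEPS"
    ```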
  5. J

    During PCI passthrough, the VM does not start with qm, but it does start when running /usr/bin/kvm directly.

    Running the `qm start` command or starting from the GUI throws a `got timeout` error and the VM does not start. However, if I run the following command directly in the shell, it works. ``` /usr/bin/kvm -id 1003 -name pc-01 -no-shutdown -chardev...
  6. J

    Why is SMM disabled?

    Patch file: https://git.proxmox.com/?p=pve-qemu.git;a=blob;f=debian/patches/pve/0005-PVE-Config-smm_available-false.patch;h=59fd521a7107cf419764cdf78399a0128dd15561;hb=HEAD
    https://lists.proxmox.com/pipermail/pve-devel/2015-September/017486.html : kernel 4.2 and qemu 2.4 machine introduce...
  7. J

    "Irq 17: nobody cared" problem with PCI passthrough

    I solved it with the "HDMI Audio crackling/broken" section of https://pve.proxmox.com/wiki/Pci_passthrough. It would be nice if this content were explained in the troubleshooting wiki.
  8. J

    "Irq 17: nobody cared" problem with PCI passthrough

    Version:
    ```
    proxmox-ve: 6.3-1 (running kernel: 5.4.73-1-pve)
    pve-manager: 6.3-2 (running version: 6.3-2/22f57405)
    pve-kernel-5.4: 6.3-1
    pve-kernel-helper: 6.3-1
    pve-kernel-libc-dev: 5.4.106-1
    pve-kernel-5.4.73-1-pve: 5.4.73-1
    ceph-fuse: 12.2.11+dfsg1-2.1+b1
    corosync: 3.0.4-pve1
    criu: 3.11-3...
    ```
  9. J

    IRQ changed during PCI Passthrough: irq 17: nobody cared (try booting with the "irqpoll" option)

    Before starting the VM:
    ```
    02:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981 (prog-if 02 [NVM Express])
            Interrupt: pin A routed to IRQ 17
            NUMA node: 0
            Kernel driver in use: nvme
    07:00.0 VGA compatible controller: NVIDIA...
    ```
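    The `Interrupt: pin A routed to IRQ 17` line in the output above is what changes when the problem occurs. A small sketch of pulling the IRQ number out of such a line; the sample line is hardcoded here, whereas on a real host it would come from `lspci -v -s 02:00.0`:

    ```shell
    #!/bin/sh
    # Extract the IRQ number from an lspci "Interrupt:" line. LINE is a sample
    # copied from the output above, not live lspci output.
    LINE='Interrupt: pin A routed to IRQ 17'
    IRQ=$(printf '%s\n' "$LINE" | sed -n 's/.*routed to IRQ \([0-9][0-9]*\).*/\1/p')
    echo "$IRQ"
    ```

    Comparing this value before and after `qm start` shows whether the routing moved, which is what triggers the "irq 17: nobody cared" message.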
  10. J

    Are there any plans to support virgl?

    I want to get graphics acceleration using virtio-gpu. However, Proxmox's QEMU doesn't seem to support virgl. Are there any plans to add it? Is it because it still doesn't support virgl over the network? Where can I find information on this? Or is there a way to implement it in...
  11. J

    nested virtualization is not working

    It just seems that vmx is invisible with certain kernels (the Xen kernel). The actual problem in this case was an IOMMU problem, not vmx visibility. https://wiki.qemu.org/Features/VT-d I looked at this and fixed it. But why doesn't the "intel-iommu" (vIOMMU) setting exist in the GUI for...
  12. J

    nested virtualization is not working

    Of course I did. /sys/module/kvm_intel/parameters/nested is also "Y".
  13. J

    nested virtualization is not working

    Version: 6.3-2
    Proxmox /proc/cpuinfo:
    ```
    processor   : 0
    vendor_id   : GenuineIntel
    cpu family  : 6
    model       : 60
    model name  : Intel(R) Xeon(R) CPU E3-1280 v3 @ 3.60GHz
    stepping    : 3
    microcode   : 0x28
    cpu MHz     : 3577.490
    cache size  : 8192 KB...
    ```
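    As a quick sanity check for the vmx-visibility question in this thread, the flag can be tested against the `flags` line of /proc/cpuinfo. The FLAGS string below is a shortened placeholder, not the full flag set of this CPU:

    ```shell
    #!/bin/sh
    # Check whether "vmx" appears in a cpuinfo flags line; FLAGS is a
    # placeholder sample, not read from a live /proc/cpuinfo.
    FLAGS='fpu vme de pse tsc msr vmx ept'
    case " $FLAGS " in
      *' vmx '*) HAS_VMX=yes ;;
      *)         HAS_VMX=no ;;
    esac
    echo "$HAS_VMX"
    # On a real Intel host, also check /sys/module/kvm_intel/parameters/nested
    # (expect Y), as mentioned earlier in the thread.
    ```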
  14. J

    [SOLVED] VM Disk over ceph so slow, But rbd bench is good.

    RBD bench:
    ```
    # rbd bench --io-total=1G --io-size=1M --io-type read --io-threads 1 --io-pattern seq .../vm-1004-disk-0
    bench  type read io_size 1048576 io_threads 1 bytes 1073741824 pattern sequential
      SEC       OPS   OPS/SEC     BYTES/SEC
        1       138    144.19  151191864.43
        2       355...
    ```
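    For reading the bench output above: the BYTES/SEC column is raw bytes, so the first-second figure converts to MiB/s like this (awk handles the float division; 1048576 is the io_size from the command line):

    ```shell
    #!/bin/sh
    # Convert the first-second BYTES/SEC value from the bench output to MiB/s.
    MIBS=$(awk 'BEGIN { printf "%.1f", 151191864.43 / 1048576 }')
    echo "$MIBS"
    ```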
  15. J

    Can SRV TCP records be used with PVE RBD storage?

    How do I write the storage.cfg file? That's the question.
  16. J

    Can SRV TCP records be used with PVE RBD storage?

    Hello, https://pve.proxmox.com/wiki/Storage:_RBD describes only monhost. According to https://docs.ceph.com/en/latest/rados/configuration/mon-lookup-dns/, Ceph supports SRV records for monitor lookup. Can I use SRV records in Proxmox too?
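    For reference, the monhost-based form that the wiki documents looks like this in /etc/pve/storage.cfg (storage id, pool, and addresses are placeholders); whether an SRV record can replace the monhost list is exactly the open question here:

    ```
    rbd: ceph-rbd
            pool rbd
            content images
            monhost 10.0.0.1 10.0.0.2 10.0.0.3
            username admin
    ```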
  17. J

    Inbound broadcast packets dropped.

    Thank you for the answer about IGMP. Of course, there are two interfaces; I only used one in the above configuration to narrow down the cause of the problem.
  18. J

    Inbound broadcast packets dropped.

    Hi, I would like RSTP and IGMP snooping to be supported without another third-party tool.
