Search results

  1. Ceph: rbd map command fails with error --user is deprecated, use --id

    Hello, I created several RBDs in a previous Ceph version, and I typically mapped these RBDs using the command rbd map --user <user> --keyring <path/to/keyring> <pool>/<image>. However, I now get this error when trying to map the image GML: 2023-01-02T14:58:22.635+0100 7f0559c19700 -1...
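Going by the deprecation message, the likely fix is just swapping --user for --id; a hedged sketch that reuses the post's own placeholders (it needs a live cluster to run):

```shell
# --user was deprecated in favor of --id (the CephX user without the
# "client." prefix); placeholders as in the original command:
rbd map --id <user> --keyring <path/to/keyring> <pool>/<image>

# equivalent, spelling out the full CephX name instead:
rbd map --name client.<user> --keyring <path/to/keyring> <pool>/<image>
```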
  2. [SOLVED] Ceph: calculate used disk space per pool

    Hello, I have defined a huge pool for database backups in Ceph. Now I need to migrate these backups to another storage. Therefore I must calculate the total disk space used by the backups. I could display the RBDs of the relevant pool representing a database backup and sum up the RBD size...
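As a sketch: `ceph df detail` reports per-pool totals, and `rbd du -p <pool>` lists per-image usage (recent Ceph versions append a summary row). The awk line below shows how a USED column could be summed by hand; it runs over a hypothetical sample of output, with invented names and sizes:

```shell
# On the cluster itself:
#   ceph df detail      # per-pool STORED/USED totals
#   rbd du -p <pool>    # per-image usage within one pool

# Summing a USED column manually, over a hypothetical sample of output:
sample='NAME PROVISIONED USED
db-backup-01 500G 120G
db-backup-02 500G 80G'
echo "$sample" | awk 'NR>1 {gsub(/G/,"",$3); sum+=$3} END {print sum "G"}'
```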
  3. [SOLVED] Ceph health warning: unable to load:snappy

    Issue fixed. After restarting the related service ceph-osd@<n>.service, the warning is gone.
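The fix described above, as a sketch (substitute each affected OSD id for <n>):

```shell
# restart the OSD daemon that logged the warning, then re-check health:
systemctl restart ceph-osd@<n>.service
ceph health detail
```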
  4. [SOLVED] Ceph health warning: unable to load:snappy

    2022-12-13 07:50:00.000324 mon.ld5505 [WRN] BLUESTORE_NO_COMPRESSION BlueStore compression broken on 68 OSD(s) 2022-12-13 07:50:00.000347 mon.ld5505 [WRN] osd.0 unable to load:snappy 2022-12-13...
  5. [SOLVED] Ceph health warning: unable to load:snappy

    Hello, after a server crash I was able to repair the cluster. The health check looks OK, but there's this warning for 68 OSDs: unable to load:snappy. All OSDs are located on the same cluster node. Therefore I checked the version of the related file libsnappy1v5; this was 1.1.9. Comparing this file...
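A minimal sketch of that version check, assuming a Debian-based node:

```shell
# version of the installed snappy runtime package:
dpkg -s libsnappy1v5 | grep -i '^version'

# which snappy library the OSD binary actually links against:
ldd /usr/bin/ceph-osd | grep -i snappy
```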
  6. EFI boot fails after block copy to new device

    I have attached a screenshot that documents the UUIDs of the affected server. What I can see is that the UUIDs are correct.
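For reference, those UUIDs can also be cross-checked from the shell; all three commands are standard:

```shell
blkid              # UUID / PARTUUID of every partition
cat /etc/fstab     # UUIDs the OS expects at mount time
efibootmgr -v      # PARTUUIDs referenced by the EFI boot entries
```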
  7. EFI boot fails after block copy to new device

    Hello, I had a severe issue with the OS disk (using BTRFS) and had to replace it. This means I started a block copy using dd from the old disk to the new SSD; I performed the following steps: 1. block copy with dd from old to new device 2. extend the root partition 3. resize BTRFS of the relevant partition. Then I...
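The three steps can be sketched as follows; /dev/sdX (old disk), /dev/sdY (new SSD), the partition number, and the mount point are all assumptions to be replaced:

```shell
# 1. block copy from old to new device (overwrites /dev/sdY -- verify first!)
dd if=/dev/sdX of=/dev/sdY bs=4M conv=noerror,sync status=progress

# 2. grow the root partition (here assumed to be partition 2) to 100%:
parted /dev/sdY resizepart 2 100%

# 3. grow the mounted BTRFS filesystem to fill the partition:
btrfs filesystem resize max /
```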
  8. Using ZFS Pool as "local" storage

    OK. My conclusion from this discussion is: it's not recommended to run TrueNAS Core in a VM if best disk performance is required for Proxmox, because the network stack will limit this. If virtualization features are important, then ZoL should be used in Proxmox, and the NAS will not be virtualized. Any...
  9. Using ZFS Pool as "local" storage

    What is a "soft passthrough"?
  10. Using ZFS Pool as "local" storage

    So, if I deploy TrueNAS Core in a VM, I must accept the disadvantage that shared storage used by Proxmox, e.g. ISOs, images, etc., will always be limited by the network stack? If I set up ZoL in Proxmox, can I utilize this storage "directly" in a VM running TrueNAS Core? And what about the...
  11. Using ZFS Pool as "local" storage

    Hello, I have a server with ECC RAM and multiple disks, meaning it's equipped like a NAS. However, I want to install Proxmox and run a VM with TrueNAS Core. All relevant disks (e.g. WD Red) will be configured as passthrough for this VM. Then in TrueNAS Core I will configure these drives for ZFS...
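As a sketch, whole-disk passthrough in Proxmox is done per disk with qm set; the VMID 100 and the by-id path below are placeholders:

```shell
# attach a physical disk to the TrueNAS VM; /dev/disk/by-id paths are
# stable across reboots, unlike /dev/sdX names:
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX-<serial>
```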
  12. [SOLVED] PVE node with 2 NICs and OPNsense VM

    Hello, my PVE node has 3 NICs, none of which supports PCI passthrough. One VM should run OPNsense as an additional router in the lab. My ISP provided a router incl. modem that does not support VLANs. Port 4 of this router provides a guest LAN 192.168.179.0/24 that is logically separated from the LAN. eno1...
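Without PCI passthrough, one way to hand the guest-LAN port to OPNsense is a dedicated bridge; a config sketch for /etc/network/interfaces, assuming eno1 is the NIC cabled to port 4 of the ISP router and serves as the OPNsense WAN:

```shell
# /etc/network/interfaces fragment (assumption: eno1 = uplink to router port 4)
auto vmbr1
iface vmbr1 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```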
  13. [SOLVED] Migration VM Win10 from PVE 6.4 to PVE 7.x

    Hello, I'm running an old PVE 6.4 node with several VMs. Now I want to migrate these VMs to a new PVE 7.x node. Among the VMs is one Win10 deployment, and this VM should be configured for optimal I/O performance on the new PVE 7.x node. What is the best practice for migrating a Win10 VM to a...
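One common route, as a hedged sketch: back up on the old node and restore on the new one. The VMID 100, the storage names, and the archive path are placeholders; the timestamp in the dump filename is left as one too:

```shell
# on the PVE 6.4 node -- consistent backup while the VM is stopped:
vzdump 100 --storage backup --mode stop

# on the PVE 7.x node -- restore onto the target storage:
qmrestore /mnt/pve/backup/dump/vzdump-qemu-100-<timestamp>.vma.zst 100 --storage local-zfs
```

For the I/O side, the usual advice for Windows guests is VirtIO SCSI with the virtio drivers installed in the guest.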
  14. Proxmox + OPNsense + (Cisco) switch + VLANs: How to implement?

    QoS must then be configured via the managed switch (D-Link DGS-1100-16). Is my understanding correct? The problem could then be that my switch only supports QoS 802.1p, and this allows configuration per port (see screenshot). Or does QoS for a VLAN have to be configured differently...
  15. PCI passthrough failure - kvm: vfio: Cannot reset device 0000:00:1f.6, no available reset mechanism.

    There's no issue with the NIC Intel I350, as stated in my initial posting. Can you please advise how to proceed after creating a file in /etc/modprobe.d/vfio.conf and rebuilding the kernel? If I try to start the VM, the server behaves as before, meaning error message + reboot. I have no chance to...
  16. PCI passthrough failure - kvm: vfio: Cannot reset device 0000:00:1f.6, no available reset mechanism.

    Hello, in my PVE host there are 2 NICs: 1 onboard Intel I219-LM, 1 PCI Intel I350 quad port. I want to pass through the NIC Intel I219-LM, but when I start the relevant VM I get this error and the host reboots: $ sudo qm start 100 kvm: vfio: Cannot reset device 0000:00:1f.6, no available reset...
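The usual vfio-pci binding, as a sketch; the vendor:device id must be taken from lspci on the actual host, so it is left as a placeholder here:

```shell
# find the id of the I219-LM (prints something like [8086:....]):
lspci -nn | grep -i i219

# /etc/modprobe.d/vfio.conf -- bind that id to vfio-pci at boot:
#   options vfio-pci ids=<vendor:device>

# apply the change and reboot:
update-initramfs -u -k all
```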
  17. Proxmox + OPNsense + (Cisco) switch + VLANs: How to implement?

    Given the currently available 5 NICs (all 1GB), two models are up for selection: a) 2-NIC LACP bond Proxmox host + 2-NIC LACP bond OPNsense LAN (passthrough) + 1 NIC OPNsense WAN (passthrough); b) 4-NIC LACP bond Proxmox host + 1 NIC OPNsense WAN (passthrough) + virtual bridge. Since no...
  18. Proxmox + OPNsense + (Cisco) switch + VLANs: How to implement?

    As far as I understand, the VLANs are attached to the OPNsense LAN interface. This means that all communication of all VLANs passes through this interface, which is connected to the managed switch. In your previous post you pointed out the advantage of attaching all NICs to the OPNsense VM...
  19. Proxmox + OPNsense + (Cisco) switch + VLANs: How to implement?

    For a Proxmox host with 5 NICs I had planned this setup: eth0, eth1: bond0 + vmbr0, VLAN Proxmox VE Guest, VLAN DMZ, VLAN Smarthome, VLAN SAN, ... eth2: vmbr1, Proxmox VE management eth3: PCI passthrough, WAN, OPNsense eth4: PCI passthrough, LAN, OPNsense My assumption was that most of the...
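The eth0/eth1 part of that plan, written out as an /etc/network/interfaces sketch; the bond mode and VLAN range are assumptions:

```shell
# /etc/network/interfaces fragment: LACP bond + VLAN-aware bridge
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```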
  20. Proxmox + OPNsense + (Cisco) switch + VLANs: How to implement?

    One could also create the LACP bond on the Proxmox host. Question: why do you recommend the bond on the OPNsense side? What advantage does that have over a bond on the Proxmox host?
