Hello,
I created several RBDs in a previous Ceph version, and I typically mapped these RBDs using the command
rbd map --user <user> --keyring <path/to/keyring> <pool>/<image>
However, I now get this error when trying to map the image GML:
2023-01-02T14:58:22.635+0100 7f0559c19700 -1...
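In case it helps to narrow this down: after an upgrade, map failures are often caused by auth caps or by image features the kernel client cannot handle. A hedged checklist (pool, image, and user names below are placeholders for your own):

```shell
# 1. Verify the client's capabilities -- a caps mismatch after an
#    upgrade is a common cause of "rbd map" failures:
ceph auth get client.rbduser

# 2. Check whether the image uses features the kernel client may not support:
rbd info backup/GML

# 3. If unsupported features are listed (e.g. object-map, fast-diff),
#    disable them and retry the map:
rbd feature disable backup/GML object-map fast-diff deep-flatten
rbd map --user rbduser --keyring /etc/ceph/ceph.client.rbduser.keyring backup/GML
```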
Hello,
I have defined a huge pool for database backups in Ceph.
Now I need to migrate these backups to another storage.
Therefore I must calculate the total disk space used by the backups.
I could display the RBDs of the relevant pool representing a database backup and sum up the RBD sizes...
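Instead of adding the sizes by hand, `rbd du -p <pool> --format json` reports per-image usage that a small script can total. A sketch, assuming the JSON layout of recent Ceph releases (an "images" array with a "used_size" field in bytes); the pool name is whatever yours is:

```python
import json
import subprocess

def sum_used_bytes(rbd_du_json: str) -> int:
    """Sum the "used_size" field over all images in `rbd du --format json` output."""
    data = json.loads(rbd_du_json)
    return sum(img["used_size"] for img in data.get("images", []))

def pool_used_bytes(pool: str) -> int:
    """Run `rbd du -p <pool> --format json` against the cluster and total the usage."""
    out = subprocess.run(
        ["rbd", "du", "-p", pool, "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return sum_used_bytes(out)
```

Note that the plain-text `rbd du -p <pool>` output also prints a total row at the bottom, if a one-off number is all that's needed.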
Hello,
after a server crash I was able to repair the cluster.
The health check looks OK, but there's this warning for 68 OSDs:
unable to load:snappy
All OSDs are located on the same cluster node.
Therefore I checked the version of the related file libsnappy1v5; it was 1.1.9.
Comparing this file...
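For reference, a quick way to check the snappy situation on a Debian-based OSD node (the ceph-osd path below is the usual default; adjust if yours differs):

```shell
# Installed package version of the snappy library:
dpkg -s libsnappy1v5 | grep -i version

# Confirm the OSD binary can actually resolve the snappy shared library:
ldd /usr/bin/ceph-osd | grep -i snappy
```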
Hello,
I had a severe issue with the OS disk (using BTRFS) and had to replace it.
So I started a block copy using dd from the old disk to the new SSD; I performed the following steps:
1. block copy with dd from old to new device
2. extend root partition
3. resize BTRFS of relevant partition
Then I...
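The three steps above can be sketched like this (device names and the partition number are placeholders; double-check with lsblk before running anything, since dd to the wrong target is destructive):

```shell
# 1. Block copy the old OS disk to the new SSD:
dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync

# 2. Extend the root partition (here: partition 2) to fill the new disk:
parted /dev/sdY resizepart 2 100%

# 3. Grow the BTRFS filesystem to the new partition size
#    (works online, with the filesystem mounted at /):
btrfs filesystem resize max /
```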
OK.
My conclusion from this discussion is:
It's not recommended to run TrueNAS Core in a VM if best disk performance is required for Proxmox, because the network stack will limit it.
If virtualization features are important, then ZoL should be used in Proxmox, and the NAS should not be virtualized.
Any...
So, if I deploy TrueNAS Core in a VM, I must accept the disadvantage that shared storage used by Proxmox (e.g. ISOs, images, etc.) will always be limited by the network stack?
If I set up ZoL in Proxmox, can I use this storage "directly" in a VM running TrueNAS Core?
And what about the...
Hello,
I have a server with ECC RAM and multiple disks, meaning it's equipped like a NAS.
However, I want to install Proxmox and run a VM with TrueNAS Core.
All relevant disks (e.g. WD Red) will be configured as passthrough for this VM.
Then in TrueNAS Core I will configure these drives for ZFS...
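A minimal sketch of handing a whole disk to the VM with qm (the VM id 100 and the disk serial are placeholders; using the stable /dev/disk/by-id path is the usual recommendation so the mapping survives reboots):

```shell
# Attach the physical disk to VM 100 as a SCSI device:
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX-XXXXXXXX

# Verify the mapping in the VM config:
grep scsi1 /etc/pve/qemu-server/100.conf
```

Note that this is disk passthrough via QEMU, not PCI passthrough of the controller; TrueNAS will see the disks, but SMART data may be limited compared to passing through a whole HBA.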
Hello,
my PVE node has 3 NICs, none of which supports PCI passthrough.
One VM should run OPNsense as additional router in the lab.
My ISP provided a router including a modem that does not support VLANs.
Port 4 of this router provides a guest LAN (192.168.179.0/24) that is logically separated from the main LAN.
eno1...
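Without PCI passthrough, the usual approach is one Linux bridge per physical NIC, with the OPNsense VM attached to the bridges. A sketch of the /etc/network/interfaces fragment for the guest-LAN uplink, assuming (my assumption) eno2 is the NIC cabled to the router's port 4:

```
auto eno2
iface eno2 inet manual

# Bridge for the router's guest LAN (192.168.179.0/24); the OPNsense VM's
# WAN interface attaches here, the PVE host itself gets no address on it.
auto vmbr1
iface vmbr1 inet manual
    bridge-ports eno2
    bridge-stp off
    bridge-fd 0
```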
Hello,
I'm running an old PVE 6.4 node with several VMs.
Now I want to migrate these VMs to a new PVE 7.x node.
Among the VMs is one Win10 deployment, and this VM should be configured for optimal I/O performance on the new PVE 7.x node.
What is the best practice for migrating a Win10 VM to a...
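For the I/O side, a commonly recommended setup on PVE 7 is VirtIO SCSI with an iothread; a hedged sketch (VM id 101, storage `local-lvm`, and the disk volume name are placeholders, and the Windows guest needs the VirtIO drivers installed before switching the controller):

```shell
# VirtIO SCSI single controller so the disk gets its own iothread:
qm set 101 --scsihw virtio-scsi-single

# Attach the disk with iothread, TRIM passthrough and SSD emulation:
qm set 101 --scsi0 local-lvm:vm-101-disk-0,iothread=1,discard=on,ssd=1
```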
QoS must then be configured on the managed switch (D-Link DGS-1100-16).
Is my understanding of this correct?
The problem could then be that my switch only supports QoS 802.1p, and this only allows configuration per port (see screenshot).
Or must QoS for a VLAN be configured differently...
There's no issue with the NIC Intel I350, as stated in my initial posting.
Can you please advise how to proceed after creating a file in /etc/modprobe.d/vfio.conf and rebuilding the kernel?
If I try to start the VM, the server behaves like before, i.e. error message + reboot.
I have no chance to...
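For comparison, the usual vfio-pci binding procedure looks roughly like this (the vendor:device id below is a placeholder; look up the real id of the device to be passed through with `lspci -nn` first):

```shell
# Bind the device to vfio-pci at boot (placeholder id):
echo "options vfio-pci ids=1234:5678" > /etc/modprobe.d/vfio.conf

# On PVE, applying this means rebuilding the initramfs (not the kernel),
# then rebooting:
update-initramfs -u -k all
reboot

# After reboot, confirm the device is bound to vfio-pci:
lspci -nnk | grep -B3 vfio-pci
```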