A SAN is normally a shared device of some sort. When you say you have "two" SANs, do you mean two boxes serving independent iSCSI LUNs, or two boxes in a failover capacity (meaning one set of LUNs)?
In either case, this becomes a simple...
@fiona Confirmed that 9.1.5 is working correctly with Veeam. The spurious errors from rborg's post are still present, but those also exist on 9.1.3. Thank you.
Thank you for clarifying. There are many ZFS experts in this forum; I am not one of them. That said, I suspect that the ZFS-over-iSCSI plugin is sufficiently different from the local ZFS plugin, where ZFS replication is primarily integrated and...
There are two independent servers running Ubuntu 24.04, and the SSD storage on each is shared as an iSCSI target. In the Proxmox Datacenter, both SANs are integrated using ZFS over iSCSI. This seems to be working without any issues.
It is not a...
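For anyone trying to reproduce a setup like this, a ZFS-over-iSCSI entry in /etc/pve/storage.cfg looks roughly like the sketch below. The storage ID, portal IP, target IQN and pool name are placeholders, not the poster's actual values:

```
zfs: san1
        portal 192.168.10.11
        target iqn.2003-01.org.linux-iscsi.san1:tank
        pool tank
        iscsiprovider LIO
        blocksize 8k
        sparse 1
        content images
```

With Ubuntu targets, LIO is the usual iscsiprovider; one such entry would be needed per SAN box.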
A better option is to avoid the PERC HBA-mode drama and get a Dell HBA330. It's a true IT-mode storage controller based on the LSI3008 chip. Real cheap to get.
Just make sure to remove any existing virtual disks prior to use and flash to latest...
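As a sanity check after flashing, Broadcom's sas3flash utility can list SAS3008-based controllers along with their firmware and BIOS versions (the controller index 0 below is an assumption):

```bash
# List all attached SAS3-generation controllers with firmware/BIOS versions
sas3flash -listall

# Show details for the first controller
sas3flash -c 0 -list
```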
Hm okay, I also asked in a Telegram channel in parallel, and there I was told that I could mount the NFS share directly on the Proxmox host and then pass it through to the respective LXC container.
Then I would have to...
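The approach suggested there (mount the NFS share on the host, then bind-mount it into the container) might look like the following sketch; the NFS server address, export path, mountpoints and container ID 101 are all assumptions:

```bash
# On the Proxmox host: mount the NFS share
# (persist it via /etc/fstab or a systemd mount unit, not just a one-off mount)
mkdir -p /mnt/nfs-media
mount -t nfs 192.168.1.50:/export/media /mnt/nfs-media

# Pass the host directory into LXC container 101 as bind mount point mp0
pct set 101 -mp0 /mnt/nfs-media,mp=/mnt/media
```

One advantage of this pattern is that unprivileged containers can use the data without needing NFS client support inside the container itself.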
As I ran into the same problem today, I wanted to share the resulting bash script for changing the boot order. On PVE 9 all required tools are already available; on PVE 8 the package virt-firmware may have to be installed first, as it contains...
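The actual script is truncated above, but as a starting point, virt-fw-vars from the virt-firmware package can dump the current EFI variables of an OVMF varstore, including the Boot#### entries and BootOrder (the varstore path here is just an example):

```bash
# Print all EFI variables from an extracted OVMF varstore,
# including Boot#### entries and the current BootOrder
virt-fw-vars --input /tmp/vm-100-efivars.fd --print
```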
Ok, thanks a lot for the info. Can I ask why you suggest 8k for RAID-10 and 16k for RAIDZ1? How are you calculating that?
If I change this now, new virtual machines will use the new values, correct? I can backup/restore the existing ones at a...
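To verify which block size a newly created disk actually got, the zvol can be queried directly; the pool/dataset path and VM disk name below are examples, not from this thread:

```bash
# Show the volblocksize of one VM disk zvol (fixed at creation time)
zfs get volblocksize rpool/data/vm-100-disk-0

# The default for new disks comes from the storage definition
grep -B1 -A6 'zfspool:' /etc/pve/storage.cfg
```

Note that volblocksize is fixed when the zvol is created, which is why existing disks need a backup/restore (or move-disk) to pick up the new value.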
Sorry for necroposting but I want to add some considerations for anyone stumbling onto this thread in the future:
Physical redundancy/separation (separate NICs, cables, switches etc.) is probably the most important part of a redundant cluster...
@jtru
The OS and file system are the same.
fsync is indeed much slower on the NVMe, but what could be causing this?
The Proxmox and VM configurations are identical (should be :D).
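To pin down the fsync difference, it may help to run the same synthetic workload on both hosts; a quick sketch (the target directory is an assumption, point it at the storage in question):

```bash
# fio: 4k synchronous writes with an fsync after every write,
# run identically on both hosts for a direct comparison
fio --name=fsync-test --ioengine=sync --rw=write --bs=4k --size=256M \
    --fsync=1 --directory=/var/lib/vz --runtime=30 --time_based

# Proxmox's own quick benchmark also reports an FSYNCS/SECOND figure
pveperf /var/lib/vz
```

If the NVMe only falls behind with --fsync=1 but not without it, that usually points at missing power-loss protection (consumer NVMe) rather than a configuration problem.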
Using something like Zabbix or Checkmk is obviously better, because with them you can also monitor VMs on other hypervisors and even physical machines.
They too cache the last seen value, so even if the VM is offline you can still see if the VM...
Which means that, if I understand this correctly, vSphere couldn't do it either, and you had to use a third-party tool to achieve what you wanted. ;)
If we now apply this logic to Proxmox, for which we have already established that the QEMU...
How do you monitor the availability and function of the services running inside the VM? For that you need dedicated monitoring software anyway. I'm really baffled: are there seriously professional IT departments without a dedicated...
PVE cannot determine the free disk space inside the VM from the outside easily.
It would have to detect and parse the disk partition table and then read the filesystem metadata.
This stuff is trivial inside the VM, because there you have all the...
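That said, if the QEMU guest agent is installed inside the VM, the host can simply ask the guest for this information instead of parsing disks itself. A sketch, with VM ID 100 as a placeholder:

```bash
# Ask the guest agent for per-filesystem info (mountpoints, type,
# used-bytes/total-bytes with a recent agent version)
qm guest cmd 100 fsinfo

# The same data via the Proxmox API
pvesh get /nodes/$(hostname)/qemu/100/agent/get-fsinfo
```

This is also the data path most monitoring tools use when they show guest disk usage without an in-VM agent of their own.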
Use a monitoring tool like Zabbix, Icinga, Prometheus or PRTG. I'm a bit baffled that somebody could go twenty years without one. I couldn't do my work without the monitoring and alerting features of Icinga, since the hypervisor (no matter which...
Hi, this evening my network port burned out, so my Proxmox server won't connect anymore, even if I use a USB Ethernet adapter.
But after using another disk (a brand new one) I managed to reinstall Proxmox on the machine using the famous USB...