Hello,
I came across this post after attending a very expensive Proxmox training course paid for by my employer.
There it was also mentioned, quite matter-of-factly, that this message can be disabled. As for improper or frowned-upon...
Port isolation uses the isolated flag on bridge ports (for more information see [1]). Because of that, it can only work locally. If you want to do this across hosts, or want more fine-grained control over traffic between guests on the same VNet...
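As a hedged illustration of the mechanism described above, the isolated flag can be toggled per port with iproute2's bridge tool (the tap interface name below is hypothetical, not from this thread):

```shell
# Mark a guest's tap interface as isolated on the local bridge.
# Isolated ports cannot exchange traffic with each other, only with
# non-isolated ports (e.g. the uplink) - hence local-only scope.
bridge link set dev tap100i0 isolated on

# Show detailed port flags to verify the setting:
bridge -d link show dev tap100i0
```

This requires root and an existing bridge port, so treat it as a sketch rather than a copy-paste recipe.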
This often occurs when there are MTU issues somewhere along the path - did you double-check your MTU config across the whole path? You can always check via the ping command:
# 9000 MTU
ping -M do -s 8972 {target host}
# 1500 MTU
ping -M do -s...
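For reference, the payload sizes passed to `-s` come from subtracting the 20-byte IPv4 header and 8-byte ICMP header from the MTU; this small sketch just shows that arithmetic:

```shell
# Ping payload = MTU - 20 (IPv4 header) - 8 (ICMP header)
jumbo_payload=$((9000 - 20 - 8))   # for a 9000 MTU
std_payload=$((1500 - 20 - 8))     # for a 1500 MTU
echo "$jumbo_payload $std_payload" # 8972 1472
```

If the ping with `-M do` (don't fragment) fails at these sizes but succeeds with smaller payloads, some hop on the path has a smaller MTU.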
I can't use the removable option, since the source PBS is in a datacenter - we only rent the server.
I have run iperf for 1 h without any interruptions, at full gigabit, between both PBS servers.
The configuration is a pull from the remote PBS. The...
Not trying to be rude, but can you explain that? These were not options available during install. This is what I found.
"Logical Volumes (LVs) are not a file system; they are virtual partitions created by Logical Volume Management (LVM) that can...
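To make the quoted description concrete, here is a hedged sketch of how a logical volume is created and only then given a filesystem on top (the device, volume group, and mount point are hypothetical, not from this thread):

```shell
pvcreate /dev/sdb                # register the disk as an LVM physical volume
vgcreate vg0 /dev/sdb            # pool physical volumes into a volume group
lvcreate -L 20G -n data vg0      # carve out a 20 GiB logical volume
mkfs.ext4 /dev/vg0/data          # the LV has no filesystem until you create one
mount /dev/vg0/data /mnt/data    # now it can be used like any partition
```

This is what "virtual partitions" means in practice: the LV is a block device you can format, resize, or snapshot independently of physical disk boundaries.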
There is one more thing I have to write down: while "trim" and space reclamation will eventually work at some point (we just cannot predict when, or what really triggers it), I haven't so far been able to reclaim that...
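For context, a hedged sketch of how reclamation is usually triggered manually from inside a guest; this assumes the virtual disk was attached with discard enabled, and is not claimed to fix the unpredictable behaviour described above:

```shell
# Inside the guest: report all free blocks on every mounted filesystem
# back to the underlying storage, so thin volumes / qcow2 images can shrink.
fstrim -av
```

Without discard enabled on the virtual disk, the trim requests never reach the host storage and nothing is reclaimed.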
I use GParted Live to wipe the drives prior to each attempt, so there is no need to remove them. I have also tried with 9.1.x and 8.4.x, and it hangs at the exact same spot. I'll check the "Secure Boot" option.
Is this a shared database or a separate one for each node in a cluster configuration? After clearing a lot of unnecessary records, I noticed that some records remained on the second cluster node.
pmgsh get /nodes/localhost/status | grep insync...
Hi, I have a 3-node cluster with Ceph. I created some resource pools to act as "folders" and to restrict what a generic user can see.
I'm trying to build a script that moves VMs from their resource pools to another one called "toBeDeleted" if...
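As a hedged starting point, moving a VM between pools can be scripted with pvesh against the pool API; the VMID and source pool name below are made up, and a VM has to be removed from its current pool before it can join another:

```shell
VMID=102            # hypothetical VM to retire
OLDPOOL=oldPool     # hypothetical current pool

# Remove the VM from its current pool, then add it to the target pool.
pvesh set /pools/"$OLDPOOL" --vms "$VMID" --delete 1
pvesh set /pools/toBeDeleted --vms "$VMID"
```

Wrapping this in a loop over the output of `pvesh get /pools/...` would give the bulk-move behaviour described above; exact selection logic depends on the truncated condition.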
I am looking into a move from VMware to Proxmox and am stumped by what I believe to be a bug that completely breaks my automation tools when VLANs are configured for the management interface.
When the Proxmox host has management IPs on VLAN...
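For context, a hedged sketch of what a VLAN-tagged management interface typically looks like in /etc/network/interfaces on a PVE host; the VLAN ID, NIC name, and addresses are invented for illustration:

```shell
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

# Management IP lives on VLAN 10, tagged on top of the bridge.
auto vmbr0.10
iface vmbr0.10 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
```

Whether the automation issue described above appears may depend on exactly this layout (VLAN subinterface on the bridge vs. a tagged physical NIC), so it is worth stating which variant is in use.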
These are the results after trimming, before moving the VM's disk:
root@:/mnt/pve/truenas01-41-test2-01/images/102# ls -l vm-102-disk-1.qcow2
-rw-r----- 1 root nogroup 16108814336 Feb 13 12:54 vm-102-disk-1.qcow2...
Okay, understood. In my case, since it's an “old” installation, I have to manually modify the repositories in /etc/apt.
For those who have already upgraded, have you experienced any particular problems? Is it fairly reliable?
Thanks
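As a hedged sketch of the manual repository change mentioned above: on an existing installation the APT sources are switched to the new release codename by hand. The codenames below assume a Debian 12 to 13 (bookworm to trixie) step; adjust them to the actual releases involved, and review the files before upgrading:

```shell
# Point all Debian/Proxmox source entries at the new release codename.
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list

# Then refresh the index and perform the full upgrade.
apt update && apt dist-upgrade
```

Checking the enterprise vs. no-subscription repository files individually is safer than a blanket sed, but the idea is the same.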