Good catch!
I added bridge-ageing to vmbr1 and now I see all the network traffic of my mirrored port!
auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
        bridge-ageing 0
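If you want to flip this on a live bridge without reloading the network config, something like the following should work (a sketch, assuming bridge-utils is installed; brctl takes the ageing time in seconds):

```shell
# Set the FDB ageing time to 0 at runtime: the bridge stops caching
# learned MACs and floods every frame to all ports, hub-style, which is
# exactly what a capture setup wants.
brctl setageing vmbr1 0
# Inspect the bridge's MAC table to confirm:
brctl showmacs vmbr1
```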
Thanks a lot!
Hi,
I'm trying to set up a new Proxmox box to log packets on the network.
enoX are physical network cards on Proxmox
vmbrX are bridges on Proxmox
ensX are network cards on the Linux VMs
Basically I set up two eno devices: eno1 (administration) and eno2, which gets TX/RX traffic from a switch mirror port.
I...
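Before wiring eno2 into a bridge, a quick sanity check that the switch's mirror port is actually delivering frames (a sketch; assumes tcpdump is installed and eno2 is the capture NIC):

```shell
# Bring the capture NIC up with no IP address:
ip link set eno2 up
# Sniff 10 frames without name resolution; if the mirror port works,
# you should see traffic that is not addressed to this host.
tcpdump -ni eno2 -c 10
```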
OK now I better understand.
vzdump is quite interesting in this case, because I have a secondary off-site backup system based on btrfs, so pve-zsync is useless in this use case!
Very interesting, thanks for your clarification!
Hello,
I have two Proxmox servers configured in a cluster: pve-server-1 and pve-server-2.
pve-server-1 has 7 VMs, pve-server-2 has 2 VMs.
I configured replication:
- VMs of pve-server-1 are replicated on pve-server-2
- VMs of pve-server-2 are replicated on pve-server-1
Now I have a FreeNAS with NFS...
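For the NFS part, a FreeNAS export can be registered as a PVE storage from the CLI; a sketch with a hypothetical server address and export path:

```shell
# Hypothetical values: replace --server and --export with your FreeNAS
# address and dataset export.
pvesm add nfs freenas-backup \
    --server 192.0.2.10 \
    --export /mnt/tank/pve-backup \
    --content backup
# The new storage should now show up alongside the existing ones:
pvesm status
```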
Hello,
I have two Proxmox nodes in a cluster: pve-server-1 and pve-server-2.
On pve-server-1 I have 7 VMs, and some of them had snapshots.
These VMs are replicated on pve-server-2, and so are the snapshots.
Today I removed the snapshots on pve-server-1.
The replication of the VMs has been...
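To see which snapshots actually exist on each side, listing the ZFS snapshots on both nodes is the quickest check (note that PVE replication also keeps its own __replicate_* snapshots, which it manages automatically):

```shell
# Run on both nodes and compare; shows user snapshots as well as the
# __replicate_* snapshots created by the replication jobs.
zfs list -t snapshot -o name,used,creation
```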
Hi,
I've set up Proxmox on a ZFS RAID-1 and wonder if there is a wiki/doc somewhere on how to snapshot the server before doing an upgrade.
I mean snapshotting the OS.
It is a production server, so I want to be able to roll back if anything goes wrong.
What would be the steps to follow, should...
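A minimal sketch of the ZFS side, assuming a default PVE-on-ZFS install where the OS lives in rpool/ROOT/pve-1 (verify the dataset name first):

```shell
# Confirm the root dataset name (rpool/ROOT/pve-1 on default installs):
zfs list -r rpool/ROOT
# Snapshot the OS before upgrading:
zfs snapshot rpool/ROOT/pve-1@pre-upgrade
# If the upgrade goes wrong, roll back to the snapshot (this discards
# all changes made after it; add -r if newer snapshots exist):
zfs rollback rpool/ROOT/pve-1@pre-upgrade
```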
Hi there,
I face a strange issue.
I have a 250GB Linux VM with a scsi0 VirtIO disk. I reduced the partition size to 80GB with GParted and was still able to boot it (the disk was still 250GB in size, but the OS sees an 80GB partition).
Next I cloned the VM with Clonezilla onto an 80GB scsi1 disk.
Once...
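As an alternative to cloning onto a second disk: once the partition is safely below the target size, the backing zvol itself can be shrunk. A sketch with a hypothetical zvol name; this is a destructive operation, so back up first:

```shell
# DANGER: lowering volsize discards everything past the new size; only
# do this after the partition has been shrunk well below 80G.
# The zvol name here is hypothetical; find yours with: zfs list
zfs set volsize=80G vm/vm-100-disk-0
```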
Hi wolfgang,
zfs destroy did not work, but I worked around it by creating a VM with a "cp" of an existing one and renaming it to 107.conf. Then I was able to erase the disk... :)
I took a look today at the ARC (read) cache, and the EVO SSD used as a read cache seems not useful at all. So far, thanks to the 64GB of RAM allocated to ZFS, it hasn't been used.
One other point: a ZIL device can be useful if your users transfer big files in a single transfer. I've seen my write...
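The ARC observation can be quantified; a small sketch (assumes ZFS on Linux, which exposes its counters in /proc/spl/kstat/zfs/arcstats) that prints the overall ARC hit ratio:

```shell
# Read the ARC hit/miss counters from the kernel stats file and print
# the hit ratio; a value near 100% means the RAM-backed ARC absorbs
# almost all reads, leaving little for an SSD read cache to do.
STATS=${STATS:-/proc/spl/kstat/zfs/arcstats}
awk '$1 == "hits"   {h = $3}
     $1 == "misses" {m = $3}
     END {printf "ARC hit ratio: %.1f%%\n", 100 * h / (h + m)}' "$STATS"
```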
Hi, we have just finished our setup.
look at "ssg-5029p-e1ctr12l-2u-based-nas-review-request.58561" on freenas forum.
(take a look at the link provided by the freenas user for another setup).
It is based on a Supermicro system with 6x10TB SAS drives in RAID-6, 2x1TB Samsung EVO SSDs and 128GB of RAM...
Hi wolfgang,
<quote>
No you should not be allowed to erase vmdisk.
This has to be done in the guest hardware tab.
</quote>
OK, I've managed to erase the disk from the VM's Hardware tab.
One last issue: I tried to create a VM from a template (100GB), but during creation the process stopped because...
Hi,
I have a local ZFS pool where I store all my VMs (local-zfs-vm):
# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,iso,vztmpl

zfspool: local-zfs-vm
        pool vm
        content images,rootdir
        sparse 0
When I go into the pve web interface ...
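To cross-check what the web interface shows, the same storages can be inspected from the shell (pool name vm taken from the config above):

```shell
# Storage status as PVE sees it (type, total/used space, active flag):
pvesm status
# The ZFS datasets backing local-zfs-vm:
zfs list -r vm
```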