Hi,
I'm trying to allow a group of users to manage Backup Jobs at the Datacenter level. I have tried a lot of different permission combinations, but I can't find a safe way to achieve this. How would I set this up without giving the group the PVESysAdmin role?
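For reference, the kind of thing I have been trying looks roughly like this (the role name, group name, and privilege list are my own guesses, not a known-good combination):

```shell
# hypothetical custom role -- privilege list is an assumption, not a verified answer
pveum role add BackupOperator -privs "VM.Backup,VM.Audit,Datastore.AllocateSpace"
pveum acl modify / -group backup-admins -role BackupOperator
```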
Hello,
We have a cluster which we have upgraded to 6.4; everything went smoothly, thank you for your continuous hard work.
We have been reading the known issues and are aware of the following instructions:
https://pve.proxmox.com/wiki/ZFS:_Switch_Legacy-Boot_to_Proxmox_Boot_Tool
We have switched...
Just for my understanding, which of these impact the connection tracking tables?
Enable at Datacenter level
Enable at Host level
Enable at VM / CT level
Adding actual rules
I just did ifdown vmbr3044 on hv01 and added an IP from VLAN 3044 to the guest (ens18.3044), but it still doesn't work. Do I need to do something else?
By the way this wouldn't solve my initial problem, I'm merely interested in why it doesn't work at the moment :)
The space between vlan and 3043 was only on the forum, in my interfaces file it's without a space, I corrected the post above.
Yes, the guest is Debian. I just updated the interfaces file and replaced vlan3043 with ens18.3043; unfortunately there's still no traffic. Anything else I'm overlooking?
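For reference, the relevant part of the guest's /etc/network/interfaces now looks roughly like this (the address is a placeholder, and this assumes the vlan package / 8021q module is available in the guest):

```
auto ens18.3043
iface ens18.3043 inet static
    address 192.0.2.10/24
```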
I added trunks=1;2;3;4;5;6;7;8;9;10;20;30;40;50;60;.... (with my own VLAN IDs, of course) to /etc/pve/qemu-server/147.conf, shut down the guest and powered it back up. Unfortunately it's still not working. Any suggestions?
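For clarity, trunks is a sub-option of the netX line rather than a standalone key, so the relevant line in 147.conf now looks roughly like this (MAC, model, and bridge are placeholders here, and the VLAN list is shortened):

```
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,trunks=1;2;3;4;5
```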
So I tried this but might need a push in the right direction.
1. I added to /etc/network/interfaces on the host:
auto vmbr0
iface vmbr0 inet manual
bridge_ports bond0
bridge_stp off
bridge_fd 0
bridge_vlan_aware yes
#VLAN aware
2. Brought the interface up on...
Hi,
For an arping machine I need a VM with an address in all our subnets. Since I have set up my hypervisors with a separate vmbr per VLAN, I need a VM with lots of interfaces.
Currently I'm getting the following error when adding the 33rd network interface:
Parameter verification failed. (400)...
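In case it matters, the workaround I'm considering is a single trunked interface with VLAN subinterfaces inside the guest instead of one NIC per subnet, roughly like this (VLAN ID and address are placeholders, and this assumes a VLAN-aware bridge on the host):

```
auto ens18.20
iface ens18.20 inet static
    address 192.0.2.20/24
```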
Unfortunately, it seems I can't get back to the default limit of 200:
# ceph osd df
ID  CLASS  WEIGHT   REWEIGHT  SIZE  USE   AVAIL  %USE   VAR   PGS
 0  ssd    0.87320  1.00000   894G  317G  576G   35.50  0.96  106
 1  ssd    0.87320  1.00000   894G  341G  552G   38.19  1.03  118
 8  ssd    3.49309  1.00000...
After updating mon_max_pg_per_osd = 1000 in the global section of /etc/ceph/ceph.conf and restarting all monitors and OSDs, the problem was resolved.
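For anyone searching later, the change itself is just this fragment of /etc/ceph/ceph.conf:

```
[global]
    mon_max_pg_per_osd = 1000
```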
Hi,
Yesterday we added 2 new disks to our cluster. Immediately afterwards it started rebalancing. Based on pgcalc, I decided to also increase pg_num and pgp_num from 512 to 800 (which is the maximum for my setup according to the warning).
Recovery is running, but it's very slow. It has been...
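For completeness, the pg bump was done with the usual pool commands, roughly like this (the pool name is a placeholder):

```shell
ceph osd pool set <pool> pg_num 800
ceph osd pool set <pool> pgp_num 800
```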
Hmm... hang on... I've been away for a couple of days and it looks like the problem has resolved itself. I'm going to monitor this and will report back if it happens again.