So if I click the VM and add the user as PVEVMUser directly, he can see the Backup menu.
However, if I add the VM to a pool and add that user to the pool, he can't see the Backup menu - it seems like the pool permissions are not being inherited?
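For reference, this is roughly what we're doing, just expressed with pveum on the CLI (the user name, VM ID, and pool name here are placeholders, not our real ones):

# role assigned directly on the VM - the Backup menu shows up
pveum aclmod /vms/101 -user bob@pve -role PVEVMUser

# same role assigned on the pool containing the VM - the Backup menu does not show up
pveum aclmod /pool/customer1 -user bob@pve -role PVEVMUser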
Also, we created an NFS share for "templates" - but when users...
What is the difference between the permissions under Datacenter and the permissions on a pool?
I want to give a user access to his VM and the ability to back up / restore it.
So I create a user, assign him to a group, and then under the pool I give him permission to the pool. However, when I log in...
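For what it's worth, this is the setup we're attempting, sketched out with pveum (all the names are placeholders; the pool itself was created in the GUI):

# create the user and a group, and put the user in the group
pveum useradd bob@pve -comment "customer"
pveum groupadd customers
pveum usermod bob@pve -group customers

# give the group a role on the pool
pveum aclmod /pool/customer-pool -group customers -role PVEVMUser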
I have a question.
I backed up VM-101 to my NFS share.
I created a new VM (102) on my iSCSI SAN, and when I go to Backup, pick the NFS share from the dropdown, and hit Restore, it asks me to choose a storage - what is this storage for? I want the VM to use its iSCSI storage.
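My guess is that the storage prompt is where the restored disk images get written. On the CLI I assume it would look something like this (the backup filename and storage IDs are placeholders for whatever vzdump produced and whatever the iSCSI storage is called):

# restore the backup of 101 over the existing VM 102, putting the disks on the iSCSI-backed storage
qmrestore /mnt/pve/nfs-backup/dump/vzdump-qemu-101-2013_01_01-00_00_00.vma 102 --storage san-lvm --force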
I've removed the dead storage and added a 15-minute cron job that runs "service pvestatd restart" across all nodes - this works for now, but it's not ideal, and it has also run into issues with multipath.
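In case it helps anyone (or someone has a better idea), the workaround is literally just a cron.d entry like this on each node - a band-aid, not a fix:

# /etc/cron.d/pvestatd-restart
*/15 * * * * root /usr/sbin/service pvestatd restart >/dev/null 2>&1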
One semi-unrelated issue (or maybe related) - it's been failing to log into one of our NFS shares (the OpenFiler one), and when I removed that NFS share, pvestatd stays up for longer ...
So when I used to run ssmping, the server returned multicast. Now I just tested again and all of a sudden it shows unicast. However, NOTHING has changed on the switch side, and in fact if you run netstat -g it shows the node is part of a multicast group - any ideas? The switches are Brocade FastIron...
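If anyone wants to reproduce the test, this is roughly what we run between the cluster nodes (hostnames are placeholders) - omping is the usual tool for checking multicast between nodes:

omping -c 600 -i 1 -q node1 node2 node3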
Is there a way to show all the VMs in the cluster and which node each one is on?
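Something like the cluster-wide resource list might be what we're after - sketching it here in case that's the right direction:

pvesh get /cluster/resources -type vm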
We're coding a custom panel, so we need to know - also, if Apache is down, will the API still work? We're deciding between the API and SSH tunnelling.
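If we go the API route, this is the kind of call the panel would make - a rough sketch against the REST API over HTTPS on port 8006 (host and credentials are placeholders):

# get an auth ticket
curl -k -d "username=root@pam&password=secret" https://10.0.0.1:8006/api2/json/access/ticket

# reuse the ticket as a cookie for read-only calls, e.g. the cluster-wide VM list
curl -k -b "PVEAuthCookie=<ticket from the call above>" "https://10.0.0.1:8006/api2/json/cluster/resources?type=vm"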
Here you go. My issue is, I think, the web GUI - it shows the node as off even though it's on. I've had to routinely restart pvestatd and Apache.
<?xml version="1.0"?>
<cluster config_version="105" name="PL-CLOUD">
  <cman keyfile="/var/lib/pve-cluster/corosync.authkey"/>
  <fencedevices>...
This is the 2nd time this has happened. The VM shows up as stopped while the filesystem is still open and being accessed. HA then starts multiple copies of the same VM, and our VMs become corrupt because the disks are getting written to at the same time ... this is really dangerous and we cannot use this.
Is there any way to limit the number of VMs a user can create, or the number of VMs a user can create out of the resources in a pool?
If we give a user, say, 4 GB and 4 cores, we want to say max 4 VMs.
Or is there a way to set a minimum amount of resources per VM? (i.e. you can't have a 1-core, 0.5 GB VM)
On 2 of our nodes, the disk usage shows up blank but CPU and memory show up. On 2 other nodes, pvestatd keeps dying under load and I have to manually restart it - is there a better alternative to this, and how do we fix the disk usage issue?
So we have 12 nodes and 1 VM (testing HA). It is a CentOS cPanel server. Storage is 500 GB of space on an iSCSI SAN.
When HA first started on the node, it migrated the VM from Node 1 to Node 3 (not sure why), but it ran fine. After a reboot we were getting TONS of fsck errors. I manually migrated it back...
To the OP - what are you storing it on? I can create a .raw image for you (a blank .raw image with Windows installed) and let you wget it off our server, and you can see if you can mount it. One thing we ran into was corrupted Windows .iso files. They basically were not fully uploaded to Proxmox and will...
We have something similar. We have Dell C6100s with Xeon L55xx CPUs + 96 GB RAM per node and 2 x 250 GB SSDs in RAID 1, connected to our SSD SAN. All of our gear is Intel Xeon. We have a few of the Supermicro clouds as well.