Rather than trying to create a 3-node cluster, setting up a 2-node cluster with an external QDevice [1] may be more appropriate for your setup.
[1] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_corosync_external_vote_support
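In case it helps, here is a rough sketch of the QDevice setup described in [1]; the address 192.0.2.10 is a placeholder for your external host (e.g. a small Debian machine):

Code:
# on the external host that will provide the extra vote
apt install corosync-qnetd

# on every cluster node
apt install corosync-qdevice

# on one cluster node, register the QDevice
pvecm qdevice setup 192.0.2.10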
Hi,
Sync jobs only pull the contents of an entire datastore. The next best thing to look into may be setting up a Sync Job [1] that runs at a time and interval (see [2]) which strikes a balance between being frequent enough to keep each run short, without putting load on the...
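For reference, a rough sketch of creating such a Sync Job from the PBS command line; the remote name, datastore names and schedule below are placeholders, so adjust them to whatever suits your load window:

Code:
# pull remote datastore 'backup' from remote 'offsite-pbs' into the local
# datastore 'backup', every night at 02:30 (calendar event syntax)
proxmox-backup-manager sync-job create pull-offsite \
    --remote offsite-pbs --remote-store backup \
    --store backup --schedule '02:30'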
Hi,
Do you have snapshots on the system that may also be contributing to the usage?
You could also compare the output of zpool list and zfs list to see if there is much discrepancy there, as ZFS reserves a certain amount of storage space for itself. However, I don't think it should be...
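For example, the following read-only commands should make any discrepancy, and the space held by snapshots, visible:

Code:
zpool list              # pool-level view (raw capacity, including redundancy overhead)
zfs list -o space       # per-dataset usage, including space used by snapshots (USEDSNAP)
zfs list -t snapshot    # every snapshot and the space it is holding on to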
Hi,
For a simple use case, I would say it's easiest to assign a disk to one of your existing VMs and use that VM as the SMB server (provided it is generally running). Alternatively, you could set up a dedicated VM for hosting the share (like a minimal Debian install). Unless you have a...
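As a rough illustration only — assuming a minimal Debian VM with the samba package installed, and with the share name and path below being placeholders — the setup could look something like this:

Code:
apt install samba

# /etc/samba/smb.conf (excerpt): export /srv/share as 'shared'
[shared]
    path = /srv/share
    read only = no
    valid users = youruser

# enable the (existing Unix) user for Samba and apply the config
smbpasswd -a youruser
systemctl restart smbd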
Hi,
I just tested this and was able to join a cluster with no problem, using pvecm add. Are you sure there isn't anything else in your sshd_config or network config that may be affecting the communication between the nodes?
There is no need to manually configure ssh keys for cluster...
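For completeness, the join itself only needs the address of an existing cluster member (192.0.2.1 below is a placeholder), run on the node that should join:

Code:
pvecm add 192.0.2.1
pvecm status    # verify membership and quorum afterwards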
Hi,
Perhaps my opinion is biased, being an employee, but I'll still try to answer your question ;)
The cluster provides a few advantages for your use case. Primarily, it provides a single interface to all the nodes of the cluster, meaning that administration is much simpler than having to...
If you remove default_server from /etc/nginx/sites-enabled/pmg-quarantine.conf, this should fix the issue.
You can also remove the default configuration from sites-enabled (or remove default_server from the files), should you want pmg-quarantine to be the default. In any case, there can only be...
This seems to be the offending line.
Running grep -R default_server /etc/nginx may help you to find where the overlapping default server configuration is coming from.
Otherwise, just check whether there are any unintended configuration files in that directory.
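To illustrate (file names and port are hypothetical), the clash nginx complains about usually looks like two server blocks claiming default_server for the same address:port:

Code:
grep -R default_server /etc/nginx
#   sites-enabled/default:              listen 443 ssl default_server;
#   sites-enabled/pmg-quarantine.conf:  listen 443 ssl default_server;
# only one server block per address:port may carry the keyword, so drop it
# from one of the two files and reload:
systemctl reload nginx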
Hi, thanks for reporting. I've opened a bug report for the issue [1] and will look into it. For now, the easiest workaround is to update from the command line.
[1] https://bugzilla.proxmox.com/show_bug.cgi?id=3454
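That is, on the host simply run the usual apt-based upgrade:

Code:
apt update
apt dist-upgrade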
Is this specific to the Shell found on the web interface?
Does the entire web interface become inaccessible?
Can the server still be pinged and accessed via another method such as ssh?
Yes, this is how it is designed. The backup owner can be changed manually [1], but a backup group is only owned by one user at a time.
In PVE, the datastore is added as a storage, where the user on the backup server is also specified. For your situation, I would recommend giving...
Junk mail comprises the number of virus, spam, and greylisted mails, as well as the number of mails rejected by SPF, RBL, and Pregreet checks. Basically, it's the total count of "bad" emails.
Yes, this is actually set up in the instructions you linked. In that tutorial, the 'g' entries in the mapping map that GID on the host system to the guest, and appending the line to /etc/subgid allows the group you specified to operate under that ID.
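As a sketch of what that typically looks like — assuming CT ID 101 and host UID/GID 1000 as placeholders:

Code:
# /etc/pve/lxc/101.conf: map container 1000 to host 1000, shift everything else
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535

# appended to both /etc/subuid and /etc/subgid on the host
# (the default root:100000:65536 entry stays in place)
root:1000:1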
The test directory should also display uid 1000 inside the container. This typically happens after setting the appropriate mapping and rebooting the container.
This suggests that the mapping is not effective, as container user 1000 is being mapped to 101000. You could perhaps try to run...
No, the Proxmox VE install doesn't create any additional system users. I asked just in case you had done this yourself.
Could you post the output of ls -na on the mountpoint from the host and from within the container?
Are you logged in as worker when attempting to access the directory?
Also, do you have any users on the host system which may also be mapped to uid:gid 1000? This is generally the first UID assigned to regular (non-system) users on a Debian system.
Finally, are you sure that directory has write permissions throughout?
In fairness, I don't see any reference to it in our documentation either (which I'll update soon).
They will still be shown in the Hardware section, but will be detached. To attach them again, you'll just have to select them, click Edit, uncheck the Backup option again, and hit Add. For this...
Could you explain the environment a little more? Why can't you use the two drives at once through the bay, and why are they getting periodically swapped out?
If you are trying to maintain one "storage device" in PVE that actually corresponds to multiple disks that are being periodically...
The minimal set of privileges you can do this with is VM.Audit and VM.Console. If VM.Audit is removed, the VM will be hidden from the user, and if VM.Console is removed, the user will not be able to access the console.
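A sketch of how this could be wired up from the CLI — the role name, user alice@pve and VM 100 are placeholders:

Code:
# custom role containing only those two privileges
pveum role add ConsoleOnly --privs "VM.Audit VM.Console"
# grant it to the user on a single VM
pveum acl modify /vms/100 --users alice@pve --roles ConsoleOnly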
To my knowledge, this option does not currently exist.