I just wanted to follow up on my second question above. If you use the --backup-id <string> parameter with the client backup, PBS sees it as a different backup set named after this parameter. Without the parameter it defaults to the hostname. Using that parameter allows a host to...
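For illustration, a minimal sketch of the invocation (repository address, datastore, and the ID itself are placeholders):
# back up under an explicit ID instead of the hostname default
proxmox-backup-client backup root.pxar:/ \
    --repository backup@pbs@192.0.2.10:store1 \
    --backup-id my-custom-id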
I understand how to use the backup client itself; what I am looking for is a good suggestion of which directories to include and exclude in a Proxmox host backup. If I'm already backing up the VMs/CTs, then obviously I won't need to back up the /var/lib/vz/images directory, but what is the...
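Just to make the question concrete, here is roughly the kind of invocation I have in mind; the repository and the chosen paths are placeholders, not a recommendation:
# back up host config areas as separate archives
proxmox-backup-client backup \
    etc.pxar:/etc \
    root.pxar:/root \
    --repository backup@pbs@192.0.2.10:store1
# directories to skip (e.g. /var/lib/vz/images) could also be listed in a .pxarexclude file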
Here's the simplest way to explain what I did.
Copy the template file:
cp /var/lib/pmg/templates/main.cf.in /etc/pmg/templates
You may need to create the /etc/pmg/templates directory if it doesn't already exist.
Edit that file:
nano -c /etc/pmg/templates/main.cf.in
Find the line (probably line...
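Once the edit is in place, the template has to be re-rendered into the live config. If memory serves, that's done with pmgconfig (a sketch; double-check against the PMG docs for your version):
# regenerate config files from the templates and restart affected services
pmgconfig sync --restart 1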
If this is something to implement in the future, perhaps an extra checkbox in the rule that says "continue processing rules", which would signal that the action in the rule is not final?
As a long-time user of Proxmox (since the 3.x days) and now a user of PMG, I am very familiar with the subscription nag screen and have long since learned to click through it. However, I am wondering if there could be an exception for PMG so that it does not show for non-admin users logging into the...
I'm not sure what all your hostname issues are, but I solved the first problem by modifying the main.cf.in file in /etc/pmg/templates. I modified this line (somewhere around line 11) so that it reads:
smtpd_banner = [% pmg.mail.banner %]
I removed the "$myhostname". That way, I...
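Roughly speaking, the change was as follows (the exact stock line may differ slightly between versions):
# before (stock template, approximately):
smtpd_banner = $myhostname ESMTP [% pmg.mail.banner %]
# after:
smtpd_banner = [% pmg.mail.banner %]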
I also hit this today on 7.0-13. There were no errors in the log for the snapshot itself, but there were problems beforehand.
The VM had a previous snapshot, which I tried to delete via the GUI. It appears the snapshot's RAW file was deleted, but the delete operation took too long and gave this...
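(In case it's useful to anyone else: when a delete leaves the snapshot entry behind in the config, the CLI can force-remove it. The VM ID and snapshot name here are just examples.)
# remove the snapshot from the VM config even if deleting the disk snapshot fails
qm delsnapshot 100 before-upgrade --force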
Thanks, I have read the AGPL, and a myriad of analyses, and there seems to be a wide variety of opinions from many non-lawyers (like you and me... unless you are a lawyer). My opinion concurs with yours: as long as we are not modifying or linking to anything directly in Proxmox itself, we should...
Greetings,
I have a question about AGPL3 and its application to my situation. I have been reading around and found a wide range of opinions on this matter.
I have a product that currently uses Ubuntu 20.04 as the base OS, plus a custom set of deployment scripts which set up and configure KVMs...
I have a cluster that is in this same scenario. It seems to have happened after upgrading the nodes yesterday. Today I tried to get a console, but it failed with this message:
VM 5355 qmp command 'change' failed - got timeout
TASK ERROR: Failed to run vncproxy.
Then I tried to migrate that...
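(For completeness, this is the kind of thing I checked on the node before going further; the VM ID is the one from the log above, and this is only a rough sketch.)
qm status 5355 --verbose   # check whether the QEMU process still responds
qm unlock 5355             # clear a stale lock left by the failed task, if any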
I still never figured out how to get the newly installed system drive to recognize the OSDs after installation. However, that really wasn't too much of an issue, since I was also going to have to recreate the OSDs anyway to get them into the new LVM-based configuration used in Nautilus.
The method I...
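(Not necessarily the method I ended up using, but for reference: ceph-volume has a mode for adopting OSDs that were originally created with ceph-disk. Roughly:)
# scan existing ceph-disk OSDs and record their metadata
ceph-volume simple scan
# activate everything that was found so it comes up at boot
ceph-volume simple activate --all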
Final followup... this isn't a solution per se, but I did want to say what I did, and clarify that I understand now that the original problem wasn't the upgrade from Luminous to Nautilus, but the fact that I needed to convert the old OSDs created under Luminous to the new method used in Nautilus...
What I'm trying to figure out is whether I can reinstall Proxmox (and Ceph) on a Ceph node with existing OSDs, force a rejoin to the Proxmox cluster, and then have the system recognize, configure, and start the OSDs on that node. I then expect there will be some recovery, since the OSDs will...
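(A sketch of the rejoin part, assuming the reinstalled node keeps its old name and IP; the node name and cluster address below are placeholders.)
# on an existing cluster node: drop the stale membership of the reinstalled node
pvecm delnode ceph3
# on the freshly reinstalled node: join the cluster again
pvecm add 192.168.1.11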
Does the new ceph-volume command help any? It seems to imply that it will find already created OSDs and configure them, but maybe I'm reading too much into that.
Right now the best course of action I've got to prevent downtime to the VMs is to move all VM images back to local storage so I can...
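(ceph-volume does have subcommands that look relevant here; a sketch of what I would expect to run, no guarantees:)
# list the OSDs that ceph-volume can see on the local disks
ceph-volume lvm list
# bring up all LVM-based OSDs it finds
ceph-volume lvm activate --all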
I'm afraid I still don't understand the question. The compute nodes have two 10Gb connections to the Ceph public network, one for the Proxmox Cluster and one for regular network traffic. The Ceph nodes have vmbr0 because that's what Proxmox put there and I never bothered to remove that part of...
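(For context, the relevant part of /etc/network/interfaces on a compute node looks roughly like this; interface names and addresses are examples, not my actual config.)
auto eno1
iface eno1 inet static
    address 10.10.10.21/24        # Ceph public network (10Gb)

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.21/24       # regular host/VM traffic (10Gb)
    gateway 192.168.1.1
    bridge-ports eno2
    bridge-stp off
    bridge-fd 0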
Lots of questions now that I've got some decent hardware and am upgrading to 6.0. Per a discussion in another thread, I would like to move the OS of my Ceph nodes from a default LVM-based install on a large SSD (around 2 TB) to, ideally, a RAID 1 ZFS boot disk on much smaller SSDs (256 GB). I'm fully...
Your cluster network doesn't need to communicate outside of itself. My first cluster used an off-the-shelf D-Link 1Gb consumer switch for the cluster traffic. I used a simple 192.168.200.0/24 network on one of the spare NIC ports on the nodes, and connected those to the 1Gb switch. No network...
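(To make that concrete on PVE 6, creating the cluster with the dedicated link looks something like this; the cluster name and addresses are just examples.)
# on the first node, bind corosync to the isolated 192.168.200.0/24 network
pvecm create mycluster --link0 192.168.200.11
# on each additional node, join via the first node and pin the same link
pvecm add 192.168.200.11 --link0 192.168.200.12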
That's an easy one... the Ceph nodes have 10GbE built into the motherboard. There are no NICs slower than 10GbE in those 3 servers, and all my infrastructure switches are 10GbE and up.
I'm not using the SFP+ connections. I think that network card will need to be replaced by the 100GbE when...