Search results

  1. Backup of PVE Host configuration on PBS

    I just wanted to follow up on my second question above. If you use the --backup-id <string> parameter in the client backup, then PBS sees it as a different backup set named for this parameter. By default it uses the hostname without this parameter. Using that parameter allows a host to...
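
    The behavior described above can be sketched as a pair of proxmox-backup-client invocations; the repository string and the `pve1-config` ID below are placeholders, not values from the thread:

    ```
    # Default: the snapshot lands in a host-type group named after the hostname.
    proxmox-backup-client backup etc.pxar:/etc \
        --repository backup@pbs@pbs.example.com:store1

    # With --backup-id, PBS files the snapshot under that name instead,
    # letting one machine keep several independent backup sets.
    proxmox-backup-client backup etc.pxar:/etc \
        --repository backup@pbs@pbs.example.com:store1 \
        --backup-id pve1-config
    ```
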
  2. Backup of PVE Host configuration on PBS

    I understand how to use the backup client itself; what I am looking for is a good suggestion of which directories to include and exclude in a Proxmox host backup. If I'm already backing up the VMs/CTs, then obviously I won't need to back up the /var/lib/vz/images directory, but what is the...
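
    A host-config backup along those lines might look like the following sketch; the archive names and repository are illustrative, and the key point is that image directories such as /var/lib/vz/images are simply not listed:

    ```
    # Back up only configuration-bearing directories as separate pxar archives;
    # VM/CT disk images are already covered by the regular guest backups.
    proxmox-backup-client backup \
        etc.pxar:/etc \
        root.pxar:/root \
        --repository backup@pbs@pbs.example.com:store1
    ```
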
  3. SMTP Banner Check Reverse DNS does not match SMTP Banner

    Here's the simplest way to explain what I did. Copy the template file: cp /var/lib/pmg/templates/main.cf.in /etc/pmg/templates You may need to make the /etc/pmg/templates directory if it doesn't exist already. Edit that file: nano -c /etc/pmg/templates/main.cf.in Find the line (probably line...
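
    Laid out as commands, the steps in that post are roughly the following; the final pmgconfig step is not quoted in the snippet and is added here as the usual way to apply PMG template changes:

    ```
    # Make the override directory if it doesn't exist, then copy the template:
    mkdir -p /etc/pmg/templates
    cp /var/lib/pmg/templates/main.cf.in /etc/pmg/templates/

    # Edit the copy (-c shows line numbers):
    nano -c /etc/pmg/templates/main.cf.in

    # Regenerate the live config from the templates and restart services:
    pmgconfig sync --restart 1
    ```
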
  4. Before Queue Filtering: Block + Quarantine (save a copy of blocked email)

    If this is something to implement in the future, perhaps an extra checkbox in the rule that says "continue processing rules" which signals that the action in the rule is not final?
  5. Nag screen suppression for quarantine logins

    As a long time user of Proxmox (since the 3.x days) and now a user of PMG, I am very familiar with the subscription nag screen and have long learned to click through it. However, I am wondering if there could be an exception for PMG to have it not show for non-admin users logging into the...
  6. SMTP Banner Check Reverse DNS does not match SMTP Banner

    I'm not sure what all your issues are with your hostnames, but I solved the first problem by modifying the main.cf.in file in /etc/pmg/templates. I modified this line (somewhere around line 11) so that it reads: smtpd_banner = [% pmg.mail.banner %] I removed the "$myhostname". That way, I...
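
    The described edit amounts to a one-line change in the copied template; the "before" line is reconstructed from the post, which only quotes the result:

    ```
    # /etc/pmg/templates/main.cf.in, around line 11
    # before (reconstructed):
    smtpd_banner = $myhostname [% pmg.mail.banner %]
    # after ($myhostname removed):
    smtpd_banner = [% pmg.mail.banner %]
    ```
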
  7. [SOLVED] Snapshot Not Working

    I also hit this today on 7.0-13. There were no errors in the log for the snapshot itself, but there were problems before. The VM had a previous snapshot which I tried to delete via the GUI. It appears the snapshot RAW file was deleted, but the snapshot delete took too long, and gave this...
  8. Proxmox as a base OS for a product

    Thanks, I have read the AGPL, and a myriad of analyses, and there seems to be a wide variety of opinions from many non-lawyers (like you and me...unless you are a lawyer). My opinion concurs with yours...as long as we are not modifying or linking to anything directly in Proxmox itself, we should...
  9. Proxmox as a base OS for a product

    Greetings, I have a question about AGPL3 and its application to my situation. I have been reading around and found a wide range of opinions on this matter. I have a product that currently uses Ubuntu 20.04 as the base OS, then a custom set of deployment scripts which setup and configure KVMs...
  10. [SOLVED] Certain VMs from a cluster cannot be backed up and managed

    I have a cluster that is in this same scenario. It seems to have happened after upgrading the nodes yesterday. Today I tried to get a console, but it failed with this message: VM 5355 qmp command 'change' failed - got timeout TASK ERROR: Failed to run vncproxy. Then I tried to migrate that...
  11. Replacing system drive in Ceph node

    I still never figured out how to get the newly installed system drive to recognize the OSDs after installation. However, that really wasn't too much of an issue since I was also going to have to recreate the OSDs anyway to get them in the new LVM configuration used in Nautilus. The method I...
  12. [SOLVED] Ceph Luminous to Nautilus upgrade issues

    Final followup...this isn't a solution per se, but I did want to say what I did...and clarify that I understand now that the original problem wasn't the upgrade from Luminous to Nautilus, but in the fact that I needed to convert the old OSD created in Luminous to the new method used in Nautilus...
  13. Replacing system drive in Ceph node

    What I'm trying to figure out is whether or not I can re-install Proxmox (and Ceph) on a Ceph node with existing OSDs, force a rejoin to the Proxmox cluster, then have the system recognize, configure, and start the OSDs on that node. I then expect there will be some recovery since the OSDs will...
  14. Replacing system drive in Ceph node

    Does the new ceph-volume command help any? It seems to imply that it will find already created OSDs and configure them, but maybe I'm reading too much into that. Right now the best course of action I've got to prevent downtime to the VMs is to move all VM images back to local storage so I can...
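
    ceph-volume can indeed rediscover and start OSDs whose LVM volumes already exist, which is the capability the question is circling; a hedged sketch of the relevant commands:

    ```
    # Scan LVM for volumes tagged as Ceph OSDs and show what was found:
    ceph-volume lvm list

    # Activate every discovered OSD (recreates the mount points under
    # /var/lib/ceph/osd and starts the corresponding systemd units):
    ceph-volume lvm activate --all
    ```
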
  15. [SOLVED] Ceph Luminous to Nautilus upgrade issues

    I'm afraid I still don't understand the question. The compute nodes have two 10Gb connections to the Ceph public network, one for the Proxmox Cluster and one for regular network traffic. The Ceph nodes have vmbr0 because that's what Proxmox put there and I never bothered to remove that part of...
  16. Replacing system drive in Ceph node

    Lots of questions now that I've got some decent hardware and upgrading to 6.0. Per a discussion in another thread, I would like to move the OS of my Ceph Nodes from a default LVM-based install on a large SSD (like 2 TB) ideally to a RAID 1 ZFS boot disk on much smaller SSDs (256GB). I'm fully...
  17. [SOLVED] Ceph Luminous to Nautilus upgrade issues

    We make all that equipment, so it costs me next to nothing. ;)
  18. Proxmox Cluster unanswered questions

    Your cluster network doesn't need to communicate outside of itself. My first cluster used an off-the-shelf D-Link 1Gb consumer switch for the cluster traffic. I used a simple 192.168.200.0/24 network on one of the spare NIC ports on the nodes, and connected those to the 1Gb switch. No network...
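
    The isolated cluster network described there needs nothing more than a static stanza on a spare NIC of each node, e.g. in /etc/network/interfaces; the interface name and host address below are placeholders:

    ```
    auto eno2
    iface eno2 inet static
        address 192.168.200.11/24
    # No gateway: the 192.168.200.0/24 cluster network never routes outside itself.
    ```
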
  19. [SOLVED] Ceph Luminous to Nautilus upgrade issues

    That's an easy one...the ceph nodes have 10GbE built into the motherboard. There are no NICs less than 10GbE in those 3 servers. And all my infrastructure switches are 10GbE and up. I'm not using the SFP+ connections. I think that network card will need to be replaced by the 100GbE when...
