Oh and I noticed that on this site https://www.proxmox.com/en/services/videos/proxmox-virtual-environment/what-s-new-in-proxmox-ve-8-1
the links to the PBS and PMG Roadmaps are dead
The changelog mentions the following:
Do I automatically get the "Reworked defaults" after upgrading to Reef, or how would I apply them on an existing Ceph cluster (since the point above is about new installations)?
Hi,
thank you for your work on this project, I'm looking forward to implementing it in the coming weeks. I did notice two things in the docs that are a bit outdated, maybe you could update those:
https://pom.proxmox.com/offline-mirror.html#environment-variables This talks about the PBS, not POM...
Tested the pve-no-subscription repo on our staging cluster, no issues so far.
Is there an ETA for it arriving in the enterprise repos so we can schedule our maintenance?
I assume you are using CephCSI in K8s (Rook also uses CephCSI)? Then you could also take a look at Kasten for backups... It's not PBS, but PBS was never really designed for backing up cloud-native projects anyway.
Hi,
seems like Dirty Pipe wasn't the only vulnerability keeping us busy this week. I got notified about these new security advisories from Debian:
https://www.debian.org/security/2022/dsa-5095
https://www.debian.org/security/2022/dsa-5096
I know Proxmox uses a modified Ubuntu Kernel but I was...
Hello,
when trying to import an OVF file I get the following error:
# qm importovf 500 vmname.ovf ceph01_vm
[some progress notifications]
import failed - cowardly refusing to overwrite existing entry: scsi0
When running it with --dryrun this is the output:
{
"disks" : [
{...
I have used that HBA before in the i-variant (the ports are for the backplane in the server and not exposed in the back for connecting external storage) and it worked without issues. The e-variant should not be any different; it is the same board, only the position of the SAS connectors differs.
As...
I think Proxmox would still benefit a lot if they provided a 1st-party Rancher driver.
We would prefer getting proper commercial support for this.
Also, automatic load balancing (moving) of VMs would be somewhat of a requirement if you constantly have new VMs popping up and shutting down...
Hey, I was also looking into transferring a file to a VM running the guest agent. It seems the --content parameter of file-write only takes strings, but I want to copy a binary file. Is there a way to accomplish this?
The qemu-guest-agent docs say it accepts a base64 encoded string, I...
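Since the guest agent expects base64-encoded content, one approach is to encode the binary file before passing it along — a minimal sketch, assuming you then hand the encoded string to the file-write call as its content parameter (check your PVE version's API reference for the exact parameter handling):

```python
import base64
import os
import tempfile

def encode_for_file_write(path: str) -> str:
    """Hypothetical helper: base64-encode a binary file so it can be
    passed as a plain string to the agent's file-write."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

# Demo with arbitrary binary data, including NUL and high bytes.
payload = b"\x00\x01\xffbinary payload"
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(payload)
    tmp_path = tmp.name

encoded = encode_for_file_write(tmp_path)
# Round trip: decoding must restore the original bytes exactly.
assert base64.b64decode(encoded) == payload
os.unlink(tmp_path)
```

The guest side (or the agent itself, depending on how the call is made) then decodes the string back into the original bytes, so arbitrary binary data survives the string-only transport.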
The Ceph cluster created by Proxmox is just a "normal" Ceph cluster. While I haven't tested it with PVE (I run a standalone external Ceph cluster with Rook), I don't see a reason why it wouldn't work.
However, I would advise you to create a new pool for Rook rather than messing with the one where PVE...
Hi :)
Single file restore was introduced in newer Proxmox VE versions; your 6.3-3 is pretty old, you should upgrade. After the upgrade that functionality should be available.
Hi :) Thank you for your work!
I back up the PVE hosts to the same datastore, so I had to add handling for backups of type "host"; maybe you could add that to your script as well :) I just skipped them, since the snapshot name can be customized at backup creation anyway.
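For reference, skipping host-type backups can come down to filtering on the first segment of the snapshot identifier — a sketch with made-up snapshot names in the "type/id/timestamp" shape, not your script's actual data model:

```python
# Hypothetical snapshot identifiers in "type/id/timestamp" form.
snapshots = [
    "vm/100/2024-01-05T02:00:00Z",
    "host/pve1/2024-01-05T03:00:00Z",
    "ct/101/2024-01-05T02:30:00Z",
]

# Keep only guest backups (vm/ct); skip "host" entries, whose
# names are freely chosen at backup time anyway.
guest_backups = [s for s in snapshots if s.split("/")[0] != "host"]
```

With the list above, `guest_backups` keeps the vm and ct entries and drops the host one; the same predicate would slot into whatever loop the script already uses.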
You could split the VMs into multiple jobs that run at different times. But yes, I would like an option in PVE to limit the number of concurrent backup jobs, just like you can limit the number of concurrent migration jobs.
Yeah, that is what Thomas discussed here; one approach would be to do this with the QEMU guest agent. It is installed on basically all VMs anyway and doesn't require any additional configuration.