I managed to recover the cluster while all VMs kept running:
1) copy corosync.conf from /etc/corosync/corosync.conf to /etc/pve/corosync.conf
2) recover all VM configs from VM backups. I went to the PBS and recovered the newest qemu-server.conf.blob from each VM.
copy them to the...
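For reference, the two steps above can be sketched as commands. The VMID, snapshot timestamp and repository below are placeholders, not values from my actual setup:

```shell
# 1) Put the intact corosync.conf back into the clustered filesystem:
cp /etc/corosync/corosync.conf /etc/pve/corosync.conf

# 2) For each VM, restore the newest qemu-server.conf.blob from PBS
#    (VMID 100, the timestamp and the repository are placeholders):
proxmox-backup-client restore \
    "vm/100/2023-01-01T00:00:00Z" qemu-server.conf.blob \
    /etc/pve/qemu-server/100.conf \
    --repository backupuser@pbs@pbs.example.com:datastore
```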
Unfortunately I have no etckeeper on this node. I have tons of VM backups on a Proxmox Backup Server. I think I can restore the VM configs easily, but not the corosync.conf, the storage config and all the other virtual files. I thought about undeleting in the SQLite database in...
Today I tried a "new method" of removing a Proxmox cluster node (one of 4). I thought it was a good idea to issue "rm -rf /boot /etc" to make sure the node never comes back. Well, this was my most stupid idea in years.
Unfortunately this deleted most configs in /etc/pve like...
Something is not right here, does anyone have an idea?
root@server:~# pmgversion -v
proxmox-mailgateway: not correctly installed (API: 7.3-6/5a2550b8, running kernel: 5.15.107-2-pve)
I am quite sure that I installed PMG 7.3-6:
pmg-api/7.3-6/5a2550b8 (running kernel: 5.4.119-1-pve)
Nevertheless, pmg7to8 gives me the following error:
Checking proxmox-mailgateway package version..
Use of uninitialized value in pattern match (m//) at...
I am planning a network setup with a 3-host Proxmox cluster at a hoster. The VMs will have private IP addresses in a VLAN and will be accessed only via a load balancer at the hoster.
However, the VMs should be able to reach the internet for updates and deployments.
I want to be able to migrate...
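One common way to give such VMs outbound internet access is NAT on the Proxmox hosts themselves. A minimal sketch, assuming a private bridge and subnet (vmbr1, 10.10.10.0/24) and vmbr0 as the public uplink, all of which are my assumptions, not details from the post:

```shell
# Enable forwarding on the host:
echo 1 > /proc/sys/net/ipv4/ip_forward

# Masquerade traffic from the private VM subnet out of the public bridge:
iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o vmbr0 -j MASQUERADE
```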
I recently installed a Proxmox server with local-btrfs.
It is a 2-disk RAID 1 with a size of 1.8 TB.
btrfs fi usage / tells me that 1.67 TB are in use.
In fact, only 934 GB are in files on the disk.
When checking the disk space with the btrfs tool btdu I can see that the disk space of the VMs is...
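To narrow down where the space goes, these commands are a reasonable start (the storage path is the default for a Proxmox btrfs install and may differ on your system):

```shell
# Allocated vs. used space per profile; with btrfs RAID1 every
# data block exists twice across the devices:
btrfs filesystem usage /

# Actual extent usage of the VM images, including shared/reflinked extents:
btrfs filesystem du -s /var/lib/pve/local-btrfs

# If installed, compsize shows compression and reflink savings:
compsize /var/lib/pve/local-btrfs
```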
Well, in the hoster environment the main problem is the shared 1 GBit/s uplink to the Proxmox server, which is used by several VMs plus backups.
This 1 GBit/s uplink is the main bottleneck. I cannot change that within budget, so I somehow have to live with it and work around it.
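A back-of-the-envelope calculation shows why the uplink hurts, and vzdump's bandwidth cap can at least keep backups from saturating it. The 500 GiB figure is an example, not from my setup:

```shell
# 1 GBit/s is roughly 119 MiB/s of payload; time to move 500 GiB:
gib=500
mib_per_s=119
secs=$(( gib * 1024 / mib_per_s ))
echo "${secs}s (~$(( secs / 60 )) min)"

# vzdump can be capped so backups leave headroom for the VMs
# (50 MiB/s cap shown; --bwlimit takes KiB/s):
# vzdump 104 --bwlimit 51200
```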
I have a setup at a hoster where the Proxmox VM server is damned fast using NVMes and the Proxmox Backup Server is terribly slow; for example, network bandwidth sometimes is only 30 MB/s.
What happens now: the backup of a VM starts really fast with, let's say, 500 MB/s reading speed.
A customer has 2 servers at Hetzner in 2 different IP subnets and in 2 different datacenters and asked me to modernize them.
I would install a Proxmox PVE distribution on each server.
Does anyone have experience with whether it is a good idea to join the servers into a PVE cluster? I could easily...
I have a Proxmox Backup Server in a datacenter and would like to have a copy in another location.
Proxmox sync is no option; there is not enough online bandwidth.
Tape is also no option; we will not invest in tape technology again.
I am taking a differential ZFS snapshot on a hard disk from...
Thanks for inspiring me, this sounds like a good start. Yes, the datastore is a ZFS dataset.
I am thinking about an initial zfs synchronisation on site and then only transferring differential snapshots. They should fit on an external drive for transport.
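That workflow can be sketched roughly as follows; the pool/dataset names, snapshot labels and mount points are examples, not from my setup:

```shell
# Initial full send, done on site:
zfs snapshot tank/pbs-datastore@base
zfs send tank/pbs-datastore@base > /mnt/usb/pbs-base.zfs

# Later: only the differential since @base goes onto the external drive:
zfs snapshot tank/pbs-datastore@weekly1
zfs send -i @base tank/pbs-datastore@weekly1 > /mnt/usb/pbs-weekly1.zfs

# At the remote site, apply the stream to the replica dataset:
zfs receive -F backup/pbs-datastore < /mnt/usb/pbs-weekly1.zfs
```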
I still have the query-pbs-bitmap-info error even with my timeout modification and the newest 2.0.14 packages.
My Backup Server is also not that slow; it is a 4-mirror, 8-disk ZFS RAID 10 and even has ZFS special devices on NVMe.
Also, the server is idle during backup time and not doing anything...
I have a Backup Server with around 30 TB of backup data. I would like to take all backups offsite once a week.
Unfortunately, the daily change of backup blocks is so large that we cannot replicate it online. I mean, it would be complete overkill to have a 10 GBit/s line only for replicating...
I applied the patch, now I get another error message.
Yes, the Backup Server at Hetzner is a little slower than my servers in my own datacenter. But it is not really slow.
INFO: starting new backup job: vzdump 104 --node zw-pm-1 --mode snapshot --storage backups21 --remove 0