Hello,
Looking at your configuration, the best and most scalable approach is to use a VLAN-aware bridge on top of your bond0. This way, you avoid creating unnecessary sub-interfaces or additional bridges, as all your VLANs will be managed by one...
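As a sketch of what that could look like in /etc/network/interfaces (the interface names bond0/eno1/eno2, the bridge name vmbr0, and the addresses are assumptions; adapt them to your hardware):

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

With `bridge-vlan-aware yes` you then just set the VLAN tag per VM NIC, instead of maintaining one bridge per VLAN.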
Without further information about your VM config and the boot error, it's hard to guess what the problem is. Usually Windows is missing the VirtIO SCSI driver (vioscsi) at boot when you selected VirtIO SCSI during VM creation. You can switch to IDE as drive type...
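As a command sketch of that workaround (the VM ID 100, the disk slot scsi0, and the storage/volume name are assumptions; check `qm config 100` for your actual values):

```
# Detach the disk from the VirtIO SCSI controller (it becomes an "unused" disk)
qm set 100 --delete scsi0
# Re-attach the same volume as an IDE disk
qm set 100 --ide0 local-lvm:vm-100-disk-0
# Make sure the VM boots from the re-attached disk
qm set 100 --boot order=ide0
```

Once Windows boots via IDE you can install the VirtIO drivers from the guest, then move the disk back to SCSI the same way.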
I was confused about part of the code in verify.rs (https://github.com/proxmox/proxmox-backup/blob/master/src/backup/verify.rs):

```
let decoder_pool = ParallelHandler::new(
    "verify chunk decoder",
    4,
    move |(chunk...
```
Hello,
It looks like Proxmox couldn’t find or create the ZFS volume (zvol) for vm-100-disk-0 during the restore process.
Run zfs list to ensure that your target pool exists and is mounted.
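As a quick command sketch for that check (the pool name rpool and the storage ID local-zfs are assumptions; use your actual target pool and storage):

```
# List all pools and datasets; the target pool must show up here
zfs list
# Check pool health; a missing or faulted pool will be reported here
zpool status rpool
# Verify the storage is known to PVE and active
pvesm status
```

If the pool is present and healthy but the restore still fails, the task log of the restore job usually shows the exact `zfs create` error.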
Finally, I used this command:
`pvck --repair --file /etc/lvm/backup/pve /dev/sda3`
and answered **"yes"** twice to resolve the issue.
```
root@pve20:# pvck --repair --file /etc/lvm/backup/pve /dev/sda3
Writing label_header.crc 0x5c4af410...
```
Hey, thanks for your answers.
Do you have examples of such vendor integrations, and of how they provide the "glue" for PVE to handle the "advanced" data management? I'd be curious to understand how it works.
I assume...
Hello,
You can try with block-level clone:
1. Boot the physical machine from a Clonezilla Live USB.
2. Save the disk image to a network share or directly to the Proxmox host.
3. Create a VM in Proxmox and restore the image into its...
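As a rough sketch of the last step (the VM ID 101, the storage name local-lvm, and the image path are assumptions; adjust them to your setup), importing a saved raw disk image into the new VM could look like this:

```
# Create an empty VM (match memory/cores to the source machine)
qm create 101 --name restored-pc --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0
# Import the raw image as a new disk on the target storage
qm importdisk 101 /mnt/backup/sda.img local-lvm
# Attach the imported disk and make it bootable
qm set 101 --scsi0 local-lvm:vm-101-disk-0 --boot order=scsi0
```

For a physical Windows machine, starting with an IDE or SATA disk type instead of SCSI often avoids missing-driver boot failures.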
I was able to reproduce your issue with a local InfluxDB 3.4.1. While InfluxDB 3.x has a new v3 API, it also provides v2 and v1 compatibility APIs. Currently, Proxmox VE implements the v2 API of InfluxDB, which is compatible with InfluxDB...
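As a quick way to confirm the v2 compatibility endpoint is reachable at all (the hostname, org, bucket, and token are placeholders for your own values):

```
curl -i -XPOST "http://influxdb.example.com:8086/api/v2/write?org=my-org&bucket=proxmox&precision=ns" \
  --header "Authorization: Token my-token" \
  --data-raw "test,host=pve value=1"
```

An HTTP 204 response means the v2 write path works and PVE's InfluxDB metric server should be able to use it.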
I switched the PBS backup over to NFS and it now takes twice as long.
But at least I can create backups.
Maybe someone still has an idea for PBS backups over SMB.
/Edit:
I just checked the NFS share again...
Hi all,
I had a cluster with two nodes:
mx1 - master
mx2 - slave node
I had to reinstall and move mx1 ... so I did this:
turned off mx1
promoted mx2 to master
reinstalled mx1
added mx1 to cluster (as node)
I promoted mx1 to master
now they...
Yes, because in some cases this might lead to problems if the swap is on ZFS; see https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#zfs_swap
Using a dedicated swap partition would work, though, or zramswap: https://pve.proxmox.com/wiki/Zram
And...
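As a sketch of the dedicated-swap-partition alternative (the device /dev/sdb2 is an assumption; pick a real, unused partition on your system):

```
# Format the partition as swap and enable it immediately
mkswap /dev/sdb2
swapon /dev/sdb2
# Persist it across reboots
echo '/dev/sdb2 none swap sw 0 0' >> /etc/fstab
# Verify the new swap device is active
swapon --show
```

Because this swap lives on a plain partition rather than a zvol, it sidesteps the ZFS-swap deadlock issues described in the linked docs.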
I had exactly the same issue. But when I booted from the PVE 9 ISO into rescue boot, I noticed that Proxmox was working again. So I then ran this command over a remote SSH connection:
```
# Reinstall GRUB for UEFI systems
grub-install...
This is working as expected: storage replication inside a cluster is for continuing a VM in case of a network error together with high availability (see https://pve.proxmox.com/wiki/High_Availability ) or for reducing migration time. Thus you...
Both access paths are (usually deliberately; that is the whole point of a proxy) completely independent of each other. In other words: if you want to access via the IP address, an attempt is made to send IP packets from your current client to 10.10.1.2 to...
I've seen that same article, @Johannes S .
One of the reasons messing with swap at all is so frustrating is that it's so dependent on the individual build.
Proxmox's default ZFS-based install doesn't even create a swap partition at all:
Also check what's using the swap. Make KSM start sooner and/or change the ballooning target. Investigate and potentially change the ZFS ARC size.
Then check `top -co%MEM` and Datacenter > Search for what uses memory. Also see the memory history...
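As one way to see which processes actually hold swap (a plain-Linux sketch using /proc, not PVE-specific):

```
# Print per-process swap usage (kB) with the process name, highest first
for f in /proc/[0-9]*/status; do
    awk '/^Name:/ {n=$2} /^VmSwap:/ {print $2, n}' "$f" 2>/dev/null
done | sort -rn | head
```

This reads the `VmSwap` field from each process's status file; kernel threads have no such field and are skipped automatically.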