Search results

  1. ZFS snapshot problem

    Same problem: .zfs snapshots are not accessible.
  2. Proxmox Backup Client 2.x for Debian Buster

    There is no Debian package of the PBS client 2.x for Debian Buster, so we can't upgrade our hosts.
  3. Proxmox Backup Client 2.x for Debian Buster

    Hello, I have to back up hosts running Debian 10. The PBS Debian repo only carries version 1.1.14. Using deb http://download.proxmox.com/debian/pbs-client buster main it installs version: apt-cache show proxmox-backup-client | grep 'Package\|Version' | head -n2 Package: proxmox-backup-client...
  4. Bonding and Switch configurations

    Hello, I've set up a cluster of 3 nodes, using bonding mode active/backup and 2 switches for redundancy like this (simplified diagram): I don't want to make any special configuration on the switches, to stay vendor-free. Is it true that the only way to achieve this involving two switches...
  5. EFI partition sync setup fails

    Hi, yes, /boot is redundant because it's on MD RAID. But /boot/efi cannot be on MD RAID; it must be a plain vanilla vfat partition. I tried putting it on MD RAID, but boot fails. So the only way to sync /boot/efi seems to be proxmox-boot-tool. Am I wrong?
  6. EFI partition sync setup fails

    Here they are: # proxmox-boot-tool status Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace.. System currently booted with uefi E7D9-2C88 is configured with: uefi (versions: 5.15.74-1-pve, 5.15.83-1-pve, 5.19.17-1-pve) # proxmox-boot-tool kernel list Manually selected...
  7. EFI partition sync setup fails

    It worked: # umount /boot/efi # umount /boot The original EFI partition was /dev/sdb1; I want to sync the empty one on /dev/sda1: proxmox-boot-tool init /dev/sda1 Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace.. UUID="E7D9-2C88" SIZE="536870912" FSTYPE="vfat"...
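    The EFI-sync procedure pieced together from this thread can be sketched as follows. This is an illustrative sketch, not the poster's exact session: the device name /dev/sda1 comes from the posts above, and it assumes a UEFI system where the new ESP is still empty and unformatted.

    ```shell
    # Sketch: bring a second, empty EFI system partition in sync with
    # proxmox-boot-tool (device name /dev/sda1 taken from the thread above).
    umount /boot/efi                   # stop using the old ESP directly

    proxmox-boot-tool format /dev/sda1 # format the new ESP as vfat (skip if already vfat)
    proxmox-boot-tool init /dev/sda1   # register it and copy kernels/bootloader onto it

    proxmox-boot-tool status           # verify both ESPs are now tracked
    proxmox-boot-tool refresh          # re-sync all tracked ESPs after kernel updates
    ```

    After init, kernel package hooks keep all registered ESPs updated automatically; refresh is only needed to force a manual re-sync.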
  8. EFI partition sync setup fails

    Here it is: # lsblk -o +FSTYPE NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT FSTYPE sda 8:0 1 114.6G 0 disk ├─sda1 8:1 1 512M 0 part vfat ├─sda2 8:2 1 954M 0 part linux_raid_member │ └─md0...
  9. EFI partition sync setup fails

    I have a machine that boots via UEFI. Currently the filesystems are mounted as follows: Filesystem Size Used Avail Use% Mounted on /dev/mapper/sys-root 15G 4.3G 9.6G 31% / /dev/md0 920M 275M 582M 33% /boot /dev/sdb1 511M...
  10. [SOLVED] Documentation errors: non-subscription repos for PBS client for Debian Buster

    Solved using the PVE 6 key: sudo wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg Please change the topic title to: [SOLVED] Documentation errors: non-subscription repos for PBS client for Debian Buster
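    Combining this solution with the repo line quoted in result 3, the full PBS-client-on-Buster setup can be sketched as below. The repo file name pbs-client.list is my own choice for illustration; the URLs and key path are taken from the posts.

    ```shell
    # Sketch: enable the PBS client 1.x repo on Debian Buster, using the
    # PVE 6.x release key as described in the thread (run as root).
    wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg \
      -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg

    # pbs-client.list is an arbitrary file name chosen for this example
    echo "deb http://download.proxmox.com/debian/pbs-client buster main" \
      > /etc/apt/sources.list.d/pbs-client.list

    apt update
    apt install proxmox-backup-client
    ```

    As noted in result 3, this repo only carries the 1.1.x client; there is no 2.x build for Buster.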
  11. [SOLVED] CEPH how to delete dead monitor?

    SOLVED: removing the directory cleaned up the GUI. mv /var/lib/ceph/mon/ceph-0 ~
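    The manual cleanup described in this thread (results 11–13) can be sketched as follows. The exact systemd unit name and the note about /etc/pve/ceph.conf are assumptions based on a default PVE Ceph layout, not quoted from the posts.

    ```shell
    # Sketch: manually remove a dead Ceph monitor (mon.0) on a PVE node,
    # combining the steps mentioned across this thread (run as root).
    ceph mon remove 0                      # drop mon.0 from the monmap, if still listed

    mv /var/lib/ceph/mon/ceph-0 ~          # move the stale monitor data dir out of the way

    # Unit name below is assumed from the default naming scheme (ceph-mon@<id>)
    rm /etc/systemd/system/ceph-mon.target.wants/ceph-mon@0.service

    # If mon.0 still appears in the GUI, check for a leftover [mon.0]
    # section or mon_host entry in /etc/pve/ceph.conf
    ```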
  12. [SOLVED] CEPH how to delete dead monitor?

    Sorry, I thought it was a good idea, because Googling sent me here. Yes, it shows only the newly created monitor (I destroyed and recreated it to migrate to RocksDB): ls -al /etc/systemd/system/ceph-mon.target.wants/ total 8 drwxr-xr-x 2 root root 4096 Dec 20 12:03 . drwxr-xr-x 16 root root 4096...
  13. [SOLVED] CEPH how to delete dead monitor?

    I've faced the same problem, on a cluster upgraded from PVE 5.x up to 7.3 with Ceph Pacific. I failed to remove mon.0; I had to remove it manually from the command line and delete the systemd symlink. The other monitors (mon.1 and mon.2) went smoothly. But now I still have mon.0 in the GUI with status and address...
  14. Stop Backup task at specific time

    Hello, is there a way to stop a scheduled backup job from the CLI? I get the task listing with pvenode task list but cannot stop a task, like in the GUI. How can I achieve this? Thanks
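    One way to do what this post asks, sketched under the assumption that the PVE REST API's task-stop endpoint (DELETE on /nodes/{node}/tasks/{upid}) is what the GUI's stop button calls; the UPID placeholder must be replaced with a real task ID from the listing.

    ```shell
    # Sketch: stop a running task from the CLI via the PVE API.
    pvenode task list                        # find the UPID of the running backup task

    # Stopping a task maps to an HTTP DELETE on its task path;
    # replace <UPID> with the full UPID string from the listing above.
    pvesh delete "/nodes/$(hostname)/tasks/<UPID>"
    ```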
  15. PVE Cluster Node restore

    I would like to reinstall a cluster node from scratch, to use ZFS RAID instead of an old mdadm-with-LVM partitioning scheme. I have a backup made with PBS. Is it possible to: format the machine, install PVE from the ISO using ZFS with RAID1 instead of my older custom partitioning scheme, restore...
  16. [SOLVED] Windows 11 VM with CEPH storage fails to start

    I have enabled the Proxmox Ceph Pacific apt repository and upgraded the hypervisor. Now the VM starts correctly. Thank you!
  17. [SOLVED] Windows 11 VM with CEPH storage fails to start

    Yes, it's a cluster of 3 hypervisor-only nodes and 3 Ceph storage-only nodes, all running PVE 7.2. Done for both the EFI and TPM state disks. Same error message. I think the trouble is with: kvm: -drive...
  18. [SOLVED] Windows 11 VM with CEPH storage fails to start

    It fails: It works: Here it is: PS: I tested another VM: moving the EFI disk out of a CEPH pool lets the machine start.
  19. [SOLVED] Windows 11 VM with CEPH storage fails to start

    Yes, all the other VMs work fine. The message was always displayed.
  20. [SOLVED] Windows 11 VM with CEPH storage fails to start

    Hello, I've configured a Win 11 VM as follows: Starting up the VM fails with this task log: What's wrong? Thank you, GV
