Search results

  1. cluster.conf.new: Permission denied

    This is totally wrong. After you stop the cluster, there is nothing to edit in /etc/pve.
  2. Multiple clusters backup to pbs

    Another workaround would be to create/use a different datastore for each Proxmox cluster.
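    That workaround can be sketched as a couple of admin commands on the PBS host. The datastore names and paths below are illustrative, not from the thread:

    ```shell
    # One datastore per PVE cluster, so identical VMIDs from different
    # clusters never land in the same backup group.
    # Example names/paths -- adjust to your environment; run as root on PBS.
    proxmox-backup-manager datastore create pve-cluster-a /mnt/datastore/pve-cluster-a
    proxmox-backup-manager datastore create pve-cluster-b /mnt/datastore/pve-cluster-b
    ```

    Each PVE cluster is then pointed at its own datastore when the PBS storage is added, so backup groups from the two clusters can no longer collide.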
  3. Multiple clusters backup to pbs

    Awesome! You guys rock. When I grow up I will definitely purchase support!!
  4. Multiple clusters backup to pbs

    Hello again. It occurred to me today that you need a different VM/CT numbering scheme to back up two separate Proxmox clusters on the same PBS box. Backups are grouped by VM/CT number only on PBS. This could cause a lot of confusion. You can back up two different VMs with the same name...
  5. [SOLVED] Snapshot running VM taking forever

    FYI: Today's update (I think it was pve-container) solved the issue.
  6. [SOLVED] Snapshot running VM taking forever

    It seems to be happening to all CTs that have a bind mount to the cephfs mount on the host. If I remove the bind mount, the issue doesn't appear. I had it working until last week; why is this happening now?
  7. [SOLVED] Snapshot running VM taking forever

    I have this happening too. CT snapshot backups fail and leave a "preparing" snapshot that can only be deleted from the CT conf file... My error is: Use of uninitialized value in numeric eq (==) at /usr/share/perl5/PVE/VZDump.pm line 478. INFO: starting new backup job: vzdump 100 --storage cephfs...
  8. Proxmox VE and Traefik Reverse Proxy

    Would you be kind enough to share your configuration files for this? Thanks
  9. Ceph pool tweaking....

    Without using Ceph, my currently described environment works without a hitch (Proxmox-wise): I can fully manage the VMs and CTs on each site without a problem. So the question remains: can a Proxmox cluster manage multiple Ceph clusters? Or should I isolate the Proxmox clusters per site? Can we...
  10. Ceph pool tweaking....

    Thank you for your reply, Alwin. Is it possible to have 3 nodes per site in 3 sites, all clustered with Proxmox, and create a separate local Ceph cluster on each site, managed by Proxmox?
  11. Ceph pool tweaking....

    I've been reading a lot of the Ceph documentation, but it looks confusing. I was hoping for some Proxmox-specific instructions, because I've broken my previous cluster by manually running ceph commands.
  12. Ceph pool tweaking....

    I have managed to successfully deploy a test cluster over three sites connected with 100 Mbit fiber. All three nodes have 3 OSDs each, and there are the default pools for cephfs-data and cephfs-metadata. The performance of this one pool stretched across the 100 Mbit links is low but acceptable for...
  13. ZFS resilvering gone bad

    New issue... After another hard disk failure, I shut down, replaced the faulty disk with a new one, and after boot: root@pve:~# sgdisk -Z /dev/sdg GPT data structures destroyed! You may now partition the disk using fdisk or other utilities. root@pve:~# zpool replace -f tank 4952218975371802621...
  14. ZFS resilvering gone bad

    And it's finished and fixed/healthy with no further intervention: Thank you all for the support!
  15. ZFS resilvering gone bad

    zpool replace POOLNAME ORIGINAL_DRIVE SECOND_NEW_DRIVE (check the first line of the screenshot)
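    Spelled out as a sketch with placeholder device names (the pool name `tank` appears in the thread; the device paths are illustrative), the replacement flow is roughly:

    ```shell
    # 1) Clear any stale partition table on the replacement disk.
    sgdisk -Z /dev/sdNEW

    # 2) Ask ZFS to swap the failed member for the new disk; the old
    #    drive can be given as a device path or as the numeric GUID
    #    shown by 'zpool status'.
    zpool replace tank /dev/sdOLD /dev/sdNEW

    # 3) Watch the resilver until the pool reports healthy again.
    zpool status tank
    ```

    These commands act on live disks and a live pool, so double-check the device names against `zpool status` before running them.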
  16. ZFS resilvering gone bad

    Things are looking better...
  17. How do I share physical drives to Freenas VM?

    Vshaulsk, you can use Webmin for all that you mention. Since you are familiar with Zentyal, you could always move that to a container on Proxmox... There are tons of tools that can be sandboxed, each in its own container (more versatile for maintenance/backup/upgrade). You could also use a...
  18. ZFS resilvering gone bad

    The first disk replacement started giving out SMART errors during resilvering, and I just shut down the host, removed the disk, and put in the new one... Which disks are you referring to? And how could this happen when all the disks were empty (no partitions) when the array was created? I guess...
  19. ZFS resilvering gone bad

    root@pve1:~# fdisk -l Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: gpt Disk identifier...
