Search results

  1. cluster crash with unknown reason. logs attached

    For some reason most of the cluster crashed (servers rebooted). It became stable after the reboot, but there was a small downtime. I tried to find the reason in the logs but could not understand what caused it. Here are the logs of the cluster from one of the nodes that was not rebooted (on...
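
    On the surviving node, the corosync, pve-cluster and watchdog logs are usually the first place to look for quorum loss or fencing. A minimal sketch (the time window is hypothetical):

      journalctl -u corosync -u pve-cluster -u watchdog-mux --since "2021-10-01 00:00" --until "2021-10-01 06:00"
      grep -iE 'fence|quorum|watchdog' /var/log/syslog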
  2. backup entire cluster

    I am in the process of automating offline backups (based on a weekly drive change, connected via USB 3 to one of the Proxmox servers). My goal is to keep an offline backup stored in a secure location offsite (most of our IP is stored in the same server room). I have already set up...
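
    A minimal sketch of a full-node backup to a directory storage, assuming a storage named usb-offline (hypothetical) has been created on the USB drive's mount point; note that vzdump --all only covers guests on the node it runs on, so cluster-wide runs are usually scheduled per node or via a Datacenter backup job:

      vzdump --all --storage usb-offline --mode snapshot --compress zstd --mailnotification failure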
  3. Adding second corosync ring best practice

    We upgraded Proxmox to 7 with the latest Ceph, and almost everything is back to normal. Currently our main network is shared with corosync and consists of 11 nodes (4 of them with Ceph). We plan to double the server count in the near future, and I am thinking of moving the corosync...
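
    For reference, a second ring is added in /etc/pve/corosync.conf by giving every node entry a ring1_addr on the new dedicated network and incrementing config_version in the totem section before saving. A sketch with a hypothetical node name and addresses:

      node {
        name: pve-node1
        nodeid: 1
        quorum_votes: 1
        ring0_addr: 10.0.0.1
        ring1_addr: 10.10.10.1
      }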
  4. random proxmox crash after upgrade to proxmox 7

    I upgraded the entire cluster from Proxmox 6.4 -> 7 with Ceph Nautilus -> Octopus -> Pacific. The problem: we have a few CentOS 7 LXC containers that we cannot upgrade to 8. I used the workaround from one of the posts in this forum and added systemd.unified_cgroup_hierarchy=0 to GRUB. I applied it to two hosts in...
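
    The workaround mentioned above boils down to appending the flag to the kernel command line and regenerating the boot config; a sketch of the /etc/default/grub change (keep whatever options are already there):

      # /etc/default/grub  (keep any existing options)
      GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"

      update-grub    # or: proxmox-boot-tool refresh, on systems booted via proxmox-boot-tool
      reboot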
  5. lxc with docker have issues on proxmox 7 (aufs failed: driver not supported)

    After a long upgrade of Proxmox and Ceph, this is the output of dockerd -D: DEBU[2021-10-12T12:59:20.229834269Z] [graphdriver] priority list: [btrfs zfs overlay2 aufs overlay devicemapper vfs] ERRO[2021-10-12T12:59:20.230967397Z] AUFS was not found in /proc/filesystems storage-driver=aufs...
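
    With aufs gone from the Proxmox 7 kernel, the usual fix is to pin dockerd to overlay2 in /etc/docker/daemon.json and restart it; a sketch, assuming the container's root filesystem supports overlayfs:

      { "storage-driver": "overlay2" }

      systemctl restart docker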
  6. [SOLVED] proxmox ceph ( Nautilus to Octopus) upgrade issue

    As part of upgrading to Proxmox 7 I did the following steps: upgraded all nodes to the latest 6.4-13, rebooted everything, and upgraded Ceph to Octopus, based on https://pve.proxmox.com/wiki/Ceph_Nautilus_to_Octopus. After running (ceph osd pool set POOLNAME pg_autoscale_mode on) on a pool it...
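
    Before (or instead of) letting the autoscaler resize pools, its plan can be inspected, or it can be set to warn-only; a sketch with the pool name kept as a placeholder:

      ceph osd pool autoscale-status
      ceph osd pool set POOLNAME pg_autoscale_mode warn   # report only, do not change pg_num automatically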
  7. error move lxc root disk from local to ceph

    I am trying to move the LXC root disk from local to Ceph (there is over 50% free space on Ceph). This is the log for the error: /dev/rbd3 Creating filesystem with 18350080 4k blocks and 4587520 inodes Filesystem UUID: e55036f9-7f8a-4a49-af36-7929f96043cd Superblock backups stored on blocks: 32768...
  8. [SOLVED] install issue with LSI 2208

    I just installed a fresh PBS server on a Supermicro 1027R-72BRFTP. It has an internal 2208 controller. I have 2 dedicated SATA drives (boot mirror) connected directly to the motherboard, and 2 SAS SSD (8TB) drives connected to the front panel, but Proxmox does not see the two SAS drives. But...
  9. proxmox backup replication? (second twin off site)

    Finally I received the hardware for our first PBS, and before installing I would like to ask some questions to find the best approach to meet our requirements: backing up LXC/VMs (what PBS is designed for). 1. What is the best method for: an NFS share to store VM backups (Hyper-V...
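
    PBS covers the "second twin off site" case with a remote plus a pull sync job on the second server; a sketch with hypothetical names, host and schedule (the fingerprint placeholder must be replaced with the real one):

      proxmox-backup-manager remote create pbs-main --host 192.0.2.10 --auth-id sync@pbs --password 'SECRET' --fingerprint <FINGERPRINT>
      proxmox-backup-manager sync-job create main-to-offsite --remote pbs-main --remote-store datastore1 --store datastore1 --schedule 'daily'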
  10. hardware ceph question

    I am planning to add some more nodes to the grid (we mainly need more computational power, CPU/RAM), but I thought of adding HDD-based Ceph storage for low-access/archive storage (we have an existing 5-node SSD-based setup to support heavy read tasks). I am thinking of doing the following: 3x...
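
    Keeping HDD archive pools off the SSDs is usually done with CRUSH device classes; a sketch with hypothetical rule and pool names:

      ceph osd crush rule create-replicated replicated-hdd default host hdd
      ceph osd crush rule create-replicated replicated-ssd default host ssd
      ceph osd pool set archive-pool crush_rule replicated-hdd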
  11. [SOLVED] ceph mix osd type ssd (sas,nvme(u2),nvme(pcie))

    I asked a similar question around a year ago but could not find it, so I'll ask it here again. Our system: a 10-node Proxmox cluster based on 6.3-2, with a Ceph pool based on 24 SAS3 OSDs (4 or 8 TB), more to be added soon (split across 3 nodes, 1 more node will be added this week). We plan to add more...
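
    If the SAS and NVMe OSDs should end up in separate pools, one option is to override the auto-detected device class and build rules per class; a sketch with a hypothetical OSD id:

      ceph osd crush rm-device-class osd.24
      ceph osd crush set-device-class nvme osd.24
      ceph osd crush rule create-replicated replicated-nvme default host nvme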
  12. proxmox 6.3 + lxc+ nfs share

    I would like to share an NFS folder; the LXC is Ubuntu 18.04. I tried, but every time I receive an error: root@nfs-intenral:~# journalctl -xe -- The result is RESULT. Jan 12 11:21:13 nfs-intenral systemd[1]: rpc-svcgssd.service: Job rpc-svcgssd.service/start failed with result 'dependency'. Jan 12...
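
    NFS mounts are blocked by the default LXC confinement; two common ways around it, sketched with a hypothetical CTID of 120:

      pct set 120 --features mount=nfs                      # privileged container, root@pam only
      pct set 120 --mp0 /mnt/pve/nfs-share,mp=/mnt/share    # or: mount on the host and bind-mount it in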
  13. [SOLVED] can I expand/add storage

    Now I am planning to buy a new server dedicated to PBS, to back up around 50-100 VMs. We keep growing, and I would like to add storage on demand. Can I easily add storage to the main backup pool?
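
    If the PBS datastore sits on ZFS, capacity can be grown later by adding another vdev to the pool; a sketch assuming a pool named backup (hypothetical) and placeholder disk names:

      zpool add backup raidz2 /dev/sdX /dev/sdY /dev/sdZ /dev/sdW
      zpool list backup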
  14. [SOLVED] Ceph - slow Recovery/ Rebalance on fast sas ssd

    We suffered a server failure, and when the server came back Ceph had to restore/rebalance around 10-30TB of data. The SSDs are relatively high-end SAS SSDs (4TB Seagate Nytro and HP/Dell 8TB PM1643 based). The network is 2x40Gb Ethernet (one dedicated for sync/replication and one for...
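
    Recovery/backfill is throttled by per-OSD settings; on Octopus/Pacific they can be raised cluster-wide while the rebalance runs and reverted afterwards. A sketch (values are examples, not recommendations):

      ceph config set osd osd_max_backfills 4
      ceph config set osd osd_recovery_max_active 8
      # back to defaults once recovery has finished
      ceph config rm osd osd_max_backfills
      ceph config rm osd osd_recovery_max_active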
  15. [SOLVED] rpool import issue - recovery after power failure

    This morning we had a power outage in our server room; all nodes recovered except one. This node is part of the Ceph cluster (one of three), and pools are set to replication 3, so the data is safe and the cluster is stable (excluding a Ceph warning). Any idea how I can fix it?
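
    If the node stops at the initramfs prompt because rpool could not be imported after the outage, a forced import followed by exit is the usual way to get it booting again; a minimal sketch:

      # at the initramfs/busybox prompt
      zpool import -N -f rpool
      exit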
  16. ram usage sharing between lxc(ubuntu),vm(windows),host

    I assume that this is impossible, but I'll ask anyway. I need to put one LXC (Ubuntu) and one VM on each host (for large computational tasks); sometimes we need Windows and sometimes we need Linux. The problem is that the VM needs all its RAM preallocated, and it is not freed when not in use...
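
    On the VM side, ballooning is the closest thing to handing idle memory back to the host; a sketch with a hypothetical VMID, setting a 32 GiB floor under a 128 GiB maximum (the balloon driver must be installed in the Windows guest):

      qm set 200 --memory 131072 --balloon 32768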
  17. backup failure exit code 2

    Proxmox 6.2-12, any idea? INFO: starting new backup job: vzdump 140 --storage vqfiler1-lxc --remove 0 --node pve-srv1 --compress zstd --mode snapshot INFO: Starting Backup of VM 140 (lxc) INFO: Backup started at 2020-10-02 15:37:29 INFO: status = running INFO: CT Name: grid-master INFO...
  18. lxc backup slow (how can I check what limits the backup speed?)

    I have LXC containers hosted on Ceph storage (with speeds over 5GB/s). When making a backup using zstd, the speed is between 20-90MB/s. The storage I am writing the backup to has read/write speeds around 1GB/s. Proxmox 6.2.12.
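
    Two knobs worth checking in /etc/vzdump.conf before blaming the storage: the backup bandwidth limit and the number of zstd threads; a sketch, not a tuning recommendation:

      bwlimit: 0        # KiB/s, 0 = unlimited
      zstd: 0           # zstd threads; 0 = use half of the available cores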
  19. [SOLVED] lxc backup error when ZSTD selected

    On some hosts the ZSTD backup compression is not working. Any idea why? This is the error: Parameter verification failed. (400) compress: value 'zstd' does not have a value in the enumeration '0, 1, gzip, lzo'. Solution: upgrade Proxmox to the latest version on all nodes.
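
    A quick way to confirm every node really is on the same version before retrying the job, plus the usual in-place upgrade; a minimal sketch to run on each node:

      pveversion
      apt update && apt dist-upgrade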
  20. Storage question - dedicated server

    Does PBS require the same kind of storage as the normal Proxmox servers? We currently have mainly LSI cards in IT mode; will they work with PBS? Any suggestions on minimal requirements? All I could find is 8GB RAM and a high-grade boot drive.
