Search results

  1.

    random proxmox crash after upgrade to proxmox 7

    I upgraded the entire cluster from Proxmox 6.4 -> 7, with Ceph Nautilus -> Octopus -> Pacific. The problem: we have a few CentOS 7 LXC containers that we cannot upgrade to 8. I used the workaround from one of the posts in this forum and added systemd.unified_cgroup_hierarchy=0 to the GRUB command line. I applied it to two hosts in...
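    For reference, that workaround is usually applied like this on a PVE host that boots via GRUB (a sketch of the common approach, not the exact lines from the linked post):

        # /etc/default/grub -- force the legacy cgroup v1 layout so CentOS 7 containers keep working
        GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"

        # apply the change and reboot the node
        update-grub
        reboot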
  2.

    lxc with docker have issues on proxmox 7 (aufs failed: driver not supported)

    Well, is 'aufs' loaded on the host kernel (lsmod)? Have you tried with overlay2? No, the LXC host is Ubuntu 18.04:
        arch: amd64
        cores: 16
        cpulimit: 4
        hostname: docker1
        memory: 8192
        net0: name=eth0,bridge=vmbr0,gw=xxxxxxxxx.254,hwaddr=xxxxxx,ip=xxxxxxxx/22,type=veth
        onboot: 1
        ostype: ubuntu
        rootfs...
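    If aufs is not available on the PVE 7 kernel, pointing Docker inside the container at overlay2 is the usual route; a minimal sketch (standard Docker config file and driver name, not quoted from the thread):

        # inside the container: /etc/docker/daemon.json
        {
            "storage-driver": "overlay2"
        }

        # then restart Docker and re-check `docker info`
        systemctl restart docker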
  3.

    Unified cgroup v2 layout Upgrade warning PVE 6.4 to 7.0

    I have a similar issue: I tried to do it, but the CentOS LXC doesn't have internet access.
  4.

    lxc with docker have issues on proxmox 7 (aufs failed: driver not supported)

    After the long upgrade of Proxmox and Ceph, this is the output of dockerd -D:
        DEBU[2021-10-12T12:59:20.229834269Z] [graphdriver] priority list: [btrfs zfs overlay2 aufs overlay devicemapper vfs]
        ERRO[2021-10-12T12:59:20.230967397Z] AUFS was not found in /proc/filesystems storage-driver=aufs...
  5.

    [SOLVED] proxmox ceph ( Nautilus to Octopus) upgrade issue

    Update: after the upgrade to Proxmox 7 it was solved, and after updating Ceph to 16 (Pacific) it still works.
  6.

    [SOLVED] proxmox ceph ( Nautilus to Octopus) upgrade issue

    As part of upgrading to Proxmox 7 I did the following steps: upgraded all nodes to the latest 6.4-13, rebooted everything, upgraded Ceph to Octopus based on (https://pve.proxmox.com/wiki/Ceph_Nautilus_to_Octopus). After running (ceph osd pool set POOLNAME pg_autoscale_mode on) on a pool it...
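    For context, enabling the autoscaler per pool and inspecting what it wants to do uses the standard Ceph commands (POOLNAME is a placeholder, as in the post):

        ceph osd pool set POOLNAME pg_autoscale_mode on
        ceph osd pool autoscale-status   # shows current and target PG counts per pool
        ceph -s                          # watch overall cluster health while PGs are adjusted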
  7.

    error move lxc root disk from local to ceph

    I think you are correct. Here is my actual size:
        zfs get used pve-blade-108-internal-data
        NAME                         PROPERTY  VALUE  SOURCE
        pve-blade-108-internal-data  used      78.0G  -
    78G used when allocating 70G. I'll try to extend it and move it again once the server becomes available.
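    A quick way to see how full the source dataset and its pool actually are before retrying the move (generic ZFS commands, reusing the dataset name from the post):

        zfs get used,available pve-blade-108-internal-data
        zpool list pve-blade-108-internal-data   # overall pool size, allocation and free space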
  8.

    error move lxc root disk from local to ceph

    zfspool: pve-blade-108-internal-data
        pool pve-blade-108-internal-data
        content rootdir,images
        nodes pve-blade-108
  9.

    error move lxc root disk from local to ceph

    Perhaps. Do you know how I can check it?
  10.

    error move lxc root disk from local to ceph

    arch: amd64
    cores: 96
    hostname: svr-ub-108
    memory: 401408
    mp1: <hidden>
    mp2: <hidden>
    mp3: <hidden>
    mp4: <hidden>
    mp5: <hidden>
    mp6: <hidden>
    mp7: <hidden>
    mp8: <hidden>
    net0: name=eth0,bridge=vmbr0,gw=<hidden>,hwaddr=<hidden>,ip=<hidden>,type=veth
    net1...
  11.

    error move lxc root disk from local to ceph

    I'm trying to move the LXC root disk from local to Ceph (we have over 50% free space on Ceph). This is the log for the error:
        /dev/rbd3
        Creating filesystem with 18350080 4k blocks and 4587520 inodes
        Filesystem UUID: e55036f9-7f8a-4a49-af36-7929f96043cd
        Superblock backups stored on blocks: 32768...
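    Since the move creates a new RBD image sized like the container's root disk, it can help to compare that size against the target pool before retrying (generic commands; <CTID> and the pool name are placeholders, not values from the thread):

        pct config <CTID> | grep rootfs   # configured size of the container's root disk
        ceph df                           # free and used space per Ceph pool
        rbd du -p <rbd-pool-name>         # space used by existing images in the target pool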
  12.

    [SOLVED] install issu with lsi 2208

    I just installed an external LSI 3008 in IT mode and it works.
  13.

    [SOLVED] install issu with lsi 2208

    NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    fd0            2:0    1     4K  0 disk
    sda            8:0    0    20G  0 disk
    |-sda1         8:1    0  1007K  0 part
    |-sda2         8:2    0   512M  0 part
    `-sda3         8:3    0  19.5G  0 part
      |-pbs-swap 253:0    0   2.4G  0 lvm  [SWAP]
      `-pbs-root...
  14.

    [SOLVED] install issu with lsi 2208

    I just installed a fresh PBS server on a Supermicro 1027R-72BRFTP. It has an internal 2208 controller. I have 2 dedicated SATA drives (boot mirror) connected directly to the motherboard, and 2 SAS SSD (8TB) drives connected to the front panel. But Proxmox does not see the two SAS drives. But...
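    A few generic checks that help tell whether the 2208 itself is the problem (standard Linux commands, not taken from the thread):

        lspci | grep -i -e lsi -e sas       # is the controller visible on the PCI bus?
        dmesg | grep -i -e mpt -e megaraid  # did a driver bind to it and find the drives?
        lsblk                               # do the SAS disks show up as block devices?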
  15.

    proxmox backup replication? (second twin off site)

    Finally I received the hardware for our first PBS, and before installing I would like to ask some questions to figure out the best approach to meet our requirements: backing up LXC/VMs (which is what PBS is designed for). 1. What is the best method for: an NFS share to store VM backups (Hyper-V...
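    On the "second twin off site" part: PBS ships with remote sync jobs for exactly this, where the offsite box pulls datastore contents from the primary on a schedule. A rough sketch with placeholder hostnames, datastore names and credentials:

        # on the offsite PBS: register the primary as a remote, then pull from it
        proxmox-backup-manager remote create primary-pbs \
            --host pbs1.example.com --auth-id sync@pbs --password '...' \
            --fingerprint <primary-cert-fingerprint>

        proxmox-backup-manager sync-job create pull-from-primary \
            --remote primary-pbs --remote-store datastore1 \
            --store offsite-store --schedule daily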
  16.

    proxmox 6.3 + lxc+ nfs share

    I made a separate container for Docker and NFS (to reduce and share load across the cluster).
  17.

    proxmox 6.3 + lxc+ nfs share

    Check my other post: https://forum.proxmox.com/threads/nfs-share-from-lxc.65158/#post-385775
  18.

    hardware ceph question

    The plan is to use HDDs (not SSDs, so 6Gb/s should be good enough) for Ceph storage. So for 6Gb/s, can I use the onboard controller to work with Ceph?
  19.

    hardware ceph question

    Do I need another LSI card, or is the onboard one good enough? I plan on one of the following models: 6027TR-DTRF+, 6027TR-D71RF+, 6027TR-D70RF+
  20.

    hardware ceph question

    Right, I forgot. Is it a good idea to make multiple Ceph nodes with a small number of OSDs? In this case each will have up to 6. Or is it better to get a server with bigger capacity (more HDD sleds)?