Search results

  1. Proxmox Cluster Broken almost every day

     Hi Tom, thanks for that note! Before I report a bug, I'm checking with the community whether it's something that happens only to me. Just to inform everyone: I updated to the latest version and so far it's stable, so it's OK. I sent another issue to Bugzilla; it's without any status changes from...
  2. Proxmox Cluster Broken almost every day

     Wow, thanks for that update! Any news so far? Any updates?
  3. Proxmox Cluster Broken almost every day

     1. Thanks for that. Can you please provide any additional info? Why does it affect this node only? Why does restarting pve-cluster.service solve it if it's a network issue? Why is the Ceph network, on the same switches, not affected by that issue? 2. What should we change? Thanks again!
  4. Proxmox Cluster Broken almost every day

     Sure, here: Virtual Environment 5.1-46, Node 'server202', 2018-04-08:
     Apr 08 07:55:54 server202 kernel: read cpu 6 ibrs val 0
     Apr 08 07:55:54 server202 kernel: read cpu 7 ibrs val 0
     Apr 08 07:55:54 server202 kernel: read cpu 8 ibrs val 0
     Apr 08 07:55:54 server202 kernel: read cpu 9 ibrs val 0
     Apr...
  5. Proxmox Cluster Broken almost every day

     Hi, I have a cluster with a 2x10Gbps network via a bond, using LACP, but almost every day, and sometimes even a few times a day, I connect to the GUI and see, in the server info of only one server, all servers and nodes with a question mark (attaching a screenshot). After I logged in to server202 and ran: systemctl restart...
  6. LXC and lsblk command

     Hi, I'm running lsblk from one of the containers and I see it shows the list of disks for all the containers under the physical node/server. How can I prevent the container owner from running that command or viewing this info? Regards,
  7. Internal Network for software defined storage

     Hi, we are using Proxmox VE with Ceph SDS on an internal network for HA, and now we are debating between the following options: 1. Supermicro MicroBlade with 2x10Gbps switches and 28 nodes in 6U (we will wait for nodes with NVMe; in my opinion that should be sometime this year) 2. Supermicro SuperBlade...
  8. fstrim for Windows Guests and Ceph Storage

     Sure. For the 2016 guest I added a dummy drive, installed the driver, and removed the dummy drive; after that I moved the disks to virtio-scsi and tried to boot.
  9. fstrim for Windows Guests and Ceph Storage

     Thanks Klaus! I did, and for the Win2016 guest I also got a blue screen after moving to virtio-scsi. For the Win2012 guest, it doesn't detect the VirtIO SCSI driver from either the latest or the stable ISOs. Suggestions?
  10. fstrim for Windows Guests and Ceph Storage

     Hi, thanks for that response! As you can see in the screenshot, discard is disabled. Ceph + VirtIO + OS type: Win10/2016. Any suggestion? Should the OS be powered off for that?
  11. fstrim for Windows Guests and Ceph Storage

     Hello, how can I run an fstrim solution for Windows KVM guests with Ceph storage? Ideally it would run on a regular schedule. Thanks!
  12. Rescan LVM-Thin Volume

     Thanks, I will. Wondering about LXC: why doesn't the Proxmox daemon run that every day? https://forum.proxmox.com/threads/lxc-lvm-thin-discard-fstrim.34393/#post-168569
  13. Rescan LVM-Thin Volume

     Hi, I did it now, https://forum.proxmox.com/threads/lxc-lvm-thin-discard-fstrim.34393/#post-168569, and it works well. Now, what is the solution for Windows KVM guests with Ceph storage? Regards,
  14. Rescan LVM-Thin Volume

     Thanks again. As for #2, "This is enabled by default": if it's enabled, why do I have this issue? Regards,
  15. Rescan LVM-Thin Volume

     Thanks for that option: 1. These are SAS disks and not SSDs; I guess that doesn't matter. 2. How can I enable the discard mount option for LXC containers? 3. "Trim your fs with fstrim": does that mean at the container level or at the main filesystem level? Regards,
  16. Rescan LVM-Thin Volume

     All of vm-2161-disk-1, vm-319-disk-1, vm-321-disk-1, and vm-324-disk-1 are containers. How can I solve that size issue? Regards,
  17. Rescan LVM-Thin Volume

     Hi Wolfgang, how can we enable discard on LXC containers? Regards,
  18. Rescan LVM-Thin Volume

     No, as far as I know... it's a clean installation of Proxmox 5 on a hardware RAID 1 volume.
  19. Rescan LVM-Thin Volume

     Please, here:
     root@server216:~# lvs -a
       LV           VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
       data         pve twi-aotz-- 996.50g             97.78  47.63
       [data_tdata] pve Twi-ao---- 996.50g
       [data_tmeta] pve ewi-ao---- 128.00m...
  20. Rescan LVM-Thin Volume

     Hi, I have a node with 4 LXC containers: one container is about 85GB, one is about 10GB, one about 105GB, and the last is 8GB, a total of about 210GB. But the LVM-thin volume shows me these details: Usage: 97.76% (974.18 GiB of 996.50 GiB). The local storage has no backups/images at all. Any suggestion how I can "rescan" the...
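The "fstrim for Windows Guests and Ceph Storage" excerpts above all revolve around the same prerequisite: the guest disk must be attached through a virtio-scsi controller with discard enabled before TRIM commands from the guest can reach the Ceph RBD image. A minimal sketch of the relevant VM config, assuming a hypothetical VM id 100 and a storage named ceph-vm (both placeholders, not taken from the threads):

```
# /etc/pve/qemu-server/100.conf (excerpt; VM id and storage name are examples)
scsihw: virtio-scsi-pci
scsi0: ceph-vm:vm-100-disk-1,discard=on
```

With the VirtIO SCSI driver installed in the guest, a manual trim can then be triggered from an elevated PowerShell prompt with Optimize-Volume -DriveLetter C -ReTrim (Windows also schedules this weekly by default). Note that changing the controller type or the discard flag generally takes effect only after the VM is powered off and started again, not after a reboot from inside the guest.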
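For the LXC questions in the "Rescan LVM-Thin Volume" excerpts, the space is typically reclaimed from the host rather than from inside the container: pct fstrim (available in Proxmox VE 5) trims a container's mounted filesystems so that LVM-thin can release the freed blocks, bringing the pool's Data% back in line with actual usage. A sketch of a host-side cron script, reusing the container ids from the excerpts above purely as placeholders:

```
#!/bin/sh
# /etc/cron.weekly/pct-fstrim (example; adjust the container id list to match
# the node, e.g. via "pct list")
for ctid in 319 321 324 2161; do
    pct fstrim "$ctid"
done
```

This is run on the Proxmox host, not inside the containers; Proxmox does not schedule it automatically, which is why the Data% figure drifts upward over time.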
