Search results

  1. LVM No such file or directory - After Reboot

    Hi, After reboot, all VMs are failing to initialize. PVE Version:
      proxmox-ve: 5.2-2 (running kernel: 4.15.17-3-pve)
      pve-manager: 5.2-2 (running version: 5.2-2/b1d1c7f4)
      pve-kernel-4.15: 5.2-3
      pve-kernel-4.13: 5.1-45
      pve-kernel-4.15.17-3-pve: 4.15.17-12
      pve-kernel-4.15.17-2-pve: 4.15.17-10...
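    If a volume group simply was not activated at boot, rescanning and activating it by hand is a common first check. A minimal sketch; "pve" is the default VG name on a standard install, but verify with vgs:

      # Rescan for physical volumes and volume groups
      pvscan
      vgscan
      # Activate every logical volume in all detected VGs
      vgchange -ay
      # Then confirm the LVs are visible again
      lvscan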
  2. Clone bandwidth limit

    Hello, how can I set a clone rate limit in MB/s?
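    A hedged sketch: newer PVE releases (5.3 and later, so after the version current in this thread) accept a bwlimit value in KiB/s, both per operation and as a datacenter-wide default; the numbers below are illustrative only:

      # Per-operation limit: 102400 KiB/s is roughly 100 MB/s
      qm clone 100 101 --name clone-test --bwlimit 102400

      # Datacenter-wide defaults in /etc/pve/datacenter.cfg
      bwlimit: clone=102400,migration=204800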
  3. 4.15 based test kernel for PVE 5.x available

    Hello @martin , I run Proxmox on an 11th-generation Dell PowerEdge. Last week, after a kernel update, my GRUB menu wasn't even shown, and I had to reinstall Proxmox. I haven't tested this fix yet. Do you think that was related to this bug?
  4. Proxmox + LVM cache

    Hey there, I've seen that using ZFS on hardware RAID is not advisable. Because of that advice, in my setup with hardware RAID I've been planning to use LVM with cache instead of ZFS. Has anyone seen good performance running Proxmox + LVM cache?
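    For reference, attaching an lvmcache pool to an existing data LV looks roughly like this; the device, VG, and LV names are hypothetical:

      # Add the SSD to the existing volume group
      pvcreate /dev/sdb
      vgextend vg0 /dev/sdb
      # Create a cache pool on the SSD, then attach it to the data LV
      lvcreate --type cache-pool -L 100G -n cpool0 vg0 /dev/sdb
      lvconvert --type cache --cachepool vg0/cpool0 vg0/data

    The default writethrough mode only accelerates reads; writeback caches writes as well but risks data loss if the cache device fails.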
  5. ZFS write (High IO and High Delay)

    Yes. The config I posted above is from the server that is having the slow write issue. It runs a 512-byte-sector SSD. The other server, which is running fine, has an enterprise SSD (4K sector size). I decided against using ZFS with 512-byte-sector SSDs. I've already tried everything, but with no success.
  6. ZFS write (High IO and High Delay)

    zpool status:
      root@br01:~# zpool status
        pool: rpool
       state: ONLINE
        scan: scrub repaired 0B in 1h31m with 0 errors on Sun Apr 8 01:55:16 2018
      config:
              NAME        STATE     READ WRITE CKSUM
              rpool       ONLINE       0     0     0
                sda2      ONLINE       0     0     0...
  7. ZFS write (High IO and High Delay)

    Hello @6uellerbpanda , I have 32 GB RAM.
    Server 1 (running ZFS smoothly, fast read and fast write operations):
      Xeon E3 1230 v5, 32 GB RAM, 480 GB SSD (4K sector size), ashift 12 and zvol/zpool 128K
    Server 2 (running ZFS terribly, fast read but very poor write operations):
      Xeon E3 1230...
  8. ZFS write (High IO and High Delay)

    Hello Alwin, I've read that ARC is only used for caching read operations, not writes. I think this problem is caused by my SSDs being 512-byte-sector, with ZFS set to ashift 9 and a zpool block size of 128K. I have another setup with 4K SSDs, ashift 12, and zpool block size 128K (this setup is...
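    To verify that suspicion, ashift can be read back from the pool labels; a quick check, with the pool name as an assumption:

      # ashift is fixed per vdev at creation time
      zdb -C rpool | grep ashift
      # A new pool can force 4K alignment regardless of what the drive reports
      zpool create -o ashift=12 tank /dev/sdb

    Note that ashift cannot be changed on an existing vdev, so correcting it means recreating the pool.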
  9. ZFS write (High IO and High Delay)

    Hello guys, I'd like to hear from you about the write speed of your ZFS setups. I'm using SSDs; when a VM is being cloned, IO goes up to 30-40%. I see, from the iotop command, that txg_sync is at 99%, and the write rate oscillates between kilobytes and a couple of megabytes every second. I don't know what is...
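    When txg_sync dominates, watching per-vdev throughput while the clone runs usually narrows down where writes stall; a minimal sketch (pool name hypothetical, and the latency histograms need a reasonably recent ZFS on Linux):

      # Per-vdev throughput at 1-second intervals
      zpool iostat -v rpool 1
      # Latency histograms, if the installed version supports -w
      zpool iostat -w rpool 1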
  10. Corosync spam

    Yes, it's over WAN. Are there any tweaks in corosync.conf to work around it? It's strange because it works, but sometimes corosync spams the nodes and then breaks the cluster.
  11. Corosync spam

    Hi, kernel version: 4.13.13-6-pve. Hello Fabian, I'm using unicast (UDPU). Nodes are connected over WAN. Cluster size: 15 nodes. corosync.conf:
      logging {
        debug: off
        to_syslog: yes
      }
      nodelist {
        node {
          name: br01
          nodeid: 12
          quorum_votes: 1
          ring0_addr: br01
        }
        node {...
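    For reference, retransmit storms on high-latency links are usually approached through totem timing. A hedged sketch of corosync 2.x totem options sometimes raised for such setups; the values are illustrative, and corosync is designed for low-latency LAN links, so tuning may only soften the symptom:

      totem {
        version: 2
        transport: udpu
        # Tolerate higher latency before the token is declared lost (default 1000 ms)
        token: 10000
        # More retransmit attempts before a processor is declared failed
        token_retransmits_before_loss_const: 10
      }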
  12. Corosync spam

    Hello, I realized that corosync sometimes "spams" the network; the cluster then goes down and nodes become "gray". Here is the output when corosync goes crazy: Apr 12 20:32:25 ns524364 corosync[2602]: notice [TOTEM ] Retransmit List: 8d 8e 8f 90 91 92 93 98 99 ff 106 107 108 89 8a 8b 8c 94...
  13. Proxmox Cluster Broken almost every day

    It's not a false accusation, not even an accusation. It's a bug report. And I'll repeat it as many times as necessary. Almost every day I have to restart the pve-cluster service due to this issue. If I were the only one facing it, that might be a "false accusation", but I am not. And I say...
  14. Proxmox Cluster Broken almost every day

    @LnxBil You don't add anything to the topic; you are here only for the posts. Congratulations.
  15. Proxmox Cluster Broken almost every day

    Based on that, there are 3 threads about it.
  16. Proxmox Cluster Broken almost every day

    I don't think it has been fixed yet. I have the latest updates; even so, today (twice) I had to restart the pve-cluster service on all nodes to bring my cluster back online. That's really annoying, and many Proxmox users are facing it.
  17. Proxmox Cluster Broken almost every day

    We've all been facing this issue for months. There is another thread about it: https://forum.proxmox.com/threads/node-with-question-mark.41180/
  18. Node with question mark

    It's happening every week for me too. I have 12 nodes, and when it happens, I have to stop all Proxmox services on every node:
      service pve-cluster stop
      service corosync stop
      service pvestatd stop
      service pveproxy stop
      service pvedaemon stop
    and then
      service pve-cluster start
      service corosync...
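    Scripted across the cluster, that becomes a loop over SSH. A minimal sketch, assuming passwordless root SSH and hypothetical node names; the start order here mirrors the stop order from the post, which is an assumption since the snippet is cut off:

      # Restart the PVE cluster stack on every node (names are examples)
      for node in node01 node02 node12; do
        ssh root@$node 'for s in pve-cluster corosync pvestatd pveproxy pvedaemon; do service $s stop; done'
        ssh root@$node 'for s in pve-cluster corosync pvestatd pveproxy pvedaemon; do service $s start; done'
      done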
  19. Proxmox API Timeout

    How can I increase the timeout limit?
