Search results

  1. P

    Looking for a guru...

    Hi, I think you should specify on what topic you need a second opinion. I don't have that much experience with Ceph, but if you have some basic or more advanced questions you can contact me. I'm not an expert, so I might not be able to answer everything, but I'll try my best.
  2. P

    IO Delay per VPS

    Hi, you can try https://github.com/henry-spanka/iomonitor. It shows the disk usage per VM. Pretty basic but it does the job :)
  3. P

    [SOLVED] [z_null_int] with 99.99 % IO load after 5.1 upgrade

    @littlecake Can you please share your backup results? Is the problem really gone after the upgrade to ZFS 0.7.7? Thanks.
  4. P

    [SOLVED] Performance Issues with HP SE316M1 + P410 incl. BBU

    @TheeDude It's better if you don't reboot your server. It should be safe, but unless you have backups I do not recommend restarting the host while the array is transforming.
  5. P

    vzdump Output Explained? Finding bottleneck

    @tufkal Just wanted to make sure we're both on the same page :) There is no need to convert the file after TRIM; it has no effect on the storage used on the hypervisor or on the backup time. A simple TRIM is enough to use minimal space on the hypervisor and keep backups fast.
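
    For reference, a minimal sketch of such a trim run inside the guest, assuming the virtual disk is attached with the discard option enabled in the VM's hardware settings:

        # run inside the guest: trim all mounted filesystems that support discard
        fstrim -av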
  6. P

    vzdump Output Explained? Finding bottleneck

    @RobFantini 1. Not necessarily. You can add the discard option to the fstab file, which immediately trims a block when it gets deleted, but you will probably take a performance hit. It's better to trim once a week like the systemd service does. 2. I never used LXC but I guess so. `fstrim /`...
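
    A minimal sketch of the two approaches, with a hypothetical root device; the discard mount option enables continuous TRIM, while the systemd timer runs fstrim periodically:

        # /etc/fstab: continuous TRIM on every delete (may cost performance)
        /dev/mapper/vg0-root  /  ext4  defaults,discard  0  1

        # preferred: periodic TRIM via the systemd timer shipped with util-linux
        systemctl enable --now fstrim.timer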
  7. P

    vzdump Output Explained? Finding bottleneck

    5 minutes for a backup of a 70 GB disk isn't bad. The two values at the end of a log line separated by a slash are the read/write speeds as @HBO already mentioned. INFO: status: 0% (182452224/68719476736), sparse 0% (139419648), duration 3, 60/14 MB/s The backup process reads with 60 MB/s...
  8. P

    peinliche Frage

    @XMarC, two weeks ago I compiled a Proxmox kernel based on version 042stab127.2. You're welcome to use it. See: fuckwit/kaiser/kpti
  9. P

    fuckwit/kaiser/kpti

    :D Yes, of course, if you control the virtualisation environment. However, as in my case, customers have their own VMs running on my hypervisors; I don't have access to the VMs and I cannot trust my customers not to try to exploit this vulnerability.
  10. P

    fuckwit/kaiser/kpti

    Yes, correct. However, in a hosting environment (VMs, webspace, etc.) you cannot control which applications users run on their servers, and they can exploit this to read memory from other virtual machines or the host. Meltdown indeed only has an impact on containers and not virtual machines...
  11. P

    fuckwit/kaiser/kpti

    I have compiled a kernel for Proxmox 3.x myself as I still have many OpenVZ nodes running. The kernel has been tested and so far works fine for me. You can download it at: https://git.vnetso.com/henryspanka/pve-kernel-2.6.32/tags/v2.6.32-49-pve_2.6.32-188 Feel free to check the source code and...
  12. P

    How can I check raid status?

    Hello, if a disk dies the zpool is in a degraded state and will still function, although if you lose another drive the pool may break. You can replace the drive and then start the resilver with the zpool replace command.
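
    A minimal sketch of that procedure; the pool and device names are hypothetical and need to be taken from your own zpool status output:

        # check pool health and identify the failed device
        zpool status -v rpool

        # replace the failed disk with the new one; this starts the resilver
        zpool replace rpool /dev/sdb /dev/sdd

        # watch the resilver progress
        zpool status rpool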
  13. P

    How to enlarge Qemu linux lvm vps

    Hello, msdos (MBR) partition tables only support a maximum disk size of 2 TB. If you have a larger disk you need to convert your partition table to GPT. However, I recommend taking a backup/snapshot first, as changing the partition table could break your filesystem if done wrong.
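
    As a rough sketch of the conversion with sgdisk (the device name is hypothetical); note that a legacy BIOS-booted system additionally needs a small BIOS boot partition for GRUB after switching to GPT:

        # back up the current partition table first
        sgdisk --backup=/root/sda-table.bak /dev/sda

        # convert the msdos (MBR) partition table to GPT in place
        sgdisk --mbrtogpt /dev/sda

        # afterwards grow the partition and the filesystem, e.g. for ext4:
        parted /dev/sda resizepart 1 100%
        resize2fs /dev/sda1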
  14. P

    [SOLVED] ZFS RAM peaks?

    The system for example caches files, and if you empty the cache it needs to fetch the files from the HDD again. High RAM usage isn't always bad. Depending on the size of the ZFS pool you may even need more RAM. Do you have an L2ARC (cache) configured? Note that flushing the cache does not affect...
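
    A minimal sketch for inspecting and capping the ARC, and for adding an L2ARC device; the pool name, cache device, and the 8 GiB limit are just example assumptions:

        # inspect the current ARC size and configured maximum (bytes)
        grep -E '^(size|c_max) ' /proc/spl/kstat/zfs/arcstats

        # cap the ARC at 8 GiB (value in bytes), then rebuild the initramfs and reboot
        echo 'options zfs zfs_arc_max=8589934592' >> /etc/modprobe.d/zfs.conf
        update-initramfs -u

        # add an L2ARC cache device to an existing pool
        zpool add rpool cache /dev/nvme0n1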
  15. P

    ZFS Memory issues (leak?)

    Hello, I can confirm that the issue was fixed with the last kernel update. The host is stable now and no memory leaks happened.
  16. P

    ZFS Memory issues (leak?)

    @sshutdownow Note that this issue is related to the network and not ZFS. Do you have a VM running with lots of traffic/packets per second?
  17. P

    ZFS Memory issues (leak?)

    Well, indeed. It seems that I can reproduce it on another node. Create a bridge:
        auto vmbr1
        iface vmbr1 inet manual
            bridge_fd 0
            bridge_ports none
    and bring it up with ifup vmbr1. Create two VMs with the following configuration:
        net0: virtio=<MAC>,bridge=vmbr1
    Install an operating system (I used CentOS 7)...
  18. P

    ZFS Memory issues (leak?)

    I have finally tracked down the issue to a specific VM. The VM receives traffic and inspects it to find patterns of ddos attacks. The network interface is configured as virtio attached to the bridge. The RAM usage is increasing steadily. However if I change the network device to e1000 the issue...
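
    As a quick test, the NIC model can be switched with qm; the VMID and bridge below are hypothetical, and note that omitting the MAC address makes Proxmox generate a new one:

        # switch the VM's first NIC from virtio to e1000
        qm set 100 --net0 e1000,bridge=vmbr1

        # switch back to virtio later
        qm set 100 --net0 virtio,bridge=vmbr1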
  19. P

    ZFS Memory issues (leak?)

    @wolfang. The ZFS pool is already upgraded to ZFS 0.7. I guess it's not possible to go back to the 4.10 kernel, is it?
  20. P

    ZFS Memory issues (leak?)

    Well, it seems that IO is the issue. A Graphite VM that writes a lot of metrics caused the issue with about 10 MB/s and about 200 write IOPS. After turning off the VM the RAM usage increases only slightly, by about 200 MB per hour, but the bug somehow still exists. I guess that there is a memory leak...
