Search results

  1. NoVNC Mouse Lag

    Has anyone figured this out? Even after several Proxmox updates, the NoVNC mouse has been lagging since Proxmox 5.1.
  2. Proxmox on no-RAID Dell Poweredge R220 does not boot

    I can also confirm that the black screen after Proxmox installation happens with the Dell R610 and R410 as well
  3. NoVNC Mouse Lag

    For me the mouse still lags: proxmox-ve: 5.2-2 (running kernel: 4.15.18-1-pve) pve-manager: 5.2-6 (running version: 5.2-6/bcd5f008) pve-kernel-4.15: 5.2-4 pve-kernel-4.15.18-1-pve: 4.15.18-16 corosync: 2.4.2-pve5 criu: 2.11.1-1~bpo90 glusterfs-client: 3.8.8-1 ksm-control-daemon: 1.2-2 libjs-extjs...
  4. NoVNC Mouse Lag

    Yeah, in Proxmox 5.2.x there is only one cursor, and it is still lagging.
  5. ZFS worth it? Tuning tips?

    I understand your arguments. But you're only considering your own past experience. There are no statistics telling us that RAID controllers are more likely to fail than hard disks.
  6. ZFS worth it? Tuning tips?

    Parts availability for replacement has nothing to do with what is under discussion here. The point is: is hardware RAID safe or not? Obviously, if you care about your service availability, you should always have replacement parts on hand, regardless of whether you use hardware RAID. When you said "if battery and...
  7. ZFS worth it? Tuning tips?

    Yes, it is safe as long as you're using the controller's battery. If the controller fails, just replace it and reuse the existing battery.
  8. ZFS worth it? Tuning tips?

    I'm using ZFS with hardware RAID. It performs really well. I don't know how the idea spread that ZFS can't be used with a hardware RAID controller; it's a myth. This article explains why you can, for sure, use ZFS with hardware RAID...
  9. NoVNC Mouse Lag

    Hello friends. Since Proxmox 5.1 I've been seeing mouse lag with NoVNC. Is there any configuration I could edit to try to solve this? It doesn't happen on earlier Proxmox versions. Here is a recording on Proxmox 5.1 showing the mouse sliding with lag: And here is another...
  10. PVE API Docs (Restore backup)

    NVM, I've just found it. The endpoint is: POST json/nodes/{node}/qemu. Thank you
  11. PVE API Docs (Restore backup)

    I've been looking at the PVE API docs and couldn't find the endpoint for restoring backups. Isn't it possible through the API?
  12. LVM No such file or directory - After Reboot

    Hi, After reboot, all VMs are failing to initialize: PVE Version: proxmox-ve: 5.2-2 (running kernel: 4.15.17-3-pve) pve-manager: 5.2-2 (running version: 5.2-2/b1d1c7f4) pve-kernel-4.15: 5.2-3 pve-kernel-4.13: 5.1-45 pve-kernel-4.15.17-3-pve: 4.15.17-12 pve-kernel-4.15.17-2-pve: 4.15.17-10...
  13. Clone bandwidth limit

    Hello, how can I set the clone rate limit in MB/s?
  14. 4.15 based test kernel for PVE 5.x available

    Hello @martin, I run Proxmox on an 11th-generation Dell PowerEdge. Last week, after a kernel update, my GRUB menu wasn't even shown and I had to reinstall Proxmox. I haven't tested this fix yet. Do you think that was related to this bug?
  15. Proxmox + LVM cache

    Hey there, I've seen that using ZFS on HW RAID is not advisable. Because of this advice, in my setup with hardware RAID I've been planning to use LVM with cache instead of ZFS. Has anyone experienced good performance with Proxmox + LVM cache?
  16. ZFS write (High IO and High Delay)

    Yes. The config I posted above is from the server that is having the slow write issue. It runs a 512-byte-sector SSD. The other server, which is running fine, has an enterprise SSD (4K sector size). I decided against using ZFS with 512-byte-sector SSDs. I've already tried everything, with no success.
  17. ZFS write (High IO and High Delay)

    root@br01:~# zpool status
      pool: rpool
     state: ONLINE
      scan: scrub repaired 0B in 1h31m with 0 errors on Sun Apr 8 01:55:16 2018
    config:
        NAME   STATE  READ WRITE CKSUM
        rpool  ONLINE    0     0     0
          sda2 ONLINE    0     0     0...
  18. ZFS write (High IO and High Delay)

    Hello @6uellerbpanda, I have 32 GB RAM. Server 1 (running ZFS smoothly, fast read and fast write operations): Xeon E3 1230 v5, 32 GB RAM, SSD 480 GB (4K sector size), ashift 12 and zvol/zpool 128K. Server 2 (running ZFS terribly, fast read but very poor write operations): Xeon E3 1230...
  19. ZFS write (High IO and High Delay)

    Hello Alwin, I've read that ARC is only used for caching read operations, not writes. I think this problem is caused by my SSDs being 512-byte-sector, with ZFS set to ashift 9 and a zpool block size of 128K. I have another setup with 4K SSDs, ashift 12 and zpool block size 128K (this setup is...
  20. ZFS write (High IO and High Delay)

    Hello guys, I'd like to hear about the write speed of your ZFS setups. I'm using SSDs; when a VM is being cloned, IO goes up to 30-40%. I see from iotop that txg_sync is at 99%, and the write rate oscillates between kilobytes and a couple of megabytes every second. I don't know what is...
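The restore endpoint mentioned in results 10-11 can be sketched as a small helper that builds the request. This is only a sketch under assumptions the thread doesn't state: I'm assuming the full path carries the `/api2/json` prefix and that a restore is a POST to the VM-creation endpoint with an `archive` parameter pointing at the backup file; the node name, VM ID, and archive path below are hypothetical examples.

```python
def build_restore_request(node, vmid, archive):
    """Return (method, path, payload) for a restore-from-backup API call.

    Assumption: restore reuses the POST /nodes/{node}/qemu endpoint with
    `vmid` and `archive` parameters; verify against your PVE API viewer.
    """
    path = f"/api2/json/nodes/{node}/qemu"
    payload = {
        "vmid": vmid,        # ID the restored VM should get
        "archive": archive,  # backup volume on the source storage
    }
    return ("POST", path, payload)

# Hypothetical example values, not taken from the thread:
method, path, payload = build_restore_request(
    "pve1", 100, "local:backup/vzdump-qemu-100-2018_08_01-00_00_01.vma.lzo"
)
print(method, path)
```

Sending the request (authentication ticket or API token, TLS handling) is left out, since the thread only concerned finding the endpoint itself.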
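The ashift discussion in results 16-19 comes down to matching ashift to the drive's physical sector size: ashift is log2 of the sector size in bytes, so 512-byte sectors correspond to ashift 9 and 4K sectors to ashift 12, as the posters describe. A minimal sketch of that relationship:

```python
def ashift_for_sector_size(sector_bytes):
    """Return the ZFS ashift matching a sector size: ashift = log2(bytes)."""
    if sector_bytes <= 0 or sector_bytes & (sector_bytes - 1):
        raise ValueError("sector size must be a power of two")
    return sector_bytes.bit_length() - 1

print(ashift_for_sector_size(512))   # 9  -> 512-byte sectors
print(ashift_for_sector_size(4096))  # 12 -> 4K sectors
```

Note that ashift is fixed per vdev at pool creation time (e.g. `zpool create -o ashift=12 ...`) and cannot be changed afterwards, which is why the sector-size mismatch in the thread could not simply be tuned away.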
