Search results

  1. Cluster behaviour when upgrading to 6.2

    Hello, We have a cluster of 4 servers running Proxmox VE 5.4, and we're upgrading them to 6.2 next week. The upgrade path will consist of moving VMs off the node that's about to be upgraded. Since we're using local storage, this can be a time-consuming process. Our concern is whether, when all...
  2. sources.list on OVH

    Thank you BobhWasatch, I've opened a support ticket and sent a system report of all nodes to get an official answer from Proxmox. Thanks again!
  3. sources.list on OVH

    And what about contrib? Does it have any effect on Proxmox? On node1 it was not enabled... Node 0 was installed with 3.4, Node 1 with 4.4, and Node 2 with 5.0; now all nodes are on 5.2. I would like to be sure that all repos are OK before upgrading to 5.4. Thanks
  4. sources.list on OVH

    Hello again, On Node2, I see that the only packages installed from non-free are amd64-microcode and intel-microcode. On the other nodes, these packages are not installed. So, can anyone confirm whether this configuration is correct, or should I install intel-microcode on all servers? On NODE0...
  5. sources.list on OVH

    Hello, We have 3 servers on OVH, with Proxmox installed from their template. I see different sources.list files between nodes. NODE0:
    deb http://debian.mirrors.ovh.net/debian/ stretch main contrib
    # security updates
    deb http://security.debian.org/ stretch/updates main contrib
    NODE1: deb...
  6. Smart information on GUI for -sat disks

    Do you know if it's possible to force or hard-code disk type somewhere? It's useful for USB backup disks... Thank you
  7. Smart information on GUI for -sat disks

    Hi, On the GUI, I'm unable to see the SMART status of some of our drives. Some of them show as "PASSED", but sometimes USB drives show as "UNKNOWN": What I see is that for drives that show their status on the GUI, I can also see the SMART status in the shell via "smartctl -a /dev/sdX". However, for...
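For USB-attached disks, smartctl often cannot auto-detect the device type through the USB-SATA bridge and needs the SAT (SCSI-to-ATA Translation) type forced with -d sat. A minimal sketch of what to try in the shell; /dev/sdb is a hypothetical placeholder for the USB disk, not a device from the thread:

```shell
# Full SMART report, forcing the SAT device type for a USB-SATA bridge.
smartctl -a -d sat /dev/sdb

# Overall health verdict only (PASSED/FAILED).
smartctl -H -d sat /dev/sdb
```

If -d sat does not work, `smartctl --scan` lists the device types smartctl guesses for each attached disk.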
  8. No ARC usage

    Hi all, After rebooting the server (with zfs_arc_min and zfs_arc_max updated), the values are much better for these slow disks. The ARC is now growing to reach the minimum size. Thanks for your help!
    arc_summary
    ------------------------------------------------------------------------
    ZFS Subsystem...
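The zfs_arc_min/zfs_arc_max change mentioned above is normally made as a ZFS kernel module option. A minimal sketch of the config file; the 1 GiB/4 GiB sizes are illustrative examples, not values from the thread:

```shell
# /etc/modprobe.d/zfs.conf -- ARC size limits, in bytes.
# Example: 1 GiB minimum, 4 GiB maximum; adjust to the host's RAM.
options zfs zfs_arc_min=1073741824
options zfs zfs_arc_max=4294967296
```

On Proxmox with ZFS on root, the initramfs must be rebuilt (`update-initramfs -u`) and the host rebooted for the new limits to take effect, which matches the "after rebooting" note in the post above.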
  9. No ARC usage

    I'll try that in the next maintenance window. Thanks! After doing that, ARC usage seems to increase, but it finally drops to low values:
    root@srv2-global:~# arcstat
    time      read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz  c
    16:14:32     0     0      0     0    0     0    0     0    0...
  10. No ARC usage

    Hi, Here it is. Thank you!
    root@srv2:~# arc_summary
    ------------------------------------------------------------------------
    ZFS Subsystem Report                            Mon Jun 18 11:38:40 2018
    ARC Summary: (HEALTHY)
    Memory Throttle Count: 0
    ARC Misc...
  11. No ARC usage

    Hi guletz, I hadn't noticed the disks were 5400 rpm... That explains a lot. System uptime is 163 days now. Here are zpool status -v and zpool list:
    root@srv2:~# zpool status -v
      pool: rpool
     state: ONLINE
    status: Some supported features are not enabled on the pool. The pool can still be...
  12. No ARC usage

    Hi Wolfgang, Thank you, I'll try that. Is there any reason why, on another similar system (same VMs, same host memory), the ARC is being used? Or is it random?
  13. No ARC usage

    Hi, I have a server with these specifications: Intel(R) Xeon(R) CPU E31220, 16 GB ECC RAM, 2 x 1 TB SATA drives (WD10EFRX-68FYTN0) in a ZFS mirror. It runs incredibly slowly. I'm aware SATA disks are slow and we should install an SSD cache device, but I noticed the ARC size is small. It seems ZFS is not...
  14. Old deleted node showing on HA lrm

    Thanks Dietmar, it worked!
  15. Old deleted node showing on HA lrm

    Hi all, At one of our customers, we used to have a 2-node cluster. The nodes' names were "mdc0" and "mdc1". One of the servers (mdc0) failed, and we removed it using pvecm delnode mdc0. I've noticed that this old node still shows in the HA GUI menu. Here is the output of /etc/pve/.members (the...
  16. Proxmox VE is 10 years old!

    Happy birthday and thank you for such good work!
  17. Is it possible to throttle backup and restore disk io?

    Hi, I use cstream as a workaround. With this command, I can restore a VM with a 30 MB/s limit:
    cstream -t 30000000 -i /mnt/pve/backups/dump/backupfile.vma.lzo | lzop -cd | qmrestore - newVMID --storage destinationstorage
    I think it can also be used to migrate VMs between nodes, to mitigate...
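cstream's -t flag takes a throughput limit in bytes per second, so a MB/s target is just a multiplication. A small sketch of deriving the limit used above (the 30 MB/s figure is from the post; the paths and IDs in the commented pipeline are placeholders):

```shell
# cstream -t expects bytes per second; convert a MB/s target.
LIMIT_MB=30
LIMIT_BYTES=$((LIMIT_MB * 1000000))
echo "$LIMIT_BYTES"   # 30000000

# The restore pipeline from the post, parameterized
# (backup path, VM ID, and storage name are placeholders):
# cstream -t "$LIMIT_BYTES" -i /path/to/backup.vma.lzo | lzop -cd | qmrestore - <vmid> --storage <storage>
```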
  18. Is it possible to throttle backup and restore disk io?

    Hi, It's a real problem for us: we have technicians who sometimes have to move VMs and restore backups, and any of those operations kills I/O on the other VMs. We have a 10G network and RAID-10 SSD. What command do you suggest? Anyway, a GUI option when starting any I/O task would be highly appreciated...
  19. VM migration between local-lvm storage kills I/O on destination server

    Hi, On a two-server cluster, offline-migrating a 50 GB VM from one node to another kills I/O on the destination node. Both have HW RAID-10 enterprise SSDs (they are on OVH) and a 10Gb connection between them, and the VM is stored on lvm-thin (recently converted from "classic" LVM)...
  20. Swap usage

    Hi Fireon, Swappiness is set at its default value (60). Is there any recommended value? On ZFS I use 10 (as indicated by the wiki), but this server's storage is LVM on a HW RAID-10 enterprise SSD. Thanks!
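vm.swappiness is an ordinary Linux sysctl, so it can be tested live and then persisted. A minimal sketch, using 10 as an example value (the figure mentioned for ZFS above, not a recommendation for this particular box; the commands need root):

```shell
# Check the current value.
sysctl vm.swappiness

# Set it for the running system only (lost on reboot).
sysctl -w vm.swappiness=10

# Persist across reboots (file name is an arbitrary example).
echo 'vm.swappiness = 10' >> /etc/sysctl.d/99-swappiness.conf
```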