Search results

  1. [SOLVED] Summary Graphs Empty with Wrong Date

    For anyone wondering, if this is still occurring for you (in 8.4.X) this worked for me as well. Had the issue in 8.4.1. Was also showing the date/time in the summary as the year 1969. Updated, still had the issue in 8.4.12 after the update completed. Removing the db and restarting as per...
  2. ERROR: Backup of VM xxx failed - no such volume 'local-lvm:vm-xxx-disk-XX

    Over the last while we have intermittently experienced this as well. Local drives (not NFS) and the disk is in fact there. The backup will run fine the next time. I can provide details as well but did not want to hijack the thread.
  3. [SOLVED] Error when running 'pvesh get'

    Ok great - we were up to date when we saw this last week, but there seem to have been updates since. Sorry - should have checked that first. Thanks.
  4. [SOLVED] Error when running 'pvesh get'

    When checking the current status of a VM via 'pvesh get', I have been getting an error after a recent update. Anybody else see this? Command: /usr/bin/pvesh get /nodes/<servername>/qemu/101/status/current Results: Invalid conversion in sprintf: "%.H" at /usr/share/perl5/PVE/Format.pm line 75...
  5. LVM commands hang and node is marked with a question mark

    @oguz sorry - are you referring to OP or me? If not me, please ignore. If me, here is one: # smartctl -d cciss,3 -a /dev/sda smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.4.78-2-pve] (local build) Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org === START OF...
  6. LVM commands hang and node is marked with a question mark

    We have had the same thing occur for a full shutdown VM backup. INFO: task vgs:7437 blocked for more than 120 seconds. Tainted: P IO 5.4.78-2-pve #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disabled this message This repeats and the VMs are shut down, but hung w/o actually...
  7. [SOLVED] Proxmox cluster slow to shutdown.

    FYI we are still experiencing this on update reboots. Experienced it during the latest updates yesterday for version 6.2-15 (currently pve-manager/6.2-15/48bd51b6, running kernel 5.4.73-1-pve). According to the logs (dpkg.log) I think we went from 6.2-12 to 6.2-15: 2020-11-23 15:44:24 upgrade...
  8. How to remove a persistent udev rule

    That is the strange thing: that is all over the internet as the way to keep using eth0, BUT that was NOT actually there in GRUB_CMDLINE_LINUX anywhere I could find on the system. This must have gotten created somewhere along the line of upgrades. This server is an older HP DL360 G7 so has had...
  9. How to remove a persistent udev rule

    Hi, I have one node that uses the old network device naming convention eth*. It appears that along the way through updates it has kept the old naming convention by having the file /etc/udev/rules.d/70-persistent-net.rules. Can I just delete this and fix the network setup to then use the new...
  10. Continued IO Errors

    Thanks. You mean a desktop-style power supply where you can choose which power cable does what? This is a dual power supply server so that is not really possible as far as I know.
  11. Continued IO Errors

    Update: restoring from backup did not change anything, so it must not fully re-create the disk. Moving the disk (it was somehow in the ISO storage dir. instead of with the other VMs on disk2) to another dir. on the same drive created a new disk and that seems to have fixed the issue. Perhaps...
  12. Continued IO Errors

    Unfortunately HP does not provide service packs without warranty or a support agreement. $$$
  13. Continued IO Errors

    lspci: # lspci -vv | grep RAID -A2 05:00.0 RAID bus controller: Hewlett-Packard Company Smart Array G6 controllers (rev 01) Subsystem: Hewlett-Packard Company Smart Array P410i Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR- FastB2B- DisINTx+...
  14. Continued IO Errors

    /etc/pve/storage.cfg: dir: drive2_ISO path /drive2/iso content images,iso maxfiles 1 dir: local path /var/lib/vz content vztmpl,rootdir,images,iso maxfiles 0 dir: drive2_vm path /drive2/vm content rootdir,images maxfiles 1...
  15. Continued IO Errors

    dmesg returns: [349765.146475] Page cache invalidation failure on direct I/O. Possible data corruption due to collision with buffered I/O! [349765.146856] File: /drive2/iso/images/113/vm-113-disk-0.qcow2 PID: 10529 Comm: kworker/6:1 [354909.318533] Page cache invalidation failure on direct...
  16. Continued IO Errors

    It was already set to 'no cache'.
  17. Continued IO Errors

    I will try that. And no, not converted.
  18. Continued IO Errors

    The /var/log/messages from the proxmox server: Apr 29 14:05:46 <server> pvedaemon[2858]: <root@pam> successful auth for user 'root@pam' Apr 29 14:05:53 <server> pvedaemon[2858]: <root@pam> starting task UPID:rciblade360:00000E49:02252476:5EA9C201:qmstart:113:root@pam: Apr 29 14:05:56 <server>...
  19. Continued IO Errors

    So as a recap, there did not appear to be anything in /var/log/messages on the proxmox server. After doing a restore from backup and restarting some services on the VM got the issue again: Also on the console: Searching for this issue (Page cache invalidation failure) only turns up a few...
  20. Continued IO Errors

    So if there is nothing in /var/log/messages on the proxmox server, then it is most likely the VM. Will restoring from backup (using qmrestore) 're-create' the VM disk, or will I end up with the same issues after restoring?
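The udev-rule question in results 8-9 boils down to two steps: delete the legacy rule file, then rename eth0 to the new predictable name in the network config. A minimal sketch below, run entirely against a throwaway scratch directory; the real paths would be /etc/udev/rules.d/70-persistent-net.rules and /etc/network/interfaces, and the name eno1 plus the addresses are assumptions for illustration, not values from the threads:

```shell
#!/bin/sh
# Sketch for the persistent-udev-rule threads: remove the rule that pins
# the old eth* name, then update the network config to match. Everything
# happens under a scratch directory so this is safe to run anywhere.
set -eu
root=$(mktemp -d)
mkdir -p "$root/udev/rules.d" "$root/network"

# Stand-in for the leftover rule file (MAC address is a placeholder).
echo 'SUBSYSTEM=="net", ATTR{address}=="aa:bb:cc:dd:ee:ff", NAME="eth0"' \
    > "$root/udev/rules.d/70-persistent-net.rules"

# Stand-in network config using the old name (addresses are placeholders).
cat > "$root/network/interfaces" <<'EOF'
auto eth0
iface eth0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
EOF

# Step 1: remove the rule that keeps the old eth* naming alive.
rm "$root/udev/rules.d/70-persistent-net.rules"
# Step 2: switch the config to the predictable name. eno1 is an assumed
# example; check `ip link` after a reboot for the actual name.
sed -i 's/eth0/eno1/g' "$root/network/interfaces"

grep -c 'eno1' "$root/network/interfaces"   # prints 2: both lines renamed
```

On a real node the rename must happen before the reboot that drops the rule, otherwise the interface comes up under a name the config does not mention and the host loses networking.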
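The qmrestore question in the last result can be sketched as follows. This is a dry run that only builds and prints the command it would issue, since it cannot execute outside a Proxmox node; the archive path is a hypothetical example, while the VMID 113 and storage name drive2_vm are pieced together from the dmesg and storage.cfg snippets above, not confirmed values:

```shell
#!/bin/sh
# Dry-run sketch: qmrestore with --force overwrites the existing VM and
# writes its disk images out fresh from the vzdump archive, so the restored
# disk is a newly created file rather than a patched copy of the old one.
set -eu
vmid=113                 # VMID from the dmesg snippet (assumed)
storage=drive2_vm        # storage name from the storage.cfg snippet (assumed)
archive="/var/lib/vz/dump/vzdump-qemu-${vmid}.vma"   # hypothetical path

# Build the command instead of executing it, since this sketch runs
# outside Proxmox.
cmd="qmrestore $archive $vmid --storage $storage --force 1"
echo "$cmd"
```

Note that in the thread itself the restore did not resolve the I/O errors; the actual fix reported later was moving the disk out of the ISO storage directory, which created a new disk file on the same drive.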