Search results

  1.

    [SOLVED] Compilation failed in require at /usr/share/perl5/PVE/Auth/PAM.pm

    I have 3 PVE hosts in 1 cluster, in a customer's server room (those servers have no internet access and are under insanely restricted access control). I was about to send some files to a new VM (the package libpam-pwquality and its dependencies), but I executed dpkg -i *.deb before qm terminal to the...
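
    A minimal sketch of one way to stage such an offline install, assuming a second Debian machine of the same release with internet access (the dependency one-liner is a common apt idiom, not something taken from the thread):

        # on the internet-connected machine: fetch libpam-pwquality plus its dependency chain
        apt-get download libpam-pwquality $(apt-cache depends --recurse --no-recommends --no-suggests \
            --no-conflicts --no-breaks --no-replaces --no-enhances libpam-pwquality | grep '^\w' | sort -u)
        # copy the resulting .deb files to the offline host, then install them in one dpkg run
        dpkg -i *.deb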
  2.

    [SOLVED] nfs mount failed

    The NFS share can be mounted manually on the PVE host with "mount -t nfs4 172.16.0.31:/export/nfs-pve", but the automatic mount fails. The NFS share is configured under cluster - storage. The NFS server-side options are anongid=100, anonuid=100, insecure, no_root_squash, rw, subtree_check. Tried this...
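
    For reference, an NFS storage entry of the kind described usually looks like this in /etc/pve/storage.cfg (the storage ID, content types and mount options below are placeholders, not the poster's actual config):

        nfs: nfs-pve
            server 172.16.0.31
            export /export/nfs-pve
            path /mnt/pve/nfs-pve
            content images,backup
            options vers=4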
  3.

    [SOLVED] can I reuse removed node's name and ip address?

    I have 5 nodes in a cluster, and 2 of them are about to be replaced (still running). Can I remove the nodes (following the wiki procedure), reuse the removed node's name and IP address (with a new physical server), and then join the cluster? Is it OK to do so, or should I just use a different name and IP...
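
    The wiki procedure referenced here boils down to removing the old node from a quorate member and then joining the replacement; roughly (node name and cluster IP are placeholders):

        # on a remaining cluster node: drop the retired node from corosync
        pvecm delnode pve-old
        # on the freshly installed replacement: join the existing cluster
        pvecm add 172.16.2.1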
  4.

    [SOLVED] mount nfs4 share from openmediavault issue

    Got a strange issue while mounting an NFS share from OMV. Both PVE and OMV have been upgraded to the latest version. NFS options are rw,subtree_check,insecure,no_root_squash,anonuid=100,anongid=100, and the NFS client list includes all PVE nodes' IP addresses; I even tried the whole LAN subnet. What should I do to make...
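
    For comparison, the server-side settings quoted above correspond to an /etc/exports entry along these lines (the subnet is a placeholder; OMV normally generates this file from its UI):

        /export/nfs-pve 172.16.0.0/24(rw,subtree_check,insecure,no_root_squash,anonuid=100,anongid=100)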
  5.

    [SOLVED] pbs will keep only 1 backup if job set to keep x month/days

    There is no retention policy in PVE storage or PBS storage; retention is only set up in the backup jobs. Today I noticed that a job set to keep 3 months will only keep the last backup (the previous one is removed). I then tested 90 days: same behavior. I have now changed it to keep the last xxx backups, and this option is honored by PBS. Also...
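
    For reference, the retention described maps to vzdump's prune options; a sketch of the equivalent CLI call with placeholder VM and storage IDs:

        # one-off backup applying "keep 3 monthly" retention on the target storage
        vzdump 100 --storage pbs --prune-backups keep-monthly=3
        # several keep-* rules can be combined
        vzdump 100 --storage pbs --prune-backups 'keep-last=5,keep-daily=7,keep-monthly=3'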
  6.

    [SOLVED] pve node and vm marked with a question mark in web ui

    A node (pve-3, 172.16.2.3) is marked with a question mark in the web UI. What I have found: 1. SSH to the node is OK, and the VMs are not dead; they can be migrated to another node with the [qm migrate --live] command. 2. Restarting the pveproxy service did not fix the problem. 3. Access to the node's UI (https://172.16.2.3:8006) is OK...
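
    The grey question mark usually means the node has stopped reporting status, so besides pveproxy, the status daemon is worth checking; a typical next step (a general suggestion, not the confirmed fix from this thread):

        # on the affected node
        systemctl restart pvestatd
        systemctl status pvestatd pve-cluster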
  7.

    dead node could not be removed

    A new node just went dead because of HDD corruption (it could not load the PVE OS after a reboot). I can NOT find the node in the terminal (I have NOT issued the "pvecm delnode" command yet), but in the web admin console the dead node is still there (pve-03 is dead). Is there anything I can do? Thanks. root@pve-01:~#...
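
    For the general case, cleanup of a permanently dead node looks roughly like this when run from a healthy, quorate node (the final rm only clears the stale GUI entry; verify the whole sequence against the wiki before running it):

        pvecm status                    # confirm the remaining nodes still have quorum
        pvecm delnode pve-03            # remove the dead node from the cluster
        rm -r /etc/pve/nodes/pve-03     # drop its leftover config so it disappears from the web UI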
  8.

    creating containers always fails

    Hi all, my PVE can no longer create containers from a template; it was OK before. I have 2 Proxmox nodes, and an NFS share is mounted as the container template / VM folder. I have tried: 1. running dist-upgrade, 2. rebooting PVE, 3. checking read/write permissions (creating a txt file from the Proxmox terminal in the NFS share folder...
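
    To narrow down whether the template storage or the container creation itself is failing, the CLI usually gives a clearer error than the UI; a sketch with placeholder VM ID, storage names and template file:

        pct create 200 nfs-templates:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
            --storage local-lvm --hostname test-ct --memory 512 \
            --net0 name=eth0,bridge=vmbr0,ip=dhcp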