Search results

  1. Could PVE separate create and remove VM privileges

    I want an admin user who can create but can NOT remove VMs and CTs. It concerns the PVE virtual machine related privileges: VM.Allocate covers create/remove VM on a server. Is there a way to separate the privileges?
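    A hedged sketch of how a custom role would normally be built with pveum; the role name "CreateOnly" and the user "bob@pve" are made-up examples, and since VM.Allocate covers both create and remove, a custom role alone cannot split the two:

      # pveum role add CreateOnly --privs "VM.Allocate,VM.Audit,VM.Config.Disk,VM.Config.CPU,VM.Config.Memory"
      # pveum acl modify /vms --users bob@pve --roles CreateOnly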
  2. CPU usage of Windows problem

    I have two Windows Server VMs on a PVE node. The summary GUI shows CPU usage 16.39% of 16 CPU(s), Memory usage 86.38% (27.57 GiB of 31.92 GiB) and CPU usage 12.84% of 16 CPU(s), Memory usage 73.95% (11.83 GiB of 16.00 GiB). But top in the CLI shows PID USER PR NI VIRT RES SHR S...
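    One way to cross-check the GUI numbers from the CLI is to query the same status counters the web interface reads (VMID 100 below is a placeholder):

      # qm status 100 --verbose | grep -E '^(cpu|cpus|mem|maxmem):'
      # pvesh get /nodes/$(hostname)/qemu/100/status/current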
  3. [SOLVED] could not back up with API token

    I added a PBS datastore to PVE with username and password (storage ID: pbsBAK). Now I add the same datastore to the same PVE with an API token of the same user (storage ID: pbsBAK2), but I cannot see the backups in pbsCCS2 via PVE. proxmox-backup-manager acl list...
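    For reference, a PBS storage entry backed by an API token is typically added along these lines; the server name, datastore, token ID, secret and fingerprint below are placeholders:

      # pvesm add pbs pbsBAK2 --server pbs.example.com --datastore store1 --username backup@pbs!mytoken --password '<token-secret>' --fingerprint '<server-cert-sha256>'

    One thing that often matters here is that the token has its own ACL entries on the PBS side, separate from the user's, which is what proxmox-backup-manager acl list would show.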
  4. backup failed with exit code 11

    Thanks, you are right. The same LXC can be backed up via NFS, and the difference between PBS and NFS is the tmpdir: NFS syncs tmp to NFS, but PBS syncs tmp to local. Could PBS sync to PBS? I have a 3T LXC and there is not enough local space.
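    If the limiting factor is local space for the temporary files, vzdump can be pointed at another directory either per run or via the tmpdir option in /etc/vzdump.conf (the paths and storage names below are placeholders):

      # vzdump 114 --mode snapshot --storage <pbs-storage> --tmpdir /mnt/pve/<nfs-storage>/tmp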
  5. backup failed with exit code 11

    I upgraded a PVE cluster from v5 to v6 and added a PBS to it. I have backed up 3 small LXCs (<10G disk usage), but hit an error with a big-disk LXC. The output: INFO: starting new backup job: vzdump 114 --mode snapshot --node xnode006 --compress zstd --remove 0 --storage pbsXPFnfs INFO...
  6. node auto restart

    Hi, I have a problem. Two PVE nodes restarted by themselves last week, and one node of another PVE cluster restarted by itself yesterday. There is no useful log, but the node tried to start all VMs and CTs after it came back up. The two clusters were created over three years ago, with uptimes of 140+ days. Why...
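    When a node reboots with nothing obvious in the logs, two common starting points (assuming persistent journald logging is enabled) are the tail of the previous boot and the HA/watchdog services, since an expired watchdog fences the node without leaving much behind:

      # journalctl -b -1 -e
      # journalctl -b -1 -u watchdog-mux -u pve-ha-lrm -u pve-ha-crm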
  7. how to get the real used disk space of an LVM or CT

    Thanks, but how does PVE find the real size? Bootdisk size 16.00% (157.34 GiB of 983.30 GiB). I could not find a "used" figure in: # lvdisplay -a vg1/vm-342-disk-0 --- Logical volume --- LV Path /dev/vg1/vm-342-disk-0 LV Name vm-342-disk-0 VG Name vg1 LV...
  8. how to get the real used disk space of an LVM or CT

    The shared storage is vg1 from an FC SAN. For example, I allocate 200G for a CT, and the PVE GUI summary shows: Bootdisk size 16.00% (157.34 GiB of 983.30 GiB). df inside the CT: # df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg1-vm--342--disk--0 984G 158G 776G...
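    For a running container the usage shown in the summary appears to come from the filesystem inside the CT rather than from LVM, so it can be cross-checked directly (CT ID 342 is taken from the excerpt above):

      # pct exec 342 -- df -h /
      # pvesh get /nodes/$(hostname)/lxc/342/status/current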
  9. ceph mon_clock_drift_allowed invalid

    I have set this in /etc/pve/ceph.conf: mon_clock_drift_allowed = 3, mon_clock_drift_warn_backoff = 30, and it shows up in the running config: ceph --admin-daemon /run/ceph/ceph-mon.node009.asok config show | grep clock "mon_clock_drift_allowed": "3.000000", "mon_clock_drift_warn_backoff"...
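    On Nautilus the same options can also be set at runtime through the central config database or injectargs, which helps confirm whether the running monitors actually honour the value:

      # ceph config set mon mon_clock_drift_allowed 3
      # ceph tell mon.* injectargs '--mon_clock_drift_allowed=3 --mon_clock_drift_warn_backoff=30'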
  10. lxc network error

    I have two LXCs on the same node; networking works on one and is broken on the other. # cat /etc/pve/lxc/103.conf ... net0: name=eth0,bridge=vmbr1,firewall=1,gw=192.168.10.254,hwaddr=0E:BB:3B:B6:A4:E6,ip=192.168.10.70/24,tag=300,type=veth ... # cat /etc/pve/lxc/202.conf ... net0...
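    To rule out a simple config difference, the broken container's net0 line can be rewritten in one step with pct so that it mirrors the working one; the IP below is a placeholder and the hwaddr is left out so a new one is generated:

      # pct set 202 --net0 name=eth0,bridge=vmbr1,firewall=1,gw=192.168.10.254,ip=192.168.10.71/24,tag=300,type=veth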
  11. how to stop or remove an op in ceph

    Thanks. The disk SMART status is healthy. PVE is v6, upgraded from v5, 9 nodes. Ceph is Nautilus, upgraded from Luminous. The cluster has 43 OSDs (most 2T in size) with 9 SSDs (one SSD per node) and works normally. I added a 4T disk on January 13 and the process went smoothly. Then I added a 10T disk on January 14. The...
  12. how to stop or remove an op in ceph

    My Ceph cluster is unhealthy: 1 filesystem is degraded; 11 PGs pending on creation; Reduced data availability: 202 pgs inactive, 6 pgs down; Degraded data redundancy: 269/10009374 objects degraded (0.003%), 17 pgs degraded, 3 pgs undersized...
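    To see which PGs and OSDs the stuck requests belong to, the usual starting points are the health detail, the stuck-PG listing and the in-flight ops on a suspect OSD (OSD id 0 is a placeholder; the daemon command runs on the node hosting that OSD):

      # ceph health detail
      # ceph pg dump_stuck inactive
      # ceph daemon osd.0 dump_ops_in_flight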
  13. [SOLVED] ceph is hung

    Thanks, I created mons on other nodes with enough disk; after a few hours the size of the mon store (/var/lib/ceph/mon/ceph-$nodename/store.db/) was equal, but ceph was still hung. I got it. Refer to "REMOVING MONITORS FROM AN UNHEALTHY CLUSTER" in...
  14. [SOLVED] ceph is hung

    I added a big disk as an OSD to the Ceph cluster, and after one day my ceph was hung: the command 'ceph -s' hangs and the GUI shows 'got timeout (500)'. How can I make it work again? In the process I found the disk filled up by /var/lib/ceph/mon/ceph-$NODE/store.db; now node003 is at 97%, node001 and node002 are full, and I stopped the mon on...
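    When the monitor store has filled its disk, one step that is often tried before removing monitors is compacting the store, which needs a little free space and a reachable mon to work (the node name is taken from the excerpt):

      # ceph tell mon.node003 compact
      # du -sh /var/lib/ceph/mon/ceph-node003/store.db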
  15. Unable to create Ceph OSD

    I met the same problem and resolved it. # ceph-volume lvm create --bluestore --data /dev/sda --block.wal /dev/nvme0n1p1 --block.db /dev/nvme0n1p7 Running command: /bin/ceph-authtool --gen-print-key Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring...
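    If the create step keeps failing because the target devices still carry old LVM or partition metadata, they usually have to be wiped first; the device names below are just the ones from the excerpt:

      # ceph-volume lvm zap --destroy /dev/sda
      # ceph-volume lvm zap --destroy /dev/nvme0n1p1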
  16. [SOLVED] new node could NOT join the ceph cluster

    I have a cluster with 7 nodes. I upgraded PVE from v5 to v6 last week and upgraded Ceph from Luminous to Nautilus. Now I want to join 2 new nodes (named node007 and node009) into it. After pveceph install the new nodes could not join the Ceph cluster, although cephfs is mounted: node009: root@node009:~# df...
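    Two quick checks on a new node can show whether it actually picked up the cluster configuration that pveceph expects (these are the standard PVE locations):

      # ls -l /etc/ceph/ceph.conf      # normally a symlink to /etc/pve/ceph.conf
      # ceph -s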
  17. nfs error in lxc

    I have installed nfs-ganesha in an LXC and it looks fine: # showmount -e localhost Export list for localhost: /opt/dxtfiles (everyone) but I could not mount it: # /usr/sbin/mount.nfs -vvv -o vers=3 localhost:/opt/dxtfiles /mnt/test/ mount.nfs: timeout set for Tue Dec 22 05:39:57 2020 mount.nfs...
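    NFS mounts from inside a container are normally blocked by the container's AppArmor profile unless the NFS mount feature is enabled on the CT (CT ID 103 is a placeholder, and even with the feature an unprivileged container may still refuse the mount):

      # pct set 103 --features mount=nfs
      # grep features /etc/pve/lxc/103.conf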
  18. [SOLVED] migrate error

    I want to upgrade a cluster of 6 nodes from v5.4 to v6.3, migrating the VMs and CTs to other nodes, and have upgraded 4 nodes. Now the last 2 nodes cannot migrate their VMs to the other nodes: Method 'GET /nodes/ /qemu/ /migrate' not implemented (501). How do I fix it? The command works normally: qm migrate VMID NODE...
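    While the GUI path returns 501 between mixed v5/v6 nodes, the CLI migration mentioned in the post can be looped over every VM on the affected node (the target node name is a placeholder; containers would use pct migrate the same way):

      # for vmid in $(qm list | awk 'NR>1 {print $1}'); do qm migrate "$vmid" node004 --online; done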
  19. How to replace an SSD used as a cache

    OK... Can you give me some advice about the size of the DB and WAL? Is it appropriate to put them together on an SSD?
  20. How to replace an SSD used as a cache

    After adding an SSD to the server, I rebuilt all OSDs to add a cache for every OSD: ceph-volume lvm prepare --data /dev/sdd --block.wal /dev/nvme0n1p4 --block.db /dev/nvme0n1p10 And now I wonder how to replace the SSD a few years later?
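    Because the DB/WAL devices are fixed at prepare time, replacing the SSD later generally means redoing the affected OSDs one at a time and letting the cluster recover in between; a rough outline for a single OSD (id 7 and the device names are placeholders) might look like:

      # ceph osd out 7
      # systemctl stop ceph-osd@7
      # ceph osd destroy 7 --yes-i-really-mean-it
      # ceph-volume lvm zap --destroy /dev/sdd
      # ceph-volume lvm create --bluestore --data /dev/sdd --block.wal /dev/nvme1n1p4 --block.db /dev/nvme1n1p10 --osd-id 7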
