Search results

  1. Create lvm-thin local storage with the same name

    Hello! I have 3 nodes in a cluster (I just want to add them to one management center), and I want to add lvm-thin local storage with the same name on each node. Is there any way to do it? When I tried to add it, it showed: storage ID 'vmdata' already defined (500). (See the storage.cfg sketch after this list.)
  2. VM io-error

    lvs shows 61% = 18GB. If that is how it is counted then the reported amount is correct, but I don't know why Windows shows only 12GB used while lvs shows 18GB (61%).
  3. VM io-error

    I use an lvm-thin volume in Proxmox, but the reported usage does not look correct. I have an lvm-thin pool of 1.80TB and created 80 VMs with a 30GB disk each, with about 12GB actually used per VM. As I understand it, lvm-thin is accounted by the amount actually used, so the total should be 80 x 12 = 960GB, but my lvm-thin pool shows the full 1.8TB used and the VM status... (See the arithmetic sketch after this list.)
  4. Exclude some VMs from backup

    Thank you, I edited the cron file /etc/cron.d/vzdump to exclude the IDs I need. (See the cron sketch after this list.)
  5. Exclude some VMs from backup

    I know, but I want to create backups per host, because I have many VMs; if I back up all of them at once it is slow. I want to back up each host on different dates.
  6. Exclude some VMs from backup

    Hello! I use the backup built into Proxmox VE. I created a scheduled backup per host that excludes some VMs, but sometimes my VMs move between hosts in the cluster, so the exclusion does not apply on the new host. Does anyone have a solution to exclude a VM on all hosts?
  7. Avast for Cluster

    @Stoiko Ivanov thank you
  8. Avast for Cluster

    I intend to buy Avast for the cluster, but I have a cluster with 5 nodes and will only buy a license for 1 server. If I install it on 1 node, can I use it on all nodes, or is there a way to configure 1 node to use Avast while the other nodes keep using ClamAV?
  9. Proxmox faulty cluster, randomly resets all nodes

    I see it in the log. I will add another ring to test. Thank you!
  10. Proxmox faulty cluster, randomly resets all nodes

    I have a running cluster with 3 nodes and HA enabled, but when 1 node crashes and reboots, all nodes reboot with the log "systemd[34485]: Reached target Shutdown." Can anyone help me debug this? This is my server config. Dedicated ring network: 10Gb Ethernet. pveversion -v: proxmox-ve: 6.2-1 (running kernel...
  11. Faulty cluster, randomly resets all nodes, can't add a new node

    I have the same issue: when one node in the cluster reboots, all nodes reboot. I use Proxmox 6.2.4.
  12. PVE 6 + InfluxDB + Grafana

    I have used this dashboard (https://grafana.com/grafana/dashboards/10048) for monitoring, but the VMs' IO write panel does not show any data. Can anyone tell me how to check this?
  13. List of VMs with the highest disk usage

    Hi all! I have used Proxmox for some time. Sometimes a VM has high disk I/O; how can I list disk I/O per VM from the command line? (See the sketch after this list.)
  14. Show disks not used in Proxmox

    Thanks Dominic, I found my issue. It is a Ceph bug (https://tracker.ceph.com/issues/36404): when I create a template it automatically creates a protected snapshot.
  15. Show disks not used in Proxmox

    I have checked on Ceph: there are still images that have not been deleted because they have snapshots. I am currently removing the snapshots and deleting each image.
  16. Show disks not used in Proxmox

    I use Proxmox 6. When I delete a VM I did not tick the purge option, so the VM's disks were not deleted. Is there a way for me to list all these unused disks so I can delete them? (See the sketch after this list.)
  17. Proxmox VE 6.2: change Windows password

    Sorry, I installed cloudbase-init to configure the VPS (network, disk resize, password). I created a 2012 template and all features work, but with a 2019 template it can set the network and resize the disk, but it cannot change the password.
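
A note on entry 1: in a Proxmox VE cluster, /etc/pve/storage.cfg is shared by all nodes, so a storage ID is defined once for the whole cluster rather than once per node, which is why adding it again fails with "already defined". A minimal sketch of what a single cluster-wide lvm-thin definition might look like (the volume group, thin pool, and node names are placeholders for this example):

    # /etc/pve/storage.cfg -- one cluster-wide definition, not one per node
    lvmthin: vmdata
            thinpool data
            vgname pve
            content images,rootdir
            nodes node1,node2,node3

The nodes line restricts the definition to the nodes that actually have that volume group; each node then uses its own local thin pool under the shared storage ID.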
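On the numbers in entries 2 and 3: an lvm-thin pool accounts for blocks that have ever been written to a thin volume, not for what the guest currently reports as used, so space freed inside Windows only returns to the pool if the guest issues discard/TRIM and the virtual disk has discard enabled. A quick check of the quoted figures (plain arithmetic, using only the sizes given above):

    # Sizes quoted in entries 2-3; this is only arithmetic, not a measurement.
    pool_size_gb   = 1800   # 1.80 TB thin pool
    vms            = 80
    disk_per_vm_gb = 30     # provisioned size of each VM disk
    used_per_vm_gb = 12     # what Windows reports as used

    provisioned = vms * disk_per_vm_gb   # 2400 GB -- more than the 1800 GB pool
    expected    = vms * used_per_vm_gb   # 960 GB if the pool tracked guest usage
    print(f"provisioned: {provisioned} GB vs pool: {pool_size_gb} GB")
    print(f"expected if only live data counted: {expected} GB")

    # The 61% reading for a single 30 GB volume corresponds to blocks ever written:
    print(f"61% of {disk_per_vm_gb} GB = {0.61 * disk_per_vm_gb:.1f} GB")  # ~18.3 GB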
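For entries 4-6: vzdump has --all and --exclude options, so a job that runs with --all on each node and excludes a fixed list of VMIDs does not depend on which host a guest currently sits on. Roughly what such a line in the cron file mentioned in entry 4 could look like (schedule, VMIDs, and storage name are placeholders; check man vzdump for the exact options of your version):

    # /etc/cron.d/vzdump -- illustrative line only
    PATH="/usr/sbin:/usr/bin:/sbin:/bin"

    30 2 * * 6 root vzdump --all 1 --exclude 9000,9001 --quiet 1 --mode snapshot --storage backup --mailto root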
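For entry 13: I am not aware of a single built-in command that ranks VMs by disk I/O, but the API's per-node VM listing exposes cumulative diskread/diskwrite counters, so a short script can sort on them. A rough sketch using the third-party proxmoxer client (the host, credentials, and exact field names are assumptions; compare against the output of pvesh get /nodes/<node>/qemu on your system):

    # pip install proxmoxer requests -- third-party Proxmox VE API client
    from proxmoxer import ProxmoxAPI

    # Placeholder host and credentials -- adjust for your cluster.
    prox = ProxmoxAPI("pve1.example.com", user="root@pam",
                      password="secret", verify_ssl=False)

    vms = []
    for node in prox.nodes.get():
        for vm in prox.nodes(node["node"]).qemu.get():
            # diskread/diskwrite are cumulative bytes since the guest started
            # (assumed field names -- verify with pvesh).
            vms.append((vm.get("diskwrite", 0), vm.get("diskread", 0),
                        node["node"], vm["vmid"], vm.get("name", "")))

    for written, read, node, vmid, name in sorted(vms, reverse=True):
        print(f"{node}/{vmid} {name}: written={written} read={read}")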
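For entries 14-16: one way to spot leftover disks is to compare the images present in the Ceph pool with the volumes referenced by the VM configs under /etc/pve. A rough sketch, run on a cluster node (the pool name, config paths, and name patterns are assumptions for this example; it only looks at qemu-server configs, protected snapshots such as the ones from entry 14 will still block deletion, and qm rescan can also re-attach orphaned volumes as unused entries):

    # Compare RBD images in the pool with disks referenced by VM configs.
    # Review the output by hand before deleting anything.
    import glob
    import re
    import subprocess

    POOL = "rbd"   # assumed pool name of the PVE storage

    images = set(subprocess.check_output(["rbd", "ls", "-p", POOL],
                                         text=True).split())

    referenced = set()
    for conf in glob.glob("/etc/pve/nodes/*/qemu-server/*.conf"):
        with open(conf) as fh:
            # vm-<vmid>-disk-<n> and base-<vmid>-disk-<n> volume names
            referenced.update(re.findall(r"(?:vm|base)-\d+-disk-\d+", fh.read()))

    for image in sorted(images - referenced):
        print("not referenced by any VM config:", image)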
