Search results

  1. [ZFS] Disk busy

    Hello. My datastore is on a ZFS pool. No task is running, but all disks of the ZFS pool are busy. Any ideas, please?
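
    One way to see which disks are busy and whether ZFS itself is generating the IO is zpool iostat; a minimal sketch, where "tank" is a placeholder pool name:

      # Per-disk IO statistics for the pool, refreshed every 2 seconds
      $ zpool iostat -v tank 2
      # A running scrub or resilver keeps disks busy with no user task visible
      $ zpool status tank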
  2. Qemu-guest-agent and snapshots

    Hello, I have the same problem with a Ceph storage. I thought the backup was done via a snapshot. In that case, a write in the VM should not impact the backup, and vice versa?
  3. Cluster with nodes in two different datacenters

    OK, all traffic passes over SSH, so... it should be OK.
  4. Cluster with nodes in two different datacenters

    Hello, can I set up a cluster across two different datacenters?

      Server 1 (private IP 1) --- Firewall --- Public IP 1 --- Internet --- Public IP 2 --- Firewall --- Server 2 (private IP 2)

    Thanks
  5. Get Disk IO value in api or cli

    Hello, I'm looking to get the disk IO information (like in the GUI). Does someone know how to do that, please? Nicolas
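
    A minimal sketch of two ways to read these values with pvesh; the node name pve1 and VMID 100 are placeholders:

      # Cumulative diskread/diskwrite counters for one guest
      $ pvesh get /nodes/pve1/qemu/100/status/current
      # Time-series IO data, as plotted in the GUI graphs
      $ pvesh get /nodes/pve1/qemu/100/rrddata --timeframe hour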
  6. [SOLVED] CEPH Error with 5.4.5

    OK, I finally found and resolved my problem. I was using RBD snapshots, so before moving to another Ceph cluster, we have to delete the snapshots first. These snapshots are not visible in Proxmox. Warning: do not forget to unprotect the snapshots, otherwise you get an error. After that I can move without problem. Thanks Alwin for the...
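
    A minimal sketch of the cleanup described above, assuming a pool named myceph and an image named vm-100-disk-0 (both placeholders); list the snapshots, unprotect any protected ones, then purge:

      # List snapshots on the image
      $ rbd -p myceph snap ls vm-100-disk-0
      # Protected snapshots must be unprotected before they can be removed
      $ rbd -p myceph snap unprotect vm-100-disk-0@mysnap
      # Remove all remaining snapshots from the image
      $ rbd -p myceph snap purge vm-100-disk-0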
  7. ixgbe with proxmox and supermicro

    Hello, I get this error message after a reboot:

      [ 3.811742] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver - version 5.1.0-k
      [ 3.811743] ixgbe: Copyright (c) 1999-2016 Intel Corporation.
      [ 5.168652] ixgbe 0000:03:00.0: Multiqueue Enabled: Rx Queue count = 16, Tx Queue count = 16...
  8. [SOLVED] CEPH Error with 5.4.5

    The unused disk appears after moving.
  9. Differential backups

    Hello, nice, but you have to do a vzdump every day. Is the differential just to optimize space on ZFS? So it should take a very long time to do a vzdump for the whole datacenter, right? Another question: can we use some technique for backups based on Ceph? I tried eve4pve-barc but it...
  10. [SOLVED] CEPH Error with 5.4.5

    Hello,

      agent: 1
      boot: cdn
      bootdisk: virtio0
      cores: 4
      cpu: kvm64,flags=+pcid
      ide2: none,media=cdrom
      memory: 4096
      name: PC-WIN-01
      net0: virtio=3A:62:33:64:34:62,bridge=vmbr107
      numa: 0
      ostype: win7
      protection: 1
      scsihw: virtio-scsi-pci
      smbios1: uuid=3fc6c0aa-681e-4587-90fb-be7004c37389
      sockets: 1...
  11. [SOLVED] CEPH Error with 5.4.5

    Hello, I have two Ceph clusters and I moved one VM disk from one to the other with "delete source disk" set. So now I have one unused disk 0 on the old Ceph pool. I unset the VM's protection and I try to delete the disk. I get: Error with cfs lock 'storage-myceph': rbd snap purge...
  12. LXC (unprivileged) backup task failing

    Hello, for this tmpdir option, what capacity do we need?
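
    For reference, the tmpdir option is set in /etc/vzdump.conf; a minimal sketch with a placeholder path. For suspend-mode container backups the temporary directory holds a copy of the container's data, so it should have at least roughly the container's size free:

      # /etc/vzdump.conf -- "/mnt/backup-tmp" is a placeholder path
      tmpdir: /mnt/backup-tmp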
  13. ERROR: Backup of VM failed ... exit code 32

    Hello, I have the same problem, and no solution either after 2 weeks, with one container.
  14. [SOLVED] Unable to create OSD

    OK, I succeeded in creating the OSD again:

      $ ceph osd out <ID>
      $ service ceph stop osd.<ID>
      $ ceph osd crush remove osd.<ID>
      $ ceph auth del osd.<ID>
      $ ceph osd rm <ID>

    I deleted the old partitions on my disk and used the command ceph osd crush remove osd.<ID>. Afterwards, I created the new OSD in the GUI.
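
    If old partitions are still present on the disk, one way to wipe the partition table before recreating the OSD is sgdisk (from the gdisk package); /dev/sdb is a placeholder, and this destroys all data on the disk:

      # WARNING: irreversibly wipes the partition table of the given disk
      $ sgdisk --zap-all /dev/sdb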
  15. [SOLVED] Unable to create OSD

    Hello, we upgraded from Proxmox 4.4, and from Hammer to Luminous, with success. My Ceph cluster is healthy. I'm using an SSD for the journal. I tried to migrate one OSD to BlueStore => out - stop - wait for the rebuild - destroy - create new OSD => the OSD was created, but my WAL size was only 1 GB. I...
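
    The BlueStore WAL/DB partition sizes can be overridden in ceph.conf before the OSD is created; a minimal sketch with placeholder sizes (values are in bytes):

      # /etc/pve/ceph.conf -- placeholder sizes
      [osd]
      bluestore_block_db_size = 32212254720    # 30 GiB for the RocksDB partition
      bluestore_block_wal_size = 2147483648    # 2 GiB for the WAL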
  16. [SOLVED] OSD high cpu usage

    Thanks for your help. Finally, the problem was the SSD; we changed it and everything became OK. We tested it with:

      # hdparm -t -T /dev/nvme0n1

    The result was half that of a working node. Thanks again for your help.
  17. [SOLVED] OSD high cpu usage

    I tried to delete the OSDs and recreate them on node 2. Not better. iostat shows no difference from a working node. On node 2 with hdparm =>

      root@GPL-HV3302:/var/log/ceph# hdparm -t -T /dev/nvme0n1

      /dev/nvme0n1:
       Timing cached reads: 2472 MB in 1.99 seconds = 1240.35 MB/sec
       Timing buffered...
  18. [SOLVED] OSD high cpu usage

    Yes, I uploaded 2 files. I can see the CPU usage of a VM: 2% with htop but 30% in the Proxmox GUI... Very strange.
  19. [SOLVED] OSD high cpu usage

    Thanks for the help. @spirit: yes, all nodes are the same:

      Node 1: ok => 50 VMs
      Node 2: CPU 30% average usage => only 2 VMs
      Node 3: ok => 20 VMs
      Node 4: ok => 10 VMs
      Node 5: ok => 10 VMs

    If I move a VM from another node to node 2, the CPU usage of that VM becomes 30%. We have a 10G storage network with...
