Search results

  1. shutting down proxmox from VM

    I use this command on my personal workstation on Proxmox (a Linux Mint VM with passthrough): ssh -l root ip_du-promox- /usr/sbin/poweroff. Works like a charm.
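    A minimal sketch of that approach, assuming key-based root SSH from the guest to the host; the host name pve.example.lan and the key path are placeholders:

      # on the guest: create a key once and authorize it for root on the Proxmox host
      ssh-keygen -t ed25519 -f ~/.ssh/pve_poweroff
      ssh-copy-id -i ~/.ssh/pve_poweroff.pub root@pve.example.lan

      # afterwards a single command shuts the host down cleanly
      ssh -i ~/.ssh/pve_poweroff -l root pve.example.lan /usr/sbin/poweroff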
  2. Copy files from a Container to another

    scp can be a way if SSH is enabled on the two containers...
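    As an illustration, assuming both containers run an SSH server and the target is reachable at 10.0.0.12 (address and paths are made up):

      # copy a directory from one container to the other over SSH
      scp -r /srv/data root@10.0.0.12:/srv/

    From the Proxmox host itself, pct push / pct pull can also copy single files into or out of a container without SSH.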
  3. [SOLVED] Proxmox 6.0 Gemini Lake and IGD (graphics) passthrough for Windows 10

    I get this when I try to dump my BIOS: root@n5105:/sys/devices/pci0000:00/0000:00:02.0# cat /sys/devices/pci0000\:00/0000\:00\:02.0/rom > /tmp/vbios.dump cat: '/sys/devices/pci0000:00/0000:00:02.0/rom': Input/output error. Any idea?
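    One general sysfs detail worth checking (not confirmed as the fix in this thread): the rom file has to be enabled by writing 1 to it before it can be read, and disabled again afterwards:

      cd /sys/devices/pci0000:00/0000:00:02.0
      echo 1 > rom            # enable reading the ROM BAR
      cat rom > /tmp/vbios.dump
      echo 0 > rom            # disable it again

    Reading may still fail on some integrated GPUs even with this enabled.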
  4. Firmware not support Jasper lake's(N5105) GPU

    Hello, I have the same problem as you. Have you found a solution?
  5. Unable to get the hardware temperature

    Did you add the modules at the end of the sensors-detect run? You should have something like this in /etc/modules: root@proxmoxsan:~# cat /etc/modules # /etc/modules: kernel modules to load at boot time. # # This file contains the names of kernel modules that should be loaded # at boot...
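    As an illustration, assuming sensors-detect suggested the coretemp driver (the exact modules depend on the hardware), /etc/modules might end up looking like:

      # /etc/modules: kernel modules to load at boot time.
      #
      # This file contains the names of kernel modules that should be loaded
      # at boot time, one per line. Lines beginning with "#" are ignored.
      coretemp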
  6. [SOLVED] Problem during Migration with gluster filesystem.

    I tried now with the discard option on, and it's working again. Thanks for the patch.
  7. mounting raid1 disk in proxmox

    I think the mdadm program isn't installed by default. Perhaps it's a better idea to mount the disk on another PC and copy the data over the network, but you can install it with apt-get install mdadm.
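    A rough sketch of that route, assuming the array assembles as /dev/md0 (device name and mount point are illustrative):

      apt-get install mdadm
      mdadm --assemble --scan        # detect and assemble existing arrays
      cat /proc/mdstat               # check which md device came up
      mount -o ro /dev/md0 /mnt      # mount read-only to copy the data off safely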
  8. [SOLVED] Problem during Migration with gluster filesystem.

    I installed the latest PVE update and the qemu package from the test repository. glusterfs storage to glusterfs storage: create full clone of drive scsi2 (SSDinterne:170/vm-170-disk-2.qcow2) Formatting 'gluster://10.10.5.92/GlusterSSD/images/170/vm-170-disk-0.qcow2', fmt=qcow2 cluster_size=65536...
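    For context, output like that comes from a disk move between storages; a hedged CLI example using the VM ID and disk from the quoted log (the target storage ID is an assumption):

      # move the scsi2 disk of VM 170 to the GlusterSSD storage, keeping qcow2 format
      qm move_disk 170 scsi2 GlusterSSD --format qcow2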
  9. Proxmox / GlusterFS: method of operation

    If you have only 2 servers for GlusterFS and one server is down, or communication between the two servers is down, you are in split brain. For the split brain, try this: gluster volume status. You will see something like the output below; you should have a "Y" on every line: root@p1:~# gluster volume status...
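    A few related checks, assuming the volume is named GlusterSSD as in the other posts in this list:

      gluster peer status                    # both nodes should show "Peer in Cluster (Connected)"
      gluster volume status GlusterSSD       # every brick and daemon should show Y in the Online column
      gluster volume heal GlusterSSD info    # files still pending heal after a split brain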
  10. [SOLVED] Problem during Migration with gluster filesystem.

    Thanks for the reply. I'm waiting for the patch.
  11. [SOLVED] Problem during Migration with gluster filesystem.

    Well, it's not stable. The discard option is off. I was trying to remove the snapshot before the update: May 06 23:00:16 p3 pvestatd[1312]: status update time (11.471 seconds) May 06 23:00:21 p3 pvedaemon[253908]: <root@pam> starting task...
  12. [SOLVED] Problem during Migration with gluster filesystem.

    Well, without the discard option I was able to update the VM without any crash. With the discard option "on" I had this in the log: May 06 21:06:30 p3 pvestatd[1312]: status update time (11.371 seconds) May 06 21:06:42 p3 pvestatd[1312]: status update time (11.521 seconds) May 06...
  13. [SOLVED] Problem during Migration with gluster filesystem.

    I have: cache: write back, discard: yes, but ssd emulation: no. I believe qemu-img creates a sparse file (qcow2) with size=0 on the new storage (glusterfs in our case); when the migration starts, the program tries to recreate the structure in the filesystem in the qcow2 file, and there is a...
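    To illustrate the sparse-creation step described above, a hedged example of creating a qcow2 directly on gluster with qemu-img (not necessarily the exact call PVE makes; the 32G size is a placeholder, the path is taken from the log earlier in these results):

      # no preallocation is requested, so the qcow2 starts out as a near-empty sparse file
      qemu-img create -f qcow2 gluster://10.10.5.92/GlusterSSD/images/170/vm-170-disk-0.qcow2 32G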
  14. [SOLVED] Problem during Migration with gluster filesystem.

    It looks like this problem: https://github.com/qemu/qemu/commit/a6b257a08e3d72219f03e461a52152672fec0612
  15. [SOLVED] Problem during Migration with gluster filesystem.

    I have a similar problem on my workstation (Proxmox inside too) which is using the same glusterfs storage; randomly, the VM crashes, perhaps on high disk write activity (it did when I was updating the kernel). To be sure that it's a storage problem, I have moved the disk to a local SSD. create...
  16. [SOLVED] Problem during Migration with gluster filesystem.

    This is the last update: Start-Date: 2022-04-29 09:40:19 Commandline: apt-get dist-upgrade Upgrade: proxmox-widget-toolkit:amd64 (3.4-9, 3.4-10), pve-firmware:amd64 (3.3-6, 3.4-1), pve-qemu-kvm:amd64 (6.2.0-3, 6.2.0-5), libproxmox-acme-perl:amd64 (1.4.1, 1.4.2), pve-ha-manager:amd64 (3.3-3...
  17. [SOLVED] Problem during Migration with gluster filesystem.

    root@p1:~# pveversion pve-manager/7.1-12/b3c09de3 (running kernel: 5.13.19-6-pve) root@p1:~# pveversion -v proxmox-ve: 7.1-2 (running kernel: 5.13.19-6-pve) pve-manager: 7.1-12 (running version: 7.1-12/b3c09de3) pve-kernel-helper: 7.2-2 pve-kernel-5.13: 7.1-9 pve-kernel-5.13.19-6-pve...
  18. [SOLVED] Problem during Migration with gluster filesystem.

    Hello, everything was working fine (almost 2 years now), but recently (since the last update?) I have a problem with my gluster storage. The other day I tried to update a VM (dist-upgrade inside the VM), and while it was writing files --> the VM shut down, the same after a restoration from a PBS...
  19. Proxmox Cluster with local Gluster servers.

    GlusterFS manages the load balancing itself once you are connected. The secondary IP is used when you try to connect to the storage and the first Gluster node isn't up at the initial connection. Only two nodes is a very bad idea, you can have split brain with that...
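    For reference, the primary and secondary IPs correspond to the server and server2 fields of a glusterfs entry in /etc/pve/storage.cfg; a sketch with assumed storage ID and addresses (the volume name is borrowed from the other posts):

      glusterfs: glusterssd
              server 10.10.5.91
              server2 10.10.5.92
              volume GlusterSSD
              content images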
