I use this command on my personal workstation on Proxmox (a Linux Mint VM with passthrough):
ssh -l root ip_of_the_proxmox /usr/sbin/poweroff
Works like a charm.
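For the command to run without a password prompt, the workstation needs key-based SSH access to the Proxmox host. A minimal sketch, assuming you don't already have a key pair (the 192.168.1.10 address is only an example, use your Proxmox IP):

ssh-keygen -t ed25519                          # generate a key pair on the workstation
ssh-copy-id root@192.168.1.10                  # copy the public key to the Proxmox host
ssh -l root 192.168.1.10 /usr/sbin/poweroff    # now works without typing a password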
I have this when I try to dump my vBIOS:
root@n5105:/sys/devices/pci0000:00/0000:00:02.0# cat /sys/devices/pci0000\:00/0000\:00\:02.0/rom > /tmp/vbios.dump
cat: '/sys/devices/pci0000:00/0000:00:02.0/rom': Erreur d'entrée/sortie
Any idea?
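The I/O error ("Erreur d'entrée/sortie" = "Input/output error") is what you usually get when the sysfs ROM file hasn't been enabled first. It may still fail on this iGPU (many Intel iGPUs don't expose a ROM BAR at all), but the usual sequence is:

cd /sys/devices/pci0000:00/0000:00:02.0
echo 1 > rom              # enable reading the ROM through sysfs
cat rom > /tmp/vbios.dump
echo 0 > rom              # disable it again

If it still returns an I/O error, the vBIOS probably has to be extracted from the system firmware image instead.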
Did you add the module at the end of the sensors-detect run?
You should have something like this in /etc/modules:
root@proxmoxsan:~# cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot...
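If sensors-detect didn't add it for you, you can append the module by hand and load it right away. it87 here is only an example; use whatever chip driver sensors-detect reported:

echo it87 >> /etc/modules    # load it automatically at boot
modprobe it87                # load it now, without rebooting
sensors                      # the new readings should show up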
I think mdadm isn't installed by default. Perhaps it's a better idea to mount the disk on another PC and copy the data over the network.
But you can install it with apt-get install mdadm.
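Once mdadm is installed, something like this should let you read the array (device names are only examples, and mounting read-only is safer while you copy the data off):

apt-get install mdadm
mdadm --assemble --scan      # auto-detect and assemble existing arrays
cat /proc/mdstat             # check which md device came up
mount -o ro /dev/md0 /mnt    # mount it read-only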
I installed the latest PVE updates and the qemu package from the test repository.
glusterfs storage to glusterfs storage
create full clone of drive scsi2 (SSDinterne:170/vm-170-disk-2.qcow2)
Formatting 'gluster://10.10.5.92/GlusterSSD/images/170/vm-170-disk-0.qcow2', fmt=qcow2 cluster_size=65536...
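In case it helps someone, by test repository I mean pvetest, roughly like this (the Debian codename depends on your PVE version, bullseye is only an example here):

echo "deb http://download.proxmox.com/debian/pve bullseye pvetest" > /etc/apt/sources.list.d/pvetest.list
apt-get update
apt-get install pve-qemu-kvm    # the qemu package is called pve-qemu-kvm on PVE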
If you have only 2 servers for GlusterFS and one server goes down, or communication between the two servers goes down, you end up in split brain.
For the split brain, try this: gluster volume status
You will see something like this. You should have a "Y" on every line.
root@p1:~# gluster volume status...
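To check for an actual split brain, the heal commands are more explicit than the status output (GlusterSSD is the volume name from my setup, replace it with yours):

gluster peer status                              # both peers should show "Connected"
gluster volume heal GlusterSSD info              # files still waiting to be healed
gluster volume heal GlusterSSD info split-brain  # files really in split brain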
Well, it's not stable.
The discard option is off.
I was trying to remove the snapshot before the update:
May 06 23:00:16 p3 pvestatd[1312]: status update time (11.471 seconds)
May 06 23:00:21 p3 pvedaemon[253908]: <root@pam> starting task...
Well, without the discard option, I was able to update the VM without any crash.
With the discard option "on", I had this in the log:
May 06 21:06:30 p3 pvestatd[1312]: status update time (11.371 seconds)
May 06 21:06:42 p3 pvestatd[1312]: status update time (11.521 seconds)
May 06...
I have:
cache: writeback
discard: yes
but SSD emulation: no.
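For reference, this is roughly how I would check and change those options from the CLI, using the scsi2 disk of VM 170 from the log above (the exact volume name may differ on your side); I believe re-setting the disk line without discard is enough, and the change applies after a stop/start:

qm config 170 | grep scsi2        # current line, e.g. ...,cache=writeback,discard=on
qm set 170 --scsi2 SSDinterne:170/vm-170-disk-2.qcow2,cache=writeback    # same disk, discard dropped (back to ignore)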
I believe qemu-img creates a sparse file (qcow2) with size=0 on the new storage (GlusterFS in our case). When the migration starts, the program tries to recreate the structure of the filesystem in the qcow2 file; there is a...
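To illustrate what I mean by sparse, a quick local test (not on the gluster mount, just to show the difference between virtual size and actual allocation):

qemu-img create -f qcow2 test.qcow2 32G   # create an empty 32G image
qemu-img info test.qcow2                  # virtual size: 32 GiB, disk size: a few hundred KiB
du -h test.qcow2                          # the file really only occupies a few hundred KiB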
I have a similar problem on my workstation (Proxmox on it too), which uses the same GlusterFS storage. Randomly, the VM crashes, perhaps on heavy disk write activity (it did while I was updating the kernel).
To be sure that it is a storage problem, I have moved the disk to a local SSD.
create...
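In case someone wants to do the same from the CLI, moving a disk to another storage is roughly this (VM id and disk name taken from the earlier log, and SSDinterne is assumed here to be the local SSD storage):

qm move_disk 170 scsi2 SSDinterne
# add --delete 1 to remove the old copy on the gluster volume once the move succeeds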
Hello,
Everything was working fine (for almost 2 years now), but recently (since the last update?) I have a problem with my Gluster storage.
The other day I tried to update a VM (dist-upgrade inside the VM), and while it was writing files --> the VM shut down.
Same after a restoration from a PBS...
GlusterFS manages the load balancing itself once you are connected.
The secondary IP is used when you try to connect to the storage and the first Gluster node isn't up at the initial connection.
Only two nodes is a very bad idea; you can have split brain with that...
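In Proxmox the secondary IP is the server2 field of the GlusterFS storage definition; roughly like this in /etc/pve/storage.cfg (the second address is only an example):

glusterfs: GlusterSSD
        volume GlusterSSD
        server 10.10.5.92
        server2 10.10.5.93
        content images

And to get out of the two-node situation, the usual fix is a third node acting as arbiter (something like gluster volume add-brick GlusterSSD replica 3 arbiter 1 third-node:/path/to/brick), but that is a separate topic.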