iface lo inet loopback
iface enp2s0 inet manual
iface vmbr0 inet static
iface enp5s4 inet manual
Please verify your configuration in vmbr0. I think you should have enp2s0 and...
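For reference, a minimal sketch of what a typical Proxmox vmbr0 stanza in /etc/network/interfaces looks like, bridging the physical NIC (the address and gateway below are placeholders, not your real values):

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports enp2s0
        bridge-stp off
        bridge-fd 0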
Actually it's broken because of a problem with glusterfs.
The packages pve-qemu-kvm, qemu-server and spiceterm failed their dependencies.
root@prox4:~# apt list --upgradable
Listing... Done
libpve-common-perl/stable 6.0-3 all [upgradable from: 6.0-2]...
On my own config at home, everything works fine with Gluster v6, but Proxmox was installed over a Debian installation and I added the Gluster repository manually:
deb [arch=amd64] https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/Debian/buster/amd64/apt buster main
Before the upgrade it was...
I think it depends on the configuration: replica, distribute, number of bricks...
For example, with a replica 3 arbiter 1 or a full replica 3 with 3 bricks, recovery is easy because each data brick holds the full data.
If you have a problem, the data is still visible directly on the storage.
Gluster with replica 2 is a bad idea: if one server goes down, you can get split-brain.
The solution is to have a third server to avoid this. Then you can have, at minimum, a replica 3 arbiter 1 volume.
On the third you can simply have an arbiter brick: no real data on it, but no...
I think it's something like that.
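The exact command didn't survive here; a sketch of creating such a volume with the Gluster 6 CLI (the volume name, hostnames and brick paths are placeholders):

gluster volume create testvol replica 3 arbiter 1 server1:/data/brick1 server2:/data/brick1 server3:/data/arbiter1
gluster volume start testvol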
Perhaps this works too:
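If the volume already exists as a replica 2, an add-brick of this form (again a sketch; server3 and its brick path are placeholders) should turn it into replica 3 arbiter 1:

gluster volume add-brick testvol replica 3 arbiter 1 server3:/data/arbiter1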
And then you rebalance the volume, but with replica 3 I think it does it by itself when you add the bricks, because all the files are replicated.
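For a pure replica volume it's the self-heal daemon that fills the new brick; if I remember right you can kick it off and watch the progress with:

gluster volume heal testvol full
gluster volume heal testvol info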
You can create a test volume and try this on it first.
I use this method when a...
Is the filesystem ext4?
You can use this to see what accesses the disk:
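The original command wasn't preserved in the quote; one tool I would use here is iotop, which shows per-process disk I/O (the flags below limit the output to processes actually doing I/O and accumulate totals):

iotop -o -P -a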
Another option is to enable hard drive access dumping to syslog.
That may tell you what is accessing the particular drive. I used it myself many moons ago and found it useful.
To turn it on...
and to turn it off...
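The commands themselves were lost above; assuming this refers to the classic block_dump sysctl (present on kernels before 5.18, so it applies to buster's 4.19), it would be:

echo 1 > /proc/sys/vm/block_dump
echo 0 > /proc/sys/vm/block_dump

With it enabled, the kernel logs every block read/write with the process name and device; check dmesg or /var/log/syslog. Just beware that syslog writing those lines to the same disk generates its own I/O.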
If you installed Proxmox over a Debian installation, anything is possible.
At home:
root@prox1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda2(W) mmcblk0p1
3904512 blocks super 1.2 [2/2] [UU]...
Can you confirm that if we use the same "vmbr10", we can't bring down only one IP on the network interface?
I also use the ":1" alias in my config, so I can do:
ifconfig vmbr0:1 down
to shut down just this specific IP address.
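For context, this is roughly the alias stanza in /etc/network/interfaces that goes with it (the address is a placeholder):

auto vmbr0:1
iface vmbr0:1 inet static
        address 192.168.1.11/24

With iproute2 you can also drop a single address without any alias:

ip addr del 192.168.1.11/24 dev vmbr0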