I installed a new Proxmox node running PVE 8, and I have a cluster of 5 nodes on PVE 7. I configured iSCSI and multipath to connect to my Dell MD 3820.
I joined the PVE 8 node to the cluster; it was added and it shows all disks on the cluster normally,
but I see these errors in the journal logs. I tried to disable...
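(A general debugging sketch, not specific to the errors above: the iSCSI session and multipath path state can be checked with the standard open-iscsi and multipath-tools commands.)
iscsiadm -m session -P 1      # list active iSCSI sessions and their state
multipath -ll                 # show each multipath device and the health of its paths
journalctl -u multipathd -b   # multipathd messages since the last boot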
Hello,
I see this alert in Zabbix monitoring for a Proxmox 7 node. What are these interfaces used for, and how can I check which process is using this bandwidth?
Thanks
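(A hedged suggestion for the per-process part: standard tools such as nethogs and iftop can attribute traffic on a given interface; vmbr0 below is only a placeholder for the interface named in the alert.)
nethogs vmbr0    # per-process bandwidth on one interface (vmbr0 = placeholder)
iftop -i vmbr0   # per-connection view on the same interface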
I have been facing a problem with Proxmox Backup Server for the last few weeks: the backups of some VMs and LXCs fail, not every day and not the same VMs. The error I see is below, and I cannot find the reason or how to debug it.
command 'rsync --stats -h -X -A --numeric-ids -aH --delete --no-whole-file...
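(A sketch for narrowing this down: the -X and -A flags in that rsync command mean vzdump's suspend-mode container backup tries to preserve xattrs and ACLs, so the temporary area must support them. Re-running the backup of one affected guest by hand shows the full rsync output.)
vzdump 101 --mode suspend --storage pbs-store   # 101 and pbs-store are placeholders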
I have a Proxmox 7 cluster of 4 nodes. Sometimes I see the swap usage for some containers at 190%: I gave the container 512M of swap and it is using 970M, while the memory usage on the container is 10%.
How can the container take more swap than the value I specified?
Also, I see a VM usage around...
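(A minimal check for the swap question, assuming PVE 7's default cgroup v2 layout; the container ID 101 and the exact cgroup path are assumptions:)
pct config 101 | grep -E '^(memory|swap)'        # configured limits
cat /sys/fs/cgroup/lxc/101/memory.swap.max       # assumed cgroup v2 path; swap limit as the kernel enforces it
cat /sys/fs/cgroup/lxc/101/memory.swap.current   # swap actually in use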
Thanks, Mira, for your reply.
We are using a Dell MD 3820 with two types of disks, 15K and 7.2K. I ran the test on both and the result was the same. I am using RAID 10 for all groups,
and the storage connection to my Proxmox nodes is 1 Gb Ethernet,
but the weird thing, as I mentioned before, is the difference...
Please check the new results, as you recommended, with 60GB and 4M bs:
############### Fast Server 60GB with 4K bs ########################################
root@debian4:/home/khaled# fio --ioengine=psync --direct=1 --sync=1 --rw=write --bs=4K --numjobs=1 --iodepth=1 --runtime=600 --time_based...
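(For reference, the matching 4M run would change only the block size and job name in the same command; this is its assumed form, mirroring the 4K invocation above.)
# assumed 4M variant, not from the original post
fio --ioengine=psync --direct=1 --sync=1 --rw=write --bs=4M --numjobs=1 --iodepth=1 --runtime=600 --time_based --name write_4m --size 60G --filename=/data/testfile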
Hi Mira,
Thanks for your reply. Please find below the results from both servers using fio.
SLOW SERVER
root@devl:~# fio --ioengine=psync --direct=1 --sync=1 --rw=write --bs=4K --numjobs=1 --iodepth=1 --runtime=600 --time_based --name write_4k --size 5G --filename=/data/testfile
write_4k...
Hi,
I am using Proxmox 6.4, and I have Dell MD3820 storage connected over iSCSI. I have 3 disk groups: 2 of 1.6 TB 15K disks, and 1 of 5 TB 7.2K disks. I configured the storage as LVM over iSCSI and everything has been working fine for me.
I noticed lately that some of my VMs are slow for some actions, and I deep...
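(A common first step to see whether the slowness is storage latency, as a sketch: watch per-device latency and utilization with iostat from the sysstat package while a slow action runs.)
apt install sysstat   # if not already installed
iostat -xm 2          # extended stats every 2s; watch await and %util on the iSCSI/multipath devices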
I tested CIFS and NFS; I use a WD EX4100 for the CIFS and NFS shares,
and yes, I mounted them using the Proxmox GUI.
I worked around this by switching this container's backup to stop mode, and it is working fine now, but I would still like to be able to use CIFS or NFS as the temp dir.
Thanks
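(A hedged explanation of why CIFS tends to fail there: suspend-mode container backups rsync with -X and -A, i.e. they try to preserve xattrs and ACLs, which many CIFS mounts do not support. A local tmpdir in /etc/vzdump.conf avoids that; the path below is only an example and the directory must exist:)
# /etc/vzdump.conf (example local path)
tmpdir: /var/tmp/vzdump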
Hi,
The documentation shows a normal tmpdir setup like the one I did: I created a tmp folder under /mnt/pve/backup, which is the CIFS share used for backups, and vzdump.conf was
tmpdir: /mnt/pve/backup/tmp
but I got the above error. Any suggestions?
Thanks
Hello,
I am running Proxmox v5.4. One host has 90 GB of disk space on local storage, and I have an LXC container with 110 GB of storage. Sometimes the backup fails because there is no disk space left, and sometimes the local disk gets almost full and the monitoring system alerts us.
I tried to...
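(One hedged workaround: write the backup to a bigger storage instead of the local disk; the guest ID and path below are placeholders.)
vzdump 110 --dumpdir /mnt/pve/backup   # 110 and the path are placeholders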
Hello,
I have Proxmox 5 installed and am running three nodes in production. I have a Dell MD 3820 that I use as LVM over iSCSI storage, but LVM does not have a snapshot option, so I am planning to build a new cluster with Proxmox 6.1 and would like a recommendation about the storage setup...
Hi matrix,
I rebooted the node and it worked fine, and I am looking to upgrade to 6.1, but I am afraid of running into big problems with my cluster.
I also have another issue: why, when I lose the connection to my shared storage (a Dell MD3820), even for a few seconds, do all VMs and containers...
Hi,
I restarted the pvestatd and multipathd services, but the result is the same. The syslog says:
Feb 11 12:38:32 pve8 pveproxy[26445]: proxy detected vanished client connection
Feb 11 12:38:32 pve8 pveproxy[22278]: proxy detected vanished client connection
Feb 11 12:38:38 pve8 multipathd[7725]: md3820: load...
I am running Proxmox 5.4-10 on three nodes with shared storage connected to a Dell MD over iSCSI. When the connection to the iSCSI storage was lost during a switch reboot, all node statuses showed gray with question marks. I rebooted one of the nodes and it works fine; the first one did not show this...
I am using ayufan's differential backup, and the backups seem to be taken without any errors, but when I try to restore a backup from these differential backups I get this error.
Logs
extracting from differential archive, using full backup...
I connected the iSCSI to my Proxmox and I can see the LUN from Proxmox, but when I create the storage and do not check "use LUNs directly", it is checked by default and cannot be unchecked. Then I tried to add LVM on top of iSCSI, but I got the following error:
create storage failed: error with cfs lock...
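(The usual manual route for LVM on top of iSCSI, as a sketch; the multipath device name and storage IDs below are assumptions:)
pvcreate /dev/mapper/md3820              # md3820 = assumed multipath device
vgcreate vg_md3820 /dev/mapper/md3820    # create a VG on the LUN
pvesm add lvm md3820-lvm --vgname vg_md3820 --content images --shared 1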