There are 4 LUNs for one VM, since there are 4 paths connecting the SAN to Proxmox, and there is LVM on top of each multipath entry to support backups. Each time I add a new VM, I need to modify multipath.conf manually and push it to all the cluster servers.
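To illustrate, the entry I add for each VM looks roughly like this (the WWID and alias below are made up; use the values your SAN reports), inside the existing multipaths section of /etc/multipath.conf on every node:

```
# /etc/multipath.conf -- inside the existing "multipaths" section
multipath {
    wwid  36000d31000abcd0000000000000042   # hypothetical WWID reported by the SAN
    alias vm-105-disk-0                     # hypothetical "real name" of the VM's LUN
}
```

followed by `multipathd reconfigure` (or a restart of multipathd) on each node so the alias is picked up.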
Do you have a better solution ...
Hi
We use an iSCSI Compellent SAN with multipath. It works very well, but creating a new VM is a manual step-by-step process. One of the steps is modifying multipath.conf to map the VM's real name to the WWID. The file is shared by all the Proxmox nodes.
I have two questions:
-...
I am on PVE 7.3, and the patch is applied and working for me. As usual, I need to restart pvestatd for it to take effect.
In my version, the file to modify is /usr/share/perl5/PVE/Storage/ISCSIPlugin.pm at line 68: add a "return 1;" before the line "return PVE::Network::tcp_ping($server, $port...
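For context, here is roughly what that area of the file looks like with the change applied in my copy (a sketch only; the surrounding code and line numbers may differ between releases):

```perl
# /usr/share/perl5/PVE/Storage/ISCSIPlugin.pm (around line 68 in my version)
sub iscsi_test_portal {
    my ($portal) = @_;

    my ($server, $port) = PVE::Tools::parse_host_and_port($portal);
    return 0 if !$server;

    return 1;   # added: skip the connectivity probe that floods the SAN log
    return PVE::Network::tcp_ping($server, $port || 3260, 2);
}
```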
Hi
I have a Linux server where I must restore 40 GB of files. This server is hosted in a VM on PVE and backed up on PBS. The backup is encrypted with a key known to the PVE administrator but not to the VM owner (so I cannot deploy the backup client on the VM). I don't want to download a 40 GB file...
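From what I read, a single-file restore tool exists on the PVE side, so maybe something along these lines would avoid downloading the whole archive. Run from a PVE node that has the encryption key configured; the snapshot name, repository and drive path below are made-up placeholders, and the exact syntax should be checked with `proxmox-file-restore help` on your version:

```shell
# List the files inside one disk of a VM backup (all names are placeholders)
proxmox-file-restore list "vm/105/2021-08-20T10:00:00Z" "drive-scsi0.img.fidx/" \
    --repository "backup@pbs@pbs.example.com:datastore1"
```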
# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.157-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
I will try to upgrade to version 7 to see if the problem is solved.
Hi
I am trying to enable the Web UI on IPv4 AND IPv6.
I found this page https://pve.proxmox.com/pve-docs/pveproxy.8.html to enable IPv6.
So I created an /etc/default/pveproxy file and put this in it:
LISTEN_IP="2a01:e0a:a:b::c"
I restarted the service with:
systemctl restart pveproxy.service...
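Since the goal is both address families at once, binding to a single IPv6 address may be the problem; per the pveproxy man page, LISTEN_IP restricts the daemon to one address. A sketch of what I would try instead (this assumes the Linux default net.ipv6.bindv6only=0, so the IPv6 wildcard also accepts IPv4 connections):

```shell
# /etc/default/pveproxy -- bind the wildcard so both IPv4 and IPv6 work
LISTEN_IP="::"
```

then restart pveproxy again with systemctl.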
Hi guys
The error appearing on the storage is caused by Proxmox code: it is generated each time pvestatd tries to connect to the portal to test connectivity. It is not due to the iSCSI stack.
For now, I remove the test in /usr/share/perl5/PVE/Storage/ISCSIPlugin.pm by adding "return 1;" before the...
You are right, the log is from the storage. But Dell has already worked very hard on this topic, spending a lot of time with level 2, 3 and 4 engineers, and they have asked for Proxmox's help. Do you think version 7 of Proxmox could change something?
We didn't see anything like that. MTU 9000 is set, 10 Gb/s everywhere with DAC cables, flow control enabled on the switch (I don't know how to check that on the Proxmox side, but we have set "/sbin/ethtool -A eth5 autoneg off rx on tx on" in /etc/network/interfaces).
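For the record, the current pause (flow-control) settings can be queried per interface with ethtool's lowercase -a flag (uppercase -A sets them, lowercase -a reads them back):

```shell
# Query the current flow-control (pause) parameters on the NIC
/sbin/ethtool -a eth5
```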
We have reviewed all these parameters with Dell support, and they would like to know if someone has already seen this problem on Proxmox, because they never have.
Hi,
We are on Proxmox 6.4 with an iSCSI Dell Compellent storage. This storage uses multipath. It has worked really well with Proxmox for years. We added a new SCv3020 storage and since then there is a log entry on it: "CTL:856522 SUB:CHELSIOT4 FNC:ActivateObjectCallback FNM:chelsioT4Connection.cxx...
If I disable ballooning, it works: "balloon: 0" in the conf. But it is not ideal...
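In case it helps, the same thing can be set from the CLI instead of editing the conf file by hand (105 is my VMID here; substitute your own):

```shell
# Disable the balloon device for VM 105
qm set 105 --balloon 0
```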
- - - Updated - - -
Sorry Tom, I didn't see your post:
VMID.conf :
pveversion -v :
qm start 105 :
Hi
I added sound to my VM by adding the line "args: -soundhw all" to the /etc/pve/nodes/proxmox-bureau/qemu-server/105.conf file.
Today I did the upgrade, and the VM no longer starts:
/usr/bin/kvm -id 105 -chardev 'socket,id=qmp,path=/var/run/qemu-server/105.qmp,server,nowait' -mon...
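It seems recent QEMU versions removed the -soundhw option, so if that is the cause, the fix might be to drop my manual args line and use the supported audio0 option instead (the device choice below is a guess; other devices like AC97 exist):

```shell
# Remove the obsolete "-soundhw" args line and use the supported audio0 option
qm set 105 --delete args
qm set 105 --audio0 device=ich9-intel-hda,driver=spice
```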
Re: kernel: EXT4-fs (dm-3): Unaligned AIO/DIO on inode 130479 by kvm; performance wil
Eh, sorry to revive an old topic, but I have the same error:
Aug 20 15:42:12 proxmox-bureau kernel: EXT4-fs (dm-2): Unaligned AIO/DIO on inode 9568272 by kvm; performance will be poor.
I don't see that...