Hello. I'm running Proxmox VE 3.4-6 and following the instructions outlined here: https://pve.proxmox.com/wiki/IO_Scheduler The /etc/default/grub file contains this line: GRUB_CMDLINE_LINUX_DEFAULT="quiet elevator=deadline" We ran update-grub and restarted the system. Now, all drives are set to...
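After rebooting, the active scheduler can be confirmed per drive from sysfs; the kernel brackets the one in use. A minimal sketch (the device paths are whatever block devices happen to be present on the node):

```shell
# Print the active I/O scheduler for each block device.
# The scheduler in use is shown in [brackets], e.g. "noop [deadline] cfq".
found=0
for sched in /sys/block/*/queue/scheduler; do
  if [ -r "$sched" ]; then
    printf '%s: %s\n' "$sched" "$(cat "$sched")"
    found=$((found+1))
  fi
done
printf 'checked %d device(s)\n' "$found"
```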
Hello spirit. Can you provide a little more information on how this can be achieved? Can it be achieved today? I'm assuming that this would work with SAS drives because of the SCSI relationship, but can you provide a few more details on how this would function? Thanks.
Interestingly enough, we took the opposite approach for our project. Originally, we started off with XFS on the NFS server and XFS on the VMs. We ran into problems and quickly switched to ext3 on the NFS server with XFS on the VMs. The result: no more problems on our end. We never really figured out why...
What file system are you using on NFS? I remember having some odd problems with XFS on our NFS systems when deploying a solution based on Proxmox VE 2.3. We eventually settled on ext3, based on some comments here suggesting that OpenVZ favored ext3.
We are still on VE 2.3 and have not seen this problem in over a week; the error condition seems a little difficult to reproduce. If the logs go crazy and start filling up the local disk again, then I'll certainly be in a better position to provide this information to you.
Thanks again.
Hello
We are using Proxmox VE 2.3 with 16 nodes.
One of the nodes is showing these errors on the Syslog tab:
Jul 1 17:41:17 proxmox7 pvedaemon[1777]: WARNING: Use of uninitialized value $path in string eq at /usr/share/perl5/PVE/AccessControl.pm line 865.
Jul 1 17:41:17 proxmox7...
The NFS machines are old IBM x306 1U servers, Intel Pentium 4 CPU 3.40GHz, 4GB RAM (3.3 GB usable) with two GigE NICs. Ubuntu 12.04.2 i386 Server.
One GigE NIC runs to the dedicated GigE switch just for Proxmox nodes. The other GigE NIC runs to the AoE SAN storage GigE switch. The SAN shelves...
All VMs are accessed through one of three NFS servers: nfs1 (10.199.0.21), nfs2 (10.199.0.22) and nfs3 (10.199.0.23).
All NFS servers access AoE SAN storage with each mount point representing a single shelf of disks.
So nfs1 is the front-end (NAS) to /coraid0, /coraid1 and /coraid2.
nfs2...
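A hedged sketch of what the export configuration on nfs1 might look like for the mount points named above; the subnet and export options are assumptions for illustration, not taken from the original post:

```
# /etc/exports on nfs1 -- subnet and options are assumed, adjust to taste
/coraid0  10.199.0.0/24(rw,sync,no_subtree_check)
/coraid1  10.199.0.0/24(rw,sync,no_subtree_check)
/coraid2  10.199.0.0/24(rw,sync,no_subtree_check)
```

After editing /etc/exports, `exportfs -ra` re-reads it without restarting the NFS server.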
Good day.
We are running a 16-node Proxmox VE 2.3 cluster. Over the last few days, we are seeing a problem where automated snapshot backups of some VMs seem to be taking as long as 10 - 12 hours to complete while other VMs are finished in less than 45 mins. Most or all of the troublesome VMs...
More details: There was some light interaction when I made the initial ssh connection from node 10 to 11:
root@proxmox10:~# ssh 10.199.0.111
The authenticity of host '10.199.0.111 (10.199.0.111)' can't be established.
RSA key fingerprint is 10:88:62:e7:59:4a:f8:2b:dd:68:b2:e0:ac:70:e3:d9.
Are...
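Before answering that prompt, the fingerprint can be checked against the target host's own key. A minimal sketch; the key path is the Debian/Ubuntu default and is an assumption here (run it on the target node, e.g. 10.199.0.111):

```shell
# Print the fingerprint of the local RSA host key so it can be compared
# with the fingerprint shown in the ssh prompt.
key=/etc/ssh/ssh_host_rsa_key.pub
if [ -r "$key" ]; then
  ssh-keygen -lf "$key"
else
  echo "no readable RSA host key at $key"
fi
```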