By the way, it seems there's a loop when Proxmox tries to remount an NFS share:
it creates a lot of processes, because when the mount is frozen, PVE keeps retrying the mount...
Hi,
Today I had the same problem on a node (stale NFS mount). Is there a way to avoid this situation? Why does PVE freeze on a stale NFS mount that isn't even used? (Confirmed with lsof: no process was accessing the NFS mount.)
Thank you
Still not working for me...
test1:~# corosync
notice [MAIN ] Corosync Cluster Engine ('2.3.5'): started and ready to provide service.
info [MAIN ] Corosync built-in features: augeas systemd pie relro bindnow
test1:~# /etc/init.d/pve-cluster start
Starting pve cluster filesystem ...
OK, I'm just saying that this is unusable in a production environment.
Hi spirit, thank you for your how-to, but I don't think it can work.
When you run "pvecm add ipofnode1 -force" on a non-rebooted node, it fails because it calls 'systemctl stop pve-cluster' and systemctl does not work yet (system...
Yes, I do one cluster at a time, but read the procedure: it is impossible to mix 3.4 & 4.0 nodes in the same cluster. So the procedure is to upgrade a first node and create a new cluster on that node, which means that during the upgrade you have two clusters instead of one.
When you have thousands of nodes, you can't do it by...
If I use the procedure provided by spirit, it seems there's no downtime, is that right?
The problem is that the upgrade has to be done by hand; it's impossible to automate with Ansible, for example.
And during the upgrade, we have two clusters, not one...
If I understand correctly, there is no cluster upgrade procedure from 3.4 to 4.0? We need to re-create the cluster from scratch, so we lose all the cluster configuration such as users, permissions, etc.?
I can't understand your strategy with this release. Think of the people running clusters with dozens of nodes...
Found the solution myself: there's no need to use vconfig, just use the ip command: ip link add link eth0 name eth0.100 type vlan id 100
See thread: http://forum.proxmox.com/threads/24162-Proxmox-4-0-VLAN
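For completeness, here is the full sequence I ended up with (a minimal sketch; the parent interface eth0, VLAN ID 100 and the address are just examples):

# create the VLAN interface with iproute2 (replaces vconfig)
ip link add link eth0 name eth0.100 type vlan id 100
# bring it up and assign an address
ip link set dev eth0.100 up
ip addr add 192.168.100.10/24 dev eth0.100
# delete it again if needed
ip link del eth0.100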
Hi everyone,
I just upgraded a node to PVE 4, and I have a problem with the vlan package, which is listed as a conflict by pve-manager.
pve-manager is supposed to provide vlan, but I don't have the vconfig command.
I really need the vconfig command; what can I do?
Thank you.
Hi everyone,
Today I ran "aptitude update && aptitude safe-upgrade" on my nodes.
The last upgrade was about two weeks ago.
Since then, all my nodes have been producing tons of logs like:
Nothing changed in the network configuration.
I did "service cman stop; sleep 2; service cman start; service...
Hi everyone,
I use RBD as the backend storage for my VMs.
All VMs have a single disk, but on a few of them I can see a second disk (disk-2) in my RBD pool with strange sizes: 4.49GB when the primary disk is 32GB, 8.49GB when the primary disk is 64GB, etc.
And I have some disk-2 images related to VM IDs that...
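To see what I mean, you can list the images and their sizes in the pool (a minimal sketch; the pool name "rbd" and the VM ID are examples from my setup):

# list all images in the pool, with sizes
rbd ls -l rbd
# inspect one of the suspicious second disks
rbd info rbd/vm-100-disk-2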
It seems that the Ceph repos changed recently.
As a workaround, edit /usr/bin/pveceph and replace:
my $ua = LWP::UserAgent->new(protocols_allowed => ['https'], timeout => 30);
with:
my $ua = LWP::UserAgent->new(protocols_allowed => ['https', 'http'], timeout => 30);
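If you prefer not to edit the file by hand, the same change can be applied with a one-liner (a sketch; note that the edit will be overwritten by the next pve-manager update):

# allow plain http in addition to https in pveceph's UserAgent
sed -i "s/protocols_allowed => \['https'\]/protocols_allowed => ['https', 'http']/" /usr/bin/pveceph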
My problem occurs only on guests running MariaDB.
When I start the guest with cache=writeback and mount its ext4 with nobarrier: no iowait.
When I start the guest with cache=none and mount its ext4 with nobarrier: high iowait.
Can we deduce from this that Ceph is the problem?
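For reference, this is how the two configurations were set up (a sketch; the VM ID, bus, storage and volume names are examples from my setup):

# on the PVE host: set the disk cache mode (cache=none for the second test)
qm set 100 --virtio0 rbd-storage:vm-100-disk-1,cache=writeback
# inside the guest: mount ext4 without write barriers, e.g. in /etc/fstab
/dev/vda1  /var/lib/mysql  ext4  defaults,nobarrier  0  2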
In fact, my problem of high IO wait does not seem related to RBD or the disk.
I really don't understand why the system shows high IO, because it does not write much to disk:
# iostat
Linux 3.2.0-4-amd64 (zabbix01-proxy1) 04/10/2015 _x86_64_ (1 CPU)
avg-cpu: %user %nice...
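When iostat alone isn't conclusive, per-process statistics can help narrow it down (a sketch; pidstat comes with the sysstat package):

# sample per-process disk I/O once per second
pidstat -d 1
# list processes stuck in uninterruptible sleep (D state)
ps -eo state,pid,cmd | awk '$1 == "D"'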
I know, I read it. Will the XFS extsize parameter "defragment" the disk by itself, or do I need to run xfs_fsr? I'm currently running xfs_fsr on some OSDs after setting the parameter to true, but it takes a very long time...
I already customized the mount options in Ceph.
I will look into read_ahead_kb. Thank you.
I saw that all my OSD XFS filesystems are fragmented (>15%), so I will start with that. I found some information.
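For anyone wanting to check their own OSDs, fragmentation can be measured and fixed with the standard XFS tools (a sketch; the device name is an example):

# report the fragmentation factor (read-only, safe on a mounted filesystem)
xfs_db -r -c frag /dev/sdb1
# defragment; works on a mounted filesystem but is slow and I/O heavy
xfs_fsr -v /dev/sdb1
# the Ceph parameter mentioned above goes in ceph.conf, under [osd]
filestore xfs extsize = true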
That's interesting :) Why is virtio-scsi better? Is it the "SCSI" option in the PVE GUI?
Hi all,
I run PVE with Ceph (Giant) as RBD storage.
I have high iowait on my VMs.
All are configured with cache=writeback because I thought that was best for performance.
Is that really true? Which cache mode do you recommend for RBD?
Thank you.
Flo
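For context: with cache=writeback, QEMU delegates caching to the RBD client-side cache, which is configured in ceph.conf on the client (a sketch; the values shown are the upstream defaults, not a recommendation):

[client]
rbd cache = true
rbd cache size = 33554432
rbd cache max dirty = 25165824
rbd cache writethrough until flush = true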