By the way, it seems there's a loop when Proxmox tries to remount an NFS share:
it creates a lot of processes, and while the mount is frozen, PVE keeps retrying the mount...
Hi,
Today I had the same problem on a node (stale NFS mount). Is there a way to avoid this situation? Why does PVE freeze on a stale NFS mount that is not even in use? (Confirmed with lsof: no process was accessing the NFS mount.)
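Not a fix for the underlying freeze, but a sketch of how one might probe a possibly-stale mount without hanging the shell, and then clear it; the mount point path is a placeholder, not from this thread:

```shell
#!/bin/sh
# Hypothetical example: probe a possibly-stale NFS mount with a timeout;
# if the probe hangs or fails, lazy/force unmount it.
# /mnt/pve/mynfs is a placeholder path -- adjust to your storage.
MNT="${1:-/mnt/pve/mynfs}"
if timeout 5 stat "$MNT" >/dev/null 2>&1; then
    echo "mount responds"
else
    echo "mount appears stale, lazy-unmounting"
    umount -f -l "$MNT" || true  # needs root; failure ignored in this sketch
fi
```

The lazy unmount (`-l`) detaches the mount point immediately, which at least stops new processes from blocking on it.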
Thank you
Still not working for me...
test1:~# corosync
notice [MAIN ] Corosync Cluster Engine ('2.3.5'): started and ready to provide service.
info [MAIN ] Corosync built-in features: augeas systemd pie relro bindnow
test1:~# /etc/init.d/pve-cluster start
Starting pve cluster filesystem ...
OK, I'm just saying that's unusable in a production environment.
Hi spirit, thank you for your how-to, but I don't think it can work.
When you run "pvecm add ipofnode1 -force" on a non-rebooted node, it will fail because it calls 'systemctl stop pve-cluster' and systemctl does not work yet (system...
Yes, I do one cluster at a time, but read the procedure: it is impossible to mix 3.4 and 4.0 nodes in the same cluster. So the procedure is to upgrade a first node and create a new cluster on that node. So during the upgrade, you have two clusters instead of one.
When you have thousands of nodes, you can't do it by...
If I use the procedure provided by spirit, it seems there's no downtime, right?
The problem is that the upgrade has to be done by hand; it's impossible to automate with Ansible, for example.
And during the upgrade, we have two clusters, not one...
If I understand correctly, there is no cluster upgrade procedure from 3.4 to 4.0? We need to re-create the cluster from scratch, so we lose all cluster configuration such as users, permissions, etc.?
I can't understand your strategy with this release. Think of people who have clusters with dozens of nodes...
Found the solution myself: there's no need for vconfig, just use the ip command: ip link add link eth0 name eth0.100 type vlan id 100
See thread: http://forum.proxmox.com/threads/24162-Proxmox-4-0-VLAN
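For persistence across reboots, a sketch of an /etc/network/interfaces stanza; this assumes PVE's own ifupdown VLAN hook picks up the ethX.Y naming (interface name and VLAN ID follow the example above, adjust to your setup):

```
auto eth0.100
iface eth0.100 inet manual
```

After editing, `ifup eth0.100` should create the device via ip, the same as the manual command above.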
Hi everyone,
I just upgraded a node to PVE 4, and I have a problem with the vlan package, which is listed as a conflict in pve-manager.
pve-manager is supposed to provide vlan, but I don't have the vconfig command.
I really need the vconfig command; what can I do?
Thank you.
Hi everyone,
Today I did an "aptitude update && aptitude safe-upgrade" on my nodes.
The last one was about 2 weeks ago.
After this, all my nodes have been producing tons of log messages like:
Nothing changed in the network configuration.
I did "service cman stop; sleep 2; service cman start; service...
Hi everyone,
I use RBD as the backend storage for my VMs.
All VMs are single-disk, but on a few of them I can see a second disk (disk-2) in my RBD pool with strange sizes: 4.49 GB when the primary disk is 32 GB, 8.49 GB when the primary disk is 64 GB, etc.
And I have some disk-2 images related to VM IDs that...
It seems that the Ceph repos changed recently.
As a workaround, edit /usr/bin/pveceph and replace:
my $ua = LWP::UserAgent->new(protocols_allowed => ['https'], timeout => 30);
with:
my $ua = LWP::UserAgent->new(protocols_allowed => ['https', 'http'], timeout => 30);
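The edit above can also be applied non-interactively with sed. A sketch, demonstrated here on a scratch copy rather than the live /usr/bin/pveceph (back up the original first; the change will be overwritten by the next pve-manager update):

```shell
#!/bin/sh
# Demonstrate the one-line patch on a scratch copy; the real target
# would be /usr/bin/pveceph (take a backup before editing it).
f=$(mktemp)
printf "%s\n" "my \$ua = LWP::UserAgent->new(protocols_allowed => ['https'], timeout => 30);" > "$f"
# Allow plain http in addition to https:
sed -i "s/protocols_allowed => \['https'\]/protocols_allowed => ['https', 'http']/" "$f"
cat "$f"   # the line now lists both 'https' and 'http'
rm -f "$f"
```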
My problem occurs only on guests running MariaDB.
When I start the guest with cache=writeback and mount its ext4 with nobarrier: no iowait.
When I start the guest with cache=none and mount its ext4 with nobarrier: high iowait.
Can we deduce from this that Ceph is the problem?