Recent content by Florent

  1.

    Some Cluster Nodes marked red in Web-Interface

    By the way, it seems there's a loop when Proxmox tries to remount an NFS share: it creates a lot of processes, and while the mount is frozen, PVE keeps retrying the mount...
  2.

    Some Cluster Nodes marked red in Web-Interface

    Hi, today I had the same problem on a node (stale NFS mount). Is there a way to avoid this situation? Why does PVE freeze on a stale NFS mount that is not even used? (Confirmed with lsof: no process was accessing the NFS mount.) Thank you
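A rough way to check whether a mount has gone stale without hanging the shell, and to detach it lazily, might look like this sketch (the mount point path is a hypothetical example):

```shell
# Probe the mount point with a timeout so a stale NFS server cannot hang us.
# /mnt/pve/nfs-backup is a hypothetical mount point; adjust to your storage.
if ! timeout 3 stat -t /mnt/pve/nfs-backup >/dev/null 2>&1; then
    echo "mount appears stale or unreachable"
    # Lazy unmount: detach immediately, clean up once nothing uses it.
    umount -l /mnt/pve/nfs-backup
fi
```

The lazy unmount at least stops new accesses from piling up while the NFS server is unreachable.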
  3.

    Proxmox VE 4.0 released!

    Still not working for me... test1:~# corosync notice [MAIN ] Corosync Cluster Engine ('2.3.5'): started and ready to provide service. info [MAIN ] Corosync built-in features: augeas systemd pie relro bindnow test1:~# /etc/init.d/pve-cluster start Starting pve cluster filesystem ...
  4.

    Proxmox VE 4.0 released!

    OK, that's just for the corosync key, understood. I will try this.
  5.

    Proxmox VE 4.0 released!

    OK, I'm just saying that it's unusable in a production environment. Hi spirit, thank you for your how-to, but I don't think it can work. When you run "pvecm add ipofnode1 -force" on a non-rebooted node, it will fail because it calls 'systemctl stop pve-cluster' and systemctl does not work yet (system...
  6.

    Proxmox VE 4.0 released!

    Yes, I do one cluster at a time, but read the procedure: it is impossible to mix 3.4 and 4.0 nodes in the same cluster. So the procedure is to upgrade a first node and create a new cluster on that node, which means that during the upgrade you have two clusters instead of one. When you have thousands of nodes, you can't do it by...
  7.

    Proxmox VE 4.0 released!

    If I use the procedure provided by spirit, it seems there's no downtime, isn't there? The problem is that the upgrade needs to be done "by hand"; it's impossible to automate with Ansible, for example. And during the upgrade, we have two clusters, not one....
  8.

    Proxmox VE 4.0 released!

    If I understand correctly, there is no cluster upgrade procedure from 3.4 to 4.0? We need to re-create the cluster from scratch, so we lose all cluster configuration such as users, permissions, etc.? I can't understand your strategy with this release. Think of people running clusters with dozens of nodes ...
  9.

    [SOLVED] Proxmox 4 : VLAN package ?

    Found the solution myself: there's no need to use vconfig, just use the ip command: ip link add link eth0 name eth0.100 type vlan id 100 See thread: http://forum.proxmox.com/threads/24162-Proxmox-4-0-VLAN
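Spelled out, the iproute2 replacement for vconfig might look like the following sketch (interface name, VLAN ID, and the address are example values):

```shell
# Create VLAN 100 on top of eth0 with iproute2 (no vconfig needed).
ip link add link eth0 name eth0.100 type vlan id 100
ip link set dev eth0.100 up
# Example address from the documentation range; replace with your own.
ip addr add 192.0.2.10/24 dev eth0.100
# To remove the VLAN interface again:
# ip link del eth0.100
```

Unlike vconfig, this needs no extra package on PVE 4, since iproute2 is already installed.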
  10.

    [SOLVED] Proxmox 4 : VLAN package ?

    Hi everyone, I just upgraded a node to PVE 4, and I have a problem with the vlan package, which is listed as a conflict in pve-manager. pve-manager is supposed to provide vlan, but I don't have the vconfig command. I really need vconfig; what can I do? Thank you.
  11.

    Lots of "[TOTEM ] Retransmit List" since last update

    Hi everyone, today I ran "aptitude update && aptitude safe-upgrade" on my nodes. The previous run was about two weeks ago. Since then, all my nodes have been producing tons of logs like: Nothing changed in the network configuration. I did "service cman stop; sleep 2; service cman start; service...
  12.

    RBD: strange second disk on few VMs

    Hi everyone, I use RBD as the backend storage for my VMs. All VMs have a single disk, but on a few of them I can see a second disk (disk-2) in my RBD pool with strange sizes: 4.49GB when the primary disk is 32GB, 8.49GB when the primary disk is 64GB, etc. And I have some disk-2 related to VM IDs that...
  13.

    CEPH storage installation problem

    It seems the Ceph repositories changed recently. As a workaround, edit /usr/bin/pveceph and replace: my $ua = LWP::UserAgent->new(protocols_allowed => ['https'], timeout => 30); with: my $ua = LWP::UserAgent->new(protocols_allowed => ['https', 'http'], timeout => 30);
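If editing the file by hand is inconvenient, the same substitution could be scripted, for example with sed (a sketch; back up the file first):

```shell
# Keep a backup, then allow plain http in addition to https in pveceph.
cp /usr/bin/pveceph /usr/bin/pveceph.bak
sed -i "s/\['https'\]/['https', 'http']/" /usr/bin/pveceph
```

Note that the change will be overwritten by the next pve-manager package update.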
  14.

    RBD : which cache method to decrease iowait ?

    My problem occurs only on guests running MariaDB. When I start the guest with cache=writeback and mount its ext4 filesystem with nobarrier: no iowait. When I start the guest with cache=none and mount its ext4 filesystem with nobarrier: high iowait. Can we deduce from this that Ceph is the problem?
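For anyone comparing the two modes, the disk cache mode can be switched per VM with qm; the VM ID, bus, storage and volume names below are hypothetical examples:

```shell
# Switch the VM's first virtio disk to writeback cache, then restart the guest.
# VM ID 100, storage 'rbd-storage' and the volume name are example values.
qm set 100 -virtio0 rbd-storage:vm-100-disk-1,cache=writeback
# Use cache=none instead to reproduce the high-iowait case for comparison.
```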
