Recent content by Ruben Waitz

  1. drbdmanage license change

    I know. I run a PVE4 cluster with DRBD, which was part of the PVE setup back then. Now I have to keep the nodes running after upgrading to PVE5.
  2. drbdmanage license change

    Hi Philipp, I think the LINBIT Proxmox 5 plugin build of DRBD contains an old DRBD version: it's v8.4.7 instead of v9.0. Now a Proxmox 5 node breaks configuration and functionality in an existing Proxmox 4 cluster... How can we solve this? Thanks in advance. Best Regards, Ruben Waitz...
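    A minimal sketch of how one might verify which DRBD version each node is actually running (the node name pve01 is hypothetical; the commands are standard DRBD/Debian tooling):
    # kernel module version currently loaded (shows 8.4.x vs 9.0.x)
    root@pve01:~# cat /proc/drbd
    # userland tools and installed packages
    root@pve01:~# drbdadm --version
    root@pve01:~# dpkg -l | grep -i drbd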
  3. pty issue with ve 4 containers (converted from 3)

    A better solution: Enter the container 101 (example) from the host command prompt:
    root@pve01:~# pct enter 101
    Inside the container enter:
    [root@ct101 ~]# rpm -e --nodeps udev
    [root@ct101 ~]# reboot
    This will remove the udev package. Problem is solved but not sure what the side effects are...
  4. pty issue with ve 4 containers (converted from 3)

    Please try this "semi-optimal" work-around. Add the following line to /etc/pve/lxc/XXXX.conf on your host node:
    lxc.hook.stop: sh -c "pct mount ${LXC_NAME}; perl -pi -w -e 's/^(\/sbin\/start_udev)/# \$1/g' /var/lib/lxc/${LXC_NAME}/rootfs/etc/rc.d/rc.sysinit; pct unmount ${LXC_NAME}"
    This hook...
  5. [SOLVED] Can't backup CT: "Cannot stat: Structure needs cleaning"

    Hi Wolfgang, Yes, the same error. Here are some examples.
    6070: Jan 31 05:19:10 INFO: creating archive '/mnt/pve/ib-onsite-backup2/dump/vzdump-lxc-6070-2017_01_31-05_19_07.tar.gz'
    6070: Jan 31 05:27:58 INFO: tar: ./var/lib/php/session/sess_nk8q3hh36qn9vgo09imou6sph5: Cannot stat: Structure...
  6. [SOLVED] Can't backup CT: "Cannot stat: Structure needs cleaning"

    Hi @morfair, I've got the same issues you're describing on Ceph storage. Repairing with fsck, as @dietmar mentions, doesn't help because the filesystem isn't broken. It looks like it only occurs on files which are in use by the system during the backup process, like PHP session files, logfiles...
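    If the failures really are limited to volatile files such as PHP session data, one way to sidestep them could be vzdump's exclude option (a sketch only, not the solution from the thread; the VMID and path are taken from the example above):
    root@pve01:~# vzdump 6070 --exclude-path /var/lib/php/session/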
  7. [SOLVED] ceph upgrade to jewel: HA migrate not working anymore

    I removed the OSD of the failing node and did a fresh Jewel install. After that I upgraded the remaining Hammer nodes to Jewel. Everything seems to work again. The missing Ceph log output in the pve-UI is still there. I think /var/log/ceph/ceph.log is written by the Ceph MON, which I didn't...
  8. [SOLVED] ceph upgrade to jewel: HA migrate not working anymore

    @wolfgang Sorry, you're right. I made a mistake. The old nodes are 0.94.9-1~bpo80+1 (hammer). @udo Yes, I intend to do so, but I'm afraid I'll mess things up when upgrading. Originally there were 4 Hammer nodes. After upgrading 2 to Jewel, one of the Jewel nodes is showing these problems. With...
  9. [SOLVED] ceph upgrade to jewel: HA migrate not working anymore

    Yes, each node has 1 OSD. dpkg-query -l | grep librbd displays 10.2.5-1~bpo80+1 (both librbd1 and python-rbd). This is the same on the other working Jewel node. The 2 old 'hammer' nodes have version 0.80.8-1~bpo70+1.
  10. [SOLVED] ceph upgrade to jewel: HA migrate not working anymore

    Hi, this also happens with KVM guests. In the rbd: section of /etc/pve/storage.cfg I've set krbd to 1. The only difference is that I don't run a Ceph MON on this node, so that the number of monitors stays odd (3 monitors on 4 nodes), as advised in a Ceph book I read. Could that be the problem in this case? Otherwise...
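    For reference, a minimal sketch of what such an rbd: entry in /etc/pve/storage.cfg might look like (storage name, pool and monitor addresses are hypothetical; only the krbd 1 setting comes from the post):
    rbd: ceph-vm
            monhost 10.10.10.1 10.10.10.2 10.10.10.3
            pool rbd
            content images
            username admin
            krbd 1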
  11. [SOLVED] ceph upgrade to jewel: HA migrate not working anymore

    Hi Wolfgang, Thank you for replying. I've executed 'unset noout' again, to make sure. It looks like Ceph and the cluster are working fine except for the HA issue, which occurred after upgrading to Jewel. Maybe the problem is elsewhere in HA. The only warning I get is "HEALTH_WARN: crush map has...
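    For completeness, the flag in question is normally cleared and verified with the standard Ceph commands below (a sketch; the node prompt is hypothetical):
    root@pve01:~# ceph osd unset noout
    root@pve01:~# ceph -s              # status should no longer list "noout flag(s) set"
    root@pve01:~# ceph health detail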
  12. [SOLVED] ceph upgrade to jewel: HA migrate not working anymore

    Hi, I've upgraded Ceph from Hammer to Jewel on 2 of 4 nodes in our Proxmox 4.4 cluster. On one "pve jewel node", HA migration (to/from) and the Ceph log (in the pve-UI) are broken. HA migration says "HA 200 - Migrate OK" but doesn't do anything further. From the pve-UI the Ceph log for this node says...
  13. experience upgrade PVE4.1 to 4.2 on a DRBD9 cluster setup with LXC containers

    Hi, I have the following configuration:
    - 3 node cluster on PVE4.1
    - High Availability with DRBD9
    - LXC containers (on DRBD9)
    - All LXCs reside on a single node
    I want to upgrade the cluster nodes to PVE4.2, but I'm afraid differences between PVE 4.1 and 4.2 could break the system (e.g. maybe...
  14. [SOLVED] pve4.1 lxc backup: disabling stdexcludes possible?

    I found a bug report about this here: https://bugzilla.proxmox.com/show_bug.cgi?id=926 (Maybe it's fixed in PVE4.2)
  15. [SOLVED] pve4.1 lxc backup: disabling stdexcludes possible?

    Just tested it in PVE4.1-39. Unfortunately this configuration option seems to be ignored... :-(
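    For context, the option being tested here is vzdump's stdexcludes flag, which can be set globally or per invocation (a sketch; the VMID is hypothetical):
    # /etc/vzdump.conf – global default
    stdexcludes: 0
    # or per run on the command line
    root@pve01:~# vzdump 6070 --stdexcludes 0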
