Search results

  1. CT backup failing?

    Mira, as I stated originally, "All of my other current VMs finish their backups to this same NFS share without errors." Why would a single CT have permission issues with an NFS share that works for all other backups?
  2. CT backup failing?

    Hello all, I have a newly deployed CT that fails backups to my TrueNAS NFS share:
    INFO: starting new backup job: vzdump 303 --node rpve02 --notes-template '{{guestname}}' --compress zstd --storage truenas --remove 0 --mode snapshot
    INFO: Starting Backup of VM 303 (lxc)
    INFO: Backup...
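    A quick way to narrow an error like this down (a sketch only, assuming the storage is named truenas and mounted at the usual /mnt/pve/truenas path, and that the container ID is 303 as in the log above) is to confirm the node can write to the share and then re-run the backup in the foreground:

      # Confirm the NFS storage is active from PVE's point of view
      pvesm status

      # Check that root on the node can write where vzdump will write
      touch /mnt/pve/truenas/dump/.write-test && rm /mnt/pve/truenas/dump/.write-test

      # Re-run the backup in the foreground to capture the full error output
      vzdump 303 --storage truenas --mode snapshot --compress zstd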
  3. [SOLVED] Proxmox, OpnSense, Open vSwitch slow uploads

    I've been out of town. This did fix the problem. Thanks!
  4. [SOLVED] Proxmox, OpnSense, Open vSwitch slow uploads

    In the last few days I've just started recognizing I have a problem, but I'm sure it's been happening over the last 2-6 months. I have a small Proxmox server that runs OpnSense with a dual-port Intel NIC as my firewall for my home internet. On systems external to this Proxmox server, I only get...
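    Slow traffic through a virtualised firewall often comes down to checksum/segmentation offload on the guest's interfaces, so one low-risk test is sketched below (tap100i0 is only an example name for the OpnSense VM's host-side interface; substitute your own):

      # Show current offload settings for the VM's tap interface on the PVE host
      ethtool -k tap100i0

      # Temporarily disable checksum and segmentation offload to see if uploads recover
      ethtool -K tap100i0 tx off tso off gso off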
  5. 2021 USA training?

    Yeah...I had that week blocked off to attend the PVE training. An annual, mandatory training for work happened to land on the same week. So I won't be registering for the virtual PVE training. :(
  6. 2021 USA training?

    OK...I didn't notice there were virtual classes. Please add a 2nd English class some time later in the year. I'm already booked during the January dates. Also...how will time zones work for these virtual classes? Will this be a recording I can watch during my daylight hours? I don't want to...
  7. Ceph Nautilus to Octopus upgrade gotchas?

    I'd like to upgrade 3 different 6.2 clusters running Ceph Nautilus. Has anyone found any issues with the Nautilus to Octopus upgrade that the wiki doesn't cover?
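    Beyond what the wiki covers, a couple of pre-flight checks are cheap to run before touching any of the three clusters (a sketch of the usual pattern, not the full upgrade procedure):

      # Confirm every daemon is already on Nautilus and the cluster is healthy
      ceph versions
      ceph -s

      # Keep data from rebalancing while daemons restart during the upgrade
      ceph osd set noout

      # ...and clear the flag once all OSDs are back up on Octopus
      ceph osd unset noout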
  8. 2021 USA training?

    Hey all, I would really like to see a USA training date this year. Just a single 4-day training in the middle of the country, which would be easy to get to from any of the States, would be great! e.g. Chicago, St. Louis, Dallas. I'd guess you'd have attendees from many areas around North America...
  9. Debian-snmp error after 6.2-11 update

    After updating from 6.2-9 to 6.2-11 I get the following snmp error from all the nodes in my cluster:
    pve01.xxx.xxx : Aug 25 11:45:13 : Debian-snmp : user NOT in sudoers ; TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/local/bin/proxmox
    Looks like I'll be getting these notifications every 5...
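    The log shows the Debian-snmp user trying to run /usr/local/bin/proxmox through sudo without a matching sudoers rule, so one way to stop the notifications (a sketch; whether you actually want to grant this depends on what that local script does) is a dedicated sudoers drop-in:

      # /etc/sudoers.d/debian-snmp -- edit with: visudo -f /etc/sudoers.d/debian-snmp
      # Let the snmpd user run the monitoring script as root without a password prompt
      Debian-snmp ALL=(root) NOPASSWD: /usr/local/bin/proxmox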
  10. [SOLVED] Corosync and ceph issues!

    @Stoiko Ivanov I looked there and "Solved" was not listed. EDIT: I think I found it. You have to go to Edit thread and then pick it by the thread title...correct?
  11. [SOLVED] Corosync and ceph issues!

    Sorry to admit, I can't figure out how to mark it solved.
  12. [SOLVED] Corosync and ceph issues!

    Thanks again for the help!
  13. [SOLVED] Corosync and ceph issues!

    Looks like I found what's clogging up the disk: /var/lib/samba/private/msg.sock/. I found this old bug: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=912717
    My 3 Ceph nodes were all at 100%, or almost 100%, with df -i. These 3 nodes all mount two SMB shares. My 4th node has Samba running on it...
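    To confirm msg.sock is really what is consuming the inodes and to clear it out, something along these lines should work (a sketch; stopping the Samba services first is the cautious route, and the smbd/nmbd unit names assume a stock Debian install):

      # Each entry under msg.sock costs an inode -- count them
      find /var/lib/samba/private/msg.sock | wc -l

      # Stop Samba, delete the stale socket files, start it again
      systemctl stop smbd nmbd
      find /var/lib/samba/private/msg.sock -mindepth 1 -delete
      systemctl start smbd nmbd

      # Inode usage on / should drop well below 100% afterwards
      df -i /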
  14. [SOLVED] Corosync and ceph issues!

    Starting it manually with "/usr/sbin/corosync -f $COROSYNC_OPTIONS" seems to have fixed the corosync issue. It also came back up after the node was restarted. Now I need to focus on ceph as it is still problematic. Any ideas?
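    For the Ceph side, if the monitor and manager daemons simply died while / had no free inodes, restarting them once space is back is usually the next step (a sketch; pve01 is taken from the output in this thread, use each node's own name):

      # Are the monitor and manager daemons on this node running?
      systemctl status ceph-mon@pve01 ceph-mgr@pve01

      # Restart them if they stopped while the root filesystem was full
      systemctl restart ceph-mon@pve01 ceph-mgr@pve01

      # Once a quorum of monitors is back, this should answer again
      ceph -s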
  15. [SOLVED] Corosync and ceph issues!

    Also:
    root@pve01:/# killall -9 corosync
    root@pve01:/# /usr/sbin/corosync -f $COROSYNC_OPTIONS
    Aug 24 11:52:59 notice  [MAIN  ] Corosync Cluster Engine 3.0.4 starting up
    Aug 24 11:52:59 info    [MAIN  ] Corosync built-in features: dbus monitoring watchdog systemd xmlconf snmp pie relro bindnow...
  16. [SOLVED] Corosync and ceph issues!

    See output below:
    df -i
    Filesystem             Inodes   IUsed   IFree IUse% Mounted on
    udev                  4625469     566 4624903    1% /dev
    tmpfs                 4631662    2312 4629350    1% /run
    /dev/mapper/pve-root  1933312 1933312       0  100% /
    tmpfs...
  17. [SOLVED] Corosync and ceph issues!

    I did look at that...I thought it looked ok. Here's the output:
    df -h
    Filesystem            Size  Used Avail Use% Mounted on
    udev                   18G     0   18G   0% /dev
    tmpfs                 3.6G   15M  3.6G   1% /run
    /dev/mapper/pve-root   29G  4.6G...
  18. [SOLVED] Corosync and ceph issues!

    Hello all, I walked in this morning to find problems. My config consists of a 4-node PVE cluster with 3 nodes of Ceph storage. Ceph is inaccessible from the web GUI, pve01 has corosync errors, and the ceph status command will not run on the CLI across the 3 Ceph nodes. I'm not sure if one issue caused...
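    A reasonable first pass at triage for a state like this (one node with corosync errors, ceph status failing on the three Ceph nodes) is sketched below; it only reads state and logs, so it is safe to run on every node:

      # Cluster membership and quorum as corosync/PVE see it
      pvecm status
      corosync-cfgtool -s

      # Recent corosync messages on the node reporting errors
      journalctl -u corosync -n 50

      # A full root filesystem (space or inodes) breaks both corosync and ceph
      df -h /
      df -i /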
  19. VirtIO NIC checksum fail & poor speed

    Yes, I'm aware...but that's not the direction I wanted to go. Thanks!
