Search results

  1. J

    Backup - Error: not a valid user id

    We have the exact same issue on one vm, and just adding a bit of info that might be a clue to help catch/address these in the future, we recently ran out of space on that datastore so backups were failing - it's possible (though I'm not positive) that vm was created and the initial backup job...
  2. J

    CEPH shared SSD for DB/WAL?

    It is my understanding (I may be wrong) that you need a separate partition on the SSD for each OSD. Did you find it worked to share a single partition? That is indeed what my (limited, but sufficient?) research has indicated: you would need to remove the OSDs and add them to the cluster again...
  3. J

    vm booted from 2-month-old vm state file (filesystem corruption, etc.)

    FWIW, we have not found a way to reproduce this. Hopefully it won't turn up again, but we'll post back here in the future if we ever catch it. Thanks
  4. J

    vm booted from 2-month-old vm state file (filesystem corruption, etc.)

    After attempting to start via the gui failed, I started via 'qm' from cli. I believe after the failed start (via gui or cli without disabling timeout) it was in a state that I had to 'qm stop ###' the vm first, then 'qm start ### --timeout 0'. You bet. We have not been able to reproduce it...
  5. J

    vm booted from 2-month-old vm state file (filesystem corruption, etc.)

    Hello, Yesterday we had a system issue causing a lot of things to restart; most things on proxmox came up fine except one vm, which started from a vm state file from Mar 03 (2 months old). That led to filesystem corruption, data loss, etc. I'm trying to determine why that happened, and if...
  6. J

    PVE6 pveceph create osd: unable to get device info

    In browsing some of the source, it appears that two cases are handled when examining a db_disk device: a disk with a GPT partition label will have a new partition created for the block.db, and a device with a volume group named 'ceph' will have a new lvm created within that group. I don't know...
  7. J

    PVE6 pveceph create osd: unable to get device info

    Related to this, would you please clarify for me: we have an SSD for the root drive, and I intend to use free, unpartitioned space on that SSD for the db - is it safe to specify -db_disk /dev/sda, and my current root filesystem, swap, etc. partitions will not be overwritten? We have gpt...
  8. J

    Locking down Proxmox Interface

    That is the purpose of the 'management' ipset, under Datacenter > Firewall > IPSet. Untested (as I don't have a new cluster without a firewall to test on), but it should be: go to Datacenter > Firewall > IPSet, click the 'Create' button and add a set called 'management', and add your ip addr(s) to...
  9. J

    PVE update broke ipfilter for lxc secondary interfaces

    That note about manually editing the ipset was key; I just tested that and it is sufficient to get this working now. I.e. in my example, create an ipset 'ipfilter-net0' in the container which contains all 3 ip addresses (x.x.x.2, x.x.x.4, x.x.x.44), then enable ip filtering, and everything is...
  10. J

    PVE update broke ipfilter for lxc secondary interfaces

    Seems like a "yes" to me. I don't remember specifically why that was setup the way it was, I think it's simply because it was easy and it worked (and didn't involve .pve-ignore files, etc.).
  11. J

    PVE update broke ipfilter for lxc secondary interfaces

    Doing a little testing, the issue does seem to be related to what interface (and hence mac address?) is used for each ip address. In my case, .2 is the eth0 address, .4 is eth1 and .44 is eth2, and I forced the use of the corresponding interface via source routing (performed inside the...
  12. J

    PVE update broke ipfilter for lxc secondary interfaces

    ok, this is 'pve-firewall compile' output. I notice in it some rules for arp (which I did not see in the iptables output) which might explain the behavior, to wit: after I enable ip filter, sometimes the change/problem is almost immediately effective, and sometimes it takes a few seconds (e.g. up...
  13. J

    PVE update broke ipfilter for lxc secondary interfaces

    Yes, they are 3 ip addresses on the same subnet. I assume you mean for all three interfaces (three interfaces, one container). Sure, /etc/pve/lxc/138.conf: arch: amd64 cores: 4 hostname: ns1.kci.net memory: 2048 net0...
  14. J

    PVE update broke ipfilter for lxc secondary interfaces

    This is the iptables output with IP filter disabled (so all interfaces working) and enabled (only eth0 x.x.x.2 ip address works), on a node with only the single container 138 running. The only difference I see between the two is the inclusion of the rules in the veth138i#-OUT chains to drop...
  15. J

    PVE update broke ipfilter for lxc secondary interfaces

    I do not find this in the bug tracker or forum so just checking if this is a known issue before filing a bug. 2 weeks ago I updated a small cluster, and found a few containers which previously worked fine were having partial networking issues. The problem ends up affecting only containers...
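The ipset steps mentioned in results 8 and 9 can also be written directly as firewall config entries rather than through the GUI. A minimal sketch, assuming Proxmox VE's /etc/pve/firewall layout; the addresses are the placeholder x.x.x values and the VMID 138 from the posts above, so substitute your own:

```
# /etc/pve/firewall/cluster.fw - cluster-wide 'management' ipset
# (hosts allowed to reach the management interface)
[IPSET management]
x.x.x.10
x.x.x.11

# /etc/pve/firewall/138.fw - per-NIC ip filter set for container 138
# (an ipset named 'ipfilter-net0' restricts which source IPs net0 may use)
[IPSET ipfilter-net0]
x.x.x.2
x.x.x.4
x.x.x.44
```

After editing, 'pve-firewall compile' (as used in result 12) shows the ruleset that would be generated, which is a convenient way to check the ipsets were picked up before enabling filtering.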
