Search results

  1. LXC Guest Agent equivalent?

    For anyone else reading this, Mr. Lamprecht was spot on: upstart was ignoring SIGPWR. The fix was as simple as adding /etc/init/shutdown.conf and setting up a task to run shutdown -h on the power-status-changed event. That particular container started out life as a VM and I migrated it to a container...
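
    A minimal sketch of that upstart job (upstart turns the SIGPWR it receives into the power-status-changed event; runlevel handling is left out for brevity):

    ```
    # /etc/init/shutdown.conf
    # Halt the container when PVE sends SIGPWR (seen by upstart as power-status-changed)
    start on power-status-changed
    task
    exec shutdown -h now
    ```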
  2. Ceph Single Node vs ZFS

    Ceph will absolutely run on a single node; it's just not normally a practical option, since you're limiting yourself to a single host for redundancy, and there are generally better-suited options for single-node applications. You would need to make sure your CRUSH map splits PGs over OSDs...
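
    For reference, a sketch of what that looks like on Luminous or later (the pool name vm-pool is a placeholder):

    ```
    # Create a replicated rule whose failure domain is the OSD, not the host
    ceph osd crush rule create-replicated replicated-osd default osd
    # Point an existing pool at the new rule
    ceph osd pool set vm-pool crush_rule replicated-osd
    ```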
  3. Remote backup to B2

    How exactly should PBS be backed up to systems other than another PBS host? In particular, I want to set up an offsite backup to Backblaze B2. Is it fine to just take a normal file-level backup of the datastore using some third-party backup software? I'm assuming the backups are just...
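
    One way such a file-level copy could be taken, sketched with rclone (remote name and paths are placeholders; whether a plain copy of a live datastore is consistent is exactly the question being asked):

    ```
    # Mirror the PBS datastore directory to a B2 bucket
    rclone sync /mnt/datastore/backups b2:my-pbs-offsite/datastore
    ```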
  4. LXC Guest Agent equivalent?

    I've got a container that I migrated from a VM. It's running CentOS 6, but when trying to shut down the container from Proxmox it just hangs. I can manually shut it down from within the container just fine. How should this be set up in containers that aren't based on the normal template...
  5. Multiple containers with the same IP

    I'm working on setting up an anycast service with the containers hosted on Proxmox. The current plan is to give each container a separate IP address in the same subnet as the router by adding a regular network device in Proxmox. Within each container, they'll be configured to add the anycasted...
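
    Inside each container the shared anycast address would typically sit on the loopback interface, something like this sketch (192.0.2.10 is a placeholder):

    ```
    # Add the anycast address on lo so it isn't ARPed on the LAN segment
    ip addr add 192.0.2.10/32 dev lo
    ```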
  6. All proxmox process in the 'D' state and very big LA

    Well, read the file /proc/[PID]/stack the next time you get a process stuck in the D state; it'll tell you exactly which syscall it's stuck in and hopefully provide a hint as to where to go from there. You might also want to try the Magic SysRq key "d" to show all held locks. NFS is known for...
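
    Concretely, those two steps look like this (both need root, and SysRq "d" only reports locks on kernels built with lockdep):

    ```
    # Kernel stack of the stuck process (12345 is a placeholder PID)
    cat /proc/12345/stack
    # Dump all held locks to the kernel log, then read it
    echo d > /proc/sysrq-trigger
    dmesg | tail
    ```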
  7. All proxmox process in the 'D' state and very big LA

    This sounds very similar to the issue I was having, but I only saw the issue with Ceph, not NFS. How are your VMs stored? https://forum.proxmox.com/threads/pve6-sporadic-deadlock.56546/
  8. PVE6 sporadic deadlock

    Oh, and the backup destination was CephFS.
  9. PVE6 sporadic deadlock

    After the update to PVE6, every once in a while some IO seems to cause a deadlock while running backups, and then anything calling sync (like every container that shuts down or reboots afterwards) also gets stuck in unkillable sleep waiting on the original deadlock. Both times this has come...
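
    A quick way to see what is stuck and which kernel symbol it is sleeping in when this happens (plain ps, nothing PVE-specific):

    ```
    # List uninterruptible (D-state) processes with their wait channel
    ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'
    ```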
  10. health: HEALTH_ERR (problem with ceph)

    You need 3 mons, and 2 or maybe 3 mds. What you have right now is just wasting resources and probably putting yourself at more risk of downtime incidents like this one. One thing: you only have 8 servers, yet 238 OSDs? Do you really have around 30 disks per server? You should have at most 1 OSD per...
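
    The counts being discussed can be checked with the standard status commands:

    ```
    ceph mon stat         # monitor count and quorum
    ceph fs status        # active and standby MDS daemons
    ceph osd tree         # OSD-per-host layout
    ```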
  11. health: HEALTH_ERR (problem with ceph)

    As for the rest of the issues, don't even start trying to fix them before the cluster is back up and all of the PGs are active. You really should fix them, but they aren't the cause of your current issues and fixing them right now could just make things worse if you tried it on top of what's...
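
    Before touching anything else, confirming that recovery is actually done looks like:

    ```
    ceph pg stat          # all PGs should report active+clean
    ceph health detail    # whatever warnings remain once the PGs are good
    ```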
  12. CEPH Write Performance

    Your problem is that you're using a consumer SSD with BlueStore taking up all available disk space. Luckily, your timing couldn't be better: upgrade to PVE6 and this will fix your issue, although you might need to manually tell Ceph to trim the drive once after you upgrade. Ceph Luminous does not...
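
    Assuming "tell Ceph to trim" refers to BlueStore's (off-by-default) discard support, the knob would be set in ceph.conf along these lines:

    ```
    # /etc/ceph/ceph.conf — assumption: enable discard so BlueStore releases SSD blocks
    [osd]
    bdev_enable_discard = true
    ```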
  13. health: HEALTH_ERR (problem with ceph)

    Ouch, what the hell did you do to your poor Ceph cluster? Who set all of the noout, noup, nodown, etc. flags, and why? How long has it been like that? I agree with the other poster that 7 mons really is overkill. I see that you have 238 OSDs, which is large enough that maybe there might be some...
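
    For the record, those flags clear the same way they were set, once the reason for them is known:

    ```
    # Clear the recovery-blocking flags one at a time, watching ceph -s in between
    ceph osd unset noout
    ceph osd unset noup
    ceph osd unset nodown
    ```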
  14. CEPH in WARN state for 54 min

    That's fairly typical with Ceph; it only takes 50ms of skew between any two monitors to throw a warning. Ideally, all three monitor nodes should sync time via NTP to an NTP server that's local, and that NTP server should sync to e.g. pool.ntp.org rather than having...
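
    A sketch of that layout with chrony (ntp.local is a placeholder): the mon nodes all point at one local server, and only that server talks to the public pool:

    ```
    # /etc/chrony/chrony.conf on each monitor node
    server ntp.local iburst

    # /etc/chrony/chrony.conf on the local NTP server
    pool pool.ntp.org iburst
    ```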
  15. Estimate for Debian Buster and PVE 6

    What I meant was more of a follow-up to Fabian's post on the Ceph mailing list back when this whole fiasco started: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-June/027366.html Now that Debian Buster is right around the corner, did Proxmox follow the original plan of building and...
  16. Web Gui communication failure (0) between nodes for status panel

    Do you have a separate switch, or are you referring to the switch built into the router? This still sounds like the expected behavior when the wrong MTU is set on an interface toward a switch that doesn't support jumbo frames. If you have a separate switch, why do you think it supports jumbo frames?
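
    An easy way to test whether the path really passes jumbo frames (8972 = 9000-byte MTU minus the 28 bytes of IP and ICMP headers; the address is a placeholder):

    ```
    # Full-size, don't-fragment ping; it fails if any hop drops jumbo frames
    ping -M do -s 8972 192.168.1.2
    ```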
  17. Web Gui communication failure (0) between nodes for status panel

    Don't enable jumbo frames; your router doesn't support them. https://en.avm.de/service/fritzbox/fritzbox-7590/knowledge-base/publication/show/341_Oversized-packet-support-jumbo-frames/
  18. Estimate for Debian Buster and PVE 6

    Now that Debian has finally announced a release date for Buster, how much work remains before we get PVE 6 and, more importantly, Ceph Nautilus? Is the first release going to include any substantial changes to Proxmox itself, or mostly just upgrade everything to Debian Buster? Given how long...
  19. CephFS trough Proxmox GUI - No VM storage possible?

    You don't want to add RBD storage on the cephfs_data pool. Ignore that suggestion. Your VMs should be in a separate pool for manageability reasons. It's easy to add a pool in the GUI if you don't already have one labeled vm. The reason to use a separate pool is so that later on if...
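
    For reference, the PVE 6 CLI equivalent of adding that pool in the GUI (the vm label comes from the post):

    ```
    # Create a dedicated Ceph pool for VM disks on a PVE node
    pveceph pool create vm
    ```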
  20. CephFS trough Proxmox GUI - No VM storage possible?

    Ceph is more than just a filesystem; it provides three separate types of storage: object storage like Amazon S3, block storage through RBD or iSCSI, and file storage through CephFS. A VM shouldn't be stored in CephFS; you want block-level storage for it. I set up Ceph on my...
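
    What the resulting block-storage entry might look like in /etc/pve/storage.cfg (storage ID and pool name are placeholders):

    ```
    rbd: vm-storage
        pool vm
        content images,rootdir
        krbd 0
    ```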
