Recent content by brickmasterj

  1. pveproxy become blocked state and cannot be killed

    Nope, we have since replaced all production machines this error occurred on as there seems to be no possible fix for this fairly common problem (they were older AMD machines anyway). My testing machine however still runs into this mystery of a problem every few weeks or so if left on long...
  2. pveproxy become blocked state and cannot be killed

    After having experienced this issue any number of times, I have started to collect some metrics on the sort of machines it happens on. One thing that immediately stood out to me is that, of the roughly ten machines I have actively and extensively run PVE on, this issue has in my case exclusively...
  3. pveproxy become blocked state and cannot be killed

    Exact same as original post: root@pve:~# service pveproxy status ● pveproxy.service - PVE API Proxy Server Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled) Active: failed (Result: timeout) since Tue 2016-09-27 14:58:46 CEST; 21h ago Main PID: 830 (code=exited...
  4. pveproxy become blocked state and cannot be killed

    Pardon, I meant blocked. I get the same output as in original post. In any case: root@pve:~# ps faxl | grep pveproxy 0 0 3372 3242 20 0 12732 1792 pipe_w S+ pts/3 0:00 \_ grep pveproxy
  5. pveproxy become blocked state and cannot be killed

    Cluster is up and running perfectly fine, full quorum, and /etc/pve is fully accessible... Yet the pveproxy service still blocks
  6. pveproxy become blocked state and cannot be killed

    Stuck with the exact same problem on 3 servers; however, whenever it happens, `pvesm status` seems to execute fine. This is still quite a problem. I also don't see any unusual I/O wait times in the logs or Nagios. Any ideas on how to proceed?
  7. VM can't start on DRBD9: Could not open '/dev/drbd/by-res/vm-x-disk-1/0': Permission denied

    Unfortunately no. We ended up ditching DRBD9 altogether in favour of a combination of snapshots and other live migration tools. To this day I would like to know what was causing this but at this rate, we might just wait for a bit and see if it gets fixed or it becomes a wider known issue. For...
  8. [SOLVED] ZFS incredibly low IOPS and Fsyncs on RAIDZ-1 4 WD Red

    Thanks @mir and @kobuki for your insights, I will look into it further and play around with various settings. For now, I've added a 120GB SSD to the servers and split the capacity in half, meaning effectively 55.9GB each for cache and log. This resulted in an already quite remarkable increase to...
  9. [SOLVED] ZFS incredibly low IOPS and Fsyncs on RAIDZ-1 4 WD Red

    What would you recommend in terms of capacity split for the ZIL (log) vs. L2ARC (cache)? Split the SSD capacity 50/50, or is there something more optimal, and are there any places where the trade-offs are documented?
  10. [SOLVED] ZFS incredibly low IOPS and Fsyncs on RAIDZ-1 4 WD Red

    For some reason, on 2 of my servers I'm getting incredibly slow IOPS and FSYNCs per second running pveperf on Proxmox 4, on a ZFS RAIDZ-1 pool of four 4TB WD Reds. root@pve:/# pveperf CPU BOGOMIPS: 15959.00 REGEX/SECOND: 743734 HD SIZE: 9921.49 GB (rpool/ROOT/pve-1)...
  11. VM can't start on DRBD9: Could not open '/dev/drbd/by-res/vm-x-disk-1/0': Permission denied

    After a system crash I needed to restart a VM, call it ID x: node 1 in the DRBD9 cluster had gone down, and the VM was running on node 2. The issue is that restarting the VM on either node returns a KVM error: Could not open...
  12. [SOLVED] Proxmox 4 delete cluster on server without reinstall

    Worked perfectly, thank you so much. Time to donate a cup of coffee ;)
  13. [SOLVED] Proxmox 4 delete cluster on server without reinstall

    I accidentally created a cluster on one empty server (no VMs running, nothing important stored on it). Now the easiest way to remove this cluster configuration is obviously a reinstall, but that would require me to physically go down to it, which is difficult ATM, so is...
  14. [SOLVED] Debian container unable to start after migration from OpenVZ to LXC on Proxmox 4

    Thank you so much, this enabled me to diagnose the problem. Turns out it had to do with something other than LXC itself: the booting container got stuck on apache2 asking for a password for a certificate... Not exactly sure how this caused LXC to crash while in OpenVZ this was never an issue...
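A note on the pveproxy threads above (items 1–6): a "blocked" process that cannot be killed is usually one stuck in uninterruptible sleep (the "D" state in `ps`), which typically points at hung I/O such as a dead NFS mount or a failing disk. A minimal sketch for spotting such processes with standard tooling, nothing Proxmox-specific:

```shell
# Show processes stuck in uninterruptible sleep ("D" state).
# These ignore SIGKILL until the underlying I/O completes or times out.
ps -eo pid,stat,comm | awk 'NR==1 || $2 ~ /^D/'
```

If pveproxy (or one of its workers) shows up here, the fix lies with whatever storage it is waiting on, not with the service itself.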
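On the ZFS split question in items 8–9: note that the usual terminology is the reverse of what the question says — the ZIL/SLOG is the *log* device and L2ARC is the *cache* device. A hedged sketch of splitting a single SSD between the two; device names and sizes are hypothetical, and on most workloads a few GB is plenty for the SLOG, so an uneven split is common rather than 50/50:

```shell
# Hypothetical layout for a 120GB SSD at /dev/sdb -- adjust names and sizes.
# Destructive: repartitions the SSD. SLOG rarely benefits from more than a
# few GB (it only holds a few seconds of sync writes); L2ARC can take the rest.
sgdisk -n1:0:+8G -t1:bf01 /dev/sdb    # small partition for the log (SLOG)
sgdisk -n2:0:0   -t2:bf01 /dev/sdb    # remainder for the cache (L2ARC)
zpool add rpool log   /dev/sdb1
zpool add rpool cache /dev/sdb2
```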
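For item 13 (removing a cluster config without reinstalling), the commonly cited procedure looks roughly like the sketch below. This is for PVE 4.x, it deletes cluster state, and it is only safe on an empty single node — verify against the current official docs before running anything:

```shell
# Sketch: drop cluster configuration from a standalone, empty node.
systemctl stop pve-cluster
pmxcfs -l                      # restart the cluster filesystem in local mode
rm -f /etc/pve/corosync.conf   # remove the cluster configuration
rm -rf /etc/corosync/*
killall pmxcfs
systemctl start pve-cluster
```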
