tl;dr: it seems to be working.
I'd tried bringing up the other OSDs through the web interface previously, and it appeared to succeed, but they didn't actually come up. When I checked pveceph status again, though, the time skew warning was gone--perhaps it just took Ceph some time to realize the...
root@pve1 ➜ ~ pvestatd status
running
root@pve1 ➜ ~ pvecm status
Cluster information
-------------------
Name:             brown-cluster
Config Version:   4
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Thu Aug 31 07:48:42 2023
Quorum...
root@pve1 ➜ ~ pveceph status
  cluster:
    id:     9e9a1f45-4882-4324-b208-fda9e78e73a4
    health: HEALTH_WARN
            Module 'dashboard' has failed dependency: PyO3 modules may only be initialized once per interpreter process
            clock skew detected on mon.pve3, mon.pve1...
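For the skew warning itself, the usual fix is just making sure time sync has reconverged on each mon node after the outage; a sketch assuming chrony (the Debian/PVE default timekeeper):

```shell
# Check whether the clock has converged on this mon host
chronyc tracking

# Force an immediate clock step if it's still drifting after the outage
chronyc makestep

# Then see whether the warning has cleared
ceph health detail
```

Run on each node reporting skew; the warning tends to clear on its own once the offsets drop below the mon threshold.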
I'm not sure if this is progress or not--restarting pvestatd on the affected nodes brought them online (i.e., green check mark), but only for about a minute. After that, they reverted to the gray question mark shown above. Restarting the whole node brought it online for a slightly longer...
I had a power outage for about an hour this afternoon, which took down three nodes of my four-node cluster (creatively enough, the nodes are named pve1, pve2, and pve3; pve4 was on a different UPS and wasn't affected). After bringing them back up, pve1 and pve2 are in a semi-offline status:
I...
I've tried swapping the network daughter card for an X520-based unit, and that seems to be working better--same optics, cables, network configuration, etc., which seems odd. It'd still be nice to know why the X710 is flaky.
tl;dr: I'm experiencing a very flaky network connection on a Dell PowerEdge R630 with the Intel X710/i350 network daughter card. The node itself is dropping offline and coming back online at (apparent) random, and VMs and containers on the node are also very flaky.
Background: I've had a...
Indeed it did, thanks--editing /usr/share/proxmox-acme/dnsapi/dns_acmedns.sh to add the ACMEDNS_BASE_URL lets it work for the time being, though it's obviously a bit of a hack. Any idea when that next update is expected?
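For anyone else hitting this before the packaged fix lands, a sketch of the stopgap (the demo below edits a scratch copy so it's safe to try; on the real system the file is /usr/share/proxmox-acme/dnsapi/dns_acmedns.sh, and the URL is a placeholder for your own acme-dns server):

```shell
# Stopgap: pin the acme-dns endpoint inside the validation hook script.
HOOK=$(mktemp)
printf '#!/usr/bin/env sh\n# dns_acmedns hook\n' > "$HOOK"

# Insert the base-URL assignment near the top of the script
sed -i '2i ACMEDNS_BASE_URL="https://auth.example.org"' "$HOOK"

# Confirm the line is in place
grep -n ACMEDNS_BASE_URL "$HOOK"

rm -f "$HOOK"
```

As noted, this gets overwritten whenever the proxmox-acme package updates, so it's strictly a bridge until the fixed package ships.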
tl;dr: I've been getting emails from my backup server for about the last 10 days saying it was unable to renew my Let's Encrypt certificate.
My PBS has been set up to get a cert from Let's Encrypt using DNS validation via acme-dns since September 2021. It's successfully renewed every 60 days...
I should have updated this thread earlier, but was reluctant to call it "solved" without a good bit of experience. Since updating TrueNAS to -U6, I haven't seen the NAS be marked "offline" for any of my PVE hosts--I don't watch them constantly, of course, but I haven't seen it. Performance has...
Hmmm. I know it's happened with previous versions as well, but it's worth a try. Updated the TrueNAS server to 12.0-U6, and rebooted each of the cluster members. Let's see what happens.
I'm having a recurring problem where NFS mounts from my TrueNAS server go "offline" in my PVE cluster, and/or remain online but show very poor performance. I'm trying to track down why it's happening and what I can do to address it.
My TrueNAS server is running TrueNAS CORE 12.0-U5.1. It has 2x...
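When it recurs, a quick way to see how PVE currently regards the storage (the mount path below is a placeholder for my NFS export):

```shell
# Does PVE consider the NFS storage active or inactive right now?
pvesm status

# Crude latency probe against the mount point (path is a placeholder)
time ls /mnt/pve/truenas-nfs >/dev/null
```

If pvesm shows it inactive while the probe still answers, that points at the status check timing out rather than the export actually being down.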
I tried making two clones of a container template at the same time, and now I'm left with one that's locked:
The problem is that it doesn't really seem to be there--I can't qm unlock or qm destroy it; both commands tell me that nodes/pve3/qemu-server/166.conf doesn't exist. How do I get rid of it?
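Since the clones came from a container template, one hedged guess is that the config actually lives under lxc/ rather than qemu-server/, which would explain why qm can't find it (VMID 166 and the paths below come from the error message and the standard /etc/pve layout):

```shell
# /etc/pve is cluster-wide, so check every node's directory, both VM and CT
ls /etc/pve/nodes/*/qemu-server/166.conf /etc/pve/nodes/*/lxc/166.conf 2>/dev/null

# If it turns up under lxc/, it's a container: manage it with pct, not qm
pct unlock 166
pct destroy 166
```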
It looks like the official repo only has the backup client for x64--is there a reasonably-tested method of running it on a Raspberry Pi under either Raspbian/Raspberry Pi OS or Ubuntu?
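I'm not aware of an official arm64 package, but building the client from Proxmox's public source tree is one possibility on a 64-bit OS. An untested sketch (the dependency list is a guess and the build may want additional -dev packages; binary names can shift between releases):

```shell
# Untested sketch: build proxmox-backup-client from source on arm64
apt-get install -y build-essential git cargo rustc libssl-dev pkg-config
git clone git://git.proxmox.com/git/proxmox-backup.git
cd proxmox-backup
cargo build --release --bin proxmox-backup-client
./target/release/proxmox-backup-client version
```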