For the KVMs on the down node, I was able to get them working. You can just go into "HA" (high availability) and add the KVMs from the down node, and it just migrates them. HA doesn't need to have been set up while the node was still functioning. So I'm able to recover the VMs, thankfully!
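On the CLI, the equivalent is roughly this (a sketch; the VMID 100 is an assumption, and `ha-manager` has to run on a node that still has quorum):

```shell
# Add the VM from the down node as an HA-managed resource (hypothetical VMID 100);
# the cluster then recovers and starts it on a healthy node.
ha-manager add vm:100 --state started

# Check the resource state afterwards
ha-manager status
```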
Later: I wasn't...
Hi,
I have three similar Proxmox clusters, ~10 nodes running Ceph with encrypted root partitions. I enable remote SSH for unlocking the encrypted root drives at boot. This has worked swell for years. This morning, one of the nodes had rebooted and was waiting for the password to decrypt...
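For anyone wanting the same remote-unlock setup, a minimal sketch on Debian/Proxmox (assuming the `dropbear-initramfs` package and a LUKS-encrypted root; the authorized_keys path below is for current Debian releases):

```shell
# Install the small SSH server that runs inside the initramfs
apt install dropbear-initramfs

# Authorize your key for the pre-boot environment
# (on older releases the path is /etc/dropbear-initramfs/authorized_keys)
cat ~/.ssh/id_ed25519.pub >> /etc/dropbear/initramfs/authorized_keys
update-initramfs -u

# At boot, ssh to the node as root and run:
#   cryptroot-unlock
```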
From what I can see, Proxmox isn't going to work until you get your upgrade/packages fixed. And it may be that the packages can't get fixed until your disks get fixed. I do note your multipathd.service has been running for 1 year 1 month. I'm not familiar with your iSCSI setup, but it may be...
Cool. Would be nice to see it back in Debian if all the licenses check out. I sympathize with the packager and the many deps that packages can drag in...
1) ok, I edited that.
2) I edited that too. But what I meant is that it appeared (to me) not to have been around as long as some of the others.
3) Apparently Debian Developers found licensing issues with the web GUI, so perhaps it isn't all under the Apache license. I don't know the details, but...
I got this working in a Debian VM. I wasn't able to see the second monitor in arandr until I ran this, which I now run on startup each time:
xrandr --addmode Virtual-2 1920x1080
xrandr --output Virtual-2 --mode 1920x1080 --right-of Virtual-1
That works swell, except I can't get the mouse...
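To run those two commands on startup, one option (a sketch, assuming your display manager sources `~/.xprofile` at login, and that the output names `Virtual-1`/`Virtual-2` match your VM) is to append them there:

```shell
# Append the xrandr setup so it runs each time the X session starts
cat >> ~/.xprofile <<'EOF'
xrandr --addmode Virtual-2 1920x1080
xrandr --output Virtual-2 --mode 1920x1080 --right-of Virtual-1
EOF
```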
It looks like you have issues with your disk or SAN or whatever it is you are using for storage. It added 11 terabytes in the middle of the operation, amongst some other errors.
While you are running the update, in another terminal run `dmesg -Tw` and see if it says anything is wrong. Maybe you just have a failing disk or something.
Are you trying to install inside the VM? The disk-space screenshots you show are, I think, all from the host, not the guest. You could run `df -h` inside your Ubuntu guest--perhaps that is where you are out of space.
Ya, you don't have to redo everything from scratch. Just edit the network config to move the IP you want to use onto the interface you want. No need to re-join the cluster.
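For example, on a Proxmox node the change typically lives in `/etc/network/interfaces` (a sketch; the bridge name, NIC name, and addresses here are assumptions):

```
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```

Then apply it with `ifreload -a` (or a reboot).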
From the API, there is "The state of the zpool":
https://pve.proxmox.com/pve-docs/api-viewer/#/nodes/{node}/disks/zfs/{name}
I don't see anything for ARC, but it could be in the array it dumps. As a side note, I did notice collectd has ZFS ARC support, afaict.
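On a node you can hit that endpoint locally with `pvesh` (a sketch; the node name `pve1` and pool name `rpool` are assumptions):

```shell
# Query zpool state via the Proxmox API; dumps JSON with pool health, vdevs, and errors
pvesh get /nodes/pve1/disks/zfs/rpool --output-format json
```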
> smart attributes...
Ha! Exactly, I was thinking this myself. I have one system that I give a lot of RAM to just for the caching, but it appears to be the "worst". This is true whether you use Proxmox's internal graphs or view InfluxDB data with Grafana. It's really a QEMU limitation afaict, not necessarily Proxmox.
I'll edit this as I get additions/corrections.
Needs
----------
* System that produces alerts when something has gone bad.
* System that warns when things are getting bad.
* System that allows visualization of metrics to aid in system optimization.
* Monitoring of hardware, such as temperature...
Ya, now it is a million steps again! ;)
I realized something last night when trying to get SPICE to start unmuted in Proxmox. Back in the day, you pretty much had to start with some basic tools and conjure up a bunch of Perl. Each bit you had to build up yourself. Same with cluster filesystems...