Not to worry, as I also don't have the nodes measurement in my InfluxDB. Just make sure you create the new datasource in Grafana and then re-create your dashboard. Btw, I always had to remove and re-import the entire dashboard; just changing the settings to a new datasource didn't do it for me.
I do run two clusters. One is running the Ceph nodes, albeit managed by PVE, and the other one is made up of the actual Proxmox hypervisors, which run all the guests and containers. I am actually not into that hyperconverged thingy and like to keep my setup diversified for several reasons. One being that...
Well… the guest could probably hibernate itself after flushing the caches, if one would let it use the PVE API to hibernate itself. Otherwise, the qm command would have to be issued from the PVE host, and that one of course has no knowledge of when or if the caches have been flushed out.
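A rough sketch of that self-hibernate, run from inside the guest and assuming an API token with enough rights on the guest's own vmid (the token name monitor@pve!hibernate, host, node and vmid are all placeholders):
# flush what we can from inside the guest first
sync && echo 3 > /proc/sys/vm/drop_caches
# then ask PVE to suspend this VM to disk (i.e. hibernate it)
curl -k -X POST -H "Authorization: PVEAPIToken=monitor@pve!hibernate=<secret>" \
  "https://<pve-host>:8006/api2/json/nodes/<node>/qemu/<vmid>/status/suspend" -d todisk=1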
Hmm… you're still missing the actual PVE performance data, otherwise it'd look more like this:
I am still suspecting that your PVE performance metrics don't get through. Maybe you should really try a separate database/UDP port and reconfigure /etc/pve/status.cfg accordingly.
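Just for reference, a minimal /etc/pve/status.cfg for the UDP-based InfluxDB plugin could look something like this (host and port are placeholders; the target database is chosen by the UDP listener on the Influx side):
influxdb:
        server <influx-host>
        port 8089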
You're missing the ballooninfo, so I am suspecting that your data doesn't get through. Also, I opted to create a separate UDP listener for that proxmox DB on my system, because I didn't want to mix up data from different systems. Did you check if there's any firewall in place on the influx...
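Such a dedicated UDP listener in influxdb.conf would look roughly like this (InfluxDB 1.x; database name and port are just examples):
[[udp]]
  enabled = true
  bind-address = ":8089"
  database = "proxmox"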
Well… there's no such thing as a raidz0 in ZFS. Hopefully you don't mean a simple striped (aka raid0) zpool, with multiple disks and no redundancy at all.
Please issue a zpool status and paste its output. You can "upgrade" a non-redundant zpool to a mirrored zpool and then even upgrade it...
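If it comes to that, the "upgrade" from a single disk to a mirror is basically just an attach; pool and device names below are placeholders:
zpool status
# attach a second disk to the existing single-disk vdev to turn it into a mirror
zpool attach <pool> <existing-disk> <new-disk>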
What's in your PVE status.cfg?
What does the config for your proxmox db in your influxdb.conf look like?
Maybe you can log in to your InfluxDB and run a
show measurements
on the db which is receiving the updates.
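With the InfluxDB 1.x CLI that would go roughly like this (the database name is just an example); if the data gets through, you should see measurements such as ballooninfo in the list:
influx
> USE proxmox
> SHOW MEASUREMENTS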
Hard to tell without further data. Usually the use of NFS itself doesn't spike the CPU. To be able to judge this better, you'd have to show something like
top -d 5
re-create the issue and share the output.
The simplest way would be to export the storage via NFS from the OV guest. Since this is all running locally on the system, the speeds will be more than enough for what a Plex server would need.
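As a rough sketch, assuming the media lives under a hypothetical /export/media and the LAN is 192.168.1.0/24, the export on the guest boils down to one line in /etc/exports plus a reload:
/export/media 192.168.1.0/24(rw,sync,no_subtree_check)
exportfs -ra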
First up, you can't create a raidz1 with only one disk - you need at least 3 disks for a raidz1 (equivalent to a raid5) zpool. So what I am seeing is a single-disk rpool, and presumably you also had a single-disk zpool for your container. Obviously, you don't care much about the safety of your data, but...
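Just to illustrate the minimum, a raidz1 vdev takes three (or more) devices; the pool and disk names here are made up:
zpool create <pool> raidz1 /dev/sdb /dev/sdc /dev/sdd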
I am having no issues at all with this dashboard. However, you cannot choose a single vmid, only PVE nodes, for the upper gauges. The vmids are displayed below in their respective graphs. Also, the setup has been straightforward and according to the docs.
Well, you can do that of course, and you can lock guests from being altered, which should also prevent them from being carelessly or accidentally removed via the API, so I'd think that there isn't a problem with that function.
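If it helps, the two knobs I'd look at are the protection flag and the lock; a hedged sketch with a made-up vmid:
# prevent the guest (and its disks) from being removed
qm set 101 --protection 1
# set a lock on the config, blocking further operations until qm unlock
qm set 101 --lock backup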
Why would you want to have that - that's precisely what an API is for. I reckon that everybody who accesses the API knows what he or she is doing!
Yeah… I have experienced that a lot as well. Almost any reboot of a ceph host kills the monitor which had been running on that ceph node. In such a case, I do have a little action plan for how to "re-create" such a monitor. It generally goes like this:
rm -rf /var/lib/ceph/mon/<ceph-node name>/*...
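Roughly, and depending on the PVE release (newer ones use pveceph mon destroy / pveceph mon create, older ones destroymon / createmon), it boils down to something like this; the mon id is a placeholder:
pveceph mon destroy <mon-id>
rm -rf /var/lib/ceph/mon/ceph-<mon-id>
pveceph mon create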
Well, that's the nature of defaults, isn't it - they never suit everyone. Actually, I am pretty content with the current set. To me, it's basically the same as walking up to my servers in the rack and checking twice before pulling a drive from any server. The same applies to the "virtual" rack...
Well, you can. I have just checked that using one of my CentOS guests. You can detach a device from a running guest, even while the volume is mounted. It's probably the equivalent of pulling a drive from its drive bay on a hot-pluggable system… I haven't checked if any running traffic to that...
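A hedged sketch of how such a detach can be done from the CLI, with a made-up vmid and disk slot, assuming disk hotplug is enabled for that guest:
# make sure disks are on the guest's hotplug list
qm set 101 --hotplug network,disk,usb
# detach the disk from the running guest; the volume shows up as an unused disk afterwards
qm set 101 --delete scsi1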
I don't know about your HW, but you can get yourself a SATA-to-USB adaptor and hook either of those two disks up to that, after you have installed PVE on a new disk. If you want to boot the system with these disks already attached, make sure that you exclude them from the BIOS boot volumes, that should...