Search results

  1. PVE 6 + InfluxDB + Grafana

    Not to worry, as I also don't have the nodes measurement in my InfluxDB. Just make sure you create the new datasource in Grafana and then re-create your dashboard. Btw., I always had to remove and re-import the entire dashboard; just changing the settings to a new datasource didn't do it for me.
  2. PVE 6 + InfluxDB + Grafana

    I do run two clusters. One runs the Ceph nodes, albeit managed by PVE, and the other is made up of actual Proxmox hypervisors, which run all the guests and containers. I am actually not into that hyperconverged thingy and like to keep mine diverse for several reasons. One being that...
  3. PVE 6 + InfluxDB + Grafana

    Yeah… mine looks like this:

      ###
      ### [[udp]]
      ###
      ### Controls the listeners for InfluxDB line protocol data via UDP.
      ###
      [[udp]]
        enabled = true
        bind-address = ":8091"
        database = "iceph"
        batch-size = 1000
        batch-timeout = "1s"

      [[udp]]
        enabled = true
        bind-address = ":8090"...
  4. hibernate a VM via qm command

    Well… the guest could probably hibernate itself after flushing the caches, if one let it use the PVE API to hibernate itself. Otherwise, the qm command would have to be issued from the PVE host, which of course has no knowledge of when or whether the caches have been flushed out.
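    For reference, a minimal sketch of triggering hibernation from the host via qm, assuming a recent PVE 6 release; the VMID 100 is a placeholder:

      # Suspend guest 100 to disk: its RAM state is written to a
      # state volume, so the guest even survives a host reboot.
      qm suspend 100 --todisk 1

      # Start it again later; it resumes from the saved state.
      qm start 100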
  5. PVE 6 + InfluxDB + Grafana

    Hmm… you're still missing the actual PVE performance data, otherwise it'd look more like this: I still suspect that your PVE performance metrics don't get through. Maybe you should really try a separate database/UDP port and reconfigure /etc/pve/status.cfg accordingly.
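    A minimal /etc/pve/status.cfg pointing PVE at an InfluxDB UDP listener might look like the sketch below; the server address and port are assumptions, and the port must match a [[udp]] section in influxdb.conf (with UDP, the target database is chosen on the InfluxDB side):

      # /etc/pve/status.cfg
      influxdb:
        server 192.168.1.50
        port 8090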
  6. PVE 6 + InfluxDB + Grafana

    You're missing the ballooninfo, so I suspect that your data doesn't get through. Also, I opted to create a separate UDP listener for that proxmox DB on my system, because I didn't want to mix up data from different systems. Did you check if there's any firewall in place on the influx...
  7. ZFS: howto add 'new' HDD & transfer data from old one

    Well… there's no such thing as a raidz0 in ZFS. Hopefully you don't mean a simple striped (aka raid0) zpool, with multiple disks and no redundancy at all. Please issue a zpool status and paste its output. You can "upgrade" a non-redundant zpool to a mirrored zpool and then even upgrade it...
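    A sketch of that upgrade path, assuming a single-disk pool named tank on /dev/sda and a new disk /dev/sdb (all names are placeholders):

      # Check the current pool layout first.
      zpool status tank

      # Attach a second disk to the existing single-disk vdev; this
      # turns it into a two-way mirror and resilvering starts automatically.
      zpool attach tank /dev/sda /dev/sdb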
  8. PVE 6 + InfluxDB + Grafana

    What's in your PVE status.cfg? What does the config for your proxmox DB in your influxdb.conf look like? Maybe you can log in to your InfluxDB and run a SHOW MEASUREMENTS on the DB which is receiving the updates.
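    One way to run that check from the InfluxDB host; the database name proxmox is an assumption:

      # Open the InfluxDB shell, select the database and list
      # which measurements are actually arriving.
      influx
      > USE proxmox
      > SHOW MEASUREMENTS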
  9. Files sharing between VM and Container: What is the best solution to optimize data transfer?

    Hard to tell without further data. Usually the use of NFS itself doesn't spike the CPU. To better be able to judge this, you'd have to run something like top -s 5, re-create the issue and share the output.
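    A non-interactive way to capture such output for sharing; the flags are for procps top on Linux, and the filename is a placeholder:

      # Sample top three times at 5-second intervals in batch mode
      # and write the result to a file you can attach to the thread.
      top -b -d 5 -n 3 > top-during-transfer.txt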
  10. Files sharing between VM and Container: What is the best solution to optimize data transfer?

    The simplest way would be to export the storage via NFS from the OV guest. Since this all runs locally on the system, the speeds will be more than enough for what a Plex server needs.
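    A minimal sketch of such an export on the guest; the path and subnet are assumptions:

      # /etc/exports on the NFS-serving guest: share /srv/media
      # read-write with the local subnet.
      /srv/media 192.168.1.0/24(rw,sync,no_subtree_check)

    After editing the file, exportfs -ra reloads the export table; the container can then mount the share over the local bridge.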
  11. ZFS: howto add 'new' HDD & transfer data from old one

    First up, you can't create a raidz1 with only one disk - you need at least 3 disks for a raidz1 (equivalent to a raid5) zpool. So what I am seeing is a single-disk rpool, and presumably you also had a single-disk zpool for your container. Obviously, you don't care much about the safety of your data, but...
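    For illustration, the minimal raidz1 creation; pool and disk names are placeholders:

      # A raidz1 vdev needs at least three disks; one disk's worth
      # of capacity goes to parity, much like raid5.
      zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc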
  12. donot allow api to delete

    You can set this under VMID -> Options -> Protection. Or use the API to set that.
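    The same flag can also be set from the CLI or through the API; VMID 100 and node name pve1 are placeholders:

      # Enable the protection flag: the guest can no longer be removed,
      # nor its disks destroyed, until the flag is cleared again.
      qm set 100 --protection 1

      # Equivalent API call via pvesh on a cluster node.
      pvesh set /nodes/pve1/qemu/100/config --protection 1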
  13. PVE 6 + InfluxDB + Grafana

    I have no issues at all with this dashboard. However, you cannot choose a single VMID, only PVE nodes, for the upper gauges. The VMIDs are displayed below in their respective graphs. Also, the setup has been straightforward and according to the docs.
  14. donot allow api to delete

    Well, you can do that of course, and you can lock guests against being altered; that should also prevent them from being carelessly or accidentally deleted via the API, so I'd think there isn't a problem with that function.
  15. donot allow api to delete

    Why would you want that - that's precisely what an API is for. I reckon that everybody who accesses the API knows what he or she is doing!
  16. [solved] rpool silently fails to boot on 1 of 2 disks.

    Can you provide fdisk -l output from both of your boot disks?
  17. Ceph-Cluster RAM runs full

    Yeah… I have also experienced that a lot. Almost any reboot of a Ceph host kills the monitor which had been running on that node. For such a case, I have a little action plan for how to "re-create" such a monitor. It generally goes like this: rm -rf /var/lib/ceph/mon/<ceph-node name>/*...
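    The excerpt is cut off; purely as a rough sketch (not necessarily the poster's full procedure, and the node name pve1 is a placeholder), re-creating a monitor on a PVE-managed Ceph node could go along these lines:

      # Stop the dead monitor and wipe its local data store.
      systemctl stop ceph-mon@pve1
      rm -rf /var/lib/ceph/mon/ceph-pve1/*

      # Drop the stale monitor from the cluster map...
      ceph mon remove pve1

      # ...then let PVE create a fresh one on this node. Stale mon
      # entries in /etc/pve/ceph.conf may need cleaning up first.
      pveceph mon create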
  18. Recover deleted qcow disk

    Well, that's the nature of defaults, isn't it - they never suit everyone. Actually, I am pretty content with the current set. To me, it's basically the same as walking up to my servers in the rack and checking twice before pulling a drive from any server. The same applies to the "virtual" rack...
  19. Recover deleted qcow disk

    Well, you can. I have just checked that using one of my CentOS guests. You can detach a device from a running guest, even while the volume is mounted. It's probably the equivalent of pulling a drive from its drive bay on a hot-pluggable system… I haven't checked if any running traffic to that...
  20. Raid1 boot device failed

    I don't know about your HW, but you can get yourself a SATA-to-USB adaptor and hook either of those two disks up to that, after you've installed PVE on a new disk. If you want to boot the system with these disks already attached, make sure that you exclude them from the BIOS boot volumes; that should...
