Ah ok, I see. They are all inheriting the snapdir setting from rpool/data, whereas I had only set it on rpool. I checked on my first node and everything is hidden there, so I think I'm safe to hide it on every pool. Solved again :)
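For anyone else checking this on their own pools, a recursive zfs get shows exactly where each dataset picks the setting up from (standard ZFS usage, not quoted from the thread):

# SOURCE shows "local" where the property was set and "inherited from ..." elsewhere
zfs get -r snapdir rpool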
Unsolved :(
root@proxmox-2:~# zfs get snapdir rpool
NAME   PROPERTY  VALUE   SOURCE
rpool  snapdir   hidden  local
And yet:
INFO: Starting Backup of VM 202 (lxc)
INFO: Backup started at 2023-10-22 01:00:01
INFO: status = running
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT...
After trying to fix the broken Proxmox node for what felt like an entire day, a simple reboot of my NAS (i.e. the NFS server) fixed the PVE node. The only reason I realised was that even mount -a was timing out. I don't really know what I'd log as a bug, but I feel like a broken storage mount...
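One quick sanity check when mounts start timing out like this is to query the NFS server's export list with a hard timeout, so the shell itself can't hang (a generic diagnostic, not from the original post; nas.local is a placeholder for your NAS):

# Fails fast instead of blocking forever if the server is wedged
timeout 5 showmount -e nas.local || echo "NFS server not responding"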
I'm just saying that if the VM ID is unique within a cluster, I'm not the first person to wish there was a convenience call (I call it that because I guess PVE doesn't need it to function) to get the node based on the VM ID (or indeed to get a VM's info without caring what node it's on), without all...
@dcsapak I was setting up replication via the UI and it complained "unable to replicate mountpoint type 'bind' (500)"... despite the mountpoint having replicate=0 set.
I just retried and now it's not complaining, so not sure what to tell you, sorry! pve-manager/7.4-16 (soon to be 8..).
Cheers.
Or if you don't want to rely on external packages that don't ship with Proxmox (jq) just to do something that really should be part of the core API... ;)
pvesh get /cluster/resources -type vm --noborder | awk '{if($13 == "your_vm_name") print $14}'
The pvesh command itself seems oddly thought...
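For comparison, the jq-based lookup this post is pushing back against presumably looked something like the following (a sketch, not quoted from the thread; the name and node fields come from the /cluster/resources JSON output):

# Same lookup, but keyed on field names instead of awk column positions
pvesh get /cluster/resources --type vm --output-format json \
  | jq -r '.[] | select(.name == "your_vm_name") | .node'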
Neat! Thank you. I verified and you are correct - for some reason (afaik I never set this) snapdir is visible on the problematic node. So I guess
zfs set snapdir=hidden rpool
should solve this too! Thank you :)
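If a child dataset (e.g. rpool/data) turns out to have its own local snapdir value, clearing it makes the dataset inherit from the pool root again (standard zfs inherit; rpool/data is just an example):

# Drop the local override so the value flows down from rpool
zfs inherit snapdir rpool/data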
Hey @fiona - thanks! I'm still confused though. This path is inside the LXC? Or on the PVE host? I cannot find it in either location (not sure where "." is).
Why would this work fine on one PVE host and not another? They are almost identical afaik.
Also not sure what I'm losing by excluding...
Hi,
On one of my nodes, all LXC backups fail with the error in the thread title:
INFO: starting new backup job: vzdump 302 --storage pbs-iscsi --node proxmox-2 --notes-template '{{guestname}}' --mode stop --remove 0
INFO: Starting Backup of VM 302 (lxc)
INFO: Backup started at 2023-10-08 19:45:56
INFO...
"VM names are not unique..."
Right... VM IDs are unique across a cluster. Which raises the question: why do we need to include a node when querying the API?
/api2/json/nodes/{node}/lxc/{vmid}
Am I missing a call to get a VM's info without knowing which node it's on?
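As far as I know there is no dedicated endpoint for this, but the cluster-wide resource list can be filtered by VMID to recover the whole resource record, node included (a sketch; 302 is just the VMID from the log above):

# Prints the VM's resource object (node, name, status, ...) without knowing its node
pvesh get /cluster/resources --type vm --output-format json \
  | jq '.[] | select(.vmid == 302)'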
Been hacking at this all day. I basically ended up removing all references to NFS mounts from my PVE storage and from fstab, and the node then finally came back. As soon as I add NFS shares back (to any location), the node goes down again, pvesm status hangs forever, etc. I really don't know what is...
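In case it helps anyone hitting the same thing: PVE's NFS storage definition accepts mount options, so a soft mount can at least stop a dead server from wedging the node indefinitely (a sketch of /etc/pve/storage.cfg; the storage name, server and export are placeholders, and soft mounts come with their own data-integrity trade-offs):

nfs: nas-backup
        server 192.168.1.10
        export /volume1/backup
        path /mnt/pve/nas-backup
        content backup
        options soft,timeo=150,retrans=3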
Hi @fiona - I am indeed! I believe this came out the box with however homeassistant used to ship the setup instructions. Good to know it'll be patched, thanks! :)
root@proxmox-2:~# qm stop 101
root@proxmox-2:~# gdisk /dev/zvol/rpool/data/vm-101-disk-1
GPT fdisk (gdisk) version 1.0.6
Partition table scan:
MBR: not present
BSD: not present
APM: not present
GPT: present
Found valid GPT with corrupt MBR; using GPT and will write new protective MBR on...
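If the goal is simply to let gdisk write the fresh protective MBR it is offering, the interactive answers can be piped in (gdisk reads its commands from stdin; double-check the device path before writing anything):

# 'w' writes the table, 'y' confirms; this rewrites the protective MBR
printf 'w\ny\n' | gdisk /dev/zvol/rpool/data/vm-101-disk-1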
Basically it seems like an NFS mount on the host is causing the PVE node all sorts of issues. I can't even manually mount the NFS share anymore - at a guess, one of the LXC guests showing a '?' status is locking the path or something. The Proxmox Storage Manager is really confused and hangs.
I don't think a...
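When pvesm and friends hang like this, processes stuck in uninterruptible sleep (state D) are the usual giveaway that something is blocked on a dead mount (a generic Linux diagnostic, not from the thread):

# wchan hints at what each stuck process is waiting on
ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'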
Actually this time it's totally broken my PVE node. I even hard-powered it off and on.
Only a few LXCs start. The rest all error with:
Oct 02 11:05:34 proxmox-1 cgroup-network[8727]: Cannot open pid_from_cgroup() file '/sys/fs/cgroup/lxc/305/tasks'.
Oct 02 11:05:34 proxmox-1...
pve-manager/8.0.3/bbf3993334bfa916
I run a large snapshot backup once a week, to a PBS, and one of those is the backup of an LXC that has an NFS mount passed through to it. Quite often the backup seems to hang (no error), which leaves one of my LXCs locked and the entire node showing ?'s...
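For the stuck-lock part specifically: once you are sure the hung backup task is really dead, the leftover lock can be cleared manually (standard pct command; 305 is the container from the log above, substitute your own VMID):

pct unlock 305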