Yeah the only reason I wanted to add it to the cluster was to manage it from the same point, rather than having to use two portals. The reason the VM needs its own network is because it will likely saturate our primary network and I don't want it bogging down other traffic. Security is a concern...
I have five nodes in a cluster and am about to spin up a sixth for a special case. The first five currently all use the same network (working on separating out the management interfaces from the LACP bonds so I can trunk VLANs in on the rest, but that's for another day), but the VM that will run...
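To make the layout concrete, this is roughly what I have in mind for the sixth node; the NIC name, bridge name, and settings below are placeholders, not my actual config:

# dedicated NIC (placeholder name) for the storage-heavy VM, kept off the LACP bond
auto eno2
iface eno2 inet manual

# separate bridge so the VM's traffic never touches the primary bridge
auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0

The VM's virtual NIC would then be attached to vmbr1 instead of the bridge the other guests use.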
It was user error. I don't remember exactly what I was doing wrong (it was nearly a year ago), but if you're not lacking common sense like myself, then it's not the same issue. Sorry I can't be of more help. I think I was assuming the wrong unit conversion in the config or something. Like bytes...
Update: While restarting the services and waiting a month did not work, stopping the services and waiting a couple of days did. I don't know why this is happening but it is causing a lot of downtime. If someone comes across this and has a suggestion on how to improve robustness to mitigate these...
Replying again because the issue persists. Restarting the services has no effect. Stopping them changes the error to "Dataset is busy." All new backup jobs are failing but I can't troubleshoot it until I can resolve this first issue. Also, there is a partial recv but I can't destroy it.
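(For anyone searching later, this is the rough set of commands I'm looking at for the partial recv; the dataset and snapshot names below are placeholders, not my real layout.)

# abort an interrupted, resumable receive and drop its partially received state
zfs receive -A target_pool/s_pool
# the partial state can also show up as a hidden %recv child
zfs list -r -t all target_pool
# check for holds that would keep a snapshot from being destroyed
zfs holds target_pool/s_pool@snap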
I restarted the service, but it's not unlocking. The datastore has been locked for weeks at this point, preventing me from taking snapshots and "send|recv"ing them. It's persisted through reboots and every attempt to fix it.
Coming back to this because I'm having trouble again. I ran lsof to try to find the process locking the mountpoint and got this:
:~$ sudo lsof | grep /mnt/datastore/storage/
proxmox-b 2325 backup 19u REG 0,58 0 3...
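(Listing the other checks I plan to run in case the lsof output above isn't the whole story; the path and pool name are the ones from earlier in the thread.)

# show every process with the mountpoint, or anything under it, open
fuser -vm /mnt/datastore/storage/
# and from the ZFS side, confirm what is actually mounted where
zfs get mounted,mountpoint -r s_pool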
Update: I tried to use qm importdisk but it says that the target storage contains illegal characters. The documentation for qm doesn't seem to specify what characters are unacceptable. The target storage is /mnt/VM_Storage
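My working theory is that the target has to be a storage ID from Datacenter -> Storage rather than a filesystem path (the slashes would be the illegal characters). Something like this, where the VMID and paths are placeholders:

# list the configured storage IDs; the import target must be one of these IDs,
# not a mount path like /mnt/VM_Storage
pvesm status
# then (VMID 105 and the source path are placeholders)
qm importdisk 105 /path/to/disk.vmdk VM_Storage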
Fantastic! Thank you! I'll give that a shot and report back. I looked at the link you provided but I don't see where it says qcow2 isn't supported on LVM. I'm guessing it doesn't explicitly say that, but it can be inferred from what is said?
I'm trying to import a .vmdk as per the official instructions, but it keeps failing, reporting that there is no space left. This is very much not true, as there are terabytes of space in the volume group, and the .vmdk is less than 130 gigabytes.
The .vmdk is on an HDD connected via USB and...
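(Adding the checks I'm running in the meantime; the path below is a placeholder.)

# confirm the volume group really has free extents left
vgs
# compare the vmdk's virtual size with its size on disk -- the import has to
# allocate the full virtual size on the LVM side, not just the ~130 GB file
qemu-img info /mnt/usb/disk.vmdk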
Hello,
I'm trying to set up a node to host a file server. However, by default, Proxmox is assigning nearly all of the storage space to the LVM volume. Since I will only be hosting one VM on this node, and that VM needs extensive storage space, I need to reconfigure the storage volumes. Despite...
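To make the question concrete, this is roughly the direction I was planning to take; "pve" and "data" are the installer defaults, while "vm_storage" is just a placeholder name:

# drop the default lvm-thin storage entry, then free its extents back to the VG
pvesm remove local-lvm
lvremove /dev/pve/data
# hand the freed space to a plain LVM storage the file-server VM can use
pvesm add lvm vm_storage --vgname pve --content images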
At this point I'm thinking of nuking the datastore and pool and starting fresh. I just wish I knew what happened so I could try to prevent it in the future.
All drives passed SMART, so that's good. unshare complained that the share is not NFS or SMB, so that answers that as well. I'll certainly set up a cronjob for scrubs, but for now should I use that force/hardforce export?
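(For the scrub cronjob, something along these lines is what I had in mind; the schedule is just my choice and the pool name is from the original post.)

# /etc/cron.d/zpool-scrub -- scrub the pool monthly, early on the 1st
0 3 1 * * root /usr/sbin/zpool scrub s_pool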
It's definitely not set up as swap. There is an NVMe drive that holds all of the operating system partitions.
The pool is purely storage. I have over 4TB still available, but I did run out of space a couple of weeks ago since I was out sick and no one was cleaning up the old snapshots in the pool and...
Hey all,
I have my backups in a zfs pool ("s_pool") of which I take regular snapshots. These are then streamed elsewhere with zfs send ... | zfs recv .... Today, however, it is failing to stream because zfs can't unmount s_pool. Running an export fails as well (specifying that the device is...
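(For reference, the job is shaped roughly like this; the snapshot names and target dataset are examples, not the real ones.)

# regular snapshot of the backup pool, then an incremental send to the target
zfs snapshot s_pool@2024-05-01
zfs send -i s_pool@2024-04-30 s_pool@2024-05-01 | zfs recv -F target_pool/s_pool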