Looks like the VG is active, you can deactivate it with
vgchange -a n <VG_NAME>
You get the name using vgs; it should just be pve, but check to be sure. Then wiping it should work.
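A minimal sketch of the whole sequence, assuming the VG really is named pve and sda is the disk to wipe (adjust both to your setup):

vgs                      # list volume groups and note the name
vgchange -a n pve        # deactivate the VG
wipefs -a /dev/sda       # CLI alternative to the UI wipe (destroys everything on the disk!)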
Hey,
what is on sda? Looks like it might be an old PVE installation. If you don't need that anymore you can select /dev/sda and then click "Wipe disk"; after that you can select it under "Directory" when clicking "Create: Directory", where you can choose to use it as a new datastore.
Do you still have both configured? You really don't want a bind mount and an NFS share mount pointing to the same directory.
_netdev[1] should be enough; it tells systemd to wait for the network before mounting. You could add _netdev, reboot and check the journal, does it mention anything about the...
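For illustration, a typical /etc/fstab line with _netdev added; the server address and paths are placeholders for your actual share:

192.168.1.50:/export/backup  /mnt/backup  nfs  defaults,_netdev  0  0

After a reboot, journalctl -b -u mnt-backup.mount (the unit name follows the mount point) should show what happened during mounting.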
Hey,
yes, when adding the storage on the new PVE host and marking it as a "VZ Dump" location, backups will show up the same way they did on your old PVE host. You can give this a try by installing PVE in a VM and adding the NFS share there.
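If you'd rather script it than click through the UI, something along these lines should do the same; storage ID, server and export path are placeholders:

pvesm add nfs backups-nfs --server 192.168.1.50 --export /export/backup --content backup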
Hey,
so, you basically have two options:
1. mount NFS on host, bind mount into LXC
Here the NFS share is mounted on the PVE host, just like you'd set up other storages through the UI. Then configure the bind mount[1] from wherever this NFS is mounted on the PVE host into the LXC container with...
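As a rough sketch, assuming the share ends up mounted at /mnt/pve/nfs-share on the host and the container has ID 101:

pct set 101 -mp0 /mnt/pve/nfs-share,mp=/mnt/nfs

The mp= part is where the share will appear inside the container.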
Hey,
for the UI, filling in the password (+ repeating it) or adding a public SSH key should be enough. Could you post a screenshot of the filled-out form? Are any errors displayed in the browser JS console (F12 > Console)?
For the CLI, by default local storage is used by the pct create[1] command, but in...
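For reference, a hedged example of picking the storage explicitly; VMID, template name and storage are placeholders:

pct create 100 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst --storage local-lvm --hostname test --ssh-public-keys ~/.ssh/id_rsa.pub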
Hmm, can you ping 192.168.117.1 from the hosts? VMs that have internet access are on vmbr0 and get an IP (from a DHCP server, I assume) on the same subnet (192.168.117.0/24), right? Do you have a firewall configured on your router?
Hey,
could you post the output of cat /etc/network/interfaces and ip a? Make sure the static IP you've set for the PVE host is inside your network's subnet and not already in use.
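For comparison, a typical /etc/network/interfaces on a PVE host with a static IP looks roughly like this; addresses and the NIC name (eno1) are placeholders for your network:

auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.2/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0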
Not really, chunks basically have to be "unused" for ~1 day.
Technically you could set your system time to a few days in the future and run GC, but that is NOT recommended.
Hey,
GC marks unused chunks, which are removed ~1 day later, so they have to be marked as unused for some time before removal. For a technical overview you can take a look at [1].
[1] https://pbs.proxmox.com/docs/maintenance.html#gc-background
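If you want to kick off or watch a GC run manually, the sketch below assumes a datastore named store1:

proxmox-backup-manager garbage-collection start store1
proxmox-backup-manager garbage-collection status store1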
Hey,
looks like 192.168.32.5 was previously used by a different server/VM/CT and you connected to it using SSH. The fingerprint of that host was saved and associated with the IP; now the IP is the same but the fingerprint is a new one, and SSH warns you that the fingerprints don't match. As...
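To clear the stale entry, you can remove the old fingerprint for that IP from your known_hosts:

ssh-keygen -R 192.168.32.5

The next connection will then prompt you to accept the new fingerprint.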
Hey,
you can set up pools[1], those are kind of like folders (see the sketch below the links). Generally, feel free to create a feature request at [2], since that's the place where things like that are tracked.
[1] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pveum_resource_pools
[2] https://bugzilla.proxmox.com/
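As a quick sketch, on recent PVE versions a pool can also be created from the CLI; the pool name is a placeholder:

pveum pool add homelab --comment "Lab VMs"

Guests can then be assigned to it via the UI or the API.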
So, if VMs do have working network, then it looks like whatever the Windows Server is doing messes up the network for its VMs; for that you probably want to look at something like [1] or [2].
[1]...
Hey,
what does /etc/network/interfaces look like after creating vmbr1? Could you also post the output of ip a and ip r before and after creating the second bridge?
Hey,
Datastores do not have IPs; the Proxmox Backup Server (PBS) has one. I assume you mean the IP of your NFS share changed, in that case you have to update the corresponding entry in /etc/fstab, or wherever you have the NFS share mounting configured on your PBS.
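That boils down to editing the address in the NFS line, e.g. (placeholder paths and IPs):

old:  192.168.1.50:/export/backup  /mnt/backup  nfs  defaults,_netdev  0  0
new:  192.168.1.60:/export/backup  /mnt/backup  nfs  defaults,_netdev  0  0

followed by systemctl daemon-reload and mounting the share again.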