Every node that you want to upgrade to PVE 7 must be updated to the latest version of PVE 6.4 first.
Both approaches 1) and 2) should work. I would go with the second.
https://pve.proxmox.com/pve-docs/chapter-pvecm.html#_remove_a_cluster_node
And do all your thin pools actually exist as logical volumes? To check this, could you please also post the output of
lvs
?
So before the upgrade everything was OK? Are there other VMs that have a similar problem? And are there VMs that work even though they have a disk on a "bad" storage?
Thank you! Looks like I had the same problem. I could resolve it by activating nesting:
sudo pct set 137 --features nesting=1
sudo pct stop 137
sudo pct start 137
(replace 137 with your container ID)
Does it contain
Failed to set up mount namespacing
?
That's one of the messages that should...
Could you please try to log in with verbose output?
ssh -vvv ...
And post from the host
pct config <containerid>
and from within the container
journalctl -b
pmgversion -v
dpkg -l | grep dbus
The best resource for Ceph on PVE is the respective chapter in the PVE administration guide.
To do so, you should set up HA in PVE.
You can set up CephFS in the GUI. CephFS can hold VM backups.
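Once CephFS is added as a storage, it appears in /etc/pve/storage.cfg. A minimal sketch of such an entry (the storage name and the content types are assumptions, adjust them to your setup):

```
cephfs: cephfs
	path /mnt/pve/cephfs
	content backup,iso,vztmpl
```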
Having a separate network for Ceph is certainly a good idea.
You can install a DHCP server if you...
I have not been able to reproduce the problem so far. Could you please try
apt update
apt install --reinstall pve-manager proxmox-widget-toolkit
and hard refresh the browser?
@Nefariousparity @Anguskon What exact versions of pve-manager and proxmox-widget-toolkit do you have installed?
Hi,
1) There is a feature request for dark mode in Bugzilla. You can add yourself to the CC list. There are some unofficial implementations around for the moment.
2) You can, for example, limit the disk size used by the PVE installation via the advanced options in the installer. There is a...
Hi,
could you please post your package versions (pveversion -v) and maybe check in the developer console (F12) whether any errors are displayed during the interactions that don't work?
The fdisk output contains entries for the working nvme-thin01 and samsung_ssd_1tb but not for the others. Do you have physical volumes and volume groups as defined in the storage.cfg?
pvs
vgs
Hi, there is an upgrade guide for PBS that describes how you can create a backup of your PBS server. If you have spare hardware then you can also sync backups between two PBS servers.
Why do you use 46.241 in the code when the CIDR info says 46.246? And why does your code say 241 when ip a says 246 for vmbr0?
Also, please try iface eno1 inet manual as described in the PVE administration guide.
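For reference, a minimal /etc/network/interfaces sketch where the physical NIC is set to manual and the bridge holds the address (the addresses below are placeholders from the documentation range, not your actual values):

```
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
	address 192.0.2.10/24
	gateway 192.0.2.1
	bridge-ports eno1
	bridge-stp off
	bridge-fd 0
```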
Using PBS offers many advantages: deduplication, incremental backups (hence faster, as @grefabu mentioned), verification of data integrity... See also the PBS feature list.
As a note: PBS has a client-server architecture. This makes it possible for PVE, as a client...
Hi,
10.1.1.3 is the broadcast address of 10.1.1.0/30. So I'd try to use
10.1.1.1 and 10.1.1.2, or
something simpler/bigger like 10.10.10.0/24
See any IPv4 subnet calculator
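If you'd rather check such addresses locally, here is a small shell sketch (the broadcast helper function is made up for illustration) that computes the broadcast address of an IPv4 CIDR block:

```shell
#!/bin/sh
# Compute the broadcast address of an IPv4 network.
# Usage: broadcast <network-address> <prefix-length>
broadcast() {
    ip=$1
    prefix=$2
    # split the dotted quad into four octets
    oldIFS=$IFS; IFS=.; set -- $ip; IFS=$oldIFS
    # pack the octets into a 32-bit integer
    n=$(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
    # the host part is all ones in the broadcast address
    hostmask=$(( (1 << (32 - prefix)) - 1 ))
    b=$(( n | hostmask ))
    echo "$(( (b >> 24) & 255 )).$(( (b >> 16) & 255 )).$(( (b >> 8) & 255 )).$(( b & 255 ))"
}

broadcast 10.1.1.0 30    # prints 10.1.1.3
broadcast 10.10.10.0 24  # prints 10.10.10.255
```

So in a /30 only the two addresses between network and broadcast are usable for hosts.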
Uploading an ISO to a PVE storage does not change anything in existing VMs. Attaching an ISO to a VM in its hardware view is like attaching a USB drive to a physical machine. It will do nothing until you boot the VM from this ISO.
When you choose a file-level storage, you get a drop-down menu where you can select qcow2, both in the VM creation wizard and when you add a disk to an existing VM in the GUI.
See the PVE Wiki page about storages.
Could you please describe in more detail what you mean by "cannot start"? Does PVE show an error? Does your VM get past GRUB/UEFI? Bluescreen in Windows?
Hi, as a first step, could you please copy&paste the output of the following commands (for example from the shell in the GUI of your PVE host)?
cat /etc/pve/storage.cfg
fdisk -l
df -h