Hi,
currently such a feature does not exist. There is, however, a request for it in the Proxmox Bugzilla. You can add yourself to its CC list to get notified about changes to its status.
There is a bug entry for this in Bugzilla; adding yourself to its CC list will notify you of changes to its status. You can also find some unofficial solutions in the forum or on GitHub.
The best practices for Windows guests have not been changed by the jump from PVE 6 to 7.
Backup & restore is a good & reliable way to transfer your VMs.
Hi,
have you been able to solve this in the meantime? Usually, posting the output of
cat /etc/network/interfaces
ip a
from both the Proxmox VE host and from within the VM is really helpful.
In addition, you can debug many networking problems using tcpdump, for example:
tcpdump -envi vmbr0
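For reference, a typical working bridge setup in /etc/network/interfaces looks roughly like the following (interface names and addresses are just examples, adjust them to your network):

```
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```

Comparing your own file against a known-good layout like this often already reveals the problem (wrong bridge port, missing gateway, duplicate address).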
Hi, if you absolutely want TrueNAS then this sounds OK. Alternatively, you can think about:
1. letting Proxmox VE handle the physical disks and creating a (maybe ZFS) storage on them
2. installing some (other) data storage solution in a VM
3. assigning the VM virtual disks residing on the storage from step 1
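The steps above could look roughly like this on the PVE host. This is only a sketch; the pool name, disk paths, storage ID, VMID, and disk size are all placeholders you would replace with your own values:

```
# 1. Let Proxmox VE handle the disks: create a ZFS pool and register it as a storage
zpool create tank mirror /dev/sdb /dev/sdc     # disk names are examples
pvesm add zfspool tank-storage --pool tank

# 2. Install your storage solution in a VM (e.g. VMID 100) as usual

# 3. Attach additional virtual disks from that storage to the VM
qm set 100 --scsi1 tank-storage:32             # adds a 32 GiB disk
```

This way ZFS runs on the host with direct disk access, and the VM only sees plain virtual disks, which keeps the setup simple to back up and migrate.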
I would not recommend a two-node Ceph cluster. Instead you could look into ZFS & storage replication.
In addition, I would avoid the partitioning. You can give PVE one of the slower drives (or an additional 128GB SSD) and use the NVMe as cache for ZFS. Note that for both Ceph & ZFS you should use...
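Storage replication is configured per guest with pvesr. A minimal sketch, assuming a VM 100 with ZFS-backed disks and a second node called pve2 (both names are placeholders):

```
# replicate VM 100 to node pve2 every 15 minutes
pvesr create-local-job 100-0 pve2 --schedule "*/15"

# check the state of all replication jobs
pvesr status
```

Unlike Ceph, this gives you asynchronous replication, so on failover you can lose the changes since the last scheduled run.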
Which storage controller do you have in your server? In general ZFS is a good choice, but by default its ARC cache uses up to 50% of the host memory. Depending on how much memory you need for your computations and on the controller, other storage technologies might be interesting as well.
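If 50% is too much for your workload, the ARC size can be capped with a ZFS module option. The option takes a value in bytes; the 8 GiB below is only an example figure:

```shell
# zfs_arc_max is given in bytes; compute the value for an 8 GiB cap
echo $((8 * 1024 * 1024 * 1024))
```

On the PVE host you would then put the line `options zfs zfs_arc_max=8589934592` into /etc/modprobe.d/zfs.conf and run `update-initramfs -u` so the limit applies from the next boot on.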
Yes, as explained by the preconditions in the upgrade guide.
A temporary combination of 6.4 and 7 should work. There are some hints in the forum.
Octopus is OK before the upgrade; you can upgrade Ceph later. See also the upgrade guide.
Great to hear that you got this far! Unfortunately, that process often requires experimenting with a couple of VM configuration options.
Did you get the glitches with the noVNC shell? Maybe you can use some remote desktop solution (Chrome Remote Desktop) instead?
Hi,
importing VMs into Proxmox VE has become much easier. For the CloudReady image:
1. Move the .ova archive to the server (or some accessible shared storage)
2. Extract it
3. Run qm importovf:
qm importovf 137 CloudReady\ Home\ 83.4.ovf tank
where you replace tank with your storage name and 137 with your...
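Put together, the steps might look like this in a shell on the PVE host (the file name, VMID 137, and storage name tank come from the post above; adjust them to your setup):

```
# an .ova file is just a tar archive containing the .ovf manifest and the disk image(s)
tar -xvf CloudReady\ Home\ 83.4.ova

# import the extracted manifest as a new VM on the given storage
qm importovf 137 CloudReady\ Home\ 83.4.ovf tank
```

Afterwards the VM shows up in the web UI like any other guest, and you can adjust hardware settings before the first boot.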
There is a tutorial to install Proxmox VE on top of Debian in the wiki. It does not use dhcpcd, but the explanations concerning the hosts file might still be interesting.
Can you start VMs? It seems the container problem is expected with the old kernel, see https://forum.proxmox.com/threads/container-status-unknown-after-pve5-pve6-blkio-throttle-io_service_bytes_recursive-no-such-file-or-directory-500.72252/
Can you verify that your backups work with another...