Your repositories look OK.
The patch has to be applied to the git repository first. Then at some point there will be a version bump (which you can see as a commit in the log) and
the package will be made available in the repositories.
Do you have other VMs with a working network connection?
Could you please share
cat /etc/network/interfaces # from host
qm config <vmid> # from host
ip a # from within VM
You can also try to use something like tcpdump -envi ens18 "icmp" to find out where your packets get lost.
Usually...
Yes. The image formats column in the table contains only raw for the LVM backend, but raw, qcow2, vmdk, subvol for Directory storages, for example :)
The target storage should be the name that you see in the first column of
pvesm status
and in most places in the GUI. So not a path with /.
Hi, could you please post
pveversion -v
and also the exact log output of your failed restore?
For me with
pve-manager/7.0-10/d2f465d3 (running kernel: 5.11.22-2-pve)
glusterfs-client: 9.2-1
restoring backups from a GlusterFS storage works. It does show the "subvolumes down" message, but...
How did you do that? Can you please post your storage configuration?
cat /etc/pve/storage.cfg
It might be easier to use qm importdisk as in the guide you mentioned. There you can directly specify the LVM storage as in the storage.cfg. If you specify .qcow2 as target format in importdisk as...
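A sketch of the command itself (VM ID, image path and storage name are placeholders to adjust):
qm importdisk <vmid> /path/to/disk.qcow2 <lvm-storage> # the disk shows up as an unused disk of the VM; on LVM(-thin) it will be raw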
There is no magic switch to make everything faster. You will have to do some benchmarks to find out what exactly is the bottleneck. The favorite tools for this are
fio for disk benchmarks (there are some usage examples in the benchmarks on the PVE website)
iperf for network performance
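For example, as a rough sketch (device path, runtime and target IP are placeholders to adjust):
fio --name=randread --filename=/dev/sdX --rw=randread --bs=4k --ioengine=libaio --direct=1 --iodepth=32 --runtime=60 --time_based --readonly # 4k random read test, read-only so no data is written
iperf -s # on the first host
iperf -c <ip-of-first-host> -t 30 # on the second host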
Not really.
Having the OS separate also makes wiping and creating a fresh installation easier.
You can create backups of your guests and manually copy folders like /etc/pve in addition. Making this easier is on the roadmap of PBS.
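As a minimal sketch (the target path is just an example):
tar czf /root/pve-config-$(date +%F).tar.gz /etc/pve # /etc/pve is a mounted cluster filesystem, so archive it while the node is running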
Hi, it would be great if you could also add your suggestion to bugzilla.proxmox.com. There it certainly won't get lost, which can happen in the forum because of the amount of new posts.
You can do it with the API. You can GET on nodes/<node>/qemu to get VM IDs and then GET on nodes/<node>/qemu/<vmid>/config and filter for the bridge name.
Alternatively, you can search /etc/pve for the bridge name.
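A rough example with pvesh on the node (node name, VM ID and vmbr0 are placeholders):
pvesh get /nodes/<node>/qemu # list the VMs with their IDs
pvesh get /nodes/<node>/qemu/<vmid>/config | grep vmbr0 # check a single VM's config for the bridge
grep -r vmbr0 /etc/pve/qemu-server/ # or search the local node's VM configs directly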
This is blocked on purpose in the GUI because LVM thin storages support only raw in Proxmox VE.
There is a wiki page about storages and at the bottom of it you can go to the details for all available storage types. Some of them, for example Directory, will contain qcow2. So you need to set up one...
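A rough sketch with pvesm (storage name and path are just examples):
pvesm add dir mydir --path /mnt/mydir --content images # a directory storage that supports qcow2 images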
Clusters require low latency and it is therefore recommended to have the servers as close together as possible. You can find more information in the Cluster Network Requirements section of the Admin Guide.
When you compare the attributes of your thin volumes, there is an additional letter a for the working pools
twi-aotz-- vs
twi---tz-- for the bad hgst_12tb and seagate10tb.
The a is for active (see the "lvs" man page). I think it should work when you activate the volumes:
lvchange --activate y...
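For example, with the thin pool names from your lvs output (the volume group name is a placeholder):
lvchange --activate y <vgname>/hgst_12tb
lvchange --activate y <vgname>/seagate10tb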
There is a built-in function to sync backups between two PBS installations.
PBS offers some options in the GUI to manage this. It depends a little on the system around the drives (RAID controller, available memory).
If
hostname --ip-address
returns your IP address then you should be fine. And in this case it looks to me like that should be what happens.
You could try to see where your pings get lost using tcpdump
tcpdump -envi vmbr0 "icmp"