I have the same issue with an unprivileged LXC container on a local HW RAID with LVM - the backup destination is NFS.
Running version 6.1
Thanks,
screenie
That's how it looks on all of the servers... I'm not seeing anything wrong here...
And I'm not finding anything in the logs, except that the PVE hosts complain that the storage does not exist or is not reachable when using the name shown below.
root@server01:~# cat /etc/hosts
127.0.0.1...
I recently set up three 6.1 nodes in a cluster and added some existing NFS exports from my storage server, which I have been using for a couple of years (Debian 7).
As I do not have an internal DNS server, I've added an entry to the hosts file, which I'm using in the PVE storage config for the NFS server, but NFS...
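Roughly what I mean, as a sketch - the hostname 'nas01' and the addresses/paths here are just placeholders, not my real config. A line in /etc/hosts on each node:

192.168.1.50    nas01

and an NFS entry in /etc/pve/storage.cfg that references that name instead of an IP:

nfs: nas01-backup
	export /srv/nfs/backup
	path /mnt/pve/nas01-backup
	server nas01
	content backup
	options vers=3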
There is an explicit 'Storage View' - what is the purpose of that one then?
The Storage View is not used very often, so it may not need to be displayed below every cluster member - in our case there are 16 storage entries listed below each member.
As the 'Server View' is the default view...
Honestly, I do not know if the various LXC containers available for download are all created by the Proxmox guys - but the TurnKey containers which are offered for download are certainly created by someone else...
In the PVE GUI there are different views defined.
The Server View contains all containers, VMs, templates and storages, while there is also a Storage View which contains the same hierarchical listing for storages only.
Would it be possible to remove the storages from the Server View, as there is an explicit...
You hadn't previously specified which templates, and from whom, you want to use.
If you choose to use software from a vendor you of course have to trust them, but with PVE you are not limited to using only templates provided by Proxmox.
You can download and use templates from anywhere - that's what I...
A VM template downloaded from someone?
I would never ever do that for production use - only someone without any security concerns would do that... how would you decide whether you can trust the creator/uploader and which 'extra' tools are included?
You should think twice about whether you 'really' need that...
OK, setting the TERM variable to xterm-256color manually works.
Still confused - adding this setting to the container template now is not the problem - I just do not understand why it needs this extra setting, and I didn't have to do it on the PVE host - it's working there by default.
Is...
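In case it helps someone else, this is roughly what I ended up doing (where exactly to put the variable is just my choice, adjust as you like): set it inside the container, e.g.

echo 'export TERM=xterm-256color' >> /root/.bashrc

or add a line TERM=xterm-256color to /etc/environment in the container, then reconnect through the xterm.js console.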
The regular F1 - F12 Keys.
They are working correctly on the PVE host when connected via ssh and also in the LXC container via ssh.
They are working correctly when connected to the console of the PVE host via the PVE GUI using xterm.js - but they are not working properly in the LXC container...
What do i need to configure to get the function keys working in the xterm.js console of an LXC container?
Using the xterm.js shell on the PVE host they are working properly.
Thanks
We are also still using PVE 3.4 with OpenVZ as almost everything is just working.
We tried to move to LXC, but it was a painful experience - in 10 years of OpenVZ we didn't have as many issues and downtimes as in 3 months with LXC, so we keep running the production environment on OpenVZ.
We also...
In case someone has the same issue: it seems vzdump has a problem when the NFS target name in storage.cfg contains a dot.
It doesn't matter if the source directory where the containers are located contains a dot.
Copying files manually to the NFS mount point containing a dot is also not a problem -...
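To illustrate what I mean (the storage name, server and paths below are placeholders, not my actual entries) - the failing entry in /etc/pve/storage.cfg looked roughly like this:

nfs: backup.nas
	export /srv/backup
	path /mnt/pve/backup.nas
	server 192.168.1.50
	content backup

After recreating the storage under a name without a dot (e.g. 'backup-nas'), the vzdump backups went through for me.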
'/srv/vz.local' is used for all the locally stored CTs/VMs; this is the LVM device:
root@fralxpve02:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda4
  VG Name               vg001
  PV Size               651.91 GiB / not usable 3.00 MiB
  Allocatable           yes
  PE Size...
I set up a new 5.3 two-node cluster and created some CTs on a local LVM volume.
When I back up a CT to an NFS storage it starts, says it cannot do a snapshot and continues in suspend mode, but never finishes.
The node, or rather the PVE GUI, gets unresponsive, all NFS mounts hang, and even a reboot is...
Thanks, sorry - I was too unspecific;
I know what needs to be changed in the configuration file (cluster.conf), but is there a special procedure to follow, or do I simply edit the file and restart the pve-cluster service?
I want to avoid unexpected downtime or issues related to this config change;
thx
Hello,
I have a cluster running in multicast mode and the provider stopped supporting multicast - it was turned off during a switch firmware upgrade and they don't want to enable it again.
Is there a procedure to switch from multicast to unicast without breaking things, and with possibly no or minimal...
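For reference, this is the change I found described for a cluster.conf based setup - just a sketch of what I understand, not something I have applied yet; the cluster/node names and the version number are placeholders. Bump config_version and add transport="udpu" to the cman element, e.g.

<?xml version="1.0"?>
<cluster name="mycluster" config_version="12">
  <!-- transport="udpu" switches corosync from multicast to UDP unicast -->
  <cman keyfile="/var/lib/pve-cluster/corosync.authkey" transport="udpu"/>
  <clusternodes>
    <clusternode name="node01" votes="1" nodeid="1"/>
    <clusternode name="node02" votes="1" nodeid="2"/>
  </clusternodes>
</cluster>

then activate the new config on all nodes and restart the cluster services.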
This is how it looked on the remaining PVE 4 nodes:
And the 3.4 cluster is also running cluster sync via multicast, where I never had a problem before - nothing has changed on the infrastructure;