After upgrading my cluster to PVE 7, an LXC container with an OpenVPN server shows an error in the log:
Mon Jul 19 10:32:50 2021 Diffie-Hellman initialized with 2048 bit key
Mon Jul 19 10:32:50 2021 Socket Buffers: R=[131072->131072] S=[16384->131072]
Mon Jul 19 10:32:50 2021 ROUTE_GATEWAY...
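For reference, the tun-device lines from my container config (/etc/pve/lxc/<CTID>.conf). The cgroup2 key is my assumption for PVE 7, since it moved containers to cgroup v2; whether this is the actual cause here is only a guess:
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file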
When I try to edit a network interface in the web GUI (Safari browser, macOS), the edit window runs out of the visible area:
I cannot move or close this window, and no other action is available; all I can do is log out of the GUI.
PVE 6.x displays the same dialog correctly in Safari;
for PVE 7 in...
I am trying to file-restore from a disk with an XFS file system:
Is XFS not supported by PBS?
Package versions:
# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.114-1-pve)
pve-manager: 6.4-6 (running version: 6.4-6/be2fa32c)
I am trying to mount a backup (VM 200):
proxmox-backup-client mount vm/200/2021-05-29T08:16:14Z root.pxar /mnt/backup
but I get an error:
Error: manifest does not contain file 'root.pxar.didx'
Am I doing something wrong?
proxmox-backup: 1.0-4 (running kernel: 5.4.106-1-pve)...
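If I understand the format correctly, a VM backup contains disk images (*.img.fidx) rather than a root.pxar archive, so 'mount' with root.pxar would only work for CT/host backups. A sketch of what I would try instead, assuming the client supports the map subcommand and that the archive is named drive-scsi0.img (the real names should be listed in the snapshot's index.json manifest):
proxmox-backup-client map vm/200/2021-05-29T08:16:14Z drive-scsi0.img
mount /dev/loop0p1 /mnt/backup
(the loop device and partition number will differ)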
I'm trying to understand how ballooning works.
My misunderstanding can be seen in the picture.
Why does the VM see only 2 GB of RAM?
This situation is observed for Linux VMs; Windows VMs see the memory completely.
Yesterday the VM shown above started using swap and the available memory was...
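To compare what the host thinks the balloon is doing with what the guest reports, I check both sides (VMID 100 is an example):
# qm monitor 100
qm> info balloon
and inside the guest:
# free -m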
I very much regret that I upgraded my cluster (via a new install and restoring all VMs and CTs from backup) from 4.x to 5.2! The new Bluestore storage is driving me crazy!
I now do not sleep at night, trying to understand why the same disks worked fine on the old Ceph version,
and in the new...
A cluster of four nodes (cn1, cn2, cn3, cn4). After the last update, there is a problem with the web UI.
If I log into any of the first three nodes (cn1, cn2, cn3), I can manage that node only:
Attempts to take control of the other nodes fail:
But if I log in on the 4th node (cn4)...
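Since the GUI proxies requests between nodes over SSL, my first guess (only a guess) is stale cluster certificates after the update, which can be refreshed on each node:
# pvecm updatecerts
# systemctl restart pveproxy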
I want to add a 4th node to the cluster (PVE 4.4), which uses Ceph.
As I understand it, I do not need to add a 4th monitor; the 3 monitors already available should suffice?
How do I properly install Ceph without a monitor on the 4th node? What I have so far (my current guess is sketched after these steps):
1. pveceph install --version hammer
2. pveceph init...
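My current guess, as a sketch (I have not verified this): since /etc/pve/ceph.conf is shared cluster-wide via pmxcfs, the new node should only need the package install, with no init and no monitor of its own:
on the new node cn4 only:
# pveceph install --version hammer
(no pveceph init and no pveceph createmon here; the existing 3 monitors are used)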
I decided to try Proxmox 5 and to play around with Ceph Luminous.
After installing Proxmox 5 (via a Debian install) and Ceph via pveceph install --version luminous,
and after pveceph init --network, I tried to create a monitor and got an error:
# pveceph createmon
The VMs and CTs have ceased to migrate between nodes. For example:
I tried manual SSH connections between the nodes. They do not work!
root@cn1:~# ssh cn2
... and after 2 minutes:
Connection closed by 192.168.0.240
From my workstation, the SSH connection to all nodes works...
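To see at which stage the connection dies, verbose output from the standard OpenSSH client should help, together with the SSH daemon log on the target node:
root@cn1:~# ssh -vvv cn2
root@cn2:~# journalctl -u ssh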
I want to separate the cluster's own traffic from the bridge (used for access to the VMs and CTs) onto a dedicated adapter.
Is this possible for an existing cluster?
For example, from vmbr0 to eth4:
iface vmbr0 inet static
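What I have in mind, as a sketch (the 10.10.10.x address is a placeholder, one per node): give eth4 its own address in /etc/network/interfaces, then point corosync at that network. As far as I understand, for an existing cluster /etc/pve/corosync.conf must also be edited and its config_version bumped.
auto eth4
iface eth4 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        # dedicated corosync/cluster network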
About 1 hour ago, a backup task started on each of the 3 nodes.
But the backups are hanging:
and not going on... :(
The Stop button in the GUI doesn't work...
ps aux | grep vzdump
I identify the process numbers and:
# kill -9 22609 22610 22614
ps ax | grep vzdump
22614 ? Ds 0:04 task...
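As far as I know, the 'Ds' state means uninterruptible sleep (stuck in kernel I/O), which kill -9 cannot interrupt. To see what the process is blocked on (PID taken from the ps output above):
# cat /proc/22614/wchan; echo
# cat /proc/22614/stack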
I have just created a 3-node cluster.
The VM has two disks: the first on LVM-thin, the second on Ceph.
I'm trying to live-migrate this VM to another node from the GUI:
From the command line, the same task completes successfully:
# qm migrate 201 acn2 -online -with-local-disks -migration_type insecure
My Ceph storage gives an error:
# ceph health
HEALTH_ERR 27 pgs are stuck inactive for more than 300 seconds; 7 pgs down; 27 pgs incomplete; 27 pgs stuck inactive; 27 pgs stuck unclean; 1 requests are blocked > 32 sec
# ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT...
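For more detail on which PGs are affected and on which OSDs they sit, the standard queries:
# ceph health detail
# ceph pg dump_stuck inactive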
After solving the problem with the unexpected reboots:
A new problem has arrived... :)
This cluster is used as storage for backups of VMs and CTs from another cluster (which works
without any problems). This backup...
I cannot establish the cause of the unexpected reboots of 2 (of 3) nodes in my cluster.
Over the past 48 hours there have been several unexpected reboots: one node reboots several times, then the other node does as well.
iLO event log:
This happened after the last update (from pve-no-subscription). But on this...
I tested the default VM migration (8 GB memory, disk on Ceph storage) versus insecure migration.
The results are in the pictures.
Insecure migration is almost 3 times faster. :)
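To make this the default instead of passing the option on every migration, it can be set cluster-wide in /etc/pve/datacenter.cfg (documented option; available from PVE 5.x, if I recall correctly):
migration: type=insecure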