Hello, I have 2 Proxmox PVE 6 nodes in HA (softdog) at two different sites, connected over the internet through a VPN. I use a third machine (a Synology) to keep the VMs highly available using NFS. The problem is that I don't want the HA process to start just because of a loss of connectivity for 30...
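For context on this kind of setup: cluster membership timing is governed by the totem token timeout in /etc/corosync/corosync.conf, and raising it is one knob sometimes discussed for tolerating brief link blips (note that Proxmox recommends low-latency links for clusters, and a longer token also delays legitimate failover). A sketch of the relevant fragment, with an illustrative value and a hypothetical cluster name:

```
totem {
  cluster_name: mycluster   # hypothetical name
  version: 2
  # token timeout in ms; corosync 3 defaults to 1000.
  # 10000 is purely illustrative - tune with care, this also
  # delays detection of real node failures and fencing.
  token: 10000
}
```

Any change must be rolled out via the documented corosync config-versioning procedure so all nodes agree on the new value.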
This morning I decided to make the jump to OVS. I am at the point of applying my changes, but I wanted to check a couple of things first. After I installed *openvswitch*, I noticed that openvswitch.service was disabled. I decided not to enable or start the service until I was sure that it would...
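For reference, on PVE an OVS bridge is normally declared in /etc/network/interfaces through the openvswitch-switch ifupdown integration, so the switch is brought up with networking rather than by hand-starting the service. A minimal sketch (the interface names vmbr0 and eno1, and the addresses, are assumptions):

```
auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    ovs_type OVSBridge
    ovs_ports eno1

auto eno1
iface eno1 inet manual
    ovs_bridge vmbr0
    ovs_type OVSPort
```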
Hi,
Yesterday I ran a short performance test comparing PVE 5.4 and PVE 6.0, mainly to see whether the ZFS performance is better, because we have some trouble with MySQL VMs on ZFS (SSD ZFS RAID 1).
Test:
Hardware:
Dell R610 with 16 GB RAM, HT on - 16 x Intel(R) Xeon(R) CPU X5560 @ 2.80GHz (2 sockets)
2 *...
Hi @ll,
I'm installing my new Proxmox server. My plan is to have a fully encrypted system (I had similar setups before with Debian 6-10, Xen, and encrypted LVM).
Proxmox 6 is already running with encrypted LVM on a 3ware RAID 1 with SSDs (similar partition scheme to what the default installer does...
I am still new to Proxmox, hope you can help me…
A couple of days ago I created a cluster with two nodes, both run PVE 6.0-4.
The first node has a VM and a CT just for testing purposes, the second node is "empty".
The cluster was created on node1, and node2 joined successfully.
Until today...
I am new to Proxmox VE, still reading the docs, so please forgive my easy question.
I need to completely remove the cluster configuration from one of my servers. There are no nodes attached, no VMs, no containers.
I simply created the cluster by mistake, and now I would like to know the...
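For reference, the procedure documented in the PVE admin guide for separating a node without reinstalling essentially stops the cluster services and deletes the corosync configuration. Treat the following as a sketch only and check the current pvecm documentation before running it; it is destructive:

```shell
# Stop the cluster services
systemctl stop pve-cluster corosync

# Start pmxcfs in local mode so /etc/pve is writable without quorum
pmxcfs -l

# Remove the corosync configuration
rm /etc/pve/corosync.conf
rm -rf /etc/corosync/*

# Stop the local-mode instance and restart the service normally
killall pmxcfs
systemctl start pve-cluster
```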
Hi,
I have a Proxmox 6.0-4 cluster, upgraded from 5.4.
When I try to do an offline migration of a VM between servers I get the error:
storage 'local-ssd' is not available on node 'proxmox3'
The local-ssd storage is not used by this VM.
What can cause this error?
VM configuration and full log below...
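One cause often seen with this message is that a storage entry in /etc/pve/storage.cfg is defined cluster-wide but only physically exists on some nodes; migration then fails its availability check even when the VM has no disk on it. A hedged sketch of restricting the storage to the nodes that actually have it (the node names are assumptions):

```shell
# Show which storages PVE considers available on this node
pvesm status

# Restrict 'local-ssd' to the nodes that actually provide it
pvesm set local-ssd --nodes proxmox1,proxmox2
```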
Just upgraded a 3 node cluster to PVE 6.0 last night. I followed the excellent upgrade docs for PVE and Ceph Nautilus upgrades. Before the upgrade from 5.4 I had three ceph monitors called 'a', 'b', 'c'. Mon-a is on host pve01; Mon-b is on host pve02; Mon-c is on host pve03. After the upgrade...
Just upgraded a 3 node cluster to PVE 6.0 last night. I followed the excellent upgrade docs for PVE and Ceph Nautilus upgrades. I added a new OSD using a new hard drive. I initialized the disk with GPT, and the disk appeared to have a GPT partition table per the "Disks" menu of the web GUI...
On PVE 6.0.4 the Active Directory connection stopped working.
With the same settings that work fine in PVE 5.4.11, logging in with Active Directory accounts is not possible on PVE 6.0.4.
Where can I check this issue in the CLI?
Best, Tim.
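Useful CLI starting points for a question like this are the PVE daemon logs and a manual LDAP bind against the domain controller to see whether the directory itself answers. A sketch, where the hostname, bind DN, and base DN are placeholders:

```shell
# Watch authentication attempts as they happen
journalctl -f -u pvedaemon

# Reproduce the LDAP bind outside of PVE (needs the ldap-utils package)
ldapsearch -x -H ldap://dc.example.com \
    -D 'user@example.com' -W \
    -b 'dc=example,dc=com' '(sAMAccountName=user)'
```

If the manual bind fails too, the problem is on the AD/network side rather than in the PVE 6 realm configuration.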
Hello,
Just installed fresh PVE 6.0 on 2 servers; the two are identical:
root@proxmox02:~# pveversion
pve-manager/6.0-2/865bbe32 (running kernel: 5.0.15-1-pve)
I installed with ZFS for the root partition:
root@proxmox02:~# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG...