I have an old cluster that grew up from PVE 4 through various server replacements.
Yesterday I completed the last migration, and it's now a two-node PVE 8.1 cluster with ZFS replication.
I also renamed the servers and resolved the typical issues I've seen many people hit when doing that. Anyway, I thought I was done...
I had the backup storage mounted over NFS, shared by a NAS.
Until last week I always used the default "hard" mount option.
If the NAS dies during a backup, the VM being backed up freezes until I force-unmount the NFS share.
Then I found the "soft" NFS option in this thread, and I tried to use...
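For reference, the "soft" option can be set per storage; a minimal sketch of /etc/pve/storage.cfg, assuming a storage named "backup" and a hypothetical NAS address/export:

```
# /etc/pve/storage.cfg fragment (storage name, server and export are
# assumptions). "soft" makes stalled NFS calls return an error after the
# retransmit limit instead of blocking the client forever.
nfs: backup
        server 192.168.1.50
        export /volume1/backups
        path /mnt/pve/backup
        content backup
        options soft
```

The same can be done from the CLI with `pvesm set backup --options soft`. Note the trade-off: soft mounts replace the hang with possible I/O errors if the NAS disappears mid-write.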
Hi,
I have a PVE host on the Internet; I want to block all traffic between VMs and allow them to reach the Internet only.
I enabled the firewall at the datacenter, node and VM level.
The node firewall works (I can only connect to it from my office's public IP address), but the VM firewall doesn't DROP...
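A minimal per-VM ruleset for this goal might look like the sketch below, assuming a hypothetical VMID 100 and a guest subnet of 10.0.0.0/24:

```
# /etc/pve/firewall/100.fw (VMID and subnet are assumptions)
[OPTIONS]
enable: 1

[RULES]
# drop traffic toward the other guests, let everything else (Internet) out
OUT DROP -dest 10.0.0.0/24
OUT ACCEPT
```

A common gotcha: the firewall must also be enabled on the VM's network device itself, i.e. `firewall=1` in the NIC line (`net0: virtio=...,bridge=vmbr0,firewall=1`), otherwise the per-VM ruleset is never applied.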
Hi all,
I just subscribed to a nested PVE VDS from Contabo.
The first thing I did was upgrade to PVE 7.1; after that I imported some VMs from my on-prem server, only to find they don't start: Linux VMs boot with "kernel supported virtualization = no", while Windows VMs get stuck at boot (blue...
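In a nested setup, two things usually need checking: whether the outer hypervisor exposes nesting at all, and whether the inner VMs use CPU type "host". A sketch for an Intel host (the VMID is an example):

```
# On the nested PVE host: is nesting enabled in KVM?
cat /sys/module/kvm_intel/parameters/nested    # Y or 1 = enabled

# Are virtualization flags visible at all on this (virtual) host?
egrep -c '(vmx|svm)' /proc/cpuinfo             # 0 means none

# Give an inner VM the host CPU flags (VMID 100 is an assumption)
qm set 100 --cpu host
```

If the outer provider does not pass VMX/SVM through, nothing on the inner PVE can fix it.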
I have a pair of HPE DL360 Gen8 servers:
dual Xeon, 64GB RAM, 2 10k SAS HDDs for the system (ZFS RAID1) and 4 consumer SATA SSDs.
They're for internal use, and show abysmal performance.
At first I had Ceph on those SSDs (with a third node), then I had to move everything to a NAS temporarily.
Now I...
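When diagnosing consumer SSDs under ZFS or Ceph, the usual suspect is sync-write performance. A quick fio sketch (the target path is an assumption; point it at a file on the pool under test):

```
# 4k sync writes at queue depth 1: the pattern ZFS and Ceph journals
# generate, and the one consumer SSDs are typically worst at
fio --name=synctest --filename=/rpool/fio.test --size=1G \
    --rw=write --bs=4k --iodepth=1 --fsync=1 \
    --runtime=30 --time_based
```

Enterprise SSDs with power-loss protection often do 10-100x better on this specific workload than consumer drives with similar headline specs.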
I have a cluster with P420 RAID controllers, and very bad performance with SSDs.
I know I can configure the controller in HBA mode, but then I would lose the system RAID1.
I would like to switch to HBA mode, reinstall the system on ZFS RAID1 and then move the configuration from the old...
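For reference, on Smart Array controllers that support it, HBA mode can be toggled with HPE's ssacli; a sketch, assuming the controller sits in slot 0 (verify first, and note that all existing array configuration is lost):

```
# Inspect the controller before touching it
ssacli ctrl all show config

# Enable HBA (pass-through) mode; the slot number is an assumption
ssacli ctrl slot=0 modify hbamode=on
```

After a reboot the disks appear as plain block devices, which is what ZFS and Ceph expect.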
I'm replacing two nodes in my PVE 5.4 cluster. I will upgrade to 6.x after that.
I installed the first of the new nodes, joined it to the cluster, reloaded the web GUI, everything OK.
Then, from another node's web GUI, I clicked on the new node's "Ceph" section.
It proposed to install the Ceph packages...
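On a mixed-version cluster it's worth pinning the Ceph release explicitly rather than accepting what the GUI proposes; a sketch (the release name is an assumption and must match the existing nodes):

```
# See which Ceph release the existing mon/OSD nodes actually run
ceph versions

# Install matching packages on the new node instead of the GUI default
pveceph install --version luminous
```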
I want to back up an unprivileged LXC to an NFS share (QNAP NAS).
This is a frequent question, and usually the answer is to disable root squashing (no_root_squash) on the export or to use --tmpdir.
I tried both, without success:
INFO: starting new backup job: vzdump 108 --storage backup --mode suspend --mailto log@example.com...
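For reference, the --tmpdir variant of that job would look like the sketch below (the local path is an example; it must be on storage the container's files can be staged to as root):

```
# Keep vzdump's temporary files on local disk instead of the squashed
# NFS export; only the finished archive is written to the NFS storage
vzdump 108 --storage backup --mode suspend --tmpdir /var/tmp \
    --mailto log@example.com
```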
I played a bit with certificates and Let's Encrypt, failed, and rolled back.
pmgproxy did not restart; I resolved that with pmgconfig apicert --force 1.
Now pmgproxy starts and I can log in, but when I try to make any change I get this error and then have to log in again:
root@mailscan:/etc/pmg# ls -alh...
I have a client with an old installation: PVE 4.4 on an OLD HP server, with HW RAID and a working BBU.
While waiting to replace it, I'm trying to find ways to speed up the VMs.
I saw that most of the virtual disks were created with IDE controllers and writethrough enabled, so I thought it would be an easy win to...
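A sketch of moving a disk off IDE, assuming a hypothetical VMID 100 on local-lvm, the VM powered off, and VirtIO drivers already present in the guest:

```
# Detach the IDE disk; it becomes an "unused" volume in the VM config
qm set 100 --delete ide0

# Re-attach the same volume on the SCSI bus with the default cache mode
qm set 100 --scsihw virtio-scsi-pci
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=none

# Make sure the VM still boots from the right device
qm set 100 --bootdisk scsi0
```

On Windows guests, boot once with the old disk still attached as a second (dummy) VirtIO disk so the driver gets installed, then switch the boot disk.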
I'm following the Full Mesh guide, method 2 (routed, not broadcast), and everything works.
I want to add fault tolerance, to handle cable/NIC port failures.
At first I thought of using bonding: I have 3 nodes with 4 10Gb ports each, and I connected each node to each other with 2 bonded cables. It...
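One way to sketch a bonded point-to-point link in /etc/network/interfaces (interface names and addresses are assumptions; with two cables per peer, active-backup is the simplest fault-tolerant mode):

```
# Bond toward one peer node; repeat with different ports and subnets
# for the other peer. Requires ifupdown bonding support.
auto bond1
iface bond1 inet static
        address 10.15.15.50/24
        bond-slaves enp4s0f0 enp4s0f1
        bond-mode active-backup
        bond-miimon 100
```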
I have a new Dell server, and installed Proxmox without a problem.
I'm now installing W2016 ROK, but it hangs in the ROK license check, i.e. the check that it's running on real Dell hardware.
I already dealt with this problem on HP hardware, and resolved it using SMBIOS parameters. With Dell I'm not able to...
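For reference, the SMBIOS approach translates to qm roughly as follows (the VMID and values are assumptions; recent qm versions want the string fields base64-encoded):

```
# Read the strings the ROK media checks from the physical host
dmidecode -s system-manufacturer
dmidecode -s system-product-name

# Pass them into the guest (base64=1 marks the fields as encoded)
qm set 100 --smbios1 "manufacturer=$(echo -n 'Dell Inc.' | base64),base64=1"
```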
this is my test cluster:
node A: 3 filestore 1TB OSDs
node B: 2 filestore 1TB OSDs, 1 bluestore 1TB OSD
node C: 6 bluestore 300GB OSDs
I noticed that the bluestore OSDs take 3.5GB of RAM each, while the filestore ones take 0.7GB each.
Following this thread, I added this to ceph.conf...
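The usual knob for this is the BlueStore cache size; a ceph.conf sketch (the value is an example, and on newer releases osd_memory_target replaces these options):

```
# ceph.conf fragment: cap BlueStore's cache at 1 GiB per OSD
[osd]
bluestore_cache_size = 1073741824
```

OSDs must be restarted for the change to take effect.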
I'm about to build a new, small, general-purpose cluster.
The selected hardware is this:
SuperMicro TwinPro (2029TP-HC0R), with 3 nodes, each with:
1 Xeon Scalable CPU (P4X-SKL3106-SR3GL)
64GB RAM DDR4-2666 (MEM-DR432L-CL01-ER26)
4-port 10Gb NIC (AOC-MTG-I4TM-O SIOM) for Ceph traffic (mesh)
4...
I have a cluster with Proxmox 4.4.
Three nodes: two IBM x3400 and a small PC. The two IBM servers host the Ceph data; the third is only a monitor.
I added another server, an HP DL380 G7, installed Proxmox 5.2 on it and joined it to the cluster (no Ceph on it yet).
I will upgrade the other servers later.
I have a...
Hi all,
I have a doubt about RAID controllers and Ceph.
I know that I must not put Ceph OSD disks behind RAID, so strictly speaking I would not need a RAID controller.
But the controller is what allows me to hot-swap disks, so I DO need it.
Is that right?
I have a working Proxmox 4.4 + Ceph Hammer cluster with three nodes.
In Ceph, my pool has the policy 2/1 (two copies, at least 1 needed to keep working).
I created another pool, because I want a 3/1 policy for more critical VMs, and I want to assign it to the same OSDs as the existing pool. Is that possible? When I...
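Pools are logical namespaces over the same OSD set, so a second pool with a different replication policy is fine; a sketch (the pool name and PG count are examples):

```
# Create the pool and set the 3/1 policy on it; it will place data on
# the same OSDs as the existing pool, per the CRUSH rules
ceph osd pool create critical 128
ceph osd pool set critical size 3
ceph osd pool set critical min_size 1
```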
I have a cluster with 3 nodes which act both as VM nodes and Ceph storage.
node1: 1 OSD, 1TB disk
node2: 2 OSDs, 1TB disks
node3: 2 OSDs, 1TB disks
The total is 4.7TB of usable disk. I created a Ceph pool with size 2 / min_size 1, so I have a single replica of my data. I've now read that it's a bad...
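For reference, the policy can be changed on an existing pool; Ceph then backfills the extra replicas in the background (the pool name is an assumption):

```
# Move an existing pool from 2/1 to the safer 3/2
ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2
```

Expect usable capacity to drop to one third of raw, and watch that the cluster has room for the extra copies before starting.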
We have a low-budget 3-node cluster (5-year-old servers with 32GB RAM), with Ceph on two of them.
They are connected through an HP 1920G switch, 2 NICs for Ceph and 2 for corosync and the LAN.
After some minutes, the cluster stops working (each node sees only itself as online) and after some time the...
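With corosync 2.x, the classic culprit on switches like this is multicast being dropped (IGMP snooping without a querier); omping is the standard check (the IPs are examples; run it simultaneously on all nodes):

```
# ~10 minutes of multicast probes between the three nodes; sustained
# loss here explains nodes seeing only themselves as online
omping -c 600 -i 1 -q 192.168.1.1 192.168.1.2 192.168.1.3
```

If multicast fails after a few minutes, either enable an IGMP querier on the switch or disable IGMP snooping for the corosync VLAN.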