Search results

  1. [SOLVED] migration fails on different migration network

    I have an old cluster, which grew up from PVE 4 through various server changes. Yesterday I completed the last transformation, and now it's a two-node PVE 8.1 cluster with ZFS replication. I also renamed the servers, and resolved the typical issues I see many had when doing it. Anyway I'd say I've done...
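
    A minimal sketch of the setting usually involved here, the migration property in /etc/pve/datacenter.cfg (the CIDR is a placeholder, not taken from the thread):

      # /etc/pve/datacenter.cfg
      migration: secure,network=10.10.10.0/24
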
  2. NFS soft option causes I/O errors

    I had the backup storage mounted over NFS, shared by a NAS. Until last week I always used the default "hard" mount option. If the NAS dies during a backup, the VM being backed up freezes until I force-unmount the NFS share. Then I found the "soft" NFS option in this thread, and I tried to use...
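
    For context, the soft behaviour is set per storage in /etc/pve/storage.cfg; a minimal sketch with placeholder server and paths:

      nfs: backup
          server 192.168.1.10
          export /backup
          path /mnt/pve/backup
          content backup
          options soft

    The trade-off is that a soft mount gives up after its retries and returns I/O errors to the client instead of blocking, which is exactly the failure mode the thread title refers to.
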
  3. blocking traffic in-between VMs

    Hi, I have a PVE host on the Internet. I want to block any traffic between VMs and allow them to reach the Internet only. I enabled the firewall at the datacenter, node and VM level. The node firewall works (I can only connect to it from my office public IP address), but the VM firewall doesn't DROP...
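
    A sketch of a per-VM ruleset that allows Internet access but not guest-to-guest traffic, with made-up VMID, subnet and office IP:

      # /etc/pve/firewall/101.fw
      [OPTIONS]
      enable: 1
      policy_in: DROP

      [RULES]
      IN ACCEPT -source 203.0.113.10 -p tcp -dport 22
      OUT DROP -dest 192.168.100.0/24

    Note that VM-level rules only take effect when the Firewall checkbox is also set on the VM's network device (firewall=1 on the netX line).
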
  4. nesting not working with newer kernels

    Hi all, I just subscribed to a nested PVE VDS from Contabo. The first thing I did was upgrade to PVE 7.1; after that I imported some VMs from my on-prem server, only to find they don't start: Linux VMs start with kernel supported virtualization = no, while Windows VMs get stuck at boot (blue...
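
    For reference, the two pieces nesting needs, sketched with a hypothetical VMID; on a rented VDS the first part is under the provider's control, not the customer's:

      # on the physical host: enable nested virtualization for Intel (kvm-amd with nested=1 for AMD)
      echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
      modprobe -r kvm_intel && modprobe kvm_intel
      cat /sys/module/kvm_intel/parameters/nested    # should report Y
      # on the guest VM: pass the host CPU flags through
      qm set 100 --cpu host
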
  5. very poor performance with consumer SSD

    I have a pair of HPE DL360 Gen8 servers, dual Xeon, 64GB RAM, 2 10k SAS HDDs for the system (ZFS RAID1) and 4 consumer SATA SSDs. They're for internal use, and show abysmal performance. At first I had Ceph on those SSDs (with a third node), then I had to move everything to a NAS temporarily. Now I...
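
    A common way to quantify this is a single-job synchronous 4k write test with fio (the device path is a placeholder; this destroys data on the target):

      fio --name=synctest --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based

    Consumer SSDs without power-loss protection tend to collapse on this workload, which is what ZFS and Ceph hit constantly.
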
  6. move cluster node configuration

    I have a cluster with P420 RAID controllers, and very bad performance with SSDs. I know I can configure the controller in HBA mode, but then I will lose the system RAID1. I would like to switch to HBA mode, reinstall the system on ZFS RAID1 and then move the configuration from the old...
  7. ceph install broken on new node

    I'm replacing two nodes in my PVE 5.4 cluster; I will upgrade to 6.x after that. I installed the first of the new nodes, joined it to the cluster, reloaded the web GUI, everything OK. Then, from another node's web GUI, I clicked on the new node's "Ceph" section. It proposed to install the Ceph packages...
  8. [SOLVED] LXC (unprivileged) backup task failing

    I want to back up an unprivileged LXC container to an NFS share (QNAP NAS). This is a frequent question, and usually the answer is to disable root squashing (no_root_squash) on the export or to use --tmpdir. I tried both, without success: INFO: starting new backup job: vzdump 108 --storage backup --mode suspend --mailto log@example.com...
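
    For context, the two usual fixes look like this (paths, subnet and export are placeholders):

      # /etc/vzdump.conf: use a local scratch directory for temporary files instead of the NFS target
      tmpdir: /var/tmp

      # or, on the NAS side, an export that does not squash root
      /backups 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
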
  9. [SOLVED] invalid csrf token on every edit

    I played a bit with certificates and Let's Encrypt, failed and rolled back. pmgproxy did not restart; I resolved that with pmgconfig apicert --force 1. Now pmgproxy starts and I can log in, but when I try to make any changes, I get this error and then have to log in again: root@mailscan:/etc/pmg# ls -alh...
  10. IDE vs SCSI

    I have a client with an old installation: PVE 4.4 on an old HP server, with HW RAID and a working BBU. While waiting to replace it, I'm trying to find ways to speed up the VMs. I saw that most of the virtual disks were created with IDE controllers and writethrough caching enabled, so I thought it was an easy win to...
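
    A sketch of the usual conversion, with made-up VMID and volume names; Windows guests need the virtio-scsi driver installed before the switch, and on PVE 4.x the older --bootdisk syntax applies:

      qm set 100 --delete ide0                      # the volume is kept and shows up as unused0
      qm set 100 --scsihw virtio-scsi-pci
      qm set 100 --scsi0 local-lvm:vm-100-disk-1    # reattach the same volume on the SCSI bus
      qm set 100 --bootdisk scsi0
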
  11. Separate Cluster Network

    Is this wiki article still valid with Proxmox 6? https://pve.proxmox.com/wiki/Separate_Cluster_Network
  12. Ceph in Mesh network, fault tolerance

    I'm following the Full Mesh guide, method 2 (routed, not broadcast), and everything works. I want to add fault tolerance, to handle cable/NIC port failures. At first I thought of using bonding: I have 3 nodes, with 4 10Gb ports each. I connected each node to each other with 2 bonded cables. It...
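
    A sketch of one bonded point-to-point link (interface names and address are made up); the routed setup from the guide then runs on top of the bonds instead of the single ports:

      # /etc/network/interfaces fragment on one node, towards one peer
      auto bond1
      iface bond1 inet static
          address 10.15.15.50
          netmask 255.255.255.0
          bond-slaves ens1f0 ens1f1
          bond-miimon 100
          bond-mode active-backup

    Active-backup needs no switch support and works over direct cables; the full-mesh wiki also documents a broadcast-bond variant.
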
  13. PROXMOX and Windows ROK license for DELL

    I have a new Dell server, and installed Proxmox without a problem. I'm now installing a Windows Server 2016 ROK, but it hangs at the ROK license check, i.e. the check that it is running on genuine Dell hardware. I already dealt with this problem on HP hardware, and resolved it using SMBIOS parameters. With Dell I'm not able to...
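
    The SMBIOS route on Proxmox is the smbios1 VM property; a sketch with placeholder VMID and strings (keep the uuid=... that is already in the VM config, and note that recent PVE versions expect base64-encoded values for strings containing spaces):

      qm set 100 --smbios1 'uuid=<existing-uuid>,manufacturer=Dell,product=PowerEdge'
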
  14. ram usage with bluestore

    This is my test cluster: node A: 3 filestore 1TB OSDs; node B: 2 filestore 1TB OSDs, 1 bluestore 1TB OSD; node C: 6 bluestore 300GB OSDs. I noticed that the bluestore OSDs take 3.5GB of RAM each, while the filestore ones take 0.7GB each. Following this thread, I added this to ceph.conf...
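
    The usual knob for this is the per-OSD memory target; a sketch of a ceph.conf fragment (the value is an example, and on older BlueStore releases the option is bluestore_cache_size instead):

      [osd]
      # roughly 2 GiB per OSD, in bytes
      osd_memory_target = 2147483648
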
  15. New 3-nodes cluster suggestion

    I'm about to build a new, small, general-purpose cluster. The selected hardware is a SuperMicro TwinPro (2029TP-HC0R) with 3 nodes, each with: 1 Xeon Scalable CPU (P4X-SKL3106-SR3GL); 64GB DDR4-2666 RAM (MEM-DR432L-CL01-ER26); a 4-port 10Gb card (AOC-MTG-I4TM-O SIOM) for Ceph traffic (mesh); 4...
  16. network problem on win vm

    I have a cluster with Proxmox 4.4: three nodes, two IBM x3400 and a small PC. The two IBMs host the Ceph data, the third is only a monitor. I added another server, an HP DL380 G7, installed Proxmox 5.2 on it and joined it to the cluster (no Ceph on it yet). I will upgrade the other servers later. I have a...
  17. Ceph, RAID cards and Hot swap

    Hi all, I have a doubt about RAID controllers and Ceph. I know that I must not put Ceph OSD disks under RAID, so I wouldn't need a RAID controller. But the controller is what allows me to hot-swap disks, so I DO need it. Is that right?
  18. new ceph pool

    I have a working three-node Proxmox 4.4 + Ceph Hammer cluster. My Ceph pool has a 2/1 policy (two copies, at least one needed to keep working). I created another pool, because I want a 3/1 policy for more critical VMs, and I want to assign it to the same OSDs as the existing pool. Is it possible? When I...
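
    A sketch of the commands involved, with a made-up pool name and PG count:

      ceph osd pool create critical 128 128
      ceph osd pool set critical size 3
      ceph osd pool set critical min_size 1

    Both pools land on the same OSDs as long as they use the same CRUSH rule, which is the default.
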
  19. server dead in pve-ceph cluster

    I have a cluster with 3 nodes which act both as VM nodes and Ceph storage: node1: 1 OSD on a 1TB disk; node2: 2 OSDs on 1TB disks; node3: 2 OSDs on 1TB disks; 4.7TB usable in total. I created a Ceph pool with size 2 / min 1, so I have a single replica of my data. I've now read that it's a bad...
  20. [SOLVED] (I think) multicast problem

    We have a low-budget 3-node cluster (5-year-old servers with 32GB of RAM), with Ceph on two of them. They are connected through an HP 1920G switch, 2 NICs for Ceph and 2 for Corosync and the LAN. After some minutes the cluster stops working (each node sees only itself as online), and after some time the...
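
    The standard way to verify this is omping between all nodes, run in parallel on each of them (hostnames are placeholders):

      omping -c 10000 -i 0.001 -F -q node1 node2 node3     # quick burst test
      omping -c 600 -i 1 -q node1 node2 node3              # roughly 10 minutes, exposes IGMP snooping/querier problems

    On many HP 1920-series switches, IGMP snooping without an active querier is a frequent cause of exactly this symptom.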
