Search results

  1.

    [SOLVED] login error after PBS vm crash

    After a power issue caused the PBS VM to crash, I can't log in via the web UI anymore, and PVE connections don't work either. I can still log in over SSH. In /var/log/syslog I see these errors, repeated every 10 seconds: 2024-11-21T10:24:46.980889+00:00 PBS1 proxmox-backup-api[146]: authentication failure...
  2.

    [SOLVED] proxmox-backup-client garbage-collect permission

    I want to schedule garbage collection for my datastores with cron, because if I use the web UI scheduling I risk overlapping GC jobs (my PBS is on spinning disks). I see that to use the command proxmox-backup-client garbage-collect on mydatastore, I need to write the user password in the...
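
    A root cron entry on the PBS host itself sidesteps the password problem, because the server-side manager command needs no API login (a sketch; the datastore names and the schedule are assumptions):

    ```
    # /etc/cron.d/pbs-gc -- hypothetical schedule, staggered so that GC jobs
    # on the spinning disks never overlap
    0 2 * * 1  root  proxmox-backup-manager garbage-collection start mydatastore
    0 2 * * 3  root  proxmox-backup-manager garbage-collection start otherstore
    ```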
  3.

    [SOLVED] migration fails on different migration network

    I have an old cluster that grew up from PVE 4 through various server changes. Yesterday I completed the last transformation, and it is now a two-node PVE 8.1 cluster with ZFS replication. I also renamed the servers and resolved the typical issues many people hit when doing that. Anyway, I'd say I've done...
  4.

    NFS soft option causes I/O errors

    I had the backup storage mounted via NFS, shared by a NAS. Until last week I always used the default "hard" mount option. If the NAS dies during a backup, the VM being backed up freezes until I force-unmount the NFS share. Then I found the "soft" NFS option in this thread, and I tried to use...
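
    When the share is mounted through PVE storage, the soft behaviour can be set in /etc/pve/storage.cfg via the options line (a sketch; the server, export and timeout values are assumptions, and soft mounts trade indefinite hangs for possible I/O errors once the retries expire):

    ```
    nfs: backup
            server 192.168.1.10
            export /volume1/backup
            path /mnt/pve/backup
            content backup
            options soft,timeo=150,retrans=3
    ```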
  5.

    blocking traffic in-between VMs

    Hi, I have a PVE node on the Internet. I want to block all traffic between VMs and allow them to reach the Internet only. I enabled the firewall at the datacenter, node and VM level. The node firewall works (I can only connect to it from my office's public IP address), but the VM-level PVE firewall doesn't DROP...
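
    A frequent cause of the VM-level firewall appearing to do nothing is that the firewall flag must also be set on each virtual NIC, or the VM's ruleset is never applied (a sketch; the VMID and the VM subnet are assumptions):

    ```
    # the NIC must carry firewall=1, otherwise /etc/pve/firewall/100.fw is ignored
    qm set 100 --net0 virtio,bridge=vmbr0,firewall=1

    # /etc/pve/firewall/100.fw -- drop traffic to the other VMs, allow the rest out
    [OPTIONS]
    enable: 1
    policy_in: DROP
    policy_out: DROP

    [RULES]
    OUT DROP -dest 192.168.100.0/24
    OUT ACCEPT
    ```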
  6.

    nesting not working with newer kernels

    Hi all, I just subscribed to a nested PVE VDS from Contabo. The first thing I did was upgrade to PVE 7.1; after that I imported some VMs from my on-prem server, only to find they don't start: Linux VMs boot with "kernel supported virtualization = no", while Windows VMs get stuck at boot (blue...
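
    Nesting has to be both enabled on the outer hypervisor and passed through to the inner VMs; the usual checks look like this (a sketch for Intel hosts; the VMID is an assumption, and on AMD the module is kvm-amd with nested=1):

    ```
    # inside the nested PVE: is nesting exposed by the provider?
    cat /sys/module/kvm_intel/parameters/nested      # expect Y (or 1)

    # on a hypervisor you control: enable nesting persistently
    echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
    modprobe -r kvm_intel && modprobe kvm_intel

    # the inner VMs also need the host CPU type passed through
    qm set 101 --cpu host
    ```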
  7.

    very poor performance with consumer SSD

    I have a pair of HPE DL360 Gen8 servers, each with dual Xeons, 64GB RAM, 2 x 10k SAS HDDs for the system (ZFS RAID1) and 4 consumer SATA SSDs. They're for internal use and show abysmal performance. At first I had Ceph on those SSDs (with a third node), then I had to move everything to a NAS temporarily. Now I...
  8.

    move cluster node configuration

    I have a cluster with P420 RAID controllers and very bad performance with SSDs. I know I can configure the controller in HBA mode, but then I would lose the system RAID1. I would like to switch to HBA mode, reinstall the system on ZFS RAID1 and then move the configuration from the old...
  9.

    ceph install broken on new node

    I'm replacing two nodes in my PVE 5.4 cluster; I will upgrade to 6.x after that. I installed the first of the new nodes, joined it to the cluster, reloaded the web GUI, and everything was OK. Then, from another node's web GUI, I clicked on the new node's "Ceph" section. It offered to install the Ceph packages...
  10.

    [SOLVED] LXC (unprivileged) backup task failing

    I want to back up an unprivileged LXC to NFS (a QNAP NAS). This is a frequent question, and usually the answer is to remove NO_ROOT_SQUASH or to set --tmpdir. I tried both, without success: INFO: starting new backup job: vzdump 108 --storage backup --mode suspend --mailto log@example.com...
  11.

    [SOLVED] invalid csrf token on every edit

    I played around with certificates and Let's Encrypt, failed, and rolled back. pmgproxy did not restart; I resolved that with pmgconfig apicert --force 1. Now pmgproxy starts and I can log in, but when I try to make any change I get this error and then have to log in again: root@mailscan:/etc/pmg# ls -alh...
  12.

    IDE vs SCSI

    I have a client with an old installation: PVE 4.4 on an old HP server with hardware RAID and a working BBU. While waiting to replace it, I'm trying to find ways to speed up the VMs. I saw that most of the virtual disks were created with IDE controllers and writethrough caching enabled, so I thought it would be an easy win to...
  13.

    Separate Cluster Network

    Is this wiki article still valid with Proxmox 6? https://pve.proxmox.com/wiki/Separate_Cluster_Network
  14.

    Ceph in Mesh network, fault tolerance

    I'm following the Full Mesh guide, method 2 (routed, not broadcast), and everything works. I want to add fault tolerance to handle cable/NIC port failures. At first I thought of using bonding: I have 3 nodes with 4 x 10Gb ports each, and I connected each node to each other with 2 bonded cables. It...
  15.

    PROXMOX and Windows ROK license for DELL

    I have a new Dell server and installed Proxmox without a problem. I'm now installing W2016 ROK, but it hangs at the ROK license check, i.e. the check that it's running on genuine Dell hardware. I already dealt with this problem on HP hardware and resolved it using SMBIOS parameters. With Dell I'm not able to...
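
    On PVE the guest sees whatever DMI strings are set on the VM, so the HP approach carries over (a sketch; the VMID and the example strings are assumptions and should be copied from the physical host's dmidecode output; recent PVE versions may require base64-encoded values for strings containing spaces):

    ```
    # read the real values from the Dell host
    dmidecode -s system-manufacturer
    dmidecode -s system-product-name

    # hypothetical example values
    qm set 100 --smbios1 'manufacturer=Dell Inc.,product=PowerEdge R740'
    ```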
  16.

    ram usage with bluestore

    This is my test cluster: node A: 3 FileStore 1TB OSDs; node B: 2 FileStore 1TB OSDs and 1 BlueStore 1TB OSD; node C: 6 BlueStore 300GB OSDs. I noticed that the BlueStore OSDs take 3.5GB of RAM each, while the FileStore ones take 0.7GB each. Following this thread, I added this to ceph.conf...
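
    On recent Ceph releases (12.2.9 and later) the knob for this is osd_memory_target, which caps BlueStore's per-OSD RAM budget; earlier releases used the BlueStore cache sizes instead (a config sketch; the 1 GiB figure is an assumption for a small test cluster):

    ```
    # ceph.conf, [osd] section
    [osd]
    osd_memory_target = 1073741824    # 1 GiB; the default is 4 GiB
    # pre-12.2.9 equivalent:
    # bluestore_cache_size = 536870912
    ```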
  17.

    New 3-nodes cluster suggestion

    I'm about to build a new, small, general-purpose cluster. The selected hardware is a SuperMicro TwinPro (2029TP-HC0R) with 3 nodes, each with: 1 Xeon Scalable CPU (P4X-SKL3106-SR3GL), 64GB DDR4-2666 RAM (MEM-DR432L-CL01-ER26), a 4-port 10Gb NIC (AOC-MTG-I4TM-O SIOM) for Ceph traffic (mesh), 4...
  18.

    network problem on win vm

    I have a cluster with Proxmox 4.4: three nodes, two IBM x3400s and a small PC. The two IBMs host the Ceph data; the third is only a monitor. I added another server, an HP DL380 G7, installed Proxmox 5.2 on it and joined it to the cluster (not yet running Ceph). I will upgrade the other servers later. I have a...
  19.

    Ceph, RAID cards and Hot swap

    Hi all, I have a doubt about RAID controllers and Ceph. I know I must not put Ceph OSD disks behind RAID, so in principle I wouldn't need a RAID controller. But the controller is what allows hot-swapping disks, so I DO need it. Is that right?
  20.

    new ceph pool

    I have a working three-node Proxmox 4.4 + Ceph Hammer cluster. My Ceph pool has a 2/1 policy (two copies, at least one required to work). I created another pool because I want a 3/1 policy for more critical VMs, and I want to assign it to the same OSDs as the existing pool. Is that possible? When I...
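
    Pools are only logical namespaces over the same set of OSDs, so this works; replication is a per-pool setting (a sketch with Hammer-era commands; the pool name and PG count are assumptions):

    ```
    # create the new pool on the same OSDs, then set its replication policy
    ceph osd pool create critical 128
    ceph osd pool set critical size 3
    ceph osd pool set critical min_size 1
    ```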