@dcsapak
1) version is
2) There is a schedule
The verification on May 13 was successful, but on May 20 it already revealed errors. However, there are no backups of virtual machine ID 118 in the list, although this virtual machine exists and is running.
3) In syslog there are no errors or messages...
Hi guys!
I've noticed that backup tasks created in Proxmox VE 7.4 are not executed on schedule. We moved the backup storage (its IP address changed) and re-mounted it on the server; the mount point did not change. Since then the tasks have not run on schedule, although manual...
Hello. Thank you for the response. At the moment I'm not able to filter the syslog to find the reason. The system freezes at this point during boot, and if I try to restart the host I see an error that DM cannot finalize a job. It looks like DM cannot log in to the datastore.
Hi guys! I have a cluster with regular servers and blade servers. At the moment I'm trying to connect a Dell VNX7600 datastore over iSCSI; the regular servers connected without problems, but after connecting a blade and rebooting I got a boot failure (screen attached).
I've tried deleting the wwid and the routes to the datastore, but...
@Moayad Hello!
At the moment the problem with adding the new node is:
Dec 02 17:50:23 20pve03 corosync[3444]: [CFG ] Cannot configure new interface definitions: To reconfigure an interface it must be deleted and recreated. A working interface needs to be available to corosync at all times...
@Moayad I found the problem. Before, I added node 20pve03 using the management IP as link0 and the corosync IP as link1. After that mistake (we use the management VLAN IP for management, the cluster VLAN IP for the cluster, and the corosync VLAN IP for corosync) I reconfigured corosync.conf, changing link0 to the correct...
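For reference, a corrected nodelist entry in corosync.conf might look like this (a hypothetical sketch — the node id and addresses are placeholders, not the actual configuration):

```
nodelist {
  node {
    name: 20pve03
    nodeid: 11
    quorum_votes: 1
    ring0_addr: 10.10.20.3   # cluster VLAN IP (link0)
    ring1_addr: 10.10.30.3   # corosync VLAN IP (link1)
  }
}
```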
Hello @Moayad
I've already deleted the node, but the folder is still there.
At the moment I have reinstalled the node and want to try to re-add it to the cluster.
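The leftover-folder cleanup can be sketched like this (assuming the node was already removed with `pvecm delnode`; the real path would be /etc/pve/nodes/20pve03, but a temp directory is used here so nothing on a live cluster is touched):

```shell
# Stand-in for the cluster filesystem path /etc/pve/nodes
PVE_NODES=$(mktemp -d)
mkdir -p "$PVE_NODES/20pve03"

# After `pvecm delnode 20pve03`, the stale per-node directory can be removed:
rm -rf "$PVE_NODES/20pve03"

ls -A "$PVE_NODES"    # prints nothing: the leftover folder is gone
```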
pveversion output:
pveversion -v
proxmox-ve: 7.3-1 (running kernel: 5.15.74-1-pve)
pve-manager: 7.3-3 (running version: 7.3-3/c3928077)
pve-kernel-5.15: 7.2-14...
Hi Team! Today I tried to add a new host to the PVE cluster (10 nodes).
I used the command:
pvecm add [cluster_ip] --link0 [cluster_vlan_ip] --link1 [corosync_vlan_ip]
Everything completed without errors. But in the cluster I see only the original 10 nodes (without the new one) in Datacenter.
But! I can see it in Datacenter...
Hi guys! At the moment we are trying to configure monitoring following the manual.
But now we get the error: Cannot fetch data: Post "https://admin:***@172.16.133.200:6856/request?wait=1": tls: first record does not look like a TLS handshake.
We are trying to get information from ceph-manager on ports 6856 and 6857, but...
I have resolved my problem. After copying ceph.client.admin.keyring to all nodes, I started comparing the output of rbd ls -l Ceph_Pool with the VMs in the GUI and found one VM that had an HDD configured on Ceph but whose image was not in the Ceph list. After deleting this VM the problem was solved.
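That comparison can be sketched as a simple set difference (a hypothetical example with made-up image names; in practice the first list would come from `rbd ls Ceph_Pool` and the second from the disk entries in the VM configs under /etc/pve/qemu-server/):

```shell
# Images that actually exist in the Ceph pool (stand-in for: rbd ls Ceph_Pool)
printf '%s\n' vm-100-disk-0 vm-101-disk-0 | sort > /tmp/in_ceph.txt

# Images referenced by VM configs (stand-in for grepping /etc/pve/qemu-server/*.conf)
printf '%s\n' vm-100-disk-0 vm-101-disk-0 vm-118-disk-0 | sort > /tmp/in_configs.txt

# Referenced by a VM config but missing from the pool -- the broken VM to look at:
comm -13 /tmp/in_ceph.txt /tmp/in_configs.txt
```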
@aaron I have tried copying ceph.client.admin.keyring to all nodes, but nothing changed.
Could the blade servers not having their own storage be causing the problem?
No, all 9 nodes in the cluster already have the Ceph packages installed.
The VM Disks list is not working in the GUI on any node. But from the CLI I can get the list of all disks located on Ceph.
Output from a random node:
root@220pve01:~# ls -la /etc/ceph
total 12
drwxr-xr-x 2 ceph ceph 4096 Jul 28 18:32 .
drwxr-xr-x 98...
In the web interface, select the Ceph pool under a host, then select VM Disks.
Yes, the Ceph services are already installed on all nodes in the cluster.
FYI: the cluster has 9 nodes; 4 nodes have disks and Ceph installed, and the other 5 nodes are blade servers which use the Ceph storage.
I see a symlink only on the node which has...