Hi,
we have installed a fresh Proxmox 8.1 server with 4 x 10Gbit NICs and 2 x 1Gbit NICs:
2 x 10Gbit as a bond for storage
2 x 10Gbit as a bond for the VM IPs
2 x 1Gbit as a bond for Proxmox management
cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file...
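For reference, a minimal ifupdown sketch of the layout described above (interface names, bond modes and addresses are assumptions, not taken from the actual file):

```
auto bond0
iface bond0 inet static
        address 10.10.10.11/24              # storage network (assumed)
        bond-slaves enp65s0f0 enp65s0f1
        bond-mode 802.3ad
        bond-miimon 100

auto bond1
iface bond1 inet manual                     # VM traffic, bridged below
        bond-slaves enp65s0f2 enp65s0f3
        bond-mode 802.3ad
        bond-miimon 100

auto bond2
iface bond2 inet manual                     # Proxmox management
        bond-slaves eno1 eno2
        bond-mode active-backup

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond1
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet static
        address 192.168.1.11/24             # management IP (assumed)
        bridge-ports bond2
        bridge-stp off
        bridge-fd 0
```

With 802.3ad the switch side must have matching LACP LAGs configured; active-backup for the management bond needs no switch support.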
Hi,
after upgrading our 7.2 node to 7.4 with kernel pve-kernel-5.15.102-1-pve, the node is no longer able to boot.
It fails while initiating the network tasks on bond0; the messages on screen show problems with the bonding links.
The node's management IP is pingable, but there is no access via SSH or the GUI...
Hi,
I have one question: we have a Proxmox cluster installed on a server with one disk for the OS (we didn't notice that one disk was missing on delivery).
Now we have installed a second disk in the Proxmox PVE host and want to add it as a RAID 1.
Current situation:
Device Start End...
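If the OS disk was installed with ZFS on root ("rpool"), a single-disk pool can be turned into a mirror after the fact; a sketch under that assumption (device names /dev/sda and /dev/sdb are placeholders, and an LVM/ext4 install would need a different, mdadm-based approach):

```shell
# Clone the partition table of the existing disk onto the new one,
# then randomize the GUIDs of the copy
sgdisk /dev/sda -R /dev/sdb
sgdisk -G /dev/sdb

# Attach the matching partition to the existing vdev; "attach" mirrors,
# whereas "zpool add" would stripe and is NOT what we want here
zpool attach rpool /dev/sda3 /dev/sdb3

# Watch the resilver complete
zpool status rpool
```

Making the second disk bootable as well (bootloader/ESP on the new disk) is a separate step that depends on the Proxmox version in use.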
Hi,
in our cluster, PVE node05 is marked with a "?" in the Proxmox GUI.
The VMs on that node are still running and reachable, but I can't even access that server via SSH or the GUI.
Is there a way to migrate running VMs to another node while the server is unreachable via SSH and the management GUI?
icmp to...
Hi,
we have installed two clusters which have some of the same storages defined.
If I now delete e.g. VM 103 on cluster A, and there is also a VM 103 on cluster B with access to all storages, the storage of VM 103 on cluster B will also be deleted.
Is it possible to change the deletion of a VM in that...
Hi,
we have a Proxmox cluster based on Virtual Environment 5.4-13.
We have some Debian clients which sporadically lose IPv6 connectivity.
If we run
ip -6 addr del 2a00:d0c0:xxx::xxx/64 dev ens18 and
ip -6 addr add 2a00:d0c0:xxx::xxx/64 dev ens18
the IPv6 communication comes back.
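A hypothetical watchdog around that workaround (the address keeps the placeholder from the post; interface and gateway are assumptions):

```shell
#!/bin/sh
# Re-apply the IPv6 address whenever the gateway stops answering.
ADDR="2a00:d0c0:xxx::xxx/64"   # placeholder prefix from the post
DEV="ens18"
GW="fe80::1"                   # assumed link-local gateway

if ! ping6 -c 3 -W 2 "$GW%$DEV" >/dev/null 2>&1; then
    ip -6 addr del "$ADDR" dev "$DEV"
    ip -6 addr add "$ADDR" dev "$DEV"
fi
```

Run from cron every minute this only papers over the symptom; the underlying cause (often duplicate address detection or neighbour discovery issues) still needs to be found.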
Any...
Hi,
we have new server hardware with a Broadcom SAS 9440-8i.
Debian does not recognize the RAID controller or the individual drives on that controller.
Has anybody already installed Debian with a 9440-8i for Proxmox?
Regards,
Volker
Hello,
we currently operate a small cluster with 8 servers. As storage we use an SSD pool (12 OSDs), a SAS pool (26 OSDs), and an NFS share (RAID 6) for backups.
With the Ceph storages, especially the SAS pool, we keep running into performance problems and higher IO...
Hi Support,
we are trying to enable bonding with Debian 9.3 and Proxmox Virtual Environment 5.1-46.
The bond consists of 2 x 10Gbit interfaces and we are trying to use mode 802.3ad.
The LACP is active and stable until we shut down one link on the Juniper switch for testing.
When the port goes down, we...
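When a LAG member drops, the kernel's view of the bond usually tells more than the GUI; a few diagnostic commands (bond0 is assumed to be the bond name):

```shell
# Per-slave MII status, aggregator IDs and the partner's MAC --
# after the test, both slaves should show the same aggregator again
cat /proc/net/bonding/bond0

# Kernel messages about the bonding driver and LACP state changes
dmesg | grep -i bond
```

If the slaves end up in different aggregators after the link comes back, the LACP negotiation with the switch did not re-converge; setting `bond-lacp-rate 1` (fast LACPDUs) on the Linux side is worth testing.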
Hi,
we are using Proxmox 5.1-36 and Ceph Luminous.
Sometimes we see higher IO latency on some PGs, and when I watch iostat -x 1 in a shell, I see one physical drive backing Ceph with high awaits and %util.
Is it possible to identify the VM that is causing the awaits and IO usage?
Best regards,
Volker
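There is no direct mapping from a block device to a VM, but it can be narrowed down step by step; a sketch assuming the busy drive hosts a single OSD (the OSD id, pool and image names below are placeholders):

```shell
# 1. Find which OSD lives on the busy device reported by iostat
ceph-disk list                 # Luminous-era tool

# 2. List the placement groups stored on that OSD (assume it is osd.12)
ceph pg ls-by-osd 12

# 3. The number before the dot in each PG id is the pool id; map it to a name
ceph osd lspools

# 4. List the RBD images in that pool -- Proxmox names them after the VM id,
#    e.g. vm-103-disk-1 belongs to VM 103
rbd -p <pool> ls
```

From there, correlating with per-process disk activity (e.g. iotop on the nodes hosting the candidate VMs) usually identifies the guest.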
Hi,
we have configured an external Ceph cluster over InfiniBand and added it to the Proxmox cluster.
The Ceph cluster has a public network (172.16.65.0/24) and a cluster network (10.16.70.0/24).
We also added the ceph pool to storage.cfg:
rbd: ssd01
content images
monhost 172.16.65.21 172.16.65.22...
Hi,
we set up a new environment with 3 nodes: Debian Stretch, Proxmox 5.1, Ceph Luminous.
Each node has 4 SSDs as OSDs, 12 OSDs in total.
Based on pg-calc we set a pg_num of 512 for the Ceph pool.
The Ceph network is connected via InfiniBand. If we install a VM on the Ceph storage and run a dd inside...
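For reference, the pg-calc arithmetic behind that 512 (a sketch; the usual defaults of 100 target PGs per OSD and replica size 3 are assumed here):

```python
def suggested_pg_num(osds: int, pgs_per_osd: int = 100, replicas: int = 3) -> int:
    """Round (osds * pgs_per_osd / replicas) up to the next power of two."""
    target = osds * pgs_per_osd / replicas   # 12 * 100 / 3 = 400
    pg_num = 1
    while pg_num < target:
        pg_num *= 2
    return pg_num

print(suggested_pg_num(12))  # 400 rounds up to 512
```
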
Hi,
we set up a new cluster on Debian Stretch with Proxmox 5.1 and wanted to use Ceph.
While configuring the setup we get the following error:
root@cloud-node12:~# pveceph createmon
unable to find local address within network '172.16.65.0/24'
Our /etc/network/interfaces looks like:
iface vmbr0...
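pveceph looks for a local address inside the configured Ceph public network, so the node itself needs an IP in 172.16.65.0/24 on some interface; a sketch of a stanza that would satisfy the check (interface name and host address are assumptions):

```
auto ib0
iface ib0 inet static
        address 172.16.65.12/24
```

After bringing that interface up, `pveceph createmon` should find the local address.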
Hi Folks,
we have a Proxmox cluster of 4 nodes running Proxmox 4.4.
While migrating VMs from node01 to another node, all hardware nodes rebooted, and I can't find any hint in the logfiles about what happened. As storage we configured Ceph with 25 OSDs and 4 monitors (now reduced to 3).
Any ideas where I can...
Hi,
we want to create two Ceph pools:
one with SAS OSDs, and one with SSDs as OSDs.
Is it possible to manage that in Proxmox 5.1 via the web interface or CLI, or do we have to write the Ceph configuration by hand?
Regards,
Volker
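The Proxmox 5.1 GUI cannot split pools by media type, but Luminous itself can via CRUSH device classes, so the CLI route works without hand-editing ceph.conf; a sketch (rule and pool names are made up, pg_num values are examples):

```shell
# Luminous tags each OSD with a device class (hdd/ssd) automatically:
ceph osd tree

# One replicated CRUSH rule per class (SAS spinners show up as "hdd"):
ceph osd crush rule create-replicated replicated-ssd default host ssd
ceph osd crush rule create-replicated replicated-sas default host hdd

# Pools bound to those rules:
ceph osd pool create ssd-pool 512 512 replicated replicated-ssd
ceph osd pool create sas-pool 1024 1024 replicated replicated-sas
```

The pools can then be added to Proxmox through the usual RBD storage entries.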
Hi,
we have a running setup with:
4 servers, Debian 8.x with Proxmox 4.4, Ceph "Hammer" with 30 OSDs.
Last weekend, to allow for further growth, we installed 4 servers with Debian Stretch and Proxmox 5.1.
If we log in to the Proxmox panel, the new servers are marked as "offline". I didn't see any hints...