Each server is configured with dual network cards: both 10gbit ports are used for storage, plus 1gbit for everything else.
We use an active-backup bond for the cards.
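For reference, a minimal sketch of what that kind of active-backup bond looks like in /etc/network/interfaces (interface names and the address here are placeholders, not our exact config):

auto bond0
iface bond0 inet static
        address 10.10.10.11/24
        bond-slaves eno1 eno2
        bond-mode active-backup
        bond-primary eno1
        bond-miimon 100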
Ceph is set up using the Proxmox GUI; the only thing we have tried tinkering with is "osd_memory_target".
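In case anyone asks how: it can be set either at runtime through the Ceph config database or in /etc/pve/ceph.conf. The 4GB value below is just an example, not necessarily what we run.

ceph config set osd osd_memory_target 4294967296

or in /etc/pve/ceph.conf:

[osd]
osd_memory_target = 4294967296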
4x 480GB Samsung SM863a in each server, one OSD per SSD...
Ceph runs on redundant 10gbit (10.10.10.0/24).
Our servers run on 1gbit (10.0.0.0/24).
Each server has 4 SSDs that run Ceph.
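The relevant bit of /etc/pve/ceph.conf looks roughly like this (a sketch; I am assuming both the public and the cluster network sit on the 10gbit subnet):

[global]
public_network = 10.10.10.0/24
cluster_network = 10.10.10.0/24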
I have pulled all logs from all 6 servers. It's about 400MB total; what logs do you want?
Thanks :)
Edit: Managed to zip them all down to 156MB (removed lastlog, which is...
Hello.
This is the second time this has happened. I don't remember if the last time was when upgrading from Proxmox 5 to 6.
We have 6 Proxmox servers in a cluster, and we use each of them for Ceph storage as well.
Both times we have migrated all VMs off the Proxmox node, updated it, rebooted...
Some problem where one of the network cards had been renamed to "rename2" and lost link in the process.
A reboot of the server solved the issue.
It has a dual gigabit Ethernet card in bonding mode; that one is a PCIe card, because the server's integrated ports are the dual 10gbit ones.
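Something I have been considering instead of a reboot next time is pinning the interface name to its MAC address with a systemd .link file, roughly like this (file name, MAC and interface name are placeholders, and I have not tested this on these servers):

# /etc/systemd/network/10-storage-nic.link
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=eth10g0

From what I understand, an update-initramfs -u may be needed afterwards so the rename also applies during early boot.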
It is only migrating a VM with VLAN tag 50 to this Proxmox node.
Migrating from:
task started by HA resource agent
2020-09-01 12:58:53 starting migration of VM 100 to node 'proxmox4' (10.0.0.14)
2020-09-01 12:58:53 starting VM 100 on remote node 'proxmox4'
2020-09-01 12:58:55 [proxmox4] Error...
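For what it's worth, since the VM has a VLAN tag, the bridge on the target node has to pass tagged traffic. A minimal sketch of the VLAN-aware bridge style of setup, purely as an illustration (names and address are placeholders, not the actual config on proxmox4):

auto vmbr0
iface vmbr0 inet static
        address 10.0.0.99/24
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094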
How does this work with regard to leaving old traces behind, if I remove proxmox4, reinstall, and then add the node back with the same name?
I have removed a temporary node named proxmox9, and it is still listed in the Ceph->OSD list (even though I removed all OSDs before removing the node). As with network...
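For the leftover proxmox9 entry, my assumption is that the now-empty host bucket just has to be removed from the CRUSH map by hand, something like this (I have not run it yet; osd.X below is a placeholder, only relevant if any stray OSD entries are still listed):

ceph osd crush remove proxmox9   # remove the empty host bucket from the CRUSH map
ceph auth del osd.X              # only if a stray OSD entry is still around
ceph osd rm X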
I was thinking of shutting the server off and cloning each of the SSHDs to the new 1.2TB SAS drives we have: just putting them into another server and running dd to copy the entire disk. Then I don't need to think about shrinking the filesystem.
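Concretely, per disk, I had something like this in mind (device names are placeholders; the target SAS drive is bigger than the source, so the whole disk including the partition table just gets copied 1:1):

dd if=/dev/sdX of=/dev/sdY bs=4M status=progress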
No. This is a separate network for Ceph.
Even from...
Hello.
We recently installed 3 new Proxmox servers, so we are now running 6 Proxmox servers in total. 2 of the new servers are identical (HP DL360p G8).
The only difference between these two new servers is that the one with problems is running Seagate 1TB FireCuda SSHD boot disks in RAIDZ1.
All of these...
Hello.
Just upgraded Proxmox from 6.1.3 to 6.1.7, I think. We have 3 nodes. Everything went fine; I upgraded one node at a time and rebooted.
Then I upgraded the last node, and after rebooting that last server every VM locked up, and Ceph is now doing this:
Does the new version require some...
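For reference, these are the generic commands I am looking at the cluster with, plus the noout flag that I understand is normally set around node reboots (standard Ceph commands, nothing specific to this version):

ceph -s                # overall cluster health
ceph health detail     # more detail on the current warnings/errors
ceph versions          # confirm all daemons ended up on the same release
ceph osd set noout     # normally set before rebooting a node, unset again afterwards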
Hello.
We installed a new VM for our customer, and in the process we "moved" the customer's data disk to the new VM (detached the disk from the first VM, then added it to the config of the second VM via the terminal).
Then we removed the first VM, which also removed all its disks, even the detached one connected...
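For anyone reading this later, the attach step was roughly the following (VM IDs, storage and slot are examples; we may just as well have edited /etc/pve/qemu-server/<vmid>.conf directly):

qm set 101 --scsi1 local-lvm:vm-100-disk-1   # point the new VM at the existing volume
qm rescan                                    # let Proxmox re-scan storages and update the configs

The catch, as far as I understand it, is that the volume name still carries the old VM's ID, so the old VM is still treated as its owner, and removing that VM took the volume with it.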