Search results

  1. potetpro

    Last Proxmox cluster node update caused critical problems in Ceph.

    On all servers, bond1 (the 10 Gbit bond) has an MTU of 9000 configured, and jumbo frames are also enabled on the switch.
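
    For reference, a minimal sketch of such a bond in /etc/network/interfaces; the NIC names and addresses below are placeholders, not the poster's actual values. Jumbo frames can be verified end to end with a do-not-fragment ping:

        auto bond1
        iface bond1 inet static
            address 10.10.10.11/24
            bond-slaves enp1s0f0 enp1s0f1
            bond-mode active-backup
            bond-miimon 100
            mtu 9000

        # 9000 bytes minus 28 bytes of IP/ICMP headers:
        ping -M do -s 8972 10.10.10.12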
  2. potetpro

    Last Proxmox cluster node update caused critical problems in Ceph.

    Proxmox6: this was when Ceph started going haywire.
  3. potetpro

    Last Proxmox cluster node update caused critical problems in Ceph.

    Each server is configured with dual network cards, both on 10 Gbit for storage, plus 1 Gbit. We use an active-backup setup for the cards. Ceph is set up using the GUI in Proxmox; the only thing we have tried tinkering with is "osd_memory_target". 4x 480 GB Samsung SM863a in each server, one OSD per SSD...
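
    As a sketch, osd_memory_target can be set at runtime or in the config file; the 4 GiB value here is only an example, not the poster's actual setting:

        # runtime, value in bytes (Ceph Nautilus and later)
        ceph config set osd osd_memory_target 4294967296

        # or per cluster in /etc/pve/ceph.conf:
        [osd]
        osd_memory_target = 4294967296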
  4. potetpro

    Last Proxmox cluster node update caused critical problems in Ceph.

    All of the log files: https://potetmos.com/index.php/s/GmigGDEWmo2qNFD
  5. potetpro

    Last Proxmox cluster node update caused critical problems in Ceph.

    Ceph runs on redundant 10 Gbit (10.10.10.0/24). Our servers run on 1 Gbit (10.0.0.0/24). Each server has 4 SSDs that run Ceph. I have pulled all logs from all 6 servers; it's about 400 MB total. What logs do you want? Thanks :) Edit: Managed to zip them all down to 156 MB (removed lastlog, which is...
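
    A rough sketch of how logs like this could be gathered in one go, assuming node names proxmox1 through proxmox6 and the default log paths (illustrative only, not the poster's exact procedure):

        # collect all logs from each node, skipping lastlog
        for n in proxmox1 proxmox2 proxmox3 proxmox4 proxmox5 proxmox6; do
            ssh root@$n 'tar czf - --exclude=lastlog /var/log' > "$n-logs.tar.gz"
        done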
  6. potetpro

    Last Proxmox cluster node update caused critical problems in Ceph.

    Hello. This is the second time this has happened; I don't remember if the last time was when upgrading from Proxmox 5 to 6. We have 6 Proxmox servers in a cluster, using each of them for Ceph storage as well. Both times we migrated all VMs off the Proxmox node, updated it, rebooted...
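
    The usual per-node pattern for a rolling upgrade on a hyper-converged Ceph cluster looks roughly like the sketch below; setting noout keeps Ceph from rebalancing while a node is down. This is the general recommendation, not necessarily the exact steps taken here:

        ceph osd set noout            # don't rebalance while the node reboots
        # migrate all VMs off the node, then:
        apt update && apt dist-upgrade
        reboot
        # once the node is back and Ceph reports HEALTH_OK:
        ceph osd unset noout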
  7. potetpro

    Confirmation when removing VM, all associated disks will be removed.

    Yes. This should be reworked so it is clear that all detached disks will be removed as well.
  8. potetpro

    [SOLVED] Migration failed: can't add vlan tag to interface

    Somehow one of the network cards had been renamed to "rename2" and lost link in the process. A reboot of the server solved the issue. The card is a dual gigabit PCIe Ethernet card in bonding mode; we use it because the server's integrated ports are dual 10 Gbit.
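
    A quick way to spot a half-renamed interface without rebooting, as a sketch; the MAC and name in the .link file are hypothetical:

        ip -br link              # a NIC stuck mid-rename shows up as "rename2"
        dmesg | grep -i renamed  # udev logs each rename it performs

        # pinning the name with a systemd .link file avoids the rename race:
        # /etc/systemd/network/10-lan0.link
        [Match]
        MACAddress=aa:bb:cc:dd:ee:ff
        [Link]
        Name=lan0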
  9. potetpro

    [SOLVED] Migration failed: can't add vlan tag to interface

    It only happens when migrating a VM with VLAN tag 50 to this Proxmox node. Migrating from:

        task started by HA resource agent
        2020-09-01 12:58:53 starting migration of VM 100 to node 'proxmox4' (10.0.0.14)
        2020-09-01 12:58:53 starting VM 100 on remote node 'proxmox4'
        2020-09-01 12:58:55 [proxmox4] Error...
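
    For the tag to be attachable on the target node, the bridge there has to be VLAN-aware (or have a matching VLAN interface). A minimal VLAN-aware bridge stanza; vmbr0 and bond0 are used here only for illustration:

        auto vmbr0
        iface vmbr0 inet static
            address 10.0.0.14/24
            bridge-ports bond0
            bridge-stp off
            bridge-fd 0
            bridge-vlan-aware yes
            bridge-vids 2-4094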
  10. potetpro

    [SOLVED] Ceph - mon.proxmox4 has slow ops - mon.proxmox4 crashed

    How does this work with regard to leaving old traces if I remove proxmox4, reinstall, and then add the node back with the same name? I have removed a temporary node named proxmox9, and it is still listed in the Ceph->OSD list (even though I removed all OSDs before removing the node). As with network...
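
    Leftover entries like this can be cleaned up by hand; a sketch, assuming the stale proxmox9 bucket really holds no OSDs anymore:

        ceph osd tree                    # confirm the proxmox9 bucket is empty
        ceph osd crush remove proxmox9   # drop the empty host bucket from the CRUSH map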
  11. potetpro

    [SOLVED] Ceph - mon.proxmox4 has slow ops - mon.proxmox4 crashed

    I was thinking of shutting the server off and cloning each of the SSHDs to the new 1.2 TB SAS drives we have: just putting them into another server and running dd to copy the entire disk. Then I don't need to think about shrinking the filesystem. No, this is a separate network for Ceph. Even from...
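
    The clone described would look roughly like this, with /dev/sdX (source SSHD) and /dev/sdY (new, equal-or-larger SAS drive) as placeholders:

        dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=noerror,sync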
  12. potetpro

    [SOLVED] Ceph - mon.proxmox4 has slow ops - mon.proxmox4 crashed

    Might be those SSHD boot drives :(

        root@proxmox4:~# ceph crash info 2020-03-10_08:11:03.445795Z_adc310f8-4172-42f1-ada1-d612e4d5006b
        {
            "os_version_id": "10",
            "assert_condition": "abort",
            "utsname_release": "5.3.13-3-pve",
            "os_name": "Debian GNU/Linux 10 (buster)"...
  13. potetpro

    [SOLVED] Ceph - mon.proxmox4 has slow ops - mon.proxmox4 crashed

        root@proxmox4:~# ceph crash info 2020-03-09_23:29:31.307248Z_825c9fed-00ec-4917-bb39-e13dc2fed5bb
        {
            "os_version_id": "10",
            "utsname_release": "5.3.13-3-pve",
            "os_name": "Debian GNU/Linux 10 (buster)",
            "entity_name": "mon.proxmox4",
            "timestamp": "2020-03-09 23:29:31.307248Z"...
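
    For reference, the related crash-module commands for listing and acknowledging these entries:

        ceph crash ls              # list recorded crashes
        ceph crash info <id>       # full report for one crash
        ceph crash archive <id>    # acknowledge one crash
        ceph crash archive-all     # acknowledge all of them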
  14. potetpro

    [SOLVED] Ceph - mon.proxmox4 has slow ops - mon.proxmox4 crashed

    Hello. We recently installed 3 new Proxmox servers, so we are now running 6 Proxmox servers. 2 of the new servers were identical (HP DL360p G8). The only difference between these two new servers is that the one with problems is running Seagate 1 TB FireCuda SSHD boot disks in RAIDZ1. All of these...
  15. potetpro

    [SOLVED] Ceph in critical condition after upgrade. In production.

    An extra reboot of the last server fixed the problem. I don't know what happened, but now I get this message:
  16. potetpro

    [SOLVED] Ceph in critical condition after upgrade. In production.

    Hello. I just upgraded Proxmox from 6.1.3 to 6.1.7, I think. We have 3 nodes and everything went fine: I upgraded one node at a time and rebooted. Then I upgraded the last node, and after rebooting the last server every VM locked up, and Ceph is now doing this: Does the new version require some...
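
    The first things worth checking in that state, as a sketch:

        ceph -s               # overall cluster state
        ceph health detail    # which PGs/OSDs/mons are the problem
        ceph versions         # confirm every daemon runs the same version post-upgrade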
  17. potetpro

    Confirmation when removing VM, all associated disks will be removed.

    Yes, I know. I was thinking of something like: "In case you want to keep the disk, but not the VM."
  18. potetpro

    Confirmation when removing VM, all associated disks will be removed.

    Hello. We installed a new VM for our customer, and in the process we "moved" the customer's data disk to the new VM (detached the disk from the first VM and added it to the config of the second VM via the terminal). Then we removed the first VM, which also removed all disks, even the detached one connected...
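
    On Ceph-backed storage (assumed here), that manual "move" amounts to renaming the RBD image so it follows the new owner and then attaching it to the target VM; a sketch with hypothetical VM IDs 100/101, assuming both the Proxmox storage and the underlying pool are named 'ceph-vm':

        # rename the image to match the new owner's VM ID
        rbd mv ceph-vm/vm-100-disk-1 ceph-vm/vm-101-disk-1
        # attach it to the new VM
        qm set 101 --scsi1 ceph-vm:vm-101-disk-1

    The rename matters because Proxmox treats volumes named after a VM ID as owned by that VM, so destroying VM 100 would otherwise take the disk with it, which is exactly what happened here.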
