Search results

  1. potetpro

    Our server crashed in production while live migrating.

    Hello. One of our servers just rebooted while we were live migrating a VM in production. What logs do I need to grab ASAP, before they are overwritten? My first guess is that it has to do with HA. The VM and the 3 existing servers had HA configured. The new machine that we were migrating was not yet in...
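
    A minimal sketch of what I would copy off the node right away, assuming a stock PVE/HA setup (paths and unit names below are the standard ones; adjust to your install):

        # journal of the boot that crashed (the previous boot, seen from after the reboot)
        journalctl -b -1 > /root/journal-prev-boot.log
        # HA, corosync and cluster services, for the fencing/HA angle
        journalctl -u pve-ha-lrm -u pve-ha-crm -u corosync -u pve-cluster > /root/ha-cluster.log
        # classic syslog/kernel log plus the migration task logs
        cp /var/log/syslog /var/log/kern.log /root/
        cp -r /var/log/pve/tasks /root/pve-tasks
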
  2. potetpro

    VM import from libvirt - Inaccessible boot device

    Still the same. Something to do with Windows. What is the performance difference between SCSI and VirtIO block on Ceph, or just in general?
  3. potetpro

    VM import from libvirt - Inaccessible boot device

    It was installed with VirtIO in libvirt. I'll check if the virtio-scsi drivers are actually installed first.
  4. potetpro

    VM import from libvirt - Inaccessible boot device

    So I just imported a VM from libvirt. Using SCSI it BSODs with Inaccessible boot device, but using VirtIO block it works. The VM has VirtIO drivers installed and works great in libvirt. Do I have to change something on the VM to make it accept SCSI? The SCSI controller is the best choice after...
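
    One workaround that is often suggested for this case (a sketch only; the VM ID, storage and volume names are placeholders): boot Windows once from the working VirtIO block disk with a small extra SCSI disk attached so the vioscsi driver gets bound, then move the boot disk over to SCSI.

        # temporary 1 GiB disk on the virtio-scsi controller, boot Windows once with it
        qm set 100 --scsihw virtio-scsi-pci --scsi1 ceph-pool:1
        # after that boot: shut down, detach the disk from virtio0 and reattach it as scsi0
        qm set 100 --delete virtio0
        qm set 100 --scsi0 ceph-pool:vm-100-disk-0 --bootdisk scsi0
        qm set 100 --delete scsi1        # drop the temporary disk again
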
  5. potetpro

    Does qm importdisk automatically convert from qcow2 to raw?

    Does qm importdisk automatically convert from qcow2 to raw when importing to Ceph, or do I have to use qemu-img to convert first? Thanks :)
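
    As far as I understand it, qm importdisk converts to whatever format the target storage supports, so a qcow2 source ends up as a raw RBD image on Ceph without a separate qemu-img step. A sketch with example names:

        # import and convert in one step; on RBD storage the result is a raw image
        qm importdisk 100 /path/to/disk.qcow2 ceph-pool
        # manual equivalent, normally not needed
        qemu-img convert -p -f qcow2 -O raw /path/to/disk.qcow2 /tmp/disk.raw
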
  6. potetpro

    Ceph reports wrong pool size after upgrade.

    [global] auth client required = cephx auth cluster required = cephx auth service required = cephx cluster network = 10.10.10.0/24 fsid = 1f6d8776-39b3-44c6-b484-111d3c8b8372 mon allow pool delete = true osd journal size = 5120 osd pool default min size = 2...
  7. potetpro

    Ceph reports wrong pool size after upgrade.

    root@proxmox9:~# ceph df detail RAW STORAGE: CLASS SIZE AVAIL USED RAW USED %RAW USED ssd 2.6 TiB 1.3 TiB 1.3 TiB 1.3 TiB 48.97 TOTAL 2.6 TiB 1.3 TiB 1.3 TiB 1.3 TiB 48.97 POOLS: POOL ID...
  8. potetpro

    Ceph reports wrong pool size after upgrade.

    Hello. We just upgraded our production cluster to PVE 6 and Ceph to version 14. Everything went great, but now Ceph reports the wrong pool size. Unless this is another way of viewing the pool. We have 3 servers with 2 SSDs in each server, a total of 6x480GB. Replication 3, minimum 2. So we use 400GB of...
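
    Back-of-the-envelope numbers for that setup (a sketch, assuming replication 3 and that Nautilus' ceph df now reports per-pool USED as raw space including replicas rather than stored data):

        raw capacity : 6 x 480 GB  = 2.88 TB  (~2.6 TiB, the RAW STORAGE line)
        usable data  : 2.6 TiB / 3 = ~0.87 TiB with replication 3
        raw used     : ~0.43 TiB of data x 3 replicas = ~1.3 TiB  -> ~49 %RAW USED
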
  9. potetpro

    Proxmox 5to6 test-upgrade, Ceph

    Yeah, just did a complete test upgrade; everything worked like the guide said :) Yes, that's where we keep the full-image backups ;)
  10. potetpro

    Proxmox 5to6 test-upgrade, Ceph

    Hello. I am currently doing a test upgrade in a virtual environment before upgrading our production environment. We have 3 servers running Ceph and VMs. We have VMs running, and I see in the documentation that you can migrate the VMs to an upgraded host and keep upgrading your cluster...
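
    A compressed sketch of the per-node flow from the 5-to-6 guide as I ran it in the test cluster (the guide also moves corosync to v3 cluster-wide before this; migrate guests off each node first, repository names as in the guide):

        pve5to6                                              # built-in checklist script, clear its warnings first
        sed -i 's/stretch/buster/g' /etc/apt/sources.list    # plus whichever PVE repo list you use
        apt update && apt dist-upgrade                       # one node at a time
        # once every node runs PVE 6: upgrade Ceph Luminous -> Nautilus per its own guide
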
  11. potetpro

    Alternative to Samsung 863a for Ceph

    Yes, my bad, "The Kingston SSD", Alwin :) I will take a look at the SM883. Thanks.
  12. potetpro

    Adding cluster ring

    Yup, same as I was thinking. Thanks Tim :)
  13. potetpro

    Adding cluster ring

    Howdy. We have a system in production that has 2 networks: the main network for cluster communication and VM internet access, and one 10gbit network for Ceph. Currently we are just using the main net as the cluster network. Should I add the second 10gbit network as a backup cluster ring? If a network cable...
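
    For reference, a sketch of what a second ring looks like in /etc/pve/corosync.conf (addresses are made up; every node gets both a ring0_addr and a ring1_addr, and config_version in the totem section has to be bumped when editing):

        nodelist {
          node {
            name: proxmox1
            nodeid: 1
            quorum_votes: 1
            ring0_addr: 192.168.1.11    # main / VM network
            ring1_addr: 10.10.10.11     # 10gbit ceph network as fallback ring
          }
          # ...same pair of addresses for the other nodes...
        }
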
  14. potetpro

    Alternative to Samsung 863a for Ceph

    Ok, thanks. Has anyone tested the Samsung data center SSD? https://www.kingston.com/datasheets/sedc500r_en.pdf
  15. potetpro

    Alternative to Samsung 863a for Ceph

    Hello. We have a production environment using Ceph and Proxmox, and the disks are sometimes difficult to get hold of. Are there any alternative or newer disks that can replace the 863a without performance loss, or for when Samsung stops producing them? Thanks :)
  16. potetpro

    Windows terminal server breaks webpages sometimes.

    Found this thread: https://forum.proxmox.com/threads/ceph-bad-crc-signature-and-socket-closed.38681/ I disabled krbd in the Ceph storage. That seems to have fixed the error messages in dmesg.
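
    For context, the relevant bit of /etc/pve/storage.cfg (the storage/pool names and monitor addresses are examples); krbd 0 makes VM disks go through librbd inside QEMU instead of the kernel RBD client:

        rbd: ceph-vm
            content images
            krbd 0          # was 1; the kernel client was logging bad crc / socket closed in dmesg
            pool rbd
            monhost 10.10.10.1 10.10.10.2 10.10.10.3
            username admin
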
