Search results

  1. Recover zpool ( insufficient replicas / corrupted data )

    The missing indentation for sdd and sdc in the zpool status output is very telling!
  2. Recover zpool ( insufficient replicas / corrupted data )

    This sounds like sdd and sdc were not added as a mirror, but as single devices instead - a really bad setup. So there is no chance: the data is dead meat, as it is spread over all vdevs.
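
    A sketch of what the indentation in `zpool status` means (hypothetical device names; the exact layout depends on the pool):

```shell
# In "zpool status", member disks of a mirror are indented under the mirror vdev:
#
#   mirror-0
#     sda
#     sdb
#   sdc        <- not indented: sdc is its own top-level vdev, with no redundancy
#   sdd        <- same here - losing sdc or sdd loses the whole pool
```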
  3. Cannot create CEPH Pool with min_size=1 or min_size > 2

    Every piece of CEPH documentation tells you clearly why min_size 1 is a very bad idea; keep your fingers off it. min_size 2 and size 3 is fine!
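
    As a sketch, the recommended values can be set on an existing pool like this (pool name "rbd" is a placeholder):

```shell
# size = number of replicas, min_size = replicas required to accept I/O
ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2
```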
  4. Size on ZFS does not match to VM?

    It depends heavily on the Windows version: any version before Server 2012 or Windows 8 does not support the SCSI UNMAP command. Also, it does not clean up immediately. You should also check this: fsutil behavior query disabledeletenotify If the parameter is set to "1", trim/unmap is disabled and...
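
    The check mentioned above, run inside the Windows guest (elevated command prompt); the `set` line to re-enable trim is an addition here, not from the quoted post:

```shell
# Check whether TRIM/UNMAP notifications are disabled:
fsutil behavior query disabledeletenotify
# "DisableDeleteNotify = 1" means trim/unmap is disabled; enable it with:
fsutil behavior set disabledeletenotify 0
```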
  5. Size on ZFS does not match to VM?

    If you migrate with P2V, the empty blocks are copied over too. You should probably set "discard" in the disk definition, so you can reclaim the zeroed blocks. But be aware: this works only with Windows 10 or Server 2012 and up; Windows 7/2008R2 only support discard with ATA, not with SCSI.
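
    A minimal sketch of setting "discard" on a Proxmox disk definition; VM ID, storage, and disk name are hypothetical and must match your setup:

```shell
# Enable discard on the existing SCSI disk of VM 100, so guest TRIM commands
# release freed blocks back to the underlying ZFS zvol:
qm set 100 --scsi0 local-zfs:vm-100-disk-0,discard=on
```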
  6. 3x proxmox node with ceph: how to allow single node mode? Or 2 node needed for ceph at least?

    Do not run CEPH with just one node; it is not and was never intended for such use. CEPH needs at least 3 nodes with OSDs. No exceptions.
  7. ZFS RAIDZ2 : how to add another disk for extend zpool space ?

    If you want to extend a RaidZ2, you have to extend it with a same-size vdev. So with a 4 * 1 TByte RaidZ2 you have to add another vdev of 4 * 1 TByte. But: Z2 means 2 drives for redundancy. From a capacity point of view it is no better than 2 mirror vdevs. Z2 makes sense for high capacity with at...
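
    The capacity comparison above, worked through as a sketch (pool name "tank" and device names are hypothetical):

```shell
# 4 x 1 TByte disks per vdev, RaidZ2 = 2 parity drives
disks=4; parity=2; size_tb=1
raidz2_usable=$(( (disks - parity) * size_tb ))   # usable TByte in the RaidZ2 vdev
mirrors_usable=$(( (disks / 2) * size_tb ))       # same 4 disks as two 2-way mirrors
echo "$raidz2_usable $mirrors_usable"             # both give 2 TByte usable

# Extending means adding a whole same-size vdev, e.g.:
#   zpool add tank raidz2 /dev/sde /dev/sdf /dev/sdg /dev/sdh
```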
  8. Proxmox Supporting Ceph

    It is supported just fine, and the management is simple. I run 4 clusters, each with CEPH. As you are versed in CEPH, you should know what to do to get it running fast. Just keep your fingers off rotating disks and use a fast backend network (>= 10 GBit/s).
  9. New All flash Ceph Cluster

    I do not understand why you want a separate journal for an all-flash deployment. If your journal SSD fails, you immediately lose all 6 OSDs depending on it. Also, the write load on the journal will be 6 times that of the OSDs. Compare: 0.8 DWPD to 3 DWPD -> so if you are really hitting the DWPD limit...
  10. corosync not running, file /etc/pve/corosync.conf impossible write

    Sorry, I just hit the post button too early; here is the relevant information: Write Configuration When Not Quorate - If you need to change /etc/pve/corosync.conf on a node with no quorum, and you know what you are doing, use: pvecm expected 1 This sets the expected vote count to 1 and makes the cluster...
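
    The workflow described above, as a sketch (only do this if you really know what you are doing - it deliberately overrides quorum protection):

```shell
# On the non-quorate node: lower the expected vote count so this node
# considers itself quorate again
pvecm expected 1
# /etc/pve/corosync.conf is now writable, so the config can be edited
```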
  11. corosync not running, file /etc/pve/corosync.conf impossible write

    Look here: https://pve.proxmox.com/wiki/Cluster_Manager the relevant part for you: https://pve.proxmox.com/wiki/Cluster_Manager
  12. PVE 6.1 (ZFS setup) keep killing disks

    Very curious. We have had many disks of different types and vendors running with ZFS for years (RaidZ2 and mirror). Of course we see disks dying, but at normally expected rates (usually < 1 per month out of around 500 disks). So I bet your hoster just has no luck with these disks.
  13. pvestatd often reports storage offline with CIFS

    Hi, I have the same problem. I have to edit /usr/share/perl5/PVE/Storage/CIFSPlugin.pm. Making the timeout configurable would be great, please!
  14. Bond/bridge questions

    It depends heavily on the switch you use. With a dumb switch, only active/passive is possible (anything else will not work reliably). You need a managed switch on which you can set up LACP groups (or you can set up a full mesh with direct connections; openvswitch can do that) _but_ don't...
  15. The guest operating system kills the performance of the disk subsystem.

    Hi, you mention a 16 G ARC size; did you limit it in a conf file in /etc/modprobe.d (and rebuild the initramfs) so it gets set again after reboot? And for the disks: you set up a 4-way mirror; was this intentional?
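
    A sketch of making a 16 GiB ARC limit persistent on ZFS on Linux, along the lines suggested above (file name is the conventional one, adjust as needed):

```shell
# 16 GiB in bytes, the unit the zfs_arc_max module parameter expects
arc_max=$(( 16 * 1024 * 1024 * 1024 ))
echo $arc_max   # 17179869184

# Persist it, e.g. in /etc/modprobe.d/zfs.conf:
#   options zfs zfs_arc_max=17179869184
# then rebuild the initramfs so the limit is applied again at boot:
#   update-initramfs -u
```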
  16. CIFS online check produces high load

    The Samba server is running on different dedicated hardware (it is running there in a container for some reasons, but the whole machine is dedicated to storage). A guest user is definitely no solution, for security reasons, but I will look into password caching.
  17. CIFS online check produces high load

    Hi, we use the CIFS protocol for backups, but the check whether the storage is online produces high load on our Samba file servers. The Samba fileserver is bound to our Active Directory; in the log you can see that the online checker tries a login every 10 seconds: [2019/01/28 13:46:13.847175, 2]...
  18. [SOLVED] Proxmox 5.3-6 HA - Setup/Node Reboot

    You always need a quorum of > 50%, otherwise you can run into split-brain situations (you really do not want that!). So the expected vote is floor(n/2)+1, i.e.: 3 nodes -> expected vote = 2; 4 nodes -> expected vote = 3; 5 nodes -> expected vote = 3 (and so on). So only in a cluster with at...
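
    The vote counts above can be sketched with a one-line helper (integer division in shell arithmetic gives the floor automatically):

```shell
# Minimum votes needed for quorum in an n-node cluster: floor(n/2) + 1
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 3   # -> 2
quorum 4   # -> 3
quorum 5   # -> 3
```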
  19. [SOLVED] Ryzen 2700 + X470 - KVM virtualisation configured, but not available.

    Look in the BIOS for AMD-V. IOMMU is only necessary if you want to give VMs access to PCIe cards. But it doesn't hurt.
