Search results

  1.

    Web login problem

    OK, I managed to get the web login working after restarting some services, but I don't know exactly which service does the job. The questions about ZFS are still open.
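
    In current Proxmox VE releases the web interface is served by the pveproxy service, with pvedaemon handling the privileged API calls behind it, so restarting those two is most likely what did the job. A minimal sketch:

      # pveproxy serves the web GUI/API on port 8006; pvedaemon executes
      # the privileged calls behind it. Restarting both usually recovers
      # a hung web login without touching running VMs.
      systemctl restart pvedaemon pveproxy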
  2.

    Web login problem

    Hi good people. I can't log in using the web interface, but I can log in using SSH. The story: after updating one Proxmox node three days ago, I rebooted it to activate the new kernel. To avoid shutting down all the VMs, I used hibernation. After the node restarted, all was fine, but today I noticed that I can't log in...
  3.

    Proxmox VE 6 Radeon VGA

    By the way, I think the best approach is to use VESA mode by default, because Proxmox is not a workstation, it's a virtualization server, so there is no need for NVIDIA or Radeon drivers with hardware acceleration. Sometimes we use a server platform without an integrated GPU, and we need the cheapest hole plug with passive...
  4.

    Proxmox VE 6 Radeon VGA

    I have tried all the methods, including vga=xxx, with no results. I think the problem is the native radeon driver hardcoded into the kernel. I can't disable it: when I use nomodeset, the KMS/DRM setup crashes Xorg with an error, and when I use the radeon driver, I simply get no image. I have managed to deal with the problem...
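
    A common way to keep the in-kernel radeon KMS driver out of the picture is a kernel command line along these lines; the exact parameters are an assumption and depend on the bootloader in use:

      # /etc/default/grub -- skip kernel modesetting and keep the radeon
      # module from loading at all
      GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset modprobe.blacklist=radeon"

      # apply the change, then reboot
      update-grub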
  5.

    Proxmox VE 6 Radeon VGA

    Hi good people. I have assembled a new Proxmox server and can't install Proxmox VE 6 on it. When I start the installation, it simply stops before the GUI installer part and doesn't start Xorg (or it starts, but I only see the frozen CLI installer output). Actually, I found that the installer works, but...
  6.

    Proxmox cluster + ceph recommendations

    You don't understand. It works: LVM RAID works as expected, and Proxmox uses it without problems. This is not a question for the Proxmox team. The question is how LVM RAID behaves once it has been converted to a thin LVM pool: how do you rebuild such a converted-to-thin-pool LVM RAID array after a disk failure...
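
    For a plain LVM RAID LV the documented recovery path is lvconvert --repair; whether that also reaches the raid sub-LVs of a pool converted to thin is exactly the open question here. A sketch, assuming the VG is named r10 as elsewhere in the thread, with a hypothetical replacement disk /dev/sdX1 and LV name lv_r10:

      # add the replacement disk to the VG, then let LVM rebuild the
      # degraded raid LV onto it
      pvcreate /dev/sdX1
      vgextend r10 /dev/sdX1
      # for a converted thin pool the raid lives in the hidden
      # *_tdata/*_tmeta sub-LVs, which is what makes this case unclear
      lvconvert --repair r10/lv_r10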
  7.

    Proxmox cluster + ceph recommendations

    OK, I have already seen these articles. The Ubuntu help is useless: it covers basic operations, and the links point to a very old LVM HOWTO and other outdated info. And yes, I tried the Server Fault tip, but without success.
  8.

    Proxmox cluster + ceph recommendations

    Hi. Thanks for your links, I will read them. 1. Yes, the OS drive is separate: it is a ZFS mirror of 2x 500GB SSDs, and for data I use separate disks. I fixed my commands to build the LVM RAID thin pool: # for n in {b,c,d,e}; do sgdisk -N 1 /dev/sd$n; done # pvcreate /dev/sd{b1,c1,d1,e1} # vgcreate r10...
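
    The truncated command list presumably continues along these lines (a sketch: the VG name r10 comes from the post; the LV name, stripe layout, and sizes are assumptions):

      vgcreate r10 /dev/sd{b1,c1,d1,e1}
      # raid10 across 4 PVs: 2 stripes, 1 mirror; leave free space in the
      # VG for the thin pool's metadata LV
      lvcreate --type raid10 -i 2 -m 1 -l 90%FREE -n thinpool r10
      lvconvert --type thin-pool r10/thinpool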
  9.

    Proxmox cluster + ceph recommendations

    I found this regarding hardware NVMe RAID: http://www.highpoint-tech.com/USA_new/series-ssd7120-specification.htm I don't know how it works, but it exists. I think there are bigger fish, maybe LSI/Avago/Broadcom, because a lot of Supermicro/HP/Dell RAID controllers are simply OEM LSI. But who knows. My...
  10.

    Proxmox cluster + ceph recommendations

    There are always trade-offs: yes, hardware RAID is easy to maintain, but what if the RAID controller dies? Software RAID on ZFS is actually not that hard to maintain, one simple command and go. But the overheads are huge, first of all the RAM consumption. I can insert software RAID disks into any hardware and...
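
    The biggest ZFS RAM consumer is the ARC cache, and it can be capped if memory is the concern; a sketch assuming an 8 GiB limit:

      # cap the ZFS ARC at 8 GiB (value in bytes), then rebuild the
      # initramfs so the limit applies from boot
      echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
      update-initramfs -u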
  11.

    Proxmox cluster + ceph recommendations

    Yep, something like this. But my problem is with lvmthin usage under DRBD. If I can't reassemble the thin LVM RAID, I need to move all the VMs to another node, or move the VMs' disks to another thin pool on this node, then remove the degraded thin pool and recreate it with the new disk, and then move the VMs or their disks back. So...
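
    On the Proxmox side that shuffle can be done online with qm move_disk; a sketch in which the VM ID 100, the disk slot scsi0, and the storage names pool1/pool2 are all hypothetical:

      # move the disk off the degraded pool (deleting the source copy),
      # rebuild the pool, then move the disk back the same way
      qm move_disk 100 scsi0 pool2 --delete 1
      qm move_disk 100 scsi0 pool1 --delete 1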
  12.

    Proxmox cluster + ceph recommendations

    I will use software LVM RAID, the new LVM implementation that uses md without the mdadm layer. There is no way to create an LVM RAID thin pool directly, but I can create an LVM RAID pool and then convert it to a thin LVM RAID pool. It will work and will have redundancy. But neither the thin LVM nor the LVM RAID examples in the man pages work when...
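
    After such a conversion the raid layout survives in hidden sub-LVs, which can be checked with lvs; assuming the VG is again named r10:

      # the *_tdata/*_tmeta sub-LVs should still report a raid10 segtype
      lvs -a -o name,segtype,devices r10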
  13.

    Proxmox cluster + ceph recommendations

    Hi bro! I'm testing DRBD right now as the main solution. But there is a question, not about DRBD, but about LVM. DRBD can use thin LVM or ZFS as a backend. ZFS is RAM hungry; LVM is fast old school. Thin LVM gives us everything we need: speed, clones, snapshots. With the new LVM RAID feature we can get good...
  14.

    Proxmox cluster + ceph recommendations

    Hi good people. My customer wants to build an HA virtualization cluster from their hardware, but I need advice, because I have some planning questions. Hardware: - 3x Supermicro servers: 12x 3.5" 8TB 7.2k SATA HDDs, 4x 800GB Intel enterprise SATA SSDs, 2x 8-core/16-vCore CPUs (16 cores, 32 vCores in total)...
  15.

    New Proxmox Cluster Storage suggestion

    And about pools: is it enough to make one pool on this bunch of OSDs, or is it better to have several pools? On the old cluster I simply have 8 HDDs per node and only 3 nodes, so 24 OSDs and one pool.
  16.

    New Proxmox Cluster Storage suggestion

    Thanks. I was looking at DRBD9, but now I will try Ceph. I already have 2 Proxmox clusters with Ceph, but they are very old, Proxmox 3.x. The main question is about SSD journals and OSDs: can I simply use the 12 SSDs as 12 OSDs and the 48 HDDs as 48 OSDs per node and configure 2 CRUSH maps, one fast...
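
    Since Ceph Luminous this no longer requires hand-edited CRUSH maps: device classes can split SSD and HDD OSDs into separate rules. A sketch; the rule and pool names and the PG count are assumptions:

      # one replicated rule per device class, then a pool pinned to the
      # fast rule
      ceph osd crush rule create-replicated fast default host ssd
      ceph osd crush rule create-replicated slow default host hdd
      ceph osd pool create fast-pool 512 512 replicated fast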
  17.

    New Proxmox Cluster Storage suggestion

    Hi good people. I am planning to upgrade an existing Proxmox cluster. I have 4 nodes with dual Xeons (56 vCores) and 512GB RAM per node. Proxmox is installed on a ZFS mirror of two Intel 3510 SSDs. I have used NFS storage for the VMs, but now I have 4 JBODs (with 48x 8TB HDDs and 12x 800GB SSDs in each...
  18.

    Supermicro X10DRi SATA ZFS error

    Wow! Good job. So I think it was a good decision to migrate from Hyper-V to Proxmox.
  19.

    Supermicro X10DRi SATA ZFS error

    Hi. Big thanks. It worked: I disconnected the JBOD, successfully installed Proxmox 5.2-1, reattached the JBOD, and now I see all the disks! Perfect. Maybe you could add this info (about the installer's disk limit) to the wiki installation manual? JBOD is a cheap and effective way of attaching disks.
  20.

    Supermicro X10DRi SATA ZFS error

    Hi good people. I am trying to install Proxmox VE 5.2 on a Supermicro server. I have 2x 800GB Intel DC3510 SSDs connected to the internal SATA controller (AHCI mode) on a Supermicro X10DRi motherboard. I want to do a ZFS RAID10 installation, but the installation process ended with the error: "unable to get...
