Search results

  1. Fail to boot proxmox on old dell r620

    i just tried to install an older version image (6.1 instead of 6.2) and these are the errors i found in the output of `journalctl -b`: ACPI: SPCR: Unexpected SPCR Access Width. Defaulting to byte size efifb: abort, cannot remap video memory 0x1e0000 @ 0x0 efi-framebuffer: probe of efi-framebuffer.0 failed...
  2. [SOLVED] PVE Fails to load past 'Loading initial ramdisk...'

    did you solve it? i have a similar issue at post-336153
  3. Fail to boot proxmox on old dell r620

    i removed the "quiet" and pressed ctrl+x but i did not see any messages on the screen. i am able to ssh to the machine (as root) and print out the syslog; here is the end of it: Sep 7 14:06:22 pve-srv6 systemd[1]: Startup finished in 8.236s (kernel) + 7.310s (userspace) = 15.547s. Sep 7...
  4. Fail to boot proxmox on old dell r620

    after disabling all virtual devices (ipmi) it no longer looks for sdb, but a fresh install is still having issues. i think there might be a bad ram module (running a full diagnostic now)
  5. [SOLVED] corosync questions - edit voting values

    as i remember, on an older version before we upgraded (i think it was v4 or v5) we had it hard-coded. can i assume it is now 50%?
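The "50%" assumption in that last snippet is slightly off: corosync's votequorum requires a strict majority, not exactly half. A quick sketch of the arithmetic:

```python
def quorum(expected_votes: int) -> int:
    """Votes needed for quorum under corosync votequorum:
    a strict majority of the expected votes."""
    return expected_votes // 2 + 1

print(quorum(3))   # 2 votes needed in a 3-vote cluster
print(quorum(10))  # 6 votes needed -- more than half, not exactly 50%
```

This is why an even-sized cluster can lose quorum when exactly half the votes disappear.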
  6. [SOLVED] corosync questions - edit voting values

    i looked for this file, but i can only change the vote size for each node there; i am looking to change the minimal total votes needed to achieve quorum. we added a few more servers and i want to change the weight on some of them
  7. Anyone moved backups over to S3 or Azure via a hook?

    did you hear about proxmox-backup-server (keep in mind it is still in beta)? i am planning to use it once it gets a bit more stable
  8. Fail to boot proxmox on old dell r620

    i made a fresh install on raid1 (2*200 ssds) and after the install stage the boot hangs at this screen: Loading Linux 5.4.34-1-pve ... Loading initial ramdisk ... i tried update-grub and received this output: Generating grub configuration file ... /dev/sdb: open failed: No medium found...
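For the `/dev/sdb: open failed: No medium found` message above, a common cause (consistent with the ipmi finding later in this thread, but an assumption here) is grub/LVM probing an empty removable or BMC virtual-media device. A diagnostic sketch; `/dev/sdb` is a placeholder:

```
# list block devices; an idrac/ipmi virtual-media device typically shows
# up with zero size and transport "usb"
lsblk -o NAME,SIZE,TYPE,TRAN,MODEL

# if /dev/sdb is such a phantom device, either detach the virtual media
# in the bmc, or tell lvm to ignore it in /etc/lvm/lvm.conf:
#   devices { global_filter = [ "r|/dev/sdb|" ] }

# then regenerate the grub config
update-grub
```

Note the "No medium found" line is usually only noise from the probe; it does not by itself explain a hang at "Loading initial ramdisk".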
  9. change subnet mask on proxmox host, is it safe?

    yes, a working cluster of 10 nodes. i gave some clarification on the question in the first post (change subnet mask)
  10. change subnet mask on proxmox host, is it safe?

    is it safe to change (extend) the subnet mask on active proxmox servers? are there any precautions i should take?
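On a PVE host the mask lives in /etc/network/interfaces; extending it is a one-line change per node. A minimal sketch (interface names and addresses below are made up, not from the thread):

```
# /etc/network/interfaces (excerpt) -- extending a /24 to a /23
auto vmbr0
iface vmbr0 inet static
    address 192.168.0.10/23   # was 192.168.0.10/24
    gateway 192.168.0.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```

With ifupdown2 installed, `ifreload -a` applies it without a reboot. The main precaution on a cluster: change one node at a time and make sure the corosync link addresses stay reachable throughout, or the node will drop out of quorum.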
  11. [SOLVED] corosync questions - edit voting values

    i want to add a few more servers. how can i change the voting of each server? how can i change the minimal quorum value?
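Per-node voting weight is set in corosync.conf; the quorum threshold itself is derived (a strict majority of total votes), not set directly. A minimal sketch, with made-up node names and addresses; on Proxmox, edit /etc/pve/corosync.conf and increment config_version so the change propagates to all nodes:

```
nodelist {
  node {
    name: pve-a            # example node
    nodeid: 1
    quorum_votes: 2        # raise this node's voting weight
    ring0_addr: 10.0.0.1
  }
  node {
    name: pve-b
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.0.0.2
  }
}

quorum {
  provider: corosync_votequorum
  # quorum = strict majority of the total votes; there is no
  # separate "minimal quorum" knob to set here
}
```

`pvecm status` afterwards shows the per-node votes and the computed quorum.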
  12. ceph question, sas3 and nvme on separate bucket

    the pool is very fast (based on high end sas ssds and a 40gb network) so i dont think we will have such a big performance hit. worst case the storage will be down for 10 hours (we can do it overnight). currently there are only sas ssds (the nvme drives are not yet connected); will i still have the effect...
  13. ceph question, sas3 and nvme on separate bucket

    can i do it on an existing (running) pool?
  14. ceph question, sas3 and nvme on separate bucket

    currently i have one ceph pool consisting of a mix of sas3 ssds (different models and sizes, but all in the same performance category). i am thinking of creating another bucket (i dont know if bucket is the right name for what i want to do). currently we have the default (consisting of sas3 ssds)...
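What this thread describes is usually done with CRUSH *device classes* rather than separate buckets: tag the nvme OSDs with their own class and point a pool at a class-specific rule. A hedged sketch; the OSD id, rule name, and pool name below are examples:

```
# tag an OSD with a device class (ceph often auto-detects hdd/ssd/nvme;
# clear the auto class first if you need to retag)
ceph osd crush set-device-class nvme osd.12

# create a replicated crush rule that only selects nvme OSDs,
# with host as the failure domain
ceph osd crush rule create-replicated nvme-only default host nvme

# point a pool (new or existing) at that rule
ceph osd pool set fastpool crush_rule nvme-only
```

This works on a running pool, which answers the question above: switching `crush_rule` on an existing pool triggers data movement to the matching OSDs, so expect rebalancing traffic.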
  15. to satadom or not to satadom

    we are planning to add a few more servers. due to our growing storage demands i would like to use all the 2.5" ports with high capacity drives and add them to ceph. the server will mostly be ceph storage (read intensive) and a compute grid (the containers will be hosted on the ceph). does the...
  16. [SOLVED] proxmox does not see some ssds

    i just added some ssds to grow our ceph. we bought two bulks of sas ssds: KPM51RUG7T68, hp branded (works, integrated into our ceph); ARFX7680S5xnNTRI MZ-ILT7T60 (link for ebay listing for same model) is not recognized (cannot be added to ceph). both are connected to an lsi 3008 in it mode. an hdd is found...
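One common cause of a sas ssd being invisible to the OS while the HBA sees it (an assumption for this thread, not confirmed in the snippet) is a SAN/array-branded drive formatted with 520- or 528-byte sectors. The sg3_utils tools can check and reformat; `/dev/sdX` is a placeholder:

```
# report the drive's logical block size (requires sg3_utils)
sg_readcap /dev/sdX

# if it reports 520- or 528-byte blocks, reformat to 512 bytes.
# WARNING: this destroys all data on the drive and can take hours.
sg_format --format --size=512 /dev/sdX
```

After a successful low-level format the drive should appear as a normal block device and be usable as a ceph OSD.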
  17. do you have an estimation when the beta stage will end?

    in general we are very happy, but since then (upgrading from 5.4) we have a random crash (every 1-2 months) on 4-7 nodes (simultaneously) due to an unknown backup failure. i tried to investigate but i couldn't find anything, so i'll wait for proxmox backup server to replace the current backup schedules
  18. do you have an estimation when the beta stage will end?

    not to 6.2-10, but all nodes are at 6.2-9. it has occurred since i updated from 5.4
  19. do you have an estimation when the beta stage will end?

    bummer, currently the current backup is giving us some problems (random crashes (reboots) of multiple nodes in our cluster.. and i cannot find the reason). i'll keep waiting.. thanks..
  20. do you have an estimation when the beta stage will end?

    i have read about it, and i want to use it on our core cluster to back up all lxc and vms, but i am waiting for the "beta" to finish. anyone got an estimation?
