I just tried to install an older version image (6.1 instead of 6.2), and these are the errors I found in the output of `journalctl -b`:
ACPI: SPCR: Unexpected SPCR Access Width. Defaulting to byte size
efifb: abort, cannot remap video memory 0x1e0000 @ 0x0
efi-framebuffer: probe of efi-framebuffer.0 failed...
I removed "quiet" from the kernel command line and pressed Ctrl+X, but I did not see any messages on the screen.
I am able to SSH to the machine (as root) and print out the syslog;
here is the end of it:
Sep 7 14:06:22 pve-srv6 systemd[1]: Startup finished in 8.236s (kernel) + 7.310s (userspace) = 15.547s.
Sep 7...
After disabling all the virtual devices (IPMI), it no longer looks for sdb, but a fresh install is still having issues.
I think there might be a bad RAM module (running a full diagnostic now).
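In the meantime, to narrow down the errors over SSH, something like the following should cut the noise (plain journalctl options, nothing Proxmox-specific; the priority filter is just one way to do it):
journalctl -b -p err    # only messages of priority "err" and above from the current boot
journalctl -b -k        # kernel messages only, similar to dmesg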
I looked for this file, but there I can only change the vote count for each node;
I am looking to change the minimal total number of votes needed to achieve quorum.
We added a few more servers, and I want to change the vote weight on some of them.
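To make the question concrete, this is the kind of section I mean; a minimal corosync.conf sketch assuming the standard votequorum provider (node names, addresses and vote counts are placeholders, and on Proxmox the file lives at /etc/pve/corosync.conf and needs its config_version bumped on every edit):
nodelist {
  node {
    name: pve-srv1
    nodeid: 1
    quorum_votes: 2    # per-node weight, this is what I can already change
    ring0_addr: 10.0.0.1
  }
  # ... more nodes ...
}
quorum {
  provider: corosync_votequorum
  expected_votes: 7    # total expected votes; with a nodelist it is normally derived from the sum of quorum_votes, but it can be set here
}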
I made a fresh install on RAID1 (2×200 SSDs),
and after the install stage I get the following hang screen:
Loading Linux 5.4.34-1-pve ...
Loading initial ramdisk ...
I tried update-grub and received the output:
Generating grub configuration file ...
/dev/sdb: open failed: No medium found...
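From what I can tell, the "No medium found" line is just LVM scanning a device with no media behind it (most likely the IPMI virtual CD/USB showing up as /dev/sdb); a sketch of how I would check it and, if needed, tell LVM to skip the device (the filter regex is only an example, adjust it to whatever /dev/sdb turns out to be):
lsblk                    # see what /dev/sdb actually is
dmesg | grep -i sdb      # how the kernel identified it
# in /etc/lvm/lvm.conf, devices { } section - reject the virtual drive, accept everything else:
# global_filter = [ "r|/dev/sdb|", "a|.*|" ]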
The pool is very fast (based on high-end SAS SSDs and a 40Gb network), so I don't think we will have such a big performance hit. Worst case, the storage will be down for 10 hours (we can do it overnight).
Currently there are only SAS SSDs (NVMe are not yet connected). Will I still have the effect...
Currently I have one Ceph pool consisting of a mix of SAS3 SSDs (different models and sizes, but all in the same performance category).
I am thinking of creating another bucket (I don't know if bucket is the right name for what I want to do);
currently we have the default (consisting of SAS3 SSDs)...
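To be specific about what I mean by "bucket": I think what I actually want is to separate the drives by CRUSH device class and give each pool its own rule; a rough sketch with made-up class, rule, pool and OSD names (I have not run this yet):
ceph osd crush rm-device-class osd.12 osd.13          # clear the auto-detected class first
ceph osd crush set-device-class nvme osd.12 osd.13    # tag the new OSDs with a custom class
ceph osd crush rule create-replicated nvme-rule default host nvme   # rule that only picks OSDs of that class
ceph osd pool set fast-pool crush_rule nvme-rule      # point the new pool at the rule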
We are planning to add a few more servers due to our growing storage demands.
I would like to use all the 2.5" bays with high-capacity drives and add them to Ceph.
The servers will mostly be Ceph storage (read-intensive) and a compute grid (the containers will be hosted on Ceph).
does the...
I just added some SSDs to grow our Ceph.
We bought two batches of SAS SSDs:
KPM51RUG7T68, HP branded (works, integrated into our Ceph)
ARFX7680S5xnNTRI MZ-ILT7T60 (link to an eBay listing for the same model) is not recognized (cannot be added to Ceph)
Both are connected to an LSI 3008 in IT mode.
The HDD is found...
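One thing I still want to rule out, since these look like drives pulled from a storage array: enterprise SAS SSDs sometimes come formatted with 520- or 528-byte sectors, which Linux will not use until they are low-level reformatted to 512. A sketch with sg3_utils (the /dev/sg device name is an example, and sg_format destroys all data on the drive and can take hours):
sg_readcap /dev/sg3                        # check the logical block size; anything other than 512 would explain it
sg_format --format --size=512 /dev/sg3     # reformat to 512-byte sectors (wipes the drive)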
In general we are very happy, but since then (upgrading from 5.4) we have had random crashes (every 1-2 months) on 4-7 nodes simultaneously, due to an unknown backup failure. I tried to investigate but couldn't find anything, so I'll wait for Proxmox Backup Server to replace the current backup schedules.
Bummer. Currently the existing backup is giving us some problems (random crashes (reboots) on multiple nodes in our cluster, and I cannot find the reason), so I'll keep waiting...
Thanks...
I have read about it, and I want to use it on our core cluster to back up all LXCs and VMs,
but I am waiting for the "beta" to finish. Anyone got an estimate?