I suspect the syncs are taking a very long time. The backup is around 18 TB, being synced TO an offsite server over a 300 Mbit line.
Is this the reason for the ticket timeout? Is there a way to adjust it or otherwise fix this problem?
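A back-of-envelope estimate (assuming the full 18 TB has to cross the line at the full 300 Mbit/s, with no deduplication savings) suggests why:

18 TB ≈ 144,000,000 Mbit
144,000,000 Mbit / 300 Mbit/s = 480,000 s ≈ 5.5 days

So any fixed ticket or task timeout far below that will trip on the initial sync.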
I wonder why I supposedly cannot migrate from storage type DIR to RBD... because it is a blatant lie.
Go to Hardware, move the disk from DIR to RBD = SUCCESS.
Once done, migrate the VM... it is now all running on Ceph... literally migrated from DIR to RBD...
WHY is PVE lying and rolling over like a little...
Dead PVE.... I wanted to move the EFI disk, hence I clicked MOVE.... I did not ask for any cancel... why do you behave like an infantile AI and cancel my job? To annoy me? Success!!
WHY?
I don't know how many times I've had to shut down a VM to move its EFI disk... as if it's in use at all... This is...
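For anyone hitting the same wall, one possible workaround sketch is to do the move from the CLI instead of the GUI. This assumes qm move_disk accepts efidisk0 on your version, and the VM ID 100 and storage name "ceph-pool" are placeholders (the shutdown requirement may still apply):

qm shutdown 100
qm move_disk 100 efidisk0 ceph-pool
qm start 100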
5 of 6 nodes down for maint.
Can't log in to the web interface on the 6th with the correct password.
Can SSH to the 6th with the correct password.
Why?
If quorum is the culprit.... WHY?
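For anyone else stuck like this: without quorum, pmxcfs goes read-only and the web login depends on it, while plain SSH does not. A hedged sketch of the usual escape hatch over SSH, assuming you accept the risk while the other five nodes are down (revert it when they return):

pvecm status      # confirm quorum is actually lost
pvecm expected 1  # temporarily lower the expected vote count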
The server has 512GB Ram.
It has 5 running VMs with the following allocated memory:
1: 2 GB
2: 4 GB
3: 32 GB
4: 64 GB
5: 16 GB
Please explain how this server can go OOM when it clearly has sufficient memory for the meager load it is running.
Even with 2 ZFS pools at default values it will never fill...
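For the record, those five VMs only add up to 118 GB. The usual suspect is the ZFS ARC: it is a single cache shared by both pools and by default can grow to roughly half of the host's RAM. A minimal sketch of capping it, where the 64 GiB value is an arbitrary example:

# /etc/modprobe.d/zfs.conf -- cap ARC at 64 GiB (64 * 1024^3 bytes)
options zfs zfs_arc_max=68719476736
# apply at runtime without reboot:
# echo 68719476736 > /sys/module/zfs/parameters/zfs_arc_max
# persist into the initramfs: update-initramfs -u -k all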
Clean standard install of Debian; upon reboot I am greeted with this BS. It would be advisable to read INSIDE the damn disk and not outside!!! Who comes up with these funny ideas?
If UEFI is selected, everything starts to flicker nicely with Debian 11. (/s!)
Why does stuff like this keep happening?
Hi, is there a way to have PBS shut down after a remote sync is complete?
I would like the server to shut down and auto start up (by BIOS) to pull the sync down, then shut down again.
I see no reason to have the 2nd off-site server powered on for longer than required, due to security concerns...
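As far as I know there is no built-in "shut down after sync" toggle, but a cron sketch could approximate it. The remote name "offsite", the datastore names, and the 02:10 schedule (with the BIOS RTC wake set a few minutes earlier) are all assumptions:

# /etc/cron.d/pull-and-poweroff on the off-site PBS
10 2 * * * root proxmox-backup-manager pull offsite main-store local-store && /usr/sbin/shutdown -h +5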
Thanks to this typical Proxmox error I have tried to migrate the damn disk twice, with the same useless result!
Why does stuff like this continually happen on freshly installed clusters? I don't get it...
drive-scsi3: transferred 446.8 GiB of 450.0 GiB (99.28%) in 10m 19s
drive-scsi3: transferred...
Hi, we have 4 servers with various SSDs from various vendors. Today I observed something interesting in the syslog of all servers, which I went to investigate.
I even went so far as to send a trainee to the server room, ready to pull out the disk I specified when the syslog entry appeared, to check...
So.. what are those 5 nodes pretending to be part of?
Why does this "still" happen on cleanly installed clusters?
It has been happening ever since 7 was released...
One node down, all hell breaks loose.
Can't SSH to hosts because the login is hanging.
The web interface won't log you in because the...
A nice little yellow warning appeared on my Ceph pool after having enabled and subsequently disabled lz4 compression.
What does this mean? The pool runs "fine", but how do I get rid of this error?
22 OSD(s) have broken BlueStore compression osd.0 unable to load:none
osd.1 unable to load:none...
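For anyone else chasing this warning: one hedged way to see where the leftover "none" algorithm lives, assuming the enable/disable cycle left it set on the pool or in the config DB ("mypool" is a placeholder):

ceph osd pool get mypool compression_algorithm
ceph osd pool get mypool compression_mode
ceph config dump | grep -i compression
# if an offending value shows up, clear it and restart the affected OSDs,
# e.g. systemctl restart ceph-osd@0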
Yeah, one of those proxmox adventures where shit just hits the fan for absolutely no obvious reason.
Please add to the help file a definition of which Ceph-related data and configuration files exist and where they live, because it is not all local... I am very, very, very annoyed!
USAGE: pveceph purge [OPTIONS]...
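For anyone else hunting, the spread that tripped me up (as I understand PVE's layout; paths worth double-checking on your own nodes):

/etc/pve/ceph.conf             # cluster-wide, on pmxcfs, so NOT local
/etc/ceph/ceph.conf            # symlink to the above
/etc/ceph/ and /etc/pve/priv/  # keyrings
/var/lib/ceph/                 # per-node monitor/OSD state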
So, I installed a server today and used the GUI to do an apt upgrade. I briefly checked another part of the UI and browsed away from the window.
When I returned, a new session was open and the apt upgrade could no longer be connected to.
This feature single-handedly forced me to reinstall the server. Why...
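The painful lesson here is that the GUI shell session owns the apt process, so losing the session mid-upgrade can leave dpkg half-configured. A recovery/prevention sketch over SSH that may spare the next person a reinstall:

dpkg --configure -a   # finish the interrupted package configuration
apt -f install        # repair broken dependencies
# prevention: run future upgrades in a detachable session
screen -S upgrade
apt update && apt dist-upgrade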
So what? I do not care that it has a holder. Kill the holder and wipe the disk as asked! What is the reason for all these workarounds we constantly have to do to administer these systems?
The disk was previously part of a Ceph installation; it no longer is, and I need it for something else! I can't...
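For the record, the "holder" is usually a leftover LVM/device-mapper mapping from the old Ceph OSD. A sketch of clearing it by hand, where /dev/sdX and the holder name are placeholders (this destroys all data on the disk):

lsblk /dev/sdX                          # see which dm/LVM holders sit on it
ceph-volume lvm zap --destroy /dev/sdX  # if ceph-volume is still installed
# or manually:
dmsetup remove <holder-name>
wipefs -a /dev/sdX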
How inconvenient... that the two nodes with absolutely 0 VMs are offline.
The VMs to be backed up are located on the two online nodes. Why can Proxmox Backup Server not figure out how to do the damn backup?
So, as a silly work around I now have to set the backup job to only run on server 1 first...
VM 102 USED to be placed on Ceph. I moved the disk away without ticking "delete source". Now I want to delete the leftover, and I do not want to fight with silly popups telling me completely irrelevant stuff...
So, here is the weird workaround I have to do (a possible CLI alternative follows below):
1. Shut down VM 102
2. Copy the 102.conf file...
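The CLI alternative mentioned above, hedged: it assumes the leftover shows up as an "unused0" entry in the VM config, and the volume ID is a placeholder:

qm set 102 --delete unused0           # drop the config reference
pvesm free <storage>:vm-102-disk-0    # then free the orphaned volume, if wanted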
When the updater gets to pve-manager 7.0.11 there is a timeout, a brief dump, and then apt hangs. After a reboot, the system won't start VMs, giving all sorts of nonsense, such as:
TASK ERROR: KVM virtualisation configured, but not available. Either disable in VM configuration or enable in BIOS...
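When that error appears after an upgrade-plus-reboot, the first thing worth checking is whether the reboot (or a BIOS reset) silently disabled VT-x/AMD-V. A quick checklist, with VM ID 100 as a placeholder:

egrep -c '(vmx|svm)' /proc/cpuinfo  # >0 means the CPU virtualization flag is exposed
lsmod | grep kvm                    # kvm_intel / kvm_amd should be loaded
qm set 100 --kvm 0                  # last-resort workaround: slow, emulation only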
Hi, I have a question about the path of the storage.
If I create 3 nodes with Ceph: nodes A, B, and C.
I have a VM running on node A, with 3 replicas and host as the failure domain. Thus the VM's vdisks will be placed on nodes A, B, and C concurrently.
Will the VM grab data from storage that is local...
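As far as I understand Ceph, no: each read is served by the primary OSD of the object's placement group, and CRUSH may put that primary on any of the three nodes, so there is no automatic preference for the local replica. You can see which OSD is primary for a given object; the pool and object names here are placeholder examples:

ceph osd map vmpool rbd_data.abc123.0000000000000000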
Hi, is it just me (us), or has there been a suspicious lack of updates for at least two days?
Reddit post about topic:
https://www.reddit.com/r/Proxmox/comments/p6uv7n/is_something_up_with_the_proxmox_bullseye_package/