We shut down a Windows VM to change the RBD disk's interface from SCSI to SATA.
Everything worked well. The next day, after our upgrade maintenance, we noticed during the process of reverting from SATA to SCSI
that the drive was showing 5 TB!
We first thought this was a bug, but no, it is now...
Hello everyone, I have built an identical cluster that is not able to pass 802.1Q traffic correctly.
For example, I have a MikroTik VM that sees the neighbors in the corresponding QinQ zone but cannot access them.
Firewalls are off, and both clusters end in the same HPE stack. The original...
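For tagged traffic to reach a VM, the Proxmox bridge the VM attaches to must be VLAN-aware and must allow the VLAN IDs in question. A minimal sketch of the relevant `/etc/network/interfaces` stanza, assuming a bridge named `vmbr0` on uplink `eno1` (both names are examples, not taken from the post):

```
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

If `bridge-vids` is restricted (or `bridge-vlan-aware` is missing) on one cluster but not the other, tagged or QinQ frames can be silently dropped on only one side, which would match the symptom described.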
Hello, in some scenarios we need to restore large VMs.
For example, a Windows system drive (C:) of 1 TB plus a data drive of 2 TB.
Is there a way in the CLI to initiate a restore without importing all disks at once, so that we can at least recover the server faster in some cases?
From the Proxmox...
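One approach worth sketching: `proxmox-backup-client` can restore a single disk archive from a PBS snapshot instead of the whole VM, and the resulting raw image can then be re-attached with `qm importdisk`. All names below (repository, VMID, snapshot timestamp, archive name, paths, storage) are placeholders, not values from the post:

```shell
# Restore only one disk archive (e.g. the 1 TB system disk) from a PBS snapshot
# to a raw file. Repository/snapshot/archive names are hypothetical examples.
proxmox-backup-client restore \
    "vm/100/2024-05-01T10:00:00Z" \
    drive-scsi0.img.fidx \
    /mnt/restore/vm-100-system.raw \
    --repository backup@pbs@pbs.example.com:datastore1

# Attach the restored image to the (re-created) VM; "local-lvm" is an example storage
qm importdisk 100 /mnt/restore/vm-100-system.raw local-lvm
```

The server could then be booted from the system disk first and the larger data disk restored afterwards, which is the staged recovery the post is asking about.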
Hello, I'm struggling with the following: if a customer deletes all his backups through the API (from our customer web interface),
is there a way to retrieve a pending removed backup if they did that by mistake?
Duplicating our PBS datastore is currently our only safety net, and we would like to know what others do /...
We found a post describing an issue similar to ours when trying to move a VM from one Ceph cluster to another.
It suggested disabling Discard, which we did, and that seems to fix the issue.
The source cluster has the VM on Ceph RBD and the destination is CephFS, as recommended by the Proxmox team, as RBD...
Hello, I did a test with an image that I successfully mirrored from Cluster1 to Cluster2.
I didn't migrate the VM configuration file, so for testing purposes I created a VM with the matching VMID on the target cluster, but the mirrored image does not show up as a disk available to attach after the VM is created.
So I...
Hello all, I'm at the last step of configuring mirroring (I enabled the daemon service on the site-b node).
Do both clusters need to be on the same Ceph public network, or can it be any other range? Right now I don't have any communication on the public network or client network between the clusters...
We have 4x Samsung PM863a 3.84 TB 3D-NAND TLC drives in RAIDZ2 behind an HPE HBA, and we believe performance is really slow. Is there anything we can do to improve overall performance?
We enabled TRIM with no change.
The PBS host has 128 GB of RAM and uses around 20-30%.
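A few standard ZFS checks are worth running before blaming the drives; a sketch, assuming the pool is named `backup` and the PBS datastore dataset is `backup/datastore` (both names are examples):

```shell
# Verify the pool was created with 4K-aligned sectors (ashift=12 for these SSDs);
# a wrong ashift cannot be changed after pool creation and badly hurts SSD writes.
zpool get ashift backup

# Common tuning for a PBS datastore (dataset names are hypothetical):
zfs set atime=off backup              # skip access-time updates on every read
zfs set compression=lz4 backup        # cheap compression, usually a net win
zfs set recordsize=1M backup/datastore  # PBS chunks are large sequential files
```

It may also be worth checking whether the HPE HBA is in true pass-through (IT) mode; a RAID controller presenting logical volumes, or one with its cache policy fighting ZFS, is a frequent cause of slow RAIDZ on otherwise fast SSDs.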
Hi, Proxmox is a killer hypervisor compared to the competition, everybody knows that, but one of the most important features we lack is REPLICATION of virtual machines on file systems other than ZFS. Why is this still not available?
We have a lot of VMs we can't move from Hyper-V and VMware because of...
Scenario:
we want to move VMs from shared SAS storage on CLUSTER1 to RBD on CLUSTER2.
The VM images on CLUSTER1 are qcow2, and I get an error saying qcow2 is not supported on RBD.
My only solution is to use a temporary file system on Ceph, move the VMs there in step 1, and switch them to RBD in step 2,
but...
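The intermediate CephFS step may be avoidable: RBD only stores raw images, but the qcow2-to-raw conversion can happen in flight. A sketch, with all names (VMID 100, paths, storage `ceph-rbd`, pool `rbd`) as hypothetical examples:

```shell
# Option A: let Proxmox convert during import (qcow2 is converted to raw
# automatically when the target storage only supports raw, as RBD does)
qm importdisk 100 /mnt/sas/vm-100-disk-0.qcow2 ceph-rbd

# Option B: convert directly into the RBD pool with qemu-img
qemu-img convert -p -f qcow2 -O raw \
    /mnt/sas/vm-100-disk-0.qcow2 \
    rbd:rbd/vm-100-disk-0
```

Either way the image lands on RBD in one pass, without staging a temporary copy on CephFS first.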
Hi, the thing that scares us most about using HA is not being sure about the question below:
let's say a VM fails for reason X but all 5 PVE nodes are active, or the VM tries to move to another PVE node for reason X but fails to do so.
Will HA kill the node that is correctly serving, let's say, all the other VMs...
During a verification job, PVE hosts have a hard time accessing the datastore of a PBS server, as the verification takes all available I/O.
If a customer wants to restore a VM, we would probably have to manually kill the verification job; per our observation, we
have a hard time even connecting to see the backups...
Hi, currently we use:
/etc/crontab
to configure a cron job on each host, and the file is in /etc/pve/cron/cronxyz.cron.
Is there a way to have the PVE cluster service run this job once cluster-wide, instead of on each individual host?
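As far as I know PVE has no built-in cluster-wide cron, but a common workaround is to keep the identical crontab on every node and guard the job so only one node actually runs it. A minimal sketch of such a guard (the node list would come from `pvecm nodes` in real use; here the helper just takes names as arguments, and the job path is hypothetical):

```shell
#!/bin/sh
# Elect a single "leader" deterministically: the alphabetically-first
# node name wins, so exactly one host in the cluster runs the job.
pick_leader() {
    printf '%s\n' "$@" | sort | head -n 1
}

# Example cron line guard (node names and job path are placeholders):
# [ "$(pick_leader pve1 pve2 pve3)" = "$(hostname -s)" ] && /usr/local/bin/job.sh
```

Since `/etc/pve` is replicated by pmxcfs, keeping the script there means every node sees the same copy, and the guard decides which one executes it.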
I read that Ceph can enforce having copies on different hosts, not only on different OSDs.
Is this the default rule in Proxmox? (Do the size numbers refer to OSDs or hosts?)
Or is it a rule we need to add to the config file?
If one OSD/node reboots and we still have 3 nodes/OSDs online,
will the pool block I/O if configured at 2/2?
I'm asking because my colleague told me that during his tests, during a node reboot, the VMs stopped responding to I/O with 2/2 but worked correctly with 2/1 on a 4-node lab.
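The 2/2 behavior the colleague saw is expected: `size`/`min_size` count replicas, the default replicated CRUSH rule places replicas on distinct hosts, and when a PG has fewer than `min_size` replicas available it stops serving I/O. So with size=2, min_size=2, rebooting one host drops the affected PGs to one replica and they block until the host returns. A sketch of how to inspect and change this (pool name `rbd` and rule name `rep_host` are examples):

```shell
# Which CRUSH rule does the pool use, and what is its failure domain?
ceph osd pool get rbd crush_rule
ceph osd crush rule dump replicated_rule   # look for "type": "host" in the chooseleaf step

# If host-level separation is missing, create a rule for it and assign it:
ceph osd crush rule create-replicated rep_host default host
ceph osd pool set rbd crush_rule rep_host
```

Note that 2/1 avoids the blocking but tolerates data loss if the remaining copy fails during the reboot; the commonly recommended setting is size=3, min_size=2.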
Hi, I think I finally isolated an issue with a VM losing network connectivity when we move it from one host to another.
All nodes run 7.3-4; the MikroTik VM has 32 CPUs, so we configured 32 multiqueues.
Yesterday I downgraded, just to try, from 32 to 16 multiqueues, as another MikroTik with 16 CPUs and 16...
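For reference, the queue count is set per NIC in the VM config; a sketch, assuming VMID 100 and bridge `vmbr0` (both placeholders, and the MAC shown is an example — reuse the VM's existing MAC so the guest does not see a new interface):

```shell
# Set 16 virtio multiqueues on net0; general guidance is queues <= vCPU count
qm set 100 --net0 virtio=DE:AD:BE:EF:00:01,bridge=vmbr0,queues=16
```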