I'm sorry - we didn't resolve it. We recovered the data from older backups and remote files; it was a Nextcloud storage.
We suspect that the issue was the remote backup storage (Hetzner Storage Box) and the large files on it.
Our recommendation is: don't use it this way.
We had the same problems with Server 2012R2 and some old pfSenses.
- on the pfSenses, only a move to virtio was needed
- on Server 2012R2:
it is not enough to just re-install the driver and/or move to a virtio card.
Our solution was to remove all network drivers and network cards (via the Proxmox GUI) from...
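For reference, the same remove/re-add can also be done from the PVE host shell; a rough sketch, with VM ID 100 and bridge vmbr0 as placeholders (adjust to your setup):

```
# remove the old emulated NIC from the VM config (VM ID 100 is a placeholder)
qm set 100 --delete net0
# after cleaning up the ghost devices inside Windows, re-add it as virtio
qm set 100 --net0 virtio,bridge=vmbr0
```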
Because of a performance issue in a SAP HANA system we need to add this CPU flag to our VM:
I added it in my vm.conf file, but after that the VM had a plain kvm64 CPU without any flags :(
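For anyone searching later, a minimal sketch of how the cpu line is supposed to look in /etc/pve/qemu-server/&lt;vmid&gt;.conf; the flag pdpe1gb here is only an example, not necessarily the one meant above. A malformed line, or editing while the VM is still running, can leave you on the kvm64 default until the next full stop/start:

```
# flags are appended to the cpu line; "+" enables, ";" separates several flags
# (pdpe1gb is just an example flag)
cpu: host,flags=+pdpe1gb
# equivalently via CLI:
#   qm set <vmid> --cpu host,flags=+pdpe1gb
```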
Thanks for your answer.
Yes, I think so too - no block storage on it.
We had these SSDs in an OpenFiler NAS and connected it via iSCSI to the PRX cluster. Performance was bad and the IO delay was high as well...
So how can I use them for my cluster? Build an OpenNAS and share it via NFS?
We have 15 consumer Samsung 1TB SSDs (850 and 860 Pro) and don't know how we can make use of them.
I tested these SSDs on our 7-node cluster in Ceph (3 nodes got 5 SSDs each), and on all 3 nodes the IO delay ran high.
My test was to move some VM hard disks to this Customer_SSD_Pool, start 3...
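One thing worth checking before blaming Ceph itself: consumer SSDs like the 850/860 series have no power-loss protection and are known to collapse on the small synchronous writes Ceph issues, which shows up exactly as high IO delay. A quick way to verify is a single-job sync write test with fio; this writes directly to the device and destroys its contents, so only run it on an empty disk (/dev/sdX is a placeholder):

```
# 4k synchronous single-queue writes - the pattern Ceph journaling produces
# WARNING: destructive, writes raw to the device
fio --name=sync-write-test --filename=/dev/sdX --direct=1 --sync=1 \
    --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based
```

Enterprise SSDs typically sustain tens of thousands of IOPS in this test; consumer drives often drop to a few hundred once their internal cache is exhausted.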
For a geo-redundant HA system we have two single Proxmox servers which host our app VMs.
Today we have a hardware RAID-1 with two 1TB enterprise SSDs (= 1TB usable space) with thin-LVM (for snapshots).
The disk space is nearly full and we have to upgrade it.
So here is my question...
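Since the question is cut off above, just as one possible direction: if the upgrade ends up as larger disks in the same RAID-1, the stock Proxmox thin-LVM layout (VG pve, thin pool data) can be grown in place; a sketch with placeholder device and sizes:

```
# after the RAID-1 volume has been enlarged, grow the PV into the new space
pvresize /dev/sdX
# extend the thin pool, and give its metadata some room as well
lvextend -L +900G pve/data
lvextend --poolmetadatasize +1G pve/data
```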
We also have this problem. Some changes we made had no effect:
virtio drivers installed from ISO 0.1.149 (HDD, network, balloon, qemu-guest-agent)
At the moment, our one Win2012R2 VM loses its network every day and must be hard-stopped.
We also have a few other Windows Server and Win10/Win7 VMs with...
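In case it helps others hitting this: a quick way to see, from the PVE host, which VMs are still on the emulated e1000 NIC instead of virtio (a sketch; /etc/pve is the cluster-wide config filesystem, so this covers all nodes):

```
# list VM configs across the cluster that still define an e1000 NIC
grep -l 'net[0-9]*: e1000' /etc/pve/nodes/*/qemu-server/*.conf
```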
Yesterday I removed a node from our PVE/Ceph cluster, but I forgot something:
First I set all OSDs of this node to OUT and STOP. After that, I shut the node down and removed it from the PVE cluster (pvecm delnode...).
But now the cluster still shows me, in the Ceph overview, the OSDs from the removed...
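The missing step is presumably the Ceph-side cleanup: OUT + STOP and deleting the node from PVE does not remove the OSDs from the CRUSH map or the auth database. A sketch of the cleanup, with osd.12 and the node name as placeholders:

```
# for each leftover OSD of the removed node (osd.12 is a placeholder):
ceph osd purge 12 --yes-i-really-mean-it
# ...which replaces the older three-step variant:
#   ceph osd crush remove osd.12 && ceph auth del osd.12 && ceph osd rm 12
# finally drop the now-empty host bucket from the CRUSH map
ceph osd crush remove <nodename>
```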
Today I have the same issue:
I added the 4th node to our cluster and now the cephfs storage is unusable.
In the syslog I see the mounting errors:
pvestatd: A filesystem is already mounted on /mnt/pve/cephfs
pvestatd: mount error: exit code 16
All nodes became Ceph monitors and...
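The "already mounted" error usually means a stale or duplicate mount is left at /mnt/pve/cephfs on the new node. A sketch of how one would check and clear it, on the affected node:

```
# see what is currently mounted at the storage path
mount | grep /mnt/pve/cephfs
# unmount the stale entry (add -l for a lazy unmount if it reports "busy")
umount /mnt/pve/cephfs
# pvestatd remounts the storage on its next cycle; watch it come back:
journalctl -u pvestatd -f
```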
In my old clusters/servers the disks are on iSCSI-LVM storages.
For rbd import I have to mount the old storage in my new environment, right?
Is it possible to change the VM ID this way?
Do you have an example for me?
Thanks a lot.
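For illustration, roughly what such an import can look like, with hypothetical names throughout (old LV vm-101-disk-0 in VG vg_iscsi, target pool ceph-vm, new VM ID 4101). The VM ID "change" is really just naming the target image after the new ID and attaching it to the new VM:

```
# stream the old iSCSI-LVM disk straight into an RBD image on the new cluster
# (all names are placeholders - adjust LV path, pool and VM ID)
qemu-img convert -p -f raw -O raw \
    /dev/vg_iscsi/vm-101-disk-0 \
    rbd:ceph-vm/vm-4101-disk-0
# then let the new VM (ID 4101) pick the disk up:
qm rescan --vmid 4101
```

This needs the old storage reachable (e.g. the iSCSI LUN logged in) on a host that also has the Ceph config and keyring, which matches the "mount the old storage in the new environment" idea above.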
I have a question about migrating:
We have to migrate about 200 VMs from our old PVE hosts/clusters to our new PVE/Ceph cluster.
Is it possible to copy the disk files via scp directly to the Ceph storage, and how do I do it?
Thanks a lot.
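scp won't work as-is, because RBD images are not files in a directory; but you can stream the raw disk over SSH straight into rbd import, with no intermediate copy. A sketch with placeholder names (source host old-pve, file-based raw disk of VM 101, target pool ceph-vm):

```
# pull the raw disk over ssh and feed it to rbd import via stdin ("-");
# host, path and pool names are all placeholders
ssh root@old-pve 'dd if=/var/lib/vz/images/101/vm-101-disk-0.raw bs=4M' \
    | rbd import --dest-pool ceph-vm - vm-101-disk-0
```

This assumes raw source images; qcow2 files would need a qemu-img convert to raw first. For 200 VMs it is worth scripting this over a list of VM IDs.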