Hello,
for a geo-redundant HA setup we have two standalone Proxmox servers which host our app VMs.
Currently we have a hardware RAID-1 with two 1 TB enterprise SSDs (= 1 TB usable space) with thin LVM (for snapshots).
The disk space is nearly full and we have to upgrade it.
So here is my question...
Hi,
we also have this problem. Some changes we made had no effect:
VirtIO drivers installed from ISO 0.1.149 (disk, network, balloon, qemu-guest-agent).
At the moment, every day one of our Win2012R2 VMs loses its network and must be hard-stopped.
We also have a few other Windows Server and Win10/Win7 VMs with...
Hello,
yesterday I removed a node from our PVE/Ceph cluster, but I forgot something:
First I set all OSDs on this node to OUT and STOP. After that I shut the node down and removed it from the PVE cluster (pvecm delnode...).
But now the cluster still shows, in the Ceph overview, the OSDs from the removed...
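In case it helps others hitting the same thing: the leftover OSD entries can usually be cleaned up from any surviving monitor node. A sketch, assuming the removed node still shows osd.12 through osd.14 (the IDs and hostname are placeholders, substitute your own):

```shell
# Remove each stale OSD from the CRUSH map, the auth database
# and the OSD map (replace the IDs with the ones still listed):
for id in 12 13 14; do
    ceph osd crush remove osd.$id   # drop it from the CRUSH hierarchy
    ceph auth del osd.$id           # delete the OSD's cephx key
    ceph osd rm $id                 # remove it from the OSD map
done

# Finally remove the now-empty host bucket from the CRUSH map:
ceph osd crush remove pve-old-node
```

On Luminous and later, `ceph osd purge <id> --yes-i-really-mean-it` combines the three per-OSD steps into one command.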
Hello,
today I have the same issue:
I added the 4th node to our cluster and now the cephfs storage is unusable.
In the syslog I see the mount errors:
pvestatd[3334]: A filesystem is already mounted on /mnt/pve/cephfs
pvestatd[3334]: mount error: exit code 16
All nodes become Ceph monitors and...
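For anyone seeing the same "exit code 16" (mount busy): it usually means a stale or duplicate mount is already sitting on the mountpoint, so pvestatd's own mount attempt fails. A possible cleanup, assuming the default /mnt/pve/cephfs path:

```shell
# See what is currently mounted on the cephfs mountpoint:
mount | grep /mnt/pve/cephfs

# If a stale mount is present, unmount it (fall back to a lazy
# unmount if the mountpoint is busy):
umount /mnt/pve/cephfs || umount -l /mnt/pve/cephfs

# pvestatd re-mounts the storage on its next cycle; restarting it
# forces the remount immediately:
systemctl restart pvestatd
```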
In my old clusters/servers the disks are on iSCSI-LVM storages.
For rbd import I have to mount the old storage in my new environment, right?
Is it possible to change the VM ID this way?
Do you have an example for me?
Thanks a lot
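A sketch of what this could look like, assuming the old disk is a logical volume on the iSCSI-backed VG. Changing the VM ID only requires naming the target RBD image after the new ID (all names here are examples, not your actual paths):

```shell
# Old disk: LV of VM 100 on the iSCSI-backed volume group.
# New VM on the Ceph cluster has ID 200, so the target image
# carries the new ID in its name:
rbd import /dev/vg_iscsi/vm-100-disk-1 ceph-pool/vm-200-disk-0

# Alternatively, stream the LV over SSH without attaching the
# old storage to the new environment at all:
ssh root@old-host 'dd if=/dev/vg_iscsi/vm-100-disk-1 bs=4M' \
    | rbd import - ceph-pool/vm-200-disk-0
```

Afterwards the image still has to be referenced in the new VM's config (as `<storage-id>:vm-200-disk-0`) before the VM can use it.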
Hello,
I have a question about migrating:
We have to migrate about 200 VMs from our old PVE hosts/clusters to our new PVE Ceph cluster.
Is it possible to copy the disk files via scp directly to the Ceph storage, and if so, how do I do it?
Thanks a lot
Regards
Ronny
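Note that Ceph RBD images are not plain files on a filesystem, so scp alone cannot place them on the storage. A common approach is to copy or export the source image and then convert it directly into the pool; a sketch with example names:

```shell
# Convert a copied/exported image straight into the Ceph pool
# via qemu-img's rbd: protocol (pool and file names are examples):
qemu-img convert -p -O raw /tmp/vm-101-disk-0.qcow2 \
    rbd:ceph-pool/vm-101-disk-0

# On PVE 5.x and later, qm importdisk does the conversion and
# attaches the result to the VM config in one step
# ("ceph-rbd" is an example storage ID):
qm importdisk 101 /tmp/vm-101-disk-0.qcow2 ceph-rbd
```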
Hello,
on our new 3-node cluster with a fresh Ceph installation we continuously get these messages in the Ceph log on all 3 nodes.
pveversion: pve-manager/5.4-4/97a96833 (running kernel: 4.15.18-12-pve)
The cluster contains these hosts:
pve-hp-01 (7 OSDs)
pve-hp-02 (7 OSDs)
pve-hp-03 (8 OSDs)...
I've also read the hint about RAM and/or HDD problems in the Proxmox forum.
And yes, I tried on another host and got the same error there, so I've saved myself the time of memory testing.
Correct, the 3rd disk is 2 TB and cannot be recovered. The backup drive is a local directory in Proxmox, but a...
The second step failed with the same error; see my screenshot.
There are 3 disks inside; the first and second restored fine, but the last big one is broken.