Same results on production servers with the latest updates:
# pveversion
pve-manager/8.0.9/fd1a0ae1b385cdcd (running kernel: 6.2.16-19-pve)
If it can be useful, from journalctl -r I have:
Nov 19 12:44:58 pveprod01 systemd[1]: Reached target zfs-volumes.target - ZFS volumes are ready.
Nov 19 12:44:58...
Clarification: is this answer also valid for ceph-fuse?
dpkg: fuse: dependency problems, but removing anyway as you requested:
 pve-cluster depends on fuse.
 ceph-fuse depends on fuse.
And, out of curiosity: I am not a developer, so my knowledge is limited, but the first question that comes to mind is...
I simply installed sshfs and got this warning.
It refers to pve-cluster and ceph-fuse, so I ask whether this is something to be considered.
I suppose the Proxmox team had good reasons for keeping the "old" fuse instead of fuse3.
# apt install sshfs
Reading package lists... Done
Building...
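In hindsight, a simulated run would have shown what apt planned to remove before committing (these are plain apt options, nothing Proxmox-specific):
# apt install -s sshfs
# apt-cache depends pve-cluster | grep -i fuse
# apt-cache depends ceph-fuse | grep -i fuse
The -s (simulate) run only prints the planned actions, and the apt-cache calls show which fuse package the two Proxmox components actually depend on.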
Reinstalled the cluster and recreated zpools.
Same problems.
I have read other threads that report the same "problem", but no one got a real answer.
I tried to simply disable the service with: systemctl disable zfs-import@zfs1.service
as suggested in...
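For reference, the checks I would run to see whether the failing unit is actually needed (pool name zfs1 taken from the unit above; my assumption is that the cache-based import unit already brings the pool up at boot):
# zpool status zfs1
# systemctl status zfs-import-cache.service zfs-import@zfs1.service
# journalctl -b -u zfs-import@zfs1.service
If the pool is ONLINE and only the per-pool zfs-import@ instance is failed, the error looks cosmetic rather than a real import problem.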
I am able to migrate a couple of VMs, but if I try a CT, whether running or switched off, I get:
trying to acquire lock...
TASK ERROR: can't lock file '/var/lock/pve-manager/pve-migrate-41200' - got timeout
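A sketch of how I would check what is holding that lock (the path is the one from the error above; fuser comes from the psmisc package and may need to be installed first):
# fuser -v /var/lock/pve-manager/pve-migrate-41200
# ps aux | grep -i migrate
If a process shows up there, an earlier migration or replication task for that guest is most likely still running and has to finish (or be stopped) before the new migration can take the lock.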
I simply activated replication of the VMs/CTs to the other nodes and added the VMs/CTs...
Playing with a test cluster.
After disconnecting one node to test HA migration (VM replication active), I have some corrupted VMs (they hang on boot as if the disks were corrupted).
From systemctl:
● zfs-import@zp01.service loaded failed failed...
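For reference, what I looked at on the node afterwards (pool name zp01 as in the unit above):
# zpool status zp01
# journalctl -b -u zfs-import@zp01.service
# pvesr status
This shows whether the pool actually came back and how old the last successful replication run was; my assumption is that the "corruption" is really the VM being started from the last replicated snapshot, so anything written after that snapshot is gone.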
So, migration and replication are done using only one network by default (the default WAN network)?
And I can only optionally choose which network to use?
I supposed that configuring multiple networks in the cluster configuration would make it use them automatically as needed.
Thanks, P.
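For anyone finding this later, my understanding is that the cluster-wide setting for migration traffic lives in /etc/pve/datacenter.cfg, for example (the subnet below is just a placeholder):
migration: secure,network=192.168.1.0/24
The extra links configured during cluster creation are corosync links for cluster communication only; as far as I can tell they are not used for moving disk or RAM data, so migration/replication traffic has to be steered explicitly as above.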
I can't understand why replication stops if I disconnect the cable on the "local" network, while it continues if I disconnect the "internet" network.
I supposed it would use the "local" link for replication and the alternate link if the "local" one fails.
I have this test cluster (3 nodes, 2 NICs)...
Hi,
Is there some fast way/trick to move VMs from an old PVE 7.0, where they are stored as normal files, to a new PVE 8.0 using ZFS (block devices)?
I suppose the simpler answer may be to use backup/restore, but I am searching for something like directly copying the image folders, because I have about 3...
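A sketch of the non-backup route I had in mind, assuming the image file is copied to the new node first and that the target VM (100 here, just an example ID) and the ZFS storage (local-zfs in my case) already exist there:
# scp /var/lib/vz/images/100/vm-100-disk-0.qcow2 newnode:/tmp/
# qm importdisk 100 /tmp/vm-100-disk-0.qcow2 local-zfs
The imported disk then appears as an unused disk on the VM and still has to be attached and set as the boot disk.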
Thanks for the explanation.
I obviously had read some documentation, so some of these concepts are familiar to me; what I lack is practical experience.
My focus was on evaluating this specific situation:
3 TB of data
15 TB of disk space (4 x 3.84 TB)
We are at 20% of physical space.
With 2 disks/OSD we...
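Just to show where the 20% figure comes from (assuming the default 3/2 replicated pool and all 3 nodes contributing their 4 x 3.84 TB):
raw capacity: 3 nodes x 4 x 3.84 TB ≈ 46 TB
space used: 3 TB of data x 3 replicas = 9 TB
9 TB / 46 TB ≈ 20% of the physical space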
I am at my first experiences with Ceph, so I don't know it very well.
In my limited experience with standard file systems and/or storage, I usually prefer to have separate, smaller "pools" if possible, so that if something goes wrong it is simpler and faster to recover the one that has problems.
I don't...
I have a 3-node Ceph cluster; each node has 4x3.84 TB disks to use as OSDs.
VM images are about 3 TB.
Is it better to create a pool with all the 4x3.84 TB disks on each node (15 TB), or create 2 pools of 2x3.84 TB on each node (7.5 TB)?
Thanks, P.
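To be concrete, the single-pool variant I am thinking of would be something like this (pool name vmdata is just a placeholder; 3/2 are the default size/min_size):
# pveceph pool create vmdata --size 3 --min_size 2 --add_storages
The --add_storages flag also registers the pool as a PVE storage, so it shows up as an RBD target for VM disks.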
Now that I am implementing the new nodes, I have decided that my final goal is to have an 8.x cluster with 3 new nodes (with Ceph), and the only things I need to carry over to the new cluster from the old one (apart from VMs & CTs) are the firewall rules and the PBS host.
For the...
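Concretely, the two pieces I plan to carry over look like this (the storage ID pbs-backup and the placeholders in angle brackets are mine, not real values):
# scp oldnode:/etc/pve/firewall/cluster.fw /etc/pve/firewall/cluster.fw
# pvesm add pbs pbs-backup --server <pbs-host> --datastore <datastore> --username backup@pbs --fingerprint <sha256-fingerprint>
The cluster.fw file holds the datacenter-level firewall rules, and pvesm add pbs recreates the PBS storage entry on the new cluster.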
I have a PVE 7.x cluster with 3 nodes (2 PVE with VM/CT + 1 PVE/PBS).
No ceph.
I want to add 3 new PVE 8.x nodes where I will also install Ceph, and when it is ready, move the VMs from the old PVE 7.x nodes, which will then be decommissioned (the PBS node upgraded).
Can I mix 7.x and 8.x?
Thanks, P.
There are many people more expert than me who can probably give you better answers, but in similar situations I avoid complicating my life by trying to mount PVE disks in another PVE system (LVM conflicts etc., in my experience) and simply access the disk(s) from a SystemRescue boot.
P.
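A minimal sketch of what I mean, assuming the disk comes from a default Proxmox install (LVM volume group pve, root LV pve/root); from a SystemRescue boot there is no competing VG, so it can be activated and mounted read-only:
# vgscan
# vgchange -ay pve
# mount -o ro /dev/pve/root /mnt
For a ZFS-based install the equivalent would be something like: zpool import -f -R /mnt -o readonly=on rpool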
I have a cluster with 2 x SCALE-4 (at RBX), each with 2 SSDs for the PVE system and 4/6 NVMe drives (Samsung MZQLB), plus a backup/storage server.
They are using 2 NVMe disks as storage, so I have 2 spare NVMe disks available.
CPUs: 2× Intel Xeon Gold 6226R - 16c/32t
RAM 384 GB
Public bandwidth 1 Gbps, Private...
8 cores per Windows Server VM.
2 Windows Server VMs per PVE server.
+ some cores for the LXC containers.
8+8+4 = about 20 cores.
Distribute the 4 heavy VMs between the 2 servers.
Have a backup solution for disaster recovery.
You are probably right, but I like to have spare capacity (which will always be...