Hi!
So, you want to run VMs with Windows XP, 7, 8 or 10 as guest systems?
If yes, then the answer to this question is: of course. Look at this picture:
Best regards,
Gosha
WEB-GUI -> Datacenter -> Storage -> Add -> NFS (for NFS storage).
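(The same storage can also be added from the CLI with pvesm; a minimal sketch, where the storage name, server and export path are placeholders:)

pvesm add nfs nfs-store --path /mnt/pve/nfs-store --server 192.168.0.10 --export /srv/nfs --content images,backup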
Use DFS as shared storage for VMs in an HA context...
Maybe... if you manually mount the DFS share as a directory on each node (as Samba/CIFS storage),
add this directory in the WEB-GUI (like NFS storage, but as a Directory) and check the Shared option... (see the sketch below)
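A minimal sketch of that idea, assuming the share is reachable as //dfs-server/share (server, mount point and credentials are placeholders):

# on each node: mount the DFS share via cifs
mkdir -p /mnt/dfs
mount -t cifs //dfs-server/share /mnt/dfs -o username=pveuser,password=secret
# then register it once as a Directory storage and mark it shared
pvesm add dir dfs-store --path /mnt/dfs --content images --shared 1

Note that the Shared flag only tells Proxmox the same path is visible on every node; it does not mount anything itself, so the cifs mount should also go into /etc/fstab on each node.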
Hi!
I want to add a 4th node to the cluster (PVE 4.4), which uses ceph.
As I understand it, I do not need to add a 4th monitor. Are the 3 monitors already available sufficient?
How do I properly install Ceph without a monitor on the 4th node?
1. pveceph install --version hammer
2. pveceph init...
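If I understand it right, a node without a monitor only needs the Ceph packages, because /etc/pve/ceph.conf is already shared cluster-wide via pmxcfs. A sketch, assuming the new node's disk is /dev/sdb (placeholder):

# on the new 4th node only:
pveceph install --version hammer
# no new 'pveceph init' or 'pveceph createmon' needed -- the cluster
# config in /etc/pve/ceph.conf is already visible on this node;
# just add its disks as OSDs:
pveceph createosd /dev/sdb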
# /etc/default/ceph
#
# Environment file for ceph daemon systemd unit files.
#
# Increase tcmalloc cache size
TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728
## use jemalloc instead of tcmalloc
#
# jemalloc is generally faster for small IO workloads and when
# ceph-osd is backed by SSDs...
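For reference, the cache value above is 128 MiB. Switching to jemalloc is typically done by uncommenting an LD_PRELOAD line further down in this file; the library path below is an assumption for Debian and may differ on your system:

# hypothetical path -- check where libjemalloc.so lives on your node
# LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1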
Hi!
To auto-delete previous backups you need to create backup schedules in Datacenter -> Backup.
Manual backups can only be deleted manually. :)
And the max backups value in the storage settings applies to backup schedules only.
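That limit lives in the storage definition; a minimal sketch of /etc/pve/storage.cfg, with the storage name and path as placeholders:

# keep at most 3 backups per guest on this storage
dir: backup-store
        path /mnt/backups
        content backup
        maxfiles 3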
Best regards,
Gosha
Ok. I installed Ceph from deb https://download.ceph.com/debian-luminous/ stretch main
and after 'pveceph init --network ...' I tried 'pveceph createmon' again.
And got the same error:
pveceph createmon
creating /etc/pve/priv/ceph.client.admin.keyring
monmaptool: monmap file /tmp/monmap...
Yes, of course! This old server is for testing needs only. (But I'm trying to adapt it to hold a third copy of the backups... Just in case... ;) )
I'll try the Debian packages later and report the results.
Best regards,
Gosha
Hi!
I decided to try Proxmox 5 and play around with Ceph Luminous.
After installing Proxmox 5 (via the Debian installer) and Ceph via pveceph install --version luminous,
and after pveceph init --network ..., I tried to create a monitor and got an error:
# pveceph createmon
creating...
Hi!
The VMs and CTs have ceased to migrate between nodes. For example:
I tried manual SSH connections between the nodes. They do not work!
root@cn1:~# ssh cn2
... and after 2 minutes:
Connection closed by 192.168.0.240
root@cn1:~#
From my workstation, the SSH connection to all nodes works...
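(For debugging, OpenSSH's verbose mode shows at which step the connection stalls -- TCP connect, key exchange, or authentication:)

root@cn1:~# ssh -vvv cn2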