If the system fails, there is usually no way to access the image, of course.
If you need protection against the worst case, you have the following options:
Backup -> of course stored on a different system
scheduled ZFS replication
or, at best:
at least 3 nodes as a cluster with shared storage...
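Scheduled replication can be sketched with plain zfs send/recv; the pool, dataset, and host names below are assumptions, not the thread's actual setup:

```shell
# Snapshot the zvol backing the VM disk (dataset name is an example)
zfs snapshot rpool/data/vm-100-disk-0@repl-20240101

# Full send of the first snapshot to the second node (hostname is an example)
zfs send rpool/data/vm-100-disk-0@repl-20240101 | \
    ssh backup-node zfs recv -F tank/replica/vm-100-disk-0

# Later runs send only the increment between two snapshots
zfs snapshot rpool/data/vm-100-disk-0@repl-20240102
zfs send -i @repl-20240101 rpool/data/vm-100-disk-0@repl-20240102 | \
    ssh backup-node zfs recv -F tank/replica/vm-100-disk-0
```

A cron job around these two commands gives you the "scheduled" part.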
It is a zvol image, so you cannot see it directly as a file.
The easiest way is to back up the VM via Proxmox; the backup includes everything, image and config. Then restore the backup on the other system.
Or use the ZFS replication built into Proxmox (not yet in the GUI, I think).
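The backup/restore route uses vzdump and qmrestore; the VM ID, storage names, and archive path below are examples:

```shell
# On the source node: back up VM 100 to an NFS-backed backup storage
# (storage name and options are examples)
vzdump 100 --storage backup-nfs --mode snapshot --compress lzo

# On the other system: restore the archive as VM 100 onto local ZFS storage
# (the archive file name contains a timestamp; this path is a placeholder)
qmrestore /mnt/pve/backup-nfs/dump/vzdump-qemu-100-backup.vma.lzo 100 \
    --storage local-zfs
```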
It should not be very complicated to recreate a config file if you know at least some details about the VM, like the type of system, number of CPUs, etc.
You should copy the image away before you do that, and copy it back after creation.
After that, learn the rule: Backup .. Backup ... Backup
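A recreated config under /etc/pve/qemu-server/ might look like this; all values here are examples to adjust to what you know about the VM:

```
# /etc/pve/qemu-server/100.conf -- minimal example, values assumed
bootdisk: scsi0
cores: 2
memory: 4096
name: restored-vm
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
ostype: l26
scsi0: local-zfs:vm-100-disk-0,size=32G
```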
Hi javier, you should set the BIOS time to UTC and set up a correct NTP server in Proxmox if your server has network access.
Here is a list of NTP servers in Argentina:
http://www.pool.ntp.org/zone/ar
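On a systemd-based Proxmox node, the Argentinian pool servers can go into timesyncd; a config fragment sketch (depending on your version you may be using chrony or ntpd instead):

```
# /etc/systemd/timesyncd.conf -- pool hosts from pool.ntp.org/zone/ar
[Time]
NTP=0.ar.pool.ntp.org 1.ar.pool.ntp.org 2.ar.pool.ntp.org
```

After editing, restart the service with `systemctl restart systemd-timesyncd`.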
Uhhm, this looks like a severe PCIe problem with the device at 0000:03:00.0.
From vendor and device ID it is a PCI switch from PLX Technology Inc.:
8624 PEX 8624 24-lane, 6-Port PCI Express Gen 2 (5.0 GT/s) Switch [ExpressLane]
As it is a switch, maybe the device behind the switch is...
LVM in the VMs has some advantages for reconfiguring the filesystem layout (e.g. adding a /var/lib/mysql partition, etc.), but it is just a matter of preference to use one or the other.
They are configured as BlueStore. There are DB and WAL devices for a BlueStore OSD; block.db is enough, and as I read from the docs, the WAL will then also be on this device.
For the partitioning of the NVMe disk: the default will make a 1G partition, which is quite small. Make partitions of, say, 20G...
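Creating an OSD with its block.db on such an NVMe partition could look like this; the device names and the 20G size are assumptions:

```shell
# Create a 20G partition on the NVMe for block.db
# (partition number and device name are examples)
sgdisk -n 1:0:+20G /dev/nvme0n1

# Create the OSD: data on the SATA disk, DB (and therefore WAL too)
# on the NVMe partition
ceph-volume lvm create --bluestore --data /dev/sda --block.db /dev/nvme0n1p1
```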
It depends on your workload and what performance you expect. Ceph will of course benefit from more OSDs and hosts, but I see acceptable performance from three nodes with 4 OSDs each on 7.2k SATA disks. In my case a Samsung EVO 960 NVMe is used as block.db.
So just try it with the...
If the memory modules work in the Dell server, they should be OK, at least for a start. The server will not run at its maximum speed, but whether that is a problem depends on the workload you want to put on it. Of course you cannot expect the speed of a modern system.
From the Intel data, the Optane...
What we do:
We run backups weekly onto an external NFS storage, which in turn is backed up to a tape robot (Tivoli agent; this is provided by the university computing center).
We back up the state together with the image, at least for all images on our Ceph storage.
Some very large images (WSUS data...
I know how to put DB partitions (as I understand from the docs, they will be used for the WAL too) onto SSD/NVMe partitions. I wanted to know how to put multiple complete OSDs onto a single NVMe with BlueStore.
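For what it's worth, ceph-volume's batch mode can split one device into several OSDs; a sketch based on the ceph-volume documentation (the count of 4 and the device name are assumptions):

```shell
# Create four complete BlueStore OSDs on a single NVMe device
ceph-volume lvm batch --bluestore --osds-per-device 4 /dev/nvme0n1
```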
If you share a VM's drive back to the host running the VM, you will always get into chicken-and-egg trouble, regardless of the protocol.
You can of course do it for short periods to transfer data, but there are other ways to do that (scp or whatever). Backing up VMs into a VM running on the same...