Quick Update on the AutoDNS Login Issue:
After some debugging, the issue turned out to stem from the quotation marks (" and ') used in the environment variables in the dns_autodns.sh script.
Original Variables:
AUTODNS_USER="AutoDNS username"
AUTODNS_PASSWORD="AutoDNS...
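A minimal sketch of the workaround, assuming the quote characters ended up as part of the values the script read (the credentials below are placeholders, not the real ones):

```shell
# Hypothetical fix sketch: export the variables so that no literal " or '
# characters end up inside the values dns_autodns.sh reads.
export AUTODNS_USER=myuser          # placeholder username
export AUTODNS_PASSWORD=mypassword  # placeholder password
echo "$AUTODNS_USER"                # → myuser
```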
OK, so I added "rbd default features = 5" to ceph.conf. The default was 61.
So I only have "layering" (1) and "exclusive-lock" (4); "object-map" (8), "fast-diff" (16) and "deep-flatten" (32) are now disabled by default.
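The feature values form a bitmask, so the two defaults can be checked with simple arithmetic:

```shell
# layering=1, exclusive-lock=4, object-map=8, fast-diff=16, deep-flatten=32
echo $((1 + 4))                  # new default from "rbd default features = 5"
echo $((1 + 4 + 8 + 16 + 32))    # previous default: 61
```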
I don't "want" the VMs to use the kernel rbd driver. The question was whether this is a bug. So, if I understand this correctly, the rbd kernel module is not up to date? Will it stay this way?
I just want easy direct access from the host to the volumes. If this will stay this way I could script some...
Just a plain installation of PVE 5 with Ceph 12.1 (pveceph install..) and VMs (with volumes) created from the PVE web interface. Nothing special. So you can't reproduce this?
Hello,
I am testing PVE 5 with Ceph (12.1) and wanted to "map" a ceph volume, but I get an error. Is this a bug? Did that work with other versions of PVE or Ceph?
Thanks,
esco
# rbd map <ceph-pool>/foo
rbd: sysfs write failed
RBD image feature set mismatch. Try disabling features unsupported...
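The truncated error message suggests disabling the unsupported features; for an existing image this could be done per image rather than cluster-wide. A sketch, using the same placeholder pool/image names as the command above:

```shell
# Disable the features the kernel client doesn't support, then retry the map.
# <ceph-pool>/foo are placeholders for the real pool and image names.
rbd feature disable <ceph-pool>/foo object-map fast-diff deep-flatten
rbd map <ceph-pool>/foo
```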
Hello,
I can reproduce this, too. But I don't think this is network related. It looks like the VM gets completely "frozen" for a few seconds (here 17 s).
Some details:
Empty PVE 5 test cluster with 3 nodes and 10 GBit/s network
10 GB "clean" Ubuntu VM on ZFS
Ping from outside:
PING 172.20.60.128...
I completely zeroed the disks and reinstalled. Now it works.
It may be interesting that the disk didn't have a partition table in the previous installation with software RAID. It was a software RAID 1 using the whole disks (sda and sdb).
But using ZFS shouldn't depend on the previous layout...
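A lighter-weight alternative to zeroing the whole disks might be clearing just the leftover signatures. A sketch, assuming wipefs from util-linux is available, demonstrated on a file-backed "disk" instead of a real /dev/sda:

```shell
# Create a scratch "disk" carrying a fake MBR signature (0x55 0xAA at
# offset 510), then erase all detected signatures as one would on /dev/sda.
disk=$(mktemp)
dd if=/dev/zero of="$disk" bs=1M count=10 2>/dev/null                     # 10 MiB scratch file
printf '\125\252' | dd of="$disk" bs=1 seek=510 conv=notrunc 2>/dev/null  # fake MBR magic
wipefs "$disk"       # lists the detected "dos" signature
wipefs -a "$disk"    # erases all signatures, as one would before reinstalling
rm -f "$disk"
```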
Hello Michele,
I have been using LVM snapshots with storebackup for several years for incremental backups, without problems.
If you are looking for something newer to back up LVM snapshots incrementally, I would take a look at zbackup or bup. But bup was a bit slow in my first test...
Ok, then this increases your backup.
Instead of deleting you could overwrite it with zeros:
http://linux.die.net/man/1/shred
Something like this could do the job:
shred -n 0 -z yourbackupfile.zip
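A quick demonstration of the effect (file names here are just examples): the zeroed file keeps its full size, but compresses to almost nothing afterwards.

```shell
# Fill a scratch file with incompressible data, zero it with shred,
# then show that the size is unchanged while the compressed size collapses.
f=$(mktemp)
head -c 1048576 /dev/urandom > "$f"   # 1 MiB of random data
shred -n 0 -z "$f"                    # single pass of zeros, same size
wc -c < "$f"                          # still 1048576 bytes
gzip -c "$f" | wc -c                  # only about 1 KB after compression
rm -f "$f"
```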
esco
Hello Alex,
Don't check the free space inside the VM! You didn't check the size of the disk! You checked the space actually used inside the VM, and not whether there is any disk traffic causing this.
Check the disk from the outside! Copy it and compress it manually, and check the size of that.
And...
Hello Alex,
could you copy the disk manually and compress it, to check whether this grows too?
I would say that something is running which generates "disk traffic" but doesn't use much space in total.
Possibilities:
Traffic at /tmp: move it to RAM
Other location: add additional small...
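For the /tmp case above, moving it to RAM could look like this /etc/fstab line (a sketch; the 512M size is an arbitrary example, not a recommendation):

```shell
# /etc/fstab entry mounting /tmp as tmpfs (RAM-backed)
tmpfs /tmp tmpfs defaults,size=512M 0 0
```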
Hello,
this happened for me tonight.
VM on logical volume.
Backup started manually:
#vzdump --mailto <mail@address> --mode snapshot --compress lzo --storage backup 11015
Log:
11015: Jul 21 23:40:57 INFO: Starting Backup of VM 11015 (qemu)
11015: Jul 21 23:40:57 INFO: status = running...