LXC - Cannot assign a block device to container

This volume does not support snapshots, so the warning is correct: you cannot use snapshot mode, and vzdump automatically "downgrades" to suspend mode.

I am not sure why you are passing through a block device from the host like this; you can just use

Code:
mp0: /dev/sdc1,mp=/mnt/timeshift,backup=0

instead. Of course, for the DVB device you still need the adapted AppArmor (AA) profile and settings.
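
For reference, DVB passthrough into the container usually needs entries along these lines in the container config. Treat this as a sketch only: major number 212 is the standard Linux DVB character device major, but the exact device numbers and paths depend on your adapter and setup.

Code:
lxc.cgroup.devices.allow: c 212:* rwm
lxc.mount.entry: /dev/dvb dev/dvb none bind,optional,create=dir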

Hello fabian,

thank you for the fast response.

I am passing the block device through this way because @mjb2000 described it in this thread a few posts ago. I didn't read the official documentation. Sorry, my mistake. ;)

But yes, you are right. It is as easy as you wrote. The following did the trick:

Code:
mp0: /dev/sdc1,mp=/mnt/timeshift,backup=0

By the way, with that mount method I no longer see the message about snapshot mode not being supported.

Code:
INFO: starting new backup job: vzdump 110 --mode snapshot --remove 0 --node proxmox --compress lzo --storage local
INFO: Starting Backup of VM 110 (lxc)
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: vdr
INFO: excluding device mount point mp0 ('/mnt/timeshift') from backup
INFO: creating archive '/var/lib/vz/dump/vzdump-lxc-110-2017_03_03-18_29_37.tar.lzo'
INFO: Total bytes written: 1142456320 (1.1GiB, 139MiB/s)
INFO: archive file size: 645MB
INFO: Finished Backup of VM 110 (00:00:08)
INFO: Backup job finished successfully
TASK OK

Thank you very much!

Greetings Hoppel
 

Yes, but only because the container is not running, so PVE always uses "stop" mode. Since the container is stopped there are no consistency problems, and stop mode is the most efficient mode in that case.

It basically boils down to this (when selecting snapshot mode):
Code:
running?
----> yes: do all volumes included in the backup support snapshots?
----------> yes: attempt to create a snapshot
---------------> worked: continue with snapshot mode
---------------> failed: fall back to suspend mode
----------> no: fall back to suspend mode
----> no: stop mode
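
Expressed as an illustrative sketch in Python (this is not the actual vzdump code, which is written in Perl; all names here are made up):

Code:
# illustrative sketch only - not the real vzdump implementation
def choose_backup_mode(running, all_volumes_support_snapshots, try_create_snapshot):
    if not running:
        return "stop"        # stopped container: no consistency issues, fastest mode
    if all_volumes_support_snapshots and try_create_snapshot():
        return "snapshot"    # snapshot created successfully
    return "suspend"         # no snapshot support, or snapshot failed: fall back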

The difference between snapshot mode and suspend mode is very small for most containers, with two big exceptions:
  • very I/O-busy containers: the second rsync run (which happens while the container is suspended) will take longer, i.e., a bit more downtime
  • containers with FUSE mounts: suspension can fail, so avoid such container setups if at all possible
 
I am trying to pass storage directly to containers for the purposes of an active/passive NFS cluster. For this purpose I have a RAID controller connected directly to two Proxmox nodes. As I see it, I have the following options:
1. Map the storage to one container/VM on one host and migrate it to the other for failover. This doesn't seem to work, as the locally mapped storage prevents migration. Is there a way to make this arrangement work?
2. Map the storage to two containers running heartbeat + NFS server, one on each node. For this arrangement to work correctly, the storage needs to stay unmounted at boot (heartbeat will mount it when the storage node is active). How do I attach the block device to the container without mounting it?
3. Run the heartbeat NFS server on the Proxmox nodes directly. Is this a good idea?
4. Am I missing a better alternative?

Any comments welcome.
 

Run the whole thing in a VM, and use regular disk passthrough.
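
For completeness, a sketch of what that passthrough could look like (the VMID and the device path below are placeholders; a stable /dev/disk/by-id path is preferable to /dev/sdX):

Code:
# list stable identifiers for the disk
ls -l /dev/disk/by-id/
# attach the whole disk to the VM as an additional SCSI disk
qm set <vmid> -scsi1 /dev/disk/by-id/<your-disk-id>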
 
Hello fabian,

here is my container configuration file:

Code:
arch: amd64
cores: 1
hostname: vdr
memory: 4096
net0: name=eth0,bridge=vmbr0,gw=10.11.11.1,hwaddr=0A:D8:58:1F:AA:A3,ip=10.11.11.12/24,type=veth
.....

Code:
# UNCONFIGURED FSTAB FOR BASE SYSTEM
/dev/sdc1               /mnt/timeshift/         ext4 defaults 0 2


I did it the way @mjb2000 described.

Greetings Hoppel

Is there a way to specify a block device by UUID instead of /dev/sdX?
 
/dev/disk/by-****
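
For example, with the by-uuid symlink the mount point entry from above could look like this (the UUID is a placeholder; check ls -l /dev/disk/by-uuid/ for the real one, and note that the symlink simply resolves to the underlying device node):

Code:
mp0: /dev/disk/by-uuid/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX,mp=/mnt/timeshift,backup=0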
 
