Hi Fiona,
All good questions, but they're all interlinked, so let me try to explain.
This setup is for my homelab. I use it for a couple of standard home-labbing things, for example:
SMB fileshare for PCs in the household
Docker host running a number of apps, Emby, *arr, Transmission...
I have a TrueNAS VM under Proxmox with the SATA controller passed through to TrueNAS. Proxmox boots off an NVMe drive, and TrueNAS then provides a ZFS pool. This is in turn shared with CIFS/SMB, and the idea is to mount it as necessary on VMs/CTs. Due to the well-known problems with shares in...
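For context, mounting the SMB share inside a guest might look like the /etc/fstab sketch below. The hostname, share name and credentials file are assumptions, not from my actual setup:

```
# Hedged sketch of an /etc/fstab entry in a VM/CT to mount the TrueNAS share.
# //truenas.lan/tank, the mountpoint and the credentials file are placeholders.
//truenas.lan/tank  /mnt/tank  cifs  credentials=/root/.smbcredentials,vers=3.0,_netdev  0  0
```

The `_netdev` option just tells the guest to wait for the network before trying to mount.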
qm list [OPTIONS]
Virtual machine index (per node).
--full <boolean>
Determine the full status of active VMs.
Just curious: what is the correct way of running qm list with the --full option? Presumably, according to the manpage, it should be qm list --full true
But...
I just tried creating an AlmaLinux 8 VM, and ran into exactly the same problem. I can add the host's main IP address as gateway, but not the gateway of the assigned IP range. What is going on, and is this a Proxmox problem or a Hetzner problem?
I have a Proxmox server with Hetzner, with an IP range of 123.123.123.112/28 (.112-.127, 16 addresses, .112 = gateway, .127 = broadcast).
The host has two bridges, vmbr0 with CIDR 123.123.123.112/28 and vmbr1 with CIDR 10.10.10.1/24.
On this server I have a number of virtual machines that all work fine...
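As a sanity check on the range itself, Python's stdlib ipaddress module confirms the boundaries of a /28:

```python
# Quick check of the 123.123.123.112/28 range from the post,
# using only the Python standard library.
import ipaddress

net = ipaddress.ip_network("123.123.123.112/28")
print(net.network_address)    # 123.123.123.112
print(net.broadcast_address)  # 123.123.123.127
print(net.num_addresses)      # 16
```

Note that .112 is the network address of the block, which is one reason some setups refuse it as a gateway.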
I know this thread is a couple of years old, but I just came across it looking for something else. Maybe I misunderstand the question, but I back up vzdump (vma and tar) files directly from /var/lib/vz/dump with borg, including compression and deduplication, at about 10GB/min, but of course...
For LXC containers it seems possible to bind mount a pool (pct set CTID -mp0 /poolz/secdata,mp=/secdata), but for QEMU the only way to avoid having to duplicate filesystems would be to NFS-share the pool.
It doesn't seem very practical. How are peeps setting up VM/CT as NAS with a large part...
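For the NFS route, the export could be a one-liner on the storage host. A hedged sketch, assuming the guests sit on the internal bridge network (the subnet is an assumption):

```
# /etc/exports on the host — export the pool to the internal guest network,
# then run `exportfs -ra` to apply. Subnet and options are assumptions.
/poolz/secdata  10.10.10.0/24(rw,sync,no_subtree_check)
```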
So, I created an encrypted+compressed pool, secdata. I used pvesm to add it as a zfspool, so now I see it with pvesm status:
root@pve3:~# pvesm status
Name      Type     Status   Total          Used  Available  %
..
secdata   zfspool  active   14354464013...
I've been in the situation of moving a snapshot from one server to another, and got the error that the base iso was missing. I don't recall if it was VM or CT, hence the question.
Isn't this just the currently mounted isos? What if they have been unmounted/ejected, would they still show?
What about containers, they don't need the base iso to restore?
Is there a way to list which iso or template is used/required for vm/ct?
I am carefully taking snapshots, but just realized that (sometimes?) the iso/template used to create a VM/CT is required in order to restore it. I realize I perhaps should have made a note of it during creation, but I didn't.
I'm sure...
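One way to answer my own question might be to scan the VM configs for ISO references. A minimal sketch, assuming the configs look like the usual key: value format (in practice they live in /etc/pve/qemu-server/*.conf; the sample text below is made up for illustration):

```python
# Hedged sketch: find ISO references in Proxmox VM config text.
# The sample config is invented; a real script would read the .conf files.
import re

sample_conf = """\
ide2: local:iso/AlmaLinux-8.6.iso,media=cdrom
scsi0: local-zfs:vm-101-disk-0,size=32G
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
"""

# Match "<storage>:iso/<file>" volumes attached as cdrom media.
isos = re.findall(r"(\S+:iso/\S+?),media=cdrom", sample_conf)
print(isos)  # ['local:iso/AlmaLinux-8.6.iso']
```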
Sorry, no ideas. I use Storagebox with Borgbackup over ssh. It's amazing how fast and how well it deduplicates, even snapshots. I also don't think it is a good idea to use unencrypted protocols like CIFS/SMB over the public internet.
I did try mounting a share for a test, and it worked fine right...
Storage is local zpool. 350G with 200G used.
I didn't time it, but I think it was under a minute, certainly not ten.
Just tried again, this time it succeeded after 26 seconds. Seems it's a bit hit and miss.
I believe vzdump outputs the exact same logging info on stdout and to the logfile with the same name as the dump?
I run a script for daily snapshots + borgbackup/deduplication +prune, that output everything to a comprehensive logfile so have no need for the default named logfile. Is there a...
What output might that be? I just want a standard text console. This is tty1, which you can log into directly as root and use without network or GUI. It typically shows some log output too, whether anyone is logged into it or not. Sorry if this comes across as a little grumpy, but...
What do you mean by "if I also want display output"? Like in X windows? I run a non-GUI operation, everything happens on the CLI, no windows. In any case, the objective is to avoid having to do anything in the guest OS. I used to get an 80x24 window, now it's more like a 160x60 or something window...
I've been trying to get that to work, but I think it means having to hack grub right?
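If it does come down to grub, the change might be as small as the fragment below. This is a hedged sketch of a guest's /etc/default/grub for a serial console; the exact resolution and baud rate are assumptions:

```
# Guest /etc/default/grub — sketch, run update-grub and reboot afterwards.
# Keep output on the normal console and mirror it to the serial port,
# so `qm terminal` (with a serial device added to the VM) can attach.
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200"
GRUB_TERMINAL="console serial"
```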
What am I trying to achieve? I thought that was obvious. When I create a new VM under Proxmox at Hetzner, manual correction of the network configuration is required in order to get any network access, so I...
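For reference, the manual correction I end up doing looks roughly like the guest /etc/network/interfaces fragment below. This is a sketch for a Hetzner-style routed setup where the gateway is the Proxmox host rather than an address inside the assigned range; 192.0.2.1 is a placeholder for the host's main IP:

```
# Guest /etc/network/interfaces — hedged sketch, addresses are placeholders.
auto eth0
iface eth0 inet static
    address 123.123.123.113/28     # one usable address from the routed range
    gateway 192.0.2.1              # placeholder: the Proxmox host's main IP
    pointopoint 192.0.2.1          # route all traffic via the host
```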