I think I have tried everything with zpool import, including -D, -f and many other options, always with the same result. I've given up on this and will start over. It annoys me a bit, as I would have liked to know that I'd be able to reuse a ZFS filesystem with a new installation, but on the other hand I can't waste more...
Actually, isn't there a guide for ZFS somewhere? I have so many questions but find it quite hard to find the answers, like:
- when I create a VM, the installers always want to create ext4 file systems for it. Since this is now on ZFS, wouldn't it make more sense to just create a dataset as the file...
I tried replacing the boot drive and installing a fresh Proxmox - now what?
root@pve2:~# zpool status
no pools available
root@pve2:~# zpool import
no pools available to import
root@pve2:~# zpool list
no pools available
root@pve2:~# zfs list
no datasets available
What do I need to do to get it...
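For reference, a rough sketch of the kind of commands I'd expect this to involve, assuming the old pool's disks are still attached and the pool was called, say, tank (the pool name and device directory are placeholders):

# point the import scan at the stable device names, in case the default scan misses the disks
zpool import -d /dev/disk/by-id

# if the pool shows up, import it; -f is needed when it was last used by the previous install
zpool import -f -d /dev/disk/by-id tank
zpool status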
So let's say you want to reinstall Proxmox for whatever reason, but there's an existing and functioning raidz pool that you want to keep. What do you do?
Add a new drive to the server, boot a Proxmox installer, and tell it to install on the new drive - will it automatically understand not to...
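If the import itself works, my understanding is that the fresh installation still has to be told about the pool as storage - something along these lines, where tank and the storage ID are placeholders:

# import the existing pool on the newly installed node
zpool import -f tank

# register it with Proxmox as a ZFS storage backend
pvesm add zfspool tank-storage --pool tank
pvesm status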
On a system with zfs/raidz, is it best to boot directly from the raidz or is it better to add a small standard disk to boot from? I can imagine that if things go belly up for any reason, it's easier to recover with a separate boot drive. I realize the best may be to boot from a mirrored pair, but...
Yes, it seems it's not possible, so in the end I decided to run the NFS server directly on the host. I then mount the NFS share in a VM and share that with Samba and WebDAV. Not perfect, but the closest I could get.
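In case it helps anyone with a similar setup, the pieces are roughly these - the dataset path, network range and host name are just examples:

# on the Proxmox host, export a dataset over NFS (line in /etc/exports)
/tank/share 192.168.1.0/24(rw,sync,no_subtree_check)

# reload the export table after editing /etc/exports
exportfs -ra

# inside the VM, mount the share before re-exporting it via Samba/WebDAV
mount -t nfs pve2:/tank/share /mnt/share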
https://pve.proxmox.com/wiki/Performance_Tweaks#Disk_Cache suggests the fastest cache method is writeback, but doesn't mention the underlying FS at all.
Given that the hypervisor is installed with ZFS raidz1-0 with an SSD L2ARC, does it make sense to add yet another caching level?
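To be concrete, what I'm asking about is the per-disk cache setting, which as far as I understand is set on the drive line of the VM - the VM ID, storage and volume names below are placeholders:

# set the virtual disk's cache mode to writeback for VM 100
qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=writeback

# look at the host's ARC/L2ARC statistics to see what ZFS is already caching
cat /proc/spl/kstat/zfs/arcstats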
Ok, now we're getting somewhere - I found an etc/vzdump/vps.conf with the QUOTAUGIDLIMIT line. I commented it out, packed the files back into the tar file, and retried the pct restore.
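For anyone hitting the same error, the round trip looked roughly like this - the paths are from my archive, yours may differ:

# unpack the dump into a working directory
mkdir /tmp/ct-work && tar -xf vzdump-openvz.tar -C /tmp/ct-work

# comment out the offending line in the old OpenVZ config
sed -i 's/^QUOTAUGIDLIMIT/#QUOTAUGIDLIMIT/' /tmp/ct-work/etc/vzdump/vps.conf

# repack and retry the restore
tar -cf vzdump-openvz-fixed.tar -C /tmp/ct-work .
pct restore 100 vzdump-openvz-fixed.tar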
Thanks but I already had a look at that page, many times actually.
"you can edit the config file in the tar archive manually" - yes, which config file?
vzdump-openvz.tar is an uncompressed vzdump from a running openvz server.
I am trying to restore it as a Proxmox LXC container but get an error:
# pct restore 100 vzdump-openvz.tar
unable to restore CT 100 - unable to parse config line: QUOTAUGIDLIMIT=3000
This value is really not that...
Uhm, ok. How do you "fix" a VM agent? It's installed ("yum install qemu-guest-agent") and reports that it was called. The caller didn't get the message though.
# systemctl status qemu-guest-agent.service
● qemu-guest-agent.service - QEMU Guest Agent
Loaded: loaded...
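For completeness, these are the checks I know of on both ends - VM ID 100 is a placeholder:

# on the host: make sure the guest agent option is enabled in the VM config
qm set 100 --agent enabled=1

# after a full stop/start of the VM, test the channel from the host
qm agent 100 ping

# inside the guest: confirm the service is actually running
systemctl status qemu-guest-agent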
Hi Richard,
Thanks for the update, very interesting!
1)
So what exactly is logged in /var/log/pveproxy/access.log - presumably activities on port 8006?
I have an iptables rule that blocks traffic to port 8006 (and 22) except from my IP address:
Chain INPUT (policy ACCEPT)
target prot opt...
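Roughly this kind of rule set, with 203.0.113.10 standing in for the allowed address:

# allow management traffic only from one address
iptables -A INPUT -p tcp -s 203.0.113.10 --dport 8006 -j ACCEPT
iptables -A INPUT -p tcp -s 203.0.113.10 --dport 22 -j ACCEPT

# drop the same ports for everyone else
iptables -A INPUT -p tcp --dport 8006 -j DROP
iptables -A INPUT -p tcp --dport 22 -j DROP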
Cool, so it's just a matter of being fast enough ;)
Would it make sense to thin-provision the directory? I guess to prevent it from running away and taking up all the ZFS space? I'm kinda new to ZFS - what's best practice? (https://wiki.debian.org/ZFS)
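By limiting I mean something like the following, assuming the directory storage lives on its own dataset, e.g. tank/dirstorage (names and sizes are placeholders):

# cap how much space the dataset and its snapshots may consume
zfs set quota=200G tank/dirstorage

# or cap only the dataset's own data, excluding snapshots
zfs set refquota=200G tank/dirstorage

# check what is currently set
zfs get quota,refquota tank/dirstorage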
I had a requirement for a new Proxmox server with ZFS. Unfortunately the hosting company I'm with did not have a bootable Proxmox+ZFS installer, so I had to install Debian first and then add Proxmox and ZFS on top. By and large this wasn't too hard, and I now have the system set up with the base...
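For the record, the gist of the procedure was roughly the following - this assumes a current Debian release (bookworm here), so adjust the repository and key names to whatever your host actually runs:

# add the Proxmox VE no-subscription repository
echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list

# fetch the repository signing key
wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg

# install the Proxmox VE packages and the ZFS tools
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi zfsutils-linux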
The Firewall Wiki page https://pve.proxmox.com/wiki/Firewall explains:
"If you enable the firewall, traffic to all hosts is blocked by default. Only exceptions is WebGUI(8006) and ssh(22) from your local network."
I think quite a lot of Proxmox users have the server in a remote located...
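As a sketch, a restricted setup in /etc/pve/firewall/cluster.fw could look something like this - the IPSet name and the address 203.0.113.10 are placeholders for the remote management address:

[OPTIONS]
enable: 1

[IPSET management]
203.0.113.10

[RULES]
IN ACCEPT -source +management -p tcp -dport 8006
IN ACCEPT -source +management -p tcp -dport 22

Once the firewall is enabled, the default input policy drops everything else, so the important part is getting the management source right before turning it on.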