Yes and no. I don't know how it is with AdGuard, but with Pi-hole you can definitely have different DNS settings for the Pi-hole service and for the LXC itself (one acts as DNS client, the other as DNS server). That way there is also no loop if the LXC has the Fritzbox set as its DNS and the Fritzbox...
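In case it helps, this is roughly what I mean (example Fritzbox IP and a hypothetical VMID, adjust to your setup):
[CODE]
# DNS client of the LXC itself -> point it at the Fritzbox (example IP 192.168.178.1):
pct set 105 --nameserver 192.168.178.1

# The DNS server part (Pi-hole's upstream resolver) is configured separately,
# e.g. in the Pi-hole web UI under Settings -> DNS.
[/CODE]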
He wrote that the disk is recognized on another computer running Windows.
I guess you already tried another USB port? And 240GB sounds more like an SSD than an HDD? Or is it actually an HDD?
If you want to run a cluster you should have at least 2 full PVE nodes + a 3rd machine (for example a bare-metal PBS, an SBC, a thin client, or a VM on some NAS box) as a qdevice. With that script you are risking a split brain and potentially data loss. "pvecm expected 1" should only be used manually after...
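For reference, setting up a qdevice is roughly this (example IP for the 3rd machine):
[CODE]
# on the 3rd machine (e.g. the PBS host):
apt install corosync-qnetd

# on every PVE node:
apt install corosync-qdevice

# then once, from one PVE node:
pvecm qdevice setup 192.168.1.5
[/CODE]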
Padding overhead only affects zvols, as only zvols have a volblocksize. LXCs and PBS use datasets, which use the recordsize instead (and therefore have no padding overhead). That is one of the many reasons why you usually don't want to use a raidz1/2/3 for storing VMs but a striped mirror instead...
So with an 8K volblocksize and ashift=12 you would lose 50% of the raw capacity (14% because of parity, 36% because of padding overhead). Everything on those virtual disks should consume 71% more space. To fix that you would need to destroy and recreate those virtual disks. The easiest would be to change...
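Roughly like this (storage/zvol names are just examples from a default install; you can also change the block size via the web UI instead of pvesm):
[CODE]
# check the volblocksize of an existing virtual disk:
zfs get volblocksize rpool/data/vm-100-disk-0

# raise the block size used for newly created zvols on that storage:
pvesm set local-zfs --blocksize 16k
# (same as Datacenter -> Storage -> local-zfs -> Block Size in the web UI)

# existing zvols keep their old volblocksize; you have to recreate them,
# e.g. by moving the disk to another storage and back again.
[/CODE]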
With only 2 or 3 nodes you don't need a 10Gbit switch. You could directly connect two servers via single-port NICs or 3 servers via three dual-port NICs (full mesh). Got my single-port 10Gbit NICs for 30€ each and the DAC cables for 10€.
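A direct connection is just a small /etc/network/interfaces entry on each node (example addresses, your NIC name will differ):
[CODE]
# node 1:
auto enp3s0
iface enp3s0 inet static
    address 10.10.10.1/30

# node 2 gets 10.10.10.2/30 on its directly connected port.
[/CODE]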
Very hard to read those tables if you don't put them between CODE tags.
Are you sure it's a 128K volblocksize and not just a 128K recordsize? You could check that with zfs get volblocksize,recordsize. Because a 16K (previously 8K) volblocksize and a 128K recordsize would be the defaults.
With a 7-disk...
I would count that "HDD for data + SSD for metadata" vs "HDD for data as well as metadata" as an important part of a filesystem speed test, not just a feature test. It will result in orders of magnitude faster GC tasks and is a cheap option for people who aren't willing to pay for SSD-only, as the SSDs...
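For reference, adding such a special device to an existing pool is roughly this (pool/disk names are just examples):
[CODE]
# add a mirrored special vdev (metadata) to the pool:
zpool add tank special mirror /dev/disk/by-id/ata-SSD_1 /dev/disk/by-id/ata-SSD_2

# optionally also store small blocks on the SSDs, not only metadata:
zfs set special_small_blocks=4K tank
[/CODE]
Keep in mind the special vdev should have the same redundancy as the data vdevs, because losing it means losing the whole pool.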
You can set up a caching proxy server for Steam that stores your game downloads, so you only have to download each game once. The proxy saves it, and any other PC that later wants to download and install that game will get it from your local proxy instead of the Steam online servers.
With that...
Usually you would share those folders via SMB/NFS. But playing a game that loads resources from a remote network share sounds terrible: objects popping in, textures not loading and so on. Maybe a Steam proxy would be an alternative?
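Lancache would be the usual choice for that. A minimal sketch of the caching container itself (names/paths are just examples; you also need the lancache-dns part or manual DNS entries so the clients resolve the Steam CDN hostnames to your cache, see the lancache docs):
[CODE]
docker run -d --name lancache \
  -v /tank/lancache:/data/cache \
  -p 80:80 -p 443:443 \
  lancachenet/monolithic:latest
[/CODE]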
Maybe you are using a raidz1/2/3 with default volblocksize and it is "padding overhead"?
Output of zpool list -v and zfs list -o space would give some hints.
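[CODE]
zpool list -v       # pool layout and raw allocated/free space per vdev
zfs list -o space   # where the space goes per dataset/zvol (data, snapshots, reservations)
[/CODE]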
If you only want to store cold data on it, yes. But used SFP+ NICs and DACs aren't that expensive and would allow for 10Gbit.
For redundancy you usually would use 3 servers. 2x PVE nodes (with for example ZFS replication) + 1x PBS that also works as a qdevice.
Usually you would set up some SMB/NFS server handling those disks and then mount the SMB/NFS shares inside the VMs. This could be a NAS VM with disk passthrough, a NAS LXC with bind-mounted folders from the disks that are mounted on the PVE host, or the SMB/NFS server running directly on the PVE...
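A bind mount for the LXC case is a one-liner (example VMID and paths, mp0 is just the first free mount point index):
[CODE]
pct set 101 -mp0 /tank/media,mp=/mnt/media
[/CODE]
For unprivileged LXCs you may additionally have to sort out the UID/GID mapping so the container can actually write to those folders.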
Yes, it is, as long as PVE isn't installed on that pool. "<old-device>" is the disk you want to replace, "<new-device>" is the disk you want to replace it with.
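So roughly (hypothetical pool name "tank"; best use /dev/disk/by-id/ paths for the devices):
[CODE]
zpool replace tank <old-device> <new-device>
zpool status tank    # shows the resilver progress afterwards
[/CODE]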
Both are valid options. It depends on how you would prefer to manage them: HA/migration/backup/templates on the PVE level, doing operations on whole VMs, or doing all of that on the container level inside the guest OSs.
Disk passthrough should perform better, though. It's not real passthrough like with PCI, where the VM can access the physical hardware directly and exclusively; instead the VM works with a virtual disk that is mapped to the physical disk. USB passthrough, on the other hand, is...
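For the disk passthrough, that would be something like this (example VMID and a made-up by-id path, use the one of your disk):
[CODE]
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX_EXAMPLE123
[/CODE]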
Yes. But it's the simplest way to achieve it as long as PVE doesn't support Docker containers.
If you want to do it the intended way, you would probably set up a Kubernetes cluster, with no PVE involved in rolling out/migrating/backing up those Docker stacks. But if you are able to set up such a...