I think I figured it out: the pool is Bigdata, and you create the ZFS dataset after that... so zpool first, then zfs. In my case I just made Bigdata, then Bigdata/Zeus, re-did the Samba share for Bigdata/Zeus and restarted it, and now I can see the files transferring and I'm not getting SMB errors...
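In case it helps anyone else, roughly what that looks like from the Proxmox shell (the disk names and raidz level are placeholders for my own layout, adjust to yours):

zpool create Bigdata raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd   # pool first (example disks)
zfs create Bigdata/Zeus                                           # then the dataset, mounted at /Bigdata/Zeus by default

Then point the share at the dataset in /etc/samba/smb.conf:

[Zeus]
path = /Bigdata/Zeus
read only = no

and restart Samba with systemctl restart smbd.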
So I recently had to upgrade some hardware, and after doing so I couldn't keep the ZFS pools for some reason. I destroyed the pool and recreated it, installed Samba on the Proxmox root (I just like it that way), and chmod'd the main ZFS share.
I've followed this guide in the past to get the...
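For completeness, the Samba-on-the-host part is roughly this (the mode and exact path here are just examples, set whatever fits your users and dataset):

apt update
apt install samba
chmod -R 775 /Bigdata          # example permissions on the dataset mountpoint
systemctl enable --now smbd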
Having a similar issue; should I not be removing the 6.1.0-9 headers?
dpkg: error processing package linux-headers-amd64 (--configure):
dependency problems - leaving unconfigured
Setting up pve-headers-6.2 (8.0.2) ...
Setting up pve-headers (8.0.1) ...
Errors were encountered while processing...
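Would the usual recovery steps for a half-configured headers package be safe here, i.e. something like this (assuming nothing is pinned or held back on my end)?

dpkg --configure -a          # finish configuring anything left half-done
apt --fix-broken install     # pull in or repair missing dependencies
apt full-upgrade             # then retry the upgrade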
This happened after updating the packages from the GUI... no idea why, and nothing I've found about restarting services has helped.
Some screenshots of the GUI are below.
A reboot didn't fix it, and restarting services as suggested in other forum posts didn't fix it either.
I've been racking my brain on this as all the setups I've found online show this should be easy.
I have a ZFS zpool named Bigdata and a child dataset called Files:
Bigdata/Files
I've installed the Turnkey File Server and added a mount point that shows /Files as the store (50TB for backups of another...
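If it matters, the mount point part looks roughly like this from the host (101 is a placeholder container ID, and the in-container path is an example):

pct set 101 -mp0 /Bigdata/Files,mp=/Files
pct config 101               # confirm the mp0 line is there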
So maybe I have a complete misunderstanding of ZFS and how I set up my system. I was probably moving too fast and not comprehending the setup. My initial impression was that with Proxmox I was able to set up a ZFS pool and, from there, sub-volumes.
The SSD has ISOs and VMs on it
/Bigdata (10x10tb...
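For reference, checking the layout from the host is something like this:

zpool status Bigdata    # shows the vdevs (the 10x10TB disks)
zfs list -r Bigdata     # shows the pool and any child datasets under it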
So the deeper explanation is that it's the same Proxmox install. What happened is I had to reboot for some reason, and the Turnkey File Server LXC wasn't able to boot (no idea why). So, knowing I had created a ZFS file store (mount point) for that LXC, I figured I'd just spin up a new Turnkey LXC...
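When an LXC won't boot, a debug start usually says why; 101 here is a placeholder for the container ID:

pct start 101
lxc-start -n 101 -F -l DEBUG -o /tmp/lxc-101.log

The log at /tmp/lxc-101.log is usually where the actual error shows up.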
Trying to understand what to do here. This has been going on since COVID started, and I've been putting it off for a while.
I have the latest Proxmox installed with a ZFS pool, created some pools, and installed the Turnkey File Server.... Somehow it got jacked up and I don't know why (literally never touched...
So that's what happens... all three HBAs work only in slots 2, 4, and 6. If they're in 1, 3, or 5, they don't show up in Proxmox.
With them in 2, 4, and 6, when I put the NIC in slot 5, I lose two of them.
I'm just struggling with Supermicro, trying to understand why they would have a brand new board in 2020 that has 6...
Also, with the NIC plugged in, systemctl status gives the following:
● pve
State: degraded
Jobs: 0 queued
Failed: 2 units
Since: Fri 2020-05-29 07:34:54 PDT; 12min ago
CGroup: /
├─1440 bpfilter_umh
├─user.slice
│ └─user-0.slice
│...
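In case it helps, listing the two failed units and pulling their logs would be something like this (unit name is a placeholder):

systemctl --failed
journalctl -b -u <failed-unit-name>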
OK, so after plugging the card back into slot 5, with no BIOS changes from what you see above, the following is what happens. I'm not sure from googling what this actually is. This is a brand new Supermicro server board; I can't think that they would disable slots based on a card being in one like...
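To rule out the board simply not enumerating the cards in those slots, I'd check from the host with something like:

lspci -nn | grep -i -e sas -e lsi -e ethernet   # do the HBAs and NIC show up at all?
dmesg | grep -i -e pci -e mpt                   # any PCI or driver errors at boot?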