Many thanks Lukas - it is indeed as you say, `rdis_host`.
Proxmox selected it as the management interface by default - I had a feeling it wouldn't work, since its MAC address isn't one listed on any of the visible interfaces on the machine. Indeed, it didn't work after booting up, so a modification to...
Just setting up Proxmox and in the network setup I see three interfaces (see image below).
1. mdis_host
2. eno1
3. eno2
eno1 and eno2 are my onboard Intel network ports; what's mdis_host?
There is an IPMI interface on this machine, but it shouldn't be visible to Proxmox as a NIC, and moreover the MAC...
thanks @guletz, food for thought.
Some interesting analysis on the subject here suggests that, whilst the `copies` property is nowhere near as good as two separate devices, a single device with `copies=2` can increase the odds of recovering from corruption enough to be worth doing with important...
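If I do go that route, it looks like it's just a dataset property, and it only applies to data written after it's set; a quick sketch, using `rpool/data` (Proxmox's default local-zfs dataset) purely as an example:
~# zfs set copies=2 rpool/data
~# zfs get copies rpool/data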
@morph027 so I did a quick calculation:
Purchased - Samsung 970 EVO Plus - it quotes a TBW of 600.
Ideal - Samsung PM983 (enterprise) - it quotes 1.3 DWPD over its 3-year warranty.
When you convert to TBW it gives 1367.
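For reference, the conversion I used was capacity x DWPD x 365 x warranty years, assuming the 960GB PM983 model:
~# awk 'BEGIN { print 0.96 * 1.3 * 365 * 3 }'
1366.56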
So approximately twice the longevity for about £50 more...hmmm, tempted. Pity it seems to...
thanks - the disk will be a single 1TB SSD. It will only have about 100-200GB of containers on it.
Do you know if ZFS on Linux’s lack of support for TRIM will be a problem?
We're planning on using a few Intel NUCs in lab conditions with Proxmox. They take a single M.2 NVMe device. What would be the best filesystem to use in this case?
Also, if ZFS, would a RAID1 or RAID0 array be the correct option?
We're evaluating Proxmox in our lab and have two setups - both running 5.3:
1. Xeon
Storage: ZFS (10x 3TB SAS + 100GB ZIL SSD)
2. i7-7700K
Storage: ext4 LVM (1x 1TB SATA SSD)
Running the following sysbench commands inside identical LXCs to evaluate storage performance:
sysbench...
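(The exact invocation is cut off above; for anyone wanting to reproduce, a typical sysbench fileio sequence, assuming sysbench 1.0+ syntax and with an arbitrary file size and run time, would be something like:)
~# sysbench fileio --file-total-size=4G prepare
~# sysbench fileio --file-total-size=4G --file-test-mode=rndrw --time=60 run
~# sysbench fileio --file-total-size=4G cleanup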
It's the weirdest thing. I ran:
:~# lsof -K | grep inotify | wc -l
93
Which is hardly enough watches to saturate the system. I then shut down all containers except the one experiencing issues. It still wouldn't come up in the Proxmox console or allow SSH in.
I noticed on the affected system...
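For reference, since all containers share the host kernel, the relevant limits are the per-user inotify sysctls rather than disk space; they can be checked, and temporarily bumped as a test, on the host like this (the raised values are just a guess):
~# sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches
~# sysctl -w fs.inotify.max_user_instances=512
~# sysctl -w fs.inotify.max_user_watches=1048576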
After running `lxc-attach ID` I ran the command:
~# /etc/init.d/ssh restart
[....] Restarting ssh (via systemctl): ssh.service
Failed to add /run/systemd/ask-password to directory watch: No space left on device
Any ideas why it'd say 'No space left on device'?
df returns:
Filesystem...
After changing an LXC container Hostname under its DNS setting, I can ping it, but can no longer SSH to it.
Other containers are accessible, and the renamed container was accessible before the Hostname change. Is this a known issue, or does anyone have any suggestions?
Thanks.
We have a small vSphere cluster in our lab, but we're increasingly using Docker. This, however, isn't ideal, since we'd like a GUI-managed solution that natively supports Docker (like Synology's DSM).
Does Proxmox have Docker support in the web interface? I see mention of it in 5.3 but I can't...