I think you are 100% correct. Unless you use nfs-ganesha, you are probably worse off using a container, because you are providing a server that ties into the kernel on the host, so all the "benefits" of containers/VMs go out the window.
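(For completeness: nfs-ganesha is the exception because it runs entirely in userspace, no kernel nfsd involved. A minimal export is something like the following, with made-up paths and IDs:)

```
# /etc/ganesha/ganesha.conf -- minimal userspace NFS export (illustrative values)
EXPORT {
    Export_Id = 1;
    Path = /srv/share;          # directory inside the container (hypothetical)
    Pseudo = /share;            # NFSv4 pseudo path that clients mount
    Access_Type = RW;
    FSAL {
        Name = VFS;             # plain filesystem backend, no kernel nfsd needed
    }
}
```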
If it were a networked UPS that would be a different story, but here it's just USB, so if the host is down there is no monitoring unless I physically plug it in elsewhere. For me the easy way to stay portable would probably be to build an Ansible automation; the server doesn't store anything as far as I recall, and if I had clients...
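Something along these lines is what I'd sketch out (the modules are standard Ansible builtins; the file list is my assumption about what's worth carrying over):

```yaml
- hosts: pve
  become: true
  tasks:
    - name: Install NUT
      ansible.builtin.apt:
        name: nut
        state: present
    - name: Deploy UPS configuration
      ansible.builtin.copy:
        src: "files/{{ item }}"
        dest: "/etc/nut/{{ item }}"
        mode: "0640"
        group: nut
      loop: [nut.conf, ups.conf, upsd.conf, upsd.users, upsmon.conf]
      notify: restart nut
  handlers:
    - name: restart nut
      ansible.builtin.service:
        name: nut-server
        state: restarted
```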
That is exactly my instinct, but because others were doing it that way (there is even a topic about the LXC route on this forum) I was questioning myself.
Hey everyone,
I have a UPS connected directly to my PVE host by USB and would like to monitor it with nutd. My gut says that since nutd is a small daemon, and since it needs to do drastic things like shutting down the host anyway when the battery gets low, it should just run directly on the host...
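For context, the host-side setup I have in mind is tiny, something like this (the UPS name and password are placeholders):

```
# /etc/nut/nut.conf
MODE=standalone                 # driver, upsd and upsmon all on this host

# /etc/nut/ups.conf
[myups]
    driver = usbhid-ups         # covers most USB HID UPSes
    port = auto

# /etc/nut/upsmon.conf -- triggers shutdown when the battery runs low
# (the upsmon/secret pair must also exist in /etc/nut/upsd.users)
MONITOR myups@localhost 1 upsmon secret master
```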
Just wanted to check back in - I did what was suggested: linked the disks to a VM and installed Proxmox.
After that I still had to modify the grub command line to enable serial output and of course fix the network config.
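The serial-output part was along these lines in /etc/default/grub (ttyS0 at 115200 are placeholder values, match whatever your serial port actually uses), followed by `update-grub`:

```
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200"
GRUB_TERMINAL="console serial"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"
```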
As for the issue I had with /etc/pve, it turns out that in my zeal to...
I just realized I missed the memo that /etc/pve comes from config.db. I attempted to copy config.db over from the old install, but that did not give me the desired outcome, so then I tried "unifying" the old and new config.db (i.e. copying just those rows from the old that contain info I want and fixing...
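For the record, config.db is just SQLite (under /var/lib/pve-cluster/), and as far as I can tell everything lives in a single `tree` table, so you can at least inspect both databases before merging anything (stop pmxcfs before copying the file around):

```
# list the virtual files each database contains
# (table/column names are from my own poking around -- verify on your version)
sqlite3 /var/lib/pve-cluster/config.db \
  "SELECT inode, parent, name FROM tree ORDER BY parent, name;"
```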
Hehe total inception this....
I'm finding all kinds of interesting differences between my old install (converted Debian) and the "native" install, for instance no /etc/pve/storage.cfg
At the moment I'm trying to get the new install to recognize the old one so I can import the VMs; so far no luck :/ I...
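For anyone comparing: the native ZFS install seems to generate something like the following storage.cfg (approximate, exact defaults may vary by version):

```
# /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1
```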
LOL, I never even thought to look for it there because I figured only Dell could issue these keys. I must say I also find the whole model they (and HPE) have around their IPMI devices offensive. Here I got the machine for free from a friend, but at a previous employer we bought some...
Sadly I can't use the Proxmox ISO for installation: the machine I am working with (Dell T130) only has VGA output, and I don't have a VGA-compatible screen at home anymore.
Dell also put the IPMI (iDRAC) HTML console behind a license that, even if I were willing to buy it, is no longer being sold...
What are the default layout and settings/features/fstab used by Proxmox when installing to run on ZFS?
Background: I installed a system with the Debian text installer and then converted Debian to Proxmox. Now I have installed two new SSDs and want to migrate the system to a ZFS mirror, the proxmox...
I haven't worked with multipath for several years. As far as I recall, we defined the WWIDs of the devices in multipath.conf so they always got the same device node, and you need to have a multipath.conf to tell your system how to recognize and deal with disks that are multipath.
If you can't get...
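The WWID pinning I'm thinking of looked roughly like this (WWID and alias are made up):

```
# /etc/multipath.conf
multipaths {
    multipath {
        wwid  3600a098038303053573f463646374c35
        alias mpath_data01      # stable name under /dev/mapper/
    }
}
```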
Just realized - you posted a copy-pasted config from the other guy, but he has DELL disk shelves while you have Lenovo, so at the very least you probably need to replace DELL with Lenovo, or however things are shown on your system.
/Edit - or maybe IBM if Lenovo still identifies as IBM.
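Quickest way I know to check what the shelf actually reports (sdb is a placeholder):

```
cat /sys/block/sdb/device/vendor
cat /sys/block/sdb/device/model
```

Whatever comes back is what the vendor/product strings in the copied device section need to match.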
I'm going to try the following sequence of actions (rough command sketch below):
1. Set up a ZFS mirror on the two new SSDs
2. rsync the root filesystem to ZFS
3. Modify fstab on ZFS
4. use `proxmox-boot-tool` to install grub on the ZFS-mirrored disks
5. Shutdown all running VMs
6. run rsync again with --exclude on fstab and...
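In shell terms I'm picturing roughly the following; device names are placeholders and I'll double-check every step before running anything:

```
# 1) mirror on the new SSDs (by-id names made up; create ESP partitions first!)
zpool create -o ashift=12 -O compression=lz4 -O mountpoint=none rpool \
    mirror /dev/disk/by-id/SSD_A-part3 /dev/disk/by-id/SSD_B-part3
zfs create rpool/ROOT
zfs create -o mountpoint=/mnt/newroot rpool/ROOT/pve-1

# 2) first copy while the system is live
rsync -aAXH --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/tmp/*","/mnt/*"} \
    / /mnt/newroot/

# 3) point the new fstab at the right devices (edit /mnt/newroot/etc/fstab)

# 4) make both new disks bootable
proxmox-boot-tool format /dev/disk/by-id/SSD_A-part2
proxmox-boot-tool init   /dev/disk/by-id/SSD_A-part2
proxmox-boot-tool format /dev/disk/by-id/SSD_B-part2
proxmox-boot-tool init   /dev/disk/by-id/SSD_B-part2

# 5+6) stop the VMs, then a final delta pass that keeps the edited fstab
rsync -aAXH --delete --exclude=/etc/fstab \
    --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/tmp/*","/mnt/*"} \
    / /mnt/newroot/
```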
I don't know if this is the case for PERCs too, but unless you are actually offloading RAID work to them, and using disks with the larger sector size the PERC needs to store checksums, you are probably better off just using the PERC as a pure JBOD HBA and implementing RAID at a...
Current install is just a single SSD with default Debian partitioning (ext4 on the full disk with a swap partition).
/Edit -
I have 2 additional SSDs and am replacing the original SSD with them, so I don't need anything to happen to the original SSD other than moving the data from one to the other.
Hi,
I have a Proxmox 7.4 system running from a single SSD at the moment and want to create a mirrored device and transfer my root filesystem there.
If I just use mdadm to create this, I'm pretty sure everything should work as long as I update the UUID in fstab and grub.cfg (1).
With ZFS based...
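For the mdadm path at least, I imagine the sequence would be roughly this (placeholder partitions):

```
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mkfs.ext4 /dev/md0
blkid /dev/md0                            # new UUID to put in fstab/grub.cfg
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u                       # so the array assembles at boot
```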