I have an older server with a SATA SSD (Intel S3510) and a 3rd-gen EPYC server with an Intel P4510. I have budget for another server (4th-gen EPYC with NVMe SSDs).
I'm planning to build a 3-node Ceph cluster. If I mix those three, would the cluster be bottlenecked by the SATA SSD server?
I will run database intensive...
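If mixing turns out to be a problem, my fallback idea is to pin the database pool to the faster OSDs by device class; a rough sketch, assuming the OSDs report their classes correctly and a pool named db-pool (the pool and rule names are just examples):

ceph osd df tree                                                  # check which OSDs show up as ssd vs nvme class
ceph osd crush rule create-replicated nvme-only default host nvme # rule restricted to nvme-class OSDs
ceph osd pool set db-pool crush_rule nvme-only                    # move the database pool onto that rule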
Hello, I have created an LXC container and its storage type is "directory".
root@server:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop0      7.8G  3.8G  3.7G  51% /
/dev/loop1      3.6T  3.3T  132G  97% /storage
none            492K  4.0K  488K   1% /dev
tmpfs            16G     0...
I have some legacy code running on Ubuntu 14.04 LXC.
Since upgrading to Proxmox 7, my 14.04 container starts, but it doesn't bring up its network interfaces or other services.
If I manually enter the container with the 'pct enter' command and run 'init 3', everything works.
It worked fine with...
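For reference, the manual workaround is just (CTID is a placeholder for the container ID):

pct enter CTID   # run on the Proxmox host
init 3           # run inside the container; brings the services up by hand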
My datacenter has disconnected my Proxmox host because it was causing a MAC flood.
It hosts over 10 VMs, and I'm wondering if there is a way to check which VM is sending such traffic.
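So far the only idea I have is to watch the bridge from the host side; a sketch, assuming the VMs sit on a bridge named vmbr0 (adjust to your setup):

tcpdump -eni vmbr0 -c 1000 2>/dev/null | awk '{print $2}' | sort | uniq -c | sort -rn | head   # count source MACs; a flooder shows many random ones
bridge fdb show br vmbr0 | grep -v permanent | awk '{for (i=1;i<=NF;i++) if ($i=="dev") print $(i+1)}' | sort | uniq -c | sort -rn   # learned MACs per port; tapXXXiY maps to a VM ID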
Thanks in advance.
@oguz Thanks.
I have software that relies on a virtual Ethernet (veth) device.
I have configured the veth device in my LXC container. However, the container's network is provisioned by the host, and when I reboot the container, the veth device disappears.
I need the veth0a device to persist after a reboot of the container.
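My guess is that something along these lines in the container's config (/etc/pve/lxc/<CTID>.conf, CTID being the container ID) would make LXC recreate the device on every start; the bridge name and the interface index are assumptions:

lxc.net.1.type: veth
lxc.net.1.link: vmbr0
lxc.net.1.name: veth0a
lxc.net.1.flags: up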
Thank you so much. Silly me, I thought deleting the partitions would clear the ZFS labels.
If only I'd cleared the ZFS labels before installation, this wouldn't have happened.
But now I know, thanks to you.
I've deleted all partitions with fdisk before installation.
This is right after a clean installation.
As you can see, the two disks of the healthy rpool's mirror-0 (nvme-eui.0xxxxxxxxxxxxxxxxxxxxx-part3) also show up in the first and second rpool.
So, if I do zpool labelclear nvme-eui.0xxxxxxxxxxxxxxxxxxxxx-part3...
The other pools named "rpool" show as degraded.
The top two pools' disks are just other symlinks to the same disks used in the third pool, which is healthy and the one I want to use.
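For anyone following along, the leftover labels can be inspected directly on the partition (reusing the redacted device name from above):

zdb -l /dev/disk/by-id/nvme-eui.0xxxxxxxxxxxxxxxxxxxxx-part3   # dumps the ZFS label(s), including pool name and GUID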
Hi, I've installed Proxmox 6.1 on 2x Intel P4510 (ZFS mirror).
After a successful installation it reboots and gets stuck in the initramfs console because there are multiple pools named rpool.
If I manually import my pool using its ID, it boots fine.
How can I delete the other pools named rpool?
I've tried rpool...
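For context, this is roughly what I do from the initramfs shell to get it booting (the numeric ID below is a placeholder for the healthy pool's ID shown by zpool import):

zpool import                            # lists every importable pool with its numeric ID
zpool import -N 1234567890123456 rpool  # import the healthy pool by its ID
exit                                    # continue booting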
As a workaround, I created a ZFS mirror on the first two disks during installation. After the installation completed, I added the remaining mirrors manually with the following command.
zpool add rpool mirror /dev/disk/by-id/disk1 /dev/disk/by-id/disk2
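For the 8 drives the same pattern just repeats for the remaining pairs (disk names are placeholders, as above):

zpool add rpool mirror /dev/disk/by-id/disk3 /dev/disk/by-id/disk4
zpool add rpool mirror /dev/disk/by-id/disk5 /dev/disk/by-id/disk6
zpool add rpool mirror /dev/disk/by-id/disk7 /dev/disk/by-id/disk8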
Hi,
I'm trying to do a clean installation on 8x Intel P4510 with ZFS RAID10.
When I try to install, the following error shows.
I've executed dd if=/dev/zero of=/dev/nvmeXn1 bs=64MB count=10 on all drives. Still the same.
Can anybody help? Thanks
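From what I've read so far, zeroing only the start of each drive may not be enough, since ZFS also keeps labels at the end of the device. My next attempt would be something like this per drive (device name is an example; the label may live on a partition rather than the whole disk):

wipefs -a /dev/nvme0n1             # removes all filesystem/RAID/ZFS signatures it can find
zpool labelclear -f /dev/nvme0n1   # clears leftover ZFS labels specifically, if any remain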
Hello everyone.
I have Proxmox installed on top of Debian software RAID-10.
Today it froze and I restarted it manually; unfortunately, it now doesn't boot due to an mdadm error.
A. It gives the following error on boot.
B. cat /proc/mdstat
C. blkid
What I find weird is that the boot error's UUID doesn't...
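My understanding is that a mismatch between the UUIDs the arrays report and what the initramfs expects is usually checked roughly like this from a rescue environment (a sketch, not a recipe):

mdadm --detail --scan       # UUIDs the assembled arrays actually report
cat /etc/mdadm/mdadm.conf   # UUIDs the initramfs was built with
# if they differ: replace the stale ARRAY lines in mdadm.conf with the scan output, then
update-initramfs -u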