For 20-30 euros I'd want one too. Did you find shipping to Brazil? If you could share a link... I have paid that price (30 euros) on the consumer market here.
Here I can't find any used data-center SSDs. When I do find one, it's new and very expensive.
The thing is, I don't have the budget for enterprise-class SSDs to use for booting. My RAID controllers also don't have battery backup. So I want to trust that RAID1 with Btrfs or ZFS can give me some protection. Can they? Which would be better? I hear ZFS is bad in that it consumes a lot of...
Thanks for the comment.
Buddy, I haven't had any problems booting ZFS even when one of the disks is broken. That is, even with the RAIDZ mirror degraded, I can boot Proxmox. I've run this test a few times.
It's true that with Btrfs I did have this problem. Once the Btrfs RAID1 array is...
Opinions differ widely, and there is no perfect file system. But now that Btrfs is also built into the Proxmox installer, might something have improved?
For OS-only use, with most of the load on Ceph (VMs and CTs), I want to prioritize performance, but mainly data security and high...
I tried to mount it from the Proxmox installation pen drive, but I couldn't, because it wouldn't mount the degraded Btrfs. In theory, I believe I would have to add flags to the kernel line of the live Linux disk to allow mounting this volume. Could that be it? How would you do that? Very...
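For the record, when mounting from a live environment no kernel flags are needed at all: the degraded option can be passed directly to the mount command. A sketch of the usual recovery sequence, assuming the surviving Btrfs member is /dev/sda3 and the replacement disk is /dev/sdb3 (both device names are placeholders):

```shell
# Mount the surviving Btrfs member read-write with the 'degraded' option
mkdir -p /mnt/broken
mount -o degraded,rw /dev/sda3 /mnt/broken

# Add a replacement device, then drop the missing one from the array
btrfs device add /dev/sdb3 /mnt/broken
btrfs device remove missing /mnt/broken

# Rebalance so all data is mirrored again as RAID1
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/broken
```

The kernel-command-line route (rootflags=degraded) is only needed when the degraded volume is the root filesystem being booted, not when mounting it from a rescue shell.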
Hello.
I have the same doubt.
I did a standard Proxmox 7.3 installation with ZFS RAID1 (ZFS rpool on the boot disks for the system root) on two 250 GB disks, and I have now swapped them both for 512 GB disks. These are the system's boot disks: two of equal size, but larger than the previous ones...
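For reference, replacing the disks of a ZFS boot mirror one at a time is typically done roughly like this. This is only a sketch based on the standard Proxmox partition layout (BIOS boot, ESP, ZFS on partition 3); /dev/sda is the remaining old disk and /dev/sdb the new one, both placeholders:

```shell
# Copy the partition table from a surviving mirror disk to the new disk,
# then randomize the GUIDs on the copy
sgdisk /dev/sda -R /dev/sdb
sgdisk -G /dev/sdb

# Replace the old ZFS member with the new disk's partition 3
zpool replace rpool <old-device> /dev/sdb3

# Make the new disk bootable (ESP is partition 2 in the standard layout)
proxmox-boot-tool format /dev/sdb2
proxmox-boot-tool init /dev/sdb2

# Once BOTH disks are replaced, let the pool grow to the new size
zpool set autoexpand=on rpool
zpool online -e rpool /dev/sdb3
```

The pool only expands after every member of the mirror has been replaced, since a mirror is limited by its smallest device.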
Thanks for the answer.
I had already found that link, but unfortunately it doesn't solve my case, because I no longer have the second disk working to boot from and set the flags it suggests. So the suggestion in that post would be to put "rootflags=degraded" on the kernel...
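In case it helps someone else: that kernel-line change can be applied as a one-off edit from the GRUB menu, without a working second disk. A sketch (the exact vmlinuz/root values will differ per system):

```shell
# At the GRUB menu, highlight the Proxmox entry and press 'e' to edit it.
# Find the line beginning with 'linux' and append the Btrfs degraded flag:
#
#   linux /boot/vmlinuz-<version> root=<root-device> ro quiet rootflags=degraded
#
# Then press Ctrl+X (or F10) to boot once with that flag.
# The edit is not persistent; repair the array after booting, or add the
# flag to GRUB_CMDLINE_LINUX in /etc/default/grub and run update-grub.
```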
In a standard installation of Proxmox version 7.3, the server was installed on a Btrfs RAID1 array of two mirrored disks. That was the boot disk. After working for a few days, there was a problem (probably with the hardware) and the system crashed. Upon restarting, I noticed that one of the disks...
Proxmox 7.3
I'm having a very similar (if not the same) problem.
On the host, using the physical interface, iperf connected to another external node delivers 1 Gbps (942 Mbps). Correct.
In the virtual machine, using the VirtIO interface connected to vmbr0, iperf delivers 400~500 Mbps...
Important topic; I hadn't read about any issues with Intel X520 devices yet. I've always read raves about them (82599 controllers). Does the problem also occur with load-balancing bonds (LACP, RR, etc.)?
I've been reading that old HP cards had a lot of overheating or other problems. I don't know...
Hi all,
Regarding 10 Gbps cards, are the Intel X520 cards still the only ones recommended for Ceph or for VMs? (10G only)
Are the HP NC522sfp and NC523sfp still very bad, with unresolved issues?
Are there other cheaper brands that might also serve well?
How has the experience been in recent...
Maybe. But, thinking about it, it seems to me to be a bug, because I think Proxmox should recognize this device normally, since it is in the mainline Linux kernel itself.
Hello.
Thank you for your help. Your tip was spot on!
root@pve-20:~# cat /var/lib/ceph/osd/ceph-0/fsid
2f6b54af-aec8-414e-a231-3cce47249463
root@pve-20:~# ceph-volume lvm activate --bluestore 0 2f6b54af-aec8-414e-a231-3cce47249463
Running command: /usr/bin/chown -R ceph:ceph...
Hi,
Is there a bug in Proxmox that prevents it from correctly seeing bcache devices as regular storage devices? I'm using Proxmox PVE 6.4-14, Linux 5.4.174-2-pve.
bcache is a Linux kernel feature that lets you use a small, fast disk (flash, SSD, NVMe, Optane, etc.) as a "cache" for a...
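For anyone unfamiliar with it, setting up a bcache device looks roughly like this (a sketch assuming the bcache-tools package; /dev/sdb and /dev/nvme0n1 are example device names):

```shell
# Load the module, then format the slow backing disk and fast cache disk
modprobe bcache
make-bcache -B /dev/sdb        # backing device (slow disk)
make-bcache -C /dev/nvme0n1    # cache device (fast disk)

# Attach the cache set to the backing device via its cset UUID
CSET=$(bcache-super-show /dev/nvme0n1 | awk '/cset.uuid/ {print $2}')
echo "$CSET" > /sys/block/bcache0/bcache/attach

# The combined device then appears as /dev/bcache0 and can be used
# like any ordinary block device
```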
Hello,
I know this message is old, but please, I need to solve a similar problem. I'm trying to create an OSD on a bcache device. If it works, I intend to use bcache on all the OSDs here. But when I try to create it, bcache devices are not available in the GUI. And from the CLI, the following error...
Hi,
I'm facing a network or firewall issue with my cluster that I don't even know where to start solving.
I have a Windows 2008 R2 Server VM that has a Bitdefender anti-virus and a Google Chrome browser.
Users access this server and make use of remote desktop (terminal service) on it.
So...