Hello!
I have a Cisco WS-C4948E and a Proxmox server.
I configured LACP on the Cisco switch using the following commands:
```
interface Port-channel 2
switchport
switchport mode access
switchport access vlan 1000
service-policy input RATE-LIMIT
service-policy output RATE-LIMIT
!
interface range...
```
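For reference, a minimal sketch of what the matching Proxmox side could look like in /etc/network/interfaces, assuming the two member NICs are eno1 and eno2 and the bridge is vmbr0 (all names and the address are placeholders, not taken from my actual config):
```
auto bond0
iface bond0 inet manual
        # LACP bond of the two physical NICs facing the port-channel
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        # Bridge on top of the bond; VMs attach here
        address 192.0.2.10/24
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
```
On the Cisco side the physical member ports would carry `channel-group 2 mode active` so both ends actually negotiate LACP.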
I fixed all SSDs using that method, with one exception. Going to sleep mode was not enough, so to temporarily clear the freeze I had to pull and re-insert each caddy while the node was booted up.
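If a drive does not reappear on its own after re-seating the caddy, a SCSI bus rescan usually makes the kernel pick it up again; just a generic sketch, the host numbers differ per system:
```
# Ask every SCSI host adapter to rescan for newly attached devices
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done

# Confirm the drive is back, then re-check its freeze state
lsblk
hdparm -I /dev/sdX | grep -i frozen
```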
@leesteken
I discovered the problem!
Comparing the 9 functional SSDs with the other 30 non-functional ones, I found out with the `hdparm -I /dev/sdX` command that all the functional disks have the "not frozen" flag, while the other 30 have the "frozen" flag set.
Now the problem is...can it be...
The first two screenshots are from before any restart; the 3rd one is after a full shutdown/boot-up cycle.
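To compare all drives in one go, a small loop over `hdparm` is enough; a quick sketch assuming the SSDs show up as /dev/sda, /dev/sdb and so on:
```
# Print the ATA security freeze state of every drive
for d in /dev/sd[a-z]; do
    printf '%s:' "$d"
    hdparm -I "$d" 2>/dev/null | grep -i frozen
done
```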
SSD parameters (maybe something relevant?):
```
root@node03:~# hdparm -I /dev/sda
/dev/sda:
ATA device, with non-removable media
Model Number: INTEL SSDSC2BB012T6
Serial...
```
I used this tool and a firmware upgrade was available for 5 out of 6 SSDs (from version XXXX40 to XXXX50).
I did the upgrade and rebooted, but the firmware version was still the old one.
After that I did a full shutdown/boot-up, and that's how I lost the partitions again.
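To check whether the new firmware actually took after the power cycle, the revision can also be read with smartctl, independently of the update tool (just a generic check):
```
# Model and firmware revision as reported by the drive itself
smartctl -i /dev/sdX | grep -Ei 'model|firmware'
```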
It's a 4-node server cluster: 2x 1600W PSUs for 4 dual-CPU nodes.
This is not a power issue, because power consumption is under 300W right now and just one node is up.
Hello!
I have several proxmox nodes and several dozen Intel S3510 SSDs of 1.2TB each.
The SSDs have no errors and the lifetime is between 80% and 89% for each of them.
For some unknown reason, after every reboot, the SSDs lose their partitions.
Has this happened to anyone else?
I think it's impossible for...
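One way to narrow it down is to check whether the partition table is really gone from the disk or just not re-read at boot; a rough sketch, using /dev/sda as an example and assuming GPT labels:
```
# What the kernel currently sees
lsblk /dev/sda

# What is actually written on disk (GPT header and partition entries)
sgdisk --print /dev/sda

# Force the kernel to re-read the partition table
partprobe /dev/sda
```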
Hello!
I currently have 4 identical dedicated servers, each with:
2x E5-2680v4
192GB of RAM
6x 1.2TB SSD
2x 10Gbps SFP+
My question is:
What is the recommended setup so that the data is replicated at least once (similar to RAID1 or RAID10, ceph shared storage with 3 nodes) and at the same time...
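With Ceph, that replication level is just the pool's size/min_size; a rough sketch of checking and setting it (the pool name is a placeholder):
```
# Show current replication settings of all pools
ceph osd pool ls detail

# Keep 3 copies of each object, keep serving I/O with at least 2 available
ceph osd pool set vm-pool size 3
ceph osd pool set vm-pool min_size 2
```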
Hello!
I created a new ZFS RAID1 pool using two drives, then migrated the disks of some KVM machines onto it and restarted the node.
Although all the VMs were stopped, the server did not shut down, even though I had run the command and forced it to stop (nothing happened in the console, there was only a...
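For context, a two-drive ZFS RAID1 pool like the one above is just a mirror vdev; roughly like this (pool name, ashift and device paths are placeholders, not my exact command):
```
# Two-way mirror ("RAID1") built from two whole disks
zpool create -o ashift=12 tank mirror \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

zpool status tank
```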
Howdy!
I have an HP server with two Intel Xeon processors and 16 drives (12 HDDs and 4 SSDs), with several ZFS pools made up of disks of the same type.
The problem I'm facing is that one night ago the server suddenly started consuming a huge amount of resources.
From the tests so far, smartctl...
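Besides smartctl, the first things worth looking at to see which pool or disk is eating the resources (generic commands, nothing specific to this box):
```
# Per-pool / per-disk I/O load, refreshed every 5 seconds
zpool iostat -v 5

# Pool health and any scrub/resilver currently running
zpool status -v

# Overall per-device latency and utilisation
iostat -x 5
```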
Hello!
I appeal to your knowledge in search of the best ideas to answer the following two questions:
As a provider of VPS servers (servers to which I do not have access), what would be the best method to limit the number of emails each customer can send? (KVM virtualization via Proxmox, public IP...
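One common approach when you cannot touch the guests is to rate-limit outbound SMTP on the host instead of counting individual mails; a rough iptables sketch (the limits are placeholders, and it assumes guest traffic is routed through the FORWARD chain, bridged setups may need the Proxmox firewall instead):
```
# Drop new outbound SMTP connections above 20 per hour per source IP
iptables -A FORWARD -p tcp --dport 25 -m state --state NEW \
    -m hashlimit --hashlimit-name smtp \
    --hashlimit-mode srcip --hashlimit-above 20/hour -j DROP

# Same idea for the submission ports, if guests relay through them
iptables -A FORWARD -p tcp -m multiport --dports 465,587 -m state --state NEW \
    -m hashlimit --hashlimit-name submission \
    --hashlimit-mode srcip --hashlimit-above 20/hour -j DROP
```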