Search results

1.

    Motherboard Replacement

As far as I know, a ZFS disk can be moved to any system that supports ZFS without trouble. You're supposed to be able to move Linux installations the same way (Proxmox is built on Linux); however, I've had exceptions where the system wouldn't boot after moving the boot drives. Most times it worked.
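For anyone landing here from search: the usual way to move a pool between machines is an explicit export/import cycle. A minimal sketch, assuming a data pool named "tank" (the name is just an example; check yours with zpool status):

    # On the old system, cleanly detach the pool (skip if the box is already dead)
    zpool export tank

    # On the new system, scan attached disks and import
    zpool import          # lists importable pools found on the disks
    zpool import tank     # import by name
    zpool import -f tank  # force, if the pool wasn't exported cleanly

Boot drives are the exception mentioned above: the pool itself imports fine, but the bootloader/EFI setup on the new board may still need attention.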
2.

    proxmox ve 7.0-10 connectivity issue

More information: This is what makes me suspect Proxmox. Attempting to SSH to node 62 from 64 looks like this:

        root@pve-64:~# ssh 192.168.76.62
        Connection closed by 192.168.76.62 port 22

    This leads me to believe 62 answered the call but didn't say anything. But it connects just fine from my...
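A hedged first step for a "Connection closed" like the one above: run the client verbosely to see how far the handshake gets, and watch sshd's log on the target while the attempt comes in.

    # On the client, show the handshake step by step
    ssh -vvv root@192.168.76.62

    # On node 62 (Debian-based, so the unit is "ssh"), follow the daemon log
    journalctl -u ssh -f

If the verbose output dies right after the banner exchange, the server side is dropping the session, which fits the "answered but didn't say anything" read.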
3.

    proxmox ve 7.0-10 connectivity issue

Hello folks, I have an odd one here, not 100% sure it's Proxmox, but here goes. I just installed 3 instances of PVE 7. I'm trying to cluster them, but they're failing with a connection error. Upon inspection, the nodes can ping each other but can't SSH to each other. I can SSH to all of them and...
4.

    CEPH: Switching to SSD drives

    Thank you for validating.
5.

    CEPH: Switching to SSD drives

Hi All, I'm just making sure this is a solid enough plan before I execute. I have a 3-node cluster with Ceph storage and 5 disks per node. Currently the cluster consists of (combined) 3 2TB NVMe drives and 15 2-3TB spinning disks. I just got approval last week to convert all 15 spinning disks to 2TB SSDs...
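For reference, the usual one-disk-at-a-time swap on a healthy replicated cluster looks roughly like the sketch below. The OSD id (12) and device (/dev/sdX) are placeholders; the key point is waiting for the rebalance between disks.

    # Drain the old spinner and wait until all PGs are active+clean
    ceph osd out 12
    ceph -s

    # Stop the daemon and remove the OSD, then pull the disk
    systemctl stop ceph-osd@12
    ceph osd purge 12 --yes-i-really-mean-it

    # Create a new OSD on the replacement SSD (Proxmox wrapper)
    pveceph osd create /dev/sdX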
6.

    RBD pool size

This is great information; thank you!!! I get your point about performance, and I agree, but there's a reason. It's just not a perfect world, unfortunately. We're coming from a RAID solution, where performance, safety, and capacity are a little more interconnected. As I know it, if you need...
7.

    remove stuck OSD

I have an odd issue here where an OSD won't delete. My cluster is supposed to have 5 disks per node, but I've been running with 4 disks since I created the cluster a couple of weeks ago (I added a 5th disk to one node but removed it a short time later). Today we installed the hardware needed...
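If an OSD refuses to delete from the GUI, the blunt CLI path is usually the one below (OSD id 5 is an assumption; substitute yours):

    # Make sure the daemon is stopped and the OSD is marked down and out
    systemctl stop ceph-osd@5
    ceph osd down 5
    ceph osd out 5

    # Purge removes it from the CRUSH map, deletes its auth key, and frees the id
    ceph osd purge 5 --yes-i-really-mean-it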
8.

    Host memory usage differs VM memory

Check your storage config. I think I read that ZFS is configured to use 50% of host RAM by default. I'm using Ceph; one of my nodes has 32 GB of RAM and Ceph is using almost 60% of it. Needless to say, that server is on the short list for replacement.
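To elaborate: the 50% figure is ZFS's default ARC cap, and it can be checked and lowered. A minimal sketch, with 8 GiB as an example limit:

    # Current ARC size and configured maximum, in bytes
    awk '/^size|^c_max/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats

    # Cap the ARC at 8 GiB on the next boot (note: overwrites /etc/modprobe.d/zfs.conf)
    echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
    update-initramfs -u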
9.

    RBD pool size

I swear I have the weirdest experiences. I'm going to chalk it up to the missing node and monitor again when the cluster is healthy. I started copying data into the environment from a USB drive (the transfer was MUCH faster than I thought it would be) and now the capacity is going up as I...
10.

    RBD pool size

    I have to admit, I didn't consciously play a role with the weights. I'm assuming what's set is good for this. I'll only have 2 nodes with 4/5 disks each for about a week. Once I get the data off of the 3rd node, I'll have 6 disks available and another node to even everything out. I was hoping to...
11.

    RBD pool size

Posted my last before seeing this. Correct, I most certainly did not create a quota, ha. That command doesn't look like something you could do accidentally either :). Okay, that's a solid theory. I can see how that could happen. Well, I have 0.9TB to see if you're correct; I really hope you...
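For anyone following along, pool quotas are easy to rule out (the pool name "rbd" here is an assumption; use yours):

    # Shows any max_objects / max_bytes quota set on the pool
    ceph osd pool get-quota rbd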
12.

    RBD pool size

I just got halfway down the hall and had a thought. I can't be all wrong about the Data Center storage view. That's reporting 10TB in use. We don't have 10TB in use anywhere in this environment (except the backup server, but that's a different conversation :-) ).
13.

    RBD pool size

The Data Center view agrees with you; 54% of 20.01TB is used. I assumed that this was the raw capacity of the cluster, including redundancy, and as a result not something to measure the usable space of a pool by. My assumption was that, like ZFS, there's a pool and a dataset. The available space of the...
14.

PVE kernel doesn't work after install

Hmmm, I don't know. That's odd to me. Did you try the BIOS reset? If it's complaining about space for ROM, I would disable some BIOS features and try again. That's an educated guess on my part, but it's what I would do if I were in your shoes. I'm very interested in the solution to this...
15.

    RBD pool size

I swear the file transfer has sped up since I started looking at this, lol. I'm using 3.04TB. I think all would have been fine if the total capacity hadn't dropped from 4.2TB. I'm not sure why that happened or why it's still dropping. From my last post to this one I'm down another 0.1TB. It...
16.

    RBD pool size

Ran the 2 commands; the output is below. Some background: I'm in a tough spot between business goals and finance. The new cluster is to consist of 1 new server and 2 existing production servers being repurposed. They're roadmapped for replacement in 2 phases with the addition of a 4th server...
17.

    RBD pool size

I reread that and confused myself with the numbers, so it's probably not very clear. Raw capacity: 20TB. Initial file transfer last Tuesday: 1.1TB. File transfer over the weekend: 680GB. Total used as reported by Proxmox: 2.99TB (by my math it should be around 1.8TB). Total pool size: 3.24TB, down from...
18.

    RBD pool size

Hi, I'm very new to Ceph, and I'm trying, but this thing is confusing. I'm trying to build our new Proxmox cluster with Ceph for storage. When I created the RBD pool, it created a pool that was 4.2TB, but the raw capacity is 20TB. Probably not my best moment, but I wasn't very concerned at the...
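The short version of what this thread converges on: with the default 3-replica pool, every byte is written three times, so 20TB raw is roughly 20/3 ≈ 6.7TB usable at best, and the "pool size" shown is Ceph's MAX AVAIL, which also accounts for missing OSDs and imbalance and therefore moves as the cluster changes. A sketch of where to read those numbers (pool name "rbd" is an assumption):

    # Cluster-wide raw capacity vs. per-pool STORED and MAX AVAIL
    ceph df

    # The pool's replication factor; size=3 means usable ≈ raw / 3
    ceph osd pool get rbd size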
19.

    erase former ceph disks

Also, if it's the last disk, it won't remove. I don't know how to get past that.
20.

    erase former ceph disks

Wow, I took 5 steps away from the computer and remembered the command; I had to come back. It's "lvremove <LVM member>". The member starts with "ceph-".
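Follow-up for future searchers: lvremove works, but ceph-volume's zap is the more thorough cleanup, removing the LVM metadata and wiping the disk in one step (the device name is a placeholder):

    # List leftover ceph-* volume groups / logical volumes first
    lvs -o lv_name,vg_name | grep ceph

    # Wipe everything ceph-volume created on the disk, including the VG/LV
    ceph-volume lvm zap /dev/sdX --destroy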
