The expansion issue is a bit of a problem, though, isn't it? Unless I'm missing something?
What would be the best way to make my Proxmox backup disk expandable using ZFS? Let's say I've got 4 drives now and want to add 1 tomorrow (or maybe 2).
Maybe I'm just missing something about ZFS?
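From what I've read since, newer OpenZFS (2.3 and later, if/when the PBS release ships it) can grow a RAIDZ vdev one disk at a time with zpool attach. A rough sketch of what I mean, assuming a pool called tank whose RAIDZ1 vdev is named raidz1-0 and /dev/sdX as the new disk (all made-up names):

root@pbs:~# zpool status tank                      # note the raidz vdev name, e.g. raidz1-0
root@pbs:~# zpool attach tank raidz1-0 /dev/sdX    # expand the existing RAIDZ1 by one disk
root@pbs:~# zpool status tank                      # watch the expansion progress

On older ZFS that attach won't work for RAIDZ, and the classic route is adding a whole extra vdev (see the other sketch below).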
Hey Guys,
Ok so following some guides I set up a ZFS storage array on RAIDZ1...
I started out with 6TB since I wanted to check how it works...
But now I've got a problem: I can't find any documentation on expanding the storage. Can someone please advise how to safely add more disks?
I'm...
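For what it's worth, the only way I've found documented for this version is adding another whole vdev to the pool (you can't add a single disk to an existing RAIDZ1 on older ZFS). A sketch, assuming the pool is called tank and the new disks are /dev/sde through /dev/sdg (made-up names):

root@pbs:~# zpool add -n tank raidz1 /dev/sde /dev/sdf /dev/sdg   # -n = dry run, shows the resulting layout
root@pbs:~# zpool add tank raidz1 /dev/sde /dev/sdf /dev/sdg      # actually add a second RAIDZ1 vdev

The pool grows immediately, but existing data stays on the old vdev; only new writes get striped across both.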
Hey Guys,
The reason behind it is simple...
I'd like to be able to see how much disk space is used per server when keeping x retained backups. Specifically servers that don't change much, but do take up a lot of space...
It's more for planning purposes than anything else...
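The closest I've found so far is listing groups and snapshots from the client, something like the below (subcommand names may differ between PBS versions, and the repository name is just an example):

root@pbs:~# proxmox-backup-client list --repository root@pam@localhost:datastore1            # backup groups per VM/host
root@pbs:~# proxmox-backup-client snapshot list --repository root@pam@localhost:datastore1   # individual snapshots

Though as I understand it the sizes shown are logical, and with deduplication the real on-disk cost per server is hard to pin down.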
Hey Guys,
I've opened several threads over the past couple of weeks/months about the above issue. But I realized that I was conflating corosync with ceph, and combining ceph's private and public ranges into 1 thing as well...
So I want to pose a very specific question, and I'm hoping someone can...
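To make sure I'm asking about the right thing this time, the kind of split I have in mind looks like this in /etc/pve/ceph.conf, with example ranges (not my real ones), while corosync stays on its own third range in /etc/pve/corosync.conf:

[global]
    public_network  = 10.10.10.0/24   # ceph client traffic (VMs reading/writing RBD)
    cluster_network = 10.10.20.0/24   # ceph OSD replication/heartbeat traffic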
Ok guys, thanks for all the feedback. You did get my head working in a different direction. So as it turned out, they did hack the iLO, since it was accessible via the internet, and kept reinfecting the server from there.
I was able to find the cron job and scripts being installed and was able to stop...
I suspect my IP ranges have ceph and proxmox corosync syncing over each other, because whenever ceph load runs high, all servers become unresponsive and only show grey question marks in the PM UI...
How can I check, though? #StressedOut
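I'm guessing something like the following would show it (paths are the PVE defaults)?

root@pve:~# grep -E 'public_network|cluster_network' /etc/pve/ceph.conf   # which ranges ceph is using
root@pve:~# grep ring0_addr /etc/pve/corosync.conf                        # which addresses the corosync rings use
root@pve:~# corosync-cfgtool -s                                           # live corosync ring/link status

If the ceph ranges and the corosync ring addresses land on the same subnet/NICs, that would explain the grey question marks under load.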
Hey Guys,
I am hoping you can advise...
I want to export a ceph disk image to gzip manually rather than download the entire server, since that's a lot larger...
How can I change rbd --pool Default export vm-105-disk-0 /mnt/pve/NAS/vm-105-disk-0.raw so that it exports to a gzip file safely and successfully?
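I'm guessing piping through gzip is the way, something like this (same pool/image/path as above)?

root@pve:~# rbd export --pool Default vm-105-disk-0 - | gzip > /mnt/pve/NAS/vm-105-disk-0.raw.gz

The "-" tells rbd to write the raw image to stdout. And presumably restoring would be the reverse:

root@pve:~# gunzip -c /mnt/pve/NAS/vm-105-disk-0.raw.gz | rbd import --pool Default - vm-105-disk-0

Would I need to snapshot the image first for consistency if the VM is running?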
Theoretically, though, that means ceph can send out 2 separate 10Gb/s streams to different servers while it's rebalancing, even though each individual server would only receive 10Gb/s?
Hey Guys,
Ok so I have 1 EXTREMELY SMALL but EXTREMELY frustrating problem with Proxmox Backup Server...
First off... 1 million bonus points to the Proxmox team for building the software and getting it to run so smoothly!
I need to determine how I'm going to structure our backups... But I can't see...
That's a good question. So I know my corosync is on a separate IP range, but I don't know how to check which IP range ceph syncs over, since I'd like to make it the internal IP range... How can I check?
Also, in PM, if I bond the ports using balance-rr, does that mean the bond of 2 x 10Gb ports can...
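For reference, this is the kind of bond I mean in /etc/network/interfaces (NIC names are examples):

auto bond0
iface bond0 inet manual
    bond-slaves ens1f0 ens1f1
    bond-mode balance-rr
    bond-miimon 100

From what I've read, balance-rr is the only mode that can push a single stream past 10Gb/s, but it can reorder packets; LACP (802.3ad) keeps each flow on one 10Gb link instead.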
Sorry about that... Here's the status of ceph as it stands now...
root@pve:~# ceph -s
  cluster:
    id:     248fab2c-bd08-43fb-a562-08144c019785
    health: HEALTH_WARN
            Degraded data redundancy: 903844/5938684 objects degraded (15.220%), 358 pgs degraded, 358 pgs undersized...
Do you mean further down, or at the very bottom of the list?
The original task shows up; a migration, for example, shows "migrate server 100 to c1". But as soon as the initial command is complete and the running process should show up, it just doesn't. That one never shows up in the logs...
No it doesn't; it's the task that disappears. Backups, for example: I can see one running via the icon on the server, but I can't see any progress at all... The task just goes away after it starts the job.