Hi,
I think you should specify which topic you need a second opinion on. I don't have that much experience with Ceph, but if you have some basic or more advanced questions you can contact me. I'm not an expert, so I might not be able to answer everything, but I'll try my best.
@TheeDude It's better if you don't reboot your server. It should be safe, but unless you have backups I do not recommend restarting the host while the array is transforming.
@tufkal
Just wanted to make sure we're both on the same page :)
There is no need to convert the file after TRIM. It has no effect on the storage used on the hypervisor or on the backup time. A simple TRIM is enough to use minimal space on the hypervisor and keep backups fast.
@RobFantini
1. Not necessarily. You can add the discard option to the fstab file, which trims blocks immediately when they get deleted, but you will probably take a performance hit. It's better to trim once a week like the systemd service does (see the example below).
2. I've never used LXC, but I guess so. `fstrim /`...
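As a rough sketch (the UUID and the ext4 filesystem are just placeholders, adjust them for your setup), the online discard option goes into /etc/fstab, while the weekly approach simply relies on the fstrim timer:

# /etc/fstab entry with online discard (placeholder UUID, ext4 assumed)
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx / ext4 defaults,discard 0 1

# or leave fstab alone and trim once a week via the systemd timer
systemctl enable --now fstrim.timer
# manual one-off trim with verbose output
fstrim -v /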
5 minutes for a backup of a 70 GB disk isn't bad.
The two values at the end of a log line separated by a slash are the read/write speeds as @HBO already mentioned.
INFO: status: 0% (182452224/68719476736), sparse 0% (139419648), duration 3, 60/14 MB/s
The backup process reads at 60 MB/s...
@XMarC,
two weeks ago I compiled a Proxmox kernel based on version 042stab127.2. Feel free to use it.
See: fuckwit/kaiser/kpti
:D Yes, of course, if you control the virtualisation environment. However, as in my case, customers have their own VMs running on my hypervisors; I don't have access to those VMs and I cannot trust my customers not to try to exploit this vulnerability.
Yes, correct. However, in a hosting environment (VMs, webspace, etc.) you cannot control what applications users run on their servers, and they can exploit this to read memory from other virtual machines or from the host.
Meltdown indeed only has an impact on containers and not Virtual Machines...
I have compiled a kernel for Proxmox 3.x myself as I still have many OpenVZ nodes running.
The kernel has been tested and so far works fine for me.
You can download it at: https://git.vnetso.com/henryspanka/pve-kernel-2.6.32/tags/v2.6.32-49-pve_2.6.32-188
Feel free to check the source code and...
Hello,
If a disk dies, the zpool is in a degraded state and will still function, although if you lose another drive the pool may break.
You can replace the drive and then start the resync with the zpool replace command.
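As a minimal sketch (the pool name "tank" and the device paths are placeholders here):

zpool status tank                      # identify the failed device
zpool replace tank /dev/sdb /dev/sdd   # swap the dead disk for the new one
zpool status tank                      # watch the resilver progress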
Hello,
msdos (MBR) partition tables only support a maximum disk size of 2 TB. If you have a larger disk, you need to convert your partition table to GPT. However, I recommend taking a backup/snapshot first, as changing the partition table could break your filesystem if done wrong.
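For example (assuming /dev/sdb is the disk in question and you already have a backup), gdisk can convert an existing MBR table to GPT in place:

gdisk /dev/sdb
# gdisk reads the MBR table and converts it to GPT in memory
# check the layout with 'p', then write the new GPT table with 'w'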
The system, for example, caches files, and if you empty the cache it needs to fetch the files from the HDD again. High RAM usage isn't always bad. Depending on the size of the ZFS pool you may even need more RAM. Do you have an L2ARC (cache) configured?
Note that flushing the cache does not affect...
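If you want to see how much of that RAM is actually the ZFS ARC, something like this works (the zfs_arc_max value is just an example, ~4 GiB):

# current ARC size in bytes
grep -w size /proc/spl/kstat/zfs/arcstats
# or, if installed, a readable summary
arc_summary
# cap the ARC, e.g. at ~4 GiB, via a module option
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
# takes effect after update-initramfs -u and a reboot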
Well, indeed. It seems that I can reproduce it on another node.
Create a bridge:
auto vmbr1
iface vmbr1 inet manual
    bridge_fd 0
    bridge_ports none
ifup vmbr1
Create two VMs with the following configuration:
net0: virtio=<MAC>,bridge=vmbr1
Install an operating system (I used CentOS 7)...
I have finally tracked down the issue to a specific VM.
The VM receives traffic and inspects it to find patterns of DDoS attacks. The network interface is configured as virtio, attached to the bridge.
The RAM usage is increasing steadily. However, if I change the network device to e1000 the issue...
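For reference, switching the NIC model on a Proxmox VM can be done with qm set (the VM ID 100 and the MAC address are placeholders here):

qm set 100 --net0 e1000=AA:BB:CC:DD:EE:FF,bridge=vmbr1
# and back to virtio for comparison
qm set 100 --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr1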
Well, it seems that I/O is the issue. A Graphite VM that writes a lot of metrics triggered the problem with about 10 MB/s and roughly 200 write IOPS.
After turning off the VM, the RAM still increases, though only slightly, by about 200 MB per hour, but the bug somehow still exists. I guess there is a memory leak...