I'm doing an overhaul of my cluster. Currently it's connected through a 1GbE switch. I've installed 10GbE NICs and I also want to deploy Ceph.
What is the best process so I can use the 10GbE speed for transfers between nodes and install Ceph without breaking my cluster?
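One common approach is to put the 10GbE NICs on a dedicated subnet and point Proxmox migration traffic at it before touching Ceph. A minimal sketch, assuming a hypothetical 10.10.10.0/24 subnet and interface name `ens1f0` (yours will differ):

```shell
# Sketch only: assumes the 10GbE NICs sit on a hypothetical 10.10.10.0/24 subnet.
# Give each node a static address on that subnet in /etc/network/interfaces, e.g.:
#   auto ens1f0
#   iface ens1f0 inet static
#       address 10.10.10.11/24
# Then tell Proxmox to run migrations over it (cluster-wide, /etc/pve/datacenter.cfg):
echo "migration: secure,network=10.10.10.0/24" >> /etc/pve/datacenter.cfg
```

Ceph can later be pointed at the same fast subnet (its cluster/public network settings) when you deploy it, so replication traffic also stays off the 1GbE link.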
Recently I had both power supplies blow up on a TrueNAS server which hosted shared storage for several VMs. I now want to implement some HA for the critical services.
On my large VMs I keep the important files on local (ZFS) storage and older, non-important data on the shared NFS storage.
If...
I have a VM with multiple disks; one of the disks is a transcoding-only mount and contains 200GB+ of exports which change hourly.
Is there a way to make pve-zsync SKIP certain disks inside a VM? I've read the pve-zsync manual (6.x) and I don't see a way to do this. It...
Sorry, this isn't Proxmox-specific, but perhaps someone can help me understand.
If I have a Ceph cluster with 3 nodes, each node having 1 x 2TB hard drive, is the storage replicated 1:1 across the other nodes?
If I have 500GB of video files, will the total used across all the nodes be 500GB, or would it be...
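For a replicated Ceph pool, raw usage is the logical data size times the replica count (`size`, which defaults to 3). A back-of-the-envelope calculation for the numbers above, assuming the default `size=3` and one OSD per node:

```shell
logical_gb=500   # logical data stored in the pool
size=3           # Ceph replica count (default for replicated pools)
nodes=3
raw_gb=$((logical_gb * size))     # total raw capacity consumed cluster-wide
per_node_gb=$((raw_gb / nodes))   # with size == nodes, every node holds a full copy
echo "raw: ${raw_gb}GB, per node: ${per_node_gb}GB"
# prints: raw: 1500GB, per node: 500GB
```

So 500GB of video files consumes roughly 1.5TB of raw capacity across the cluster, about 500GB per node, before Ceph overhead.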
I have a ZFS pool consisting of 4 x 750GB "enterprise" SATA drives. On the host and in containers I get fairly good disk speeds, however in a KVM guest the disk speeds fall on their face. I've tried various KVM disk cache settings, including no cache, writeback and writethrough, and nothing seems...
I have the same problem. My KVM was running fine for 600 days; I had to stop it and got this error:
kvm: error: failed to set MSR 0x40000090 to 0x10000
kvm: /home/builder/source/qemu.tmp/target/i386/kvm.c:1910: kvm_put_msrs: Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed.
I upgraded...
I'm trying to restore a .vma.lzo backup from an old version of Proxmox. However I get this error:
root@rovio:~# qmrestore vzdump-qemu-110-2016_08_03-01_00_03.vma.lzo 110
Configuration file 'nodes/rovio/qemu-server/110.conf' does not exist
It doesn't matter what I set the vmid to, and I can't...
I backed up and transferred an LXC dump from Proxmox 4.1 to 4.2. When I ran pct restore (with the --storage local-lvm option) and started the container, HHVM and other processes were freaking out, and I noticed all of the directories and *some* of the files in /var/log were missing.
Has anyone seen...
I have a clean install of 4.2 on 2 x 600GB SAS. The server also has a 2TB drive and a 500GB SSD that I want to use for non-important VMs and backups of the VMs. What is the suggested method for adding storage with 4.2 and lvm thin?
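A minimal sketch of one way to do this, assuming the 2TB drive shows up as `/dev/sdb` (a hypothetical device name; the volume group and storage names below are also made up): create an LVM thin pool on it and register it with Proxmox.

```shell
# Assumes /dev/sdb is the empty 2TB drive -- verify with lsblk before running!
pvcreate /dev/sdb
vgcreate vg2tb /dev/sdb
# Carve most of the VG into a thin pool, leaving headroom for pool metadata
lvcreate -L 1.8T --thinpool data vg2tb
# Register it as a Proxmox storage entry usable for VM disks and containers
pvesm add lvmthin vm2tb --vgname vg2tb --thinpool data --content images,rootdir
```

The same steps work for the SSD; for the backup target you'd instead want a filesystem-backed storage with `--content backup`, since vzdump can't write backups to an lvmthin store.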
I'm planning on deploying 2 x Samsung 850 Pro SSDs in a Dell R420 server and probably a 1TB HDD for backups. The server has a Dell H710 w/ 512MB cache installed.
Is there anything wrong with doing hardware RAID-1 with 2 x SSDs? I'm not experienced enough to set up and maintain ZFS.
One of my VMs is locking up with random filesystem errors inside the guest. Looking at the host, I see these in the kernel log:
[Fri Mar 18 04:00:51 2016] EXT4-fs (loop1): error count since last fsck: 11
[Fri Mar 18 04:00:51 2016] EXT4-fs (loop1): initial error at time 1456211823...
I have the same problem as iMer.
Backup output:
strace -p <pid of lxc-freeze>
output of ps axl | awk '$10 ~ /D/'
When I kill lxc-freeze, it moves on to the next container:
ERROR: Backup of VM 100 failed - command 'lxc-freeze -n 100' failed: got signal 15
However this leaves the container...