I know it's beating a dead horse, but it's curious that there's no official support for mdraid considering that in a hardware RAID scenario the controller is a single point of failure. With mdraid the array is far more portable between systems and isn't reliant on that single controller, but sure...
ZFS isn't the best solution for every scenario, dietmar. Maybe somebody needs something simple without the high memory and ECC RAM requirements (because everywhere you look it's stated not to use ZFS without ECC memory).
I'm trying to add a 4.0 host to a cluster created on 3.4, but I'm getting this error when running pvecm add x.x.x.x:
rsync: change_dir "/etc/corosync" failed: No such file or directory (2)
rsync: link_stat "/etc/pve/corosync.conf" failed: No such file or directory (2)
rsync error: some...
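For context, PVE 3.4 clusters run the old cman/corosync 1.x stack and store their config as cluster.conf, while 4.0 uses corosync 2.x and corosync.conf, so mixed 3.x/4.x clusters aren't supported and the rsync finds nothing to copy. A quick way to see the mismatch on the 3.4 node (a sketch, assuming a standard install):

# On a 3.4 cluster node the old-style config exists...
ls -l /etc/pve/cluster.conf
# ...while the corosync 2.x config that the 4.0 node asks for does not:
ls -l /etc/pve/corosync.conf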
I have a brand new system I built for homelab testing. Its specs: AMD FX-8320E, 32GB HyperX RAM, 2x 320GB HDDs in RAID1 and 4x 2TB HDDs in RAID10. During the install process I see dozens of AMD-Vi IO_PAGE_FAULT errors.
I had IOMMU disabled in the BIOS and my NIC subsequently wouldn't work...
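If re-enabling IOMMU brings the page faults back, one workaround that's often suggested (an assumption here, not a verified fix for this board) is to keep the IOMMU on but run it in passthrough mode via the kernel command line:

# In /etc/default/grub (iommu=pt is a standard kernel parameter):
GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"
# then apply and reboot:
update-grub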
I have 6 hard drives that I want to configure in the following manner:
2x ZFS RAID1 - 320GB for / and main system partitions
4x ZFS RAID10 - 4TB for VM Storage
The problem I've encountered is that I can only configure one of these arrays during setup: either the RAID1 or the RAID10.
Once...
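One approach, sketched under the assumption that the installer is used only for the 320GB mirror and the four 2TB disks are left untouched: create the second pool by hand after first boot. Device names below are examples only:

# Create a striped-mirror (RAID10-style) pool from the four 2TB disks:
zpool create -o ashift=12 tank mirror /dev/sdc /dev/sdd mirror /dev/sde /dev/sdf
# Register it as VM storage (the storage ID 'tank' is arbitrary):
pvesm add zfspool tank -pool tank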
I have a Gigabyte 990FXA-UD3 motherboard with an onboard Realtek RTL8111/8168/8411 chip.
Proxmox 4.0 doesn't appear to see the card correctly. Does anybody know if there are any special requirements for running Proxmox with a motherboard like this?
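A first thing worth checking from the console (a generic diagnostic, nothing Proxmox-specific) is whether the chip shows up on the PCI bus and which driver binds to it; the RTL8111/8168 family is normally handled by the in-kernel r8169 driver:

lspci -nn | grep -i ethernet
dmesg | grep -i r8169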
Thanks
Re: grub2 update problems for 3.4 (GUI only), leading to not correctly installed pack
Do you have a list of package versions I can check against to determine whether I'm affected by this? I had already run the updates via the GUI.
I performed a host update, but the web GUI became unresponsive, so I restarted the server through the console. Now on startup I get:
GRUB Loading
Welcome to GRUB!
Error: file not found
Entering rescue mode
The server is on the pve-no-subscription repo and doesn't do much (no modifications or changes at all)...
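From the grub rescue> prompt, a sequence along these lines can often get the box booted again so GRUB can be reinstalled properly; the partition name (hd0,msdos1) is only an example, use ls to find the right one:

ls                                  # list the disks/partitions grub can see
set prefix=(hd0,msdos1)/boot/grub
set root=(hd0,msdos1)
insmod normal
normal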
The previous Proxmox server is booted from a rescue CD and I've mounted its disks, but there's nothing under /etc/pve/. Where else on the system can I find the VM conf files?
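/etc/pve is a FUSE mount (pmxcfs) that is only populated while the pve-cluster service is running, so from a rescue CD it looks empty. The underlying data lives in an SQLite database on the root filesystem; assuming the old root is mounted at /mnt, something like this lists the stored files:

sqlite3 /mnt/var/lib/pve-cluster/config.db "SELECT name FROM tree;"
# A specific VM config can then be dumped from the same table, e.g.:
sqlite3 /mnt/var/lib/pve-cluster/config.db "SELECT data FROM tree WHERE name = '100.conf';"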
I did a direct copy of /var/lib/vz/images/* over SSH from one server to another so that I can decommission the older server.
I need some help understanding this, because the QCOW2 files on the old and new server differ in size.
Old Server: VM / Size
New Server: VM / Size...
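One plausible explanation (assuming nothing else touched the images): qcow2 files are usually sparse, and a plain copy over SSH does not preserve sparseness, so the apparent and allocated sizes can diverge between hosts. Comparing both numbers on each server shows whether that's what happened; the path is an example:

ls -lh /var/lib/vz/images/100/vm-100-disk-1.qcow2   # apparent size
du -h /var/lib/vz/images/100/vm-100-disk-1.qcow2    # blocks actually allocated
qemu-img info /var/lib/vz/images/100/vm-100-disk-1.qcow2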
I had an issue with one of my Proxmox servers, so I set up a new Proxmox host and copied over all data from /var/lib/vz/images/.
On the previous Proxmox server I had a VM that was running on a snapshot, but once I imported that VM onto the new Proxmox host the web GUI details that a...
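Worth noting for anyone comparing the two hosts: internal qcow2 snapshots travel inside the image file itself, but the snapshot entries the GUI shows come from the VM's config file, which a copy of /var/lib/vz/images/ alone doesn't bring over. Listing the snapshots stored in the image (path is an example) shows what actually made it across:

qemu-img snapshot -l /var/lib/vz/images/101/vm-101-disk-1.qcow2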
According to:
http://pve.proxmox.com/wiki/Windows_VirtIO_Drivers
...the virtio repo located at:
http://alt.fedoraproject.org/pub/alt/virtio-win/latest/
...should contain signed 64-bit versions of the virtio drivers:
"Those binary drivers are digitally signed, and will work on 64-bit versions of...
I have a container configured with a static IP address in /etc/network/interfaces; however, every night at 11PM the CT reverts to a random DHCP address.
Why is an explicitly configured setting in the file being ignored?
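One guess worth checking: on Proxmox the container's network is normally managed from the host side, and host tooling can rewrite the guest's /etc/network/interfaces, so a nightly cron job or a CT restart could undo manual edits. Setting the address through the host config instead (the CT ID and addresses below are examples) should make it stick:

pct set 102 -net0 name=eth0,bridge=vmbr0,ip=192.168.1.50/24,gw=192.168.1.1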
I found I had stored an older version of Proxmox 3.0, one that works with the script Davide Lucchesi wrote, so I installed it on my system and was thus able to set up the software RAID1 array without difficulty.
Because it looks like the script is no longer available, here it is...
I can confirm this no longer works with GPT disks, and I'm not sure which steps should be replaced.
The script that Davide Lucchesi wrote based on the guide doesn't work either, because it contains commands for MBR partition tables rather than GPT :\
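For what it's worth, the usual MBR step of cloning the partition table with sfdisk is what breaks on GPT; the GPT equivalent is sgdisk (device names are examples, and this is a sketch rather than a tested replacement for the script):

sgdisk -R /dev/sdb /dev/sda   # replicate sda's GPT layout onto sdb
sgdisk -G /dev/sdb            # randomize GUIDs so the copy is unique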
I am trying to add LVM on top of a new iSCSI target; however, when I select the 'Base volume' dropdown I see no options, as per this image:
Running the following results in timeouts:
root@proxhost:~# iscsiadm -m node -L all
Logging in to [iface: default, target...
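Before the base volume dropdown can populate, the target has to be discoverable and the login has to complete, so checking the portal directly may narrow things down (the portal IP is an example):

iscsiadm -m discovery -t sendtargets -p 192.168.1.200
iscsiadm -m session    # lists active sessions, if any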