You have allocated 24 GB of RAM for KVM VMs while you only have 16 GB in your HN. It's not going to work. Well, it might, for some time, with extensive swapping, which in turn will render the server unusable. Revise the memory allocation for your VMs. No wonder you run into serious problems. Leave at...
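For example, a quick sketch of trimming a VM's assigned RAM from the CLI (the VM ID and size are just placeholders):

# reduce the RAM assigned to VM 101 to 2 GB (ID and value are hypothetical)
qm set 101 --memory 2048
# verify the new setting
qm config 101 | grep memory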
Could you please tell me how I can create a dir-based container with ZFS in 4.0beta? I can't see any way to do that (besides setting the CT disk size to 0, but that's just dir-based and doesn't need ZFS).
In that case I have no idea; something's funky with your setup. You didn't copy the 3rd node, did you, like cloning an existing PVE HN? I suspect a colliding ID somewhere, seeing that one of the old servers got replaced. I'd recommend a full reinstall of your 3rd node if you haven't started VMs...
This sounds awfully familiar. Check your hosts file: it must have the correct IP for your node and pvelocalhost (check a working node to see what I'm talking about). Also check your DNS. All similar problems I've encountered were caused by a misconfigured DNS or hosts file.
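For illustration, a working node's /etc/hosts usually looks something like this (hostname and address are made up):

# /etc/hosts on the node "pve3" (address is hypothetical)
127.0.0.1       localhost.localdomain localhost
192.168.10.13   pve3.example.local pve3 pvelocalhost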
Well, I solved it by running the latest 2.6.32 PVE kernel. Not a single issue since then. This host doesn't run OVZ guests, so I thought I'd give the 3.10 series a go. It might be fine on newer HW, I don't know. But for production use on this server, and probably in general, it's not...
I've tried and checked all of this before reporting the problem, but here goes. I'll try to answer everything in order.
1. Only virtio-scsi causes issues.
2. Yes, absolutely. See my previous post, too. I was greeted with the host's login prompt in the NoVNC console...
3. No cluster here yet...
An update: after some reading around I tried loading the vhost-scsi module with "modprobe vhost_scsi". It loaded and I haven't been able to reproduce the error since. Before that I tried upgrading the HN to the latest PVE but it exhibited the exact same problem with kernel 3.10.0-10-pve. One VM...
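In case anyone wants to make this stick across reboots, a minimal sketch (assuming a stock Debian-based PVE host):

# load vhost_scsi now and on every boot
modprobe vhost_scsi
echo vhost_scsi >> /etc/modules
# check that it is actually loaded
lsmod | grep vhost_scsi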
I've encountered a very strange and potentially very dangerous issue. I'm using virtio-scsi on top of a local LVM volume in a guest, and instead of booting from the virtual disk or CD image, it tries to boot from the host drive. I can see the host's GRUB menu on the guest's console...
Yes, I'm using apcupsd without problems with smaller Eaton/MGE units (<2 kVA).
EDIT: BTW, where is it stated that they abandoned NUT development?
EDIT2: never mind, it's on the official page. But it happened several years ago; they only put out the official announcement last year.
Oh, sorry, I missed that point. I think with only the OS sitting on a ZFS mirror you should have no problems. But it's still comparatively memory-intensive, and if you don't need or use ZFS features, I see no point. I'd just stick with MDRAID.
I use ZFS (ZoL 0.6.3) on some 4 GB nodes, on bare metal and in VMs too. They work just fine for light to moderate loads. Do not expect high performance, though: ZFS will happily eat any and all RAM it finds.
EDIT: see also http://pve.proxmox.com/wiki/ZFS#Limit_ZFS_memory_usage
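The short version of that page, as a sketch (the 1 GB cap is just an example value):

# /etc/modprobe.d/zfs.conf - limit the ZFS ARC to 1 GB (value is an example)
options zfs zfs_arc_max=1073741824

You may also need to run update-initramfs -u and reboot for the new limit to take effect.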
If you're willing to roll your own solution, there are a few dozen backup-centric VPS providers. Simply google for "cheap backup vps" or something similar and you'll see what I'm talking about. They basically provide large, but potentially slow, redundant storage for your VPS and enough RAM for...
I had a very similar problem on X10 series Supermicro boards. The solution was to turn off a PCI-e power control option in the BIOS. Sorry, I can't remember off the top of my head which one; I'd need to reboot to check the BIOS, and that's not possible on live servers ATM :)
This method is seriously flawed since many disks misreport sector sizes for compatibility reasons. (And I also think you meant to write least common multiple.) See the array of misreporting disks here: https://github.com/zfsonlinux/zfs/blob/master/cmd/zpool/zpool_vdev.c#L108
As it...
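Because the reported sizes can't be trusted, one workaround (just a sketch, pool and disk names are made up) is to set the block shift explicitly when creating the pool instead of relying on detection:

# force 4K sectors regardless of what the drives report (names are hypothetical)
zpool create -o ashift=12 tank mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B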
IIRC a rescan is triggered by writing "- - -" to sysfs, for example: echo "- - -" >/sys/class/scsi_host/host3/scan
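And if you need to hit every HBA at once, something like this should do (just a quick sketch):

# rescan every SCSI host adapter for new devices
for h in /sys/class/scsi_host/host*; do
    echo "- - -" > "$h/scan"
done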
Aside from more complex solutions like booting from ZFS on Linux hosts, always provide (see the sketch after this list):
- whole disks to the pool
- symbolic disk names to ease disk identification and to...
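Something along these lines (disk names are purely illustrative, assuming a simple mirror):

# find the persistent names of the disks
ls -l /dev/disk/by-id/
# create the pool from whole disks using those stable names (names are hypothetical)
zpool create tank mirror \
    /dev/disk/by-id/ata-WDC_WD20EFRX_SERIAL1 \
    /dev/disk/by-id/ata-WDC_WD20EFRX_SERIAL2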
Doing it manually is fine. It was more of a technical question, since the recommended and supported FS is ext3/4, and while many others also use OVZ on XFS, ZoL + OVZ is very rare as far as I can tell. Do you have positive experience with this combination? Any caveats?
Great news. ZFS is a very nice addition, but I was wondering if it's recommended to put OpenVZ containers on ZFS volumes. Putting every container on its own dataset would ease management, backups, etc. a lot. Would this be a supported setup?
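Just to illustrate what I have in mind (pool name and CT ID are made up, and I don't know yet whether this will be the supported layout):

# one dataset per container, e.g. for CT 101
zfs create rpool/containers
zfs create rpool/containers/101
# per-container snapshots then become trivial
zfs snapshot rpool/containers/101@before-upgrade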
Re: New Mini-itx Proxmox Build
Guys, could you please enlighten me: why is hardware support for VLANs so important in this case? I've always been able to use VLANs on the cheapest Gb NICs. It works nicely using the Linux kernel's VLAN code. VLANs are essentially just Ethernet frames tagged...
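For reference, this is all it takes on a plain GbE NIC with the in-kernel 8021q code (Debian-style /etc/network/interfaces; interface name and addresses are made up):

# /etc/network/interfaces - VLAN 100 on eth0 (purely illustrative)
auto eth0.100
iface eth0.100 inet static
    address 192.168.100.2
    netmask 255.255.255.0
    vlan-raw-device eth0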