There is VirtIO support in pfSense 2.1 as well, but with some limitations; for example, traffic shaping does not work.
https://doc.pfsense.org/index.php/VirtIO_Driver_Support
I'm using e1000 vNICs for that reason until pfSense 2.2 is available.
hi
It is not recommended to run your internet firewall as a VM, nor on the Proxmox host itself.
I would prefer a separate physical device to handle the internet connection, for example pfSense on an ALIX or APU board.
But if you want to experiment, you could try pfSense as a VM with 2 virtual NICs bridged to the 2 physical...
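As a sketch, the host side of that setup might look like this in /etc/network/interfaces (all interface and bridge names here are assumptions):

```
# /etc/network/interfaces on the Proxmox host (sketch; names are assumed)
auto vmbr0
iface vmbr0 inet manual
        bridge_ports eth0   # physical WAN NIC, dedicated to the pfSense VM
        bridge_stp off
        bridge_fd 0

auto vmbr1
iface vmbr1 inet manual
        bridge_ports eth1   # physical LAN NIC
        bridge_stp off
        bridge_fd 0
```

The pfSense VM then gets one vNIC on vmbr0 and one on vmbr1.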
If your question is whether you can create an OMV VM with 4 .raw files as hard disks and let OMV build a RAID configuration from them, then yes, you can. But that is not recommended. What I really don't understand is why you want OMV virtualized so badly.
Thank you for the tip!
Finally I found what was wrong.
I have glusterfs also installed on these boxes.
By default, NFS was disabled on Gluster, but it seems that after some apt-get upgrade it decided to enable it.
This was conflicting with the local NFS server.
I gave:
gluster volume set <VOLNAME>...
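For reference, a hedged sketch of the fix; the exact option is truncated above, but nfs.disable is the usual Gluster option for this, and "myvol" is a placeholder volume name:

```shell
# Disable GlusterFS's built-in NFS server so it stops conflicting
# with the kernel NFS server ("myvol" is a placeholder volume name)
gluster volume set myvol nfs.disable on

# Then restart the kernel NFS server so it can re-register its ports
service nfs-kernel-server restart
```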
Running rpcinfo -p | grep nfs on the NFS server returns no results.
But rpcinfo -p shows:
root@proxmox2:~# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4...
Thank you for pointing that out, but /storage/nfs is actually a symbolic link to /storage/nfs-zfs.
I also tried modifying /etc/exports with the direct path, but I still get the same error.
If I try to restart nfs-kernel-server on NFS server I get this:
service nfs-kernel-server restart
Stopping NFS kernel...
Hello
I have 2 proxmox nodes in a cluster.
I am using debian wheezy with proxmox repos.
One of the nodes acts as an NFS server for ISO files.
It was working normally, but now it isn't.
In syslog I get the message: mount.nfs: requested NFS version or transport protocol is not supported
On NFS server...
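A sketch of how to narrow that error down (the server IP is a placeholder; /storage/nfs is the export from this thread):

```shell
# Ask the server's portmapper which NFS versions/transports it offers
rpcinfo -p 192.168.1.10        # placeholder NFS server IP

# Try forcing NFSv3 over TCP from the client to rule out a
# version/transport negotiation problem
mkdir -p /mnt/test
mount -t nfs -o vers=3,proto=tcp 192.168.1.10:/storage/nfs /mnt/test
```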
hi
First, don't install Proxmox on USB sticks (it's write-intensive).
Second, what's the benefit of the first scenario? Why run an OMV VM and Proxmox on the same machine?
I think it complicates things with no real benefit. If you need NFS, use another box or install an NFS server directly on Proxmox.
I would prefer...
Yes, first do a backup of all VMs.
Then decide which server will overwrite the data of the other (actually it will write only the differences; a full sync will not be initiated).
Split brains are common with DRBD; that is why I told you not to use Proxmox HA in combination with DRBD.
When these...
The problem is here: you must resolve the DRBD split-brain situation.
It should be ds:UpToDate/UpToDate
to be able to live migrate.
To resolve this follow this guide:
https://pve.proxmox.com/wiki/Two-Node_High_Availability_Cluster#DRBD_split-brain
Also you must be sure that 10.100.5.234 and...
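The linked guide boils down to commands like these (the resource name r0 and which node is the "victim" are assumptions; the victim's unsynced changes will be discarded):

```shell
# On the node whose data you are willing to discard:
drbdadm secondary r0
drbdadm connect --discard-my-data r0

# On the surviving node (only needed if it shows StandAlone):
drbdadm connect r0

# Watch the resync until the state is ds:UpToDate/UpToDate
cat /proc/drbd
```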
Simply follow the LVM guidelines to create a volume group:
1. First, create a PV from an underlying block device, for example /dev/sdb1
2. Create a Volume Group (VG) on this PV, for example VG0
3. Now, go to the Proxmox web GUI, Datacenter -> Storage -> Add -> LVM, and select the previously created VG0...
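Steps 1 and 2 as shell commands (device and VG names taken from the example above):

```shell
# 1. Create a physical volume on the underlying block device
pvcreate /dev/sdb1

# 2. Create a volume group named VG0 on that PV
vgcreate VG0 /dev/sdb1

# Verify the VG exists before adding it in the web GUI
vgs
```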
Yes
Yes, you can, but not automatically; you have to do it manually. When the first node is down, you should manually move the VM config file to the appropriate directory of node2 (e.g. "mv /etc/pve/nodes/proxmox1/qemu-server/100.conf /etc/pve/nodes/proxmox2/qemu-server/"). Then just start...
Thank you for your response.
Yes, I too have installed pfSense many times on Proxmox, but this particular instance seems to have this strange problem.
I am trying to reproduce it on my test Proxmox box, but everything is still fine there.
Now, on the problematic pfSense, I have installed from scratch...
hello,
Recently I tried to virtualize pfSense 2.1.5 i386 in a PVE VM with the following config:
balloon: 512
bootdisk: ide0
cores: 1
cpuunits: 5000
ide0: local:100/vm-100-disk-1.qcow2,format=qcow2,cache=directsync,size=10922M
ide2: none,media=cdrom
memory: 1250
name: pfsense
net0...
hello
DRBD traffic (and also live migration traffic) will pass through the interface/IP that you have configured in DRBD's resource file (e.g. /etc/drbd.d/r0.res).
To be sure, use the iptraf utility while performing a live migration.
Cluster traffic will pass through the IP/NIC that you have set in...
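For illustration, a minimal resource file might look like this (hostnames, devices, and IPs are all assumptions):

```
# /etc/drbd.d/r0.res (sketch; hostnames, devices, and IPs are assumed)
resource r0 {
    protocol C;
    on proxmox1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7788;   # DRBD traffic uses this IP/NIC
        meta-disk internal;
    }
    on proxmox2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}
```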