If your PVE host is plugged into the trunk port, it should receive all packets with all VLAN tags.
You do not have to do any special configuration to get access to the Proxmox host.
Try to ping a machine on one of the LANs your switch is connected to.
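For example (the address is just a placeholder for a machine on one of the LANs behind the switch):
ping -c 3 192.168.10.20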
PVE uses the latest Ubuntu LTS kernel (currently from Ubuntu 16.04 plus patches), so usually you need to check whether the hardware is supported by Ubuntu.
According to http://h17007.www1.hpe.com/us/en/enterprise/servers/supportmatrix/exceptions/ubuntu_exceptions.aspx#.WbqeUa202-Y
your...
Check if you're logged in to your iSCSI portal:
iscsiadm -m session -P1
For each LUN you should see:
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
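If a session is not logged in, you can usually (re)establish it with something like this (target IQN and portal IP are placeholders for your setup):
iscsiadm -m node -T iqn.2001-05.com.example:storage.lun1 -p 192.168.1.50 --login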
No, LVM and LVM-thin have different properties, but you cannot say one is *faster*.
If you're interested in speed, use (enterprise) SSDs. There is much more to gain by switching from mechanical disks to SSDs than by fiddling with different storage settings.
> As I have another disk for just the OS proxmox, the idea is to partition 1tb disk Hardware Raid to 500gigs LVM-thin and the other 500gigs as a normal directory
Then you probably need to do pvcreate /dev/sdb2, where 2 is the second partition of your disk, and then create a volume group using the...
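As a rough sketch, assuming /dev/sdb2 is the partition you want to dedicate to LVM-thin (names and sizes below are placeholders, adjust them to your layout):
pvcreate /dev/sdb2                  # initialize the partition as an LVM physical volume
vgcreate vg_thin /dev/sdb2          # create a volume group on it
lvcreate -L 450G -T vg_thin/data    # create a thin pool inside the volume group
You can then add the thin pool as LVM-Thin storage via Datacenter -> Storage in the GUI.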
Do you plan to assign public IPs to your VMs?
If yes, you can create a second NIC for your VM, add it to the vmbr1 bridge, and configure it with a public IP inside the VM.
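Inside a Debian/Ubuntu guest this could look roughly like the snippet below in /etc/network/interfaces (interface name, address, netmask and gateway are placeholders for your actual public IP assignment):
auto ens19
iface ens19 inet static
    address 203.0.113.10      # placeholder public IP
    netmask 255.255.255.0
    gateway 203.0.113.1       # placeholder provider gateway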
I had a look at this and submitted a pull request for the proxmoxer Python module (which the Ansible proxmox module uses in the backend) so that the root cause of the error is displayed on API call errors.
Please rebuild the proxmoxer module using...
When you disconnect/reconnect the cable, the cluster software should detect this and rebuild the cluster.
After the cable is plugged in again, check that:
1) the link is active on the NIC
ip link show dev your_NIC
should have LOWER_UP in the output
2) if the link is active, verify if the...
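Once the link is up again, you can also verify that the node rejoined the quorate partition, for example with:
pvecm status    # shows corosync membership and whether the cluster is quorate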
If your storage is so slow that a ping request is blocked by a pending write on the storage, you should maybe start benchmarking the storage too :)
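A minimal starting point for such a benchmark, assuming you can spare a few GB on the storage in question (path, size and runtime are placeholders):
fio --name=randwrite --filename=/path/on/storage/fio.test --size=4G --bs=4k --rw=randwrite --ioengine=libaio --direct=1 --iodepth=32 --runtime=60 --time_based --group_reporting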
Are you using VirtIO network drivers for the VM NIC?
It would be interesting to see the vm.conf.
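You can dump the VM configuration directly with qm (the VMID below is a placeholder):
qm config 100    # prints the config of VM 100, i.e. /etc/pve/qemu-server/100.conf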
First, thanks for using fio, this is the right tool for benchmarking disks.
Concerning ZFS vs ext4 performance: at a higher level, it depends on where you put the focus for your setup. ZFS is focused on data safety first, and you get checksumming and other ZFS features for free.
It is exactly the...
Do you have HA resources configured on this node?
Remember, if you have HA resources on a node and the node is not in the corosync quorum partition, the HA manager will force a reboot of the host.
You can track such behaviour by inspecting the pve-ha-lrm log with:
journalctl -u pve-ha-lrm
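If you are not sure whether any HA resources are configured at all, the standard PVE CLI tool should tell you:
ha-manager config    # lists the configured HA resources
ha-manager status    # shows their current state and the LRM/CRM status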
check that:
* network connectivity is working via the vmbr0 device
(ping your gateway from the host)
* the virtual NIC of the vm is inside the vmbr0 bridge
brctl show vmbr0
should give you a list of devices on the bridge, of the form "tapMYVMIDi0" (e.g. tap100i0 for a VM with ID 100)
* on the host start tcpdump on the bridge...
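For the tcpdump part, a quick sketch, assuming the bridge is vmbr0 and you are tracing ping/ICMP traffic (adjust the filter to whatever you are debugging):
tcpdump -ni vmbr0 icmp    # -n skips name resolution, icmp filters to ping traffic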
Usually this kind of very long access is caused by non-working reverse DNS lookups.
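To test whether reverse lookups resolve quickly from the host, you can time one by hand (the IP is a placeholder for a client address from your logs):
time dig -x 192.0.2.25 +short    # if this hangs or times out, the reverse DNS setup is the culprit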
I don't think this is something which has to do with PVE. To rule that out, you would need to run the web server on the PVE host and compare the TTFB of the web server running on the host vs. the web server running inside the guest.
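A simple way to measure TTFB in both cases is curl (the URL is a placeholder for your site):
curl -o /dev/null -s -w 'TTFB: %{time_starttransfer}s\n' http://example.com/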