We are testing this setup using a VM (FreeNAS) with disk passthrough (via FC on the storage).
Current VM settings are:
- cpu: qemu64
- scsihw: virtio-scsi-pci
- LUN passthrough: virtio2: /dev/disk/by-id/dm-uuid-mpath-331402ec001f4fc
- virtio drive cache: none
My question: is there any recommended...
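For reference, the settings listed above could be applied from the Proxmox CLI roughly like this (a sketch; the VM ID 100 is just a placeholder, and the device path is the one from the post):

```shell
# Set the SCSI controller type, CPU type, and attach the multipath
# device as a virtio disk with cache disabled (VM ID 100 is an example).
qm set 100 --cpu qemu64
qm set 100 --scsihw virtio-scsi-pci
qm set 100 --virtio2 /dev/disk/by-id/dm-uuid-mpath-331402ec001f4fc,cache=none
```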
The IBM storage is connected to 3 servers via FC, so one shared disk will be visible on all servers.
Which PVE storage solution can be chosen to accomplish these tasks:
- VM disk will be stored on shared disk (storage),
- qcow2 format,
- live migration.
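One way to meet all three requirements is a directory storage on top of a cluster filesystem. A minimal sketch, assuming a clustered filesystem (e.g. OCFS2 or GFS2) is already mounted at /mnt/shared on every node; the storage name shared-fc is just an example:

```shell
# Add a directory storage backed by the cluster filesystem.
# "--shared 1" tells PVE the path holds identical content on all nodes,
# which is what allows qcow2 images there and live migration between nodes.
pvesm add dir shared-fc --path /mnt/shared --content images --shared 1
```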
Please help me understand one simple implementation solution:
- Storage via FiberChannel,
- multiple servers have FC to shared storage,
- VMs will be in qcow2 format (so we need a file system to store them).
2) As I understand it, the file system has to be a clustered version (shared...
Two days ago, the Proxmox cluster we are testing rebooted completely :(
Cluster components: 2 physical servers and 1 VM as a quorum member
- 2x ProLiant DL585 G6 with iLO
- 1x VMware VM on another physical server as the quorum VM
All 5 OSes (3 Proxmox and 2 iLO) are NTP-synchronized...
There is no such file, and there is no need for it (you would only create one if you want to change the names yourself).
The kernel assigns interface names based on the PCI bus layout:
# cat /etc/udev/rules.d/70-persistent-net.rules
cat: /etc/udev/rules.d/70-persistent-net.rules: No such file or directory
# udevadm info -e | grep -A 9...
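If you do want to pin interface names yourself, the rules file can be created by hand. A minimal sketch (the MAC address and the name eth-lan0 are placeholders, not values from this post):

```shell
# /etc/udev/rules.d/70-persistent-net.rules (created manually)
# Match the NIC by its MAC address and force a fixed name,
# so the name stays stable across reboots and hardware re-enumeration.
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:ff", NAME="eth-lan0"
```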
I'm new to PVE, so I may not know something.
In our installation, we have 2 HP servers with 3 modules of NICs on each:
- 2 NICs onboard
- 3 NICs on 1 PCI-X slot,
- 2 NICs on 1 PCI-E slot.
After the first test install, a problem arose with inconsistent interface names.
Two identical...