Managed to get this solved. The issue is that the Proxmox host has no route back to the network from which the web interface is being accessed. Adding a route solves it:
route add -net 10.0.0.0 netmask 255.255.248.0 gw 192.168.40.1 dev vmbr2
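To make this survive a reboot, the same route can go into the vmbr2 stanza in /etc/network/interfaces as post-up / pre-down hooks (just a sketch; the rest of the existing bridge definition stays as it is):
post-up route add -net 10.0.0.0 netmask 255.255.248.0 gw 192.168.40.1 dev vmbr2
pre-down route del -net 10.0.0.0 netmask 255.255.248.0 gw 192.168.40.1 dev vmbr2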
I have a dedicated server on which I have Proxmox running. I have installed pfSense as a VM, which is working fine. I have assigned pfSense a public IP on its WAN interface, which is separate from the Proxmox host's own IP.
For LAN, on the Proxmox host I created a virtual bridge on eth0.1 and assigned it...
This is on Proxmox 5.
I wasn't specifying the VLAN tag on the Proxmox GUI for the VMs, but I have just tried that too (after disabling VLAN stuff on the VM). The result is the same - I can ping other VMs on VLAN 6, but not the Proxmox host (and vice versa).
bond_xmit_hash_policy just sets how the slave device is selected in a LACP link as far as I know. I changed it to layer2+3 as in the OP's first post just in case, but it makes no difference.
I'm also trying to get this working, but I can't get the Proxmox host talking to the VMs on the VLAN (the VMs can talk to each other on the same VLAN just fine though).
In my case, I want to have the Proxmox server on a management VLAN with VLAN ID 6. On the server I have this:
auto lo
iface...
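The layout I'm aiming for is the usual tagged sub-interface on the bond with a bridge on top carrying the host address, roughly like this (interface names and addresses here are placeholders, not my real config):
auto bond0.6
iface bond0.6 inet manual
    vlan-raw-device bond0

auto vmbr6
iface vmbr6 inet static
    address 192.168.6.10
    netmask 255.255.255.0
    bridge_ports bond0.6
    bridge_stp off
    bridge_fd 0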
Will this make it easier (via the GUI?) to create multiple different Ceph pools, so that I could create one all-flash pool for faster storage and one all-HDD pool for DVRs / backups, etc.?
Looking at the Ceph screenshot, I noticed the word "Storages". I don't think "storage" is countable, so this may be...
Hi jacmel,
I first deleted the storage entry "local-lvm" from the web GUI (under Storage), then I deleted the underlying volume from the command line with lvremove. You can use tab-complete after lvremove. It should be:
lvremove /dev/pve/data
I think.
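If you want to double-check the name before removing anything, lvs lists the logical volumes; on a default install the thin pool shows up as "data" in the "pve" volume group:
lvs pve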
This looks to be hard disk related, specifically the first hard disk (system disk).
The server is installed with a RAID1 config across sda and sdb (using the Proxmox VE installer). I switched the boot drive to sdb and it runs successfully, but it does tend to crash again after a few hours / days...
I'm testing the new storage replication framework, specifically trying to make it work with HA.
I've created a 3-node cluster and have a VM on server2. I set up replication, let the first sync complete, and then watched replication continue minute by minute. It's working nicely. I can migrate the...
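For anyone else testing this, the replication job can also be created from the CLI with pvesr rather than the GUI; this is just a sketch, with a made-up VM ID, job number and target node name:
pvesr create-local-job 100-0 server3 --schedule "*/1"
pvesr status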
We had one of our Proxmox VE servers crash today, and it wouldn't boot back up.
I've attached screenshots. Hard disk related? We have 2 hard disks in RAID1, so I feel it's unlikely that both would have failed.
The cluster did its job and the VMs were moved to remaining servers automatically...
Also, is it possible to somehow use a separate replication network (in the same way you can use a separate network for Ceph)? In my scenario, the servers have 10G NICs that could connect directly to one another, but I have no 10G switch available.
I don't know the answer to that yet either, but I'll need to find out soon as we are implementing this in the coming weeks.
Any feedback from those in the know?
We have a 3-server PVE cluster using Ceph running on SSDs.
Now we would like to add a second, separate Ceph pool to the same cluster using slow HDDs (only for CCTV DVR duties).
What is the recommended procedure for configuring that these days? I've seen these approaches...
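In case it helps frame the question, the route I was leaning towards (assuming Luminous or newer, where OSDs get an hdd/ssd device class automatically) is to split the pools with device-class CRUSH rules, roughly:
ceph osd crush rule create-replicated replicated-ssd default host ssd
ceph osd crush rule create-replicated replicated-hdd default host hdd
ceph osd pool create cctv 128 128 replicated replicated-hdd
ceph osd pool set <existing-ssd-pool> crush_rule replicated-ssd
(The pool names and PG counts above are placeholders.)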
Ah ok, I had misunderstood the docs (tab-complete to get <UNIQUE ID>, not <MON-ID>). It works perfectly with:
root@smiles1:~# systemctl start ceph-mon@0.service
root@smiles1:~# systemctl status ceph-mon@0.service
● ceph-mon@0.service - Ceph cluster monitor daemon
Loaded: loaded...
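For good measure the unit can also be enabled so the monitor comes back after a reboot (same instance name assumed):
systemctl enable ceph-mon@0.service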
I followed the tutorial to move Ceph from Hammer to Jewel here:
https://pve.proxmox.com/wiki/Ceph_Hammer_to_Jewel
All the steps went OK aside from starting the monitor daemon:
root@smiles2:~# systemctl start ceph-mon@ceph-mon.1.1500178214.095217502.service
root@smiles2:~# systemctl status...
Very interested in the new Storage Replication functionality in PVE 5.0, and I had a few questions regarding the documentation page on the wiki:
https://pve.proxmox.com/wiki/Storage_Replication
It seems like only 2 nodes would be required for this to work well, but in the documentation it...
I installed a DVR virtual machine to record some CCTV cameras on my PVE 4.4-13/7ea56165 cluster, using Ceph (Hammer) as the storage backend. I created a 1.5 TB disk on Ceph, allocated to the VM.
Over time, the disk began to fill with recordings (over 1 TB), and I found myself lower on storage...
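For anyone checking the same thing, per-pool usage (i.e. what the recordings actually consume after replication) can be seen with:
ceph df detail
rados df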
Sorry, I assumed that Windows would see it after the first boot if I increased the disk size while the guest was shut down. For whatever reason it actually took two restarts before Windows picked it up.
Thanks for your help, extending the partition worked perfectly after the second reboot. :)
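In case it helps anyone searching later: the grow itself can be done from the GUI or with qm resize (the VM ID and disk name below are just examples):
qm resize 100 virtio0 +500G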