Hello,
I have created a VM template using Packer. The template works properly: if I use the PVE GUI, I can use the prepared cloud-init config and deploy it perfectly. However, using Terraform with the Telmate/proxmox provider version 3.0.1-rc1, I can create a full clone without issue, but it always...
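For context, the clone resource in my Terraform config looks roughly like this (node name, template name, and IP values are placeholders, not my real ones):

```hcl
terraform {
  required_providers {
    proxmox = {
      source  = "Telmate/proxmox"
      version = "3.0.1-rc1"
    }
  }
}

resource "proxmox_vm_qemu" "clone_test" {
  name        = "clone-test"
  target_node = "pve1"            # placeholder node name
  clone       = "ubuntu-template" # Packer-built template
  full_clone  = true

  # cloud-init settings
  ciuser    = "ubuntu"
  ipconfig0 = "ip=192.168.1.50/24,gw=192.168.1.1"
}
```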
So I have triple-checked: all of the bridges had a /28 netmask, no differences. I tried /26 and /25 netmasks; it did not work. Once I switched to a /24 netmask, it started to work. No clue how or why. I wanted to shrink the network so as not to unnecessarily use 256 IPs in...
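For reference, the usable host counts for the prefixes I tried can be checked with Python's ipaddress module (a quick sketch, nothing Proxmox-specific):

```python
import ipaddress

# Usable hosts = total addresses minus the network and broadcast addresses
for prefix in ("/28", "/26", "/25", "/24"):
    net = ipaddress.ip_network("192.168.1.0" + prefix)
    usable = net.num_addresses - 2
    print(f"{prefix}: {usable} usable hosts")
# /28: 14, /26: 62, /25: 126, /24: 254
```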
Hello,
I have 3 identical servers running the identical version of Proxmox, all of them fully upgraded.
Each has 2 network cards installed: one is an on-board quad-port 10 Gb NIC and the other is a PCIe quad-port SFP+ 10 Gb card. I have created two bridges, each bridge having all 4 ports assigned. Both bridges have...
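For context, the bridge section of my /etc/network/interfaces looks roughly like this (the port names are examples; my actual interface names may differ):

```text
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1 eno2 eno3 eno4   # on-board quad-port NIC
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp5s0f0 enp5s0f1 enp5s0f2 enp5s0f3   # PCIe SFP+ card
    bridge-stp off
    bridge-fd 0
```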
Hello,
I have a 3-node cluster, created OK. I want to create a Ceph storage cluster for important VMs.
Furthermore, I have added an iSCSI drive to each host and created LVM on top of it. But it seems that Ceph does not like that drive being added as an OSD.
Is this a limitation of the Ceph config in PVE, or of Ceph...
I think those lines are not related to the VM, since it was running at that time. Nevertheless, here is the output of the config:
root@pve-hq:~# qm config 201
agent: 1
balloon: 0
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 8
cpu: host,flags=+aes
efidisk0...
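Side note: key: value output like the above can be turned into a dict for diffing against another VM's config (a small helper sketch I use, not a PVE tool):

```python
def parse_qm_config(text: str) -> dict:
    """Parse `qm config` output (key: value lines) into a dict."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or ":" not in line:
            continue
        # Split only on the first colon; values may contain '=' and ';'
        key, _, value = line.partition(":")
        config[key.strip()] = value.strip()
    return config

sample = """\
agent: 1
balloon: 0
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 8
cpu: host,flags=+aes
"""
cfg = parse_qm_config(sample)
print(cfg["cores"])  # -> 8 (as a string)
print(cfg["boot"])   # -> order=scsi0;ide2;net0
```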
Well, nothing extreme there, though I found this. I copied a few lines before and a few after, in case it makes sense to anyone:
Jan 25 19:31:16 pve-hq pvedaemon[3964404]: cannot delete 'vcpus' - not set in current configuration!
Jan 25 19:31:16 pve-hq pvedaemon[3964404]: cannot delete 'cpulimit' -...
Hello,
I have PVE 7.1-10; on it I have several VMs, but one in particular keeps shutting down. It is an Ubuntu 20.04.3 server, used for Docker. It was cloned from the same template as the other VMs there. The only difference for this machine is that I back it up every 2 hours. Now the VM is...
Hello friends!
I want to open up a debate (well, I was hoping to) about iSCSI disk space assignment. The thing is: let's say I have a server with several storage spaces, and that server is connected to PVE. For the purpose of this discussion, let us not go into how they are...
No, it means that on this physical Cisco switch there is more than one hypervisor connected. They all run VMs and all work properly, with no disconnects or similar problems. On this very same device I added PVE 6.3 and fired it up; it connects OK, but as soon as the first VM starts, *ANY* network...
I checked; no duplicates were found. There are 4 network adapters, only 1 participates in the virtual switch, the switch has the MAC of that adapter, the VM has its own, and the other VMs on the network have MAC addresses starting with completely different prefixes.
PVE logs no information in syslog at all when the disconnect happens; all...
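The duplicate check I described above can be scripted; roughly like this (the MAC values here are made up for illustration):

```python
from collections import Counter

macs = [
    "BC:24:11:5A:01:02",  # PVE VM (example address)
    "00:50:56:AA:BB:CC",  # other hypervisor's VM (example)
    "3C:EC:EF:11:22:33",  # physical adapter (example)
]

# Any full-address duplicates?
counts = Counter(m.upper() for m in macs)
dupes = [m for m, n in counts.items() if n > 1]
print("duplicates:", dupes)  # -> duplicates: []

# First three octets (the OUI) show which vendor each address belongs to
ouis = {m: m.upper()[:8] for m in macs}
print(ouis)
```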
Hello friends!
I have PVE 6.4 on an HP DL560 G8. I have the integrated NIC in use (quad gigabit port). Now my problem is this: when I installed PVE, everything was fine. I could get an IP via DHCP, which I then reconfigured to be static. I could update the system, etc., so no problems accessing PVE or working...