To sum it up: I installed maxView Storage Manager directly on my Proxmox VE host. To do so, I downloaded the latest Linux version from the Microsemi website: https://storage.microsemi.com/en-us/support/raid/sas_raid/asr-8805/ and unpacked the archive. Afterwards one can simply install the...
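A rough sketch of those steps on the PVE host (the archive and package file names below are placeholders; use whatever the download page above currently ships):

cd /tmp
tar xzf maxview_linux_x64.tgz          # placeholder archive name from the Microsemi download
dpkg -i StorMan_*_amd64.deb            # placeholder .deb name; PVE is Debian-based, so the Debian package applies
apt-get -f install                     # pull in any missing dependencies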
Yes, that's what I experienced while doing my research. Most of the questions/howtos are based on a setup with a public IP assigned to the PVE server.
Just out of interest, may I ask for your opinion? Thanks a lot :)
Hi LnxBil, my firewall is a physical host in front of my PVE server.
Do you mean a one-to-one NAT with host NAT? At least that's what I also thought of: I have more than one public IP address, so I could alias IPs on my firewall and assign one of those public IPs via one-to-one NAT to the...
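Just to make sure we mean the same thing, a sketch of such a mapping in pf terms (in OPNsense this would of course be set up in the GUI under Firewall > NAT > One-to-One rather than written by hand; A.B.C.E and 192.168.42.50 are made-up placeholders for the extra public IP and the container):

# bidirectional one-to-one NAT: public A.B.C.E <-> container 192.168.42.50
binat on ix0 from 192.168.42.50 to any -> A.B.C.E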
Hi beloved Proxmox Community,
I want to make an LXC container directly accessible from the Internet via a public IP address. At the moment my whole setup is as follows:
Evil Internet <---> (ix0: A.B.C.D/29) OPNsense firewall (ix1: 192.168.42.1/24) <---> (ge0: 192.168.42.20/24) Proxmox VE server
As you can see...
I see. In my case, unfortunately, the users need their own VMs and have to manage them themselves. As mentioned, the setup also works perfectly under Ubuntu. I am now testing Windows 10, since the users connect to their VMs via RDP and I wanted to compare how the...
Unfortunately, I celebrated too soon, and I am sorry to open this can of worms again. My solution only holds until the next reboot; then I end up back in the boot loop / Windows 10 repair mode I already described. There must be users who run this setup successfully in their...
First of all, @LnxBil, thank you very much for your help.
I was able to solve the problem in the meantime. To ensure reproducibility, I set up a new Windows 10 VM (1809) and documented all the steps.
When I install Windows 10 from scratch, then install VirtualBox and...
No, no blue screen. In that case the machine simply crashed completely and restarted immediately, as if the power had been cut for a moment and the computer then started right back up.
UPDATE: the problem is solved and described in my most recent comment. Unfortunately, the problem still persists after a reboot :(.
Good morning, dear Proxmox community,
I am currently testing a Windows 10 VM with nested virtualization. As the nested hypervisor, I would like to use VirtualBox...
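(For context, the usual host-side prerequisites for this, as a sketch only; it assumes an Intel CPU, and VM ID 100 is a placeholder:)

# enable nested virtualization for the KVM module (kvm-amd with nested=1 on AMD hosts)
echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
modprobe -r kvm_intel && modprobe kvm_intel   # with all VMs stopped
cat /sys/module/kvm_intel/parameters/nested   # should now report Y

# expose the host CPU (and its virtualization extensions) to the Windows 10 guest
qm set 100 -cpu host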
OK, could you provide me with an example of what such a device file should look like for a RAID controller you are using? I want to try out the LXC solution before installing any additional software on my PVE server, and an example would help me a lot.
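In case it helps others later: from what I have read, such an entry in /etc/pve/lxc/<CTID>.conf would look roughly like the sketch below. /dev/sg0 and the major number 21 are assumptions here; the actual device node depends on the controller (check with ls -l /dev/sg*):

# allow the container to access the SCSI generic device and bind-mount its node
lxc.cgroup.devices.allow: c 21:* rwm
lxc.mount.entry: /dev/sg0 dev/sg0 none bind,optional,create=file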
Indeed, huge overhead. I think the main part of...
OK, the alternative of a device file sounds good, but I am not aware of how to use it. According to some research, this would be a special file that acts as an interface on the LXC side between the RAID controller and the application that wants to make use of it. I guess this does not exist out of...
@LnxBil, thanks for your answer. I would prefer the LXC solution, but does this even work? The corresponding wiki article mentions that the PVE host no longer has access to the device once it is passed through to a VM/container.
Hi,
I am trying to arrange some management/monitoring for my Adaptec RAID 8805 controller. To that end, I installed the latest maxView Storage Manager on an Ubuntu (18.04.2) VM. Unfortunately, the software does not show any information about my controller.
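I suspect this is simply because the guest cannot see the physical controller at all without any passthrough; a quick check inside the VM versus on the host (sketch):

# inside the Ubuntu VM: look for the controller (nothing is listed without passthrough)
lspci | grep -i -e adaptec -e raid
# on the PVE host the same command does list the ASR-8805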
It seems that the topic has received...
Hi,
yesterday I wanted to try out the latest Ubuntu LTS version, 18.04.2, and installed it as a normal VM on my PVE server (v.5.3.9). Unfortunately, it won't start after the installation has completed. After removing the ISO from the drive, the startup is stuck in the fsck dialog as displayed in the...
Are there any problems at the moment? I am located near Frankfurt (ISP Telekom) and my download from the Proxmox repository (non-subscription) is crawling along at 2 KB/s.
Ok, thank you for your answer. It worked.
So the correct way would've been to create a temporary LV first, move the VMs there, rename the old LV, move the VMs to the renamed LV and delete the temporary LV?
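For reference, the rename itself can also simply be reverted, or the new names can be put into the configs instead; a rough sketch with placeholder names:

# revert to the names the VM configs still expect (old_vg/new_vg etc. are placeholders)
vgrename new_vg old_vg
lvrename old_vg new_lv old_lv

# or keep the new names and adjust the references instead:
#   - vgname in /etc/pve/storage.cfg
#   - the disk entries in /etc/pve/qemu-server/<vmid>.conf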
Hi,
yesterday I renamed a VG and a corresponding LV. While doing so, I did not think about the linked VMs whose disks reside on that LV. Now they won't boot and fail with the following message:
kvm: -drive...