Hi,
Can anybody explain how to change the disk type (bus/controller) for a Windows machine? I've tried one Windows 10 VM and one Windows Server 2016, and both failed to start correctly after the change.
Any help appreciated.
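For reference, the workaround that's usually suggested is to let Windows install the driver for the new bus before switching the boot disk over. A rough sketch, assuming VMID 100, storage local-lvm and a move from IDE to SCSI (all of these are just examples):

# attach a small temporary disk on the target bus so Windows loads the driver
qm set 100 --scsi1 local-lvm:1
# boot Windows once, confirm the disk shows up in Device Manager, shut down
# then detach the boot disk and re-attach it on the new bus
qm set 100 --delete ide0
qm set 100 --scsi0 local-lvm:vm-100-disk-0
qm set 100 --boot order=scsi0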
I need to run pve-ha-simulator with Proxmox 7.1 and installed it with all dependencies. But when I try to run it in MobaXterm, I get the error "cannot open display".
I think it's caused by no GUI being installed. Which packages should I install to run it?
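In case it helps: as far as I know the simulator is a Gtk3 application, and for SSH X11 forwarding from MobaXterm the host usually only needs xauth rather than a full GUI. A sketch, assuming a stock PVE 7.1 host (the hostname and working directory are examples):

# on the PVE host
apt-get install xauth
grep X11Forwarding /etc/ssh/sshd_config   # should be "X11Forwarding yes"
# reconnect with forwarding enabled, then run the simulator
ssh -X root@pve-host
pve-ha-simulator /tmp/ha-test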
Upgrading the kernel didn't help. For a moment Windows Server was workable (not perfect, but not stuck). After a few minutes there is never-ending lag: Windows not responding, applications hanging. CPU usage is below 20% and RAM usage about 10%, yet it looks like CPU overload.
I'm trying to run Proxmox 7.1 on an HP server with the HP P420 RAID controller in HBA mode. After installing, I boot the server with Debian 10 Live, install the ZFS utilities and try to import the pool with the command "zpool import -R /mnt" (tutorials give "zpool -a -R /mnt", but that causes a syntax error). After running...
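Side note: the binary is zpool and the import subcommand is mandatory, which would explain the syntax error from the shorter tutorial command. The usual sequence looks like this (pool name rpool is an assumption):

zpool import                      # list pools visible to the live system
zpool import -a -R /mnt           # import all of them under the /mnt altroot
zpool import -f -R /mnt rpool     # or force-import a single pool by name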
I'm using Proxmox 6.4 and I've got big problems running Windows Server 2016 as a VM for nested virtualization. When the CPU type is set to SandyBridge, the VM works perfectly, but when I change the CPU type to host, it's terribly slow. Does anybody know how to fix it?
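A couple of checks that are commonly suggested for nested virtualization with CPU type host; a sketch, with VMID 100 as a placeholder:

# nested virtualization must be enabled in the KVM module on the PVE host
cat /sys/module/kvm_intel/parameters/nested    # should print Y or 1
echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
# reboot the host (or reload kvm_intel), then set the CPU type
qm set 100 --cpu host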
Where will I find that log? The GUI doesn't show the migration; it freezes when stopping the clone, then the GUI and SSH time out (pings still answer), and after a minute it's live again, showing an unexpected error, with the VM moved to the next node.
Edit: that timeout is a node restart, so the VM is moved...
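If the node really restarted, the previous boot's journal and the HA manager units are the usual places to look; a sketch:

journalctl -b -1 -e                       # end of the previous boot's log
journalctl -u pve-ha-lrm -u pve-ha-crm    # HA local/cluster resource managers
ls /var/log/pve/tasks/                    # per-task logs, including migrations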
I've got a 3-node cluster. All nodes use ZFS; the VM is Windows 2012R2 with its disk on local ZFS.
The VM is configured in HA and has replication to the 2 other nodes. I'm starting with the VM on node 1.
I tried to clone that machine without shutting it down; after a few minutes the VM migrates to node 2 and...
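To see what the replication and HA stack were doing at that moment, the standard CLIs can be queried; a sketch:

pvesr status          # replication jobs, last sync time, failures
ha-manager status     # HA state of each resource and node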
I've got 3 PVE nodes and an NFS share for VM hard disks outside the cluster. Is it possible to run working HA without ZFS (I have a physical RAID and I'm going to change to ZFS in the future)?
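As far as I understand, HA only requires shared storage, not ZFS, so an NFS share should qualify. Adding a VM to HA from the CLI looks like this (VMID 100 is an example):

ha-manager add vm:100 --state started    # manage the VM and keep it running
ha-manager status                        # verify the resource is tracked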
I've got a running cluster with 3 nodes and I've changed their IP addresses, and they have stopped running as a cluster.
Is it possible to change IPs on a running cluster, or do I have to revert the addresses, disconnect all nodes, then change the addresses and connect them again?
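For reference, these are the places that have to agree after an IP change; a sketch of the usual per-node procedure (interface names are examples):

# 1. the node's own address
nano /etc/network/interfaces    # update the vmbr0 address/gateway
nano /etc/hosts                 # update this node's entry
# 2. corosync: change the ringX_addr entries and bump config_version by 1
nano /etc/pve/corosync.conf     # if /etc/pve is read-only (no quorum),
                                # edit /etc/corosync/corosync.conf on each node
# 3. apply
systemctl restart networking corosync pve-cluster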
You're right. I tested on a fresh install without a scrub. When I scrub the ZFS pool, that error disappears.
But after the scrub I get this:
status: Some supported features are not enabled on the pool. The pool can
still be used, but some features are unavailable.
action: Enable all features...
I know that I can use zpool status, but it's annoying when something in the GUI doesn't work properly :)
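That status message just means the pool is still on the 0.8 feature set after the package upgrade to 2.0. Enabling the features is one command, but on a pool you boot from it's worth double-checking first, since the bootloader may not understand the newer features; a sketch with pool name rpool assumed:

zpool status -x      # confirm the pool is otherwise healthy
zpool upgrade        # list pools with features that can be enabled
zpool upgrade rpool  # enable all supported features on the pool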
I executed apt-get install zfsutils-linux=0.8.5-pve1 zfs-zed=0.8.5-pve1 zfs-initramfs=0.8.5-pve1 and it works again.
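If you want apt to keep those pinned versions across future upgrades, holding the packages might help; a sketch:

apt-mark hold zfsutils-linux zfs-zed zfs-initramfs
apt-mark showhold    # verify the holds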
I've upgraded PVE to the newest version.
These 3 packages are the reason I can't view the ZFS pool details:
- zfs-initramfs
- zfs-zed
- zfsutils-linux
Their version changed from 0.8.5-pve1 to 2.0.3-pve2.
How do I roll back that update?
This is the error in the GUI:
Thanks, I found it earlier, but the author only shows the final network config, not how to set it up.
I spent about 2 hours on it, but I've got it working.
One note about creating a mirror with ovs-vsctl: the command from that site doesn't work as-is and first needs an "add-port vmbrX tapXiY" (see the sketch below)...
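For anyone hitting the same thing, this is roughly the working shape of an OVS SPAN mirror on a Proxmox OVS bridge, following the standard ovs-vsctl mirror syntax (bridge vmbr0 and tap100i0 are examples; the tap must already be a port on the bridge, hence the add-port note above):

ovs-vsctl -- --id=@p get port tap100i0 \
          -- --id=@m create mirror name=span0 select-all=true output-port=@p \
          -- set bridge vmbr0 mirrors=@m
ovs-vsctl list mirror    # verify the mirror exists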
I understand that it doesn't use network bandwidth when files haven't changed, but it still takes a lot of time. Will it be changed to fast incremental backups in the future?