How do you plan to configure these VMs under Proxmox? Is the R630 using hardware RAID with LVM storage?
More generally, you don't want your PVE host to be serving up SMB shares, or to have any "always-on" access to your production virtual disk storage. There was a thread on here about a month ago...
Most of my virtual disks look like this:
scsihw: virtio-scsi-single
scsi0: CephRBD_NVMe:vm-9901-disk-0,aio=native,cache=writeback,discard=on,iothread=1,size=301G
scsi1: CephRBD_NVMe:vm-9901-disk-1,aio=native,cache=writeback,discard=on,iothread=1,size=401G
and I do not have a tremendous disparity...
Add a SCSI slave disk first so Windows can install the SCSI driver properly.
Then shut down and change the system disk to SCSI. Once that is working, you can delete the slave disk.
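A minimal sketch of those steps from the CLI, assuming VMID 100, storage "local-lvm", and a system disk currently on ide0 (all hypothetical):

# attach a small temporary SCSI disk so Windows detects the controller and installs the virtio-scsi driver
qm set 100 --scsi1 local-lvm:1

# boot Windows, confirm the new disk appears in Device Manager, then shut down

# detach the system disk (it becomes unused0) and re-attach it as scsi0
qm set 100 --delete ide0
qm set 100 --scsi0 local-lvm:vm-100-disk-0
qm set 100 --boot order=scsi0

# once the VM boots cleanly from scsi0, remove the temporary disk
qm set 100 --delete scsi1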
So you can ping 192.168.68.119 from other systems. Can you SSH into your host, or are you running these ping/curl commands on the physical console?
Your default bridge should be vmbr0, not vmbre, right? And the web UI port is 8006; try curl -k https://192.168.68.119:8006 instead of your last example.
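If that curl still fails, it's worth confirming on the host itself that the web UI is actually up and listening on 8006 (run these on the PVE host):

# check that something is listening on the web UI port
ss -tlnp | grep 8006

# check the state of the web UI service
systemctl status pveproxy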
Also...
They just need to know they are coddled. They can pay to stay, or they can adapt to reality. Asking others to adapt to you is, here as elsewhere in life, unproductive.
We get a lot of questions, that's for sure. Some of them are on the level of how to set up ZFS or the most basic plug-in virtual networking. We help those people because they are actually trying to use PVE how it is meant to be used. Your question is closer to "how can we make PVE more like VMware"...
It is so funny how a paying VMware user can traipse into a free community asking for stuff rather than just learning how to do something themselves.
You are the ideal VMware customer, buddy; just pay them what they have earned for their 10-ply Charmin-soft user experience.
The concept as a whole is flawed. The conventional approach requires two Ceph clusters with rbd mirroring and rolling snapshots on your prod cluster.
If you can't have the same or similar infrastructure at your two sites, then just accept the fact that it's going to be heterogeneous and asymmetrical all the...
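For reference, the Ceph side of that conventional approach looks roughly like this sketch, assuming snapshot-based mirroring on a pool named "rbd" with an image vm-100-disk-0 (pool and image names are hypothetical, and the cluster peering bootstrap is omitted):

# on both clusters: enable per-image mirroring on the pool
rbd mirror pool enable rbd image

# per image: use snapshot-based mirroring
rbd mirror image enable rbd/vm-100-disk-0 snapshot

# schedule rolling mirror snapshots, e.g. hourly
rbd mirror snapshot schedule add --pool rbd 1h

# an rbd-mirror daemon running on the backup cluster pulls the changes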
I'm sure stuff like this is already filed. I have another feature request with tons of +1's going on 5 years old.
It took less than a day to implement in my feasibility testing. It's never getting done, and I'm not wasting any more time on obvious feature enrichment. Everyone knows the ZFS GUI is...
Prioritize all of it. The most important would be replacing an rpool member: cloning the partition table from a good drive and handling the proxmox-boot-tool portion before executing the zpool attach.
I think that's an operation that almost all noobs will struggle with because it's easy to...
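For anyone landing here, the manual procedure is roughly the following sketch, assuming the healthy disk is /dev/sda, the replacement is /dev/sdb, partition 2 is the ESP, and partition 3 is the ZFS partition (all device names hypothetical):

# clone the partition table from the healthy disk, then randomize the GUIDs
sgdisk /dev/sda -R /dev/sdb
sgdisk -G /dev/sdb

# set up the boot/ESP partition on the new disk
proxmox-boot-tool format /dev/sdb2
proxmox-boot-tool init /dev/sdb2

# only then attach the new ZFS partition to the pool
zpool attach rpool /dev/sda3 /dev/sdb3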
Any 2 or more computers with PowerShell can use Invoke-Command. So all you need is network access and a known credential on the target computer, and you'll get in.
You can have a dedicated Windows machine to store all your scripts that you use to manage your PVE hosts and templates, or you can...
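A minimal sketch of that, assuming PowerShell 7+ with SSH-based remoting and key auth already configured against the host (the host name is hypothetical):

# run a one-off command on a PVE host over SSH remoting
Invoke-Command -HostName pve1.example.lan -UserName root -ScriptBlock { qm list }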
I keep my templates as VMs so I can power them up and run updates and/or add stuff to the VM over time, and within a template VM you can keep separate snapshot series for different things too.
To deploy, I just use the clone function on the GUI. If the clones will be joined to an active...
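The same deploy step works from the CLI too; a sketch, with hypothetical VMIDs and name:

# full clone of template 9000 into a new VM 123
qm clone 9000 123 --name web01 --full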
All of my VMs and their data are important to me. I expect to be able to snapshot, back up, and restore my VMs in their entirety, including all data.
Virtualizing is the whole idea here. Hardware passthrough can be a good compromise for specific cases, but not the general case. The more hardware...
This is the official Proxmox forum. The official method to remove the popup is to buy a subscription.
It takes 5 minutes to Google and implement the simple, unofficial, and unsupported modification that removes the popup.
I find discussing it and rationalizing it and justifying it on the...