Hi Wolfgang,
Yes, 0.1.141 is installed.
In terms of power management, are you talking in the guest VM or on the host system itself?
In terms of the host, I'm pretty sure all power saving is off: it never shuts down, and it was originally one of my test platform servers when I was first...
Hi all,
I recently installed PVE 5.0 (prior to the official 5.1 release; I haven't updated this unit to 5.1 yet), and I seem to be having an odd issue I haven't been able to track down.
What's happening is that Windows 10 will basically seize up and no longer function properly. It *appears* to...
Hi all,
Last night, in the middle of the backup run, the power went out beyond our battery backup and the system shut down uncleanly. I've brought it back up, but I have several VMs that are locked with the message:
Error: VM is locked (backup)
I know it's not backing up, and...
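For reference, a stale backup lock like this can usually be cleared from the Proxmox host's CLI once you've confirmed no vzdump job is actually still running. A minimal sketch (the VMID 100 below is a placeholder; use the ID of your locked VM):

```shell
# first confirm no backup process is still running against the VM
ps aux | grep -v grep | grep vzdump

# then clear the stale lock on the affected VM (100 is a placeholder VMID)
qm unlock 100
```

Repeat the `qm unlock` for each locked VMID; only do this after verifying the backup really isn't in progress.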
On mine, I usually do something similar to:
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual
iface eth1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.6.10
        netmask 255.255.255.0
        gateway 192.168.6.1
        bridge_ports...
Sounds almost like your switch may be blocking multicast... I don't have the links in front of me, but there are some threads about adjusting the switch for multicast groups...
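If you want to verify multicast rather than guess, corosync's `omping` tool is the usual check. A quick sketch (the hostnames below are placeholders for your actual cluster nodes; run the same command on every node at roughly the same time):

```shell
# run simultaneously on each cluster node; healthy multicast shows ~0% loss
omping -c 600 -i 1 -q pve1 pve2 pve3
```

If unicast loss is near 0% but multicast loss is high, that points at the switch (IGMP snooping/querier settings) rather than the nodes.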
Honestly, for MSSQL backups I just use the MSSQL Server Backup capability built into it run on a regular schedule to do a backup of our database to a separate location (in our case, a secondary drive on the VM from which a set of scripts run to do an offsite copy of the backup as well as two...
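As a sketch of roughly what such a scheduled job runs inside the Windows VM (the database name, path, and server are made-up examples, not from my actual setup):

```shell
# inside the Windows guest: full backup to the secondary drive
# [MyDatabase] and D:\SQLBackups\ are placeholder names
sqlcmd -S localhost -Q "BACKUP DATABASE [MyDatabase] TO DISK = N'D:\SQLBackups\MyDatabase.bak' WITH INIT, COMPRESSION"
```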
Personally I use a combination of things - I use NUT to connect to my battery backup to issue shutdown commands to my proxmox servers if the battery backup runs low (to reduce the risk of a dirty shutdown). In terms of backup, I use the built in proxmox backup functionality to backup to an NFS...
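The built-in backup I mean is vzdump; a one-off equivalent of what the scheduled job does looks roughly like this (the storage name and VMID are placeholders for whatever your NFS storage and VMs are called):

```shell
# snapshot-mode backup of VM 100 to an NFS-backed storage named "nfs-backup"
vzdump 100 --storage nfs-backup --mode snapshot --compress lzo
```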
Re: Proxmox+Ceph: pveceph createmon: unable to find local address within network
You really shouldn't change that file (the PVE file).
For one thing, the layout of the interfaces file is pretty standard, which includes indenting the settings that refer to a...
Certain things will change with a migration like that - for example, the MAC addresses of the Ethernet adapters will change, so the associations will be different (unless you transfer the physical cards from one box to another). Is this part of a cluster? If it is, why not just spin up a new box...
OK, I wasn't sure if all nodes had the subscription or not, as the picture you posted only showed the two green nodes, which made me wonder if the others were not on subscription.
Now I could be very wrong about this, but AFAIK you can't mix community and non-community nodes in the same cluster, and all systems in a cluster have to be under the same level of support... I wonder if that is causing some of your problems? Possibly different versions of software available via...
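One quick way to compare is to check which PVE apt repository each node is actually configured for (run this on every node and compare the output; this just inspects the standard apt config locations):

```shell
# show any PVE repo entries (enterprise/subscription vs. no-subscription)
grep -r --include='*.list' 'pve' /etc/apt/sources.list /etc/apt/sources.list.d/
```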
I never said it wasn't fine; the sanity of it could be debated based on the strength of the host box, but I feel it's a waste on a weaker box because of the stress transcoding can put on it - especially if you have more than one client requesting video that requires transcoding. I mean...
This would be interesting - but would it be worth it?
If you can just direct-stream the media it could work OK, but if you needed to transcode, you'd have to dedicate a good amount of resources to make it a truly viable option - at that point you might as well just host it on its own box (in...
Re: Proxmox+Ceph node high IOWait
I'll be curious as to how your research in this turns out.
I have a much smaller Ceph cluster (3 nodes, 6 OSDs per node); my iowait/IO delay usually sits between 0.5% and 1.5%, topping out at about 10% during the period when the VMs are being backed up by Proxmox -...
A lot of this is my opinion, so take it or leave it :)
We use a combination of Ceph and NFS for our back-end storage - mostly Ceph as VM storage and NFS as backup and ISO storage. I personally like Ceph. If you aren't running DB-heavy VMs then Gluster is an option as well. NFS is an option...
Did you by chance try to install qemu from the Debian repositories?
try doing:
dpkg -l qemu-utils*
(that's a lower case L btw, not a capital I)
and see if it shows something installed.
How exactly did you do the upgrade procedure from 3.2 to 3.4 initially?
Are you using the subscription...