Maybe the startup order will give me what I need. I'm going to assume the start order works across the whole "datacenter" and not per individual node? And does the start order just start a VM and then continue to the next, or is the "start delay" the delay before the next VM in the sequence starts...
Thanks, that document covers setting delays and such, and I have used the current startup and shutdown options. But it would be nice if VM110 would not start until VM105 is running. Again, I can mess with delays, but VM110 will start after its delay whether or not VM105 has.
Just an idea, and maybe someone else has incorporated the requirement in some way.
I know that with some careful settings on the start/stop delay I might be able to achieve this, but it would be nice to have a VM not start until another has started: a required dependency that must be met before it's allowed to start. And...
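In the meantime, the closest built-in thing I know of is the startup option on each VM; a minimal sketch, using VMIDs 105 and 110 from above (the 60-second up delay is just an example value):

qm set 105 --startup order=1,up=60
qm set 110 --startup order=2

That makes VM105 start first and waits 60 seconds before the next VM in the order (VM110) is started, but it is still only a timed delay, not a real "VM105 is actually up" dependency.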
Works for me, thanks!
The only thing I noticed was the line that contained expected downtime 118. Seems to have moved OK, though.
Jan 08 08:07:45 starting migration tunnel
Jan 08 08:07:45 starting online/live migration on port 60000
Jan 08 08:07:45 migrate_set_speed: 8589934592
Jan 08 08:07:45...
I had 4 VMs on one node I could not move at all; I finally just shut them down and put the updates on, and so far so good. They migrated OK after being updated and rebooted.
One thing I did notice beforehand was a snapshot that did not finish. Hung or crashed. And it was not a VM that was...
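If it was a vzdump snapshot-mode backup that hung, my cleanup sketch would be to look for a leftover LVM snapshot and a stale lock (the LV name below follows vzdump's usual vzsnap-<hostname>-<n> naming but is a guess, and 110 is a placeholder VMID):

lvs | grep vzsnap
lvremove /dev/pve/vzsnap-proliant02-0
qm unlock 110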
I know this is kind of an old post/question, but I wanted to add that I just had an issue between PROX and WEBMIN... It had to do with an NFS mount that disappeared and issues adding another NFS share. It just did not seem to work... come to find out Webmin was holding the ball...
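If anyone needs to track down what is holding a mount like that, a quick sketch (the mount point path is just an example; PVE normally mounts NFS storage under /mnt/pve/<storage-id>):

fuser -vm /mnt/pve/nfs-share
lsof +D /mnt/pve/nfs-share

Either one should list the processes keeping the mount busy, which is how something like Webmin ends up being the one "holding the ball".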
I have 3 HP Gen8s with 64 GB and dual 12-core AMD procs, attached to Scale Computing 48 TB iSCSI. They don't break a sweat yet.
I'm pretty happy. I have in the past used a mix of Intel and AMD, and I always felt my AMD boxes ran better... but YMMV.
Proxmox user for almost 2 years.
3 servers online:
Node 1 shows physical volumes as /dev/dm-xxx
Node 2 shows physical volumes as /dev/dm-xxx
Node 3 shows physical volumes as /dev/disk/by-id/dm-name-xxxxxx
It was kind of a pain to add an LVM on nodes 1 and 2 because it was seeing the volumes as listed on node 3.
Logged into node 3 and added...
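For comparison, a quick way to see how LVM and multipath are naming things on each node (standard commands, nothing node-specific assumed):

pvs -o pv_name,vg_name
multipath -ll

pvs shows which device path each physical volume is seen under, and multipath -ll shows the dm-name aliases those /dev/dm-xxx devices map to.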
Just a thought: I wonder if vzdump might have an effect. The VM in question had a snapshot run right before the point where I think the time got goofed up.
I'm guessing, without having done any research yet, that at some point the VM has to stop and then be restarted?
I just ran into this on a w2003 guest. Rebooting now and will report back, but it's the first time I have noticed it. The clock was running minutes like seconds.
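One thing I plan to try, based on Microsoft's guidance for Windows Server 2003 timer problems on multi-core hosts (KB 895980), is forcing the PM timer via boot.ini; a sketch of the switch, not yet verified on this guest (the disk/partition part will differ per install):

multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /usepmtimer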
pve-manager: 2.2-31 (pve-manager/2.2/e94e95e9)
running kernel: 2.6.32-16-pve
proxmox-ve-2.6.32: 2.2-82
pve-kernel-2.6.32-16-pve: 2.6.32-82
lvm2...
svc: failed to register lockdv1 RPC service (errno 97).
Nope, no good; started getting multipath errors right after this error and still had to start iscsi manually. Still looking... :( and more googling...
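For the record, the manual workaround is just the stock Debian init script plus a multipath map reload:

/etc/init.d/open-iscsi start
multipath -r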
Well I hope this will be correct...
I remembered runlevels, installed sysv-rc-conf, and found that open-iscsi was not set to be started on said server at any runlevel. Set it to match the other servers in the cluster.
Have not tested yet, but it makes a lot of sense based on the behavior. Had to look into this...
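For anyone else, the same fix without sysv-rc-conf, assuming the stock Debian open-iscsi init script:

update-rc.d open-iscsi defaults

That recreates the missing /etc/rc?.d symlinks so the service starts at the normal runlevels; ls /etc/rc2.d/ | grep iscsi is a quick way to confirm afterwards.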
Ummm... does this mean anything? This was in daemon.log:
Nov 2 10:16:07 proliant02 iscsid: Missing or Invalid version from /sys/module/scsi_transport_iscsi/version. Make sure a up to date scsi_transport_iscsi module is loaded and a up todate version of iscsid is running. Exiting...
but then...
On the iscsi and multipath relationship: multipath will not run properly without iscsi running first. I can answer that much, but iscsi does not have a dependency on multipath. Guess that is the question I'm hung up on right now...
syslog is showing several lines with an exit code 255, but that appears I...
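To see what the boot ordering actually is, a quick check of the sysv links and the LSB header (stock Debian paths assumed):

ls /etc/rc2.d/ | grep -E 'iscsi|multipath'
grep -A2 'Required-Start' /etc/init.d/multipath-tools

If the S## number on multipath-tools sorts before open-iscsi, multipath is coming up before the iSCSI sessions exist, which would match this behavior.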
Re: AW: pvestatd Crash - Where to get more information?
Me too, spoke too soon. VMs would go missing at random on each node. Guessing I will have some idea over the next few days.
After last week's update to version 2.2 I noticed I was having a problem with iscsi not starting when the system boots (1 node out of 4). The cluster had been running since Aug and was never restarted until now, so I don't think it's related to anything updated this past week.
Anyway, several...
Re: AW: pvestatd Crash - Where to get more information?
Mine is currently running after the update this a.m.:
pve-manager: 2.2-26 (pve-manager/2.2/c1614c8c)
running kernel: 2.6.32-16-pve
proxmox-ve-2.6.32: 2.2-80
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-16-pve: 2.6.32-80...