VM power on after host reboot

"Nov 28 09:30:03 slave02 kernel: pci_root PNP0A08:00: ignoring host bridge window [mem 0x000c8000-0x000dffff] (conflicts with Video ROM [mem 0x000c0000-0x000cafff])" This could indicate a BIOS bug.
 
@Dietmar:

The DNS server is my pfSense box, which is working fine.

The DNS for the domain sits on a Windows box running on Proxmox.

Under the domain search I have the company domain; could that be an issue?

I will remove it and see.
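
A quick way to rule DNS in or out is to check name resolution from each node's shell; this is just a sketch, and nas1 below is a placeholder for the storage hostname:

# show the resolver and search domain the node is actually using
cat /etc/resolv.conf
# check that the storage host resolves to the expected address
getent hosts nas1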


@mir, these are all IBM x3655 boxes updated to the latest BIOSes and firmware, so wouldn't it be a big problem if it's a BIOS bug?


I hope it's down to the domain name, to be honest, but I will reboot all the nodes and post the syslog results again.

Cheers,

Raj
 
OK, I have set a backup to run at 22:00, and the first and second nodes failed with the following error:
"storage 'Backup' is not online"
The third node's backup started fine.

I cancelled that backup and set a new one to run at 22:15, and this time all 3 nodes started fine.

All the storage, switches, and boxes were rebooted about 3 hours ago.
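
To confirm whether a node considers a storage reachable, the storage status can be listed on that node; a minimal check, with 'Backup' and 'ISO' being the storage IDs from the logs:

# list the defined storages and whether each one is currently online
pvesm status

An entry that is not active there lines up with the "not online" warnings in the syslog.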

Cheers,
 
Syslog from the second node:

Nov 28 21:59:26 slave01 kernel: sd 3:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
Nov 28 21:59:43 slave01 kernel: sd 3:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
Nov 28 21:59:56 slave01 kernel: sd 3:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
Nov 28 22:00:01 slave01 /USR/SBIN/CRON[17768]: (root) CMD (vzdump --quiet 1 --mode snapshot --mailto administrator@businessparksolutions.com --all 1 --compress gzip --storage Backup)
Nov 28 22:00:01 slave01 pmxcfs[1450]: [status] notice: received log
Nov 28 22:00:04 slave01 /USR/SBIN/CRON[17767]: (CRON) error (grandchild #17768 failed with exit status 4)
Nov 28 22:00:04 slave01 postfix/pickup[13391]: 23AAA9A0D5: uid=0 from=<root>
Nov 28 22:00:04 slave01 postfix/cleanup[17775]: 23AAA9A0D5: message-id=<20121128220004.23AAA9A0D5@slave01.businessparksolutions.com>
Nov 28 22:00:04 slave01 postfix/qmgr[1463]: 23AAA9A0D5: from=<root@slave01.businessparksolutions.com>, size=723, nrcpt=1 (queue active)
Nov 28 22:00:04 slave01 pvemailforward[17778]: forward mail to <administrator@businessparksolutions.com>
Nov 28 22:00:04 slave01 postfix/pickup[13391]: 867869A0D6: uid=65534 from=<nobody>
Nov 28 22:00:04 slave01 postfix/cleanup[17775]: 867869A0D6: message-id=<20121128220004.23AAA9A0D5@slave01.businessparksolutions.com>
Nov 28 22:00:04 slave01 postfix/qmgr[1463]: 867869A0D6: from=<nobody@slave01.businessparksolutions.com>, size=930, nrcpt=1 (queue active)
Nov 28 22:00:04 slave01 postfix/local[17777]: 23AAA9A0D5: to=<root@slave01.businessparksolutions.com>, orig_to=<root>, relay=local, delay=0.43, delays=0.04/0.12/0/0.27, dsn=2.0.0, status=sent (delivered to command: /usr/bin/pvemailforward)
Nov 28 22:00:04 slave01 postfix/qmgr[1463]: 23AAA9A0D5: removed
Nov 28 22:00:04 slave01 postfix/smtp[17781]: 867869A0D6: to=<administrator@businessparksolutions.com>, relay=192.168.0.24[192.168.0.24]:25, delay=0.18, delays=0.01/0.03/0.01/0.13, dsn=2.6.0, status=sent (250 2.6.0 <20121128220004.23AAA9A0D5@slave01.businessparksolutions.com> Queued mail for delivery)
Nov 28 22:00:04 slave01 postfix/qmgr[1463]: 867869A0D6: removed
Nov 28 22:00:05 slave01 pvestatd[1809]: WARNING: storage 'ISO' is not online
Nov 28 22:00:07 slave01 pvestatd[1809]: WARNING: storage 'Backup' is not online
Nov 28 22:00:13 slave01 kernel: sd 3:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
Nov 28 22:00:15 slave01 pvestatd[1809]: WARNING: storage 'ISO' is not online
Nov 28 22:00:18 slave01 pvestatd[1809]: WARNING: storage 'Backup' is not online
Nov 28 22:00:25 slave01 pvestatd[1809]: WARNING: command 'df -P -B 1 /mnt/pve/ISO' failed: got timeout
Nov 28 22:00:27 slave01 pvestatd[1809]: WARNING: command 'df -P -B 1 /mnt/pve/Backup' failed: got timeout
Nov 28 22:00:30 slave01 pmxcfs[1450]: [status] notice: received log
Nov 28 22:00:40 slave01 kernel: sd 3:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
Nov 28 22:00:53 slave01 kernel: sd 3:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
Nov 28 22:00:58 slave01 pmxcfs[1450]: [status] notice: received log
Nov 28 22:01:13 slave01 kernel: sd 3:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
Nov 28 22:01:33 slave01 kernel: sd 3:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
Nov 28 22:01:52 slave01 pmxcfs[1450]: [status] notice: received log
Nov 28 22:01:53 slave01 kernel: sd 3:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
Nov 28 22:02:13 slave01 kernel: sd 3:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
Nov 28 22:02:24 slave01 kernel: sd 3:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
Nov 28 22:02:43 slave01 kernel: sd 3:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
Nov 28 22:03:03 slave01 kernel: sd 3:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
Nov 28 22:03:23 slave01 kernel: sd 3:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
Nov 28 22:03:43 slave01 kernel: sd 3:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
Nov 28 22:04:01 slave01 /usr/sbin/cron[1579]: (*system*vzdump) RELOAD (/etc/cron.d/vzdump)
Nov 28 22:04:03 slave01 kernel: sd 3:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
Nov 28 22:04:23 slave01 kernel: sd 3:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
Nov 28 22:04:43 slave01 kernel: sd 3:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
Nov 28 22:05:03 slave01 kernel: sd 3:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
Nov 28 22:05:23 slave01 kernel: sd 3:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
Nov 28 22:05:43 slave01 kernel: sd 3:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
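
The "not online" warnings come from pvestatd probing the NFS server before touching the mount, and the df timeouts show the mount itself was hanging. The same checks can be run by hand; a sketch, where <nas-ip> stands in for the NFS server's address:

# ask the NFS server which exports it offers
showmount -e <nas-ip>
# list the RPC services the server answers (the storage online check works along these lines)
rpcinfo -p <nas-ip>
# the exact probe pvestatd timed out on, run manually
df -P -B 1 /mnt/pve/Backup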
 
Would that also cause all the issues with the backups, given that the storage is 2.6 TB on external NFS storage?

The sdb device is an iSCSI target on which a LUN has been created via the Proxmox GUI.

How do I change that, please?
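
For reference, the storage definitions live in /etc/pve/storage.cfg and can be changed there or via Datacenter -> Storage in the GUI. An iSCSI entry looks roughly like the sketch below; the ID, portal, and target are placeholders, not values from this cluster:

iscsi: nas1
        portal <nas-ip>
        target iqn.2012-01.com.example:storage.target0
        content images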

Cheers
 
Hi mir,

My question is about the fact that all the storage linked to the cluster is on NAS devices: nas1, where the datastore is (images live), is iSCSI with 3.63 TB; nas2, where the backups and ISOs are, is NFS with 2.67 TB; and the third NAS is again iSCSI, with 5.45 TB.

It looks to me that the link describes the process with local drives.

So the real question is how to perform this with iSCSI targets that are connected to Proxmox.

Secondly, would that be the cause of the servers not starting automatically, and thirdly, would that be the reason why the backups sometimes fail and die with the following error:
"storage 'Backup' is not online"

Cheers,

Raj
 
OK, I have wiped and reinstalled the full cluster, as even after the upgrade was done I never had the option for SATA when creating a new VM.

I have also reduced the size of one of the iSCSI storages to 1 TB, as I had tried 2.7 TB (down from 5.6 TB) and was still getting the errors:

Nov 28 09:33:58 slave02 kernel: sd 3:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16)

Doing restores now and will update the post afterwards.

Cheers,

Raj
 
...
Nov 28 09:33:58 slave02 kernel: sd 3:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16)
...

This is not an error; just ignore it. The kernel prints it for any device larger than 2 TB, because the older READ CAPACITY(10) command cannot report sizes beyond a 32-bit sector count, so the kernel falls back to READ CAPACITY(16).
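
Since the message only reflects a device larger than 2 TB, the reported size can be confirmed directly; a minimal check:

# print the size of the device in bytes
blockdev --getsize64 /dev/sdb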
 
