Search results

  1. Tried to update server and broke it

    I recently upgraded my HP servers from 6.4-14 to 7.1-12 and it broke the NIC configs. Now every time I reboot I have to reconfigure them. Run "ip add" and see if your NICs have the IP addresses that you expect (a quick check sketch follows after these results). If they are gone, take a look at this thread for what I have to do. Also see if...
  2. Lost a Boot drive, Raid 1, Rebuild corrupted remaining good drive

    OK, I was thinking I had missed something in the GUI. Looking into an rsync solution now. Thanks again for your help.
  3. Lost a Boot drive, Raid 1, Rebuild corrupted remaining good drive

    Live and learn. Can I automate backing up the /etc/pve folder (a possible cron sketch follows after these results)? No backup of the root filesystem currently exists, just the CTs. Nothing critical was lost except my time, but next time I want to be better prepared.
  4. Lost a Boot drive, Raid 1, Rebuild corrupted remaining good drive

    So as you read, I am kinda screwed. What I would like to know is: if I reinstall on the new boot drives, will it be able to read the data on the existing ZFS drives, consisting of 12 drives and 2 SSDs? The boot drives are RAID 1 on an HP RAID controller; the storage drives are on a separate controller. I can...
  5. Move CT's to different cluster via USB drive?

    I gave up, because when I restored the CTs it crashed the server. Also, every time I reboot my servers, the built-in NICs go offline.
  6. Network failure after upgrade to Proxmox 7 from 6.4-14

    I had this undo everything I did today during a reboot of one node. Is it normal for the interfaces file to be overwritten?
  7. Move CT's to different cluster via USB drive?

    Is it possible to use a USB drive to move CTs from a Proxmox 6.4-14 cluster to a 7.1-12 cluster? I have tried it locally, but the USB drive does not show the backups in the web interface on the 7.1-12 cluster (a storage-registration sketch follows after these results), though you can see the files in the CLI on the server it is connected to...
  8. Network failure after upgrade to Proxmox 7 from 6.4-14

    This is how it usually happens: I make a post and then figure out what I did wrong. In the link above there is the following in a post (an illustrative interfaces layout follows after these results): "In the /etc/network/interfaces the following statement appears on the servers that were not working anymore after upgrading to 7.0 auto eno1 iface eno1 inet...
  9. Network failure after upgrade to Proxmox 7 from 6.4-14

    It did not change the names. I compared it to a non-upgraded node and it looks the same. My servers are identical. The networking service was disabled, but even after enabling it and rebooting, they are still down. When I manually bring them up, they still don't work. I went through the attached...
  10. Network failure after upgrade to Proxmox 7 from 6.4-14

    I have no network connections to the host. However, I cannot find anything wrong with the network configuration. The host cannot ping anything. Any ideas? Thanks.
  11. CT keeps saying it has migrated?

    Also noticed that this happens only on the NVMe drive storage. The CTs/VMs on the SAS/SSD-cache ZFS storage did not do this.
  12. CT keeps saying it has migrated?

    Yes, the container is running and clients never see it offline. No reverse proxy. I did rebuild the CT from scratch and it has not done it since. I tried a restore from before it started, and that did not change anything. This did start after I did the last upgrade and rebooted the node.
  13. CT keeps saying it has migrated?

    Any idea why my CTs keep saying they have been migrated every 30-45 seconds? failed waiting for client: timed out TASK ERROR: command '/usr/bin/termproxy 5901 --path /vms/501 --perm VM.Console -- /usr/bin/dtach -A /var/run/dtach/vzctlconsole501 -r winch -z lxc-console -n 501 -e -1' failed...
  14. New volume not showing in Proxmox Dashboard

    I too want to say thanks for posting the solution, though I was looking for the lvcreate steps.
  15. Container Density

    I know this question is like asking how much water a sock will hold. Thanks for the input; I will be ordering the most cores I can afford.
  16. Restore error

    For those who find this later: I was unable to do anything with the existing storages. However, I added an NFS backup storage and, after the CT was cloned, backed it up to that same NFS storage.
  17. Restore error

    I am trying to move containers and VMs to a new Proxmox stack. I was planning on attaching the NFS share with the backups to the new stack and pulling them in through that. Thanks for your insights.
  18. Restore error

    extracting archive '/mnt/pve/NFS/dump/vzdump-lxc-100-2020_11_12-00_00_02.tar.lzo'
    tar: ./var/spool/postfix/dev/random: Cannot mknod: Operation not permitted
    tar: ./var/spool/postfix/dev/urandom: Cannot mknod: Operation not permitted
    Total bytes read: 4275230720 (4.0GiB, 120MiB/s)
    tar: Exiting...
  19. Container Density

    I guess what I need to see is which actual CPUs are capable of handling the load. Which server models are currently working? Do we need to go to 4-CPU servers? We have a 10 gig Cisco switch for the testing; production would be on Cisco Nexus 9Ks. Bandwidth is really low on the network. I/O...
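
Regarding result 1: a minimal sketch of the post-reboot check described there, assuming the ifupdown2 tooling that Proxmox 7 ships; the commands and their ordering are not taken from the original thread.

    # Rough post-reboot check; run on the affected node.
    ip addr                        # do the NICs/bridges still carry the expected addresses?
    ip -br link                    # quick UP/DOWN overview of all links
    systemctl status networking    # was the networking service left disabled?
    ifreload -a                    # reapply /etc/network/interfaces (ifupdown2)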
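
Regarding result 3: one way to automate a copy of /etc/pve is a nightly cron job; this is only a sketch, and the file name, destination paths, remote host, and schedule are assumptions, not anything from the thread.

    # /etc/cron.d/pve-config-backup  (hypothetical file)
    # 03:00 - archive the pmxcfs mount locally; 03:30 - rsync a copy off-host.
    0 3 * * *  root  tar czf /root/pve-config-$(date +\%F).tar.gz -C /etc pve
    30 3 * * * root  rsync -a /etc/pve/ backuphost:/srv/pve-config-copy/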
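
Regarding result 7: backups normally only appear in the web interface once the drive is registered as a storage with content type "backup" and the archives sit in its dump/ subdirectory. A rough sketch, with the device, mount point, and storage name as placeholders:

    mount /dev/sdX1 /mnt/usb-backup
    pvesm add dir usb-backup --path /mnt/usb-backup --content backup
    ls /mnt/usb-backup/dump/       # vzdump archives must live here to show up in the GUI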
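
Regarding result 8: the quoted stanza is cut off, but for illustration, a typical working /etc/network/interfaces layout on a Proxmox host keeps the physical NIC manual and puts the address on the bridge. Names and addresses below are placeholders, not the poster's actual config.

    auto lo
    iface lo inet loopback

    auto eno1
    iface eno1 inet manual

    auto vmbr0
    iface vmbr0 inet static
            address 192.168.1.10/24
            gateway 192.168.1.1
            bridge-ports eno1
            bridge-stp off
            bridge-fd 0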