Proxmox VE 5.3 released!

I have a problem after the upgrade...
When I try to SSH into an LXC Debian 7 container I get:
Code:
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
You have new mail.
stdin: is not a tty
I don't have an rc.sysinit file to "disable" udev at startup, so I think udev doesn't start by default, since it isn't supported in containers.
How can I fix it? I applied all updates to Debian and restarted both the node and the LXC, but it still doesn't work.

I can attach to the LXC via the Proxmox GUI with no problem, but SSH shows this message.
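For what it's worth, a common cause of the "stdin: is not a tty" message on Debian containers (an assumption here, not confirmed for this setup) is an unconditional mesg call in /root/.profile, which fails when SSH runs without a terminal attached. A frequently suggested workaround is to guard it with a tty check:
Code:
# in /root/.profile inside the container: only call mesg when stdin is a terminal
test -t 0 && mesg n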
 
I must say this update is very nice so far. Adding the storage configurations via the web UI is a nice touch. Quick question though - has anyone been able to get nested virtualization to work on this version? I'm having a bit of trouble getting it to pass through VT-X capabilities for a proxmox guest.
 
Hmm, I use it daily for my nested PVE test/development instances. Just use "host" as CPU type and you should be good.
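For anyone following along, the usual host-side steps are roughly these (a sketch for an Intel host; AMD uses the kvm-amd module and its own nested parameter, and 100 is a placeholder VMID):
Code:
# on the PVE host: enable nested virtualization in the kvm-intel module
echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
# reload the module (no VMs may be running) and verify
modprobe -r kvm_intel && modprobe kvm_intel
cat /sys/module/kvm_intel/parameters/nested
# give the guest the host's CPU flags
qm set 100 -cpu host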
 
Thanks for confirming - it must just be something to do with my setup. I already have the CPU type set to host, so it must be something else; I won't clutter this thread with details.
 
Hi!
I see the new Proxmox release comes with no swap volume. In case I need swap, I see two methods to get it:
  1. make a zvol, e.g. rpool/swap, like in older PVE installations
  2. during installation leave empty space, then set up swap on a mirror of those partitions
Which would be better? Thanks
 
I personally don't have swap enabled at the moment. That said, if I had to set up swap, I would go with a swapfile. Since Proxmox just runs on top of Debian, you should be able to do this fairly easily. I might try it out myself sometime.
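The standard Debian swapfile steps would be roughly the following (a sketch; the path and size are examples - and, as it turns out later in this thread, this does not work on a ZFS root):
Code:
# create and activate a 4 GiB swapfile (size is just an example)
dd if=/dev/zero of=/swapfile bs=1M count=4096
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# make it persistent across reboots
echo '/swapfile none swap sw 0 0' >> /etc/fstab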
 
The good news is the ability to manage ZFS storage in the web interface, but...
Why can I create ZFS on raw HDDs, but not in the free space/partitions of disks already in use?
For example, I use a server from hetzner.de, and they don't offer installing Proxmox onto a ZFS mirrored RAID out of the box, so I set up a pair of SSDs with a BTRFS partition, and now I need to create a ZFS pool manually on the free SSD space :( Maybe support for creating dedicated partitions would help?
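In the meantime, creating the pool by hand on the leftover partitions should work; a minimal sketch, assuming a two-SSD mirror where /dev/sda4 and /dev/sdb4 are the free partitions (all names are placeholders):
Code:
# create a mirrored pool on the spare partitions (device names are placeholders)
zpool create -o ashift=12 tank mirror /dev/sda4 /dev/sdb4
# register it with Proxmox as ZFS storage (storage name is an example)
pvesm add zfspool tank-zfs -pool tank -content images,rootdir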
 
Regarding the swapfile suggestion above: it occurred to me that ZFS does not support swap in a file. This is what I got when I ran swapon on the file:
Code:
swapon: /swapfile.swap: swapon failed: Invalid argument
 

Swap on a zvol is currently problematic. Through the installer you can now set "hdsize" for ZFS; with this you can tell the installer not to use the full disk for ZFS but to leave some space unpartitioned at the end. After booting you can use that space to create a swap partition, which is then completely independent of ZFS, albeit on the same block devices.
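For example, with hdsize set a few GiB below the disk size, the leftover space can be turned into a swap partition after the first boot (a sketch; the device and partition number are placeholders):
Code:
# create a partition of type "Linux swap" (8200) in the free space at the end
sgdisk -n 0:0:0 -t 0:8200 /dev/sda
# format and enable it (sda4 is a placeholder for the new partition)
mkswap /dev/sda4
swapon /dev/sda4
echo '/dev/sda4 none swap sw 0 0' >> /etc/fstab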
 
Regarding swap on a zvol being problematic - the issue https://github.com/zfsonlinux/zfs/issues/7734 sums it up pretty well.
Changes between 0.6.x and 0.7.9 can, in certain cases, lead to a deadlock, and according to upstream, fixing this is far from trivial.

Therefore we decided not to create swap on a zvol during the installation.
 
Tonight I updated the server from 5.2 to 5.3 and ended up with all my LXC containers dead. They boot and they are online, but:

Server refused to allocate pty

Code:
[root@2 home]# screen
Cannot access '/dev/pts/2': No such file or directory
[root@2 home]#

Code:
[root@2 pts]# echo "" > 2
bash: 2: Read-only file system

Now I'm in a big mess; all the customers are on my back.
 
Hi, some thoughts which come to mind ...

I assume you have rebooted the server; it's worth a try.
Otherwise, could you try booting into the previous kernel and see if that helps?

BTW, "read-only filesystem" smells like an "fsck needed" on the underlying file system.
Can you do a forced fsck on the filesystem (i.e. force an fsck by creating the file "forcefsck" at the root of the filesystem, and then rebooting)?

HTH,
Shantanu
 
How can I boot into the previous kernel to try that? I don't have iDRAC access; I see this on the server shell:
 

Attachments

  • ssh.png
@Bidi
Assuming this is the screenshot after you do an SSH into the machine d2 ...

are you able to "touch" a file named 'forcefsck' like so:
Code:
touch /forcefsck

ref: https://linuxconfig.org/how-to-force-fsck-to-check-filesystem-after-system-reboot-on-linux

If the 'touch' command doesn't work, can you try the tune2fs trick to force an fsck upon reboot?
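The tune2fs trick would be along these lines (a sketch; /dev/sda1 is a placeholder for the root filesystem's device):
Code:
# force a check on every mount, so the next reboot triggers an fsck
tune2fs -c 1 /dev/sda1
reboot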

If the filesystem is already being shown as READONLY, then only an offline fsck will help.

* you may need iDRAC access to attach a SystemRescueCD ISO (or equivalent) from your workstation (this varies with the hardware/model of the server).
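Booted from the rescue ISO, the offline check itself would be just (a sketch; the device is a placeholder and the filesystem must not be mounted):
Code:
# from the rescue environment, with the affected filesystem unmounted
fsck -f -y /dev/sda1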

HTH,
Shantanu
 
Hello, I installed and configured my Proxmox cluster with 3 nodes for testing, and I configured Ceph as in the video (youtu.be/0YyMTg9qc74?t=159). It works, but I have a problem: I want to store VM and CT disks on CephFS, but CephFS is not in the storage list when creating a new VM or CT. The create screen and the CephFS screen are attached. Can you help me?

Attachments

  • 1.png
  • 2.png
 
@Omer SAVAS, please open new threads for issues that are unrelated to the OP's.

And for your issue: the content types 'disk image' and 'container' are not allowed on CephFS. This is because RBD has better native support for disk images on Ceph, while CephFS performs poorly on small writes. That makes CephFS better suited to storing backups, container templates, and ISOs.
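In /etc/pve/storage.cfg that split looks something like this (a sketch; the storage and pool names are examples): RBD carries the VM/CT disks, while CephFS holds backups, ISOs and container templates.
Code:
rbd: ceph-vm
        pool vm-pool
        content images,rootdir

cephfs: cephfs-store
        path /mnt/pve/cephfs
        content backup,iso,vztmpl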
 