disk capacity doubled while moving a disk from one pool to another

A.M.

I have two storage pools on one Proxmox host. When I copy a disk (278GB, vm-104-disk-1) from one pool to the other using the web interface, the raw disk on the destination pool takes up twice the capacity.

Here you can see the result of the copy:
Screenshot from 2018-11-27 11-02-40.png

Most confusing of all: while the copy is running, the Proxmox web interface shows its capacity as equal to that of the original image.
Screenshot from 2018-11-27 11-02-33.png

And here is the graph of the copy:
Screenshot from 2018-11-27 11-02-58.png

When I move the same disk, the capacity stays the same.
Why is this happening?
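To put numbers on it, this is roughly how I compare the logical size of the zvol with the space it actually allocates (a sketch; the source name is from my setup, the name of the copy on the destination pool is only illustrative, since it depends on what the target pool creates):
Code:
# logical size vs. allocated space of the source zvol
zfs get volsize,volblocksize,used,referenced dbpool2/DBDISK2/vm-104-disk-1
# the copy on the destination pool; its exact name is whatever the target creates
zfs get volsize,volblocksize,used,referenced dbpool/DBDISK/vm-104-disk-1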
 
How do you copy and how do you move?
Can you post the output of zfs list and zpool status?
 
Okay. For example, I want to copy one disk, dbpool2/DBDISK2/vm-101-disk-1 (33.0G), through the web interface with the delete-source option left unchecked:
Screenshot from 2018-11-27 13-14-16.png
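(The same operation can presumably also be done from the CLI with qm move_disk; a sketch, where the disk slot virtio0 and the target storage ID DBDISK are assumptions, only the VM ID 101 is taken from above:)
Code:
# move virtio0 of VM 101 to the target storage; without --delete the source disk
# is kept, which is the "copy" case shown above (storage ID DBDISK is a guess)
qm move_disk 101 virtio0 DBDISK
# with --delete 1 the source disk is removed after the copy, i.e. a real move
qm move_disk 101 virtio0 DBDISK --delete 1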

Now zfs list, before the copy:
Code:
zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
dbpool                          265G   596G   384K  /dbpool
dbpool/DBDISK                   265G   596G   384K  /data
dbpool/DBDISK/vm-100-disk-2     200G   596G   200G  -
dbpool/DBDISK/vm-107-disk-1    64.1G   596G  64.1G  -
dbpool2                         650G   210G    96K  /dbpool2
dbpool2/DBDISK2                 650G   210G    96K  /dbpool2/DBDISK2
dbpool2/DBDISK2/vm-100-disk-1  33.0G   211G  32.1G  -
dbpool2/DBDISK2/vm-101-disk-1  33.0G   211G  32.1G  -
dbpool2/DBDISK2/vm-104-disk-1   278G   216G   273G  -
dbpool2/DBDISK2/vm-104-disk-2  33.0G   211G  32.3G  -
dbpool2/DBDISK2/vm-106-disk-1   103G   291G  22.8G  -
dbpool2/DBDISK2/vm-106-disk-2  33.0G   211G  32.2G  -
dbpool2/DBDISK2/vm-108-disk-1  33.0G   242G  1.53G  -
dbpool2/DBDISK2/vm-108-disk-2   103G   314G    56K  -
rpool                          22.0G   193G   104K  /rpool
rpool/ROOT                     13.5G   193G    96K  /rpool/ROOT
rpool/ROOT/pve-1               13.5G   193G  13.5G  /
rpool/data                       96K   193G    96K  /rpool/data
rpool/swap                     8.50G   194G  7.44G  -
And zpool status:
Code:
zpool status
  pool: dbpool
 state: ONLINE
  scan: scrub repaired 0B in 0h4m with 0 errors on Sun Nov 11 00:28:12 2018
config:

   NAME                                            STATE     READ WRITE CKSUM
   dbpool                                          ONLINE       0     0     0
     mirror-0                                      ONLINE       0     0     0
       ata-INTEL_SSDSC2KB960G8_PHYF834201XW960CGN  ONLINE       0     0     0
       ata-INTEL_SSDSC2KB960G8_PHYF8342022N960CGN  ONLINE       0     0     0

errors: No known data errors

  pool: dbpool2
 state: ONLINE
  scan: none requested
config:

   NAME        STATE     READ WRITE CKSUM
   dbpool2     ONLINE       0     0     0
     mirror-0  ONLINE       0     0     0
       sde     ONLINE       0     0     0
       sdf     ONLINE       0     0     0

errors: No known data errors
Now I copy the disk (dbpool2/DBDISK2/vm-101-disk-1):

Screenshot from 2018-11-27 13-50-02.png
Code:
zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
dbpool                          329G   532G   384K  /dbpool
dbpool/DBDISK                   329G   532G   384K  /data
dbpool/DBDISK/vm-100-disk-2     200G   532G   200G  -
dbpool/DBDISK/vm-101-disk-1    64.1G   532G  64.1G  -
dbpool/DBDISK/vm-107-disk-1    64.1G   532G  64.1G  -
dbpool2                         650G   210G    96K  /dbpool2
dbpool2/DBDISK2                 650G   210G    96K  /dbpool2/DBDISK2
dbpool2/DBDISK2/vm-100-disk-1  33.0G   211G  32.1G  -
dbpool2/DBDISK2/vm-101-disk-1  33.0G   211G  32.1G  -
dbpool2/DBDISK2/vm-104-disk-1   278G   216G   273G  -
dbpool2/DBDISK2/vm-104-disk-2  33.0G   211G  32.3G  -
dbpool2/DBDISK2/vm-106-disk-1   103G   291G  22.8G  -
dbpool2/DBDISK2/vm-106-disk-2  33.0G   211G  32.2G  -
dbpool2/DBDISK2/vm-108-disk-1  33.0G   242G  1.53G  -
dbpool2/DBDISK2/vm-108-disk-2   103G   314G    56K  -
rpool                          22.0G   193G   104K  /rpool
rpool/ROOT                     13.5G   193G    96K  /rpool/ROOT
rpool/ROOT/pve-1               13.5G   193G  13.5G  /
rpool/data                       96K   193G    96K  /rpool/data
rpool/swap                     8.50G   194G  7.44G  -
Here you can see that the copy, dbpool/DBDISK/vm-101-disk-1, now uses 64.1G even though the source only uses 33.0G.
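For reference, the source and the copy can be compared directly with zfs get (both dataset names are from the listing above):
Code:
# the source zvol on dbpool2 and its copy on dbpool
zfs get volsize,volblocksize,used,referenced,compressratio dbpool2/DBDISK2/vm-101-disk-1
zfs get volsize,volblocksize,used,referenced,compressratio dbpool/DBDISK/vm-101-disk-1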
 
This is also how the disk shows up in the Proxmox interface (the zfs list output is the same as above): Screenshot from 2018-11-27 13-58-04.png
 
Can you post the output of 'zpool status'?
I posted this output previously, but here it is again (this time including rpool):

Code:
zpool status
  pool: dbpool
 state: ONLINE
  scan: scrub repaired 0B in 0h4m with 0 errors on Sun Nov 11 00:28:12 2018
config:

    NAME                                            STATE     READ WRITE CKSUM
    dbpool                                          ONLINE       0     0     0
      mirror-0                                      ONLINE       0     0     0
        ata-INTEL_SSDSC2KB960G8_PHYF834201XW960CGN  ONLINE       0     0     0
        ata-INTEL_SSDSC2KB960G8_PHYF8342022N960CGN  ONLINE       0     0     0

errors: No known data errors

  pool: dbpool2
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    dbpool2     ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        sde     ONLINE       0     0     0
        sdf     ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 0h0m with 0 errors on Sun Nov 11 00:24:23 2018
config:

    NAME        STATE     READ WRITE CKSUM
    rpool       ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        sdc2    ONLINE       0     0     0
        sdd2    ONLINE       0     0     0

Continuing my research, I found that:
Code:
zpool get all | grep ashift
dbpool   ashift                         14                             local
dbpool2  ashift                         0                              default
rpool    ashift                         12                             local

The pool with the default ashift setting does not inflate the disks.
So it does not seem to be a raidz-only issue; it can happen with any kind of pool when the virtual machines use 4K block devices and ashift is set higher than the default of 12.
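If that is the cause, the relevant settings would be the pool's ashift (fixed when the pool is created) and the zvol's volblocksize (fixed when the volume is created): a volblocksize smaller than the pool's 2^ashift sector size means every block gets rounded up to a whole sector. A sketch of the two possible fixes; the pool, device and volume names here are placeholders:
Code:
# Fix 1: recreate the pool with 4K sectors; ashift cannot be changed on an existing pool
zpool create -o ashift=12 newpool mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2
# Fix 2: keep ashift=14 but give new zvols a block size of at least 16K,
# so each block fills a whole 16K sector (volume name and size are illustrative)
zfs create -V 100G -o volblocksize=16k dbpool/testvol
As far as I can tell, Proxmox takes the block size of newly created zvols from the blocksize option of the ZFS storage definition, so that would be the place to raise it before copying disks onto an ashift=14 pool.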

Also interesting: GNU Parted reports
Code:
parted /dev/vdb
(parted) print                                                        
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 107GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size   File system  Name    Flags
 1      1049kB  107GB  107GB  ext4         webwww

and tune2fs -l /dev/vdb1 reports
Code:
Block size:               4096
Fragment size:            4096
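So inside the guest the filesystem works with 4K blocks, while on the host every zvol block is allocated in whole pool sectors. For completeness, the host-side values can be cross-checked like this (the dataset name is one from this thread; a pool's sector size is 2^ashift):
Code:
# block size the zvol was created with
zfs get volblocksize dbpool2/DBDISK2/vm-101-disk-1
# sector size exponent of each pool (sector size = 2^ashift)
zpool get ashift dbpool dbpool2 rpool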
 