Corrupted macOS partition after GParted

I followed a tutorial to set up a dual boot with macOS Sierra 10.12 and Kali Linux 2.0.

I created a bootable USB drive and booted into a Kali Linux live session in order to use GParted to resize my macOS partition.

I selected the macOS partition and resized it from 239 GB to 200 GB. That left me with two regions: the 200 GB macOS partition and 39 GB shown as "unallocated".

But now when I try to boot into macOS, I get the Apple logo, then a white cross, and macOS never starts.

I tried to boot into Recovery HD by holding Cmd-R, then tried to run First Aid in Disk Utility (S.O.S. in the French localization), but it says I need a recovery disk created with Recovery Disk Assistant. Such a USB recovery disk is normally created by plugging a USB drive into the MacBook and letting the assistant build a bootable drive that can repair disks, but as I said, my MacBook can't boot into macOS, so I can't create one... Is there a way to download an image of that recovery USB drive directly, so I can build my own recovery drive from it?

I read somewhere that I only need to rewrite the correct boot code and that my data isn't lost. Is that true?

What do you think I can do?

Edit:
Here is the output of diskutil/gpt:

Terminal

(Sorry for the heavily compressed images; I don't have 10 reputation yet, so I can't post more than 2 pictures.)

I didn't expect this result from diskutil. Is it normal to have that many partitions?

Edit 2:

Here is the other screen I got after running the write commands:

Terminal2

Edit 3:

Last verify


GParted didn't really create unallocated disk space. Instead, the MBR became bogus. The CoreStorage Logical Volume Group (LVG) and all nested containers were corrupted as well, because the whole stack wasn't resized as required. In macOS, the whole stack is usually resized with the command diskutil cs resizeStack .... As far as I can tell from remote, the ending boundary of the second partition was simply moved to a lower block number. That usually works with plain HFS+ volumes in GParted, but not, as in this case, with a CoreStorage stack. Fortunately, some invisible data structures of the CS stack weren't overwritten.
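
For comparison, a correct shrink from within macOS resizes the whole stack in one step. A minimal sketch, assuming the UUID of the Logical Volume is looked up first (lvUUID and the 200g target size are placeholders):

    diskutil cs list                    # look up the UUID of the Logical Volume
    diskutil cs resizeStack lvUUID 200g # shrinks LV, LVF, PV and partition together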

Additionally, the Recovery HD partition wasn't moved properly. But that's a different problem.

Instead of an MBR you should have a pMBR (protective MBR). After removing the bogus MBR, you have to destroy and recreate the GUID partition table:

  • Boot into Internet Recovery Mode
  • Open Terminal from the menu bar: Utilities -> Terminal
  • Get an overview (the output of the gpt command is especially important!):

    diskutil list
    gpt -r show disk0
    
  • Unmount disk0:

    diskutil umountDisk /dev/disk0
    
  • Delete the MBR:

    dd if=/dev/zero of=/dev/disk0 bs=512 count=1
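    # This zeroes only block 0 (the 512-byte MBR); the GPT header at block 1
    # and the partition contents remain untouched.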
    
  • Destroy the GUID partition table and create a new one (this also creates a fresh pMBR):

    gpt destroy disk0
    gpt create -f disk0
    
  • Rebuild all previous GUID partitions:

    gpt add -i 1 -b 40 -s 409600 -t C12A7328-F81F-11D2-BA4B-00A0C93EC93B disk0
    gpt add -i 3 -b 488965176 -s 1269536 -t 426F6F74-0000-11AA-AA11-00306543ECAC disk0
    gpt add -i 2 -b 409640 -s 409602008 -t 53746F72-6167-11AA-AA11-00306543ECAC disk0
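    # -i is the partition index, -b the first block, -s the size in blocks and
    # -t the partition type GUID (EFI System, Apple Boot/Recovery HD and Apple
    # CoreStorage respectively). The block values above match the layout in the
    # question; always take them from your own gpt -r show output.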
    

    If you get a "resource busy" error after one of the steps, just unmount disk0 again with

    diskutil umountDisk /dev/disk0
    

Check the disk with diskutil verifyDisk disk0 afterwards.
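
If the rebuild succeeded, gpt -r show disk0 should also list all three partitions again. A rough sketch (block values match the gpt add commands above; the trailing free space and secondary GPT entries are omitted):

         start        size  index  contents
             0           1         PMBR
             1           1         Pri GPT header
             2          32         Pri GPT table
            34           6
            40      409600      1  GPT part - C12A7328-F81F-11D2-BA4B-00A0C93EC93B
        409640   409602008      2  GPT part - 53746F72-6167-11AA-AA11-00306543ECAC
     410011648    78953528
     488965176     1269536      3  GPT part - 426F6F74-0000-11AA-AA11-00306543ECAC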

Enter diskutil cs list and check whether all four CoreStorage containers appear: a Logical Volume Group, a Physical Volume, a Logical Volume Family and a Logical Volume.
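
The tree should look roughly like this (UUIDs, names and the LV disk identifier will differ; sketch abridged):

    CoreStorage logical volume groups (1 found)
    |
    +-- Logical Volume Group XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
        =========================================================
        Name:    Macintosh HD
        Status:  Online
        |
        +-< Physical Volume XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
        |   ----------------------------------------------------
        |   Disk:    disk0s2
        |   Status:  Online
        |
        +-> Logical Volume Family XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
            ----------------------------------------------------
            |
            +-> Logical Volume XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
                ---------------------------------------------------
                Disk:    disk17
                Status:  Online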

Mount the LV using the UUID of the Logical Volume:

Example:

    +-> Logical Volume 9A7B21AA-F9FE-4E65-8C7E-ED2A73744C15
        ---------------------------------------------------
        Disk:                  disk17
        Status:                Online

Then use:

    diskutil mount 9A7B21AA-F9FE-4E65-8C7E-ED2A73744C15

Then, after getting the disk identifier of the mounted LV with diskutil list, verify the volume:

    diskutil verifyVolume disk17 # probably it's disk16, disk17 or disk18

Below I assume the disk identifier is disk17.


If the Logical Volume Family and the Logical Volume don't appear, try the following:

  • Boot into Internet Recovery Mode
  • Open Terminal from the menu bar: Utilities -> Terminal
  • Get an overview (the output of the gpt command is especially important!):

    diskutil list
    gpt -r show disk0
    
  • Unmount disk0:

    diskutil umountDisk /dev/disk0
    
  • Remove the current partition entry for the second partition:

    gpt remove -i 2 disk0
    
  • Add a new "expanded" second partition entry:

    gpt add -i 2 -b 409640 -s 488555536 -t 53746F72-6167-11AA-AA11-00306543ECAC disk0
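    # The new size spans everything between the EFI partition and the start of
    # the Recovery HD partition: 488965176 - 409640 = 488555536 blocks.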
    
  • Then repeat all verify steps:

    Check the disk with diskutil verifyDisk disk0 afterwards.

    Enter diskutil cs list and check whether all four CoreStorage containers appear: a Logical Volume Group, a Physical Volume, a Logical Volume Family and a Logical Volume.

    Mount the LV using the UUID of the Logical Volume:

    Example:

        +-> Logical Volume 9A7B21AA-F9FE-4E65-8C7E-ED2A73744C15
            ---------------------------------------------------
            Disk:                  disk17
            Status:                Online
    

    Then use:

    diskutil mount 9A7B21AA-F9FE-4E65-8C7E-ED2A73744C15
    

    Then, after getting the disk identifier of the mounted LV with diskutil list, verify the volume:

    diskutil verifyVolume disk17 # probably it's disk16, disk17 or disk18
    

    If you get errors, back up the data or the whole partition to an external volume, then repair the volume with diskutil repairVolume disk17.

    One way to back up the data is dd. Attach an HFS+-formatted drive with at least 250 GB of free space. Get the path to the external volume with ls /Volumes. Then unmount disk17 and disk0 with diskutil umountDisk disk17 and diskutil umountDisk disk0.

    Then clone the partition to a file:

    dd if=/dev/disk0s2 of=/Volumes/ExternalDriveName/disk0s2.rawdevice bs=4m
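    # bs=4m uses 4 MiB blocks; reading from the raw node /dev/rdisk0s2
    # instead of /dev/disk0s2 is usually considerably faster.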
    

    If the volume name contains spaces, escape them with backslashes: ...of=/Volumes/ExternalDriveName\ With\ Spaces/disk0s2.rawdevice....

    You can also use asr to restore the partition to another disk (as a temporary "backup"). Check man asr.
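
    A minimal sketch, assuming the external disk contains a partition diskXs2 at least as large as disk0s2 (diskXs2 is a placeholder; --erase wipes the target partition first):

    asr restore --source /dev/disk0s2 --target /dev/diskXs2 --erase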