Computing x,y coordinate (3D) from image point
Given your configuration, errors of 20-40 mm at the edges are typical. It looks like you have done everything correctly.
Without modifying the camera/system configuration, doing better will be hard. You can try redoing the camera calibration and hope for better results, but this will not improve them a lot (and you may even end up with worse results, so don't overwrite your current intrinsic parameters).
As count0 said, if you need more precision you should average multiple measurements.
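As a rough illustration of why averaging helps: single-shot noise shrinks roughly with the square root of the number of samples. This sketch uses made-up numbers (a hypothetical target position and a 30 mm noise level, not values from your setup) just to show the effect:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth x,y position of the target, in mm.
true_xy = np.array([120.0, -35.0])

# Simulate 50 independent single-shot measurements with ~30 mm noise,
# comparable to the edge errors you are seeing.
measurements = true_xy + rng.normal(scale=30.0, size=(50, 2))

# Averaging the measurements reduces the error by roughly sqrt(50) ~ 7x.
averaged = measurements.mean(axis=0)

single_error = np.linalg.norm(measurements[0] - true_xy)
avg_error = np.linalg.norm(averaged - true_xy)
print(f"single-shot error: {single_error:.1f} mm, averaged error: {avg_error:.1f} mm")
```

This only helps against random noise (detection jitter, sensor noise); it does nothing for systematic errors such as a bad calibration.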
Do you get the green dots (imagePoints) from the distorted or the undistorted image? The function solvePnP already undistorts the imagePoints (unless you pass the distortion coefficients as null). If you take the points from the undistorted image and still pass the distortion coefficients, you are undistorting those imagePoints twice, and this would end up causing an increased error in the corners.
https://github.com/Itseez/opencv/blob/master/modules/calib3d/src/solvepnp.cpp