ACHIEVING CLOSE RANGE PHOTOGRAMMETRY WITH NON-METRIC MOBILE PHONE CAMERAS

Close range photogrammetry (CRP) has gained increasing relevance over the years, with its principles and theories being applied in diverse applications. The current widespread adoption of mobile phones with high resolution cameras is expected to further popularize positioning by CRP. This paper presents the results of an experimental study in which two (2) non-metric mobile phone cameras were used to determine the 3D coordinates of points on a building by applying the collinearity condition equations in an iterative least squares bundle adjustment implemented in the MATLAB software environment. The two (2) mobile phones used were the Tecno W3 and the Infinix X509, with focal lengths of 5.432 mm and 8.391 mm respectively. Statistical tests on the results obtained show that there is no significant difference between the 3D coordinates obtained by ground survey and those obtained from either camera at the 99% confidence level. Furthermore, the study confirmed the capability of non-metric mobile phone cameras to determine 3D point positions to centimeter-level accuracy, with maximum residuals of 11.8 cm, 31.0 cm and 5.9 cm for the Tecno W3 camera and 14.6 cm, 16.1 cm and 1.8 cm for the Infinix X509 camera in the Eastings, Northings and Heights respectively.


Introduction
Close-range photogrammetry (CRP) has found many diverse applications in industry, biomechanics, chemistry, biology, archaeology, architecture, automotive and aerospace engineering, construction, as well as accident reconstruction (Jiang et al., 2008). Furthermore, the capability of CRP to produce dense point clouds similar to the output of terrestrial laser scanning (TLS) makes it a cheaper alternative in applications that require the 3D position of points (Ruther et al., 2012; Mokroš et al., 2013, 2018). Consequent upon its many applications, CRP has witnessed a wide range of developments in the past four decades, many of which are results of automation and digital techniques that occurred on the sidelines of mainstream photogrammetry (Fraser, 2015). Many of these developments have been especially concerned with models and automation of the procedure for the rigorous determination of the geometric relationship that exists between image and object at the time of image capture, which is the fundamental task of photogrammetry (Mikhail et al., 2001; Luhmann).

As digital photogrammetric techniques began to gain relevance over analytical photogrammetry, Jechev (2004) worked on the use of amateur cameras for the determination of 3D coordinates of buildings using a CRP approach. The results obtained in the study showed root mean square errors (RMSE) of ±1.2 cm and ±6.1 cm in planimetry and altimetry respectively when compared with total station observations at the same points; the data were processed using the PHOTOMOD Lite software. Later, Abbaszadeh and Rastiveis (2017) explored the ability of CRP for volume estimation using non-metric cameras and found that the use of non-metric cameras produced results with a relative error of 0.2% in comparison with ground survey techniques. The study further established the possibility of using non-metric cameras for CRP applications.
However, the images used for that study were processed using the Agisoft software; hence, the study did not explicitly discuss the procedure utilized in converting the image coordinates to object coordinates, which is fundamentally known as space resection (exterior orientation) followed by space intersection.
Exterior orientation involves the process of determining the 3D spatial position and the three orientation parameters of the camera at the time of exposure (Jacobsen, 2001). There are three major fundamental condition equations used in photogrammetry in order to achieve exterior orientation, and all of them rely on point coordinates as input data (Elnima, 2013). Several approaches have been developed over the years in the field of photogrammetry for solving the problem of exterior orientation. Such methods include the Direct Linear Transformation (DLT) method, which gives the exterior orientation parameters without initial approximations (Elnima, 2013), and the matrix factorization method, which uses matrix factorization and a homogeneous coordinate representation to recover the exterior orientation parameters in a planar object space (Seedahmed & Habib, 2015). All these methods are modifications of the collinearity equations, which are the conventional approach for solving the exterior orientation problem.
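The three orientation angles enter the collinearity model through a 3 × 3 rotation matrix whose elements relate the image and object coordinate systems. As a compact illustration (in Python rather than the MATLAB used in the study, and assuming the common sequential omega-phi-kappa convention; the function name is mine):

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix for the omega-phi-kappa sequence (angles in radians).

    Its elements m11..m33 are the coefficients that appear in the
    collinearity equations relating image and object coordinates.
    """
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    R1 = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])   # rotation about X
    R2 = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])   # rotation about Y
    R3 = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])   # rotation about Z
    return R3 @ R2 @ R1
```

Because each factor is orthogonal, the product is orthogonal with unit determinant, which is what makes the inverse transformation a simple transpose.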
This paper explicitly discusses the procedures (space resection and space intersection) for the determination of 3D object space coordinates from 2D images taken with mobile phone (non-metric) cameras using the collinearity equations, and implements them in the MATLAB software environment.

Data
The basic data/equipment used for this study are:
- Ground coordinates of two exposure stations;
- Two (2) non-metric cameras, used to determine whether there is any relationship between positioning accuracy and the calibration parameters of the non-metric camera used;
- Calibration parameters of the cameras (determined with the MATLAB software).

Methodology
Although the basic rationale of this study is to illustrate and develop a simple (easy to replicate) MATLAB procedure for the determination of accurate 3D coordinates of object points from CRP using non-metric cameras, ground survey methods were still conducted to determine:
- coordinates of the exposure stations,
- coordinates of the photo control points (PCP), and
- coordinates of the check points used to validate the model.
Sequentially, the procedure adopted in this study is as shown in Figure 1.
Determination of the coordinates of the exposure stations, PCPs and check points was done with the ZTR 320 Hi-Target total station using conventional survey techniques. Two exposure stations (A001 and A002) were established within 70 m of the building and coordinated accordingly.
Thereafter, five (5) photo control points (P1-P5), used in obtaining the exterior orientation parameters, and nine other check points (C1-C9), used to check the accuracy of the 3D coordinates determined from CRP, were also coordinated by observing the designated points with the total station in reflectorless mode. Figure 2 shows the locations of the PCPs and the check points on the building whose 3D coordinates are determined in this study.
Camera calibration was done in order to determine the intrinsic parameters of each camera (Zhang, 2000). Calibration of the two non-metric cameras used for image acquisition was performed by taking ten (10) shots of a mounted checkerboard with five rows and seven columns, each square measuring 11.3 cm. The acquired images were then processed using the MATLAB 2014a software with its camera calibration add-in tool. The results obtained are presented in Table 1.
Photographs of the building whose 3D coordinates were to be determined were taken from the two established exposure stations. The shots were taken such that 100% overlap was obtained between the two exposure stations for each of the cameras.
Pixel extraction was done using the MATLAB software as the comparator, and the pixel coordinates of the PCPs and check points were extracted as illustrated in Figure 3. Since the MATLAB comparator environment has its origin at the top left corner of the image, a transformation from comparator coordinates to camera coordinates (with origin at the principal point) was carried out by subtracting the x pixel coordinate from the x principal point coordinate and subtracting the y principal point coordinate from the y pixel coordinate, both principal point coordinates being obtained from the camera calibration process. Each resulting camera coordinate was then multiplied by the pixel-to-millimeter conversion constant 0.2645833333 (i.e. 25.4 mm/96 px).
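The comparator-to-camera conversion described above is a recentring on the principal point followed by a pixel-to-millimeter scaling. A minimal Python sketch of that arithmetic (the function name and argument order are mine; the subtraction order follows the text):

```python
# 25.4 mm per 96 px, the conversion constant used in the study
PIXEL_TO_MM = 0.2645833333

def comparator_to_camera(x_pix, y_pix, x0_pix, y0_pix, k=PIXEL_TO_MM):
    """Convert comparator (pixel) coordinates to camera coordinates in mm.

    (x0_pix, y0_pix) is the principal point from camera calibration;
    the subtraction recentres the origin and k scales pixels to mm.
    """
    x_mm = (x0_pix - x_pix) * k
    y_mm = (y_pix - y0_pix) * k
    return x_mm, y_mm
```

A point located exactly at the principal point maps to (0, 0) mm, and a 96-pixel offset maps to 25.4 mm, which is a quick sanity check on the constant.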
The collinearity condition equations were used for transforming the camera coordinates to object coordinates in this study. The transformation was achieved in a two-stage solution approach as follows:
- Space resection stage (determination of exterior orientation parameters): The exterior orientation parameters of each camera position were determined using the collinearity equations given in Eqs (1) and (2):

x_a = -f [m11(X_A - X_L) + m12(Y_A - Y_L) + m13(Z_A - Z_L)] / [m31(X_A - X_L) + m32(Y_A - Y_L) + m33(Z_A - Z_L)]   (1)

y_a = -f [m21(X_A - X_L) + m22(Y_A - Y_L) + m23(Z_A - Z_L)] / [m31(X_A - X_L) + m32(Y_A - Y_L) + m33(Z_A - Z_L)]   (2)

where: f is the calibrated focal length; m11...m33 are the elements of the rotation matrix formed from the orientation angles omega (ω), phi (φ) and kappa (κ); dω, dφ and dκ are the corrections applied to omega, phi and kappa in each iteration; X_L, Y_L and Z_L are the 3D exposure station coordinates; X_A, Y_A and Z_A are the object space coordinates of a control point; and x_a and y_a are the camera coordinates of the control points. The MATLAB codes used were modified after the work of Alsadik (2010). The code executes the linearized collinearity equations iteratively in a least squares adjustment until convergence is reached. The condition for convergence, modified by the authors in this study, was defined such that the difference between the final solution and the previous solution does not exceed 0.001.

Figure 4 shows a graph of the iterations during the least squares determination of the orientation parameters for the right photo taken with the Infinix X509 camera. The figure shows that the MATLAB code continues to iterate until the difference between the final value obtained for each parameter and the previous value does not exceed the specified threshold of 0.001.
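The resection loop described above can be sketched as follows. This Python/NumPy illustration applies the same Gauss-Newton idea (iterating the collinearity equations in a least squares adjustment until the corrections fall below a threshold), with a numerical Jacobian for brevity; it is a sketch under those assumptions, not the MATLAB code modified from Alsadik (2010):

```python
import numpy as np

def rot(omega, phi, kappa):
    """Sequential omega-phi-kappa rotation matrix (elements m11..m33)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    R1 = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    R2 = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    R3 = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return R3 @ R2 @ R1

def project(params, ground, f):
    """Collinearity equations: object points -> image coords (mm).

    Principal point offsets are assumed already removed (as in the
    comparator-to-camera step)."""
    omega, phi, kappa, XL, YL, ZL = params
    m = rot(omega, phi, kappa)
    d = ground - np.array([XL, YL, ZL])   # (n, 3) coordinate differences
    v = d @ m.T                           # rotated differences
    x = -f * v[:, 0] / v[:, 2]
    y = -f * v[:, 1] / v[:, 2]
    return np.column_stack([x, y]).ravel()

def resection(image_xy, ground, f, params0, tol=1e-3, max_iter=50):
    """Gauss-Newton resection: iterate until every correction < tol."""
    params = np.array(params0, dtype=float)
    obs = np.asarray(image_xy, dtype=float).ravel()
    for _ in range(max_iter):
        J = np.zeros((obs.size, 6))       # numerical Jacobian (central diff.)
        for j in range(6):
            h = 1e-6
            p1, p2 = params.copy(), params.copy()
            p1[j] += h
            p2[j] -= h
            J[:, j] = (project(p1, ground, f) - project(p2, ground, f)) / (2 * h)
        dl = obs - project(params, ground, f)
        dx = np.linalg.lstsq(J, dl, rcond=None)[0]
        params += dx
        if np.max(np.abs(dx)) < tol:      # convergence condition (cf. 0.001)
            break
    return params
```

With five or more control points the system of 2n observation equations is overdetermined in the six unknowns, which is why a least squares solution is needed at each iteration.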
-Space intersection stage (determination of 3D object coordinates from camera coordinates): Transformation from the camera coordinates to the 3D object coordinate system was again carried out with the MATLAB software by evaluating the collinearity condition equations, rearranged as Eqs (3) and (4) and solved by least squares for the object space coordinates of each point, using the exterior orientation parameters of the two photographs obtained at the resection stage.
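A sketch of the intersection stage: rearranging each collinearity equation makes every measured image coordinate a linear equation in the unknown object coordinates, so two photographs yield an overdetermined linear system. The Python function below is illustrative (names and data layout are mine, not the paper's code):

```python
import numpy as np

def rot(omega, phi, kappa):
    """Sequential omega-phi-kappa rotation matrix."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    R1 = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    R2 = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    R3 = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return R3 @ R2 @ R1

def intersection(observations, f):
    """Linear least-squares space intersection.

    observations: list of (x_mm, y_mm, eo) tuples, one per photo, where
    eo = (omega, phi, kappa, XL, YL, ZL) comes from the resection stage.
    Rearranging the collinearity equations gives, per image coordinate,
    one linear equation a . (X, Y, Z) = a . (XL, YL, ZL).
    """
    A, b = [], []
    for x, y, (omega, phi, kappa, XL, YL, ZL) in observations:
        m = rot(omega, phi, kappa)
        L = np.array([XL, YL, ZL])
        rx = x * m[2] + f * m[0]   # row from the x collinearity equation
        ry = y * m[2] + f * m[1]   # row from the y collinearity equation
        A.extend([rx, ry])
        b.extend([rx @ L, ry @ L])
    return np.linalg.lstsq(np.array(A), np.array(b), rcond=None)[0]
```

Two photographs give four equations in three unknowns, so the redundancy also provides a check on the measurement quality.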

Results and discussion of results
Tables 2 and 3 present the exterior orientation parameters and the 2D comparator and camera coordinates, respectively, obtained from both cameras. Since exterior orientation parameters depend in general on the geometric and topologic characteristics of the imaged objects, the computed orientation angles (ω, φ and κ) presented in Table 2 reveal that the photographs were taken in a near-horizontal direction. Furthermore, the adjusted coordinates of the two exposure stations from which the left and right photos were taken indicate the instability of the camera position at the various times of exposure. With observed maximum differences of 21 cm and 32 cm in the computed horizontal positions of the exposure stations, it is obvious that the camera position varied between exposures. This could have been minimized if the camera had been mounted on a properly centered tripod during exposure.
Furthermore, Table 4 presents the residuals at the check stations, i.e. the differences between the coordinates obtained by space intersection and those obtained by the ground survey method.
From Table 4, it can be observed that the largest residuals are 11.8 cm (Eastings), 31.0 cm (Northings) and 5.9 cm (Heights) when the Tecno W3 camera was used. Similarly, the maximum residuals for the Infinix X509 camera were -14.6 cm, 16.1 cm and 1.8 cm in the Eastings, Northings and Heights respectively. The residuals obtained suggest that the Tecno W3 camera performed better in the determination of the object coordinates than the Infinix X509 camera, even though the latter has a longer focal length. A similar residual pattern was observed during the determination of the exterior orientation parameters for images from both cameras: the final adjusted exposure station coordinates obtained from the Tecno W3 camera were closer to the known coordinates of the stations. The results therefore confirm that while the focal length of a camera plays a significant role in image magnification, a longer focal length does not necessarily mean that the relative image-to-object geometry is better preserved. Notwithstanding, it is evident that centimeter-level 3D positional accuracy can still be achieved from CRP using non-metric cameras. Consequent upon the centimeter-level residuals obtained from the space intersection results in comparison with the ground survey coordinates, statistical tests (Student's t-test for equality of means and a test for equality of variances) were conducted on the coordinates obtained by space intersection at the 99% confidence level. The comparison of the results obtained by space intersection (Tecno W3 and Infinix X509) with those obtained by the survey technique was done to ascertain the reliability of non-metric cameras for low order (3rd order) accuracy position determination. Tables 5 and 6 present the results of the statistical tests of equality of means and variances performed on the coordinates obtained by space intersection from the Tecno W3 and Infinix X509 cameras respectively.
From Table 5, the p-values corresponding to the Levene's test statistic are very large (0.99, 0.97 and 0.98 for the Northings, Eastings and Heights respectively); we therefore fail to reject the null hypothesis that there is no significant difference between the variances of the results obtained by the ground survey technique and by CRP using the Tecno W3 camera (Snedecor & Cochran, 1989). Similarly, we observe that there is no significant difference between the means of the two sets of results in the Northings, Eastings and Heights. A similar result is observed in Table 6 when comparing the results obtained by the ground survey method with those from CRP using the Infinix X509 camera, again because all the obtained p-values are greater than the chosen significance level (0.01).
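For readers wishing to reproduce the equality-of-means check, the test statistic can be computed directly. The pure-Python sketch below uses Welch's variant of the two-sample t statistic (the paper does not state which variant its software used, so this is illustrative):

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and its degrees of freedom.

    |t| is compared against the tabulated critical value at the chosen
    significance level (0.01 in this study); a smaller |t| means the
    difference in means is not significant.
    """
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)   # sample variances
    se2 = va / na + vb / nb                           # squared standard error
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

Identical samples give t = 0 (clearly not significant), while a large offset between samples drives |t| far above any tabulated critical value.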

Conclusions
This study has ascertained the statistical reliability of using non-metric cameras for the determination of low order accuracy positions via CRP. This was achieved by using the collinearity condition equations in an iterative least squares bundle adjustment process in the MATLAB software environment. The study therefore concludes that, with a careful implementation of the conventional collinearity equations, third order accuracy positions can be obtained with non-metric mobile phone cameras.
Furthermore, the study concludes that mobile phone cameras (non-metric) with a minimum resolution of 5 megapixels and a minimum focal length of 5.40 mm are suitable devices for CRP applications requiring 3rd order positional accuracy.
Finally, the study found that a camera's capacity to preserve image-to-object geometric/topologic relations does not necessarily improve with increasing focal length.

Funding
There was no funding for this research.