
R&D Journal

versión On-line ISSN 2309-8988
versión impresa ISSN 0257-9669

R&D j. (Matieland, Online) vol.13  Stellenbosch, Cape Town  1997

 

From image to CAD. A digital photogrammetry system for the semi-automatic mapping of plants and pipe-systems

 

 

M. CammidgeI; H. RütherII

IDepartment of Surveying and Geodetic Engineering, University of Cape Town
IIDepartment of Surveying and Geodetic Engineering, University of Cape Town, Private Bag, Rondebosch, 7700 South Africa

 

 


ABSTRACT

A machine vision (digital photogrammetry) system for the semi-automated identification of objects in industrial environments, and especially the determination of pipe dimensions and orientations, is described. The test measurements on objects covering some 80% of the image resulted in accuracies of 1 in 2 000 (difference between known and system-determined object dimensions) and precisions of better than 1 in 2 000 (standard deviation against object dimension). The test also confirmed the system's ability to determine circle parameters based on the photogrammetric measurement of three points on a cylindrical body, a feature especially designed for the measurement of pipes.


 

 

Introduction

The implementation of chemical, petro-chemical, and related industrial plants frequently diverges from the design thereof as a result of unforeseen implementation difficulties during the plant construction. Furthermore, it is frequently found that, as a result of repairs and updates to these plants, the original plans are no longer representative of the plant layout after a number of years, and are thus unsuitable for use in the planning of further alterations.

It is, at present, common practice to make use of analytical photogrammetry to facilitate the mapping of such plants, and research in this regard has been described.2,3,4,5,6 Typically a number of well-distributed control points, i.e. points of known position, are established and a set of photographs of the industrial system is taken from different viewpoints. The positions and orientations of the cameras are then determined using the control points. The image positions of relevant object points are then measured on the images, and these point positions are used, together with the camera orientation information, to determine the positions of objects in three dimensions. These object positions are used to determine object dimensions and, in the case of pipe systems, components are identified by means of a lookup table of standard pipes and other system components. This information, together with the calculated component positions, is input into a CAD model of the plant, and a complete three-dimensional plant model is generated in this manner. The most apparent deficiency of this technique is the amount of labour involved in identifying points, and in transferring data between analogue images and computerised systems. Other obvious disadvantages include the time taken from image capture at plant sites until the data are entered in the CAD model, and the lack of automation in the identification of pipe sizes, orientations, and positions.

Developments in digital photogrammetry7 have now reached a stage where conventional photogrammetric measurement can be replaced by machine vision systems with semi-automatic measurement capabilities. This paper reports on the design and implementation of such a system. The following basic steps are required for the acquisition of dimensional data suited for entry into a CAD model of an industrial system:

Image capture, generally using a digital camera;

Pre-processing of captured images to simplify object point location, using image processing algorithms;

Determination of camera orientations and positions from the known positions of a number of control points;

Object point identification from the positions of points on a number of images;

Calculation of the spatial relationships between points in object space.

These are described in more detail in the sections which follow.

 

Image capture

Images can be captured using any of a number of techniques to generate digital images. The most convenient method relies on the use of digital still cameras. For this project a DCS420 digital still camera was used to capture images at a resolution of 1524 x 1012 pixels. These images were stored on a PCMCIA hard drive, and downloaded directly to the PC where the processing was to take place.

Further tests were performed on 512 x 512 pixel images captured using CCD video cameras attached to a framegrabber in a PC. The possibility of scanning analogue images was also considered, but was not tested.

 

Image processing

Image processing algorithms are implemented to enhance the images prior to identification of points to simplify this task for the user of the software. Of particular use are the histogram equalisation algorithm, which in many cases improves the contrast in dark regions of images, and the high pass filtering algorithm which sharpens edges of objects. The high pass filter is of particular use in enhancing small objects which are only one or two pixels wide in images.
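As an illustration, the histogram equalisation step can be sketched as follows. This is a minimal version for an 8-bit grey-scale image stored as a flat array, not the system's actual implementation:

```cpp
#include <array>
#include <cstdint>
#include <vector>

// Remap 8-bit grey values so that the cumulative histogram becomes
// approximately linear, which stretches contrast in dark image regions.
std::vector<std::uint8_t> equalise(const std::vector<std::uint8_t>& img)
{
    // Grey-level histogram.
    std::array<std::size_t, 256> hist{};
    for (std::uint8_t v : img) ++hist[v];

    // Cumulative distribution, rescaled to the 0..255 output range.
    std::array<std::uint8_t, 256> lut{};
    std::size_t cum = 0;
    for (int v = 0; v < 256; ++v) {
        cum += hist[v];
        lut[v] = static_cast<std::uint8_t>((cum * 255) / img.size());
    }

    // Apply the lookup table to every pixel.
    std::vector<std::uint8_t> out(img.size());
    for (std::size_t i = 0; i < img.size(); ++i) out[i] = lut[img[i]];
    return out;
}
```

A high pass filter would similarly be a small convolution over the same flat array; only the per-pixel kernel differs.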

 

Camera calibration

Camera calibration involves the determination of the elements of interior and exterior orientation of the camera. The interior orientation elements comprise the focal length, or more correctly, the 'principal distance', the principal point, and the lens distortion of the image capturing system. The exterior orientation defines the object space position and orientation of the camera during image acquisition. The orientation process makes use of the (x,y) image co-ordinates and the (X,Y,Z) object co-ordinates of a set of control points. Photogrammetric calibration algorithms are most commonly based on the so-called collinearity equations which describe the requirement that a point, its image, and the perspective centre of the camera lens system must be collinear:

x = xp - c [r11(X - Xc) + r12(Y - Yc) + r13(Z - Zc)] / [r31(X - Xc) + r32(Y - Yc) + r33(Z - Zc)]    (1)

y = yp - c [r21(X - Xc) + r22(Y - Yc) + r23(Z - Zc)] / [r31(X - Xc) + r32(Y - Yc) + r33(Z - Zc)]    (2)

where x and y are the image co-ordinates of a point on an image, c is the principal distance, and X, Y, and Z are the co-ordinates of the point in object space. The remaining parameters (xp, yp, ω, φ, κ, Xc, Yc, Zc) describe the principal point and the position and orientation of the camera, the rij being the elements of the rotation matrix formed from the rotation angles ω, φ, and κ.
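In code, the projection defined by the collinearity equations can be sketched as below. This is an illustrative fragment which assumes the common omega-phi-kappa rotation convention; the paper does not state which convention was implemented:

```cpp
#include <cmath>

struct Camera {
    double c, xp, yp;   // interior: principal distance and principal point
    double om, ph, ka;  // exterior rotation angles omega, phi, kappa
    double Xc, Yc, Zc;  // exterior: perspective centre in object space
};

// Project an object point (X,Y,Z) into image coordinates (x,y) using the
// collinearity equations with an omega-phi-kappa rotation matrix.
void project(const Camera& cam, double X, double Y, double Z,
             double& x, double& y)
{
    const double co = std::cos(cam.om), so = std::sin(cam.om);
    const double cp = std::cos(cam.ph), sp = std::sin(cam.ph);
    const double ck = std::cos(cam.ka), sk = std::sin(cam.ka);

    // Rotation matrix elements, omega-phi-kappa convention.
    const double r11 =  cp * ck;
    const double r12 =  co * sk + so * sp * ck;
    const double r13 =  so * sk - co * sp * ck;
    const double r21 = -cp * sk;
    const double r22 =  co * ck - so * sp * sk;
    const double r23 =  so * ck + co * sp * sk;
    const double r31 =  sp;
    const double r32 = -so * cp;
    const double r33 =  co * cp;

    const double dX = X - cam.Xc, dY = Y - cam.Yc, dZ = Z - cam.Zc;
    const double den = r31 * dX + r32 * dY + r33 * dZ;

    x = cam.xp - cam.c * (r11 * dX + r12 * dY + r13 * dZ) / den;
    y = cam.yp - cam.c * (r21 * dX + r22 * dY + r23 * dZ) / den;
}
```

Calibration then amounts to adjusting the nine Camera parameters until the projected positions of the control points best match their measured image positions.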

These equations must ideally be satisfied for all the control points which are available. Due to imperfections in the image capturing and data acquisition systems, the collinearity equations cannot generally be satisfied fully, and it is thus desirable to evaluate the parameters based on a number of redundant points, in which case least squares optimisation is invoked to model the observation data. This is achieved by linearising the equations, and solving for the nine parameters in an iterative manner. Initial estimates of the unknowns are required, and are obtained using other techniques. Solving for the nine unknown parameters requires at least six control points.

Additional parameters can be included in equation (1) and equation (2) to model some of the distortions introduced into the system by distortion of the camera lens, and any other imperfections in the system.

Other techniques based on the same equations8 are used in situations where fewer control points are available, but where some of the camera orientation parameters are known.

 

Object point intersection

If the orientation of a camera is known, then a point on an image, together with the camera orientation defines a line in three dimensions. If the same point is identified on a second image, for which the camera orientation is known, this too defines a line in three-dimensional space (object space). Both these lines will ideally pass through the point in object space. The point position can thus be found by calculating the point of intersection of the two lines.

The calculations involved make use of the same equations used for the camera calibration (equation (1) and equation (2)), but in this case the object space coordinates, rather than the camera orientations, are considered variable.

Once points have been identified in object space, it is possible to use these points to refine the estimates of the camera orientation parameters, even if the correct object space co-ordinates are not already known. Both the object space co-ordinates and the camera orientation parameters are allowed to vary, and their optimum values determined. The adjustment, based on the same equations used above, is then referred to as a bundle adjustment.

 

Locating circles in three dimensions

An algorithm to find the radius and centre of a circle, and the direction of the normal to the plane of the circle passing through three points in three dimensions was required to assist in finding the directions of pipes in factories, and to assist in the identification of pipes and other circular components from catalogues of items. The following method was employed to calculate these parameters:

Using the co-ordinates A, B and C of three points through which a circle passes, two chords of the circle can be found. These chords are the vectors AB and BC. The directions of the chords are found by subtracting A from B and B from C, respectively. The cross product of the difference vectors AB and BC defines the direction of the normal to the plane of the circle. Finding the cross product of the directions of the chords will give the same result. The direction of the normal, D, is therefore:

D = AB × BC    (3)

Figure 1 illustrates the points A, B, and C, and the vectors AB and BC. A vector with the same direction as the normal to the plane of the circle (direction D), and passing through the midpoint of the circle (M) is also shown.

 

 

Two vectors R1 and R2 from the midpoints of the chords to the centre of the circle can now be found. The directions, Rd1 and Rd2, of these vectors can be determined from the equations

Rd1 = D × AB    (4)

Rd2 = D × BC    (5)

since R1 and R2 must be perpendicular to both the direction of the normal and the chords. If R1 and R2 are each assumed to pass through the centre of one of the chords, it is clear that they will pass through the centre of the circle, in the plane of the circle. The centre points of the chords can be calculated using

C1 = (A + B)/2    (6)

C2 = (B + C)/2    (7)

The vectors R1 and R2 are illustrated in Figure 2.

 

 

In order to locate the centre of the circle, the intersection of R1 and R2 is required. Ideally these vectors will intersect at the centre of the circle. However, as a result of earlier rounding errors, it is probable that the two vectors will approach each other, but will not intersect at the midpoint of the circle. The following equations can be written:

C1 + s1 Rd1 + ε1 = M    (8)

C2 + s2 Rd2 + ε2 = M    (9)

where M is the centre of the circle, and s1 and s2 are unknown scale factors. The terms ε1 and ε2 describe the residual vectors bridging the gap between the R1 and R2 vectors at the point of closest approach. The vector describing the difference between R1 and R2 at the point of closest approach is given by (ε1 − ε2). Using equation (8) and equation (9), the values of the two scaling factors which multiply Rd1 and Rd2, s1 and s2 respectively, can be found. In order to find these values, the distance between R1 and R2 at closest approach, (ε1 − ε2), must be minimised. Writing equations for each of the vector components gives, in the two unknowns s1 and s2, the three component equations

s1 Rd1 − s2 Rd2 = C2 − C1    (10)

Since equation (10) generally has no solution, the least squares method is used to determine the best values of s1 and s2. The midpoint of the shortest vector between R1 and R2 is accepted as the best estimate of the centre of the circle, and this point can be calculated using the following equation:

M = (C1 + s1 Rd1 + C2 + s2 Rd2)/2    (11)

Once the midpoint of the circle has been found, the magnitude of the radius of the circle can be calculated using one of the following equations:

R = |M − A|    (12)

R = |M − B|    (13)

R = |M − C|    (14)

The average of these values is accepted as the best estimate of the radius of the circle.

The direction of the normal to the plane of the circle is normalised to a unit vector D. These three parameters, R, D, and M then represent the circle passing through A, B, and C.
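The complete construction can be sketched end-to-end as follows. This is an illustrative re-implementation of the method described above, not the authors' code:

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

static Vec3 sub(const Vec3& a, const Vec3& b)
{ return {a[0] - b[0], a[1] - b[1], a[2] - b[2]}; }
static Vec3 add(const Vec3& a, const Vec3& b)
{ return {a[0] + b[0], a[1] + b[1], a[2] + b[2]}; }
static Vec3 mul(const Vec3& a, double s)
{ return {a[0] * s, a[1] * s, a[2] * s}; }
static double dot(const Vec3& a, const Vec3& b)
{ return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }
static Vec3 cross(const Vec3& a, const Vec3& b)
{ return {a[1] * b[2] - a[2] * b[1],
          a[2] * b[0] - a[0] * b[2],
          a[0] * b[1] - a[1] * b[0]}; }
static double norm(const Vec3& a) { return std::sqrt(dot(a, a)); }

struct Circle { Vec3 M; Vec3 D; double R; };

// Circle through A, B, C: intersect the in-plane chord bisectors and
// average the three point-to-centre distances for the radius.
Circle circleThrough(const Vec3& A, const Vec3& B, const Vec3& C)
{
    const Vec3 AB = sub(B, A), BC = sub(C, B);
    const Vec3 D   = cross(AB, BC);        // normal to the circle plane
    const Vec3 Rd1 = cross(D, AB);         // bisector directions, in-plane
    const Vec3 Rd2 = cross(D, BC);
    const Vec3 C1 = mul(add(A, B), 0.5);   // chord midpoints
    const Vec3 C2 = mul(add(B, C), 0.5);

    // Least-squares closest approach of C1 + s1*Rd1 and C2 + s2*Rd2.
    const Vec3 r = sub(C1, C2);
    const double a = dot(Rd1, Rd1), b = dot(Rd1, Rd2), c = dot(Rd2, Rd2);
    const double d = dot(Rd1, r),  e = dot(Rd2, r);
    const double den = a * c - b * b;
    const double s1 = (b * e - c * d) / den;
    const double s2 = (a * e - b * d) / den;

    Circle out;
    out.M = mul(add(add(C1, mul(Rd1, s1)), add(C2, mul(Rd2, s2))), 0.5);
    out.R = (norm(sub(out.M, A)) + norm(sub(out.M, B))
             + norm(sub(out.M, C))) / 3.0;
    out.D = mul(D, 1.0 / norm(D));         // unit normal
    return out;
}
```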

 

System design

The software was written in C and C++, and designed to run under DOS on standard PC platforms. An effort was, however, made to ensure future portability of the source code, and alterations to allow the software to run alongside the CAD software used for modelling of such industrial plants should be simple to implement.

 

Testing

Tests were performed to verify the correctness of the various algorithms which were implemented as part of the system.

 

Measurement testing

In order to test the ability of the system to measure accurately distances between points in three dimensions, the distances between various points on a 30 cm ruler, shown in Figure 3, were calculated, and compared to their known values.

 

 

Camera calibration was performed using a Direct Linear Transformation (DLT) described by Abdel-Aziz & Karara.1 Following the intersection of the object points, a bundle adjustment was used to refine the estimates of the point positions, and of the camera orientation parameters. Seven additional parameters were used in the bundle adjustment. The results of the test, performed using three 1 012 x 1 524 pixel images, are presented in Table 1.

 

 

The accuracy obtained is 0.05% for the ruler, which occupies approximately 80% of the height of the image. The use of more images would improve the accuracy obtained, as would the use of more easily identifiable targets, as the graduations on the ruler were difficult to identify.

 

Industrial scene example

The discrepancies between the known and the calculated control point co-ordinates were determined as a demonstration of the accuracy which can be expected in industrial plants. The test was based on the scene shown in Figure 4, and the results of this test are summarised in Table 2. The square root of the mean of the squared deviations (RMS deviation) of measured points from their known co-ordinates is given. Three images, captured using a DCS420 digital still camera, were used for this test. Camera calibration was performed using a combination of the DLT and the bundle adjustment. No additional parameters were implemented in this test since too few control points were available for this purpose.

 

 

 

 

These results show that the expected accuracy for scenes similar to those shown in Figure 4 is better than 1 cm. The presence of more control points would improve the achievable accuracy by improving the camera orientation estimates, and by allowing for the use of additional parameters to model lens distortions.

 

Circle location

The ability of the system to determine the radius, centre and orientation of a circle passing through three points in three dimensions was demonstrated by calculating these parameters for the same circle using two separate sets of three points on the circumference of the circle. Figure 5 illustrates these two sets of points, and Table 3 presents the results of each test. In each case the direction vector of the normal to the plane of the circle has been normalised to a unit vector.

 

 

 

 

The two circles identified are expected to be identical, since both pass through points on the rim of the tank as illustrated in Figure 5. The radii differ by 4 mm, and the angle between the two normal directions is 3.1°. The distance between the two calculated centre points is 60.4 mm.

A number of factors cause errors in the calculated circle parameters. The most important of these is the precision of the calculated positions of surface points in object space, which is of the order of 1 cm for this test. The effect of the errors in the object space co-ordinates is larger if all three points used to locate the circle lie on the same side of the pipe, as is the case for this test. In addition, it should be noted that the object to which a circle is being fitted is not necessarily perfectly circular. Image point identification by the operator is also imperfect as a result of the difficulty involved in determining the exact position of features on the images. Fitting a best-fit circle to more than three points would improve this accuracy.

 

Line location

Another set of industrial plant images was used for this test. To demonstrate the line location facility, the ends of two approximately perpendicular pipes were identified. These lines are illustrated in Figure 6.

 

 

The results of this test are presented in Table 4. As a result of rounding errors, the length of each direction vector is not exactly 1.

 

 

The angle between line 1 and the horizontal is less than 0.2°, and between line 2 and the vertical is less than 2°. These deviations result from a combination of real deviations in the layout of the plant, and the limited accuracy of the calculated object point positions. The angle between the two lines is 89.23°, indicating that the lines are approximately perpendicular, as expected.
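The angles quoted above can be recovered from the dot product of the two direction vectors, for example:

```cpp
#include <cmath>

// Angle in degrees between two line directions; the directions need not
// be unit vectors since the dot product is normalised by both lengths.
double angleDeg(const double u[3], const double v[3])
{
    const double d  = u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
    const double nu = std::sqrt(u[0] * u[0] + u[1] * u[1] + u[2] * u[2]);
    const double nv = std::sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
    return std::acos(d / (nu * nv)) * 180.0 / std::acos(-1.0);
}
```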

 

Conclusions

The tests illustrate the ability of the digital system to measure the positions of points in three dimensions accurately, and to determine the dimensions of circular structures in industrial plants.

The use of digital photogrammetry holds the following advantages over analytical techniques:

Reduced time from image capture to data entry into the CAD model, compared to analogue systems;

The ability to process images digitally to ease the identification of points in the images;

The ability to find the lengths of features, and the radii, positions, and orientations of pipes in plants in a semi-automated, and integrated package;

The ability to store data in a suitable digital format to allow such data to be transferred to other data processing facilities.

However, the digital techniques described above involve certain trade-offs:

Reduced accuracy in certain circumstances: The lower resolution of digital images can reduce the accuracy with which points in images can be identified, and the resultant accuracy of the calculated object space co-ordinates. This disadvantage is rapidly becoming insignificant with the increase in camera resolutions;

Hardware requirements: Present analogue systems, making use of established technology, require computer equipment only for calculations, and use somewhat cheaper photographic equipment. These costs are likely to fall as advanced digital equipment becomes commonplace.

 

Acknowledgements

The authors would like to thank the Foundation for Research Development and the University of Cape Town, without whose generous support this project would not have been possible.

 

References

1. YI Abdel-Aziz & HM Karara. Direct Linear Transformation from comparator coordinates into object space coordinates in close-range photogrammetry. Proceedings of ASP/UI Symposium on Close-Range Photogrammetry, Urbana, Illinois, pp. 1-18, 1971.

2. David Chapman, Andrew Deacon & Asad Hamid. CAD modelling of radioactive plants: The role of digital photogrammetry in hazardous nuclear environments. International Archives of Photogrammetry and Remote Sensing, Part B5, XXIX, 741-753, 1992.

3. Francis Hamit. As-built industrial visualization CAD meets digital photo and document. Advanced Imaging, pp. 75-77, 1995.

4. MA Jones, DP Chapman & AA Hamid. Close range photogrammetry using geometric primitives for efficient CAD modelling of industrial plants. International Archives of Photogrammetry and Remote Sensing, Part B5, XXXI, 284-289, 1996.

5. J Kramer & H Scholer. Photogrammetric measurement of piping systems. Presented Paper, 14th International Congress of ISP, Commission V, Working Group, Hamburg, 1980.

6. RM Littleworth & JH Chandler. Three dimensional computer graphics models by analytical photogrammetry. Photogrammetric Record, 85, 65-76, 1995.

7. H Rüther. Geodetic Engineering and Photogrammetry in quality control and manufacturing. South African Journal of Surveying and Mapping, 22, 1993.

8. ADN Smith. The explicit solution of the single picture resection problem, with a least squares adjustment to redundant control. Photogrammetric Record, 5, 113-121, 1965.

 

 

Received October 1996
Final version May 1997
