Image-based 3D modeling is an effective way to reconstruct large-scale scenes, especially city-level scenarios. In the image-based modeling pipeline, obtaining a watertight mesh model from a noisy multi-view stereo point cloud is a key step in ensuring model quality. However, state-of-the-art methods rely on a global Delaunay-based optimization formed over all points and cameras, and encounter scalability problems when dealing with large scenes. To circumvent this limitation, this paper proposes a distributed surface reconstruction approach that can handle city-scale scenes with limited memory and time consumption. First, the whole scene is adaptively divided into several chunks with overlapping boundaries, such that each chunk fits within the memory limit. Then, a Delaunay-based optimization is performed to extract a mesh for each chunk in parallel. Finally, the local meshes are merged by resolving inconsistencies in the overlapping areas. We test the proposed method on three city-scale scenes with billions of points and tens of thousands of images, and demonstrate its scalability and completeness compared with state-of-the-art methods.
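The divide/reconstruct-in-parallel/merge strategy described above can be illustrated with a minimal sketch. This is not the paper's implementation: the chunking is reduced to a 1D spatial split with overlap, the per-chunk Delaunay-based mesh extraction is replaced by a placeholder, and the merge step simply deduplicates vertices in the overlapping regions (the paper instead resolves mesh inconsistencies there). All function names are hypothetical.

```python
# Hypothetical sketch of a divide / process-in-parallel / merge pipeline,
# loosely following the three stages described in the abstract.
from concurrent.futures import ThreadPoolExecutor


def split_with_overlap(points, num_chunks, overlap):
    """Split 1D point coordinates into num_chunks spans whose boundaries
    overlap by `overlap` units on each side (stand-in for the paper's
    adaptive scene partitioning)."""
    lo, hi = min(points), max(points)
    width = (hi - lo) / num_chunks
    chunks = []
    for i in range(num_chunks):
        a = lo + i * width - overlap
        b = lo + (i + 1) * width + overlap
        chunks.append([p for p in points if a <= p <= b])
    return chunks


def reconstruct_chunk(chunk):
    # Placeholder for the per-chunk Delaunay-based mesh extraction:
    # here we just return the chunk's unique points as "vertices".
    return sorted(set(chunk))


def merge_meshes(local_meshes):
    # Placeholder merge: keep one copy of each vertex that appears in
    # several chunks' overlap regions.
    merged = set()
    for mesh in local_meshes:
        merged.update(mesh)
    return sorted(merged)


def distributed_reconstruct(points, num_chunks=3, overlap=1.0):
    chunks = split_with_overlap(points, num_chunks, overlap)
    with ThreadPoolExecutor() as pool:          # chunks processed in parallel
        local = list(pool.map(reconstruct_chunk, chunks))
    return merge_meshes(local)
```

The overlap is what makes the final merge possible: each boundary region is reconstructed in two adjacent chunks, so the merge step has redundant geometry from which to resolve inconsistencies.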