Abstract: Data-to-text generation is a natural language processing task that produces coherent text from structured data. In recent years, data-to-text generation has shown great promise owing to popular neural network architectures trained end-to-end. Such methods can automatically process large amounts of data and generate coherent text, and are widely used in news writing, report generation, and similar applications. However, existing research exhibits defects in reasoning over information such as specific values and times, and fails to make full use of the structural information of the data to provide reasonable guidance for generation. Moreover, the generation process is prone to separating semantics from syntax during training. In this paper, we propose a data-to-text generation method based on the Transformer model and deep neural networks, and we introduce a Transformer Text Planning (TTP) algorithm that effectively controls the contextual information of the generated text and remedies the semantics-syntax separation of previous models. Experimental results on the public ROTOWIRE dataset show that our method outperforms competitive existing models. The proposed method can be directly applied to the task of generating coherent text from scattered data, and thus has practical application value.