AI-Augmented Software Engineering: Automated Code Generation and Optimization Using Large Language Models

Authors

  • Sergey Levine, Software Engineering, University of Malaya, Kuala Lumpur, Malaysia.

DOI:

https://doi.org/10.63282/3050-9246.IJETCSIT-V1I4P103

Keywords:

AI-powered software engineering, Large Language Models (LLMs), automated code generation, code optimization, AI in software development, machine learning in coding, CI/CD automation, AI-driven bug detection, software quality assurance, AI-assisted programming

Abstract

The integration of artificial intelligence (AI) into software engineering has opened new avenues for improving productivity, quality, and efficiency. Large Language Models (LLMs) have emerged as powerful tools capable of generating and optimizing code, reducing the manual effort required in software development. This paper examines the current state and future potential of AI-augmented software engineering, focusing on automated code generation and optimization. We discuss the theoretical foundations, practical applications, and the challenges and opportunities presented by this technology. The paper also presents a detailed analysis of existing systems, case studies, and a comparative evaluation of different approaches. Finally, we outline a roadmap for future research and development in this field.
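The abstract's notion of automated code optimization can be made concrete with a toy illustration. The sketch below (not taken from the paper; the `ConstantFolder` class and `optimize` helper are hypothetical names for this example) uses Python's standard `ast` module to perform constant folding, one of the simplest program transformations an automated optimizer might apply:

```python
import ast
import operator

# Map AST operator nodes to their Python semantics.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

class ConstantFolder(ast.NodeTransformer):
    """Fold constant arithmetic expressions, e.g. 2 * 3 + 4 -> 10."""

    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold children first (bottom-up)
        op = OPS.get(type(node.op))
        if (op is not None
                and isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)):
            folded = ast.Constant(value=op(node.left.value, node.right.value))
            return ast.copy_location(folded, node)
        return node

def optimize(source: str) -> str:
    """Parse source, fold constants, and emit the optimized source."""
    tree = ast.parse(source)
    tree = ast.fix_missing_locations(ConstantFolder().visit(tree))
    return ast.unparse(tree)  # requires Python 3.9+

print(optimize("x = 2 * 3 + 4"))  # prints: x = 10
```

An LLM-based pipeline differs in that the transformation is proposed by a model rather than a hand-written rule, but the surrounding machinery (parse, transform, re-emit, then verify) is analogous.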

Published

2020-12-02

Section

Articles

How to Cite

1. Levine S. AI-Augmented Software Engineering: Automated Code Generation and Optimization Using Large Language Models. IJETCSIT [Internet]. 2020 Dec. 2 [cited 2025 Sep. 12];1(4):21-9. Available from: https://www.ijetcsit.org/index.php/ijetcsit/article/view/48
