Enhancing Reliability in Java Enterprise Systems through Comparative Analysis of Automated Testing Frameworks

Authors

  • Srikanth Reddy Gudi, Software Engineer, Express Scripts Inc., Herndon, Virginia, USA.

DOI:

https://doi.org/10.63282/3050-9246.IJETCSIT-V4I2P115

Keywords:

Automated Testing Frameworks, Java Enterprise Systems, Software Reliability, Defect Detection, Test Automation, Framework Comparison

Abstract

In complex software environments, Java enterprise systems require robust testing frameworks that promote reliability and maintainability. This paper presents a detailed comparative study of Java-specific automated testing frameworks in detecting defects and improving both code quality and overall system reliability. The research evaluates the performance metrics, defect-detection potency, and integration feasibility of multiple testing frameworks, including JUnit, TestNG, Selenium, and Calabash. The main goal is to determine the testing approaches that yield the highest degree of reliability at the lowest resource cost for enterprise-level Java applications. The study uses a mixed-methods design that combines quantitative performance analysis with qualitative framework evaluation, drawing on empirical data from real-world Java project implementations. Frameworks are compared on test execution time, coverage, defect-finding ability, and maintenance overhead. The results show that framework performance varies considerably across testing environments, indicating that some frameworks are better suited to specific circumstances than others. Notably, integrated testing approaches that combine multiple frameworks produce a 34.7% better defect-detection rate than single-framework implementations. These findings offer important insights for software engineering teams seeking to optimize their testing strategies and, ultimately, improve the reliability of Java enterprise systems through informed framework selection and implementation practices.
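The abstract's central claim is that combining frameworks that test at different levels (e.g., unit-level JUnit/TestNG alongside UI-level Selenium) catches more defects than any single framework alone, because each framework surfaces a partially distinct defect set. The following minimal sketch illustrates that reasoning with hypothetical defect IDs; the specific sets and the 10-defect total are invented for illustration and are not data from the study.

```java
import java.util.Set;
import java.util.TreeSet;

public class CombinedDetection {
    public static void main(String[] args) {
        // Hypothetical defect IDs caught by each framework in isolation.
        Set<Integer> unitLevel = Set.of(1, 2, 3, 5, 8); // e.g. JUnit/TestNG suites
        Set<Integer> uiLevel   = Set.of(3, 4, 5, 9);    // e.g. Selenium UI tests
        int totalDefects = 10;

        // The integrated approach detects the union of both defect sets.
        Set<Integer> combined = new TreeSet<>(unitLevel);
        combined.addAll(uiLevel);

        System.out.printf("unit-level rate: %.0f%%%n", 100.0 * unitLevel.size() / totalDefects); // 50%
        System.out.printf("ui-level rate:   %.0f%%%n", 100.0 * uiLevel.size() / totalDefects);   // 40%
        System.out.printf("combined rate:   %.0f%%%n", 100.0 * combined.size() / totalDefects);  // 70%
    }
}
```

Because the two sets overlap only partially, the combined suite detects 7 of 10 defects versus 5 and 4 for the individual suites, mirroring in miniature the paper's finding that multi-framework integration raises the detection rate.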



Published

2023-06-30

Section

Articles

How to Cite

Gudi SR. Enhancing Reliability in Java Enterprise Systems through Comparative Analysis of Automated Testing Frameworks. IJETCSIT [Internet]. 2023 Jun. 30 [cited 2025 Nov. 19];4(2):151-60. Available from: https://www.ijetcsit.org/index.php/ijetcsit/article/view/476
