Applications of Artificial Intelligence in Self-Developing Software.
DOI: https://doi.org/10.65204/djes.v3i1.449

Keywords: self-developing software, reinforcement learning, explainable artificial intelligence, cyclomatic complexity, self-processing

Abstract
This study addresses the challenge of developing highly dependable self-developing software (SDS) that operates fully autonomously in complex, dynamic work environments while strictly adhering to ethical and legal responsibility considerations. Its significance lies in tackling a fundamental problem spanning the software engineering and artificial intelligence domains and in bringing socio-technical issues to a quantitative level. The research hypotheses were that reinforcement learning (RL) models provide a more adaptable and powerful decision-making mechanism in software engineering; that the use of AI results in higher cyclomatic complexity and greater security verification effort; that the introduction of explainable AI (XAI) boosts trust and decreases contestation of automated outputs; and that AI-based self-healing solutions reduce unplanned downtime by at least 25%. The study is rooted in a meta-methodology combining critical interpretivism with quantitative research and employs advanced statistical models, including multiple regression, z-testing, and Markov Decision Processes (MDPs), for data analysis.

The results confirmed that the software self-development model was led mainly by artificial intelligence and that automated testing and verification tools yielded substantial operational improvements: test reliability improved by 85% and test coverage increased by 95%. Predictive self-healing systems also reduced unplanned downtime by 30.0% and increased mean time between failures (MTBF) by 35.6%, resulting in an 18.6% saving in annual maintenance costs.

However, the research identified deep socio-technical problems: automatically generated code exhibited 40% higher cyclomatic complexity and 37.1% more security vulnerabilities than human-written code. It also revealed a "productivity paradox": developers slowed down by 16% on certain tasks even while believing their productivity had increased by 20%. Finally, the findings demonstrated that XAI increased trust and reduced security breaches, and that rejection of self-modifying decisions on the grounds of opaqueness remains a significant impediment to full automation, with a propensity score of 0.79.
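To illustrate the MDP decision framework named above (and not the study's actual model), the sketch below runs value iteration on a toy two-state system; all states, actions, transition probabilities, and rewards are hypothetical placeholders chosen only to echo the self-healing scenario:

```python
# Minimal value-iteration sketch for a Markov Decision Process (MDP).
# States, actions, probabilities, and rewards are hypothetical examples.

# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "degraded": {
        "self_heal": [(0.9, "healthy", 0.0), (0.1, "degraded", -2.0)],
        "wait":      [(1.0, "degraded", -2.0)],
    },
    "healthy": {
        "wait":      [(0.95, "healthy", 1.0), (0.05, "degraded", 0.0)],
    },
}

def value_iteration(transitions, gamma=0.9, tol=1e-6):
    """Return the optimal state-value function V via the Bellman update."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            # Best expected discounted return over available actions.
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

V = value_iteration(transitions)
print({s: round(v, 2) for s, v in V.items()})
```

Under these toy rewards the healthy state is worth more than the degraded one, and "self_heal" dominates "wait" in the degraded state, mirroring the intuition behind AI-driven self-healing policies.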