Abstract
Generative Artificial Intelligence (GenAI) has attracted growing attention in recent years, notably since the launch of AI chatbots such as ChatGPT in 2022. In this research paper, we outline various technical aspects of GenAI and its mechanisms, with discussions on the architecture of a GenAI model, how an AI model is trained, and the procedures involved in generating content from a prompt. The paper also analyses the general advantages and disadvantages of GenAI, with in-depth discussions of the beneficial role GenAI plays in industry and the substantial challenges it poses, ranging from privacy issues to the creation of deepfakes. Additionally, we explore the opportunities and challenges that GenAI brings to a variety of fields, particularly education, healthcare and the business sector. Finally, we examine the steps taken by different nations and blocs regarding policies that will shape the future use of GenAI technology.
1. Introduction
Generative Artificial Intelligence (GenAI) refers to computational techniques capable of generating new, meaningful content such as text, images or audio from training data (Feuerriegel et al., 2023). AI more broadly encompasses the theory and development of computer systems able to perform tasks normally requiring human intelligence. Like machine learning (ML), which is a subfield of AI (Financial Times AI Glossary, 2023), GenAI is a subfield within AI, illustrating that AI covers a broader range of technologies, including ML (Bigelow, 2024). GenAI models identify patterns in known data and use neural networks to generate new content such as pictures and text (Stryker et al., 2024). Many GenAI tools have become very popular since the AI boom of the 2020s and are widely used in daily life; well-known examples include Claude, Gemini, OpenAI's ChatGPT, GPT-4, Gen-2 and Sora (Kumar, 2024).
1.1. THE HISTORY OF GENAI
In the early 20th century, the Russian mathematician Andrey Markov used a stochastic process to analyse the patterns of vowels and consonants in Eugene Onegin (Potter, n.d.). This process, later known as the Markov chain, has been used to model natural languages since its development (ibid.). Once a text corpus has been modelled as a chain of token-to-token transition probabilities, it can itself be used as a probabilistic text generator. This is likely the earliest example of algorithmically generated media.
In 1956, researchers from a variety of scientific fields held a workshop called the Dartmouth Summer Research Project (Solomonoff, 2023). They discussed the potential of machines to exhibit intelligence, carry out creative processes and simulate reasoning (ibid.). The American computer scientist John McCarthy gave this new academic discipline the name by which it is known today: artificial intelligence (McCarthy et al., 1956). With this, AI was formally born. In the 2000s, the growth of the internet produced an explosion in the amount of data being collected and processed, which pushed the processing power of computers to the level needed to handle enormous data flows (Kumar, 2024). Many new technologies and concepts have since been developed to support the advancement of AI.
Machine learning and neural networks have become increasingly accessible (Janiesch, 2021), providing new opportunities to develop smarter systems than before. A type of ML called deep learning, which rose rapidly in the 2010s, allowed machines to use layered algorithms to work out the correct answer from data (Greene, 2020). In 2021, OpenAI introduced a transformer-based model called DALL-E with the capacity to create images from text (Ramesh et al., 2021), a major step in AI-generated art. In November 2022, the now widely used AI tool ChatGPT was released (ibid.), driving generative AI into mainstream use for writing, coding and conversation.
1.2. TECHNICAL ASPECTS OF GENAI
GenAI models are trained through either supervised or self-supervised learning. In supervised learning, models are trained on pre-labelled data categorised by humans (Sharma et al., 2016). Self-supervised learning instead relies on the system to generate its own labels through techniques such as masked language modelling or contrastive learning (Chang et al., 2019). Initially, the system is untrained and its outputs are effectively random. However, through exposure to training data combined with optimisation algorithms such as backpropagation, the difference between the output and the target is progressively minimised (Feng & Wu, 2018). This process iterates until a predetermined threshold of accuracy has been reached, at which point the neural network can be considered trained (ibid.).
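To make this training loop concrete, the following minimal Python sketch (our own illustration, not drawn from any cited model) trains a tiny two-layer network by backpropagation on a small labelled dataset, stopping once the output-target error falls below a chosen threshold.

```python
# Minimal sketch of supervised training with backpropagation (illustrative only):
# a tiny network starts with random weights, and gradient updates reduce the
# gap between output and target until it falls below a chosen threshold.
import numpy as np

rng = np.random.default_rng(0)

# Toy labelled data (XOR): inputs X with human-provided targets y.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Untrained network: random weights, so initial outputs are effectively random.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, threshold = 1.0, 1e-3
for step in range(20000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    loss = np.mean((out - y) ** 2)       # difference between output and target
    if loss < threshold:                 # predetermined accuracy threshold
        break
    # Backpropagation: propagate the error back through each layer.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(f"stopped at step {step}, loss {loss:.4f}")
```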
The architecture of a GenAI model depends heavily on the task it is optimised for. Systems focused on text generation, such as GPT-3, use transformer-based architectures (see Figure 1), pioneered by Vaswani et al. (2017), because their self-attention mechanisms dynamically weigh the importance of different words in different contexts (ibid.).

Figure 1: A transformer model architecture (Vaswani et al., 2017, p.3).
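As a minimal illustration of the self-attention mechanism shown in Figure 1 (our own simplified example, not code from Vaswani et al.), the following Python snippet computes scaled dot-product self-attention: each token's query is compared against every key, and the resulting weights decide how much each value contributes to the output.

```python
# Simplified scaled dot-product self-attention (illustrative sketch).
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings; W*: learned projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: context-dependent word importance
    return weights @ V                               # weighted mix of the values

rng = np.random.default_rng(0)
d_model = 16
X = rng.normal(size=(5, d_model))                    # a toy sequence of 5 tokens
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)           # (5, 16)
```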
Systems that emphasise visual content, such as MidJourney, favour diffusion models (see Figure 2), as these operate effectively in continuous output spaces, in contrast to the discrete sampling methods used in text generation (Abbeel et al., 2020). They generate images by progressively removing random noise, a process formalised in Denoising Diffusion Probabilistic Models (DDPMs) (ibid.).

Figure 2: A diffusion model architecture (Blattmann et al., 2022, p.4).
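The following simplified Python sketch (our illustration, loosely following the DDPM formulation of Ho et al., 2020, with an assumed linear noise schedule) shows the forward noising process and the quantity a denoising network is trained to predict.

```python
# Simplified DDPM sketch: the forward process gradually adds Gaussian noise,
# and the model is trained to predict that noise so it can be removed again.
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)           # assumed linear noise schedule
alphas_cum = np.cumprod(1.0 - betas)

def add_noise(x0, t):
    """Forward diffusion: jump straight to noise level t."""
    eps = rng.normal(size=x0.shape)
    xt = np.sqrt(alphas_cum[t]) * x0 + np.sqrt(1.0 - alphas_cum[t]) * eps
    return xt, eps

# Training target: a denoising network eps_theta(xt, t) should output eps,
# i.e. minimise ||eps - eps_theta(xt, t)||^2 (the DDPM loss).
x0 = rng.normal(size=(8, 8))                 # stand-in for an image
xt, eps = add_noise(x0, t=500)
print(xt.shape, eps.shape)
```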
Hybrid models, such as DALL-E, have also been introduced; these combine elements of both transformer- and diffusion-based models (see Figure 3). However, this introduces redundancy, since separate parameters must be kept for text and image generation, increasing memory use by 1.4-1.8x compared with non-hybrid models (Chen et al., 2025). As such, current models tend to favour pure architectures, with ChatGPT, Perplexity and Gemini all using transformer models.

Figure 3: A hybrid model architecture, with aspects of both transformer and diffusion models for text and image generation respectively (Chen et al., 2025, p.3).
To generate content from a prompt, the system finds patterns in the prompt similar to those in its training data and then performs probabilistic inference to produce an output (Feng & Wu, 2018). Text-oriented systems tend to use probabilistic sampling, most commonly top-K or nucleus sampling. Top-K sampling limits the number of tokens the system can consider at once (Dauphin et al., 2018). Each token is assigned a probability, only the K most probable tokens are kept and the rest are discarded; a token is then sampled according to the probabilities of these K tokens (ibid.). This approach offers a trade-off between reliability and creativity: a smaller K value ensures the system sticks to patterns found in training, whilst a larger K value allows more variety in the generated text. Nucleus sampling, instead of choosing a fixed number of tokens, sets a cumulative probability threshold (Buys et al., 2020). Tokens are added to the candidate set, sorted from highest probability to lowest, until this threshold is met. This allows the size of the candidate set to grow or shrink with the model's confidence, resulting in more coherent text generation (ibid.).
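The two sampling strategies can be illustrated with a short Python sketch (our own example over a toy five-token vocabulary; the 0.9 threshold is an arbitrary assumption):

```python
# Illustrative top-K and nucleus sampling over a next-token distribution `probs`.
import numpy as np

rng = np.random.default_rng(0)

def top_k_sample(probs, k):
    """Keep only the k most probable tokens, renormalise, then sample."""
    top = np.argsort(probs)[-k:]                 # indices of the top-k tokens
    p = probs[top] / probs[top].sum()
    return rng.choice(top, p=p)

def nucleus_sample(probs, p_threshold=0.9):
    """Keep the smallest set of tokens whose cumulative probability meets the threshold."""
    order = np.argsort(probs)[::-1]              # highest probability first
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, p_threshold) + 1
    nucleus = order[:cutoff]
    p = probs[nucleus] / probs[nucleus].sum()
    return rng.choice(nucleus, p=p)

probs = np.array([0.45, 0.25, 0.15, 0.10, 0.05])  # toy 5-token vocabulary
print(top_k_sample(probs, k=3), nucleus_sample(probs, p_threshold=0.9))
```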
As mentioned previously, image generation systems operate more effectively in continuous output spaces, so they require different architectures and generation methods from text generation systems, chiefly DDPMs (Abbeel et al., 2020). Each model also has a guidance scale, which tells the system how closely to follow the prompt versus how freely to improvise (Ho & Salimans, 2022). A lower value lets the system be more creative, potentially introducing elements unrelated to the prompt, whilst a higher value produces an image that represents the prompt more literally (ibid.).
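A minimal sketch of how such a guidance scale is typically applied in classifier-free guidance (our illustration; the value 7.5 is an assumed example, not a prescribed setting) is shown below: the prompt-conditioned noise prediction is pushed away from the unconditional one in proportion to the scale.

```python
# Illustrative classifier-free guidance: blend unconditional and prompt-conditioned
# noise predictions; the guidance scale controls prompt adherence.
import numpy as np

def guided_noise(eps_uncond, eps_cond, guidance_scale):
    """guidance_scale near 1: looser, more creative outputs.
    Larger values: outputs follow the prompt more literally."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

rng = np.random.default_rng(0)
eps_uncond = rng.normal(size=(8, 8))   # prediction with an empty prompt
eps_cond = rng.normal(size=(8, 8))     # prediction with the user's prompt
print(guided_noise(eps_uncond, eps_cond, guidance_scale=7.5).shape)
```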
1.3. ADVANTAGES OF GENAI
Globally, revolutionary algorithms are being actively developed, shaping AI that would otherwise require human involvement (Pingili, 2025) and underscoring GenAI's vital role in driving future solutions. Since their launch, pioneering platforms such as ChatGPT, in its successive GPT-3.5- and GPT-4-based versions, have undergone iterative upgrades (Dhoni, 2023). To situate these advances within wider computing innovation trends, this section explores the transformative potential of GenAI by analysing its key industry benefits, including adaptability to a fast-paced technological landscape, streamlined cross-sector integration and scalable operational efficiency.
As technology continues to evolve, GenAI is increasingly becoming a game-changer for companies navigating changing demands. It provides unprecedented adaptability for accommodating dynamic business needs, marking a substantial leap beyond traditional automated methods (Pingili, 2025). Marzi and Balzano (2025) provide empirical support for the idea that GenAI amplifies the effect of adaptability on sustainability outcomes. Consequently, GenAI is helping businesses grow in smarter and more sustainable ways.
GenAI plays an important role in helping various industries manage complex organisational challenges by consolidating tasks and strengthening cross-functional coordination. A comprehensive evaluation finds that cross-functional teams are valuable for deploying GenAI solutions in essential industries such as manufacturing, finance, retail, transportation and construction, and for facilitating groundbreaking achievements in a technologically complex environment (Rane, 2023). GenAI's ability to consolidate tasks across departments, including marketing, operations, legal, human resources and finance, while simultaneously boosting output is exceptionally favourable for the business, education and healthcare domains (Nah et al., 2023). Hence, GenAI supports collaboration across departments within diverse sectors.
Furthermore, as businesses aim to work faster and reduce costs, many are turning to GenAI to modernise daily operations (Naqbi et al., 2024). By automating repetitive processes, generating real-time data visualisations and providing customised scheduling, GenAI enables professionals to optimise their time while encouraging diligence, collaboration and productivity (Dhoni, 2023). Due to rising recognition of GenAI’s capabilities to maximise efficiency, simplify operations and assist in strategic decision-making, multidisciplinary businesses are leveraging the cutting-edge technology in order to stay highly efficient and competitive (Naqbi et al., 2024). Ultimately, strategically expanding the application of GenAI tools allows companies to minimise wasted resources and make workflows leaner.
In sum, GenAI is harmonising industry practices by supporting growth goals, reconfiguring operational structures and uniting departments. As a result, its integration reinforces the core capabilities of flexible adaptation, internal alignment and scalable long-term progress.
1.4. CHALLENGES IN GENAI
Although GenAI brings the benefits and opportunities discussed in the previous section, many issues have arisen around it. In this section, we discuss and evaluate some of the challenges the world faces with the growth of GenAI.
Privacy and security concerns have grown alongside the surge in GenAI use. AI chatbots such as ChatGPT, while offering considerable convenience, collect a colossal amount of personal data (Nah et al., 2023). The underlying risk is that confidential information could be leaked to the public (ibid.). Data collected from crawled webpages can include personal details, threatening privacy and raising security concerns (Siau & Wang, 2020 cited in Nah et al., 2023). Therefore, although GenAI may seem a harmless aid, the implications of the data we input can be adverse in many ways. For instance, it was reported in 2023 that the chat history of some ChatGPT users could be seen by others on the platform due to a system error (Porter, 2023 cited in Nah et al., 2023). This shows that even one of the most advanced and robust digital innovations is vulnerable to a bug or flaw that can compromise sensitive personal details.
Furthermore, advances in AI tools have given rise to "deepfakes". Deepfake technology, an application of AI capable of transforming images and audio to produce fabricated content, intensifies the risk of privacy violations and identity theft (Al-kfairy et al., 2024). The ability of such technology to persuasively imitate anyone can have severe consequences, including distress from harassment, damage to an individual's reputation and financial blackmail (Shoaib et al., 2023 cited in Al-kfairy et al., 2024). Harmful and inappropriate content may also be produced with GenAI tools, including malicious content, foul or explicit language, discriminatory content and pornography (Nah et al., 2023). Even though strict safeguards have been put in place by GenAI platforms to prevent such content from being generated and displayed, these rules can be bypassed by strategic prompts, such as those that invoke DAN (Do Anything Now) or other jailbreak modes to elicit illicit responses or content that violates the law (ibid.).
Additionally, another crucial issue relating to GenAI is the growing difficulty of determining the authenticity of a work of art or other content (Nah et al., 2023). Convincing images that appear to capture legitimate real-life events or people may be manufactured using deepfake technology (ibid.). The capabilities of GenAI can therefore be turned to manipulating images and videos, exacerbating the spread of counterfeit information on social media (Gragnaniello et al., 2022 cited in Nah et al., 2023).
Moreover, copyright is another major concern surrounding GenAI that cannot be overlooked in today's technological landscape. As discussed earlier in this section, determining authenticity has become difficult because AI-generated content can be hard to distinguish from an original piece. As Al-kfairy et al. (2024) discuss, questions about genuineness, creativity and fair use are not easily answered for AI-generated works. They also pose essential questions such as: "Can an AI indeed be the author of a work? If so, how does one apply concepts like fair use or public domain to such creations?". This ambiguity challenges existing regulations' ability to ensure that the intellectual property rights of an original work are protected.
These issues need to be addressed through an effective framework. New regulations are required because, as noted by Nah et al. (2023), existing rules on GenAI, even those recently amended, are insufficient. This points to the need for robust policies that help mitigate the adverse effects of GenAI.
2. GENAI IN DIFFERENT SECTORS
As GenAI continues to grow at a rapid rate, so does the utilisation of this technology in different fields. In this section, we explore the usage, advantages and disadvantages of GenAI in education, business and marketing, and healthcare, with discussions on the transformations that this technology brings forth to the respective fields.
2.1. GENAI IN EDUCATION
GenAI technology is remodelling learning with interactive content, adaptive lessons and easily accessible features that appeal to an array of learner needs, disrupting all aspects of education (Strielkowski et al., 2024). For example, GenAI promotes curricular engagement by guiding younger learners in the development of their reading and maths skills while assisting college students in understanding complex subjects (ibid.). Therefore, GenAI is instrumental in elevating traditional education to a new level where students can learn new concepts with greater focus in an immersive manner. GenAI is changing academic environments, progressively promoting deeper learning and efficient classroom management by facilitating dynamic, personalised instruction and consolidating repetitive duties for educators.
The expanding role of GenAI in education is encouraging students and institutions to adopt more learner-centric strategies that blend digital platforms with traditional teaching methods. Students whose education integrates GenAI tools develop cognitively, gain confidence and build sophisticated, future-oriented critical thinking skills (Hikmawati & Mohammad, 2025). Importantly, nearly 70% of students enrolled in English academic programmes report positive attitudes toward ChatGPT, highlighting its impact on delivering information in formats that enhance clarity and student engagement (Duran et al., 2025). Hence, GenAI plays a major role in equipping and preparing students for their career paths.
Indeed, schools can pinpoint academic gaps and use GenAI to enrich learning experiences by reviewing student performance and harnessing feedback. To tailor education, adaptive learning technology uses GenAI data analytics to analyse student performance levels, learning habits and styles. Binhammad et al. (2024) state that GenAI-driven, role-based teaching approaches, especially those built on character-based and storytelling elements, improve student achievement, participation and retention through the unique presentation of information and user-generated input. Likewise, these tools give teachers insightful feedback and enable real-time adjustments to the materials being distributed. For instance, the Carnegie Learning platform customised its middle-school maths programme with GenAI-driven adaptive technology, which resulted in significant improvements in test results and increased student engagement through individualised assignments (Strielkowski et al., 2024). Therefore, GenAI-supported learning refines instruction and accelerates progress.
GenAI is increasingly recognised as a valuable teacher's resource, simplifying routine tasks and providing specialised assistance to reduce teacher workload (Hashem et al., 2023). Additionally, by automating worksheets, tutorials, assessments and feedback, GenAI enhances a teacher's productivity and time management (Mittal et al., 2024). Teachers can use GenAI as a virtual assistant to respond promptly and accurately to questions on lesson planning, subject matter, teaching techniques and classroom management (Hashem et al., 2023). As a result, educators may devote more time to specialised instruction.
On the other hand, GenAI models have prompted scholarly debate about plagiarism in academic settings (Crompton, 2024). Students may circumvent their academic obligations by submitting coursework generated by GenAI as their own. Differentiating between originality and imitation has become more difficult due to the use of private datasets to build large language models such as ChatGPT (ibid.). The widespread use of GenAI platforms in courses and the gaps in integrity regulations specific to GenAI further undermine conventional academic norms. Crompton and Burke (2024) illustrate the emerging pattern by reporting that 53% of over 1,000 surveyed students used ChatGPT to write a class essay. Therefore, schools should establish explicit guidelines to ensure GenAI tools are used ethically without undermining academic honesty.
Overall, by adapting learning experiences, improving student performance and participation, and increasing teacher productivity, GenAI is altering education, although it also raises questions about academic credibility. It offers dynamic, personalised instruction and reduces repetitive tasks for teachers, helping to shape learning environments that are more efficient, immersive and future-focused.
2.2. GENAI IN BUSINESS AND MARKETING
With the rise of GenAI come a variety of advantages and disadvantages for the current and future operation of the business and economic world. To begin with, chartered accountants around the world, including young professionals in the field, have started to embrace AI (Ipsos UK, 2025). According to research carried out by Ipsos UK (2025), 85% of those surveyed about the use of AI in accountancy were willing to use AI technology if given the chance. This statistic shows how strongly GenAI can influence individuals and businesses to integrate it into their work structures.
There are a variety of advantages to incorporating GenAI in business and marketing. Procurement, for example, is one of the most important functions in the business supply chain that can make use of GenAI. In the sourcing stage, GenAI can effectively modernise Request for Proposal (RFP) procedures (CIPS, 2025). For example, GenAI can be deployed as a chatbot that handles inquiries from suppliers, thereby automating the inquiry process and enabling procurement officials to focus on other major work (ibid.).
Moreover, GenAI can aid proposal analysis by efficiently summarising supplier proposals, consolidating performance information from past interactions and recommending a suitable supplier based on the organisation's requirements (ibid.). It can also conduct risk assessments while the RFP process is under way by initiating background checks on invited suppliers and flagging any risks (CIPS, 2025). Additionally, GenAI can be deployed in the contracting phase of procurement: it can generate and propose a detailed contract based on the conditions and policies agreed in the sourcing stage (ibid.), and it can optimise clauses by scrutinising past contract performance and industry benchmarks (ibid.).
On the other hand, there are notable disadvantages to employing GenAI in the business sector. For example, GenAI can potentially cause unemployment and reduce the number of jobs available (Wang & Siau, 2019; Zarifhonarvar, 2023 cited in Nah et al., 2023). Organisations that make heavy use of AI technologies are employing fewer low-skilled workers (Li et al., 2021 cited in Nah et al., 2023). This may contribute to rising unemployment as companies adopt AI technologies in place of human labour. Some roles, such as data analysts and customer service agents, may become redundant as AI continues to grow (Nah et al., 2023). Additionally, in terms of socio-economic challenges, Nah et al. (2023) note that GenAI may also widen the digital divide, and more objections about monopolies and income inequality have surfaced recently with the adoption of GenAI (ibid.). Moreover, people employed in sectors whose jobs are replaceable by GenAI may lack the expertise for a job switch (Nah et al., 2023), and the income gap may widen between those who have developed their skills in using GenAI and those who have not (ibid.).
2.3. GENAI IN HEALTHCARE
While traditional AI has already revolutionised healthcare through its enhanced diagnostics and optimisation of operational efficiency (Topol, 2019), GenAI can further transform this field through optimisation of clinical workflows and synthesis of data.
The lack of high-quality medical datasets has hindered progress in healthcare in recent years (Banerjee et al., 2022); however, GenAI offers a partial solution through synthesised yet accurate data. Advances in generative adversarial networks (GANs) and diffusion models mean systems can produce medical images, such as CT liver scans, that are indistinguishable from real images to a radiologist (Amitai et al., 2018). Not only does this mitigate privacy issues, but it can also help advance research into the treatment of rare diseases through data synthesis. One example is an experiment by Han et al. (2019), in which tumour segmentation Dice scores improved by 18% using paediatric brain tumour images synthesised from 50 cases.
With upwards of 40% of clinician time spent on administrative tasks (Arndt et al., 2017), not only is time wasted but workers are also susceptible to burnout from the high repetition of similar tasks. GenAI can be used to speed up these tasks: Ayers et al. (2023) show that chatbot responses produced with ChatGPT are noticeably quicker than human ones, with similar results for note generation and the quality of the output (ibid.). With these advancements, public services can provide healthcare to more patients, while private practices gain more flexibility in employees' schedules as appointments take less time.
On the other hand, the use of GenAI presents certain drawbacks, and its application in healthcare has caused concern among professionals. While GenAI has proven beneficial within the field, it is important to balance its ethical issues against continued implementation. One factor that hinders the use of GenAI in medicine is data privacy, particularly patient confidentiality. For medical professionals to utilise GenAI in this context, the models must be trained on large amounts of patient data, which poses a multitude of security threats (Chen & Esmaeilzadeh, 2024). According to Chen and Esmaeilzadeh (2024), sensitive health information could be leaked if the generative model is misused: patient data that was anonymised for training could be re-identified, or hackers could gain access to AI systems to acquire the data used to train the generative model.
Another factor to consider is the reliability of GenAI in medical applications. Rumale et al. (2024) explain that a faithfulness hallucination, as the term is used in clinical studies, occurs when a medical GenAI model generates incorrect information. Their testing shows that GPT-4o and LLaMA-3, both used for medical record summaries, produce such hallucinations (ibid.). Medical hallucinations arise from a multitude of factors relating to the data, the model architecture or the complexities of healthcare (Kim et al., 2025). Data-related factors include a lack of diversity within datasets, which can bias responses, and mistakes in clinical data, such as misspellings, that confuse the models (ibid.); model-related factors include overconfidence and over-generalisation, which produce irrelevant or incorrect output (ibid.); and healthcare-related challenges include ambiguous abbreviations and evolving medical knowledge (ibid.). These hallucinations compromise the reliability of clinical documentation (ibid.).
In conclusion, GenAI has the potential to be a useful tool within healthcare through its uses in data synthesis and administrative efficiency. GenAI’s ability to relieve challenges in the field by reducing workloads and providing realistic data for medical education proves to be beneficial to healthcare professionals. The opportunities that GenAI provides should be balanced with concerns about patient privacy and the reliability of applications. To ensure that healthcare can continue to benefit from the uses of GenAI, ethical frameworks need to be developed to address issues that arise.
3. THE FUTURE OF GENAI
The rapid adoption of GenAI in daily life is comparable to the uptake of other revolutionary technologies, such as the rise of the internet. The internet transformed modes of communication and the flow of information; likewise, GenAI alters how humans develop ideas and perform tasks (Petkovic, 2025). But unlike the internet, which connected people to existing creative work and knowledge, GenAI can itself produce creative work and present knowledge (ibid.). Given the speed of GenAI adoption and the leap in its capabilities, it is imperative that professionals develop a responsible and effective way to proceed with this new technology (ibid.).
This creates a need for frameworks that clearly define a future in which GenAI can be used ethically. Governments and blocs such as the United States (US), the European Union (EU) and the People's Republic of China (PRC) have already laid out policies on the future use of AI covering topics such as misuse, national security and data integrity (Christodorescu et al., 2024). However, each of these approaches frames GenAI within the needs of its own system; a more general approach to mitigating the future risks of GenAI was needed, leading to the G7 Code of Conduct and the Bletchley Declaration (ibid.).
The G7 Code of Conduct sets out actions developers can take for responsible AI creation, such as risk management and transparency, and the Bletchley Declaration is a show of collaboration among nations to ensure that AI is used ethically; however, neither is a concrete system for how AI should be used, as both are voluntary and symbolic (Christodorescu et al., 2024). Kaur et al. (2025) describe how collaborative efforts by developers, policymakers and ethical consultants can be used to set the laws needed to ensure that AI, and by extension GenAI, is used responsibly. They also highlight that developers must prioritise testing their systems and being transparent in order to comply with ethical standards. The future of GenAI depends on how professionals develop it, so it must be built safely, which means focusing on a system with enforceable rules that ensure transparency, security and fairness.
While policy is important, the future of GenAI also rests on the technology still to come. Agentic AI is the next step in the journey of GenAI; it exhibits more autonomy, stronger reasoning and richer engagement (Schneider, 2025). Unlike GenAI, which focuses on isolated decisions, Agentic AI can view objectives broadly and perform more complex reasoning, such as planning (ibid.). Another key component of Agentic AI is its flexibility (Acharya et al., 2025): given the same scenario, it can act differently to pursue a goal (ibid.). In a customer service example given by Acharya et al. (2025), Agentic AI changes its communication style to whichever works best with a customer's mood. This flexibility allows Agentic AI to adapt to new information in a changing environment (ibid.). Across the tasks that both kinds of model can perform, Agentic AI's ability to reason allows it to reduce hallucinations and avoid potential biases (Schneider, 2025). Agentic AI has capabilities beyond the realm of GenAI, but it does not come without its shortcomings (ibid.).
Although Agentic AI makes fewer errors than GenAI, it carries an increased danger of cumulative error across multiple steps and is more vulnerable to security risks and attacks (Schneider, 2025). Other problems concern possible ethical implications: Agentic AI may take unethical actions to fulfil a goal, regardless of the original application, and it is difficult to control because it can act unpredictably in pursuit of a purpose (ibid.). As GenAI continues to advance, it is essential to be mindful of the actions taken now. As new technologies such as Agentic AI emerge, policymakers need to guide their development with responsible policies. Concrete laws and monitored innovation will ensure that the next steps developers take with GenAI are ethical and effective. GenAI holds massive potential for the advancement of technology, and it is important to pair it with safety and control.
4. Conclusion
GenAI continues to demonstrate significant impact across diverse industries and educational domains through its adaptive capabilities, scalable framework and capacity for real-time personalisation (Pingili, 2025). Moreover, GenAI improves instructional engagement (Hikmawati & Mohammad, 2025) and stimulates organisational efficiency in changing contexts (Marzi & Balzano, 2025) by merging data-driven insights with flexible computational models. Through evaluating real-world applications, this study concludes that GenAI functions as a catalyst for system updates and leaner processes across the education, healthcare, business and marketing sectors. Future research should explore the long-term effects of GenAI supplementation in education, healthcare and business settings, with emphasis on transparency and ethical governance (Schneider, 2025).
A sustainable integration of GenAI depends on a balance between innovation, ethics and responsibility. As organisations and governments continue to guide development within GenAI and weigh its challenges against its opportunities, it is critical to establish frameworks built on transparency, fairness and inclusivity. With more structured planning, GenAI has the potential to become a crucial part of innovation throughout society.
Bibliography
Acharya, D.B., Kuppan, K. & Divya, B. (2025). Agentic AI: Autonomous Intelligence for Complex Goals – A Comprehensive Survey. IEEE Access, 13, p.1. doi:https://doi.org/10.1109/access.2025.3532853.
Ipsos UK (2025). AI and the Future of the Global Chartered Accountancy Profession. Research carried out by Ipsos UK on behalf of Chartered Accountants Worldwide. Available at: https://charteredaccountantsworldwide.com/wp-content/uploads/2025/04/CAW_AI-in-Accountancy-web.pdf.
Al-kfairy, M., Mustafa, D., Kshetri, N., Insiew, M. & Alfandi, O. (2024). Ethical Challenges and Solutions of Generative AI: An Interdisciplinary Perspective. Informatics, 11(3), 58. doi:https://doi.org/10.3390/informatics11030058.
Arndt, B.G., Beasley, J.W., Watkinson, M.D., Temte, J.L., Tuan, W.J., Sinsky, C.A. & Gilchrist, V.J. (2017). Tethered to the EHR: primary care physician workload assessment using EHR event log data and time-motion observations. The Annals of Family Medicine, 15(5), pp.419-426. doi:https://doi.org/10.1370/afm.2121.
Binhammad, M.H.Y. et al. (2024) Investigating how generative AI can create personalized learning materials tailored to individual student needs. Creative Education, 15(07), pp. 1499–1523. doi:https://doi.org/10.4236/ce.2024.157091.
Chen, Y. & Esmaeilzadeh, P. (2024). Generative AI in Medical Practice: In-Depth Exploration of Privacy and Security Challenges. Journal of Medical Internet Research. doi:https://doi.org/10.2196/53008.
Christodorescu, M., Craven, R., Feizi, S., Gong, N., Hoffmann, M., Jha, S., Jiang, Z., Saberi, K.M., Mitchell, J., Newman, J., Probasco, E., Qi, Y., Shams, K. & Turek, M. (2024). Securing the Future of GenAI: Policy and Technology. arXiv.org. Available at: https://arxiv.org/abs/2407.12999.
CIPS (2025). Implementing GenAI across the supplier lifecycle. Available at: https://cips-download.cips.org/expert-reports/implementing-genai-across-the-supplier-lifecycle.
Crompton, H. (2024). The promise and pitfalls of GenAI in education. eLearn: World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 2024, pp.690–695. Available at: https://www.learntechlib.org/p/225289/.
Crompton, H. & Burke, D. (2024). The Educational Affordances and Challenges of CHATGPT: State of the Field. TechTrends, 68(2), pp.380–392. doi:https://doi.org/10.1007/s11528-024-00939-0.
Devlin, J., Chang, M.W., Lee, K. & Toutanova, K. (2019) Bert: Pre-training of deep bidirectional transformers for language understanding. Proceedings of the 2019 conference of the North American chapter of the association for computational linguistics: human language technologies, 1, pp.4171-4186. doi:https://doi.org/10.18653/v1/N19-1423.
Dhoni, P. (2023). Unleashing the Potential: Overcoming Hurdles and Embracing Generative AI in IT Workplaces: Advantages, Guidelines, and Policies. TechRxiv. doi:https://doi.org/10.36227/techrxiv.23696709.v1.
Duran, V., Ersanlı, E. & Çelik, H. (2025). Unveiling student sentiment dynamics toward AI‐based education through statistical analysis and Monte Carlo simulation. British Educational Research Journal [preprint]. doi:https://doi.org/10.1002/berj.4188.
Fan, A., Lewis, M. & Dauphin, Y. (2018). Hierarchical neural story generation. arXiv [preprint], 1805.04833. doi:https://doi.org/10.48550/arXiv.1805.04833.
Frid-Adar, M., Diamant, I., Klang, E., Amitai, M., Goldberger, J. & Greenspan, H. (2018). GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification. Neurocomputing, 321, pp.321-331. doi:https://doi.org/10.1016/j.neucom.2018.09.013.
Greene, T. (2019). 2010–2019: The rise of deep learning. The Next Web. Available at: https://thenextweb.com/news/2010-2019-the-rise-of-deep-learning.
Han, C., Rundo, L., Araki, R., Furukawa, Y., Mauri, G., Nakayama, H. & Hayashi, H. (2019). Infinite brain MR images: PGGAN-based data augmentation for tumor detection. Neural approaches to dynamics of signal exchanges (Singapore: Springer Singapore), pp.291-303. doi:https://doi.org/10.48550/arXiv.1903.12564.
Hashem, R. et al. (2023). AI to the rescue: Exploring the potential of ChatGPT as a teacher ally for workload relief and burnout prevention. Research and Practice in Technology Enhanced Learning, 19, pp.23. doi:https://doi.org/10.58459/rptel.2024.19023.
Hikmawati, A. & Mohammad, N.K. (2025). Enhancing Critical Thinking with Gen AI: A Literature Review. Buletin Edukasi Indonesia, 4(1), pp.40–46. doi:https://doi.org/10.56741/bei.v4i01.764.
Ho, J., Jain, A. & Abbeel, P. (2020). Denoising diffusion probabilistic models. Advances in neural information processing systems, 33, pp.6840-6851. doi:https://doi.org/10.48550/arXiv.2006.11239.
Ho, J. & Salimans, T. (2022). Classifier-free diffusion guidance. arXiv [preprint], 2207.12598. doi:https://doi.org/10.48550/arXiv.2207.12598.
Holtzman, A., Buys, J., Du, L., Forbes, M. & Choi, Y. (2019). The curious case of neural text degeneration. arXiv [preprint], 1904.09751. doi:https://doi.org/10.48550/arXiv.1904.09751.
Janiesch, C., Zschech, P. & Heinrich, K. (2021). Machine learning and deep learning. Electronic Markets, 31(3), pp.345–360. Available at: https://link.springer.com/article/10.1007/s12525-021-00475-2.
Kaur, P., Kaushik, A., Sehrawat, K., Arya, K. & Singh, P. (2025). Is GenAI Really the Future of this World: Dark Side of GenAI. SSRN Electronic Journal. doi:https://doi.org/10.2139/ssrn.5190963.
Kim, Y., Jeong, H., Chen, S., Li, S.S., Lu, M., Alhamoud, K., Mun, J., Grau, C., Jung, M., Gameiro, R., Fan, L., Park, E., Lin, T., Yoon, J., Yoon, W., Sap, M., Tsvetkov, Y., Liang, P., Xu, X. & Liu, X. (2025). Medical Hallucinations in Foundation Models and Their Impact on Healthcare. arXiv.org. Available at: https://arxiv.org/abs/2503.05777.
Kumar, Y. (2024). The AI-powered evolution of big data. Applied Sciences, 14(22), p.10176. doi:https://doi.org/10.3390/app142210176.
Marzi, G. & Balzano, M. (2025) Artificial intelligence and the reconfiguration of NPD Teams: Adaptability and skill differentiation in sustainable product innovation. Technovation, 145, pp.103254. doi:https://doi.org/10.1016/j.technovation.2025.103254.
McCarthy, J., Minsky, M.L., Rochester, N. & Shannon, C.E. (1956). A proposal for the Dartmouth summer research project on artificial intelligence. Available at: https://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html.
Mittal, U. et al. (2024). A Comprehensive Review on Generative AI for Education. IEEE Access, p.1. doi:https://doi.org/10.1109/access.2024.3468368.
Nah, F.F. et al. (2023). Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research, 25(3), pp.277–304. doi:https://doi.org/10.1080/15228053.2023.2233814.
Nah, F., Cai, J., Zheng, R. & Pang, N. (2023). An Activity System-based Perspective of Generative AI: Challenges and Research Directions. AIS Transactions on Human-Computer Interaction, 15(3), pp.247–267. doi:https://doi.org/10.17705/1thci.00190.
Naqbi, H.A., Bahroun, Z. & Ahmed, V. (2024). Enhancing Work Productivity through Generative Artificial Intelligence: A Comprehensive Literature Review. Sustainability, 16(3), p.1166. doi:https://doi.org/10.3390/su16031166.
Petkovic, D. (2025). GenAI: Our Tool or Our Future Master? TechRxiv. doi:https://doi.org/10.36227/techrxiv.175086742.25708267/v1.
Pingili, R. (2025). Generative AI Unlocking Adaptive Workflow Design. Journal of Next-Generation Research 5.0 (JNGR 5.0), 1(2), pp.1–13. Available at: https://jngr5.com/index.php/journal-of-next-generation-resea/article/view/72/47.
Potter, E. (n.d.). Markov Chains Are the Original Language Models. Elijah Potter Dev. Available at: https://elijahpotter.dev/articles/markov_chains_are_the_original_language_models.
Rajpurkar, P., Chen, E., Banerjee, O. & Topol, E.J. (2022). AI in health and medicine. Nature medicine, 28(1), pp.31-38. doi:https://doi.org/10.1038/s41591-021-01614-0.
Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A. & Sutskever, I. (2021). DALL·E: Creating images from text. OpenAI. Available at: https://openai.com/research/dall-e.
Rane, N. L. (2023). Multidisciplinary collaboration: key players in successful implementation of ChatGPT and similar generative artificial intelligence in manufacturing, finance, retail, transportation, and construction industry. OSF [preprint]. doi:https://doi.org/10.31219/osf.io/npm3d.
Rombach, R., Blattmann, A., Lorenz, D., Esser, P. & Ommer, B. (2022). High-resolution image synthesis with latent diffusion models. Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp.10684-10695. doi:https://doi.org/10.48550/arXiv.2112.10752.
Rumale, P., Tiwari, S., Naik, T.G., Gupta, S., Thai, D.N., Zhao, W., Kwon, S., Ardulov, V., Tarabishy, K., Mccallum, A. & Salloum, W. (2024). Faithfulness Hallucination Detection in Healthcare AI. Available at: https://openreview.net/pdf?id=6eMIzKFOpJ.
Schneider, J. (2025). Generative to Agentic AI: Survey, Conceptualization, and Challenges. arXiv.org. Available at: https://arxiv.org/abs/2504.18875.
Singh, A., Thakur, N. & Sharma, A. (2016) A review of supervised machine learning algorithms. 2016 3rd international conference on computing for sustainable global development (INDIACom), pp.1310-1315.
Solomonoff, G. (2023). The Meeting of the Minds That Launched AI. IEEE Spectrum. Available at: https://spectrum.ieee.org/dartmouth-ai-workshop.
Strielkowski, W. et al. (2024). AI‐driven adaptive learning for sustainable educational transformation. Sustainable Development [preprint]. doi:https://doi.org/10.1002/sd.3221.
Stryker, C. & Scapicchio, M. (n.d.). What is generative AI? IBM. Available at: https://www.ibm.com/think/topics/generative-ai.
Topol, E.J. (2019). High-performance medicine: the convergence of human and artificial intelligence. Nature medicine, 25(1), pp.44-56. doi:https://doi.org/10.1038/s41591-018-0300-7.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł. & Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems, 30. doi:https://doi.org/10.48550/arXiv.1706.03762.
Wu, Y.C. & Feng, J.W. (2018). Development and application of artificial neural network. Wireless Personal Communications, 102(2), pp.1645-1656. doi:https://doi.org/10.1007/s11277-017-5224-x.
Zhang, X., Guo, J., Zhao, S., Fu, M., Duan, L., Wang, G.H., Chen, Q.G., Xu, Z., Luo, W. & Zhang, K. (2025). Unified multimodal understanding and generation models: Advances, challenges, and opportunities. arXiv [preprint], 2505.02567. doi:https://doi.org/10.48550/arXiv.2505.02567.