Abstract

This article examines the trajectory and design principles of artificial intelligence (AI)-enabled robots in human-robot interaction (HRI), as well as their social effects, ethical considerations and future implications. Robots have moved from simple industrial machines like Unimate to modern-day social companions, becoming highly adaptive tools capable of supporting emotional, cognitive and social human needs. Design principles are key to delivering robust and usable robots for human-machine interaction; key considerations include adaptability, anthropomorphism, inclusivity and transparency. 

The article provides an overview of the increasing use of AI robots in customer service and smart assistance. In customer service, a range of chatbots and virtual assistants use mechanical, analytical, intuitive and empathetic intelligence to improve speed, personalisation and efficiency. In healthcare and disability support services, AI robots support children during stressful medical procedures, provide stress relief, assist in dementia care and increase autonomy for people with disabilities through adaptive systems.

While AI-enabled robots are clearly producing substantial benefits, there are still risks and challenges to navigate, including privacy, emotional dependence, loss of human empathy and accountability. The future of HRI holds opportunities in companionship, healthcare and educational contexts but is constrained by fragile trust, questions of authenticity, cultural resistance and a lack of transparency regarding privacy. We conclude by arguing that inclusivity and transparency, supported by regulation designed to strengthen human relationships rather than replace them, are crucial for successful integration.

1. Introduction

1.1 What Is a Robot?

In today’s world, the term ‘robot’ carries a wider meaning than it did at its inception. Definitions have varied and shifted over the years; one common formulation describes a robot as a self-operated machine that substitutes for human labour, regardless of its resemblance to humans or its ability to perform tasks in a human-like style (Moravec, 2025). 

As robots have become more common in society, their roles have drastically changed over the years. In the modern era, robots are not only capable of executing repetitive tasks like manufacturing and assembly line work, but can also perform tasks that require thinking skills and decision-making abilities – previously only executed by humans (Miner, 2023). Simultaneously, robots are being updated to understand and perform in emotional and social roles, providing fellowship to individuals experiencing social isolation and offering multiple mental health benefits (Miner, 2023). It is evident that the role of robots in our society is evolving quickly and profoundly as a result of advancements in robotics technology (Miner, 2023).

1.2 The Evolution of Industrial Robots

While the evolution of robots began in ancient Greece, the term ‘robot’ was not used until 1920 by writer Karel Čapek in his play featuring a factory that creates thousands of artificial humanoids, in which they work with minimal cost and relentless effort (Thompson, n.d.). Čapek named these machines ‘robots’ after the Czech word robota, referring to the forced labour of serfs; this is what influenced the modern name for robots (Thompson, n.d.). 

In 1954, the first industrial robot, Unimate, was invented by George Devol alongside Joseph Engelberger. Its purpose was to handle hazardous duties in automotive production facilities, including lifting and stacking hot, die-cast metal components (Patsiaouras, 2024). Until the 2010s, robots did not work alongside humans; the risk of the heavy machinery breaking down or malfunctioning was deemed too dangerous (Turney, 2022). Since the 2010s, better and lighter materials have made it safer for humans to work together with robots (Turney, 2022). Though the visual appearance of industrial robots has not changed much over the years, their components, such as sensors and effectors, have significantly improved (Turney, 2022). 

1.3 The Evolution of Social Robots

The foundations of social robotics began in the late 20th century. A landmark example was Kismet, developed at MIT, which was able to engage in expressive face-to-face interaction through basic emotional responses (Breazeal, 2003). In the 2000s, therapeutic robots such as PARO – a robot made to resemble a baby seal – were introduced in elderly care and dementia treatment (Shibata et al., 2005). Studies showed that PARO reduced patient stress, increased social interaction and provided comfort, proving robots could fulfil emotional as well as functional roles (Wada et al., 2010; Petersen et al., 2019; Wada et al., 2025; Petersen et al., 2021; Shibata et al., 2005).

Advancements in AI have since accelerated the development of socially assistive robots (SARs). These robots combine many different types of skills, including natural language processing, computer vision and adaptive learning, to personalise interactions with humans in real time (Akhyani & Moradi, 2023). For example, robots can now adjust their tone, responses or activities based on a user’s emotional state, engagement level or health needs (Breazeal, 2022). These abilities range from supporting children with autism to providing companionship for older adults, highlighting their growing role in many different fields including education, healthcare and social integration.

Inevitably, the evolution of social robots also raises ethical questions. Unlike industrial robots, social robots occupy more intimate spaces in human life. Trust, privacy and transparency are central concerns, as users may perceive these machines as companions rather than tools (Sharkey & Sharkey, 2010). Designers now emphasise co-design processes involving users, caregivers and communities to ensure robots remain supportive and ethical rather than misleading or exploitative (Van Wynsberghe, 2013). Robots are no longer restricted to factories; they are becoming embedded in the social fabric of daily life, shaping not only how work is performed but also how humans connect, communicate and experience support (Breazeal, 2022; Tapus et al., 2007). In this way, a relationship of trust can be built between humans and robots.

2. Design Principles of Robots

The design of robots is developed according to a set of principles that take both technical capabilities and human needs into account. These principles dictate not only how robots function, but also the methods by which people engage with them. Known as design principles, they provide a foundation for decisions about appearance, interaction styles, adaptability and ethical responsibilities, ensuring that robots are not only effective machines but also accessible and trustworthy partners in human environments.

2.1 Designing for Human Interaction and Recognisability

A critical realisation from recent research is that robots do not need to perfectly mimic human appearance in order to be effective; rather, they require a clear focal point for interaction, such as a face or another recognisable front through which users can interpret responses (Onyeulo & Gandhi, 2020). Moreover, while spoken language can certainly enhance communication, it is by no means necessary for successful HRI. The ability to convey non-verbal information through gestures, gaze direction or subtle limb movement is just as effective at creating an understanding between robots and humans (Onyeulo & Gandhi, 2020). Design therefore generally favours communicative intelligibility over sheer imitation of human behaviour.

Anthropomorphic features, such as facial expressions and posture, are also typically incorporated into the design of social robots as they facilitate interaction and make it appear more natural (Onyeulo & Gandhi, 2020). In other words, these qualities provide a familiar setting through which humans can easily interpret the robot’s actions and intentions. However, it is vital that designers maintain a careful balance, as too much realism risks pushing robots into the uncanny valley, where human-like qualities and characteristics appear disturbing or unsettling rather than relatable (Onyeulo & Gandhi, 2020). The value of a human-like design is therefore context dependent; in social roles such as companionship, education or customer service, incorporating human-like features can significantly enhance engagement, trust and the overall quality of interaction. On the other hand, more functional roles such as logistics or manufacturing require a more mechanical and practical design, as it emphasises efficiency and dependability (Onyeulo & Gandhi, 2020). This distinction highlights how our expectations and the intended use guide design decisions, ensuring that form follows function while also supporting trust and usability.

2.2 AI-Driven Adaptability and Ethical Design

In addition to physicality and communication modality, another design principle concerns how a robot uses AI in its operation. As Obaigbena et al. (2024) state, AI-enabled robots will be capable of responding to user preferences through machine learning. The design approach should ensure that robots can learn and develop through interactions with the user; a robot’s form should not constrain its capacity to learn. A robot’s responses, its mannerisms towards humans and its relevance to a given context may drift over time as it learns and adapts. In addition, advances in natural language processing require design characteristics that support user experiences resembling human-to-human conversation, incorporating elements such as responsive audio systems, intuitive visual feedback and flexible controls to create natural and seamless interactions (Obaigbena et al., 2024). The design of a robot is therefore not static; it dynamically employs the best technology available at a given time and place to meet user needs. The adaptability principle ensures robots’ ongoing viability and provides positive human engagement experiences across multiple contexts over time. 
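The adaptability principle can be illustrated with a minimal sketch. The class, its attribute names and the exponential-moving-average update rule below are our own illustrative assumptions, not anything specified by Obaigbena et al. (2024); the point is only that a robot's behaviour can drift toward observed user preferences over repeated interactions.

```python
# Illustrative sketch of the adaptability principle: a per-user preference
# score is nudged after each interaction, so responses drift toward what
# the user responds well to. The update rule is a simple exponential
# moving average; all names here are hypothetical.

class AdaptiveResponder:
    def __init__(self, learning_rate: float = 0.2):
        self.formality = 0.5          # 0 = casual, 1 = formal
        self.lr = learning_rate

    def respond(self) -> str:
        # Pick a response style from the current learned preference.
        if self.formality > 0.5:
            return "Good day, how may I assist?"
        return "Hey, what can I do for you?"

    def feedback(self, preferred_formality: float) -> None:
        # Move the learned preference toward the observed signal.
        self.formality += self.lr * (preferred_formality - self.formality)

bot = AdaptiveResponder()
for _ in range(10):
    bot.feedback(0.0)                 # user consistently prefers casual
print(bot.respond())                  # Hey, what can I do for you?
```

In a deployed system the feedback signal would come from interaction cues (engagement, corrections, explicit settings) rather than a hand-set number, but the drift mechanism is the same.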

Ethics and inclusivity are two further dimensions especially important when designing robots with respect to the human user experience. Ethical design practices dictate clear transparency about the functions and intentions of robots, minimising the chance of users being misled or becoming overdependent (Farooq & Iqbal, 2025). Examples include transparent processes or decision-time indicators, which promote trust by displaying when the robot is ‘thinking’ and when it requires further input (Wortham et al., 2017). Likewise, inclusivity is equally important to ensure ease of use for people with diverse cultural, linguistic or physical experiences (Kim et al., 2021). For example, designers may create multiple communication modes, such as gesture-based interfaces for users with speech impairments, and account for cultural differences in facial expressions and non-verbal body language. Designers who weigh inclusivity and ethical responsibility alongside technical capability create robots that are more than simple tools: they act as social mediators between humans and technology.
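A decision-time indicator of the kind described above can be sketched very simply. The states and messages below are hypothetical placeholders in the spirit of Wortham et al. (2017), not an actual implementation from that work: the idea is only that the robot's internal state is mapped to an explicit, user-visible signal.

```python
# Hypothetical transparency indicator: the robot exposes whether it is
# idle, 'thinking', or waiting on the user, so its behaviour is legible.
from enum import Enum

class State(Enum):
    IDLE = "idle"
    THINKING = "thinking"
    NEEDS_INPUT = "needs_input"

def status_message(state: State) -> str:
    """Map an internal state to an explicit, user-facing cue."""
    messages = {
        State.IDLE: "Ready.",
        State.THINKING: "Processing your request...",   # visible 'thinking' cue
        State.NEEDS_INPUT: "I need more information to continue.",
    }
    return messages[state]

print(status_message(State.THINKING))  # Processing your request...
```

In practice the cue might be a light, a sound or an animation rather than text, but the mapping from hidden state to visible signal is the core of the transparency pattern.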

3. Impacts of AI-Enhanced Robots on Society

3.1 AI-Enhanced Robots in Customer Service

Customer service refers to the advice and assistance that firms provide to customers using or purchasing their products and services (Metz et al., 2020). Nowadays, AI is increasingly being used in this domain in the form of chatbots and virtual assistants to improve speed, consistency, personalisation and availability (Huang & Rust, 2021). A useful way to understand how AI can be applied is through the four-intelligence schema proposed by Huang and Rust (2018): mechanical intelligence automates standardised, repetitive tasks with predictable inputs and outputs; analytical intelligence mines data to diagnose problems and recommend actions; intuitive intelligence adapts to novel or ambiguous situations when rules alone are insufficient; and empathetic intelligence recognises and responds to emotion through cues in language and voice (Huang & Rust, 2018). The combination of the four types of intelligence has produced significant benefits for various companies, such as reduced operational costs, increased productivity, improved decision-making and enhanced customer experience (Yuliani et al., 2024). However, one main concern is the potential loss of jobs due to automation (Yuliani et al., 2024).

Across channels, AI can improve each step of how a customer receives help. For example, mechanical intelligence handles self-check-in, password reset, tracking an order or taking a ticket in a queue (Huang & Rust, 2018). As a result, fewer steps will be taken and fewer mistakes will happen. Analytical intelligence allows the bot to comprehend what is being asked, power next-best-action recommendations and dynamic form filling, and pull answers from help articles in any language, aiding fast triage and smart routing at a huge scale (Davenport et al., 2020). Intuitive intelligence deals with special cases or messy situations that need careful judgement (Huang & Rust, 2018; Wirtz et al., 2018); for example, rebooking after a cancelled flight, fixing orders mixed across teams and solving edge cases. Empathetic intelligence listens for emotion in tone and wording, adjusts how it speaks and escalates to a human when needed to keep the conversation respectful and human-like (Huang & Rust, 2018).
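The division of labour across the four intelligence types can be pictured as a routing layer. The sketch below is our own illustrative simplification, not an implementation from Huang and Rust (2018): the keyword-based classifier and handler strings are hypothetical stand-ins for the intent models and workflows a real contact centre would use.

```python
# Illustrative routing across the four intelligence types of
# Huang and Rust (2018). The keyword classifier is a toy stand-in
# for a real intent model; all handler names are hypothetical.

def classify(query: str) -> str:
    q = query.lower()
    if any(w in q for w in ("password", "track", "check-in", "ticket")):
        return "mechanical"      # standardised, repetitive task
    if any(w in q for w in ("why", "recommend", "which plan")):
        return "analytical"      # data-driven diagnosis or recommendation
    if any(w in q for w in ("angry", "upset", "frustrated")):
        return "empathetic"      # emotional cues detected in wording
    return "intuitive"           # ambiguous case needing judgement

def route(query: str) -> str:
    kind = classify(query)
    if kind == "mechanical":
        return "bot: automated self-service flow"
    if kind == "analytical":
        return "bot: next-best-action recommendation"
    if kind == "empathetic":
        return "escalate: human agent with emotion context"
    return "escalate: human agent for judgement call"

print(route("I need to reset my password"))   # bot: automated self-service flow
print(route("I'm really upset about this"))   # escalate: human agent with emotion context
```

Note that the two escalating branches anticipate the hybrid model discussed in the literature: mechanical and analytical work stays automated, while intuitive and empathetic cases are handed to humans.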

Research shows that bots perform well on clear, repetitive tasks, processing more cases with fewer errors and no drop in satisfaction, although real-world success still depends on whether customers find them useful, reliable and smoothly handed over to humans when needed. At Japan’s Henn-na Hotel, robots handled routine queries well but faltered on noisy or unusual requests, so staff were reintroduced at key points to ensure seamless fixes and accountable service (Wirtz et al., 2018; Davenport et al., 2020; Bhimasta & Kuo, 2019; Reis, 2020). To balance these strands, businesses are increasingly adopting a hybrid model in which AI manages high-volume, low-touch scenarios while humans address uncertain or emotionally-charged cases (Wirtz et al., 2018). This division can reduce delays and errors while keeping empathy and accountability where they matter most. Practically, this means clear disclosure that customers are interacting with an AI, easy human handover and transparent rules for important outcomes (Floridi & Cowls, 2019). Operationally, AI keeps service consistent across chat, email, phone and apps (Davenport et al., 2020; Verhoef et al., 2015). It also learns from every case; companies can update their rules, improve training and even fix or redesign the product (Davenport et al., 2020).

While automation scales service, humans sustain enduring trust. For instance, in South Korea, AI-based chatbots interacted directly with customers, with data provided by Korea Telecom, one of the country’s largest telecommunications service providers (Ahn et al., 2024). Interactions began with the customer phoning the company’s call centre to seek resolution or advice, with the AI providing an immediate greeting and initial guidance (Ahn et al., 2024). According to Schegloff (2007), the basis of a conversation consists of two parts: the first pair-part (FPP), which initiates the conversation in the form of a question, announcement or invitation; and the second pair-part (SPP), which responds to the FPP with an answer, acknowledgement, acceptance or decline. Here, the FPP is produced by the customer-service AI, while the SPP comes from the human (Ahn et al., 2024). The AI introduced itself as an AI consultant while greeting the user by name, using Korean honorifics to show respect and politeness and to establish a hierarchical relationship (Ahn et al., 2024). Customers (the SPPs) were shown to respond in one of four ways: saying nothing, repeating the words said by the bot, striking up a natural conversation or giving a demand (Ahn et al., 2024).

In sum, AI is revolutionising customer service by scaling speed and consistency, with the four-intelligence schema describing where it adds the most value. However, the best results come from hybrid models: automation addresses routine, high-volume cases while humans address ambiguous or emotionally-complex cases. Clear disclosure, frictionless handoff and ongoing learning from real interactions maintain trust and accountability. In practice, this preserves empathy and judgement while achieving operational scale and sustaining long-term relationships. 

3.2 AI-Enhanced Robots as Smart Assistants

AI smart assistants – robots that use AI technology in personal and adaptive ways to aid human tasks and concerns – can respond and run in real time, unlike automated processes that follow a fixed rule base. AI smart assistants can also share agency and use natural language processing tools. Case studies in healthcare, disability support and workplaces show that such capabilities can help reduce distress, improve autonomy and enhance collaboration, yet ethical questions surrounding trust, equity and inclusion must be carefully addressed.

3.2.1 SMART ASSISTANTS IN HEALTHCARE AND SOCIAL CONTEXTS

AI-enabled smart assistants are already showing tangible societal gains in the healthcare sector by increasing satisfaction and alleviating distress in difficult procedures. One example is provided by Foster et al. (2020), who developed an AI-enabled social robot designed to help children manage painful needle-based procedures in hospital emergency departments. The robot is significantly different from earlier teleoperated robots. Rather than a bot following programmed and rigid behaviours, the researchers used AI to create an autonomous robot that could respond to individual children’s needs. Through a process of co-design with children, parents and healthcare staff, they programmed the system with evidence-based distraction methods, such as games, storytelling and guided breathing exercises. The AI system on board the robot enabled it to adapt the distraction methods in real time based on each child’s engagement, thereby personalising the experience to reduce both the immediate pain response and the long-term consequences of needle fear and anxiety. Foster et al. (2020) included ethical considerations in the design process by ensuring the robot’s role was clear and did not mislead or over-promise any technological capacity.

In addition to children’s healthcare, socially assistive robots have been used in other scenarios, such as in dementia care and in the field of autism, where they contribute to combatting loneliness, improving problems of communication and in many cases, encouraging social interaction (Foster et al., 2020).

Together, these case studies suggest that smart assistants can transcend their status as stand-alone ‘tools’ and become trusted assistants in managing human experience in complex and challenging domains. The importance of these implications lies not only in demonstrating an empirical benefit, such as decreased pain or anxiety, but in ensuring SARs are positively accepted and seen as trustworthy, supportive and ethical.

3.2.2 SMART ASSISTANTS FOR AUTONOMY, COLLABORATION AND WORK

Smart assistants are also increasing autonomy and collaboration for individuals with disabilities, as well as in workplace situations. Pascher (2024) gives several case study examples of individuals with motor disabilities (a decreased ability to control movement) who use robotic arms mounted on wheelchairs. One frequent problem is that the arms have multiple Degrees of Freedom (DOF), meaning they can move in several independent directions, which normally requires complex controls. To address this, Pascher (2024) created Adaptive Degrees of Freedom Mapping Control (ADMC), an AI-based adaptive ‘shared control’ system. Shared control means that the robot and the human share the control task: the human gives general input (e.g., ‘move toward the cup’), and the AI interprets and refines that input into the robot’s movements. ADMC demonstrates greater reductions in task times, unnecessary switches of control and user frustration than traditional manual systems. Pascher also introduced the AdaptiX Extended Reality (XR) framework. XR is an umbrella term covering interactions between the automated assistant and the human in virtual, augmented and mixed reality, allowing users, as well as the original designers, to explore these systems in simulation beforehand, substantially reducing risks and widening inclusion. These findings demonstrate how AI-enabled assistive devices can offer independence in everyday activities, removing the need for continuous caregiver presence (Pascher, 2024).
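The shared-control idea can be sketched numerically. The blend function below is a common textbook formulation of shared control, not Pascher's actual ADMC algorithm: the human supplies a coarse direction, the assistant supplies a refined direction toward a recognised target, and the command sent to the arm is a weighted combination of the two. The assistance weight is an assumed parameter.

```python
# Hypothetical shared-control sketch (not Pascher's ADMC implementation):
# blend a coarse human input with the assistant's refined direction.

def blend(human_dir, target_dir, assistance=0.7):
    """Weighted blend of the human's coarse direction and the
    assistant's direction toward the detected target, normalised
    to a unit command vector (zero stays zero)."""
    blended = [(1 - assistance) * h + assistance * t
               for h, t in zip(human_dir, target_dir)]
    norm = sum(c * c for c in blended) ** 0.5
    return [c / norm for c in blended] if norm else blended

# The human roughly points toward the cup; the assistant has the
# exact direction from its perception system.
human = [1.0, 0.2, 0.0]       # coarse joystick input
target = [0.9, 0.0, 0.1]      # direction to the detected cup
command = blend(human, target)
print(command)
```

A higher assistance weight shifts authority toward the AI, which is the trade-off an adaptive system like ADMC tunes per user and per task.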

Sokolov et al. (2025) discuss improved HRI in the workplace, where AI enables a more natural form of interaction with robots through speech recognition, gesture detection and emotional indicators. For instance, robots that use natural language processing can process direct verbal commands, and robots with computer vision can decipher gestures or frowns, allowing humans and robots to interact more seamlessly (Sokolov et al., 2025). Case studies reveal examples of AI-enabled robots in factories and services that free workers from repetitive tasks and reduce their exposure to dangerous working conditions. Together, these examples illustrate that AI-enabled smart assistive devices have the potential to enhance social integration, independence and workplace environments when they are accessible, ethical, transparent and user-centred (Mirza & Hemamalini, 2024).

3.2.3 NEGATIVE IMPACTS AND CHALLENGES OF AI-ENHANCED SMART ASSISTANTS

While AI-enhanced smart assistants offer significant benefits, they also present notable challenges. One primary concern is the erosion of human empathy in healthcare settings. As AI systems increasingly handle tasks traditionally performed by humans, patients may experience a reduction in compassionate care, potentially impacting their overall wellbeing and trust in medical professionals (Farhud & Zokaei, 2021). Additionally, the extensive data collection by these systems raises serious privacy and security issues. Sensitive personal information can be vulnerable to breaches, and the potential for unauthorised access or misuse of data is a significant risk (Porter, 2025). These concerns underscore the need for careful consideration and regulation in the deployment of AI technologies to ensure they are used ethically and responsibly. Ensuring transparency, user-centred design and robust ethical frameworks is essential to maximise the benefits of AI while minimising potential harms (Dankwa-Mullan, 2024).

4. Ethical and Social Challenges of AI-Enhanced Robots

With the recent rise in popularity of AI-enhanced robots, a multitude of ethical and social challenges have been observed. The goal of robot companions is to actively interact with humans, which can provide comfort but also create dependency and distress if the robot is removed; many such cases have been observed in the healthcare sector, especially among young children and the elderly (Hung et al., 2025; Onyeulo & Gandhi, 2020). Furthermore, according to recent research, the design of many social robots intentionally incorporates pet-like or toy-like features to appeal to young children and to promote natural engagement between the robot and user (Hung et al., 2025). While this can certainly make the experience flow more naturally, it also raises concerns regarding vulnerable adult users. Instead of fostering engagement, the toy-like design can leave adults feeling infantilised or socially stigmatised, leading to embarrassment and shame, especially when interactions with these robots occur in public or communal environments where others might witness the engagement (Hung et al., 2025). According to Hung et al. (2025), these effects can influence users’ self-perception and dignity, making these design choices a critical ethical issue in AI-enhanced robots.

Meanwhile, privacy, surveillance and consent remain central: always-on cameras and microphones, together with affect inference from face or voice, generate highly sensitive data in care and home settings. Securing truly informed consent is difficult where capacity is unstable, exposing participants to risks of incidental coercion or misunderstanding (Floridi & Cowls, 2019; European Union, 2024). The same data flows offer attack surfaces for impersonation and identity theft (e.g., voice cloning), further strengthening the case for on-device processing, encryption, strong authentication and purpose limitation (European Union, 2024). There is also a distributional problem: these devices are costly and disproportionately found in better-off institutions or neighbourhoods, risking a two-tier trend in which low-resource groups receive a “tech-only” substitute for human interaction (Hung et al., 2025). In keeping with elder-care ethics, robots should increase rather than replace human interaction so as not to erode dignity and valued social relationships (Sharkey & Sharkey, 2012). In implementation, protective measures include public self-identification (“I am a robot”), ready human handover, privacy-by-design (data minimisation and on-device processing where necessary) and lifecycle monitoring (bias testing, incident logging and independent review) so that responsibilities can still be traced and harms remedied (Floridi & Cowls, 2019; European Union, 2024).

4.1 Moral Responsibility and Path Forward

The question of whether robots can be considered morally accountable remains a topic of active debate. Hung et al. (2025) focus on ethical concerns in AI-enhanced robots in social contexts, but when talking about moral accountability itself, philosophers have argued more directly. For example, Altehenger et al. (2024) examine whether these robots, specifically the AI behind them, can be considered “morally blameworthy”. The authors introduce the concept of “attributability”, which considers whether the actions of a certain entity express a quality of will that could be used to justify moral praise or blame. Ultimately, while robots can surely perform actions that have real ethical consequences, the responsibility rests in the hands of their human designers, as AI lacks consciousness and true intentions.

In summary, AI robots may aid care and convenience but also pose risks of dependency, dignity harms, privacy intrusion and unequal access. Since AI currently lacks agency and moral status, responsibility falls on designers, providers and organisations. The careful path forward is clear self-identification, robust consent and data minimisation, proportionate human oversight and handover, and continuous bias and safety auditing, so that robots complement rather than replace relationships.  

5. Future Possibilities of Human-Robot Relationships

The future is something we cannot predict with precision; however, we can make projections based on research data and its conclusions. According to Kikani and Ramchandani (2025), AI chatbots will continue to evolve with advanced natural language processing and GPT-powered (generative pretrained transformer) AI to improve their emotional intelligence and contextual understanding. In the future, voice-based AI assistants and real-time AI-human collaboration may be possible for better customer service. 

According to Ahmad et al. (2024), the ultimate goal of HRI is to develop robots that can work alongside humans in various fields, such as industry, healthcare, education and domestic settings. There are also several types of HRI: physical interaction (physical contact between humans and robots); social interaction (non-verbal communication and social behaviour such as eye contact and gestures); intellectual communication (exchange of information through speech or text); and emotional interaction (expressing and recognising emotions) (Ahmad et al., 2024). According to Sheridan (2016), one of the possibilities that may occur in the future of HRI is an improvement in learning and adaptability, meaning robots will learn from the data of other robots as well as from humans. This may also include robots receiving unstructured input, allowing them to process several different types of input, such as audio, text and images, and so improve their responsiveness and adaptability. This can also help them recognise human emotions, aid them in decision-making tasks and guide how they behave in human settings (Sheridan, 2016).

Other possibilities include enhanced companionship and support, trustworthy collaboration and new ethical frameworks.

  • Enhanced companionship and support can allow robots to become reliable partners in many fields. In education, healthcare and therapy, they can be used to offer comfort and reduce loneliness.
  • Trustworthy collaboration envisions transparent, well-designed co-workers or assistants that build trust through reliability.
  • New ethical frameworks involve the development of regulations and policies. These regulations and policies, together with interdisciplinary collaboration, may make social robots safer and more socially integrated.

However, Weiss et al. (2010) found that while some experts believe future robots will be capable of doing everything a human can, others doubt whether robots are suitable for work in education or healthcare for ethical reasons (see Section 4). 

6. Conclusion

Throughout this paper, we tracked the development of robots from factory manipulators to social actors in everyday contexts, and how that development requires a new standard for design, deployment and regulation. The historical arc, from Čapek’s metaphor to Unimate’s shop-floor functionality to contemporary social companions, illustrates that the key question is no longer “can a machine do a job?”, but rather “how must a machine engage with human beings when doing it?”.

The purpose of this paper was to trace the multifaceted evolution of HRI, from its origins in industrial automation to its current state of sophisticated social integration. The main conclusion is that artificial intelligence has completely changed the function of robots, allowing them to perform tasks that call for emotional intelligence, decision-making and real-time flexibility rather than just performing repetitive tasks (Miner, 2023). 

This report’s examination of case studies in customer service and healthcare has shown that hybrid models – where automation manages high-volume tasks while human empathy and judgement handle ambiguous or emotionally complex situations – are where AI-enhanced robots work best (Huang & Rust, 2018; Wirtz et al., 2018). These results support the paper’s central contention: a commitment to transparent and ethical design, rather than a perfect human-like simulation, is what will allow robots to integrate successfully into our social fabric. A substantial body of evidence supports the report’s claims, emphasising important ethical issues that need to be addressed, including data privacy, emotional dependence and dignity (Hung et al., 2025; Floridi & Cowls, 2019). 

Even though considerable progress has been made, this study shows that HRI has not yet reached its full potential. To guarantee that AI-enhanced robots are safe, accessible and genuinely enhance human relationships rather than replace them, future research should prioritise the creation of transparent ethical frameworks and inclusive design practices (Sharkey & Sharkey, 2012).

Bibliography

Ahmad, M., Khan, M.H., Bouabdallah, A. & Imran, M. (2024) Exploring Human-Robot Interaction and Collaboration for Real-World Applications, IEEE-SEM, 12(2), pp. 52-60.

Ahn, S.-Y., Kim, S., Lee, E.-Y., Lee, H., Song, E.-Y., Song, J. & Won, J. (2024) How do AI and human users interact? Positioning of AI and human users in customer service, Text & Talk, 45(3), pp. 301-318.

Akhyani, M. & Moradi, H. (2023) Modular Customizable ROS-Based Framework for Rapid Development of Social Robots, arXiv [preprint], 2311.15780. <https://arxiv.org/abs/2311.15780>

Altehenger, H., Menges, L. & Schulte, P. (2024) How AI Systems Can Be Blameworthy, Philosophia, 52, pp. 1083-1106.

Balevic, K. (2025) Signal President Warns the Hyped Agentic AI Bots Threaten User Privacy, Business Insider [online]. <https://www.businessinsider.com/signal-president-warns-privacy-threat-agentic-ai-meredith-whittaker-2025-3>

Belcic, I. & Stryker, C. (2024) What is GPT (generative pretrained transformer)? IBM [online]. <https://www.ibm.com/think/topics/gpt>

Bhimasta, R.A. & Kuo, P.-Y. (2019) What causes the adoption failure of service robots? A case of Henn-na Hotel in Japan, Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers, pp. 1107-1112.

Biba, J. & Urwin, M. (2024) What Is a Social Robot? Built In [online]. <https://builtin.com/robotics/social-robot>

Breazeal, C. (2002) Designing Sociable Robots. Cambridge, MA: MIT Press.

Dankwa-Mullan, I. (2024) Health Equity and Ethical Considerations in Using Artificial Intelligence in Public Health and Medicine, Prev Chronic Dis, 21(240245).

Davenport, T.H., Guha, A., Grewal, D. & Bressgott, T. (2020) How artificial intelligence will change the future of marketing, Journal of the Academy of Marketing Science, 48(1), pp. 24-42.

European Union (2024) Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), Official Journal of the European Union [online]. <https://eur-lex.europa.eu/eli/reg/2024/1689/oj>

Farhud, D.D. & Zokaei, S. (2021) Ethical Issues of Artificial Intelligence in Medicine and Healthcare, Iran J Public Health, 50(11), i-v.

Farooq, A. & Iqbal, K. (2025) Towards Transparent Ethical AI: A Roadmap for Trustworthy Robotic Systems, arXiv [preprint], 2508.05846. <https://arxiv.org/abs/2508.05846>

Fernandez, F. (2023) Social assistive robotics: evolution and new advances in this field, inrobics [online]. <https://en.inrobics.com/robotica-social-asistencial-evolucion-nuevos-avances-campo/?glang=en&gurl=social-assistive-robotics-evolution-and-new-advances-in-the-field/>

Floridi, L. & Cowls, J. (2019) A unified framework of five principles for AI in society, Harvard Data Science Review, 1(1).

Foster, M.E., Ali, S., Litwin, S., et al. (2020) Using AI-Enhanced Social Robots to Improve Children’s Healthcare Experiences, Lecture Notes in Computer Science (LNAI), 12483, pp. 542-553.

Furhat Robotics (2024) Exploring the Ethics of AI and Social Robots, Furhat Robotics [online]. <https://www.furhatrobotics.com/post/exploring-the-ethics-of-ai-and-social-robots>

Huang, M.-H. & Rust, R.T. (2018) Artificial Intelligence in Service, Journal of Service Research, 21(2), pp. 155–172.

Huang, M.-H. & Rust, R.T. (2021) The Feeling Economy: How Artificial Intelligence Is Creating the Era of Empathy, Cham: Springer International Publishing.

Hung, L., Zhao, Y., Alfares, H. & Shafiekhani, P. (2025) Ethical considerations in the use of social robots for supporting mental health and wellbeing in older adults in long-term care, Frontiers in Robotics and AI, 12(1560214).

Jones, C., Moyle, W., Beattie, E. & Shum, D. (2023) The effectiveness of a therapeutic robot, ‘Paro’, on behavioural and psychological symptoms, medication use, total sleep time and sociability in older adults with dementia: a systematic review and meta-analysis, International Journal of Nursing Studies, 145(104530).  

Kikani, D. & Ramchandani, S. (2025) The Impact of AI Chatbots on Customer Service: Efficiency vs. Human Touch: A Comparative Analysis of Automation and Human Interaction in Customer Support, IJRTI [pdf], 10(3). <https://www.ijrti.org/papers/IJRTI2503134.pdf>

Kim, Y., Marx, S., Pham, H.V. & Nguyen, T. (2021) Designing for robot-mediated interaction among culturally and linguistically diverse children, Educational Technology Research and Development, 69(6).

Massachusetts Institute of Technology (2001) MIT team building social robot, MIT News [online]. <https://news.mit.edu/2001/kismet>

Metz, D., Ilieș, L. & Nistor, R. L. (2020) The Impact of Organizational Culture on Customer Service Effectiveness from a Sustainability Perspective, Sustainability, 12(15), 6240.

Miner, T. (2023) Robotics and Social Impact: How Robots are Changing Society, Ziva Robotics [online]. <https://www.zivarobotics.com/robotics-social-impact-robots-changing/>

Mohammed, A. (2024) Advanced Guide for Natural Language Processing, Analytics Vidhya [online]. <https://www.analyticsvidhya.com/blog/2023/09/advanced-natural-language-processing-nlp/>

Moravec, H.P. (2025) Robot, Encyclopaedia Britannica [online]. <https://www.britannica.com/technology/robot-technology>

Obaigbena, A., Lottu, O.A., Ugwuanyi, E.D., et al. (2024) AI and human-robot interaction: A review of recent advances and challenges, GSC Advanced Research and Reviews, 18(2), pp. 321-330.

Onyeulo, E.B. & Gandhi, V. (2020) What makes a social robot good at interacting with humans?, Information, 11(1), 43.

Pascher, M. (2024) An Interaction Design for AI-enhanced Assistive Human-Robot Collaboration, Universität Duisburg-Essen [thesis]. <https://duepublico2.uni-due.de/receive/duepublico_mods_00082229>

Porter, A. (2025) AI in Healthcare: Advancements, Challenges, and Trends, BigID [online]. <https://bigid.com/blog/ai-in-healthcare-advancements-challenges-and-trends/>

Reis, J., Melão, N., Salvadorinho, J., Soares, B. & Rosete, A. (2020) Service robots in the hospitality industry: The case of Henn-na hotel, Japan, Technology in Society, 63(101423).

Sharkey, A. & Sharkey, N. (2012) Granny and the robots: Ethical issues in robot care for the elderly, Ethics and Information Technology, 14(1), pp. 27-40.

Sheridan, T.B. (2016) Human-Robot Interaction: Status and Challenges, Human Factors: The Journal of the Human Factors and Ergonomics Society, 58(4).

Shibata, T. & Wada, K. (2010) Robot therapy: a new approach for mental healthcare of the elderly – a mini-review, Gerontology, 57(4), pp. 378-386.

Sokolov, O., Andrusyshyn, V., Iakovets, A. & Ivanov, V. (2025) Intelligent Human–Robot Interaction Assistant for Collaborative Robots, Electronics, 14(6), 1160.

Thompson, C. (n.d.) 13 Milestones in the History of Robotics, Aventine [online]. <https://www.aventine.org/robotics/history-of-robotics>

Turney, D. (2022) History of industrial robots, from single taskmaster to self-teacher, Autodesk [online]. <https://www.autodesk.com/design-make/articles/history-of-industrial-robots>

Verhoef, P.C., Kannan, P.K. & Inman, J.J. (2015) From Multi-Channel Retailing to Omni-Channel Retailing: Introduction to the Special Issue on Multi-Channel Retailing, Journal of Retailing, 91(2), pp. 174-181.

Wada, K., Ikeda, Y., Inoue, K. & Uehara, R. (2010) Development and preliminary evaluation of a caregiver’s manual for robot therapy using the therapeutic seal robot PARO, 19th International Symposium in Robot and Human Interactive Communication (RO-MAN), pp. 533-538.

Wada, K. & Shibata, T. (2007) Living With Seal Robots – Its Socio-Psychological and Physiological Influences on the Elderly at a Care House, IEEE Transactions on Robotics, 23(5), pp. 972–980.

Wada, K., Shibata, T., Musha, T. & Kimura, S. (2008) Robot therapy for elders affected by dementia, IEEE Engineering in Medicine and Biology Magazine, 27(4), pp. 53-60.

Weiss, A., Igelsböck, J., Wurhofer, D. & Tscheligi, M. (2010) Looking Forward to a “Robotic Society”?, International Journal of Social Robotics, 3(2), pp. 111-123.

Wirtz, J., Patterson, P.G., Kunz, W.H., Gruber, T., Lu, V.N., Paluch, S. & Martins, A. (2018) Brave new world: service robots in the frontline, Journal of Service Management, 29(5), pp. 907-931.

Wortham, R.H., Theodorou, A. & Bryson, J.J. (2017) Improving robot transparency: Real-time visualisation of robot AI substantially improves understanding in naive observers, 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 1424-1431.

Yuliani, M., Tambunan, Y.G.M., Khalijah, P., Situmorang, S.H. & Sembiring, B.K.F. (2024) Collaboration Humans and Robots Supported by Artificial Intelligence (AI) in Serving Customers, Jurnal Ilmu Sosial Mamangan, 12(3), pp. 77–87.