Cross-cultural comparison of preferences for the external appearance of artificial intelligence agents


Youngsang Kim
Hoonsik Yoo
Cite this article:  Kim, Y., & Yoo, H. (2021). Cross-cultural comparison of preferences for the external appearance of artificial intelligence agents. Social Behavior and Personality: An international journal, 49(11), e10824.


Abstract

We analyzed international differences in preferences related to the two-dimensional (2D) versus three-dimensional (3D) and male versus female external appearance of artificial intelligence (AI) agents for use in self-driving automobiles. We recruited 823 participants in five countries (South Korea, United States, China, Russia, and Brazil), who completed a survey. South Korean, Chinese, and North American respondents preferred a 2D appearance of the AI agent, which appears to result from the religious or philosophical views held in countries with a large or growing number of Christians, whereas Brazilian and Russian respondents preferred a 3D appearance. Brazilian respondents’ high rate of functional illiteracy may be the reason for this finding; however, it was difficult to identify the reason for the Russian preference. Furthermore, men in all five countries preferred female AI agents, as did women in South Korea, China, and Russia, whereas women in the United States and Brazil preferred male agents. These findings may offer valuable guidelines for the design of personalized AI agent appearances, taking into account differences in preferences between countries and by gender.

With consistent advances in technology for self-driving automobiles, it has become reasonable for an artificial intelligence (AI) agent to function as a driver. However, as the AI agent often fails to function properly in this context, social interactions between the human driver and the AI agent in the car are becoming increasingly important (Hong, 2020). In particular, because driving is associated with various hazards, such as contravening road traffic laws or the risk of accidents, it is crucial for the driver to trust the AI agent, and studies have shown that the degree of anthropomorphism of the AI agent is directly related to the driver’s level of trust (Song & Luximon, 2020; Xu & Howard, 2020).

Moreover, the AI agents loaded into self-driving cars need to communicate effectively not only with their own driver, but also with the drivers of other cars on the road. Thus, it can be anticipated that in the future, designers of self-driving cars will be required to manufacture AI agents that can comply with the rules of society while also forming positive cognitive and social relations with their own driver and the drivers of other cars (Strömberg et al., 2018).

Therefore, the social and emotional exchange between AI agents and humans is becoming increasingly important. However, most previous studies on the subject have been restricted to analyzing the social and emotional functions of robots with AI. Typically, when AI technology enables a robot to act as an independent entity with cognitive abilities, interacting with humans and performing an assistive role while carrying out automated tasks, that robot is described as an artificial intelligence agent (Andreu-Perez et al., 2017). In this regard, as there is little difference between an AI agent and a robot that displays social and emotional empathy, we treated these two concepts as equivalent in this study.

According to Lohani et al. (2016), when a robot empathizes with human emotions, a robust relationship is formed between robot and human, accompanied by increased trust and a feeling of emotional comfort for the user. Robots capable of interacting and communicating with humans in this way are called social robots; specifically, social robots have been defined as those that autonomously or semiautonomously interact with humans while also following the rules of behavior expected of humans, or as robots with a physical appearance and capability of communicating with humans (Akalin & Loutfi, 2021; Lee, 2018).

As social robot technology has continued to advance, the naturalness of interactions between humans and robots has begun to be emphasized. For this purpose, there has been a growing recognition of the need for robots optimized for each user’s needs and characteristics, that is, social robots designed from a usability standpoint (Akalin & Loutfi, 2021). Product usability refers to the ability of the product to be used easily and efficiently by humans. To improve the usability of robots, it is essential to apply a thorough understanding of users when designing the robot (Sauer et al., 2020). When a robot responds appropriately depending on the user’s social and cultural background, users will develop a positive perception of the robot; therefore, robot designers and service agents need to understand the sociocultural context and background of their target users to provide effective products (Li et al., 2010; O’Neill-Brown, 1997). Li et al. (2010) reported that culture affects many factors, such as methods of communication by the person interacting with the agent, knowledge, motivation, cognitive state, and a combination of verbal and nonverbal communication. Overall, users show a preference for and tendency to respond to socially adaptive robotics, compared to typical robotics (Li et al., 2010).

In particular, users’ preferences as regards the robot’s face are strongly related to their cultural background and are greatly affected by their lifestyle and previously experienced sociocultural environments (Seyama & Nagayama, 2007). For example, depending on users’ familiarity with characters seen in animations, computer games, and cartoons in their own culture, they differ in their preference for the external appearance of robots (Seyama & Nagayama, 2007). Furthermore, users may experience different emotions regarding a robot’s appearance depending on their course of growth, education, and the community to which they belong, which means that, when designing the appearance of a robot, it is essential for manufacturers to consider users’ sociocultural background (Tay et al., 2018).

One of the most influential studies on robots’ appearance was conducted by Mori (1970), who proposed the uncanny valley theory, which states that the appearance of robots or AI agents should be designed to a suitable level of realism, depending on users’ preferences. According to this theory, users like robots with a face that resembles humans to a certain extent, but they feel discomfort when robots appear too similar to humans, and this phenomenon needs to be considered when designing robots (Seyama & Nagayama, 2007). However, these criteria also show different patterns depending on the users’ nationality or culture, indicating there is a need to compare preferences for realism in robot appearances between cultures.

Scholars have demonstrated differences in preferences for robot appearance depending on gender and cultural differences, and it is difficult to explain these differences between Western and Asian regions without considering culture (Li et al., 2010; O’Neill-Brown, 1997). In this study we compared preferences for degree of realism in a robot’s external appearance according to gender and nationality, with the aim of helping robot developers design social robots with suitable cultural and social adaptability.

Research Background

Artificial Intelligence Agents and Users’ Cultural Background

According to Hofstede’s (1984) cultural dimensions theory, people from a culture with a high masculine value tend to prefer large and fast things, whereas people from a culture with a high feminine value tend to prefer small and slow things. Applying this theory to robots, social robots reflect more feminine characteristics, as they are manufactured to be smaller and slower than industrial robots (Li et al., 2010). Indeed, users in countries such as Germany, which has a high masculine value, prefer larger and faster robots, whereas users in countries such as South Korea or China, which have mid-level masculinity values, prefer smaller and slower robots (Li et al., 2010).

There are also differences in the preferred mode of conversation with social robots. People in low-context cultures usually prefer explicit communication, whereas people in high-context cultures usually prefer implicit communication. The preferences also differ between countries with an individualistic culture, such as Germany and the United States, and countries with a collectivistic culture, such as South Korea and China. Individualistic cultures are typically described as low-context cultures, and collectivistic cultures are described as high-context cultures. As a result of these differences in cultural characteristics, German users show greater trust in and intimacy with robots that use an explicit mode of communication, whereas users in countries with a collectivistic culture are more strongly influenced by robots with an implicit mode of communication (Rau et al., 2009).

Further, depending on the cultural context, user viewpoints differ on the gender of AI agents. When the concept of infant caregiver logic (Weber, 2005) is applied to the relationship between machines and humans, in Western society women are regarded as more social and emotional than men, which leads to the allocation of feminine qualities to machines that act as agents (Da Costa, 2018). On the other hand, male robots tend to have agentic characteristics and are designed to manage tasks traditionally more likely to be ascribed to men, such as fixing electronic machines. This means that the roles assigned to men and women in real life tend to be reflected in the roles assigned to male and female robots as well (Eyssel & Hegel, 2012).

In another study that compared preferences for humanoid robots among Korean, American, and Japanese users, the Korean and Japanese users showed more positive attitudes toward the robots, whereas the American users showed a more cautious response (Nomura et al., 2007).
In yet another study, it was reported that Americans showed more positive attitudes toward robots than Mexicans did (Destephe et al., 2015).

The External Appearance of Artificial Intelligence Agents and Users’ Cultural Background

Users’ perspectives on the external appearance of AI agents are determined by the users’ sociocultural background (e.g., nationality, gender, age, religion, level of education; Destephe et al., 2015; Tay et al., 2018). For example, in regard to age, younger people (under 30 years old) have been found to rate the humanness and attractiveness of AI agents more positively, whereas older people (over 50 years old) show the opposite trend (Destephe et al., 2015). Cultural differences have been reported in the preferred eye size for robots, as well as in the degree of anxiety and aversion experienced toward robots (Seyama & Nagayama, 2007; Tay et al., 2018).

From sociocultural or intercultural perspectives, one of the most important topics is the degree of realism in the appearance of robots. Specifically, studies have examined the degree to which robots’ external appearance should resemble humans; for example, there has been much discussion on which human characteristics, such as gender, hair length, hair texture, and skin color, should be reflected in the appearance of robots (Tay et al., 2018). In this regard, the uncanny valley theory is highly influential. Mori (1970) reported that users felt uncomfortable when a robot’s appearance was very similar to that of a generic human (see also Destephe et al., 2015; Seyama & Nagayama, 2007), and stated that robot developers need to consider the uncanny valley when determining the appropriate level of realism for the appearance of robots or AI agents (see also Seyama & Nagayama, 2007; Tay et al., 2018).

Furthermore, the most suitable appearance for robots also differs depending on their purpose. For example, robots with a completely mechanical appearance are better suited for job roles such as security or translation that do not require a high degree of sociability or comfort; robots that look like animals may be more suitable as assistants in healthcare, or as toys; and robots with a humanoid appearance may be best suited as tour guides, receptionists, and in the food industry (Li et al., 2010). For occupational roles where trust and emotional communication with the user are considered important, it is suitable to use the most humanoid robot possible (Destephe et al., 2015; Li et al., 2010).

Religious and Philosophical Discussion on Artificial Intelligence Agents

For scientists, religion and philosophy play an important role in understanding the natural world, and attempts have also been made to understand human perspectives of robots and AI from religious and philosophical viewpoints (Geraci, 2006). Research findings indicate that there are differences in perceptions of robots between Western and East Asian cultures. For example, one study found that, due to the influence of Cartesian education, French people tend to view all aspects of the world from a hierarchical or liminal perspective; thus, they view nature and artificiality as dichotomous concepts (Destephe et al., 2015). Moreover, because of the long history of Christian influence, people in Western cultures consider humans to be the only entities with a soul; thus, compared to people from Eastern cultures, they tend to show relatively lower affinity for robots and often depict robots as taking over or replacing humans in culture and literature (Lee et al., 2012). Furthermore, according to the Christian worldview that has been spread widely in Western countries and gradually across other nations throughout the world, because only “man” is defined as having the image of God, making a machine in the form of a man is considered not only contrary to biblical values, but is also viewed by some as the creation of an idol (MacDorman et al., 2009).

On the other hand, in East Asian cultures, because of the influence of Confucian culture, people tend to have more positive views of robots (Bartneck et al., 2005). For example, in the worldview of Japanese people, there is no clear boundary between natural and artificial things, which, when combined with animistic beliefs that all objects have souls, results in a positive perception of robots (Lee et al., 2012). In particular, Japanese people clearly differ from people in Western countries in their concept of good and evil regarding robots; therefore, the robots in Japanese animations or movies are often depicted as assisting or living symbiotically with humans (Bartneck et al., 2005; Lee et al., 2012). In addition, according to Wang et al. (2010), Chinese people feel more comfortable than do Americans about robots being included as members of their group.

There are also cultural differences in the preference for realism in robots’ appearance. According to Fraune et al. (2020), U.S. participants showed a stronger affinity for robots that resembled machines, whereas Japanese participants showed a stronger affinity for more anthropomorphized robots. More broadly, there are large differences between the attitudes toward robots of people in Western countries and those in East Asian countries. This appears to be because, when people in Western countries evaluate robots, they focus on practical needs and, under the influence of Christianity, tend to separate machines and humans based on their belief in the distinctiveness of humans (Samuel, 2019). In contrast, East Asian culture is based on Buddhism and Confucianism, and there is a tendency to view human beings less as separate entities and more as part of a whole that does not differ significantly from nonhuman entities.

On the basis of the aforementioned literature review, our study was guided by the following research hypotheses:
Hypothesis 1: There will be differences between respondents in different countries in terms of the preference for realism in the four types of appearances of artificial intelligence agents.
Hypothesis 2: There will be differences according to gender in respondents’ preference for realism in the four types of appearances of artificial intelligence agents.

Method

Participants and Procedure

The objective of this study was to analyze national and cultural differences in preferences for the types of visualization of AI agents in self-driving automobiles. To this end, we recruited a sample of 823 participants from five countries (Brazil, n = 171; Russia, n = 167; China, n = 164; United States, n = 161; South Korea, n = 160). Most of the data were collected through an online survey of country-specific panels that the survey company, Dataspring Inc., had previously secured. The survey was administered for 3 weeks between July and August 2018.

Measures

Participants responded to a survey composed of eight items. Two versions of the survey were prepared and varied by gender and visualization (2D or 3D), as follows:
AI Agent 1 (male 2D, male 3D, female 2D, female 3D)
AI Agent 2 (male 2D, male 3D, female 2D, female 3D).

Participants were asked to indicate their degree of preference for the external appearance of an AI agent on a 5-point Likert scale ranging from 1 (not at all preferable) to 5 (highly preferable). Scores were summed for each model, yielding a total score for each of the four items (male 2D, male 3D, female 2D, female 3D). We used SPSS to conduct t tests and Welch’s analysis of variance (ANOVA) to test for differences depending on the participants’ gender and nationality, as follows:
(a) analysis of differences in participants’ preferences within each country for the degree of realism in the four types of appearance of AI agents;
(b) analysis of differences in participants’ preferences for the degree of realism in the four types of appearance of AI agents between the five countries;
(c) analysis of gender differences in participants’ preferences for the degree of realism in the four types of appearance of AI agents, separately for each of the five countries.
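The analyses above were run in SPSS; for readers who want to reproduce the approach without SPSS, Welch’s ANOVA can be computed directly from the group scores. The sketch below (Python with NumPy/SciPy; the function name `welch_anova` is ours, not from the paper) implements the standard Welch formula for k independent groups with unequal variances.

```python
import numpy as np
from scipy.stats import f


def welch_anova(*groups):
    """Welch's ANOVA for k independent groups with unequal variances.

    Returns (F, df1, df2, p).
    """
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                            # precision weights n_i / s_i^2
    W = w.sum()
    grand = (w * m).sum() / W            # weighted grand mean
    num = (w * (m - grand) ** 2).sum() / (k - 1)
    tmp = (((1 - w / W) ** 2) / (n - 1)).sum()
    den = 1 + (2 * (k - 2) / (k ** 2 - 1)) * tmp
    F = num / den
    df1 = k - 1
    df2 = (k ** 2 - 1) / (3 * tmp)       # Welch-adjusted denominator df
    p = f.sf(F, df1, df2)
    return F, df1, df2, p
```

For example, `welch_anova(male_2d, male_3d, female_2d, female_3d)` with one array of summed preference scores per appearance type would correspond to the within-country comparison in (a); `male_2d` and the other variable names here are hypothetical.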

Table 1. Basic Characteristics of the Participants


Note. SUV/RV = sports utility vehicle/recreational vehicle.

Results

Overall Comparison Between Countries

We analyzed the results of our quantitative assessment of participants’ preference for the types of visualization of an in-vehicle virtual assistant.

The men-to-women ratio of participants was close to 5:5 in the United States (48.4% men to 51.6% women), Brazil (53.2% to 46.8%), and Russia (52.1% to 47.9%), with a higher proportion of men in South Korea (66.9% to 33.1%) and China (62.8% to 37.2%). Across the five countries, the age groups were fairly evenly represented (20s, 11.9%; 30s, 23.8%; 40s, 23.8%; 50s, 20.6%; 60s, 20%).

Among the countries analyzed in this study, South Korea and China are traditionally considered to be influenced by Confucian culture, whereas the United States, Russia, and Brazil can be considered to be affected by Christian thinking as part of the Western cultural sphere (Cho, 2007; Hofstede et al., 2005). As the participants in this study were evenly distributed between Eastern and Western countries, we could investigate whether the differences in philosophical and religious thinking between Eastern and Western cultures are related to preferences for the appearance of AI agents. Table 2 and Figure 1 show a comparison of the preferences of participants in the five countries in regard to gender and realism (2D/3D) of an AI agent.

Table 2. Overall Comparison Between the Countries for Preference in Regard to Gender and Realism of Artificial Intelligence Agent


Figure 1. Overall Comparison Between the Countries for Preference in Regard to Gender and Realism of Artificial Intelligence Agent

Pairwise comparisons between the countries showed significant differences for three of the four alternatives: (1) male 2D: no significant differences; (2) female 2D: Korea–Russia, Korea–USA, China–USA, China–Russia, and USA–Brazil; (3) male 3D: Korea–Brazil, Korea–Russia, China–USA, USA–Brazil, and USA–Russia; and (4) female 3D: Korea–USA, Korea–Brazil, China–USA, USA–Russia, USA–Brazil, and Russia–Brazil.

There were also differences between the countries in terms of participants’ preference for the degree of realism in the appearance of AI agents. Both Korean and American participants significantly preferred the 2D AI agent to the 3D AI agent: Korean participants significantly preferred male 2D over male 3D and female 2D over female 3D, while American participants significantly preferred male 2D over male 3D. The corresponding differences for the other countries were not significant.

South Korea, China, and the United States

In this section we outline the results for participants in South Korea, China, and the United States, who showed similar preferences in regard to the external appearance of an AI agent. First, Table 3 shows the results of an ANOVA to establish whether there was a significant difference between men and women in South Korea in regard to their preference for the realism (2D/3D) of the appearance and the gender of an AI agent. As shown in Table 3, men and women differed significantly in their preferences (p < .001).

Table 3. Analysis of Variance Results for South Korean Respondents


Note. SS = sum of squares; MS = mean squares.

Table 4. Post Hoc Analysis for South Korean Respondents’ Preference for Appearance of Artificial Intelligence Agent


Note. * p < .05.

Table 4 shows the results for South Korean respondents regarding the realism (2D/3D) and gender of the AI agent, analyzed using the Games–Howell post hoc test. As shown in Table 4 and Figure 2, for men in South Korea, the order of preference for the appearance of the AI agent was female 2D > female 3D > male 2D > male 3D; for women in South Korea, the order of preference was female 2D > male 2D > female 3D > male 3D. The results of the post hoc analysis show that there were significant differences between all pairs, except for male 2D and female 2D for the men and male 3D and female 3D for the women. Thus, both men and women in South Korea preferred an AI agent with a 2D appearance to one with a 3D appearance, showing greater acceptance of an AI agent with a less realistic appearance, and both men and women also preferred an AI agent with a female appearance.
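The Games–Howell procedure used here pairs a studentized-range statistic with Welch–Satterthwaite degrees of freedom for each pair of groups, which is why it tolerates the unequal variances that motivated the Welch ANOVA. A minimal sketch follows (Python/SciPy rather than the SPSS procedure the authors used; requires SciPy ≥ 1.7 for `studentized_range`, and the function name `games_howell` is ours):

```python
import itertools

import numpy as np
from scipy.stats import studentized_range


def games_howell(groups):
    """Games-Howell pairwise comparisons for unequal variances and sizes.

    Returns a list of (i, j, mean_diff, p) tuples, one per pair of groups.
    """
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    results = []
    for i, j in itertools.combinations(range(k), 2):
        se2 = v[i] / n[i] + v[j] / n[j]          # squared standard error
        q = abs(m[i] - m[j]) / np.sqrt(se2 / 2)  # studentized range statistic
        # Welch-Satterthwaite degrees of freedom for this pair
        df = se2 ** 2 / ((v[i] / n[i]) ** 2 / (n[i] - 1)
                         + (v[j] / n[j]) ** 2 / (n[j] - 1))
        p = studentized_range.sf(q, k, df)
        results.append((i, j, m[i] - m[j], p))
    return results
```

Passing the four per-appearance score arrays for one gender group within one country would yield the six pairwise p values reported in tables such as Table 4.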


Figure 2. Preferences of South Korean Respondents for Appearance of Artificial Intelligence Agent

Similar to the South Korean respondents, both Chinese men and women preferred 2D to 3D appearance for the AI agents (see Figure 3). Both men and women showed the same order of preference: female 2D > female 3D > male 2D > male 3D.


Figure 3. Preferences of Chinese Respondents for Appearance of Artificial Intelligence Agent

However, according to the ANOVA results shown in Table 5, the difference in preference for robot appearance was significant for men but nonsignificant for women.

Table 5. Analysis of Variance Results for Chinese Respondents’ Preference for Appearance of Artificial Intelligence Agent


Note. SS = sum of squares; MS = mean squares.

When the differences were examined in more detail in the post hoc analysis, men showed a significant preference for a female appearance but a nonsignificant preference for a 2D appearance, while women showed no significant differences in either category (see Table 6). Although not all of these differences reached significance, the Chinese participants generally preferred a female appearance for the AI agent, and this trend was stronger than that observed for the South Korean respondents.

Table 6. Post Hoc Analysis for Chinese Respondents’ Preference for Appearance of Artificial Intelligence Agent


Note. * p < .05.

Figure 4 shows the results for the North American respondents. The United States has an individualistic culture, placing it in a completely different cultural sphere from those of South Korea and China, which have collectivistic cultures. In spite of these differences, as shown in Figure 4, similar to South Korean and Chinese respondents, the North American respondents preferred a 2D appearance to a 3D appearance for the AI agent. The difference in preferences between men and women was found to be significant (see Table 7).

Table 7. Analysis of Variance Results for American Respondents’ Preference for Appearance of Artificial Intelligence Agent


Note. SS = sum of squares; MS = mean squares.


Figure 4. Preferences of North American Respondents for Appearance of Artificial Intelligence Agent

Specifically, according to Table 8, the order of preference of North American men was female 2D > female 3D > male 2D > male 3D, and that of women was male 2D > female 2D > female 3D > male 3D. Furthermore, the order of preference of men was the same as that of men in the South Korean and Chinese respondent groups. In contrast, whereas women in South Korea and China preferred an AI agent of the same gender as themselves (female appearance), women in the United States preferred an AI agent of the opposite gender (male appearance).

Table 8. Post Hoc Analysis for North American Respondents’ Preference for Appearance of Artificial Intelligence Agent


Note. * p < .05.

Brazil and Russia

Unlike the participants in South Korea, China, and the United States, who preferred a 2D appearance for the AI agent, participants in Brazil and Russia preferred a 3D appearance. As shown in Table 9 and Figure 5, men in the Brazilian group showed an order of preference for female 3D > female 2D > male 3D > male 2D; post hoc analysis revealed that Brazilian men significantly preferred a female 3D appearance compared to a male 2D appearance. The order of preference for Brazilian women was male 2D > female 3D > male 3D > female 2D. In summary, Brazilian men showed a strong preference for a 3D female appearance for the AI agent, whereas the strongest preference for Brazilian women was for a 2D male appearance and the weakest for a 3D female appearance, but the differences were small and nonsignificant.

Table 9. Post Hoc Analysis for Brazilian Respondents’ Preference for Appearance of Artificial Intelligence Agent


Note. * p < .05.


Figure 5. Preferences of Brazilian Respondents for Appearance of Artificial Intelligence Agent

As shown in Table 10, the p value for the Brazilian men was < .001, indicating a significant difference between groups, whereas the p value for the Brazilian women was .171, indicating that the difference between groups was nonsignificant.

Table 10. Analysis of Variance Results for Brazilian Respondents’ Preference for Appearance of Artificial Intelligence Agent


Note. SS = sum of squares; MS = mean squares.

Finally, Figure 6 shows the results for Russian respondents. The order of preference of Russian men was female 3D > female 2D > male 3D > male 2D, which was the same as for Brazilian men, and the order of preference for Russian women was female 3D > male 2D > male 3D > female 2D, which differed from the Brazilian women’s preference for the male 2D appearance. In other words, the Russian participants showed a strong preference for AI agents with a 3D appearance, and both Russian men and women preferred a female appearance for the AI agent.


Figure 6. Preferences of Russian Respondents for Appearance of Artificial Intelligence Agent

However, according to the ANOVA results in Table 11, the difference in preference among the men in the Russian respondent group was significant, whereas the difference among the women was not. Further, according to the post hoc analysis results in Table 12, apart from the men’s preference for a female 3D appearance over a male 3D appearance, none of the differences in preference were significant.

Table 11. Analysis of Variance Results for Russian Respondents’ Preference for Appearance of Artificial Intelligence Agent


Note. SS = sum of squares; MS = mean squares.

Table 12. Post Hoc Analysis for Russian Respondents’ Preference for Appearance of Artificial Intelligence Agent


Note. * p < .05.

Discussion

In this study we compared the responses of people in five countries regarding their preference for the degree of realism (2D/3D) in the appearance of an AI agent. The overall analysis results indicate that participants in South Korea, China, and the United States showed a preference for AI agents with a 2D appearance, whereas participants in Brazil and Russia showed a preference for AI agents with a 3D appearance.

First, South Korean people, like Japanese people, are known to view robots positively because of the influence of animism, and robots are also depicted as relatively positive entities in media in this country. Indeed, according to the findings of a study that analyzed South Korean people’s perceptions of humanoid robots from an intercultural perspective, compared to U.S. or Turkish participants, South Korean participants showed a greater preference for robots resembling humans or animals over robots resembling machines or plants (Lee & Šabanović, 2014). However, in one study Twitter conversation data were analyzed to examine South Korean people’s perceptions of the humanoid robot Geminoid, which was at the center of uncanny valley discussions because of its close resemblance to humans. The researchers reported that South Korean Twitter users showed slightly more positive perceptions of Geminoid compared to English-speaking users, but many more negative perceptions compared to Japanese users (Jang et al., 2018).

In China, traditionally, Buddhist philosophy is widespread, which includes a belief that animals, like humans, have souls. In particular, shadow puppet theater, which has been around for a long time, has helped to establish the perception that the souls of the departed can be reborn in different forms. Not only is the distinction between living and dead unclear, but the belief that all living things have souls has an impact on both Korean and Chinese cultures. It is because of these influences that people in both these countries show little aversion to robots that resemble humans (Trovato et al., 2019). On this topic, Li et al. (2010) reported that compared to German people, Chinese and South Korean people showed more certainty about, higher satisfaction with, greater affinity with, and more trust in robots. However, our results are slightly different from the findings of previous studies, in that the South Korean and Chinese participants in our study showed a preference for a 2D appearance for the AI agent, which resembles a human less than an AI agent with a 3D appearance.

According to Hofstede’s (1984) cultural dimensions theory, compared to Western countries, both South Korea and China have a strong long-term orientation and are very committed to maintaining traditions, meaning that these two countries have many cultural and religious similarities (Li et al., 2010; Trovato et al., 2019). These similarities are also reflected in recent religious and cultural trends: both countries were previously within the cultural spheres of Buddhism and Confucianism, but the number of Christians in each has gradually increased, and there has been a recent movement toward post-Confucianist cultural sensibilities, especially among the younger generations. In the largest survey conducted in South Korea (the 2015 Statistics Korea Population and Housing Survey), 20% of respondents aged 18 years or over reported Christian beliefs as members of Protestant denominations, meaning that Protestantism had surpassed Buddhism (16%) as the most common religion (with Catholicism in third place at 8%). Similarly, in China, the number of Protestant Christians is rapidly increasing, and the Chinese government estimates that, of the total population of 1.4 billion, 200 million people are Protestants (Chinese Government, 2020). When investigating perceptions of robot appearance in these two countries, these statistics suggest that there may be limitations in continuing to analyze perceptions from an animistic perspective based on Buddhist philosophy.

Unlike Buddhism, Christian belief and Western philosophy regard humans as special entities, with animals and objects (e.g., materials or minerals) perceived as entities of a lower order (Krikmann, 2007; Lee & Šabanović, 2014). This hierarchy is especially pronounced in countries with a traditionally strong Christian culture, above all the United States, which was founded against a Christian background and whose laws were influenced by the Bible (Krikmann, 2007; Lee & Šabanović, 2014). Indeed, in a study of preferences for robot appearance, people in the United States showed a stronger preference than people from other countries for robots that resembled plants or machines, demonstrating an aversion to robots with a human-like appearance (Lee & Šabanović, 2014). We therefore attribute our South Korean, Chinese, and North American participants’ preference for the 2D AI agent over the 3D AI agent to the sharp distinction drawn between people and machines in the Christian thinking prevalent in these three cultures.

Unlike participants in the three abovementioned countries, participants in Brazil and Russia tended to prefer a 3D appearance for the AI agent over a 2D appearance. In Brazil, because of adverse economic circumstances such as a wide gap between rich and poor, a 9% illiteracy rate has been reported among the population aged 10 years or above (da Silva Garcia et al., 2019; Trovato et al., 2015, 2017), and 18.3% of the population aged 15 years or above is functionally illiterate. The difficulties these people have with reading and writing hinder their communication with others, and they are also less familiar than their peers with technology such as computers (Trovato et al., 2017). Trovato et al. (2017) reported that technologically illiterate people react more strongly when they encounter new technologies, such as anthropomorphic robots; in their investigation of Brazilians’ preferences for anthropomorphic robots, they found that Brazilians preferred robots that closely resemble humans, and that this trend was even stronger among less educated people. Similarly, in our study Brazilians preferred a 3D appearance, which can be considered closer to human form, over a 2D appearance; this finding could be linked to the high rate of technological illiteracy among Brazilian people.

Finally, few related studies have been conducted in Russia, although Ivanov et al. (2018) showed that Russian young adults (18–30 years old) were not particularly interested in the degree of anthropomorphism of robot appearance. Given this dearth of research, our finding that Russian respondents preferred the 3D AI agent to the 2D AI agent is a novel result.

Our results show that men in all five countries strongly preferred an agent of the opposite gender (female appearance). Among women, however, only those in the United States and Brazil preferred agents of the opposite gender (male appearance); women in South Korea, China, and Russia preferred agents of their own gender (female appearance). ter Stal et al. (2020) reported that masculine agents were perceived as strong and intellectual, whereas feminine agents were perceived as more sociable and affable. However, as the roles of men and women change over time, perceptions of male and female agents are also changing: users define suitable roles for AI agents of each gender based on their own evolving perspectives on male and female roles (Nag & Yalçın, 2020).

Nevertheless, studies of gender preference for agents have not shown a consistent trend, with some respondents strongly preferring agents of their own gender and others preferring agents of the opposite gender (ter Stal et al., 2020). In our study, South Korea and the United States were the only two countries in which the post hoc analysis yielded significant results for women with regard to the gender of the AI agent. South Korean women preferred a female agent, which we attributed to the historically strong influence of Confucian culture in that country. Since the 1990s Korean society has been plagued by serious conflicts between men and women, and controversies and disputes related to gender inequality have continued (Ryu & Kim, 2019). In other words, although Confucian culture in South Korea is gradually fading, the country still has a strongly paternalistic and male-dominated culture, and this perception is also projected onto the choice of agent gender, resulting in women preferring female agents over male agents (Kang, 2014). By contrast, the United States has a relatively low gender inequality index (Crotti et al., 2020); we therefore attributed North American women’s preference for a male agent to the absence of any significant aversion to male agents.

Conclusion, Limitations, and Directions for Future Research

The objective of this study was to analyze cultural differences in preference for the type of visualization of an automobile AI agent. The results show that people in South Korea, China, and the United States preferred the AI agent with a 2D appearance, and people in Brazil and Russia preferred the AI agent with a 3D appearance.

Although preferences for realism in the external appearance of AI agents or robots have been investigated from an intercultural perspective in previous studies, there are very few studies in which a sample as large as ours has been used. Our findings are, therefore, valuable in terms of providing more reliable data on this aspect of preference for external appearance of an AI agent. In particular, almost no research has previously been conducted on this topic in Brazil and Russia; therefore, our findings can provide direction for future research on preferences among users in these countries for the appearance of AI agents for use in self-driving automobiles. Furthermore, the findings of this study can offer design guidelines for personalized agent appearances accounting for differences in preferences between countries, and these results are timely, as the age of self-driving cars is now beginning.

Nevertheless, this study has a limitation. Because it was a quantitative survey, we could not develop a deeper understanding of the reasons for the cultural and gender differences in preference for AI agent appearance. Our findings were analyzed mostly from a religious and philosophical point of view within each cultural context, and these interpretations will need to be verified through qualitative investigation to better elucidate the causes of these preferences. Future studies could use in-depth qualitative interviews with users in these countries to derive more precise conclusions based on a detailed examination of the factors driving the differences in preferences.

References

Akalin, N., & Loutfi, A. (2021). Reinforcement learning approaches in social robotics. Sensors, 21(4), Article 1292.
https://doi.org/10.3390/s21041292

Andreu-Perez, J., Deligianni, F., Ravi, D., & Yang, G.-Y. (2017). Artificial intelligence and robotics [White paper]. UK-RAS Network. https://bit.ly/3iHLsol

Bartneck, C., Nomura, T., Kanda, T., Suzuki, T., & Kato, K. (2005). Cultural differences in attitudes towards robots [Paper presentation]. AISB Convention: Social Intelligence and Interaction in Animals, Robots and Agents. United Kingdom. https://bit.ly/39lP2yT

Cho, G.-H. (2007). The Confucian origin of the East Asian collectivism [In Korean]. The Korean Journal of Social and Personality Psychology, 21(4), 21–53.
https://doi.org/10.21193/kjspp.2007.21.4.002

Crotti, R., Geiger, T., Ratcheva, V., & Zahidi, S. (2020). Global gender gap report 2020. World Economic Forum. https://bit.ly/3CzGylj

Da Costa, P. C. F. (2018). Conversing with personal digital assistants: On gender and artificial intelligence. Journal of Science and Technology of the Arts, 10(3), 59–72.
https://doi.org/10.7559/citarj.v10i3.563

da Silva Garcia, G., Christmann, G. H. G., da Silva Guerra, R., Librelotto, G. R., Hirt, E. R., & Fumagalli, M. R. (2019). Evaluation of exercise motivation competence of a humanoid robot: A case study in Brazil. In Proceedings of the 19th International Conference on Advanced Robotics (pp. 462–467). Institute of Electrical and Electronics Engineers.
https://doi.org/10.1109/ICAR46387.2019.8981555

Destephe, M., Brandao, M., Kishi, T., Zecca, M., Hashimoto, K., & Takanishi, A. (2015). Walking in the uncanny valley: Importance of the attractiveness on the acceptance of a robot as a working partner. Frontiers in Psychology, 6, Article 204.
https://doi.org/10.3389/fpsyg.2015.00204

The Economist. (2020, September 15). Protestant Christianity is booming in China: President Xi does not approve. https://econ.st/3lJfMPz

Eyssel, F., & Hegel, F. (2012). (S)he’s got the look: Gender stereotyping of robots. Journal of Applied Social Psychology, 42(9), 2213–2230.
https://doi.org/10.1111/j.1559-1816.2012.00937.x

Fraune, M. R., Oisted, B. C., Sembrowski, C. E., Gates, K. A., Krupp, M. M., & Šabanović, S. (2020). Effects of robot-human versus robot-robot behavior and entitativity on anthropomorphism and willingness to interact. Computers in Human Behavior, 105, Article 106220.
https://doi.org/10.1016/j.chb.2019.106220

Geraci, R. M. (2006). Spiritual robots: Religion and our scientific view of the natural world. Theology and Science, 4(3), 229–246.
https://doi.org/10.1080/14746700600952993

Hofstede, G. (1984). Culture’s consequences: International differences in work-related values (Vol. 5). Sage Publications.

Hofstede, G., Hofstede, G. J., & Minkov, M. (2005). Cultures and organizations: Software of the mind (2nd ed.). McGraw-Hill.

Hong, J. W. (2020). Why is artificial intelligence blamed more? Analysis of faulting artificial intelligence for self-driving car accidents in experimental settings. International Journal of Human–Computer Interaction, 36(18), 1768–1774.
https://doi.org/10.1080/10447318.2020.1785693

Ivanov, S., Webster, C., & Garenko, A. (2018). Young Russian adults’ attitudes towards the potential use of robots in hotels. Technology in Society, 55, 24–32.
https://doi.org/10.1016/j.techsoc.2018.06.004

Jang, P. S., Jung, W. H., & Hyun, J. S. (2018). Analysis of differences in uncanny valley phenomenon by languages using social media data [In Korean]. Journal of the Ergonomics Society of Korea, 37(2), 111–121.

Kang, M. (2014). How do East Asians live within Confucian family relations: Changes of practices of individuality represented in Korean and Chinese television drama. Media & Society, 22(1), 162–222.

Krikmann, A. (2007, November 5–12). The great chain of being as the background of personificatory and depersonificatory metaphors in proverbs and elsewhere [Paper presentation]. 1st Interdisciplinary Colloquium on Proverbs, Tavira, Portugal.

Lee, D. (2018). Science technology: What are social robots? [In Korean]. TTA Journal, 178, 68–69. https://bit.ly/3nN2NiJ

Lee, H. R., & Šabanović, S. (2014). Culturally variable preferences for robot design and use in South Korea, Turkey and the United States. Proceedings of 9th ACM/IEEE International Conference on Human-Robot Interaction (pp. 17–24). Institute of Electrical and Electronics Engineers.
https://doi.org/10.1145/2559636.2559676

Lee, H. R., Sung, J., Šabanović, S., & Han, J. (2012, September). Cultural design of domestic robots: A study of user expectations in Korea and the United States. 21st IEEE International Symposium on Robot and Human Interactive Communication (pp. 803–808). Institute of Electrical and Electronics Engineers.
https://doi.org/10.1109/ROMAN.2012.6343850

Li, D., Rau, P. L. P., & Li, Y. (2010). A cross-cultural study: Effect of robot appearance and task. International Journal of Social Robotics, 2(2), 175–186.
https://doi.org/10.1007/s12369-010-0056-9

Lohani, M., Stokes, C., McCoy, M., Bailey, C. A., & Rivers, S. E. (2016, March). Social interaction moderates human-robot trust-reliance relationship and improves stress coping. Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction (pp. 471–472). Institute of Electrical and Electronics Engineers.
https://doi.org/10.1109/HRI.2016.7451811

MacDorman, K. F., Vasudevan, S. K., & Ho, C.-C. (2009). Does Japan really have robot mania? Comparing attitudes by implicit and explicit measures. AI & Society, 23(4), 485–510.
https://doi.org/10.1007/s00146-008-0181-2

Mori, M. (1970). The uncanny valley [In Japanese]. Energy, 7, 33–35.

Nag, P., & Yalçın, Ö. N. (2020). Gender stereotypes in virtual agents. Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents (pp. 1–8). Association for Computing Machinery.
https://doi.org/10.1145/3383652.3423876

Nomura, T., Kanda, T., Suzuki, T., Jeonghye, H., Shin, N., Burke, J., & Kato, K. (2007). Implications on humanoid robots in pedagogical applications from cross-cultural analysis between Japan, Korea, and the USA. Proceedings of the 16th IEEE International Symposium on Robot and Human Interactive Communication (pp. 1052–1057). Institute of Electrical and Electronics Engineers.
https://doi.org/10.1109/ROMAN.2007.4415237

O’Neill-Brown, P. (1997). Setting the stage for the culturally adaptive agent. Proceedings of the 1997 AAAI Fall Symposium on Socially Intelligent Agents (pp. 93–97). Association for the Advancement of Artificial Intelligence Press. https://bit.ly/3Ack33j

Rau, P. L. P., Li, Y., & Li, D. (2009). Effects of communication style and culture on ability to accept recommendations from robots. Computers in Human Behavior, 25(2), 587–595.
https://doi.org/10.1016/j.chb.2008.12.025

Ryu, Y., & Kim, Y.-M. (2019). An exploratory study on gender conflict perception in Korea: Focused on the moderating effect of gender [In Korean]. Korea Social Policy Review, 26(4), 131–160.
https://doi.org/10.17000/kspr.26.4.201912.131

Samuel, J. L. (2019). Company from the uncanny valley: A psychological perspective on social robots, anthropomorphism and the introduction of robots to society. Ethics in Progress, 10(2), 8–26.
https://doi.org/10.14746/eip.2019.2.2

Sauer, J., Sonderegger, A., & Schmutz, S. (2020). Usability, user experience and accessibility: Towards an integrative model. Ergonomics, 63(10), 1207–1220.
https://doi.org/10.1080/00140139.2020.1774080

Seyama, J. I., & Nagayama, R. S. (2007). The uncanny valley: Effect of realism on the impression of artificial human faces. Presence: Teleoperators and Virtual Environments, 16(4), 337–351.
https://doi.org/10.1162/pres.16.4.337

Song, Y., & Luximon, Y. (2020). Trust in AI agent: A systematic review of facial anthropomorphic trustworthiness for social robot design. Sensors, 20(18), Article 5087.
https://doi.org/10.3390/s20185087

Strömberg, H., Pettersson, I., Andersson, J., Rydström, A., Dey, D., Klingegård, M., & Forlizzi, J. (2018). Designing for social experiences with and within autonomous vehicles – Exploring methodological directions. Design Science: An International Journal, 4, Article e13.
https://doi.org/10.1017/dsj.2018.9

Tay, T. T., Low, R., Loke, H. J., Chua, Y. L., & Goh, Y. H. (2018). Uncanny valley: A preliminary study on the acceptance of Malaysian urban and rural population toward different types of robotic faces. Proceedings of the 3rd International Conference on Science, Technology, and Interdisciplinary Research (IOP Conference Series, Vol. 344, Article 012012). Institute of Physics Publishing. 
https://doi.org/10.1088/1757-899X/344/1/012012

ter Stal, S., Tabak, M., op den Akker, H., Beinema, T., & Hermens, H. (2020). Who do you prefer? The effect of age, gender and role on users’ first impressions of embodied conversational agents in eHealth. International Journal of Human–Computer Interaction, 36(9), 881–892.
https://doi.org/10.1080/10447318.2019.1699744

Trovato, G., De Saint Chamas, L., Nishimura, M., Paredes, R., Lucho, C., Huerta-Mercado, A., & Cuellar, F. (2019). Religion and robots: Towards the synthesis of two extremes. International Journal of Social Robotics, 13, 539–556.
https://doi.org/10.1007/s12369-019-00553-8

Trovato, G., Ramos, J. G., Azevedo, H., Moroni, A., Magossi, S., Ishii, H., … Takanishi, A. (2015). “Olá, my name is Ana”: A study on Brazilians interacting with a receptionist robot. In U. Saranli & S. Kalkan (Eds.), Proceedings of the 2015 International Conference on Advanced Robotics (pp. 66–71). Institute of Electrical and Electronics Engineers.
https://doi.org/10.1109/ICAR.2015.7251435

Trovato, G., Ramos, J. G., Azevedo, H., Moroni, A., Magossi, S., Simmons, R., … Takanishi, A. (2017). A receptionist robot for Brazilian people: Study on interaction involving illiterates. Paladyn, Journal of Behavioral Robotics, 8(1), 1–17.
https://doi.org/10.1515/pjbr-2017-0001

Wang, L., Rau, P. L. P., Evers, V., Robinson, B. K., & Hinds, P. (2010). When in Rome: The role of culture and context in adherence to robot recommendations. Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction (pp. 359–366). Association for Computing Machinery.
https://doi.org/10.1109/HRI.2010.5453165

Weber, J. (2005). Helpless machines and true loving care givers: A feminist critique of recent trends in human-robot interaction. Journal of Information, Communication and Ethics in Society, 3(4), 209–218.
https://doi.org/10.1108/14779960580000274

Xu, J., & Howard, A. (2020). How much do you trust your self-driving car? Exploring human-robot trust in high-risk scenarios. Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (pp. 4273–4280). Institute of Electrical and Electronics Engineers.
https://doi.org/10.1109/SMC42975.2020.9282866

Table 1. Basic Characteristics of the Participants


Note. SUV/RV = sports utility vehicle/recreational vehicle.


Table 2. Overall Comparison Between the Countries for Preference in Regard to Gender and Realism of Artificial Intelligence Agent


Figure 1. Overall Comparison Between the Countries for Preference in Regard to Gender and Realism of Artificial Intelligence Agent


Table 3. Analysis of Variance Results for South Korean Respondents


Note. SS = sum of squares; MS = mean squares.


Table 4. Post Hoc Analysis for South Korean Respondents’ Preference for Appearance of Artificial Intelligence Agent


Note. * p < .05.



Figure 2. Preferences of South Korean Respondents for Appearance of Artificial Intelligence Agent



Figure 3. Preferences of Chinese Respondents for Appearance of Artificial Intelligence Agent


Table 5. Analysis of Variance Results for Chinese Respondents’ Preference for Appearance of Artificial Intelligence Agent


Note. SS = sum of squares; MS = mean squares.


Table 6. Post Hoc Analysis for Chinese Respondents’ Preference for Appearance of Artificial Intelligence Agent


Note. * p < .05.


Table 7. Analysis of Variance Results for American Respondents’ Preference for Appearance of Artificial Intelligence Agent


Note. SS = sum of squares; MS = mean squares.



Figure 4. Preferences of North American Respondents for Appearance of Artificial Intelligence Agent


Table 8. Post Hoc Analysis for North American Respondents’ Preference for Appearance of Artificial Intelligence Agent


Note. * p < .05.


Table 9. Post Hoc Analysis for Brazilian Respondents’ Preference for Appearance of Artificial Intelligence Agent


Note. * p < .05.



Figure 5. Preferences of Brazilian Respondents for Appearance of Artificial Intelligence Agent


Table 10. Analysis of Variance Results for Brazilian Respondents’ Preference for Appearance of Artificial Intelligence Agent


Note. SS = sum of squares; MS = mean squares.



Figure 6. Preferences of Russian Respondents for Appearance of Artificial Intelligence Agent


Table 11. Analysis of Variance Results for Russian Respondents’ Preference for Appearance of Artificial Intelligence Agent


Note. SS = sum of squares; MS = mean squares.


Table 12. Post Hoc Analysis for Russian Respondents’ Preference for Appearance of Artificial Intelligence Agent


Note. * p < .05.


Hoonsik Yoo, Department of New Media, Seoul Media Institute of Technology, Seoul 07590, Republic of Korea. Email: [email protected]


© 2021 Scientific Journal Publishers Limited. All Rights Reserved.