THE INFLUENCE OF DIFFERENT TYPES OF TASKS AND USER EXPERIENCE LEVELS ON HUMAN-COMPUTER TRUST IN THE CHATGPT APPLICATION

*Sigit Rahmat Rizalmi  -  Institut Teknologi Kalimantan, Indonesia
Vridayani Anggi Leksono  -  Institut Teknologi Kalimantan, Indonesia
Abdul Alimul Karim  -  Institut Teknologi Kalimantan, Indonesia
Syarifah Chairunnisaa  -  Institut Teknologi Kalimantan, Indonesia
Putri Gesan Prabawa Anwar  -  Institut Teknologi Kalimantan, Indonesia

Abstract

This study investigates the influence of user experience level and task type on human-computer trust in the ChatGPT application. An experiment was conducted in which respondents completed four tasks in ChatGPT: a mathematics task, a descriptive writing task, a translation task, and a programming task. After completing each task, respondents filled out a prepared trust questionnaire. The 32 respondents were divided into two categories, novice and expert. The results show that user experience level significantly influences human-computer trust in the ChatGPT application across Task 1, Task 2, Task 3, and Task 4, whereas differences in task type have no significant influence on human-computer trust.
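The abstract describes comparing trust-questionnaire scores between the novice and expert groups for each task, but does not state which statistical test was used. A minimal sketch of such a group comparison, assuming Welch's t-test on per-task mean trust scores and using made-up illustrative numbers (the study's actual data are not reproduced here):

```python
import statistics

# Hypothetical per-respondent trust scores (Likert-scale averages) for one
# task. These values are illustrative only, not the study's data.
novice = [4.2, 4.5, 3.9, 4.8, 4.4, 4.1, 4.6, 4.3]
expert = [3.1, 3.4, 2.9, 3.6, 3.2, 3.0, 3.5, 3.3]

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    std_err = (var_a / len(a) + var_b / len(b)) ** 0.5
    return (mean_a - mean_b) / std_err

print(f"novice mean trust = {statistics.mean(novice):.2f}")
print(f"expert mean trust = {statistics.mean(expert):.2f}")
print(f"Welch t = {welch_t(novice, expert):.2f}")
```

A large |t| (compared against the t distribution at the appropriate degrees of freedom) would indicate a significant difference between experience levels, consistent with the effect the study reports.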

Keywords: Human-Computer Trust; ChatGPT; Artificial Intelligence; Type of Task; User Experience

Last update: 2024-12-23 09:54:54