Ferenc RIBNÍ

Deep Analysis of Higher Education Students' Attitudes Towards Artificial Intelligence

Introduction

From an interdisciplinary perspective, AI is more than a system of algorithms and data sets: it is a new kind of dialogue between human and machine (Molenaar, 2022; Beishui, 2022). Our study invites the reader to view AI not only as a tool, but as a system whose deeper understanding can help push our intellectual limits.

Our research has explored this phenomenon in detail and confirmed it empirically, emphasising that the lack of knowledge, or its superficial, fragmentary possession, not only limits us but also acts as a kind of internal boundary that narrows the horizon of understanding (Ribní, 2025a). This insight points to a paradox: the precondition for the development of human thought is precisely the awareness of the lack of knowledge, a lack that generates both anxiety and desire, as well as the dynamic that drives the search for knowledge. Ignorance thus becomes not merely a limitation but a compass of cognition that permeates the fundamental structure of intellectual progress (Foucault, 1970). Following this line of theoretical reasoning, our research model is built on three fundamental pillars: knowledge of AI, active use of the technology, and trust in AI systems.

These factors are critical determinants of the effective and efficient application of AI: users must possess appropriate knowledge, be active users of the technology, and have confidence in the functioning of AI systems.

The use of AI in education and research is a complex issue that requires a combination of positivist and constructivist approaches (Davis, 2005). The benefits of using AI, such as the ability to process data quickly and accurately, are undoubtedly valuable. However, it is essential not to overlook the role of subjective interpretations, particularly in areas where human experiences, ethical considerations, and social contexts significantly influence decisions (Bredenoord, 2016; Sen, 2009). AI is thus essential in modern education and research processes, but it should always be applied with caution. When using this technology, the following should be considered:

Consequently, while AI should be utilised, its development and applications must ensure that it does not become a fully autonomous decision-maker, but rather that human oversight and ethical considerations remain paramount.

Methods

The objective of our current research is to explore the threefold structure outlined above (knowledge, use, and trust; see Figure 1), which is one of the fundamental requirements for the growing use of AI. Furthermore, our research aims to emphasise the role of teachers in raising awareness of the importance of these factors in the use of AI. The research questions guiding our study were as follows:

The study aims to explore in depth the knowledge, confidence level, and willingness to use AI among higher education students. Our primary objective was to assess the extent of students' knowledge of AI technology, as well as their perceptions of it and their attitudes towards its use in educational settings. It was imperative to explore the extent to which students perceive AI as a trustworthy source and how confident they are in its responses compared to traditional methods. Furthermore, the research also aimed to determine the extent to which students are willing to incorporate AI into their learning process (see Figure 1 for the research design).

Validation of the questionnaire

Before the main research, a pilot study (N = 150) was conducted to test the reliability and validity of the questionnaire, as it was self-developed rather than adapted. Reliability was measured using Cronbach's alpha (α = 0.806); a value above 0.80 indicates a good level of internal consistency. In terms of validity, both content validity and construct validity were examined, and the results confirmed that the questionnaire adequately measures the targeted concepts.
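For readers unfamiliar with the coefficient, Cronbach's alpha can be computed directly from an item-response matrix with the standard formula α = k/(k−1) · (1 − Σσᵢ²/σ²_total). The sketch below is purely illustrative: it uses Python with NumPy and a small hypothetical pilot matrix, not the study's data or software.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot data: 6 respondents x 4 Likert items (1-5)
pilot = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(pilot), 3))
```

Values above roughly 0.80, as reported for the questionnaire, are conventionally read as good internal consistency.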

Figure 1. Research design

Source: author's own creation (draw.io, 2025)

Research design and data collection

A total of 365 respondents participated in the survey (N = 365). The target group of our research was students in higher education (students of the Budapest University of Technology and Economics and Eötvös Loránd University). Of the respondents, 55.34% (N = 202) were female and 44.66% (N = 163) were male. They were asked to complete the questionnaire online, with no time limit. The analyses were conducted using IBM SPSS Statistics (version 20); the methods applied were descriptive statistics, Cronbach's alpha, correlation analysis, linear regression, the VIF test, the Breusch-Pagan test, the Durbin-Watson statistic, Q-Q plots, and the Shapiro-Wilk test, while MATLAB was used for data visualisation (MATLAB, 2021). Our questionnaire was considered reliable, as Cronbach's alpha (α) was 0.805. The questionnaire included Likert-scale statements, as well as open-ended and multiple-choice questions.

In terms of place of residence, the data show the following distribution: 30.41% live in a village (N=111), 22.19% in a small town (under 50,000 inhabitants, N=81), 26.85% in a big city (over 50,000 inhabitants, N=98), and 20.55% in the capital (N=75). 55.89% (N=204) of the respondents were undergraduate students and 44.11% (N=161) were master's students.

Most participants fell within the 18–23 age range, while the least represented group consisted of those aged 27 and older (mean age μ = 24.25 years; the one-standard-deviation band μ ± σ spans 18.96–29.54 years, i.e. σ ≈ 5.29 years). The distribution between age groups shows that the surveyed group mainly represents the younger generation.

Qualitative analysis

The study has been supplemented with a qualitative section that analyses the open-ended questions. The corpus comprises 92,862 characters without spaces (approximately 2.8 author's sheets) and was constructed around the following items, which served as the foundation for the research questions:

The qualitative analysis aims to identify, through qualitative content analysis (Hsieh & Shannon, 2005; Elo & Kyngäs, 2008), students' understanding of the concept of AI, their basic knowledge of how AI works, and the educational benefits, drawbacks, and future potential of the technology, building on the following research questions:

Qualitative methods

For analytical purposes, we applied the ATLAS.ti software, an advanced qualitative data management tool that allows for the systematic coding and analysis of large amounts of textual data (corpus) (Figure 1). Using the software, we coded the data in a structured way, allowing for the identification of patterns and themes and thus contributing to a more comprehensive understanding of the research questions (Tenny et al., 2006; ATLAS.ti, 2025).

The use of ATLAS.ti is significant as it offers a range of tools for thematic analysis and for fast, accurate coding and categorisation, thus supporting the reliability and validity of qualitative research. An inductive approach [conventional content analysis (Hsieh & Shannon, 2005)] was used for coding, i.e., themes were identified based on patterns and contextual relationships in the responses. This procedure makes it possible to explore the context and hidden structures of the responses in depth. Deductive coding elements were also used to a lesser extent: the analysis started from the data, while prior conceptual frameworks guided the fine-tuning of specific categories. This could be interpreted as a mixed approach, but because inductive analysis dominated, it is referred to as inductive analysis. The following analytical steps were carried out in the research:

This approach to qualitative analysis allows for a deeper understanding of the knowledge and attitudes about AI of the students involved in the research. The use of the ATLAS.ti software ensures that the analysis is transparent, reliable, and structured, providing a scientific basis for interpreting the results and validating the trends revealed by the research. To assess intra-coder reliability, the text was recoded one month after the first coding. Reliability was measured using Krippendorff's alpha (α), which was found to be 0.87, indicating a high degree of coding consistency and reproducibility and supporting the reliability of the procedure.
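To clarify the reliability measure, Krippendorff's alpha for nominal codes is α = 1 − D_o/D_e, the observed disagreement between the two coding passes relative to the disagreement expected by chance. The sketch below is a simplified re-implementation for the two-pass, no-missing-data case; the code labels and segment data are hypothetical, not the study's corpus.

```python
from collections import Counter

def krippendorff_alpha_nominal(coder1, coder2):
    """Krippendorff's alpha, nominal level, two coders, no missing values."""
    # Coincidence matrix: each unit contributes both ordered value pairs.
    coincidences = Counter()
    for a, b in zip(coder1, coder2):
        coincidences[(a, b)] += 1
        coincidences[(b, a)] += 1
    n_c = Counter()                       # marginal totals per category
    for (a, _b), cnt in coincidences.items():
        n_c[a] += cnt
    n = sum(n_c.values())                 # = 2 * number of units
    # Observed disagreement: fraction of mismatched pairs.
    d_o = sum(cnt for (a, b), cnt in coincidences.items() if a != b) / n
    # Expected disagreement under chance pairing of values.
    d_e = sum(n_c[a] * n_c[b] for a in n_c for b in n_c if a != b) / (n * (n - 1))
    return 1 - d_o / d_e

# Hypothetical first and second coding passes over ten text segments
first  = ["tech", "ethic", "tech", "user", "ethic", "tech", "user", "tech", "ethic", "user"]
second = ["tech", "ethic", "tech", "user", "tech",  "tech", "user", "tech", "ethic", "user"]
print(round(krippendorff_alpha_nominal(first, second), 3))
```

With a single mismatch in ten segments the toy example lands in the same region as the reported 0.87; values around 0.80 and above are conventionally taken as acceptable coding reliability.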

Results

The following section provides a synthesis of the qualitative and quantitative findings of our research. A detailed analysis of these results can be found in the two referenced articles (Ribní, 2025a; Ribní, 2025b).

Quantitative summary

The results of the survey showed that students self-assessed their knowledge of AI at a medium-high level (μ=3.47). However, the responses to the open-ended questions indicated that only 21.92% of the respondents could correctly define how AI works. This indicates that the students' subjective sense of knowledge differs significantly from their actual level of knowledge. Surprisingly, social media is the primary source of information about AI (74.6%), followed by formal education (43.3%) and professional articles (40.3%). The fact that a significant proportion of students obtain their information from social media may pose a risk, as these platforms tend to spread disinformation, which can distort students' perceptions of AI. This mixed information environment is reflected in students’ overall moderate confidence in AI (μ=3.17). However, the level of confidence is highly dependent on the specific application. Respondents are more confident in scientific information (57.7%) and educational aids (75%), but more sceptical about health (8.7%) and financial applications (6.7%).

Importantly, AI-generated errors and misinformation significantly reduce trust: 56% of respondents experienced a slight loss of trust, while 32.8% experienced a significant loss of trust. Gender differences were also found, with men generally showing more trust in AI than women (p<0.05; V=0.34). Despite these concerns, students' overall frequency of AI use is moderate (μ = 3.07), yet they rate the usefulness of AI as high (μ = 3.66). Students' image of AI's future is optimistic, particularly regarding its role in performing creative tasks (μ = 4.07).

Although students hold an optimistic outlook on AI's capabilities and use it moderately, the role of AI in education is divisive: 41% believe that AI can bring significant change, while 32.8% prefer education led by human teachers. Ethical concerns are also prominent, with 57.5% of respondents concerned about privacy and 45.5% about bias in algorithms. Interest in AI education is moderate but notable: 50% of students indicated they would take AI courses, while 23.1% are not interested and 26.9% are undecided. The way students interact with AI tools provides further insight into how they perceive the value of AI. There is a strong positive correlation between frequent use of AI and perceived usefulness (r=0.725; p<0.01): the more students use AI, the more useful they perceive it to be. The moderately strong correlation (r=0.566; p<0.01) between the perceived importance of knowledge and the usefulness of AI highlights the role of knowledge in the adoption of the technology.
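For illustration, a Pearson correlation of this kind can be reproduced in a few lines; the sketch below uses SciPy on two hypothetical Likert-score vectors (frequency of use vs. perceived usefulness), not the survey's data, so the resulting r differs from the reported 0.725.

```python
# Pearson correlation between two hypothetical Likert-scale variables.
from scipy import stats

use_freq   = [1, 2, 2, 3, 3, 4, 4, 5, 5, 5]  # frequency of AI use
usefulness = [2, 2, 3, 3, 4, 4, 4, 5, 4, 5]  # perceived usefulness

r, p = stats.pearsonr(use_freq, usefulness)
print(f"r = {r:.3f}, p = {p:.4f}")
```

As in the study, a strong positive r with p below the significance threshold would indicate that heavier users rate the tool as more useful, though correlation alone does not settle the direction of the effect.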

Supporting this, the regression analysis shows that knowledge of AI and confidence in AI significantly affect the frequency of AI use (R²=0.458). Knowledge of AI has the most significant impact on its use, indicating that educational institutions should prioritise the effective integration of AI-based tools.

The results highlight a gap between students' perceived and actual AI knowledge, emphasising the need for structured AI education. While AI is seen as applicable, confidence varies by application, and ethical concerns remain significant. Social media as a primary information source raises misinformation risks. AI knowledge and confidence strongly influence usage, underscoring the role of education in AI adoption. Institutions should focus on effective AI integration while addressing trust and ethical issues. A more detailed analysis of the results is available in our previous paper (Ribní, 2025a).

Qualitative summary

The results of the qualitative content analysis can be structured along four research questions (RQs), which explore university students' definitions and perceptions of AI, its impact on education, and its future role. Based on the responses analysed, definitions of AI can be grouped into four main categories:

By discipline, it was observed that students in engineering and science faculties typically used technical definitions. In contrast, students in social sciences and humanities placed more emphasis on ethical and philosophical aspects. Students' perceptions of how AI works, and their accuracy, fall into three main categories:

Engineering students provided more detailed and accurate descriptions of how AI works, while social science students were more inclined to portray AI as an entity similar to human intelligence. When it comes to AI in university education, students recognised both its advantages and disadvantages, as follows.

Benefits

Disadvantages

Respondents agreed that AI is expected to play an increasing role in education over the next decade. The most frequently mentioned future trends are:

Overall, the majority of students have a positive attitude towards the role of AI in education, but are aware that there are challenges and risks in using the technology. The results highlight that the integration of AI in education requires not only technical development but also the development of appropriate pedagogical and ethical guidelines. The detailed qualitative analysis can be found in the conference proceedings of Imre Sándor II (Ribní, 2025b).

AI Knowledge and Perception: Lessons learned and final reflections

The growing presence of AI in education and society raises new questions about knowledge, confidence, and applicability. The results of this research highlight a significant gap between students' self-assessment and their actual knowledge. Although students self-report a medium-high level of knowledge of AI, responses to open-ended questions indicate that their actual knowledge of its definition and operation is limited. This asymmetry reflects the classic problem of human self-evaluation: a subjective sense of knowledge does not necessarily correlate with objective knowledge.

One of the most fascinating aspects of the research is the analysis of students' sources of information. The data shows that social media dominates the information landscape, ahead of formal education and professional articles. This phenomenon raises critical questions about the credibility and reliability of information. Social media, although a quick and widely available source of information, tends to disseminate disinformation that can distort students' perceptions of AI. This means that universities and educational institutions have a crucial role in disseminating scientifically sound knowledge about AI and fostering critical thinking. Students' confidence in AI is moderate, but strongly dependent on the field of application. While confidence in AI systems for scientific and educational purposes is relatively high, there is scepticism about applications in health and finance. This distinction suggests that perceptions of AI are influenced not only by the technology itself but also by its contextual application.

One of the primary sources of loss of trust is AI-generated errors and misinformation, highlighting the need for increased attention to transparency and trustworthiness by both technology developers and users. The research pays particular attention to the role of AI in education, which is a highly divisive issue. A significant proportion of students believe that AI can bring radical changes to education, while others continue to emphasise the central role of human teachers. This contrast also highlights the impact of AI on human interactions. While technology can make education more effective and personalised, it can also reduce the number and quality of direct human interactions.

The qualitative analysis further deepens our nuanced understanding of definitions and perceptions of AI. Based on student responses, definitions fall into four main categories: technical definitions, user-centred approaches, ethical and social aspects, and philosophical and abstract approaches. Interestingly, the disciplinary breakdown also shows significant differences. While engineering and science students interpret AI from a technical perspective, social science and humanities students are more inclined to compare AI with human intelligence and to consider its social impact.

There is a strong positive correlation between students' attitudes towards AI and the frequency of AI use: the more students use AI, the more useful they perceive it to be. However, this also raises the question of the extent to which frequency of use is associated with the development of critical thinking: do students use AI as a mere tool, or are they able to understand its deeper mechanisms of operation?

The research also has an important message for education policymakers. The results suggest that increasing the knowledge of AI and the effective integration of AI-based educational tools can be key factors for the successful adoption of technology. Emphasising the teaching of AI means not only developing technical skills but also raising awareness of its ethical and social dimensions. AI is not just a new technological tool, but part of a paradigm shift that will have a profound impact on society and the future of education.

Overall, the results of the research confirm that significant challenges remain in understanding and applying AI, mainly due to the gap between students' subjective and actual knowledge. Attitudes and trust issues related to AI further nuance the discourse, while several opportunities and dilemmas arise regarding its role in education. How universities and other educational institutions respond to these challenges, and how they shape the vision of AI-based education in a world where AI plays an ever-growing role, will be decisive for the future.

Limitations and Supplementary Information

This study has certain limitations. The sample is limited to students, which may not fully represent broader societal attitudes toward AI. Additionally, self-reported data can introduce bias, as responses may reflect subjective perceptions rather than objective knowledge. Future research should expand the sample and incorporate longitudinal data to track changing attitudes over time. Further detailed data are available in the referenced study, and the questionnaire, along with the dataset, can be provided upon request. The Grammarly tool was employed to enhance the grammatical accuracy and overall clarity of the manuscript (Grammarly Inc., 2024).

References