This study investigated pre-service early childhood teachers’ perceptions of generative AI in lesson planning. Using a quantitative descriptive design, 30 fifth-semester students completed an online questionnaire assessing perceived usefulness, perceived ease of use, ethical concerns, sociopedagogical factors, and attitudes toward AI, complemented by open-ended questions. Findings indicate that students perceive AI as highly useful and easy to use, particularly for generating ideas, structuring lesson content, and supporting pedagogical tasks. Ethical concerns were moderate, highlighting awareness of potential overdependence, content accuracy, and academic integrity issues, while sociopedagogical support, including guidance from lecturers, peers, and institutional resources, enhanced confidence and responsible use. Overall, participants demonstrated positive attitudes toward AI, appreciating its practical benefits while recognizing the need for ethical literacy and structured institutional support. The study provides insight into how generative AI can be integrated effectively into early childhood education, balancing innovation, professional judgment, and responsible use.
Generative artificial intelligence (GenAI) has emerged as one of the most influential components of contemporary educational practice. It encompasses tools that produce text, images, assessments, and learning materials, and these tools are now widely used by teachers and students. For students preparing to work in early childhood education, this change brings opportunities but also uncertainties. Many pre-service teachers acknowledge that GenAI can simplify lesson planning, help create learning media, and support task documentation (Nyaaba et al., 2024). At the same time, pre-service teachers also understand the need to maintain professional judgment, protect students' data, and avoid over-dependence on automated suggestions generated by artificial intelligence tools (Almazán-López et al., 2025). These perspectives point to the challenge of integrating advanced technologies into pedagogical practice in a field that is largely grounded in developmental context (Butler-Ulrich et al., 2024; Ko et al., 2025).
Fifth-semester students in early childhood teacher education programs are at an important transitional stage in their education. They have acquired the theoretical and practical knowledge needed to understand the essential principles of pedagogy, but they are still building their professional identity. Their perceptions of technology therefore reflect emerging expectations in teacher preparation. Lesson planning is a core skill for an early childhood teacher, as it involves creativity and a sound understanding of children's development, along with the ability to apply different teaching methods across varied learning contexts (Ramírez et al., 2017). These skills rest on pedagogical reasoning, which guides teachers in making informed instructional decisions (as cited in Ramírez et al., 2017). When generative artificial intelligence becomes part of this process, it reshapes how students engage in reflection, judgment, and lesson planning.
Research on artificial intelligence in teacher education has evolved over the last decade. Many studies report positive outcomes of designing lesson plans with AI, such as increased productivity and broader access to ideas (Akanzire et al., 2025; Nyaaba et al., 2024; Tan, 2025). Other studies recognize risks of AI use, including shallow reasoning and ethical concerns (Barrett & Pack, 2023). However, most of this research focuses on general, secondary, or higher education settings. The literature on the early childhood context is very limited, especially regarding lesson plans that must be grounded in play-based learning, observation, and responsive interaction. This gap shows that studies on AI-supported lesson planning still lack focus on the unique responsibilities of early childhood teachers and on the developmental considerations that guide their decisions.
To respond to that gap, this study provides empirical evidence on how pre-service early childhood teachers think about generative AI in lesson plan design. It also examines, from a sociopedagogical perspective, how personal beliefs and social influences shape pre-service teachers' views on AI. Focusing on fifth-semester students allows the study to capture perceptions at a stage when fundamental skills and professional identity are taking shape. Understanding these perceptions is important for designing training and policy that support responsible and meaningful use of AI tools in early childhood education (Hsu, 2023). This provides a clearer picture of how future educators are preparing to apply artificial intelligence in their work while also acknowledging concerns about students' holistic development (Kölemen & Yıldırım, 2025).
This study employs an integrated framework to examine how the technical beliefs of pre-service teachers (Perceived Usefulness and Ease of Use) are mediated by their social environment (Sociopedagogical Factors) and bounded by their professional responsibilities (Ethical Awareness). By synthesizing these dimensions, the research explores the holistic readiness of future educators to adopt GenAI as a collaborative planning partner.
1.1. Generative AI in Education
Artificial intelligence in education refers to computational systems used to recognize patterns, generate predictions, and provide decision support through the processing of vast datasets (Feuerriegel et al., 2024). Within this category, GenAI has become especially important because it can generate new content based on learned patterns. These models create text, images, explanations, or activity ideas that can be used in different educational contexts (Chugh et al., 2025). Examples of artificial intelligence used in education include ChatGPT, Gemini, Perplexity, Copilot, and DeepSeek, as well as Canva AI, which is used to generate pictures and design materials. The ability of generative AI to create new material distinguishes it from other AI tools that focus on searching for or classifying information (Willet & Na, 2024).
The use of generative AI in educational contexts is growing rapidly. Teachers use generative AI to design learning materials and assessments, produce reading summaries, and generate varied activities for different levels of child development (Cordero et al., 2024). AI can provide alternative explanations, draft lesson plans, generate worksheets, or suggest strategies for delivering lesson material (Jensen et al., 2025). In other words, AI helps reduce the burden of lesson preparation without decreasing the quality of teaching, particularly for administrative tasks that usually take considerable time (Powell & Courchesne, 2024).
On the other hand, using AI also requires careful attention. Because generative AI models work through statistical prediction, they may produce false information, fabricated references, or activity ideas unsuitable for the context or developmental level of learners (Dúo Terrón, 2024; Resnick, 2024). Biases in training data can also carry over into the output. This condition emphasizes the importance of AI literacy so that teachers can assess the accuracy, relevance, and ethical appropriateness of AI-generated material (Kohnke et al., 2023).
The limitations of AI are even more visible in early childhood education, where lesson planning depends heavily on observation, responsiveness, and a deep understanding of child development (Berson et al., 2025; Thompson, 2019). AI may provide an initial draft of a lesson plan, but it struggles to capture the socio-emotional dimensions, play, and classroom dynamics that are unique to early childhood education. Teachers therefore need to review and adjust AI-generated materials to ensure they remain aligned with children's development and needs. Moreover, privacy issues and questions about the appropriateness of using AI materials with young children add to the need for professional supervision (Jauhiainen & Guerra, 2023).
Overall, generative AI provides substantial support in developing materials and planning lessons. However, its integration into early childhood education still requires strong pedagogical consideration so that the technology functions as a supporting tool, not as a substitute for professional decision-making (Choi, 2025).
1.2. An Integrated Framework for AI Acceptance in Teacher Preparation
To provide a robust analysis of how future educators perceive GenAI, this study employs an integrated framework that moves beyond a singular focus on technical usability by synthesizing the Technology Acceptance Model (TAM) with a sociopedagogical perspective and a dedicated analysis of ethical concerns (Bastani et al., 2025; Bazine, 2025). This multidimensional lens is essential because AI systems differ from traditional information systems; they rely on probabilistic decision-making and often exhibit "black box" characteristics that require high levels of professional trust and literacy (Acosta-Enriquez et al., 2024; Zhao et al., 2025). The foundation of the framework is built upon the TAM, which posits that two primary cognitive beliefs, perceived usefulness (PU) and perceived ease of use (PEOU), are the fundamental determinants of a user's attitude (ATT) toward and intention to use a specific technology (Diao et al., 2024). In the ECE context, perceived usefulness is enacted as "pedagogical augmentation," where students evaluate the degree to which AI can enhance their effectiveness in drafting play-based activities and aligning them with developmental milestones (Choi, 2025; Nyaaba et al., 2024). Perceived ease of use refers to the technical effortlessness of conversational interfaces, which allows students to focus their cognitive load on pedagogical content rather than technical operation (Kukul, 2023; Zhao et al., 2025).
However, these technical beliefs are not formed in isolation; they are deeply situated within the sociomaterial context of the university (Sy et al., 2024). This framework incorporates sociopedagogical factors (SP) as a critical mediator that shapes how technology is understood and legitimized through social interaction (Marrone et al., 2025). Social support theory suggests that "influential nodes" in a student's network, such as faculty mentors and peer groups, can directly intervene in their perception of a tool (Morales-Navarro et al., 2024). If faculty members model transparent, ethical, and pedagogically sound AI use, it validates the tool’s utility and reduces the initial hesitation or "AI anxiety" that students might feel during their formative professional years (Ifenthaler et al., 2024; Marrone et al., 2025). Peer influence further normalizes the technology through recommendations and shared success stories in assignments, effectively turning a technical tool into a socially accepted professional asset (Morales-Navarro et al., 2024; Wang et al., 2024).
Finally, ethical awareness (EC) functions as a professional boundary that moderates the transition from technical acceptance to responsible professional practice (Acosta-Enriquez et al., 2024; Jauhiainen & Guerra, 2023). For ECE teachers, acceptance is not purely driven by productivity; it is bounded by the responsibility to protect children's data privacy and the need to avoid cultural biases inherent in AI training data (Berson et al., 2025; Mnguni, 2025). High ethical concern regarding AI "hallucinations" or the risk of cognitive over-dependence creates a state of "cautious acceptance" (Acosta-Enriquez et al., 2024). This ethical filter ensures that the teacher remains the "human-in-the-loop" who critiques every AI-generated suggestion through the lens of Developmentally Appropriate Practice (DAP), preserving professional autonomy and pedagogical sensitivity (Powell & Courchesne, 2024; Ramírez et al., 2017). By integrating these five factors (usefulness, ease of use, social influences, ethical awareness, and attitude), this framework provides a holistic roadmap for understanding the complex readiness of future educators to adopt AI as a collaborative planning partner (See Table 1).
Table 1. Integrated Framework Summary
1.3. Preservice Teachers
Helping students understand the dynamics of AI use while they are shaping their professional identity is an important part of teacher education. Although they are often considered digital natives, some pre-service teachers may not feel competent in using educational technology for pedagogical purposes (Duan et al., 2024). This disparity between everyday technology use and its application in educational contexts is often a source of confusion or doubt (Hidayat et al., 2024).
Pre-service teachers' attitudes toward AI may also be strongly influenced by their professional identity and educational values. Because early childhood education students tend to prioritize play-based methods, sensitivity to child development, and social interaction, they tend to be more critical of AI use that is perceived as inconsistent with early childhood pedagogical principles (Hur, 2025). At the same time, they recognize that digital tools can help them save time, provide initial ideas, or broaden the variety of activities they can use in the learning process.
Students' interpretations of the function of AI in lesson planning are shaped by their experience, confidence, and personal beliefs. Students with positive prior experiences of technology tend to adopt AI more quickly, while others still need support or additional training to understand its limitations and potential risks. Their perceptions of AI are also influenced by interactions with peers, lecturers, and the campus environment (Ifenthaler et al., 2024).
Building strong AI literacy in teacher education is therefore essential. This literacy includes not only the skill to operate AI tools but also the ability to assess their benefits, risks, and ethical implications. Such literacy helps students develop a more mature professional approach to the use of technology, especially generative AI, in the learning process (Wang et al., 2024).
1.4. Lesson Planning in Early Childhood Education
Lesson planning in early childhood education focuses primarily on learning activities that support children's overall development. It relies largely on a play-based approach while considering cognitive, social, emotional, and physical aspects. Planning cycles help teachers move from observation and assessment through planning and implementation to evaluation, ensuring responsiveness to children's interests and needs (Speldewinde & Campbell, 2024). In this process, intentional teaching is a key element: the conscious actions of teachers that support learning while respecting children's autonomy (Leggett, 2025).
In play-based learning, children are positioned as co-researchers and co-constructors of knowledge, so learning plans need to create space for exploration, social interaction, and creativity (As & Excell, 2018). The teacher's role is to observe, interpret children's behavior, and use professional judgment to decide which strategies will best support the learning process. This approach requires flexibility, reflection, and decision-making skills that are sensitive to children's development.
Observation is the key foundation of this stage, as it helps teachers understand children's interests, assess developmental progress, and decide on necessary adjustments to learning. The evaluation process and communication with families and stakeholders are also strengthened by documentation such as anecdotal notes, portfolios, or digital platforms (Fyffe et al., 2024). Flexible documentation helps teachers plan adaptive, responsive learning experiences that remain focused on curricular goals (Restiglian et al., 2023).
In early childhood education, AI may provide initial drafts of lesson plans or activity ideas. However, it is difficult for AI to capture the social and emotional dynamics of children, the spontaneity of play, and the developmental context that matter in this setting. Teachers need to make professional judgments about AI outputs to ensure they are consistent with the principles of children's development and the value of play-based learning. Research suggests that while AI may support time efficiency, pedagogical decisions remain in the hands of teachers (Powell & Courchesne, 2024).
This research employed a quantitative descriptive survey design, supplemented by qualitative open-ended responses (a mixed-response survey), to explore the psychological and social drivers of AI adoption among pre-service teachers. This design facilitated a robust statistical evaluation of participant data, offering nuanced insights into the cognitive and affective determinants of AI integration in teacher education (Bastani et al., 2025). A structured survey approach was selected to systematically capture the perceptions of a cohort sharing homogeneous academic backgrounds and professional objectives. The participants were fifth-semester students enrolled in the Early Childhood Teacher Education (PGPAUD) program at Makassar State University. One cohort was selected from four available classes through purposive sampling logic, specifically targeting "information-rich cases" who possess the theoretical and practical foundations necessary to evaluate the professional utility of AI tools (Zickar & Keith, 2023). These 30 students were selected because they had mastered foundational pedagogical theory and were actively engaged in instructional design for their practicum, ensuring their insights were closely aligned with the study’s exploratory objectives. While the sample is site-specific, it provides a focused analysis of trends within this professional developmental stage.
Data were collected via a structured online questionnaire developed by adapting validated scales from the Technology Acceptance Model (TAM) and contemporary research on AI ethics. To establish content validity, the instrument underwent expert review by two ECE pedagogical specialists, ensuring the items were linguistically and contextually appropriate for early childhood instructional design. The final instrument was pilot-tested with a small subset of students to confirm the clarity and relevance of the items before full distribution.
The final instrument measured five primary constructs: perceived usefulness (5 items), perceived ease of use (4 items), ethical concerns (5 items), sociopedagogical factors (5 items), and attitudes toward AI (4 items). Each item in the questionnaire used a five-point Likert scale, ranging from 1 (strongly disagree) to 5 (strongly agree).
The qualitative data from the three open-ended questions were analyzed using Braun and Clarke's (2012) six-step reflexive thematic analysis to ensure methodological transparency. This systematic process involved: (1) familiarizing with the data through repeated reading of responses; (2) generating initial codes for perceived benefits and risks; (3) bundling these codes into broader conceptual themes; (4) reviewing themes against the raw data for accuracy; (5) defining and naming themes; and (6) writing the final narrative (Braun & Clarke, 2012). This framework ensures that the qualitative insights are grounded directly in the participants' authentic responses.
Content validity was evaluated through expert review and feedback, which allowed for minor adjustments to wording and relevance. Internal consistency was assessed using Cronbach’s alpha for each construct, with reliability coefficients reported in Table 6. All scales met the minimum reliability criteria required for initial exploratory research, supporting the instrument's application in descriptive analysis.
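The internal-consistency check described above can be sketched in a few lines. The snippet below is a minimal illustration, not the study's actual analysis script: the response matrix is invented for demonstration, and a real analysis would typically rely on a statistics package such as SPSS.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses (6 respondents x 4 items)
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 4, 5],
    [3, 4, 3, 3],
])
alpha = cronbach_alpha(scores)
```

Values of alpha at or above roughly .70 are conventionally read as acceptable for exploratory research, which is the criterion applied to the scales in Table 6.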
The questionnaire was distributed through an online communication platform commonly used by students. Participation was voluntary and anonymous to encourage candid responses. Respondents completed the survey independently. Demographic information such as age and frequency of AI use was also collected to support possible comparisons between usage patterns.
Quantitative data were analyzed using descriptive statistics, including mean scores (M), standard deviations (SD), and frequency distributions, to characterize overall participant perceptions. Pearson correlation coefficients were subsequently calculated to identify significant relationships between the core constructs of the integrated framework. The analysis focused on the sociopedagogical categories contained in the instrument, with the goal of providing a clear picture of how prospective teachers perceive generative AI in the context of lesson planning for early childhood.
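The analysis pipeline just described (composite scoring, descriptive statistics, then Pearson correlations) can be illustrated with a short NumPy sketch. The item responses below are hypothetical and the construct item counts are simplified for brevity; they do not reproduce the study's data.

```python
import numpy as np

# Hypothetical 5-point Likert responses for two constructs
# (8 respondents; PU shown with 3 items, ATT with 2 items -- invented data).
pu_items = np.array([[4, 4, 3], [4, 5, 4], [3, 3, 3], [5, 4, 5],
                     [3, 4, 3], [4, 4, 4], [3, 2, 3], [5, 5, 4]])
att_items = np.array([[4, 3], [4, 4], [3, 3], [5, 4],
                      [3, 3], [4, 4], [3, 3], [4, 5]])

# Composite scores: the mean of each construct's items per respondent
pu = pu_items.mean(axis=1)
att = att_items.mean(axis=1)

m_pu, sd_pu = pu.mean(), pu.std(ddof=1)  # descriptive statistics (M, SD)
r = np.corrcoef(pu, att)[0, 1]           # Pearson r between PU and ATT
```

The same steps extend directly to the five constructs of the integrated framework, yielding the means in Table 5 and the correlation matrix in Table 7.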
3.1. Result
3.1.1. Participant Characteristics and General Use of Generative AI
Table 2. Frequency of Using Generative AI
Source: Processed from primary data (2025)
Table 2 describes the thirty participants in the study, all of whom are female pre-service early childhood teachers in their fifth semester, aged 19 to 23 years, along with their frequency of generative AI use. No students reported never or seldom using AI. Most students use AI often, 4 students use it sometimes, and 3 students use it very often in their academic lives. These patterns show that generative AI is widely integrated into students' academic routines.
Table 3. Main Purposes for Using Generative AI (Multiple Responses)
Source: Processed from primary data (2025)
Table 3 shows students' main purposes for using generative AI; participants could choose more than one purpose. Most participants use AI to look for ideas or inspiration (27 participants). Eighteen participants use AI to complete assignments, 17 use it to create lesson plans, and 11 use it to make learning media. This indicates that students use AI both for academic productivity and for pedagogical preparation related to their studies.
Table 4. AI Tools Used by Participants (Multiple Responses)
Source: Processed from primary data (2025)
Table 4 highlights the AI tools most used by the participants; each participant could choose more than one tool. Most students use ChatGPT (28 participants), followed by Gemini (27), Perplexity (15), Canva AI (12), Copilot (7), and DeepSeek (1). This indicates that participants mostly rely on widely accessible and well-known conversational AI tools.
3.1.2. Descriptive Statistics of Main Constructs
There are five aspects measured in the questionnaire, namely PU, PEOU, EC, SP, and ATT. Composite scores were calculated by averaging item responses.
Table 5. Descriptive Statistics for PU, PEOU, EC, SP, and ATT
Source: Processed from primary data (2025)
Overall, participants showed positive perceptions of generative AI. Table 5 shows that PU scored highest (M = 3.79), indicating that students mostly view AI as a helpful tool in lesson planning. PEOU was second highest (M = 3.75), indicating that students perceive AI tools as easy to use. EC followed (M = 3.73), showing that students remain mindful of issues regarding the ethical use of AI. ATT came next (M = 3.66, SD = 0.68), indicating that students' overall dispositions toward the technology were generally positive. Lastly, sociopedagogical factors (SP) showed a slightly lower mean (M = 3.45), indicating that, compared with the other aspects, social and institutional influences are not as strong as students' individual evaluations.
3.1.3. Reliability Analysis
Table 6. Reliability of Measurement Scales
Source: Processed from primary data (2025)
Table 6 shows the reliability of the measurement scales used in this study. All scales showed acceptable to excellent internal consistency. The PU scale showed the highest reliability (α = .896), followed by ATT (α = .816) and EC (α = .811). PEOU (α = .717) and SP (α = .715) ranked lowest but remained acceptable.
3.1.4. Correlation Analysis
Table 7. Correlations Between PU, PEOU, EC, SP, and ATT
Source: Processed from primary data (2025)
In Table 7, Pearson correlations were used to explore relationships among the five constructs. The analysis revealed several meaningful patterns. PU showed a strong positive association with ATT (r = .740, p < .01), suggesting that students who find AI helpful tend to develop more positive views toward using it. Strong correlations also appeared between PEOU and ATT (r = .686, p < .01), and between SP and ATT (r = .595, p < .01). Ethical Concerns had weaker or non-significant relationships with PEOU and ATT, although it correlated moderately with SP (r = .514, p < .01). Overall, PU and PEOU appear most closely associated with ATT.
3.1.5. Qualitative Findings from Open-Ended Responses
The questionnaire included three open-ended questions asking students about the perceived benefits of generative AI in lesson planning, their main concerns about AI use among pre-service teachers, and the institutional support they expect from the university. Three main themes emerged regarding the perceived benefits of generative AI. Some participants revealed that AI helps them finish tasks faster, especially in creating lesson plans, teaching materials, and assessments; others noted that AI makes their work faster and easier, helping with lesson plan drafts and producing organized content within minutes. Participants also frequently referred to AI as a tool that gives them inspiration, helps them organize their ideas, and offers creative alternatives they would not have considered on their own. Moreover, some responses emphasized AI's ability to tailor content to students' needs, allowing them to create more adaptive and child-centered activities.
The participants also highlighted several worries, and three main concerns emerged. The biggest was the effect of AI dependence: participants worried that relying on AI might weaken their critical thinking, creativity, pedagogical judgment, and lesson planning skills. Another concern was that AI might make lesson planning overly mechanical, reducing teachers' sense of creativity and their pedagogical sensitivity. Finally, a smaller group of students mentioned plagiarism risks, data privacy, algorithmic bias, and irrelevant answers produced by AI tools.
Regarding institutional support from the university, three main themes emerged. First, participants expressed a need for structured training and literacy programs on AI use, asking for clear instruction on how AI works, how to use it responsibly, and how to evaluate AI-generated content. Second, some expected ethical guidance and clear institutional policies on AI use; several wanted more specific rules on when AI may be used, how to maintain academic integrity, and how to balance AI use with critical thinking. Third, participants expected access to AI tools and practice opportunities, delivered through workshops, hands-on sessions, access to AI platforms, and guidance on safe and legal tools.
3.2. Discussion
The findings reveal a consistent pattern in how pre-service early childhood teachers perceive generative AI for academic and pedagogical tasks. The high rating for perceived usefulness suggests that pre-service teachers view GenAI as a vital scaffolding tool for the initial stages of the ECE planning cycle (observation and idea generation), allowing them to quickly draft sensory or play-based activity frameworks that align with general developmental milestones (Leggett, 2025; Ramírez et al., 2017). While AI identifies 'commonality' by suggesting general developmental activities, its effectiveness in ECE is limited by its inability to account for 'individuality' and 'context.' Therefore, pre-service teachers must be trained to use AI as a 'collaborative partner' rather than a decision-maker, ensuring that every AI-generated play idea is filtered through their relational knowledge of the specific child (Powell & Courchesne, 2024; Ramírez et al., 2017). These perceptions reflect a broader trend in higher education. A study by Kazanidis and Pellas (2024) shows that generative AI supports academic performance and enhances satisfaction across diverse student groups. Studies based on the TAM also show that perceived usefulness is an early predictor of positive attitudes toward technological tools (Hamid et al., 2024; Zhao et al., 2025) and contributes to students' confidence in using AI in their pedagogical work (Kanont et al., 2024).
Students also showed a high level of PEOU, describing AI platforms as intuitive and simple to operate. This ease of use lowers initial hesitation and encourages consistent integration of AI into lesson planning. These perceptions align with previous findings that easy-to-use technologies tend to strengthen perceived usefulness and support technology adoption in educational settings (Matute-Vallejo & Melero-Polo, 2019; Quintana-Ordorika et al., 2024; Shakeel et al., 2023; Zhao et al., 2025).
Students' attitudes toward AI were generally positive. They appreciate AI as a tool that broadens creative possibilities, helps them organize lesson content, and complements rather than replaces their pedagogical reasoning. They also highlighted that effective AI use should maintain a balance between efficiency and thoughtful decision-making to preserve developmentally appropriate practice (Diao et al., 2024; Roettl & Terlutter, 2018).
The sociopedagogical aspect is viewed as one of the factors supporting the use of AI in lesson planning. Although its influence was lower than that of the other aspects in this study, it still affects students' overall perception of generative AI. Interestingly, most of the support students say they need is closely tied to this aspect. Their confidence in using AI is shaped by guidance from lecturers, collaboration with peers, and access to institutional resources. Some participants highlighted the need for structured AI literacy programs, workshops, and clear policies framing responsible use. These forms of support help pre-service teachers navigate AI use in alignment with academic and pedagogical standards, consistent with recent research showing that institutional conditions shape motivation and competent use of AI in teacher education (Lund et al., 2025; Quintana-Ordorika et al., 2024).
Ethical concerns were found to be moderate. While students expressed worries about accuracy and academic integrity, the most significant concern was the risk of professional over-dependence. This fear of dependence is particularly critical in ECE, where 'intentional teaching' requires reflexive, spontaneous responses to the dynamics of children's play (Leggett, 2025; Thompson, 2019). If pre-service teachers rely on automated scripts, they risk eroding the 'pedagogical sensitivity' and relational awareness needed to build a responsive and caring learning community (Leggett, 2025). Consequently, while these concerns did not lower interest in the tool, they emphasize that AI must remain a 'supporting partner' rather than a substitute for professional decision-making.
The correlation analysis strengthens the interpretation of these findings. Perceived usefulness and perceived ease of use showed strong associations with positive attitudes toward AI, whereas ethical concerns showed only a weak relationship. This suggests that students are aware of ethical risks but that this awareness does not significantly affect their attitudes toward AI. Sociopedagogical support showed a moderate association with attitudes toward AI, indicating that factors such as confidence, guidance, and structured learning environments influence how AI is applied responsibly in lesson planning (Kanont et al., 2024; Wang et al., 2024).
Overall, the results show that pre-service teachers perceive generative AI as beneficial, accessible, and relevant to their academic tasks. They are aware of the ethical risks, yet they are willing to engage with AI when supported by clear guidelines and meaningful institutional structures. The combination of quantitative and qualitative data provides a holistic picture of how usefulness, ease of use, ethical awareness, sociopedagogical support, and attitudes toward AI interact to shape students’ readiness to integrate AI into early childhood education contexts.
3.3. Implications and Limitations
The findings of this study provide several important implications for early childhood teacher education. First, the high perceived usefulness and ease of use of generative AI among pre-service teachers suggest that integrating AI tools into lesson planning might enhance creativity, efficiency, and alignment with child development principles (Hamid et al., 2024; Zhao et al., 2025). Accordingly, teacher education programs should consider implementing structured AI literacy initiatives, such as training workshops, practical exercises, and clear ethical guidelines, to ensure responsible and effective use of AI (Acosta-Enriquez et al., 2024; Quintana-Ordorika et al., 2024).
Moreover, sociopedagogical support from lecturers, peers, and institutional policies is critical for developing confidence and encouraging the application of AI in pedagogical practice (Lund et al., 2025). Ethical awareness, although moderate, highlights the need for consistent emphasis on academic integrity, data privacy, and verification of AI-generated content, reinforcing that AI complements rather than replaces critical thinking (Kajiwara & Kawabata, 2024; Uwosomah & Dooly, 2025).
Several limitations must also be considered when interpreting the results of this study. The study involved only a small sample of 30 students, which limits the generalizability of the findings to other pre-service teacher populations. The reliance on self-reported survey data may be influenced by social desirability or subjective perceptions, potentially leading participants to overestimate positive attitudes and perceived ease of use. Moreover, the study focused on perceptions rather than actual classroom practices, which restricts conclusions about how AI is integrated into real-life teaching scenarios.
Future research should expand the sample, include multiple institutions, and incorporate observational or experimental methods to examine how pre-service teachers apply AI in authentic lesson planning contexts and how ethical concerns manifest in practice. Despite these limitations, the study provides valuable insight into the factors shaping attitudes toward generative AI, highlighting the influence of technical usability, perceived benefits, ethical considerations, and institutional support in promoting responsible and effective AI integration in early childhood education.
This study offers exploratory insights into the perceptions of pre-service teachers regarding generative AI within a specific institutional context. While the small, purposive sample (n=30) limits universal generalizability, the findings reveal critical trends in how future Early Childhood Education (ECE) educators negotiate the tension between automated efficiency and relational pedagogical values (Akanzire et al., 2025; Zickar & Keith, 2023). Participants demonstrated strong technology acceptance, driven by positive perceptions of ease of use and utility. Specifically, they identified AI tools as intuitive, accessible partners capable of enhancing both creativity and lesson structure without compromising core pedagogical principles (Choi, 2025).
The analysis further indicates that ethical concerns and sociopedagogical factors currently occupy a moderate position in student perceptions. While participants are acutely aware of professional risks such as over-dependence and accuracy issues, these worries did not significantly lower their willingness to integrate AI into their lesson planning (Acosta-Enriquez et al., 2024). Instead, students emphasized the necessity of structured sociopedagogical support. They highlighted that guidance from lecturers and access to institutional training resources are essential for building the confidence required for responsible and effective AI adoption (Kanont et al., 2024). Ultimately, correlation analysis confirms that perceived usefulness and ease of use remain the primary predictors of positive attitudes, while sociopedagogical support serves as a vital bridge toward ethical professional integration.
In conclusion, pre-service teachers view generative AI as a beneficial and supportive partner that can revolutionize the lesson planning cycle when used with care. They exhibit a state of "cautious optimism," balancing their enthusiasm for productivity with a sophisticated awareness of their responsibilities toward young learners. To ensure that these benefits are maximized, teacher education programs must prioritize the integration of AI literacy, technical training, and clear ethical guidance (Leggett, 2025). Such a holistic approach will allow the next generation of educators to benefit from advanced technology while steadfastly maintaining the pedagogical integrity and human touch that define early childhood education (Bastani et al., 2025).
Acosta-Enriquez, B. G., Arbulú Ballesteros, M. A., Arbulu Perez Vargas, C. G., Orellana Ulloa, M. N., Gutiérrez Ulloa, C. R., Pizarro Romero, J. M., Gutiérrez Jaramillo, N. D., Cuenca Orellana, H. U., Ayala Anzoátegui, D. X., & López Roca, C. (2024). Knowledge, attitudes, and perceived ethics regarding the use of ChatGPT among generation Z university students. International Journal for Educational Integrity, 20(1), 10. https://doi.org/10.1007/s40979-024-00157-4
Akanzire, B. N., Nyaaba, M., & Nabang, M. (2025). Generative AI in teacher education: Teacher educators’ perception and preparedness. Journal of Digital Educational Technology, 5(1), ep2508. https://doi.org/10.30935/jdet/15887
Almazán-López, O., Hasbún, H., & Osuna-Acedo, S. (2025). Inteligencia artificial generativa e identidad (pos)digital docente [Generative artificial intelligence and (post)digital teacher identity]. IJERI: International Journal of Educational Research and Innovation, 24, 1–17. https://doi.org/10.46661/ijeri.11160
As, A. J. van, & Excell, L. (2018). Strengthening early childhood teacher education towards a play-based pedagogical approach. South African Journal of Childhood Education, 8(1). https://doi.org/10.4102/sajce.v8i1.525
Barrett, A., & Pack, A. (2023). Not quite eye to AI: Student and teacher perspectives on the use of generative artificial intelligence in the writing process. International Journal of Educational Technology in Higher Education, 20(1), 59. https://doi.org/10.1186/s41239-023-00427-0
Bastani, H., Bastani, O., Sungu, A., Ge, H., Kabakcı, Ö., & Mariman, R. (2025). Generative AI without guardrails can harm learning: Evidence from high school mathematics. Proceedings of the National Academy of Sciences, 122(26). https://doi.org/10.1073/pnas.2422633122
Bazine, I. (2025). Exploring the development of the technology acceptance model (TAM): A chronological overview. International Journal of Research and Scientific Innovation, 12(6), 1643–1655. https://doi.org/10.51244/IJRSI.2025.120600138
Berson, I. R., Berson, M. J., & Luo, W. (2025). Innovating responsibly: Ethical considerations for AI in early childhood education. AI, Brain and Child, 1(1), 2. https://doi.org/10.1007/s44436-025-00003-5
Braun, V., & Clarke, V. (2012). Thematic analysis. In APA handbook of research methods in psychology (Vol. 2, pp. 57–71). American Psychological Association. https://doi.org/10.1037/13620-004
Butler-Ulrich, T., Hughes, J., & Morrison, L. (2024). Creativity and generative AI for preservice teachers. In Contemporaneous issues about creativity. IntechOpen. https://doi.org/10.5772/intechopen.1007517
Choi, Y. (2025). Integrating ChatGPT into the design of 5E-based earth science lessons. Education Sciences, 15(7), 815. https://doi.org/10.3390/educsci15070815
Chugh, R., Turnbull, D., Morshed, A., Sabrina, F., Azad, S., Md Mamunur, R., Kaisar, S., & Subramani, S. (2025). The promise and pitfalls: A literature review of generative artificial intelligence as a learning assistant in ICT education. Computer Applications in Engineering Education, 33(2). https://doi.org/10.1002/cae.70002
Cordero, J., Torres-Zambrano, J., & Cordero-Castillo, A. (2024). Integration of generative artificial intelligence in higher education: Best practices. Education Sciences, 15(1), 32. https://doi.org/10.3390/educsci15010032
Diao, Y., Li, Z., Zhou, J., Gao, W., & Gong, X. (2024). A meta-analysis of college students’ intention to use generative artificial intelligence. In arXiv. https://doi.org/10.48550/arXiv.2409.06712
Duan, S., Exter, M., & Li, Q. (2024). In their ideal future, are preservice teachers willing to integrate technology in their teaching and why? TechTrends, 68(4), 734–748. https://doi.org/10.1007/s11528-024-00978-7
Dúo Terrón, P. (2024). Generative artificial intelligence: Educational reflections from an analysis of scientific production. Journal of Technology and Science Education, 14(3), 756. https://doi.org/10.3926/jotse.2680
Feuerriegel, S., Hartmann, J., Janiesch, C., & Zschech, P. (2024). Generative AI. Business & Information Systems Engineering, 66(1), 111–126. https://doi.org/10.1007/s12599-023-00834-7
Fyffe, L., Sample, P. L., Lewis, A., Rattenborg, K., & Bundy, A. C. (2024). Entering kindergarten after years of play: A cross-case analysis of school readiness following play-based education. Early Childhood Education Journal, 52(1), 167–179. https://doi.org/10.1007/s10643-022-01428-w
Hamid, M. F. A., Sahrir, M. S., Amiruddin, A. Z., Yahaya, M. F., & Sha’ari, S. H. (2024). Evaluating student acceptance of interactive infographics module for Arabic grammar learning using the technology acceptance model (TAM). International Journal of Learning, Teaching and Educational Research, 23(9), 121–140. https://doi.org/10.26803/ijlter.23.9.7
Hidayat, R., Zainuddin, Z., & Mazlan, N. H. (2024). The relationship between technological pedagogical content knowledge and belief among preservice mathematics teachers. Acta Psychologica, 249, 104432. https://doi.org/10.1016/j.actpsy.2024.104432
Hsu, H.-P. (2023). Can generative artificial intelligence write an academic journal article? Opportunities, challenges, and implications. Irish Journal of Technology Enhanced Learning, 7(2), 158–171. https://doi.org/10.22554/ijtel.v7i2.152
Hur, J. W. (2025). Fostering AI literacy: Overcoming concerns and nurturing confidence among preservice teachers. Information and Learning Sciences, 126(1–2), 56–74. https://doi.org/10.1108/ILS-11-2023-0170
Ifenthaler, D., Majumdar, R., Gorissen, P., Judge, M., Mishra, S., Raffaghelli, J., & Shimada, A. (2024). Artificial intelligence in education: Implications for policymakers, researchers, and practitioners. Technology, Knowledge and Learning, 29(4), 1693–1710. https://doi.org/10.1007/s10758-024-09747-0
Jauhiainen, J. S., & Guerra, A. G. (2023). Generative AI and ChatGPT in school children’s education: Evidence from a school lesson. Sustainability, 15(18), 14025. https://doi.org/10.3390/su151814025
Jensen, L. X., Buhl, A., Sharma, A., & Bearman, M. (2025). Generative AI and higher education: A review of claims from the first months of ChatGPT. Higher Education, 89(4), 1145–1161. https://doi.org/10.1007/s10734-024-01265-3
Kajiwara, Y., & Kawabata, K. (2024). AI literacy for ethical use of chatbot: Will students accept AI ethics? Computers and Education: Artificial Intelligence, 6, 100251. https://doi.org/10.1016/j.caeai.2024.100251
Kanont, K., Pingmuang, P., Simasathien, T., Wisnuwong, S., Wiwatsiripong, B., Poonpirome, K., Songkram, N., & Khlaisang, J. (2024). Generative AI as a learning assistant: Factors influencing higher education students’ technology acceptance. Electronic Journal of E-Learning, 22(6), 18–33. https://doi.org/10.34190/ejel.22.6.3196
Kazanidis, I., & Pellas, N. (2024). Harnessing generative artificial intelligence for digital literacy innovation: A comparative study between early childhood education and computer science undergraduates. AI, 5(3), 1427–1445. https://doi.org/10.3390/ai5030068
Ko, U., Hartley, K., & Hayak, M. (2025). Exploring AI in education: Preservice teacher perspectives, usage, and considerations. Journal of Information Technology Education: Research, 24, 021. https://doi.org/10.28945/5592
Kohnke, L., Moorhouse, B. L., & Zou, D. (2023). Exploring generative artificial intelligence preparedness among university language instructors: A case study. Computers and Education: Artificial Intelligence, 5, 100156. https://doi.org/10.1016/j.caeai.2023.100156
Kölemen, E. B., & Yıldırım, B. (2025). A new era in early childhood education: Teachers’ opinions on the application of artificial intelligence. Education and Information Technologies, 30(12), 17405–17446. https://doi.org/10.1007/s10639-025-13478-9
Leggett, N. (2025). Intentional teaching and the intentionality of educators: Time for careful, considerate, collaborative, and reflective practice. Early Childhood Education Journal, 53(1), 1–9. https://doi.org/10.1007/s10643-023-01550-3
Lund, B. D., Lee, T. H., Mannuru, N. R., & Arutla, N. (2025). AI and academic integrity: Exploring student perceptions and implications for higher education. Journal of Academic Ethics, 23(3), 1545–1565. https://doi.org/10.1007/s10805-025-09613-3
Marrone, R., Zamecnik, A., Joksimovic, S., Johnson, J., & De Laat, M. (2025). Understanding student perceptions of artificial intelligence as a teammate. Technology, Knowledge and Learning, 30(3), 1847–1869. https://doi.org/10.1007/s10758-024-09780-z
Matute-Vallejo, J., & Melero-Polo, I. (2019). Understanding online business simulation games: The role of flow experience, perceived enjoyment and personal innovativeness. Australasian Journal of Educational Technology, 35(3). https://doi.org/10.14742/ajet.3862
Mnguni, L. (2025). A qualitative analysis of South African preservice life sciences teachers’ behavioral intentions for integrating AI in teaching. Journal for STEM Education Research, 8(2), 230–256. https://doi.org/10.1007/s41979-024-00128-x
Morales-Navarro, L., Kafai, Y. B., Nguyen, H., DesPortes, K., Vacca, R., Matuk, C., Silander, M., Amato, A., Woods, P., Castro, F., Shaw, M., Akgun, S., Greenhow, C., & Garcia, A. (2024). Learning about data, algorithms, and algorithmic justice on TikTok in personally meaningful ways. https://doi.org/10.48550/arXiv.2405.15437
Nyaaba, M., Shi, L., Nabang, M., Zhai, X., Kyeremeh, P., Ayoberd, S. A., & Akanzire, B. N. (2024). Generative AI as a learning buddy and teaching assistant: Pre-service teachers’ uses and attitudes. In arXiv. https://doi.org/10.48550/arXiv.2407.11983
Powell, W., & Courchesne, S. (2024). Opportunities and risks involved in using ChatGPT to create first grade science lesson plans. PLOS ONE, 19(6), e0305337. https://doi.org/10.1371/journal.pone.0305337
Quintana-Ordorika, A., Camino-Esturo, E., Portillo-Berasaluce, J., & Garay-Ruiz, U. (2024). Integrating the maker pedagogical approach in teacher training: The acceptance level and motivational attitudes. Education and Information Technologies, 29(1), 815–841. https://doi.org/10.1007/s10639-023-12293-4
Ramírez, E., Clemente, M., Recamán, A., Martín-Domínguez, J., & Rodríguez, I. (2017). Planning and doing in professional teaching practice: A study with early childhood education teachers working with ICT (3–6 years). Early Childhood Education Journal, 45(5), 713–725. https://doi.org/10.1007/s10643-016-0806-x
Resnick, M. (2024). Generative AI and creative learning: Concerns, opportunities, and choices. In MIT Exploration of Generative AI. https://doi.org/10.21428/e4baedd9.cf3e35e5
Restiglian, E., Raffaghelli, J. E., Gottardo, M., & Zoroaster, P. (2023). Pedagogical documentation in the era of digital platforms: Early childhood educators’ professionalism in a dilemma. Education Policy Analysis Archives, 31. https://doi.org/10.14507/epaa.31.7909
Roettl, J., & Terlutter, R. (2018). The same video game in 2D, 3D or virtual reality: How does technology impact game evaluation and brand placements? PLOS ONE, 13(7), e0200724. https://doi.org/10.1371/journal.pone.0200724
Shakeel, S. I., Al Mamun, M. A., & Haolader, M. F. A. (2023). Instructional design with ADDIE and rapid prototyping for blended learning: Validation and its acceptance in the context of TVET Bangladesh. Education and Information Technologies, 28(6), 7601–7630. https://doi.org/10.1007/s10639-022-11471-0
Speldewinde, C., & Campbell, C. (2024). Building a better wall: Assessing children’s design technology learning in nature-based early childhood education. Canadian Journal of Science, Mathematics and Technology Education, 24(1), 39–56. https://doi.org/10.1007/s42330-024-00320-6
Sy, M., Siongco, K. L., Pineda, R. C., Canalita, R., & Xyrichis, A. (2024). Sociomaterial perspective as applied in interprofessional education and collaborative practice: A scoping review. Advances in Health Sciences Education, 29(3), 753–781. https://doi.org/10.1007/s10459-023-10278-z
Tan, Q. (2025). Reimagining teacher development in the era of generative AI: A scoping review. Teaching and Teacher Education, 168, 105236. https://doi.org/10.1016/j.tate.2025.105236
Thompson, M. (2019). Early childhood pedagogy in a socio-cultural medley in Ghana: Case studies in kindergarten. International Journal of Early Childhood, 51(2), 177–192. https://doi.org/10.1007/s13158-019-00242-7
Uwosomah, E. E., & Dooly, M. (2025). It is not the huge enemy: Preservice teachers’ evolving perspectives on AI. Education Sciences, 15(2), 152. https://doi.org/10.3390/educsci15020152
Wang, K., Ruan, Q., Zhang, X., Fu, C., & Duan, B. (2024). Pre-service teachers’ GenAI anxiety, technology self-efficacy, and TPACK: Their structural relations with behavioral intention to design GenAI-assisted teaching. Behavioral Sciences, 14(5), 373. https://doi.org/10.3390/bs14050373
Willet, K. B. S., & Na, H. (2024). Generative AI generating buzz: Volume, engagement, and content of initial reactions to ChatGPT in discussions across education-related subreddits. Online Learning, 28(2). https://doi.org/10.24059/olj.v28i2.4434
Zhao, Z., An, Q., & Liu, J. (2025). Exploring AI tool adoption in higher education: Evidence from a PLS-SEM model integrating multimodal literacy, self-efficacy, and university support. Frontiers in Psychology, 16. https://doi.org/10.3389/fpsyg.2025.1619391
Zickar, M. J., & Keith, M. G. (2023). Innovations in sampling: Improving the appropriateness and quality of samples in organizational research. Annual Review of Organizational Psychology and Organizational Behavior, 10(1), 315–337. https://doi.org/10.1146/annurev-orgpsych-120920-052946