Revista Cienfica, FCV-LUZ / Vol. XXXV Recibido: 29/09/2025 Aceptado: 05/01/2026 Publicado: 17/01/2026 UNIVERSIDAD DEL ZULIA Serbiluz Sistema de Servicios Bibliotecarios y de Información Biblioteca Digital Repositorio Académico 1 of 7 Revista Cienfica, FCV-LUZ / Vol. XXXVI UNIVERSIDAD DEL ZULIA Serbiluz Sistema de Servicios Bibliotecarios y de Información Biblioteca Digital Repositorio Académico Evaluang Youtube™ videos on ruminal acidosis: integrang human and AI assessment for quality and reliability. Evaluación de videos de Youtube™ sobre acidosis ruminal: integración de evaluaciones humanas e Inteligencıa Arficial para calidad y confiabilidad. Can Ayhan Kaya Dicle University, Vocaonal School of Agriculture. Dicle, Türkiye e mail: canayhan.kaya@dicle.edu.tr ABSTRACT Ruminal acidosis is a common metabolic disorder in ruminants that carries significant health and economic consequences. As online plaorms such as YouTube™ are increasingly used for veterinary guidance, assessing the quality and reliability of digital content has become essenal. In this cross-seconal study, 71 publicly available English-language videos on ruminal acidosis were analyzed using a combined human–AI evaluaon approach. Video content was assessed with three validated tools: the Video Content Quality Index, the Global Quality Scale, and a modified DISCERN instrument, measuring scienfic accuracy, educaonal value, and reliability. Viewer engagement, including likes, comments, and viewing rates, was also recorded. The average Video Content Quality Index, the Global Quality Scale , and DISCERN scores were 9.80, 3.12, and 2.87, respecvely, reflecng moderate overall quality and reliability. Videos uploaded by professional associaons scored highest, whereas content from commercial sources had the lowest Video Content Quality Index values. 
Inter-rater comparison revealed systematic differences between human and AI evaluation, with the AI assigning higher Video Content Quality Index, Global Quality Scale, and DISCERN scores than the human reviewer, highlighting its potential to standardize evaluations across large datasets. Importantly, viewer engagement metrics did not consistently correlate with video quality, emphasizing that popularity does not equate to scientific rigor. These findings underscore the necessity for veterinary professionals and educators to actively contribute accurate, evidence-based content online. Integrating AI-assisted evaluation provides a scalable, consistent approach to identifying high-quality educational resources, offering a promising tool for enhancing digital veterinary education and supporting informed decision-making among students, practitioners, and livestock producers.

Key words: Ruminal acidosis; YouTube; veterinary education; artificial intelligence; video content quality

RESUMEN

La acidosis ruminal es un trastorno metabólico frecuente en rumiantes que conlleva importantes consecuencias para la salud y la economía. Dado que plataformas en línea como YouTube™ se utilizan cada vez más como fuente de orientación veterinaria, evaluar la calidad y la fiabilidad del contenido digital se ha vuelto esencial. En este estudio transversal se analizaron 71 videos en inglés disponibles públicamente sobre la acidosis ruminal mediante un enfoque combinado de evaluación humano–IA. El contenido de los videos se evaluó utilizando tres herramientas: el Índice de Calidad del Contenido de Video, la Escala de Calidad Global y un instrumento DISCERN modificado, para medir la exactitud científica, el valor educativo y la fiabilidad. También se registraron métricas de interacción de los espectadores, incluidos los "likes", comentarios y tasas de visualización.
Los valores promedio del Índice de Calidad del Contenido de Video, la Escala de Calidad Global y DISCERN fueron 9.80, 3.12 y 2.87, respectivamente, lo que refleja una calidad y fiabilidad generales moderadas. Los videos publicados por asociaciones profesionales obtuvieron las puntuaciones más altas, mientras que los procedentes de fuentes comerciales presentaron los valores más bajos del Índice de Calidad del Contenido de Video. La comparación entre evaluadores reveló diferencias sistemáticas entre la evaluación humana y la realizada por la IA, ya que la IA asignó puntuaciones más altas del Índice de Calidad del Contenido de Video, la Escala de Calidad Global y DISCERN en comparación con el revisor humano, lo que pone de manifiesto su potencial para estandarizar evaluaciones en grandes conjuntos de datos. De manera importante, las métricas de interacción de los espectadores no se correlacionaron de forma consistente con la calidad del contenido, lo que subraya que la popularidad no equivale necesariamente al rigor científico. Estos hallazgos destacan la necesidad de que los profesionales y educadores veterinarios contribuyan activamente con contenido preciso y basado en la evidencia en entornos digitales. La integración de evaluaciones asistidas por inteligencia artificial proporciona un enfoque escalable y coherente para identificar recursos educativos de alta calidad, y representa una herramienta prometedora para fortalecer la educación veterinaria digital y apoyar la toma de decisiones informada entre estudiantes, profesionales y productores ganaderos.

Palabras clave: Acidosis ruminal; YouTube; educación veterinaria; Inteligencia Artificial; calidad del contenido de video

https://doi.org/10.52973/rcfcv-e361811
Revista Cienfica, FCV-LUZ / Vol. XXXVI UNIVERSIDAD DEL ZULIA Serbiluz Sistema de Servicios Bibliotecarios y de Información Biblioteca Digital Repositorio Académico INTRODUCTION Ruminal acidosis is a widespread metabolic disorder in ruminants, primarily caused by excessive consumpon of rapidly fermentable carbohydrates. Under normal condions, rumen pH is maintained between 6.0 and 7.0 through microbial buffering and salivary bicarbonates. However, when pH falls below 5.8, Streptococcus bovis and lactobacilli proliferate excessively, producing lacc acid that exceeds the rumen’s buffering capacity [1],[2]. In subacute ruminal acidosis (SARA), rumen pH typically fluctuates between 5,5 and 5,8 for several hours (h) per day (d), which can trigger subclinical inflammaon, disrupt microbial balance, and allow lipopolysaccharides (LPS) to enter systemic circulaon [3]. Diagnosis usually relies on rumen fluid pH measurement, feeding history, and producon monitoring. Management strategies oſten include adjusng dietary fiber, gradually introducing high-concentrate feeds, and using rumen modifiers or buffers [4 , 5]. Prevenon through proper raon formulaon and connuous monitoring remains the cornerstone of effecve control [6 ,7]. Beyond its direct veterinary importance, ruminal acidosis underscores the ongoing need for accurate and accessible scienfic informaon. In this context, YouTube™ has become an increasingly ulized plaorm for veterinary students seeking educaonal resources and clinical guidance [8]. However, the quality and reliability of content vary widely, depending on the experse and intent of content creators. Some videos are produced by specialists or professional organizaons, whereas others may contain incomplete or misleading informaon [8],9]. Recent advances in arficial intelligence (AI) offer a promising approach for systemacally evaluang online educaonal content [10]. 
In this study, both a human reviewer and an AI assistant independently assessed YouTube™ videos on ruminal acidosis using validated scoring tools to measure reliability, educational value, and clinical relevance. AI enables rapid and consistent analysis of large datasets, complementing human expertise and highlighting high-quality resources. As AI technologies continue to evolve, AI-assisted evaluation is likely to play an increasingly important role in guiding veterinary students toward trustworthy information [10]. Accordingly, this study seeks to critically evaluate YouTube™ content for scientific accuracy, clinical relevance, and educational value. By combining human review with AI-based analysis, we aim to determine whether online videos can provide reliable and informative resources for veterinary and agricultural audiences.

MATERIALS AND METHODS

Study design

This cross-sectional observational study was conducted over a four-week period, from July 21 to August 18, 2025, to assess the quality, reliability, and educational value of YouTube™ videos related to ruminal acidosis. As all videos analyzed were publicly accessible, formal ethical approval was deemed unnecessary. The study design adhered to YouTube™'s terms of service and ensured compliance with privacy and content regulations.

Keyword identification

To identify the most relevant search term, the Google Trends tool was utilized with settings adjusted to "worldwide" and "past 5 years." Keywords including "rumen acidosis," "ruminal acidosis," and "acidosis in cattle" were compared. The term "ruminal acidosis" demonstrated the highest relative search frequency and relevance to the study topic (FIG. 1).

FIGURE 1. Relative search frequency of the three candidate keywords entered into Google Trends; "ruminal acidosis" was the most frequently used.

Video selection

Following keyword determination, searches were conducted in Google Chrome's incognito mode using a newly created YouTube™ account to avoid algorithmic bias from prior search history.
On July 21, 2025, the first 251 videos retrieved using the search term "ruminal acidosis" were screened. No filters were applied other than YouTube™'s default "relevance" sorting. Exclusion criteria were as follows: non-English content; duplicate videos; absence of audio or title; content unrelated to ruminal acidosis; video duration < 30 seconds or > 40 minutes; content-deficient videos. After applying these criteria, 180 videos were excluded and 71 videos were retained for final analysis (FIG. 2).
Evaluang Youtube™ videos on ruminal acidosis / Can Ayhan Kaya UNIVERSIDAD DEL ZULIA Serbiluz Sistema de Servicios Bibliotecarios y de Información Biblioteca Digital Repositorio Académico Data extracon For each included video, the following demographic and descripve characteriscs were recorded: Upload date, Duraon (minutes), Number of views, likes, dislikes, and comments, Country of origin, Uploader type (professional, organizaon, commercial enty, or individual). To minimize potenal bias, the number of likes, dislikes, and comments was concealed from evaluators unl the video content had been fully reviewed. FIGURE 2. Flowchart of video selecon. Of the 251 videos screened, 180 were excluded based on predefined criteria, and 71 videos were included in the final analysis. Video classificaon Videos were classified based on four source type: academic/ professional, organizaons/associaons, commercial, or individual users. Quality and reliability assessment Three validated instruments were used to assess video content: Video Content Quality (VCQ) Index: The Video Content Quality (VCQ) score was a study-specific composite metric developed by the authors to evaluate the scienfic accuracy, completeness, and clarity of video content related to ruminal acidosis. The VCQ scoring framework was constructed based on key informaonal domains derived from established literature and clinical guidelines on ruminal acidosis. It consisted of a 10 item checklist assessing core content components, with each item scored on a scale from 0 to 2, yielding a maximum possible score of 20 points. The VCQ assessment was independently applied by one human reviewer and an arficial intelligence model. Total scores were categorized as follows: 0–6 = poor quality, 7–13 = moderate quality, and 14–20 = good quality. As the VCQ score represents an original assessment tool developed specifically for the purposes of this study, no previously published reference exists. 
Global Quality Scale

The Global Quality Score (GQS) is a validated 5-point Likert-type scale originally developed to assess the overall quality, flow, and usefulness of online health-related information for patients and healthcare professionals. Each video is rated from 1 (poor quality, very limited usefulness) to 5 (excellent quality, highly useful). The GQS has been widely used in previous studies evaluating the quality of medical and veterinary content on YouTube™ and other online platforms [11].

DISCERN instrument

A modified version of the DISCERN instrument focusing on core reliability items was used and scored on a 5-point scale. DISCERN has been extensively used in studies evaluating online medical information, including YouTube™ videos [12].

Engagement metrics

Two quantitative indicators of audience engagement were calculated:

Interaction Index = (Likes – Dislikes) / Views × 100
Viewing Rate = Views / Days since upload × 100

Evaluation procedure

Two independent reviewers assessed all included videos: a veterinary medicine specialist and an artificial intelligence-based assistant (ChatGPT). The veterinary reviewer watched each video in full, paying attention to visual presentation, narration quality, and the accuracy of any demonstrations. The AI assistant, in contrast, evaluated transcripts of the same videos, concentrating on the organization, clarity, and
Revista Cienfica, FCV-LUZ / Vol. XXXVI UNIVERSIDAD DEL ZULIA Serbiluz Sistema de Servicios Bibliotecarios y de Información Biblioteca Digital Repositorio Académico factual correctness of the informaon presented. To reduce bias, Veterinary Reviewer completed his evaluaons before being shown any engagement stascs. This dual-review process made it possible to compare human and machine evaluaons in a consistent manner and to observe where their judgments tended to converge or differ. Stascal analysis In this study, the conformity of connuous variables to the assumpon of normal distribuon was assessed using the Kolmogorov–Smirnov test, and homogeneity of variances was examined using Levene’s test. Differences between parameters in independent groups were analyzed using the nonparametric Kruskal–Wallis test [13], while comparisons within dependent (paired) groups were performed using the Wilcoxon signed-rank test. A 95 % confidence interval was applied in all stascal analyses. Inter-rater agreement for VCQ categorical classificaon (poor, moderate, good) between the human reviewer and the AI assistant was evaluated using Kendall’s rank correlaon coefficient (τ), with concordance interpreted as low (< 0.30), moderate (0.30–0.60), or high (> 0.60). Associaons between reviewer and AI scores were assessed using Spearman’s rank correlaon coefficient. Descripve stascs and analyses were conducted using the R stascal soſtware package, version 3.2.3 (2015-12-10), Copyright © 2015 The R Foundaon for Stascal Compung [14]. Results were considered stascally significant at P < 0.05. RESULTS AND DISCUSSION A total of 71 YouTube™ videos met the inclusion criteria and were included in the analysis. The median video duraon was 5.00 minutes (min), with values ranging from 0.34 to 32.38 min. The median number of views per video was 346 (range: 7–143,935). 
Videos had been available online for a median of 1432 d (range: 93–5921), indicating that most content had been published several years prior to analysis. The median viewing rate was 30.70 views per d, with a wide range from 0.94 to 22,548, reflecting a highly skewed distribution. Median engagement metrics included 3 likes and 0 comments per video, resulting in a median interaction rate of 1.205 (range: 0–8.955). Regarding quality assessment scores, the median VCQ score was 10.0 (range: 1–20), while the median DISCERN and GQS scores were both 3.00 (ranges: 1.50–4.50 and 1.00–5.00, respectively). Descriptive characteristics of the analyzed videos are summarized in TABLE I.

TABLE I. General characteristics of 71 included YouTube™ videos on ruminal acidosis

Parameters                           Median    Min.     Max.
Duration (min)                         5.00     0.34     32.38
Total number of days since upload   1432.00    93      5921
Viewing rate                          30.70     0.94   22,548
Number of views                      346.00     7     143,935
Number of likes                        3.00     0       1800
Number of comments                     0.00     0        143
Interaction rate                       1.205    0        8.955
VCQ                                   10.0      1       20.00
DISCERN                                3.00     1.500    4.500
GQS                                    3.00     1.000    5.000

VCQ: Video Content Quality. GQS: Global Quality Scale

Regarding source distribution, 25 videos (35 %) were uploaded by professionals, 22 (31 %) by commercial entities, and 12 (17 %) each by associations and individual users. This distribution shows that professional and commercial producers dominate content on this topic on YouTube™, whereas associations and individual creators contribute smaller but meaningful portions. The mean VCQ score across all videos was 9.80, suggesting limited scientific rigor and presentation quality. The mean GQS score was 3.12, indicating moderate educational usefulness, while the mean Quality Criteria for Consumer Health Information (DISCERN) score was 2.87, reflecting fair reliability. When categorized by source, association-uploaded videos demonstrated the highest mean VCQ (11.83) and GQS (3.54), whereas commercial videos yielded the lowest VCQ (8.36).
Although associaons and individuals had slightly higher DISCERN scores (2.96) than professionals (2.88) and commercial sources (2.77), none of these differences were stascally significant (P > 0.05). These findings are presented in TABLE II. Overall, these results indicate that while some instuonal efforts provide more reliable informaon, the general scienfic rigor of ruminal acidosis related content on YouTube™ remains limited. 4 of 7
Evaluang Youtube™ videos on ruminal acidosis / Can Ayhan Kaya UNIVERSIDAD DEL ZULIA Serbiluz Sistema de Servicios Bibliotecarios y de Información Biblioteca Digital Repositorio Académico TABLE II Comparison of Video Quality Scores According to Uploader Type Parameters Source n Median Min. Max. P VCQ Professionals 25 10.000 1.00 20.00 0.290 Associaons 12 11.500 5.00 20.00 Commercials 22 8.500 1.00 16.00 Individuals 12 10.500 2.00 20.00 GQS Professionals 25 3.000 1.00 5.00 0.240 Associaons 12 3.500 2.50 4.50 Commercials 22 3.000 2.00 4.50 Individuals 12 3.000 2.00 4.50 DISCERN Professionals 25 2.500 1.50 4.50 0.768 Associaons 12 3.000 2.00 4.00 Commercials 22 2.750 1.50 4.50 Individuals 12 3.000 1.50 4.00 VCQ: Video Content Quality. GQS: Global Quality Scale. (P > 0,05) Inter-rater comparisons between the human reviewer and the AI assistant demonstrated stascally significant differences across all three scoring systems. The AI assistant assigned higher median VCQ scores than the human reviewer (median = 13.00 vs. 10.00; P < 0.01). Similarly, median GQS scores were higher for the AI assistant compared with the human reviewer (median = 4.00 vs. 3.00; P < 0.001), as were median DISCERN scores (median = 3.00 vs. 3.00; P < 0.01), reflecng differences in score distribuons despite idencal median values. Kendall’s rank correlaon coefficient (τ) analysis based on VCQ categories (poor, moderate, good) demonstrated a low but stascally significant level of agreement between the human reviewer and the AI assistant (τ = 0.235, p = 0.030), indicang measurable discordance between evaluators (TABLE III). TABLE III Comparison of Video Content Quality, Global Quality Scale, and DISCERN scores between the human reviewer and the AI assistant. Paired comparisons were performed using the Wilcoxon signed-rank test. Inter-rater agreement for Video Content Quality categorical classificaon (poor, moderate, good) was assessed using Kendall’s rank correlaon coefficient (τ). 
Parameter                                     n    Reviewer Median   AI Median   Test Statistic          P
VCQ score                                     71   10.00             13.00       Wilcoxon signed-rank     0.006*
GQS score                                     71    3.00              4.00       Wilcoxon signed-rank    <0.001*
DISCERN score                                 71    3.00              3.00       Wilcoxon signed-rank     0.007*
VCQ category agreement (poor–moderate–good)   71      –                 –        Kendall's τ = 0.235      0.030*

VCQ: Video Content Quality; GQS: Global Quality Scale; AI: Artificial Intelligence. *Statistical significance was set at P < 0.05

These findings are summarized in TABLE III. In addition to these score differences, inter-rater agreement and association analyses provided further insight into the relationship between human and AI evaluations. Kendall's rank correlation coefficient (τ) analysis based on VCQ categorical classification demonstrated a low but statistically significant level of agreement, indicating that although both evaluators used the same scoring framework, their qualitative judgments differed. Consistently, Spearman correlation analysis revealed only a weak association for VCQ scores and no significant correlations for GQS or DISCERN. Together, these findings suggest that AI-based assessments capture structural and textual completeness more readily than contextual and audiovisual quality, underscoring the complementary rather than interchangeable role of AI in evaluating educational video content. These discrepancies can be explained by methodological differences: the AI evaluated text-based content extracted from transcripts, emphasizing linguistic organization and clarity, while the human reviewer considered visual presentation, delivery, and contextual accuracy. Consequently, the AI tended to overestimate structural coherence, whereas the human evaluator placed greater weight on didactic quality and audiovisual comprehensiveness. The frequency distribution of VCQ categories (TABLE IV) reinforces this discrepancy, showing that the AI assistant classified nearly half of the videos as good quality, whereas the human reviewer more frequently assigned poor and moderate ratings.
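The category-level agreement reported above is quantified with Kendall's τ, which counts concordant versus discordant pairs of paired ratings. A stdlib-only sketch of the simplest untied form (τ-a), with fabricated category codes; the published coefficient was presumably computed with a tie-adjusted variant, so this illustrates the idea rather than reproducing the analysis:

```python
# Kendall's tau-a for paired human vs. AI category codes
# (0 = poor, 1 = moderate, 2 = good). Tied pairs count as neither
# concordant nor discordant; the rating pairs below are fabricated.
from itertools import combinations

def kendall_tau_a(x, y):
    """(concordant - discordant) / total number of pairs."""
    assert len(x) == len(y)
    c = d = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            c += 1
        elif s < 0:
            d += 1
    n = len(x)
    return (c - d) / (n * (n - 1) / 2)

human = [0, 1, 1, 2, 0, 1, 2, 1, 0, 2]  # hypothetical reviewer categories
ai    = [1, 1, 2, 2, 0, 2, 2, 1, 1, 2]  # hypothetical AI categories
tau = kendall_tau_a(human, ai)
# Per the interpretation used in the study, tau < 0.30 is "low" agreement.
print(f"tau = {tau:.3f}")
```

With only three ordered categories, ties are frequent, which is why tie-adjusted forms of τ are normally preferred for this kind of data.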
The distribuon of VCQ categories according to evaluator is presented in TABLE IV. 5 of 7
Revista Cienfica, FCV-LUZ / Vol. XXXVI UNIVERSIDAD DEL ZULIA Serbiluz Sistema de Servicios Bibliotecarios y de Información Biblioteca Digital Repositorio Académico TABLE IV Distribuon of VCQ categories by evaluator VCQ category Reviewer n (%) AI assistant n (%) Poor (0–6) 22 (31.0%) 12 (16.9%) Moderate (7–13) 33 (46.5%) 26 (36.6%) Good (14–20) 16 (22.5%) 33 (46.5%) Total 71 (100) 71 (100) VCQ: Video Content Quality; AI: Arficial Intelligence. Viewer engagement metrics showed no consistent correlaon with video quality. Some videos with low VCQ or DISCERN scores received disproporonately high numbers of views and likes, indicang that popularity does not necessarily reflect educaonal or scienfic accuracy. Conversely, several high quality videos produced by professional or academic sources had limited engagement. This imbalance highlights the need for strategies that promote both visibility and reliability in online veterinary educaonal media. Spearman rank correlaon analysis revealed a weak but stascally significant posive correlaon between reviewer and AI VCQ scores (P = 0.303, P = 0.010). In contrast, no significant correlaons were observed for GQS (P = 0.159, P = 0.184) or DISCERN scores (P = 0.054, P = 0.654). From a Veterinary educaon standpoint, these findings reveal both challenges and opportunies. The overall scarcity of high-quality instruconal materials on ruminal acidosis represents an untapped potenal for academic instuons and professional associaons. By increasing their digital presence and producing scienfically robust yet accessible content, these organizaons can enhance public understanding and counter misinformaon. Previous studies have emphasized that veterinary students appreciate online video materials for their accessibility and learning value, parcularly before praccal examinaons [15], 16 , 17 , 18 , 19 , 20]. 
Incorporang assessments from both human experts and AI-based systems may enable a more comprehensive evaluaon of educaonal videos. As suggested by skill acquision theory, repeated exposure to structured, high-quality video materials facilitates the transformaon of declarave knowledge into procedural competence, thereby improving performance and safety awareness [21]. The observed divergence between human and AI scores underscores their complementary strengths. While human experts excel at evaluang contextual accuracy and audiovisual delivery, AI-based tools provide scalability and consistency. Hybrid evaluaon frameworks that integrate both approaches could therefore offer a more balanced and objecve assessment of online educaonal content. Several limitaons should be acknowledged. First, the analysis included only English-language videos, which may restrict generalizability to non-English contexts. Second, YouTube™’s dynamic recommendaon algorithms may influence video rankings over me, affecng reproducibility. Third, despite the use of validated scoring instruments, evaluator dependent variability remains, as reflected in the reviewer AI discrepancies. Finally, the study did not assess the actual learning outcomes or behavioral impacts of the videos, which would require experimental or longitudinal research designs. Future studies should include mullingual analyses and cross-plaorm comparisons to improve generalizability. Invesgaons incorporang both human and AI hybrid scoring models may yield deeper insights into evaluaon reliability. Furthermore, collaboraons between veterinary experts and digital media specialists could help produce scienfically accurate and engaging educaonal materials, maximizing the potenal of plaorms such as YouTube™ in veterinary and agricultural sciences. CONCLUSION In conclusion, YouTube™ videos on ruminal acidosis exhibit substanal variability in quality and reliability. 
While content produced by academic and professional sources is generally accurate and educational, the majority of available videos remain incomplete, inconsistent, or potentially misleading. These findings highlight how important it is for veterinary students, practitioners, and livestock producers to critically evaluate online materials, ideally incorporating feedback from both human experts and AI-assisted assessments. Furthermore, the results underscore the need for proactive engagement by veterinary professionals and academic institutions in producing accessible, scientifically sound digital content. Expanding expert-led educational resources on widely used platforms such as YouTube™ may serve as an effective strategy to enhance farmer and student knowledge, counteract misinformation, and support improved herd health management.

ACKNOWLEDGEMENTS

The author would like to thank all individuals and institutions who contributed to the preparation and review of this study.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

BIBLIOGRAPHIC REFERENCES

[1] Owens FN, Secrist DS, Hill WJ, Gill DR. Acidosis in cattle: a review. J. Anim. Sci. [Internet]. 1998; 76(1):275-286. doi: https://doi.org/qmpj

[2] Krause KM, Oetzel GR. Understanding and preventing subacute ruminal acidosis in dairy herds: a review. Anim. Feed Sci. Technol. [Internet]. 2006; 126(3-4):215-236. doi: https://doi.org/cg9fnm

[3] Plaizier JC, Krause DO, Gozho GN, McBride BW. Subacute ruminal acidosis in dairy cows: the physiological causes, incidence and consequences. Vet. J. [Internet]. 2008; 176(1):21-31. doi: https://doi.org/c4bdpf
Evaluang Youtube™ videos on ruminal acidosis / Can Ayhan Kaya UNIVERSIDAD DEL ZULIA Serbiluz Sistema de Servicios Bibliotecarios y de Información Biblioteca Digital Repositorio Académico [4] Adrogué HJ, Madias NE. Management of life-threatening acid-base disorders. First of two parts. N. Engl. J. Med. [Internet]. 1998; 338(1):26-34. doi: hps://doi.org/ fv2n3q [5] Enemark JM. The monitoring, prevenon and treatment of sub-acute ruminal acidosis (SARA): a review. Vet. J. [Internet]. 2008; 176(1):32-43. doi: hps://doi.org/ bk27b3 [6] Khafipour E, Krause DO, Plaizier JC. A grain-based subacute ruminal acidosis challenge causes translocaon of lipopolysaccharide and triggers inflammaon. J. Dairy Sci. [Internet]. 2009; 92(3):1060-1070. doi: hps://doi. org/cndwsv [7] Kleen JL, Hooijer GA, Rehage J, Noordhuizen JPTM. Subacute ruminal acidosis in dairy cale: a review. J. Vet. Med. A Physiol. Pathol. Clin. Med. [Internet]. 2003; 50(8):406-414. doi: hps://doi.org/fs2525 [8] Gledhill L, Dale VHM, Powney S, Gaitskell-Phillips GHL, Short NRM. An Internaonal Survey of Veterinary Students to Assess Their Use of Online Learning Resources. J. Vet. Med. Educ. [Internet]. 2017; 44(4):692-703. doi: hps:// doi.org/qmp4 [9] Li M, Wang X, Du Y, Zhang H, Liao B. Based on digital intelligence: teaching innovaon and pracce of veterinary internal medicine in China’s southwest froner. Front. Vet. Sci. [Internet]. 2025; 12:1651179. doi: hps://doi.org/qmp5 [10] Monteiro HF, Figueiredo CC, Mion B, Santos JEP, Bisinoo RS, Peñagaricano F, Ribeiro ES, Marinho MN, Zimpel R, da Silva AC, Oyebade A, Lobo RR, Coelho WM Jr, Peixoto PMG, Ugarte Marin MB, Umaña-Sedó SG, Rojas TDG, Elvir-Hernandez M, Schenkel FS, Weimer BC, Brown CT, Kebreab E, Lima FS. An arficial intelligence approach of feature engineering and ensemble methods depicts the rumen microbiome contribuon to feed efficiency in dairy cows. Anim. Microbiome. [Internet]. 2024; 6(1):5. 
doi: hps://doi.org/qmp6 [11] Bernard A, Langille M, Hughes S, Rose C, Leddin D, van Zanten SV. A systemac review of paent inflammatory bowel disease informaon resources on the World Wide Web. Am. J. Gastroenterol. [Internet]. 2007; 102(9):2070- 2077. doi: hps://doi.org/bnwm6b [12] Charnock D, Shepperd S, Needham G, Gann R. DISCERN: an instrument for judging the quality of wrien consumer health informaon on treatment choices. J. Epidemiol. Community Health. [Internet]. 1999; 53(2):105-111. doi: hps://doi.org/c3qmsk [13] Kruskal WH, Wallis WA. Use of Ranks in One-Criterion Variance Analysis. J Am. Stat. Assoc. [Internet]. 1952; 47(260):583–621. doi: hps://doi.org/gfsnx8 [14] R Core Team. R: A language and environment for stascal compung. Vienna, Austria: R Foundaon for Stascal Compun. [Internet]. 2015 [cited 20 Aug 2025]. Available in: hps://goo.su/23IZam [15] Yildiz S, Toros SZ. The quality, reliability, and popularity of YouTube educaon videos for vesbular rehabilitaon: a cross-seconal study. Otol. Neurotol. [Internet]. 2021; 42(8):e1077-e1083. doi: hps://doi.org/gj6pkk [16] Hansen C, Basel MT, Curs A, Malreddy P. Pre-lab videos as a supplemental teaching tool in first-year veterinary gross anatomy. J. Vet. Med. Educ. [Internet]. 2024; 51(6):795-806. doi: hps://doi.org/qmqd [17] Akkaya S, Saglam Akkaya G. Educaonal quality and reliability of YouTube content related to musculoskeletal ultrasound. Arch. Rheumatol. [Internet]. 2025; 40(3):365- 375. doi: hps://doi.org/qmqf [18] Roshier AL, Foster N, Jones MA. Veterinary students’ usage and percepon of video teaching resources. BMC Med. Educ. [Internet]. 2011; 11:1. doi: hps://doi.org/ dc9734 [19] Azer SA, AlGrain HA, AlKhelaif RA, AlEshaiwi SM. Evaluaon of the educaonal value of YouTube videos about physical examinaon of the cardiovascular and respiratory systems. J. Med. Internet Res. [Internet]. 2013; 15(11):e241. 
doi: hps://doi.org/f5jpz5 [20] Morgado M, Botelho J, Machado V, Mendes JJ, Adesope O, Proença L. Video-based approaches in health educaon: a systemac review and meta-analysis. Sci. Rep. [Internet]. 2024; 14:23651. doi: hps://doi.org/ g9rs52 [21] Klupiec C, Pope S, Taylor R, Carroll D, Ward MH, Celi P. Development and evaluaon of online video teaching resources to enhance student knowledge of livestock handling. Aust. Vet. J. [Internet]. 2014; 92(7):235-239. doi: hps://doi.org/f6cjzx 7 of 7