International Journal of Clinical Case Reports, 2025, Vol.15, No.3, 98-109 http://medscipublisher.com/index.php/ijccr

tools across various studies and datasets highlights their potential to enhance diagnostic accuracy and improve patient outcomes in emergency care settings.

Figure 2 Examples of false-positive and false-negative findings on CT images (Adapted from Seyam et al., 2022)
Image caption: AI = artificial intelligence, SAH = subarachnoid hemorrhage, SDH = subdural hemorrhage

6 Challenges in Clinical Application of AI Systems
6.1 Ethics and data privacy
The integration of AI systems into emergency care for cerebrovascular accidents raises significant ethical and data privacy concerns. One of the primary issues is the protection of patient data, which is often sensitive and personal. The use of large datasets to train AI models necessitates stringent data governance frameworks to ensure patient privacy and security. However, current practices often fall short, leading to potential breaches of confidentiality and misuse of data (Tat and Rabbat, 2021). The ethical implications extend beyond data protection; they also encompass the need for transparency in AI decision-making processes to maintain trust between patients and healthcare providers (Hosseini et al., 2023).

Moreover, the ethical landscape is complicated by the lack of comprehensive guidelines and policies governing the use of AI in healthcare. The rapid advancement of AI technologies has outpaced the development of regulatory frameworks, leaving many ethical questions unanswered. For instance, the potential for algorithmic biases to exacerbate existing health disparities is a significant concern. These biases can arise from the data used to train the models, which may not be representative of the diverse patient populations they are meant to serve (Levi et al., 2022).
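The mechanism behind such bias can be illustrated with a minimal synthetic sketch (not drawn from any study cited here): a decision threshold calibrated on one patient group fails when a second group, absent from the training data, has a different baseline level of the same biomarker. All names and values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_group(n, baseline, rng):
    """Binary outcome; an illustrative biomarker equals baseline + 2*outcome + noise."""
    y = rng.integers(0, 2, n)
    x = baseline + 2.0 * y + rng.normal(0, 0.4, n)
    return x, y

# Training data drawn only from group A (baseline biomarker level 0).
x_a, y_a = make_group(1000, baseline=0.0, rng=rng)
# Group B, unseen during training, has a higher baseline (a demographic shift).
x_b, y_b = make_group(1000, baseline=1.5, rng=rng)

# "Model": a threshold halfway between the class means observed in group A.
threshold = (x_a[y_a == 0].mean() + x_a[y_a == 1].mean()) / 2

acc_a = ((x_a > threshold).astype(int) == y_a).mean()
acc_b = ((x_b > threshold).astype(int) == y_b).mean()
```

Because group B's healthy baseline sits above the threshold learned from group A, a large fraction of group B is systematically misclassified, even though the biomarker-outcome relationship is identical in both groups.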
Addressing these ethical challenges requires a multi-faceted approach, including the development of robust ethical guidelines and continuous monitoring of AI systems in clinical practice (Petersson et al., 2023).

6.2 Technical limitations
AI systems in emergency care are not without technical limitations. One of the primary challenges lies in the inherent constraints of the algorithms themselves. Many AI models, particularly those based on machine learning, require large amounts of high-quality data to function effectively. In emergency settings, where data are often incomplete or noisy, the performance of these models can be significantly compromised (Chee et al., 2021). Additionally, the "black box" nature of many AI algorithms makes it difficult for clinicians to understand how decisions are made, which can undermine their trust in, and willingness to adopt, these technologies (Tat and Rabbat, 2021).
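The sensitivity to incomplete or noisy inputs can be sketched with a purely synthetic example (an illustrative assumption, not from the cited studies): the same simple classifier is evaluated on clean data and on a copy degraded by heavy measurement noise and crudely imputed missing values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: class means at -1 and +1 in each of 5 dimensions.
n, d = 500, 5
y = rng.integers(0, 2, n)
X_clean = rng.normal(0, 0.5, (n, d)) + np.where(y[:, None] == 1, 1.0, -1.0)

def centroid_accuracy(X, y):
    """Fit a nearest-centroid classifier and return its accuracy on the same data."""
    c0 = X[y == 0].mean(axis=0)
    c1 = X[y == 1].mean(axis=0)
    pred = (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)
    return (pred == y).mean()

acc_clean = centroid_accuracy(X_clean, y)

# Degrade the data: heavy measurement noise plus ~20% missing entries,
# filled with a crude zero imputation, as might occur in emergency records.
X_noisy = X_clean + rng.normal(0, 2.0, (n, d))
missing = rng.random((n, d)) < 0.2
X_noisy[missing] = 0.0
acc_noisy = centroid_accuracy(X_noisy, y)
```

Even this very simple model loses substantial accuracy once the inputs are noisy and partially missing; more complex models trained on such data inherit the same dependence on input quality.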