Introduction
The integration of automation and machine learning (ML) has led to an unprecedented revolution in laboratory medicine.1 This change signals an evolution from conventional manual and semi-automated methods to a digital era characterized by increased consistency, precision, and efficiency. Improving quality assurance (QA) procedures is essential to this transformation since it guarantees accurate diagnoses and patient welfare.2 The infusion of ML into QA has introduced new capabilities, including advanced pattern detection, predictive analytics, and sophisticated data handling,3 effectively navigating the complexities of biomedical data through advanced algorithms.4 However, this swift embrace of cutting-edge technologies also brings to light various challenges. This review delves into these innovations in laboratory medicine, dissecting their impact, roles, and the diverse challenges they introduce, as well as offering strategic approaches to fully leverage their benefits. The exploration of the contemporary laboratory landscape aims to provide a critical analysis of ongoing trends and forecast future directions in the synergy between technology and healthcare QA.5
In the context of laboratory operations, automation is characterized as the application of technology to execute lab processes with minimal human input, aiming at augmenting productivity, minimizing errors, and enabling technicians to concentrate on complex tasks.6–8 This encompasses a spectrum of technologies, from basic automated pipettes to advanced analyzers and robotic handling systems. These systems perform routine and repetitive tasks with exceptional precision and speed, thereby boosting the operational effectiveness of the laboratory.9
The integration of automation into medical laboratory tests will enhance precision, reduce economic burdens, and provide platforms for multidisciplinary teams in the healthcare process. Automation will help to improve outcomes, safety, satisfaction, and the optimal use of healthcare resources.10 For the efficient management of serious clinical cases in the laboratory, new clinical regulatory frameworks and financing models focus on QA, risk management, technology assessment, patient satisfaction, and patient empowerment.11 The expansion of automation in sample collection and testing methods, smartphone health-related applications and software, reporting, and record-keeping systems has been suggested and implemented to manage chronic diseases.12 Interventions in laboratory medicine have contributed to improved disease control patterns.13 Patient empowerment is a second major trend relevant to this automation loop. Several research studies have indicated that reliable self-management and self-care depend on automated mobile healthcare interventions, which have proven effective in improving patient health outcomes associated with chronic diseases.14 Digital laboratories, smartphone applications, and advanced software have enabled better management and self-monitoring of diseases such as diabetes and cardiovascular disease through glycemic control, blood pressure estimation, and oxygen-level monitoring at regular intervals. They also digitally connect data from one setup, case, or facility to another using the same data-sharing platform or cloud computing. These advances in automation and digitalization will affect test utilization and regulation and will permit efficient, advanced monitoring in rural settings.15 Artificial intelligence (AI) is an important factor that can influence laboratory test utilization, for example by reducing unnecessary repeat testing. ML and data analysis integrated with advanced intelligent systems may prove an efficient tool for appropriate test prescription. The integration of digital pathways with primary and secondary healthcare sectors will facilitate efficient, value-based healthcare systems.16 Safety and cost-effectiveness are crucial for the reliability and credibility of such digital laboratory environments. This transformation of the healthcare system will foster novel human-machine interfaces, although successful implementation depends on reliable and transparent interpretation.17
Conversely, within the sphere of laboratory medicine, ML algorithms are utilized to analyze intricate datasets, identify patterns, forecast outcomes, and aid in decision-making processes.18 This includes predicting sample stability, estimating workload for optimal resource management, and detecting subtle irregularities in test results that might elude human observation. In laboratory medicine, the introduction of automation and ML brings a new era that redefines conventional lab procedures.19 This transformation allows advanced analytical capabilities, improves production capacity, and streamlines workflow procedures.20 The combination of these two technical spheres profoundly transforms patient care and diagnostics while also enhancing QA procedures.21 However, the adoption of these technologies is not without its challenges. Prominent problems include initial financial investments, data security concerns, potential biases in algorithmic training, and the need for ongoing monitoring to ensure system effectiveness.22 Overcoming these challenges is crucial for the proper utilization of automation and ML in laboratory medicine. In laboratory medicine, where accurate diagnoses are necessary for efficient patient care and therapeutic decision-making, QA is crucial. To guarantee the accuracy and dependability of test results, laboratories implement standardized protocols known as QA.23 The repercussions of faulty results are severe; they may result in incorrect diagnoses, ineffective treatments, and unfavorable health outcomes, including death.24 A strong way to support QA is through the incorporation of automation and ML into laboratory procedures. Automation standardizes processes, reducing the possibility of human error, while ML provides labs with cutting-edge instruments for thorough data analysis. This involves better prediction of possible inaccuracies and improved identification of anomalies. Westgard rules are quality control procedures used to detect errors or deviations in laboratory testing. They are designed to monitor assay performance and ensure the reliability of test results. AI can strengthen Westgard-based quality control in several ways. AI algorithms can identify variability patterns in laboratory test data to predict deviations from standards. For this purpose, ML approaches like clustering or classification may be helpful.25 This monitoring system can assess and interpret results rapidly compared to manual methods, saving considerable time.26 These algorithms can also adjust the thresholds used in Westgard rules based on historical data, in contrast to the fixed thresholds of conventional Westgard rules. This can enhance the sensitivity and specificity of laboratory quality control by lowering the chances of false-positive and false-negative results.27 Moreover, AI algorithms can be combined with Laboratory Information Systems to identify additional information on clinical cases for better interpretation of lab results.28 They can also be used to predict future disease patterns for better preventive or therapeutic measures. However, it is essential to understand that AI is a complementary tool that enhances the efficiency of laboratory medicine alongside existing conventional quality control methods.
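To make the rule-based baseline concrete, the following minimal Python sketch checks two common Westgard rules (1-3s and 2-2s) against a series of quality-control results. The control values, target mean, and standard deviation are illustrative; an ML layer of the kind described above could replace the fixed ±2/±3 SD limits with thresholds learned from historical data.

```python
# Minimal sketch of two common Westgard rules (1-3s and 2-2s).
# The QC values, target mean, and SD below are illustrative only.

def westgard_flags(values, mean, sd):
    """Return (index, rule) pairs for violations in a series of QC results."""
    z = [(v - mean) / sd for v in values]      # z-scores relative to target
    flags = []
    for i, zi in enumerate(z):
        if abs(zi) > 3:                        # 1-3s: one point beyond +/-3 SD
            flags.append((i, "1-3s"))
        if i >= 1 and min(z[i - 1], zi) > 2:   # 2-2s: two consecutive points
            flags.append((i, "2-2s"))          # beyond +2 SD ...
        if i >= 1 and max(z[i - 1], zi) < -2:  # ... or beyond -2 SD
            flags.append((i, "2-2s"))
    return flags

qc_results = [100.5, 101.2, 106.3, 106.8, 99.7, 112.0]
print(westgard_flags(qc_results, mean=100.0, sd=2.5))
# -> [(3, '2-2s'), (5, '1-3s')]
```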
The ultimate goal is to uphold a level of service quality consistently aligned with set standards, ensuring that each patient receives trustworthy test findings.29 The demand for these cutting-edge technologies is driven by the necessity for strict QA. However, it also means that laboratories must develop rigorous plans for the verification and maintenance of automated systems and ML algorithms. This involves ensuring that lab staff members are properly trained and skilled in their work. It is imperative to have a dynamic and reliable QA system that is updated and reviewed frequently to incorporate new technology and adapt to evolving clinical requirements.30
AI encompasses two major types: weak AI and strong AI. Weak AI, or artificial narrow intelligence, describes the classification of data based on a well-established statistical model that has already been trained to execute specific tasks. In contrast, strong AI, also known as artificial general intelligence, can create a system that functions intelligently and independently by executing ML from any available normalized data.31 ML is generally divided into three categories: supervised learning, unsupervised learning, and reinforcement learning. In supervised learning, both inputs and corresponding outcomes (desired results) are provided so that the computer can learn from the data under the supervision of a “teacher”.32 Learning in this context focuses on identifying the mathematical functions that map input data to target outputs. In contrast, unsupervised learning deals with unstructured data, where the algorithm autonomously discovers patterns within the dataset. These patterns often reveal underlying categories or intrinsic structures in the data. Common supervised learning techniques and models include Logistic Regression, LASSO, Ridge Regression, Support Vector Machines (SVM), Random Forests, and Neural Networks. Examples of unsupervised learning methods include principal component analysis, Laplacian eigenmaps, t-SNE, p-SNE, and autoencoders. In clinical research, for instance, Dawson et al. applied unsupervised principal component analysis to determine whether xerostomia (dry mouth) data distinguished high- and low-risk patients after parotid gland radiotherapy.33 Intuitively, supervised learning can often classify information better because of the additional guidance from known answers (labels). Therefore, unsupervised learning is generally considered a more difficult problem than supervised learning.34
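As a concrete illustration of the two paradigms, the short Python sketch below fits a supervised classifier to labeled synthetic data and then applies an unsupervised method (principal component analysis) to the same inputs without labels. It assumes scikit-learn and NumPy are installed, and the data are purely illustrative.

```python
# Minimal sketch contrasting supervised and unsupervised learning.
# Assumes scikit-learn and NumPy are installed; the data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))              # 200 samples, 5 hypothetical analytes
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # known labels (the "teacher")

# Supervised: learn the mapping from inputs to known outcomes.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised: discover structure with no labels provided.
pca = PCA(n_components=2).fit(X)
print("variance explained:", pca.explained_variance_ratio_)
```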
The type of learning in which the ML model uses the whole dataset at once is called batch learning. After training is finalized, the algorithm’s weights are fixed, and it can analyze new data in the required production setting. New information obtained during the production process does not alter the fixed weights of the algorithm; hence, the system is not learning. The positive aspects of such systems are that they are stable and robust, and their performance and accuracy can be easily identified in advance. However, the disadvantage is that the system cannot adapt to newly obtained information. It has to be trained from scratch using both previous and new data samples, which requires significant computing resources and is time-consuming. Incremental (online) learning, in contrast, updates the algorithm continuously as new data arrive; its disadvantages are the inability to determine its accuracy in advance and the instability of system performance due to continuous changes in the algorithm, leading to problems for licensing.35
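The distinction can be illustrated with scikit-learn, whose SGDClassifier supports both styles: `fit` trains once on the full dataset (batch), while `partial_fit` updates the weights as each new chunk of data arrives (incremental). The data stream below is simulated and purely illustrative.

```python
# Sketch of batch vs. incremental learning with scikit-learn's SGDClassifier.
# Illustrative only: the data stream is simulated with random numbers.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] - X[:, 2] > 0).astype(int)

# Batch learning: fit once on the whole dataset; weights are then fixed.
batch_model = SGDClassifier(random_state=0).fit(X, y)

# Incremental (online) learning: weights keep updating with each new chunk.
online_model = SGDClassifier(random_state=0)
for start in range(0, len(X), 100):          # simulate data arriving in chunks
    chunk = slice(start, start + 100)
    online_model.partial_fit(X[chunk], y[chunk], classes=[0, 1])
```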
This review aims to provide a thorough analysis of the impact of automation and ML on QA in laboratory medicine, offering a comprehensive account of developments, a realistic assessment of obstacles, and workable plans for implementing these technologies. The assessment includes an examination of present practices, a review of obstacles ranging from data processing to compliance with regulations, and a discussion of approaches for seamless integration into current systems.36 It is important because it offers a thorough and empirically supported examination of the subject, serving as a fundamental reference for laboratory medicine professionals. We underscore the role of automation and ML in raising QA standards by highlighting their transformational potential. In addition, we address how to overcome obstacles preventing their widespread use and provide practical plans of action for all parties involved, such as scientists, policymakers, and laboratory personnel.37 The purpose of this review is to make a significant contribution to the discussion of technological developments in laboratory medicine, with an emphasis on improving patient care standards, and to impact future research, inform policymaking, and foster innovation in laboratory medicine QA by aligning with current research and technological advancements.
Progress in automation within laboratory medicine
Historical background
The historical evolution of automation in laboratory medicine is characterized by pivotal developments that have reshaped the field. It began with basic mechanization, such as the introduction of automated pipetting, and evolved with the introduction of the first automated analyzers in the 1950s. These initial advancements laid the groundwork for a shift from manual, labor-intensive methods to more efficient, automated processes. Driven by the need to handle growing test volumes while ensuring accuracy, significant progress included the incorporation of conveyor systems, barcode-based specimen tracking, and the adoption of computerized systems for analyzing test results.38 The transition from manual to automated methodologies not only enhanced throughput but also reduced human error, leading to standardized operations and setting the stage for today’s high-capacity automated systems that are fundamental in contemporary laboratory medicine.39
Present-day applications of automation
In contemporary laboratory settings, automation manifests through an array of advanced systems, encompassing everything from auto analyzers for biochemical tests to robotic arms for precise sample handling.40 These systems are seamlessly integrated into laboratory information management systems, facilitating an efficient workflow from the initial logging of samples to the final delivery of results.41 Quantitatively, automation has resulted in a marked escalation in laboratory throughput.42 Modern auto analyzers are capable of processing hundreds, if not thousands, of samples daily, a volume that would be unfeasible manually.43 Additionally, there has been a significant reduction in error rates. Automated systems boast error rates below 1%, starkly contrasting with manual methods, which can see error rates exceeding 5%.44 This enhancement in accuracy can be attributed to precise control over aspects such as sample volume, reagent addition, and reaction timing, along with the implementation of sophisticated detection and analysis technologies.45
Role of ML in the field of clinical chemistry
ML plays a vital role in clinical chemistry, enabling streamlined analytical assay approaches for rapid detection.
Quality review of laboratory results
The pre-analytical phase is a major step in the sample testing process, accounting for approximately 70% of errors in laboratory diagnosis. One common mistake is using the wrong tube for blood sample collection, as demonstrated by Rosenbaum and Baron. ML-based multi-analyte delta checks show great potential to outperform conventional single-analyte delta checks. The most promising algorithm is an SVM based on variations in laboratory values between sequential collections among eleven commonly measured chemistry analytes. The proposed algorithm achieved an area under the receiver operating characteristic curve of 0.97 in identifying wrong blood in tube (WBIT) errors and outperformed univariate delta checks. Assuming a 1% WBIT error prevalence and 80% test sensitivity, the most accurate univariate delta check achieved a positive predictive value (PPV) of 13%, while the SVM model achieved a PPV of 52%. Factors like hemolysis can affect numerous laboratory parameters. Benirschke and Gniadek developed a multivariate Logistic Regression model to detect falsely elevated point-of-care potassium results caused by hemolysis.46
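The sketch below illustrates the general idea of a multi-analyte delta check: differences between sequential results form a feature vector, and a classifier flags specimen pairs that look like they came from different patients. It is a simplified illustration with synthetic data, not the published model.

```python
# Illustrative sketch of a multi-analyte delta check in the spirit of the SVM
# approach described above (not the published model; the data are synthetic).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n, k = 500, 11                        # 500 specimen pairs, 11 chemistry analytes
prev = rng.normal(0, 1, size=(n, k))
curr = prev + rng.normal(0, 0.2, size=(n, k))        # mostly consistent pairs
wbit = rng.random(n) < 0.1                            # simulated WBIT errors
curr[wbit] = rng.normal(0, 1, size=(wbit.sum(), k))   # unrelated patient values

deltas = curr - prev                                  # the delta-check features
model = SVC().fit(deltas, wbit)
print("training accuracy:", model.score(deltas, wbit))
```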
Role of ML in the field of hematology
Peripheral smear reporting
The peripheral smear is the initial step in classifying anemias and diagnosing more than 80% of hematological diseases. Several methodologies, such as Bayes classifiers, K-nearest neighbors, multilayer perceptrons, and multiclass SVMs, have been used for the classification of leukocytes. A public dataset of cell images from 17,000 individuals was used to train the model. Public datasets can support the development of integrated medical laboratory systems in routine clinical laboratories, bypassing some drawbacks of commercially available testing reagents, such as high costs and low sensitivity. Another recent study reported high accuracy in classifying different types of white blood cells and myeloblast classes in acute myeloid leukemia, with sensitivity and precision above 90%, based on a convolutional neural network (CNN). Beyond classifying white blood cells, CNNs have proven helpful in morphologically classifying red blood cells. CNN-based approaches can show variable accuracy and limited specificity compared with commercial analyzers, such as CellaVision used for red blood cell classification, but they remove the need for reclassification by manual operators.47
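For readers unfamiliar with the architecture, the following minimal PyTorch sketch shows the shape of a small CNN for leukocyte image classification. The layer sizes, input resolution, and five-class output are illustrative assumptions, not the published models.

```python
# Minimal sketch of a CNN for white-blood-cell image classification.
# Assumes PyTorch is installed; architecture and class count are illustrative.
import torch
import torch.nn as nn

class WBCClassifier(nn.Module):
    def __init__(self, n_classes=5):               # e.g., 5 leukocyte classes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                           # x: (batch, 3, 64, 64)
        return self.head(self.features(x).flatten(1))

logits = WBCClassifier()(torch.randn(8, 3, 64, 64))
print(logits.shape)                                 # torch.Size([8, 5])
```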
In the diagnosis of malaria
The gold standard for the laboratory diagnosis of malaria is the microscopic examination of thick and thin stained blood films. Highly trained professionals are required for the microscopic quantification of parasites present in the blood and the different stages of their life cycle. It is a time-consuming and laborious procedure. Several ML approaches have been reported to distinguish different parasite stages or species and to quantify parasitemia. These systems were primarily developed to differentiate infected and non-infected erythrocytes. Notably, a framework with an accuracy of 97.7% was developed by Molina et al., based on SVM and linear discriminant analysis, which differentiated red blood cells infected with malaria from non-infected cells containing similar-appearing inclusions, such as Pappenheimer bodies, Howell–Jolly bodies, and basophilic stippling.48 In another study, Poostchi et al. presented a cost-effective and compact automated microscopy platform with an ML approach to detect Plasmodium falciparum parasites in stained blood smears. The system was efficient enough to screen almost 1.5 million red blood cells per minute for parasitemia quantification, with a simulated diagnostic specificity and sensitivity of over 90%. The study showed that logistic regression analysis was the best-performing model, with 92% accuracy for predicting Plasmodium infections alone and 85% accuracy for predicting mixed infections of Plasmodium falciparum and Plasmodium ovale.49
Role of ML in molecular diagnostics
The development of highly advanced and complex high-throughput nucleic acid technologies has increased competency in the field of molecular diagnostics. These processes have been enabled by advances in ML. Massive multiplexing requires sophisticated approaches to produce analytically valid interpretations and results. Modern next-generation sequencing assays produce high-dimensional, structured datasets that can provide useful prognostic and diagnostic insights. Previously, techniques were implemented to check sequence similarity but often had low efficacy in predicting clinical impact. However, new technologies have been designed to interpret findings from functional analysis to clinical impact. ML techniques are used to generate interpretations of complicated findings from broad genomic assays, available through both clinician-ordered and direct-to-consumer pathways. Molecular diagnostics in laboratory medicine involve probing with nucleic acid sequences and quantifying specific molecules. These omics-oriented tests can support studies including metabolomics, microbiomics, epigenomics, transcriptomics, and proteomics. These tests often include an ML component in the analysis of raw data, which is frequently processed on a large scale. However, the ability to combine multiple sets of -omics data (i.e., multiomics) as a new clinical diagnostic approach and to integrate high-fidelity phenotypic data represents a challenging data-driven direction for molecular diagnostics.50
Role of ML in the field of immunology and serology
In immunology, imaging-based studies have been combined with immunofluorescence for the identification and classification of anti-neutrophil cytoplasmic antibodies. Currently, there are only a few notable examples of digital imaging in chemistry analysis using mechanical devices. There are also many new detection methods where simple and fast equipment is essential. For example, there is significant interest in integrating mass spectrometry systems into the operating room for biochemical analysis of surgical samples. In a newer application, material from tissue samples (such as gas-phase ionic species or droplets) is collected from surgical instruments and sent to a spectrometer. The mass spectrum is then analyzed in real time to quickly perform biochemical analysis. Although this new in vitro diagnostic (IVD) technology is still in development, the method now uses ML to distinguish diseased tissue from healthy tissue. Recent publications describe this method for identifying tumors in various tissue types, including the ovary, thyroid, and lung.51
Detection and identification of microorganisms
Traditional methods for identifying microorganisms and determining their antimicrobial susceptibility are still considered the gold standard. However, these methods are time-consuming, often taking several days for Gram staining, culture, and antimicrobial susceptibility testing. Conventionally, the macroscopic analysis of colony morphology initiates the classification of bacterial species before confirmatory testing using advanced techniques such as mass spectrometry. ML algorithms are proving valuable in the interpretation and analysis of complex spectral outputs from different analytical techniques, including matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS), vibrational spectroscopy, and LC–MS/MS. ML models have been designed for the classification of group B Streptococcus serotypes, distinguishing between Shigella species and Escherichia coli, typing Staphylococcus haemolyticus strains, and differentiating between Clostridium species and Klebsiella species. While MALDI-TOF MS is widely used for microbial identification in routine clinical microbiology labs, vibrational spectroscopy (i.e., IR and Raman spectroscopy) is gaining interest as an alternative technique for the classification of different microorganisms due to its rapid, nondestructive nature.
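A simplified sketch of how such spectral classifiers are built: binned spectral intensities serve as features for a classifier evaluated by cross-validation. The "spectra" below are simulated; a real pipeline would start from preprocessed MALDI-TOF or Raman peak data.

```python
# Illustrative sketch: classifying microbial species from spectral features with
# a random forest. The spectra are simulated, not real MALDI-TOF data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_spectra, n_bins = 300, 200                 # binned m/z intensities per spectrum
X = rng.random((n_spectra, n_bins))
species = rng.integers(0, 2, n_spectra)      # e.g., Shigella vs. E. coli
X[species == 1, :20] += 0.3                  # simulated species-specific peaks

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, species, cv=5).mean())
```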
Detection of antimicrobial resistance
Traditional methods of identifying and testing pathogens involve long turnaround times, with consequences such as prolonged empiric use of broad-spectrum antibiotics and the spread of disease. Various analytical methods, including MALDI-TOF MS, vibrational spectroscopy, whole-genome sequencing, microscopy-based platforms, and acoustic-enhanced flow cytometry, play an important role in the rapid and reliable detection of antimicrobial resistance. Several research groups have reported the potential of MALDI-TOF MS and ML algorithms in the classification of methicillin-susceptible Staphylococcus aureus and methicillin-resistant Staphylococcus aureus. In addition, an ML classifier was developed to distinguish vancomycin intermediate-resistant Staphylococcus aureus (VISA) and vancomycin-susceptible Staphylococcus aureus from heterogeneous VISA and methicillin-resistant Staphylococcus aureus isolates. Approaches combining MALDI-TOF MS data with ML algorithms have also successfully detected extended-spectrum β-lactamase-producing Escherichia coli, identified β-lactamase-producing Bacteroides fragilis strains, and identified fluconazole resistance in Candida albicans.36,44,51
Role of ML in the field of blood bank
Blood banks are facilities that collect, store, process, and distribute blood and exist to ensure that there is sufficient blood for hospital patients.21 Despite the efforts of different organizations, blood transfusion and safe delivery remain major challenges in blood supply chain management, especially in periods of high demand.52 Consequently, reducing uncertainties, meeting blood demand, and avoiding blood wastage are primary goals. The integration of ML algorithms into blood banking management can offer an efficient blood demand and supply chain solution to overcome these challenges. ML approaches can be used in forecasting models to develop AI- or ML-based decision support systems for forecasting blood demand, classifying blood donors, and establishing blood donation schedules.23 As a result of such an updated system, blood shortages and wastage can be reduced.
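A minimal sketch of the forecasting idea: predicting next-week demand from the previous weeks' demand. The weekly figures are simulated, and the lagged-regression setup is one simple illustrative choice among many possible forecasting models.

```python
# Sketch of a simple blood-demand forecasting model; the weekly demand figures
# are simulated, and the lagged-feature design is purely illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
weeks = np.arange(156)                            # three years of weekly data
demand = 100 + 10 * np.sin(2 * np.pi * weeks / 52) + rng.normal(0, 3, 156)

lags = 4                                          # predict from the last 4 weeks
X = np.column_stack([demand[i:len(demand) - lags + i] for i in range(lags)])
y = demand[lags:]

model = LinearRegression().fit(X[:-12], y[:-12])  # hold out the last 12 weeks
print("held-out R^2:", model.score(X[-12:], y[-12:]))
```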
Diagnostic algorithms
The following ML diagnostic algorithms are commonly used.
Whether for prevention, early diagnosis, or corrective treatment, combining AI and ML with Internet of Things (IoT)-enabled wireless sensor networks can provide significant benefits in healthcare. Figure 1 describes the clinical management system, while Table 1 provides predictions and characteristics of ML algorithms based on relevant research.35–39,53,54 Better and more personalized medical services may be offered in the future. ML is an important technology in AI. Because ML algorithms learn from known patterns to generate new ones, they require large amounts of data for training.24,55 The increasing connectivity of laboratory systems is referred to as Lab 4.0 or the Internet of Laboratory Things, which introduces a number of security and safety concerns due to the integration of devices such as sensors and systems in high-risk areas.56 These risks include data breaches, malware and ransomware attacks, unauthorized access and control, IoT device vulnerabilities, and equipment risks. Various measures can be taken to ensure cybersecurity, including network segmentation, authentication and access control, data security and protection, field control, and vulnerability analysis. Regularly updating the software and firmware of test equipment, conducting vulnerability analyses, and periodically applying patches to fix system vulnerabilities can reduce the risk of exploitation by attackers.57
Table 1. Summary of characteristics of machine learning algorithms
| Objectives and machine learning tasks | Major themes | Best model | References |
|---|---|---|---|
| Predicts iron deficiency and serum iron level from CBC indices | Prediction | Neural network | 35 |
| Predicts liver function test results from other tests in the panel, highlighting redundancy in the liver function panel | Prediction, utilization | Tree-based | 36 |
| Predicts ferritin from other tests in the iron panel | Prediction, utilization | Tree-based | 53 |
| Predicts normal reference ranges of ESR for various laboratories based on geographic and other clinical features | Interpretation | Neural network | 54 |
| Classifies lab results as valid or invalid using other lab values and clinical information | Automation and interpretation | Tree-based | 37 |
| Classifies blood specimens as clotted or not clotted based on coagulation indices | Quality control | Neural network | 38 |
| Automatically identifies mislabeled samples | Assurance and quality control | Neural network | 39 |
QA in laboratory medicine
QA in the laboratory comprises three phases.
Pre-analytical phase
This is an essential part of QA. Currently, pre-analytical error is considered the largest contributing factor to error throughout the testing process. This can be alleviated to some extent by point-of-care testing (POCT), but new possibilities emerge as we move to different matrices and new approaches to the collection and storage of samples (e.g., home-collected dried blood and biobanking).58
Analytical phase
The landscape of diagnostic modalities has been greatly impacted by advances in POCT, mass spectrometry, and genomics. The number of publications in this field has increased exponentially in recent years, and this growth must continue. The translation of laboratory technology and analytical methods outside the laboratory will also be further developed. This section discusses our predictions regarding the laboratory “screening” of drugs.58
Post-analytical phase
Appropriate interpretation of test results forms the basis for clinical decision-making, which is influenced by well-established time and decision limits. This section discusses three predictions regarding the “post-analytical” phase of testing, including the role of cognitive development.
The influence of automation on QA
The advent of automation has notably transformed QA in laboratory medicine, leading to measurable improvements in quality metrics. For instance, the utilization of automated hematology analyzers has standardized blood cell counting, thereby enhancing reproducibility and accuracy while reducing variability across different operators and institutions.59 A comparison of pre- and post-automation scenarios illuminates the profound impact of automation. Before automation, manual microscopy used for cell counting often resulted in a coefficient of variation of over 10% in some cases.60 In stark contrast, post-automation, automated counters consistently demonstrate coefficients of variation of less than 5%, as detailed by Genc et al.61 The efficiency benefits are equally remarkable; tasks that formerly took minutes per sample can now be executed in mere seconds, significantly increasing the number of samples analyzed without sacrificing—and often improving—the accuracy of results.
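For reference, the coefficient of variation cited above is simply the standard deviation expressed as a percentage of the mean. The replicate counts below are hypothetical values chosen to mirror the pre- and post-automation figures.

```python
# Sketch of the quality metric discussed above: the coefficient of variation
# (CV) of replicate measurements. The replicate values are hypothetical.
import statistics

def coefficient_of_variation(values):
    """CV (%) = sample standard deviation / mean * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100

manual = [4.8, 5.6, 4.3, 5.9, 5.1]          # hypothetical manual counts (10^9/L)
automated = [5.05, 5.10, 4.98, 5.03, 5.07]  # hypothetical automated counts
print(f"manual CV: {coefficient_of_variation(manual):.1f}%")      # ~12.3%
print(f"automated CV: {coefficient_of_variation(automated):.1f}%")  # ~0.9%
```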
The emergence of ML in laboratory data analytics
ML has transformed the analysis of raw laboratory data.
Foundational concepts and applications in laboratories
ML has emerged as a pivotal force in laboratory data analysis, founded on algorithms that autonomously learn from data, discern patterns, and make informed decisions with minimal human oversight.62 Central to ML is its ability to recognize patterns and forecast outcomes, harnessing algorithms such as neural networks, decision trees, and support vector machines. These are particularly effective for the multivariate and intricate datasets typical in laboratory medicine.45 In the realm of laboratory medicine, the data amenable to ML spans both structured forms, like test results, and unstructured types, such as textual reports and imaging. Structured data is naturally suited for ML processing, enabling predictive analysis in areas like patient outcomes based on laboratory test patterns. On the other hand, unstructured data is amenable to analysis via natural language processing and advanced deep learning methods, which facilitate the extraction of critical clinical insights that can refine diagnostic and prognostic approaches.63
ML’s role in advancing predictive analytics
ML has significantly elevated predictive analytics in laboratory medicine by enabling more nuanced trend analyses and quality predictions. ML algorithms, including random forests and gradient boosting machines, offer powerful tools for unraveling complex interrelations in laboratory data, leading to enhanced predictive models for patient diagnosis and prognosis. For example, random forests have effectively predicted patient outcomes by analyzing various laboratory parameters, demonstrating notable accuracy and providing comprehensive insights into variable significance.64 Furthermore, deep learning, particularly through CNNs, has proven highly effective in image-based assays, excelling in tasks such as cell classification and anomaly detection. These algorithms often achieve accuracy rates that exceed those of human evaluators. Their continuous learning capability makes them invaluable for perpetually enhancing QA in laboratory practices.9
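The variable-significance output mentioned above can be read directly from a fitted random forest, as the illustrative Python sketch below shows. The laboratory parameters, outcome, and data are all synthetic assumptions.

```python
# Sketch of the predictive modeling described above: a random forest predicting
# a binary patient outcome from lab parameters, with variable importances as
# interpretive output. Feature names and data are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
features = ["glucose", "creatinine", "CRP", "hemoglobin"]
X = rng.normal(size=(400, 4))
outcome = (0.8 * X[:, 2] + 0.4 * X[:, 0] + rng.normal(0, 0.5, 400) > 0).astype(int)

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, outcome)
for name, importance in zip(features, model.feature_importances_):
    print(f"{name}: {importance:.2f}")        # relative variable significance
```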
ML in diagnosis and prognosis
The adoption of ML in laboratory medicine has markedly advanced diagnostic and prognostic capabilities, significantly refining precision and patient care outcomes. An exemplary case is its application in the early detection of diseases such as diabetes.65 Here, ML algorithms surpass traditional methods in predicting disease onset, analyzing patient data to forecast diabetes development with heightened accuracy.66 In oncology, particularly cancer diagnosis, ML’s impact is profound. Models trained on extensive histopathology image datasets have demonstrated high accuracy in identifying cancerous cells, assisting pathologists in quicker and more precise diagnoses. These advancements elevate patient care standards and streamline laboratory operations by reducing diagnosis time, enabling earlier treatment interventions.67
Future role of total laboratory automation
Many future perspectives for laboratories foresee robotics and mobile devices playing a major role, following predictions in other areas of industry, such as commerce and business. Mobile robots are used to transport diagnostic samples, and two-arm robots are employed in the scanning process. Collaborative robots (cobots) are a new class of robots that are safe to operate alongside humans, easy to deploy and train, and inexpensive. They are designed to handle light payloads (e.g., up to 5 kg). A hospital in Denmark used two Universal Robots UR5 cobots for blood sample processing. The first cobot takes the sample, places it on the barcode scanner, identifies the cap color via a camera, and places the tube on a rack according to cap color. The second cobot collects the tubes and racks them on the feeder for centrifugation and subsequent analysis. The footprint of these cobots meets the laboratory’s space limitations, eliminates the need for a safety cage, allows seven to eight tubes to be processed per minute, and enables the laboratory to handle 20% more samples without additional personnel. Future momentum for the use of robots in the laboratory may result from increased use of robots in other areas of the hospital. For example, robots are used in surgery, medication and diaper distribution, sterilization, medical care, patient consultation, and more. Another application of robots is total laboratory automation, integrating analyzers directly with sample transport paths. This is now standard and is unlikely to change in the 2020s. New total laboratory automation systems feature two-way, variable-speed magnetic transport models, multi-view cameras, and radio-frequency identification tracking.51
Green technologies and sustainability
Achieving sustainability has become a new goal. Climate change and environmental concerns are national and international issues. The general public is increasingly sustainability-conscious, often using environmental considerations to guide their choices and practices. Governments, businesses, and individuals worldwide are striving to ensure that their operations are environmentally friendly. This includes “green” projects focused on energy, infrastructure, waste, water management, and housing development. Results from selected hospitals across the country that have implemented programs to reduce energy use and waste, improving operating-room efficiency, show that savings from these interventions could exceed $5.4 billion in five years and $15 billion in 10 years. In the 2020s, community laboratories will have the opportunity to drive and develop new models and practices for sustainability. Professionals in the workplace play an important role in creating a healthy environment. This sustainable thinking will ensure efficient and responsible resource use, creating new value for the health mission rather than simply addressing fewer problems. Providing safe and cost-effective care to patients and their families must be a priority, and environmental management can be achieved simultaneously. New technologies will play an important role in this quest. AI and data science in clinical laboratories increase efficiency, optimize the use of reagents and resources, and contribute to leadership in sustainability. AI can improve energy efficiency and measure and manage carbon and water footprints. “Smart” decisions will be encouraged, including reducing unnecessary testing. Results from a pediatric heart study focusing on blood pressure measurements showed positive effects on biochemical test utilization and a reduction in carbon dioxide emissions of approximately 17.8 tons at a 32-month follow-up. IVD reagent manufacturers are also addressing environmental concerns, working to reduce reagent packaging and thereby decrease both carbon and environmental footprints. Consultation and cooperation with IVD stakeholders ensure better supply chain security, reagent production, and production equipment.68
How will POCT evolve?
Predicting the future balance between testing and self-assessment or self-monitoring is challenging. Factors contributing to this evolution include the important role of mobile health and care-related information, the emergence of new diagnostic tools, and innovative diagnostic tests (e.g., medical scanners, toilet tests, and comparison with medical records). The emergence of smartphones in 2007 changed many aspects of daily life, offering electronic devices with telephone, photo/video, MP3 player, media, and weather functions, as well as easy Internet access. Moreover, the functionality of smartphones can be expanded through downloadable applications, particularly in health and wellness, a sector expected to grow substantially by 2025.
The next phase of POCT evolution involves devices that connect to smartphones, creating diagnostic tools with capabilities ranging from blood tests to ultrasound scans. Another advancement is diagnostic equipment that connects wirelessly to smartphones (e.g., Bluetooth-connected pregnancy tests and ClearBlue-connected ovulation tests). Smartphones can also be used in POCT to read urine test strips with the built-in camera, employing color recognition, computer vision, and AI to make accurate measurements across different conditions and devices. These results can then be securely shared and integrated into patients’ electronic medical records. The menu of tests should be expanded to include the urine albumin:creatinine ratio. An example of the proliferation of mobile medical devices is the increase in smart devices, clothing, or appliances with integrated or woven sensors that provide health information unobtrusively in daily life. Wearable devices include wrist-worn devices (e.g., the Apple Watch for monitoring heart rate; bracelets for diagnosing epilepsy), mouthguards (e.g., measuring linear and rotational acceleration, impact location and direction, and counting impacts), wearable vests (e.g., the CardioInsight non-invasive 3D mapping system), and various types of wireless patches (e.g., the Smartcardia patch, which monitors vital signs such as temperature, pulse, blood pressure, blood oxygen level, heart rhythm, and electrical activity). These patches are often interchangeable and, in some cases, stretchable (e.g., electronic skin with pressure and temperature sensors). Two other emerging technologies that may impact the future of non-invasive POCT are breath analysis (volatilomics) and speech analysis. Breath contains compounds, particularly volatile organic compounds, whose composition has been linked to disease. Breath analysis is attractive for POCT, and many analytical methods have been developed. Speech analysis is relatively new as a diagnostic tool; algorithms have been developed with some success in detecting coronary artery disease. In pharmacy-based POCT, pharmacists take finger-prick blood samples, which are then analyzed for up to 21 tests at the pharmacy (e.g., cholesterol, triglycerides, blood glucose, and liver function). The number and reach of POCT are likely to increase in the 2020s.
Challenges in implementing automation and ML
The following challenges are encountered when implementing ML algorithms in data management and analysis.
Data management complexities
In implementing automation and ML in laboratory medicine, managing data, especially regarding patient privacy and security, poses significant challenges.69 The sheer volume of data demands robust encryption and controlled access to prevent unauthorized exposure. Addressing these concerns involves a combination of technological and policy-based solutions, including advanced cybersecurity measures and blockchain technology to secure data transactions. Simultaneously, comprehensive policy frameworks and ongoing staff training are essential to ensure adherence to data security best practices, maintaining trust and integrity in laboratory information systems.70
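As one small, concrete example of the technological controls mentioned above, identifiers can be pseudonymized with a keyed hash before data leave the laboratory system. The sketch below is deliberately simplified, and the secret key handling is an assumption; production systems would use managed key storage.

```python
# Sketch of one data-protection control: pseudonymizing patient identifiers
# with a keyed hash. Simplified for illustration; real deployments would keep
# the key in a managed secrets store, not in source code.
import hashlib
import hmac

SECRET_KEY = b"replace-with-managed-secret"   # assumption: stored in a vault

def pseudonymize(patient_id: str) -> str:
    """Keyed hash so identifiers cannot be reversed or rainbow-tabled."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

print(pseudonymize("MRN-0012345"))
```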
Algorithmic biases and challenges
Algorithmic bias represents a critical challenge in ML applications in laboratory medicine, impacting diagnostic precision and patient outcomes.71 These biases, arising from unrepresentative training data or algorithm design flaws, can lead to systematic errors in patient care. To bolster algorithmic reliability, it is essential to use diverse and representative datasets and perform thorough validation across various population groups.72 Ongoing monitoring of algorithmic outcomes is crucial for identifying and rectifying biases. Implementing explainable AI can demystify algorithmic decisions, identify and mitigate biases, and thus enhance algorithm reliability and trust in AI-driven laboratory processes.73 Overcoming biases resulting from non-representative data is a key challenge in developing and implementing AI approaches in healthcare. When AI algorithms fail to account for patient diversity due to limited or biased data, they can produce recommendations that negatively impact patient care. There is a risk of misrepresentation or distortion of healthcare disparities, as AI algorithms may suggest treatment strategies based on race, gender, genetic history, or socio-demographic factors. This can lead to discrepancies in medical laboratory standards and outcomes for different patient populations, misdiagnosed clinical conditions, serious therapeutic errors, and incorrect treatment recommendations. Biased AI systems compromise legal and ethical standards, erode patients’ rights, and diminish public trust in healthcare. They can also affect public health by disrupting the allocation of health resources, interventions, and decision-making based on flawed recommendations. Addressing AI biases requires integrating expertise, ethical oversight, and collaborative measures, such as incorporating data from diverse patient demographics and races, and applying bias detection and mitigation techniques, including debiasing and bias-removal algorithms. To ensure transparency and accountability in AI development and implementation, disclosure of resources, assumptions, and vulnerabilities to stakeholders is essential, allowing multiple perspectives and skills to be integrated into the model. By addressing these biases, healthcare systems can uphold a comprehensive, equitable approach that improves patient outcomes and increases public trust in AI systems.74
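Ongoing monitoring of this kind can start very simply: computing a performance metric per demographic subgroup and flagging gaps. The sketch below compares sensitivity across two illustrative groups using simulated predictions and labels.

```python
# Sketch of subgroup bias monitoring: comparing a model's sensitivity across
# demographic groups. Predictions, labels, and group labels are simulated.
import numpy as np

rng = np.random.default_rng(6)
y_true = rng.integers(0, 2, 1000)
group = rng.choice(["A", "B"], 1000)
y_pred = y_true.copy()
# Simulate a model that misses more positives in group B.
miss = (group == "B") & (y_true == 1) & (rng.random(1000) < 0.3)
y_pred[miss] = 0

for g in ["A", "B"]:
    positives = (group == g) & (y_true == 1)
    sensitivity = (y_pred[positives] == 1).mean()
    print(f"group {g} sensitivity: {sensitivity:.2f}")   # gap signals bias
```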
Navigating regulatory and compliance challenges
The regulatory landscape for ML and automation in laboratory medicine is constantly evolving, shaped by various international standards and national regulations focused on patient safety and data security.59 In the United States, entities such as the Clinical Laboratory Improvement Amendments and the U.S. Food and Drug Administration oversee laboratory testing, including ML and automated applications, mandating comprehensive validation and quality control.75 Compliance challenges arise from the dynamic nature of ML models, which continually evolve and adapt, potentially surpassing existing regulatory frameworks designed for static devices.76 The development of adaptive regulatory pathways is crucial to maintain safety and efficacy while encouraging innovation. Striking a balance between technological progress and stringent regulatory compliance is a key obstacle to the widespread integration of these technologies in clinical settings.77
Infrastructural and economic factors
The integration of automation and ML into laboratory medicine involves a complex cost-benefit analysis. This includes initial investments in technology and training, as well as potential modifications to existing workflows.78 Larger institutions often benefit from economies of scale, being able to distribute costs across a high volume of tests. However, the long-term advantages, such as heightened efficiency, error reduction, and potentially superior patient outcomes, can outweigh the initial costs.79 In resource-limited environments, challenges extend beyond financial aspects, encompassing infrastructural deficiencies like inconsistent power supplies, internet connectivity issues, and a lack of adequately trained personnel.80 To overcome these hurdles, a holistic approach is needed—one that not only focuses on technological advancement but also considers the local context, including investments in infrastructure and human resources development.81
Applying rules in the context of ML models presents unique challenges, especially given the complexity of these models. One major challenge is the legal background, as laws often struggle to keep pace with the rapid growth of ML. As ML models evolve and improve, regulatory frameworks may lag, making it difficult for organizations to enforce outdated or inappropriate policies.82 A second challenge is interpretability: many regulations, such as the European Union’s General Data Protection Regulation, require that decision-making processes, including those supported by ML models, be describable and explainable for compliance.83 However, as ML models become more complex, full disclosure and explanation become increasingly difficult, complicating organizational compliance. A third challenge is the privacy and security of data, as regulations such as the Health Insurance Portability and Accountability Act in the United States and the European Union’s General Data Protection Regulation introduce strict criteria for protecting sensitive and private information, including medical records used to train, validate, or run ML models.84 Compliance with these regulations requires strong security measures for data storage, access, and management. Impartial and fair implementation of regulations helps prevent injustices and discrimination that may arise from ML systems. However, complete bias removal from ML algorithms is not easily achievable. Information technology-based organizations should proactively identify, address, and monitor biases, aligning with regulatory requirements to verify and validate the precision, accuracy, consistency, transparency, and stability of ML models, especially when handling complex ML-based deep neural networks. A multidisciplinary approach connecting ML, data science, governance, and legal and policy decision-making is essential, with flexibility in regulatory frameworks since standards vary across regions and industries.85
Tactical frameworks for implementing automation and ML
Establishing validation and standardization frameworks
For ML applications in laboratory medicine, implementing robust validation procedures is crucial to ensure consistent and accurate algorithm performance. A multi-tiered validation strategy is advisable, starting with internal validation against historical data, followed by external validation using data from multiple centers.86 This approach helps identify and correct potential overfitting and biases that may not be evident in single-center studies. Additionally, establishing international algorithmic standards is vital to ensure consistency and interoperability across various systems and institutions.87 Entities such as the International Organization for Standardization and the Clinical and Laboratory Standards Institute could expand laboratory standards to encompass ML applications, covering aspects such as algorithmic transparency, data quality, and performance metrics. Such standardization is key not only for QA but also for facilitating regulatory approvals globally.88
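The tiered strategy can be expressed compactly in code: internal cross-validation on one center's data, then a locked model evaluated once on a second center's data. Both datasets below are simulated, with the external cohort deliberately shifted to mimic population differences.

```python
# Sketch of multi-tiered validation: internal cross-validation on one center's
# data, then external validation on a second, shifted cohort. All data are
# simulated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X_internal = rng.normal(size=(500, 6))
y_internal = (X_internal[:, 0] > 0).astype(int)
X_external = rng.normal(0.2, 1.1, size=(300, 6))   # shifted external population
y_external = (X_external[:, 0] > 0).astype(int)

model = LogisticRegression(max_iter=1000)
internal = cross_val_score(model, X_internal, y_internal, cv=5).mean()
model.fit(X_internal, y_internal)                  # lock the model, then test
external = model.score(X_external, y_external)
print(f"internal CV accuracy: {internal:.2f}, external accuracy: {external:.2f}")
```

A drop from internal to external performance is exactly the overfitting signal that single-center studies can miss, which is why the external tier matters.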
Encouraging cross-disciplinary collaboration
Cross-disciplinary collaboration is essential in maximizing the potential of automation and ML in laboratory medicine.89 Teams comprising data scientists, laboratory technicians, clinicians, and information technology experts are crucial for the development, validation, and implementation of sophisticated analytical tools. A notable example is the team at Beth Israel Deaconess Medical Center, which created an ML algorithm to predict patient risks by integrating laboratory data with electronic health records, achieving enhanced patient outcomes.90 The partnership between IBM Watson Health and Quest Diagnostics is another instance of successful interdisciplinary collaboration, where cognitive computing is applied to vast laboratory data, advancing the field of precision medicine.91 These initiatives underscore the importance of merging technological expertise with clinical insights to drive innovation in healthcare.
Fostering educational and training programs
Adapting educational models to include automation and ML is imperative in laboratory medicine. This includes developing curricula that blend data analytics with clinical acumen, as well as modifying continuous professional development programs to keep pace with technological progress.92,93 Online platforms offering micro-credentials provide flexible, targeted learning opportunities in areas such as data analysis and system integration, essential for laboratory professionals to maintain competency in these rapidly advancing technologies. Such educational initiatives are key to ensuring the workforce remains adept in the evolving technologies underpinning QA in laboratory medicine.94
Ethical considerations and regulatory evolution
The increasing prevalence of ML systems in laboratory medicine necessitates the development of specific ethical guidelines. These guidelines should focus on aspects such as transparency in algorithmic decisions, informed patient consent for data usage, and ensuring equity in healthcare outcomes, as emphasized by the American Medical Association in its policy on augmented intelligence.95 Concurrently, it is critical to promote proactive policymaking, fostering collaboration between regulators, technologists, and healthcare professionals. Such a collaborative stance ensures that regulatory measures are both informed by and adaptable to the intricacies of ML applications, facilitating safe and effective integration while keeping ethical considerations at the forefront.96 Regulatory frameworks need to be agile, adapting to the fast-evolving field of ML in laboratory medicine, akin to the U.S. Food and Drug Administration’s progressive guidelines on digital health.97
Potential ethical concerns regarding patient privacy and data security arise from the use of AI in healthcare, as AI algorithms accumulate and process data from a large number of patients, increasing the risk of data breaches. Unauthorized or unsupervised access to private medical histories and data can violate privacy and have serious consequences for patients. Such breaches can result in data misuse instead of improving health outcomes. Even with high privacy control systems, there is always a risk of future re-identification of patient data. Patients may sometimes be unable to fully understand how their data is used for clinical testing or research, potentially compromising their right to informed consent. Algorithm bias, data ownership, and regulatory compliance may also affect the fairness of healthcare delivery.95 Addressing these ethical challenges requires multiple strategies, including strong and reliable data management policies, encryption and anonymization technologies, transparent communication with patients about data security and use (informed consent), continuous monitoring of algorithmic biases, and strict implementation of regulatory standards. Moreover, fostering an environment of trust and accountability among healthcare providers, technology developers, and patients is essential to ensure the credibility of medical intelligence while protecting patient privacy and data security.96
Prospective developments
Emerging trends in laboratory medicine
Looking ahead, laboratory medicine is poised for transformative evolution driven by the amalgamation of ML and automation. Anticipated future trends include the development of self-regulating laboratory systems, which autonomously adjust based on continual data analysis, thereby boosting accuracy and operational efficiency. The integration of IoT devices is expected to enable remote monitoring and management of laboratory processes. Moreover, advancements in ML are likely to facilitate predictive diagnostics, using extensive datasets to foresee potential disease outbreaks or patient-specific health risks, paralleling predictive maintenance techniques used in industrial contexts.80 This integration may also catalyze the decentralization of laboratory services, with point-of-care diagnostics becoming more prevalent and requiring minimal human oversight, thereby extending healthcare reach, especially in under-resourced areas.98
Personalized medicine and its public health implications
Personalized medicine, tailored to individual genetic, environmental, and lifestyle profiles, is increasingly becoming a healthcare priority. ML and automation are pivotal in this shift, enabling the intricate analysis of biological data and the identification of targeted treatment pathways.99 These technologies are expected to revolutionize patient care by customizing therapies and predicting individual responses to various treatments.100
Fostering innovation and flexibility
In the evolving field of laboratory medicine, continuous innovation is crucial to uphold the reliability and validity of diagnostic tests amid advancing technologies. As novel tools and methods emerge, the sector must be agile, updating its protocols, introducing new quality control strategies, and equipping professionals with the skills to manage complex equipment and data analyses.101 This adaptability is vital to ensure that technological advancements yield enhanced health outcomes while upholding the accuracy and ethical standards central to laboratory practice.102
Conclusions
This review has explored the progressive integration of automation and ML in laboratory medicine, underscoring its transformative effect on QA. We have traversed the promising prospects offered by this integration, from enhancing diagnostic accuracy to bolstering analytical performance. Yet, this path is laden with challenges such as data management complexities, biases in algorithms, evolving regulatory scenarios, and economic considerations. To address these challenges, we have proposed several strategic measures: implementing stringent validation protocols, encouraging cross-disciplinary collaboration, advancing educational efforts, and crafting ethical guidelines, all aimed at heralding a new era of technological integration. The convergence of ML, automation, and personalized medicine points toward a future where laboratory diagnostics are not just reactive but increasingly predictive and preventive. The responsibility lies with the contemporary scientific community to implement proactive strategies, ensuring that continuous innovation, adaptability, and a collaborative spirit form the foundation of laboratory medicine. Armed with these principles, the field can not only adapt to but also drive the ongoing wave of technological evolution, enhancing patient care and public health.
Declarations
Funding
None.
Conflict of interest
The authors have no conflict of interest related to this publication.
Authors’ contributions
Conceptualization, study design, writing original draft (QUA), data curation (RN), formal analysis (AN), project administration (HS), writing-review and editing (AD), proofreading, editing, and corrections (IUM, MI). All authors have made significant contributions to this study and have approved the final manuscript.