Introduction
Pharmacology has progressed from the empirical use of natural remedies to mechanism-driven molecular therapeutics, and now toward data-intensive, systems-level drug discovery.1 While the 20th century was dominated by reductionist strategies aimed at designing highly selective single-target drugs, this paradigm has proven insufficient for addressing the complexity of human diseases. Most pathological conditions, including cancer, neurodegenerative disorders, metabolic syndromes, and inflammatory diseases, arise from dysregulated biological networks rather than isolated molecular defects. Consequently, single-target interventions often show limited efficacy, unpredictable adverse effects, and poor translational success.2,3
Recognition of biological systems as interconnected networks has led to the emergence of polypharmacology, wherein a single therapeutic agent modulates multiple molecular targets or pathways.4 Initially observed as unintended off-target activity, polypharmacology is now considered a rational and often desirable strategy for managing multifactorial diseases.5,6 Multi-target interventions can enhance therapeutic efficacy, reduce resistance, and enable system-level modulation of disease processes. However, understanding and predicting multi-target interactions exceed the analytical capacity of traditional experimental models.7–9
Simultaneously, the foundations of preclinical pharmacology are undergoing transformation due to ethical, regulatory, and scientific pressures to reduce reliance on animal experimentation. Historically, knowledge in the field was derived primarily from hands-on experience, with researchers examining the effects of plant-based remedies, natural toxins, and simple chemical compounds.10,11 The implementation of the 3R principles (Replacement, Reduction, and Refinement), coupled with growing evidence of the limited human translatability of animal data, has accelerated the development of new approach methodologies. Yet replacing animal models presents a major challenge: the loss of organism-level biological complexity needed to evaluate systemic drug effects, toxicity, and multi-organ interactions. Advances in artificial intelligence (AI), machine learning (ML), and computational systems biology are emerging as powerful tools to bridge this gap. The integration of multi-omics datasets, high-throughput screening outputs, clinical data, and molecular simulations allows AI to reconstruct biological networks and model drug–target–pathway interactions at an unprecedented scale. AI-driven approaches enable prediction of multi-target pharmacodynamics, off-target toxicity, and system-wide responses, thereby aligning naturally with the principles of polypharmacology and human-centric drug discovery. This convergence of polypharmacology, AI technologies, and non-animal methodologies marks a paradigm shift in therapeutic research. Rather than relying on animal-based biological complexity, modern pharmacology increasingly leverages computational reconstruction of human biology. Framing this transformation as a “Cage-to-Code” transition, the field is moving toward scalable, ethical, and precision-oriented models capable of improving drug safety, efficacy, and translational relevance.
Figure 1 summarizes the evolution of pharmacology from the traditional herbal medicine era to the modern polypharmacology or genomics era.12
Therefore, this review critically examines the transition from traditional animal-dependent pharmacology to AI-driven, human-centric drug discovery within a polypharmacological framework. Specifically, it explores:
Global regulatory and ethical drivers promoting non-animal methodologies,
Scientific and educational challenges arising from reduced animal experimentation, and
The role of AI and deep learning in reconstructing biological complexity through multi-omics integration and predictive modeling.
Evolution of pharmacology: Context and paradigm shift
Milestones in pharmacological research
Technological innovations have played a crucial role in accelerating drug discovery processes. High-throughput screening allows rapid testing of thousands of compounds, while combinatorial chemistry enables the synthesis of diverse chemical libraries. Quantitative structure-activity relationship (QSAR) modeling further aids in predicting the biological activity of compounds based on their chemical structure, streamlining the identification of promising drug candidates.13 These advancements have not only increased efficiency but also highlighted the limitations of traditional single-target drug design, especially when dealing with multifactorial or complex diseases such as cancer, neurodegenerative disorders, and autoimmune conditions. Addressing these challenges requires a shift toward multi-target approaches, leveraging systems biology and computational tools to develop more effective and personalized therapies. The ongoing convergence of these scientific fields continues to shape the future of pharmacology, emphasizing the importance of integrated strategies in drug development and disease management.14
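The QSAR idea referenced above can be reduced to a minimal sketch: fit a model mapping a numerical descriptor to measured activity, then predict activity for unseen compounds. The sketch below uses a single hypothetical descriptor (logP) with illustrative pIC50 values; real QSAR models use many descriptors and cross-validated learners.

```python
# Minimal QSAR sketch: fit a one-descriptor linear model relating a
# hypothetical lipophilicity descriptor (logP) to measured activity (pIC50).
# All data below are illustrative, not real measurements.

def fit_linear_qsar(descriptors, activities):
    """Ordinary least squares for activity = slope * descriptor + intercept."""
    n = len(descriptors)
    mean_x = sum(descriptors) / n
    mean_y = sum(activities) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(descriptors, activities))
    var = sum((x - mean_x) ** 2 for x in descriptors)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

def predict(model, descriptor):
    slope, intercept = model
    return slope * descriptor + intercept

# Hypothetical training set: logP values and matching pIC50 activities.
logp = [1.2, 2.0, 2.8, 3.5, 4.1]
pic50 = [5.0, 5.6, 6.1, 6.8, 7.2]

model = fit_linear_qsar(logp, pic50)
print(predict(model, 3.0))  # predicted activity for an unseen compound
```

In practice the descriptor matrix is high-dimensional and a regularized or ensemble learner replaces the hand-rolled least squares, but the structure-to-activity mapping is the same.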
A better understanding of dose–response relationships has helped optimize drug efficacy and safety. These advancements formed the foundation for modern drug design, emphasizing specificity and potency. The emergence of polypharmacology marked a significant development, reflecting the recognition that many diseases involve multiple biological pathways.15 Consequently, drug discovery shifted from targeting single molecules to designing compounds that modulate several targets simultaneously. This approach aims to improve therapeutic outcomes and reduce adverse effects. Advances in synthetic chemistry, clinical pharmacology, and genomics further expanded the scope of pharmacology. Synthetic chemistry enabled the creation of novel compounds with improved efficacy and safety profiles.16 Clinical pharmacology provided insights into drug behavior in humans, while genomics facilitated personalized medicine by understanding genetic influences on drug response. Parallel to these scientific advances, improvements in analytical techniques such as chromatography and spectroscopy enhanced the ability to measure drug concentrations accurately. These methods improved the quality of pharmacokinetic and pharmacodynamic data, enabling more efficient translation of laboratory research into clinical practice. Overall, pharmacology continues to evolve, integrating new scientific insights and technological innovations to improve drug development and therapeutic strategies.17
Transition from in vivo (animal) to in silico and in vitro methodologies
Traditional in vivo studies have long remained the benchmark for preclinical assessment of drugs, as they mimic complex biological environments and capture physiological complexity.18 However, concerns regarding interspecies differences in biokinetics, together with ethical and regulatory challenges, have driven a shift toward alternative procedures. In vitro models utilize isolated cells, tissues, and organoids, offering cost-effective platforms for high-throughput screening.19,20 These models are widely used for toxicity testing, mechanism exploration, and target validation. They are closer to human biology but lack systemic complexity and long-term disease modeling capabilities.21
In silico approaches provide rapid predictions of absorption, distribution, metabolism, and excretion (ADME) properties, toxicity, and polypharmacology profiles across large compound libraries even before synthesis. Molecular docking, QSAR, and ML are increasingly applied to absorption, distribution, metabolism, excretion, and toxicity (ADMET) prediction. Vast amounts of clinical data, molecular descriptors, and omics information feed these computational models, enabling hypothesis generation and drug repurposing and repositioning to better understand biological systems. Several countries now endorse in silico methods as important tools for chemical safety and hazard management.22 Despite being trusted methods, in vitro and in silico approaches cannot yet fully replace in vivo experimental models, as each carries its own limitations. For optimum efficacy, safety, and medicinal value, the methods need to be combined to obtain robust data, such as by accounting for species variations in in vivo studies or systematic errors in in vitro methodologies.23 Figure 2 illustrates the integration of in vivo, in vitro, and in silico modeling platforms, highlighting their complementary strengths and inherent limitations.24,25
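As a concrete illustration of pre-synthesis in silico triage, the sketch below applies Lipinski's rule of five, a standard drug-likeness filter. The compound records and descriptor values are hypothetical; in practice the descriptors would be computed by a cheminformatics toolkit.

```python
# Rule-of-five style ADME pre-filter of the kind used in early in silico
# triage. Descriptor values are assumed to come from an upstream
# cheminformatics tool; the thresholds are Lipinski's classic cutoffs.

def lipinski_violations(mol_weight, logp, h_donors, h_acceptors):
    """Count violations of Lipinski's rule of five."""
    violations = 0
    if mol_weight > 500: violations += 1
    if logp > 5: violations += 1
    if h_donors > 5: violations += 1
    if h_acceptors > 10: violations += 1
    return violations

def passes_rule_of_five(**descriptors):
    # Conventionally, at most one violation is tolerated.
    return lipinski_violations(**descriptors) <= 1

# Hypothetical compound records: (name, MW, logP, HBD, HBA)
library = [
    ("cand_A", 342.4, 2.1, 2, 5),   # drug-like
    ("cand_B", 612.7, 6.3, 4, 12),  # too large and too lipophilic
]
hits = [name for name, mw, lp, d, a in library
        if passes_rule_of_five(mol_weight=mw, logp=lp, h_donors=d, h_acceptors=a)]
print(hits)
```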
Over the years, pharmacology has been shaped by crucial discoveries that improved the understanding of drug actions and laid the basis for modern medicine. Dose–response relationships, receptor theory, and pharmacokinetics have transformed empirical practice into a rigorous discipline.26 Subsequently, the 20th and 21st centuries witnessed developments in combinatorial chemistry, high-throughput screening, and molecular pharmacology, which have formed the basis of today’s polypharmacological strategies. Indian and global pharmacology communities have contributed to these milestones, with extensive research spanning experimental and clinical domains.27
Regulatory and ethical drivers for change
Global landscape: regions/countries imposing bans for academic research
The global movement to replace, reduce, and refine animal experimentation has led to legislative actions banning animal testing in various domains, most prominently cosmetics and, increasingly, academic and pharmaceutical research. To date, more than 40 countries have imposed explicit bans on animal testing for cosmetics, with some extending these bans to other sectors of research and more expected to follow.28 Some of the countries that have banned these studies are listed in Table 1.29–41 The global transition is fundamentally anchored by the U.S. Food and Drug Administration (FDA) Modernization Act 2.0 of 2022. This regulatory pivot transforms the ethical debate into a scientific mandate requiring new approach methodologies to provide human-relevant data that overcome the species differences inherent in traditional animal models.42 Regulatory acceptance of AI-driven methodologies is guided by the Good Machine Learning Practice guiding principles. These ten principles mandate that AI models be developed with multidisciplinary expertise and monitored intensively to ensure the performance of adaptive models over their total product life cycle.43
Table 1Global bans on animal testing for cosmetics by jurisdiction and year
| Jurisdiction | Highlights |
|---|
| United Kingdom29 | The U.K. originally banned animal testing for cosmetics in 1998. However, the U.K. reverted to EU rules on animal testing in 2019, which required some animal testing for cosmetic ingredients when no alternatives were available |
| EU30,31 | The EU banned animal testing on finished cosmetic products in 2004. In 2009, the EU banned animal testing on ingredients for cosmetics, as well as the marketing of any cosmetics tested on animals. All countries associated with the EU have also banned animal testing for cosmetics |
| Israel32 | Israel banned animal testing for cosmetics in 2007. In 2013, the country banned the import of cosmetics tested on animals |
| India33,34 | India banned both animal testing for cosmetics and the import of cosmetics tested on animals in 2014 |
| New Zealand35,36 | Though no cosmetics animal testing had ever been performed in New Zealand, the country banned the practice in 2015 |
| South Korea37 | South Korea banned the distribution and sale of cosmetics tested on animals in 2016 |
| Turkey38 | Turkey banned animal testing for cosmetics in 2016 |
| Australia39 | Australia banned animal testing for cosmetics in 2020 |
| Brazil40 | Brazil banned animal testing for cosmetics in 2023 |
| Canada32,41 | Canada banned both animal testing for cosmetics and the import of cosmetics tested on animals in 2023 |
In addition to cosmetics, academic and pharmaceutical research is continuously moving toward regulatory acceptance of alternative methodologies. For example, the U.S. Environmental Protection Agency announced drastic reductions in mammal testing in chemical safety assessments, encouraging human-relevant, non-animal approaches.44
Ethical, societal, and regulatory drivers
The core ethical impetus for banning animal testing arose from increasing societal concern about animal welfare and rights. Advocacy organizations (e.g., People for the Ethical Treatment of Animals (PETA), Humane Society International, etc.) have raised public awareness about the pain, distress, and ethical cost to animals in experiments. Ethical philosophies emphasize animal sentience, the capacity to suffer, and moral considerations that challenge the justification of animal use when alternatives exist.45 Public opinion has shifted toward favoring cruelty-free products and animal welfare. Various investigations have revealed customer preferences for bans on animal-tested drugs and cosmetics, thereby favoring non-animal-tested molecules. Many awareness campaigns by social activists and the media have increased the demand for restricting animal testing.46
The 3Rs framework, established around the end of the 20th century, became a central guideline for regulators seeking to limit reliance on animals in scientific research and safety testing.47 Efforts to align policies across regions such as the European Union (EU) and India demonstrate how coordinated legislation can support consistent and enforceable restrictions on animal testing. At the same time, regulatory authorities are progressively approving well-validated in vitro systems and computational approaches as acceptable substitutes for animal-based studies in cosmetics and chemical safety evaluations.48 Regulators require that any proposed alternative method be thoroughly validated and shown to predict human outcomes as reliably as, or better than, traditional animal tests. Organizations such as the Organization for Economic Co-operation and Development (OECD) regularly revise their testing guidelines to phase out outdated animal-based procedures, illustrated by the removal of the LD50 test in 2002.49
Beyond preclinical testing, integration of AI-derived data must align with international clinical research standards, such as ICH E6, which ensures the ethical and scientific quality of clinical trials, and ICH E8, which provides the framework for quality in study design. Regulatory bodies always need qualified computational models for specific purposes so that they do not compromise safety standards established by these guidelines.50,51
In order to comply with legal restrictions and public expectations, an increasing number of multinational organizations and pharmaceutical companies have implemented cruelty-free policies and are investing in AI platforms, organ-on-chip systems, and other advanced modeling tools. Despite these advancements, it is still challenging to strike the right balance between human safety, scientific validity, and ethical considerations. Because of this, when alternatives are still insufficient, authorities frequently rely on phased adoption tactics, temporary permits, and selective licensing for particular investigations.52 Figure 3 illustrates the multifaceted paradigm shift occurring in drug and cosmetic safety protocols.53
Implications of the ban on animal experimentation
Challenges faced by educators, researchers, and pharmaceutical developers
The broad trend toward prohibiting the use of animals in academic research, especially noticeable in jurisdictions such as the EU and India, has presented serious challenges for educators, researchers, and pharmaceutical businesses. Animal models were used extensively in preclinical pharmacology training and early drug development because they offered vital information on toxicity, pharmacokinetics, and efficacy.54 These restrictions have significantly reduced students’ access to live experimental work, making it harder for them to observe physiological and pharmacological effects firsthand. Institutions are now compelled to use computer-based teaching tools and simulation platforms instead of common laboratory animals such as mice, rats, and guinea pigs. The reduction in animal-based toxicological and pharmacodynamic studies has made it more challenging for researchers to produce preclinical evidence of safety and efficacy in drug development.55 This gap has affected the reliability of early projections, leading to greater dependence on alternative models, many of which cannot precisely replicate the complex, integrated physiological responses seen in living animals. Additionally, developers must navigate shifting regulatory frameworks as authorities continue to revise rules to incorporate non-animal procedures, causing uncertainty and making it harder to maintain consistent evaluation methods.56
The curriculum shift toward computational literacy is not just an adaptation to animal bans but a deliberate alignment with the future of precision medicine, which will transition from wet-lab manual skills to virtual laboratories, ensuring the next generation of researchers can validate AI-generated evidence for regulatory compliance.57
Shift toward alternative methods
Since the bans came into effect, the field has made rapid progress toward implementing validated alternative methods intended to replace, reduce, or refine the use of animals (Fig. 4).58 Tissue culture models, in vitro cell-based systems, and an expanding array of AI-supported platforms are now popular choices.59
Computer modeling and simulations: Pharmacokinetics/Pharmacodynamics models, QSAR techniques, and ML–based toxicity prediction tools fall under this area. These methods circumvent the moral dilemmas raised by animal experiments while offering scalable, reproducible evaluations.60
In vitro techniques: Compared to many conventional animal studies, methods including cell culture systems, tissue-based tests, and organoid models provide a closer match to human biology, increasing the applicability of findings for therapeutic translation.61
Organ-on-Chip technologies: Microfluidic devices containing living cells can reproduce key organ functions, providing sophisticated, human-relevant platforms for studying pharmacological responses and toxicological effects.55
Despite their potential, these methods still face challenges in obtaining complete regulatory approval, achieving robust validation, and effectively capturing the complex responses of an entire organism. Even though organizations like the OECD and ISO (International Organization for Standardization) have started to approve a number of non-animal techniques, no single substitute can yet replicate every aspect of conventional animal research, necessitating further development. By enabling rapid virtual screening of multi-target interactions and toxicity profiles in a more efficient and economical way, AI-based simulation tools are propelling advancement.62
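The pharmacokinetic/pharmacodynamic modeling mentioned above can be illustrated with the simplest case, a one-compartment intravenous-bolus model in which plasma concentration decays exponentially. The dose, volume of distribution, and elimination rate below are illustrative values, not drawn from any real drug.

```python
import math

# One-compartment IV-bolus pharmacokinetic model: C(t) = (D/V) * exp(-k*t).
# Dose, volume of distribution, and elimination rate are illustrative.

def concentration(t, dose_mg=100.0, vd_l=50.0, k_el=0.1):
    """Plasma concentration (mg/L) at time t (hours) after an IV bolus."""
    return (dose_mg / vd_l) * math.exp(-k_el * t)

def half_life(k_el=0.1):
    """Elimination half-life (hours) for a first-order process."""
    return math.log(2) / k_el

c0 = concentration(0)   # initial concentration = dose / Vd
t_half = half_life()    # time for the concentration to halve
print(c0, round(t_half, 2))
```

More realistic models add absorption phases, multiple compartments, and inter-individual variability, but regulatory-grade PK/PD simulation platforms build on exactly this kind of compartmental kinetics.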
Limitations and gaps left by the ban on animal experimentation
Regulatory restrictions on the use of animals in academic research and in sectors such as cosmetics have left gaps in current methods for drug discovery and safety assessment. In the past, in vivo systems were crucial for studying intricate pharmacodynamics, toxicological pathways, and whole-body reactions, features that are difficult to fully replicate in isolated or cell-based models.50 The scientific justification for moving from “Cage to Code” is underlined by the high clinical attrition rate of drug molecules that pass animal studies but fail human trials. By leveraging physics-informed neural networks, researchers can now predict toxicity in silico rather than relying solely on species-specific observations in in vivo models.63
Figure 5 depicts some gaps that need to be addressed to meet challenges in toxicology. Without animal studies, researchers relying on isolated or cell-based models face several significant obstacles50:
Incomplete experimental models: It becomes challenging to capture integrated physiological processes, such as organ interactions or long-term harmful consequences, without animal studies or experiments. While current in vitro and early-stage in silico techniques are helpful for researching particular systems, they are unable to fully replicate the biological complexity found in animal studies.64
Educational challenges: Live animal experiments have long been used in graduate and postgraduate pharmacology practical training to illustrate fundamental concepts and drug action processes. Institutions have had to swiftly update their curriculum as a result of restrictions on these activities, frequently before appropriate or fully validated alternatives became available. There are now obvious gaps in students’ practical skill development and hands-on learning due to this shift.60
Regulatory and industry bottlenecks: To comply with licensing and safety review standards, drug developers still rely significantly on data acquired from animals, which makes progress under present bans more difficult. The restrictions hinder progress and put pressure on the development of reliable non-animal techniques that can satisfy regulatory requirements, introducing uncertainty into preclinical workflows.65
Limited high-throughput screening capacity: Unlike contemporary in vitro and computational systems, animal studies are intrinsically labor-intensive, slow, and inappropriate for high-throughput screening. The prohibition has forced the scientific community to develop quicker, more scalable prediction techniques but has also highlighted the lack of substitute systems that can accurately replicate the intricacy of a living organism. In polypharmacology, where comprehending numerous drug-target interactions necessitates comprehensive, system-level expertise, this gap is especially noticeable.66
The rise of AI as a solution
AI has emerged as a key technique to close many of the gaps created by prohibitions on animal experimentation. AI-driven polypharmacology serves as a tool for systemic reconstruction, where deep learning integrates with multi-omics datasets to model the complete effect of drugs. This computational framework compensates for the loss of whole-organism animal data by simulating the biological complexity that in vivo models often fail to translate into human clinical outcomes. A few such applications in modern pharmacology are illustrated in Figure 6.67
AI offers rapid and valuable support for a variety of cutting-edge pharmacological research endeavors, including the following:
Predictive modeling of complex biological systems: Large datasets from genomics, proteomics, metabolomics, and related domains can be combined by AI methodologies, particularly machine-learning and deep-learning techniques, to model how medications interact with various biological targets. This capability supports polypharmacology research without the need for animal testing. These models can recognize complex relationships and non-linear patterns that are frequently missed by traditional empirical or rule-based methods.68
Accelerated drug discovery, safety assessment, and personalized pharmacology: Through virtual screening and early prediction of possible adverse events, AI systems facilitate quicker identification of promising compounds. These algorithms may estimate metabolic pathways and likely side-effect patterns by using published pharmacokinetic and toxicological information, which lessens the expense and ethical burden of conventional animal-based experiments.69,70
Supporting regulatory compliance and validation: Well-validated computer models are now being utilized in various fields to supplement—or in some cases, partially replace—animal data as regulatory bodies gradually recognize the importance of AI-generated evidence. This change can increase the applicability of preclinical findings to human outcomes and expedite approval procedures.71
Educational and research adaptation: New approaches to teaching and studying pharmacological concepts are being developed using AI-based simulation tools and virtual laboratory platforms. By providing interactive, data-driven environments that facilitate deeper investigation of pharmacological processes and experimental principles, these resources help counteract the loss of animal models.72
When combined, these advantages make AI a timely and crucial part of the drug development process. AI helps modern pharmacology align with ethical standards, scientific demands, and broader societal interests by facilitating safer, quicker, and more effective discovery procedures.73
AI tools and simulations in pharmacology
By improving the possibilities for drug development, target investigation, and safety assessment, AI is progressively changing modern pharmacology. The prediction of multi-target therapeutic activities is now supported by an expanding array of AI-based platforms and simulation tools that combine powerful computational models with huge, complex biological datasets.74 These technologies provide fresh insights into polypharmacology, the phenomenon in which a single chemical may simultaneously affect multiple biological processes. To facilitate this change, several cutting-edge digital technologies have been implemented, each making a distinct contribution to fields including drug-target interaction studies, molecular modeling, and the simultaneous optimization of several pharmacological parameters.75 Among the most noteworthy systems are:
DeepChem: An open-source platform that simplifies the application of deep learning techniques in pharmaceutical and chemical research. It contains tools for creating new compounds, running virtual screens, predicting physicochemical and biological features, and representing molecular structures. DeepChem allows researchers to examine multi-target interactions and evaluate possible efficacy or toxicity profiles across several biological pathways by integrating sophisticated neural-network architectures customized for chemical datasets.76
Cheminformatics suites: To handle and evaluate large chemical datasets, a variety of commercial and academic tools, like the Schrödinger suite, MOE, and Open Babel, combine traditional cheminformatics techniques with AI-driven analytics. These platforms help connect structural characteristics with biological activity by supporting tasks like molecular docking, QSAR creation, and chemical space visualization. Many now use machine-learning models that have been trained on large biochemical datasets, allowing for a more thorough investigation of multi-target interactions that are pertinent to polypharmacology.77
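Much of the similarity searching these suites perform reduces to comparing binary fingerprints with the Tanimoto coefficient. A minimal sketch, using hypothetical hand-set fingerprints rather than toolkit-generated ones:

```python
# Tanimoto similarity between two binary fingerprints, the standard
# cheminformatics similarity measure underlying tools like those above.
# Fingerprints here are sets of "on" bit positions; real fingerprints
# would come from a toolkit such as Open Babel or RDKit.

def tanimoto(fp_a: set, fp_b: set) -> float:
    """Intersection over union of the set bits."""
    if not fp_a and not fp_b:
        return 1.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

# Hypothetical fingerprints for three compounds.
aspirin_like = {1, 4, 9, 15, 23, 42}
close_analog = {1, 4, 9, 15, 23, 77}
unrelated    = {2, 8, 31, 56}

print(tanimoto(aspirin_like, close_analog))  # high: shares most bits
print(tanimoto(aspirin_like, unrelated))     # low: no shared bits
```

The same intersection-over-union idea scales to million-compound libraries once the set operations are replaced with bitwise operations on packed fingerprint arrays.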
AlphaFold: AlphaFold, a deep learning tool created by DeepMind, has significantly improved protein structure prediction. Directly from amino acid sequences, it can produce highly accurate three-dimensional models that facilitate structure-guided drug design and provide insightful information on how pharmaceuticals interact with targets. AlphaFold also supports polypharmacology research by offering structural data for a variety of proteins, particularly when examining interrelated or pathway-level pharmacological effects.78
BIOVIA: Dassault Systèmes’ BIOVIA provides a full range of modeling and simulation tools that integrate cheminformatics, molecular dynamics, and AI-based data analysis. Its environment enables multi-scale simulations of biomolecular systems, aiding researchers in analyzing possible off-target interactions and evaluating drug performance under conditions that closely resemble human physiology.79
Pharma.AI: The Pharma.AI platforms use cutting-edge AI techniques to expedite many phases of the drug-development process, from identifying possible targets to optimizing lead compounds and assisting with the preparation of clinical trials. These systems may integrate complex omics datasets with pharmacological responses by utilizing advanced machine-learning and deep-learning models, accelerating the search for medications that act on several biological targets.80
Both ML and deep learning techniques, each with unique advantages, are crucial in AI applications in pharmacology. ML techniques can identify patterns in data without explicit rule-based programming. For tasks including QSAR generation, toxicity prediction, and chemical similarity assessment, well-known ML algorithms like support vector machines, random forests, gradient boosting machines, and k-nearest neighbors are frequently employed. These techniques typically provide clearer interpretability and work well with medium-to-large datasets. ML techniques also facilitate the prediction of multi-target drug activity pertinent to polypharmacology by utilizing molecular descriptors and bioactivity profiles.81 Deep learning, a specialized area of ML, uses multi-layered neural networks to identify intricate, structured patterns in massive datasets. Strong performance has been demonstrated in the analysis of chemical graphs, molecular fingerprints, and protein sequence data using methods including convolutional neural networks (CNNs), recurrent neural networks (RNNs), graph neural networks (GNNs), and transformer architectures. These models are especially useful for detecting the nonlinear, multi-target relationships that underpin polypharmacological behavior because they can automatically extract and refine pertinent features from raw data.82
AI and ML together are transforming carcinogenicity assessments in the pharmaceutical industry. However, time, cost, ethical concerns, and late-stage failures remain major challenges in this assessment.83 Predicting drug–drug interactions is critical for patient safety, and extensive historical interaction data remains essential. EmerGNN, a flow-based GNN, is designed to predict drug–drug interactions that lack historical data. This model identifies biological pathways between compounds to accurately forecast potential risks, thereby overcoming these issues. This approach improves patient safety and precision medicine by providing accurate predictions and biological interpretability for new medications.84
RNNs are well suited to string-based molecular representations such as SMILES. They require large corpora of chemical strings (e.g., from PubChem) to learn chemical grammar.85 CNNs are primarily applied to 3D protein–ligand structures or 2D molecular images and require high-resolution structural data, such as from the Protein Data Bank.86 GNNs treat molecules as mathematical graphs, which preserves spatial and connectivity relationships better than strings.87 However, a major pitfall in these systems is data leakage, which occurs when the training and test sets are structurally very similar. This error can be addressed by scaffold splitting, which ensures that the model can generalize to entirely new chemical classes.88 Another issue is scalability and explainability: processing billions of molecules requires extensive GPU parallelization, and explaining why a molecule may be toxic is equally crucial.89 A comparison of RNNs, GNNs, and CNNs is summarized in Table 2.
Table 2Comparison of RNNs, GNNs, and CNNs
| Method | Best use case | Key benchmark | Primary limitation |
|---|
| RNN | De Novo molecule generation | PubChem | Struggles with long-range 3D spatial relationships |
| CNN | Protein-ligand binding | PDB-bind | Highly sensitive to protein orientation |
| GNN | ADMET prediction | MoleculeNet | Computationally expensive |
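The graph representation that gives GNNs their advantage can be sketched in a few lines: atoms become nodes, bonds become edges, and each message-passing round lets an atom aggregate its neighbors' features. The scalar features and unweighted mean below are illustrative simplifications of what a trained GNN learns.

```python
# A molecule as a graph: atoms are nodes, bonds are edges. One round of
# neighborhood aggregation -- the core message-passing step in a GNN --
# shown here as a simple mean over neighbors. Features are illustrative
# scalars (atomic numbers), not trained embeddings.

# Ethanol (CH3-CH2-OH), heavy atoms only: C0-C1, C1-O2
bonds = [(0, 1), (1, 2)]
features = {0: 6.0, 1: 6.0, 2: 8.0}  # atomic numbers: C, C, O

def neighbors(node, bonds):
    """All atoms bonded to `node`."""
    return [b for a, b in bonds if a == node] + [a for a, b in bonds if b == node]

def aggregate(features, bonds):
    """One message-passing round: each atom averages itself with its neighbors."""
    updated = {}
    for node, feat in features.items():
        nbr_feats = [features[n] for n in neighbors(node, bonds)]
        updated[node] = (feat + sum(nbr_feats)) / (1 + len(nbr_feats))
    return updated

print(aggregate(features, bonds))
```

Real GNNs replace the mean with learned transformations and stack several such rounds, so each atom's representation eventually encodes its wider chemical environment.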
Limitations and future perspectives in AI-driven drug discovery
Data integrity and pitfalls of data leakage
Data leakage is a major problem in ML, leading to overestimated performance metrics. Conventional training and test datasets often share common scaffolds. Future studies should apply scaffold splitting so that the test set contains completely novel molecules and evaluation measures the model’s ability to generalize across chemical space.90 AI models perform well only within the chemical space of their training datasets. Outside it, they face the applicability-domain problem: under out-of-distribution conditions, prediction reliability drops significantly. Emerging solutions include physics-informed neural networks and foundation models, which allow models to learn universal chemical representations that generalize over diverse targets.7
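A scaffold split of the kind recommended above can be sketched as grouping compounds by scaffold and assigning whole groups to a single partition. The scaffold labels here are hypothetical strings; in practice they would be computed (e.g., Bemis–Murcko scaffolds from a cheminformatics toolkit), and the smallest-groups-first assignment is one common heuristic among several.

```python
from collections import defaultdict

# Scaffold-split sketch: compounds sharing a core scaffold must land in
# the same partition, so the test set contains only unseen scaffolds.

def scaffold_split(records, test_fraction=0.25):
    """records: list of (compound_id, scaffold). Whole scaffold groups are
    assigned to the test set, smallest groups first, while they still fit
    within the requested fraction; all remaining groups go to train."""
    groups = defaultdict(list)
    for cid, scaffold in records:
        groups[scaffold].append(cid)
    test, train = [], []
    target = test_fraction * len(records)
    for scaffold in sorted(groups, key=lambda s: len(groups[s])):
        if len(test) + len(groups[scaffold]) <= target:
            test.extend(groups[scaffold])
        else:
            train.extend(groups[scaffold])
    return train, test

data = [("m1", "benzene"), ("m2", "benzene"), ("m3", "benzene"),
        ("m4", "indole"), ("m5", "indole"), ("m6", "pyridine")]
train, test = scaffold_split(data)
print(sorted(test), sorted(train))
```

Because every scaffold ends up entirely in one partition, a model can no longer score well by memorizing near-duplicates of its training chemistry.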
The literature often focuses on active compounds, whereas inactive or failed molecules and experiments go unreported. This creates sampling bias in databases, producing a negative-sample problem and inflated false-positive rates in virtual screening. Implementing active learning enables models to identify the most informative molecules for experimental validation. In polypharmacology and network pharmacology, most datasets are sparse, with little data available for molecules hitting multiple targets. Integrating multi-task learning and knowledge graphs to infer missing links in drug–target–disease networks can help shift AI from simple binary classification to complex systems-biology modeling.91 Finally, lack of interpretability remains a hurdle for chemists; pairing high-performance models with explainability frameworks that highlight specific pharmacophores or atomic groups offers a path to predictions chemists can trust.
The shift to AI-driven multi-target prediction is hindered by significant computational hurdles, mainly multi-label imbalance and data scarcity. In typical bioactivity datasets, molecules are annotated against only a few targets, leaving a sparse matrix in which most drug–target pairs are unmeasured. In this context, multi-task learning and weighted loss functions are used to reduce bias. There is also a considerable lack of validated multi-target benchmarks; databases such as ChEMBL and STITCH are commonly used to construct benchmarks for predicting activity across diverse protein families.92
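The two remedies named above, masking unmeasured pairs and up-weighting rare actives, can be illustrated with a pure-Python weighted cross-entropy. This is a didactic sketch, not any particular framework's loss API: labels use `None` for untested drug–target pairs, and `pos_weight` is a hypothetical tuning knob for class imbalance.

```python
import math

def masked_weighted_bce(predictions, labels, pos_weight=5.0):
    """Binary cross-entropy over a sparse multi-target label matrix.
    labels: 1 = active, 0 = inactive, None = untested (masked out).
    pos_weight up-weights rare actives to counter multi-label imbalance."""
    total, count = 0.0, 0
    for p_row, y_row in zip(predictions, labels):
        for p, y in zip(p_row, y_row):
            if y is None:                      # unmeasured pair: skip it
                continue
            p = min(max(p, 1e-7), 1 - 1e-7)    # clip for numerical safety
            w = pos_weight if y == 1 else 1.0
            total += -w * (y * math.log(p) + (1 - y) * math.log(1 - p))
            count += 1
    return total / max(count, 1)
```

Masking keeps the model from treating every unmeasured pair as inactive, which is exactly the negative-sample bias discussed earlier.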
Drug repurposing, target prediction, polypharmacology modeling
AI-driven drug repurposing
Repurposing existing medicines for new therapeutic indications can greatly shorten development timelines and lower overall costs. AI supports this process by analyzing large and complex biomedical datasets to reveal potential applications that may not be immediately apparent through conventional research methods.93
REMEDi4ALL consortium: This large EU-supported program evaluates a wide range of AI-based computational approaches for identifying new uses for existing drugs in conditions such as COVID-19, pancreatic cancer, and various rare genetic diseases. It demonstrates how AI can integrate multiple data sources to highlight promising repurposing candidates suitable for further laboratory or clinical investigation, thereby broadening potential treatment options.94
Zidovudine repurposing: Although first developed for the treatment of human immunodeficiency virus (HIV/AIDS), computational studies using AI have suggested that zidovudine may also exhibit activity against certain types of cancer. This example shows how mechanism-oriented, AI-guided analyses can reveal therapeutic applications beyond the drug’s original clinical purposes.95
DeepDrug platform: This system uses diverse biomedical graph data and graph neural network models to identify repurposing opportunities for Alzheimer’s disease. DeepDrug’s analysis revealed a five-drug combination that concurrently affects multiple AD-relevant pathways, including neuroinflammation and mitochondrial dysfunction, demonstrating how AI may identify intricate, multi-target treatment approaches that require additional clinical testing.96
AI for target prediction and polypharmacology modeling
Polypharmacology focuses on creating substances that act on several biological targets, a strategy particularly useful for treating complex disorders driven by numerous interrelated pathways. The POLYGON framework illustrates the shift toward generative deep learning, using reinforcement learning to optimize compounds for dual-target activity as a case of inverse design. Similarly, the repurposing of zidovudine can be viewed through the lens of molecular hybridization and multi-target-directed ligand evolution, with its scaffold serving as a starting point for hybrids that target multiple stages of the HIV replication cycle simultaneously.97
AI-driven polypharmacology: Recent research demonstrates how generative algorithms, deep learning techniques, and reinforcement learning are advancing the search for molecules that interact with several targets at once. A number of candidates have shown activity in cell-based tests, and platforms like POLYGON have generated new, synthetically accessible molecules that can act on target pairings, such as EGFR and PI3K.98
POLYGON framework: This platform allows for the design of compounds with multi-target activity by using generative reinforcement-learning algorithms that reward the simultaneous optimization of two goals. POLYGON’s computational predictions can be translated into actual biological effects, as evidenced by the experimental confirmation of several compounds produced using this method, such as dual inhibitors with sub-micromolar potency and detectable inhibition of cancer-cell growth.81
Multi-target ligand design for metabolic syndrome: Dual-acting drugs that interact with both GPCRs and nuclear receptors important in glucose regulation have been produced using AI-driven techniques. Several of these predicted compounds were synthesized and experimentally validated, demonstrating how AI may uncover novel multi-target therapeutic candidates that might not emerge using conventional design techniques.81
Target prediction tools: Using large compound repositories like ChEMBL, methods that combine nearest-neighbor algorithms with machine-learning models can identify likely biological targets for a given molecule. These predictions aid in the improvement and validation of polypharmacology research strategies.99
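The nearest-neighbor target-prediction idea described above reduces to ranking candidate targets by how similar a query molecule is to known ligands of each target. A minimal sketch follows, with fingerprints represented as plain Python sets of feature IDs; production systems would use ECFP bit vectors computed over ChEMBL-scale reference data, and the toy reference list here is purely illustrative.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two fingerprints (sets of feature bits)."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

def predict_targets(query_fp, reference, k=3):
    """Rank candidate targets by similarity-weighted k-nearest-neighbour
    vote over (fingerprint, target) pairs from a bioactivity database."""
    neighbours = sorted(reference,
                        key=lambda r: tanimoto(query_fp, r[0]),
                        reverse=True)[:k]
    scores = {}
    for fp, target in neighbours:
        scores[target] = scores.get(target, 0.0) + tanimoto(query_fp, fp)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

A query molecule resembling known EGFR ligands will rank EGFR at the top of the returned list, which is the hypothesis-generation step that downstream polypharmacology experiments then validate.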
Toxicity prediction, adverse event monitoring, pharmacovigilance
AI-based toxicity prediction has become a significant part of contemporary drug research, enhancing pharmacovigilance and general drug-safety monitoring while supporting initiatives in drug repurposing, polypharmacology, and precision multi-target design. AI models can predict toxicity risks considerably earlier in the development process by analyzing molecular, cellular, transcriptomic, and clinical datasets. This helps improve safety profiles and reduce failure rates.100 Before a compound moves to costly clinical testing, machine-learning algorithms are particularly useful for combining these diverse data sources and identifying potential toxicities. AI-enabled pharmacovigilance systems analyze data from clinical trials, electronic health records, scientific publications, social media content, and spontaneous reporting databases using ML techniques, natural language processing, and large-scale data mining. These systems can identify adverse drug reactions in almost real time, detect emerging safety issues much earlier, and support stronger regulatory oversight by processing these various data streams.101 This goes beyond traditional pharmacovigilance approaches, which primarily rely on delayed voluntary reporting. AI-based systems are now being used by pharmaceutical companies to identify safety signals more quickly, allowing for faster assessment of potential risks and supporting more proactive safety management, which improves patient outcomes.102 AI tools in pharmacovigilance must exhibit strong validation, reduce algorithmic bias, and offer adequate transparency in generating predictions to be approved by regulatory bodies. These systems also need to comply with established regulatory standards, including frameworks like Guidelines on Good Pharmacovigilance Practices (GVP) Module VI of the European Medicines Agency (EMA) and relevant FDA guidelines.103
A comparative statistical analysis was proposed to evaluate the reliability of AI models against established baselines such as Random Forest and traditional QSAR. Performance metrics, namely the Area Under the Receiver Operating Characteristic Curve (AUROC) and the Area Under the Precision–Recall Curve (AUPRC), were calculated using ten-fold cross-validation, and the Wilcoxon signed-rank test was performed to determine whether improvements were statistically significant.104
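AUROC has a convenient Mann–Whitney interpretation: it is the probability that a randomly chosen active is scored above a randomly chosen inactive. The sketch below computes it directly from that definition; in a real evaluation the per-fold AUROCs of two models would then be compared with a statistics package (e.g., `scipy.stats.wilcoxon` for the signed-rank test).

```python
def auroc(scores, labels):
    """AUROC via the Mann-Whitney formulation: the fraction of
    (positive, negative) pairs in which the positive is scored higher,
    counting ties as one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect ranking yields 1.0, a random one about 0.5, and a fully inverted one 0.0, which is why AUROC is robust to the class imbalance typical of virtual-screening data.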
Case studies: Existing uses of AI
AI is becoming increasingly important in contemporary pharmacology, supporting several phases of drug discovery and safety evaluation.79 Various real-world examples demonstrate how AI helps with drug repurposing, target identification, polypharmacology analysis, toxicity forecasting, adverse event detection, and broader pharmacovigilance initiatives.105
Future prospects and impact on drug discovery
Figure 7 illustrates the future of drug discovery with the help of AI, covering areas such as de novo design and multi-omics integration.106
Integration with omics data and systems biology
Researchers believe that combining AI with multi-omics data and systems-biology techniques is a key strategy to advance pharmacology and drug development. Multi-omics datasets, including genomics, transcriptomics, proteomics, metabolomics, and epigenomics, capture the various biological layers that influence disease behavior and drug responses.107 These massive, complex datasets can be handled and interpreted by AI techniques, particularly machine-learning and deep-learning models, which reveal subtle patterns, gene–drug–disease relationships, and underlying molecular mechanisms at a scale not possible with traditional methods. Building and analyzing biological networks that trace relationships among genes, proteins, and observable phenotypes is made easier with AI-assisted network pharmacology.108 These techniques can integrate diverse biological datasets, reduce background noise, and generate more interpretable insights by utilizing tools such as GNNs, Bayesian approaches, and transfer-learning models. Additionally, AI has improved the development of co-expression and regulatory networks from transcriptome data, enabling the identification of disease-related modules and the discovery of novel therapeutic targets. This integrated paradigm goes beyond the traditional one-drug-one-target approach, supporting the study of polypharmacology, where a single drug affects multiple targets and pathways simultaneously.109 A crucial step in understanding how genetic diversity translates into phenotypic traits and drug responses is multi-scale modeling, which incorporates interactions from molecular processes to whole-organism dynamics. Combining systems-biology techniques with AI has made this possible. This broader, integrated view strengthens mechanism-driven drug discovery and supports precision-medicine strategies, including identifying patient subgroups and designing more individualized therapies. 
As the volume of multi-omics data continues to grow, AI-supported integration will remain crucial for understanding complex biological networks and translating these insights into new therapeutic opportunities.110
Prospective breakthroughs: De novo drug design and clinical trial simulations
De novo drug design
AI-driven de novo design has reshaped the earliest phases of drug discovery by allowing researchers to rapidly create and refine entirely new drug-like chemical structures. Earlier approaches relied primarily on rule-based systems or fragment-assembly methods, but newer generative techniques—including variational autoencoders, generative adversarial networks, and reinforcement-learning models—now make it possible to search enormous chemical spaces efficiently. These models can propose novel compounds that meet specific requirements for biological activity, selectivity, and pharmacokinetic behavior.111 Generative AI systems draw on large collections of chemical structures, bioactivity profiles, and known protein–ligand interactions to suggest new molecules capable of engaging one or multiple biological targets. By supporting rapid hit identification, scaffold modification, and lead refinement, these approaches shorten early discovery timelines and reduce development costs compared with conventional high-throughput screening. Importantly, several AI-designed candidates have already advanced into clinical testing, demonstrating the real-world utility of these technologies.112
Future developments are expected to combine AI-based de novo design with high-accuracy protein-structure prediction tools such as AlphaFold, along with strategies that optimize activity across multiple targets simultaneously. This combination is particularly valuable for disorders driven by multiple biological pathways, a central concern in polypharmacology. Incorporating active-learning approaches and iterative simulation feedback will likely improve both the precision and efficiency of generating promising drug candidates.113
Clinical trial simulations
AI-based clinical trial simulations are emerging as a powerful new tool in drug development. By analyzing extensive datasets, including electronic health records, genomic information, and other real-world evidence, AI systems can refine trial designs, improve participant selection and recruitment, and anticipate variations in how different patient groups may respond to treatment.114 AI tools can also identify patient subgroups more likely to respond favorably to a given therapy, reducing outcome variability and increasing the likelihood of trial success. Additionally, AI-generated virtual patient models enable in silico testing of drug responses under a wide range of conditions, lowering risk and supporting more ethical trial planning.115 AI-driven analytical tools enable continuous monitoring of trial data and support adaptive study designs. They can detect emerging safety concerns early and allow timely adjustments to trial protocols or dosing strategies. This approach has the potential to shorten development timelines and reduce costs, helping move promising therapies toward approval more efficiently while safeguarding patient well-being.116 Despite these advances, challenges persist, such as protecting patient data, addressing biases within AI models, and achieving regulatory confidence in simulation-generated results. Ongoing collaboration among AI specialists, pharmacology researchers, and regulatory authorities will be crucial to fully realize the benefits of these technologies.99
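A toy version of the virtual-patient idea is a one-compartment pharmacokinetic model with first-order absorption, C(t) = F·D·ka / (V·(ka − ke)) · (e^(−ke·t) − e^(−ka·t)), evaluated across a cohort with varied parameters. All parameter values below are hypothetical; real trial simulations use population-PK models with many more covariates.

```python
import math

def concentration(t, dose, ka, ke, volume, bioavailability=1.0):
    """Plasma concentration at time t for a one-compartment model with
    first-order absorption (ka) and elimination (ke); requires ka != ke."""
    coeff = bioavailability * dose * ka / (volume * (ka - ke))
    return coeff * (math.exp(-ke * t) - math.exp(-ka * t))

def simulate_cohort(patients, dose, t_grid):
    """Peak concentration per virtual patient, where each patient is a
    dict of PK parameters drawn from a (hypothetical) population."""
    return {name: max(concentration(t, dose, p["ka"], p["ke"], p["V"])
                      for t in t_grid)
            for name, p in patients.items()}
```

Running such a cohort across plausible parameter ranges is how in silico simulation can flag, before any dosing of humans, which patient subgroups risk sub-therapeutic or excessive exposure.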
Personalized medicine and precision pharmacology
Personalized medicine represents a major shift in modern pharmacology, replacing uniform treatment strategies with approaches tailored to an individual’s genetic profile, environment, and lifestyle. AI is central to this transition because it can process and interpret large, complex patient datasets, such as genomic, proteomic, and clinical information, to predict drug responses, adjust dosing, and reduce the risk of adverse reactions.117
Recent studies demonstrate how AI can incorporate pharmacogenomic data to explain how genetic differences influence therapeutic outcomes, supporting the development of patient-specific treatments. For example, machine-learning models have been used to estimate individual responses to antidepressants based on genetic markers, reducing the need for trial-and-error prescribing. Wearable devices paired with AI algorithms can track lifestyle and physiological data in real time, enabling treatment plans to adapt continuously to a patient’s current condition.118 Beyond genetic information, AI can also integrate environmental influences and social factors, providing a more complete and individualized approach to patient care, something traditional methods struggle to achieve at scale. AI-enabled systems support clinicians and pharmacists in choosing appropriate medications, adjusting doses, and anticipating potential adverse reactions, leading to better outcomes and lower healthcare costs. Additionally, AI applications in pharmacoeconomics help guide more cost-effective prescribing by weighing therapeutic benefits against financial considerations.119
Emerging fields: Multi-omics and AI-driven clinical decision support
Multi-omics integration
Multi-omics—bringing together genomics, transcriptomics, proteomics, metabolomics, and epigenomics—provides a detailed molecular view of disease development and patient response to treatment. AI techniques, particularly deep-learning and network-based methods, are well suited to analyze these extensive and diverse datasets, enabling the discovery of new biomarkers and therapeutic targets that may not be apparent through conventional analysis.120 Advanced computational methods allow researchers to examine entire biological networks rather than focusing solely on individual targets, consistent with the principles of systems biology and modern polypharmacology. For example, applying CNNs to spatial transcriptomics and single-cell omics data enables detailed characterization of tissues and cell types, revealing key pathways for potential therapeutic intervention.98 AI-supported multi-omics approaches have led to notable advances in oncology, neurology, cardiovascular medicine, and infectious diseases by predicting mechanisms of drug resistance and identifying new opportunities for drug repurposing. As these datasets continue to be integrated more effectively, they are expected to accelerate early stages of drug discovery and facilitate more precise, targeted therapeutic strategies.121
AI-driven clinical decision support systems (CDSS)
AI-enabled CDSS are transforming pharmacological practice by providing real-time, individualized guidance. These systems analyze information from electronic health records, laboratory findings, genetic data, and pharmacovigilance databases to predict potential drug–drug interactions, refine treatment choices, and identify possible adverse reactions in advance. Machine-learning models within these platforms have demonstrated strong accuracy in forecasting toxicity and side-effect risks, contributing to safer and more effective medication use.122 Newer CDSS platforms incorporate natural language processing, reinforcement learning, and explainable AI methods to provide clearer, more interpretable recommendations, which helps build clinician confidence. Combined with telemedicine and other digital-health tools, these AI-driven systems expand access to personalized pharmacological guidance, allowing patients to receive tailored support remotely and continuously.123 These systems contribute not only to day-to-day therapeutic decision-making but also influence the broader drug-development process. By enabling adaptive trial designs, assisting in patient stratification, and supporting predictive modeling of treatment responses, AI-driven CDSS can help streamline development timelines and facilitate regulatory review and approval.124
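The interaction-screening step such a CDSS performs before any ML scoring can be sketched as a pairwise lookup over a patient's medication list. The two-entry interaction table below is illustrative only and is not clinical guidance; deployed systems query curated pharmacovigilance databases and layer predictive models on top.

```python
# Illustrative interaction table: NOT clinical data.
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "myopathy risk (CYP3A4 inhibition)",
}

def screen_medication_list(medications):
    """Return every known pairwise interaction in a patient's med list."""
    meds = [m.lower() for m in medications]
    alerts = []
    for i in range(len(meds)):
        for j in range(i + 1, len(meds)):
            pair = frozenset({meds[i], meds[j]})
            if pair in INTERACTIONS:
                alerts.append((sorted(pair), INTERACTIONS[pair]))
    return alerts
```

In practice this deterministic layer catches well-documented interactions cheaply, while the ML components of the CDSS handle the harder task of predicting previously unreported ones.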
Despite AI’s transformative potential, critical assessment of its limitations is necessary. AI is vulnerable to biases in training datasets, which can compromise safety for some patient groups. Many advanced deep-learning models also produce predictions that are difficult to interpret and validate. Although AI-driven methods have achieved success, there are instances where predictions have underperformed.125
Limitations and ethical challenges of AI-driven polypharmacology prediction
Data privacy, algorithmic bias, and transparency
AI applications in pharmacology rely extensively on large, diverse datasets that include clinical records, genomic sequences, toxicological profiles, and other sensitive information. This dependence raises significant concerns regarding privacy and data security. Protecting patient information from unauthorized use, breaches, and cyberattacks is essential, particularly as healthcare data have become valuable targets for malicious activity. The challenge is further compounded because effective AI development often requires sharing data across multiple institutions or regions, increasing the likelihood of exposure or intentional data manipulation, such as data poisoning, which can compromise model reliability.126
In addition to privacy concerns, algorithmic bias represents a major ethical challenge. When AI systems are trained on datasets that are incomplete or not representative of the broader population, they may reproduce systematic errors that disproportionately affect minority groups, specific ethnicities, genders, or other underrepresented communities in biomedical research. These biases can compromise fairness in pharmacological predictions and may result in suboptimal or unsafe therapeutic recommendations for vulnerable populations. Furthermore, many deep-learning models operate as “black boxes,” making it difficult for clinicians and regulators to understand how decisions are generated.127 This lack of clarity complicates efforts to ensure accountability, interpretability, and reproducibility in AI-supported pharmacological applications. Addressing these challenges requires a combination of strategies, including robust data-governance practices, broader and more representative datasets, the development of explainable AI approaches, and transparent auditing of algorithms. Adhering to evolving data-protection regulations is also essential. Ensuring the ethical use of AI in pharmacology requires ongoing monitoring so that these systems do not exacerbate existing healthcare disparities or compromise patient confidentiality.128
A fundamental distinction in modern drug design is between on-target (rational) polypharmacology and off-target (unintended) polypharmacology. The field has shifted toward using AI to intentionally design multi-target-directed ligands. While on-target polypharmacology aims for synergistic modulation of multiple disease-driving proteins, essential for complex pathologies such as cancer and neurodegeneration, to enhance efficacy and bypass resistance mechanisms, off-target AI models are increasingly tasked with predicting anti-targets to minimize toxicity.129
Can AI fully replace animal experimentation? Current limitations
Although AI-based in silico systems offer valuable alternatives to traditional in vivo studies, current evidence indicates that they are not yet capable of fully replacing animal experimentation. Animal studies still play an essential role in revealing complex, whole-body pharmacodynamic and toxicological responses, especially in areas such as long-term toxicity, metabolic behavior, and immune system interactions, where existing AI tools lack sufficient predictive reliability.130 AI performs well in tasks such as large-scale screening, predicting molecular interactions, and analyzing targeted polypharmacological effects. However, it remains limited in its ability to reproduce full-organism physiology and highly intricate biological networks. Complex features such as cell–cell communication, organ–organ interactions, and long-term physiological responses present substantial modeling difficulties. Overcoming these challenges will require more advanced computational approaches and the integration of diverse, multimodal biological datasets—areas that are still evolving.106 In addition, most regulatory agencies still require animal-derived evidence to confirm safety, since fully computational methods have not yet achieved the level of validation needed for complete regulatory reliance. At present, AI is regarded as a complementary tool supporting the reduction and refinement of animal use within the principles of the 3Rs rather than a full substitute for in vivo experimentation.46 Achieving a future in which AI can fully replace animal studies will depend on the development of more sophisticated models that integrate AI with emerging technologies such as organoids, microphysiological systems, and digital-twin platforms. These combined systems will also require extensive validation to build confidence within both the scientific community and regulatory agencies.106
Regulatory acceptance and validation of AI-driven results
A major obstacle to the wider use of AI in drug development and polypharmacology research remains regulatory approval. To assess the reliability, security, and effectiveness of AI systems, authorities such as the FDA and EMA are gradually adapting their evaluation frameworks. Developed with input from various stakeholders, the FDA’s draft guidance on AI emphasizes a risk-focused, evidence-based approach to evaluating the reliability and reproducibility of AI models in clinical and drug-development settings.131 Regulatory bodies distinguish clearly between AI technologies used exclusively for discovery or internal research operations and those that could potentially compromise patient safety or the integrity of clinical studies. AI systems must adhere to validation protocols that include consistent performance, well-documented workflows, and interpretable and explainable outputs in order to gain regulatory acceptance. This ensures that results derived from these models are reliable.132 Regulatory agencies’ evaluation and management of AI technologies are influenced by ethical and social factors, such as stringent data privacy regulations, the challenges associated with informed consent, and the demand for accountability in algorithmic decision-making.133
Wider adoption of AI is anticipated as rules continue to evolve, supported by advancements in unified validation standards, cross-border regulatory harmonization, and consistent industry practices that minimize bias and encourage transparency.115,134 Various regulatory bodies require interpretable and justifiable outputs to ensure reproducibility and accountability. Research activities should prioritize AI methodologies that provide locally understandable explanations for predictions, for example, linking a predicted toxicity to specific molecular features. Achieving regulatory confidence depends not just on predictive accuracy but also on trust through transparency.135 Future research should focus on reinforcement-learning frameworks in which the reward function is a multi-objective scoring function that guides the generative model toward the true optimal region of chemical space, rather than merely a local maximum.136
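The multi-objective reward function described above can be sketched as a weighted geometric mean of per-objective desirability scores, so that a candidate scoring zero on any axis (e.g., predicted toxicity) cannot be rescued by high potency. The objective names and weights below are hypothetical placeholders, not any published framework's scoring scheme.

```python
def multi_objective_reward(properties, weights):
    """Weighted geometric mean of per-objective scores in [0, 1].
    A zero on any objective zeroes the whole reward, steering a
    generative model away from sacrificing safety for potency."""
    total_w = sum(weights.values())
    reward = 1.0
    for name, w in weights.items():
        reward *= properties[name] ** (w / total_w)
    return reward
```

Using a geometric rather than arithmetic mean is the design choice that penalizes lopsided candidates: a molecule that is uniformly decent across efficacy, safety, and synthesizability outscores one that excels on efficacy but fails on safety.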
AI-driven approaches offer excellent scalability and speed compared with traditional platforms such as high-throughput screening or high-content screening. Traditional methods are physically limited by library size and high cost, while AI-driven methods can explore astronomical chemical space beyond molecule libraries. However, AI remains a complementary tool, as it requires high-quality experimental feedback loops to refine its predictive accuracy.137
AI in education: Teaching pharmacology using virtual tools
The introduction of AI-based and digital learning tools in academic programs has accelerated due to the shift away from using animals in pharmacology instruction, prompted by ethical concerns and legal constraints. Live animal studies have historically been crucial in demonstrating medication actions and physiological concepts, providing trainees with hands-on experience to supplement theoretical education. Institutions today find it difficult to offer comparable experiential learning because laws in places such as the EU and India restrict these practices. As a result, computer-assisted modules, interactive simulation tools, and AI-supported virtual laboratories have emerged as scalable, humane substitutes that can simulate intricate biological processes in silico or through augmented-reality settings (Fig. 8).138,139 These digital pharmacology tools let students explore drug–receptor interactions, pharmacokinetic behavior, and toxicological consequences across several physiological systems through interactive features and mechanistic models, upholding ethical standards while maintaining scholarly depth. Additionally, AI-driven elements can provide instant feedback and adjust simulation complexity for each student, enhancing comprehension and long-term retention.140 Compared with methods that rely solely on classroom theory, institutions that have incorporated AI-based technologies into their pharmacology curricula report higher levels of student engagement and better learning outcomes. However, some academics point out that virtual platforms cannot completely replace the practical skills acquired through conventional live-animal work, arguing that these technologies are better utilized as supplementary tools.
In India, accrediting organizations like the University Grants Commission and the Medical Council of India encourage the use of AI and other digital substitutes but advise blended learning methods for the most well-rounded and effective teaching.141
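The mechanistic models behind such virtual laboratories are often as simple as the Hill equation, E = Emax·Cⁿ / (EC50ⁿ + Cⁿ), which lets a student sweep drug concentrations and watch the sigmoidal dose-response emerge. A minimal sketch follows; the default parameter values are illustrative teaching defaults, not data for any specific drug.

```python
def hill_response(concentration, emax=100.0, ec50=1.0, hill=1.0):
    """Drug effect from the Hill equation, the mechanistic model behind
    many virtual dose-response experiments: half-maximal effect at
    concentration == ec50, saturating toward emax."""
    cn = concentration ** hill
    return emax * cn / (ec50 ** hill + cn)

def dose_response_curve(concentrations, **params):
    """Tabulate (concentration, effect) pairs for plotting in a virtual lab."""
    return [(c, round(hill_response(c, **params), 2)) for c in concentrations]
```

Changing `ec50` simulates a less potent agonist and changing `hill` simulates cooperativity, which is exactly the kind of parameter exploration that replaces a live tissue-bath demonstration.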
Impact on graduate and postgraduate education
Undergraduate and postgraduate teaching of pharmacology and related biomedical sciences has been profoundly affected by bans on animal experimentation in education. For many years, animal dissections and experiments were used in practical training to help students connect theory with actual biological responses. Since these activities are no longer permitted in many jurisdictions, teachers must use alternative strategies to provide similar learning experiences, such as computer-assisted modules, virtual simulation platforms, and specific in vitro techniques.105 Critics point out that while these substitute approaches introduce students to contemporary digital platforms and uphold animal-welfare ideals, they cannot completely replace the depth of expertise obtained from working with live systems. Some educators warn that completely eliminating animal experiments could hinder students’ comprehension of intricate physiological relationships and the development of critical practical pharmacology skills. In response, organizations like the Medical Council of India and the University Grants Commission have promoted the use of computer-assisted learning to help overcome these constraints while upholding ethical standards.142
Case examples: Countries or institutions successfully using AI-based alternatives
Several nations have led the way in introducing AI-based alternatives to traditional animal testing, indicating advancements in regulations as well as a wider social movement toward ethical research methods. India mandated that academic institutions use computational toxicology tools and in vitro methodologies, including AI-supported platforms, for research and training after the government outlawed animal testing for cosmetics in 2014. Since then, several prestigious Indian universities have strengthened pharmacology instruction and drug-discovery research while adhering to these standards by integrating AI-driven simulation technology into their life-science curricula.143 Significant investment in AI-based research techniques has been spurred by the European Union’s early and extensive prohibitions on animal experimentation, which cover cosmetics and other regulated items. Platforms such as DeepChem, BIOVIA, and AlphaFold are widely used by top European organizations, including the European Molecular Biology Laboratory, and pharmaceutical companies in EU member states for in silico drug design, toxicity assessment, and modeling of multi-target drug actions. The use of these technologies is also being accelerated as regulatory bodies in the EU begin to accept AI-derived evidence in submissions under REACH and equivalent frameworks.144 Brazil’s government has invested in AI technologies and organ-on-a-chip systems to boost both academic and industrial research capability in tandem with the country’s 2023 ban on animal testing for cosmetics and their constituents. To further reduce dependency on animal research and promote pharmaceutical innovation, organizations like the Federal University of Rio de Janeiro are utilizing AI to develop prediction models for medication safety and therapeutic performance.145 New Zealand has promoted the use of AI-based techniques in research and teaching since enacting a prohibition on cosmetic animal testing in 2015. 
The nation has been developing virtual laboratory environments to improve training within academic programs and collaborates with international organizations to validate AI-supported pharmacological models.146 State-sponsored initiatives to develop AI tools and microphysiological systems as alternative strategies have been spurred by targeted restrictions such as California’s 2022 prohibition on toxicological testing involving dogs and cats. AI platforms are currently being used by research organizations in the University of California network to design non-animal testing methodologies and forecast polypharmacological effects. Concurrently, industry partnerships are influencing regulatory discussions on the approval and validation of AI-based techniques.147
Taken together, these case studies show how international cooperation, scientific advancement, educational reform, and legislative action jointly propel the successful adoption of AI-based substitutes, overcoming the obstacles posed by reduced animal experimentation. These initiatives place AI at the center of future drug-discovery frameworks while advancing pharmacology research and education, ethical compliance, and public confidence.124
The central gap left by animal bans is the inability to model complex systemic interactions. Future frameworks must therefore aim to build a "digital organism" by combining the predictive power of AI and multi-omics data with the biological fidelity of organ-on-chip and microphysiological systems.148 The lack of interpretability in deep-learning models remains a critical barrier to regulatory acceptance, so any new guiding framework must prioritize Explainable AI methodologies that generate mechanistic, human-understandable justifications for their predictions.149 Drug design in the near future will also move beyond simple virtual screening toward closed-loop generative systems: reinforcement-learning frameworks in which the reward is a multi-objective scoring function that favors the simultaneous optimization of efficacy, safety, and ethical compliance.150
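The closed-loop design described above hinges on a reward that balances several objectives at once. As a minimal sketch, not drawn from any cited system, the score names, weights, and rejection threshold below are illustrative assumptions only; a multi-objective reward might combine predicted efficacy, safety, and compliance scores while rejecting any candidate that fails a single criterion outright:

```python
from dataclasses import dataclass

@dataclass
class CandidateScores:
    """Hypothetical normalized scores (0-1) for one generated molecule."""
    efficacy: float    # e.g., predicted target engagement
    safety: float      # e.g., 1 minus predicted toxicity risk
    compliance: float  # e.g., fraction of evidence obtainable without animal models

def multi_objective_reward(s: CandidateScores,
                           weights: tuple = (0.5, 0.3, 0.2),
                           floor: float = 0.2) -> float:
    """Weighted-sum reward with a hard floor on every objective.

    Candidates that fall below the floor on any single criterion
    receive zero reward, so the generator cannot trade away safety
    or compliance for raw efficacy.
    """
    if min(s.efficacy, s.safety, s.compliance) < floor:
        return 0.0
    w_e, w_s, w_c = weights
    return w_e * s.efficacy + w_s * s.safety + w_c * s.compliance
```

In an actual reinforcement-learning loop, this scalar would be returned to the policy after each generated candidate is scored by the predictive models; the weights and floor would themselves be subjects of validation rather than fixed constants.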
Conclusions
The shift from traditional animal experimentation to AI-driven drug discovery represents a pivotal evolution in pharmacology, necessitated by both ethical mandates and the scientific complexity of multi-target diseases. While the exclusion of animal models creates significant gaps in systemic modeling, AI-driven polypharmacology provides the necessary computational bridge to restore biological complexity through the integration of multi-omics data and advanced predictive algorithms. The central thesis of this review is that the future of drug discovery does not lie in the total replacement of one methodology with another, but in an integrated Cage-to-Code paradigm. In this framework, AI acts as a central orchestrator, harmonizing in vitro fidelity, in silico scalability, and limited in vivo validation to satisfy the 3R principles without compromising human safety. As regulatory frameworks such as Good Machine Learning Practices evolve to accept validated AI methodologies, the pharmacological community must transition from empirical species-specific testing to human-centric predictive modeling. Ultimately, embracing AI-driven methodologies will foster a more humane, efficient, and personalized therapeutic landscape that respects both human health and animal welfare, positioning AI as the defining driver of 21st-century pharmacological research and education.
Declarations
Funding
None.
Conflict of interest
The authors have no conflicts of interest related to this publication.
Authors’ contributions
Conceptualization (SRB), methodology (PKC), data curation (PKC), formal analysis (PSP), writing—original draft preparation (SRB), and writing—review and editing (YUG). All authors have read and agreed to the published version of the manuscript.