Nikolaos Koutsouleris Interview

Published on August 5, 2025

Nikolaos Koutsouleris

Professor of Precision Psychiatry at Ludwig-Maximilians-University and King’s College London (dual affiliation)

How will AI and machine learning improve the diagnosis and treatment of mental health disorders, particularly in early stages? Could you share some of the results you’ve developed and explain how they work? 

Artificial intelligence (AI) has the potential to revolutionize psychiatry by enabling more precise diagnoses and accurate prognostic assessments. Through machine learning, we can detect complex patterns in data—essential for understanding psychiatric disorders, which are inherently multifaceted and do not fit into simple classifications. 

To fully grasp these conditions, we must integrate multiple data domains, including biological, clinical, and environmental factors. AI possesses the capacity to analyze these diverse layers of information, making it a powerful tool for capturing the complexity of mental disorders and distinguishing between conditions with greater accuracy. 

Currently, psychiatric diagnoses rely heavily on clinical evaluations and patient interviews, which are subjective. A paradigm shift would occur if we could leverage more objective data sources to refine diagnostic accuracy. 

Predicting Psychosis Risk with AI 

Today, based on clinical criteria, we can estimate that a patient with a psychosis risk syndrome has a 30% risk of developing psychosis within 3 to 4 years. However, this level of precision is insufficient for individual diagnosis. Identifying individual risk remains a challenge, as clear predictive rules are difficult to establish. 

My research in early recognition of psychotic disorders has shown that machine learning models incorporating imaging and genetic data have achieved up to 86% accuracy in predicting psychosis onset. 
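
To give a schematic sense of how such a model is built and evaluated, the sketch below fuses imaging-derived and genetic features and estimates accuracy by cross-validation. The data, feature counts, and classifier are illustrative assumptions, not the published model.

```python
# Minimal sketch: a multimodal classifier for psychosis-transition prediction.
# Features and data are synthetic placeholders, not the model from the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
imaging = rng.normal(size=(n, 50))   # stand-in for regional grey-matter measures
genetic = rng.normal(size=(n, 20))   # stand-in for polygenic risk components
X = np.hstack([imaging, genetic])    # simple early fusion of both data domains
y = rng.integers(0, 2, size=n)       # 1 = later transition to psychosis (synthetic)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
# Cross-validated balanced accuracy estimates how well the multimodal
# signature would generalize to unseen patients.
scores = cross_val_score(model, X, y, cv=5, scoring="balanced_accuracy")
print(f"balanced accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```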

These conditions often emerge in adolescence and early adulthood (peak age window: 18-22), though they can appear earlier. By integrating AI at an early stage, patients can benefit from a more precise diagnosis, shortening the time to adequate treatment, which at this stage is typically gentler than the treatment required later in the course of the disease.

AI and Treatment Response in Schizophrenia 

AI is also transforming treatment response predictions. For example, in transcranial magnetic stimulation (TMS) for schizophrenia, we have demonstrated that imaging, clinical, and genetic data can be used to predict treatment outcomes. 
By optimizing the sequence of examinations, we can maximize predictive accuracy while minimizing the number of tests needed, reducing both costs and the burden on patients.
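
The principle can be sketched as a greedy, cost-aware ordering of examinations: at each step, add the test whose gain in cross-validated accuracy per unit of cost is largest, and stop once further testing adds little. The tests, costs, threshold, and data below are invented for illustration and are not the procedure used in the study.

```python
# Sketch of cost-aware sequencing of examinations: greedily add the test whose
# cross-validated accuracy gain per unit of cost is largest, then stop when
# extra testing buys almost nothing. Tests, costs and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
clinical = rng.normal(size=(n, 10))
mri = rng.normal(size=(n, 40))
genetics = rng.normal(size=(n, 20))
# Outcome loosely driven by the clinical block, so at least one test is useful.
y = (clinical[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)

tests = {  # candidate examination -> (feature block, relative cost)
    "clinical interview": (clinical, 1.0),
    "MRI": (mri, 5.0),
    "genetics": (genetics, 3.0),
}

def cv_accuracy(blocks):
    X = np.hstack(blocks)
    return cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()

chosen, blocks, best = [], [], 0.5   # 0.5 ~ chance-level baseline
while len(chosen) < len(tests):
    candidates = [(cv_accuracy(blocks + [blk]) - best, name, blk, cost)
                  for name, (blk, cost) in tests.items() if name not in chosen]
    gain, name, blk, cost = max(candidates, key=lambda c: c[0] / c[3])
    if gain <= 0.01:                 # stop when extra testing adds little
        break
    chosen.append(name)
    blocks.append(blk)
    best += gain
print("selected sequence:", chosen, "| cv accuracy:", round(best, 2))
```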

Functional Outcome Prediction in First-Episode Psychosis 

Another groundbreaking example is the use of clinical data collected in routine care to predict functional outcomes in first-episode psychosis patients. 
With just one month of follow-up, our model achieved 73% accuracy. At one year post-enrolment, it maintained 71% accuracy—marking the first time a model demonstrated such predictive capability. 

These advances highlight how AI-driven approaches can refine psychiatric diagnosis, improve treatment planning, and ultimately lead to more personalized psychiatry and effective care for patients. 

What are the main challenges in integrating AI and machine learning models into clinical practice, particularly in the field of psychiatry? 

One of the main challenges is bias in data. When we collect patient data, it is often influenced by socio-economic background, ancestry, ethnicity, and gender, all of which interact with a patient’s phenotype. This can introduce biases into AI models, making them less reliable when applied to diverse populations. As a result, predictions may be inaccurate or non-reproducible on a larger scale. 

A striking example of this issue is the way AI models misidentify conditions in Black patients or recommend incorrect medication dosages, either too high or too low, compared to what would be clinically appropriate. If not addressed, such biases could reinforce or aggravate structural healthcare disparities rather than mitigating them. 
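
One pragmatic way to surface this kind of bias is to audit a model's error rates separately for each patient subgroup before it is deployed. The sketch below uses a generic classifier, a hypothetical demographic variable, and synthetic data; it illustrates the audit itself, not a real clinical model.

```python
# Sketch of a subgroup fairness audit: compare sensitivity and specificity of
# a trained classifier across demographic groups. Model, groups and data are
# placeholders, not a real clinical tool.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 1000
X = rng.normal(size=(n, 12))
group = rng.choice(["A", "B"], size=n)          # e.g. two hypothetical ancestry groups
# Outcome partly depends on group membership, which the model never sees,
# so its error rates end up differing between groups.
y = (X[:, 0] + (group == "B") * 0.8 + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

for g in ["A", "B"]:
    mask = g_te == g
    tn, fp, fn, tp = confusion_matrix(y_te[mask], clf.predict(X_te[mask])).ravel()
    print(f"group {g}: sensitivity {tp / (tp + fn):.2f}, "
          f"specificity {tn / (tn + fp):.2f}")
```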

Another major challenge lies in the lack of transparency in machine learning models. AI often operates as a black box, meaning that neither clinicians nor patients can fully understand how a model arrives at a specific prediction. This lack of explainability is particularly problematic in psychiatry, where clinicians need clear, actionable insights to make informed decisions. 

For instance, if AI predicts a high risk of psychosis in a patient, the clinician must be able to determine what is driving that prediction in order to choose the appropriate treatment.
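
Post-hoc explanation methods can at least indicate which inputs a model leans on when it flags a patient as high risk. A minimal sketch using permutation importance, with hypothetical feature names and synthetic data, might look like this:

```python
# Sketch of post-hoc explanation with permutation importance: which inputs
# most influence the model's risk prediction? Features and data are
# illustrative placeholders, not a validated clinical tool.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 500
feature_names = ["negative_symptoms", "verbal_memory", "grey_matter_pc1",
                 "polygenic_risk", "cannabis_use"]
X = rng.normal(size=(n, len(feature_names)))
y = (1.2 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Permutation importance: how much does shuffling each feature degrade
# held-out accuracy? Larger drops mean the prediction relies on that feature.
result = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
for name, imp in sorted(zip(feature_names, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name:>18}: {imp:.3f}")
```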

Transparency is essential, and regulatory bodies in Europe have already recognized explainability as a crucial criterion for AI in healthcare. 

A third challenge is the reluctance of some clinicians to adopt AI-driven tools. Many healthcare professionals express concerns about technology encroaching on their clinical judgment. This raises an important question: What happens when an AI model’s prediction contradicts a clinician’s assessment? 

To navigate this tension between AI and human intelligence, it is essential to integrate AI as a supportive tool rather than a replacement for medical expertise. Clinicians should be trained to understand and interpret AI-generated insights, ensuring they remain at the centre of clinical decision-making. 

Ultimately, the success of AI in psychiatry depends on addressing bias, transparency, and clinician engagement, ensuring that these technologies enhance rather than disrupt psychiatric care. 

Could you elaborate on how your tools, which analyse key biomarkers, predict outcomes for depression and psychoses, and assess decision-making in psychoses, could transform future treatment approaches or preventive strategies? 

The impact of AI in psychiatry depends on the model or algorithm used. Broadly speaking, AI models learn rules from data to classify disorders or predict conditions, such as different types of depression. But beyond diagnosis, we need to use these models to develop new treatments and inform clinical decision-making. 

Once an AI model has learned to identify patterns, it can map individuals within a decision space and measure their distance from key classification boundaries. This allows for probabilistic assessments, such as: “This patient exhibits subtype A at 80% and subtype B at 20%.” It also highlights the biological and neurological variables implicated in these classifications.
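
In code, this amounts to reading out a new patient's class probabilities and signed distance to the learned decision boundary; the subtypes, features, and data below are hypothetical placeholders.

```python
# Sketch of reading a patient's position in a learned decision space:
# class probabilities ("subtype A at 80%, subtype B at 20%") and the signed
# distance to the decision boundary. Subtypes and features are hypothetical.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # 0 = subtype A, 1 = subtype B

clf = SVC(kernel="linear", probability=True, random_state=0).fit(X, y)

new_patient = rng.normal(size=(1, 8))
proba = clf.predict_proba(new_patient)[0]
margin = clf.decision_function(new_patient)[0]   # distance from the boundary
print(f"subtype A: {proba[0]:.0%}, subtype B: {proba[1]:.0%}, margin: {margin:+.2f}")

# With a linear model, the weight vector itself shows which variables push a
# patient toward one subtype or the other.
weights = clf.coef_[0]
print("strongest contributors (feature indices):", np.argsort(np.abs(weights))[::-1][:3])
```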

With patients’ consent, these model signatures could then be analysed mechanistically by drug developers, identifying the genes, gene expression patterns, neural systems, and psychological mechanisms that contribute to specific subtypes of psychiatric disorders. This deeper understanding of disease mechanisms can lead to the creation of targeted treatments.

Currently, we have only a broad understanding of how psychiatric conditions develop. AI and machine learning allow us to uncover the underlying mechanisms with far greater precision, opening the door to personalized psychiatry and treatments that directly target disease pathways.  

With the rise of AI tools advancing precision psychiatry, what do you see as the potential benefits, especially in terms of improving access to knowledge and treatments, and the potential risks, particularly ethical ones, associated with relying on machine learning models for clinical analysis and decision-making?

One of the greatest benefits AI could bring to mental health care is reducing the time to treatment. This is arguably the most significant breakthrough AI could achieve in psychiatry. 

Currently, it can take years for individuals—especially those with conditions like psychosis—to find the right healthcare provider. Many patients undergo a long process of “doctor shopping”, during which their condition worsens and changes in the brain accumulate. AI has the potential to streamline this process, helping patients access the right care faster and preventing the progression of the disease. 

Even after diagnosis, treatment is often a trial-and-error process that can take months or even years. AI-driven models could accelerate the identification of the most effective treatment for each patient, while significantly reducing patient suffering. 

However, AI in mental health care also presents risks and ethical challenges. If AI models are trained on non-representative datasets, they could perpetuate inequities and lead to harmful or unethical decisions. Additionally, AI should not become a tool reserved for privileged populations, as this could widen existing disparities in access to mental health care. Finally, AI should support rather than replace psychiatrists. There is a concern that some clinicians may over-rely on AI. 

AI offers new opportunities for underserved populations. One in three people will experience a mental health condition in their lifetime, yet there will never be enough psychiatrists to meet the demand. AI could act as a first-line digital companion, offering guidance and support, especially in regions with limited access to mental health professionals.

Imagine a world where a digital psychiatrist is available at your fingertips, accessible from a smartphone. This could fundamentally reshape mental health care, making support available to those who might otherwise receive none.  

What is needed to support this new field of research?

For AI to reach its full potential in mental health care, several key challenges must be addressed—starting with clinician education. Medical students should be exposed to AI-based precision medicine approaches early in their training, ensuring that future professionals understand how to interpret and apply AI-driven insights in psychiatry and other fields of medicine. 

Another pressing need is greater standardization in precision psychiatry. The methodologies used in AI research must be transparent to minimize biases and improve reproducibility. 

For AI to be truly effective, datasets should be freely accessible, allowing researchers to collaborate and compete in developing better models. This requires a shift towards open, standardized, and curated data that accurately represents the entire population, reducing the risk of biased or non-generalizable algorithms. 

Therefore, countries like Germany and France need to accelerate the digitalization of their healthcare systems. Secure, anonymized patient data—collected during medical visits and stored with patient consent—could be used for AI-driven research across all data domains, including medical imaging and blood analysis, to build predictive models.

Significant funding is required to speed up AI model development and validation. Governments and funding organizations must invest in data-sharing infrastructures and trusted research environments where standardized, high-quality datasets can be analysed using state-of-the-art machine learning techniques. 

By building a transparent, inclusive, and well-regulated AI ecosystem, we can ensure that psychiatric AI models serve the diverse populations they are meant to help while maintaining trust, accuracy, and fairness in mental health care. 
