Dr. Amy Nitza discusses her role in leading UAlbany’s involvement in the Global Center for AI in Mental Health, a collaborative research enterprise established in 2023 with partner institutions SUNY Downstate Health Sciences University and the U.N. Health Innovation Exchange. The center works to develop interdisciplinary AI-based tools to expand access to mental health diagnosis and treatment among underserved communities worldwide.
The Global Center for AI in Mental Health
Dr. Amy Nitza, Director, Global Center for AI in Mental Health, University at Albany
SUNY Downstate Health Sciences University
Dr. Salvador Dura-Bernal, Director, Global Center for AI in Mental Health, SUNY Downstate Health Sciences University
New York State First Responder Mental Health Needs Assessment
Dr. M. Abdullah Canbaz, Assistant Professor, Department of Information Sciences and Technology, College of Emergency Preparedness, Homeland Security and Cybersecurity
From the UAlbany News Center:
Amy Nitza Joins UAlbany to Direct the Global Center for AI in Mental Health, May 6, 2025
UAlbany, Downstate Researchers Demo Prototypes for AI-Supported Mental Health Care, Sept 30, 2025
The Engagement Ring, Episode 35: Expanding Access to Mental Health Diagnosis and Treatment Through AI
[Lively, upbeat theme music plays as program host Mary Hunt introduces the episode and plays excerpts from the conversation.]
ANNOUNCER/MARY HUNT:
Welcome to The Engagement Ring, your connection to an ever-widening network of higher education professionals, scholars and community partners working to make the world a better place. I'm Mary Hunt. Today on the podcast…
AMY NITZA:
We’re seeking to improve the, you know, quality of care, but also the efficiency of care so that we can see more people and meet the massive gap we have right now between, you know, the space we have and the number of clinicians we have and the need.
ANNOUNCER/MARY HUNT:
I’ll talk with Dr. Amy Nitza, director of the Global Center for AI in Mental Health at the University at Albany. The center develops interdisciplinary AI-based tools to expand access to mental health diagnosis and treatment among underserved communities worldwide.
AMY NITZA:
What we know hands down is that the best predictor of outcome in terms of therapy processes facilitating change is that it’s the quality of the relationship between the therapist and the client that predicts outcome better than anything else. And so by training I’m a clinician — by identity and at heart — and so I’m really invested in the idea of developing and using tools that can make that process better.
ANNOUNCER/MARY HUNT:
Here’s my conversation with Dr. Amy Nitza….
MARY HUNT:
Amy, welcome to the podcast. Thanks so much. Very nice to have you. I'm really looking forward to learning about the Global Center for AI in Mental Health. First, why don't we start with its mission?
[Music fades out]
AMY NITZA:
Sure. So, what we're trying to do at the Global Center is to develop and deploy technology to really close gaps in access to high quality mental health care. Pretty straightforward in that way.
MARY HUNT:
Well, having a clear and focused mission is always important… not having one is probably a mistake a lot of people make, but it's good to be really clear about what you're setting out to do. How was the center created? When was it created?
AMY NITZA:
Yeah, so the Center was founded, I believe, in 2023, and it's a collaboration and a partnership between the University at Albany, SUNY Downstate Health Sciences University and HIEx, the Health Innovation Exchange, which is a U.N. spin-off organization. The three groups together bring the AI expertise, the medical expertise and the U.N. global-mission expertise together to allow us to do what we're trying to do.
MARY HUNT:
You're a clinical psychologist.
AMY NITZA:
I'm a counseling psychologist.
MARY HUNT:
Counseling psychologist, excuse me. So obviously your work in the past has been, and continues to be, interacting with individuals and communities. When I think of diagnosing mental health issues, treating them, evaluating them, I think of it as a very personal, one-on-one activity requiring that kind of approach. How can a tool like AI improve that approach?
AMY NITZA:
Absolutely. I'm glad you asked that question, because what we know hands down is that the best predictor of outcome, in terms of therapy processes facilitating change, is the quality of the relationship between the therapist and the client; it predicts outcome better than anything else. And so I want to be really clear in naming up front that what we're trying to do at the Center is to make that process better, not to replace it. As you just noted, I'm a clinician by training. I'm a clinician by identity and at heart. And so I'm really invested in the idea of developing and using tools that can make that process better.

One way I think this is really important and powerful: there's a lot of evidence-based treatment literature. We know what works to treat a lot of different mental health conditions, and we also know how it works, to the extent that in any particular session, if a client offers a specific kind of statement, in some models we have information about specifically what kind of intervention the therapist needs to use next in order to continue to facilitate positive growth and positive change. And we know some things about what people should not say next. But all of that really good process-outcome research is not necessarily trickling down to frontline clinicians. So one of the things I'm working on, in collaboration with Downstate and our other partners here on campus, is developing tools that help clinicians implement evidence-based models. The tools are trained to understand a therapy model and then, for example, to be able to listen to a conversation and provide guidance to the therapist about things that maybe they missed, things they could do better, places where maybe the client said something important and the therapist heard something different. So it can provide guideposts, signposts of maybe try this next, or maybe go back here. And I think those can be used in session.

I also think they have really powerful implications for training. I'll date myself by telling this story, but when I was trained, all of the sessions we did in training were recorded on VHS tape, and then I had to transcribe the entire session minute by minute, line by line, and go back and analyze that session and comment on what I did, why I did it and where I should have done something different. So imagine now a tool being able to capture that session, provide the transcript and provide commentary, and then allow the student therapist to comment on the commentary, making that process so much more efficient and probably more effective. The other thing I wanted to say is that I was trained with two-way mirrors, where there was a supervisor behind the mirror watching my session, and there were phones. If I screwed up, or did something wrong, or missed an important moment in the session, the supervisor would call me from behind the mirror. We'd have to stop the session; I'd get up, answer the phone, hear the feedback, and then have to very quickly reorient myself to getting back into the session and figure out how to implement the feedback I had just gotten.
Imagine an AI tool that could do that same thing very subtly, via an iPad sitting next to the student clinician or something like that, popping up and saying, hey, you're pushing this intervention and the client seems super anxious, maybe you need to slow back down a little bit. Something like that, I think, can really improve training. And ultimately, we're seeking to improve the quality of care, but also the efficiency of care, so that we can see more people and meet the massive gap we have right now between the space we have and the number of clinicians we have, and the need.
MARY HUNT:
That's interesting, because before you mentioned that, a similar question went through my head. For someone who's just in training, or someone who's new to the field, I can see how this would be, what's the word, very positively embraced. They would say, this is just a tool that I'm going to use to enhance my learning. I'm wondering about older clinicians, people who may have a lot of experience under their belt. Are they as open to embracing it? Have you found some who feel a little reticent about using it, because they either aren't comfortable with the technology or they feel, I've got enough experience, I don't need this, what I know is better than what this AI can tell me? How do you overcome that?
AMY NITZA:
Oh, sure, yeah, we're exploring that. There's a lot of hesitation, and I understand the hesitation. I think the fear is of AI overriding the judgment of the clinician. So what we're trying to do is address that by creating tools that support the judgment of the clinician, or give the clinician more information to make their own judgment. We're not trying to replace anyone, and this may not be a tool for everyone, but we're creating options, and I want to re-emphasize that as training tools, I think this is just the way the field is going to go. I understand the hesitation, and I sort of embrace the hesitation, and that's why I think it's important for me, in my role, to think like a clinician; I've also had 25 years of training other clinicians. It's a tool. We're not saving the world this way. It's not a one-size-fits-all solution, but it's a tool, or a set of tools, that I do think can make things better. There's a lot going on in any session, even if it's individual therapy. But now also think about group therapy, which is something that I'm particularly passionate about. The amount of information floating around in a session is often more than a clinician can fully absorb. So again, the AI can be a tool to help pick up some information that, as humans, we might miss in that process.
MARY HUNT:
I think too when you show someone who is hesitant to use something or to learn something, when they start to see the value and experience it for themselves, sometimes they become the biggest champions for that. So that's…
AMY NITZA:
Absolutely, absolutely
MARY HUNT:
That’s interesting. You know, we think of healthcare shortages, nursing and physician shortages. So I take it mental health clinicians are in short supply as well?
AMY NITZA:
There's a really big gap right now between the number of people who need help and the availability of spots in therapists’ offices to be able to provide that help.
MARY HUNT:
You've talked before, in our conversations, about the three pillars that support the Global Center. Can you explain what those three pillars are?
AMY NITZA:
Yeah, sure. So one of them is that we're really focused on disaster and crisis response tools. Through the development and deployment of some tools, we want to improve access to mental health support in humanitarian settings. I probably don't even have to name the need at this point, but given the number of humanitarian emergencies around the world and the number of people who need mental health support in those settings, even on the best day in any given humanitarian setting there is not enough access to trained mental health professionals to be able to provide that care. So that's one, and that's one we're really focused on here, together with Downstate, and it's a nice collaboration with the College of Emergency Preparedness, Homeland Security and Cybersecurity and with the folks in Public Health and Social Welfare. I think there are some really strong opportunities to build some important collaborative projects there. The second pillar is more driven by my colleagues at Downstate. My colleague Dr. Salvador Dura-Bernal, who's the director of the SUNY Downstate branch, is a computational neuroscientist, so he's bringing a whole different lens to this work. He's working on some really innovative brain digital twin projects for developing personalized mental health care. So that's the second pillar of what we're doing. And then the third pillar is safe and ethical use of AI. We're not just out there pushing AI for AI's sake. Just because we can do something doesn't mean we should. So we're really focused on the should of the whole thing: what's useful, what's not useful, what are the cautions, what are the guardrails we need to keep in place?
MARY HUNT:
Take that first pillar, providing services for mental health needs in the wake of disasters. Typically, what do you see? What kinds of experiences or needs do people in a disaster situation have for mental health services?
AMY NITZA:
Yeah, you know, the overwhelming nature of a disaster, whether it's a human-caused disaster or a quote, unquote natural disaster, although the lines between those two things are increasingly blurred, is a disorienting, overwhelming, shocking experience for most people. For those of us who are fortunate enough to live in a place that most of the time is a safe, stable, predictable place, a big disaster like that pulls the rug out from underneath your whole worldview, your sense of safety in the world, your sense of efficacy. And by that I mean people's belief that they can make things happen, that they're in charge of what happens during their day.

And so we know what people need; there's a lot of good science around what people need in the immediate aftermath of a disaster. They need a sense of safety, and I'm not just talking about physical safety but the perception of safety; the lack of the perception of safety lingers long for a lot of people after they actually are physically safe again. They need a sense of calm, a restoration of internal physiological regulation, getting back to their baseline physiological level of functioning, which means an internal sense of calm but also the return of some sense of calm in the environment. Safety, calm. The next one is connectedness. We know more than anything else that what people need in the aftermath of a disaster is a sense of connection and social support; that really predicts how well they do. Then efficacy, a return of their sense of self-efficacy and their sense of community efficacy. And the last one is hope. Disasters leave behind a sense of helplessness and a sense of hopelessness, so restoring efficacy and restoring hope are really the last two of those principles. That's the work of Stephen Hobfoll and his colleagues (2007), just for the reference, but that's really the foundational work: whatever we're trying to do in disaster mental health, we're working toward promoting those five things.
MARY HUNT:
Your center is global, so the communities that you're dealing with, I take it, are here at home in the U.S. and abroad. In developing these tools, I would imagine you're taking into account different cultural values and perspectives. How do you do that?
AMY NITZA:
Yeah, we're actively working on that. So ask me back again next year, and maybe we'll have more of a conversation about that. But you're exactly right. At the level of our biology, at the level of our physiology, all humans have a broadly similar reaction to big, large-scale traumatic events. But in terms of how we make sense of what's happening to us, how we receive help and what help feels helpful, all of that is very culturally bound. So one of the challenges we're going to face in training AI tools is having enough data from each cultural context in which we're trying to work to be able to create the language model to use for the tools. That's a particular challenge that we're actively trying to figure out.
MARY HUNT:
Very interesting. So I hope you will come back and talk about that; I can see it would have great implications, really positive outcomes for folks. You've also talked in the past about the SUNY PFA assistant. What is the SUNY PFA assistant? I take it it's a piece of technology. How does it work? And how can it be adapted for different cultures, languages and situations?
AMY NITZA:
So psychological first aid is the early intervention of choice, recommended by U.N. organizations, the American Red Cross and the International Federation of the Red Cross to support people in disaster settings. And psychological first aid is about creating those conditions that I just talked about: safety, calm, connectedness, efficacy and hope.
MARY HUNT:
So, first aid… PFA.
AMY NITZA:
Yep, PFA is psychological first aid. It is not a clinical intervention, so it does not have to be delivered by a trained mental health professional. We can train lay people, community health workers, paraprofessional support people in the community, whatever you want to call them, to offer psychological first aid. But that means training somebody in a day or two and then sending them out into these really difficult, traumatic situations. It's a lot to ask somebody to do that work with a day or two of training and some role-playing with their peers in a classroom. So with Dr. Abdullah Canbaz and his lab here at Albany, we're developing a tool to help people better implement psychological first aid out in the field, and it will have different options. When we're done, it will have a coach for people trying to provide psychological first aid. It will also have a set of self-care tools for the person providing the aid. I can tell you from personal experience, it is a lot to be out there in the field for 12 hours. Sometimes you forget to sit down or forget to take a drink of water, let alone check in with yourself and how you're doing. So it's about mitigating the impact of the secondary traumatic stress that people experience when they do the work.
And then when we're finally done, we also hope it will be deployable to survivors themselves from a provider, almost like having a prescription. If the provider thinks you could benefit from some of the electronic tools, coping resources and information resources, we can deploy those from the provider to the survivor's phone, let's say, so that they have access to some support once the provider has moved on. Often, psychological first aid is delivered in a situation where, if I'm doing that work and it's me and 100 survivors, I may only get 10 minutes with any one individual. So can we use this tool so that, after I've talked to a survivor for a few minutes, I can share with them some of the tools we've talked about, in terms of coping strategies or action plans for things they want to do, and then they have it on their phone? Again, it's not replacing the human; it's extending the reach of the human after the human has moved on. That's what we're trying to do.
MARY HUNT:
And you talked about the third pillar having to do with ethics and regulation. This is obviously mental health; people's lives, people's most personal concerns, cares and vulnerabilities are at play here. So this is critical.
AMY NITZA:
Absolutely.
MARY HUNT:
As important as any part of the work you do. Can you talk a little bit more about how you're addressing or approaching that, and what considerations might go into drawing up ethical guidelines, approaches and practices that you'd like to see people employ?
AMY NITZA:
Yeah, I think that the field of what we can do, as with a lot of things, is moving much faster than the regulation around those things. So I think it is our center's responsibility to slow down and be thinking about that as well. In the tools that we're designing and developing ourselves, we're paying careful attention to guardrails. Some of them are limited so that the tool cannot just go out to the universe of the internet to find answers; it's limited to what it's been trained on, and it's only being trained on professionally developed materials to reduce hallucinations. The tools are also being trained to know their limits. If a client asks a question that suggests suicidal ideation, or raises any sort of red flag, the tool knows immediately to offer a connection to a human; it won't just answer the question or support the person's thinking if their thinking is of concern. So in our own work, we're developing these guardrails that are built into the system. But there's also some really important work being done out in the field. A couple of weeks ago I was in a presentation on some research done by the RAND Corporation about how different current LLMs, ChatGPT, Gemini and so on, answer suicide-related questions. I think it's important for our center to also support and elevate that work, make sure it gets out there in the field, and develop sets of recommendations for parents and educators around this work. So we're not there yet; I've only been in this role a few months, but that's certainly something that's at the forefront of our thinking.
MARY HUNT:
I understand you participated in a hackathon recently that was related to AI and mental health. Can you talk about that? What was the experience like? What did you learn or observe?
AMY NITZA:
We did that in collaboration with a student entrepreneurship center. They wanted to do a hackathon, and their own work suggested that AI and mental health were two issues that were really important to students. So we created three scenarios based on actual, real-life problems that we have experienced, including lack of access to care here in the United States and lack of access to care in an emergency setting; I used Haiti as the context for that example. We posed those challenges to students, and in groups of five or six they had an hour, or maybe two hours, I don't remember, to come up with and present a solution to us. And I've got to tell you, it was one of the best things I've done in my time here. I left there feeling like the world is in good hands.
MARY HUNT:
Oh, that’s good.
AMY NITZA:
The students did a really great job of coming up with innovative ways to leverage technology to solve real problems. The team that won had a really creative, robust, thoughtful set of solutions to a problem I posed around lack of access to mental health care here in the U.S. And another team did a really good job proposing a set of tools for deployment in Haiti, which is a very different context with very different resources. I've invited those teams to continue the conversation with me about how we might actually make some of that happen.
MARY HUNT:
Excellent. Oh, they must have felt great too.
AMY NITZA:
And they won some money. So it was all good.
MARY HUNT:
Well, you can't do these projects without resources.
AMY NITZA:
Exactly.
MARY HUNT:
Sometimes you have great ideas, but you need the resources and the people who will step up and help you make them possible. They might not have the expertise themselves, but they're a very important part of the whole formula. Tell me about the First Responder Mental Health Needs Assessment.
AMY NITZA:
Sure.
MARY HUNT:
Is that something that was conducted in New York State?
AMY NITZA:
Uh huh. Yeah, that was my colleagues and I at the Institute for Disaster Mental Health at SUNY New Paltz, in collaboration with the New York State Division of Homeland Security and Emergency Services. We know that first responders are struggling; that's not a surprise. But we wanted to document the need, and document what responders themselves saw as potential solutions. And we had the biggest response: we were hoping for maybe 1,000 respondents, and we got over 8,000. These are law enforcement, fire, emergency medical services, emergency management and 911 dispatch. And, no surprise, responders are struggling. Stigma is a big issue: their belief that their peers are going to judge them, and that there will be career impacts. So there's a lot of pain. The rate of suicidal ideation reported in our survey was four times what we know to be the baseline in New York State, which is a pretty powerful statement of the extent of the problem. But what I was really heartened by was their openness to getting help, if we can figure out the right kind of help. So if I were to summarize the results of that survey, I would say there's a lot of need for help and a lot of openness to help, but there are barriers to getting help. I think it's on us to figure out how to address and reduce those barriers so that we can get help to the people who need it.
MARY HUNT:
Yeah, it's funny. You know, you think of first responders, and they're strong, they're capable. They have to be solid when they arrive and deal with the situation. Maybe some of them perceived it as weakness to admit that they were experiencing some stress from this, but probably one of the most courageous things you could do is come forth and admit that you have issues too.
AMY NITZA:
Absolutely.
MARY HUNT:
So that's something big we have to overcome, I guess, as a society. What drew you to the field? Were you always interested in counseling and mental health? Was it a situation, an instance, an experience at some point that…
AMY NITZA:
Oh, that’s an interesting question. On a personal level, when I was in college, my younger brother and his girlfriend had a baby. They were 17 or 18, and there became a legal battle between the two of them over the custody of this child. This was a two-, three-year process, and I watched it play out, and I watched the impact of it on my nephew. And I thought, who's making decisions about this? And are the decisions really based on the mental health, the actual developmental needs, of this child? That had a really profound impact on my career choice. I also worked my way through school working in a psych hospital, and really just fell in love with the work. So there's never been any doubt in my mind that I'm in the right field. And unfortunately, given the need, we're never going to work ourselves out of a job in this field. That's another reason I think AI and the center are so important: the need is just massive, and we can't keep up. So can we use technology to close that gap a little bit? That's what we're trying to do.
MARY HUNT:
You've worked in a lot of different settings in terms of your counseling work. I understand you've worked for the federal government, you've worked here in education, and you've worked, I take it, in the field in communities around the world. How different are those experiences for you? How challenging? Is there one that's more challenging than another? Are they similar? Are they different? What skills do you bring to one that you don't to another?
AMY NITZA:
Oh gosh.
MARY HUNT:
Just kind of curious. Or do you approach… do you put the same kind of lens on, the same kind of hat on when you go out?
AMY NITZA:
You know, what I feel has been a privilege of my work is to be able to take the best available science around psychological resilience and well-being, around addressing mental health challenges and the problems of cumulative stress as well as exposure to traumatic events, and apply it in a bunch of different contexts. That's really what drives me; that's what I'm most passionate about. So whether it's working with curators at the Smithsonian who deal with hard content and have to figure out how to tell hard stories, or children in Haiti exposed to ongoing community violence who don't have access to mental health providers, or training master's-level clinicians, which is what I've spent a lot of my career doing, the principles are the same. The challenge, and what I find endlessly rewarding and fascinating, is taking what we know to be true about human behavior and human needs around mental health and well-being and applying it, whether it's with Smithsonian curators or children in Haiti. Do we have to adapt it in all kinds of ways? Of course. But at its core, it's the same. That's what I've been focusing my career on.
MARY HUNT:
What do you hope the center can achieve through the development of these technologies, in terms of furthering the field and our understanding of how to work more effectively with people and improve their lives? What is your ultimate hope for this pursuit of new technologies?
AMY NITZA:
Ah, I think I have two answers to that. One is that there are so many places in the world where people do not have access to highly, and expensively, trained mental health professionals, but we know a lot about interventions that can be delivered effectively by non-specialists. So I think AI can be used as a tool for training non-specialists around the globe. If there were one thing I could achieve with the center, it would be for the Global Center to be a hub for access to training resources for non-specialists anywhere in the world, whether that's peer support professionals in New York State or non-specialists in Haiti (I keep referencing Haiti; I've done a lot of work there) trying to support children who don't have access to the support they need, or anywhere else in the world. I think we can really move the needle on an already existing practice in the global mental health field that we call task-sharing, or task-shifting: offloading some of the work of delivering interventions to non-specialists and figuring out how to do that well. I think we can train people more effectively and more efficiently using some AI-based tools. So in terms of intervention and training, that's really my most important goal. The other piece is safe and ethical use: making sure that we are a voice, and not just a voice of advocacy but also of research, in the development of guidelines and recommendations, to make sure that, again, just because we can doesn't mean we should. And so I'm really focused on the should and the how, and not just the can.
MARY HUNT:
And these are huge issues and will take time and a lot of work to tackle. But how will you know if you're having an impact? Are you looking for any indicators that will tell you, yeah, we're moving the needle?
AMY NITZA:
That is a fantastic question. Right now, when I tell people that I took this job, people mostly say, oh my gosh, AI, mental health, eek. Because what do you see in the news? You see the bad stories, the really tragic things that have happened when AI has gone off the rails. So I hope we can change that conversation in a way where people are more confident in understanding what AI can do and what its limits are, so that in popular conversation people have a better grasp of what this thing is and how it can be used effectively. In terms of the training and the goals I have around task-sharing, I think we'll know if we can expand the numbers of non-specialists trained in different countries. I would love to have a set of fellows from around the globe who have been trained by us and are then able to go out and adapt these tools in their own field, in their own setting. Again, ask me in a year, and I'll probably have a better set of metrics.
[Music fades in]
MARY HUNT:
I have no doubt, Amy, that you are moving the needle and that you will continue to do that. So, thank you so much for being my guest.
AMY NITZA:
Oh, what an interesting set of questions you had. Thank you so much.
ANNOUNCER/MARY HUNT:
Dr. Amy Nitza is the director of the Global Center for AI in Mental Health at the University at Albany. Trained as a counseling psychologist, she specializes in the development and dissemination of best practices in global mental health.
As a Fulbright Scholar at the University of Botswana, Dr. Nitza trained mental health and school counselors in the use of group interventions in HIV/AIDS prevention. She also collaborated with UNICEF USA to develop and implement a program of mental health support for children and teachers impacted by the recent disasters in Puerto Rico.
She has directed numerous grant-funded projects including from the New York State Office of Mental Health, Office of Victim Services and Division of Homeland Security and Emergency Services. She is a co-author of the recent First Responder Mental Health Needs Assessment, a survey of the mental health challenges and needs of responders across New York State.
To learn more about Dr. Nitza and the Global Center for AI in Mental Health, visit the resource page for this podcast at the dash engagement dash ring dot simplecast dot com.
The Engagement Ring is produced by the University at Albany's Office for Public Engagement. If you have questions or comments or want to share an idea for an upcoming podcast, email us at UAlbany O P E at Albany dot E D U.
[Music fades out]