In the previous chapter, we explored potential ways AI could support and strengthen mental health task-sharing programs, along with some real-world examples. However, identifying potential use cases is only the first step. Successful implementation of AI in task-sharing programs depends on whether AI solutions can be developed and integrated in a way that is technically sound, clinically safe, ethically governed, and operationally sustainable.
In this chapter, we outline four sets of considerations task-sharing programs should keep in mind to implement AI responsibly and effectively:
- Technical considerations. These core elements provide practical guidance for ensuring AI solutions are technically sound and user-friendly; they span AI models, data, infrastructure, user interface, and technical talent.
- Quality, safety, trust, and regulatory considerations. These considerations ensure high-quality, safe, and trustworthy AI solutions that comply with regulatory requirements and meet best-in-class standards for responsible AI use.
- Governance considerations. These strategies establish oversight mechanisms and organizational AI policies to guide the responsible development, deployment, and monitoring of AI.
- Sustainability considerations. These considerations provide guidance on planning for financial and operational sustainability by assessing the total cost of ownership and scenarios for partnership.
All four of these considerations are fundamental to integrating and scaling AI in a way that enhances the delivery of evidence-based mental health services in task-sharing programs.
Exhibit 3

Technical considerations
Successfully implementing AI in mental health task-sharing programs requires more than just the right AI model—it requires a robust technical foundation. This section outlines five dimensions that should be considered to develop and adapt technically sound, user-friendly AI solutions: AI models, data, infrastructure, user interface, and technical talent.

Quality, safety, trust, and regulatory considerations
Mental health task-sharing programs looking to integrate AI solutions into their programs should think about how to ensure the effectiveness of interventions (quality), the prevention of harm (safety), the confidence of users and communities (trust), and regulatory compliance. These considerations become even more critical in settings where users may not be familiar with AI solutions and the potential risks that come with them.
In this section, we explore how to ensure high-quality AI solutions, uphold safety through risk mitigation, build trust via ethical and transparent practices, and maintain regulatory compliance.

Governance considerations
The previous section discussed the ethical, legal, and practical considerations that may be associated with AI solutions, given the sensitive nature of mental health data, research, and interventions. We encourage all organizations to establish clear principles for the responsible use of AI: governance of AI solutions (AI governance) mitigates these risks and enhances the trustworthiness of these technologies.
AI governance is a system of checks and balances that oversees an AI solution throughout its life cycle—from initial development to deployment, ongoing use, and continuous monitoring. It includes the policies and processes a program uses to review, assess, and manage AI solutions, ensuring their safe, responsible, and effective development while complying with local regulations. A comprehensive AI governance structure in mental healthcare can preserve patient safety, uphold ethical standards, ensure regulatory compliance, foster trust through transparency and accountability, and manage privacy concerns and other legal issues.
Although there is no one-size-fits-all governance framework, there are principles defined by international and national organizations to guide programs in the ethical development and application of AI solutions.
The World Health Organization (WHO) identified six core principles for “responsible AI” to guide the governance of AI in healthcare, including mental healthcare (Exhibit 4), though the decision of how to tailor the framework ultimately lies with the implementing organization:
- Protect autonomy. Safeguard individuals' rights to make informed decisions, ensuring AI does not undermine personal freedom or autonomy.
- Promote human well-being, human safety, and the public interest. Prioritize the health, safety, and overall welfare of individuals and society when deploying AI solutions in healthcare.
- Ensure transparency, explainability, and intelligibility. Ensure AI systems operate in ways that are understandable, with clear explanations for their decisions and operations.
- Foster responsibility and accountability. Establish clear lines of responsibility for AI's outcomes, ensuring that developers, users, and organizations are accountable for the technology’s impact.
- Ensure inclusiveness. Promote the fair distribution of AI benefits, ensuring that AI applications in healthcare are accessible to and optimized for all populations, especially underserved groups.
- Promote AI that is responsive and sustainable. Encourage the development of AI that can adapt to changing needs, remain effective over time, and be sustainable for long-term use.
Exhibit 4

Establishing and maintaining an effective AI governance system rooted in these core principles requires a structured and practical approach. The following steps outline how a task-sharing program, regardless of its size, could integrate AI governance into its workflows to enable the responsible use of AI (Exhibit 5). These steps draw on recommendations from existing AI frameworks developed by global authorities such as WHO, the World Economic Forum (WEF), and the Coalition for Health AI.
Exhibit 5

Sustainability considerations
Integrating AI into mental health task-sharing programs requires a thoughtful approach to ensure long-term financial and operational sustainability. To sustain an AI initiative, programs should treat AI as an ongoing expense rather than relying on one-time grants, and should budget for its development, deployment, and maintenance.
When adopting AI solutions as part of mental health task-sharing programs, programs should assess the total cost of ownership by distinguishing between capital expenditures (the development and deployment costs) and operational expenditures (the maintenance costs).
Total cost of ownership of AI solutions
Multiple factors, including solution complexity, data requirements, infrastructure and technological resources, and user support could influence the total cost of ownership of AI solutions.
To understand the total cost of ownership and effectively allocate resources to manage it, these factors should be examined across three phases of the AI life cycle:
Development phase
This phase includes designing, training, and testing AI models before integrating them into programs’ workflows. Programs could choose to fine-tune a pretrained open-source model or adapt a pretrained proprietary AI model for their solutions. The cost figures in this section are intended as an informational guide based on public information, not as spending recommendations by the authors of this resource.
Another option would be to build a mental health–specific custom AI model (an LLM) from scratch; however, this is a costly and time-consuming process. To provide a sense of scale, training a model similar to GPT-3 was estimated to cost around $4.6 million with the lowest-priced cloud processor in 2020. While some organizations are trying to build LLMs for psychology, most digital health companies use vendor-provided LLMs.
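To make the scale of such figures concrete, estimates of this kind are typically derived by multiplying the compute a training run consumes by the hourly price of that compute. The sketch below is a back-of-envelope illustration only; the GPU count, run duration, and hourly rate are hypothetical placeholders, not quotes from any provider.

```python
# Back-of-envelope training cost: total GPU-hours x hourly cloud rate.
# All figures below are hypothetical placeholders for illustration only.

gpu_count = 1_000          # GPUs running in parallel (hypothetical)
training_days = 30         # wall-clock duration of the run (hypothetical)
usd_per_gpu_hour = 2.00    # cloud list price per GPU-hour (hypothetical)

gpu_hours = gpu_count * training_days * 24
cost_usd = gpu_hours * usd_per_gpu_hour

print(f"{gpu_hours:,} GPU-hours -> ~${cost_usd:,.0f}")
# 720,000 GPU-hours -> ~$1,440,000
```

Even modest assumptions quickly reach seven figures, which is why from-scratch training is out of reach for most task-sharing programs.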
Since building a custom LLM is not a feasible approach for many task-sharing or similar mental health programs and customization options are limited with proprietary models, we focus on AI solutions that embed pretrained open-source models in the following cost breakdown.
The cost of developing an AI solution with a pretrained open-source model has the following components:
- Cost for data acquisition and preparation. Building a custom AI model, especially an LLM, requires vast amounts of high-quality data. For domain-specific applications, this involves collecting, annotating, storing, and managing large sets of data. As of 2025 (at the time of writing), the cost of creating a high-quality training data set can range from $10,000 to $90,000, depending on the nature of the data and the complexity of the annotation process. These costs will depend on the wage levels at the project site and can vary substantially.
- Cost for model training and testing. Substantial computing resources (such as graphics processing units [GPUs] and tensor processing units [TPUs]) may be required to train the model on your data set and test it on a separate data set (such as anonymized transcripts of counseling sessions). Due to their scalability and flexibility, cloud-based platforms can provide these GPUs and TPUs conveniently and help manage computational needs. Training costs can vary substantially based on model size and complexity (a minimal fine-tuning sketch appears at the end of this subsection).
- Other costs. Beyond data and model development, programs should account for the cost of the talent required to build the AI solution. At this stage, a lean development team includes a product manager, a machine learning engineer, a UX/UI designer, and a software engineer who work closely with clinical, legal, and regulatory experts. See “Chapter 4, Section 1: Technical considerations” for role descriptions of these team members.
Considering the factors above, the cost for the development phase of an AI solution using a pretrained open-source model can range from $35,000 to more than $150,000, depending on the project’s complexity and specific requirements. This ballpark cost assumes the software development cost for the user-facing application in which AI is embedded would range from $15,000 to more than $100,000.
While this section emphasizes open-source models, task-sharing programs could determine which model type—proprietary or open-source—is better suited to their goals and resources in collaboration with their in-house IT team or an external technology partner.
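For programs weighing the fine-tuning route described above, the sketch below illustrates the general shape of the work: loading a pretrained open-source model, attaching lightweight adapters so only a small fraction of weights is trained, and training on a small set of anonymized, annotated transcripts. It is a minimal sketch, not a production recipe; the model name, data file, and hyperparameters are hypothetical placeholders, and a real deployment would add evaluation, safety filtering, and clinical review.

```python
# Minimal LoRA fine-tuning sketch for a pretrained open-source causal LM.
# Model name, data path, and hyperparameters are hypothetical placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "some-open-source-model"  # hypothetical; choose per license/size
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Low-rank adapters keep GPU requirements (and cost) far below full fine-tuning.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         task_type="CAUSAL_LM"))

# JSONL of anonymized, annotated counseling exchanges, one record per line,
# e.g. {"text": "Helper: ...\nSupervisor note: ..."} (hypothetical schema).
data = load_dataset("json", data_files="annotated_sessions.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                     max_length=512))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Adapter-based fine-tuning like this is one reason the development-phase ranges above sit in the tens of thousands of dollars rather than the millions required for from-scratch training.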
Deployment phase
Once an AI model has been adapted for use, the next step is to deploy it within the program’s technical infrastructure, integrate it into operational workflows, and make it accessible to end users through channels such as mobile apps or web-based platforms.
The cost of deploying and using an AI model has the following components:
- Cost of integration. Integrating AI solutions into operational workflows requires developing APIs, modifying existing software, and ensuring seamless data exchange between platforms (a minimal integration sketch follows this list). These costs vary depending on the need for customization and the complexity of integration.
- Cost of quality assurance. After integration is completed, the development team ensures the AI solutions function as intended within the programs’ workflows. This step includes validating the system’s performance for speed, responsiveness, and accuracy and conducting quality assurance to identify and address any technical issues, such as bugs, prior to full-scale deployment (a brief latency check of this kind is sketched at the end of this subsection).
- Cost of training end users. Adopting AI solutions requires programs to train different groups of end users on how to use AI solutions effectively and responsibly. This training may include onboarding sessions, interactive tutorials, and user manuals. Programs should budget to hire staff or technology partners to deliver this training.
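To illustrate what “developing APIs” can mean in practice, the sketch below wraps a deployed model behind a small REST endpoint that a mobile app or web platform could call. It is a minimal sketch assuming a FastAPI stack; the endpoint path, the payload fields, and the generate_reply helper are hypothetical placeholders for whatever model-serving layer a program actually uses.

```python
# Minimal REST wrapper exposing an AI model to client apps.
# Endpoint path, payload fields, and generate_reply() are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class SupportRequest(BaseModel):
    session_id: str   # links the request to a supervised counseling session
    prompt: str       # the counselor-facing query

class SupportResponse(BaseModel):
    reply: str

def generate_reply(prompt: str) -> str:
    """Placeholder for the call into the deployed open-source model."""
    raise NotImplementedError  # wire this to your model-serving layer

@app.post("/v1/support", response_model=SupportResponse)
def support(req: SupportRequest) -> SupportResponse:
    # In production, add authentication, logging, and safety filtering here.
    return SupportResponse(reply=generate_reply(req.prompt))
```

A thin layer like this is what lets the same model serve a mobile app, a web dashboard, or a supervision tool without duplicating integration work.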
Considering the factors above, the cost for the deployment phase of an AI solution using a pretrained open-source model can range from $15,000 to more than $40,000, depending on the project's complexity and specific requirements. This ballpark cost assumes the combined cost of testing and quality assurance, including confirming that the AI solution complies with healthcare regulations, would range from $10,000 to $25,000.
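As one concrete example of pre-launch quality assurance, the short test below checks that an integration endpoint responds correctly and within a latency budget. It is a minimal sketch that assumes the hypothetical /v1/support endpoint above; real quality assurance would also cover accuracy, safety behavior, and failure handling.

```python
# Smoke test: the endpoint answers and stays within a latency budget.
# Assumes the hypothetical /v1/support endpoint sketched above.
import time
import requests

LATENCY_BUDGET_S = 3.0  # illustrative threshold; set per program needs

def test_support_endpoint(base_url: str = "http://localhost:8000") -> None:
    start = time.monotonic()
    resp = requests.post(f"{base_url}/v1/support",
                         json={"session_id": "qa-001", "prompt": "ping"},
                         timeout=10)
    elapsed = time.monotonic() - start
    assert resp.status_code == 200, resp.text
    assert resp.json()["reply"], "empty reply"
    assert elapsed <= LATENCY_BUDGET_S, f"too slow: {elapsed:.1f}s"
```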
Maintenance phase
After deployment, AI solutions require ongoing monitoring and maintenance to ensure that they continue to perform as intended and adapt to the evolving needs of end users and mental health task-sharing programs.
The cost of monitoring and maintaining a pretrained open-source AI model has the following components:
- Cost of cloud hosting. Programs using cloud-based, open-source AI models must budget to host their models on the cloud. Fees vary with the complexity of the model: the larger the model, the higher the cost. Many cloud platforms offer scalable options to manage costs.
- Inference compute costs. Running an AI model in real-world settings requires computational resources to handle user interactions and generate responses to user queries. The cost of processing these interactions depends on factors such as the number of users, the complexity of the query, and how efficiently the model processes the information.
- Cost of performance monitoring. Using monitoring tools to continuously monitor AI solutions is essential to detect and address performance issues such as inaccuracies and model drift (in other words, when the model’s performance gradually declines because of changes in data); a minimal example of such a check follows this list. Often, cloud-based platforms provide monitoring tools and alerts to identify performance issues early, at an additional cost.
- Cost of regular updates and improvements. AI models need periodic updates to incorporate new data, refine algorithms, and improve their accuracy. Associated costs would include fees for cloud computing resources that retrain the tool and fees for human resources (such as a machine learning engineer) to refine models based on changing needs or identified performance issues.
- Cost of user support. Programs should provide ongoing support to users, including training sessions and help desks, to ensure that AI solutions are used effectively and any critical issues are addressed promptly. Programs should budget for at least one person, such as a user support specialist, dedicated to supporting users and escalating issues to the development team.
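As an illustration of the drift monitoring mentioned above, the sketch below compares a recent window of model outputs against a reference window using a two-sample Kolmogorov–Smirnov test on a simple feature (response length). It is a minimal sketch with made-up numbers; production monitoring would track many more signals (safety flags, user ratings, spot-checked accuracy) and typically relies on the cloud platform's built-in tooling.

```python
# Minimal drift check: compare response lengths in a recent window
# against a reference window using a two-sample KS test.
from scipy.stats import ks_2samp

def drift_detected(reference_lengths: list[int],
                   recent_lengths: list[int],
                   alpha: float = 0.01) -> bool:
    """Flag drift when the two distributions differ significantly."""
    stat, p_value = ks_2samp(reference_lengths, recent_lengths)
    return p_value < alpha

# Illustrative usage with made-up data:
reference = [120, 135, 128, 140, 118, 132, 125, 138]  # token counts at launch
recent = [210, 195, 220, 205, 198, 215, 202, 209]     # token counts this week
if drift_detected(reference, recent):
    print("Alert: response-length distribution has shifted; review the model.")
```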
Considering the factors above, the cost of the maintenance phase for an AI solution using a pretrained open-source model can range from $5,000 to $25,000 annually, depending on the project's complexity and specific requirements. This ballpark cost assumes that the program would spend 10 to 30 percent of its initial development cost annually on maintenance.
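Pulling the three phases together, the ballpark ranges quoted in this section can be combined into a simple multi-year total-cost-of-ownership estimate, sketched below. Note that the development and deployment upper bounds are open-ended ("more than"), so the high figure is a floor for complex projects rather than a ceiling.

```python
# Multi-year TCO from this section's ballpark ranges (USD).
DEV = (35_000, 150_000)        # development phase, one-time
DEPLOY = (15_000, 40_000)      # deployment phase, one-time
MAINTAIN = (5_000, 25_000)     # maintenance, per year

def tco(years: int) -> tuple[int, int]:
    low = DEV[0] + DEPLOY[0] + MAINTAIN[0] * years
    high = DEV[1] + DEPLOY[1] + MAINTAIN[1] * years
    return low, high

low, high = tco(years=3)
print(f"3-year TCO: ${low:,} to ${high:,}")
# 3-year TCO: $65,000 to $265,000
```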
Mental health task-sharing programs could choose to fully own the tasks in these phases or fully outsource them to a technology partner. Next, we cover different scenarios for developing and implementing AI solutions in mental health task-sharing programs.
Scenarios to develop and implement AI solutions
Deciding whether to build AI solutions in-house or partner with external technology companies is an important part of the process to adopt AI. This decision will likely vary for each program depending on its goals, use cases, technical and operational capacity, and financial resources.
Mental health task-sharing programs, which often operate with limited technical capacity and financial resources, may benefit from partnering with technology companies across the AI journey. These partnerships could help offset the costs of building in-house capacity, such as the cost of assembling a development team.
The following section outlines common decision points across three phases of the AI life cycle and practical considerations for mental health task-sharing programs.
Development phase
- Build. Programs with access to domain-specific data, a development team with technical expertise, and substantial financial resources may choose to build a custom LLM from scratch. This option offers maximum flexibility and control but carries the highest cost for the development phase, especially for data preparation and model training and testing. This path is generally not feasible for mental health task-sharing programs without a dedicated development team with AI talent.
- Partner. Since most mental health task-sharing programs do not have in-house development teams and have limited financial resources, they could benefit from partnering with a technology company to build their AI use cases. This partnership will likely involve developing an AI solution that embeds a pretrained open-source model, though, as noted above, programs could work with their partner to determine whether a proprietary or open-source model is better suited to their goals and resources.
Deployment phase
- Build. Programs with strong in-house technical experts, including software engineers and product managers, may opt to deploy and integrate AI solutions internally. This path allows for more customization and control over how AI solutions fit within existing workflows but requires careful planning for compliance, cybersecurity, and quality assurance, especially in health-related settings.
- Partner. For programs without in-house technical experts, partnering with a technology company can simplify deployment. External partners can support AI integration into existing platforms, manage technical infrastructure, and ensure usability across devices. This option allows for faster implementation and may reduce the risk of deployment delays.
Maintenance phase
- Build. Programs with in-house development teams may opt to manage the maintenance of deployed AI solutions internally. While this approach offers greater control and flexibility to tailor improvements based on user feedback, it requires sustained investment in technical talent, monitoring systems, and user support.
- Partner. Programs without an in-house development team could partner with a technology company to outsource technical maintenance, including performance monitoring, improvements, and user support. These services are often delivered through a service contract and could offer predictable costs and timely upgrades.
For mental health task-sharing programs with limited financial resources and technical expertise, a hybrid approach may be feasible. A technology partner could build the AI solution that embeds the AI model, while in-house staff could lead product management, implementation in the field, and user support. This approach can work well because it balances technical expertise with the program’s contextual knowledge and allows programs to gradually build internal capacity over time.
In this chapter, we explored the building blocks of using AI in a responsible and sustainable way—from making sure that solutions are technically sound and safe to setting up responsible governance and planning for long-term financial and operational sustainability. Up next, we share a simple readiness assessment to help you take stock of where your program is today and what steps might help you move forward in your AI journey.