In recent years, artificial intelligence (AI) has become a tool that can significantly support the work not only of commercial companies but also of public offices, universities and other public institutions. As Marek Jeleśniański notes in the article “Artificial Intelligence to Aid Institutions”, well-implemented AI tools can “change the way many institutions operate” and create real “new opportunities” for administration, schools and universities and other public organisations.
It is precisely these new possibilities – opening the way to streamlining procedures, relieving employees and improving the quality of public services – that mean institutions today must look at AI not as a curiosity but as a genuine element of their future work. If AI can change the way institutions operate, then it is only natural to consider how to introduce that change wisely and responsibly.
Among the available options are both ready-made commercial solutions (SaaS/API) and models hosted within an organisation’s own infrastructure (I encourage you to explore the research results and AI model rankings available on this website). The right choice requires taking into account not only potential benefits such as reducing employees’ workload, increasing efficiency or shortening service times but also challenges related to data protection, regulatory compliance, costs and institutional responsibility.
This article does not point to a single universal solution because every public institution operates in a different environment and must adapt technology to its own needs. Instead, we outline the process that each institution will need to go through in its own way. We also show what should be taken into account on the path to implementing AI within organisational structures.

Artificial intelligence is transforming the face of public institutions, bringing new challenges in terms of compliance and security.
Criteria for Selecting LLMs in Public Institutions
Selecting an appropriate large language model should be based on clear and measurable criteria. Public bodies operate in an environment that is particularly sensitive to issues of security and legal compliance and their activities must be transparent. The criteria below propose a decision-making process that may form the basis for an institutional audit regarding the implementation of AI-based solutions. We briefly discuss each of the proposed criteria for selecting an LLM in the sections below and at the end of each description we formulate several questions that may help decision-makers relate them to their own organisation.
Security and Data Protection
Security is perhaps the most important criterion for public institutions. The choice of model must guarantee full compliance with data protection regulations and minimise the risk of data leaks. At the same time many outputs produced by public institutions are by their nature public documents (see: Artificial Intelligence to Aid Institutions). This means they do not constitute confidential information and therefore may be processed by large language models.

Choosing the right LLM is a key step in integrating AI in the public sector. Solutions are available from different geographical regions and with varying degrees of openness. It is quite likely that in institutions where strong emphasis is placed on data confidentiality this choice will be subject to scrutiny.
At the beginning of an AI-focused audit the institution must ask itself the following questions:
- What kind of data will the model process (including sensitive, classified and public data)?
- Can those data – and which data exactly – leave the institution’s infrastructure?
- In the case of sensitive data, does the solution provide encryption, retention controls and the ability to conduct an audit?
Regulatory Compliance
The AI solution used within a public institution must make it possible to fulfil legal obligations such as document archiving, transparency of operations and the ability to provide full documentation and logs for the purposes of inspection or audit. Models deployed in the public sector should be transparent and traceable so that the institution is able to explain how the tool works and justify the decisions taken.
At the same time, even when working with public data, AI models may generate errors (“hallucinations”), so the institution must ensure mechanisms are in place for verification, oversight and auditing of outputs. Depending on how critical the process is in which AI is intended to support employees, these requirements may determine whether the institution chooses a model hosted by an external operator or a solution that provides full control over data, logs and interaction history.
It will also be important to prepare an appropriate knowledge base and to train employees so that they can use the tool in an informed way and are able to apply prompting techniques that improve the quality of the responses obtained (we also invite you to explore the offer of AI training courses).
It is worth considering several questions at this point:
- Does the planned solution meet all security and legal requirements?
- Does the selected model make it possible to archive work outputs and carry out audits easily?
- Does the provider ensure all necessary certifications and transparency?
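The archiving and audit requirements above can be sketched as a thin logging layer around whatever model client the institution ultimately chooses. This is only an illustration: the client interface, the log file path and the record fields are assumptions, and a production deployment would write to protected, append-only storage rather than a local file.

```python
import json
import hashlib
from datetime import datetime, timezone

AUDIT_LOG = "audit_log.jsonl"  # illustrative path; use protected, append-only storage in practice

def audited_completion(model_client, prompt: str, user_id: str) -> str:
    """Call the model and append a tamper-evident record for later audit.

    `model_client` is any callable that takes a prompt and returns text;
    this is a placeholder for the institution's actual client library.
    """
    response = model_client(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "prompt": prompt,
        "response": response,
    }
    # A hash of the serialised record lets auditors detect later modification.
    record["sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return response
```

Every interaction then produces a verifiable record that can be handed over during an inspection, regardless of which model or provider sits behind the wrapper.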
Total Cost of Ownership (TCO)
The cost of implementation is not limited to purchasing a licence. In the case of hosting a model on an organisation’s own infrastructure, it is also necessary to take into account investment in hardware and networking, energy and data transfer costs, the team’s expertise and all costs related to maintaining and developing the model (you can find more on this in the section below).
When analysing their own situation, organisations should seek answers to the following questions:
- What are the initial costs and the long-term costs of the solution under consideration?
- Is the organisation able to estimate and forecast cost growth, for example if the intensity of model usage increases?
- Are there options that could help optimise costs in the future, for example through changes to the contract terms or a different composition of services?
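As a rough illustration of how such estimates can be compared, the sketch below contrasts the cumulative cost of a subscription service with self-hosting, including cyclical hardware replacement. All figures, growth rates and replacement cycles are placeholder assumptions to be swapped for the institution's own numbers.

```python
def saas_tco(years: int, monthly_fee: float, usage_growth: float = 0.10) -> float:
    """Cumulative SaaS cost; the fee is assumed to grow with usage each year."""
    total, fee = 0.0, monthly_fee
    for _ in range(years):
        total += fee * 12
        fee *= 1 + usage_growth
    return total

def self_hosted_tco(years: int, hardware: float, hardware_life: int = 4,
                    annual_ops: float = 0.0) -> float:
    """Cumulative self-hosting cost: hardware replaced cyclically plus
    yearly operations (energy, staff, maintenance)."""
    replacements = -(-years // hardware_life)  # ceiling division
    return hardware * replacements + annual_ops * years

# Illustrative figures only (currency units are arbitrary):
for y in (1, 3, 5):
    print(y, round(saas_tco(y, 2_000)), round(self_hosted_tco(y, 120_000, annual_ops=30_000)))
```

Even a toy model like this makes the crossover visible: SaaS dominates early, while self-hosting amortises hardware over the years – which is exactly the forecasting exercise the questions above call for.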

The cost of owning a car is not limited to the price we paid for it but also includes insurance, repairs, fuel, parking and other expenses. The same applies to the implementation of IT solutions.
Technical Resources and Competences
An institution must assess what competences its team has and what technical capacity it can rely on. Not every implementation will require specialists in artificial intelligence, but when an institution plans to invest, for example, in its own hosting, it will need a team capable of deploying the model, adapting it to the institution’s needs (fine-tuning), maintaining it and supervising its operation. These requirements concern both infrastructure (servers, security measures and management tools) and the team’s competences – IT support, knowledge of systems integration, working with data and creating end solutions through APIs.
If the institution does not have such resources or if building them would be too costly, it may consider selecting ready-made commercial solutions (SaaS/API), naturally on the basis of the previously discussed criteria concerning data security and regulatory compliance. Commercial solutions shift responsibility for technical operation to the provider and allow the institution to begin testing AI within its structures more quickly, but at the cost of reduced independence.
Key questions:
- What technical competences does the institution’s IT team possess?
- Does the institution have the infrastructure necessary to host the model securely?
- What are the requirements for integration with the systems currently used within its management structures?
- Are there areas that the institution can process safely, without breaching data security rules, and where handing work over to AI would have a real impact on the work of officials?
Operational Risk
In the context of public institutions, not every process can be handled by artificial intelligence to the same extent. This applies in particular to areas where the risk of errors carries serious consequences, for example in administrative decisions or in communication directed to citizens. Language models are not free from errors and even with very carefully designed prompts they may generate incorrect or imprecise responses. For this reason, before implementing AI in specific administrative processes it is necessary to analyse the risks associated with potential errors. In some processes AI should serve only a supporting function while in others it may fully automate the execution of tasks.
In this area it is worth finding answers to the following questions:
- Which tasks can AI automate safely and which should be handled under human supervision or should not be automated at all?
- Should artificial intelligence be considered for automating a given task or are there other, more stable solutions?
- Do LLM responses need to be repeatable and verifiable?
- Will the model support critical processes (for example handling administrative decisions)?
Organisational Readiness
Implementing AI in an office or public institution is above all a process of organisational change. Without proper preparation in areas such as governance structures, support from decision-makers, staff competences and readiness to integrate AI into everyday processes, the project may fail. The institution must have appropriate procedures, training and rules for the use of AI in order to ensure responsible and safe use of models (at jelesnianski.pl you can find an offer of AI online courses).
Questions for assessing team readiness:
- Do employees have the competences needed to use AI models?
- Are there people or teams within the structure responsible for AI strategy, oversight, auditing and monitoring effectiveness?
- Are there policies or internal regulations defining the rules for working with LLMs?
- Is the institution ready to manage changes to processes and procedures as well as the risks associated with those changes and possible internal resistance (for example among employees)?

Data Quality and Readiness (Data Governance)
Data are the foundation of every AI model because if they are incomplete, dispersed or difficult to locate, even the best LLM will not deliver value. For this reason an institution should ensure that its data are organised, up to date and accessible in a secure manner. Importantly, in the future AI tools based on parameterised law may also prove useful, allowing for the automatic interpretation of regulations, the drafting of documents or support for compliance processes. Their effectiveness will likewise depend on the data available to the institution.
In light of these challenges the following questions should be addressed:
- Is the institution able to identify the data sources the model will use and are those data up to date?
- Are there procedures defining who can view, modify or use specific data?
- Does the institution have control over how data are transferred and protected and can it trace who accessed them and when?
The criteria presented above allow an institution to clearly determine what it truly needs and what requirements must be met for an AI implementation to be safe, legally compliant and operationally valuable. Understanding the nature of the data, the risks, the costs, the competences involved and the organisation’s readiness forms the foundation for further decisions. With these elements in order the institution can move consciously towards preparing a pilot implementation within the organisation.
SaaS/API or Self-hosting? A Practical Comparison for Public Institutions
Choosing how to implement an AI model is of major importance for public institutions because it determines data security, costs, responsibility and the degree of control over the technology. The three options most commonly considered are:
- using a ready-made service (SaaS/API)
- running the model on the institution’s own infrastructure (so-called on-premise hosting or a private cloud)
- a hybrid approach combining elements of both options mentioned above
All of these approaches have advantages but each involves different obligations and risks.
In this subsection we present a brief overview of all three options.

SaaS/API enables a rapid start but self-hosting offers full control over data. Both options may involve significant costs, though the emphasis will be distributed differently.
SaaS/API – a Rapid Start and Low Entry Costs
In the SaaS/API model, the institution uses a ready-made tool provided by an external supplier. This is the simplest way to begin working with AI.
Key advantages:
- Rapid implementation – the institution can begin using the tool almost immediately, without purchasing hardware.
- No need to maintain infrastructure – all updates and security measures are handled by the service provider.
- No major capital investment – fees are predictable, charged either for computing power used or as a monthly subscription.
- Ability to test – the institution can test AI capabilities within its structures without major investment.
Limitations and risks:
- Data leave the institution – data sent in prompts to the model leave the institution and are directed to external servers.
- The institution is dependent on the supplier – changes in pricing and service availability may affect the operation of the entire institution.
- Very limited scope for customising the model – there is usually no possibility of adapting the model to the organisation’s needs, as the model works in the way it was prepared by the supplier.
Self-hosting – Full Control and Greater Responsibility
Self-hosting AI means that the model runs entirely on infrastructure belonging to the institution – on its own servers or in a private cloud purchased for the organisation’s needs (external servers placed at the organisation’s exclusive disposal).
Key advantages:
- Control over data security – data do not leave the organisation’s own structures or servers, which makes it possible to meet heightened data protection requirements.
- Easier auditing and oversight of the model’s operation.
- Adaptation to procedures and systems – the model can use the institution’s databases and the IT systems already in use.
- The institution is independent of the supplier – the institution manages the model itself and decides on its updates and modifications.
Limitations and risks:
- High investment costs – in the case of on-premise hosting, the cost includes the purchase of servers and GPU cards or NPU chips (these cards are expensive and due to high demand access to the latest units is often limited), the building or adaptation of a server room and the creation or expansion of infrastructure such as connections, air conditioning and security measures.
- The need to have or hire specialists – maintaining the model requires appropriate expertise.
- Full operational responsibility – updates, backups, security oversight and so on are the institution’s responsibility.
- Longer implementation time – in the case of on-premise hosting, carrying out the investment usually takes from several weeks to as much as several months and in the case of public institutions this period may be extended further due to procurement procedures. What is more, time is needed to train and adapt models to the institution’s needs (fine-tuning).
- Technology depreciation costs – hardware used to host AI models becomes outdated quickly and must be replaced on a cyclical basis, typically every three to five years.
Hybrid Approach
There is, however, a strategy that combines the two solutions above. We will refer to it as the hybrid approach. Here, the institution processes part of its tasks and part of its data using external SaaS/API services. This may include, for example, non-sensitive data and services whose outputs are published anyway or where the risk is low. The other part of the data, namely sensitive data, is processed by the organisation in its own environment – in this approach most often in a cloud environment purchased by the organisation.
Key advantages:
- High flexibility – the institution adjusts the method of data processing depending on the type of data being processed.
- Flexible cost management – decisions can be made according to need as to how much funding is directed to maintaining in-house solutions and how much to commercial subscriptions.
- Usually higher security – sensitive processes operate in the organisation’s controlled environment.
- Faster implementation of new features – SaaS/API makes it possible to test new developments without having to update in-house servers.
- Development of the team’s competences – the institution can begin with SaaS implementation and over time develop its own IT resources.
Limitations and risks:
- Greater management complexity – there is a need to control separate environments.
- A need for clear data classification procedures – the organisation must clearly and precisely determine which data are processed externally and which within its own structures.
- Systems integration may be more difficult – combining both models may require additional technical work and oversight.
- Training and staff awareness are essential – employees must know the data processing procedures thoroughly and understand which processes may be handled in commercial solutions and which in self-hosting.
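The data-classification requirement at the heart of the hybrid approach can be sketched as a simple routing rule: public material may go to an external service, everything else stays in-house. The sensitivity labels and both endpoint addresses below are illustrative assumptions, not real services, and a real policy would be defined by the institution’s own data classification procedures.

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = "public"        # e.g. published resolutions, announcements
    INTERNAL = "internal"    # working documents, low risk
    SENSITIVE = "sensitive"  # personal or medical data

# Hypothetical endpoints: a commercial SaaS API and an in-house deployment.
EXTERNAL_API = "https://api.example-saas.test/v1/complete"
INTERNAL_API = "http://llm.intranet.local/v1/complete"

def route(sensitivity: Sensitivity) -> str:
    """Sensitive and internal data never leave the institution's environment."""
    if sensitivity is Sensitivity.PUBLIC:
        return EXTERNAL_API
    return INTERNAL_API
```

Encoding the rule in one place, rather than leaving the choice to individual employees, is precisely what makes the clear classification procedures listed above enforceable.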

Implementation Recommendations for Public Institutions
Local government units (municipalities, cities, counties)
Recommendation: a hybrid model with greater emphasis on commercial solutions (SaaS/API)
Explanation: Local government units process a large number of public documents, for example resolutions, announcements and public procurement materials. For this reason they can very often use commercial solutions. However, for tasks involving the processing of sensitive data it will be necessary to use self-hosting, for example in a cloud environment dedicated exclusively to the organisation.
Universities and large educational institutions
Recommendation: a hybrid model with greater emphasis on self-hosting
Explanation: Universities often have extensive IT infrastructure and qualified technical staff, which may make it easier for them to host language models. These may be particularly useful in research work, data analysis or in building knowledge bases, at least for part of such activities. SaaS/API services can in turn support administrative work, marketing and the creation of teaching tools and materials. Advanced commercial AI models available via API may also be necessary for more sophisticated analyses.
Central administration and high-responsibility authorities (ministries, regional government offices, regulatory bodies)
Recommendation: self-hosting
Explanation: Institutions working with data of strategic importance must ensure their security. Implemented solutions must meet the strictest legal and audit requirements – here data protection is the priority. Such requirements can in practice be met only by on-premise hosting or a private cloud with full control over data and logs. AI tools available in the SaaS model or through APIs may still be considered for administrative and educational work.
Healthcare institutions (hospitals, medical institutes, public clinics)
Recommendation: self-hosting
Explanation: Medical data are subject to particularly strict protection. Institutions processing them must meet rigorous legal requirements and breaches may lead to severe penalties. AI systems are therefore expected to ensure full control over data processing and storage. For this reason institutions processing medical data should focus on implementing self-hosting. At the same time AI models available via API or through subscriptions may be considered for selected administrative tasks.
Schools and institutions responsible for culture and public resources (libraries, archives, museums)
Recommendation: commercial solutions (SaaS/API)
Explanation: Many processes and materials processed by these institutions are public in nature. Commercial tools can be implemented quickly and without major costs and the integration of such solutions is relatively straightforward. Naturally any processes involving the processing of personal data must be controlled and their processing by AI will probably require the implementation of a hybrid model, for example involving the purchase of a dedicated cloud environment.
Implementing AI in Public Institutions – Step by Step
Implementing artificial intelligence in a public institution is a process that must be secure, legally compliant and above all cost-effective. Choosing the newest or most popular model that everyone is talking about is not necessarily the best option. When making decisions, institutions should ask themselves: will this solution genuinely make work easier and improve service delivery?
The answer is not always straightforward, particularly when an institution has not previously worked with language models and employees do not yet have the necessary knowledge or skills. For this reason a common approach in such situations is to start with a pilot project – implementing AI in a limited and controlled area, observing the results and only then making a final decision.
A second related question is: will it actually be cost-effective? Here again it is usually difficult to provide an answer without conducting an assessment within a specific organisation, in concrete processes and in cooperation with a particular team.

Planning the implementation of AI requires a careful analysis of an organisation’s needs and capabilities. At Oxido we help institutions make optimal choices and guide them through the implementation process.
Below I discuss an example scenario of actions that may serve as a starting point for an institution planning its own AI implementation – initially as a pilot. Every institution has its own specific characteristics, so the process below, together with all its stages – although based on proven practices – should be flexibly adapted to the organisation’s particular needs and context.
Stage 1: Analysis of needs and requirements in the context of risks and compliance requirements
At this stage we clearly define which processes will be supported or fully carried out by AI. In a pilot study this will be a precisely defined scope of services or processes, taking into account the types of data being processed.
We analyse:
- what type of data will be processed (public or classified)
- what legal obligations are associated with these processes (regulatory compliance)
- what level of responsibility the tasks require (operational risk)
- what risks are associated with AI errors and the institution’s responsibility (operational risk)
By the end of this stage we have a prepared list of processes that may be safely handled by AI.
As part of the analysis the institution should also assess its technical resources and whether it has a team capable of carrying out the implementation process. The combination of these factors allows the institution to make an initial determination of the implementation direction by the end of this stage – whether it should lean more towards commercial solutions (SaaS/API) or whether the implementation of self-hosting will be necessary, and to what extent.
Stage 2: Formulating a framework AI implementation strategy
It is worth defining elements of the strategy at the beginning of the process in order to anticipate certain risks and determine appropriate mitigation measures. It may also be helpful to define the criteria for evaluating success in advance, so that they are not later derived from the results themselves. Marek Jeleśniański proposes a set of such strategy elements in his AI training for managers.

Stage 3: Selecting solutions for the pilot and analysing the total cost of ownership (TCO)
Once the institution has defined its strategy and made an initial assessment of the scope in which it intends to use AI, the legal requirements and the technical and human resources available, it usually already has a preliminary idea of the implementation model. This makes it possible to begin estimating the TCO, the total cost of ownership. The institution compares the costs of SaaS/API licences, implementation and integration costs, infrastructure costs (whether owned or rented), energy costs as well as training and the development of employees’ competences. It also selects a catalogue of solutions for further evaluation.
Once costs have been estimated and the institution’s needs and resources analysed, a decision is usually made at this stage to conduct a pilot implementation that will allow the business and practical assumptions to be tested in practice.
Stage 4: Preparing the processes covered by the pilot, building the team and carrying out the pilot
Implementation – even a pilot – requires preparing employees. At this stage preliminary procedures for working with AI are developed (if they have not already been prepared), instructions for employees are created and training is conducted for the staff involved in the pilot.
Depending on the type of data being processed and the processes being automated, the risks and legal limitations will vary and employees must be aware of them. Changes and new solutions often meet with resistance, so positive communication and training for employees participating in the pilot should begin from the very start – beginning with a general introduction to AI and the assumptions of the project and gradually moving towards specific functionalities and processes connected with everyday professional duties.
The pilot is the final test of all previously made assumptions and of the team’s readiness for the upcoming change. AI is introduced into selected processes within a defined part of the institution’s operations. The earlier preparations were intended to create a safe laboratory where employees can test the possibilities while the team observes whether the substantive, technical and business assumptions are being met.
We analyse:
- the quality and precision of the model’s responses – whether they match the organisation’s needs
- the impact of AI on employees’ working comfort
- data security and the robustness of the system
- the quality of integration with systems (if they were included in the pilot) – whether AI works effectively with the institution’s IT systems
- cost verification – whether the earlier TCO estimates match practice; they were largely based on predicted levels of AI usage, which may have changed as employees adopted the services
Stage 5: Evaluating the pilot and preparing for full-scale AI implementation
At this stage the results of the pilot implementation are already available. There are technical data concerning stability, security and integration with IT systems, cost calculations and employees’ feedback. It is now time to summarise the findings and determine to what extent the institution intends to implement artificial intelligence and what final form this implementation will take.
Ideally several implementation models would be tested simultaneously, for example SaaS/API and self-hosting (for instance in a cloud environment). However, even if this was not possible, after testing only commercial services the institution already gains significant experience, has a clearer understanding of how AI may help and knows what employees think about it.
The Most Important Component for Effective Change
As Marek Jeleśniański notes, public institutions – due to their specific nature, the public character of many documents, the transparency of their operations and the mission of public service – have a unique opportunity to make meaningful use of AI. At the same time they must do so with caution and responsibility. No two organisations are the same and no two teams are identical. Even institutions operating at the same level of administration differ in the environments in which they operate, their organisational culture and the challenges they face in their everyday work.

Without people the implementation of AI will certainly fail. It is therefore important to ensure that they are fully aware of the objectives and have the knowledge that will allow them to go through the process safely.
We place strong emphasis on data security, technical issues and the costs of implementation, but an equally important factor in such projects is the human element – managers, officials and employees. They are the ones who will use AI on a daily basis, integrate it with existing procedures and assess whether it works correctly. If we want successful implementations, employees must not only understand how to use AI but above all see real value in it rather than a threat.
One might therefore ask why training appears only in the fourth stage of our process – the launch and execution of the pilot. If you are wondering about this that is a very good sign because it means you understand how important it is to prepare a team for change. The truth, however, is that education and staff preparation run through every stage described above and it is equally important to train, at stage zero, the group of employees who will prepare the entire process.
Implementing AI in an institution is of course a major IT undertaking but it is probably just as much a challenge in the field of human capital management. Even the best infrastructure and the best selected services will become a pointless expense if people do not want to use them.
In the midst of the digital revolution unfolding before our eyes the goal is not for artificial intelligence to replace anyone but for officials, employees or educators who use AI wisely to become the new standard of modern administration and education.
This is why the answer to the question of where to start is actually quite simple. It may begin with something small – preparing and carrying out a pilot implementation in a limited part of everyday tasks. And what is the aim? To find out whether it is worthwhile, why it is worthwhile and how everyone can benefit from it.

If you want to better prepare your institution for the responsible implementation of AI, on this page you will find other articles exploring the topic in greater depth but above all practical training courses and workshops, including programmes for public administration teams. These programmes teach effective prompting, safe use of AI models, working with data and making informed technological decisions.
Conclusion by Marek Jeleśniański
Allow me to add a couple of words at the end… A comprehensive implementation of AI and integrating it with multiple systems when we operate under many constraints is simply very difficult and it is often accompanied by enormous time pressure and external demands. I know the reality of Polish institutions well because at Oxido we have the opportunity to advise them and run training for them. It is certainly not worth making quick decisions just for the sake of making them and sometimes it is genuinely valuable to have a partner on the other side who will boldly cool enthusiasm and emotions, point out the risks but also inspire ideas on how the potential of AI can be used optimally. What “optimally” means depends on a great many factors including especially the moment in time we happen to be in.
What is certainly worth considering is what Krzysztof writes about – a pilot AI implementation. It is a small step and therefore much easier to put into practice and its results will provide a sensible basis for further action.
We invite you to collaborate with us or at least to subscribe to our newsletter 😉
