Guide to EU AI Act – Why does the AI Act apply to every organisation?

Artificial intelligence is changing the way we work faster than any previous technological revolution. The EU AI Act brings common rules to the whole of Europe. The essence is simple: AI can be developed and utilised, but not in just any way. AI that supports business and works for the benefit of people must be transparent, fair and secure.

Responsible AI starts with people

Who bears the responsibility if decisions are made by machines? How do we know when we are talking to a machine and when we are talking to another human? Do intelligent machines need human supervision? The rapid development of AI technology challenges us to consider many questions.

AI must be safe and fair for everyone. This principle is at the heart of the world’s first comprehensive AI legislation, the EU AI Act. Every organisation must ensure that the AI technology it uses is safe and is used responsibly.

The EU AI Act directs organisations to train their staff so that everyone understands the operating principles of the AI systems in use, knows how to use them responsibly and is able to assess potential risks and disruptions.

The regulation does not aim to restrict innovation, but to build trust. The goal is to ensure that artificial intelligence serves humans. The regulation defines how artificial intelligence is developed, deployed and supervised so that people’s rights, safety and equality remain the starting point at all times.

The safety of artificial intelligence is created as part of the organisational culture: how people use technology, how decisions are justified, and how transparent the operations are. Safety and fairness are built on many levels, from the grassroots to the strategy.

Risks must be considered in advance

Mistakes happen, and that’s human. However, minimising the risk of errors is at the core of the AI Regulation. What happens if, for example, an individual employee uses an AI application in such a way that a serious data protection breach occurs, or a new customer information system reveals a bias that affects the lives of many people? It is essential that risks have been anticipated and their management documented in advance.

Risk management related to artificial intelligence often requires close cooperation within the organisation, for example between management, IT and HR. System settings and information-management models are primarily technical solutions, whereas training employees, the actual users, is often part of HR management and competence development. When mapping risks and ensuring transparency, it is fruitful to involve the entire personnel in the process.

Examples of AI use cases to assess, from everyday tools to prohibited practices:

  • Utilisation of weather forecasting guided by an AI model in event planning
  • Proofreading and clarifying text with the help of AI
  • Using voice-activated lights in facilities
  • Automatic captions in events or a Teams meeting
  • Automated translation of bulletins in Intranet
  • Chatbot on website
  • Virtual chat assistant for event registration
  • AI-assisted newsletter writing
  • An email or a summary of a project report created with Copilot
  • Screening job applications in the recruitment process
  • A system that automatically conducts performance appraisals of employees
  • AI-assisted employee analytics affecting hiring decisions
  • Access control using AI-assisted facial recognition in the office premises
  • An AI application that seeks to manipulate employees’ emotions or decision-making
  • A system that, based on artificial intelligence, automatically and without human supervision, would decide who is awarded a bonus or a promotion, for example
  • Profiling of employees’ political opinions or ethnic background

The regulation guides us to examine how AI systems are used and what effects they have. Risks are assessed according to the extent to which the use of artificial intelligence may affect people’s fundamental rights, safety, health or livelihood.

How each fundamental right is reflected in the EU AI Regulation, and what it means when using M365 Copilot:

Respect for human dignity
  • In the regulation: Artificial intelligence must not reduce humans to a tool or a source of data. Decision-making power and responsibility always remain with people.
  • With M365 Copilot: Don’t copy Copilot’s suggestions as they are. Interpret, edit and refine them with your own expertise.

Freedom
  • In the regulation: AI must not limit thinking, expression or decision-making. The user needs to know when AI is involved.
  • With M365 Copilot: Make Copilot’s role visible and keep the decision-making power to yourself. AI must not decide for you.

Democracy
  • In the regulation: The development and use of AI must be subject to democratic scrutiny and based on common rules.
  • With M365 Copilot: Follow the organisation’s AI guidelines and ensure that the use of Copilot in different situations is open and mutually agreed.

Equality
  • In the regulation: AI must treat people equally and must not discriminate against anyone.
  • With M365 Copilot: Critically evaluate the content Copilot produces. Do not use it in decision-making where possible biases could affect people (e.g. recruitment, evaluation of work performance).

Rule of law
  • In the regulation: AI decisions must be transparent, justifiable and, if necessary, correctable.
  • With M365 Copilot: Use Copilot responsibly. Decision-making must not drift to artificial intelligence; decisions must be transparent: who made them, on what grounds, and how they can be discussed if necessary.

Respect for human rights and minority rights
  • In the regulation: AI must act fairly and without discrimination towards everyone.
  • With M365 Copilot: Review generated content before sharing. If you notice stereotypes or one-sidedness, edit the end result.

Artificial intelligence is neither harmless nor dangerous in itself. Each employee chooses where and how to use it in their work. Microsoft 365 Copilot is a general-purpose AI (GPAI) system. It can be used for many purposes: writing, brainstorming, problem-solving and decision support, but also for unethical purposes, such as deceiving people.

Every employee should keep an imaginary set of traffic lights in mind. In everyday work, responsibility lies with both the employer and each individual employee.

Responsible AI traffic light model for everyone’s everyday work:

🔴 prohibited use: violates people’s fundamental rights or organisational guidelines
🟡 requires consideration and careful assessment: it may, for example, concern individual people, or decision-making may slip too far from humans to artificial intelligence
🟢 allowed and safe use: where needed, remember to explain the role of artificial intelligence transparently
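The traffic-light model above can be sketched in code. The sketch below is purely illustrative: the use cases, their tier assignments and the `traffic_light` helper are hypothetical examples, not an official AI Act risk register, and a real organisation would maintain its own assessed list.

```python
# Hypothetical sketch of the traffic-light model described above.
# The tier assignments are illustrative examples only.
RED = "prohibited"       # violates fundamental rights or internal guidelines
YELLOW = "assess first"  # may affect individuals; needs human judgement
GREEN = "allowed"        # safe, but disclose the role of AI transparently

# Example register of use cases, loosely following the list earlier in the text.
USE_CASE_TIERS = {
    "automatic captions in a Teams meeting": GREEN,
    "AI-assisted newsletter writing": GREEN,
    "screening job applications": YELLOW,
    "AI-assisted employee analytics affecting hiring": YELLOW,
    "manipulating employees' emotions or decision-making": RED,
    "profiling employees' political opinions": RED,
}

def traffic_light(use_case: str) -> str:
    # Unknown or unlisted use cases default to YELLOW:
    # stop and assess before use rather than assume they are safe.
    return USE_CASE_TIERS.get(use_case, YELLOW)

print(traffic_light("screening job applications"))  # assess first
print(traffic_light("a brand-new, unlisted tool"))  # assess first (default)
```

The key design choice is the default: anything not explicitly assessed falls into the yellow tier, mirroring the principle that risks must be considered in advance rather than assumed away.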

Employers have a responsibility for AI literacy

The first and most important step towards safe and responsible use of AI is to train your staff. Developing into an AI-native organization begins with an AI-literate workforce. This requires investment in competence development and in sharing what has been learned. Cooperation between different teams and experts builds the foundation for the successful use of artificial intelligence.

According to Article 4 of the EU AI Act, the employer is responsible for ensuring that personnel have sufficient AI skills for their work. In other words, the employer has a duty to ensure that everyone who uses artificial intelligence has a sufficient understanding of its operating principles, uses, limitations and risks. It is the skill of asking: can I use this in my job, who can it affect, and who will ultimately bear the responsibility? Sufficient competence depends on each job description. For example, recruiters, teachers, salespeople, managers and analysts need different kinds of AI expertise in their work. That is why training must also be role-specific.

In practice, the AI literacy of the personnel can be developed and the level of AI competence raised by training the staff in the versatile and responsible use of Microsoft 365 Copilot. In addition to training that is common to all, the AI skills that everyone needs in their own work can be developed by offering role-specific training.

What can be done concretely in organizations?

  • Staff training
  • Up-to-date AI guidance
  • Information management model and information security
  • Risk assessment and risk management documentation
  • AI strategy

It is also important that the organization has clear AI guidelines that define the allowed AI applications and how they can be used. This way, the use of artificial intelligence in various use cases, work tasks and processes is controlled and the organization’s risk management also remains within clear limits. In other words, the development of the AI-native organization of the future begins with AI guidance and staff training.

AI must be safe for people and business

Finally, it is important to remember that AI is used by ordinary people. In an AI-native organization, artificial intelligence is a natural part of every employee’s workday. Sharing expertise, learning together and continuous training create the basis for utilising the opportunities of artificial intelligence efficiently and safely. Over time, collective understanding and controlled and safe practices emerge.

Microsoft 365 Copilot and Microsoft 365 Copilot Chat are ISO/IEC 42001-certified AI systems. The certification supports controlled, secure use aligned with business goals. Microsoft 365 Copilot Chat doesn’t use your organization’s data to train AI models, and your organization’s data isn’t disclosed outside of your organization. Microsoft 365 Copilot is secure even when handling sensitive or confidential information: all conversations and information remain within the organization.

The safe and fair use of artificial intelligence is everyone’s responsibility. While many processes and everyday chores can be automated, maintaining safety requires human supervision. It is important that the final decision-making power in important matters does not flow unnoticed to AI systems, but that people use their expertise to weigh decisions. People shape the direction and people make the important decisions.

Writer: Iida Pitkänen

Read more: Copilot Essentials as support for EU AI Act requirements

The European Union’s AI Act introduces new requirements for organizations regarding AI competence and responsible usage. The regulation mandates that personnel receive sufficient training on the basics of AI, responsible practices, and safe usage.

Our Copilot Essentials service offers organizations a concrete solution to meet these requirements – easily, quickly, and with high quality. Read more in the article by Service Manager Suvi Savola!

Our Services for Microsoft 365 Copilot Adoption and Development

Copilot Essentials is a comprehensive service package that makes getting started with Microsoft 365 Copilot smooth, efficient, and impactful. The service combines guided deployment, a wide range of training offerings, and continuous support – all for a fixed price.

With our service, your organization receives high-quality guidance, support, and training for the use and implementation of Microsoft 365 Copilot.

The Copilot Essentials learning portal supports organizations in the deployment and utilization of Copilot. The portal includes, among other things, training and learning paths tailored to different roles.

Copilot Adoption Service is a continuous, full-support adoption and development service for organizations that want to offer the best employee experience with the support of Microsoft’s generative AI tools.

Our service is based on various use cases and AI-driven automations that equip you with the best tools to measure productivity gains and achieve true competitive advantages.

We take care of the implementation, rollout, preparations, training, and change management. Your only task is to enjoy the benefits of Copilot – we handle everything else!