As the EU introduces one of its most ambitious regulations in tech history—the AI Act—questions still loom large about how artificial intelligence systems will be certified and brought to market. In the second part of our conversation with Zoltán Karászi, CEO of QTICS Group, we delve into the challenges of certifying AI, the friction between innovation and regulation, and how AI is reshaping the workforce.
In the first part of our interview, it became clear that the rules around AI products are still far from settled. How feasible is it to certify an AI-based tool today?
At present, since the detailed implementing rules have not yet been finalized, it is difficult to say exactly how AI systems will be regulated in practice.
If a product includes artificial intelligence – particularly in the form of regularly updated algorithms – this presents a significant challenge. By the time the notified body has completed the assessment of, say, version “A,” development may have already progressed to version “F” or “G.”
The certification process, by contrast, is built around a so-called “freeze” point: the notified body evaluates only the exact version submitted by the manufacturer. Any subsequent update must be reassessed and reapproved through a separate process.
This clearly shows that the current system is not well-suited to handle fast-evolving, AI-powered medical devices in a flexible way. A new, more adaptable solution will be necessary – but for now, it remains uncertain how lawmakers will address this issue.
How does this fast-paced evolution impact industries like healthcare?
AI is making a strong entry into the field of medical devices. The approval of such devices is governed by the Medical Device Regulation (MDR), which includes a process where the manufacturer submits the finished product to a so-called "notified body" (NoBo) – an independent certification organization. This approval process can take more than a year before the notified body confirms that the product complies with MDR requirements and is ready for market entry.
At present, the process is especially problematic in medical AI. By the time a tool is certified, it may already be outdated. This creates tension between rigid regulatory frameworks and fast-evolving technology. The industry needs more flexible, dynamic certification models that can accommodate continuous development without sacrificing accountability.
How does AI certification intersect with data protection?
AI tools that process personal data fall under the scope of the GDPR. Countries like Italy are very proactive in this area—they were the first to temporarily block OpenAI and are now scrutinizing other providers like DeepSeek. Data protection is no longer a peripheral issue; it’s a core pillar of responsible AI development.
For artificial intelligence systems – including ChatGPT – it is a fundamental requirement that their operation complies with data protection regulations. Any system or application that processes personal data faces specific compliance obligations in this area.
Can previous certifications (e.g., under the MDR or GDPR) simplify AI Act compliance?
In simple terms, they should and hopefully will in the near future. From a regulatory perspective, the goal is to avoid parallel, redundant assessments of the same product under different legal frameworks (such as the AI Act, GDPR, MDR, or other relevant regulations). Instead, the intention of the legislator is to create a coordinated and efficient compliance process where these assessments complement one another.
For example, when evaluating an AI-powered medical device, its data protection compliance (e.g., under the GDPR) and its operational compliance (e.g., under the AI Act or MDR) should not be conducted entirely separately. Rather, the processes should be interconnected, building on each other, so that manufacturers or developers are not burdened with multiple overlapping efforts.
Are there NoBos already operating according to the AI Act?
Certification bodies for ISO 42001 – the standard for AI management systems – already exist; QTICS is one of them. However, certification under the EU AI Act will be an entirely different process, requiring the involvement of so-called "notified bodies" (NoBos), which are designated certification organizations.
Certification under the AI Act cannot yet begin because the necessary implementing regulations are still missing. These regulations are needed to define exactly what requirements a certification body must meet to be designated as a notified body. It is expected that the designation will be carried out by the relevant member state (e.g., in Hungary, the government), but this can only happen once the relevant EU legal framework, the procedural guidelines, and the required technical standards are in place.
These standards are currently still under development. Until the regulatory and institutional framework is established, the certification process cannot meaningfully begin. It is therefore premature to speculate about who might become a notified body under the AI Act – the foundations must first be laid: the legislation, the procedural framework, and the designation mechanism.
How is the QTICS Group preparing for the AI Act?
QTICS Group has played a key role in establishing several Notified Bodies (NoBos) and is recognized as the world’s first and largest certification body for GDPR compliance under the Europrivacy scheme.
This expertise supports the development of integrated quality assurance systems, particularly within the medical and industrial sectors. The organization also contributes to international consortia focused on creating certification methodologies tailored to artificial intelligence technologies.
Will the AI Act create new job roles?
Absolutely. Just as the GDPR introduced the Data Protection Officer (DPO), the AI Act will generate demand for AI coordinators—not only within development teams, but across compliance departments, regulatory bodies, and distribution networks. Everyone involved will need a baseline level of AI literacy.
Artificial intelligence already affects everyone – often even those who may not yet realize it. That’s why it is essential for everyone to explore how they are connected to AI, and what opportunities or risks it may represent for them.
AI literacy – the ability to understand and engage with AI – is becoming critically important. The AI Act is expected to require organizations to appoint responsible professionals, so-called AI Officers, who understand all aspects of how AI operates – including technology, ethics, legal compliance, and data protection. The goal is for every organization to have an expert who can take full responsibility for the use of AI within their operations.
In parallel, public authorities – including market surveillance bodies – must also build up their own internal expert teams in order to monitor and enforce the new regulations. This is triggering a wave of training programs across Europe. Several European universities have launched AI-focused master’s programs, along with adult education courses on AI auditing, policy, and risk management. These aren’t just for engineers—legal, ethical, and policy experts will also need to understand AI at a functional level.
It is worth highlighting that, starting in January this year, QTICS Group was invited – as the only new market entrant – to join Panoraima, an EU-level consortium focused on AI knowledge development.
As a result, the organization is now part of a European project aimed at supporting education, training, and awareness related to AI – particularly in preparation for the implementation of the AI Act. We are proud to contribute to high-level artificial intelligence training for non-ICT professionals within the Law and Compliance working group, particularly in the field of vocational education.
Which roles are at risk, and where will we see growth?
Routine, rule-based jobs—particularly in administration, customer service, and even some legal or accounting functions—are most at risk. But AI will also spawn entirely new roles:
• AI auditing
• AI governance and compliance
• Prompt engineering
• AI ethics
• Interdisciplinary positions such as AI policy experts or data protection officers with AI specialization
With the rise of artificial intelligence, the world of software development is undergoing a fundamental transformation. Traditional developer roles that rely on modular logic are gradually being phased out. In their place, software architects are emerging—professionals who think in abstract, system-level terms and are capable of designing complex, AI-based solutions.
New development environments—especially generative AI tools—are enabling a dramatic acceleration of software development processes. Tasks that once took weeks or even months can now often be completed in just a few days or hours.
This shift not only leads to significant time and cost savings but also redefines the role of future software developers. Their main responsibility will no longer be coding itself, but rather system-level planning, translating business objectives into technical solutions, and effectively integrating artificial intelligence.
Artificial intelligence is not just a technical challenge—it’s a social, legal, and economic one. The AI Act represents an attempt to balance innovation with accountability, but the conversation is only just beginning.
AI is coming with the force of a steamroller – it can't be stopped. That's why we need to equip people with the right tools so they can either jump on and take advantage of it, or at the very least, know how to safely navigate around it.
The conversation on AI regulation is far from over — more to come soon.