AI: Regulating the Unregulatable

20.03.2026

The application of artificial intelligence offers enormous potential while also harboring risks of at least equal magnitude. Roland Jakab, CEO of the HUN-REN Hungarian Research Network, and Gergely Szertics, Director of the HUN-REN AI Service Center, addressed these risks in two separate presentations at two conferences.


Systems based on artificial intelligence hold enormous business potential for both users and developers. Capitalizing on the former is not always easy, as Portfolio’s AI in Business conference on March 18 made clear, but examples of the latter are much easier to find.


Roland Jakab listed such examples in his closing presentation at the InnoSummit 2026 conference held at the Museum of Ethnography. The history of generative AI, which began less than four years ago, is full of meteoric rises, but even among one-person companies (OPCs), Base44 stands out.


The idea behind the company and its product sprang from the mind of Israeli developer Maor Shlomo in late 2024. He identified a market gap: nonprofit organizations need good software just as much as businesses do, but can rarely afford it. Last January, working alone but with the help of artificial intelligence, he began developing the platform, dubbed Base44, which reached $1 million in annual recurring revenue (ARR) within just three weeks in February. By spring, the number of users had already exceeded 400,000, and Shlomo had hired only a few people when, on June 18, Wix.com acquired Base44 for $80 million, barely six months after the idea was conceived.


Such successes are likely to become more frequent as “autonomous AI”—that is, agent-based systems that independently solve complex tasks using multiple software tools—gains prominence over “assistive AI.” However, the spread of such systems also harbors new dangers, added Roland Jakab. Specialized systems can complete typical tasks (development, data analysis, and others) in as little as an hour—but afterward, it can take a person several hours to verify the final result. The danger lies in the fact that users increasingly trust these ever-improving tools, so they only check them superficially, which can lead to costly errors.


Gergely Szertics drew attention to a different, more general problem at the AI Summit 2026 conference organized by the IEEE’s Central European chapter. The technology and IT industry has always struggled with the issue of regulation: development has generally been much faster than legal regulations could keep up with. This is particularly striking in the case of internet services and social media platforms—but artificial intelligence poses even greater challenges for regulators, because not only is its development even faster, but its economic and social impact is also far more pronounced.


In such cases, both legislators and corporate compliance professionals often find themselves merely scrambling to keep up with events, regulating not what they should, but what they are capable of regulating, emphasized Gergely Szertics. Yet today, we should not be concerned (only) with how people use artificial intelligence, but rather with how AI-based agents use it. The rapid spread and integration of agent-based, autonomous systems into corporate business processes — as also mentioned by Roland Jakab — raises numerous questions, and there are no reassuring answers to them yet.
Gergely Szertics envisions a future where AI-based agents also conduct scientific research, with their work supervised and monitored by higher-level “supervisor agents.” However, the question remains: to what extent can an ecosystem of agents be kept under control, and what role will humans play in all of this?
