The Swedish Agency for Digital Government (Digg) and the Swedish Authority for Privacy Protection (IMY) have been tasked with developing guidelines, which are now being presented to the government.
The guidelines concern generative AI, i.e. models that can create new content such as text and images. They are divided into several areas, such as management and responsibility, information security, copyright, ethics, and procurement.
Among other things, the guidelines emphasize that public actors should formulate an AI policy for their operations. All decisions supported by generative AI should be subject to human oversight, AI should be used with built-in protection for individuals' rights, and there should be transparency around and access to decisions and the underlying data.
"Enormous challenge"
How is personal data processed when it is fed into an AI system? Can one trust that the system does not generate biased output that leads to discrimination against individuals or groups?
Overall, public-sector organizations face numerous important decisions if AI is to be used in compliance with laws and regulations.
"This is an intelligence we are talking about, one that many experts believe should be able to act autonomously and independently. It's an enormous challenge," says Mats Snäll, senior advisor at Digg.
Many areas of application
At the same time, he points out that generative AI can handle many tasks and make things easier both for public organizations and for individual citizens.
"A lot of it has involved translation, transcription, and the release of public documents. That applies especially to those who release public documents on a very large scale, where tens of thousands of pages have to be reviewed and sensitive information redacted. Lawyers can then be used for far more important tasks in an organization," says Mats Snäll.