Nowadays, AI technology is widely used across many businesses, e.g. face recognition systems, speech recognition and synthesis, cleaning robots, and chatbots. To develop better products and services, these businesses need access to more consumers’ personal information to improve their AI systems. So how can we be certain they don’t misuse our personal information?
On 21 April 2021, the European Commission proposed new rules and actions governing how businesses may apply AI, aiming to assure people and the business sector that AI is safe and does not infringe on their fundamental rights. The rules must be approved by the European Parliament and the Member States before they become law and take effect across the European Union. The draft law divides AI risks into four levels, as follows.
1. Unacceptable risk: Uses of AI that pose a clear threat to people’s rights and safety; the proposal would ban these outright. Examples include governments using AI systems for social scoring of citizens, or voice-assisted toys that encourage dangerous behavior in minors.
2. High risk: AI used in sensitive sectors where any failure would have a huge impact, e.g. healthcare, transport, energy, and government affairs. Examples of AI applications are as follows.
- Critical infrastructure, e.g. transport
- Educational or vocational training, e.g. scoring of exams
- Employment and worker management, e.g. CV-sorting software for recruitment procedures (applicants might be discriminated against by the system)
- Law enforcement that may interfere with people’s fundamental rights, e.g. evaluating the reliability of evidence
In addition, biometric identification technologies, specifically face recognition in public areas, are also considered high risk. AI deployed in the above-mentioned sectors must strictly follow the European Commission’s requirements, for example:
1) A large amount of high-quality data must be used to train the AI so that the system is safe for users and does not discriminate by gender or race.
2) All data and activity must be recorded systematically so that the relevant parties can audit the system later (a minimal sketch of such logging follows this list).
3) Information about the system must be made available to the public, and users must be able to check the necessary details about its performance.
4) The AI system must be accurate and robust enough to prevent the risks and impacts of any errors that may occur; for example, there must be fallback measures in the event of an emergency.
5) Human oversight is required and can take many forms, such as having a person test and verify the AI system’s results before launch, or monitor its operation and take control immediately if a problem arises.
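To make requirements 2) and 5) more concrete, here is a minimal sketch of how a developer might log each AI decision for later audit and flag low-confidence cases for human review. It is written in Python; the file name, field names, and threshold are hypothetical illustrations and are not specified by the EU proposal.

```python
import json
import time
import uuid

AUDIT_LOG = "ai_decisions.jsonl"   # hypothetical append-only audit file
CONFIDENCE_THRESHOLD = 0.90        # hypothetical cut-off for human review

def record_decision(model_version, inputs, prediction, confidence):
    """Append one AI decision to the audit log so it can be inspected later."""
    entry = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "prediction": prediction,
        "confidence": confidence,
        # Flag low-confidence decisions so a human makes the final call.
        "needs_human_review": confidence < CONFIDENCE_THRESHOLD,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example: a CV-sorting system records its decision; the low confidence
# means a human recruiter must review the applicant.
decision = record_decision(
    model_version="cv-sorter-1.2",
    inputs={"applicant_id": "A-1001"},
    prediction="shortlist",
    confidence=0.72,
)
if decision["needs_human_review"]:
    print("Routing applicant A-1001 to a human reviewer.")
```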
3. Limited risk: Systems with specific transparency obligations, such as chatbots. The developer must inform users that they are about to chat with an AI so they can decide whether to continue (see the disclosure sketch after this list).
4. Minimal risk: This level needs no further control because existing laws are adequate. AI-based spam mail filters and AI-enabled game applications are examples at this level.
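As a simple illustration of the limited-risk transparency rule mentioned above, the following Python sketch shows a chatbot that discloses it is an AI before the conversation starts and lets the user opt out. The function names and wording are hypothetical; the proposal defines the obligation, not the implementation.

```python
def start_chat_session(reply_fn):
    """Run a console chat that discloses the bot is an AI before starting."""
    # Transparency notice shown up front, as the limited-risk rule requires.
    print("Notice: you are about to chat with an AI assistant, not a human.")
    if input("Continue? (yes/no): ").strip().lower() != "yes":
        print("Chat ended. You can ask for a human agent instead.")
        return
    while True:
        message = input("You: ").strip()
        if message.lower() in {"quit", "exit"}:
            print("Bot: Goodbye!")
            break
        print("Bot:", reply_fn(message))

# Hypothetical stand-in for a real chatbot model.
def echo_reply(message):
    return f"I heard you say: {message}"

if __name__ == "__main__":
    start_chat_session(echo_reply)
```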
In Thailand, there are currently no laws that apply directly to AI technology. In the near future, stakeholders from each sector should collaborate to propose regulations that protect individual rights from the use of AI. In the meantime, every business should focus on protecting consumer data and respecting human rights when developing AI systems that are safe for users, so as to build AI technology that everyone can trust.
Resources:
https://ec.europa.eu/commission/presscorner/detail/en/ip_21_1682
https://www.bangkokbiznews.com/blog/detail/652128