Hong Kong’s privacy watchdog has issued the city’s first set of personal data protection guidelines for businesses using generative artificial intelligence (AI) services, saying it will carry out more compliance checks as AI adoption increases.

Companies using generative AI solutions should take a range of measures to protect personal data, including conducting risk assessments, deciding on the appropriate level of human oversight, and minimising the amount of personal data collected to train their models, according to a framework published by the Office of the Privacy Commissioner for Personal Data (PCPD) on Tuesday.

They should also set up an internal AI governance committee, led by a C-level executive, that reports directly to the board, according to the document.

The framework is the most comprehensive set of AI-related regulatory guidelines to date in Hong Kong, which currently does not have any laws or regulations specifically governing the technology that has seen rapid adoption since OpenAI’s launch of ChatGPT in late 2022.


However, it does not impose mandatory requirements, noted Amita Haylock, a partner at the law firm Mayer Brown in Hong Kong.

“Given the government’s strong support for innovation and technology, I believe it is likely to continue with an incremental approach to AI use and development by introducing voluntary guidelines and subject matter-specific measures – for example, in data privacy and intellectual property – rather than enact sweeping laws and regulations in this regard,” she said.

The PCPD in 2021 published guidance on the ethical development and use of AI, covering principles including accountability, transparency, fairness and security. The Hong Kong Monetary Authority also issued a document covering “high-level principles” of AI.

Hong Kong Secretary for Innovation, Technology and Industry Sun Dong said in February last year that the city would set up a special task force to navigate “disruptive” technological achievements such as ChatGPT.

Hong Kong workplaces have seen a significant increase in the use of AI, according to a report published by Microsoft and LinkedIn last week. It found that 88 per cent of knowledge workers in the city use generative AI tools, higher than the global average of 75 per cent.

From last August to February, the PCPD carried out compliance checks on 28 local organisations to examine their personal data collection and protection practices related to AI use. The agency found that 21 used AI in their day-to-day operations, according to the resulting report published in February.

The regulator said 10 of these organisations collected personal data and used appropriate security measures while doing so, according to the report.

The PCPD was pleased to see that Hong Kong companies were compliant with data regulations, Privacy Commissioner Ada Chung Lai-ling said at a media briefing on Tuesday. As more institutions in Hong Kong adopt AI services, the PCPD will continue to conduct compliance checks, she added.

Chung noted that many Hong Kong companies have raised questions about using AI in a compliant manner, but said those that follow the new PCPD framework can be assured that they are complying with local regulations.

Still, small and medium-sized enterprises, which are increasingly adopting AI, may struggle to invest the time and resources needed to fully comply with these best practices, said Mayer Brown’s Haylock.