Health New Zealand employees instructed to cease using AI chatbot for medical documentation


Staff who use free artificial intelligence platforms such as ChatGPT, Gemini, or Claude risk facing official disciplinary proceedings, according to new warnings circulating in workplace settings.

The caution comes as organizations grow increasingly concerned about employees turning to publicly available AI chatbots to assist with their duties, raising serious questions about data security, confidentiality, and intellectual property protection.

Employers and compliance officers are urging workers to exercise extreme caution when inputting work-related information into these platforms, noting that sensitive company data entered into free AI tools may be stored, processed, or used to train future models by the companies behind them.

Human resources and legal teams across multiple sectors are updating their internal policies to explicitly address AI tool usage, with many firms now classifying unauthorized use of consumer-grade AI applications as a potential violation of data protection and confidentiality agreements.

Workers found to have shared proprietary or sensitive information through such platforms could face consequences ranging from formal written warnings to termination, depending on the severity of the breach and the policies of their respective organizations.

Compliance experts recommend that employees consult their organization's technology use guidelines before using any AI-powered tool in a professional capacity. Where an approved AI solution exists within a workplace, it should be used exclusively in place of freely available alternatives.
