PETALING JAYA: The RM100,000 fine imposed on Sin Chew Media over the use of an incomplete Jalur Gemilang in its publication has reignited calls for clear, industry-wide guidelines on artificial intelligence (AI) in journalism.
Experts warn that without proper standards, reliance on AI risks costly mistakes and erodes public trust.
Universiti Teknologi Mara journalism lecturer Fadzillah Aishah Ismail said guidelines should provide a framework for AI use in newsrooms, ensuring transparency and accountability.
“Some local media are moving in the right direction but these remain limited to individual newsrooms. Organisations overseas such as Reuters, BBC and AP have clear AI rules emphasising transparency, human oversight and accountability.
“Malaysia needs something similar at an industry level, perhaps through the Media Council.”
Fadzillah, who also teaches Media Law, said while AI is increasingly used for transcription, summarisation and content support, legal responsibility under the Printing Presses and Publications Act, Communications and Multimedia Act and Defamation Act remains with editors and publishers.
“Malaysia has no laws specifically regulating AI in newsrooms. If AI produces an error that harms someone’s reputation, it is still the media owner or editor who is liable.”
She welcomed the National Guidelines on AI Governance and Ethics, introduced by the Science, Technology and Innovation Ministry in September 2024, but noted that they remain voluntary rather than binding.
“From a journalism ethics standpoint, AI should be an assistive tool, not a replacement for human judgement.
“Newsrooms must check facts, provide context and ensure accuracy. Public trust depends on human accountability. At the end of the day, AI should serve journalism, not the other way around.”
She said the Jalur Gemilang incident highlights how serious mistakes can be made, even if unintentional.
“Authorities conduct thorough investigations to determine whether mistakes are unintentional before deciding on penalties. What matters most is that these decisions are made professionally without racial sentiment.”
Fadzillah urged the Malaysian Communications and Multimedia Commission and the Media Council to collaborate with newsrooms to set clear AI guidelines, focusing on responsible use rather than punishment.
Universiti Malaysia Kelantan Institute for Artificial Intelligence and Big Data associate fellow Dr Fakhitah Ridzuan said the Jalur Gemilang issue shows the risks of over-reliance on AI without human oversight.
“AI is just a tool. If the model is not trained with complete and accurate data, it cannot provide reliable responses,” she said, warning that large language models (LLMs), often called “black-box systems”, can produce polished but misleading content.
“LLMs are known for making mistakes. Taken blindly, they can do more harm than good.”
She said that while AI can improve efficiency, it lacks ethical judgement, and that the final decision and accountability for any AI-generated output must ultimately rest with humans.
“Any confidential information entered into AI is stored and added to its knowledge database. If false information is repeatedly fed, it can result in unreliable outputs.
“Editors must enforce checks and require writers to justify their work. Since AI lacks critical thinking, humans must evaluate content even if it appears accurate.”