- In the AI era, the human capacity for critical thinking has become more important than ever
- Workers feel empowered when companies have transparent AI strategies and responsible frameworks
Last year, Telenor Asia’s Digital Lives Decoded 2024 report focused on how mobile connectivity was shaping smarter and safer lives in Malaysia, with a deep dive into the usage and impact of new technologies such as generative AI.
This year’s Digital Lives Decoded 2025 report, based on interviews with 1,004 people, sees Telenor Asia double down on understanding AI adoption and attitudes, delving deeper into the theme of Building Trust in Malaysia’s AI Future.
One key takeaway was that AI adoption among internet users, already high at 75% in 2024, shot up to 89% this year, while adoption at work rose to 51%, a 40% increase over last year. A welcome development accompanying this higher use of AI as a tool for productivity, learning, and daily convenience is greater awareness of AI’s downsides (chart below).
And here, Telenor Asia emphasises that to cope in this AI age, the public must lean even harder on an age-old skill we already possess – critical thinking.
Of all the in-demand skills highlighted by the report’s data points, Dr Ieva Martinkenaite, Head of AI at Telenor Group, believes basic critical thinking is the most crucial in an AI-first world – especially when tools like ChatGPT or Copilot can serve up information without users spending much time on manual research, yet have proven prone to hallucinating made-up answers.
“Unfortunately these days, consumers are close-eyed and just trust the source, which is like validating the source then,” said Martinkenaite.
At a time when AI-generated fake news is getting ever more convincing, generating alarm and fear – “I mean, how do you differentiate the sources of news?” she asks.
Proactive action needs to be taken. She notes that some governments have acted, by introducing new, or enhancing existing, cybersecurity and privacy laws, or by enhancing the education curriculum – as Finland has done – to teach students how to identify the authenticity of any content they come across online.
An eight-point guideline for corporates aiming to maximise AI
Turning to companies: to ensure employees are comfortable and able to use AI to its full potential, Telenor Asia urges corporates to introduce a framework consisting of well-crafted AI guidelines, strategies and change management.
It suggests an eight-point guideline for top executives considering implementing AI as part of their daily corporate operations:
- Set a clear vision for responsible AI use as part of an AI-powered business transformation agenda or AI action plan;
- Assess business opportunities and the most salient AI risks;
- Develop responsible AI principles;
- Create internal policies and/or guidelines for lawful and ethical use of generative AI tools (such as ChatGPT);
- Develop roles and responsibilities alongside AI risk management frameworks as part of current or new privacy/security/compliance structures;
- Invest in robust data governance tools;
- Build ethical AI frameworks for third party vendors;
- Conduct responsible AI literacy courses for employees.
Adopt AI with proper guardrails
Out of the 1,004 surveyed internet users, 707 are employed: 440 full time, 127 part time and 140 as freelancers. Of those not working, 197 are students and 80 are unemployed, with 20 being stay-at-home parents and nine retired. The survey was conducted between 30 May and 10 June.
Of those employed, 51% said they use AI at work, though only one in three of them say their company has an AI strategy in place. The share of companies without an AI strategy is likely even higher: Telenor Asia believes many of the 49% who do not use AI on the job probably work at companies that have no AI strategy at all.
Telenor Asia believes companies need to introduce an AI strategy, as their staff are likely already using AI on the job; a corporate policy around AI adoption will result in more efficient usage, with guardrails implemented to ensure safe and ethical use.
With Malaysians showing a higher awareness of AI risks such as misinformation, data leakage and biased decision-making, Martinkenaite advised, “Malaysian companies must develop transparent AI strategies, set responsible AI frameworks, and invest in employee education to foster trust and harness the full potential of AI.”
She believes that companies who set an AI vision from the top will reap the benefits.
“Such vision is super important in terms of creating governance frameworks, helping internal employees to know which tools they are allowed or not allowed to use, and establishing how companies take responsibility for creating governance, like creating AI for detecting and preventing cyberattacks on critical data.”
AI in action at work – CelcomDigi
Meanwhile, one need look no further than CelcomDigi Bhd, the leading Malaysian telco and a subsidiary of Telenor Group, to see this in action. With AI at the heart of everything it does, Kugan Thirunavakarasu, Chief Innovation Officer at CelcomDigi, said the telco follows an Artificial Intelligence Governance and Ethics framework to guide its journey into AI-driven productivity, efficiency and automation.
At the recently concluded AI Summit in Kuala Lumpur, CelcomDigi announced four AI focus areas: HR, Legal, Finance and Customer Service. For instance, under Legal, an AI application has cut the handling of certain requests from up to five days down to a few minutes. Meanwhile, every staff member also has a digital employee, again designed to streamline processes and increase efficiency.
For the market, it launched an AI-powered cybersecurity solution for businesses, developed in line with the National Cybersecurity Agency’s framework to ensure governance is in place.
Still, Kugan acknowledges that despite promising progress, “In terms of the maturity level of our governance, it needs improvement. We have a structured policy, but the common challenge is getting everyone to adopt. Coming up with a policy is the easy bit, but ethical adoption is the hard part.”
At the same time, he adds, “it’s always a balance when it comes to innovation and governance; you try not to curb innovation at the same time, while making sure that it’s still done in an ethical manner.”
Challenges of scaling AI-enterprise adoption
Martinkenaite, who earlier emphasised critical thinking as a key skill for individuals coping in the AI era, also highlights another long-established tool in the corporate world that plays a key role in maximising AI – change management.
“You see that only between 10% to 15% (chart above) of companies have captured real value with AI and are preparing to scale,” she said. “The issue is not algorithms or data, which make up over 30% of the challenge in capturing value. 70% of the reasons why companies are not able to scale their AI is because of change management.”
Different from change management in the usual corporate context, change management in AI is more nuanced: it refers to the strategies and practices organisations use to navigate the profound and still-nascent shifts brought about by AI, especially in how it affects people, processes and culture. With the ground still moving, there is no accepted method for getting this right.
From Martinkenaite’s point of view, “companies who seriously look at potential risks, and in response create awareness campaigns, training, create initial structures, they are able to scale, creating good experience for employees and also clarity, which all of course aligns with the vision of top management ambition.”
More information can be obtained by downloading the Digital Lives Decoded 2025: Building Trust in Malaysia’s AI Future report.