Even without a plan, GenAI finds its way into your organisation

By: Jules van den Berg & Daan Smits

‘Our employees know the risks of ChatGPT’ is a statement we often hear in our daily work, but does it ring true? There are plenty of examples of generative AI (genAI for short) being deployed in less than optimal ways, which can make it seem safer not to use it at all. At the same time, advisors and users assume that deploying genAI will cut costs while somehow escaping all the risks so loudly discussed on LinkedIn. In this blog, we offer practical guidance on how to get started with a genAI strategy.


We increasingly encounter the use of genAI within organisations, even when the organisation does not facilitate it, discourages it, or outright prohibits it. Despite the inherent risks of business use of genAI, employees tend to see the benefits rather than the risks. An organisation in which employees work with genAI, while the upper echelons believe this is not happening, has lost control. This raises the question: how do you deploy genAI wisely, given these many advantages and disadvantages?

Strategy implementation

In making a strategy executable, especially in the field of genAI, both policy and an associated actionable plan play an essential role. We recommend that this plan, based on the strategy, describes whether employees within the organisation are allowed to use genAI, and if so, for which activities and under what restrictions. The extent to which the use of genAI is desirable varies by industry and organisation, and between public and private bodies. No organisation escapes this consideration. Even if the conclusion is that genAI should not be used at all for weighty reasons, the organisation must first make that assessment deliberately. It is up to the organisation to decide which aspects carry the most weight.

For instance, for one organisation the focus will be on data confidentiality, whereas for another the sustainability aspect is paramount. But how do you draft a genAI policy that is correct, complete and enforceable? A simple first step is to read our earlier blog, in which we outline how to use genAI reasonably safely.

The concrete elaboration of the policy will require an interdisciplinary team, as AI is a specialist field in its own right, yet touches many other domains. For instance, you will probably need the help of a security expert, a technically savvy legal expert and a (digital) ethicist. You are also likely to need a director to bring all these disciplines together.

Ultimately, as an organisation, you need to inform management and staff about the new policy and what it means for them. This is where an education specialist can help, by creating appealing webinars and e-learnings that help employees internalise the adopted policy, and by training employees in using genAI. After all, the work of many employees will change - or has already changed - with the use of genAI. Changing roles bring changing tasks and responsibilities, and the organisation needs to guide employees towards embracing those new responsibilities through strategic communication. The exact form this awareness-raising step takes is up to you, but it should not be skipped.

In conclusion

To make your organisation genAI-proof, or even genAI-powered, take the following essential steps. Establish a genAI strategy that sets out what the organisation wants to achieve with genAI. Then write a policy that can be implemented across disciplines, meaning it covers legal, security, process, social, organisational and ethical aspects, is technically sound, and can be explained transparently. Finally, make this policy executable with a plan involving a director and education experts.

Excited, interested, or do you disagree with something we mentioned?

Please contact Jules van den Berg and/or Daan Smits. Jules and Daan are consultants at Highberg and advise on AI architecture and digital strategy. 
