My extensive background as a business technologist makes me well versed in a very similar topic: shadow IT. Shadow IT generally occurred in large organizations and corporate environments, and the term was applied to the use of IT by departments outside of formal IT control. This typically involved deploying technology without appropriate oversight and controls, and usually without acknowledging the longer-term care and feeding of the solutions being developed. The reasoning was that formal IT was too slow, and other departments could implement solutions much faster on their own. Once shadow IT projects were implemented, the formal IT organization was saddled with figuring out how to integrate the solution into a broader architecture and how to support it going forward. This was generally a losing proposition for the organization overall once total cost of ownership was considered.
Now consider AI, which, unlike most established IT functions, doesn’t really have an established home in most organizations. It is probable that most forward-thinking organizations today are establishing, or have established, some semblance of ownership or leadership with regard to the use of AI. This may be an individual tasked with coordinating AI efforts, a committee, or even a nascent department. A critical first step in managing AI is getting control of it within the organization, which implies visibility.
The use of AI must be visible to an organization’s leadership if any semblance of control is to be established and maintained. Even this simple goal of creating visibility into AI use may be difficult to achieve. Take the simple example of a marketing department that happens to use Adobe Photoshop. Asked the direct question in April of 2023, “Are you using any AI in the department?”, the answer would probably have been no. In May of 2023, Adobe rolled out its generative AI features in Photoshop, so the answer should have flipped to “yes.” The AI in this case simply appeared in a solution that was already in use, which now has AI components embedded in it. Disclaimer: at the time of this writing, the AI in Photoshop is mostly embedded in the Beta version of the software, but it is widely available and being widely touted.
And so the dilemma. AI embedded in solutions already in use is just the tip of the iceberg. Solutions that previously posed no data privacy or security exposure now do, or will in the near future. Proprietary solutions may be built on work that was not properly licensed, or whose licensing is at least in dispute, and derivative solutions may now be in jeopardy of creating legal or regulatory issues for the organizations using them. If a vendor issues a revision to its terms and conditions, it may really be important to read and understand the fine print, especially around any changes involving AI.
The expertise to govern and control AI is only just taking shape. One can view this as old art needing to be applied to a new solution set, or as a completely new art yet to be developed. Either way, AI that is not given appropriate oversight and control within an organization will quickly grow out of control and will probably lead to significant unintended consequences over time.