Everybody and their uncle has heard about Generative AI; only those who sleep under a rock haven't. Of course, most people know it as ChatGPT, and they haven't stopped yapping about it since its launch. Most of the chatter is about how Generative AI will take away people's jobs rather than how it will help enterprises perform better. People in professions as diverse as writing, coding, law, and medicine have expressed fear that a handful of technocrats and big corporations will replace workers with Generative AI and concentrate wealth in their own hands at the expense of society as a whole. The Hollywood writers' strike is a classic example of this kind of fear taking root and almost paralyzing people.
People are also afraid that algorithms seeded with bias could be used by unscrupulous actors to sway public opinion. Others fear that rogue coders could cause grave damage to software, hardware, people, and enterprises, a step up from the regular cyber threats of the past. Individuals and organizations alike worry about stolen identities and attempts to besmirch fair names and reputations.
Do we really need to fear Generative AI?
If Generative AI is left unregulated, and those who deploy it do so in ways that do not benefit people, whether employees or customers, there may be cause for fear. It is another matter that such short-sighted use of Generative AI will eventually rebound on the organization concerned in the most negative way. That said, if the technology is not to be misused by would-be monopolists and oligarchs, rules and regulations, and a proper structure to enforce them, must be put in place.
However, it is worth recalling that about a decade ago there was widespread fear that rapid and widespread computerization would sound the death knell for jobs in sectors like retail, banking, and finance. An Oxford University study even estimated that 47% of all jobs in the US were at risk.[1] Yet things did not pan out that way at all.
Over the long run, Generative AI will create a whole new set of jobs that didn't exist before: AI trainers and operators, auditors of AI-generated work, creators of original cutting-edge content, interpreters of content in terms of human sentiment, AI integration experts, AI compliance specialists, and a slew of other all-new job descriptions.[2] While all of this is heartening, quite a few challenges and risks will arise in the short term as Generative AI continues its relentless march.
To assuage the widespread fear that has accompanied the launch of Generative AI technology, it makes sense to spread awareness among people at large, organizations, and policymakers about both the opportunities the technology presents and the inherent risks it might carry. Doing so will lead to well-thought-out and responsible deployment of the technology.
There must be total transparency and accountability around the algorithms being deployed, so that users can pinpoint any biases that might have crept in. At the same time, developers should ensure that the technology does not lend itself to misuse, and the highest ethical standards must be observed in design and development. Care should also be taken to use a diverse range of training datasets, so that the output obtained is fair, well-meaning, and good for society. This requires researchers, domain experts, policymakers, and ethicists to join hands to ensure that Generative AI becomes a tremendous agent of change, economic growth, and all-round well-being.