AI: AUGMENTED DECISION MAKING

Conversation around artificial intelligence is steeped in both utopian and dystopian promise. Few other technologies currently elicit comparable levels of emotion from senior academics, technology experts and laypeople alike. Associations with Skynet from The Terminator, omniscient and omnipotent, are common, but in reality progress in AI has focused not on developing machine sentience but on using massive amounts of data to automate repetitive tasks.[1] Most successful real-world applications of AI pair the technology with humans trained in its use to support augmented decision making, and that is where most experts believe the real potential of AI will be unleashed.

With proper training involving exposure to a few hundred thousand labelled images, machines can diagnose melanoma with the same accuracy as dermatologists who have spent a decade in training. That knowledge is both thrilling and frightening, and it illuminates the real disruptive potential of AI: its impact on highly skilled labour. While past iterations of technology automated menial physical tasks, AI is automating skilled work traditionally performed by trained professionals in medicine, science and industry. The fear response may be an overreaction, however, as evidence suggests the technology will not replace these highly skilled individuals but instead augment their skillsets, elevating the work they do and the value they provide.[2]
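The key idea behind such systems is that the diagnostic rule is learned from labelled examples rather than written by hand. Real melanoma classifiers are deep neural networks trained on hundreds of thousands of dermoscopy images; as a deliberately minimal sketch of the same supervised-learning loop, the toy below "trains" on synthetic one-dimensional data (an invented "pigment intensity" feature with made-up class distributions) and classifies new cases by nearest class average:

```python
import random
import statistics

# Minimal sketch of supervised learning: the classifier is derived
# entirely from labelled examples, not hand-written rules. The data and
# feature are synthetic stand-ins, not clinical values.
random.seed(0)
benign = [random.gauss(0.3, 0.05) for _ in range(500)]
malignant = [random.gauss(0.7, 0.05) for _ in range(500)]

# "Training": summarise each class by the mean of its examples.
centroids = {
    "benign": statistics.mean(benign),
    "malignant": statistics.mean(malignant),
}

def classify(x: float) -> str:
    """Label a new case by whichever class centroid it sits closest to."""
    return min(centroids, key=lambda label: abs(x - centroids[label]))

correct = sum(classify(x) == "benign" for x in benign) + \
          sum(classify(x) == "malignant" for x in malignant)
print(f"accuracy on the training examples: {correct / 1000:.2%}")
```

The point of the sketch is the workflow, not the model: feed in enough labelled cases and the decision boundary emerges from the data, which is why accuracy scales with the size and quality of the training set.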

Unfortunately, the technical expertise to leverage AI is currently in high demand and short supply. Companies across a wide variety of industries need to invest in data analytics to sustain themselves and are aggressively recruiting individuals with strong computational data science and programming skills, skills that are not readily available.[3] User adoption is also challenging, primarily because technology that is difficult to understand is difficult to trust, particularly when it is used in assisted decision making. For professionals who must justify their decisions to boards, regulatory bodies or the public, a black-box decision doesn't cut it. Further, because humans train machines on human-generated data, machine learning is susceptible to common human biases along racial, cultural and gender lines, and if you don't know how your technology arrived at its decision, you can't know whether its algorithm has been compromised. These concerns have led to the rise of explainable AI (XAI), or transparent AI, wherein machines provide an auditable accounting of the logic and rules used to arrive at a decision, a development which may in time help facilitate broader adoption of the technology.[4]
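What an "auditable accounting" of a decision can look like is easiest to show with a toy. The sketch below is a hypothetical, simplified illustration, not any real XAI library or clinical tool: a rule-based screen whose feature names and thresholds are invented for the example, and which logs every rule that fires so the final recommendation can be traced step by step:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Explanation:
    decision: str      # the recommendation itself
    trail: List[str]   # ordered record of every rule that fired

def assess_lesion(asymmetry: float, border: float, diameter_mm: float) -> Explanation:
    """Score a case with three explicit rules, logging each one.

    Thresholds are illustrative only, not clinical guidance.
    """
    trail: List[str] = []
    score = 0
    if asymmetry > 0.5:
        score += 1
        trail.append(f"asymmetry {asymmetry:.2f} > 0.50: +1")
    if border > 0.6:
        score += 1
        trail.append(f"border irregularity {border:.2f} > 0.60: +1")
    if diameter_mm > 6.0:
        score += 1
        trail.append(f"diameter {diameter_mm:.1f} mm > 6.0 mm: +1")
    decision = "refer to specialist" if score >= 2 else "routine monitoring"
    trail.append(f"total score {score} -> {decision}")
    return Explanation(decision, trail)

result = assess_lesion(asymmetry=0.8, border=0.7, diameter_mm=7.2)
print(result.decision)
for step in result.trail:
    print(" ", step)
```

A professional defending this recommendation to a board or regulator can point to the trail rather than a black box, and a biased or compromised rule is visible in the log; that traceability, scaled up to far more complex models, is the promise XAI is chasing.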


[1] Thrun, S., & Anderson, C. (2017, April). What AI is — and isn’t. Retrieved May 8, 2019, from TED Ideas worth spreading: https://www.ted.com/talks/sebastian_thrun_and_chris_anderson_the_new_generation_of_computers_is_programming_itself

[2] Wilson, H. J., & Daugherty, P. (2018, July–August). Collaborative intelligence: Humans and AI are joining forces. Harvard Business Review, pp. 1–11.

[3] Henry-Nickie, M. (2017, November 16). Leveraging the disruptive power of artificial intelligence for fairer opportunities. Retrieved April 20, 2019, from Brookings: https://www.brookings.edu/blog/techtank/2017/11/16/leveraging-the-disruptive-power-of-artificial-intelligence-for-fairer-opportunities/

[4] PwC Digital Services. (2018, December 06). The six priority areas to unlock AI value in 2019. Retrieved May 8, 2019, from Digital Pulse: https://www.digitalpulse.pwc.com.au/report-pwc-ai-predictions-2019/