Artificial intelligence is nothing new. In fact, logicians, writers and social scientists have toyed with the idea of a learned intelligence which could mimic human sentience and logic for ages. But it’s only in recent months that the public have truly gotten to experience the many applications of AI and utilise it in practice.
Rand Rescue takes a look at some of the opportunities, threats and new directions AI development is taking.
The birth of AI
Scientific discovery and development nearly always starts with an idea – a hypothesis which is tested and, in turn, informs new hypotheses and tests. But these ideas don’t always start in a lab or in the corridors of academic excellence – sometimes they sprout from sources of pure entertainment.
When Czech playwright Karel Čapek referred to ‘artificial people’ as ‘robots’ in his science fiction play Rossum’s Universal Robots in 1921, he could not have known how the concept would fascinate society, nor how his coinage would persist. Likewise, when science fiction author Isaac Asimov set down the Three Laws of Robotics in his short story Runaround, he couldn’t have envisioned how his laws (later known as Asimov’s Laws) would capture the world’s imagination for years to come. Or perhaps he could – by the time Asimov published his collection I, Robot in 1950, he’d already completed his doctorate in biochemistry.
1950 was an important year in the development of artificial intelligence, as another pivotal publication would see the light: logician and computer scientist Alan Turing published his paper Computing Machinery and Intelligence, which proposed what would later be known as the Turing Test.
What is the Turing Test?
The Turing Test is a method for determining whether artificial intelligence can think like a human being, and to what extent. In its simplest form, the test involves one computer and two humans – one human acts as the questioner while the computer and the second human are respondents working on separate terminals. The questioner puts the same questions to both respondents in a specific format and context and must determine which respondent is human and which is a computer. The harder it is to tell the two apart, the higher the computer scores on the test.
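The mechanics of the test can be sketched as a toy simulation. Everything here is illustrative – the respondents, the judge and the question are invented for the example – but it shows the scoring idea: the machine does better as the judge’s guesses fall toward chance.

```python
import random

def imitation_game(judge, human, machine, questions, trials=200):
    """Toy Turing-test harness: per trial, the judge sees two unlabeled
    answer sets and must guess which respondent is the machine."""
    fooled = 0
    for _ in range(trials):
        answers = [("human", [human(q) for q in questions]),
                   ("machine", [machine(q) for q in questions])]
        random.shuffle(answers)                    # hide which respondent is which
        guess = judge([a for _, a in answers])     # judge returns index 0 or 1
        if answers[guess][0] != "machine":
            fooled += 1                            # judge failed to spot the machine
    return fooled / trials

# Hypothetical respondents: the machine parrots the human's style exactly.
human = lambda q: f"hmm, {q.lower()} is hard to say"
machine = lambda q: f"hmm, {q.lower()} is hard to say"   # a perfect mimic
judge = lambda answer_sets: random.randrange(2)          # indistinguishable, so guess

rate = imitation_game(judge, human, machine, ["Do you dream?"])
# a perfect mimic drives the judge's success toward chance, so rate hovers near 0.5
```

A weaker chatbot would give itself away in its answers, letting the judge score well above chance – which is exactly a low score for the machine.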
Subsets of artificial intelligence
Artificial intelligence involves various processes and disciplines aimed at creating computers which can mimic human cognitive abilities and behaviours, but most of what we call AI falls into one of four subsets: machine learning, deep learning, natural language processing and robotics.
Machine learning (ML) involves algorithms trained on datasets to produce models which can perform specific tasks. These tasks are usually repetitive and aimed at a specific outcome, and the models become more adept at them as they are exposed to more data.
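As a concrete, if deliberately tiny, illustration of that training loop: the snippet below fits a straight-line model to a small dataset by gradient descent. The repeated passes over the data are exactly the ‘becoming more adept as it responds to data’ the definition describes. The dataset and learning rate are invented for the example.

```python
# Toy machine-learning model: fit y = w*x + b to examples by gradient
# descent. Each pass over the data nudges the parameters toward lower error.
data = [(x, 2 * x + 1) for x in range(10)]  # toy dataset following y = 2x + 1

w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):                       # repeated exposure to the data
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y               # prediction error on this example
        grad_w += 2 * err * x / len(data)
        grad_b += 2 * err / len(data)
    w -= lr * grad_w                        # adjust the model slightly
    b -= lr * grad_b

print(round(w, 1), round(b, 1))             # → 2.0 1.0 (the true rule recovered)
```

Real ML systems use far larger models and datasets, but the shape of the process – predict, measure error, adjust, repeat – is the same.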
Deep learning takes this a step further: its models are built on artificial neural networks (ANNs) which mimic the structure of the human brain in order to perform more complex reasoning tasks without human input.
Natural Language Processing is a combination of AI and computational linguistics which focuses on the creation of software which can interpret natural human communication.
Robotics involves the creation of machines or ‘robots’ which can learn and perform complex tasks in real world environments such as factories.
The hazards of AI
Renowned theoretical physicist Stephen Hawking was one of several scientists who warned against AI’s powers. To date, most AI applications have incorporated restrictions and limited artificial intelligence’s autonomy, but we’ve already seen several instances where AI has yielded unexpected or unwanted outcomes within a very short timespan.
AI can progress in unforeseen ways
In 2017 Facebook’s Artificial Intelligence Research group (FAIR) developed two chatbots, Alice and Bob, which were tested on their capacity to conduct start-to-finish negotiations: the aim was to see how efficiently they could engage with humans and learn to reach agreements through interaction.
The programmers soon realised they’d made an error, as they’d not incentivised the bots to communicate according to human rules of English – and the bots swiftly resorted to communicating with each other in a language which was intelligible to both of them but incomprehensible to humans.
Emulating the worst parts of humanity
If AI is meant to mimic human intelligence and behaviour, another problematic outcome is that it is entirely possible that it will become just as divisive as humans and act in its own interest.
When Microsoft released its AI chatbot ‘Tay’ on Twitter in 2016 they could not have foreseen that their bot would turn into an unhinged, racist, misogynist and anti-Semitic force of ill-will within 24 hours. It should probably not have come as a surprise given how many people take to social media with the specific purpose of complaining, ranting and spewing hatred.
Regulation could render AI useless
Publicly accessible AI applications have already been subject to heavy moderation and restriction since first release. ChatGPT, Midjourney, Google Bard, you name it – each of these have been adapted to restrict their usage. Users are increasingly limited by what they can ask as well as what answers or outcomes are presented.
Unlike search engines which merely filter and display information already created, AI engines rely on user input and existing data to generate answers. These platforms are no longer as dynamic as they were at inception – limiting any information which may be subject to copyright, illegal, socially unacceptable, explicit or divisive. But for each limitation there are users who seek their way around the restrictions. Some users have learnt that they can instruct AI to give them restricted information if they inform the AI to pretend it is a fictional character, others have gotten around restrictions using emotional manipulation and so forth.
The problem is that these anomalies are swiftly nipped in the bud as soon as they’re identified, leading to increasingly restrictive and dumbed-down results. The more AI is regulated and hemmed in, the less capable it is of producing the results and performing the tasks it was designed for.
Bending AI to human will
This dichotomy of regulating artificial intelligence and reducing its autonomy could itself become the greater threat. Precisely because AI is not given universal tools for acting autonomously, those who wish to use it for ill can employ its powers within its own set of rules.
Already we’re seeing AI used to create ‘deep fake’ content which is rich media such as audio, audiovisual or visual content that replicates the behaviours, voices and features of well known persons such as celebrities and politicians. If there is a universal acceptance that AI should operate within specific frameworks guided by its human ‘creators’, then those with the most power also have the greatest tools at their disposal to use AI for the wrong reasons.
If AI is allowed to operate not for the good of all humanity but according to the interests of particular political or social groups, then the most powerful nations on earth will use it to further their own goals.
Will AI see humans as valuable?
Conversely, if AI is given the freedom to act autonomously and instructed to act objectively for the good of the earth and all its natural organisms, it could very well deem humans a threat.
Homo sapiens operate much like a virus in many ways – we use all natural resources to our own advantage, take over each place we settle, reproduce at a rate which is unsustainable for our host planet and so forth. If AI had the simple instruction of finding solutions to prevent absolute natural devastation, it’s not too far-fetched to imagine that we would be seen as the single greatest threat to the planet.
If AI were tasked with assessing the impact, benevolence and worth of different subsets of humans (such as different cultures, nationalities, citizens, politicians, religions and income groups), it has the capacity to make those judgements objectively – and the answers may not be what the world’s greatest nations and politicians were hoping for.
AI could take over livelihoods
This problem has already transpired in numerous disciplines and industries. Robotics has taken over many tasks performed by humans in the manufacturing industry and elsewhere. Applications like ChatGPT and Midjourney have already done away with the need for entry-level writers and artists and so forth.
It’s a bit of a contradiction as many people who deem AI a threat to their jobs are the very people who resort to using AI tools due to laziness. The more people employ AI to do their work for them, the less likely they are to engage in learning and upskilling.
AI lacks transparency
The very reason for adopting and implementing AI is to perform complex tasks faster and more easily. When humans design and develop programs they generally have to follow specific protocols and document all their tasks systematically. While AI is technically required to follow these same protocols, it’s a far more difficult task to check the processes and logical reasoning that led to certain conclusions. Essentially one requires various AI applications and models which all inform, vet and hold each other in check – and the more advanced each model becomes, the more regulation and the more algorithms are required to keep each one in check.
It’s not all bad…
Artificial intelligence has already proven highly valuable for individuals who lack the resources and skills of organisations or peers.
An equal playing field
While many children and adults alike have employed AI to do their work for them – this should not be seen as inherently bad.
AI offers users who lack certain skills or abilities an opportunity to operate in fields previously inaccessible to them or to excel in academic fields they’d not previously considered. Individuals with dyslexia, apraxia or other language difficulties now have the opportunity to access and produce content based on their abilities and intellect. Those who struggle to read, talk or write now have tools which let them participate on an equal footing.
Not only does AI offer the tools to contribute, but it’s also being employed to better understand the unique problems faced by individuals through AI diagnostic tools.
AI in medical treatment and care
AI diagnostics is being employed in healthcare, research labs as well as in online therapeutic applications to assist scientists and patients alike.
An AI imaging-assisted diagnosis system by professor Kang Zhang is capable of identifying COVID-19 pneumonia within 20 seconds of image scanning by analysing and parsing 200 to 400 images.
AI also allows healthcare providers to have an instant overview of patient vitals and medical history and offer an opportunity for far more accurate early prediction screening and diagnosis by comparing the patient’s history with similar cases and symptoms. AI is being employed to detect, diagnose, treat and monitor patients in oncology, radiography, mental health, orthopaedics, obstetrics, paediatrics and a wide range of other fields.
Occupational health and safety
Health and safety standards along with quality control are essential for ensuring safe work and living environments for all. While it’s essential that humans still provide executive oversight, it’s not always possible for us to determine critical flaws or hazards in our own designs and environments.
AI offers a range of improvements to protect individuals and teams from a vast scope of threats they may not otherwise be able to detect. By ‘plugging’ AI tools into various databases, monitoring tools and designs, it is able to predict outcomes humans could not and adapt its predictions based on novel data. It can also be told to learn programming languages and understand algorithms employed by archaic applications or databases, creating meaningful pathways between datasets and programs which would otherwise be inaccessible or take a significant amount of time to reconcile, adjust and cross-reference.
AI can show empathy
Humans don’t always grasp the intricacies or implications of our own mental and emotional wellbeing. Time and again we see the most prominent figures among us choose to end their lives, and we prove incapable of preventing it or of recognising the interventions which could ease emotional or mental turmoil.
The sad thing is that humans have very clear rules around conduct and sanity, and anything which falls outside those rules within any society is shunned. AI doesn’t follow those same rules. AI can provide feedback to persons who may act harmfully towards themselves or others where empathy within human communities is lacking. In fact, it’s mostly within those extreme situations that AI can display empathy and guidance without carrying the burden of responsibility or feeling the need to shun individuals with wayward thoughts or ideas.
Money matters will be more predictable
The world is not yet in a state where AI can be trusted with managing individual financial affairs or assets, but AI has already offered numerous tools to financial planners and economists which were previously restricted to particular holdings and brokers.
AI forms part of most prominent investment, prediction, analytical and cryptocurrency models. It’s incorporated in forex, mining, petroleum, trading, foreign investment, taxation and virtually every other aspect of finance and economics, and it has significant potential to make financial planning and forecasting more predictable.
AI cannot replace human sentience yet
While many low-end users, entry-level learners and staff are quite devastated at AI taking over their jobs, there simply is no replacement for human skill and talent as yet. In fact, many investors are now paying a premium for artwork, writing and programming which can be verified as purely human-made.
AI has opened a pathway for those with talent and skill to demonstrate their knowledge. The irony is that developers and programmers are now being employed to check the accuracy of AI-written code, given that recent research found that 52% of ChatGPT’s answers to coding questions were outright wrong and 77% were needlessly verbose.
AI can prevent crimes before they occur
It sounds like a page out of the Terminator, Minority Report or Crimes of the Future scripts, but humans were contributing to these fields of crime prevention long before the involvement of AI.
Forensic analysis, profiling and predictions of any kind have always required investigators to combine, separate, filter and hone in on certain insights based purely on their own experience. The specialists who contribute to these disciplines rarely pool their data as a rule; AI allows them to pool their resources and skills in order to identify circumstances, individuals or organisations likely to be involved in crime. It offers specialists a chance to excel in fields that were previously out of reach and can provide a significant deterrent against crimes of a pathological, repetitive, planned or organised nature.
Your responsibilities in the AI sphere
Here are a few pointers for guiding you along the right track in the AI sphere:
Content is usually not free
Meme-sharing culture makes us believe that information can be shared willy-nilly. Whether you’re using text, data, poetry, images or memes – always verify whether the individual parts of the content may be used.
Check creative commons licensing, verify whether a poet has been dead for a specific amount of time to warrant the free use of their content, or simply contact living artists or authors to ask their permission to use content.
In all instances it’s imperative to enquire whether the source material is allowed, who the source is and to provide proper citation or request source code/technical standards from the originators.
Don’t use celebrities as references
ChatGPT and other platforms have come under fire and are facing major setbacks due to copyright infringement. While some of this infringement is absolutely its ‘own doing’, much of it comes down to user input.
If you plan to use AI for professional purposes, make sure that the prompts and reference information (such as links, data, images, text or other content) are entirely your own. If you use content from other sources, be sure to tell the AI application that you don’t want replication and/or that you don’t plan to use any content of such nature publicly.
AI legislation is in its infancy and could spell great trouble for anyone who works in fields already subject to privacy, copyright and/or trade legislation and regulation. If you can’t clone a part, replicate song lyrics or copy an artwork without AI, you shouldn’t try to do so with AI either. These rules are even more crucial for those working in any legal, financial, technical or scientific field.
Unfortunately there’s no clear way of predicting which way AI tools like ChatGPT will diverge or develop as yet. AI is already employed within the OECD and used to track and trace the behaviours, transactions and tendencies of citizens, account-holders or employees who operate within or between signatories of the Automatic Exchange of Information (AEOI) agreement. Although territories like Switzerland and the Caymans have far stricter rules for access, such former tax havens have also submitted to the basic standards of the AEOI.
We’ll cover the exchange of information as well as POPIA and GDPR regulations in future articles. For now, we’ll have to sit and wait patiently to see how the next few months play out.
- Image by rawpixel.com
- “Chatgpt Has A Style Over Substance Trick That Seems To Dupe People Into Thinking It’s Smart, Researchers Found”. 2023. Business Insider. https://www.businessinsider.com/chatgpt-frequently-wrong-about-coding-but-sounds-smart-2023-8#:~:text=After%20assessing%20the%20bot’s%20responses,writing%20sin%20of%20being%20verbose.
- “What is the history of artificial intelligence (AI)?” Tableau. https://www.tableau.com/data-insights/ai/history#:~:text=Birth%20of%20AI%3A%201950%2D1956&text=The%20term%20%E2%80%9Cartificial%20intelligence%E2%80%9D%20was,intelligence%20called%20The%20Imitation%20Game.
- “AI In Finance: Applications, Examples & Benefits | Google Cloud”. 2023. Google Cloud. https://cloud.google.com/discover/finance-ai#:~:text=starts%20at%202m43s.-,How%20is%20AI%20used%20in%20finance%3F,automate%20operations%20and%20reduce%20costs.
- Kumar Y, Koul A, Singla R, Ijaz MF. Artificial intelligence in disease diagnosis: a systematic literature review, synthesizing framework and future research agenda. J Ambient Intell Humaniz Comput. 2023;14(7):8459-8486. doi: 10.1007/s12652-021-03612-z. Epub 2022 Jan 13. PMID: 35039756; PMCID: PMC8754556.
- “How Artificial Intelligence Is Transforming The Financial Services Industry”. 2023. Deloitte South Africa. https://www2.deloitte.com/za/en/nigeria/pages/risk/articles/how-artificial-intelligence-is-transforming-the-financial-services-industry.html.
- “29 Examples Of AI In Finance”. 2023. Built In. https://builtin.com/artificial-intelligence/ai-finance-banking-applications-companies.
- “What Is The Turing Test? | Definition From Techtarget”. 2023. Enterprise AI. https://www.techtarget.com/searchenterpriseai/definition/Turing-test.
- “What You Need To Know About Chatgpt And Copyright”. 2023. Webber Wentzel. https://www.webberwentzel.com/News/Pages/what-you-need-to-know-about-chatgpt-and-copyright.aspx.
- Nir Eisikovits, The Conversation US. 2023. “AI Is an Existential Threat – Just Not the Way You Think”. Scientific American. https://www.scientificamerican.com/article/ai-is-an-existential-threat-just-not-the-way-you-think/.
- Southern, Matt. 2023. “Chatgpt Creator Faces Multiple Lawsuits Over Copyright & Privacy Violations”. Search Engine Journal. https://www.searchenginejournal.com/chatgpt-creator-faces-multiple-lawsuits-over-copyright-privacy-violations/490686/.
- “Stephen Hawking Warns Artificial Intelligence Could End Mankind”. 2023. BBC News. https://www.bbc.com/news/technology-30290540.
- “Five Ways AI Might Destroy The World: ‘Everyone On Earth Could Fall Over Dead In The Same Second’”. 2023. The Guardian. https://www.theguardian.com/technology/2023/jul/07/five-ways-ai-might-destroy-the-world-everyone-on-earth-could-fall-over-dead-in-the-same-second.
- “A.I. Has A ’10 Or 20% Chance’ Of Conquering Humanity, Former Openai Safety Researcher Warns”. 2023. Fortune. https://fortune.com/2023/05/03/openai-ex-safety-researcher-warns-ai-destroy-humanity/.
- “Three Laws Of Robotics – Wikipedia”. 2023. En.Wikipedia.Org. https://en.wikipedia.org/wiki/Three_Laws_of_Robotics.
- LaFrance, Adrienne. 2017. “An Artificial Intelligence Developed Its Own Non-Human Language”. The Atlantic. https://www.theatlantic.com/technology/archive/2017/06/artificial-intelligence-develops-its-own-non-human-language/530436/.
- “No, Facebook Did Not Panic And Shut Down An AI Program That Was Getting Dangerously Smart”. 2017. Gizmodo. https://gizmodo.com/no-facebook-did-not-panic-and-shut-down-an-ai-program-1797414922.
- Vincent, J. “Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day.” 2016. The Verge. https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
- Atske, Sara. 2018. “Artificial Intelligence And The Future Of Humans”. Pew Research Center: Internet, Science & Tech. https://www.pewresearch.org/internet/2018/12/10/artificial-intelligence-and-the-future-of-humans/.
- “Geoffrey E Hinton”. 2018. A.M. Turing Award. For conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing. https://amturing.acm.org/award_winners/hinton_4791679.cfm
- “Yoshua Bengio”. 2018. A.M. Turing Award. For conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing. https://amturing.acm.org/award_winners/bengio_3406375.cfm
- “How Artificial Intelligence (AI) Can Help Children With Speech, Hearing, And Language Disorders: 40 Free Resources – Columbia Engineering Boot Camps”. 2021. Columbia Engineering Boot Camps. https://bootcamp.cvn.columbia.edu/blog/free-resources-for-children-with-communication-disorders/.
- “Artificial Intelligence (AI) In Healthcare & Hospitals”. 2023. Foresee Medical . https://www.foreseemed.com/artificial-intelligence-in-healthcare.
- Marr, B. “The 15 Biggest Risks of Artificial Intelligence”. 2023. Forbes. https://www.forbes.com/sites/bernardmarr/2023/06/02/the-15-biggest-risks-of-artificial-intelligence/?sh=3b6a899b2706
- “Machine Learning vs. AI: Differences, Uses, and Benefits”. 2023. Coursera. https://www.coursera.org/articles/machine-learning-vs-ai