The rise of a new scientific era: the future is AI 

Artificial intelligence (AI) has revolutionized the way we live, work, communicate, and interact with technology every day (with or without our knowledge). The idea of AI emerged in 1950 when Alan Turing, a mathematician and computer scientist, proposed an “imitation game,” also known as the Turing test, to measure a machine’s ability to emulate intelligent behavior that is equivalent to, or indistinguishable from, a human’s. Today, AI-powered tools and services have become mainstream and can be found everywhere, from digital assistants and social media to automated appliances and self-driving cars. AI forms the basis of a wide range of systems, including voice recognition, recommendation systems based on user content preferences, fraud detection, and even tools for diagnosing certain diseases and predicting patient outcomes. In 2017, the Vector Institute for Artificial Intelligence (located at the MaRS Discovery District in downtown Toronto) was launched with a $135 million investment from the Government of Canada, the Government of Ontario, and 40 companies (including Google, Air Canada, and Telus) [1, 2]. The institute was created to push Canada to the forefront of AI research by providing the resources to advance Canadian-driven AI research and applications, build Canadian expertise, and attract AI talent to Canada.

AI is not only accelerating the rate of scientific discovery but has also enabled scientists to make breakthroughs in numerous fields of research. Using AI-powered tools and algorithms, researchers can quickly and efficiently sift through vast amounts of data and uncover relationships and trends that may not have been identified using traditional methods of analysis and data processing. A study led by Drs. Christine Allen and Alán Aspuru-Guzik at the University of Toronto, published in Nature Communications earlier this year, demonstrated a promising application of AI, in particular machine learning algorithms, for predicting experimental drug release from long-acting injectables and for guiding the design of new long-acting injectable systems [3]. This has the potential to accelerate the drug development process by reducing the time, cost, and amount of “trial-and-error” associated with drug formulation development.
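For readers curious what such a machine learning workflow can look like in practice, below is a minimal, purely illustrative sketch in Python. It is not the study’s actual pipeline: the dataset is synthetic, and the formulation descriptors (drug loading, polymer molecular weight, drug logP, release time) are assumptions chosen for illustration only; the published work trained and compared models on curated experimental release data [3].

```python
# Purely illustrative sketch: NOT the pipeline, dataset, or model from [3].
# It mimics the general idea of the study: train a regression model on
# formulation descriptors to predict the fraction of drug released.
# All feature names and data below are synthetic assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 200  # number of hypothetical formulation/time-point records

# Hypothetical descriptors: drug loading (% w/w), polymer MW (kDa),
# drug logP, and release time (days)
X = np.column_stack([
    rng.uniform(1, 40, n),    # drug_loading
    rng.uniform(10, 150, n),  # polymer_mw
    rng.uniform(-1, 5, n),    # drug_logp
    rng.uniform(1, 90, n),    # time_days
])

# Synthetic "fraction released" target with noise, so the script runs end to end
y = np.clip(
    0.01 * X[:, 3] + 0.005 * X[:, 0] - 0.002 * X[:, 1] + rng.normal(0, 0.05, n),
    0.0, 1.0,
)

# Hold out some records to estimate predictive performance
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print(f"R^2 on held-out data: {r2_score(y_test, model.predict(X_test)):.2f}")
```

The appeal of such a model is that, once trained on real experimental data, it can screen hypothetical formulations in silico before any are made in the lab, which is exactly where the reduction in “trial-and-error” comes from.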

As the use of AI continues to expand in research, there are growing concerns about its potential risks and pitfalls. These include the possibility of AI algorithms perpetuating and amplifying biases present in the very data used to train them, which can lead to inaccurate results, interpretations, and conclusions. There is also a sense of mistrust surrounding AI due to the lack of transparency of its algorithms. These algorithms, often referred to as “black boxes,” are usually so complex and opaque that flagging and addressing errors or biases becomes difficult. This not only erodes trust but can have harmful consequences when AI-generated outputs are used and interpreted. As AI evolves and becomes embedded in more systems and software, there is mounting concern that it could be exploited for malicious purposes, giving rise to new kinds of cyberattacks, personal data exploitation, and surveillance. Thus, as AI continues to advance, effective safeguards must also be developed to protect against these risks.

A new area of interest in AI has recently emerged within the research community: its use in academic writing, scientific communication and publication, and scoping the literature. Generative text-based AI like ChatGPT has the potential to synthesize the literature, help acquire knowledge, and aid in writing papers with just a few simple prompts. However, researchers are approaching the use of generative text AI for scientific writing with caution, and it’s not hard to see why. These models are still unreliable, as the information they provide can be misleading or completely false. Currently, generative AI like ChatGPT cannot accurately provide the references from which its information was sourced and has instead been found to support its statements with references that do not exist. This is extremely problematic, especially when researchers and the scientific community depend on accurate, factual information; the output of these AI models should therefore always be verified by the user. Despite these shortcomings, AI has demonstrated the potential to become a new means of writing and publishing research articles, scientific communications, and literature reviews. It can increase the accessibility of research to the public and even to researchers within the field by improving the readability of papers and removing jargon [4]. Given the high accessibility, efficiency, and ease of use of ChatGPT, many publishing groups are racing to create policies surrounding its use in published literature. Prestigious journals like Nature and Science have recently banned ChatGPT from being credited as an author on papers, as large language models do not meet the standards for authorship, one of which is “accountability for the work, which cannot be effectively applied to [large language models],” as stated by Magdalena Skipper, editor-in-chief of Nature in London [4, 5].

Over the last several months, we have witnessed the emergence and potential of generative AI, from text-generating models like ChatGPT and Google’s Bard to image-generating models like DALL-E 2 and Midjourney, tools that could change the way we create and acquire knowledge. If generative AI is the future, then perhaps, as suggested by The Atlantic, one of the most vital skills of the 21st century will be the ability to interact effectively with these AI systems; more specifically, writing effective AI prompts may become a skill in high demand [6]. With Microsoft’s $10 billion investment in OpenAI, the company behind ChatGPT, we are currently witnessing a battle for the top mainstream AI, but one thing is clear:

As the wave of a new era of AI changes the world, the scientific community is not immune to its tides, and researchers are adapting to take advantage of the many unprecedented and exciting opportunities it presents.


Sources:

  1. https://vectorinstitute.ai/about/

  2. https://www.newswire.ca/news-releases/new-artificial-intelligence-research-institute-launched-in-toronto-to-anchor-canada-as-a-global-economic-supercluster-617667323.html 

  3. https://www.nature.com/articles/s41467-022-35343-w 

  4. https://www.freethink.com/robots-ai/ai-could-rescue-scientific-papers 

  5. https://www.nature.com/articles/d41586-023-00107-z 

  6. https://www.theatlantic.com/technology/archive/2023/02/openai-text-models-google-search-engine-bard-chatbot-chatgpt-prompt-writing/672991/

Tiffany Ho

Tiffany Ho is a fourth-year PhD candidate in the Department of Pharmaceutical Sciences at the University of Toronto. Her research focuses on the development of a light-activatable nanomedicine platform for photodynamic cancer therapy applications. She is an Associate of the Outreach Team, and outside of research, she loves to travel to new places, garden, and create art.
