How have AI tools been taken advantage of in a research setting?

Oliver Back
5 min read

In the past 18 months, AI technology has started to find its place within education and industry, but how have researchers assimilated some of the many tools out there into their workflows? We recently interviewed a senior scientist working in the field of immunology, whose average day involves conducting lab work, performing analysis, and communicating the results to their team of researchers.

We asked for their perspective on the use of AI in a professional research environment, and here’s what they had to say:

AI tools are becoming more popular amongst academics, students and industry professionals. Are you using this technology yourself in your own research environment or do you know of colleagues who are?

For me personally, I do actually use machine learning technology, which is an application of AI. I use ML tools for image analysis after experiments have been performed. I'll get slides of tissue sections, and they'll be stained with different markers to look at different immune cells. I then use a software tool that lets me click on a cell and say whether it’s positive or negative for a marker. The software uses my manual annotations to learn what I want to be classed as an immune cell and what is background noise. I know some other research groups are using a tool called AlphaFold, which predicts the structure of proteins and can be used for drug discovery in future research.

AI tools for image analysis are an incredible application of machine learning. Semi-automating this aspect of the research journey not only saves time but allows academics and scientists to focus on the more important parts of the discovery process. In many ways, AI image analysis tools for scientists are like grammar-checking tools for writers. Image analysis is still not a perfect science, however: telling a computer what a cell is and isn’t has proven to be a very difficult task, often requiring human input to check that the AI system has correctly classified the cells. Whilst there is room for improvement with these algorithms, they have certainly already changed the route academics take in the research environment.

So, can you effectively say to this kind of software: ‘Here are my pictures, do your thing’, and then come back two hours later to find that it's processed and classified all of the images without any input from you?

So what I usually do is, I'll have around 30 images, with maybe five images per experimental group. I'll take a representative image from each of those experimental groups and use that to train my machine learning tool. So there is some human interaction needed to make sure it is accurate; you can't just trust it blindly. I'd apply it to just a small region of interest, then go into my tissues again and apply it to a bigger region, and then actually check that it is correct, because sometimes it can misclassify a cell, especially with the disease that I'm looking at. You sometimes have a whole cluster of cells composed of different markers, which may result in double positives that don’t exist in a biological system.
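The annotate-train-verify loop described above can be sketched in code. This is only an illustrative toy, not the software the interviewee uses: it stands in for the real tool with a simple nearest-centroid classifier over a single made-up "marker intensity" feature, where the manual click annotations become labelled training points.

```python
def train_centroids(annotations):
    """annotations: list of (marker_intensity, label) pairs taken from
    manual clicks on a representative image ('positive' / 'negative')."""
    sums, counts = {}, {}
    for intensity, label in annotations:
        sums[label] = sums.get(label, 0.0) + intensity
        counts[label] = counts.get(label, 0) + 1
    # Mean intensity per label acts as that label's centroid
    return {label: sums[label] / counts[label] for label in sums}

def classify(intensity, centroids):
    """Assign the label whose centroid is closest to this cell's intensity."""
    return min(centroids, key=lambda label: abs(intensity - centroids[label]))

# Manual annotations from one representative image (illustrative values)
annotations = [(0.9, "positive"), (0.8, "positive"),
               (0.1, "negative"), (0.2, "negative")]
centroids = train_centroids(annotations)

# Apply to a small region first, then spot-check the labels by eye
# before scaling up, as the interviewee describes
region = [0.85, 0.15, 0.7]
labels = [classify(i, centroids) for i in region]
print(labels)  # ['positive', 'negative', 'positive']
```

The final human check matters precisely because a classifier like this will happily label overlapping clusters as "double positives" that make no biological sense.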

So, having to check that the software has correctly performed the classifications is like having a robot that sets you homework?

Definitely.

AI tools certainly seem to have streamlined research processes involving image analysis, but they still rely very heavily on subject matter expertise to ensure complete accuracy and trustworthiness.

So, you’ve had positive experiences with some of the tools you’ve used. Do you have any reservations about using other AI tools in your day-to-day life and work?

I think this touches on your point about whether human supervision is still needed, because you can't just go blindly into these things and trust them 100%. That’s especially true of this other tool that I'm using, where you classify your cells and input the data, and it will create different regions in your tissue for you, but it does it in a purely computational way. It kind of just does it based on what it thinks you think the output should be. And like I said, biologically these things don't exist. So you do have to use your own biological background to say, ‘yes, this makes sense’ or ‘no, it doesn't’. In that respect, you do have to be careful with AI.

There are some institutions that have banned tools like ChatGPT and Grammarly [1], [2], because anything you feed in becomes part of the training data used to further develop their models. This opens the possibility that in the future, LLMs and other types of machine learning tools could be hosted on a company’s own servers, so that any data going in is not exposed to the wider network or population.

If there were LLMs that you could host at your own workplace, so it's contained within your system, would you use this technology in your workflow? Or do you feel that there’s too much overhead in verifying the output, especially given stories of ChatGPT hallucinating and creating false citations?

As far as ChatGPT goes, I don't use it in my day-to-day life. I know some colleagues who do use it, but I think they would also say that they are careful to check the output afterwards. It does save a lot of time because it’s an automated way to get your work done, but you always do need to verify exactly what the output is.

There appears to be a trade-off for some users, in that AI isn’t trusted enough to be left alone to do a job, and therefore requires supervision and proof-reading, which in itself becomes another task to complete. In some cases, this could create more work than it saves.

As a PhD holder, you've had to read and consolidate a lot of articles. Do you think making use of tools like Scholarcy would help, or could this type of software have helped you during grad school and even in the workplace today?

I think it definitely would have sped up my PhD, because I was the type of student who would just avoid reading papers until I really had to. So it was only when I was writing up my thesis that I actually started properly reading papers. If I could have simply gathered all of the information I thought I needed and been guided on what would actually be most relevant and useful to me, that could have saved me a lot of time.

The average researcher will read over 250 academic papers per year. Assuming 250-260 workdays per year, that equates to roughly one article read per workday! During my PhD, I certainly didn't manage one paper per day. I would group a lot together and have to grind through them. It wasn’t until the end of my PhD that I started using Scholarcy. Using Scholarcy’s summaries meant that I could screen and review a batch of papers, sometimes 20 in one day, and then read in depth the articles worth spending my time on. A lot of PhD students have told us that it’s been a life saver for them, particularly when it comes to screening articles.

Do you think that in your workplace, there’s a lot of time that could be saved if there were trusted AI tools to help expedite certain processes such as information gathering?

I think it really depends on the research groups. So for example, in my group we focus more on who the last author of a paper is. But I think maybe PhD students could benefit from reading technology, especially if they aren’t experienced in speed reading papers. You don't need to read every single word, and you don't need to read it in order. So if there was a tool where it could just summarise the main points for you, which there is, then yes, I definitely think it would be useful for PhD students. I would say for postdocs, maybe not so much, just because they tend to know exactly which papers they actually want to read.

Paraphrasing, writing, and grammar checking tools were some of the first AI tools on the market, and the first to be widely accepted. Have you had any experience using those tools, and do you have colleagues with experience using them as well?

I haven't actually used any of them, and I don't think anyone else I know uses them. It's really interesting. I think it's more, like I said, depending on the group that you're in: some group leaders may have reservations about using those types of tools, especially if they're a senior leader. They have their own ways of doing things, so no, I've not really heard of any.

It sounds like the academics this researcher works alongside have an established workflow and routine, and may feel some resistance to change when it comes to incorporating AI tools into their work habits.

In a previous interview, Anthony, a university lecturer, talked about how some of his PhD students use tools like Grammarly and other types of spelling and grammar checkers to improve the standard of their writing. It's quite possible that in the research institution where this researcher works, people have already developed an academic writing style to such a rigorous standard that it’s second nature to them. A tool like Grammarly might not add anything to their workflow, as they are not making the types of mistakes that Grammarly is there to correct. These scientists have paraphrased hundreds, potentially thousands, of articles over their academic careers, and it’s second nature for them to rewrite complex academic articles quickly and efficiently without the aid of an AI writing tool.

The scientific review process is known to take a long time. How would you feel about an AI system reviewing your article before it becomes published?

I think, as with most AI tools, it would speed up the process of getting your paper published, and actually help you to beat competitors. It could reduce the publication delays that arise when multiple reviewers need to consider your article. But as always, there are caveats. I don’t really know how the technology could effectively review an article, because when you send a paper to a journal, they review it based on whether they think it will fit into the wider research: what will it contribute to science? Does it fit with the journal? How have you answered the question in your title? I mean, if AI is smart enough to do that, then why not? But I think there will always need to be some sort of human supervision to check what the AI has actually determined.

One of the major challenges for a journal, and for researchers, is finding a research gap. How can they ask, and answer, a question that has not been answered before? It’s quite hard for AI tools to come up with these questions by themselves, but tools do exist to help researchers and academics to find these knowledge gaps.

OpenAlex, Scite.AI, and Scopus are tools which many academics rely on as part of the knowledge discovery process. Using academic search engines alongside analytical tools is a standard procedure performed by academics across the world, one which has not yet been fully replicated within an AI tool. An AI academic review program would have a difficult time understanding whether an article is unique because of its content or purely because of the way that it is written. Whilst tools like Grammarly can help researchers and students with their writing, especially those who are relatively new to academia, by the time a research article has been submitted to a journal it will have been proofread over half a dozen times.

The sentiment of academics towards AI services may mean that the academic review process will never be entirely automated. The attitude of ‘trust but verify’ rings true within the academic community on the subject of AI, especially when validating the output from an article summarizer.

References:

[1] ‘ChatGPT banned from New York City public schools’ devices and networks’, NBC News. Accessed: Feb. 22, 2024. [Online]. Available: https://www.nbcnews.com/tech/tech-news/new-york-city-public-schools-ban-chatgpt-devices-networks-rcna64446

[2] ‘From ChatGPT bans to task forces, universities are rethinking their approach to academic misconduct’, University Affairs. Accessed: Feb. 22, 2024. [Online]. Available: https://universityaffairs.ca/news/news-article/from-chatgpt-bans-to-task-forces-universities-are-rethinking-their-approach-to-academic-misconduct/
