Responsible Use of AI in Research Workflows

Artificial Intelligence (AI) is rapidly transforming how researchers discover, analyse, and communicate knowledge. While AI tools offer unprecedented efficiency, from enhancing literature reviews to supporting data analysis and writing, they also introduce new ethical, methodological, and integrity challenges.

This webinar provides a clear, practical framework for using AI responsibly across the research lifecycle. We will explore how to evaluate AI-generated content, prevent inaccuracies and hallucinations, ensure transparency in AI-assisted writing, safeguard sensitive data, and recognise potential bias in AI outputs.

Participants will also gain insights into global guidelines from COPE and major publishers, alongside Clarivate's innovations in the responsible AI space. Designed for faculty, researchers, librarians, and students, the session aims to equip institutions with the awareness needed to build a culture of ethical, accountable, and trustworthy AI use in academia.

Join us to learn how to integrate AI effectively without compromising research integrity.

Topics that will be covered:
 
• Introduction to Responsible AI in Research
• Understanding AI Capabilities and Limitations
• Ensuring Research Integrity in AI-Assisted Workflows
• Data Privacy, Security, and Sensitivity in AI Tools
• Ethical Literature Search and Review Practices
• AI-Assisted Writing: Transparency and Attribution
• Avoiding Bias and Ensuring Fairness in AI Outputs
• Using AI for Research Design and Analysis Responsibly
• Evaluating AI Tools for Trustworthiness
• Institutional Policies and Global Guidelines on AI Use
• Building a Responsible AI Culture in Academic Institutions
Register now