Generative AI has made remarkable progress in recent years, prompting widespread discussion and application across various domains.

The research question

As we navigate developments in Generative AI, it is crucial to examine who benefits and who might be harmed by its use. Researchers at The University of Edinburgh are examining the ethical and practical use of large language models (LLMs) in specialised fields through multiple co-ordinated strands of research considering the security, accuracy, fairness and bias of such models.

Academia is particularly well-suited to this task, as it can evaluate models independently, without the commercial biases that may influence private sector organisations' assessments of their own technologies. This allows for a more objective and impartial analysis, ensuring that the benefits and potential harms of AI are thoroughly examined.

Image credit: Jamillah Knowles & Reset.Tech Australia / Better Images of AI / Detail from Connected People / CC-BY 4.0

Project aims and objectives

This strand of research explores LLM capabilities in specialised content creation, the resilience of these models to challenging inputs, the societal impact of AI-generated misinformation, and the ethical consideration of bias in AI-driven decisions.

Research on harmful bias in AI has often concentrated on measures and metrics that can be mathematically defined and directly assessed within the model. This approach assumes that social biases against specific groups are inherent properties of the model that remain consistent across all applications. However, the risks that can arise from AI and AI bias are so varied that we cannot hope to capture bias this easily without considering the specific context in which AI models are used and the specific harms that their inappropriate use can cause in those contexts.

Objectives

- Test how well LLMs can handle complex and specialised content
- Find and reduce biases in AI to ensure fair and ethical decision-making
- Improve language models' resistance to misleading or manipulative inputs
- Study the impact of AI-generated content on public opinion and find ways to reduce misinformation
- Create guidelines for the responsible use of LLMs in different areas

Implications

This strand of research gives rise to important implications for policymakers, educators and developers.

For policymakers, the research highlights a clear need for comprehensive AI regulation that is responsive to the rapidly advancing development and application of the technology. Key regulatory questions raised include what to regulate and how to do so.

In the field of education, the research aims to contribute to discussion of how to educate the public about LLM harms and how to teach critical evaluation of AI-generated content and its appropriate use.

For developers, vital questions must be answered about the transparency with which AI companies operate and the tools they employ to detect and mitigate bias in LLMs.

Research leads

Björn Ross, Lecturer, School of Informatics, University of Edinburgh
Eddie L. Ungless, PhD Student, School of Informatics, University of Edinburgh

This article was published on 2025-01-15