(S8E6) How Generative AI could shape the future of career development
In this episode, host Taryn Bell speaks to Chris Webb (Careers Consultant, University of Huddersfield) about the potentials and pitfalls of Generative AI (or GenAI) in supporting researchers' career development.
We discuss what GenAI is, what tools researchers and researcher developers can use, and where we need to be careful about their use.
The main points covered include:
- How GenAI can support researchers both as a first port of call during the job search, and as a way to do your current job more effectively
- The continuing importance of 'domain experts', i.e. real people rather than chatbots!
- The need to use GenAI responsibly and thoughtfully.
Resources mentioned in this episode:
- The Foresee Framework, designed by Chris and Leigh Fowkes (Open University)
- Dr Philippa Hardman's DOMS Blog
- Dr Ethan Mollick's One Useful Thing Blog
We also highly recommend signing up to Chris's own newsletter, The Week In Careers, for a regular digest of all things careers and higher education!
All of our episodes can be accessed via the following playlists:
- Research Impact with Ged Hall (follow Ged on LinkedIn)
- Open Research with Nick Sheppard (follow Nick on LinkedIn)
- Research Careers with Ruth Winden (follow Ruth on LinkedIn)
- Research talent management with Tony Bromley (follow Tony on LinkedIn)
- Meet the Research Culturositists with Emma Spary (follow Emma on LinkedIn)
- Research co-production
- Research Leadership
- Research Evaluation
Connect with us or leave us a review on LinkedIn: @ResearchUncoveredPodcast (new episodes are announced here). You can connect with Taryn or Chris via LinkedIn, too.
Transcript
Welcome to the Research Culture Uncovered podcast, where in every episode we explore what is research culture and what should it be? You'll hear thoughts and opinions from a range of contributors to help you change research culture into what you want it to be.
Taryn Bell [:Welcome to Research Culture Uncovered. I'm Taryn Bell, your host for today, and I'm a Researcher Development Advisor at the University of Leeds. Today we're going to talk about a topic that is both timely and really interesting, and that's artificial intelligence, or AI. In 2024, you could be forgiven for feeling like AI is just a buzzword. It's mentioned in every advert, every TV show, and every company seems to want to prove to us that they're using it. But for many of us working in universities and in researcher development, we have questions. Should we use AI? If we do use AI, how do we use it responsibly? And what impact might it have on how we support researchers in their career development? Now, I'm interested in AI, but I'm no expert in it, so I'm very lucky to be joined by someone with tons more experience than me, Chris Webb, a Career Consultant at the University of Huddersfield. He's Co-Chair of the AGCAS Graduate Transitions task group.
Taryn Bell [:He runs a very popular newsletter on LinkedIn, The Week In Careers. He co-hosts the Career Development Institute's podcast We Are Careers. And importantly for today, he's also co-creator of the Foresee Framework, a reflective professional development tool designed to help individuals consider their position on generative AI and how this might impact and influence their career. So welcome to the podcast, Chris.
Chris Webb [:Thank you very much for having me, Taryn. I would also not describe myself as an expert. Other than maybe a handful of people, given the pace at which these tools are moving, it's still very hard to find someone who would confidently describe themselves as definitely an expert.
Taryn Bell [:Absolutely. I've already mentioned a term, GenAI, or Generative AI. What does this mean and how does it differ from AI as a whole?
Chris Webb [:Yeah, it's a really good question, and I think we're often seeing that sort of blanket term of either AI or generative AI. Coursera have got quite a good glossary that picks apart the two terms. They describe artificial intelligence as the simulation of human intelligence processes by machines or computer systems; so AI is being able to basically mimic human capabilities like communication, learning and decision making. When we're talking about generative AI, we're talking about the type of AI technology that uses AI specifically to create content, to generate content, and that could be text, video, code or images. Generative AI systems are trained on large amounts of data and basically look for patterns in that data in order to generate new content based on the data they've been trained on. So those are the very simplistic differences, I guess, as a starting point.
Taryn Bell [:So I guess then AI is something that's been used for years, but generative AI is the stuff that a lot of us are probably seeing on social media, the kind of generated images or songs.
Chris Webb [:Yeah. So a lot of the stuff that you'll have heard through social media, and through mainstream media generally, over the past year or 18 months, like ChatGPT, Google Gemini, Claude and Midjourney (which is the image creation software), all falls under that bucket of generative AI.
Taryn Bell [:Is it just a fad or is it really here to stay?
Chris Webb [:You probably won't be surprised to hear that, for me, the answer is definitely no; I write a lot about AI. I tend to sit in the same camp as quite a few of the individuals I read who write on AI regularly. One example is Dr Philippa Hardman, who likens AI to what we describe as a general purpose technology, in terms of how it works with different systems and tools that we use moving forward. A good example of where we're seeing this already is in stuff like search, where before long, using these types of tools like ChatGPT will replace googling things, but also in how we're seeing this technology brought into different tools we already use: phones, cars, PCs. It's this idea that we'll generally interact with AI in lots of different ways moving forward, even if we don't know exactly what that will look like. Certainly, since the original launch of ChatGPT, we're about 18 months in now to the 'hype cycle', so to speak.
Chris Webb [:So it's fairly understandable, I think, that people are going through what's described as this 'trough of disillusionment'. We're getting a bit more cynical now. We've seen a lot of the stuff it can do, and some of it looks great, but some of the stuff we might be seeing on social media actually looks a bit naff. Perhaps that's causing us to question how impactful this is really going to be. But the American scientist, researcher and futurist Roy Amara talked about this tendency to overestimate the impact of a technology in the short term and underestimate the effect in the long term. And I think that's probably a bit of what we're seeing with generative AI.
Taryn Bell [:Now, it's interesting, because universities are a funny environment when it comes to AI. The conversation, it feels, is all on one thing: can students use AI to cheat on their coursework? But what I'm really interested in is thinking more broadly about how AI is being used, not just by students but by staff. I think most universities now do have really clear guidelines for students, but as staff there are some questions about how we use AI, and whether we use AI to help support students and researchers with their career development. And there is a lot of discussion among careers professionals right now about the potential of AI. So could you talk about that a little more, and about some of the tools that are available for researcher developers or career consultants to help support their communities?
Chris Webb [:Yeah, it's a really good point to make. And actually, we've been delivering some training at the University of Huddersfield recently for staff around AI use, and what we've been breaking it down into is this idea of applications and implications. So, yes, it's really important to understand this as a technology. We know it's going to be foundational, we know it's going to impact people's work in different ways, so it helps to have a rough idea of how it works, how the different tools work and how you might be able to use them. But it's also important to take a step back and think: when, where or how might I use this, and when might I consider it inappropriate to use this technology? Are there benefits for me using AI here? And if I use it here, does this maybe set a dangerous precedent for using technology for this part of my day-to-day work? That side of things is something we've seen come out quite strongly in the training we've delivered, which is really interesting. I think, for career development professionals, we're going through a similar sort of thing at the moment, where you've got people sitting sometimes on the evangelical side, who say, okay, I'm really into this, I'm going to use it for everything; people who sit on a very skeptical side, saying, actually, I'm really worried about how this might impact the work that we do; and a lot of people looking out there for advice.
Chris Webb [:And I think what's interesting about generative AI is that, because it's still such a new technology, relatively speaking, there are no absolute master guides or tools for unlocking the abilities that are out there. This is partly because we still don't know exactly what the full limits of large language models are, or what they actually respond best to when it comes to prompting. So there's been some really interesting research where they've created prompts that offered ChatGPT money, and it started producing better outputs; they've been more polite to it, and they've seen better outputs than they did when people were being rude to it, which is quite odd, and quite different to a lot of other technology in how unpredictable it is. So I think the first thing, when it comes to career development or researcher development, or for people that are working with clients to support career development, is that it's really important to consider what problem or problems you're actually trying to solve, and how GenAI could fit in there.
Chris Webb [:So, for example, you might be looking to help people scaffold career exploration and research. Maybe they're thinking, okay, I'm studying a topic at the moment, or I'm undertaking a piece of research at the moment, and I'm really interested in what I'm doing, but I'm not entirely sure how I'm going to connect this back to the future of my career development: where I go with that, where I take the research afterwards, whether I want to translate it to industry. That's where generative AI can be super helpful, because it can offer that kind of scaffold, if prompted correctly, to provide a starting point to help people explore different options. But it might also be that you're wanting to help PhD students or researchers to surface translatable skills, professional skills gained through academia that could be translated into industry. And again, generative AI can be really good for that type of thing. But you need to know what you want to do with it before you can start getting the most out of it, by thinking about what outputs you want to get, what inputs you need to put in, and what process or problem you're trying to solve. If you don't do that, it can be really difficult, because there's so much advice out there, and so many mega prompts promising 'do this and this will work really well'. But what a lot of recent research has shown is that the individuals who tend to get the most out of the technology are the people with the most domain knowledge. In the career development profession, the best use cases we're tending to find are people who are reflecting and experimenting, but who've got that domain knowledge to back up judgments about whether an output is useful or not.
Taryn Bell [:So what I'm hearing there is AI is not necessarily like a tool that's going to fix everything. It's a tool to help us do our jobs.
Chris Webb [:Yeah. And I think that is potentially open to change. One of the big unknowns at the moment with AI in general is that there are a lot of reports out there talking about job displacement, and a lot that talk about job overlap: how much, what percentage, of your role might overlap with AI? But the truth is, at the moment, we don't have any hard and fast knowledge about how it's going to impact the job market over a longer period of time, which jobs might be completely automated, and for which it will become more of a copilot or a tool. That's the interesting thing about it at the moment: compared to some other technologies, there's still a bit of that to be worked out. But, yeah, absolutely, there's so much connection between the inputs and the outputs in terms of what you get back. If you put something very generic into a lot of these systems, you're not going to get something that useful back.
Chris Webb [:Whereas if you look at artists like Sir Peter Blake, for example, who've used AI as part of the exhibitions they've done, they've often described it as having a different style of paintbrush in their toolkit. Not everyone feels that way, and it's important to say that some people feel much more negative about the potential impact of AI. But we're seeing a lot of professionals in a lot of different domains who are definitely drawing on that analogy to say, actually, for me, it's another tool that I can use, but ultimately I am the person behind this.
Taryn Bell [:I think this is an excellent time to talk about the Foresee Framework, because when you're talking about evangelists and people who are a bit more critical, I think there's a lot of difference between the way that we perceive the use of generative AI and the way we're actually using it. So could you tell me a little bit more about how you developed the framework, and how you see it as something to get people talking about this topic?
Chris Webb [:Yeah, I'll give credit here to my co-creator as well: that's Leigh Fowkes, who's a Career Consultant at the Open University. At the start of 2024, we'd been talking quite a bit about this idea of applications and implications, and feeling as though over the last year there had maybe been a little too much focus on the former, without always talking about when, where and how we should be using it, when it might be inappropriate, or thinking about it from a specific professional context. That was something we thought needed to be more of a conversation. And we also realized that in a lot of workplaces, training and development around AI was very mixed. A lot of people were saying, we need training, but we don't necessarily know what to be trained on, or where do I start, all those kinds of things. So we created this framework, which has four quadrants. It looks at these areas: confidence, so awareness of AI in general and how it works; competence, in terms of being able to use AI tools; criticality, in terms of your particular position on AI, your ethical concerns, and where you'd deem it appropriate to be used and where maybe not so much; and then context, which is very much about how influential or impactful you feel generative AI is going to be in your professional setting. And of course, you could use that type of framework with any technology, but we were focusing specifically on AI. We also built a sort of chatbot quiz that people can take, which helps them come out as one of 16 different AI animal buddies, because we know people like comparing themselves to animals. And the idea behind this really was to get people, individually or in teams, reflecting on where they currently sit on generative AI, and also on what their training needs might be. That was kind of where we came from.
Chris Webb [:And I think really that's where a lot of people, a lot of professionals are finding themselves at the moment. You're starting to think about, okay, how is this tech really going to overlap with my day to day work, and what do I do about that?
Taryn Bell [:So if someone has no experience with generative AI so far, hasn't had a go at these tools at all, where would you advise them to go to get started?
Chris Webb [:The best thing often that you can do is talk to lots of people in your network, find out what they're reading, and try to get a bit of a holistic spread. From my perspective, two that have been really, really helpful: I mentioned Dr Philippa Hardman earlier, and her DOMS blog, which focuses on the use of AI in education, is a really great read. And then Ethan Mollick is quite a well known writer on the subject of AI. He has a Substack newsletter called One Useful Thing, and he's also the author of Co-Intelligence: Living and Working with AI, which is quite an interesting book to read on the subject. Those, I would say, are really good mainstream resources for getting your head around AI.
Chris Webb [:Both of those authors talk about it in, I would say, a very clear and straightforward way, without being overly technical, which is really useful. And particularly from a career development perspective, it's really worth looking at career development bodies like the Career Development Institute in the UK, because a lot of them are now producing specific AI courses. The CDI have a very specific AI course for career development professionals that's had really good reviews so far, and the idea there is to help people start considering what might be the most suitable use cases for their practice: building a library of prompts, templates for using AI, those types of things. Those, I would say, are really, really good starting points. Definitely.
Taryn Bell [:How can researchers themselves use AI to support their career development? I do often hear this assumption among researchers that they can sort of pop a job specification in there and get a nice CV and a cover letter. But if we want to think in a more nuanced, and definitely more ethical, way about using AI, how can our researchers use or benefit from these tools?
Chris Webb [:Yeah, it's a really important point. Again, like we were saying earlier, it's that sort of garbage in, garbage out: yes, you can do some really simple things on there, and you might get something that at first glance looks really useful. But ultimately we come back to that same question of what's the problem you're trying to solve. Yes, that takes away a lot of the time that might go into creating something like that, but it's still only one part of the recruitment process, and it doesn't necessarily get you that much nearer to securing a job. So for me, there are two key ways that AI can be used to support career development. One of them, as you've mentioned, is around the typical career management activities we'd expect to see. So things like career exploration, thinking about different pathways that might connect your skills and interests; research, so wanting to find out about an industry you have a particular interest in, and getting a summary or a snapshot of it before you research further; and recruitment processes, obviously. So things like creating impact statements for a CV or an application, based on the criteria in the job description and some of your own experiences, or generating practice interview questions based on the job description. That's something that generative AI does do quite well.
Chris Webb [:But I think the other thing that a lot of people don't think about when it comes to career development is actually using generative AI for your day-to-day work. This is something we're seeing more and more people experiment with at the moment, and it's really, really important. It might be about making efficiencies within your day-to-day work, it might be about generating new ideas, but this is somewhere that curiosity and experimentation are really the only way you can work it out. So one activity that's really useful to try with things like ChatGPT or Google Gemini or Claude, any of these chatbot systems, is to take your job description, or the type of job description for a role you're particularly interested in, copy and paste it into one of those systems, and ask it to explain how it would complete the tasks needed as part of that job role. I've done this, for example, with my career consultant role. I picked out a particular task around creating learning resources, and the AI was able to break this down into six separate tasks: things like undertaking a needs analysis, piloting content with user groups, or sharing with stakeholders via social media. You can then ask the AI, okay, which bits of these tasks are you best placed to help me with as an AI copilot? And it will give you things like, okay, I can definitely help with generating ideas for videos or templates for transcripts, but I can't yet create the videos for you, or interview the subjects for you, because I've not quite got that capability yet, although that may be coming.
Chris Webb [:So this sort of exercise can give you a really good starting point for looking at, well, where might AI offer value in my day-to-day work? Are there areas I've not really thought about yet? Maybe I do a lot of administrative tasks, a lot of minuting, summarizing, transcribing; is there a way to collate sources for research really quickly? There are ethical considerations we'll come on to here, but doing this sort of exercise can really help with thinking about where AI can assist you in your day-to-day work. And that then helps to give an idea, from a career development perspective, of where this technology might be going within your field and what you need to be aware of. And of course, when it comes to applying for jobs, you're then in a position where you have an understanding of a frontier technology that you can talk about as part of your commercial awareness for moving into your next role.
Taryn Bell [:Yeah, that's really interesting. It's really interesting to think of it more broadly, beyond just getting you ready to apply for a job; I like how you're thinking a bit further ahead there. Thinking beyond individual uses of AI, from a sector-wide perspective, what do you see as the broader potential for AI tools to shape university culture and higher education culture?
Chris Webb [:For me, it's very much that idea of personalization that I think is going to be a real potential benefit of AI. Pretty much everyone, at some point, is going to be able to use AI on their phone to comb through all of the information available, very much like we could with Google, but in a much more dialogue-based and user-friendly way than what we could get back from most information sources. And that as a starting point potentially creates a really strong baseline that then allows people to come back to domain experts. Rather than coming to them with that starting point of 'I have no idea what I'm doing' or 'I've got no idea where to start or what career I want to do', they're perhaps coming from a more informed perspective, and you've got those domain experts then taking more of a coaching approach: yes, there's someone there perhaps empowered with a bit more information, but the expert is there to offer that nuance, that human-level skill that we know is needed in those types of conversations. So I think the personalization side is potentially going to bring a lot of value for things like healthcare, education, skill development, careers development, and things like financial planning. There's a huge number of areas it's got benefits for, because obviously the AI, particularly on localized things like phones, is going to know you, your preferences and all those sorts of things.
Chris Webb [:That is potentially going to be a huge benefit. But of course, the flip side is that it all depends on how it's implemented by tech companies, how it's regulated by governments, and who ultimately stands to benefit from it. So I'm an optimist at heart, but there are many people who feel quite differently about where this could be going.
Taryn Bell [:Yeah. Obviously, one of the things that I most often hear from people when they're nervous about AI is those pitfalls, whether ethical or environmental. Could you go into a little bit more detail about some of those?
Chris Webb [:You can break it down into a few different categories, really. I think data is probably a really big one, more so perhaps than any other category. There's that real worry that, as the data analysis tools within AI systems become better, individuals will start plugging commercially sensitive, confidential data into these tools without really thinking about where the data goes. We've seen much more severe regulation in the financial industry, for example, because what comes out of these systems needs to be absolutely accurate or it's going to affect global markets; there's much more caution there. And I think even on a personal level, if we're putting personal data in, we want to know that it doesn't end up somewhere else on the Internet or in someone else's search. So the data concerns are completely valid, both in terms of using it to analyze data and what it feeds back out, and in terms of what happens to the data that goes into it. We've also got a lot of worries from organizations and individuals about this idea of AI generated content.
Chris Webb [:So if this is used externally, what does that mean from a copyright perspective? It's still very murky, very gray there. These systems are trained on huge amounts of data, so which part of the content that's created has maybe been taken from something else, and how do you prove that? There have been a range of court cases already. And of course there's possible reputational damage if somebody uses AI to create content and puts it out there into the ether, and either what's returned is absolute rubbish and hasn't been fact-checked properly, or there's a risk that it's just repeating stuff that's already out there somewhere else. That's why, for me, the domain knowledge is still so important: you need people who understand the content you're asking it to create, so that there can be that element of fact-checking, of making sure that what goes out is accurate. But beyond that, there are a lot of these ethical concerns, as I say: the data that goes in, the content that's created, and the bias in what comes back out in terms of what's included in images, for example. This content is trained on data that we've created.
Chris Webb [:We are a biased, imperfect society; it's returning biased, imperfect content. So there are all of those worries, and I think underpinning all of that is very much that existential sense that we need to come to a decision, as a society and as professionals, about what work really needs to be done by us as humans. Where is there a real material benefit, and maybe limited ethical concern, in just delegating something over to AI? Transcribing huge reams of notes or summarizing key trends from reports? Okay, some of that might be fine. But what about stuff like writing, creating? As a writer, I get really worried by a lot of the very generic AI-created content I'm seeing out there; it's concerning. And on the flip side of this generic content, we're seeing very specific, tailored content that could really impact things like world politics: these deepfake videos, where you might see ones of Gareth Southgate or Donald Trump.
Chris Webb [:And how does that then affect what we view as truth on the Internet? Where does that take us in terms of the information age? Where does it take knowledge workers, if all of this content is out there but we can't necessarily prove where it's come from? So there are a huge number of ethical concerns, and I think it's important that, alongside all of the applications and using these tools, we're comfortable having these conversations and talking about things like digital equity: who has access to these tools, which tools are better, which ones cost money? As we continue to have these conversations, I think it feeds into all the other stuff we're doing around professional development.
Taryn Bell [:Absolutely. And speaking about digital equity, I think there are also positive implications from AI there. From my perspective, and from a university perspective, there are two major implications of AI for improving culture. First, as you've already mentioned, it provides an excellent starting point for researchers before they come to us for that domain-specific knowledge, expertise and coaching. But I think there's also a broader implication for creating an environment for researchers where it puts them on a level playing field with access to information, whether you're at a university with a lot of support, or at a smaller institution where the researcher development or careers team is a lot smaller.
Chris Webb [:Yeah, I think it's a really good point. One of the things you've just mentioned there, where AI has definitely got the potential to be beneficial on a much broader level, is this idea of scalability. This is something we've talked about in the career development profession: the idea that you could create a chatbot (and many, many organizations are doing this at the moment) programmed with the relevant guardrails, that is, the relevant ethical approach to how it might give out information or advice, where it would caveat stuff, and where it would signpost to a professional when people needed that access. And by creating something like that, available 24/7 for free, what you're suddenly doing is giving a much larger group of people access to a baseline of pretty decent careers information and advice, a pretty decent starting point, as you said, and then the ability to go and seek a professional if they need that additional help and support.
Chris Webb [:I think the devil will be in the detail, in terms of whether people get too reliant on chatbots, or too reliant on AI technology to basically do everything for them, and then less likely to actually speak to a professional and seek that domain expertise. So that's the area where, over time, there will be that working out of how we get the balance right: how do we ensure that people aren't just using AI as a quick fix for everything, but are actually thinking carefully about where it is most useful for them? And coming back to your point about creating a culture around that, I think that's where it's really important not just to have the training and development side of things, but also to create that collaborative culture of sharing. So if someone has created a really useful custom chatbot that handles a part of a job shared by many different teams across the institution, are they sharing that? Are they sharing how they made it, and how it could be used for other use cases? Are you keeping a centralized prompt library within the institution, or amongst different teams within the institution, so that if you've found something that works really well for, let's say, generating practice interview questions for clients or researchers, other people can make the most of those resources too? So I think a lot of it comes down to similar things we'd expect from other kinds of training and development: this idea of being collegial and sharing ideas. But I think it's even more important with AI, because it's such a general purpose technology and it's so universally transformative. I think there has to be an even greater emphasis on people coming together for a collective approach to this, rather than a pocketed or siloed approach.
Taryn Bell [:This has been so interesting. You always give me so much to think about when it comes to this topic, but that is all we have time for today. Thank you so much, Chris, for coming along to share your expertise and chat with me about AI and careers.
Chris Webb [:My pleasure. Always happy to chat about this topic in kind of any guise that people find useful.
Taryn Bell [:And if you're interested in learning more about Chris's work, we'll put all the links and resources in the show notes. As always, that includes the new Foresee Framework to help you think about how you use AI, as well as his podcast with the Career Development Institute, We Are Careers. So until next time, goodbye.
Intro [:Thanks for listening to the Research Culture Uncovered podcast. Please subscribe so you never miss out on our brand new episodes. And if you're enjoying the discussions, give us some love by dropping a five star rating and written review as it helps other research culturists find us. And please share with a friend and show them how to subscribe. Email us at academicdev@leeds.ac.uk. Thanks for listening and here's to you and your research culture.