Episode 6

Published on:

12th Apr 2023

(S3 E6) First class research objects, FAIR data and where next for Open Science with Professor Hugh Shanahan

In our weekly Research Culture Uncovered conversations we are asking what is Research Culture and why does it matter? This episode is part of Season 3, hosted by Nick Sheppard who will be speaking to colleagues from both the University of Leeds and from other universities and organizations about open research, what it is, how it's practiced in different disciplines, and how it relates to research culture. In this episode Nick is joined by Hugh Shanahan, Professor of Open Science at Royal Holloway University of London.

Hugh has expertise in computational biology and statistics. He's co-chair of the CODATA-RDA Schools of Research Data Science and Vice-Chair of the World Data System Scientific Committee.

You can connect with Hugh via Twitter or LinkedIn

In this episode we talk about:

  • Hugh's varied academic background that led him to a professorship in Open Science
  • Different definitions of open science, which in addition to making scientific research more efficient and transparent, also encompasses research integrity, community and equality
  • How open research differs from open science, and that it needs to be defined by its practitioners in a specific discipline
  • The different professional roles that contribute to open research beyond scientists, researchers and academics themselves
  • Openness as a spectrum, that it's not all or nothing: it’s possible to take small, practical and useful steps which can be built on incrementally
  • Recent progress embedding open research principles in universities in the UK and beyond
  • The concept of the "first class research object" as the components of research - datasets, software and code, protocols - that are often as important as a final published journal article
  • Underpinning research infrastructure, and its importance to open science: repositories, preprint servers, computational notebooks and persistent identifiers (DOIs, ORCiD)
  • The relationship between open research training, and reward and recognition: people need the skills to be rewarded for the practice
  • The FAIR data principles and that data should be findable, accessible, interoperable and reusable. What do these concepts mean in practice?
  • Priorities for open science in the future, in particular bringing together generic support with more disciplinary expertise

Be sure to check out the other episodes in this season!


Transcript

[00:00:26] Nick Sheppard: Hi, it's Nick, and for those who don't yet know me, I'm Open Research Advisor based in the library here at the University of Leeds. You're joining us in season three of the Research Culture Uncovered Podcast where we'll be speaking to colleagues from both the University of Leeds and from other universities and organizations about open research, what it is, how it's practiced in different disciplines, and how it relates to research culture.

We also have some episodes from the REDS Conference this season.

But now I'd like to introduce my guest for today, Hugh Shanahan, who is Professor of Open Science at Royal Holloway University of London. Hugh has expertise in computational biology and statistics. He's co-chair of the CODATA-RDA Schools of Research Data Science and Vice-Chair of the World Data System Scientific Committee.

So welcome to the podcast Hugh and thank you for taking the time to join us.

Hugh Shanahan: [...]

[00:01:29] Nick Sheppard: And before we get onto, um, Codata and FAIR data, and you know, the definition of that acronym, um, I know FAIR data is a thing that you'll probably talk about, I must first ask you about Royal Holloway. Um, uh, so it happens to be, as I mentioned to you before, the, uh, university that I went to back in the nineties, and it's quite a spectacular campus, especially in the snow. Have you, have you had much snow in Egham?

Hugh Shanahan: [...]

[00:01:58] Nick Sheppard: But for those that don't know, it's, it's quite a spectacular building, isn't it? If you've not seen it, it's based on a French chateau, as I recall, Founders building?

Hugh Shanahan: [...]

[00:02:12] Nick Sheppard: Red brick. Yeah. A red brick chateau. Um, and again, I was just telling you...in actual fact, the very first time I went on the web was on campus in the geology department. I actually studied English, but I had a friend in geology, uh, and we went and played on the web, um, in about 1994. So that would've been the Mosaic browser, I'm guessing. I don't know. I'm not sure. That was the first browser, wasn't it back then?

Hugh Shanahan: [...]

[00:02:37] Nick Sheppard: So, um, so as I say, uh, thanks for for joining us today and, uh, I suppose to start with just a, perhaps a little bit of your academic background. I mentioned, um, so you're a biologist?

Hugh Shanahan: [...]

Um, at that stage I kind of got tired of tramping around the world. Um, and I got, um, a fellowship working in bioinformatics. Uh, so that was working with Janet Thornton's team in, she was then based at UCL in Central London, and then we moved out to the European Bioinformatics Institute. Um, and I was there until, uh, 2005. So, uh, at that stage I started working in the computer science department. So, yet again, another discipline change. Um, so I was there from 2005, uh, although it was about, uh, two thousand and fourteen, fifteen when I, I kind of got more and more frustrated with trying to get to data and so on. And at that stage kind of started that journey into me thinking about, uh, moving into the open end of things, and I'd been kind of working on the, the CODATA-RDA schools. I'd been doing all of that. And then to the point where, uh, when I actually got my post as, as professor at, at Royal Holloway, I decided okay, it's time to, to sort of step out and, uh, I almost said come out of the closet. Um, uh, but yeah, I suppose it was a bit of a coming out moment when I said, Okay, I am, I'm labeling myself as Professor of Open Science.

Nick Sheppard: [...]

[00:05:02] Hugh Shanahan: Yeah. So I wish it was something which was, which was more exotic, uh, than that, but you know, if you go to the HR department at Royal Holloway, they'll put me down as professor. Uh uh, the Professor of Open Science, the open science bit, that's the thing that I put on my door basically. And I decided from day one, this is, this is what I, this is what I, this is what I want to do. Um, I'm not the only person to, to, to sort of do this. I mean, you know, looking around amongst my colleagues. Everybody does this. So, you know, you have professors of machine learning and we have professors of software, language engineering and so on. And I sort of simply said, yeah, okay, fine. If they can do that, I can do that as well. Which I think is, you know, there's that quiet lesson in academia, which is um, ask for forgiveness rather than asking for...ask for permission.

Nick Sheppard: [...]

[00:06:05] Hugh Shanahan: So far so good. Nobody's, nobody's knocked on my door and, or, or, uh, you know, called me up from the higher offices and said, ah, um, Hugh, uh, yes, wanted to talk about this title you've been giving yourself? So I'll run with it until somebody says no.

Nick Sheppard: [...]

[00:06:41] Hugh Shanahan: Yeah, I think I, I think so. Although so, so number one, there's kind of, uh, the language difference. So, uh, you know, in British English there's, there's definitely distinction between science and research and the need to be more inclusive. Um, uh, other countries tend to be more relaxed about it, so, you know, if you're in Germany...

Nick Sheppard: [...]

[00:07:12] Hugh Shanahan: Yeah, yeah, and, and likewise, I think in the US, you know, in North America it tends to be, still tends to be open science. But that...now A. that said, uh, totally get the fact that there needs to be open research, and in fact, if you go and look at Royal Holloway's policies on this, the conversation is always about open research rather than, than open science. Um, because we think, yeah, a lot of the practices, they map over, although, at the same time, personally, I'm always trying to be careful because I don't want to end up, um, you know, telling...hey, you digital humanities folks, get a, get a grip, this is how you do your stuff, you know? And it's just like, no, no, no.

Nick Sheppard: [...]

[00:08:14] Hugh Shanahan: absolutely.

Nick Sheppard: [...]

[00:08:30] Hugh Shanahan: Yeah, so I think being an academic I should sort of pull out the, uh, 10 minute seminar description, yes, well, this was...it appeared here at this point, and then here are these...and here are the different interpretations and so on. I'm gonna be lazy Nick to be honest with you, and I'm gonna give you what, what I think of open science as, and I want to try and also keep it...I'm gonna go for as minimal a definition as possible, and then what I wanna do is just try and unpack that a bit, if that's okay?

Nick Sheppard: [...]

[00:09:10] Hugh Shanahan: So, the way I would talk about open science, and I'll try and I'll talk about open science initially and then move on to, to maybe to, to open research, is, you know, in, in two sentences.

It's...open science is a set of practices to make scientific research more efficient and effective when we live in an era when the questions that we're thinking about have become more complex, more challenging, and also are data and computation driven.

Underlying a whole variety of different, sort of, aspects that sit with the open moniker, in this, is that there's, that, there's this principle of being as transparent as possible during the process of scientific discovery. Now, what I would also do is to say that with respect to open research, you are extending that remit in terms of, in terms of saying, yeah, actually there are many areas of research which are, again, facing bigger issues and are very often being driven by large data and so on. And hence the process of being transparent in terms of what you're, you're doing is, is...it also holds there.

Now, I think it's, it's worth unpacking that a little bit now, trying to come up with something which is short and pithy and so on, but then you're gonna go, oh, what is all that? And I, what I wanted to do is just spend a little bit of time talking about the things that aren't included in that definition. Alright? And, uh, because. I, uh, there are, if you know, uh, during, you know, these interviews, I'm sure you're gonna talk to a variety of different people who say, well, what's open research? What's open science? And you'll get a different spectrum of opinions. Okay. And I'm, I'm not here to sort of say, well, I'm right because I, you know, because I put Professor of Open Science on my door. So I'd like to kind of just explain those things which I don't sort of mention in there.

So the first point is, is that in those two sentences I made no reference to community or collaboration and those, those are really, really sort of important ideas. Alright. Uh, uh, I would argue and say they kind of flow from the idea of, well, if you're gonna be transparent, if you're gonna be sharing stuff, actually by fiat, you know, you start working with people and start realizing, hey, hang on, we've gotta, we've gotta share standards and so on.

There's no reference to research integrity, even though, I think the more that time goes by, the more we kind of realize how grey the landscape is and how much we, and I mean, that's, that's everybody - that's the academics, that's research pro...you know those in professional services, that's the funders - uh, all need to have a good, long, hard look in the mirror of ourselves and say, we need to reform ourselves.

There's no reference to equality. Uh, uh, in terms of, uh, the fact that, you know, that's kind of an elephant in the room here is that I'm chatting away and I'm...then say, I put open science on my door because you know what? I say that, I say so, and I, so with a certain level of confidence and oh yeah, I'm a white middle class cis male. Uh, uh, so I can, I can afford to be a bit cheeky. And, you know, so there's no reference to those kinds of kinds of issues there, and there are a variety of people who really think hard about and say, no, no, no, hang on, this is super, super important.

There's no reference to expansion of roles in terms of sort of saying, no, no, no, it's not just the holy academic, it's the data steward, the research software engineer, um, the curator. All of those people who, who, who also sort of play roles.

Nick Sheppard: [...]

[00:13:47] Hugh Shanahan: Yeah.

Nick Sheppard: [...]

[00:14:12] Hugh Shanahan: Oh no, but I think I can guess

Nick Sheppard: [...]

So, I suppose that's one of the aspects I'm most interested in, is how this sort of underpins developing research culture, um, which as you say, isn't captured in that sort of raw definition, I suppose, of open science.

Hugh Shanahan: [...]

I think the things I would argue in terms of saying...focusing on that, that sort of minimal definition is to say that if you're clear about what it is that you're trying to do, which is to say, look, we want to be more efficient and effective, and what that means is, we need to put aside this relentless obsession with papers and so on, and kind of understand that there are lots of different ways in which we need to be better at what we do, and we do that by making sure that we share all those ideas, whether that be code or data or, uh, pre-registered reports or protocols or, you know, this whole sort of gamut of things that they enter into this milieu, then that enables us to have that stop and hard look at the culture and say we need to change things. I mean, if we think about different organisations that, you know, in the commercial sector or in terms of, uh, um, things like the military and so on, there's this sort of clarity in terms of what it is that you want to achieve, then you start saying, well it's ridiculous that things should be run in this, in this way.

As I said, I think there would be many others who would push against that and say, uh, no, culture first. You get the culture fixed, and then that follows. And I would sort of say, I totally respect that. I totally get what's being said there. Uh, and you know, respect is being said there in not just in a trivial kind of way. It is absolutely a total sense of respect. I guess. It's my way of trying to figure things through.

Nick Sheppard: [...]

I was looking at some of the things you pulled out there, you know, you talked about openness as a spectrum. You know, it's not all or nothing, I think there can be this sort of concern that, you know, if you're not doing everything, then you can't do anything at all. Um, but I suppose just, maybe a trite question, but have we made any progress in those 18 months, do you think? Are we moving in the right direction with open science and open research?

Hugh Shanahan: [...]

And what you see now is the kind of senior management of universities saying, yeah, we get this. We need to do these things. And you see them providing support in terms of rolling things out and thinking. So I think what you've got is definitely an acceptance of the importance of open research, open science by university management. And that's more than simply, yes, this is quite a nice idea, but, you know, make sure you still get five nature papers out tomorrow, sort of perspective. It's much more kind of saying, yeah, we need to do this and backing that up. And it feels like a lot of the effort now is sort of in the training side of things and raising awareness and also sort of saying, okay, how do we do this in a sustainable fashion?

Because we can't look at universities to sort of say...as you know, one of the things I'll argue is to sort of say, yeah, infrastructure costs alright? And furthermore, it's not just a once off, it's a thing that keeps on going. So how do you organise this in a sustainable fashion? So what I think you're seeing a lot more of is more awareness raising and starting to do things in a more kind of concerted campaign in terms of training and so on.

So that you get to a point where all academics are, at least in some way, aware of what the thinking is behind this approach. Not necessarily saying, like what I was talking to initially, not to say to them, now you have to change everything because with this is year zero. It's to sort of say, okay, here's a whole bunch of things that you could do. Pick one of them or could you pick one of them? Have a think about that, you know, is the stuff that's there.

Nick Sheppard: [...]

[00:22:07] Hugh Shanahan: Exactly.

Nick Sheppard: [...]

So myself working in libraries, you know, we've tended to be perhaps a little bit obsessed with open access, if that's not overstating it, and the journal article as the, you know, the final research output? And I think perhaps we're trying to get away from that a little bit, would you say in terms of a first class research object?

Hugh Shanahan: [...]

So let's think about it again from a UK audience is to say, what do we mean by first class research object? It's a thing which you can present at the REF is, you know, that's the most kind of blunt definition. And that's of course the thing that all academics suddenly... their, you know, their antennae start, kind of, looping up. And if you do look at the REF rules, even the previous one in 2016, the people who are organizing the REF said, yeah, it doesn't have to be a paper, what that submission is. Now the problem was that, you know, the relevant committees...everybody kind of second guessed the committees, and sort of said, well, no, they won't be interested in data sets or bits of software or anything like that, so there was a great deal of sort of self censoring. So it was like, in the 2016 REF, I think there was like a handful of non-paper based submissions to the kind of things that are there for the research.

And I don't know what's happened with the REF in whatever, the most recent round.

And, one of the things that, that I'm arguing is to say that we should think of data sets, the software that we write, even things like lab protocols and so on, all of those things should be things that we value because they're stuff that other people can go and run with and actually and go and make use of their research. And when you think about that in that way, that's I think an incredibly empowering thing because, you know, what usually happens is, let's be honest, what we have in research groups, is that there are people who are good at writing, and then there are other people who do all the other bits and pieces. And there's always that thing about, well, you know, this person is...x is really good, they just, they're not, you know, they don't write the papers, so how do we, we can't ignore them, but they're not one of us, you know, and if you start saying to them...Let me pick one example: person X writes pieces of software or workflows or whatever that really work really well. Now, if you, if you think of what they do as a research object, a first class research object, then they get the credit. Other people are using their stuff directly rather than trying to figure stuff out from the paper and so on, and people get on with doing things much, much more quickly.

Nick Sheppard: [...]

[00:26:52] Hugh Shanahan: Yeah. Yeah. Yeah. So I think the steps are being made in terms of...we now have, the thing that's really important that's happened over the last, say, 10 years is that persistent identifiers are now a thing.

Nick Sheppard: [...]

[00:27:18] Hugh Shanahan: DOI, and ORCID and so on, and correspondingly. So, and again, you know, you get DOIs for data sets. You get DOIs, you know, you submit something onto, to Zenodo or Figshare or Data Dryad, you get a DOI for your data set. You get a DOI for your software...there's the software heritage archive. All of these things you now have, so they're that first layer of sort of atoms that, you know, things that people can call and gradually we start seeing databases building up. Which are initially, right now fairly generic, but then gradually will become more domain specific so that people can answer this.
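To make that concrete: a DOI minted by a repository such as Zenodo or Figshare can be resolved by software as well as by a person clicking a link. The snippet below is a minimal sketch in Python, using the requests library and a made-up DOI as a placeholder; it asks doi.org for machine-readable metadata via content negotiation rather than for the human landing page.

```python
# Minimal sketch: resolve a DOI to structured metadata via content negotiation.
# The DOI below is a placeholder; substitute one minted by Zenodo, Figshare,
# Dryad and so on.
import requests

EXAMPLE_DOI = "10.5281/zenodo.1234567"  # hypothetical dataset DOI

response = requests.get(
    f"https://doi.org/{EXAMPLE_DOI}",
    headers={"Accept": "application/vnd.citationstyles.csl+json"},
    timeout=30,
)
response.raise_for_status()
record = response.json()

print(record.get("title"))      # title of the deposited object
print(record.get("type"))       # e.g. "dataset" or "software"
print(record.get("publisher"))  # e.g. "Zenodo"
print(record.get("DOI"))        # the persistent identifier itself
```

The same identifier keeps resolving even if the repository redesigns its web pages, which is what makes these "atoms" something other services can safely build on.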

I mean, I think one of the things that's kind of one of the projects I've been interested in is this thing called computational notebooks. So examples of this are things like Jupyter Notebooks and R notebooks and so on, which kind of put your code and your text and visualization all in one place. And, at some level, they're a fantastic tool for doing analysis of your data and in some respect they really are like the inheritors, for some disciplines at least, they really are the inheritors of the thing that papers should be doing next, all right. Because it's a very, very interactive, you can play around with it as you see fit, but it's only now that we're starting to get our heads around publishing notebooks. And there are, so for example, there's, there's something called NeuroLibre which is doing that. And publishers are trying, and there's a bunch of other efforts that are there to try and make that happen, but it's still not quite there. And I think one of the things is really, really key there is the search element. So, and that's because searches...you know, say, Google Scholar, everybody uses Google Scholar, but it's a service that could disappear tomorrow, so you know...
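To give a rough sense of what "code and your text and visualization all in one place" looks like, here is a minimal sketch using the "# %%" cell convention that Jupyter-compatible tools understand: a markdown cell of narrative sits next to a code cell that runs the analysis and draws the figure. The file name and column names are invented for illustration.

```python
# %% [markdown]
# # Example analysis
# Narrative text lives here, right next to the code that produces the result,
# rather than in a separate paper that readers have to reverse engineer.

# %%
# Code cell: load, summarise and plot in one reproducible step.
# "observations.csv" and its columns are placeholders.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("observations.csv", parse_dates=["date"])
annual_mean = df.groupby(df["date"].dt.year)["value"].mean()

fig, ax = plt.subplots()
annual_mean.plot(ax=ax)
ax.set_xlabel("Year")
ax.set_ylabel("Mean value")
fig.savefig("annual_mean.png", dpi=150)
```

Anyone with the notebook and the data can rerun the whole analysis, or change it and see what happens, which is the interactivity Hugh describes.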

Nick Sheppard: [...]

[00:30:00] Hugh Shanahan: Yeah. Yeah. And how much our services are really, really dependent, you know, I mean, Google Scholar I think is a very extreme example of something that, you know, it's not anywhere part of Google's mission in some respect, it could be pulled tomorrow and then it's like...and of course we know, I mean there's Scopus, there's WoS and so on, but it's kind of, that's the one which is you take that away and then I think lots of academics are gonna go, everybody's gonna go, oh yeah, we had these resources that we were using previously, and they'll have to kind of figure out how to use them again.

Nick Sheppard: [...]

[00:31:01] Hugh Shanahan: So Nick, I have to, to, to make a full confession. I mean, I've heard of Octopus, but I haven't had a chance to have a play around with it. So I won't particularly comment on Octopus myself.

I'm delighted to see that that services like that are coming into place. Um, I am very disappointed that at the same time Jisc have decided to withdraw support for CORE, which...we have to get to a point where it can't be just like, one hand giveth, the other hand taketh away, you know...

Nick Sheppard: [...]

[00:31:52] Hugh Shanahan: So it's a repository, well, not so much a repository, but a service in terms of listing sort of open access publications and so on, that's there. So it's, as you said, it's a thing which kind of sits in the background. It's not...the plug hasn't been pulled on it entirely, but the funding's been withdrawn and now they have to, they, those people have to now figure out how to get support for this. So all the time, there's always...I think this will be like a motif with all of these interviews in terms of infrastructure costs.

Never ever trust anybody who says, oh, don't worry, it's really important and people somehow, some way it'll be paid for. X will be paid for. You have to kind of say, this is like having, this is like roads and buildings, you know, you gotta build them and you gotta keep them updated and every day you gotta, you know, and that's the quid pro quo of this...

Nick Sheppard: [...]

[00:33:19] Hugh Shanahan: Yeah. I think, yeah. I mean, I want to stress and say that I don't have a problem with commercial organisations running services, as long as there isn't a point where you can't change your mind and say, I'd like to use this sort of service from another. I think that's the important element. I mean, again, if we use the internet sort of idea, is that again, in the background, there's a colossal amount of people and organisations who were involved in, you know, laying down the optical fibre and the routers and, you know, the stuff that's there. And they do a really good job. And it's just that you're not like dependent on one organisation that do that. Huge numbers of different organizations who are there, who are doing all of that, you know, providing the backbones, the tier one, the tier two, and the tier three, all that, you know, the sort of level of connectivity and so on. So, as long as we're not dependent that we can always say, Thank you, but you know these other people can do just as good a job at 80% of your cost, thanks, we'll use them now. That's the thing that keeps everybody honest.

Nick Sheppard: [...]

[00:35:01] Hugh Shanahan: I'm not, I'm not having a go at at Octopus.

Nick Sheppard: [...]

[00:35:04] Hugh Shanahan: I'm the next level, next level up I'm afraid.

Nick Sheppard: [...]

So I'm just a little conscious of time, not least cos I'll have to transcribe this podcast as I said to you, but you touched on it a little bit, but, you know, talked about infrastructure and the fact that policies are developing in universities, et cetera. And I think earlier on you suggested that training's the challenge now, or is that particular challenge actually training people? I mean, already you've mentioned Jupyter notebooks, you know, uh, Figshare, you know, Zenodo, all these different services, pre-registration, registered reports, you know, there's data sharing and software and, you know, there's so many skills. So, I mean, two aspects to that, I suppose for me, just to ask you about. There's training for them and also recognition. I mean, people are busy, aren't they? Researchers are, are busy.

Hugh Shanahan: [...]

[00:36:02] Nick Sheppard: How, you know, why should they do this stuff if they're not getting sort of rewarded for it?

Hugh Shanahan: [...]

The things I would say is that we need to be fairly focused in terms of what it is, the type of training that we provide. So if you're talking to say somebody who's, say, a PI who's got a research group. Okay? The first thing you're doing is you're saying to them, um, you know what, you don't actually have to need to know all the ins and outs of what's happening here.

You need to kind of understand, probably have a sort of fairly broad understanding in terms of saying, okay, keeping an eye on your data and so on. And if you are, if your area means that you're developing software that yeah, actually your team should be should be doing things in a certain sort of way, but you don't necessarily have to have everything on your fingers and tips.

I think the other...you know, if you're a PhD student, if you're an early career researcher, then again, it should be about, okay, what is it that's useful for you? Uh, and you know, I mean, if we think particularly about PhD students, we also should be aware that, you know, every PhD student now...we also need to, kind of, be giving them the skills so that actually if they want to go, you know, that if they decide not to go into a research career, that they can think other options in their lives and to say to them. Hey, hang on, there's some practices here which are...which might be of use to them. And again, it's not saying to them, oh yeah, you need to know everything. You know, you kind of, sort of say to them, well, you know, actually things like annotation of data, that might be something that's useful to you for the things that you do. Or it could be developing software or it could be thinking about documenting lab protocols and so on.

Um, so I think it's also about, before we say to people, do this, and you will eventually at some unspecified point, you will get some, you know, you'll be rewarded for this, is more to take them...you know, the first win is to say, if you do this, this is actually something that's just gonna make what you do more efficient, first of all. Okay. So in terms of, you know, talking to a PI to say to them, hey, you know what, um, you remember that thing where, you know, a postdoc would go or a PhD student would go, and then, the next person along would spend like six months figuring out what the hell the last person did with the data that they generated or so on.

You can say, you know what, if you do things, you know, in this way. Think about that. You can bridge that gap and you can get them much closer to making that transition much easier for you if you can say to, you know, a graduate student, um, listen, you know what you kind of understand how to do stuff in Excel, but if you do things in R, yeah, I know it takes a few afternoons to get your head around R, and stuff like that. And it's, ooh, scary, it's programming. It, it means that you can process a thousand files in one go rather than one file, spending a thousand days doing the thing with an Excel spreadsheet. And you can do it in a reliable, reproducible way, then those are the wins.
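Hugh's example is R, but the same efficiency argument can be sketched in a few lines of Python: one short script that treats every file in a folder the same way, instead of a spreadsheet edited by hand a thousand times. The folder and column names below are placeholders.

```python
# Sketch of the "thousand files in one go" point: summarise every CSV file in
# a folder with one repeatable script. Paths and column names are placeholders.
from pathlib import Path

import pandas as pd

rows = []
for csv_path in sorted(Path("raw_data").glob("*.csv")):
    df = pd.read_csv(csv_path)
    rows.append(
        {
            "file": csv_path.name,
            "n_rows": len(df),
            "mean_value": df["value"].mean(),  # assumes a numeric 'value' column
        }
    )

summary = pd.DataFrame(rows)
summary.to_csv("summary.csv", index=False)  # the whole analysis is re-runnable
print(summary)
```

Because the steps are written down and re-runnable, the next person in the group can check or repeat them rather than spending months reconstructing what was done.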

So again, it's always...still kind of what I talked about 18 months ago in terms of small wins, uh the efficiency gains, I think are the things which are important, which is, you know, which is probably the Achilles heel in some respect of open research policies, when you kinda say, oh, here's the big picture, when you want to kind of say one good thing for you now, you know, and trust me, this will help you a lot.

Nick Sheppard: [...]

[00:41:27] Hugh Shanahan: Sure thing. So, uh, findable, it's up there. FAIR is, it's an acronym. It's shorthand for, I think it's 16 principles. I, I don't actually have them tattooed across my chest.

Nick Sheppard: [...]

[00:41:40] Hugh Shanahan: Not yet. But the letters stand for findable, accessible, interoperable, reusable. And what it is, is it's in essence it's about doing two things, which is a) trying to encapsulate the kind of hard won principles that came out from sort of sharing data, you know, and research, data management, sort of saying, these are the things that you should be doing.

And the second point, which I think hopefully addresses, at least the 'I' one, is taking that next step, which is to say, can we make this sort of machines talking to machines? I'm a little bit cautious about that because as I mentioned previously, you can get very carried away with the machine to machine everything. Everything works perfectly, and that's a big ask. Uh, and I'd rather have stuff which is, which is again, something small that works now, almost fair, all in little letters rather than in big capital letters.

Nick Sheppard: [...]

[00:43:02] Hugh Shanahan: So, so you asked me to kind of maybe dive a little bit more into that. So, uh, in, uh, so the findable aspect is to say, well, can I actually get onto my laptop and find this data set wherever it sits, I go, aha, here's a DOI for this.

Uh, accessible means that, oh, I've got the DOI. And you know what, when I click on the DOI it takes me to the landing page where I can go click and I can download the data set. And this is all a kind of short, kind of cartoon-like figure.

Interoperable, the 'I' says I've downloaded that, and now it's...that data is in a format that I recognise and know how I can read into my computer. And then the 'R' is reusable, which is to say there aren't any licenses associated with the data set, which say, oh, you know, yes, you're free to download, but no, you can't really publish anything about, you know, that it gives you that freedom.

Now those principles do things...they're a lot more subtle than that. So they, you know, for example, they sort of say things like, well, data might not be there for reasons, perfectly reasonable ones, but metadata, you know, the data about the data should always be there and so on.
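One way to see what the four letters ask for in practice is the kind of machine-readable record that sits behind a well-described dataset. The sketch below is illustrative only: the field names loosely follow DataCite-style metadata and every value, including the identifier, is invented.

```python
# Illustrative only: a minimal metadata record showing where the F, A, I and R
# letters bite. Field names loosely follow DataCite-style metadata; all values
# are invented.
dataset_metadata = {
    # Findable: a persistent identifier plus descriptive metadata that can be indexed
    "identifier": {"identifierType": "DOI", "identifier": "10.5281/zenodo.1234567"},
    "title": "Example observations, 1990-2020",
    "creators": [{"name": "Example, Researcher", "nameIdentifier": "0000-0000-0000-0000"}],
    # Accessible: retrievable over a standard, open protocol (plain HTTPS here)
    "access_url": "https://doi.org/10.5281/zenodo.1234567",
    # Interoperable: an open, documented format and shared vocabulary for subjects
    "formats": ["text/csv"],
    "subjects": ["example domain keyword"],
    # Reusable: an explicit licence and enough provenance to judge fitness for reuse
    "rights": {"rightsIdentifier": "CC-BY-4.0"},
    "descriptions": ["Collected with protocol X (placeholder); see README for details."],
}

# Even when the data itself cannot be shared, a record like this can be,
# which is Hugh's point about the metadata always being there.
for field, value in dataset_metadata.items():
    print(f"{field}: {value}")
```

The FAIR implementation profiles Hugh mentions next are, roughly, a community's agreed answers to what should go in each of these slots for a particular kind of data.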

So, um, I think we've kind of reached a point where FAIR is something that's accepted. We've stopped having kind of cosmic discussions about what does FAIR mean and much more people sort of pulling up their...you know, sort of saying, okay, we have an idea of what's FAIR. I think what's kind of interesting is that the next phase is much more about use cases to sort of say, yeah, this is what works in a particular domain.

So something to keep an eye out for are things called FAIR implementation profiles, which are effectively use cases of how FAIR gets implemented for a particular type...for a domain piece of data. And I do recommend, uh, so there's a new collaboration, which I'm not part of, so I'm not...I don't have shares in it, but it's something called WorldFAIR, which is doing a lot of work in that particular area.

So if you're kind of struggling for saying, oh, okay, how do I make my data FAIR? You can look up these profiles and sort of say, oh yeah, I'm working with geophysics data of some kind. And I can say, oh yeah, this is an example of how this was done. And now that kind of gives you, in some respect, a recipe to work at it from there and so on. So, I hope that a answers...

Nick Sheppard: [...]

[00:46:29] Hugh Shanahan: Exactly. So, as I said, I think the FAIR implementation profiles, I think, you know, you could see something on the horizon where there's a whole library of them that you can kind of pull out and go, ah, this data set, this type of data set, this is what I could do. That I think is an interesting sort of thing. It's a lot of work, but it could be something quite important.

Nick Sheppard: [...]

[00:47:21] Hugh Shanahan: So I think the challenge now is that...so institutionally things are starting to get their act together. We also have domain specific work, which is there. And what we need to do is to kind of get our heads around that matrix so that we have people who are working...and, and by the way I'm doing lots of hand motions right now, which will mean absolutely nothing to those who listen to this podcast. But in some respect, we have the people, you know, the data stewards, the research software engineers, who do things at an institutional level, who are doing things quite generically and thinking about things that way, but then also, for particular disciplines, you have a corresponding people who have much more of a domain, specific area who will work across institutions. So, you know, so that things like N8 or, you know, and and so on, could sort of say, oh yeah, there's, you know, there's the data stewards associated with archeological data, say, and they work, you know, in the uk, you know, there are a handful of people who work in the UK or indeed across Europe.

Nick Sheppard: [...]

[00:48:42] Hugh Shanahan: Exactly. Yeah. So that I think is the sort of the...we need to kind of get our heads around how to work at those sort of two levels. And I think it's sort of like, almost like that matrix type of model in terms of training and also in terms of services and infrastructure.

Nick Sheppard: [...]

[00:49:21] Hugh Shanahan: Absolutely.

Nick Sheppard: [...]

[00:49:23] Hugh Shanahan: Thank you, Nick. Cheers.


Email us at academicdev@leeds.ac.uk. Thanks for listening, and here's to you and your research culture.


About the Podcast

Research Culture Uncovered
Changing Research Culture through conversations
At the University of Leeds, we believe that all members of our research community play a crucial role in developing and promoting a positive and inclusive research culture. Across the globe, the urgent need for a better Research Culture in Higher Education is widely accepted – but how do you make it happen? This weekly podcast focuses on our ideas, approaches and learning as we contribute to the University's attempt to create a Research Culture in which everyone can thrive. Whether you undertake, lead, fund or benefit from research - these are the conversations to listen to if you want to explore what a positive Research Culture is and why it matters.

Unless specified in the episode shownotes, Research Culture Uncovered © 2023 by Research Culturosity, University of Leeds is licensed under CC BY-SA 4.0. This license requires that reusers give credit to the creator. If you remix, adapt, or build upon the material, you must license the modified material under identical terms. Some episodes may be licensed under CC BY-ND 4.0, please check before use.

About your hosts

Emma Spary

I moved into development after several years as an independent researcher and now lead the team providing professional and career development for all researchers and those supporting research. I am passionate about research culture and supporting people. I lead our Concordat implementation work and was part of the national Concordat writing group. I represent Leeds as a member of Researchers14, the N8PDRA group and UKRI’s Alternative Uses Group.

Tony Bromley

I've worked in the area of the development of researchers for 20 years, including at the national and international level. I was lead author of the UK sector researcher development impact framework charged with evaluating the over £20M per year investment of UK research councils in researcher development. I have convened the international Researcher Education and Development Scholarship (REDS) conference for a number of years and have published on researcher development evaluation and pedagogy. All the details are on www.tonybromley.com !! Also why not take a look at https://conferences.leeds.ac.uk/reds/

Ged Hall

I've worked for almost 20 years in researcher development, careers guidance and academic skills development. For the last decade I've focused on the area of research impact. This has included organisational development projects and professional development for individual researchers and groups. I co-authored the Engaged for Impact Strategy and am heavily involved in its implementation, across the University of Leeds, to build a healthy impact culture. For 10 years after my PhD, I was a consultant in the utility sector, which included being broker between academia and my clients.

Ruth Winden

After many years running my own careers consultancy business I made the transition to researcher development leading our careers provision. My background is in career coaching, facilitation and group-based coaching, and I have a special interest in cohort-based coaching programmes which help researchers manage their careers proactively and transition into any sector and role of their choice.

Nick Sheppard

I have worked in scholarly communications for over 15 years, currently as Open Research Advisor at the University of Leeds. I am interested in effective dissemination of research through sustainable models of open access, including underlying data, and potential synergies with open education and Open Educational Resources (OER), particularly underlying technology, software and interoperability of systems.