In this interview, Dr. Ben Collins, Reader in the School of Biological Sciences at Queen’s University Belfast, talks to News-Medical Life Sciences about his research and his work on the diaPASEF workflow.
Please can you introduce yourself and the research that you carry out at Queen’s University Belfast?
I’m a Reader in the School of Biological Sciences at Queen’s University Belfast. The first thing people from outside the UK always ask is ‘what’s a Reader?’ This is basically equivalent to an associate professor in other systems. I have just started to set up a lab at Queen’s, having been appointed in August 2019. Before this, I spent 8 years in Zurich at ETH, initially as a postdoc in Ruedi Aebersold’s group and then as a group leader in the same institute. Most of my research career has been focused on quantitative proteomics. In Zurich, I spent a lot of time working on data-independent acquisition (DIA) in the form of SWATH-MS. I also worked on characterizing protein-protein interactions, focusing on the dynamics and rearrangement of protein complexes, and later worked on applications of these methods in infection biology. I will be continuing these lines of research at Queen’s.
Why do scientists want to quantify the human proteome?
Proteins (or perhaps protein complexes) are the primary functional units of the cell, and for many questions in biology, it is function that we are concerned with. Of course, we have great technologies for measuring genomes and transcriptomes, which are very mature, but the picture provided by these technologies is very far from complete. Having technologies that can determine the state of a given proteome, sometimes referred to as the ‘proteotype’, seems crucial in a very broad sense in life science research if we want to think in terms of function. I think you will struggle to find someone who would deny this, but nevertheless, proteomics is not as widely adopted in the life sciences as genomics/transcriptomics. The measurement technologies are taking longer to mature in proteomics, but this is entirely understandable because the analytical challenges are much larger than for nucleic acids.
What is PASEF and why is it useful for studying the proteome?
PASEF is a method developed by Matthias Mann’s lab with Bruker which leverages trapped ion mobility spectrometry (TIMS). The use of ion mobility separations with mass spectrometry is not new but TIMS provides something more than just an added layer of separation. TIMS can also focus ions into more concentrated packets that can be released sequentially into the mass spectrometer. PASEF is the implementation that exploits this idea to achieve very fast and sensitive spectral acquisition in data-dependent acquisition (DDA) mode.
Video: Ben Collins - diaPASEF (Expert Presentation on Demand), from AZoNetwork on Vimeo.
How are the capabilities of data-independent acquisition extended by PASEF?
It’s been our view for a long time now that the best way to get reproducible quantitative data is to run the instrument deterministically. We did this initially using targeted methods like SRM and then expanded this to DIA/SWATH. Combining PASEF with DIA brings a lot to the game.
The primary goal of the diaPASEF method, the concept for which was pushed by Florian Meier in the Mann lab, was to increase the global ion utilization efficiency. In DIA methods like SWATH-MS, the fraction of ions that get past the mass-selective quadrupole is only ever something like one divided by the number of precursor isolation windows in the duty cycle. This results in efficiencies in the range of 1-3% for typical methods. For DDA methods it’s much lower again. The idea for diaPASEF was that we could synchronize the quadrupole position in DIA mode with the size of the ions coming out of the TIMS cell as the mobility cycle progresses. This idea rests on the observation that ion mobility and m/z are somewhat correlated. So, what we observe is that when we run in diaPASEF mode we can use more ions from a global perspective and, as a consequence, we record more signal intensity.
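To make the duty-cycle arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python (the window counts are illustrative assumptions, not values taken from the interview or the published method): in a conventional DIA scheme the quadrupole transmits roughly one isolation window at a time, so global ion utilization scales as one over the number of windows, which is where the 1-3% figure comes from.

```python
# Back-of-the-envelope sketch (illustrative only): duty-cycle ion utilization
# for a conventional DIA scheme, where the quadrupole transmits roughly one
# precursor isolation window at a time, so efficiency ~ 1 / n_windows.

def dia_ion_utilization(n_windows: int) -> float:
    """Approximate fraction of precursor ions transmitted per duty cycle."""
    return 1.0 / n_windows

for n in (32, 64, 100):  # assumed, typical window counts for illustration
    print(f"{n} windows -> ~{dia_ion_utilization(n):.1%} ion utilization")
# 32 windows -> ~3.1%, 64 -> ~1.6%, 100 -> ~1.0%: the 1-3% range mentioned above
```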
We also just benefit from what has already been shown in the DDA implementation of PASEF. That is, we have an added layer of separation in the form of ion mobility, which is beneficial in DIA mode where we always operate close to the limits of selectivity. Plus, we get the ion-focusing effect, which means we can run the instrument faster, with better sensitivity and much lower amounts of sample.
You were one of the first researchers to describe the diaPASEF workflow. Why did you decide to develop this method and what were some of the challenges you needed to overcome?
Around 2-3 years ago, I was still in Zurich and we had been following with interest the exciting developments made by the Mann lab and Bruker with respect to TIMS and PASEF. We started to wonder whether a DIA implementation might be useful, and I had talked informally a few times with Hannes Röst about what could be done from the data analysis perspective. But, of course, the idea of making a DIA implementation of PASEF had not escaped Matthias’s group. They had also been thinking about how this could be implemented for quite some time. When we realized we were all interested in moving in the same direction, it made sense to team up. The Mann lab had deep experience in PASEF development as well as a strong concept for the implementation. Hannes had a software tool that he was confident could be adapted to handle this new data. Bruker could contribute expertise from the instrument side, primarily setting up prototype acquisition methods. Ruedi and I had spent a lot of time working on DIA/SWATH method development and could contribute from that perspective. Collectively, we felt very confident that we could make some interesting advances.
Naturally, there were challenges along the way. Initially, these related to how to structure the precursor isolation schemes in order to achieve good efficiency while still trying to maintain selectivity and precursor coverage. It turns out that this is more complicated than for standard DIA/SWATH approaches because the added dimension of ion mobility makes the parameter space you could explore much larger. Optimizing the software also took a major effort. We now think we have a pretty good first implementation that is ready to be deployed for biological studies but there is certainly more room for optimizing and extending the method and we look forward to working on this further.
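As a rough illustration of why that parameter space grows (a toy sketch added here, not the published diaPASEF acquisition scheme), the snippet below places quadrupole isolation windows so that each m/z window is paired with a narrow mobility band centred on an assumed linear m/z vs. 1/K0 trend. The slope, intercept, window count, and widths are all invented for illustration; each of them is a free parameter that a real scheme has to balance against efficiency, selectivity, and precursor coverage.

```python
# Illustrative sketch only: a toy diaPASEF-style window scheme in which each
# quadrupole isolation window is centred on an assumed linear m/z vs. 1/K0
# trend, to show why the (m/z, mobility) parameter space is larger than in
# standard DIA. The slope/intercept and window sizes below are made up.

from dataclasses import dataclass

@dataclass
class Window:
    mz_low: float
    mz_high: float
    im_low: float   # ion mobility, 1/K0 (Vs/cm^2)
    im_high: float

def toy_scheme(n_windows: int = 16,
               mz_range: tuple = (400.0, 1200.0),
               im_range: tuple = (0.6, 1.4),
               slope: float = 1000.0,      # assumed m/z per unit 1/K0
               intercept: float = -200.0,  # assumed offset
               im_width: float = 0.1) -> list:
    """Place m/z windows evenly, each paired with the mobility band that the
    assumed m/z = slope * (1/K0) + intercept correlation predicts."""
    mz_step = (mz_range[1] - mz_range[0]) / n_windows
    scheme = []
    for i in range(n_windows):
        mz_lo = mz_range[0] + i * mz_step
        mz_hi = mz_lo + mz_step
        im_centre = ((mz_lo + mz_hi) / 2 - intercept) / slope
        scheme.append(Window(mz_lo, mz_hi,
                             max(im_range[0], im_centre - im_width / 2),
                             min(im_range[1], im_centre + im_width / 2)))
    return scheme

for w in toy_scheme()[:3]:
    print(w)
```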
You recently used SWATH-MS to study cells undergoing mitosis. Please can you describe this research and the data you were able to gather about mitotic proteome reorganization?
For a long time, I have been interested in protein complexes and how they are re-organized dynamically to facilitate cellular functions. I initially did this using affinity purification to pull complexes out of cells in time-course experiments. More recently, we have worked on applying DIA/SWATH-based methods in an attempt to monitor large numbers of complexes in an unbiased manner. This approach has been pursued for a long time using fractionation to separate protein complexes, in the form of ‘protein correlation profiling’ or ‘co-fractionation MS’. We have been trying to improve this and make it more useful for comparative studies, where we want to look at changes in protein complex organization as a function of some perturbation or biological process. We called this SEC-SWATH-MS, and it was developed primarily by Moritz Heusel and Isabell Bludau during their time as Ph.D. students in Zurich before they moved on to other things (coincidentally, Isabell went to the Mann lab and contributed to the diaPASEF project in the later stages).
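As a rough sketch of the underlying idea (an added illustration, not the actual SEC-SWATH-MS workflow or its software): in correlation profiling, proteins quantified across consecutive fractions are compared by the similarity of their elution profiles, and strongly co-eluting proteins become candidate members of the same complex. The protein names and profiles below are invented numbers for demonstration only.

```python
# Minimal sketch of the co-fractionation idea (illustration only): proteins
# whose quantitative elution profiles across SEC fractions correlate strongly
# are candidate members of the same complex.

import numpy as np

def coelution_score(profile_a: np.ndarray, profile_b: np.ndarray) -> float:
    """Pearson correlation of two elution profiles (intensity per fraction)."""
    return float(np.corrcoef(profile_a, profile_b)[0, 1])

# Toy elution profiles over 10 SEC fractions (made-up numbers)
kinase   = np.array([0, 1, 5, 20, 40, 22, 6, 1, 0, 0], dtype=float)
cyclin   = np.array([0, 2, 6, 18, 38, 25, 5, 2, 0, 0], dtype=float)
monomer  = np.array([0, 0, 0, 1, 3, 8, 25, 40, 20, 5], dtype=float)

print(coelution_score(kinase, cyclin))   # high -> co-elute, candidate complex
print(coelution_score(kinase, monomer))  # low  -> elute separately
```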
The study you mention was the first in which we were able to apply this method comparatively to an interesting biological problem. We were able to look at how complexes were remodeled as cells move between stages of the cell cycle. We found many canonically understood changes, like the association of cyclins with cyclin-dependent kinases. We also found some new things, like previously undocumented disassembly states of the nuclear pore complex. I’m excited about what could be done with this method, although in the implementation described in that study it takes a very large effort to do the measurements and analysis. In work mainly from Charlotte Nicod, Isabell Bludau, and Claudia Martelli, we have lately been focusing on improving the efficiency of this approach by using short-gradient analyses that allow us to increase the throughput by an order of magnitude, and this will help us apply this strategy to biological problems with more perturbations and more replication.
What are your expectations for doing similar studies using diaPASEF instead? Do you foresee any advantages?
Absolutely, yes. We know that diaPASEF methods can run very fast using very low quantities of starting material while still getting very good sensitivity and quantification. This should be an excellent fit for this global approach to protein complex analysis and we are very keen to follow this up. I am hoping that we will be able to transform this method into something that can be more routinely applied. This is certainly something we want to explore in my new lab.
What other areas of research have you applied DIA-based mass spectrometry approaches to?
We have been working on these methods for about a decade now, and this means we have covered a lot of application areas over the years. DIA excels when we want high-quality and complete quantification in proteomics, and this is the majority of studies these days. It also seems to me that the field is starting to come around to this, with many groups now adopting this strategy. I am happy to see this because, while exploratory work in a lot of different areas is good, at some point consolidation in the most promising areas is needed.
Alongside your research, you have developed several open-access tools. Why do you feel it is important to develop open-access tools for use with diaPASEF?
The creation of software tools and computational workflows in mass spectrometry-based proteomics has always been a key aspect of research and, at some points, a fairly large bottleneck. There is a big push in our community for making data, code, and documentation available, and I sense that there is progress from this perspective. This is not always easy to do, but it seems obvious that this is the way things have to go, especially when we are talking about method development. Other groups need to be able to replicate and build on work from their peers in the scientific community. In principle, advances should proceed faster this way, and this is often borne out in reality. I think Bruker has been pretty forward-thinking here by providing an open data representation and a software development kit. In the Aebersold group, where I was trained, the policy was generally to make tools as open as possible, and I think Hannes Röst, who is leading the software development effort for the diaPASEF project, would have a similar outlook.
What does the future hold for your research?
I am still in the process of getting my new lab set up, and this will take some time. However, we did just install a timsTOF Pro in the last few weeks and are starting to do new measurements already. I am planning to have ~3 strands of research going forward. Firstly, I want to continue with method development for quantitative proteomics. In the short to medium term, this means continuing to advance diaPASEF and related strategies, but this will probably be more directed toward the applications we start to work on in the lab.
Second, I very much want to continue to work on protein interaction networks and protein complexes, especially with respect to dynamics. It seems to me this is a key area where a lot of functional cellular information is encoded, and the methods we currently have to tackle this have a long way to go.
Third, I would like to leverage the first two strands to do more applied work, especially in the space of infection biology and signaling in innate immunity. One key question that I have started to work on over the last few years is how pathogens are able to rewire host cells for their own purposes to evade destruction. It seems to me that the methods we have in hand could be very useful in this arena.
How do you think the field of proteomics will evolve over the next few decades?
The field initially focused on measuring protein quantities, and we have made great progress there, but increasingly we would like to know more about proteoforms, protein complexes, subcellular localization, conformational changes, and so on. However, the analytical problems that need to be overcome for this kind of complete proteome characterization are immense. I would say there are two big problems here. The first is dynamic range. We estimate that protein concentrations in a typical mammalian cell span at least 7 orders of magnitude, and for clinical samples it can be much more. It’s very difficult to have an analytical method that can be quantitative across such a broad range. The second big problem is complexity. The proteome has a vast space of possible states when we account for proteoforms, complexes, etc. We are still trying to get a handle on how much of that space is occupied and how we could develop better methods to map that out and monitor it dynamically in various biological systems. I think these are the key challenges from a basic biology perspective, where we would like to understand better what’s going on in a cell.
The other side of the coin is throughput for more applied work. There are many applications where biological variation is high (clinical proteomics is certainly one) and where what really counts is being able to measure hundreds or thousands of samples. I think we are just getting to the stage where this is becoming realistic, and I think we will see a lot of studies going for very big population-based cohorts that will allow for reasonable statistical inferences to be made against the background of this large biological variation.
When we start talking in terms of a decade or more, then we should also consider that other disruptive technologies will be developed. Progress is being made with fluoro-sequencing and nanopore sequencing for proteins. These are exciting developments, but there is still a very long way to go before these methods challenge mass spectrometry, I suspect.
Where can readers find more information?
About Ben Collins, Ph.D.
Since August 2019, Ben has been a Reader in the School of Biological Sciences, Queen's University Belfast, UK. His research focuses broadly on 3 topics: (i) method development and applications in data-independent acquisition mass spectrometry; (ii) method development and applications in the analysis of protein interaction networks and protein complexes; and (iii) host-pathogen biology. Ben is a native of Ireland, where he studied chemistry and applied chemistry at the National University of Ireland, Galway. After working as an analytical chemist at Schering-Plough, he undertook an MSc in Molecular Medicine at Trinity College Dublin. Ben's PhD was completed at University College Dublin in 2009, where he remained for 1 year as the Agilent Technologies Newman Fellow (postdoctoral) in Quantitative Proteomics. Ben moved to the Institute of Molecular Systems Biology at ETH Zurich in Autumn 2010 as a postdoctoral researcher under the supervision of Prof. Ruedi Aebersold, where his research focused on the application of quantitative interaction proteomics in signaling and the development of DIA/SWATH mass spectrometry. Following this, Ben was a Group Leader and SNF Ambizione Fellow at IMSB, ETH Zurich, with a focus on applying methods developed as a postdoc to relevant problems in host-pathogen biology.
For Research Use Only. Not for Use in Clinical Diagnostic Procedures.