Standard and Super-Resolution Bioimaging Data Analysis (eBook)
John Wiley & Sons (Verlag)
978-1-119-09693-1 (ISBN)
A comprehensive guide to the art and science of bioimaging data acquisition, processing and analysis
Standard and Super-Resolution Bioimaging Data Analysis gets newcomers to bioimage data analysis quickly up to speed on the mathematics, statistics, computing hardware and acquisition technologies required to correctly process and document data.
The past quarter century has seen remarkable progress in the field of light microscopy for biomedical science, with new imaging technologies coming on the market on an almost annual basis. Most of the data generated by these systems is image-based, and there has been a significant increase in the content and throughput of these imaging systems. This, in turn, has resulted in a shift in the biomedical research literature from descriptive to highly quantitative. Standard and Super-Resolution Bioimaging Data Analysis satisfies the demand among students and research scientists for introductory guides to the tools for parsing and processing image data. Extremely well illustrated and including numerous examples, it clearly and accessibly explains what image data is and how to process and document it, as well as the current resources and standards in the field.
- A comprehensive guide to the tools for parsing and processing image data and the resources and industry standards for the biological and biomedical sciences
- Takes a practical approach to image analysis to assist scientists in ensuring scientific data are robust and reliable
- Covers fundamental principles in such a way as to give beginners a sound scientific base upon which to build
- Ideally suited for advanced students having only limited knowledge of the mathematics, statistics and computing required for image data analysis
An entry-level text written for students and practitioners in the bioscience community, Standard and Super-Resolution Bioimaging Data Analysis de-mythologises the vast array of image analysis modalities which have come online over the past decade while schooling beginners in bioimaging principles, mathematics, technologies and standards.
ANN WHEELER, PhD, is Head of the Advanced Imaging Resource at the MRC Institute of Genetics and Molecular Medicine, University of Edinburgh, UK.
RICARDO HENRIQUES, PhD, is Head of the Quantitative Imaging and NanoBioPhysics research group at the MRC Laboratory for Molecular Cell Biology, University College London, UK.
List of Contributors
Foreword
Peter O'Toole
1. An introduction to biomedical image data and data processing: from sub-cellular to tissue.
Ann Wheeler
2. Quantification of 2D and 3D image data (selecting, counting and measuring)
Jean-Yves Tinevez
3. Segmentation of image data for quantification
Jean-Yves Tinevez
4. Quantifying association using Förster resonance energy transfer
Stephan Terjung
5. Quantifying photoperturbation techniques
Stephan Terjung
6. Correlations and colocalisation
Dylan Owen
7. Live cell imaging: tracking cell movement and development
Claire Wells and Mario
8. Super-resolution data analysis
Graeme Ball and Debbie Keller
9. Analysis of large datasets and automation
Ahmed Fetit
10. Presenting, documenting and storing bioimage data.
Ann Wheeler and Seb Besson.
Epilogue
Kota Miura
Index
1 Digital Microscopy: Nature to Numbers
Ann Wheeler
Advanced Imaging Resource, MRC‐IGMM, University of Edinburgh, UK
Bioimage analysis is the science of converting biomedical images into powerful data. As well as providing a visual representation of the data in a study, images can be mined and used in themselves as an experimental resource. With careful sample preparation and precise control of the equipment used to capture images, it is possible to acquire reproducible data that quantitatively describe a biological system, for example through analysis of relative protein or epitope expression (Figure 1.1). Using emerging methods, this can be scaled out over hundreds or thousands of samples for high-content image-based screening, or focused in to data at the nanoscale. Fluorescence microscopy is used to specifically mark and discriminate individual molecular species, such as proteins, or different cellular, intracellular or tissue-specific components. By acquiring individual images capturing each tagged molecular species in separate channels, it is possible to determine relative changes in the abundance, structure and – in live imaging – kinetics of biological processes. In the example below (Figure 1.1), labelling F‐actin, a cytoskeletal protein, with a fluorescent protein allows measurement of how fast it turns over in moving cells under normal conditions and when DSG3, a putative regulator of cell migration, is overexpressed. The experiment shows that overexpressing DSG3 destabilises actin and causes it to turn over faster. By quantifying the expression and localisation of F‐actin in several cells over time, it is possible to see how much F‐actin turns over in the course of the experiment, where this happens, and the difference in rate between the two conditions (Figure 1.1, graph). This type of insight into the spatial and temporal properties of proteins is only possible using bioimage analysis, and illustrates its use in current biomedical research applications.
Figure 1.1 Bioimage quantification to determine the dynamics of actin using photoconversion. Tsang, Wheeler and Wan Experimental Cell Research, vol. 318, no. 18, 01.11.2012, p. 2269–83.
In this book we are primarily going to consider quantification of images acquired from fluorescence microscopy methods. In fluorescence microscopy, images are acquired by sensors such as scientific cameras or photomultiplier tubes. These generate data as two‐dimensional arrays comprising spatial information in the x and y domain (Figure 1.2); separate images are required for the z spatial domain – known as a z stack – which can then be overlaid to generate a 3D representative image of the data (Figure 1.2). Image analysis applications such as Imaris, Volocity, BioImageXD and ImageJ can carry out visualisation, rendering and analysis tasks. The most sensitive detectors for fluorescence and bright‐field microscopy record the intensity of the signal emitted by the sample, but no spectral information about the dye (Figure 1.3). This means, effectively, that intensity information from only one labelled epitope is recorded. To collect information from a sample which is labelled with multiple fluorescent labels, the contrast methods on the imaging platform itself – e.g. fluorescent emission filters, phase or DIC optics – are adjusted to generate images for each labelled epitope, all of which can then be merged (Figure 1.3). Some software will do this automatically for the end user. The final dimension of which images can be composed is time. Taken together, it is possible to see how a 3D multichannel dataset acquired over time can comprise tens of images. If these experiments are carried out over multiple spatial positions – e.g. through the analysis of multiwell plates or tiling of adjacent fields of view – the volume of data generated scales up considerably, especially when experiments need to be done in replicates. Often the scientific question may well require perturbing several parameters, e.g. adjustment of different hypothesised parameters or structures involved in a known biological process.
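The dimensions described above – x, y, z, channel and time – are conveniently modelled as one multidimensional array. The following is a minimal sketch using NumPy (not a tool named in the text); the dimension sizes are small hypothetical values chosen for illustration.

```python
import numpy as np

# A fluorescence dataset is a stack of 2D intensity arrays: one per
# z-plane, per channel and per time point. Sizes here are illustrative;
# real acquisitions are typically 512x512 pixels or larger.
t, c, z, y, x = 3, 2, 5, 64, 64
stack = np.zeros((t, c, z, y, x), dtype=np.uint16)  # 16-bit camera data

# A single 2D image (first time point, first channel, first z-plane):
frame = stack[0, 0, 0]

# A maximum-intensity projection collapses a z-stack to 2D for display:
mip = stack[0, 0].max(axis=0)

# Merging channels for display: place each channel in an RGB plane.
rgb = np.zeros((y, x, 3), dtype=np.uint16)
rgb[..., 0] = stack[0, 0].max(axis=0)  # e.g. a red-labelled epitope
rgb[..., 1] = stack[0, 1].max(axis=0)  # e.g. a green-labelled epitope
```

Indexing such an array mirrors the acquisition itself: fixing the time and channel indices yields exactly one z-stack, and fixing z as well yields one camera frame.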
This means that similar image acquisition and analysis needs to be used to analyse the differences in the biological system. In these cases, setting up an automated analysis workflow makes sense: manually quantifying each individual image would take considerable time and would demand a substantial level of consistency and concentration. Programming an analysis pipeline requires some initial work, but it lets the computer automate a large volume of tasks, making the research process more reliable, robust and efficient. Indeed, some applications now allow data to be processed in batches on remote servers, computer clusters or cloud computing services.
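The essence of such a pipeline is applying one fixed measurement to every image in an experiment. A minimal sketch, assuming synthetic images stand in for loaded microscopy data (the field names and threshold value are hypothetical):

```python
import numpy as np

def mean_intensity(image: np.ndarray, threshold: int) -> float:
    """Mean intensity of pixels above a fixed background threshold."""
    foreground = image[image > threshold]
    return float(foreground.mean()) if foreground.size else 0.0

# Synthetic stand-ins for fields of view loaded from disk; a real
# pipeline would read image files here instead.
rng = np.random.default_rng(0)
experiment = {f"field_{i:02d}": rng.integers(0, 4096, (64, 64), dtype=np.uint16)
              for i in range(4)}

# The same parameters are applied to every image, so results are
# directly comparable across fields and conditions.
results = {name: mean_intensity(img, threshold=2000)
           for name, img in experiment.items()}
for name, value in sorted(results.items()):
    print(f"{name}: {value:.1f}")
```

Because the threshold and measurement are defined once, re-running the analysis on a replicate, or on a thousand fields instead of four, changes nothing but the loop's input.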
Figure 1.2 Workflow for bioimage data capture in 2D and 3D.
Figure 1.3 Combining channels in fluorescent bioimage analysis. Channel 1 has antibodies raised against E‐cadherin labelled with AlexaFluor 568 secondary antibodies. Channel 2 is labelled with primary antibodies raised against Alpha tubulin and secondary antibodies labelled with AlexaFluor 488.
Biomedical image analysis follows a given workflow: data acquisition, initialisation, measurement and interpretation (Figure 1.4) – which will be discussed in brief in this introductory chapter, followed by a more in‐depth analysis in subsequent chapters.
Figure 1.4 The Bioimage analysis workflow.
1.1 ACQUISITION
1.1.1 First Principles: How Can Images Be Quantified?
Before data can be analysed, it needs to be acquired. Image acquisition methods have been extensively reviewed elsewhere [1, 3, 4]. For quantification, the type and choice of detector, which converts incident photons of light into a number matrix, is important. Images can be quantified because they are digitised through a detector mounted on the microscope or imaging device. These detectors can be CCD (charge coupled device), EMCCD (electron multiplying CCD) or sCMOS (scientific CMOS) cameras, or photomultiplier tubes (PMTs). Scientific cameras consist of a fixed array of pixels. Pixels are small silicon semiconductors which use the photoelectric effect to convert the photons of light given off by a sample into electrons (Figure 1.5). Camera pixels are precision engineered to yield a finite number of electrons per photon of light, and have a known size and sensitivity. Photons of light pass from the object through the optical system until they collide with one part of the doped silicon semiconductor chip – a pixel – in the camera. This converts the photons of light into electrons, which are then counted. The count of photoelectrons is then converted into an intensity value, which is communicated to the imaging system's computer and displayed as an image (Figure 1.5). PMTs operate on similar principles to scientific cameras, but with increased sensitivity, allowing the collection of weaker signals. For this reason they are preferentially mounted on confocal microscopes. Photomultipliers channel photons to a photocathode that releases electrons upon photon impact. These electrons are multiplied by electrodes called metal channel dynodes. At the end of the dynode chain is an anode (collection electrode) which reports the photoelectron flux generated by the photocathode.
However, the PMT collects what is effectively only one pixel of data, therefore light from the sample needs to be scanned, using mirrors, onto the PMT to allow a sample area larger than one pixel to be acquired. PMTs have the advantage that they are highly sensitive and, within a certain range, pixel size can be controlled, as the electron flow from the anode can be spatially adjusted; this is useful as the pixel size can be matched to the exact magnification of the system, allowing optimal resolution. PMTs have the disadvantage that acquiring the spatial (x, y and z) coordinates of the sample takes time as it needs to be scanned one pixel at a time. This is particularly disadvantageous in imaging of live samples, since the biological process to be recorded may have occurred by the time the sample has been scanned. Therefore live imaging systems are generally fitted with scientific cameras and systems requiring sensitivity for low light and precision for fixed samples often have PMTs. (https://micro.magnet.fsu.edu/primer/digitalimaging/concepts/photomultipliers.html)
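The photon-to-intensity chain described above can be sketched as a toy digitisation model. The quantum efficiency, gain, offset and bit depth below are illustrative assumptions, not values from the text; real detectors publish theirs in specification sheets.

```python
def digitise(photons: int, qe: float = 0.7, gain: float = 0.5,
             offset: int = 100, bit_depth: int = 12) -> int:
    """Toy model: convert incident photons into a stored intensity value.

    photons -> photoelectrons (quantum efficiency) -> analogue-to-digital
    units (gain), plus a fixed offset, clipped to the detector's range.
    All parameter values are hypothetical.
    """
    electrons = photons * qe             # photoelectric conversion
    adu = electrons / gain + offset      # analogue-to-digital conversion
    return int(min(max(adu, 0), 2 ** bit_depth - 1))

print(digitise(1000))      # a bright pixel -> 1500
print(digitise(0))         # a dark pixel reads the offset only -> 100
print(digitise(10 ** 6))   # overload clips at the 12-bit maximum -> 4095
```

The last line previews the saturation behaviour discussed in the next section: once the detector's range is exhausted, additional photons produce no further increase in the recorded value.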
Figure 1.5 How images are digitised.
1.1.2 Representing Images as a Numerical Matrix Using a Scientific Camera
Although a pixel array is useful for defining the shape of an object, it doesn't define the shading or texture of the object captured on the camera. Cameras use greyscales to determine this. Each pixel has a property defined as 'full well capacity', which defines how many electrons (originated by photons) an individual pixel can hold. An analogy would be to picture the camera as an array of buckets which are filled by light: it is only possible to collect as much light as the pixel 'well' (bucket) can hold, a limit known as the saturation point. There can also be too little light for the pixel to respond...
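The bucket analogy can be made concrete: signal beyond the full well capacity is simply lost. The capacity below is a hypothetical figure; real values appear in camera specification sheets.

```python
import numpy as np

# Hypothetical full well capacity, in electrons.
full_well = 30000

# Photoelectrons generated in five pixels receiving increasing light:
incident_electrons = np.array([100, 15000, 30000, 45000, 90000])

# The 'bucket' can only hold full_well electrons; anything beyond
# that is discarded, so the last three pixels become indistinguishable.
recorded = np.clip(incident_electrons, 0, full_well)
print(recorded)  # [  100 15000 30000 30000 30000]
```

This is why exposure must be set so that the brightest features of interest stay below saturation: once two pixels both read full well, any quantitative comparison between them is meaningless.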
| Publication date (publisher) | 12.10.2017 |
|---|---|
| Series | RMS - Royal Microscopical Society |
| Language | English |
| Subject areas | Medicine / Pharmacy ► Health professions |
| | Medicine / Pharmacy ► Medical specialties ► Radiology / Imaging techniques |
| | Medicine / Pharmacy ► Physiotherapy / Occupational therapy ► Orthopaedics |
| | Natural sciences ► Biology |
| | Natural sciences ► Chemistry |
| | Technology ► Mechanical engineering |
| | Technology ► Medical technology |
| Keywords | 3-d digital color microscopy for the biosciences • bioimage filtering • Bioimaging • bioimaging data documentation • bioimaging data parsing • bioimaging data processing • bioimaging hardware • bioimaging software • bioimaging technology • Biomedical Imaging • biomedical imaging data analysis • Cell & Molecular Biology • Chemistry • digital light microscopy in the biosciences • digital microscopy • digital microscopy hardware reviews • digital microscopy software reviews • image restoration of 2-dimensional microscopy • Imaging • imaging analysis • imaging data in biosciences • Life Sciences • Light microscopy • light microscopy image analysis • light microscopy imaging analysis • live cell imaging techniques • materials characterization • Materials Science • materials testing • microscopes • Microscopy • Molecular Biology • quantification of bioimaging data • quantifying data sets in microscopy • segmentation of bioimaging data • super-resolution bioimaging • super-resolution bioimaging data analysis • super-resolution image data processing • super-resolution imaging in biosciences |
| ISBN-10 | 1-119-09693-6 / 1119096936 |
| ISBN-13 | 978-1-119-09693-1 / 9781119096931 |