Atomfair Brainwave Hub: Nanomaterial Science and Research Primer / Characterization Techniques for Nanomaterials / Dynamic light scattering for nanoparticle sizing
Dynamic light scattering (DLS) is a widely used technique for determining the size distribution of nanoparticles in suspension. The method measures fluctuations in scattered light intensity caused by the Brownian motion of suspended particles; the decay of these fluctuations yields a diffusion coefficient, which the Stokes-Einstein equation relates to particle size. The data processing workflow involves several steps, including autocorrelation analysis, cumulants fitting, and distribution conversion, each contributing to the final size distribution result.

The raw data in DLS is the time-dependent intensity of scattered light, which fluctuates due to particle movement. The first step in processing is computing the autocorrelation function (ACF), which quantifies how the signal correlates with itself over time. The ACF decays over time, with the decay rate depending on particle diffusion speed—faster decay indicates smaller particles moving quickly, while slower decay corresponds to larger particles. The ACF is typically represented as:
g²(τ) = A + B|g¹(τ)|²
where g²(τ) is the measured intensity autocorrelation function, g¹(τ) is the normalized electric field autocorrelation function, A is the baseline at infinite delay, and B is an instrumental coherence factor. This relationship is known as the Siegert relation. The field autocorrelation function g¹(τ) is the key to extracting particle size information.
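As an illustrative sketch of this first processing step, the snippet below computes a normalized intensity autocorrelation function and inverts the relation above to recover |g¹(τ)|². An exponentially correlated random signal stands in for a real detector trace; every parameter is assumed, not instrument data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a detector trace: an exponentially correlated random signal
# (all parameters are illustrative, not instrument values).
n = 100_000
tau_c = 30                        # correlation time in samples
a = np.exp(-1.0 / tau_c)
noise = rng.standard_normal(n)
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = a * x[i - 1] + np.sqrt(1.0 - a**2) * noise[i]
intensity = (1.0 + 0.3 * x) ** 2  # positive, fluctuating "intensity"

def g2(signal, max_lag):
    """Normalized intensity ACF <I(t)I(t+tau)> / <I>^2 for lags 1..max_lag."""
    mean_sq = signal.mean() ** 2
    return np.array([
        np.mean(signal[:-lag] * signal[lag:]) / mean_sq
        for lag in range(1, max_lag + 1)
    ])

acf = g2(intensity, 120)

# Siegert relation: |g1|^2 = (g2 - A) / B, with baseline A ~ 1 after
# normalization and B estimated from the shortest-lag intercept.
A = 1.0
B = acf[0] - A
g1_sq = np.clip((acf - A) / B, 0.0, None)
```

A real correlator accumulates these products in hardware over logarithmically spaced lags, but the normalization and Siegert inversion are the same.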

For monodisperse samples, g¹(τ) follows a single exponential decay:
g¹(τ) = exp(-Γτ)
where Γ is the decay rate, related to the diffusion coefficient D by Γ = Dq². Here, q = (4πn/λ) sin(θ/2) is the magnitude of the scattering vector, which depends on the solvent refractive index n, the laser wavelength λ, and the scattering angle θ. The hydrodynamic diameter (Dh) is then calculated using the Stokes-Einstein equation:
Dh = kT/(3πηD)
where k is Boltzmann’s constant, T is temperature, and η is solvent viscosity.
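Putting these two relations together, a minimal sizing calculation might look like the following sketch, using q = (4πn/λ) sin(θ/2) and typical but hypothetical values: a 633 nm laser, 173° backscatter detection, water at 25 °C, and an already-fitted decay rate.

```python
import numpy as np

# Illustrative, assumed values: 633 nm HeNe laser, 173 deg backscatter,
# water at 25 C. Not taken from any particular instrument.
k_B = 1.380649e-23       # Boltzmann constant, J/K
T = 298.15               # temperature, K
eta = 0.89e-3            # viscosity of water at 25 C, Pa*s
n_medium = 1.33          # refractive index of water
wavelength = 633e-9      # laser wavelength in vacuum, m
theta = np.deg2rad(173)  # scattering angle

# Scattering vector magnitude: q = (4 pi n / lambda) * sin(theta / 2)
q = 4.0 * np.pi * n_medium / wavelength * np.sin(theta / 2.0)

# Suppose a single-exponential fit to g1 gave this decay rate (hypothetical):
Gamma = 3.0e3            # 1/s

D = Gamma / q**2                         # diffusion coefficient, m^2/s
d_h = k_B * T / (3.0 * np.pi * eta * D)  # Stokes-Einstein hydrodynamic diameter
print(f"Dh = {d_h * 1e9:.1f} nm")
```

With these inputs the result lands near 110 nm; note that the hydrodynamic diameter includes the solvation shell, so it is generally larger than the core size seen by electron microscopy.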

Most real-world samples are polydisperse, requiring more sophisticated analysis. The cumulants method is a common approach for moderately polydisperse systems, where the logarithm of g¹(τ) is fitted to a polynomial:
ln|g¹(τ)| = -Γτ + (μ₂/2)τ² - ...
Here, Γ is the average decay rate, and μ₂ is the variance, related to the polydispersity index (PDI) by PDI = μ₂/Γ². The PDI quantifies the breadth of the distribution, with values below 0.05 indicating near-monodisperse samples, 0.05–0.7 suggesting moderate polydispersity, and higher values reflecting very broad or multimodal distributions. However, the cumulants method assumes a unimodal distribution and becomes unreliable for highly polydisperse or multimodal systems.
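The cumulants fit can be sketched with a synthetic field ACF built from a few discrete decay rates (all values hypothetical): fitting a second-order polynomial to ln|g¹(τ)| yields Γ, μ₂, and the PDI.

```python
import numpy as np

# Synthetic field ACF from a few discrete decay rates (values hypothetical),
# standing in for a moderately polydisperse sample.
tau = np.linspace(1e-6, 8e-4, 400)           # delay times, s
gammas = np.array([2000.0, 2500.0, 3000.0])  # decay rates, 1/s
weights = np.array([0.3, 0.4, 0.3])
g1 = (weights[:, None] * np.exp(-np.outer(gammas, tau))).sum(axis=0)

# Second-order cumulants fit: ln|g1| ~ -Gamma*tau + (mu2/2)*tau^2
c2, c1, _ = np.polyfit(tau, np.log(g1), 2)
Gamma_bar = -c1             # average decay rate, 1/s
mu2 = 2.0 * c2              # variance of the decay-rate distribution
pdi = mu2 / Gamma_bar**2    # polydispersity index
```

For this example the fit recovers the true intensity-weighted mean rate (2500 1/s) and a PDI near 0.024, i.e. a nearly monodisperse sample by the scale above. In practice the fit range should be restricted to where g¹ is well above the noise floor, since the expansion is only valid for small Γτ.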

For complex distributions, inverse Laplace transform or non-negative least squares (NNLS) algorithms are used to reconstruct size distributions. These methods convert the ACF into an intensity-weighted distribution, where each particle contributes according to its scattering intensity. Since light scattering scales with particle size (approximately ∝ d⁶ for Rayleigh scatterers), larger particles dominate the signal. This can mask smaller populations, leading to misinterpretation if not properly accounted for.
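A bare-bones version of this inversion, without the regularization that production packages such as CONTIN add, can be sketched with SciPy's NNLS solver applied to a noiseless synthetic bimodal ACF (the decay rates and grids below are assumed):

```python
import numpy as np
from scipy.optimize import nnls

# Unregularized NNLS sketch: model g1(tau) as a non-negative sum of
# exponentials over a fixed decay-rate grid. Real packages regularize this
# ill-posed inversion; with noisy data the bare solution is unstable.
tau = np.logspace(-6, -2, 80)        # delay times, s
gamma_grid = np.logspace(2, 5, 60)   # candidate decay rates, 1/s

# Noiseless synthetic bimodal sample (hypothetical decay rates)
true_gamma = np.array([5e2, 2e4])
true_w = np.array([0.5, 0.5])
g1 = (true_w[:, None] * np.exp(-np.outer(true_gamma, tau))).sum(axis=0)

# Kernel K[i, j] = exp(-gamma_j * tau_i); solve min ||K w - g1|| with w >= 0
K = np.exp(-np.outer(tau, gamma_grid))
w, rnorm = nnls(K, g1)
```

On clean data the recovered weights concentrate near the two true decay rates; the resulting decay-rate distribution maps to an intensity-weighted size distribution through Γ = Dq² and Stokes-Einstein.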

Intensity-weighted distributions can be converted to volume-weighted or number-weighted distributions using Mie theory or other light scattering models. Volume-weighting reduces the bias toward large particles, providing a more realistic view of the sample composition. However, this conversion requires knowledge of the optical properties (refractive index, absorption) of both particles and solvent, introducing potential errors if these parameters are inaccurately known.
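In the Rayleigh regime the conversion reduces to dividing out powers of the diameter; the following sketch (with a hypothetical bimodal sample) shows how two peaks of equal scattered intensity translate into very different volume and number fractions:

```python
import numpy as np

# Rayleigh-regime sketch (particles much smaller than the wavelength):
# per particle, scattered intensity ~ d^6 and volume ~ d^3, so dividing the
# intensity weights by d^3 gives volume weighting and by d^6 gives number
# weighting. For larger particles the full Mie solution is required.
d = np.array([20.0, 200.0])          # diameters, nm (hypothetical bimodal)
intensity_w = np.array([0.5, 0.5])   # equal intensity-weighted peak areas

volume_w = intensity_w / d**3
volume_w /= volume_w.sum()
number_w = intensity_w / d**6
number_w /= number_w.sum()
# Equal intensity peaks: by volume the 20 nm population dominates, and by
# number it outnumbers the 200 nm population about a million to one.
```

This is the d⁶ bias in action: an intensity distribution showing two equal peaks can describe a sample that is almost entirely small particles by number.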

A critical challenge in DLS analysis is distinguishing true multimodal distributions from artifacts. Dust or aggregates can create false peaks, while weak signals from small particles may be obscured by noise. The presence of multiple peaks requires careful validation, often through complementary techniques like electron microscopy. Additionally, the technique struggles with samples containing very large (>1 µm) or very small (<1 nm) particles due to limitations in sensitivity and resolution.

The polydispersity index (PDI) is a useful metric but has limitations. While it indicates distribution breadth, it does not reveal shape or modality. A high PDI could result from a single broad peak or multiple narrow peaks. Furthermore, the PDI is sensitive to measurement conditions such as concentration, temperature, and solvent viscosity, requiring strict control for reproducibility.
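The shape-blindness of the PDI is easy to demonstrate numerically: computing PDI = μ₂/Γ̄² directly from the moments of two hypothetical decay-rate distributions, one broad and unimodal and one bimodal, can give essentially the same value.

```python
import numpy as np

def pdi_from_rates(gammas, weights):
    """PDI = mu2 / Gamma_bar^2 from a weighted set of decay rates."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    g = np.asarray(gammas, float)
    gamma_bar = (w * g).sum()
    mu2 = (w * (g - gamma_bar) ** 2).sum()
    return mu2 / gamma_bar**2

# One broad unimodal distribution of decay rates (hypothetical values) ...
g_uni = np.linspace(1500.0, 3500.0, 201)
w_uni = np.exp(-0.5 * ((g_uni - 2500.0) / 500.0) ** 2)

# ... and two narrow modes chosen to give nearly the same PDI.
g_bi = np.array([2060.0, 2940.0])
w_bi = np.array([0.5, 0.5])

pdi_broad = pdi_from_rates(g_uni, w_uni)
pdi_bimodal = pdi_from_rates(g_bi, w_bi)
```

Both come out near 0.031, so the PDI alone cannot tell a broad single population from two narrow ones; a full distribution fit or a complementary technique is needed.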

Software tools for DLS data analysis vary in complexity and capability. Common commercial packages include:
- Malvern Zetasizer software (includes NNLS and multimodal fitting)
- Brookhaven Instruments Particle Solutions (supports cumulants and CONTIN algorithm)
- Wyatt Technology DYNAMICS (specialized for biomolecules and polymers)
- Dispersion Technology Software (focuses on stability and aggregation studies)

Open-source alternatives like PyCorrFit and DDLS provide customizable analysis options but may require more user expertise. These tools typically offer options for smoothing data, selecting fitting algorithms, and converting between intensity-, volume-, and number-weighted distributions.

A practical consideration in DLS is sample preparation. Aggregation can occur due to improper dispersion, high concentration, or unsuitable solvents, leading to skewed results. Dilution, sonication, or filtration may be necessary to ensure accurate measurements. Additionally, viscosity and refractive index must be precisely input for correct size calculation, as errors here propagate directly into hydrodynamic diameter estimates.
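The viscosity sensitivity is straightforward to quantify: the fitted diffusion coefficient does not depend on the viscosity entered into the software, so the reported Dh scales as 1/η. A sketch with illustrative values:

```python
import math

# Dh = kT / (3*pi*eta*D). The fitted diffusion coefficient D is independent
# of the viscosity typed into the software, so Dh scales as 1/eta: a +5%
# viscosity error shifts the reported size by about -4.8%. All values below
# are illustrative.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 298.15           # K
D = 4.3e-12          # fitted diffusion coefficient, m^2/s (assumed)

def hydrodynamic_diameter(eta):
    return k_B * T / (3.0 * math.pi * eta * D)

eta_true = 0.89e-3           # Pa*s, water at 25 C
eta_wrong = 1.05 * eta_true  # 5% overestimate of viscosity
rel_error = hydrodynamic_diameter(eta_wrong) / hydrodynamic_diameter(eta_true) - 1.0
```

Temperature errors act twice: directly through T in the numerator and indirectly through the strong temperature dependence of η, which is why thermal equilibration before measurement matters.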

In summary, DLS data processing involves autocorrelation analysis, fitting procedures like cumulants or NNLS, and distribution conversions to extract meaningful size information. The PDI offers a quick assessment of polydispersity but must be interpreted cautiously. Challenges arise with multimodal samples, aggregation artifacts, and intensity-weighting biases, necessitating careful experimental design and sometimes complementary characterization. Modern software tools streamline analysis but require informed operation to avoid misinterpretation. Understanding these principles ensures reliable nanoparticle sizing for applications ranging from drug delivery to materials science.