Why do we measure physical constants,
and how?

A beginner-friendly introduction to the quantitative language of biology

Life runs on chemistry. Whether a protein folds, a drug reaches its target, or a transcription factor switches on a gene, the outcome is set by a small number of physical constants: how tightly molecules stick to each other, how quickly they associate and dissociate, and how stable their structures are.

A physical constant is a compact summary — usually a single number — that lets you predict a system's behavior across many conditions from one well-designed measurement. A single dissociation constant Kd, for example, determines the fraction of a receptor occupied by its ligand at any concentration. It's the biological analog of a model parameter: measure it once, and it describes an entire response surface.
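The receptor-occupancy claim can be made concrete with a few lines of code. This is a minimal sketch of the standard Langmuir relation θ = [L] / (Kd + [L]); the function name and the example Kd and concentrations are illustrative choices, not lab data:

```python
def fraction_bound(ligand_conc_nM: float, kd_nM: float) -> float:
    """Fraction of receptor occupied at a given free-ligand concentration.

    Langmuir isotherm: theta = [L] / (Kd + [L]).
    Assumes simple 1:1 binding at equilibrium, with ligand in excess
    so that free ligand ~ total ligand.
    """
    return ligand_conc_nM / (kd_nM + ligand_conc_nM)

kd = 100.0  # nM, a hypothetical dissociation constant
for conc in [10.0, 100.0, 1000.0]:
    theta = fraction_bound(conc, kd)
    print(f"[L] = {conc:6.1f} nM -> fraction bound = {theta:.2f}")
```

Note the characteristic behavior: occupancy is 0.50 exactly when [L] = Kd, and one measured Kd predicts occupancy at every other concentration.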

These pages explain the four families of constants we measure in the Fordyce lab, how they emerge from simple reaction schemes, how we measure them, and the assumptions and pitfalls to watch for.

Four families of measurements

For a sense of scale: our lab's data portal catalogs over 4.2 million individual measurements of these constants from 13 published studies since 2018, spanning 17 protein systems from transcription factors to phosphatases to signaling peptides.

Why a physicist / AI person might care

These constants are essentially parameters of generative models for how molecules behave. Most of them have a clean statistical-mechanics interpretation:

  • ΔG in kcal/mol is just a log-probability ratio: a 1.4 kcal/mol change in ΔG is a 10× change in equilibrium constant at room temperature.
  • A Langmuir isotherm is a 2-parameter sigmoid; the steepness of its slope sets how much Fisher information a measurement carries about Kd.
  • Measuring ΔΔG rather than absolute ΔG is exactly what you do when you care about the learning signal of a mutation, not the uninteresting protein-wide offset.
  • ML models trained to predict binding or activity from sequence need physical-constant ground truth to train against; the Fordyce-lab datasets are among the largest sources of such labels.
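The first bullet above is easy to verify numerically. A minimal sketch of the relation Keq = exp(−ΔG / RT), using R in kcal/(mol·K) and room temperature (function name is my own):

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol·K)
T = 298.0     # room temperature, K

def keq_from_dG(dG_kcal: float) -> float:
    """Equilibrium constant from free energy: Keq = exp(-dG / RT)."""
    return math.exp(-dG_kcal / (R * T))

# A 1.4 kcal/mol shift in dG multiplies Keq by roughly 10x:
ratio = keq_from_dG(-1.4) / keq_from_dG(0.0)
print(f"Keq ratio for a 1.4 kcal/mol change: {ratio:.1f}")  # ~10.6
```

Equivalently, ΔG = −RT ln Keq, so equal steps in free energy correspond to equal multiplicative steps in the equilibrium constant: exactly the log-probability-ratio reading given above.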

The following pages lay out the derivations, the experimental logic, and example data, with enough rigor that someone coming from an AI background can reason confidently about what the numbers mean and what they don't.