If architecting is socio-technical decision-making, you should be able to observe it in the traces organizations already have. You do not need perfect metrics; you need useful signals.
Why measurement is legitimate (and where to be careful)
Architectural decision-making is not only technical. Industrial survey evidence describes software architecture as an “extensive process” in which “several stakeholders negotiate issues and solutions,” producing “a series of architectural decisions,” and frames practitioners’ experience in terms of “technical and social problems” (Demir et al., 2024, p. 1). It also reports a “universal set of challenges and considerations” transcending specific methodologies or documentation practices (Demir et al., 2024, p. 26).
The caution: these signals support diagnosis and learning, not instant causal claims.
Three practical signals
- Time-to-decision
  - For each ADR: time from “opened” to “accepted/decided.”
  - Use the distribution, not averages: identify the long tail.
- Rework linked to decisions
  - Where rework occurs, ask: “Which decision was misunderstood or unstable?”
  - Use ADR revision history or sprint review evidence to reconstruct.
- Incident linkage
  - Do not claim “architecture caused incident” by default.
  - Instead: “Decision episode created an operational constraint that later surfaced.”
What to do in practice
- Create a decision register (ADRs) with: opened date, decided date, revisions, stakeholders.
- Add tags for “high coordination risk” decisions (many stakeholders; conflicting priorities).
- Review incidents and rework monthly against ADRs: identify patterns, not blame.
- Use the outcome signals to improve boundary objects: what was missing from the artifact?
Bibliography
Demir, M. Ö., Chouseinoglou, O., & Tarhan, A. K. (2024). Factors affecting architectural decision-making process and challenges in software projects: An industrial survey. Journal of Software: Evolution and Process, 36(10), e2703.