Platform Grouping
The platform grouping is the Team Topologies (2nd edition) term for a collection of platform teams that together provide a coherent internal platform product. Each platform team within the grouping owns a distinct context; together they expose a single, consistent interface to stream-aligned teams.
Teams
Logos
The foundational principle of order across systems: integrating multi-provider infrastructure, establishing boundaries, governance, and stable standards for teams to operate autonomously.

Corpus
The embodiment of that order: the structural form where networks, shared services, and core infrastructure take shape, preparing the body that Pneuma will animate.

Pneuma
The breath of life animating the platform via Kubernetes: orchestrating dynamic, self-healing, and scalable services atop the Logos foundation.

Arche
The origin and first cause: the primordial source from which all platform foundations draw their initial form and essential nature.

Ekklesia
The assembly of the called-out: where distinct capabilities are gathered into a unified body, deliberating and acting in concert toward shared platform purpose.

Kryptos
The hidden foundation of platform security: managing cryptographic primitives, secrets infrastructure, and security controls that underpin all teams on the platform.

Techne
The practiced art of making: the disciplined craft through which raw materials of infrastructure are shaped into purposeful, refined platform instruments.

Team context
Each team owns a distinct context with explicit upstream/downstream relationships.
Team dependencies
The primary flow is a supply chain: Logos feeds team and identity data into Corpus, which feeds networking and project infrastructure into Pneuma. Arche, Ekklesia, and Techne are shared services used by all platform teams.
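The supply chain above can be sketched as a small dependency graph. The team names come from this page; the code itself is an illustrative model, not part of the platform.

```python
from graphlib import TopologicalSorter

# Each key depends on the teams in its value set, per the supply chain:
# Logos feeds Corpus, which feeds Pneuma.
dependencies = {
    "Corpus": {"Logos"},   # team and identity data
    "Pneuma": {"Corpus"},  # networking and project infrastructure
}

# Shared services consumed by every team, outside the primary flow.
shared_services = ["Arche", "Ekklesia", "Techne"]

# Topological order gives the sequencing for cross-team changes.
order = list(TopologicalSorter(dependencies).static_order())
print(order)  # ['Logos', 'Corpus', 'Pneuma']
```

A topological sort like this is also a quick sanity check that no circular dependency has crept into the supply chain.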
Team Topologies
Interaction Modes
Team Topologies defines three interaction modes: X-as-a-Service (consume without collaboration), Collaboration (work together temporarily to solve a problem), and Facilitating (help another team improve capability). Collaboration is always time-boxed; the goal is to transition to X-as-a-Service once the consuming team is self-sufficient.
| Team | Steady-State Mode |
|---|---|
| Logos | 🔵 X-as-a-Service |
| Corpus | 🔵 X-as-a-Service |
| Pneuma | 🔵 X-as-a-Service |
| Arche | 🔵 X-as-a-Service |
| Ekklesia | 🔵 X-as-a-Service |
| Kryptos | 🔵 X-as-a-Service |
| Techne | 🔵 X-as-a-Service |
🔵 X-as-a-Service · 🟡 Collaboration · 🟢 Facilitating
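The time-boxing rule for Collaboration can be sketched as a simple state check. The function and dates below are hypothetical illustrations, not part of Team Topologies itself.

```python
from datetime import date, timedelta

MODES = {"x-as-a-service", "collaboration", "facilitating"}

# Hypothetical sketch: every Collaboration engagement carries an explicit
# end date, after which the relationship reverts to X-as-a-Service.
def next_mode(mode: str, time_box_end: "date | None", today: date) -> str:
    assert mode in MODES
    if mode == "collaboration" and time_box_end is not None and today >= time_box_end:
        # Time box expired: the consuming team is assumed self-sufficient.
        return "x-as-a-service"
    return mode

start = date(2025, 1, 1)  # illustrative dates only
print(next_mode("collaboration", start + timedelta(weeks=6), date(2025, 3, 1)))
# prints "x-as-a-service"
```

Making the end date a required part of declaring a Collaboration keeps the time box from silently becoming permanent.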
Cognitive Load
Team Topologies distinguishes three types of cognitive load: intrinsic (inherent domain complexity), extraneous (friction from poor tooling), and germane (productive expertise-building). The platform is designed to eliminate extraneous load through shared automation (Arche, Techne), so each team's cognitive budget is spent entirely on intrinsic and germane load.
| Team | Working Domains | High Intrinsic Domains |
|---|---|---|
| Ekklesia | 🟢 1 / 4 | 🟢 0 / 3 |
| Techne | 🟢 2 / 4 | 🟢 0 / 3 |
| Arche | 🟢 3 / 4 | 🟢 1 / 3 |
| Kryptos | 🟢 2 / 4 | 🟡 2 / 3 |
| Logos | 🟠 4 / 4 | 🟢 0 / 3 |
| Corpus | 🟠 4 / 4 | 🟢 1 / 3 |
| Pneuma | 🔴 5 / 4 · ADR | 🟠 3 / 3 |
🟢 within limit · 🟡 approaching · 🟠 at limit · 🔴 over limit
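A minimal sketch of the limit check behind the table, using the working-domain counts above. The flagging rule (count at or above the limit) is an assumption for illustration, not a published formula.

```python
# Working-domain counts copied from the table: (count, limit) per team.
teams = {
    "Ekklesia": (1, 4), "Techne": (2, 4), "Arche": (3, 4),
    "Kryptos": (2, 4), "Logos": (4, 4), "Corpus": (4, 4),
    "Pneuma": (5, 4),
}

# Flag any team whose count meets or exceeds its limit (assumed rule).
flagged = {name: f"{n}/{limit}" for name, (n, limit) in teams.items() if n >= limit}
print(flagged)  # {'Logos': '4/4', 'Corpus': '4/4', 'Pneuma': '5/4'}
```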
Team Capacity
Internally, each platform engineer specializes in one team's context; externally, the platform grouping presents a coherent interface to stream-aligned teams: consistent tooling, documentation, and services regardless of which team delivers them.
Headcount is derived from the cognitive load analysis. When operating within capacity, a team requires one platform engineer to maintain and evolve its scope. A team approaching or at its limit is a candidate for additional capacity or scope reduction. Any team flagged 🔴 over limit is the highest priority for intervention: a second engineer, scope reduction, or tooling investment to lower extraneous load.
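The staffing rule above might be encoded as a simple lookup. The status labels follow the legend; each recommendation paraphrases the prose and is illustrative only.

```python
# Hypothetical mapping from cognitive-load status to staffing action.
STAFFING_ACTIONS = {
    "within limit": "1 engineer maintains and evolves the scope",
    "approaching": "candidate for added capacity or scope reduction",
    "at limit": "candidate for added capacity or scope reduction",
    "over limit": "highest priority: add an engineer, cut scope, or invest in tooling",
}

def staffing_action(status: str) -> str:
    return STAFFING_ACTIONS[status]

print(staffing_action("over limit"))
```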
Platform Lead
A single Platform Lead spans all teams. This role does not belong to any one team; it exists above them. On this platform, the Platform Lead also serves as the Product Manager, owning both the technical direction and the platform roadmap.
Responsibilities:
- Owns the platform roadmap and prioritizes work based on stream-aligned team needs
- Interfaces with stream-aligned team leads and engineering leadership to inform that roadmap
- Owns cross-team dependency sequencing (Logos → Corpus → Pneuma)
- Ratifies Architecture Decision Records (ADRs) across all teams
- Unblocks cross-team decisions that no single platform engineer can resolve
- Allocates capacity across staffed teams based on platform demand
Platform Engineers
Each staffed team starts with one platform engineer who owns the team's context end-to-end. Teams can scale beyond one engineer as cognitive load demands; the cognitive load analysis is the guide for when to add capacity.
| Team | Min. Engineers | Role |
|---|---|---|
| Logos | 1 | Owns org structure, identity, GitHub, and Datadog team management |
| Corpus | 1 | Owns GCP projects, shared VPC, state buckets, and workload identity |
| Pneuma | 1 | Owns GKE clusters, service mesh, policy enforcement, and cluster add-ons; currently flagged 🔴 over limit, candidate for a second engineer |
| Kryptos | 1 | Owns secrets infrastructure, PKI, and cryptographic controls |
| Arche | – | Inner source: no dedicated engineer |
| Ekklesia | – | Inner source: no dedicated engineer |
| Techne | – | Inner source: no dedicated engineer |
Total: 4–5 engineers + 1 Platform Lead (minimum staffing; scale per cognitive load analysis)
Inner Source Model
Arche, Ekklesia, and Techne operate without dedicated engineers. Instead, they run as inner source repositories, open for contribution from any engineer on the platform or from stream-aligned teams.
How it works:
- Any engineer may open a pull request to an inner source repo
- Platform engineers from staffed teams (Logos, Corpus, Pneuma, Kryptos) serve as code owners and reviewers
- The Platform Lead has final approval authority on structural or architectural changes
- Stream-aligned teams can unblock themselves by contributing fixes or enhancements directly, rather than filing tickets and waiting
This model distributes platform knowledge across the organization, reduces bottlenecks on the staffed teams, and ensures inner source repos evolve with the needs of their consumers rather than through a centralized backlog.
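Assuming the inner source repos live on GitHub, the review flow above could be expressed with a CODEOWNERS file. Every path and team handle below is hypothetical.

```
# Hypothetical CODEOWNERS for an inner source repo (e.g. Arche).
# Team handles and paths are illustrative, not real accounts.

# Platform engineers from the staffed teams review all changes by default.
*               @platform/logos @platform/corpus @platform/pneuma @platform/kryptos

# Structural or architectural changes (e.g. ADRs) also require the Platform Lead.
/docs/adr/      @platform/lead
```

GitHub then requests reviews from the matching owners automatically on every pull request, which is what lets stream-aligned teams contribute without a ticket queue.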