- IDP adoption failure is almost always a product problem, not an engineering problem — platform teams that don't treat developers as users consistently fail
- The "golden path" concept works when it reduces genuine toil; it fails when it enforces unnecessary standardization
- Backstage is the dominant IDP framework but requires significant investment to be useful — evaluate Port and Cortex as alternatives
- Measure IDP success by developer time saved per week, not by platform features shipped
Section 1 — Why Most IDPs Fail
Platform engineering emerged as a discipline to solve a real problem: as organizations grow, the operational complexity of deploying, monitoring, and managing services becomes a significant drag on developer productivity. The platform team's job is to abstract that complexity behind self-service interfaces, so application developers can move fast without becoming infrastructure experts.
The theory is sound. The practice is frequently dysfunctional. Platform teams at organizations with failing IDPs share common failure modes: they build features based on what they personally find interesting to engineer rather than what reduces developer toil, they require developer adoption without demonstrating clear value first, and they measure success by platform uptime rather than developer time saved.
The organizations with successful IDPs treat the platform as a product. They have a platform PM who interviews developer users, maintains a backlog prioritized by developer time saved, and measures adoption metrics weekly.
Section 2 — The Golden Path: What It Is and Is Not
The golden path is the opinionated, pre-paved route for creating and deploying a service. At Spotify (where the concept was formalized), the golden path meant a developer could run a single command and have a production-ready service with CI/CD, observability, feature flags, and documentation scaffolded in under five minutes.
```shell
# Example: platform CLI that provisions a golden-path service
platform new-service --template=api-service --name=user-notifications
# This generates:
# - GitHub repository with the standard layout
# - Dockerfile + docker-compose.yml
# - GitHub Actions CI pipeline
# - Kubernetes manifests (Helm chart, namespace, RBAC)
# - OpenTelemetry instrumentation pre-configured
# - Backstage catalog entry registered
# - PagerDuty service created
# - Datadog dashboard scaffolded
```

The generated service skeleton looks like:

```yaml
service:
  name: user-notifications
  owner: team-messaging
  language: typescript
  template_version: "3.2.1"
  golden_path:
    ci: github-actions-standard
    runtime: kubernetes-standard
    observability: otel-grafana
    secrets: vault-k8s-sidecar
    feature_flags: unleash
```
The golden path works when it eliminates decisions that are genuinely not worth making at the team level — CI pipeline configuration, Kubernetes namespace setup, observability boilerplate. It fails when it enforces architectural decisions that should be team-local — service framework choice, database ORM, test library preferences. Successful platform teams distinguish between infrastructure standardization (good, enforced) and application standardization (often bad, suggested but optional).
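One way to make the enforced-versus-suggested split concrete is to encode it in the template manifest itself. A minimal sketch, assuming a platform that models manifests in TypeScript; the type names, default values, and merge helper here are hypothetical, not part of any specific IDP:

```typescript
// Hypothetical golden-path manifest: infrastructure choices are fixed by
// the platform, while application-level choices are suggested defaults
// that teams may override.
interface GoldenPathManifest {
  // Enforced: infrastructure standardization, not overridable per team.
  readonly infrastructure: {
    ci: 'github-actions-standard';
    runtime: 'kubernetes-standard';
    observability: 'otel-grafana';
  };
  // Suggested: application standardization, pre-filled but team-local.
  application: {
    httpFramework?: string; // scaffolded default, e.g. a framework choice
    orm?: string;
    testRunner?: string;
  };
}

// Merge team overrides into the manifest. Only the `application` block is
// touched, so attempts to change enforced infrastructure keys are dropped.
function applyOverrides(
  base: GoldenPathManifest,
  overrides: Partial<GoldenPathManifest['application']>,
): GoldenPathManifest {
  return { ...base, application: { ...base.application, ...overrides } };
}
```

The design point is that the boundary between "enforced" and "suggested" lives in the type, so it is visible in code review when someone tries to move a decision from one side to the other.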
Section 3 — IDP Tooling Comparison
| Tool | Maturity | Customization | Adoption Effort | Best For |
|---|---|---|---|---|
| Backstage (Spotify) | High — CNCF incubating | Excellent — plugin ecosystem | High — requires significant eng | Large orgs with platform eng team |
| Port | Medium — growing fast | Good — no-code config | Low — days not months | Mid-size teams, fast time-to-value |
| Cortex | Medium | Good | Medium | Service catalog focus |
| Humanitec | Medium | Good — config-driven | Medium | K8s-centric, Score-based |
| Custom-built | Varies | Total control | Very high | Unique requirements, large platform team |
Section 4 — Backstage in Practice
Backstage is the most widely deployed IDP framework, but the gap between "we have Backstage" and "Backstage is useful" is enormous. A default Backstage install is essentially empty: the value comes entirely from the plugins and integrations your platform team builds and maintains.
```typescript
// Backstage: creating a custom scaffolder action for your platform
// packages/backend/src/plugins/scaffolder/actions/createService.ts
import { createTemplateAction } from '@backstage/plugin-scaffolder-backend';
import { z } from 'zod';

// Note: ctx.github, ctx.platformApi, and ctx.catalog below are
// illustrative platform-specific clients, not part of the stock
// scaffolder action context (which exposes ctx.input, ctx.logger,
// ctx.output, ctx.workspacePath, etc.). In a real action you would
// construct these clients yourself and close over them here.
export const createPlatformServiceAction = () =>
  createTemplateAction<{
    serviceName: string;
    owner: string;
    language: 'typescript' | 'python' | 'go' | 'rust';
    template: string;
  }>({
    id: 'platform:service:create',
    description: 'Creates a new service via the platform golden path',
    schema: {
      input: z.object({
        // Kubernetes-safe name: lowercase, alphanumeric plus hyphens
        serviceName: z.string().regex(/^[a-z][a-z0-9-]*$/),
        owner: z.string(),
        language: z.enum(['typescript', 'python', 'go', 'rust']),
        template: z.string(),
      }),
    },
    async handler(ctx) {
      const { serviceName, owner, language, template } = ctx.input;
      // 1. Create the GitHub repository
      await ctx.github.createRepository({ name: serviceName, owner });
      // 2. Apply the golden path template
      await ctx.platformApi.applyTemplate({
        service: serviceName,
        template,
        language,
      });
      // 3. Register the new service in the catalog
      await ctx.catalog.registerEntity({
        apiVersion: 'backstage.io/v1alpha1',
        kind: 'Component',
        metadata: {
          name: serviceName,
          annotations: { 'github.com/project-slug': `org/${serviceName}` },
        },
        spec: { type: 'service', lifecycle: 'experimental', owner },
      });
      ctx.logger.info(`Service ${serviceName} created successfully`);
    },
  });
```
The minimum viable Backstage installation that developers actually use requires: software catalog with >80% of services registered, working tech docs integration, at least 2–3 scaffolder templates for your most common service types, and a plugin for your CI/CD system showing build status. Everything else is nice-to-have.
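The ">80% of services registered" bar can be checked mechanically. A sketch, assuming you can export the list of deployed services (e.g. from your deploy system) and the component names currently in the Backstage catalog; the helper below is illustrative, not a Backstage API:

```typescript
// Compute catalog coverage: what fraction of known deployed services
// have a corresponding catalog entry, and which ones are missing?
function catalogCoverage(
  deployedServices: string[],
  catalogComponents: string[],
): { coverage: number; missing: string[] } {
  const registered = new Set(catalogComponents);
  const missing = deployedServices.filter(s => !registered.has(s));
  const coverage =
    deployedServices.length === 0
      ? 1 // vacuously covered when nothing is deployed
      : (deployedServices.length - missing.length) / deployedServices.length;
  return { coverage, missing };
}

// Usage: run this in a weekly platform health check and alert when
// coverage drops below the 0.8 bar.
const { coverage, missing } = catalogCoverage(
  ['user-notifications', 'billing-api', 'search', 'auth', 'emailer'],
  ['user-notifications', 'billing-api', 'search', 'auth'],
);
// coverage = 0.8; missing = ['emailer']
```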
The biggest mistake platform teams make is launching with a comprehensive platform and expecting adoption. Instead: identify the single most painful thing developers do manually (usually: setting up a new service, or debugging a failing deployment). Automate that one thing extremely well. Adoption follows when the platform is visibly better than the manual process for something developers do frequently.
Section 5 — Measuring Platform Engineering Success
Platform teams that survive executive scrutiny measure the right things. Developer time saved per week (measured via survey before/after specific automation) is the primary metric. Secondary metrics: onboarding time for new engineers (days to first production deployment), incident mean time to resolution (MTTR), and deployment frequency.
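The primary metric is simple to compute once the survey data exists. A sketch, assuming each response reports hours per week spent on a task before and after an automation shipped; the data shape is hypothetical:

```typescript
// One survey response about a specific manual task.
interface TaskSurveyResponse {
  task: string; // e.g. 'create new service'
  hoursPerWeekBefore: number;
  hoursPerWeekAfter: number;
}

// Aggregate total hours saved per week across all respondents, per task.
function hoursSavedPerWeek(
  responses: TaskSurveyResponse[],
): Map<string, number> {
  const saved = new Map<string, number>();
  for (const r of responses) {
    const delta = r.hoursPerWeekBefore - r.hoursPerWeekAfter;
    saved.set(r.task, (saved.get(r.task) ?? 0) + delta);
  }
  return saved;
}

// Usage: two respondents report the service-creation task shrinking
// from hours of manual setup to minutes via the golden path.
const saved = hoursSavedPerWeek([
  { task: 'create new service', hoursPerWeekBefore: 3, hoursPerWeekAfter: 0.5 },
  { task: 'create new service', hoursPerWeekBefore: 2, hoursPerWeekAfter: 0.5 },
]);
// saved.get('create new service') === 4
```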
Vanity metrics that do not correlate with platform value: number of plugins shipped, platform uptime (high availability is table stakes, not evidence of value), and number of registered catalog entries. These are easy to measure but say nothing about whether the platform is improving developers' lives.
Run a developer satisfaction survey quarterly, keyed to specific platform capabilities: "How satisfied are you with the process of creating a new service? (1–5)". Track trends. If a capability's score is not improving after two quarters, treat it as a product failure and redesign it.
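The "two quarters without improvement" rule is easy to automate over survey history. A sketch, assuming quarterly mean scores are stored per capability, oldest first; the data shape and function name are hypothetical:

```typescript
// Quarterly mean satisfaction score (1-5) per platform capability,
// ordered oldest to newest.
type ScoreHistory = Record<string, number[]>;

// Flag capabilities whose score has not improved in any of the last
// `window` quarter-over-quarter steps: candidates for redesign.
function stalledCapabilities(history: ScoreHistory, window = 2): string[] {
  return Object.entries(history)
    .filter(([, scores]) => {
      if (scores.length < window + 1) return false; // not enough data yet
      const recent = scores.slice(-(window + 1));
      // "Not improving": every step in the window is flat or declining.
      return recent.every((s, i) => i === 0 || s <= recent[i - 1]);
    })
    .map(([capability]) => capability);
}
```

A list like `['create new service']` coming out of this check is the product-failure signal the survey exists to produce: it names the capability to redesign, not just a trend line.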
Verdict
Platform engineering is one of the highest-leverage investments an organization of 100+ engineers can make, but only when executed with a product mindset. Start with a platform team of three (two engineers, one PM/TPM), identify the top three developer pain points through structured interviews, and automate those specifically before building any general infrastructure. Adopt Backstage only if you have dedicated platform engineering headcount to maintain it; Port or a simpler custom portal may be more appropriate for smaller organizations.
Data as of March 2026.
— iBuidl Research Team