This article was originally published on the personal blog of the founder of CrossWise InfoTech and is republished here with edits.
I. Introduction
OWASP Top 10 2025 Risk #6 — “Insecure Design” — represents a true intersection between software engineering and cybersecurity, yet it remains the most common “gray area” where both sides tend to deflect responsibility or selectively ignore the issue.
Cybersecurity practitioners often label it as “a developer’s problem,” arguing that surgical, tactical security measures are largely ineffective against design flaws deeply embedded in business logic. Conversely, software engineers habitually respond: “If the design is flawed, it’s because the business demanded it—it’s not a vulnerability.” The unspoken message? Business comes first; security can wait.
This cognitive disconnect—and the failure of both sides to grasp the root cause—is why A06 persists.
Insecure Design is not merely a coding defect. It is a structural risk seeded as early as the requirements analysis phase. The crux lies in whether the systems analyst conducting requirements elicitation possesses sufficient risk-awareness and the ability to infer threat vectors from business logic—only then can design and implementation teams respond cohesively.
II. Typical Scenarios Leading to Insecure Design
I present two real-world cases encountered personally:
1) Experience-Based Inertia
In Industrial IoT (IIoT), a common design inertia stems from the assumption that devices reside in physically isolated internal networks. Developers thus transmit control commands (e.g., start/stop, parameter adjustments) in plaintext over the network. While risky even in closed environments, the limited attack surface renders it a “manageable risk.”
However, when this same logic is directly ported to consumer-facing, internet-connected IoT systems, uncontrolled risks emerge. Attackers can enumerate API endpoints, replay requests, or tamper with parameters to bypass authentication and directly manipulate devices.
For example, consider this device API:
POST /api/v1/device/12345/control
Body: { "action": "set_max_op_mins", "value": 30 }
If the API lacks proper access controls on the device ID (12345)—such as verifying user identity, permissions, or requiring a valid access token (clearly absent here)—attackers may brute-force device IDs, reconstruct valid payloads, and remotely control devices, causing physical damage and financial loss.
Critically, if no authorization model was ever designed, this isn’t “broken access control”—it’s Insecure Design by omission. Code audits can confirm this distinction.
The root cause isn’t poor coding skill, but design inertia: the failure to treat “resource ownership” and “operation authorization” as core business rules during system modeling.
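The missing design element can be made concrete. The following is a minimal sketch, in Python, of the authorization model that Case 1 never designed; all names (`DEVICE_OWNERS`, `control_device`, the action list) are hypothetical illustrations, not part of any real API:

```python
# Sketch: resource ownership and operation authorization treated as
# core business rules, checked before any control command is honored.
# All identifiers here are hypothetical examples.

DEVICE_OWNERS = {"12345": "alice"}  # device_id -> owning account
ALLOWED_ACTIONS = {"set_max_op_mins", "start", "stop"}

class AuthorizationError(Exception):
    pass

def control_device(user: str, device_id: str, action: str, value: int) -> str:
    # Resource ownership: the caller must own the device it addresses.
    if DEVICE_OWNERS.get(device_id) != user:
        raise AuthorizationError("caller does not own this device")
    # Operation authorization: only explicitly permitted actions pass.
    if action not in ALLOWED_ACTIONS:
        raise AuthorizationError("action not permitted")
    return f"device {device_id}: {action}={value}"
```

With such a model in place, brute-forcing device IDs yields only authorization errors; without it, there is nothing for a later code audit to find “broken,” because nothing was ever designed.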
2) Risk-Embedded Business Models
Beyond violations of explicit security principles, Insecure Design also manifests more subtly:
When the business logic model itself constitutes a risk.
Such issues are hard to detect but must be identified proactively during analysis—not deferred to design, implementation, or post-deployment discovery, where remediation may be catastrophic.
Take an O2O online booking flow:
users proceed through steps like selecting a store → service type → parameters → time slot → payment method → payment initiation → post-payment processing. Each step triggers at least one API call.
From a pure business view, these steps seem necessary to guide users smoothly through the journey. Aligning APIs with this flow appears “logical.”
But from a risk perspective, this design expands the attack surface to N independently exposed endpoints, where N is the number of APIs in the flow. A single vulnerable endpoint can compromise the entire transaction.
III. Solutions from a Software Engineering Perspective
Both examples demand architectural and process-level fixes—no “surgical” security tool can compensate.
For Case 1: Obfuscate sensitive parameters and implement robust authentication/authorization.
Critics may ask: “Is the added complexity of obfuscation actually justified?”
Yes—and crucially, obfuscation is just one means to achieve Confidentiality (C). The CIA triad (Confidentiality, Integrity, Availability) must be balanced:
- Integrity: Sign API payloads with digital signatures, or apply HMAC to critical IDs to prevent tampering.
- Availability: Avoid over-engineering. A signature valid for only ten minutes doesn’t need long keys or multiple validation algorithms.
Patterns like JWT or external-to-internal ID mapping are viable—but selection must follow a holistic CIA assessment.
For Case 2: Replace step-by-step APIs with a single “fetch service configuration” endpoint. Let the frontend handle all user interactions, and submit the complete order only at payment initiation, with full backend validation.
This approach requires software engineers to deeply understand business logic during analysis, avoid rigid adherence to legacy workflows, and recognize that decoupling frontend logic from backend APIs enhances both flexibility and robustness.
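A minimal sketch of this consolidated design might look as follows; the catalog data and function names are hypothetical, and a real system would add authentication, persistence, and payment integration on top:

```python
# Sketch of Case 2's fix: one read-only configuration endpoint plus one
# order-submission endpoint that re-validates everything server-side.
# Catalog contents and names are illustrative only.

CATALOG = {
    "store-1": {
        "services": {"haircut": {"price": 30, "slots": ["10:00", "14:00"]}},
        "payments": ["card", "wallet"],
    }
}

def fetch_service_config(store_id: str) -> dict:
    # Single call replacing the step-by-step APIs; the frontend walks
    # the user through store/service/slot/payment choices locally.
    return CATALOG[store_id]

def submit_order(order: dict) -> dict:
    # Full backend validation at payment initiation: never trust the
    # frontend's intermediate state.
    store = CATALOG.get(order.get("store_id"))
    if store is None:
        raise ValueError("unknown store")
    svc = store["services"].get(order.get("service"))
    if svc is None:
        raise ValueError("unknown service")
    if order.get("slot") not in svc["slots"]:
        raise ValueError("invalid time slot")
    if order.get("payment") not in store["payments"]:
        raise ValueError("invalid payment method")
    # The price is recomputed server-side, never taken from the client.
    return {"status": "accepted", "amount": svc["price"]}
```

The attack surface shrinks to two endpoints, and every business rule is enforced in exactly one place, which is what makes the design both more flexible and more robust.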
Ultimately, the cost of secure design cannot be judged by short-term ROI. Like database transactions or audit logging, security is not optional—it’s a core quality attribute. As analysts or architects, we must treat security as a non-functional requirement from day one and quantify its impact on reliability and compliance.
IV. Reinterpreting Insecure Design from a Cybersecurity Lens
Many security professionals still equate Insecure Design with “Broken Access Control”—a misconception.
The essence of Insecure Design is the absence of mechanisms to constrain risk generation.
Broken Access Control assumes a permission model exists; Insecure Design is far broader. Even analyzing resource ownership or role-action mappings captures only surface-level risks. True understanding requires a systemic view of risk exposure, as illustrated in Case 2: every additional API compounds systemic fragility.
This blind spot stems from the chasm between security and business. As I’ve argued before:
Security, if not integrated into business, will be marginalized—or ignored entirely.
For executives: Should CIO and CISO remain separate roles? My answer is no.
For practitioners: If your security reviews don’t engage with business-aligned artifacts—system design docs, data flow diagrams, permission models, and functional workflows—you’re not truly integrated.
V. There Are No Shortcuts in Secure Design
The most dangerous mindset in software development is:
“Get it working first; add security later.”
This “agile procrastination” is fatal for Insecure Design. Once data models, APIs, and permission schemes solidify, changes become exponentially costly.
This also impacts economic auditing (a domain I specialize in):
An application patched post-breach does not gain value from security investments—on the contrary, such spending confirms material defects, warranting asset impairment.
In IP valuation or M&A contexts, this can nullify value entirely. Who would buy a system that required massive post-hoc security fixes? How can buyers trust no other critical flaws remain?
Thus, Secure by Design means:
Security is not a feature—it’s a foundational system capability.
It demands that risk awareness permeate our very first UML diagram and use case.
Systems analysts are the first line of defense for cybersecurity.
VI. Where Do Cybersecurity Professionals Go From Here?
Though rooted in my background (IS auditor + systems analyst + software engineering MSc), this discussion holds vital lessons for pure-play security practitioners:
Future-proof security talent must stand at the intersection of software engineering and cybersecurity.
The era of “Burp Suite warriors” brute-forcing APIs is ending—AI will automate repetitive tasks. Organizations now need bridge-builders: professionals who can read architecture diagrams, contribute to requirements reviews, and operationalize threat modeling. In short: those who can judge whether a design introduces unacceptable risk.
Because today, security is inseparable from modern software engineering.

