Tag: Cybersecurity

  • Post-Conference Reflections: HKCS X HKIB Seminar on Quantum Resilient Finance

    Sender Su, founder of CrossWise InfoTech Limited (the author), attended the seminar “Quantum Resilient Finance – Hong Kong’s Next Frontier”, jointly organized by the Hong Kong Institute of Bankers (HKIB) and the Hong Kong Computer Society (HKCS), on 27 March 2026.

    The conference explored in depth the transformative impact of quantum computing on the financial services industry, focusing on how it reshapes the cybersecurity landscape and emphasizing the importance of adopting Post-Quantum Cryptography (PQC) to ensure the banking sector’s future digital resilience. Speakers and guests came from government departments, statutory technical bodies, higher education institutions, banks, and industry enterprises.

    According to the moderator, the critical nature of the topic drew more than 1,000 registrants in total. The author counted over 700 online participants, and together with the in-person attendance the venue was at full capacity.

    The session began with a speech by an HKIB representative, who pointed out that the banking sector is particularly concerned about quantum computing because financial security is built upon modern cryptography—specifically, the “traditional” asymmetric encryption systems. Emerging financial areas, including Bitcoin, are also based on asymmetric encryption and message digest technologies. The purported rapid decryption capability of quantum computing poses a severe threat to the security foundation of the financial system.

    Subsequently, an HKCS representative opened with the recent, high-profile OpenClaw incident, stating that Q-Day (Note: the day when quantum computers attain sufficient computing power and stability to practically crack today’s mainstream public-key encryption systems. It is viewed as the “Doomsday Clock” of cybersecurity; once it arrives, existing digital trust infrastructures, including banking transactions, digital identities, and blockchains, will face collapse risks) is comparable to the Y2K problem but with more far-reaching and longer-lasting impacts. At the same time, the enactment of Hong Kong’s Protection of Critical Infrastructures (Computer Systems) Ordinance has prompted the industry to pay closer attention to cybersecurity and to strengthen related responsibilities. Focusing on quantum computing is therefore a key step in implementing both cybersecurity and critical-infrastructure regulation.

    Following this, several speakers started with the basics of quantum computing, explaining the differences between quantum and classical computers, as well as concepts like Crypto Agility and Quantum Resilience, and sorted out the latest industry developments regarding PQC algorithms, standards, and compliance requirements.

    The ingenuity of quantum algorithms lies in their potential to solve specific problems more efficiently than classical algorithms, because the superposition and entanglement properties they exploit cannot be efficiently simulated on classical computers. In cybersecurity, quantum technology has also shown strong defensive potential: quantum key distribution (QKD), which secures the key-exchange process, is currently one of the most mature application scenarios.

    So-called post-quantum algorithms are algorithms specifically developed to resist the decryption capabilities of quantum computing. To the author’s knowledge, NIST has released three PQC standards (FIPS 203, FIPS 204, FIPS 205) and is advancing the standardization of further algorithms, including FALCON (FIPS 206) and HQC (FIPS 207). Other countries and regions are formulating related standards as well, most of which reference, and to varying degrees track, these frontier developments.

    In terms of concrete applications, the most prominent area at present is the CA industry, which is progressively shortening the validity period of newly issued TLS/SSL certificates. According to the CA/B Forum announcement, the industry consensus is to reduce the validity of newly issued certificates to just 47 days by 2029, to counter the potential cracking capabilities of quantum computers at that time. The strategy looks radical but is in fact conservative: it shortens the attacker’s time window (and thereby raises the attacker’s cost) to compensate for vulnerability during the algorithm transition period, without disrupting the existing certificate issuance and verification system.
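To make the operational impact of a 47-day validity period concrete, here is a small Python sketch that computes the expiry and latest safe renewal date for a certificate; the seven-day renewal buffer is an illustrative assumption, not an industry figure:

```python
from datetime import date, timedelta

def renewal_schedule(issued: date, validity_days: int,
                     buffer_days: int = 7) -> dict:
    """Compute expiry and the latest safe renewal date for a certificate.

    buffer_days is an illustrative safety margin before expiry; real
    renewal policies vary by organization.
    """
    expiry = issued + timedelta(days=validity_days)
    renew_by = expiry - timedelta(days=buffer_days)
    return {"issued": issued, "expires": expiry, "renew_by": renew_by}

# Example: a 47-day certificate issued on 1 March 2029
schedule = renewal_schedule(date(2029, 3, 1), 47)
# expires 17 April 2029; renew by 10 April 2029
```

At this cadence an organization would renew each certificate roughly eight times a year, which is why automated certificate lifecycle management becomes a prerequisite rather than a convenience.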

    Based on the author’s observation, however, if compatibility and security are to be balanced during the transition phase, the “Hybrid Mode”—running traditional and PQC algorithms side by side in the same system—is the inevitable transition solution. The situation resembles the migration of TLS/SSL certificate signature digests from SHA-1 to SHA-256, when each signed program file carried signatures under both the old and new algorithms.
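As an illustration of the combiner at the heart of hybrid mode, the following Python sketch derives a single session key from a classical and a PQC shared secret. The secrets here are placeholder bytes; a real deployment would obtain them from, for example, an ECDH exchange and an ML-KEM (FIPS 203) encapsulation, and would use a standardized KDF rather than this simplified HMAC step:

```python
import hashlib
import hmac

def combine_shared_secrets(classical_secret: bytes, pqc_secret: bytes,
                           context: bytes = b"hybrid-demo") -> bytes:
    """Derive one session key from both shared secrets.

    Feeding the concatenation of both secrets through an HMAC-based
    extract step means the result stays secure as long as EITHER input
    remains unbroken -- the property hybrid mode aims for.
    """
    return hmac.new(context, classical_secret + pqc_secret,
                    hashlib.sha256).digest()

# Placeholder secrets; in practice these would come from a classical
# key exchange and a PQC key encapsulation respectively.
session_key = combine_shared_secrets(b"\x01" * 32, b"\x02" * 32)
```

The design choice to mix, rather than choose between, the two secrets is what preserves compatibility: a peer that cannot yet validate the PQC half still contributes a classical secret, while quantum resistance is gained wherever both halves are present.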

    A more aggressive strategy is to rebuild quantum-resistant information technology infrastructure from scratch. For resource-rich organizations or governments, this strategy can be implemented for specific use cases. For example, the Singapore Blockchain Ecosystem, led by the Singapore government, plans to adopt PQC algorithms.

    However, generally speaking, quantum-resistant encryption algorithms are still in the research and development phase and are not yet fully mature.

    Conference speakers also pointed out that feasible measures exist in practice to achieve transparent upgrades of traditional encryption systems, such as deploying encryption proxy mechanisms at the connection point between the system and the network: i.e., traditional asymmetric encryption is used between the system and the proxy, while the proxy provides quantum-resistant encryption connections to the external internet. This bridging method is a common means for information systems to cope with the transition between old and new technologies.

    As a professional member of the HKCS with professional backgrounds including CISA, SA, and MSE, the author found the content of this conference quite familiar. Precisely because of this, the author was able to think from the perspective of HKIB attendees: as potential adopters of PQC, they inevitably face the difficulty of how to implement it. The core of implementation lies in fulfilling the most general requirement:

    Focus on implementing countermeasures early and in a timely manner.

    During the Q&A session, attendees raised similar questions, and guests shared their insights. In the author’s view, to concretely implement this, two questions must be clarified in the time dimension:

    Early: How much time in advance?

    Timely: What is the latest deadline?

    The author believes there is currently no standard answer. It depends largely on the progress of quantum computers and PQC algorithms, and on their interplay. Since adopters are neither R&D personnel nor product manufacturers, they cannot move faster than the progress on these two fronts. “Early” therefore requires observers to broaden their information channels and continuously track industry developments: on one hand through IT news and vendor releases, and on the other, more importantly, through the latest research papers in the field. arXiv is an important tool here, and its information is often more cutting-edge than releases from government agencies such as nist.gov.

    Of course, NIST’s role as a PQC technology aggregator should not be underestimated. For instance, the presentation slides from NIST’s 6th PQC Standardization Conference, held in 2025, are available for download and are well worth studying. The first presentation, “NIST PQC Standardization Project,” reviewed the first batch of PQC standards and the subsequent selection processes. Notably, it mentioned SP 800-227 on Key Encapsulation Mechanisms, outlined the US government’s mandatory 2035 deadline for migration to PQC, and introduced the NIST IR 8547 transition guidelines and the NCCoE migration project.

    Reading through the conference agenda, one can also find a presentation titled “Learn about the New NIST SP 800-53 Control Overlays for Securing AI Systems Project,” reminding readers to pay attention to AI system security while focusing on PQC.

    In addition, as a “Super Connector” between mainland China and the world, Hong Kong necessarily references a range of standards. For domestic commercial cryptography, one can follow the progress of bodies such as TC578 (National Technical Committee 578 on Quantum Computing and Metrology Standardization) and the Commercial Cryptography Standard Research Institute. It is worth noting that although Chinese national standards distinguish between mandatory and recommended standards, recommended standards should not be ignored: once incorporated into relevant laws and regulations, they become de facto mandatory and must be implemented.

    As for “timely,” it requires tracking the timetables of various compliance requirements, such as the aforementioned 2035 deadline set by the US government. Notably, in an analysis published on the Google Blog on 25 March 2026, drawing on its continuing investment and progress in quantum computing and PQC, Google argued that the deadline for migrating to PQC algorithms should be brought forward to 2029.

    Thus, regarding what is “timely,” one must maintain a concept of continuous follow-up and dynamic updating.

    If observed from the perspective of gradual adaptation, the most suitable reference is the timetable for the adjustment of TLS/SSL certificate validity periods in the CA industry. As adopters, one should not only adjust internal certificate policies accordingly but also use this as a guide to reasonably arrange budgets, procure mature PQC systems or equipment in a timely manner, update traditional encrypted information systems, and guide the development and construction of new systems.

    For the Hong Kong industry, although the Hong Kong Monetary Authority (HKMA) has not yet released a specific mandatory roadmap for PQC, continuously tracking compliance requirements promulgated by the government and industry and maintaining technological “Agility” will be the best strategy to cope with future uncertainties.

    References:

    https://arxiv.org

    https://quantumalgorithmzoo.org

    https://csrc.nist.gov/events/2025/6th-pqc-standardization-conference

    https://www.nccoe.nist.gov

    http://www.tc578.com.cn/

    https://niccs.org.cn

    https://www.imda.gov.sg/how-we-can-help/blockchain-innovation/singapore-blockchain-ecosystem

    https://blog.google/innovation-and-ai/technology/safety-security/cryptography-migration-timeline

  • Evaluating OWASP Top 10 2025 A06: Insecure Design and the Irreplaceability of Cross-Domain Security Talent — A Perspective from Software Engineering and Cybersecurity, with Real-World Examples

    This article was originally published on the personal blog of the founder of CrossWise InfoTech and is republished here with edits.

    I. Introduction

    OWASP Top 10 2025 Risk #6 — “Insecure Design” — represents a true intersection between software engineering and cybersecurity, yet it remains the most common “gray area” where both sides tend to deflect responsibility or selectively ignore the issue.

    Cybersecurity practitioners often label it as “a developer’s problem,” arguing that surgical, tactical security measures are largely ineffective against design flaws deeply embedded in business logic. Conversely, software engineers habitually respond: “If the design is flawed, it’s because the business demanded it—it’s not a vulnerability.” The unspoken message? Business comes first; security can wait.

    This cognitive disconnect—and the failure of both sides to grasp the root cause—is why A06 persists.

    Insecure Design is not merely a coding defect. It is a structural risk seeded as early as the requirements analysis phase. The crux lies in whether the systems analyst conducting requirements elicitation possesses sufficient risk-awareness and the ability to infer threat vectors from business logic—only then can design and implementation teams respond cohesively.

    II. Typical Scenarios Leading to Insecure Design

    I present two real-world cases encountered personally:

    1) Experience-Based Inertia

    In Industrial IoT (IIoT), a common design inertia stems from the assumption that devices reside in physically isolated internal networks. Developers therefore transmit control commands (e.g., start/stop, parameter adjustments) in plaintext over the network. This is risky even in closed environments, but the limited attack surface renders it a “manageable risk.”

    However, when this same logic is directly ported to consumer-facing, internet-connected IoT systems, uncontrolled risks emerge. Attackers can enumerate API endpoints, replay requests, or tamper with parameters to bypass authentication and directly manipulate devices.

    For example, consider this device API:

    POST /api/v1/device/12345/control
    Body: { "action": "set_max_op_mins", "value": 30 }

    If the API lacks proper access controls on the device ID (12345)—such as verifying user identity, permissions, or requiring a valid access token (clearly absent here)—attackers may brute-force device IDs, reconstruct valid payloads, and remotely control devices, causing physical damage and financial loss.

    Critically, if no authorization model was ever designed, this isn’t “broken access control”—it’s Insecure Design by omission. Code audits can confirm this distinction.

    The root cause isn’t poor coding skill, but design inertia: the failure to treat “resource ownership” and “operation authorization” as core business rules during system modeling.
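A minimal sketch of the missing design element might look like the following Python fragment. The data model and function names are hypothetical, and a real system would sit behind an authenticated session or token layer backed by a database; the point is that resource ownership and operation authorization are modeled as explicit business rules:

```python
# Hypothetical in-memory model; a real system would back this with a
# database and an authenticated session/token layer.
DEVICE_OWNERS = {"12345": "alice"}
USER_PERMISSIONS = {"alice": {"set_max_op_mins"}}

def authorize_control(user: str, device_id: str, action: str) -> bool:
    """Resource ownership AND operation authorization, checked together."""
    if DEVICE_OWNERS.get(device_id) != user:
        return False          # caller does not own this device
    return action in USER_PERMISSIONS.get(user, set())

def handle_control(user: str, device_id: str, action: str, value: int) -> dict:
    """Illustrative handler for POST /api/v1/device/<id>/control."""
    if not authorize_control(user, device_id, action):
        return {"status": 403, "error": "forbidden"}
    # ... apply the command to the physical device here ...
    return {"status": 200, "device": device_id,
            "action": action, "value": value}
```

With this rule in place, brute-forcing device IDs yields nothing: a guessed ID that the caller does not own is rejected before any command reaches the device.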

    2) Risk-Embedded Business Models

    Beyond violations of explicit security principles, Insecure Design also manifests more subtly:

    When the business logic model itself constitutes a risk.

    Such issues are hard to detect but must be identified proactively during analysis—not deferred to design, implementation, or post-deployment discovery, where remediation may be catastrophic.

    Take an O2O online booking flow:

    users proceed through steps like selecting a store → service type → parameters → time slot → payment method → payment initiation → post-payment processing. Each step triggers at least one API call.

    From a pure business view, these steps seem necessary to guide users smoothly through the journey. Aligning APIs with this flow appears “logical.”

    But from a risk perspective, this design expands the attack surface in proportion to the number of APIs: every additional endpoint is another potential entry point, and a single vulnerable one compromises the entire system.

    III. Solutions from a Software Engineering Perspective

    Both examples demand architectural and process-level fixes—no “surgical” security tool can compensate.

    For Case 1: Obfuscate sensitive parameters and implement robust authentication/authorization.

    Critics may ask: “Does obfuscation justify added complexity?”

    Yes—and crucially, obfuscation is just one means to achieve Confidentiality (C). The CIA triad (Confidentiality, Integrity, Availability) must be balanced:

    • Integrity: Sign API payloads with digital signatures, or apply HMAC to critical IDs to prevent tampering.
    • Availability: Avoid over-engineering. A 10-minute-valid signature doesn’t require long keys or multi-algorithm validation.

    Patterns like JWT or external-to-internal ID mapping are viable—but selection must follow a holistic CIA assessment.

    For Case 2: Replace step-by-step APIs with a single “fetch service configuration” endpoint. Let the frontend handle all user interactions, and submit the complete order only at payment initiation, with full backend validation.
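Under this consolidated design, the backend validates the complete order exactly once, at payment initiation. The following Python sketch, with a hypothetical service configuration and field names, illustrates that single validation step:

```python
# Hypothetical service configuration; the frontend fetches it in one
# call and drives all user interaction locally.
SERVICE_CONFIG = {
    "stores": {"S01", "S02"},
    "services": {"haircut": {"durations": {30, 60}}},
    "payment_methods": {"card", "wallet"},
}

def validate_order(order: dict) -> list[str]:
    """Full backend validation of the complete order, done once.

    Returns a list of validation errors; an empty list means the order
    may proceed to payment.
    """
    errors = []
    if order.get("store") not in SERVICE_CONFIG["stores"]:
        errors.append("unknown store")
    service = SERVICE_CONFIG["services"].get(order.get("service"))
    if service is None:
        errors.append("unknown service")
    elif order.get("duration") not in service["durations"]:
        errors.append("invalid duration")
    if order.get("payment_method") not in SERVICE_CONFIG["payment_methods"]:
        errors.append("unknown payment method")
    return errors

good = {"store": "S01", "service": "haircut", "duration": 30,
        "payment_method": "card"}
```

The attack surface shrinks from one endpoint per step to two endpoints in total (fetch configuration, submit order), and every business rule is enforced in one auditable place.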

    This approach requires software engineers to deeply understand business logic during analysis, avoid rigid adherence to legacy workflows, and recognize that decoupling frontend logic from backend APIs enhances both flexibility and robustness.

    Ultimately, the cost of secure design cannot be judged by short-term ROI. Like database transactions or audit logging, security is not optional—it’s a core quality attribute. As analysts or architects, we must treat security as a non-functional requirement from day one and quantify its impact on reliability and compliance.

    IV. Reinterpreting Insecure Design from a Cybersecurity Lens

    Many security professionals still equate Insecure Design with “Broken Access Control”—a misconception.

    The essence of Insecure Design is the absence of mechanisms to constrain risk generation.

    Broken Access Control assumes a permission model exists; Insecure Design is far broader. Even analyzing resource ownership or role-action mappings captures only surface-level risks. True understanding requires a systemic view of risk exposure, as illustrated in Case 2: multiplying APIs exponentially increases systemic fragility.

    This blind spot stems from the chasm between security and business. As I’ve argued before:

    Security, if not integrated into business, will be marginalized—or ignored entirely.

    For executives: Should CIO and CISO remain separate roles? My answer is no.

    For practitioners: If your security reviews don’t engage with business-aligned artifacts—system design docs, data flow diagrams, permission models, and functional workflows—you’re not truly integrated.

    V. There Are No Shortcuts in Secure Design

    The most dangerous mindset in software development is:

    “Get it working first; add security later.”

    This “agile procrastination” is fatal for Insecure Design. Once data models, APIs, and permission schemes solidify, changes become exponentially costly.

    This also impacts economic auditing (a domain I specialize in):

    An application patched post-breach does not gain value from security investments—on the contrary, such spending confirms material defects, warranting asset impairment.

    In IP valuation or M&A contexts, this can nullify value entirely. Who would buy a system that required massive post-hoc security fixes? How can buyers trust no other critical flaws remain?

    Thus, Secure by Design means:

    Security is not a feature—it’s a foundational system capability.

    It demands that risk awareness permeate our very first UML diagram and use case.

    Systems analysts are the first line of defense for cybersecurity.

    VI. Where Do Cybersecurity Professionals Go From Here?

    Though rooted in my background (IS auditor + systems analyst + software engineering MSc), this discussion holds vital lessons for pure-play security practitioners:

    Future-proof security talent must stand at the intersection of software engineering and cybersecurity.

    The era of “Burp Suite warriors” brute-forcing APIs is ending—AI will automate repetitive tasks. Organizations now need bridge-builders: professionals who can read architecture diagrams, contribute to requirements reviews, and operationalize threat modeling. In short: those who can judge whether a design introduces unacceptable risk.

    Because today, security is inseparable from modern software engineering.

  • CrossWise Founder Joins Cybersecurity Technology Symposium 2025: Spotlight on Critical Infrastructure Protection and Executive Leadership in Cyber Governance

    On January 20, 2026, Sender Su, Founder of CrossWise InfoTech Limited (“CrossWise”), participated in the Cybersecurity Symposium 2025, organized by the Digital Policy Office of the Hong Kong Special Administrative Region Government.

    This third edition of the symposium showed significantly greater professionalism and depth than previous years. The featured presentations and panel discussions covered a wide range of cutting-edge cybersecurity topics and the latest industry developments, substantially increasing the event’s value for attendees. Further details about the symposium can be found at:

    https://www.cybersechub.hk/tc/eventDetail/cybersecurity-symposium-2025

    The primary focus of this year’s symposium was undoubtedly the implementation of Hong Kong’s Protection of Critical Infrastructures (Computer Systems) Ordinance. To facilitate its smooth enforcement, the Office of the Commissioner of Critical Infrastructure (Computer-system Security) (OCCICS) has specifically drafted a Code of Practice for reference and compliance by relevant stakeholders. The Code is available at:

    https://www.occics.gov.hk/tc/industry/code-of-practice/index.html

    The sessions addressed current cybersecurity hot topics, including countering nation-state hackers in supply chain attacks, the critical role of executive leadership in cybersecurity practices, effective threat intelligence gathering, the use of artificial intelligence (AI) by both attackers and defenders, the tactics and activity trends of ransomware groups, post-quantum cryptography applications, IoT security, personal data privacy protection, and digital asset safeguarding.

    Mr. Su is well-versed in all these areas. In particular, supply chain security, IoT security, and personal data privacy protection align precisely with the key objectives CrossWise successfully achieved for its clients throughout 2025. The company delivered an integrated suite of services—including penetration testing, risk assessments, source code audits, security remediation, and even business logic re-engineering—to comprehensively address these challenges.

    However, the topic that resonated most deeply with Mr. Su was the role of senior leadership in cybersecurity practice. This is indeed a familiar refrain: as far back as two decades ago, during discussions on enterprise informatization, executive buy-in was already recognized as pivotal. Today, as the focus shifts to cybersecurity, it simply reflects a formerly peripheral aspect of enterprise IT governance moving center stage. Fundamentally, the core challenge remains unchanged across time: it always comes down to people.

    It always comes down to people.

    Leveraging its extensive hands-on experience, CrossWise possesses a deep understanding of the perspectives and competing interests of personnel across different organizational levels and roles. This enables the company to design practical, actionable project solutions that effectively help clients integrate cybersecurity initiatives with software engineering projects, thereby mitigating risks at their source.