The Ultimate Security Blind Spot You Don’t Know You Have


How much time do developers spend actually writing code?

According to recent studies, developers spend more time maintaining, testing, and securing existing code than they do writing or improving it. Security vulnerabilities have a bad habit of creeping in during the software development process, only to surface after an application has been deployed. The disappointing part is that many of these security flaws and bugs could have been resolved at an earlier stage, and proper methods and tools exist to uncover them.

How much time does a developer spend on learning to write functional code? And how much is spent on learning about code security? Or learning how not to code?

Wouldn’t it be better to eradicate the problem from the system entirely, rather than leaving it there and then trying to detect and stop an ongoing attack targeting it?

You can test your secure coding skills with this short self-assessment.

The true cost of bugs

Everyone makes mistakes, even developers. Software bugs are inevitable and are accepted as the “cost of doing business” in this field.

That being said, any unfixed bugs in code are the lifeblood of attackers. If they can find at least one bug in a system that can be exploited in the right way (i.e., a software vulnerability), they can leverage that vulnerability to cause massive damage, potentially on the scale of tens of millions of dollars – as we see through well-publicized cases hitting the headlines every year.

And even when it comes to less serious vulnerabilities, fixing them can be very costly – especially if a weakness is introduced much earlier in the software development lifecycle (SDLC) due to a design flaw or a missing security requirement.

Why is the current approach to software security falling short?

1 — Too much reliance on tech (and not enough on humans)

Automation and cybersecurity tools are supposed to reduce the workload for developers and application security staff by scanning for, detecting, and mitigating software vulnerabilities. However:

  • While these tools do contribute to cybersecurity efforts, studies show that they can only discover 45% of overall vulnerabilities
  • They can also produce “false positives,” leading to unnecessary concern, delays, and rework
  • …or even worse, “false negatives,” creating an extremely dangerous false sense of security (see the sketch after this list)
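
To make the “false negative” problem concrete, here is a minimal, hypothetical Python sketch of a broken access control flaw. There is no dangerous function call for a scanner to flag; the bug is a missing authorization check, which is exactly the kind of logic flaw automated tools tend to miss. All names and data below are invented for illustration.

```python
# Hypothetical sketch of a "false negative" candidate: broken access control.
# There is no dangerous sink (no SQL, no shell command), so a typical scanner
# has nothing to flag -- the bug is a *missing* authorization check.

from dataclasses import dataclass


@dataclass
class Invoice:
    invoice_id: int
    owner_id: int
    amount: float


INVOICES = {
    1: Invoice(invoice_id=1, owner_id=42, amount=99.0),
    2: Invoice(invoice_id=2, owner_id=7, amount=1250.0),
}


def get_invoice_vulnerable(current_user_id: int, invoice_id: int) -> Invoice:
    # BUG: returns any invoice requested, regardless of who owns it.
    return INVOICES[invoice_id]


def get_invoice_fixed(current_user_id: int, invoice_id: int) -> Invoice:
    invoice = INVOICES[invoice_id]
    # The fix is an ordinary conditional -- nothing a scanner pattern-matches on.
    if invoice.owner_id != current_user_id:
        raise PermissionError("user may not access this invoice")
    return invoice


if __name__ == "__main__":
    print(get_invoice_vulnerable(42, 2))   # leaks another user's invoice
    try:
        get_invoice_fixed(42, 2)
    except PermissionError as exc:
        print("blocked:", exc)
```

Because the fix is a plain conditional with no security-specific API, only a developer who knows to look for it – or a test that encodes the requirement – will notice its absence.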

2 — The DevSec disconnect

The DevSec disconnect refers to the well-known tension between dev teams and security teams due to different (and often conflicting) priorities when it comes to new features and bug fixes.

As a result of this friction, 48% of developers end up regularly pushing vulnerable code into production. Vulnerabilities discovered later in the development cycle often don’t get mitigated, or end up creating extra costs, delays, and risks further down the line. These are the consequences of short-term thinking: ultimately, it would be better to fix the problem at the source instead of spending time and resources on finding code flaws later in the software development lifecycle.

3 — Monitoring your supply chain but not your own software

Another common mistake is focusing solely on software supply chain security and only addressing known vulnerabilities in existing software products and packages listed in the well-known Common Vulnerabilities and Exposures (CVE) database or the National Vulnerability Database (NVD).

Dealing with any vulnerabilities in third-party components, your dependencies, or the operating environment is essential, but this won’t help you with vulnerabilities in your own code.

Similarly, monitoring potential attacks via intrusion detection systems (IDS) or firewalls, followed by incident response, is a good idea – and is recognized by the OWASP Top 10 as a necessity – but these activities only deal with the consequences of cyberattacks rather than the cause.

The solution: make secure coding a team sport

Your cybersecurity is only as strong as your weakest link. Software development is not an assembly line job, and – despite all predictions – it won’t be fully automated anytime soon. Software development is a craft: programmers are creative problem-solvers who make hundreds of decisions each day as they write code.

When it comes down to it, whether a piece of code is secure or not is up to the skills of individual developers.

Processes, standards, and tools can help foster and reinforce best practices, but if a developer doesn’t know about a particular type of bad practice, they’re likely to keep committing the same mistake (and introducing the same type of vulnerability in the code) over and over again.
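
As a concrete (and hypothetical) illustration of such a repeated bad practice, consider the classic case of building SQL queries by string concatenation. The sketch below uses Python’s standard sqlite3 module with made-up data; the point is the pattern, not the specific table.

```python
# Minimal sketch of a bad practice that tends to be repeated until the developer
# learns *why* it is dangerous: building SQL queries by string concatenation.
# The table and data are hypothetical; only the standard library is used.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t'), ('bob', 'hunter2')")


def find_user_vulnerable(name: str):
    # BUG: attacker-controlled input is pasted straight into the query text.
    # name = "x' OR '1'='1" makes the WHERE clause match every row.
    return conn.execute(
        "SELECT name FROM users WHERE name = '" + name + "'"
    ).fetchall()


def find_user_fixed(name: str):
    # Parameterized query: the driver treats `name` strictly as data.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()


if __name__ == "__main__":
    payload = "x' OR '1'='1"
    print(find_user_vulnerable(payload))  # [('alice',), ('bob',)] -- injection
    print(find_user_fixed(payload))       # [] -- the payload matches nothing
```

Once a developer understands why the first version is exploitable, the parameterized form becomes a habit rather than a rule to memorize.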

6 tips for empowering secure coding

The number of newly discovered vulnerabilities keeps rising, and the threats posed by malicious cyber actors are steadily getting more sophisticated. Most organizations only start implementing a secure development lifecycle after an incident, but if you ask us when you should start, the answer will always be: the sooner, the better.

That’s because when it comes to critical vulnerabilities, even hours can mean the difference between no lasting damage and a financial disaster.

Here are our top tips for doing exactly that:

1 — Shift left – expand security perspective to early phases of development

Relying on DevSecOps-style security tool automation by itself isn’t enough; you need to implement a real culture change. SAST, DAST, and penetration testing all sit toward the right-hand side of the SDLC; shift these activities left, toward the beginning of the software development lifecycle, for more comprehensive coverage.
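
As one possible way to shift static analysis left, the sketch below wraps a SAST scan in a small Python script that could run as a pre-commit hook or an early CI step. It assumes the open-source Bandit scanner is installed (`pip install bandit`); any SAST tool with a command-line interface could be substituted, and the `src` directory name is just a placeholder.

```python
# A minimal "shift left" sketch: run a static analysis scan from a pre-commit
# hook or local script instead of waiting for a late pipeline stage.
# Assumes the open-source Python SAST tool Bandit is installed.

import subprocess
import sys


def run_sast_scan(source_dir: str = "src") -> int:
    """Run Bandit recursively over source_dir and return its exit code."""
    result = subprocess.run(
        ["bandit", "-r", source_dir],  # -r: scan the directory recursively
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    # Bandit exits non-zero when it reports findings (or fails to run).
    if result.returncode != 0:
        print("SAST scan reported findings -- blocking the commit.", file=sys.stderr)
    return result.returncode


if __name__ == "__main__":
    # Wire this into a pre-commit hook or an early CI job so findings surface
    # while the code is still cheap to change.
    sys.exit(run_sast_scan())
```

Failing the commit (or the build) on findings is what makes the check “left”: the feedback arrives while the code is still cheap to change.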

2 — Adopt a secure development lifecycle approach

Frameworks such as the Microsoft SDL or OWASP SAMM provide structure for your processes and act as a good starting point for your cybersecurity initiative.

3 — Cover your entire IT ecosystem

Third-party vulnerabilities pose a huge risk to your business’s cybersecurity, but your own developers may be introducing problems into the application, too. You need to be able to detect and resolve vulnerabilities on premises, in the cloud, and in third-party environments.
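
For the third-party side of the equation, a dependency audit can be automated in much the same spirit. The sketch below is a minimal example that assumes the open-source pip-audit tool is installed and that your dependencies are pinned in a requirements.txt file; both are assumptions made for illustration.

```python
# A minimal sketch for auditing third-party dependencies alongside your own code.
# Assumes the open-source `pip-audit` tool is installed (`pip install pip-audit`);
# it checks packages against public advisory databases.

import subprocess
import sys


def audit_dependencies(requirements_file: str = "requirements.txt") -> int:
    """Audit pinned dependencies; the tool exits non-zero when it finds known vulnerabilities."""
    result = subprocess.run(
        ["pip-audit", "-r", requirements_file],
        capture_output=True,
        text=True,
    )
    print(result.stdout or result.stderr)
    return result.returncode


if __name__ == "__main__":
    sys.exit(audit_dependencies())
```

Running the dependency audit and the SAST scan in the same early pipeline stage keeps third-party and first-party code under one set of eyes.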

4 — Move from reaction to prevention

Add defensive programming concepts to your coding guidelines. Robustness is what you need. Good security is all about paranoia, after all.
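
As a small illustration of what defensive programming can look like in practice, here is a hypothetical Python sketch of a function that validates every input before acting on it. The limits, currency allow-list, and identifier format are invented for the example.

```python
# A minimal defensive-programming sketch: validate inputs explicitly, fail fast,
# and never trust data that crosses a boundary. The limits and the function
# itself are hypothetical examples.

def transfer_funds(amount: float, currency: str, account_id: str) -> str:
    ALLOWED_CURRENCIES = {"USD", "EUR", "GBP"}
    MAX_TRANSFER = 10_000.0

    # Reject anything outside the expected range instead of assuming good input.
    if not isinstance(amount, (int, float)) or amount <= 0:
        raise ValueError("amount must be a positive number")
    if amount > MAX_TRANSFER:
        raise ValueError("amount exceeds the single-transfer limit")
    if currency not in ALLOWED_CURRENCIES:
        raise ValueError(f"unsupported currency: {currency!r}")
    # Allow-list the identifier format rather than trying to block "bad" characters.
    if not (account_id.isalnum() and len(account_id) == 12):
        raise ValueError("account_id must be exactly 12 alphanumeric characters")

    return f"transferred {amount:.2f} {currency} to {account_id}"


if __name__ == "__main__":
    print(transfer_funds(250.0, "EUR", "AB12CD34EF56"))
    try:
        transfer_funds(-5, "EUR", "AB12CD34EF56")
    except ValueError as exc:
        print("rejected:", exc)
```

The design choice worth copying is the allow-list approach: define what valid input looks like and reject everything else, rather than trying to enumerate the bad cases.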

5 — Mindset matters more than tech

Firewalls and IDSs won’t protect your software from hackers by themselves; they just deal with the consequences of already existing vulnerabilities. Tackle the problem at its root: the developers’ mindset and personal accountability.

6 — Invest in secure code training

Look for a training course which covers a wide range of programming languages and provides thorough coverage of secure coding standards, vulnerability databases, and industry-renowned critical software weakness types. Hands-on lab exercises in developers’ native environments are a huge plus for getting them up to speed quickly and bridging that pesky knowing-doing gap.

Cydrill’s blended learning journey provides training in proactive and effective secure coding for developers from Fortune 500 companies all over the world. By combining instructor-led training, e-learning, hands-on labs, and gamification, Cydrill provides a novel and effective approach to learning how to code securely.
