Initial Access Brokers: the cybercriminals who support cybercriminals
Project managers, along with HR and staff in the finance department, aren't always terribly well served by traditional cybersecurity advice.
You've probably read some of this advice online, and you may possibly have heard it from your own IT team. One example is, "Don't open email attachments unless you're expecting them, and they come from someone you know and trust." Another is, "Never click on links you aren't sure about."
Just to be clear: there's nothing intrinsically wrong with these tips, because their opposites would obviously make terrible advice. No one would knowingly open an attachment from someone they actively distrusted, and no one would rush to visit websites that were obviously up to no good (even the ones that are so fascinatingly bizarre that you'd secretly love to know what they're all about).
The problem with this sort of very generic cybersecurity advice is that some jobs involve working with lots of other people, known and unknown, both inside and outside the business, and sharing information that can't easily be placed directly into the body of an email. In any case, even a web link or a document that you're expecting, from a correspondent you trust, could put you in harm's way if their email address has been hacked and the email was sent by an imposter.
All of this, of course, raises the question, "What happens if I make an honest mistake?" And, even more importantly, "What if I take a look, just in case it's important, and it turns out not to be what I thought?"
After all, how are you supposed to figure out whether any particular document is the one you were expecting without taking a peek at it first, in the same way that you might look through the security peephole in a hotel room door before opening it?
And that, in turn, raises a further question: "If I did something that, with hindsight, I probably shouldn't have, but nothing obviously bad happened, do I need to worry about it?"
Unfortunately, there's no one-size-fits-all answer. The potential cost, even of a micro-leak of personal or company data, can be hard to determine. That's because cybercrimes these days often don't unfold as an obviously connected sequence of 'attack signatures' generated by one gang of cybercrooks as they carry out an intrusion from start to finish.
Ransomware attackers, for instance, often seem to arrive in their victims' networks very suddenly, without any obvious digital poking around on the outside first. Their appearance sometimes makes them seem like a gang of bank robbers who have found a way of teleporting themselves directly into the vault, without any of the preparatory activities they'd need in real life.
That's all thanks to a cybercrime subculture of initial access brokers, or IABs for short, the annoyingly legitimate-sounding jargon name for criminals whose primary trade is selling off illegal access to other people's networks.
These attackers don't breach your computers, phones and networks because they want to implant spyware, run ransomware, steal intellectual property, or kick off supply-chain attacks themselves. They leave the final attacks to their "customers" - other criminals who are happy to pay for initial access in order to kick off their own break-ins quickly, easily and unannounced. Think of the IAB business model as a sort of virtual car boot sale where anonymous visitors can haggle over digital entry keys and codes, all the way from lucky dips containing thousands of stolen passwords to guaranteed 'knock-and-enter' system administrator access into major multinational corporations.
Even harmless-sounding breaches or security lapses could therefore be the start of further cybersecurity trouble, because there are cybercriminals out there whose gambit is to collect as much leverage for illegal access as they can right now, and to find someone who is ready to pay for it later on. On the other hand, many harmless-seeming security lapses will turn out to be just that, harmless, so that over-reacting to every possible security incident could be just as disruptive to your business as a real and concerted attack.
In other words, relying entirely on automated tools (including AI), procedural checklists, and IT rules-and-regulations to keep yourself and your company secure is unlikely to achieve the result you want. Instead, you should be aiming for a corporate cybersecurity culture in which IT and users 'meet in the middle' to take collective responsibility for staying ahead of the crooks.
A single article of this length can't provide you with a definitive list of what to do, but we can tell you three things:
1. Stop. Think. Connect.
Many cyber-intrusions succeed not because they are well-hidden and sophisticated, but because they are simple and unobtrusive enough to catch you out when you're in a hurry. Basic precautions provide some of the best protection: think before you click; pause before you reply; and be aware before you share.
2. Embrace co-operation
As a user, don't do things just because the IT department hasn't prohibited them. If you see something, say something. As an IT expert, don't be dismissive of users who report issues that turn out to be harmless. Sooner or later, you'll be grateful for their willingness to help.
3. Fight the good fight
Just like project management, a company's attitude to cybersecurity is a corporate value to be maximised, not merely a cost to be minimised. Security is a journey, not a destination, so ensure that no one in your project pays mere lip service to cybersecurity, because even small mistakes can have very broad consequences.