- Cybersecurity Attacks: Red Team Strategies
- Johann Rehberger
Leveraging homefield advantage
In sports, the home team, which plays on its own grounds and among its own supporters, is commonly said to have an advantage over the away team. Let's look at how this concept applies to red teaming.
Finding a common goal between red, blue, and engineering teams
The biggest advantage an in-house offensive security team has, compared to a real-world adversary, is homefield advantage. Unfortunately, many organizations are not exploiting this advantage. In fact, in some organizations, offensive and defensive teams operate in complete silos rather than learning from each other:
Attackers and defenders operate on their own with no communication channel established. This is a good operational mode occasionally, but it should not be the normal mode of operation between your red and blue teams. The preceding illustration might slightly exaggerate the way findings are reported, but, during my career, I have seen pen tests whose only result was a document of sparse findings.
A more effective way to help improve the security posture of the organization is the purple team offering of our offensive security program. Purple teaming joins forces among red, blue, and product groups to enable the best possible defense of systems and protect the organization.
An internal purple team can learn the terrain, know the infrastructure, and freely communicate with others in the organization. The team can work on deploying decoys and deceptions and, most importantly, perform attacks and repeat them until the detections succeed consistently:
Homefield advantage is of great value and should be leveraged by a mature security organization as much as possible. The offensive security team is not the real adversary; they are there to help emulate a wide range of adversaries and train the organization so that they're prepared when the occasion arises.
During a real-world incident, the red team can assist and provide the security operations center with detailed information and guidance on what the next step of a newly identified adversary on the network might be. An exceptional red team can help the organization be one step ahead by leveraging homefield advantage.
To reach such a level of maturity, it's necessary for various teams in the organization to collaborate closely with each other. Red and blue teams should not see each other as opponents. In the end, both teams are on the same side.
That said, a healthy adversarial stance between the teams is necessary, and we will explore how this can be achieved and how, at times, the collaboration must be disrupted. Nevertheless, offensive and defensive teams, as well as the product and service engineering groups, can learn from and support each other to become more efficient.
The red team can be more efficient by understanding more details about the blue team's monitoring systems. It is not about being right, but about learning and growing. Both red and blue teams have a common goal, which is to protect the organization, and they share one advantage over a real-world adversary: homefield advantage.
Getting caught! How to build a bridge
Every pen tester will remember the first time they got caught during an operation. It's embarrassing, especially when the objective was not to be caught.
Stress can build suddenly as the actions of the pen tester are scrutinized, and people might ask what happened and why certain actions were performed; for example, why were you poking around that finance server again? The pen tester needs to provide answers.
This is a stage that every pen tester must go through for growth.
It helps the pen tester learn to put things into perspective and see how small actions might cause impact in areas that were not immediately obvious. Getting caught will help raise the security IQ of everyone involved. The best advice for the pen tester in this situation is to highlight how great it is that detections are working and that the actions were detected.
Getting caught is the ultimate way to start building a bridge between the red and blue teams. This is the moment for the red team to acknowledge the great work the blue team is doing, and it can be used to encourage further discussion between team members about other, similar areas that should also have triggered detections.
There is naturally an adversarial view between red and blue teams, and both sides can become defensive if they're not managed well. In the end, getting caught is good! It is validation that the program is working. Furthermore, a mature red team understands that this is a sign that the organization is improving and that the offensive security team is doing its job of helping to test and improve the security of the organization.
Learning from each other to improve
At times, red teaming feels like a bunch of security people getting together doing security things in a silo with no noticeable improvements. Red teams know this is true because the same password that allowed them to gain access to production last year still works this year. If there are no improvements, red team operations become more of a liability than anything else. This is where the idea of a more transparent, collaborative, and interactive operational model came from, which many refer to as purple teaming.
Threat hunting
A big part of doing purple teaming is to enable dedicated threat hunting sessions together with the blue team. If no detections are triggered, the red team can give insights and provide hints on what assets were compromised. The blue team can start hunting for indicators and track down the adversary.
Do not be surprised if this also identifies real adversaries besides the red team. It is not uncommon for the red team to cross paths with real adversaries and/or leverage very similar techniques.
With the results of threat hunting sessions, detections can be implemented and, ideally, fed back into an automated response process.
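As a rough sketch of how a hunting finding could feed an automated response, consider a rule that scans log lines for an indicator discovered during a joint session and dispatches a response action on a hit. All names, the log format, and the indicator here are invented for illustration; a real pipeline would call an EDR or SOAR API instead of returning a string:

```python
# Hypothetical sketch: a threat-hunting finding turned into a detection
# rule wired to an automated response hook. Log format and rule are invented.
import re
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Detection:
    name: str
    pattern: re.Pattern
    respond: Callable[[str], str]  # automated response action


def isolate_host(log_line: str) -> str:
    # Placeholder response: a real pipeline would call the EDR API here.
    return f"isolate-host requested for: {log_line}"


# Indicator discovered during a joint hunting session (fictional example).
rules: List[Detection] = [
    Detection(
        name="suspicious-psexec-service",
        pattern=re.compile(r"service installed: PSEXESVC", re.IGNORECASE),
        respond=isolate_host,
    ),
]


def scan(log_lines: List[str]) -> List[str]:
    """Run every rule over the logs; fire the response for each hit."""
    actions = []
    for line in log_lines:
        for rule in rules:
            if rule.pattern.search(line):
                actions.append(rule.respond(line))
    return actions
```

The point of the structure is that the response is attached to the detection itself, so once the hunt confirms an indicator, the reaction no longer depends on a human noticing the alert.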
Growing the purple team so that it's more effective
When discussing purple teams, it's common to highlight the collaborative efforts between red and blue teams, but there are other stakeholders that are missing in this equation. It's important to include engineers and other representatives from the business units. Basically, attempt to include representatives from the teams who are building the products that the blue team protects. Technically, every member of an organization is part of the blue team, so let's get them involved too.
The engineers building the system can provide amazing insights into how things work, with a level of detail that neither the traditional red nor blue teams possess. They are also the ones who notice when logging is missing or where to add alerts that are currently not in the product or application. The business stakeholders can provide insight into the most financially worrisome scenarios that should be explored.
Offensive techniques and defensive countermeasures
Every tool or technique the red team uses should immediately be validated by the blue team. For instance, let's say a pen tester compromises a host. The blue team immediately knows that it happened, but there might be no evidence or detection triggered in their logs. If that is the case, then we need to figure out what went wrong; for example, maybe anti-virus was missing or misconfigured. The blue team brainstorms detection ideas with the red team and implements a mitigation. Then, the red team performs the same technique again, attempting to validate the newly deployed defensive countermeasure. This time, it might be picked up correctly and create an automated incident to inform the right stakeholders about the breach. If not, rinse and repeat with the red and blue teams until the next level of maturity is reached.
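The rinse-and-repeat loop above can be sketched in a few lines. In this hedged example, `execute_technique`, `query_alerts`, and `improve_detection` are stand-ins for real tooling (a C2 framework, a SIEM query, a rule change); none of them are real integrations:

```python
# Minimal sketch of the "attack, check detection, mitigate, repeat" loop.
# The three callables are placeholders for real red/blue tooling.
from typing import Callable


def validate_detection(
    execute_technique: Callable[[], None],
    query_alerts: Callable[[], bool],
    improve_detection: Callable[[], None],
    max_rounds: int = 3,
) -> int:
    """Repeat the technique until the blue team's detection fires.

    Returns the round on which the alert triggered, or -1 if it never did.
    """
    for round_no in range(1, max_rounds + 1):
        execute_technique()      # red team re-runs the same TTP
        if query_alerts():       # did the SOC pipeline raise an alert?
            return round_no
        improve_detection()      # blue team tunes the rule, then retry
    return -1
```

The value of framing it as a loop is that "detection exists" is never taken on faith: a round only counts as done when the replayed technique actually produces an alert.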
Surrendering those attack machines!
One way to improve communication between teams is to share knowledge, operational procedures, and assets. An excellent way to improve communication and insights is for a progressive red team to surrender their attack machines and infrastructure to the blue team for forensic analysis and feedback after each operation, or whenever they're caught. Making this conditional on getting caught adds a playful element, because for most red teamers there is otherwise no consequence to being detected. Most red teamers will strongly object to this, which might be exactly why you should do it. As a manager or lead, it's important to walk the walk here, so in the past I spearheaded this in operations and surrendered my own attack machine to the blue team for forensic analysis. It's uncomfortable, but it provides an immediate level up for everyone involved.
Let's go over some of the benefits.
Offensive security operations and tooling improvements
The red team can improve their tooling; for example, it is not uncommon for red team tools to unnecessarily store passwords in clear text on machines, and attack machines often contain aggregated sensitive information from long-past operations that should have been moved to cold storage. The blue team can advise on how to improve the red team's operational security so that, over time, the red team becomes much more effective at leaving less useful evidence behind on machines, again raising the bar.
To give an example, one thing I learned the first time I voluntarily surrendered my attack machine was the advanced study of Remote Desktop Protocol cache forensics by an outstanding blue team member. They can get snippets from past Remote Desktop Protocol sessions on hosts so that they can figure out what your screen looked like at some time in the past, what information the attacker might have had access to, and so on – very impressive forensic capabilities.
Providing the blue team with hands-on forensics investigation opportunities
The benefit for the blue team is that they get hands-on experience looking at computers or disk images to practice forensic analysis. This helps build further detections and helps the red team understand what techniques blue teams and forensic analysts possess. By understanding some of the red team's tooling, the blue team can also improve their detections for those techniques and procedures.
The surrender of machines is not something that always has to be practiced, but it is worth exploring at times once your organization has reached a certain level of maturity in its red and blue teaming.
Active defense, honeypots, and decoys
Performing active defense is necessary to mislead a real adversary (and the offensive security team). Decoys should be left behind throughout the organization to encourage adversaries to trigger alerts and detections. Similarly, deploying honeypots can help in understanding what kinds of attacks adversaries perform.
Wouldn't it be quite satisfying if an adversary compromised the wrong assets while still believing, until the very end, that they had achieved their actual objective? Active deception is an effective approach to understanding where an adversary is during lateral movement, what path they are on, and what their ultimate objective might be. The term, as used in this context, does not imply any form of retaliation or hacking back.
To put active defense in the context of the offensive security team, the offensive team might mix their online profiles and team identity with decoys to try to mislead real-world adversaries. This might be by inventing additional team members and monitoring if they ever receive invites or emails. Decoys like that can help the offensive team understand when an adversary starts to perform reconnaissance.
It is worthwhile setting up decoy attack machines or decoy red team archives to lure adversaries in the wrong direction so that they stumble upon decoys. The blue team can actively help in such cases by monitoring and alerting on red team assets.
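One small, concrete form of decoy is a honeytoken: a fake credential planted in a decoy archive that no legitimate process will ever use, so any sighting of it in logs is a high-fidelity alert. The sketch below is an illustration under assumptions; the token format and the log source are invented, and a real deployment would watch authentication logs rather than a list of strings:

```python
# Hedged honeytoken sketch: generate a decoy credential and look for it in
# observed log lines. The "AKIA" prefix merely mimics an access-key shape;
# the token grants nothing and exists only to be noticed.
import secrets
from typing import List


def make_honeytoken(prefix: str = "AKIA") -> str:
    """Create a unique decoy credential to plant in a decoy archive."""
    return prefix + secrets.token_hex(8).upper()


def sightings(token: str, log_lines: List[str]) -> List[str]:
    """Return every log line in which the decoy credential appears.

    Any hit warrants an alert: nothing legitimate should ever use it.
    """
    return [line for line in log_lines if token in line]
```

Because each token is unique, a hit also tells you which decoy archive the adversary opened, which helps reconstruct their path.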
The creative possibilities for building decoys and deceptions are vast, and it's worthwhile to invest in this area.
Protecting the pen tester
Due to the sensitive information that penetration testers and red teamers aggregate over time, they are likely targets for real-world adversaries. This means advanced monitoring should be put in place for red team members to detect compromises. Yes, this means the blue team can help protect the red team. It is at this level of maturity and understanding that red and blue teams really start shining and realizing their full potential.
Performing continuous end-to-end test validation of the incident response pipeline
An important aspect of ensuring that adversarial activity is detected consistently is building end-to-end test scenarios that validate the incident response pipeline. There are multiple ways to achieve this. The idea is to emulate adversarial activity via automation at regular intervals to test that detections are working at all times and that the response chain is followed correctly by the Security Operations Center.
It would be unfortunate if an adversary ran a well-known tool such as Mimikatz in a production environment, the anti-virus detected it, but a misconfiguration on the machine prevented the necessary events from being propagated to the central intelligence system that would create a security incident.
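That propagation gap is exactly what an end-to-end check can catch: emit a uniquely tagged, benign test event at the endpoint and verify that the same tag arrives in the central event store within a deadline. In this sketch a queue stands in for the real log-forwarding path, and every name is a placeholder rather than a real sensor or SIEM API:

```python
# Sketch of an end-to-end pipeline check. A uniquely tagged benign event is
# emitted where a sensor would log it, then we confirm it made it through
# the (here simulated) forwarding path to the central store.
import queue
import uuid


def emit_test_event(transport: "queue.Queue[str]") -> str:
    """Emit a uniquely tagged benign event at the endpoint side."""
    marker = f"e2e-test-{uuid.uuid4()}"
    transport.put(marker)  # the endpoint sensor would log this locally
    return marker


def pipeline_delivered(transport: "queue.Queue[str]", marker: str,
                       timeout: float = 1.0) -> bool:
    """Drain the transport and confirm our marker made it through."""
    try:
        while True:
            if transport.get(timeout=timeout) == marker:
                return True
    except queue.Empty:
        return False
```

Run on a schedule, a failing check flags the misconfigured forwarder before a real incident depends on it, rather than after.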
There are tools such as Caldera available that can help bootstrap such testing quickly. More information about Caldera can be found at the MITRE website: https://www.mitre.org/research/technology-transfer/open-source-software/caldera.
One caveat is that you might not want the pen test team to be the ones implementing an automation framework for test execution and validation. This task seems better suited to a general engineering team, although it doesn't have to be. Just consider whether the resources are focused on the proper area of expertise to achieve the biggest return on investment.
Combatting the normalization of deviance
With all the great results that purple teaming can provide and the benefits the organization can achieve, it's critical to highlight a big caveat that should not be forgotten. When operating in a transparent and collaborative purple mode for a long time, the teams are likely to develop a groupthink mentality. The organization might be under the impression that everything is under control and might stop looking for new, creative ways to perform breaches.
A couple of things can help with this. One is to bring other stakeholders into the pool; for instance, if you have a very strong Windows pen test team, consider hiring someone with a different background, as they might focus on things other than Active Directory and the domain controller.
Hiring an external vendor to perform a security assessment is another way to ensure that the normalization of deviance and groupthink do not take over the organization.
Important Note
If you are interested in learning more about the normalization of deviance, refer to Diane Vaughan's The Challenger Launch Decision.
Retaining a healthy adversarial view between red and blue teams
Keeping a healthy adversarial view between the teams is necessary; don't get too cozy. For instance, the red team should, of course, use new TTPs without telling the blue team about them upfront. Reusing the same TTPs repeatedly becomes less effective and impactful over time. The red team lead should see it as a failure on their part if the same TTP keeps succeeding for a long time: it means the team was unable to ignite the change needed to create mitigations or additional detections, or failed to make leadership aware of it.