😎SOC Notes from Industry
Last updated
For a quick restart: shutdown /r /t 05 (reboot after a 5-second delay)
Get-ADComputer dd-brava -Server dd-dc1.dardubai.co.ae — queries a specific domain controller for a computer object; used here to check which groups (and therefore which AD policies) apply to the machine
SOC Quick tools
Josh Stroschein's Malware collection:
https://github.com/jstrosch/malware-s...
Malware Bazaar:
Oledump:
https://blog.didierstevens.com/progra...
Any.Run:
VirusTotal:
https://www.virustotal.com/gui/
Pdf-Parser:
Analyzing Malicious Documents Cheat Sheet:
https://zeltser.com/media/docs/analyz...
URLHaus:
Suspicious Email Checklist
[ ] Generate a document for email phishing
Command Line Audit
Command Line Auditing
Inner view of PowerShell/cmd commands —> logs what is being done while using the Sysinternals EXEs
Sysinternals Utilities - Windows Sysinternals
THREAT HUNTING WITH WINDOWS SECURITY EVENT LOGS - Blue Team Blog
Learn PowerShell 7
Download ->
Sample scripts for system administration - PowerShell
Create PowerShell script for documenting
Visualizing Data: ELK Stack
LogRhythm Monitor + Sysmon + CMD Auditing
Log in via RDP to the device with an Admin account
Installing LogRhythm System Monitor for log collection on the workstation:
Make a sysmon folder on the RDP machine and copy into it the contents of \\dp-puneksc2\c$\sysmon
(on that host the share maps to C:\ProgramData\KLShare\Packages\sysmon)
Copy the folder to the device's C: drive, and copy Sysmon64.exe into the Windows folder
Open an elevated (admin) cmd prompt:
https://github.com/LogRhythm-Labs/Microsoft-SysMon-config
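From that elevated prompt, the install is a one-liner. A sketch, assuming the config XML from the LogRhythm-Labs repo above has been saved into the sysmon folder (the filename here is an assumption):

```shell
:: Install Sysmon as a service with the repo's config (run from the sysmon folder).
:: sysmonconfig-export.xml is a placeholder for the repo's config filename.
Sysmon64.exe -accepteula -i sysmonconfig-export.xml

:: Later config changes can be applied without reinstalling:
Sysmon64.exe -c sysmonconfig-export.xml
```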
You can check Sysmon logs in Event Viewer:
Go to Event Viewer → "Action" menu → Connect to Another Computer → enter the device name
Expand Applications and Services Logs → Microsoft → Windows → Sysmon
Command Line Auditing:
Open gpedit.msc as administrator and enable "Include command line in process creation events" (Computer Configuration → Administrative Templates → System → Audit Process Creation).
You can then check Event Viewer (Security log, process creation event 4688) for the logged command lines.
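The same policy can also be set directly in the registry from an elevated prompt; a sketch using the documented policy value:

```shell
:: GPO equivalent: Computer Configuration > Administrative Templates > System >
:: Audit Process Creation > "Include command line in process creation events"
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit" /v ProcessCreationIncludeCmdLine_Enabled /t REG_DWORD /d 1 /f
```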
LogRhythm in DMZ
Forwarding logs to
Enabling Syslog in Linux and Viewing logs
In this tutorial, we will look at Syslog in Linux and how to send logs to a remote machine using it. Syslog is a standard for message logging. It has been the standard logging mechanism on Linux/Unix systems for a very long time. Although most of the distros now ship with journald – a systemd-based logger – Syslog still exists and is generally used in conjunction with journald.
Table of Contents
• What is Syslog?
• Viewing local Syslogs
◦ 1. Display syslogs with the ls command
◦ 2. View system logs in Linux using the tail command
◦ 3. View and Edit syslogs in Linux with a text editor
• Server Configuration for Remote System Logging
◦ 1. Check if rsyslog is installed
◦ 2. Edit rsyslog's configuration file
◦ 3. Configure the firewall to open the port used by rsyslog
◦ 4. Restart rsyslog
◦ 5. Check if rsyslog is listening on the port opened
• Client Configuration for Viewing Remote Syslogs
◦ 1. Check if rsyslog is installed
◦ 2. Edit rsyslog's configuration file
◦ 3. Restart rsyslog
• Test the logging operation
• Conclusion
Syslog is a vague concept, generally referring to the following 3 things:
Syslog Daemon: A daemon that listens for logs and writes them to a specific location. The location(s) is defined in the configuration file for the daemon. rsyslog is the Syslog daemon shipped with most of the distros.
Syslog Message Format: The syntax of Syslog messages. The syntax is usually defined by a standard (e.g., RFC 5424).
Syslog Protocol: The protocol used for remote logging. Modern Syslog daemons can use TCP and TLS in addition to UDP, which is the legacy protocol for remote logging.
The advantage of Syslog over journald is that logs are written in files that can be read using basic text manipulation commands like cat, grep, tail, etc. journald logs are written in binary and you need the journalctl command to view them.
Logs are a great source of information on what’s happening in the system. They’re also the first place one should look for any kind of troubleshooting.
Generally, logs are written under the /var/log directory. How this directory is structured depends on your distro.
Note: This method only works for logs written by a Syslog daemon and not for logs written by journald.
Listing the contents of /var/log for an Ubuntu 20.04 machine using the ls command:
$ sudo ls /var/log
Listing /var/log
Using the tail command you can view the last few logs. Adding the -f option lets you watch them in real time.
For RedHat based systems:
$ sudo tail -f /var/log/messages
For Ubuntu/Debian based systems:
$ sudo tail -f /var/log/syslog
Similarly, the tail command can be used to view kernel logs (kern.log), boot logs (boot.log), etc.
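Since these are plain-text files, grep and tail combine naturally. A quick sketch using a stand-in sample.log instead of a real /var/log path:

```shell
# Build a small syslog-style sample file (stands in for /var/log/syslog).
printf '%s\n' \
  'Jan 10 10:00:01 host sshd[101]: Accepted password for alice' \
  'Jan 10 10:00:05 host kernel: usb 1-1: new full-speed USB device' \
  'Jan 10 10:00:09 host sshd[101]: Failed password for bob' > sample.log

# All sshd lines:
grep 'sshd' sample.log

# Count failed logins:
grep -c 'Failed password' sample.log

# Last line only; tail -f would instead follow the file live:
tail -n 1 sample.log
```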
The rules for which logs go where are defined in the Syslog daemon's configuration file. For rsyslog, it is /etc/rsyslog.conf
Let's look at rsyslog's configuration file using the nano editor:
$ sudo nano /etc/rsyslog.conf
rsyslog Configuration
As can be seen in the screenshot, it uses the imjournal module to read the messages from the journal. Scrolling through the file, the rules for the location of logs can be seen:
rsyslog Configuration
Note: For some distros the location rules are defined separately in /etc/rsyslog.d/50-default.conf
The ‘kern’, ‘info’, etc at the start of some lines are ‘facility codes’ as defined by the Syslog standard. More information about the facility codes and other parts of the Syslog standard can be found on this Wikipedia page.
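As a sketch of that rule syntax, each line pairs a facility.severity selector with a destination (the paths here are illustrative):

```
# every kernel message
kern.*            /var/log/kern.log
# mail facility, severity err and above
mail.err          /var/log/mail.err
# info and above from all facilities, but nothing from auth
*.info;auth.none  /var/log/messages
```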
Syslog also supports remote logging over the network in addition to local logging. It follows a server/client architecture for remote logging. Next we’ll look at how to configure this server/client architecture so that messages can be logged remotely.
We will be configuring a CentOS 8 machine as the remote server that receives Syslog messages from hosts through TCP. You’ll need superuser privileges for every step. So, either change to the root user or prefix sudo before every command.
rsyslog is the Syslog daemon that will listen for logs from hosts. To check if it's installed, type:
$ rsyslogd -v
It will print some information if it's installed.
Check For Rsyslog 1
If it is not already installed, you can install it using the dnf command:
$ sudo dnf install rsyslog
Install rsyslog
The file we need to modify is /etc/rsyslog.conf. You can use the editor of your choice. I'll be using the nano editor.
$ sudo nano /etc/rsyslog.conf
You can also group the logs by creating separate directories for separate client systems using what rsyslog calls 'templates'. These templates are directives for rsyslog.
To enable grouping of logs by systems, add lines 7 and 8. To enable TCP, uncomment lines 4 and 5 by deleting the '#' character at the start of the line.
1  ...
2  # Provides TCP syslog reception
3  # for parameters see http://www.rsyslog.com/doc/imtcp.html
4  module(load="imtcp") # needs to be done just once
5  input(type="imtcp" port="514")
6
7  $template FILENAME,"/var/log/%HOSTNAME%/syslog.log"
8  *.* ?FILENAME
9  ...
Edit Configuration
By default rsyslog listens on port 514. We need to open this port using the firewall-cmd command:
$ sudo firewall-cmd --add-port=514/tcp --zone=public --permanent
$ sudo firewall-cmd --reload
Open TCP Port 514
Now that we've made changes to the configuration file and opened the port, we need to restart rsyslog so that it can pick up the new configuration. We can restart rsyslog using the systemctl command:
$ sudo systemctl restart rsyslog
If you want rsyslog to automatically start every time you boot up, type:
$ sudo systemctl enable rsyslog
We can use the netstat command to list all the open ports:
$ sudo netstat -pnlt
Using Netstat To Check Open Ports
As is highlighted in the screenshot above, rsyslog is listening on port 514.
Each client will have to be configured separately. To configure the client:
On client systems too, rsyslog needs to be installed. If it is not already installed, you can install it using the same steps as for the server.
Only 1 line needs to be added to the client's /etc/rsyslog.conf file. Open it with the editor of your choice:
$ sudo nano /etc/rsyslog.conf
And add the following line:
1  ...
2  *.* @@<server's-ip-address>:514
3  ...
Client Side Configuration
*.* tells rsyslog to forward all logs. The @@ means a TCP connection, and 514 is the port number. You might need to configure the firewall to open port 514 on client systems as well, if the client has a firewall set up. In that case, follow the same steps as for the server.
We need to restart rsyslog on client systems as well using the systemctl command:
$ sudo systemctl restart rsyslog
$ sudo systemctl enable rsyslog
On your client system, type:
$ logger "I just logged a message"
Logger
On the server system, you will find a separate directory created for every client (and 1 directory for the server system itself).
Server Directories
Viewing the contents of /var/log/earth66/syslog.log using the tail command on the server system:
$ sudo tail -f /var/log/earth66/syslog.log
Remote Logging
In this tutorial, we learned about Syslog and set up remote logging using rsyslog. Checking logs is one of the first and most important parts of troubleshooting. Knowing how to view and understand logs can help save both time and effort. To know more about the features of rsyslog and its configuration, look at its man page and documentation.
Find Installed Microsoft .NET Framework Version
To determine the version of .NET Framework installed follow these steps.
Open the Registry Editor. (Run > “regedit” )
Navigate to the following location: HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP
Now if the following path exists, the installed Microsoft .NET Framework version is 4.5 or greater (which means install LR Agent v7.2.6.8002): HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full
If in the previous step you didn't find the path shown, you don't have a Microsoft .NET Framework version of 4.5 or greater. In that case, either install the latest version of .NET possible (refer to the table on Pg. 5) or install LR Agent v7.1.3.8000.
If the Microsoft .NET Framework version is less than v4.5, you can find the actual version installed by expanding all the folders under "NDP" (from Step 2) and checking whether the key "Version" exists in the Client path. This is shown in the picture below.
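As a sketch, the 4.x version can also be read from the Release DWORD under the same key; 378389 is the documented minimum value for .NET Framework 4.5:

```shell
:: Prints the Release DWORD; 378389 or higher means .NET Framework 4.5+.
reg query "HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full" /v Release
```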
Symantec Data Center Security
Vulnerability Remediation Dashboard
Kaspersky Cloud Deployment + KEDR Deployment
Kaspersky Security Center Cloud Console
Fix/Install the Web Console: https://dp-puneksc2.darpune.com:8080/
Install KES 11.4 Plugin : Done
Install KES 11.4 only on IT workstations for testing purposes for a month or so
Figure out what Cloud Console is: this is when Kaspersky manages the KSC
Event log triaging
Domain Typosquatting
Playbook for typosquatting
EDR Optimum
Domain Health
For ssl certificate :
MX Lookup: Mailbox lookup
MX Lookup - Check MX Records of Domain
Domain Health :
URL browser screenshot :
Url Health Report
Free Domain Health Check Report
Inside Subdomains or External Redirects
Analyze Internal & External Links of a Webpage - DNSChecker.org
Sandbox URL
SOC Operations Portal
Purple Teaming
Raising Your Own APT: Purple Team Exercises to Drive Security Program Maturity
MARCH 19, 2020 • ANDREW SCOTT
As President George Washington wrote in 1799, “…offensive operations, oftentimes, is the surest, if not the only … Means of defence.” This could not be more true in today’s cyber battleground as organizations work to defend themselves from attackers they cannot see, with tactics they may not be aware of, and with motives that are not favorable.
Security intelligence and defensive measures seek to narrow the playing field and prepare defenders for a real attack. However, without testing your capabilities, responding when it actually hits the fan is cumbersome. Enter purple team exercises.
Purple teaming allows your organization to run scenarios pitting your blue team (defenders) against a red team (penetration testers or pen test software) to identify breakdowns in detective and preventive controls, processes during incidents, and procedures. Pen testing, of course, is nothing new to information security teams, but the potential for conducting pen tests in conjunction with a smart, focused intelligence-driven defense will yield far more information about how ready your organization is.
MITRE ATT&CK has become a buzzword of sorts, but positioning intelligence at the heart of asset- and organization-focused risk management approaches can help drive a proactive security program — and center its mission and results. Pairing MITRE with streamlined security intelligence workflows can push the needle forward or provide a blueprint for organizations to drive toward. Being able to focus not on just threat actors and their TTPs, but also trends, relevance, and the context intelligence provides can dramatically increase value from these exercises.
Organizations that understand their assets, know which threat actors are relevant to their business and industry, and have identified which TTPs are mitigated by their controls, are in a unique position to test their skills and technology in a fire-drill environment. Recorded Future makes it much easier for you to identify attack vectors and exposure points by providing the data you need to support your controls and mitigate risk.
Build out your program and test their connections, both for data and people.
In order to conduct a purple team exercise, you must push to identify what you are ultimately trying to assess. A strong starting point is to review your priority intelligence requirements (PIRs) and determine how you validate those currently. Additionally, keep the scope straightforward. Start by answering these questions:
Is there a gap or something critical you missed in a previous audit, assessment, or model?
What are the goals for the exercise? (Are you trying to validate that controls actually work? Are you testing your IR team’s response capabilities and time? Are your assumptions of understood risks true?)
Finally, the major question: What are you testing (people, process, technology, or everything at once)?
Once you identify the scope of the exercise, you can determine the type of assessment and attributes to test and evaluate. The next question should be, “What type of assessment will support your hypothesis?” For those familiar with secure development and testing, the following concepts will seem familiar. You could try one of the following or a mix:
Black Box Testing: Pen testers have no knowledge of the application being evaluated
White Box Testing: Pen testers have full knowledge of the application being evaluated
Targeted Testing: Third-party or internal pen testers have knowledge about the organization and scope to simulate certain attack types or scenarios
Double Blind Testing: Neither the red nor the blue team has knowledge about the organization; this tests event identification and response times.
Each of the above can be coordinated to review organizational readiness, controls validation, and application or system hardening.
A key part of the exercise design process is to review your threat models and identify who might attack your organization or industry. Again, these are typically defined by your models and validated through your PIRs. Make sure to include not just external threats, but also internal threats to get a full view of defensive capabilities.
Recorded Future can help you identify relevant threats through research in the UI and by utilizing your “methods” and “attackers” watch lists. You can also use Recorded Future Intelligence Cards to identify and research the context of relevant MITRE tags and TTPs that your organization has identified as threats to operations.
Focusing on remote code execution on Windows Machines vulnerable to BlueKeep could help detect gaps in controls coverage. (Source: Recorded Future)
Once the threats and TTPs to evaluate are identified, the next step is to determine what controls are available to you internally. Another way of asking this is, “Which people, processes, and/or technology (PPT) have been put in place for us to defend with?” Identifying your gaps and control maturity ahead of time will go a long way to refine hypotheses or test and follow up tasks after the actual exercise.
Once you have an understanding of your scope, the controls in place, and you have identified what they cover, you can conduct your exercise. The purpose is to search for vulnerabilities or flaws in PPT that can be attributed to the threat actors, TTPs, and motivations you have modeled in the design and risk management processes.
It’s very important to document and report on the outcomes of the test to continue tracking maturity growth over time. Were you successful? If so, how do you grade that? If not, what new exposure areas did you identify?
No successful test is complete without a follow up of refining priorities (both business and security), as it relates to your organization’s ability to defend itself. Well-defined threat models rely on prioritization to keep the focus of security teams and prevent a frantic “threat du jour” approach.
For each threat and risk, revisit your risk assessment methodologies and evaluations to better define your critical areas of exposure and operation. Such an approach provides the opportunity to craft metrics around controls improvement, process improvement, and maturity growth. A few examples could be:
Mean time to respond
Mean time to escalate
Control efficacy (blocks or allows)
Downtime allowed versus downtime experienced
IAM and access control efficacy evaluations
Ultimately, your organization will not have achieved anything out of scheduled purple team exercises and intelligence driven pen tests if countermeasures to reduce the risks observed during these exercises are not refined. There are several examples of countermeasures and adjustments, including:
Better logging of endpoint and network data
Applying threat data via API to relevant security controls
Improving correlation detections
Supplying block-grade data to host and network security controls or applications to stop threats before they become incidents
Workflow improvement to reduce response times
Whatever the outcome, always think about how you will use the results of the exercise to improve your detective and preventive controls.
Monitoring alerts and vulnerabilities for the sake of monitoring doesn't do anyone any good, and neither does conducting CTI research to track threat actor appetites and TTP trends if you don't review the controls meant to battle them. And neither is valuable if you do not test and validate your organization's security and risk mitigation controls.
By combining reactive and proactive investigation and intelligence techniques with strategic risk management frameworks, you can evaluate program efficacy and improvement across strategic and operational lines. Being able to evaluate those results in a qualitative and/or quantitative way further allows you to respond in an agile and adaptive fashion — improving consistency while being able to respond to changes in threat landscape quickly and efficiently.
When a new year begins, many people set resolutions and claim the ever-cliche “new year, new me.” Whether your objective is to start developing an intelligence-led security program from scratch or you are trying to take that next step to improve yours, using security intelligence in conjunction with broader risk management and controls validation efforts can bring everything together.
Using Recorded Future to support your intelligence and security program development can help you actually move toward achieving your resolutions, which is more than can be said for the expensive gym membership your coworker across the aisle bought.
Are you ready to make the shift to a risk-based approach to cybersecurity? Download your free copy of “The Risk Business: What CISOs Need to Know About Risk-Based Cybersecurity” to get started.
RF Alerts
Acunetix vulnerability assessment scanner