Stenographer is a full-packet-capture utility for buffering packets to disk for intrusion detection and incident response purposes. It provides a high-performance implementation of NIC-to-disk packet writing, handles deleting those files as disk fills up, and provides methods for reading back specific sets of packets quickly and easily.

It is designed to:

  • Write packets to disk, very quickly (~10Gbps on multi-core, multi-disk machines)
  • Store as much history as it can (managing disk usage, storing longer durations when traffic slows, then deleting the oldest packets when it hits disk limits)
  • Read a very small percentage (<1%) of packets from disk based on analyst needs

It is NOT designed for:

  • Complex packet processing (TCP stream reassembly, etc.)
      • It's fast because it doesn't do this. Even with the very minimal, single-pass processing of packets we do, processing ~1Gbps for indexing alone can take >75% of a single core.
      • Processing the data by reading it back from disk also doesn't work: see the next bullet point.
  • Reading back large amounts of packets (>1% of packets written)
      • The key concept here is that disk reads compete with disk writes: you can write at 90% of disk speed, but that only gives you 10% of your disk's time for reading. Also, we're writing highly sequential data, which disks are very good at doing quickly, and generally reading back sparse data with lots of seeks, which disks do slowly.

For further reading, check out DESIGN.md for a discussion of stenographer's design, or read INSTALL.md for how to install stenographer on a machine.

Querying

Query Language
A user requests packets from stenographer by specifying them with a very simple query language. This language is a simple subset of BPF, and includes the primitives:

host 8.8.8.8                    # Single IP address (hostnames not allowed)
net 1.0.0.0/8                   # Network with CIDR
net 1.0.0.0 mask 255.255.255.0  # Network with mask
port 80                         # Port number (UDP or TCP)
ip proto 6                      # IP protocol number 6
icmp                            # equivalent to 'ip proto 1'
tcp                             # equivalent to 'ip proto 6'
udp                             # equivalent to 'ip proto 17'

# Stenographer-specific time additions:
before 2012-11-03T11:05:00Z      # Packets before a specific time (UTC)
after 2012-11-03T11:05:00-07:00  # Packets after a specific time (with TZ)
before 45m ago                   # Packets before a relative time
after 3h ago                     # Packets after a relative time

NOTE: Relative times must be measured in integer values of hours or minutes as demonstrated above.
Primitives can be combined with and/&& and with or/||, which have equal precedence and evaluate left-to-right. Parentheses can also be used to group.

(udp and port 514) or (tcp and port 8080)
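
Time primitives combine with the others in the same way; for example, this illustrative query (not from the original docs) selects the last half hour of syslog traffic:

(udp and port 514) and after 30m ago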

Stenoread CLI
The stenoread command line script automates pulling packets from Stenographer and presenting them in a usable format to analysts. It requests raw packets from stenographer, then runs them through tcpdump to provide a more full-featured formatting/filtering experience. The first argument to stenoread is a stenographer query (see 'Query Language' above). All other arguments are passed to tcpdump. For example:

# Request all packets from IP 1.2.3.4 port 6543, then do extra filtering by
# TCP flag, which stenographer itself does not support.
$ stenoread 'host 1.2.3.4 and port 6543' 'tcp[tcpflags] & tcp-push != 0'

# Request packets on port 8765, disabling IP resolution (-n) and showing
# link-level headers (-e) when printing them out.
$ stenoread 'port 8765' -n -e

# Request packets for any IPs in the range 1.1.1.0-1.1.1.255, writing them
# out to a local PCAP file so they can be opened in Wireshark.
$ stenoread 'net 1.1.1.0/24' -w /tmp/output_for_wireshark.pcap
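
The time primitives are handy here too; an illustrative example (output path hypothetical):

# Request the last hour of DNS traffic, writing it to a local PCAP file.
$ stenoread 'udp and port 53 and after 1h ago' -w /tmp/last_hour_dns.pcap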

Downloading
To download the source code, install Go locally, then run:

$ go get github.com/google/stenographer

Go will handle downloading and installing all Go libraries that stenographer depends on. To build stenotype, go into the stenotype directory and run make. You may need to install the following Ubuntu packages (or their equivalents on other Linux distros); a combined install command is shown after the list:

  • libaio-dev
  • libleveldb-dev
  • libsnappy-dev
  • g++
  • libcap2-bin
  • libseccomp-dev
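
On Ubuntu, for example, the whole list can be installed in one step:

$ sudo apt-get install libaio-dev libleveldb-dev libsnappy-dev g++ libcap2-bin libseccomp-dev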

Obligatory Fine Print
This is not an official Google product (experimental or otherwise), it is just code that happens to be owned by Google.
This code is not intended (or used) to watch Google's users. Its purpose is to increase security on our networks by augmenting our internal monitoring capabilities.


XIP generates a list of IP addresses by applying a set of transformations used to bypass security measures e.g. blacklist filtering, WAF, etc.

Further explanation is available in our blog post.

Usage

python3 xip.py --help

Docker alternative

Official image
You can pull the official XIP image from the Docker Hub registry using the following command:

docker pull immunit/xip

Build
To build the container, just use this command:

docker build -t xip .

Docker will download the Alpine image and then execute the installation steps.

Be patient, the process can be quite long the first time.

Run
Once the build process is over, run and enjoy your new tool:

docker run --rm -it xip --help

Logging
The output generated is stored in the /tmp/ folder. When using docker, run your container using the following option:

-v YOUR_PATH_FOLDER:/tmp/
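
For example (the host path below is a placeholder):

docker run --rm -it -v /home/user/xip-logs:/tmp/ xip --help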

Fierce is a semi-lightweight scanner that helps locate non-contiguous IP space and hostnames against specified domains.
It's really meant as a precursor to nmap, unicornscan, nessus, nikto, etc., since all of those require that you already know what IP space you are looking for.
This does not perform exploitation and does not scan the whole internet indiscriminately. It is meant specifically to locate likely targets both inside and outside a corporate network.
Because it uses DNS primarily, you will often find misconfigured networks that leak internal address space. That's especially useful in targeted malware.


Options:

-connect     Attempt to make http connections to any non-RFC1918
             (public) addresses. This will output the return headers but
             be warned, this could take a long time against a company with
             many targets, depending on network/machine lag. I wouldn't
             recommend doing this unless it's a small company or you have a
             lot of free time on your hands (could take hours-days).
             Inside the file specified, the text "Host:\n" will be replaced
             by the host specified. Usage:

perl fierce.pl -dns example.com -connect headers.txt

-delay       The number of seconds to wait between lookups.
-dns         The domain you would like scanned.
-dnsfile     Use DNS servers provided by a file (one per line) for
             reverse lookups (brute force).
-dnsserver   Use a particular DNS server for reverse lookups
             (probably should be the DNS server of the target). Fierce
             uses your DNS server for the initial SOA query and then uses
             the target's DNS server for all additional queries by default.
-file        A file you would like the output to be logged to.
-fulloutput  When combined with -connect this will output everything
             the webserver sends back, not just the HTTP headers.
-help        This screen.
-nopattern   Don't use a search pattern when looking for nearby
             hosts. Instead dump everything. This is really noisy but
             is useful for finding other domains that spammers might be
             using. It will also give you lots of false positives,
             especially on large domains.
-range       Scan an internal IP range (must be combined with
             -dnsserver). Note that this does not support a pattern
             and will simply output anything it finds. Usage:

perl fierce.pl -range 111.222.333.0-255 -dnsserver ns1.example.com

-search      Search list. When fierce attempts to traverse up and
             down ipspace it may encounter other servers within other
             domains that may belong to the same company. If you supply a
             comma-delimited list to fierce it will report anything found.
             This is especially useful if the corporate servers are named
             differently from the public-facing website. Usage:

perl fierce.pl -dns examplecompany.com -search corpcompany,blahcompany

             Note that using search could also greatly expand the number of
             hosts found, as it will continue to traverse once it locates
             servers that you specified in your search list. The more the
             better.
-suppress    Suppress all TTY output (when combined with -file).
-tcptimeout  Specify a different timeout (default 10 seconds). You
             may want to increase this if the DNS server you are querying
             is slow or has a lot of network lag.
-threads     Specify how many threads to use while scanning (default
             is single threaded).
-traverse    Specify a number of IPs above and below whatever IP you
             have found to look for nearby IPs. Default is 5 above and
             below. Traverse will not move into other C blocks.
-version     Output the version number.
-wide        Scan the entire class C after finding any matching
             hostnames in that class C. This generates a lot more traffic
             but can uncover a lot more information.
-wordlist    Use a separate wordlist (one word per line). Usage:

perl fierce.pl -dns examplecompany.com -wordlist dictionary.txt
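
Options can be combined; for example, an illustrative run (values hypothetical) that brute forces with a custom wordlist, three threads, and output logged to a file:

perl fierce.pl -dns example.com -wordlist dictionary.txt -threads 3 -file fierce.log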

fierce Usage Example

root@kali:~# fierce -dns example.com
DNS Servers for example.com:
b.iana-servers.net
a.iana-servers.net

Trying zone transfer first...
Testing b.iana-servers.net
Request timed out or transfer not allowed.
Testing a.iana-servers.net
Request timed out or transfer not allowed.

Unsuccessful in zone transfer (it was worth a shot)
Okay, trying the good old fashioned way... brute force

Checking for wildcard DNS...
Nope. Good.
Now performing 2280 test(s)...


dotDefender is a market-leading software Web Application Firewall (WAF). dotDefender boasts enterprise-class security, advanced integration capabilities, easy maintenance, and low total cost of ownership (TCO). dotDefender is the perfect choice for protecting your web site and web applications today.

Robust Security for Any Web Application
dotDefender protects any web site or web service on your server, and continues to protect them as you update, change, and expand your code. The dotDefender WAF reduces the costs of code scanning and enables you to focus on business, not web application security. dotDefender can also handle .NET security issues.

PCI DSS Compliance
dotDefender helps you achieve compliance with the Payment Card Industry Data Security Standard (PCI DSS).

Why Application Security?
If you thought that network security and other "traditional security measures" were enough, think again. Web Application Firewalls deal with security attacks aimed squarely at your website, and these attacks are on the rise. Read more on Web Application Firewalls and the dotDefender security solution.

Bolt is in the beta phase of development, which means there can be bugs. Any production use of this tool is discouraged. Pull requests and issues are welcome. I also suggest you watch this repo if you are interested in it.

Workflow
Crawling
Bolt crawls the target website to the specified depth and stores all the HTML forms found in a database for further processing.
Evaluating
In this phase, Bolt finds the tokens which aren't strong enough and the forms which aren't protected.
Comparing
This phase focuses on detecting replay-attack scenarios and hence checks whether a token has been issued more than once. It also calculates the average Levenshtein distance between all the tokens to see if they are similar.
Tokens are also compared against a database of 250+ hash patterns.
Observing
In this phase, 100 simultaneous requests are made to a single webpage to see if the same tokens are generated for the requests.
Testing
This phase is dedicated to active testing of the CSRF protection mechanism. It includes, but is not limited to, checking if protection exists for mobile browsers, submitting requests with a self-generated token, and testing if the token is only checked to a certain length.
Analysing
Various statistical checks are performed in this phase to see if the token is really random. The following tests are performed during this phase (a minimal sketch of the first one appears after the list):
  • Monobit frequency test
  • Block frequency test
  • Runs test
  • Spectral test
  • Non-overlapping template matching test
  • Overlapping template matching test
  • Serial test
  • Cumulative sums test
  • Approximate entropy test
  • Random excursions variant test
  • Linear complexity test
  • Longest runs test
  • Maurer's universal statistic test
  • Random excursions test
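
As a rough illustration of what the first of these measures, here is a minimal monobit frequency test in Python (an illustrative sketch, not Bolt's actual code; the sample token is made up):

import math

def monobit_frequency_test(token: str) -> float:
    # Convert the token's bytes to a bit string.
    bits = ''.join(format(b, '08b') for b in token.encode())
    n = len(bits)
    # +1 for each 1 bit, -1 for each 0 bit; a random string sums near zero.
    s = sum(1 if bit == '1' else -1 for bit in bits)
    # Normalized statistic and p-value via the complementary error function;
    # p < 0.01 suggests the bits are not random (per NIST SP 800-22).
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

print(monobit_frequency_test('a3f9c2d47b8e016f'))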

Usage
Scanning a website for CSRF using Bolt is as easy as doing

python3 bolt.py -u https://github.com -l 2

Where -u is used to supply the URL and -l is used to specify the depth of crawling.
Other options and switches (a combined example follows the list):

  • -t number of threads
  • --delay delay between requests
  • --timeout http request timeout
  • --headers supply http headers
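
For example, an illustrative scan with throttling and more threads (values hypothetical):

python3 bolt.py -u https://github.com -l 2 -t 4 --delay 1 --timeout 10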

Credits
Regular expressions for detecting hashes are taken from hashID.
Bit-level entropy tests are taken from highfestiva's Python implementation of statistical tests.


A data leak differs from a data breach in that the former usually happens through omission or faulty practices rather than overt action, and may be so slight that it is never detected. While a data breach usually means that sensitive data has been harvested by someone who should not have accessed it, a data leak is a situation where such sensitive information might have been inadvertently exposed. pwndb is an onion service where leaked accounts are searchable using a simple form.

After a breach occurs, the data obtained is often put on sale. Sometimes, people try to blackmail the affected company, asking for money in exchange for not posting the data online. The second option is selling the data to a competitor, a rival, or even an enemy. This data is used in so many different ways by companies and countries… but when the people responsible for obtaining the data fail to sell it, the bundle becomes worthless and ends up being posted on sites like pastebin or pwndb.

pwndb is a tool to search for leaked credentials on pwndb using the command line.
                          _ _
                         | | |
 _ ____      ___ __   __| | |__
| '_ \ \ /\ / / '_ \ / _` | '_ \
| |_) \ V  V /| | | | (_| | |_) |
| .__/ \_/\_/ |_| |_|\__,_|_.__/
| |
|_|


pwndb.py -u -d

Tutorial
Go to https://davidtavarez.github.io/osint/2019/01/25/pwndb-command-line-tool-python.html
