What is a subdomain takeover?

Subdomain takeover vulnerabilities occur when a subdomain (subdomain.example.com) points to a service (e.g. GitHub Pages, Heroku, etc.) that has been removed or deleted. This allows an attacker to set up a page on the service that was being used and point their page to that subdomain. For example, if subdomain.example.com was pointing to a GitHub page and the user decided to delete their GitHub page, an attacker can now create a GitHub page, add a CNAME file containing subdomain.example.com, and claim subdomain.example.com.

You can read more about subdomain takeovers here:

- https://labs.detectify.com/2014/10/21/hostile-subdomain-takeover-using-herokugithubdesk-more/
- https://www.hackerone.com/blog/Guide-Subdomain-Takeovers
- https://0xpatrik.com/subdomain-takeover-ns/

Safely demonstrating a subdomain takeover

Based on personal experience, claiming the subdomain discreetly and serving a harmless file on a hidden page is usually enough to demonstrate the security vulnerability. Do not serve content on the index page. A good proof of concept could consist of an HTML comment served via a random path:

$ cat aelfjj1or81uegj9ea8z31zro.html

Please be advised that this depends on which bug bounty program you are targeting. When in doubt, refer to the bug bounty program's security policy and/or request clarification from the team behind the program.

How to contribute

You can submit new services here: https://github.com/EdOverflow/can-i-take-over-xyz/issues/new?template=new-entry.md. A list of services that can be checked (although check for duplicates against this list first) can be found here: https://github.com/EdOverflow/can-i-take-over-xyz/issues/26.
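The error-page fingerprints in the table below lend themselves to simple automation. The following is an illustrative sketch, not part of the project: it matches an already-fetched HTTP response body against a few fingerprints taken from the table; the function name and dictionary are assumptions.

```python
# Hypothetical helper: map service names to the takeover fingerprints
# listed in the table below (a small subset, for illustration).
FINGERPRINTS = {
    "AWS/S3": "The specified bucket does not exist",
    "GitHub Pages": "There isn't a Github Pages site here.",
    "Heroku": "No such app",
    "Surge.sh": "project not found",
}

def possible_takeover(body):
    """Return the services whose error fingerprint appears in the body."""
    return [svc for svc, sig in FINGERPRINTS.items() if sig in body]
```

A hit only means the subdomain is worth investigating by hand; edge cases (Fastly, Heroku, etc.) need manual confirmation.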
All entries

| Engine | Status | Fingerprint | Discussion | Documentation |
|---|---|---|---|---|
| Akamai | Not vulnerable | | Issue #13 | |
| AWS/S3 | Vulnerable | The specified bucket does not exist | Issue #36 | |
| Bitbucket | Vulnerable | Repository not found | | |
| Campaign Monitor | Vulnerable | 'Trying to access your account?' | | Support Page |
| Cargo Collective | Vulnerable | 404 Not Found | | Cargo Support Page |
| Cloudfront | Not vulnerable | ViewerCertificateException | Issue #29 | Domain Security on Amazon CloudFront |
| Desk | Not vulnerable | Please try again or try Desk.com free for 14 days. | Issue #9 | |
| Fastly | Edge case | Fastly error: unknown domain: | Issue #22 | |
| Feedpress | Vulnerable | The feed has not been found. | HackerOne #195350 | |
| Fly.io | Vulnerable | 404 Not Found | Issue #101 | |
| Freshdesk | Not vulnerable | | | Freshdesk Support Page |
| Ghost | Vulnerable | The thing you were looking for is no longer here, or never was | | |
| Github | Vulnerable | There isn't a Github Pages site here. | Issue #37, Issue #68 | |
| Gitlab | Not vulnerable | | HackerOne #312118 | |
| Google Cloud Storage | Not vulnerable | | | |
| HatenaBlog | Vulnerable | 404 Blog is not found | | |
| Help Juice | Vulnerable | We could not find what you're looking for. | | Help Juice Support Page |
| Help Scout | Vulnerable | No settings were found for this company: | | HelpScout Docs |
| Heroku | Edge case | No such app | Issue #38 | |
| Intercom | Vulnerable | Uh oh. That page doesn't exist. | Issue #69 | Help center |
| JetBrains | Vulnerable | is not a registered InCloud YouTrack | | YouTrack InCloud Help Page |
| Kinsta | Vulnerable | No Site For Domain | Issue #48 | kinsta-add-domain |
| LaunchRock | Vulnerable | It looks like you may have taken a wrong turn somewhere. Don't worry…it happens to all of us. | Issue #74 | |
| Mashery | Edge case | Unrecognized domain | HackerOne #275714, Issue #14 | |
| Microsoft Azure | Vulnerable | | Issue #35 | |
| Netlify | Edge case | | Issue #40 | |
| Pantheon | Vulnerable | 404 error unknown site! | Issue #24 | Pantheon-Sub-takeover |
| Readme.io | Vulnerable | Project doesnt exist… yet! | Issue #41 | |
| Sendgrid | Not vulnerable | | | |
| Shopify | Edge case | Sorry, this shop is currently unavailable. | Issue #32, Issue #46 | Medium Article |
| Squarespace | Not vulnerable | | | |
| Statuspage | Vulnerable | Visiting the subdomain will redirect users to https://www.statuspage.io. | PR #105 | Statuspage documentation |
| Strikingly | Vulnerable | page not found | Issue #58 | Strikingly-Sub-takeover |
| Surge.sh | Vulnerable | project not found | | Surge Documentation |
| Tumblr | Vulnerable | Whatever you were looking for doesn't currently exist at this address | | |
| Tilda | Edge case | Please renew your subscription | PR #20 | |
| Unbounce | Not vulnerable | The requested URL was not found on this server. | Issue #11 | |
| Uptimerobot | Vulnerable | page not found | Issue #45 | Uptimerobot-Sub-takeover |
| UserVoice | Vulnerable | This UserVoice subdomain is currently available! | | |
| Webflow | Not vulnerable | | Issue #44 | forum webflow |
| WordPress | Vulnerable | Do you want to register *.wordpress.com? | | |
| WP Engine | Not vulnerable | | | |
| Zendesk | Not vulnerable | Help Center Closed | Issue #23 | Zendesk Support |

Download Can-I-Take-Over-Xyz

Give those screenshots of yours a quick eyeballing. Eyeballer is meant for large-scope network penetration tests where you need to find "interesting" targets from a huge set of web-based hosts. Go ahead and use your favorite screenshotting tool like normal (EyeWitness or GoWitness) and then run the screenshots through Eyeballer to tell you what's likely to contain vulnerabilities, and what isn't.

Example Labels

- Old-Looking Sites
- Login Pages
- Homepages
- Custom 404's

Eyeballer uses tf.keras on TensorFlow 2.0, which is (as of this moment) still in "beta", so the pip requirement for it looks a bit weird. It'll also probably conflict with an existing TensorFlow installation if you've got the regular 1.x version installed, so heads-up there. But 2.0 should be out of beta and official "soon" according to Google, so this problem ought to solve itself in short order.

Setup

Download the required packages with pip:

sudo pip3 install -r requirements.txt

Or if you want GPU support:

sudo pip3 install -r requirements-gpu.txt

NOTE: Setting up a GPU for use with TensorFlow is way beyond the scope of this README. There's hardware compatibility to consider, drivers to install... there's a lot. So you're just going to have to figure this part out on your own if you want a GPU. But at least from a Python package perspective, the above requirements file has you covered.

Training Data

You can find our training data here: https://www.dropbox.com/sh/7aouywaid7xptpq/AAD_-I4hAHrDeiosDAQksnBma?dl=1

Pretty soon, we're going to add this as a TensorFlow DataSet, so you don't need to download it separately like this. It'll also let us version the data a bit better. But for now, just deal with it. There are three things you need from the training data:

- images/ folder, containing all the screenshots (resized down to 224x140; we'll have the full-size images up soon)
- labels.csv, which has all the labels
- bishop-fox-pretrained-v1.h5, a pretrained weights file you can use right out of the box without training.
Copy all three into the root of the Eyeballer code tree.

Predicting Labels

To eyeball some screenshots, just run the "predict" mode:

eyeballer.py --weights YOUR_WEIGHTS.h5 predict YOUR_FILE.png

Or for a whole directory of files:

eyeballer.py --weights YOUR_WEIGHTS.h5 predict PATH_TO/YOUR_FILES/

Eyeballer will spit the results back to you in human-readable format (a results.html file so you can browse them easily) and machine-readable format (a results.csv file).

Training

To train a new model, run:

eyeballer.py train

You'll want a machine with a good GPU for this to run in a reasonable amount of time. Setting that up is outside the scope of this README, however. This will output a new model file (weights.h5 by default).

Evaluation

You just trained a new model, cool! Let's see how well it performs against some images it's never seen before, across a variety of metrics:

eyeballer.py --weights YOUR_WEIGHTS.h5 evaluate

The output will describe the model's accuracy in both recall and precision for each of the program's labels (including "none of the above" as a pseudo-label).

Download Eyeballer
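The machine-readable results.csv is handy for triage at scale. Below is a hypothetical sketch of filtering it: the actual column names Eyeballer emits may differ, so the "filename" column and the per-label probability column used here are assumptions for illustration.

```python
import csv
import io

def interesting(csv_text, label="login page", threshold=0.5):
    """Yield filenames whose predicted probability for `label`
    meets or exceeds the threshold. Column names are assumed."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        if float(row[label]) >= threshold:
            yield row["filename"]
```

You could feed the surviving filenames straight back into your manual-review queue.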

Dow Jones Hammer is a multi-account cloud security tool for AWS. It identifies misconfigurations and insecure data exposures within the most popular AWS resources, across all regions and accounts. It has near-real-time reporting capabilities (e.g. JIRA, Slack) to provide quick feedback to engineers, and can perform auto-remediation of some misconfigurations. This helps to protect products deployed in the cloud by creating secure guardrails.

Documentation

Dow Jones Hammer documentation is available via GitHub Pages at https://dowjones.github.io/hammer/.

Security features

- Insecure Services
- S3 ACL Public Access
- S3 Policy Public Access
- IAM User Inactive Keys
- IAM User Keys Rotation
- CloudTrail Logging Issues
- EBS Unencrypted Volumes
- EBS Public Snapshots
- RDS Public Snapshots
- SQS Public Policy Access
- S3 Unencrypted Buckets
- RDS Unencrypted Instances
- AMIs Public Access

Technologies

- Python 3.6
- AWS (Lambda, DynamoDB, EC2, SNS, CloudWatch, CloudFormation)
- Terraform
- JIRA
- Slack

Contributing

You are welcome to contribute!

Issues: You can use GitHub Issues to report issues. Describe what is going wrong and what you expect the correct behaviour to be.

Patches: We currently use the dev branch for ongoing development. Please open PRs against this branch.

Run tests: Run tests with this command: tox

Contact Us

Feel free to create an issue report, open a pull request, or just email us at [email protected] with any other questions or concerns you have.

Download Dow Jones Hammer
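As an illustration of the kind of check behind Hammer's "S3 ACL Public Access" feature, the sketch below flags an ACL grant list that targets the global AllUsers/AuthenticatedUsers groups. The grant structure mirrors what AWS's GetBucketAcl API returns; this is a standalone example, not Hammer's actual code.

```python
# AWS's well-known public group URIs (from the S3 ACL documentation).
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def acl_is_public(grants):
    """True if any ACL grant targets a public (AllUsers/AuthenticatedUsers) group."""
    return any(
        g.get("Grantee", {}).get("Type") == "Group"
        and g.get("Grantee", {}).get("URI") in PUBLIC_GROUPS
        for g in grants
    )
```

In a real scanner you would call this on the "Grants" list of each bucket's GetBucketAcl response, per account and region.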

Firmware Slap combines concolic analysis with function clustering for vulnerability discovery and function similarity in firmware. Firmware Slap is built as a series of libraries and exports most information as either pickles or JSON for integration with other tools. Slides from the talk can be found here.

Setup

Firmware Slap should be run in a virtual environment. It has been tested on Python 3.6.

python setup.py install

You will need rabbitmq and (radare2 or Ghidra):

```
# Ubuntu
sudo apt install rabbitmq-server
# OSX
brew install rabbitmq

# Radare2
git clone https://github.com/radare/radare2.git
sudo ./radare2/sys/install.sh

# Ghidra
wget https://ghidra-sre.org/ghidra_9.0.4_PUBLIC_20190516.zip
unzip ghidra_9.0.4_PUBLIC_20190516.zip -d ghidra
echo "export PATH=$PATH:$PWD/ghidra/ghidra_9.0.4/support" >> ~/.bashrc
```

If you want to use the Elasticsearch integration, run the Elasticsearch_and_kibana.sh script.

Quickstart

Ensure rabbitmq-server is running.

```
# In a separate terminal
celery -A firmware_slap.celery_tasks worker --loglevel=info

# Basic buffer overflow
Discover_And_Dump.py examples/iwconfig

# Command injection
tar -xvf examples/Almond_libs.tar.gz
Vuln_Discover_Celery.py examples/upload.cgi -L Almond_Root/lib/
```

Usage

```
# Get the firmware used for examples
wget https://firmware.securifi.com/AL3_64MB/AL3-R024-64MB
binwalk -Mre AL3-R024-64MB
```

Start a celery worker from the project root directory:

```
# In a separate terminal
celery -A firmware_slap.celery_tasks worker --loglevel=info
```

In a different terminal window, run a vulnerability discovery job.
```
$ Vuln_Discover_Celery.py Almond_Root/etc_ro/lighttpd/www/cgi-bin/upload_bootloader.cgi -L Almond_Root/lib/
[+] Getting argument functions
[+] Analyzing 1 functions
  0%|          | 0/1 [00:01 b'`reboot`\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x00'"}]
```

Memory

The memory component of the result object keeps track of the memory values that must be set to trigger the vulnerability. It also offers stack addresses and .text addresses, with the offending commands, for setting the required memory constraints. The first memory event required is at mtd_write_firmware+0x0 and the second is at mtd_write_firmware+0x38. Assembly is provided to help prettify future display work.

```
In [2]: result['mem']
Out[2]:
[{'BBL_ADDR': '0x401138',
  'BBL_DESC': {'DESCRIPTION': 'mtd_write_firmware+0x0 in upload_bootloader.cgi (0x401138)',
   'DISASSEMBLY': ['0x401138: lui   $gp, 0x42',
                   '0x40113c: addiu $sp, $sp, -0x228',
                   '0x401140: addiu $gp, $gp, -0x5e90',
                   '0x401144: lw    $t9, -0x7f84($gp)',
                   '0x401148: sw    $a2, 0x10($sp)',
                   '0x40114c: lui   $a2, 0x40',
                   '0x401150: move  $a3, $a1',
                   '0x401154: sw    $ra, 0x224($sp)',
                   '0x401158: sw    $gp, 0x18($sp)',
                   '0x40115c: sw    $a0, 0x14($sp)',
                   '0x401160: addiu $a1, $zero, 0x200',
                   '0x401164: addiu $a0, $sp, 0x20',
                   '0x401168: jalr  $t9',
                   '0x40116c: addiu $a2, $a2, 0x196c']},
  'DATA': "b'`reboot`\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01
\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01
\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01
\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'",
  'DATA_ADDRS': ['0x0']},
 {'BBL_ADDR': '0x401170',
  'BBL_DESC': {'DESCRIPTION': 'mtd_write_firmware+0x38 in upload_bootloader.cgi (0x401170)',
   'DISASSEMBLY': ['0x401170: lw    $gp, 0x18($sp)',
                   '0x401174: nop',
                   '0x401178: lw    $t9, -0x7f68($gp)',
                   '0x40117c: nop',
                   '0x401180: jalr  $t9',
                   '0x401184: addiu $a0, $sp, 0x20']},
  'DATA': "b'/bin/mtd_write -o 0 -l 0 write `reboot`'",
  'DATA_ADDRS': ['0x7ffefe07']}]
```

Command Injection Specific

Since command injections are the easiest to demo, I've created a convenience dictionary key to easily show the location of the command injection.

```
In [4]: result['Injected_Location']
Out[4]: {'base': '0x7ffefde8', 'type': 'char *', 'value': '/bin/mtd_write -o 0 -l 0 write `reboot`'}
```

Sample Vulnerability Cluster Script

The vulnerability cluster script will attempt to discover vulnerabilities using the method in the Sample Vulnerability Discovery script, and then build k-means clusters from a given set of functions across an extracted firmware to find functions similar to the vulnerable ones.
```
$ Vuln_Cluster_Celery.py -h
usage: Vuln_Cluster_Celery.py [-h] [-L LD_PATH] [-F FUNCTION] [-V VULN_PICKLE] Directory

positional arguments:
  Directory

optional arguments:
  -h, --help            show this help message and exit
  -L LD_PATH, --LD_PATH LD_PATH
                        Path to libraries to load
  -F FUNCTION, --Function FUNCTION
  -V VULN_PICKLE, --Vuln_Pickle VULN_PICKLE
```

The command below takes -F as a known vulnerable function, -V as a pickle dumped from a previous run (so new vulnerabilities don't need to be discovered), and -L as the library path. A sample usage:

```
$ python Vuln_Cluster_Celery.py -F mtd_write_firmware -L Almond_Root/lib/ Almond_Root/etc_ro/lighttpd/www/cgi-bin/
[+] Reading Files
100%|██████████| 1/1 [00:00<00:00, 2.80it/s]
Getting functions from executables
Starting main
... Snip ...
```

Download Firmware_Slap
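The clustering idea can be reduced to a toy sketch: represent each function as a numeric feature vector and rank candidates by distance to a known-vulnerable function. Everything here is invented for illustration (the feature choice, the names), and a plain nearest-neighbour ranking stands in for the k-means step the tool actually performs.

```python
import math

def rank_by_similarity(vulnerable, candidates):
    """Return candidate function names ordered by Euclidean distance
    to the known-vulnerable function's feature vector (closest first)."""
    def dist(vec):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(vulnerable, vec)))
    return sorted(candidates, key=lambda name: dist(candidates[name]))
```

With real features (basic-block counts, call counts, argument counts, etc.), the top-ranked candidates are the ones worth re-running through the concolic analysis.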

Iris is a WinDbg extension that performs basic detection of common Windows exploit mitigations (32 and 64 bits). The checks implemented, as can be seen in the screenshot above, are (for the loaded modules): DynamicBase, ASLR, DEP, SEH, SafeSEH, CFG, RFG, GS, and AppContainer. If you don't know the meaning of some of the keywords above, use Google; you'll find better explanations than the ones I could give you.

Setup

To "install", copy iris.dll into the winext folder of WinDbg (for x86 and x64).

WinDbg 10.0.xxxxx

Unless you installed the debug tools in a non-standard path, you'll find the winext folder at:

C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\winext

Or, for 32 bits:

C:\Program Files (x86)\Windows Kits\10\Debuggers\x86\winext

WinDbg Preview

Unless you ~~installed~~ copied the WinDbg Preview install folder into a non-standard location, you'll have it in a folder with a name close to the one below (depending on the installed version):

C:\Program Files\WindowsApps\Microsoft.WinDbg_1.1906.12001.0_neutral__9wekib2d8acwe

For 64 bits, copy iris.dll into amd64\winext, or into x86\winext for 32 bits.

Load the extension

After the steps above, just load the extension with .load iris and run !iris.help to see the available command(s).

```
0:002> .load iris
[+] Iris WinDbg Extension Loaded

0:002> !iris.help

IRIS WinDbg Extension ([email protected]). Available commands:
	help    = Shows this help
	modules = Display exploit mitigations for all loaded modules.
```

Running

As shown in the screenshot above, just run !iris.modules or simply !modules.

Warning

Don't blindly trust the results; some might not be accurate. I pretty much used PE-bear parser, winchecksec, Process Hacker, and narly as references. Thank you to all of them. I put this together in a day to save some time during a specific assignment. It worked for me, but it hasn't been thoroughly tested. You have been warned: use at your own risk. I'll be updating and maintaining this, so if you find any issues, please let me know.
I plan to add a few more mitigations later.

References

Besides the references mentioned before, if you want to write your own extension (or contribute to this one), the Advanced Windows Debugging book and the WinDbg SDK are your friends.

Download Iris
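Several of the mitigations Iris reports (DynamicBase/ASLR, DEP, SEH, CFG, AppContainer) are encoded as bits in the PE optional header's DllCharacteristics field. The sketch below decodes those bits; the flag values come from the PE/COFF specification, but this is an independent illustration, not Iris's code.

```python
# IMAGE_DLLCHARACTERISTICS_* bits from the PE/COFF specification.
FLAGS = {
    0x0040: "DynamicBase (ASLR)",
    0x0100: "DEP (NX compatible)",
    0x0400: "No SEH",
    0x1000: "AppContainer",
    0x4000: "CFG (Control Flow Guard)",
}

def mitigations(dll_characteristics):
    """Return the names of the mitigation flags set in DllCharacteristics."""
    return [name for bit, name in FLAGS.items() if dll_characteristics & bit]
```

A real checker would read the field with a PE parser (e.g. pefile's OPTIONAL_HEADER.DllCharacteristics) before decoding it like this; checks such as GS and SafeSEH need more than this one field.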

Diaphora (διαφορά, Greek for "difference") is a program diffing plugin for IDA, similar to Zynamics BinDiff or FOSS counterparts like YaDiff, DarunGrim, TurboDiff, etc. It was released during SyScan 2015. It works with IDA 6.9 to 7.3. Support for Ghidra is in development. Support for Binary Ninja is also planned, but will come after Ghidra's port. If you are looking for Radare2 support, you can check this very old fork.

For more details, please check the tutorial in the "doc" directory.

NOTE: If you're looking for a tool for diffing or matching functions between binaries and source code, you might want to take a look at Pigaios.

Getting help and asking for features

You can join the mailing list https://groups.google.com/forum/?hl=es#!forum/diaphora to ask for help, request new features, report issues, etc. For reporting bugs, however, I recommend using the issue tracker: https://github.com/joxeankoret/diaphora/issues

Please note that only the last 3 versions of IDA are officially supported. As of today, that means only IDA 7.1, 7.2 and 7.3 are supported. Versions 6.8, 6.9, 6.95 and 7.0 do work (with all the latest patches that were supplied to _customers_), but no official support is offered for them. However, if you run into any problem with these versions, ping me and I will do my best.

Documentation

You can check the tutorial https://github.com/joxeankoret/diaphora/blob/master/doc/diaphora_help.pdf

Screenshots

This is a screenshot of Diaphora diffing the PEGASUS iOS kernel vulnerability fixed in iOS 9.3.5. And this is an old screenshot of Diaphora diffing the Microsoft bulletin MS15-034. These are some screenshots of Diaphora diffing the Microsoft bulletin MS15-050, extracted from the blog post Analyzing MS15-050 With Diaphora by Alex Ionescu. Here is a screenshot of Diaphora diffing iBoot from iOS 10.3.3 against iOS 11.0.

Download Diaphora

Checklist and tools for increasing the security of Apache Airflow.

DISCLAIMER: This project is NOT affiliated with the Apache Foundation or the Airflow project, and is not endorsed by them.

Contents

The purpose of this project is to provide tools to increase the security of Apache Airflow installations. This project provides the following tools:

- Configuration file with hardened settings (see hardened_airflow.cfg).
- Security checklist for hardening default installations (see CHECKLIST.MD).
- Static analysis tool to check Airflow configuration files for insecure settings.
- JSON schema document used for validation by the static analysis tool (see airflow_cfg.schema).

Information for the Static Analysis Tool (airflowscan)

The static analysis tool can check an Airflow configuration file for settings related to security. The tool converts the config file to JSON, and then uses a JSON Schema to do the validation.

Requirements

Python 3 is required, and you can find all required modules in the requirements.txt file. Only tested on Python 3.7, but it should work on other 3.x releases. There are no plans to support 2.x at this time.

Installation

You can install this via PIP as follows:

pip install airflowscan
airflowscan

To download and run manually, do the following:

git clone https://github.com/nightwatchcybersecurity/airflowscan.git
cd airflowscan
pip install -r requirements.txt
python -m airflowscan.cli

How to use

To scan a configuration file, use the following command:

airflowscan scan some_airflow.cfg

Reporting bugs and feature requests

Please use the GitHub issue tracker to report issues or suggest features: https://github.com/nightwatchcybersecurity/airflowscan

You can also send email to _research /at/ nightwatchcybersecurity [dot] com_

Download Airflowscan
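The convert-then-validate approach described above can be sketched in a few lines: parse the config with configparser, build the nested dict that would be handed to the JSON Schema validator, and flag a couple of settings. The specific checks shown are illustrative assumptions, not the tool's actual schema.

```python
import configparser

def to_dict(cfg_text):
    """Convert INI-style Airflow config text into a nested dict (the JSON shape)."""
    parser = configparser.ConfigParser()
    parser.read_string(cfg_text)
    return {section: dict(parser.items(section)) for section in parser.sections()}

def insecure_settings(cfg):
    """Example checks only; the real tool validates against airflow_cfg.schema."""
    findings = []
    if cfg.get("webserver", {}).get("authenticate", "False") != "True":
        findings.append("webserver authentication disabled")
    if cfg.get("core", {}).get("load_examples") == "True":
        findings.append("example DAGs enabled")
    return findings
```

In the real tool the dict would be passed to a JSON Schema validator instead of hand-written checks, which keeps the rules declarative and easy to extend.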

Docker Security Playground (DSP) is an application that allows you to:

- Create network and network security scenarios, in order to understand network protocols, rules, and security issues by installing DSP on your PC.
- Learn penetration testing techniques by simulating vulnerability lab scenarios.
- Manage a set of docker-compose projects.

The main goal of DSP is teaching penetration testing and network security, but its flexibility also allows the creation, graphic editing, and run/stop management of all your docker-compose labs. For more information, look at the Labs Management page.

DSP Features

- Graphic editor for docker-compose
- Docker image management
- Git integration
- DSP repository with a set of network security scenarios

How can I share my labs with the world?

During the installation you can create a local environment that has no link with git, or you can associate a personal repository with the application. This is very useful if you want to share your work with other people. A DSP repository must meet several requirements, so I have created a base DSP Repo Template that you can use to create your personal repository. The easiest way to share labs is the following:

- Fork the DSP_Repo project: https://github.com/giper45/DSP_Repo.git
- During the installation, set the github directory param to your forked repository.
- Now create your labs and share them!

It is important that all images that you use are available to other users, so:

- You can publish them on Docker Hub, so other users can pull your images in order to use your labs.
- You can provide Dockerfiles inside the .docker-images directory, so users can use build.sh to build your images and use your repo.

If you need a "private way" to share labs, you should share the repository by other means; at the current time there is no support for sharing private repositories.
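A lab shared this way is ultimately just a docker-compose project. Below is a minimal sketch of one, pairing an attacker container with a vulnerable service on a shared network; the image names are illustrative placeholders, not part of the official DSP repositories.

```yaml
# Hypothetical minimal DSP-style lab (docker-compose v2 syntax).
version: "2"
services:
  attacker:
    image: kalilinux/kali-rolling   # assumption: any image with pentest tooling
    tty: true
    networks: [labnet]
  victim:
    image: vulnerables/web-dvwa     # assumption: an example vulnerable web app
    networks: [labnet]
networks:
  labnet:                            # isolated lab network shared by both containers
```

Keeping each lab self-contained like this is what lets DSP start, stop, and edit them graphically.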
In DSP you can manage multiple user repositories (Repositories tab).

Prerequisites

- Node.js (v7 or later)
- git
- docker
- docker-compose
- compiler tools (g++, C/C++ toolchain)

Installation

Install the prerequisites and run:

npm install

Troubleshooting during installation

If you get an error regarding the node-pty module, try to:

- Install build-essential (in Ubuntu: apt install -y build-essential)
- Use Node.js LTS (node-pty has some issues, as shown here)

Update the application

When you update the application, it is important to update the npm packages (the application uses mydockerjs, an npm Docker API that I am developing alongside DSP: https://www.npmjs.com/package/mydockerjs):

npm run update

Start

Run

npm start

to start the application. This will launch a server listening on port 8080 of your localhost (or another port, if you have set the ENV variable in the index.js file). Go to your favourite browser and type localhost:8080. You'll be redirected to the installation page; set the parameters and click install.

Documentation

For documentation about DSP usage, go to the Wiki pages:

- Main Page: http://gitlab.comics.unina.it/NS-Thesis/DockerSecurityPlayground_1/wikis/home
- User Guide: http://gitlab.comics.unina.it/NS-Thesis/DockerSecurityPlayground_1/wikis/user_guide
- Docker Wrapper Image: http://gitlab.comics.unina.it/NS-Thesis/DockerSecurityPlayground_1/wikis/dsp_wrapper_image

It is a little outdated; I will update it as soon as possible!

Docker Wrapper Image

DSP implements a label convention called DockerWrapperImage that allows you to create images that expose actions to execute when a lab is running.
Look at the doc.

Error Debug

MacOS ECONNRESET error:

```
events.js:183
      throw er; // Unhandled 'error' event
      ^
Error: read ECONNRESET
    at _errnoException (util.js:992:11)
    at TCP.onread (net.js:618:25)
```

On Mac it seems that there is some problem with a node package, so in order to solve this, run:

MacBook-Pro:DockerSecurityPlayground gaetanoperrone$ npm install [email protected] --save-dev --save-exact

Other info here: http://gitlab.comics.unina.it/NS-Thesis/DockerSecurityPlayground_1/wikis/docker-operation-errors

Contributing

- Fork it!
- Create your feature branch: git checkout -b my-new-feature
- Commit your changes: git commit -am 'Add some feature'
- Push to the branch: git push origin my-new-feature
- Submit a pull request; we'll check it.

Any Questions?

Use the Issues to ask anything you want!

Links

- DSP Vagrant Box used in Blackhat Session
- Blackhat scenario in Gitlab

Relevant DSP Repositories

- https://github.com/giper45/DSP_Projects.git : Official DSP Repository
- https://github.com/giper45/DSP_Repo.git : DSP template for creating another repository; fork it to start creating your personal remote environment
- https://github.com/NS-unina/DSP_Repo.git : Repository created for the Network Security course of Simon Pietro Romano at the University of Naples Federico II

Contributors

- Technical support: Gaetano Perrone, Francesco Caturano
- Documentation support: Gaetano Perrone, Francesco Caturano
- Application design: Gaetano Perrone, Simon Pietro Romano
- Application development: Gaetano Perrone, Francesco Caturano
- Docker wrapper image development: Gaetano Perrone, Francesco Caturano

Thanks to Giuseppe Criscuolo for the logo design.

Changelog

Go to CHANGELOG.md to see all the version changes.

Download DockerSecurityPlayground

DrMITM is a program designed to globally log all traffic.

How it works

DrMITM sends a request to the website and returns the IP of the website, in case the server of the website is designed to rely on the website's IP for requests. The request that goes to the website also ends up being sent to the server, which logs the message the website sends; it then returns the same message and sends it directly to the server, where the server may see it as the website, but it will also direct our request to the website once the program changes IPs. Once it sends our request to the website, the program pauses our traffic and waits for incoming traffic. When a new user tries to log in (or whatever) and the website sends a request to the server, DrMITM receives it, and it gets the data back to us by sending the same data to a file.

How do I get started?

For the Nim version:

- Install Nim 0.19 (using choosenim or git clone)
- git clone the repo
- cd into the directory
- Run nim DrMITM.nim

For the Python version:

- Install Python
- git clone the repo
- cd into the directory
- Run python DrMITM.py

Commands

- e (live logging)
- b (traffic blocking)
- r (redirect users)

Issue Reporting

If you have an issue, please submit it with the following details:

- Your issue
- Your Nim or Python version
- Your operating system
- What you were doing before the issue occurred

Q&A

Q: How does live logging work?
A: It just sends the logged data to a file and outputs it on screen.

Q: How does the traffic block work?
A: A unicode gets sent to the website from the server and overflows the traffic towards incoming traffic.

Q: How does the redirection feature work?
A: It sends a fake error message plus a redirection status code from the server, with a modified location.

Download DrMITM

Sampler is a tool for shell command execution, visualization and alerting, configured with a simple YAML file.

Installation

macOS

brew cask install sampler

or

curl -Lo /usr/local/bin/sampler https://github.com/sqshq/sampler/releases/download/v1.0.1/sampler-1.0.1-darwin-amd64
chmod +x /usr/local/bin/sampler

Linux

wget https://github.com/sqshq/sampler/releases/download/v1.0.1/sampler-1.0.1-linux-amd64 -O /usr/local/bin/sampler
chmod +x /usr/local/bin/sampler

Note: the libasound2-dev system library is required for Sampler to play a trigger sound tone. Usually the library is already in place, but if not, you can install it with your favorite package manager, e.g. apt install libasound2-dev

Windows (experimental)

Recommended for use with advanced console emulators, e.g. Cmder. Download the .exe.

Usage

You specify shell commands, and Sampler executes them at the required rate. The output is used for visualization. You can sample any dynamic process right from the terminal: observe changes in a database, monitor MQ in-flight messages, trigger a deployment process and get a notification when it's done.

Using Sampler is basically a 3-step process:

- Define your configuration in a YAML file
- Run sampler -c config.yml
- Adjust component size and location on the UI

Components

The following is a list of configuration examples for each component type, with macOS-compatible sampling scripts.
Runchart runcharts: – title: Search engine response time rate-ms: 500 # sampling rate, default = 1000 scale: 2 # number of digits after sample decimal point, default = 1 legend: enabled: true # enables item labels, default = true details: false # enables item statistics: cur/min/max/dlt values, default = true items: – label: GOOGLE sample: curl -o /dev/null -s -w ‘%{time_total}’ https://www.google.com color: 178 # 8-bit color number, default one is chosen from a pre-defined palette – label: YAHOO sample: curl -o /dev/null -s -w ‘%{time_total}’ https://search.yahoo.com – label: BING sample: curl -o /dev/null -s -w ‘%{time_total}’ https://www.bing.com Sparkline sparklines: – title: CPU usage rate-ms: 200 scale: 0 sample: ps -A -o %cpu | awk ‘{s+=$1} END {print s}’ – title: Free [memory]( “memory” ) pages rate-ms: 200 scale: 0 sample: memory_pressure | grep ‘Pages free’ | awk ‘{print $3}’ Barchart barcharts: – title: Local network activity rate-ms: 500 # sampling rate, default = 1000 scale: 0 # number of digits after sample decimal point, default = 1 items: – label: UDP bytes in sample: nettop -J bytes_in -l 1 -m udp | awk ‘{sum += $4} END {print sum}’ – label: UDP bytes out sample: nettop -J bytes_out -l 1 -m udp | awk ‘{sum += $4} END {print sum}’ – label: TCP bytes in sample: nettop -J bytes_in -l 1 -m tcp | awk ‘{sum += $4} END {print sum}’ – label: TCP bytes out sample: nettop -J bytes_out -l 1 -m tcp | awk ‘{sum += $4} END {print sum}’ Gauge gauges: – title: Minute progress rate-ms: 500 # sampling rate, default = 1000 scale: 2 # number of digits after sample decimal point, default = 1 percent-only: false # toggle display of the current value, default = false color: 178 # 8-bit color number, default one is chosen from a pre-defined palette cur: sample: date +%S # sample script for current value max: sample: echo 60 # sample script for max value min: sample: echo 0 # sample script for min value – title: Year progress cur: sample: date +%j max: sample: echo 365 
min: sample: echo 0 Textbox textboxes: – title: Local weather rate-ms: 10000 # sampling rate, default = 1000 sample: curl wttr.in?0ATQF border: false # border around the item, default = true color: 178 # 8-bit color number, default is white – title: Docker [containers]( “containers” ) stats rate-ms: 500 sample: docker stats –no-stream –format “table {{.Name}}t{{.CPUPerc}}t{{.MemUsage}}t{{.PIDs}}” Asciibox asciiboxes: – title: UTC time rate-ms: 500 # sampling rate, default = 1000 font: 3d # font type, default = 2d border: false # border around the item, default = true color: 43 # 8-bit color number, default is white sample: env TZ=UTC date +%r Bells and whistles Triggers Triggers allow to perform conditional actions, like visual/sound alerts or an arbitrary shell command. The following examples illustrate the concept. Clock gauge, which shows minute progress and announces current time at the beginning of each minute gauges: – title: MINUTE PROGRESS position: [[0, 18], [80, 0]] cur: sample: date +%S max: sample: echo 60 min: sample: echo 0 triggers: – title: CLOCK BELL EVERY MINUTE condition: ‘[ $label == “cur” ] && [ $cur -eq 0 ] && echo 1 || echo 0’ # expects “1” as TRUE indicator actions: terminal-bell: true # standard terminal bell, default = false sound: true # NASA quindar tone, default = false visual: false # notification with current value on top of the component area, default = false script: say -v samantha `date +%I:%M%p` # an arbitrary script, which can use $cur, $prev and $label variables Search engine latency chart, which alerts user when latency exceeds a threshold runcharts: – title: SEARCH ENGINE RESPONSE TIME (sec) rate-ms: 200 items: – label: GOOGLE sample: curl -o /dev/null -s -w ‘%{time_total}’ https://www.google.com – label: YAHOO sample: curl -o /dev/null -s -w ‘%{time_total}’ https://search.yahoo.com triggers: – title: Latency threshold exceeded condition: echo “$prev 0.3” |bc -l # expects “1” as TRUE indicator actions: terminal-bell: true # 
### Interactive shell support

In addition to the `sample` command, one can specify an `init` command (executed only once before sampling) and a `transform` command (to post-process the `sample` command output). That covers the interactive shell use case, e.g. establishing a connection to a database only once, and then polling within an interactive shell session.

#### Basic mode

```yaml
textboxes:
  - title: MongoDB polling
    rate-ms: 500
    init: mongo --quiet --host=localhost test  # executes only once to start the interactive session
    sample: Date.now();                        # executes with a required rate, in scope of the interactive session
    transform: echo result = $sample           # executes in scope of local session, $sample variable is available for transformation
```

#### PTY mode

In some cases an interactive shell won't work, because its stdin is not a terminal. We can fool it using PTY mode:

```yaml
textboxes:
  - title: Neo4j polling
    pty: true  # enables pseudo-terminal mode, default = false
    init: cypher-shell -u neo4j -p pwd --format plain
    sample: RETURN rand();
    transform: echo "$sample" | tail -n 1
  - title: Top on a remote server
    pty: true  # enables pseudo-terminal mode, default = false
    init: ssh -i ~/user.pem user@host
    sample: top
```

#### Multistep init

It is also possible to execute multiple init commands one after another, before sampling starts:

```yaml
textboxes:
  - title: Java application uptime
    multistep-init:
      - java -jar jmxterm-1.0.0-uber.jar
      - open host:port  # or local PID
      - bean java.lang:type=Runtime
    sample: get Uptime
```
### Variables

If the configuration file contains repeated patterns, they can be extracted into the `variables` section. Variables can also be specified using the `-v`/`--variable` flag on startup, and any system environment variables are also available in the scripts.

```yaml
variables:
  mongoconnection: mongo --quiet --host=localhost test
barcharts:
  - title: MongoDB documents by status
    items:
      - label: IN_PROGRESS
        init: $mongoconnection
        sample: db.getCollection('events').find({status:'IN_PROGRESS'}).count()
      - label: SUCCESS
        init: $mongoconnection
        sample: db.getCollection('events').find({status:'SUCCESS'}).count()
      - label: FAIL
        init: $mongoconnection
        sample: db.getCollection('events').find({status:'FAIL'}).count()
```

### Color theme

```yaml
theme: light  # default = dark
sparklines:
  - title: CPU usage
    sample: ps -A -o %cpu | awk '{s+=$1} END {print s}'
```

## Real-world recipes

### Databases

The following are examples of different database connections. Interactive shell (init script) usage is recommended, to establish the connection only once and then reuse it during sampling.

#### MySQL

```yaml
# prerequisite: installed mysql shell
variables:
  mysql_connection: mysql -u root -s --database mysql --skip-column-names
sparklines:
  - title: MySQL (random number example)
    pty: true
    init: $mysql_connection
    sample: select rand();
```

#### PostgreSQL

```yaml
# prerequisite: installed psql shell
variables:
  PGPASSWORD: pwd
  postgres_connection: psql -h localhost -U postgres --no-align --tuples-only
sparklines:
  - title: PostgreSQL (random number example)
    init: $postgres_connection
    sample: select random();
```

#### MongoDB

```yaml
# prerequisite: installed mongo shell
variables:
  mongo_connection: mongo --quiet --host=localhost test
sparklines:
  - title: MongoDB (random number example)
    init: $mongo_connection
    sample: Math.random();
```

#### Neo4j

```yaml
# prerequisite: installed cypher shell
variables:
  neo4j_connection: cypher-shell -u neo4j -p pwd --format plain
sparklines:
  - title: Neo4j (random number example)
    pty: true
    init: $neo4j_connection
    sample: RETURN rand();
    transform: echo "$sample" | tail -n 1
```
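The Kafka recipe below relies on a small awk program to aggregate consumer lag: skip the header row (`NR>1`) and sum the 5th column. It can be checked standalone against mock `--describe` output (the column layout here is assumed for illustration):

```shell
#!/bin/sh
# Standalone check of the awk aggregation, fed with mock --describe output.
printf '%s\n' \
  'TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG' \
  'events 0          100             110             10' \
  'events 1          200             205             5' \
  | awk 'NR>1 {sum += $5} END {print sum}'   # prints 15
```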
#### Kafka lag per consumer group

```yaml
variables:
  kafka_connection: $KAFKA_HOME/bin/kafka-consumer-groups --bootstrap-server localhost:9092
runcharts:
  - title: Kafka lag per consumer group
    rate-ms: 5000
    scale: 0
    items:
      - label: A->B
        sample: $kafka_connection --group group_a --describe | awk 'NR>1 {sum += $5} END {print sum}'
      - label: B->C
        sample: $kafka_connection --group group_b --describe | awk 'NR>1 {sum += $5} END {print sum}'
      - label: C->D
        sample: $kafka_connection --group group_c --describe | awk 'NR>1 {sum += $5} END {print sum}'
```

#### Docker containers stats (CPU, MEM, I/O)

```yaml
textboxes:
  - title: Docker containers stats
    sample: docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemPerc}}\t{{.MemUsage}}\t{{.NetIO}}\t{{.BlockIO}}\t{{.PIDs}}"
```

#### TOP command on a remote server via SSH

```yaml
variables:
  sshconnection: ssh -i ~/my-key-pair.pem user@host
textboxes:
  - title: SSH
    pty: true
    init: $sshconnection
    sample: top
```

#### JMX Java application uptime example

```yaml
# prerequisite: download the jmxterm jar file
# from https://docs.cyclopsgroup.org/jmxterm
textboxes:
  - title: Java application uptime
    multistep-init:
      - java -jar jmxterm-1.0.0-uber.jar
      - open host:port  # or local PID
      - bean java.lang:type=Runtime
    sample: get Uptime
    transform: echo $sample | tr -dc '0-9' | awk '{printf "%.1f min", $1/1000/60}'
```

## Download Sampler