ArmourBird CSF – Container Security Framework is an extensible, modular, API-first framework built for regular security monitoring of Docker installations and containers against CIS and other custom security checks. ArmourBird CSF has a client-server architecture and is thus divided into two components:

a) CSF Client
This component is responsible for monitoring the Docker installations, containers, and images on target machines. In the initial release, it checks against the Docker CIS benchmark. The checks in the CSF client are configurable and will be expanded in future releases and updates. It has been built on top of Docker Bench for Security.

b) CSF Server
This is the receiver agent for the security logs generated by the various distributed CSF clients (installed on multiple physical/virtual machines). It also has a UI sub-component for unified management and dashboarding of the various vulnerabilities/issues logged by the CSF clients. The server also exposes APIs that can be used for integrating with other systems.

Important Note: The tool is currently in beta. Hence the Django debug flag (CSF Server) is enabled and SQLite is used as the DB in the same Docker container; spinning up a new Docker container will therefore reset the database.

Architecture Diagram

APIs

CSF Server Issue APIs
POST /issues : For reporting issues from CSF clients
GET /issues/{issueId} : For listing a specific issue with {id}
GET /issues : For listing all issues reported by all CSF clients
PUT /issues/{issueId} : For updating a specific issue (e.g. severity, comments, etc.)
DELETE /issues/{issueId} : For deleting a specific issue

Client APIs
POST /clients : For adding a CSF client
GET /clients/{clientId} : For listing a specific CSF client
GET /clients/ : For listing all the CSF clients
PUT /clients/{clientId} : For updating a CSF client (e.g. IP address, etc.)
DELETE /clients/{clientId} : For deleting a CSF client from the network

Client Group APIs
POST /clientGroup : For adding a client to a specific group (e.g. product1, HRNetwork, product2, etc.)
GET /clientGroup/{groupId} : For listing client group details
GET /clientGroup/ : For listing all client groups
PUT /clientGroup/{groupId} : For updating a client group
DELETE /clientGroup/{groupId} : For deleting a client group

Installation/Usage
The CSF client runs as a Docker container on the compute instances running a Docker installation. It can be executed with the following command, using the Docker image hosted on hub.docker.com:

docker run -it --net host --pid host --userns host --cap-add audit_control -e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST -e CSF_CDN='' -v /etc:/etc -v /usr/bin/docker-containerd:/usr/bin/docker-containerd -v /usr/bin/docker-runc:/usr/bin/docker-runc -v /usr/lib/systemd:/usr/lib/systemd -v /var/lib:/var/lib -v /var/run/docker.sock:/var/run/docker.sock --label csf_client -d armourbird/csf_client

Make sure to update the CSF_CDN environment variable in the above command with the CSF server URL. Once the container is running, it will send issue logs to the CSF server at constant intervals.

The CSF server can run as a Docker container or natively on a web server, to which the various CSF clients will send data. You can run it on your server with the following command, using the Docker image hosted on hub.docker.com:

docker run -p 80:8000 -d armourbird/csf_server

Browse the CSF server via the following links:
Dashboard: http://<csf-server>/dashboard/
APIs: http://<csf-server>/api/

Building Docker Images

Building the Docker image for the CSF client:
git clone git@github.com:armourbird/csf.git
cd csf_client
docker build . -t csf_client

Building the Docker image for the CSF server:
git clone git@github.com:armourbird/csf.git
cd csf_server
docker build . -t csf_server

Sneak Peek
Dashboard
API View

Website: https://www.armourbird.com/
Twitter: http://twitter.com/ArmourBird

References
https://www.cisecurity.org/cis-benchmarks
https://github.com/docker/docker-bench-security

Download ArmourBird CSF
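A client report to the POST /issues endpoint above might be assembled like the following sketch. The field names and values here are illustrative assumptions, not the documented CSF schema; consult the API for the real payload format.

```python
import json

# Hypothetical issue payload a CSF client could POST to /issues.
# Field names are assumptions for illustration only.
def build_issue(client_id, check_id, description, severity="medium"):
    return {
        "client": client_id,        # a client registered via POST /clients
        "check": check_id,          # e.g. a Docker CIS benchmark check ID
        "description": description,
        "severity": severity,
    }

issue = build_issue("client-42", "CIS-5.4", "Container running privileged")
body = json.dumps(issue)  # body a client would send to POST /issues
```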

_A sugared version of RottenPotatoNG, with a bit of juice, i.e. another Local Privilege Escalation tool, from Windows Service Accounts to NT AUTHORITY\SYSTEM_

Summary
RottenPotatoNG and its variants leverage the privilege escalation chain based on the BITS service having the MiTM listener on 127.0.0.1:6666, when you have SeImpersonate or SeAssignPrimaryToken privileges. During a Windows build review we found a setup where BITS was intentionally disabled and port 6666 was taken. We decided to weaponize RottenPotatoNG: say hello to Juicy Potato. For the theory, see Rotten Potato – Privilege Escalation from Service Accounts to SYSTEM and follow the chain of links and references.

We discovered that, other than BITS, there are several COM servers we can abuse. They just need to:
- be instantiable by the current user, normally a "service user" which has impersonation privileges
- implement the IMarshal interface
- run as an elevated user (SYSTEM, Administrator, ...)

After some testing we obtained and tested an extensive list of interesting CLSIDs on several Windows versions.

Juicy details
JuicyPotato allows you to:
- Target CLSID: pick any CLSID you want. Here you can find the list organized by OS.
- COM listening port: define the COM listening port you prefer (instead of the marshalled hardcoded 6666).
- COM listening IP address: bind the server on any IP.
- Process creation mode: depending on the impersonated user's privileges, you can choose from CreateProcessWithToken (needs SeImpersonate), CreateProcessAsUser (needs SeAssignPrimaryToken), or both.
- Process to launch: launch an executable or script if the exploitation succeeds.
- Process argument: customize the launched process arguments.
- RPC server address: for a stealthy approach you can authenticate to an external RPC server.
- RPC server port: useful if you want to authenticate to an external server and a firewall is blocking port 135.
- TEST mode: mainly for testing purposes, i.e. testing CLSIDs. It creates the DCOM object and prints the user of the token. See here for testing.

Usage
T:\>JuicyPotato.exe
JuicyPotato v0.1

Mandatory args:
-t createprocess call: CreateProcessWithTokenW, CreateProcessAsUser, try both
-p : program to launch
-l : COM server listen port

Optional args:
-m : COM server listen address (default 127.0.0.1)
-a : command line argument to pass to program (default NULL)
-k : RPC server ip address (default 127.0.0.1)
-n : RPC server listen port (default 135)
-c : CLSID (default BITS: {4991d34b-80a1-4291-83b6-3328366b9097})
-z : only test CLSID and print token's user

Example

Final thoughts
If the user has SeImpersonate or SeAssignPrimaryToken privileges then you are SYSTEM. It's nearly impossible to prevent the abuse of all these COM servers. You could think to modify the permissions of these objects via DCOMCNFG, but good luck, this is going to be challenging. The actual solution is to protect sensitive accounts and applications which run under the *SERVICE accounts. Stopping DCOM would certainly inhibit this exploit, but could have a serious impact on the underlying OS.

Binaries
An automatic build is available. Binaries can be downloaded from the Artifacts section here. Also available in BlackArch.

Authors
Andrea Pierini
Giuseppe Trotta

References
- Rotten Potato – Privilege Escalation from Service Accounts to SYSTEM
- Windows: DCOM DCE/RPC Local NTLM Reflection Elevation of Privilege
- Potatoes and Tokens
- The lonely Potato
- Social Engineering the Windows Kernel by James Forshaw

Download Juicy-Potato
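CLSIDs such as the default BITS one above follow the standard brace-wrapped GUID format. A quick way to sanity-check entries pulled from a CLSID list (an illustrative helper, not part of JuicyPotato):

```python
import re

# {8-4-4-4-12} hex digits, brace-wrapped, case-insensitive
CLSID_RE = re.compile(r"^\{[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}\}$", re.I)

def is_clsid(s: str) -> bool:
    """Return True if s looks like a valid brace-wrapped CLSID."""
    return bool(CLSID_RE.match(s))

print(is_clsid("{4991d34b-80a1-4291-83b6-3328366b9097}"))  # the default BITS CLSID
```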

Scout Suite is an open-source multi-cloud security-auditing tool which enables security posture assessment of cloud environments. Using the APIs exposed by cloud providers, Scout Suite gathers configuration data for manual inspection and highlights risk areas. Rather than going through dozens of pages on the web consoles, Scout Suite presents a clear view of the attack surface automatically.

Scout Suite is stable and actively maintained, but a number of features and internals may change. As such, please bear with us as we find time to work on, and improve, the tool. Feel free to report a bug with details (please provide console output using the --debug argument), request a new feature, or send a pull request. The project team can be contacted at [email protected].

Note: The latest (and final) version of Scout2 can be found at https://github.com/nccgroup/Scout2/releases and https://pypi.org/project/AWSScout2. Further work is not planned for Scout2; fixes will be implemented in Scout Suite.

Support
The following cloud providers are currently supported or planned:
Amazon Web Services
Microsoft Azure (beta)
Google Cloud Platform
Alibaba Cloud (early alpha)
Oracle Cloud Infrastructure (early alpha)

Installation
Refer to the wiki.

Compliance

AWS
Use of Scout Suite does not require AWS users to complete and submit the AWS Vulnerability / Penetration Testing Request Form. Scout Suite only performs API calls to fetch configuration data and identify security gaps, which is not considered security scanning as it does not impact AWS' network and applications.

Azure
Use of Scout Suite does not require Azure users to contact Microsoft to begin testing. The only requirement is that users abide by the Microsoft Cloud Unified Penetration Testing Rules of Engagement.
References:
https://docs.microsoft.com/en-us/azure/security/azure-security-pen-testing
https://www.microsoft.com/en-us/msrc/pentest-rules-of-engagement

Google Cloud Platform
Use of Scout Suite does not require GCP users to contact Google to begin testing. The only requirement is that users abide by the Cloud Platform Acceptable Use Policy and the Terms of Service, and ensure that tests only affect projects you own (and not other customers' applications).

References:
https://cloud.google.com/terms/aup
https://cloud.google.com/terms/

Usage
The following command will provide the list of available command line options:
$ python scout.py --help

You can also use this to get help on a specific provider:
$ python scout.py PROVIDER --help

For further details, check out our Wiki pages at https://github.com/nccgroup/ScoutSuite/wiki.

After performing a number of API calls, Scout will create a local HTML report and open it in the default browser.

Also note that the command line will try to infer the argument name if possible when receiving a partial switch. For example, this will work and use the selected profile:
$ python scout.py aws --profile PROFILE

Credentials
Assuming you already have your provider's CLI up and running, you should have your credentials already set up and be able to run Scout Suite using one of the following commands. If that is not the case, please consult the wiki page for the desired provider.

Amazon Web Services
$ python scout.py aws

Azure
$ python scout.py azure --cli

Google Cloud Platform
$ python scout.py gcp --user-account

Additional information can be found in the wiki.

Download ScoutSuite
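The partial-switch inference mentioned above (e.g. --prof resolving to --profile) works like argparse's unambiguous prefix matching; a minimal sketch of the idea, not Scout Suite's actual code:

```python
# Minimal illustration of unambiguous prefix matching, the mechanism
# argparse uses to let "--prof" stand in for "--profile".
def resolve(partial, options):
    matches = [o for o in options if o.startswith(partial)]
    if len(matches) == 1:
        return matches[0]
    raise ValueError(f"ambiguous or unknown switch: {partial}")

opts = ["--profile", "--help", "--debug"]
print(resolve("--prof", opts))  # --profile
```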

Mitaka is a browser extension for OSINT search which can:
- Extract & refang IoCs from a selected block of text. E.g. example[.]com to example.com, test[at]example.com to test@example.com, hxxp://example.com to http://example.com, etc.
- Search / scan them on various engines. E.g. VirusTotal, urlscan.io, Censys, Shodan, etc.

Features

Supported IoC types

name | desc. | e.g.
---|---|---
text | Freetext | any string(s)
ip | IPv4 address | 8.8.8.8
domain | Domain name | github.com
url | URL | https://github.com
email | Email address | test@example.com
asn | ASN | AS13335
hash | md5 / sha1 / sha256 | 44d88612fea8a8f36de82e1278abb02f
cve | CVE number | CVE-2018-11776
btc | BTC address | 1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa
gaPubID | Google AdSense Publisher ID | pub-9383614236930773
gaTrackID | Google Analytics Tracker ID | UA-67609351-1

Supported search engines

name | url | supported types
---|---|---
AbuseIPDB | https://www.abuseipdb.com | ip
archive.org | https://archive.org | url
archive.today | http://archive.fo | url
BGPView | https://bgpview.io | ip / asn
BinaryEdge | https://app.binaryedge.io | ip / domain
BitcoinAbuse | https://www.bitcoinabuse.com | btc
Blockchain.com | https://www.blockchain.com | btc
BlockCypher | https://live.blockcypher.com | btc
Censys | https://censys.io | ip / domain / asn / text
crt.sh | https://crt.sh | domain
DNSlytics | https://dnslytics.com | ip / domain
DomainBigData | https://domainbigdata.com | domain
DomainTools | https://www.domaintools.com | ip / domain
DomainWatch | https://domainwat.ch | domain / email
EmailRep | https://emailrep.io | email
FindSubDomains | https://findsubdomains.com | domain
FOFA | https://fofa.so | ip / domain
FortiGuard | https://fortiguard.com | ip / url / cve
Google Safe Browsing | https://transparencyreport.google.com | domain / url
GreyNoise | https://viz.greynoise.io | ip / domain / asn
Hashdd | https://hashdd.com | ip / domain / hash
HybridAnalysis | https://www.hybrid-analysis.com | ip / domain / hash (sha256 only)
Intelligence X | https://intelx.io | ip / domain / url / email / btc
IPinfo | https://ipinfo.io | ip / asn
IPIP | https://en.ipip.net | ip / asn
Joe Sandbox | https://www.joesandbox.com | hash
MalShare | https://malshare.com | hash
Maltiverse | https://www.maltiverse.com | domain / hash
NVD | https://nvd.nist.gov | cve
OCCRP | https://data.occrp.org | email
ONYPHE | https://www.onyphe.io | ip
OTX | https://otx.alienvault.com | ip / domain / hash
PubDB | http://pub-db.com | gaPubID / gaTrackID
PublicWWW | https://publicwww.com | text
Pulsedive | https://pulsedive.com | ip / domain / url / hash
RiskIQ | http://community.riskiq.com | ip / domain / email / gaTrackID
SecurityTrails | https://securitytrails.com | ip / domain / email
Shodan | https://www.shodan.io | ip / domain / asn
Sploitus | https://sploitus.com | cve
SpyOnWeb | http://spyonweb.com | ip / domain / gaPubID / gaTrackID
Talos | https://talosintelligence.com | ip / domain
ThreatConnect | https://app.threatconnect.com | ip / domain / email
ThreatCrowd | https://www.threatcrowd.org | ip / domain / email
ThreatMiner | https://www.threatminer.org | ip / domain / hash
TIP | https://threatintelligenceplatform.com | ip / domain
Urlscan | https://urlscan.io | ip / domain / asn / url
ViewDNS | https://viewdns.info | ip / domain / email
VirusTotal | https://www.virustotal.com | ip / domain / url / hash
Vulmon | https://vulmon.com | cve
VulncodeDB | https://www.vulncode-db.com | cve
VxCube | http://vxcube.com | ip / domain / hash
WebAnalyzer | https://wa-com.com | domain
We Leak Info | https://weleakinfo.com | email
X-Force Exchange | https://exchange.xforce.ibmcloud.com | ip / domain / hash
ZoomEye | https://www.zoomeye.org | ip

Supported scan engines

name | url | supported types
---|---|---
Urlscan | https://urlscan.io | ip / domain / url
VirusTotal | https://www.virustotal.com | url

Downloads
Chrome: https://chrome.google.com/webstore/detail/mitaka/bfjbejmeoibbdpfdbmbacmefcbannnbg
Firefox: https://addons.mozilla.org/en-US/firefox/addon/mitaka/

How to use
This browser extension shows context menus based on the type of IoC you selected, and then you can choose what you want to search / scan it on. Examples:

Note: Please set your urlscan.io & VirusTotal API keys in the options page to enable urlscan.io & VirusTotal scans.

Options
You can enable / disable a search engine on the options page based on your preference.

About permissions
This browser extension requires the following permissions.
- Read and change all your data on the websites you visit: this extension creates context menus dynamically based on what you select on a website. That means this extension requires reading all your data on the websites you visit. (This extension doesn't change anything on the websites.)
- Display notifications: this extension shows a notification when something goes wrong.
I don't (and will never) collect any information from the users.

Alternatives or Similar Tools
- CrowdScrape
- Gotanda
- Sputnik
- ThreatConnect Integrated Chrome Extension
- ThreatPinch Lookup
- VTchromizer

How to build (for developers)
This browser extension is written in TypeScript and built by webpack. TypeScript files start out in the src directory, run through the TypeScript compiler, then webpack, and end up as JavaScript files in the dist directory.

git clone https://github.com/ninoseki/mitaka.git
cd mitaka
npm install
npm run test
npm run build

For loading an unpacked extension, please follow the procedure described at https://developer.chrome.com/extensions/getstarted.

Misc
Mitaka (見たか) means "Have you seen it?" in Japanese.

Download Mitaka
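The refang step Mitaka performs can be sketched in a few lines; these are illustrative helpers, not Mitaka's actual TypeScript implementation:

```python
import re

# Undo common IoC defanging: example[.]com, test[at]example.com,
# hxxp://example.com -> their clickable forms.
def refang(text: str) -> str:
    text = text.replace("[.]", ".")   # defanged dot
    text = text.replace("[at]", "@")  # defanged @ in emails
    text = re.sub(r"\bhxxp", "http", text)  # defanged scheme
    return text

print(refang("hxxp://example[.]com"))  # http://example.com
```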

Kirjuri is a simple PHP/MySQL web application for managing physical forensic evidence items. It is intended to be used as a workflow tool from receiving and booking through note-taking and, possibly, reporting findings. It simplifies and helps in case management when dealing with a large (or small!) number of devices submitted for forensic analysis. Kirjuri requires PHP 7. See the official Kirjuri home page for more details.

OVERVIEW & LICENSE
Kirjuri is developed by Antti Kurittu. It was started at the Helsinki Police Department as an internal tool. Original development released under the MIT license. Some components are distributed with their own licenses; please see the folders & help for details.

CHANGELOG
See CHANGELOG.md

LOOKING TO PARTICIPATE?
Everyone interested is encouraged to submit code and enhancements. If you don't feel confident submitting code, you can submit language files and localized lists of devices etc. These will gladly be accepted.

SCREENSHOTS

Download Kirjuri

SysAnalyzer is an open-source application designed to give malcode analysts an automated tool to quickly collect, compare, and report on the actions a binary took while running on the system. A full installer for the application is available and can be downloaded here. The application supports Windows 2000 through Windows 10, including x64 support.

The main components of SysAnalyzer work off of comparing snapshots of the system over a user-specified time interval. A snapshot mechanism was used instead of a live logging implementation in order to reduce the amount of data that analysts must wade through when conducting their analysis. By using a snapshot system, we can effectively present viewers with only the persistent changes found on the system since the application was first run.

While this mechanism does help to eliminate a lot of the possible noise caused by other applications, or inconsequential runtime nuances, it also opens up the possibility of missing key data. Because of this, SysAnalyzer also gives the analyst the option to include several forms of live logging in the analysis procedure.

When first run, SysAnalyzer will present the user with the following configuration wizard:

The executable path textbox represents the file under analysis. It can be filled in either by:
- dragging and dropping the target executable on the SysAnalyzer desktop icon
- specifying the executable on the command line
- dragging and dropping the target into the actual textbox
- using the browse-for-file button next to the textbox

For files which must open in a viewer, such as DOC or PDF files, specify the viewer app in the executable textbox and the file itself in the arguments textbox.

There are a handful of options available on the screen for optional live logging components such as full packet capture, API logger, and sniff hit. You can also run it as another user. These options are saved to a configuration file and do not need to be entered each time.

Note that users can also select the "Skip" link in order to proceed to the main interface, where they can manually control the snapshot tools. Also note that the API logger option is generally stable, but not entirely so in every case; I generally reserve this option for when I need more information than a standard analysis provides.

Once these options are filled in and the user selects the "Start" button, the options will be applied, a base snapshot of the system taken, and the executable launched.

Download SysAnalyzer
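The snapshot comparison idea, reporting only what changed between two points in time, can be sketched as a simple set difference. This is an illustration of the concept, not SysAnalyzer's code:

```python
# Compare two system snapshots and keep only the persistent changes,
# the same idea SysAnalyzer applies to processes, ports, drivers, etc.
def snapshot_diff(before, after):
    return {
        "added": sorted(set(after) - set(before)),
        "removed": sorted(set(before) - set(after)),
    }

base = ["explorer.exe", "svchost.exe"]
later = ["explorer.exe", "svchost.exe", "malware.exe"]
print(snapshot_diff(base, later))  # {'added': ['malware.exe'], 'removed': []}
```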

Set of tools for creating/injecting payloads into images.

SETUP
The following Perl modules are required:
- GD
- Image::ExifTool
- String::CRC32

On Debian-based systems, install these packages:
sudo apt install libgd-perl libimage-exiftool-perl libstring-crc32-perl

On OSX please refer to this workaround. Thanks to @iosdec.

TOOLS

bmp.pl: BMP Payload Creator/Injector.
Usage:
./bmp.pl [-payload 'STRING'] -output payload.bmp
If the output file exists, the payload will be injected into the existing file; otherwise a new one will be created.
Example:
./bmp.pl -output payload.bmp
[>| BMP Payload Creator/Injector |]
Generating output file
[✔] File saved to: payload.bmp
[>] Injecting payload into payload.bmp
[✔] Payload was injected successfully

payload.bmp: PC bitmap, OS/2 1.x format, 1 x 1
00000000 42 4d 2f 2a 00 00 00 00 00 00 1a 00 00 00 0c 00 |BM/*............|
00000010 00 00 01 00 01 00 01 00 18 00 00 00 ff 00 2a 2f |..............*/|
00000020 3d 31 3b 3c 73 63 72 69 70 74 20 73 72 63 2f 2f |=1;<script src//|
00000042

gif.pl: GIF Payload Creator/Injector.
Usage:
./gif.pl [-payload 'STRING'] -output payload.gif
If the output file exists, the payload will be injected into the existing file; otherwise a new one will be generated.
Example:
./gif.pl -output payload.gif
[>| GIF Payload Creator/Injector |]
Generating output file
[✔] File saved to: payload.gif
[>] Injecting payload into payload.gif
[✔] Payload was injected successfully

payload.gif: GIF image data, version 87a, 10799 x 32
00000000 47 49 46 38 37 61 2f 2a 20 00 80 00 00 04 02 04 |GIF87a/* .......|
00000010 00 00 00 2c 00 00 00 00 20 00 20 00 00 02 1e 84 |...,.... . .....|
00000020 8f a9 cb ed 0f a3 9c b4 da 8b b3 de bc fb 0f 86 |................|
00000030 e2 48 96 e6 89 a6 ea ca b6 ee 0b 9b 05 00 3b 2a |.H............;*|
00000040 2f 3d 31 3b 3c 73 63 72 69 70 74 20 73 72 63 3d |/=1;<script src=|
00000064

jpg.pl: JPG Payload Creator/Injector.
Usage:
./jpg.pl [-payload 'STRING'] -output payload.jpg
If the output file exists, the payload will be injected into the existing file; otherwise a new one will be created.
Example:
./jpg.pl -output payload.jpg
[>| JPEG Payload Creator/Injector |]
Generating output file
[✔] File saved to: payload.jpg
[>] Injecting payload into comment tag
[✔] Payload was injected successfully

payload.jpg: JPEG image data, JFIF standard 1.01, resolution (DPI), density 96x96, segment length 16, comment: "", baseline, precision 8, 32x32, components 3
00000000 ff d8 ff e0 00 10 4a 46 49 46 00 01 01 01 00 60 |......JFIF.....`|
00000010 00 60 00 00 ff fe 00 20 3c 73 63 72 69 70 74 20 |.`..... <script |
00000040 05 08 07 07 07 09 09 08 0a 0c 14 0d 0c 0b 0b 0c |................|
00000050 19 12 13 0f 14 1d 1a 1f 1e 1d 1a 1c 1c 20 24 2e |............. $.|
00000060 27 20 22 2c 23 1c 1c 28 37 29 2c 30 31 34 34 34 |' ",#..(7),01444|
00000070 1f 27 39 3d 38 32 3c 2e 33 34 32 ff db 00 43 01 |.'9=82<.342...C.|

png.pl: PNG Payload Creator/Injector.
Usage:
./png.pl [-payload 'STRING'] -output payload.png
Example:
./png.pl -output payload.png
[>| PNG Payload Creator/Injector |]
Generating output file
[✔] File saved to: payload.png
[>] Injecting payload into payload.png
[+] Chunk size: 13
[+] Chunk type: IHDR
[+] CRC: fc18eda3
[+] Chunk size: 9
[+] Chunk type: pHYs
[+] CRC: 952b0e1b
[+] Chunk size: 25
[+] Chunk type: IDAT
[+] CRC: c8a288fe
[+] Chunk size: 0
[+] Chunk type: IEND
[>] Inject payload to the new chunk: 'pUnk'
[✔] Payload was injected successfully

payload.png: PNG image data, 32 x 32, 8-bit/color RGB, non-interlaced
00000000 89 50 4e 47 0d 0a 1a 0a 00 00 00 0d 49 48 44 52 |.PNG........IHDR|
00000010 00 00 00 20 00 00 00 20 08 02 00 00 00 fc 18 ed |... ... ........|
00000020 a3 00 00 00 09 70 48 59 73 00 00 0e c4 00 00 0e |.....pHYs.......|
00000030 c4 01 95 2b 0e 1b 00 00 00 19 49 44 41 54 48 89 |...+......IDATH.|
00000040 ed c1 31 01 00 00 00 c2 a0 f5 4f ed 61 0d a0 00 |..1.......O.a...|
00000050 00 00 6e 0c 20 00 01 c8 a2 88 fe 00 00 00 00 49 |..n. ..........I|
00000060 45 4e 44 ae 42 60 82 00 00 00 00 00 00 00 00 00 |END.B`..........|
00000070 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
000000c0 00 1f 70 55 6e 6b 3c 73 63 72 69 70 74 20 73 72 |..pUnk<script sr|
000000ee

LICENSE
WTFPL

LEGAL DISCLAIMER
The author does not hold any responsibility for the bad use of this tool. Remember that attacking targets without prior consent is illegal and punished by law.

Download Pixload
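The pUnk chunk injection shown in the png.pl output follows the standard PNG chunk layout: a 4-byte big-endian length, a 4-byte type, the data, and a CRC-32 over type + data. A Python sketch of building such a chunk (illustrative only; the tool itself is Perl):

```python
import struct
import zlib

# Build a PNG chunk: length, type, data, CRC-32 over type+data.
# This mirrors the layout png.pl writes for its custom 'pUnk' chunk.
def png_chunk(ctype: bytes, data: bytes) -> bytes:
    crc = zlib.crc32(ctype + data) & 0xFFFFFFFF
    return struct.pack(">I", len(data)) + ctype + data + struct.pack(">I", crc)

# A hypothetical payload; any bytes can go in an ancillary chunk.
chunk = png_chunk(b"pUnk", b"<script src=//example/x></script>")
```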

Dolos Cloak is a Python script designed to help network penetration testers and red teamers bypass 802.1x solutions by using an advanced man-in-the-middle attack. The tool is able to piggyback on the wired connection of a victim device that is already allowed on the target network, without kicking the victim device off the network. It was designed to run on an Odroid C2 running Kali ARM and requires two external USB ethernet dongles. It should be possible to run the tool on other hardware and distros, but it has only been tested on an Odroid C2 thus far.

How it Works
Dolos Cloak uses iptables, arptables, and ebtables NAT rules in order to spoof the MAC and IP addresses of a trusted network device and blend in with regular network traffic. On boot, the script disallows any outbound network traffic from leaving the Odroid in order to hide the MAC addresses of its network interfaces. Next, the script creates a bridge interface and adds the two external USB ethernet dongles to the bridge. All traffic, including any 802.1x authentication steps, is passed on the bridge between these two interfaces. In this state, the device is acting like a wire tap.

Once the Odroid is plugged in between a trusted device (desktop, IP phone, printer, etc.) and the network, the script listens to the packets on the bridge interface in order to determine the MAC address and IP of the victim device. Once the script determines these, it configures NAT rules to make all traffic on the OUTPUT and POSTROUTING chains look like it is coming from the victim device. At this point, the device is able to communicate with the network without being burned. Once the Odroid is spoofing the MAC address and IP of the victim device, the script sends out a DHCP request in order to determine its default gateway, search domain, and name servers.
It uses the response to configure its network settings so that the device can communicate with the rest of the network. At this point, the Odroid is acting as a stealthy foothold on the network. Operators can connect to the Odroid over the built-in NIC eth0 in order to obtain network access. The device can also be configured to send out a reverse shell so that operators can use the device as a drop box and run commands on the network remotely. For example, the script can be configured to run an Empire Python stager after running the man-in-the-middle attack. You can then use the Empire C2 connection to upgrade to a TCP reverse shell or VPN tunnel.

Installation and Usage
1. Perform a default install of Kali ARM on the Odroid C2. Check out the Black Hills writeup here.
2. ssh [email protected]
3. Be sure to save this project to /root/tools/dolos_cloak
4. Plug one external USB NIC into the Odroid and run dhclient to get internet access in order to install dependencies: dhclient usbnet0
5. Run the install script to get all the dependencies and set the Odroid to perform the MitM on boot by default. Keep in mind that this will make drastic changes to the device's network settings and disable Network Manager. You may want to download any additional tools before this step:
cd setup
./setup.sh
6. You may want to install some other tools, like 'host', that do not come standard on Kali ARM. Empire, enum4linux, and responder are also nice additions.
7. Make sure you are able to ssh into the Odroid via the built-in NIC eth0. Add your public key to /root/.ssh/authorized_keys for fast access.
8. Modify config.yaml to meet your needs. You should make sure the interfaces match the default names that your Odroid is giving your USB dongles. Order does not matter here. You should leave client_ip, client_mac, gateway_ip, and gateway_mac blank unless you used a LAN tap to mine them; the script _should_ be able to figure this out for us. Set these options only if you know their values for sure. The management_int, domain_name, and dns_server options are placeholders for now but will be useful very soon. For shells, you can set up a custom autorun command in config.yaml to run when the man-in-the-middle attack has autoconfigured. You can also set up a cron job to send back shells.
9. Connect two USB ethernet dongles and reboot the device (you need two because the built-in ethernet won't support promiscuous mode).
10. Boot the device and wait a few seconds for autosniff.py to block the OUTPUT ethernet and IP chains. Then plug in the Odroid between a trusted device and the network.
11. PWN N00BZ, get $$$, have fun, hack the planet

Tips
- Mod and run ./scripts/upgrade_to_vpn.sh to turn a stealthy Empire agent into a full-blown VPN tunnel.
- Mod and run ./scripts/reverse_listener_setup.sh to set up a port for a reverse listener on the device.
- Run ./scripts/responder_setup.sh to allow control of the protocols that we capture for responder. You should run responder on the bridge interface: responder -I mibr
- Be careful, as some NAC solutions use ports 445, 443, and 80 to periodically verify hosts. Working on a solution to this...
- Logs help when autosniff.py misbehaves. The rc.local is set to store the current session logs in ./logs/session.log and logs in ./logs/history.log, so we can reboot and still check the last session's log if need be. Log files have cool stuff in them like network info, error messages, and all bash commands used to set up the NAT ninja magic.

Stealth
Use the radio_silence parameter to prevent any output originating from us. This is for sniffing-only purposes.

Download Dolos_Cloak
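The NAT spoofing described above can be illustrated with the kind of rules involved. This is a hedged sketch only: the MAC/IP values are placeholders (the real script learns them by sniffing), the bridge name mibr is taken from the responder tip above, and the actual rules autosniff.py writes may differ.

```shell
# Illustrative only: rewrite outbound traffic so it appears to come
# from the spoofed victim device. Placeholder values throughout.
VICTIM_IP=10.0.0.50           # learned by sniffing the bridge
VICTIM_MAC=aa:bb:cc:dd:ee:ff  # learned by sniffing the bridge

# Layer 2: rewrite our source MAC to the victim's on egress
ebtables -t nat -A POSTROUTING -o mibr -j snat --to-src "$VICTIM_MAC"

# Layer 3: rewrite our source IP to the victim's on egress
iptables -t nat -A POSTROUTING -o mibr -j SNAT --to-source "$VICTIM_IP"

# ARP: make our outgoing ARP traffic carry the victim's IP
arptables -A OUTPUT -j mangle --mangle-ip-s "$VICTIM_IP"
```

These rules require root and the ebtables/arptables packages; they are configuration fragments, not a runnable script.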

Dr. ROBOT is a tool for Domain Reconnaissance and Enumeration. By utilizing containers to reduce the overhead of dealing with dependencies, inconsistency across operating systems, and different languages, Dr. ROBOT is built to be highly portable and configurable.

Use Case: Gather as many public-facing servers as a target organization possesses. Querying DNS resources enables us to quickly develop a large list of possible targets that you can run further analysis on.

Note: Dr. ROBOT is not just a one-trick pony. You can easily customize the tools that are used to gather information, so that you can enjoy the benefits of using the latest and greatest along with your battle-tested favorites.

Install and Run Inspect Upload Slack Dump DB Output Serve

Command Examples

Run gather using Sublist3r, Aquatone, and Shodan:
python drrobot.py example.domain gather -sub -aqua -shodan

Run gather using Sublist3r with a proxy:
python drrobot.py --proxy http://some.proxy:port example.domain gather -sub

Run inspect using Eyewitness:
python drrobot.py example.domain inspect -eye

Run inspect using httpscreenshot and grabbing headers:
python drrobot.py example.domain inspect -http -headers

Run upload using Mattermost/Slack:
python drrobot.py example.domain upload -matter

MAIN
usage: drrobot.py [-h] [--proxy PROXY] [--dns DNS] [--verbose] [--dbfile DBFILE] {gather,inspect,upload,rebuild,dumpdb,output,serve} ...

Docker DNS recon tool

positional arguments:
{gather,inspect,upload,rebuild,dumpdb,output,serve}
gather Run scanners against a specified domain and gather the associated systems. You have the option to run using any docker_buildfiles/webtools included in your config.
inspect Run further tools against domain information gathered from the gather step. Note: you must either supply a file which contains a list of IP/Hostnames, or the targeted domain must have a db file in the dbs folder
upload Upload recon data to Mattermost. Currently only works with a folder that contains PNG images.
rebuild Rebuild the database with additional files/all files from the previous runtime
dumpdb Dump the database of ip, hostname, and banners to a text file
output Generate output in specified format. Contains all information from scans (images, headers, hostnames, ips)
serve Serve database file in docker container using django

optional arguments:
-h, --help show this help message and exit
--proxy PROXY Proxy server URL to set DOCKER http_proxy to
--dns DNS DNS server to add to resolv.conf of DOCKER containers
--verbose Display verbose statements
--dbfile DBFILE Specify what db file to use for saving data to

Gather
usage: drrobot.py domain gather [-h] [-aqua] [-sub] [-brute] [-sfinder] [-knock] [-amass] [-recon] [-shodan] [-arin] [-hack] [-dump] [-virus] [--ignore IGNORE] [--headers]

positional arguments:
domain Domain to run scan against

optional arguments:
-h, --help Show this help message and exit
-aqua, --Aquatone AQUATONE is a set of tools for performing reconnaissance on domain names
-sub, --Sublist3r Sublist3r is a python tool designed to enumerate subdomains of websites using OSINT
-brute, --Subbrute SubBrute is a community driven project with the goal of creating the fastest, and most accurate subdomain enumeration tool.
-sfinder, --Subfinder SubFinder is a subdomain discovery tool that discovers valid subdomains for websites by using passive online sources
-knock, --Knock Knockpy is a python tool designed to enumerate subdomains on a target domain through a wordlist
-amass, --Amass The OWASP Amass tool suite obtains subdomain names by scraping data sources, recursive brute forcing, crawling web archives, permuting/altering names and reverse DNS sweeping.
-recon, --Reconng Recon-ng is a full-featured Web Reconnaissance framework written in Python. DrRobot utilizes several of the recon/hosts-domain modules in this framework.
-shodan, --Shodan Query SHODAN for publicly facing sites of given domain
-arin, --Arin Query ARIN for public CIDR ranges.
This is better as a brute force option as the ranges -hack, --HackerTarget This query will display the forward DNS records discovered using the data sets outlined above. -dump, --Dumpster Use the limited response of DNSDumpster. Requires API access for better results. -virus, --VirusTotal Utilize VirusTotal's Observer Subdomain Search --ignore IGNORE Space-separated list of subnets to ignore --headers If headers should be scraped from IP addresses gathered INSPECT usage: drrobot.py domain inspect [-h] [-httpscreen] [-eye] [--proxy PROXY] [--dns DNS] [--file FILE] positional arguments: domain Domain to run scan against optional arguments: -h, --help Show this help message and exit -httpscreen, --HTTPScreenshot Post enumeration tool for screen grabbing websites. All images will be downloaded to an output file: httpscreenshot.tar and unpacked in httpscreenshots -eye, --Eyewitness Post enumeration tool for screen grabbing websites. All images will be downloaded to outfile: Eyewitness.tar and unpacked in Eyewitness --proxy PROXY Proxy server URL to set for DOCKER http_proxy --dns DNS DNS server for the resolv.conf of DOCKER containers --file FILE (NOT WORKING) File with hostnames to run further inspection on UPLOAD usage: drrobot.py domain upload [-h] [-matter] [-slack] [--filepath FILEPATH] positional arguments: domain Domain to run scan against optional arguments: -h, --help Show this help message and exit -matter, --Mattermost Upload findings to a Mattermost server -slack, --Slack Upload findings to a Slack server --filepath FILEPATH Filepath to the folder containing images to upload. This is relative to the domain specified. By default, this will be the path to the output folder Rebuild usage: drrobot.py rebuild [-h] [-f [FILES [FILES ...]]] optional arguments: -h, --help Show this help message and exit -f [FILES [FILES ...]], --files [FILES [FILES ...]] Additional files to supply in addition to the ones in the config file Dumpdb usage: drrobot.py dumpdb [-h] positional arguments: domain Domain to run scan against optional arguments: -h, --help Show this help message and exit OUTPUT usage: drrobot.py domain output [-h] [--output OUTPUT] {json,xml} positional arguments: {json,xml} Generate json file under outputs folder (format) domain Domain to dump output of optional arguments: -h, --help Show this help message and exit --output OUTPUT Alternative location to create output file Serve usage: drrobot.py domain serve [-h] optional arguments: -h, --help show this help message and exit Configurations This tool is highly dependent on the configuration you provide it.
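Since behaviour is driven by default_config.json and user_config.json, a loader for the two files might look like the sketch below. This is illustrative only, not Dr. ROBOT's actual implementation: the filenames come from this README, but the configs/ directory location and the merge logic are assumptions.

```python
import json
from pathlib import Path

CONFIG_DIR = Path("configs")  # assumed location of the config files

def load_config():
    """Load default_config.json, then overlay any user_config.json sections."""
    with open(CONFIG_DIR / "default_config.json") as fh:
        config = json.load(fh)
    user_path = CONFIG_DIR / "user_config.json"
    if user_path.exists():
        with open(user_path) as fh:
            user = json.load(fh)
        # Shallow-merge per top-level section (Scanners, WebTools, ...)
        for section, tools in user.items():
            config.setdefault(section, {}).update(tools)
    return config
```

With this scheme a user config only needs to list the tools it overrides; everything else falls through to the defaults.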
Provided for you is a default_config.json that you can use as a simple template for your user_config.json. Most of the configurations under Scanners are done for you and can be used as is. Note the use of default in this and other sections. default: specifies a Docker or Ansible instance. Make sure you adjust configurations according to their usage. Docker Configuration Requirements Example: "Sublist3r": { "name": "Sublist3r", "default": true, "mode": "DOCKER", "docker_name": "sub", "network_mode": "host", "default_conf": "docker_buildfiles/Dockerfile.Sublist3r.tmp", "active_conf": "docker_buildfiles/Dockerfile.Sublist3r", "description": "Sublist3r is a python tool designed to enumerate subdomains of websites using OSINT", "src": "https://github.com/aboul3la/Sublist3r", "output": "/root/sublist3r", "output_folder": "sublist3r" } name: Identifiable name for the program/utility you are using default: (Disabled for now) mode: DOCKER (uses a docker container with this tool when chosen) docker_name: What the docker image name will be when running docker images network_mode: Network mode to use when creating the container. Host uses the host network default_conf: Template Dockerfile to build from active_conf: Target-specific configuration that will be used during runtime description: Description of tool (optional) src: Where the tool comes from (optional) output: Location of output on the docker container. Can be hardcoded into Dockerfiles for preference output_folder: Location under the outputs/target folder where output for the target will be stored Ansible Configuration Requirements Example: "HTTPScreenshot": { "name": "HTTPScreenshot", "short_name": "http", "mode": "ANSIBLE", "ansible_arguments": { "config": "$config/httpscreenshot_play.yml", "flags": "-e '$extra' -i ansible_plays/inventory.yml", "extra_flags": { "1": "variable_host=localhost", "2": "infile=$infile/aggregated/aggregated_protocol_hostnames.txt", "3": "outfile=$outfile/httpscreenshots.tar", "4": "outfolder=$outfile/httpscreenshots", "5": "variable_user=bitnami" } }, "description": "Post enumeration tool for screen grabbing websites. All images will be downloaded to outfile: httpscreenshot.tar and unpacked in httpscreenshots", "output": "/tmp/output", "infile": "/tmp/output/aggregated_protocol_hostnames.txt", "enabled": false } name: Identifiable name for the program/utility you are using default: (Disabled for now) mode: ANSIBLE (uses Ansible with this tool when chosen) ansible_arguments: JSON configuration for specific information config: playbook to use (the $config keyword is replaced with the full path to the file when issuing the ansible-playbook command) flags: specifies extra flags to be used with the ansible command (specifically useful for any extra flags you would like to use) extra_flags: the keys do not matter so long as each is different from any other key. These extra flags will all be applied to the ansible file in question description: Description of tool (optional) src: Where the tool comes from (optional) output: Where output will be stored on the external file system infile: (Unique to certain modules) what files this program will use as input. In this case you will notice that it searches /tmp/output for aggregated_protocol_hostnames.txt. This file is supplied from the above extra_flags option. Web Modules Example: "HackerTarget": { "short_name": "hack", "class_name": "HackerTarget", "default": false, "description": "This query will display the forward DNS records discovered using the data sets outlined above.", "api_call_unused": "https://api.hackertarget.com/hostsearch/?q=example.com", "output_file": "hacker.txt" } short_name: quick reference name for use in the CLI class_name: this must match the name you specify for a given class under the respective module name. The reason behind this results from the loading of modules at runtime, which requires the use of importlib. This will load the respective class from the class name provided via the CLI options.
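The class_name lookup described above is the standard importlib pattern: resolve a module, then fetch the class by the string from the config. A minimal sketch of the idea follows; the module path src.web_resources comes from this README, but the helper itself is an illustration, not DrRobot's actual code.

```python
import importlib

def load_webtool(class_name, module_name="src.web_resources", **kwargs):
    """Resolve a WebTool class by the class_name string from the config.

    Raises AttributeError if class_name does not exactly match a class
    in the module -- which is why the README stresses exact matching.
    """
    module = importlib.import_module(module_name)
    cls = getattr(module, class_name)
    return cls(**kwargs)
```

A mismatch between the JSON class_name and the real class name fails at load time rather than silently, so typos surface immediately.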
default: false (Disabled for now) api_call_unused: (Old, may be used later...) description: Description of tool (optional) Serve Module: Example "Serve": { "name": "Django", "command": "python manage.py runserver 0.0.0.0:8888", "docker_name": "django", "network_mode": "host", "default_conf": "serve_api/Dockerfile.Django.tmp", "active_conf": "serve_api/Dockerfile.Django", "description": "Django container for hosting database", "ports": { "8888": "8888" } } command: Command to start the server on the Docker container (Note: for now only Docker is used) docker_name: What the docker image name will be when running docker images network_mode: Network mode to use when creating the container. Host uses the host network default_conf: Template Dockerfile to build from active_conf: Target-specific configuration that will be used during runtime description: Description of tool (optional) ports: Port mapping of localhost to container for docker Example Configuration For WebTools Under configs, you will find a default_config that contains a majority of the default scanners you can use. If you wish to extend upon the WebTools list just follow these steps: Add the new tool to the user_config.json: { "WebTools": { "NewTool": { "short_name": "ntool", "class_name": "NewTool", "description": "NewTool description", "output_file": "newtool.txt", "api_key": null, "endpoint": null, "username": null, "password": null } } } Open src/web_resources.py and make a class with the class_name specified in the previous step. MAKE SURE IT MATCHES EXACTLY: class NewTool(WebTool): def __init__(self, **kwargs): super().__init__(**kwargs) ... def do_query(self): ... do the query ... store results in self.results Example Configurations For Docker Containers Under configs, you will find a default_config which contains a majority of the default scanners you can utilize. If you wish to extend upon the Scanners list just follow these steps: Add the json to the config file (user_config if generated): "Scanners": { ... "NewTool": { "name": "NewTool", "default": true, "mode": "DOCKER", "docker_name": "ntool", "network_mode": "host", "default_conf": "docker_buildfiles/Dockerfile.NewTool.tmp", "active_conf": "docker_buildfiles/Dockerfile.NewTool", "description": "NewTool is an awesome tool for domain enumeration", "src": "https://github.com/NewTool", "output": "/home/newtool", "output_file": "NewTool.txt" }, ... } Note network_mode is an option specifically for docker containers. It implements the --network flag when using docker. Under the docker_buildfiles/ folder, create your Dockerfile.NewTool.tmp dockerfile. If you desire to add more options at run time to the Dockerfiles, look at editing src/dockerize. Note: as of right now Dockerfiles must come from the docker_buildfiles folder. Future work includes specifying a remote source for the docker images. Example Ansible Configuration Under configs you will find a default_config which contains a majority of the default scanners you can have. For this step, however, we will be looking at configuring an inspection tool, Eyewitness, for use with Ansible. Add the json to the config file (user_config if generated): "Enumeration": { "Eyewitness": { "name": "Eyewitness", "short_name": "eye", "docker_name": "eye", "mode": "ANSIBLE", "network_mode": "host", "default_conf": "docker_buildfiles/Dockerfile.Eyewitness.tmp", "active_conf": "docker_buildfiles/Dockerfile.Eyewitness", "ansible_arguments": { "config": "$config/eyewitness_play.yml", "flags": "-e '$extra' -i ansible_plays/inventory", "extra_flags": { "1": "variable_host=localhost", "2": "variable_user=root", "3": "infile=$infile/aggregated_protocol_hostnames.txt", "4": "outfile=$outfile/Eyewitness.tar", "5": "outfolder=$outfile/Eyewitness" } }, "description": "Post enumeration tool for screen grabbing websites. All images will be downloaded to outfile: Eyewitness.tar and unpacked in Eyewitness", "output": "/tmp/output", "infile": "/tmp/output/aggregated/aggregated_protocol_hostnames.txt", "enabled": false } }
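The $config, $extra, $infile and $outfile keywords in ansible_arguments are plain placeholders that get expanded into a full ansible-playbook command line. A hedged sketch of that expansion is below; the directory arguments and the exact command shape are assumptions for illustration, not DrRobot's verbatim logic.

```python
def build_ansible_command(args, config_dir, target_dir):
    """Expand the placeholder keywords from an ansible_arguments block."""
    # Join all extra_flags values and splice them in where $extra appears
    extra = " ".join(args["extra_flags"].values())
    flags = args["flags"].replace("$extra", extra)
    # $config becomes the playbook directory; $infile/$outfile default to
    # the target's output locations (e.g. outputs/target_name)
    playbook = args["config"].replace("$config", config_dir)
    flags = flags.replace("$infile", target_dir).replace("$outfile", target_dir)
    return f"ansible-playbook {flags} {playbook}"
```

Because the extra_flags values are substituted before $infile/$outfile, placeholders nested inside extra_flags (as in the Eyewitness example) are expanded too.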
As you can see, this has a few items that may seem confusing at first, but will be clarified here: mode: Allows you to specify how you want to deploy a tool you want to use. Currently DOCKER and ANSIBLE are the only available methods to deploy. All options outside of ansible_arguments will be ignored when developing for ANSIBLE. Options under ansible_arguments: config: specify which playbook to use flags: which flags to pass to the ansible-playbook command. With the exception of the $extra flag, you can add anything you would like to be done uniquely here. extra_flags: this corresponds to the $extra flag as seen above. This will be used to populate variables that you input into your playbook. You can use this to supply command line arguments when utilizing ansible and Dr. Robot in order to add files and other utilities to your script. variable_host: hostname alias found in the inventory file variable_user: user to log in as on the variable_host machine infile: file to be used with the tool above. Eyewitness requires hostnames with the format https://some.url, hence aggregated_protocol_hostnames.txt. Note the use of the prefix $infile: these names all match as they are placeholders for the default locations that $infile corresponds to in outputs/target_name/aggregated. If you have a file in another location you can just specify the entire path without any errors occurring. outfile: The output file location. As with the above, $outfile in the name is just a key to the location outputs/target_name/. You may specify a hard-coded path for other use. Just remember the location for uploading or other processing with Dr. Robot. outfolder: The output folder to unpack/download files to. As with the above, $outfile in the name is just a key to the location outputs/target_name/. This is a special case for Eyewitness and HTTPScreenshot, which you can see in their playbooks. They generate a lot of files, and rather than downloading each individually, having them pack up the files as a step in the playbook and then unpacking allows for some integrity. Docker Integration and Customization Docker is relied upon heavily for this tool to function. All Docker files will have a default_conf and an active_conf. default_conf represents the template that will be used for generation of the docker files. The reason for building the docker images is to allow for finer control on the user end, especially if you are in a more restricted environment without access to the docker repositories. active_conf represents the configuration which will be built into the current image. Example Dockerfile.tmp: FROM python:3.4 WORKDIR /home ENV http_proxy $proxy ENV https_proxy $proxy ENV DNS $dns ENV TARGET $target ENV OUTPUT $output RUN mkdir -p $$OUTPUT RUN if [ -n "$$DNS" ]; then echo "nameserver $DNS" > /etc/resolv.conf; fi; apt-get install git RUN if [ -n "$$DNS" ]; then echo "nameserver $DNS" > /etc/resolv.conf; fi; git clone https://github.com/aboul3la/Sublist3r.git /home/sublist WORKDIR /home/sublist RUN if [ -n "$$DNS" ]; then echo "nameserver $DNS" > /etc/resolv.conf; fi; pip3 install -r requirements.txt ENTRYPOINT python3 sublist3r.py --domain $target -o $output/sublist3r.txt We use ENV to keep track of most variable input from Python on the user end. Using the DNS information provided by the user we are able to download packages and git repos during building. Ansible Configuration Please see the ansible documentation: https://docs.ansible.com/ for details on how to develop a playbook for use with DrRobot. A quick example: - hosts: "{{ variable_host|quote }}" remote_user: root tasks: - name: Apt install git become: true apt: name: git force: yes Inventory Ansible inventory files will be self-contained within DrRobot so as to further separate itself from any one system. The inventory file will be located under configs/ansible_inventory. As noted in the documentation, ansible inventory can be defined as groups or single IPs. A quick example: [example-host] ip.example.com SSH + Ansible If you desire to run Ansible with this tool and require ssh authentication, you can use the application as is to run Ansible scripts. The plays will be piped to STDIN/STDOUT so that you may supply credentials if required. If you wish to not have to manually provide credentials, just use an ssh-agent: eval $(ssh-agent -s) ssh-add /path/to/sshkey Adding Docker Containers If you wish to add another Dockerfile to the project, make a Dockerfile.toolname.tmp file within the docker_buildfiles folder.
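The default_conf to active_conf step described above is essentially template rendering: the .tmp Dockerfile's $proxy, $dns, $target and $output variables are filled in before docker build runs. The real logic lives in src/dockerize; the simplified sketch below is an assumption about how such a step can work, using Python's string.Template.

```python
from string import Template

def render_dockerfile(template_path, output_path, proxy="", dns="",
                      target="", output="/tmp/output"):
    """Fill a Dockerfile.*.tmp template and write the active_conf file."""
    with open(template_path) as fh:
        tmpl = Template(fh.read())
    # safe_substitute fills $proxy/$dns/$target/$output and collapses the
    # escaped $$ tokens (e.g. $$OUTPUT) into literal $ for shell use
    rendered = tmpl.safe_substitute(proxy=proxy, dns=dns,
                                    target=target, output=output)
    with open(output_path, "w") as fh:
        fh.write(rendered)
    return rendered
```

This mirrors the convention in the example Dockerfile.tmp: single-$ names are filled at render time, while $$ names survive as shell variables inside the built image.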
Then, opening up your user_config, add a new section under the appropriate heading as shown above in the docker examples. Dependencies Docker required for any of the scanners to run Python 3.6 required Pipenv for versioning of all Python packages. You can use the Pipfile with setup.py requirements as well: cd /path/to/drrobot/ pipenv install && pipenv shell python drrobot.py target Ansible if you require the use of external servers. Python Mattermost Driver [Optional] if using Mattermost you will require this module Output Gather: when run, will populate the outputs folder for the target. You will also notice a sqlite file found under the dbs folder (you can specify alternative db filenames). Inspect: when run, will continue to add files to the output folder. If you provided a domain file under the db section, the domain folder will be created for you. The output will look similar to the above but with some added contents. Slack Please check the following for a guide on how to set up your Python bot for messaging.
https://github.com/slackapi/python-slackclient SQLite DB file schema Table Data: | Column | Type | Key | | ------------- | ------- | ----------- | | domainid | INTEGER | PRIMARY KEY | | ip | VARCHAR | | | hostname | VARCHAR | | | headers | VARCHAR | | | http_headers | TEXT | | | https_headers | TEXT | | | domain | VARCHAR | FOREIGN KEY | Table Domain: | Column | Type | Key | | ------ | ------- | ----------- | | domain | VARCHAR | PRIMARY KEY | Serve As is often the case, having an API can be nice for automation purposes. Under the serve-api folder, there is a simple Django server implementation that you can stand up locally or serve via Docker. In order to serve the data, you need to copy your database file to the root directory of serve-api and rename the file to drrobot.db. If you would like to use an alternative name, simply change the name in the Django serve-api/drrobot/drrobot/settings.py. Download Dr_Robot
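For anyone wanting to query a dbs file directly, the schema above can be reproduced in plain sqlite3. Column names follow the tables in this README; the exact DDL DrRobot itself generates may differ, so treat this as an illustrative sketch.

```python
import sqlite3

# Schema matching the README's Domain and Data tables (assumed DDL)
SCHEMA = """
CREATE TABLE IF NOT EXISTS domain (
    domain VARCHAR PRIMARY KEY
);
CREATE TABLE IF NOT EXISTS data (
    domainid INTEGER PRIMARY KEY,
    ip VARCHAR,
    hostname VARCHAR,
    headers VARCHAR,
    http_headers TEXT,
    https_headers TEXT,
    domain VARCHAR REFERENCES domain(domain)
);
"""

def open_db(path=":memory:"):
    """Open a connection with the README's Data/Domain schema applied."""
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA)
    return conn
```

Pointing open_db at a real dbs file lets you run ad-hoc SELECTs over the gathered hostnames and headers without going through the serve-api.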

FudgeC2 is a campaign-oriented PowerShell C2 framework built on Python3/Flask, designed for team collaboration, client interaction, campaign timelining, and usage visibility. _ Note: FudgeC2 is currently in alpha stage, and should be used with caution in non-test environments. _ Setup Installation To quickly install & run FudgeC2 on a Linux host run the following: git clone https://github.com/Ziconius/FudgeC2 cd FudgeC2/FudgeC2 sudo pip3 install -r requirements.txt sudo python3 Controller.py For those who wish to use FudgeC2 via Docker, a template Dockerfile exists within the repo as well. Settings: FudgeC2's default server configurations can be found in the settings file: /FudgeC2/Storage/settings.py These settings include the FudgeC2 server application port, SSL configuration, and database name. For further details see the Server Configuration section. N.b. depending on your network design/RT architecture deployment, you will likely need to configure a number of proxy and routing adjustments. For upcoming development notes and recent changes see the release.md file. First Login After the initial installation you can log in with the default admin account using the credentials: admin:letmein You will be prompted to change the admin password after you log in for the first time. Server Settings Certificate: How to deploy/Where to deploy Port: consider listeners DB name: Users Users within Fudge are divided into 2 groups: admins and standard users. Admins have all of the usual functionality, such as user and campaign creation, and are required to create new campaigns. Within a campaign, a user's permissions can be configured to one of the following: None/Read/Read+Write. Without read permissions, a user will not be able to see the existence of a campaign, nor will they be able to read implant responses or registered commands. Users with read permission will only be able to view the commands and their output, and the campaign's logging page.
This role would typically be assigned to a junior tester or an observer. Users with write permissions will be able to create implant templates and execute commands on all active implants. _ Note: in further development this will become more granular, allowing write permissions on specific implants. _ User Creation An admin can create a new user from within the Global Settings options. They will also have the option to configure a user with admin privileges. Campaigns What is a campaign? A campaign is a method of organising an engagement against a client, which allows access control to be applied on a per-user basis. Each campaign contains a unique name, implants, and logs, while a user can be a member of multiple campaigns. Implants Implants are broken down into 3 areas: Implant Templates Stagers Active Implants Implant Templates An implant template is what we create to generate our stagers. The implant template will contain the default configuration for an implant. Once the stager has been triggered and an active implant is running on the host, this can be changed. The list of required configurations is: URL Initial callback delay Port Beacon delay Protocol: HTTP (default) HTTPS DNS Binary Once a template has been created, the stager options will be displayed in the Campaign Stagers page. Stagers The stagers are small scripts/macros etc. which are responsible for downloading and executing the full implant. Once an implant has been generated, the stagers page will provide a number of basic techniques which can be used to compromise the target. The stagers which are currently available are: IEX method Windows Word macro Active Implants Active implants are the result of successful stager executions. When a stager connects back to the Fudge C2 server, a new active implant is generated and delivered to the target host. Each stager execution & check-in creates a new active implant entry.
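The per-campaign permission model described earlier (None/Read/Read+Write) can be sketched as a simple ordered check. This is an illustrative model only, not FudgeC2's actual implementation; the class and method names are hypothetical.

```python
# Hypothetical sketch of FudgeC2-style per-campaign permission levels
NONE, READ, READ_WRITE = 0, 1, 2

class Campaign:
    def __init__(self, name):
        self.name = name
        self.permissions = {}  # username -> permission level

    def grant(self, user, level):
        self.permissions[user] = level

    def can_view(self, user):
        """Without read permission a user cannot even see the campaign exists."""
        return self.permissions.get(user, NONE) >= READ

    def can_execute(self, user):
        """Write permission is required to register commands on implants."""
        return self.permissions.get(user, NONE) >= READ_WRITE
```

Ordering the levels numerically means Read+Write implies Read, matching the description of write users being able to do everything read users can.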
_ Example _ As part of a campaign, a user creates an implant template called "Moozle Implant" which is delivered to an HR department via a Word macro. This results in five successful executions of the macro stager; as a result the user will see five active implants. These will be listed on the campaign's main implant page, each with a six-character unique identifier, something similar to the below: Moozle Implant_123459 Moozle Implant_729151 Moozle Implant_182943 Moozle Implant_613516 Moozle Implant_810021 Each of these implants can be individually interacted with, or the "ALL" keyword can be used to register a command against all active implants. Implant communication Implants will communicate back to the C2 server using whatever protocols the implant template was configured to use. If an implant is set up to use both HTTP and HTTPS, 2 listeners will be required to ensure full communication with the implant. Listeners are configured globally within Fudge from the Listeners page. Setting up and modifying the state of listeners requires admin rights, as changes may impact other ongoing campaigns using the same Fudge server. Currently the listeners page displays active listeners, and allows admins to: Create listeners for HTTP/S, DNS, or binary channels on customisable ports Start created listeners Stop active listeners Assign common names to listeners Implant configuration, further info: URL: An implant will be configured to call back to a given URL or IP address. Beacon time: [Default: 15 minutes] This is the time between the implant's calls back to the C2 server. Once an implant has been deployed it is possible to set this dynamically. Protocols: The implant will be able to use one of the following protocols: HTTP DNS Binary protocol A user can enable and disable protocols depending on the environment they believe they are working in. Download FudgeC2