CyberCode Academy
60 episodes
1 day ago
Welcome to CyberCode Academy — your audio classroom for Programming and Cybersecurity.
🎧 Each course is divided into a series of short, focused episodes that take you from beginner to advanced level — one lesson at a time.
From Python and web development to ethical hacking and digital defense, our content transforms complex concepts into simple, engaging audio learning.
Study anywhere, anytime — and level up your skills with CyberCode Academy.
🚀 Learn. Code. Secure.
Courses
Education,
Technology
Episodes (20/60)
CyberCode Academy
Course 8 - Penetration Testing OSINT Gathering with Recon-ng | Episode 4: Recon-ng Results: Comprehensive Reporting Formats and Strategic
In this lesson, you’ll learn about:
Managing Recon-ng Data and Generating Stakeholder Reports
This episode provides a complete guide to organizing, reporting, and analyzing the large amounts of data collected in a Recon-ng workspace. The emphasis is on converting raw terminal output into structured reports for stakeholders, and performing the necessary strategic analysis before moving forward with later stages of a penetration test.
1. Generating Organized Reports
The first priority is exporting Recon-ng data into formats that can be easily consumed by company administrators, security teams, or management. While the internal show dashboard is useful for the tester’s own overview, it is not suitable for stakeholders. Recon-ng offers several reporting modules to solve this (a short export sketch follows the module list):
• CSV Reporting
  • The reporting/csv module generates spreadsheet-style output (compatible with Excel, LibreOffice, etc.).
  • By default, this module exports data from the hosts table.
• JSON and XML Reporting
  • The reporting/json and reporting/xml modules allow exporting data in structured formats.
  • Multiple database tables can be included as needed.
  • These formats are ideal for automated pipelines, dashboards, or integrating with other tools.
• HTML Reporting
  • The reporting/html module creates a ready-to-share HTML report.
  • It includes:
    • An overall summary
    • Sections for all database tables that contain data
    • Optional customization using set creator (your company/organization) and set customer (client name, e.g., “BBC”)
  • This format is suitable for emailing or presenting to non-technical stakeholders.
• Lists
  • The reporting/lists module outputs a single-column list from a selected table.
  • The default column is IP address, but it can be changed (e.g., region, email addresses, etc.).
  • Useful for feeding data into other tools or scripts.
• Pushpin (Geolocation Viewer)
  • A more visual reporting option.
  • When latitude, longitude, and radius are set, this module generates HTML files showing pushpins on a Google Maps interface.
  • Useful for mapping physically geolocated server infrastructure.
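As a rough illustration of what these reporting modules ultimately expose, the sketch below reads a workspace's data.db directly and writes the hosts table to CSV. It is a minimal, hypothetical example: the workspace path and table layout are assumptions based on the database structure described in this course, not an official Recon-ng API.

```python
import csv
import sqlite3
from pathlib import Path

# Assumed location of a Recon-ng workspace database (adjust to your setup).
DB_PATH = Path.home() / ".recon-ng" / "workspaces" / "default" / "data.db"

def export_table_to_csv(db_path, table, out_file):
    """Dump one workspace table (e.g., hosts) to a CSV file for stakeholders."""
    conn = sqlite3.connect(db_path)
    try:
        cursor = conn.execute(f"SELECT * FROM {table}")  # table name is analyst-chosen, not user input
        headers = [col[0] for col in cursor.description]
        with open(out_file, "w", newline="") as fh:
            writer = csv.writer(fh)
            writer.writerow(headers)
            writer.writerows(cursor.fetchall())
    finally:
        conn.close()

if __name__ == "__main__":
    export_table_to_csv(DB_PATH, "hosts", "hosts_report.csv")
```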
All reports reflect the contents of the currently active workspace, so organizing your data beforehand is important. The Python source files defining each reporting module can be inspected within the Recon-ng home directory if needed for customization or learning.
2. Strategic Post-Scan Analysis (Critical Thinking Phase)
After exporting the collected data, the episode stresses that a deliberate analytical stage is absolutely essential. Without it, the reconnaissance effort “is pretty much useless.” This stage involves interpreting the findings and evaluating their security implications. Key analysis areas include:
• Infrastructure Weakness Identification
  • Reviewing BuiltWith data and other technical findings.
  • Understanding the technologies, frameworks, CMS versions, and hosting setups being used.
  • Assessing how an attacker could target these components.
• Social Engineering Exposure
  • Reviewing publicly accessible HR contacts, admin emails, employee names, and roles.
  • Determining how attackers could misuse this information for phishing or impersonation.
• Public Information Scrubbing
  • Evaluating which data points should be removed from public sources.
  • Prioritizing sensitive or high‑risk information that exposes the organization.
• Policy and Organizational Review
  • Determining whether internal security policies need...
1 day ago
9 minutes

CyberCode Academy
Course 8 - Penetration Testing OSINT Gathering with Recon-ng | Episode 3: Harvesting Data, Optimizing Contacts, Geolocation
In this lesson, you’ll learn about: Conducting a Multi‑Stage OSINT Campaign Using Recon‑ng
1. Initial Data Harvesting & Database Population
The OSINT campaign begins by creating a dedicated workspace and planning the stages of information gathering. The first objective is to populate core database tables—contacts and hosts.
Contact Gathering
  • whois_pocs module collects domain registration information, extracting email addresses and owner details.
  • PGP search modules identify additional contacts by searching for PGP keys associated with the target domain.
Host Discovery
  • bing_domain_web module scans the domain to enumerate subdomains and hostnames.
  • brute_hosts module brute‑forces common hostnames to uncover additional active hosts not found through search engines.
File Analysis
  • Once the hosts table is filled, the interesting_files module scans discovered hosts for publicly accessible files such as:
    • sitemap.xml
    • phpinfo.php
    • Test files
      These files may contain operational details useful for further analysis.
2. Contact Optimization & Breach Assessment
This phase enhances collected contact data and checks whether employees or organizational accounts have been compromised (a short sketch of the email-construction idea follows below).
Email Construction Using Mangle
  • The mangle module builds complete email addresses using partial names and organizational naming patterns.
  • It combines first/last names with the domain to produce likely valid addresses.
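The sketch below shows the general idea behind this kind of email construction: combining known first and last names with common corporate naming patterns. It is a simplified illustration of the concept, not the mangle module's actual implementation; the patterns and names are hypothetical.

```python
def mangle_emails(first, last, domain):
    """Generate candidate addresses from common corporate naming patterns."""
    first, last = first.lower(), last.lower()
    patterns = [
        f"{first}.{last}",      # jane.doe
        f"{first}{last}",       # janedoe
        f"{first[0]}{last}",    # jdoe
        f"{first}_{last}",      # jane_doe
        f"{last}.{first}",      # doe.jane
    ]
    return [f"{p}@{domain}" for p in patterns]

if __name__ == "__main__":
    for address in mangle_emails("Jane", "Doe", "example.com"):
        print(address)
```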
Breach Monitoring Using HIBP
  • hibp_breach module checks if collected or constructed emails were exposed in known credential leaks.
  • hibp_paste module searches paste sites for leaked emails or credentials.
  • Any hits are stored in the credentials table for responsible reporting and remediation.
3. Geolocation of Target Servers
This stage identifies the physical locations of the target’s online infrastructure.
IP Resolution
  • The resolve module converts hostnames into IP addresses and updates host entries.
Geolocation
  • The free_geoip module geolocates IPs, revealing the server’s approximate city, region, and country.
  • Location details are appended to the host’s database record.
Shodan Integration (Optional)
  • When a Shodan API key is available:
    • Latitude/longitude data is used by the shodan module to gather additional OSINT such as services, banners, and exposed ports.
4. Comprehensive Software Stack Profiling
The final stage performs a deep analysis of the technologies behind the target website.
BuiltWith Technology Scan
  • The BuiltWith module identifies:
    • Web technologies (e.g., Apache, Nginx, Ubuntu)
    • Infrastructure providers (e.g., AWS)
    • Associated tools (jQuery, New Relic, Analytics services)
  • For large domains, the scan may return hundreds of data points, greatly enriching the OSINT profile.
Additional Discoveries
  • Administrative contacts
  • Social media integrations
  • CDN details
  • Heat‑mapping and analytics tools (e.g., Mouseflow)
  • Optimization platforms (e.g., Optimizely)
Summary
By the end of this lesson, students understand how to conduct a complete OSINT workflow using Recon‑ng:
  • Populate key database tables
  • Form accurate contact and host profiles
  • Identify data breaches ethically
  • Geolocate infrastructure
  • Profile the full technology stack of a target domain
This staged approach reflects real-world ethical OSINT methodology and supports responsible security...
2 days ago
11 minutes

CyberCode Academy
Course 8 - Penetration Testing OSINT Gathering with Recon-ng | Episode 2: Modules, Data Flow, Naming Structure, API Keys
In this lesson, you’ll learn about: Mastering Recon-ng Module Operations, Data Flow, Naming Structure, API Integration & Session Automation
1. Understanding Module Functionality
To operate any module correctly, analysts must inspect its requirements using:
  • show info — displays the module’s:
    • Name
    • Description
    • Required and optional inputs
    • Source and destination database tables
This command is essential before running any module because it defines what data the module needs and what data it will produce.
2. Data Flow and Interaction
Recon-ng modules depend heavily on structured input/output flows:
  • Modules read from specific database tables (e.g., domains, hosts)
  • Then write results to other tables (e.g., contacts, repositories)
Understanding this flow is critical for chaining modules efficiently.
3. Module Chaining and Dependency
Modules are often dependent on data gathered by earlier modules. Examples:
  • Use a domain enumeration module (e.g., google_site_web)
    → populates the hosts table
  • Then run a discovery module (e.g., interesting_files)
    → requires the hosts table to be populated to search for files
This process is known as module chaining, forming a structured intelligence pipeline.
4. Database Querying
Recon-ng allows advanced database searches:
  • query command → perform SQL-like lookups
  • run + SQL syntax → filter large datasets
Example use case:
Retrieve contacts belonging to one domain instead of dumping the entire contacts table. This improves workflow efficiency when processing large OSINT datasets.
5. Module Configuration
Modules can be customized using:
  • set → assign a value (e.g., limit results, pick target subdomains)
  • unset → remove the assigned value
Modules also store collected artifacts (such as downloaded files) inside the workspace directory under the .recon-ng path.
6. Module Naming Structure
Recon-ng organizes modules into logical categories such as:
  • Reconnaissance
  • Reporting
  • Import
  • Discovery
The naming scheme for Reconnaissance modules is especially important:
  • Each module name reflects the source → destination flow
    • Example: domains-hosts means “take domains and discover hosts”
  • Common tables used include:
    • companies
    • contacts
    • domains
    • hosts
    • netblocks
    • profiles
    • repositories
This structure makes it easy to understand what each module does simply from its name.
7. API Key Management
Some modules rely on external APIs (e.g., BuiltWith, Jigsaw). Key commands:
  • keys add → configure an API key
  • show keys → list all installed keys
Without keys, these modules will fail or return limited data.
8. Session Scripting & Automation
Recon-ng supports automation to streamline repetitive assessments. Tools covered include:
a. Command Recording
  • record start → begin recording commands
  • record stop → stop recording
  • Run recorded script using:
    recon-ng -r
This allows you to reproduce actions automatically (a replay sketch follows this subsection).
b. Full Session Logging
  • spool → log everything output in the session
    Useful for audits, reporting, and compliance documentation.
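As a rough sketch of how such a recorded session might be replayed from a script, the example below writes a handful of the commands covered in this course to a resource file and hands it to recon-ng -r via subprocess. The file name and the exact command list are hypothetical, and the sketch assumes the recon-ng executable is on the PATH.

```python
import subprocess
from pathlib import Path

# Hypothetical recorded session: commands based on the workflow described in this course.
RECORDED_COMMANDS = """\
workspaces add demo_assessment
add domains example.com
use recon/domains-hosts/bing_domain_web
run
spool start session_output.txt
"""

def replay_session(script_path="demo_session.rc"):
    """Write the recorded commands to a resource file and replay them with recon-ng -r."""
    Path(script_path).write_text(RECORDED_COMMANDS)
    subprocess.run(["recon-ng", "-r", script_path], check=True)  # assumes recon-ng is on the PATH

if __name__ == "__main__":
    replay_session()
```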
Summary
This lesson teaches students how to:
  • Understand module requirements (show info)
  • Chain modules effectively using database-driven workflows
  • Customize modules with set and unset
  • Use Recon-ng’s SQL-like querying for...
3 days ago
10 minutes

CyberCode Academy
Course 8 - Penetration Testing OSINT Gathering with Recon-ng | Episode 1: Recon-ng Installation, Shell Exploration and Data Management
In this lesson, you’ll learn about: Recon-ng Installation, Shell Navigation, and Data Management for Penetration Testing
1. Installation and Environment Setup
Recon-ng is a powerful OSINT framework designed for information gathering in penetration testing. Installation options:
  • Linux (Kali Linux): Pre-installed, straightforward to use.
  • Other Linux (Ubuntu): Clone the repository using Git from Bitbucket; requires Python 2 (Python 3 not supported).
  • Windows or Mac: Run via Docker or a VirtualBox VM.
  • Dependencies: Install Python packages via pip install -r requirements.
  • API Credentials: Initial launch may show errors; these are addressed when configuring modules later.
2. Exploring the Special Shell and Data Management
After launching, Recon-ng opens a custom shell (not Bash). Key elements:
a. Commands
  • View top-level commands using:
    help
b. Workspaces
  • Projects are organized into workspaces.
  • Default workspace is created automatically.
  • Manage workspaces with:
    • workspaces add → create new workspace
    • workspaces select → switch workspace
  • Each workspace contains a hidden folder with:
    • data.db → project database
    • Generated report documents
  • The active workspace is shown in the prompt.
c. Database Structure
  • Around 20 tables, including:
    • domains
    • companies
    • credentials
  • Tables store critical project data used by modules.
d. Adding and Viewing Data
  • Add data using add <table> <value>
    • Example: add domains bbc.com
    • Example: add companies ExampleCorp
  • View data using:
    • show domains
    • show companies
  • Note: Creating a workspace uses workspaces add, not add workspaces.
3. Modules and Running Scans
Modules are scripts that perform specific reconnaissance tasks. Recon-ng currently has around 90 modules. Workflow:
  • Select module: use <module name>
  • Review info: show info → check required settings and usage instructions.
  • Run module: run → uses database data (e.g., domains) for scans.
Modules can perform actions like web scans, domain enumeration, or credential searches.
4. Viewing Database via Web Interface
Recon-ng provides a web interface via recon-web:
  • Start the server from the Recon-ng directory.
  • Access via: http://localhost:5000 or 127.0.0.1:5000
  • Features: Click a workspace → view database tables and content.
5. Summary
  • Recon-ng organizes projects using workspaces and database tables, enabling structured information gathering.
  • Modules automate reconnaissance tasks using stored data.
  • The custom shell and optional web interface provide flexible ways to manage projects.
  • Understanding workspaces, database tables, and module workflows is critical for effective OSINT and penetration testing.
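To make the "database behind the workspace" idea concrete, here is a small sketch that opens a workspace's data.db and lists its tables with their row counts. The workspace path is an assumption based on the hidden folder described above, and the script is illustrative rather than part of Recon-ng itself.

```python
import sqlite3
from pathlib import Path

# Assumed path to the default workspace database (adjust for your workspace name).
DB_PATH = Path.home() / ".recon-ng" / "workspaces" / "default" / "data.db"

def summarize_workspace(db_path):
    """Print each table in the workspace database along with its row count."""
    conn = sqlite3.connect(db_path)
    try:
        tables = [row[0] for row in conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
        for table in tables:
            count = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]  # names come from sqlite_master
            print(f"{table}: {count} rows")
    finally:
        conn.close()

if __name__ == "__main__":
    summarize_workspace(DB_PATH)
```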

You can listen and download our episodes for free on more than 10 different platforms:
https://linktr.ee/cybercode_academy
4 days ago
9 minutes

CyberCode Academy
Course 7 - Secure SDLC (Software Development Life Cycle) | Episode 8: Phase 8: Collaboration, Maturity Models, and Strategic Planning
In this lesson, you’ll learn about: Phase 8 — Collaborative Model & Continuous Security Improvement
1. Overview
Phase Eight of the Secure SDLC emphasizes the Collaborative Model, which focuses on addressing security challenges in distributed and enterprise environments. Collaboration strengthens security by bridging gaps between security, IT, and operations teams, breaking down silos, and integrating defense-in-depth strategies. Key success factors include strong stakeholder support for integration, budgeting, and cross-functional alignment.
2. Team Composition and Benefits
Security is an ecosystem involving:
  • Macro-level players: Governments, regulators, and standards organizations.
  • Micro-level players: End-users, corporations, and security professionals.
Benefits of strong team collaboration:
  • Builds confidence in security programs.
  • Encourages shared responsibility, reducing “it’s not my job” attitudes.
  • Leverages automation (e.g., SOAR) to improve efficiency.
  • Ensures security is user-friendly and effective.
  • Strengthens defense-in-depth strategies.
3. Feedback Model
Continuous improvement depends on effective feedback, which should be:
  • Timely: Delivered close to the event using real-time metrics.
  • Specific: Concrete, measurable, and aligned with security goals.
  • Action-Oriented: Includes clear instructions for remediation.
  • Constant: Repeated and recurring for ongoing improvement.
  • Collaborative: Employees contribute solutions and insights.
4. Secure Maturity Model (SMM)
The SMM measures an organization’s security capability and progress through five levels:
  1. Initial: Processes are ad hoc, informal, reactive, and inconsistent.
  2. Repeatable: Some processes are established and documented but lack discipline.
  3. Defined: Formalized, standardized processes create consistency.
  4. Managed: Security processes are measured, refined, and optimized for efficiency.
  5. Optimizing: Processes are automated, continuously analyzed, and fully integrated into organizational culture.
5. OWASP Software Assurance Maturity Model (SAMM)
SAMM is an open framework helping organizations:
  • Evaluate current software security practices.
  • Build balanced, iterative security programs.
  • Define and measure security-related activities across teams.
It provides a structured path to improve security capabilities in alignment with business objectives.
6. Secure Road Map
Developing a security road map ensures security is aligned with business goals and continuously improved. Key principles:
  1. Iterative: Security is a continuous program, regularly reassessing risks and strategies.
  2. Inclusive: Involves all stakeholders—IT, HR, legal, and business units—for alignment.
  3. Measure Success: Success is measured by milestones, deliverables, and clear security metrics to demonstrate value.
7. Summary
  • Phase Eight emphasizes collaboration and continuous improvement in enterprise security.
  • Security is integrated across all SDLC stages, from requirements to testing.
  • Effective collaboration, feedback, maturity assessment, and road mapping ensure resilient security practices that adapt to evolving threats.
  • This phase is critical because applications are increasingly targeted by cyberattacks, making integrated security essential for organizational defense.


You can listen and download our episodes for free on more than 10 different platforms:
https://linktr.ee/cybercode_academy
5 days ago
12 minutes

CyberCode Academy
Course 7 - Secure SDLC (Software Development Life Cycle) | Episode 7: Incident Management, Operational Defense, and Continuous Security
In this lesson, you’ll learn about: Secure Response — SDLC Phase 7
1. Overview
Secure Response is Phase Seven of the Secure Software Development Life Cycle (SDLC), focusing on managing security incidents, breaches, cyber threats, and vulnerabilities after software deployment. This phase represents blue team operations, encompassing monitoring, threat hunting, threat intelligence, and reactive defense measures. The goal is to protect, monitor, and react effectively in a production environment.
2. Incident Management and Response Process
A robust Incident Response Plan (IRP) is critical for minimizing damage, reducing costs, and maintaining organizational resilience. The response process is structured in six main steps:
  1. Prepare
    • Verify and isolate suspected intrusions.
    • Assign risk ratings.
    • Develop policies and procedures for incident handling.
  2. Explore
    • Perform detailed impact assessments.
    • Detect incidents by correlating alerts, often using Security Information and Event Management (SIEM) tools.
    • Gather digital evidence.
  3. Organize
    • Execute communication plans to update stakeholders.
    • Monitor security events using firewalls, intrusion prevention systems (IPS), and other defensive tools.
  4. Create/Generate (Remediate)
    • Apply software patches and fixes.
    • Update cloud-based services.
    • Implement secure configuration changes.
  5. Notify
    • Inform customers and stakeholders if a breach involves personal data.
    • Follow legal and regulatory notification requirements.
  6. Feedback
    • Capture lessons learned.
    • Maintain incident records.
    • Perform gap analysis and document improvements to prevent similar future incidents.
3. Security Operations and Automation
Operational defenses are typically managed by a Security Operations Center (SOC) or Critical Incident Response Center (CIRC). Core SOC functions include:
  • Identify incidents.
  • Analyze results (eliminate false positives).
  • Communicate findings to team members.
  • Report outcomes for documentation and compliance.
Security Orchestration, Automation, and Response (SOAR) enhances efficiency by:
  • Automating routine security operations.
  • Connecting multiple security tools for streamlined workflows.
  • Saving time and resources while enabling flexible, repeatable processes.
4. Investigation and Compliance
Forensic Analysis is used to investigate and document incidents, often producing evidence for legal proceedings:
  • Digital Forensics: Recovering evidence from computers.
  • Mobile Device Forensics: Examining phones, tablets, and other portable devices.
  • Software Forensics: Analyzing code to detect intellectual property theft.
  • Memory Forensics: Investigating RAM for artifacts not stored on disk.
Data Lifecycle Management ensures compliance:
  • Data Disposal: Securely destroy data to prevent unauthorized access. Methods include physical shredding, secure digital erasure, and crypto shredding.
  • Data Retention: Define how long data is kept to comply with regulations like GDPR, HIPAA, and SOX. Steps include creating retention teams, defining data types, and building formal policies with employee awareness.
5. Continuous Security Technologies
Runtime Application Self-Protection (RASP)
  • Integrates directly into running applications to detect and block attacks in real time.
  • Provides contextual awareness and live protection, reducing remediation...
5 days ago
12 minutes

CyberCode Academy
Course 7 - Secure SDLC (Software Development Life Cycle) | Episode 6: Secure Validation: A Comprehensive Look at Security Testing Methodolog
In this lesson, you’ll learn about: Secure Validation — SDLC Phase 6
1. Overview
Secure Validation tests software from a hacker’s perspective (ethical hacking) to identify vulnerabilities and weaknesses before attackers can exploit them. Unlike standard QA, which ensures functional correctness, secure validation focuses on negative scenarios and attack simulations, targeting vulnerabilities like SQL injection, cross-site scripting (XSS), and insecure configurations.
2. Key Testing Methodologies
Secure validation can be performed manually, automatically, or using a hybrid approach. The main methodologies are:
A. Static Application Security Testing (SAST)
  • Type: White-box testing
  • Purpose: Identify vulnerabilities in source code before runtime.
  • Method: Analyze internal code lines and application logic.
  • Tools: Can scan manually, via network import, or by connecting to code repositories like TFS, SVN, Git.
  • Focus: Detect issues such as hard-coded passwords, insecure function usage, and injection points.
B. Interactive Application Security Testing (IAST)
  • Type: Gray-box testing
  • Purpose: Continuous monitoring of running applications to detect vulnerabilities and API weaknesses.
  • Features:
    • Tracks data flow from untrusted sources (chain tracing) to identify injection flaws.
    • Runs throughout the development lifecycle.
    • Faster and more accurate than legacy static or dynamic tools.
C. Dynamic Application Security Testing (DAST)
  • Type: Black-box testing
  • Purpose: Simulate attacks on running software to observe responses.
  • Focus Areas:
    • SQL Injection
    • Cross-site scripting (XSS)
    • Misconfigured servers
  • Goal: Test behavior of deployed applications under attack conditions.
D. Fuzzing
  • Type: Black-box testing
  • Purpose: Identify bugs or vulnerabilities by injecting invalid, random, or malformed data.
  • Applications: Protocols, file formats, APIs, or applications.
  • Goal: Detect errors that could lead to denial of service or remote code execution.
  • Categories:
    • Application fuzzing
    • Protocol fuzzing
    • File format fuzzing
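To make the fuzzing idea concrete, the sketch below throws random, malformed byte strings at a hypothetical parser function and records the inputs that crash it. The target function is a stand-in for whatever protocol or file-format parser is under test; real fuzzers in the categories above are far more sophisticated.

```python
import random

def parse_record(data: bytes) -> str:
    """Hypothetical parser under test: expects 'NAME:VALUE' encoded as ASCII."""
    text = data.decode("ascii")          # fails on non-ASCII input
    name, value = text.split(":", 1)     # fails when ':' is missing
    return f"{name.strip()}={value.strip()}"

def fuzz(iterations: int = 1000, max_len: int = 64):
    """Feed random byte strings to the parser and collect crashing inputs."""
    crashes = []
    for _ in range(iterations):
        payload = bytes(random.randrange(256) for _ in range(random.randrange(max_len)))
        try:
            parse_record(payload)
        except Exception as exc:          # any unhandled error counts as a finding
            crashes.append((payload, repr(exc)))
    return crashes

if __name__ == "__main__":
    findings = fuzz()
    print(f"{len(findings)} crashing inputs found")
```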
E. Penetration Testing (Pentesting)
  • Purpose: Simulate real-world attacks to find vulnerabilities automated tools might miss.
  • Phases:
    1. Reconnaissance: Gather information about the target.
    2. Scanning: Identify open ports, services, and potential attack surfaces.
    3. Gaining Access: Exploit vulnerabilities to enter the system.
    4. Maintaining Access: Test persistence mechanisms.
    5. Covering Tracks: Evaluate if an attacker could erase traces.
F. Open Source Security Analysis (OSA/SCA)
  • Purpose: Identify vulnerabilities in open-source components used by the application.
  • Process:
    1. Create an inventory of open-source components.
    2. Check for known vulnerabilities (CVEs).
    3. Update components to patch vulnerabilities.
    4. Manage the security response to reported issues.
3. Manual vs. Automated Validation
  • Expertise: manual validation requires high domain expertise; automated validation is easier for non-experts.
  • Speed: manual is slow and time-consuming; automated is fast and scalable.
  • Coverage: manual can be very thorough; automated is limited by supported languages.
  • Accuracy: manual is accurate with fewer false positives; automated may generate false positives.
  • Best Use: manual for complex logic and new attacks; automated for routine checks and high-volume scans.

Recommendation: Use a hybrid approach, combining both manual expertise and automated tools for comprehensive...
5 days ago
11 minutes

CyberCode Academy
Course 7 - Secure SDLC (Software Development Life Cycle) | Episode 5: Hardening, DevSecOps Integration, Container Security and WAF
In this lesson, you’ll learn about: Secure Deploy — SDLC Phase 5
1. Overview
Secure Deployment focuses on hardening the environment to protect systems from attacks and data breaches. The objective is to develop, deploy, and release software with continuous security and automation.
2. Secure Deployment and Infrastructure Hardening
Key practices for secure deployment include:
  • Infrastructure Hardening: Follow CIS benchmarks to reduce risk across hardware and software.
  • Principle of Least Privilege: Grant only necessary access and revoke unnecessary permissions.
  • Access Control: Enforce strong authentication, restrict network access via firewalls, and monitor system access and network IP addresses.
  • Patching and Logging: Apply security patches based on CVE tracking, and implement auditing and logging policies.
  • Secure Connections: Enable TLS 1.2/1.3, use strong ciphers and secure cookies, and implement SSO or MFA as needed.
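As a small illustration of the "enable TLS 1.2/1.3" point, the sketch below builds a Python ssl context that refuses anything older than TLS 1.2 before wrapping a client connection. The host name is a placeholder; cipher and certificate policy would normally come from your hardening baseline (for example, CIS guidance).

```python
import socket
import ssl

def open_hardened_connection(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a client TLS connection that only accepts TLS 1.2 or newer."""
    context = ssl.create_default_context()            # verifies certificates by default
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0/1.1 and older
    sock = socket.create_connection((host, port), timeout=10)
    return context.wrap_socket(sock, server_hostname=host)

if __name__ == "__main__":
    with open_hardened_connection("example.com") as tls_sock:  # placeholder host
        print("Negotiated protocol:", tls_sock.version())
```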
3. Secure DevOps (DevSecOps)
DevSecOps integrates security throughout the DevOps pipeline. Key considerations:
  • Automation: Increases efficiency, reduces human error, and ensures consistent security checks.
  • Tool Integration: Combine SAST/IAST and WAFs with issue tracking (e.g., Jira) for continuous monitoring.
  • Compliance Automation: Identify applicable controls and automate compliance measurement within the SDLC.
  • Monitoring Metrics: Track deployment frequency, patch timelines, and the percentage of code tested automatically.
4. Secure Container Deployment
Containers introduce unique security risks. Recommended practices include:
  • Code Scanning and Testing: Use static analysis tools and check for vulnerable dependencies.
  • Admission Control: Block unsafe container images, e.g., those exposing passwords.
  • Privilege Restriction: Run containers with minimal privileges; avoid root or privileged flags.
  • System Calls and Benchmarks: Limit powerful calls like Ptrace and ensure hosts meet CIS benchmarks for Docker/Kubernetes.
5. Web Application Firewall (WAF)
A WAF protects web servers by inspecting, filtering, and blocking HTTP traffic at Layer 7.
  • Protection Capabilities: Mitigates threats like SQL injection, XSS, and file inclusion; supports OWASP Top 10 protection.
  • Security Models: Blacklist (negative), whitelist (positive), or hybrid.
  • Deployment Strategy:
    • Ensure WAF meets application security goals
    • Test alongside RASP or DAST tools
    • Integrate with SIEM and security workflows
    • Support compliance (PCI, HIPAA, GDPR)
6. Secure Review Practices
Five key pre-deployment review steps:
  1. Gap Analysis: Compare policies against NIST Cybersecurity Framework (Identify, Protect, Detect, Respond, Recover).
  2. Privacy Review: Assess potential privacy violations and mitigation strategies.
  3. Open-Source Licensing Review: Confirm license compliance and categorize risks (low, medium, high).
  4. Security Test Results Review: Address vulnerabilities from SAST, IAST, WAF prior to release.
  5. Certify the Release: Document and control software releases using a formal approval process.
7. Continuous Vulnerability Management (CVM)
CVM ensures ongoing risk reduction by identifying and remediating vulnerabilities continuously:
  • Scanning and Patching: Use SCAP-compliant tools like Nessus, Rapid7, or Qualys; apply updates via automated tools (e.g., SolarWinds Patch Manager, SCCM).
  • Vulnerability Tools: Schedule recurring network scans, define targets, and manage scan plugins to optimize performance.
8. Summary
  • Secure Deployment ensures that security is...
5 days ago
14 minutes

CyberCode Academy
Course 7 - Secure SDLC (Software Development Life Cycle) | Episode 4: Integrating Secure Coding, Code Review, and Application Security Testi
In this lesson, you’ll learn about: Secure Build — SDLC Phase 4
1. Overview
Secure Build is the practice of applying secure requirements and design principles during the development phase. Its goal is to ensure that applications used by the organization are secure from threats.
Key Participants:
  • Software developers
  • Desktop teams
  • Database teams
  • Infrastructure teams
2. Core Development Practices
Secure Coding Guidelines
  • Developers follow standardized rules to ensure threat-resistant code.
  • Security libraries in frameworks are used for critical tasks, such as:
    • Input validation
    • Authentication
    • Data access
Secure Code Review
  • Involves manual and automated review of source code to uncover security weaknesses.
  • Essential checks include:
    • Proper logging of security events
    • Authentication bypass prevention
    • Validation of user input
Formal Code Review Steps:
  1. Source Code Access: Obtain access to the codebase.
  2. Vulnerability Review: Identify weaknesses, categorized by risk impact (e.g., financial, reputation).
  3. Reporting: Remove false positives, document issues, and assess risk severity.
  4. Remediation: Track and fix vulnerabilities using bug tracking systems like Jira.
3. Automated Application Security Testing
Static Application Security Testing (SAST)
  • White-box testing that scans source code or binaries without execution.
  • Integrates with CI/CD pipelines or developer IDEs for immediate feedback.
  • Supports the “shift left” approach, finding vulnerabilities early in the SDLC.
  • Tools demonstrated: Coverity, LGTM
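As a toy illustration of the kind of pattern a SAST scan looks for, the sketch below walks a source tree and flags lines that appear to hard-code credentials. It is a deliberately naive regex check, not a substitute for the commercial tools named above; the patterns and directory name are assumptions for the example.

```python
import re
from pathlib import Path

# Naive patterns for hard-coded secrets (illustrative only; real SAST rules are far richer).
SECRET_PATTERNS = [
    re.compile(r"""(password|passwd|secret|api_key)\s*=\s*['"][^'"]+['"]""", re.IGNORECASE),
]

def scan_tree(root: str = "src"):
    """Yield (file, line number, line) for every suspicious-looking assignment."""
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            if any(pattern.search(line) for pattern in SECRET_PATTERNS):
                yield path, lineno, line.strip()

if __name__ == "__main__":
    for path, lineno, line in scan_tree():
        print(f"{path}:{lineno}: possible hard-coded secret -> {line}")
```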
Interactive Application Security Testing (IAST)
  • Gray-box testing performed while the application is running, often during functional tests.
  • Monitors application activity in real-time and pinpoints exact lines of code needing fixes.
  • Advantages:
    • Eliminates false positives
    • Fits Agile, DevOps, and CI/CD workflows
4. Third-Party Component Security and Code Quality
Open Source Analyzers (OSA) / Software Composition Analysis (SCA)
  • Ensure open-source libraries are current and free of known vulnerabilities.
  • Can integrate with SAST and IAST tools.
  • Resources: OWASP Dependency Check (free tool for detecting vulnerable components).
Code Quality Tools
  • Identify poor coding practices, dead code, and potential security issues.
  • Improving code quality correlates with enhanced overall security.
  • Tools mentioned: SpotBugs, SonarQube
5. Summary
  • Secure Build is Phase 4 of the Secure SDLC.
  • Integrates practices including:
    • Following secure coding standards
    • Performing code reviews
    • Applying automated testing (SAST & IAST)
    • Ensuring component security and code quality
  • Goal: Proactively address security during development, rather than remediating later.


You can listen and download our episodes for free on more than 10 different platforms:
https://linktr.ee/cybercode_academy
5 days ago
10 minutes

CyberCode Academy
Course 7 - Secure SDLC (Software Development Life Cycle) | Episode 3: Defining, Implementing 20 Controls, and Mitigating OWASP Top 10 in SDL
In this lesson, you’ll learn about: Secure Requirements — SDLC Phase 2
1. Overview of Secure Requirements
Definition and Purpose:
  • Secure requirements are functional and non-functional security features that a system must meet to protect its users, ensure trust, and maintain compliance.
  • They define security expectations during the planning and analysis stage, and are documented in product or business requirements.
Timing and Integration:
  • Security requirements should be defined early in planning and design.
  • Early integration reduces costly late-stage changes and ensures that security is embedded throughout the SDLC.
  • Requirements must be continuously updated to reflect functional changes, compliance needs, and evolving threat landscapes.
Collaboration:
  • Requires coordination between business developers, system architects, and security specialists.
  • Early risk analysis prevents security flaws from propagating through subsequent stages.
2. The 20 Secure Recommendations
The course details 20 key recommendations, each tied to mitigation of common application security risks. These cover input validation, authentication, cryptography, and more.
Input and Data Validation
  1. Input Validation: Server-side validation using whitelists to prevent injection attacks and XSS.
  2. Database Security Controls: Use parameterized queries and minimal privilege accounts to prevent SQL injection and XSS.
  3. File Upload Validation: Require authentication for uploads, validate file type and headers, and scan for malware to prevent injection or XML external entity attacks.
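To illustrate recommendation 2 (parameterized queries), here is a minimal sketch contrasting an injectable string-built query with a parameterized one, using Python's sqlite3 module. The table and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"   # attacker-controlled value

# Vulnerable: user input is concatenated directly into the SQL text.
injectable = f"SELECT role FROM users WHERE name = '{user_input}'"
print("unsafe:", conn.execute(injectable).fetchall())   # returns rows it should not

# Safe: the driver binds the value as data, never as SQL.
parameterized = "SELECT role FROM users WHERE name = ?"
print("safe:", conn.execute(parameterized, (user_input,)).fetchall())  # returns nothing
```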
Authentication and Session Management
  4–11. Authentication & Session Management:
  • Strong password policies
  • Secure failure handling
  • Single Sign-On (SSO) and Multi-Factor Authentication (MFA)
  • HTTP security headers
  • Proper session invalidation and reverification
    Goal: Prevent broken authentication and session hijacking.
Output Handling and Data Protection
  12. Output Encoding: Encode all responses to display untrusted input as data rather than code, mitigating XSS attacks.
  13. Data Protection: Validate user roles for CRUD operations to prevent insecure deserialization and unauthorized access.
Memory, Error, and System Management
  14. Secure Memory Management: Use safe functions and integrity checks (like digital signatures) to reduce buffer overflow and insecure deserialization risks.
  15. Error Handling and Logging: Avoid exposing sensitive information in logs (SSN, credit cards) and ensure auditing is in place to prevent security misconfiguration.
  16. System Configuration Hardening: Patch all software, lock down servers, and isolate development from production environments.
Transport and Access Control
  17. Transport Security: Use strong TLS (1.2/1.3), trusted CAs, and robust ciphers to protect data in transit.
  18. Access Control: Enforce Role-Based or Policy-Based Access Control, apply least privilege, and verify authorization on every request.
General Coding Practices and Cryptography
  19. Secure Coding Practices: Protect against CSRF, enforce safe URL redirects, and prevent privilege escalation or phishing attacks.
  20. Cryptography: Apply strong, standard-compliant encryption (symmetric/asymmetric) and avoid using vulnerable components.
3. Mitigation Strategy
  • Each of the 20 recommendations is directly linked to OWASP Top 10 vulnerabilities.
  • Following these recommendations ensures that security is embedded into the SDLC rather than added as an afterthought.
  • This phase emphasizes proactive...
5 days ago
14 minutes

CyberCode Academy
Course 7 - Secure SDLC (Software Development Life Cycle) | Episode 2: Malware, Social Engineering, GRC, and Secure Development Practices
In this lesson, you’ll learn about: Security Awareness Training — Secure SDLC Phase 1
1. Security Awareness Training (SAT) Fundamentals
  • SAT is the education process that teaches employees and users about cybersecurity, IT best practices, and regulatory compliance.
  • Human error is the biggest factor in breaches: 95% of breaches are caused by human error.
  • SAT reduces human mistakes, protects sensitive PII, prevents data breaches, and engages developers, network teams, and business users.
Topics covered in SAT:
  • Password policy and secure authentication
  • PII management
  • Phishing and phone scams
  • Physical security
  • BYOD (Bring Your Own Device) threats
  • Public Wi-Fi protection
Training delivery methods:
  • New employee onboarding
  • Online self-paced modules
  • Cloud-based training portals
  • Interactive video training
  • Training with certification exams
2. Malware & Social Engineering Threats
Malware Classifications
  • Virus: Infects other files by modifying legitimate hosts (the only malware that infects files).
  • Adware: Exposes users to unwanted or malicious advertising.
  • Rootkit: Grants stealthy, unauthorized access and hides its presence; may require OS reinstallation to remove.
  • Spyware: Logs keystrokes to steal passwords or intellectual property.
  • Ransomware: Encrypts data and demands cryptocurrency payments, usually spread via Trojans.
  • Trojans: Malicious programs disguised as legitimate files or software.
  • RAT (Remote Access Trojan): Allows long-term remote control of systems without the user’s knowledge.
  • Worms: Self-replicating malware that spreads without user action.
  • Keyloggers: Capture keystrokes to steal credentials or financial information.
Social Engineering Attacks
  • Social engineering = manipulating people to obtain confidential information.
    Attackers target trust because it is easier to exploit than software.
5 Common Types:
  1. Phishing: Most common attack; uses fraudulent links, urgency, and fake messages.
    • 93% of successful breaches start with phishing.
  2. Baiting: Offers something attractive (free downloads/USBs) to trick users into installing malware or revealing credentials.
  3. Pretexting: Creates a false scenario to build trust and steal information.
  4. Distrust Attacks: Creates conflict or threatens exposure to extort money or access.
  5. Tailgating/Piggybacking: Attacker physically follows an authorized employee into a restricted area.
Defense strategies include:
  • Understanding the difference between phishing and spear phishing.
  • Recognizing that 53% of all attacks are phishing-based.
  • Using 10 email verification steps, including:
    • Check sender display name
    • Look for spelling errors
    • Be skeptical of urgency/threats
    • Inspect URLs before clicking
3. Governance, Risk, and Compliance (GRC)
GRC Components:
  • Governance: Board-level processes to lead the organization and achieve business goals.
  • Risk Management: Predicting, assessing, and managing uncertainty and security risks.
  • Compliance: Ensuring adherence to laws, regulations, and internal policies.
Key compliance frameworks:
  • HIPAA — Healthcare data protection
  • SOX — Corporate financial reporting integrity
  • FISMA — Federal information system standards
  • PCI-DSS — Secure cardholder data; employees must acknowledge...
5 days ago
11 minutes

CyberCode Academy
Course 7 - Secure SDLC (Software Development Life Cycle) | Episode 1: Approaches, Eight Phases, and Risk Management
In this lesson, you’ll learn about: Secure Software Development Life Cycle (Secure SDLC) — Full Overview
  • Definition of Secure SDLC
    • A framework that integrates security into every phase of system development:
      Planning → Design → Build → Validation → Deployment → Maintenance
  • Why Secure SDLC Matters
    • Rising security concerns: DDoS, account takeover, OWASP Top 10
    • Managing business risks such as breach penalties
    • Achieving GRC (Governance, Risk Management, Compliance) with PCI DSS, HIPAA, GDPR/CCPA
    • Enabling the Shift Left strategy to catch gaps early and reduce cost, time, and effort later
Approaches to Secure SDLC
  • Proactive Approach (for new systems)
    • Preventing and protecting against known threats in advance
    • Securing code and configurations early in the development process
  • Reactive Approach (for existing systems)
    • Detecting and stopping threats before exploitation or breach
    • Acting as a corrective control
The Eight Secure SDLC Phases
  1. Awareness Training
    • Regular security training, phishing exercises, and compliance awareness
    • Note: 93% of successful breaches begin with phishing
  2. Secure Requirements
    • Planning phase to define and continuously update security requirements based on functionality and GRC expectations
  3. Secure Design
    • Architectural phase to establish secure requirements
    • Selecting appropriate secure design principles and patterns
  4. Secure Build
    • Implementation phase focused on building secure systems
    • Using standardized, repeatable components
    • Applying Static Application Security Testing (SAST)
  5. Secure Deployment
    • Ensuring security and integrity during the deployment process
    • Emphasizing automation and protecting sensitive data (passwords, tokens)
  6. Secure Validation
    • Validating artifacts through security testing such as:
      Dynamic Application Security Testing (DAST), fuzzing, penetration testing
  7. Secure Response
    • Operations and maintenance
    • Executing the incident response plan
    • Active monitoring and responding to threats to maintain Confidentiality, Integrity, and Availability (CIA)
  8. Collaborative Model
    • An approach used to solve security issues in enterprise or distributed environments
    • Involves collaboration among development, security, QA, and operations
Secure SDLC Snapshot & Performance View
  • Bottom → Top:
    • Shows investment and performance (proactive approach)
  • Top → Bottom:
    • Shows remediation cost (reactive approach)
Risk Management & Threat Analysis Impact Study
  • Threats:
    • Possible dangers (intentional or accidental) like hacking, natural disasters, phishing, password theft, shoulder surfing, and email malware
  • Security Incidents:
    • Events where information assets are accessed, modified, or lost without authorization
  • Vulnerabilities:
    • Weaknesses that threats may exploit
  • Impact:
    • Outcome of threats and incidents
Risk Analysis & Scoring (NIST Representation)
  • Risk = Likelihood × Impact
  • Likelihood depends on:
    • Threats, incident history, ease of discovery, and ease of exploit
  • Impact...
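A minimal sketch of the Risk = Likelihood × Impact idea, using a simple 1–5 qualitative scale. The scale values and banding thresholds are illustrative assumptions, not part of the NIST guidance itself.

```python
def risk_score(likelihood: int, impact: int) -> tuple[int, str]:
    """Combine 1-5 likelihood and impact ratings into a score and a qualitative band."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must each be between 1 and 5")
    score = likelihood * impact              # Risk = Likelihood x Impact
    if score >= 15:
        band = "High"
    elif score >= 8:
        band = "Medium"
    else:
        band = "Low"
    return score, band

if __name__ == "__main__":
    # Example: phishing-driven credential theft rated likely (4) with serious impact (4).
    print(risk_score(4, 4))   # -> (16, 'High')
```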
5 days ago
12 minutes

CyberCode Academy
Course 6 - Network Traffic Analysis for Incident Response | Episode 7: Network Data Analysis Toolkit: Tools, Techniques and Threat Signature
In this lesson, you’ll learn about: The complete toolkit and techniques for analyzing network traffic using Connection Analysis, Statistical Analysis, and Event-Based (signature-focused) Analysis.
1. Data Analysis Toolkit
General-Purpose Tools
These are foundational command-line utilities used to search, filter, and reshape data:
  • grep → pattern searching
  • awk → field extraction and manipulation
  • cut → selecting specific columns
    Used together, they form powerful pipelines for rapid, custom analysis.
Scripting Languages
Python
  • Most important language for packet analysis.
  • Scapy allows:
    • Parsing PCAPs
    • Inspecting packet structure
    • Accessing fields (IP, ports)
    • Filtering traffic (e.g., HTTP GET requests)
    • Deobfuscating malware traffic
      • Example: Extracting useful strings from compressed Gh0st RAT C2 payloads.
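As a small example of the Scapy workflow described above, the sketch below loads a capture and prints the HTTP GET requests it finds by looking at raw TCP payloads. The capture file name is a placeholder, and the sketch assumes Scapy is installed (pip install scapy).

```python
from scapy.all import rdpcap, IP, TCP, Raw  # pip install scapy

def http_get_requests(pcap_path: str):
    """Yield (source IP, destination IP, request line) for packets carrying an HTTP GET."""
    for pkt in rdpcap(pcap_path):
        if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw):
            payload = bytes(pkt[Raw].load)
            if payload.startswith(b"GET "):
                request_line = payload.split(b"\r\n", 1)[0].decode(errors="replace")
                yield pkt[IP].src, pkt[IP].dst, request_line

if __name__ == "__main__":
    for src, dst, line in http_get_requests("capture.pcap"):  # placeholder capture file
        print(f"{src} -> {dst}: {line}")
```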
R
  • Useful for statistical modeling and clustering of network data.
Specialized Tools
  • Netstat → enumerates active connections
  • Silk → large-scale flow analysis (CERT tool)
  • Yara → rule-based threat matching (binary/text patterns)
  • Snort → signature-based intrusion detection
2. The Three Core Data Analysis Techniques
A. Connection Analysis
Purpose: High-level visibility into which systems are connecting to which. Ideal for:
  • Detecting unauthorized servers or suspicious programs
  • Spotting lateral movement (e.g., odd SSH usage)
  • Identifying database misuse
  • Ensuring compliance across security zones
Primary Tool: Netstat
  • Shows all active connections + states
    (LISTENING, ESTABLISHED, TIME_WAIT, etc.)
Example Uses:
  • Spotting malware opening a hidden port
  • Identifying unauthorized remote access
  • Finding systems connecting to suspicious IPs
B. Statistical Analysis
A macro-level technique designed to spot deviations from normal behavior. Techniques:
1. Clustering
Group similar traffic together to identify families or variants.
  • Demonstrated by clustering Gh0st RAT variants through similarities in their C2 protocol.
2. Stack Counting
Sort traffic by count of activity on:
  • Destination ports
  • Host connections
  • Packet types
Used to find anomalies:
  • Single visits to rare ports (2266, 3333)
  • Unexpected FTP traffic (port 21)
3. Wireshark Statistics
Using built-in metrics:
  • Packet lengths (large packets → possible exfiltration or malware downloads)
  • Endpoints
  • Protocol hierarchy
Specialized Tool: Silk
  • Designed for massive enterprise networks
  • Supports both command line & Python (Pysilk)
  • Ideal for flow-level analysis, anomaly detection, and trend discovery.
C. Event-Based Analysis (Signature-Focused)
A micro-level technique used to identify known threats via rules and signatures.
1. YARA Signatures
  • Rules match known binary or text patterns.
  • Example uses:
    • Detecting Gh0st RAT by identifying strings like "LURK0" or "v2010"
    • Multi-string matching to detect multi-stage malware
    • Matching malicious hostnames or indicators
Used for:
  • Malware classification
  • Reverse-engineering support
  • Deep content inspection
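The sketch below shows what this kind of rule-based matching looks like using the yara-python bindings. The rule is a hypothetical two-string example in the spirit of the indicators mentioned above, not an actual detection signature, and it assumes pip install yara-python.

```python
import yara  # pip install yara-python

# Hypothetical rule: flag payloads containing both indicator strings.
RULE_SOURCE = r"""
rule Suspicious_C2_Indicators
{
    strings:
        $magic   = "LURK0"
        $version = "v2010"
    condition:
        all of them
}
"""

rules = yara.compile(source=RULE_SOURCE)

def scan_payload(payload: bytes):
    """Return the names of rules that match the given bytes."""
    return [match.rule for match in rules.match(data=payload)]

if __name__ == "__main__":
    sample = b"...LURK0...some traffic...v2010..."
    print(scan_payload(sample))   # -> ['Suspicious_C2_Indicators']
```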
2. Snort Rules
Snort provides concise detection logic for network traffic. Rule Structure Includes:
  • Action (alert, log)
  • Protocol...
5 days ago
12 minutes

CyberCode Academy
Course 6 - Network Traffic Analysis for Incident Response | Episode 6: Investigating RATs, Worms, Fileless, and Multi-Stage Malware Variants
In this lesson, you’ll learn about: Advanced Malware Traffic Analysis — how to detect, decode, and investigate RATs, fileless exploits, worms, and multi-stage infections using real network captures.
1. Remote Access Trojans (RATs)
WSH RAT
  • Uses plaintext beaconing for C2 → very easy to identify.
  • Key data exfiltrated in HTTP requests:
    • Unique device ID
    • Computer name
    • Username (“admin”)
    • RAT version (often hidden in the User-Agent field)
NJRAT
  • Shows extensive data exfiltration:
    • Windows XP build info
    • CPU type (Intel Core i7)
    • Username (“Laura”)
  • Contains custom data blocks:
    • Likely a proprietary C2 format
    • Example: 4-byte value representing payload length (e.g., 16 bytes)
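To show how an analyst might decode a simple length-prefixed C2 record like the one described above, here is a short sketch using Python's struct module. The 4-byte big-endian length prefix and the sample bytes are assumptions for illustration; a real protocol would need to be reverse-engineered from the capture.

```python
import struct

def parse_length_prefixed(stream: bytes):
    """Split a byte stream into records, each preceded by a 4-byte big-endian length."""
    records, offset = [], 0
    while offset + 4 <= len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)   # assumed big-endian prefix
        offset += 4
        records.append(stream[offset:offset + length])
        offset += length
    return records

if __name__ == "__main__":
    # One 16-byte record followed by a 5-byte record (fabricated sample data).
    sample = struct.pack(">I", 16) + b"A" * 16 + struct.pack(">I", 5) + b"hello"
    for record in parse_length_prefixed(sample):
        print(len(record), record)
```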
2. Fileless Malware (Angler Exploit Kit)
Detection
  • Traffic contains obfuscated script + random literature quotes
    → used to evade heuristic scanners.
  • Streams show signs of XOR encoding.
Extraction & Deobfuscation
Using Network Miner:
  • Extracted files include:
    • A Shockwave Flash file (.swf)
    • Three large application/octet-stream files
  • XOR decoding reveals:
    • Shellcode +
    • Windows executable (DLL)
Purpose
  • Shellcode injects the malicious DLL into a running process (e.g., Internet Explorer).
  • Because nothing is written to disk → bypasses traditional antivirus, making network analysis essential.
3. Network Worm Behavior
WannaCry (SMB Worm)
  • Exploits SMB on port 445 using Eternal-family vulnerabilities.
  • Behavior includes:
    • High-volume IP scanning for vulnerable systems
    • SMB exploitation setup (NOP sled → shellcode → payload transfer)
MyDoom (SMTP Mailer Worm)
  • Attempts spreading via SMTP (port 25).
  • Tries to send spoofed “delivery failed” emails with malicious attachments:
    • e.g., mail.zip → actually .exe hidden using spaces + triple dots.
  • In the demonstration, all spreading attempts were blocked, showing modern protections in action.
4. Multi-Stage Malware Infection Tracking
Stage 1 — Initial Compromise
  • Suspicious HTTP request containing Base64 ID.
    • Decodes to an email address (e.g., Reginald/Reggie Cage) → privacy red flag.
  • Download of a malicious Microsoft Word file.
Stage 2 — Downloader Activity
  • Traffic to known malware-downloader domains (e.g., Pony botnet infrastructure).
  • Malware sends detailed victim metadata:
    • GUID
    • OS build number
    • IP address
    • Hardware info
Stage 3 — Command & Control
  • Multiple C2 messages observed:
    • Some Base64-encoded
    • Many encrypted → indicating later-stage payloads
  • Strong evidence that:
    • Word file → downloader (Pony) → secondary malware → possible tertiary stage
5. Key Techniques Demonstrated
  • Identifying IOCs in network captures
  • Detecting plaintext, encoded, and encrypted C2 protocols
  • Carving files and reconstructing injected payloads
  • Analyzing worm scanning patterns
  • Tracking infection chains across multiple malicious components


You can listen and download our episodes for free on more than 10 different platforms:
5 days ago
10 minutes

CyberCode Academy
Course 6 - Network Traffic Analysis for Incident Response | Episode 5: Scanning, Covert Data Exfiltration, DDoS Attacks and IoT Exploitation
In this lesson, you’ll learn about: Network Threat Analysis — understanding how common attacks and advanced malware appear in real traffic captures, and how to extract intelligence from them.
Part 1 — Analysis of Common Network Threats
1. Network Scanning Techniques
Attackers scan networks to discover targets, services, and vulnerabilities. Demonstrations cover several scanning styles:
SYN / Half-Open Scan
  • Sends SYN packets without completing the handshake.
  • Target responses reveal open vs. closed ports.
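For reference, this is roughly what a single SYN probe looks like when built by hand with Scapy: send a SYN, then inspect the flags of the reply. The target address and port are placeholders, and crafting raw packets generally requires root privileges. Point this only at systems you are authorized to test.

```python
from scapy.all import IP, TCP, sr1  # pip install scapy; run with root privileges

def syn_probe(target: str, port: int) -> str:
    """Send one SYN packet and classify the port from the response flags."""
    reply = sr1(IP(dst=target) / TCP(dport=port, flags="S"), timeout=2, verbose=0)
    if reply is None:
        return "filtered or no response"
    if reply.haslayer(TCP) and reply[TCP].flags & 0x12 == 0x12:   # SYN+ACK set
        return "open"
    return "closed"

if __name__ == "__main__":
    print(syn_probe("192.0.2.10", 80))   # placeholder lab target (TEST-NET address)
```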
Full Connect Scan
  • Completes the full TCP three-way handshake.
  • More noticeable but highly accurate.
Xmas Tree Scan
  • Uses abnormal TCP flags: FIN + PUSH + URG.
  • Leveraged to probe how systems respond to malformed packets.
Zombie / Idle Scan
  • Uses an unwitting third-party host (“zombie”) to hide attacker identity.
  • Tracks incremental IP ID numbers to infer open ports.
Network Worm Scanning (e.g., WannaCry)
  • Worms scan many IPs for a single vulnerable port, such as SMB 445.
  • High-volume, repetitive traffic is a key signature.
2. Data Exfiltration (Covert Channels)
Focus: understanding how attackers hide stolen data inside legitimate-appearing traffic.
Covert SMB Channel
  • Data leaked one byte at a time inside SMB packets.
  • Requires:
    • Reviewing thousands of similar packets,
    • Extracting embedded data,
    • Base64 decoding,
    • Reversing the result,
    • Revealing hidden Morse code.
ICMP Abuse
  • Attackers embed data into ICMP type fields, reconstructing files (e.g., a GIF).
  • Difficult to detect because ICMP is normally used for diagnostics, not data transfer.
3. Distributed Denial of Service (DDoS) Attacks
Explains why DDoS attacks remain common: cheap cloud resources, insecure IoT devices, and accessible botnets.
Volumetric SYN Flood
  • Floods a port (like HTTP 80) with incomplete handshakes.
  • Exhausts server connection capacity.
HTTP Flood
  • Sends massive amounts of GET/POST requests.
  • Harder to distinguish from normal traffic.
Amplification / Reflection Attacks
  • Small spoofed request → massive response to victim.
  • Examples:
    • chargen protocol: 1-byte request → 748-byte response.
    • Memcached: tiny request → multi-megabyte responses from cached data.
4. IoT Device Exploitation
The demonstration focuses on how attackers compromise weak devices such as DVRs.
  • Many IoT devices use default credentials and insecure services like Telnet.
  • Attack flow typically involves:
    1. Logging in via Telnet.
    2. Attempting to download malware (e.g., Mirai ELF binary).
    3. When automated delivery (TFTP) fails → manually reconstructing binaries using echo.
    4. Device joins a botnet and starts scanning other victims.
Part 2 — In-Depth Malware Case Studies
1. Remote Access Trojans (RATs)
  • Traffic begins with system information reporting from the infected host.
  • Followed by persistent command-and-control (C2) communication.
2. Fileless Malware
  • Malware runs directly in memory, leaving minimal filesystem artifacts.
  • Often, network traffic is the only complete copy of the payload available.
3. Network Worms
  • Automate scanning and propagation.
  • Look for specific open ports, then exploit and install themselves.
4. Multi-Stage Malware
  • Downloader retrieves multiple malware families.
  • Identifying...
5 days ago
11 minutes

CyberCode Academy
Course 6 - Network Traffic Analysis for Incident Response | Episode 4: Mapping, Decoding, and Decrypting Network Traffic Intelligence
In this lesson, you’ll learn about: Intelligence Collection from Network Traffic Captures — focusing on anomalies, attacker behavior, and extracting actionable intelligence.
1. Network Mapping & Visualization
  • Humans struggle with long lists → visualizing traffic helps you get a feel for the environment.
  • Tools like pcapviz generate maps at different OSI layers:
Layer 3 (IP Addresses)
  • Shows which machines talk to each other.
  • Helps detect unusual communication paths.
Layer 4 (TCP/UDP Ports)
  • Shows communication between applications.
  • Unusual ports (e.g., 900) may indicate custom or C2 protocols.
2. Content Deobfuscation
Attackers often hide traffic with simple encodings (not strong encryption). Goal → recover the original content, often a payload or second-stage executable.
XOR Encoding
  • Common in malware traffic.
  • Repeated patterns in streams (especially when encoding zeros) reveal the key.
  • Example: fixed-length 4-byte key like MLVR.
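Here is a minimal sketch of that kind of XOR recovery: applying a repeating 4-byte key to a captured byte stream. The key "MLVR" is the example mentioned above and the sample data is fabricated; in practice the key is usually recovered first by spotting it repeated over runs of zero bytes.

```python
from itertools import cycle

def xor_decode(data: bytes, key: bytes) -> bytes:
    """XOR data with a repeating key (the same operation encodes and decodes)."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

if __name__ == "__main__":
    key = b"MLVR"                                                # 4-byte key from the example above
    ciphertext = xor_decode(b"This program cannot be run in DOS mode", key)  # fabricated sample
    print(xor_decode(ciphertext, key).decode())                  # round-trips back to the original text
```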
Base64 (B64)
  • Seen in C2 frameworks like Onion Duke.
  • Recognizable by:
    • A–Z, a–z, 0–9, “+”, “/”
    • Ends with “=” padding
  • Easy to decode using built-in libraries or online tools.
3. Credential Capture from Insecure Protocols
Focus: credentials leaking in plaintext protocols.
Telnet & IMAP
  • Send usernames/passwords in clear text.
  • Easy to extract directly from the TCP stream.
SMTP
  • Encodes credentials in Base64 → trivial to decode.
  • Python or online decoders reveal username + password.
  • Reinforces the need for TLS encryption.
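For completeness, this is all it takes to recover such SMTP credentials once the Base64 strings are pulled from the stream; the encoded values below are fabricated examples, not real captures.

```python
import base64

# Fabricated AUTH LOGIN values as they might appear in an SMTP stream.
encoded_username = base64.b64encode(b"alice@example.com").decode()
encoded_password = base64.b64encode(b"S3cretPass!").decode()

for label, token in (("username", encoded_username), ("password", encoded_password)):
    print(label, "=", base64.b64decode(token).decode())
```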
4. SSL/TLS Decryption in Wireshark
Encrypted traffic looks like random “gibberish” unless you have the right keys.
Using RSA Private Keys
  • If the RSA private key is available, Wireshark can decrypt sessions directly.
Ephemeral Keys (ECDHE)
  • Cannot be decrypted using the server’s private key.
  • Must capture the session keys using a pre-master secret log file:
    • Often done by setting the SSLKEYLOGFILE environment variable so the browser writes session keys to a log file.
  • Without that log, the sessions are not recoverable.
5. Web Proxy Interception (Deep Packet Inspection)
Enterprise method for inspecting encrypted HTTPS traffic.
How it works
  • A corporate proxy (e.g., Burp Suite) intercepts connections:
    • Breaks the client → server TLS session.
    • Decrypts → inspects → re-encrypts all traffic.
Requirements
  • Clients must install the proxy’s self-signed root certificate.
  • Without that trusted root, certificate warnings and protections like HSTS would block the intercepted connections.
Risks
  • Proxy becomes a single high-value target for attackers.
  • Raises privacy concerns, especially when employees do personal browsing (banking, etc.).


You can listen and download our episodes for free on more than 10 different platforms:
https://linktr.ee/cybercode_academy
5 days ago
11 minutes

CyberCode Academy
Course 6 - Network Traffic Analysis for Incident Response | Episode 3: Wireshark Alternatives: Network Miner, Terminal Shark, and CloudShark
In this lesson, you’ll learn about:
  • Three powerful alternatives to Wireshark that expand your capabilities in network traffic analysis.
  • How to use Network Miner for passive intelligence, T-shark for automation, and CloudShark for collaborative, web-based analysis.
  • When and why each tool is more effective than Wireshark in specific scenarios.
Network Miner — Passive Data Collection & File Extraction
  • Purpose: A passive network forensics tool excellent for extracting intelligence without actively interfering with traffic.
Key Capabilities
  • Host Intelligence (Auto-Recon):
    • Automatically breaks traffic down by host.
    • Extracts IP/MAC, hostnames, OS fingerprints (e.g., Red Hat Linux), NIC vendor, open TCP ports, and even web server banners (e.g., Apache 2.0.40).
    • Provides a detailed, Nmap-like overview without performing any active scans.
  • Data Extraction (File Carving):
    • Automatically pulls files transmitted during the capture (images, documents, etc.).
    • Makes recovery of transferred files extremely easy.
  • Credential Extraction:
    • Effective at pulling credentials from clear-text protocols like:
      • SMTP (usernames and passwords when TLS is not used)
      • HTTP cookies (considered credentials because they allow authentication)
  • Traffic Review Tools:
    • Lists DNS queries for browsing activity.
    • Breaks HTTP and SMTP header fields into searchable tables for instant lookup (e.g., search by user agent).
Terminal Shark (T-shark) — Command-Line Automation
  • Purpose: A command-line version of Wireshark designed for automation, scripting, and large-scale analysis.
Key Capabilities
  • Same Power as Wireshark, but CLI-Based:
    • Uses the same filtering language as Wireshark (e.g., http.request, tcp.port == 80).
    • Ideal for environments without a GUI or for remote analysis over SSH.
  • Automation & Integration:
    • Perfect for batch processing, cron jobs, or running inside scripts.
    • Output can be piped into other tools for threat intel or blacklist checks.
  • Custom Output:
    • Extract specific fields only (e.g., HTTP hostnames, source IPs).
    • Reduces noise and makes threat hunting more efficient.
  • Simple Threat Detection:
    • Analysts can filter important fields and check them against malicious blocklists.
    • Enables lightweight, fast, automated detection workflows.
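A hedged automation sketch tying these ideas together: call tshark from Python, extract source IPs and HTTP hostnames, and compare them to a local blocklist (the capture and blocklist file names are illustrative):

  import subprocess

  cmd = [
      "tshark", "-r", "capture.pcap",     # illustrative capture name
      "-Y", "http.request",               # same display-filter syntax as Wireshark
      "-T", "fields",
      "-e", "ip.src", "-e", "http.host",
  ]
  output = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

  with open("blocklist.txt") as f:        # illustrative blocklist file
      blocklist = {line.strip() for line in f if line.strip()}

  for line in output.splitlines():
      src, _, host = line.partition("\t") # tshark separates fields with tabs by default
      if host in blocklist:
          print(f"ALERT: {src} contacted blocklisted host {host}")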
CloudShark — Web-Based Visualization & Collaboration
  • Purpose: A browser-based network analysis platform similar to Wireshark, designed for team collaboration.
Key Capabilities
  • Collaborative Interface:
    • Apply filters just like in Wireshark.
    • Add comments/annotations directly to packets for team-based investigations.
  • Advanced Visualization Tools:
    • Traffic-over-time graph: Helps analysts zoom into sudden spikes or suspicious bursts.
    • Ladder diagrams: Show packet flow between hosts — extremely useful for understanding sequences like handshakes or attack chains.
    • Bytes-over-time visualization: Helps detect anomalies such as large outbound data spikes (e.g., from SQL injection exfiltration).
  • Interoperability:
    • Upload PCAPs to CloudShark for analysis.
    • Download them again (with or without comments) to continue work in Wireshark.
    • Works as a complementary tool rather than a replacement.
Key...
5 days ago
10 minutes

CyberCode Academy
Course 6 - Network Traffic Analysis for Incident Response | Episode 2: Wireshark Features and Comprehensive Protocol Dissection
In this lesson, you’ll learn about:
  • Transitioning from theoretical networking concepts to hands-on traffic analysis.
  • Using Wireshark to capture, dissect, filter, and understand live network traffic.
  • Identifying how common protocols appear in real packet captures, including their structure and behavior.
  • Recognizing how different protocols handle communication, reliability, and security.
Wireshark: Introduction & Core Features
  • What Wireshark Is:
    • A free, GUI-based network traffic analyzer (formerly Ethereal).
    • Supports live packet capture and loading .cap / .pcap files.
  • Key Features Covered:
    • Capture Management:
      • Start live captures with options like promiscuous mode.
      • Load and inspect previously saved capture files.
    • File Handling & Exporting:
      • Merge capture files (if timestamps align).
      • Import packets from hex dumps.
      • Export selected packets or full dissections in text, CSV, JSON, XML.
      • Export TLS session keys for decrypting certain encrypted traffic.
    • UI Navigation:
      • Color-coded packet list (e.g., green = TCP/HTTP, red = errors/retransmissions).
      • Three-pane layout: Packet list → Protocol dissection → Raw hex/ASCII.
    • Analysis Tools:
      • Display filters for precise inspection (e.g., tcp.port == 80).
      • Follow TCP/HTTP Stream to trace entire conversations.
      • Decode As to reinterpret traffic running on uncommon ports.
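The same display-filter workflow can also be scripted. A small sketch using the third-party pyshark package, which drives tshark behind the scenes (pyshark and the capture file name are assumptions, not part of the episode):

  import pyshark

  capture = pyshark.FileCapture("web_traffic.pcap", display_filter="http.request")

  for pkt in capture:
      # The same dissected fields shown in the GUI are exposed as attributes.
      print(pkt.ip.src, pkt.http.host, pkt.http.request_uri)

  capture.close()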
Protocol Dissection: What You’ll See in Wireshark
1. IP (IPv4/IPv6)
  • View IP headers, including TTL (Time To Live) as hop count.
  • Look at IPv6 structures and tunneling protocols such as:
    • 6to4
    • 6in4
  • Learn how IPv6 packets travel across IPv4 networks.
2. TCP (Transmission Control Protocol)
  • Understand reliability and session management.
  • Observe:
    • The 3-way handshake: SYN → SYN-ACK → ACK
    • Connection teardown: FIN/FIN-ACK or RST
    • Flags, sequence numbers, acknowledgments, and retransmissions.
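A short scapy sketch (the capture name is illustrative) that flags the SYN and SYN-ACK legs of the handshake by checking the raw TCP flag bits:

  from scapy.all import rdpcap, IP, TCP

  SYN, ACK = 0x02, 0x10                       # raw TCP flag bits

  for pkt in rdpcap("session.pcap"):          # illustrative capture name
      if not (pkt.haslayer(IP) and pkt.haslayer(TCP)):
          continue
      flags = pkt[TCP].flags.value            # integer form of the flag field
      if flags & SYN and not flags & ACK:
          print("SYN     ", pkt[IP].src, "->", pkt[IP].dst)
      elif flags & SYN and flags & ACK:
          print("SYN-ACK ", pkt[IP].src, "->", pkt[IP].dst)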
3. UDP (User Datagram Protocol)
  • Minimal, fast, connectionless protocol.
  • No handshake, no retransmission.
  • Used in scenarios requiring speed over reliability.
4. ICMP (Internet Control Message Protocol)
  • Used for error reporting and diagnostic tools like:
    • Ping (Echo Request/Reply – Type 8/Type 0)
    • Traceroute
  • Note: While essential, ICMP must be carefully controlled on networks.
5. ARP (Address Resolution Protocol)
  • Maps IP → MAC inside local networks.
  • Stateless nature allows ARP poisoning, a common man-in-the-middle technique.
Higher-Level / Application Protocols in Wireshark
1. DNS (Domain Name System)
  • Seen mostly over UDP.
  • Analyze queries, recursion, multiple responses (A, MX, etc.).
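A quick way to review DNS activity outside the GUI is to pull every queried name from the capture. A scapy sketch (the capture name is illustrative):

  from scapy.all import rdpcap, DNSQR

  names = set()
  for pkt in rdpcap("dns_traffic.pcap"):      # illustrative capture name
      if pkt.haslayer(DNSQR):                 # DNS question record
          names.add(pkt[DNSQR].qname.decode(errors="replace"))

  for name in sorted(names):
      print(name)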
2. HTTP (Hypertext Transfer Protocol)
  • Review request lines, headers (User-Agent, Host, URI) and response codes.
  • HTTP is common in analysis due to high traffic volume.
  • Also widely monitored because attackers often misuse it for hidden communications.
3. FTP (File Transfer Protocol)
  • A clear-text protocol:
    • Credentials and transfers visible in packet captures.
  • Highlights the need for secure alternatives (FTPS / SFTP).
4. IRC (Internet Relay Chat)
  • Simple text-based...
5 days ago
12 minutes

CyberCode Academy
Course 6 - Network Traffic Analysis for Incident Response | Episode 1: Fundamentals of Networking: The OSI Model and Essential Protocols
In this lesson, you’ll learn about:
  • The core networking concepts required before beginning any network traffic analysis.
  • The relationship between the OSI model, low-level protocols, and application-level protocols, and how they shape the behaviour of traffic you’ll examine in a tool like Wireshark.
  • How to recognize common protocol behaviours at a high level so you can later understand patterns, anomalies, and security-related findings during analysis.
1. The OSI Model and the Network Stack (high-level foundation)
  • The OSI model divides networking functionality into structured layers.
  • Hardware-oriented layers:
    • Physical → bits on the wire
    • Data Link → frames within a local network
  • Software-oriented layers relevant for analysis:
    • Network (Layer 3) → packets, routing
    • Transport (Layer 4) → reliability, ports
    • Session / Presentation / Application (Layers 5–7) → how applications encode, manage, and interpret network data
  • Students should understand the distinctions between bits → frames → packets, because these appear in captures.
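A tiny scapy sketch makes that encapsulation concrete by stacking a frame, packet, and segment (scapy, the address, and the port are illustrative choices, not part of the lesson):

  from scapy.all import Ether, IP, TCP

  # Frame (Layer 2) / packet (Layer 3) / segment (Layer 4) stacked together.
  frame = Ether() / IP(dst="192.0.2.10") / TCP(dport=80, flags="S")

  frame.show()             # prints each layer's fields, much like a Wireshark dissection
  print(frame.summary())   # e.g. "Ether / IP / TCP ... S"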
2. Base Network Protocols (the building blocks)
  • IP (Internet Protocol – Layer 3):
    • Core packet-forwarding protocol for IPv4/IPv6.
    • Manages routing across networks.
  • TCP (Transmission Control Protocol):
    • Ensures reliable delivery: sequencing, acknowledgments, error checking, retransmission.
    • Manages connections using ports and a handshake mechanism.
  • UDP (User Datagram Protocol):
    • Connectionless and faster but offers no delivery guarantees.
    • Used when speed and low latency matter more than reliability.
  • ICMP (Internet Control Message Protocol):
    • Sends diagnostic and control messages.
    • Used by tools like ping and traceroute.
3. Common Higher-Level Protocols & Security Wrappers (conceptual behaviour)
  • ARP: Resolves IP → MAC within a LAN. Can be abused conceptually for redirecting traffic.
  • DNS: Translates domain names to IP addresses. Commonly targeted for redirection or misdirection attacks.
  • FTP: Transfers files using ports 20/21. Weak configurations may allow unauthorized file movement.
  • HTTP / HTTPS: Web communication. Frequently analysed due to large volume of traffic and vulnerabilities.
  • IRC: Text-based group chat channels. Historically used in automation and remote coordination systems.
  • SMTP: Sends email. High-volume traffic channel; relevant for filtering and monitoring.
  • SNMP: Network device management. Misconfigurations can lead to information disclosure.
  • SSH: Secure, encrypted remote terminal access. Important for secure administration.
  • TFTP: Lightweight file transfer on port 69. Seen in simple or automated device configurations.
  • TLS: Provides authentication and encryption for other protocols. Masks traffic contents in both legitimate and illegitimate uses.

Key Takeaways
  • Understanding how protocols behave at each OSI layer is essential for interpreting traffic captures.
  • Familiarity with the normal patterns of protocols (IP, TCP/UDP, DNS, TLS, etc.) helps analysts later identify unusual or suspicious activity.
  • This theoretical module prepares students for the practical phase using tools like Wireshark, where they will analyse real traffic captures in a controlled, educational setting.


You can listen and download our episodes for free on more than 10 different platforms:
https://linktr.ee/cybercode_academy
5 days ago
11 minutes

CyberCode Academy
Course 5 - Full Mobile Hacking | Episode 8: Technical Check for Mobile Indicators of Compromise using ADB and Command Line
In this lesson, you’ll learn about:
  • Goal — verifying if an Android device is compromised (conceptual):
    • How investigators look for Indicators of Compromise (IoCs) on a device by inspecting network activity and running processes; emphasis on performing all checks only with explicit authorization and on isolated lab devices.
  • Network‑level indicators:
    • Look for unexpected outbound or long‑lived connections to remote IPs or uncommon ports (examples of suspicious patterns, not how‑to).
    • High‑risk signals include connections to unknown foreign IPs, repeated reconnect attempts, or traffic to ports commonly associated with remote shells/listeners.
    • Correlate network findings with timing (when the connection started) and with other telemetry (battery spikes, data usage) to prioritize investigation.
  • Process & runtime indicators:
    • Unusual processes or services running on the device (unexpected shells, daemons, or package names) are strong red flags.
    • Signs include processes that appear to be interactive shells, packages with strange or obfuscated names, or processes that persist after reboots.
    • Correlate process names with installed package lists and binary locations to determine provenance (signed store app vs. side‑loaded package).
  • Behavioral symptoms to watch for:
    • Sudden battery drain, unexplained data usage, spikes in CPU, or device sluggishness.
    • Unexpected prompts for permissions, new apps appearing without user consent, or developer options/USB debugging enabled unexpectedly.
  • Forensic collection & triage (high level):
    • Capture volatile telemetry (network connections, running processes, recent logs) and preserve evidence with careful documentation (timestamps, commands run, who authorized the collection).
    • Preserve a copy/snapshot of the device state (emulator/VM snapshot or filesystem image) before further analysis to avoid contaminating evidence.
    • Export logs and network captures to an isolated analyst workstation for deeper correlation and timeline building.
  • Correlation & investigation workflow (conceptual):
    • Cross‑reference suspicious outbound connections with running processes and installed packages to identify likely malicious artifacts.
    • Use process metadata (package name, signing certificate, install time) and network metadata (destination domain, ASN, geolocation) to assess intent and scope.
    • Prioritize containment (isolate device/network) if active exfiltration or ongoing C2 is suspected.
  • Containment & remediation guidance:
    • Isolate the device from networks (airplane mode / disconnect) and, where appropriate, block suspicious destinations at the network perimeter.
    • Preserve evidence, then follow a remediation plan: revoke credentials, wipe/restore from a known‑good image, reinstall OS from trusted media, and rotate any secrets that may have been exposed.
    • Report incidents per organizational policy and involve legal/compliance if sensitive data was involved.
  • Safe lab & teaching suggestions:
    • Demonstrate IoCs using emulators or instructor‑controlled devices in an isolated lab network; never create or deploy real malicious payloads.
    • Provide students with sanitized capture files and pre‑built scenarios so they can practice correlation and investigation without touching live systems.
  • Key takeaway:
    • Detecting device compromise relies on correlating suspicious network activity with anomalous processes and device behavior. Always investigate within legal/ethical bounds, preserve evidence, and prioritize...
6 days ago
11 minutes
