Security & Pen Testing in 15 Minutes: A Crash Course for Engineers Who Build Things

You already know systems. Let me map security onto what you know.


I’m a self-taught senior staff engineer. I’ve built mobile apps, full-stack platforms, self-hosted infrastructure, and production CI/CD pipelines. I’ve never had a security class. And I just spent a week auditing my own home lab — Gitea, Woodpecker CI, Docker services, a NAS, and an Alienware running as a server — and found things that would have made me someone’s pivot point.

This is the crash course I wish someone had given me. No CompTIA prerequisites. No certified ethical hacker gatekeeping. Just the concepts, mapped onto things you already understand.

If you’ve ever deployed a web server, you already have the mental models for all of this.


Security Is Just Asymmetric Testing

You write tests to prove your code does what you intend. Security is proving your code only does what you intend.

The attacker runs your code with inputs you never tested. They read your docs, your error messages, your headers, your timing. They’re a QA engineer with no constraints and unlimited time. Your job isn’t to make attacks impossible — it’s to make it more expensive than the alternative target.

That’s it.

Everything else in security is details about which inputs, which assumptions, and where the friction is.


Attack Surface Is Your Public API

You know how a well-designed API exposes only what’s necessary and hides everything else? Security surface area works the same way.

Every open port is an API endpoint. Every running service is a handler. Every exposed credential is a public method that shouldn’t be public. Every default password is a route with no authentication middleware.

When a pen tester scans your machine with nmap -sV -p- 192.168.x.x, they’re calling GET / on every port from 1 to 65535 and reading the response headers. The output tells them exactly what’s running, what version, and which CVEs apply to it.

Run it on yourself before someone else does. Every open port that surprises you is a bug.
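The scan itself is nothing exotic. Here's a minimal sketch of the idea in Python — a toy TCP connect scan, not a substitute for nmap, but enough to show there's no magic:

```python
import socket

def scan_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Point it at 127.0.0.1 and compare the result to what you believe should be listening. Anything in the diff is your bug.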


Recon Is Read the Docs

Before any attacker runs a single exploit, they read. They run nmap, whatweb, curl -I, dig, whois. They check your HTTP headers for version strings. They look at your robots.txt. They enumerate your subdomains. They read your error messages.

This phase is called recon and it’s 80% of the work. Most attacks don’t require clever exploits — they require reading what the target told them for free.

In software terms: attackers read your stack traces, your X-Powered-By headers, your directory listings, your git history accidentally pushed to prod. They don’t hack in. They log in with credentials you left in a public repo from 2019.

Hardening at this layer means removing signal. Strip version headers. Disable directory listing. Use .gitignore religiously. Kill default credentials on day one. Never push .env files.
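Removing signal can be mechanical. A sketch of the idea in Python — the header names here are the common offenders, but check what your own stack emits:

```python
# Response headers that advertise your stack and version for free
LEAKY_HEADERS = {"server", "x-powered-by", "x-aspnet-version", "x-runtime"}

def strip_signal(headers: dict[str, str]) -> dict[str, str]:
    """Drop response headers that leak implementation details."""
    return {k: v for k, v in headers.items() if k.lower() not in LEAKY_HEADERS}
```

In practice you'd do this at the reverse proxy rather than in application code, but the principle is the same: every header you don't send is recon the attacker doesn't get.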


Exploitation Is Calling an Undocumented API

A vulnerability is an unintended API. SQL injection is calling your database through your form field because you didn’t validate the input. A buffer overflow is writing past the end of an array because you didn’t check the length. A path traversal is reading ../../etc/passwd because you didn’t sanitize the filename.

In every case, the attacker found an input path you didn’t design for and used it to call functionality you didn’t intend.
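SQL injection makes this concrete. A self-contained sketch using an in-memory SQLite database — the unsafe version lets the input become SQL; the safe version keeps it as data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # String interpolation: attacker input becomes part of the SQL itself
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats the input strictly as data
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
```

Feed the payload to the unsafe version and the WHERE clause becomes `name = '' OR '1'='1'` — true for every row. The safe version just looks for a user literally named `' OR '1'='1` and finds nothing.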

CVEs (Common Vulnerabilities and Exposures) are a public registry of these undocumented APIs. When you run outdated software, you’re advertising which undocumented APIs you have. Metasploit is a library that already knows how to call them.

Patching is closing catalogued undocumented APIs before someone calls yours. It's why apt upgrade is a security control, not just a chore.


A Reverse Shell Is a Callback Function You Didn’t Register

You know how event-driven systems work: you register a callback, something triggers it, your code runs. A reverse shell is when an attacker plants a callback on your machine that phones home to theirs.

Your firewall blocks inbound connections — but outbound connections are usually allowed. So the shell calls out to the attacker, establishing a connection in the direction your firewall trusts. When it lands, the attacker has an interactive terminal on your machine with the permissions of whatever process was exploited.

From there, they have a foothold. They can run whoami, id, ls /, read your .env files, exfiltrate your private keys, install a cron job that reconnects every 5 minutes if the connection drops.

The defense: restrict outbound connections too. Most servers have no business connecting to arbitrary IPs on arbitrary ports. Network egress rules are underused and high-value.
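Conceptually, egress filtering is just a default-deny allowlist. A toy sketch — the IPs and ports here are hypothetical placeholders for whatever your host actually needs to reach:

```python
# Hypothetical allowlist: the only (destination, port) pairs this host needs
ALLOWED_EGRESS = {
    ("10.0.0.5", 5432),  # the database it talks to
    ("10.0.0.9", 443),   # the update mirror
}

def egress_permitted(dest_ip: str, dest_port: int) -> bool:
    """Default-deny outbound: a reverse shell's callback fails this check."""
    return (dest_ip, dest_port) in ALLOWED_EGRESS
```

Real enforcement happens in your firewall, not application code, but the logic is this simple: a reverse shell calling home to an unknown IP on port 4444 never makes it out.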


Privilege Escalation Is Dependency Injection You Didn’t Authorize

You land on a machine as www-data — low-privilege, limited access. But you notice sudo is misconfigured. Or there’s a SUID binary that runs as root. Or a writable cron job that executes as root every minute. Or a Docker socket mounted in a container, which is equivalent to root access on the host.

Privilege escalation is exploiting trust relationships to get permissions the initial compromise didn’t include. It’s the same as injecting a dependency into a container with elevated permissions because you misconfigured the interface.

Tools like LinPEAS automate this enumeration — they scan for every misconfiguration, SUID bit, writable directory, and weak sudo rule that could be a stepping stone. Running LinPEAS on your own system is an audit. Running it on someone else’s without permission is a crime.
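One of the checks LinPEAS automates is easy to reproduce yourself. A sketch of a SUID scanner in Python, using only the standard library:

```python
import os
import stat

def find_suid(root: str) -> list[str]:
    """List files under root with the SUID bit set (they run as the file's owner)."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue  # unreadable or vanished; skip it
            if mode & stat.S_ISUID:
                hits.append(path)
    return hits
```

Run it over /usr/bin on your own box and check each hit against what a stock install ships. A SUID binary you can't explain is a finding.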

The defense is least privilege — the same SOLID principle you apply to code. Every process, every user, every container gets only the permissions it needs to do its specific job. Nothing more.


Lateral Movement Is Traversing Your Dependency Graph

Once inside, an attacker doesn’t stay in one place. They move. They read your environment variables for database credentials. They check if those credentials work on the database server. They look for SSH keys that might unlock other machines. They check if your internal network trusts this compromised host.

This is lateral movement — pivoting from the initial foothold to higher-value targets. It follows your trust graph exactly. If machine A trusts machine B’s SSH key, compromising A gives access to B. If service A can connect to database B without re-authentication, compromising A gives access to B’s data.

This is why network segmentation matters. Your Gitea server should not be able to reach your NAS directly. Your CI/CD environment should not have access to your production secrets. Blast radius is a function of how much trust you’ve granted between systems.

A flat network where everything can talk to everything is a dependency graph with no encapsulation. One compromise cascades everywhere.
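Blast radius is literally graph reachability. A toy sketch — the hosts and trust edges are made up, but the traversal is exactly what an attacker does:

```python
# Toy trust graph: an edge A -> B means "compromising A yields access to B"
TRUST = {
    "gitea": ["ci"],   # e.g. CI holds a deploy key the Gitea host can read
    "ci": ["nas"],     # e.g. build jobs mount a NAS share
    "nas": [],
    "blog": [],        # segmented: trusts nothing, nothing trusts it
}

def blast_radius(start: str) -> set[str]:
    """Everything reachable from an initial compromise, following trust edges."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(TRUST.get(node, []))
    return seen
```

Segmentation is just deleting edges from this graph. Every trust relationship you remove shrinks what one compromise can reach.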


Persistence Is Installing a Dependency That Starts on Boot

After exploitation, attackers don’t want to re-exploit every session. They install persistence — a mechanism that survives reboots and reconnects them automatically.

Common persistence mechanisms: cron jobs that beacon out every few minutes. SSH keys added to ~/.ssh/authorized_keys. Systemd services installed with innocuous names. Docker containers that restart on failure and run their own callback. Web shells dropped into /var/www/html disguised as PHP cache files.

These are all the same thing: a startup hook that runs their code in your environment.

The defense is knowing your baseline. If you don’t know what cron jobs, services, and startup scripts should exist on your machine, you can’t detect the ones that shouldn’t. Tools like Wazuh establish that baseline and alert on deviations. An SSH key that wasn’t there yesterday is a finding.
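The core of baseline detection is a set difference. A minimal sketch — real tools like Wazuh add collection, storage, and alerting, but the comparison is this:

```python
def baseline_diff(expected: set[str], observed: set[str]) -> list[str]:
    """Anything present now that isn't in the known-good baseline is a finding."""
    return sorted(observed - expected)
```

Apply it to anything enumerable: authorized_keys entries, cron jobs, enabled systemd units, listening ports. The hard part isn't the diff — it's having recorded the baseline before you needed it.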


Defense in Depth Is Middleware Chains

No single security control stops everything. The model that works is layered — each layer catches what the previous layer missed.

Your perimeter firewall blocks most inbound traffic. Your VPN means you don’t expose services publicly at all. Your application-layer auth means even if someone reaches the service, they need credentials. Your secrets management means compromised credentials don’t expose your database password. Your network segmentation means a compromised app server can’t reach your NAS. Your monitoring means you know when something unexpected happened.

This is the same pattern as middleware chains in Express. Each middleware does one job and passes to the next. The request that gets through cors() still has to pass auth(), then rateLimit(), then validation(). No single middleware is the whole security story.

The attacker who gets through your firewall still hits your VPN requirement. The one who bypasses that still needs credentials. The one who has credentials still can’t reach your internal services. Each layer is a multiplier on the attacker’s cost.
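The layering itself can be sketched as function composition — this is the Express pattern rendered in Python, with each layer as a predicate that either passes the request on or rejects it:

```python
def chain(*layers):
    """Compose security layers; a request must pass every one in order."""
    def handle(request: dict) -> str:
        for layer in layers:
            if not layer(request):
                return f"rejected by {layer.__name__}"
        return "handled"
    return handle

# Each layer is one control; no single layer is the whole story
def firewall(req): return req.get("source") == "vpn"
def auth(req): return "token" in req
def rate_limit(req): return req.get("requests_this_minute", 0) < 60

handler = chain(firewall, auth, rate_limit)
```

An attacker who defeats one layer lands on the next. The multiplier on their cost is the product of the layers, which is why three mediocre controls beat one perfect-on-paper control.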


Encryption Is Compilation: You Don’t Ship the Source

You know why you don’t ship TypeScript to production — you compile it first so consumers can’t trivially read your implementation. Encryption applies the same principle to data.

Encryption at rest means your data on disk is compiled. If someone walks out with your NAS drives, they get ciphertext. TLS/encryption in transit means your data on the wire is compiled. If someone captures your packets with Wireshark, they get ciphertext.

The key is your compiler. Lose the key, lose the data. This is why key management is the hard problem — not the encryption algorithm itself. AES-256 is fine. The question is: where does the key live? In the same place as the data? In an environment variable committed to your repo? On a sticky note?
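To make the key-is-everything point concrete, here's a toy one-time pad — a deliberately minimal cipher, not what you'd use in production (use AES-256 via a vetted library), but it shows that the ciphertext is pure noise without the key:

```python
import secrets

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """Toy one-time pad: XOR with a random key the same length as the data."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))
```

Notice the failure mode: if the key sits on the same disk as the ciphertext, you've encrypted nothing. That's the sticky-note problem in code.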

Secrets management — HashiCorp Vault, sops, git-crypt — is the discipline of treating keys the way you treat your private signing keys. Never in version control. Never in environment variables in production containers without a secrets backend. Never duplicated across services that don’t need them.


Monitoring Is Your Test Suite Running in Production

You write tests that run before deploy. But tests can’t catch what they don’t anticipate. Monitoring is your test suite running continuously against the live system, watching for assertions you didn’t know you needed to write.

A spike in failed SSH auth attempts is a flaky test that keeps failing — something is probing your port. An unexpected outbound connection to a new IP is an assertion violation — that service shouldn’t be calling home. A new listening port that wasn’t there yesterday is an undeclared API endpoint.
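The simplest version of that first assertion fits in a few lines. A sketch of a failed-auth spike detector — the log format here is a simplified stand-in modeled on sshd's "Failed password" lines, with the source IP last:

```python
from collections import Counter

def auth_spikes(log_lines: list[str], threshold: int = 5) -> set[str]:
    """Flag source IPs with repeated failed SSH auths.

    Assumes a simplified log format where the source IP is the last token.
    """
    counts = Counter(
        line.split()[-1] for line in log_lines if "Failed password" in line
    )
    return {ip for ip, n in counts.items() if n >= threshold}
```

This is what a SIEM rule is under the hood: a predicate over a log stream, firing when a count crosses a threshold.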

Wazuh is a SIEM — Security Information and Event Management. It’s a log aggregator with rules that fire assertions against your log stream in real time. CrowdSec is a distributed threat intelligence layer — when your instance blocks an IP, it shares that finding with everyone else running CrowdSec, so the attacker who hit your Gitea yesterday is already blocked on someone else’s stack.

You can’t remediate what you can’t see. Observability in production isn’t optional, and security observability is just observability with higher stakes.


Pen Testing Is QA With No Access Controls

Penetration testing is systematically finding undocumented APIs in your own system before someone else does. It follows the same phases every time:

Recon — what is this thing? What ports, services, versions, and technologies? nmap, gobuster, whatweb. Read everything the target tells you for free.

Scanning — which known vulnerabilities apply? Cross-reference running versions against CVE databases. Find misconfigurations. Find default credentials.

Exploitation — attempt to trigger the vulnerabilities. Confirm they’re real, not theoretical. Document the exact payload that worked.

Post-exploitation — given the foothold, what’s actually accessible? What data? What lateral movement is possible?

Reporting — every finding gets a severity, a reproduction step, and a remediation.

The legal version of this is you doing it to your own infrastructure, or someone you’ve hired doing it with a signed scope document. The criminal version is the same methodology applied without authorization. The technical skills are identical. The authorization is everything.


Your Docker Setup Is a Security Surface

Docker gives a false sense of isolation. Containers share a kernel. A misconfigured container can escape to the host.

The specific foot-guns: mounting the Docker socket (/var/run/docker.sock) inside a container gives that container root on the host — it can create new containers, mount host directories, and escape trivially. Containers running as root (the default) mean any process exploit inside the container runs with root context. Containers bound to 0.0.0.0 expose services to every interface including your WAN.

Audit your docker-compose.yml files for three things: who is the user inside the container, what is mounted from the host, and what ports are bound and to which interface. 127.0.0.1:3000:3000 is a service only reachable on localhost. 0.0.0.0:3000:3000 (or just 3000:3000) is reachable from anywhere that can reach the host.
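That three-point audit is scriptable. A sketch that checks one parsed compose service (here just a dict, as yaml.safe_load would give you) for the foot-guns above:

```python
def audit_service(service: dict) -> list[str]:
    """Check one docker-compose service definition for the three foot-guns."""
    findings = []
    if "user" not in service:
        findings.append("runs as root (no user: key)")
    for vol in service.get("volumes", []):
        if str(vol).startswith("/var/run/docker.sock"):
            findings.append("mounts the Docker socket")
    for port in service.get("ports", []):
        if not str(port).startswith("127.0.0.1:"):
            findings.append(f"binds {port} beyond localhost")
    return findings
```

This is a rough heuristic, not a scanner — it won't catch every exposure pattern — but running it over your compose files takes a minute and tends to find something.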

The secure default: no Docker socket mounts, user: "1000:1000" in compose, bind to 127.0.0.1 and let your reverse proxy (Caddy, nginx) handle external exposure.


Why This Matters for What I’m Running

Most security problems aren’t clever attacks. They’re configuration debt.

I’m running Gitea, Woodpecker CI, a NAS, and a dozen Docker services on a self-hosted server. Every one of those services has a default configuration that was optimized for getting started, not for production hardening. Woodpecker CI by design executes arbitrary code — that’s its job. If it’s misconfigured or its webhook secrets are weak, someone can trigger builds and run their code in my environment.

The fix isn’t paranoia. It’s discipline. Wireguard instead of exposed ports. Secrets in Vault instead of .env files. Containers that run as non-root and don’t mount the socket. CrowdSec on every internet-facing service. Wazuh watching the baseline.

None of this is clever. It’s all just closing undocumented APIs.

Constraints win.


The Cheat Sheet

Security Concept → Your Mental Model
Attack surface → Public API surface
Recon → Read the docs
CVE → Undocumented API in a published registry
Vulnerability → Unintended input path
Exploit → Calling an API you weren't supposed to know about
Reverse shell → Callback function you didn't register
Privilege escalation → Unauthorized dependency injection
Lateral movement → Traversing the trust/dependency graph
Persistence → Startup hook you didn't install
Defense in depth → Middleware chain
Encryption → Compilation: don't ship the source
TLS → Encrypted transport: your data on the wire
Secrets management → Key management for your compiler
Firewall → Route guard / middleware that drops unauthenticated requests
Network segmentation → Module encapsulation: no direct access across boundaries
Monitoring / SIEM → Production test suite running continuously
Pen testing → QA with no access controls
Docker socket mount → Granting a container sudo on the host
Least privilege → SOLID: single responsibility, minimal interface
Zero trust → Authenticate every request, trust no implicit context

Where to Go From Here

If this clicked for you, the next step isn’t reading more theory. It’s running nmap against your own machines right now and reading what it tells you.

The barrier isn’t skill or hardware. It’s knowing which ports to close.

Now you do.


Jason Walker is building Loop Lock, the perfect A/V loop creator, and designing the standard of system design grammar. He writes about specification-driven engineering, solo development, and locking down the machines that run the work. Interested in the grammar? Email stonecassette@gmail.com with the subject “Interested in your system design grammar.” Follow at jsonwalker.com.