I still see senior engineers littering their code with print("here") and print("variable x is:", x). I’m not judging—I do it too. Sometimes you just need a quick sanity check to see if a function is actually running. But relying solely on print statements is a terrible strategy when you’re building complex systems, especially when you’re working with extensions, asynchronous agents, or heavy backend processing.
When I started working on larger Python frameworks that integrate with C++ or Node.js components, my “print and pray” method fell apart. I spent hours chasing race conditions that only appeared when I wasn’t looking. That’s when I forced myself to actually learn the debugging tools Python offers out of the box.
If you are building custom extensions or working on multimodal agents, you need a robust strategy. I want to walk you through the specific Python debugging workflows I use daily. This isn’t just about finding syntax errors; it’s about understanding runtime behavior, inspecting memory, and fixing logic flaws without losing your mind.
The Built-in Powerhouse: breakpoint()
Since Python 3.7, we’ve had the breakpoint() function. I use this constantly. It replaces the old, verbose import pdb; pdb.set_trace(). When the interpreter hits this line, it pauses execution and drops you into an interactive shell.
This is invaluable when I’m developing a new class method and I’m not 100% sure what the data structure coming from an API looks like. Instead of guessing, I just pause the code and look.
def process_agent_data(payload):
    # I suspect the payload structure varies here
    required_keys = ["id", "timestamp", "vectors"]

    # Drop a debugger right here
    breakpoint()

    if not all(k in payload for k in required_keys):
        raise ValueError("Invalid payload structure")
    return payload["vectors"]

# Simulating a call
data = {"id": 123, "timestamp": 1735110000}

# Missing 'vectors' key
process_agent_data(data)
When I run this, the terminal stops at the breakpoint. I can type payload to see the dictionary. I can try running the list comprehension [k in payload for k in required_keys] to see exactly which key is missing. This immediate feedback loop cuts my development time in half compared to modifying the code, re-running it, seeing a log, and repeating.
Common commands I use inside the debugger:
- n (next): Execute the current line and move to the next.
- s (step): Step into a function call.
- c (continue): Resume execution until the next breakpoint.
- l (list): Show the code surrounding the current line.
- p variable_name: Print the value of a variable.
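One pattern worth knowing: the default breakpoint() hook respects the PYTHONBREAKPOINT environment variable, so you can guard a breakpoint with a condition and still switch every breakpoint off at once without touching the code. Here is a minimal sketch; find_bad_rows is a hypothetical helper, not part of any real codebase:

```python
import os

# Setting PYTHONBREAKPOINT=0 turns every breakpoint() call into a no-op.
# Handy in CI, or when handing a script to someone else to run.
os.environ["PYTHONBREAKPOINT"] = "0"

def find_bad_rows(rows):
    """Return indices of rows missing the 'id' key (illustrative helper)."""
    bad = []
    for i, row in enumerate(rows):
        if "id" not in row:
            # Conditional breakpoint: only pause on the offending row.
            # With PYTHONBREAKPOINT=0 this does nothing, so the loop runs through.
            breakpoint()
            bad.append(i)
    return bad

print(find_bad_rows([{"id": 1}, {"name": "x"}, {"id": 3}]))  # [1]
```

Wrapping breakpoint() in an if keeps you out of the debugger for the thousands of healthy iterations and drops you in exactly at the failing one.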
Logging: The History of Your Application
Debugging is for the present; logging is for the past. If I’m debugging an issue that happened on a server last night, breakpoint() won’t help me. I need a detailed history. I see too many developers using print for logging. The problem with print is that it goes to standard output (stdout), which might get lost, isn’t timestamped, and doesn’t have severity levels.
I always set up the standard logging library immediately. It allows me to toggle verbosity without changing code. When I’m developing, I run at DEBUG level. In production, I switch to WARNING or ERROR.

Here is the configuration I use for almost every backend service I write. It sets up a formatter that includes the timestamp and the file name, which is critical for tracking down errors in large codebases.
import logging
import sys

def setup_logger(name):
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)

    # Create handlers
    c_handler = logging.StreamHandler(sys.stdout)
    f_handler = logging.FileHandler('agent_debug.log')
    c_handler.setLevel(logging.DEBUG)
    f_handler.setLevel(logging.ERROR)

    # Create formatters and add them to the handlers
    c_format = logging.Formatter('%(name)s - %(levelname)s - %(message)s')
    f_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    c_handler.setFormatter(c_format)
    f_handler.setFormatter(f_format)

    # Add handlers to the logger
    logger.addHandler(c_handler)
    logger.addHandler(f_handler)
    return logger

log = setup_logger('core_module')

def calculate_metrics(data):
    log.debug(f"Starting calculation with data: {data}")
    try:
        result = 100 / len(data)
        log.info(f"Calculation successful: {result}")
        return result
    except ZeroDivisionError:
        log.error("Data list is empty! Cannot divide by zero.", exc_info=True)
        return 0

calculate_metrics([])
Notice the exc_info=True parameter in the error log. This is a lifesaver. It appends the entire stack trace to the log file automatically. Without this, you just get the error message, which often isn’t enough context to solve the bug.
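If you find exc_info=True verbose, the logging module also offers logger.exception(), which is shorthand for logger.error(..., exc_info=True) and is meant to be called from inside an except block. A quick sketch, using an in-memory stream so we can see that the traceback really lands in the output:

```python
import io
import logging

# Capture log output in memory so we can inspect it afterwards
buf = io.StringIO()
logger = logging.getLogger("demo")
logger.addHandler(logging.StreamHandler(buf))
logger.setLevel(logging.DEBUG)

try:
    1 / 0
except ZeroDivisionError:
    # Equivalent to logger.error("Division failed", exc_info=True)
    logger.exception("Division failed")

output = buf.getvalue()
print("Traceback (most recent call last):" in output)  # True
```

Same behavior, one less parameter to remember.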
Visual Debugging with VS Code
While I love the terminal, I do most of my heavy lifting in VS Code. The visual debugger is fantastic for inspecting complex objects or navigating deep inheritance hierarchies. The key to making this work, especially when you are building extensions or running code inside Docker containers, is the launch.json configuration.
I often work on systems where the Python process is launched by another tool (like a framework runner or a shell script). In these cases, hitting “F5” to start the debugger doesn’t work because VS Code isn’t launching the process. Instead, I use the “Attach” method.
First, I install the debugpy package in my environment:
pip install debugpy
Then, inside my Python entry point (the code that gets run first), I add this snippet:
import debugpy
# 5678 is the default port, but you can change it
debugpy.listen(("0.0.0.0", 5678))
print("Waiting for debugger attach...")
debugpy.wait_for_client() # Pauses execution until you connect
Now, I configure my launch.json in VS Code to connect to this port:
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Remote Attach",
            "type": "python",
            "request": "attach",
            "connect": {
                "host": "localhost",
                "port": 5678
            },
            "pathMappings": [
                {
                    "localRoot": "${workspaceFolder}",
                    "remoteRoot": "/app"
                }
            ]
        }
    ]
}
This setup allows me to debug code running inside a Docker container or a remote server as if it were running locally on my machine. I can set breakpoints in my editor, and when the remote process hits them, my VS Code window flashes, and I have full control. This is essential for full stack debugging where the environment matters.
Debugging Async Code
Asynchronous Python (asyncio) is standard now for high-performance network agents, but debugging it can be a nightmare. Stack traces in async code often look disjointed because the execution context jumps around the event loop.
One specific tool I enable when things get weird is the asyncio debug mode. It highlights slow callbacks and resource warnings (like forgetting to await a coroutine).

Here is how I structure my async entry points to make them debuggable:
import asyncio
import logging

# Configure logging to see asyncio messages
logging.basicConfig(level=logging.DEBUG)

async def slow_operation():
    print("Starting slow operation")
    await asyncio.sleep(1)
    print("Finished slow operation")

async def main():
    print("Main started")
    # Scheduled but never awaited; debug mode flags the still-pending task
    task = asyncio.create_task(slow_operation())
    await asyncio.sleep(0.1)
    print("Main finished")

if __name__ == "__main__":
    # Enable debug mode explicitly
    asyncio.run(main(), debug=True)
When debug=True is set, Python warns about coroutines that were created but never awaited. It also logs any callback that takes too long to execute and blocks the loop. If I’m seeing performance issues in my API development, this is the first switch I flip.
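You can also confirm debug mode programmatically and tune how aggressive the slow-callback detector is: the loop exposes a slow_callback_duration attribute, which defaults to 100 ms. A small sketch, with the blocking sleep standing in for accidental synchronous work:

```python
import asyncio
import time

async def main():
    loop = asyncio.get_running_loop()
    # Default threshold is 0.1 s; lower it so shorter stalls get logged too
    loop.slow_callback_duration = 0.05
    # A blocking call like this stalls the event loop; in debug mode,
    # asyncio logs a warning that the task step took too long
    time.sleep(0.1)
    return loop.get_debug()

print(asyncio.run(main(), debug=True))  # True
```

Seeing those "Executing ... took N seconds" warnings is usually the fastest route to finding the synchronous call that is secretly freezing your event loop.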
Handling Hard Crashes with sys.excepthook
Sometimes, the application just crashes to the desktop, and if you aren’t staring at the console, you miss the traceback. This happens frequently with background workers or GUI applications.
I like to override sys.excepthook. This function is called automatically whenever an uncaught exception occurs. I use it to log the crash to a file before the program terminates. It ensures I never miss a “Game Over” moment.
import sys
import traceback
from datetime import datetime

def handle_exception(exc_type, exc_value, exc_traceback):
    if issubclass(exc_type, KeyboardInterrupt):
        sys.__excepthook__(exc_type, exc_value, exc_traceback)
        return
    filename = f"crash_log_{datetime.now().strftime('%Y%m%d_%H%M%S')}.txt"
    with open(filename, 'w') as f:
        f.write("Uncaught exception:\n")
        traceback.print_exception(exc_type, exc_value, exc_traceback, file=f)
    print(f"CRITICAL ERROR: Logged to {filename}")

# Register the hook
sys.excepthook = handle_exception

# Trigger a crash to test it
def cause_crash():
    return 1 / 0

cause_crash()
This snippet captures the error type, value, and the full stack trace, saving it to a timestamped file. I can’t tell you how many times this has saved me when a long-running data processing job failed at 3 AM.
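One caveat for the background workers mentioned above: sys.excepthook does not fire for exceptions raised inside threads. Since Python 3.8, threading.excepthook handles those. A minimal sketch, recording crashes in a list rather than a file to keep it self-contained:

```python
import threading

crashes = []

def thread_hook(args):
    # args carries exc_type, exc_value, exc_traceback, and the thread object
    crashes.append(f"{args.thread.name}: {args.exc_type.__name__}")

# sys.excepthook never sees thread exceptions; this hook does
threading.excepthook = thread_hook

worker = threading.Thread(target=lambda: 1 / 0, name="worker")
worker.start()
worker.join()
print(crashes)  # ['worker: ZeroDivisionError']
```

If your workers run in threads, register both hooks; otherwise a crashed worker dies silently with nothing in your crash logs.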
Profiling: When the Bug is Speed

Not all bugs cause crashes. Some bugs just make your application unusably slow. Performance monitoring is a form of debugging. I use the built-in cProfile module to find bottlenecks. It tells me exactly how many times a function was called and how much time was spent in it.
I often wrap the main execution of my script like this:
import cProfile
import pstats

def heavy_computation():
    total = 0
    for i in range(1000000):
        total += i ** 2
    return total

def main():
    print("Running heavy task...")
    heavy_computation()

if __name__ == "__main__":
    profiler = cProfile.Profile()
    profiler.enable()
    main()
    profiler.disable()
    stats = pstats.Stats(profiler).sort_stats('cumtime')
    stats.print_stats(10)  # Print top 10 time-consuming functions
Running this outputs a table showing where the time went. If I see a function taking 90% of the execution time, I know exactly where to focus my optimization efforts. This is much more effective than guessing or randomly optimizing code that isn’t actually the bottleneck.
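Since Python 3.8, cProfile.Profile also works as a context manager, which trims the enable/disable boilerplate. Here is a sketch with hypothetical fast/slow helpers, piping the stats into a string so they can be inspected programmatically instead of just printed:

```python
import cProfile
import io
import pstats

def fast():
    return sum(i * i for i in range(10_000))

def slow():
    # Calls fast() repeatedly, so it should dominate the profile
    return sum(fast() for _ in range(50))

# enable()/disable() are handled by the context manager (Python 3.8+)
with cProfile.Profile() as profiler:
    slow()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumtime").print_stats(10)
report = stream.getvalue()
print("slow" in report and "fast" in report)  # True
```

Capturing the report as a string is also handy in tests, where I assert that no unexpected function crept into the hot path.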
Final Thoughts
Debugging is a skill distinct from coding. I know plenty of developers who can write algorithms but struggle to figure out why they don’t work in production. By moving beyond print statements and embracing tools like breakpoint(), structured logging, remote debugging with VS Code, and profiling, you gain control over your software.
When you are dealing with complex integrations or building extensions for frameworks, you can’t afford to guess. Set up your environment correctly, use the tools available, and stop fighting your own code.
