
Configuration

Because even genies need settings.

```python
import wishful

wishful.configure(
    model="openai/gpt-5",          # Switch models - use litellm model IDs (default: "azure/gpt-4.1")
    cache_dir="/tmp/.wishful",     # Cache directory for generated modules (default: ".wishful")
    spinner=False,                 # Show/hide the "generating..." spinner (default: True)
    review=True,                   # Review code before execution (default: False)
    allow_unsafe=False,            # Disable safety checks - dangerous! (default: False)
    temperature=0.7,               # LLM sampling temperature (default: 1.0)
    max_tokens=8000,               # Maximum LLM response tokens (default: 4096)
    debug=True,                    # Enable debug logging (default: False)
    log_level="INFO",              # Logging level: DEBUG, INFO, WARNING, ERROR (default: WARNING)
    log_to_file=True,              # Write logs to cache_dir/_logs/ (default: True)
    system_prompt="Custom prompt", # Override the system prompt for the LLM (advanced)
)

# Context radius is configured separately (it likes to be special)
wishful.set_context_radius(6)  # Lines of context around imports AND call sites (default: 3)
```
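The context radius controls how many surrounding lines the model sees for each import and call site. As a way to picture it, here is a minimal sketch of how a radius-N window might be carved out of a source file (the `context_window` helper is illustrative, not wishful's actual implementation):

```python
def context_window(lines: list[str], line_no: int, radius: int = 3) -> list[str]:
    """Return the lines within `radius` of a 1-indexed line number."""
    start = max(0, line_no - 1 - radius)
    end = min(len(lines), line_no + radius)  # slice end is exclusive
    return lines[start:end]

source = [f"line {i}" for i in range(1, 21)]  # a 20-line toy file

# With the default radius of 3, a call site on line 10 yields 7 lines (10 ± 3):
print(context_window(source, 10))
# After set_context_radius(6), the window would grow to 13 lines:
print(len(context_window(source, 10, radius=6)))  # → 13
```

A larger radius gives the model more context to infer types and intent, at the cost of a bigger prompt.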

All Configuration Options:

Your wish, your rules.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `model` | `str` | `"azure/gpt-4.1"` | LLM model identifier (litellm format) |
| `cache_dir` | `str \| Path` | `".wishful"` | Directory for cached generated modules |
| `review` | `bool` | `False` | Prompt for approval before executing generated code |
| `spinner` | `bool` | `True` | Show spinner during LLM generation |
| `allow_unsafe` | `bool` | `False` | Disable safety validation (use with caution!) |
| `temperature` | `float` | `1.0` | LLM sampling temperature (0.0-2.0) |
| `max_tokens` | `int` | `4096` | Maximum tokens for LLM response |
| `debug` | `bool` | `False` | Enable debug mode (sets `log_level` to DEBUG) |
| `log_level` | `str` | `"WARNING"` | Logging level (DEBUG, INFO, WARNING, ERROR) |
| `log_to_file` | `bool` | `True` | Write logs to `{cache_dir}/_logs/` |
| `system_prompt` | `str` | (see source) | Custom system prompt for LLM (advanced) |

Environment variables (loaded via python-dotenv)


The .env file: where wishes go before they’re spoken aloud.

All settings can also be configured via environment variables:

  • WISHFUL_MODEL or DEFAULT_MODEL - LLM model identifier
  • WISHFUL_CACHE_DIR - Cache directory path
  • WISHFUL_REVIEW - Set to "1" to enable review mode
  • WISHFUL_DEBUG - Set to "1" to enable debug mode
  • WISHFUL_UNSAFE - Set to "1" to disable safety checks
  • WISHFUL_SPINNER - Set to "0" to disable spinner
  • WISHFUL_MAX_TOKENS - Maximum tokens (integer)
  • WISHFUL_TEMPERATURE - Sampling temperature (float)
  • WISHFUL_CONTEXT_RADIUS - Context lines around imports and call sites (integer)
  • WISHFUL_LOG_LEVEL - Logging level (DEBUG, INFO, WARNING, ERROR)
  • WISHFUL_LOG_TO_FILE - Set to "0" to disable file logging
  • WISHFUL_SYSTEM_PROMPT - Custom system prompt
  • WISHFUL_FAKE_LLM - Set to "1" for deterministic stub generation (testing)
Notes:

  • Safety checks block `os`, `subprocess`, and `sys` imports; `eval`/`exec`; and `open` in write modes unless `allow_unsafe=True`.
  • Tests flip `allow_unsafe=True` by default, so don't blindly copy test fixtures into production code.
  • `log_level` and `log_to_file` piggyback on `wishful.settings`.
  • Logs live under `<cache_dir>/_logs/YYYY-MM-DD.log` when file logging is on (enabled by default; they stay in the sandboxed cache directory).
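The `"1"`/`"0"` flag convention above can be coerced with a small helper. Here is a sketch (not wishful's actual loader) of reading a few of the documented variables from the environment:

```python
import os

def env_flag(name: str, default: bool) -> bool:
    """Interpret "1" as True and anything else as False, per the convention above."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw == "1"

def load_overrides() -> dict:
    """Collect a subset of the documented WISHFUL_* overrides that are set."""
    overrides: dict = {}
    if model := (os.environ.get("WISHFUL_MODEL") or os.environ.get("DEFAULT_MODEL")):
        overrides["model"] = model
    if tokens := os.environ.get("WISHFUL_MAX_TOKENS"):
        overrides["max_tokens"] = int(tokens)    # integer, per the list above
    if temp := os.environ.get("WISHFUL_TEMPERATURE"):
        overrides["temperature"] = float(temp)   # float, per the list above
    overrides["review"] = env_flag("WISHFUL_REVIEW", default=False)
    overrides["spinner"] = env_flag("WISHFUL_SPINNER", default=True)
    return overrides

# Demo: clear variables we don't set, then exercise the flags.
for var in ("WISHFUL_MODEL", "DEFAULT_MODEL", "WISHFUL_TEMPERATURE"):
    os.environ.pop(var, None)
os.environ["WISHFUL_REVIEW"] = "1"
os.environ["WISHFUL_SPINNER"] = "0"
os.environ["WISHFUL_MAX_TOKENS"] = "8000"
print(load_overrides())  # → {'max_tokens': 8000, 'review': True, 'spinner': False}
```

With python-dotenv in the loop, the same variables can live in a `.env` file instead of the shell.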

Want to see the knobs in action? Pair this with the CLI.