- Removed the inline Mermaid diagram definition for the secure document ingestion pipeline.
- Replaced the diagram with a reference to a pre-rendered image (assets/rec21_secure_ingestion.png).
- Ensures consistent visual representation of the pipeline across different markdown viewers.
- Avoids potential rendering issues or inconsistencies associated with dynamic Mermaid diagrams.
- Add a new schematic image to visually represent the supply chain components.
- Resize the 'Model Poisoning Flow' image for improved layout and readability.
- Enhance the visual clarity of the 'Data Provenance and Supply Chain Security' chapter.
- Remove the outdated supply chain map image.
- Replace the model poisoning flow image with a clearer version.
- Update image width for improved readability.
- Enhance visual explanations within the chapter.
- Convert the secure ingestion flow diagram from a simple list to a Mermaid `graph TD` flowchart.
- Enhance visual clarity and structure of the secure ingestion process.
- Explicitly show rejection paths for malware scans and format validation.
- Improve readability and understanding of the RAG pipeline's secure ingestion steps.
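The converted flowchart might look roughly like the sketch below; node names and rejection labels are illustrative, not necessarily those used in the chapter, though the two rejection paths (malware scan and format validation) follow the bullets above.

```mermaid
graph TD
    A[Ingest document] --> B{Malware scan}
    B -- clean --> C{Format validation}
    B -- infected --> R1[Reject: quarantine file]
    C -- valid --> D[Chunk and embed]
    C -- invalid --> R2[Reject: unsupported format]
    D --> E[Store in vector database]
```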
- Add detailed network isolation methods using Docker, VMs, and iptables for secure lab environments.
- Introduce multiple LLM setup options including Ollama, Text-Generation-WebUI, and llama.cpp for diverse testing needs.
- Integrate practical red teaming tools like Garak and a core Python environment for automated vulnerability scanning.
- Implement robust environmental safety mechanisms: a comprehensive kill switch, watchdog timer, API rate limiter, and cost tracker.
- Update .gitignore to exclude old_chapter_07.md, cleaning up old file references.
- Add a visual flowchart for the safety watchdog script.
- Enhance understanding of the autonomous agent kill switch implementation.
- Illustrate the monitoring logic and the termination path taken when thresholds are exceeded.
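The watchdog's monitor-and-terminate loop can be sketched as follows; the class interface and the timeout action are assumptions for illustration, not the script shipped in the chapter.

```python
import threading
import time

class Watchdog:
    """Background timer that fires a termination callback if it is not
    petted (reset) within `timeout` seconds — the kill-switch trigger."""
    def __init__(self, timeout: float, on_timeout):
        self.timeout = timeout
        self.on_timeout = on_timeout
        self._last_pet = time.monotonic()
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def pet(self):
        """Called by the agent on each successful iteration."""
        self._last_pet = time.monotonic()

    def stop(self):
        self._stop.set()
        self._thread.join()

    def _run(self):
        # Poll until stopped; fire once if the agent goes quiet too long.
        while not self._stop.wait(0.05):
            if time.monotonic() - self._last_pet > self.timeout:
                self.on_timeout()  # e.g. kill the agent process
                return
```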
- Refresh model pricing for OpenAI, Anthropic, and Google APIs.
- Update tool installation instructions and notes for vLLM and llama.cpp.
- Change default Anthropic model to claude-3-5-haiku-latest.
- Add advisories to verify external tool versions and API pricing due to rapid evolution.
- Remove deprecated version: "3.8" from Docker Compose examples for clarity.
- Reworded numerous sections and paragraphs for improved clarity and conciseness.
- Simplified sentence structures and adopted a more direct, imperative tone throughout the chapter.
- Shortened section titles and bullet points to enhance readability and reduce verbosity.
- Updated .gitignore to exclude final_audit.json, a new output file from lab processes.
- Aims to make the technical guidance more accessible and easier to digest for readers.
- Added detailed decision guides for local LLM deployment options and virtualization.
- Enhanced guidance for commercial LLM API testing, including cost, rate limiting, and logging.
- Provided a comprehensive overview of network isolation strategies and their GPU support.
- Introduced essential red team tooling categories and explained the use of Garak.
- Detailed the importance of kill switches, watchdog timers, and rate limiters for lab safety.
- Embed three new diagrams in Chapter 7 documentation.
- Provide visual explanations for proxy traffic interception.
- Illustrate the architectural setup for Docker-based lab isolation.
- Detail the execution flow of the custom test harness.
- Enhance readability and comprehension of complex lab setup procedures.
- Significantly expanded Chapter 7 with detailed guides and code examples for AI red teaming lab setup.
- Introduced comprehensive sections on local LLM deployment, API-based testing, and network isolation.
- Added critical safety controls including kill switches, watchdog timers, rate limiting, and cost management.
- Included advanced topics such as testing RAG, agent systems, and multi-modal models.
- Provided pre-engagement and daily operational checklists, risk management, and incident response procedures.
- Implement automatic port scanning for LLM services if no port is provided.
- Discover available models from OpenAI-compatible and Ollama API endpoints.
- Enable the attack phase to test against multiple discovered models.
- Improve usability by reducing manual configuration for common LLM setups.
- Enhance test coverage by automatically validating against various models.
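The discovery logic described above can be sketched like this; the candidate port list and endpoint paths reflect common defaults (OpenAI-compatible `/v1/models`, Ollama `/api/tags`) and may differ from PIT's actual implementation.

```python
import json
import socket
import urllib.request

# Common defaults: vLLM, generic HTTP, Ollama, Text-Generation-WebUI, LM Studio.
COMMON_LLM_PORTS = [8000, 8080, 11434, 5000, 1234]

def find_open_ports(host: str, ports=COMMON_LLM_PORTS, timeout=0.5):
    """Return the subset of candidate ports accepting TCP connections."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass
    return open_ports

def discover_models(host: str, port: int, timeout=2.0):
    """Try OpenAI-compatible then Ollama endpoints; return model names."""
    endpoints = {
        f"http://{host}:{port}/v1/models": lambda d: [m["id"] for m in d.get("data", [])],
        f"http://{host}:{port}/api/tags": lambda d: [m["name"] for m in d.get("models", [])],
    }
    for url, extract in endpoints.items():
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return extract(json.loads(resp.read()))
        except (OSError, ValueError, KeyError):
            continue
    return []
```

The attack phase can then iterate over every model returned by `discover_models` for each open port.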
- Remove empty os, sys, and typer files from the pit directory.
- These files were likely created as placeholders during initial setup.
- They serve no functional purpose and were never populated with content.
- Improves project hygiene by removing unused artifacts.
- Introduce a new sequential pipeline architecture for automated scans.
- Update the scan command to utilize the new pipeline for auto mode.
- Integrate InjectionTester into the DiscoveryPhase to perform real discovery of injection points.
- Implement real attack execution in the AttackPhase using InjectionTester and the pattern registry.
- Enhance the VerificationPhase to process detailed TestResult objects with detection scoring.
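The sequential pipeline might be structured as below; the phase class names mirror those in the bullets, but the shared-context interface and the placeholder phase bodies are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """Shared state threaded through the pipeline phases."""
    target: str
    injection_points: list = field(default_factory=list)
    results: list = field(default_factory=list)
    verified: list = field(default_factory=list)

class Phase:
    def run(self, ctx: Context) -> Context:
        raise NotImplementedError

class DiscoveryPhase(Phase):
    def run(self, ctx):
        # A real implementation would probe the target with InjectionTester.
        ctx.injection_points = [f"{ctx.target}#prompt", f"{ctx.target}#system"]
        return ctx

class AttackPhase(Phase):
    def run(self, ctx):
        # A real implementation would replay patterns from the registry.
        ctx.results = [{"point": p, "score": 0.9} for p in ctx.injection_points]
        return ctx

class VerificationPhase(Phase):
    def run(self, ctx):
        # Keep only results whose detection score clears a threshold.
        ctx.verified = [r for r in ctx.results if r["score"] >= 0.5]
        return ctx

class Pipeline:
    def __init__(self, phases):
        self.phases = phases

    def run(self, ctx: Context) -> Context:
        for phase in self.phases:  # phases execute sequentially, each enriching ctx
            ctx = phase.run(ctx)
        return ctx
```

Running `Pipeline([DiscoveryPhase(), AttackPhase(), VerificationPhase()])` over a fresh `Context` yields the verified findings for auto mode.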
- Completely rewrite SPECIFICATION.md to detail a new CLI architecture and user experience for PIT v2.0.0.
- Introduce a comprehensive command hierarchy, detailed terminal output mockups across 5 phases, and extensive options for pit scan.
- Focus on a "premium TUI" design philosophy, "one-command" operation, zero-config defaults, and enhanced error handling.
- Add sections for configuration, pattern, and history management, along with accessibility and performance specifications.
- Update .gitignore to reflect the new documentation structure, moving legacy and new spec files into dedicated docs subdirectories.
- Add SPECIFICATION.md to the list of ignored files.
- Prevent tracking of a new project specification document.
- Ensure only relevant source files are committed to the repository.
- Add typer and rich to the prompt_injection_tester project dependencies.
- Introduce pit as a new command-line entry point in pyproject.toml.
- Both dependencies are required by the new pit command-line interface.
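The corresponding pyproject.toml additions would look roughly like this; the version pins and the `prompt_injection_tester.cli:app` module path are illustrative assumptions.

```toml
[project]
dependencies = [
    "typer>=0.12",
    "rich>=13.0",
]

[project.scripts]
# Exposes `pit` on the PATH, pointing at the Typer app object.
pit = "prompt_injection_tester.cli:app"
```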