Merge pull request #28 from FuzzingLabs/feature/android-workflow-conversion

feat: Android Static Analysis Workflow with ARM64 Support
Authored by tduhamel42 on 2025-10-24 17:22:49 +02:00, committed by GitHub.
33 changed files with 3347 additions and 28 deletions


@@ -47,8 +47,40 @@ for worker in $WORKERS; do
        continue
    fi

    # Check required files
    REQUIRED_FILES=("Dockerfile" "requirements.txt" "worker.py")

    # Check Dockerfile (single file or multi-platform pattern)
    if [ -f "$WORKER_DIR/Dockerfile" ]; then
        # Single Dockerfile
        if ! git ls-files --error-unmatch "$WORKER_DIR/Dockerfile" &> /dev/null; then
            echo -e "${RED} ❌ File not tracked by git: $WORKER_DIR/Dockerfile${NC}"
            echo -e "${YELLOW} Check .gitignore patterns!${NC}"
            ERRORS=$((ERRORS + 1))
        else
            echo -e "${GREEN} ✓ Dockerfile (tracked)${NC}"
        fi
    elif compgen -G "$WORKER_DIR/Dockerfile.*" > /dev/null; then
        # Multi-platform Dockerfiles (e.g., Dockerfile.amd64, Dockerfile.arm64)
        PLATFORM_DOCKERFILES=$(ls "$WORKER_DIR"/Dockerfile.* 2>/dev/null)
        DOCKERFILE_FOUND=false
        for dockerfile in $PLATFORM_DOCKERFILES; do
            if git ls-files --error-unmatch "$dockerfile" &> /dev/null; then
                echo -e "${GREEN}$(basename "$dockerfile") (tracked)${NC}"
                DOCKERFILE_FOUND=true
            else
                echo -e "${RED} ❌ File not tracked by git: $dockerfile${NC}"
                ERRORS=$((ERRORS + 1))
            fi
        done
        if [ "$DOCKERFILE_FOUND" = false ]; then
            echo -e "${RED} ❌ No platform-specific Dockerfiles found${NC}"
            ERRORS=$((ERRORS + 1))
        fi
    else
        echo -e "${RED} ❌ Missing Dockerfile or Dockerfile.* files${NC}"
        ERRORS=$((ERRORS + 1))
    fi

    # Check other required files
    REQUIRED_FILES=("requirements.txt" "worker.py")
    for file in "${REQUIRED_FILES[@]}"; do
        FILE_PATH="$WORKER_DIR/$file"


@@ -5,7 +5,93 @@ All notable changes to FuzzForge will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.7.0] - 2025-01-16
## [Unreleased]

### 🎯 Major Features

#### Android Static Analysis Workflow

- **Added a comprehensive Android security testing workflow** (`android_static_analysis`):
  - Jadx decompiler for APK → Java source code decompilation
  - OpenGrep/Semgrep static analysis with custom Android security rules
  - MobSF integration for comprehensive mobile security scanning
  - SARIF report generation with unified findings format
  - Test results: successfully decompiled 4,145 Java files and found 8 security vulnerabilities
  - Full workflow completes in ~1.5 minutes

#### Platform-Aware Worker Architecture

- **ARM64 (Apple Silicon) support**:
  - Automatic platform detection (ARM64 vs x86_64) in the CLI using `platform.machine()`
  - Worker metadata convention (`metadata.yaml`) for platform-specific capabilities
  - Multi-Dockerfile support: `Dockerfile.amd64` (full toolchain) and `Dockerfile.arm64` (optimized)
  - Conditional module imports for graceful degradation (MobSF is skipped on ARM64)
  - Backend path resolution via `FUZZFORGE_HOST_ROOT` for CLI worker management
- **Worker selection logic**:
  - The CLI automatically selects the appropriate Dockerfile for the detected platform
  - Multi-strategy path resolution (API → `.fuzzforge` marker → environment variable)
  - Platform-specific tool availability documented in metadata
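The detection step described above is simple enough to sketch. The helper name and fallback behaviour below are illustrative assumptions, not the actual CLI code:

```python
import platform

def select_dockerfile(worker_dir: str) -> str:
    """Pick a platform-specific Dockerfile (illustrative sketch only)."""
    arch = platform.machine().lower()
    # Apple Silicon reports "arm64" on macOS and "aarch64" inside Linux containers
    suffix = "arm64" if arch in ("arm64", "aarch64") else "amd64"
    return f"{worker_dir}/Dockerfile.{suffix}"

print(select_dockerfile("workers/android"))
```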
#### Python SAST Workflow

- **Added a Python Static Application Security Testing workflow** (`python_sast`):
  - Bandit for Python security linting (SAST)
  - MyPy for static type checking
  - Safety for dependency vulnerability scanning
  - Integrated SARIF reporter for unified findings format
  - Auto-starts the Python worker on demand

### ✨ Enhancements

#### CI/CD Improvements

- Added automated worker validation to the CI pipeline
- Docker build checks for all workers before merge
- Worker file change detection for selective builds
- Optimized Docker layer caching for faster builds
- Dev branch testing workflow triggers

#### CLI Improvements

- Fixed a live-monitoring bug in the `ff monitor live` command
- Enhanced the `ff findings` command with better table formatting
- Improved `ff monitor` with clearer status displays
- Auto-start workers on demand when workflows require them
- Better error messages with actionable manual start commands

#### Worker Management

- Standardized worker service names (`worker-python`, `worker-android`, etc.)
- Added missing `worker-secrets` to the repository
- Improved worker naming consistency across the codebase

#### LiteLLM Integration

- Centralized LLM provider management with a proxy
- Governance and request/response routing
- OTEL collector integration for observability
- Environment-based configurable timeouts
- Optional `.env.litellm` configuration

### 🐛 Bug Fixes

- Fixed MobSF API key generation from the secret file (SHA256 hash)
- Corrected Temporal activity names (`decompile_with_jadx`, `scan_with_opengrep`, `scan_with_mobsf`)
- Resolved linter errors across the codebase
- Fixed unused imports to pass CI checks
- Removed deprecated workflow parameters
- Docker Compose version compatibility fixes

### 🔧 Technical Changes

- Conditional import pattern for optional dependencies (MobSF on ARM64)
- Multi-platform Dockerfile architecture
- Worker metadata convention for capability declaration
- Improved CI worker build optimization
- Enhanced storage activity error handling

### 📝 Test Projects

- Added `test_projects/android_test/` with BeetleBug.apk and shopnest.apk
- Android workflow validation with real APK samples
- ARM64 platform testing and validation

---

## [0.7.0] - 2025-10-16

### 🎯 Major Features

backend/src/api/system.py Normal file

@@ -0,0 +1,47 @@
# Copyright (c) 2025 FuzzingLabs
#
# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
# at the root of this repository for details.
#
# After the Change Date (four years from publication), this version of the
# Licensed Work will be made available under the Apache License, Version 2.0.
# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
#
# Additional attribution and requirements are provided in the NOTICE file.
"""
System information endpoints for FuzzForge API.
Provides system configuration and filesystem paths to CLI for worker management.
"""
import os
from typing import Dict
from fastapi import APIRouter
router = APIRouter(prefix="/system", tags=["system"])
@router.get("/info")
async def get_system_info() -> Dict[str, str]:
"""
Get system information including host filesystem paths.
This endpoint exposes paths needed by the CLI to manage workers via docker-compose.
The FUZZFORGE_HOST_ROOT environment variable is set by docker-compose and points
to the FuzzForge installation directory on the host machine.
Returns:
Dictionary containing:
- host_root: Absolute path to FuzzForge root on host
- docker_compose_path: Path to docker-compose.yml on host
- workers_dir: Path to workers directory on host
"""
host_root = os.getenv("FUZZFORGE_HOST_ROOT", "")
return {
"host_root": host_root,
"docker_compose_path": f"{host_root}/docker-compose.yml" if host_root else "",
"workers_dir": f"{host_root}/workers" if host_root else "",
}
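One contract worth noting for clients: when `FUZZFORGE_HOST_ROOT` is unset, the endpoint returns empty strings rather than paths rooted at `/`. A standalone sketch of that behaviour (hypothetical helper, not shipped code):

```python
def system_info(environ: dict) -> dict:
    # Mirrors the /system/info response shape from the endpoint above:
    # empty host_root yields empty derived paths, never "/docker-compose.yml".
    host_root = environ.get("FUZZFORGE_HOST_ROOT", "")
    return {
        "host_root": host_root,
        "docker_compose_path": f"{host_root}/docker-compose.yml" if host_root else "",
        "workers_dir": f"{host_root}/workers" if host_root else "",
    }

print(system_info({"FUZZFORGE_HOST_ROOT": "/opt/fuzzforge"}))
print(system_info({}))  # all values empty when the variable is unset
```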


@@ -24,7 +24,7 @@ from fastmcp.server.http import create_sse_app
from src.temporal.manager import TemporalManager
from src.core.setup import setup_result_storage, validate_infrastructure
from src.api import workflows, runs, fuzzing
from src.api import workflows, runs, fuzzing, system
from fastmcp import FastMCP
@@ -76,6 +76,7 @@ app = FastAPI(
app.include_router(workflows.router)
app.include_router(runs.router)
app.include_router(fuzzing.router)
app.include_router(system.router)
def get_temporal_status() -> Dict[str, Any]:


@@ -0,0 +1,31 @@
"""
Android Security Analysis Modules
Modules for Android application security testing:
- JadxDecompiler: APK decompilation using Jadx
- MobSFScanner: Mobile security analysis using MobSF
- OpenGrepAndroid: Static analysis using OpenGrep/Semgrep with Android-specific rules
"""
# Copyright (c) 2025 FuzzingLabs
#
# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
# at the root of this repository for details.
#
# After the Change Date (four years from publication), this version of the
# Licensed Work will be made available under the Apache License, Version 2.0.
# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
#
# Additional attribution and requirements are provided in the NOTICE file.
from .jadx_decompiler import JadxDecompiler
from .opengrep_android import OpenGrepAndroid
# MobSF is optional (not available on ARM64 platform)
try:
    from .mobsf_scanner import MobSFScanner

    __all__ = ["JadxDecompiler", "MobSFScanner", "OpenGrepAndroid"]
except ImportError:
    # MobSF dependencies not available (e.g., ARM64 platform)
    MobSFScanner = None
    __all__ = ["JadxDecompiler", "OpenGrepAndroid"]


@@ -0,0 +1,15 @@
rules:
  - id: clipboard-sensitive-data
    severity: WARNING
    languages: [java]
    message: "Sensitive data may be copied to the clipboard."
    metadata:
      authors:
        - Guerric ELOI (FuzzingLabs)
      category: security
      area: clipboard
      verification-level: [L1]
    paths:
      include:
        - "**/*.java"
    pattern: "$CLIPBOARD.setPrimaryClip($CLIP)"


@@ -0,0 +1,23 @@
rules:
  - id: hardcoded-secrets
    severity: WARNING
    languages: [java]
    message: "Possible hardcoded secret found in variable '$NAME'."
    metadata:
      authors:
        - Guerric ELOI (FuzzingLabs)
      owasp-mobile: M2
      category: secrets
      verification-level: [L1]
    paths:
      include:
        - "**/*.java"
    patterns:
      - pattern-either:
          - pattern: 'String $NAME = "$VAL";'
          - pattern: 'final String $NAME = "$VAL";'
          - pattern: 'private String $NAME = "$VAL";'
          - pattern: 'public static String $NAME = "$VAL";'
          - pattern: 'static final String $NAME = "$VAL";'
      - pattern-regex: "$NAME =~ /(?i).*(api|key|token|secret|pass|auth|session|bearer|access|private).*/"


@@ -0,0 +1,18 @@
rules:
  - id: insecure-data-storage
    severity: WARNING
    languages: [java]
    message: "Potential insecure data storage (external storage)."
    metadata:
      authors:
        - Guerric ELOI (FuzzingLabs)
      owasp-mobile: M2
      category: security
      area: storage
      verification-level: [L1]
    paths:
      include:
        - "**/*.java"
    pattern-either:
      - pattern: "$CTX.openFileOutput($NAME, $MODE)"
      - pattern: "Environment.getExternalStorageDirectory()"


@@ -0,0 +1,16 @@
rules:
  - id: insecure-deeplink
    severity: WARNING
    languages: [xml]
    message: "Potential insecure deeplink found in intent-filter."
    metadata:
      authors:
        - Guerric ELOI (FuzzingLabs)
      category: component
      area: manifest
      verification-level: [L1]
    paths:
      include:
        - "**/AndroidManifest.xml"
    pattern: |
      <intent-filter>


@@ -0,0 +1,21 @@
rules:
  - id: insecure-logging
    severity: WARNING
    languages: [java]
    message: "Sensitive data logged via Android Log API."
    metadata:
      authors:
        - Guerric ELOI (FuzzingLabs)
      owasp-mobile: M2
      category: logging
      verification-level: [L1]
    paths:
      include:
        - "**/*.java"
    patterns:
      - pattern-either:
          - pattern: "Log.d($TAG, $MSG)"
          - pattern: "Log.e($TAG, $MSG)"
          - pattern: "System.out.println($MSG)"
      - pattern-regex: "$MSG =~ /(?i).*(password|token|secret|api|auth|session).*/"


@@ -0,0 +1,15 @@
rules:
  - id: intent-redirection
    severity: WARNING
    languages: [java]
    message: "Potential intent redirection: using getIntent().getExtras() without validation."
    metadata:
      authors:
        - Guerric ELOI (FuzzingLabs)
      category: intent
      area: intercomponent
      verification-level: [L1]
    paths:
      include:
        - "**/*.java"
    pattern: "$ACT.getIntent().getExtras()"


@@ -0,0 +1,18 @@
rules:
  - id: sensitive-data-in-shared-preferences
    severity: WARNING
    languages: [java]
    message: "Sensitive data may be stored in SharedPreferences. Please review the key '$KEY'."
    metadata:
      authors:
        - Guerric ELOI (FuzzingLabs)
      owasp-mobile: M2
      category: security
      area: storage
      verification-level: [L1]
    paths:
      include:
        - "**/*.java"
    patterns:
      - pattern: "$EDITOR.putString($KEY, $VAL);"
      - pattern-regex: "$KEY =~ /(?i).*(username|password|pass|token|auth_token|api_key|secret|sessionid|email).*/"


@@ -0,0 +1,21 @@
rules:
  - id: sqlite-injection
    severity: ERROR
    languages: [java]
    message: "Possible SQL injection: concatenated input in rawQuery or execSQL."
    metadata:
      authors:
        - Guerric ELOI (FuzzingLabs)
      owasp-mobile: M7
      category: injection
      area: database
      verification-level: [L1]
    paths:
      include:
        - "**/*.java"
    patterns:
      - pattern-either:
          - pattern: "$DB.rawQuery($QUERY, ...)"
          - pattern: "$DB.execSQL($QUERY)"
      - pattern-regex: "$QUERY =~ /.*\".*\".*\\+.*/"


@@ -0,0 +1,16 @@
rules:
  - id: vulnerable-activity
    severity: WARNING
    languages: [xml]
    message: "Activity exported without permission."
    metadata:
      authors:
        - Guerric ELOI (FuzzingLabs)
      category: component
      area: manifest
      verification-level: [L1]
    paths:
      include:
        - "**/AndroidManifest.xml"
    pattern: |
      <activity android:exported="true"


@@ -0,0 +1,16 @@
rules:
  - id: vulnerable-content-provider
    severity: WARNING
    languages: [xml]
    message: "ContentProvider exported without permission."
    metadata:
      authors:
        - Guerric ELOI (FuzzingLabs)
      category: component
      area: manifest
      verification-level: [L1]
    paths:
      include:
        - "**/AndroidManifest.xml"
    pattern: |
      <provider android:exported="true"


@@ -0,0 +1,16 @@
rules:
  - id: vulnerable-service
    severity: WARNING
    languages: [xml]
    message: "Service exported without permission."
    metadata:
      authors:
        - Guerric ELOI (FuzzingLabs)
      category: component
      area: manifest
      verification-level: [L1]
    paths:
      include:
        - "**/AndroidManifest.xml"
    pattern: |
      <service android:exported="true"


@@ -0,0 +1,16 @@
rules:
  - id: webview-javascript-enabled
    severity: ERROR
    languages: [java]
    message: "WebView with JavaScript enabled can be dangerous if loading untrusted content."
    metadata:
      authors:
        - Guerric ELOI (FuzzingLabs)
      owasp-mobile: M7
      category: webview
      area: ui
      verification-level: [L1]
    paths:
      include:
        - "**/*.java"
    pattern: "$W.getSettings().setJavaScriptEnabled(true)"


@@ -0,0 +1,16 @@
rules:
  - id: webview-load-arbitrary-url
    severity: WARNING
    languages: [java]
    message: "Loading unvalidated URL in WebView may cause open redirect or XSS."
    metadata:
      authors:
        - Guerric ELOI (FuzzingLabs)
      owasp-mobile: M7
      category: webview
      area: ui
      verification-level: [L1]
    paths:
      include:
        - "**/*.java"
    pattern: "$W.loadUrl($URL)"


@@ -0,0 +1,270 @@
"""
Jadx APK Decompilation Module
Decompiles Android APK files to Java source code using Jadx.
"""
# Copyright (c) 2025 FuzzingLabs
#
# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
# at the root of this repository for details.
#
# After the Change Date (four years from publication), this version of the
# Licensed Work will be made available under the Apache License, Version 2.0.
# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
#
# Additional attribution and requirements are provided in the NOTICE file.
import asyncio
import shutil
import logging
from pathlib import Path
from typing import Dict, Any

try:
    from toolbox.modules.base import BaseModule, ModuleMetadata, ModuleResult
except ImportError:
    try:
        from modules.base import BaseModule, ModuleMetadata, ModuleResult
    except ImportError:
        from src.toolbox.modules.base import BaseModule, ModuleMetadata, ModuleResult

logger = logging.getLogger(__name__)
class JadxDecompiler(BaseModule):
    """Module for decompiling APK files to Java source code using Jadx"""

    def get_metadata(self) -> ModuleMetadata:
        return ModuleMetadata(
            name="jadx_decompiler",
            version="1.5.0",
            description="Android APK decompilation using Jadx - converts DEX bytecode to Java source",
            author="FuzzForge Team",
            category="android",
            tags=["android", "jadx", "decompilation", "reverse", "apk"],
            input_schema={
                "type": "object",
                "properties": {
                    "apk_path": {
                        "type": "string",
                        "description": "Path to the APK to decompile (absolute or relative to workspace)",
                    },
                    "output_dir": {
                        "type": "string",
                        "description": "Directory (relative to workspace) where Jadx output should be written",
                        "default": "jadx_output",
                    },
                    "overwrite": {
                        "type": "boolean",
                        "description": "Overwrite existing output directory if present",
                        "default": True,
                    },
                    "threads": {
                        "type": "integer",
                        "description": "Number of Jadx decompilation threads",
                        "default": 4,
                        "minimum": 1,
                        "maximum": 32,
                    },
                    "decompiler_args": {
                        "type": "array",
                        "items": {"type": "string"},
                        "description": "Additional arguments passed directly to Jadx",
                        "default": [],
                    },
                },
                "required": ["apk_path"],
            },
            output_schema={
                "type": "object",
                "properties": {
                    "output_dir": {
                        "type": "string",
                        "description": "Path to decompiled output directory",
                    },
                    "source_dir": {
                        "type": "string",
                        "description": "Path to decompiled Java sources",
                    },
                    "resource_dir": {
                        "type": "string",
                        "description": "Path to extracted resources",
                    },
                    "java_files": {
                        "type": "integer",
                        "description": "Number of Java files decompiled",
                    },
                },
            },
            requires_workspace=True,
        )
    def validate_config(self, config: Dict[str, Any]) -> bool:
        """Validate module configuration"""
        apk_path = config.get("apk_path")
        if not apk_path:
            raise ValueError("'apk_path' must be provided for Jadx decompilation")

        threads = config.get("threads", 4)
        if not isinstance(threads, int) or threads < 1 or threads > 32:
            raise ValueError("threads must be between 1 and 32")

        return True
    async def execute(self, config: Dict[str, Any], workspace: Path) -> ModuleResult:
        """
        Execute Jadx decompilation on an APK file.

        Args:
            config: Configuration dict with apk_path, output_dir, etc.
            workspace: Workspace directory path

        Returns:
            ModuleResult with decompilation summary and metadata
        """
        self.start_timer()

        try:
            self.validate_config(config)
            self.validate_workspace(workspace)
            workspace = workspace.resolve()

            # Resolve APK path
            apk_path = Path(config["apk_path"])
            if not apk_path.is_absolute():
                apk_path = (workspace / apk_path).resolve()
            if not apk_path.exists():
                raise ValueError(f"APK not found: {apk_path}")
            if apk_path.is_dir():
                raise ValueError(f"APK path must be a file, not a directory: {apk_path}")

            logger.info(f"Decompiling APK: {apk_path}")

            # Resolve output directory
            output_dir = Path(config.get("output_dir", "jadx_output"))
            if not output_dir.is_absolute():
                output_dir = (workspace / output_dir).resolve()

            # Handle existing output directory
            if output_dir.exists():
                if config.get("overwrite", True):
                    logger.info(f"Removing existing output directory: {output_dir}")
                    shutil.rmtree(output_dir)
                else:
                    raise ValueError(
                        f"Output directory already exists: {output_dir}. Set overwrite=true to replace it."
                    )
            output_dir.mkdir(parents=True, exist_ok=True)

            # Build Jadx command
            threads = str(config.get("threads", 4))
            extra_args = config.get("decompiler_args", []) or []
            cmd = [
                "jadx",
                "--threads-count",
                threads,
                "--deobf",  # Deobfuscate code
                "--output-dir",
                str(output_dir),
            ]
            cmd.extend(extra_args)
            cmd.append(str(apk_path))

            logger.info(f"Running Jadx: {' '.join(cmd)}")

            # Execute Jadx
            process = await asyncio.create_subprocess_exec(
                *cmd,
                stdout=asyncio.subprocess.PIPE,
                stderr=asyncio.subprocess.PIPE,
                cwd=str(workspace),
            )
            stdout, stderr = await process.communicate()
            stdout_str = stdout.decode(errors="ignore") if stdout else ""
            stderr_str = stderr.decode(errors="ignore") if stderr else ""

            if stdout_str:
                logger.debug(f"Jadx stdout: {stdout_str[:200]}...")
            if stderr_str:
                logger.debug(f"Jadx stderr: {stderr_str[:200]}...")

            if process.returncode != 0:
                error_output = stderr_str or stdout_str or "No error output"
                raise RuntimeError(
                    f"Jadx failed with exit code {process.returncode}: {error_output[:500]}"
                )

            # Verify output structure
            source_dir = output_dir / "sources"
            resource_dir = output_dir / "resources"
            if not source_dir.exists():
                logger.warning(
                    f"Jadx sources directory not found at expected path: {source_dir}"
                )
                # Use output_dir as fallback
                source_dir = output_dir

            # Count decompiled Java files
            java_files = 0
            if source_dir.exists():
                java_files = sum(1 for _ in source_dir.rglob("*.java"))
            logger.info(f"Decompiled {java_files} Java files")

            # Log sample files for debugging
            sample_files = []
            for idx, file_path in enumerate(source_dir.rglob("*.java")):
                sample_files.append(str(file_path.relative_to(workspace)))
                if idx >= 4:
                    break
            if sample_files:
                logger.debug(f"Sample Java files: {sample_files}")

            # Create summary
            summary = {
                "output_dir": str(output_dir),
                "source_dir": str(source_dir if source_dir.exists() else output_dir),
                "resource_dir": str(
                    resource_dir if resource_dir.exists() else output_dir
                ),
                "java_files": java_files,
                "apk_name": apk_path.name,
                "apk_size_bytes": apk_path.stat().st_size,
            }
            metadata = {
                "apk_path": str(apk_path),
                "output_dir": str(output_dir),
                "source_dir": summary["source_dir"],
                "resource_dir": summary["resource_dir"],
                "threads": threads,
                "decompiler": "jadx",
                "decompiler_version": "1.5.0",
            }

            logger.info(
                f"✓ Jadx decompilation completed: {java_files} Java files generated"
            )

            return self.create_result(
                findings=[],  # Jadx doesn't generate findings, only decompiles
                status="success",
                summary=summary,
                metadata=metadata,
            )

        except Exception as exc:
            logger.error(f"Jadx decompilation failed: {exc}", exc_info=True)
            return self.create_result(
                findings=[],
                status="failed",
                error=str(exc),
                metadata={"decompiler": "jadx", "apk_path": config.get("apk_path")},
            )
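The command construction above can be exercised in isolation. This standalone helper mirrors the flag layout the module uses (a sketch for illustration, not part of the PR):

```python
def build_jadx_cmd(apk_path: str, output_dir: str, threads: int = 4, extra_args=()) -> list:
    # Same flag order as JadxDecompiler.execute: thread count, deobfuscation,
    # output directory, any extra arguments, then the APK path last.
    cmd = ["jadx", "--threads-count", str(threads), "--deobf", "--output-dir", output_dir]
    cmd.extend(extra_args)
    cmd.append(apk_path)
    return cmd

print(build_jadx_cmd("app.apk", "jadx_output", threads=8, extra_args=["--no-res"]))
```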


@@ -0,0 +1,393 @@
"""
MobSF Scanner Module
Mobile Security Framework (MobSF) integration for comprehensive Android app security analysis.
Performs static analysis on APK files including permissions, manifest analysis, code analysis, and behavior checks.
"""
# Copyright (c) 2025 FuzzingLabs
#
# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
# at the root of this repository for details.
#
# After the Change Date (four years from publication), this version of the
# Licensed Work will be made available under the Apache License, Version 2.0.
# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
#
# Additional attribution and requirements are provided in the NOTICE file.
import logging
import os
from collections import Counter
from pathlib import Path
from typing import Dict, Any, List

import aiohttp

try:
    from toolbox.modules.base import BaseModule, ModuleMetadata, ModuleFinding, ModuleResult
except ImportError:
    try:
        from modules.base import BaseModule, ModuleMetadata, ModuleFinding, ModuleResult
    except ImportError:
        from src.toolbox.modules.base import BaseModule, ModuleMetadata, ModuleFinding, ModuleResult

logger = logging.getLogger(__name__)
class MobSFScanner(BaseModule):
    """Mobile Security Framework (MobSF) scanner module for Android applications"""

    SEVERITY_MAP = {
        "dangerous": "critical",
        "high": "high",
        "warning": "medium",
        "medium": "medium",
        "low": "low",
        "info": "low",
        "secure": "low",
    }

    def get_metadata(self) -> ModuleMetadata:
        return ModuleMetadata(
            name="mobsf_scanner",
            version="3.9.7",
            description="Comprehensive Android security analysis using Mobile Security Framework (MobSF)",
            author="FuzzForge Team",
            category="android",
            tags=["mobile", "android", "mobsf", "sast", "scanner", "security"],
            input_schema={
                "type": "object",
                "properties": {
                    "mobsf_url": {
                        "type": "string",
                        "description": "MobSF server URL",
                        "default": "http://localhost:8877",
                    },
                    "file_path": {
                        "type": "string",
                        "description": "Path to the APK file to scan (absolute or relative to workspace)",
                    },
                    "api_key": {
                        "type": "string",
                        "description": "MobSF API key (if not provided, will try MOBSF_API_KEY env var)",
                        "default": None,
                    },
                    "rescan": {
                        "type": "boolean",
                        "description": "Force rescan even if file was previously analyzed",
                        "default": False,
                    },
                },
                "required": ["file_path"],
            },
            output_schema={
                "type": "object",
                "properties": {
                    "findings": {
                        "type": "array",
                        "description": "Security findings from MobSF analysis",
                    },
                    "scan_hash": {"type": "string"},
                    "total_findings": {"type": "integer"},
                    "severity_counts": {"type": "object"},
                },
            },
            requires_workspace=True,
        )

    def validate_config(self, config: Dict[str, Any]) -> bool:
        """Validate module configuration"""
        if "mobsf_url" in config and not isinstance(config["mobsf_url"], str):
            raise ValueError("mobsf_url must be a string")

        file_path = config.get("file_path")
        if not file_path:
            raise ValueError("file_path is required for MobSF scanning")

        return True
    async def execute(self, config: Dict[str, Any], workspace: Path) -> ModuleResult:
        """
        Execute MobSF security analysis on an APK file.

        Args:
            config: Configuration dict with file_path, mobsf_url, api_key
            workspace: Workspace directory path

        Returns:
            ModuleResult with security findings from MobSF
        """
        self.start_timer()

        try:
            self.validate_config(config)
            self.validate_workspace(workspace)

            # Get configuration
            mobsf_url = config.get("mobsf_url", "http://localhost:8877")
            file_path_str = config["file_path"]
            rescan = config.get("rescan", False)

            # Get API key from config or environment
            api_key = config.get("api_key") or os.environ.get("MOBSF_API_KEY", "")
            if not api_key:
                logger.warning("No MobSF API key provided. Some functionality may be limited.")

            # Resolve APK file path
            file_path = Path(file_path_str)
            if not file_path.is_absolute():
                file_path = (workspace / file_path).resolve()
            if not file_path.exists():
                raise FileNotFoundError(f"APK file not found: {file_path}")
            if not file_path.is_file():
                raise ValueError(f"APK path must be a file: {file_path}")

            logger.info(f"Starting MobSF scan of APK: {file_path}")

            # Upload and scan APK
            scan_hash = await self._upload_file(mobsf_url, file_path, api_key)
            logger.info(f"APK uploaded to MobSF with hash: {scan_hash}")

            # Start scan
            await self._start_scan(mobsf_url, scan_hash, api_key, rescan=rescan)
            logger.info(f"MobSF scan completed for hash: {scan_hash}")

            # Get JSON results
            scan_results = await self._get_json_results(mobsf_url, scan_hash, api_key)

            # Parse results into findings
            findings = self._parse_scan_results(scan_results, file_path)

            # Create summary
            summary = self._create_summary(findings, scan_hash)

            logger.info(f"✓ MobSF scan completed: {len(findings)} findings")

            return self.create_result(
                findings=findings,
                status="success",
                summary=summary,
                metadata={
                    "tool": "mobsf",
                    "tool_version": "3.9.7",
                    "scan_hash": scan_hash,
                    "apk_file": str(file_path),
                    "mobsf_url": mobsf_url,
                },
            )

        except Exception as exc:
            logger.error(f"MobSF scanner failed: {exc}", exc_info=True)
            return self.create_result(
                findings=[],
                status="failed",
                error=str(exc),
                metadata={"tool": "mobsf", "file_path": config.get("file_path")},
            )
    async def _upload_file(self, mobsf_url: str, file_path: Path, api_key: str) -> str:
        """
        Upload APK file to MobSF server.

        Returns:
            Scan hash for the uploaded file
        """
        headers = {'X-Mobsf-Api-Key': api_key} if api_key else {}

        # Create multipart form data
        filename = file_path.name
        async with aiohttp.ClientSession() as session:
            with open(file_path, 'rb') as f:
                data = aiohttp.FormData()
                data.add_field(
                    'file',
                    f,
                    filename=filename,
                    content_type='application/vnd.android.package-archive',
                )
                async with session.post(
                    f"{mobsf_url}/api/v1/upload",
                    headers=headers,
                    data=data,
                    timeout=aiohttp.ClientTimeout(total=300),
                ) as response:
                    if response.status != 200:
                        error_text = await response.text()
                        raise Exception(f"Failed to upload file to MobSF: {error_text}")
                    result = await response.json()
                    scan_hash = result.get('hash')
                    if not scan_hash:
                        raise Exception(f"MobSF upload failed: {result}")
                    return scan_hash

    async def _start_scan(self, mobsf_url: str, scan_hash: str, api_key: str, rescan: bool = False) -> Dict[str, Any]:
        """
        Start MobSF scan for uploaded file.

        Returns:
            Scan result dictionary
        """
        headers = {'X-Mobsf-Api-Key': api_key} if api_key else {}
        data = {
            'hash': scan_hash,
            're_scan': '1' if rescan else '0',
        }
        async with aiohttp.ClientSession() as session:
            async with session.post(
                f"{mobsf_url}/api/v1/scan",
                headers=headers,
                data=data,
                timeout=aiohttp.ClientTimeout(total=600),  # 10 minutes for scan
            ) as response:
                if response.status != 200:
                    error_text = await response.text()
                    raise Exception(f"MobSF scan failed: {error_text}")
                result = await response.json()
                return result

    async def _get_json_results(self, mobsf_url: str, scan_hash: str, api_key: str) -> Dict[str, Any]:
        """
        Retrieve JSON scan results from MobSF.

        Returns:
            Scan results dictionary
        """
        headers = {'X-Mobsf-Api-Key': api_key} if api_key else {}
        data = {'hash': scan_hash}
        async with aiohttp.ClientSession() as session:
            async with session.post(
                f"{mobsf_url}/api/v1/report_json",
                headers=headers,
                data=data,
                timeout=aiohttp.ClientTimeout(total=60),
            ) as response:
                if response.status != 200:
                    error_text = await response.text()
                    raise Exception(f"Failed to retrieve MobSF results: {error_text}")
                return await response.json()
    def _parse_scan_results(self, scan_data: Dict[str, Any], apk_path: Path) -> List[ModuleFinding]:
        """Parse MobSF JSON results into standardized findings"""
        findings = []

        # Parse permissions
        if 'permissions' in scan_data:
            for perm_name, perm_attrs in scan_data['permissions'].items():
                if isinstance(perm_attrs, dict):
                    severity = self.SEVERITY_MAP.get(
                        perm_attrs.get('status', '').lower(), 'low'
                    )
                    finding = self.create_finding(
                        title=f"Android Permission: {perm_name}",
                        description=perm_attrs.get('description', 'No description'),
                        severity=severity,
                        category="android-permission",
                        metadata={
                            'permission': perm_name,
                            'status': perm_attrs.get('status'),
                            'info': perm_attrs.get('info'),
                            'tool': 'mobsf',
                        },
                    )
                    findings.append(finding)

        # Parse manifest analysis
        if 'manifest_analysis' in scan_data:
            manifest_findings = scan_data['manifest_analysis'].get('manifest_findings', [])
            for item in manifest_findings:
                if isinstance(item, dict):
                    severity = self.SEVERITY_MAP.get(item.get('severity', '').lower(), 'medium')
                    finding = self.create_finding(
                        title=item.get('title') or item.get('name') or "Manifest Issue",
                        description=item.get('description', 'No description'),
                        severity=severity,
                        category="android-manifest",
                        metadata={
                            'rule': item.get('rule'),
                            'tool': 'mobsf',
                        },
                    )
                    findings.append(finding)

        # Parse code analysis
        if 'code_analysis' in scan_data:
            code_findings = scan_data['code_analysis'].get('findings', {})
            for finding_name, finding_data in code_findings.items():
                if isinstance(finding_data, dict):
                    metadata_dict = finding_data.get('metadata', {})
                    severity = self.SEVERITY_MAP.get(
                        metadata_dict.get('severity', '').lower(), 'medium'
                    )
                    files_list = finding_data.get('files', [])
                    file_path = files_list[0] if files_list else None
                    finding = self.create_finding(
                        title=finding_name,
                        description=metadata_dict.get('description', 'No description'),
                        severity=severity,
                        category="android-code-analysis",
                        file_path=file_path,
                        metadata={
                            'cwe': metadata_dict.get('cwe'),
                            'owasp': metadata_dict.get('owasp'),
                            'files': files_list,
                            'tool': 'mobsf',
                        },
                    )
                    findings.append(finding)

        # Parse behavior analysis
        if 'behaviour' in scan_data:
            for key, value in scan_data['behaviour'].items():
                if isinstance(value, dict):
                    metadata_dict = value.get('metadata', {})
                    labels = metadata_dict.get('label', [])
                    label = labels[0] if labels else 'Unknown Behavior'
                    severity = self.SEVERITY_MAP.get(
                        metadata_dict.get('severity', '').lower(), 'medium'
                    )
                    files_list = value.get('files', [])
                    finding = self.create_finding(
                        title=f"Behavior: {label}",
                        description=metadata_dict.get('description', 'No description'),
                        severity=severity,
                        category="android-behavior",
                        metadata={
                            'files': files_list,
                            'tool': 'mobsf',
                        },
                    )
                    findings.append(finding)

        logger.debug(f"Parsed {len(findings)} findings from MobSF results")
        return findings
    def _create_summary(self, findings: List[ModuleFinding], scan_hash: str) -> Dict[str, Any]:
        """Create analysis summary"""
        severity_counter = Counter()
        category_counter = Counter()
        for finding in findings:
            severity_counter[finding.severity] += 1
            category_counter[finding.category] += 1

        return {
            "scan_hash": scan_hash,
            "total_findings": len(findings),
            "severity_counts": dict(severity_counter),
            "category_counts": dict(category_counter),
        }
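The severity normalisation used throughout the scanner is easy to check standalone. This sketch reuses the same `SEVERITY_MAP`; the helper name is hypothetical:

```python
from collections import Counter

SEVERITY_MAP = {
    "dangerous": "critical", "high": "high", "warning": "medium",
    "medium": "medium", "low": "low", "info": "low", "secure": "low",
}

def count_severities(statuses):
    # Lowercase raw MobSF status strings as MobSFScanner does, fall back to
    # "medium" for unknown values (the manifest/code-analysis default), then tally.
    return dict(Counter(SEVERITY_MAP.get(s.lower(), "medium") for s in statuses))

print(count_severities(["dangerous", "Warning", "info", "bogus"]))
# → {'critical': 1, 'medium': 2, 'low': 1}
```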


@@ -0,0 +1,440 @@
"""
OpenGrep Android Static Analysis Module
Pattern-based static analysis for Android applications using OpenGrep/Semgrep
with Android-specific security rules.
"""
# Copyright (c) 2025 FuzzingLabs
#
# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
# at the root of this repository for details.
#
# After the Change Date (four years from publication), this version of the
# Licensed Work will be made available under the Apache License, Version 2.0.
# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
#
# Additional attribution and requirements are provided in the NOTICE file.
import asyncio
import json
import logging
from pathlib import Path
from typing import Dict, Any, List
try:
from toolbox.modules.base import BaseModule, ModuleMetadata, ModuleFinding, ModuleResult
except ImportError:
try:
from modules.base import BaseModule, ModuleMetadata, ModuleFinding, ModuleResult
except ImportError:
from src.toolbox.modules.base import BaseModule, ModuleMetadata, ModuleFinding, ModuleResult
logger = logging.getLogger(__name__)
class OpenGrepAndroid(BaseModule):
"""OpenGrep static analysis module specialized for Android security"""
def get_metadata(self) -> ModuleMetadata:
"""Get module metadata"""
return ModuleMetadata(
name="opengrep_android",
version="1.45.0",
description="Android-focused static analysis using OpenGrep/Semgrep with custom security rules for Java/Kotlin",
author="FuzzForge Team",
category="android",
tags=["sast", "android", "opengrep", "semgrep", "java", "kotlin", "security"],
input_schema={
"type": "object",
"properties": {
"config": {
"type": "string",
"enum": ["auto", "p/security-audit", "p/owasp-top-ten", "p/cwe-top-25"],
"default": "auto",
"description": "Rule configuration to use"
},
"custom_rules_path": {
"type": "string",
"description": "Path to a directory containing custom OpenGrep rules (Android-specific rules recommended)",
"default": None,
},
"languages": {
"type": "array",
"items": {"type": "string"},
"description": "Specific languages to analyze (defaults to java, kotlin for Android)",
"default": ["java", "kotlin"],
},
"include_patterns": {
"type": "array",
"items": {"type": "string"},
"description": "File patterns to include",
"default": [],
},
"exclude_patterns": {
"type": "array",
"items": {"type": "string"},
"description": "File patterns to exclude",
"default": [],
},
"max_target_bytes": {
"type": "integer",
"default": 1000000,
"description": "Maximum file size to analyze (bytes)"
},
"timeout": {
"type": "integer",
"default": 300,
"description": "Analysis timeout in seconds"
},
"severity": {
"type": "array",
"items": {"type": "string", "enum": ["ERROR", "WARNING", "INFO"]},
"default": ["ERROR", "WARNING", "INFO"],
"description": "Minimum severity levels to report"
},
"confidence": {
"type": "array",
"items": {"type": "string", "enum": ["HIGH", "MEDIUM", "LOW"]},
"default": ["HIGH", "MEDIUM", "LOW"],
"description": "Minimum confidence levels to report"
}
}
},
output_schema={
"type": "object",
"properties": {
"findings": {
"type": "array",
"description": "Security findings from OpenGrep analysis"
},
"total_findings": {"type": "integer"},
"severity_counts": {"type": "object"},
"files_analyzed": {"type": "integer"},
}
},
requires_workspace=True,
)
def validate_config(self, config: Dict[str, Any]) -> bool:
"""Validate configuration"""
timeout = config.get("timeout", 300)
if not isinstance(timeout, int) or timeout < 30 or timeout > 3600:
raise ValueError("Timeout must be between 30 and 3600 seconds")
max_bytes = config.get("max_target_bytes", 1000000)
if not isinstance(max_bytes, int) or max_bytes < 1000 or max_bytes > 10000000:
raise ValueError("max_target_bytes must be between 1000 and 10000000")
custom_rules_path = config.get("custom_rules_path")
if custom_rules_path:
rules_path = Path(custom_rules_path)
if not rules_path.exists():
logger.warning(f"Custom rules path does not exist: {custom_rules_path}")
return True
async def execute(self, config: Dict[str, Any], workspace: Path) -> ModuleResult:
"""Execute OpenGrep static analysis on Android code"""
self.start_timer()
try:
# Validate inputs
self.validate_config(config)
self.validate_workspace(workspace)
logger.info(f"Running OpenGrep Android analysis on {workspace}")
# Build opengrep command
cmd = ["opengrep", "scan", "--json"]
# Add configuration
custom_rules_path = config.get("custom_rules_path")
use_custom_rules = False
if custom_rules_path and Path(custom_rules_path).exists():
cmd.extend(["--config", custom_rules_path])
use_custom_rules = True
logger.info(f"Using custom Android rules from: {custom_rules_path}")
else:
config_type = config.get("config", "auto")
if config_type == "auto":
cmd.extend(["--config", "auto"])
else:
cmd.extend(["--config", config_type])
# Add timeout
cmd.extend(["--timeout", str(config.get("timeout", 300))])
# Add max target bytes
cmd.extend(["--max-target-bytes", str(config.get("max_target_bytes", 1000000))])
# Add languages if specified (but NOT when using custom rules)
languages = config.get("languages", ["java", "kotlin"])
if languages and not use_custom_rules:
langs = ",".join(languages)
cmd.extend(["--lang", langs])
logger.debug(f"Analyzing languages: {langs}")
# Add include patterns
include_patterns = config.get("include_patterns", [])
for pattern in include_patterns:
cmd.extend(["--include", pattern])
# Add exclude patterns
exclude_patterns = config.get("exclude_patterns", [])
for pattern in exclude_patterns:
cmd.extend(["--exclude", pattern])
# Add severity filter if single level requested
severity_levels = config.get("severity", ["ERROR", "WARNING", "INFO"])
if severity_levels and len(severity_levels) == 1:
cmd.extend(["--severity", severity_levels[0]])
# Disable version check and .gitignore-based exclusions
cmd.append("--disable-version-check")
cmd.append("--no-git-ignore")
# Add target directory
cmd.append(str(workspace))
logger.debug(f"Running command: {' '.join(cmd)}")
# Run OpenGrep
process = await asyncio.create_subprocess_exec(
*cmd,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
cwd=workspace
)
stdout, stderr = await process.communicate()
# Parse results
findings = []
if process.returncode in [0, 1]: # 0 = no findings, 1 = findings found
findings = self._parse_opengrep_output(stdout.decode(), workspace, config)
logger.info(f"OpenGrep found {len(findings)} potential security issues")
else:
error_msg = stderr.decode()
logger.error(f"OpenGrep failed: {error_msg}")
return self.create_result(
findings=[],
status="failed",
error=f"OpenGrep execution failed (exit code {process.returncode}): {error_msg[:500]}"
)
# Create summary
summary = self._create_summary(findings)
return self.create_result(
findings=findings,
status="success",
summary=summary,
metadata={
"tool": "opengrep",
"tool_version": "1.45.0",
"languages": languages,
"custom_rules": bool(custom_rules_path),
}
)
except Exception as e:
logger.error(f"OpenGrep Android module failed: {e}", exc_info=True)
return self.create_result(
findings=[],
status="failed",
error=str(e)
)
def _parse_opengrep_output(self, output: str, workspace: Path, config: Dict[str, Any]) -> List[ModuleFinding]:
"""Parse OpenGrep JSON output into findings"""
findings = []
if not output.strip():
return findings
try:
data = json.loads(output)
results = data.get("results", [])
logger.debug(f"OpenGrep returned {len(results)} raw results")
# Get filtering criteria
allowed_severities = set(config.get("severity", ["ERROR", "WARNING", "INFO"]))
allowed_confidences = set(config.get("confidence", ["HIGH", "MEDIUM", "LOW"]))
for result in results:
# Extract basic info
rule_id = result.get("check_id", "unknown")
message = result.get("message", "")
extra = result.get("extra", {})
severity = extra.get("severity", "INFO").upper()
# File location info
path_info = result.get("path", "")
start_line = result.get("start", {}).get("line", 0)
end_line = result.get("end", {}).get("line", 0)
# Code snippet
lines = extra.get("lines", "")
# Metadata
rule_metadata = extra.get("metadata", {})
cwe = rule_metadata.get("cwe", [])
owasp = rule_metadata.get("owasp", [])
confidence = extra.get("confidence", rule_metadata.get("confidence", "MEDIUM")).upper()
# Apply severity filter
if severity not in allowed_severities:
continue
# Apply confidence filter
if confidence not in allowed_confidences:
continue
# Make file path relative to workspace
if path_info:
try:
rel_path = Path(path_info).relative_to(workspace)
path_info = str(rel_path)
except ValueError:
pass
# Map severity to our standard levels
finding_severity = self._map_severity(severity)
# Create finding
finding = self.create_finding(
title=f"Android Security: {rule_id}",
description=message or f"OpenGrep rule {rule_id} triggered",
severity=finding_severity,
category=self._get_category(rule_id, extra),
file_path=path_info if path_info else None,
line_start=start_line if start_line > 0 else None,
line_end=end_line if end_line > 0 and end_line != start_line else None,
code_snippet=lines.strip() if lines else None,
recommendation=self._get_recommendation(rule_id, extra),
metadata={
"rule_id": rule_id,
"opengrep_severity": severity,
"confidence": confidence,
"cwe": cwe,
"owasp": owasp,
"fix": extra.get("fix", ""),
"impact": extra.get("impact", ""),
"likelihood": extra.get("likelihood", ""),
"references": extra.get("references", []),
"tool": "opengrep",
}
)
findings.append(finding)
except json.JSONDecodeError as e:
logger.warning(f"Failed to parse OpenGrep output: {e}. Output snippet: {output[:200]}...")
except Exception as e:
logger.warning(f"Error processing OpenGrep results: {e}", exc_info=True)
return findings
def _map_severity(self, opengrep_severity: str) -> str:
"""Map OpenGrep severity to our standard severity levels"""
severity_map = {
"ERROR": "high",
"WARNING": "medium",
"INFO": "low"
}
return severity_map.get(opengrep_severity.upper(), "medium")
def _get_category(self, rule_id: str, extra: Dict[str, Any]) -> str:
"""Determine finding category based on rule and metadata"""
rule_metadata = extra.get("metadata", {})
cwe_list = rule_metadata.get("cwe", [])
owasp_list = rule_metadata.get("owasp", [])
rule_lower = rule_id.lower()
# Android-specific categories
if "injection" in rule_lower or "sql" in rule_lower:
return "injection"
elif "intent" in rule_lower:
return "android-intent"
elif "webview" in rule_lower:
return "android-webview"
elif "deeplink" in rule_lower:
return "android-deeplink"
elif "storage" in rule_lower or "sharedpreferences" in rule_lower:
return "android-storage"
elif "logging" in rule_lower or "log" in rule_lower:
return "android-logging"
elif "clipboard" in rule_lower:
return "android-clipboard"
elif "activity" in rule_lower or "service" in rule_lower or "provider" in rule_lower:
return "android-component"
elif "crypto" in rule_lower or "encrypt" in rule_lower:
return "cryptography"
elif "hardcode" in rule_lower or "secret" in rule_lower:
return "secrets"
elif "auth" in rule_lower:
return "authentication"
elif cwe_list:
return f"cwe-{cwe_list[0]}"
elif owasp_list:
return f"owasp-{owasp_list[0].replace(' ', '-').lower()}"
else:
return "android-security"
def _get_recommendation(self, rule_id: str, extra: Dict[str, Any]) -> str:
"""Generate recommendation based on rule and metadata"""
fix_suggestion = extra.get("fix", "")
if fix_suggestion:
return fix_suggestion
rule_lower = rule_id.lower()
# Android-specific recommendations
if "injection" in rule_lower or "sql" in rule_lower:
return "Use parameterized queries or Room database with type-safe queries to prevent SQL injection."
elif "intent" in rule_lower:
return "Validate all incoming Intent data and use explicit Intents when possible to prevent Intent manipulation attacks."
elif "webview" in rule_lower and "javascript" in rule_lower:
return "Disable JavaScript in WebView if not needed, or implement proper JavaScript interfaces with @JavascriptInterface annotation."
elif "deeplink" in rule_lower:
return "Validate all deeplink URLs and sanitize user input to prevent deeplink hijacking attacks."
elif "storage" in rule_lower or "sharedpreferences" in rule_lower:
return "Encrypt sensitive data before storing in SharedPreferences or use EncryptedSharedPreferences for Android API 23+."
elif "logging" in rule_lower:
return "Remove sensitive data from logs in production builds. Use ProGuard/R8 to strip logging statements."
elif "clipboard" in rule_lower:
return "Avoid placing sensitive data on the clipboard. If necessary, clear clipboard data when no longer needed."
elif "crypto" in rule_lower:
return "Use modern cryptographic algorithms (AES-GCM, RSA-OAEP) and Android Keystore for key management."
elif "hardcode" in rule_lower or "secret" in rule_lower:
return "Remove hardcoded secrets. Use Android Keystore, environment variables, or secure configuration management."
else:
return "Review this Android security issue and apply appropriate fixes based on Android security best practices."
def _create_summary(self, findings: List[ModuleFinding]) -> Dict[str, Any]:
"""Create analysis summary"""
severity_counts = {"critical": 0, "high": 0, "medium": 0, "low": 0}
category_counts = {}
rule_counts = {}
for finding in findings:
# Count by severity
severity_counts[finding.severity] = severity_counts.get(finding.severity, 0) + 1
# Count by category
category = finding.category
category_counts[category] = category_counts.get(category, 0) + 1
# Count by rule
rule_id = finding.metadata.get("rule_id", "unknown")
rule_counts[rule_id] = rule_counts.get(rule_id, 0) + 1
return {
"total_findings": len(findings),
"severity_counts": severity_counts,
"category_counts": category_counts,
"top_rules": dict(sorted(rule_counts.items(), key=lambda x: x[1], reverse=True)[:10]),
"files_analyzed": len(set(f.file_path for f in findings if f.file_path))
}
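The parser above follows the Semgrep-style JSON layout (`results[].check_id`, `results[].extra.severity`, and so on). A minimal sketch of that mapping, with a hypothetical sample result and the same ERROR/WARNING/INFO to high/medium/low mapping:

```python
import json

SEVERITY_MAP = {"ERROR": "high", "WARNING": "medium", "INFO": "low"}

sample_output = json.dumps({
    "results": [{
        "check_id": "android-webview-js-enabled",       # hypothetical rule id
        "path": "app/src/main/java/MainActivity.java",
        "start": {"line": 42},
        "end": {"line": 42},
        "message": "JavaScript enabled on WebView",
        "extra": {"severity": "WARNING", "metadata": {"cwe": ["CWE-749"]}},
    }]
})

findings = []
for result in json.loads(sample_output).get("results", []):
    extra = result.get("extra", {})
    findings.append({
        "rule_id": result.get("check_id", "unknown"),
        "severity": SEVERITY_MAP.get(extra.get("severity", "INFO").upper(), "medium"),
        "file": result.get("path"),
        "line": result.get("start", {}).get("line", 0),
    })
```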

View File

@@ -0,0 +1,35 @@
"""
Android Static Analysis Workflow
Comprehensive Android application security testing combining:
- Jadx APK decompilation
- OpenGrep/Semgrep static analysis with Android-specific rules
- MobSF mobile security framework analysis
"""
# Copyright (c) 2025 FuzzingLabs
#
# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
# at the root of this repository for details.
#
# After the Change Date (four years from publication), this version of the
# Licensed Work will be made available under the Apache License, Version 2.0.
# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
#
# Additional attribution and requirements are provided in the NOTICE file.
from .workflow import AndroidStaticAnalysisWorkflow
from .activities import (
decompile_with_jadx_activity,
scan_with_opengrep_activity,
scan_with_mobsf_activity,
generate_android_sarif_activity,
)
__all__ = [
"AndroidStaticAnalysisWorkflow",
"decompile_with_jadx_activity",
"scan_with_opengrep_activity",
"scan_with_mobsf_activity",
"generate_android_sarif_activity",
]

View File

@@ -0,0 +1,213 @@
"""
Android Static Analysis Workflow Activities
Activities for the Android security testing workflow:
- decompile_with_jadx_activity: Decompile APK using Jadx
- scan_with_opengrep_activity: Analyze code with OpenGrep/Semgrep
- scan_with_mobsf_activity: Scan APK with MobSF
- generate_android_sarif_activity: Generate combined SARIF report
"""
# Copyright (c) 2025 FuzzingLabs
#
# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
# at the root of this repository for details.
#
# After the Change Date (four years from publication), this version of the
# Licensed Work will be made available under the Apache License, Version 2.0.
# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
#
# Additional attribution and requirements are provided in the NOTICE file.
import logging
import sys
from pathlib import Path
from temporalio import activity
# Configure logging
logger = logging.getLogger(__name__)
# Add toolbox to path for module imports
sys.path.insert(0, '/app/toolbox')
@activity.defn(name="decompile_with_jadx")
async def decompile_with_jadx_activity(workspace_path: str, config: dict) -> dict:
"""
Decompile Android APK to Java source code using Jadx.
Args:
workspace_path: Path to the workspace directory
config: JadxDecompiler configuration
Returns:
Decompilation results dictionary
"""
logger.info(f"Activity: decompile_with_jadx (workspace={workspace_path})")
try:
from modules.android import JadxDecompiler
workspace = Path(workspace_path)
if not workspace.exists():
raise FileNotFoundError(f"Workspace not found: {workspace_path}")
decompiler = JadxDecompiler()
result = await decompiler.execute(config, workspace)
logger.info(
f"✓ Jadx decompilation completed: "
f"{result.summary.get('java_files', 0)} Java files generated"
)
return result.dict()
except Exception as e:
logger.error(f"Jadx decompilation failed: {e}", exc_info=True)
raise
@activity.defn(name="scan_with_opengrep")
async def scan_with_opengrep_activity(workspace_path: str, config: dict) -> dict:
"""
Analyze Android code for security issues using OpenGrep/Semgrep.
Args:
workspace_path: Path to the workspace directory
config: OpenGrepAndroid configuration
Returns:
Analysis results dictionary
"""
logger.info(f"Activity: scan_with_opengrep (workspace={workspace_path})")
try:
from modules.android import OpenGrepAndroid
workspace = Path(workspace_path)
if not workspace.exists():
raise FileNotFoundError(f"Workspace not found: {workspace_path}")
analyzer = OpenGrepAndroid()
result = await analyzer.execute(config, workspace)
logger.info(
f"✓ OpenGrep analysis completed: "
f"{result.summary.get('total_findings', 0)} security issues found"
)
return result.dict()
except Exception as e:
logger.error(f"OpenGrep analysis failed: {e}", exc_info=True)
raise
@activity.defn(name="scan_with_mobsf")
async def scan_with_mobsf_activity(workspace_path: str, config: dict) -> dict:
"""
Analyze Android APK for security issues using MobSF.
Args:
workspace_path: Path to the workspace directory
config: MobSFScanner configuration
Returns:
Scan results dictionary (or skipped status if MobSF unavailable)
"""
logger.info(f"Activity: scan_with_mobsf (workspace={workspace_path})")
# Check if MobSF is installed (graceful degradation for ARM64 platform)
mobsf_path = Path("/app/mobsf")
if not mobsf_path.exists():
logger.warning("MobSF not installed on this platform (ARM64/Rosetta limitation)")
return {
"status": "skipped",
"findings": [],
"summary": {
"total_findings": 0,
"skip_reason": "MobSF unavailable on ARM64 platform (Rosetta 2 incompatibility)"
}
}
try:
from modules.android import MobSFScanner
workspace = Path(workspace_path)
if not workspace.exists():
raise FileNotFoundError(f"Workspace not found: {workspace_path}")
scanner = MobSFScanner()
result = await scanner.execute(config, workspace)
logger.info(
f"✓ MobSF scan completed: "
f"{result.summary.get('total_findings', 0)} findings"
)
return result.dict()
except Exception as e:
logger.error(f"MobSF scan failed: {e}", exc_info=True)
raise
@activity.defn(name="generate_android_sarif")
async def generate_android_sarif_activity(
jadx_result: dict,
opengrep_result: dict,
mobsf_result: dict,
config: dict,
workspace_path: str
) -> dict:
"""
Generate combined SARIF report from all Android security findings.
Args:
jadx_result: Jadx decompilation results
opengrep_result: OpenGrep analysis results
mobsf_result: MobSF scan results (may be None if disabled)
config: Reporter configuration
workspace_path: Workspace path
Returns:
SARIF report dictionary
"""
logger.info("Activity: generate_android_sarif")
try:
from modules.reporter import SARIFReporter
workspace = Path(workspace_path)
# Collect all findings
all_findings = []
all_findings.extend(opengrep_result.get("findings", []))
if mobsf_result:
all_findings.extend(mobsf_result.get("findings", []))
# Prepare reporter config
reporter_config = {
**(config or {}),
"findings": all_findings,
"tool_name": "FuzzForge Android Static Analysis",
"tool_version": "1.0.0",
"metadata": {
"jadx_version": "1.5.0",
"opengrep_version": "1.45.0",
"mobsf_version": "3.9.7",
"java_files_decompiled": jadx_result.get("summary", {}).get("java_files", 0),
}
}
reporter = SARIFReporter()
result = await reporter.execute(reporter_config, workspace)
sarif_report = result.dict().get("sarif", {})
logger.info(f"✓ SARIF report generated with {len(all_findings)} findings")
return sarif_report
except Exception as e:
logger.error(f"SARIF report generation failed: {e}", exc_info=True)
raise
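The MobSF activity degrades gracefully rather than failing the whole workflow when the tool is absent. The pattern reduces to a filesystem probe before doing any work; a sketch (the install path is whatever the worker image uses, `/app/mobsf` in this workflow):

```python
from pathlib import Path

def run_optional_scanner(install_path: str) -> dict:
    # Probe for the tool before scanning so unsupported platforms
    # report "skipped" instead of raising and failing the workflow.
    if not Path(install_path).exists():
        return {
            "status": "skipped",
            "findings": [],
            "summary": {"total_findings": 0,
                        "skip_reason": f"{install_path} not installed"},
        }
    # ... real scan would go here ...
    return {"status": "success", "findings": [], "summary": {"total_findings": 0}}

result = run_optional_scanner("/nonexistent/mobsf")
```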

View File

@@ -0,0 +1,172 @@
name: android_static_analysis
version: "1.0.0"
vertical: android
description: "Comprehensive Android application security testing using Jadx decompilation, OpenGrep static analysis, and MobSF mobile security framework"
author: "FuzzForge Team"
tags:
- "android"
- "mobile"
- "static-analysis"
- "security"
- "opengrep"
- "semgrep"
- "mobsf"
- "jadx"
- "apk"
- "sarif"
# Workspace isolation mode
# Using "shared" mode for read-only APK analysis (no file modifications except decompilation output)
workspace_isolation: "shared"
parameters:
type: object
properties:
apk_path:
type: string
description: "Path to the APK file to analyze (relative to uploaded target or absolute within workspace)"
default: ""
decompile_apk:
type: boolean
description: "Whether to decompile APK with Jadx before OpenGrep analysis"
default: true
jadx_config:
type: object
description: "Jadx decompiler configuration"
properties:
output_dir:
type: string
description: "Output directory for decompiled sources"
default: "jadx_output"
overwrite:
type: boolean
description: "Overwrite existing decompilation output"
default: true
threads:
type: integer
description: "Number of decompilation threads"
default: 4
minimum: 1
maximum: 32
decompiler_args:
type: array
items:
type: string
description: "Additional Jadx arguments"
default: []
opengrep_config:
type: object
description: "OpenGrep/Semgrep static analysis configuration"
properties:
config:
type: string
enum: ["auto", "p/security-audit", "p/owasp-top-ten", "p/cwe-top-25"]
description: "Preset OpenGrep ruleset (ignored if custom_rules_path is set)"
default: "auto"
custom_rules_path:
type: string
description: "Path to custom OpenGrep rules directory (use Android-specific rules for best results)"
default: "/app/toolbox/modules/android/custom_rules"
languages:
type: array
items:
type: string
description: "Programming languages to analyze (defaults to java, kotlin for Android)"
default: ["java", "kotlin"]
include_patterns:
type: array
items:
type: string
description: "File patterns to include in scan"
default: []
exclude_patterns:
type: array
items:
type: string
description: "File patterns to exclude from scan"
default: []
max_target_bytes:
type: integer
description: "Maximum file size to analyze (bytes)"
default: 1000000
timeout:
type: integer
description: "Analysis timeout in seconds"
default: 300
severity:
type: array
items:
type: string
enum: ["ERROR", "WARNING", "INFO"]
description: "Severity levels to include in results"
default: ["ERROR", "WARNING", "INFO"]
confidence:
type: array
items:
type: string
enum: ["HIGH", "MEDIUM", "LOW"]
description: "Confidence levels to include in results"
default: ["HIGH", "MEDIUM", "LOW"]
mobsf_config:
type: object
description: "MobSF scanner configuration"
properties:
enabled:
type: boolean
description: "Enable MobSF analysis (requires APK file)"
default: true
mobsf_url:
type: string
description: "MobSF server URL"
default: "http://localhost:8877"
api_key:
type: string
description: "MobSF API key (if not provided, uses MOBSF_API_KEY env var)"
default: null
rescan:
type: boolean
description: "Force rescan even if APK was previously analyzed"
default: false
reporter_config:
type: object
description: "SARIF reporter configuration"
properties:
include_code_flows:
type: boolean
description: "Include code flow information in SARIF output"
default: false
logical_id:
type: string
description: "Custom identifier for the SARIF report"
default: null
output_schema:
type: object
properties:
sarif:
type: object
description: "SARIF-formatted findings from all Android security tools"
summary:
type: object
description: "Android security analysis summary"
properties:
total_findings:
type: integer
decompiled_java_files:
type: integer
description: "Number of Java files decompiled by Jadx"
opengrep_findings:
type: integer
description: "Findings from OpenGrep/Semgrep analysis"
mobsf_findings:
type: integer
description: "Findings from MobSF analysis"
severity_distribution:
type: object
category_distribution:
type: object
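For reference, a minimal parameters payload conforming to the schema above (the values are illustrative; anything omitted falls back to the defaults declared in the schema):

```python
params = {
    "apk_path": "app-release.apk",          # hypothetical APK name
    "decompile_apk": True,
    "jadx_config": {"threads": 8},
    "opengrep_config": {
        "config": "auto",
        "languages": ["java", "kotlin"],
        "severity": ["ERROR", "WARNING"],   # drop INFO-level findings
    },
    "mobsf_config": {"enabled": False},     # skip MobSF on this run
}
```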

View File

@@ -0,0 +1,289 @@
"""
Android Static Analysis Workflow - Temporal Version
Comprehensive security testing for Android applications using Jadx, OpenGrep, and MobSF.
"""
# Copyright (c) 2025 FuzzingLabs
#
# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
# at the root of this repository for details.
#
# After the Change Date (four years from publication), this version of the
# Licensed Work will be made available under the Apache License, Version 2.0.
# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
#
# Additional attribution and requirements are provided in the NOTICE file.
from datetime import timedelta
from typing import Dict, Any, Optional
from pathlib import Path
from temporalio import workflow
from temporalio.common import RetryPolicy
# Import activity interfaces (will be executed by worker)
with workflow.unsafe.imports_passed_through():
import logging
logger = logging.getLogger(__name__)
@workflow.defn
class AndroidStaticAnalysisWorkflow:
"""
Android Static Application Security Testing workflow.
This workflow:
1. Downloads target (APK) from MinIO
2. (Optional) Decompiles APK using Jadx
3. Runs OpenGrep/Semgrep static analysis on decompiled code
4. (Optional) Runs MobSF comprehensive security scan
5. Generates a SARIF report with all findings
6. Uploads results to MinIO
7. Cleans up cache
"""
@workflow.run
async def run(
self,
target_id: str,
apk_path: Optional[str] = None,
decompile_apk: bool = True,
jadx_config: Optional[Dict[str, Any]] = None,
opengrep_config: Optional[Dict[str, Any]] = None,
mobsf_config: Optional[Dict[str, Any]] = None,
reporter_config: Optional[Dict[str, Any]] = None
) -> Dict[str, Any]:
"""
Main workflow execution.
Args:
target_id: UUID of the uploaded target (APK) in MinIO
apk_path: Path to APK file within target (if target is not a single APK)
decompile_apk: Whether to decompile APK with Jadx before OpenGrep
jadx_config: Configuration for Jadx decompiler
opengrep_config: Configuration for OpenGrep analyzer
mobsf_config: Configuration for MobSF scanner
reporter_config: Configuration for SARIF reporter
Returns:
Dictionary containing SARIF report and summary
"""
workflow_id = workflow.info().workflow_id
workflow.logger.info(
f"Starting AndroidStaticAnalysisWorkflow "
f"(workflow_id={workflow_id}, target_id={target_id})"
)
# Default configurations
if not jadx_config:
jadx_config = {
"output_dir": "jadx_output",
"overwrite": True,
"threads": 4,
"decompiler_args": []
}
if not opengrep_config:
opengrep_config = {
"config": "auto",
"custom_rules_path": "/app/toolbox/modules/android/custom_rules",
"languages": ["java", "kotlin"],
"severity": ["ERROR", "WARNING", "INFO"],
"confidence": ["HIGH", "MEDIUM", "LOW"],
"timeout": 300,
}
if not mobsf_config:
mobsf_config = {
"enabled": True,
"mobsf_url": "http://localhost:8877",
"api_key": None,
"rescan": False,
}
if not reporter_config:
reporter_config = {
"include_code_flows": False
}
# Activity retry policy
retry_policy = RetryPolicy(
initial_interval=timedelta(seconds=1),
maximum_interval=timedelta(seconds=60),
maximum_attempts=3,
backoff_coefficient=2.0,
)
# Phase 0: Download target from MinIO
workflow.logger.info(f"Phase 0: Downloading target from MinIO (target_id={target_id})")
workspace_path = await workflow.execute_activity(
"get_target",
args=[target_id, workflow.info().workflow_id, "shared"],
start_to_close_timeout=timedelta(minutes=10),
retry_policy=retry_policy,
)
workflow.logger.info(f"✓ Target downloaded to: {workspace_path}")
# Handle case where workspace_path is a file (single APK upload)
# vs. a directory containing files
workspace_path_obj = Path(workspace_path)
# Determine actual workspace directory and APK path
if apk_path:
# User explicitly provided apk_path
actual_apk_path = apk_path
# workspace_path could be either a file or directory
# If it's a file and apk_path matches the filename, use parent as workspace
if workspace_path_obj.name == apk_path:
workspace_path = str(workspace_path_obj.parent)
workflow.logger.info(f"Adjusted workspace to parent directory: {workspace_path}")
else:
# No apk_path provided - check if workspace_path is an APK file
if workspace_path_obj.suffix.lower() == '.apk':
# workspace_path is the APK file itself
actual_apk_path = workspace_path_obj.name
workspace_path = str(workspace_path_obj.parent)
workflow.logger.info(f"Detected single APK file: {actual_apk_path}, workspace: {workspace_path}")
else:
# workspace_path is a directory, need to find APK within it
actual_apk_path = None
workflow.logger.info("Workspace is a directory, APK detection will be handled by modules")
# Phase 1: Jadx decompilation (if enabled and APK provided)
jadx_result = None
analysis_workspace = workspace_path
if decompile_apk and actual_apk_path:
workflow.logger.info(f"Phase 1: Decompiling APK with Jadx (apk={actual_apk_path})")
jadx_activity_config = {
**jadx_config,
"apk_path": actual_apk_path
}
jadx_result = await workflow.execute_activity(
"decompile_with_jadx",
args=[workspace_path, jadx_activity_config],
start_to_close_timeout=timedelta(minutes=15),
retry_policy=retry_policy,
)
if jadx_result.get("status") == "success":
# Use decompiled sources as workspace for OpenGrep
source_dir = jadx_result.get("summary", {}).get("source_dir")
if source_dir:
analysis_workspace = source_dir
workflow.logger.info(
f"✓ Jadx decompiled {jadx_result.get('summary', {}).get('java_files', 0)} Java files"
)
else:
workflow.logger.warning(f"Jadx decompilation failed: {jadx_result.get('error')}")
else:
workflow.logger.info("Phase 1: Jadx decompilation skipped")
# Phase 2: OpenGrep static analysis
workflow.logger.info(f"Phase 2: OpenGrep analysis on {analysis_workspace}")
opengrep_result = await workflow.execute_activity(
"scan_with_opengrep",
args=[analysis_workspace, opengrep_config],
start_to_close_timeout=timedelta(minutes=20),
retry_policy=retry_policy,
)
workflow.logger.info(
f"✓ OpenGrep completed: {opengrep_result.get('summary', {}).get('total_findings', 0)} findings"
)
# Phase 3: MobSF analysis (if enabled and APK provided)
mobsf_result = None
if mobsf_config.get("enabled", True) and actual_apk_path:
workflow.logger.info(f"Phase 3: MobSF scan on APK: {actual_apk_path}")
mobsf_activity_config = {
**mobsf_config,
"file_path": actual_apk_path
}
try:
mobsf_result = await workflow.execute_activity(
"scan_with_mobsf",
args=[workspace_path, mobsf_activity_config],
start_to_close_timeout=timedelta(minutes=30),
retry_policy=RetryPolicy(
maximum_attempts=2 # MobSF can be flaky, limit retries
),
)
# Handle skipped or completed status
if mobsf_result.get("status") == "skipped":
workflow.logger.warning(
f"⚠️ MobSF skipped: {mobsf_result.get('summary', {}).get('skip_reason', 'Unknown reason')}"
)
else:
workflow.logger.info(
f"✓ MobSF completed: {mobsf_result.get('summary', {}).get('total_findings', 0)} findings"
)
except Exception as e:
workflow.logger.warning(f"MobSF scan failed (continuing without it): {e}")
mobsf_result = None
else:
workflow.logger.info("Phase 3: MobSF scan skipped (disabled or no APK)")
# Phase 4: Generate SARIF report
workflow.logger.info("Phase 4: Generating SARIF report")
sarif_report = await workflow.execute_activity(
"generate_android_sarif",
args=[jadx_result or {}, opengrep_result, mobsf_result, reporter_config, workspace_path],
start_to_close_timeout=timedelta(minutes=5),
retry_policy=retry_policy,
)
# Phase 5: Upload results to MinIO
workflow.logger.info("Phase 5: Uploading results to MinIO")
result_url = await workflow.execute_activity(
"upload_results",
args=[workflow.info().workflow_id, sarif_report, "sarif"],
start_to_close_timeout=timedelta(minutes=10),
retry_policy=retry_policy,
)
workflow.logger.info(f"✓ Results uploaded: {result_url}")
# Phase 6: Cleanup cache
workflow.logger.info("Phase 6: Cleaning up cache")
await workflow.execute_activity(
"cleanup_cache",
args=[workspace_path, "shared"],
start_to_close_timeout=timedelta(minutes=5),
retry_policy=RetryPolicy(maximum_attempts=1), # Don't retry cleanup
)
# Calculate summary
total_findings = len(sarif_report.get("runs", [{}])[0].get("results", []))
summary = {
"workflow": "android_static_analysis",
"target_id": target_id,
"total_findings": total_findings,
"decompiled_java_files": (jadx_result or {}).get("summary", {}).get("java_files", 0) if jadx_result else 0,
"opengrep_findings": opengrep_result.get("summary", {}).get("total_findings", 0),
"mobsf_findings": mobsf_result.get("summary", {}).get("total_findings", 0) if mobsf_result else 0,
"result_url": result_url,
}
workflow.logger.info(
f"✅ AndroidStaticAnalysisWorkflow completed successfully: {total_findings} findings"
)
return {
"sarif": sarif_report,
"summary": summary,
}
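
The finding count in the summary above is taken straight from the first SARIF run. A minimal sketch against a SARIF 2.1.0-shaped document (the dict below is illustrative, not actual workflow output):

```python
# Minimal SARIF 2.1.0-shaped report (illustrative placeholder data)
sarif_report = {
    "version": "2.1.0",
    "runs": [
        {
            "results": [
                {"ruleId": "sql_injection_high"},
                {"ruleId": "hardcoded_secret_high"},
            ]
        }
    ],
}

# Same expression the workflow uses: count results in the first run,
# tolerating a missing or empty "runs" list
total_findings = len(sarif_report.get("runs", [{}])[0].get("results", []))
print(total_findings)  # 2
```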

View File

@@ -15,11 +15,15 @@ Manages on-demand startup and shutdown of Temporal workers using Docker Compose.
# Additional attribution and requirements are provided in the NOTICE file.
import logging
import os
import platform
import subprocess
import time
from pathlib import Path
from typing import Optional, Dict, Any
import requests
import yaml
from rich.console import Console
logger = logging.getLogger(__name__)
@@ -57,27 +61,181 @@ class WorkerManager:
def _find_compose_file(self) -> Path:
"""
Auto-detect docker-compose.yml location.
Auto-detect docker-compose.yml location using multiple strategies.
Searches upward from current directory to find the compose file.
Strategies (in order):
1. Query backend API for host path
2. Search upward for .fuzzforge marker directory
3. Use FUZZFORGE_ROOT environment variable
4. Fallback to current directory
Returns:
Path to docker-compose.yml
Raises:
FileNotFoundError: If docker-compose.yml cannot be located
"""
current = Path.cwd()
# Strategy 1: Ask backend for location
try:
backend_url = os.getenv("FUZZFORGE_API_URL", "http://localhost:8000")
response = requests.get(f"{backend_url}/system/info", timeout=2)
if response.ok:
info = response.json()
if compose_path_str := info.get("docker_compose_path"):
compose_path = Path(compose_path_str)
if compose_path.exists():
logger.debug(f"Found docker-compose.yml via backend API: {compose_path}")
return compose_path
except Exception as e:
logger.debug(f"Backend API not reachable for path lookup: {e}")
# Try current directory and parents
# Strategy 2: Search upward for .fuzzforge marker directory
current = Path.cwd()
for parent in [current] + list(current.parents):
compose_path = parent / "docker-compose.yml"
if (parent / ".fuzzforge").exists():
compose_path = parent / "docker-compose.yml"
if compose_path.exists():
logger.debug(f"Found docker-compose.yml via .fuzzforge marker: {compose_path}")
return compose_path
# Strategy 3: Environment variable
if fuzzforge_root := os.getenv("FUZZFORGE_ROOT"):
compose_path = Path(fuzzforge_root) / "docker-compose.yml"
if compose_path.exists():
logger.debug(f"Found docker-compose.yml via FUZZFORGE_ROOT: {compose_path}")
return compose_path
# Fallback to default location
return Path("docker-compose.yml")
# Strategy 4: Fallback to current directory
compose_path = Path("docker-compose.yml")
if compose_path.exists():
return compose_path
def _run_docker_compose(self, *args: str) -> subprocess.CompletedProcess:
raise FileNotFoundError(
"Cannot find docker-compose.yml. Ensure backend is running, "
"run from FuzzForge directory, or set FUZZFORGE_ROOT environment variable."
)
def _get_workers_dir(self) -> Path:
"""
Run docker-compose command.
Get the workers directory path.
Uses same strategy as _find_compose_file():
1. Query backend API
2. Derive from compose_file location
3. Use FUZZFORGE_ROOT
Returns:
Path to workers directory
"""
# Strategy 1: Ask backend
try:
backend_url = os.getenv("FUZZFORGE_API_URL", "http://localhost:8000")
response = requests.get(f"{backend_url}/system/info", timeout=2)
if response.ok:
info = response.json()
if workers_dir_str := info.get("workers_dir"):
workers_dir = Path(workers_dir_str)
if workers_dir.exists():
return workers_dir
except Exception:
pass
# Strategy 2: Derive from compose file location
if self.compose_file.exists():
workers_dir = self.compose_file.parent / "workers"
if workers_dir.exists():
return workers_dir
# Strategy 3: Use environment variable
if fuzzforge_root := os.getenv("FUZZFORGE_ROOT"):
workers_dir = Path(fuzzforge_root) / "workers"
if workers_dir.exists():
return workers_dir
# Fallback
return Path("workers")
def _detect_platform(self) -> str:
"""
Detect the current platform.
Returns:
            Platform string: "linux/amd64", "linux/arm64", or "unknown" if detection fails
"""
machine = platform.machine().lower()
if machine in ["x86_64", "amd64"]:
return "linux/amd64"
elif machine in ["arm64", "aarch64"]:
return "linux/arm64"
return "unknown"
def _read_worker_metadata(self, vertical: str) -> dict:
"""
Read worker metadata.yaml for a vertical.
Args:
vertical: Worker vertical name (e.g., "android", "python")
Returns:
Dictionary containing metadata, or empty dict if not found
"""
try:
workers_dir = self._get_workers_dir()
metadata_file = workers_dir / vertical / "metadata.yaml"
if not metadata_file.exists():
logger.debug(f"No metadata.yaml found for {vertical}")
return {}
with open(metadata_file, 'r') as f:
return yaml.safe_load(f) or {}
except Exception as e:
logger.debug(f"Failed to read metadata for {vertical}: {e}")
return {}
def _select_dockerfile(self, vertical: str) -> str:
"""
Select the appropriate Dockerfile for the current platform.
Args:
vertical: Worker vertical name
Returns:
Dockerfile name (e.g., "Dockerfile.amd64", "Dockerfile.arm64")
"""
detected_platform = self._detect_platform()
metadata = self._read_worker_metadata(vertical)
if not metadata:
# No metadata: use default Dockerfile
logger.debug(f"No metadata for {vertical}, using Dockerfile")
return "Dockerfile"
platforms = metadata.get("platforms", {})
# Try detected platform first
if detected_platform in platforms:
dockerfile = platforms[detected_platform].get("dockerfile", "Dockerfile")
logger.debug(f"Selected {dockerfile} for {vertical} on {detected_platform}")
return dockerfile
# Fallback to default platform
default_platform = metadata.get("default_platform", "linux/amd64")
if default_platform in platforms:
dockerfile = platforms[default_platform].get("dockerfile", "Dockerfile.amd64")
logger.debug(f"Using default platform {default_platform}: {dockerfile}")
return dockerfile
# Last resort
return "Dockerfile"
def _run_docker_compose(self, *args: str, env: Optional[Dict[str, str]] = None) -> subprocess.CompletedProcess:
"""
Run docker-compose command with optional environment variables.
Args:
*args: Arguments to pass to docker-compose
env: Optional environment variables to set
Returns:
CompletedProcess with result
@@ -88,11 +246,18 @@ class WorkerManager:
cmd = ["docker-compose", "-f", str(self.compose_file)] + list(args)
logger.debug(f"Running: {' '.join(cmd)}")
# Merge with current environment
full_env = os.environ.copy()
if env:
full_env.update(env)
logger.debug(f"Environment overrides: {env}")
return subprocess.run(
cmd,
capture_output=True,
text=True,
check=True
check=True,
env=full_env
)
def _service_to_container_name(self, service_name: str) -> str:
@@ -135,21 +300,35 @@ class WorkerManager:
def start_worker(self, service_name: str) -> bool:
"""
Start a worker service using docker-compose.
Start a worker service using docker-compose with platform-specific Dockerfile.
Args:
service_name: Name of the Docker Compose service to start (e.g., "worker-python")
service_name: Name of the Docker Compose service to start (e.g., "worker-android")
Returns:
True if started successfully, False otherwise
"""
try:
console.print(f"🚀 Starting worker: {service_name}")
# Extract vertical name from service name
vertical = service_name.replace("worker-", "")
# Use docker-compose up to create and start the service
result = self._run_docker_compose("up", "-d", service_name)
# Detect platform and select appropriate Dockerfile
detected_platform = self._detect_platform()
dockerfile = self._select_dockerfile(vertical)
logger.info(f"Worker {service_name} started")
# Set environment variable for docker-compose
env_var_name = f"{vertical.upper()}_DOCKERFILE"
env = {env_var_name: dockerfile}
console.print(
f"🚀 Starting worker: {service_name} "
f"(platform: {detected_platform}, using {dockerfile})"
)
# Use docker-compose up with --build to ensure correct Dockerfile is used
result = self._run_docker_compose("up", "-d", "--build", service_name, env=env)
logger.info(f"Worker {service_name} started with {dockerfile}")
return True
except subprocess.CalledProcessError as e:

View File

@@ -342,7 +342,7 @@ services:
worker-android:
build:
context: ./workers/android
dockerfile: Dockerfile
dockerfile: ${ANDROID_DOCKERFILE:-Dockerfile.amd64}
container_name: fuzzforge-worker-android
profiles:
- workers
@@ -430,6 +430,9 @@ services:
PYTHONPATH: /app
PYTHONUNBUFFERED: 1
# Host filesystem paths (for CLI worker management)
FUZZFORGE_HOST_ROOT: ${PWD}
# Logging
LOG_LEVEL: INFO
ports:
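
The `${ANDROID_DOCKERFILE:-Dockerfile.amd64}` entry relies on Compose's shell-style default expansion: `WorkerManager.start_worker()` exports the variable per platform, and the amd64 Dockerfile is used whenever it is absent. The expansion rule itself can be sketched in plain shell:

```shell
# Unset variable: the :- default applies
unset ANDROID_DOCKERFILE
echo "${ANDROID_DOCKERFILE:-Dockerfile.amd64}"   # Dockerfile.amd64

# Exported by the CLI (e.g. on Apple Silicon): the override wins
ANDROID_DOCKERFILE=Dockerfile.arm64
echo "${ANDROID_DOCKERFILE:-Dockerfile.amd64}"   # Dockerfile.arm64
```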

Binary file not shown.

Binary file not shown.

View File

@@ -0,0 +1,695 @@
{
"tool": {
"name": "FuzzForge Security Assessment",
"version": "1.0.0"
},
"summary": {
"total_issues": 68,
"by_severity": {
"warning": 51,
"error": 17
}
},
"findings": [
{
"rule_id": "sensitive_file_medium",
"severity": "warning",
"message": "Found potentially sensitive file at .env",
"location": {
"file": ".env",
"line": null,
"column": null
}
},
{
"rule_id": "sensitive_file_medium",
"severity": "warning",
"message": "Found potentially sensitive file at .git-credentials",
"location": {
"file": ".git-credentials",
"line": null,
"column": null
}
},
{
"rule_id": "sensitive_file_medium",
"severity": "warning",
"message": "Found potentially sensitive file at private_key.pem",
"location": {
"file": "private_key.pem",
"line": null,
"column": null
}
},
{
"rule_id": "sensitive_file_medium",
"severity": "warning",
"message": "Found potentially sensitive file at wallet.json",
"location": {
"file": "wallet.json",
"line": null,
"column": null
}
},
{
"rule_id": "sensitive_file_medium",
"severity": "warning",
"message": "Found potentially sensitive file at SECRETS_GROUND_TRUTH.json",
"location": {
"file": "SECRETS_GROUND_TRUTH.json",
"line": null,
"column": null
}
},
{
"rule_id": "sensitive_file_medium",
"severity": "warning",
"message": "Found potentially sensitive file at .npmrc",
"location": {
"file": ".npmrc",
"line": null,
"column": null
}
},
{
"rule_id": "sensitive_file_medium",
"severity": "warning",
"message": "Found potentially sensitive file at .fuzzforge/.env",
"location": {
"file": ".fuzzforge/.env",
"line": null,
"column": null
}
},
{
"rule_id": "sensitive_file_medium",
"severity": "warning",
"message": "Found potentially sensitive file at .fuzzforge/.env.template",
"location": {
"file": ".fuzzforge/.env.template",
"line": null,
"column": null
}
},
{
"rule_id": "sensitive_file_medium",
"severity": "warning",
"message": "Found potentially sensitive file at data/credentials.json",
"location": {
"file": "data/credentials.json",
"line": null,
"column": null
}
},
{
"rule_id": "sensitive_file_medium",
"severity": "warning",
"message": "Found potentially sensitive file at data/api_keys.txt",
"location": {
"file": "data/api_keys.txt",
"line": null,
"column": null
}
},
{
"rule_id": "sql_injection_high",
"severity": "error",
"message": "Detected potential SQL injection vulnerability via F-string in SQL query",
"location": {
"file": "app.py",
"line": 31,
"column": null
}
},
{
"rule_id": "hardcoded_secret_high",
"severity": "error",
"message": "Found potential hardcoded API Key in src/api_handler.py",
"location": {
"file": "src/api_handler.py",
"line": 25,
"column": null
}
},
{
"rule_id": "hardcoded_secret_medium",
"severity": "warning",
"message": "Found potential hardcoded Authentication Token in src/api_handler.py",
"location": {
"file": "src/api_handler.py",
"line": 21,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function eval(): Arbitrary code execution",
"location": {
"file": "src/api_handler.py",
"line": 34,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function eval(): Arbitrary code execution",
"location": {
"file": "src/api_handler.py",
"line": 54,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function exec(): Arbitrary code execution",
"location": {
"file": "src/api_handler.py",
"line": 49,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function os.system(): Command injection risk",
"location": {
"file": "src/api_handler.py",
"line": 44,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function os.system(): Command injection risk",
"location": {
"file": "src/api_handler.py",
"line": 71,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function subprocess with shell=True: Command injection risk",
"location": {
"file": "src/api_handler.py",
"line": 39,
"column": null
}
},
{
"rule_id": "sql_injection_high",
"severity": "error",
"message": "Detected potential SQL injection vulnerability via String concatenation in SQL",
"location": {
"file": "src/database.py",
"line": 43,
"column": null
}
},
{
"rule_id": "sql_injection_high",
"severity": "error",
"message": "Detected potential SQL injection vulnerability via String formatting in SQL",
"location": {
"file": "src/database.py",
"line": 50,
"column": null
}
},
{
"rule_id": "sql_injection_high",
"severity": "error",
"message": "Detected potential SQL injection vulnerability via String formatting in SQL",
"location": {
"file": "src/database.py",
"line": 57,
"column": null
}
},
{
"rule_id": "sql_injection_high",
"severity": "error",
"message": "Detected potential SQL injection vulnerability via F-string in SQL query",
"location": {
"file": "src/database.py",
"line": 50,
"column": null
}
},
{
"rule_id": "sql_injection_high",
"severity": "error",
"message": "Detected potential SQL injection vulnerability via Dynamic query building",
"location": {
"file": "src/database.py",
"line": 43,
"column": null
}
},
{
"rule_id": "sql_injection_high",
"severity": "error",
"message": "Detected potential SQL injection vulnerability via Dynamic query building",
"location": {
"file": "src/database.py",
"line": 75,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function os.system(): Command injection risk",
"location": {
"file": "src/database.py",
"line": 69,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function pickle.load(): Deserialization vulnerability",
"location": {
"file": "src/database.py",
"line": 64,
"column": null
}
},
{
"rule_id": "hardcoded_secret_high",
"severity": "error",
"message": "Found potential hardcoded Private Key in scripts/backup.js",
"location": {
"file": "scripts/backup.js",
"line": 81,
"column": null
}
},
{
"rule_id": "hardcoded_secret_medium",
"severity": "warning",
"message": "Found potential hardcoded Potential Secret Hash in scripts/backup.js",
"location": {
"file": "scripts/backup.js",
"line": 81,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function eval(): Arbitrary code execution",
"location": {
"file": "scripts/backup.js",
"line": 23,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function new Function(): Arbitrary code execution",
"location": {
"file": "scripts/backup.js",
"line": 28,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function innerHTML: XSS vulnerability",
"location": {
"file": "scripts/backup.js",
"line": 33,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function innerHTML: XSS vulnerability",
"location": {
"file": "scripts/backup.js",
"line": 37,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function document.write(): XSS vulnerability",
"location": {
"file": "scripts/backup.js",
"line": 42,
"column": null
}
},
{
"rule_id": "hardcoded_secret_high",
"severity": "error",
"message": "Found potential hardcoded Private Key in src/Main.java",
"location": {
"file": "src/Main.java",
"line": 77,
"column": null
}
},
{
"rule_id": "sql_injection_high",
"severity": "error",
"message": "Detected potential SQL injection vulnerability via String concatenation in SQL",
"location": {
"file": "src/Main.java",
"line": 23,
"column": null
}
},
{
"rule_id": "sql_injection_high",
"severity": "error",
"message": "Detected potential SQL injection vulnerability via String concatenation in SQL",
"location": {
"file": "src/Main.java",
"line": 29,
"column": null
}
},
{
"rule_id": "sql_injection_high",
"severity": "error",
"message": "Detected potential SQL injection vulnerability via Dynamic query building",
"location": {
"file": "src/Main.java",
"line": 23,
"column": null
}
},
{
"rule_id": "sql_injection_high",
"severity": "error",
"message": "Detected potential SQL injection vulnerability via Dynamic query building",
"location": {
"file": "src/Main.java",
"line": 29,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function eval(): Arbitrary code execution",
"location": {
"file": "scripts/deploy.php",
"line": 28,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function exec(): Command execution",
"location": {
"file": "scripts/deploy.php",
"line": 22,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function exec(): Command execution",
"location": {
"file": "scripts/deploy.php",
"line": 23,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function system(): Command execution",
"location": {
"file": "scripts/deploy.php",
"line": 21,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function shell_exec(): Command execution",
"location": {
"file": "scripts/deploy.php",
"line": 23,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_GET usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 12,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_GET usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 21,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_GET usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 23,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_GET usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 24,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_GET usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 31,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_GET usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 45,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_GET usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 50,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_GET usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 57,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_POST usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 13,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_POST usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 22,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_POST usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 27,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_POST usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 32,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_POST usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 40,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_POST usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 46,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_POST usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 53,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_POST usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 54,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_POST usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 61,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_POST usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 62,
"column": null
}
},
{
"rule_id": "hardcoded_secret_high",
"severity": "error",
"message": "Found potential hardcoded API Key in src/utils.rb",
"location": {
"file": "src/utils.rb",
"line": 64,
"column": null
}
},
{
"rule_id": "hardcoded_secret_medium",
"severity": "warning",
"message": "Found potential hardcoded Hardcoded Password in src/utils.rb",
"location": {
"file": "src/utils.rb",
"line": 63,
"column": null
}
},
{
"rule_id": "hardcoded_secret_high",
"severity": "error",
"message": "Found potential hardcoded Private Key in src/app.go",
"location": {
"file": "src/app.go",
"line": 59,
"column": null
}
},
{
"rule_id": "hardcoded_secret_high",
"severity": "error",
"message": "Found potential hardcoded Private Key in src/app.go",
"location": {
"file": "src/app.go",
"line": 62,
"column": null
}
},
{
"rule_id": "hardcoded_secret_medium",
"severity": "warning",
"message": "Found potential hardcoded Potential Secret Hash in src/app.go",
"location": {
"file": "src/app.go",
"line": 59,
"column": null
}
},
{
"rule_id": "hardcoded_secret_medium",
"severity": "warning",
"message": "Found potential hardcoded Potential Secret Hash in src/app.go",
"location": {
"file": "src/app.go",
"line": 62,
"column": null
}
}
]
}

View File

@@ -0,0 +1,148 @@
# FuzzForge Vertical Worker: Android Security
#
# Pre-installed tools for Android security analysis:
# - Android SDK (adb, aapt)
# - apktool (APK decompilation)
# - jadx (Dex to Java decompiler)
# - Frida (dynamic instrumentation)
# - androguard (Python APK analysis)
# - MobSF dependencies
#
# Note: Uses amd64 platform for compatibility with Android 32-bit tools
FROM --platform=linux/amd64 python:3.11-slim-bookworm
# Set working directory
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y \
# Build essentials
build-essential \
git \
curl \
wget \
unzip \
# Java (required for Android tools)
openjdk-17-jdk \
# Android tools dependencies (32-bit libraries for emulated amd64)
lib32stdc++6 \
lib32z1 \
# Frida dependencies
libc6-dev \
# XML/Binary analysis
libxml2-dev \
libxslt-dev \
# Network tools
netcat-openbsd \
tcpdump \
# MobSF dependencies
xfonts-75dpi \
xfonts-base \
# Cleanup
&& rm -rf /var/lib/apt/lists/*
# Install wkhtmltopdf (required for MobSF PDF reports)
RUN wget -q https://github.com/wkhtmltopdf/packaging/releases/download/0.12.6.1-3/wkhtmltox_0.12.6.1-3.bookworm_amd64.deb && \
apt-get update && \
apt-get install -y ./wkhtmltox_0.12.6.1-3.bookworm_amd64.deb && \
rm wkhtmltox_0.12.6.1-3.bookworm_amd64.deb && \
rm -rf /var/lib/apt/lists/*
# Install Android SDK Command Line Tools
ENV ANDROID_HOME=/opt/android-sdk
ENV PATH="${ANDROID_HOME}/cmdline-tools/latest/bin:${ANDROID_HOME}/platform-tools:${PATH}"
RUN mkdir -p ${ANDROID_HOME}/cmdline-tools && \
cd ${ANDROID_HOME}/cmdline-tools && \
wget -q https://dl.google.com/android/repository/commandlinetools-linux-9477386_latest.zip && \
unzip -q commandlinetools-linux-9477386_latest.zip && \
mv cmdline-tools latest && \
rm commandlinetools-linux-9477386_latest.zip && \
# Accept licenses
yes | ${ANDROID_HOME}/cmdline-tools/latest/bin/sdkmanager --licenses && \
# Install platform tools (adb, fastboot)
${ANDROID_HOME}/cmdline-tools/latest/bin/sdkmanager "platform-tools" "build-tools;33.0.0"
# Install apktool
RUN wget -q https://raw.githubusercontent.com/iBotPeaches/Apktool/master/scripts/linux/apktool -O /usr/local/bin/apktool && \
wget -q https://bitbucket.org/iBotPeaches/apktool/downloads/apktool_2.9.3.jar -O /usr/local/bin/apktool.jar && \
chmod +x /usr/local/bin/apktool
# Install jadx (Dex to Java decompiler)
RUN wget -q https://github.com/skylot/jadx/releases/download/v1.4.7/jadx-1.4.7.zip -O /tmp/jadx.zip && \
unzip -q /tmp/jadx.zip -d /opt/jadx && \
ln -s /opt/jadx/bin/jadx /usr/local/bin/jadx && \
ln -s /opt/jadx/bin/jadx-gui /usr/local/bin/jadx-gui && \
rm /tmp/jadx.zip
# Install Python dependencies for Android security tools
COPY requirements.txt /tmp/requirements.txt
RUN pip3 install --no-cache-dir -r /tmp/requirements.txt && \
rm /tmp/requirements.txt
# Install androguard (Python APK analysis framework)
RUN pip3 install --no-cache-dir androguard pyaxmlparser
# Install Frida
RUN pip3 install --no-cache-dir frida-tools frida
# Install OpenGrep/Semgrep (expose as opengrep command)
RUN pip3 install --no-cache-dir semgrep==1.45.0 && \
ln -sf /usr/local/bin/semgrep /usr/local/bin/opengrep
# Install MobSF (Mobile Security Framework)
RUN git clone --depth 1 --branch v3.9.7 https://github.com/MobSF/Mobile-Security-Framework-MobSF.git /app/mobsf && \
cd /app/mobsf && \
./setup.sh
# Install aiohttp for async HTTP requests (used by MobSF scanner module)
RUN pip3 install --no-cache-dir aiohttp
# Create cache directory
RUN mkdir -p /cache && chmod 755 /cache
# Copy worker entrypoint (generic, works for all verticals)
COPY worker.py /app/worker.py
# Create startup script that runs MobSF in background and then starts worker
RUN echo '#!/bin/bash\n\
# Start MobSF server in background with sync workers (avoid Rosetta syscall issues)\n\
echo "Starting MobSF server in background..."\n\
cd /app/mobsf && python3 -m poetry run gunicorn -b 127.0.0.1:8877 \\\n\
mobsf.MobSF.wsgi:application \\\n\
--worker-class=sync \\\n\
--workers=2 \\\n\
--timeout=3600 \\\n\
--log-level=error \\\n\
> /tmp/mobsf.log 2>&1 &\n\
MOBSF_PID=$!\n\
echo "MobSF started with PID: $MOBSF_PID"\n\
\n\
# Wait for MobSF to initialize\n\
sleep 10\n\
\n\
# Generate and store MobSF API key\n\
if [ -f /root/.MobSF/secret ]; then\n\
SECRET=$(cat /root/.MobSF/secret)\n\
export MOBSF_API_KEY=$(echo -n "$SECRET" | sha256sum | cut -d " " -f1)\n\
echo "MobSF API key: $MOBSF_API_KEY"\n\
fi\n\
\n\
# Start worker\n\
echo "Starting Temporal worker..."\n\
exec python3 /app/worker.py\n\
' > /app/start.sh && chmod +x /app/start.sh
# Add toolbox to Python path (mounted at runtime)
ENV PYTHONPATH="/app:/app/toolbox:${PYTHONPATH}"
ENV PYTHONUNBUFFERED=1
ENV JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
ENV MOBSF_PORT=8877
# Healthcheck
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=5 \
CMD python3 -c "import sys; sys.exit(0)"
# Run startup script (starts MobSF + worker)
CMD ["/app/start.sh"]

View File

@@ -1,4 +1,4 @@
# FuzzForge Vertical Worker: Android Security
# FuzzForge Vertical Worker: Android Security (ARM64)
#
# Pre-installed tools for Android security analysis:
# - Android SDK (adb, aapt)
@@ -6,9 +6,11 @@
# - jadx (Dex to Java decompiler)
# - Frida (dynamic instrumentation)
# - androguard (Python APK analysis)
# - MobSF dependencies
#
# Note: MobSF is excluded due to Rosetta 2 syscall incompatibility
# Note: Uses amd64 platform for compatibility with Android 32-bit tools
FROM python:3.11-slim-bookworm
FROM --platform=linux/amd64 python:3.11-slim-bookworm
# Set working directory
WORKDIR /app
@@ -23,7 +25,7 @@ RUN apt-get update && apt-get install -y \
unzip \
# Java (required for Android tools)
openjdk-17-jdk \
# Android tools dependencies
# Android tools dependencies (32-bit libraries for emulated amd64)
lib32stdc++6 \
lib32z1 \
# Frida dependencies
@@ -75,20 +77,34 @@ RUN pip3 install --no-cache-dir androguard pyaxmlparser
# Install Frida
RUN pip3 install --no-cache-dir frida-tools frida
# Install OpenGrep/Semgrep (expose as opengrep command)
RUN pip3 install --no-cache-dir semgrep==1.45.0 && \
ln -sf /usr/local/bin/semgrep /usr/local/bin/opengrep
# NOTE: MobSF is NOT installed on ARM64 platform due to Rosetta 2 incompatibility
# The workflow will gracefully skip MobSF analysis on this platform
# Create cache directory
RUN mkdir -p /cache && chmod 755 /cache
# Copy worker entrypoint (generic, works for all verticals)
COPY worker.py /app/worker.py
# Create simplified startup script (no MobSF)
RUN echo '#!/bin/bash\n\
# ARM64 worker - MobSF disabled due to Rosetta 2 limitations\n\
echo "Starting Temporal worker (ARM64 platform - MobSF disabled)..."\n\
exec python3 /app/worker.py\n\
' > /app/start.sh && chmod +x /app/start.sh
# Add toolbox to Python path (mounted at runtime)
ENV PYTHONPATH="/app:/app/toolbox:${PYTHONPATH}"
ENV PYTHONUNBUFFERED=1
ENV JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
# Healthcheck
HEALTHCHECK --interval=30s --timeout=10s --start-period=40s --retries=3 \
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=5 \
CMD python3 -c "import sys; sys.exit(0)"
# Run worker
CMD ["python3", "/app/worker.py"]
# Run startup script
CMD ["/app/start.sh"]

View File

@@ -0,0 +1,42 @@
# Android Worker Metadata
#
# Platform-specific configuration for Android security analysis worker.
# This file defines which Dockerfile to use for each platform and what tools
# are available on that platform.
name: android
version: "1.0.0"
description: "Android application security testing worker with Jadx, OpenGrep, and MobSF"
# Default platform when auto-detection fails or metadata is not platform-aware
default_platform: linux/amd64
# Platform-specific configurations
platforms:
# x86_64 / Intel / AMD platform (full toolchain including MobSF)
linux/amd64:
dockerfile: Dockerfile.amd64
description: "Full Android toolchain with MobSF support"
supported_tools:
- jadx # APK decompiler
- opengrep # Static analysis with custom Android rules
- mobsf # Mobile Security Framework
- frida # Dynamic instrumentation
- androguard # Python APK analysis
# ARM64 / Apple Silicon platform (MobSF excluded due to Rosetta limitations)
linux/arm64:
dockerfile: Dockerfile.arm64
description: "Android toolchain without MobSF (ARM64/Apple Silicon compatible)"
supported_tools:
- jadx # APK decompiler
- opengrep # Static analysis with custom Android rules
- frida # Dynamic instrumentation
- androguard # Python APK analysis
disabled_tools:
mobsf: "Incompatible with Rosetta 2 emulation (requires syscall 284: copy_file_range)"
notes: |
MobSF cannot run under Rosetta 2 on Apple Silicon Macs due to missing
syscall implementations. The workflow will gracefully skip MobSF analysis
on this platform while still providing comprehensive security testing via
Jadx decompilation and OpenGrep static analysis.
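
How the CLI consumes this schema (cf. `WorkerManager._select_dockerfile` above) can be illustrated with the platform map inlined; the dict literal below is a hand-copied excerpt of the YAML, not loaded from the file:

```python
# Inlined excerpt of metadata.yaml (illustrative; the CLI parses the real file)
metadata = {
    "default_platform": "linux/amd64",
    "platforms": {
        "linux/amd64": {"dockerfile": "Dockerfile.amd64"},
        "linux/arm64": {"dockerfile": "Dockerfile.arm64"},
    },
}

def select_dockerfile(detected_platform: str) -> str:
    platforms = metadata.get("platforms", {})
    # Exact platform match first
    if detected_platform in platforms:
        return platforms[detected_platform].get("dockerfile", "Dockerfile")
    # Otherwise fall back to the declared default platform
    default = metadata.get("default_platform", "linux/amd64")
    return platforms.get(default, {}).get("dockerfile", "Dockerfile")

print(select_dockerfile("linux/arm64"))  # Dockerfile.arm64
print(select_dockerfile("unknown"))      # Dockerfile.amd64 (default platform)
```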