feat: Add Android static analysis workflow with Jadx, OpenGrep, and MobSF

Comprehensive Android security testing workflow converted from Prefect to Temporal architecture:

Modules (3):
- JadxDecompiler: APK to Java source code decompilation
- OpenGrepAndroid: Static analysis with Android-specific security rules
- MobSFScanner: Comprehensive mobile security framework integration

Custom Rules (13):
- clipboard-sensitive-data, hardcoded-secrets, insecure-data-storage
- insecure-deeplink, insecure-logging, intent-redirection
- sensitive-data-in-shared-preferences, sqlite-injection
- vulnerable-activity, vulnerable-content-provider, vulnerable-service
- webview-javascript-enabled, webview-load-arbitrary-url

Workflow:
- 6-phase Temporal workflow: download → Jadx → OpenGrep → MobSF → SARIF → upload
- 4 activities: decompile_with_jadx, scan_with_opengrep, scan_with_mobsf, generate_android_sarif
- SARIF output combining findings from all security tools
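The SARIF phase is essentially a merge step over per-tool findings. A minimal sketch of what the `generate_android_sarif` activity could do — the finding shape (`rule_id`/`message`/`file`/`line`) and the helper name are hypothetical, not the actual implementation:

```python
def to_sarif(tool_results: dict) -> dict:
    """Merge per-tool finding lists into one SARIF 2.1.0 log, one run per tool."""
    runs = []
    for tool, findings in tool_results.items():
        runs.append({
            "tool": {"driver": {"name": tool}},
            "results": [
                {
                    "ruleId": f.get("rule_id", "unknown"),
                    "message": {"text": f.get("message", "")},
                    "locations": [{
                        "physicalLocation": {
                            "artifactLocation": {"uri": f.get("file", "")},
                            "region": {"startLine": f.get("line", 1)},
                        }
                    }],
                }
                for f in findings
            ],
        })
    return {"version": "2.1.0", "runs": runs}
```

Each tool becomes its own SARIF run, so downstream viewers can still attribute a finding to Jadx/OpenGrep/MobSF after the merge.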

Docker Worker:
- ARM64 Mac compatibility via amd64 platform emulation
- Pre-installed: Android SDK, Jadx 1.4.7, OpenGrep 1.45.0, MobSF 3.9.7
- MobSF runs as background service with API key auto-generation
- Added aiohttp for async HTTP communication

Test APKs:
- BeetleBug.apk and shopnest.apk for workflow validation
tduhamel42
2025-10-23 10:25:52 +02:00
parent 171941ef26
commit aa2cd48b00
25 changed files with 2776 additions and 5 deletions


@@ -0,0 +1,25 @@
"""
Android Security Analysis Modules
Modules for Android application security testing:
- JadxDecompiler: APK decompilation using Jadx
- MobSFScanner: Mobile security analysis using MobSF
- OpenGrepAndroid: Static analysis using OpenGrep/Semgrep with Android-specific rules
"""
# Copyright (c) 2025 FuzzingLabs
#
# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
# at the root of this repository for details.
#
# After the Change Date (four years from publication), this version of the
# Licensed Work will be made available under the Apache License, Version 2.0.
# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
#
# Additional attribution and requirements are provided in the NOTICE file.
from .jadx_decompiler import JadxDecompiler
from .mobsf_scanner import MobSFScanner
from .opengrep_android import OpenGrepAndroid
__all__ = ["JadxDecompiler", "MobSFScanner", "OpenGrepAndroid"]


@@ -0,0 +1,15 @@
rules:
- id: clipboard-sensitive-data
severity: WARNING
languages: [java]
message: "Sensitive data may be copied to the clipboard."
metadata:
authors:
- Guerric ELOI (FuzzingLabs)
category: security
area: clipboard
verification-level: [L1]
paths:
include:
- "**/*.java"
pattern: "$CLIPBOARD.setPrimaryClip($CLIP)"


@@ -0,0 +1,23 @@
rules:
- id: hardcoded-secrets
severity: WARNING
languages: [java]
message: "Possible hardcoded secret found in variable '$NAME'."
metadata:
authors:
- Guerric ELOI (FuzzingLabs)
owasp-mobile: M2
category: secrets
verification-level: [L1]
paths:
include:
- "**/*.java"
    patterns:
      - pattern-either:
          - pattern: 'String $NAME = "$VAL";'
          - pattern: 'final String $NAME = "$VAL";'
          - pattern: 'private String $NAME = "$VAL";'
          - pattern: 'public static String $NAME = "$VAL";'
          - pattern: 'static final String $NAME = "$VAL";'
      - metavariable-regex:
          metavariable: $NAME
          regex: '(?i).*(api|key|token|secret|pass|auth|session|bearer|access|private).*'
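The variable-name heuristic in the rule's final clause can be exercised on its own. An illustrative Python port (the regex is copied from the rule; the helper name is made up):

```python
import re

# Same case-insensitive keyword heuristic as the rule's name filter.
SECRET_NAME = re.compile(
    r"(?i).*(api|key|token|secret|pass|auth|session|bearer|access|private).*"
)

def looks_like_secret(var_name: str) -> bool:
    """True when a variable name contains a secret-ish keyword."""
    return SECRET_NAME.fullmatch(var_name) is not None
```

Note how broad the keyword list is: `looks_like_secret("accessibilityHelper")` also fires, which is why the rule only reports at WARNING severity.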


@@ -0,0 +1,18 @@
rules:
- id: insecure-data-storage
severity: WARNING
languages: [java]
message: "Potential insecure data storage (external storage)."
metadata:
authors:
- Guerric ELOI (FuzzingLabs)
owasp-mobile: M2
category: security
area: storage
verification-level: [L1]
paths:
include:
- "**/*.java"
pattern-either:
- pattern: "$CTX.openFileOutput($NAME, $MODE)"
- pattern: "Environment.getExternalStorageDirectory()"


@@ -0,0 +1,16 @@
rules:
- id: insecure-deeplink
severity: WARNING
languages: [xml]
message: "Potential insecure deeplink found in intent-filter."
metadata:
authors:
- Guerric ELOI (FuzzingLabs)
category: component
area: manifest
verification-level: [L1]
paths:
include:
- "**/AndroidManifest.xml"
pattern: |
<intent-filter>


@@ -0,0 +1,21 @@
rules:
- id: insecure-logging
severity: WARNING
languages: [java]
message: "Sensitive data logged via Android Log API."
metadata:
authors:
- Guerric ELOI (FuzzingLabs)
owasp-mobile: M2
category: logging
verification-level: [L1]
paths:
include:
- "**/*.java"
    patterns:
      - pattern-either:
          - pattern: "Log.d($TAG, $MSG)"
          - pattern: "Log.e($TAG, $MSG)"
          - pattern: "System.out.println($MSG)"
      - metavariable-regex:
          metavariable: $MSG
          regex: '(?i).*(password|token|secret|api|auth|session).*'


@@ -0,0 +1,15 @@
rules:
- id: intent-redirection
severity: WARNING
languages: [java]
message: "Potential intent redirection: using getIntent().getExtras() without validation."
metadata:
authors:
- Guerric ELOI (FuzzingLabs)
category: intent
area: intercomponent
verification-level: [L1]
paths:
include:
- "**/*.java"
pattern: "$ACT.getIntent().getExtras()"


@@ -0,0 +1,18 @@
rules:
- id: sensitive-data-in-shared-preferences
severity: WARNING
languages: [java]
message: "Sensitive data may be stored in SharedPreferences. Please review the key '$KEY'."
metadata:
authors:
- Guerric ELOI (FuzzingLabs)
owasp-mobile: M2
category: security
area: storage
verification-level: [L1]
paths:
include:
- "**/*.java"
    patterns:
      - pattern: "$EDITOR.putString($KEY, $VAL);"
      - metavariable-regex:
          metavariable: $KEY
          regex: '(?i).*(username|password|pass|token|auth_token|api_key|secret|sessionid|email).*'


@@ -0,0 +1,21 @@
rules:
- id: sqlite-injection
severity: ERROR
languages: [java]
message: "Possible SQL injection: concatenated input in rawQuery or execSQL."
metadata:
authors:
- Guerric ELOI (FuzzingLabs)
owasp-mobile: M7
category: injection
area: database
verification-level: [L1]
paths:
include:
- "**/*.java"
    patterns:
      - pattern-either:
          - pattern: "$DB.rawQuery($QUERY, ...)"
          - pattern: "$DB.execSQL($QUERY)"
      - metavariable-regex:
          metavariable: $QUERY
          regex: '.*".*".*\+.*'
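The concatenation heuristic in the rule's final clause — a double-quoted fragment followed by `+` — is easy to sanity-check line by line. A sketch in Python (regex copied from the rule; the function name is illustrative):

```python
import re

# Mirrors the rule's query filter: a quoted string literal concatenated with '+'.
CONCAT_QUERY = re.compile(r'.*".*".*\+.*')

def is_concatenated_query(source_line: str) -> bool:
    """True when a source line builds a SQL string by concatenation."""
    return CONCAT_QUERY.match(source_line) is not None
```

Parameterized queries (`"... WHERE name = ?"` with a bind-args array) contain no `+` after the literal and therefore pass the filter.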


@@ -0,0 +1,16 @@
rules:
- id: vulnerable-activity
severity: WARNING
languages: [xml]
message: "Activity exported without permission."
metadata:
authors:
- Guerric ELOI (FuzzingLabs)
category: component
area: manifest
verification-level: [L1]
paths:
include:
- "**/AndroidManifest.xml"
pattern: |
<activity android:exported="true"


@@ -0,0 +1,16 @@
rules:
- id: vulnerable-content-provider
severity: WARNING
languages: [xml]
message: "ContentProvider exported without permission."
metadata:
authors:
- Guerric ELOI (FuzzingLabs)
category: component
area: manifest
verification-level: [L1]
paths:
include:
- "**/AndroidManifest.xml"
pattern: |
<provider android:exported="true"


@@ -0,0 +1,16 @@
rules:
- id: vulnerable-service
severity: WARNING
languages: [xml]
message: "Service exported without permission."
metadata:
authors:
- Guerric ELOI (FuzzingLabs)
category: component
area: manifest
verification-level: [L1]
paths:
include:
- "**/AndroidManifest.xml"
pattern: |
<service android:exported="true"


@@ -0,0 +1,16 @@
rules:
- id: webview-javascript-enabled
severity: ERROR
languages: [java]
message: "WebView with JavaScript enabled can be dangerous if loading untrusted content."
metadata:
authors:
- Guerric ELOI (FuzzingLabs)
owasp-mobile: M7
category: webview
area: ui
verification-level: [L1]
paths:
include:
- "**/*.java"
pattern: "$W.getSettings().setJavaScriptEnabled(true)"


@@ -0,0 +1,16 @@
rules:
- id: webview-load-arbitrary-url
severity: WARNING
languages: [java]
message: "Loading unvalidated URL in WebView may cause open redirect or XSS."
metadata:
authors:
- Guerric ELOI (FuzzingLabs)
owasp-mobile: M7
category: webview
area: ui
verification-level: [L1]
paths:
include:
- "**/*.java"
pattern: "$W.loadUrl($URL)"


@@ -0,0 +1,270 @@
"""
Jadx APK Decompilation Module
Decompiles Android APK files to Java source code using Jadx.
"""
# Copyright (c) 2025 FuzzingLabs
#
# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
# at the root of this repository for details.
#
# After the Change Date (four years from publication), this version of the
# Licensed Work will be made available under the Apache License, Version 2.0.
# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
#
# Additional attribution and requirements are provided in the NOTICE file.
import asyncio
import shutil
import logging
from pathlib import Path
from typing import Dict, Any
try:
from toolbox.modules.base import BaseModule, ModuleMetadata, ModuleResult
except ImportError:
try:
from modules.base import BaseModule, ModuleMetadata, ModuleResult
except ImportError:
from src.toolbox.modules.base import BaseModule, ModuleMetadata, ModuleResult
logger = logging.getLogger(__name__)
class JadxDecompiler(BaseModule):
"""Module for decompiling APK files to Java source code using Jadx"""
def get_metadata(self) -> ModuleMetadata:
return ModuleMetadata(
name="jadx_decompiler",
version="1.5.0",
description="Android APK decompilation using Jadx - converts DEX bytecode to Java source",
author="FuzzForge Team",
category="android",
tags=["android", "jadx", "decompilation", "reverse", "apk"],
input_schema={
"type": "object",
"properties": {
"apk_path": {
"type": "string",
"description": "Path to the APK to decompile (absolute or relative to workspace)",
},
"output_dir": {
"type": "string",
"description": "Directory (relative to workspace) where Jadx output should be written",
"default": "jadx_output",
},
"overwrite": {
"type": "boolean",
"description": "Overwrite existing output directory if present",
"default": True,
},
"threads": {
"type": "integer",
"description": "Number of Jadx decompilation threads",
"default": 4,
"minimum": 1,
"maximum": 32,
},
"decompiler_args": {
"type": "array",
"items": {"type": "string"},
"description": "Additional arguments passed directly to Jadx",
"default": [],
},
},
"required": ["apk_path"],
},
output_schema={
"type": "object",
"properties": {
"output_dir": {
"type": "string",
"description": "Path to decompiled output directory",
},
"source_dir": {
"type": "string",
"description": "Path to decompiled Java sources",
},
"resource_dir": {
"type": "string",
"description": "Path to extracted resources",
},
"java_files": {
"type": "integer",
"description": "Number of Java files decompiled",
},
},
},
requires_workspace=True,
)
def validate_config(self, config: Dict[str, Any]) -> bool:
"""Validate module configuration"""
apk_path = config.get("apk_path")
if not apk_path:
raise ValueError("'apk_path' must be provided for Jadx decompilation")
threads = config.get("threads", 4)
if not isinstance(threads, int) or threads < 1 or threads > 32:
raise ValueError("threads must be between 1 and 32")
return True
async def execute(self, config: Dict[str, Any], workspace: Path) -> ModuleResult:
"""
Execute Jadx decompilation on an APK file.
Args:
config: Configuration dict with apk_path, output_dir, etc.
workspace: Workspace directory path
Returns:
ModuleResult with decompilation summary and metadata
"""
self.start_timer()
try:
self.validate_config(config)
self.validate_workspace(workspace)
workspace = workspace.resolve()
# Resolve APK path
apk_path = Path(config["apk_path"])
if not apk_path.is_absolute():
apk_path = (workspace / apk_path).resolve()
if not apk_path.exists():
raise ValueError(f"APK not found: {apk_path}")
if apk_path.is_dir():
raise ValueError(f"APK path must be a file, not a directory: {apk_path}")
logger.info(f"Decompiling APK: {apk_path}")
# Resolve output directory
output_dir = Path(config.get("output_dir", "jadx_output"))
if not output_dir.is_absolute():
output_dir = (workspace / output_dir).resolve()
# Handle existing output directory
if output_dir.exists():
if config.get("overwrite", True):
logger.info(f"Removing existing output directory: {output_dir}")
shutil.rmtree(output_dir)
else:
raise ValueError(
f"Output directory already exists: {output_dir}. Set overwrite=true to replace it."
)
output_dir.mkdir(parents=True, exist_ok=True)
# Build Jadx command
threads = str(config.get("threads", 4))
extra_args = config.get("decompiler_args", []) or []
cmd = [
"jadx",
"--threads-count",
threads,
"--deobf", # Deobfuscate code
"--output-dir",
str(output_dir),
]
cmd.extend(extra_args)
cmd.append(str(apk_path))
logger.info(f"Running Jadx: {' '.join(cmd)}")
# Execute Jadx
process = await asyncio.create_subprocess_exec(
*cmd,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
cwd=str(workspace),
)
stdout, stderr = await process.communicate()
stdout_str = stdout.decode(errors="ignore") if stdout else ""
stderr_str = stderr.decode(errors="ignore") if stderr else ""
if stdout_str:
logger.debug(f"Jadx stdout: {stdout_str[:200]}...")
if stderr_str:
logger.debug(f"Jadx stderr: {stderr_str[:200]}...")
if process.returncode != 0:
error_output = stderr_str or stdout_str or "No error output"
raise RuntimeError(
f"Jadx failed with exit code {process.returncode}: {error_output[:500]}"
)
# Verify output structure
source_dir = output_dir / "sources"
resource_dir = output_dir / "resources"
if not source_dir.exists():
logger.warning(
f"Jadx sources directory not found at expected path: {source_dir}"
)
# Use output_dir as fallback
source_dir = output_dir
# Count decompiled Java files
java_files = 0
if source_dir.exists():
java_files = sum(1 for _ in source_dir.rglob("*.java"))
logger.info(f"Decompiled {java_files} Java files")
# Log sample files for debugging
sample_files = []
for idx, file_path in enumerate(source_dir.rglob("*.java")):
sample_files.append(str(file_path.relative_to(workspace)))
if idx >= 4:
break
if sample_files:
logger.debug(f"Sample Java files: {sample_files}")
# Create summary
summary = {
"output_dir": str(output_dir),
"source_dir": str(source_dir if source_dir.exists() else output_dir),
"resource_dir": str(
resource_dir if resource_dir.exists() else output_dir
),
"java_files": java_files,
"apk_name": apk_path.name,
"apk_size_bytes": apk_path.stat().st_size,
}
metadata = {
"apk_path": str(apk_path),
"output_dir": str(output_dir),
"source_dir": summary["source_dir"],
"resource_dir": summary["resource_dir"],
"threads": threads,
"decompiler": "jadx",
"decompiler_version": "1.5.0",
}
logger.info(
f"✓ Jadx decompilation completed: {java_files} Java files generated"
)
return self.create_result(
findings=[], # Jadx doesn't generate findings, only decompiles
status="success",
summary=summary,
metadata=metadata,
)
except Exception as exc:
logger.error(f"Jadx decompilation failed: {exc}", exc_info=True)
return self.create_result(
findings=[],
status="failed",
error=str(exc),
metadata={"decompiler": "jadx", "apk_path": config.get("apk_path")},
)
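The asyncio subprocess pattern used by `execute` above can be isolated into a small reusable helper — a sketch only, which does not assume `jadx` itself is installed:

```python
import asyncio

async def run_tool(cmd, cwd=None):
    """Run an external tool, returning (exit_code, stdout, stderr) as text."""
    process = await asyncio.create_subprocess_exec(
        *cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
        cwd=cwd,
    )
    # communicate() drains both pipes and waits for exit, avoiding deadlocks
    # when the tool writes more than the pipe buffer holds.
    stdout, stderr = await process.communicate()
    return (
        process.returncode,
        stdout.decode(errors="ignore"),
        stderr.decode(errors="ignore"),
    )
```

Using `create_subprocess_exec` (argument vector, no shell) also sidesteps shell-injection risks when paths like the APK location come from user-supplied config.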


@@ -0,0 +1,396 @@
"""
MobSF Scanner Module
Mobile Security Framework (MobSF) integration for comprehensive Android app security analysis.
Performs static analysis on APK files including permissions, manifest analysis, code analysis, and behavior checks.
"""
# Copyright (c) 2025 FuzzingLabs
#
# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
# at the root of this repository for details.
#
# After the Change Date (four years from publication), this version of the
# Licensed Work will be made available under the Apache License, Version 2.0.
# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
#
# Additional attribution and requirements are provided in the NOTICE file.
import asyncio
import logging
import os
from collections import Counter
from pathlib import Path
from typing import Dict, Any, List
import aiohttp
try:
from toolbox.modules.base import BaseModule, ModuleMetadata, ModuleFinding, ModuleResult
except ImportError:
try:
from modules.base import BaseModule, ModuleMetadata, ModuleFinding, ModuleResult
except ImportError:
from src.toolbox.modules.base import BaseModule, ModuleMetadata, ModuleFinding, ModuleResult
logger = logging.getLogger(__name__)
class MobSFScanner(BaseModule):
"""Mobile Security Framework (MobSF) scanner module for Android applications"""
SEVERITY_MAP = {
"dangerous": "critical",
"high": "high",
"warning": "medium",
"medium": "medium",
"low": "low",
"info": "low",
"secure": "low",
}
def get_metadata(self) -> ModuleMetadata:
return ModuleMetadata(
name="mobsf_scanner",
version="3.9.7",
description="Comprehensive Android security analysis using Mobile Security Framework (MobSF)",
author="FuzzForge Team",
category="android",
tags=["mobile", "android", "mobsf", "sast", "scanner", "security"],
input_schema={
"type": "object",
"properties": {
"mobsf_url": {
"type": "string",
"description": "MobSF server URL",
"default": "http://localhost:8877",
},
"file_path": {
"type": "string",
"description": "Path to the APK file to scan (absolute or relative to workspace)",
},
"api_key": {
"type": "string",
"description": "MobSF API key (if not provided, will try MOBSF_API_KEY env var)",
"default": None,
},
"rescan": {
"type": "boolean",
"description": "Force rescan even if file was previously analyzed",
"default": False,
},
},
"required": ["file_path"],
},
output_schema={
"type": "object",
"properties": {
"findings": {
"type": "array",
"description": "Security findings from MobSF analysis"
},
"scan_hash": {"type": "string"},
"total_findings": {"type": "integer"},
"severity_counts": {"type": "object"},
}
},
requires_workspace=True,
)
def validate_config(self, config: Dict[str, Any]) -> bool:
"""Validate module configuration"""
if "mobsf_url" in config and not isinstance(config["mobsf_url"], str):
raise ValueError("mobsf_url must be a string")
file_path = config.get("file_path")
if not file_path:
raise ValueError("file_path is required for MobSF scanning")
return True
async def execute(self, config: Dict[str, Any], workspace: Path) -> ModuleResult:
"""
Execute MobSF security analysis on an APK file.
Args:
config: Configuration dict with file_path, mobsf_url, api_key
workspace: Workspace directory path
Returns:
ModuleResult with security findings from MobSF
"""
self.start_timer()
try:
self.validate_config(config)
self.validate_workspace(workspace)
# Get configuration
mobsf_url = config.get("mobsf_url", "http://localhost:8877")
file_path_str = config["file_path"]
rescan = config.get("rescan", False)
# Get API key from config or environment
api_key = config.get("api_key") or os.environ.get("MOBSF_API_KEY", "")
if not api_key:
logger.warning("No MobSF API key provided. Some functionality may be limited.")
# Resolve APK file path
file_path = Path(file_path_str)
if not file_path.is_absolute():
file_path = (workspace / file_path).resolve()
if not file_path.exists():
raise FileNotFoundError(f"APK file not found: {file_path}")
if not file_path.is_file():
raise ValueError(f"APK path must be a file: {file_path}")
logger.info(f"Starting MobSF scan of APK: {file_path}")
# Upload and scan APK
scan_hash = await self._upload_file(mobsf_url, file_path, api_key)
logger.info(f"APK uploaded to MobSF with hash: {scan_hash}")
# Start scan
await self._start_scan(mobsf_url, scan_hash, api_key, rescan=rescan)
logger.info(f"MobSF scan completed for hash: {scan_hash}")
# Get JSON results
scan_results = await self._get_json_results(mobsf_url, scan_hash, api_key)
# Parse results into findings
findings = self._parse_scan_results(scan_results, file_path)
# Create summary
summary = self._create_summary(findings, scan_hash)
logger.info(f"✓ MobSF scan completed: {len(findings)} findings")
return self.create_result(
findings=findings,
status="success",
summary=summary,
metadata={
"tool": "mobsf",
"tool_version": "3.9.7",
"scan_hash": scan_hash,
"apk_file": str(file_path),
"mobsf_url": mobsf_url,
}
)
except Exception as exc:
logger.error(f"MobSF scanner failed: {exc}", exc_info=True)
return self.create_result(
findings=[],
status="failed",
error=str(exc),
metadata={"tool": "mobsf", "file_path": config.get("file_path")}
)
async def _upload_file(self, mobsf_url: str, file_path: Path, api_key: str) -> str:
"""
Upload APK file to MobSF server.
Returns:
Scan hash for the uploaded file
"""
headers = {'X-Mobsf-Api-Key': api_key} if api_key else {}
# Create multipart form data
filename = file_path.name
async with aiohttp.ClientSession() as session:
with open(file_path, 'rb') as f:
data = aiohttp.FormData()
data.add_field('file',
f,
filename=filename,
content_type='application/vnd.android.package-archive')
async with session.post(
f"{mobsf_url}/api/v1/upload",
headers=headers,
data=data,
timeout=aiohttp.ClientTimeout(total=300)
) as response:
if response.status != 200:
error_text = await response.text()
raise Exception(f"Failed to upload file to MobSF: {error_text}")
result = await response.json()
scan_hash = result.get('hash')
if not scan_hash:
raise Exception(f"MobSF upload failed: {result}")
return scan_hash
async def _start_scan(self, mobsf_url: str, scan_hash: str, api_key: str, rescan: bool = False) -> Dict[str, Any]:
"""
Start MobSF scan for uploaded file.
Returns:
Scan result dictionary
"""
headers = {'X-Mobsf-Api-Key': api_key} if api_key else {}
data = {
'hash': scan_hash,
're_scan': '1' if rescan else '0'
}
async with aiohttp.ClientSession() as session:
async with session.post(
f"{mobsf_url}/api/v1/scan",
headers=headers,
data=data,
timeout=aiohttp.ClientTimeout(total=600) # 10 minutes for scan
) as response:
if response.status != 200:
error_text = await response.text()
raise Exception(f"MobSF scan failed: {error_text}")
result = await response.json()
return result
async def _get_json_results(self, mobsf_url: str, scan_hash: str, api_key: str) -> Dict[str, Any]:
"""
Retrieve JSON scan results from MobSF.
Returns:
Scan results dictionary
"""
headers = {'X-Mobsf-Api-Key': api_key} if api_key else {}
data = {'hash': scan_hash}
async with aiohttp.ClientSession() as session:
async with session.post(
f"{mobsf_url}/api/v1/report_json",
headers=headers,
data=data,
timeout=aiohttp.ClientTimeout(total=60)
) as response:
if response.status != 200:
error_text = await response.text()
raise Exception(f"Failed to retrieve MobSF results: {error_text}")
return await response.json()
def _parse_scan_results(self, scan_data: Dict[str, Any], apk_path: Path) -> List[ModuleFinding]:
"""Parse MobSF JSON results into standardized findings"""
findings = []
# Parse permissions
if 'permissions' in scan_data:
for perm_name, perm_attrs in scan_data['permissions'].items():
if isinstance(perm_attrs, dict):
severity = self.SEVERITY_MAP.get(
perm_attrs.get('status', '').lower(), 'low'
)
finding = self.create_finding(
title=f"Android Permission: {perm_name}",
description=perm_attrs.get('description', 'No description'),
severity=severity,
category="android-permission",
metadata={
'permission': perm_name,
'status': perm_attrs.get('status'),
'info': perm_attrs.get('info'),
'tool': 'mobsf',
}
)
findings.append(finding)
# Parse manifest analysis
if 'manifest_analysis' in scan_data:
manifest_findings = scan_data['manifest_analysis'].get('manifest_findings', [])
for item in manifest_findings:
if isinstance(item, dict):
severity = self.SEVERITY_MAP.get(item.get('severity', '').lower(), 'medium')
finding = self.create_finding(
title=item.get('title') or item.get('name') or "Manifest Issue",
description=item.get('description', 'No description'),
severity=severity,
category="android-manifest",
metadata={
'rule': item.get('rule'),
'tool': 'mobsf',
}
)
findings.append(finding)
# Parse code analysis
if 'code_analysis' in scan_data:
code_findings = scan_data['code_analysis'].get('findings', {})
for finding_name, finding_data in code_findings.items():
if isinstance(finding_data, dict):
metadata_dict = finding_data.get('metadata', {})
severity = self.SEVERITY_MAP.get(
metadata_dict.get('severity', '').lower(), 'medium'
)
files_list = finding_data.get('files', [])
file_path = files_list[0] if files_list else None
finding = self.create_finding(
title=finding_name,
description=metadata_dict.get('description', 'No description'),
severity=severity,
category="android-code-analysis",
file_path=file_path,
metadata={
'cwe': metadata_dict.get('cwe'),
'owasp': metadata_dict.get('owasp'),
'files': files_list,
'tool': 'mobsf',
}
)
findings.append(finding)
# Parse behavior analysis
if 'behaviour' in scan_data:
for key, value in scan_data['behaviour'].items():
if isinstance(value, dict):
metadata_dict = value.get('metadata', {})
labels = metadata_dict.get('label', [])
label = labels[0] if labels else 'Unknown Behavior'
severity = self.SEVERITY_MAP.get(
metadata_dict.get('severity', '').lower(), 'medium'
)
files_list = value.get('files', [])
finding = self.create_finding(
title=f"Behavior: {label}",
description=metadata_dict.get('description', 'No description'),
severity=severity,
category="android-behavior",
metadata={
'files': files_list,
'tool': 'mobsf',
}
)
findings.append(finding)
logger.debug(f"Parsed {len(findings)} findings from MobSF results")
return findings
def _create_summary(self, findings: List[ModuleFinding], scan_hash: str) -> Dict[str, Any]:
"""Create analysis summary"""
severity_counter = Counter()
category_counter = Counter()
for finding in findings:
severity_counter[finding.severity] += 1
category_counter[finding.category] += 1
return {
"scan_hash": scan_hash,
"total_findings": len(findings),
"severity_counts": dict(severity_counter),
"category_counts": dict(category_counter),
}
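The severity normalisation and summary aggregation above are plain dict work and can be sketched standalone (`SEVERITY_MAP` copied from `MobSFScanner`; the simplified dict-based finding shape is illustrative):

```python
from collections import Counter

# Copied from MobSFScanner.SEVERITY_MAP: raw MobSF statuses -> standard levels.
SEVERITY_MAP = {
    "dangerous": "critical",
    "high": "high",
    "warning": "medium",
    "medium": "medium",
    "low": "low",
    "info": "low",
    "secure": "low",
}

def normalize_severity(mobsf_status: str) -> str:
    """Map a raw MobSF status onto the module's levels, defaulting to 'low'."""
    return SEVERITY_MAP.get(mobsf_status.lower(), "low")

def summarize(findings: list) -> dict:
    """Aggregate severity/category counts, as _create_summary does."""
    severity = Counter(f["severity"] for f in findings)
    category = Counter(f["category"] for f in findings)
    return {
        "total_findings": len(findings),
        "severity_counts": dict(severity),
        "category_counts": dict(category),
    }
```

Defaulting unknown statuses to `low` mirrors the `.get(..., 'low')` calls in `_parse_scan_results`, so unrecognised MobSF output degrades gracefully instead of raising.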


@@ -0,0 +1,442 @@
"""
OpenGrep Android Static Analysis Module
Pattern-based static analysis for Android applications using OpenGrep/Semgrep
with Android-specific security rules.
"""
# Copyright (c) 2025 FuzzingLabs
#
# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
# at the root of this repository for details.
#
# After the Change Date (four years from publication), this version of the
# Licensed Work will be made available under the Apache License, Version 2.0.
# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
#
# Additional attribution and requirements are provided in the NOTICE file.
import asyncio
import json
import logging
from pathlib import Path
from typing import Dict, Any, List
try:
from toolbox.modules.base import BaseModule, ModuleMetadata, ModuleFinding, ModuleResult
except ImportError:
try:
from modules.base import BaseModule, ModuleMetadata, ModuleFinding, ModuleResult
except ImportError:
from src.toolbox.modules.base import BaseModule, ModuleMetadata, ModuleFinding, ModuleResult
logger = logging.getLogger(__name__)
class OpenGrepAndroid(BaseModule):
"""OpenGrep static analysis module specialized for Android security"""
def get_metadata(self) -> ModuleMetadata:
"""Get module metadata"""
return ModuleMetadata(
name="opengrep_android",
version="1.45.0",
description="Android-focused static analysis using OpenGrep/Semgrep with custom security rules for Java/Kotlin",
author="FuzzForge Team",
category="android",
tags=["sast", "android", "opengrep", "semgrep", "java", "kotlin", "security"],
input_schema={
"type": "object",
"properties": {
"config": {
"type": "string",
"enum": ["auto", "p/security-audit", "p/owasp-top-ten", "p/cwe-top-25"],
"default": "auto",
"description": "Rule configuration to use"
},
"custom_rules_path": {
"type": "string",
"description": "Path to a directory containing custom OpenGrep rules (Android-specific rules recommended)",
"default": None,
},
"languages": {
"type": "array",
"items": {"type": "string"},
"description": "Specific languages to analyze (defaults to java, kotlin for Android)",
"default": ["java", "kotlin"],
},
"include_patterns": {
"type": "array",
"items": {"type": "string"},
"description": "File patterns to include",
"default": [],
},
"exclude_patterns": {
"type": "array",
"items": {"type": "string"},
"description": "File patterns to exclude",
"default": [],
},
"max_target_bytes": {
"type": "integer",
"default": 1000000,
"description": "Maximum file size to analyze (bytes)"
},
"timeout": {
"type": "integer",
"default": 300,
"description": "Analysis timeout in seconds"
},
"severity": {
"type": "array",
"items": {"type": "string", "enum": ["ERROR", "WARNING", "INFO"]},
"default": ["ERROR", "WARNING", "INFO"],
"description": "Minimum severity levels to report"
},
"confidence": {
"type": "array",
"items": {"type": "string", "enum": ["HIGH", "MEDIUM", "LOW"]},
"default": ["HIGH", "MEDIUM", "LOW"],
"description": "Minimum confidence levels to report"
}
}
},
output_schema={
"type": "object",
"properties": {
"findings": {
"type": "array",
"description": "Security findings from OpenGrep analysis"
},
"total_findings": {"type": "integer"},
"severity_counts": {"type": "object"},
"files_analyzed": {"type": "integer"},
}
},
requires_workspace=True,
)
def validate_config(self, config: Dict[str, Any]) -> bool:
"""Validate configuration"""
timeout = config.get("timeout", 300)
if not isinstance(timeout, int) or timeout < 30 or timeout > 3600:
raise ValueError("Timeout must be between 30 and 3600 seconds")
max_bytes = config.get("max_target_bytes", 1000000)
if not isinstance(max_bytes, int) or max_bytes < 1000 or max_bytes > 10000000:
raise ValueError("max_target_bytes must be between 1000 and 10000000")
custom_rules_path = config.get("custom_rules_path")
if custom_rules_path:
rules_path = Path(custom_rules_path)
if not rules_path.exists():
logger.warning(f"Custom rules path does not exist: {custom_rules_path}")
return True
async def execute(self, config: Dict[str, Any], workspace: Path) -> ModuleResult:
"""Execute OpenGrep static analysis on Android code"""
self.start_timer()
try:
# Validate inputs
self.validate_config(config)
self.validate_workspace(workspace)
logger.info(f"Running OpenGrep Android analysis on {workspace}")
# Build opengrep command
cmd = ["opengrep", "scan", "--json"]
# Add configuration
custom_rules_path = config.get("custom_rules_path")
use_custom_rules = False
if custom_rules_path and Path(custom_rules_path).exists():
cmd.extend(["--config", custom_rules_path])
use_custom_rules = True
logger.info(f"Using custom Android rules from: {custom_rules_path}")
else:
config_type = config.get("config", "auto")
if config_type == "auto":
cmd.extend(["--config", "auto"])
else:
cmd.extend(["--config", config_type])
# Add timeout
cmd.extend(["--timeout", str(config.get("timeout", 300))])
# Add max target bytes
cmd.extend(["--max-target-bytes", str(config.get("max_target_bytes", 1000000))])
# Add languages if specified (but NOT when using custom rules)
languages = config.get("languages", ["java", "kotlin"])
if languages and not use_custom_rules:
langs = ",".join(languages)
cmd.extend(["--lang", langs])
logger.debug(f"Analyzing languages: {langs}")
# Add include patterns
include_patterns = config.get("include_patterns", [])
for pattern in include_patterns:
cmd.extend(["--include", pattern])
# Add exclude patterns
exclude_patterns = config.get("exclude_patterns", [])
for pattern in exclude_patterns:
cmd.extend(["--exclude", pattern])
# Add severity filter if single level requested
severity_levels = config.get("severity", ["ERROR", "WARNING", "INFO"])
if severity_levels and len(severity_levels) == 1:
cmd.extend(["--severity", severity_levels[0]])
            # Keep runs deterministic: skip the update check and scan all files,
            # including decompiled output that .gitignore would otherwise exclude
            cmd.append("--disable-version-check")
            cmd.append("--no-git-ignore")
# Add target directory
cmd.append(str(workspace))
logger.debug(f"Running command: {' '.join(cmd)}")
# Run OpenGrep
process = await asyncio.create_subprocess_exec(
*cmd,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
cwd=workspace
)
stdout, stderr = await process.communicate()
# Parse results
findings = []
if process.returncode in [0, 1]: # 0 = no findings, 1 = findings found
findings = self._parse_opengrep_output(stdout.decode(), workspace, config)
logger.info(f"OpenGrep found {len(findings)} potential security issues")
else:
error_msg = stderr.decode()
logger.error(f"OpenGrep failed: {error_msg}")
return self.create_result(
findings=[],
status="failed",
error=f"OpenGrep execution failed (exit code {process.returncode}): {error_msg[:500]}"
)
# Create summary
summary = self._create_summary(findings)
return self.create_result(
findings=findings,
status="success",
summary=summary,
metadata={
"tool": "opengrep",
"tool_version": "1.45.0",
"languages": languages,
"custom_rules": bool(custom_rules_path),
}
)
except Exception as e:
logger.error(f"OpenGrep Android module failed: {e}", exc_info=True)
return self.create_result(
findings=[],
status="failed",
error=str(e)
)
def _parse_opengrep_output(self, output: str, workspace: Path, config: Dict[str, Any]) -> List[ModuleFinding]:
"""Parse OpenGrep JSON output into findings"""
findings = []
if not output.strip():
return findings
try:
data = json.loads(output)
results = data.get("results", [])
logger.debug(f"OpenGrep returned {len(results)} raw results")
# Get filtering criteria
allowed_severities = set(config.get("severity", ["ERROR", "WARNING", "INFO"]))
allowed_confidences = set(config.get("confidence", ["HIGH", "MEDIUM", "LOW"]))
for result in results:
# Extract basic info
rule_id = result.get("check_id", "unknown")
message = result.get("message", "")
extra = result.get("extra", {})
severity = extra.get("severity", "INFO").upper()
# File location info
path_info = result.get("path", "")
start_line = result.get("start", {}).get("line", 0)
end_line = result.get("end", {}).get("line", 0)
start_col = result.get("start", {}).get("col", 0)
end_col = result.get("end", {}).get("col", 0)
# Code snippet
lines = extra.get("lines", "")
# Metadata
rule_metadata = extra.get("metadata", {})
cwe = rule_metadata.get("cwe", [])
owasp = rule_metadata.get("owasp", [])
confidence = extra.get("confidence", rule_metadata.get("confidence", "MEDIUM")).upper()
# Apply severity filter
if severity not in allowed_severities:
continue
# Apply confidence filter
if confidence not in allowed_confidences:
continue
# Make file path relative to workspace
if path_info:
try:
rel_path = Path(path_info).relative_to(workspace)
path_info = str(rel_path)
except ValueError:
pass
# Map severity to our standard levels
finding_severity = self._map_severity(severity)
# Create finding
finding = self.create_finding(
title=f"Android Security: {rule_id}",
description=message or f"OpenGrep rule {rule_id} triggered",
severity=finding_severity,
category=self._get_category(rule_id, extra),
file_path=path_info if path_info else None,
line_start=start_line if start_line > 0 else None,
line_end=end_line if end_line > 0 and end_line != start_line else None,
code_snippet=lines.strip() if lines else None,
recommendation=self._get_recommendation(rule_id, extra),
metadata={
"rule_id": rule_id,
"opengrep_severity": severity,
"confidence": confidence,
"cwe": cwe,
"owasp": owasp,
"fix": extra.get("fix", ""),
"impact": extra.get("impact", ""),
"likelihood": extra.get("likelihood", ""),
"references": extra.get("references", []),
"tool": "opengrep",
}
)
findings.append(finding)
except json.JSONDecodeError as e:
logger.warning(f"Failed to parse OpenGrep output: {e}. Output snippet: {output[:200]}...")
except Exception as e:
logger.warning(f"Error processing OpenGrep results: {e}", exc_info=True)
return findings
def _map_severity(self, opengrep_severity: str) -> str:
"""Map OpenGrep severity to our standard severity levels"""
severity_map = {
"ERROR": "high",
"WARNING": "medium",
"INFO": "low"
}
return severity_map.get(opengrep_severity.upper(), "medium")
def _get_category(self, rule_id: str, extra: Dict[str, Any]) -> str:
"""Determine finding category based on rule and metadata"""
rule_metadata = extra.get("metadata", {})
cwe_list = rule_metadata.get("cwe", [])
owasp_list = rule_metadata.get("owasp", [])
rule_lower = rule_id.lower()
# Android-specific categories
if "injection" in rule_lower or "sql" in rule_lower:
return "injection"
elif "intent" in rule_lower:
return "android-intent"
elif "webview" in rule_lower:
return "android-webview"
elif "deeplink" in rule_lower:
return "android-deeplink"
elif "storage" in rule_lower or "sharedpreferences" in rule_lower:
return "android-storage"
elif "logging" in rule_lower or "log" in rule_lower:
return "android-logging"
elif "clipboard" in rule_lower:
return "android-clipboard"
elif "activity" in rule_lower or "service" in rule_lower or "provider" in rule_lower:
return "android-component"
elif "crypto" in rule_lower or "encrypt" in rule_lower:
return "cryptography"
elif "hardcode" in rule_lower or "secret" in rule_lower:
return "secrets"
elif "auth" in rule_lower:
return "authentication"
elif cwe_list:
return f"cwe-{cwe_list[0]}"
elif owasp_list:
return f"owasp-{owasp_list[0].replace(' ', '-').lower()}"
else:
return "android-security"
def _get_recommendation(self, rule_id: str, extra: Dict[str, Any]) -> str:
"""Generate recommendation based on rule and metadata"""
fix_suggestion = extra.get("fix", "")
if fix_suggestion:
return fix_suggestion
rule_lower = rule_id.lower()
# Android-specific recommendations
if "injection" in rule_lower or "sql" in rule_lower:
return "Use parameterized queries or Room database with type-safe queries to prevent SQL injection."
elif "intent" in rule_lower:
return "Validate all incoming Intent data and use explicit Intents when possible to prevent Intent manipulation attacks."
elif "webview" in rule_lower and "javascript" in rule_lower:
return "Disable JavaScript in WebView if not needed, or implement proper JavaScript interfaces with @JavascriptInterface annotation."
elif "deeplink" in rule_lower:
return "Validate all deeplink URLs and sanitize user input to prevent deeplink hijacking attacks."
elif "storage" in rule_lower or "sharedpreferences" in rule_lower:
return "Encrypt sensitive data before storing in SharedPreferences or use EncryptedSharedPreferences for Android API 23+."
elif "logging" in rule_lower:
return "Remove sensitive data from logs in production builds. Use ProGuard/R8 to strip logging statements."
elif "clipboard" in rule_lower:
return "Avoid placing sensitive data on the clipboard. If necessary, clear clipboard data when no longer needed."
elif "crypto" in rule_lower:
return "Use modern cryptographic algorithms (AES-GCM, RSA-OAEP) and Android Keystore for key management."
elif "hardcode" in rule_lower or "secret" in rule_lower:
return "Remove hardcoded secrets. Use Android Keystore, environment variables, or secure configuration management."
else:
return "Review this Android security issue and apply appropriate fixes based on Android security best practices."
def _create_summary(self, findings: List[ModuleFinding]) -> Dict[str, Any]:
"""Create analysis summary"""
severity_counts = {"critical": 0, "high": 0, "medium": 0, "low": 0}
category_counts = {}
rule_counts = {}
for finding in findings:
# Count by severity
severity_counts[finding.severity] += 1
# Count by category
category = finding.category
category_counts[category] = category_counts.get(category, 0) + 1
# Count by rule
rule_id = finding.metadata.get("rule_id", "unknown")
rule_counts[rule_id] = rule_counts.get(rule_id, 0) + 1
return {
"total_findings": len(findings),
"severity_counts": severity_counts,
"category_counts": category_counts,
"top_rules": dict(sorted(rule_counts.items(), key=lambda x: x[1], reverse=True)[:10]),
"files_analyzed": len(set(f.file_path for f in findings if f.file_path))
}
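The severity mapping and the severity/confidence gate applied to each raw result above can be exercised in isolation. This is an illustrative sketch with hypothetical names, not part of the `OpenGrepAndroid` module itself:

```python
# Illustrative sketch of the severity mapping and result filtering above;
# function and variable names are hypothetical, not part of the module.
SEVERITY_MAP = {"ERROR": "high", "WARNING": "medium", "INFO": "low"}

def map_severity(opengrep_severity: str) -> str:
    # Unknown levels fall back to "medium", matching _map_severity.
    return SEVERITY_MAP.get(opengrep_severity.upper(), "medium")

def keep_result(result: dict, allowed_severities: set, allowed_confidences: set) -> bool:
    # Mirrors the filter in _parse_opengrep_output: both the severity and
    # the confidence of a raw result must be in the allowed sets.
    extra = result.get("extra", {})
    severity = extra.get("severity", "INFO").upper()
    confidence = extra.get(
        "confidence", extra.get("metadata", {}).get("confidence", "MEDIUM")
    ).upper()
    return severity in allowed_severities and confidence in allowed_confidences

raw = {"check_id": "insecure-logging", "extra": {"severity": "WARNING", "confidence": "HIGH"}}
print(map_severity("ERROR"))                             # high
print(keep_result(raw, {"ERROR", "WARNING"}, {"HIGH"}))  # True
```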

View File

@@ -0,0 +1,35 @@
"""
Android Static Analysis Workflow
Comprehensive Android application security testing combining:
- Jadx APK decompilation
- OpenGrep/Semgrep static analysis with Android-specific rules
- MobSF mobile security framework analysis
"""
# Copyright (c) 2025 FuzzingLabs
#
# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
# at the root of this repository for details.
#
# After the Change Date (four years from publication), this version of the
# Licensed Work will be made available under the Apache License, Version 2.0.
# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
#
# Additional attribution and requirements are provided in the NOTICE file.
from .workflow import AndroidStaticAnalysisWorkflow
from .activities import (
decompile_with_jadx_activity,
scan_with_opengrep_activity,
scan_with_mobsf_activity,
generate_android_sarif_activity,
)
__all__ = [
"AndroidStaticAnalysisWorkflow",
"decompile_with_jadx_activity",
"scan_with_opengrep_activity",
"scan_with_mobsf_activity",
"generate_android_sarif_activity",
]

View File

@@ -0,0 +1,200 @@
"""
Android Static Analysis Workflow Activities
Activities for the Android security testing workflow:
- decompile_with_jadx_activity: Decompile APK using Jadx
- scan_with_opengrep_activity: Analyze code with OpenGrep/Semgrep
- scan_with_mobsf_activity: Scan APK with MobSF
- generate_android_sarif_activity: Generate combined SARIF report
"""
# Copyright (c) 2025 FuzzingLabs
#
# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
# at the root of this repository for details.
#
# After the Change Date (four years from publication), this version of the
# Licensed Work will be made available under the Apache License, Version 2.0.
# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
#
# Additional attribution and requirements are provided in the NOTICE file.
import logging
import sys
from pathlib import Path
from temporalio import activity
# Configure logging
logger = logging.getLogger(__name__)
# Add toolbox to path for module imports
sys.path.insert(0, '/app/toolbox')
@activity.defn(name="decompile_with_jadx")
async def decompile_with_jadx_activity(workspace_path: str, config: dict) -> dict:
"""
Decompile Android APK to Java source code using Jadx.
Args:
workspace_path: Path to the workspace directory
config: JadxDecompiler configuration
Returns:
Decompilation results dictionary
"""
logger.info(f"Activity: decompile_with_jadx (workspace={workspace_path})")
try:
from modules.android import JadxDecompiler
workspace = Path(workspace_path)
if not workspace.exists():
raise FileNotFoundError(f"Workspace not found: {workspace_path}")
decompiler = JadxDecompiler()
result = await decompiler.execute(config, workspace)
logger.info(
f"✓ Jadx decompilation completed: "
f"{result.summary.get('java_files', 0)} Java files generated"
)
return result.dict()
except Exception as e:
logger.error(f"Jadx decompilation failed: {e}", exc_info=True)
raise
@activity.defn(name="scan_with_opengrep")
async def scan_with_opengrep_activity(workspace_path: str, config: dict) -> dict:
"""
Analyze Android code for security issues using OpenGrep/Semgrep.
Args:
workspace_path: Path to the workspace directory
config: OpenGrepAndroid configuration
Returns:
Analysis results dictionary
"""
logger.info(f"Activity: scan_with_opengrep (workspace={workspace_path})")
try:
from modules.android import OpenGrepAndroid
workspace = Path(workspace_path)
if not workspace.exists():
raise FileNotFoundError(f"Workspace not found: {workspace_path}")
analyzer = OpenGrepAndroid()
result = await analyzer.execute(config, workspace)
logger.info(
f"✓ OpenGrep analysis completed: "
f"{result.summary.get('total_findings', 0)} security issues found"
)
return result.dict()
except Exception as e:
logger.error(f"OpenGrep analysis failed: {e}", exc_info=True)
raise
@activity.defn(name="scan_with_mobsf")
async def scan_with_mobsf_activity(workspace_path: str, config: dict) -> dict:
"""
Analyze Android APK for security issues using MobSF.
Args:
workspace_path: Path to the workspace directory
config: MobSFScanner configuration
Returns:
Scan results dictionary
"""
logger.info(f"Activity: scan_with_mobsf (workspace={workspace_path})")
try:
from modules.android import MobSFScanner
workspace = Path(workspace_path)
if not workspace.exists():
raise FileNotFoundError(f"Workspace not found: {workspace_path}")
scanner = MobSFScanner()
result = await scanner.execute(config, workspace)
logger.info(
f"✓ MobSF scan completed: "
f"{result.summary.get('total_findings', 0)} findings"
)
return result.dict()
except Exception as e:
logger.error(f"MobSF scan failed: {e}", exc_info=True)
raise
@activity.defn(name="generate_android_sarif")
async def generate_android_sarif_activity(
jadx_result: dict,
opengrep_result: dict,
mobsf_result: dict | None,
config: dict,
workspace_path: str
) -> dict:
"""
Generate combined SARIF report from all Android security findings.
Args:
jadx_result: Jadx decompilation results
opengrep_result: OpenGrep analysis results
mobsf_result: MobSF scan results (may be None if disabled)
config: Reporter configuration
workspace_path: Workspace path
Returns:
SARIF report dictionary
"""
logger.info("Activity: generate_android_sarif")
try:
from modules.reporter import SARIFReporter
workspace = Path(workspace_path)
# Collect all findings
all_findings = []
all_findings.extend(opengrep_result.get("findings", []))
if mobsf_result:
all_findings.extend(mobsf_result.get("findings", []))
# Prepare reporter config
reporter_config = {
**(config or {}),
"findings": all_findings,
"tool_name": "FuzzForge Android Static Analysis",
"tool_version": "1.0.0",
"metadata": {
"jadx_version": "1.5.0",
"opengrep_version": "1.45.0",
"mobsf_version": "3.9.7",
"java_files_decompiled": jadx_result.get("summary", {}).get("java_files", 0),
}
}
reporter = SARIFReporter()
result = await reporter.execute(reporter_config, workspace)
sarif_report = result.dict().get("sarif", {})
logger.info(f"✓ SARIF report generated with {len(all_findings)} findings")
return sarif_report
except Exception as e:
logger.error(f"SARIF report generation failed: {e}", exc_info=True)
raise

View File

@@ -0,0 +1,172 @@
name: android_static_analysis
version: "1.0.0"
vertical: android
description: "Comprehensive Android application security testing using Jadx decompilation, OpenGrep static analysis, and MobSF mobile security framework"
author: "FuzzForge Team"
tags:
- "android"
- "mobile"
- "static-analysis"
- "security"
- "opengrep"
- "semgrep"
- "mobsf"
- "jadx"
- "apk"
- "sarif"
# Workspace isolation mode
# Using "shared" mode for read-only APK analysis (no file modifications except decompilation output)
workspace_isolation: "shared"
parameters:
type: object
properties:
apk_path:
type: string
description: "Path to the APK file to analyze (relative to uploaded target or absolute within workspace)"
default: ""
decompile_apk:
type: boolean
description: "Whether to decompile APK with Jadx before OpenGrep analysis"
default: true
jadx_config:
type: object
description: "Jadx decompiler configuration"
properties:
output_dir:
type: string
description: "Output directory for decompiled sources"
default: "jadx_output"
overwrite:
type: boolean
description: "Overwrite existing decompilation output"
default: true
threads:
type: integer
description: "Number of decompilation threads"
default: 4
minimum: 1
maximum: 32
decompiler_args:
type: array
items:
type: string
description: "Additional Jadx arguments"
default: []
opengrep_config:
type: object
description: "OpenGrep/Semgrep static analysis configuration"
properties:
config:
type: string
enum: ["auto", "p/security-audit", "p/owasp-top-ten", "p/cwe-top-25"]
description: "Preset OpenGrep ruleset (ignored if custom_rules_path is set)"
default: "auto"
custom_rules_path:
type: string
description: "Path to custom OpenGrep rules directory (use Android-specific rules for best results)"
default: "/app/toolbox/modules/android/custom_rules"
languages:
type: array
items:
type: string
description: "Programming languages to analyze (defaults to java, kotlin for Android)"
default: ["java", "kotlin"]
include_patterns:
type: array
items:
type: string
description: "File patterns to include in scan"
default: []
exclude_patterns:
type: array
items:
type: string
description: "File patterns to exclude from scan"
default: []
max_target_bytes:
type: integer
description: "Maximum file size to analyze (bytes)"
default: 1000000
timeout:
type: integer
description: "Analysis timeout in seconds"
default: 300
severity:
type: array
items:
type: string
enum: ["ERROR", "WARNING", "INFO"]
description: "Severity levels to include in results"
default: ["ERROR", "WARNING", "INFO"]
confidence:
type: array
items:
type: string
enum: ["HIGH", "MEDIUM", "LOW"]
description: "Confidence levels to include in results"
default: ["HIGH", "MEDIUM", "LOW"]
mobsf_config:
type: object
description: "MobSF scanner configuration"
properties:
enabled:
type: boolean
description: "Enable MobSF analysis (requires APK file)"
default: true
mobsf_url:
type: string
description: "MobSF server URL"
default: "http://localhost:8877"
api_key:
type: string
description: "MobSF API key (if not provided, uses MOBSF_API_KEY env var)"
default: null
rescan:
type: boolean
description: "Force rescan even if APK was previously analyzed"
default: false
reporter_config:
type: object
description: "SARIF reporter configuration"
properties:
include_code_flows:
type: boolean
description: "Include code flow information in SARIF output"
default: false
logical_id:
type: string
description: "Custom identifier for the SARIF report"
default: null
output_schema:
type: object
properties:
sarif:
type: object
description: "SARIF-formatted findings from all Android security tools"
summary:
type: object
description: "Android security analysis summary"
properties:
total_findings:
type: integer
decompiled_java_files:
type: integer
description: "Number of Java files decompiled by Jadx"
opengrep_findings:
type: integer
description: "Findings from OpenGrep/Semgrep analysis"
mobsf_findings:
type: integer
description: "Findings from MobSF analysis"
severity_distribution:
type: object
category_distribution:
type: object

View File

@@ -0,0 +1,261 @@
"""
Android Static Analysis Workflow - Temporal Version
Comprehensive security testing for Android applications using Jadx, OpenGrep, and MobSF.
"""
# Copyright (c) 2025 FuzzingLabs
#
# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
# at the root of this repository for details.
#
# After the Change Date (four years from publication), this version of the
# Licensed Work will be made available under the Apache License, Version 2.0.
# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
#
# Additional attribution and requirements are provided in the NOTICE file.
from datetime import timedelta
from typing import Dict, Any, Optional
from pathlib import Path
from temporalio import workflow
from temporalio.common import RetryPolicy
# Import activity interfaces (will be executed by worker)
with workflow.unsafe.imports_passed_through():
import logging
logger = logging.getLogger(__name__)
@workflow.defn
class AndroidStaticAnalysisWorkflow:
"""
Android Static Application Security Testing workflow.
This workflow:
1. Downloads target (APK) from MinIO
2. (Optional) Decompiles APK using Jadx
3. Runs OpenGrep/Semgrep static analysis on decompiled code
4. (Optional) Runs MobSF comprehensive security scan
5. Generates a SARIF report with all findings
6. Uploads results to MinIO
7. Cleans up cache
"""
@workflow.run
async def run(
self,
target_id: str,
apk_path: Optional[str] = None,
decompile_apk: bool = True,
jadx_config: Optional[Dict[str, Any]] = None,
opengrep_config: Optional[Dict[str, Any]] = None,
mobsf_config: Optional[Dict[str, Any]] = None,
reporter_config: Optional[Dict[str, Any]] = None
) -> Dict[str, Any]:
"""
Main workflow execution.
Args:
target_id: UUID of the uploaded target (APK) in MinIO
apk_path: Path to APK file within target (if target is not a single APK)
decompile_apk: Whether to decompile APK with Jadx before OpenGrep
jadx_config: Configuration for Jadx decompiler
opengrep_config: Configuration for OpenGrep analyzer
mobsf_config: Configuration for MobSF scanner
reporter_config: Configuration for SARIF reporter
Returns:
Dictionary containing SARIF report and summary
"""
workflow_id = workflow.info().workflow_id
workflow.logger.info(
f"Starting AndroidStaticAnalysisWorkflow "
f"(workflow_id={workflow_id}, target_id={target_id})"
)
# Default configurations
if not jadx_config:
jadx_config = {
"output_dir": "jadx_output",
"overwrite": True,
"threads": 4,
"decompiler_args": []
}
if not opengrep_config:
opengrep_config = {
"config": "auto",
"custom_rules_path": "/app/toolbox/modules/android/custom_rules",
"languages": ["java", "kotlin"],
"severity": ["ERROR", "WARNING", "INFO"],
"confidence": ["HIGH", "MEDIUM", "LOW"],
"timeout": 300,
}
if not mobsf_config:
mobsf_config = {
"enabled": True,
"mobsf_url": "http://localhost:8877",
"api_key": None,
"rescan": False,
}
if not reporter_config:
reporter_config = {
"include_code_flows": False
}
# Activity retry policy
retry_policy = RetryPolicy(
initial_interval=timedelta(seconds=1),
maximum_interval=timedelta(seconds=60),
maximum_attempts=3,
backoff_coefficient=2.0,
)
# Phase 0: Download target from MinIO
workflow.logger.info(f"Phase 0: Downloading target from MinIO (target_id={target_id})")
download_result = await workflow.execute_activity(
"download_target",
args=[target_id],
start_to_close_timeout=timedelta(minutes=10),
retry_policy=retry_policy,
)
workspace_path = download_result["workspace_path"]
workflow.logger.info(f"✓ Target downloaded to: {workspace_path}")
# Determine APK path
actual_apk_path = apk_path if apk_path else download_result.get("primary_file", "app.apk")
# Phase 1: Jadx decompilation (if enabled and APK provided)
jadx_result = None
analysis_workspace = workspace_path
if decompile_apk and actual_apk_path:
workflow.logger.info(f"Phase 1: Decompiling APK with Jadx (apk={actual_apk_path})")
jadx_activity_config = {
**jadx_config,
"apk_path": actual_apk_path
}
jadx_result = await workflow.execute_activity(
"decompile_with_jadx",
args=[workspace_path, jadx_activity_config],
start_to_close_timeout=timedelta(minutes=15),
retry_policy=retry_policy,
)
if jadx_result.get("status") == "success":
# Use decompiled sources as workspace for OpenGrep
source_dir = jadx_result.get("summary", {}).get("source_dir")
if source_dir:
analysis_workspace = source_dir
workflow.logger.info(
f"✓ Jadx decompiled {jadx_result.get('summary', {}).get('java_files', 0)} Java files"
)
else:
workflow.logger.warning(f"Jadx decompilation failed: {jadx_result.get('error')}")
else:
workflow.logger.info("Phase 1: Jadx decompilation skipped")
# Phase 2: OpenGrep static analysis
workflow.logger.info(f"Phase 2: OpenGrep analysis on {analysis_workspace}")
opengrep_result = await workflow.execute_activity(
"scan_with_opengrep",
args=[analysis_workspace, opengrep_config],
start_to_close_timeout=timedelta(minutes=20),
retry_policy=retry_policy,
)
workflow.logger.info(
f"✓ OpenGrep completed: {opengrep_result.get('summary', {}).get('total_findings', 0)} findings"
)
# Phase 3: MobSF analysis (if enabled and APK provided)
mobsf_result = None
if mobsf_config.get("enabled", True) and actual_apk_path:
workflow.logger.info(f"Phase 3: MobSF scan on APK: {actual_apk_path}")
mobsf_activity_config = {
**mobsf_config,
"file_path": actual_apk_path
}
try:
mobsf_result = await workflow.execute_activity(
"scan_with_mobsf",
args=[workspace_path, mobsf_activity_config],
start_to_close_timeout=timedelta(minutes=30),
retry_policy=RetryPolicy(
maximum_attempts=2 # MobSF can be flaky, limit retries
),
)
workflow.logger.info(
f"✓ MobSF completed: {mobsf_result.get('summary', {}).get('total_findings', 0)} findings"
)
except Exception as e:
workflow.logger.warning(f"MobSF scan failed (continuing without it): {e}")
mobsf_result = None
else:
workflow.logger.info("Phase 3: MobSF scan skipped (disabled or no APK)")
# Phase 4: Generate SARIF report
workflow.logger.info("Phase 4: Generating SARIF report")
sarif_report = await workflow.execute_activity(
"generate_android_sarif",
args=[jadx_result or {}, opengrep_result, mobsf_result, reporter_config, workspace_path],
start_to_close_timeout=timedelta(minutes=5),
retry_policy=retry_policy,
)
# Phase 5: Upload results to MinIO
workflow.logger.info("Phase 5: Uploading results to MinIO")
upload_result = await workflow.execute_activity(
"upload_results",
args=[target_id, sarif_report],
start_to_close_timeout=timedelta(minutes=10),
retry_policy=retry_policy,
)
workflow.logger.info(f"✓ Results uploaded: {upload_result.get('result_url')}")
# Phase 6: Cleanup cache
workflow.logger.info("Phase 6: Cleaning up cache")
await workflow.execute_activity(
"cleanup_cache",
args=[target_id],
start_to_close_timeout=timedelta(minutes=5),
retry_policy=RetryPolicy(maximum_attempts=1), # Don't retry cleanup
)
# Calculate summary
sarif_runs = sarif_report.get("runs") or [{}]
total_findings = len(sarif_runs[0].get("results", []))
summary = {
"workflow": "android_static_analysis",
"target_id": target_id,
"total_findings": total_findings,
"decompiled_java_files": (jadx_result or {}).get("summary", {}).get("java_files", 0) if jadx_result else 0,
"opengrep_findings": opengrep_result.get("summary", {}).get("total_findings", 0),
"mobsf_findings": mobsf_result.get("summary", {}).get("total_findings", 0) if mobsf_result else 0,
"result_url": upload_result.get("result_url"),
}
workflow.logger.info(
f"✅ AndroidStaticAnalysisWorkflow completed successfully: {total_findings} findings"
)
return {
"sarif": sarif_report,
"summary": summary,
}
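The `RetryPolicy` used above (1 s initial interval, backoff coefficient 2.0, 60 s cap, 3 attempts) implies a fixed wait schedule between attempts. This sketch just recomputes that schedule for illustration; it is not Temporal API code:

```python
# Recompute the inter-attempt waits implied by the workflow's RetryPolicy:
# initial_interval=1s, backoff_coefficient=2.0, maximum_interval=60s,
# maximum_attempts=3 (so at most two waits between the three attempts).
def retry_schedule(initial: float, coefficient: float, cap: float, max_attempts: int) -> list:
    waits = []
    interval = initial
    for _ in range(max_attempts - 1):
        waits.append(min(interval, cap))
        interval *= coefficient
    return waits

print(retry_schedule(1.0, 2.0, 60.0, 3))  # [1.0, 2.0]
print(retry_schedule(1.0, 2.0, 60.0, 8))  # later waits are capped at 60.0
```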

Binary file not shown.

Binary file not shown.

View File

@@ -0,0 +1,695 @@
{
"tool": {
"name": "FuzzForge Security Assessment",
"version": "1.0.0"
},
"summary": {
"total_issues": 68,
"by_severity": {
"warning": 51,
"error": 17
}
},
"findings": [
{
"rule_id": "sensitive_file_medium",
"severity": "warning",
"message": "Found potentially sensitive file at .env",
"location": {
"file": ".env",
"line": null,
"column": null
}
},
{
"rule_id": "sensitive_file_medium",
"severity": "warning",
"message": "Found potentially sensitive file at .git-credentials",
"location": {
"file": ".git-credentials",
"line": null,
"column": null
}
},
{
"rule_id": "sensitive_file_medium",
"severity": "warning",
"message": "Found potentially sensitive file at private_key.pem",
"location": {
"file": "private_key.pem",
"line": null,
"column": null
}
},
{
"rule_id": "sensitive_file_medium",
"severity": "warning",
"message": "Found potentially sensitive file at wallet.json",
"location": {
"file": "wallet.json",
"line": null,
"column": null
}
},
{
"rule_id": "sensitive_file_medium",
"severity": "warning",
"message": "Found potentially sensitive file at SECRETS_GROUND_TRUTH.json",
"location": {
"file": "SECRETS_GROUND_TRUTH.json",
"line": null,
"column": null
}
},
{
"rule_id": "sensitive_file_medium",
"severity": "warning",
"message": "Found potentially sensitive file at .npmrc",
"location": {
"file": ".npmrc",
"line": null,
"column": null
}
},
{
"rule_id": "sensitive_file_medium",
"severity": "warning",
"message": "Found potentially sensitive file at .fuzzforge/.env",
"location": {
"file": ".fuzzforge/.env",
"line": null,
"column": null
}
},
{
"rule_id": "sensitive_file_medium",
"severity": "warning",
"message": "Found potentially sensitive file at .fuzzforge/.env.template",
"location": {
"file": ".fuzzforge/.env.template",
"line": null,
"column": null
}
},
{
"rule_id": "sensitive_file_medium",
"severity": "warning",
"message": "Found potentially sensitive file at data/credentials.json",
"location": {
"file": "data/credentials.json",
"line": null,
"column": null
}
},
{
"rule_id": "sensitive_file_medium",
"severity": "warning",
"message": "Found potentially sensitive file at data/api_keys.txt",
"location": {
"file": "data/api_keys.txt",
"line": null,
"column": null
}
},
{
"rule_id": "sql_injection_high",
"severity": "error",
"message": "Detected potential SQL injection vulnerability via F-string in SQL query",
"location": {
"file": "app.py",
"line": 31,
"column": null
}
},
{
"rule_id": "hardcoded_secret_high",
"severity": "error",
"message": "Found potential hardcoded API Key in src/api_handler.py",
"location": {
"file": "src/api_handler.py",
"line": 25,
"column": null
}
},
{
"rule_id": "hardcoded_secret_medium",
"severity": "warning",
"message": "Found potential hardcoded Authentication Token in src/api_handler.py",
"location": {
"file": "src/api_handler.py",
"line": 21,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function eval(): Arbitrary code execution",
"location": {
"file": "src/api_handler.py",
"line": 34,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function eval(): Arbitrary code execution",
"location": {
"file": "src/api_handler.py",
"line": 54,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function exec(): Arbitrary code execution",
"location": {
"file": "src/api_handler.py",
"line": 49,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function os.system(): Command injection risk",
"location": {
"file": "src/api_handler.py",
"line": 44,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function os.system(): Command injection risk",
"location": {
"file": "src/api_handler.py",
"line": 71,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function subprocess with shell=True: Command injection risk",
"location": {
"file": "src/api_handler.py",
"line": 39,
"column": null
}
},
{
"rule_id": "sql_injection_high",
"severity": "error",
"message": "Detected potential SQL injection vulnerability via String concatenation in SQL",
"location": {
"file": "src/database.py",
"line": 43,
"column": null
}
},
{
"rule_id": "sql_injection_high",
"severity": "error",
"message": "Detected potential SQL injection vulnerability via String formatting in SQL",
"location": {
"file": "src/database.py",
"line": 50,
"column": null
}
},
{
"rule_id": "sql_injection_high",
"severity": "error",
"message": "Detected potential SQL injection vulnerability via String formatting in SQL",
"location": {
"file": "src/database.py",
"line": 57,
"column": null
}
},
{
"rule_id": "sql_injection_high",
"severity": "error",
"message": "Detected potential SQL injection vulnerability via F-string in SQL query",
"location": {
"file": "src/database.py",
"line": 50,
"column": null
}
},
{
"rule_id": "sql_injection_high",
"severity": "error",
"message": "Detected potential SQL injection vulnerability via Dynamic query building",
"location": {
"file": "src/database.py",
"line": 43,
"column": null
}
},
{
"rule_id": "sql_injection_high",
"severity": "error",
"message": "Detected potential SQL injection vulnerability via Dynamic query building",
"location": {
"file": "src/database.py",
"line": 75,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function os.system(): Command injection risk",
"location": {
"file": "src/database.py",
"line": 69,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function pickle.load(): Deserialization vulnerability",
"location": {
"file": "src/database.py",
"line": 64,
"column": null
}
},
{
"rule_id": "hardcoded_secret_high",
"severity": "error",
"message": "Found potential hardcoded Private Key in scripts/backup.js",
"location": {
"file": "scripts/backup.js",
"line": 81,
"column": null
}
},
{
"rule_id": "hardcoded_secret_medium",
"severity": "warning",
"message": "Found potential hardcoded Potential Secret Hash in scripts/backup.js",
"location": {
"file": "scripts/backup.js",
"line": 81,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function eval(): Arbitrary code execution",
"location": {
"file": "scripts/backup.js",
"line": 23,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function new Function(): Arbitrary code execution",
"location": {
"file": "scripts/backup.js",
"line": 28,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function innerHTML: XSS vulnerability",
"location": {
"file": "scripts/backup.js",
"line": 33,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function innerHTML: XSS vulnerability",
"location": {
"file": "scripts/backup.js",
"line": 37,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function document.write(): XSS vulnerability",
"location": {
"file": "scripts/backup.js",
"line": 42,
"column": null
}
},
{
"rule_id": "hardcoded_secret_high",
"severity": "error",
"message": "Found potential hardcoded Private Key in src/Main.java",
"location": {
"file": "src/Main.java",
"line": 77,
"column": null
}
},
{
"rule_id": "sql_injection_high",
"severity": "error",
"message": "Detected potential SQL injection vulnerability via String concatenation in SQL",
"location": {
"file": "src/Main.java",
"line": 23,
"column": null
}
},
{
"rule_id": "sql_injection_high",
"severity": "error",
"message": "Detected potential SQL injection vulnerability via String concatenation in SQL",
"location": {
"file": "src/Main.java",
"line": 29,
"column": null
}
},
{
"rule_id": "sql_injection_high",
"severity": "error",
"message": "Detected potential SQL injection vulnerability via Dynamic query building",
"location": {
"file": "src/Main.java",
"line": 23,
"column": null
}
},
{
"rule_id": "sql_injection_high",
"severity": "error",
"message": "Detected potential SQL injection vulnerability via Dynamic query building",
"location": {
"file": "src/Main.java",
"line": 29,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function eval(): Arbitrary code execution",
"location": {
"file": "scripts/deploy.php",
"line": 28,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function exec(): Command execution",
"location": {
"file": "scripts/deploy.php",
"line": 22,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function exec(): Command execution",
"location": {
"file": "scripts/deploy.php",
"line": 23,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function system(): Command execution",
"location": {
"file": "scripts/deploy.php",
"line": 21,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function shell_exec(): Command execution",
"location": {
"file": "scripts/deploy.php",
"line": 23,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_GET usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 12,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_GET usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 21,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_GET usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 23,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_GET usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 24,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_GET usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 31,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_GET usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 45,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_GET usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 50,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_GET usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 57,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_POST usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 13,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_POST usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 22,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_POST usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 27,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_POST usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 32,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_POST usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 40,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_POST usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 46,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_POST usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 53,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_POST usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 54,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_POST usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 61,
"column": null
}
},
{
"rule_id": "dangerous_function_medium",
"severity": "warning",
"message": "Use of potentially dangerous function Direct $_POST usage: Input validation missing",
"location": {
"file": "scripts/deploy.php",
"line": 62,
"column": null
}
},
{
"rule_id": "hardcoded_secret_high",
"severity": "error",
"message": "Found potential hardcoded API Key in src/utils.rb",
"location": {
"file": "src/utils.rb",
"line": 64,
"column": null
}
},
{
"rule_id": "hardcoded_secret_medium",
"severity": "warning",
"message": "Found potential hardcoded Hardcoded Password in src/utils.rb",
"location": {
"file": "src/utils.rb",
"line": 63,
"column": null
}
},
{
"rule_id": "hardcoded_secret_high",
"severity": "error",
"message": "Found potential hardcoded Private Key in src/app.go",
"location": {
"file": "src/app.go",
"line": 59,
"column": null
}
},
{
"rule_id": "hardcoded_secret_high",
"severity": "error",
"message": "Found potential hardcoded Private Key in src/app.go",
"location": {
"file": "src/app.go",
"line": 62,
"column": null
}
},
{
"rule_id": "hardcoded_secret_medium",
"severity": "warning",
"message": "Found potential hardcoded Potential Secret Hash in src/app.go",
"location": {
"file": "src/app.go",
"line": 59,
"column": null
}
},
{
"rule_id": "hardcoded_secret_medium",
"severity": "warning",
"message": "Found potential hardcoded Potential Secret Hash in src/app.go",
"location": {
"file": "src/app.go",
"line": 62,
"column": null
}
}
]
}
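The finding entries above map naturally onto SARIF results. A minimal, hypothetical sketch of that conversion follows — the `to_sarif_result` helper and the severity mapping are illustrative assumptions, not the actual `generate_android_sarif` activity:

```python
# Hypothetical sketch: convert one finding dict (shape as in the report above)
# into a SARIF v2.1.0 result object. Not the real generate_android_sarif code.
SEVERITY_TO_SARIF = {"error": "error", "warning": "warning", "info": "note"}

def to_sarif_result(finding: dict) -> dict:
    """Map a single finding entry to a SARIF 'result' object."""
    region = {"startLine": finding["location"]["line"]}
    # SARIF columns are 1-based; omit the field when the scanner reports none
    if finding["location"].get("column") is not None:
        region["startColumn"] = finding["location"]["column"]
    return {
        "ruleId": finding["rule_id"],
        "level": SEVERITY_TO_SARIF.get(finding["severity"], "warning"),
        "message": {"text": finding["message"]},
        "locations": [{
            "physicalLocation": {
                "artifactLocation": {"uri": finding["location"]["file"]},
                "region": region,
            }
        }],
    }

finding = {
    "rule_id": "sql_injection_high",
    "severity": "error",
    "message": "Detected potential SQL injection vulnerability via Dynamic query building",
    "location": {"file": "src/database.py", "line": 50, "column": None},
}
result = to_sarif_result(finding)
print(result["level"])  # error
```

Combining the per-tool reports then reduces to concatenating each tool's converted results into one SARIF `run` per tool.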
@@ -7,8 +7,10 @@
# - Frida (dynamic instrumentation)
# - androguard (Python APK analysis)
# - MobSF dependencies
#
# Note: Uses amd64 platform for compatibility with Android 32-bit tools
-FROM python:3.11-slim-bookworm
+FROM --platform=linux/amd64 python:3.11-slim-bookworm
# Set working directory
WORKDIR /app
@@ -23,7 +25,7 @@ RUN apt-get update && apt-get install -y \
unzip \
# Java (required for Android tools)
openjdk-17-jdk \
-# Android tools dependencies
+# Android tools dependencies (32-bit libraries for emulated amd64)
lib32stdc++6 \
lib32z1 \
# Frida dependencies
@@ -34,9 +36,19 @@ RUN apt-get update && apt-get install -y \
# Network tools
netcat-openbsd \
tcpdump \
# MobSF dependencies
xfonts-75dpi \
xfonts-base \
# Cleanup
&& rm -rf /var/lib/apt/lists/*
# Install wkhtmltopdf (required for MobSF PDF reports)
RUN wget -q https://github.com/wkhtmltopdf/packaging/releases/download/0.12.6.1-3/wkhtmltox_0.12.6.1-3.bookworm_amd64.deb && \
apt-get update && \
apt-get install -y ./wkhtmltox_0.12.6.1-3.bookworm_amd64.deb && \
rm wkhtmltox_0.12.6.1-3.bookworm_amd64.deb && \
rm -rf /var/lib/apt/lists/*
# Install Android SDK Command Line Tools
ENV ANDROID_HOME=/opt/android-sdk
ENV PATH="${ANDROID_HOME}/cmdline-tools/latest/bin:${ANDROID_HOME}/platform-tools:${PATH}"
@@ -75,20 +87,56 @@ RUN pip3 install --no-cache-dir androguard pyaxmlparser
# Install Frida
RUN pip3 install --no-cache-dir frida-tools frida
# Install OpenGrep/Semgrep (expose as opengrep command)
RUN pip3 install --no-cache-dir semgrep==1.45.0 && \
ln -sf /usr/local/bin/semgrep /usr/local/bin/opengrep
# Install MobSF (Mobile Security Framework)
RUN git clone --depth 1 --branch v3.9.7 https://github.com/MobSF/Mobile-Security-Framework-MobSF.git /app/mobsf && \
cd /app/mobsf && \
./setup.sh
# Install aiohttp for async HTTP requests (used by MobSF scanner module)
RUN pip3 install --no-cache-dir aiohttp
# Create cache directory
RUN mkdir -p /cache && chmod 755 /cache
# Copy worker entrypoint (generic, works for all verticals)
COPY worker.py /app/worker.py
# Create startup script that runs MobSF in background and then starts worker
RUN echo '#!/bin/bash\n\
# Start MobSF server in background\n\
echo "Starting MobSF server in background..."\n\
cd /app/mobsf && ./run.sh 127.0.0.1:8877 > /tmp/mobsf.log 2>&1 &\n\
MOBSF_PID=$!\n\
echo "MobSF started with PID: $MOBSF_PID"\n\
\n\
# Wait a moment for MobSF to initialize\n\
sleep 5\n\
\n\
# Generate and store MobSF API key\n\
if [ -f /root/.MobSF/secret ]; then\n\
SECRET=$(cat /root/.MobSF/secret)\n\
export MOBSF_API_KEY=$(echo -n "$SECRET" | sha256sum | cut -d\" \" -f1)\n\
echo "MobSF API key generated and exported"\n\
fi\n\
\n\
# Start worker\n\
echo "Starting Temporal worker..."\n\
exec python3 /app/worker.py\n\
' > /app/start.sh && chmod +x /app/start.sh
# Add toolbox to Python path (mounted at runtime)
ENV PYTHONPATH="/app:/app/toolbox:${PYTHONPATH}"
ENV PYTHONUNBUFFERED=1
ENV JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
ENV MOBSF_PORT=8877
# Healthcheck
-HEALTHCHECK --interval=30s --timeout=10s --start-period=40s --retries=3 \
+HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=5 \
CMD python3 -c "import sys; sys.exit(0)"
-# Run worker
-CMD ["python3", "/app/worker.py"]
+# Run startup script (starts MobSF + worker)
+CMD ["/app/start.sh"]
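The startup script above derives the MobSF API key as the SHA-256 hex digest of the secret file, via `echo -n "$SECRET" | sha256sum | cut -d" " -f1`. A small sketch of the same derivation in Python (the secret value here is a throwaway example, not a real MobSF secret):

```python
import hashlib

def mobsf_api_key(secret: str) -> str:
    """Replicate: echo -n "$SECRET" | sha256sum | cut -d" " -f1"""
    return hashlib.sha256(secret.encode()).hexdigest()

# Throwaway example input, shown only to illustrate the derivation
print(mobsf_api_key("abc"))
# ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```

The scanner module can then send this value in its requests to the local MobSF server (port 8877 in this setup) the same way the shell pipeline's output is exported as `MOBSF_API_KEY`.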