Mirror of https://github.com/BigBodyCobain/Shadowbroker.git (synced 2026-04-29 14:35:57 +02:00)
Initial commit: ShadowBroker v1.0
Former-commit-id: 3b8912a8f67efd51c08cc6f4c77d43266b8fdc1b
@@ -0,0 +1,66 @@ .gitignore
# shadowbroker .gitignore
# ----------------------

# Dependencies
node_modules/
venv/
env/
.venv/

# Environment Variables & Secrets
.env
.env.local
.env.development.local
.env.test.local
.env.production.local

# Python caches & compiled files
__pycache__/
*.py[cod]
*$py.class
*.so
.Python

# Next.js build output
.next/
out/
build/

# Application Specific Caches & DBs
backend/ais_cache.json
backend/carrier_cache.json
backend/cctv.db
*.sqlite3

# OS generated files
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db

# IDEs and Editors
.vscode/
.idea/
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?

# Vercel / Deployment
.vercel

# Temp files
tmp/
*.log
*.tmp
*.bak
out.txt
out_sys.txt
rss_output.txt
merged.txt
tmp_fast.json
TheAirTraffic Database.xlsx
@@ -0,0 +1,301 @@ README.md

<div align="center">
  <h1>🛰️ S H A D O W B R O K E R</h1>
  <p><strong>Global Threat Intercept — Real-Time Geospatial Intelligence Platform</strong></p>
  <p><code>TOP SECRET // SI TK // NOFORN</code></p>
</div>

---

**ShadowBroker** is a real-time, full-spectrum geospatial intelligence dashboard that aggregates live data from dozens of open-source intelligence (OSINT) feeds and renders them on a unified dark-ops map interface. It tracks aircraft, ships, satellites, earthquakes, conflict zones, CCTV networks, GPS jamming, and breaking geopolitical events — all updating in real time.

Built with **Next.js**, **MapLibre GL**, **FastAPI**, and **Python**, it's designed for analysts, researchers, and enthusiasts who want a single-pane-of-glass view of global activity.

---

## ✨ Features

### 🛩️ Aviation Tracking

- **Commercial Flights** — Real-time positions via OpenSky Network (~5,000 aircraft)
- **Private Aircraft** — Light GA, turboprops, bizjets tracked separately
- **Private Jets** — High-net-worth individual aircraft with owner identification
- **Military Flights** — Tankers, ISR, fighters, transports via the adsb.lol military endpoint
- **Flight Trail Accumulation** — Persistent breadcrumb trails for all tracked aircraft
- **Holding Pattern Detection** — Automatically flags aircraft circling (>300° total turn; a sketch follows this list)
- **Aircraft Classification** — Shape-accurate SVG icons: airliners, turboprops, bizjets, helicopters
- **Grounded Detection** — Aircraft below 100ft AGL rendered with grey icons
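
The holding-pattern flag boils down to accumulating signed heading changes along a trail. A minimal sketch of that idea (a hypothetical helper, not the project's actual detector; the trail format is assumed from the breadcrumb feature above):

```python
# Hypothetical sketch — assumes a trail of (lat, lng, heading_deg) samples, oldest first.
def is_holding(trail: list[tuple[float, float, float]], threshold_deg: float = 300.0) -> bool:
    """Flag an aircraft as holding once its accumulated turn exceeds the threshold."""
    total_turn = 0.0
    for (_, _, prev_hdg), (_, _, hdg) in zip(trail, trail[1:]):
        delta = (hdg - prev_hdg + 180) % 360 - 180  # signed smallest angle, -180..180
        total_turn += delta
    return abs(total_turn) > threshold_deg

# Example: nine samples turning 45° each — a full circle — trips the flag.
samples = [(33.6, -111.9, h % 360) for h in range(0, 405, 45)]
print(is_holding(samples))  # True
```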

### 🚢 Maritime Tracking

- **AIS Vessel Stream** — 25,000+ vessels via aisstream.io WebSocket (real-time)
- **Ship Classification** — Cargo, tanker, passenger, yacht, military vessel types with color-coded icons
- **Carrier Strike Group Tracker** — All 11 active US Navy aircraft carriers with OSINT-estimated positions
  - Automated GDELT news scraping for carrier movement intelligence
  - 50+ geographic region-to-coordinate mappings
  - Disk-cached positions, auto-updates at 00:00 & 12:00 UTC
- **Cruise & Passenger Ships** — Dedicated layer for cruise liners and ferries
- **Clustered Display** — Ships cluster at low zoom with count labels, decluster on zoom-in

### 🛰️ Space & Satellites

- **Orbital Tracking** — Real-time satellite positions from the N2YO API
- **Mission-Type Classification** — Color-coded by mission: military recon (red), SAR (cyan), SIGINT (white), navigation (blue), early warning (magenta), commercial imaging (green), space station (gold)

### 🌍 Geopolitics & Conflict

- **Global Incidents** — GDELT-powered conflict event aggregation (last 8 hours, ~1,000 events)
- **Ukraine Frontline** — Live warfront GeoJSON from DeepState Map
- **SIGINT/RISINT News Feed** — Real-time RSS aggregation from multiple intelligence-focused sources
- **Region Dossier** — Right-click anywhere on the map for:
  - Country profile (population, capital, languages, currencies, area)
  - Head of state & government type (Wikidata SPARQL; example query after this list)
  - Local Wikipedia summary with thumbnail
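
For the head-of-state lookup, Wikidata's public SPARQL endpoint can be queried directly. A minimal illustration (the dossier service's real query may differ; `P35` is Wikidata's head-of-state property and `Q142`, France, is just an example entity):

```python
# Illustrative only — not necessarily the query region_dossier.py sends.
import requests

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"
QUERY = """
SELECT ?headLabel WHERE {
  wd:Q142 wdt:P35 ?head.
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

resp = requests.get(SPARQL_ENDPOINT, params={"query": QUERY, "format": "json"}, timeout=10)
for row in resp.json()["results"]["bindings"]:
    print(row["headLabel"]["value"])
```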

### 📷 Surveillance

- **CCTV Mesh** — 2,000+ live traffic cameras from:
  - 🇬🇧 Transport for London JamCams
  - 🇺🇸 Austin, TX TxDOT
  - 🇺🇸 NYC DOT
  - 🇸🇬 Singapore LTA
  - Custom URL ingestion
- **Feed Rendering** — Automatic detection & rendering of video, MJPEG, HLS, embed, satellite tile, and image feeds
- **Clustered Map Display** — Green dots cluster with count labels, decluster on zoom

### 📡 Signal Intelligence

- **GPS Jamming Detection** — Real-time analysis of aircraft NAC-P (Navigation Accuracy Category — Position) values
  - Grid-based aggregation identifies interference zones (sketched after this list)
  - Red overlay squares with "GPS JAM XX%" severity labels
- **Radio Intercept Panel** — Scanner-style UI for monitoring communications
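
A toy sketch of the grid-aggregation idea behind the jamming layer: bucket aircraft into lat/lng cells and flag cells where low-accuracy reports dominate. Cell size, the NAC-P cutoff, and field names here are assumptions, not the project's actual values:

```python
# Toy sketch — thresholds and field names are illustrative.
from collections import defaultdict

CELL_DEG = 1.0        # grid cell size in degrees (assumption)
NACP_DEGRADED = 6     # treat NAC-P below this as degraded accuracy (assumption)

def jam_cells(aircraft: list[dict], min_reports: int = 10, min_ratio: float = 0.5) -> list[dict]:
    """Return grid cells where a majority of aircraft report degraded NAC-P."""
    cells: dict[tuple[int, int], list[int]] = defaultdict(list)
    for ac in aircraft:
        key = (int(ac["lat"] // CELL_DEG), int(ac["lng"] // CELL_DEG))
        cells[key].append(1 if ac.get("nac_p", 11) < NACP_DEGRADED else 0)
    out = []
    for (row, col), flags in cells.items():
        ratio = sum(flags) / len(flags)
        if len(flags) >= min_reports and ratio >= min_ratio:
            out.append({"lat": row * CELL_DEG, "lng": col * CELL_DEG,
                        "severity_pct": round(ratio * 100)})
    return out
```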

### 🌐 Additional Layers

- **Earthquakes (24h)** — USGS real-time earthquake feed with magnitude-scaled markers
- **Day/Night Cycle** — Solar terminator overlay showing global daylight/darkness (a rough sketch of the math follows this list)
- **Global Markets Ticker** — Live financial market indices (minimizable)
- **Measurement Tool** — Point-to-point distance & bearing measurement on the map
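
The terminator overlay only needs the subsolar point; the day/night boundary is the great circle 90° away from it. A rough sketch (an approximation suitable for a visual overlay, not astronomy; the function is illustrative, not the app's code):

```python
import math

def subsolar_point(day_of_year: int, utc_hours: float) -> tuple[float, float]:
    """Approximate (lat, lng) where the sun is directly overhead."""
    # Solar declination via the standard ~23.44° sine approximation
    decl = 23.44 * math.sin(math.radians(360.0 / 365.0 * (day_of_year - 81)))
    lng = (12.0 - utc_hours) * 15.0       # subsolar point moves 15° of longitude per hour
    lng = (lng + 180.0) % 360.0 - 180.0   # normalize to [-180, 180)
    return decl, lng

# The day/night terminator is the great circle 90° from this point.
print(subsolar_point(172, 12.0))  # ≈ (23.4, 0.0) near the June solstice at 12:00 UTC
```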

---

## 🏗️ Architecture

```
┌──────────────────────────────────────────────────────┐
│                  FRONTEND (Next.js)                  │
│                                                      │
│  ┌─────────────┐  ┌──────────┐  ┌─────────────────┐  │
│  │ MapLibre GL │  │ NewsFeed │  │ Control Panels  │  │
│  │ 2D WebGL    │  │ SIGINT   │  │ Layers/Filters  │  │
│  │ Map Render  │  │ Intel    │  │ Markets/Radio   │  │
│  └──────┬──────┘  └────┬─────┘  └────────┬────────┘  │
│         └──────────────┼─────────────────┘           │
│                        │ REST API (15s fast/60s slow)│
├────────────────────────┼─────────────────────────────┤
│                  BACKEND (FastAPI)                   │
│                        │                             │
│  ┌─────────────────────┼─────────────────────────┐   │
│  │           Data Fetcher (Scheduler)            │   │
│  │  ┌──────────┬──────────┬──────────┬─────────┐ │   │
│  │  │ OpenSky  │ adsb.lol │ N2YO     │ USGS    │ │   │
│  │  │ Flights  │ Military │ Sats     │ Quakes  │ │   │
│  │  ├──────────┼──────────┼──────────┼─────────┤ │   │
│  │  │ AIS WS   │ Carrier  │ GDELT    │ CCTV    │ │   │
│  │  │ Ships    │ Tracker  │ Conflict │ Cameras │ │   │
│  │  ├──────────┼──────────┼──────────┼─────────┤ │   │
│  │  │ DeepState│ RSS      │ Region   │ GPS     │ │   │
│  │  │ Frontline│ Intel    │ Dossier  │ Jamming │ │   │
│  │  └──────────┴──────────┴──────────┴─────────┘ │   │
│  └───────────────────────────────────────────────┘   │
└──────────────────────────────────────────────────────┘
```

---

## 📊 Data Sources & APIs

| Source | Data | Update Frequency | API Key Required |
|---|---|---|---|
| [OpenSky Network](https://opensky-network.org) | Commercial & private flights | ~60s | Optional (anonymous access is rate-limited) |
| [adsb.lol](https://adsb.lol) | Military aircraft | ~60s | No |
| [aisstream.io](https://aisstream.io) | AIS vessel positions | Real-time WebSocket | **Yes** |
| [N2YO](https://www.n2yo.com) | Satellite orbital positions | ~60s | **Yes** |
| [USGS Earthquake](https://earthquake.usgs.gov) | Global seismic events | ~60s | No |
| [GDELT Project](https://www.gdeltproject.org) | Global conflict events | ~6h | No |
| [DeepState Map](https://deepstatemap.live) | Ukraine frontline | ~30min | No |
| [Transport for London](https://api.tfl.gov.uk) | London CCTV JamCams | ~5min | No |
| [TxDOT](https://its.txdot.gov) | Austin TX traffic cameras | ~5min | No |
| [NYC DOT](https://webcams.nyctmc.org) | NYC traffic cameras | ~5min | No |
| [Singapore LTA](https://datamall.lta.gov.sg) | Singapore traffic cameras | ~5min | **Yes** |
| [RestCountries](https://restcountries.com) | Country profile data | On-demand (cached 24h) | No |
| [Wikidata SPARQL](https://query.wikidata.org) | Head of state data | On-demand (cached 24h) | No |
| [Wikipedia API](https://en.wikipedia.org/api) | Location summaries & aircraft images | On-demand (cached) | No |
| [CARTO Basemaps](https://carto.com) | Dark map tiles | Continuous | No |

---

## 🚀 Getting Started

### Prerequisites

- **Node.js** 18+ and **npm**
- **Python** 3.10+ with `pip`
- API keys for `aisstream.io` and `n2yo.com` (and optionally `opensky-network.org` and `lta.gov.sg`)

### Installation

```bash
# Clone the repository
git clone https://github.com/your-username/shadowbroker.git
cd shadowbroker/live-risk-dashboard

# Backend setup
cd backend
python -m venv venv
venv\Scripts\activate          # Windows
# source venv/bin/activate     # macOS/Linux
pip install -r requirements.txt

# Create .env with your API keys
echo "AIS_API_KEY=your_key_here" >> .env
echo "N2YO_API_KEY=your_key_here" >> .env
echo "OPENSKY_USERNAME=your_user" >> .env
echo "OPENSKY_PASSWORD=your_pass" >> .env

# Frontend setup
cd ../frontend
npm install
```

### Running

```bash
# From the frontend directory — starts both frontend & backend concurrently
npm run dev
```

This starts:

- **Next.js** frontend on `http://localhost:3000`
- **FastAPI** backend on `http://localhost:8000`
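
Once both are up, a quick smoke test against the health endpoint defined in `backend/main.py`:

```python
# Quick sanity check against the running backend.
import requests

r = requests.get("http://localhost:8000/api/health", timeout=5)
print(r.json())  # {'status': 'ok'}
```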

---

## 🎛️ Data Layers

All layers are independently toggleable from the left panel:

| Layer | Default | Description |
|---|---|---|
| Commercial Flights | ✅ ON | Airlines, cargo, GA aircraft |
| Private Flights | ✅ ON | Non-commercial private aircraft |
| Private Jets | ✅ ON | High-value bizjets with owner data |
| Military Flights | ✅ ON | Military & government aircraft |
| Tracked Aircraft | ✅ ON | Special interest watch list |
| Satellites | ✅ ON | Orbital assets by mission type |
| Carriers / Mil / Cargo | ✅ ON | Navy carriers, cargo ships, tankers |
| Civilian Vessels | ❌ OFF | Yachts, fishing, recreational |
| Cruise / Passenger | ✅ ON | Cruise ships and ferries |
| Earthquakes (24h) | ✅ ON | USGS seismic events |
| CCTV Mesh | ❌ OFF | Surveillance camera network |
| Ukraine Frontline | ✅ ON | Live warfront positions |
| Global Incidents | ✅ ON | GDELT conflict events |
| GPS Jamming | ✅ ON | NAC-P degradation zones |
| Day / Night Cycle | ✅ ON | Solar terminator overlay |

---

## 🔧 Performance

The platform is optimized for handling massive real-time datasets:

- **Gzip Compression** — API payloads compressed ~92% (11.6 MB → 915 KB)
- **ETag Caching** — `304 Not Modified` responses skip redundant JSON parsing (polling sketch after this list)
- **Viewport Culling** — Only features within the visible map bounds (+20% buffer) are rendered
- **Clustered Rendering** — Ships, CCTV, and earthquakes use MapLibre clustering to reduce feature count
- **Debounced Viewport Updates** — 300ms debounce prevents GeoJSON rebuild thrash during pan/zoom
- **Position Interpolation** — Smooth 10s tick animation between data refreshes
- **React.memo** — Heavy components wrapped to prevent unnecessary re-renders
- **Coordinate Precision** — Lat/lng rounded to 5 decimals (~1m) to reduce JSON size
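
The ETag contract on the fast endpoint can be exercised from any HTTP client. A minimal polling loop (assumes a locally running backend as set up above):

```python
# Minimal ETag-aware poller for the fast endpoint.
import time
import requests

URL = "http://localhost:8000/api/live-data/fast"
etag = None
while True:
    headers = {"If-None-Match": etag} if etag else {}
    resp = requests.get(URL, headers=headers, timeout=10)
    if resp.status_code == 304:
        print("unchanged — skipped JSON parse")
    else:
        etag = resp.headers.get("ETag")
        data = resp.json()
        print(f"ships: {len(data.get('ships', []))}")
    time.sleep(15)  # matches the 15s fast-poll cadence
```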

---

## 📁 Project Structure

```
live-risk-dashboard/
├── backend/
│   ├── main.py                 # FastAPI app, middleware, API routes
│   ├── carrier_cache.json      # Persisted carrier OSINT positions
│   ├── cctv.db                 # SQLite CCTV camera database
│   └── services/
│       ├── data_fetcher.py     # Core scheduler — fetches all data sources
│       ├── ais_stream.py       # AIS WebSocket client (25K+ vessels)
│       ├── carrier_tracker.py  # OSINT carrier position tracker
│       ├── cctv_pipeline.py    # Multi-source CCTV camera ingestion
│       ├── geopolitics.py      # GDELT + Ukraine frontline fetcher
│       ├── region_dossier.py   # Right-click country/city intelligence
│       ├── radio_intercept.py  # Scanner radio feed integration
│       ├── network_utils.py    # HTTP client with curl fallback
│       └── api_settings.py     # API key management
│
├── frontend/
│   ├── src/
│   │   ├── app/
│   │   │   └── page.tsx        # Main dashboard — state, polling, layout
│   │   └── components/
│   │       ├── MaplibreViewer.tsx       # Core map — 2,000+ lines, all GeoJSON layers
│   │       ├── NewsFeed.tsx             # SIGINT feed + entity detail panels
│   │       ├── WorldviewLeftPanel.tsx   # Data layer toggles
│   │       ├── WorldviewRightPanel.tsx  # Search + filter sidebar
│   │       ├── FilterPanel.tsx          # Basic layer filters
│   │       ├── AdvancedFilterModal.tsx  # Airport/country/owner filtering
│   │       ├── MapLegend.tsx            # Dynamic legend with all icons
│   │       ├── MarketsPanel.tsx         # Global financial markets ticker
│   │       ├── RadioInterceptPanel.tsx  # Scanner-style radio panel
│   │       ├── FindLocateBar.tsx        # Search/locate bar
│   │       ├── SettingsPanel.tsx        # App settings
│   │       ├── ScaleBar.tsx             # Map scale indicator
│   │       ├── WikiImage.tsx            # Wikipedia image fetcher
│   │       └── ErrorBoundary.tsx        # Crash recovery wrapper
│   └── package.json
```

---

## 🔑 Environment Variables

Create a `.env` file in the `backend/` directory:

```env
# Required
AIS_API_KEY=your_aisstream_key       # Maritime vessel tracking
N2YO_API_KEY=your_n2yo_key           # Satellite position data

# Optional (enhances data quality)
OPENSKY_USERNAME=your_opensky_user   # Higher rate limits for flight data
OPENSKY_PASSWORD=your_opensky_pass
LTA_ACCOUNT_KEY=your_lta_key         # Singapore CCTV cameras
```

---

## ⚠️ Disclaimer

This is an **educational and research tool** built entirely on publicly available, open-source intelligence (OSINT) data. No classified, restricted, or non-public data sources are used. Carrier positions are estimates based on public reporting. The military-themed UI is purely aesthetic.

**Do not use this tool for any operational, military, or intelligence purpose.**

---

## 📜 License

This project is for educational and personal research purposes. See individual API providers' terms of service for data usage restrictions.

---

<p align="center">
  <sub>Built with ☕ and too many API calls</sub>
</p>
@@ -0,0 +1,16 @@ backend/Dockerfile
FROM python:3.10-slim

WORKDIR /app

# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy source code
COPY . .

# Expose port
EXPOSE 8000

# Start FastAPI server
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
@@ -0,0 +1,52 @@ backend/ais_proxy.js
const WebSocket = require('ws');

const args = process.argv.slice(2);
const API_KEY = args[0] || '75cc39af03c9cc23c90e8a7b3c3bc2b2a507c5fb';

// NOTE: this filter list is currently unused — the subscription below
// requests the full global bounding box instead.
const FILTER = [
    // US Aircraft Carriers and major naval groups
    { "MMSI": 338000000 }, { "MMSI": 338100000 }, // US Navy general prefixes
    // Plus some global shipping for density
    { "BoundingBoxes": [[[-90, -180], [90, 180]]] }
];

function connect() {
    const ws = new WebSocket('wss://stream.aisstream.io/v0/stream');

    ws.on('open', () => {
        const subMsg = {
            APIKey: API_KEY,
            BoundingBoxes: [
                [[-90, -180], [90, 180]]
            ],
            FilterMessageTypes: [
                "PositionReport",
                "ShipStaticData",
                "StandardClassBPositionReport"
            ]
        };
        ws.send(JSON.stringify(subMsg));
    });

    ws.on('message', (data) => {
        // Output raw AIS message JSON to stdout so Python can consume it.
        // We ensure exactly one JSON object per line.
        try {
            const parsed = JSON.parse(data);
            console.log(JSON.stringify(parsed));
        } catch (e) {
            // ignore non-JSON lines
        }
    });

    ws.on('error', (err) => {
        console.error("WebSocket Proxy Error:", err.message);
    });

    ws.on('close', () => {
        console.error("WebSocket Proxy Closed. Reconnecting in 5s...");
        setTimeout(connect, 5000);
    });
}

connect();
@@ -0,0 +1,112 @@
import zipfile
import xml.etree.ElementTree as ET
import re
import csv

xlsx_path = r"f:\Codebase\Oracle\live-risk-dashboard\TheAirTraffic Database.xlsx"
output_path = r"f:\Codebase\Oracle\live-risk-dashboard\backend\xlsx_analysis.txt"


def parse_xlsx_sheet(z, shared_strings, sheet_num):
    """Parse one worksheet from an open xlsx zip into a list of {column-letter: value} dicts."""
    ns = {'s': 'http://schemas.openxmlformats.org/spreadsheetml/2006/main'}
    sheet_file = f'xl/worksheets/sheet{sheet_num}.xml'
    if sheet_file not in z.namelist():
        return []
    ws_xml = z.read(sheet_file)
    ws_root = ET.fromstring(ws_xml)
    rows = []
    for row in ws_root.findall('.//s:sheetData/s:row', ns):
        cells = {}
        for cell in row.findall('s:c', ns):
            cell_ref = cell.get('r', '')
            cell_type = cell.get('t', '')
            val_elem = cell.find('s:v', ns)
            val = val_elem.text if val_elem is not None else ''
            if cell_type == 's' and val:
                val = shared_strings[int(val)]  # shared-string cells store an index into sharedStrings.xml
            m = re.match(r'([A-Z]+)', cell_ref)
            col = m.group(1) if m else ''
            cells[col] = val
        rows.append(cells)
    return rows


with open(output_path, 'w', encoding='utf-8') as out:
    with zipfile.ZipFile(xlsx_path, 'r') as z:
        shared_strings = []
        if 'xl/sharedStrings.xml' in z.namelist():
            ss_xml = z.read('xl/sharedStrings.xml')
            root = ET.fromstring(ss_xml)
            ns = {'s': 'http://schemas.openxmlformats.org/spreadsheetml/2006/main'}
            for si in root.findall('.//s:si', ns):
                texts = si.findall('.//s:t', ns)
                val = ''.join(t.text or '' for t in texts)
                shared_strings.append(val)

        all_entries = []
        for sheet_idx in range(1, 5):
            rows = parse_xlsx_sheet(z, shared_strings, sheet_idx)
            if not rows:
                continue

            out.write(f"\n=== SHEET {sheet_idx}: {len(rows)} rows ===\n")
            # Print the first 5 rows for a quick visual check
            for i in range(min(5, len(rows))):
                for col in sorted(rows[i].keys(), key=lambda x: (len(x), x)):
                    val = rows[i][col]
                    if val:
                        out.write(f"  Row{i} {col}: '{val[:80]}'\n")
                out.write("\n")

            # Scan every cell (skipping the header row) for N-number registrations
            for r in rows[1:]:
                for col, val in r.items():
                    val = str(val).strip()
                    n_regs = re.findall(r'N\d{1,5}[A-Z]{0,2}', val)
                    owner = r.get('B', r.get('A', '')).strip()
                    aircraft_type = r.get('C', r.get('D', '')).strip()
                    for reg in n_regs:
                        all_entries.append({
                            'registration': reg.upper(),
                            'owner': owner,
                            'type': aircraft_type,
                            'sheet': sheet_idx
                        })

        unique_regs = set(e['registration'] for e in all_entries)
        out.write(f"\nTOTAL ENTRIES: {len(all_entries)}\n")
        out.write(f"UNIQUE REGISTRATIONS: {len(unique_regs)}\n")

        csv_path = r"f:\Codebase\Oracle\live-risk-dashboard\PLANEALERTLIST\plane-alert-db-main\plane-alert-db.csv"
        existing = {}
        with open(csv_path, 'r', encoding='utf-8') as f:
            reader = csv.DictReader(f)
            for row in reader:
                icao = row.get('$ICAO', '').strip().upper()
                reg = row.get('$Registration', '').strip().upper()
                if reg:
                    existing[reg] = {
                        'icao': icao,
                        'category': row.get('Category', ''),
                        'operator': row.get('$Operator', ''),
                    }

        already_in = unique_regs & set(existing.keys())
        missing = unique_regs - set(existing.keys())
        out.write(f"\nplane-alert-db: {len(existing)} registrations\n")
        out.write(f"Already covered: {len(already_in)}\n")
        out.write(f"MISSING: {len(missing)}\n")

        out.write("\n--- ALREADY TRACKED ---\n")
        seen = set()
        for e in all_entries:
            if e['registration'] in already_in and e['registration'] not in seen:
                info = existing[e['registration']]
                out.write(f"  {e['owner'][:40]:40s} {e['registration']:10s} DB_CAT: {info['category'][:25]:25s} DB_OP: {info['operator'][:40]}\n")
                seen.add(e['registration'])

        out.write("\n--- MISSING (NEED TO ADD) ---\n")
        seen = set()
        for e in all_entries:
            if e['registration'] in missing and e['registration'] not in seen:
                out.write(f"  {e['owner'][:40]:40s} {e['registration']:10s} TYPE: {e['type'][:30]}\n")
                seen.add(e['registration'])

print(f"Analysis written to {output_path}")
@@ -0,0 +1,17 @@
import requests

regions = [
    {"lat": 39.8, "lon": -98.5, "dist": 2000},   # USA
    {"lat": 50.0, "lon": 15.0, "dist": 2000},    # Europe
    {"lat": 35.0, "lon": 105.0, "dist": 2000}    # Asia / China
]

for r in regions:
    url = f"https://api.adsb.lol/v2/lat/{r['lat']}/lon/{r['lon']}/dist/{r['dist']}"
    res = requests.get(url, timeout=10)
    if res.status_code == 200:
        data = res.json()
        acs = data.get("ac", [])
        print(f"Region lat:{r['lat']} lon:{r['lon']} dist:{r['dist']} -> Flights: {len(acs)}")
    else:
        print(f"Error for Region lat:{r['lat']} lon:{r['lon']}: HTTP {res.status_code}")
@@ -0,0 +1,10 @@
import sqlite3
import os

db_path = os.path.join(os.path.dirname(__file__), 'cctv.db')
conn = sqlite3.connect(db_path)
cur = conn.cursor()
cur.execute("DELETE FROM cameras WHERE id LIKE 'OSM-%'")
print(f"Deleted {cur.rowcount} OSM cameras from DB.")
conn.commit()
conn.close()
@@ -0,0 +1 @@
{}
File diff suppressed because it is too large
@@ -0,0 +1 @@
5c3b1c768973ca54e9a1befee8dc075f38e8cc56
@@ -0,0 +1 @@
2b64633521ffb6f06da36e19f5c8eb86979e2187
@@ -0,0 +1,25 @@
import re
import json
import base64
import urllib.parse

try:
    with open('liveua_test.html', 'r', encoding='utf-8') as f:
        html = f.read()

    m = re.search(r"var\s+ovens\s*=\s*(.*?);(?!function)", html, re.DOTALL)
    if m:
        json_str = m.group(1)
        # Handle the case where it is a quoted string containing URL-encoded base64
        if json_str.startswith("'") or json_str.startswith('"'):
            json_str = json_str.strip('"\'')
            json_str = base64.b64decode(urllib.parse.unquote(json_str)).decode('utf-8')

        data = json.loads(json_str)
        with open('out_liveua.json', 'w', encoding='utf-8') as f:
            json.dump(data, f, indent=2)
        print(f"Successfully extracted {len(data)} ovens items.")
    else:
        print("var ovens not found.")
except Exception as e:
    print("Error:", e)
File diff suppressed because one or more lines are too long
@@ -0,0 +1,195 @@ backend/main.py
from fastapi import FastAPI, Request, Response
from fastapi.middleware.cors import CORSMiddleware
from contextlib import asynccontextmanager
from services.data_fetcher import start_scheduler, stop_scheduler, get_latest_data
from services.ais_stream import start_ais_stream, stop_ais_stream
from services.carrier_tracker import start_carrier_tracker, stop_carrier_tracker
import uvicorn
import logging
import hashlib
import json as json_mod

logging.basicConfig(level=logging.INFO)


@asynccontextmanager
async def lifespan(app: FastAPI):
    # Startup: start background data fetching, AIS stream, and carrier tracker
    start_carrier_tracker()
    start_ais_stream()
    start_scheduler()
    yield
    # Shutdown: stop all background services
    stop_ais_stream()
    stop_scheduler()
    stop_carrier_tracker()


app = FastAPI(title="Live Risk Dashboard API", lifespan=lifespan)

from fastapi.middleware.gzip import GZipMiddleware
app.add_middleware(GZipMiddleware, minimum_size=1000)
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # For prototyping, allow all origins
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

from services.data_fetcher import update_all_data


@app.get("/api/refresh")
async def force_refresh():
    # Kick off an immediate update in a background thread; returns right away
    import threading
    t = threading.Thread(target=update_all_data)
    t.start()
    return {"status": "refreshing in background"}


@app.get("/api/live-data")
async def live_data():
    return get_latest_data()


@app.get("/api/live-data/fast")
async def live_data_fast(request: Request):
    d = get_latest_data()
    payload = {
        "commercial_flights": d.get("commercial_flights", []),
        "military_flights": d.get("military_flights", []),
        "private_flights": d.get("private_flights", []),
        "private_jets": d.get("private_jets", []),
        "tracked_flights": d.get("tracked_flights", []),
        "ships": d.get("ships", []),
        "cctv": d.get("cctv", []),
        "uavs": d.get("uavs", []),
        "liveuamap": d.get("liveuamap", []),
        "gps_jamming": d.get("gps_jamming", []),
    }
    # ETag includes the last_updated timestamp so it changes on every data refresh,
    # not just when item counts change (old bug: positions went stale)
    last_updated = d.get("last_updated", "")
    counts = "|".join(f"{k}:{len(v) if isinstance(v, list) else 0}" for k, v in payload.items())
    etag = hashlib.md5(f"{last_updated}|{counts}".encode()).hexdigest()[:16]
    if request.headers.get("if-none-match") == etag:
        return Response(status_code=304, headers={"ETag": etag, "Cache-Control": "no-cache"})
    return Response(
        content=json_mod.dumps(payload),
        media_type="application/json",
        headers={"ETag": etag, "Cache-Control": "no-cache"}
    )


@app.get("/api/live-data/slow")
async def live_data_slow(request: Request):
    d = get_latest_data()
    payload = {
        "last_updated": d.get("last_updated"),
        "news": d.get("news", []),
        "stocks": d.get("stocks", {}),
        "oil": d.get("oil", {}),
        "weather": d.get("weather"),
        "traffic": d.get("traffic", []),
        "earthquakes": d.get("earthquakes", []),
        "frontlines": d.get("frontlines"),
        "gdelt": d.get("gdelt", []),
        "airports": d.get("airports", []),
        "satellites": d.get("satellites", [])
    }
    # ETag based on last_updated + item counts
    last_updated = d.get("last_updated", "")
    counts = "|".join(f"{k}:{len(v) if isinstance(v, list) else 0}" for k, v in payload.items())
    etag = hashlib.md5(f"slow|{last_updated}|{counts}".encode()).hexdigest()[:16]
    if request.headers.get("if-none-match") == etag:
        return Response(status_code=304, headers={"ETag": etag, "Cache-Control": "no-cache"})
    return Response(
        content=json_mod.dumps(payload, default=str),
        media_type="application/json",
        headers={"ETag": etag, "Cache-Control": "no-cache"}
    )


@app.get("/api/debug-latest")
async def debug_latest_data():
    return list(get_latest_data().keys())


@app.get("/api/health")
async def health_check():
    return {"status": "ok"}


from services.radio_intercept import get_top_broadcastify_feeds, get_openmhz_systems, get_recent_openmhz_calls, find_nearest_openmhz_system


@app.get("/api/radio/top")
async def get_top_radios():
    return get_top_broadcastify_feeds()


@app.get("/api/radio/openmhz/systems")
async def api_get_openmhz_systems():
    return get_openmhz_systems()


@app.get("/api/radio/openmhz/calls/{sys_name}")
async def api_get_openmhz_calls(sys_name: str):
    return get_recent_openmhz_calls(sys_name)


@app.get("/api/radio/nearest")
async def api_get_nearest_radio(lat: float, lng: float):
    return find_nearest_openmhz_system(lat, lng)


from services.radio_intercept import find_nearest_openmhz_systems_list


@app.get("/api/radio/nearest-list")
async def api_get_nearest_radios_list(lat: float, lng: float, limit: int = 5):
    return find_nearest_openmhz_systems_list(lat, lng, limit=limit)


from services.network_utils import fetch_with_curl


@app.get("/api/route/{callsign}")
async def get_flight_route(callsign: str):
    r = fetch_with_curl("https://api.adsb.lol/api/0/routeset", method="POST", json_data={"planes": [{"callsign": callsign}]}, timeout=10)
    if r.status_code == 200:
        data = r.json()
        route_list = []
        if isinstance(data, dict):
            route_list = data.get("value", [])
        elif isinstance(data, list):
            route_list = data

        if route_list:
            route = route_list[0]
            airports = route.get("_airports", [])
            if len(airports) >= 2:
                return {
                    "orig_loc": [airports[0].get("lon", 0), airports[0].get("lat", 0)],
                    "dest_loc": [airports[-1].get("lon", 0), airports[-1].get("lat", 0)]
                }
    return {}


from services.region_dossier import get_region_dossier


@app.get("/api/region-dossier")
def api_region_dossier(lat: float, lng: float):
    """Sync def so FastAPI runs it in a threadpool — prevents blocking the event loop."""
    return get_region_dossier(lat, lng)


# ---------------------------------------------------------------------------
# API Settings — key registry & management
# ---------------------------------------------------------------------------
from services.api_settings import get_api_keys, update_api_key
from pydantic import BaseModel


class ApiKeyUpdate(BaseModel):
    env_key: str
    value: str


@app.get("/api/settings/api-keys")
async def api_get_keys():
    return get_api_keys()


@app.put("/api/settings/api-keys")
async def api_update_key(body: ApiKeyUpdate):
    ok = update_api_key(body.env_key, body.value)
    if ok:
        return {"status": "updated", "env_key": body.env_key}
    return {"status": "error", "message": "Failed to update .env file"}


if __name__ == "__main__":
    uvicorn.run("main:app", host="0.0.0.0", port=8000, reload=True)

# Application successfully initialized with background scraping tasks
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -0,0 +1 @@
{"callsign": "JWZ7", "country": "N625GN", "lng": -111.914754, "lat": 33.620235, "alt": 0, "heading": 0, "type": "tracked_flight", "origin_loc": null, "dest_loc": null, "origin_name": "UNKNOWN", "dest_name": "UNKNOWN", "registration": "N625GN", "model": "GLF5", "icao24": "a82973", "speed_knots": 6.8, "squawk": "1200", "airline_code": "", "aircraft_category": "plane", "alert_operator": "Tilman Fertitta", "alert_category": "People", "alert_color": "pink", "trail": [[33.62024, -111.91475, 0, 1772302052]]}
File diff suppressed because it is too large
@@ -0,0 +1,33 @@ backend/package-lock.json
{
  "name": "backend",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "dependencies": {
        "ws": "^8.19.0"
      }
    },
    "node_modules/ws": {
      "version": "8.19.0",
      "resolved": "https://registry.npmjs.org/ws/-/ws-8.19.0.tgz",
      "integrity": "sha512-blAT2mjOEIi0ZzruJfIhb3nps74PRWTCz1IjglWEEpQl5XS/UNama6u2/rjFkDDouqr4L67ry+1aGIALViWjDg==",
      "license": "MIT",
      "engines": {
        "node": ">=10.0.0"
      },
      "peerDependencies": {
        "bufferutil": "^4.0.1",
        "utf-8-validate": ">=5.0.2"
      },
      "peerDependenciesMeta": {
        "bufferutil": {
          "optional": true
        },
        "utf-8-validate": {
          "optional": true
        }
      }
    }
  }
}
@@ -0,0 +1,5 @@ backend/package.json
{
  "dependencies": {
    "ws": "^8.19.0"
  }
}
@@ -0,0 +1,10 @@ backend/requirements.txt
fastapi==0.103.1
uvicorn==0.23.2
yfinance>=0.2.40
feedparser==6.0.10
requests==2.31.0
apscheduler==3.10.3
pydantic==2.3.0
pydantic-settings==2.0.3
playwright>=1.58.0
beautifulsoup4>=4.12.0
@@ -0,0 +1,8 @@
{
  "code" : "dataset.missing",
  "error" : true,
  "message" : "Not found",
  "data" : {
    "id" : "xqwu-hwdm"
  }
}
@@ -0,0 +1 @@ backend/services/__init__.py
# Empty init
@@ -0,0 +1 @@
83ba61e7af89c1dc7b4d9b972e08d3edf3493966
@@ -0,0 +1,359 @@ backend/services/ais_stream.py
"""
AIS Stream WebSocket client for real-time maritime vessel tracking.
Connects to aisstream.io (via the ais_proxy.js Node subprocess) and maintains
a live dictionary of global vessel positions.
"""

import json
import logging
import os
import subprocess
import threading
import time

logger = logging.getLogger(__name__)

# The Node proxy owns the actual WebSocket connection; the URL is kept here for reference.
AIS_WS_URL = "wss://stream.aisstream.io/v0/stream"
API_KEY = os.environ.get("AIS_API_KEY", "75cc39af03c9cc23c90e8a7b3c3bc2b2a507c5fb")


# AIS vessel type code classification
# See: https://coast.noaa.gov/data/marinecadastre/ais/VesselTypeCodes2018.pdf
def classify_vessel(ais_type: int, mmsi: int) -> str:
    """Classify a vessel by its AIS type code into a rendering category."""
    if 80 <= ais_type <= 89:
        return "tanker"           # Oil/Chemical/Gas tankers → RED
    if 70 <= ais_type <= 79:
        return "cargo"            # Cargo ships, container vessels → RED
    if 60 <= ais_type <= 69:
        return "passenger"        # Cruise ships, ferries → GRAY
    if ais_type in (36, 37):
        return "yacht"            # Sailing/Pleasure craft → DARK BLUE
    if ais_type == 35:
        return "military_vessel"  # Military → YELLOW
    # MMSI-based military detection: military MMSIs often start with certain prefixes
    mmsi_str = str(mmsi)
    if mmsi_str.startswith("3380") or mmsi_str.startswith("3381"):
        return "military_vessel"  # US Navy
    if ais_type in (30, 31, 32, 33, 34):
        return "other"            # Fishing, towing, dredging, diving, etc.
    if ais_type in (50, 51, 52, 53, 54, 55, 56, 57, 58, 59):
        return "other"            # Pilot, SAR, tug, port tender, etc.
    return "unknown"              # Not yet classified — will update when ShipStaticData arrives


# MMSI Maritime Identification Digit (MID) → Country mapping
# The first 3 digits of a 9-digit MMSI encode the flag state
MID_COUNTRY = {
    201: "Albania", 202: "Andorra", 203: "Austria", 204: "Portugal", 205: "Belgium",
    206: "Belarus", 207: "Bulgaria", 208: "Vatican", 209: "Cyprus", 210: "Cyprus",
    211: "Germany", 212: "Cyprus", 213: "Georgia", 214: "Moldova", 215: "Malta",
    216: "Armenia", 218: "Germany", 219: "Denmark", 220: "Denmark", 224: "Spain",
    225: "Spain", 226: "France", 227: "France", 228: "France", 229: "Malta",
    230: "Finland", 231: "Faroe Islands", 232: "United Kingdom", 233: "United Kingdom",
    234: "United Kingdom", 235: "United Kingdom", 236: "Gibraltar", 237: "Greece",
    238: "Croatia", 239: "Greece", 240: "Greece", 241: "Greece", 242: "Morocco",
    243: "Hungary", 244: "Netherlands", 245: "Netherlands", 246: "Netherlands",
    247: "Italy", 248: "Malta", 249: "Malta", 250: "Ireland", 251: "Iceland",
    252: "Liechtenstein", 253: "Luxembourg", 254: "Monaco", 255: "Portugal",
    256: "Malta", 257: "Norway", 258: "Norway", 259: "Norway", 261: "Poland",
    263: "Portugal", 264: "Romania", 265: "Sweden", 266: "Sweden", 267: "Slovakia",
    268: "San Marino", 269: "Switzerland", 270: "Czech Republic", 271: "Turkey",
    272: "Ukraine", 273: "Russia", 274: "North Macedonia", 275: "Latvia",
    276: "Estonia", 277: "Lithuania", 278: "Slovenia",
    301: "Anguilla", 303: "Alaska", 304: "Antigua", 305: "Antigua",
    306: "Netherlands Antilles", 307: "Aruba", 308: "Bahamas", 309: "Bahamas",
    310: "Bermuda", 311: "Bahamas", 312: "Belize", 314: "Barbados", 316: "Canada",
    319: "Cayman Islands", 321: "Costa Rica", 323: "Cuba", 325: "Dominica",
    327: "Dominican Republic", 329: "Guadeloupe", 330: "Grenada", 331: "Greenland",
    332: "Guatemala", 334: "Honduras", 336: "Haiti", 338: "United States",
    339: "Jamaica", 341: "Saint Kitts", 343: "Saint Lucia", 345: "Mexico",
    347: "Martinique", 348: "Montserrat", 350: "Nicaragua", 351: "Panama",
    352: "Panama", 353: "Panama", 354: "Panama", 355: "Panama",
    356: "Panama", 357: "Panama", 358: "Puerto Rico", 359: "El Salvador",
    361: "Saint Pierre", 362: "Trinidad", 364: "Turks and Caicos",
    366: "United States", 367: "United States", 368: "United States", 369: "United States",
    370: "Panama", 371: "Panama", 372: "Panama", 373: "Panama",
    374: "Panama", 375: "Saint Vincent", 376: "Saint Vincent", 377: "Saint Vincent",
    378: "British Virgin Islands", 379: "US Virgin Islands",
    401: "Afghanistan", 403: "Saudi Arabia", 405: "Bangladesh", 408: "Bahrain",
    410: "Bhutan", 412: "China", 413: "China", 414: "China",
    416: "Taiwan", 417: "Sri Lanka", 419: "India", 422: "Iran",
    423: "Azerbaijan", 425: "Iraq", 428: "Israel", 431: "Japan",
    432: "Japan", 434: "Turkmenistan", 436: "Kazakhstan", 437: "Uzbekistan",
    438: "Jordan", 440: "South Korea", 441: "South Korea", 443: "Palestine",
    445: "North Korea", 447: "Kuwait", 450: "Lebanon", 451: "Kyrgyzstan",
    453: "Macao", 455: "Maldives", 457: "Mongolia", 459: "Nepal",
    461: "Oman", 463: "Pakistan", 466: "Qatar", 468: "Syria",
    470: "UAE", 472: "Tajikistan", 473: "Yemen", 475: "Tonga",
    477: "Hong Kong", 478: "Bosnia",
    501: "Antarctica", 503: "Australia", 506: "Myanmar",
    508: "Brunei", 510: "Micronesia", 511: "Palau", 512: "New Zealand",
    514: "Cambodia", 515: "Cambodia", 516: "Christmas Island",
    518: "Cook Islands", 520: "Fiji", 523: "Cocos Islands",
    525: "Indonesia", 529: "Kiribati", 531: "Laos", 533: "Malaysia",
    536: "Northern Mariana Islands", 538: "Marshall Islands",
    540: "New Caledonia", 542: "Niue", 544: "Nauru", 546: "French Polynesia",
    548: "Philippines", 553: "Papua New Guinea", 555: "Pitcairn",
    557: "Solomon Islands", 559: "American Samoa", 561: "Samoa",
    563: "Singapore", 564: "Singapore", 565: "Singapore", 566: "Singapore",
    567: "Thailand", 570: "Tonga", 572: "Tuvalu", 574: "Vietnam",
    576: "Vanuatu", 577: "Vanuatu", 578: "Wallis and Futuna",
    601: "South Africa", 603: "Angola", 605: "Algeria", 607: "Benin",
    609: "Botswana", 610: "Burundi", 611: "Cameroon", 612: "Cape Verde",
    613: "Central African Republic", 615: "Congo", 616: "Comoros",
    617: "DR Congo", 618: "Ivory Coast", 619: "Djibouti",
    620: "Egypt", 621: "Equatorial Guinea", 622: "Ethiopia",
    624: "Eritrea", 625: "Gabon", 626: "Gambia", 627: "Ghana",
    629: "Guinea", 630: "Guinea-Bissau", 631: "Kenya", 632: "Lesotho",
    633: "Liberia", 634: "Liberia", 635: "Liberia", 636: "Liberia",
    637: "Libya", 642: "Madagascar", 644: "Malawi", 645: "Mali",
    647: "Mauritania", 649: "Mauritius", 650: "Mozambique",
    654: "Namibia", 655: "Niger", 656: "Nigeria", 657: "Guinea",
    659: "Rwanda", 660: "Senegal", 661: "Sierra Leone",
    662: "Somalia", 663: "South Africa", 664: "Sudan",
    667: "Tanzania", 668: "Togo", 669: "Tunisia", 670: "Uganda",
    671: "Egypt", 672: "Tanzania", 674: "Zambia", 675: "Zimbabwe",
    676: "Comoros", 677: "Tanzania",
}


def get_country_from_mmsi(mmsi: int) -> str:
    """Look up the flag state from the MMSI Maritime Identification Digit."""
    mmsi_str = str(mmsi)
    if len(mmsi_str) == 9:
        mid = int(mmsi_str[:3])
        return MID_COUNTRY.get(mid, "UNKNOWN")
    return "UNKNOWN"


# Global vessel store: MMSI → vessel dict
_vessels: dict[int, dict] = {}
_vessels_lock = threading.Lock()
_ws_thread: threading.Thread | None = None
_ws_running = False

CACHE_FILE = os.path.join(os.path.dirname(__file__), "ais_cache.json")


def _save_cache():
    """Save vessel data to disk for persistence across restarts."""
    try:
        with _vessels_lock:
            # Convert int keys to strings for JSON
            data = {str(k): v for k, v in _vessels.items()}
        with open(CACHE_FILE, 'w') as f:
            json.dump(data, f)
        logger.info(f"AIS cache saved: {len(data)} vessels")
    except Exception as e:
        logger.error(f"Failed to save AIS cache: {e}")


def _load_cache():
    """Load vessel data from disk on startup."""
    global _vessels
    if not os.path.exists(CACHE_FILE):
        return
    try:
        with open(CACHE_FILE, 'r') as f:
            data = json.load(f)
        now = time.time()
        stale_cutoff = now - 3600  # Accept vessels up to 1 hour old on restart
        loaded = 0
        with _vessels_lock:
            for k, v in data.items():
                if v.get("_updated", 0) > stale_cutoff:
                    _vessels[int(k)] = v
                    loaded += 1
        logger.info(f"AIS cache loaded: {loaded} vessels from disk")
    except Exception as e:
        logger.error(f"Failed to load AIS cache: {e}")


def get_ais_vessels() -> list[dict]:
    """Return a snapshot of tracked AIS vessels, excluding the 'other' type and pruning stale entries."""
    now = time.time()
    stale_cutoff = now - 900  # 15 minutes

    with _vessels_lock:
        # Prune stale vessels
        stale_keys = [k for k, v in _vessels.items() if v.get("_updated", 0) < stale_cutoff]
        for k in stale_keys:
            del _vessels[k]

        result = []
        for mmsi, v in _vessels.items():
            v_type = v.get("type", "unknown")
            # Skip 'other' vessels (fishing, tug, pilot, etc.) to reduce load
            if v_type == "other":
                continue
            # Skip vessels without a valid position (also drops 0,0)
            if not v.get("lat") or not v.get("lng"):
                continue

            result.append({
                "mmsi": mmsi,
                "name": v.get("name", "UNKNOWN"),
                "type": v_type,
                "lat": round(v.get("lat", 0), 5),
                "lng": round(v.get("lng", 0), 5),
                "heading": v.get("heading", 0),
                "sog": round(v.get("sog", 0), 1),
                "cog": round(v.get("cog", 0), 1),
                "callsign": v.get("callsign", ""),
                "destination": v.get("destination", "") or "UNKNOWN",
                "imo": v.get("imo", 0),
                "country": get_country_from_mmsi(mmsi),
            })
    return result


def _ais_stream_loop():
    """Main loop: spawn the Node proxy and process messages from its stdout."""
    proxy_script = os.path.join(os.path.dirname(os.path.dirname(__file__)), "ais_proxy.js")

    while _ws_running:
        try:
            logger.info("Starting Node.js AIS Stream Proxy...")
            process = subprocess.Popen(
                ['node', proxy_script, API_KEY],
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
                text=True,
                bufsize=1
            )

            # Drain stderr in a background thread to prevent deadlock
            def _drain_stderr():
                for errline in iter(process.stderr.readline, ''):
                    errline = errline.strip()
                    if errline:
                        logger.warning(f"AIS proxy stderr: {errline}")
            threading.Thread(target=_drain_stderr, daemon=True).start()

            logger.info("AIS Stream proxy started — receiving vessel data")

            msg_count = 0
            for raw_msg in iter(process.stdout.readline, ''):
                if not _ws_running:
                    process.terminate()
                    break

                raw_msg = raw_msg.strip()
                if not raw_msg:
                    continue

                try:
                    data = json.loads(raw_msg)
                except json.JSONDecodeError:
                    continue

                if "error" in data:
                    logger.error(f"AIS Stream error: {data['error']}")
                    continue

                msg_type = data.get("MessageType", "")
                metadata = data.get("MetaData", {})
                message = data.get("Message", {})

                mmsi = metadata.get("MMSI", 0)
                if not mmsi:
                    continue

                with _vessels_lock:
                    if mmsi not in _vessels:
                        _vessels[mmsi] = {"_updated": time.time()}
                    vessel = _vessels[mmsi]

                # Update position from PositionReport or StandardClassBPositionReport
                if msg_type in ("PositionReport", "StandardClassBPositionReport"):
                    report = message.get(msg_type, {})
                    lat = report.get("Latitude", metadata.get("latitude", 0))
                    lng = report.get("Longitude", metadata.get("longitude", 0))

                    # Skip invalid positions
                    if lat == 0 and lng == 0:
                        continue
                    if abs(lat) > 90 or abs(lng) > 180:
                        continue

                    with _vessels_lock:
                        vessel["lat"] = lat
                        vessel["lng"] = lng
                        vessel["sog"] = report.get("Sog", 0)
                        vessel["cog"] = report.get("Cog", 0)
                        heading = report.get("TrueHeading", 511)  # 511 = heading unavailable
                        vessel["heading"] = heading if heading != 511 else report.get("Cog", 0)
                        vessel["_updated"] = time.time()
                        # Use the metadata name if we don't have one yet
                        if not vessel.get("name") or vessel["name"] == "UNKNOWN":
                            vessel["name"] = metadata.get("ShipName", "UNKNOWN").strip() or "UNKNOWN"

                # Update static data from ShipStaticData
                elif msg_type == "ShipStaticData":
                    static = message.get("ShipStaticData", {})
                    ais_type = static.get("Type", 0)

                    with _vessels_lock:
                        vessel["name"] = (static.get("Name", "") or metadata.get("ShipName", "UNKNOWN")).strip() or "UNKNOWN"
                        vessel["callsign"] = (static.get("CallSign", "") or "").strip()
                        vessel["imo"] = static.get("ImoNumber", 0)
                        vessel["destination"] = (static.get("Destination", "") or "").strip().replace("@", "")
                        vessel["ais_type_code"] = ais_type
                        vessel["type"] = classify_vessel(ais_type, mmsi)
                        vessel["_updated"] = time.time()

                msg_count += 1
                if msg_count % 5000 == 0:
                    with _vessels_lock:
                        # Inline pruning: remove vessels not updated in 15 minutes
                        prune_cutoff = time.time() - 900
                        stale = [k for k, v in _vessels.items() if v.get("_updated", 0) < prune_cutoff]
                        for k in stale:
                            del _vessels[k]
                        count = len(_vessels)
                        if stale:
                            logger.info(f"AIS pruned {len(stale)} stale vessels")
                    logger.info(f"AIS Stream: processed {msg_count} messages, tracking {count} vessels")
                    # Auto-save every 5000 messages (~60 seconds); called outside the
                    # with-block above because _save_cache re-acquires the non-reentrant lock
                    _save_cache()

        except Exception as e:
            logger.error(f"AIS proxy connection error: {e}")
        if _ws_running:
            logger.info("Restarting AIS proxy in 5 seconds...")
            time.sleep(5)


def _run_ais_loop():
    """Thread target: run the AIS loop."""
    try:
        _ais_stream_loop()
    except Exception as e:
        logger.error(f"AIS Stream thread crashed: {e}")


def start_ais_stream():
    """Start the AIS WebSocket stream in a background thread."""
    global _ws_thread, _ws_running
    if _ws_thread and _ws_thread.is_alive():
        logger.info("AIS Stream already running")
        return

    # Load cached vessel data from disk
    _load_cache()

    _ws_running = True
    _ws_thread = threading.Thread(target=_run_ais_loop, daemon=True, name="ais-stream")
    _ws_thread.start()
    logger.info("AIS Stream background thread started")


def stop_ais_stream():
    """Stop the AIS WebSocket stream and save the cache."""
    global _ws_running
    _ws_running = False
    _save_cache()  # Save on shutdown
    logger.info("AIS Stream stopping...")
@@ -0,0 +1,175 @@ backend/services/api_settings.py
"""
API Settings management — serves the API key registry and allows updates.
Keys are stored in the backend .env file and loaded via python-dotenv.
"""
import os
import re
from pathlib import Path

# Path to the backend .env file
ENV_PATH = Path(__file__).parent.parent / ".env"

# ---------------------------------------------------------------------------
# API Registry — every external service the dashboard depends on
# ---------------------------------------------------------------------------
API_REGISTRY = [
    {
        "id": "opensky_client_id",
        "env_key": "OPENSKY_CLIENT_ID",
        "name": "OpenSky Network — Client ID",
        "description": "OAuth2 client ID for the OpenSky Network API. Provides global flight state vectors with 400 requests/day.",
        "category": "Aviation",
        "url": "https://opensky-network.org/",
        "required": True,
    },
    {
        "id": "opensky_client_secret",
        "env_key": "OPENSKY_CLIENT_SECRET",
        "name": "OpenSky Network — Client Secret",
        "description": "OAuth2 client secret paired with the Client ID above. Used for authenticated token refresh.",
        "category": "Aviation",
        "url": "https://opensky-network.org/",
        "required": True,
    },
    {
        "id": "ais_api_key",
        "env_key": "AIS_API_KEY",
        "name": "AIS Stream",
        "description": "WebSocket API key for real-time Automatic Identification System (AIS) vessel tracking data worldwide.",
        "category": "Maritime",
        "url": "https://aisstream.io/",
        "required": True,
    },
    {
        "id": "adsb_lol",
        "env_key": None,
        "name": "adsb.lol",
        "description": "Community-maintained ADS-B flight tracking API. No key required — public endpoint.",
        "category": "Aviation",
        "url": "https://api.adsb.lol/",
        "required": False,
    },
    {
        "id": "usgs_earthquakes",
        "env_key": None,
        "name": "USGS Earthquake Hazards",
        "description": "Real-time earthquake data feed from the United States Geological Survey. No key required.",
        "category": "Geophysical",
        "url": "https://earthquake.usgs.gov/",
        "required": False,
    },
    {
        "id": "celestrak",
        "env_key": None,
        "name": "CelesTrak (NORAD TLEs)",
        "description": "Satellite orbital element data from CelesTrak. Provides TLE sets for 2,000+ active satellites. No key required.",
        "category": "Space",
        "url": "https://celestrak.org/",
        "required": False,
    },
    {
        "id": "gdelt",
        "env_key": None,
        "name": "GDELT Project",
        "description": "Global Database of Events, Language, and Tone. Monitors news media for geopolitical events worldwide. No key required.",
        "category": "Intelligence",
        "url": "https://www.gdeltproject.org/",
        "required": False,
    },
    {
        "id": "nominatim",
        "env_key": None,
        "name": "Nominatim (OpenStreetMap)",
        "description": "Reverse geocoding service. Converts lat/lng coordinates to human-readable location names. No key required.",
        "category": "Geolocation",
        "url": "https://nominatim.openstreetmap.org/",
        "required": False,
    },
    {
        "id": "rainviewer",
        "env_key": None,
        "name": "RainViewer",
        "description": "Weather radar tile overlay. Provides global precipitation data as map tiles. No key required.",
        "category": "Weather",
        "url": "https://www.rainviewer.com/",
        "required": False,
    },
    {
        "id": "rss_feeds",
        "env_key": None,
        "name": "RSS News Feeds",
        "description": "Aggregates from NPR, BBC, Al Jazeera, NYT, Reuters, and AP for global news coverage. No key required.",
        "category": "Intelligence",
        "url": None,
        "required": False,
    },
    {
        "id": "yfinance",
        "env_key": None,
        "name": "Yahoo Finance (yfinance)",
        "description": "Defense sector stock tickers and commodity prices. Uses the yfinance Python library. No key required.",
        "category": "Markets",
        "url": "https://finance.yahoo.com/",
        "required": False,
    },
    {
        "id": "openmhz",
        "env_key": None,
        "name": "OpenMHz",
        "description": "Public radio scanner feeds for SIGINT interception. Streams police/fire/EMS radio traffic. No key required.",
        "category": "SIGINT",
        "url": "https://openmhz.com/",
        "required": False,
    },
]


def _obfuscate(value: str) -> str:
    """Show the first 4 chars, mask the rest with bullets."""
    if not value or len(value) <= 4:
        return "••••••••"
    return value[:4] + "•" * (len(value) - 4)


def get_api_keys():
    """Return the full API registry with obfuscated key values."""
    result = []
    for api in API_REGISTRY:
        entry = {
            "id": api["id"],
            "name": api["name"],
            "description": api["description"],
            "category": api["category"],
            "url": api["url"],
            "required": api["required"],
            "has_key": api["env_key"] is not None,
            "env_key": api["env_key"],
            "value_obfuscated": None,
            "value_plain": None,
        }
        if api["env_key"]:
            raw = os.environ.get(api["env_key"], "")
            entry["value_obfuscated"] = _obfuscate(raw)
            entry["value_plain"] = raw  # NOTE: included in every response; the UI only reveals it on request
        result.append(entry)
    return result


def update_api_key(env_key: str, new_value: str) -> bool:
    """Update a single key in the .env file and in the current process env."""
    if not ENV_PATH.exists():
        return False

    # Update os.environ immediately
    os.environ[env_key] = new_value

    # Update the .env file on disk
    content = ENV_PATH.read_text(encoding="utf-8")
    pattern = re.compile(rf"^{re.escape(env_key)}=.*$", re.MULTILINE)
    if pattern.search(content):
        content = pattern.sub(f"{env_key}={new_value}", content)
    else:
        content = content.rstrip("\n") + f"\n{env_key}={new_value}\n"

    ENV_PATH.write_text(content, encoding="utf-8")
    return True
@@ -0,0 +1,455 @@
"""
Carrier Strike Group OSINT Tracker
===================================
Scrapes multiple OSINT sources to maintain current estimated positions
for US Navy Carrier Strike Groups. Updates on startup, then at 00:00 and 12:00 UTC.

Sources:
1. GDELT News API — recent carrier movement headlines
2. WikiVoyage / public port-call databases
3. Fallback — last-known or static OSINT estimates
"""

import re
import json
import time
import logging
import threading
from datetime import datetime, timedelta, timezone
from pathlib import Path
from typing import Dict, List, Optional
from services.network_utils import fetch_with_curl

logger = logging.getLogger(__name__)

# -----------------------------------------------------------------
# Carrier registry: hull number → metadata + fallback position
# -----------------------------------------------------------------
CARRIER_REGISTRY: Dict[str, dict] = {
    "CVN-68": {
        "name": "USS Nimitz (CVN-68)",
        "wiki": "https://en.wikipedia.org/wiki/USS_Nimitz",
        "homeport": "Bremerton, WA",
        "homeport_lat": 47.56, "homeport_lng": -122.63,
        "fallback_lat": 21.35, "fallback_lng": -157.95,
        "fallback_heading": 270,
        "fallback_desc": "Pacific Fleet / Pearl Harbor"
    },
    "CVN-69": {
        "name": "USS Dwight D. Eisenhower (CVN-69)",
        "wiki": "https://en.wikipedia.org/wiki/USS_Dwight_D._Eisenhower",
        "homeport": "Norfolk, VA",
        "homeport_lat": 36.95, "homeport_lng": -76.33,
        "fallback_lat": 18.0, "fallback_lng": 39.5,
        "fallback_heading": 120,
        "fallback_desc": "Red Sea / CENTCOM AOR"
    },
    "CVN-78": {
        "name": "USS Gerald R. Ford (CVN-78)",
        "wiki": "https://en.wikipedia.org/wiki/USS_Gerald_R._Ford",
        "homeport": "Norfolk, VA",
        "homeport_lat": 36.95, "homeport_lng": -76.33,
        "fallback_lat": 34.0, "fallback_lng": 25.0,
        "fallback_heading": 90,
        "fallback_desc": "Eastern Mediterranean deterrence"
    },
    "CVN-70": {
        "name": "USS Carl Vinson (CVN-70)",
        "wiki": "https://en.wikipedia.org/wiki/USS_Carl_Vinson",
        "homeport": "San Diego, CA",
        "homeport_lat": 32.68, "homeport_lng": -117.15,
        "fallback_lat": 15.0, "fallback_lng": 115.0,
        "fallback_heading": 45,
        "fallback_desc": "South China Sea patrol"
    },
    "CVN-71": {
        "name": "USS Theodore Roosevelt (CVN-71)",
        "wiki": "https://en.wikipedia.org/wiki/USS_Theodore_Roosevelt_(CVN-71)",
        "homeport": "San Diego, CA",
        "homeport_lat": 32.68, "homeport_lng": -117.15,
        "fallback_lat": 22.0, "fallback_lng": 122.0,
        "fallback_heading": 300,
        "fallback_desc": "Philippine Sea / Taiwan Strait"
    },
    "CVN-72": {
        "name": "USS Abraham Lincoln (CVN-72)",
        "wiki": "https://en.wikipedia.org/wiki/USS_Abraham_Lincoln_(CVN-72)",
        "homeport": "San Diego, CA",
        "homeport_lat": 32.68, "homeport_lng": -117.15,
        "fallback_lat": 21.0, "fallback_lng": -158.0,
        "fallback_heading": 270,
        "fallback_desc": "Pacific deployment"
    },
    "CVN-73": {
        "name": "USS George Washington (CVN-73)",
        "wiki": "https://en.wikipedia.org/wiki/USS_George_Washington_(CVN-73)",
        "homeport": "Yokosuka, Japan",
        "homeport_lat": 35.28, "homeport_lng": 139.67,
        "fallback_lat": 35.0, "fallback_lng": 139.0,
        "fallback_heading": 0,
        "fallback_desc": "Yokosuka, Japan (Forward deployed)"
    },
    "CVN-74": {
        "name": "USS John C. Stennis (CVN-74)",
        "wiki": "https://en.wikipedia.org/wiki/USS_John_C._Stennis",
        "homeport": "Norfolk, VA",
        "homeport_lat": 36.95, "homeport_lng": -76.33,
        "fallback_lat": 36.95, "fallback_lng": -76.33,
        "fallback_heading": 0,
        "fallback_desc": "RCOH / Norfolk (maintenance)"
    },
    "CVN-75": {
        "name": "USS Harry S. Truman (CVN-75)",
        "wiki": "https://en.wikipedia.org/wiki/USS_Harry_S._Truman",
        "homeport": "Norfolk, VA",
        "homeport_lat": 36.95, "homeport_lng": -76.33,
        "fallback_lat": 36.0, "fallback_lng": 15.0,
        "fallback_heading": 90,
        "fallback_desc": "Mediterranean deployment"
    },
    "CVN-76": {
        "name": "USS Ronald Reagan (CVN-76)",
        "wiki": "https://en.wikipedia.org/wiki/USS_Ronald_Reagan",
        "homeport": "Bremerton, WA",
        "homeport_lat": 47.56, "homeport_lng": -122.63,
        "fallback_lat": 47.56, "fallback_lng": -122.63,
        "fallback_heading": 0,
        "fallback_desc": "Bremerton, WA (Homeport)"
    },
    "CVN-77": {
        "name": "USS George H.W. Bush (CVN-77)",
        "wiki": "https://en.wikipedia.org/wiki/USS_George_H.W._Bush",
        "homeport": "Norfolk, VA",
        "homeport_lat": 36.95, "homeport_lng": -76.33,
        "fallback_lat": 36.95, "fallback_lng": -76.33,
        "fallback_heading": 0,
        "fallback_desc": "Norfolk, VA (Homeport)"
    },
}

# -----------------------------------------------------------------
# Region → approximate center coordinates
# Used to map textual geographic descriptions to lat/lng
# -----------------------------------------------------------------
REGION_COORDS: Dict[str, tuple] = {
    # Oceans & Seas
    "eastern mediterranean": (34.0, 25.0),
    "mediterranean": (36.0, 15.0),
    "western mediterranean": (37.0, 2.0),
    "red sea": (18.0, 39.5),
    "arabian sea": (16.0, 64.0),
    "persian gulf": (26.5, 51.5),
    "gulf of oman": (24.5, 58.5),
    "north arabian sea": (20.0, 64.0),
    "south china sea": (15.0, 115.0),
    "east china sea": (28.0, 125.0),
    "philippine sea": (20.0, 130.0),
    "sea of japan": (40.0, 135.0),
    "taiwan strait": (24.0, 119.5),
    "western pacific": (20.0, 140.0),
    "pacific": (20.0, -150.0),
    "indian ocean": (-5.0, 70.0),
    "north atlantic": (40.0, -40.0),
    "atlantic": (30.0, -50.0),
    "gulf of aden": (12.5, 45.0),
    "horn of africa": (10.0, 50.0),
    "strait of hormuz": (26.5, 56.3),
    "bab el-mandeb": (12.6, 43.3),
    "suez canal": (30.5, 32.3),
    "baltic sea": (57.0, 18.0),
    "north sea": (56.0, 3.0),
    "black sea": (43.0, 34.0),
    "south atlantic": (-20.0, -20.0),
    "coral sea": (-18.0, 155.0),
    "gulf of mexico": (25.0, -90.0),
    "caribbean": (15.0, -75.0),

    # Specific bases / ports
    "norfolk": (36.95, -76.33),
    "san diego": (32.68, -117.15),
    "yokosuka": (35.28, 139.67),
    "pearl harbor": (21.35, -157.95),
    "guam": (13.45, 144.79),
    "bahrain": (26.23, 50.55),
    "rota": (36.62, -6.35),
    "naples": (40.85, 14.27),
    "bremerton": (47.56, -122.63),
    "puget sound": (47.56, -122.63),
    "newport news": (36.98, -76.43),

    # Areas of operation
    "centcom": (25.0, 55.0),
    "indopacom": (20.0, 130.0),
    "eucom": (48.0, 15.0),
    "southcom": (10.0, -80.0),
    "5th fleet": (25.0, 55.0),
    "6th fleet": (36.0, 15.0),
    "7th fleet": (25.0, 130.0),
    "3rd fleet": (30.0, -130.0),
    "2nd fleet": (35.0, -60.0),
}

# -----------------------------------------------------------------
# Cache file for persisting positions between restarts
# -----------------------------------------------------------------
CACHE_FILE = Path(__file__).parent.parent / "carrier_cache.json"

_carrier_positions: Dict[str, dict] = {}
_positions_lock = threading.Lock()
_last_update: Optional[datetime] = None


def _load_cache() -> Dict[str, dict]:
    """Load cached carrier positions from disk."""
    try:
        if CACHE_FILE.exists():
            data = json.loads(CACHE_FILE.read_text())
            logger.info(f"Carrier cache loaded: {len(data)} carriers from {CACHE_FILE}")
            return data
    except Exception as e:
        logger.warning(f"Failed to load carrier cache: {e}")
    return {}


def _save_cache(positions: Dict[str, dict]):
    """Persist carrier positions to disk."""
    try:
        CACHE_FILE.write_text(json.dumps(positions, indent=2))
        logger.info(f"Carrier cache saved: {len(positions)} carriers")
    except Exception as e:
        logger.warning(f"Failed to save carrier cache: {e}")


def _match_region(text: str) -> Optional[tuple]:
    """Match a text string against known regions; return (lat, lng) or None."""
    text_lower = text.lower()
    # Check longer (more specific) region names before shorter ones
    for region, coords in sorted(REGION_COORDS.items(), key=lambda x: -len(x[0])):
        if region in text_lower:
            return coords
    return None


def _match_carrier(text: str) -> Optional[str]:
    """Match a text string against known carrier names/hull numbers."""
    text_lower = text.lower()
    for hull, info in CARRIER_REGISTRY.items():
        hull_check = hull.lower().replace("-", "")
        name_parts = info["name"].lower()
        # Match hull number (e.g., "CVN-78", "CVN78")
        if hull.lower() in text_lower or hull_check in text_lower.replace("-", ""):
            return hull
        # Match ship name (e.g., "Ford", "Eisenhower", "Vinson")
        ship_name = name_parts.split("(")[0].strip()
        last_name = ship_name.split()[-1] if ship_name else ""
        if last_name and len(last_name) > 3 and last_name in text_lower:
            return hull
    return None
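
# --- Illustrative check (not part of the original module) ----------------
# A sketch of how the two matchers combine on a headline; the sample
# string is hypothetical.
def _demo_match():
    headline = "USS Gerald R. Ford carrier strike group enters Eastern Mediterranean"
    hull = _match_carrier(headline)   # -> "CVN-78" (matches ship name "Ford")
    coords = _match_region(headline)  # -> (34.0, 25.0) ("eastern mediterranean")
    print(hull, coords)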

def _fetch_gdelt_carrier_news() -> List[dict]:
    """Search GDELT for recent carrier movement news."""
    results = []
    search_terms = [
        "aircraft+carrier+deployed",
        "carrier+strike+group+navy",
        "USS+Nimitz+carrier", "USS+Ford+carrier", "USS+Eisenhower+carrier",
        "USS+Vinson+carrier", "USS+Roosevelt+carrier+navy",
        "USS+Lincoln+carrier", "USS+Truman+carrier",
        "USS+Reagan+carrier", "USS+Washington+carrier+navy",
        "USS+Bush+carrier", "USS+Stennis+carrier",
    ]

    for term in search_terms:
        try:
            url = f"https://api.gdeltproject.org/api/v2/doc/doc?query={term}&mode=artlist&maxrecords=5&format=json&timespan=14d"
            res = fetch_with_curl(url, timeout=8)
            if not res or res.status_code != 200:
                continue
            data = json.loads(res.text)
            for art in data.get("articles", []):
                results.append({"title": art.get("title", ""), "url": art.get("url", "")})
        except Exception as e:
            logger.debug(f"GDELT search failed for '{term}': {e}")
            continue

    logger.info(f"Carrier OSINT: found {len(results)} GDELT articles")
    return results


def _parse_carrier_positions_from_news(articles: List[dict]) -> Dict[str, dict]:
    """Parse carrier positions from news article titles and descriptions."""
    updates: Dict[str, dict] = {}

    for article in articles:
        title = article.get("title", "")

        # Try to match a carrier from the title
        hull = _match_carrier(title)
        if not hull:
            continue

        # Try to match a region from the title
        coords = _match_region(title)
        if not coords:
            continue

        # Only update if we haven't seen this carrier yet (first match wins — most recent)
        if hull not in updates:
            updates[hull] = {
                "lat": coords[0],
                "lng": coords[1],
                "desc": title[:100],
                "source": "GDELT OSINT",
                "updated": datetime.now(timezone.utc).isoformat()
            }
            logger.info(f"Carrier update: {CARRIER_REGISTRY[hull]['name']} → {coords} (from: {title[:80]})")

    return updates


def update_carrier_positions():
    """Main update function — called on startup and every 12h."""
    global _last_update

    logger.info("Carrier tracker: updating positions from OSINT sources...")

    # Start with fallback positions
    positions: Dict[str, dict] = {}
    for hull, info in CARRIER_REGISTRY.items():
        positions[hull] = {
            "name": info["name"],
            "lat": info["fallback_lat"],
            "lng": info["fallback_lng"],
            "heading": info["fallback_heading"],
            "desc": info["fallback_desc"],
            "wiki": info["wiki"],
            "source": "Static OSINT estimate",
            "updated": datetime.now(timezone.utc).isoformat()
        }

    # Load cached positions (may have better data from previous runs)
    cached = _load_cache()
    for hull, cached_pos in cached.items():
        if hull in positions:
            # Only use the cache if it has a real OSINT source (not just static)
            if cached_pos.get("source", "").startswith(("GDELT", "News")):
                positions[hull].update({
                    "lat": cached_pos["lat"],
                    "lng": cached_pos["lng"],
                    "desc": cached_pos.get("desc", positions[hull]["desc"]),
                    "source": cached_pos.get("source", "Cached OSINT"),
                    "updated": cached_pos.get("updated", "")
                })

    # Try GDELT news for fresh positions
    try:
        articles = _fetch_gdelt_carrier_news()
        news_positions = _parse_carrier_positions_from_news(articles)
        for hull, pos in news_positions.items():
            if hull in positions:
                positions[hull].update(pos)
                logger.info(f"Carrier OSINT: updated {CARRIER_REGISTRY[hull]['name']} from news")
    except Exception as e:
        logger.warning(f"GDELT carrier fetch failed: {e}")

    # Save and update the global state
    with _positions_lock:
        _carrier_positions.clear()
        _carrier_positions.update(positions)
        _last_update = datetime.now(timezone.utc)

    _save_cache(positions)

    sources = {}
    for p in positions.values():
        src = p.get("source", "unknown")
        sources[src] = sources.get(src, 0) + 1
    logger.info(f"Carrier tracker: {len(positions)} carriers updated. Sources: {sources}")


def get_carrier_positions() -> List[dict]:
    """Return current carrier positions for the data pipeline."""
    with _positions_lock:
        result = []
        for hull, pos in _carrier_positions.items():
            info = CARRIER_REGISTRY.get(hull, {})
            result.append({
                "name": pos.get("name", info.get("name", hull)),
                "type": "carrier",
                "lat": pos["lat"],
                "lng": pos["lng"],
                "heading": pos.get("heading", 0),
                "sog": 0,
                "cog": 0,
                "country": "United States",
                "desc": pos.get("desc", ""),
                "wiki": pos.get("wiki", info.get("wiki", "")),
                "estimated": True,
                "source": pos.get("source", "OSINT estimated position"),
                "last_osint_update": pos.get("updated", "")
            })
        return result


# -----------------------------------------------------------------
# Scheduler: runs at startup, then at 00:00 and 12:00 UTC daily
# -----------------------------------------------------------------
_scheduler_thread: Optional[threading.Thread] = None
_scheduler_stop = threading.Event()


def _scheduler_loop():
    """Background thread that triggers updates at 00:00 and 12:00 UTC."""
    # Initial update on startup
    try:
        update_carrier_positions()
    except Exception as e:
        logger.error(f"Carrier tracker initial update failed: {e}")

    while not _scheduler_stop.is_set():
        now = datetime.now(timezone.utc)
        # Next target: 12:00 UTC today, or 00:00 UTC tomorrow
        if now.hour < 12:
            next_run = now.replace(hour=12, minute=0, second=0, microsecond=0)
        else:
            next_run = (now + timedelta(days=1)).replace(hour=0, minute=0, second=0, microsecond=0)

        wait_seconds = (next_run - now).total_seconds()
        logger.info(f"Carrier tracker: next update at {next_run.isoformat()} ({wait_seconds/3600:.1f}h)")

        # Wait until the next scheduled time, or until the stop event fires
        if _scheduler_stop.wait(timeout=wait_seconds):
            break  # Stop event was set

        try:
            update_carrier_positions()
        except Exception as e:
            logger.error(f"Carrier tracker scheduled update failed: {e}")


def start_carrier_tracker():
    """Start the carrier tracker background thread."""
    global _scheduler_thread
    if _scheduler_thread and _scheduler_thread.is_alive():
        return
    _scheduler_stop.clear()
    _scheduler_thread = threading.Thread(target=_scheduler_loop, daemon=True, name="carrier-tracker")
    _scheduler_thread.start()
    logger.info("Carrier tracker started")


def stop_carrier_tracker():
    """Stop the carrier tracker background thread."""
    _scheduler_stop.set()
    if _scheduler_thread:
        _scheduler_thread.join(timeout=5)
    logger.info("Carrier tracker stopped")
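
# --- Illustrative usage (not part of the original module) ----------------
# A minimal, network-dependent smoke test; without network access the
# tracker still returns the static fallback estimates.
if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    update_carrier_positions()
    for c in get_carrier_positions():
        print(f'{c["name"]:<42} {c["lat"]:>7.2f} {c["lng"]:>8.2f}  {c["source"]}')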
@@ -0,0 +1,274 @@
import sqlite3
import requests
from services.network_utils import fetch_with_curl
import logging
from abc import ABC, abstractmethod
from typing import List, Dict, Any

logger = logging.getLogger(__name__)

DB_PATH = "cctv.db"


def init_db():
    conn = sqlite3.connect(DB_PATH)
    cursor = conn.cursor()
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS cameras (
            id TEXT PRIMARY KEY,
            source_agency TEXT,
            lat REAL,
            lon REAL,
            direction_facing TEXT,
            media_url TEXT,
            refresh_rate_seconds INTEGER,
            last_updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        )
    """)
    conn.commit()
    conn.close()


class BaseCCTVIngestor(ABC):
    def __init__(self):
        self.conn = sqlite3.connect(DB_PATH)

    @abstractmethod
    def fetch_data(self) -> List[Dict[str, Any]]:
        pass

    def ingest(self):
        try:
            cameras = self.fetch_data()
            cursor = self.conn.cursor()
            for cam in cameras:
                cursor.execute("""
                    INSERT INTO cameras
                    (id, source_agency, lat, lon, direction_facing, media_url, refresh_rate_seconds)
                    VALUES (?, ?, ?, ?, ?, ?, ?)
                    ON CONFLICT(id) DO UPDATE SET
                        media_url=excluded.media_url,
                        last_updated=CURRENT_TIMESTAMP
                """, (
                    cam.get("id"),
                    cam.get("source_agency"),
                    cam.get("lat"),
                    cam.get("lon"),
                    cam.get("direction_facing", "Unknown"),
                    cam.get("media_url"),
                    cam.get("refresh_rate_seconds", 60)
                ))
            self.conn.commit()
            logger.info(f"Successfully ingested {len(cameras)} cameras from {self.__class__.__name__}")
        except Exception as e:
            logger.error(f"Failed to ingest cameras in {self.__class__.__name__}: {e}")


class TFLJamCamIngestor(BaseCCTVIngestor):
    def fetch_data(self) -> List[Dict[str, Any]]:
        # Transport for London Open Data API
        url = "https://api.tfl.gov.uk/Place/Type/JamCam"
        response = fetch_with_curl(url, timeout=15)
        response.raise_for_status()

        data = response.json()
        cameras = []
        for item in data:
            # TfL sometimes returns URLs without a protocol, or with a base path
            vid_url = None
            img_url = None

            for prop in item.get('additionalProperties', []):
                if prop.get('key') == 'videoUrl':
                    vid_url = prop.get('value')
                elif prop.get('key') == 'imageUrl':
                    img_url = prop.get('value')

            media = vid_url if vid_url else img_url
            if media:
                cameras.append({
                    "id": f"TFL-{item.get('id')}",
                    "source_agency": "TfL",
                    "lat": item.get('lat'),
                    "lon": item.get('lon'),
                    "direction_facing": item.get('commonName', 'Unknown'),
                    "media_url": media,
                    "refresh_rate_seconds": 15
                })
        return cameras


class LTASingaporeIngestor(BaseCCTVIngestor):
    def fetch_data(self) -> List[Dict[str, Any]]:
        # Singapore Land Transport Authority (LTA) Traffic Images API
        url = "https://api.data.gov.sg/v1/transport/traffic-images"
        response = fetch_with_curl(url, timeout=15)
        response.raise_for_status()

        data = response.json()
        cameras = []
        if "items" in data and len(data["items"]) > 0:
            for item in data["items"][0].get("cameras", []):
                loc = item.get("location", {})
                if "latitude" in loc and "longitude" in loc and "image" in item:
                    cameras.append({
                        "id": f"SGP-{item.get('camera_id', 'UNK')}",
                        "source_agency": "Singapore LTA",
                        "lat": loc.get("latitude"),
                        "lon": loc.get("longitude"),
                        "direction_facing": f"Camera {item.get('camera_id')}",
                        "media_url": item.get("image"),
                        "refresh_rate_seconds": 60
                    })
        return cameras


class AustinTXIngestor(BaseCCTVIngestor):
    def fetch_data(self) -> List[Dict[str, Any]]:
        # City of Austin Traffic Cameras Open Data
        url = "https://data.austintexas.gov/resource/b4k4-adkb.json?$limit=2000"
        response = fetch_with_curl(url, timeout=15)
        response.raise_for_status()

        data = response.json()
        cameras = []
        for item in data:
            cam_id = item.get("camera_id")
            if not cam_id:
                continue

            loc = item.get("location", {})
            coords = loc.get("coordinates", [])

            # coords is usually [lon, lat]
            if len(coords) == 2:
                cameras.append({
                    "id": f"ATX-{cam_id}",
                    "source_agency": "Austin TxDOT",
                    "lat": coords[1],
                    "lon": coords[0],
                    "direction_facing": item.get("location_name", "Austin TX Camera"),
                    "media_url": f"https://cctv.austinmobility.io/image/{cam_id}.jpg",
                    "refresh_rate_seconds": 60
                })
        return cameras


class NYCDOTIngestor(BaseCCTVIngestor):
    def fetch_data(self) -> List[Dict[str, Any]]:
        url = "https://webcams.nyctmc.org/api/cameras"
        response = fetch_with_curl(url, timeout=15)
        response.raise_for_status()

        data = response.json()
        cameras = []
        for item in data:
            cam_id = item.get("id")
            if not cam_id:
                continue

            lat = item.get("latitude")
            lon = item.get("longitude")
            if lat and lon:
                cameras.append({
                    "id": f"NYC-{cam_id}",
                    "source_agency": "NYC DOT",
                    "lat": lat,
                    "lon": lon,
                    "direction_facing": item.get("name", "NYC Camera"),
                    "media_url": f"https://webcams.nyctmc.org/api/cameras/{cam_id}/image",
                    "refresh_rate_seconds": 30
                })
        return cameras


class GlobalOSMCrawlingIngestor(BaseCCTVIngestor):
    def fetch_data(self) -> List[Dict[str, Any]]:
        # Pulls physical street-surveillance cameras in a set of global hotspots
        # via the OpenStreetMap Overpass API, then renders each camera's position
        # and bearing as a Mapbox static satellite view.
        regions = [
            ("35.6,139.6,35.8,139.8", "Tokyo"),
            ("48.8,2.3,48.9,2.4", "Paris"),
            ("40.6,-74.1,40.8,-73.9", "NYC Expanded"),
            ("34.0,-118.4,34.2,-118.2", "Los Angeles"),
            ("-33.9,151.1,-33.7,151.3", "Sydney"),
            ("52.4,13.3,52.6,13.5", "Berlin"),
            ("25.1,55.2,25.3,55.4", "Dubai"),
            ("19.3,-99.2,19.5,-99.0", "Mexico City"),
            ("-23.6,-46.7,-23.4,-46.5", "Sao Paulo"),
            ("39.6,-105.1,39.9,-104.8", "Denver")
        ]

        query_parts = [f'node["man_made"="surveillance"]({bbox});' for bbox, city in regions]
        query = "".join(query_parts)
        url = f"https://overpass-api.de/api/interpreter?data=[out:json];({query});out%202000;"

        try:
            response = fetch_with_curl(url, timeout=15)
            response.raise_for_status()
            data = response.json()

            cameras = []
            for item in data.get('elements', []):
                lat = item.get("lat")
                lon = item.get("lon")
                cam_id = item.get("id")

                if lat and lon:
                    # Find which city this camera belongs to
                    source_city = "Global OSINT"
                    for bbox, city in regions:
                        s, w, n, e = map(float, bbox.split(','))
                        if s <= lat <= n and w <= lon <= e:
                            source_city = f"OSINT: {city}"
                            break

                    # Parse the camera bearing if OSM mapped one
                    direction_str = item.get("tags", {}).get("camera:direction", "0")
                    try:
                        bearing = int(float(direction_str))
                    except (TypeError, ValueError):
                        bearing = 0

                    # Placeholder token; in practice load this from env/config
                    mapbox_key = "YOUR_MAPBOX_TOKEN_HERE"
                    mapbox_url = f"https://api.mapbox.com/styles/v1/mapbox/satellite-streets-v12/static/{lon},{lat},18,{bearing},60/600x400?access_token={mapbox_key}"

                    cameras.append({
                        "id": f"OSM-{cam_id}",
                        "source_agency": source_city,
                        "lat": lat,
                        "lon": lon,
                        "direction_facing": item.get("tags", {}).get("surveillance:type", "Street Level Camera"),
                        "media_url": mapbox_url,
                        "refresh_rate_seconds": 3600
                    })
            return cameras
        except Exception as e:
            logger.error(f"Overpass surveillance query failed: {e}")
            return []


def _detect_media_type(url: str) -> str:
    """Detect the media type from a camera URL for proper frontend rendering."""
    if not url:
        return "image"
    url_lower = url.lower()
    if any(ext in url_lower for ext in ['.mp4', '.webm', '.ogg']):
        return "video"
    if any(kw in url_lower for kw in ['.mjpg', '.mjpeg', 'mjpg', 'axis-cgi/mjpg', 'mode=motion']):
        return "mjpeg"
    if '.m3u8' in url_lower or 'hls' in url_lower:
        return "hls"
    if any(kw in url_lower for kw in ['embed', 'maps/embed', 'iframe']):
        return "embed"
    if 'mapbox.com' in url_lower or 'satellite' in url_lower:
        return "satellite"
    return "image"
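
# --- Illustrative check (not part of the original module) ----------------
# Expected classifications for a few hypothetical URLs.
def _demo_media_types():
    samples = {
        "https://example.com/cam/feed.m3u8": "hls",
        "https://example.com/axis-cgi/mjpg/video.cgi": "mjpeg",
        "https://api.mapbox.com/styles/v1/mapbox/satellite-streets-v12/static/x": "satellite",
        "https://example.com/cam/123.jpg": "image",
    }
    for url, expected in samples.items():
        assert _detect_media_type(url) == expected, url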

def get_all_cameras() -> List[Dict[str, Any]]:
    conn = sqlite3.connect(DB_PATH)
    conn.row_factory = sqlite3.Row
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM cameras")
    rows = cursor.fetchall()
    conn.close()
    cameras = []
    for row in rows:
        cam = dict(row)
        cam['media_type'] = _detect_media_type(cam.get('media_url', ''))
        cameras.append(cam)
    return cameras
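
# --- Illustrative usage (not part of the original module) ----------------
# A minimal ingest run, assuming network access to the public camera APIs.
if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    init_db()
    TFLJamCamIngestor().ingest()
    LTASingaporeIngestor().ingest()
    print(f"{len(get_all_cameras())} cameras in {DB_PATH}")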
File diff suppressed because it is too large
@@ -0,0 +1,301 @@
import re
import requests
import logging
from cachetools import cached, TTLCache
from datetime import datetime, timedelta
from urllib.parse import urlparse, unquote
from services.network_utils import fetch_with_curl

logger = logging.getLogger(__name__)

# Cache frontline data for 30 minutes; it doesn't move that fast
frontline_cache = TTLCache(maxsize=1, ttl=1800)


@cached(frontline_cache)
def fetch_ukraine_frontlines():
    """
    Fetches the latest GeoJSON data representing the Ukraine frontline.
    We use the cyterat/deepstate-map-data GitHub mirror since the public API is locked.
    """
    try:
        logger.info("Fetching DeepStateMap from GitHub mirror...")

        # First, query the repo tree to find the latest file name
        tree_url = "https://api.github.com/repos/cyterat/deepstate-map-data/git/trees/main?recursive=1"
        res_tree = requests.get(tree_url, timeout=10)

        if res_tree.status_code == 200:
            tree_data = res_tree.json().get("tree", [])
            # Filter for geojson files in the data folder
            geo_files = [item["path"] for item in tree_data if item["path"].startswith("data/deepstatemap_data_") and item["path"].endswith(".geojson")]

            if geo_files:
                # Get the alphabetically latest file (names embed YYYYMMDD)
                latest_file = sorted(geo_files)[-1]

                raw_url = f"https://raw.githubusercontent.com/cyterat/deepstate-map-data/main/{latest_file}"
                logger.info(f"Downloading latest DeepStateMap: {raw_url}")

                res_geo = requests.get(raw_url, timeout=20)
                if res_geo.status_code == 200:
                    data = res_geo.json()

                    # The cyterat GitHub mirror strips all properties and just provides a raw array of Feature polygons.
                    # Based on DeepStateMap's frontend mapping, the array index corresponds to the zone type:
                    # 0: Russian-occupied areas
                    # 1: Russian advance
                    # 2: Liberated area
                    # 3: Uncontested/Crimea (often folded into occupied)
                    name_map = {
                        0: "Russian-occupied areas",
                        1: "Russian advance",
                        2: "Liberated area",
                        3: "Russian-occupied areas",  # Crimea / LPR / DPR
                        4: "Directions of UA attacks"
                    }

                    if "features" in data:
                        for idx, feature in enumerate(data["features"]):
                            if "properties" not in feature or feature["properties"] is None:
                                feature["properties"] = {}

                            feature["properties"]["name"] = name_map.get(idx, "Russian-occupied areas")
                            feature["properties"]["zone_id"] = idx

                    return data
                else:
                    logger.error(f"Failed to fetch GitHub raw GeoJSON: {res_geo.status_code}")
        else:
            logger.error(f"Failed to fetch GitHub tree for DeepStateMap: {res_tree.status_code}")
    except Exception as e:
        logger.error(f"Error fetching DeepStateMap: {e}")
    return None


# Cache GDELT data for 6 hours - heavy aggregation, and the data doesn't change rapidly
gdelt_cache = TTLCache(maxsize=1, ttl=21600)


def _extract_domain(url):
    """Extract a clean source name from a URL, e.g. 'nytimes.com' from 'https://www.nytimes.com/...'"""
    try:
        host = urlparse(url).hostname or ''
        # Strip the www. prefix
        if host.startswith('www.'):
            host = host[4:]
        return host
    except Exception:
        return url[:40]


def _url_to_headline(url):
    """Extract a human-readable headline from a URL path.
    e.g. 'https://nytimes.com/2026/03/us-strikes-iran-nuclear-sites.html' -> 'Us Strikes Iran Nuclear Sites (nytimes.com)'
    """
    try:
        parsed = urlparse(url)
        domain = parsed.hostname or ''
        if domain.startswith('www.'):
            domain = domain[4:]

        # Get the last meaningful path segment
        path = unquote(parsed.path).strip('/')
        if not path:
            return domain

        # Take the last path segment (usually the slug)
        slug = path.split('/')[-1]
        # Remove file extensions
        for ext in ['.html', '.htm', '.php', '.asp', '.aspx', '.shtml']:
            if slug.lower().endswith(ext):
                slug = slug[:-len(ext)]
        # If the slug is purely numeric or a short ID, try the second-to-last segment
        if re.match(r'^[a-z]?\d{5,}$', slug, re.IGNORECASE):
            segments = path.split('/')
            if len(segments) >= 2:
                slug = segments[-2]
                for ext in ['.html', '.htm', '.php']:
                    if slug.lower().endswith(ext):
                        slug = slug[:-len(ext)]
        # Remove common ID patterns at the start/end
        slug = re.sub(r'^[\d]+-', '', slug)        # leading numbers like "13847569-"
        slug = re.sub(r'-[\da-f]{6,}$', '', slug)  # trailing hex IDs
        slug = re.sub(r'[-_]c-\d+$', '', slug)     # trailing "-c-21803431"
        slug = re.sub(r'^p=\d+$', '', slug)        # WordPress ?p=1234
        # Convert slug separators to spaces
        slug = slug.replace('-', ' ').replace('_', ' ')
        # Collapse multiple spaces
        slug = re.sub(r'\s+', ' ', slug).strip()

        # If the slug is still just a number or too short, fall back to the domain
        if len(slug) < 5 or re.match(r'^\d+$', slug):
            return domain

        # Title case and truncate
        headline = slug.title()
        if len(headline) > 80:
            headline = headline[:77] + '...'
        return f"{headline} ({domain})"
    except Exception:
        return url[:60]
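
# --- Illustrative check (not part of the original module) ----------------
# The docstring example, plus a numeric-slug fallback; both URLs are hypothetical.
def _demo_headlines():
    print(_url_to_headline("https://nytimes.com/2026/03/us-strikes-iran-nuclear-sites.html"))
    # -> "Us Strikes Iran Nuclear Sites (nytimes.com)"
    print(_url_to_headline("https://example.com/articles/1234567.html"))
    # -> "Articles (example.com)"  (numeric slug falls back to the prior segment)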

def _parse_gdelt_export_zip(zip_bytes, conflict_codes, seen_locs, features, loc_index):
    """Parse a single GDELT export ZIP and append conflict features.
    loc_index maps loc_key -> index in the features list for fast duplicate merging.
    """
    import csv, io, zipfile
    try:
        zf = zipfile.ZipFile(io.BytesIO(zip_bytes))
        csv_name = zf.namelist()[0]
        with zf.open(csv_name) as cf:
            reader = csv.reader(io.TextIOWrapper(cf, encoding='utf-8', errors='replace'), delimiter='\t')
            for row in reader:
                try:
                    if len(row) < 61:
                        continue
                    event_code = row[26][:2] if len(row[26]) >= 2 else ''
                    if event_code not in conflict_codes:
                        continue
                    lat = float(row[56]) if row[56] else None
                    lng = float(row[57]) if row[57] else None
                    if lat is None or lng is None or (lat == 0 and lng == 0):
                        continue

                    source_url = row[60].strip() if len(row) > 60 else ''
                    location = row[52].strip() if len(row) > 52 else 'Unknown'
                    actor1 = row[6].strip() if len(row) > 6 else ''
                    actor2 = row[16].strip() if len(row) > 16 else ''

                    loc_key = f"{round(lat, 1)}_{round(lng, 1)}"
                    if loc_key in seen_locs:
                        # Merge: increment the count and add the source URL if new (dedup by domain)
                        idx = loc_index[loc_key]
                        feat = features[idx]
                        feat["properties"]["count"] = feat["properties"].get("count", 1) + 1
                        urls = feat["properties"].get("_urls", [])
                        seen_domains = feat["properties"].get("_domains", set())
                        if source_url:
                            domain = _extract_domain(source_url)
                            if domain not in seen_domains and len(urls) < 10:
                                urls.append(source_url)
                                seen_domains.add(domain)
                        feat["properties"]["_urls"] = urls
                        feat["properties"]["_domains"] = seen_domains
                        continue
                    seen_locs.add(loc_key)

                    name = location or (f"{actor1} vs {actor2}" if actor1 and actor2 else actor1) or "Unknown Incident"
                    domain = _extract_domain(source_url) if source_url else ''
                    loc_index[loc_key] = len(features)
                    features.append({
                        "type": "Feature",
                        "properties": {
                            "name": name,
                            "count": 1,
                            "_urls": [source_url] if source_url else [],
                            "_domains": {domain} if domain else set(),
                        },
                        "geometry": {"type": "Point", "coordinates": [lng, lat]},
                        "_loc_key": loc_key
                    })
                except (ValueError, IndexError):
                    continue
    except Exception as e:
        logger.warning(f"Failed to parse GDELT export zip: {e}")


def _download_gdelt_export(url):
    """Download a single GDELT export file; return bytes or None."""
    try:
        res = fetch_with_curl(url, timeout=15)
        if res.status_code == 200:
            return res.content
    except Exception:
        pass
    return None


@cached(gdelt_cache)
def fetch_global_military_incidents():
    """
    Fetches global military/conflict incidents from GDELT Events export files.
    Aggregates the last ~8 hours of 15-minute exports to build ~1000 incidents.
    """
    from concurrent.futures import ThreadPoolExecutor

    try:
        logger.info("Fetching GDELT events via export CDN (multi-file)...")

        # Get the latest export URL to determine the current timestamp
        index_res = fetch_with_curl("http://data.gdeltproject.org/gdeltv2/lastupdate.txt", timeout=10)
        if index_res.status_code != 200:
            logger.error(f"GDELT lastupdate failed: {index_res.status_code}")
            return []

        # Extract the latest export URL and its timestamp
        latest_url = None
        for line in index_res.text.strip().split('\n'):
            parts = line.strip().split()
            if len(parts) >= 3 and parts[2].endswith('.export.CSV.zip'):
                latest_url = parts[2]
                break

        if not latest_url:
            logger.error("Could not find GDELT export URL")
            return []

        # Extract the timestamp from a URL like: http://data.gdeltproject.org/gdeltv2/20260301120000.export.CSV.zip
        ts_match = re.search(r'(\d{14})\.export\.CSV\.zip', latest_url)
        if not ts_match:
            logger.error("Could not parse GDELT export timestamp")
            return []

        latest_ts = datetime.strptime(ts_match.group(1), '%Y%m%d%H%M%S')

        # Generate URLs for the last 8 hours (32 files at 15-minute intervals)
        NUM_FILES = 32
        urls = []
        for i in range(NUM_FILES):
            ts = latest_ts - timedelta(minutes=15 * i)
            fname = ts.strftime('%Y%m%d%H%M%S') + '.export.CSV.zip'
            urls.append(f"http://data.gdeltproject.org/gdeltv2/{fname}")

        logger.info(f"Downloading {len(urls)} GDELT export files...")

        # Download in parallel (8 threads)
        with ThreadPoolExecutor(max_workers=8) as executor:
            zip_results = list(executor.map(_download_gdelt_export, urls))

        successful = sum(1 for r in zip_results if r is not None)
        logger.info(f"Downloaded {successful}/{len(urls)} GDELT exports")

        # Parse all downloaded files
        CONFLICT_CODES = {'14', '17', '18', '19', '20'}
        features = []
        seen_locs = set()
        loc_index = {}  # loc_key -> index in features

        for zip_bytes in zip_results:
            if zip_bytes:
                _parse_gdelt_export_zip(zip_bytes, CONFLICT_CODES, seen_locs, features, loc_index)

        # Build URL + headline arrays for frontend rendering
        for f in features:
            urls = f["properties"].pop("_urls", [])
            f["properties"].pop("_domains", None)
            headlines = [_url_to_headline(u) for u in urls]
            f["properties"]["_urls_list"] = urls
            f["properties"]["_headlines_list"] = headlines
            # Keep html as a fallback
            if urls:
                links = [f'<div style="margin-bottom:6px;"><a href="{u}" target="_blank">{h}</a></div>' for u, h in zip(urls, headlines)]
                f["properties"]["html"] = ''.join(links)
            else:
                f["properties"]["html"] = f["properties"]["name"]
            f.pop("_loc_key", None)

        logger.info(f"GDELT multi-file parsed: {len(features)} conflict locations from {successful} files")
        return features

    except Exception as e:
        logger.error(f"Error fetching GDELT data: {e}")
        return []
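
# --- Illustrative usage (not part of the original module) ----------------
# A network-dependent smoke test against GitHub and the GDELT CDN.
if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    incidents = fetch_global_military_incidents()
    print(f"{len(incidents)} conflict locations")
    front = fetch_ukraine_frontlines()
    print(f"frontline features: {len(front['features']) if front else 0}")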
@@ -0,0 +1,98 @@
import json
import logging
import base64
import urllib.parse
import re
from playwright.sync_api import sync_playwright
from playwright_stealth import stealth_sync

logger = logging.getLogger(__name__)


def fetch_liveuamap():
    logger.info("Starting Liveuamap scraper with Playwright Stealth...")

    regions = [
        {"name": "Ukraine", "url": "https://liveuamap.com"},
        {"name": "Middle East", "url": "https://mideast.liveuamap.com"},
        {"name": "Israel-Palestine", "url": "https://israelpalestine.liveuamap.com"},
        {"name": "Syria", "url": "https://syria.liveuamap.com"}
    ]

    all_markers = []
    seen_ids = set()

    with sync_playwright() as p:
        # Launch with a real user agent to get past Turnstile
        browser = p.chromium.launch(headless=False, args=["--disable-blink-features=AutomationControlled"])
        context = browser.new_context(
            user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
            viewport={"width": 1920, "height": 1080},
            color_scheme="dark"
        )
        page = context.new_page()
        stealth_sync(page)

        for region in regions:
            try:
                logger.info(f"Scraping Liveuamap region: {region['name']}")
                page.goto(region["url"], timeout=60000, wait_until="domcontentloaded")

                # Give the map scripts a fixed 5s to populate the markers variable
                page.wait_for_timeout(5000)

                html = page.content()

                m = re.search(r"var\s+ovens\s*=\s*(.*?);(?!function)", html, re.DOTALL)
                if not m:
                    logger.warning(f"Could not find 'ovens' data for {region['name']} in raw HTML")
                    # Fall back to reading the evaluated JavaScript variable directly
                    try:
                        ovens_json = page.evaluate("() => typeof ovens !== 'undefined' ? JSON.stringify(ovens) : null")
                        if ovens_json:
                            # Re-wrap it so the shared parsing below applies
                            html = f"var ovens={ovens_json};"
                            m = re.search(r"var\s+ovens=(.*?);", html, re.DOTALL)
                    except Exception:
                        pass

                if m:
                    json_str = m.group(1).strip()
                    if json_str.startswith("'") or json_str.startswith('"'):
                        # The payload may be a quoted, URL-encoded base64 string
                        json_str = json_str.strip('"\'')
                        json_str = base64.b64decode(urllib.parse.unquote(json_str)).decode('utf-8')

                    try:
                        markers = json.loads(json_str)
                        for marker in markers:
                            mid = marker.get("id")
                            if mid and mid not in seen_ids:
                                seen_ids.add(mid)
                                all_markers.append({
                                    "id": mid,
                                    "type": "liveuamap",
                                    "title": marker.get("s", "Unknown Event") or marker.get("title", ""),
                                    "lat": marker.get("lat"),
                                    "lng": marker.get("lng"),
                                    "timestamp": marker.get("time", ""),
                                    "link": marker.get("link", region["url"]),
                                    "region": region["name"]
                                })
                    except Exception as e:
                        logger.error(f"Error parsing JSON for {region['name']}: {e}")

            except Exception as e:
                logger.error(f"Error scraping Liveuamap {region['name']}: {e}")

        browser.close()

    logger.info(f"Liveuamap scraper finished, extracted {len(all_markers)} unique markers.")
    return all_markers


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    res = fetch_liveuamap()
    print(json.dumps(res[:3], indent=2))
@@ -0,0 +1,90 @@
import logging
import json
import subprocess
import shutil
import time
from urllib.parse import urlparse

logger = logging.getLogger(__name__)

# Find bash for the curl fallback — Git bash's curl has the TLS features
# needed to pass CDN fingerprint checks (brotli, zstd, libpsl)
_BASH_PATH = shutil.which("bash") or "bash"

# Cache domains where requests fails — skip straight to curl for 5 minutes
_domain_fail_cache: dict[str, float] = {}
_DOMAIN_FAIL_TTL = 300  # 5 minutes


class _DummyResponse:
    """Minimal response object matching the requests.Response interface."""
    def __init__(self, status_code, text):
        self.status_code = status_code
        self.text = text
        self.content = text.encode('utf-8', errors='replace')

    def json(self):
        return json.loads(self.text)

    def raise_for_status(self):
        if self.status_code >= 400:
            raise Exception(f"HTTP {self.status_code}: {self.text[:100]}")


def fetch_with_curl(url, method="GET", json_data=None, timeout=15, headers=None):
    """Wrapper to bypass an aggressive local firewall that blocks Python but permits curl.

    Falls back to running curl through Git bash, which has the TLS features
    (brotli, zstd, libpsl) needed to pass CDN fingerprint checks that block
    both Python requests and the barebones Windows system curl.
    """
    default_headers = {
        "User-Agent": "ShadowBroker-OSINT/1.0 (live-risk-dashboard)",
    }
    if headers:
        default_headers.update(headers)

    domain = urlparse(url).netloc

    # Try plain requests first, unless this domain recently failed with it;
    # in that case skip straight to the curl fallback below
    recently_failed = domain in _domain_fail_cache and (time.time() - _domain_fail_cache[domain]) < _DOMAIN_FAIL_TTL
    if not recently_failed:
        try:
            import requests
            if method == "POST":
                res = requests.post(url, json=json_data, timeout=timeout, headers=default_headers)
            else:
                res = requests.get(url, timeout=timeout, headers=default_headers)
            res.raise_for_status()
            # Clear the failure cache on success
            _domain_fail_cache.pop(domain, None)
            return res
        except Exception as e:
            logger.warning(f"Python requests failed for {url} ({e}), falling back to bash curl...")
            _domain_fail_cache[domain] = time.time()

    # Build a curl command string for bash execution
    header_flags = " ".join(f'-H "{k}: {v}"' for k, v in default_headers.items())
    if method == "POST" and json_data:
        payload = json.dumps(json_data).replace('"', '\\"')
        curl_cmd = f'curl -s -w "\\n%{{http_code}}" {header_flags} -X POST -H "Content-Type: application/json" -d "{payload}" "{url}"'
    else:
        curl_cmd = f'curl -s -w "\\n%{{http_code}}" {header_flags} "{url}"'

    try:
        res = subprocess.run(
            [_BASH_PATH, "-c", curl_cmd],
            capture_output=True, text=True, timeout=timeout + 5
        )
        if res.returncode == 0 and res.stdout.strip():
            # Parse the HTTP status code from the -w output (last line)
            lines = res.stdout.rstrip().rsplit("\n", 1)
            body = lines[0] if len(lines) > 1 else res.stdout
            http_code = int(lines[-1]) if len(lines) > 1 and lines[-1].strip().isdigit() else 200
            return _DummyResponse(http_code, body)
        else:
            logger.error(f"bash curl fallback failed: exit={res.returncode} stderr={res.stderr[:200]}")
            return _DummyResponse(500, "")
    except Exception as curl_e:
        logger.error(f"bash curl fallback exception: {curl_e}")
        return _DummyResponse(500, "")
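
# --- Illustrative usage (not part of the original module) ----------------
# Either path returns an object exposing .status_code / .text / .json();
# httpbin.org is a public echo service used here only for the demo.
if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    r = fetch_with_curl("https://httpbin.org/get")
    print(r.status_code, r.json().get("url"))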
@@ -0,0 +1,177 @@
import requests
from bs4 import BeautifulSoup
import logging
import math
from cachetools import cached, TTLCache
import cloudscraper
import reverse_geocoder as rg

logger = logging.getLogger(__name__)

# Cache the top feeds for 5 minutes so we don't hammer Broadcastify
radio_cache = TTLCache(maxsize=1, ttl=300)


@cached(radio_cache)
def get_top_broadcastify_feeds():
    """
    Scrapes the Broadcastify Top 50 live audio feeds public dashboard.
    Returns a list of dictionaries containing feed metadata and direct stream URLs.
    """
    logger.info("Scraping Broadcastify Top Feeds (Cache Miss)")
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8',
        'Accept-Language': 'en-US,en;q=0.9',
    }

    try:
        res = requests.get("https://www.broadcastify.com/listen/top", headers=headers, timeout=10)
        if res.status_code != 200:
            logger.error(f"Broadcastify Scrape Failed: HTTP {res.status_code}")
            return []

        soup = BeautifulSoup(res.text, 'html.parser')

        table = soup.find('table', {'class': 'btable'})
        if not table:
            logger.error("Could not find feeds table on Broadcastify.")
            return []

        feeds = []
        rows = table.find_all('tr')[1:]  # Skip the header row

        for row in rows:
            cols = row.find_all('td')
            if len(cols) >= 5:
                # Column layout as consumed below: [Listeners, Location, Feed Name (link), Category, ...]
                listeners_str = cols[0].text.strip().replace(',', '')
                listeners = int(listeners_str) if listeners_str.isdigit() else 0

                link_tag = cols[2].find('a')
                if not link_tag:
                    continue

                href = link_tag.get('href', '')
                feed_id = href.split('/')[-1] if '/listen/feed/' in href else None

                if not feed_id:
                    continue

                location = cols[1].text.strip()
                name = cols[2].text.strip()
                category = cols[3].text.strip()

                feeds.append({
                    "id": feed_id,
                    "listeners": listeners,
                    "location": location,
                    "name": name,
                    "category": category,
                    "stream_url": f"https://broadcastify.cdnstream1.com/{feed_id}"
                })

        logger.info(f"Successfully scraped {len(feeds)} top feeds from Broadcastify.")
        return feeds

    except Exception as e:
        logger.error(f"Broadcastify Scrape Exception: {e}")
        return []


# Cache the OpenMHz systems directory so we don't have to fetch all 450+ every time
openmhz_systems_cache = TTLCache(maxsize=1, ttl=3600)


@cached(openmhz_systems_cache)
def get_openmhz_systems():
    """Fetches the full directory of OpenMHz systems."""
    logger.info("Scraping OpenMHZ Systems (Cache Miss)")
    scraper = cloudscraper.create_scraper(browser={'browser': 'chrome', 'platform': 'windows', 'desktop': True})

    try:
        res = scraper.get("https://api.openmhz.com/systems", timeout=15)
        if res.status_code == 200:
            data = res.json()
            # Return the list of systems
            return data.get('systems', []) if isinstance(data, dict) else []
        return []
    except Exception as e:
        logger.error(f"OpenMHZ Systems Scrape Exception: {e}")
        return []


# Cache specific city calls briefly (15-30s) to limit our polling rate
openmhz_calls_cache = TTLCache(maxsize=100, ttl=20)


@cached(openmhz_calls_cache)
def get_recent_openmhz_calls(sys_name: str):
    """Fetches the actual audio burst .m4a URLs for a specific system (e.g., 'wmata')."""
    logger.info(f"Fetching OpenMHZ calls for {sys_name} (Cache Miss)")
    scraper = cloudscraper.create_scraper(browser={'browser': 'chrome', 'platform': 'windows', 'desktop': True})

    try:
        url = f"https://api.openmhz.com/{sys_name}/calls"
        res = scraper.get(url, timeout=15)
        if res.status_code == 200:
            data = res.json()
            return data.get('calls', []) if isinstance(data, dict) else []
        return []
    except Exception as e:
        logger.error(f"OpenMHZ Calls Scrape Exception ({sys_name}): {e}")
        return []


US_STATES = {
    'Alabama': 'AL', 'Alaska': 'AK', 'Arizona': 'AZ', 'Arkansas': 'AR', 'California': 'CA',
    'Colorado': 'CO', 'Connecticut': 'CT', 'Delaware': 'DE', 'Florida': 'FL', 'Georgia': 'GA',
    'Hawaii': 'HI', 'Idaho': 'ID', 'Illinois': 'IL', 'Indiana': 'IN', 'Iowa': 'IA',
    'Kansas': 'KS', 'Kentucky': 'KY', 'Louisiana': 'LA', 'Maine': 'ME', 'Maryland': 'MD',
    'Massachusetts': 'MA', 'Michigan': 'MI', 'Minnesota': 'MN', 'Mississippi': 'MS',
    'Missouri': 'MO', 'Montana': 'MT', 'Nebraska': 'NE', 'Nevada': 'NV', 'New Hampshire': 'NH',
    'New Jersey': 'NJ', 'New Mexico': 'NM', 'New York': 'NY', 'North Carolina': 'NC',
    'North Dakota': 'ND', 'Ohio': 'OH', 'Oklahoma': 'OK', 'Oregon': 'OR', 'Pennsylvania': 'PA',
    'Rhode Island': 'RI', 'South Carolina': 'SC', 'South Dakota': 'SD', 'Tennessee': 'TN',
    'Texas': 'TX', 'Utah': 'UT', 'Vermont': 'VT', 'Virginia': 'VA', 'Washington': 'WA',
    'West Virginia': 'WV', 'Wisconsin': 'WI', 'Wyoming': 'WY', 'Washington, D.C.': 'DC', 'District of Columbia': 'DC'
}


def haversine_distance(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lng points, in miles."""
    R = 3958.8  # Earth radius in miles
    dLat = math.radians(lat2 - lat1)
    dLon = math.radians(lon2 - lon1)
    a = math.sin(dLat / 2) * math.sin(dLat / 2) + \
        math.cos(math.radians(lat1)) * math.cos(math.radians(lat2)) * \
        math.sin(dLon / 2) * math.sin(dLon / 2)
    c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
    return R * c
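
# --- Illustrative check (not part of the original module) ----------------
# Sanity check: JFK to LAX is roughly 2,470 miles great-circle.
def _demo_haversine():
    d = haversine_distance(40.64, -73.78, 33.94, -118.41)
    print(f"JFK -> LAX: {d:.0f} miles")  # ~2470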

def find_nearest_openmhz_systems_list(lat: float, lng: float, limit: int = 5):
    """
    Finds the strictly nearest OpenMHz systems by distance.
    """
    systems = get_openmhz_systems()
    if not systems:
        return []

    # Calculate the distance for all systems that provide coordinates
    valid_systems = []
    for s in systems:
        s_lat = s.get('lat')
        s_lng = s.get('lng')
        if s_lat is not None and s_lng is not None:
            dist = haversine_distance(lat, lng, float(s_lat), float(s_lng))
            s['distance_miles'] = dist
            valid_systems.append(s)

    if not valid_systems:
        return []

    # Sort strictly by distance
    valid_systems.sort(key=lambda x: x['distance_miles'])
    return valid_systems[:limit]


def find_nearest_openmhz_system(lat: float, lng: float):
    """
    Returns the single closest OpenMHz system by distance.
    """
    nearest = find_nearest_openmhz_systems_list(lat, lng, limit=1)
    if nearest:
        return nearest[0]
    return None
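
# --- Illustrative usage (not part of the original module) ----------------
# Network-dependent: find the OpenMHz system closest to Washington, D.C.
# The 'shortName'/'name' fields are assumptions about the directory payload;
# 'distance_miles' is set by find_nearest_openmhz_systems_list above.
if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    sys_info = find_nearest_openmhz_system(38.9, -77.04)
    if sys_info:
        print(sys_info.get('shortName', sys_info.get('name')), f"{sys_info['distance_miles']:.1f} mi")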
@@ -0,0 +1,202 @@
import logging
import concurrent.futures
from urllib.parse import quote
from cachetools import TTLCache
from services.network_utils import fetch_with_curl

logger = logging.getLogger(__name__)

# Cache dossier results for 24 hours — country data barely changes
# Key: rounded lat/lng grid (0.1 degree ≈ 11km)
dossier_cache = TTLCache(maxsize=500, ttl=86400)


def _reverse_geocode(lat: float, lng: float) -> dict:
    url = (
        f"https://nominatim.openstreetmap.org/reverse?"
        f"lat={lat}&lon={lng}&format=json&zoom=10&addressdetails=1&accept-language=en"
    )
    try:
        res = fetch_with_curl(url, timeout=10)
        if res.status_code == 200:
            data = res.json()
            addr = data.get("address", {})
            return {
                "city": addr.get("city") or addr.get("town") or addr.get("village") or addr.get("county") or "",
                "state": addr.get("state") or addr.get("region") or "",
                "country": addr.get("country") or "",
                "country_code": (addr.get("country_code") or "").upper(),
                "display_name": data.get("display_name", ""),
            }
    except Exception as e:
        logger.warning(f"Reverse geocode failed: {e}")
    return {}
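
# --- Illustrative check (not part of the original module) ----------------
# Network-dependent: Nominatim should resolve these coordinates to Paris, FR.
def _demo_reverse_geocode():
    geo = _reverse_geocode(48.85, 2.35)
    print(geo.get("city"), geo.get("country_code"))  # expected: "Paris FR"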

def _fetch_country_data(country_code: str) -> dict:
    if not country_code:
        return {}
    url = (
        f"https://restcountries.com/v3.1/alpha/{country_code}"
        f"?fields=name,population,capital,languages,region,subregion,area,currencies,borders,flag"
    )
    try:
        res = fetch_with_curl(url, timeout=10)
        if res.status_code == 200:
            return res.json()
    except Exception as e:
        logger.warning(f"RestCountries failed for {country_code}: {e}")
    return {}


def _fetch_wikidata_leader(country_name: str) -> dict:
    if not country_name:
        return {"leader": "Unknown", "government_type": "Unknown"}
    # SPARQL: get the head of state (P35) and form of government (P122) for a sovereign state
    safe_name = country_name.replace('"', '\\"').replace("'", "\\'")
    sparql = f"""
    SELECT ?leaderLabel ?govTypeLabel WHERE {{
      ?country wdt:P31 wd:Q6256 ;
               rdfs:label "{safe_name}"@en .
      OPTIONAL {{ ?country wdt:P35 ?leader . }}
      OPTIONAL {{ ?country wdt:P122 ?govType . }}
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
    }} LIMIT 1
    """
    url = f"https://query.wikidata.org/sparql?query={quote(sparql)}&format=json"
    try:
        res = fetch_with_curl(url, timeout=15)
        if res.status_code == 200:
            results = res.json().get("results", {}).get("bindings", [])
            if results:
                r = results[0]
                return {
                    "leader": r.get("leaderLabel", {}).get("value", "Unknown"),
                    "government_type": r.get("govTypeLabel", {}).get("value", "Unknown"),
                }
    except Exception as e:
        logger.warning(f"Wikidata SPARQL failed for {country_name}: {e}")
    return {"leader": "Unknown", "government_type": "Unknown"}


def _fetch_local_wiki_summary(place_name: str, country_name: str = "") -> dict:
    if not place_name:
        return {}
    # Try an exact match first, then with a country qualifier
    candidates = [place_name]
    if country_name:
        candidates.append(f"{place_name}, {country_name}")

    for name in candidates:
        slug = quote(name.replace(" ", "_"))
        url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{slug}"
        try:
            res = fetch_with_curl(url, timeout=10)
            if res.status_code == 200:
                data = res.json()
                if data.get("type") != "disambiguation":
                    return {
                        "description": data.get("description", ""),
                        "extract": data.get("extract", ""),
                        "thumbnail": data.get("thumbnail", {}).get("source", ""),
                    }
        except Exception:
            continue
    return {}


def get_region_dossier(lat: float, lng: float) -> dict:
    cache_key = f"{round(lat, 1)}_{round(lng, 1)}"
    if cache_key in dossier_cache:
        return dossier_cache[cache_key]

    # Step 1: Reverse geocode
    geo = _reverse_geocode(lat, lng)
    if not geo or not geo.get("country"):
        return {
            "coordinates": {"lat": lat, "lng": lng},
            "location": geo or {},
            "country": None,
            "local": None,
            "error": "No country data — possibly international waters or an uninhabited area",
        }

    country_code = geo.get("country_code", "")
    country_name = geo.get("country", "")
    city_name = geo.get("city", "")
    state_name = geo.get("state", "")

    # Step 2: Parallel fetch with timeouts to prevent hanging
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        country_fut = pool.submit(_fetch_country_data, country_code)
        leader_fut = pool.submit(_fetch_wikidata_leader, country_name)
        local_fut = pool.submit(_fetch_local_wiki_summary, city_name or state_name, country_name)
        # Also fetch a country-level Wikipedia summary as a fallback for local
        country_wiki_fut = pool.submit(_fetch_local_wiki_summary, country_name, "")

        try:
            country_data = country_fut.result(timeout=12)
        except Exception:
            logger.warning("Country data fetch timed out or failed")
            country_data = {}
        try:
            leader_data = leader_fut.result(timeout=12)
        except Exception:
            logger.warning("Leader data fetch timed out or failed")
            leader_data = {"leader": "Unknown", "government_type": "Unknown"}
        try:
            local_data = local_fut.result(timeout=12)
        except Exception:
            logger.warning("Local wiki fetch timed out or failed")
            local_data = {}
        try:
            country_wiki_data = country_wiki_fut.result(timeout=12)
        except Exception:
            country_wiki_data = {}

    # If there is no local data but we have a country wiki summary, use that
    if not local_data.get("extract") and country_wiki_data.get("extract"):
        local_data = country_wiki_data

    # Build the languages list
    languages = country_data.get("languages", {})
    lang_list = list(languages.values()) if isinstance(languages, dict) else []

    # Build the currencies list
    currencies = country_data.get("currencies", {})
    currency_list = []
    if isinstance(currencies, dict):
        for v in currencies.values():
            if isinstance(v, dict):
                symbol = v.get("symbol", "")
                name = v.get("name", "")
                currency_list.append(f"{name} ({symbol})" if symbol else name)

    result = {
        "coordinates": {"lat": lat, "lng": lng},
        "location": geo,
        "country": {
            "name": country_data.get("name", {}).get("common", country_name),
            "official_name": country_data.get("name", {}).get("official", ""),
            "leader": leader_data.get("leader", "Unknown"),
            "government_type": leader_data.get("government_type", "Unknown"),
            "population": country_data.get("population", 0),
            "capital": (country_data.get("capital") or ["Unknown"])[0] if isinstance(country_data.get("capital"), list) else "Unknown",
            "languages": lang_list,
            "currencies": currency_list,
            "region": country_data.get("region", ""),
|
||||
"subregion": country_data.get("subregion", ""),
|
||||
"area_km2": country_data.get("area", 0),
|
||||
"flag_emoji": country_data.get("flag", ""),
|
||||
},
|
||||
"local": {
|
||||
"name": city_name,
|
||||
"state": state_name,
|
||||
"description": local_data.get("description", ""),
|
||||
"summary": local_data.get("extract", ""),
|
||||
"thumbnail": local_data.get("thumbnail", ""),
|
||||
},
|
||||
}
|
||||
|
||||
dossier_cache[cache_key] = result
|
||||
return result
|
||||
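
As a quick illustration of the caching above: two lookups that fall in the same 0.1-degree grid cell share one dossier, so the second call never touches the upstream APIs (a hypothetical sketch, not part of the commit):

```python
# Points a few km apart round to the same grid key, so the second call is a cache hit.
print(f"{round(48.858, 1)}_{round(2.294, 1)}")  # 48.9_2.3
print(f"{round(48.861, 1)}_{round(2.336, 1)}")  # 48.9_2.3
d1 = get_region_dossier(48.858, 2.294)  # full fetch: Nominatim, RestCountries, Wikidata, Wikipedia
d2 = get_region_dossier(48.861, 2.336)  # served straight from dossier_cache
```
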
@@ -0,0 +1,17 @@
import sys
import logging

logging.basicConfig(level=logging.DEBUG)

# Add backend directory to sys.path so we can import modules
sys.path.append(r'f:\Codebase\Oracle\live-risk-dashboard\backend')

from services.data_fetcher import fetch_flights, latest_data

print("Testing fetch_flights...")
try:
    fetch_flights()
    print("Commercial flights count:", len(latest_data.get('commercial_flights', [])))
    print("Private jets count:", len(latest_data.get('private_jets', [])))
except Exception:
    import traceback
    traceback.print_exc()
@@ -0,0 +1,38 @@
from playwright.sync_api import sync_playwright


def scrape_liveuamap():
    print("Launching playwright...")
    with sync_playwright() as p:
        # A realistic user agent is important for headless browsing
        browser = p.chromium.launch(headless=True)
        page = browser.new_page(user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36")

        def handle_response(response):
            try:
                # Skip static assets; log everything else as a candidate API call
                if not response.url.endswith(('.js', '.css', '.png', '.jpg', '.woff2', '.svg', '.ico')):
                    print(f"Intercepted API Call: {response.url}")
            except Exception:
                pass

        page.on("response", handle_response)

        print("Navigating to liveuamap...")
        try:
            page.goto("https://liveuamap.com/", timeout=30000, wait_until="domcontentloaded")
            page.wait_for_timeout(5000)

            print("Grabbing all script tags...")
            scripts = page.evaluate("() => Array.from(document.querySelectorAll('script')).map(s => s.innerText)")
            for i, s in enumerate(scripts):
                if 'JSON.parse' in s or 'markers' in s or 'JSON' in s:
                    with open(f"script_{i}.txt", "w", encoding="utf-8") as f:
                        f.write(s)
        except Exception as e:
            print("Playwright timeout or error:", e)

        print("Closing browser...")
        browser.close()


if __name__ == "__main__":
    scrape_liveuamap()
@@ -0,0 +1,59 @@
import time

import cloudscraper


def scrape_openmhz_systems():
    print("Testing OpenMHZ undocumented API with Cloudscraper...")
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
    }

    scraper = cloudscraper.create_scraper(browser={'browser': 'chrome', 'platform': 'windows', 'desktop': True})

    try:
        # Step 1: Hit the public systems list that the front-end map uses
        res = scraper.get("https://api.openmhz.com/systems", headers=headers, timeout=15)
        json_data = res.json()
        systems = json_data.get('systems', []) if isinstance(json_data, dict) else []
        print(f"Successfully spoofed OpenMHZ frontend. Found {len(systems)} active police/fire systems.")

        if not systems:
            return

        # Inspect the first system (usually a major city)
        city = systems[0]
        sys_name = city.get('shortName')
        print(f"Targeting System: {city.get('name')} ({sys_name})")

        if not sys_name:
            return

        time.sleep(2)  # Ethical delay

        # Step 2: Query the recent calls for this specific system
        # The frontend queries: https://api.openmhz.com/<system_name>/calls
        calls_url = f"https://api.openmhz.com/{sys_name}/calls"
        print(f"Fetching recent bursts: {calls_url}")

        call_res = scraper.get(calls_url, headers=headers, timeout=15)

        if call_res.status_code == 200:
            call_json = call_res.json()
            calls = call_json.get('calls', []) if isinstance(call_json, dict) else []
            if calls:
                print(f"Intercepted {len(calls)} audio bursts.")
                latest = calls[0]
                print("LATEST INTERCEPT:")
                print(f"Talkgroup: {latest.get('talkgroupNum')}")
                print(f"Audio URL: {latest.get('url')}")
            else:
                print("No recent calls found for this system.")
        else:
            print(f"Failed to fetch calls. HTTP {call_res.status_code}")

    except Exception as e:
        print(f"Scrape Exception: {e}")


if __name__ == "__main__":
    scrape_openmhz_systems()
@@ -0,0 +1,19 @@
import requests


def test_openmhz():
    print("Testing OpenMHZ...")
    res = requests.get("https://api.openmhz.com/systems")
    if res.status_code == 200:
        data = res.json()
        # The endpoint returns {"systems": [...]}, not a bare list
        systems = data.get('systems', []) if isinstance(data, dict) else data
        print(f"OpenMHZ returned {len(systems)} systems.")
        if systems:
            print(f"Example: {systems[0]['name']} ({systems[0]['shortName']})")
    else:
        print(f"OpenMHZ Failed: {res.status_code}")


def test_scanner_radio():
    print("Testing Scanner Radio...")
    # The Gordon Edwards app often uses an endpoint like this;
    # we will just try a Broadcastify public page scrape as a secondary fallback
    pass


test_openmhz()
@@ -0,0 +1,55 @@
import feedparser
import requests
import re

feeds = {
    "NPR": "https://feeds.npr.org/1004/rss.xml",
    "BBC": "http://feeds.bbci.co.uk/news/world/rss.xml"
}

keyword_coords = {
    "venezuela": (7.119, -66.589), "brazil": (-14.235, -51.925), "argentina": (-38.416, -63.616),
    "colombia": (4.570, -74.297), "mexico": (23.634, -102.552), "united states": (38.907, -77.036),
    " usa ": (38.907, -77.036), " us ": (38.907, -77.036), "washington": (38.907, -77.036),
    "canada": (56.130, -106.346), "ukraine": (49.487, 31.272), "kyiv": (50.450, 30.523),
    "russia": (61.524, 105.318), "moscow": (55.755, 37.617), "israel": (31.046, 34.851),
    "gaza": (31.416, 34.333), "iran": (32.427, 53.688), "lebanon": (33.854, 35.862),
    "syria": (34.802, 38.996), "yemen": (15.552, 48.516), "china": (35.861, 104.195),
    "beijing": (39.904, 116.407), "taiwan": (23.697, 120.960), "north korea": (40.339, 127.510),
    "south korea": (35.907, 127.766), "pyongyang": (39.039, 125.762), "seoul": (37.566, 126.978),
    "japan": (36.204, 138.252), "afghanistan": (33.939, 67.709), "pakistan": (30.375, 69.345),
    "india": (20.593, 78.962), " uk ": (55.378, -3.435), "london": (51.507, -0.127),
    "france": (46.227, 2.213), "paris": (48.856, 2.352), "germany": (51.165, 10.451),
    "berlin": (52.520, 13.405), "sudan": (12.862, 30.217), "congo": (-4.038, 21.758),
    "south africa": (-30.559, 22.937), "nigeria": (9.082, 8.675), "egypt": (26.820, 30.802),
    "zimbabwe": (-19.015, 29.154), "australia": (-25.274, 133.775), "middle east": (31.500, 34.800),
    "europe": (48.800, 2.300), "africa": (0.000, 25.000), "america": (38.900, -77.000),
    "south america": (-14.200, -51.900), "asia": (34.000, 100.000),
    "california": (36.778, -119.417), "texas": (31.968, -99.901), "florida": (27.994, -81.760),
    "new york": (40.712, -74.006), "virginia": (37.431, -78.656),
    "british columbia": (53.726, -127.647), "ontario": (51.253, -85.323), "quebec": (52.939, -73.549),
    "delhi": (28.704, 77.102), "new delhi": (28.613, 77.209), "mumbai": (19.076, 72.877),
    "shanghai": (31.230, 121.473), "hong kong": (22.319, 114.169), "istanbul": (41.008, 28.978),
    "dubai": (25.204, 55.270), "singapore": (1.352, 103.819)
}

for name, url in feeds.items():
    r = requests.get(url)
    feed = feedparser.parse(r.text)
    for entry in feed.entries[:10]:
        title = entry.get('title', '')
        summary = entry.get('summary', '')
        text = (title + " " + summary).lower()
        padded_text = f" {text} "

        matched_kw = None
        for kw, coords in keyword_coords.items():
            if kw.startswith(" ") or kw.endswith(" "):
                # Space-padded keywords (e.g. " us ") need the padded text to match whole words
                if kw in padded_text:
                    matched_kw = kw
                    break
            else:
                if re.search(r'\b' + re.escape(kw) + r'\b', text):
                    matched_kw = kw
                    break
        print(f"[{name}] {title}\n  Matched: {matched_kw}\n  Text: {text}\n")
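
A quick note on why the loop treats space-padded keywords differently: short tokens such as " us " would fire inside words like "status" under a plain substring test, while longer keywords are safe behind a regex word boundary (a hypothetical snippet, not part of the commit):

```python
import re

text = "status report: us officials visit london"
padded = f" {text} "
print("us" in text)                           # True, but a bare substring test also fires on "status"
print(" us " in padded)                       # True only as a whole word
print(bool(re.search(r'\blondon\b', text)))   # word-boundary match used for longer keywords
```
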
@@ -0,0 +1,67 @@
import requests
from bs4 import BeautifulSoup
import json


def scrape_broadcastify_top():
    print("Scraping Broadcastify Top Feeds...")
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36'
    }

    try:
        # The top 50 feeds page provides a wealth of listening data
        res = requests.get("https://www.broadcastify.com/listen/top", headers=headers, timeout=10)
        if res.status_code != 200:
            print(f"Failed HTTP {res.status_code}")
            return []

        soup = BeautifulSoup(res.text, 'html.parser')

        # The table of feeds is in a standard class
        table = soup.find('table', {'class': 'btable'})
        if not table:
            print("Could not find feeds table.")
            return []

        feeds = []
        rows = table.find_all('tr')[1:]  # Skip header

        for row in rows:
            cols = row.find_all('td')
            if len(cols) >= 5:
                # Scraped layout: listeners in col 0, location in col 1, feed name/link in col 2
                listeners_str = cols[0].text.strip().replace(',', '')
                listeners = int(listeners_str) if listeners_str.isdigit() else 0

                # The link is usually in the Feed Name column
                link_tag = cols[2].find('a')
                if not link_tag:
                    continue

                href = link_tag.get('href', '')
                feed_id = href.split('/')[-1] if '/listen/feed/' in href else None

                if not feed_id:
                    continue

                location = cols[1].text.strip()
                name = cols[2].text.strip()

                feeds.append({
                    "id": feed_id,
                    "listeners": listeners,
                    "location": location,
                    "name": name,
                    "stream_url": f"https://broadcastify.cdnstream1.com/{feed_id}"
                })

        print(f"Successfully scraped {len(feeds)} top feeds.")
        return feeds

    except Exception as e:
        print(f"Scrape error: {e}")
        return []


if __name__ == "__main__":
    top_feeds = scrape_broadcastify_top()
    print(json.dumps(top_feeds[:3], indent=2))
File diff suppressed because one or more lines are too long
@@ -0,0 +1,10 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Error</title>
</head>
<body>
<pre>Cannot GET /api/history/public/latest</pre>
</body>
</html>
@@ -0,0 +1,59 @@
import requests
import time
import math
import random


def test_fetch_and_triangulate():
    t0 = time.time()
    url = "https://api.adsb.lol/v2/lat/39.8/lon/-98.5/dist/1000"
    try:
        r = requests.get(url, timeout=10)
        data = r.json()
        print(f"Downloaded in {time.time() - t0:.2f}s")
        sampled = []
        if "ac" in data:
            sampled = data["ac"]
            print("Flights:", len(sampled))
        else:
            print("No 'ac' in response:", data)

        # Load airports (mock for test)
        airports = [{"lat": random.uniform(-90, 90), "lng": random.uniform(-180, 180), "iata": f"A{i}"} for i in range(4000)]

        t1 = time.time()
        for f in sampled:
            lat = f.get("lat")
            lng = f.get("lon")
            heading = f.get("track", 0)
            if lat is None or lng is None:
                continue

            # Project 15 degrees (~1000 miles) backwards and forwards along the heading
            dist_deg = 15.0
            h_rad = math.radians(heading)
            dy = math.cos(h_rad) * dist_deg
            dx = math.sin(h_rad) * dist_deg
            cos_lat = max(0.2, math.cos(math.radians(lat)))

            origin_lat = lat - dy
            origin_lng = lng - (dx / cos_lat)

            dest_lat = lat + dy
            dest_lng = lng + (dx / cos_lat)

            # Find closest origin airport
            best_o, min_o = None, float('inf')
            for a in airports:
                d = (a['lat'] - origin_lat)**2 + (a['lng'] - origin_lng)**2
                if d < min_o:
                    min_o = d
                    best_o = a

            # Find closest dest airport
            best_d, min_d = None, float('inf')
            for a in airports:
                d = (a['lat'] - dest_lat)**2 + (a['lng'] - dest_lng)**2
                if d < min_d:
                    min_d = d
                    best_d = a

        print(f"Triangulated {len(sampled)} flights against {len(airports)} airports in {time.time() - t1:.2f}s")
    except Exception as e:
        print("Error:", e)


test_fetch_and_triangulate()
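
The back-projection above treats `track` as a bearing from true north: latitude moves by cos(heading) degrees, longitude by sin(heading) degrees divided by cos(latitude) to account for meridians converging toward the poles. A small sanity check of that geometry (hypothetical values, not part of the commit):

```python
import math

lat, lng, heading = 40.0, -100.0, 90.0   # flying due east
h = math.radians(heading)
dy, dx = math.cos(h) * 15.0, math.sin(h) * 15.0
cos_lat = max(0.2, math.cos(math.radians(lat)))
# Due east: no latitude change; longitude span stretched by 1/cos(lat)
print(round(dy, 6), round(dx / cos_lat, 3))  # 0.0 19.581
```
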
@@ -0,0 +1,13 @@
from services.data_fetcher import fetch_airports, fetch_flights, cached_airports, latest_data

fetch_airports()

# Raise the log level to DEBUG so we can see what happens inside fetch_flights
import logging
logging.basicConfig(level=logging.DEBUG)

# Run fetch_flights
fetch_flights()

flights = latest_data.get('flights', [])
print(f"Total flights: {len(flights)}")
@@ -0,0 +1,45 @@
import json
import subprocess
import os
import time

proxy_script = os.path.join(os.path.dirname(os.path.abspath(__file__)), "ais_proxy.js")
API_KEY = "75cc39af03c9cc23c90e8a7b3c3bc2b2a507c5fb"

print(f"Proxy script: {proxy_script}")
print(f"Exists: {os.path.exists(proxy_script)}")

process = subprocess.Popen(
    ['node', proxy_script, API_KEY],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,  # Separate stderr!
    text=True,
    bufsize=1
)

print("Process started, reading stdout...")
count = 0
start = time.time()
for line in iter(process.stdout.readline, ''):
    line = line.strip()
    if not line:
        continue
    try:
        data = json.loads(line)
        msg_type = data.get("MessageType", "?")
        mmsi = data.get("MetaData", {}).get("MMSI", 0)
        count += 1
        if count <= 5:
            print(f"  MSG {count}: type={msg_type} mmsi={mmsi}")
        if count == 20:
            elapsed = time.time() - start
            print(f"\nReceived {count} messages in {elapsed:.1f}s — proxy is working!")
            process.terminate()
            break
    except json.JSONDecodeError as e:
        print(f"  BAD JSON: {line[:100]}... err={e}")

if count == 0:
    # Check stderr
    stderr_out = process.stderr.read()
    print(f"Zero messages received. stderr: {stderr_out[:500]}")
@@ -0,0 +1,54 @@
import json
import subprocess
import os
import time
import sys
import threading

proxy_script = os.path.join(os.path.dirname(os.path.abspath(__file__)), "ais_proxy.js")
API_KEY = "75cc39af03c9cc23c90e8a7b3c3bc2b2a507c5fb"

print(f"Proxy script: {proxy_script}")

process = subprocess.Popen(
    ['node', proxy_script, API_KEY],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    text=True,
    bufsize=1
)


def read_stderr():
    for line in iter(process.stderr.readline, ''):
        print(f"[STDERR] {line.strip()}", file=sys.stderr)


t = threading.Thread(target=read_stderr, daemon=True)
t.start()

print("Process started, reading stdout for 15 seconds...")
count = 0
start = time.time()
while time.time() - start < 15:
    line = process.stdout.readline()
    if not line:
        if process.poll() is not None:
            print(f"Process exited with code {process.returncode}")
            break
        continue
    line = line.strip()
    if not line:
        continue
    try:
        data = json.loads(line)
        msg_type = data.get("MessageType", "?")
        mmsi = data.get("MetaData", {}).get("MMSI", 0)
        count += 1
        if count <= 5:
            print(f"  MSG {count}: type={msg_type} mmsi={mmsi}")
    except json.JSONDecodeError:
        print(f"  BAD LINE: {line[:80]}...")

elapsed = time.time() - start
print(f"\nTotal {count} messages in {elapsed:.1f}s")
process.terminate()
@@ -0,0 +1,13 @@
import requests
import traceback

try:
    print("Testing adsb.lol...")
    r = requests.get("https://api.adsb.lol/v2/lat/39.8/lon/-98.5/dist/100", timeout=15)
    print(f"Status: {r.status_code}")
    d = r.json()
    print(f"Aircraft: {len(d.get('ac', []))}")
except Exception as e:
    print(f"Error type: {type(e).__name__}")
    print(f"Error: {e}")
    traceback.print_exc()
@@ -0,0 +1,11 @@
import json
import urllib.request
import time

time.sleep(5)
try:
    data = urllib.request.urlopen('http://localhost:8000/api/live-data').read()
    d = json.loads(data)
    print(f"News: {len(d.get('news', []))} | Earthquakes: {len(d.get('earthquakes', []))} | Satellites: {len(d.get('satellites', []))} | CCTV: {len(d.get('cctv', []))}")
except Exception as e:
    print(f"Error fetching API: {e}")
@@ -0,0 +1,56 @@
import requests
import json

# Step 1: Fetch some real flights from adsb.lol
print("Fetching real flights from adsb.lol...")
r = requests.get("https://api.adsb.lol/v2/lat/39.8/lon/-98.5/dist/250", timeout=10)
data = r.json()
ac = data.get("ac", [])
print("Got", len(ac), "aircraft")

# Step 2: Build a batch of real callsigns
planes = []
for f in ac[:20]:  # Just 20 real flights
    cs = str(f.get("flight", "")).strip()
    lat = f.get("lat")
    lon = f.get("lon")
    if cs and lat and lon:
        planes.append({"callsign": cs, "lat": lat, "lng": lon})

print("Built batch of", len(planes), "planes")
print("Sample plane:", json.dumps(planes[0]) if planes else "NONE")

# Step 3: Test routeset with real data
if planes:
    payload = {"planes": planes}
    print("Payload size:", len(json.dumps(payload)), "bytes")
    r2 = requests.post("https://api.adsb.lol/api/0/routeset", json=payload, timeout=15)
    print("Routeset HTTP:", r2.status_code)
    if r2.status_code == 200:
        result = r2.json()
        print("Response type:", type(result).__name__)
        print("Routes found:", len(result) if isinstance(result, list) else "dict")
        if isinstance(result, list) and len(result) > 0:
            print("First route:", json.dumps(result[0], indent=2))
    else:
        print("Error body:", r2.text[:500])

# Step 4: Test with a bigger batch
print("\n--- Testing with 100 real flights ---")
planes100 = []
for f in ac[:120]:
    cs = str(f.get("flight", "")).strip()
    lat = f.get("lat")
    lon = f.get("lon")
    if cs and lat and lon:
        planes100.append({"callsign": cs, "lat": lat, "lng": lon})
planes100 = planes100[:100]

print("Built batch of", len(planes100), "planes")
r3 = requests.post("https://api.adsb.lol/api/0/routeset", json={"planes": planes100}, timeout=15)
print("Routeset HTTP:", r3.status_code)
if r3.status_code == 200:
    result3 = r3.json()
    print("Routes found:", len(result3) if isinstance(result3, list) else "dict")
else:
    print("Error body:", r3.text[:500])
@@ -0,0 +1,10 @@
from services.cctv_pipeline import init_db, TFLJamCamIngestor, LTASingaporeIngestor

init_db()
print("Initialized DB")

tfl = TFLJamCamIngestor()
print(f"TFL Cameras: {len(tfl.fetch_data())}")

sgp = LTASingaporeIngestor()
print(f"SGP Cameras: {len(sgp.fetch_data())}")
@@ -0,0 +1,24 @@
import requests

try:
    print('Testing Seattle SDOT...')
    r_sea = requests.get('https://data.seattle.gov/resource/65fc-btcc.json?$limit=5', headers={'X-App-Token': 'f2jdDBw5JMXPFOQyk64SKlPkn'})
    print(r_sea.status_code)
    try:
        print(r_sea.json()[0])
    except Exception:
        pass
except Exception:
    pass

try:
    print('Testing NYC 511...')
    r_nyc = requests.get('https://webcams.nyctmc.org/api/cameras', timeout=5)
    print(r_nyc.status_code)
    try:
        print(len(r_nyc.json()))
        print(r_nyc.json()[0])
    except Exception:
        pass
except Exception:
    pass
@@ -0,0 +1,10 @@
import json, urllib.request

data = json.loads(urllib.request.urlopen('http://localhost:8000/api/live-data').read())
print(f"Commercial flights: {len(data.get('commercial_flights', []))}")
print(f"Private flights: {len(data.get('private_flights', []))}")
print(f"Private jets: {len(data.get('private_jets', []))}")
print(f"Military flights: {len(data.get('military_flights', []))}")
print(f"Tracked flights: {len(data.get('tracked_flights', []))}")
print(f"Ships: {len(data.get('ships', []))}")
print(f"CCTV: {len(data.get('cctv', []))}")
@@ -0,0 +1,38 @@
import json
import urllib.request

try:
    data = json.loads(urllib.request.urlopen('http://localhost:8000/api/live-data').read())

    # Tracked flights
    tracked = data.get('tracked_flights', [])
    print(f"=== TRACKED FLIGHTS: {len(tracked)} ===")
    if tracked:
        colors = {}
        for t in tracked:
            c = t.get('alert_color', 'NONE')
            colors[c] = colors.get(c, 0) + 1
        print(f"  Colors: {colors}")
        print(f"  Sample: {json.dumps(tracked[0], indent=2)[:500]}")

    # Ships
    ships = data.get('ships', [])
    print(f"\n=== SHIPS: {len(ships)} ===")
    types = {}
    for s in ships:
        t = s.get('type', 'unknown')
        types[t] = types.get(t, 0) + 1
    print(f"  Types: {types}")
    if ships:
        print(f"  Sample: {json.dumps(ships[0], indent=2)[:300]}")

    # News
    news = data.get('news', [])
    print(f"\n=== NEWS: {len(news)} ===")

    # Earthquakes
    quakes = data.get('earthquakes', [])
    print(f"=== EARTHQUAKES: {len(quakes)} ===")

except Exception as e:
    print(f"Error: {e}")
@@ -0,0 +1,23 @@
import json
import urllib.request

try:
    data = json.loads(urllib.request.urlopen('http://localhost:8000/api/live-data').read())

    tracked = data.get('tracked_flights', [])
    colors = {}
    for t in tracked:
        c = t.get('alert_color', 'NONE')
        colors[c] = colors.get(c, 0) + 1
    print(f"TRACKED FLIGHTS: {len(tracked)} | Colors: {colors}")

    ships = data.get('ships', [])
    types = {}
    for s in ships:
        t = s.get('type', 'unknown')
        types[t] = types.get(t, 0) + 1
    print(f"SHIPS: {len(ships)} | Types: {types}")

    print(f"NEWS: {len(data.get('news', []))} | EARTHQUAKES: {len(data.get('earthquakes', []))} | CCTV: {len(data.get('cctv', []))}")
except Exception as e:
    print(f"Error: {e}")
@@ -0,0 +1,10 @@
import requests

url = "https://api.us.socrata.com/api/catalog/v1?domains=data.cityofnewyork.us&q=camera"
try:
    r = requests.get(url)
    res = r.json().get('results', [])
    for d in res:
        print(f"{d['resource']['id']} - {d['resource']['name']}")
except Exception as e:
    print(e)
@@ -0,0 +1,36 @@
import json, urllib.request

data = json.loads(urllib.request.urlopen('http://localhost:8000/api/live-data').read())

# Check trail data
comm = data.get('commercial_flights', [])
mil = data.get('military_flights', [])
tracked = data.get('tracked_flights', [])
pvt = data.get('private_flights', [])

# Count flights with non-empty trails
comm_trails = [f for f in comm if f.get('trail')]
mil_trails = [f for f in mil if f.get('trail')]
tracked_trails = [f for f in tracked if f.get('trail')]
pvt_trails = [f for f in pvt if f.get('trail')]

print(f"Commercial: {len(comm)} total, {len(comm_trails)} with trails")
print(f"Military: {len(mil)} total, {len(mil_trails)} with trails")
print(f"Tracked: {len(tracked)} total, {len(tracked_trails)} with trails")
print(f"Private: {len(pvt)} total, {len(pvt_trails)} with trails")

# Show a sample trail
if mil_trails:
    f = mil_trails[0]
    print(f"\nSample trail ({f['callsign']}):")
    print(f"  Points: {len(f['trail'])}")
    if f['trail']:
        print(f"  First: {f['trail'][0]}")
        print(f"  Last: {f['trail'][-1]}")

# Check for grounded planes
grounded = [f for f in comm if f.get('alt', 999) <= 500 and f.get('speed_knots', 999) < 30]
print(f"\nGrounded commercial: {len(grounded)}")
if grounded:
    g = grounded[0]
    print(f"  Example: {g['callsign']} alt={g.get('alt')} speed={g.get('speed_knots')}")
@@ -0,0 +1,13 @@
import sqlite3

try:
    conn = sqlite3.connect('cctv.db')
    conn.row_factory = sqlite3.Row
    cur = conn.cursor()
    cur.execute("SELECT source_agency, COUNT(*) as count FROM cameras WHERE id LIKE 'OSM-%' GROUP BY source_agency")
    rows = cur.fetchall()
    print('OSM Cameras by City:')
    for r in rows:
        print(f"{r['source_agency']}: {r['count']}")
except Exception as e:
    print('DB Error:', e)
@@ -0,0 +1,12 @@
import json
import urllib.request
import time

time.sleep(5)
try:
    data = urllib.request.urlopen('http://localhost:8000/api/live-data').read()
    d = json.loads(data)
    ships = d.get('ships', [])
    print(f"Ships: {len(ships)}")
except Exception as e:
    print(f"Error fetching API: {e}")
@@ -0,0 +1,13 @@
import requests

print("Searching Socrata NYC/Seattle Cameras...")
try:
    url = "https://api.us.socrata.com/api/catalog/v1?q=traffic cameras&limit=100"
    r = requests.get(url)
    res = r.json().get('results', [])
    for d in res:
        domain = d['metadata']['domain'].lower()
        if 'seattle' in domain or 'newyork' in domain or 'nyc' in domain:
            print(f"{d['resource']['id']} - {d['resource']['name']} ({domain})")
except Exception as e:
    print(e)
@@ -0,0 +1,61 @@
"""Test trace endpoints with explicit output."""
import json, os, subprocess

hex_code = "a34bac"  # DOJ166

from datetime import datetime, timezone
now = datetime.now(timezone.utc)
date_str = now.strftime("%Y/%m/%d")
hex_prefix = hex_code[-2:]

# Test 1: adsb.fi trace_full
url1 = f"https://globe.adsb.fi/data/traces/{date_str}/{hex_prefix}/trace_full_{hex_code}.json"
print(f"URL1: {url1}")
r = subprocess.run(["curl.exe", "-s", "--max-time", "10", url1], capture_output=True, text=True, timeout=15)
if r.stdout.strip().startswith("{"):
    data = json.loads(r.stdout)
    print(f"SUCCESS! Keys: {list(data.keys())}")
    if 'trace' in data:
        pts = data['trace']
        print(f"Trace points: {len(pts)}")
        if pts:
            print(f"FIRST (takeoff): {pts[0]}")
            print(f"LAST (now): {pts[-1]}")
else:
    print(f"Not JSON (first 100 chars): {r.stdout[:100]}")
    # That response was behind Cloudflare; try adsb.lol instead

# Test 2: adsb.lol hex lookup
url2 = f"https://api.adsb.lol/v2/hex/{hex_code}"
print(f"\nURL2: {url2}")
r2 = subprocess.run(["curl.exe", "-s", "--max-time", "10", url2], capture_output=True, text=True, timeout=15)
if r2.stdout.strip().startswith("{"):
    data = json.loads(r2.stdout)
    if 'ac' in data and data['ac']:
        ac = data['ac'][0]
        keys = sorted(ac.keys())
        print(f"All keys ({len(keys)}): {keys}")
else:
    print(f"Not JSON: {r2.stdout[:100]}")

# Test 3: Try adsb.lol trace (status code only; os.devnull works on Windows, unlike /dev/null)
url3 = f"https://api.adsb.lol/trace/{hex_code}"
print(f"\nURL3: {url3}")
r3 = subprocess.run(["curl.exe", "-s", "-o", os.devnull, "-w", "%{http_code}", "--max-time", "10", url3], capture_output=True, text=True, timeout=15)
print(f"HTTP status: {r3.stdout}")

# Test 4: Try globe.adsb.lol format
url4 = f"https://globe.adsb.lol/data/traces/{date_str}/{hex_prefix}/trace_full_{hex_code}.json"
print(f"\nURL4: {url4}")
r4 = subprocess.run(["curl.exe", "-s", "--max-time", "10", url4], capture_output=True, text=True, timeout=15)
if r4.stdout.strip().startswith("{"):
    data = json.loads(r4.stdout)
    print(f"SUCCESS! Keys: {list(data.keys())}")
    if 'trace' in data:
        pts = data['trace']
        print(f"Trace points: {len(pts)}")
        if pts:
            print(f"FIRST (takeoff): {pts[0]}")
            print(f"LAST (now): {pts[-1]}")
else:
    print(f"Response: {r4.stdout[:150]}")
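
For reference, the trace-store layout the script assumes: files are sharded by UTC date and by the last two characters of the ICAO hex code, so the first URL resolves like this (hypothetical date):

```python
hex_code = "a34bac"
date_str = "2025/01/15"  # hypothetical UTC date
print(f"https://globe.adsb.fi/data/traces/{date_str}/{hex_code[-2:]}/trace_full_{hex_code}.json")
# https://globe.adsb.fi/data/traces/2025/01/15/ac/trace_full_a34bac.json
```
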
@@ -0,0 +1,8 @@
import asyncio, websockets

async def main():
    try:
        async with websockets.connect('wss://stream.aisstream.io/v0/stream') as ws:
            print('Connected to AIS Stream!')
            # Note: aisstream.io expects a JSON subscription message (API key plus
            # bounding boxes) shortly after connect, or it closes the socket.
    except Exception as e:
        print(f"Error: {e}")

asyncio.run(main())
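
The bare connect above succeeds, but aisstream.io drops clients that never subscribe. A minimal sketch of a working subscription, per the public aisstream.io docs (the API key is a placeholder; the bounding box covers the whole globe):

```python
import asyncio, json, websockets

async def subscribe_and_read():
    async with websockets.connect('wss://stream.aisstream.io/v0/stream') as ws:
        # The server expects a subscription message shortly after connecting
        await ws.send(json.dumps({
            "APIKey": "YOUR_AISSTREAM_KEY",               # placeholder
            "BoundingBoxes": [[[-90, -180], [90, 180]]],  # whole globe
        }))
        async for raw in ws:
            print(json.loads(raw).get("MessageType"))
            break  # one message is enough to prove the stream works

asyncio.run(subscribe_and_read())
```
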
@@ -0,0 +1 @@
Bad Request
File diff suppressed because it is too large
@@ -0,0 +1,33 @@
version: '3.8'

services:
  backend:
    build:
      context: ./backend
    container_name: shadowbroker-backend
    ports:
      - "8000:8000"
    environment:
      - AISSTREAM_API_KEY=${AISSTREAM_API_KEY}
      - N2YO_API_KEY=${N2YO_API_KEY}
      - OPENSKY_USERNAME=${OPENSKY_USERNAME}
      - OPENSKY_PASSWORD=${OPENSKY_PASSWORD}
      - LTA_ACCOUNT_KEY=${LTA_ACCOUNT_KEY}
    volumes:
      - backend_data:/app/data
    restart: unless-stopped

  frontend:
    build:
      context: ./frontend
    container_name: shadowbroker-frontend
    ports:
      - "3000:3000"
    environment:
      - NEXT_PUBLIC_API_URL=http://localhost:8000
    depends_on:
      - backend
    restart: unless-stopped

volumes:
  backend_data:
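
The compose file expects its keys in the shell environment or an adjacent `.env` file. A minimal sketch of that `.env` (all values are placeholders, not real credentials):

```bash
# .env (placeholders only; supply real keys from each provider)
AISSTREAM_API_KEY=replace_me
N2YO_API_KEY=replace_me
OPENSKY_USERNAME=replace_me
OPENSKY_PASSWORD=replace_me
LTA_ACCOUNT_KEY=replace_me
```
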
@@ -0,0 +1,41 @@
# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.

# dependencies
/node_modules
/.pnp
.pnp.*
.yarn/*
!.yarn/patches
!.yarn/plugins
!.yarn/releases
!.yarn/versions

# testing
/coverage

# next.js
/.next/
/out/

# production
/build

# misc
.DS_Store
*.pem

# debug
npm-debug.log*
yarn-debug.log*
yarn-error.log*
.pnpm-debug.log*

# env files (can opt-in for committing if needed)
.env*

# vercel
.vercel

# typescript
*.tsbuildinfo
next-env.d.ts
@@ -0,0 +1,19 @@
FROM node:18-alpine

WORKDIR /app

# Install dependencies
COPY package*.json ./
RUN npm install

# Copy source code
COPY . .

# Expose port
EXPOSE 3000

# Disable Next.js telemetry
ENV NEXT_TELEMETRY_DISABLED=1

# Start development server
CMD ["npm", "run", "dev:frontend"]
@@ -0,0 +1,36 @@
|
||||
This is a [Next.js](https://nextjs.org) project bootstrapped with [`create-next-app`](https://nextjs.org/docs/app/api-reference/cli/create-next-app).
|
||||
|
||||
## Getting Started
|
||||
|
||||
First, run the development server:
|
||||
|
||||
```bash
|
||||
npm run dev
|
||||
# or
|
||||
yarn dev
|
||||
# or
|
||||
pnpm dev
|
||||
# or
|
||||
bun dev
|
||||
```
|
||||
|
||||
Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.
|
||||
|
||||
You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.
|
||||
|
||||
This project uses [`next/font`](https://nextjs.org/docs/app/building-your-application/optimizing/fonts) to automatically optimize and load [Geist](https://vercel.com/font), a new font family for Vercel.
|
||||
|
||||
## Learn More
|
||||
|
||||
To learn more about Next.js, take a look at the following resources:
|
||||
|
||||
- [Next.js Documentation](https://nextjs.org/docs) - learn about Next.js features and API.
|
||||
- [Learn Next.js](https://nextjs.org/learn) - an interactive Next.js tutorial.
|
||||
|
||||
You can check out [the Next.js GitHub repository](https://github.com/vercel/next.js) - your feedback and contributions are welcome!
|
||||
|
||||
## Deploy on Vercel
|
||||
|
||||
The easiest way to deploy your Next.js app is to use the [Vercel Platform](https://vercel.com/new?utm_medium=default-template&filter=next.js&utm_source=create-next-app&utm_campaign=create-next-app-readme) from the creators of Next.js.
|
||||
|
||||
Check out our [Next.js deployment documentation](https://nextjs.org/docs/app/building-your-application/deploying) for more details.
|
||||
@@ -0,0 +1,33 @@

> frontend@0.1.0 build
> next build

▲ Next.js 16.1.6 (Turbopack)

Creating an optimized production build ...

> Build error occurred
Error: Turbopack build failed with 1 errors:
./src/components/MapboxViewer.tsx:4:1
Module not found: Can't resolve 'react-map-gl'

  2 |
  3 | import React, { useRef, useEffect, useState, useMemo } from "react";
> 4 | import Map, { Source, Layer, Fog, Sky, useMap, MapRef } from "react-map-gl";
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  5 | import type { LayerProps } from "react-map-gl";
  6 | import "mapbox-gl/dist/mapbox-gl.css";
  7 | import * as satellite from "satellite.js";

Import trace:
  Client Component Browser:
    ./src/components/MapboxViewer.tsx [Client Component Browser]
    ./src/app/page.tsx [Client Component Browser]
    ./src/app/page.tsx [Server Component]

https://nextjs.org/docs/messages/module-not-found

    at <unknown> (./src/components/MapboxViewer.tsx:4:1)
    at <unknown> (https://nextjs.org/docs/messages/module-not-found)
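
The failure above is a missing dependency rather than a code bug: `package.json` later in this commit pins `react-map-gl` at `^8.1.0` alongside `mapbox-gl` and `maplibre-gl`, so the build was presumably repaired with something like `npm install react-map-gl mapbox-gl` (an assumption; the commit records no install step) together with the `transpilePackages` entry in `next.config.ts`.
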
Binary file not shown.
@@ -0,0 +1,18 @@
import { defineConfig, globalIgnores } from "eslint/config";
import nextVitals from "eslint-config-next/core-web-vitals";
import nextTs from "eslint-config-next/typescript";

const eslintConfig = defineConfig([
  ...nextVitals,
  ...nextTs,
  // Override default ignores of eslint-config-next.
  globalIgnores([
    // Default ignores of eslint-config-next:
    ".next/**",
    "out/**",
    "build/**",
    "next-env.d.ts",
  ]),
]);

export default eslintConfig;
@@ -0,0 +1,7 @@
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  transpilePackages: ['react-map-gl', 'mapbox-gl', 'maplibre-gl'],
};

export default nextConfig;
File diff suppressed because it is too large
@@ -0,0 +1,40 @@
{
  "name": "frontend",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "dev": "concurrently --names \"NEXT,API\" --prefix-colors \"cyan,yellow\" \"next dev\" \"cd ../backend && venv\\Scripts\\python.exe main.py\"",
    "dev:frontend": "next dev",
    "dev:backend": "cd ../backend && venv\\Scripts\\python.exe main.py",
    "build": "next build",
    "start": "next start",
    "lint": "eslint"
  },
  "dependencies": {
    "@types/leaflet": "^1.9.21",
    "@types/mapbox-gl": "^3.4.1",
    "framer-motion": "^12.34.3",
    "leaflet": "^1.9.4",
    "lucide-react": "^0.575.0",
    "mapbox-gl": "^3.19.0",
    "maplibre-gl": "^4.7.1",
    "next": "16.1.6",
    "react": "19.2.3",
    "react-dom": "19.2.3",
    "react-leaflet": "^5.0.0",
    "react-map-gl": "^8.1.0",
    "satellite.js": "^6.0.2"
  },
  "devDependencies": {
    "@tailwindcss/postcss": "^4",
    "@types/mapbox__point-geometry": "^1.0.87",
    "@types/node": "^20",
    "@types/react": "^19",
    "@types/react-dom": "^19",
    "concurrently": "^9.2.1",
    "eslint": "^9",
    "eslint-config-next": "16.1.6",
    "tailwindcss": "^4",
    "typescript": "^5"
  }
}
@@ -0,0 +1,7 @@
const config = {
  plugins: {
    "@tailwindcss/postcss": {},
  },
};

export default config;
@@ -0,0 +1,245 @@
|
||||
{
|
||||
"type": "FeatureCollection",
|
||||
"features": [
|
||||
{
|
||||
"type": "Feature",
|
||||
"properties": {
|
||||
"name": "TAT-14"
|
||||
},
|
||||
"geometry": {
|
||||
"type": "LineString",
|
||||
"coordinates": [
|
||||
[
|
||||
-74.0,
|
||||
40.7
|
||||
],
|
||||
[
|
||||
-30.0,
|
||||
45.0
|
||||
],
|
||||
[
|
||||
-5.0,
|
||||
50.0
|
||||
],
|
||||
[
|
||||
0.0,
|
||||
51.5
|
||||
]
|
||||
]
|
||||
}
|
||||
},
|
||||
{
|
||||
"type": "Feature",
|
||||
"properties": {
|
||||
"name": "Apollo"
|
||||
},
|
||||
"geometry": {
|
||||
"type": "LineString",
|
||||
"coordinates": [
|
||||
[
|
||||
-74.0,
|
||||
40.7
|
||||
],
|
||||
[
|
||||
-40.0,
|
||||
43.0
|
||||
],
|
||||
[
|
||||
-10.0,
|
||||
48.0
|
||||
],
|
||||
[
|
||||
-3.0,
|
||||
48.5
|
||||
]
|
||||
]
|
||||
}
|
||||
},
|
||||
{
|
||||
"type": "Feature",
|
||||
"properties": {
|
||||
"name": "FASTER"
|
||||
},
|
||||
"geometry": {
|
||||
"type": "LineString",
|
||||
"coordinates": [
|
||||
[
|
||||
140.0,
|
||||
35.0
|
||||
],
|
||||
[
|
||||
180.0,
|
||||
45.0
|
||||
],
|
||||
[
|
||||
-124.0,
|
||||
43.0
|
||||
]
|
||||
]
|
||||
}
|
||||
},
|
||||
{
|
||||
"type": "Feature",
|
||||
"properties": {
|
||||
"name": "SEA-ME-WE 3"
|
||||
},
|
||||
"geometry": {
|
||||
"type": "LineString",
|
||||
"coordinates": [
|
||||
[
|
||||
115.0,
|
||||
-32.0
|
||||
],
|
||||
[
|
||||
100.0,
|
||||
0.0
|
||||
],
|
||||
[
|
||||
80.0,
|
||||
5.0
|
||||
],
|
||||
[
|
||||
60.0,
|
||||
15.0
|
||||
],
|
||||
[
|
||||
40.0,
|
||||
12.0
|
||||
],
|
||||
[
|
||||
35.0,
|
||||
30.0
|
||||
],
|
||||
[
|
||||
15.0,
|
||||
35.0
|
||||
],
|
||||
[
|
||||
0.0,
|
||||
40.0
|
||||
],
|
||||
[
|
||||
-10.0,
|
||||
38.0
|
||||
],
|
||||
[
|
||||
-5.0,
|
||||
48.0
|
||||
]
|
||||
]
|
||||
}
|
||||
},
|
||||
{
|
||||
"type": "Feature",
|
||||
"properties": {
|
||||
"name": "SACS"
|
||||
},
|
||||
"geometry": {
|
||||
"type": "LineString",
|
||||
"coordinates": [
|
||||
[
|
||||
-38.5,
|
||||
-3.7
|
||||
],
|
||||
[
|
||||
-10.0,
|
||||
-5.0
|
||||
],
|
||||
[
|
||||
13.2,
|
||||
-8.8
|
||||
]
|
||||
]
|
||||
}
|
||||
},
|
||||
{
|
||||
"type": "Feature",
|
||||
"properties": {
|
||||
"name": "Marea"
|
||||
},
|
||||
"geometry": {
|
||||
"type": "LineString",
|
||||
"coordinates": [
|
||||
[
|
||||
-76.0,
|
||||
36.8
|
||||
],
|
||||
[
|
||||
-40.0,
|
||||
40.0
|
||||
],
|
||||
[
|
||||
-2.9,
|
||||
43.3
|
||||
]
|
||||
]
|
||||
}
|
||||
},
|
||||
{
|
||||
"type": "Feature",
|
||||
"properties": {
|
||||
"name": "Dunant"
|
||||
},
|
||||
"geometry": {
|
||||
"type": "LineString",
|
||||
"coordinates": [
|
||||
[
|
||||
-76.0,
|
||||
36.8
|
||||
],
|
||||
[
|
||||
-40.0,
|
||||
42.0
|
||||
],
|
||||
[
|
||||
-1.5,
|
||||
46.5
|
||||
]
|
||||
]
|
||||
}
|
||||
},
|
||||
{
|
||||
"type": "Feature",
|
||||
"properties": {
|
||||
"name": "AAE-1"
|
||||
},
|
||||
"geometry": {
|
||||
"type": "LineString",
|
||||
"coordinates": [
|
||||
[
|
||||
114.0,
|
||||
22.0
|
||||
],
|
||||
[
|
||||
100.0,
|
||||
10.0
|
||||
],
|
||||
[
|
||||
80.0,
|
||||
5.0
|
||||
],
|
||||
[
|
||||
60.0,
|
||||
20.0
|
||||
],
|
||||
[
|
||||
40.0,
|
||||
15.0
|
||||
],
|
||||
[
|
||||
30.0,
|
||||
30.0
|
||||
],
|
||||
[
|
||||
20.0,
|
||||
38.0
|
||||
],
|
||||
[
|
||||
10.0,
|
||||
40.0
|
||||
]
|
||||
]
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
@@ -0,0 +1 @@
<svg fill="none" viewBox="0 0 16 16" xmlns="http://www.w3.org/2000/svg"><path d="M14.5 13.5V5.41a1 1 0 0 0-.3-.7L9.8.29A1 1 0 0 0 9.08 0H1.5v13.5A2.5 2.5 0 0 0 4 16h8a2.5 2.5 0 0 0 2.5-2.5m-1.5 0v-7H8v-5H3v12a1 1 0 0 0 1 1h8a1 1 0 0 0 1-1M9.5 5V2.12L12.38 5zM5.13 5h-.62v1.25h2.12V5zm-.62 3h7.12v1.25H4.5zm.62 3h-.62v1.25h7.12V11z" clip-rule="evenodd" fill="#666" fill-rule="evenodd"/></svg>
@@ -0,0 +1,75 @@
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<meta charset="utf-8" />
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
||||
<title>OpenMapTiles Font Server - Open-source maps for self-hosting</title>
|
||||
|
||||
<meta name="author" content="MapTiler AG" />
|
||||
<meta name="description" content="Official font glyph server for the OpenMapTiles project. Providing open-source fonts for MapLibre and Mapbox GL styles." />
|
||||
<meta name="robots" content="index, follow">
|
||||
<link rel="canonical" href="https://openmaptiles.org/styles/">
|
||||
|
||||
<link href="https://openmaptiles.org/base.css" rel="stylesheet" />
|
||||
|
||||
<meta http-equiv="refresh" content="3; url=https://openmaptiles.org/styles/">
|
||||
|
||||
<style>
|
||||
.redirect-box {
|
||||
margin-top: 100px;
|
||||
background: #f8f9fa;
|
||||
border-radius: 8px;
|
||||
border: 1px solid #e9ecef;
|
||||
}
|
||||
.loader {
|
||||
border: 4px solid #f3f3f3;
|
||||
border-top: 4px solid #95BF73;
|
||||
border-radius: 50%;
|
||||
width: 30px;
|
||||
height: 30px;
|
||||
animation: spin 2s linear infinite;
|
||||
display: inline-block;
|
||||
vertical-align: middle;
|
||||
margin-right: 15px;
|
||||
}
|
||||
@keyframes spin { 0% { transform: rotate(0deg); } 100% { transform: rotate(360deg); } }
|
||||
</style>
|
||||
</head>
|
||||
|
||||
<body class="home">
|
||||
<div class="container">
|
||||
<div id="navbar-top" class="large">
|
||||
<div class="container">
|
||||
<a class="title" href="https://openmaptiles.org/">OpenMapTiles</a>
|
||||
<div class="nav">
|
||||
<a href="https://openmaptiles.org/about/">About</a>
|
||||
<a href="https://openmaptiles.org/docs/">Docs</a>
|
||||
<a href="https://openmaptiles.org/styles/">Styles</a>
|
||||
<a class="github" href="https://github.com/openmaptiles/fonts">e</a>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="container padt-10">
|
||||
<div class="row">
|
||||
<div class="col8 offset2 redirect-box pady-8 center">
|
||||
<h1 class="gray">OpenMapTiles Font Server</h1>
|
||||
<p class="padt-2">This is the backend server providing <b>vector glyphs</b> for open map styles.</p>
|
||||
<p class="pady-4">
|
||||
<span class="loader"></span>
|
||||
Redirecting to the <a href="https://openmaptiles.org/styles/">Map Styles</a> gallery...
|
||||
</p>
|
||||
<p class="small gray">If you are a developer, you can find the source code on <a href="https://github.com/openmaptiles/fonts">GitHub</a>.</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<script>
|
||||
// Immediate redirect fallback
|
||||
setTimeout(function() {
|
||||
window.location.href = "https://openmaptiles.org/styles/";
|
||||
}, 3000);
|
||||
</script>
|
||||
</body>
|
||||
</html>
|
||||
@@ -0,0 +1,75 @@
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<meta charset="utf-8" />
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
||||
<title>OpenMapTiles Font Server - Open-source maps for self-hosting</title>
|
||||
|
||||
<meta name="author" content="MapTiler AG" />
|
||||
<meta name="description" content="Official font glyph server for the OpenMapTiles project. Providing open-source fonts for MapLibre and Mapbox GL styles." />
|
||||
<meta name="robots" content="index, follow">
|
||||
<link rel="canonical" href="https://openmaptiles.org/styles/">
|
||||
|
||||
<link href="https://openmaptiles.org/base.css" rel="stylesheet" />
|
||||
|
||||
<meta http-equiv="refresh" content="3; url=https://openmaptiles.org/styles/">
|
||||
|
||||
<style>
|
||||
.redirect-box {
|
||||
margin-top: 100px;
|
||||
background: #f8f9fa;
|
||||
border-radius: 8px;
|
||||
border: 1px solid #e9ecef;
|
||||
}
|
||||
.loader {
|
||||
border: 4px solid #f3f3f3;
|
||||
border-top: 4px solid #95BF73;
|
||||
border-radius: 50%;
|
||||
width: 30px;
|
||||
height: 30px;
|
||||
animation: spin 2s linear infinite;
|
||||
display: inline-block;
|
||||
vertical-align: middle;
|
||||
margin-right: 15px;
|
||||
}
|
||||
@keyframes spin { 0% { transform: rotate(0deg); } 100% { transform: rotate(360deg); } }
|
||||
</style>
|
||||
</head>
|
||||
|
||||
<body class="home">
|
||||
<div class="container">
|
||||
<div id="navbar-top" class="large">
|
||||
<div class="container">
|
||||
<a class="title" href="https://openmaptiles.org/">OpenMapTiles</a>
|
||||
<div class="nav">
|
||||
<a href="https://openmaptiles.org/about/">About</a>
|
||||
<a href="https://openmaptiles.org/docs/">Docs</a>
|
||||
<a href="https://openmaptiles.org/styles/">Styles</a>
|
||||
<a class="github" href="https://github.com/openmaptiles/fonts">e</a>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="container padt-10">
|
||||
<div class="row">
|
||||
<div class="col8 offset2 redirect-box pady-8 center">
|
||||
<h1 class="gray">OpenMapTiles Font Server</h1>
|
||||
<p class="padt-2">This is the backend server providing <b>vector glyphs</b> for open map styles.</p>
|
||||
<p class="pady-4">
|
||||
<span class="loader"></span>
|
||||
Redirecting to the <a href="https://openmaptiles.org/styles/">Map Styles</a> gallery...
|
||||
</p>
|
||||
<p class="small gray">If you are a developer, you can find the source code on <a href="https://github.com/openmaptiles/fonts">GitHub</a>.</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<script>
|
||||
// Immediate redirect fallback
|
||||
setTimeout(function() {
|
||||
window.location.href = "https://openmaptiles.org/styles/";
|
||||
}, 3000);
|
||||
</script>
|
||||
</body>
|
||||
</html>
|
||||
@@ -0,0 +1,75 @@
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<meta charset="utf-8" />
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
||||
<title>OpenMapTiles Font Server - Open-source maps for self-hosting</title>
|
||||
|
||||
<meta name="author" content="MapTiler AG" />
|
||||
<meta name="description" content="Official font glyph server for the OpenMapTiles project. Providing open-source fonts for MapLibre and Mapbox GL styles." />
|
||||
<meta name="robots" content="index, follow">
|
||||
<link rel="canonical" href="https://openmaptiles.org/styles/">
|
||||
|
||||
<link href="https://openmaptiles.org/base.css" rel="stylesheet" />
|
||||
|
||||
<meta http-equiv="refresh" content="3; url=https://openmaptiles.org/styles/">
|
||||
|
||||
<style>
|
||||
.redirect-box {
|
||||
margin-top: 100px;
|
||||
background: #f8f9fa;
|
||||
border-radius: 8px;
|
||||
border: 1px solid #e9ecef;
|
||||
}
|
||||
.loader {
|
||||
border: 4px solid #f3f3f3;
|
||||
border-top: 4px solid #95BF73;
|
||||
border-radius: 50%;
|
||||
width: 30px;
|
||||
height: 30px;
|
||||
animation: spin 2s linear infinite;
|
||||
display: inline-block;
|
||||
vertical-align: middle;
|
||||
margin-right: 15px;
|
||||
}
|
||||
@keyframes spin { 0% { transform: rotate(0deg); } 100% { transform: rotate(360deg); } }
|
||||
</style>
|
||||
</head>
|
||||
|
||||
<body class="home">
|
||||
<div class="container">
|
||||
<div id="navbar-top" class="large">
|
||||
<div class="container">
|
||||
<a class="title" href="https://openmaptiles.org/">OpenMapTiles</a>
|
||||
<div class="nav">
|
||||
<a href="https://openmaptiles.org/about/">About</a>
|
||||
<a href="https://openmaptiles.org/docs/">Docs</a>
|
||||
<a href="https://openmaptiles.org/styles/">Styles</a>
|
||||
<a class="github" href="https://github.com/openmaptiles/fonts">e</a>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="container padt-10">
|
||||
<div class="row">
|
||||
<div class="col8 offset2 redirect-box pady-8 center">
|
||||
<h1 class="gray">OpenMapTiles Font Server</h1>
|
||||
<p class="padt-2">This is the backend server providing <b>vector glyphs</b> for open map styles.</p>
|
||||
<p class="pady-4">
|
||||
<span class="loader"></span>
|
||||
Redirecting to the <a href="https://openmaptiles.org/styles/">Map Styles</a> gallery...
|
||||
</p>
|
||||
<p class="small gray">If you are a developer, you can find the source code on <a href="https://github.com/openmaptiles/fonts">GitHub</a>.</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<script>
|
||||
// Immediate redirect fallback
|
||||
setTimeout(function() {
|
||||
window.location.href = "https://openmaptiles.org/styles/";
|
||||
}, 3000);
|
||||
</script>
|
||||
</body>
|
||||
</html>
|
||||
@@ -0,0 +1 @@
<svg fill="none" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16"><g clip-path="url(#a)"><path fill-rule="evenodd" clip-rule="evenodd" d="M10.27 14.1a6.5 6.5 0 0 0 3.67-3.45q-1.24.21-2.7.34-.31 1.83-.97 3.1M8 16A8 8 0 1 0 8 0a8 8 0 0 0 0 16m.48-1.52a7 7 0 0 1-.96 0H7.5a4 4 0 0 1-.84-1.32q-.38-.89-.63-2.08a40 40 0 0 0 3.92 0q-.25 1.2-.63 2.08a4 4 0 0 1-.84 1.31zm2.94-4.76q1.66-.15 2.95-.43a7 7 0 0 0 0-2.58q-1.3-.27-2.95-.43a18 18 0 0 1 0 3.44m-1.27-3.54a17 17 0 0 1 0 3.64 39 39 0 0 1-4.3 0 17 17 0 0 1 0-3.64 39 39 0 0 1 4.3 0m1.1-1.17q1.45.13 2.69.34a6.5 6.5 0 0 0-3.67-3.44q.65 1.26.98 3.1M8.48 1.5l.01.02q.41.37.84 1.31.38.89.63 2.08a40 40 0 0 0-3.92 0q.25-1.2.63-2.08a4 4 0 0 1 .85-1.32 7 7 0 0 1 .96 0m-2.75.4a6.5 6.5 0 0 0-3.67 3.44 29 29 0 0 1 2.7-.34q.31-1.83.97-3.1M4.58 6.28q-1.66.16-2.95.43a7 7 0 0 0 0 2.58q1.3.27 2.95.43a18 18 0 0 1 0-3.44m.17 4.71q-1.45-.12-2.69-.34a6.5 6.5 0 0 0 3.67 3.44q-.65-1.27-.98-3.1" fill="#666"/></g><defs><clipPath id="a"><path fill="#fff" d="M0 0h16v16H0z"/></clipPath></defs></svg>
@@ -0,0 +1,25 @@
{
  "version": 8,
  "glyphs": "/fonts/{fontstack}/{range}.pbf",
  "sources": {
    "carto-dark": {
      "type": "raster",
      "tiles": [
        "https://a.basemaps.cartocdn.com/dark_all/{z}/{x}/{y}@2x.png",
        "https://b.basemaps.cartocdn.com/dark_all/{z}/{x}/{y}@2x.png",
        "https://c.basemaps.cartocdn.com/dark_all/{z}/{x}/{y}@2x.png",
        "https://d.basemaps.cartocdn.com/dark_all/{z}/{x}/{y}@2x.png"
      ],
      "tileSize": 256
    }
  },
  "layers": [
    {
      "id": "carto-dark-layer",
      "type": "raster",
      "source": "carto-dark",
      "minzoom": 0,
      "maxzoom": 22
    }
  ]
}
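A short sketch of how this style document would be consumed: MapLibre GL takes the style by URL or as an inline object, renders the CARTO dark raster tiles, and expands the `glyphs` template itself whenever a text layer requests label fonts. The style path and container id below are assumptions for illustration, not values from this repo:

```ts
import maplibregl from "maplibre-gl";

// Minimal sketch: load the dark raster style above into a MapLibre map.
// "/styles/dark.json" and the "map" container id are hypothetical; the
// library resolves "glyphs": "/fonts/{fontstack}/{range}.pbf" against
// the page origin if a symbol layer ever needs label glyphs.
const map = new maplibregl.Map({
  container: "map",           // id of an existing <div> on the page
  style: "/styles/dark.json", // serves the JSON shown above
  center: [0, 20],            // [lng, lat]
  zoom: 2,
});
```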
@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 394 80"><path fill="#000" d="M262 0h68.5v12.7h-27.2v66.6h-13.6V12.7H262V0ZM149 0v12.7H94v20.4h44.3v12.6H94v21h55v12.6H80.5V0h68.7zm34.3 0h-17.8l63.8 79.4h17.9l-32-39.7 32-39.6h-17.9l-23 28.6-23-28.6zm18.3 56.7-9-11-27.1 33.7h17.8l18.3-22.7z"/><path fill="#000" d="M81 79.3 17 0H0v79.3h13.6V17l50.2 62.3H81Zm252.6-.4c-1 0-1.8-.4-2.5-1s-1.1-1.6-1.1-2.6.3-1.8 1-2.5 1.6-1 2.6-1 1.8.3 2.5 1a3.4 3.4 0 0 1 .6 4.3 3.7 3.7 0 0 1-3 1.8zm23.2-33.5h6v23.3c0 2.1-.4 4-1.3 5.5a9.1 9.1 0 0 1-3.8 3.5c-1.6.8-3.5 1.3-5.7 1.3-2 0-3.7-.4-5.3-1s-2.8-1.8-3.7-3.2c-.9-1.3-1.4-3-1.4-5h6c.1.8.3 1.6.7 2.2s1 1.2 1.6 1.5c.7.4 1.5.5 2.4.5 1 0 1.8-.2 2.4-.6a4 4 0 0 0 1.6-1.8c.3-.8.5-1.8.5-3V45.5zm30.9 9.1a4.4 4.4 0 0 0-2-3.3 7.5 7.5 0 0 0-4.3-1.1c-1.3 0-2.4.2-3.3.5-.9.4-1.6 1-2 1.6a3.5 3.5 0 0 0-.3 4c.3.5.7.9 1.3 1.2l1.8 1 2 .5 3.2.8c1.3.3 2.5.7 3.7 1.2a13 13 0 0 1 3.2 1.8 8.1 8.1 0 0 1 3 6.5c0 2-.5 3.7-1.5 5.1a10 10 0 0 1-4.4 3.5c-1.8.8-4.1 1.2-6.8 1.2-2.6 0-4.9-.4-6.8-1.2-2-.8-3.4-2-4.5-3.5a10 10 0 0 1-1.7-5.6h6a5 5 0 0 0 3.5 4.6c1 .4 2.2.6 3.4.6 1.3 0 2.5-.2 3.5-.6 1-.4 1.8-1 2.4-1.7a4 4 0 0 0 .8-2.4c0-.9-.2-1.6-.7-2.2a11 11 0 0 0-2.1-1.4l-3.2-1-3.8-1c-2.8-.7-5-1.7-6.6-3.2a7.2 7.2 0 0 1-2.4-5.7 8 8 0 0 1 1.7-5 10 10 0 0 1 4.3-3.5c2-.8 4-1.2 6.4-1.2 2.3 0 4.4.4 6.2 1.2 1.8.8 3.2 2 4.3 3.4 1 1.4 1.5 3 1.5 5h-5.8z"/></svg>
@@ -0,0 +1 @@
<svg fill="none" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1155 1000"><path d="m577.3 0 577.4 1000H0z" fill="#fff"/></svg>
@@ -0,0 +1 @@
<svg fill="none" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16"><path fill-rule="evenodd" clip-rule="evenodd" d="M1.5 2.5h13v10a1 1 0 0 1-1 1h-11a1 1 0 0 1-1-1zM0 1h16v11.5a2.5 2.5 0 0 1-2.5 2.5h-11A2.5 2.5 0 0 1 0 12.5zm3.75 4.5a.75.75 0 1 0 0-1.5.75.75 0 0 0 0 1.5M7 4.75a.75.75 0 1 1-1.5 0 .75.75 0 0 1 1.5 0m1.75.75a.75.75 0 1 0 0-1.5.75.75 0 0 0 0 1.5" fill="#666"/></svg>