Raw sensor fusion data for perception model training // NL · DE · BE · FR · AT · CH
RT-Fusion delivers raw sensor fusion data — RGB video, 200Hz IMU, and human gaze — for robotics, AVs, and foundation models. We capture the specific adversarial edge cases your simulators cannot render: glare, rain, and unpredictable pedestrian behavior.
Operational Capacity: 4h+ continuous World-View (GoPro 5.3K) combined with Event-Triggered Intent-View (Ray-Ban Meta) for high-entropy interactions.
Resolution
5.3K
Telemetry
200Hz IMU
Scenario
Adversarial Weather
Whether you rely on LiDAR, Radar, or pure Vision, your model needs real-world context and observable intent. RT-Fusion deploys on-demand to acquire the specific European failure modes your simulators cannot render: sensor degradation in rain, glare-induced false negatives, and unpredictable VRU behavior at uncontrolled crossings.
Chest-mounted GoPro (world-view) + head-worn Ray-Ban Meta (gaze-view), running in parallel with audio-synced timestamps.
HARDWARE: GOPRO HERO 13 (CUSTOM ACQUISITION RIG)
Captures the "World Model." High dynamic range handles the "Tunnel Exit" blinding light problem. Rolling shutter stress-tests VIO pipelines against vibration artifacts.
HARDWARE: RAY-BAN META GEN 2
Captures the "Agent Model." Solves the High-Density VRU problem by recording the eye-contact negotiation and intent signaling that LiDAR cannot see.
Reference captures demonstrating acquisition methodology and output quality. Raw sensor output. No stabilization. No grading. Pure entropy.
RT-Fusion delivers structured, time-synchronized assets. Every frame is mapped to IMU telemetry and operator head-pose, enabling direct ingestion into standard machine learning and robotics pipelines.
{
  "timestamp_utc": "2026-02-11T09:14:22.045Z",
  "frame_id": 4920,
  "environment": {
    "location": "NL_Amsterdam_Canal_District",
    "weather": "overcast_diffuse",
    "surface": "asphalt_bike_lane"
  },
  "telemetry": {
    "imu_accel_x_y_z": [0.02, -0.81, 0.15],
    "speed_mps": 5.8
  },
  "sensors": {
    "world_cam_file": "GH010492.MP4",
    "attention_cam_file": "RM010492.MP4",
    "head_pose_proxy": true
  }
}
All assets delivered as time-stamped MP4 + GPMF telemetry, directly ingestible via ROS 2 bag conversion or PyTorch DataLoader.
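As a minimal sketch of that ingestion path (field names taken from the sample record above; `load_scene` is a hypothetical helper, not part of any RT-Fusion SDK), each per-scene JSON record can be normalized into the flat sample a DataLoader would consume:

```python
import json

# Example per-scene record matching the schema shown above.
scene_json = """{
  "timestamp_utc": "2026-02-11T09:14:22.045Z",
  "frame_id": 4920,
  "telemetry": {"imu_accel_x_y_z": [0.02, -0.81, 0.15], "speed_mps": 5.8},
  "sensors": {"world_cam_file": "GH010492.MP4", "attention_cam_file": "RM010492.MP4"}
}"""

def load_scene(raw: str) -> dict:
    """Parse one per-scene metadata record and pull out the fields a
    training pipeline typically needs: frame id, accel vector, file names."""
    meta = json.loads(raw)
    ax, ay, az = meta["telemetry"]["imu_accel_x_y_z"]
    return {
        "frame_id": meta["frame_id"],
        "accel": (ax, ay, az),
        "world_cam": meta["sensors"]["world_cam_file"],
        "gaze_cam": meta["sensors"]["attention_cam_file"],
    }

sample = load_scene(scene_json)
```

From here, a `torch.utils.data.Dataset` would open `sample["world_cam"]` and `sample["gaze_cam"]` per item; the normalization step itself needs no custom tooling.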
Optimized For Standard Engineering Pipelines
CREDENTIALS // METHODOLOGY
ARTY ZUEV
10+ years in professional media production — camera systems, color science, lighting, and post-production — across commercial, documentary, and marketing projects in the EU. When the industry shifted from language models to real-world perception, I identified a critical gap: companies building autonomous systems in Europe had no dedicated, on-demand source for the specific adversarial edge cases that break production stacks. RT-Fusion was built to close that gap — applying professional acquisition methodology to capture the high-entropy sensor data that simulators cannot render and US-centric datasets do not contain.
FROM BRIEF TO PIPELINE-READY DATASET
You specify target failure modes, locations, and environmental conditions. Campaign scoped per acquisition day.
Dual-sensor rig deploys to target location. GoPro 5.3K World-View + Ray-Ban Meta Gaze-View running in parallel. 4h+ continuous acquisition.
Time-stamped MP4 + GPMF telemetry, paired with JSON metadata per scene. All clips indexed by failure mode category and sensor config.
Convert directly to ROS 2 bag via rosbag2, or load into a PyTorch DataLoader. GPMF telemetry parsed with gopro2gpx. Zero custom tooling required.
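Because the 200Hz IMU stream runs far faster than the video frame rate, ingestion typically pairs each video frame with its nearest IMU sample. A minimal sketch, assuming both streams sit on the shared audio-synced timebase described above (`nearest_imu_index` is a hypothetical helper, not a gopro2gpx or rosbag2 API):

```python
from bisect import bisect_left

def nearest_imu_index(imu_ts: list[float], frame_t: float) -> int:
    """Return the index of the IMU sample closest in time to a video
    frame timestamp (both in seconds on the shared clock)."""
    i = bisect_left(imu_ts, frame_t)
    if i == 0:
        return 0
    if i == len(imu_ts):
        return len(imu_ts) - 1
    # Pick whichever neighbor is closer to the frame timestamp.
    return i if imu_ts[i] - frame_t < frame_t - imu_ts[i - 1] else i - 1

# 10 s of 200Hz IMU samples vs. 10 s of ~30fps video frames.
imu_ts = [k / 200.0 for k in range(2000)]
frame_ts = [k / 30.0 for k in range(300)]
aligned = [nearest_imu_index(imu_ts, t) for t in frame_ts]
```

The same nearest-neighbor pairing applies whether the telemetry comes out of a ROS 2 bag or a parsed GPMF track; only the timestamp source changes.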
/// DIRECT ENGINEERING FEED
Direct line to Engineering. No sales agents.
Prefer async? [email protected]
— or submit a full brief below:
ENCRYPTION: PGP-4096 // CONNECTION: SECURE