Timing and data quality

This page explains the timing infrastructure and data-quality safeguards built into the participation pipeline. Understanding these mechanisms helps you design studies that produce clean, analyzable data and gives reviewers confidence in your methodology.

Reaction-time measurement

Every behavioral task that records a participant response measures reaction time (RT) using the browser's high-resolution performance clock. This clock provides sub-millisecond precision and is not affected by system clock adjustments, making it the recommended timing source for web-based experiments.

  • How it works: When a stimulus appears, the platform records the onset time. When the participant responds (key press, click, or touch), the platform records the response time and computes the difference. The result is stored in milliseconds as latency_ms alongside each trial response.
  • Reference timeline: At the start of each session, the platform establishes a reference point that anchors all subsequent timestamps. Every event (stimulus, response, marker, phase transition) is recorded relative to this reference, so you can reconstruct the exact sequence and spacing of events during analysis.
  • Dual timestamps: Each logged event carries both an ISO wall-clock timestamp (for human-readable dates) and a high-resolution value in milliseconds (for precise interval calculations). Use the high-resolution values when computing RTs; use the wall-clock timestamps for session identification and sorting.
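The dual-timestamp pattern above can be sketched in a few lines. This is an illustrative shape, not the platform's actual schema; only the `latency_ms` field name comes from the text, and `stampEvent` / `reactionTime` are hypothetical helpers:

```javascript
// Record both a wall-clock ISO timestamp and a high-resolution value
// for every event. The event shape is an illustrative assumption.
function stampEvent(type, highResNow = performance.now()) {
  return {
    type,
    iso: new Date().toISOString(), // human-readable, for session sorting
    highResMs: highResNow,         // monotonic, for interval arithmetic
  };
}

// Compute reaction time from the high-resolution values only -- never
// from the wall-clock timestamps, which can jump with clock adjustments.
function reactionTime(stimulusEvent, responseEvent) {
  return { latency_ms: responseEvent.highResMs - stimulusEvent.highResMs };
}
```

In a real task you would call `stampEvent('stimulus')` at onset and `stampEvent('response')` in the input handler, then derive `latency_ms` from the pair.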

What affects web-based timing precision

Web experiments run inside a browser, which introduces sources of variability that do not exist in laboratory software:

  • Display refresh rate — Stimuli appear at the next screen refresh after the draw call. On a 60 Hz monitor this adds up to ~16.7 ms of jitter; at 144 Hz the window shrinks to ~6.9 ms. The platform does not lock stimulus presentation to specific video frames.
  • Input device latency — Keyboard and mouse events pass through the operating system's input stack before reaching the browser. Typical added latency is 4–20 ms depending on hardware, OS, and whether the device uses USB polling or Bluetooth.
  • Browser event scheduling — JavaScript timers (used for fixation crosses, inter-stimulus intervals, and stimulus durations) rely on the browser's event loop. Under normal conditions the scheduling error is ≤4 ms; heavy page activity or background tabs can increase it.
  • Cross-browser differences — All modern browsers (Chrome, Firefox, Edge, Safari) implement the same high-resolution timer, but micro-level precision varies. Chrome-based browsers tend to report the most consistent sub-millisecond values.

Practical guidance: For paradigms where timing precision is critical (masked priming, attentional blink, backward masking), consider requiring participants to use a desktop or laptop with a wired keyboard and a Chrome-based browser. Use the device-gating and screen-metrics features described below to record the participant's setup.
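If you screen for a Chrome-based desktop browser as suggested above, one common (if inherently fragile) approach is user-agent inspection. The sketch below is a heuristic assumption, not a platform feature; note that Edge and Opera also carry `Chrome/` in their UA strings, so this matches all Chromium-based browsers:

```javascript
// Heuristic check for a Chromium-based desktop browser.
// User-agent sniffing is fragile and illustrative only.
function looksLikeChromiumDesktop(userAgent) {
  const ua = userAgent.toLowerCase();
  const isMobile = /mobile|android|iphone|ipad/.test(ua);
  const isChromium = ua.includes('chrome/'); // also matches Edge, Opera
  return isChromium && !isMobile;
}
```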

Session-level data quality

Device gating

The platform always enforces three device requirements before allowing participation. These checks ensure participants have the minimum hardware capabilities needed for behavioral research:

  • Camera support: A working camera must be available (required for video-recorded sessions).
  • Microphone support: A working microphone must be available (required for speech-based tasks and audio recording).
  • Minimum viewport width: The browser's inner width must be at least 768 pixels to ensure task stimuli display correctly.

These requirements are not configurable and apply to all studies. Participants who fail any check see a message explaining the requirement and cannot proceed until the issue is resolved.
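The gating logic amounts to three boolean checks. The sketch below mirrors them for illustration only; the platform runs these checks itself, and the input and output field names here are assumptions (in a browser, the inputs might come from `navigator.mediaDevices.enumerateDevices()` and `window.innerWidth`):

```javascript
const MIN_VIEWPORT_WIDTH = 768; // the documented minimum inner width

// Illustrative sketch of the three fixed device-gating checks.
function runDeviceGate({ hasCamera, hasMicrophone, viewportWidth }) {
  const failures = [];
  if (!hasCamera) failures.push('camera');
  if (!hasMicrophone) failures.push('microphone');
  if (viewportWidth < MIN_VIEWPORT_WIDTH) failures.push('viewport');
  return { passed: failures.length === 0, failures };
}
```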

Environmental metadata

Each participation session automatically records a snapshot of the participant's environment. This metadata is available in the exported session data and helps you assess whether environmental differences might explain outlier results.

Screen metrics

Captured at the start of each session and updated whenever the participant resizes the window or rotates their device:

  • Screen resolution — Physical screen width, height, available width, and available height in pixels.
  • Viewport size — The browser's inner dimensions (the area where stimuli are displayed).
  • Device pixel ratio — Indicates high-DPI ("Retina") displays. A ratio of 2 means the display renders two physical pixels per CSS pixel in each dimension (four times the nominal pixel count).
  • Screen orientation — Portrait or landscape, plus the rotation angle.
  • Color and pixel depth — The screen's color bit depth.
  • Browser and language — User agent string and browser language setting.
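A snapshot covering the fields above can be sketched as follows. The function takes a window-like object rather than reading globals, so the shape is easy to inspect outside a browser; the actual platform's field names may differ:

```javascript
// Illustrative screen-metrics snapshot. In a browser you would call
// snapshotScreenMetrics(window), and re-run it on 'resize' and
// orientation-change events to capture mid-session changes.
function snapshotScreenMetrics(win) {
  return {
    screen: {
      width: win.screen.width,
      height: win.screen.height,
      availWidth: win.screen.availWidth,
      availHeight: win.screen.availHeight,
      colorDepth: win.screen.colorDepth,
      pixelDepth: win.screen.pixelDepth,
      orientation: win.screen.orientation && win.screen.orientation.type,
    },
    viewport: { width: win.innerWidth, height: win.innerHeight },
    devicePixelRatio: win.devicePixelRatio,
    userAgent: win.navigator.userAgent,
    language: win.navigator.language,
  };
}
```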

Interaction telemetry

When enabled for a study, the platform records a continuous stream of participant interactions throughout the session:

  • Keyboard events — Each key press is logged with a high-resolution timestamp (individual characters are anonymized to "char" to avoid capturing typed passwords or personal information).
  • Pointer events — Mouse clicks, taps, and cursor movements are recorded with viewport-relative coordinates (as percentages, not absolute pixels).
  • Scroll events — Scroll position changes.
  • Focus and visibility — When the participant switches away from the browser tab or returns to it. This is particularly useful for detecting participants who multitask during a session.

All telemetry events carry the same high-resolution timestamps as task data, so you can align interaction patterns with stimulus and response events in your analysis.
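Two of the transforms described above — anonymizing key characters and storing pointer positions as viewport-relative percentages — can be sketched as pure functions. The event shapes are illustrative assumptions, and the treatment of named keys (e.g. "Enter") is a guess for the sketch, not documented platform behavior:

```javascript
// Reduce every single printable character to the literal string "char"
// so typed passwords and personal text are never captured.
function anonymizeKeyEvent(key, highResMs) {
  const logged = key.length === 1 ? 'char' : key; // assumption: named keys kept
  return { type: 'key', key: logged, highResMs };
}

// Store pointer positions as percentages of the viewport, so recordings
// are comparable across different screen and window sizes.
function normalizePointerEvent(clientX, clientY, viewportW, viewportH, highResMs) {
  return {
    type: 'pointer',
    xPct: (clientX / viewportW) * 100,
    yPct: (clientY / viewportH) * 100,
    highResMs,
  };
}
```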

Media timeline

For sessions with camera or screen-share recording, the platform records start and end timestamps for each media stream. This lets you synchronize the video recording with behavioral markers, responses, and task transitions.
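Because media start times and behavioral events share the same high-resolution timeline, alignment is a subtraction. A minimal sketch, with assumed field names:

```javascript
// Convert behavioral events to offsets within a recorded media stream:
// an event's position in the video is its high-res timestamp minus the
// stream's high-res start timestamp. Events before the stream started
// are dropped.
function toMediaOffsets(mediaStartMs, events) {
  return events
    .filter((e) => e.highResMs >= mediaStartMs)
    .map((e) => ({ ...e, mediaOffsetMs: e.highResMs - mediaStartMs }));
}
```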

Quality summary in exported data

When you export a participation, the quality summary includes:

  • Device gating results: What the participant's device supported and whether they passed the required checks.
  • Screen metrics: The initial snapshot and any changes during the session.
  • Interaction telemetry: The full event stream with timestamps relative to the session reference point.

Best practices

  • Enable interaction telemetry for exploratory studies where you want to understand participant behavior beyond task responses (e.g., hesitation patterns, tab-switching frequency).
  • Report timing methodology in your manuscript. State that reaction times were measured using the browser's high-resolution performance clock. Note that all participants met the platform's device requirements (camera, microphone, minimum 768px viewport width). If you applied additional exclusion criteria during analysis, describe them clearly.