SearchLabs.uk

_HTML ELEMENT POSITIONING INJECTION TEST_

This experiment investigates how browsing assistants and Large Language Models (LLMs) are influenced by "prompt-like" instructions embedded within a webpage, specifically focusing on the **placement** of these instructions within the HTML structure.

The core objective is to determine whether the location of a hidden directive impacts an AI's adherence to it, even when a visible "ground truth" fact is present.

The Test Design

Each controlled web page in this test contains two key pieces of information:

By comparing AI responses across these variants, we aim to understand if, and where, these hidden instructions take effect, overriding the visible fact.

Test Variants (A–T)

We utilize sixteen unique web page variants, each embedding the override instruction in a specific HTML element. A unique variant identifier is used in the override payload.

Expected Outcomes

Experiment Process

The experiment involves hosting these variant files and using specific prompts in AI browsing platforms (e.g., ChatGPT) to query for the page content. The AI's responses are then logged and analyzed for:

This allows us to benchmark the AI's susceptibility to HTML element positioning injection and its ability to prioritize visible information over hidden directives.