- Single-run queues present one run at a time and let reviewers submit any rubric feedback you configure.
- Pairwise annotation queues (PAQs) present two runs side-by-side so reviewers can quickly decide which output is better (or if they are equivalent) against the rubric items you define.
Create an annotation queue
In the LangSmith UI, single-run queues can be created directly from the Annotation queues section. Pairwise queues must be created from the Datasets & Experiments pages, where you select the experiments to compare.
Create a single-run annotation queue
- Navigate to Annotation queues in the left navigation.
- Click + New annotation queue in the top-right corner.

Basic Details
- Fill in the Name and Description of the queue.
- Optionally assign a default dataset to streamline exporting reviewed runs into a dataset in your LangSmith workspace.
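If you prefer to script queue setup, the langsmith Python SDK provides a create_annotation_queue helper. The snippet below is a minimal sketch: the queue name and description are placeholders, and the available keyword arguments can differ between SDK versions, so confirm them against yours.

```python
from langsmith import Client

client = Client()  # reads LANGSMITH_API_KEY (and endpoint) from the environment

# Create a queue with a name and description, mirroring the Basic Details form.
# The name and description here are placeholders for your own queue.
queue = client.create_annotation_queue(
    name="Support bot triage",
    description="Review flagged support-bot responses against the rubric.",
)

print(queue.id)  # keep the queue ID handy for adding runs later
```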
Annotation Rubric
- Draft some high-level instructions for your annotators, which will be shown in the sidebar on every run.
- Click + Desired Feedback to add feedback keys to your annotation queue. Annotators will be presented with these feedback keys on each run.
- Add a description for each, as well as a short description of each category, if the feedback is categorical.
For example, the descriptions you configure here appear to reviewers as the Annotation Rubric in the right-hand pane of the UI.

Collaborator Settings (single-run)
When there are multiple annotators for a run:
- Number of reviewers per run: The number of reviewers that must mark a run as Done before it is removed from the queue. If you check All workspace members review each run, a run will remain in the queue until every workspace member has marked their review as Done.
- Reviewers cannot view the feedback left by other reviewers.
- Comments on runs are visible to all reviewers.
- Enable reservations on runs: When a reviewer views a run, the run is reserved for that reviewer for the specified Reservation length. If multiple reviewers per run are configured (as described above), the run can be reserved by several reviewers at the same time, up to the number of reviewers per run.
If a reviewer has viewed a run and then leaves the run without marking it Done, the reservation will expire after the specified Reservation length. The run is then released back into the queue and can be reserved by another reviewer.
Clicking Requeue on a run only moves that run to the end of the current user’s queue; it doesn’t affect the queue order for any other user. It also releases the reservation the current user holds on that run.
Create a pairwise annotation queue
Pairwise queues are designed for fast A/B comparisons between two experiments (often a baseline vs. a candidate model). You initiate them from the Datasets & Experiments pages:
- Navigate to Datasets & Experiments, open a dataset, and select exactly two experiments you want to compare.
- Click Annotate. In the popover, choose Add to Pairwise Annotation Queue. (The button is disabled until exactly two experiments are selected.)
- Decide whether to send the experiments to an existing pairwise queue or create a new one.
- Provide the queue details:
  - Basic details (name and description)
  - Instructions & rubrics tailored to pairwise scoring
  - Collaborator settings (reviewer count, reservations, reservation length)
- Submit the form to create the queue. LangSmith immediately pairs runs from the two experiments and populates the queue.

Pairwise queues differ from single-run queues in a few ways:
- Experiments: You must provide two experiment sessions up front. LangSmith automatically pairs their runs in chronological order and populates the queue during creation.
- Rubric: Pairwise rubric items only require a feedback key and (optionally) a description. Annotators decide whether Run A, Run B, or both are better for each rubric item.
- Dataset: Pairwise queues do not use a default dataset, because comparisons span two experiments.
- Reservations & reviewers: The same collaborator controls apply. Reservations help prevent two people from judging the same comparison simultaneously.
Assign runs to an annotation queue
Depending on your queue type, there are several ways to populate it with work items.
Single-run queues
- From a trace view: Click Add to Annotation Queue in the top-right corner of any trace view. You can add any intermediate run, but not the root span.

- From the runs table: Select multiple runs, then click Add to Annotation Queue at the bottom of the page. (A programmatic equivalent using the SDK is sketched after this list.)

- Automation rules: Set up a rule to automatically assign runs that match a filter (for example, errors or low user scores) into a queue.
- Datasets & experiments: Select one or more experiments within a dataset and click Annotate. Choose an existing queue or create a new one, then confirm the (single-run) queue option.
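As a scripted alternative to the bulk-add flows above, you can combine the SDK's list_runs and add_runs_to_annotation_queue helpers. This is a minimal sketch; the project name and queue ID are placeholders, and the exact signatures may vary by SDK version.

```python
from itertools import islice

from langsmith import Client

client = Client()

PROJECT_NAME = "my-chat-app"  # placeholder: your tracing project
QUEUE_ID = "00000000-0000-0000-0000-000000000000"  # placeholder: an existing queue's ID

# list_runs returns an iterator; take a batch of errored runs from the project.
errored_runs = islice(client.list_runs(project_name=PROJECT_NAME, error=True), 50)

# Append those runs to the existing single-run annotation queue.
client.add_runs_to_annotation_queue(QUEUE_ID, run_ids=[run.id for run in errored_runs])
```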

Pairwise annotation queues
- During creation: Selecting two experiments and creating a PAQ automatically pairs the runs. No additional “populate” step is required.
- Populate later: From Datasets & Experiments, select two experiments and choose Add to Pairwise Annotation Queue. You can add them to an existing queue so new comparisons are appended after the historical ones.
Consider routing runs that already have user feedback (e.g., thumbs-down) into a single-run queue for triage and a pairwise queue for head-to-head comparisons against a stronger baseline. This helps you identify regressions quickly. To learn more about how to capture user feedback from your LLM application, follow the guide on attaching user feedback.
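For reference, recording a thumbs-down with the SDK's create_feedback helper might look like the sketch below; the run ID and the user_score key are placeholders for whatever your application actually logs.

```python
from langsmith import Client

client = Client()

# Placeholder: capture the run ID from your tracing code when the run is logged.
run_id = "11111111-1111-1111-1111-111111111111"

# Record a thumbs-down as a numeric score so rules or scripts can filter on it
# later and route the run into an annotation queue for triage.
client.create_feedback(
    run_id,
    key="user_score",
    score=0.0,
    comment="User clicked thumbs-down",
)
```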
Review runs in an annotation queue
Review a single-run queue
- Navigate to the Annotation Queues section through the left-hand navigation bar.
- Click on the queue you want to review. This will take you to a focused, cyclical view of the runs in the queue that require review.
- You can attach a comment, attach a score for a particular feedback criterion, add the run to a dataset, or mark the run as reviewed. You can also remove the run from the queue for all users, regardless of any current reservations or queue settings, by clicking the Trash icon next to View run. (A sketch of attaching feedback to a queued run with the SDK follows this list.)
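Review normally happens in the UI, but if your SDK version exposes annotation-queue helpers such as get_run_from_annotation_queue, you can also spot-check queued items from code. Treat the following as a sketch under that assumption; the queue ID and feedback key are placeholders.

```python
from langsmith import Client

client = Client()

QUEUE_ID = "00000000-0000-0000-0000-000000000000"  # placeholder: your queue's ID

# Fetch the queue item at index 0 (assumes this helper exists in your SDK version).
queued_run = client.get_run_from_annotation_queue(QUEUE_ID, index=0)

# Attach rubric feedback to the run, just as a reviewer would in the UI.
client.create_feedback(
    queued_run.id,
    key="correctness",  # placeholder: use one of your rubric's feedback keys
    score=1.0,
    comment="Looks good",
)
```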

Review a pairwise annotation queue
- From Annotation queues, select the pairwise queue you want to review.
- Each queue item displays Run A on the left and Run B on the right, along with your rubric.
- For every rubric item:
  - Choose A is better, B is better, or Equal. The UI records binary feedback on both runs behind the scenes (see the sketch after this list for reading that feedback back with the SDK).
  - Use the hotkeys A, B, or E to lock in your choice.
  - Once you finish all rubric items, press Done (or Enter on the final rubric item) to advance to the next comparison.
- Optional actions:
  - Leave comments tied to either run.
  - Requeue the comparison if you need to revisit it later.
  - Open the full trace view for deeper debugging.
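Because those pairwise choices land as feedback on each run, you can read them back afterwards with the SDK's list_feedback helper. A minimal sketch, using placeholder run IDs for one A/B pair; the feedback keys you see will correspond to the rubric items you configured.

```python
from langsmith import Client

client = Client()

# Placeholder run IDs for the two sides of one comparison (Run A and Run B).
run_a_id = "aaaaaaaa-0000-0000-0000-000000000000"
run_b_id = "bbbbbbbb-0000-0000-0000-000000000000"

# Pairwise judgments are stored as feedback on each run; list and inspect them.
for feedback in client.list_feedback(run_ids=[run_a_id, run_b_id]):
    print(feedback.run_id, feedback.key, feedback.score)
```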

