Fast feedback from Robot Framework in GitHub Actions
by Jani Mikkonen
One thing that keeps bothering me in Robot Framework pipelines is how much friction there still is around test results. A job fails, and the useful details are trapped inside log.html, report.html, or output.xml. The data is there, but getting to it usually means downloading artifacts locally and opening files outside GitHub.
That works, but it is slow.
Most of the time I do not need the full HTML report right away. I just need fast feedback:
- how many tests passed
- what failed
- what got skipped
- what warning messages were logged
- which environment the run targeted
That is the problem robotframework-ghareports is meant to solve.
It generates a Markdown summary from Robot Framework results and writes it straight into the GitHub Actions job summary. If needed, the same content can also be written to a standalone Markdown file or posted as an updatable pull request comment.
Why this is useful
GitHub Actions already gives us a good place for short-form reporting through GITHUB_STEP_SUMMARY. Instead of making people hunt for artifacts, we can put the important parts directly in the workflow UI.
For Robot Framework runs, that means you can expose:
- total passed, failed, skipped, total count, pass rate, and duration
- failing tests and their failure messages
- skipped tests and skip reasons
- warnings emitted during execution
- selected environment variables such as browser, target environment, or rerun mode
The end result is not a replacement for the standard Robot Framework HTML outputs. It is a fast first view. When something breaks, the summary tells you where to look before you download anything.
What the report looks like
The generated summary is a Markdown report with collapsible sections for passed, failed, and skipped tests. A typical report starts with a totals table and then expands into detailed sections only when needed.
That makes it a good fit for GitHub Actions where you want signal first and detail second.
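To give a feel for the shape, here is an illustrative sketch only: the exact headings, columns, and numbers come from ghareports and may differ from this mockup:

```markdown
## Robot Framework Results

| Passed | Failed | Skipped | Total | Pass rate | Duration |
|--------|--------|---------|-------|-----------|----------|
| 41     | 2      | 1       | 44    | 93.2%     | 4m 12s   |

<details>
<summary>Failed tests (2)</summary>

| Test | Message |
|------|---------|
| Login With Invalid Token | Element not found: ... |
</details>
```

The signal-first layout is exactly this: totals up top, details folded away until you need them.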
Install it
The package is available on PyPI:
python -m pip install robotframework-ghareports
# or:
uv pip install robotframework-ghareports
# or:
uv tool install robotframework-ghareports
That installs the ghareports CLI entry point. If you prefer ephemeral execution instead of installing it first, uvx robotframework-ghareports --help works too.
Use it as a listener
If you run Robot Framework in a normal single-process workflow, the simplest option is to use ghareports as a listener:
- name: Run Robot Framework tests
  run: |
    robot \
      --listener GHAReports \
      tests/
When the workflow is running inside GitHub Actions, ghareports detects GITHUB_STEP_SUMMARY automatically and writes the report there.
This is the lowest-friction setup because there is no post-processing step at all. Run the tests, and the job summary is ready.
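This is not ghareports itself, just the underlying mechanism it relies on: GitHub Actions points the GITHUB_STEP_SUMMARY variable at a file, and anything appended to that file is rendered as the job summary. You can simulate it locally:

```shell
# Simulate what GitHub Actions does: point GITHUB_STEP_SUMMARY at a file.
export GITHUB_STEP_SUMMARY="$(mktemp)"

# Anything a step appends to this file becomes part of the job summary.
printf '## Robot Framework results\n' >> "$GITHUB_STEP_SUMMARY"

# Inspect what the summary would contain.
cat "$GITHUB_STEP_SUMMARY"
```

When the variable is unset, there is no summary target, which is why the same report can also be directed to a plain Markdown file.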
You can also include environment variables so the summary shows what the job actually executed:
jobs:
  test:
    runs-on: ubuntu-latest
    env:
      ENVIRONMENT: staging
      BROWSER: chromium
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install dependencies
        run: |
          python -m pip install robotframework robotframework-ghareports
          # or:
          uv pip install robotframework robotframework-ghareports
      - name: Run tests
        run: |
          robot \
            --listener GHAReports:env_variables=ENVIRONMENT,BROWSER \
            tests/
That small addition makes summaries much more useful when the same workflow runs across multiple targets.
Use it after the run
Listener mode is convenient, but not every pipeline is structured that way. Sometimes you want to process the final output.xml afterward, especially when the execution and reporting phases are split into separate steps.
That is where the CLI comes in:
- name: Run Robot Framework tests
  run: robot tests/

- name: Publish GitHub summary
  run: |
    ghareports --robotlog output.xml
    # or:
    uvx robotframework-ghareports --robotlog output.xml
This produces the same style of summary, but it does so from the finished result file.
If you want a standalone Markdown artifact as well, add --markdown:
- name: Publish GitHub summary
  run: |
    ghareports \
      --robotlog output.xml \
      --markdown robot-summary.md
    # or:
    uvx robotframework-ghareports \
      --robotlog output.xml \
      --markdown robot-summary.md
That is also useful outside GitHub Actions because it gives you an easy way to inspect the generated report locally.
Long failure messages are still readable
One practical annoyance with GitHub-flavored Markdown tables is that long failure messages can become ugly fast. ghareports has a width option for wrapping table cell content:
- name: Publish GitHub summary
  run: |
    ghareports \
      --robotlog output.xml \
      --width 35
    # or:
    uvx robotframework-ghareports \
      --robotlog output.xml \
      --width 35
That helps a lot with browser automation failures and stack traces that would otherwise stretch the page horizontally.
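The effect is the same idea as wrapping text at a fixed column. As an illustration only (this is plain coreutils, not what ghareports runs internally), here is a long failure message folded at 35 characters:

```shell
# A typical long browser-automation failure message (made up for illustration).
msg="TimeoutError: waiting for selector .checkout-button to be visible exceeded 30000ms"

# fold -s breaks at spaces, -w sets the maximum line width,
# mirroring what a 35-character cell width does to table content.
printf '%s\n' "$msg" | fold -s -w 35
```

Inside a Markdown table cell, each of those breaks becomes a line break instead of one long horizontal run.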
Pull request comments
Sometimes even the job summary is not the right place. Maybe you want the result visible directly in the pull request conversation, or maybe the summary is getting too large. In that case ghareports can create or update a PR comment:
permissions:
  contents: read
  pull-requests: write

steps:
  - name: Publish PR comment
    run: |
      ghareports --robotlog output.xml --pr-comment
      # or:
      uvx robotframework-ghareports --robotlog output.xml --pr-comment
The comment is updated in place on repeated runs instead of creating a new bot comment every time, which keeps the pull request cleaner than the usual comment-spam pattern. There is an example of how this looks in practice here.
A note about pabot
Direct listener use inside pabot is intentionally not supported. That part is important enough to call out explicitly.
If you are running suites in parallel with pabot, the expected flow is:
- Run tests with pabot
- Merge results into a single output.xml
- Run ghareports --robotlog output.xml
That design keeps the reporting step simple and avoids partial or conflicting summaries from parallel workers.
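The flow above can be sketched as two workflow steps. The --processes count and the results output directory are illustrative choices, not requirements, and pabot is assumed to merge worker results into a single output.xml in that directory:

```yaml
- name: Run tests in parallel
  run: pabot --processes 4 --outputdir results tests/

- name: Publish GitHub summary
  # reporting runs once, against the merged result file
  run: ghareports --robotlog results/output.xml
```

Keeping the reporting step separate is what makes this work: the summary is generated exactly once, from one merged file.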
Minimal workflow example
This is the smallest useful example for a normal GitHub Actions job:
name: Robot Tests

on:
  push:
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    env:
      ENVIRONMENT: dev
      BROWSER: firefox
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install dependencies
        run: |
          python -m pip install robotframework robotframework-ghareports
          # or:
          uv pip install robotframework robotframework-ghareports
      - name: Run tests
        run: robot tests/
      - name: Publish report to GitHub Actions summary
        run: |
          ghareports \
            --robotlog output.xml \
            --envs ENVIRONMENT,BROWSER \
            --width 35
          # or:
          uvx robotframework-ghareports \
            --robotlog output.xml \
            --envs ENVIRONMENT,BROWSER \
            --width 35
That alone gets the most important feedback into the workflow UI without anyone having to download output.xml or open generated HTML artifacts just to see what failed.
Final thoughts
I still generate the normal Robot Framework reports. They are the right tool when I need full drill-down details. But for day-to-day CI work, I want the first failure signal to show up where I already am: inside GitHub Actions.
That is what robotframework-ghareports gives me. It does not try to replace Robot Framework reporting. It just shortens the path from failing pipeline to useful information.
If that sounds familiar, the project is here:
tags: robotframework - qa - reporting - python - github - github-actions - ci