Ben Sherman, Paolo Di Tommaso & Phil Ewels
Apr 30, 2026

What's New in Nextflow 26.04 - Strict Syntax, Records, Static Typing, Module Registry


We’re thrilled to announce the release of Nextflow 26.04, with new capabilities to help you write bug-free pipeline code and catch errors early. A new module registry and native nextflow module CLI commands make sharing and installing workflow modules simple.

Building on the foundation of workflow inputs/outputs and type annotations introduced in Nextflow 25.10, this release delivers several key features that make writing and maintaining high-quality bioinformatics pipelines easier than ever. Nextflow 26.04 introduces powerful new language constructs such as records and static typing, and takes a significant step forward in code reuse with the new Module Registry.

These features improve the experience of writing Nextflow code, whether you're doing it yourself or with an AI agent. Strict syntax and static typing produce specific, actionable error messages that a code-generation loop can parse and fix without human intervention. Records use named fields instead of positional indices, so generated code is less likely to silently break when inputs change. The module registry is fully CLI-driven: agents can search for modules, read their documentation, and run them directly, all through shell commands.


Strict Syntax

Preparing for strict syntax


In 2025 we released a completely new Nextflow syntax parser, which we call “strict syntax”. This parser introduced a stricter implementation of Nextflow DSL2, and laid the groundwork for new language features like static typing. Until now, it has been opt-in, but Nextflow 26.04 uses the strict syntax parser by default, bringing the rich error checking of nextflow lint to nextflow run. This also means that you can use new language features without needing to set NXF_SYNTAX_PARSER=v2 in your environment.

Nextflow reporting an error at runtime with the strict syntax parser

Some existing pipelines might not run out-of-the-box with Nextflow 26.04 – this can happen when a pipeline uses Groovy syntax that was not included in the Nextflow language specification. You can still run these pipelines by setting NXF_SYNTAX_PARSER=v1 in your environment. We recommend updating these pipelines to comply with strict syntax so that they can benefit from the improved developer experience and new language features.
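For example, assuming a legacy pipeline script named `main.nf` (the filename here is illustrative), the fallback looks like:

```bash
# Run a legacy pipeline with the previous (v1) syntax parser
NXF_SYNTAX_PARSER=v1 nextflow run main.nf
```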

Records and Record Types

Migrating to records


Nextflow 26.04 delivers on a long-awaited feature for the Nextflow language: records and record types. Most Nextflow pipelines have some form of structured data that is propagated through the pipeline. Multiple values need to be kept in sync so that Nextflow knows how to properly parallelise process execution. Records are a new way to structure this data as it flows through a pipeline. They are a replacement for tuples: where tuples contain elements that are accessed by index, records contain fields that are accessed by name.

To get started, create a record value with the new record constructor:

```nextflow
// BEFORE: tuple
sample = tuple('1', file('1_1.fastq'), file('1_2.fastq'))
println sample[0]

// AFTER: record
sample = record(
    id: '1',
    fastq_1: file('1_1.fastq'),
    fastq_2: file('1_2.fastq')
)
println sample.id
```

Since record fields use named keys instead of positional indices, their order no longer matters. This avoids a common pitfall in Nextflow pipelines where modifying a tuple input requires checking the order of arguments everywhere it’s called.

To document and validate records, you can also create custom Record types. These are named data structures that define the expected fields and types for a record.

If a record is supplied that does not match the expected record type, the Nextflow language server catches the error immediately:

```nextflow
// enable type checking in the language server
nextflow.enable.types = true

// define a record type called "Sample"
record Sample {
    id: String
    fastq_1: Path
    fastq_2: Path?  // ? denotes that field is optional
}

// use the record type in a function parameter
def hello(sample: Sample) {
    println "Hello sample ${sample.id}!"
}

// call hello() with records
workflow {
    sample1 = record(id: '1', fastq_2: file('1_2.fastq'))
    hello(sample1)  // error: `sample1` is missing `fastq_1` field required by Sample

    sample2 = record(id: '1', fastq_1: file('1_1.fastq'))
    hello(sample2)  // ok (fastq_2 is optional)
}
```
💡 Note: Type checking is currently only performed by the language server. A future version of Nextflow will provide type checking in nextflow lint and nextflow run.

Records in Nextflow aren’t like classes and objects in other languages:

  • Records are anonymous, which means that you can create them on-the-fly without an explicit type.
  • Record types are used to specify minimum requirements at the boundaries of a pipeline, workflow, or process. Any record that satisfies the requirements of a record type can be used, even if it has additional fields.
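To illustrate the second point, here is a minimal sketch (the field names and function are illustrative, not from the release) of a record with an extra field still satisfying a record type:

```nextflow
record Sample {
    id: String
}

def describe(sample: Sample) {
    println "Sample ${sample.id}"
}

workflow {
    // `batch` is an extra field not declared in Sample -- the record still
    // satisfies the type because it provides everything Sample requires
    sample = record(id: '1', batch: 'A')
    describe(sample)
}
```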

Records are designed to make it easy to model data at every layer of a pipeline, without bloating your pipeline code with type definitions and type conversions.

Static Typing (preview)

Migrating to static typing


Nextflow 25.10 introduced the first phase of static typing with type annotations and basic type checking. Nextflow 26.04 brings full support for static typing, with typed processes and typed workflows.

Processes

Typed processes now support record inputs and outputs:

```nextflow
nextflow.enable.types = true

process FASTQC {
    input:
    record(
        id: String,
        reads: List<Path>
    )

    output:
    // No need to define types for outputs, they are automatically inferred
    record(
        id: id,
        fastqc_html: file('*_fastqc.html'),
        fastqc_zip: file('*_fastqc.zip')
    )

    script:
    // ...
}
```

Typed processes also support a streamlined syntax for tuple inputs, making it easier to adopt static typing before migrating to records:

```nextflow
nextflow.enable.types = true

process FASTQC {
    input:
    tuple(id: String, reads: List<Path>)

    output:
    tuple(id, file('fastqc_logs'))

    script:
    // ...
}
```

Workflows

Typed workflows now provide first-class support for records and static typing in dataflow logic:

```nextflow
nextflow.enable.types = true

workflow RNASEQ {
    take:
    // Typed inputs can include custom record types
    read_pairs_ch: Channel<Sample>
    transcriptome: Path

    main:
    index = INDEX(transcriptome)
    fastqc_ch = FASTQC(read_pairs_ch)
    quant_ch = QUANT(read_pairs_ch, index)
    samples_ch = fastqc_ch.join(quant_ch, by: 'id')

    emit:
    // Optional: Define expected output types to catch errors early
    samples: Channel<AlignedSample> = samples_ch
}

record Sample { /* ... */ }

record AlignedSample { /* ... */ }
```

Several operators have also been updated for use with static typing and records. For example, the join operator can now join channels of records on a matching record field (such as id in the above example). See the best practices guide for more information about using operators with static typing.
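As a sketch of the join behavior described above (the channel contents and file names are illustrative), joining two record channels on a shared field might look like:

```nextflow
workflow {
    fastqc_ch = Channel.of(
        record(id: '1', fastqc_html: file('1_fastqc.html'))
    )
    quant_ch = Channel.of(
        record(id: '1', quant: file('1_quant.sf'))
    )

    // join on the matching `id` field rather than a positional tuple element
    fastqc_ch
        .join(quant_ch, by: 'id')
        .view()
}
```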


Language server

The Nextflow language server can use all of this new information to validate the structure of your data at every step of a pipeline, including records. For example:

VS Code reporting an error for a record mismatch against a process input

Looking ahead

Static typing (specifically, typed processes and typed workflows) must be enabled using the nextflow.enable.types feature flag. This is done separately for each script, allowing you to adopt static typing one file at a time. It remains in preview for Nextflow 26.04 and will become stable in Nextflow 26.10. We encourage everyone to experiment with static typing in their pipelines and share feedback, as we work towards stabilization.

Module Registry

Using registry modules


Nextflow 25.10 introduced the Nextflow registry for publishing and discovering plugins, which led to an explosion of new community plugins. With Nextflow 26.04, we have extended the registry to implement a native module system – a way to publish, install, and run modules through the Nextflow CLI.

Nextflow modules can now be published and shared through the Nextflow Registry, similar to plugins. You can use the nextflow module command to work with remote modules. For example:

```bash
# Search for modules in the registry
nextflow module search qc

# View info about a module
nextflow module view nf-core/fastqc

# Install a module
nextflow module install nf-core/fastqc
```

Especially powerful is the nextflow module run command, which allows you to run a module directly without writing a pipeline:

```bash
nextflow module run nf-core/fastqc --meta.id 1 --reads sample1.fastq.gz -with-docker
```

Process inputs can be specified as command-line arguments, and process outputs are published to an output directory and printed to standard output.

Modules are installed in the modules directory of a pipeline, following a standard community practice. As a bonus, you can include modules using their canonical name, saving you from the headache of relative paths:

```nextflow
// before
include { BWA_MEM } from '../../../modules/nf-core/bwa/mem'

// after
include { BWA_MEM } from 'nf-core/bwa/mem'
```

All nf-core modules are automatically synced to the registry under the nf-core namespace, and can be used as nf-core/<name> as shown above. You can also publish your own modules by claiming a namespace in the module registry.

Get Involved

We hope that you’re as excited about Nextflow 26.04 and the new module registry as we are! As a community-driven project, we’d love for you to get involved.

The Nextflow ecosystem continues to mature with these updates, providing the foundation for more robust, maintainable, and enterprise-ready bioinformatics workflows. Whether you're writing pipelines by hand or letting AI agents generate them for you, Nextflow 26.04 gives you the tools to move faster and catch mistakes sooner.


New to Nextflow? Nextflow is the leading open-source workflow orchestrator that simplifies writing and deploying compute and data-intensive pipelines at scale on any infrastructure.