
Releases: JohnSnowLabs/spark-nlp

6.2.0

22 Oct 15:38

📢 Spark NLP 6.2.0: A new stage for unstructured document ingestion and processing at scale

Spark NLP 6.2.0 introduces key upgrades across entity extraction, document normalization, HTML reading, and GGUF-based models. To recap, since the Spark NLP 6.1.x releases you can:

  • Infer quantized cutting-edge LLMs and VLMs such as Gemma 3, Phi-4, Llama 3.1, Qwen 2.5
  • Rerank documents using llama.cpp with AutoGGUFReranker
  • Ingest unstructured documents of diverse formats
    • Reader2Doc: streamlines the process of loading and integrating diverse file formats (PDFs, Word, Excel, PowerPoint, HTML, Text, Email, Markdown) directly into Spark NLP pipelines with a unified and flexible interface.
    • Reader2Table: streamlines tabular data extraction from multiple document formats with seamless pipeline integration.
    • Reader2Image: extracts structured image content from various document types.

Spark NLP release 6.2.0 further focuses on automation, structure-awareness, and resource efficiency, making pipelines easier to configure, manage, and extend.

🔥 Highlights

  • Auto Modes for EntityRuler and DocumentNormalizer: automatic regex and text-cleaning presets for faster setup.
  • Hierarchical Element Tracking in HTMLReader: adds element and parent identifiers for structure-aware document processing.
  • Resource Management for AutoGGUF Annotators: improved control and cleanup of llama.cpp-based models.

🚀 New Features & Enhancements

EntityRulerModel and DocumentNormalizer Auto Modes

EntityRulerModel

  • Added autoMode parameter to enable predefined regex entity groups ("network_entities", "communication_entities", "media_entities", "email_entities", "all_entities").
  • Added extractEntities parameter to filter entities within auto modes.
  • Automatically applies case-insensitive regex presets and falls back to manual mode if not specified.
  • Retains full backward compatibility with JSON or RocksDB-based rules.
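
A minimal sketch of the new auto mode, assuming the autoMode and extractEntities parameters above are exposed through Spark NLP's usual setX setters (setAutoMode and setExtractEntities are assumptions based on that convention; check the release notebooks for the exact API):

from sparknlp.annotator import EntityRulerModel

# Sketch only: setter spellings are assumed from the setX convention;
# "email_entities" is one of the predefined regex groups listed above.
entity_ruler = EntityRulerModel() \
    .setInputCols(["document", "token"]) \
    .setOutputCol("entities") \
    .setAutoMode("email_entities") \
    .setExtractEntities(["email"])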

DocumentNormalizer

  • Added presetPattern and autoMode parameters to apply built-in text cleaning patterns.
  • New modes include "light_clean", "document_clean", "social_clean", "html_clean", and "full_auto".
  • Enables quick application of multiple cleaning operations without manual configuration.
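
A corresponding sketch for DocumentNormalizer, again assuming a setX-style setter for the autoMode parameter ("html_clean" is one of the built-in modes listed above):

from sparknlp.annotator import DocumentNormalizer

# Sketch only: setAutoMode is an assumed setter name for the autoMode parameter.
normalizer = DocumentNormalizer() \
    .setInputCols(["document"]) \
    .setOutputCol("normalized") \
    .setAutoMode("html_clean")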

Together, these additions significantly reduce boilerplate setup for common text extraction and normalization workflows.

Hierarchical Element Identification in HTMLReader

  • Introduced element_id and parent_id metadata fields for each parsed HTML element.
  • Enables explicit structural relationships (e.g., title → paragraph → link) for hierarchical retrieval and contextual reasoning.
  • Supports graph-based indexing, hybrid search, and multi-level document analysis.
  • Metadata propagation improvements ensure Sentence Detector outputs also retain upstream hierarchy information.
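
For illustration, a hedged sketch of reading the new metadata, assuming the HTML reader is invoked via sparknlp.read().html() and returns its parsed elements in an html column of structs with content and metadata fields (column and field names are assumptions based on the existing reader API):

import sparknlp
from pyspark.sql import functions as F

spark = sparknlp.start()

# Each parsed element now carries element_id and parent_id in its metadata map,
# so parent/child relationships (title -> paragraph -> link) can be reconstructed.
html_df = sparknlp.read().html("./html-files")

html_df.select(F.explode("html").alias("el")) \
    .select(
        F.col("el.metadata")["element_id"].alias("element_id"),
        F.col("el.metadata")["parent_id"].alias("parent_id"),
        F.col("el.content").alias("content")
    ).show(truncate=False)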

AutoGGUF Annotator Enhancements

For AutoGGUFModel, AutoGGUFVisionModel, AutoGGUFEmbeddings, and AutoGGUFReranker:

  • Added close() method to explicitly release llama.cpp model resources, preventing memory retention in long-running sessions.
  • Introduced setRemoveThinkingTag(tag: String) parameter to remove internal <think>...</think> sections from model outputs.
    • Regex pattern: (?s)<$tag>.+?</$tag>
    • Simplifies downstream processing for chat and reasoning models.
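
A minimal sketch of both additions (pretrained() with no arguments loads the default model; the tag value "think" matches the <think> example above):

from sparknlp.annotator import AutoGGUFModel

llm = AutoGGUFModel.pretrained() \
    .setInputCols(["document"]) \
    .setOutputCol("completions") \
    .setRemoveThinkingTag("think")  # strips (?s)<think>.+?</think> from outputs

# ... fit and run the pipeline ...

# Explicitly release the underlying llama.cpp resources when done (new in 6.2.0)
llm.close()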

🐛 Bug Fixes

  • RobertaEmbeddings Warmup Test - fixed token sequence bug where unknown tokens caused initialization errors.

❤️ Community Support

  • Slack - real-time discussion with the Spark NLP community and team
  • GitHub - issue tracking, feature requests, and contributions
  • Discussions - community ideas and showcases
  • Medium - latest Spark NLP articles and tutorials
  • YouTube - educational videos and demos

💻 Installation

Python

pip install spark-nlp==6.2.0

Spark Packages

CPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.2.0
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.2.0

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.2.0
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.2.0

Apple Silicon

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.2.0
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.2.0

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.2.0
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.2.0

Maven

<dependency>
  <groupId>com.johnsnowlabs.nlp</groupId>
  <artifactId>spark-nlp_2.12</artifactId>
  <version>6.2.0</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 6.1.5...6.2.0

6.1.5

09 Oct 11:46

📢 Spark NLP 6.1.5: Smarter Readers and More Resilient Pipelines

Spark NLP 6.1.5 focuses on improving data ingestion reliability and pipeline flexibility. This release enhances reader components with better fault tolerance, broader input support, and introduces a new ReaderAssembler annotator for streamlined integration. Several key fixes also improve model loading and stability in distributed environments.

🔥 Highlights

  • New ReaderAssembler Annotator: Unify multiple reader annotators into one configurable component for simpler and cleaner ingestion pipelines.

🚀 New Features & Enhancements

Reader Pipeline Enhancements

  • ReaderAssembler Annotator
    A new meta-annotator that unifies Reader2X components (e.g., Reader2Doc, Reader2Image, Reader2Table) under a single interface.

    • Automatically selects the right reader(s) based on configuration.
    • Supports declarative assembly of reading stages.
    • Provides parameters for reader selection, fallback rules, and error handling.
      This simplifies pipeline construction and improves maintainability for multi-format ingestion workflows. (Link to notebook)
  • Support for String Input Columns in Readers (SPARKNLP-1291)
    Previously, Spark NLP readers only supported input via file paths. That meant if you already had a DataFrame with text content (say from another pipeline or a preliminary load), you had to write it to disk just to let the reader ingest it, which added friction and overhead, especially in streaming or in-memory pipelines.

    With this change, you can:

    • Feed raw text stored in a DataFrame column directly into Spark NLP readers — zero I/O overhead when not needed.
    • Simplify workflows and pipelines (no need for temporary file staging just to “read” back data).
    • Improve performance and resource usage in scenarios where input is already available as strings (e.g. generated, preprocessed, or coming from another system).
    • Make the reader APIs more flexible and general-purpose.
  • Fault-Tolerant XML Reader
    The XML reader now skips malformed XML fragments (e.g., mismatched tags, missing closures, invalid characters) instead of failing the job.
    Enhanced error handling ensures more resilient ingestion of imperfect real-world data.

🐛 Bug Fixes

  • GGUF Model Loading Duplication
    Fixed an issue in FeaturesFallbackReader that caused duplicate loading or missing model files when calling .pretrained() on GGUF-based annotators such as AutoGGUFModel and rerankers, especially in Databricks environments.

❤️ Community Support

  • Slack For live discussion with the Spark NLP community and the team
  • GitHub Bug reports, feature requests, and contributions
  • Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
  • Medium Spark NLP articles
  • JohnSnowLabs official Medium
  • YouTube Spark NLP video tutorials

💻 Installation

Python

pip install spark-nlp==6.1.5

Spark Packages

CPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.5
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.5

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.5
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.5

Apple Silicon

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.5
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.5

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.5
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.5

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>6.1.5</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>6.1.5</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-silicon_2.12</artifactId>
    <version>6.1.5</version>
</dependency>

spark-nlp-aarch64:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>6.1.5</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 6.1.4...6.1.5

6.1.4

23 Sep 14:21


📢 Spark NLP 6.1.4: Advancing Multimodal Workflows with Reader2Image

We are excited to announce the release of Spark NLP 6.1.4!
This version introduces a powerful new annotator, Reader2Image, which extends Spark NLP’s universal ingestion capabilities to embedded images across a wide range of document formats. With this release, Spark NLP users can now seamlessly integrate text and image processing in the same pipeline, unlocking new opportunities for vision-language modeling (VLM), multimodal search, and document understanding.


🔥 Highlights

  • New Reader2Image Annotator: Extract and structure image content directly from documents like PDFs, Word, PowerPoint, Excel, HTML, Markdown, and Email files.
  • Multimodal Pipeline Expansion: Build workflows that combine text, tables, and now images for comprehensive document AI applications.
  • Consistent Structured Output: Access image metadata (filename, dimensions, channels, mode) alongside binary image data in Spark DataFrames, fully compatible with other visual annotators.

🚀 New Features & Enhancements

Document Ingestion

  • Reader2Image Annotator
    A new multimodal annotator designed to parse image content embedded in structured documents. Supported formats include:

    • PDFs
    • Word (.doc/.docx)
    • Excel (.xls/.xlsx)
    • PowerPoint (.ppt/.pptx)
    • HTML & Markdown (.md)
    • Email files (.eml, .msg)

    Output Fields:

    • File name
    • Image dimensions (height, width)
    • Number of channels
    • Mode
    • Binary image data
    • Metadata

    This enables seamless integration with vision-language models (VLMs), multimodal embeddings, and downstream Spark NLP annotators, all within the same distributed pipeline.
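
A hedged sketch of Reader2Image in a pipeline, assuming it mirrors Reader2Doc's interface (setContentType / setContentPath / setOutputCol) and import location; the linked notebook has the definitive example:

from pyspark.ml import Pipeline
from sparknlp.reader.reader2image import Reader2Image  # import path is an assumption

reader2image = Reader2Image() \
    .setContentType("application/pdf") \
    .setContentPath("./pdf-files") \
    .setOutputCol("image")

# empty_df can be any empty DataFrame, e.g. spark.createDataFrame([[""]]).toDF("text");
# the reader pulls its input from contentPath
pipeline = Pipeline(stages=[reader2image])
model = pipeline.fit(empty_df)
images_df = model.transform(empty_df)  # rows carry filename, dimensions, channels, mode, and binary data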


🐛 Bug Fixes

  • None

❤️ Community Support

  • Slack For live discussion with the Spark NLP community and the team
  • GitHub Bug reports, feature requests, and contributions
  • Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
  • Medium Spark NLP articles
  • JohnSnowLabs official Medium
  • YouTube Spark NLP video tutorials

⚙️ Installation

Python

pip install spark-nlp==6.1.4

Spark Packages

CPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.4
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.4

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.4
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.4

Apple Silicon (M1 & M2)

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.4
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.4

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.4
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.4

Maven

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>6.1.4</version>
</dependency>
  • GPU: spark-nlp-gpu_2.12:6.1.4
  • Apple Silicon: spark-nlp-silicon_2.12:6.1.4
  • AArch64: spark-nlp-aarch64_2.12:6.1.4

FAT JARs


What's Changed

Full Changelog: 6.1.3...6.1.4

6.1.3

01 Sep 13:57


📢 Spark NLP 6.1.3: NerDL Graph Checker, Reader2Doc Enhancements, Ranking Finisher

We are pleased to announce Spark NLP 6.1.3, introducing a new graph validation annotator for NER training, enhancements to Reader2Doc for flexible document handling, and a new ranking finisher for AutoGGUFReranker outputs. This release focuses on improving training robustness, document processing flexibility, and retrieval ranking capabilities.

🔥 Highlights

  • New NerDLGraphChecker annotator to validate NER training graphs before training starts.
  • Reader2Doc enhancements with options for consolidated output and filtering.
  • New AutoGGUFRerankerFinisher for ranking, filtering, and normalizing reranker outputs.

🚀 New Features & Enhancements

Named Entity Recognition (NER)

NerDLGraphChecker:
A new annotator that validates whether a suitable NerDL graph is available for a given training dataset before embeddings or training start. This helps avoid wasted computation in custom training scenarios. (Link to notebook)

  • Must be placed before embedding or NerDLApproach annotators.
  • Requires token and label columns in the dataset.
  • Automatically extracts embedding dimensions from the pipeline to validate graph compatibility.
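
A hedged sketch of where the checker sits in a training pipeline; the setter names used for NerDLGraphChecker below (setInputCols, setLabelColumn) are assumptions modeled on NerDLApproach, so consult the linked notebook for the exact API:

from pyspark.ml import Pipeline
from sparknlp.annotator import NerDLApproach, WordEmbeddingsModel
from sparknlp.annotator import NerDLGraphChecker  # assumed to be exported like other annotators

graph_checker = NerDLGraphChecker() \
    .setInputCols(["sentence", "token"]) \
    .setLabelColumn("label")

embeddings = WordEmbeddingsModel.pretrained() \
    .setInputCols(["sentence", "token"]) \
    .setOutputCol("embeddings")

ner = NerDLApproach() \
    .setInputCols(["sentence", "token", "embeddings"]) \
    .setLabelColumn("label") \
    .setOutputCol("ner")

# The checker runs before the embeddings and NerDLApproach stages,
# so a missing graph fails fast instead of after embeddings are computed.
pipeline = Pipeline(stages=[graph_checker, embeddings, ner])
model = pipeline.fit(training_df)  # training_df must contain token and label columns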

Document Processing

Reader2Doc Enhancements:
New configuration options provide more control over output formatting:

  • outputAsDocument: Concatenates all sentences into a single document.
  • excludeNonText: Filters out non-textual elements (e.g., tables, images) from the document.
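
A minimal sketch with the new options, assuming they are exposed as setOutputAsDocument and setExcludeNonText following the usual setter convention:

from sparknlp.reader.reader2doc import Reader2Doc

# setOutputAsDocument(True): one concatenated document per input file
# setExcludeNonText(True): drop tables, images, and other non-text elements
reader2doc = Reader2Doc() \
    .setContentType("application/pdf") \
    .setContentPath("./pdf-files") \
    .setOutputCol("document") \
    .setOutputAsDocument(True) \
    .setExcludeNonText(True)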

Ranking & Retrieval

AutoGGUFRerankerFinisher:
A finisher for processing AutoGGUFReranker outputs, adding advanced ranking and filtering capabilities (Link to notebook):

  • Top-k document selection.
  • Score threshold filtering.
  • Min-max score normalization (0–1 range).
  • Sorting by relevance score.
  • Rank assignment in metadata while preserving document structure.

🐛 Bug Fixes

None.

❤️ Community Support

  • Slack Live discussion with the Spark NLP community and team
  • GitHub Bug reports, feature requests, and contributions
  • Discussions Share ideas and engage with other community members
  • Medium Spark NLP technical articles
  • JohnSnowLabs Medium Official blog
  • YouTube Spark NLP tutorials and demos

Installation

Python

pip install spark-nlp==6.1.3

Spark Packages

spark-nlp on Apache Spark 3.0.x–3.4.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.3

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.3

Apple Silicon (M1 & M2)

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.3

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.3
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.3

Maven

spark-nlp:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>6.1.3</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>6.1.3</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-silicon_2.12</artifactId>
    <version>6.1.3</version>
</dependency>

spark-nlp-aarch64:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>6.1.3</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 6.1.2...6.1.3

6.1.2

20 Aug 13:05

📢 Spark NLP 6.1.2: AutoGGUFReranker and AutoGGUF improvements

We are excited to announce Spark NLP 6.1.2, which enhances AutoGGUF model support and introduces a brand-new reranking annotator based on llama.cpp LLMs. This release also brings fixes for AutoGGUFVisionModel and improved CUDA compatibility for AutoGGUF models.

🔥 Highlights

  • New AutoGGUFReranker annotator for advanced LLM-based reranking in information retrieval and retrieval-augmented generation (RAG) pipelines.

🚀 New Features & Enhancements

Large Language Models (LLMs)

  • AutoGGUFReranker
    A new annotator for reranking candidate documents using AutoGGUF-based (llama.cpp) models. This enables more accurate ranking in retrieval pipelines, benefiting applications such as search, RAG, and question answering. (Link to notebook)
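
A hedged usage sketch; the query setter below is an assumed parameter name (a reranker needs a query to score documents against), so check the linked notebook for the exact API:

from sparknlp.annotator import AutoGGUFReranker

reranker = AutoGGUFReranker.pretrained() \
    .setInputCols(["document"]) \
    .setOutputCol("reranked_documents") \
    .setQuery("What is the capital of France?")  # setQuery is illustrative, not confirmed by these notes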

🐛 Bug Fixes

  • Fixed Python initialization errors in AutoGGUFVisionModel.
  • Using save for AutoGGUF models now supports more file protocols.
  • Ensured better GPU support for AutoGGUF annotators on a broader range of CUDA devices.

❤️ Community Support

  • Slack For live discussion with the Spark NLP community and the team
  • GitHub Bug reports, feature requests, and contributions
  • Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
  • Medium Spark NLP articles
  • JohnSnowLabs official Medium
  • YouTube Spark NLP video tutorials

Installation

Python

pip install spark-nlp==6.1.2

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.2

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.2

Apple Silicon (M1 & M2)

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.2

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.2
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.2

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>6.1.2</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>6.1.2</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-silicon_2.12</artifactId>
    <version>6.1.2</version>
</dependency>

spark-nlp-aarch64:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>6.1.2</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 6.1.1...6.1.2

6.1.1

05 Aug 12:24

📢 Spark NLP 6.1.1: Enhanced LLM Performance and Expanded Data Ingestion Capabilities

We are thrilled to announce Spark NLP 6.1.1, a focused release that delivers significant performance improvements and enhanced functionality for large language models and universal data ingestion. This release continues our commitment to providing state-of-the-art AI capabilities within the native Spark ecosystem, with optimized inference performance and expanded multimodal support.

🔥 Highlights

  • Performance Boost for llama.cpp models: Inference optimizations in AutoGGUFModel and AutoGGUFEmbeddings deliver improvements for large language model workflows on GPU.
  • Multimodal Vision Models Restored: The AutoGGUFVisionModel annotator is back with full functionality and the latest SOTA VLMs, enabling sophisticated vision-language processing capabilities.
  • Enhanced Table Processing: New Reader2Table annotator streamlines tabular data extraction from multiple document formats with seamless pipeline integration.
  • Upgraded OpenVINO Backend: We upgraded our OpenVINO backend to 2025.02 and added hyperthreading configuration options to maximize performance on multi-core systems.

🚀 New Features & Enhancements

Large Language Models (LLMs)

  • Optimized AutoGGUFModel Performance: We improved the inference of llama.cpp models and achieved a 10% performance increase for AutoGGUFModel on GPU.
  • Restored AutoGGUFVisionModel: The multimodal vision model annotator is fully operational again, enabling powerful vision-language processing capabilities. Users can now process images alongside text for comprehensive multimodal AI applications while using the latest SOTA vision-language models.
  • Enhanced Model Compatibility: AutoGGUFModel can now seamlessly load the language model components from pretrained AutoGGUFVisionModel instances, providing greater flexibility in model deployment and usage. (Link to notebook)
  • Robust Model Loading: Pretrained AutoGGUF-based annotators now load despite the inclusion of deprecated parameters, ensuring broader compatibility.
  • Updated Default Models: All AutoGGUF annotators now use more recent and capable pretrained models:
    Annotator            Default pretrained model
    AutoGGUFModel        Phi_4_mini_instruct_Q4_K_M_gguf
    AutoGGUFEmbeddings   Qwen3_Embedding_0.6B_Q8_0_gguf
    AutoGGUFVisionModel  Qwen2.5_VL_3B_Instruct_Q4_K_M_gguf

Document Ingestion

  • Reader2Table Annotator: This powerful new annotator provides a streamlined interface for extracting and processing tabular data from various document formats (Link to notebook). It offers:
    • Unified API for interacting with Spark NLP readers
    • Enhanced flexibility through reader-specific configurations
    • Improved maintainability and scalability for data loading workflows
    • Support for multiple formats including HTML, Word (.doc/.docx), Excel (.xls/.xlsx), PowerPoint (.ppt/.pptx), Markdown (.md), and CSV (.csv)
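
A hedged sketch, assuming Reader2Table mirrors Reader2Doc's interface and import location (both assumptions; the linked notebook has the definitive example):

from pyspark.ml import Pipeline
from sparknlp.reader.reader2table import Reader2Table  # import path is an assumption

reader2table = Reader2Table() \
    .setContentType("text/html") \
    .setContentPath("./html-files") \
    .setOutputCol("tables")

pipeline = Pipeline(stages=[reader2table])
tables_df = pipeline.fit(empty_df).transform(empty_df)  # empty_df: any empty DataFrame; input comes from contentPath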

Performance Optimizations

  • OpenVINO Upgrade: We upgraded the backend to 2025.02 and added comprehensive hyperthreading configuration options for the OpenVINO backend, enabling users to optimize performance on multi-core systems by fine-tuning thread allocation and CPU utilization.

🐛 Bug Fixes

None

❤️ Community Support

  • Slack: For live discussion with the Spark NLP community and the team.
  • GitHub: Bug reports, feature requests, and contributions.
  • Discussions: Engage with other community members, share ideas, and show off how you use Spark NLP!
  • Medium: Spark NLP articles.
  • JohnSnowLabs official Medium
  • YouTube: Spark NLP video tutorials.

Installation

Python

pip install spark-nlp==6.1.1

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.1

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.1

Apple Silicon

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.1

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.1
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.1

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>6.1.1</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>6.1.1</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-silicon_2.12</artifactId>
    <version>6.1.1</version>
</dependency>

spark-nlp-aarch64:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>6.1.1</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 6.1.0...6.1.1

6.1.0

23 Jul 16:10

📢 Spark NLP 6.1.0: State-of-the-art LLM Capabilities and Advancing Universal Ingestion

We are excited to announce Spark NLP 6.1.0, another milestone for building scalable, distributed AI pipelines! This major release significantly enhances our capabilities for state-of-the-art multimodal and large language models and universal data ingestion. Upgrade Spark NLP to 6.1.0 to improve both usability and performance across ingestion, inference, and multimodal processing pipelines, all within the native Spark ecosystem.

🔥 Highlights

  • Upgraded llama.cpp Integration: We've updated our llama.cpp backend to tag b5932, which supports inference with the latest generation of LLMs.
  • Unified Document Ingestion with Reader2Doc: Introducing a new annotator that streamlines the process of loading and integrating diverse file formats (PDFs, Word, Excel, PowerPoint, HTML, Text, Email, Markdown) directly into Spark NLP pipelines with a unified and flexible interface.
  • Support for Phi-4: Spark NLP now natively supports the Phi-4 model, allowing users to leverage its advanced reasoning capabilities.

🚀 New Features & Enhancements

Large Language Models (LLMs)

  • llama.cpp Upgrade: Our llama.cpp backend has been upgraded to version b5932. This update enables native inference for the newest LLMs, such as Gemma 3 and Phi-4, ensuring broader model compatibility and improved performance.
    • NOTE: We are still in the process of upgrading our multimodal AutoGGUFVisionModel annotator to the latest backend. This means that this annotator will not be available in this version. As a workaround, please use version 6.0.5 of Spark NLP.
  • Phi-4 Model Support: Spark NLP now integrates the Phi-4 model, an advanced open model trained on a blend of synthetic data, filtered public domain content, and academic Q&A datasets. This integration enables sophisticated reasoning capabilities directly within Spark NLP. (Link to notebook)

Document Ingestion

  • Reader2Doc Annotator: This new annotator provides a simplified, unified interface for integrating various Spark NLP readers. It supports a wide range of formats, including PDFs, plain text, HTML, Word (.doc/.docx), Excel (.xls/.xlsx), PowerPoint (.ppt/.pptx), email files (.eml, .msg), and Markdown (.md).
    Using this annotator, you can read all these different formats into Spark NLP documents, making them directly accessible in all your Spark NLP pipelines. This significantly reduces boilerplate code and enhances flexibility in data loading workflows, making it easier to scale and switch between data sources.

Let's look at a code example to see how easy it is to use:

from pyspark.ml import Pipeline
from sparknlp.reader.reader2doc import Reader2Doc

reader2doc = Reader2Doc() \
    .setContentType("application/pdf") \
    .setContentPath("./pdf-files") \
    .setOutputCol("document")

# other NLP stages (tokenizers, embeddings, ...) collected in `nlp_stages`

pipeline = Pipeline(stages=[reader2doc] + nlp_stages)

# empty_df is just an empty DataFrame (e.g. spark.createDataFrame([[""]]).toDF("text"));
# the reader pulls its input from `contentPath`, not from this DataFrame
model = pipeline.fit(empty_df)
result_df = model.transform(empty_df)

Check out our full example notebook to see it in action.

🐛 Bug Fixes

  • HuggingFace OpenVINO Notebook for Qwen2VL: Addressed and fixed issues in the notebook related to the OpenVINO conversion of the Qwen2VL model, ensuring smoother functionality.

❤️ Community Support

  • Slack: For live discussion with the Spark NLP community and the team.
  • GitHub: Bug reports, feature requests, and contributions.
  • Discussions: Engage with other community members, share ideas, and show off how you use Spark NLP!
  • Medium: Spark NLP articles.
  • JohnSnowLabs official Medium
  • YouTube: Spark NLP video tutorials.

Installation

Python

pip install spark-nlp==6.1.0

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.0

pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.1.0

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.0

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.1.0

Apple Silicon (M1 & M2)

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.0

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.1.0

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.0

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.1.0

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>6.1.0</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>6.1.0</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-silicon_2.12</artifactId>
    <version>6.1.0</version>
</dependency>

spark-nlp-aarch64:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>6.1.0</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 6.0.5...6.1.0

6.0.5

10 Jul 07:58

📢 Spark NLP 6.0.5: Enhanced Microsoft Fabric Integration & Markdown Processing

We're thrilled to announce the release of Spark NLP 6.0.5! This version introduces a new Markdown Reader, enabling direct processing of Markdown files into structured Spark DataFrames for more diverse NLP workflows. We have also enhanced Microsoft Fabric integration, allowing for seamless model downloads from Lakehouse containers.

🔥 Highlights

  • New Markdown Reader: Introduce the new MarkdownReader for effortlessly parsing Markdown files into structured Spark DataFrames, paving the way for advanced content analysis and NLP on Markdown content.
  • Enhanced Microsoft Fabric Support: Download models directly from Microsoft Fabric Lakehouse containers, streamlining your NLP deployments in the Fabric environment.

🚀 New Features & Enhancements

  • New MarkdownReader Annotator: Introducing the MarkdownReader, a powerful new feature that allows you to read and parse Markdown files directly into a structured Spark DataFrame. This enables efficient processing and analysis of Markdown content for various NLP applications. We recommend letting our Partition annotator invoke this reader automatically. (Link to notebook)

    partitioner = Partition(content_type="text/markdown").partition(md_directory)
  • Microsoft Fabric Integration: Spark NLP now supports downloading models from Microsoft Fabric Lakehouse containers, providing a more integrated and efficient workflow for users leveraging Microsoft Fabric. This enhancement ensures smoother model access and deployment within the Fabric ecosystem. For example, you can define the path to our pretrained models in Spark like so:

    from pyspark import SparkConf
    
    conf = SparkConf()
    conf.set("spark.jsl.settings.pretrained.cache_folder", "abfss://[email protected]/lakehouse_folder.Lakehouse/Files/my_models")

🐛 Bug Fixes

We performed crucial maintenance updates to all of our example notebooks, ensuring that they are reproducible and properly displayed in GitHub.

❤️ Community Support

  • Slack For live discussion with the Spark NLP community and the team
  • GitHub Bug reports, feature requests, and contributions
  • Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
  • Medium Spark NLP articles
  • JohnSnowLabs official Medium
  • YouTube Spark NLP video tutorials

⚙️ Installation

Python

#PyPI
pip install spark-nlp==6.0.5

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.5

pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.5

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.0.5

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.0.5

Apple Silicon

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.0.5

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.0.5

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.0.5

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.0.5

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>6.0.5</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>6.0.5</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-silicon_2.12</artifactId>
    <version>6.0.5</version>
</dependency>

spark-nlp-aarch64:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>6.0.5</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 6.0.4...6.0.5

6.0.4

30 Jun 10:50

📢 Spark NLP 6.0.4: MiniLMEmbeddings, DataFrame Optimization, and Enhanced PDF Processing

We are excited to announce the release of Spark NLP 6.0.4! This version brings advancements in text embeddings with the introduction of the MiniLM family, Spark DataFrame optimizations, and enhanced PDF document parsing. Upgrade to 6.0.4 to leverage these cutting-edge features and expand your NLP capabilities at scale.

Stay updated with our latest examples and tutorials by visiting our Medium - Spark NLP blog!

🔥 Highlights

  • Introducing MiniLMEmbeddings: Support for the efficient and powerful MiniLM family of models via the new MiniLMEmbeddings annotator, providing state-of-the-art text representations.
  • New DataFrameOptimizer: A new DataFrameOptimizer transformer to streamline and optimize Spark DataFrame operations, offering configurable repartitioning, caching, and persistence options.
  • Advanced PDF Reader Features: Enhancements to the PDF Reader with extractCoordinates for spatial metadata, normalizeLigatures for improved text consistency, and a new exception column for enhanced fault tolerance.

🚀 New Features & Enhancements

Advanced Text Embeddings

This release introduces a new family of efficient text embedding models:

  • MiniLMEmbeddings: Support for the MiniLMEmbeddings annotator, enabling the use of MiniLM models for generating highly efficient and effective sentence embeddings. These models are designed to provide strong performance while being significantly smaller and faster than larger alternatives, making them ideal for a wide range of NLP tasks requiring compact and powerful text representations. (Link to notebook)
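
A minimal usage sketch (pretrained() with no arguments loads the default MiniLM model; using the document column as input follows the other sentence-embedding annotators and is an assumption here):

from sparknlp.annotator import MiniLMEmbeddings

embeddings = MiniLMEmbeddings.pretrained() \
    .setInputCols(["document"]) \
    .setOutputCol("minilm_embeddings")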

Spark DataFrame Optimization

  • DataFrameOptimizer: Introducing the new DataFrameOptimizer transformer, designed to enhance the performance and manageability of Spark DataFrames within your NLP pipelines. (Link to notebook)
    • Configurable Repartitioning: Allows for automatic repartitioning of DataFrames, ensuring optimal data distribution for downstream processing.
    • Optional Caching: Supports DataFrame caching (doCache) to significantly speed up iterative computations.
    • Persistent Output: Adds robust support for persisting DataFrames to disk in various formats (csv, json, parquet) with custom writer options via outputOptions.
    • Schema Preservation: Efficiently preserves the original DataFrame schema, making it a seamless utility for complex Spark NLP pipelines.
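
A hedged sketch of the optimizer as a pipeline stage; doCache and outputOptions come from the list above, while the import path and the remaining setter spellings are assumptions to be checked against the linked notebook:

from sparknlp.base import DataFrameOptimizer  # import path is an assumption

optimizer = DataFrameOptimizer() \
    .setNumPartitions(32) \
    .setDoCache(True) \
    .setOutputOptions({"format": "parquet", "path": "/tmp/optimized_df"})

optimized_df = optimizer.transform(input_df)  # schema of input_df is preserved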

Enhanced PDF Document Processing

The PDF Reader and PdfToText transformer have been significantly improved for more comprehensive and fault-tolerant document parsing. (Link to notebook)

  • Spatial Metadata Extraction (extractCoordinates): A new configurable parameter extractCoordinates in PdfToText and the PDF Reader. When enabled, this outputs detailed spatial metadata (text position and dimensions) for each character in the PDF.
  • Ligature Normalization (normalizeLigatures): When extractCoordinates is enabled, the normalizeLigatures option ensures that ligature characters (e.g., ﬁ, ﬂ, œ) are automatically normalized to their decomposed forms (fi, fl, oe).
  • Fault Tolerance with Exception Column: A new exception output column has been introduced to capture and log any processing errors encountered while handling individual PDF documents.
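
One hedged way to exercise the new options through the reader entry point; the option keys mirror the parameter names above, but passing them as reader params with this exact spelling is an assumption:

import sparknlp

spark = sparknlp.start()

# extractCoordinates and normalizeLigatures are the parameter names described above.
params = {"extractCoordinates": "true", "normalizeLigatures": "true"}
pdf_df = sparknlp.read(params).pdf("./pdf-files")

# Per-document parsing errors land in the new exception column instead of failing the job.
pdf_df.select("exception").show(truncate=False)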

❤️ Community Support

  • Slack For live discussion with the Spark NLP community and the team
  • GitHub Bug reports, feature requests, and contributions
  • Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
  • Medium Spark NLP articles
  • JohnSnowLabs official Medium
  • YouTube Spark NLP video tutorials

⚙️ Installation

Python

#PyPI
pip install spark-nlp==6.0.4

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.4

pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.4

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.0.4

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.0.4

Apple Silicon

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.0.4

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.0.4

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.0.4

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.0.4

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>6.0.4</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>6.0.4</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-silicon_2.12</artifactId>
    <version>6.0.4</version>
</dependency>

spark-nlp-aarch64:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>6.0.4</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 6.0.3...6.0.4

6.0.3

11 Jun 15:23

📢 Spark NLP 6.0.3: Multimodal E5-V Embeddings and Enhanced Document Partitioning

We are excited to announce the release of Spark NLP 6.0.3! This version introduces significant advancements in multimodal capabilities and further refines document processing workflows. Upgrade to 6.0.3 to leverage these cutting-edge features and expand your NLP and vision task capabilities at scale.

🔥 Highlights

  • Introducing E5-V Universal Multimodal Embeddings: Support for E5VEmbeddings, enabling universal multimodal embeddings with Multimodal Large Language Models (MLLMs). It can express semantic similarity between texts, images, or a combination of both.
  • Enhanced Document Partitioning: Improvements to the Partition and PartitionTransformer annotators with new character and title-based chunking strategies.
  • New XML Reader: Added sparknlp.read().xml() and integrated XML support into the Partition annotator for streamlined XML document processing.

🚀 New Features & Enhancements

E5-V Multimodal Embeddings

This release further boosts Spark NLP's multimodal processing power with the integration of E5-V.

  • E5VEmbeddings is designed to adapt MLLMs for achieving universal multimodal embeddings. It leverages MLLMs with prompts to effectively bridge the modality gap between different types of inputs, demonstrating strong performance in multimodal embeddings even without fine-tuning. (Link to notebook)

Enhanced Unstructured Document Processing

The Partition and PartitionTransformer components now include additional chunking strategies and enhancements, which divide content into meaningful units based on the document's structure or character count.

  • New Chunking Strategies (Link to notebook); see the sketch after this list.
    • Character Number Strategy (maxCharacters): Splits documents by number of characters.
    • Title-Based Chunking Strategy (byTitle): Splits documents by their titles. Additional settings:
      • Soft Chunking Limit (newAfterNChars): Allows for early section breaks before reaching the maxCharacters threshold.
      • Contextual Overlap (overlapAll): Adds trailing context from the previous chunk to the next, improving semantic continuity.
  • Enhancements
    • Page Boundary Splitting: Respects pageNumber metadata and starts a new section when a page changes.
    • Title Inclusion Behavior: Ensures titles are embedded within the following content rather than forming isolated chunks.
    • New XML Reader: This release introduces a new feature that enables reading and parsing XML files into a structured Spark DataFrame. (Link to notebook)
      • Added sparknlp.read().xml(): This method accepts file paths of XML content.
      • Use in Partition: XML content can now be processed using the Partition annotator by setting content_type = "application/xml".
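
A combined sketch of the XML reader and the new chunking options via Partition; sparknlp.read().xml() and content_type = "application/xml" come from the notes above, while the chunking keyword spellings are assumptions:

import sparknlp
from sparknlp.partition.partition import Partition  # import path is an assumption

spark = sparknlp.start()

# Read XML files straight into a structured DataFrame (new in 6.0.3)
xml_df = sparknlp.read().xml("./xml-files")

# Or route XML through Partition with title-based chunking.
chunks = Partition(
    content_type="application/xml",
    chunking_strategy="byTitle",   # keyword spellings here and below are assumptions
    maxCharacters=1000,
    newAfterNChars=500,
    overlapAll=True
).partition("./xml-files")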

🐛 Bug Fixes

❤️ Community Support

  • Slack For live discussion with the Spark NLP community and the team
  • GitHub Bug reports, feature requests, and contributions
  • Discussions Engage with other community members, share ideas, and show off how you use Spark NLP!
  • Medium Spark NLP articles
  • JohnSnowLabs official Medium
  • YouTube Spark NLP video tutorials

⚙️ Installation

Python

#PyPI
pip install spark-nlp==6.0.3

Spark Packages

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x (Scala 2.12):

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.3

pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:6.0.3

GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.0.3

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:6.0.3

Apple Silicon

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.0.3

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-silicon_2.12:6.0.3

AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.0.3

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:6.0.3

Maven

spark-nlp on Apache Spark 3.0.x, 3.1.x, 3.2.x, 3.3.x, and 3.4.x:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp_2.12</artifactId>
    <version>6.0.3</version>
</dependency>

spark-nlp-gpu:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-gpu_2.12</artifactId>
    <version>6.0.3</version>
</dependency>

spark-nlp-silicon:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-silicon_2.12</artifactId>
    <version>6.0.3</version>
</dependency>

spark-nlp-aarch64:

<dependency>
    <groupId>com.johnsnowlabs.nlp</groupId>
    <artifactId>spark-nlp-aarch64_2.12</artifactId>
    <version>6.0.3</version>
</dependency>

FAT JARs

What's Changed

Full Changelog: 6.0.2...6.0.3