diff --git a/_analyzers/token-filters/index.md b/_analyzers/token-filters/index.md
index cf457b2294..db301d90d7 100644
--- a/_analyzers/token-filters/index.md
+++ b/_analyzers/token-filters/index.md
@@ -43,7 +43,7 @@ Token filter | Underlying Lucene token filter| Description
`lowercase` | [LowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/LowerCaseFilter.html) | Converts tokens to lowercase. The default [LowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/LowerCaseFilter.html) is for the English language. You can set the `language` parameter to `greek` (uses [GreekLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/el/GreekLowerCaseFilter.html)), `irish` (uses [IrishLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/ga/IrishLowerCaseFilter.html)), or `turkish` (uses [TurkishLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/tr/TurkishLowerCaseFilter.html)).
[`min_hash`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/min-hash/) | [MinHashFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/minhash/MinHashFilter.html) | Uses the [MinHash technique](https://en.wikipedia.org/wiki/MinHash) to estimate document similarity. Performs the following operations on a token stream sequentially: <br> 1. Hashes each token in the stream. <br> 2. Assigns the hashes to buckets, keeping only the smallest hashes of each bucket. <br> 3. Outputs the smallest hash from each bucket as a token stream.
`multiplexer` | N/A | Emits multiple tokens at the same position. Runs each token through each of the specified filter lists separately and outputs the results as separate tokens.
-`ngram` | [NGramTokenFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/ngram/NGramTokenFilter.html) | Tokenizes the given token into n-grams of lengths between `min_gram` and `max_gram`.
+[`ngram`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/ngram/) | [NGramTokenFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/ngram/NGramTokenFilter.html) | Tokenizes the given token into n-grams of lengths between `min_gram` and `max_gram`.
Normalization | `arabic_normalization`: [ArabicNormalizer](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/ar/ArabicNormalizer.html) <br> `german_normalization`: [GermanNormalizationFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/de/GermanNormalizationFilter.html) <br> `hindi_normalization`: [HindiNormalizer](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/hi/HindiNormalizer.html) <br> `indic_normalization`: [IndicNormalizer](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/in/IndicNormalizer.html) <br> `sorani_normalization`: [SoraniNormalizer](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/ckb/SoraniNormalizer.html) <br> `persian_normalization`: [PersianNormalizer](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/fa/PersianNormalizer.html) <br> `scandinavian_normalization`: [ScandinavianNormalizationFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/ScandinavianNormalizationFilter.html) <br> `scandinavian_folding`: [ScandinavianFoldingFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/ScandinavianFoldingFilter.html) <br> `serbian_normalization`: [SerbianNormalizationFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/sr/SerbianNormalizationFilter.html) | Normalizes the characters of one of the listed languages.
`pattern_capture` | N/A | Generates a token for every capture group in the provided regular expression. Uses [Java regular expression syntax](https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html).
`pattern_replace` | N/A | Matches a pattern in the provided regular expression and replaces matching substrings. Uses [Java regular expression syntax](https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html).
diff --git a/_analyzers/token-filters/ngram.md b/_analyzers/token-filters/ngram.md
new file mode 100644
index 0000000000..c029eac26e
--- /dev/null
+++ b/_analyzers/token-filters/ngram.md
@@ -0,0 +1,137 @@
+---
+layout: default
+title: N-gram
+parent: Token filters
+nav_order: 290
+---
+
+# N-gram token filter
+
+The `ngram` token filter splits a token into smaller substrings of defined lengths, known as _n-grams_, which can improve partial matching and fuzzy search capabilities. It is commonly used in search applications to support autocomplete, partial matches, and typo-tolerant search. For more information, see [Autocomplete functionality]({{site.url}}{{site.baseurl}}/search-plugins/searching-data/autocomplete/) and [Did-you-mean]({{site.url}}{{site.baseurl}}/search-plugins/searching-data/did-you-mean/).
+
+## Parameters
+
+The `ngram` token filter can be configured with the following parameters.
+
+Parameter | Required/Optional | Data type | Description
+:--- | :--- | :--- | :---
+`min_gram` | Optional | Integer | The minimum length of the n-grams. Default is `1`.
+`max_gram` | Optional | Integer | The maximum length of the n-grams. Default is `2`.
+`preserve_original` | Optional | Boolean | Whether to also emit the original token in addition to the generated n-grams. Default is `false`.
+
+## Example
+
+The following example request creates a new index named `ngram_example_index` and configures an analyzer with an `ngram` filter:
+
+```json
+PUT /ngram_example_index
+{
+  "settings": {
+    "analysis": {
+      "filter": {
+        "ngram_filter": {
+          "type": "ngram",
+          "min_gram": 2,
+          "max_gram": 3
+        }
+      },
+      "analyzer": {
+        "ngram_analyzer": {
+          "type": "custom",
+          "tokenizer": "standard",
+          "filter": [
+            "lowercase",
+            "ngram_filter"
+          ]
+        }
+      }
+    }
+  }
+}
+```
+{% include copy-curl.html %}
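+
+The analyzer is defined in the index settings but is not yet applied to any field. As an illustrative sketch (the `title` field name is hypothetical and not part of the preceding example), a text field could be mapped to the analyzer as follows:
+
+```json
+PUT /ngram_example_index/_mapping
+{
+  "properties": {
+    "title": {
+      "type": "text",
+      "analyzer": "ngram_analyzer"
+    }
+  }
+}
+```
+{% include copy-curl.html %}
+
+In practice, n-gram analyzers are often applied only at index time, with a simpler analyzer used for search, so that queries are not also expanded into many n-grams.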
+
+## Generated tokens
+
+Use the following request to examine the tokens generated using the analyzer:
+
+```json
+POST /ngram_example_index/_analyze
+{
+  "analyzer": "ngram_analyzer",
+  "text": "Search"
+}
+```
+{% include copy-curl.html %}
+
+The response contains the generated tokens:
+
+```json
+{
+  "tokens": [
+    {
+      "token": "se",
+      "start_offset": 0,
+      "end_offset": 6,
+      "type": "<ALPHANUM>",
+      "position": 0
+    },
+    {
+      "token": "sea",
+      "start_offset": 0,
+      "end_offset": 6,
+      "type": "<ALPHANUM>",
+      "position": 0
+    },
+    {
+      "token": "ea",
+      "start_offset": 0,
+      "end_offset": 6,
+      "type": "<ALPHANUM>",
+      "position": 0
+    },
+    {
+      "token": "ear",
+      "start_offset": 0,
+      "end_offset": 6,
+      "type": "<ALPHANUM>",
+      "position": 0
+    },
+    {
+      "token": "ar",
+      "start_offset": 0,
+      "end_offset": 6,
+      "type": "<ALPHANUM>",
+      "position": 0
+    },
+    {
+      "token": "arc",
+      "start_offset": 0,
+      "end_offset": 6,
+      "type": "<ALPHANUM>",
+      "position": 0
+    },
+    {
+      "token": "rc",
+      "start_offset": 0,
+      "end_offset": 6,
+      "type": "<ALPHANUM>",
+      "position": 0
+    },
+    {
+      "token": "rch",
+      "start_offset": 0,
+      "end_offset": 6,
+      "type": "<ALPHANUM>",
+      "position": 0
+    },
+    {
+      "token": "ch",
+      "start_offset": 0,
+      "end_offset": 6,
+      "type": "<ALPHANUM>",
+      "position": 0
+    }
+  ]
+}
+```
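+
+The preceding example does not set `preserve_original`. As a minimal sketch (not part of the original example), the following `_analyze` request defines the filter inline with `preserve_original` set to `true` so that the unmodified token is emitted alongside the n-grams:
+
+```json
+POST /_analyze
+{
+  "tokenizer": "standard",
+  "filter": [
+    "lowercase",
+    {
+      "type": "ngram",
+      "min_gram": 2,
+      "max_gram": 3,
+      "preserve_original": true
+    }
+  ],
+  "text": "Search"
+}
+```
+{% include copy-curl.html %}
+
+With this setting, the response would be expected to contain the original token `search` in addition to the two- and three-character n-grams shown previously.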