Doc review
Signed-off-by: Fanit Kolchina <kolchfa@amazon.com>
kolchfa-aws committed Jan 3, 2025
1 parent a647b7e commit 1dc5390
Showing 3 changed files with 98 additions and 37 deletions.
2 changes: 1 addition & 1 deletion _analyzers/tokenizers/index.md
@@ -2,7 +2,7 @@
layout: default
title: Tokenizers
nav_order: 60
-has_children: false
+has_children: true
has_toc: false
redirect_from:
- /analyzers/tokenizers/index/
36 changes: 0 additions & 36 deletions _analyzers/tokenizers/letter-tokenizers.md

This file was deleted.

97 changes: 97 additions & 0 deletions _analyzers/tokenizers/letter.md
@@ -0,0 +1,97 @@
---
layout: default
title: Letter
parent: Tokenizers
nav_order: 60
---

# Letter tokenizer

The `letter` tokenizer splits text into tokens whenever it encounters a non-letter character. It works well for many European languages but is less suitable for some Asian languages in which words are not separated by spaces.
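
Because digits and punctuation count as non-letter characters, they act as token boundaries and are dropped from the output. For example, the following request (an illustrative check using the same `_analyze` API shown later on this page) tokenizes text containing apostrophes and a digit:

```json
POST _analyze
{
  "tokenizer": "letter",
  "text": "Don't miss 2day's lesson"
}
```
{% include copy-curl.html %}

The resulting tokens are `Don`, `t`, `miss`, `day`, `s`, and `lesson`: each apostrophe and the digit `2` split the surrounding letters into separate tokens.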

## Example usage

The following example request creates a new index named `my_index` and configures an analyzer with a `letter` tokenizer:

```json
PUT /my_index
{
"settings": {
"analysis": {
"analyzer": {
"my_letter_analyzer": {
"type": "custom",
"tokenizer": "letter"
}
}
}
},
"mappings": {
"properties": {
"content": {
"type": "text",
"analyzer": "my_letter_analyzer"
}
}
}
}
```
{% include copy-curl.html %}
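
To verify that the analyzer is applied to the index, you can pass its name in an `_analyze` request (the index and analyzer names match the preceding example):

```json
POST /my_index/_analyze
{
  "analyzer": "my_letter_analyzer",
  "text": "Cats 4EVER love chasing butterflies!"
}
```
{% include copy-curl.html %}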

## Generated tokens

Use the following request to examine the tokens generated by the `letter` tokenizer:

```json
POST _analyze
{
"tokenizer": "letter",
"text": "Cats 4EVER love chasing butterflies!"
}
```
{% include copy-curl.html %}

The response contains the generated tokens:

```json
{
"tokens": [
{
"token": "Cats",
"start_offset": 0,
"end_offset": 4,
"type": "word",
"position": 0
},
{
"token": "EVER",
"start_offset": 6,
"end_offset": 10,
"type": "word",
"position": 1
},
{
"token": "love",
"start_offset": 11,
"end_offset": 15,
"type": "word",
"position": 2
},
{
"token": "chasing",
"start_offset": 16,
"end_offset": 23,
"type": "word",
"position": 3
},
{
"token": "butterflies",
"start_offset": 24,
"end_offset": 35,
"type": "word",
"position": 4
}
]
}
```
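
Note that the `letter` tokenizer preserves the original case of each token (for example, `Cats`). If you need lowercase tokens, one common approach is to pair the tokenizer with the `lowercase` token filter in a custom analyzer, as in the following sketch (the index and analyzer names are illustrative):

```json
PUT /my_lowercase_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "letter_lowercase_analyzer": {
          "type": "custom",
          "tokenizer": "letter",
          "filter": ["lowercase"]
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

With this analyzer, the same sample text produces the tokens `cats`, `ever`, `love`, `chasing`, and `butterflies`.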
