Improve pg_search documentation around tokenizer details #1056
Labels:
- documentation: Improvements or additions to documentation
- pg_search: Issue related to `pg_search/`
- priority-2-medium: Medium priority issue
What
Specifically, we should make clear that search operations like regex, fuzzy find, etc. all take place on the tokens generated by the index tokenizer, not necessarily on the raw row data itself.
For example, a `username` of `johnlovemarry` might be tokenized as `john love marry` or kept whole as `johnlovemarry`, and that choice affects how a regex of `(love)` would match it: it would match `john love marry` but not `johnlovemarry`. Some examples of how the tokenizers work on simple words would go a long way.
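To make the requested docs concrete, a minimal sketch of the behavior described above could look like the following. This is a hypothetical Python illustration, not pg_search's actual implementation; the function names (`whitespace_tokenize`, `raw_tokenize`, `regex_matches`) are invented for this example, and the anchored (full-token) regex matching mirrors the assumption that the index matches patterns against whole indexed terms rather than raw row values.

```python
import re

def whitespace_tokenize(text: str) -> list[str]:
    # Splits the value into tokens on whitespace:
    # "john love marry" -> ["john", "love", "marry"]
    return text.split()

def raw_tokenize(text: str) -> list[str]:
    # Keeps the whole value as a single token:
    # "johnlovemarry" -> ["johnlovemarry"]
    return [text]

def regex_matches(pattern: str, tokens: list[str]) -> bool:
    # Token-level matching: the pattern must match an entire token,
    # mirroring matching against indexed terms rather than row data.
    return any(re.fullmatch(pattern, tok) for tok in tokens)

# A "love" token exists, so the regex matches.
print(regex_matches("(love)", whitespace_tokenize("john love marry")))  # True

# The value stays one token; "(love)" does not match the whole token.
print(regex_matches("(love)", raw_tokenize("johnlovemarry")))  # False
```

Worked examples in this style, one per tokenizer, would let users predict which queries can and cannot match a given column value.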