Add public doc for scheduler #8825
Conversation
Thank you for submitting your PR. The PR states are In progress (or Draft) -> Tech review -> Doc review -> Editorial review -> Merged. Before you submit your PR for doc review, make sure the content is technically accurate. If you need help finding a tech reviewer, tag a maintainer. When you're ready for doc review, tag the assignee of this PR. The doc reviewer may push edits to the PR directly or leave comments and editorial suggestions for you to address (let us know in a comment if you have a preference). The doc reviewer will arrange for an editorial review.
Signed-off-by: Louis Chu <clingzhi@amazon.com>
## Getting Started

### Prerequisites
Also, regarding the prerequisites documentation:
Step 1: Set up a data source (https://opensearch.org/docs/latest/dashboards/management/S3-data-source/). After that, a dashboard user can run queries through Query Workbench; see https://opensearch.org/docs/latest/dashboards/management/query-data-source/.
Step 2: The user can accelerate queries using a secondary index; see https://opensearch.org/docs/latest/dashboards/management/accelerate-external-data/.
During this process, the user can change the configuration to try out everything described in this doc.
### Spark Configurations
- `spark.flint.job.externalScheduler.enabled`: Default is `false`. Enables the external scheduler for Flint auto-refresh to schedule refresh jobs outside of Spark.
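As an illustrative sketch only, this flag can be supplied in `spark-defaults.conf` or at submit time (how you launch your Flint job depends on your deployment):

```
# spark-defaults.conf (sketch): enable the external scheduler for Flint auto-refresh
spark.flint.job.externalScheduler.enabled  true
```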
On SQL plugin 2.17, this value is passed as `true` by default.
So is the default for `spark.flint.job.externalScheduler.enabled` `true` then?
Signed-off-by: Fanit Kolchina <kolchfa@amazon.com>
@noCharger Please review the changes and let me know if anything should be changed. Thank you!
Introduced 2.17
{: .label .label-purple }
Scheduled Query Acceleration (SQA) is designed to optimize direct queries from OpenSearch to Amazon Simple Storage Service (Amazon S3). It addresses issues often faced when managing and refreshing indexes, views, and data in an automated way.
OpenSearch to Amazon Simple Storage Service (Amazon S3)

This direct query is not limited to S3; let's use "external data sources" on this page for consistency with other pages.
Using SQA provides the following benefits:
- **Cost reduction through optimized resource usage**: SQA reduces the operational load on driver nodes, lowering the costs associated with maintaining auto-refresh for indexes and views.
There's an experiment with data that can be visualized in charts; shall we add it somewhere?
Sure, please make a suggestion or commit the updates directly. Thanks!
- **Better control over refresh scheduling**: SQA allows flexible scheduling of refresh intervals, helping manage resource usage and refresh frequency according to specific requirements.
- **Simplified index management**: SQA enables updates to index settings, such as refresh intervals, without requiring multiple queries, simplifying workflows.
This one may need some clarification. For example, originally it took two passes:
- Stop the streaming query.
- Change the interval and other index options.

Now it's done within a single query.
Updated.
- [Optimizing query performance using OpenSearch indexing]({{site.url}}{{site.baseurl}}/dashboards/management/accelerate-external-data/)
- [Flint index refresh](https://github.com/opensearch-project/opensearch-spark/blob/main/docs/index.md#flint-index-refresh)
- [Index State Management]({{site.url}}{{site.baseurl}}/im-plugin/ism/index/)
This is unrelated; the index state is documented here: https://github.com/opensearch-project/opensearch-spark/blob/main/docs/index.md#index-state-transition-1
To configure SQA, perform the following steps.

### Step 1: Configure the OpenSearch cluster settings
FYI this is enabled by default
I put "Set `plugins.query.executionengine.async_query.enabled` to `true` (default value)" so users know it's the default. But for `plugins.query.executionengine.async_query.external_scheduler.interval` there's no default value, correct? So users must set it manually.

But for plugins.query.executionengine.async_query.external_scheduler.interval there's no default value - correct? So users must set it manually.

Correct. If this value is empty, the Spark side (opensearch-spark) applies a default value of 5 minutes.
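As a sketch of what setting this interval could look like (the setting name comes from this thread; the `"10 minutes"` value is a hypothetical choice), using the cluster settings API:

```json
PUT _cluster/settings
{
  "persistent": {
    "plugins.query.executionengine.async_query.external_scheduler.interval": "10 minutes"
  }
}
```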
For more information, see [Settings](https://github.com/opensearch-project/sql/blob/main/docs/user/admin/settings.rst#pluginsqueryexecutionengineasync_queryexternal_schedulerinterval).

### Step 2: Configure Apache Spark settings
this step doesn't need
WITH (
  auto_refresh = true,
  refresh_interval = '15 minutes',
  scheduler_mode = 'external'
);
By default it would use the external scheduler; no need to mention it unless the user wants to use the internal one.
## Modifying refresh settings

To modify refresh settings, use the `ALTER` command:
```sql
ALTER INDEX example_index
WITH (refresh_interval = '30 minutes');
```
{% include copy.html %}
There are some overlaps between `## Managing scheduled jobs` and `## Managing scheduled jobs`.
### Step 2: Configure a data source

Connect OpenSearch to your Amazon S3 data source using the OpenSearch Dashboards interface. For more information, see [Connecting Amazon S3 to OpenSearch]({{site.url}}{{site.baseurl}}/dashboards/management/S3-data-source/).
After this step, you can directly query your S3 data (the primary data source) using [Query Workbench]({{site.url}}{{site.baseurl}}/dashboards/query-workbench/).
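For illustration only, a direct query in Query Workbench might look like the following (the `mys3` data source name and the `http_logs` table are hypothetical):

```sql
SELECT status, COUNT(*) AS cnt
FROM mys3.default.http_logs
GROUP BY status
LIMIT 10;
```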
Wondering if it's more suitable to say that the data source is a prerequisite.
### Step 3: Configure query acceleration

Configure a skipping index, covering index, or materialized view. These secondary data sources are additional data structures that improve query performance by optimizing queries on external data sources, such as Amazon S3. For more information, see [Optimize query performance using OpenSearch indexing]({{site.url}}{{site.baseurl}}/dashboards/management/accelerate-external-data/).

After this step, you can [run accelerated queries](#running-an-accelerated-query) using one of the secondary data sources.
This Step 3 is the same as "Running an accelerated query". An open-source user should be able to use this feature without any customization, since it's enabled by default with a 5-minute threshold. But they can customize it via "Configure the OpenSearch cluster settings".
## Creating a scheduled refresh job

To create an index with a scheduled refresh job, use the following statement:

```sql
CREATE SKIPPING INDEX example_index
WITH (
  auto_refresh = true,
  refresh_interval = '15 minutes',
  scheduler_mode = 'external'
);
```
{% include copy.html %}
This also seems to duplicate `## Running an accelerated query`.
### Enabling jobs

To disable the external scheduler, use the ALTER command with a manual refresh:
Suggested change: To disable auto refresh with the internal or external scheduler, use the ALTER command with a manual refresh:
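A sketch of the statement this wording describes, reusing the hypothetical `example_index` from elsewhere on the page:

```sql
ALTER INDEX example_index
WITH (auto_refresh = false);
```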
```
{% include copy.html %}

To enable the external scheduler, use the ALTER command with an auto-refresh:
Suggested change: To enable auto refresh with the internal or external scheduler, use the ALTER command with an auto-refresh:
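A sketch of the corresponding enable statement, again using the hypothetical `example_index`:

```sql
ALTER INDEX example_index
WITH (
  auto_refresh = true,
  scheduler_mode = 'external'
);
```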
### Updating the scheduler mode

To update the scheduler mode, specify the `scheduler_mode` in the `WITH` clause:
Suggested change: To switch the scheduler mode from internal to external, or vice versa, specify the `scheduler_mode` in the `WITH` clause:
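For example, a sketch of switching the hypothetical `example_index` to the internal scheduler:

```sql
ALTER INDEX example_index
WITH (scheduler_mode = 'internal');
```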
### Verifying scheduler job status

To verify scheduler job status, use the following request:

```json
GET /.async-query-scheduler/_search
```
{% include copy-curl.html %}
Suggested change: rename the section:

### Inspect scheduler metadata

To inspect scheduler metadata, use the following DSL request:

```json
GET /.async-query-scheduler/_search
```
{% include copy-curl.html %}
@kolchfa-aws Please see my comments and changes and let me know if you have any questions. Given that you received additional comments from @noCharger prior to editorial, tag me on any resulting additions/modifications, and I can give them a quick read. Thanks!
Introduced 2.17
{: .label .label-purple }

Scheduled Query Acceleration (SQA) is designed to optimize direct queries from OpenSearch to external data sources, such as Amazon Simple Storage Service (Amazon S3). It addresses issues often faced when managing and refreshing indexes, views, and data in an automated way.
Suggested change: Scheduled Query Acceleration (SQA) is designed to optimize queries sent directly from OpenSearch to external data sources, such as Amazon Simple Storage Service (Amazon S3). It uses automation to address issues commonly encountered when managing and refreshing indexes, views, and data.
Query acceleration is facilitated by secondary indexes like [skipping indexes]({{site.url}}{{site.baseurl}}/dashboards/management/accelerate-external-data/#skipping-indexes), [covering indexes]({{site.url}}{{site.baseurl}}/dashboards/management/accelerate-external-data/#covering-indexes), or [materialized views]({{site.url}}{{site.baseurl}}/dashboards/management/accelerate-external-data/#materialized-views). When queries run, they use these indexes instead of directly querying S3.
Suggested change: in the last sentence, spell out "Amazon S3": "When queries run, they use these indexes instead of directly querying Amazon S3."
The secondary indexes need to be refreshed periodically to stay current with the Amazon S3 data. This refresh can be scheduled using an internal scheduler (within Spark) or an external scheduler.
Suggested change: The secondary indexes need to be refreshed periodically in order to remain current with the Amazon S3 data. This refresh operation can be scheduled using either an internal scheduler (within Spark) or an external scheduler.
Using SQA provides the following benefits:
Suggested change: SQA provides the following benefits:
- **Improved observability of refresh operations**: SQA provides visibility into index states and refresh timings, offering insights into data processing and the current system state.
Suggested change: "refresh timing" (singular) instead of "refresh timings".
## Validations

You can validate your settings by running a test query and verifying the scheduler configurations:
"configuration" (singular)?
```
{% include copy.html %}

For more information, see [OpenSearch Spark documentation](https://github.com/opensearch-project/opensearch-spark/blob/main/docs/index.md#all-indexes).
Suggested change: For more information, see the [OpenSearch Spark documentation](https://github.com/opensearch-project/opensearch-spark/blob/main/docs/index.md#all-indexes).
Line 244: The link leads to the "All Indexes" section of the Flint Index Reference Manual. Is this correct?
Yes.
## Troubleshooting

If the refresh operation is not triggering as expected, ensure the `auto_refresh` setting is enabled and the refresh interval is properly configured.
Suggested change: If the refresh operation is not triggering as expected, ensure that the `auto_refresh` setting is enabled and the refresh interval is properly configured.
## Next steps

For answers to more technical questions, see the [OpenSearch Spark RFC](https://github.com/opensearch-project/opensearch-spark/issues/416).
I'm not a huge fan of this phrasing, and I'm not sure that this RFC is a great reference for the user. It doesn't actually present answers to technical questions, as such. I would either replace with a more generic "For more information, see [Different page with more information]" or remove the section entirely.
- Ensure you have the SQL plugin installed. The SQL plugin is part of most OpenSearch distributions. For more information, see [Installing plugins]({{site.url}}{{site.baseurl}}/install-and-configure/plugins/).
- Ensure you have configured an Amazon S3 and Amazon EMR Serverless (needed for access to Apache Spark).
- Ensure you have the SQL plugin installed. The SQL plugin is included in most OpenSearch distributions. For more information, see [Installing plugins]({{site.url}}{{site.baseurl}}/install-and-configure/plugins/).
- Ensure you have configured a data source (in this example, Amazon S3): Configure a skipping index, covering index, or materialized view. These secondary data sources are additional data structures that improve query performance by optimizing queries sent to external data sources, such as Amazon S3. For more information, see [Optimizing query performance using OpenSearch indexing]({{site.url}}{{site.baseurl}}/dashboards/management/accelerate-external-data/).
Extra space between "to external"
### Enabling jobs

To disable auto refresh using an internal or external scheduler, set `auto_refresh` to `false`:
I believe there may be other instances where "auto-refresh" is hyphenated.
- **Concurrent job limits**: Limit the number of concurrent jobs running to avoid overloading system resources. Monitor system capacity and adjust job limits accordingly to ensure optimal performance.
- **Concurrent job limits**: Limit the number of concurrent running jobs running to avoid overloading system resources. Monitor system capacity and adjust job limits accordingly to ensure optimal performance.
Delete the instance of "running" after "jobs".
LGTM
The backport to
To backport manually, run these commands in your terminal:

```sh
# Fetch latest updates from GitHub
git fetch
# Create a new working tree
git worktree add ../.worktrees/backport-2.17 2.17
# Navigate to the new working tree
pushd ../.worktrees/backport-2.17
# Create a new branch
git switch --create backport/backport-8825-to-2.17
# Cherry-pick the merged commit of this pull request and resolve the conflicts
git cherry-pick -x --mainline 1 61bb436450c9e36a6f487ac7c5b2322fca576578
# Push it to GitHub
git push --set-upstream origin backport/backport-8825-to-2.17
# Go back to the original working tree
popd
# Delete the working tree
git worktree remove ../.worktrees/backport-2.17
```

Then, create a pull request where the
The backport to
To backport manually, run these commands in your terminal:

```sh
# Fetch latest updates from GitHub
git fetch
# Create a new working tree
git worktree add ../.worktrees/backport-2.18 2.18
# Navigate to the new working tree
pushd ../.worktrees/backport-2.18
# Create a new branch
git switch --create backport/backport-8825-to-2.18
# Cherry-pick the merged commit of this pull request and resolve the conflicts
git cherry-pick -x --mainline 1 61bb436450c9e36a6f487ac7c5b2322fca576578
# Push it to GitHub
git push --set-upstream origin backport/backport-8825-to-2.18
# Go back to the original working tree
popd
# Delete the working tree
git worktree remove ../.worktrees/backport-2.18
```

Then, create a pull request where the
* Add public doc for scheduler
* Doc review
* Add time units
* Clarify checkpoint location
* Add description
* Added more links and command
* Convert settings back to list
* More links
* Formatting fix
* Review comments
* Tech and editorial review
* One more comments
* More editorial comments

Signed-off-by: Louis Chu <clingzhi@amazon.com>
Signed-off-by: Fanit Kolchina <kolchfa@amazon.com>
Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com>
Co-authored-by: Fanit Kolchina <kolchfa@amazon.com>
(cherry picked from commit 61bb436)
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
* Add public doc for scheduler
* Doc review
* Add time units
* Clarify checkpoint location
* Add description
* Added more links and command
* Convert settings back to list
* More links
* Formatting fix
* Review comments
* Tech and editorial review
* One more comments
* More editorial comments

(cherry picked from commit 61bb436)
Signed-off-by: Louis Chu <clingzhi@amazon.com>
Signed-off-by: Fanit Kolchina <kolchfa@amazon.com>
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com>
Co-authored-by: Fanit Kolchina <kolchfa@amazon.com>
Description
Add public doc for scheduler
Issues Resolved
#8263
Version
2.17
Frontend features
If you're submitting documentation for an OpenSearch Dashboards feature, add a video that shows how a user will interact with the UI step by step. A voiceover is optional.
Checklist
For more information on following Developer Certificate of Origin and signing off your commits, please check here.