merging main
masseyke committed Oct 12, 2023
2 parents 474b203 + 79b2e9a commit 036e58b
Showing 85 changed files with 2,135 additions and 714 deletions.
6 changes: 6 additions & 0 deletions docs/changelog/100650.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,6 @@
pr: 100650
summary: "ESQL: Improve verifier error for incorrect agg declaration"
area: ES|QL
type: bug
issues:
- 100641
5 changes: 5 additions & 0 deletions docs/changelog/100760.yaml
@@ -0,0 +1,5 @@
pr: 100760
summary: Remove noisy 'Could not find trained model' message
area: Machine Learning
type: bug
issues: []
6 changes: 6 additions & 0 deletions docs/changelog/98882.yaml
@@ -0,0 +1,6 @@
pr: 99983
summary: Use non-deprecated SAML callback URL in tests
area: Authorization
type: enhancement
issues:
- 99985
6 changes: 6 additions & 0 deletions docs/changelog/98883.yaml
@@ -0,0 +1,6 @@
pr: 99983
summary: Use non-deprecated SAML callback URL in SAML smoketests
area: Authorization
type: enhancement
issues:
- 99986
5 changes: 5 additions & 0 deletions docs/changelog/99107.yaml
@@ -0,0 +1,5 @@
pr: 99107
summary: Wait to gracefully stop deployments until alternative allocation exists
area: Machine Learning
type: bug
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/99852.yaml
@@ -0,0 +1,5 @@
pr: 99852
summary: Record more detailed HTTP stats
area: Network
type: enhancement
issues: []
36 changes: 30 additions & 6 deletions docs/reference/snapshot-restore/repository-s3.asciidoc
@@ -224,18 +224,42 @@ covered by the {es} test suite.
Note that some storage systems claim to be S3-compatible but do not faithfully
emulate S3's behaviour in full. The `repository-s3` type requires full
compatibility with S3. In particular it must support the same set of API
-endpoints, return the same errors in case of failures, and offer consistency
-and performance at least as good as S3 even when accessed concurrently by
-multiple nodes. You will need to work with the supplier of your storage system
-to address any incompatibilities you encounter.
+endpoints, return the same errors in case of failures, and offer consistency and
+performance at least as good as S3 even when accessed concurrently by multiple
+nodes. You will need to work with the supplier of your storage system to address
+any incompatibilities you encounter. Please do not report {es} issues involving
+storage systems which claim to be S3-compatible unless you can demonstrate that
+the same issue exists when using a genuine AWS S3 repository.

You can perform some basic checks of the suitability of your storage system
using the {ref}/repo-analysis-api.html[repository analysis API]. If this API
does not complete successfully, or indicates poor performance, then your
storage system is not fully compatible with AWS S3 and therefore unsuitable for
use as a snapshot repository. However, these checks do not guarantee full
-compatibility. Incompatible error codes and consistency or performance issues
-may be rare and hard to reproduce.
+compatibility.
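As a sketch of how the repository analysis check mentioned above is invoked (`my_repository` is a placeholder repository name; the query parameters are optional tuning knobs, shown here with illustrative values):

[source,console]
----
POST /_snapshot/my_repository/_analyze?blob_count=10&max_blob_size=1mb
----
// TEST[skip:illustrative sketch only]

The response reports the timings and results of the individual blob-level operations, which is what makes slow or inconsistent storage visible.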

+Most storage systems can be configured to log the details of their interaction
+with {es}. If you are investigating a suspected incompatibility with AWS S3, it
+is usually simplest to collect these logs and provide them to the supplier of
+your storage system for further analysis. If the incompatibility is not clear
+from the logs emitted by the storage system, configure {es} to log every
+request it makes to the S3 API by <<configuring-logging-levels,setting the
+logging level>> of the `com.amazonaws.request` logger to `DEBUG`:
+
+[source,console]
+----
+PUT /_cluster/settings
+{
+  "persistent": {
+    "logger.com.amazonaws.request": "DEBUG"
+  }
+}
+----
+// TEST[skip:we don't really want to change this logger]
+
+The supplier of your storage system will be able to analyse these logs to determine the problem. See
+the https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/java-dg-logging.html[AWS Java SDK]
+documentation for further information.

[[repository-s3-repository]]
==== Repository settings
@@ -35,14 +35,12 @@
import org.elasticsearch.repositories.RepositoryException;
import org.elasticsearch.repositories.blobstore.MeteredBlobStoreRepository;
import org.elasticsearch.snapshots.SnapshotDeleteListener;
-import org.elasticsearch.snapshots.SnapshotId;
-import org.elasticsearch.snapshots.SnapshotsService;
import org.elasticsearch.telemetry.metric.Meter;
import org.elasticsearch.threadpool.Scheduler;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.xcontent.NamedXContentRegistry;

-import java.util.Collection;
import java.util.Map;
import java.util.concurrent.Executor;
import java.util.concurrent.TimeUnit;
@@ -315,46 +313,35 @@ public void onFailure(Exception e) {
}

@Override
-    public void deleteSnapshots(
-        Collection<SnapshotId> snapshotIds,
-        long repositoryDataGeneration,
-        IndexVersion repositoryFormatIndexVersion,
-        SnapshotDeleteListener listener
-    ) {
-        final SnapshotDeleteListener wrappedListener;
-        if (SnapshotsService.useShardGenerations(repositoryFormatIndexVersion)) {
-            wrappedListener = listener;
-        } else {
-            wrappedListener = new SnapshotDeleteListener() {
-                @Override
-                public void onDone() {
-                    listener.onDone();
-                }
-
-                @Override
-                public void onRepositoryDataWritten(RepositoryData repositoryData) {
-                    logCooldownInfo();
-                    final Scheduler.Cancellable existing = finalizationFuture.getAndSet(threadPool.schedule(() -> {
-                        final Scheduler.Cancellable cancellable = finalizationFuture.getAndSet(null);
-                        assert cancellable != null;
-                        listener.onRepositoryDataWritten(repositoryData);
-                    }, coolDown, snapshotExecutor));
-                    assert existing == null : "Already have an ongoing finalization " + finalizationFuture;
-                }
-
-                @Override
-                public void onFailure(Exception e) {
-                    logCooldownInfo();
-                    final Scheduler.Cancellable existing = finalizationFuture.getAndSet(threadPool.schedule(() -> {
-                        final Scheduler.Cancellable cancellable = finalizationFuture.getAndSet(null);
-                        assert cancellable != null;
-                        listener.onFailure(e);
-                    }, coolDown, snapshotExecutor));
-                    assert existing == null : "Already have an ongoing finalization " + finalizationFuture;
-                }
-            };
-        }
-        super.deleteSnapshots(snapshotIds, repositoryDataGeneration, repositoryFormatIndexVersion, wrappedListener);
+    protected SnapshotDeleteListener wrapWithWeakConsistencyProtection(SnapshotDeleteListener listener) {
+        return new SnapshotDeleteListener() {
+            @Override
+            public void onDone() {
+                listener.onDone();
+            }
+
+            @Override
+            public void onRepositoryDataWritten(RepositoryData repositoryData) {
+                logCooldownInfo();
+                final Scheduler.Cancellable existing = finalizationFuture.getAndSet(threadPool.schedule(() -> {
+                    final Scheduler.Cancellable cancellable = finalizationFuture.getAndSet(null);
+                    assert cancellable != null;
+                    listener.onRepositoryDataWritten(repositoryData);
+                }, coolDown, snapshotExecutor));
+                assert existing == null : "Already have an ongoing finalization " + finalizationFuture;
+            }
+
+            @Override
+            public void onFailure(Exception e) {
+                logCooldownInfo();
+                final Scheduler.Cancellable existing = finalizationFuture.getAndSet(threadPool.schedule(() -> {
+                    final Scheduler.Cancellable cancellable = finalizationFuture.getAndSet(null);
+                    assert cancellable != null;
+                    listener.onFailure(e);
+                }, coolDown, snapshotExecutor));
+                assert existing == null : "Already have an ongoing finalization " + finalizationFuture;
+            }
+        };
}

/**
@@ -30,7 +30,6 @@
import static org.elasticsearch.cluster.metadata.IndexGraveyard.SETTING_MAX_TOMBSTONES;
import static org.elasticsearch.indices.IndicesService.WRITE_DANGLING_INDICES_INFO_SETTING;
import static org.elasticsearch.rest.RestStatus.ACCEPTED;
-import static org.elasticsearch.rest.RestStatus.OK;
import static org.elasticsearch.test.XContentTestUtils.createJsonMapView;
import static org.hamcrest.Matchers.empty;
import static org.hamcrest.Matchers.equalTo;
@@ -184,10 +183,6 @@ private List<String> listDanglingIndexIds() throws IOException {
return danglingIndexIds;
}

-    private void assertOK(Response response) {
-        assertThat(response.getStatusLine().getStatusCode(), equalTo(OK.getStatus()));
-    }

/**
* Given a node name, finds the corresponding node ID.
*/
@@ -7,6 +7,7 @@
*/
package org.elasticsearch.http;

+import org.elasticsearch.client.Response;
import org.elasticsearch.common.network.NetworkModule;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.plugins.Plugin;
@@ -17,6 +18,8 @@
import java.util.Collection;
import java.util.List;

+import static org.hamcrest.Matchers.oneOf;
+
public abstract class HttpSmokeTestCase extends ESIntegTestCase {

@Override
@@ -42,4 +45,8 @@ protected Collection<Class<? extends Plugin>> nodePlugins() {
protected boolean ignoreExternalCluster() {
return true;
}

+    public static void assertOK(Response response) {
+        assertThat(response.getStatusLine().getStatusCode(), oneOf(200, 201));
+    }
}
@@ -0,0 +1,116 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0 and the Server Side Public License, v 1; you may not use this file except
* in compliance with, at your election, the Elastic License 2.0 or the Server
* Side Public License, v 1.
*/

package org.elasticsearch.http;

import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.test.ESIntegTestCase;
import org.elasticsearch.test.XContentTestUtils;
import org.elasticsearch.xcontent.json.JsonXContent;

import java.io.IOException;
import java.util.List;
import java.util.Map;

import static org.hamcrest.Matchers.aMapWithSize;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.greaterThan;
import static org.hamcrest.Matchers.greaterThanOrEqualTo;
import static org.hamcrest.Matchers.hasSize;
import static org.hamcrest.Matchers.notNullValue;
import static org.hamcrest.Matchers.nullValue;

@ESIntegTestCase.ClusterScope(scope = ESIntegTestCase.Scope.SUITE, supportsDedicatedMasters = false, numDataNodes = 0, numClientNodes = 0)
public class HttpStatsIT extends HttpSmokeTestCase {

@SuppressWarnings("unchecked")
public void testNodeHttpStats() throws IOException {
internalCluster().startNode();
performHttpRequests();

final Response response = getRestClient().performRequest(new Request("GET", "/_nodes/stats/http"));
assertOK(response);

final Map<String, Object> responseMap = XContentHelper.convertToMap(
JsonXContent.jsonXContent,
response.getEntity().getContent(),
false
);
final Map<String, Object> nodesMap = (Map<String, Object>) responseMap.get("nodes");

assertThat(nodesMap, aMapWithSize(1));
final String nodeId = nodesMap.keySet().iterator().next();

assertHttpStats(new XContentTestUtils.JsonMapView((Map<String, Object>) nodesMap.get(nodeId)));
}

@SuppressWarnings("unchecked")
public void testClusterInfoHttpStats() throws IOException {
internalCluster().ensureAtLeastNumDataNodes(3);
performHttpRequests();

final Response response = getRestClient().performRequest(new Request("GET", "/_info/http"));
assertOK(response);

final Map<String, Object> responseMap = XContentHelper.convertToMap(
JsonXContent.jsonXContent,
response.getEntity().getContent(),
false
);
assertHttpStats(new XContentTestUtils.JsonMapView(responseMap));
}

private void performHttpRequests() throws IOException {
// basic request
final RestClient restClient = getRestClient();
assertOK(restClient.performRequest(new Request("GET", "/")));
// request with body and URL placeholder
final Request searchRequest = new Request("GET", "*/_search");
searchRequest.setJsonEntity("""
{"query":{"match_all":{}}}""");
assertOK(restClient.performRequest(searchRequest));
// chunked response
assertOK(restClient.performRequest(new Request("GET", "/_cluster/state")));
// chunked text response
assertOK(restClient.performRequest(new Request("GET", "/_cat/nodes")));
}

private void assertHttpStats(XContentTestUtils.JsonMapView jsonMapView) {
final List<String> routes = List.of("/", "/_cat/nodes", "/{index}/_search", "/_cluster/state");

for (var route : routes) {
assertThat(route, jsonMapView.get("http.routes." + route), notNullValue());
assertThat(route, jsonMapView.get("http.routes." + route + ".requests.count"), equalTo(1));
assertThat(route, jsonMapView.get("http.routes." + route + ".requests.total_size_in_bytes"), greaterThanOrEqualTo(0));
assertThat(route, jsonMapView.get("http.routes." + route + ".responses.count"), equalTo(1));
assertThat(route, jsonMapView.get("http.routes." + route + ".responses.total_size_in_bytes"), greaterThan(1));
assertThat(route, jsonMapView.get("http.routes." + route + ".requests.size_histogram"), hasSize(1));
assertThat(route, jsonMapView.get("http.routes." + route + ".requests.size_histogram.0.count"), equalTo(1));
assertThat(route, jsonMapView.get("http.routes." + route + ".requests.size_histogram.0.lt_bytes"), notNullValue());
if (route.equals("/{index}/_search")) {
assertThat(route, jsonMapView.get("http.routes." + route + ".requests.size_histogram.0.ge_bytes"), notNullValue());
}
assertThat(route, jsonMapView.get("http.routes." + route + ".responses.size_histogram"), hasSize(1));
assertThat(route, jsonMapView.get("http.routes." + route + ".responses.size_histogram.0.count"), equalTo(1));
assertThat(route, jsonMapView.get("http.routes." + route + ".responses.size_histogram.0.lt_bytes"), notNullValue());
assertThat(route, jsonMapView.get("http.routes." + route + ".responses.size_histogram.0.ge_bytes"), notNullValue());
assertThat(route, jsonMapView.get("http.routes." + route + ".responses.handling_time_histogram"), hasSize(1));
assertThat(route, jsonMapView.get("http.routes." + route + ".responses.handling_time_histogram.0.count"), equalTo(1));
final int ltMillis = jsonMapView.get("http.routes." + route + ".responses.handling_time_histogram.0.lt_millis");
assertThat(route, ltMillis, notNullValue());
assertThat(
route,
jsonMapView.get("http.routes." + route + ".responses.handling_time_histogram.0.ge_millis"),
ltMillis > 1 ? notNullValue() : nullValue()
);
}
}
}
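Read together, the assertions in `assertHttpStats` imply a per-route stats document roughly shaped as follows. This is an illustrative sketch reconstructed from the field paths the test checks, with made-up counts and bucket bounds, not verbatim server output:

```json
{
  "http": {
    "routes": {
      "/{index}/_search": {
        "requests": {
          "count": 1,
          "total_size_in_bytes": 26,
          "size_histogram": [
            { "count": 1, "ge_bytes": 16, "lt_bytes": 32 }
          ]
        },
        "responses": {
          "count": 1,
          "total_size_in_bytes": 212,
          "size_histogram": [
            { "count": 1, "ge_bytes": 128, "lt_bytes": 256 }
          ],
          "handling_time_histogram": [
            { "count": 1, "ge_millis": 4, "lt_millis": 8 }
          ]
        }
      }
    }
  }
}
```

Note the boundary cases the test allows for: the first histogram bucket may omit `ge_bytes`/`ge_millis` when the observed value falls in the lowest bucket.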
@@ -1,7 +1,7 @@
{
"inference.delete_model":{
"documentation":{
-      "url":"https://www.elastic.co/guide/en/elasticsearch/reference/master/inference_delete_model.html",
+      "url":"https://www.elastic.co/guide/en/elasticsearch/reference/master/delete-inference-api.html",
"description":"Delete model in the Inference API"
},
"stability":"experimental",
@@ -1,7 +1,7 @@
{
"inference.get_model":{
"documentation":{
-      "url":"https://www.elastic.co/guide/en/elasticsearch/reference/master/inference_get_model.html",
+      "url":"https://www.elastic.co/guide/en/elasticsearch/reference/master/get-inference-api.html",
"description":"Get a model in the Inference API"
},
"stability":"experimental",
@@ -1,7 +1,7 @@
{
"inference.inference":{
"documentation":{
-      "url":"https://www.elastic.co/guide/en/elasticsearch/reference/master/inference.html",
+      "url":"https://www.elastic.co/guide/en/elasticsearch/reference/master/post-inference-api.html",
"description":"Perform inference on a model"
},
"stability":"experimental",
@@ -1,7 +1,7 @@
{
"inference.put_model":{
"documentation":{
-      "url":"https://www.elastic.co/guide/en/elasticsearch/reference/master/inference_put_model.html",
+      "url":"https://www.elastic.co/guide/en/elasticsearch/reference/master/put-inference-api.html",
"description":"Configure a model for use in the Inference API"
},
"stability":"experimental",
2 changes: 1 addition & 1 deletion server/src/main/java/org/elasticsearch/Build.java
@@ -121,7 +121,7 @@ private static Build findLocalBuild() {
}

public static String minimumCompatString(IndexVersion minimumCompatible) {
-        if (minimumCompatible.before(IndexVersion.V_8_500_000)) {
+        if (minimumCompatible.before(IndexVersion.FIRST_DETACHED_INDEX_VERSION)) {
// use Version for compatibility
return Version.fromId(minimumCompatible.id()).toString();
} else {
@@ -138,7 +138,8 @@ static TransportVersion def(int id) {
public static final TransportVersion PLUGIN_DESCRIPTOR_OPTIONAL_CLASSNAME = def(8_513_00_0);
public static final TransportVersion UNIVERSAL_PROFILING_LICENSE_ADDED = def(8_514_00_0);
public static final TransportVersion ELSER_SERVICE_MODEL_VERSION_ADDED = def(8_515_00_0);
-    public static final TransportVersion PIPELINES_IN_BULK_RESPONSE_ADDED = def(8_516_00_0);
+    public static final TransportVersion NODE_STATS_HTTP_ROUTE_STATS_ADDED = def(8_516_00_0);
+    public static final TransportVersion PIPELINES_IN_BULK_RESPONSE_ADDED = def(8_517_00_0);

/*
* STOP! READ THIS FIRST! No, really,