
Commit

Merge branch 'making-BulkRequest-implement-RefCounted' into making-BulkShardRequest-implement-RefCounted
masseyke committed Jan 4, 2024
2 parents 990684c + f584f2b commit f3ad2e7
Showing 135 changed files with 2,251 additions and 1,005 deletions.
5 changes: 5 additions & 0 deletions docs/changelog/101487.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,5 @@
pr: 101487
summary: Wait for async searches to finish when shutting down
area: Infra/Node Lifecycle
type: enhancement
issues: []
6 changes: 6 additions & 0 deletions docs/changelog/102207.yaml
@@ -0,0 +1,6 @@
pr: 102207
summary: Fix disk computation when initializing unassigned shards in desired balance
computation
area: Allocation
type: bug
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/103232.yaml
@@ -0,0 +1,5 @@
pr: 103232
summary: "Remove leniency in msearch parsing"
area: Search
type: enhancement
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/103453.yaml
@@ -0,0 +1,5 @@
pr: 103453
summary: Add expiration time to update api key api
area: Security
type: enhancement
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/103873.yaml
@@ -0,0 +1,5 @@
pr: 103873
summary: Catch exceptions during `pytorch_inference` startup
area: Machine Learning
type: bug
issues: []
11 changes: 8 additions & 3 deletions docs/reference/rest-api/security/bulk-update-api-keys.asciidoc
Expand Up @@ -30,7 +30,7 @@ This operation can greatly improve performance over making individual updates.

It's not possible to update expired or <<security-api-invalidate-api-key,invalidated>> API keys.

This API supports updates to API key access scope and metadata.
This API supports updates to API key access scope, metadata and expiration.
The access scope of each API key is derived from the <<security-api-bulk-update-api-keys-api-key-role-descriptors,`role_descriptors`>> you specify in the request, and a snapshot of the owner user's permissions at the time of the request.
The snapshot of the owner's permissions is updated automatically on every call.

Expand Down Expand Up @@ -63,6 +63,9 @@ The structure of a role descriptor is the same as the request for the <<api-key-
Within the `metadata` object, top-level keys beginning with an underscore (`_`) are reserved for system usage.
Any information specified with this parameter fully replaces metadata previously associated with the API key.

`expiration`::
(Optional, string) Expiration time for the API keys. By default, API keys never expire. If omitted, the keys' expiration is left unchanged.

[[security-api-bulk-update-api-keys-response-body]]
==== {api-response-body-title}

Expand Down Expand Up @@ -166,7 +169,8 @@ Further, assume that the owner user's permissions are:
--------------------------------------------------
// NOTCONSOLE

The following example updates the API keys created above, assigning them new role descriptors and metadata.
The following example updates the API keys created above, assigning them new role descriptors and metadata, and updating
their expiration time.

[source,console]
----
Expand All @@ -192,7 +196,8 @@ POST /_security/api_key/_bulk_update
"trusted": true,
"tags": ["production"]
}
}
},
"expiration": "30d"
}
----
// TEST[skip:api key ids not available]
Expand Down
5 changes: 4 additions & 1 deletion docs/reference/rest-api/security/update-api-key.asciidoc
Expand Up @@ -30,7 +30,7 @@ If you need to apply the same update to many API keys, you can use <<security-ap

It's not possible to update expired API keys, or API keys that have been invalidated by <<security-api-invalidate-api-key,invalidate API Key>>.

This API supports updates to an API key's access scope and metadata.
This API supports updates to an API key's access scope, metadata and expiration.
The access scope of an API key is derived from the <<security-api-update-api-key-api-key-role-descriptors,`role_descriptors`>> you specify in the request, and a snapshot of the owner user's permissions at the time of the request.
The snapshot of the owner's permissions is updated automatically on every call.

Expand Down Expand Up @@ -67,6 +67,9 @@ It supports nested data structure.
Within the `metadata` object, top-level keys beginning with `_` are reserved for system usage.
When specified, this fully replaces metadata previously associated with the API key.

`expiration`::
(Optional, string) Expiration time for the API key. By default, API keys never expire. If omitted, the key's expiration is left unchanged.
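
A minimal update request that changes only the expiration could look like the following sketch (the key id `my-api-key` is a placeholder, not from the examples above):

[source,console]
----
PUT /_security/api_key/my-api-key
{
  "expiration": "30d"
}
----
// TEST[skip:hypothetical API key id]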

[[security-api-update-api-key-response-body]]
==== {api-response-body-title}

Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -34,7 +34,7 @@ Use this API to update cross-cluster API keys created by the <<security-api-crea
It's not possible to update expired API keys, or API keys that have been invalidated by
<<security-api-invalidate-api-key,invalidate API Key>>.

This API supports updates to an API key's access scope and metadata.
This API supports updates to an API key's access scope, metadata and expiration.
The owner user's information, e.g. `username`, `realm`, is also updated automatically on every call.

NOTE: This API cannot update <<security-api-create-api-key,REST API keys>>, which should be updated by
Expand Down Expand Up @@ -66,6 +66,9 @@ It supports nested data structure.
Within the `metadata` object, top-level keys beginning with `_` are reserved for system usage.
When specified, this fully replaces metadata previously associated with the API key.

`expiration`::
(Optional, string) Expiration time for the API key. By default, API keys never expire. If omitted, the key's expiration is left unchanged.

[[security-api-update-cross-cluster-api-key-response-body]]
==== {api-response-body-title}

Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -58,9 +58,9 @@ DELETE /_enrich/policy/clientip_policy

// tag::demo-env[]

On the demo environment at https://esql.demo.elastic.co/[esql.demo.elastic.co],
On the demo environment at https://ela.st/ql/[ela.st/ql],
an enrich policy called `clientip_policy` has already been created and executed.
The policy links an IP address to an environment ("Development", "QA", or
"Production")
"Production").

// end::demo-env[]
Original file line number Diff line number Diff line change
Expand Up @@ -43,6 +43,6 @@ PUT sample_data/_bulk

The data set used in this guide has been preloaded into the Elastic {esql}
public demo environment. Visit
https://esql.demo.elastic.co/[esql.demo.elastic.co] to start using it.
https://ela.st/ql[ela.st/ql] to start using it.

// end::demo-env[]
Original file line number Diff line number Diff line change
Expand Up @@ -7,6 +7,7 @@
*/
package org.elasticsearch.transport.netty4;

import org.apache.lucene.util.Constants;
import org.elasticsearch.ESNetty4IntegTestCase;
import org.elasticsearch.action.admin.cluster.node.info.NodeInfo;
import org.elasticsearch.action.admin.cluster.node.info.NodesInfoResponse;
Expand All @@ -31,14 +32,16 @@
@ClusterScope(scope = Scope.SUITE, supportsDedicatedMasters = false, numDataNodes = 1, numClientNodes = 0)
public class Netty4TransportMultiPortIntegrationIT extends ESNetty4IntegTestCase {

private static final int NUMBER_OF_CLIENT_PORTS = Constants.WINDOWS ? 300 : 10;

private static int randomPort = -1;
private static String randomPortRange;

@Override
protected Settings nodeSettings(int nodeOrdinal, Settings otherSettings) {
if (randomPort == -1) {
randomPort = randomIntBetween(49152, 65525);
randomPortRange = Strings.format("%s-%s", randomPort, randomPort + 10);
randomPort = randomIntBetween(49152, 65535 - NUMBER_OF_CLIENT_PORTS);
randomPortRange = Strings.format("%s-%s", randomPort, randomPort + NUMBER_OF_CLIENT_PORTS);
}
Settings.Builder builder = Settings.builder()
.put(super.nodeSettings(nodeOrdinal, otherSettings))
Expand Down
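
The fix above reserves headroom for the whole port range: the base port is drawn from `[49152, 65535 - N]` so that `base + N` can never exceed the 16-bit port limit, where N is 300 on Windows and 10 elsewhere. A standalone sketch of the same arithmetic (Python, hypothetical helper name):

```python
import random

EPHEMERAL_START = 49152  # start of the IANA dynamic/ephemeral port range
MAX_PORT = 65535

def random_port_range(num_ports: int) -> tuple[int, int]:
    """Pick a random base port such that base + num_ports never exceeds MAX_PORT."""
    base = random.randint(EPHEMERAL_START, MAX_PORT - num_ports)
    return base, base + num_ports

# The previous code effectively fixed the width at 10 while drawing the base
# from [49152, 65525], which left no headroom for the 300-port Windows case.
base, upper = random_port_range(300)
assert EPHEMERAL_START <= base and upper <= MAX_PORT
```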
Original file line number Diff line number Diff line change
Expand Up @@ -782,10 +782,13 @@ public Settings onNodeStopped(String nodeName) {
* Tests shard recovery throttling on the target node. Node statistics should show throttling time on the target node, while no
* throttling should be shown on the source node because the target will accept data more slowly than the source's throttling threshold.
*/
@AwaitsFix(bugUrl = "https://github.com/elastic/elasticsearch/issues/103204")
public void testTargetThrottling() throws Exception {
logger.info("--> starting node A with default settings");
final String nodeA = internalCluster().startNode();
final String nodeA = internalCluster().startNode(
Settings.builder()
// Use a high value so that when unthrottling recoveries we do not cause accidental throttling on the source node.
.put(RecoverySettings.INDICES_RECOVERY_MAX_BYTES_PER_SEC_SETTING.getKey(), "200mb")
);

logger.info("--> creating index on node A");
ByteSizeValue shardSize = createAndPopulateIndex(INDEX_NAME, 1, SHARD_COUNT_1, REPLICA_COUNT_0).getShards()[0].getStats()
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -28,6 +28,7 @@
import org.elasticsearch.cluster.metadata.IndexMetadata;
import org.elasticsearch.cluster.metadata.NodesShutdownMetadata;
import org.elasticsearch.cluster.metadata.SingleNodeShutdownMetadata;
import org.elasticsearch.cluster.routing.ShardRoutingState;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.Priority;
import org.elasticsearch.common.Strings;
Expand All @@ -46,6 +47,7 @@
import java.util.stream.Stream;

import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.empty;
import static org.hamcrest.Matchers.hasSize;
import static org.hamcrest.Matchers.oneOf;

Expand Down Expand Up @@ -84,6 +86,7 @@ public void testRestartNodeDuringSnapshot() throws Exception {
);
return false;
});
addUnassignedShardsWatcher(clusterService, indexName);

PlainActionFuture.<Void, RuntimeException>get(
fut -> putShutdownMetadata(
Expand Down Expand Up @@ -117,6 +120,7 @@ public void testRemoveNodeDuringSnapshot() throws Exception {
final var clusterService = internalCluster().getCurrentMasterNodeInstance(ClusterService.class);
final var snapshotFuture = startFullSnapshotBlockedOnDataNode(randomIdentifier(), repoName, originalNode);
final var snapshotPausedListener = createSnapshotPausedListener(clusterService, repoName, indexName);
addUnassignedShardsWatcher(clusterService, indexName);

updateIndexSettings(Settings.builder().putNull(REQUIRE_NODE_NAME_SETTING), indexName);
putShutdownForRemovalMetadata(originalNode, clusterService);
Expand All @@ -128,7 +132,6 @@ public void testRemoveNodeDuringSnapshot() throws Exception {

if (randomBoolean()) {
internalCluster().stopNode(originalNode);
ensureGreen(indexName);
}

clearShutdownMetadata(clusterService);
Expand All @@ -146,6 +149,7 @@ public void testRemoveNodeAndFailoverMasterDuringSnapshot() throws Exception {
final var clusterService = internalCluster().getCurrentMasterNodeInstance(ClusterService.class);
final var snapshotFuture = startFullSnapshotBlockedOnDataNode(randomIdentifier(), repoName, originalNode);
final var snapshotPausedListener = createSnapshotPausedListener(clusterService, repoName, indexName);
addUnassignedShardsWatcher(clusterService, indexName);

final var snapshotStatusUpdateBarrier = new CyclicBarrier(2);
final var masterName = internalCluster().getMasterName();
Expand Down Expand Up @@ -258,6 +262,8 @@ public void testRemoveNodeDuringSnapshotWithOtherRunningShardSnapshots() throws
final var clusterService = internalCluster().getCurrentMasterNodeInstance(ClusterService.class);
final var snapshotFuture = startFullSnapshotBlockedOnDataNode(randomIdentifier(), repoName, nodeForRemoval);
final var snapshotPausedListener = createSnapshotPausedListener(clusterService, repoName, indexName);
addUnassignedShardsWatcher(clusterService, indexName);

waitForBlock(otherNode, repoName);

putShutdownForRemovalMetadata(nodeForRemoval, clusterService);
Expand Down Expand Up @@ -312,6 +318,7 @@ public void testStartRemoveNodeButDoNotComplete() throws Exception {
final var clusterService = internalCluster().getCurrentMasterNodeInstance(ClusterService.class);
final var snapshotFuture = startFullSnapshotBlockedOnDataNode(randomIdentifier(), repoName, primaryNode);
final var snapshotPausedListener = createSnapshotPausedListener(clusterService, repoName, indexName);
addUnassignedShardsWatcher(clusterService, indexName);

putShutdownForRemovalMetadata(primaryNode, clusterService);
unblockAllDataNodes(repoName); // lets the shard snapshot abort, but allocation filtering stops it from moving
Expand Down Expand Up @@ -351,6 +358,7 @@ public void testAbortSnapshotWhileRemovingNode() throws Exception {
);

final var clusterService = internalCluster().getCurrentMasterNodeInstance(ClusterService.class);
addUnassignedShardsWatcher(clusterService, indexName);
putShutdownForRemovalMetadata(primaryNode, clusterService);
unblockAllDataNodes(repoName); // lets the shard snapshot abort, but allocation filtering stops it from moving
safeAwait(updateSnapshotStatusBarrier); // wait for data node to notify master that the shard snapshot is paused
Expand Down Expand Up @@ -395,6 +403,7 @@ public void testShutdownWhileSuccessInFlight() throws Exception {
)
);

addUnassignedShardsWatcher(clusterService, indexName);
assertEquals(
SnapshotState.SUCCESS,
startFullSnapshot(repoName, randomIdentifier()).get(10, TimeUnit.SECONDS).getSnapshotInfo().state()
Expand Down Expand Up @@ -424,6 +433,18 @@ private static SubscribableListener<Void> createSnapshotPausedListener(
});
}

private static void addUnassignedShardsWatcher(ClusterService clusterService, String indexName) {
ClusterServiceUtils.addTemporaryStateListener(clusterService, state -> {
final var indexRoutingTable = state.routingTable().index(indexName);
if (indexRoutingTable == null) {
// index was deleted, can remove this listener now
return true;
}
assertThat(indexRoutingTable.shardsWithState(ShardRoutingState.UNASSIGNED), empty());
return false;
});
}

private static void putShutdownForRemovalMetadata(String nodeName, ClusterService clusterService) {
PlainActionFuture.<Void, RuntimeException>get(
fut -> putShutdownForRemovalMetadata(clusterService, nodeName, fut),
Expand Down
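
The `addUnassignedShardsWatcher` helper above relies on the temporary-listener pattern: the listener is invoked on every cluster state update, asserts an invariant, and deregisters itself by returning `true`. A minimal sketch of that pattern (Python, hypothetical names; cluster state is modeled as a plain dict):

```python
class ClusterStateListeners:
    """Sketch of a temporary state-listener registry, loosely modeled on
    ClusterServiceUtils.addTemporaryStateListener: a listener that returns
    True is removed after that invocation."""

    def __init__(self):
        self._listeners = []

    def add_temporary_listener(self, listener):
        # listener(state) -> True means "done, deregister me"
        self._listeners.append(listener)

    def publish(self, state):
        # Apply every listener; drop the ones that report completion.
        self._listeners = [f for f in self._listeners if not f(state)]


def add_unassigned_shards_watcher(listeners, index_name):
    """Assert no shard of index_name is unassigned, until the index is deleted."""

    def watch(state):
        routing = state.get(index_name)
        if routing is None:
            return True  # index was deleted: remove this listener
        assert not routing["unassigned"], f"unassigned shards in {index_name}"
        return False

    listeners.add_temporary_listener(watch)


registry = ClusterStateListeners()
add_unassigned_shards_watcher(registry, "test-index")
registry.publish({"test-index": {"unassigned": []}})  # invariant holds, listener kept
registry.publish({})  # index gone: listener deregisters itself
assert not registry._listeners
```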
Original file line number Diff line number Diff line change
Expand Up @@ -177,6 +177,7 @@ static TransportVersion def(int id) {
public static final TransportVersion ESQL_CLUSTER_ALIAS = def(8_565_00_0);
public static final TransportVersion SNAPSHOTS_IN_PROGRESS_TRACKING_REMOVING_NODES_ADDED = def(8_566_00_0);
public static final TransportVersion SMALLER_RELOAD_SECURE_SETTINGS_REQUEST = def(8_567_00_0);
public static final TransportVersion UPDATE_API_KEY_EXPIRATION_TIME_ADDED = def(8_568_00_0);

/*
* STOP! READ THIS FIRST! No, really,
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -29,6 +29,7 @@
import org.elasticsearch.xcontent.ToXContent;
import org.elasticsearch.xcontent.XContent;
import org.elasticsearch.xcontent.XContentBuilder;
import org.elasticsearch.xcontent.XContentParseException;
import org.elasticsearch.xcontent.XContentParser;
import org.elasticsearch.xcontent.XContentParserConfiguration;

Expand Down Expand Up @@ -246,6 +247,9 @@ public static void readMultiLineFormat(
)
) {
Map<String, Object> source = parser.map();
if (parser.nextToken() != null) {
throw new XContentParseException(parser.getTokenLocation(), "Unexpected token after end of object");
}
Object expandWildcards = null;
Object ignoreUnavailable = null;
Object ignoreThrottled = null;
Expand Down Expand Up @@ -312,6 +316,9 @@ public static void readMultiLineFormat(
)
) {
consumer.accept(searchRequest, parser);
if (parser.nextToken() != null) {
throw new XContentParseException(parser.getTokenLocation(), "Unexpected token after end of object");
}
}
// move pointers
from = nextMarker + 1;
Expand Down
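
The msearch change above removes leniency by checking that the parser has no tokens left after the body object, so a line like `{"index": "test"} {"extra": 1}` is now rejected instead of silently truncated. The same strictness can be sketched in Python with `json.JSONDecoder.raw_decode` (function name is illustrative, not from Elasticsearch):

```python
import json

def parse_strict(line: str) -> dict:
    """Parse one msearch NDJSON line, rejecting trailing content after the object."""
    obj, end = json.JSONDecoder().raw_decode(line)
    if line[end:].strip():
        # Mirrors the new check: any token after the end of the object is an error.
        raise ValueError(f"Unexpected token after end of object at offset {end}")
    return obj

assert parse_strict('{"index": "test"}') == {"index": "test"}
```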
