

# Elasticsearch Clients


This Java library extends the Elasticsearch Java client classes for added convenience.

It is not a plugin for Elasticsearch. Use it by importing the jar from Maven Central into your project.

The Elasticsearch node client and transport client APIs are unified in a `ClientMethods` interface. Under the hood, this
interface uses bulk services and index management features such as index creation, alias management, and retention policies.

Two classes `BulkNodeClient` and `BulkTransportClient` combine the client methods with the `BulkProcessor`,
provide some logging convenience, and still offer the `Client` interface of Elasticsearch by using the `client()` method.

A `MockTransportClient` implements the `BulkTransportClient` API but does not need a running Elasticsearch node
to connect to, which makes it useful for unit testing.
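
For instance, a unit test could exercise indexing logic against the mock. This is only a sketch: the `index` signature follows the `ClientMethods` API shown further down, while the no-argument constructor and `shutdown()` are assumptions, not verified against a particular release.

```java
// Sketch: exercise bulk indexing without a running cluster.
// MockTransportClient accepts the same calls as BulkTransportClient
// but never opens a network connection.
MockTransportClient client = new MockTransportClient(); // assumed constructor
client.index("test", "doc", "1", "{\"name\":\"Hello World\"}");
// ... assert on collected metrics or simply on the absence of exceptions ...
client.shutdown(); // assumed teardown method
```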

The client classes are enriched with metrics that can measure document count, size, and throughput.

A `ClientBuilder` helps build client instances. For example:

       ClientBuilder clientBuilder = ClientBuilder.builder()
                .put("client.transport.ping_timeout", settings.get("timeout", "30s"))
                .put(ClientBuilder.MAX_ACTIONS_PER_REQUEST, settings.getAsInt("maxbulkactions", 1000))
                .put(ClientBuilder.MAX_CONCURRENT_REQUESTS, settings.getAsInt("maxconcurrentbulkrequests", 4)) // 4 is an illustrative default
                .setMetric(new SimpleBulkMetric())
                .setControl(new SimpleBulkControl());
       BulkTransportClient client = clientBuilder.toBulkTransportClient();

For more examples, consult the integration tests at `src/integration-test/java`.

A re-implemented `BulkProcessor` allows flushing of documents before closing.

Also, a light-weight re-implementation of the `TransportClient` class is provided, with the following differences to the original `TransportClient`:

- no retry mechanism and no exponential back-off: if an error or exception is encountered, the client fails fast

- no _sniffing_, which means no additional nodes are detected at runtime

- the methods of the `TransportClient`, `TransportClientNodesService`, and `TransportClientProxy` classes are merged into one class

- configurable ping timeout

#### Some interesting methods

Here are some methods from the `ClientMethods` API. This is not the complete list, but it should
demonstrate the convenience.

Create a new index, using settings and mappings from input streams:

    ClientMethods newIndex(String index, String type, InputStream settings, InputStream mappings) throws IOException

Switch an index to bulk mode, disabling replicas and setting the refresh interval:

    ClientMethods startBulk(String index, long startRefreshIntervalSeconds, long stopRefreshIntervalSeconds) throws IOException

Index a document, using bulk mode automatically:

    ClientMethods index(String index, String type, String id, String source)

Wait for outstanding bulk responses from the cluster:

    ClientMethods waitForResponses(TimeValue maxWait) throws InterruptedException, ExecutionException

Update the replica level of an index:

    int updateReplicaLevel(String index, int level) throws IOException

Switch aliases from a previously created, timestamped index to a current index under the common base name `index`:

    void switchAliases(String index, String concreteIndex, List<String> extraAliases, IndexAliasAdder adder)

Apply a retention policy to an index: all indices older than `timestampdiff` should be deleted, but at least
`mintokeep` indices must be kept:

    void performRetentionPolicy(String index, String concreteIndex, int timestampdiff, int mintokeep)
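
Taken together, a typical bulk-load run might chain these calls. This is a sketch only: the method signatures come from the list above, while the builder configuration and the `settingsIn`/`mappingsIn` streams are illustrative assumptions.

```java
// Sketch of a bulk-indexing workflow composed from the ClientMethods API.
ClientMethods client = ClientBuilder.builder()
        .setMetric(new SimpleBulkMetric())
        .setControl(new SimpleBulkControl())
        .toBulkTransportClient();

// create the index from settings/mappings streams
// (settingsIn and mappingsIn are InputStreams opened by the caller)
client.newIndex("myindex", "doc", settingsIn, mappingsIn);

// enter bulk mode: replicas disabled, refresh interval relaxed
client.startBulk("myindex", -1L, 1000L);

// index documents; bulk batching happens automatically
client.index("myindex", "doc", "1", "{\"title\":\"Hello\"}");

// block until all outstanding bulk responses have arrived
client.waitForResponses(TimeValue.timeValueSeconds(30));

// leave bulk mode: restore one replica
client.updateReplicaLevel("myindex", 1);
```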

## Prerequisites

You will need Java 8. Although Elasticsearch 2.x can run on Java 7, Java 7 is not supported by this project.

## Dependencies

This project depends only on a slim version of Coda Hale's metrics library,
on Elasticsearch, and on the Log4j2 API.

## How to decode the Elasticsearch version

This project uses semantic versioning to determine the Elasticsearch upstream version it is built against.

The first three version numbers correspond to the Elasticsearch version. The last number is
an incrementing revision of this project.
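
As a self-contained illustration of this scheme, splitting a project version at its last dot recovers both parts (the version string `"2.2.1.3"` used here is a hypothetical example, not a published release):

```java
// Hypothetical helper illustrating the versioning scheme:
// the first three numbers identify the Elasticsearch release,
// the fourth is this project's own revision.
public class VersionDecoder {

    /** Returns the Elasticsearch version part, e.g. "2.2.1.3" -> "2.2.1". */
    public static String elasticsearchVersion(String projectVersion) {
        int lastDot = projectVersion.lastIndexOf('.');
        return projectVersion.substring(0, lastDot);
    }

    /** Returns this project's revision, e.g. "2.2.1.3" -> 3. */
    public static int projectRevision(String projectVersion) {
        int lastDot = projectVersion.lastIndexOf('.');
        return Integer.parseInt(projectVersion.substring(lastDot + 1));
    }

    public static void main(String[] args) {
        System.out.println(elasticsearchVersion("2.2.1.3")); // prints "2.2.1"
        System.out.println(projectRevision("2.2.1.3"));      // prints "3"
    }
}
```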

Please use exactly the Elasticsearch version that is declared in the project's version.
Other Elasticsearch versions do not work and will never work; it is not worth trying.
This is by design of the Elasticsearch project, because the internal node communication protocol depends on the
exact same API implementation. Also, the exact same Java virtual machine version is recommended on the server
and client side.