remove MonitorService from shutdown call, add more docs

Jörg Prante 2016-11-15 20:58:08 +01:00
parent 9c5c99d770
commit 4f1ce83167
4 changed files with 118 additions and 10 deletions

README.adoc (new file, 113 lines added)

@@ -0,0 +1,113 @@
# Elasticsearch Extras - Client
image:https://api.travis-ci.org/jprante/elasticsearch-extras-client.svg[title="Build status", link="https://travis-ci.org/jprante/elasticsearch-extras-client/"]
image:https://img.shields.io/sonar/http/nemo.sonarqube.com/org.xbib%3Aelasticsearch-extras-client/coverage.svg?style=flat-square[title="Coverage", link="https://sonarqube.com/dashboard/index?id=org.xbib%3Aelasticsearch-extras-client"]
image:https://maven-badges.herokuapp.com/maven-central/org.xbib/elasticsearch-extras-client/badge.svg[title="Maven Central", link="http://search.maven.org/#search%7Cga%7C1%7Cxbib%20elasticsearch-extras-client"]
image:https://img.shields.io/badge/License-Apache%202.0-blue.svg[title="Apache License 2.0", link="https://opensource.org/licenses/Apache-2.0"]
This Java library extends the Elasticsearch Java Client classes for better convenience.
It is not a plugin for Elasticsearch. Use it by importing the jar from Maven Central into your project.
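
To add the library with Gradle, a dependency declaration along these lines should work; the coordinates are taken from the Maven Central badge above, and the version should match your Elasticsearch installation as explained below.

[source,gradle]
----
dependencies {
    // coordinates assumed from the Maven Central badge; choose the version that matches your Elasticsearch version
    compile 'org.xbib:elasticsearch-extras-client:2.2.1.1'
}
----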
The Elasticsearch node client and transport client APIs are unified in a `ClientMethods` interface. This interface uses
bulk services and index management under the hood, like index creation, alias management, and retention policies.
Two classes `BulkNodeClient` and `BulkTransportClient` combine the client methods with the `BulkProcessor`,
and still offer the `Client` interface of Elasticsearch by using the `client()` method.
A `MockTransportClient` implements the `BulkTransportClient` API but does not need a running Elasticsearch node
to connect to. This is useful for unit testing.
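
For example, a unit test can run against such a mock client. The following is only a sketch: it assumes that the `ClientBuilder` shown below offers a `toMockTransportClient()` method as the counterpart of `toBulkTransportClient()`.

[source,java]
----
// a minimal sketch, assuming ClientBuilder.toMockTransportClient() exists
MockTransportClient client = ClientBuilder.builder()
        .setMetric(new SimpleBulkMetric())
        .setControl(new SimpleBulkControl())
        .toMockTransportClient();
// exercise the ClientMethods API without a running Elasticsearch node
client.index("testindex", "testtype", "1", "{\"message\":\"Hello World\"}");
----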
The client classes are enriched by metrics that can measure document count, size, and speed.
A `ClientBuilder` helps to build client instances. For example:
[source,java]
----
ClientBuilder clientBuilder = ClientBuilder.builder()
.put(elasticsearchSettings)
.put("client.transport.ping_timeout", settings.get("timeout", "30s"))
.put(ClientBuilder.MAX_ACTIONS_PER_REQUEST, settings.getAsInt("maxbulkactions", 1000))
.put(ClientBuilder.MAX_CONCURRENT_REQUESTS, settings.getAsInt("maxconcurrentbulkrequests",
Runtime.getRuntime().availableProcessors()))
.setMetric(new SimpleBulkMetric())
.setControl(new SimpleBulkControl());
BulkTransportClient client = clientBuilder.toBulkTransportClient();
----
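
Because the bulk clients still expose the native Elasticsearch `Client` via the `client()` method, the regular Elasticsearch API remains available, for example:

[source,java]
----
// drop down to the plain Elasticsearch Client when needed
Client esClient = client.client();
ClusterHealthResponse health = esClient.admin().cluster().prepareHealth().get();
----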
A re-implemented `BulkProcessor` allows flushing of documents before closing.
Also, a lightweight re-implementation of the `TransportClient` class is provided, with the following differences from the original `TransportClient`:
- no retry mechanism and no exponential back-off; if an error or exception is encountered, the client fails fast
- no _sniffing_, meaning no additional nodes are detected at runtime
- the methods of the `TransportClient`, `TransportClientNodesService`, and `TransportClientProxy` classes are merged into one class
- configurable ping timeout
## Some interesting methods
Here are some methods from the `ClientMethods` API. This is not a complete list, but it should demonstrate the convenience the interface offers.
Create a new index, using settings and mappings from input streams.
----
ClientMethods newIndex(String index, String type, InputStream settings, InputStream mappings) throws IOException
----
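
A typical call reads the settings and mappings from classpath resources; the resource names below are only placeholders.

[source,java]
----
// the resource names are hypothetical; any InputStream with JSON settings/mappings will do
try (InputStream indexSettings = getClass().getResourceAsStream("/myindex-settings.json");
     InputStream indexMappings = getClass().getResourceAsStream("/myindex-mappings.json")) {
    client.newIndex("myindex", "mytype", indexSettings, indexMappings);
}
----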
Switch an index to bulk mode: disable replicas and adjust the refresh interval.
----
ClientMethods startBulk(String index, long startRefreshIntervalSeconds, long stopRefreshIntervalSeconds) throws IOException
----
Index a document; bulk mode is used automatically.
----
ClientMethods index(String index, String type, String id, String source);
----
Wait for outstanding bulk responses from the cluster.
----
ClientMethods waitForResponses(TimeValue maxWait) throws InterruptedException, ExecutionException;
----
Update replica level on an index.
----
int updateReplicaLevel(String index, int level) throws IOException;
----
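
Taken together, a bulk indexing session could look roughly like this. This is a sketch; the two refresh interval arguments of `startBulk` are placeholder values, and their exact semantics should be checked against the Javadoc.

[source,java]
----
// switch to bulk mode (the interval arguments are placeholders)
client.startBulk("myindex", -1L, 1000L);
for (int i = 0; i < 10000; i++) {
    client.index("myindex", "mytype", Integer.toString(i), "{\"value\":" + i + "}");
}
// wait up to 60 seconds for all outstanding bulk responses
client.waitForResponses(TimeValue.timeValueSeconds(60));
// raise the replica level again after the bulk session
client.updateReplicaLevel("myindex", 1);
----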
Switch aliases from a previously created, timestamped index to the current index under the common base name `index`.
----
void switchAliases(String index, String concreteIndex, List<String> extraAliases, IndexAliasAdder adder);
----
Apply a retention policy to an index. All indices older than `timestampdiff` are deleted,
but at least `mintokeep` indices are kept.
----
void performRetentionPolicy(String index, String concreteIndex, int timestampdiff, int mintokeep);
----
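
A sketch of how these two calls can be combined after a new timestamped index has been built; the index names are placeholders, and passing `null` as the `IndexAliasAdder` is an assumption for the case where no extra alias configuration is needed.

[source,java]
----
// point the base name "myindex" to the freshly built "myindex20161115"
client.switchAliases("myindex", "myindex20161115", Collections.emptyList(), null);
// delete old "myindex" generations beyond a timestamp difference of 30, but always keep at least 2
client.performRetentionPolicy("myindex", "myindex20161115", 30, 2);
----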
## Prerequisites
You will need Java 8. Although Elasticsearch 2.x runs on Java 7, Java 7 is not supported by this project.
## Dependencies
This project depends only on https://github.com/xbib/metrics, which is a slim version of Coda Hale's metrics library,
and on Elasticsearch.
## How to decode the Elasticsearch version
This project uses semantic versioning to determine the Elasticsearch upstream version it is built against.
The first three version numbers correspond to the Elasticsearch version, and the last number is an
incrementing release counter of this project. For example, version 2.2.1.1 is built against Elasticsearch 2.2.1.
Please use exactly the Elasticsearch version that is declared in the project's version.
Other Elasticsearch versions do not work and will never work; it is not worth trying.
This is by design of the Elasticsearch project, because the internal node communication protocol depends on the
exact same API implementation. Also, the exact same Java virtual machine version is recommended on the server
and on the client side.

build.gradle

@@ -2,11 +2,11 @@
 plugins {
     id "org.sonarqube" version "2.2"
     id "org.ajoberstar.github-pages" version "1.6.0-rc.1"
-    id "org.xbib.gradle.plugin.jbake" version "1.1.0"
+    id "org.xbib.gradle.plugin.jbake" version "1.2.1"
 }
 
 group = 'org.xbib'
-version = '2.2.1.0'
+version = '2.2.1.1'
 
 printf "Host: %s\nOS: %s %s %s\nJVM: %s %s %s %s\nGroovy: %s\nGradle: %s\n" +
        "Build: group: ${project.group} name: ${project.name} version: ${project.version}\n",
@@ -45,8 +45,8 @@ sourceSets {
     }
 }
 
-sourceCompatibility = 1.8
-targetCompatibility = 1.8
+sourceCompatibility = JavaVersion.VERSION_1_8
+targetCompatibility = JavaVersion.VERSION_1_8
 
 configurations {
     wagon
@@ -64,6 +64,7 @@ dependencies {
     wagon 'org.apache.maven.wagon:wagon-ssh-external:2.10'
 }
 
+[compileJava, compileTestJava]*.options*.encoding = 'UTF-8'
 tasks.withType(JavaCompile) {
     options.compilerArgs << "-Xlint:all" << "-profile" << "compact3"
 }

TransportClient.java

@@ -37,7 +37,6 @@ import org.elasticsearch.common.settings.SettingsModule;
 import org.elasticsearch.common.transport.InetSocketTransportAddress;
 import org.elasticsearch.common.transport.TransportAddress;
 import org.elasticsearch.indices.breaker.CircuitBreakerModule;
-import org.elasticsearch.monitor.MonitorService;
 import org.elasticsearch.node.internal.InternalSettingsPreparer;
 import org.elasticsearch.plugins.Plugin;
 import org.elasticsearch.plugins.PluginsModule;
@@ -254,11 +253,6 @@ public class TransportClient extends AbstractClient {
             nodes = Collections.emptyList();
         }
         injector.getInstance(TransportService.class).close();
-        try {
-            injector.getInstance(MonitorService.class).close();
-        } catch (Exception e) {
-            logger.debug(e.getMessage(), e);
-        }
         for (Class<? extends LifecycleComponent> plugin : injector.getInstance(PluginsService.class).nodeServices()) {
             injector.getInstance(plugin).close();
         }