initial commit, fork of netty 4.1.104.Final

Jörg Prante 2024-01-06 00:06:56 +01:00
commit 707e054e50
1973 changed files with 421464 additions and 0 deletions

16
.gitignore vendored Normal file

@ -0,0 +1,16 @@
/.settings
/.classpath
/.project
/.gradle
**/data
**/work
**/logs
**/.idea
**/target
**/out
**/build
.DS_Store
*.iml
*~
*.key
*.crt

202
LICENSE.txt Normal file

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

30
NOTICE.txt Normal file

@ -0,0 +1,30 @@
The following changes were made to the original source code:
- removed slf4j, log4j, log4j2 logging
- removed internal classes for GraalVM (SCM)
- removed internal classes for Blockhound
- removed jetbrains annotations
- private copy of jctools in io.netty.jctools
- removed SecurityManager code
- added module info
- removed lzma dependency (too old for module)
- use JdkZlibDecoder/JdkZlibEncoder in websocketx
- removed JettyAlpnSslEngine
- removed JettyNpnSslEngine
- removed NPN
- use of javax.security.cert.X509Certificate replaced by java.security.cert.Certificate
- private copy of com.jcraft.zlib in io.netty.zlib
- precompiled io.netty.util.collection classes added
- refactored SSL handler to separate subproject netty-handler-ssl
- refactored compression codecs to separate subproject netty-handler-codec-compression
- moved netty-tcnative/openssl-classes to netty-internal-tcnative
- removed logging handler test
- removed native image handler test
Challenges for the Netty build on JDK 21:
- unmaintained com.jcraft.jzlib
- JCTools uses sun.misc.Unsafe, not VarHandles (a VarHandle sketch follows this list)
- PlatformDependent uses sun.misc.Unsafe
- finalize() in PoolThreadCache, PoolArena
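The two Unsafe items above have a standard JDK replacement: since Java 9, java.lang.invoke.VarHandle provides atomic field access without sun.misc.Unsafe, and finalize() maps to java.lang.ref.Cleaner. The following is a minimal illustrative sketch of the VarHandle idiom, not code taken from this fork or from JCTools:

import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

// Illustrative sketch only: shows the VarHandle idiom that can replace
// sun.misc.Unsafe-based compare-and-swap; not code from this repository.
public final class CasCounter {

    private volatile long value;

    private static final VarHandle VALUE;
    static {
        try {
            VALUE = MethodHandles.lookup().findVarHandle(CasCounter.class, "value", long.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // Atomic increment via compare-and-set, the role Unsafe.compareAndSwapLong plays upstream.
    public long increment() {
        for (;;) {
            long current = value;
            if (VALUE.compareAndSet(this, current, current + 1)) {
                return current + 1;
            }
        }
    }
}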

35
build.gradle Normal file

@ -0,0 +1,35 @@
plugins {
id 'maven-publish'
id 'signing'
id "io.github.gradle-nexus.publish-plugin" version "2.0.0-rc-1"
}
wrapper {
gradleVersion = libs.versions.gradle.get()
distributionType = Wrapper.DistributionType.ALL
}
ext {
user = 'joerg'
name = 'netty'
description = 'Fork of Netty 4.1.104.Final'
inceptionYear = '2023'
url = 'https://xbib.org/' + user + '/' + name
scmUrl = 'https://xbib.org/' + user + '/' + name
scmConnection = 'scm:git:git://xbib.org/' + user + '/' + name + '.git'
scmDeveloperConnection = 'scm:git:ssh://forgejo@xbib.org:' + user + '/' + name + '.git'
issueManagementSystem = 'Forgejo'
issueManagementUrl = ext.scmUrl + '/issues'
licenseName = 'The Apache License, Version 2.0'
licenseUrl = 'http://www.apache.org/licenses/LICENSE-2.0.txt'
}
subprojects {
apply from: rootProject.file('gradle/repositories/maven.gradle')
apply from: rootProject.file('gradle/compile/java.gradle')
apply from: rootProject.file('gradle/test/junit5.gradle')
apply from: rootProject.file('gradle/publish/maven.gradle')
}
apply from: rootProject.file('gradle/publish/sonatype.gradle')
apply from: rootProject.file('gradle/publish/forgejo.gradle')

3
gradle.properties Normal file

@ -0,0 +1,3 @@
group = org.xbib
name = netty
version = 4.1.104


@ -0,0 +1,37 @@
apply plugin: 'java-library'
java {
toolchain {
languageVersion = JavaLanguageVersion.of(21)
}
modularity.inferModulePath.set(true)
withSourcesJar()
withJavadocJar()
}
jar {
manifest {
attributes('Implementation-Version': project.version)
attributes('X-Java-Compiler-Version': JavaLanguageVersion.of(21).toString())
}
}
tasks.withType(JavaCompile) {
options.fork = true
options.forkOptions.jvmArgs += [
'-Duser.language=en',
'-Duser.country=US',
]
options.compilerArgs += [
'-Xlint:all',
'--add-exports=jdk.unsupported/sun.misc=org.xbib.io.netty.jctools',
'--add-exports=java.base/jdk.internal.misc=org.xbib.io.netty.util'
]
options.encoding = 'UTF-8'
}
tasks.withType(Javadoc) {
options.addStringOption('Xdoclint:none', '-quiet')
options.encoding = 'UTF-8'
}
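The --add-exports flags above assume the subprojects are compiled as named modules (org.xbib.io.netty.jctools, org.xbib.io.netty.util), matching the "added module info" entry in NOTICE.txt. The module descriptors themselves are not shown in this excerpt; the snippet below is only a hypothetical sketch of what the util module's descriptor might declare, and the real requires/exports clauses may differ:

// Hypothetical module-info.java for the netty-util subproject; for illustration only.
module org.xbib.io.netty.util {
    // sun.misc (Unsafe) lives in the jdk.unsupported module.
    requires jdk.unsupported;
    // Public utility packages as found in upstream netty-common.
    exports io.netty.util;
    exports io.netty.util.concurrent;
    exports io.netty.util.collection;
}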


@ -0,0 +1,19 @@
apply plugin: 'org.xbib.gradle.plugin.asciidoctor'
asciidoctor {
backends 'html5'
outputDir = file("${rootProject.projectDir}/docs")
separateOutputDirs = false
attributes 'source-highlighter': 'coderay',
idprefix: '',
idseparator: '-',
toc: 'left',
doctype: 'book',
icons: 'font',
encoding: 'utf-8',
sectlink: true,
sectanchors: true,
linkattrs: true,
imagesdir: 'img',
stylesheet: "${projectDir}/src/docs/asciidoc/css/foundation.css"
}

8
gradle/ide/idea.gradle Normal file

@ -0,0 +1,8 @@
apply plugin: 'idea'
idea {
module {
outputDir file('build/classes/java/main')
testOutputDir file('build/classes/java/test')
}
}


@ -0,0 +1,16 @@
if (project.hasProperty('forgeJoToken')) {
publishing {
repositories {
maven {
url 'https://xbib.org/api/packages/joerg/maven'
credentials(HttpHeaderCredentials) {
name = "Authorization"
value = "token ${project.property('forgeJoToken')}"
}
authentication {
header(HttpHeaderAuthentication)
}
}
}
}
}

27
gradle/publish/ivy.gradle Normal file

@ -0,0 +1,27 @@
apply plugin: 'ivy-publish'
publishing {
repositories {
ivy {
url = "https://xbib.org/repo"
}
}
publications {
ivy(IvyPublication) {
from components.java
descriptor {
license {
name = 'The Apache License, Version 2.0'
url = 'http://www.apache.org/licenses/LICENSE-2.0.txt'
}
author {
name = 'Jörg Prante'
url = 'https://xbib.org/joerg'
}
descriptor.description {
text = rootProject.ext.description
}
}
}
}
}


@ -0,0 +1,52 @@
publishing {
publications {
"${project.name}"(MavenPublication) {
from components.java
pom {
artifactId = project.name
name = project.name
version = project.version
description = rootProject.ext.description
url = rootProject.ext.url
inceptionYear = rootProject.ext.inceptionYear
packaging = 'jar'
organization {
name = 'xbib'
url = 'https://xbib.org'
}
developers {
developer {
id = 'jprante'
name = 'Jörg Prante'
email = 'joergprante@gmail.com'
url = 'https://xbib.org/joerg'
}
}
scm {
url = rootProject.ext.scmUrl
connection = rootProject.ext.scmConnection
developerConnection = rootProject.ext.scmDeveloperConnection
}
issueManagement {
system = rootProject.ext.issueManagementSystem
url = rootProject.ext.issueManagementUrl
}
licenses {
license {
name = rootProject.ext.licenseName
url = rootProject.ext.licenseUrl
distribution = 'repo'
}
}
}
}
}
}
if (project.hasProperty("signing.keyId")) {
apply plugin: 'signing'
signing {
sign publishing.publications."${project.name}"
}
}


@ -0,0 +1,12 @@
if (project.hasProperty('ossrhUsername') && project.hasProperty('ossrhPassword')) {
nexusPublishing {
repositories {
sonatype {
username = project.property('ossrhUsername')
password = project.property('ossrhPassword')
packageGroup = "org.xbib"
}
}
}
}


@ -0,0 +1,19 @@
apply plugin: 'checkstyle'
tasks.withType(Checkstyle) {
ignoreFailures = true
reports {
xml.getRequired().set(true)
html.getRequired().set(true)
}
}
checkstyle {
toolVersion = '10.4'
configFile = rootProject.file('gradle/quality/checkstyle.xml')
ignoreFailures = true
showViolations = false
checkstyleMain {
source = sourceSets.main.allSource
}
}


@ -0,0 +1,333 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE module PUBLIC
"-//Puppy Crawl//DTD Check Configuration 1.3//EN"
"http://www.puppycrawl.com/dtds/configuration_1_3.dtd">
<!-- This is a checkstyle configuration file. For descriptions of
what the following rules do, please see the checkstyle configuration
page at http://checkstyle.sourceforge.net/config.html -->
<module name="Checker">
<module name="BeforeExecutionExclusionFileFilter">
<property name="fileNamePattern" value=".*(Example|Test|module-info)(\$.*)?"/>
</module>
<module name="FileTabCharacter">
<!-- Checks that there are no tab characters in the file.
-->
</module>
<module name="NewlineAtEndOfFile">
<property name="lineSeparator" value="lf"/>
</module>
<module name="RegexpSingleline">
<!-- Checks that FIXME is not used in comments. TODO is preferred.
-->
<property name="format" value="((//.*)|(\*.*))FIXME" />
<property name="message" value='TODO is preferred to FIXME. e.g. "TODO(johndoe): Refactor when v2 is released."' />
</module>
<module name="RegexpSingleline">
<!-- Checks that TODOs are named. (Actually, just that they are followed
by an open paren.)
-->
<property name="format" value="((//.*)|(\*.*))TODO[^(]" />
<property name="message" value='All TODOs should be named. e.g. "TODO(johndoe): Refactor when v2 is released."' />
</module>
<module name="JavadocPackage">
<!-- Checks that each Java package has a Javadoc file used for commenting.
Only allows a package-info.java, not package.html. -->
</module>
<!-- All Java AST specific tests live under TreeWalker module. -->
<module name="TreeWalker">
<!--
IMPORT CHECKS
-->
<module name="RedundantImport">
<!-- Checks for redundant import statements. -->
<property name="severity" value="error"/>
</module>
<module name="ImportOrder">
<!-- Checks for out of order import statements. -->
<property name="severity" value="warning"/>
<!-- <property name="tokens" value="IMPORT, STATIC_IMPORT"/> -->
<property name="separated" value="false"/>
<property name="groups" value="*"/>
<!-- <property name="option" value="above"/> -->
<property name="sortStaticImportsAlphabetically" value="true"/>
</module>
<module name="CustomImportOrder">
<!-- <property name="customImportOrderRules" value="THIRD_PARTY_PACKAGE###SPECIAL_IMPORTS###STANDARD_JAVA_PACKAGE###STATIC"/> -->
<!-- <property name="specialImportsRegExp" value="^javax\."/> -->
<!-- <property name="standardPackageRegExp" value="^java\."/> -->
<property name="sortImportsInGroupAlphabetically" value="true"/>
<property name="separateLineBetweenGroups" value="false"/>
</module>
<!--
JAVADOC CHECKS
-->
<!-- Checks for Javadoc comments. -->
<!-- See http://checkstyle.sf.net/config_javadoc.html -->
<module name="JavadocMethod">
<property name="accessModifiers" value="protected"/>
<property name="severity" value="warning"/>
<property name="allowMissingParamTags" value="true"/>
<property name="allowMissingReturnTag" value="true"/>
</module>
<module name="JavadocType">
<property name="scope" value="protected"/>
<property name="severity" value="error"/>
</module>
<module name="JavadocStyle">
<property name="severity" value="warning"/>
</module>
<!--
NAMING CHECKS
-->
<!-- Item 38 - Adhere to generally accepted naming conventions -->
<module name="PackageName">
<!-- Validates identifiers for package names against the
supplied expression. -->
<!-- Here the default checkstyle rule restricts package name parts to
seven characters, this is not in line with common practice at Google.
-->
<property name="format" value="^[a-z]+(\.[a-z][a-z0-9]{1,})*$"/>
<property name="severity" value="warning"/>
</module>
<module name="TypeNameCheck">
<!-- Validates static, final fields against the
expression "^[A-Z][a-zA-Z0-9]*$". -->
<metadata name="altname" value="TypeName"/>
<property name="severity" value="warning"/>
</module>
<module name="ConstantNameCheck">
<!-- Validates non-private, static, final fields against the supplied
public/package final fields "^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$". -->
<metadata name="altname" value="ConstantName"/>
<property name="applyToPublic" value="true"/>
<property name="applyToProtected" value="true"/>
<property name="applyToPackage" value="true"/>
<property name="applyToPrivate" value="false"/>
<property name="format" value="^([A-Z][A-Z0-9]*(_[A-Z0-9]+)*|FLAG_.*)$"/>
<message key="name.invalidPattern"
value="Variable ''{0}'' should be in ALL_CAPS (if it is a constant) or be private (otherwise)."/>
<property name="severity" value="warning"/>
</module>
<module name="StaticVariableNameCheck">
<!-- Validates static, non-final fields against the supplied
expression "^[a-z][a-zA-Z0-9]*_?$". -->
<metadata name="altname" value="StaticVariableName"/>
<property name="applyToPublic" value="true"/>
<property name="applyToProtected" value="true"/>
<property name="applyToPackage" value="true"/>
<property name="applyToPrivate" value="true"/>
<property name="format" value="^[a-z][a-zA-Z0-9]*_?$"/>
<property name="severity" value="warning"/>
</module>
<module name="MemberNameCheck">
<!-- Validates non-static members against the supplied expression. -->
<metadata name="altname" value="MemberName"/>
<property name="applyToPublic" value="true"/>
<property name="applyToProtected" value="true"/>
<property name="applyToPackage" value="true"/>
<property name="applyToPrivate" value="true"/>
<property name="format" value="^[a-z][a-zA-Z0-9]*$"/>
<property name="severity" value="warning"/>
</module>
<module name="MethodNameCheck">
<!-- Validates identifiers for method names. -->
<metadata name="altname" value="MethodName"/>
<property name="format" value="^[a-z][a-zA-Z0-9]*(_[a-zA-Z0-9]+)*$"/>
<property name="severity" value="warning"/>
</module>
<module name="ParameterName">
<!-- Validates identifiers for method parameters against the
expression "^[a-z][a-zA-Z0-9]*$". -->
<property name="severity" value="warning"/>
</module>
<module name="LocalFinalVariableName">
<!-- Validates identifiers for local final variables against the
expression "^[a-z][a-zA-Z0-9]*$". -->
<property name="severity" value="warning"/>
</module>
<module name="LocalVariableName">
<!-- Validates identifiers for local variables against the
expression "^[a-z][a-zA-Z0-9]*$". -->
<property name="severity" value="warning"/>
</module>
<!--
LENGTH and CODING CHECKS
-->
<module name="LeftCurly">
<!-- Checks for placement of the left curly brace ('{'). -->
<property name="severity" value="warning"/>
</module>
<module name="RightCurly">
<!-- Checks right curlies on CATCH, ELSE, and TRY blocks are on
the same line. e.g., the following example is fine:
<pre>
if {
...
} else
</pre>
-->
<!-- This next example is not fine:
<pre>
if {
...
}
else
</pre>
-->
<property name="option" value="same"/>
<property name="severity" value="warning"/>
</module>
<!-- Checks for braces around if and else blocks -->
<module name="NeedBraces">
<property name="severity" value="warning"/>
<property name="tokens" value="LITERAL_IF, LITERAL_ELSE, LITERAL_FOR, LITERAL_WHILE, LITERAL_DO"/>
</module>
<module name="UpperEll">
<!-- Checks that long constants are defined with an upper ell.-->
<property name="severity" value="error"/>
</module>
<module name="FallThrough">
<!-- Warn about falling through to the next case statement. Similar to
javac -Xlint:fallthrough, but the check is suppressed if a single-line comment
on the last non-blank line preceding the fallen-into case contains 'fall through' (or
some other variants which we don't publicized to promote consistency).
-->
<property name="reliefPattern"
value="fall through|Fall through|fallthru|Fallthru|falls through|Falls through|fallthrough|Fallthrough|No break|NO break|no break|continue on"/>
<property name="severity" value="error"/>
</module>
<!--
MODIFIERS CHECKS
-->
<module name="ModifierOrder">
<!-- Warn if modifier order is inconsistent with JLS3 8.1.1, 8.3.1, and
8.4.3. The prescribed order is:
public, protected, private, abstract, static, final, transient, volatile,
synchronized, native, strictfp
-->
</module>
<!--
WHITESPACE CHECKS
-->
<module name="WhitespaceAround">
<!-- Checks that various tokens are surrounded by whitespace.
This includes most binary operators and keywords followed
by regular or curly braces.
-->
<property name="tokens" value="ASSIGN, BAND, BAND_ASSIGN, BOR,
BOR_ASSIGN, BSR, BSR_ASSIGN, BXOR, BXOR_ASSIGN, COLON, DIV, DIV_ASSIGN,
EQUAL, GE, GT, LAND, LE, LITERAL_CATCH, LITERAL_DO, LITERAL_ELSE,
LITERAL_FINALLY, LITERAL_FOR, LITERAL_IF, LITERAL_RETURN,
LITERAL_SYNCHRONIZED, LITERAL_TRY, LITERAL_WHILE, LOR, LT, MINUS,
MINUS_ASSIGN, MOD, MOD_ASSIGN, NOT_EQUAL, PLUS, PLUS_ASSIGN, QUESTION,
SL, SL_ASSIGN, SR_ASSIGN, STAR, STAR_ASSIGN"/>
<property name="severity" value="error"/>
</module>
<module name="WhitespaceAfter">
<!-- Checks that commas, semicolons and typecasts are followed by
whitespace.
-->
<property name="tokens" value="COMMA, SEMI, TYPECAST"/>
</module>
<module name="NoWhitespaceAfter">
<!-- Checks that there is no whitespace after various unary operators.
Linebreaks are allowed.
-->
<property name="tokens" value="BNOT, DEC, DOT, INC, LNOT, UNARY_MINUS,
UNARY_PLUS"/>
<property name="allowLineBreaks" value="true"/>
<property name="severity" value="error"/>
</module>
<module name="NoWhitespaceBefore">
<!-- Checks that there is no whitespace before various unary operators.
Linebreaks are allowed.
-->
<property name="tokens" value="SEMI, DOT, POST_DEC, POST_INC"/>
<property name="allowLineBreaks" value="true"/>
<property name="severity" value="error"/>
</module>
<module name="ParenPad">
<!-- Checks that there is no whitespace before close parens or after
open parens.
-->
<property name="severity" value="warning"/>
</module>
</module>
<module name="LineLength">
<!-- Checks if a line is too long. -->
<property name="max" value="${com.puppycrawl.tools.checkstyle.checks.sizes.LineLength.max}" default="128"/>
<property name="severity" value="error"/>
<!--
The default ignore pattern exempts the following elements:
- import statements
- long URLs inside comments
-->
<property name="ignorePattern"
value="${com.puppycrawl.tools.checkstyle.checks.sizes.LineLength.ignorePattern}"
default="^(package .*;\s*)|(import .*;\s*)|( *(\*|//).*https?://.*)$"/>
</module>
</module>


@ -0,0 +1,11 @@
cyclonedxBom {
includeConfigs = [ 'runtimeClasspath' ]
skipConfigs = [ 'compileClasspath', 'testCompileClasspath' ]
projectType = "library"
schemaVersion = "1.4"
destination = file("build/reports")
outputName = "bom"
outputFormat = "json"
includeBomSerialNumber = true
componentVersion = "2.0.0"
}

17
gradle/quality/pmd.gradle Normal file

@ -0,0 +1,17 @@
apply plugin: 'pmd'
tasks.withType(Pmd) {
ignoreFailures = true
reports {
xml.getRequired().set(true)
html.getRequired().set(true)
}
}
pmd {
ignoreFailures = true
consoleOutput = false
toolVersion = "6.51.0"
ruleSetFiles = rootProject.files('gradle/quality/pmd/category/java/bestpractices.xml')
}

File diff suppressed because it is too large


@ -0,0 +1,10 @@
rulesets.filenames=\
category/java/bestpractices.xml,\
category/java/codestyle.xml,\
category/java/design.xml,\
category/java/documentation.xml,\
category/java/errorprone.xml,\
category/java/multithreading.xml,\
category/java/performance.xml,\
category/java/security.xml

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -0,0 +1,144 @@
<?xml version="1.0"?>
<ruleset name="Documentation"
xmlns="http://pmd.sourceforge.net/ruleset/2.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://pmd.sourceforge.net/ruleset/2.0.0 https://pmd.sourceforge.io/ruleset_2_0_0.xsd">
<description>
Rules that are related to code documentation.
</description>
<rule name="CommentContent"
since="5.0"
message="Invalid words or phrases found"
class="net.sourceforge.pmd.lang.java.rule.documentation.CommentContentRule"
externalInfoUrl="${pmd.website.baseurl}/pmd_rules_java_documentation.html#commentcontent">
<description>
A rule for the politically correct... we don't want to offend anyone.
</description>
<priority>3</priority>
<example>
<![CDATA[
//OMG, this is horrible, Bob is an idiot !!!
]]>
</example>
</rule>
<rule name="CommentRequired"
since="5.1"
message="Comment is required"
class="net.sourceforge.pmd.lang.java.rule.documentation.CommentRequiredRule"
externalInfoUrl="${pmd.website.baseurl}/pmd_rules_java_documentation.html#commentrequired">
<description>
Denotes whether comments are required (or unwanted) for specific language elements.
</description>
<priority>3</priority>
<example>
<![CDATA[
/**
*
*
* @author Jon Doe
*/
]]>
</example>
</rule>
<rule name="CommentSize"
since="5.0"
message="Comment is too large"
class="net.sourceforge.pmd.lang.java.rule.documentation.CommentSizeRule"
externalInfoUrl="${pmd.website.baseurl}/pmd_rules_java_documentation.html#commentsize">
<description>
Determines whether the dimensions of non-header comments found are within the specified limits.
</description>
<priority>3</priority>
<example>
<![CDATA[
/**
*
* too many lines!
*
*
*
*
*
*
*
*
*
*
*
*
*/
]]>
</example>
</rule>
<rule name="UncommentedEmptyConstructor"
language="java"
since="3.4"
message="Document empty constructor"
class="net.sourceforge.pmd.lang.rule.XPathRule"
typeResolution="true"
externalInfoUrl="${pmd.website.baseurl}/pmd_rules_java_documentation.html#uncommentedemptyconstructor">
<description>
Uncommented Empty Constructor finds instances where a constructor does not
contain statements, but there is no comment. By explicitly commenting empty
constructors it is easier to distinguish between intentional (commented)
and unintentional empty constructors.
</description>
<priority>3</priority>
<properties>
<property name="xpath">
<value>
<![CDATA[
//ConstructorDeclaration[@Private='false']
[count(BlockStatement) = 0 and ($ignoreExplicitConstructorInvocation = 'true' or not(ExplicitConstructorInvocation)) and @containsComment = 'false']
[not(../Annotation/MarkerAnnotation/Name[pmd-java:typeIs('javax.inject.Inject')])]
]]>
</value>
</property>
<property name="ignoreExplicitConstructorInvocation" type="Boolean" description="Ignore explicit constructor invocation when deciding whether constructor is empty or not" value="false"/>
</properties>
<example>
<![CDATA[
public Foo() {
// This constructor is intentionally empty. Nothing special is needed here.
}
]]>
</example>
</rule>
<rule name="UncommentedEmptyMethodBody"
language="java"
since="3.4"
message="Document empty method body"
class="net.sourceforge.pmd.lang.rule.XPathRule"
externalInfoUrl="${pmd.website.baseurl}/pmd_rules_java_documentation.html#uncommentedemptymethodbody">
<description>
Uncommented Empty Method Body finds instances where a method body does not contain
statements, but there is no comment. By explicitly commenting empty method bodies
it is easier to distinguish between intentional (commented) and unintentional
empty methods.
</description>
<priority>3</priority>
<properties>
<property name="xpath">
<value>
<![CDATA[
//MethodDeclaration/Block[count(BlockStatement) = 0 and @containsComment = 'false']
]]>
</value>
</property>
</properties>
<example>
<![CDATA[
public void doSomething() {
}
]]>
</example>
</rule>
</ruleset>

File diff suppressed because it is too large


@ -0,0 +1,393 @@
<?xml version="1.0"?>
<ruleset name="Multithreading"
xmlns="http://pmd.sourceforge.net/ruleset/2.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://pmd.sourceforge.net/ruleset/2.0.0 https://pmd.sourceforge.io/ruleset_2_0_0.xsd">
<description>
Rules that flag issues when dealing with multiple threads of execution.
</description>
<rule name="AvoidSynchronizedAtMethodLevel"
language="java"
since="3.0"
message="Use block level rather than method level synchronization"
class="net.sourceforge.pmd.lang.rule.XPathRule"
externalInfoUrl="${pmd.website.baseurl}/pmd_rules_java_multithreading.html#avoidsynchronizedatmethodlevel">
<description>
Method-level synchronization can cause problems when new code is added to the method.
Block-level synchronization helps to ensure that only the code that needs synchronization
gets it.
</description>
<priority>3</priority>
<properties>
<property name="xpath">
<value>//MethodDeclaration[@Synchronized='true']</value>
</property>
</properties>
<example>
<![CDATA[
public class Foo {
// Try to avoid this:
synchronized void foo() {
}
// Prefer this:
void bar() {
synchronized(this) {
}
}
// Try to avoid this for static methods:
static synchronized void fooStatic() {
}
// Prefer this:
static void barStatic() {
synchronized(Foo.class) {
}
}
}
]]>
</example>
</rule>
<rule name="AvoidThreadGroup"
language="java"
since="3.6"
message="Avoid using java.lang.ThreadGroup; it is not thread safe"
class="net.sourceforge.pmd.lang.rule.XPathRule"
typeResolution="true"
externalInfoUrl="${pmd.website.baseurl}/pmd_rules_java_multithreading.html#avoidthreadgroup">
<description>
Avoid using java.lang.ThreadGroup; although it is intended to be used in a threaded environment
it contains methods that are not thread-safe.
</description>
<priority>3</priority>
<properties>
<property name="xpath">
<value>
<![CDATA[
//AllocationExpression/ClassOrInterfaceType[pmd-java:typeIs('java.lang.ThreadGroup')]|
//PrimarySuffix[contains(@Image, 'getThreadGroup')]
]]>
</value>
</property>
</properties>
<example>
<![CDATA[
public class Bar {
void buz() {
ThreadGroup tg = new ThreadGroup("My threadgroup");
tg = new ThreadGroup(tg, "my thread group");
tg = Thread.currentThread().getThreadGroup();
tg = System.getSecurityManager().getThreadGroup();
}
}
]]>
</example>
</rule>
<rule name="AvoidUsingVolatile"
language="java"
since="4.1"
class="net.sourceforge.pmd.lang.rule.XPathRule"
message="Use of modifier volatile is not recommended."
externalInfoUrl="${pmd.website.baseurl}/pmd_rules_java_multithreading.html#avoidusingvolatile">
<description>
Use of the keyword 'volatile' is generally used to fine tune a Java application, and therefore, requires
a good expertise of the Java Memory Model. Moreover, its range of action is somewhat misknown. Therefore,
the volatile keyword should not be used for maintenance purpose and portability.
</description>
<priority>2</priority>
<properties>
<property name="xpath">
<value>//FieldDeclaration[contains(@Volatile,'true')]</value>
</property>
</properties>
<example>
<![CDATA[
public class ThrDeux {
private volatile String var1; // not suggested
private String var2; // preferred
}
]]>
</example>
</rule>
<rule name="DoNotUseThreads"
language="java"
since="4.1"
class="net.sourceforge.pmd.lang.rule.XPathRule"
message="To be compliant to J2EE, a webapp should not use any thread."
externalInfoUrl="${pmd.website.baseurl}/pmd_rules_java_multithreading.html#donotusethreads">
<description>
The J2EE specification explicitly forbids the use of threads.
</description>
<priority>3</priority>
<properties>
<property name="xpath">
<value>//ClassOrInterfaceType[@Image = 'Thread' or @Image = 'Runnable']</value>
</property>
</properties>
<example>
<![CDATA[
// This is not allowed
public class UsingThread extends Thread {
}
// Neither this,
public class OtherThread implements Runnable {
// Nor this ...
public void methode() {
Runnable thread = new Thread(); thread.run();
}
}
]]>
</example>
</rule>
<rule name="DontCallThreadRun"
language="java"
since="4.3"
message="Don't call Thread.run() explicitly, use Thread.start()"
class="net.sourceforge.pmd.lang.rule.XPathRule"
typeResolution="true"
externalInfoUrl="${pmd.website.baseurl}/pmd_rules_java_multithreading.html#dontcallthreadrun">
<description>
Explicitly calling Thread.run() method will execute in the caller's thread of control. Instead, call Thread.start() for the intended behavior.
</description>
<priority>4</priority>
<properties>
<property name="xpath">
<value>
<![CDATA[
//StatementExpression/PrimaryExpression
[
PrimaryPrefix
[
./Name[ends-with(@Image, '.run') or @Image = 'run']
and substring-before(Name/@Image, '.') =//VariableDeclarator/VariableDeclaratorId/@Image
[../../../Type/ReferenceType/ClassOrInterfaceType[pmd-java:typeIs('java.lang.Thread')]]
or (./AllocationExpression/ClassOrInterfaceType[pmd-java:typeIs('java.lang.Thread')]
and ../PrimarySuffix[@Image = 'run'])
]
]
]]>
</value>
</property>
</properties>
<example>
<![CDATA[
Thread t = new Thread();
t.run(); // use t.start() instead
new Thread().run(); // same violation
]]>
</example>
</rule>
<rule name="DoubleCheckedLocking"
language="java"
since="1.04"
message="Double checked locking is not thread safe in Java."
class="net.sourceforge.pmd.lang.java.rule.multithreading.DoubleCheckedLockingRule"
externalInfoUrl="${pmd.website.baseurl}/pmd_rules_java_multithreading.html#doublecheckedlocking">
<description>
Partially created objects can be returned by the Double Checked Locking pattern when used in Java.
An optimizing JRE may assign a reference to the baz variable before it calls the constructor of the object the
reference points to.
Note: With Java 5, you can make Double checked locking work, if you declare the variable to be `volatile`.
For more details refer to: &lt;http://www.javaworld.com/javaworld/jw-02-2001/jw-0209-double.html>
or &lt;http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html>
</description>
<priority>1</priority>
<example>
<![CDATA[
public class Foo {
/*volatile */ Object baz = null; // fix for Java5 and later: volatile
Object bar() {
if (baz == null) { // baz may be non-null yet not fully created
synchronized(this) {
if (baz == null) {
baz = new Object();
}
}
}
return baz;
}
}
]]>
</example>
</rule>
<rule name="NonThreadSafeSingleton"
since="3.4"
message="Singleton is not thread safe"
class="net.sourceforge.pmd.lang.java.rule.multithreading.NonThreadSafeSingletonRule"
externalInfoUrl="${pmd.website.baseurl}/pmd_rules_java_multithreading.html#nonthreadsafesingleton">
<description>
Non-thread safe singletons can result in bad state changes. Eliminate
static singletons if possible by instantiating the object directly. Static
singletons are usually not needed as only a single instance exists anyway.
Other possible fixes are to synchronize the entire method or to use an
[initialize-on-demand holder class](https://en.wikipedia.org/wiki/Initialization-on-demand_holder_idiom).
Refrain from using the double-checked locking pattern. The Java Memory Model doesn't
guarantee it to work unless the variable is declared as `volatile`, adding an uneeded
performance penalty. [Reference](http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html)
See Effective Java, item 48.
</description>
<priority>3</priority>
<example>
<![CDATA[
private static Foo foo = null;
//multiple simultaneous callers may see partially initialized objects
public static Foo getFoo() {
if (foo==null) {
foo = new Foo();
}
return foo;
}
]]>
</example>
</rule>
<rule name="UnsynchronizedStaticDateFormatter"
since="3.6"
deprecated="true"
message="Static DateFormatter objects should be accessed in a synchronized manner"
class="net.sourceforge.pmd.lang.java.rule.multithreading.UnsynchronizedStaticDateFormatterRule"
externalInfoUrl="${pmd.website.baseurl}/pmd_rules_java_multithreading.html#unsynchronizedstaticdateformatter">
<description>
SimpleDateFormat instances are not synchronized. Sun recommends using separate format instances
for each thread. If multiple threads must access a static formatter, the formatter must be
synchronized either on method or block level.
This rule has been deprecated in favor of the rule {% rule UnsynchronizedStaticFormatter %}.
</description>
<priority>3</priority>
<example>
<![CDATA[
public class Foo {
private static final SimpleDateFormat sdf = new SimpleDateFormat();
void bar() {
sdf.format(); // poor, no thread-safety
}
synchronized void foo() {
sdf.format(); // preferred
}
}
]]>
</example>
</rule>
<rule name="UnsynchronizedStaticFormatter"
since="6.11.0"
message="Static Formatter objects should be accessed in a synchronized manner"
class="net.sourceforge.pmd.lang.java.rule.multithreading.UnsynchronizedStaticFormatterRule"
externalInfoUrl="${pmd.website.baseurl}/pmd_rules_java_multithreading.html#unsynchronizedstaticformatter">
<description>
Instances of `java.text.Format` are generally not synchronized.
Sun recommends using separate format instances for each thread.
If multiple threads must access a static formatter, the formatter must be
synchronized either on method or block level.
</description>
<priority>3</priority>
<example>
<![CDATA[
public class Foo {
private static final SimpleDateFormat sdf = new SimpleDateFormat();
void bar() {
sdf.format(); // poor, no thread-safety
}
synchronized void foo() {
sdf.format(); // preferred
}
}
]]>
</example>
</rule>
<rule name="UseConcurrentHashMap"
language="java"
minimumLanguageVersion="1.5"
since="4.2.6"
message="If you run in Java5 or newer and have concurrent access, you should use the ConcurrentHashMap implementation"
class="net.sourceforge.pmd.lang.rule.XPathRule"
externalInfoUrl="${pmd.website.baseurl}/pmd_rules_java_multithreading.html#useconcurrenthashmap">
<description>
Since Java5 brought a new implementation of the Map designed for multi-threaded access, you can
perform efficient map reads without blocking other threads.
</description>
<priority>3</priority>
<properties>
<property name="xpath">
<value>
<![CDATA[
//Type[../VariableDeclarator/VariableInitializer//AllocationExpression/ClassOrInterfaceType[@Image != 'ConcurrentHashMap']]
/ReferenceType/ClassOrInterfaceType[@Image = 'Map']
]]>
</value>
</property>
</properties>
<example>
<![CDATA[
public class ConcurrentApp {
public void getMyInstance() {
Map map1 = new HashMap(); // fine for single-threaded access
Map map2 = new ConcurrentHashMap(); // preferred for use with multiple threads
// the following case will be ignored by this rule
Map map3 = someModule.methodThatReturnMap(); // might be OK, if the returned map is already thread-safe
}
}
]]>
</example>
</rule>
<rule name="UseNotifyAllInsteadOfNotify"
language="java"
since="3.0"
message="Call Thread.notifyAll() rather than Thread.notify()"
class="net.sourceforge.pmd.lang.rule.XPathRule"
externalInfoUrl="${pmd.website.baseurl}/pmd_rules_java_multithreading.html#usenotifyallinsteadofnotify">
<description>
Thread.notify() awakens a thread monitoring the object. If more than one thread is monitoring, then only
one is chosen. The thread chosen is arbitrary; thus its usually safer to call notifyAll() instead.
</description>
<priority>3</priority>
<properties>
<property name="xpath">
<value>
<![CDATA[
//StatementExpression/PrimaryExpression
[PrimarySuffix/Arguments[@ArgumentCount = '0']]
[
PrimaryPrefix[
./Name[@Image='notify' or ends-with(@Image,'.notify')]
or ../PrimarySuffix/@Image='notify'
or (./AllocationExpression and ../PrimarySuffix[@Image='notify'])
]
]
]]>
</value>
</property>
</properties>
<example>
<![CDATA[
void bar() {
x.notify();
// If many threads are monitoring x, only one (and you won't know which) will be notified.
// use instead:
x.notifyAll();
}
]]>
</example>
</rule>
</ruleset>

File diff suppressed because it is too large


@ -0,0 +1,65 @@
<?xml version="1.0"?>
<ruleset name="Security" xmlns="http://pmd.sourceforge.net/ruleset/2.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://pmd.sourceforge.net/ruleset/2.0.0 https://pmd.sourceforge.io/ruleset_2_0_0.xsd">
<description>
Rules that flag potential security flaws.
</description>
<rule name="HardCodedCryptoKey"
since="6.4.0"
message="Do not use hard coded encryption keys"
class="net.sourceforge.pmd.lang.java.rule.security.HardCodedCryptoKeyRule"
externalInfoUrl="${pmd.website.baseurl}/pmd_rules_java_security.html#hardcodedcryptokey">
<description>
Do not use hard coded values for cryptographic operations. Please store keys outside of source code.
</description>
<priority>3</priority>
<example>
<![CDATA[
public class Foo {
void good() {
SecretKeySpec secretKeySpec = new SecretKeySpec(Properties.getKey(), "AES");
}
void bad() {
SecretKeySpec secretKeySpec = new SecretKeySpec("my secret here".getBytes(), "AES");
}
}
]]>
</example>
</rule>
<rule name="InsecureCryptoIv"
since="6.3.0"
message="Do not use hard coded initialization vector in crypto operations"
class="net.sourceforge.pmd.lang.java.rule.security.InsecureCryptoIvRule"
externalInfoUrl="${pmd.website.baseurl}/pmd_rules_java_security.html#insecurecryptoiv">
<description>
Do not use hard coded initialization vector in cryptographic operations. Please use a randomly generated IV.
</description>
<priority>3</priority>
<example>
<![CDATA[
public class Foo {
void good() {
SecureRandom random = new SecureRandom();
byte iv[] = new byte[16];
random.nextBytes(bytes);
}
void bad() {
byte[] iv = new byte[] { 00, 00, 00, 00, 00, 00, 00, 00, 00, 00, 00, 00, 00, 00, 00, 00, };
}
void alsoBad() {
byte[] iv = "secret iv in here".getBytes();
}
}
]]>
</example>
</rule>
</ruleset>


@ -0,0 +1,10 @@
/*
sonarqube {
properties {
property "sonar.projectName", "${project.group} ${project.name}"
property "sonar.sourceEncoding", "UTF-8"
property "sonar.tests", "src/test/java"
property "sonar.scm.provider", "git"
}
}
*/


@ -0,0 +1,14 @@
apply plugin: "com.github.spotbugs"
spotbugs {
effort = "min"
reportLevel = "low"
ignoreFailures = true
}
spotbugsMain {
reports {
xml.getRequired().set(false)
html.getRequired().set(true)
}
}


@ -0,0 +1,4 @@
repositories {
mavenLocal()
mavenCentral()
}

22
gradle/test/jmh.gradle Normal file

@ -0,0 +1,22 @@
sourceSets {
jmh {
java.srcDirs = ['src/jmh/java']
resources.srcDirs = ['src/jmh/resources']
compileClasspath += sourceSets.main.runtimeClasspath
}
}
dependencies {
jmhImplementation 'org.openjdk.jmh:jmh-core:1.37'
jmhAnnotationProcessor 'org.openjdk.jmh:jmh-generator-annprocess:1.37'
}
task jmh(type: JavaExec, group: 'jmh', dependsOn: jmhClasses) {
mainClass.set('org.openjdk.jmh.Main')
classpath = sourceSets.jmh.compileClasspath + sourceSets.jmh.runtimeClasspath
project.file('build/reports/jmh').mkdirs()
args '-rf', 'json'
args '-rff', project.file('build/reports/jmh/result.json')
}
classes.finalizedBy(jmhClasses)
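The jmh source set above picks up benchmark classes from src/jmh/java and compiles them with the JMH 1.37 annotation processor. A minimal sketch of such a benchmark class (the package, class name, and workload are invented for illustration):

package org.xbib.netty.jmh; // hypothetical package, for illustration only

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class StringConcatBenchmark {

    private String left = "hello";
    private String right = "world";

    // Each @Benchmark method is discovered by jmh-generator-annprocess
    // and executed by the 'jmh' task defined above.
    @Benchmark
    public String concat() {
        return left + right;
    }
}

With a class like this in place, ./gradlew jmh runs the benchmarks and writes the JSON report to build/reports/jmh/result.json as configured above.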

44
gradle/test/junit5.gradle Normal file

@ -0,0 +1,44 @@
dependencies {
testImplementation testLibs.junit.jupiter.api
testImplementation testLibs.junit.jupiter.params
testImplementation testLibs.hamcrest
testRuntimeOnly testLibs.junit.jupiter.engine
testRuntimeOnly testLibs.junit.vintage.engine
testRuntimeOnly testLibs.junit.jupiter.platform.launcher
}
test {
useJUnitPlatform()
failFast = false
testLogging {
events 'STARTED', 'PASSED', 'FAILED', 'SKIPPED'
showStandardStreams = true
}
minHeapSize = "1g" // initial heap size
maxHeapSize = "2g" // maximum heap size
jvmArgs '--add-exports=java.base/jdk.internal=ALL-UNNAMED',
'--add-exports=java.base/jdk.internal.misc=ALL-UNNAMED',
'--add-exports=java.base/sun.nio.ch=ALL-UNNAMED',
'--add-exports=jdk.unsupported/sun.misc=ALL-UNNAMED',
'--add-opens=java.base/java.lang=ALL-UNNAMED',
'--add-opens=java.base/java.lang.reflect=ALL-UNNAMED',
'--add-opens=java.base/java.io=ALL-UNNAMED',
'--add-opens=java.base/java.util=ALL-UNNAMED',
'--add-opens=java.base/jdk.internal=ALL-UNNAMED',
'--add-opens=java.base/jdk.internal.misc=ALL-UNNAMED',
'--add-opens=jdk.unsupported/sun.misc=ALL-UNNAMED',
'-Dio.netty.bootstrap.extensions=serviceload'
systemProperty 'java.util.logging.config.file', 'src/test/resources/logging.properties'
systemProperty "nativeImage.handlerMetadataGroupId", "io.netty"
systemProperty "nativeimage.handlerMetadataArtifactId", "netty-transport"
afterSuite { desc, result ->
if (!desc.parent) {
println "\nTest result: ${result.resultType}"
println "Test summary: ${result.testCount} tests, " +
"${result.successfulTestCount} succeeded, " +
"${result.failedTestCount} failed, " +
"${result.skippedTestCount} skipped"
}
}
}
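The dependency block above gives every subproject the JUnit Jupiter API, the params module, and Hamcrest matchers. As a small illustration of what that wiring enables (class name and assertions are invented, not taken from the fork's tests):

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.greaterThan;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

// Hypothetical example test; real tests in this fork exercise Netty classes instead.
class ExampleTest {

    @Test
    void plainJupiterTest() {
        assertThat(2 + 2, greaterThan(3));
    }

    // junit-jupiter-params provides @ParameterizedTest and @ValueSource.
    @ParameterizedTest
    @ValueSource(ints = {1, 2, 3})
    void parameterizedTest(int value) {
        assertThat(value, greaterThan(0));
    }
}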

BIN
gradle/wrapper/gradle-wrapper.jar vendored Normal file

Binary file not shown.


@ -0,0 +1,7 @@
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-8.5-all.zip
networkTimeout=10000
validateDistributionUrl=true
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists

249
gradlew vendored Executable file

@ -0,0 +1,249 @@
#!/bin/sh
#
# Copyright © 2015-2021 the original authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
##############################################################################
#
# Gradle start up script for POSIX generated by Gradle.
#
# Important for running:
#
# (1) You need a POSIX-compliant shell to run this script. If your /bin/sh is
# noncompliant, but you have some other compliant shell such as ksh or
# bash, then to run this script, type that shell name before the whole
# command line, like:
#
# ksh Gradle
#
# Busybox and similar reduced shells will NOT work, because this script
# requires all of these POSIX shell features:
# * functions;
# * expansions «$var», «${var}», «${var:-default}», «${var+SET}»,
# «${var#prefix}», «${var%suffix}», and «$( cmd )»;
# * compound commands having a testable exit status, especially «case»;
# * various built-in commands including «command», «set», and «ulimit».
#
# Important for patching:
#
# (2) This script targets any POSIX shell, so it avoids extensions provided
# by Bash, Ksh, etc; in particular arrays are avoided.
#
# The "traditional" practice of packing multiple parameters into a
# space-separated string is a well documented source of bugs and security
# problems, so this is (mostly) avoided, by progressively accumulating
# options in "$@", and eventually passing that to Java.
#
# Where the inherited environment variables (DEFAULT_JVM_OPTS, JAVA_OPTS,
# and GRADLE_OPTS) rely on word-splitting, this is performed explicitly;
# see the in-line comments for details.
#
# There are tweaks for specific operating systems such as AIX, CygWin,
# Darwin, MinGW, and NonStop.
#
# (3) This script is generated from the Groovy template
# https://github.com/gradle/gradle/blob/HEAD/subprojects/plugins/src/main/resources/org/gradle/api/internal/plugins/unixStartScript.txt
# within the Gradle project.
#
# You can find Gradle at https://github.com/gradle/gradle/.
#
##############################################################################
# Attempt to set APP_HOME
# Resolve links: $0 may be a link
app_path=$0
# Need this for daisy-chained symlinks.
while
APP_HOME=${app_path%"${app_path##*/}"} # leaves a trailing /; empty if no leading path
[ -h "$app_path" ]
do
ls=$( ls -ld "$app_path" )
link=${ls#*' -> '}
case $link in #(
/*) app_path=$link ;; #(
*) app_path=$APP_HOME$link ;;
esac
done
# This is normally unused
# shellcheck disable=SC2034
APP_BASE_NAME=${0##*/}
# Discard cd standard output in case $CDPATH is set (https://github.com/gradle/gradle/issues/25036)
APP_HOME=$( cd "${APP_HOME:-./}" > /dev/null && pwd -P ) || exit
# Use the maximum available, or set MAX_FD != -1 to use that value.
MAX_FD=maximum
warn () {
echo "$*"
} >&2
die () {
echo
echo "$*"
echo
exit 1
} >&2
# OS specific support (must be 'true' or 'false').
cygwin=false
msys=false
darwin=false
nonstop=false
case "$( uname )" in #(
CYGWIN* ) cygwin=true ;; #(
Darwin* ) darwin=true ;; #(
MSYS* | MINGW* ) msys=true ;; #(
NONSTOP* ) nonstop=true ;;
esac
CLASSPATH=$APP_HOME/gradle/wrapper/gradle-wrapper.jar
# Determine the Java command to use to start the JVM.
if [ -n "$JAVA_HOME" ] ; then
if [ -x "$JAVA_HOME/jre/sh/java" ] ; then
# IBM's JDK on AIX uses strange locations for the executables
JAVACMD=$JAVA_HOME/jre/sh/java
else
JAVACMD=$JAVA_HOME/bin/java
fi
if [ ! -x "$JAVACMD" ] ; then
die "ERROR: JAVA_HOME is set to an invalid directory: $JAVA_HOME
Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
fi
else
JAVACMD=java
if ! command -v java >/dev/null 2>&1
then
die "ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
fi
fi
# Increase the maximum file descriptors if we can.
if ! "$cygwin" && ! "$darwin" && ! "$nonstop" ; then
case $MAX_FD in #(
max*)
# In POSIX sh, ulimit -H is undefined. That's why the result is checked to see if it worked.
# shellcheck disable=SC2039,SC3045
MAX_FD=$( ulimit -H -n ) ||
warn "Could not query maximum file descriptor limit"
esac
case $MAX_FD in #(
'' | soft) :;; #(
*)
# In POSIX sh, ulimit -n is undefined. That's why the result is checked to see if it worked.
# shellcheck disable=SC2039,SC3045
ulimit -n "$MAX_FD" ||
warn "Could not set maximum file descriptor limit to $MAX_FD"
esac
fi
# Collect all arguments for the java command, stacking in reverse order:
# * args from the command line
# * the main class name
# * -classpath
# * -D...appname settings
# * --module-path (only if needed)
# * DEFAULT_JVM_OPTS, JAVA_OPTS, and GRADLE_OPTS environment variables.
# For Cygwin or MSYS, switch paths to Windows format before running java
if "$cygwin" || "$msys" ; then
APP_HOME=$( cygpath --path --mixed "$APP_HOME" )
CLASSPATH=$( cygpath --path --mixed "$CLASSPATH" )
JAVACMD=$( cygpath --unix "$JAVACMD" )
# Now convert the arguments - kludge to limit ourselves to /bin/sh
for arg do
if
case $arg in #(
-*) false ;; # don't mess with options #(
/?*) t=${arg#/} t=/${t%%/*} # looks like a POSIX filepath
[ -e "$t" ] ;; #(
*) false ;;
esac
then
arg=$( cygpath --path --ignore --mixed "$arg" )
fi
# Roll the args list around exactly as many times as the number of
# args, so each arg winds up back in the position where it started, but
# possibly modified.
#
# NB: a `for` loop captures its iteration list before it begins, so
# changing the positional parameters here affects neither the number of
# iterations, nor the values presented in `arg`.
shift # remove old arg
set -- "$@" "$arg" # push replacement arg
done
fi
# Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
DEFAULT_JVM_OPTS='"-Xmx64m" "-Xms64m"'
# Collect all arguments for the java command:
# * DEFAULT_JVM_OPTS, JAVA_OPTS, JAVA_OPTS, and optsEnvironmentVar are not allowed to contain shell fragments,
# and any embedded shellness will be escaped.
# * For example: A user cannot expect ${Hostname} to be expanded, as it is an environment variable and will be
# treated as '${Hostname}' itself on the command line.
set -- \
"-Dorg.gradle.appname=$APP_BASE_NAME" \
-classpath "$CLASSPATH" \
org.gradle.wrapper.GradleWrapperMain \
"$@"
# Stop when "xargs" is not available.
if ! command -v xargs >/dev/null 2>&1
then
die "xargs is not available"
fi
# Use "xargs" to parse quoted args.
#
# With -n1 it outputs one arg per line, with the quotes and backslashes removed.
#
# In Bash we could simply go:
#
# readarray ARGS < <( xargs -n1 <<<"$var" ) &&
# set -- "${ARGS[@]}" "$@"
#
# but POSIX shell has neither arrays nor command substitution, so instead we
# post-process each arg (as a line of input to sed) to backslash-escape any
# character that might be a shell metacharacter, then use eval to reverse
# that process (while maintaining the separation between arguments), and wrap
# the whole thing up as a single "set" statement.
#
# This will of course break if any of these variables contains a newline or
# an unmatched quote.
#
eval "set -- $(
printf '%s\n' "$DEFAULT_JVM_OPTS $JAVA_OPTS $GRADLE_OPTS" |
xargs -n1 |
sed ' s~[^-[:alnum:]+,./:=@_]~\\&~g; ' |
tr '\n' ' '
)" '"$@"'
exec "$JAVACMD" "$@"

gradlew.bat vendored Normal file

@ -0,0 +1,92 @@
@rem
@rem Copyright 2015 the original author or authors.
@rem
@rem Licensed under the Apache License, Version 2.0 (the "License");
@rem you may not use this file except in compliance with the License.
@rem You may obtain a copy of the License at
@rem
@rem https://www.apache.org/licenses/LICENSE-2.0
@rem
@rem Unless required by applicable law or agreed to in writing, software
@rem distributed under the License is distributed on an "AS IS" BASIS,
@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@rem See the License for the specific language governing permissions and
@rem limitations under the License.
@rem
@if "%DEBUG%"=="" @echo off
@rem ##########################################################################
@rem
@rem Gradle startup script for Windows
@rem
@rem ##########################################################################
@rem Set local scope for the variables with windows NT shell
if "%OS%"=="Windows_NT" setlocal
set DIRNAME=%~dp0
if "%DIRNAME%"=="" set DIRNAME=.
@rem This is normally unused
set APP_BASE_NAME=%~n0
set APP_HOME=%DIRNAME%
@rem Resolve any "." and ".." in APP_HOME to make it shorter.
for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi
@rem Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
set DEFAULT_JVM_OPTS="-Xmx64m" "-Xms64m"
@rem Find java.exe
if defined JAVA_HOME goto findJavaFromJavaHome
set JAVA_EXE=java.exe
%JAVA_EXE% -version >NUL 2>&1
if %ERRORLEVEL% equ 0 goto execute
echo.
echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
echo.
echo Please set the JAVA_HOME variable in your environment to match the
echo location of your Java installation.
goto fail
:findJavaFromJavaHome
set JAVA_HOME=%JAVA_HOME:"=%
set JAVA_EXE=%JAVA_HOME%/bin/java.exe
if exist "%JAVA_EXE%" goto execute
echo.
echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME%
echo.
echo Please set the JAVA_HOME variable in your environment to match the
echo location of your Java installation.
goto fail
:execute
@rem Setup the command line
set CLASSPATH=%APP_HOME%\gradle\wrapper\gradle-wrapper.jar
@rem Execute Gradle
"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %GRADLE_OPTS% "-Dorg.gradle.appname=%APP_BASE_NAME%" -classpath "%CLASSPATH%" org.gradle.wrapper.GradleWrapperMain %*
:end
@rem End local scope for the variables with windows NT shell
if %ERRORLEVEL% equ 0 goto mainEnd
:fail
rem Set variable GRADLE_EXIT_CONSOLE if you need the _script_ return code instead of
rem the _cmd.exe /c_ return code!
set EXIT_CODE=%ERRORLEVEL%
if %EXIT_CODE% equ 0 set EXIT_CODE=1
if not ""=="%GRADLE_EXIT_CONSOLE%" exit %EXIT_CODE%
exit /b %EXIT_CODE%
:mainEnd
if "%OS%"=="Windows_NT" endlocal
:omega


@ -0,0 +1,5 @@
dependencies {
api project(':netty-util')
testImplementation testLibs.mockito.core
testImplementation testLibs.assertj
}

File diff suppressed because it is too large

AbstractByteBufAllocator.java

@ -0,0 +1,280 @@
/*
* Copyright 2012 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import static io.netty.util.internal.ObjectUtil.checkPositiveOrZero;
import io.netty.util.ResourceLeakDetector;
import io.netty.util.ResourceLeakTracker;
import io.netty.util.internal.MathUtil;
import io.netty.util.internal.PlatformDependent;
import io.netty.util.internal.StringUtil;
/**
* Skeletal {@link ByteBufAllocator} implementation to extend.
*/
public abstract class AbstractByteBufAllocator implements ByteBufAllocator {
static final int DEFAULT_INITIAL_CAPACITY = 256;
static final int DEFAULT_MAX_CAPACITY = Integer.MAX_VALUE;
static final int DEFAULT_MAX_COMPONENTS = 16;
static final int CALCULATE_THRESHOLD = 1048576 * 4; // 4 MiB page
static {
ResourceLeakDetector.addExclusions(AbstractByteBufAllocator.class, "toLeakAwareBuffer");
}
protected static ByteBuf toLeakAwareBuffer(ByteBuf buf) {
ResourceLeakTracker<ByteBuf> leak;
switch (ResourceLeakDetector.getLevel()) {
case SIMPLE:
leak = AbstractByteBuf.leakDetector.track(buf);
if (leak != null) {
buf = new SimpleLeakAwareByteBuf(buf, leak);
}
break;
case ADVANCED:
case PARANOID:
leak = AbstractByteBuf.leakDetector.track(buf);
if (leak != null) {
buf = new AdvancedLeakAwareByteBuf(buf, leak);
}
break;
default:
break;
}
return buf;
}
protected static CompositeByteBuf toLeakAwareBuffer(CompositeByteBuf buf) {
ResourceLeakTracker<ByteBuf> leak;
switch (ResourceLeakDetector.getLevel()) {
case SIMPLE:
leak = AbstractByteBuf.leakDetector.track(buf);
if (leak != null) {
buf = new SimpleLeakAwareCompositeByteBuf(buf, leak);
}
break;
case ADVANCED:
case PARANOID:
leak = AbstractByteBuf.leakDetector.track(buf);
if (leak != null) {
buf = new AdvancedLeakAwareCompositeByteBuf(buf, leak);
}
break;
default:
break;
}
return buf;
}
private final boolean directByDefault;
private final ByteBuf emptyBuf;
/**
* Instances use heap buffers by default.
*/
protected AbstractByteBufAllocator() {
this(false);
}
/**
* Create new instance
*
* @param preferDirect {@code true} if {@link #buffer(int)} should try to allocate a direct buffer rather than
* a heap buffer
*/
protected AbstractByteBufAllocator(boolean preferDirect) {
directByDefault = preferDirect && PlatformDependent.hasUnsafe();
emptyBuf = new EmptyByteBuf(this);
}
@Override
public ByteBuf buffer() {
if (directByDefault) {
return directBuffer();
}
return heapBuffer();
}
@Override
public ByteBuf buffer(int initialCapacity) {
if (directByDefault) {
return directBuffer(initialCapacity);
}
return heapBuffer(initialCapacity);
}
@Override
public ByteBuf buffer(int initialCapacity, int maxCapacity) {
if (directByDefault) {
return directBuffer(initialCapacity, maxCapacity);
}
return heapBuffer(initialCapacity, maxCapacity);
}
@Override
public ByteBuf ioBuffer() {
if (PlatformDependent.hasUnsafe() || isDirectBufferPooled()) {
return directBuffer(DEFAULT_INITIAL_CAPACITY);
}
return heapBuffer(DEFAULT_INITIAL_CAPACITY);
}
@Override
public ByteBuf ioBuffer(int initialCapacity) {
if (PlatformDependent.hasUnsafe() || isDirectBufferPooled()) {
return directBuffer(initialCapacity);
}
return heapBuffer(initialCapacity);
}
@Override
public ByteBuf ioBuffer(int initialCapacity, int maxCapacity) {
if (PlatformDependent.hasUnsafe() || isDirectBufferPooled()) {
return directBuffer(initialCapacity, maxCapacity);
}
return heapBuffer(initialCapacity, maxCapacity);
}
@Override
public ByteBuf heapBuffer() {
return heapBuffer(DEFAULT_INITIAL_CAPACITY, DEFAULT_MAX_CAPACITY);
}
@Override
public ByteBuf heapBuffer(int initialCapacity) {
return heapBuffer(initialCapacity, DEFAULT_MAX_CAPACITY);
}
@Override
public ByteBuf heapBuffer(int initialCapacity, int maxCapacity) {
if (initialCapacity == 0 && maxCapacity == 0) {
return emptyBuf;
}
validate(initialCapacity, maxCapacity);
return newHeapBuffer(initialCapacity, maxCapacity);
}
@Override
public ByteBuf directBuffer() {
return directBuffer(DEFAULT_INITIAL_CAPACITY, DEFAULT_MAX_CAPACITY);
}
@Override
public ByteBuf directBuffer(int initialCapacity) {
return directBuffer(initialCapacity, DEFAULT_MAX_CAPACITY);
}
@Override
public ByteBuf directBuffer(int initialCapacity, int maxCapacity) {
if (initialCapacity == 0 && maxCapacity == 0) {
return emptyBuf;
}
validate(initialCapacity, maxCapacity);
return newDirectBuffer(initialCapacity, maxCapacity);
}
@Override
public CompositeByteBuf compositeBuffer() {
if (directByDefault) {
return compositeDirectBuffer();
}
return compositeHeapBuffer();
}
@Override
public CompositeByteBuf compositeBuffer(int maxNumComponents) {
if (directByDefault) {
return compositeDirectBuffer(maxNumComponents);
}
return compositeHeapBuffer(maxNumComponents);
}
@Override
public CompositeByteBuf compositeHeapBuffer() {
return compositeHeapBuffer(DEFAULT_MAX_COMPONENTS);
}
@Override
public CompositeByteBuf compositeHeapBuffer(int maxNumComponents) {
return toLeakAwareBuffer(new CompositeByteBuf(this, false, maxNumComponents));
}
@Override
public CompositeByteBuf compositeDirectBuffer() {
return compositeDirectBuffer(DEFAULT_MAX_COMPONENTS);
}
@Override
public CompositeByteBuf compositeDirectBuffer(int maxNumComponents) {
return toLeakAwareBuffer(new CompositeByteBuf(this, true, maxNumComponents));
}
private static void validate(int initialCapacity, int maxCapacity) {
checkPositiveOrZero(initialCapacity, "initialCapacity");
if (initialCapacity > maxCapacity) {
throw new IllegalArgumentException(String.format(
"initialCapacity: %d (expected: not greater than maxCapacity(%d)",
initialCapacity, maxCapacity));
}
}
/**
* Create a heap {@link ByteBuf} with the given initialCapacity and maxCapacity.
*/
protected abstract ByteBuf newHeapBuffer(int initialCapacity, int maxCapacity);
/**
* Create a direct {@link ByteBuf} with the given initialCapacity and maxCapacity.
*/
protected abstract ByteBuf newDirectBuffer(int initialCapacity, int maxCapacity);
@Override
public String toString() {
return StringUtil.simpleClassName(this) + "(directByDefault: " + directByDefault + ')';
}
@Override
public int calculateNewCapacity(int minNewCapacity, int maxCapacity) {
checkPositiveOrZero(minNewCapacity, "minNewCapacity");
if (minNewCapacity > maxCapacity) {
throw new IllegalArgumentException(String.format(
"minNewCapacity: %d (expected: not greater than maxCapacity(%d)",
minNewCapacity, maxCapacity));
}
final int threshold = CALCULATE_THRESHOLD; // 4 MiB page
if (minNewCapacity == threshold) {
return threshold;
}
// If over threshold, do not double but just increase by threshold.
if (minNewCapacity > threshold) {
int newCapacity = minNewCapacity / threshold * threshold;
if (newCapacity > maxCapacity - threshold) {
newCapacity = maxCapacity;
} else {
newCapacity += threshold;
}
return newCapacity;
}
// 64 <= newCapacity is a power of 2 <= threshold
final int newCapacity = MathUtil.findNextPositivePowerOfTwo(Math.max(minNewCapacity, 64));
return Math.min(newCapacity, maxCapacity);
}
}
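A minimal sketch of the growth policy that calculateNewCapacity(...) above implements: capacities are rounded up to a power of two (at least 64) below the 4 MiB threshold, and grow in fixed 4 MiB steps above it. The class name CalculateNewCapacityDemo is hypothetical; the snippet only assumes a Netty 4.1 buffer module on the classpath.

import io.netty.buffer.ByteBufAllocator;

// Illustrative only: exercises the documented growth policy via the public allocator API.
public final class CalculateNewCapacityDemo {
    public static void main(String[] args) {
        ByteBufAllocator alloc = ByteBufAllocator.DEFAULT;
        // Below the 4 MiB threshold the capacity is rounded up to a power of two (>= 64).
        System.out.println(alloc.calculateNewCapacity(5000, Integer.MAX_VALUE));                  // 8192
        // Exactly at the threshold the threshold itself is returned.
        System.out.println(alloc.calculateNewCapacity(4 * 1024 * 1024, Integer.MAX_VALUE));       // 4194304
        // Above the threshold the capacity grows in 4 MiB steps instead of doubling.
        System.out.println(alloc.calculateNewCapacity(4 * 1024 * 1024 + 1, Integer.MAX_VALUE));   // 8388608
    }
}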

AbstractDerivedByteBuf.java

@ -0,0 +1,129 @@
/*
* Copyright 2013 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import java.nio.ByteBuffer;
/**
* Abstract base class for {@link ByteBuf} implementations that wrap another
* {@link ByteBuf}.
*
* @deprecated Do not use.
*/
@Deprecated
public abstract class AbstractDerivedByteBuf extends AbstractByteBuf {
protected AbstractDerivedByteBuf(int maxCapacity) {
super(maxCapacity);
}
@Override
final boolean isAccessible() {
return isAccessible0();
}
boolean isAccessible0() {
return unwrap().isAccessible();
}
@Override
public final int refCnt() {
return refCnt0();
}
int refCnt0() {
return unwrap().refCnt();
}
@Override
public final ByteBuf retain() {
return retain0();
}
ByteBuf retain0() {
unwrap().retain();
return this;
}
@Override
public final ByteBuf retain(int increment) {
return retain0(increment);
}
ByteBuf retain0(int increment) {
unwrap().retain(increment);
return this;
}
@Override
public final ByteBuf touch() {
return touch0();
}
ByteBuf touch0() {
unwrap().touch();
return this;
}
@Override
public final ByteBuf touch(Object hint) {
return touch0(hint);
}
ByteBuf touch0(Object hint) {
unwrap().touch(hint);
return this;
}
@Override
public final boolean release() {
return release0();
}
boolean release0() {
return unwrap().release();
}
@Override
public final boolean release(int decrement) {
return release0(decrement);
}
boolean release0(int decrement) {
return unwrap().release(decrement);
}
@Override
public boolean isReadOnly() {
return unwrap().isReadOnly();
}
@Override
public ByteBuffer internalNioBuffer(int index, int length) {
return nioBuffer(index, length);
}
@Override
public ByteBuffer nioBuffer(int index, int length) {
return unwrap().nioBuffer(index, length);
}
@Override
public boolean isContiguous() {
return unwrap().isContiguous();
}
}
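A minimal sketch of the delegation described above: a derived buffer such as an unpooled slice has no reference count of its own and forwards refCnt()/retain()/release() to the buffer it wraps. DerivedRefCntDemo is a hypothetical name; the calls used are standard ByteBuf API.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

// Illustrative only: a derived (sliced) buffer forwards reference counting to its parent.
public final class DerivedRefCntDemo {
    public static void main(String[] args) {
        ByteBuf parent = Unpooled.buffer(16);
        ByteBuf slice = parent.slice(0, 8);
        System.out.println(parent.refCnt()); // 1
        slice.retain();                      // forwarded to the parent
        System.out.println(parent.refCnt()); // 2
        slice.release();
        parent.release();
        System.out.println(parent.refCnt()); // 0
    }
}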

AbstractPooledDerivedByteBuf.java

@ -0,0 +1,323 @@
/*
* Copyright 2016 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.Recycler.EnhancedHandle;
import io.netty.util.internal.ObjectPool.Handle;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
/**
* Abstract base class for derived {@link ByteBuf} implementations.
*/
abstract class AbstractPooledDerivedByteBuf extends AbstractReferenceCountedByteBuf {
private final EnhancedHandle<AbstractPooledDerivedByteBuf> recyclerHandle;
private AbstractByteBuf rootParent;
/**
* Deallocations of a pooled derived buffer should always propagate through the entire chain of derived buffers.
* This is because each pooled derived buffer maintains its own reference count and we should respect each one.
* If deallocations cause a release of the "root parent" then we may prematurely release the underlying
* content before all the derived buffers have been released.
*/
private ByteBuf parent;
@SuppressWarnings("unchecked")
AbstractPooledDerivedByteBuf(Handle<? extends AbstractPooledDerivedByteBuf> recyclerHandle) {
super(0);
this.recyclerHandle = (EnhancedHandle<AbstractPooledDerivedByteBuf>) recyclerHandle;
}
// Called from within SimpleLeakAwareByteBuf and AdvancedLeakAwareByteBuf.
final void parent(ByteBuf newParent) {
assert newParent instanceof SimpleLeakAwareByteBuf;
parent = newParent;
}
@Override
public final AbstractByteBuf unwrap() {
return rootParent;
}
final <U extends AbstractPooledDerivedByteBuf> U init(
AbstractByteBuf unwrapped, ByteBuf wrapped, int readerIndex, int writerIndex, int maxCapacity) {
wrapped.retain(); // Retain up front to ensure the parent is accessible before doing more work.
parent = wrapped;
rootParent = unwrapped;
try {
maxCapacity(maxCapacity);
setIndex0(readerIndex, writerIndex); // It is assumed the bounds checking is done by the caller.
resetRefCnt();
@SuppressWarnings("unchecked")
final U castThis = (U) this;
wrapped = null;
return castThis;
} finally {
if (wrapped != null) {
parent = rootParent = null;
wrapped.release();
}
}
}
@Override
protected final void deallocate() {
// We need to first store a reference to the parent before recycling this instance. This is needed because
// otherwise it is possible that the same AbstractPooledDerivedByteBuf is obtained again and init(...) is
// called before we actually have a chance to call release(). That would lead to calling release() on the wrong parent.
ByteBuf parent = this.parent;
recyclerHandle.unguardedRecycle(this);
parent.release();
}
@Override
public final ByteBufAllocator alloc() {
return unwrap().alloc();
}
@Override
@Deprecated
public final ByteOrder order() {
return unwrap().order();
}
@Override
public boolean isReadOnly() {
return unwrap().isReadOnly();
}
@Override
public final boolean isDirect() {
return unwrap().isDirect();
}
@Override
public boolean hasArray() {
return unwrap().hasArray();
}
@Override
public byte[] array() {
return unwrap().array();
}
@Override
public boolean hasMemoryAddress() {
return unwrap().hasMemoryAddress();
}
@Override
public boolean isContiguous() {
return unwrap().isContiguous();
}
@Override
public final int nioBufferCount() {
return unwrap().nioBufferCount();
}
@Override
public final ByteBuffer internalNioBuffer(int index, int length) {
return nioBuffer(index, length);
}
@Override
public final ByteBuf retainedSlice() {
final int index = readerIndex();
return retainedSlice(index, writerIndex() - index);
}
@Override
public ByteBuf slice(int index, int length) {
ensureAccessible();
// All reference count methods should be inherited from this object (this is the "parent").
return new PooledNonRetainedSlicedByteBuf(this, unwrap(), index, length);
}
final ByteBuf duplicate0() {
ensureAccessible();
// All reference count methods should be inherited from this object (this is the "parent").
return new PooledNonRetainedDuplicateByteBuf(this, unwrap());
}
private static final class PooledNonRetainedDuplicateByteBuf extends UnpooledDuplicatedByteBuf {
private final ByteBuf referenceCountDelegate;
PooledNonRetainedDuplicateByteBuf(ByteBuf referenceCountDelegate, AbstractByteBuf buffer) {
super(buffer);
this.referenceCountDelegate = referenceCountDelegate;
}
@Override
boolean isAccessible0() {
return referenceCountDelegate.isAccessible();
}
@Override
int refCnt0() {
return referenceCountDelegate.refCnt();
}
@Override
ByteBuf retain0() {
referenceCountDelegate.retain();
return this;
}
@Override
ByteBuf retain0(int increment) {
referenceCountDelegate.retain(increment);
return this;
}
@Override
ByteBuf touch0() {
referenceCountDelegate.touch();
return this;
}
@Override
ByteBuf touch0(Object hint) {
referenceCountDelegate.touch(hint);
return this;
}
@Override
boolean release0() {
return referenceCountDelegate.release();
}
@Override
boolean release0(int decrement) {
return referenceCountDelegate.release(decrement);
}
@Override
public ByteBuf duplicate() {
ensureAccessible();
return new PooledNonRetainedDuplicateByteBuf(referenceCountDelegate, this);
}
@Override
public ByteBuf retainedDuplicate() {
return PooledDuplicatedByteBuf.newInstance(unwrap(), this, readerIndex(), writerIndex());
}
@Override
public ByteBuf slice(int index, int length) {
checkIndex(index, length);
return new PooledNonRetainedSlicedByteBuf(referenceCountDelegate, unwrap(), index, length);
}
@Override
public ByteBuf retainedSlice() {
// Capacity is not allowed to change for a sliced ByteBuf, so length == capacity()
return retainedSlice(readerIndex(), capacity());
}
@Override
public ByteBuf retainedSlice(int index, int length) {
return PooledSlicedByteBuf.newInstance(unwrap(), this, index, length);
}
}
private static final class PooledNonRetainedSlicedByteBuf extends UnpooledSlicedByteBuf {
private final ByteBuf referenceCountDelegate;
PooledNonRetainedSlicedByteBuf(ByteBuf referenceCountDelegate,
AbstractByteBuf buffer, int index, int length) {
super(buffer, index, length);
this.referenceCountDelegate = referenceCountDelegate;
}
@Override
boolean isAccessible0() {
return referenceCountDelegate.isAccessible();
}
@Override
int refCnt0() {
return referenceCountDelegate.refCnt();
}
@Override
ByteBuf retain0() {
referenceCountDelegate.retain();
return this;
}
@Override
ByteBuf retain0(int increment) {
referenceCountDelegate.retain(increment);
return this;
}
@Override
ByteBuf touch0() {
referenceCountDelegate.touch();
return this;
}
@Override
ByteBuf touch0(Object hint) {
referenceCountDelegate.touch(hint);
return this;
}
@Override
boolean release0() {
return referenceCountDelegate.release();
}
@Override
boolean release0(int decrement) {
return referenceCountDelegate.release(decrement);
}
@Override
public ByteBuf duplicate() {
ensureAccessible();
return new PooledNonRetainedDuplicateByteBuf(referenceCountDelegate, unwrap())
.setIndex(idx(readerIndex()), idx(writerIndex()));
}
@Override
public ByteBuf retainedDuplicate() {
return PooledDuplicatedByteBuf.newInstance(unwrap(), this, idx(readerIndex()), idx(writerIndex()));
}
@Override
public ByteBuf slice(int index, int length) {
checkIndex(index, length);
return new PooledNonRetainedSlicedByteBuf(referenceCountDelegate, unwrap(), idx(index), length);
}
@Override
public ByteBuf retainedSlice() {
// Capacity is not allowed to change for a sliced ByteBuf, so length == capacity()
return retainedSlice(0, capacity());
}
@Override
public ByteBuf retainedSlice(int index, int length) {
return PooledSlicedByteBuf.newInstance(unwrap(), this, idx(index), length);
}
}
}
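A minimal sketch of the behaviour the comments above describe: a retained slice carries its own reference count and keeps the pooled memory alive until every buffer in the derivation chain has been released. RetainedSliceDemo is a hypothetical name; only public Netty 4.1 API is used.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

// Illustrative only: releasing the original buffer does not free the memory while a retained slice exists.
public final class RetainedSliceDemo {
    public static void main(String[] args) {
        ByteBuf buf = PooledByteBufAllocator.DEFAULT.buffer(64);
        buf.writeBytes(new byte[] {1, 2, 3, 4});
        ByteBuf slice = buf.retainedSlice(0, 4); // retains the underlying memory
        buf.release();                           // memory stays live: the slice still holds a reference
        System.out.println(slice.getByte(0));    // 1
        slice.release();                         // last reference gone; memory returns to the pool
    }
}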

AbstractReferenceCountedByteBuf.java

@ -0,0 +1,120 @@
/*
* Copyright 2013 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;
import io.netty.util.internal.ReferenceCountUpdater;
/**
* Abstract base class for {@link ByteBuf} implementations that count references.
*/
public abstract class AbstractReferenceCountedByteBuf extends AbstractByteBuf {
private static final long REFCNT_FIELD_OFFSET =
ReferenceCountUpdater.getUnsafeOffset(AbstractReferenceCountedByteBuf.class, "refCnt");
private static final AtomicIntegerFieldUpdater<AbstractReferenceCountedByteBuf> AIF_UPDATER =
AtomicIntegerFieldUpdater.newUpdater(AbstractReferenceCountedByteBuf.class, "refCnt");
private static final ReferenceCountUpdater<AbstractReferenceCountedByteBuf> updater =
new ReferenceCountUpdater<AbstractReferenceCountedByteBuf>() {
@Override
protected AtomicIntegerFieldUpdater<AbstractReferenceCountedByteBuf> updater() {
return AIF_UPDATER;
}
@Override
protected long unsafeOffset() {
return REFCNT_FIELD_OFFSET;
}
};
// Value might not equal the "real" reference count; all access should go through the updater
@SuppressWarnings({"unused", "FieldMayBeFinal"})
private volatile int refCnt;
protected AbstractReferenceCountedByteBuf(int maxCapacity) {
super(maxCapacity);
updater.setInitialValue(this);
}
@Override
boolean isAccessible() {
// Try to do a non-volatile read for performance, as ensureAccessible() is racy anyway and only provides
// a best-effort guard.
return updater.isLiveNonVolatile(this);
}
@Override
public int refCnt() {
return updater.refCnt(this);
}
/**
* An unsafe operation intended for use by a subclass that sets the reference count of the buffer directly
*/
protected final void setRefCnt(int refCnt) {
updater.setRefCnt(this, refCnt);
}
/**
* An unsafe operation intended for use by a subclass that resets the reference count of the buffer to 1
*/
protected final void resetRefCnt() {
updater.resetRefCnt(this);
}
@Override
public ByteBuf retain() {
return updater.retain(this);
}
@Override
public ByteBuf retain(int increment) {
return updater.retain(this, increment);
}
@Override
public ByteBuf touch() {
return this;
}
@Override
public ByteBuf touch(Object hint) {
return this;
}
@Override
public boolean release() {
return handleRelease(updater.release(this));
}
@Override
public boolean release(int decrement) {
return handleRelease(updater.release(this, decrement));
}
private boolean handleRelease(boolean result) {
if (result) {
deallocate();
}
return result;
}
/**
* Called once {@link #refCnt()} equals 0.
*/
protected abstract void deallocate();
}
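A minimal sketch of the reference-count lifecycle implemented above: the count starts at 1, retain(...) increments it, and deallocate() runs when release(...) drops it to 0. RefCntLifecycleDemo is a hypothetical name.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

// Illustrative only: retain/release bookkeeping on a reference-counted buffer.
public final class RefCntLifecycleDemo {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.directBuffer(32);
        System.out.println(buf.refCnt());  // 1
        buf.retain(2);
        System.out.println(buf.refCnt());  // 3
        buf.release(2);
        boolean freed = buf.release();     // true: the count hit 0 and deallocate() was called
        System.out.println(freed + " " + buf.refCnt()); // true 0
    }
}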

AbstractUnpooledSlicedByteBuf.java

@ -0,0 +1,477 @@
/*
* Copyright 2016 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.ByteProcessor;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.channels.FileChannel;
import java.nio.channels.GatheringByteChannel;
import java.nio.channels.ScatteringByteChannel;
import java.nio.charset.Charset;
import static io.netty.util.internal.MathUtil.isOutOfBounds;
abstract class AbstractUnpooledSlicedByteBuf extends AbstractDerivedByteBuf {
private final ByteBuf buffer;
private final int adjustment;
AbstractUnpooledSlicedByteBuf(ByteBuf buffer, int index, int length) {
super(length);
checkSliceOutOfBounds(index, length, buffer);
if (buffer instanceof AbstractUnpooledSlicedByteBuf) {
this.buffer = ((AbstractUnpooledSlicedByteBuf) buffer).buffer;
adjustment = ((AbstractUnpooledSlicedByteBuf) buffer).adjustment + index;
} else if (buffer instanceof DuplicatedByteBuf) {
this.buffer = buffer.unwrap();
adjustment = index;
} else {
this.buffer = buffer;
adjustment = index;
}
initLength(length);
writerIndex(length);
}
/**
* Called by the constructor before {@link #writerIndex(int)}.
* @param length the {@code length} argument from the constructor.
*/
void initLength(int length) {
}
int length() {
return capacity();
}
@Override
public ByteBuf unwrap() {
return buffer;
}
@Override
public ByteBufAllocator alloc() {
return unwrap().alloc();
}
@Override
@Deprecated
public ByteOrder order() {
return unwrap().order();
}
@Override
public boolean isDirect() {
return unwrap().isDirect();
}
@Override
public ByteBuf capacity(int newCapacity) {
throw new UnsupportedOperationException("sliced buffer");
}
@Override
public boolean hasArray() {
return unwrap().hasArray();
}
@Override
public byte[] array() {
return unwrap().array();
}
@Override
public int arrayOffset() {
return idx(unwrap().arrayOffset());
}
@Override
public boolean hasMemoryAddress() {
return unwrap().hasMemoryAddress();
}
@Override
public long memoryAddress() {
return unwrap().memoryAddress() + adjustment;
}
@Override
public byte getByte(int index) {
checkIndex0(index, 1);
return unwrap().getByte(idx(index));
}
@Override
protected byte _getByte(int index) {
return unwrap().getByte(idx(index));
}
@Override
public short getShort(int index) {
checkIndex0(index, 2);
return unwrap().getShort(idx(index));
}
@Override
protected short _getShort(int index) {
return unwrap().getShort(idx(index));
}
@Override
public short getShortLE(int index) {
checkIndex0(index, 2);
return unwrap().getShortLE(idx(index));
}
@Override
protected short _getShortLE(int index) {
return unwrap().getShortLE(idx(index));
}
@Override
public int getUnsignedMedium(int index) {
checkIndex0(index, 3);
return unwrap().getUnsignedMedium(idx(index));
}
@Override
protected int _getUnsignedMedium(int index) {
return unwrap().getUnsignedMedium(idx(index));
}
@Override
public int getUnsignedMediumLE(int index) {
checkIndex0(index, 3);
return unwrap().getUnsignedMediumLE(idx(index));
}
@Override
protected int _getUnsignedMediumLE(int index) {
return unwrap().getUnsignedMediumLE(idx(index));
}
@Override
public int getInt(int index) {
checkIndex0(index, 4);
return unwrap().getInt(idx(index));
}
@Override
protected int _getInt(int index) {
return unwrap().getInt(idx(index));
}
@Override
public int getIntLE(int index) {
checkIndex0(index, 4);
return unwrap().getIntLE(idx(index));
}
@Override
protected int _getIntLE(int index) {
return unwrap().getIntLE(idx(index));
}
@Override
public long getLong(int index) {
checkIndex0(index, 8);
return unwrap().getLong(idx(index));
}
@Override
protected long _getLong(int index) {
return unwrap().getLong(idx(index));
}
@Override
public long getLongLE(int index) {
checkIndex0(index, 8);
return unwrap().getLongLE(idx(index));
}
@Override
protected long _getLongLE(int index) {
return unwrap().getLongLE(idx(index));
}
@Override
public ByteBuf duplicate() {
return unwrap().duplicate().setIndex(idx(readerIndex()), idx(writerIndex()));
}
@Override
public ByteBuf copy(int index, int length) {
checkIndex0(index, length);
return unwrap().copy(idx(index), length);
}
@Override
public ByteBuf slice(int index, int length) {
checkIndex0(index, length);
return unwrap().slice(idx(index), length);
}
@Override
public ByteBuf getBytes(int index, ByteBuf dst, int dstIndex, int length) {
checkIndex0(index, length);
unwrap().getBytes(idx(index), dst, dstIndex, length);
return this;
}
@Override
public ByteBuf getBytes(int index, byte[] dst, int dstIndex, int length) {
checkIndex0(index, length);
unwrap().getBytes(idx(index), dst, dstIndex, length);
return this;
}
@Override
public ByteBuf getBytes(int index, ByteBuffer dst) {
checkIndex0(index, dst.remaining());
unwrap().getBytes(idx(index), dst);
return this;
}
@Override
public ByteBuf setByte(int index, int value) {
checkIndex0(index, 1);
unwrap().setByte(idx(index), value);
return this;
}
@Override
public CharSequence getCharSequence(int index, int length, Charset charset) {
checkIndex0(index, length);
return unwrap().getCharSequence(idx(index), length, charset);
}
@Override
protected void _setByte(int index, int value) {
unwrap().setByte(idx(index), value);
}
@Override
public ByteBuf setShort(int index, int value) {
checkIndex0(index, 2);
unwrap().setShort(idx(index), value);
return this;
}
@Override
protected void _setShort(int index, int value) {
unwrap().setShort(idx(index), value);
}
@Override
public ByteBuf setShortLE(int index, int value) {
checkIndex0(index, 2);
unwrap().setShortLE(idx(index), value);
return this;
}
@Override
protected void _setShortLE(int index, int value) {
unwrap().setShortLE(idx(index), value);
}
@Override
public ByteBuf setMedium(int index, int value) {
checkIndex0(index, 3);
unwrap().setMedium(idx(index), value);
return this;
}
@Override
protected void _setMedium(int index, int value) {
unwrap().setMedium(idx(index), value);
}
@Override
public ByteBuf setMediumLE(int index, int value) {
checkIndex0(index, 3);
unwrap().setMediumLE(idx(index), value);
return this;
}
@Override
protected void _setMediumLE(int index, int value) {
unwrap().setMediumLE(idx(index), value);
}
@Override
public ByteBuf setInt(int index, int value) {
checkIndex0(index, 4);
unwrap().setInt(idx(index), value);
return this;
}
@Override
protected void _setInt(int index, int value) {
unwrap().setInt(idx(index), value);
}
@Override
public ByteBuf setIntLE(int index, int value) {
checkIndex0(index, 4);
unwrap().setIntLE(idx(index), value);
return this;
}
@Override
protected void _setIntLE(int index, int value) {
unwrap().setIntLE(idx(index), value);
}
@Override
public ByteBuf setLong(int index, long value) {
checkIndex0(index, 8);
unwrap().setLong(idx(index), value);
return this;
}
@Override
protected void _setLong(int index, long value) {
unwrap().setLong(idx(index), value);
}
@Override
public ByteBuf setLongLE(int index, long value) {
checkIndex0(index, 8);
unwrap().setLongLE(idx(index), value);
return this;
}
@Override
protected void _setLongLE(int index, long value) {
unwrap().setLongLE(idx(index), value);
}
@Override
public ByteBuf setBytes(int index, byte[] src, int srcIndex, int length) {
checkIndex0(index, length);
unwrap().setBytes(idx(index), src, srcIndex, length);
return this;
}
@Override
public ByteBuf setBytes(int index, ByteBuf src, int srcIndex, int length) {
checkIndex0(index, length);
unwrap().setBytes(idx(index), src, srcIndex, length);
return this;
}
@Override
public ByteBuf setBytes(int index, ByteBuffer src) {
checkIndex0(index, src.remaining());
unwrap().setBytes(idx(index), src);
return this;
}
@Override
public ByteBuf getBytes(int index, OutputStream out, int length) throws IOException {
checkIndex0(index, length);
unwrap().getBytes(idx(index), out, length);
return this;
}
@Override
public int getBytes(int index, GatheringByteChannel out, int length) throws IOException {
checkIndex0(index, length);
return unwrap().getBytes(idx(index), out, length);
}
@Override
public int getBytes(int index, FileChannel out, long position, int length) throws IOException {
checkIndex0(index, length);
return unwrap().getBytes(idx(index), out, position, length);
}
@Override
public int setBytes(int index, InputStream in, int length) throws IOException {
checkIndex0(index, length);
return unwrap().setBytes(idx(index), in, length);
}
@Override
public int setBytes(int index, ScatteringByteChannel in, int length) throws IOException {
checkIndex0(index, length);
return unwrap().setBytes(idx(index), in, length);
}
@Override
public int setBytes(int index, FileChannel in, long position, int length) throws IOException {
checkIndex0(index, length);
return unwrap().setBytes(idx(index), in, position, length);
}
@Override
public int nioBufferCount() {
return unwrap().nioBufferCount();
}
@Override
public ByteBuffer nioBuffer(int index, int length) {
checkIndex0(index, length);
return unwrap().nioBuffer(idx(index), length);
}
@Override
public ByteBuffer[] nioBuffers(int index, int length) {
checkIndex0(index, length);
return unwrap().nioBuffers(idx(index), length);
}
@Override
public int forEachByte(int index, int length, ByteProcessor processor) {
checkIndex0(index, length);
int ret = unwrap().forEachByte(idx(index), length, processor);
if (ret >= adjustment) {
return ret - adjustment;
} else {
return -1;
}
}
@Override
public int forEachByteDesc(int index, int length, ByteProcessor processor) {
checkIndex0(index, length);
int ret = unwrap().forEachByteDesc(idx(index), length, processor);
if (ret >= adjustment) {
return ret - adjustment;
} else {
return -1;
}
}
/**
* Returns the index with the needed adjustment.
*/
final int idx(int index) {
return index + adjustment;
}
static void checkSliceOutOfBounds(int index, int length, ByteBuf buffer) {
if (isOutOfBounds(index, length, buffer.capacity())) {
throw new IndexOutOfBoundsException(buffer + ".slice(" + index + ", " + length + ')');
}
}
}
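A minimal sketch of the index adjustment that idx(int) above applies: index 0 of a slice maps to the slice's start offset in the parent, and because the storage is shared, writes made through the slice are visible in the parent. SliceAdjustmentDemo is a hypothetical name.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

// Illustrative only: a sliced buffer translates its indices by a fixed adjustment over shared memory.
public final class SliceAdjustmentDemo {
    public static void main(String[] args) {
        ByteBuf parent = Unpooled.wrappedBuffer(new byte[] {10, 20, 30, 40, 50, 60});
        ByteBuf slice = parent.slice(2, 3);
        System.out.println(slice.getByte(0));  // 30, i.e. parent.getByte(2)
        slice.setByte(0, 99);                  // shared storage: the parent sees the change
        System.out.println(parent.getByte(2)); // 99
        parent.release();
    }
}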

AbstractUnsafeSwappedByteBuf.java

@ -0,0 +1,171 @@
/*
* Copyright 2015 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.internal.PlatformDependent;
import java.nio.ByteOrder;
import static io.netty.util.internal.PlatformDependent.BIG_ENDIAN_NATIVE_ORDER;
/**
* Special {@link SwappedByteBuf} for {@link ByteBuf}s that use unsafe.
*/
abstract class AbstractUnsafeSwappedByteBuf extends SwappedByteBuf {
private final boolean nativeByteOrder;
private final AbstractByteBuf wrapped;
AbstractUnsafeSwappedByteBuf(AbstractByteBuf buf) {
super(buf);
assert PlatformDependent.isUnaligned();
wrapped = buf;
nativeByteOrder = BIG_ENDIAN_NATIVE_ORDER == (order() == ByteOrder.BIG_ENDIAN);
}
@Override
public final long getLong(int index) {
wrapped.checkIndex(index, 8);
long v = _getLong(wrapped, index);
return nativeByteOrder ? v : Long.reverseBytes(v);
}
@Override
public final float getFloat(int index) {
return Float.intBitsToFloat(getInt(index));
}
@Override
public final double getDouble(int index) {
return Double.longBitsToDouble(getLong(index));
}
@Override
public final char getChar(int index) {
return (char) getShort(index);
}
@Override
public final long getUnsignedInt(int index) {
return getInt(index) & 0xFFFFFFFFL;
}
@Override
public final int getInt(int index) {
wrapped.checkIndex(index, 4);
int v = _getInt(wrapped, index);
return nativeByteOrder ? v : Integer.reverseBytes(v);
}
@Override
public final int getUnsignedShort(int index) {
return getShort(index) & 0xFFFF;
}
@Override
public final short getShort(int index) {
wrapped.checkIndex(index, 2);
short v = _getShort(wrapped, index);
return nativeByteOrder ? v : Short.reverseBytes(v);
}
@Override
public final ByteBuf setShort(int index, int value) {
wrapped.checkIndex(index, 2);
_setShort(wrapped, index, nativeByteOrder ? (short) value : Short.reverseBytes((short) value));
return this;
}
@Override
public final ByteBuf setInt(int index, int value) {
wrapped.checkIndex(index, 4);
_setInt(wrapped, index, nativeByteOrder ? value : Integer.reverseBytes(value));
return this;
}
@Override
public final ByteBuf setLong(int index, long value) {
wrapped.checkIndex(index, 8);
_setLong(wrapped, index, nativeByteOrder ? value : Long.reverseBytes(value));
return this;
}
@Override
public final ByteBuf setChar(int index, int value) {
setShort(index, value);
return this;
}
@Override
public final ByteBuf setFloat(int index, float value) {
setInt(index, Float.floatToRawIntBits(value));
return this;
}
@Override
public final ByteBuf setDouble(int index, double value) {
setLong(index, Double.doubleToRawLongBits(value));
return this;
}
@Override
public final ByteBuf writeShort(int value) {
wrapped.ensureWritable0(2);
_setShort(wrapped, wrapped.writerIndex, nativeByteOrder ? (short) value : Short.reverseBytes((short) value));
wrapped.writerIndex += 2;
return this;
}
@Override
public final ByteBuf writeInt(int value) {
wrapped.ensureWritable0(4);
_setInt(wrapped, wrapped.writerIndex, nativeByteOrder ? value : Integer.reverseBytes(value));
wrapped.writerIndex += 4;
return this;
}
@Override
public final ByteBuf writeLong(long value) {
wrapped.ensureWritable0(8);
_setLong(wrapped, wrapped.writerIndex, nativeByteOrder ? value : Long.reverseBytes(value));
wrapped.writerIndex += 8;
return this;
}
@Override
public final ByteBuf writeChar(int value) {
writeShort(value);
return this;
}
@Override
public final ByteBuf writeFloat(float value) {
writeInt(Float.floatToRawIntBits(value));
return this;
}
@Override
public final ByteBuf writeDouble(double value) {
writeLong(Double.doubleToRawLongBits(value));
return this;
}
protected abstract short _getShort(AbstractByteBuf wrapped, int index);
protected abstract int _getInt(AbstractByteBuf wrapped, int index);
protected abstract long _getLong(AbstractByteBuf wrapped, int index);
protected abstract void _setShort(AbstractByteBuf wrapped, int index, short value);
protected abstract void _setInt(AbstractByteBuf wrapped, int index, int value);
protected abstract void _setLong(AbstractByteBuf wrapped, int index, long value);
}
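A minimal sketch of what a swapped view does: multi-byte reads and writes are byte-reversed while the underlying storage is shared with the original buffer. Whether the unsafe-backed variant above or the plain SwappedByteBuf is used depends on the platform; the snippet only relies on the deprecated ByteBuf.order(ByteOrder) API, and SwappedOrderDemo is a hypothetical name.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import java.nio.ByteOrder;

// Illustrative only: a little-endian view writes the bytes of an int in reversed order.
public final class SwappedOrderDemo {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.buffer(4);
        ByteBuf le = buf.order(ByteOrder.LITTLE_ENDIAN); // swapped view over the same memory
        le.writeInt(0x01020304);                         // stored as 04 03 02 01
        System.out.println(Integer.toHexString(buf.getInt(0))); // 4030201 (big-endian read)
        buf.release();
    }
}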

AdvancedLeakAwareByteBuf.java

@ -0,0 +1,968 @@
/*
* Copyright 2013 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.ByteProcessor;
import io.netty.util.ResourceLeakDetector;
import io.netty.util.ResourceLeakTracker;
import io.netty.util.internal.SystemPropertyUtil;
import io.netty.util.internal.logging.InternalLogger;
import io.netty.util.internal.logging.InternalLoggerFactory;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.channels.FileChannel;
import java.nio.channels.GatheringByteChannel;
import java.nio.channels.ScatteringByteChannel;
import java.nio.charset.Charset;
final class AdvancedLeakAwareByteBuf extends SimpleLeakAwareByteBuf {
// If set to true we will only record stacktraces for touch(...), release(...) and retain(...) calls.
private static final String PROP_ACQUIRE_AND_RELEASE_ONLY = "io.netty.leakDetection.acquireAndReleaseOnly";
private static final boolean ACQUIRE_AND_RELEASE_ONLY;
private static final InternalLogger logger = InternalLoggerFactory.getInstance(AdvancedLeakAwareByteBuf.class);
static {
ACQUIRE_AND_RELEASE_ONLY = SystemPropertyUtil.getBoolean(PROP_ACQUIRE_AND_RELEASE_ONLY, false);
if (logger.isDebugEnabled()) {
logger.debug("-D{}: {}", PROP_ACQUIRE_AND_RELEASE_ONLY, ACQUIRE_AND_RELEASE_ONLY);
}
ResourceLeakDetector.addExclusions(
AdvancedLeakAwareByteBuf.class, "touch", "recordLeakNonRefCountingOperation");
}
AdvancedLeakAwareByteBuf(ByteBuf buf, ResourceLeakTracker<ByteBuf> leak) {
super(buf, leak);
}
AdvancedLeakAwareByteBuf(ByteBuf wrapped, ByteBuf trackedByteBuf, ResourceLeakTracker<ByteBuf> leak) {
super(wrapped, trackedByteBuf, leak);
}
static void recordLeakNonRefCountingOperation(ResourceLeakTracker<ByteBuf> leak) {
if (!ACQUIRE_AND_RELEASE_ONLY) {
leak.record();
}
}
@Override
public ByteBuf order(ByteOrder endianness) {
recordLeakNonRefCountingOperation(leak);
return super.order(endianness);
}
@Override
public ByteBuf slice() {
recordLeakNonRefCountingOperation(leak);
return super.slice();
}
@Override
public ByteBuf slice(int index, int length) {
recordLeakNonRefCountingOperation(leak);
return super.slice(index, length);
}
@Override
public ByteBuf retainedSlice() {
recordLeakNonRefCountingOperation(leak);
return super.retainedSlice();
}
@Override
public ByteBuf retainedSlice(int index, int length) {
recordLeakNonRefCountingOperation(leak);
return super.retainedSlice(index, length);
}
@Override
public ByteBuf retainedDuplicate() {
recordLeakNonRefCountingOperation(leak);
return super.retainedDuplicate();
}
@Override
public ByteBuf readRetainedSlice(int length) {
recordLeakNonRefCountingOperation(leak);
return super.readRetainedSlice(length);
}
@Override
public ByteBuf duplicate() {
recordLeakNonRefCountingOperation(leak);
return super.duplicate();
}
@Override
public ByteBuf readSlice(int length) {
recordLeakNonRefCountingOperation(leak);
return super.readSlice(length);
}
@Override
public ByteBuf discardReadBytes() {
recordLeakNonRefCountingOperation(leak);
return super.discardReadBytes();
}
@Override
public ByteBuf discardSomeReadBytes() {
recordLeakNonRefCountingOperation(leak);
return super.discardSomeReadBytes();
}
@Override
public ByteBuf ensureWritable(int minWritableBytes) {
recordLeakNonRefCountingOperation(leak);
return super.ensureWritable(minWritableBytes);
}
@Override
public int ensureWritable(int minWritableBytes, boolean force) {
recordLeakNonRefCountingOperation(leak);
return super.ensureWritable(minWritableBytes, force);
}
@Override
public boolean getBoolean(int index) {
recordLeakNonRefCountingOperation(leak);
return super.getBoolean(index);
}
@Override
public byte getByte(int index) {
recordLeakNonRefCountingOperation(leak);
return super.getByte(index);
}
@Override
public short getUnsignedByte(int index) {
recordLeakNonRefCountingOperation(leak);
return super.getUnsignedByte(index);
}
@Override
public short getShort(int index) {
recordLeakNonRefCountingOperation(leak);
return super.getShort(index);
}
@Override
public int getUnsignedShort(int index) {
recordLeakNonRefCountingOperation(leak);
return super.getUnsignedShort(index);
}
@Override
public int getMedium(int index) {
recordLeakNonRefCountingOperation(leak);
return super.getMedium(index);
}
@Override
public int getUnsignedMedium(int index) {
recordLeakNonRefCountingOperation(leak);
return super.getUnsignedMedium(index);
}
@Override
public int getInt(int index) {
recordLeakNonRefCountingOperation(leak);
return super.getInt(index);
}
@Override
public long getUnsignedInt(int index) {
recordLeakNonRefCountingOperation(leak);
return super.getUnsignedInt(index);
}
@Override
public long getLong(int index) {
recordLeakNonRefCountingOperation(leak);
return super.getLong(index);
}
@Override
public char getChar(int index) {
recordLeakNonRefCountingOperation(leak);
return super.getChar(index);
}
@Override
public float getFloat(int index) {
recordLeakNonRefCountingOperation(leak);
return super.getFloat(index);
}
@Override
public double getDouble(int index) {
recordLeakNonRefCountingOperation(leak);
return super.getDouble(index);
}
@Override
public ByteBuf getBytes(int index, ByteBuf dst) {
recordLeakNonRefCountingOperation(leak);
return super.getBytes(index, dst);
}
@Override
public ByteBuf getBytes(int index, ByteBuf dst, int length) {
recordLeakNonRefCountingOperation(leak);
return super.getBytes(index, dst, length);
}
@Override
public ByteBuf getBytes(int index, ByteBuf dst, int dstIndex, int length) {
recordLeakNonRefCountingOperation(leak);
return super.getBytes(index, dst, dstIndex, length);
}
@Override
public ByteBuf getBytes(int index, byte[] dst) {
recordLeakNonRefCountingOperation(leak);
return super.getBytes(index, dst);
}
@Override
public ByteBuf getBytes(int index, byte[] dst, int dstIndex, int length) {
recordLeakNonRefCountingOperation(leak);
return super.getBytes(index, dst, dstIndex, length);
}
@Override
public ByteBuf getBytes(int index, ByteBuffer dst) {
recordLeakNonRefCountingOperation(leak);
return super.getBytes(index, dst);
}
@Override
public ByteBuf getBytes(int index, OutputStream out, int length) throws IOException {
recordLeakNonRefCountingOperation(leak);
return super.getBytes(index, out, length);
}
@Override
public int getBytes(int index, GatheringByteChannel out, int length) throws IOException {
recordLeakNonRefCountingOperation(leak);
return super.getBytes(index, out, length);
}
@Override
public CharSequence getCharSequence(int index, int length, Charset charset) {
recordLeakNonRefCountingOperation(leak);
return super.getCharSequence(index, length, charset);
}
@Override
public ByteBuf setBoolean(int index, boolean value) {
recordLeakNonRefCountingOperation(leak);
return super.setBoolean(index, value);
}
@Override
public ByteBuf setByte(int index, int value) {
recordLeakNonRefCountingOperation(leak);
return super.setByte(index, value);
}
@Override
public ByteBuf setShort(int index, int value) {
recordLeakNonRefCountingOperation(leak);
return super.setShort(index, value);
}
@Override
public ByteBuf setMedium(int index, int value) {
recordLeakNonRefCountingOperation(leak);
return super.setMedium(index, value);
}
@Override
public ByteBuf setInt(int index, int value) {
recordLeakNonRefCountingOperation(leak);
return super.setInt(index, value);
}
@Override
public ByteBuf setLong(int index, long value) {
recordLeakNonRefCountingOperation(leak);
return super.setLong(index, value);
}
@Override
public ByteBuf setChar(int index, int value) {
recordLeakNonRefCountingOperation(leak);
return super.setChar(index, value);
}
@Override
public ByteBuf setFloat(int index, float value) {
recordLeakNonRefCountingOperation(leak);
return super.setFloat(index, value);
}
@Override
public ByteBuf setDouble(int index, double value) {
recordLeakNonRefCountingOperation(leak);
return super.setDouble(index, value);
}
@Override
public ByteBuf setBytes(int index, ByteBuf src) {
recordLeakNonRefCountingOperation(leak);
return super.setBytes(index, src);
}
@Override
public ByteBuf setBytes(int index, ByteBuf src, int length) {
recordLeakNonRefCountingOperation(leak);
return super.setBytes(index, src, length);
}
@Override
public ByteBuf setBytes(int index, ByteBuf src, int srcIndex, int length) {
recordLeakNonRefCountingOperation(leak);
return super.setBytes(index, src, srcIndex, length);
}
@Override
public ByteBuf setBytes(int index, byte[] src) {
recordLeakNonRefCountingOperation(leak);
return super.setBytes(index, src);
}
@Override
public ByteBuf setBytes(int index, byte[] src, int srcIndex, int length) {
recordLeakNonRefCountingOperation(leak);
return super.setBytes(index, src, srcIndex, length);
}
@Override
public ByteBuf setBytes(int index, ByteBuffer src) {
recordLeakNonRefCountingOperation(leak);
return super.setBytes(index, src);
}
@Override
public int setBytes(int index, InputStream in, int length) throws IOException {
recordLeakNonRefCountingOperation(leak);
return super.setBytes(index, in, length);
}
@Override
public int setBytes(int index, ScatteringByteChannel in, int length) throws IOException {
recordLeakNonRefCountingOperation(leak);
return super.setBytes(index, in, length);
}
@Override
public ByteBuf setZero(int index, int length) {
recordLeakNonRefCountingOperation(leak);
return super.setZero(index, length);
}
@Override
public int setCharSequence(int index, CharSequence sequence, Charset charset) {
recordLeakNonRefCountingOperation(leak);
return super.setCharSequence(index, sequence, charset);
}
@Override
public boolean readBoolean() {
recordLeakNonRefCountingOperation(leak);
return super.readBoolean();
}
@Override
public byte readByte() {
recordLeakNonRefCountingOperation(leak);
return super.readByte();
}
@Override
public short readUnsignedByte() {
recordLeakNonRefCountingOperation(leak);
return super.readUnsignedByte();
}
@Override
public short readShort() {
recordLeakNonRefCountingOperation(leak);
return super.readShort();
}
@Override
public int readUnsignedShort() {
recordLeakNonRefCountingOperation(leak);
return super.readUnsignedShort();
}
@Override
public int readMedium() {
recordLeakNonRefCountingOperation(leak);
return super.readMedium();
}
@Override
public int readUnsignedMedium() {
recordLeakNonRefCountingOperation(leak);
return super.readUnsignedMedium();
}
@Override
public int readInt() {
recordLeakNonRefCountingOperation(leak);
return super.readInt();
}
@Override
public long readUnsignedInt() {
recordLeakNonRefCountingOperation(leak);
return super.readUnsignedInt();
}
@Override
public long readLong() {
recordLeakNonRefCountingOperation(leak);
return super.readLong();
}
@Override
public char readChar() {
recordLeakNonRefCountingOperation(leak);
return super.readChar();
}
@Override
public float readFloat() {
recordLeakNonRefCountingOperation(leak);
return super.readFloat();
}
@Override
public double readDouble() {
recordLeakNonRefCountingOperation(leak);
return super.readDouble();
}
@Override
public ByteBuf readBytes(int length) {
recordLeakNonRefCountingOperation(leak);
return super.readBytes(length);
}
@Override
public ByteBuf readBytes(ByteBuf dst) {
recordLeakNonRefCountingOperation(leak);
return super.readBytes(dst);
}
@Override
public ByteBuf readBytes(ByteBuf dst, int length) {
recordLeakNonRefCountingOperation(leak);
return super.readBytes(dst, length);
}
@Override
public ByteBuf readBytes(ByteBuf dst, int dstIndex, int length) {
recordLeakNonRefCountingOperation(leak);
return super.readBytes(dst, dstIndex, length);
}
@Override
public ByteBuf readBytes(byte[] dst) {
recordLeakNonRefCountingOperation(leak);
return super.readBytes(dst);
}
@Override
public ByteBuf readBytes(byte[] dst, int dstIndex, int length) {
recordLeakNonRefCountingOperation(leak);
return super.readBytes(dst, dstIndex, length);
}
@Override
public ByteBuf readBytes(ByteBuffer dst) {
recordLeakNonRefCountingOperation(leak);
return super.readBytes(dst);
}
@Override
public ByteBuf readBytes(OutputStream out, int length) throws IOException {
recordLeakNonRefCountingOperation(leak);
return super.readBytes(out, length);
}
@Override
public int readBytes(GatheringByteChannel out, int length) throws IOException {
recordLeakNonRefCountingOperation(leak);
return super.readBytes(out, length);
}
@Override
public CharSequence readCharSequence(int length, Charset charset) {
recordLeakNonRefCountingOperation(leak);
return super.readCharSequence(length, charset);
}
@Override
public ByteBuf skipBytes(int length) {
recordLeakNonRefCountingOperation(leak);
return super.skipBytes(length);
}
@Override
public ByteBuf writeBoolean(boolean value) {
recordLeakNonRefCountingOperation(leak);
return super.writeBoolean(value);
}
@Override
public ByteBuf writeByte(int value) {
recordLeakNonRefCountingOperation(leak);
return super.writeByte(value);
}
@Override
public ByteBuf writeShort(int value) {
recordLeakNonRefCountingOperation(leak);
return super.writeShort(value);
}
@Override
public ByteBuf writeMedium(int value) {
recordLeakNonRefCountingOperation(leak);
return super.writeMedium(value);
}
@Override
public ByteBuf writeInt(int value) {
recordLeakNonRefCountingOperation(leak);
return super.writeInt(value);
}
@Override
public ByteBuf writeLong(long value) {
recordLeakNonRefCountingOperation(leak);
return super.writeLong(value);
}
@Override
public ByteBuf writeChar(int value) {
recordLeakNonRefCountingOperation(leak);
return super.writeChar(value);
}
@Override
public ByteBuf writeFloat(float value) {
recordLeakNonRefCountingOperation(leak);
return super.writeFloat(value);
}
@Override
public ByteBuf writeDouble(double value) {
recordLeakNonRefCountingOperation(leak);
return super.writeDouble(value);
}
@Override
public ByteBuf writeBytes(ByteBuf src) {
recordLeakNonRefCountingOperation(leak);
return super.writeBytes(src);
}
@Override
public ByteBuf writeBytes(ByteBuf src, int length) {
recordLeakNonRefCountingOperation(leak);
return super.writeBytes(src, length);
}
@Override
public ByteBuf writeBytes(ByteBuf src, int srcIndex, int length) {
recordLeakNonRefCountingOperation(leak);
return super.writeBytes(src, srcIndex, length);
}
@Override
public ByteBuf writeBytes(byte[] src) {
recordLeakNonRefCountingOperation(leak);
return super.writeBytes(src);
}
@Override
public ByteBuf writeBytes(byte[] src, int srcIndex, int length) {
recordLeakNonRefCountingOperation(leak);
return super.writeBytes(src, srcIndex, length);
}
@Override
public ByteBuf writeBytes(ByteBuffer src) {
recordLeakNonRefCountingOperation(leak);
return super.writeBytes(src);
}
@Override
public int writeBytes(InputStream in, int length) throws IOException {
recordLeakNonRefCountingOperation(leak);
return super.writeBytes(in, length);
}
@Override
public int writeBytes(ScatteringByteChannel in, int length) throws IOException {
recordLeakNonRefCountingOperation(leak);
return super.writeBytes(in, length);
}
@Override
public ByteBuf writeZero(int length) {
recordLeakNonRefCountingOperation(leak);
return super.writeZero(length);
}
@Override
public int indexOf(int fromIndex, int toIndex, byte value) {
recordLeakNonRefCountingOperation(leak);
return super.indexOf(fromIndex, toIndex, value);
}
@Override
public int bytesBefore(byte value) {
recordLeakNonRefCountingOperation(leak);
return super.bytesBefore(value);
}
@Override
public int bytesBefore(int length, byte value) {
recordLeakNonRefCountingOperation(leak);
return super.bytesBefore(length, value);
}
@Override
public int bytesBefore(int index, int length, byte value) {
recordLeakNonRefCountingOperation(leak);
return super.bytesBefore(index, length, value);
}
@Override
public int forEachByte(ByteProcessor processor) {
recordLeakNonRefCountingOperation(leak);
return super.forEachByte(processor);
}
@Override
public int forEachByte(int index, int length, ByteProcessor processor) {
recordLeakNonRefCountingOperation(leak);
return super.forEachByte(index, length, processor);
}
@Override
public int forEachByteDesc(ByteProcessor processor) {
recordLeakNonRefCountingOperation(leak);
return super.forEachByteDesc(processor);
}
@Override
public int forEachByteDesc(int index, int length, ByteProcessor processor) {
recordLeakNonRefCountingOperation(leak);
return super.forEachByteDesc(index, length, processor);
}
@Override
public ByteBuf copy() {
recordLeakNonRefCountingOperation(leak);
return super.copy();
}
@Override
public ByteBuf copy(int index, int length) {
recordLeakNonRefCountingOperation(leak);
return super.copy(index, length);
}
@Override
public int nioBufferCount() {
recordLeakNonRefCountingOperation(leak);
return super.nioBufferCount();
}
@Override
public ByteBuffer nioBuffer() {
recordLeakNonRefCountingOperation(leak);
return super.nioBuffer();
}
@Override
public ByteBuffer nioBuffer(int index, int length) {
recordLeakNonRefCountingOperation(leak);
return super.nioBuffer(index, length);
}
@Override
public ByteBuffer[] nioBuffers() {
recordLeakNonRefCountingOperation(leak);
return super.nioBuffers();
}
@Override
public ByteBuffer[] nioBuffers(int index, int length) {
recordLeakNonRefCountingOperation(leak);
return super.nioBuffers(index, length);
}
@Override
public ByteBuffer internalNioBuffer(int index, int length) {
recordLeakNonRefCountingOperation(leak);
return super.internalNioBuffer(index, length);
}
@Override
public String toString(Charset charset) {
recordLeakNonRefCountingOperation(leak);
return super.toString(charset);
}
@Override
public String toString(int index, int length, Charset charset) {
recordLeakNonRefCountingOperation(leak);
return super.toString(index, length, charset);
}
@Override
public ByteBuf capacity(int newCapacity) {
recordLeakNonRefCountingOperation(leak);
return super.capacity(newCapacity);
}
@Override
public short getShortLE(int index) {
recordLeakNonRefCountingOperation(leak);
return super.getShortLE(index);
}
@Override
public int getUnsignedShortLE(int index) {
recordLeakNonRefCountingOperation(leak);
return super.getUnsignedShortLE(index);
}
@Override
public int getMediumLE(int index) {
recordLeakNonRefCountingOperation(leak);
return super.getMediumLE(index);
}
@Override
public int getUnsignedMediumLE(int index) {
recordLeakNonRefCountingOperation(leak);
return super.getUnsignedMediumLE(index);
}
@Override
public int getIntLE(int index) {
recordLeakNonRefCountingOperation(leak);
return super.getIntLE(index);
}
@Override
public long getUnsignedIntLE(int index) {
recordLeakNonRefCountingOperation(leak);
return super.getUnsignedIntLE(index);
}
@Override
public long getLongLE(int index) {
recordLeakNonRefCountingOperation(leak);
return super.getLongLE(index);
}
@Override
public ByteBuf setShortLE(int index, int value) {
recordLeakNonRefCountingOperation(leak);
return super.setShortLE(index, value);
}
@Override
public ByteBuf setIntLE(int index, int value) {
recordLeakNonRefCountingOperation(leak);
return super.setIntLE(index, value);
}
@Override
public ByteBuf setMediumLE(int index, int value) {
recordLeakNonRefCountingOperation(leak);
return super.setMediumLE(index, value);
}
@Override
public ByteBuf setLongLE(int index, long value) {
recordLeakNonRefCountingOperation(leak);
return super.setLongLE(index, value);
}
@Override
public short readShortLE() {
recordLeakNonRefCountingOperation(leak);
return super.readShortLE();
}
@Override
public int readUnsignedShortLE() {
recordLeakNonRefCountingOperation(leak);
return super.readUnsignedShortLE();
}
@Override
public int readMediumLE() {
recordLeakNonRefCountingOperation(leak);
return super.readMediumLE();
}
@Override
public int readUnsignedMediumLE() {
recordLeakNonRefCountingOperation(leak);
return super.readUnsignedMediumLE();
}
@Override
public int readIntLE() {
recordLeakNonRefCountingOperation(leak);
return super.readIntLE();
}
@Override
public long readUnsignedIntLE() {
recordLeakNonRefCountingOperation(leak);
return super.readUnsignedIntLE();
}
@Override
public long readLongLE() {
recordLeakNonRefCountingOperation(leak);
return super.readLongLE();
}
@Override
public ByteBuf writeShortLE(int value) {
recordLeakNonRefCountingOperation(leak);
return super.writeShortLE(value);
}
@Override
public ByteBuf writeMediumLE(int value) {
recordLeakNonRefCountingOperation(leak);
return super.writeMediumLE(value);
}
@Override
public ByteBuf writeIntLE(int value) {
recordLeakNonRefCountingOperation(leak);
return super.writeIntLE(value);
}
@Override
public ByteBuf writeLongLE(long value) {
recordLeakNonRefCountingOperation(leak);
return super.writeLongLE(value);
}
@Override
public int writeCharSequence(CharSequence sequence, Charset charset) {
recordLeakNonRefCountingOperation(leak);
return super.writeCharSequence(sequence, charset);
}
@Override
public int getBytes(int index, FileChannel out, long position, int length) throws IOException {
recordLeakNonRefCountingOperation(leak);
return super.getBytes(index, out, position, length);
}
@Override
public int setBytes(int index, FileChannel in, long position, int length) throws IOException {
recordLeakNonRefCountingOperation(leak);
return super.setBytes(index, in, position, length);
}
@Override
public int readBytes(FileChannel out, long position, int length) throws IOException {
recordLeakNonRefCountingOperation(leak);
return super.readBytes(out, position, length);
}
@Override
public int writeBytes(FileChannel in, long position, int length) throws IOException {
recordLeakNonRefCountingOperation(leak);
return super.writeBytes(in, position, length);
}
@Override
public ByteBuf asReadOnly() {
recordLeakNonRefCountingOperation(leak);
return super.asReadOnly();
}
@Override
public ByteBuf retain() {
leak.record();
return super.retain();
}
@Override
public ByteBuf retain(int increment) {
leak.record();
return super.retain(increment);
}
@Override
public boolean release() {
leak.record();
return super.release();
}
@Override
public boolean release(int decrement) {
leak.record();
return super.release(decrement);
}
@Override
public ByteBuf touch() {
leak.record();
return this;
}
@Override
public ByteBuf touch(Object hint) {
leak.record(hint);
return this;
}
@Override
protected AdvancedLeakAwareByteBuf newLeakAwareByteBuf(
ByteBuf buf, ByteBuf trackedByteBuf, ResourceLeakTracker<ByteBuf> leakTracker) {
return new AdvancedLeakAwareByteBuf(buf, trackedByteBuf, leakTracker);
}
}
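A minimal sketch (not part of this file) of when this advanced leak-aware wrapper typically comes into play; it assumes the standard io.netty.util.ResourceLeakDetector API and the default allocator, and only illustrates that at PARANOID level every buffer access is recorded for later leak reports:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.util.ResourceLeakDetector;

public final class LeakDetectionSketch {
    public static void main(String[] args) {
        // At PARANOID level every allocated buffer is tracked and every operation
        // is recorded (recordLeakNonRefCountingOperation / leak.record() above),
        // so leak reports can show the most recent access points.
        ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.PARANOID);

        ByteBuf buf = ByteBufAllocator.DEFAULT.buffer(16);
        buf.writeInt(42);              // recorded as a non-ref-counting operation
        buf.touch("after writeInt");   // records an explicit hint
        buf.release();                 // balanced release: nothing to report
    }
}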

File diff suppressed because it is too large Load diff

File diff suppressed because it is too large Load diff

View file

@ -0,0 +1,134 @@
/*
* Copyright 2012 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
/**
* Implementations are responsible for allocating buffers. Implementations of this interface are expected to be
* thread-safe.
*/
public interface ByteBufAllocator {
ByteBufAllocator DEFAULT = ByteBufUtil.DEFAULT_ALLOCATOR;
/**
* Allocate a {@link ByteBuf}. Whether it is a direct or heap buffer
* depends on the actual implementation.
*/
ByteBuf buffer();
/**
* Allocate a {@link ByteBuf} with the given initial capacity.
* Whether it is a direct or heap buffer depends on the actual implementation.
*/
ByteBuf buffer(int initialCapacity);
/**
* Allocate a {@link ByteBuf} with the given initial capacity and the given
* maximal capacity. Whether it is a direct or heap buffer depends on the actual
* implementation.
*/
ByteBuf buffer(int initialCapacity, int maxCapacity);
/**
* Allocate a {@link ByteBuf}, preferably a direct buffer which is suitable for I/O.
*/
ByteBuf ioBuffer();
/**
* Allocate a {@link ByteBuf}, preferably a direct buffer which is suitable for I/O.
*/
ByteBuf ioBuffer(int initialCapacity);
/**
* Allocate a {@link ByteBuf}, preferably a direct buffer which is suitable for I/O.
*/
ByteBuf ioBuffer(int initialCapacity, int maxCapacity);
/**
* Allocate a heap {@link ByteBuf}.
*/
ByteBuf heapBuffer();
/**
* Allocate a heap {@link ByteBuf} with the given initial capacity.
*/
ByteBuf heapBuffer(int initialCapacity);
/**
* Allocate a heap {@link ByteBuf} with the given initial capacity and the given
* maximal capacity.
*/
ByteBuf heapBuffer(int initialCapacity, int maxCapacity);
/**
* Allocate a direct {@link ByteBuf}.
*/
ByteBuf directBuffer();
/**
* Allocate a direct {@link ByteBuf} with the given initial capacity.
*/
ByteBuf directBuffer(int initialCapacity);
/**
* Allocate a direct {@link ByteBuf} with the given initial capacity and the given
* maximal capacity.
*/
ByteBuf directBuffer(int initialCapacity, int maxCapacity);
/**
* Allocate a {@link CompositeByteBuf}.
* Whether it is a direct or heap buffer depends on the actual implementation.
*/
CompositeByteBuf compositeBuffer();
/**
* Allocate a {@link CompositeByteBuf} with the given maximum number of components that can be stored in it.
* Whether it is a direct or heap buffer depends on the actual implementation.
*/
CompositeByteBuf compositeBuffer(int maxNumComponents);
/**
* Allocate a heap {@link CompositeByteBuf}.
*/
CompositeByteBuf compositeHeapBuffer();
/**
* Allocate a heap {@link CompositeByteBuf} with the given maximum number of components that can be stored in it.
*/
CompositeByteBuf compositeHeapBuffer(int maxNumComponents);
/**
* Allocate a direct {@link CompositeByteBuf}.
*/
CompositeByteBuf compositeDirectBuffer();
/**
* Allocate a direct {@link CompositeByteBuf} with the given maximum number of components that can be stored in it.
*/
CompositeByteBuf compositeDirectBuffer(int maxNumComponents);
/**
* Returns {@code true} if direct {@link ByteBuf}s are pooled.
*/
boolean isDirectBufferPooled();
/**
* Calculate the new capacity to use when a {@link ByteBuf} needs to expand to at least
* {@code minNewCapacity}, with {@code maxCapacity} as the upper bound.
*/
int calculateNewCapacity(int minNewCapacity, int maxCapacity);
}
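A short usage sketch of the allocator API declared above (hypothetical calling code, not part of this commit):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.CompositeByteBuf;

public final class AllocatorSketch {
    public static void main(String[] args) {
        ByteBufAllocator alloc = ByteBufAllocator.DEFAULT;

        ByteBuf io = alloc.ioBuffer(256);           // preferably direct, suitable for I/O
        ByteBuf heap = alloc.heapBuffer(64, 1024);  // initial capacity 64, max capacity 1024
        CompositeByteBuf composite = alloc.compositeBuffer(16); // at most 16 components

        // The allocator decides how far to grow when a buffer needs at least 300 bytes.
        int newCapacity = alloc.calculateNewCapacity(300, 1024);
        System.out.println(newCapacity);

        io.release();
        heap.release();
        composite.release();
    }
}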

View file

@ -0,0 +1,28 @@
/*
* Copyright 2017 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
public interface ByteBufAllocatorMetric {
/**
* Returns the number of bytes of heap memory used by a {@link ByteBufAllocator} or {@code -1} if unknown.
*/
long usedHeapMemory();
/**
* Returns the number of bytes of direct memory used by a {@link ByteBufAllocator} or {@code -1} if unknown.
*/
long usedDirectMemory();
}

View file

@ -0,0 +1,24 @@
/*
* Copyright 2017 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
public interface ByteBufAllocatorMetricProvider {
/**
* Returns a {@link ByteBufAllocatorMetric} for a {@link ByteBufAllocator}.
*/
ByteBufAllocatorMetric metric();
}
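A sketch of how the two metric interfaces above are usually queried together; whether the default allocator actually implements ByteBufAllocatorMetricProvider depends on its configuration, so the instanceof check is part of the assumption:

import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.ByteBufAllocatorMetric;
import io.netty.buffer.ByteBufAllocatorMetricProvider;

public final class AllocatorMetricsSketch {
    public static void main(String[] args) {
        ByteBufAllocator alloc = ByteBufAllocator.DEFAULT;
        if (alloc instanceof ByteBufAllocatorMetricProvider) {
            ByteBufAllocatorMetric metric = ((ByteBufAllocatorMetricProvider) alloc).metric();
            // Both methods return -1 if the allocator cannot report the value.
            System.out.println("heap bytes:   " + metric.usedHeapMemory());
            System.out.println("direct bytes: " + metric.usedDirectMemory());
        }
    }
}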

View file

@ -0,0 +1,32 @@
/*
* Copyright 2022 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
/**
* An interface that can be implemented by any object that knows how to turn itself into a {@link ByteBuf}.
* All {@link ByteBuf} classes implement this interface, and return themselves.
*/
public interface ByteBufConvertible {
/**
* Turn this object into a {@link ByteBuf}.
* This does <strong>not</strong> increment the reference count of the {@link ByteBuf} instance.
* The conversion or exposure of the {@link ByteBuf} must be idempotent, so that this method can be called
* either once, or multiple times, without causing any change in program behaviour.
*
* @return A {@link ByteBuf} instance from this object.
*/
ByteBuf asByteBuf();
}
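A small sketch of the contract described above; the buffer allocation via ByteBufAllocator.DEFAULT is an assumption of the example, not part of the interface:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.ByteBufConvertible;

public final class ConvertibleSketch {
    public static void main(String[] args) {
        ByteBuf buf = ByteBufAllocator.DEFAULT.buffer(8);
        ByteBufConvertible convertible = buf;    // every ByteBuf implements this interface
        ByteBuf view = convertible.asByteBuf();  // returns the buffer itself
        System.out.println(view == buf);         // true
        System.out.println(buf.refCnt());        // still 1: asByteBuf() did not retain
        buf.release();
    }
}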

View file

@ -0,0 +1,63 @@
/*
* Copyright 2013 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.ReferenceCounted;
/**
* A packet which is sent or received.
*/
public interface ByteBufHolder extends ReferenceCounted {
/**
* Return the data which is held by this {@link ByteBufHolder}.
*/
ByteBuf content();
/**
* Creates a deep copy of this {@link ByteBufHolder}.
*/
ByteBufHolder copy();
/**
* Duplicates this {@link ByteBufHolder}. Be aware that this will not automatically call {@link #retain()}.
*/
ByteBufHolder duplicate();
/**
* Duplicates this {@link ByteBufHolder}. Unlike {@link #duplicate()}, this method returns a retained duplicate.
*
* @see ByteBuf#retainedDuplicate()
*/
ByteBufHolder retainedDuplicate();
/**
* Returns a new {@link ByteBufHolder} which contains the specified {@code content}.
*/
ByteBufHolder replace(ByteBuf content);
@Override
ByteBufHolder retain();
@Override
ByteBufHolder retain(int increment);
@Override
ByteBufHolder touch();
@Override
ByteBufHolder touch(Object hint);
}
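A sketch of the reference-counting contract above, using DefaultByteBufHolder (the default implementation added later in this commit):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.ByteBufHolder;
import io.netty.buffer.DefaultByteBufHolder;

public final class HolderSketch {
    public static void main(String[] args) {
        ByteBuf payload = ByteBufAllocator.DEFAULT.buffer(16).writeLong(1L);
        ByteBufHolder holder = new DefaultByteBufHolder(payload);

        // retainedDuplicate() shares the content and increments its reference count.
        ByteBufHolder retained = holder.retainedDuplicate();
        System.out.println(retained.content().readLong()); // 1

        retained.release(); // drops the extra reference on the shared content
        holder.release();   // drops the original reference; the buffer is now freed
    }
}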

View file

@ -0,0 +1,330 @@
/*
* Copyright 2012 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import static io.netty.util.internal.ObjectUtil.checkPositiveOrZero;
import io.netty.util.ReferenceCounted;
import io.netty.util.internal.ObjectUtil;
import io.netty.util.internal.StringUtil;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
/**
* An {@link InputStream} which reads data from a {@link ByteBuf}.
* <p>
* A read operation against this stream will occur at the {@code readerIndex}
* of its underlying buffer and the {@code readerIndex} will increase during
* the read operation. Please note that it only reads up to the number of
* readable bytes determined at the moment of construction. Therefore,
* updating {@link ByteBuf#writerIndex()} will not affect the return
* value of {@link #available()}.
* <p>
* This stream implements {@link DataInput} for your convenience.
* The endianness of the stream is not always big endian but depends on
* the endianness of the underlying buffer.
*
* @see ByteBufOutputStream
*/
public class ByteBufInputStream extends InputStream implements DataInput {
private final ByteBuf buffer;
private final int startIndex;
private final int endIndex;
private boolean closed;
/**
* To preserve backwards compatibility (which didn't transfer ownership) we support a conditional flag which
* indicates if {@link #buffer} should be released when this {@link InputStream} is closed.
* However, in future releases, ownership should always be transferred and callers of this class should call
* {@link ReferenceCounted#retain()} if necessary.
*/
private final boolean releaseOnClose;
/**
* Creates a new stream which reads data from the specified {@code buffer}
* starting at the current {@code readerIndex} and ending at the current
* {@code writerIndex}.
* @param buffer The buffer which provides the content for this {@link InputStream}.
*/
public ByteBufInputStream(ByteBuf buffer) {
this(buffer, buffer.readableBytes());
}
/**
* Creates a new stream which reads data from the specified {@code buffer}
* starting at the current {@code readerIndex} and ending at
* {@code readerIndex + length}.
* @param buffer The buffer which provides the content for this {@link InputStream}.
* @param length The length of the buffer to use for this {@link InputStream}.
* @throws IndexOutOfBoundsException
* if {@code readerIndex + length} is greater than
* {@code writerIndex}
*/
public ByteBufInputStream(ByteBuf buffer, int length) {
this(buffer, length, false);
}
/**
* Creates a new stream which reads data from the specified {@code buffer}
* starting at the current {@code readerIndex} and ending at the current
* {@code writerIndex}.
* @param buffer The buffer which provides the content for this {@link InputStream}.
* @param releaseOnClose {@code true} means that when {@link #close()} is called then {@link ByteBuf#release()} will
* be called on {@code buffer}.
*/
public ByteBufInputStream(ByteBuf buffer, boolean releaseOnClose) {
this(buffer, buffer.readableBytes(), releaseOnClose);
}
/**
* Creates a new stream which reads data from the specified {@code buffer}
* starting at the current {@code readerIndex} and ending at
* {@code readerIndex + length}.
* @param buffer The buffer which provides the content for this {@link InputStream}.
* @param length The length of the buffer to use for this {@link InputStream}.
* @param releaseOnClose {@code true} means that when {@link #close()} is called then {@link ByteBuf#release()} will
* be called on {@code buffer}.
* @throws IndexOutOfBoundsException
* if {@code readerIndex + length} is greater than
* {@code writerIndex}
*/
public ByteBufInputStream(ByteBuf buffer, int length, boolean releaseOnClose) {
ObjectUtil.checkNotNull(buffer, "buffer");
if (length < 0) {
if (releaseOnClose) {
buffer.release();
}
checkPositiveOrZero(length, "length");
}
if (length > buffer.readableBytes()) {
if (releaseOnClose) {
buffer.release();
}
throw new IndexOutOfBoundsException("Too many bytes to be read - Needs "
+ length + ", maximum is " + buffer.readableBytes());
}
this.releaseOnClose = releaseOnClose;
this.buffer = buffer;
startIndex = buffer.readerIndex();
endIndex = startIndex + length;
buffer.markReaderIndex();
}
/**
* Returns the number of bytes read by this stream so far.
*/
public int readBytes() {
return buffer.readerIndex() - startIndex;
}
@Override
public void close() throws IOException {
try {
super.close();
} finally {
// The Closeable interface says "If the stream is already closed then invoking this method has no effect."
if (releaseOnClose && !closed) {
closed = true;
buffer.release();
}
}
}
@Override
public int available() throws IOException {
return endIndex - buffer.readerIndex();
}
// Suppress a warning since the class is not thread-safe
@Override
public void mark(int readlimit) {
buffer.markReaderIndex();
}
@Override
public boolean markSupported() {
return true;
}
@Override
public int read() throws IOException {
int available = available();
if (available == 0) {
return -1;
}
return buffer.readByte() & 0xff;
}
@Override
public int read(byte[] b, int off, int len) throws IOException {
int available = available();
if (available == 0) {
return -1;
}
len = Math.min(available, len);
buffer.readBytes(b, off, len);
return len;
}
// Suppress a warning since the class is not thread-safe
@Override
public void reset() throws IOException {
buffer.resetReaderIndex();
}
@Override
public long skip(long n) throws IOException {
if (n > Integer.MAX_VALUE) {
return skipBytes(Integer.MAX_VALUE);
} else {
return skipBytes((int) n);
}
}
@Override
public boolean readBoolean() throws IOException {
checkAvailable(1);
return read() != 0;
}
@Override
public byte readByte() throws IOException {
int available = available();
if (available == 0) {
throw new EOFException();
}
return buffer.readByte();
}
@Override
public char readChar() throws IOException {
return (char) readShort();
}
@Override
public double readDouble() throws IOException {
return Double.longBitsToDouble(readLong());
}
@Override
public float readFloat() throws IOException {
return Float.intBitsToFloat(readInt());
}
@Override
public void readFully(byte[] b) throws IOException {
readFully(b, 0, b.length);
}
@Override
public void readFully(byte[] b, int off, int len) throws IOException {
checkAvailable(len);
buffer.readBytes(b, off, len);
}
@Override
public int readInt() throws IOException {
checkAvailable(4);
return buffer.readInt();
}
private StringBuilder lineBuf;
@Override
public String readLine() throws IOException {
int available = available();
if (available == 0) {
return null;
}
if (lineBuf != null) {
lineBuf.setLength(0);
}
loop: do {
int c = buffer.readUnsignedByte();
--available;
switch (c) {
case '\n':
break loop;
case '\r':
if (available > 0 && (char) buffer.getUnsignedByte(buffer.readerIndex()) == '\n') {
buffer.skipBytes(1);
--available;
}
break loop;
default:
if (lineBuf == null) {
lineBuf = new StringBuilder();
}
lineBuf.append((char) c);
}
} while (available > 0);
return lineBuf != null && lineBuf.length() > 0 ? lineBuf.toString() : StringUtil.EMPTY_STRING;
}
@Override
public long readLong() throws IOException {
checkAvailable(8);
return buffer.readLong();
}
@Override
public short readShort() throws IOException {
checkAvailable(2);
return buffer.readShort();
}
@Override
public String readUTF() throws IOException {
return DataInputStream.readUTF(this);
}
@Override
public int readUnsignedByte() throws IOException {
return readByte() & 0xff;
}
@Override
public int readUnsignedShort() throws IOException {
return readShort() & 0xffff;
}
@Override
public int skipBytes(int n) throws IOException {
int nBytes = Math.min(available(), n);
buffer.skipBytes(nBytes);
return nBytes;
}
private void checkAvailable(int fieldSize) throws IOException {
if (fieldSize < 0) {
throw new IndexOutOfBoundsException("fieldSize cannot be a negative number");
}
if (fieldSize > available()) {
throw new EOFException("fieldSize is too long! Length is " + fieldSize
+ ", but maximum is " + available());
}
}
}
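A usage sketch for the stream above; the buffer contents written before reading are an assumption of the example:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.ByteBufInputStream;
import io.netty.util.CharsetUtil;

public final class InputStreamSketch {
    public static void main(String[] args) throws Exception {
        ByteBuf buf = ByteBufAllocator.DEFAULT.buffer();
        buf.writeInt(7);
        buf.writeCharSequence("line\n", CharsetUtil.US_ASCII);

        // releaseOnClose = true transfers ownership of the buffer to the stream.
        try (ByteBufInputStream in = new ByteBufInputStream(buf, true)) {
            System.out.println(in.readInt());   // 7
            System.out.println(in.readLine());  // "line"
            System.out.println(in.readBytes()); // 9 bytes consumed so far
        }
    }
}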

View file

@ -0,0 +1,168 @@
/*
* Copyright 2012 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.CharsetUtil;
import io.netty.util.internal.ObjectUtil;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;
/**
* An {@link OutputStream} which writes data to a {@link ByteBuf}.
* <p>
* A write operation against this stream will occur at the {@code writerIndex}
* of its underlying buffer and the {@code writerIndex} will increase during
* the write operation.
* <p>
* This stream implements {@link DataOutput} for your convenience.
* The endianness of the stream is not always big endian but depends on
* the endianness of the underlying buffer.
*
* @see ByteBufInputStream
*/
public class ByteBufOutputStream extends OutputStream implements DataOutput {
private final ByteBuf buffer;
private final int startIndex;
private DataOutputStream utf8out; // lazily-instantiated
private boolean closed;
/**
* Creates a new stream which writes data to the specified {@code buffer}.
*/
public ByteBufOutputStream(ByteBuf buffer) {
this.buffer = ObjectUtil.checkNotNull(buffer, "buffer");
startIndex = buffer.writerIndex();
}
/**
* Returns the number of bytes written by this stream so far.
*/
public int writtenBytes() {
return buffer.writerIndex() - startIndex;
}
@Override
public void write(byte[] b, int off, int len) throws IOException {
if (len == 0) {
return;
}
buffer.writeBytes(b, off, len);
}
@Override
public void write(byte[] b) throws IOException {
buffer.writeBytes(b);
}
@Override
public void write(int b) throws IOException {
buffer.writeByte(b);
}
@Override
public void writeBoolean(boolean v) throws IOException {
buffer.writeBoolean(v);
}
@Override
public void writeByte(int v) throws IOException {
buffer.writeByte(v);
}
@Override
public void writeBytes(String s) throws IOException {
buffer.writeCharSequence(s, CharsetUtil.US_ASCII);
}
@Override
public void writeChar(int v) throws IOException {
buffer.writeChar(v);
}
@Override
public void writeChars(String s) throws IOException {
int len = s.length();
for (int i = 0 ; i < len ; i ++) {
buffer.writeChar(s.charAt(i));
}
}
@Override
public void writeDouble(double v) throws IOException {
buffer.writeDouble(v);
}
@Override
public void writeFloat(float v) throws IOException {
buffer.writeFloat(v);
}
@Override
public void writeInt(int v) throws IOException {
buffer.writeInt(v);
}
@Override
public void writeLong(long v) throws IOException {
buffer.writeLong(v);
}
@Override
public void writeShort(int v) throws IOException {
buffer.writeShort((short) v);
}
@Override
public void writeUTF(String s) throws IOException {
DataOutputStream out = utf8out;
if (out == null) {
if (closed) {
throw new IOException("The stream is closed");
}
// Suppress a warning since the stream is closed in the close() method
utf8out = out = new DataOutputStream(this);
}
out.writeUTF(s);
}
/**
* Returns the buffer where this stream is writing data.
*/
public ByteBuf buffer() {
return buffer;
}
@Override
public void close() throws IOException {
if (closed) {
return;
}
closed = true;
try {
super.close();
} finally {
if (utf8out != null) {
utf8out.close();
}
}
}
}
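A matching usage sketch for the output stream; note that close() does not release the underlying buffer, so the caller keeps ownership:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.ByteBufOutputStream;

public final class OutputStreamSketch {
    public static void main(String[] args) throws Exception {
        ByteBuf buf = ByteBufAllocator.DEFAULT.buffer();
        try (ByteBufOutputStream out = new ByteBufOutputStream(buf)) {
            out.writeInt(42);
            out.writeUTF("hello");                   // 2-byte length prefix + modified UTF-8
            System.out.println(out.writtenBytes());  // 4 + 2 + 5 = 11
        }
        // The stream does not release the buffer; the caller still owns it.
        buf.release();
    }
}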

View file

@ -0,0 +1,136 @@
/*
* Copyright 2013 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.ByteProcessor;
/**
* @deprecated Use {@link ByteProcessor}.
*/
@Deprecated
public interface ByteBufProcessor extends ByteProcessor {
/**
* @deprecated Use {@link ByteProcessor#FIND_NUL}.
*/
@Deprecated
ByteBufProcessor FIND_NUL = new ByteBufProcessor() {
@Override
public boolean process(byte value) throws Exception {
return value != 0;
}
};
/**
* @deprecated Use {@link ByteProcessor#FIND_NON_NUL}.
*/
@Deprecated
ByteBufProcessor FIND_NON_NUL = new ByteBufProcessor() {
@Override
public boolean process(byte value) throws Exception {
return value == 0;
}
};
/**
* @deprecated Use {@link ByteProcessor#FIND_CR}.
*/
@Deprecated
ByteBufProcessor FIND_CR = new ByteBufProcessor() {
@Override
public boolean process(byte value) throws Exception {
return value != '\r';
}
};
/**
* @deprecated Use {@link ByteProcessor#FIND_NON_CR}.
*/
@Deprecated
ByteBufProcessor FIND_NON_CR = new ByteBufProcessor() {
@Override
public boolean process(byte value) throws Exception {
return value == '\r';
}
};
/**
* @deprecated Use {@link ByteProcessor#FIND_LF}.
*/
@Deprecated
ByteBufProcessor FIND_LF = new ByteBufProcessor() {
@Override
public boolean process(byte value) throws Exception {
return value != '\n';
}
};
/**
* @deprecated Use {@link ByteProcessor#FIND_NON_LF}.
*/
@Deprecated
ByteBufProcessor FIND_NON_LF = new ByteBufProcessor() {
@Override
public boolean process(byte value) throws Exception {
return value == '\n';
}
};
/**
* @deprecated Use {@link ByteProcessor#FIND_CRLF}.
*/
@Deprecated
ByteBufProcessor FIND_CRLF = new ByteBufProcessor() {
@Override
public boolean process(byte value) throws Exception {
return value != '\r' && value != '\n';
}
};
/**
* @deprecated Use {@link ByteProcessor#FIND_NON_CRLF}.
*/
@Deprecated
ByteBufProcessor FIND_NON_CRLF = new ByteBufProcessor() {
@Override
public boolean process(byte value) throws Exception {
return value == '\r' || value == '\n';
}
};
/**
* @deprecated Use {@link ByteProcessor#FIND_LINEAR_WHITESPACE}.
*/
@Deprecated
ByteBufProcessor FIND_LINEAR_WHITESPACE = new ByteBufProcessor() {
@Override
public boolean process(byte value) throws Exception {
return value != ' ' && value != '\t';
}
};
/**
* @deprecated Use {@link ByteProcessor#FIND_NON_LINEAR_WHITESPACE}.
*/
@Deprecated
ByteBufProcessor FIND_NON_LINEAR_WHITESPACE = new ByteBufProcessor() {
@Override
public boolean process(byte value) throws Exception {
return value == ' ' || value == '\t';
}
};
}
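Since this interface is deprecated in favour of io.netty.util.ByteProcessor, a sketch of the replacement API in use (the buffer contents are an assumption of the example):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.util.ByteProcessor;
import io.netty.util.CharsetUtil;

public final class ProcessorSketch {
    public static void main(String[] args) {
        ByteBuf buf = ByteBufAllocator.DEFAULT.buffer();
        buf.writeCharSequence("ab\ncd", CharsetUtil.US_ASCII);

        // forEachByte returns the index of the first byte for which the processor
        // returns false, or -1 if it never does. FIND_LF stops at the first '\n'.
        int newline = buf.forEachByte(ByteProcessor.FIND_LF);
        System.out.println(newline); // 2

        buf.release();
    }
}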

File diff suppressed because it is too large Load diff

File diff suppressed because it is too large Load diff

View file

@ -0,0 +1,158 @@
/*
* Copyright 2013 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.internal.ObjectUtil;
import io.netty.util.internal.StringUtil;
/**
* Default implementation of a {@link ByteBufHolder} that holds its data in a {@link ByteBuf}.
*
*/
public class DefaultByteBufHolder implements ByteBufHolder {
private final ByteBuf data;
public DefaultByteBufHolder(ByteBuf data) {
this.data = ObjectUtil.checkNotNull(data, "data");
}
@Override
public ByteBuf content() {
return ByteBufUtil.ensureAccessible(data);
}
/**
* {@inheritDoc}
* <p>
* This method calls {@code replace(content().copy())} by default.
*/
@Override
public ByteBufHolder copy() {
return replace(data.copy());
}
/**
* {@inheritDoc}
* <p>
* This method calls {@code replace(content().duplicate())} by default.
*/
@Override
public ByteBufHolder duplicate() {
return replace(data.duplicate());
}
/**
* {@inheritDoc}
* <p>
* This method calls {@code replace(content().retainedDuplicate())} by default.
*/
@Override
public ByteBufHolder retainedDuplicate() {
return replace(data.retainedDuplicate());
}
/**
* {@inheritDoc}
* <p>
* Override this method to return a new instance of this object whose content is set to the specified
* {@code content}. The default implementation of {@link #copy()}, {@link #duplicate()} and
* {@link #retainedDuplicate()} invokes this method to create a copy.
*/
@Override
public ByteBufHolder replace(ByteBuf content) {
return new DefaultByteBufHolder(content);
}
@Override
public int refCnt() {
return data.refCnt();
}
@Override
public ByteBufHolder retain() {
data.retain();
return this;
}
@Override
public ByteBufHolder retain(int increment) {
data.retain(increment);
return this;
}
@Override
public ByteBufHolder touch() {
data.touch();
return this;
}
@Override
public ByteBufHolder touch(Object hint) {
data.touch(hint);
return this;
}
@Override
public boolean release() {
return data.release();
}
@Override
public boolean release(int decrement) {
return data.release(decrement);
}
/**
* Return {@link ByteBuf#toString()} without checking the reference count first. This is useful for implementing
* {@link #toString()}.
*/
protected final String contentToString() {
return data.toString();
}
@Override
public String toString() {
return StringUtil.simpleClassName(this) + '(' + contentToString() + ')';
}
/**
* This implementation of the {@code equals} operation is restricted to
* work only with instances of the same class. The reason is that the
* Netty library already has a number of classes that extend {@link DefaultByteBufHolder} and
* override the {@code equals} method with additional comparison logic, and the
* symmetric property of the {@code equals} operation needs to be preserved.
*
* @param o the reference object with which to compare.
* @return {@code true} if this object is the same as the obj
* argument; {@code false} otherwise.
*/
@Override
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (o != null && getClass() == o.getClass()) {
return data.equals(((DefaultByteBufHolder) o).data);
}
return false;
}
@Override
public int hashCode() {
return data.hashCode();
}
}
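As the replace(...) Javadoc above suggests, subclasses typically override only that one method; a hypothetical message type sketch:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.DefaultByteBufHolder;

// A hypothetical message type: overriding replace() is enough for copy(),
// duplicate() and retainedDuplicate() to produce Frame instances as well.
public final class Frame extends DefaultByteBufHolder {
    public Frame(ByteBuf data) {
        super(data);
    }

    @Override
    public Frame replace(ByteBuf content) {
        return new Frame(content);
    }
}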

View file

@ -0,0 +1,410 @@
/*
* Copyright 2012 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.ByteProcessor;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.channels.FileChannel;
import java.nio.channels.GatheringByteChannel;
import java.nio.channels.ScatteringByteChannel;
/**
* A derived buffer which simply forwards all data access requests to its
* parent. It is recommended to use {@link ByteBuf#duplicate()} instead
* of calling the constructor explicitly.
*
* @deprecated Do not use.
*/
@Deprecated
public class DuplicatedByteBuf extends AbstractDerivedByteBuf {
private final ByteBuf buffer;
public DuplicatedByteBuf(ByteBuf buffer) {
this(buffer, buffer.readerIndex(), buffer.writerIndex());
}
DuplicatedByteBuf(ByteBuf buffer, int readerIndex, int writerIndex) {
super(buffer.maxCapacity());
if (buffer instanceof DuplicatedByteBuf) {
this.buffer = ((DuplicatedByteBuf) buffer).buffer;
} else if (buffer instanceof AbstractPooledDerivedByteBuf) {
this.buffer = buffer.unwrap();
} else {
this.buffer = buffer;
}
setIndex(readerIndex, writerIndex);
markReaderIndex();
markWriterIndex();
}
@Override
public ByteBuf unwrap() {
return buffer;
}
@Override
public ByteBufAllocator alloc() {
return unwrap().alloc();
}
@Override
@Deprecated
public ByteOrder order() {
return unwrap().order();
}
@Override
public boolean isDirect() {
return unwrap().isDirect();
}
@Override
public int capacity() {
return unwrap().capacity();
}
@Override
public ByteBuf capacity(int newCapacity) {
unwrap().capacity(newCapacity);
return this;
}
@Override
public boolean hasArray() {
return unwrap().hasArray();
}
@Override
public byte[] array() {
return unwrap().array();
}
@Override
public int arrayOffset() {
return unwrap().arrayOffset();
}
@Override
public boolean hasMemoryAddress() {
return unwrap().hasMemoryAddress();
}
@Override
public long memoryAddress() {
return unwrap().memoryAddress();
}
@Override
public byte getByte(int index) {
return unwrap().getByte(index);
}
@Override
protected byte _getByte(int index) {
return unwrap().getByte(index);
}
@Override
public short getShort(int index) {
return unwrap().getShort(index);
}
@Override
protected short _getShort(int index) {
return unwrap().getShort(index);
}
@Override
public short getShortLE(int index) {
return unwrap().getShortLE(index);
}
@Override
protected short _getShortLE(int index) {
return unwrap().getShortLE(index);
}
@Override
public int getUnsignedMedium(int index) {
return unwrap().getUnsignedMedium(index);
}
@Override
protected int _getUnsignedMedium(int index) {
return unwrap().getUnsignedMedium(index);
}
@Override
public int getUnsignedMediumLE(int index) {
return unwrap().getUnsignedMediumLE(index);
}
@Override
protected int _getUnsignedMediumLE(int index) {
return unwrap().getUnsignedMediumLE(index);
}
@Override
public int getInt(int index) {
return unwrap().getInt(index);
}
@Override
protected int _getInt(int index) {
return unwrap().getInt(index);
}
@Override
public int getIntLE(int index) {
return unwrap().getIntLE(index);
}
@Override
protected int _getIntLE(int index) {
return unwrap().getIntLE(index);
}
@Override
public long getLong(int index) {
return unwrap().getLong(index);
}
@Override
protected long _getLong(int index) {
return unwrap().getLong(index);
}
@Override
public long getLongLE(int index) {
return unwrap().getLongLE(index);
}
@Override
protected long _getLongLE(int index) {
return unwrap().getLongLE(index);
}
@Override
public ByteBuf copy(int index, int length) {
return unwrap().copy(index, length);
}
@Override
public ByteBuf slice(int index, int length) {
return unwrap().slice(index, length);
}
@Override
public ByteBuf getBytes(int index, ByteBuf dst, int dstIndex, int length) {
unwrap().getBytes(index, dst, dstIndex, length);
return this;
}
@Override
public ByteBuf getBytes(int index, byte[] dst, int dstIndex, int length) {
unwrap().getBytes(index, dst, dstIndex, length);
return this;
}
@Override
public ByteBuf getBytes(int index, ByteBuffer dst) {
unwrap().getBytes(index, dst);
return this;
}
@Override
public ByteBuf setByte(int index, int value) {
unwrap().setByte(index, value);
return this;
}
@Override
protected void _setByte(int index, int value) {
unwrap().setByte(index, value);
}
@Override
public ByteBuf setShort(int index, int value) {
unwrap().setShort(index, value);
return this;
}
@Override
protected void _setShort(int index, int value) {
unwrap().setShort(index, value);
}
@Override
public ByteBuf setShortLE(int index, int value) {
unwrap().setShortLE(index, value);
return this;
}
@Override
protected void _setShortLE(int index, int value) {
unwrap().setShortLE(index, value);
}
@Override
public ByteBuf setMedium(int index, int value) {
unwrap().setMedium(index, value);
return this;
}
@Override
protected void _setMedium(int index, int value) {
unwrap().setMedium(index, value);
}
@Override
public ByteBuf setMediumLE(int index, int value) {
unwrap().setMediumLE(index, value);
return this;
}
@Override
protected void _setMediumLE(int index, int value) {
unwrap().setMediumLE(index, value);
}
@Override
public ByteBuf setInt(int index, int value) {
unwrap().setInt(index, value);
return this;
}
@Override
protected void _setInt(int index, int value) {
unwrap().setInt(index, value);
}
@Override
public ByteBuf setIntLE(int index, int value) {
unwrap().setIntLE(index, value);
return this;
}
@Override
protected void _setIntLE(int index, int value) {
unwrap().setIntLE(index, value);
}
@Override
public ByteBuf setLong(int index, long value) {
unwrap().setLong(index, value);
return this;
}
@Override
protected void _setLong(int index, long value) {
unwrap().setLong(index, value);
}
@Override
public ByteBuf setLongLE(int index, long value) {
unwrap().setLongLE(index, value);
return this;
}
@Override
protected void _setLongLE(int index, long value) {
unwrap().setLongLE(index, value);
}
@Override
public ByteBuf setBytes(int index, byte[] src, int srcIndex, int length) {
unwrap().setBytes(index, src, srcIndex, length);
return this;
}
@Override
public ByteBuf setBytes(int index, ByteBuf src, int srcIndex, int length) {
unwrap().setBytes(index, src, srcIndex, length);
return this;
}
@Override
public ByteBuf setBytes(int index, ByteBuffer src) {
unwrap().setBytes(index, src);
return this;
}
@Override
public ByteBuf getBytes(int index, OutputStream out, int length)
throws IOException {
unwrap().getBytes(index, out, length);
return this;
}
@Override
public int getBytes(int index, GatheringByteChannel out, int length)
throws IOException {
return unwrap().getBytes(index, out, length);
}
@Override
public int getBytes(int index, FileChannel out, long position, int length)
throws IOException {
return unwrap().getBytes(index, out, position, length);
}
@Override
public int setBytes(int index, InputStream in, int length)
throws IOException {
return unwrap().setBytes(index, in, length);
}
@Override
public int setBytes(int index, ScatteringByteChannel in, int length)
throws IOException {
return unwrap().setBytes(index, in, length);
}
@Override
public int setBytes(int index, FileChannel in, long position, int length)
throws IOException {
return unwrap().setBytes(index, in, position, length);
}
@Override
public int nioBufferCount() {
return unwrap().nioBufferCount();
}
@Override
public ByteBuffer[] nioBuffers(int index, int length) {
return unwrap().nioBuffers(index, length);
}
@Override
public int forEachByte(int index, int length, ByteProcessor processor) {
return unwrap().forEachByte(index, length, processor);
}
@Override
public int forEachByteDesc(int index, int length, ByteProcessor processor) {
return unwrap().forEachByteDesc(index, length, processor);
}
}
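A sketch of the recommended alternative mentioned in the class Javadoc, ByteBuf#duplicate(), which shares the content but keeps independent indices:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;

public final class DuplicateSketch {
    public static void main(String[] args) {
        ByteBuf buf = ByteBufAllocator.DEFAULT.buffer().writeInt(123);

        // Preferred over new DuplicatedByteBuf(buf): shares the content,
        // but keeps its own reader and writer indices.
        ByteBuf dup = buf.duplicate();
        System.out.println(dup.readInt());     // 123
        System.out.println(buf.readerIndex()); // still 0

        buf.release(); // dup is a view; it does not hold its own reference
    }
}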

File diff suppressed because it is too large Load diff

View file

@ -0,0 +1,688 @@
/*
* Copyright 2013 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.internal.EmptyArrays;
import io.netty.util.internal.RecyclableArrayList;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.ReadOnlyBufferException;
import java.nio.channels.FileChannel;
import java.nio.channels.GatheringByteChannel;
import java.nio.channels.ScatteringByteChannel;
import java.util.Collections;
/**
* {@link ByteBuf} implementation which allows wrapping an array of {@link ByteBuf}s in read-only mode.
* This is useful for writing an array of {@link ByteBuf}s.
*/
final class FixedCompositeByteBuf extends AbstractReferenceCountedByteBuf {
private static final ByteBuf[] EMPTY = { Unpooled.EMPTY_BUFFER };
private final int nioBufferCount;
private final int capacity;
private final ByteBufAllocator allocator;
private final ByteOrder order;
private final ByteBuf[] buffers;
private final boolean direct;
FixedCompositeByteBuf(ByteBufAllocator allocator, ByteBuf... buffers) {
super(AbstractByteBufAllocator.DEFAULT_MAX_CAPACITY);
if (buffers.length == 0) {
this.buffers = EMPTY;
order = ByteOrder.BIG_ENDIAN;
nioBufferCount = 1;
capacity = 0;
direct = Unpooled.EMPTY_BUFFER.isDirect();
} else {
ByteBuf b = buffers[0];
this.buffers = buffers;
boolean direct = true;
int nioBufferCount = b.nioBufferCount();
int capacity = b.readableBytes();
order = b.order();
for (int i = 1; i < buffers.length; i++) {
b = buffers[i];
if (buffers[i].order() != order) {
throw new IllegalArgumentException("All ByteBufs need to have same ByteOrder");
}
nioBufferCount += b.nioBufferCount();
capacity += b.readableBytes();
if (!b.isDirect()) {
direct = false;
}
}
this.nioBufferCount = nioBufferCount;
this.capacity = capacity;
this.direct = direct;
}
setIndex(0, capacity());
this.allocator = allocator;
}
@Override
public boolean isWritable() {
return false;
}
@Override
public boolean isWritable(int size) {
return false;
}
@Override
public ByteBuf discardReadBytes() {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setBytes(int index, ByteBuf src, int srcIndex, int length) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setBytes(int index, byte[] src, int srcIndex, int length) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setBytes(int index, ByteBuffer src) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setByte(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setByte(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setShort(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setShort(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setShortLE(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setMedium(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setMedium(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setMediumLE(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setInt(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setInt(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setIntLE(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setLong(int index, long value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setLong(int index, long value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setLongLE(int index, long value) {
throw new ReadOnlyBufferException();
}
@Override
public int setBytes(int index, InputStream in, int length) {
throw new ReadOnlyBufferException();
}
@Override
public int setBytes(int index, ScatteringByteChannel in, int length) {
throw new ReadOnlyBufferException();
}
@Override
public int setBytes(int index, FileChannel in, long position, int length) {
throw new ReadOnlyBufferException();
}
@Override
public int capacity() {
return capacity;
}
@Override
public int maxCapacity() {
return capacity;
}
@Override
public ByteBuf capacity(int newCapacity) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBufAllocator alloc() {
return allocator;
}
@Override
public ByteOrder order() {
return order;
}
@Override
public ByteBuf unwrap() {
return null;
}
@Override
public boolean isDirect() {
return direct;
}
private Component findComponent(int index) {
int readable = 0;
for (int i = 0 ; i < buffers.length; i++) {
Component comp = null;
ByteBuf b = buffers[i];
if (b instanceof Component) {
comp = (Component) b;
b = comp.buf;
}
readable += b.readableBytes();
if (index < readable) {
if (comp == null) {
// Create a new component and store it in the array so that a new object does not
// need to be created on the next access.
comp = new Component(i, readable - b.readableBytes(), b);
buffers[i] = comp;
}
return comp;
}
}
throw new IllegalStateException();
}
/**
* Return the {@link ByteBuf} stored at the given index of the array.
*/
private ByteBuf buffer(int i) {
ByteBuf b = buffers[i];
return b instanceof Component ? ((Component) b).buf : b;
}
@Override
public byte getByte(int index) {
return _getByte(index);
}
@Override
protected byte _getByte(int index) {
Component c = findComponent(index);
return c.buf.getByte(index - c.offset);
}
@Override
protected short _getShort(int index) {
Component c = findComponent(index);
if (index + 2 <= c.endOffset) {
return c.buf.getShort(index - c.offset);
} else if (order() == ByteOrder.BIG_ENDIAN) {
return (short) ((_getByte(index) & 0xff) << 8 | _getByte(index + 1) & 0xff);
} else {
return (short) (_getByte(index) & 0xff | (_getByte(index + 1) & 0xff) << 8);
}
}
@Override
protected short _getShortLE(int index) {
Component c = findComponent(index);
if (index + 2 <= c.endOffset) {
return c.buf.getShortLE(index - c.offset);
} else if (order() == ByteOrder.BIG_ENDIAN) {
return (short) (_getByte(index) & 0xff | (_getByte(index + 1) & 0xff) << 8);
} else {
return (short) ((_getByte(index) & 0xff) << 8 | _getByte(index + 1) & 0xff);
}
}
@Override
protected int _getUnsignedMedium(int index) {
Component c = findComponent(index);
if (index + 3 <= c.endOffset) {
return c.buf.getUnsignedMedium(index - c.offset);
} else if (order() == ByteOrder.BIG_ENDIAN) {
return (_getShort(index) & 0xffff) << 8 | _getByte(index + 2) & 0xff;
} else {
return _getShort(index) & 0xFFFF | (_getByte(index + 2) & 0xFF) << 16;
}
}
@Override
protected int _getUnsignedMediumLE(int index) {
Component c = findComponent(index);
if (index + 3 <= c.endOffset) {
return c.buf.getUnsignedMediumLE(index - c.offset);
} else if (order() == ByteOrder.BIG_ENDIAN) {
return _getShortLE(index) & 0xffff | (_getByte(index + 2) & 0xff) << 16;
} else {
return (_getShortLE(index) & 0xffff) << 8 | _getByte(index + 2) & 0xff;
}
}
@Override
protected int _getInt(int index) {
Component c = findComponent(index);
if (index + 4 <= c.endOffset) {
return c.buf.getInt(index - c.offset);
} else if (order() == ByteOrder.BIG_ENDIAN) {
return (_getShort(index) & 0xffff) << 16 | _getShort(index + 2) & 0xffff;
} else {
return _getShort(index) & 0xFFFF | (_getShort(index + 2) & 0xFFFF) << 16;
}
}
@Override
protected int _getIntLE(int index) {
Component c = findComponent(index);
if (index + 4 <= c.endOffset) {
return c.buf.getIntLE(index - c.offset);
} else if (order() == ByteOrder.BIG_ENDIAN) {
return _getShortLE(index) & 0xFFFF | (_getShortLE(index + 2) & 0xFFFF) << 16;
} else {
return (_getShortLE(index) & 0xffff) << 16 | _getShortLE(index + 2) & 0xffff;
}
}
@Override
protected long _getLong(int index) {
Component c = findComponent(index);
if (index + 8 <= c.endOffset) {
return c.buf.getLong(index - c.offset);
} else if (order() == ByteOrder.BIG_ENDIAN) {
return (_getInt(index) & 0xffffffffL) << 32 | _getInt(index + 4) & 0xffffffffL;
} else {
return _getInt(index) & 0xFFFFFFFFL | (_getInt(index + 4) & 0xFFFFFFFFL) << 32;
}
}
@Override
protected long _getLongLE(int index) {
Component c = findComponent(index);
if (index + 8 <= c.endOffset) {
return c.buf.getLongLE(index - c.offset);
} else if (order() == ByteOrder.BIG_ENDIAN) {
return _getIntLE(index) & 0xffffffffL | (_getIntLE(index + 4) & 0xffffffffL) << 32;
} else {
return (_getIntLE(index) & 0xffffffffL) << 32 | _getIntLE(index + 4) & 0xffffffffL;
}
}
@Override
public ByteBuf getBytes(int index, byte[] dst, int dstIndex, int length) {
checkDstIndex(index, length, dstIndex, dst.length);
if (length == 0) {
return this;
}
Component c = findComponent(index);
int i = c.index;
int adjustment = c.offset;
ByteBuf s = c.buf;
for (;;) {
int localLength = Math.min(length, s.readableBytes() - (index - adjustment));
s.getBytes(index - adjustment, dst, dstIndex, localLength);
index += localLength;
dstIndex += localLength;
length -= localLength;
adjustment += s.readableBytes();
if (length <= 0) {
break;
}
s = buffer(++i);
}
return this;
}
@Override
public ByteBuf getBytes(int index, ByteBuffer dst) {
int limit = dst.limit();
int length = dst.remaining();
checkIndex(index, length);
if (length == 0) {
return this;
}
try {
Component c = findComponent(index);
int i = c.index;
int adjustment = c.offset;
ByteBuf s = c.buf;
for (;;) {
int localLength = Math.min(length, s.readableBytes() - (index - adjustment));
dst.limit(dst.position() + localLength);
s.getBytes(index - adjustment, dst);
index += localLength;
length -= localLength;
adjustment += s.readableBytes();
if (length <= 0) {
break;
}
s = buffer(++i);
}
} finally {
dst.limit(limit);
}
return this;
}
@Override
public ByteBuf getBytes(int index, ByteBuf dst, int dstIndex, int length) {
checkDstIndex(index, length, dstIndex, dst.capacity());
if (length == 0) {
return this;
}
Component c = findComponent(index);
int i = c.index;
int adjustment = c.offset;
ByteBuf s = c.buf;
for (;;) {
int localLength = Math.min(length, s.readableBytes() - (index - adjustment));
s.getBytes(index - adjustment, dst, dstIndex, localLength);
index += localLength;
dstIndex += localLength;
length -= localLength;
adjustment += s.readableBytes();
if (length <= 0) {
break;
}
s = buffer(++i);
}
return this;
}
@Override
public int getBytes(int index, GatheringByteChannel out, int length)
throws IOException {
int count = nioBufferCount();
if (count == 1) {
return out.write(internalNioBuffer(index, length));
} else {
long writtenBytes = out.write(nioBuffers(index, length));
if (writtenBytes > Integer.MAX_VALUE) {
return Integer.MAX_VALUE;
} else {
return (int) writtenBytes;
}
}
}
@Override
public int getBytes(int index, FileChannel out, long position, int length)
throws IOException {
int count = nioBufferCount();
if (count == 1) {
return out.write(internalNioBuffer(index, length), position);
} else {
long writtenBytes = 0;
for (ByteBuffer buf : nioBuffers(index, length)) {
writtenBytes += out.write(buf, position + writtenBytes);
}
if (writtenBytes > Integer.MAX_VALUE) {
return Integer.MAX_VALUE;
} else {
return (int) writtenBytes;
}
}
}
@Override
public ByteBuf getBytes(int index, OutputStream out, int length) throws IOException {
checkIndex(index, length);
if (length == 0) {
return this;
}
Component c = findComponent(index);
int i = c.index;
int adjustment = c.offset;
ByteBuf s = c.buf;
for (;;) {
int localLength = Math.min(length, s.readableBytes() - (index - adjustment));
s.getBytes(index - adjustment, out, localLength);
index += localLength;
length -= localLength;
adjustment += s.readableBytes();
if (length <= 0) {
break;
}
s = buffer(++i);
}
return this;
}
@Override
public ByteBuf copy(int index, int length) {
checkIndex(index, length);
boolean release = true;
ByteBuf buf = alloc().buffer(length);
try {
buf.writeBytes(this, index, length);
release = false;
return buf;
} finally {
if (release) {
buf.release();
}
}
}
@Override
public int nioBufferCount() {
return nioBufferCount;
}
@Override
public ByteBuffer nioBuffer(int index, int length) {
checkIndex(index, length);
if (buffers.length == 1) {
ByteBuf buf = buffer(0);
if (buf.nioBufferCount() == 1) {
return buf.nioBuffer(index, length);
}
}
ByteBuffer merged = ByteBuffer.allocate(length).order(order());
ByteBuffer[] buffers = nioBuffers(index, length);
//noinspection ForLoopReplaceableByForEach
for (int i = 0; i < buffers.length; i++) {
merged.put(buffers[i]);
}
merged.flip();
return merged;
}
@Override
public ByteBuffer internalNioBuffer(int index, int length) {
if (buffers.length == 1) {
return buffer(0).internalNioBuffer(index, length);
}
throw new UnsupportedOperationException();
}
@Override
public ByteBuffer[] nioBuffers(int index, int length) {
checkIndex(index, length);
if (length == 0) {
return EmptyArrays.EMPTY_BYTE_BUFFERS;
}
RecyclableArrayList array = RecyclableArrayList.newInstance(buffers.length);
try {
Component c = findComponent(index);
int i = c.index;
int adjustment = c.offset;
ByteBuf s = c.buf;
for (;;) {
int localLength = Math.min(length, s.readableBytes() - (index - adjustment));
switch (s.nioBufferCount()) {
case 0:
throw new UnsupportedOperationException();
case 1:
array.add(s.nioBuffer(index - adjustment, localLength));
break;
default:
Collections.addAll(array, s.nioBuffers(index - adjustment, localLength));
}
index += localLength;
length -= localLength;
adjustment += s.readableBytes();
if (length <= 0) {
break;
}
s = buffer(++i);
}
return array.toArray(EmptyArrays.EMPTY_BYTE_BUFFERS);
} finally {
array.recycle();
}
}
@Override
public boolean hasArray() {
switch (buffers.length) {
case 0:
return true;
case 1:
return buffer(0).hasArray();
default:
return false;
}
}
@Override
public byte[] array() {
switch (buffers.length) {
case 0:
return EmptyArrays.EMPTY_BYTES;
case 1:
return buffer(0).array();
default:
throw new UnsupportedOperationException();
}
}
@Override
public int arrayOffset() {
switch (buffers.length) {
case 0:
return 0;
case 1:
return buffer(0).arrayOffset();
default:
throw new UnsupportedOperationException();
}
}
@Override
public boolean hasMemoryAddress() {
switch (buffers.length) {
case 0:
return Unpooled.EMPTY_BUFFER.hasMemoryAddress();
case 1:
return buffer(0).hasMemoryAddress();
default:
return false;
}
}
@Override
public long memoryAddress() {
switch (buffers.length) {
case 0:
return Unpooled.EMPTY_BUFFER.memoryAddress();
case 1:
return buffer(0).memoryAddress();
default:
throw new UnsupportedOperationException();
}
}
@Override
protected void deallocate() {
for (int i = 0; i < buffers.length; i++) {
buffer(i).release();
}
}
@Override
public String toString() {
String result = super.toString();
result = result.substring(0, result.length() - 1);
return result + ", components=" + buffers.length + ')';
}
private static final class Component extends WrappedByteBuf {
private final int index;
private final int offset;
private final int endOffset;
Component(int index, int offset, ByteBuf buf) {
super(buf);
this.index = index;
this.offset = offset;
endOffset = offset + buf.readableBytes();
}
}
}
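A hedged usage sketch (illustrative addition, not part of the original sources): assuming this read-only composite is the buffer returned by Unpooled.wrappedUnmodifiableBuffer(ByteBuf...), the wrapped buffers are exposed through one contiguous index space without copying; the snippet assumes imports of io.netty.buffer.Unpooled and io.netty.util.CharsetUtil.

ByteBuf first = Unpooled.copiedBuffer("foo", CharsetUtil.UTF_8);
ByteBuf second = Unpooled.copiedBuffer("bar", CharsetUtil.UTF_8);
ByteBuf composite = Unpooled.wrappedUnmodifiableBuffer(first, second);
byte[] out = new byte[composite.readableBytes()];
composite.getBytes(0, out);                               // reads across the component boundary
System.out.println(new String(out, CharsetUtil.UTF_8));   // prints "foobar"
composite.release();                                      // releasing the composite releases the wrapped buffers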

View file

@ -0,0 +1,146 @@
/*
* Copyright 2015 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
/**
* Utility class for heap buffers.
*/
final class HeapByteBufUtil {
static byte getByte(byte[] memory, int index) {
return memory[index];
}
static short getShort(byte[] memory, int index) {
return (short) (memory[index] << 8 | memory[index + 1] & 0xFF);
}
static short getShortLE(byte[] memory, int index) {
return (short) (memory[index] & 0xff | memory[index + 1] << 8);
}
static int getUnsignedMedium(byte[] memory, int index) {
return (memory[index] & 0xff) << 16 |
(memory[index + 1] & 0xff) << 8 |
memory[index + 2] & 0xff;
}
static int getUnsignedMediumLE(byte[] memory, int index) {
return memory[index] & 0xff |
(memory[index + 1] & 0xff) << 8 |
(memory[index + 2] & 0xff) << 16;
}
static int getInt(byte[] memory, int index) {
return (memory[index] & 0xff) << 24 |
(memory[index + 1] & 0xff) << 16 |
(memory[index + 2] & 0xff) << 8 |
memory[index + 3] & 0xff;
}
static int getIntLE(byte[] memory, int index) {
return memory[index] & 0xff |
(memory[index + 1] & 0xff) << 8 |
(memory[index + 2] & 0xff) << 16 |
(memory[index + 3] & 0xff) << 24;
}
static long getLong(byte[] memory, int index) {
return ((long) memory[index] & 0xff) << 56 |
((long) memory[index + 1] & 0xff) << 48 |
((long) memory[index + 2] & 0xff) << 40 |
((long) memory[index + 3] & 0xff) << 32 |
((long) memory[index + 4] & 0xff) << 24 |
((long) memory[index + 5] & 0xff) << 16 |
((long) memory[index + 6] & 0xff) << 8 |
(long) memory[index + 7] & 0xff;
}
static long getLongLE(byte[] memory, int index) {
return (long) memory[index] & 0xff |
((long) memory[index + 1] & 0xff) << 8 |
((long) memory[index + 2] & 0xff) << 16 |
((long) memory[index + 3] & 0xff) << 24 |
((long) memory[index + 4] & 0xff) << 32 |
((long) memory[index + 5] & 0xff) << 40 |
((long) memory[index + 6] & 0xff) << 48 |
((long) memory[index + 7] & 0xff) << 56;
}
static void setByte(byte[] memory, int index, int value) {
memory[index] = (byte) value;
}
static void setShort(byte[] memory, int index, int value) {
memory[index] = (byte) (value >>> 8);
memory[index + 1] = (byte) value;
}
static void setShortLE(byte[] memory, int index, int value) {
memory[index] = (byte) value;
memory[index + 1] = (byte) (value >>> 8);
}
static void setMedium(byte[] memory, int index, int value) {
memory[index] = (byte) (value >>> 16);
memory[index + 1] = (byte) (value >>> 8);
memory[index + 2] = (byte) value;
}
static void setMediumLE(byte[] memory, int index, int value) {
memory[index] = (byte) value;
memory[index + 1] = (byte) (value >>> 8);
memory[index + 2] = (byte) (value >>> 16);
}
static void setInt(byte[] memory, int index, int value) {
memory[index] = (byte) (value >>> 24);
memory[index + 1] = (byte) (value >>> 16);
memory[index + 2] = (byte) (value >>> 8);
memory[index + 3] = (byte) value;
}
static void setIntLE(byte[] memory, int index, int value) {
memory[index] = (byte) value;
memory[index + 1] = (byte) (value >>> 8);
memory[index + 2] = (byte) (value >>> 16);
memory[index + 3] = (byte) (value >>> 24);
}
static void setLong(byte[] memory, int index, long value) {
memory[index] = (byte) (value >>> 56);
memory[index + 1] = (byte) (value >>> 48);
memory[index + 2] = (byte) (value >>> 40);
memory[index + 3] = (byte) (value >>> 32);
memory[index + 4] = (byte) (value >>> 24);
memory[index + 5] = (byte) (value >>> 16);
memory[index + 6] = (byte) (value >>> 8);
memory[index + 7] = (byte) value;
}
static void setLongLE(byte[] memory, int index, long value) {
memory[index] = (byte) value;
memory[index + 1] = (byte) (value >>> 8);
memory[index + 2] = (byte) (value >>> 16);
memory[index + 3] = (byte) (value >>> 24);
memory[index + 4] = (byte) (value >>> 32);
memory[index + 5] = (byte) (value >>> 40);
memory[index + 6] = (byte) (value >>> 48);
memory[index + 7] = (byte) (value >>> 56);
}
private HeapByteBufUtil() { }
}
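A minimal round-trip sketch of these helpers (illustrative addition; since the class is package-private, it assumes code living in the io.netty.buffer package): setInt stores the value big-endian, so getInt reads it back unchanged while getIntLE sees the byte-swapped value.

byte[] memory = new byte[4];
HeapByteBufUtil.setInt(memory, 0, 0x0A0B0C0D);
// memory now holds {0x0A, 0x0B, 0x0C, 0x0D}
assert HeapByteBufUtil.getInt(memory, 0) == 0x0A0B0C0D;    // big-endian read
assert HeapByteBufUtil.getIntLE(memory, 0) == 0x0D0C0B0A;  // little-endian read of the same bytes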

View file

@ -0,0 +1,107 @@
/*
* Copyright 2020 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import java.util.Arrays;
/**
* Internal primitive priority queue, used by {@link PoolChunk}.
* The implementation is based on the binary heap, as described in Algorithms by Sedgewick and Wayne.
*/
final class IntPriorityQueue {
public static final int NO_VALUE = -1;
private int[] array = new int[9];
private int size;
public void offer(int handle) {
if (handle == NO_VALUE) {
throw new IllegalArgumentException("The NO_VALUE (" + NO_VALUE + ") cannot be added to the queue.");
}
size++;
if (size == array.length) {
// Grow queue capacity.
array = Arrays.copyOf(array, 1 + (array.length - 1) * 2);
}
array[size] = handle;
lift(size);
}
public void remove(int value) {
for (int i = 1; i <= size; i++) {
if (array[i] == value) {
array[i] = array[size--];
lift(i);
sink(i);
return;
}
}
}
public int peek() {
if (size == 0) {
return NO_VALUE;
}
return array[1];
}
public int poll() {
if (size == 0) {
return NO_VALUE;
}
int val = array[1];
array[1] = array[size];
array[size] = 0;
size--;
sink(1);
return val;
}
public boolean isEmpty() {
return size == 0;
}
private void lift(int index) {
int parentIndex;
while (index > 1 && subord(parentIndex = index >> 1, index)) {
swap(index, parentIndex);
index = parentIndex;
}
}
private void sink(int index) {
int child;
while ((child = index << 1) <= size) {
if (child < size && subord(child, child + 1)) {
child++;
}
if (!subord(index, child)) {
break;
}
swap(index, child);
index = child;
}
}
private boolean subord(int a, int b) {
return array[a] > array[b];
}
private void swap(int a, int b) {
int value = array[a];
array[a] = array[b];
array[b] = value;
}
}
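A small usage sketch (illustrative addition, in-package since the class is package-private): subord() compares with '>', so the heap keeps the smallest value at the root and poll() drains values in ascending order.

IntPriorityQueue runs = new IntPriorityQueue();
runs.offer(5);
runs.offer(1);
runs.offer(3);
assert runs.peek() == 1;                          // smallest value sits at the root
assert runs.poll() == 1;
assert runs.poll() == 3;
assert runs.poll() == 5;
assert runs.poll() == IntPriorityQueue.NO_VALUE;  // empty queue yields the sentinel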

View file

@ -0,0 +1,129 @@
/*
* Copyright 2020 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
/**
* Internal primitive map implementation that is specifically optimised for the runs availability map use case in {@link
* PoolChunk}.
*/
final class LongLongHashMap {
private static final int MASK_TEMPLATE = ~1;
private int mask;
private long[] array;
private int maxProbe;
private long zeroVal;
private final long emptyVal;
LongLongHashMap(long emptyVal) {
this.emptyVal = emptyVal;
zeroVal = emptyVal;
int initialSize = 32;
array = new long[initialSize];
mask = initialSize - 1;
computeMaskAndProbe();
}
public long put(long key, long value) {
if (key == 0) {
long prev = zeroVal;
zeroVal = value;
return prev;
}
for (;;) {
int index = index(key);
for (int i = 0; i < maxProbe; i++) {
long existing = array[index];
if (existing == key || existing == 0) {
long prev = existing == 0? emptyVal : array[index + 1];
array[index] = key;
array[index + 1] = value;
for (; i < maxProbe; i++) { // Nerf any existing misplaced entries.
index = index + 2 & mask;
if (array[index] == key) {
array[index] = 0;
prev = array[index + 1];
break;
}
}
return prev;
}
index = index + 2 & mask;
}
expand(); // Grow array and re-hash.
}
}
public void remove(long key) {
if (key == 0) {
zeroVal = emptyVal;
return;
}
int index = index(key);
for (int i = 0; i < maxProbe; i++) {
long existing = array[index];
if (existing == key) {
array[index] = 0;
break;
}
index = index + 2 & mask;
}
}
public long get(long key) {
if (key == 0) {
return zeroVal;
}
int index = index(key);
for (int i = 0; i < maxProbe; i++) {
long existing = array[index];
if (existing == key) {
return array[index + 1];
}
index = index + 2 & mask;
}
return emptyVal;
}
private int index(long key) {
// Hash with murmur64, and mask.
key ^= key >>> 33;
key *= 0xff51afd7ed558ccdL;
key ^= key >>> 33;
key *= 0xc4ceb9fe1a85ec53L;
key ^= key >>> 33;
return (int) key & mask;
}
private void expand() {
long[] prev = array;
array = new long[prev.length * 2];
computeMaskAndProbe();
for (int i = 0; i < prev.length; i += 2) {
long key = prev[i];
if (key != 0) {
long val = prev[i + 1];
put(key, val);
}
}
}
private void computeMaskAndProbe() {
int length = array.length;
mask = length - 1 & MASK_TEMPLATE;
maxProbe = (int) Math.log(length);
}
}
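A brief usage sketch (illustrative addition, in-package): the constructor argument is the sentinel returned for absent keys, and key 0 is handled through a dedicated slot rather than the open-addressed array.

LongLongHashMap runsAvailMap = new LongLongHashMap(-1);
runsAvailMap.put(42L, 1024L);
assert runsAvailMap.get(42L) == 1024L;
assert runsAvailMap.get(7L) == -1;     // absent key returns the emptyVal sentinel
runsAvailMap.remove(42L);
assert runsAvailMap.get(42L) == -1;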

View file

@ -0,0 +1,799 @@
/*
* Copyright 2012 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.internal.LongCounter;
import io.netty.util.internal.PlatformDependent;
import io.netty.util.internal.StringUtil;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;
import static io.netty.buffer.PoolChunk.isSubpage;
import static java.lang.Math.max;
abstract class PoolArena<T> implements PoolArenaMetric {
private static final boolean HAS_UNSAFE = PlatformDependent.hasUnsafe();
enum SizeClass {
Small,
Normal
}
final PooledByteBufAllocator parent;
final PoolSubpage<T>[] smallSubpagePools;
private final PoolChunkList<T> q050;
private final PoolChunkList<T> q025;
private final PoolChunkList<T> q000;
private final PoolChunkList<T> qInit;
private final PoolChunkList<T> q075;
private final PoolChunkList<T> q100;
private final List<PoolChunkListMetric> chunkListMetrics;
// Metrics for allocations and deallocations
private long allocationsNormal;
// We need to use the LongCounter here as this is not guarded via synchronized block.
private final LongCounter allocationsSmall = PlatformDependent.newLongCounter();
private final LongCounter allocationsHuge = PlatformDependent.newLongCounter();
private final LongCounter activeBytesHuge = PlatformDependent.newLongCounter();
private long deallocationsSmall;
private long deallocationsNormal;
// We need to use the LongCounter here as this is not guarded via synchronized block.
private final LongCounter deallocationsHuge = PlatformDependent.newLongCounter();
// Number of thread caches backed by this arena.
final AtomicInteger numThreadCaches = new AtomicInteger();
// TODO: Test if adding padding helps under contention
//private long pad0, pad1, pad2, pad3, pad4, pad5, pad6, pad7;
private final ReentrantLock lock = new ReentrantLock();
final SizeClasses sizeClass;
protected PoolArena(PooledByteBufAllocator parent, SizeClasses sizeClass) {
assert null != sizeClass;
this.parent = parent;
this.sizeClass = sizeClass;
smallSubpagePools = newSubpagePoolArray(sizeClass.nSubpages);
for (int i = 0; i < smallSubpagePools.length; i ++) {
smallSubpagePools[i] = newSubpagePoolHead(i);
}
q100 = new PoolChunkList<T>(this, null, 100, Integer.MAX_VALUE, sizeClass.chunkSize);
q075 = new PoolChunkList<T>(this, q100, 75, 100, sizeClass.chunkSize);
q050 = new PoolChunkList<T>(this, q075, 50, 100, sizeClass.chunkSize);
q025 = new PoolChunkList<T>(this, q050, 25, 75, sizeClass.chunkSize);
q000 = new PoolChunkList<T>(this, q025, 1, 50, sizeClass.chunkSize);
qInit = new PoolChunkList<T>(this, q000, Integer.MIN_VALUE, 25, sizeClass.chunkSize);
q100.prevList(q075);
q075.prevList(q050);
q050.prevList(q025);
q025.prevList(q000);
q000.prevList(null);
qInit.prevList(qInit);
List<PoolChunkListMetric> metrics = new ArrayList<>(6);
metrics.add(qInit);
metrics.add(q000);
metrics.add(q025);
metrics.add(q050);
metrics.add(q075);
metrics.add(q100);
chunkListMetrics = Collections.unmodifiableList(metrics);
}
private PoolSubpage<T> newSubpagePoolHead(int index) {
PoolSubpage<T> head = new PoolSubpage<T>(index);
head.prev = head;
head.next = head;
return head;
}
@SuppressWarnings("unchecked")
private PoolSubpage<T>[] newSubpagePoolArray(int size) {
return new PoolSubpage[size];
}
abstract boolean isDirect();
PooledByteBuf<T> allocate(PoolThreadCache cache, int reqCapacity, int maxCapacity) {
PooledByteBuf<T> buf = newByteBuf(maxCapacity);
allocate(cache, buf, reqCapacity);
return buf;
}
private void allocate(PoolThreadCache cache, PooledByteBuf<T> buf, final int reqCapacity) {
final int sizeIdx = sizeClass.size2SizeIdx(reqCapacity);
if (sizeIdx <= sizeClass.smallMaxSizeIdx) {
tcacheAllocateSmall(cache, buf, reqCapacity, sizeIdx);
} else if (sizeIdx < sizeClass.nSizes) {
tcacheAllocateNormal(cache, buf, reqCapacity, sizeIdx);
} else {
int normCapacity = sizeClass.directMemoryCacheAlignment > 0
? sizeClass.normalizeSize(reqCapacity) : reqCapacity;
// Huge allocations are never served via the cache so just call allocateHuge
allocateHuge(buf, normCapacity);
}
}
private void tcacheAllocateSmall(PoolThreadCache cache, PooledByteBuf<T> buf, final int reqCapacity,
final int sizeIdx) {
if (cache.allocateSmall(this, buf, reqCapacity, sizeIdx)) {
// was able to allocate out of the cache so move on
return;
}
/*
* Synchronize on the head. This is needed as {@link PoolChunk#allocateSubpage(int)} and
* {@link PoolChunk#free(long)} may modify the doubly linked list as well.
*/
final PoolSubpage<T> head = smallSubpagePools[sizeIdx];
final boolean needsNormalAllocation;
head.lock();
try {
final PoolSubpage<T> s = head.next;
needsNormalAllocation = s == head;
if (!needsNormalAllocation) {
assert s.doNotDestroy && s.elemSize == sizeClass.sizeIdx2size(sizeIdx) : "doNotDestroy=" +
s.doNotDestroy + ", elemSize=" + s.elemSize + ", sizeIdx=" + sizeIdx;
long handle = s.allocate();
assert handle >= 0;
s.chunk.initBufWithSubpage(buf, null, handle, reqCapacity, cache);
}
} finally {
head.unlock();
}
if (needsNormalAllocation) {
lock();
try {
allocateNormal(buf, reqCapacity, sizeIdx, cache);
} finally {
unlock();
}
}
incSmallAllocation();
}
private void tcacheAllocateNormal(PoolThreadCache cache, PooledByteBuf<T> buf, final int reqCapacity,
final int sizeIdx) {
if (cache.allocateNormal(this, buf, reqCapacity, sizeIdx)) {
// was able to allocate out of the cache so move on
return;
}
lock();
try {
allocateNormal(buf, reqCapacity, sizeIdx, cache);
++allocationsNormal;
} finally {
unlock();
}
}
private void allocateNormal(PooledByteBuf<T> buf, int reqCapacity, int sizeIdx, PoolThreadCache threadCache) {
assert lock.isHeldByCurrentThread();
if (q050.allocate(buf, reqCapacity, sizeIdx, threadCache) ||
q025.allocate(buf, reqCapacity, sizeIdx, threadCache) ||
q000.allocate(buf, reqCapacity, sizeIdx, threadCache) ||
qInit.allocate(buf, reqCapacity, sizeIdx, threadCache) ||
q075.allocate(buf, reqCapacity, sizeIdx, threadCache)) {
return;
}
// Add a new chunk.
PoolChunk<T> c = newChunk(sizeClass.pageSize, sizeClass.nPSizes, sizeClass.pageShifts, sizeClass.chunkSize);
boolean success = c.allocate(buf, reqCapacity, sizeIdx, threadCache);
assert success;
qInit.add(c);
}
private void incSmallAllocation() {
allocationsSmall.increment();
}
private void allocateHuge(PooledByteBuf<T> buf, int reqCapacity) {
PoolChunk<T> chunk = newUnpooledChunk(reqCapacity);
activeBytesHuge.add(chunk.chunkSize());
buf.initUnpooled(chunk, reqCapacity);
allocationsHuge.increment();
}
void free(PoolChunk<T> chunk, ByteBuffer nioBuffer, long handle, int normCapacity, PoolThreadCache cache) {
chunk.decrementPinnedMemory(normCapacity);
if (chunk.unpooled) {
int size = chunk.chunkSize();
destroyChunk(chunk);
activeBytesHuge.add(-size);
deallocationsHuge.increment();
} else {
SizeClass sizeClass = sizeClass(handle);
if (cache != null && cache.add(this, chunk, nioBuffer, handle, normCapacity, sizeClass)) {
// cached, so do not free it.
return;
}
freeChunk(chunk, handle, normCapacity, sizeClass, nioBuffer, false);
}
}
private static SizeClass sizeClass(long handle) {
return isSubpage(handle) ? SizeClass.Small : SizeClass.Normal;
}
void freeChunk(PoolChunk<T> chunk, long handle, int normCapacity, SizeClass sizeClass, ByteBuffer nioBuffer,
boolean finalizer) {
final boolean destroyChunk;
lock();
try {
// We only call this if freeChunk is not called because of the PoolThreadCache finalizer, as otherwise this
// may fail due to lazy class-loading, for example in Tomcat.
if (!finalizer) {
switch (sizeClass) {
case Normal:
++deallocationsNormal;
break;
case Small:
++deallocationsSmall;
break;
default:
throw new Error();
}
}
destroyChunk = !chunk.parent.free(chunk, handle, normCapacity, nioBuffer);
} finally {
unlock();
}
if (destroyChunk) {
// destroyChunk does not need to be called while holding the lock.
destroyChunk(chunk);
}
}
void reallocate(final PooledByteBuf<T> buf, int newCapacity) {
assert newCapacity >= 0 && newCapacity <= buf.maxCapacity();
final int oldCapacity;
final PoolChunk<T> oldChunk;
final ByteBuffer oldNioBuffer;
final long oldHandle;
final T oldMemory;
final int oldOffset;
final int oldMaxLength;
final PoolThreadCache oldCache;
// We synchronize on the ByteBuf itself to ensure there is no "concurrent" reallocations for the same buffer.
// We do this to ensure the ByteBuf internal fields that are used to allocate / free are not accessed
// concurrently. This is important as otherwise we might end up corrupting the internal state of our data
// structures.
//
// Also note we don't use a Lock here but plain synchronized, even though this might seem like a bad choice for Loom.
// This is done to minimize the overhead per ByteBuf. The time this would block another thread should be
// relatively small and so not be a problem for Loom.
// See https://github.com/netty/netty/issues/13467
synchronized (buf) {
oldCapacity = buf.length;
if (oldCapacity == newCapacity) {
return;
}
oldChunk = buf.chunk;
oldNioBuffer = buf.tmpNioBuf;
oldHandle = buf.handle;
oldMemory = buf.memory;
oldOffset = buf.offset;
oldMaxLength = buf.maxLength;
oldCache = buf.cache;
// This does not touch buf's reader/writer indices
allocate(parent.threadCache(), buf, newCapacity);
}
int bytesToCopy;
if (newCapacity > oldCapacity) {
bytesToCopy = oldCapacity;
} else {
buf.trimIndicesToCapacity(newCapacity);
bytesToCopy = newCapacity;
}
memoryCopy(oldMemory, oldOffset, buf, bytesToCopy);
free(oldChunk, oldNioBuffer, oldHandle, oldMaxLength, oldCache);
}
@Override
public int numThreadCaches() {
return numThreadCaches.get();
}
@Override
public int numTinySubpages() {
return 0;
}
@Override
public int numSmallSubpages() {
return smallSubpagePools.length;
}
@Override
public int numChunkLists() {
return chunkListMetrics.size();
}
@Override
public List<PoolSubpageMetric> tinySubpages() {
return Collections.emptyList();
}
@Override
public List<PoolSubpageMetric> smallSubpages() {
return subPageMetricList(smallSubpagePools);
}
@Override
public List<PoolChunkListMetric> chunkLists() {
return chunkListMetrics;
}
private static List<PoolSubpageMetric> subPageMetricList(PoolSubpage<?>[] pages) {
List<PoolSubpageMetric> metrics = new ArrayList<PoolSubpageMetric>();
for (PoolSubpage<?> head : pages) {
if (head.next == head) {
continue;
}
PoolSubpage<?> s = head.next;
while (true) {
metrics.add(s);
s = s.next;
if (s == head) {
break;
}
}
}
return metrics;
}
@Override
public long numAllocations() {
final long allocsNormal;
lock();
try {
allocsNormal = allocationsNormal;
} finally {
unlock();
}
return allocationsSmall.value() + allocsNormal + allocationsHuge.value();
}
@Override
public long numTinyAllocations() {
return 0;
}
@Override
public long numSmallAllocations() {
return allocationsSmall.value();
}
@Override
public long numNormalAllocations() {
lock();
try {
return allocationsNormal;
} finally {
unlock();
}
}
@Override
public long numDeallocations() {
final long deallocs;
lock();
try {
deallocs = deallocationsSmall + deallocationsNormal;
} finally {
unlock();
}
return deallocs + deallocationsHuge.value();
}
@Override
public long numTinyDeallocations() {
return 0;
}
@Override
public long numSmallDeallocations() {
lock();
try {
return deallocationsSmall;
} finally {
unlock();
}
}
@Override
public long numNormalDeallocations() {
lock();
try {
return deallocationsNormal;
} finally {
unlock();
}
}
@Override
public long numHugeAllocations() {
return allocationsHuge.value();
}
@Override
public long numHugeDeallocations() {
return deallocationsHuge.value();
}
@Override
public long numActiveAllocations() {
long val = allocationsSmall.value() + allocationsHuge.value()
- deallocationsHuge.value();
lock();
try {
val += allocationsNormal - (deallocationsSmall + deallocationsNormal);
} finally {
unlock();
}
return max(val, 0);
}
@Override
public long numActiveTinyAllocations() {
return 0;
}
@Override
public long numActiveSmallAllocations() {
return max(numSmallAllocations() - numSmallDeallocations(), 0);
}
@Override
public long numActiveNormalAllocations() {
final long val;
lock();
try {
val = allocationsNormal - deallocationsNormal;
} finally {
unlock();
}
return max(val, 0);
}
@Override
public long numActiveHugeAllocations() {
return max(numHugeAllocations() - numHugeDeallocations(), 0);
}
@Override
public long numActiveBytes() {
long val = activeBytesHuge.value();
lock();
try {
for (PoolChunkListMetric chunkListMetric : chunkListMetrics) {
for (PoolChunkMetric m : chunkListMetric) {
val += m.chunkSize();
}
}
} finally {
unlock();
}
return max(0, val);
}
/**
* Return the number of bytes that are currently pinned to buffer instances by the arena. The pinned memory is not
* accessible for use by any other allocation until the buffers using it have all been released.
*/
public long numPinnedBytes() {
long val = activeBytesHuge.value(); // Huge chunks are exact-sized for the buffers they were allocated to.
lock();
try {
for (PoolChunkListMetric chunkListMetric : chunkListMetrics) {
for (PoolChunkMetric m : chunkListMetric) {
val += ((PoolChunk<?>) m).pinnedBytes();
}
}
} finally {
unlock();
}
return max(0, val);
}
protected abstract PoolChunk<T> newChunk(int pageSize, int maxPageIdx, int pageShifts, int chunkSize);
protected abstract PoolChunk<T> newUnpooledChunk(int capacity);
protected abstract PooledByteBuf<T> newByteBuf(int maxCapacity);
protected abstract void memoryCopy(T src, int srcOffset, PooledByteBuf<T> dst, int length);
protected abstract void destroyChunk(PoolChunk<T> chunk);
@Override
public String toString() {
lock();
try {
StringBuilder buf = new StringBuilder()
.append("Chunk(s) at 0~25%:")
.append(StringUtil.NEWLINE)
.append(qInit)
.append(StringUtil.NEWLINE)
.append("Chunk(s) at 0~50%:")
.append(StringUtil.NEWLINE)
.append(q000)
.append(StringUtil.NEWLINE)
.append("Chunk(s) at 25~75%:")
.append(StringUtil.NEWLINE)
.append(q025)
.append(StringUtil.NEWLINE)
.append("Chunk(s) at 50~100%:")
.append(StringUtil.NEWLINE)
.append(q050)
.append(StringUtil.NEWLINE)
.append("Chunk(s) at 75~100%:")
.append(StringUtil.NEWLINE)
.append(q075)
.append(StringUtil.NEWLINE)
.append("Chunk(s) at 100%:")
.append(StringUtil.NEWLINE)
.append(q100)
.append(StringUtil.NEWLINE)
.append("small subpages:");
appendPoolSubPages(buf, smallSubpagePools);
buf.append(StringUtil.NEWLINE);
return buf.toString();
} finally {
unlock();
}
}
private static void appendPoolSubPages(StringBuilder buf, PoolSubpage<?>[] subpages) {
for (int i = 0; i < subpages.length; i ++) {
PoolSubpage<?> head = subpages[i];
if (head.next == head || head.next == null) {
continue;
}
buf.append(StringUtil.NEWLINE)
.append(i)
.append(": ");
PoolSubpage<?> s = head.next;
while (s != null) {
buf.append(s);
s = s.next;
if (s == head) {
break;
}
}
}
}
@Override
protected final void finalize() throws Throwable {
try {
super.finalize();
} finally {
destroyPoolSubPages(smallSubpagePools);
destroyPoolChunkLists(qInit, q000, q025, q050, q075, q100);
}
}
private static void destroyPoolSubPages(PoolSubpage<?>[] pages) {
for (PoolSubpage<?> page : pages) {
page.destroy();
}
}
private void destroyPoolChunkLists(PoolChunkList<T>... chunkLists) {
for (PoolChunkList<T> chunkList: chunkLists) {
chunkList.destroy(this);
}
}
static final class HeapArena extends PoolArena<byte[]> {
HeapArena(PooledByteBufAllocator parent, SizeClasses sizeClass) {
super(parent, sizeClass);
}
private static byte[] newByteArray(int size) {
return PlatformDependent.allocateUninitializedArray(size);
}
@Override
boolean isDirect() {
return false;
}
@Override
protected PoolChunk<byte[]> newChunk(int pageSize, int maxPageIdx, int pageShifts, int chunkSize) {
return new PoolChunk<byte[]>(
this, null, newByteArray(chunkSize), pageSize, pageShifts, chunkSize, maxPageIdx);
}
@Override
protected PoolChunk<byte[]> newUnpooledChunk(int capacity) {
return new PoolChunk<byte[]>(this, null, newByteArray(capacity), capacity);
}
@Override
protected void destroyChunk(PoolChunk<byte[]> chunk) {
// Rely on GC.
}
@Override
protected PooledByteBuf<byte[]> newByteBuf(int maxCapacity) {
return HAS_UNSAFE ? PooledUnsafeHeapByteBuf.newUnsafeInstance(maxCapacity)
: PooledHeapByteBuf.newInstance(maxCapacity);
}
@Override
protected void memoryCopy(byte[] src, int srcOffset, PooledByteBuf<byte[]> dst, int length) {
if (length == 0) {
return;
}
System.arraycopy(src, srcOffset, dst.memory, dst.offset, length);
}
}
static final class DirectArena extends PoolArena<ByteBuffer> {
DirectArena(PooledByteBufAllocator parent, SizeClasses sizeClass) {
super(parent, sizeClass);
}
@Override
boolean isDirect() {
return true;
}
@Override
protected PoolChunk<ByteBuffer> newChunk(int pageSize, int maxPageIdx,
int pageShifts, int chunkSize) {
if (sizeClass.directMemoryCacheAlignment == 0) {
ByteBuffer memory = allocateDirect(chunkSize);
return new PoolChunk<ByteBuffer>(this, memory, memory, pageSize, pageShifts,
chunkSize, maxPageIdx);
}
final ByteBuffer base = allocateDirect(chunkSize + sizeClass.directMemoryCacheAlignment);
final ByteBuffer memory = PlatformDependent.alignDirectBuffer(base, sizeClass.directMemoryCacheAlignment);
return new PoolChunk<ByteBuffer>(this, base, memory, pageSize,
pageShifts, chunkSize, maxPageIdx);
}
@Override
protected PoolChunk<ByteBuffer> newUnpooledChunk(int capacity) {
if (sizeClass.directMemoryCacheAlignment == 0) {
ByteBuffer memory = allocateDirect(capacity);
return new PoolChunk<ByteBuffer>(this, memory, memory, capacity);
}
final ByteBuffer base = allocateDirect(capacity + sizeClass.directMemoryCacheAlignment);
final ByteBuffer memory = PlatformDependent.alignDirectBuffer(base, sizeClass.directMemoryCacheAlignment);
return new PoolChunk<ByteBuffer>(this, base, memory, capacity);
}
private static ByteBuffer allocateDirect(int capacity) {
return PlatformDependent.useDirectBufferNoCleaner() ?
PlatformDependent.allocateDirectNoCleaner(capacity) : ByteBuffer.allocateDirect(capacity);
}
@Override
protected void destroyChunk(PoolChunk<ByteBuffer> chunk) {
if (PlatformDependent.useDirectBufferNoCleaner()) {
PlatformDependent.freeDirectNoCleaner((ByteBuffer) chunk.base);
} else {
PlatformDependent.freeDirectBuffer((ByteBuffer) chunk.base);
}
}
@Override
protected PooledByteBuf<ByteBuffer> newByteBuf(int maxCapacity) {
if (HAS_UNSAFE) {
return PooledUnsafeDirectByteBuf.newInstance(maxCapacity);
} else {
return PooledDirectByteBuf.newInstance(maxCapacity);
}
}
@Override
protected void memoryCopy(ByteBuffer src, int srcOffset, PooledByteBuf<ByteBuffer> dstBuf, int length) {
if (length == 0) {
return;
}
if (HAS_UNSAFE) {
PlatformDependent.copyMemory(
PlatformDependent.directBufferAddress(src) + srcOffset,
PlatformDependent.directBufferAddress(dstBuf.memory) + dstBuf.offset, length);
} else {
// We must duplicate the NIO buffers because they may be accessed by other Netty buffers.
src = src.duplicate();
ByteBuffer dst = dstBuf.internalNioBuffer();
src.position(srcOffset).limit(srcOffset + length);
dst.position(dstBuf.offset);
dst.put(src);
}
}
}
void lock() {
lock.lock();
}
void unlock() {
lock.unlock();
}
@Override
public int sizeIdx2size(int sizeIdx) {
return sizeClass.sizeIdx2size(sizeIdx);
}
@Override
public int sizeIdx2sizeCompute(int sizeIdx) {
return sizeClass.sizeIdx2sizeCompute(sizeIdx);
}
@Override
public long pageIdx2size(int pageIdx) {
return sizeClass.pageIdx2size(pageIdx);
}
@Override
public long pageIdx2sizeCompute(int pageIdx) {
return sizeClass.pageIdx2sizeCompute(pageIdx);
}
@Override
public int size2SizeIdx(int size) {
return sizeClass.size2SizeIdx(size);
}
@Override
public int pages2pageIdx(int pages) {
return sizeClass.pages2pageIdx(pages);
}
@Override
public int pages2pageIdxFloor(int pages) {
return sizeClass.pages2pageIdxFloor(pages);
}
@Override
public int normalizeSize(int size) {
return sizeClass.normalizeSize(size);
}
}
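A hedged sketch of how this arena is exercised indirectly through the pooled allocator (illustrative addition). The concrete size-class boundaries depend on the allocator's configuration, but with default settings a 64-byte request is served from the small subpage pools, a 64 KiB request from a run of pages in the chunk lists, and anything above the chunk size as an unpooled "huge" chunk.

PooledByteBufAllocator alloc = PooledByteBufAllocator.DEFAULT;
ByteBuf small = alloc.directBuffer(64);           // small size class: subpage allocation
ByteBuf normal = alloc.directBuffer(64 * 1024);   // normal size class: run of pages in a chunk
small.release();
normal.release();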

View file

@ -0,0 +1,155 @@
/*
* Copyright 2015 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import java.util.List;
/**
* Expose metrics for an arena.
*/
public interface PoolArenaMetric extends SizeClassesMetric {
/**
* Returns the number of thread caches backed by this arena.
*/
int numThreadCaches();
/**
* Returns the number of tiny sub-pages for the arena.
*
* @deprecated Tiny sub-pages have been merged into small sub-pages.
*/
@Deprecated
int numTinySubpages();
/**
* Returns the number of small sub-pages for the arena.
*/
int numSmallSubpages();
/**
* Returns the number of chunk lists for the arena.
*/
int numChunkLists();
/**
* Returns an unmodifiable {@link List} which holds {@link PoolSubpageMetric}s for tiny sub-pages.
*
* @deprecated Tiny sub-pages have been merged into small sub-pages.
*/
@Deprecated
List<PoolSubpageMetric> tinySubpages();
/**
* Returns an unmodifiable {@link List} which holds {@link PoolSubpageMetric}s for small sub-pages.
*/
List<PoolSubpageMetric> smallSubpages();
/**
* Returns an unmodifiable {@link List} which holds {@link PoolChunkListMetric}s.
*/
List<PoolChunkListMetric> chunkLists();
/**
* Return the number of allocations done via the arena. This includes all sizes.
*/
long numAllocations();
/**
* Return the number of tiny allocations done via the arena.
*
* @deprecated Tiny allocations have been merged into small allocations.
*/
@Deprecated
long numTinyAllocations();
/**
* Return the number of small allocations done via the arena.
*/
long numSmallAllocations();
/**
* Return the number of normal allocations done via the arena.
*/
long numNormalAllocations();
/**
* Return the number of huge allocations done via the arena.
*/
long numHugeAllocations();
/**
* Return the number of deallocations done via the arena. This includes all sizes.
*/
long numDeallocations();
/**
* Return the number of tiny deallocations done via the arena.
*
* @deprecated Tiny deallocations have been merged into small deallocations.
*/
@Deprecated
long numTinyDeallocations();
/**
* Return the number of small deallocations done via the arena.
*/
long numSmallDeallocations();
/**
* Return the number of normal deallocations done via the arena.
*/
long numNormalDeallocations();
/**
* Return the number of huge deallocations done via the arena.
*/
long numHugeDeallocations();
/**
* Return the number of currently active allocations.
*/
long numActiveAllocations();
/**
* Return the number of currently active tiny allocations.
*
* @deprecated Tiny allocations have been merged into small allocations.
*/
@Deprecated
long numActiveTinyAllocations();
/**
* Return the number of currently active small allocations.
*/
long numActiveSmallAllocations();
/**
* Return the number of currently active normal allocations.
*/
long numActiveNormalAllocations();
/**
* Return the number of currently active huge allocations.
*/
long numActiveHugeAllocations();
/**
* Return the number of active bytes that are currently allocated by the arena.
*/
long numActiveBytes();
}
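A short sketch (illustrative addition) of reading these metrics; arena metrics are normally reached through the allocator's metric view rather than constructed directly.

PooledByteBufAllocatorMetric metric = PooledByteBufAllocator.DEFAULT.metric();
for (PoolArenaMetric arena : metric.directArenas()) {
    System.out.println("threadCaches=" + arena.numThreadCaches()
            + ", allocations=" + arena.numAllocations()
            + ", activeBytes=" + arena.numActiveBytes());
}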

View file

@ -0,0 +1,709 @@
/*
* Copyright 2012 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.internal.LongCounter;
import io.netty.util.internal.PlatformDependent;
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.PriorityQueue;
import java.util.concurrent.locks.ReentrantLock;
/**
* Description of algorithm for PageRun/PoolSubpage allocation from PoolChunk
*
* Notation: The following terms are important to understand the code
* > page - a page is the smallest unit of memory chunk that can be allocated
* > run - a run is a collection of pages
* > chunk - a chunk is a collection of runs
* > in this code chunkSize = maxPages * pageSize
*
* To begin we allocate a byte array of size = chunkSize
* Whenever a ByteBuf of given size needs to be created we search for the first position
* in the byte array that has enough empty space to accommodate the requested size and
* return a (long) handle that encodes this offset information, (this memory segment is then
* marked as reserved so it is always used by exactly one ByteBuf and no more)
*
* For simplicity all sizes are normalized according to the {@link SizeClasses#size2SizeIdx(int)} method.
* This ensures that when we request for memory segments of size > pageSize the normalizedCapacity
* equals the next nearest size in {@link SizeClasses}.
*
*
* A chunk has the following layout:
*
* /-----------------\
* | run |
* | |
* | |
* |-----------------|
* | run |
* | |
* |-----------------|
* | unallocated |
* | (freed) |
* | |
* |-----------------|
* | subpage |
* |-----------------|
* | unallocated |
* | (freed) |
* | ... |
* | ... |
* | ... |
* | |
* | |
* | |
* \-----------------/
*
*
* handle:
* -------
* a handle is a long number, the bit layout of a run looks like:
*
* oooooooo ooooooos ssssssss ssssssue bbbbbbbb bbbbbbbb bbbbbbbb bbbbbbbb
*
* o: runOffset (page offset in the chunk), 15bit
* s: size (number of pages) of this run, 15bit
* u: isUsed?, 1bit
* e: isSubpage?, 1bit
* b: bitmapIdx of subpage, zero if it's not subpage, 32bit
*
* runsAvailMap:
* ------
* a map which manages all runs (used and not in use).
* For each run, the first runOffset and last runOffset are stored in runsAvailMap.
* key: runOffset
* value: handle
*
* runsAvail:
* ----------
* an array of {@link PriorityQueue}.
* Each queue manages runs of the same size.
* Runs are sorted by offset, so that we always allocate the run with the smallest offset.
*
*
* Algorithm:
* ----------
*
* As we allocate runs, we update values stored in runsAvailMap and runsAvail so that the property is maintained.
*
* Initialization -
* In the beginning we store the initial run which is the whole chunk.
* The initial run:
* runOffset = 0
* size = chunkSize
* isUsed = no
* isSubpage = no
* bitmapIdx = 0
*
*
* Algorithm: [allocateRun(size)]
* ----------
* 1) find the first available run in runsAvail according to size
* 2) if the run has more pages than requested, split it and save the tailing run
* for later use
*
* Algorithm: [allocateSubpage(size)]
* ----------
* 1) find a subpage that is not full according to size.
* If one already exists just return it, otherwise allocate a new PoolSubpage and call init().
* Note that this subpage object is added to the subpage pool in the PoolArena when we init() it.
* 2) call subpage.allocate()
*
* Algorithm: [free(handle, length, nioBuffer)]
* ----------
* 1) if it is a subpage, return the slab back into this subpage
* 2) if the subpage is not used, or it is a run, then start freeing this run
* 3) merge contiguous available runs
* 4) save the merged run
*
*/
final class PoolChunk<T> implements PoolChunkMetric {
private static final int SIZE_BIT_LENGTH = 15;
private static final int INUSED_BIT_LENGTH = 1;
private static final int SUBPAGE_BIT_LENGTH = 1;
private static final int BITMAP_IDX_BIT_LENGTH = 32;
static final int IS_SUBPAGE_SHIFT = BITMAP_IDX_BIT_LENGTH;
static final int IS_USED_SHIFT = SUBPAGE_BIT_LENGTH + IS_SUBPAGE_SHIFT;
static final int SIZE_SHIFT = INUSED_BIT_LENGTH + IS_USED_SHIFT;
static final int RUN_OFFSET_SHIFT = SIZE_BIT_LENGTH + SIZE_SHIFT;
final PoolArena<T> arena;
final Object base;
final T memory;
final boolean unpooled;
/**
* store the first page and last page of each avail run
*/
private final LongLongHashMap runsAvailMap;
/**
* manage all avail runs
*/
private final IntPriorityQueue[] runsAvail;
private final ReentrantLock runsAvailLock;
/**
* manage all subpages in this chunk
*/
private final PoolSubpage<T>[] subpages;
/**
* Accounting of pinned memory that is currently in use by ByteBuf instances.
*/
private final LongCounter pinnedBytes = PlatformDependent.newLongCounter();
private final int pageSize;
private final int pageShifts;
private final int chunkSize;
// Use as cache for ByteBuffer created from the memory. These are just duplicates and so are only a container
// around the memory itself. These are often needed for operations within the Pooled*ByteBuf and so
// may produce extra GC, which can be greatly reduced by caching the duplicates.
//
// This may be null if the PoolChunk is unpooled as pooling the ByteBuffer instances does not make any sense here.
private final Deque<ByteBuffer> cachedNioBuffers;
int freeBytes;
PoolChunkList<T> parent;
PoolChunk<T> prev;
PoolChunk<T> next;
// TODO: Test if adding padding helps under contention
//private long pad0, pad1, pad2, pad3, pad4, pad5, pad6, pad7;
@SuppressWarnings("unchecked")
PoolChunk(PoolArena<T> arena, Object base, T memory, int pageSize, int pageShifts, int chunkSize, int maxPageIdx) {
unpooled = false;
this.arena = arena;
this.base = base;
this.memory = memory;
this.pageSize = pageSize;
this.pageShifts = pageShifts;
this.chunkSize = chunkSize;
freeBytes = chunkSize;
runsAvail = newRunsAvailqueueArray(maxPageIdx);
runsAvailLock = new ReentrantLock();
runsAvailMap = new LongLongHashMap(-1);
subpages = new PoolSubpage[chunkSize >> pageShifts];
//insert initial run, offset = 0, pages = chunkSize / pageSize
int pages = chunkSize >> pageShifts;
long initHandle = (long) pages << SIZE_SHIFT;
insertAvailRun(0, pages, initHandle);
cachedNioBuffers = new ArrayDeque<ByteBuffer>(8);
}
/** Creates a special chunk that is not pooled. */
PoolChunk(PoolArena<T> arena, Object base, T memory, int size) {
unpooled = true;
this.arena = arena;
this.base = base;
this.memory = memory;
pageSize = 0;
pageShifts = 0;
runsAvailMap = null;
runsAvail = null;
runsAvailLock = null;
subpages = null;
chunkSize = size;
cachedNioBuffers = null;
}
private static IntPriorityQueue[] newRunsAvailqueueArray(int size) {
IntPriorityQueue[] queueArray = new IntPriorityQueue[size];
for (int i = 0; i < queueArray.length; i++) {
queueArray[i] = new IntPriorityQueue();
}
return queueArray;
}
private void insertAvailRun(int runOffset, int pages, long handle) {
int pageIdxFloor = arena.sizeClass.pages2pageIdxFloor(pages);
IntPriorityQueue queue = runsAvail[pageIdxFloor];
assert isRun(handle);
queue.offer((int) (handle >> BITMAP_IDX_BIT_LENGTH));
//insert first page of run
insertAvailRun0(runOffset, handle);
if (pages > 1) {
//insert last page of run
insertAvailRun0(lastPage(runOffset, pages), handle);
}
}
private void insertAvailRun0(int runOffset, long handle) {
long pre = runsAvailMap.put(runOffset, handle);
assert pre == -1;
}
private void removeAvailRun(long handle) {
int pageIdxFloor = arena.sizeClass.pages2pageIdxFloor(runPages(handle));
runsAvail[pageIdxFloor].remove((int) (handle >> BITMAP_IDX_BIT_LENGTH));
removeAvailRun0(handle);
}
private void removeAvailRun0(long handle) {
int runOffset = runOffset(handle);
int pages = runPages(handle);
//remove first page of run
runsAvailMap.remove(runOffset);
if (pages > 1) {
//remove last page of run
runsAvailMap.remove(lastPage(runOffset, pages));
}
}
private static int lastPage(int runOffset, int pages) {
return runOffset + pages - 1;
}
private long getAvailRunByOffset(int runOffset) {
return runsAvailMap.get(runOffset);
}
@Override
public int usage() {
final int freeBytes;
if (this.unpooled) {
freeBytes = this.freeBytes;
} else {
runsAvailLock.lock();
try {
freeBytes = this.freeBytes;
} finally {
runsAvailLock.unlock();
}
}
return usage(freeBytes);
}
private int usage(int freeBytes) {
if (freeBytes == 0) {
return 100;
}
int freePercentage = (int) (freeBytes * 100L / chunkSize);
if (freePercentage == 0) {
return 99;
}
return 100 - freePercentage;
}
boolean allocate(PooledByteBuf<T> buf, int reqCapacity, int sizeIdx, PoolThreadCache cache) {
final long handle;
if (sizeIdx <= arena.sizeClass.smallMaxSizeIdx) {
final PoolSubpage<T> nextSub;
// small
// Obtain the head of the PoolSubPage pool that is owned by the PoolArena and synchronize on it.
// This is needed as we may add it back and so alter the linked-list structure.
PoolSubpage<T> head = arena.smallSubpagePools[sizeIdx];
head.lock();
try {
nextSub = head.next;
if (nextSub != head) {
assert nextSub.doNotDestroy && nextSub.elemSize == arena.sizeClass.sizeIdx2size(sizeIdx) :
"doNotDestroy=" + nextSub.doNotDestroy + ", elemSize=" + nextSub.elemSize + ", sizeIdx=" +
sizeIdx;
handle = nextSub.allocate();
assert handle >= 0;
assert isSubpage(handle);
nextSub.chunk.initBufWithSubpage(buf, null, handle, reqCapacity, cache);
return true;
}
handle = allocateSubpage(sizeIdx, head);
if (handle < 0) {
return false;
}
assert isSubpage(handle);
} finally {
head.unlock();
}
} else {
// normal
// runSize must be multiple of pageSize
int runSize = arena.sizeClass.sizeIdx2size(sizeIdx);
handle = allocateRun(runSize);
if (handle < 0) {
return false;
}
assert !isSubpage(handle);
}
ByteBuffer nioBuffer = cachedNioBuffers != null? cachedNioBuffers.pollLast() : null;
initBuf(buf, nioBuffer, handle, reqCapacity, cache);
return true;
}
private long allocateRun(int runSize) {
int pages = runSize >> pageShifts;
int pageIdx = arena.sizeClass.pages2pageIdx(pages);
runsAvailLock.lock();
try {
//find first queue which has at least one big enough run
int queueIdx = runFirstBestFit(pageIdx);
if (queueIdx == -1) {
return -1;
}
//get run with min offset in this queue
IntPriorityQueue queue = runsAvail[queueIdx];
long handle = queue.poll();
assert handle != IntPriorityQueue.NO_VALUE;
handle <<= BITMAP_IDX_BIT_LENGTH;
assert !isUsed(handle) : "invalid handle: " + handle;
removeAvailRun0(handle);
handle = splitLargeRun(handle, pages);
int pinnedSize = runSize(pageShifts, handle);
freeBytes -= pinnedSize;
return handle;
} finally {
runsAvailLock.unlock();
}
}
private int calculateRunSize(int sizeIdx) {
int maxElements = 1 << pageShifts - SizeClasses.LOG2_QUANTUM;
int runSize = 0;
int nElements;
final int elemSize = arena.sizeClass.sizeIdx2size(sizeIdx);
//find lowest common multiple of pageSize and elemSize
do {
runSize += pageSize;
nElements = runSize / elemSize;
} while (nElements < maxElements && runSize != nElements * elemSize);
while (nElements > maxElements) {
runSize -= pageSize;
nElements = runSize / elemSize;
}
assert nElements > 0;
assert runSize <= chunkSize;
assert runSize >= elemSize;
return runSize;
}
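// Worked example (illustrative, assuming the default LOG2_QUANTUM of 4): with pageSize = 8192,
// pageShifts = 13 and elemSize = 48, maxElements = 1 << (13 - 4) = 512; the loop stops at
// runSize = 24576 (the least common multiple of 8192 and 48), which holds exactly 512 elements
// of 48 bytes each.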
private int runFirstBestFit(int pageIdx) {
if (freeBytes == chunkSize) {
return arena.sizeClass.nPSizes - 1;
}
for (int i = pageIdx; i < arena.sizeClass.nPSizes; i++) {
IntPriorityQueue queue = runsAvail[i];
if (queue != null && !queue.isEmpty()) {
return i;
}
}
return -1;
}
private long splitLargeRun(long handle, int needPages) {
assert needPages > 0;
int totalPages = runPages(handle);
assert needPages <= totalPages;
int remPages = totalPages - needPages;
if (remPages > 0) {
int runOffset = runOffset(handle);
// keep track of trailing unused pages for later use
int availOffset = runOffset + needPages;
long availRun = toRunHandle(availOffset, remPages, 0);
insertAvailRun(availOffset, remPages, availRun);
// not avail
return toRunHandle(runOffset, needPages, 1);
}
//mark it as used
handle |= 1L << IS_USED_SHIFT;
return handle;
}
/**
* Create / initialize a new PoolSubpage of normCapacity. Any PoolSubpage created / initialized here is added to the
* subpage pool in the PoolArena that owns this PoolChunk.
*
* @param sizeIdx sizeIdx of the normalized size
* @param head head of the subpage pool for this size class
*
* @return the handle of the allocated element, or -1 if the underlying run could not be allocated
*/
private long allocateSubpage(int sizeIdx, PoolSubpage<T> head) {
//allocate a new run
int runSize = calculateRunSize(sizeIdx);
//runSize must be multiples of pageSize
long runHandle = allocateRun(runSize);
if (runHandle < 0) {
return -1;
}
int runOffset = runOffset(runHandle);
assert subpages[runOffset] == null;
int elemSize = arena.sizeClass.sizeIdx2size(sizeIdx);
PoolSubpage<T> subpage = new PoolSubpage<T>(head, this, pageShifts, runOffset,
runSize(pageShifts, runHandle), elemSize);
subpages[runOffset] = subpage;
return subpage.allocate();
}
/**
* Free a subpage or a run of pages. When a subpage is freed from PoolSubpage, it might be added back to the subpage
* pool of the owning PoolArena. If the subpage pool in the PoolArena has at least one other PoolSubpage of the given
* elemSize, we can completely free the owning page so it is available for subsequent allocations.
*
* @param handle handle to free
*/
void free(long handle, int normCapacity, ByteBuffer nioBuffer) {
if (isSubpage(handle)) {
int sIdx = runOffset(handle);
PoolSubpage<T> subpage = subpages[sIdx];
assert subpage != null;
PoolSubpage<T> head = subpage.chunk.arena.smallSubpagePools[subpage.headIndex];
// Obtain the head of the PoolSubPage pool that is owned by the PoolArena and synchronize on it.
// This is needed as we may add it back and so alter the linked-list structure.
head.lock();
try {
assert subpage.doNotDestroy;
if (subpage.free(head, bitmapIdx(handle))) {
//the subpage is still used, do not free it
return;
}
assert !subpage.doNotDestroy;
// Null out slot in the array as it was freed and we should not use it anymore.
subpages[sIdx] = null;
} finally {
head.unlock();
}
}
int runSize = runSize(pageShifts, handle);
//start free run
runsAvailLock.lock();
try {
// collapse contiguous runs; successfully collapsed runs
// will be removed from runsAvail and runsAvailMap
long finalRun = collapseRuns(handle);
//set run as not used
finalRun &= ~(1L << IS_USED_SHIFT);
//if it is a subpage, set it to run
finalRun &= ~(1L << IS_SUBPAGE_SHIFT);
insertAvailRun(runOffset(finalRun), runPages(finalRun), finalRun);
freeBytes += runSize;
} finally {
runsAvailLock.unlock();
}
if (nioBuffer != null && cachedNioBuffers != null &&
cachedNioBuffers.size() < PooledByteBufAllocator.DEFAULT_MAX_CACHED_BYTEBUFFERS_PER_CHUNK) {
cachedNioBuffers.offer(nioBuffer);
}
}
private long collapseRuns(long handle) {
return collapseNext(collapsePast(handle));
}
private long collapsePast(long handle) {
for (;;) {
int runOffset = runOffset(handle);
int runPages = runPages(handle);
long pastRun = getAvailRunByOffset(runOffset - 1);
if (pastRun == -1) {
return handle;
}
int pastOffset = runOffset(pastRun);
int pastPages = runPages(pastRun);
//is continuous
if (pastRun != handle && pastOffset + pastPages == runOffset) {
//remove past run
removeAvailRun(pastRun);
handle = toRunHandle(pastOffset, pastPages + runPages, 0);
} else {
return handle;
}
}
}
private long collapseNext(long handle) {
for (;;) {
int runOffset = runOffset(handle);
int runPages = runPages(handle);
long nextRun = getAvailRunByOffset(runOffset + runPages);
if (nextRun == -1) {
return handle;
}
int nextOffset = runOffset(nextRun);
int nextPages = runPages(nextRun);
//is continuous
if (nextRun != handle && runOffset + runPages == nextOffset) {
//remove next run
removeAvailRun(nextRun);
handle = toRunHandle(runOffset, runPages + nextPages, 0);
} else {
return handle;
}
}
}
private static long toRunHandle(int runOffset, int runPages, int inUsed) {
return (long) runOffset << RUN_OFFSET_SHIFT
| (long) runPages << SIZE_SHIFT
| (long) inUsed << IS_USED_SHIFT;
}
void initBuf(PooledByteBuf<T> buf, ByteBuffer nioBuffer, long handle, int reqCapacity,
PoolThreadCache threadCache) {
if (isSubpage(handle)) {
initBufWithSubpage(buf, nioBuffer, handle, reqCapacity, threadCache);
} else {
int maxLength = runSize(pageShifts, handle);
buf.init(this, nioBuffer, handle, runOffset(handle) << pageShifts,
reqCapacity, maxLength, arena.parent.threadCache());
}
}
void initBufWithSubpage(PooledByteBuf<T> buf, ByteBuffer nioBuffer, long handle, int reqCapacity,
PoolThreadCache threadCache) {
int runOffset = runOffset(handle);
int bitmapIdx = bitmapIdx(handle);
PoolSubpage<T> s = subpages[runOffset];
assert s.isDoNotDestroy();
assert reqCapacity <= s.elemSize : reqCapacity + "<=" + s.elemSize;
int offset = (runOffset << pageShifts) + bitmapIdx * s.elemSize;
buf.init(this, nioBuffer, handle, offset, reqCapacity, s.elemSize, threadCache);
}
void incrementPinnedMemory(int delta) {
assert delta > 0;
pinnedBytes.add(delta);
}
void decrementPinnedMemory(int delta) {
assert delta > 0;
pinnedBytes.add(-delta);
}
@Override
public int chunkSize() {
return chunkSize;
}
@Override
public int freeBytes() {
if (this.unpooled) {
return freeBytes;
}
runsAvailLock.lock();
try {
return freeBytes;
} finally {
runsAvailLock.unlock();
}
}
public int pinnedBytes() {
return (int) pinnedBytes.value();
}
@Override
public String toString() {
final int freeBytes;
if (this.unpooled) {
freeBytes = this.freeBytes;
} else {
runsAvailLock.lock();
try {
freeBytes = this.freeBytes;
} finally {
runsAvailLock.unlock();
}
}
return new StringBuilder()
.append("Chunk(")
.append(Integer.toHexString(System.identityHashCode(this)))
.append(": ")
.append(usage(freeBytes))
.append("%, ")
.append(chunkSize - freeBytes)
.append('/')
.append(chunkSize)
.append(')')
.toString();
}
void destroy() {
arena.destroyChunk(this);
}
static int runOffset(long handle) {
return (int) (handle >> RUN_OFFSET_SHIFT);
}
static int runSize(int pageShifts, long handle) {
return runPages(handle) << pageShifts;
}
static int runPages(long handle) {
return (int) (handle >> SIZE_SHIFT & 0x7fff);
}
static boolean isUsed(long handle) {
return (handle >> IS_USED_SHIFT & 1) == 1L;
}
static boolean isRun(long handle) {
return !isSubpage(handle);
}
static boolean isSubpage(long handle) {
return (handle >> IS_SUBPAGE_SHIFT & 1) == 1L;
}
static int bitmapIdx(long handle) {
return (int) handle;
}
}
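A worked decode of the handle layout documented above (illustrative addition; the constants and helpers are package-private, so the snippet assumes it runs inside io.netty.buffer). A run at page offset 3 spanning 2 pages, marked used and not a subpage, is built and read back as follows.

long handle = (3L << PoolChunk.RUN_OFFSET_SHIFT)   // o: runOffset (page offset in the chunk)
        | (2L << PoolChunk.SIZE_SHIFT)             // s: number of pages in the run
        | (1L << PoolChunk.IS_USED_SHIFT);         // u: isUsed
assert PoolChunk.runOffset(handle) == 3;
assert PoolChunk.runPages(handle) == 2;
assert PoolChunk.isUsed(handle);
assert PoolChunk.isRun(handle);                    // e bit is 0, so this is not a subpage
assert PoolChunk.bitmapIdx(handle) == 0;           // b bits stay zero for plain runs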

View file

@ -0,0 +1,262 @@
/*
* Copyright 2012 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.internal.StringUtil;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import static java.lang.Math.*;
import java.nio.ByteBuffer;
final class PoolChunkList<T> implements PoolChunkListMetric {
private static final Iterator<PoolChunkMetric> EMPTY_METRICS = Collections.<PoolChunkMetric>emptyList().iterator();
private final PoolArena<T> arena;
private final PoolChunkList<T> nextList;
private final int minUsage;
private final int maxUsage;
private final int maxCapacity;
private PoolChunk<T> head;
private final int freeMinThreshold;
private final int freeMaxThreshold;
// This is only updated once, when the linked list of PoolChunkLists is created in the PoolArena constructor.
private PoolChunkList<T> prevList;
// TODO: Test if adding padding helps under contention
//private long pad0, pad1, pad2, pad3, pad4, pad5, pad6, pad7;
PoolChunkList(PoolArena<T> arena, PoolChunkList<T> nextList, int minUsage, int maxUsage, int chunkSize) {
assert minUsage <= maxUsage;
this.arena = arena;
this.nextList = nextList;
this.minUsage = minUsage;
this.maxUsage = maxUsage;
maxCapacity = calculateMaxCapacity(minUsage, chunkSize);
// the thresholds are aligned with PoolChunk.usage() logic:
// 1) basic logic: usage() = 100 - freeBytes * 100L / chunkSize
// so, for example: (usage() >= maxUsage) condition can be transformed in the following way:
// 100 - freeBytes * 100L / chunkSize >= maxUsage
// freeBytes <= chunkSize * (100 - maxUsage) / 100
// let freeMinThreshold = chunkSize * (100 - maxUsage) / 100, then freeBytes <= freeMinThreshold
//
// 2) usage() returns an int value and applies floor rounding during the calculation,
// so to stay aligned the absolute thresholds have to be shifted by "the rounding step":
// freeBytes * 100 / chunkSize < 1
// the condition can be converted to: freeBytes < 1 * chunkSize / 100
// this is why we have the + 0.99999999 shifts. An example of why just a +1 shift cannot be used:
// freeBytes = 16777216 == freeMaxThreshold: 16777216, usage = 0 < minUsage: 1, chunkSize: 16777216
// At the same time we want to have zero thresholds in case of (maxUsage == 100) and (minUsage == 100).
//
freeMinThreshold = (maxUsage == 100) ? 0 : (int) (chunkSize * (100.0 - maxUsage + 0.99999999) / 100L);
freeMaxThreshold = (minUsage == 100) ? 0 : (int) (chunkSize * (100.0 - minUsage + 0.99999999) / 100L);
}
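// Worked example (illustrative): for the 25~75% list (minUsage = 25, maxUsage = 75),
// freeMinThreshold works out to roughly 26% of chunkSize and freeMaxThreshold to roughly
// 76% of chunkSize, so a chunk is promoted to the next list once at most ~26% of it is
// still free and moved back once more than ~76% of it is free.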
/**
* Calculates the maximum capacity of a buffer that will ever be possible to allocate out of the {@link PoolChunk}s
* that belong to the {@link PoolChunkList} with the given {@code minUsage} and {@code maxUsage} settings.
*/
private static int calculateMaxCapacity(int minUsage, int chunkSize) {
minUsage = minUsage0(minUsage);
if (minUsage == 100) {
// If the minUsage is 100 we can not allocate anything out of this list.
return 0;
}
// Calculate the maximum amount of bytes that can be allocated from a PoolChunk in this PoolChunkList.
//
// As an example:
// - If a PoolChunkList has minUsage == 25 we are allowed to allocate at most 75% of the chunkSize because
// this is the maximum amount available in any PoolChunk in this PoolChunkList.
return (int) (chunkSize * (100L - minUsage) / 100L);
}
void prevList(PoolChunkList<T> prevList) {
assert this.prevList == null;
this.prevList = prevList;
}
boolean allocate(PooledByteBuf<T> buf, int reqCapacity, int sizeIdx, PoolThreadCache threadCache) {
int normCapacity = arena.sizeClass.sizeIdx2size(sizeIdx);
if (normCapacity > maxCapacity) {
// Either this PoolChunkList is empty or the requested capacity is larger than the capacity which can
// be handled by the PoolChunks that are contained in this PoolChunkList.
return false;
}
for (PoolChunk<T> cur = head; cur != null; cur = cur.next) {
if (cur.allocate(buf, reqCapacity, sizeIdx, threadCache)) {
if (cur.freeBytes <= freeMinThreshold) {
remove(cur);
nextList.add(cur);
}
return true;
}
}
return false;
}
boolean free(PoolChunk<T> chunk, long handle, int normCapacity, ByteBuffer nioBuffer) {
chunk.free(handle, normCapacity, nioBuffer);
if (chunk.freeBytes > freeMaxThreshold) {
remove(chunk);
// Move the PoolChunk down the PoolChunkList linked-list.
return move0(chunk);
}
return true;
}
private boolean move(PoolChunk<T> chunk) {
assert chunk.usage() < maxUsage;
if (chunk.freeBytes > freeMaxThreshold) {
// Move the PoolChunk down the PoolChunkList linked-list.
return move0(chunk);
}
// PoolChunk fits into this PoolChunkList, adding it here.
add0(chunk);
return true;
}
/**
* Moves the {@link PoolChunk} down the {@link PoolChunkList} linked-list so it will end up in the right
* {@link PoolChunkList} that has the correct minUsage / maxUsage in respect to {@link PoolChunk#usage()}.
*/
private boolean move0(PoolChunk<T> chunk) {
if (prevList == null) {
// There is no previous PoolChunkList so return false which result in having the PoolChunk destroyed and
// all memory associated with the PoolChunk will be released.
assert chunk.usage() == 0;
return false;
}
return prevList.move(chunk);
}
void add(PoolChunk<T> chunk) {
if (chunk.freeBytes <= freeMinThreshold) {
nextList.add(chunk);
return;
}
add0(chunk);
}
/**
* Adds the {@link PoolChunk} to this {@link PoolChunkList}.
*/
void add0(PoolChunk<T> chunk) {
chunk.parent = this;
if (head == null) {
head = chunk;
chunk.prev = null;
chunk.next = null;
} else {
chunk.prev = null;
chunk.next = head;
head.prev = chunk;
head = chunk;
}
}
private void remove(PoolChunk<T> cur) {
if (cur == head) {
head = cur.next;
if (head != null) {
head.prev = null;
}
} else {
PoolChunk<T> next = cur.next;
cur.prev.next = next;
if (next != null) {
next.prev = cur.prev;
}
}
}
@Override
public int minUsage() {
return minUsage0(minUsage);
}
@Override
public int maxUsage() {
return min(maxUsage, 100);
}
private static int minUsage0(int value) {
return max(1, value);
}
@Override
public Iterator<PoolChunkMetric> iterator() {
arena.lock();
try {
if (head == null) {
return EMPTY_METRICS;
}
List<PoolChunkMetric> metrics = new ArrayList<PoolChunkMetric>();
for (PoolChunk<T> cur = head;;) {
metrics.add(cur);
cur = cur.next;
if (cur == null) {
break;
}
}
return metrics.iterator();
} finally {
arena.unlock();
}
}
@Override
public String toString() {
StringBuilder buf = new StringBuilder();
arena.lock();
try {
if (head == null) {
return "none";
}
for (PoolChunk<T> cur = head;;) {
buf.append(cur);
cur = cur.next;
if (cur == null) {
break;
}
buf.append(StringUtil.NEWLINE);
}
} finally {
arena.unlock();
}
return buf.toString();
}
void destroy(PoolArena<T> arena) {
PoolChunk<T> chunk = head;
while (chunk != null) {
arena.destroyChunk(chunk);
chunk = chunk.next;
}
head = null;
}
}

View file

@@ -0,0 +1,32 @@
/*
* Copyright 2015 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
/**
* Metrics for a list of chunks.
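 *
 * A usage sketch (illustrative, not part of the original API documentation; it assumes the chunk lists are
 * reached through {@link PooledByteBufAllocatorMetric#heapArenas()} and {@link PoolArenaMetric#chunkLists()}):
 * <pre>{@code
 * PooledByteBufAllocatorMetric metric = PooledByteBufAllocator.DEFAULT.metric();
 * for (PoolArenaMetric arena : metric.heapArenas()) {
 *     for (PoolChunkListMetric chunkList : arena.chunkLists()) {
 *         System.out.println(chunkList.minUsage() + "% .. " + chunkList.maxUsage() + "%");
 *     }
 * }
 * }</pre>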
*/
public interface PoolChunkListMetric extends Iterable<PoolChunkMetric> {
/**
* Return the minimum usage of the chunk list before which chunks are promoted to the previous list.
*/
int minUsage();
/**
* Return the maximum usage of the chunk list after which chunks are promoted to the next list.
*/
int maxUsage();
}

View file

@@ -0,0 +1,37 @@
/*
* Copyright 2015 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
/**
* Metrics for a chunk.
*/
public interface PoolChunkMetric {
/**
* Return the percentage of the current usage of the chunk.
*/
int usage();
/**
 * Return the size of the chunk in bytes; this is the maximum number of bytes that can be served out of the chunk.
*/
int chunkSize();
/**
* Return the number of free bytes in the chunk.
*/
int freeBytes();
}

View file

@@ -0,0 +1,300 @@
/*
* Copyright 2012 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import java.util.concurrent.locks.ReentrantLock;
import static io.netty.buffer.PoolChunk.RUN_OFFSET_SHIFT;
import static io.netty.buffer.PoolChunk.SIZE_SHIFT;
import static io.netty.buffer.PoolChunk.IS_USED_SHIFT;
import static io.netty.buffer.PoolChunk.IS_SUBPAGE_SHIFT;
final class PoolSubpage<T> implements PoolSubpageMetric {
final PoolChunk<T> chunk;
final int elemSize;
private final int pageShifts;
private final int runOffset;
private final int runSize;
private final long[] bitmap;
private final int bitmapLength;
private final int maxNumElems;
final int headIndex;
PoolSubpage<T> prev;
PoolSubpage<T> next;
boolean doNotDestroy;
private int nextAvail;
private int numAvail;
final ReentrantLock lock;
// TODO: Test if adding padding helps under contention
//private long pad0, pad1, pad2, pad3, pad4, pad5, pad6, pad7;
/** Special constructor that creates a linked list head */
PoolSubpage(int headIndex) {
chunk = null;
lock = new ReentrantLock();
pageShifts = -1;
runOffset = -1;
elemSize = -1;
runSize = -1;
bitmap = null;
bitmapLength = -1;
maxNumElems = 0;
this.headIndex = headIndex;
}
PoolSubpage(PoolSubpage<T> head, PoolChunk<T> chunk, int pageShifts, int runOffset, int runSize, int elemSize) {
this.headIndex = head.headIndex;
this.chunk = chunk;
this.pageShifts = pageShifts;
this.runOffset = runOffset;
this.runSize = runSize;
this.elemSize = elemSize;
doNotDestroy = true;
maxNumElems = numAvail = runSize / elemSize;
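// e.g. runSize = 8192 and elemSize = 64 give maxNumElems = 128 elements, which need 128 / 64 = 2 bitmap words.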
int bitmapLength = maxNumElems >>> 6;
if ((maxNumElems & 63) != 0) {
bitmapLength ++;
}
this.bitmapLength = bitmapLength;
bitmap = new long[bitmapLength];
nextAvail = 0;
lock = null;
addToPool(head);
}
/**
* Returns the bitmap index of the subpage allocation.
*/
long allocate() {
if (numAvail == 0 || !doNotDestroy) {
return -1;
}
final int bitmapIdx = getNextAvail();
if (bitmapIdx < 0) {
removeFromPool(); // Subpage appears to be in an invalid state. Remove to prevent repeated errors.
throw new AssertionError("No next available bitmap index found (bitmapIdx = " + bitmapIdx + "), " +
"even though there are supposed to be (numAvail = " + numAvail + ") " +
"out of (maxNumElems = " + maxNumElems + ") available indexes.");
}
int q = bitmapIdx >>> 6;
int r = bitmapIdx & 63;
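// e.g. bitmapIdx = 67 -> q = 1 (bitmap word), r = 3 (bit within the word), so bit 3 of bitmap[1] is set below.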
assert (bitmap[q] >>> r & 1) == 0;
bitmap[q] |= 1L << r;
if (-- numAvail == 0) {
removeFromPool();
}
return toHandle(bitmapIdx);
}
/**
* @return {@code true} if this subpage is in use.
* {@code false} if this subpage is not used by its chunk and thus it's OK to be released.
*/
boolean free(PoolSubpage<T> head, int bitmapIdx) {
int q = bitmapIdx >>> 6;
int r = bitmapIdx & 63;
assert (bitmap[q] >>> r & 1) != 0;
bitmap[q] ^= 1L << r;
setNextAvail(bitmapIdx);
if (numAvail ++ == 0) {
addToPool(head);
/* When maxNumElems == 1, the maximum numAvail is also 1.
* Each of these PoolSubpages will go in here when they do free operation.
* If they return true directly from here, then the rest of the code will be unreachable
* and they will not actually be recycled. So return true only on maxNumElems > 1. */
if (maxNumElems > 1) {
return true;
}
}
if (numAvail != maxNumElems) {
return true;
} else {
// Subpage not in use (numAvail == maxNumElems)
if (prev == next) {
// Do not remove if this subpage is the only one left in the pool.
return true;
}
// Remove this subpage from the pool if there are other subpages left in the pool.
doNotDestroy = false;
removeFromPool();
return false;
}
}
private void addToPool(PoolSubpage<T> head) {
assert prev == null && next == null;
prev = head;
next = head.next;
next.prev = this;
head.next = this;
}
private void removeFromPool() {
assert prev != null && next != null;
prev.next = next;
next.prev = prev;
next = null;
prev = null;
}
private void setNextAvail(int bitmapIdx) {
nextAvail = bitmapIdx;
}
private int getNextAvail() {
int nextAvail = this.nextAvail;
if (nextAvail >= 0) {
this.nextAvail = -1;
return nextAvail;
}
return findNextAvail();
}
private int findNextAvail() {
for (int i = 0; i < bitmapLength; i ++) {
long bits = bitmap[i];
if (~bits != 0) {
return findNextAvail0(i, bits);
}
}
return -1;
}
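// Illustrative walk-through: for bits = 0b10111 the elements 0, 1, 2 and 4 of this word are in use;
// findNextAvail0 scans from bit 0, finds the first clear bit at j = 3 and returns (i << 6) | 3,
// i.e. the fourth element of the i-th bitmap word.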
private int findNextAvail0(int i, long bits) {
final int baseVal = i << 6;
for (int j = 0; j < 64; j ++) {
if ((bits & 1) == 0) {
int val = baseVal | j;
if (val < maxNumElems) {
return val;
} else {
break;
}
}
bits >>>= 1;
}
return -1;
}
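// toHandle packs the run offset, the run size in pages, the "used" and "subpage" flags and the bitmap index
// into a single long; the exact bit positions are given by the shift constants imported from PoolChunk.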
private long toHandle(int bitmapIdx) {
int pages = runSize >> pageShifts;
return (long) runOffset << RUN_OFFSET_SHIFT
| (long) pages << SIZE_SHIFT
| 1L << IS_USED_SHIFT
| 1L << IS_SUBPAGE_SHIFT
| bitmapIdx;
}
@Override
public String toString() {
final int numAvail;
if (chunk == null) {
// This is the head so there is no need to synchronize at all as these never change.
numAvail = 0;
} else {
final boolean doNotDestroy;
PoolSubpage<T> head = chunk.arena.smallSubpagePools[headIndex];
head.lock();
try {
doNotDestroy = this.doNotDestroy;
numAvail = this.numAvail;
} finally {
head.unlock();
}
if (!doNotDestroy) {
// Not used for creating the String.
return "(" + runOffset + ": not in use)";
}
}
return "(" + this.runOffset + ": " + (this.maxNumElems - numAvail) + '/' + this.maxNumElems +
", offset: " + this.runOffset + ", length: " + this.runSize + ", elemSize: " + this.elemSize + ')';
}
@Override
public int maxNumElements() {
return maxNumElems;
}
@Override
public int numAvailable() {
if (chunk == null) {
// It's the head.
return 0;
}
PoolSubpage<T> head = chunk.arena.smallSubpagePools[headIndex];
head.lock();
try {
return numAvail;
} finally {
head.unlock();
}
}
@Override
public int elementSize() {
return elemSize;
}
@Override
public int pageSize() {
return 1 << pageShifts;
}
boolean isDoNotDestroy() {
if (chunk == null) {
// It's the head.
return true;
}
PoolSubpage<T> head = chunk.arena.smallSubpagePools[headIndex];
head.lock();
try {
return doNotDestroy;
} finally {
head.unlock();
}
}
void destroy() {
if (chunk != null) {
chunk.destroy();
}
}
void lock() {
lock.lock();
}
void unlock() {
lock.unlock();
}
}

View file

@@ -0,0 +1,43 @@
/*
* Copyright 2015 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
/**
* Metrics for a sub-page.
*/
public interface PoolSubpageMetric {
/**
 * Return the maximum number of elements that can be allocated out of the sub-page.
*/
int maxNumElements();
/**
* Return the number of available elements to be allocated.
*/
int numAvailable();
/**
* Return the size (in bytes) of the elements that will be allocated.
*/
int elementSize();
/**
* Return the page size (in bytes) of this page.
*/
int pageSize();
}

View file

@@ -0,0 +1,504 @@
/*
* Copyright 2012 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import static io.netty.util.internal.ObjectUtil.checkPositiveOrZero;
import io.netty.buffer.PoolArena.SizeClass;
import io.netty.util.Recycler.EnhancedHandle;
import io.netty.util.internal.MathUtil;
import io.netty.util.internal.ObjectPool;
import io.netty.util.internal.ObjectPool.Handle;
import io.netty.util.internal.ObjectPool.ObjectCreator;
import io.netty.util.internal.PlatformDependent;
import io.netty.util.internal.logging.InternalLogger;
import io.netty.util.internal.logging.InternalLoggerFactory;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.atomic.AtomicBoolean;
/**
 * Acts as a thread cache for allocations. This implementation is modeled after
 * <a href="https://people.freebsd.org/~jasone/jemalloc/bsdcan2006/jemalloc.pdf">jemalloc</a> and the described
 * techniques of
* <a href="https://www.facebook.com/notes/facebook-engineering/scalable-memory-allocation-using-jemalloc/480222803919">
* Scalable memory allocation using jemalloc</a>.
*/
final class PoolThreadCache {
private static final InternalLogger logger = InternalLoggerFactory.getInstance(PoolThreadCache.class);
private static final int INTEGER_SIZE_MINUS_ONE = Integer.SIZE - 1;
final PoolArena<byte[]> heapArena;
final PoolArena<ByteBuffer> directArena;
// Hold the caches for the different size classes, which are small and normal.
private final MemoryRegionCache<byte[]>[] smallSubPageHeapCaches;
private final MemoryRegionCache<ByteBuffer>[] smallSubPageDirectCaches;
private final MemoryRegionCache<byte[]>[] normalHeapCaches;
private final MemoryRegionCache<ByteBuffer>[] normalDirectCaches;
private final int freeSweepAllocationThreshold;
private final AtomicBoolean freed = new AtomicBoolean();
@SuppressWarnings("unused") // Field is only here for the finalizer.
private final FreeOnFinalize freeOnFinalize;
private int allocations;
// TODO: Test if adding padding helps under contention
//private long pad0, pad1, pad2, pad3, pad4, pad5, pad6, pad7;
PoolThreadCache(PoolArena<byte[]> heapArena, PoolArena<ByteBuffer> directArena,
int smallCacheSize, int normalCacheSize, int maxCachedBufferCapacity,
int freeSweepAllocationThreshold, boolean useFinalizer) {
checkPositiveOrZero(maxCachedBufferCapacity, "maxCachedBufferCapacity");
this.freeSweepAllocationThreshold = freeSweepAllocationThreshold;
this.heapArena = heapArena;
this.directArena = directArena;
if (directArena != null) {
smallSubPageDirectCaches = createSubPageCaches(smallCacheSize, directArena.sizeClass.nSubpages);
normalDirectCaches = createNormalCaches(normalCacheSize, maxCachedBufferCapacity, directArena);
directArena.numThreadCaches.getAndIncrement();
} else {
// No directArena is configured so just null out all caches
smallSubPageDirectCaches = null;
normalDirectCaches = null;
}
if (heapArena != null) {
// Create the caches for the heap allocations
smallSubPageHeapCaches = createSubPageCaches(smallCacheSize, heapArena.sizeClass.nSubpages);
normalHeapCaches = createNormalCaches(normalCacheSize, maxCachedBufferCapacity, heapArena);
heapArena.numThreadCaches.getAndIncrement();
} else {
// No heapArena is configured so just null out all caches
smallSubPageHeapCaches = null;
normalHeapCaches = null;
}
// Only check if there are caches in use.
if ((smallSubPageDirectCaches != null || normalDirectCaches != null
|| smallSubPageHeapCaches != null || normalHeapCaches != null)
&& freeSweepAllocationThreshold < 1) {
throw new IllegalArgumentException("freeSweepAllocationThreshold: "
+ freeSweepAllocationThreshold + " (expected: > 0)");
}
freeOnFinalize = useFinalizer ? new FreeOnFinalize(this) : null;
}
private static <T> MemoryRegionCache<T>[] createSubPageCaches(
int cacheSize, int numCaches) {
if (cacheSize > 0 && numCaches > 0) {
@SuppressWarnings("unchecked")
MemoryRegionCache<T>[] cache = new MemoryRegionCache[numCaches];
for (int i = 0; i < cache.length; i++) {
// TODO: maybe use cacheSize / cache.length
cache[i] = new SubPageMemoryRegionCache<T>(cacheSize);
}
return cache;
} else {
return null;
}
}
@SuppressWarnings("unchecked")
private static <T> MemoryRegionCache<T>[] createNormalCaches(
int cacheSize, int maxCachedBufferCapacity, PoolArena<T> area) {
if (cacheSize > 0 && maxCachedBufferCapacity > 0) {
int max = Math.min(area.sizeClass.chunkSize, maxCachedBufferCapacity);
// Create as many normal caches as we support based on how many sizeIdx we have and what the upper
// bound is that we want to cache in general.
List<MemoryRegionCache<T>> cache = new ArrayList<MemoryRegionCache<T>>();
for (int idx = area.sizeClass.nSubpages; idx < area.sizeClass.nSizes &&
area.sizeClass.sizeIdx2size(idx) <= max; idx++) {
cache.add(new NormalMemoryRegionCache<T>(cacheSize));
}
return cache.toArray(new MemoryRegionCache[0]);
} else {
return null;
}
}
// val > 0
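// e.g. log2(16384) = 31 - numberOfLeadingZeros(16384) = 31 - 17 = 14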
static int log2(int val) {
return INTEGER_SIZE_MINUS_ONE - Integer.numberOfLeadingZeros(val);
}
/**
 * Try to allocate a small buffer out of the cache. Returns {@code true} if successful, {@code false} otherwise.
*/
boolean allocateSmall(PoolArena<?> area, PooledByteBuf<?> buf, int reqCapacity, int sizeIdx) {
return allocate(cacheForSmall(area, sizeIdx), buf, reqCapacity);
}
/**
 * Try to allocate a normal buffer out of the cache. Returns {@code true} if successful, {@code false} otherwise.
*/
boolean allocateNormal(PoolArena<?> area, PooledByteBuf<?> buf, int reqCapacity, int sizeIdx) {
return allocate(cacheForNormal(area, sizeIdx), buf, reqCapacity);
}
@SuppressWarnings({ "unchecked", "rawtypes" })
private boolean allocate(MemoryRegionCache<?> cache, PooledByteBuf buf, int reqCapacity) {
if (cache == null) {
// no cache found so just return false here
return false;
}
boolean allocated = cache.allocate(buf, reqCapacity, this);
if (++ allocations >= freeSweepAllocationThreshold) {
allocations = 0;
trim();
}
return allocated;
}
/**
* Add {@link PoolChunk} and {@code handle} to the cache if there is enough room.
 * Returns {@code true} if it fits into the cache, {@code false} otherwise.
*/
@SuppressWarnings({ "unchecked", "rawtypes" })
boolean add(PoolArena<?> area, PoolChunk chunk, ByteBuffer nioBuffer,
long handle, int normCapacity, SizeClass sizeClass) {
int sizeIdx = area.sizeClass.size2SizeIdx(normCapacity);
MemoryRegionCache<?> cache = cache(area, sizeIdx, sizeClass);
if (cache == null) {
return false;
}
if (freed.get()) {
return false;
}
return cache.add(chunk, nioBuffer, handle, normCapacity);
}
private MemoryRegionCache<?> cache(PoolArena<?> area, int sizeIdx, SizeClass sizeClass) {
switch (sizeClass) {
case Normal:
return cacheForNormal(area, sizeIdx);
case Small:
return cacheForSmall(area, sizeIdx);
default:
throw new Error();
}
}
/**
 * Should be called if the Thread that uses this cache is about to exit, in order to release resources out of the cache.
*/
void free(boolean finalizer) {
// As free() may be called either by the finalizer or by FastThreadLocal.onRemoval(...) we need to ensure
// we only call this one time.
if (freed.compareAndSet(false, true)) {
int numFreed = free(smallSubPageDirectCaches, finalizer) +
free(normalDirectCaches, finalizer) +
free(smallSubPageHeapCaches, finalizer) +
free(normalHeapCaches, finalizer);
if (numFreed > 0 && logger.isDebugEnabled()) {
logger.debug("Freed {} thread-local buffer(s) from thread: {}", numFreed,
Thread.currentThread().getName());
}
if (directArena != null) {
directArena.numThreadCaches.getAndDecrement();
}
if (heapArena != null) {
heapArena.numThreadCaches.getAndDecrement();
}
} else {
// See https://github.com/netty/netty/issues/12749
checkCacheMayLeak(smallSubPageDirectCaches, "SmallSubPageDirectCaches");
checkCacheMayLeak(normalDirectCaches, "NormalDirectCaches");
checkCacheMayLeak(smallSubPageHeapCaches, "SmallSubPageHeapCaches");
checkCacheMayLeak(normalHeapCaches, "NormalHeapCaches");
}
}
private static void checkCacheMayLeak(MemoryRegionCache<?>[] caches, String type) {
for (MemoryRegionCache<?> cache : caches) {
if (!cache.queue.isEmpty()) {
logger.debug("{} memory may leak.", type);
return;
}
}
}
private static int free(MemoryRegionCache<?>[] caches, boolean finalizer) {
if (caches == null) {
return 0;
}
int numFreed = 0;
for (MemoryRegionCache<?> c: caches) {
numFreed += free(c, finalizer);
}
return numFreed;
}
private static int free(MemoryRegionCache<?> cache, boolean finalizer) {
if (cache == null) {
return 0;
}
return cache.free(finalizer);
}
void trim() {
trim(smallSubPageDirectCaches);
trim(normalDirectCaches);
trim(smallSubPageHeapCaches);
trim(normalHeapCaches);
}
private static void trim(MemoryRegionCache<?>[] caches) {
if (caches == null) {
return;
}
for (MemoryRegionCache<?> c: caches) {
trim(c);
}
}
private static void trim(MemoryRegionCache<?> cache) {
if (cache == null) {
return;
}
cache.trim();
}
private MemoryRegionCache<?> cacheForSmall(PoolArena<?> area, int sizeIdx) {
if (area.isDirect()) {
return cache(smallSubPageDirectCaches, sizeIdx);
}
return cache(smallSubPageHeapCaches, sizeIdx);
}
private MemoryRegionCache<?> cacheForNormal(PoolArena<?> area, int sizeIdx) {
// We need to subtract area.sizeClass.nSubpages as sizeIdx is the overall index for all sizes.
int idx = sizeIdx - area.sizeClass.nSubpages;
if (area.isDirect()) {
return cache(normalDirectCaches, idx);
}
return cache(normalHeapCaches, idx);
}
private static <T> MemoryRegionCache<T> cache(MemoryRegionCache<T>[] cache, int sizeIdx) {
if (cache == null || sizeIdx > cache.length - 1) {
return null;
}
return cache[sizeIdx];
}
/**
 * Cache used for buffers which are backed by SMALL size.
*/
private static final class SubPageMemoryRegionCache<T> extends MemoryRegionCache<T> {
SubPageMemoryRegionCache(int size) {
super(size, SizeClass.Small);
}
@Override
protected void initBuf(
PoolChunk<T> chunk, ByteBuffer nioBuffer, long handle, PooledByteBuf<T> buf, int reqCapacity,
PoolThreadCache threadCache) {
chunk.initBufWithSubpage(buf, nioBuffer, handle, reqCapacity, threadCache);
}
}
/**
* Cache used for buffers which are backed by NORMAL size.
*/
private static final class NormalMemoryRegionCache<T> extends MemoryRegionCache<T> {
NormalMemoryRegionCache(int size) {
super(size, SizeClass.Normal);
}
@Override
protected void initBuf(
PoolChunk<T> chunk, ByteBuffer nioBuffer, long handle, PooledByteBuf<T> buf, int reqCapacity,
PoolThreadCache threadCache) {
chunk.initBuf(buf, nioBuffer, handle, reqCapacity, threadCache);
}
}
private abstract static class MemoryRegionCache<T> {
private final int size;
private final Queue<Entry<T>> queue;
private final SizeClass sizeClass;
private int allocations;
MemoryRegionCache(int size, SizeClass sizeClass) {
this.size = MathUtil.safeFindNextPositivePowerOfTwo(size);
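// The queue capacity is rounded up to the next power of two, e.g. a configured cache size of 100 becomes 128
// (assuming safeFindNextPositivePowerOfTwo rounds up, as its name suggests); 256 stays 256.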
queue = PlatformDependent.newFixedMpscQueue(this.size);
this.sizeClass = sizeClass;
}
/**
* Init the {@link PooledByteBuf} using the provided chunk and handle with the capacity restrictions.
*/
protected abstract void initBuf(PoolChunk<T> chunk, ByteBuffer nioBuffer, long handle,
PooledByteBuf<T> buf, int reqCapacity, PoolThreadCache threadCache);
/**
* Add to cache if not already full.
*/
@SuppressWarnings("unchecked")
public final boolean add(PoolChunk<T> chunk, ByteBuffer nioBuffer, long handle, int normCapacity) {
Entry<T> entry = newEntry(chunk, nioBuffer, handle, normCapacity);
boolean queued = queue.offer(entry);
if (!queued) {
// If it was not possible to cache the chunk, immediately recycle the entry
entry.unguardedRecycle();
}
return queued;
}
/**
* Allocate something out of the cache if possible and remove the entry from the cache.
*/
public final boolean allocate(PooledByteBuf<T> buf, int reqCapacity, PoolThreadCache threadCache) {
Entry<T> entry = queue.poll();
if (entry == null) {
return false;
}
initBuf(entry.chunk, entry.nioBuffer, entry.handle, buf, reqCapacity, threadCache);
entry.unguardedRecycle();
// allocations is not thread-safe which is fine as this is only called from the same thread all the time.
++ allocations;
return true;
}
/**
* Clear out this cache and free up all previous cached {@link PoolChunk}s and {@code handle}s.
*/
public final int free(boolean finalizer) {
return free(Integer.MAX_VALUE, finalizer);
}
private int free(int max, boolean finalizer) {
int numFreed = 0;
for (; numFreed < max; numFreed++) {
Entry<T> entry = queue.poll();
if (entry != null) {
freeEntry(entry, finalizer);
} else {
// all cleared
return numFreed;
}
}
return numFreed;
}
/**
* Free up cached {@link PoolChunk}s if not allocated frequently enough.
*/
public final void trim() {
int free = size - allocations;
allocations = 0;
// Fewer allocations than the cache size happened since the last trim, so free the surplus entries
// that were apparently not needed.
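// e.g. with size = 256 and only 10 allocations since the last trim, up to 246 cached entries are released.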
if (free > 0) {
free(free, false);
}
}
@SuppressWarnings({ "unchecked", "rawtypes" })
private void freeEntry(Entry entry, boolean finalizer) {
// Capture entry state before we recycle the entry object.
PoolChunk chunk = entry.chunk;
long handle = entry.handle;
ByteBuffer nioBuffer = entry.nioBuffer;
int normCapacity = entry.normCapacity;
if (!finalizer) {
// recycle now so PoolChunk can be GC'ed. This will only be done if this is not freed because of
// a finalizer.
entry.recycle();
}
chunk.arena.freeChunk(chunk, handle, normCapacity, sizeClass, nioBuffer, finalizer);
}
static final class Entry<T> {
final EnhancedHandle<Entry<?>> recyclerHandle;
PoolChunk<T> chunk;
ByteBuffer nioBuffer;
long handle = -1;
int normCapacity;
Entry(Handle<Entry<?>> recyclerHandle) {
this.recyclerHandle = (EnhancedHandle<Entry<?>>) recyclerHandle;
}
void recycle() {
chunk = null;
nioBuffer = null;
handle = -1;
recyclerHandle.recycle(this);
}
void unguardedRecycle() {
chunk = null;
nioBuffer = null;
handle = -1;
recyclerHandle.unguardedRecycle(this);
}
}
@SuppressWarnings("rawtypes")
private static Entry newEntry(PoolChunk<?> chunk, ByteBuffer nioBuffer, long handle, int normCapacity) {
Entry entry = RECYCLER.get();
entry.chunk = chunk;
entry.nioBuffer = nioBuffer;
entry.handle = handle;
entry.normCapacity = normCapacity;
return entry;
}
@SuppressWarnings("rawtypes")
private static final ObjectPool<Entry> RECYCLER = ObjectPool.newPool(new ObjectCreator<Entry>() {
@SuppressWarnings("unchecked")
@Override
public Entry newObject(Handle<Entry> handle) {
return new Entry(handle);
}
});
}
private static final class FreeOnFinalize {
private final PoolThreadCache cache;
private FreeOnFinalize(PoolThreadCache cache) {
this.cache = cache;
}
/// TODO: In the future when we move to Java9+ we should use java.lang.ref.Cleaner.
@SuppressWarnings({"FinalizeDeclaration", "deprecation"})
@Override
protected void finalize() throws Throwable {
try {
super.finalize();
} finally {
cache.free(true);
}
}
}
}

View file

@@ -0,0 +1,269 @@
/*
* Copyright 2012 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.Recycler.EnhancedHandle;
import io.netty.util.internal.ObjectPool.Handle;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.FileChannel;
import java.nio.channels.GatheringByteChannel;
import java.nio.channels.ScatteringByteChannel;
abstract class PooledByteBuf<T> extends AbstractReferenceCountedByteBuf {
private final EnhancedHandle<PooledByteBuf<T>> recyclerHandle;
protected PoolChunk<T> chunk;
protected long handle;
protected T memory;
protected int offset;
protected int length;
int maxLength;
PoolThreadCache cache;
ByteBuffer tmpNioBuf;
private ByteBufAllocator allocator;
@SuppressWarnings("unchecked")
protected PooledByteBuf(Handle<? extends PooledByteBuf<T>> recyclerHandle, int maxCapacity) {
super(maxCapacity);
this.recyclerHandle = (EnhancedHandle<PooledByteBuf<T>>) recyclerHandle;
}
void init(PoolChunk<T> chunk, ByteBuffer nioBuffer,
long handle, int offset, int length, int maxLength, PoolThreadCache cache) {
init0(chunk, nioBuffer, handle, offset, length, maxLength, cache);
}
void initUnpooled(PoolChunk<T> chunk, int length) {
init0(chunk, null, 0, 0, length, length, null);
}
private void init0(PoolChunk<T> chunk, ByteBuffer nioBuffer,
long handle, int offset, int length, int maxLength, PoolThreadCache cache) {
assert handle >= 0;
assert chunk != null;
assert !PoolChunk.isSubpage(handle) ||
chunk.arena.sizeClass.size2SizeIdx(maxLength) <= chunk.arena.sizeClass.smallMaxSizeIdx:
"Allocated small sub-page handle for a buffer size that isn't \"small.\"";
chunk.incrementPinnedMemory(maxLength);
this.chunk = chunk;
memory = chunk.memory;
tmpNioBuf = nioBuffer;
allocator = chunk.arena.parent;
this.cache = cache;
this.handle = handle;
this.offset = offset;
this.length = length;
this.maxLength = maxLength;
}
/**
 * Method must be called before reusing this {@link PooledByteBuf}
*/
final void reuse(int maxCapacity) {
maxCapacity(maxCapacity);
resetRefCnt();
setIndex0(0, 0);
discardMarks();
}
@Override
public final int capacity() {
return length;
}
@Override
public int maxFastWritableBytes() {
return Math.min(maxLength, maxCapacity()) - writerIndex;
}
@Override
public final ByteBuf capacity(int newCapacity) {
if (newCapacity == length) {
ensureAccessible();
return this;
}
checkNewCapacity(newCapacity);
if (!chunk.unpooled) {
// If the requested capacity does not require reallocation, just update the length of the memory.
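// e.g. (illustrative) length = 512, maxLength = 1024, newCapacity = 800: the buffer simply grows to 800 bytes
// within the already reserved run; growing beyond maxLength, or shrinking far below it, falls through to
// reallocate(...) at the bottom of this method.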
if (newCapacity > length) {
if (newCapacity <= maxLength) {
length = newCapacity;
return this;
}
} else if (newCapacity > maxLength >>> 1 &&
(maxLength > 512 || newCapacity > maxLength - 16)) {
// here newCapacity < length
length = newCapacity;
trimIndicesToCapacity(newCapacity);
return this;
}
}
// Reallocation required.
chunk.arena.reallocate(this, newCapacity);
return this;
}
@Override
public final ByteBufAllocator alloc() {
return allocator;
}
@Override
public final ByteOrder order() {
return ByteOrder.BIG_ENDIAN;
}
@Override
public final ByteBuf unwrap() {
return null;
}
@Override
public final ByteBuf retainedDuplicate() {
return PooledDuplicatedByteBuf.newInstance(this, this, readerIndex(), writerIndex());
}
@Override
public final ByteBuf retainedSlice() {
final int index = readerIndex();
return retainedSlice(index, writerIndex() - index);
}
@Override
public final ByteBuf retainedSlice(int index, int length) {
return PooledSlicedByteBuf.newInstance(this, this, index, length);
}
protected final ByteBuffer internalNioBuffer() {
ByteBuffer tmpNioBuf = this.tmpNioBuf;
if (tmpNioBuf == null) {
this.tmpNioBuf = tmpNioBuf = newInternalNioBuffer(memory);
} else {
tmpNioBuf.clear();
}
return tmpNioBuf;
}
protected abstract ByteBuffer newInternalNioBuffer(T memory);
@Override
protected final void deallocate() {
if (handle >= 0) {
final long handle = this.handle;
this.handle = -1;
memory = null;
chunk.arena.free(chunk, tmpNioBuf, handle, maxLength, cache);
tmpNioBuf = null;
chunk = null;
cache = null;
this.recyclerHandle.unguardedRecycle(this);
}
}
protected final int idx(int index) {
return offset + index;
}
final ByteBuffer _internalNioBuffer(int index, int length, boolean duplicate) {
index = idx(index);
ByteBuffer buffer = duplicate ? newInternalNioBuffer(memory) : internalNioBuffer();
buffer.limit(index + length).position(index);
return buffer;
}
ByteBuffer duplicateInternalNioBuffer(int index, int length) {
checkIndex(index, length);
return _internalNioBuffer(index, length, true);
}
@Override
public final ByteBuffer internalNioBuffer(int index, int length) {
checkIndex(index, length);
return _internalNioBuffer(index, length, false);
}
@Override
public final int nioBufferCount() {
return 1;
}
@Override
public final ByteBuffer nioBuffer(int index, int length) {
return duplicateInternalNioBuffer(index, length).slice();
}
@Override
public final ByteBuffer[] nioBuffers(int index, int length) {
return new ByteBuffer[] { nioBuffer(index, length) };
}
@Override
public final boolean isContiguous() {
return true;
}
@Override
public final int getBytes(int index, GatheringByteChannel out, int length) throws IOException {
return out.write(duplicateInternalNioBuffer(index, length));
}
@Override
public final int readBytes(GatheringByteChannel out, int length) throws IOException {
checkReadableBytes(length);
int readBytes = out.write(_internalNioBuffer(readerIndex, length, false));
readerIndex += readBytes;
return readBytes;
}
@Override
public final int getBytes(int index, FileChannel out, long position, int length) throws IOException {
return out.write(duplicateInternalNioBuffer(index, length), position);
}
@Override
public final int readBytes(FileChannel out, long position, int length) throws IOException {
checkReadableBytes(length);
int readBytes = out.write(_internalNioBuffer(readerIndex, length, false), position);
readerIndex += readBytes;
return readBytes;
}
@Override
public final int setBytes(int index, ScatteringByteChannel in, int length) throws IOException {
try {
return in.read(internalNioBuffer(index, length));
} catch (ClosedChannelException ignored) {
return -1;
}
}
@Override
public final int setBytes(int index, FileChannel in, long position, int length) throws IOException {
try {
return in.read(internalNioBuffer(index, length), position);
} catch (ClosedChannelException ignored) {
return -1;
}
}
}

View file

@@ -0,0 +1,763 @@
package io.netty.buffer;
import static io.netty.util.internal.ObjectUtil.checkPositiveOrZero;
import io.netty.util.NettyRuntime;
import io.netty.util.concurrent.EventExecutor;
import io.netty.util.concurrent.FastThreadLocal;
import io.netty.util.concurrent.FastThreadLocalThread;
import io.netty.util.internal.PlatformDependent;
import io.netty.util.internal.StringUtil;
import io.netty.util.internal.SystemPropertyUtil;
import io.netty.util.internal.ThreadExecutorMap;
import io.netty.util.internal.logging.InternalLogger;
import io.netty.util.internal.logging.InternalLoggerFactory;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.TimeUnit;
public class PooledByteBufAllocator extends AbstractByteBufAllocator implements ByteBufAllocatorMetricProvider {
private static final InternalLogger logger = InternalLoggerFactory.getInstance(PooledByteBufAllocator.class);
private static final int DEFAULT_NUM_HEAP_ARENA;
private static final int DEFAULT_NUM_DIRECT_ARENA;
private static final int DEFAULT_PAGE_SIZE;
private static final int DEFAULT_MAX_ORDER; // 8192 << 9 = 4 MiB per chunk
private static final int DEFAULT_SMALL_CACHE_SIZE;
private static final int DEFAULT_NORMAL_CACHE_SIZE;
static final int DEFAULT_MAX_CACHED_BUFFER_CAPACITY;
private static final int DEFAULT_CACHE_TRIM_INTERVAL;
private static final long DEFAULT_CACHE_TRIM_INTERVAL_MILLIS;
private static final boolean DEFAULT_USE_CACHE_FOR_ALL_THREADS;
private static final int DEFAULT_DIRECT_MEMORY_CACHE_ALIGNMENT;
static final int DEFAULT_MAX_CACHED_BYTEBUFFERS_PER_CHUNK;
private static final int MIN_PAGE_SIZE = 4096;
private static final int MAX_CHUNK_SIZE = (int) (((long) Integer.MAX_VALUE + 1) / 2);
private static final int CACHE_NOT_USED = 0;
private final Runnable trimTask = new Runnable() {
@Override
public void run() {
PooledByteBufAllocator.this.trimCurrentThreadCache();
}
};
static {
int defaultAlignment = SystemPropertyUtil.getInt(
"io.netty.allocator.directMemoryCacheAlignment", 0);
int defaultPageSize = SystemPropertyUtil.getInt("io.netty.allocator.pageSize", 8192);
Throwable pageSizeFallbackCause = null;
try {
validateAndCalculatePageShifts(defaultPageSize, defaultAlignment);
} catch (Throwable t) {
pageSizeFallbackCause = t;
defaultPageSize = 8192;
defaultAlignment = 0;
}
DEFAULT_PAGE_SIZE = defaultPageSize;
DEFAULT_DIRECT_MEMORY_CACHE_ALIGNMENT = defaultAlignment;
int defaultMaxOrder = SystemPropertyUtil.getInt("io.netty.allocator.maxOrder", 9);
Throwable maxOrderFallbackCause = null;
try {
validateAndCalculateChunkSize(DEFAULT_PAGE_SIZE, defaultMaxOrder);
} catch (Throwable t) {
maxOrderFallbackCause = t;
defaultMaxOrder = 9;
}
DEFAULT_MAX_ORDER = defaultMaxOrder;
// Determine reasonable default for nHeapArena and nDirectArena.
// Assuming each arena has 3 chunks, the pool should not consume more than 50% of max memory.
final Runtime runtime = Runtime.getRuntime();
/*
* We use 2 * available processors by default to reduce contention as we use 2 * available processors for the
* number of EventLoops in NIO and EPOLL as well. If we choose a smaller number we will run into hot spots as
* allocation and de-allocation needs to be synchronized on the PoolArena.
*
* See https://github.com/netty/netty/issues/3888.
*/
final int defaultMinNumArena = NettyRuntime.availableProcessors() * 2;
final int defaultChunkSize = DEFAULT_PAGE_SIZE << DEFAULT_MAX_ORDER;
DEFAULT_NUM_HEAP_ARENA = Math.max(0,
SystemPropertyUtil.getInt(
"io.netty.allocator.numHeapArenas",
(int) Math.min(
defaultMinNumArena,
runtime.maxMemory() / defaultChunkSize / 2 / 3)));
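// Illustrative example: with 8 cores, a 4 GiB max heap and 4 MiB chunks this is
// min(2 * 8, 4096 / 4 / 2 / 3) = min(16, 170) = 16 heap arenas; the memory-based bound only
// kicks in on comparatively small heaps.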
DEFAULT_NUM_DIRECT_ARENA = Math.max(0,
SystemPropertyUtil.getInt(
"io.netty.allocator.numDirectArenas",
(int) Math.min(
defaultMinNumArena,
PlatformDependent.maxDirectMemory() / defaultChunkSize / 2 / 3)));
// cache sizes
DEFAULT_SMALL_CACHE_SIZE = SystemPropertyUtil.getInt("io.netty.allocator.smallCacheSize", 256);
DEFAULT_NORMAL_CACHE_SIZE = SystemPropertyUtil.getInt("io.netty.allocator.normalCacheSize", 64);
// 32 kb is the default maximum capacity of the cached buffer. Similar to what is explained in
// 'Scalable memory allocation using jemalloc'
DEFAULT_MAX_CACHED_BUFFER_CAPACITY = SystemPropertyUtil.getInt(
"io.netty.allocator.maxCachedBufferCapacity", 32 * 1024);
// the threshold (number of allocations) after which cached entries will be freed up if they are not used frequently
DEFAULT_CACHE_TRIM_INTERVAL = SystemPropertyUtil.getInt(
"io.netty.allocator.cacheTrimInterval", 8192);
if (SystemPropertyUtil.contains("io.netty.allocation.cacheTrimIntervalMillis")) {
logger.warn("-Dio.netty.allocation.cacheTrimIntervalMillis is deprecated," +
" use -Dio.netty.allocator.cacheTrimIntervalMillis");
if (SystemPropertyUtil.contains("io.netty.allocator.cacheTrimIntervalMillis")) {
// Both system properties are specified. Use the non-deprecated one.
DEFAULT_CACHE_TRIM_INTERVAL_MILLIS = SystemPropertyUtil.getLong(
"io.netty.allocator.cacheTrimIntervalMillis", 0);
} else {
DEFAULT_CACHE_TRIM_INTERVAL_MILLIS = SystemPropertyUtil.getLong(
"io.netty.allocation.cacheTrimIntervalMillis", 0);
}
} else {
DEFAULT_CACHE_TRIM_INTERVAL_MILLIS = SystemPropertyUtil.getLong(
"io.netty.allocator.cacheTrimIntervalMillis", 0);
}
DEFAULT_USE_CACHE_FOR_ALL_THREADS = SystemPropertyUtil.getBoolean(
"io.netty.allocator.useCacheForAllThreads", false);
// Use 1023 by default as we use an ArrayDeque as backing storage which will then allocate an internal array
// of 1024 elements. Otherwise we would allocate 2048 and only use 1024 which is wasteful.
DEFAULT_MAX_CACHED_BYTEBUFFERS_PER_CHUNK = SystemPropertyUtil.getInt(
"io.netty.allocator.maxCachedByteBuffersPerChunk", 1023);
if (logger.isDebugEnabled()) {
logger.debug("-Dio.netty.allocator.numHeapArenas: {}", DEFAULT_NUM_HEAP_ARENA);
logger.debug("-Dio.netty.allocator.numDirectArenas: {}", DEFAULT_NUM_DIRECT_ARENA);
if (pageSizeFallbackCause == null) {
logger.debug("-Dio.netty.allocator.pageSize: {}", DEFAULT_PAGE_SIZE);
} else {
logger.debug("-Dio.netty.allocator.pageSize: {}", DEFAULT_PAGE_SIZE, pageSizeFallbackCause);
}
if (maxOrderFallbackCause == null) {
logger.debug("-Dio.netty.allocator.maxOrder: {}", DEFAULT_MAX_ORDER);
} else {
logger.debug("-Dio.netty.allocator.maxOrder: {}", DEFAULT_MAX_ORDER, maxOrderFallbackCause);
}
logger.debug("-Dio.netty.allocator.chunkSize: {}", DEFAULT_PAGE_SIZE << DEFAULT_MAX_ORDER);
logger.debug("-Dio.netty.allocator.smallCacheSize: {}", DEFAULT_SMALL_CACHE_SIZE);
logger.debug("-Dio.netty.allocator.normalCacheSize: {}", DEFAULT_NORMAL_CACHE_SIZE);
logger.debug("-Dio.netty.allocator.maxCachedBufferCapacity: {}", DEFAULT_MAX_CACHED_BUFFER_CAPACITY);
logger.debug("-Dio.netty.allocator.cacheTrimInterval: {}", DEFAULT_CACHE_TRIM_INTERVAL);
logger.debug("-Dio.netty.allocator.cacheTrimIntervalMillis: {}", DEFAULT_CACHE_TRIM_INTERVAL_MILLIS);
logger.debug("-Dio.netty.allocator.useCacheForAllThreads: {}", DEFAULT_USE_CACHE_FOR_ALL_THREADS);
logger.debug("-Dio.netty.allocator.maxCachedByteBuffersPerChunk: {}",
DEFAULT_MAX_CACHED_BYTEBUFFERS_PER_CHUNK);
}
}
public static final PooledByteBufAllocator DEFAULT =
new PooledByteBufAllocator(PlatformDependent.directBufferPreferred());
private final PoolArena<byte[]>[] heapArenas;
private final PoolArena<ByteBuffer>[] directArenas;
private final int smallCacheSize;
private final int normalCacheSize;
private final List<PoolArenaMetric> heapArenaMetrics;
private final List<PoolArenaMetric> directArenaMetrics;
private final PoolThreadLocalCache threadCache;
private final int chunkSize;
private final PooledByteBufAllocatorMetric metric;
public PooledByteBufAllocator() {
this(false);
}
@SuppressWarnings("deprecation")
public PooledByteBufAllocator(boolean preferDirect) {
this(preferDirect, DEFAULT_NUM_HEAP_ARENA, DEFAULT_NUM_DIRECT_ARENA, DEFAULT_PAGE_SIZE, DEFAULT_MAX_ORDER);
}
@SuppressWarnings("deprecation")
public PooledByteBufAllocator(int nHeapArena, int nDirectArena, int pageSize, int maxOrder) {
this(false, nHeapArena, nDirectArena, pageSize, maxOrder);
}
/**
* @deprecated use
* {@link PooledByteBufAllocator#PooledByteBufAllocator(boolean, int, int, int, int, int, int, boolean)}
*/
@Deprecated
public PooledByteBufAllocator(boolean preferDirect, int nHeapArena, int nDirectArena, int pageSize, int maxOrder) {
this(preferDirect, nHeapArena, nDirectArena, pageSize, maxOrder,
0, DEFAULT_SMALL_CACHE_SIZE, DEFAULT_NORMAL_CACHE_SIZE);
}
/**
* @deprecated use
* {@link PooledByteBufAllocator#PooledByteBufAllocator(boolean, int, int, int, int, int, int, boolean)}
*/
@Deprecated
public PooledByteBufAllocator(boolean preferDirect, int nHeapArena, int nDirectArena, int pageSize, int maxOrder,
int tinyCacheSize, int smallCacheSize, int normalCacheSize) {
this(preferDirect, nHeapArena, nDirectArena, pageSize, maxOrder, smallCacheSize,
normalCacheSize, DEFAULT_USE_CACHE_FOR_ALL_THREADS, DEFAULT_DIRECT_MEMORY_CACHE_ALIGNMENT);
}
/**
* @deprecated use
* {@link PooledByteBufAllocator#PooledByteBufAllocator(boolean, int, int, int, int, int, int, boolean)}
*/
@Deprecated
public PooledByteBufAllocator(boolean preferDirect, int nHeapArena,
int nDirectArena, int pageSize, int maxOrder, int tinyCacheSize,
int smallCacheSize, int normalCacheSize,
boolean useCacheForAllThreads) {
this(preferDirect, nHeapArena, nDirectArena, pageSize, maxOrder,
smallCacheSize, normalCacheSize,
useCacheForAllThreads);
}
public PooledByteBufAllocator(boolean preferDirect, int nHeapArena,
int nDirectArena, int pageSize, int maxOrder,
int smallCacheSize, int normalCacheSize,
boolean useCacheForAllThreads) {
this(preferDirect, nHeapArena, nDirectArena, pageSize, maxOrder,
smallCacheSize, normalCacheSize,
useCacheForAllThreads, DEFAULT_DIRECT_MEMORY_CACHE_ALIGNMENT);
}
/**
* @deprecated use
* {@link PooledByteBufAllocator#PooledByteBufAllocator(boolean, int, int, int, int, int, int, boolean, int)}
*/
@Deprecated
public PooledByteBufAllocator(boolean preferDirect, int nHeapArena, int nDirectArena, int pageSize, int maxOrder,
int tinyCacheSize, int smallCacheSize, int normalCacheSize,
boolean useCacheForAllThreads, int directMemoryCacheAlignment) {
this(preferDirect, nHeapArena, nDirectArena, pageSize, maxOrder,
smallCacheSize, normalCacheSize,
useCacheForAllThreads, directMemoryCacheAlignment);
}
public PooledByteBufAllocator(boolean preferDirect, int nHeapArena, int nDirectArena, int pageSize, int maxOrder,
int smallCacheSize, int normalCacheSize,
boolean useCacheForAllThreads, int directMemoryCacheAlignment) {
super(preferDirect);
threadCache = new PoolThreadLocalCache(useCacheForAllThreads);
this.smallCacheSize = smallCacheSize;
this.normalCacheSize = normalCacheSize;
if (directMemoryCacheAlignment != 0) {
if (!PlatformDependent.hasAlignDirectByteBuffer()) {
throw new UnsupportedOperationException("Buffer alignment is not supported. " +
"Either Unsafe or ByteBuffer.alignSlice() must be available.");
}
// Ensure page size is a whole multiple of the alignment, or bump it to the next whole multiple.
pageSize = (int) PlatformDependent.align(pageSize, directMemoryCacheAlignment);
}
chunkSize = validateAndCalculateChunkSize(pageSize, maxOrder);
checkPositiveOrZero(nHeapArena, "nHeapArena");
checkPositiveOrZero(nDirectArena, "nDirectArena");
checkPositiveOrZero(directMemoryCacheAlignment, "directMemoryCacheAlignment");
if (directMemoryCacheAlignment > 0 && !isDirectMemoryCacheAlignmentSupported()) {
throw new IllegalArgumentException("directMemoryCacheAlignment is not supported");
}
if ((directMemoryCacheAlignment & -directMemoryCacheAlignment) != directMemoryCacheAlignment) {
throw new IllegalArgumentException("directMemoryCacheAlignment: "
+ directMemoryCacheAlignment + " (expected: power of two)");
}
int pageShifts = validateAndCalculatePageShifts(pageSize, directMemoryCacheAlignment);
if (nHeapArena > 0) {
heapArenas = newArenaArray(nHeapArena);
List<PoolArenaMetric> metrics = new ArrayList<PoolArenaMetric>(heapArenas.length);
final SizeClasses sizeClasses = new SizeClasses(pageSize, pageShifts, chunkSize, 0);
for (int i = 0; i < heapArenas.length; i ++) {
PoolArena.HeapArena arena = new PoolArena.HeapArena(this, sizeClasses);
heapArenas[i] = arena;
metrics.add(arena);
}
heapArenaMetrics = Collections.unmodifiableList(metrics);
} else {
heapArenas = null;
heapArenaMetrics = Collections.emptyList();
}
if (nDirectArena > 0) {
directArenas = newArenaArray(nDirectArena);
List<PoolArenaMetric> metrics = new ArrayList<PoolArenaMetric>(directArenas.length);
final SizeClasses sizeClasses = new SizeClasses(pageSize, pageShifts, chunkSize,
directMemoryCacheAlignment);
for (int i = 0; i < directArenas.length; i ++) {
PoolArena.DirectArena arena = new PoolArena.DirectArena(this, sizeClasses);
directArenas[i] = arena;
metrics.add(arena);
}
directArenaMetrics = Collections.unmodifiableList(metrics);
} else {
directArenas = null;
directArenaMetrics = Collections.emptyList();
}
metric = new PooledByteBufAllocatorMetric(this);
}
@SuppressWarnings("unchecked")
private static <T> PoolArena<T>[] newArenaArray(int size) {
return new PoolArena[size];
}
private static int validateAndCalculatePageShifts(int pageSize, int alignment) {
if (pageSize < MIN_PAGE_SIZE) {
throw new IllegalArgumentException("pageSize: " + pageSize + " (expected: " + MIN_PAGE_SIZE + ')');
}
if ((pageSize & pageSize - 1) != 0) {
throw new IllegalArgumentException("pageSize: " + pageSize + " (expected: power of 2)");
}
if (pageSize < alignment) {
throw new IllegalArgumentException("Alignment cannot be greater than page size. " +
"Alignment: " + alignment + ", page size: " + pageSize + '.');
}
// Logarithm base 2. At this point we know that pageSize is a power of two.
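// e.g. pageSize = 8192 -> 31 - numberOfLeadingZeros(8192) = 31 - 18 = 13 page shifts.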
return Integer.SIZE - 1 - Integer.numberOfLeadingZeros(pageSize);
}
private static int validateAndCalculateChunkSize(int pageSize, int maxOrder) {
if (maxOrder > 14) {
throw new IllegalArgumentException("maxOrder: " + maxOrder + " (expected: 0-14)");
}
// Ensure the resulting chunkSize does not overflow.
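// e.g. pageSize = 8192 and maxOrder = 9 yield chunkSize = 8192 << 9 = 4 MiB, the documented default.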
int chunkSize = pageSize;
for (int i = maxOrder; i > 0; i --) {
if (chunkSize > MAX_CHUNK_SIZE / 2) {
throw new IllegalArgumentException(String.format(
"pageSize (%d) << maxOrder (%d) must not exceed %d", pageSize, maxOrder, MAX_CHUNK_SIZE));
}
chunkSize <<= 1;
}
return chunkSize;
}
@Override
protected ByteBuf newHeapBuffer(int initialCapacity, int maxCapacity) {
PoolThreadCache cache = threadCache.get();
PoolArena<byte[]> heapArena = cache.heapArena;
final ByteBuf buf;
if (heapArena != null) {
buf = heapArena.allocate(cache, initialCapacity, maxCapacity);
} else {
buf = PlatformDependent.hasUnsafe() ?
new UnpooledUnsafeHeapByteBuf(this, initialCapacity, maxCapacity) :
new UnpooledHeapByteBuf(this, initialCapacity, maxCapacity);
}
return toLeakAwareBuffer(buf);
}
@Override
protected ByteBuf newDirectBuffer(int initialCapacity, int maxCapacity) {
PoolThreadCache cache = threadCache.get();
PoolArena<ByteBuffer> directArena = cache.directArena;
final ByteBuf buf;
if (directArena != null) {
buf = directArena.allocate(cache, initialCapacity, maxCapacity);
} else {
buf = PlatformDependent.hasUnsafe() ?
UnsafeByteBufUtil.newUnsafeDirectByteBuf(this, initialCapacity, maxCapacity) :
new UnpooledDirectByteBuf(this, initialCapacity, maxCapacity);
}
return toLeakAwareBuffer(buf);
}
/**
* Default number of heap arenas - System Property: io.netty.allocator.numHeapArenas - default 2 * cores
*/
public static int defaultNumHeapArena() {
return DEFAULT_NUM_HEAP_ARENA;
}
/**
* Default number of direct arenas - System Property: io.netty.allocator.numDirectArenas - default 2 * cores
*/
public static int defaultNumDirectArena() {
return DEFAULT_NUM_DIRECT_ARENA;
}
/**
* Default buffer page size - System Property: io.netty.allocator.pageSize - default 8192
*/
public static int defaultPageSize() {
return DEFAULT_PAGE_SIZE;
}
/**
* Default maximum order - System Property: io.netty.allocator.maxOrder - default 9
*/
public static int defaultMaxOrder() {
return DEFAULT_MAX_ORDER;
}
/**
* Default thread caching behavior - System Property: io.netty.allocator.useCacheForAllThreads - default false
*/
public static boolean defaultUseCacheForAllThreads() {
return DEFAULT_USE_CACHE_FOR_ALL_THREADS;
}
/**
* Default prefer direct - System Property: io.netty.noPreferDirect - default false
*/
public static boolean defaultPreferDirect() {
return PlatformDependent.directBufferPreferred();
}
/**
* Default tiny cache size - default 0
*
* @deprecated Tiny caches have been merged into small caches.
*/
@Deprecated
public static int defaultTinyCacheSize() {
return 0;
}
/**
* Default small cache size - System Property: io.netty.allocator.smallCacheSize - default 256
*/
public static int defaultSmallCacheSize() {
return DEFAULT_SMALL_CACHE_SIZE;
}
/**
* Default normal cache size - System Property: io.netty.allocator.normalCacheSize - default 64
*/
public static int defaultNormalCacheSize() {
return DEFAULT_NORMAL_CACHE_SIZE;
}
/**
* Return {@code true} if direct memory cache alignment is supported, {@code false} otherwise.
*/
public static boolean isDirectMemoryCacheAlignmentSupported() {
return PlatformDependent.hasUnsafe();
}
@Override
public boolean isDirectBufferPooled() {
return directArenas != null;
}
/**
* @deprecated will be removed
* Returns {@code true} if the calling {@link Thread} has a {@link ThreadLocal} cache for the allocated
* buffers.
*/
@Deprecated
public boolean hasThreadLocalCache() {
return threadCache.isSet();
}
/**
* @deprecated will be removed
* Free all cached buffers for the calling {@link Thread}.
*/
@Deprecated
public void freeThreadLocalCache() {
threadCache.remove();
}
private final class PoolThreadLocalCache extends FastThreadLocal<PoolThreadCache> {
private final boolean useCacheForAllThreads;
PoolThreadLocalCache(boolean useCacheForAllThreads) {
this.useCacheForAllThreads = useCacheForAllThreads;
}
@Override
protected synchronized PoolThreadCache initialValue() {
final PoolArena<byte[]> heapArena = leastUsedArena(heapArenas);
final PoolArena<ByteBuffer> directArena = leastUsedArena(directArenas);
final Thread current = Thread.currentThread();
final EventExecutor executor = ThreadExecutorMap.currentExecutor();
if (useCacheForAllThreads ||
// If the current thread is a FastThreadLocalThread we will always use the cache
current instanceof FastThreadLocalThread ||
// The Thread is used by an EventExecutor, let's use the cache as the chances are good that we
// will allocate a lot!
executor != null) {
final PoolThreadCache cache = new PoolThreadCache(
heapArena, directArena, smallCacheSize, normalCacheSize,
DEFAULT_MAX_CACHED_BUFFER_CAPACITY, DEFAULT_CACHE_TRIM_INTERVAL, true);
if (DEFAULT_CACHE_TRIM_INTERVAL_MILLIS > 0) {
if (executor != null) {
executor.scheduleAtFixedRate(trimTask, DEFAULT_CACHE_TRIM_INTERVAL_MILLIS,
DEFAULT_CACHE_TRIM_INTERVAL_MILLIS, TimeUnit.MILLISECONDS);
}
}
return cache;
}
// No caching so just use 0 as sizes.
return new PoolThreadCache(heapArena, directArena, 0, 0, 0, 0, false);
}
@Override
protected void onRemoval(PoolThreadCache threadCache) {
threadCache.free(false);
}
private <T> PoolArena<T> leastUsedArena(PoolArena<T>[] arenas) {
if (arenas == null || arenas.length == 0) {
return null;
}
PoolArena<T> minArena = arenas[0];
// Optimization: if the first arena is not used by any thread cache yet, it is already the least used,
// so return it directly and skip the comparisons in the loop below.
if (minArena.numThreadCaches.get() == CACHE_NOT_USED) {
return minArena;
}
for (int i = 1; i < arenas.length; i++) {
PoolArena<T> arena = arenas[i];
if (arena.numThreadCaches.get() < minArena.numThreadCaches.get()) {
minArena = arena;
}
}
return minArena;
}
}
@Override
public PooledByteBufAllocatorMetric metric() {
return metric;
}
/**
* Return the number of heap arenas.
*
* @deprecated use {@link PooledByteBufAllocatorMetric#numHeapArenas()}.
*/
@Deprecated
public int numHeapArenas() {
return heapArenaMetrics.size();
}
/**
* Return the number of direct arenas.
*
* @deprecated use {@link PooledByteBufAllocatorMetric#numDirectArenas()}.
*/
@Deprecated
public int numDirectArenas() {
return directArenaMetrics.size();
}
/**
* Return a {@link List} of all heap {@link PoolArenaMetric}s that are provided by this pool.
*
* @deprecated use {@link PooledByteBufAllocatorMetric#heapArenas()}.
*/
@Deprecated
public List<PoolArenaMetric> heapArenas() {
return heapArenaMetrics;
}
/**
* Return a {@link List} of all direct {@link PoolArenaMetric}s that are provided by this pool.
*
* @deprecated use {@link PooledByteBufAllocatorMetric#directArenas()}.
*/
@Deprecated
public List<PoolArenaMetric> directArenas() {
return directArenaMetrics;
}
/**
* Return the number of thread local caches used by this {@link PooledByteBufAllocator}.
*
* @deprecated use {@link PooledByteBufAllocatorMetric#numThreadLocalCaches()}.
*/
@Deprecated
public int numThreadLocalCaches() {
PoolArena<?>[] arenas = heapArenas != null ? heapArenas : directArenas;
if (arenas == null) {
return 0;
}
int total = 0;
for (PoolArena<?> arena : arenas) {
total += arena.numThreadCaches.get();
}
return total;
}
/**
* Return the size of the tiny cache.
*
* @deprecated use {@link PooledByteBufAllocatorMetric#tinyCacheSize()}.
*/
@Deprecated
public int tinyCacheSize() {
return 0;
}
/**
* Return the size of the small cache.
*
* @deprecated use {@link PooledByteBufAllocatorMetric#smallCacheSize()}.
*/
@Deprecated
public int smallCacheSize() {
return smallCacheSize;
}
/**
* Return the size of the normal cache.
*
* @deprecated use {@link PooledByteBufAllocatorMetric#normalCacheSize()}.
*/
@Deprecated
public int normalCacheSize() {
return normalCacheSize;
}
/**
* Return the chunk size for an arena.
*
* @deprecated use {@link PooledByteBufAllocatorMetric#chunkSize()}.
*/
@Deprecated
public final int chunkSize() {
return chunkSize;
}
final long usedHeapMemory() {
return usedMemory(heapArenas);
}
final long usedDirectMemory() {
return usedMemory(directArenas);
}
private static long usedMemory(PoolArena<?>[] arenas) {
if (arenas == null) {
return -1;
}
long used = 0;
for (PoolArena<?> arena : arenas) {
used += arena.numActiveBytes();
if (used < 0) {
return Long.MAX_VALUE;
}
}
return used;
}
/**
* Returns the number of bytes of heap memory that is currently pinned to heap buffers allocated by a
* {@link ByteBufAllocator}, or {@code -1} if unknown.
* A buffer can pin more memory than its {@linkplain ByteBuf#capacity() capacity} might indicate,
* due to implementation details of the allocator.
*/
public final long pinnedHeapMemory() {
return pinnedMemory(heapArenas);
}
/**
* Returns the number of bytes of direct memory that is currently pinned to direct buffers allocated by a
* {@link ByteBufAllocator}, or {@code -1} if unknown.
* A buffer can pin more memory than its {@linkplain ByteBuf#capacity() capacity} might indicate,
* due to implementation details of the allocator.
*/
public final long pinnedDirectMemory() {
return pinnedMemory(directArenas);
}
private static long pinnedMemory(PoolArena<?>[] arenas) {
if (arenas == null) {
return -1;
}
long used = 0;
for (PoolArena<?> arena : arenas) {
used += arena.numPinnedBytes();
if (used < 0) {
return Long.MAX_VALUE;
}
}
return used;
}
final PoolThreadCache threadCache() {
PoolThreadCache cache = threadCache.get();
assert cache != null;
return cache;
}
/**
* Trim thread local cache for the current {@link Thread}, which will give back any cached memory that was not
* allocated frequently since the last trim operation.
*
* Returns {@code true} if a cache for the current {@link Thread} exists and so was trimmed, false otherwise.
*/
public boolean trimCurrentThreadCache() {
PoolThreadCache cache = threadCache.getIfExists();
if (cache != null) {
cache.trim();
return true;
}
return false;
}
/**
* Returns the status of the allocator (which contains all metrics) as a string. Be aware this may be expensive
* and so should not be called too frequently.
*/
public String dumpStats() {
int heapArenasLen = heapArenas == null ? 0 : heapArenas.length;
StringBuilder buf = new StringBuilder(512)
.append(heapArenasLen)
.append(" heap arena(s):")
.append(StringUtil.NEWLINE);
if (heapArenasLen > 0) {
for (PoolArena<byte[]> a: heapArenas) {
buf.append(a);
}
}
int directArenasLen = directArenas == null ? 0 : directArenas.length;
buf.append(directArenasLen)
.append(" direct arena(s):")
.append(StringUtil.NEWLINE);
if (directArenasLen > 0) {
for (PoolArena<ByteBuffer> a: directArenas) {
buf.append(a);
}
}
return buf.toString();
}
}
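
Usage sketch (not part of this commit): the snippet below shows how a caller might read the allocator's accounting and trim the calling thread's cache through the API defined above. The class name AllocatorStatsExample and the printed labels are illustrative only; the allocator methods used are either declared in this file or part of Netty's public allocator API.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public final class AllocatorStatsExample {
    public static void main(String[] args) {
        PooledByteBufAllocator alloc = PooledByteBufAllocator.DEFAULT;
        ByteBuf buf = alloc.directBuffer(4096);
        try {
            // usedDirectMemory() reflects the arenas' overall accounting, while
            // pinnedDirectMemory() only counts bytes backing buffers still in use.
            System.out.println("used direct:   " + alloc.metric().usedDirectMemory());
            System.out.println("pinned direct: " + alloc.pinnedDirectMemory());
        } finally {
            buf.release();
        }
        // Hand back cached memory that was not reused since the last trim.
        System.out.println("trimmed: " + alloc.trimCurrentThreadCache());
        // Full per-arena dump; potentially expensive, so not for hot paths.
        System.out.println(alloc.dumpStats());
    }
}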

View file

@ -0,0 +1,124 @@
/*
* Copyright 2017 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.internal.StringUtil;
import java.util.List;
/**
* Exposed metric for {@link PooledByteBufAllocator}.
*/
@SuppressWarnings("deprecation")
public final class PooledByteBufAllocatorMetric implements ByteBufAllocatorMetric {
private final PooledByteBufAllocator allocator;
PooledByteBufAllocatorMetric(PooledByteBufAllocator allocator) {
this.allocator = allocator;
}
/**
* Return the number of heap arenas.
*/
public int numHeapArenas() {
return allocator.numHeapArenas();
}
/**
* Return the number of direct arenas.
*/
public int numDirectArenas() {
return allocator.numDirectArenas();
}
/**
* Return a {@link List} of all heap {@link PoolArenaMetric}s that are provided by this pool.
*/
public List<PoolArenaMetric> heapArenas() {
return allocator.heapArenas();
}
/**
* Return a {@link List} of all direct {@link PoolArenaMetric}s that are provided by this pool.
*/
public List<PoolArenaMetric> directArenas() {
return allocator.directArenas();
}
/**
* Return the number of thread local caches used by this {@link PooledByteBufAllocator}.
*/
public int numThreadLocalCaches() {
return allocator.numThreadLocalCaches();
}
/**
* Return the size of the tiny cache.
*
* @deprecated Tiny caches have been merged into small caches.
*/
@Deprecated
public int tinyCacheSize() {
return allocator.tinyCacheSize();
}
/**
* Return the size of the small cache.
*/
public int smallCacheSize() {
return allocator.smallCacheSize();
}
/**
* Return the size of the normal cache.
*/
public int normalCacheSize() {
return allocator.normalCacheSize();
}
/**
* Return the chunk size for an arena.
*/
public int chunkSize() {
return allocator.chunkSize();
}
@Override
public long usedHeapMemory() {
return allocator.usedHeapMemory();
}
@Override
public long usedDirectMemory() {
return allocator.usedDirectMemory();
}
@Override
public String toString() {
StringBuilder sb = new StringBuilder(256);
sb.append(StringUtil.simpleClassName(this))
.append("(usedHeapMemory: ").append(usedHeapMemory())
.append("; usedDirectMemory: ").append(usedDirectMemory())
.append("; numHeapArenas: ").append(numHeapArenas())
.append("; numDirectArenas: ").append(numDirectArenas())
.append("; smallCacheSize: ").append(smallCacheSize())
.append("; normalCacheSize: ").append(normalCacheSize())
.append("; numThreadLocalCaches: ").append(numThreadLocalCaches())
.append("; chunkSize: ").append(chunkSize()).append(')');
return sb.toString();
}
}
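
Usage sketch (not part of this commit): reading the exposed metric object. MetricDumpExample is an illustrative name; the accessors are the ones declared above plus PoolArenaMetric, the type of the heapArenas()/directArenas() lists.

import io.netty.buffer.PoolArenaMetric;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.buffer.PooledByteBufAllocatorMetric;

public final class MetricDumpExample {
    public static void main(String[] args) {
        PooledByteBufAllocatorMetric metric = PooledByteBufAllocator.DEFAULT.metric();
        System.out.println("heap arenas:   " + metric.numHeapArenas());
        System.out.println("direct arenas: " + metric.numDirectArenas());
        System.out.println("thread caches: " + metric.numThreadLocalCaches());
        System.out.println("chunk size:    " + metric.chunkSize());
        for (PoolArenaMetric arena : metric.directArenas()) {
            // Each arena reports how many thread caches are currently bound to it.
            System.out.println("arena thread caches: " + arena.numThreadCaches());
        }
    }
}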

View file

@ -0,0 +1,313 @@
/*
* Copyright 2012 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.internal.ObjectPool;
import io.netty.util.internal.ObjectPool.Handle;
import io.netty.util.internal.ObjectPool.ObjectCreator;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
final class PooledDirectByteBuf extends PooledByteBuf<ByteBuffer> {
private static final ObjectPool<PooledDirectByteBuf> RECYCLER = ObjectPool.newPool(
new ObjectCreator<PooledDirectByteBuf>() {
@Override
public PooledDirectByteBuf newObject(Handle<PooledDirectByteBuf> handle) {
return new PooledDirectByteBuf(handle, 0);
}
});
static PooledDirectByteBuf newInstance(int maxCapacity) {
PooledDirectByteBuf buf = RECYCLER.get();
buf.reuse(maxCapacity);
return buf;
}
private PooledDirectByteBuf(Handle<PooledDirectByteBuf> recyclerHandle, int maxCapacity) {
super(recyclerHandle, maxCapacity);
}
@Override
protected ByteBuffer newInternalNioBuffer(ByteBuffer memory) {
return memory.duplicate();
}
@Override
public boolean isDirect() {
return true;
}
@Override
protected byte _getByte(int index) {
return memory.get(idx(index));
}
@Override
protected short _getShort(int index) {
return memory.getShort(idx(index));
}
@Override
protected short _getShortLE(int index) {
return ByteBufUtil.swapShort(_getShort(index));
}
@Override
protected int _getUnsignedMedium(int index) {
index = idx(index);
return (memory.get(index) & 0xff) << 16 |
(memory.get(index + 1) & 0xff) << 8 |
memory.get(index + 2) & 0xff;
}
@Override
protected int _getUnsignedMediumLE(int index) {
index = idx(index);
return memory.get(index) & 0xff |
(memory.get(index + 1) & 0xff) << 8 |
(memory.get(index + 2) & 0xff) << 16;
}
@Override
protected int _getInt(int index) {
return memory.getInt(idx(index));
}
@Override
protected int _getIntLE(int index) {
return ByteBufUtil.swapInt(_getInt(index));
}
@Override
protected long _getLong(int index) {
return memory.getLong(idx(index));
}
@Override
protected long _getLongLE(int index) {
return ByteBufUtil.swapLong(_getLong(index));
}
@Override
public ByteBuf getBytes(int index, ByteBuf dst, int dstIndex, int length) {
checkDstIndex(index, length, dstIndex, dst.capacity());
if (dst.hasArray()) {
getBytes(index, dst.array(), dst.arrayOffset() + dstIndex, length);
} else if (dst.nioBufferCount() > 0) {
for (ByteBuffer bb: dst.nioBuffers(dstIndex, length)) {
int bbLen = bb.remaining();
getBytes(index, bb);
index += bbLen;
}
} else {
dst.setBytes(dstIndex, this, index, length);
}
return this;
}
@Override
public ByteBuf getBytes(int index, byte[] dst, int dstIndex, int length) {
checkDstIndex(index, length, dstIndex, dst.length);
_internalNioBuffer(index, length, true).get(dst, dstIndex, length);
return this;
}
@Override
public ByteBuf readBytes(byte[] dst, int dstIndex, int length) {
checkDstIndex(length, dstIndex, dst.length);
_internalNioBuffer(readerIndex, length, false).get(dst, dstIndex, length);
readerIndex += length;
return this;
}
@Override
public ByteBuf getBytes(int index, ByteBuffer dst) {
dst.put(duplicateInternalNioBuffer(index, dst.remaining()));
return this;
}
@Override
public ByteBuf readBytes(ByteBuffer dst) {
int length = dst.remaining();
checkReadableBytes(length);
dst.put(_internalNioBuffer(readerIndex, length, false));
readerIndex += length;
return this;
}
@Override
public ByteBuf getBytes(int index, OutputStream out, int length) throws IOException {
getBytes(index, out, length, false);
return this;
}
private void getBytes(int index, OutputStream out, int length, boolean internal) throws IOException {
checkIndex(index, length);
if (length == 0) {
return;
}
ByteBufUtil.readBytes(alloc(), internal ? internalNioBuffer() : memory.duplicate(), idx(index), length, out);
}
@Override
public ByteBuf readBytes(OutputStream out, int length) throws IOException {
checkReadableBytes(length);
getBytes(readerIndex, out, length, true);
readerIndex += length;
return this;
}
@Override
protected void _setByte(int index, int value) {
memory.put(idx(index), (byte) value);
}
@Override
protected void _setShort(int index, int value) {
memory.putShort(idx(index), (short) value);
}
@Override
protected void _setShortLE(int index, int value) {
_setShort(index, ByteBufUtil.swapShort((short) value));
}
@Override
protected void _setMedium(int index, int value) {
index = idx(index);
memory.put(index, (byte) (value >>> 16));
memory.put(index + 1, (byte) (value >>> 8));
memory.put(index + 2, (byte) value);
}
@Override
protected void _setMediumLE(int index, int value) {
index = idx(index);
memory.put(index, (byte) value);
memory.put(index + 1, (byte) (value >>> 8));
memory.put(index + 2, (byte) (value >>> 16));
}
@Override
protected void _setInt(int index, int value) {
memory.putInt(idx(index), value);
}
@Override
protected void _setIntLE(int index, int value) {
_setInt(index, ByteBufUtil.swapInt(value));
}
@Override
protected void _setLong(int index, long value) {
memory.putLong(idx(index), value);
}
@Override
protected void _setLongLE(int index, long value) {
_setLong(index, ByteBufUtil.swapLong(value));
}
@Override
public ByteBuf setBytes(int index, ByteBuf src, int srcIndex, int length) {
checkSrcIndex(index, length, srcIndex, src.capacity());
if (src.hasArray()) {
setBytes(index, src.array(), src.arrayOffset() + srcIndex, length);
} else if (src.nioBufferCount() > 0) {
for (ByteBuffer bb: src.nioBuffers(srcIndex, length)) {
int bbLen = bb.remaining();
setBytes(index, bb);
index += bbLen;
}
} else {
src.getBytes(srcIndex, this, index, length);
}
return this;
}
@Override
public ByteBuf setBytes(int index, byte[] src, int srcIndex, int length) {
checkSrcIndex(index, length, srcIndex, src.length);
_internalNioBuffer(index, length, false).put(src, srcIndex, length);
return this;
}
@Override
public ByteBuf setBytes(int index, ByteBuffer src) {
int length = src.remaining();
checkIndex(index, length);
ByteBuffer tmpBuf = internalNioBuffer();
if (src == tmpBuf) {
src = src.duplicate();
}
index = idx(index);
tmpBuf.limit(index + length).position(index);
tmpBuf.put(src);
return this;
}
@Override
public int setBytes(int index, InputStream in, int length) throws IOException {
checkIndex(index, length);
byte[] tmp = ByteBufUtil.threadLocalTempArray(length);
int readBytes = in.read(tmp, 0, length);
if (readBytes <= 0) {
return readBytes;
}
ByteBuffer tmpBuf = internalNioBuffer();
tmpBuf.position(idx(index));
tmpBuf.put(tmp, 0, readBytes);
return readBytes;
}
@Override
public ByteBuf copy(int index, int length) {
checkIndex(index, length);
ByteBuf copy = alloc().directBuffer(length, maxCapacity());
return copy.writeBytes(this, index, length);
}
@Override
public boolean hasArray() {
return false;
}
@Override
public byte[] array() {
throw new UnsupportedOperationException("direct buffer");
}
@Override
public int arrayOffset() {
throw new UnsupportedOperationException("direct buffer");
}
@Override
public boolean hasMemoryAddress() {
return false;
}
@Override
public long memoryAddress() {
throw new UnsupportedOperationException();
}
}
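
Usage sketch (not part of this commit): PooledDirectByteBuf is package-private, so application code never instantiates it; pooled direct buffers are requested from the allocator, which returns this NIO-based variant or the Unsafe-based one depending on whether sun.misc.Unsafe is usable. DirectBufferExample is an illustrative name.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public final class DirectBufferExample {
    public static void main(String[] args) {
        ByteBuf buf = PooledByteBufAllocator.DEFAULT.directBuffer(64);
        try {
            buf.writeInt(0xCAFEBABE);
            System.out.println(Integer.toHexString(buf.readInt())); // cafebabe
            // Direct buffers have no backing byte[]; array() would throw, as seen above.
            System.out.println("direct: " + buf.isDirect() + ", hasArray: " + buf.hasArray());
        } finally {
            buf.release(); // hands the memory region back to the pool
        }
    }
}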

View file

@ -0,0 +1,378 @@
/*
* Copyright 2016 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.ByteProcessor;
import io.netty.util.internal.ObjectPool;
import io.netty.util.internal.ObjectPool.Handle;
import io.netty.util.internal.ObjectPool.ObjectCreator;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.GatheringByteChannel;
import java.nio.channels.ScatteringByteChannel;
final class PooledDuplicatedByteBuf extends AbstractPooledDerivedByteBuf {
private static final ObjectPool<PooledDuplicatedByteBuf> RECYCLER = ObjectPool.newPool(
new ObjectCreator<PooledDuplicatedByteBuf>() {
@Override
public PooledDuplicatedByteBuf newObject(Handle<PooledDuplicatedByteBuf> handle) {
return new PooledDuplicatedByteBuf(handle);
}
});
static PooledDuplicatedByteBuf newInstance(AbstractByteBuf unwrapped, ByteBuf wrapped,
int readerIndex, int writerIndex) {
final PooledDuplicatedByteBuf duplicate = RECYCLER.get();
duplicate.init(unwrapped, wrapped, readerIndex, writerIndex, unwrapped.maxCapacity());
duplicate.markReaderIndex();
duplicate.markWriterIndex();
return duplicate;
}
private PooledDuplicatedByteBuf(Handle<PooledDuplicatedByteBuf> handle) {
super(handle);
}
@Override
public int capacity() {
return unwrap().capacity();
}
@Override
public ByteBuf capacity(int newCapacity) {
unwrap().capacity(newCapacity);
return this;
}
@Override
public int arrayOffset() {
return unwrap().arrayOffset();
}
@Override
public long memoryAddress() {
return unwrap().memoryAddress();
}
@Override
public ByteBuffer nioBuffer(int index, int length) {
return unwrap().nioBuffer(index, length);
}
@Override
public ByteBuffer[] nioBuffers(int index, int length) {
return unwrap().nioBuffers(index, length);
}
@Override
public ByteBuf copy(int index, int length) {
return unwrap().copy(index, length);
}
@Override
public ByteBuf retainedSlice(int index, int length) {
return PooledSlicedByteBuf.newInstance(unwrap(), this, index, length);
}
@Override
public ByteBuf duplicate() {
return duplicate0().setIndex(readerIndex(), writerIndex());
}
@Override
public ByteBuf retainedDuplicate() {
return PooledDuplicatedByteBuf.newInstance(unwrap(), this, readerIndex(), writerIndex());
}
@Override
public byte getByte(int index) {
return unwrap().getByte(index);
}
@Override
protected byte _getByte(int index) {
return unwrap()._getByte(index);
}
@Override
public short getShort(int index) {
return unwrap().getShort(index);
}
@Override
protected short _getShort(int index) {
return unwrap()._getShort(index);
}
@Override
public short getShortLE(int index) {
return unwrap().getShortLE(index);
}
@Override
protected short _getShortLE(int index) {
return unwrap()._getShortLE(index);
}
@Override
public int getUnsignedMedium(int index) {
return unwrap().getUnsignedMedium(index);
}
@Override
protected int _getUnsignedMedium(int index) {
return unwrap()._getUnsignedMedium(index);
}
@Override
public int getUnsignedMediumLE(int index) {
return unwrap().getUnsignedMediumLE(index);
}
@Override
protected int _getUnsignedMediumLE(int index) {
return unwrap()._getUnsignedMediumLE(index);
}
@Override
public int getInt(int index) {
return unwrap().getInt(index);
}
@Override
protected int _getInt(int index) {
return unwrap()._getInt(index);
}
@Override
public int getIntLE(int index) {
return unwrap().getIntLE(index);
}
@Override
protected int _getIntLE(int index) {
return unwrap()._getIntLE(index);
}
@Override
public long getLong(int index) {
return unwrap().getLong(index);
}
@Override
protected long _getLong(int index) {
return unwrap()._getLong(index);
}
@Override
public long getLongLE(int index) {
return unwrap().getLongLE(index);
}
@Override
protected long _getLongLE(int index) {
return unwrap()._getLongLE(index);
}
@Override
public ByteBuf getBytes(int index, ByteBuf dst, int dstIndex, int length) {
unwrap().getBytes(index, dst, dstIndex, length);
return this;
}
@Override
public ByteBuf getBytes(int index, byte[] dst, int dstIndex, int length) {
unwrap().getBytes(index, dst, dstIndex, length);
return this;
}
@Override
public ByteBuf getBytes(int index, ByteBuffer dst) {
unwrap().getBytes(index, dst);
return this;
}
@Override
public ByteBuf setByte(int index, int value) {
unwrap().setByte(index, value);
return this;
}
@Override
protected void _setByte(int index, int value) {
unwrap()._setByte(index, value);
}
@Override
public ByteBuf setShort(int index, int value) {
unwrap().setShort(index, value);
return this;
}
@Override
protected void _setShort(int index, int value) {
unwrap()._setShort(index, value);
}
@Override
public ByteBuf setShortLE(int index, int value) {
unwrap().setShortLE(index, value);
return this;
}
@Override
protected void _setShortLE(int index, int value) {
unwrap()._setShortLE(index, value);
}
@Override
public ByteBuf setMedium(int index, int value) {
unwrap().setMedium(index, value);
return this;
}
@Override
protected void _setMedium(int index, int value) {
unwrap()._setMedium(index, value);
}
@Override
public ByteBuf setMediumLE(int index, int value) {
unwrap().setMediumLE(index, value);
return this;
}
@Override
protected void _setMediumLE(int index, int value) {
unwrap()._setMediumLE(index, value);
}
@Override
public ByteBuf setInt(int index, int value) {
unwrap().setInt(index, value);
return this;
}
@Override
protected void _setInt(int index, int value) {
unwrap()._setInt(index, value);
}
@Override
public ByteBuf setIntLE(int index, int value) {
unwrap().setIntLE(index, value);
return this;
}
@Override
protected void _setIntLE(int index, int value) {
unwrap()._setIntLE(index, value);
}
@Override
public ByteBuf setLong(int index, long value) {
unwrap().setLong(index, value);
return this;
}
@Override
protected void _setLong(int index, long value) {
unwrap()._setLong(index, value);
}
@Override
public ByteBuf setLongLE(int index, long value) {
unwrap().setLongLE(index, value);
return this;
}
@Override
protected void _setLongLE(int index, long value) {
unwrap().setLongLE(index, value);
}
@Override
public ByteBuf setBytes(int index, byte[] src, int srcIndex, int length) {
unwrap().setBytes(index, src, srcIndex, length);
return this;
}
@Override
public ByteBuf setBytes(int index, ByteBuf src, int srcIndex, int length) {
unwrap().setBytes(index, src, srcIndex, length);
return this;
}
@Override
public ByteBuf setBytes(int index, ByteBuffer src) {
unwrap().setBytes(index, src);
return this;
}
@Override
public ByteBuf getBytes(int index, OutputStream out, int length)
throws IOException {
unwrap().getBytes(index, out, length);
return this;
}
@Override
public int getBytes(int index, GatheringByteChannel out, int length)
throws IOException {
return unwrap().getBytes(index, out, length);
}
@Override
public int getBytes(int index, FileChannel out, long position, int length)
throws IOException {
return unwrap().getBytes(index, out, position, length);
}
@Override
public int setBytes(int index, InputStream in, int length)
throws IOException {
return unwrap().setBytes(index, in, length);
}
@Override
public int setBytes(int index, ScatteringByteChannel in, int length)
throws IOException {
return unwrap().setBytes(index, in, length);
}
@Override
public int setBytes(int index, FileChannel in, long position, int length)
throws IOException {
return unwrap().setBytes(index, in, position, length);
}
@Override
public int forEachByte(int index, int length, ByteProcessor processor) {
return unwrap().forEachByte(index, length, processor);
}
@Override
public int forEachByteDesc(int index, int length, ByteProcessor processor) {
return unwrap().forEachByteDesc(index, length, processor);
}
}
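
Usage sketch (not part of this commit): instances of PooledDuplicatedByteBuf are obtained through ByteBuf.retainedDuplicate() on a pooled buffer, as the newInstance(...) factory above suggests. The duplicate shares the parent's memory (every accessor delegates to unwrap()) but keeps its own reader/writer indexes; RetainedDuplicateExample is an illustrative name.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public final class RetainedDuplicateExample {
    public static void main(String[] args) {
        ByteBuf parent = PooledByteBufAllocator.DEFAULT.buffer(16);
        parent.writeLong(42L);
        ByteBuf dup = parent.retainedDuplicate(); // retains the parent so the memory stays valid
        System.out.println(dup.readLong());       // 42 -- advances only the duplicate's readerIndex
        System.out.println(parent.readerIndex()); // still 0
        dup.release();
        parent.release(); // memory goes back to the pool once both references are released
    }
}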

View file

@ -0,0 +1,254 @@
/*
* Copyright 2012 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.internal.ObjectPool;
import io.netty.util.internal.ObjectPool.Handle;
import io.netty.util.internal.ObjectPool.ObjectCreator;
import io.netty.util.internal.PlatformDependent;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
class PooledHeapByteBuf extends PooledByteBuf<byte[]> {
private static final ObjectPool<PooledHeapByteBuf> RECYCLER = ObjectPool.newPool(
new ObjectCreator<PooledHeapByteBuf>() {
@Override
public PooledHeapByteBuf newObject(Handle<PooledHeapByteBuf> handle) {
return new PooledHeapByteBuf(handle, 0);
}
});
static PooledHeapByteBuf newInstance(int maxCapacity) {
PooledHeapByteBuf buf = RECYCLER.get();
buf.reuse(maxCapacity);
return buf;
}
PooledHeapByteBuf(Handle<? extends PooledHeapByteBuf> recyclerHandle, int maxCapacity) {
super(recyclerHandle, maxCapacity);
}
@Override
public final boolean isDirect() {
return false;
}
@Override
protected byte _getByte(int index) {
return HeapByteBufUtil.getByte(memory, idx(index));
}
@Override
protected short _getShort(int index) {
return HeapByteBufUtil.getShort(memory, idx(index));
}
@Override
protected short _getShortLE(int index) {
return HeapByteBufUtil.getShortLE(memory, idx(index));
}
@Override
protected int _getUnsignedMedium(int index) {
return HeapByteBufUtil.getUnsignedMedium(memory, idx(index));
}
@Override
protected int _getUnsignedMediumLE(int index) {
return HeapByteBufUtil.getUnsignedMediumLE(memory, idx(index));
}
@Override
protected int _getInt(int index) {
return HeapByteBufUtil.getInt(memory, idx(index));
}
@Override
protected int _getIntLE(int index) {
return HeapByteBufUtil.getIntLE(memory, idx(index));
}
@Override
protected long _getLong(int index) {
return HeapByteBufUtil.getLong(memory, idx(index));
}
@Override
protected long _getLongLE(int index) {
return HeapByteBufUtil.getLongLE(memory, idx(index));
}
@Override
public final ByteBuf getBytes(int index, ByteBuf dst, int dstIndex, int length) {
checkDstIndex(index, length, dstIndex, dst.capacity());
if (dst.hasMemoryAddress()) {
PlatformDependent.copyMemory(memory, idx(index), dst.memoryAddress() + dstIndex, length);
} else if (dst.hasArray()) {
getBytes(index, dst.array(), dst.arrayOffset() + dstIndex, length);
} else {
dst.setBytes(dstIndex, memory, idx(index), length);
}
return this;
}
@Override
public final ByteBuf getBytes(int index, byte[] dst, int dstIndex, int length) {
checkDstIndex(index, length, dstIndex, dst.length);
System.arraycopy(memory, idx(index), dst, dstIndex, length);
return this;
}
@Override
public final ByteBuf getBytes(int index, ByteBuffer dst) {
int length = dst.remaining();
checkIndex(index, length);
dst.put(memory, idx(index), length);
return this;
}
@Override
public final ByteBuf getBytes(int index, OutputStream out, int length) throws IOException {
checkIndex(index, length);
out.write(memory, idx(index), length);
return this;
}
@Override
protected void _setByte(int index, int value) {
HeapByteBufUtil.setByte(memory, idx(index), value);
}
@Override
protected void _setShort(int index, int value) {
HeapByteBufUtil.setShort(memory, idx(index), value);
}
@Override
protected void _setShortLE(int index, int value) {
HeapByteBufUtil.setShortLE(memory, idx(index), value);
}
@Override
protected void _setMedium(int index, int value) {
HeapByteBufUtil.setMedium(memory, idx(index), value);
}
@Override
protected void _setMediumLE(int index, int value) {
HeapByteBufUtil.setMediumLE(memory, idx(index), value);
}
@Override
protected void _setInt(int index, int value) {
HeapByteBufUtil.setInt(memory, idx(index), value);
}
@Override
protected void _setIntLE(int index, int value) {
HeapByteBufUtil.setIntLE(memory, idx(index), value);
}
@Override
protected void _setLong(int index, long value) {
HeapByteBufUtil.setLong(memory, idx(index), value);
}
@Override
protected void _setLongLE(int index, long value) {
HeapByteBufUtil.setLongLE(memory, idx(index), value);
}
@Override
public final ByteBuf setBytes(int index, ByteBuf src, int srcIndex, int length) {
checkSrcIndex(index, length, srcIndex, src.capacity());
if (src.hasMemoryAddress()) {
PlatformDependent.copyMemory(src.memoryAddress() + srcIndex, memory, idx(index), length);
} else if (src.hasArray()) {
setBytes(index, src.array(), src.arrayOffset() + srcIndex, length);
} else {
src.getBytes(srcIndex, memory, idx(index), length);
}
return this;
}
@Override
public final ByteBuf setBytes(int index, byte[] src, int srcIndex, int length) {
checkSrcIndex(index, length, srcIndex, src.length);
System.arraycopy(src, srcIndex, memory, idx(index), length);
return this;
}
@Override
public final ByteBuf setBytes(int index, ByteBuffer src) {
int length = src.remaining();
checkIndex(index, length);
src.get(memory, idx(index), length);
return this;
}
@Override
public final int setBytes(int index, InputStream in, int length) throws IOException {
checkIndex(index, length);
return in.read(memory, idx(index), length);
}
@Override
public final ByteBuf copy(int index, int length) {
checkIndex(index, length);
ByteBuf copy = alloc().heapBuffer(length, maxCapacity());
return copy.writeBytes(memory, idx(index), length);
}
@Override
final ByteBuffer duplicateInternalNioBuffer(int index, int length) {
checkIndex(index, length);
return ByteBuffer.wrap(memory, idx(index), length).slice();
}
@Override
public final boolean hasArray() {
return true;
}
@Override
public final byte[] array() {
ensureAccessible();
return memory;
}
@Override
public final int arrayOffset() {
return offset;
}
@Override
public final boolean hasMemoryAddress() {
return false;
}
@Override
public final long memoryAddress() {
throw new UnsupportedOperationException();
}
@Override
protected final ByteBuffer newInternalNioBuffer(byte[] memory) {
return ByteBuffer.wrap(memory);
}
}
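
Usage sketch (not part of this commit): a pooled heap buffer is a window into a larger byte[] owned by a pool chunk, which is why array() must be combined with arrayOffset(). HeapBufferExample is an illustrative name.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public final class HeapBufferExample {
    public static void main(String[] args) {
        ByteBuf buf = PooledByteBufAllocator.DEFAULT.heapBuffer(32);
        try {
            buf.writeBytes(new byte[] {1, 2, 3});
            if (buf.hasArray()) {
                byte[] backing = buf.array();                      // the shared chunk array
                int start = buf.arrayOffset() + buf.readerIndex();
                System.out.println(backing.length >= buf.capacity()); // true: the chunk is larger than the buffer
                System.out.println(backing[start]);                   // 1
            }
        } finally {
            buf.release();
        }
    }
}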

View file

@ -0,0 +1,441 @@
/*
* Copyright 2016 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.ByteProcessor;
import io.netty.util.internal.ObjectPool;
import io.netty.util.internal.ObjectPool.Handle;
import io.netty.util.internal.ObjectPool.ObjectCreator;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.GatheringByteChannel;
import java.nio.channels.ScatteringByteChannel;
import static io.netty.buffer.AbstractUnpooledSlicedByteBuf.checkSliceOutOfBounds;
final class PooledSlicedByteBuf extends AbstractPooledDerivedByteBuf {
private static final ObjectPool<PooledSlicedByteBuf> RECYCLER = ObjectPool.newPool(
new ObjectCreator<PooledSlicedByteBuf>() {
@Override
public PooledSlicedByteBuf newObject(Handle<PooledSlicedByteBuf> handle) {
return new PooledSlicedByteBuf(handle);
}
});
static PooledSlicedByteBuf newInstance(AbstractByteBuf unwrapped, ByteBuf wrapped,
int index, int length) {
checkSliceOutOfBounds(index, length, unwrapped);
return newInstance0(unwrapped, wrapped, index, length);
}
private static PooledSlicedByteBuf newInstance0(AbstractByteBuf unwrapped, ByteBuf wrapped,
int adjustment, int length) {
final PooledSlicedByteBuf slice = RECYCLER.get();
slice.init(unwrapped, wrapped, 0, length, length);
slice.discardMarks();
slice.adjustment = adjustment;
return slice;
}
int adjustment;
private PooledSlicedByteBuf(Handle<PooledSlicedByteBuf> handle) {
super(handle);
}
@Override
public int capacity() {
return maxCapacity();
}
@Override
public ByteBuf capacity(int newCapacity) {
throw new UnsupportedOperationException("sliced buffer");
}
@Override
public int arrayOffset() {
return idx(unwrap().arrayOffset());
}
@Override
public long memoryAddress() {
return unwrap().memoryAddress() + adjustment;
}
@Override
public ByteBuffer nioBuffer(int index, int length) {
checkIndex0(index, length);
return unwrap().nioBuffer(idx(index), length);
}
@Override
public ByteBuffer[] nioBuffers(int index, int length) {
checkIndex0(index, length);
return unwrap().nioBuffers(idx(index), length);
}
@Override
public ByteBuf copy(int index, int length) {
checkIndex0(index, length);
return unwrap().copy(idx(index), length);
}
@Override
public ByteBuf slice(int index, int length) {
checkIndex0(index, length);
return super.slice(idx(index), length);
}
@Override
public ByteBuf retainedSlice(int index, int length) {
checkIndex0(index, length);
return PooledSlicedByteBuf.newInstance0(unwrap(), this, idx(index), length);
}
@Override
public ByteBuf duplicate() {
return duplicate0().setIndex(idx(readerIndex()), idx(writerIndex()));
}
@Override
public ByteBuf retainedDuplicate() {
return PooledDuplicatedByteBuf.newInstance(unwrap(), this, idx(readerIndex()), idx(writerIndex()));
}
@Override
public byte getByte(int index) {
checkIndex0(index, 1);
return unwrap().getByte(idx(index));
}
@Override
protected byte _getByte(int index) {
return unwrap()._getByte(idx(index));
}
@Override
public short getShort(int index) {
checkIndex0(index, 2);
return unwrap().getShort(idx(index));
}
@Override
protected short _getShort(int index) {
return unwrap()._getShort(idx(index));
}
@Override
public short getShortLE(int index) {
checkIndex0(index, 2);
return unwrap().getShortLE(idx(index));
}
@Override
protected short _getShortLE(int index) {
return unwrap()._getShortLE(idx(index));
}
@Override
public int getUnsignedMedium(int index) {
checkIndex0(index, 3);
return unwrap().getUnsignedMedium(idx(index));
}
@Override
protected int _getUnsignedMedium(int index) {
return unwrap()._getUnsignedMedium(idx(index));
}
@Override
public int getUnsignedMediumLE(int index) {
checkIndex0(index, 3);
return unwrap().getUnsignedMediumLE(idx(index));
}
@Override
protected int _getUnsignedMediumLE(int index) {
return unwrap()._getUnsignedMediumLE(idx(index));
}
@Override
public int getInt(int index) {
checkIndex0(index, 4);
return unwrap().getInt(idx(index));
}
@Override
protected int _getInt(int index) {
return unwrap()._getInt(idx(index));
}
@Override
public int getIntLE(int index) {
checkIndex0(index, 4);
return unwrap().getIntLE(idx(index));
}
@Override
protected int _getIntLE(int index) {
return unwrap()._getIntLE(idx(index));
}
@Override
public long getLong(int index) {
checkIndex0(index, 8);
return unwrap().getLong(idx(index));
}
@Override
protected long _getLong(int index) {
return unwrap()._getLong(idx(index));
}
@Override
public long getLongLE(int index) {
checkIndex0(index, 8);
return unwrap().getLongLE(idx(index));
}
@Override
protected long _getLongLE(int index) {
return unwrap()._getLongLE(idx(index));
}
@Override
public ByteBuf getBytes(int index, ByteBuf dst, int dstIndex, int length) {
checkIndex0(index, length);
unwrap().getBytes(idx(index), dst, dstIndex, length);
return this;
}
@Override
public ByteBuf getBytes(int index, byte[] dst, int dstIndex, int length) {
checkIndex0(index, length);
unwrap().getBytes(idx(index), dst, dstIndex, length);
return this;
}
@Override
public ByteBuf getBytes(int index, ByteBuffer dst) {
checkIndex0(index, dst.remaining());
unwrap().getBytes(idx(index), dst);
return this;
}
@Override
public ByteBuf setByte(int index, int value) {
checkIndex0(index, 1);
unwrap().setByte(idx(index), value);
return this;
}
@Override
protected void _setByte(int index, int value) {
unwrap()._setByte(idx(index), value);
}
@Override
public ByteBuf setShort(int index, int value) {
checkIndex0(index, 2);
unwrap().setShort(idx(index), value);
return this;
}
@Override
protected void _setShort(int index, int value) {
unwrap()._setShort(idx(index), value);
}
@Override
public ByteBuf setShortLE(int index, int value) {
checkIndex0(index, 2);
unwrap().setShortLE(idx(index), value);
return this;
}
@Override
protected void _setShortLE(int index, int value) {
unwrap()._setShortLE(idx(index), value);
}
@Override
public ByteBuf setMedium(int index, int value) {
checkIndex0(index, 3);
unwrap().setMedium(idx(index), value);
return this;
}
@Override
protected void _setMedium(int index, int value) {
unwrap()._setMedium(idx(index), value);
}
@Override
public ByteBuf setMediumLE(int index, int value) {
checkIndex0(index, 3);
unwrap().setMediumLE(idx(index), value);
return this;
}
@Override
protected void _setMediumLE(int index, int value) {
unwrap()._setMediumLE(idx(index), value);
}
@Override
public ByteBuf setInt(int index, int value) {
checkIndex0(index, 4);
unwrap().setInt(idx(index), value);
return this;
}
@Override
protected void _setInt(int index, int value) {
unwrap()._setInt(idx(index), value);
}
@Override
public ByteBuf setIntLE(int index, int value) {
checkIndex0(index, 4);
unwrap().setIntLE(idx(index), value);
return this;
}
@Override
protected void _setIntLE(int index, int value) {
unwrap()._setIntLE(idx(index), value);
}
@Override
public ByteBuf setLong(int index, long value) {
checkIndex0(index, 8);
unwrap().setLong(idx(index), value);
return this;
}
@Override
protected void _setLong(int index, long value) {
unwrap()._setLong(idx(index), value);
}
@Override
public ByteBuf setLongLE(int index, long value) {
checkIndex0(index, 8);
unwrap().setLongLE(idx(index), value);
return this;
}
@Override
protected void _setLongLE(int index, long value) {
unwrap().setLongLE(idx(index), value);
}
@Override
public ByteBuf setBytes(int index, byte[] src, int srcIndex, int length) {
checkIndex0(index, length);
unwrap().setBytes(idx(index), src, srcIndex, length);
return this;
}
@Override
public ByteBuf setBytes(int index, ByteBuf src, int srcIndex, int length) {
checkIndex0(index, length);
unwrap().setBytes(idx(index), src, srcIndex, length);
return this;
}
@Override
public ByteBuf setBytes(int index, ByteBuffer src) {
checkIndex0(index, src.remaining());
unwrap().setBytes(idx(index), src);
return this;
}
@Override
public ByteBuf getBytes(int index, OutputStream out, int length)
throws IOException {
checkIndex0(index, length);
unwrap().getBytes(idx(index), out, length);
return this;
}
@Override
public int getBytes(int index, GatheringByteChannel out, int length)
throws IOException {
checkIndex0(index, length);
return unwrap().getBytes(idx(index), out, length);
}
@Override
public int getBytes(int index, FileChannel out, long position, int length)
throws IOException {
checkIndex0(index, length);
return unwrap().getBytes(idx(index), out, position, length);
}
@Override
public int setBytes(int index, InputStream in, int length)
throws IOException {
checkIndex0(index, length);
return unwrap().setBytes(idx(index), in, length);
}
@Override
public int setBytes(int index, ScatteringByteChannel in, int length)
throws IOException {
checkIndex0(index, length);
return unwrap().setBytes(idx(index), in, length);
}
@Override
public int setBytes(int index, FileChannel in, long position, int length)
throws IOException {
checkIndex0(index, length);
return unwrap().setBytes(idx(index), in, position, length);
}
@Override
public int forEachByte(int index, int length, ByteProcessor processor) {
checkIndex0(index, length);
int ret = unwrap().forEachByte(idx(index), length, processor);
if (ret < adjustment) {
return -1;
}
return ret - adjustment;
}
@Override
public int forEachByteDesc(int index, int length, ByteProcessor processor) {
checkIndex0(index, length);
int ret = unwrap().forEachByteDesc(idx(index), length, processor);
if (ret < adjustment) {
return -1;
}
return ret - adjustment;
}
private int idx(int index) {
return index + adjustment;
}
}
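
Usage sketch (not part of this commit): PooledSlicedByteBuf is produced by ByteBuf.retainedSlice(index, length) on a pooled buffer; every access is shifted by the adjustment field, as idx(int) above shows, and writes go through to the parent. RetainedSliceExample is an illustrative name.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public final class RetainedSliceExample {
    public static void main(String[] args) {
        ByteBuf parent = PooledByteBufAllocator.DEFAULT.buffer(16);
        parent.writeBytes(new byte[] {10, 20, 30, 40});
        ByteBuf slice = parent.retainedSlice(1, 2); // window over bytes {20, 30}
        System.out.println(slice.capacity());       // 2 -- a slice's capacity is fixed
        System.out.println(slice.getByte(0));       // 20, translated to parent index 1
        slice.setByte(1, 99);                       // writes through to the shared memory
        System.out.println(parent.getByte(2));      // 99
        slice.release();
        parent.release();
    }
}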

View file

@ -0,0 +1,273 @@
/*
* Copyright 2013 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.internal.ObjectPool;
import io.netty.util.internal.ObjectPool.Handle;
import io.netty.util.internal.ObjectPool.ObjectCreator;
import io.netty.util.internal.PlatformDependent;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
final class PooledUnsafeDirectByteBuf extends PooledByteBuf<ByteBuffer> {
private static final ObjectPool<PooledUnsafeDirectByteBuf> RECYCLER = ObjectPool.newPool(
new ObjectCreator<PooledUnsafeDirectByteBuf>() {
@Override
public PooledUnsafeDirectByteBuf newObject(Handle<PooledUnsafeDirectByteBuf> handle) {
return new PooledUnsafeDirectByteBuf(handle, 0);
}
});
static PooledUnsafeDirectByteBuf newInstance(int maxCapacity) {
PooledUnsafeDirectByteBuf buf = RECYCLER.get();
buf.reuse(maxCapacity);
return buf;
}
private long memoryAddress;
private PooledUnsafeDirectByteBuf(Handle<PooledUnsafeDirectByteBuf> recyclerHandle, int maxCapacity) {
super(recyclerHandle, maxCapacity);
}
@Override
void init(PoolChunk<ByteBuffer> chunk, ByteBuffer nioBuffer,
long handle, int offset, int length, int maxLength, PoolThreadCache cache) {
super.init(chunk, nioBuffer, handle, offset, length, maxLength, cache);
initMemoryAddress();
}
@Override
void initUnpooled(PoolChunk<ByteBuffer> chunk, int length) {
super.initUnpooled(chunk, length);
initMemoryAddress();
}
private void initMemoryAddress() {
memoryAddress = PlatformDependent.directBufferAddress(memory) + offset;
}
@Override
protected ByteBuffer newInternalNioBuffer(ByteBuffer memory) {
return memory.duplicate();
}
@Override
public boolean isDirect() {
return true;
}
@Override
protected byte _getByte(int index) {
return UnsafeByteBufUtil.getByte(addr(index));
}
@Override
protected short _getShort(int index) {
return UnsafeByteBufUtil.getShort(addr(index));
}
@Override
protected short _getShortLE(int index) {
return UnsafeByteBufUtil.getShortLE(addr(index));
}
@Override
protected int _getUnsignedMedium(int index) {
return UnsafeByteBufUtil.getUnsignedMedium(addr(index));
}
@Override
protected int _getUnsignedMediumLE(int index) {
return UnsafeByteBufUtil.getUnsignedMediumLE(addr(index));
}
@Override
protected int _getInt(int index) {
return UnsafeByteBufUtil.getInt(addr(index));
}
@Override
protected int _getIntLE(int index) {
return UnsafeByteBufUtil.getIntLE(addr(index));
}
@Override
protected long _getLong(int index) {
return UnsafeByteBufUtil.getLong(addr(index));
}
@Override
protected long _getLongLE(int index) {
return UnsafeByteBufUtil.getLongLE(addr(index));
}
@Override
public ByteBuf getBytes(int index, ByteBuf dst, int dstIndex, int length) {
UnsafeByteBufUtil.getBytes(this, addr(index), index, dst, dstIndex, length);
return this;
}
@Override
public ByteBuf getBytes(int index, byte[] dst, int dstIndex, int length) {
UnsafeByteBufUtil.getBytes(this, addr(index), index, dst, dstIndex, length);
return this;
}
@Override
public ByteBuf getBytes(int index, ByteBuffer dst) {
UnsafeByteBufUtil.getBytes(this, addr(index), index, dst);
return this;
}
@Override
public ByteBuf getBytes(int index, OutputStream out, int length) throws IOException {
UnsafeByteBufUtil.getBytes(this, addr(index), index, out, length);
return this;
}
@Override
protected void _setByte(int index, int value) {
UnsafeByteBufUtil.setByte(addr(index), (byte) value);
}
@Override
protected void _setShort(int index, int value) {
UnsafeByteBufUtil.setShort(addr(index), value);
}
@Override
protected void _setShortLE(int index, int value) {
UnsafeByteBufUtil.setShortLE(addr(index), value);
}
@Override
protected void _setMedium(int index, int value) {
UnsafeByteBufUtil.setMedium(addr(index), value);
}
@Override
protected void _setMediumLE(int index, int value) {
UnsafeByteBufUtil.setMediumLE(addr(index), value);
}
@Override
protected void _setInt(int index, int value) {
UnsafeByteBufUtil.setInt(addr(index), value);
}
@Override
protected void _setIntLE(int index, int value) {
UnsafeByteBufUtil.setIntLE(addr(index), value);
}
@Override
protected void _setLong(int index, long value) {
UnsafeByteBufUtil.setLong(addr(index), value);
}
@Override
protected void _setLongLE(int index, long value) {
UnsafeByteBufUtil.setLongLE(addr(index), value);
}
@Override
public ByteBuf setBytes(int index, ByteBuf src, int srcIndex, int length) {
UnsafeByteBufUtil.setBytes(this, addr(index), index, src, srcIndex, length);
return this;
}
@Override
public ByteBuf setBytes(int index, byte[] src, int srcIndex, int length) {
UnsafeByteBufUtil.setBytes(this, addr(index), index, src, srcIndex, length);
return this;
}
@Override
public ByteBuf setBytes(int index, ByteBuffer src) {
UnsafeByteBufUtil.setBytes(this, addr(index), index, src);
return this;
}
@Override
public int setBytes(int index, InputStream in, int length) throws IOException {
return UnsafeByteBufUtil.setBytes(this, addr(index), index, in, length);
}
@Override
public ByteBuf copy(int index, int length) {
return UnsafeByteBufUtil.copy(this, addr(index), index, length);
}
@Override
public boolean hasArray() {
return false;
}
@Override
public byte[] array() {
throw new UnsupportedOperationException("direct buffer");
}
@Override
public int arrayOffset() {
throw new UnsupportedOperationException("direct buffer");
}
@Override
public boolean hasMemoryAddress() {
return true;
}
@Override
public long memoryAddress() {
ensureAccessible();
return memoryAddress;
}
private long addr(int index) {
return memoryAddress + index;
}
@Override
protected SwappedByteBuf newSwappedByteBuf() {
if (PlatformDependent.isUnaligned()) {
// Only use if unaligned access is supported otherwise there is no gain.
return new UnsafeDirectSwappedByteBuf(this);
}
return super.newSwappedByteBuf();
}
@Override
public ByteBuf setZero(int index, int length) {
checkIndex(index, length);
UnsafeByteBufUtil.setZero(addr(index), length);
return this;
}
@Override
public ByteBuf writeZero(int length) {
ensureWritable(length);
int wIndex = writerIndex;
UnsafeByteBufUtil.setZero(addr(wIndex), length);
writerIndex = wIndex + length;
return this;
}
}
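
Usage sketch (not part of this commit): this Unsafe-based variant is only selected when sun.misc.Unsafe is usable, so callers that want the raw native address must guard with hasMemoryAddress() first. MemoryAddressExample is an illustrative name.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public final class MemoryAddressExample {
    public static void main(String[] args) {
        ByteBuf buf = PooledByteBufAllocator.DEFAULT.directBuffer(64);
        try {
            if (buf.hasMemoryAddress()) {
                // Base address of this buffer's region inside the pooled chunk.
                System.out.println("native address: 0x" + Long.toHexString(buf.memoryAddress()));
            } else {
                System.out.println("Unsafe not available; falling back to ByteBuffer access");
            }
        } finally {
            buf.release();
        }
    }
}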

View file

@ -0,0 +1,166 @@
/*
* Copyright 2015 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.internal.ObjectPool;
import io.netty.util.internal.ObjectPool.Handle;
import io.netty.util.internal.ObjectPool.ObjectCreator;
import io.netty.util.internal.PlatformDependent;
final class PooledUnsafeHeapByteBuf extends PooledHeapByteBuf {
private static final ObjectPool<PooledUnsafeHeapByteBuf> RECYCLER = ObjectPool.newPool(
new ObjectCreator<PooledUnsafeHeapByteBuf>() {
@Override
public PooledUnsafeHeapByteBuf newObject(Handle<PooledUnsafeHeapByteBuf> handle) {
return new PooledUnsafeHeapByteBuf(handle, 0);
}
});
static PooledUnsafeHeapByteBuf newUnsafeInstance(int maxCapacity) {
PooledUnsafeHeapByteBuf buf = RECYCLER.get();
buf.reuse(maxCapacity);
return buf;
}
private PooledUnsafeHeapByteBuf(Handle<PooledUnsafeHeapByteBuf> recyclerHandle, int maxCapacity) {
super(recyclerHandle, maxCapacity);
}
@Override
protected byte _getByte(int index) {
return UnsafeByteBufUtil.getByte(memory, idx(index));
}
@Override
protected short _getShort(int index) {
return UnsafeByteBufUtil.getShort(memory, idx(index));
}
@Override
protected short _getShortLE(int index) {
return UnsafeByteBufUtil.getShortLE(memory, idx(index));
}
@Override
protected int _getUnsignedMedium(int index) {
return UnsafeByteBufUtil.getUnsignedMedium(memory, idx(index));
}
@Override
protected int _getUnsignedMediumLE(int index) {
return UnsafeByteBufUtil.getUnsignedMediumLE(memory, idx(index));
}
@Override
protected int _getInt(int index) {
return UnsafeByteBufUtil.getInt(memory, idx(index));
}
@Override
protected int _getIntLE(int index) {
return UnsafeByteBufUtil.getIntLE(memory, idx(index));
}
@Override
protected long _getLong(int index) {
return UnsafeByteBufUtil.getLong(memory, idx(index));
}
@Override
protected long _getLongLE(int index) {
return UnsafeByteBufUtil.getLongLE(memory, idx(index));
}
@Override
protected void _setByte(int index, int value) {
UnsafeByteBufUtil.setByte(memory, idx(index), value);
}
@Override
protected void _setShort(int index, int value) {
UnsafeByteBufUtil.setShort(memory, idx(index), value);
}
@Override
protected void _setShortLE(int index, int value) {
UnsafeByteBufUtil.setShortLE(memory, idx(index), value);
}
@Override
protected void _setMedium(int index, int value) {
UnsafeByteBufUtil.setMedium(memory, idx(index), value);
}
@Override
protected void _setMediumLE(int index, int value) {
UnsafeByteBufUtil.setMediumLE(memory, idx(index), value);
}
@Override
protected void _setInt(int index, int value) {
UnsafeByteBufUtil.setInt(memory, idx(index), value);
}
@Override
protected void _setIntLE(int index, int value) {
UnsafeByteBufUtil.setIntLE(memory, idx(index), value);
}
@Override
protected void _setLong(int index, long value) {
UnsafeByteBufUtil.setLong(memory, idx(index), value);
}
@Override
protected void _setLongLE(int index, long value) {
UnsafeByteBufUtil.setLongLE(memory, idx(index), value);
}
@Override
public ByteBuf setZero(int index, int length) {
if (PlatformDependent.javaVersion() >= 7) {
checkIndex(index, length);
// Only do on java7+ as the needed Unsafe call was only added there.
UnsafeByteBufUtil.setZero(memory, idx(index), length);
return this;
}
return super.setZero(index, length);
}
@Override
public ByteBuf writeZero(int length) {
if (PlatformDependent.javaVersion() >= 7) {
// Only do on java7+ as the needed Unsafe call was only added there.
ensureWritable(length);
int wIndex = writerIndex;
UnsafeByteBufUtil.setZero(memory, idx(wIndex), length);
writerIndex = wIndex + length;
return this;
}
return super.writeZero(length);
}
@Override
@Deprecated
protected SwappedByteBuf newSwappedByteBuf() {
if (PlatformDependent.isUnaligned()) {
// Only use if unaligned access is supported otherwise there is no gain.
return new UnsafeHeapSwappedByteBuf(this);
}
return super.newSwappedByteBuf();
}
}

View file

@ -0,0 +1,430 @@
/*
* Copyright 2012 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.ByteProcessor;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.ReadOnlyBufferException;
import java.nio.channels.FileChannel;
import java.nio.channels.GatheringByteChannel;
import java.nio.channels.ScatteringByteChannel;
/**
* A derived buffer which forbids any write requests to its parent. It is
* recommended to use {@link Unpooled#unmodifiableBuffer(ByteBuf)}
* instead of calling the constructor explicitly.
*
* @deprecated Do not use.
*/
@Deprecated
public class ReadOnlyByteBuf extends AbstractDerivedByteBuf {
private final ByteBuf buffer;
public ReadOnlyByteBuf(ByteBuf buffer) {
super(buffer.maxCapacity());
if (buffer instanceof ReadOnlyByteBuf || buffer instanceof DuplicatedByteBuf) {
this.buffer = buffer.unwrap();
} else {
this.buffer = buffer;
}
setIndex(buffer.readerIndex(), buffer.writerIndex());
}
@Override
public boolean isReadOnly() {
return true;
}
@Override
public boolean isWritable() {
return false;
}
@Override
public boolean isWritable(int numBytes) {
return false;
}
@Override
public int ensureWritable(int minWritableBytes, boolean force) {
return 1;
}
@Override
public ByteBuf ensureWritable(int minWritableBytes) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf unwrap() {
return buffer;
}
@Override
public ByteBufAllocator alloc() {
return unwrap().alloc();
}
@Override
@Deprecated
public ByteOrder order() {
return unwrap().order();
}
@Override
public boolean isDirect() {
return unwrap().isDirect();
}
@Override
public boolean hasArray() {
return false;
}
@Override
public byte[] array() {
throw new ReadOnlyBufferException();
}
@Override
public int arrayOffset() {
throw new ReadOnlyBufferException();
}
@Override
public boolean hasMemoryAddress() {
return unwrap().hasMemoryAddress();
}
@Override
public long memoryAddress() {
return unwrap().memoryAddress();
}
@Override
public ByteBuf discardReadBytes() {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setBytes(int index, ByteBuf src, int srcIndex, int length) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setBytes(int index, byte[] src, int srcIndex, int length) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setBytes(int index, ByteBuffer src) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setByte(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setByte(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setShort(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setShort(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setShortLE(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setShortLE(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setMedium(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setMedium(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setMediumLE(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setMediumLE(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setInt(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setInt(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setIntLE(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setIntLE(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setLong(int index, long value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setLong(int index, long value) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setLongLE(int index, long value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setLongLE(int index, long value) {
throw new ReadOnlyBufferException();
}
@Override
public int setBytes(int index, InputStream in, int length) {
throw new ReadOnlyBufferException();
}
@Override
public int setBytes(int index, ScatteringByteChannel in, int length) {
throw new ReadOnlyBufferException();
}
@Override
public int setBytes(int index, FileChannel in, long position, int length) {
throw new ReadOnlyBufferException();
}
@Override
public int getBytes(int index, GatheringByteChannel out, int length)
throws IOException {
return unwrap().getBytes(index, out, length);
}
@Override
public int getBytes(int index, FileChannel out, long position, int length)
throws IOException {
return unwrap().getBytes(index, out, position, length);
}
@Override
public ByteBuf getBytes(int index, OutputStream out, int length)
throws IOException {
unwrap().getBytes(index, out, length);
return this;
}
@Override
public ByteBuf getBytes(int index, byte[] dst, int dstIndex, int length) {
unwrap().getBytes(index, dst, dstIndex, length);
return this;
}
@Override
public ByteBuf getBytes(int index, ByteBuf dst, int dstIndex, int length) {
unwrap().getBytes(index, dst, dstIndex, length);
return this;
}
@Override
public ByteBuf getBytes(int index, ByteBuffer dst) {
unwrap().getBytes(index, dst);
return this;
}
@Override
public ByteBuf duplicate() {
return new ReadOnlyByteBuf(this);
}
@Override
public ByteBuf copy(int index, int length) {
return unwrap().copy(index, length);
}
@Override
public ByteBuf slice(int index, int length) {
return Unpooled.unmodifiableBuffer(unwrap().slice(index, length));
}
@Override
public byte getByte(int index) {
return unwrap().getByte(index);
}
@Override
protected byte _getByte(int index) {
return unwrap().getByte(index);
}
@Override
public short getShort(int index) {
return unwrap().getShort(index);
}
@Override
protected short _getShort(int index) {
return unwrap().getShort(index);
}
@Override
public short getShortLE(int index) {
return unwrap().getShortLE(index);
}
@Override
protected short _getShortLE(int index) {
return unwrap().getShortLE(index);
}
@Override
public int getUnsignedMedium(int index) {
return unwrap().getUnsignedMedium(index);
}
@Override
protected int _getUnsignedMedium(int index) {
return unwrap().getUnsignedMedium(index);
}
@Override
public int getUnsignedMediumLE(int index) {
return unwrap().getUnsignedMediumLE(index);
}
@Override
protected int _getUnsignedMediumLE(int index) {
return unwrap().getUnsignedMediumLE(index);
}
@Override
public int getInt(int index) {
return unwrap().getInt(index);
}
@Override
protected int _getInt(int index) {
return unwrap().getInt(index);
}
@Override
public int getIntLE(int index) {
return unwrap().getIntLE(index);
}
@Override
protected int _getIntLE(int index) {
return unwrap().getIntLE(index);
}
@Override
public long getLong(int index) {
return unwrap().getLong(index);
}
@Override
protected long _getLong(int index) {
return unwrap().getLong(index);
}
@Override
public long getLongLE(int index) {
return unwrap().getLongLE(index);
}
@Override
protected long _getLongLE(int index) {
return unwrap().getLongLE(index);
}
@Override
public int nioBufferCount() {
return unwrap().nioBufferCount();
}
@Override
public ByteBuffer nioBuffer(int index, int length) {
return unwrap().nioBuffer(index, length).asReadOnlyBuffer();
}
@Override
public ByteBuffer[] nioBuffers(int index, int length) {
return unwrap().nioBuffers(index, length);
}
@Override
public int forEachByte(int index, int length, ByteProcessor processor) {
return unwrap().forEachByte(index, length, processor);
}
@Override
public int forEachByteDesc(int index, int length, ByteProcessor processor) {
return unwrap().forEachByteDesc(index, length, processor);
}
@Override
public int capacity() {
return unwrap().capacity();
}
@Override
public ByteBuf capacity(int newCapacity) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf asReadOnly() {
return this;
}
}

View file

@ -0,0 +1,485 @@
/*
* Copyright 2013 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.internal.StringUtil;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.ReadOnlyBufferException;
import java.nio.channels.FileChannel;
import java.nio.channels.GatheringByteChannel;
import java.nio.channels.ScatteringByteChannel;
/**
* Read-only ByteBuf which wraps a read-only ByteBuffer.
*/
class ReadOnlyByteBufferBuf extends AbstractReferenceCountedByteBuf {
protected final ByteBuffer buffer;
private final ByteBufAllocator allocator;
private ByteBuffer tmpNioBuf;
ReadOnlyByteBufferBuf(ByteBufAllocator allocator, ByteBuffer buffer) {
super(buffer.remaining());
if (!buffer.isReadOnly()) {
throw new IllegalArgumentException("must be a readonly buffer: " + StringUtil.simpleClassName(buffer));
}
this.allocator = allocator;
this.buffer = buffer.slice().order(ByteOrder.BIG_ENDIAN);
writerIndex(this.buffer.limit());
}
@Override
protected void deallocate() { }
@Override
public boolean isWritable() {
return false;
}
@Override
public boolean isWritable(int numBytes) {
return false;
}
@Override
public ByteBuf ensureWritable(int minWritableBytes) {
throw new ReadOnlyBufferException();
}
@Override
public int ensureWritable(int minWritableBytes, boolean force) {
return 1;
}
@Override
public byte getByte(int index) {
ensureAccessible();
return _getByte(index);
}
@Override
protected byte _getByte(int index) {
return buffer.get(index);
}
@Override
public short getShort(int index) {
ensureAccessible();
return _getShort(index);
}
@Override
protected short _getShort(int index) {
return buffer.getShort(index);
}
@Override
public short getShortLE(int index) {
ensureAccessible();
return _getShortLE(index);
}
@Override
protected short _getShortLE(int index) {
return ByteBufUtil.swapShort(buffer.getShort(index));
}
@Override
public int getUnsignedMedium(int index) {
ensureAccessible();
return _getUnsignedMedium(index);
}
@Override
protected int _getUnsignedMedium(int index) {
return (getByte(index) & 0xff) << 16 |
(getByte(index + 1) & 0xff) << 8 |
getByte(index + 2) & 0xff;
}
@Override
public int getUnsignedMediumLE(int index) {
ensureAccessible();
return _getUnsignedMediumLE(index);
}
@Override
protected int _getUnsignedMediumLE(int index) {
return getByte(index) & 0xff |
(getByte(index + 1) & 0xff) << 8 |
(getByte(index + 2) & 0xff) << 16;
}
@Override
public int getInt(int index) {
ensureAccessible();
return _getInt(index);
}
@Override
protected int _getInt(int index) {
return buffer.getInt(index);
}
@Override
public int getIntLE(int index) {
ensureAccessible();
return _getIntLE(index);
}
@Override
protected int _getIntLE(int index) {
return ByteBufUtil.swapInt(buffer.getInt(index));
}
@Override
public long getLong(int index) {
ensureAccessible();
return _getLong(index);
}
@Override
protected long _getLong(int index) {
return buffer.getLong(index);
}
@Override
public long getLongLE(int index) {
ensureAccessible();
return _getLongLE(index);
}
@Override
protected long _getLongLE(int index) {
return ByteBufUtil.swapLong(buffer.getLong(index));
}
@Override
public ByteBuf getBytes(int index, ByteBuf dst, int dstIndex, int length) {
checkDstIndex(index, length, dstIndex, dst.capacity());
if (dst.hasArray()) {
getBytes(index, dst.array(), dst.arrayOffset() + dstIndex, length);
} else if (dst.nioBufferCount() > 0) {
for (ByteBuffer bb: dst.nioBuffers(dstIndex, length)) {
int bbLen = bb.remaining();
getBytes(index, bb);
index += bbLen;
}
} else {
dst.setBytes(dstIndex, this, index, length);
}
return this;
}
@Override
public ByteBuf getBytes(int index, byte[] dst, int dstIndex, int length) {
checkDstIndex(index, length, dstIndex, dst.length);
ByteBuffer tmpBuf = internalNioBuffer();
tmpBuf.clear().position(index).limit(index + length);
tmpBuf.get(dst, dstIndex, length);
return this;
}
@Override
public ByteBuf getBytes(int index, ByteBuffer dst) {
checkIndex(index, dst.remaining());
ByteBuffer tmpBuf = internalNioBuffer();
tmpBuf.clear().position(index).limit(index + dst.remaining());
dst.put(tmpBuf);
return this;
}
@Override
public ByteBuf setByte(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setByte(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setShort(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setShort(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setShortLE(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setShortLE(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setMedium(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setMedium(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setMediumLE(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setMediumLE(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setInt(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setInt(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setIntLE(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setIntLE(int index, int value) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setLong(int index, long value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setLong(int index, long value) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setLongLE(int index, long value) {
throw new ReadOnlyBufferException();
}
@Override
protected void _setLongLE(int index, long value) {
throw new ReadOnlyBufferException();
}
@Override
public int capacity() {
return maxCapacity();
}
@Override
public ByteBuf capacity(int newCapacity) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBufAllocator alloc() {
return allocator;
}
@Override
public ByteOrder order() {
return ByteOrder.BIG_ENDIAN;
}
@Override
public ByteBuf unwrap() {
return null;
}
@Override
public boolean isReadOnly() {
return buffer.isReadOnly();
}
@Override
public boolean isDirect() {
return buffer.isDirect();
}
@Override
public ByteBuf getBytes(int index, OutputStream out, int length) throws IOException {
ensureAccessible();
if (length == 0) {
return this;
}
if (buffer.hasArray()) {
out.write(buffer.array(), index + buffer.arrayOffset(), length);
} else {
byte[] tmp = ByteBufUtil.threadLocalTempArray(length);
ByteBuffer tmpBuf = internalNioBuffer();
tmpBuf.clear().position(index);
tmpBuf.get(tmp, 0, length);
out.write(tmp, 0, length);
}
return this;
}
@Override
public int getBytes(int index, GatheringByteChannel out, int length) throws IOException {
ensureAccessible();
if (length == 0) {
return 0;
}
ByteBuffer tmpBuf = internalNioBuffer();
tmpBuf.clear().position(index).limit(index + length);
return out.write(tmpBuf);
}
@Override
public int getBytes(int index, FileChannel out, long position, int length) throws IOException {
ensureAccessible();
if (length == 0) {
return 0;
}
ByteBuffer tmpBuf = internalNioBuffer();
tmpBuf.clear().position(index).limit(index + length);
return out.write(tmpBuf, position);
}
@Override
public ByteBuf setBytes(int index, ByteBuf src, int srcIndex, int length) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setBytes(int index, byte[] src, int srcIndex, int length) {
throw new ReadOnlyBufferException();
}
@Override
public ByteBuf setBytes(int index, ByteBuffer src) {
throw new ReadOnlyBufferException();
}
@Override
public int setBytes(int index, InputStream in, int length) throws IOException {
throw new ReadOnlyBufferException();
}
@Override
public int setBytes(int index, ScatteringByteChannel in, int length) throws IOException {
throw new ReadOnlyBufferException();
}
@Override
public int setBytes(int index, FileChannel in, long position, int length) throws IOException {
throw new ReadOnlyBufferException();
}
protected final ByteBuffer internalNioBuffer() {
ByteBuffer tmpNioBuf = this.tmpNioBuf;
if (tmpNioBuf == null) {
this.tmpNioBuf = tmpNioBuf = buffer.duplicate();
}
return tmpNioBuf;
}
@Override
public ByteBuf copy(int index, int length) {
ensureAccessible();
ByteBuffer src;
try {
src = internalNioBuffer().clear().position(index).limit(index + length);
} catch (IllegalArgumentException ignored) {
throw new IndexOutOfBoundsException("Too many bytes to read - Need " + (index + length));
}
ByteBuf dst = src.isDirect() ? alloc().directBuffer(length) : alloc().heapBuffer(length);
dst.writeBytes(src);
return dst;
}
@Override
public int nioBufferCount() {
return 1;
}
@Override
public ByteBuffer[] nioBuffers(int index, int length) {
return new ByteBuffer[] { nioBuffer(index, length) };
}
@Override
public ByteBuffer nioBuffer(int index, int length) {
checkIndex(index, length);
return buffer.duplicate().position(index).limit(index + length);
}
@Override
public ByteBuffer internalNioBuffer(int index, int length) {
ensureAccessible();
return internalNioBuffer().clear().position(index).limit(index + length);
}
@Override
public final boolean isContiguous() {
return true;
}
@Override
public boolean hasArray() {
return buffer.hasArray();
}
@Override
public byte[] array() {
return buffer.array();
}
@Override
public int arrayOffset() {
return buffer.arrayOffset();
}
@Override
public boolean hasMemoryAddress() {
return false;
}
@Override
public long memoryAddress() {
throw new UnsupportedOperationException();
}
}
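// Illustrative usage sketch: a read-only ByteBuf like the one above is normally obtained
// indirectly by wrapping a read-only NIO buffer (see Unpooled.wrappedBuffer(ByteBuffer) below).
//
//   ByteBuffer nio = ByteBuffer.wrap("hello".getBytes(CharsetUtil.US_ASCII)).asReadOnlyBuffer();
//   ByteBuf buf = Unpooled.wrappedBuffer(nio); // read-only wrapper around the NIO buffer
//   buf.getByte(0);                            // reads are delegated to the wrapped buffer
//   buf.setByte(0, 1);                         // throws ReadOnlyBufferException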

View file

@ -0,0 +1,124 @@
/*
* Copyright 2013 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.internal.ObjectUtil;
import io.netty.util.internal.PlatformDependent;
import java.nio.ByteBuffer;
/**
 * Read-only ByteBuf which wraps a read-only direct ByteBuffer and uses {@code Unsafe} for best performance.
*/
final class ReadOnlyUnsafeDirectByteBuf extends ReadOnlyByteBufferBuf {
private final long memoryAddress;
ReadOnlyUnsafeDirectByteBuf(ByteBufAllocator allocator, ByteBuffer byteBuffer) {
super(allocator, byteBuffer);
// Use buffer because the super class slices the passed-in ByteBuffer, which means the memoryAddress
// may be different if its position != 0.
memoryAddress = PlatformDependent.directBufferAddress(buffer);
}
@Override
protected byte _getByte(int index) {
return UnsafeByteBufUtil.getByte(addr(index));
}
@Override
protected short _getShort(int index) {
return UnsafeByteBufUtil.getShort(addr(index));
}
@Override
protected int _getUnsignedMedium(int index) {
return UnsafeByteBufUtil.getUnsignedMedium(addr(index));
}
@Override
protected int _getInt(int index) {
return UnsafeByteBufUtil.getInt(addr(index));
}
@Override
protected long _getLong(int index) {
return UnsafeByteBufUtil.getLong(addr(index));
}
@Override
public ByteBuf getBytes(int index, ByteBuf dst, int dstIndex, int length) {
checkIndex(index, length);
ObjectUtil.checkNotNull(dst, "dst");
if (dstIndex < 0 || dstIndex > dst.capacity() - length) {
throw new IndexOutOfBoundsException("dstIndex: " + dstIndex);
}
if (dst.hasMemoryAddress()) {
PlatformDependent.copyMemory(addr(index), dst.memoryAddress() + dstIndex, length);
} else if (dst.hasArray()) {
PlatformDependent.copyMemory(addr(index), dst.array(), dst.arrayOffset() + dstIndex, length);
} else {
dst.setBytes(dstIndex, this, index, length);
}
return this;
}
@Override
public ByteBuf getBytes(int index, byte[] dst, int dstIndex, int length) {
checkIndex(index, length);
ObjectUtil.checkNotNull(dst, "dst");
if (dstIndex < 0 || dstIndex > dst.length - length) {
throw new IndexOutOfBoundsException(String.format(
"dstIndex: %d, length: %d (expected: range(0, %d))", dstIndex, length, dst.length));
}
if (length != 0) {
PlatformDependent.copyMemory(addr(index), dst, dstIndex, length);
}
return this;
}
@Override
public ByteBuf copy(int index, int length) {
checkIndex(index, length);
ByteBuf copy = alloc().directBuffer(length, maxCapacity());
if (length != 0) {
if (copy.hasMemoryAddress()) {
PlatformDependent.copyMemory(addr(index), copy.memoryAddress(), length);
copy.setIndex(0, length);
} else {
copy.writeBytes(this, index, length);
}
}
return copy;
}
@Override
public boolean hasMemoryAddress() {
return true;
}
@Override
public long memoryAddress() {
return memoryAddress;
}
private long addr(int index) {
return memoryAddress + index;
}
}

View file

@ -0,0 +1,175 @@
/*
* Copyright 2013 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.ResourceLeakDetector;
import io.netty.util.ResourceLeakTracker;
import io.netty.util.internal.ObjectUtil;
import java.nio.ByteOrder;
class SimpleLeakAwareByteBuf extends WrappedByteBuf {
/**
 * This object is associated with the {@link ResourceLeakTracker}. When {@link ResourceLeakTracker#close(Object)}
 * is called, this object will be used as the argument. It is also assumed that this object was used when
 * {@link ResourceLeakDetector#track(Object)} was called to create {@link #leak}.
*/
private final ByteBuf trackedByteBuf;
final ResourceLeakTracker<ByteBuf> leak;
SimpleLeakAwareByteBuf(ByteBuf wrapped, ByteBuf trackedByteBuf, ResourceLeakTracker<ByteBuf> leak) {
super(wrapped);
this.trackedByteBuf = ObjectUtil.checkNotNull(trackedByteBuf, "trackedByteBuf");
this.leak = ObjectUtil.checkNotNull(leak, "leak");
}
SimpleLeakAwareByteBuf(ByteBuf wrapped, ResourceLeakTracker<ByteBuf> leak) {
this(wrapped, wrapped, leak);
}
@Override
public ByteBuf slice() {
return newSharedLeakAwareByteBuf(super.slice());
}
@Override
public ByteBuf retainedSlice() {
return unwrappedDerived(super.retainedSlice());
}
@Override
public ByteBuf retainedSlice(int index, int length) {
return unwrappedDerived(super.retainedSlice(index, length));
}
@Override
public ByteBuf retainedDuplicate() {
return unwrappedDerived(super.retainedDuplicate());
}
@Override
public ByteBuf readRetainedSlice(int length) {
return unwrappedDerived(super.readRetainedSlice(length));
}
@Override
public ByteBuf slice(int index, int length) {
return newSharedLeakAwareByteBuf(super.slice(index, length));
}
@Override
public ByteBuf duplicate() {
return newSharedLeakAwareByteBuf(super.duplicate());
}
@Override
public ByteBuf readSlice(int length) {
return newSharedLeakAwareByteBuf(super.readSlice(length));
}
@Override
public ByteBuf asReadOnly() {
return newSharedLeakAwareByteBuf(super.asReadOnly());
}
@Override
public ByteBuf touch() {
return this;
}
@Override
public ByteBuf touch(Object hint) {
return this;
}
@Override
public boolean release() {
if (super.release()) {
closeLeak();
return true;
}
return false;
}
@Override
public boolean release(int decrement) {
if (super.release(decrement)) {
closeLeak();
return true;
}
return false;
}
private void closeLeak() {
// Close the ResourceLeakTracker with the tracked ByteBuf as argument. This must be the same that was used when
// calling DefaultResourceLeak.track(...).
boolean closed = leak.close(trackedByteBuf);
assert closed;
}
@Override
public ByteBuf order(ByteOrder endianness) {
if (order() == endianness) {
return this;
} else {
return newSharedLeakAwareByteBuf(super.order(endianness));
}
}
private ByteBuf unwrappedDerived(ByteBuf derived) {
// We only need to unwrap SwappedByteBuf implementations as these will be the only ones that may end up in
// the AbstractLeakAwareByteBuf implementations beside slices / duplicates and "real" buffers.
ByteBuf unwrappedDerived = unwrapSwapped(derived);
if (unwrappedDerived instanceof AbstractPooledDerivedByteBuf) {
// Update the parent to point to this buffer so we correctly close the ResourceLeakTracker.
((AbstractPooledDerivedByteBuf) unwrappedDerived).parent(this);
// force tracking of derived buffers (see issue #13414)
return newLeakAwareByteBuf(derived, AbstractByteBuf.leakDetector.trackForcibly(derived));
}
return newSharedLeakAwareByteBuf(derived);
}
@SuppressWarnings("deprecation")
private static ByteBuf unwrapSwapped(ByteBuf buf) {
if (buf instanceof SwappedByteBuf) {
do {
buf = buf.unwrap();
} while (buf instanceof SwappedByteBuf);
return buf;
}
return buf;
}
private SimpleLeakAwareByteBuf newSharedLeakAwareByteBuf(
ByteBuf wrapped) {
return newLeakAwareByteBuf(wrapped, trackedByteBuf, leak);
}
private SimpleLeakAwareByteBuf newLeakAwareByteBuf(
ByteBuf wrapped, ResourceLeakTracker<ByteBuf> leakTracker) {
return newLeakAwareByteBuf(wrapped, wrapped, leakTracker);
}
protected SimpleLeakAwareByteBuf newLeakAwareByteBuf(
ByteBuf buf, ByteBuf trackedByteBuf, ResourceLeakTracker<ByteBuf> leakTracker) {
return new SimpleLeakAwareByteBuf(buf, trackedByteBuf, leakTracker);
}
}
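// Illustrative sketch of the tracking behaviour implemented above: derived buffers created via
// slice()/duplicate() share this buffer's ResourceLeakTracker, and the tracker is closed when the
// originating buffer's reference count drops to zero.
//
//   ByteBuf tracked = ...;            // a SimpleLeakAwareByteBuf handed out by an allocator
//   ByteBuf view = tracked.slice();   // wrapped again via newSharedLeakAwareByteBuf, same tracker
//   tracked.release();                // refCnt -> 0, leak.close(trackedByteBuf) is called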

View file

@ -0,0 +1,126 @@
/*
* Copyright 2016 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.ResourceLeakTracker;
import io.netty.util.internal.ObjectUtil;
import java.nio.ByteOrder;
class SimpleLeakAwareCompositeByteBuf extends WrappedCompositeByteBuf {
final ResourceLeakTracker<ByteBuf> leak;
SimpleLeakAwareCompositeByteBuf(CompositeByteBuf wrapped, ResourceLeakTracker<ByteBuf> leak) {
super(wrapped);
this.leak = ObjectUtil.checkNotNull(leak, "leak");
}
@Override
public boolean release() {
// Call unwrap() first, just in case super.release() changes the ByteBuf instance that is returned
// by unwrap().
ByteBuf unwrapped = unwrap();
if (super.release()) {
closeLeak(unwrapped);
return true;
}
return false;
}
@Override
public boolean release(int decrement) {
// Call unwrap() first, just in case super.release() changes the ByteBuf instance that is returned
// by unwrap().
ByteBuf unwrapped = unwrap();
if (super.release(decrement)) {
closeLeak(unwrapped);
return true;
}
return false;
}
private void closeLeak(ByteBuf trackedByteBuf) {
// Close the ResourceLeakTracker with the tracked ByteBuf as argument. This must be the same that was used when
// calling DefaultResourceLeak.track(...).
boolean closed = leak.close(trackedByteBuf);
assert closed;
}
@Override
public ByteBuf order(ByteOrder endianness) {
if (order() == endianness) {
return this;
} else {
return newLeakAwareByteBuf(super.order(endianness));
}
}
@Override
public ByteBuf slice() {
return newLeakAwareByteBuf(super.slice());
}
@Override
public ByteBuf retainedSlice() {
return newLeakAwareByteBuf(super.retainedSlice());
}
@Override
public ByteBuf slice(int index, int length) {
return newLeakAwareByteBuf(super.slice(index, length));
}
@Override
public ByteBuf retainedSlice(int index, int length) {
return newLeakAwareByteBuf(super.retainedSlice(index, length));
}
@Override
public ByteBuf duplicate() {
return newLeakAwareByteBuf(super.duplicate());
}
@Override
public ByteBuf retainedDuplicate() {
return newLeakAwareByteBuf(super.retainedDuplicate());
}
@Override
public ByteBuf readSlice(int length) {
return newLeakAwareByteBuf(super.readSlice(length));
}
@Override
public ByteBuf readRetainedSlice(int length) {
return newLeakAwareByteBuf(super.readRetainedSlice(length));
}
@Override
public ByteBuf asReadOnly() {
return newLeakAwareByteBuf(super.asReadOnly());
}
private SimpleLeakAwareByteBuf newLeakAwareByteBuf(ByteBuf wrapped) {
return newLeakAwareByteBuf(wrapped, unwrap(), leak);
}
protected SimpleLeakAwareByteBuf newLeakAwareByteBuf(
ByteBuf wrapped, ByteBuf trackedByteBuf, ResourceLeakTracker<ByteBuf> leakTracker) {
return new SimpleLeakAwareByteBuf(wrapped, trackedByteBuf, leakTracker);
}
}

View file

@ -0,0 +1,413 @@
/*
* Copyright 2020 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import static io.netty.buffer.PoolThreadCache.*;
/**
* SizeClasses requires {@code pageShifts} to be defined prior to inclusion,
* and it in turn defines:
* <p>
* LOG2_SIZE_CLASS_GROUP: Log of size class count for each size doubling.
* LOG2_MAX_LOOKUP_SIZE: Log of max size class in the lookup table.
* sizeClasses: Complete table of [index, log2Group, log2Delta, nDelta, isMultiPageSize,
* isSubPage, log2DeltaLookup] tuples.
* index: Size class index.
* log2Group: Log of group base size (no deltas added).
* log2Delta: Log of delta to previous size class.
* nDelta: Delta multiplier.
* isMultiPageSize: 'yes' if a multiple of the page size, 'no' otherwise.
* isSubPage: 'yes' if a subpage size class, 'no' otherwise.
* log2DeltaLookup: Same as log2Delta if a lookup table size class, 'no'
* otherwise.
* <p>
* nSubpages: Number of subpages size classes.
* nSizes: Number of size classes.
* nPSizes: Number of size classes that are multiples of pageSize.
*
* smallMaxSizeIdx: Maximum small size class index.
*
* lookupMaxClass: Maximum size class included in lookup table.
* log2NormalMinClass: Log of minimum normal size class.
* <p>
* The first size class and spacing are 1 << LOG2_QUANTUM.
* Each group has 1 << LOG2_SIZE_CLASS_GROUP of size classes.
*
 * size = (1 << log2Group) + nDelta * (1 << log2Delta)
*
* The first size class has an unusual encoding, because the size has to be
* split between group and delta*nDelta.
*
* If pageShift = 13, sizeClasses looks like this:
*
* (index, log2Group, log2Delta, nDelta, isMultiPageSize, isSubPage, log2DeltaLookup)
* <p>
* ( 0, 4, 4, 0, no, yes, 4)
* ( 1, 4, 4, 1, no, yes, 4)
* ( 2, 4, 4, 2, no, yes, 4)
* ( 3, 4, 4, 3, no, yes, 4)
* <p>
* ( 4, 6, 4, 1, no, yes, 4)
* ( 5, 6, 4, 2, no, yes, 4)
* ( 6, 6, 4, 3, no, yes, 4)
* ( 7, 6, 4, 4, no, yes, 4)
* <p>
* ( 8, 7, 5, 1, no, yes, 5)
* ( 9, 7, 5, 2, no, yes, 5)
* ( 10, 7, 5, 3, no, yes, 5)
* ( 11, 7, 5, 4, no, yes, 5)
* ...
* ...
* ( 72, 23, 21, 1, yes, no, no)
* ( 73, 23, 21, 2, yes, no, no)
* ( 74, 23, 21, 3, yes, no, no)
* ( 75, 23, 21, 4, yes, no, no)
* <p>
* ( 76, 24, 22, 1, yes, no, no)
*/
final class SizeClasses implements SizeClassesMetric {
static final int LOG2_QUANTUM = 4;
private static final int LOG2_SIZE_CLASS_GROUP = 2;
private static final int LOG2_MAX_LOOKUP_SIZE = 12;
private static final int LOG2GROUP_IDX = 1;
private static final int LOG2DELTA_IDX = 2;
private static final int NDELTA_IDX = 3;
private static final int PAGESIZE_IDX = 4;
private static final int SUBPAGE_IDX = 5;
private static final int LOG2_DELTA_LOOKUP_IDX = 6;
private static final byte no = 0, yes = 1;
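// Worked example of the size formula from the class comment, assuming pageShifts = 13:
//   size = (1 << log2Group) + nDelta * (1 << log2Delta)
//   index 0: (1 << 4) + 0 * (1 << 4) = 16
//   index 1: (1 << 4) + 1 * (1 << 4) = 32
//   index 8: (1 << 7) + 1 * (1 << 5) = 160
// Each group of 1 << LOG2_SIZE_CLASS_GROUP (= 4) classes covers one size doubling.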
final int pageSize;
final int pageShifts;
final int chunkSize;
final int directMemoryCacheAlignment;
final int nSizes;
final int nSubpages;
final int nPSizes;
final int lookupMaxSize;
final int smallMaxSizeIdx;
private final int[] pageIdx2sizeTab;
// lookup table for sizeIdx <= smallMaxSizeIdx
private final int[] sizeIdx2sizeTab;
// lookup table used for size <= lookupMaxClass
// spacing is 1 << LOG2_QUANTUM, so the size of array is lookupMaxClass >> LOG2_QUANTUM
private final int[] size2idxTab;
SizeClasses(int pageSize, int pageShifts, int chunkSize, int directMemoryCacheAlignment) {
int group = log2(chunkSize) - LOG2_QUANTUM - LOG2_SIZE_CLASS_GROUP + 1;
//generate size classes
//[index, log2Group, log2Delta, nDelta, isMultiPageSize, isSubPage, log2DeltaLookup]
short[][] sizeClasses = new short[group << LOG2_SIZE_CLASS_GROUP][7];
int normalMaxSize = -1;
int nSizes = 0;
int size = 0;
int log2Group = LOG2_QUANTUM;
int log2Delta = LOG2_QUANTUM;
int ndeltaLimit = 1 << LOG2_SIZE_CLASS_GROUP;
//First small group, nDelta start at 0.
//first size class is 1 << LOG2_QUANTUM
for (int nDelta = 0; nDelta < ndeltaLimit; nDelta++, nSizes++) {
short[] sizeClass = newSizeClass(nSizes, log2Group, log2Delta, nDelta, pageShifts);
sizeClasses[nSizes] = sizeClass;
size = sizeOf(sizeClass, directMemoryCacheAlignment);
}
log2Group += LOG2_SIZE_CLASS_GROUP;
//All remaining groups, nDelta start at 1.
for (; size < chunkSize; log2Group++, log2Delta++) {
for (int nDelta = 1; nDelta <= ndeltaLimit && size < chunkSize; nDelta++, nSizes++) {
short[] sizeClass = newSizeClass(nSizes, log2Group, log2Delta, nDelta, pageShifts);
sizeClasses[nSizes] = sizeClass;
size = normalMaxSize = sizeOf(sizeClass, directMemoryCacheAlignment);
}
}
//chunkSize must be normalMaxSize
assert chunkSize == normalMaxSize;
int smallMaxSizeIdx = 0;
int lookupMaxSize = 0;
int nPSizes = 0;
int nSubpages = 0;
for (int idx = 0; idx < nSizes; idx++) {
short[] sz = sizeClasses[idx];
if (sz[PAGESIZE_IDX] == yes) {
nPSizes++;
}
if (sz[SUBPAGE_IDX] == yes) {
nSubpages++;
smallMaxSizeIdx = idx;
}
if (sz[LOG2_DELTA_LOOKUP_IDX] != no) {
lookupMaxSize = sizeOf(sz, directMemoryCacheAlignment);
}
}
this.smallMaxSizeIdx = smallMaxSizeIdx;
this.lookupMaxSize = lookupMaxSize;
this.nPSizes = nPSizes;
this.nSubpages = nSubpages;
this.nSizes = nSizes;
this.pageSize = pageSize;
this.pageShifts = pageShifts;
this.chunkSize = chunkSize;
this.directMemoryCacheAlignment = directMemoryCacheAlignment;
//generate lookup tables
this.sizeIdx2sizeTab = newIdx2SizeTab(sizeClasses, nSizes, directMemoryCacheAlignment);
this.pageIdx2sizeTab = newPageIdx2sizeTab(sizeClasses, nSizes, nPSizes, directMemoryCacheAlignment);
this.size2idxTab = newSize2idxTab(lookupMaxSize, sizeClasses);
}
//calculate size class
private static short[] newSizeClass(int index, int log2Group, int log2Delta, int nDelta, int pageShifts) {
short isMultiPageSize;
if (log2Delta >= pageShifts) {
isMultiPageSize = yes;
} else {
int pageSize = 1 << pageShifts;
int size = calculateSize(log2Group, nDelta, log2Delta);
isMultiPageSize = size == size / pageSize * pageSize? yes : no;
}
int log2Ndelta = nDelta == 0? 0 : log2(nDelta);
byte remove = 1 << log2Ndelta < nDelta? yes : no;
int log2Size = log2Delta + log2Ndelta == log2Group? log2Group + 1 : log2Group;
if (log2Size == log2Group) {
remove = yes;
}
short isSubpage = log2Size < pageShifts + LOG2_SIZE_CLASS_GROUP? yes : no;
int log2DeltaLookup = log2Size < LOG2_MAX_LOOKUP_SIZE ||
log2Size == LOG2_MAX_LOOKUP_SIZE && remove == no
? log2Delta : no;
return new short[] {
(short) index, (short) log2Group, (short) log2Delta,
(short) nDelta, isMultiPageSize, isSubpage, (short) log2DeltaLookup
};
}
private static int[] newIdx2SizeTab(short[][] sizeClasses, int nSizes, int directMemoryCacheAlignment) {
int[] sizeIdx2sizeTab = new int[nSizes];
for (int i = 0; i < nSizes; i++) {
short[] sizeClass = sizeClasses[i];
sizeIdx2sizeTab[i] = sizeOf(sizeClass, directMemoryCacheAlignment);
}
return sizeIdx2sizeTab;
}
private static int calculateSize(int log2Group, int nDelta, int log2Delta) {
return (1 << log2Group) + (nDelta << log2Delta);
}
private static int sizeOf(short[] sizeClass, int directMemoryCacheAlignment) {
int log2Group = sizeClass[LOG2GROUP_IDX];
int log2Delta = sizeClass[LOG2DELTA_IDX];
int nDelta = sizeClass[NDELTA_IDX];
int size = calculateSize(log2Group, nDelta, log2Delta);
return alignSizeIfNeeded(size, directMemoryCacheAlignment);
}
private static int[] newPageIdx2sizeTab(short[][] sizeClasses, int nSizes, int nPSizes,
int directMemoryCacheAlignment) {
int[] pageIdx2sizeTab = new int[nPSizes];
int pageIdx = 0;
for (int i = 0; i < nSizes; i++) {
short[] sizeClass = sizeClasses[i];
if (sizeClass[PAGESIZE_IDX] == yes) {
pageIdx2sizeTab[pageIdx++] = sizeOf(sizeClass, directMemoryCacheAlignment);
}
}
return pageIdx2sizeTab;
}
private static int[] newSize2idxTab(int lookupMaxSize, short[][] sizeClasses) {
int[] size2idxTab = new int[lookupMaxSize >> LOG2_QUANTUM];
int idx = 0;
int size = 0;
for (int i = 0; size <= lookupMaxSize; i++) {
int log2Delta = sizeClasses[i][LOG2DELTA_IDX];
int times = 1 << log2Delta - LOG2_QUANTUM;
while (size <= lookupMaxSize && times-- > 0) {
size2idxTab[idx++] = i;
size = idx + 1 << LOG2_QUANTUM;
}
}
return size2idxTab;
}
@Override
public int sizeIdx2size(int sizeIdx) {
return sizeIdx2sizeTab[sizeIdx];
}
@Override
public int sizeIdx2sizeCompute(int sizeIdx) {
int group = sizeIdx >> LOG2_SIZE_CLASS_GROUP;
int mod = sizeIdx & (1 << LOG2_SIZE_CLASS_GROUP) - 1;
int groupSize = group == 0? 0 :
1 << LOG2_QUANTUM + LOG2_SIZE_CLASS_GROUP - 1 << group;
int shift = group == 0? 1 : group;
int lgDelta = shift + LOG2_QUANTUM - 1;
int modSize = mod + 1 << lgDelta;
return groupSize + modSize;
}
@Override
public long pageIdx2size(int pageIdx) {
return pageIdx2sizeTab[pageIdx];
}
@Override
public long pageIdx2sizeCompute(int pageIdx) {
int group = pageIdx >> LOG2_SIZE_CLASS_GROUP;
int mod = pageIdx & (1 << LOG2_SIZE_CLASS_GROUP) - 1;
long groupSize = group == 0? 0 :
1L << pageShifts + LOG2_SIZE_CLASS_GROUP - 1 << group;
int shift = group == 0? 1 : group;
int log2Delta = shift + pageShifts - 1;
int modSize = mod + 1 << log2Delta;
return groupSize + modSize;
}
@Override
public int size2SizeIdx(int size) {
if (size == 0) {
return 0;
}
if (size > chunkSize) {
return nSizes;
}
size = alignSizeIfNeeded(size, directMemoryCacheAlignment);
if (size <= lookupMaxSize) {
//size-1 / MIN_TINY
return size2idxTab[size - 1 >> LOG2_QUANTUM];
}
int x = log2((size << 1) - 1);
int shift = x < LOG2_SIZE_CLASS_GROUP + LOG2_QUANTUM + 1
? 0 : x - (LOG2_SIZE_CLASS_GROUP + LOG2_QUANTUM);
int group = shift << LOG2_SIZE_CLASS_GROUP;
int log2Delta = x < LOG2_SIZE_CLASS_GROUP + LOG2_QUANTUM + 1
? LOG2_QUANTUM : x - LOG2_SIZE_CLASS_GROUP - 1;
int mod = size - 1 >> log2Delta & (1 << LOG2_SIZE_CLASS_GROUP) - 1;
return group + mod;
}
@Override
public int pages2pageIdx(int pages) {
return pages2pageIdxCompute(pages, false);
}
@Override
public int pages2pageIdxFloor(int pages) {
return pages2pageIdxCompute(pages, true);
}
private int pages2pageIdxCompute(int pages, boolean floor) {
int pageSize = pages << pageShifts;
if (pageSize > chunkSize) {
return nPSizes;
}
int x = log2((pageSize << 1) - 1);
int shift = x < LOG2_SIZE_CLASS_GROUP + pageShifts
? 0 : x - (LOG2_SIZE_CLASS_GROUP + pageShifts);
int group = shift << LOG2_SIZE_CLASS_GROUP;
int log2Delta = x < LOG2_SIZE_CLASS_GROUP + pageShifts + 1?
pageShifts : x - LOG2_SIZE_CLASS_GROUP - 1;
int mod = pageSize - 1 >> log2Delta & (1 << LOG2_SIZE_CLASS_GROUP) - 1;
int pageIdx = group + mod;
if (floor && pageIdx2sizeTab[pageIdx] > pages << pageShifts) {
pageIdx--;
}
return pageIdx;
}
// Round size up to the nearest multiple of alignment.
private static int alignSizeIfNeeded(int size, int directMemoryCacheAlignment) {
if (directMemoryCacheAlignment <= 0) {
return size;
}
int delta = size & directMemoryCacheAlignment - 1;
return delta == 0? size : size + directMemoryCacheAlignment - delta;
}
@Override
public int normalizeSize(int size) {
if (size == 0) {
return sizeIdx2sizeTab[0];
}
size = alignSizeIfNeeded(size, directMemoryCacheAlignment);
if (size <= lookupMaxSize) {
int ret = sizeIdx2sizeTab[size2idxTab[size - 1 >> LOG2_QUANTUM]];
assert ret == normalizeSizeCompute(size);
return ret;
}
return normalizeSizeCompute(size);
}
private static int normalizeSizeCompute(int size) {
int x = log2((size << 1) - 1);
int log2Delta = x < LOG2_SIZE_CLASS_GROUP + LOG2_QUANTUM + 1
? LOG2_QUANTUM : x - LOG2_SIZE_CLASS_GROUP - 1;
int delta = 1 << log2Delta;
int delta_mask = delta - 1;
return size + delta_mask & ~delta_mask;
}
}

View file

@ -0,0 +1,87 @@
/*
* Copyright 2020 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
/**
 * Exposes metrics for a {@link SizeClasses}.
*/
public interface SizeClassesMetric {
/**
* Computes size from lookup table according to sizeIdx.
*
* @return size
*/
int sizeIdx2size(int sizeIdx);
/**
* Computes size according to sizeIdx.
*
* @return size
*/
int sizeIdx2sizeCompute(int sizeIdx);
/**
* Computes size from lookup table according to pageIdx.
*
 * @return size which is a multiple of pageSize.
*/
long pageIdx2size(int pageIdx);
/**
* Computes size according to pageIdx.
*
 * @return size which is a multiple of pageSize.
*/
long pageIdx2sizeCompute(int pageIdx);
/**
* Normalizes request size up to the nearest size class.
*
* @param size request size
*
* @return sizeIdx of the size class
*/
int size2SizeIdx(int size);
/**
* Normalizes request size up to the nearest pageSize class.
*
* @param pages multiples of pageSizes
*
* @return pageIdx of the pageSize class
*/
int pages2pageIdx(int pages);
/**
* Normalizes request size down to the nearest pageSize class.
*
* @param pages multiples of pageSizes
*
* @return pageIdx of the pageSize class
*/
int pages2pageIdxFloor(int pages);
/**
* Normalizes usable size that would result from allocating an object with the
* specified size and alignment.
*
* @param size request size
*
* @return normalized size
*/
int normalizeSize(int size);
}
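// Illustrative sketch of the normalization contract, assuming the default table shown in
// SizeClasses (pageShifts = 13, no direct memory cache alignment):
//
//   SizeClassesMetric m = ...;        // obtained from the pooled allocator internals
//   int idx = m.size2SizeIdx(100);    // index of the smallest size class >= 100 bytes
//   int cap = m.sizeIdx2size(idx);    // 112 with the default table (..., 80, 96, 112, 128, ...)
//   int norm = m.normalizeSize(100);  // also 112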

View file

@ -0,0 +1,49 @@
/*
* Copyright 2012 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
/**
* A derived buffer which exposes its parent's sub-region only. It is
* recommended to use {@link ByteBuf#slice()} and
* {@link ByteBuf#slice(int, int)} instead of calling the constructor
* explicitly.
*
* @deprecated Do not use.
*/
@Deprecated
public class SlicedByteBuf extends AbstractUnpooledSlicedByteBuf {
private int length;
public SlicedByteBuf(ByteBuf buffer, int index, int length) {
super(buffer, index, length);
}
@Override
final void initLength(int length) {
this.length = length;
}
@Override
final int length() {
return length;
}
@Override
public int capacity() {
return length;
}
}
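// Illustrative sketch of the recommended replacement for the deprecated constructor:
//
//   ByteBuf sub = buf.slice(16, 64);              // instead of new SlicedByteBuf(buf, 16, 64)
//   ByteBuf retained = buf.retainedSlice(16, 64); // if the slice must hold its own reference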

File diff suppressed because it is too large

View file

@ -0,0 +1,923 @@
/*
* Copyright 2012 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.buffer.CompositeByteBuf.ByteWrapper;
import io.netty.util.internal.ObjectUtil;
import io.netty.util.CharsetUtil;
import io.netty.util.internal.PlatformDependent;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.util.Arrays;
/**
* Creates a new {@link ByteBuf} by allocating new space or by wrapping
 * or copying existing byte arrays, byte buffers and strings.
*
* <h3>Use static import</h3>
 * This class is intended to be used with the Java 5 static import statement:
*
* <pre>
* import static io.netty.buffer.{@link Unpooled}.*;
*
* {@link ByteBuf} heapBuffer = buffer(128);
* {@link ByteBuf} directBuffer = directBuffer(256);
* {@link ByteBuf} wrappedBuffer = wrappedBuffer(new byte[128], new byte[256]);
* {@link ByteBuf} copiedBuffer = copiedBuffer({@link ByteBuffer}.allocate(128));
* </pre>
*
* <h3>Allocating a new buffer</h3>
*
 * Two buffer types are provided out of the box.
 *
 * <ul>
 * <li>{@link #buffer(int)} allocates a new heap buffer.</li>
 * <li>{@link #directBuffer(int)} allocates a new direct buffer.</li>
* </ul>
*
* <h3>Creating a wrapped buffer</h3>
*
 * A wrapped buffer is a view of one or more existing byte arrays and byte
 * buffers. Any change to the content of the original array or buffer will be
 * visible in the wrapped buffer. Various wrapper methods are provided, all
 * named {@code wrappedBuffer()}. Take a close look at the overloads that
 * accept varargs if you want to compose a buffer from more than one array
 * while reducing the number of memory copies.
*
* <h3>Creating a copied buffer</h3>
*
 * A copied buffer is a deep copy of one or more existing byte arrays, byte
 * buffers or strings. Unlike a wrapped buffer, there is no shared data
 * between the original data and the copied buffer. Various copy methods are
 * provided, all named {@code copiedBuffer()}. This operation is also a
 * convenient way to merge multiple buffers into one buffer.
*/
public final class Unpooled {
private static final ByteBufAllocator ALLOC = UnpooledByteBufAllocator.DEFAULT;
/**
* Big endian byte order.
*/
public static final ByteOrder BIG_ENDIAN = ByteOrder.BIG_ENDIAN;
/**
* Little endian byte order.
*/
public static final ByteOrder LITTLE_ENDIAN = ByteOrder.LITTLE_ENDIAN;
/**
* A buffer whose capacity is {@code 0}.
*/
@SuppressWarnings("checkstyle:StaticFinalBuffer") // EmptyByteBuf is not writeable or readable.
public static final ByteBuf EMPTY_BUFFER = ALLOC.buffer(0, 0);
static {
assert EMPTY_BUFFER instanceof EmptyByteBuf: "EMPTY_BUFFER must be an EmptyByteBuf.";
}
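// Illustrative sketch of the wrapped vs. copied distinction described in the class comment:
//
//   byte[] data = { 1, 2, 3 };
//   ByteBuf wrapped = wrappedBuffer(data);   // view: later changes to data are visible
//   ByteBuf copied = copiedBuffer(data);     // deep copy: independent of data
//   data[0] = 9;
//   wrapped.getByte(0);                      // 9
//   copied.getByte(0);                       // 1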
/**
* Creates a new big-endian Java heap buffer with reasonably small initial capacity, which
* expands its capacity boundlessly on demand.
*/
public static ByteBuf buffer() {
return ALLOC.heapBuffer();
}
/**
* Creates a new big-endian direct buffer with reasonably small initial capacity, which
* expands its capacity boundlessly on demand.
*/
public static ByteBuf directBuffer() {
return ALLOC.directBuffer();
}
/**
 * Creates a new big-endian Java heap buffer with the specified {@code initialCapacity}, which
* expands its capacity boundlessly on demand. The new buffer's {@code readerIndex} and
* {@code writerIndex} are {@code 0}.
*/
public static ByteBuf buffer(int initialCapacity) {
return ALLOC.heapBuffer(initialCapacity);
}
/**
 * Creates a new big-endian direct buffer with the specified {@code initialCapacity}, which
* expands its capacity boundlessly on demand. The new buffer's {@code readerIndex} and
* {@code writerIndex} are {@code 0}.
*/
public static ByteBuf directBuffer(int initialCapacity) {
return ALLOC.directBuffer(initialCapacity);
}
/**
* Creates a new big-endian Java heap buffer with the specified
 * {@code initialCapacity}, which may grow up to {@code maxCapacity}.
* The new buffer's {@code readerIndex} and {@code writerIndex} are
* {@code 0}.
*/
public static ByteBuf buffer(int initialCapacity, int maxCapacity) {
return ALLOC.heapBuffer(initialCapacity, maxCapacity);
}
/**
* Creates a new big-endian direct buffer with the specified
 * {@code initialCapacity}, which may grow up to {@code maxCapacity}.
* The new buffer's {@code readerIndex} and {@code writerIndex} are
* {@code 0}.
*/
public static ByteBuf directBuffer(int initialCapacity, int maxCapacity) {
return ALLOC.directBuffer(initialCapacity, maxCapacity);
}
/**
* Creates a new big-endian buffer which wraps the specified {@code array}.
* A modification on the specified array's content will be visible to the
* returned buffer.
*/
public static ByteBuf wrappedBuffer(byte[] array) {
if (array.length == 0) {
return EMPTY_BUFFER;
}
return new UnpooledHeapByteBuf(ALLOC, array, array.length);
}
/**
* Creates a new big-endian buffer which wraps the sub-region of the
* specified {@code array}. A modification on the specified array's
* content will be visible to the returned buffer.
*/
public static ByteBuf wrappedBuffer(byte[] array, int offset, int length) {
if (length == 0) {
return EMPTY_BUFFER;
}
if (offset == 0 && length == array.length) {
return wrappedBuffer(array);
}
return wrappedBuffer(array).slice(offset, length);
}
/**
* Creates a new buffer which wraps the specified NIO buffer's current
* slice. A modification on the specified buffer's content will be
* visible to the returned buffer.
*/
public static ByteBuf wrappedBuffer(ByteBuffer buffer) {
if (!buffer.hasRemaining()) {
return EMPTY_BUFFER;
}
if (!buffer.isDirect() && buffer.hasArray()) {
return wrappedBuffer(
buffer.array(),
buffer.arrayOffset() + buffer.position(),
buffer.remaining()).order(buffer.order());
} else if (PlatformDependent.hasUnsafe()) {
if (buffer.isReadOnly()) {
if (buffer.isDirect()) {
return new ReadOnlyUnsafeDirectByteBuf(ALLOC, buffer);
} else {
return new ReadOnlyByteBufferBuf(ALLOC, buffer);
}
} else {
return new UnpooledUnsafeDirectByteBuf(ALLOC, buffer, buffer.remaining());
}
} else {
if (buffer.isReadOnly()) {
return new ReadOnlyByteBufferBuf(ALLOC, buffer);
} else {
return new UnpooledDirectByteBuf(ALLOC, buffer, buffer.remaining());
}
}
}
/**
* Creates a new buffer which wraps the specified memory address. If {@code doFree} is true the
* memoryAddress will automatically be freed once the reference count of the {@link ByteBuf} reaches {@code 0}.
*/
public static ByteBuf wrappedBuffer(long memoryAddress, int size, boolean doFree) {
return new WrappedUnpooledUnsafeDirectByteBuf(ALLOC, memoryAddress, size, doFree);
}
/**
* Creates a new buffer which wraps the specified buffer's readable bytes.
* A modification on the specified buffer's content will be visible to the
* returned buffer.
* @param buffer The buffer to wrap. Reference count ownership of this variable is transferred to this method.
* @return The readable portion of the {@code buffer}, or an empty buffer if there is no readable portion.
* The caller is responsible for releasing this buffer.
*/
public static ByteBuf wrappedBuffer(ByteBuf buffer) {
if (buffer.isReadable()) {
return buffer.slice();
} else {
buffer.release();
return EMPTY_BUFFER;
}
}
/**
* Creates a new big-endian composite buffer which wraps the specified
* arrays without copying them. A modification on the specified arrays'
* content will be visible to the returned buffer.
*/
public static ByteBuf wrappedBuffer(byte[]... arrays) {
return wrappedBuffer(arrays.length, arrays);
}
/**
* Creates a new big-endian composite buffer which wraps the readable bytes of the
* specified buffers without copying them. A modification on the content
* of the specified buffers will be visible to the returned buffer.
* @param buffers The buffers to wrap. Reference count ownership of all variables is transferred to this method.
* @return The readable portion of the {@code buffers}. The caller is responsible for releasing this buffer.
*/
public static ByteBuf wrappedBuffer(ByteBuf... buffers) {
return wrappedBuffer(buffers.length, buffers);
}
/**
* Creates a new big-endian composite buffer which wraps the slices of the specified
* NIO buffers without copying them. A modification on the content of the
* specified buffers will be visible to the returned buffer.
*/
public static ByteBuf wrappedBuffer(ByteBuffer... buffers) {
return wrappedBuffer(buffers.length, buffers);
}
static <T> ByteBuf wrappedBuffer(int maxNumComponents, ByteWrapper<T> wrapper, T[] array) {
switch (array.length) {
case 0:
break;
case 1:
if (!wrapper.isEmpty(array[0])) {
return wrapper.wrap(array[0]);
}
break;
default:
for (int i = 0, len = array.length; i < len; i++) {
T bytes = array[i];
if (bytes == null) {
return EMPTY_BUFFER;
}
if (!wrapper.isEmpty(bytes)) {
return new CompositeByteBuf(ALLOC, false, maxNumComponents, wrapper, array, i);
}
}
}
return EMPTY_BUFFER;
}
/**
* Creates a new big-endian composite buffer which wraps the specified
* arrays without copying them. A modification on the specified arrays'
* content will be visible to the returned buffer.
*/
public static ByteBuf wrappedBuffer(int maxNumComponents, byte[]... arrays) {
return wrappedBuffer(maxNumComponents, CompositeByteBuf.BYTE_ARRAY_WRAPPER, arrays);
}
/**
* Creates a new big-endian composite buffer which wraps the readable bytes of the
* specified buffers without copying them. A modification on the content
* of the specified buffers will be visible to the returned buffer.
* @param maxNumComponents Advisement as to how many independent buffers are allowed to exist before
* consolidation occurs.
* @param buffers The buffers to wrap. Reference count ownership of all variables is transferred to this method.
* @return The readable portion of the {@code buffers}. The caller is responsible for releasing this buffer.
*/
public static ByteBuf wrappedBuffer(int maxNumComponents, ByteBuf... buffers) {
switch (buffers.length) {
case 0:
break;
case 1:
ByteBuf buffer = buffers[0];
if (buffer.isReadable()) {
return wrappedBuffer(buffer.order(BIG_ENDIAN));
} else {
buffer.release();
}
break;
default:
for (int i = 0; i < buffers.length; i++) {
ByteBuf buf = buffers[i];
if (buf.isReadable()) {
return new CompositeByteBuf(ALLOC, false, maxNumComponents, buffers, i);
}
buf.release();
}
break;
}
return EMPTY_BUFFER;
}
/**
* Creates a new big-endian composite buffer which wraps the slices of the specified
* NIO buffers without copying them. A modification on the content of the
* specified buffers will be visible to the returned buffer.
*/
public static ByteBuf wrappedBuffer(int maxNumComponents, ByteBuffer... buffers) {
return wrappedBuffer(maxNumComponents, CompositeByteBuf.BYTE_BUFFER_WRAPPER, buffers);
}
/**
* Returns a new big-endian composite buffer with no components.
*/
public static CompositeByteBuf compositeBuffer() {
return compositeBuffer(AbstractByteBufAllocator.DEFAULT_MAX_COMPONENTS);
}
/**
* Returns a new big-endian composite buffer with no components.
*/
public static CompositeByteBuf compositeBuffer(int maxNumComponents) {
return new CompositeByteBuf(ALLOC, false, maxNumComponents);
}
/**
* Creates a new big-endian buffer whose content is a copy of the
* specified {@code array}. The new buffer's {@code readerIndex} and
* {@code writerIndex} are {@code 0} and {@code array.length} respectively.
*/
public static ByteBuf copiedBuffer(byte[] array) {
if (array.length == 0) {
return EMPTY_BUFFER;
}
return wrappedBuffer(array.clone());
}
/**
* Creates a new big-endian buffer whose content is a copy of the
* specified {@code array}'s sub-region. The new buffer's
* {@code readerIndex} and {@code writerIndex} are {@code 0} and
* the specified {@code length} respectively.
*/
public static ByteBuf copiedBuffer(byte[] array, int offset, int length) {
if (length == 0) {
return EMPTY_BUFFER;
}
byte[] copy = PlatformDependent.allocateUninitializedArray(length);
System.arraycopy(array, offset, copy, 0, length);
return wrappedBuffer(copy);
}
/**
* Creates a new buffer whose content is a copy of the specified
* {@code buffer}'s current slice. The new buffer's {@code readerIndex}
* and {@code writerIndex} are {@code 0} and {@code buffer.remaining}
* respectively.
*/
public static ByteBuf copiedBuffer(ByteBuffer buffer) {
int length = buffer.remaining();
if (length == 0) {
return EMPTY_BUFFER;
}
byte[] copy = PlatformDependent.allocateUninitializedArray(length);
// Duplicate the buffer so we do not adjust the position during our get operation.
// See https://github.com/netty/netty/issues/3896
ByteBuffer duplicate = buffer.duplicate();
duplicate.get(copy);
return wrappedBuffer(copy).order(duplicate.order());
}
/**
* Creates a new buffer whose content is a copy of the specified
* {@code buffer}'s readable bytes. The new buffer's {@code readerIndex}
* and {@code writerIndex} are {@code 0} and {@code buffer.readableBytes}
* respectively.
*/
public static ByteBuf copiedBuffer(ByteBuf buffer) {
int readable = buffer.readableBytes();
if (readable > 0) {
ByteBuf copy = buffer(readable);
copy.writeBytes(buffer, buffer.readerIndex(), readable);
return copy;
} else {
return EMPTY_BUFFER;
}
}
/**
* Creates a new big-endian buffer whose content is a merged copy of
* the specified {@code arrays}. The new buffer's {@code readerIndex}
* and {@code writerIndex} are {@code 0} and the sum of all arrays'
* {@code length} respectively.
*/
public static ByteBuf copiedBuffer(byte[]... arrays) {
switch (arrays.length) {
case 0:
return EMPTY_BUFFER;
case 1:
if (arrays[0].length == 0) {
return EMPTY_BUFFER;
} else {
return copiedBuffer(arrays[0]);
}
}
// Merge the specified arrays into one array.
int length = 0;
for (byte[] a: arrays) {
if (Integer.MAX_VALUE - length < a.length) {
throw new IllegalArgumentException(
"The total length of the specified arrays is too big.");
}
length += a.length;
}
if (length == 0) {
return EMPTY_BUFFER;
}
byte[] mergedArray = PlatformDependent.allocateUninitializedArray(length);
for (int i = 0, j = 0; i < arrays.length; i ++) {
byte[] a = arrays[i];
System.arraycopy(a, 0, mergedArray, j, a.length);
j += a.length;
}
return wrappedBuffer(mergedArray);
}
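// Illustrative example of the merge behaviour documented above:
//
//   ByteBuf merged = copiedBuffer(new byte[] { 1, 2 }, new byte[] { 3 });
//   merged.readableBytes();   // 3
//   merged.getByte(2);        // 3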
/**
* Creates a new buffer whose content is a merged copy of the specified
* {@code buffers}' readable bytes. The new buffer's {@code readerIndex}
* and {@code writerIndex} are {@code 0} and the sum of all buffers'
* {@code readableBytes} respectively.
*
* @throws IllegalArgumentException
* if the specified buffers' endianness are different from each
* other
*/
public static ByteBuf copiedBuffer(ByteBuf... buffers) {
switch (buffers.length) {
case 0:
return EMPTY_BUFFER;
case 1:
return copiedBuffer(buffers[0]);
}
// Merge the specified buffers into one buffer.
ByteOrder order = null;
int length = 0;
for (ByteBuf b: buffers) {
int bLen = b.readableBytes();
if (bLen <= 0) {
continue;
}
if (Integer.MAX_VALUE - length < bLen) {
throw new IllegalArgumentException(
"The total length of the specified buffers is too big.");
}
length += bLen;
if (order != null) {
if (!order.equals(b.order())) {
throw new IllegalArgumentException("inconsistent byte order");
}
} else {
order = b.order();
}
}
if (length == 0) {
return EMPTY_BUFFER;
}
byte[] mergedArray = PlatformDependent.allocateUninitializedArray(length);
for (int i = 0, j = 0; i < buffers.length; i ++) {
ByteBuf b = buffers[i];
int bLen = b.readableBytes();
b.getBytes(b.readerIndex(), mergedArray, j, bLen);
j += bLen;
}
return wrappedBuffer(mergedArray).order(order);
}
/**
* Creates a new buffer whose content is a merged copy of the specified
* {@code buffers}' slices. The new buffer's {@code readerIndex} and
* {@code writerIndex} are {@code 0} and the sum of all buffers'
* {@code remaining} respectively.
*
* @throws IllegalArgumentException
* if the specified buffers' endianness are different from each
* other
*/
public static ByteBuf copiedBuffer(ByteBuffer... buffers) {
switch (buffers.length) {
case 0:
return EMPTY_BUFFER;
case 1:
return copiedBuffer(buffers[0]);
}
// Merge the specified buffers into one buffer.
ByteOrder order = null;
int length = 0;
for (ByteBuffer b: buffers) {
int bLen = b.remaining();
if (bLen <= 0) {
continue;
}
if (Integer.MAX_VALUE - length < bLen) {
throw new IllegalArgumentException(
"The total length of the specified buffers is too big.");
}
length += bLen;
if (order != null) {
if (!order.equals(b.order())) {
throw new IllegalArgumentException("inconsistent byte order");
}
} else {
order = b.order();
}
}
if (length == 0) {
return EMPTY_BUFFER;
}
byte[] mergedArray = PlatformDependent.allocateUninitializedArray(length);
for (int i = 0, j = 0; i < buffers.length; i ++) {
// Duplicate the buffer so we do not adjust the position during our get operation.
// See https://github.com/netty/netty/issues/3896
ByteBuffer b = buffers[i].duplicate();
int bLen = b.remaining();
b.get(mergedArray, j, bLen);
j += bLen;
}
return wrappedBuffer(mergedArray).order(order);
}
/**
* Creates a new big-endian buffer whose content is the specified
* {@code string} encoded in the specified {@code charset}.
* The new buffer's {@code readerIndex} and {@code writerIndex} are
* {@code 0} and the length of the encoded string respectively.
*/
public static ByteBuf copiedBuffer(CharSequence string, Charset charset) {
ObjectUtil.checkNotNull(string, "string");
if (CharsetUtil.UTF_8.equals(charset)) {
return copiedBufferUtf8(string);
}
if (CharsetUtil.US_ASCII.equals(charset)) {
return copiedBufferAscii(string);
}
if (string instanceof CharBuffer) {
return copiedBuffer((CharBuffer) string, charset);
}
return copiedBuffer(CharBuffer.wrap(string), charset);
}
private static ByteBuf copiedBufferUtf8(CharSequence string) {
boolean release = true;
// Mimic the same behavior as other copiedBuffer implementations.
ByteBuf buffer = ALLOC.heapBuffer(ByteBufUtil.utf8Bytes(string));
try {
ByteBufUtil.writeUtf8(buffer, string);
release = false;
return buffer;
} finally {
if (release) {
buffer.release();
}
}
}
private static ByteBuf copiedBufferAscii(CharSequence string) {
boolean release = true;
// Mimic the same behavior as other copiedBuffer implementations.
ByteBuf buffer = ALLOC.heapBuffer(string.length());
try {
ByteBufUtil.writeAscii(buffer, string);
release = false;
return buffer;
} finally {
if (release) {
buffer.release();
}
}
}
/**
* Creates a new big-endian buffer whose content is a subregion of
* the specified {@code string} encoded in the specified {@code charset}.
* The new buffer's {@code readerIndex} and {@code writerIndex} are
* {@code 0} and the length of the encoded string respectively.
*/
public static ByteBuf copiedBuffer(
CharSequence string, int offset, int length, Charset charset) {
ObjectUtil.checkNotNull(string, "string");
if (length == 0) {
return EMPTY_BUFFER;
}
if (string instanceof CharBuffer) {
CharBuffer buf = (CharBuffer) string;
if (buf.hasArray()) {
return copiedBuffer(
buf.array(),
buf.arrayOffset() + buf.position() + offset,
length, charset);
}
buf = buf.slice();
buf.limit(length);
buf.position(offset);
return copiedBuffer(buf, charset);
}
return copiedBuffer(CharBuffer.wrap(string, offset, offset + length), charset);
}
/**
* Creates a new big-endian buffer whose content is the specified
* {@code array} encoded in the specified {@code charset}.
* The new buffer's {@code readerIndex} and {@code writerIndex} are
* {@code 0} and the length of the encoded string respectively.
*/
public static ByteBuf copiedBuffer(char[] array, Charset charset) {
ObjectUtil.checkNotNull(array, "array");
return copiedBuffer(array, 0, array.length, charset);
}
/**
* Creates a new big-endian buffer whose content is a subregion of
* the specified {@code array} encoded in the specified {@code charset}.
* The new buffer's {@code readerIndex} and {@code writerIndex} are
* {@code 0} and the length of the encoded string respectively.
*/
public static ByteBuf copiedBuffer(char[] array, int offset, int length, Charset charset) {
ObjectUtil.checkNotNull(array, "array");
if (length == 0) {
return EMPTY_BUFFER;
}
return copiedBuffer(CharBuffer.wrap(array, offset, length), charset);
}
private static ByteBuf copiedBuffer(CharBuffer buffer, Charset charset) {
return ByteBufUtil.encodeString0(ALLOC, true, buffer, charset, 0);
}
/**
* Creates a read-only buffer which disallows any modification operations
* on the specified {@code buffer}. The new buffer has the same
     * {@code readerIndex} and {@code writerIndex} as the specified
* {@code buffer}.
*
* @deprecated Use {@link ByteBuf#asReadOnly()}.
*/
@Deprecated
public static ByteBuf unmodifiableBuffer(ByteBuf buffer) {
ByteOrder endianness = buffer.order();
if (endianness == BIG_ENDIAN) {
return new ReadOnlyByteBuf(buffer);
}
return new ReadOnlyByteBuf(buffer.order(BIG_ENDIAN)).order(LITTLE_ENDIAN);
}
/**
* Creates a new 4-byte big-endian buffer that holds the specified 32-bit integer.
*/
public static ByteBuf copyInt(int value) {
ByteBuf buf = buffer(4);
buf.writeInt(value);
return buf;
}
/**
     * Create a new big-endian buffer that holds a sequence of the specified 32-bit integers.
*/
public static ByteBuf copyInt(int... values) {
if (values == null || values.length == 0) {
return EMPTY_BUFFER;
}
ByteBuf buffer = buffer(values.length * 4);
for (int v: values) {
buffer.writeInt(v);
}
return buffer;
}
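    // Illustrative sketch, not part of the original Netty source: shows the copyInt helpers above.
    // The method name and values are hypothetical examples.
    private static ByteBuf copyIntExample() {
        ByteBuf one = copyInt(0x01020304); // 4 bytes: 01 02 03 04 (big-endian)
        ByteBuf many = copyInt(1, 2, 3);   // 12 bytes, three big-endian ints back to back
        one.release();
        return many;
    }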
/**
* Creates a new 2-byte big-endian buffer that holds the specified 16-bit integer.
*/
public static ByteBuf copyShort(int value) {
ByteBuf buf = buffer(2);
buf.writeShort(value);
return buf;
}
/**
* Create a new big-endian buffer that holds a sequence of the specified 16-bit integers.
*/
public static ByteBuf copyShort(short... values) {
if (values == null || values.length == 0) {
return EMPTY_BUFFER;
}
ByteBuf buffer = buffer(values.length * 2);
for (int v: values) {
buffer.writeShort(v);
}
return buffer;
}
/**
* Create a new big-endian buffer that holds a sequence of the specified 16-bit integers.
*/
public static ByteBuf copyShort(int... values) {
if (values == null || values.length == 0) {
return EMPTY_BUFFER;
}
ByteBuf buffer = buffer(values.length * 2);
for (int v: values) {
buffer.writeShort(v);
}
return buffer;
}
/**
* Creates a new 3-byte big-endian buffer that holds the specified 24-bit integer.
*/
public static ByteBuf copyMedium(int value) {
ByteBuf buf = buffer(3);
buf.writeMedium(value);
return buf;
}
/**
* Create a new big-endian buffer that holds a sequence of the specified 24-bit integers.
*/
public static ByteBuf copyMedium(int... values) {
if (values == null || values.length == 0) {
return EMPTY_BUFFER;
}
ByteBuf buffer = buffer(values.length * 3);
for (int v: values) {
buffer.writeMedium(v);
}
return buffer;
}
/**
* Creates a new 8-byte big-endian buffer that holds the specified 64-bit integer.
*/
public static ByteBuf copyLong(long value) {
ByteBuf buf = buffer(8);
buf.writeLong(value);
return buf;
}
/**
* Create a new big-endian buffer that holds a sequence of the specified 64-bit integers.
*/
public static ByteBuf copyLong(long... values) {
if (values == null || values.length == 0) {
return EMPTY_BUFFER;
}
ByteBuf buffer = buffer(values.length * 8);
for (long v: values) {
buffer.writeLong(v);
}
return buffer;
}
/**
* Creates a new single-byte big-endian buffer that holds the specified boolean value.
*/
public static ByteBuf copyBoolean(boolean value) {
ByteBuf buf = buffer(1);
buf.writeBoolean(value);
return buf;
}
/**
* Create a new big-endian buffer that holds a sequence of the specified boolean values.
*/
public static ByteBuf copyBoolean(boolean... values) {
if (values == null || values.length == 0) {
return EMPTY_BUFFER;
}
ByteBuf buffer = buffer(values.length);
for (boolean v: values) {
buffer.writeBoolean(v);
}
return buffer;
}
/**
* Creates a new 4-byte big-endian buffer that holds the specified 32-bit floating point number.
*/
public static ByteBuf copyFloat(float value) {
ByteBuf buf = buffer(4);
buf.writeFloat(value);
return buf;
}
/**
* Create a new big-endian buffer that holds a sequence of the specified 32-bit floating point numbers.
*/
public static ByteBuf copyFloat(float... values) {
if (values == null || values.length == 0) {
return EMPTY_BUFFER;
}
ByteBuf buffer = buffer(values.length * 4);
for (float v: values) {
buffer.writeFloat(v);
}
return buffer;
}
/**
* Creates a new 8-byte big-endian buffer that holds the specified 64-bit floating point number.
*/
public static ByteBuf copyDouble(double value) {
ByteBuf buf = buffer(8);
buf.writeDouble(value);
return buf;
}
/**
* Create a new big-endian buffer that holds a sequence of the specified 64-bit floating point numbers.
*/
public static ByteBuf copyDouble(double... values) {
if (values == null || values.length == 0) {
return EMPTY_BUFFER;
}
ByteBuf buffer = buffer(values.length * 8);
for (double v: values) {
buffer.writeDouble(v);
}
return buffer;
}
/**
     * Return an unreleasable view on the given {@link ByteBuf} which will just ignore release and retain calls.
*/
public static ByteBuf unreleasableBuffer(ByteBuf buf) {
return new UnreleasableByteBuf(buf);
}
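    // Illustrative sketch, not part of the original Netty source: the view returned by
    // unreleasableBuffer(ByteBuf) ignores release(), which is useful for shared constants.
    // The method name and values are hypothetical examples.
    private static void unreleasableBufferExample() {
        ByteBuf constant = unreleasableBuffer(copiedBuffer("PING", CharsetUtil.US_ASCII));
        constant.release();            // no-op: the underlying buffer is not freed
        assert constant.refCnt() > 0;  // still usable afterwards
    }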
/**
     * Wrap the given {@link ByteBuf}s in an unmodifiable {@link ByteBuf}. Be aware that the returned {@link ByteBuf}
     * will not try to slice the given {@link ByteBuf}s to reduce GC pressure.
*
* @deprecated Use {@link #wrappedUnmodifiableBuffer(ByteBuf...)}.
*/
@Deprecated
public static ByteBuf unmodifiableBuffer(ByteBuf... buffers) {
return wrappedUnmodifiableBuffer(true, buffers);
}
/**
     * Wrap the given {@link ByteBuf}s in an unmodifiable {@link ByteBuf}. Be aware that the returned {@link ByteBuf}
     * will not try to slice the given {@link ByteBuf}s to reduce GC pressure.
*
* The returned {@link ByteBuf} may wrap the provided array directly, and so should not be subsequently modified.
*/
public static ByteBuf wrappedUnmodifiableBuffer(ByteBuf... buffers) {
return wrappedUnmodifiableBuffer(false, buffers);
}
private static ByteBuf wrappedUnmodifiableBuffer(boolean copy, ByteBuf... buffers) {
switch (buffers.length) {
case 0:
return EMPTY_BUFFER;
case 1:
return buffers[0].asReadOnly();
default:
if (copy) {
buffers = Arrays.copyOf(buffers, buffers.length, ByteBuf[].class);
}
return new FixedCompositeByteBuf(ALLOC, buffers);
}
}
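    // Illustrative sketch, not part of the original Netty source: composes two buffers into a
    // read-only view without copying, as wrappedUnmodifiableBuffer(ByteBuf...) above does.
    // The method name and values are hypothetical examples.
    private static ByteBuf wrappedUnmodifiableBufferExample() {
        ByteBuf header = copyShort(0x0102);
        ByteBuf body = copiedBuffer("payload", CharsetUtil.US_ASCII);
        ByteBuf frame = wrappedUnmodifiableBuffer(header, body); // 9 readable bytes, not writable
        return frame;
    }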
private Unpooled() {
// Unused
}
}

View file

@@ -0,0 +1,269 @@
/*
* Copyright 2012 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.internal.LongCounter;
import io.netty.util.internal.PlatformDependent;
import io.netty.util.internal.StringUtil;
import java.nio.ByteBuffer;
/**
* Simplistic {@link ByteBufAllocator} implementation that does not pool anything.
*/
public final class UnpooledByteBufAllocator extends AbstractByteBufAllocator implements ByteBufAllocatorMetricProvider {
private final UnpooledByteBufAllocatorMetric metric = new UnpooledByteBufAllocatorMetric();
private final boolean disableLeakDetector;
private final boolean noCleaner;
/**
* Default instance which uses leak-detection for direct buffers.
*/
public static final UnpooledByteBufAllocator DEFAULT =
new UnpooledByteBufAllocator(PlatformDependent.directBufferPreferred());
/**
* Create a new instance which uses leak-detection for direct buffers.
*
* @param preferDirect {@code true} if {@link #buffer(int)} should try to allocate a direct buffer rather than
* a heap buffer
*/
public UnpooledByteBufAllocator(boolean preferDirect) {
this(preferDirect, false);
}
/**
* Create a new instance
*
* @param preferDirect {@code true} if {@link #buffer(int)} should try to allocate a direct buffer rather than
* a heap buffer
 * @param disableLeakDetector {@code true} if the leak-detection should be disabled completely for this
 *                            allocator. This can be useful if the user just wants to depend on the GC to handle
 *                            direct buffers when they are not explicitly released.
*/
public UnpooledByteBufAllocator(boolean preferDirect, boolean disableLeakDetector) {
this(preferDirect, disableLeakDetector, PlatformDependent.useDirectBufferNoCleaner());
}
/**
* Create a new instance
*
* @param preferDirect {@code true} if {@link #buffer(int)} should try to allocate a direct buffer rather than
* a heap buffer
 * @param disableLeakDetector {@code true} if the leak-detection should be disabled completely for this
 *                            allocator. This can be useful if the user just wants to depend on the GC to handle
 *                            direct buffers when they are not explicitly released.
* @param tryNoCleaner {@code true} if we should try to use {@link PlatformDependent#allocateDirectNoCleaner(int)}
* to allocate direct memory.
*/
public UnpooledByteBufAllocator(boolean preferDirect, boolean disableLeakDetector, boolean tryNoCleaner) {
super(preferDirect);
this.disableLeakDetector = disableLeakDetector;
noCleaner = tryNoCleaner && PlatformDependent.hasUnsafe()
&& PlatformDependent.hasDirectBufferNoCleanerConstructor();
}
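    // Illustrative sketch, not part of the original Netty source: typical use of this allocator.
    // The method name and sizes are hypothetical examples.
    private static void allocateExample() {
        UnpooledByteBufAllocator alloc = new UnpooledByteBufAllocator(true);
        ByteBuf direct = alloc.directBuffer(64); // off-heap, leak-detected by default
        ByteBuf heap = alloc.heapBuffer(64);     // backed by a byte[]
        direct.writeLong(42L);
        heap.writeBytes(direct, 0, 8);
        direct.release();                        // frees the direct memory eagerly
        heap.release();
    }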
@Override
protected ByteBuf newHeapBuffer(int initialCapacity, int maxCapacity) {
return PlatformDependent.hasUnsafe() ?
new InstrumentedUnpooledUnsafeHeapByteBuf(this, initialCapacity, maxCapacity) :
new InstrumentedUnpooledHeapByteBuf(this, initialCapacity, maxCapacity);
}
@Override
protected ByteBuf newDirectBuffer(int initialCapacity, int maxCapacity) {
final ByteBuf buf;
if (PlatformDependent.hasUnsafe()) {
buf = noCleaner ? new InstrumentedUnpooledUnsafeNoCleanerDirectByteBuf(this, initialCapacity, maxCapacity) :
new InstrumentedUnpooledUnsafeDirectByteBuf(this, initialCapacity, maxCapacity);
} else {
buf = new InstrumentedUnpooledDirectByteBuf(this, initialCapacity, maxCapacity);
}
return disableLeakDetector ? buf : toLeakAwareBuffer(buf);
}
@Override
public CompositeByteBuf compositeHeapBuffer(int maxNumComponents) {
CompositeByteBuf buf = new CompositeByteBuf(this, false, maxNumComponents);
return disableLeakDetector ? buf : toLeakAwareBuffer(buf);
}
@Override
public CompositeByteBuf compositeDirectBuffer(int maxNumComponents) {
CompositeByteBuf buf = new CompositeByteBuf(this, true, maxNumComponents);
return disableLeakDetector ? buf : toLeakAwareBuffer(buf);
}
@Override
public boolean isDirectBufferPooled() {
return false;
}
@Override
public ByteBufAllocatorMetric metric() {
return metric;
}
void incrementDirect(int amount) {
metric.directCounter.add(amount);
}
void decrementDirect(int amount) {
metric.directCounter.add(-amount);
}
void incrementHeap(int amount) {
metric.heapCounter.add(amount);
}
void decrementHeap(int amount) {
metric.heapCounter.add(-amount);
}
private static final class InstrumentedUnpooledUnsafeHeapByteBuf extends UnpooledUnsafeHeapByteBuf {
InstrumentedUnpooledUnsafeHeapByteBuf(UnpooledByteBufAllocator alloc, int initialCapacity, int maxCapacity) {
super(alloc, initialCapacity, maxCapacity);
}
@Override
protected byte[] allocateArray(int initialCapacity) {
byte[] bytes = super.allocateArray(initialCapacity);
((UnpooledByteBufAllocator) alloc()).incrementHeap(bytes.length);
return bytes;
}
@Override
protected void freeArray(byte[] array) {
int length = array.length;
super.freeArray(array);
((UnpooledByteBufAllocator) alloc()).decrementHeap(length);
}
}
private static final class InstrumentedUnpooledHeapByteBuf extends UnpooledHeapByteBuf {
InstrumentedUnpooledHeapByteBuf(UnpooledByteBufAllocator alloc, int initialCapacity, int maxCapacity) {
super(alloc, initialCapacity, maxCapacity);
}
@Override
protected byte[] allocateArray(int initialCapacity) {
byte[] bytes = super.allocateArray(initialCapacity);
((UnpooledByteBufAllocator) alloc()).incrementHeap(bytes.length);
return bytes;
}
@Override
protected void freeArray(byte[] array) {
int length = array.length;
super.freeArray(array);
((UnpooledByteBufAllocator) alloc()).decrementHeap(length);
}
}
private static final class InstrumentedUnpooledUnsafeNoCleanerDirectByteBuf
extends UnpooledUnsafeNoCleanerDirectByteBuf {
InstrumentedUnpooledUnsafeNoCleanerDirectByteBuf(
UnpooledByteBufAllocator alloc, int initialCapacity, int maxCapacity) {
super(alloc, initialCapacity, maxCapacity);
}
@Override
protected ByteBuffer allocateDirect(int initialCapacity) {
ByteBuffer buffer = super.allocateDirect(initialCapacity);
((UnpooledByteBufAllocator) alloc()).incrementDirect(buffer.capacity());
return buffer;
}
@Override
ByteBuffer reallocateDirect(ByteBuffer oldBuffer, int initialCapacity) {
int capacity = oldBuffer.capacity();
ByteBuffer buffer = super.reallocateDirect(oldBuffer, initialCapacity);
((UnpooledByteBufAllocator) alloc()).incrementDirect(buffer.capacity() - capacity);
return buffer;
}
@Override
protected void freeDirect(ByteBuffer buffer) {
int capacity = buffer.capacity();
super.freeDirect(buffer);
((UnpooledByteBufAllocator) alloc()).decrementDirect(capacity);
}
}
private static final class InstrumentedUnpooledUnsafeDirectByteBuf extends UnpooledUnsafeDirectByteBuf {
InstrumentedUnpooledUnsafeDirectByteBuf(
UnpooledByteBufAllocator alloc, int initialCapacity, int maxCapacity) {
super(alloc, initialCapacity, maxCapacity);
}
@Override
protected ByteBuffer allocateDirect(int initialCapacity) {
ByteBuffer buffer = super.allocateDirect(initialCapacity);
((UnpooledByteBufAllocator) alloc()).incrementDirect(buffer.capacity());
return buffer;
}
@Override
protected void freeDirect(ByteBuffer buffer) {
int capacity = buffer.capacity();
super.freeDirect(buffer);
((UnpooledByteBufAllocator) alloc()).decrementDirect(capacity);
}
}
private static final class InstrumentedUnpooledDirectByteBuf extends UnpooledDirectByteBuf {
InstrumentedUnpooledDirectByteBuf(
UnpooledByteBufAllocator alloc, int initialCapacity, int maxCapacity) {
super(alloc, initialCapacity, maxCapacity);
}
@Override
protected ByteBuffer allocateDirect(int initialCapacity) {
ByteBuffer buffer = super.allocateDirect(initialCapacity);
((UnpooledByteBufAllocator) alloc()).incrementDirect(buffer.capacity());
return buffer;
}
@Override
protected void freeDirect(ByteBuffer buffer) {
int capacity = buffer.capacity();
super.freeDirect(buffer);
((UnpooledByteBufAllocator) alloc()).decrementDirect(capacity);
}
}
private static final class UnpooledByteBufAllocatorMetric implements ByteBufAllocatorMetric {
final LongCounter directCounter = PlatformDependent.newLongCounter();
final LongCounter heapCounter = PlatformDependent.newLongCounter();
@Override
public long usedHeapMemory() {
return heapCounter.value();
}
@Override
public long usedDirectMemory() {
return directCounter.value();
}
@Override
public String toString() {
return StringUtil.simpleClassName(this) +
"(usedHeapMemory: " + usedHeapMemory() + "; usedDirectMemory: " + usedDirectMemory() + ')';
}
}
}

View file

@@ -0,0 +1,654 @@
/*
* Copyright 2012 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.internal.ObjectUtil;
import io.netty.util.internal.PlatformDependent;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.FileChannel;
import java.nio.channels.GatheringByteChannel;
import java.nio.channels.ScatteringByteChannel;
import static io.netty.util.internal.ObjectUtil.checkPositiveOrZero;
/**
* A NIO {@link ByteBuffer} based buffer. It is recommended to use
* {@link UnpooledByteBufAllocator#directBuffer(int, int)}, {@link Unpooled#directBuffer(int)} and
* {@link Unpooled#wrappedBuffer(ByteBuffer)} instead of calling the constructor explicitly.
*/
public class UnpooledDirectByteBuf extends AbstractReferenceCountedByteBuf {
private final ByteBufAllocator alloc;
ByteBuffer buffer; // accessed by UnpooledUnsafeNoCleanerDirectByteBuf.reallocateDirect()
private ByteBuffer tmpNioBuf;
private int capacity;
private boolean doNotFree;
/**
* Creates a new direct buffer.
*
* @param initialCapacity the initial capacity of the underlying direct buffer
* @param maxCapacity the maximum capacity of the underlying direct buffer
*/
public UnpooledDirectByteBuf(ByteBufAllocator alloc, int initialCapacity, int maxCapacity) {
super(maxCapacity);
ObjectUtil.checkNotNull(alloc, "alloc");
checkPositiveOrZero(initialCapacity, "initialCapacity");
checkPositiveOrZero(maxCapacity, "maxCapacity");
if (initialCapacity > maxCapacity) {
throw new IllegalArgumentException(String.format(
"initialCapacity(%d) > maxCapacity(%d)", initialCapacity, maxCapacity));
}
this.alloc = alloc;
setByteBuffer(allocateDirect(initialCapacity), false);
}
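    // Illustrative sketch, not part of the original Netty source: the class javadoc above
    // recommends obtaining instances through an allocator or Unpooled rather than this
    // constructor. The method name is a hypothetical example.
    private static ByteBuf recommendedDirectAllocationExample() {
        ByteBuf viaAllocator = UnpooledByteBufAllocator.DEFAULT.directBuffer(32, 128);
        ByteBuf viaUnpooled = Unpooled.directBuffer(32);
        viaAllocator.release();
        return viaUnpooled;
    }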
/**
* Creates a new direct buffer by wrapping the specified initial buffer.
*
* @param maxCapacity the maximum capacity of the underlying direct buffer
*/
protected UnpooledDirectByteBuf(ByteBufAllocator alloc, ByteBuffer initialBuffer, int maxCapacity) {
this(alloc, initialBuffer, maxCapacity, false, true);
}
UnpooledDirectByteBuf(ByteBufAllocator alloc, ByteBuffer initialBuffer,
int maxCapacity, boolean doFree, boolean slice) {
super(maxCapacity);
ObjectUtil.checkNotNull(alloc, "alloc");
ObjectUtil.checkNotNull(initialBuffer, "initialBuffer");
if (!initialBuffer.isDirect()) {
throw new IllegalArgumentException("initialBuffer is not a direct buffer.");
}
if (initialBuffer.isReadOnly()) {
throw new IllegalArgumentException("initialBuffer is a read-only buffer.");
}
int initialCapacity = initialBuffer.remaining();
if (initialCapacity > maxCapacity) {
throw new IllegalArgumentException(String.format(
"initialCapacity(%d) > maxCapacity(%d)", initialCapacity, maxCapacity));
}
this.alloc = alloc;
doNotFree = !doFree;
setByteBuffer((slice ? initialBuffer.slice() : initialBuffer).order(ByteOrder.BIG_ENDIAN), false);
writerIndex(initialCapacity);
}
/**
* Allocate a new direct {@link ByteBuffer} with the given initialCapacity.
*/
protected ByteBuffer allocateDirect(int initialCapacity) {
return ByteBuffer.allocateDirect(initialCapacity);
}
/**
* Free a direct {@link ByteBuffer}
*/
protected void freeDirect(ByteBuffer buffer) {
PlatformDependent.freeDirectBuffer(buffer);
}
void setByteBuffer(ByteBuffer buffer, boolean tryFree) {
if (tryFree) {
ByteBuffer oldBuffer = this.buffer;
if (oldBuffer != null) {
if (doNotFree) {
doNotFree = false;
} else {
freeDirect(oldBuffer);
}
}
}
this.buffer = buffer;
tmpNioBuf = null;
capacity = buffer.remaining();
}
@Override
public boolean isDirect() {
return true;
}
@Override
public int capacity() {
return capacity;
}
@Override
public ByteBuf capacity(int newCapacity) {
checkNewCapacity(newCapacity);
int oldCapacity = capacity;
if (newCapacity == oldCapacity) {
return this;
}
int bytesToCopy;
if (newCapacity > oldCapacity) {
bytesToCopy = oldCapacity;
} else {
trimIndicesToCapacity(newCapacity);
bytesToCopy = newCapacity;
}
ByteBuffer oldBuffer = buffer;
ByteBuffer newBuffer = allocateDirect(newCapacity);
oldBuffer.position(0).limit(bytesToCopy);
newBuffer.position(0).limit(bytesToCopy);
newBuffer.put(oldBuffer).clear();
setByteBuffer(newBuffer, true);
return this;
}
@Override
public ByteBufAllocator alloc() {
return alloc;
}
@Override
public ByteOrder order() {
return ByteOrder.BIG_ENDIAN;
}
@Override
public boolean hasArray() {
return false;
}
@Override
public byte[] array() {
throw new UnsupportedOperationException("direct buffer");
}
@Override
public int arrayOffset() {
throw new UnsupportedOperationException("direct buffer");
}
@Override
public boolean hasMemoryAddress() {
return false;
}
@Override
public long memoryAddress() {
throw new UnsupportedOperationException();
}
@Override
public byte getByte(int index) {
ensureAccessible();
return _getByte(index);
}
@Override
protected byte _getByte(int index) {
return buffer.get(index);
}
@Override
public short getShort(int index) {
ensureAccessible();
return _getShort(index);
}
@Override
protected short _getShort(int index) {
return buffer.getShort(index);
}
@Override
protected short _getShortLE(int index) {
return ByteBufUtil.swapShort(buffer.getShort(index));
}
@Override
public int getUnsignedMedium(int index) {
ensureAccessible();
return _getUnsignedMedium(index);
}
@Override
protected int _getUnsignedMedium(int index) {
return (getByte(index) & 0xff) << 16 |
(getByte(index + 1) & 0xff) << 8 |
getByte(index + 2) & 0xff;
}
@Override
protected int _getUnsignedMediumLE(int index) {
return getByte(index) & 0xff |
(getByte(index + 1) & 0xff) << 8 |
(getByte(index + 2) & 0xff) << 16;
}
@Override
public int getInt(int index) {
ensureAccessible();
return _getInt(index);
}
@Override
protected int _getInt(int index) {
return buffer.getInt(index);
}
@Override
protected int _getIntLE(int index) {
return ByteBufUtil.swapInt(buffer.getInt(index));
}
@Override
public long getLong(int index) {
ensureAccessible();
return _getLong(index);
}
@Override
protected long _getLong(int index) {
return buffer.getLong(index);
}
@Override
protected long _getLongLE(int index) {
return ByteBufUtil.swapLong(buffer.getLong(index));
}
@Override
public ByteBuf getBytes(int index, ByteBuf dst, int dstIndex, int length) {
checkDstIndex(index, length, dstIndex, dst.capacity());
if (dst.hasArray()) {
getBytes(index, dst.array(), dst.arrayOffset() + dstIndex, length);
} else if (dst.nioBufferCount() > 0) {
for (ByteBuffer bb: dst.nioBuffers(dstIndex, length)) {
int bbLen = bb.remaining();
getBytes(index, bb);
index += bbLen;
}
} else {
dst.setBytes(dstIndex, this, index, length);
}
return this;
}
@Override
public ByteBuf getBytes(int index, byte[] dst, int dstIndex, int length) {
getBytes(index, dst, dstIndex, length, false);
return this;
}
void getBytes(int index, byte[] dst, int dstIndex, int length, boolean internal) {
checkDstIndex(index, length, dstIndex, dst.length);
ByteBuffer tmpBuf;
if (internal) {
tmpBuf = internalNioBuffer();
} else {
tmpBuf = buffer.duplicate();
}
tmpBuf.clear().position(index).limit(index + length);
tmpBuf.get(dst, dstIndex, length);
}
@Override
public ByteBuf readBytes(byte[] dst, int dstIndex, int length) {
checkReadableBytes(length);
getBytes(readerIndex, dst, dstIndex, length, true);
readerIndex += length;
return this;
}
@Override
public ByteBuf getBytes(int index, ByteBuffer dst) {
getBytes(index, dst, false);
return this;
}
void getBytes(int index, ByteBuffer dst, boolean internal) {
checkIndex(index, dst.remaining());
ByteBuffer tmpBuf;
if (internal) {
tmpBuf = internalNioBuffer();
} else {
tmpBuf = buffer.duplicate();
}
tmpBuf.clear().position(index).limit(index + dst.remaining());
dst.put(tmpBuf);
}
@Override
public ByteBuf readBytes(ByteBuffer dst) {
int length = dst.remaining();
checkReadableBytes(length);
getBytes(readerIndex, dst, true);
readerIndex += length;
return this;
}
@Override
public ByteBuf setByte(int index, int value) {
ensureAccessible();
_setByte(index, value);
return this;
}
@Override
protected void _setByte(int index, int value) {
buffer.put(index, (byte) value);
}
@Override
public ByteBuf setShort(int index, int value) {
ensureAccessible();
_setShort(index, value);
return this;
}
@Override
protected void _setShort(int index, int value) {
buffer.putShort(index, (short) value);
}
@Override
protected void _setShortLE(int index, int value) {
buffer.putShort(index, ByteBufUtil.swapShort((short) value));
}
@Override
public ByteBuf setMedium(int index, int value) {
ensureAccessible();
_setMedium(index, value);
return this;
}
@Override
protected void _setMedium(int index, int value) {
setByte(index, (byte) (value >>> 16));
setByte(index + 1, (byte) (value >>> 8));
setByte(index + 2, (byte) value);
}
@Override
protected void _setMediumLE(int index, int value) {
setByte(index, (byte) value);
setByte(index + 1, (byte) (value >>> 8));
setByte(index + 2, (byte) (value >>> 16));
}
@Override
public ByteBuf setInt(int index, int value) {
ensureAccessible();
_setInt(index, value);
return this;
}
@Override
protected void _setInt(int index, int value) {
buffer.putInt(index, value);
}
@Override
protected void _setIntLE(int index, int value) {
buffer.putInt(index, ByteBufUtil.swapInt(value));
}
@Override
public ByteBuf setLong(int index, long value) {
ensureAccessible();
_setLong(index, value);
return this;
}
@Override
protected void _setLong(int index, long value) {
buffer.putLong(index, value);
}
@Override
protected void _setLongLE(int index, long value) {
buffer.putLong(index, ByteBufUtil.swapLong(value));
}
@Override
public ByteBuf setBytes(int index, ByteBuf src, int srcIndex, int length) {
checkSrcIndex(index, length, srcIndex, src.capacity());
if (src.nioBufferCount() > 0) {
for (ByteBuffer bb: src.nioBuffers(srcIndex, length)) {
int bbLen = bb.remaining();
setBytes(index, bb);
index += bbLen;
}
} else {
src.getBytes(srcIndex, this, index, length);
}
return this;
}
@Override
public ByteBuf setBytes(int index, byte[] src, int srcIndex, int length) {
checkSrcIndex(index, length, srcIndex, src.length);
ByteBuffer tmpBuf = internalNioBuffer();
tmpBuf.clear().position(index).limit(index + length);
tmpBuf.put(src, srcIndex, length);
return this;
}
@Override
public ByteBuf setBytes(int index, ByteBuffer src) {
ensureAccessible();
ByteBuffer tmpBuf = internalNioBuffer();
if (src == tmpBuf) {
src = src.duplicate();
}
tmpBuf.clear().position(index).limit(index + src.remaining());
tmpBuf.put(src);
return this;
}
@Override
public ByteBuf getBytes(int index, OutputStream out, int length) throws IOException {
getBytes(index, out, length, false);
return this;
}
void getBytes(int index, OutputStream out, int length, boolean internal) throws IOException {
ensureAccessible();
if (length == 0) {
return;
}
ByteBufUtil.readBytes(alloc(), internal ? internalNioBuffer() : buffer.duplicate(), index, length, out);
}
@Override
public ByteBuf readBytes(OutputStream out, int length) throws IOException {
checkReadableBytes(length);
getBytes(readerIndex, out, length, true);
readerIndex += length;
return this;
}
@Override
public int getBytes(int index, GatheringByteChannel out, int length) throws IOException {
return getBytes(index, out, length, false);
}
private int getBytes(int index, GatheringByteChannel out, int length, boolean internal) throws IOException {
ensureAccessible();
if (length == 0) {
return 0;
}
ByteBuffer tmpBuf;
if (internal) {
tmpBuf = internalNioBuffer();
} else {
tmpBuf = buffer.duplicate();
}
tmpBuf.clear().position(index).limit(index + length);
return out.write(tmpBuf);
}
@Override
public int getBytes(int index, FileChannel out, long position, int length) throws IOException {
return getBytes(index, out, position, length, false);
}
private int getBytes(int index, FileChannel out, long position, int length, boolean internal) throws IOException {
ensureAccessible();
if (length == 0) {
return 0;
}
ByteBuffer tmpBuf = internal ? internalNioBuffer() : buffer.duplicate();
tmpBuf.clear().position(index).limit(index + length);
return out.write(tmpBuf, position);
}
@Override
public int readBytes(GatheringByteChannel out, int length) throws IOException {
checkReadableBytes(length);
int readBytes = getBytes(readerIndex, out, length, true);
readerIndex += readBytes;
return readBytes;
}
@Override
public int readBytes(FileChannel out, long position, int length) throws IOException {
checkReadableBytes(length);
int readBytes = getBytes(readerIndex, out, position, length, true);
readerIndex += readBytes;
return readBytes;
}
@Override
public int setBytes(int index, InputStream in, int length) throws IOException {
ensureAccessible();
if (buffer.hasArray()) {
return in.read(buffer.array(), buffer.arrayOffset() + index, length);
} else {
byte[] tmp = ByteBufUtil.threadLocalTempArray(length);
int readBytes = in.read(tmp, 0, length);
if (readBytes <= 0) {
return readBytes;
}
ByteBuffer tmpBuf = internalNioBuffer();
tmpBuf.clear().position(index);
tmpBuf.put(tmp, 0, readBytes);
return readBytes;
}
}
@Override
public int setBytes(int index, ScatteringByteChannel in, int length) throws IOException {
ensureAccessible();
ByteBuffer tmpBuf = internalNioBuffer();
tmpBuf.clear().position(index).limit(index + length);
try {
return in.read(tmpBuf);
} catch (ClosedChannelException ignored) {
return -1;
}
}
@Override
public int setBytes(int index, FileChannel in, long position, int length) throws IOException {
ensureAccessible();
ByteBuffer tmpBuf = internalNioBuffer();
tmpBuf.clear().position(index).limit(index + length);
try {
return in.read(tmpBuf, position);
} catch (ClosedChannelException ignored) {
return -1;
}
}
@Override
public int nioBufferCount() {
return 1;
}
@Override
public ByteBuffer[] nioBuffers(int index, int length) {
return new ByteBuffer[] { nioBuffer(index, length) };
}
@Override
public final boolean isContiguous() {
return true;
}
@Override
public ByteBuf copy(int index, int length) {
ensureAccessible();
ByteBuffer src;
try {
src = (ByteBuffer) buffer.duplicate().clear().position(index).limit(index + length);
} catch (IllegalArgumentException ignored) {
throw new IndexOutOfBoundsException("Too many bytes to read - Need " + (index + length));
}
return alloc().directBuffer(length, maxCapacity()).writeBytes(src);
}
@Override
public ByteBuffer internalNioBuffer(int index, int length) {
checkIndex(index, length);
return (ByteBuffer) internalNioBuffer().clear().position(index).limit(index + length);
}
private ByteBuffer internalNioBuffer() {
ByteBuffer tmpNioBuf = this.tmpNioBuf;
if (tmpNioBuf == null) {
this.tmpNioBuf = tmpNioBuf = buffer.duplicate();
}
return tmpNioBuf;
}
@Override
public ByteBuffer nioBuffer(int index, int length) {
checkIndex(index, length);
return ((ByteBuffer) buffer.duplicate().position(index).limit(index + length)).slice();
}
@Override
protected void deallocate() {
ByteBuffer buffer = this.buffer;
if (buffer == null) {
return;
}
this.buffer = null;
if (!doNotFree) {
freeDirect(buffer);
}
}
@Override
public ByteBuf unwrap() {
return null;
}
}

View file

@@ -0,0 +1,121 @@
/*
* Copyright 2015 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
/**
* {@link DuplicatedByteBuf} implementation that can do optimizations because it knows the duplicated buffer
* is of type {@link AbstractByteBuf}.
*/
class UnpooledDuplicatedByteBuf extends DuplicatedByteBuf {
UnpooledDuplicatedByteBuf(AbstractByteBuf buffer) {
super(buffer);
}
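    // Illustrative sketch, not part of the original Netty source: a duplicate shares the
    // underlying storage of the wrapped buffer but keeps its own reader/writer indices.
    // The method name and values are hypothetical examples.
    private static ByteBuf duplicateExample() {
        ByteBuf original = Unpooled.buffer(16).writeInt(42);
        ByteBuf duplicate = original.duplicate(); // typically an UnpooledDuplicatedByteBuf
        duplicate.readInt();                      // moves only the duplicate's readerIndex
        assert original.readerIndex() == 0;
        return original;
    }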
@Override
public AbstractByteBuf unwrap() {
return (AbstractByteBuf) super.unwrap();
}
@Override
protected byte _getByte(int index) {
return unwrap()._getByte(index);
}
@Override
protected short _getShort(int index) {
return unwrap()._getShort(index);
}
@Override
protected short _getShortLE(int index) {
return unwrap()._getShortLE(index);
}
@Override
protected int _getUnsignedMedium(int index) {
return unwrap()._getUnsignedMedium(index);
}
@Override
protected int _getUnsignedMediumLE(int index) {
return unwrap()._getUnsignedMediumLE(index);
}
@Override
protected int _getInt(int index) {
return unwrap()._getInt(index);
}
@Override
protected int _getIntLE(int index) {
return unwrap()._getIntLE(index);
}
@Override
protected long _getLong(int index) {
return unwrap()._getLong(index);
}
@Override
protected long _getLongLE(int index) {
return unwrap()._getLongLE(index);
}
@Override
protected void _setByte(int index, int value) {
unwrap()._setByte(index, value);
}
@Override
protected void _setShort(int index, int value) {
unwrap()._setShort(index, value);
}
@Override
protected void _setShortLE(int index, int value) {
unwrap()._setShortLE(index, value);
}
@Override
protected void _setMedium(int index, int value) {
unwrap()._setMedium(index, value);
}
@Override
protected void _setMediumLE(int index, int value) {
unwrap()._setMediumLE(index, value);
}
@Override
protected void _setInt(int index, int value) {
unwrap()._setInt(index, value);
}
@Override
protected void _setIntLE(int index, int value) {
unwrap()._setIntLE(index, value);
}
@Override
protected void _setLong(int index, long value) {
unwrap()._setLong(index, value);
}
@Override
protected void _setLongLE(int index, long value) {
unwrap()._setLongLE(index, value);
}
}

View file

@@ -0,0 +1,556 @@
/*
* Copyright 2012 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.internal.EmptyArrays;
import io.netty.util.internal.PlatformDependent;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.FileChannel;
import java.nio.channels.GatheringByteChannel;
import java.nio.channels.ScatteringByteChannel;
import static io.netty.util.internal.ObjectUtil.checkNotNull;
/**
* Big endian Java heap buffer implementation. It is recommended to use
* {@link UnpooledByteBufAllocator#heapBuffer(int, int)}, {@link Unpooled#buffer(int)} and
* {@link Unpooled#wrappedBuffer(byte[])} instead of calling the constructor explicitly.
*/
public class UnpooledHeapByteBuf extends AbstractReferenceCountedByteBuf {
private final ByteBufAllocator alloc;
byte[] array;
private ByteBuffer tmpNioBuf;
/**
* Creates a new heap buffer with a newly allocated byte array.
*
* @param initialCapacity the initial capacity of the underlying byte array
* @param maxCapacity the max capacity of the underlying byte array
*/
public UnpooledHeapByteBuf(ByteBufAllocator alloc, int initialCapacity, int maxCapacity) {
super(maxCapacity);
if (initialCapacity > maxCapacity) {
throw new IllegalArgumentException(String.format(
"initialCapacity(%d) > maxCapacity(%d)", initialCapacity, maxCapacity));
}
this.alloc = checkNotNull(alloc, "alloc");
setArray(allocateArray(initialCapacity));
setIndex(0, 0);
}
/**
* Creates a new heap buffer with an existing byte array.
*
* @param initialArray the initial underlying byte array
* @param maxCapacity the max capacity of the underlying byte array
*/
protected UnpooledHeapByteBuf(ByteBufAllocator alloc, byte[] initialArray, int maxCapacity) {
super(maxCapacity);
checkNotNull(alloc, "alloc");
checkNotNull(initialArray, "initialArray");
if (initialArray.length > maxCapacity) {
throw new IllegalArgumentException(String.format(
"initialCapacity(%d) > maxCapacity(%d)", initialArray.length, maxCapacity));
}
this.alloc = alloc;
setArray(initialArray);
setIndex(0, initialArray.length);
}
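    // Illustrative sketch, not part of the original Netty source: the class javadoc above
    // recommends creating heap buffers through the allocator or the Unpooled helpers.
    // The method name and values are hypothetical examples.
    private static ByteBuf recommendedHeapAllocationExample() {
        ByteBuf viaAllocator = UnpooledByteBufAllocator.DEFAULT.heapBuffer(32, 128);
        ByteBuf wrapped = Unpooled.wrappedBuffer(new byte[] { 1, 2, 3 }); // shares the array, no copy
        viaAllocator.release();
        return wrapped;
    }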
protected byte[] allocateArray(int initialCapacity) {
return new byte[initialCapacity];
}
protected void freeArray(byte[] array) {
// NOOP
}
private void setArray(byte[] initialArray) {
array = initialArray;
tmpNioBuf = null;
}
@Override
public ByteBufAllocator alloc() {
return alloc;
}
@Override
public ByteOrder order() {
return ByteOrder.BIG_ENDIAN;
}
@Override
public boolean isDirect() {
return false;
}
@Override
public int capacity() {
return array.length;
}
@Override
public ByteBuf capacity(int newCapacity) {
checkNewCapacity(newCapacity);
byte[] oldArray = array;
int oldCapacity = oldArray.length;
if (newCapacity == oldCapacity) {
return this;
}
int bytesToCopy;
if (newCapacity > oldCapacity) {
bytesToCopy = oldCapacity;
} else {
trimIndicesToCapacity(newCapacity);
bytesToCopy = newCapacity;
}
byte[] newArray = allocateArray(newCapacity);
System.arraycopy(oldArray, 0, newArray, 0, bytesToCopy);
setArray(newArray);
freeArray(oldArray);
return this;
}
@Override
public boolean hasArray() {
return true;
}
@Override
public byte[] array() {
ensureAccessible();
return array;
}
@Override
public int arrayOffset() {
return 0;
}
@Override
public boolean hasMemoryAddress() {
return false;
}
@Override
public long memoryAddress() {
throw new UnsupportedOperationException();
}
@Override
public ByteBuf getBytes(int index, ByteBuf dst, int dstIndex, int length) {
checkDstIndex(index, length, dstIndex, dst.capacity());
if (dst.hasMemoryAddress()) {
PlatformDependent.copyMemory(array, index, dst.memoryAddress() + dstIndex, length);
} else if (dst.hasArray()) {
getBytes(index, dst.array(), dst.arrayOffset() + dstIndex, length);
} else {
dst.setBytes(dstIndex, array, index, length);
}
return this;
}
@Override
public ByteBuf getBytes(int index, byte[] dst, int dstIndex, int length) {
checkDstIndex(index, length, dstIndex, dst.length);
System.arraycopy(array, index, dst, dstIndex, length);
return this;
}
@Override
public ByteBuf getBytes(int index, ByteBuffer dst) {
ensureAccessible();
dst.put(array, index, dst.remaining());
return this;
}
@Override
public ByteBuf getBytes(int index, OutputStream out, int length) throws IOException {
ensureAccessible();
out.write(array, index, length);
return this;
}
@Override
public int getBytes(int index, GatheringByteChannel out, int length) throws IOException {
ensureAccessible();
return getBytes(index, out, length, false);
}
@Override
public int getBytes(int index, FileChannel out, long position, int length) throws IOException {
ensureAccessible();
return getBytes(index, out, position, length, false);
}
private int getBytes(int index, GatheringByteChannel out, int length, boolean internal) throws IOException {
ensureAccessible();
ByteBuffer tmpBuf;
if (internal) {
tmpBuf = internalNioBuffer();
} else {
tmpBuf = ByteBuffer.wrap(array);
}
return out.write(tmpBuf.clear().position(index).limit(index + length));
}
private int getBytes(int index, FileChannel out, long position, int length, boolean internal) throws IOException {
ensureAccessible();
ByteBuffer tmpBuf = internal ? internalNioBuffer() : ByteBuffer.wrap(array);
return out.write(tmpBuf.clear().position(index).limit(index + length), position);
}
@Override
public int readBytes(GatheringByteChannel out, int length) throws IOException {
checkReadableBytes(length);
int readBytes = getBytes(readerIndex, out, length, true);
readerIndex += readBytes;
return readBytes;
}
@Override
public int readBytes(FileChannel out, long position, int length) throws IOException {
checkReadableBytes(length);
int readBytes = getBytes(readerIndex, out, position, length, true);
readerIndex += readBytes;
return readBytes;
}
@Override
public ByteBuf setBytes(int index, ByteBuf src, int srcIndex, int length) {
checkSrcIndex(index, length, srcIndex, src.capacity());
if (src.hasMemoryAddress()) {
PlatformDependent.copyMemory(src.memoryAddress() + srcIndex, array, index, length);
} else if (src.hasArray()) {
setBytes(index, src.array(), src.arrayOffset() + srcIndex, length);
} else {
src.getBytes(srcIndex, array, index, length);
}
return this;
}
@Override
public ByteBuf setBytes(int index, byte[] src, int srcIndex, int length) {
checkSrcIndex(index, length, srcIndex, src.length);
System.arraycopy(src, srcIndex, array, index, length);
return this;
}
@Override
public ByteBuf setBytes(int index, ByteBuffer src) {
ensureAccessible();
src.get(array, index, src.remaining());
return this;
}
@Override
public int setBytes(int index, InputStream in, int length) throws IOException {
ensureAccessible();
return in.read(array, index, length);
}
@Override
public int setBytes(int index, ScatteringByteChannel in, int length) throws IOException {
ensureAccessible();
try {
return in.read(internalNioBuffer().clear().position(index).limit(index + length));
} catch (ClosedChannelException ignored) {
return -1;
}
}
@Override
public int setBytes(int index, FileChannel in, long position, int length) throws IOException {
ensureAccessible();
try {
return in.read(internalNioBuffer().clear().position(index).limit(index + length), position);
} catch (ClosedChannelException ignored) {
return -1;
}
}
@Override
public int nioBufferCount() {
return 1;
}
@Override
public ByteBuffer nioBuffer(int index, int length) {
ensureAccessible();
return ByteBuffer.wrap(array, index, length).slice();
}
@Override
public ByteBuffer[] nioBuffers(int index, int length) {
return new ByteBuffer[] { nioBuffer(index, length) };
}
@Override
public ByteBuffer internalNioBuffer(int index, int length) {
checkIndex(index, length);
return internalNioBuffer().clear().position(index).limit(index + length);
}
@Override
public final boolean isContiguous() {
return true;
}
@Override
public byte getByte(int index) {
ensureAccessible();
return _getByte(index);
}
@Override
protected byte _getByte(int index) {
return HeapByteBufUtil.getByte(array, index);
}
@Override
public short getShort(int index) {
ensureAccessible();
return _getShort(index);
}
@Override
protected short _getShort(int index) {
return HeapByteBufUtil.getShort(array, index);
}
@Override
public short getShortLE(int index) {
ensureAccessible();
return _getShortLE(index);
}
@Override
protected short _getShortLE(int index) {
return HeapByteBufUtil.getShortLE(array, index);
}
@Override
public int getUnsignedMedium(int index) {
ensureAccessible();
return _getUnsignedMedium(index);
}
@Override
protected int _getUnsignedMedium(int index) {
return HeapByteBufUtil.getUnsignedMedium(array, index);
}
@Override
public int getUnsignedMediumLE(int index) {
ensureAccessible();
return _getUnsignedMediumLE(index);
}
@Override
protected int _getUnsignedMediumLE(int index) {
return HeapByteBufUtil.getUnsignedMediumLE(array, index);
}
@Override
public int getInt(int index) {
ensureAccessible();
return _getInt(index);
}
@Override
protected int _getInt(int index) {
return HeapByteBufUtil.getInt(array, index);
}
@Override
public int getIntLE(int index) {
ensureAccessible();
return _getIntLE(index);
}
@Override
protected int _getIntLE(int index) {
return HeapByteBufUtil.getIntLE(array, index);
}
@Override
public long getLong(int index) {
ensureAccessible();
return _getLong(index);
}
@Override
protected long _getLong(int index) {
return HeapByteBufUtil.getLong(array, index);
}
@Override
public long getLongLE(int index) {
ensureAccessible();
return _getLongLE(index);
}
@Override
protected long _getLongLE(int index) {
return HeapByteBufUtil.getLongLE(array, index);
}
@Override
public ByteBuf setByte(int index, int value) {
ensureAccessible();
_setByte(index, value);
return this;
}
@Override
protected void _setByte(int index, int value) {
HeapByteBufUtil.setByte(array, index, value);
}
@Override
public ByteBuf setShort(int index, int value) {
ensureAccessible();
_setShort(index, value);
return this;
}
@Override
protected void _setShort(int index, int value) {
HeapByteBufUtil.setShort(array, index, value);
}
@Override
public ByteBuf setShortLE(int index, int value) {
ensureAccessible();
_setShortLE(index, value);
return this;
}
@Override
protected void _setShortLE(int index, int value) {
HeapByteBufUtil.setShortLE(array, index, value);
}
@Override
public ByteBuf setMedium(int index, int value) {
ensureAccessible();
_setMedium(index, value);
return this;
}
@Override
protected void _setMedium(int index, int value) {
HeapByteBufUtil.setMedium(array, index, value);
}
@Override
public ByteBuf setMediumLE(int index, int value) {
ensureAccessible();
_setMediumLE(index, value);
return this;
}
@Override
protected void _setMediumLE(int index, int value) {
HeapByteBufUtil.setMediumLE(array, index, value);
}
@Override
public ByteBuf setInt(int index, int value) {
ensureAccessible();
_setInt(index, value);
return this;
}
@Override
protected void _setInt(int index, int value) {
HeapByteBufUtil.setInt(array, index, value);
}
@Override
public ByteBuf setIntLE(int index, int value) {
ensureAccessible();
_setIntLE(index, value);
return this;
}
@Override
protected void _setIntLE(int index, int value) {
HeapByteBufUtil.setIntLE(array, index, value);
}
@Override
public ByteBuf setLong(int index, long value) {
ensureAccessible();
_setLong(index, value);
return this;
}
@Override
protected void _setLong(int index, long value) {
HeapByteBufUtil.setLong(array, index, value);
}
@Override
public ByteBuf setLongLE(int index, long value) {
ensureAccessible();
_setLongLE(index, value);
return this;
}
@Override
protected void _setLongLE(int index, long value) {
HeapByteBufUtil.setLongLE(array, index, value);
}
@Override
public ByteBuf copy(int index, int length) {
checkIndex(index, length);
return alloc().heapBuffer(length, maxCapacity()).writeBytes(array, index, length);
}
private ByteBuffer internalNioBuffer() {
ByteBuffer tmpNioBuf = this.tmpNioBuf;
if (tmpNioBuf == null) {
this.tmpNioBuf = tmpNioBuf = ByteBuffer.wrap(array);
}
return tmpNioBuf;
}
@Override
protected void deallocate() {
freeArray(array);
array = EmptyArrays.EMPTY_BYTES;
}
@Override
public ByteBuf unwrap() {
return null;
}
}

View file

@@ -0,0 +1,126 @@
/*
* Copyright 2015 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
/**
* A special {@link AbstractUnpooledSlicedByteBuf} that can make optimizations because it knows the sliced buffer is of
* type {@link AbstractByteBuf}.
*/
class UnpooledSlicedByteBuf extends AbstractUnpooledSlicedByteBuf {
UnpooledSlicedByteBuf(AbstractByteBuf buffer, int index, int length) {
super(buffer, index, length);
}
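    // Illustrative sketch, not part of the original Netty source: a slice exposes a sub-range
    // of the wrapped buffer, translating every index by the slice offset (see the idx(...) calls
    // below). The method name and values are hypothetical examples.
    private static ByteBuf sliceExample() {
        ByteBuf original = Unpooled.wrappedBuffer(new byte[] { 10, 20, 30, 40 });
        ByteBuf slice = original.slice(1, 2); // typically an UnpooledSlicedByteBuf
        assert slice.getByte(0) == 20;        // index 0 maps to index 1 of the original
        return original;
    }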
@Override
public int capacity() {
return maxCapacity();
}
@Override
public AbstractByteBuf unwrap() {
return (AbstractByteBuf) super.unwrap();
}
@Override
protected byte _getByte(int index) {
return unwrap()._getByte(idx(index));
}
@Override
protected short _getShort(int index) {
return unwrap()._getShort(idx(index));
}
@Override
protected short _getShortLE(int index) {
return unwrap()._getShortLE(idx(index));
}
@Override
protected int _getUnsignedMedium(int index) {
return unwrap()._getUnsignedMedium(idx(index));
}
@Override
protected int _getUnsignedMediumLE(int index) {
return unwrap()._getUnsignedMediumLE(idx(index));
}
@Override
protected int _getInt(int index) {
return unwrap()._getInt(idx(index));
}
@Override
protected int _getIntLE(int index) {
return unwrap()._getIntLE(idx(index));
}
@Override
protected long _getLong(int index) {
return unwrap()._getLong(idx(index));
}
@Override
protected long _getLongLE(int index) {
return unwrap()._getLongLE(idx(index));
}
@Override
protected void _setByte(int index, int value) {
unwrap()._setByte(idx(index), value);
}
@Override
protected void _setShort(int index, int value) {
unwrap()._setShort(idx(index), value);
}
@Override
protected void _setShortLE(int index, int value) {
unwrap()._setShortLE(idx(index), value);
}
@Override
protected void _setMedium(int index, int value) {
unwrap()._setMedium(idx(index), value);
}
@Override
protected void _setMediumLE(int index, int value) {
unwrap()._setMediumLE(idx(index), value);
}
@Override
protected void _setInt(int index, int value) {
unwrap()._setInt(idx(index), value);
}
@Override
protected void _setIntLE(int index, int value) {
unwrap()._setIntLE(idx(index), value);
}
@Override
protected void _setLong(int index, long value) {
unwrap()._setLong(idx(index), value);
}
@Override
protected void _setLongLE(int index, long value) {
unwrap()._setLongLE(idx(index), value);
}
}

View file

@@ -0,0 +1,315 @@
/*
* Copyright 2012 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.internal.PlatformDependent;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
/**
* A NIO {@link ByteBuffer} based buffer. It is recommended to use
* {@link UnpooledByteBufAllocator#directBuffer(int, int)}, {@link Unpooled#directBuffer(int)} and
 * {@link Unpooled#wrappedBuffer(ByteBuffer)} instead of calling the constructor explicitly.
*/
public class UnpooledUnsafeDirectByteBuf extends UnpooledDirectByteBuf {
long memoryAddress;
/**
* Creates a new direct buffer.
*
* @param initialCapacity the initial capacity of the underlying direct buffer
* @param maxCapacity the maximum capacity of the underlying direct buffer
*/
public UnpooledUnsafeDirectByteBuf(ByteBufAllocator alloc, int initialCapacity, int maxCapacity) {
super(alloc, initialCapacity, maxCapacity);
}
/**
* Creates a new direct buffer by wrapping the specified initial buffer.
*
* @param maxCapacity the maximum capacity of the underlying direct buffer
*/
protected UnpooledUnsafeDirectByteBuf(ByteBufAllocator alloc, ByteBuffer initialBuffer, int maxCapacity) {
// We never try to free the buffer if it was provided by the end-user as we don't know if this is a duplicate or
// a slice. This is done to prevent an IllegalArgumentException when using Java9 as Unsafe.invokeCleaner(...)
// will check if the given buffer is either a duplicate or slice and in this case throw an
// IllegalArgumentException.
//
// See https://hg.openjdk.java.net/jdk9/hs-demo/jdk/file/0d2ab72ba600/src/jdk.unsupported/share/classes/
// sun/misc/Unsafe.java#l1250
//
// We also call slice() explicitly here to preserve behaviour with previous netty releases.
super(alloc, initialBuffer, maxCapacity, /* doFree = */ false, /* slice = */ true);
}
UnpooledUnsafeDirectByteBuf(ByteBufAllocator alloc, ByteBuffer initialBuffer, int maxCapacity, boolean doFree) {
super(alloc, initialBuffer, maxCapacity, doFree, false);
}
@Override
final void setByteBuffer(ByteBuffer buffer, boolean tryFree) {
super.setByteBuffer(buffer, tryFree);
memoryAddress = PlatformDependent.directBufferAddress(buffer);
}
@Override
public boolean hasMemoryAddress() {
return true;
}
@Override
public long memoryAddress() {
ensureAccessible();
return memoryAddress;
}
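    // Illustrative sketch, not part of the original Netty source: when Unsafe is available the
    // buffer exposes its native address, which the _get*/_set* overrides below read directly.
    // The method name is a hypothetical example.
    private static long memoryAddressExample() {
        ByteBuf buf = UnpooledByteBufAllocator.DEFAULT.directBuffer(16);
        long address = buf.hasMemoryAddress() ? buf.memoryAddress() : -1;
        buf.release();
        return address;
    }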
@Override
public byte getByte(int index) {
checkIndex(index);
return _getByte(index);
}
@Override
protected byte _getByte(int index) {
return UnsafeByteBufUtil.getByte(addr(index));
}
@Override
public short getShort(int index) {
checkIndex(index, 2);
return _getShort(index);
}
@Override
protected short _getShort(int index) {
return UnsafeByteBufUtil.getShort(addr(index));
}
@Override
protected short _getShortLE(int index) {
return UnsafeByteBufUtil.getShortLE(addr(index));
}
@Override
public int getUnsignedMedium(int index) {
checkIndex(index, 3);
return _getUnsignedMedium(index);
}
@Override
protected int _getUnsignedMedium(int index) {
return UnsafeByteBufUtil.getUnsignedMedium(addr(index));
}
@Override
protected int _getUnsignedMediumLE(int index) {
return UnsafeByteBufUtil.getUnsignedMediumLE(addr(index));
}
@Override
public int getInt(int index) {
checkIndex(index, 4);
return _getInt(index);
}
@Override
protected int _getInt(int index) {
return UnsafeByteBufUtil.getInt(addr(index));
}
@Override
protected int _getIntLE(int index) {
return UnsafeByteBufUtil.getIntLE(addr(index));
}
@Override
public long getLong(int index) {
checkIndex(index, 8);
return _getLong(index);
}
@Override
protected long _getLong(int index) {
return UnsafeByteBufUtil.getLong(addr(index));
}
@Override
protected long _getLongLE(int index) {
return UnsafeByteBufUtil.getLongLE(addr(index));
}
@Override
public ByteBuf getBytes(int index, ByteBuf dst, int dstIndex, int length) {
UnsafeByteBufUtil.getBytes(this, addr(index), index, dst, dstIndex, length);
return this;
}
@Override
void getBytes(int index, byte[] dst, int dstIndex, int length, boolean internal) {
UnsafeByteBufUtil.getBytes(this, addr(index), index, dst, dstIndex, length);
}
@Override
void getBytes(int index, ByteBuffer dst, boolean internal) {
UnsafeByteBufUtil.getBytes(this, addr(index), index, dst);
}
@Override
public ByteBuf setByte(int index, int value) {
checkIndex(index);
_setByte(index, value);
return this;
}
@Override
protected void _setByte(int index, int value) {
UnsafeByteBufUtil.setByte(addr(index), value);
}
@Override
public ByteBuf setShort(int index, int value) {
checkIndex(index, 2);
_setShort(index, value);
return this;
}
@Override
protected void _setShort(int index, int value) {
UnsafeByteBufUtil.setShort(addr(index), value);
}
@Override
protected void _setShortLE(int index, int value) {
UnsafeByteBufUtil.setShortLE(addr(index), value);
}
@Override
public ByteBuf setMedium(int index, int value) {
checkIndex(index, 3);
_setMedium(index, value);
return this;
}
@Override
protected void _setMedium(int index, int value) {
UnsafeByteBufUtil.setMedium(addr(index), value);
}
@Override
protected void _setMediumLE(int index, int value) {
UnsafeByteBufUtil.setMediumLE(addr(index), value);
}
@Override
public ByteBuf setInt(int index, int value) {
checkIndex(index, 4);
_setInt(index, value);
return this;
}
@Override
protected void _setInt(int index, int value) {
UnsafeByteBufUtil.setInt(addr(index), value);
}
@Override
protected void _setIntLE(int index, int value) {
UnsafeByteBufUtil.setIntLE(addr(index), value);
}
@Override
public ByteBuf setLong(int index, long value) {
checkIndex(index, 8);
_setLong(index, value);
return this;
}
@Override
protected void _setLong(int index, long value) {
UnsafeByteBufUtil.setLong(addr(index), value);
}
@Override
protected void _setLongLE(int index, long value) {
UnsafeByteBufUtil.setLongLE(addr(index), value);
}
@Override
public ByteBuf setBytes(int index, ByteBuf src, int srcIndex, int length) {
UnsafeByteBufUtil.setBytes(this, addr(index), index, src, srcIndex, length);
return this;
}
@Override
public ByteBuf setBytes(int index, byte[] src, int srcIndex, int length) {
UnsafeByteBufUtil.setBytes(this, addr(index), index, src, srcIndex, length);
return this;
}
@Override
public ByteBuf setBytes(int index, ByteBuffer src) {
UnsafeByteBufUtil.setBytes(this, addr(index), index, src);
return this;
}
@Override
void getBytes(int index, OutputStream out, int length, boolean internal) throws IOException {
UnsafeByteBufUtil.getBytes(this, addr(index), index, out, length);
}
@Override
public int setBytes(int index, InputStream in, int length) throws IOException {
return UnsafeByteBufUtil.setBytes(this, addr(index), index, in, length);
}
@Override
public ByteBuf copy(int index, int length) {
return UnsafeByteBufUtil.copy(this, addr(index), index, length);
}
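// Translates a buffer index into the absolute native memory address that backs it.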
final long addr(int index) {
return memoryAddress + index;
}
@Override
protected SwappedByteBuf newSwappedByteBuf() {
if (PlatformDependent.isUnaligned()) {
// Only use this if unaligned access is supported; otherwise there is no gain.
return new UnsafeDirectSwappedByteBuf(this);
}
return super.newSwappedByteBuf();
}
@Override
public ByteBuf setZero(int index, int length) {
checkIndex(index, length);
UnsafeByteBufUtil.setZero(addr(index), length);
return this;
}
@Override
public ByteBuf writeZero(int length) {
ensureWritable(length);
int wIndex = writerIndex;
UnsafeByteBufUtil.setZero(addr(wIndex), length);
writerIndex = wIndex + length;
return this;
}
}

View file

@ -0,0 +1,282 @@
/*
* Copyright 2015 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.internal.PlatformDependent;
/**
* Big endian Java heap buffer implementation. It is recommended to use
* {@link UnpooledByteBufAllocator#heapBuffer(int, int)}, {@link Unpooled#buffer(int)} and
* {@link Unpooled#wrappedBuffer(byte[])} instead of calling the constructor explicitly.
*/
public class UnpooledUnsafeHeapByteBuf extends UnpooledHeapByteBuf {
/**
* Creates a new heap buffer with a newly allocated byte array.
*
* @param alloc the allocator that created this buffer
* @param initialCapacity the initial capacity of the underlying byte array
* @param maxCapacity the max capacity of the underlying byte array
*/
public UnpooledUnsafeHeapByteBuf(ByteBufAllocator alloc, int initialCapacity, int maxCapacity) {
super(alloc, initialCapacity, maxCapacity);
}
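// A minimal usage sketch (illustrative only): when sun.misc.Unsafe is available, the
// allocator entry points named in the class Javadoc hand out this implementation, e.g.:
//
//   ByteBuf buf = UnpooledByteBufAllocator.DEFAULT.heapBuffer(16, 1024);
//   buf.writeInt(42);
//   int v = buf.getInt(0); // 42, read back big endian
//   buf.release();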
@Override
protected byte[] allocateArray(int initialCapacity) {
return PlatformDependent.allocateUninitializedArray(initialCapacity);
}
@Override
public byte getByte(int index) {
checkIndex(index);
return _getByte(index);
}
@Override
protected byte _getByte(int index) {
return UnsafeByteBufUtil.getByte(array, index);
}
@Override
public short getShort(int index) {
checkIndex(index, 2);
return _getShort(index);
}
@Override
protected short _getShort(int index) {
return UnsafeByteBufUtil.getShort(array, index);
}
@Override
public short getShortLE(int index) {
checkIndex(index, 2);
return _getShortLE(index);
}
@Override
protected short _getShortLE(int index) {
return UnsafeByteBufUtil.getShortLE(array, index);
}
@Override
public int getUnsignedMedium(int index) {
checkIndex(index, 3);
return _getUnsignedMedium(index);
}
@Override
protected int _getUnsignedMedium(int index) {
return UnsafeByteBufUtil.getUnsignedMedium(array, index);
}
@Override
public int getUnsignedMediumLE(int index) {
checkIndex(index, 3);
return _getUnsignedMediumLE(index);
}
@Override
protected int _getUnsignedMediumLE(int index) {
return UnsafeByteBufUtil.getUnsignedMediumLE(array, index);
}
@Override
public int getInt(int index) {
checkIndex(index, 4);
return _getInt(index);
}
@Override
protected int _getInt(int index) {
return UnsafeByteBufUtil.getInt(array, index);
}
@Override
public int getIntLE(int index) {
checkIndex(index, 4);
return _getIntLE(index);
}
@Override
protected int _getIntLE(int index) {
return UnsafeByteBufUtil.getIntLE(array, index);
}
@Override
public long getLong(int index) {
checkIndex(index, 8);
return _getLong(index);
}
@Override
protected long _getLong(int index) {
return UnsafeByteBufUtil.getLong(array, index);
}
@Override
public long getLongLE(int index) {
checkIndex(index, 8);
return _getLongLE(index);
}
@Override
protected long _getLongLE(int index) {
return UnsafeByteBufUtil.getLongLE(array, index);
}
@Override
public ByteBuf setByte(int index, int value) {
checkIndex(index);
_setByte(index, value);
return this;
}
@Override
protected void _setByte(int index, int value) {
UnsafeByteBufUtil.setByte(array, index, value);
}
@Override
public ByteBuf setShort(int index, int value) {
checkIndex(index, 2);
_setShort(index, value);
return this;
}
@Override
protected void _setShort(int index, int value) {
UnsafeByteBufUtil.setShort(array, index, value);
}
@Override
public ByteBuf setShortLE(int index, int value) {
checkIndex(index, 2);
_setShortLE(index, value);
return this;
}
@Override
protected void _setShortLE(int index, int value) {
UnsafeByteBufUtil.setShortLE(array, index, value);
}
@Override
public ByteBuf setMedium(int index, int value) {
checkIndex(index, 3);
_setMedium(index, value);
return this;
}
@Override
protected void _setMedium(int index, int value) {
UnsafeByteBufUtil.setMedium(array, index, value);
}
@Override
public ByteBuf setMediumLE(int index, int value) {
checkIndex(index, 3);
_setMediumLE(index, value);
return this;
}
@Override
protected void _setMediumLE(int index, int value) {
UnsafeByteBufUtil.setMediumLE(array, index, value);
}
@Override
public ByteBuf setInt(int index, int value) {
checkIndex(index, 4);
_setInt(index, value);
return this;
}
@Override
protected void _setInt(int index, int value) {
UnsafeByteBufUtil.setInt(array, index, value);
}
@Override
public ByteBuf setIntLE(int index, int value) {
checkIndex(index, 4);
_setIntLE(index, value);
return this;
}
@Override
protected void _setIntLE(int index, int value) {
UnsafeByteBufUtil.setIntLE(array, index, value);
}
@Override
public ByteBuf setLong(int index, long value) {
checkIndex(index, 8);
_setLong(index, value);
return this;
}
@Override
protected void _setLong(int index, long value) {
UnsafeByteBufUtil.setLong(array, index, value);
}
@Override
public ByteBuf setLongLE(int index, long value) {
checkIndex(index, 8);
_setLongLE(index, value);
return this;
}
@Override
protected void _setLongLE(int index, long value) {
UnsafeByteBufUtil.setLongLE(array, index, value);
}
@Override
public ByteBuf setZero(int index, int length) {
if (PlatformDependent.javaVersion() >= 7) {
// Only do on java7+ as the needed Unsafe call was only added there.
checkIndex(index, length);
UnsafeByteBufUtil.setZero(array, index, length);
return this;
}
return super.setZero(index, length);
}
@Override
public ByteBuf writeZero(int length) {
if (PlatformDependent.javaVersion() >= 7) {
// Only do on java7+ as the needed Unsafe call was only added there.
ensureWritable(length);
int wIndex = writerIndex;
UnsafeByteBufUtil.setZero(array, wIndex, length);
writerIndex = wIndex + length;
return this;
}
return super.writeZero(length);
}
@Override
@Deprecated
protected SwappedByteBuf newSwappedByteBuf() {
if (PlatformDependent.isUnaligned()) {
// Only use this if unaligned access is supported; otherwise there is no gain.
return new UnsafeHeapSwappedByteBuf(this);
}
return super.newSwappedByteBuf();
}
}

View file

@ -0,0 +1,55 @@
/*
* Copyright 2016 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.internal.PlatformDependent;
import java.nio.ByteBuffer;
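/**
 * A direct buffer variant whose native memory is allocated and released without a JDK
 * Cleaner, which also allows {@link #capacity(int)} to grow or shrink the buffer by
 * reallocating the underlying memory via
 * {@link PlatformDependent#reallocateDirectNoCleaner(ByteBuffer, int)}.
 */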
class UnpooledUnsafeNoCleanerDirectByteBuf extends UnpooledUnsafeDirectByteBuf {
UnpooledUnsafeNoCleanerDirectByteBuf(ByteBufAllocator alloc, int initialCapacity, int maxCapacity) {
super(alloc, initialCapacity, maxCapacity);
}
@Override
protected ByteBuffer allocateDirect(int initialCapacity) {
return PlatformDependent.allocateDirectNoCleaner(initialCapacity);
}
ByteBuffer reallocateDirect(ByteBuffer oldBuffer, int initialCapacity) {
return PlatformDependent.reallocateDirectNoCleaner(oldBuffer, initialCapacity);
}
@Override
protected void freeDirect(ByteBuffer buffer) {
PlatformDependent.freeDirectNoCleaner(buffer);
}
@Override
public ByteBuf capacity(int newCapacity) {
checkNewCapacity(newCapacity);
int oldCapacity = capacity();
if (newCapacity == oldCapacity) {
return this;
}
trimIndicesToCapacity(newCapacity);
setByteBuffer(reallocateDirect(buffer, newCapacity), false);
return this;
}
}

View file

@ -0,0 +1,133 @@
/*
* Copyright 2013 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.internal.ObjectUtil;
import java.nio.ByteOrder;
/**
* A {@link ByteBuf} implementation that wraps another buffer to prevent a user from increasing or decreasing the
* wrapped buffer's reference count.
*/
final class UnreleasableByteBuf extends WrappedByteBuf {
private SwappedByteBuf swappedBuf;
UnreleasableByteBuf(ByteBuf buf) {
super(buf instanceof UnreleasableByteBuf ? buf.unwrap() : buf);
}
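// A minimal usage sketch (illustrative only): such wrappers are usually obtained via
// Unpooled.unreleasableBuffer(..), e.g. for a constant buffer shared between handlers:
//
//   ByteBuf constant = Unpooled.unreleasableBuffer(
//           Unpooled.copiedBuffer("OK", CharsetUtil.US_ASCII));
//   constant.release();                   // returns false; the wrapped refCnt is untouched
//   ByteBuf view = constant.duplicate();  // the duplicate is unreleasable as well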
@Override
public ByteBuf order(ByteOrder endianness) {
if (ObjectUtil.checkNotNull(endianness, "endianness") == order()) {
return this;
}
SwappedByteBuf swappedBuf = this.swappedBuf;
if (swappedBuf == null) {
this.swappedBuf = swappedBuf = new SwappedByteBuf(this);
}
return swappedBuf;
}
@Override
public ByteBuf asReadOnly() {
return buf.isReadOnly() ? this : new UnreleasableByteBuf(buf.asReadOnly());
}
@Override
public ByteBuf readSlice(int length) {
return new UnreleasableByteBuf(buf.readSlice(length));
}
@Override
public ByteBuf readRetainedSlice(int length) {
// We could call buf.readSlice(..), and then call buf.release(). However this creates a leak in unit tests
// because the release method on UnreleasableByteBuf will never allow the leak record to be cleaned up.
// So we just use readSlice(..) because the end result should be logically equivalent.
return readSlice(length);
}
@Override
public ByteBuf slice() {
return new UnreleasableByteBuf(buf.slice());
}
@Override
public ByteBuf retainedSlice() {
// We could call buf.retainedSlice(), and then call buf.release(). However this creates a leak in unit tests
// because the release method on UnreleasableByteBuf will never allow the leak record to be cleaned up.
// So we just use slice() because the end result should be logically equivalent.
return slice();
}
@Override
public ByteBuf slice(int index, int length) {
return new UnreleasableByteBuf(buf.slice(index, length));
}
@Override
public ByteBuf retainedSlice(int index, int length) {
// We could call buf.retainedSlice(..), and then call buf.release(). However this creates a leak in unit tests
// because the release method on UnreleasableByteBuf will never allow the leak record to be cleaned up.
// So we just use slice(..) because the end result should be logically equivalent.
return slice(index, length);
}
@Override
public ByteBuf duplicate() {
return new UnreleasableByteBuf(buf.duplicate());
}
@Override
public ByteBuf retainedDuplicate() {
// We could call buf.retainedDuplicate(), and then call buf.release(). However this creates a leak in unit tests
// because the release method on UnreleasableByteBuf will never allow the leak record to be cleaned up.
// So we just use duplicate() because the end result should be logically equivalent.
return duplicate();
}
@Override
public ByteBuf retain(int increment) {
return this;
}
@Override
public ByteBuf retain() {
return this;
}
@Override
public ByteBuf touch() {
return this;
}
@Override
public ByteBuf touch(Object hint) {
return this;
}
@Override
public boolean release() {
return false;
}
@Override
public boolean release(int decrement) {
return false;
}
}

View file

@ -0,0 +1,691 @@
/*
* Copyright 2015 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.internal.PlatformDependent;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.ReadOnlyBufferException;
import static io.netty.util.internal.MathUtil.isOutOfBounds;
import static io.netty.util.internal.ObjectUtil.checkNotNull;
import static io.netty.util.internal.PlatformDependent.BIG_ENDIAN_NATIVE_ORDER;
/**
* All operations get and set as {@link ByteOrder#BIG_ENDIAN}.
*/
final class UnsafeByteBufUtil {
private static final boolean UNALIGNED = PlatformDependent.isUnaligned();
private static final byte ZERO = 0;
private static final int MAX_HAND_ROLLED_SET_ZERO_BYTES = 64;
static byte getByte(long address) {
return PlatformDependent.getByte(address);
}
static short getShort(long address) {
if (UNALIGNED) {
short v = PlatformDependent.getShort(address);
return BIG_ENDIAN_NATIVE_ORDER ? v : Short.reverseBytes(v);
}
return (short) (PlatformDependent.getByte(address) << 8 | PlatformDependent.getByte(address + 1) & 0xff);
}
static short getShortLE(long address) {
if (UNALIGNED) {
short v = PlatformDependent.getShort(address);
return BIG_ENDIAN_NATIVE_ORDER ? Short.reverseBytes(v) : v;
}
return (short) (PlatformDependent.getByte(address) & 0xff | PlatformDependent.getByte(address + 1) << 8);
}
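// Illustration (not part of the API): for the two bytes {0x12, 0x34} stored at 'address',
// getShort(address) yields 0x1234 (big endian) while getShortLE(address) yields 0x3412,
// regardless of the platform's native byte order.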
static int getUnsignedMedium(long address) {
if (UNALIGNED) {
return (PlatformDependent.getByte(address) & 0xff) << 16 |
(BIG_ENDIAN_NATIVE_ORDER ? PlatformDependent.getShort(address + 1)
: Short.reverseBytes(PlatformDependent.getShort(address + 1))) & 0xffff;
}
return (PlatformDependent.getByte(address) & 0xff) << 16 |
(PlatformDependent.getByte(address + 1) & 0xff) << 8 |
PlatformDependent.getByte(address + 2) & 0xff;
}
static int getUnsignedMediumLE(long address) {
if (UNALIGNED) {
return (PlatformDependent.getByte(address) & 0xff) |
((BIG_ENDIAN_NATIVE_ORDER ? Short.reverseBytes(PlatformDependent.getShort(address + 1))
: PlatformDependent.getShort(address + 1)) & 0xffff) << 8;
}
return PlatformDependent.getByte(address) & 0xff |
(PlatformDependent.getByte(address + 1) & 0xff) << 8 |
(PlatformDependent.getByte(address + 2) & 0xff) << 16;
}
static int getInt(long address) {
if (UNALIGNED) {
int v = PlatformDependent.getInt(address);
return BIG_ENDIAN_NATIVE_ORDER ? v : Integer.reverseBytes(v);
}
return PlatformDependent.getByte(address) << 24 |
(PlatformDependent.getByte(address + 1) & 0xff) << 16 |
(PlatformDependent.getByte(address + 2) & 0xff) << 8 |
PlatformDependent.getByte(address + 3) & 0xff;
}
static int getIntLE(long address) {
if (UNALIGNED) {
int v = PlatformDependent.getInt(address);
return BIG_ENDIAN_NATIVE_ORDER ? Integer.reverseBytes(v) : v;
}
return PlatformDependent.getByte(address) & 0xff |
(PlatformDependent.getByte(address + 1) & 0xff) << 8 |
(PlatformDependent.getByte(address + 2) & 0xff) << 16 |
PlatformDependent.getByte(address + 3) << 24;
}
static long getLong(long address) {
if (UNALIGNED) {
long v = PlatformDependent.getLong(address);
return BIG_ENDIAN_NATIVE_ORDER ? v : Long.reverseBytes(v);
}
return ((long) PlatformDependent.getByte(address)) << 56 |
(PlatformDependent.getByte(address + 1) & 0xffL) << 48 |
(PlatformDependent.getByte(address + 2) & 0xffL) << 40 |
(PlatformDependent.getByte(address + 3) & 0xffL) << 32 |
(PlatformDependent.getByte(address + 4) & 0xffL) << 24 |
(PlatformDependent.getByte(address + 5) & 0xffL) << 16 |
(PlatformDependent.getByte(address + 6) & 0xffL) << 8 |
(PlatformDependent.getByte(address + 7)) & 0xffL;
}
static long getLongLE(long address) {
if (UNALIGNED) {
long v = PlatformDependent.getLong(address);
return BIG_ENDIAN_NATIVE_ORDER ? Long.reverseBytes(v) : v;
}
return (PlatformDependent.getByte(address)) & 0xffL |
(PlatformDependent.getByte(address + 1) & 0xffL) << 8 |
(PlatformDependent.getByte(address + 2) & 0xffL) << 16 |
(PlatformDependent.getByte(address + 3) & 0xffL) << 24 |
(PlatformDependent.getByte(address + 4) & 0xffL) << 32 |
(PlatformDependent.getByte(address + 5) & 0xffL) << 40 |
(PlatformDependent.getByte(address + 6) & 0xffL) << 48 |
((long) PlatformDependent.getByte(address + 7)) << 56;
}
static void setByte(long address, int value) {
PlatformDependent.putByte(address, (byte) value);
}
static void setShort(long address, int value) {
if (UNALIGNED) {
PlatformDependent.putShort(
address, BIG_ENDIAN_NATIVE_ORDER ? (short) value : Short.reverseBytes((short) value));
} else {
PlatformDependent.putByte(address, (byte) (value >>> 8));
PlatformDependent.putByte(address + 1, (byte) value);
}
}
static void setShortLE(long address, int value) {
if (UNALIGNED) {
PlatformDependent.putShort(
address, BIG_ENDIAN_NATIVE_ORDER ? Short.reverseBytes((short) value) : (short) value);
} else {
PlatformDependent.putByte(address, (byte) value);
PlatformDependent.putByte(address + 1, (byte) (value >>> 8));
}
}
static void setMedium(long address, int value) {
PlatformDependent.putByte(address, (byte) (value >>> 16));
if (UNALIGNED) {
PlatformDependent.putShort(address + 1, BIG_ENDIAN_NATIVE_ORDER ? (short) value
: Short.reverseBytes((short) value));
} else {
PlatformDependent.putByte(address + 1, (byte) (value >>> 8));
PlatformDependent.putByte(address + 2, (byte) value);
}
}
static void setMediumLE(long address, int value) {
PlatformDependent.putByte(address, (byte) value);
if (UNALIGNED) {
PlatformDependent.putShort(address + 1, BIG_ENDIAN_NATIVE_ORDER ? Short.reverseBytes((short) (value >>> 8))
: (short) (value >>> 8));
} else {
PlatformDependent.putByte(address + 1, (byte) (value >>> 8));
PlatformDependent.putByte(address + 2, (byte) (value >>> 16));
}
}
static void setInt(long address, int value) {
if (UNALIGNED) {
PlatformDependent.putInt(address, BIG_ENDIAN_NATIVE_ORDER ? value : Integer.reverseBytes(value));
} else {
PlatformDependent.putByte(address, (byte) (value >>> 24));
PlatformDependent.putByte(address + 1, (byte) (value >>> 16));
PlatformDependent.putByte(address + 2, (byte) (value >>> 8));
PlatformDependent.putByte(address + 3, (byte) value);
}
}
static void setIntLE(long address, int value) {
if (UNALIGNED) {
PlatformDependent.putInt(address, BIG_ENDIAN_NATIVE_ORDER ? Integer.reverseBytes(value) : value);
} else {
PlatformDependent.putByte(address, (byte) value);
PlatformDependent.putByte(address + 1, (byte) (value >>> 8));
PlatformDependent.putByte(address + 2, (byte) (value >>> 16));
PlatformDependent.putByte(address + 3, (byte) (value >>> 24));
}
}
static void setLong(long address, long value) {
if (UNALIGNED) {
PlatformDependent.putLong(address, BIG_ENDIAN_NATIVE_ORDER ? value : Long.reverseBytes(value));
} else {
PlatformDependent.putByte(address, (byte) (value >>> 56));
PlatformDependent.putByte(address + 1, (byte) (value >>> 48));
PlatformDependent.putByte(address + 2, (byte) (value >>> 40));
PlatformDependent.putByte(address + 3, (byte) (value >>> 32));
PlatformDependent.putByte(address + 4, (byte) (value >>> 24));
PlatformDependent.putByte(address + 5, (byte) (value >>> 16));
PlatformDependent.putByte(address + 6, (byte) (value >>> 8));
PlatformDependent.putByte(address + 7, (byte) value);
}
}
static void setLongLE(long address, long value) {
if (UNALIGNED) {
PlatformDependent.putLong(address, BIG_ENDIAN_NATIVE_ORDER ? Long.reverseBytes(value) : value);
} else {
PlatformDependent.putByte(address, (byte) value);
PlatformDependent.putByte(address + 1, (byte) (value >>> 8));
PlatformDependent.putByte(address + 2, (byte) (value >>> 16));
PlatformDependent.putByte(address + 3, (byte) (value >>> 24));
PlatformDependent.putByte(address + 4, (byte) (value >>> 32));
PlatformDependent.putByte(address + 5, (byte) (value >>> 40));
PlatformDependent.putByte(address + 6, (byte) (value >>> 48));
PlatformDependent.putByte(address + 7, (byte) (value >>> 56));
}
}
static byte getByte(byte[] array, int index) {
return PlatformDependent.getByte(array, index);
}
static short getShort(byte[] array, int index) {
if (UNALIGNED) {
short v = PlatformDependent.getShort(array, index);
return BIG_ENDIAN_NATIVE_ORDER ? v : Short.reverseBytes(v);
}
return (short) (PlatformDependent.getByte(array, index) << 8 |
PlatformDependent.getByte(array, index + 1) & 0xff);
}
static short getShortLE(byte[] array, int index) {
if (UNALIGNED) {
short v = PlatformDependent.getShort(array, index);
return BIG_ENDIAN_NATIVE_ORDER ? Short.reverseBytes(v) : v;
}
return (short) (PlatformDependent.getByte(array, index) & 0xff |
PlatformDependent.getByte(array, index + 1) << 8);
}
static int getUnsignedMedium(byte[] array, int index) {
if (UNALIGNED) {
return (PlatformDependent.getByte(array, index) & 0xff) << 16 |
(BIG_ENDIAN_NATIVE_ORDER ? PlatformDependent.getShort(array, index + 1)
: Short.reverseBytes(PlatformDependent.getShort(array, index + 1)))
& 0xffff;
}
return (PlatformDependent.getByte(array, index) & 0xff) << 16 |
(PlatformDependent.getByte(array, index + 1) & 0xff) << 8 |
PlatformDependent.getByte(array, index + 2) & 0xff;
}
static int getUnsignedMediumLE(byte[] array, int index) {
if (UNALIGNED) {
return (PlatformDependent.getByte(array, index) & 0xff) |
((BIG_ENDIAN_NATIVE_ORDER ? Short.reverseBytes(PlatformDependent.getShort(array, index + 1))
: PlatformDependent.getShort(array, index + 1)) & 0xffff) << 8;
}
return PlatformDependent.getByte(array, index) & 0xff |
(PlatformDependent.getByte(array, index + 1) & 0xff) << 8 |
(PlatformDependent.getByte(array, index + 2) & 0xff) << 16;
}
static int getInt(byte[] array, int index) {
if (UNALIGNED) {
int v = PlatformDependent.getInt(array, index);
return BIG_ENDIAN_NATIVE_ORDER ? v : Integer.reverseBytes(v);
}
return PlatformDependent.getByte(array, index) << 24 |
(PlatformDependent.getByte(array, index + 1) & 0xff) << 16 |
(PlatformDependent.getByte(array, index + 2) & 0xff) << 8 |
PlatformDependent.getByte(array, index + 3) & 0xff;
}
static int getIntLE(byte[] array, int index) {
if (UNALIGNED) {
int v = PlatformDependent.getInt(array, index);
return BIG_ENDIAN_NATIVE_ORDER ? Integer.reverseBytes(v) : v;
}
return PlatformDependent.getByte(array, index) & 0xff |
(PlatformDependent.getByte(array, index + 1) & 0xff) << 8 |
(PlatformDependent.getByte(array, index + 2) & 0xff) << 16 |
PlatformDependent.getByte(array, index + 3) << 24;
}
static long getLong(byte[] array, int index) {
if (UNALIGNED) {
long v = PlatformDependent.getLong(array, index);
return BIG_ENDIAN_NATIVE_ORDER ? v : Long.reverseBytes(v);
}
return ((long) PlatformDependent.getByte(array, index)) << 56 |
(PlatformDependent.getByte(array, index + 1) & 0xffL) << 48 |
(PlatformDependent.getByte(array, index + 2) & 0xffL) << 40 |
(PlatformDependent.getByte(array, index + 3) & 0xffL) << 32 |
(PlatformDependent.getByte(array, index + 4) & 0xffL) << 24 |
(PlatformDependent.getByte(array, index + 5) & 0xffL) << 16 |
(PlatformDependent.getByte(array, index + 6) & 0xffL) << 8 |
(PlatformDependent.getByte(array, index + 7)) & 0xffL;
}
static long getLongLE(byte[] array, int index) {
if (UNALIGNED) {
long v = PlatformDependent.getLong(array, index);
return BIG_ENDIAN_NATIVE_ORDER ? Long.reverseBytes(v) : v;
}
return PlatformDependent.getByte(array, index) & 0xffL |
(PlatformDependent.getByte(array, index + 1) & 0xffL) << 8 |
(PlatformDependent.getByte(array, index + 2) & 0xffL) << 16 |
(PlatformDependent.getByte(array, index + 3) & 0xffL) << 24 |
(PlatformDependent.getByte(array, index + 4) & 0xffL) << 32 |
(PlatformDependent.getByte(array, index + 5) & 0xffL) << 40 |
(PlatformDependent.getByte(array, index + 6) & 0xffL) << 48 |
((long) PlatformDependent.getByte(array, index + 7)) << 56;
}
static void setByte(byte[] array, int index, int value) {
PlatformDependent.putByte(array, index, (byte) value);
}
static void setShort(byte[] array, int index, int value) {
if (UNALIGNED) {
PlatformDependent.putShort(array, index,
BIG_ENDIAN_NATIVE_ORDER ? (short) value : Short.reverseBytes((short) value));
} else {
PlatformDependent.putByte(array, index, (byte) (value >>> 8));
PlatformDependent.putByte(array, index + 1, (byte) value);
}
}
static void setShortLE(byte[] array, int index, int value) {
if (UNALIGNED) {
PlatformDependent.putShort(array, index,
BIG_ENDIAN_NATIVE_ORDER ? Short.reverseBytes((short) value) : (short) value);
} else {
PlatformDependent.putByte(array, index, (byte) value);
PlatformDependent.putByte(array, index + 1, (byte) (value >>> 8));
}
}
static void setMedium(byte[] array, int index, int value) {
PlatformDependent.putByte(array, index, (byte) (value >>> 16));
if (UNALIGNED) {
PlatformDependent.putShort(array, index + 1,
BIG_ENDIAN_NATIVE_ORDER ? (short) value
: Short.reverseBytes((short) value));
} else {
PlatformDependent.putByte(array, index + 1, (byte) (value >>> 8));
PlatformDependent.putByte(array, index + 2, (byte) value);
}
}
static void setMediumLE(byte[] array, int index, int value) {
PlatformDependent.putByte(array, index, (byte) value);
if (UNALIGNED) {
PlatformDependent.putShort(array, index + 1,
BIG_ENDIAN_NATIVE_ORDER ? Short.reverseBytes((short) (value >>> 8))
: (short) (value >>> 8));
} else {
PlatformDependent.putByte(array, index + 1, (byte) (value >>> 8));
PlatformDependent.putByte(array, index + 2, (byte) (value >>> 16));
}
}
static void setInt(byte[] array, int index, int value) {
if (UNALIGNED) {
PlatformDependent.putInt(array, index, BIG_ENDIAN_NATIVE_ORDER ? value : Integer.reverseBytes(value));
} else {
PlatformDependent.putByte(array, index, (byte) (value >>> 24));
PlatformDependent.putByte(array, index + 1, (byte) (value >>> 16));
PlatformDependent.putByte(array, index + 2, (byte) (value >>> 8));
PlatformDependent.putByte(array, index + 3, (byte) value);
}
}
static void setIntLE(byte[] array, int index, int value) {
if (UNALIGNED) {
PlatformDependent.putInt(array, index, BIG_ENDIAN_NATIVE_ORDER ? Integer.reverseBytes(value) : value);
} else {
PlatformDependent.putByte(array, index, (byte) value);
PlatformDependent.putByte(array, index + 1, (byte) (value >>> 8));
PlatformDependent.putByte(array, index + 2, (byte) (value >>> 16));
PlatformDependent.putByte(array, index + 3, (byte) (value >>> 24));
}
}
static void setLong(byte[] array, int index, long value) {
if (UNALIGNED) {
PlatformDependent.putLong(array, index, BIG_ENDIAN_NATIVE_ORDER ? value : Long.reverseBytes(value));
} else {
PlatformDependent.putByte(array, index, (byte) (value >>> 56));
PlatformDependent.putByte(array, index + 1, (byte) (value >>> 48));
PlatformDependent.putByte(array, index + 2, (byte) (value >>> 40));
PlatformDependent.putByte(array, index + 3, (byte) (value >>> 32));
PlatformDependent.putByte(array, index + 4, (byte) (value >>> 24));
PlatformDependent.putByte(array, index + 5, (byte) (value >>> 16));
PlatformDependent.putByte(array, index + 6, (byte) (value >>> 8));
PlatformDependent.putByte(array, index + 7, (byte) value);
}
}
static void setLongLE(byte[] array, int index, long value) {
if (UNALIGNED) {
PlatformDependent.putLong(array, index, BIG_ENDIAN_NATIVE_ORDER ? Long.reverseBytes(value) : value);
} else {
PlatformDependent.putByte(array, index, (byte) value);
PlatformDependent.putByte(array, index + 1, (byte) (value >>> 8));
PlatformDependent.putByte(array, index + 2, (byte) (value >>> 16));
PlatformDependent.putByte(array, index + 3, (byte) (value >>> 24));
PlatformDependent.putByte(array, index + 4, (byte) (value >>> 32));
PlatformDependent.putByte(array, index + 5, (byte) (value >>> 40));
PlatformDependent.putByte(array, index + 6, (byte) (value >>> 48));
PlatformDependent.putByte(array, index + 7, (byte) (value >>> 56));
}
}
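// Zeroes the array in 8-byte chunks followed by a byte-sized tail; since every byte written
// is zero, the platform's native byte order is irrelevant here.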
private static void batchSetZero(byte[] data, int index, int length) {
int longBatches = length / 8;
for (int i = 0; i < longBatches; i++) {
PlatformDependent.putLong(data, index, ZERO);
index += 8;
}
final int remaining = length % 8;
for (int i = 0; i < remaining; i++) {
PlatformDependent.putByte(data, index + i, ZERO);
}
}
static void setZero(byte[] array, int index, int length) {
if (length == 0) {
return;
}
// fast-path for small writes to avoid the thread-state change incurred by the JDK's setMemory handling
if (UNALIGNED && length <= MAX_HAND_ROLLED_SET_ZERO_BYTES) {
batchSetZero(array, index, length);
} else {
PlatformDependent.setMemory(array, index, length, ZERO);
}
}
static ByteBuf copy(AbstractByteBuf buf, long addr, int index, int length) {
buf.checkIndex(index, length);
ByteBuf copy = buf.alloc().directBuffer(length, buf.maxCapacity());
if (length != 0) {
if (copy.hasMemoryAddress()) {
PlatformDependent.copyMemory(addr, copy.memoryAddress(), length);
copy.setIndex(0, length);
} else {
copy.writeBytes(buf, index, length);
}
}
return copy;
}
static int setBytes(AbstractByteBuf buf, long addr, int index, InputStream in, int length) throws IOException {
buf.checkIndex(index, length);
ByteBuf tmpBuf = buf.alloc().heapBuffer(length);
try {
byte[] tmp = tmpBuf.array();
int offset = tmpBuf.arrayOffset();
int readBytes = in.read(tmp, offset, length);
if (readBytes > 0) {
PlatformDependent.copyMemory(tmp, offset, addr, readBytes);
}
return readBytes;
} finally {
tmpBuf.release();
}
}
static void getBytes(AbstractByteBuf buf, long addr, int index, ByteBuf dst, int dstIndex, int length) {
buf.checkIndex(index, length);
checkNotNull(dst, "dst");
if (isOutOfBounds(dstIndex, length, dst.capacity())) {
throw new IndexOutOfBoundsException("dstIndex: " + dstIndex);
}
if (dst.hasMemoryAddress()) {
PlatformDependent.copyMemory(addr, dst.memoryAddress() + dstIndex, length);
} else if (dst.hasArray()) {
PlatformDependent.copyMemory(addr, dst.array(), dst.arrayOffset() + dstIndex, length);
} else {
dst.setBytes(dstIndex, buf, index, length);
}
}
static void getBytes(AbstractByteBuf buf, long addr, int index, byte[] dst, int dstIndex, int length) {
buf.checkIndex(index, length);
checkNotNull(dst, "dst");
if (isOutOfBounds(dstIndex, length, dst.length)) {
throw new IndexOutOfBoundsException("dstIndex: " + dstIndex);
}
if (length != 0) {
PlatformDependent.copyMemory(addr, dst, dstIndex, length);
}
}
static void getBytes(AbstractByteBuf buf, long addr, int index, ByteBuffer dst) {
buf.checkIndex(index, dst.remaining());
if (dst.remaining() == 0) {
return;
}
if (dst.isDirect()) {
if (dst.isReadOnly()) {
// We need to check if dst is read-only so we do not write into it via Unsafe.
throw new ReadOnlyBufferException();
}
// Copy to direct memory
long dstAddress = PlatformDependent.directBufferAddress(dst);
PlatformDependent.copyMemory(addr, dstAddress + dst.position(), dst.remaining());
dst.position(dst.position() + dst.remaining());
} else if (dst.hasArray()) {
// Copy to array
PlatformDependent.copyMemory(addr, dst.array(), dst.arrayOffset() + dst.position(), dst.remaining());
dst.position(dst.position() + dst.remaining());
} else {
dst.put(buf.nioBuffer());
}
}
static void setBytes(AbstractByteBuf buf, long addr, int index, ByteBuf src, int srcIndex, int length) {
buf.checkIndex(index, length);
checkNotNull(src, "src");
if (isOutOfBounds(srcIndex, length, src.capacity())) {
throw new IndexOutOfBoundsException("srcIndex: " + srcIndex);
}
if (length != 0) {
if (src.hasMemoryAddress()) {
PlatformDependent.copyMemory(src.memoryAddress() + srcIndex, addr, length);
} else if (src.hasArray()) {
PlatformDependent.copyMemory(src.array(), src.arrayOffset() + srcIndex, addr, length);
} else {
src.getBytes(srcIndex, buf, index, length);
}
}
}
static void setBytes(AbstractByteBuf buf, long addr, int index, byte[] src, int srcIndex, int length) {
buf.checkIndex(index, length);
// we need to check that src is not null as passing null to the copy may crash the JVM
// See https://github.com/netty/netty/issues/10791
checkNotNull(src, "src");
if (isOutOfBounds(srcIndex, length, src.length)) {
throw new IndexOutOfBoundsException("srcIndex: " + srcIndex);
}
if (length != 0) {
PlatformDependent.copyMemory(src, srcIndex, addr, length);
}
}
static void setBytes(AbstractByteBuf buf, long addr, int index, ByteBuffer src) {
final int length = src.remaining();
if (length == 0) {
return;
}
if (src.isDirect()) {
buf.checkIndex(index, length);
// Copy from direct memory
long srcAddress = PlatformDependent.directBufferAddress(src);
PlatformDependent.copyMemory(srcAddress + src.position(), addr, length);
src.position(src.position() + length);
} else if (src.hasArray()) {
buf.checkIndex(index, length);
// Copy from array
PlatformDependent.copyMemory(src.array(), src.arrayOffset() + src.position(), addr, length);
src.position(src.position() + length);
} else {
if (length < 8) {
setSingleBytes(buf, addr, index, src, length);
} else {
// no need to checkIndex: internalNioBuffer is already taking care of it
assert buf.nioBufferCount() == 1;
final ByteBuffer internalBuffer = buf.internalNioBuffer(index, length);
internalBuffer.put(src);
}
}
}
private static void setSingleBytes(final AbstractByteBuf buf, final long addr, final int index,
final ByteBuffer src, final int length) {
buf.checkIndex(index, length);
final int srcPosition = src.position();
final int srcLimit = src.limit();
long dstAddr = addr;
for (int srcIndex = srcPosition; srcIndex < srcLimit; srcIndex++) {
final byte value = src.get(srcIndex);
PlatformDependent.putByte(dstAddr, value);
dstAddr++;
}
src.position(srcLimit);
}
static void getBytes(AbstractByteBuf buf, long addr, int index, OutputStream out, int length) throws IOException {
buf.checkIndex(index, length);
if (length != 0) {
int len = Math.min(length, ByteBufUtil.WRITE_CHUNK_SIZE);
if (len <= ByteBufUtil.MAX_TL_ARRAY_LEN || !buf.alloc().isDirectBufferPooled()) {
getBytes(addr, ByteBufUtil.threadLocalTempArray(len), 0, len, out, length);
} else {
// if direct buffers are pooled chances are good that heap buffers are pooled as well.
ByteBuf tmpBuf = buf.alloc().heapBuffer(len);
try {
byte[] tmp = tmpBuf.array();
int offset = tmpBuf.arrayOffset();
getBytes(addr, tmp, offset, len, out, length);
} finally {
tmpBuf.release();
}
}
}
}
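// Streams 'outLen' bytes starting at 'inAddr' to 'out', reusing 'in' as a bounded staging
// array: each pass copies at most 'inLen' bytes from native memory into the array and then
// writes them to the stream, until all requested bytes have been drained.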
private static void getBytes(long inAddr, byte[] in, int inOffset, int inLen, OutputStream out, int outLen)
throws IOException {
do {
int len = Math.min(inLen, outLen);
PlatformDependent.copyMemory(inAddr, in, inOffset, len);
out.write(in, inOffset, len);
outLen -= len;
inAddr += len;
} while (outLen > 0);
}
private static void batchSetZero(long addr, int length) {
int longBatches = length / 8;
for (int i = 0; i < longBatches; i++) {
PlatformDependent.putLong(addr, ZERO);
addr += 8;
}
final int remaining = length % 8;
for (int i = 0; i < remaining; i++) {
PlatformDependent.putByte(addr + i, ZERO);
}
}
static void setZero(long addr, int length) {
if (length == 0) {
return;
}
// fast-path for small writes to avoid the thread-state change incurred by the JDK's setMemory handling
if (length <= MAX_HAND_ROLLED_SET_ZERO_BYTES) {
if (!UNALIGNED) {
// write bytes until the address is aligned
int bytesToGetAligned = zeroTillAligned(addr, length);
addr += bytesToGetAligned;
length -= bytesToGetAligned;
if (length == 0) {
return;
}
assert addr % 8 == 0;
}
batchSetZero(addr, length);
} else {
PlatformDependent.setMemory(addr, length, ZERO);
}
}
private static int zeroTillAligned(long addr, int length) {
// write bytes until the address is aligned
int bytesToGetAligned = Math.min((int) (addr % 8), length);
for (int i = 0; i < bytesToGetAligned; i++) {
PlatformDependent.putByte(addr + i, ZERO);
}
return bytesToGetAligned;
}
static UnpooledUnsafeDirectByteBuf newUnsafeDirectByteBuf(
ByteBufAllocator alloc, int initialCapacity, int maxCapacity) {
if (PlatformDependent.useDirectBufferNoCleaner()) {
return new UnpooledUnsafeNoCleanerDirectByteBuf(alloc, initialCapacity, maxCapacity);
}
return new UnpooledUnsafeDirectByteBuf(alloc, initialCapacity, maxCapacity);
}
private UnsafeByteBufUtil() { }
}

Some files were not shown because too many files have changed in this diff.