Package com.arangodb.model
Class AqlQueryOptions
- All Implemented Interfaces:
Cloneable
public final class AqlQueryOptions
extends TransactionalOptions<AqlQueryOptions>
implements Cloneable
- Author:
- Mark Vollmary, Michele Rastelli
-
Nested Class Summary
- static final class AqlQueryOptions.Optimizer
- static final class AqlQueryOptions.Options
-
Constructor Summary
- AqlQueryOptions()
Method Summary
- allowDirtyRead(Boolean allowDirtyRead): Sets the header x-arango-allow-dirty-read to true to allow the Coordinator to ask any shard replica for the data, not only the shard leader.
- allowDirtyReads(Boolean allowDirtyReads)
- allowRetry(Boolean allowRetry)
- clone()
- customOption(String key, Object value): Sets an additional custom option in the form of a key-value pair.
- failOnWarning(Boolean failOnWarning)
- fillBlockCache(Boolean fillBlockCache)
- forceOneShardAttributeValue(String forceOneShardAttributeValue)
- getCache()
- getCount()
- getMaxPlans(): Deprecated, for removal; use getMaxNumberOfPlans() instead.
- getQuery()
- getRules()
- getTtl()
- intermediateCommitCount(Long intermediateCommitCount)
- intermediateCommitSize(Long intermediateCommitSize)
- maxDNFConditionMembers(Integer maxDNFConditionMembers)
- maxNodesPerCallstack(Integer maxNodesPerCallstack)
- maxNumberOfPlans(Integer maxNumberOfPlans)
- maxPlans(Integer maxPlans): Deprecated, for removal; use maxNumberOfPlans(Integer) instead.
- maxRuntime(Double maxRuntime)
- maxTransactionSize(Long maxTransactionSize)
- maxWarningCount(Long maxWarningCount)
- memoryLimit(Long memoryLimit)
- optimizer(AqlQueryOptions.Optimizer optimizer)
- options(AqlQueryOptions.Options options)
- rules(Collection<String> rules)
- satelliteSyncWait(Double satelliteSyncWait)
- shardIds(String... shardIds): Restrict query to shards by given ids.
- skipInaccessibleCollections(Boolean skipInaccessibleCollections)
- spillOverThresholdMemoryUsage(Long spillOverThresholdMemoryUsage)
- spillOverThresholdNumRows(Long spillOverThresholdNumRows)
Methods inherited from class com.arangodb.model.TransactionalOptions:
getStreamTransactionId, streamTransactionId
-
Constructor Details
-
AqlQueryOptions
public AqlQueryOptions()
-
-
Method Details
-
getAllowDirtyRead
-
allowDirtyRead
Sets the header x-arango-allow-dirty-read to true to allow the Coordinator to ask any shard replica for the data, not only the shard leader. This may result in “dirty reads”. The header is ignored if this operation is part of a Stream Transaction (TransactionalOptions.streamTransactionId(String)): the header set when creating the transaction decides about dirty reads for the entire transaction, not the individual read operations.
- Parameters:
allowDirtyRead
- Set to true to allow reading from followers in an active-failover setup.
- Returns:
- this
- See Also:
-
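For example, a read that tolerates slightly stale data can opt in to dirty reads per query. This is a minimal sketch: the connection details, database name, and collection name are placeholders, and the query(String, Class, AqlQueryOptions) overload is assumed.

```java
import com.arangodb.ArangoCursor;
import com.arangodb.ArangoDB;
import com.arangodb.ArangoDatabase;
import com.arangodb.model.AqlQueryOptions;

public class DirtyReadExample {
    public static void main(String[] args) {
        // Placeholder connection details; adjust for your deployment.
        ArangoDatabase db = new ArangoDB.Builder()
                .host("localhost", 8529)
                .build()
                .db("myDatabase");

        // Allow the Coordinator to answer from any shard replica,
        // accepting possibly stale ("dirty") results in exchange.
        AqlQueryOptions options = new AqlQueryOptions().allowDirtyRead(true);

        ArangoCursor<String> cursor =
                db.query("FOR d IN docs RETURN d._key", String.class, options);
        cursor.forEachRemaining(System.out::println);
    }
}
```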
getBatchSize
-
batchSize
- Parameters:
batchSize
- maximum number of result documents to be transferred from the server to the client in one round trip. If this attribute is not set, a server-controlled default value will be used. A batchSize value of 0 is disallowed.
- Returns:
- this
-
getBindVars
-
getCache
-
cache
- Parameters:
cache
- flag to determine whether the AQL query results cache shall be used. If set to false, then any query cache lookup will be skipped for the query. If set to true, it will lead to the query cache being checked for the query if the query cache mode is either on or demand.
- Returns:
- this
-
getCount
-
count
- Parameters:
count
- indicates whether the number of documents in the result set should be returned and made accessible via ArangoCursor.getCount(). Calculating the count attribute might have a performance impact for some queries in the future, so this option is turned off by default, and count is only returned when requested.
- Returns:
- this
-
getMemoryLimit
-
memoryLimit
- Parameters:
memoryLimit
- the maximum amount of memory (measured in bytes) that the query is allowed to use. If set, then the query will fail with error resource limit exceeded in case it allocates too much memory. A value of 0 indicates that there is no memory limit.
- Returns:
- this
- Since:
- ArangoDB 3.1.0
-
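For instance, a query can be capped at 32 MiB so that a runaway aggregation fails with resource limit exceeded instead of exhausting the server. The limit value below is illustrative only:

```java
import com.arangodb.model.AqlQueryOptions;

public class MemoryLimitExample {
    public static void main(String[] args) {
        // 32 MiB, expressed in bytes as the option requires.
        long limitBytes = 32L * 1024 * 1024;
        AqlQueryOptions options = new AqlQueryOptions().memoryLimit(limitBytes);
        System.out.println(options.getMemoryLimit()); // 33554432
    }
}
```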
getOptions
-
options
- Parameters:
options
- extra options for the query
- Returns:
- this
-
getQuery
-
query
- Parameters:
query
- the query to be executed
- Returns:
- this
-
getTtl
-
ttl
- Parameters:
ttl
- The time-to-live for the cursor (in seconds). If the result set is small enough (less than or equal to batchSize) then results are returned right away. Otherwise, they are stored in memory and will be accessible via the cursor with respect to the ttl. The cursor will be removed on the server automatically after the specified amount of time. This is useful to ensure garbage collection of cursors that are not fully fetched by clients. If not set, a server-defined value will be used (default: 30 seconds). The time-to-live is renewed upon every access to the cursor.
- Returns:
- this
-
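batchSize and ttl are typically tuned together: a client that consumes results slowly needs a cursor that survives between round trips. A small sketch with illustrative values:

```java
import com.arangodb.model.AqlQueryOptions;

public class CursorTuningExample {
    public static void main(String[] args) {
        AqlQueryOptions options = new AqlQueryOptions()
                .batchSize(100) // at most 100 documents per server round trip
                .ttl(120);      // keep the cursor alive for 120 s between fetches
        System.out.println(options.getBatchSize() + " / " + options.getTtl());
    }
}
```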
clone
-
getCustomOptions
-
customOption
Sets an additional custom option in the form of a key-value pair.
- Parameters:
key
- option name
value
- option value
- Returns:
- this
-
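customOption is an escape hatch for server-side query options the driver does not model explicitly. The key below is made up purely for illustration; it is passed through to the server as given:

```java
import com.arangodb.model.AqlQueryOptions;

public class CustomOptionExample {
    public static void main(String[] args) {
        // "someNewServerOption" is a hypothetical key, not a real server option.
        AqlQueryOptions options = new AqlQueryOptions()
                .customOption("someNewServerOption", true);
        System.out.println(options.getCustomOptions());
    }
}
```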
getAllowDirtyReads
-
allowDirtyReads
- Parameters:
allowDirtyReads
- If you set this option to true and execute the query against a cluster deployment, then the Coordinator is allowed to read from any shard replica and not only from the leader. You may observe data inconsistencies (dirty reads) when reading from followers, namely obsolete revisions of documents because changes have not yet been replicated to the follower, as well as changes to documents before they are officially committed on the leader. This feature is only available in the Enterprise Edition.
- Returns:
- this
-
getAllowRetry
-
allowRetry
- Parameters:
allowRetry
- Set this option to true to make it possible to retry fetching the latest batch from a cursor. This makes it possible to safely retry invoking ArangoCursor.next() in case of I/O exceptions (which are actually thrown as ArangoDBException with cause IOException). If set to false (default), then it is not safe to retry invoking ArangoCursor.next() in case of I/O exceptions, since the request to fetch the next batch is not idempotent (i.e. the cursor may advance multiple times on the server). Note: once you have successfully received the last batch, you should call Closeable.close() so that the server does not unnecessarily keep the batch until the cursor times out (ttl(Integer)).
- Returns:
- this
- Since:
- ArangoDB 3.11
-
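With allowRetry(true), a transient I/O failure while fetching a batch can be retried safely. The retry loop below is a sketch and not part of the driver; the query(String, Class, AqlQueryOptions) overload and placeholder collection name are assumptions:

```java
import java.io.IOException;

import com.arangodb.ArangoCursor;
import com.arangodb.ArangoDBException;
import com.arangodb.ArangoDatabase;
import com.arangodb.model.AqlQueryOptions;

public class AllowRetryExample {

    // Hypothetical helper: only safe because allowRetry(true) makes
    // re-fetching the latest batch repeatable on the server side.
    static String nextWithRetry(ArangoCursor<String> cursor, int maxAttempts) {
        ArangoDBException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return cursor.next();
            } catch (ArangoDBException e) {
                last = e; // typically wraps an IOException
            }
        }
        throw last;
    }

    static void consume(ArangoDatabase db) throws IOException {
        AqlQueryOptions options = new AqlQueryOptions().allowRetry(true).batchSize(50);
        try (ArangoCursor<String> cursor =
                     db.query("FOR d IN docs RETURN d._key", String.class, options)) {
            while (cursor.hasNext()) {
                System.out.println(nextWithRetry(cursor, 3));
            }
        } // closing releases the last batch on the server before the ttl expires
    }
}
```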
getFailOnWarning
-
failOnWarning
- Parameters:
failOnWarning
- When set to true, the query will throw an exception and abort instead of producing a warning. This option should be used during development to catch potential issues early. When the attribute is set to false, warnings will not be propagated to exceptions and will be returned with the query result. There is also a server configuration option --query.fail-on-warning for setting the default value for failOnWarning, so it does not need to be set on a per-query level.
- Returns:
- this
-
getFillBlockCache
-
fillBlockCache
- Parameters:
fillBlockCache
- if set to true or not specified, this will make the query store the data it reads via the RocksDB storage engine in the RocksDB block cache. This is usually the desired behavior. The option can be set to false for queries that are known to either read a lot of data that would thrash the block cache, or for queries that read data known to be outside of the hot set. By setting the option to false, data read by the query will not make it into the RocksDB block cache if it is not already in there, thus leaving more room for the actual hot set.
- Returns:
- this
- Since:
- ArangoDB 3.8.1
-
getForceOneShardAttributeValue
-
forceOneShardAttributeValue
- Parameters:
forceOneShardAttributeValue
- This query option can be used in complex queries in case the query optimizer cannot automatically detect that the query can be limited to only a single server (e.g. in a disjoint smart graph case). If the option is set incorrectly, i.e. to a wrong shard key value, then the query may be shipped to a wrong DB server and may not return results (i.e. empty result set). Use at your own risk.
- Returns:
- this
-
getFullCount
-
fullCount
- Parameters:
fullCount
- if set to true and the query contains a LIMIT clause, then the result will have an extra attribute with the sub-attributes stats and fullCount, { ... , "extra": { "stats": { "fullCount": 123 } } }. The fullCount attribute will contain the number of documents in the result before the last LIMIT in the query was applied. It can be used to count the number of documents that match certain filter criteria, but only return a subset of them, in one go. It is thus similar to MySQL's SQL_CALC_FOUND_ROWS hint. Note that setting the option will disable a few LIMIT optimizations and may lead to more documents being processed, and thus make queries run longer. Note that the fullCount attribute will only be present in the result if the query has a LIMIT clause and the LIMIT clause is actually used in the query.
- Returns:
- this
-
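A typical use is paging: return one page of results but learn how many documents matched in total. The sketch below assumes the driver exposes the fullCount statistic via the cursor's stats object and uses placeholder names:

```java
import com.arangodb.ArangoCursor;
import com.arangodb.ArangoDatabase;
import com.arangodb.model.AqlQueryOptions;

public class FullCountExample {
    static void firstPage(ArangoDatabase db) {
        AqlQueryOptions options = new AqlQueryOptions().fullCount(true);
        ArangoCursor<String> cursor = db.query(
                "FOR d IN docs FILTER d.active == true LIMIT 0, 10 RETURN d._key",
                String.class, options);
        // Number of matches before the last LIMIT was applied (assumes the
        // cursor exposes the fullCount statistic on its stats object).
        Long total = cursor.getStats().getFullCount();
        System.out.println("showing 10 of " + total + " matching documents");
    }
}
```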
getIntermediateCommitCount
-
intermediateCommitCount
- Parameters:
intermediateCommitCount
- Maximum number of operations after which an intermediate commit is performed automatically. Honored by the RocksDB storage engine only.
- Returns:
- this
- Since:
- ArangoDB 3.2.0
-
getIntermediateCommitSize
-
intermediateCommitSize
- Parameters:
intermediateCommitSize
- Maximum total size of operations after which an intermediate commit is performed automatically. Honored by the RocksDB storage engine only.
- Returns:
- this
- Since:
- ArangoDB 3.2.0
-
getMaxDNFConditionMembers
-
maxDNFConditionMembers
- Parameters:
maxDNFConditionMembers
- A threshold for the maximum number of OR sub-nodes in the internal representation of an AQL FILTER condition. You can use this option to limit the computation time and memory usage when converting complex AQL FILTER conditions into the internal DNF (disjunctive normal form) format. FILTER conditions with a lot of logical branches (AND, OR, NOT) can take a large amount of processing time and memory. This query option limits the computation time and memory usage for such conditions. Once the threshold value is reached during the DNF conversion of a FILTER condition, the conversion is aborted, and the query continues with a simplified internal representation of the condition, which cannot be used for index lookups. You can set the threshold globally instead of per query with the --query.max-dnf-condition-members startup option.
- Returns:
- this
-
getMaxNodesPerCallstack
-
maxNodesPerCallstack
- Parameters:
maxNodesPerCallstack
- The number of execution nodes in the query plan after which stack splitting is performed to avoid a potential stack overflow. Defaults to the configured value of the startup option --query.max-nodes-per-callstack. This option is only useful for testing and debugging and normally does not need any adjustment.
- Returns:
- this
-
getMaxNumberOfPlans
-
maxNumberOfPlans
- Parameters:
maxNumberOfPlans
- Limits the maximum number of plans that are created by the AQL query optimizer.
- Returns:
- this
-
getMaxPlans
Deprecated. For removal, use getMaxNumberOfPlans() instead.
-
maxPlans
Deprecated. For removal, use maxNumberOfPlans(Integer) instead.
- Parameters:
maxPlans
- Limits the maximum number of plans that are created by the AQL query optimizer.
- Returns:
- this
-
getMaxRuntime
-
maxRuntime
- Parameters:
maxRuntime
- The query has to be executed within the given runtime or it will be killed. The value is specified in seconds. The default value is 0.0 (no timeout).
- Returns:
- this
-
getMaxTransactionSize
-
maxTransactionSize
- Parameters:
maxTransactionSize
- Transaction size limit in bytes. Honored by the RocksDB storage engine only.
- Returns:
- this
- Since:
- ArangoDB 3.2.0
-
getMaxWarningCount
-
maxWarningCount
- Parameters:
maxWarningCount
- Limits the maximum number of warnings a query will return. The number of warnings a query will return is limited to 10 by default, but that number can be increased or decreased by setting this attribute.
- Returns:
- this
- Since:
- ArangoDB 3.2.0
-
getOptimizer
-
optimizer
- Parameters:
optimizer
- Options related to the query optimizer.
- Returns:
- this
-
getProfile
-
profile
- Parameters:
profile
- If set to true, additional query profiling information will be returned in the sub-attribute profile of the extra return attribute, provided the query result is not served from the query cache.
- Returns:
- this
-
getSatelliteSyncWait
-
satelliteSyncWait
- Parameters:
satelliteSyncWait
- This Enterprise Edition parameter allows configuring how long a DB-Server has to bring the satellite collections involved in the query into sync. The default value is 60.0 (seconds). When the maximum time has been reached, the query will be stopped.
- Returns:
- this
- Since:
- ArangoDB 3.2.0
-
getShardIds
-
shardIds
Restrict query to shards by given ids. This is an internal option. Use at your own risk.
- Parameters:
shardIds
- Returns:
- this
-
getSkipInaccessibleCollections
-
skipInaccessibleCollections
- Parameters:
skipInaccessibleCollections
- AQL queries (especially graph traversals) will treat collections to which a user has no access rights as if these collections were empty. Instead of returning a forbidden access error, your queries will execute normally. This is intended to help with certain use cases: a graph contains several collections, and different users execute AQL queries on that graph. You can then naturally limit the accessible results by changing the access rights of users on collections. This feature is only available in the Enterprise Edition.
- Returns:
- this
- Since:
- ArangoDB 3.2.0
-
getSpillOverThresholdMemoryUsage
-
spillOverThresholdMemoryUsage
- Parameters:
spillOverThresholdMemoryUsage
- This option allows queries to store intermediate and final results temporarily on disk if the amount of memory used (in bytes) exceeds the specified value. This is used for decreasing the memory usage during the query execution. This option only has an effect on queries that use the SORT operation but without a LIMIT, and if you enable the spillover feature by setting a path for the directory to store the temporary data in with the --temp.intermediate-results-path startup option. Default value: 128 MB. Spilling data from RAM onto disk is an experimental feature and is turned off by default. The query results are still built up entirely in RAM on Coordinators and single servers for non-streaming queries. To avoid the buildup of the entire query result in RAM, use a streaming query (see the stream option).
- Returns:
- this
-
getSpillOverThresholdNumRows
-
spillOverThresholdNumRows
- Parameters:
spillOverThresholdNumRows
- This option allows queries to store intermediate and final results temporarily on disk if the number of rows produced by the query exceeds the specified value. This is used for decreasing the memory usage during the query execution. In a query that iterates over a collection that contains documents, each row is a document, and in a query that iterates over temporary values (i.e. FOR i IN 1..100), each row is one of such temporary values. This option only has an effect on queries that use the SORT operation but without a LIMIT, and if you enable the spillover feature by setting a path for the directory to store the temporary data in with the --temp.intermediate-results-path startup option. Default value: 5000000 rows. Spilling data from RAM onto disk is an experimental feature and is turned off by default. The query results are still built up entirely in RAM on Coordinators and single servers for non-streaming queries. To avoid the buildup of the entire query result in RAM, use a streaming query (see the stream option).
- Returns:
- this
-
getStream
-
stream
- Parameters:
stream
- Specify true and the query will be executed in a streaming fashion. The query result is not stored on the server, but calculated on the fly. Beware: long-running queries will need to hold the collection locks for as long as the query cursor exists. When set to false, a query will be executed right away in its entirety. In that case, query results are either returned right away (if the result set is small enough), or stored on the arangod instance and accessible via the cursor API (with respect to the ttl). It is advisable to only use this option on short-running queries or without exclusive locks (write-locks on MMFiles). Please note that the query options cache, count and fullCount will not work on streaming queries. Additionally, query statistics, warnings and profiling data will only be available after the query is finished. The default value is false.
- Returns:
- this
- Since:
- ArangoDB 3.4.0
-
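For result sets too large to materialize on the server, a streaming cursor computes batches on the fly. A sketch, with a placeholder collection name and the query(String, Class, AqlQueryOptions) overload assumed:

```java
import java.io.IOException;

import com.arangodb.ArangoCursor;
import com.arangodb.ArangoDatabase;
import com.arangodb.model.AqlQueryOptions;

public class StreamExample {
    static void scan(ArangoDatabase db) throws IOException {
        // Streaming: no server-side result buildup, but collection locks are
        // held for as long as the cursor exists, so consume and close promptly.
        AqlQueryOptions options = new AqlQueryOptions().stream(true).batchSize(1000);
        try (ArangoCursor<String> cursor =
                     db.query("FOR d IN bigCollection RETURN d._key", String.class, options)) {
            cursor.forEachRemaining(System.out::println);
        }
    }
}
```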
getRules
-
rules
- Parameters:
rules
- A list of to-be-included or to-be-excluded optimizer rules can be put into this attribute, telling the optimizer to include or exclude specific rules. To disable a rule, prefix its name with a -; to enable a rule, prefix it with a +. There is also a pseudo-rule all, which matches all optimizer rules.
- Returns:
- this
-
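For example, to disable every optimizer rule and then re-enable only index usage (a sketch; use-indexes is assumed to be an available rule name on your server version):

```java
import java.util.Arrays;

import com.arangodb.model.AqlQueryOptions;

public class RulesExample {
    public static void main(String[] args) {
        // "-all" disables every rule; "+use-indexes" re-enables index usage.
        AqlQueryOptions options = new AqlQueryOptions()
                .rules(Arrays.asList("-all", "+use-indexes"));
        System.out.println(options.getRules());
    }
}
```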