Table of Contents


Using SPARQL versus using Prolog

Choosing a Query Engine


Valid result formats

Exported functions

Extension functions

SELECT bindings and ASK results

Returning triples from CONSTRUCT and DESCRIBE queries


SPARQL and first-class triples


Dataset loading

Default dataset handling

Verbose output

SPARQL and encoded values

SPARQL Query Options


Function index


This document describes AllegroGraph's SPARQL implementation. Each of the following functions is exported from the db.agraph.sparql package. This package is also nicknamed sparql.

For notes on AllegroGraph's conformance to the W3C specification please see this document. For notes on the SPARQL 1.1 query engine, see the release notes.

As of version 4.4, AllegroGraph supports all of SPARQL 1.1 Query 1.

AllegroGraph also provides partial support for SPIN.

Conceptually, SPARQL has three layers:

Currently, input and output from each of these layers is limited (for example, the query plan is not available to user code, but parsed output is). This may change in a future release.

Queries can be planned and executed by different query engines. AllegroGraph currently has these query engines (some of which are just alternate names): :algebra, :remote, :sbqe, :set-based, :sparql-1.0, :sparql-1.1 . See below for details.

The engine to use is specified by the :engine keyword to run-sparql; acceptable values are returned by valid-query-engines. More information on how to choose the engine to use is given below.

The engine used if the :engine argument is not supplied is returned by the function default-query-engine. You can change the default engine using the QueryEngine parameter in the AllegroGraph configuration file. For example:

QueryEngine sparql-1.1 

This parameter should be placed alongside other global parameters like license or port. You can also set the default engine for the current session by setfing default-query-engine. Finally, you can specify the engine on a per-query basis by using the PREFIX notation. For example, to specify that a query use the SPARQL 1.0 query engine, you would prepend

 PREFIX franzOption_queryEngine: <franz:sparql-1.0> 
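For example, a complete query that forces the SPARQL 1.0 engine might look like this (the ex: prefix and the data it matches are illustrative):

    PREFIX franzOption_queryEngine: <franz:sparql-1.0>  
    PREFIX ex: <http://example.com/>  
    SELECT ?name {  
      ?person ex:name ?name .  
    } 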

Using SPARQL versus using Prolog

Prolog is an alternative query mechanism for AllegroGraph. The Prolog tutorial provides an introduction to using Prolog and AllegroGraph together. Prolog is further described here in the Lisp Reference (where further links are provided). This section is a brief note on the differences between our SPARQL query engine and the Prolog select query engine. The main differences are:

Choosing a Query Engine

AllegroGraph currently has two query engines:

  • sparql-1.0 — This is the query engine used in all previous versions of AllegroGraph. It fully supports SPARQL 1.0. It operates in an essentially depth-first fashion, streaming solutions as it finds them.
  • sparql-1.1 — This engine supports all of SPARQL 1.1 Query and operates in a breadth-first style that processes the entire query in one pass. This set-based engine also provides partial support for SPIN functions and magic properties.

Both the sparql-1.0 and the sparql-1.1 engines are based around the SPARQL algebraic description. They differ in which version of SPARQL they support, in the sorts of optimizations they perform and in how they execute a query plan.

Consider a query like the following:

select * {  
  ?x :p1 ?o .  
  ?o :p2 ?y .  
} 

The sparql-1.0 engine will ask the AllegroGraph storage layer for a cursor to traverse the triples whose predicate is :p1 (call this the p1 cursor). For each of these triples, it will ask the storage layer for a cursor to traverse the triples whose subject is the object of p1's current triple and whose predicate is :p2 (call this the p2 cursor). It will traverse the p2 cursor and return results. When it reaches the end of p2, it will move the p1 cursor forward, create a new p2 cursor, and continue iterating.

The sparql-1.1 engine will create a cursor just like p1 above. Unlike the sparql-1.0 engine, it will immediately grab all of the resulting values for the triples returned. It will then create a single p2 cursor to iterate over all of the triples whose predicate is :p2. It will iterate over p2 and merge the results in memory. In short, the sparql-1.1 engine proceeds through the query plan one step at a time and accumulates all of the results immediately.

Because it streams results incrementally, the sparql-1.0 engine is typically better for queries that use a LIMIT without an ORDER BY. The sparql-1.1 engine does almost the same amount of work to gather a single result as it does to gather all of them. 2

On the other hand, the sparql-1.1 engine is often many, many times faster for queries with multiple joins or complex filters.

In summary then, the sparql-1.0 engine is space efficient and (often) time profligate, whereas the sparql-1.1 engine is time efficient and (sometimes) space profligate. Queries that must build up very large intermediate result sets can fail with memory errors when using the standard sparql-1.1 executor (similar queries using the sparql-1.0 engine would instead simply time out). Because of this, AllegroGraph 4.7's sparql-1.1 engine adds a second query execution mode that provides control over the time/space tradeoff. Whereas the sparql-1.0 engine processes results one at a time and the standard sparql-1.1 engine processes them all at once, the new executor processes them in chunks. Small chunks are closer in spirit to the older sparql-1.0 engine 3, whereas larger chunks are closer to the entire-set-at-a-time executor.


AllegroGraph has long supported a version of SPARQL Update based on a draft specification. The sparql-1.1 engine supports the new version.

Valid result formats

There are three possible outputs from a SPARQL query:

twinql provides a number of different ways to serialize these results to a stream, provided as keyword symbols to the query functions. The results-format argument controls how ASK and SELECT query results are serialized; some possible formats are :sparql-xml, which serializes the result into the SPARQL XML result format, and :sparql-json, which uses the JSON format.

For CONSTRUCT and DESCRIBE, the value of the rdf-format argument applies.

The default formats are :sparql-xml and :rdf/xml respectively. Providing an unrecognized format will signal an error.

You can find out which formats are allowed for a particular verb by using get-allowed-results-formats and get-allowed-rdf-formats.

Exported functions

parse-sparql string  &optional  default-prefixes  default-base

Parse a SPARQL query string into an s-expression.

This function is useful for three reasons: validation and inspection of queries, manual manipulation of query expressions without text processing, and performing parsing at a more convenient time than during query execution.

You do not need an open triple-store in order to parse a query. Any parse errors will signal a sparql-parse-error.

The optional arguments provide BASE and PREFIX arguments to the parser without inserting them textually into the query.

  • default-base — A string to use as the BASE for the SPARQL query.
  • default-prefixes — A hash-table mapping string prefixes to their expansions, or a list of two-element lists where each sublist contains the prefix and its expansion. For example (these are the standard W3C namespace expansions):

    (("rdf" "http://www.w3.org/1999/02/22-rdf-syntax-ns#")  
     ("rdfs" "http://www.w3.org/2000/01/rdf-schema#")  
     ("owl" "http://www.w3.org/2002/07/owl#")  
     ("xsd" "http://www.w3.org/2001/XMLSchema#")  
     ("xs" "http://www.w3.org/2001/XMLSchema#")  
     ("fn" "http://www.w3.org/2005/xpath-functions#")  
     ("err" "http://www.w3.org/2005/xqt-errors#")) 

This list uses the same format as db.agraph:standard-namespaces.

parse-sparql returns the s-expression representation of the query string.
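Putting this together, here is a sketch of parsing a query once and reusing the parsed representation (the query string is illustrative; run-sparql accepts the s-expression as described below):

    (let ((parsed (parse-sparql "SELECT ?s { ?s ?p ?o }")))  
      ;; parsed is the s-expression representation; it can be passed  
      ;; to run-sparql repeatedly without re-parsing.  
      (run-sparql parsed :results-format :alists)) 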

run-sparql query  &rest  args  &key  engine  db  output-stream  rdf-format  results-format  host  &allow-other-keys

run-sparql takes a SPARQL query as input and returns bindings or new triples as output.

Since AllegroGraph 3.0 it is a convenient wrapper for the db-run-sparql methods specialized on particular database classes and query engines. You might consider using those methods directly to gain more control over the execution of your queries.

You should consider specifying an engine argument in your invocations of run-sparql; the choice of default execution engine is not guaranteed to remain the same in future releases.

Allowable values for engine are keyword symbols returned by valid-query-engines.

The precise arguments supplied to run-sparql vary according to the query engine. These are the typical arguments expected by the default engines.

SELECT and ASK query results will be presented according to the value provided for results-format, whilst the RDF output of DESCRIBE and CONSTRUCT will be serialized according to rdf-format. Both of these arguments take keyword values.

If the format is programmatic (that is, it is intended to return values rather than print a representation; :arrays is an example) then any results will be returned as the first value, and nothing will be printed on output-stream.

  • The query can be a string, which will be parsed by parse-sparql, or an s-expression as produced by parse-sparql. The s-expression syntax is described in greater detail in the reference. If you expect to run a query many times, you can avoid some parser overhead by parsing your query once and calling run-sparql with the parsed representation.

    If query is a string, then default-base and default-prefixes are provided to parse-sparql to use when parsing the query. See the documentation for that function for details. Parser errors signaled within parse-sparql will be propagated onwards by run-sparql.
  • default-base A string to use as the BASE for the SPARQL query (only used when query is a string).

  • default-prefixes A hash-table mapping string prefixes to their expansions or a list of two element lists where each sublist contains the prefix and its expansion (only used when query is a string; see parse-sparql for details).

  • Results or new triples will be serialized to output-stream. If a programmatic format is chosen for output, the stream is irrelevant. An error will be signaled if output-stream is not a stream, t, or nil.

  • If limit, offset, from, or from-named are provided, they override the corresponding values specified in the query string itself. As FROM and FROM NAMED together define a dataset, and the SPARQL Protocol specification states that a dataset specified in the protocol (in this case, the programmatic API) overrides that in the query, if either from or from-named are non-nil then any dataset specifications in the query are ignored. You can specify that the contents of the query are to be partially overridden by providing t as the value of one of these arguments. This is interpreted as 'use the contents of the query'. from and from-named should be lists of URIs: future-parts, UPIs, or strings.

  • default-dataset-behavior controls how the query engine builds the dataset environment if FROM or FROM NAMED are not provided. Valid options are :all (ignore graphs; include all triples) and :default (include only the store's default graph).

  • default-graph-uris allows you to specify a list of resources which, when encountered in the SPARQL dataset specification, are to be treated as the default graph of the store. Each resource can be a resource UPI, resource future-part, or a URI string. For example, specifying '("") will cause a query featuring

    FROM <>  
    FROM <> 

    to execute against the union of the contents of the named graph <> and the store's default graph, as determined by (default-graph-upi db).

  • with-variables should be an alist of variable names and values. The variable names can be strings (which will be interned in the package in which the query is parsed) or symbols (which should be interned in the package in which the query is to be, or was, parsed). The variable names can include or omit a leading '?'. Note that a query literal in code might be parsed at compile time. Using strings is the most reliable method for naming variables.

    Before the query is executed, the named variables will be bound to the provided values.

    This allows you to use variables in your query which are externally imposed, or generated by other queries. The format expected by with-variables is the same as that used for each element of the list returned by the :alists results-format.

  • db (*db* by default) specifies the triple store against which queries should run.

  • destination-db (db by default) specifies the triple store against which Update modifications should take place. This is primarily of use when db is a read-only wrapper around a writable store, such as when reasoning has been applied.

  • If verbosep is non-nil, status information is written to *sparql-log-stream* (*standard-output* by default).
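As an illustration of the with-variables argument described above, the following sketch binds ?s before the query runs (the query, the ex namespace, and the subject are hypothetical; note the use of a string for the variable name, as recommended):

    (run-sparql "SELECT ?o { ?s ?p ?o }"  
                :with-variables (list (cons "?s" !ex:someSubject))  
                :results-format :alists) 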

Three additional extensions are provided for your use.

  • If extendedp is true (or *use-extended-sparql-verbs-p* is true, and the argument omitted) then extensions like AllegroGraph's GEO syntax are enabled. Extensions are enabled by default in all versions of AllegroGraph after 3.2.

  • If memoizep is true (or *build-filter-memoizes-p* is true, and the argument omitted) calls to SPARQL query functions (such as STR, fn:matches, and extension functions) will be memoized for the duration of the query. For most queries this will yield speed increases when FILTER or ORDER BY are used, at the cost of additional memory consumption (and consequent GC activity). For some queries (those where repetition of function calls is rare) the cost of memoization will outweigh the benefits. In large queries which call SPARQL functions on many values, the size of the memos can grow large.

    Memoization also requires that your extension functions do not depend on side effects. The standard library is correct in this regard.

    In some circumstances you can achieve substantial speed increases by sharing your memos between queries. Create a normal eql hash-table with (make-hash-table), passing it as the value of the memos argument to run-sparql. This hash-table will gradually fill with memos for each used query function.

If you wish to globally enable memoization, set the variables as follows:

  (setf *build-filter-memoizes-p* t)  
  (setf *sparql-sop-memos* (make-hash-table)) 

Be aware that the size of *sparql-sop-memos* could grow very large indeed. You might consider using a weak hash-table, or periodically discarding the contents of the hash-table.
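A sketch of sharing memos between two queries, per the memos argument described above (query-1 and query-2 stand in for your own queries):

    ;; Create one memo table and reuse it across queries.  
    (let ((memos (make-hash-table)))  
      (run-sparql query-1 :memoizep t :memos memos)  
      (run-sparql query-2 :memoizep t :memos memos)) 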

  • load-function is a function with signature (uri db &optional type) or nil. If it is a function, it is called once for each FROM and FROM NAMED parameter making up the dataset of the query. The execution of the query commences once each parameter has been processed. The type argument is either :from or :from-named, and the uri argument is a part (ordinarily a future-part) naming a URI. The default value is taken from *dataset-load-function*. You can use this hook function to implement loading of RDF before the query is executed.

  • permitted-verbs is a keyword, either :all or :read-only. This defaults to :all, and will permit any kind of SPARQL or SPARQL/Update query. Use :read-only to allow only SELECT, ASK, DESCRIBE, and CONSTRUCT queries. Note that you must also enable extended mode (using :extendedp :update) to use SPARQL/Update operations.

The values returned by run-sparql are dependent on the verb used. The first value is typically disregarded in the case of results being written to output-stream. If output-stream is nil, the first value will be the results collected into a string (similar to the way in which cl:format operates).

The second value is the query verb: one of :select, :ask, :construct, or :describe. Other values are possible in extended mode.

The third value, for SELECT queries only, is a list of variables. This list can be used as a key into the values returned by the :arrays and lists results formats, amongst other things.

Individual results formats are permitted to return additional values.
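For example, a sketch of capturing all three return values with a programmatic results format (the query is illustrative):

    (multiple-value-bind (results verb variables)  
        (run-sparql "SELECT ?s ?o { ?s ?p ?o }"  
                    :results-format :arrays)  
      ;; verb will be :select; variables is the list of query  
      ;; variables, usable as a key into each result array.  
      (values results verb variables)) 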

default-query-engine &optional  engine  db

Returns the query-engine that will be used if no other engine is specified.

If a triple-store is opened, then the QueryEngine parameter of the AllegroGraph configuration file will be used. You can use setf to change the default for the current session.
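A minimal sketch of inspecting and then changing the session default, per the setf support described above:

    ;; Inspect the current default engine.  
    (default-query-engine)  
    ;; Change the default for the current session.  
    (setf (default-query-engine) :sparql-1.1) 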

get-allowed-results-formats &optional  verb  engine
Returns a list of keyword symbols that are valid when applied as values of results-format to a query with the given verb. If verb is not provided, the intersection of the formats valid for :ask and :select (the two permitted verbs) is returned. With AllegroGraph 3.0, an additional engine argument is available. In a similar manner to verb, omitting this restricts the returned values to those that apply to all built-in query engines.
get-allowed-rdf-formats &optional  verb  engine

Returns a list of keyword symbols that are valid when applied as values of rdf-format to a query with the given verb. If verb is not provided, the intersection of the formats valid for :construct and :describe (the two permitted verbs) is returned. With AllegroGraph 3.0, an additional engine argument is available. In a similar manner to verb, omitting this restricts the returned values to those that apply to all built-in query engines.


  • Get formats for CONSTRUCT queries executed by the algebra query engine.

    (get-allowed-rdf-formats :construct :algebra)

valid-query-engines

Returns a list of keyword symbols that are valid when applied as values of the engine argument to run-sparql or db-run-sparql.

run-sparql is a convenient way to call the generic function db-run-sparql. The latter specializes on the triple-store class and query engine. In general, you should continue to use run-sparql in your code.

db-run-sparql db  engine  query  &rest  args  &key  &allow-other-keys

A generic function to dispatch query execution across different SPARQL engines and database types.

N.B., if you request a results-format of :cursor, you should yield bindings from it within a (with-query-environment) form, or avoid the use of filter functions that rely on the implicit environment (such as fn:currentDate).

Serialized results formats are provided with a managed environment; only returned cursors need this.

Extension functions

SPARQL allows for query engines to associate extension functions with URIs, and call them from within queries.

You can define your own URI functions in twinql through defurifun, or associate existing functions with a URI through associate-function-with-uri. defurifun does some manipulation of the arguments, so you should use it whenever possible.

associate-function-with-uri function  uri  &key  cache-now-p  db
Assert a mapping between uri, which is a string or a valid part, and the provided function, which is a symbol or a function. If cache-now-p, and function is a symbol, its function binding is stored instead of the symbol itself.
print-function-uri-mappings &key  stream  db
Print all mappings between URIs and functions to stream (*standard-output* by default).
defurifun name  uri  args  &body  body
Define a new function, name, and associate it with uri as with associate-function-with-uri. args is not evaluated, exactly as with defun.

Here's an example: a function that will do an HTTP HEAD request against the provided URL, returning the HTTP status code as an integer literal, or 0 if there's a problem.

(The built-in functions are quite robust, so a Lisp integer will be treated as an RDF literal with data type xsd:integer.)

(defurifun ex-head-request !<> (uri)  
  (when uri  
    (format t "~&Performing HTTP HEAD request on <~A>...~%"  
            (upi->value uri))  
    ;; do-http-request returns the body as its first value and the  
    ;; response code as its second; return 0 if the request fails.  
    (or (ignore-errors  
          (nth-value 1  
            (net.aserve.client:do-http-request (upi->value uri)  
                                               :method :head)))  
        0)))  

You can use this function in a query exactly as you would a built-in function.

Using this data as an example:

<> <> "200"^^<> . 

we can run a query like so:

sparql(54): (run-sparql "  
PREFIX f: <>  
SELECT ?x {  
  ?x <> ?y .  
  FILTER ( ?y = f:head(\"\") )  
}"  
  :results-format :count) 

which produces this output:

Performing HTTP HEAD request on <>...  

… we know, then, that the URL is returning a 200 status code.

Note that these filter functions can be called an arbitrary number of times during the execution of a query. It's not a good idea to actually perform expensive operations like HTTP requests in your queries.

SELECT bindings and ASK results

run-sparql allows you programmatic access to results in a number of ways.

Any of the following results-formats are suitable as arguments to SELECT or ASK queries:

The following results-formats are suitable as arguments to SELECT queries:

The following results-formats are suitable as arguments to ASK queries:

Returning triples from CONSTRUCT and DESCRIBE queries

Any of the following rdf-formats are suitable as arguments to CONSTRUCT or DESCRIBE queries:

The following rdf-format is suitable for DESCRIBE queries:

The following rdf-format is suitable for CONSTRUCT queries:

Finally, SPARQL can return results from CONSTRUCT and DESCRIBE queries as in-memory triple stores, using the :in-memory format. These triple-stores support the full AllegroGraph API and can therefore be queried and serialized just like a regular triple-store. When no references to them remain, they will be garbage collected just like any other Lisp data-structure.
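A sketch of the :in-memory format described above, assuming (per the discussion of programmatic formats) that the new triple-store is returned as the first value (the CONSTRUCT query is illustrative):

    (let ((store (run-sparql "CONSTRUCT { ?s ?p ?o } WHERE { ?s ?p ?o }"  
                             :rdf-format :in-memory)))  
      ;; store supports the full AllegroGraph API, so it can itself  
      ;; be queried by passing it as the db argument.  
      (run-sparql "SELECT ?s { ?s ?p ?o }"  
                  :db store  
                  :results-format :count)) 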

You can use get-allowed-results-formats and get-allowed-rdf-formats to access these allowed values dynamically at run-time.


Programmatic results associate values with variables. Variables are parsed into symbols by the query parser.

The mapping from variables to symbols is straightforward, and best illustrated by example:

If you provide variables in a with-variables argument, a leading ? is prepended to the variable name if it is not already present. Your queries will run correctly if you provide them as s-expressions and do not prepend ?, but:

All variables created by the parser are interned in the current package, as if by a call to cl:intern. You should adhere to these rules when processing results or providing bindings using with-variables.

SPARQL and first-class triples

AllegroGraph permits you to make assertions about triple IDs (UPIs of type triple-id). SPARQL offers no support for this: only named graphs are supported. First-class triples are entirely outside the scope of both RDF and SPARQL.

SPARQL queries against stores using first-class triples are not supported. twinql makes only limited provisions for such queries:

It bears repeating that SPARQL is not intended to work with first-class triples; any queries that run successfully are little more than accidents, and named graphs are a better choice in all cases.


Dataset loading

It is sometimes useful to be able to process the SPARQL dataset — the set of URIs provided as FROM and FROM NAMED parameters — when a query is executed. AllegroGraph provides a dataset load hook for your convenience.

You may bind a function to *dataset-load-function* to specify a default, or pass one as the :load-function argument to run-sparql. Passing nil disables the hook for that query. The argument list of the function is described in *dataset-load-function*.
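A minimal sketch of such a hook, using the (uri db &optional type) signature described in *dataset-load-function* (the body is illustrative; query stands in for your own query):

    ;; Log each FROM / FROM NAMED parameter before the query runs.  
    (defun my-load-hook (uri db &optional type)  
      (declare (ignore db))  
      (format t "~&Dataset parameter (~A): ~A~%" type uri))  
    
    (run-sparql query :load-function 'my-load-hook) 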

Default dataset handling

When no dataset (FROM and FROM NAMED clauses) is provided to a query, the actual dataset against which the query is run is not defined by the SPARQL specification. twinql provides you with two options: :default, meaning that the default part of the dataset contains only the default graph of the store; and :all, whereby both the default and named parts of the dataset contain every graph in the store.

You can control the default behavior by setting *default-dataset-behavior* (formerly *sparql-default-graph-behavior*), and set the behavior for specific queries by passing the :default-dataset-behavior argument to run-sparql.
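For example, a sketch of overriding the behavior for a single query (the query is illustrative):

    ;; Run this query against every graph in the store, regardless  
    ;; of the value of *default-dataset-behavior*.  
    (run-sparql "SELECT ?s { ?s ?p ?o }"  
                :default-dataset-behavior :all) 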

Verbose output

Logging output when queries are run in verbose mode is written to db.agraph.query.sparql:*sparql-log-stream*. This is *standard-output* by default.

SPARQL and encoded values

AllegroGraph offers the ability to directly encode a range of literal values — numbers, geospatial values, and more — directly within a UPI, without the overhead of a string representation as an RDF literal. Whenever these encoded values are encountered by AllegroGraph's printing functions, and in many other situations, they are seamlessly treated as RDF literals, but with significant time and space savings.

AllegroGraph's implementation of most SPARQL and XQuery operators also handles encoded values transparently.

SPARQL Query Options

AllegroGraph provides control over a number of internal settings by extending the SPARQL PREFIX notation. Options are changed by prepending a PREFIX of the form:

PREFIX franzOption_optionName: <franz:optionValue> 

where optionName and optionValue are replaced by the name and value of the option being changed.
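For example, to enable the chunkProcessingAllowed option described below for a single query (the query body is illustrative):

    PREFIX franzOption_chunkProcessingAllowed: <franz:yes>  
    SELECT ?s { ?s ?p ?o } 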

Options can also be specified in the configuration file, which is described in the Server Configuration document. See here in that document for how options are specified.

The available options are subject to change as some of them are experimental. The following is a list of the currently available options:


Controls whether or not filter constraints are merged into BGP patterns. The possible values are:

  • yes - turn the option on
  • no - turn the option off

Controls whether to use Chunk at a Time (CaaT) processing.

It can be:

  • :possibly - use CaaT based on the query engine's heuristics. For example, CaaT is often (but not always) the best choice for unordered queries with small limits.
  • :yes - always use CaaT when possible (some query clauses like EXISTS filters and SPIN magic properties do not yet support CaaT).
  • :no - never use CaaT.

The default value is :possibly, which means that AllegroGraph optimizes for speed rather than space.


Specifies the maximum amount of memory used by a single chunk. Defaults to 8G; the minimum allowed is 200M.

See the chunkProcessingAllowed option for additional control.


Specifies the chunk processing size.

Controls the size (in rows of answers) of the chunks used by the CaaT executor. The higher the number, the larger the chunks processed will be, which is both more efficient and more memory intensive. A typical value is 400000 or 1000000.

See the chunkProcessingAllowed option for additional control. Deprecated in favor of chunkProcessingMemory.


The strategy used to re-order BGPs in a clause.

The available strategies will depend on the query-engine used to run the query but will always include identity which tells the query planner to not reorder the triple-patterns of the BGPs. Another common choice is statistical which uses the statistics of the triple-store to try to reorder clauses most efficiently.


Specifies the number of rows to keep in memory before writing temporary files to disk.

This should be a number like 500000. The larger the value, the more memory AllegroGraph will use during query processing. Smaller values can be more memory efficient but can also perform more slowly because there will be more disk activity.


Specifies whether or not to log full scan warnings.

Generally speaking, a full triple-store scan is an indication that something has gone awry during query processing. This setting controls whether or not AllegroGraph logs a warning; the threshold is based on the size of the triple-store.

The default value is 1-million. Setting it higher will prevent the warning from appearing in the log.


Controls whether or not query execution details are logged.

If logging is on, the query engine prints additional information to the log as it plans and executes a query. The possible values are:

  • yes - turn the option on
  • no - turn the option off

Specifies an upper limit on the number of solutions that are allowed during query processing before a warning is signaled.

Queries run best when the solution space is kept small. This warning is an indication that a query is generating many intermediate results.


Controls how much free system memory must be available for a query to continue.

If the query process is using more than this setting's percentage of total physical memory, then the query will be canceled. The default value is 90%.


Specifies the memory limit per query.

If a query tries to use more than this, it will be canceled.

The timezone in which xsd:dateTimes and xsd:times are serialized. For example, if presentationTimeZone is "-02:00", then "2013-10-01T15:21:23+03:00" is serialized as "2013-10-01T10:21:23-02:00". Zoneless xsd:datetimes and xsd:times are always presented without a timezone. This option has no effect on what gets stored in the database. The allowed values are strings representing the timezone. The format of these strings is the same as in xsd:dateTimes. The special value "none" (the default) means that no conversion will take place.

Specifies the query engine to use when executing queries.

For example, to use the SPARQL 1.0 query engine, set this to :sparql-1.0.


Specifies a query timeout value in seconds.

Note that the timeout is not an interrupt; AllegroGraph checks for query timeout relatively infrequently so that a query can run for many seconds longer than the specified timeout. This is especially true for operations involving reasoning or non-triple-pattern based queries like free-text indexing or SNA path planning operators.


Controls whether or not AllegroGraph interleaves query execution and BGP clause reordering. If no, then AllegroGraph will perform all reordering during query planning. If yes, then AllegroGraph will defer reordering until query execution time. In many cases, the additional information available at execution time can enhance query performance.

Note that interleaving reordering is not always a win, because performing all ordering at query planning time allows the query engine to introduce joins, which can sometimes enhance query performance. The possible values are:

  • yes - turn the option on
  • no - turn the option off

The number of seconds to wait before a remote query times out.

This will also have an effect on SPARQL Federated query (i.e., using the SERVICE clause).

Specifies the number of results to return from a given SOLR query.

Specifies the max amount of temporary disk space that may be used by a query. Defaults to no limit. If a query tries to use more disk space than this, it will be canceled.

Sometimes queries write intermediate results to disk when they will not fit in memory. With a huge query it is possible for such temporary files to fill the filesystem. In order to prevent this, the temporaryFileDiskSpace query option may be set.


If true, then range queries will not scan typed literal triples.

This means that only encoded triples will be considered. The possible values are:

  • yes - turn the option on
  • no - turn the option off

If true, then predicate type mappings will be used for range queries.

This means that any triples whose encoded data-type does not match their predicate mapping will be ignored. The possible values are:

  • yes - turn the option on
  • no - turn the option off

Controls the use of the Negation As Failure (NAF) transform.

If yes, then the planner will look for a pattern like

optional { b-clauses }  
filter( !bound(?b) ) 

and transform it into

minus b-clauses 

The possible values are:

  • yes - turn the option on
  • no - turn the option off
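For instance, with this option on, the planner would treat the following two query fragments as equivalent (the ex: names are illustrative):

    # written with OPTIONAL / !bound  
    SELECT ?s {  
      ?s a ex:Person .  
      OPTIONAL { ?s ex:email ?b }  
      FILTER( !bound(?b) )  
    } 
    
    # after the NAF transform  
    SELECT ?s {  
      ?s a ex:Person .  
      MINUS { ?s ex:email ?b }  
    } 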

Use subject and object UPI type-codes to improve constraint inference.

Defaults to no.

If yes, then the query engine will gather information about the subjects and objects associated with particular predicates. This can be used in constraint analysis and query transformations. As an example, suppose we have a query like:

?one ex:date ?date1 .  
?two ex:date ?date2 .  
filter( ?date1 > ?date2 ) 

If there is no predicate type-mapping, then the query engine cannot make any assumptions about the range comparison. If there is a predicate type-mapping (and trustPredicateTypeMappingsForRangeQueries is true), then the engine knows that the filter can be treated as a date comparison.

If usePredicateConstraintedUpiTypeInformation is true, then the query engine will check the triple-store to determine which UPI type-codes the subjects and objects associated with ex:date can take on. If the objects of ex:date can only have UPI type-code +rdf-date+, then the filter will be handled more efficiently.

The type-code information is cached, but if the store is changing rapidly, then the cache will often be invalid and this computation will add to the cost of queries. The possible values are:

  • yes - turn the option on
  • no - turn the option off

Use typed-literal XSD types to improve constraint inference.

Similar to usePredicateConstraintedUpiTypeInformation but involves a scan of all typed-literals (which can be expensive). This is currently not cached! The possible values are:

  • yes - turn the option on
  • no - turn the option off


*sparql-log-stream*

The log stream to which SPARQL verbose output is written.

This variable specifies how wide to draw the results table in characters.

*default-dataset-behavior*

Controls how SPARQL behaves when no dataset is specified.

If nil, then the default value of the defaultDatasetBehavior query property will be used. Otherwise, this value will be used.

See the defaultDatasetBehavior query option for details.

*dataset-load-function*

Set this to a function of two or three arguments, (uri db &optional type), to load dataset parameters before a query is executed.

Function index


  1. Note that SPARQL 1.1 is only supported by the sparql-1.1 query engine.
  2. Note that in a future release the sparql-1.1 engine will be optimized to handle this case as well.
  3. Note that the chunk based processor also uses chunks that are larger than one and uses a completely new set of code so that the resemblance is not complete.