This is a description of the Python Application Programmer's Interface (API) to Franz Inc.'s AllegroGraph RDFStore™.
The Python API offers convenient and efficient access to an AllegroGraph server from a Python-based application. This API provides methods for creating, querying and maintaining RDF data, and for managing the stored triples.
The Python API deliberately emulates the Aduna Sesame API to make it easier to migrate from Sesame to AllegroGraph. The Python API has also been extended in ways that make it easier and more intuitive than the Sesame API.
The AllegroGraphServer object represents a remote AllegroGraph server on the network. It is used to inventory and access the catalogs of that server.
Source: /AllegroGraphDirectory/src/franz/openrdf/sail/allegrographserver.py.
AllegroGraphServer(self, host, port=10035, user=None, password=None)
server = AllegroGraphServer(host="localhost", port=8080, user="test", password="pw")
getInitfile(self) | Retrieve the contents of the server initialization file. |
listCatalogs(self) | Returns a list of the names of the catalogs available on this server. |
openCatalog(self, name=None) | Returns a Catalog object. name is one of the catalog names from listCatalogs(), or None to open the root catalog described in the AllegroGraph configuration file. |
openFederated(self, repositories, autocommit=False, lifetime=None, loadinitfile=False) | Open a session that federates several repositories. The repositories argument should be an array containing store designators, which can be Repository or RepositoryConnection objects, strings (naming a store in the root catalog, or the URL of a store), or (storename, catalogname) tuples. |
openSession(self, spec, autocommit=False, lifetime=None, loadinitfile=False) | Open a session on a federated, reasoning, or filtered store. Use the helper functions in the franz.openrdf.sail.spec module to create the spec string. |
setInitfile(self, content=None, restart=True) | Replace the current initialization file contents with the content string, or remove the file if content is None. restart specifies whether to shut down any currently running back ends, so that subsequent requests will be handled by back ends that have loaded the new init file. |
url(self) | Return the server's URL. |
version(self) | Return the server's version as a string. |
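A minimal sketch that ties these methods together (it assumes a server running on localhost with the credentials shown; adjust these to your own installation):

from franz.openrdf.sail.allegrographserver import AllegroGraphServer

server = AllegroGraphServer(host="localhost", port=10035, user="test", password="pw")
print server.url()                # the server's URL
print server.version()            # the server's version string
print server.listCatalogs()       # names of the available catalogs
catalog = server.openCatalog()    # no name: open the root catalog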
A Catalog object is a container for multiple repositories.
Source: /AllegroGraphDirectory/src/franz/openrdf/sail/allegrographserver.py.
Invoke the Catalog constructor using the AllegroGraphServer.openCatalog() method.
catalog = server.openCatalog('scratch')
createRepository(self, name) | Creates a new Repository within the Catalog. name is a string identifying the repository. |
deleteRepository(self, name) | Deletes the named Repository from the Catalog. |
getName(self) | Returns a string containing the name of this Catalog. |
getRepository(self, name, access_verb) | Returns a Repository object. name is a repository name from listRepositories(). access_verb is one of Repository.RENEW, Repository.CREATE, Repository.OPEN, or Repository.ACCESS. |
listRepositories(self) | Returns a list of repository names (triple stores) managed by this Catalog. |
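For example, the Catalog methods combine as follows (a sketch; 'scratch' and 'agraph_test' are illustrative names, and server is the AllegroGraphServer object created above):

from franz.openrdf.repository.repository import Repository

catalog = server.openCatalog('scratch')
print catalog.getName()                              # 'scratch'
if 'agraph_test' not in catalog.listRepositories():
    catalog.createRepository('agraph_test')
myRepository = catalog.getRepository('agraph_test', Repository.ACCESS)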
A repository contains RDF data that can be queried and updated. Access to the repository can be acquired by opening a connection to it. This connection can then be used to query and/or update the contents of the repository. Depending on the implementation of the repository, it may or may not support multiple concurrent connections.
Please note that a repository needs to be initialized before it can be used and that it should be shut down before it is discarded/garbage collected. Forgetting the latter can result in loss of data (depending on the Repository implementation)!
Source: /AllegroGraphDirectory/src/franz/openrdf/repository/repository.py.
Invoke the Repository constructor using the Catalog.getRepository() method.
myRepository = catalog.getRepository("agraph_test", Repository.ACCESS)
getConnection(self) | Creates a RepositoryConnection object that can be used for querying and updating the contents of the Repository. |
getDatabaseName(self) | Returns a string containing the name of this Repository. |
getSpec(self) | Returns a string consisting of the catalog name concatenated with the repository name. |
getValueFactory(self) | Return a ValueFactory for this store. This is present for Aduna Sesame compatibility, but in the Python API all ValueFactory functionality has been duplicated or subsumed in the RepositoryConnection class. It isn't necessary to manipulate the ValueFactory class at all. |
initialize(self) | A Repository must be initialized before it can be used. Returns the initialized Repository object. |
isWritable(self) | Checks whether this Repository is writable, i.e. if the data contained in this store can be changed. |
registerDatatypeMapping(self, predicate=None, datatype=None, nativeType=None) | Register an inlined datatype. Predicate is the URI of predicate used in the triple store. Datatype may be one of: XMLSchema.INT, XMLSchema.LONG, XMLSchema.FLOAT, XMLSchema.DATE, and XMLSchema.DATETIME. NativeType may be "int", "datetime", or "float". You must supply nativeType and either predicate or datatype. If predicate, then object arguments to triples with that predicate will use an inlined encoding of type nativeType in their internal representation. If datatype, then typed literal objects with a datatype matching datatype will use an inlined encoding of type nativeType. (Duplicated in the RepositoryConnection class for Python user convenience.) |
shutDown(self) | Shuts the Repository down, releasing any resources that it keeps hold of. |
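A short sketch of the Repository life cycle, continuing from the catalog example above:

myRepository = catalog.getRepository("agraph_test", Repository.ACCESS)
myRepository.initialize()                  # required before the repository can be used
print myRepository.isWritable()            # True for an ordinary store
connection = myRepository.getConnection()
# ... query and update the store through the connection ...
connection.close()
myRepository.shutDown()                    # release resources before discarding the object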
The RepositoryConnection class is the main interface for updating data in and performing queries on a Repository. By default, a RepositoryConnection is in autoCommit mode, meaning that each operation corresponds to a single transaction on the underlying triple store. autoCommit can be switched off, in which case it is up to the user to handle transaction commit/rollback. Note that care should be taken to always properly close a RepositoryConnection after one is finished with it, to free up resources and avoid unnecessary locks.
Note that concurrent access to the same connection object is explicitly forbidden. The client must perform its own synchronization to ensure non-concurrent access.
Several methods take a vararg argument that optionally specifies a (set of) context(s) on which the method should operate. (A context is the URI of a subgraph.) Note that the vararg parameter is optional: it can be left out of the method call entirely, in which case the method either operates on a provided statement's context (if one of the method parameters is a statement or collection of statements), or operates on the repository as a whole, completely ignoring context. A vararg argument may also be null (cast to Resource), meaning that the method operates only on statements that have no associated context.
Source: /AllegroGraphDirectory/src/franz/openrdf/repository/repositoryconnection.py.
RepositoryConnection(self, repository)
where repository is the Repository object that created this RepositoryConnection.
Example: The best practice is to use the Repository.getConnection() method, which supplies the repository parameter to the constructor.
connection = myRepository.getConnection()
This table contains the repositoryConnection methods that create, maintain, search, and delete triple stores. The tables that follow list special methods for Free Text Search, Prolog Rule Inference, Geospatial Reasoning, Social Network Analysis, Transactions, and Subject Triples Caching.
add(self, arg0, arg1=None, arg2=None, contexts=None, base=None, format=None, serverSide=False) | Calls addTriple(), addStatement(), or addFile(). Best practice is to avoid add() and use addFile(), addStatement(), and addTriple() instead. arg0 may be a Statement or a filepath. If so, arg1 and arg2 default to None. arg0, arg1, and arg2 may be the subject, predicate and object of a single triple. contexts is an optional list of contexts (subgraph URIs), defaulting to None. A context is the URI of a subgraph. If None, the triple(s) will be added to the null context (the default or background graph). base is the baseURI to associate with loading a file. Defaults to None. format is an RDFFormat instance. Defaults to None, which means "guess from file extension". serverSide indicates whether the filepath refers to a file on the client computer or on the server. Defaults to False. |
addFile(self, filePath, base=None, format=None, context=None, serverSide=False) | Loads a file into the triple store. Note that a file can be loaded into only one context. filepath identifies the file to load. context is an optional context URI (subgraph URI), defaulting to None. If None, the triple(s) will be added to the null context (the default or background graph). base is the baseURI to associate with loading a file. Defaults to None. format is an RDFFormat instance. Defaults to None, which means "guess from file extension". serverSide indicates whether the filepath refers to a file on the client computer or on the server. Defaults to False. |
addData(self, data, rdf_format=RDFFormat.TURTLE, base_uri=None, context=None) | Loads data from a string into the triple store. Note that the data can be loaded into only one context. data is the text to be loaded. context is an optional context URI (subgraph URI), defaulting to None. If None, the triple(s) will be added to the null context (the default or background graph). base_uri is used to resolve relative URIs in data. rdf_format is the data format (an instance of RDFFormat). Defaults to RDFFormat.TURTLE. |
addStatement(self, statement, contexts=None) | Add the supplied Statement to the specified contexts of the repository. contexts defaults to None, which adds the statement to the null context (the default or background graph). |
addTriple(self, subject, predicate, object, contexts=None) | Adds a single triple to the repository. subject, predicate and object are the three values of the triple. contexts is an optional list of context URIs to add the triple to, defaulting to None. If None, the triple will be added to the null context (the default or background graph). |
addTriples(self, triples_or_quads, context=ALL_CONTEXTS, ntriples=False) | Add the supplied triples_or_quads to this repository. Each triple can be a list or a tuple of Values. context is the URI of a subgraph, which will be stored in the fourth field of the "triple," defaulting to ALL_CONTEXTS. If ntriples is True, then the triples or quads are assumed to contain valid ntriples strings, and they are passed to the server with no conversion. The default value is False. |
clear(self, contexts=ALL_CONTEXTS) | Removes all statements from the designated list of contexts (subgraphs) in the repository. If contexts is ALL_CONTEXTS (the default), it clears the repository of all statements. |
clearNamespaces(self) | Remove all namespace declarations from the current environment. |
close(self) | Closes the connection in order to free up resources. |
createBNode(self, nodeID=None) | Creates a new blank node with the given node identifier. nodeID defaults to None. If nodeID is None, a new, unused node ID is generated. |
createLiteral(self, value, datatype=None, language=None) | Create a new literal with value. datatype, if supplied, should be a URI, in which case value should be a string. You may optionally include an RDF language attribute. datatype and language default to None. |
createRange(self, lowerBound, upperBound) | Create a compound literal representing a range from lowerBound to upperBound. |
createStatement(self, subject, predicate, object, context=None) | Create a new Statement object using the supplied subject, predicate and object and associated context, which defaults to None. The context is the URI of a subgraph. |
createURI(self, uri=None, namespace=None, localname=None) | Creates a new URI object from the supplied string-representation(s). uri is a string representing an entire URI. namespace and localname are combined to create a URI. If two non-keyword arguments are passed, it assumes they represent a namespace/localname pair. |
deleteDuplicateStatements(self, mode) | Deletes duplicate triples from the store. mode can be "spo" (triples are duplicates if they have the same subject, predicate, and object, regardless of the graph) or "spog" (triples are duplicates if they have the same subject, predicate, object, and graph). See also getDuplicateStatements below. |
export(self, handler, contexts=ALL_CONTEXTS) | Exports all triples in the repository to an external file. handler is either an NTriplesWriter() object or an RDFXMLWriter() object. The export may be optionally confined to a list of contexts (default is ALL_CONTEXTS). Each context is the URI of a subgraph. |
exportStatements(self, subj, pred, obj, includeInferred, handler, contexts=ALL_CONTEXTS) | Exports all triples that match subj, pred and/or obj. May optionally includeInferred statements provided by RDFS++ inference (default is False). handler is either an NTriplesWriter() object or an RDFXMLWriter() object. The export may be optionally confined to a list of contexts (default is ALL_CONTEXTS). Each context is the URI of a subgraph. |
getAddCommitSize(self) | Returns the current setting of the add_commit_size property. See setAddCommitSize(). |
getContextIDs(self) | Return a list of context URIs, one for each subgraph referenced by a quad in the triple store. Omits the default context because its ID would be null. |
getDuplicateStatements(self, mode) | Gets duplicate triples in the store. mode can be "spo" (triples are duplicates if they have the same subject, predicate, and object, regardless of the graph) or "spog" (triples are duplicates if they have the same subject, predicate, object, and graph). See also deleteDuplicateStatements above. |
getNamespace(self, prefix) | Returns the namespace that is associated with prefix, if any. |
getNamespaces(self) | Returns a Python dictionary of prefix/namespace pairings. The default namespaces are: rdf, rdfs, xsd, owl, fti, dc, and dcterms. |
getSpec(self) | Returns a string composed of the catalog name concatenated with the repository name. |
getStatements(self, subject, predicate, object, contexts=ALL_CONTEXTS, includeInferred=False, limit=None, tripleIDs=False) | Gets all statements with a specific subject, predicate and/or object from the repository. The result is optionally restricted to the specified set of named contexts (default is ALL_CONTEXTS). A context is the URI of a subgraph. Returns a RepositoryResult iterator that produces a 'Statement' each time that 'next' is called. May optionally includeInferred statements provided by RDFS++ inference (default is False). Takes an optional limit on the number of statements to return. If tripleIDs is True, the output includes the triple ID field (the fifth field of the quad). |
getStatementsById(self, ids) | Return all statements whose triple ID matches an ID in the list of ids. |
getValueFactory(self) | Returns the ValueFactory object associated with this RepositoryConnection. |
isEmpty(self) | Returns True if size() is zero. |
prepareBooleanQuery(self, queryLanguage, queryString, baseURI=None) | Parse queryString into a Query object which can be executed against the RDF storage. queryString must be an ASK query. queryLanguage is one of SPARQL, PROLOG, or COMMON_LOGIC. baseURI optionally provides a URI prefix (defaults to None). Returns a Query object. The result of query execution will be True or False. |
prepareGraphQuery(self, queryLanguage, queryString, baseURI=None) | Parse queryString into a Query object which can be executed against the RDF storage. queryString must be a CONSTRUCT or DESCRIBE query. queryLanguage is one of SPARQL, PROLOG, or COMMON_LOGIC. baseURI optionally provides a URI prefix (defaults to None). Returns a Query object. The result of query execution is an iterator of Statements/quads. |
prepareTupleQuery(self, queryLanguage, queryString, baseURI=None) | Parse queryString into a Query object which can be executed against the RDF storage. queryString must be a SELECT query. queryLanguage is one of SPARQL, PROLOG, or COMMON_LOGIC. baseURI optionally provides a URI prefix (defaults to None). Returns a Query object. The result of query execution is an iterator of tuples. |
registerDatatypeMapping(self, predicate=None, datatype=None, nativeType=None) | Register an inlined datatype. Predicate is the URI of predicate used in the triple store. Datatype may be one of: XMLSchema.INT, XMLSchema.LONG, XMLSchema.FLOAT, XMLSchema.DATE, and XMLSchema.DATETIME. NativeType may be "int", "datetime", or "float". You must supply nativeType and either predicate or datatype. If predicate, then object arguments to triples with that predicate will use an inlined encoding of type nativeType in their internal representation. If datatype, then typed literal objects with a datatype matching datatype will use an inlined encoding of type nativeType. |
remove(self, arg0, arg1=None, arg2=None, contexts=None) | Calls removeTriples() or removeStatement(). Best practice would be to avoid remove() and use removeTriples() or removeStatement() directly. arg0 may be a Statement. If so, then arg1 and arg2 default to None. arg0, arg1, and arg2 may be the subject, predicate and object of a triple. contexts is an optional list of contexts, defaulting to None. |
removeNamespace(self, prefix) | Remove the namespace associate with prefix. |
removeQuads(self, quads, ntriples=False) | Remove enumerated quads from this repository. Each quad can be a list or a tuple of Values. If ntriples is True (default is False), then the quads are assumed to contain valid ntriples strings, and they are passed to the server with no conversion. |
removeQuadsByID(self, tids) | tids contains a list of triple IDs (integers). Remove all quads with IDs that match. |
removeStatement(self, statement, contexts=None) | Removes the supplied Statement(s) from the specified contexts (default is None). |
removeTriples(self, subject, predicate, object, contexts=None) | Removes the triples with the specified subject, predicate and object from the repository, optionally restricted to the specified contexts (defaults to None). |
setAddCommitSize(self, triple_count) | Sets the threshold for commit size during triple add operations. Set to 0 (zero) or None to clear size-based autocommit behavior. When set to an integer triple_count > 0, a commit occurs after every triple_count triples added and at the end of the add operation. |
setNamespace(self, prefix, namespace) | Define (or redefine) a namespace associated with prefix. |
size(self, contexts=ALL_CONTEXTS) | Returns the number of (explicit) statements that are in the specified contexts in this repository. contexts defaults to ALL_CONTEXTS, but can be a context URI or a tuple of context URIs from getContextIDs(). Use 'null' to get the size of the default graph (the unnamed context). |
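The following sketch exercises the most common of these methods; the example.org URIs are purely illustrative, and myRepository is the Repository object opened earlier:

import contextlib

conn = myRepository.getConnection()
conn.setNamespace('ex', 'http://example.org/')
alice = conn.createURI('http://example.org/alice')
name = conn.createURI(namespace='http://example.org/', localname='name')
conn.addTriple(alice, name, conn.createLiteral('Alice'))
print conn.size()                                        # number of statements in the store
with contextlib.closing(conn.getStatements(alice, None, None)) as results:
    for statement in results:
        print statement
conn.removeTriples(alice, name, None)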
These repositoryConnection methods support user-defined triple indices. See AllegroGraph Triple Indices for more information on this topic.
listIndices(self) | Returns a tuple containing a list of the current set of triple indices. |
listValidIndices(self) | Returns a tuple containing the list of all possible triple indices. |
addIndex(self, type) | Adds a specific type of index to the current set of triple indices. type is a string containing one of the following index names: spogi, spgoi, sopgi, sogpi, sgpoi, sgopi, psogi, psgoi, posgi, pogsi, pgsoi, pgosi, ospgi, osgpi, opsgi, opgsi, ogspi, ogpsi, gspoi, gsopi, gpsoi, gposi, gospi, gopsi, or i. |
dropIndex(self, type) | Removes a specific type of index from the current set of triple indices. type is a string containing one of the following index names: spogi, spgoi, sopgi, sogpi, sgpoi, sgopi, psogi, psgoi, posgi, pogsi, pgsoi, pgosi, ospgi, osgpi, opsgi, opgsi, ogspi, ogpsi, gspoi, gsopi, gpsoi, gposi, gospi, gopsi, or i. |
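For example (a brief sketch; 'gspoi' is one of the index names listed above):

print conn.listValidIndices()    # every index flavor the server understands
print conn.listIndices()         # the indices currently maintained for this store
conn.addIndex('gspoi')           # add a graph-first index
conn.dropIndex('gspoi')          # and remove it again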
The following repositoryConnection method supports free-text indexing in AllegroGraph.
createFreeTextIndex(self, name, predicates=None, indexLiterals=None, indexResources=None, indexFields=None, minimumWordSize=None, stopWords=None, wordFilters=None) | Create a free-text index with the given parameters. name is a string identifying the new index. If no predicates are given, triples are indexed regardless of predicate. indexLiterals determines which literals to index. It can be True (the default), False, or a list of resources, indicating the literal types that should be indexed. indexResources determines which resources are indexed. It can be True, False (the default), or "short", to index only the part of resources after the last slash or hash character. indexFields can be a list containing any combination of the elements "subject", "predicate", "object", and "graph". The default is ["object"]. minimumWordSize, an integer, determines the minimum size a word must have to be indexed. The default is 3. stopWords should hold a list of words that should not be indexed. When not given, a list of common English words is used. wordFilters can be used to apply some normalizing filters to words as they are indexed or queried. Can be a list of filter names. Currently, only "drop-accents" and "stem.english" are supported. |
deleteFreeTextIndex(self, name) | Deletes the named index. |
evalFreeTextSearch(self, pattern, infer=False, limit=None, index=None) | Return an array of statements for the given free-text pattern search. If no index is provided, all indices will be used. |
getFreeTextIndexConfiguration(self, name) | Returns a Python dictionary containing all of the configuration settings of the named index. |
listFreeTextIndices(self) | List the free-text indices. |
modifyFreeTextIndex(self, name, predicates=None, indexLiterals=None, indexResources=None, indexFields=None, minimumWordSize=None, stopWords=None, wordFilters=None, reIndex=None) | Modify the named free-text index with the given parameters. name is a string identifying the index to be modified. If no predicates are given, triples are indexed regardless of predicate. indexLiterals determines which literals to index. It can be True (the default), False, or a list of resources, indicating the literal types that should be indexed. indexResources determines which resources are indexed. It can be True, False (the default), or "short", to index only the part of resources after the last slash or hash character. indexFields can be a list containing any combination of the elements "subject", "predicate", "object", and "graph". The default is ["object"]. minimumWordSize, an integer, determines the minimum size a word must have to be indexed. The default is 3. stopWords should hold a list of words that should not be indexed. When not given, a list of common English words is used. wordFilters can be used to apply some normalizing filters to words as they are indexed or queried. Can be a list of filter names. Currently, only "drop-accents" and "stem.english" are supported. reIndex, if True (the default), rebuilds the index. If False, the new settings apply only to new triples, while the index data for existing triples is maintained. |
Note that text search is implemented through a SPARQL query using a "magic" predicate called fti:search. See the AllegroGraph Python API Tutorial for an example of how to set up this search.
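A sketch of the free-text methods shown above (the index name and predicate URI are illustrative):

namePredicate = conn.createURI('http://example.org/name')
conn.createFreeTextIndex('names', predicates=[namePredicate])
print conn.listFreeTextIndices()                          # includes 'names'
print conn.getFreeTextIndexConfiguration('names')
for statement in conn.evalFreeTextSearch('Alice', index='names', limit=10):
    print statement
conn.deleteFreeTextIndex('names')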
These repositoryConnection methods support the use of Prolog rules in AllegroGraph. Any use of Prolog rules requires that you create a dedicated session to run them in.
addRules(self, rules, language=None) | Add a sequence of one or more rules (in ASCII format). |
loadRules(self, file, language=None) | Load a file of rules. file is assumed to reside on the client machine. language defaults to QueryLanguage.PROLOG. For use with a dedicated session. |
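A sketch of loading a Prolog rule in a dedicated session. The rule text and the ex: namespace are illustrative; the rule assumes the ex: prefix has been registered with setNamespace(), and follows AllegroGraph's Prolog rule format:

conn.openSession()                            # Prolog rules require a dedicated session
conn.setNamespace('ex', 'http://example.org/')
rule = """
(<-- (parent ?x ?y)                           ;; conclude (parent ?x ?y) ...
     (q ?x !ex:hasChild ?y))                  ;; ... whenever this triple pattern holds
"""
conn.addRules(rule)
conn.closeSession()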
These repositoryConnection methods support geospatial reasoning.
createBox(self, xMin=None, xMax=None, yMin=None, yMax=None) | Create a rectangular search region (a box) for geospatial search. This method works for both Cartesian and spherical coordinate systems. xMin, xMax may be used to input latitude. yMin, yMax may be used to input longitude. |
createCircle(self, x, y, radius, unit=None) | Create a circular search region for geospatial search. This method works for both Cartesian and spherical coordinate systems. radius is the radius of the circle expressed in the designated unit, which defaults to the unit assigned to the coordinate system. x and y locate the center of the circle and may be used for latitude and longitude. |
createCoordinate(self, x=None, y=None, lat=None, long=None) | Create a coordinate point in a geospatial coordinate system. Must include x and y, or lat and long. Use this method to create the object value for a location triple. |
createLatLongSystem(self, unit='degree', scale=None, latMin=None, latMax=None, longMin=None, longMax=None) | Create a spherical coordinate system for geospatial location matching. unit can be 'degree', 'mile', 'radian', or 'km'. scale should be your estimate of the size of a typical search region in the latitudinal direction. latMin and latMax are the bottom and top borders of the coordinate system. longMin and longMax are the left and right sides of the coordinate system. |
createPolygon(self, vertices, uri=None, geoType=None) | Create a polygonal search region for geospatial search. The vertices are saved as triples in AllegroGraph. vertices is a list of (x, y) pairs such as [(51.0, 2.00),(60.0, -5.0),(48.0,-12.5)]. uri is an optional subject value for the vertex triples, in case you want to manipulate them. geoType is 'CARTESIAN' or 'SPHERICAL', but defaults to None. |
createRectangularSystem(self, scale=1, unit=None, xMin=0, xMax=None, yMin=0, yMax=None) | Create a Cartesian coordinate system for geospatial location matching. scale should be your estimate of the Y size of a typical search region. unit must be None. xMin and xMax are the left and right edges of the rectangle. yMin and yMax are the bottom and top edges of the rectangle. |
getGeoType(self) | Returns the connection's current geospatial coordinate system (as set by setGeoType()). |
setGeoType(self, geoType) | Sets the connection's current geospatial coordinate system. |
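A Cartesian sketch that puts these methods together. The bounds, URIs, and coordinates are illustrative, and passing the search region as the object argument of getStatements() follows the pattern shown in the AllegroGraph Python API Tutorial:

conn.createRectangularSystem(scale=10, xMin=0, xMax=100, yMin=0, yMax=100)
location = conn.createURI('http://example.org/location')
alice = conn.createURI('http://example.org/alice')
conn.addTriple(alice, location, conn.createCoordinate(x=40, y=60))
box = conn.createBox(xMin=35, xMax=45, yMin=55, yMax=65)
print conn.getStatements(None, location, box).asList()    # statements whose object lies in the box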
The following repositoryConnection methods support Social Network Analysis in AllegroGraph. The Python API to the Social Network Analysis methods of AllegroGraph requires Prolog queries, and therefore must be run in a dedicated session.
registerNeighborMatrix(self, name, generator, group_uris, max_depth=2) | Construct a neighbor matrix named 'name'. The generator named 'generator' is applied to each URI in 'group_uris' (a collection of fullURIs or qnames (strings)), computing edges to max depth 'max_depth'. For use in a dedicated session. |
registerSNAGenerator(self, name, subjectOf=None, objectOf=None, undirected=None, generator_query=None) | Create (and remember) a generator named 'name'. If one already exists with the same name, it is redefined. 'subjectOf', 'objectOf' and 'undirected' expect a list of predicate URIs, expressed as fullURIs or qnames, that define the edges traversed by the generator. Alternatively, instead of an adjacency map, one may provide a 'generator_query' that defines the edges. For use in a dedicated session. |
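A sketch of registering SNA objects in a dedicated session (the predicate and start node are illustrative); the generator and matrix are then referenced by name from Prolog queries:

conn.openSession()                                     # SNA methods require a dedicated session
knows = 'http://example.org/knows'                     # predicate given as a full URI string
conn.registerSNAGenerator('friends', undirected=[knows])
conn.registerNeighborMatrix('friends-matrix', 'friends',
                            ['http://example.org/alice'], max_depth=2)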
AllegroGraph lets you set up a special RepositoryConnection (a "session") that supports transaction semantics. You can add statements to this session until you accumulate all the triples you need for a specific transaction. Then you can commit the triples in a single act. Up to that moment the triples will not be visible to other users of the repository.
If anything interrupts the accumulation of triples building to the transaction, you can roll back the session. This discards all of the uncommitted triples and resynchronizes the session with the repository as a whole.
Closing the session deletes all uncommitted triples, all rules, generators and matrices that were created in the session. Rules, generators and matrices cannot be committed. They persist as long as the session persists.
openSession(self) | Open a dedicated session. |
closeSession(self) | Close a dedicated session connection. |
session(self, autocommit=False, lifetime=None, loadinitfile=False) | A dedicated connection context manager for use with the 'with' statement. Automatically calls openSession() at block start and closeSession() at block end. If autocommit is True, commits are done on each request, otherwise you will need to call commit() or rollback() as appropriate for your application. lifetime is an integer specifying the session's time to live, in seconds. If loadinitfile is True, the current initialization file will be loaded for the session. |
commit(self) | Commits changes on a dedicated connection. |
rollback(self) | Rolls back changes on a dedicated connection. |
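A sketch using the session() context manager; the triple data is illustrative:

with conn.session():                           # openSession()/closeSession() called automatically
    subj = conn.createURI('http://example.org/alice')
    pred = conn.createURI('http://example.org/age')
    conn.addTriple(subj, pred, conn.createLiteral(42))
    conn.commit()                              # make the new triples visible to other users
    # conn.rollback()                          # ...or discard everything added since the last commit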
You can enable subject triple caching to speed up queries where the same subject URI appears in multiple patterns. The first time AllegroGraph retrieves triples for a specific resource, it caches the triples in memory. Subsequent query patterns that ask for the same subject URI can retrieve the matching triples very quickly from the cache. The cache has a size limit and automatically rolls over as that limit is exceeded.
enableSubjectTriplesCache(self, size=None) | Maintain a cache of size 'size' that caches, for each accessed resource, quads where the resource appears in subject position. This can accelerate the performance of certain types of queries. The size is the maximum number of subjects whose triples will be cached. Default is 100,000. |
disableSubjectTriplesCache(self) | Turn off subject triple caching. |
getSubjectTriplesCacheSize(self) | Return the current size of the subject triples cache. |
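A brief sketch, caching up to 50,000 subjects:

conn.enableSubjectTriplesCache(size=50000)
print conn.getSubjectTriplesCacheSize()        # 50000
conn.disableSubjectTriplesCache()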
The RDFFormat class is an enumeration describing the data formats supported by AllegroGraph when importing RDF data.
Source: /AllegroGraphDirectory/src/franz/openrdf/rio/rdfformat.py.
RDFFormat.RDFXML | The RDF/XML file format. |
RDFFormat.NTRIPLES | The N-Triples file format. |
RDFFormat.NQUADS | The N-Quads file format. |
RDFFormat.NQX | The NQX file format - an extension to N-Quads that can encode triple attributes. |
RDFFormat.TURTLE | The Turtle file format. |
RDFFormat.TRIX | The TriX file format. |
RDFFormat.TRIG | The TriG file format. |
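The format constant is passed to addData() and addFile() as described above. A sketch, with illustrative data and file paths:

from franz.openrdf.rio.rdfformat import RDFFormat

conn.addData('<http://example.org/s> <http://example.org/p> "o" .',
             rdf_format=RDFFormat.NTRIPLES)
conn.addFile('/tmp/data.rdf', format=RDFFormat.RDFXML)
conn.addFile('/tmp/data.nt')                   # format=None: guessed from the file extension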
The Query class is non-instantiable. It is an abstract class from which the three query subclasses are derived. It is included here because of its methods, which are inherited by the subclasses.
A query on a Repository that can be formulated in one of the supported query languages (for example SPARQL). It allows one to predefine bindings in the query to be able to reuse the same query with different bindings.
Source: /AllegroGraphDirectory/src/franz/openrdf/query/query.py.
The best practice is to allow the RepositoryConnection object to create an instance of one of the Query subclasses (TupleQuery, GraphQuery, BooleanQuery). There is no reason for the Python application programmer to create a Query object directly.
tupleQuery = conn.prepareTupleQuery(QueryLanguage.SPARQL, queryString)
result = tupleQuery.evaluate()
evaluate_generic_query(self, count=False, accept=None) | Evaluate a SPARQL or PROLOG query. If SPARQL, it may be a 'select', 'construct', 'describe' or 'ask' query. Return a QueryResult object, unless the accept parameter is set to 'application/sparql-results+xml' or 'application/sparql-results+json' to return the results as a string in xml or json format. (Best practice is to use (and evaluate) one of the more specific query subclasses instead of using the Query class directly.) |
getBindings(self) | Retrieves the bindings that have been set on this query in the form of a dictionary. |
getDataset(self) | Returns the current dataset setting for this query. |
getIncludeInferred(self) | Returns whether or not this query will return inferred statements (if any are present in the repository). |
removeBinding(self, name) | Removes the named binding so that it has no value. |
setBinding(self, name, value) | Binds the named attribute to the supplied value. Any value that was previously bound to the specified attribute will be overwritten. |
setBindings(self, dict) | Sets multiple bindings using a dictionary of attribute names and values. |
setCheckVariables(self, setting) | If true, variables in the SELECT clause that are not referenced in a triple pattern are flagged. |
setContexts(self, contexts) | Assert a set of contexts (a list of subgraph URIs) that filter all triples. |
setDataset(self, dataset) | Specifies the dataset against which to evaluate a query, overriding any dataset that is specified in the query itself. |
setIncludeInferred(self, includeInferred) | Determines whether results of this query should include inferred statements (if any inferred statements are present in the repository). Inference is turned off by default (which is the opposite of standard Sesame behavior). The default value of setIncludeInferred() is True. |
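A sketch combining several of these methods; the QueryLanguage import path is an assumption and the predicate URI is illustrative:

from franz.openrdf.query.query import QueryLanguage    # assumed module path

tupleQuery = conn.prepareTupleQuery(QueryLanguage.SPARQL,
                                    "SELECT ?s ?o WHERE { ?s ?p ?o }")
tupleQuery.setBinding('p', conn.createURI('http://example.org/name'))
tupleQuery.setIncludeInferred(True)            # also return RDFS++ inferred statements
print tupleQuery.getBindings()                 # the current bindings as a dictionary
result = tupleQuery.evaluate()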
This subclass is used with SELECT queries. Use the RepositoryConnection object's prepareTupleQuery() method to create a TupleQuery object. The results of the query are returned in a QueryResult iterator that yields a sequence of bindingSets.
Methods
TupleQuery uses all the methods of the Query class, plus one more:
evaluate(self, count=False) | Execute the embedded query against the RDF store. Return an iterator that produces for each step a tuple of values (resources and literals) corresponding to the variables or expressions in a 'select' clause (or its equivalent). |
This subclass is used with CONSTRUCT and DESCRIBE queries. Use the RepositoryConnection object's prepareGraphQuery() method to create a GraphQuery object. The results of the query are returned in a RepositoryResult iterator that yields a sequence of Statements.
Methods
GraphQuery uses all the methods of the Query class, plus one more:
evaluate(self) | Execute the embedded query against the RDF store. |
This subclass is used with ASK queries. Use the RepositoryConnection object's prepareBooleanQuery() method to create a BooleanQuery object. The results of the query are True or False.
Methods
BooleanQuery uses all the methods of the Query class, plus one more:
evaluate(self) | Execute the embedded query against the RDF store. |
Source: /AllegroGraphDirectory/src/franz/openrdf/query/queryresult.py.
A QueryResult object is simply an iterator that also has a close() method that must be called to free resources. Such objects are returned as a result of SPARQL and PROLOG query evaluation and should not be constructed directly. The recommended usage looks like this:
tupleQuery = conn.prepareTupleQuery(QueryLanguage.SPARQL, queryString)
with contextlib.closing(tupleQuery.evaluate()) as results:
    for result in results:
        print result
close(self) | Shut down the iterator to be sure the resources are freed up. It is safe to call this method multiple times. |
next(self) | Return the next Statement in the answer, if there is one. Otherwise raise a StopIteration exception. |
A QueryResult subclass used for queries that return tuples.
getBindingNames(self) | Get the names of the bindings (a list of strings), in order of projection. |
A QueryResult subclass used for queries that return statements. Objects of this class are also RepositoryResult instances.
A RepositoryResult object is a result collection of Statement that can be iterated over. It keeps an open connection to the backend for lazy retrieval of individual results. Additionally it has some utility methods to fetch all results and add them to a collection.
By default, a RepositoryResult is not necessarily a (mathematical) set: it may contain duplicate objects. Duplicate filtering can be switched on, but this should not be used lightly as the filtering mechanism is potentially memory-intensive.
A RepositoryResult must be closed after use to free up any resources (open connections, read locks, etc.) it has on the underlying repository.
Source: /AllegroGraphDirectory/src/franz/openrdf/repository/repositoryresult.py.
Best practice is to allow a querySubclass.evaluate() method to create and return the RepositoryResult object. There is no reason for the Python application programmer to create a RepositoryResult object directly.
graphQuery = conn.prepareGraphQuery(QueryLanguage.SPARQL, queryString)
with contextlib.closing(graphQuery.evaluate()) as results:
    for result in results:
        print result
close(self) | Shut down the iterator to be sure the resources are freed up. It is safe to call this method multiple times. |
next(self) | Return the next Statement in the answer, if there is one. Otherwise raise a StopIteration exception. |
enableDuplicateFilter(self) | Switches on duplicate filtering while iterating over objects. The RepositoryResult will keep track of the previously returned objects in a set and on calling next() will ignore any objects that already occur in this set. Caution: use of this filtering mechanism is potentially memory-intensive. |
asList(self) | Returns a list containing all objects of this RepositoryResult in order of iteration. The RepositoryResult is fully consumed and automatically closed by this operation. |
addTo(self, collection) | Adds all objects of this RepositoryResult to the supplied collection. The RepositoryResult is fully consumed and automatically closed by this operation. |
rowCount(self) | Returns the number of result items stored in this object. |
A Statement is a client-side triple. It encapsulates the subject, predicate, object and context (subgraph) values of a single triple and makes them available.
Source: /AllegroGraphDirectory/src/franz/openrdf/model/statement.py.
Statement(self, subject, predicate, object, context=None)
Example: Best practice is to allow the RepositoryConnection.createStatement() method to create and return the Statement object. There is no reason for the Python application programmer to create a Statement object directly.
stmt1 = conn.createStatement(alice, age, fortyTwo)
getContext(self) | Returns the value in the fourth position of the stored tuple (the subgraph URI). |
getObject(self) | Returns the value in the third position of the stored tuple. |
getPredicate(self) | Returns the value in the second position of the stored tuple. |
getSubject(self) | Returns the value in the first position of the stored tuple. |
setQuad(self, string_tuple) | Stores a string_tuple of a triple or quad. This method is called only by an internal method of the RepositoryResult class. There is no need for a Python application programmer to use it. |
A ValueFactory is a factory for creating URIs, blank nodes, literals and Statements. In the AllegroGraph Python interface, the ValueFactory class is regarded as obsolete. Its functions have been subsumed by the expanded capability of the RepositoryConnection class. It is documented here for the convenience of the person who is porting an application from Aduna Sesame.
Source: /AllegroGraphDirectory/src/franz/openrdf/model/valuefactory.py.
ValueFactory(self, store)
Example: Best practice is to allow the Repository constructor to generate the ValueFactory automatically at the same time that the Repository object is created. There is no reason for a Python application programmer to attempt this step manually.
createBNode() | See RepositoryConnection class. |
createLiteral() | See RepositoryConnection class. |
createStatement() | See RepositoryConnection class. |
createURI() | See RepositoryConnection class. |