stores Package

This package contains modules for additional RDFLib stores.

auditable Module

This wrapper intercepts calls through the store interface and implements thread-safe logging of destructive operations (adds/removes) in reverse. The log is persisted on the store instance, and the reverse operations are executed in order to return the store to the state it was in when the transaction began. Since the reverse operations are persisted on the store, the store itself acts as a transaction.

Calls to commit or rollback flush the list of reverse operations. This provides thread-safe atomicity and isolation (assuming concurrent operations occur with different store instances), but no durability (transactions are persisted in memory and won’t be available to reverse operations after the system fails): A and I out of ACID.
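
A minimal usage sketch (assuming the in-memory IOMemory store as the wrapped backend; any Store implementation should work the same way):

    from rdflib import Graph, URIRef
    from rdflib.plugins.memory import IOMemory
    from rdflib.plugins.stores.auditable import AuditableStore

    g = Graph(AuditableStore(IOMemory()))
    g.add((URIRef("urn:michel"), URIRef("urn:likes"), URIRef("urn:pizza")))
    # rollback() executes the logged reverse operation, undoing the add
    g.rollback()
    assert len(g) == 0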

class rdflib.plugins.stores.auditable.AuditableStore(store)[source]

Bases: rdflib.store.Store

__init__(store)[source]
__len__(context=None)[source]
__module__ = 'rdflib.plugins.stores.auditable'
add(triple, context, quoted=False)[source]
bind(prefix, namespace)[source]
close(commit_pending_transaction=False)[source]
commit()[source]
contexts(triple=None)[source]
destroy(configuration)[source]
namespace(prefix)[source]
namespaces()[source]
open(configuration, create=True)[source]
prefix(namespace)[source]
query(*args, **kw)[source]
remove((subject, predicate, object_), context=None)[source]
rollback()[source]
triples(triple, context=None)[source]

concurrent Module

class rdflib.plugins.stores.concurrent.ConcurrentStore(store)[source]

Bases: object

__init__(store)[source]
__len__()[source]
__module__ = 'rdflib.plugins.stores.concurrent'
add(triple)[source]
remove(triple)[source]
triples(triple)[source]
class rdflib.plugins.stores.concurrent.ResponsibleGenerator(gen, cleanup)[source]

Bases: object

A generator that will help clean up when it is done being used (see the sketch after this listing).

__del__()[source]
__init__(gen, cleanup)[source]
__iter__()[source]
__module__ = 'rdflib.plugins.stores.concurrent'
__slots__ = ['cleanup', 'gen']
cleanup
gen
next()[source]
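
ResponsibleGenerator pairs a generator with a cleanup callback that runs when the wrapper is garbage-collected. A minimal sketch (the lock and the data are illustrative, not part of the module):

    import threading
    from rdflib.plugins.stores.concurrent import ResponsibleGenerator

    lock = threading.Lock()
    lock.acquire()
    # release the lock once the generator is no longer referenced
    gen = ResponsibleGenerator(iter([1, 2, 3]), lock.release)
    for item in gen:
        pass
    del gen  # __del__ invokes the cleanup callback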

regexmatching Module

This wrapper intercepts calls through the store interface which make use of the REGEXTerm class to represent matches by REGEX instead of literal comparison.

Implemented for stores that don’t support this natively; it essentially provides the support by replacing the REGEXTerms with wildcards (None) and matching against the results from the store it’s wrapping.
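
A minimal sketch (assuming an IOMemory backing store; the URIs are illustrative):

    from rdflib import Graph, URIRef
    from rdflib.plugins.memory import IOMemory
    from rdflib.plugins.stores.regexmatching import REGEXMatching, REGEXTerm

    g = Graph(REGEXMatching(IOMemory()))
    g.add((URIRef("urn:alice"), URIRef("urn:knows"), URIRef("urn:bob")))
    # match any subject whose URI ends in "alice"; the wrapper queries the
    # inner store with a wildcard in that slot and filters the results by regex
    matches = list(g.triples((REGEXTerm(".*alice$"), None, None)))
    assert len(matches) == 1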

class rdflib.plugins.stores.regexmatching.REGEXMatching(storage)[source]

Bases: rdflib.store.Store

__init__(storage)[source]
__len__(context=None)[source]
__module__ = 'rdflib.plugins.stores.regexmatching'
add(triple, context, quoted=False)[source]
bind(prefix, namespace)[source]
close(commit_pending_transaction=False)[source]
commit()[source]
contexts(triple=None)[source]
destroy(configuration)[source]
namespace(prefix)[source]
namespaces()[source]
open(configuration, create=True)[source]
prefix(namespace)[source]
remove(triple, context=None)[source]
remove_context(identifier)[source]
rollback()[source]
triples(triple, context=None)[source]
class rdflib.plugins.stores.regexmatching.REGEXTerm(expr)[source]

Bases: unicode

REGEXTerm can be used in any term slot and is interpreted as a request to perform a REGEX match (not a string comparison) using the value (pre-compiled) for checking rdf:type matches.

__init__(expr)[source]
__module__ = 'rdflib.plugins.stores.regexmatching'
__reduce__()[source]
rdflib.plugins.stores.regexmatching.regexCompareQuad(quad, regexQuad)[source]

sparqlstore Module

This is an RDFLib store around Ivan Herman et al.’s SPARQL service wrapper. This was first done in layer-cake, and then ported to RDFLib.

rdflib.plugins.stores.sparqlstore.CastToTerm(node)[source]

Helper function that casts an XML node in SPARQL results to the appropriate rdflib term.

class rdflib.plugins.stores.sparqlstore.NSSPARQLWrapper(endpoint, updateEndpoint=None, returnFormat='xml', defaultGraph=None, agent='sparqlwrapper 1.7.6 (rdflib.github.io/sparqlwrapper)')[source]

Bases: SPARQLWrapper.Wrapper.SPARQLWrapper

__module__ = 'rdflib.plugins.stores.sparqlstore'
injectPrefixes(query)[source]
nsBindings = {}
setNamespaceBindings(bindings)[source]

A shortcut for setting namespace bindings that will be added to the prolog of the query

@param bindings: A dictionary of prefixes to URIs

setQuery(query)[source]

Set the SPARQL query text. Note: no check is done on the validity of the query (syntax or otherwise) by this module, except for testing the query type (SELECT, ASK, etc).

Syntax and validity checking is done by the SPARQL service itself.

@param query: query text
@type query: string
@bug: #2320024
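
A minimal sketch of the namespace-binding shortcut (the endpoint URL and the FOAF query are illustrative; setQuery is assumed to apply injectPrefixes to the query text):

    from rdflib.plugins.stores.sparqlstore import NSSPARQLWrapper

    wrapper = NSSPARQLWrapper("http://example.org/sparql")
    wrapper.setNamespaceBindings({"foaf": "http://xmlns.com/foaf/0.1/"})
    # the stored query gets a prepended PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    wrapper.setQuery("SELECT ?name WHERE { ?person foaf:name ?name }")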

class rdflib.plugins.stores.sparqlstore.SPARQLStore(endpoint=None, bNodeAsURI=False, sparql11=True, context_aware=True, **sparqlwrapper_kwargs)[source]

Bases: rdflib.plugins.stores.sparqlstore.NSSPARQLWrapper, rdflib.store.Store

An RDFLib store around a SPARQL endpoint

This is in theory context-aware and should work as expected when a context is specified.

For ConjunctiveGraphs, reading is done from the “default graph”. Exactly what this means depends on your endpoint, because SPARQL does not offer a simple way to query the union of all graphs, as would be expected for a ConjunctiveGraph. This is why we recommend using Dataset instead, which is motivated by SPARQL 1.1.

Fuseki/TDB has a flag for specifying that the default graph is the union of all graphs (tdb:unionDefaultGraph in the Fuseki config).

Warning

The SPARQL Store does not support blank-nodes!

As blank-nodes act as variables in SPARQL queries there is no way to query for a particular blank node.

See http://www.w3.org/TR/sparql11-query/#BGPsparqlBNodes
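
A minimal read-only sketch (DBpedia is used only as an example of a public endpoint):

    from rdflib import Graph, URIRef
    from rdflib.plugins.stores.sparqlstore import SPARQLStore

    g = Graph(SPARQLStore("http://dbpedia.org/sparql"))
    subj = URIRef("http://dbpedia.org/resource/Berlin")
    # each pattern lookup is translated into a SELECT against the endpoint
    for s, p, o in g.triples((subj, None, None)):
        pass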

__init__(endpoint=None, bNodeAsURI=False, sparql11=True, context_aware=True, **sparqlwrapper_kwargs)[source]
__len__(context=None)[source]
__module__ = 'rdflib.plugins.stores.sparqlstore'
add((subject, predicate, obj), context=None, quoted=False)[source]
addN(quads)[source]
add_graph(graph)[source]
bind(prefix, namespace)[source]
commit()[source]
contexts(triple=None)[source]

Iterates over results to “SELECT ?NAME { GRAPH ?NAME { ?s ?p ?o } }” or “SELECT ?NAME { GRAPH ?NAME {} }” if triple is None.

Returns instances of this store with the SPARQL wrapper object updated via addNamedGraph(?NAME).

This causes a named-graph-uri key / value pair to be sent over the protocol.

Please note that some SPARQL endpoints are not able to find empty named graphs.

create(configuration)[source]
destroy(configuration)[source]
formula_aware = False
graph_aware = True
namespace(prefix)[source]
namespaces()[source]
open(configuration, create=False)[source]

Sets the endpoint URL for this SPARQLStore. If create == True an exception is thrown.

prefix(namespace)[source]
query(query, initNs={}, initBindings={}, queryGraph=None, DEBUG=False)[source]
query_endpoint
regex_matching = 0
remove((subject, predicate, obj), context)[source]
remove_graph(graph)[source]
rollback()[source]
transaction_aware = False
triples((s, p, o), context=None)[source]
  • tuple (s, p, o)
    the triple used as filter for the SPARQL select. (None, None, None) means anything.
  • context context
    the graph effectively calling this method.

Returns a tuple of triples, executing essentially a SPARQL query like SELECT ?subj ?pred ?obj WHERE { ?subj ?pred ?obj }

context may include three parameters to refine the underlying query:

  • LIMIT: an integer to limit the number of results
  • OFFSET: an integer to enable paging of results
  • ORDERBY: an instance of Variable(‘s’), Variable(‘o’) or Variable(‘p’); by default, the first ‘None’ from the given triple is used

Warning

  • Using LIMIT or OFFSET automatically includes an ORDERBY, because otherwise the results would be retrieved in a non-deterministic way (depending on the walking path over the graph).
  • Using OFFSET without defining LIMIT will discard the first OFFSET results.

    a_graph.LIMIT = limit
    a_graph.OFFSET = offset
    triple_generator = a_graph.triples(mytriple)
    # do something with the triples
    # remove LIMIT and OFFSET if not required for the next triples() calls
    del a_graph.LIMIT
    del a_graph.OFFSET

triples_choices((subject, predicate, object_), context=None)[source]

A variant of triples that can take a list of terms instead of a single term in any slot. Stores can implement this to optimize the response time from the default ‘fallback’ implementation, which will iterate over each term in the list and dispatch to triples.
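
A minimal sketch of the call shape (store is a hypothetical Store instance; store-level results are assumed to follow the usual (triple, contexts) convention):

    from rdflib import URIRef

    # the predicate slot holds a list of alternatives; the fallback issues
    # one triples() call per listed term
    pattern = (None, [URIRef("urn:likes"), URIRef("urn:hates")], None)
    for (s, p, o), contexts in store.triples_choices(pattern):
        pass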

class rdflib.plugins.stores.sparqlstore.SPARQLUpdateStore(queryEndpoint=None, update_endpoint=None, bNodeAsURI=False, sparql11=True, context_aware=True, postAsEncoded=True, autocommit=True)[source]

Bases: rdflib.plugins.stores.sparqlstore.SPARQLStore

A store using SPARQL queries for reading and SPARQL Update for changes.

This can be context-aware, if so, any changes will be to the given named graph only.

In favor of the SPARQL 1.1 motivated Dataset, we advise against using this with ConjunctiveGraphs, as it reads from and writes to the “default graph”. Exactly what this means depends on the endpoint and can result in confusion.

For Graph objects, everything works as expected.

Warning

The SPARQL Update Store does not support blank-nodes!

As blank-nodes act as variables in SPARQL queries, there is no way to query for a particular blank node.

See http://www.w3.org/TR/sparql11-query/#BGPsparqlBNodes
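
A minimal sketch (the Fuseki endpoint URLs and the graph name are illustrative):

    from rdflib import Graph, URIRef
    from rdflib.plugins.stores.sparqlstore import SPARQLUpdateStore

    store = SPARQLUpdateStore(
        queryEndpoint="http://localhost:3030/ds/query",
        update_endpoint="http://localhost:3030/ds/update",
        autocommit=False,  # batch edits until commit()
    )
    g = Graph(store, identifier=URIRef("urn:example:g"))
    g.add((URIRef("urn:michel"), URIRef("urn:likes"), URIRef("urn:pizza")))
    g.commit()  # flushes the batched edits as one SPARQL Update request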

BLOCK_END = u'}'
BLOCK_FINDING_PATTERN = <_sre.SRE_Pattern object>
BLOCK_START = u'{'
BlockContent = u'((\'([^\'\\\\]|\\\\.)*\')|("([^"\\\\]|\\\\.)*")|(\'\'\'((\'|\'\')?([^\'\\\\]|\\\\.))*\'\'\')|("""(("|"")?([^"\\\\]|\\\\.))*"""))|(<([^<>"{}|^`\\]\\\\\\[\\x00-\\x20])*>)|(#[^\\x0D\\x0A]*([\\x0D\\x0A]|\\Z))|(\\\\.)'
BlockFinding = u'(?P<block_start>{)|(?P<block_end>})|(?P<block_content>((\'([^\'\\\\]|\\\\.)*\')|("([^"\\\\]|\\\\.)*")|(\'\'\'((\'|\'\')?([^\'\\\\]|\\\\.))*\'\'\')|("""(("|"")?([^"\\\\]|\\\\.))*"""))|(<([^<>"{}|^`\\]\\\\\\[\\x00-\\x20])*>)|(#[^\\x0D\\x0A]*([\\x0D\\x0A]|\\Z))|(\\\\.))'
COMMENT = u'#[^\\x0D\\x0A]*([\\x0D\\x0A]|\\Z)'
ESCAPED = u'\\\\.'
IRIREF = u'<([^<>"{}|^`\\]\\\\\\[\\x00-\\x20])*>'
STRING_LITERAL1 = u"'([^'\\\\]|\\\\.)*'"
STRING_LITERAL2 = u'"([^"\\\\]|\\\\.)*"'
STRING_LITERAL_LONG1 = u"'''(('|'')?([^'\\\\]|\\\\.))*'''"
STRING_LITERAL_LONG2 = u'"""(("|"")?([^"\\\\]|\\\\.))*"""'
String = u'(\'([^\'\\\\]|\\\\.)*\')|("([^"\\\\]|\\\\.)*")|(\'\'\'((\'|\'\')?([^\'\\\\]|\\\\.))*\'\'\')|("""(("|"")?([^"\\\\]|\\\\.))*""")'
__init__(queryEndpoint=None, update_endpoint=None, bNodeAsURI=False, sparql11=True, context_aware=True, postAsEncoded=True, autocommit=True)[source]
__len__(*args, **kwargs)[source]
__module__ = 'rdflib.plugins.stores.sparqlstore'
add(spo, context=None, quoted=False)[source]

Add a triple to the store of triples.

addN(quads)[source]

Add a list of quads to the store.

add_graph(graph)[source]
commit()[source]

add(), addN(), and remove() are transactional to reduce the overhead of many small edits. Read and update() calls will automatically commit any outstanding edits. This should behave as expected most of the time, except that alternating writes and reads can degenerate to the call-per-triple situation that originally existed.

contexts(*args, **kwargs)[source]
open(configuration, create=False)[source]

Sets the endpoint URLs for this SPARQLStore.

Parameters:
  • configuration – either a tuple of (queryEndpoint, update_endpoint), or a string with the query endpoint
  • create – if True an exception is thrown
query(*args, **kwargs)[source]
remove(spo, context)[source]

Remove a triple from the store

remove_graph(graph)[source]
rollback()[source]
triples(*args, **kwargs)[source]
update(query, initNs={}, initBindings={}, queryGraph=None, DEBUG=False)[source]

Perform a SPARQL Update query against the endpoint (INSERT, LOAD, DELETE, etc.). Setting initNs adds PREFIX declarations to the beginning of the update. Setting initBindings adds inline VALUES to the beginning of every WHERE clause. By the SPARQL grammar, all operations that support variables (namely INSERT and DELETE) require a WHERE clause. Important: initBindings fails if the update contains the substring ‘WHERE {’ which does not denote a WHERE clause, e.g. if it is part of a literal.

Context-aware query rewriting

  • When: If context-awareness is enabled and the graph is not the default graph of the store.
  • Why: To ensure consistency with the IOMemory store. The graph must accept “local” SPARQL requests (requests with no GRAPH keyword) as if it were the default graph.
  • What is done: These “local” queries are rewritten by this store. The content of each block of a SPARQL Update operation is wrapped in a GRAPH block except if the block is empty. This basically causes INSERT, INSERT DATA, DELETE, DELETE DATA and WHERE to operate only on the context.
  • Example: “INSERT DATA { <urn:michel> <urn:likes> <urn:pizza> }” is converted into “INSERT DATA { GRAPH <urn:graph> { <urn:michel> <urn:likes> <urn:pizza> } }”.
  • Warning: Queries are presumed to be “local” but this assumption is not checked. For instance, if the query already contains GRAPH blocks, those will be wrapped in new GRAPH blocks.
  • Warning: A simplified grammar is used that should tolerate extensions of the SPARQL grammar. Still, the process may fail in uncommon situations and produce invalid output.
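
A minimal sketch of a direct update call (store is assumed to be the SPARQLUpdateStore instance from the earlier sketch; the ex: prefix is illustrative):

    store.update(
        "INSERT DATA { ex:michel ex:likes ex:pizza }",
        initNs={"ex": "urn:example:"},  # prepended as a PREFIX declaration
    )
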
update_endpoint

the HTTP URL for the Update endpoint, typically something like http://server/dataset/update

where_pattern = <_sre.SRE_Pattern object>
rdflib.plugins.stores.sparqlstore.TraverseSPARQLResultDOM(doc, asDictionary=False)[source]

Returns a generator over tuples of results

rdflib.plugins.stores.sparqlstore.localName(qname)[source]