Using a List as a Named Placeholder in Python-sqlite

Sometimes as part of a sqlite query, I want to use a list or other iterable as an argument.  Most commonly, this is useful for “IN” queries.  sqlite (or possibly the python-sqlite wrapper) can’t properly interpret this sort of named placeholder (cf. using lists as sqlite arguments).

For example:


SELECT * FROM table WHERE id IN (:arglist)

This won’t work in sqlite.  A workaround is given below: use the function “sql_in_to_or” to change this construct from an “IN” to an “OR”, and get back a dict of the arguments.


def sql_in_to_or(sqlvar, args, name="arg"):
    '''
    Convert: sqlvar IN (args) to:
      sqlvar=:arg0 OR sqlvar=:arg1 ...

    Returns ("new query", dict of args).

    To use in execute:

    Q = 'select * from table where (%(where)s) and time > :ts'
    iq, iq_args = sql_in_to_or("table.id", range(5), name='id')
    placeholders = dict(ts=12323425)
    placeholders.update(iq_args)  # update() returns None, so call it first
    conn.execute(Q % dict(where=iq), placeholders)
    '''
    q_parts = list()
    q_args = dict()
    for (ii, arg) in enumerate(args):
        argname = "%s%i" % (name, ii)            # arg0, arg1, ...
        q_parts.append("%s=:%s" % (sqlvar, argname))
        q_args[argname] = arg

    query = " OR ".join(q_parts)
    return (query, q_args)
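
A quick sanity check of what the function produces (the column name and values here are made up):

>>> sql_in_to_or("id", [10, 20, 30])
('id=:arg0 OR id=:arg1 OR id=:arg2', {'arg0': 10, 'arg1': 20, 'arg2': 30})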

(Thanks to Jon Nelson of the Pycurious Blog for the idea; we work together at Renesys.)


Simple “object-db” using JSON and python-sqlite

As part of a much larger project, I have a group of “snapshots” of a complicated data structure.  I need to save these in a persistent way and continue to have access to them when needed.  My solution is to output the snapshots as JSON and store them in a sqlite database*, where they will persist on disk as “jlobs” (json large objects).

This “sqlite as object-db” has several advantages:

  1. atomic transactions,
  2. easy database replication,
  3. the jlob can easily change format without affecting the schema,
  4. very light runtime requirements.

Building off the sqlite3 manual, it is easy to see how to extract the json back *out* of the database.

There are drawbacks to this approach, of course:

  1. you’re responsible for building and maintaining tables indexing any queryable elements of your jlob, if you want to be able to access them using SQL (see the sketch after the example code below),
  2. sql normalization purists will throw up when they look at your schema.

(*Note: if you are on CentOS 5 and do not have access to Python 2.5, make sure that you install python-sqlite2 (for example, from one of these rpms) rather than updating your python-sqlite in place.  BAD THINGS WILL HAPPEN, including breaking yum.)

#!/usr/bin/python
import sys
if sys.version_info >= (2,5):
    import sqlite3
else:
    from pysqlite2 import dbapi2 as sqlite3

try:
    import json
except ImportError:
    import simplejson as json

# run json.loads on any column declared (or aliased) as type "json"
sqlite3.register_converter("json", json.loads)

conn = sqlite3.connect(":memory:",
    detect_types=sqlite3.PARSE_DECLTYPES|sqlite3.PARSE_COLNAMES)
c = conn.cursor()
c.row_factory = sqlite3.Row  # access fields by name
d = conn.cursor()            # plain tuple rows

json_string = json.dumps(dict(a=1, b=[1, 2, 3]))
# declaring mydata with type "json" triggers the converter registered above
conn.execute('''
    create table snapshot(
          id INTEGER PRIMARY KEY AUTOINCREMENT,
          mydata json);
    ''')
conn.execute('''
    insert into snapshot values
       (null, ?)''', (json_string,))

R1 = c.execute("select * from snapshot").fetchone()['mydata']  # Row: by name
R2 = d.execute("select * from snapshot").fetchone()[1]         # tuple: by index
R3 = conn.execute("select * from snapshot").fetchone()[1]      # implicit cursor

assert R1 == R2 == R3 == {'a': 1, 'b': [1, 2, 3]}, "all should be equal"
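
To illustrate drawback 1 above, here is a minimal sketch of the kind of index table you would maintain by hand; the snapshot_index table and the choice of “a” as the queryable field are invented for this example:

# hypothetical side table indexing one queryable field out of the jlob
conn.execute('''
    create table snapshot_index(
          snapshot_id INTEGER,
          a INTEGER);
    ''')

# on insert, pull the field out of the decoded object by hand
sid = conn.execute("select id from snapshot").fetchone()[0]
obj = json.loads(json_string)
conn.execute("insert into snapshot_index values (?, ?)", (sid, obj['a']))

# now plain SQL can find snapshots without touching the jlob; the
# "[json]" column-name trick uses the PARSE_COLNAMES flag set above
row = conn.execute('''
    select snapshot.mydata as "mydata [json]"
    from snapshot join snapshot_index
      on snapshot.id = snapshot_index.snapshot_id
    where snapshot_index.a = ?''', (1,)).fetchone()
assert row[0] == {'a': 1, 'b': [1, 2, 3]}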