Question

Considering Storm, a python ORM, I would like to automatically generate the schema for a (mysql) database. The home page states

"Storm works well with existing database schemas." ( https://storm.canonical.com/FrontPage ),

hence I was hoping not to have to create the model classes myself. However, the 'getting started' tutorial ( https://storm.canonical.com/Tutorial ) suggests that a class like the one below needs to be created manually for each table, with each field specified by hand:

from storm.locals import Int, Unicode

class Person(object):
    __storm_table__ = "person"
    id = Int(primary=True)
    name = Unicode()
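
For reference, the tutorial then wires such a class up roughly like this (a minimal sketch based on the linked tutorial; the connection URI and the queried name are placeholders):

from storm.locals import create_database, Store

database = create_database("mysql://user:password@host/dbname")  # placeholder URI
store = Store(database)
# Person is the class defined above; find() uses its declared columns
person = store.find(Person, Person.name == u"Alice").one()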

Similarly, SQLAlchemy doesn't seem to offer a reverse-engineering feature either, and it needs a schema definition like this one:

from sqlalchemy import Table, Column, Integer, String, MetaData

metadata = MetaData()
user = Table('user', metadata,
    Column('user_id', Integer, primary_key=True),
    Column('user_name', String(16), nullable=False),
    Column('email_address', String(60)),
    Column('password', String(20), nullable=False)
)

Of course, these classes/schemas make sense, since each table will likely represent some 'object of interest' and one can extend the classes with all kinds of functionality. However, they are tedious to create, and their (initial) content is straightforward to derive when the database already exists.

One ORM that does allow for reverse engineering is Doctrine (a PHP ORM):

http://docs.doctrine-project.org/en/2.0.x/reference/tools.html

Are there similar reverse-engineering tools for Storm, SQLAlchemy, or any other Python ORM or Python database fancyfier?

Solution

I am not aware of how Storm manages this process, but you can certainly reflect tables in a database with SQLAlchemy. Below is a basic example using a SQL Server instance that I have access to at the moment; the same approach works for MySQL, you only need to change the connection URL.

AN ENTIRE DATABASE

>>> from sqlalchemy import create_engine, MetaData
>>> engine = create_engine('mssql+pyodbc://<username>:<password>@<host>/<database>')  # replace <username> with user name etc.
>>> meta = MetaData()
>>> meta.reflect(bind=engine)
>>> funds_table = meta.tables['funds']  # tables are stored in meta.tables dict
>>> funds_table  # now stores database schema object
Table(u'funds', MetaData(bind=None), Column(u'fund_token', INTEGER(), table=<funds>, primary_key=True, nullable=False), Column(u'award_year_token', INTEGER(), ForeignKey(u'award_year_defn.award_year_token'), table=<funds>, nullable=False), ... Column(u'fin_aid_disclosure_category', VARCHAR(length=3, collation=u'SQL_Latin1_General_CP1_CI_AS'), table=<funds>), Column(u'report_as_additional_unsub', BIT(), table=<funds>, server_default=DefaultClause(<sqlalchemy.sql.expression.TextClause object at 0x000000000545B6D8>, for_update=False)), schema=None)
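
Since meta.tables is an ordinary dict, you can also walk everything that was reflected. A quick sketch that prints each table name together with its column names:

>>> for name, table in meta.tables.items():
...     print("%s: %s" % (name, [column.name for column in table.columns]))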

If you merely want to reflect one table at a time, you can use the following code instead.

ONE TABLE AT A TIME (much faster)

>>> from sqlalchemy import Table, create_engine, MetaData
>>> engine = create_engine('mssql+pyodbc://<username>:<password>@<host>/<database>')
>>> meta = MetaData()
>>> funds_table = Table('funds', meta, autoload=True, autoload_with=engine)  # indicate table name (here 'funds') with a string passed to Table as the first argument
>>> funds_table  # now stores database schema object
Table(u'funds', MetaData(bind=None), Column(u'fund_token', INTEGER(), table=<funds>, primary_key=True, nullable=False), Column(u'award_year_token', INTEGER(), ForeignKey(u'award_year_defn.award_year_token'), table=<funds>, nullable=False), ... Column(u'fin_aid_disclosure_category', VARCHAR(length=3, collation=u'SQL_Latin1_General_CP1_CI_AS'), table=<funds>), Column(u'report_as_additional_unsub', BIT(), table=<funds>, server_default=DefaultClause(<sqlalchemy.sql.expression.TextClause object at 0x000000000545B6D8>, for_update=False)), schema=None)
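
The reflected Table object can be queried directly, without ever writing a model class. A minimal sketch using the same pre-1.4 style API as above (the table and column names are the ones from my database):

>>> from sqlalchemy import select
>>> conn = engine.connect()
>>> result = conn.execute(select([funds_table]).limit(5))  # first five rows of 'funds'
>>> rows = result.fetchall()
>>> conn.close()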

As you can probably imagine, you can then save the reflected schema so the tables can be accessed more quickly in the future, without re-reflecting the whole database on every run.
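
One way to do that, sketched below, is to pickle the reflected MetaData (SQLAlchemy's Table/MetaData objects support pickling) and load it back later instead of reflecting again; the file name here is arbitrary:

>>> import pickle
>>> with open('schema_cache.pickle', 'wb') as handle:
...     pickle.dump(meta, handle)
>>> # later, in another process: load the cached schema instead of re-reflecting
>>> with open('schema_cache.pickle', 'rb') as handle:
...     meta = pickle.load(handle)
>>> funds_table = meta.tables['funds']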

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow