#515 crash when using a fresh DB after FAS integration
Closed: Fixed 6 years ago Opened 6 years ago by ryanlerch.

Pretty regularly, when I wipe my DB and recreate it with hreset, I get the following error when first starting with honcho start.

If I then restart it (i.e. just run honcho start again, without resetting the DB), it seems to work fine from then on.

"""
02:43:09 worker.1 | Traceback (most recent call last):
02:43:09 worker.1 | File "/srv/hubs/venv/bin/fedora-hubs-worker", line 11, in <module>
02:43:09 worker.1 | load_entry_point('fedora-hubs', 'console_scripts', 'fedora-hubs-worker')()
02:43:09 worker.1 | File "/srv/hubs/fedora-hubs/hubs/backend/worker.py", line 168, in main
02:43:09 worker.1 | item["username"], item.get("hub"))
02:43:09 worker.1 | File "/srv/hubs/fedora-hubs/hubs/utils/fas.py", line 224, in sync_user_roles
02:43:09 worker.1 | affected_hubs = fas_client.sync_user_roles(user, hub)
02:43:09 worker.1 | File "/srv/hubs/fedora-hubs/hubs/utils/fas.py", line 156, in sync_user_roles
02:43:09 worker.1 | hub.subscribe(user, role)
02:43:09 worker.1 | File "/srv/hubs/fedora-hubs/hubs/models/hub.py", line 139, in subscribe
02:43:09 worker.1 | session.flush()
02:43:09 worker.1 | File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 2112, in flush
02:43:09 worker.1 | self._flush(objects)
02:43:09 worker.1 | File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 2230, in _flush
02:43:09 worker.1 | transaction.rollback(_capture_exception=True)
02:43:09 worker.1 | File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__
02:43:09 worker.1 | compat.reraise(exc_type, exc_value, exc_tb)
02:43:09 worker.1 | File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 2194, in _flush
02:43:09 worker.1 | flush_context.execute()
02:43:09 worker.1 | File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 373, in execute
02:43:09 worker.1 | rec.execute(self)
02:43:09 worker.1 | File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 532, in execute
02:43:09 worker.1 | uow
02:43:09 worker.1 | File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 178, in save_obj
02:43:09 worker.1 | mapper, table, insert)
02:43:09 worker.1 | File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 771, in _emit_insert_statements
02:43:09 worker.1 | execute(statement, multiparams)
02:43:09 worker.1 | File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 914, in execute
02:43:09 worker.1 | return meth(self, multiparams, params)
02:43:09 worker.1 | File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 323, in _execute_on_connection
02:43:09 worker.1 | return connection._execute_clauseelement(self, multiparams, params)
02:43:09 worker.1 | File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1010, in _execute_clauseelement
02:43:09 worker.1 | compiled_sql, distilled_params
02:43:09 worker.1 | File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1146, in _execute_context
02:43:09 worker.1 | context)
02:43:09 worker.1 | File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1341, in _handle_dbapi_exception
02:43:09 worker.1 | exc_info
02:43:09 worker.1 | File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
02:43:09 worker.1 | reraise(type(exception), exception, tb=exc_tb, cause=cause)
02:43:09 worker.1 | File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
02:43:09 worker.1 | context)
02:43:09 worker.1 | File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
02:43:09 worker.1 | cursor.execute(statement, parameters)
02:43:09 worker.1 | sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) database is locked [SQL: u'INSERT INTO association (hub_id, user_id, role) VALUES (?, ?, ?)'] [parameters: (35, u'lmacken', u'member')]
02:43:09 worker.1 | process terminated
"""


Yeah, you're getting a "database is locked" error because SQLite does not handle concurrent writers well: it locks the whole database file for the duration of a write transaction. And in the Vagrant VM, we have several worker and triage processes accessing the DB at the same time.
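The failure mode is easy to reproduce outside of hubs. A minimal sketch using Python's stdlib sqlite3 module (the table and column names are borrowed from the traceback; the file path and usernames are made up): while one connection holds a write transaction, a second writer gets the same OperationalError until the first one commits.

```python
import os
import sqlite3
import tempfile

# Two connections to the same SQLite file stand in for the separate
# worker and triage processes on the dev VM.
db_path = os.path.join(tempfile.mkdtemp(), "hubs.db")

writer = sqlite3.connect(db_path)
writer.execute("CREATE TABLE association (hub_id INT, user_id TEXT, role TEXT)")
writer.commit()

writer.execute("BEGIN IMMEDIATE")  # take the write lock and hold it
writer.execute("INSERT INTO association VALUES (1, 'alice', 'member')")

# Second "process": timeout=0 makes it fail immediately instead of
# retrying for the default 5 seconds.
other = sqlite3.connect(db_path, timeout=0)
try:
    other.execute("INSERT INTO association VALUES (2, 'bob', 'member')")
except sqlite3.OperationalError as e:
    print(e)  # database is locked

# Once the first transaction commits, the second writer goes through.
writer.commit()
other.execute("INSERT INTO association VALUES (2, 'bob', 'member')")
other.commit()
```

Raising the connect timeout (or enabling WAL mode) papers over this for light contention, but with several long-running writers you will still hit it eventually.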

In staging, we will be connecting to PostgreSQL, which does not have this limitation (it uses row-level locking, so concurrent writers don't block each other). Maybe it would also be useful to switch the hubs-dev machine to PGSQL?
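For reference, switching the backend is mostly a matter of changing the SQLAlchemy database URL in the hubs config. A sketch of the URL shape only — the actual config key name, database name, and credentials here are assumptions, not what hubs uses:

```
# dev VM today (single-writer SQLite file; path is illustrative):
sqlalchemy.url = sqlite:////var/tmp/hubs.db

# PostgreSQL (hypothetical role/database, adjust to the real deployment):
sqlalchemy.url = postgresql://hubs:hubspassword@localhost/hubs
```

The PostgreSQL URL additionally requires a DBAPI driver such as psycopg2 to be installed in the virtualenv.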

Metadata Update from @ryanlerch:
- Issue close_status updated to: Fixed

6 years ago
