Is there a performance advantage (or disadvantage) to using default instead of server_default for column default values in table mappings when using SQLAlchemy with PostgreSQL?
My understanding is that default renders the expression in the INSERT (usually) and that server_default places the expression in the CREATE TABLE statement. It seems like server_default is analogous to the typical handling of defaults directly in the db, such as CREATE TABLE example (..., date_col timestamptz DEFAULT now()), but it is not clear to me whether it is more efficient to handle defaults on INSERT or via table creation.
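To make the distinction concrete, here is a minimal sketch of the SQL each approach produces on PostgreSQL, as I understand it (the table and column names are illustrative, not from the model below):

from sqlalchemy import Column, DateTime, MetaData, Table
from sqlalchemy.sql import func

metadata = MetaData()

# default: SQLAlchemy renders the expression in the INSERT it emits, roughly
#   INSERT INTO example_client (date_col) VALUES (now())
# and the CREATE TABLE statement contains no DEFAULT clause.
client_side = Table(
    'example_client', metadata,
    Column('date_col', DateTime(timezone=True), default=func.now()),
)

# server_default: the expression goes into the DDL, roughly
#   CREATE TABLE example_server (date_col TIMESTAMP WITH TIME ZONE DEFAULT now())
# so the database fills in the value when the column is omitted from an INSERT.
server_side = Table(
    'example_server', metadata,
    Column('date_col', DateTime(timezone=True), server_default=func.now()),
)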
Would there be any performance improvement or degradation for row inserts if each of the default parameters in the example below were changed to server_default?
from uuid import uuid4
from sqlalchemy import Column, Boolean, DateTime, Integer
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.sql import func

Base = declarative_base()


class Item(Base):
    __tablename__ = 'items'
    id = Column(UUID(as_uuid=True), primary_key=True, default=uuid4)
    count = Column(Integer, nullable=False, default=0)
    flag = Column(Boolean, nullable=False, default=False)
    dated = Column(DateTime(timezone=True), nullable=False, default=func.now())
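For comparison, if each default in the model above were switched to server_default, the mapping might look roughly like this (a sketch: literal defaults have to be given as SQL text, and gen_random_uuid() assumes PostgreSQL 13+ or the pgcrypto extension):

from sqlalchemy import Column, Boolean, DateTime, Integer, text
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.sql import func

Base = declarative_base()


class ItemServerDefaults(Base):
    __tablename__ = 'items_server_defaults'
    # All DEFAULT expressions end up in the CREATE TABLE DDL, so the database
    # supplies the values for any column omitted from an INSERT.
    id = Column(UUID(as_uuid=True), primary_key=True,
                server_default=text('gen_random_uuid()'))
    count = Column(Integer, nullable=False, server_default=text('0'))
    flag = Column(Boolean, nullable=False, server_default=text('false'))
    dated = Column(DateTime(timezone=True), nullable=False, server_default=func.now())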
The best explanation I found so far for when to use default instead of server_default does not address performance (see Mike Bayer's SO answer on the subject). My oversimplified summary of that explanation is that default is preferred over server_default when:
- The db can't handle the expression you need or want to use for the default value.
- You can't or don't want to modify the schema directly.
...so the question remains: should performance be a consideration when choosing between default and server_default?
from Is there a performance reason to use "default" instead of "server_default" for table mapping in SQLAlchemy with PostgreSQL?