For the discussion, let's assume your execution plan looks like
    QUERY PLAN
    --------------------------------
     Update on mytab
       ->  Seq Scan on mytab
             Filter: (id = 1)
I also assume that you are using the default READ COMMITTED
isolation level.
Then PostgreSQL will sequentially read the table.
Whenever it finds a row that matches the filter, that row will be locked and updated.
If locking a row is blocked by a concurrent transaction, PostgreSQL waits until that lock is released. Then it re-evaluates the filter condition on the latest version of the row and either skips the row (if the condition no longer applies on account of the concurrent modification) or locks and updates the modified row.
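That re-check can be seen in a two-session experiment. This is a sketch assuming a table `mytab(id integer, val text)` with a row `id = 1`; the session labels are only annotations:

```sql
-- Session 1
BEGIN;
UPDATE mytab SET val = 'a' WHERE id = 1;   -- locks the row

-- Session 2 (blocks waiting for the row lock)
UPDATE mytab SET val = 'b' WHERE id = 1;

-- Session 1: move the row out of the filter, then commit
UPDATE mytab SET id = 2 WHERE id = 1;
COMMIT;

-- Session 2 wakes up, re-evaluates "id = 1" against the updated
-- version of the row, finds it no longer matches, and reports
-- UPDATE 0 instead of modifying the row.
```

Had session 1 changed only `val` and left `id` alone, session 2 would instead have updated the new row version.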
See the documentation:
UPDATE, DELETE, SELECT FOR UPDATE, and SELECT FOR SHARE commands behave the same as SELECT in terms of searching for target rows: they will only find target rows that were committed as of the command start time. However, such a target row might have already been updated (or deleted or locked) by another concurrent transaction by the time it is found. In this case, the would-be updater will wait for the first updating transaction to commit or roll back (if it is still in progress). If the first updater rolls back, then its effects are negated and the second updater can proceed with updating the originally found row. If the first updater commits, the second updater will ignore the row if the first updater deleted it, otherwise it will attempt to apply its operation to the updated version of the row. The search condition of the command (the WHERE clause) is re-evaluated to see if the updated version of the row still matches the search condition. If so, the second updater proceeds with its operation using the updated version of the row.
In particular, it is possible that two UPDATE statements that each modify several rows deadlock with each other, since they acquire locks as they proceed, and locks are always held until the end of the transaction.
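A minimal way to provoke such a deadlock is to have two sessions lock the same two rows in opposite order. Again assuming `mytab(id integer, val text)` with rows `id = 1` and `id = 2`:

```sql
-- Session 1                              -- Session 2
BEGIN;                                    BEGIN;
UPDATE mytab SET val = 'x'
  WHERE id = 1;
                                          UPDATE mytab SET val = 'y'
                                            WHERE id = 2;
UPDATE mytab SET val = 'x'
  WHERE id = 2;   -- blocks on session 2
                                          UPDATE mytab SET val = 'y'
                                            WHERE id = 1;   -- blocks on session 1
-- After deadlock_timeout, PostgreSQL detects the cycle and aborts
-- one of the transactions with "ERROR: deadlock detected";
-- the other one can then proceed.
```

Ordering the updates consistently (for example, always by ascending `id`) avoids this kind of deadlock.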