I understand that using subprocess.Popen(..., preexec_fn=func) makes Popen thread-unsafe and can deadlock the child process when used in a multi-threaded program. The documentation warns:
Warning: The preexec_fn parameter is not safe to use in the presence of threads in your application. The child process could deadlock before exec is called. If you must use it, keep it trivial! Minimize the number of libraries you call into.
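To make the scenario concrete, here is a minimal sketch of the situation I have in mind (the helper names and the `sleep` command are only placeholders, and whether it ever actually deadlocks presumably depends on timing, platform, and Python version):

```python
import os
import subprocess
import threading

def make_quiet():
    # Runs in the forked child between fork() and exec().  Only the forking
    # thread survives in the child, so any lock another parent thread held
    # at fork time (allocator, import, or library locks) stays locked
    # forever as far as the child is concerned.
    os.nice(20)

def worker():
    subprocess.Popen(["sleep", "1"], preexec_fn=make_quiet).wait()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```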
Are there any circumstances under which it is actually safe to use in a multi-threaded environment? For example, would passing a C-compiled extension function that does not itself acquire any interpreter locks be safe?
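By that I mean something like the sketch below, which passes a callable implemented entirely in C (os.setpgrp, essentially a single setpgrp(2) syscall) instead of a Python-level wrapper. My assumption is that even such a callable is still invoked through the interpreter's call machinery in the forked child, so at best this narrows the window:

```python
import os
import subprocess

# os.setpgrp is implemented in C and boils down to one setpgrp(2) call,
# which seems as close to the documentation's "keep it trivial" as it gets.
proc = subprocess.Popen(["sleep", "1"], preexec_fn=os.setpgrp)
proc.wait()
```

(For process-group/session handling specifically, subprocess also offers start_new_session=True, and recent versions a process_group parameter, which avoid preexec_fn entirely; I am asking about the general case.)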
I looked through the relevant interpreter code and am unable to find any trivially occurring deadlocks. Could passing a simple, pure-Python function such as lambda: os.nice(20) ever make the child process deadlock?
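For comparison, the only preexec_fn-free way I know of to get the same niceness effect is to let an external wrapper apply it after exec, which side-steps the warning entirely (assuming nice(1) exists on the system):

```python
import subprocess

# No Python code runs in the forked child here; nice(1) lowers the
# priority after exec, so the preexec_fn warning does not apply.
subprocess.Popen(["nice", "-n", "20", "sleep", "1"]).wait()
```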
Note: most of the obvious deadlocks are avoided via a call to PyOS_AfterFork_Child() (PyOS_AfterFork() in earlier versions of Python).
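For reference, my understanding (for Python 3.7+) is that PyOS_AfterFork_Child is also what runs the after_in_child callbacks registered with os.register_at_fork, which is how modules such as logging reinitialize their locks in the child:

```python
import os

# Assumption: os.register_at_fork hooks are executed by PyOS_AfterFork_Child
# in the forked child (Python 3.7+); logging uses this to reinitialize locks.
os.register_at_fork(
    after_in_child=lambda: print("after-fork hook ran in child", flush=True)
)

pid = os.fork()
if pid == 0:
    # Child: the hook above has already run by the time fork() returns here.
    os._exit(0)
os.waitpid(pid, 0)
```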
From: Is `preexec_fn` ever safe in multi-threaded programs? Under what circumstances?