I'm writing a program that requires a lot of memory. I use Python 3.7.10. Over the course of a run it creates about 3 GB of Python objects and modifies them. Some of the objects contain references to other objects, and sometimes I need to deepcopy one object to create another.
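For context, here is a minimal sketch of the kind of object graph I am talking about (`Record` and `build_graph` are made-up names, not my real code):

```python
import copy

# Hypothetical stand-in for my real classes: plain Python objects that hold
# references to other objects, no numeric arrays.
class Record:
    def __init__(self, key, payload):
        self.key = key
        self.payload = payload            # strings / dicts, not numbers
        self.links = []                   # references to other Record objects

def build_graph(n):
    records = [Record(i, {"text": f"item-{i}"}) for i in range(n)]
    for i, rec in enumerate(records):
        # each record points at a few earlier records
        rec.links = [records[j] for j in range(max(0, i - 3), i)]
    return records

graph = build_graph(10_000)               # the real graph is ~3 GB
snapshot = copy.deepcopy(graph)           # sometimes I need a deep copy like this
```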
My problem is that creating and modifying these objects takes a lot of time and causes performance issues. I would like to do some of the creation and modification in parallel. However, there are some limitations:
- the program is very CPU-bound with almost no I/O or network usage, so the threading library will not help because of the GIL
- the system I work on has no copy-on-write, so the multiprocessing library spends a lot of time forking the process (see the sketch after this list)
- the objects do not contain numbers and most of the work in the program is not mathematical, so I cannot benefit from numpy or ctypes
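To make the multiprocessing point concrete, this is roughly the pattern I tried (`GRAPH` and `work` are placeholder names); every worker needs access to the whole graph, so the cost of duplicating or rebuilding it per process dominates:

```python
import multiprocessing as mp

# Placeholder for the real ~3 GB object graph: plain Python objects with
# references to each other, no numbers.
GRAPH = {i: {"text": f"item-{i}", "links": [(i + 1) % 1000]} for i in range(1000)}

def work(key):
    # Purely CPU-bound transformation that only reads the shared graph.
    node = GRAPH[key]
    return key, node["text"].upper()

if __name__ == "__main__":
    # Every worker needs GRAPH: with "fork" the parent's memory is duplicated
    # (no copy-on-write on my system), with "spawn" the module is re-imported
    # and GRAPH rebuilt in each child, so either way process startup outweighs
    # the actual work.
    with mp.Pool(processes=4) as pool:
        results = dict(pool.map(work, list(GRAPH.keys())))
    print(len(results))
```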
What would be a good alternative for this kind of in-memory data that would let me parallelize my code better?