multiprocessing - Is looping and blocking on recv the best way to handle a long-lived process in Python?
I have a set of long-lived processes with expensive setup, and I want to push work to a bank of worker processes doing work in parallel. Each worker is different, building a different part of our database. We shut the workers down, and rebuild them, every 4 hours or so.
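Roughly, the lifecycle I'm after looks like this (just a sketch; start_workers and stop_workers are hypothetical helpers standing in for our build/teardown code):

    import time

    REBUILD_INTERVAL = 4 * 60 * 60   # seconds; we rebuild every 4 hours

    while True:
        workers = start_workers()      # hypothetical: spawn the bank of all-different workers
        time.sleep(REBUILD_INTERVAL)   # placeholder for 4 hours of distributing real work
        stop_workers(workers)          # hypothetical: signal shutdown and join each worker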
The Python examples I've seen for the multiprocessing module all seem to use all-the-same, short-lived processes that do one thing once and exit.
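For contrast, this is the shape of the examples I keep finding (a minimal sketch using Pool; the function is just for illustration):

    from multiprocessing import Pool

    def square(x):
        # each task is one cheap call; the worker keeps no state worth preserving
        return x * x

    if __name__ == '__main__':
        pool = Pool(processes=4)
        print pool.map(square, range(10))  # identical, disposable workers
        pool.close()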
Here's a sample I came up with for distributing work to a bank of all-different, long-lived workers.

Is this on track, or is there a better way to do this?
    import os
    from multiprocessing import Process, Pipe

    class Worker(Process):
        def __init__(self):
            Process.__init__(self)  # ...or super
            # parent_side stays with the parent; pipe is used inside run()
            self.parent_side, self.pipe = Pipe()

        def do_super_slow_initialization(self):
            print 'all work and no play makes jack a dull boy'

        def run(self):
            self.do_super_slow_initialization()
            while True:
                message = self.pipe.recv()
                if not message:
                    break
                self.pipe.send({'message': message,
                                'pid': os.getpid(),
                                'ppid': os.getppid()})
            self.pipe.close()

    def main():
        print '+++ (parent)', os.getpid(), os.getppid()
        workers = [Worker() for _ in xrange(10)]

        # start the workers
        for w in workers:
            w.start()

        # push a bunch of messages through
        for x in xrange(10):
            # send a message to each worker
            for y, w in enumerate(workers):
                w.parent_side.send('work%s_%s' % (x, y))

            # get the results back
            for w in workers:
                print w.parent_side.recv()

        # shut down
        for w in workers:
            w.parent_side.send(None)
            w.join()

    if __name__ == '__main__':
        main()
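One variation I've been weighing instead of blocking forever on recv(): poll with a timeout so the loop wakes up periodically. This is a sketch of a drop-in replacement for run() in the Worker class above; the 1-second timeout is an arbitrary choice.

    def run(self):
        self.do_super_slow_initialization()
        while True:
            # wake up at least once a second instead of blocking indefinitely
            if not self.pipe.poll(1):
                continue  # a chance to check a shutdown flag or do housekeeping
            message = self.pipe.recv()
            if not message:
                break
            self.pipe.send({'message': message,
                            'pid': os.getpid(),
                            'ppid': os.getppid()})
        self.pipe.close()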