Job: + whenStarted, whenFinished callbacks, now with additional job argument; let Scheduler.schedule() return the real startTime (may be used as jobId)

git-svn-id: svn://svn.cy55.de/Zope3/src/loops/trunk@1914 fd906abe-77d9-0310-91a1-e0d9ade77398
helmutm 2007-08-14 08:24:49 +00:00
parent 5348071d34
commit 2ac3c63888
4 changed files with 25 additions and 11 deletions


@@ -137,7 +137,7 @@ How does this work?
   >>> from time import time
   >>> scheduler = agent.scheduler
-  >>> scheduler.schedule(TestJob())
+  >>> startTime = scheduler.schedule(TestJob())
   >>> tester.iterate()
   executing
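The new return-value contract can be sketched in isolation. The following is a hypothetical, reactor-free rendition (the real Scheduler in loops.agent uses Twisted's `reactor.callLater`; the `TestJob` stand-in and dict-based queue here are illustrative only), showing how the returned start time doubles as a unique job id:

```python
import time

class TestJob:
    """Stand-in for a schedulable job (hypothetical, for illustration)."""
    def run(self):
        print('executing')

class Scheduler:
    def __init__(self):
        self.queue = {}  # startTime -> job

    def schedule(self, job, startTime=None):
        """Queue a job; return the real start time (usable as a job id)."""
        startTime = int(startTime or time.time())
        # bump by one second until the slot is free, so the key is unique
        while startTime in self.queue:
            startTime += 1
        self.queue[startTime] = job
        return startTime

scheduler = Scheduler()
t1 = scheduler.schedule(TestJob())
t2 = scheduler.schedule(TestJob())
assert t2 == t1 + 1  # second job gets a bumped, unique start time
```

Because callers now receive the real (possibly bumped) start time, they can use it later to look the job up in the queue.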
@@ -152,9 +152,19 @@ classes from the testing package.
   >>> transporter = transport.Transporter(agent)
   >>> transportJob = transporter.createJob()
   >>> crawlJob.successors.append(transportJob)
-  >>> scheduler.schedule(crawlJob)
+  >>> startTime = scheduler.schedule(crawlJob)
+
+The Job class offers two callback hooks: ``whenStarted`` and ``whenFinished``.
+Use these to get notified when a job starts or finishes.
+
+  >>> def finishedCB(job, result):
+  ...     print 'Crawling finished, result:', result
+
+  >>> crawlJob.whenFinished = finishedCB

 Now let the reactor run...

   >>> tester.iterate()
   Crawling finished, result: [<loops.agent.testing.crawl.DummyResource ...>]
   Transferring: Dummy resource data for testing purposes.

 Using configuration with scheduling
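The changed callback contract can also be shown standalone. This is a minimal Python 3 sketch, not the Twisted-driven original (the `Job` class and no-op defaults here are simplified stand-ins): both hooks now receive the job itself, so one callback function can serve several jobs.

```python
class Job:
    """Minimal job with the new callback signatures: both hooks get the job."""
    def __init__(self):
        # default no-op callbacks, overridable per instance
        self.whenStarted = lambda job: None
        self.whenFinished = lambda job, result: None

    def execute(self):
        return 'OK'

    def run(self):
        self.whenStarted(self)           # the job is passed to the start hook
        result = self.execute()
        self.whenFinished(self, result)  # job and result go to the finish hook

events = []
job = Job()
job.whenStarted = lambda j: events.append(('started', j))
job.whenFinished = lambda j, result: events.append(('finished', result))
job.run()
# events now holds ('started', job) followed by ('finished', 'OK')
```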
@@ -216,6 +226,9 @@ Metadata sources
 - path, filename

 Implementation and documentation: see loops/agent/crawl/filesystem.py
 and .../filesystem.txt.

+E-Mail-Clients
+--------------


@@ -12,7 +12,7 @@ loops.agent.crawl.filesystem - The Filesystem Crawler
   >>> from loops.agent.crawl.filesystem import CrawlingJob
   >>> agent = Agent()
-  >>> scheduler = agent.scheduler
+  >>> startTime = scheduler = agent.scheduler

 We create a crawling job that should scan the data subdirectory
 of the testing directory in the loops.agent package.
@@ -32,7 +32,7 @@ transferred.
 We are now ready to schedule the job and let the reactor execute it.

-  >>> scheduler.schedule(crawlJob)
+  >>> startTime = scheduler.schedule(crawlJob)
   >>> tester.iterate()
   Metadata: {'path': '...data...file1.txt'}


@@ -68,9 +68,9 @@ class IScheduledJob(Interface):
             'rescheduled. Do not repeat if 0.')
     successors = Attribute('Jobs to execute immediately after this '
             'one has been finished.')
-    whenStarted = Attribute('A callable with no arguments that will '
+    whenStarted = Attribute('A callable with one argument (the job) that will '
             'be called when the job has started.')
-    whenfinished = Attribute('A callable with one argument, the '
+    whenfinished = Attribute('A callable with two arguments, the job and the '
             'result of running the job, that will be called when '
             'the job has finished.')


@@ -48,6 +48,7 @@ class Scheduler(object):
             startTime += 1
         self.queue[startTime] = job
         reactor.callLater(startTime-int(time()), job.run)
+        return startTime

     def getJobsToExecute(startTime=None):
         return [j for j in self.queue.values() if (startTime or 0) <= j.startTime]
@@ -59,8 +60,8 @@ class Job(object):

     scheduler = None

-    whenStarted = lambda self: None
-    whenFinished = lambda self, result: None
+    whenStarted = lambda self, job: None
+    whenFinished = lambda self, job, result: None

     def __init__(self, **params):
         self.startTime = 0
@@ -74,12 +75,12 @@ class Job(object):
         return succeed('OK')

     def reschedule(self, startTime):
-        self.scheduler.schedule(self.copy(), startTime)
+        return self.scheduler.schedule(self.copy(), startTime)

     def run(self):
         d = self.execute()
         d.addCallback(self.finishRun)
-        self.whenStarted()
+        self.whenStarted(self)
         # TODO: logging

     def finishRun(self, result):
@@ -90,7 +91,7 @@ class Job(object):
             job.params['result'] = result
             #job.run()
             self.scheduler.schedule(job)
-        self.whenFinished(result)
+        self.whenFinished(self, result)
         # TODO: logging
        # reschedule if necessary
        if self.repeat:
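Taken together, `finishRun` hands the result to each successor job and then calls `whenFinished` with both the job and the result. A hypothetical synchronous sketch (no Deferreds; a plain list stands in for the scheduler queue, and the `crawl`/`transport` names just mirror the doctest above):

```python
class Job:
    def __init__(self, **params):
        self.params = params
        self.successors = []
        self.whenStarted = lambda job: None
        self.whenFinished = lambda job, result: None
        self.scheduled = []  # stand-in for scheduler.schedule()

    def execute(self):
        # the real method returns a Twisted Deferred; here a plain value
        return 'OK'

    def run(self):
        self.whenStarted(self)
        self.finishRun(self.execute())

    def finishRun(self, result):
        for job in self.successors:
            job.params['result'] = result  # successor receives the result
            self.scheduled.append(job)     # would be scheduler.schedule(job)
        self.whenFinished(self, result)    # new signature: job plus result

crawl = Job()
transport = Job()
crawl.successors.append(transport)
seen = []
crawl.whenFinished = lambda job, result: seen.append((job, result))
crawl.run()
assert transport.params['result'] == 'OK'  # successor saw the crawl result
```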