Question: How can one self-re-enqueue a job when using _job_id? #416
Comments
Hi @joshwilson-dbx, I think we have somewhat similar use cases (one particular job that can be triggered multiple times, only one should exist at any point in time, and the last one is the only one I am interested in). I thought I'd reference my issue here, #394, to vote up this scenario, and also in case it helps to read about the problems I encountered with aborting/deleting.
Thanks for taking a look @gerazenobi. I agree our issues both arise from not being able to manage redundant job behavior well with the current API. I don't know what the solution would be yet, but perhaps we could add additional parameters for it. It doesn't seem like this kind of configuration would belong on the worker side of things, though.

I also wonder if there's a need to formalize the concept of a Queue in the code. Right now it looks like it's just a convention, specified by passing a queue name around between the various job and worker functions.
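To make the "formalize a Queue" idea concrete, here is a purely hypothetical sketch of what a first-class Queue object could look like. Nothing here exists in arq today; the class name, fields, and the dedup flag are all invented for discussion (arq currently just passes a `queue_name` string around):

```python
from dataclasses import dataclass

# Hypothetical sketch only: arq has no Queue class; the fields and the
# per-queue dedup policy below are invented to illustrate the idea.
@dataclass(frozen=True)
class Queue:
    name: str = "arq:queue"
    allow_duplicate_job_ids: bool = True  # hypothetical dedup policy

reports = Queue(name="reports", allow_duplicate_job_ids=False)
print(reports.name)                     # reports
print(reports.allow_duplicate_job_ids)  # False
```

A per-queue policy like this would let the dedup behavior live with the queue definition rather than being repeated at every enqueue call site.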
@joshwilson-dbx In our use case (the linked ticket), we work around it by tracking ourselves (in Redis as well) which is the last job I am actually interested in; subsequent jobs of the same type keep overwriting this value, and whenever we need the result of the job we get the arq job id from this saved value.
I describe an (experimental?) approach to re-enqueueing using …
I have a use case where, once a job completes, I would like it to continuously re-schedule itself for another run at some point in the future. I would also like to ensure that only one instance is queued/running at any given time, so I'm using the `_job_id` parameter when enqueuing. I cannot use the `cron` functionality, as the delay time is somewhat dynamic and not easily translated to cron.

Options that I've explored so far:

- `redis.enqueue_job(..., _job_id=ctx['job_id'])` from within the job itself
- Raising the `Retry` exception from within the job after the work has completed
- `_expires` and `max_tries` settings
- `keep_result=0`, plus enqueuing a second job (with a different name) with a small delay that in turn re-enqueues the original job again
- `keep_result=0`, plus re-enqueuing in the `after_job_end` function, to be sure the job and result keys are no longer present so the re-enqueue can occur

Is there a better way to do this?