Dear All,
As we know, the default timeout for Task Queues is 90 seconds. How do we extend that timeout from the VAF?
Thanks.
The 90s timeout is down to the lifetime of a transaction. Realistically, you should be aiming for significantly less than 90s.
You should consider a different transaction mode. A hybrid transaction mode gives you longer to run your code, plus a section of transactional safety.
Currently I'm pulling data from a third-party API that generates hundreds of records, which can take more than two minutes to retrieve and create as new objects in M-Files. The hybrid transaction mode only allows 90 seconds as well. Unless there is a way to extend it?
The hybrid transaction mode does not only allow 90s. The "Commit" section must complete within 90s, but the part outside of the transaction can take as long as you need. This is how the PDF Processor works, if I recall correctly.
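For reference, a hybrid processor is shaped roughly like this. This is only a sketch - TaskQueueId, TaskTypeId, TaskDirective and RetrieveDataFromWebService are placeholders for your own application's names:

// Rough sketch of a hybrid task processor; the queue/task identifiers and
// the retrieval helper are hypothetical placeholders.
[TaskProcessor(TaskQueueId, TaskTypeId, TransactionMode = TransactionMode.Hybrid)]
public void ProcessItem(ITaskProcessingJob<TaskDirective> job)
{
    // This part runs outside of a transaction, so slow work (e.g. calling
    // a web service) can happen here without hitting the 90s limit.
    var data = this.RetrieveDataFromWebService(); // hypothetical helper

    // The "Commit" section runs inside a transaction and must complete
    // within 90s.
    job.Commit((transactionalVault) =>
    {
        // Use the transactional vault reference to write the results.
    });
}

Only the vault writes inside the Commit call need to fit within the 90s window; the slow retrieval happens before it.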
However, in this instance I wouldn't use hybrid either. For what you've described I would use an unsafe transaction processor. This has no timeout limit and no transactional safety.
Within this you can use the transaction runner to create each object - or batch of objects - within a transaction; each single transaction will still be limited to 90s, but your overall process can take as long as you need.
We're actually lacking a page on the Developer Portal on this, but the idea would be:
// Not shown: use the extensions library to make this recur on a frequency.
[TaskProcessor(TaskQueueId, TaskTypeId, TransactionMode = TransactionMode.Unsafe)]
[ShowOnDashboard("Import data from web service", ShowRunCommand = true)]
public void ImportData(ITaskProcessingJob<TaskDirective> job)
{
    // This section of code runs outside of a transaction so can take hours if you really want.
    // NOTE: This needs lots of logging calls.

    // The transaction runner can be used to create transactions within
    // an unsafe processor.
    var transactionRunner = this.GetTransactionRunner();

    // Get the data to import.
    job.Update(0, "Retrieving items to import");
    List<object> itemsToImport = new List<object>();
    // TODO: Retrieve items...
    job.Update(0, $"{itemsToImport.Count} items to be imported.");

    // Get a sensible batch size from config, or default to 10.
    int batchSize = this.Configuration.BatchSize ?? 10;
    batchSize = batchSize > 0 && batchSize < 50 ? batchSize : 10;

    // Iterate over the items that need importing.
    var offset = 0;
    while (offset < itemsToImport.Count)
    {
        var batch = itemsToImport.Skip(offset).Take(batchSize).ToList();
        if (batch.Count == 0)
            break;
        var percentComplete = (offset * 100) / itemsToImport.Count;

        // Create the batch.
        try
        {
            transactionRunner.Run((transactionalVault) =>
            {
                job.Update
                (
                    percentComplete: percentComplete,
                    details: $"Starting import of {batch.Count} items"
                );

                // TODO:
                // Use the transactional vault reference to import the batch.
                // Each specific batch is imported within a transaction.
                // Each EXECUTION of this lambda is limited to 90s.
                var i = 0;
                foreach (var o in batch)
                {
                    // TODO: Create the object.
                    transactionalVault.ObjectOperations.CreateObject(...);

                    // Ensure we call job.Update.
                    job.Update
                    (
                        percentComplete: ((offset + i) * 100) / itemsToImport.Count,
                        details: "Imported item"
                    );
                    i++;
                }
                job.Update
                (
                    percentComplete: percentComplete,
                    details: $"Successfully imported {batch.Count} items"
                );
            });
        }
        catch (Exception e)
        {
            this.Logger.Error(e, "Could not import batch.");
        }
        finally
        {
            offset += batchSize;
            job.Update
            (
                percentComplete: percentComplete,
                details: $"Imported {offset} items of {itemsToImport.Count}"
            );
        }
    }

    // Awesome; done.
    job.Update(100, $"Imported {itemsToImport.Count} items");
}
Edit: there is actually an example of doing this on the Developer Portal, but I think we need a separate page to go into more detail.
In this example the execution of the processor method has no timeout at all, provided you keep calling job.Update so that the system knows the task is still being processed. The only timeout is where the comment indicates: within the transactionRunner.Run() call. Each of these calls - so each batch - runs within a transaction, and can therefore take up to 90s.
As I said, though, your design should aim to make each transaction significantly shorter than 90s. That's the idea of the batches: they keep the code that runs within a transaction short, while still giving you some performance improvement versus creating the objects one-by-one.
To be clear, in the example above the following can occur: the overall run takes 510s - over 8 minutes - but each transaction (each batch) takes only 10s. So nothing times out.
The key points are:
- The processor method itself has no timeout, as long as job.Update is called regularly.
- Each transactionRunner.Run() call runs within its own transaction and is limited to 90s.
- Batching keeps each transaction short, while still performing better than creating objects one-by-one.
Perhaps you're re-using the property values collection in your loop, so the "class" property (and others?) gets added twice?
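To illustrate what I mean, here's a sketch of that failure mode using the COM API's PropertyValues class; the identifiers (classPropertyValue, objectTypeId, and so on) are hypothetical stand-ins for whatever your real code uses:

using System.Collections.Generic;
using MFilesAPI;

// Buggy pattern: the same PropertyValues instance is shared across
// iterations, so everything added in earlier passes is still there.
private void ImportItemsBuggy(Vault vault, IEnumerable<object> items, PropertyValue classPropertyValue, int objectTypeId)
{
    var propertyValues = new PropertyValues();
    foreach (var item in items)
    {
        // The class property is appended again on every pass!
        propertyValues.Add(-1, classPropertyValue);
        vault.ObjectOperations.CreateNewObject(objectTypeId, propertyValues);
    }
}

// Safer pattern: build a fresh collection for each object.
private void ImportItemsFixed(Vault vault, IEnumerable<object> items, PropertyValue classPropertyValue, int objectTypeId)
{
    foreach (var item in items)
    {
        var propertyValues = new PropertyValues();
        propertyValues.Add(-1, classPropertyValue);
        vault.ObjectOperations.CreateNewObject(objectTypeId, propertyValues);
    }
}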
I don't know; you'd need to debug. Alternatively, good logging (e.g. logging the items you add to the collection, the size of the collection, and when the loops start and end) will give you a good understanding of what is happening.
I don't think it's directly related to the structure we've been discussing above.