VAF Workflow State - Not enough quota is available to process this command

I saw a similar post you responded to regarding the error "Not enough quota is available to process this command" here: Not enough quota is available to process this command.

And I'm currently experiencing the same issue. I notice that in the linked question a reference was made to transitioning to VAF; however, in my case the error occurs with a VAF workflow state operation. The VAF calls a REST API from C# in the background, but on the client side I'm receiving the same "not enough quota" errors when a file is in this state. Currently the REST API is queried using HttpWebRequest as a series of operations, and these run synchronously in .NET.

Can you confirm whether there are any known issues caused by a workflow state with a long running time? The reason I ask is that it sometimes takes 5 minutes or more to execute the script, and this is when I'm noticing the "not enough quota" errors. I'm also unsure what "quota" specifically refers to: memory, disk space, the swap file, or some other parameter?

  • Oh, and as an aside: you should not design a system that has a synchronous script (event handler, workflow state action, etc.) that takes that long.  Synchronous scripts should take a few seconds at a maximum.  Not only do synchronous scripts have a negative effect on the user experience, they can also limit system scalability.

    If you need to run something that takes longer, consider switching to an asynchronous process (a task processor) running in unsafe mode, using the transaction runner to opt in to transactions only for the specific sections of code that write to the M-Files vault; a rough sketch of that shape is at the end of this reply.

    That said: I would be very interested in that stack trace...
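
    To illustrate the shape of that (a minimal sketch only; the queue ID, task type, and configuration class are placeholders, and this assumes the task-queue attributes and transaction-runner helper from the VAF Extensions library):

    ```csharp
    using MFiles.VAF.AppTasks;
    using MFiles.VAF.Common;
    using MFiles.VAF.Extensions;
    using MFilesAPI;

    public class Configuration { }   // empty placeholder configuration

    public class VaultApplication
        : ConfigurableVaultApplicationBase<Configuration>
    {
        [TaskQueue]
        public const string QueueId = "MyApp.RemoteSystem";   // placeholder queue ID
        public const string TaskTypeSend = "Send";            // placeholder task type

        // "Unsafe" = no automatic transaction around the whole job.
        [TaskProcessor(QueueId, TaskTypeSend, TransactionMode = TransactionMode.Unsafe)]
        public void ProcessSend(ITaskProcessingJob<ObjIDTaskDirective> job)
        {
            // The long-running work (e.g. the REST calls) happens here,
            // outside of any vault transaction.

            // Opt in to a transaction only for the section that writes to the vault.
            this.GetTransactionRunner().Run(transactionalVault =>
            {
                // ...load the object and update it here...
            });
        }
    }
    ```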

  • Thanks Craig, all valid points! I'll try to recreate that error and get you the stack trace. This is actually only reading from M-Files, with no writes, and the normal execution time is only a few seconds; the issue occurred when the REST API was responding very slowly and therefore each step was taking much longer.

     Will get back to you asap 

  • Whilst there are some negatives (e.g. it doesn't work if the user needs to see the response from the remote system, or if a failure in the code should roll back the current transaction), migrating to a task queue could be a real benefit in these sorts of situations, as it natively supports things like retries.

  • Thanks Craig, are you suggesting using the queue setup described here https://developer.m-files.com/Frameworks/Vault-Application-Framework/Task-Queues/ ?

    Essentially, each step where I use the REST API feeds into the subsequent step. Should I still be running each step as synchronous REST calls, or is this task queue supposed to make use of async REST API calls as well?

    Apologies for the onslaught of queries; I'm new to the REST API setup and unsure of what is required to release M-Files back to normal processing when the function is running as a workflow state. Critically, the way it works now I'm running this on a transition, and the transition isn't allowed to proceed if an error occurs. I'm not sure if it would be better to run it as a state action that triggers a transition on success, so that the client isn't left waiting for the operation to complete (although that makes it harder to tell if there's an error).

  • Yes, this is what I was talking about.  I should be a little clearer; let me try to explain.

    Within M-Files there are two ways in which your code can be run: synchronously or asynchronously:

    • Synchronous: the code is run when an event fires in M-Files (e.g. an object is created) and is executed within the context of that event's transaction.  All VBScript runs this way, for example, as do all the standard entry points (e.g. [EventHandler]) in the VAF.  One benefit of synchronous code is that you can throw an exception and M-Files will roll back the transaction and show the user the error message; great if you want to stop the user moving forward if something isn't right (a small sketch of this is after this list).

    • Asynchronous: restricted to vault applications (at a practical level at least) and revolves around the concept of task queues and processors.  Items are added to a queue and, at the requested time, the appropriate processor picks up the item and processes it.  The processing itself may or may not occur within a transaction, depending upon configuration, but the processing is independent from anything that happened before or afterwards.
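
    As a small sketch of the synchronous case (the event type, validation, and message here are just examples, not the code from this thread):

    ```csharp
    using System;
    using MFiles.VAF;
    using MFiles.VAF.Common;
    using MFilesAPI;

    public class VaultApplication : VaultApplicationBase
    {
        // Runs synchronously, inside the event's transaction.
        [EventHandler(MFEventHandlerType.MFEventHandlerBeforeCheckInChanges)]
        public void ValidateBeforeCheckIn(EventHandlerEnvironment env)
        {
            // Hypothetical validation: throwing here rolls back the transaction
            // and shows the message to the user.
            if (string.IsNullOrWhiteSpace(env.ObjVerEx.Title))
                throw new InvalidOperationException(
                    "The object must have a name before it can be saved.");
        }
    }
    ```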

    I also want to be clear that when I say "asynchronous" above it is specifically that the task processor runs at some point in the future, and is not directly related to the "async/await" keywords in C#.  The task processor method cannot actually be an async method; if you want to use the actual async/await approach in your task processor (e.g. if you are using an HttpClient that supports async methods) then say and I can expand on it; a rough sketch of one common pattern is below.
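
    As a rough sketch only (the endpoint, queue ID, and task type are placeholders; the idea is simply to block on the completed task inside the otherwise-synchronous processor method):

    ```csharp
    using System.Net.Http;
    using MFiles.VAF.AppTasks;
    using MFiles.VAF.Extensions;

    public class Configuration { }   // empty placeholder configuration

    public class VaultApplication
        : ConfigurableVaultApplicationBase<Configuration>
    {
        [TaskQueue]
        public const string QueueId = "MyApp.RemoteSystem";   // placeholder
        public const string TaskTypeSend = "Send";            // placeholder

        // A single shared HttpClient for the vault application.
        private static readonly HttpClient client = new HttpClient();

        [TaskProcessor(QueueId, TaskTypeSend, TransactionMode = TransactionMode.Unsafe)]
        public void ProcessSend(ITaskProcessingJob<ObjIDTaskDirective> job)
        {
            // The processor method itself cannot be declared async,
            // so block on the async HttpClient call.
            var response = client
                .GetAsync("https://remote.example.com/api/documents")
                .GetAwaiter()
                .GetResult();
            response.EnsureSuccessStatusCode();
        }
    }
    ```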

    The synchronous approach is what you have right now.  It means that you can easily throw if the item isn't added to the remote system properly, but it means the availability of the M-Files process is directly tied to the remote system; if the remote system is unavailable or slow then so is M-Files.

    If you were to change to the asynchronous approach then you would do the following:

    • Create the event handler or state action that runs when the object should be sent.  Instead of actually sending it, though, it adds an item (a "directive") into the task queue including information on the specific object that needs to be sent.
      When the object hits that workflow state the code is very small: a task is added to the queue and the object is then saved into the state (a sketch of this is after this list).  From the user's perspective it "stops" here and waits.

    • Create an unsafe task processor.  This method is provided with the job/directive, uses that to load the associated object, and sends the object to the remote service.  If the remote service call fails then you would update the object and move it to an error state.  If the remote service call succeeds then you would update the object and move it to a success state.
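
    A rough sketch of those two pieces (the workflow state alias, queue ID, task type, and class names below are placeholders, and this assumes the task-queue helpers from the VAF Extensions library):

    ```csharp
    using System;
    using MFiles.VAF.AppTasks;
    using MFiles.VAF.Common;
    using MFiles.VAF.Extensions;
    using MFilesAPI;

    public class Configuration { }   // empty placeholder configuration

    public class VaultApplication
        : ConfigurableVaultApplicationBase<Configuration>
    {
        [TaskQueue]
        public const string QueueId = "MyApp.RemoteSystem";   // placeholder
        public const string TaskTypeSend = "Send";            // placeholder

        // 1. The state action: just queue the work and return immediately.
        [StateAction("WF.SendToRemote.State.Sending")]        // placeholder state alias
        public void QueueSend(StateEnvironment env)
        {
            this.TaskManager.AddTask(
                env.Vault,
                QueueId,
                TaskTypeSend,
                new ObjIDTaskDirective
                {
                    ObjectID = env.ObjVer.ID,
                    ObjectTypeID = env.ObjVer.Type
                });
        }

        // 2. The unsafe task processor: does the slow work in the background.
        [TaskProcessor(QueueId, TaskTypeSend, TransactionMode = TransactionMode.Unsafe)]
        public void ProcessSend(ITaskProcessingJob<ObjIDTaskDirective> job)
        {
            var objID = new ObjID();
            objID.SetIDs(job.Directive.ObjectTypeID, job.Directive.ObjectID);
            try
            {
                // ...call the remote service here...

                // Success: write back to the vault inside a transaction and
                // move the object to the "succeeded" state.
                this.GetTransactionRunner().Run(vault =>
                {
                    var objVerEx = new ObjVerEx(
                        vault, vault.ObjectOperations.GetLatestObjVerEx(objID, true));
                    // ...set the workflow state / properties here...
                });
            }
            catch (Exception)
            {
                // Failure: use the same pattern to move the object to an "error"
                // state instead, then rethrow (or let the queue retry).
                throw;
            }
        }
    }
    ```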

    Designing that asynchronous approach can be awkward if you have lots of operations that need to happen in sequence, all with various error trapping.  It may take some time to get the approach right, and it may be that you need some small tweaks to your workflow to get it to work properly.  I would probably recommend reaching out to your implementation team, customer success manager, or partner, to get some support and to make this go as smoothly as possible.

  • Thanks Craig - with the API I'm using I can split the work up into 4 discrete steps:

    1. create
    2. upload
    3. share
    4. publish

    I could potentially make a workflow state for each of these, and under each workflow state spawn a directive. However, each step has data (usually a single string) that needs to be available in the subsequent steps - is there a clean way to make the data from one task available to the subsequent tasks? I'm guessing the best approach would probably be to just create additional M-Files properties for the object and update those with the relevant data?

  • You could set properties on the object, yes.

    You could also create a custom Directive type (or extend the ObjIDTaskDirective from the Extensions library) and set properties on it to contain the data you need.  You would simply set those properties when you create the task, the system would serialise them into the queue, then read them from the directive when the task is run.
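
    For illustration, a custom directive might look something like this (the class, property, queue, and task-type names are placeholders; this assumes the ObjIDTaskDirective from the VAF Extensions library):

    ```csharp
    using MFiles.VAF.AppTasks;
    using MFiles.VAF.Common;
    using MFiles.VAF.Extensions;
    using MFilesAPI;

    // Hypothetical directive carrying the data produced by earlier steps.
    public class PublishTaskDirective : ObjIDTaskDirective
    {
        public string RemoteDocumentId { get; set; }
        public string ShareUrl { get; set; }
    }

    public class Configuration { }   // empty placeholder configuration

    public class VaultApplication
        : ConfigurableVaultApplicationBase<Configuration>
    {
        [TaskQueue]
        public const string QueueId = "MyApp.RemoteSystem";    // placeholder
        public const string TaskTypePublish = "Publish";       // placeholder

        // When queuing the "publish" step, set the data on the directive;
        // it is serialised into the queue along with the task.
        public void QueuePublish(Vault vault, ObjVerEx objVerEx,
            string remoteDocumentId, string shareUrl)
        {
            this.TaskManager.AddTask(vault, QueueId, TaskTypePublish,
                new PublishTaskDirective
                {
                    ObjectID = objVerEx.ID,
                    ObjectTypeID = objVerEx.Type,
                    RemoteDocumentId = remoteDocumentId,
                    ShareUrl = shareUrl
                });
        }

        // The processor for that step reads the data straight back off the directive.
        [TaskProcessor(QueueId, TaskTypePublish, TransactionMode = TransactionMode.Unsafe)]
        public void Publish(ITaskProcessingJob<PublishTaskDirective> job)
        {
            var remoteId = job.Directive.RemoteDocumentId;
            var shareUrl = job.Directive.ShareUrl;
            // ...call the remote service's publish endpoint here...
        }
    }
    ```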
