Subscription mode. Catastrophic failure error -> blocked Subscribe/Unsubscribe
They are for our internal use.
Best regards
Andriy wrote: Hello,
QuickOPC version 5.59.0-rev13
EnableNativeClient: True
Calls to SubscribeMultipleItems and UnsubscribeMultipleItems are still blocked.
I made two memory dumps of my program and filtered out all threads not related to OPC.
The second memory dump was made an hour after the first.
But it remains unknown what is causing it.
In two types of cases, the threads are waiting on a critical section that does not seem to be entered anywhere else. A hypothesis for this is that the thread that had entered the critical section might have terminated unexpectedly earlier, in which case, according to Microsoft documentation, the critical section can remain in a locked state.
In the third type of case, the call waits on a critical section entered by an internal thread which, among other things, takes care of "garbage collection", including disconnection from servers that are no longer needed (no items are subscribed on them anymore). The call can block until the disconnection is finished; that is not an ideal design, but that is how it currently works. It looks like there is a bug: even though the server might have been disconnected, the garbage collector does not receive the corresponding notification. According to the call stack the garbage collector is waiting, but I could not find the thread that actually "works" on the disconnection. This is something I tried to at least partially address in the earlier "fix", but it appears the problem is still there. It is also similar to the first two types of cases: it looks like some threads are "gone", leaving the synchronization objects in an undefined state.
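Purely as an illustration of this kind of failure mode (a hypothetical C# sketch using a .NET monitor, not the library's actual internal code, which uses native Win32 critical sections): if the thread that owns a lock terminates without releasing it, any later attempt to acquire that lock blocks indefinitely.

using System;
using System.Threading;

class OrphanedLockIllustration
{
    static readonly object _sync = new object();

    static void Main()
    {
        // A worker acquires the lock and then terminates without releasing it,
        // e.g. because it ended on an unexpected code path.
        var worker = new Thread(() =>
        {
            Monitor.Enter(_sync); // lock acquired...
            // ...but the thread ends here without a matching Monitor.Exit.
        });
        worker.Start();
        worker.Join();

        Console.WriteLine("Trying to acquire the lock owned by the terminated thread...");

        // Blocks forever: the lock was never released, analogous to a critical
        // section left in a locked state by a thread that terminated while owning it.
        lock (_sync)
        {
            Console.WriteLine("Never reached.");
        }
    }
}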
I am sad to report that further progress on this is probably not possible; certainly not without having a reproducible scenario on our computer.
I can only suggest some workarounds to reduce the likelihood that this happens:
- Increase EasyDAEngineParameters.GarbageCollectionPeriod (default 2000), in static EasyDAClient.SharedParameters.EngineParameters
- Increase EasyDAEngineParameters.AutoAdjustmentPeriod (default 500)
- Increase EasyDAEngineParameters.MaxTopicAge (default 1000)
- Increase EasyDAEngineParameters.MaxClientAge (default 5000)
- Do not unnecessarily unsubscribe
Depending on the application, it might be possible to increase these values e.g. 10 or 100 times; a sketch of such an adjustment follows below.
In addition, add some higher-level health checking to the application, and restart it if necessary.
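For illustration, a minimal C# sketch of such an adjustment (the OpcLabs.EasyOpc.DataAccess namespace is an assumption, as is the ability to set these properties at application startup; the property names and default values are those listed above):

using OpcLabs.EasyOpc.DataAccess;

static class EngineParameterTuning
{
    // Call once at application startup, before the first OPC operation,
    // to raise the engine timing parameters roughly 10x above their defaults.
    public static void Apply()
    {
        var engineParameters = EasyDAClient.SharedParameters.EngineParameters;

        engineParameters.GarbageCollectionPeriod = 20000; // default 2000
        engineParameters.AutoAdjustmentPeriod = 5000;     // default 500
        engineParameters.MaxTopicAge = 10000;             // default 1000
        engineParameters.MaxClientAge = 50000;            // default 5000
    }
}

Combined with unsubscribing less often, this should reduce how frequently the internal garbage collector disconnects from servers, and therefore how often the suspected race can occur.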
Regards
Do you have any updates?
Regards
In the attached archive you can find two snapshots of the threads' call stacks. I made the second memory dump 30 minutes after the first.
The call stacks and thread IDs are the same for the threads that are subscribing to/unsubscribing from OPC items and are related to OPC DA; that is, they are stuck, whereas the OPC XML-DA call stacks are changing.
Regards
There are OPC-DA related threads in all the stacks, that is true. But none of them is a user thread.
If you were subscribed to OPC-DA items, and the notifications had stopped coming "forever", then yes, it would probably be possible that the call stacks represent such a state. Was that the case?
But it is impossible that the call stacks represent a state in which, for example, a call to ReadMultipleItems on OPC-DA items was hung, because there is no such thread in them.
Re "A lot of them are stuck somewhere internally, subscribing to/unsubscribing from OPC tags.": I do not see anything like that in the call stacks. What is the basis of your claim?
Re "Can you send me debug symbols if you cannot fix it?": No.
Re "Do you have an option to provide access to your source code? : No.
Regards
The problem is related to OPC DA, not to OPC XML-DA!
I am completely sure that the OPC-DA related threads are blocked. I took two application memory dumps and compared the threads' call stacks and thread IDs. A lot of them are stuck somewhere internally, subscribing to/unsubscribing from OPC tags.
I provided the threads' call stacks 3 weeks ago and there has been no progress.
Can you send me debug symbols if you cannot fix it?
Do you have an option to provide access to your source code? How much does it cost?
Regards
The user threads (your threads) are all related to OPC XML-DA. There are also threads for OPC DA, but they are internal threads, so even if they were blocked, you could not have perceived them as blocked. So I assume your problem was that the methods you called (ReadMultipleItems or similar) never returned, or did not return in time.
This is therefore a different situation from what has been addressed earlier in this topic thread.
Because the current problem now seems to be related to OPC XML-DA, I have forwarded my preliminary analysis to the programmer who developed that part (I cannot determine more myself).
Regards