Archive for the “rendezvous” Category

Library not found: tibrvnativesd


I have a new installation, and when I tried to run a newly deployed BW application, the following error occurred:

java.lang.UnsatisfiedLinkError: Library not found: tibrvnativesd
at com.tibco.tibrv.Tibrv.loadNativeLibrary(Tibrv.java:388)
at com.tibco.tibrv.Tibrv.<init>(Tibrv.java:79)
at com.tibco.sdk.m.byte(MAppImpl.java:700)
at com.tibco.sdk.m.v(MAppImpl.java:478)
at com.tibco.sdk.m.<init>(MAppImpl.java:95)
at com.tibco.sdk.a.<init>(MThinAppImpl.java:21)
at com.tibco.sdk.MApp.<init>(MApp.java:149)
at com.tibco.share.util.TraceApp.<init>(Unknown Source)
at com.tibco.share.util.Trace.if(Unknown Source)
at com.tibco.share.util.Trace.a(Unknown Source)
at com.tibco.share.util.Trace.<init>(Unknown Source)
at com.tibco.pe.core.JobPoolCreator.createTrace(Unknown Source)
at com.tibco.pe.PEMain.a(Unknown Source)
at com.tibco.pe.PEMain.do(Unknown Source)
at com.tibco.pe.PEMain.a(Unknown Source)
at com.tibco.pe.PEMain.<init>(Unknown Source)
at com.tibco.pe.PEMain.main(Unknown Source)
Caused by: java.lang.UnsatisfiedLinkError: /export/home/tibco/tibrv/8.3/lib/libtibrvnativesd.so: ld.so.1: bwengine: fatal: /export/home/tibco/tibrv/8.3/lib/libtibrvnativesd.so: wrong ELF class: ELFCLASS32 (Possible cause: architecture word width mismatch)
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1803)
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1728)
at java.lang.Runtime.loadLibrary0(Runtime.java:823)
at java.lang.System.loadLibrary(System.java:1028)
at com.tibco.tibrv.Tibrv.loadNativeLibrary(Tibrv.java:385)
... 16 more

Thanks to TIBCO Support, who gave me the clue that helped:

Could you please replace the %RV_HOME%/lib with %RV_HOME%/lib/tibrvj.jar in the property tibco.env.STD_EXT_CP in bwengine.tra, redeploy your application and test if the issue gets resolved?
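For reference, here is a sketch of what that change looks like in bwengine.tra (the %RV_HOME% path is illustrative). The ELFCLASS32 message itself means the 64-bit bwengine JVM tried to load a 32-bit libtibrvnativesd.so; the property change below is simply what support suggested:

```
# bwengine.tra -- before: the whole Rendezvous lib directory is on the classpath
tibco.env.STD_EXT_CP=%RV_HOME%/lib

# after (as suggested by support): reference only the Java archive
tibco.env.STD_EXT_CP=%RV_HOME%/lib/tibrvj.jar
```

After editing the .tra file, redeploy the application so the engine picks up the new property.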

Saving the contents of TIBCO Rendezvous binary messages


A situation may arise when you need to view or save the contents of TIBCO Rendezvous ActiveEnterprise binary messages. If you just capture them with tibrvlisten, the messages appear like this:

message={_data_=[521 opaque bytes]}

I know that TIBCO Support experts have tools to display and save these opaque bytes of Rendezvous AE messages, but my quick solution was to create a small BusinessWorks process that does the capture-and-store job. There are two activities: Rendezvous Subscriber and Write File. The Rendezvous Subscriber listens on the appropriate subject and has only one output complex element, representing the message body. Write File has the “write as binary” option enabled, and the Rendezvous Subscriber’s output body is the input for the file’s binary content. There is a formula error, but in this case it can be ignored. The file name is built from the Process ID, so each message is saved in a separate file.

When this process is running, binary files appear in the specified folder, one file per message. You can open them in your favorite binary editor/viewer and have fun!
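The same capture-and-store idea can be sketched outside BusinessWorks. Below is a minimal Python model (not tibrvj code): the subscriber side is stubbed with hard-coded payloads standing in for the opaque _data_ bytes, and each body goes to its own binary file, just as the Write File activity does:

```python
import os
import tempfile

def save_message(body: bytes, out_dir: str, seq: int) -> str:
    """Write one message body to its own file, like the 'write as binary' step."""
    path = os.path.join(out_dir, "msg_%06d.bin" % seq)
    with open(path, "wb") as f:
        f.write(body)
    return path

# Stub for the Rendezvous Subscriber: in the real process these bytes
# arrive as the opaque _data_ field of the AE message.
payloads = [b"\x00\x01\x02opaque-one", b"\xffopaque-two"]
out_dir = tempfile.mkdtemp()
saved = [save_message(p, out_dir, i) for i, p in enumerate(payloads)]
```

In the BW process the role of the sequence number is played by the Process ID, which is unique per job.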

TIBCO Rendezvous undocumented profiling feature


If you experience out-of-memory issues, or just want to debug function invocations or check the memory consumed by Rendezvous daemon internals, you can enable an undocumented profiling feature. Go to the http://rvd_host:7580/profiling URL, select the desired check-boxes on the page and press submit.

The daemon’s profiling data goes to its log. You can view it on screen, but it is better to start rvd or rvrd with the log parameters -logfile c:\logs\rvd.log -log-max-size 1024 -log-max-rotations 5 and look at the log files.

TIBCO Rendezvous and MS NLB Cluster


TIBCO Rendezvous is multicast-based messaging. Network Load Balancing (NLB) is a way to configure a pool of machines so they take turns responding to requests. It’s commonly implemented in server farms: identically configured machines that spread out the load for a web site or work as a terminal services cluster.

The task was to cross these two things: a Rendezvous-based application on servers in an MS NLB terminal services cluster. I ran some tests with different settings, but the results were unsatisfactory. Depending on the “Filtering mode”, I received RV messages on only one server, or the first message on one server, the next on the second, and so on. NLB handles multicast packets even better than I would like! But users of the application work on every server and need all messages delivered to all users on all servers.

Here is what happens with every frame the Network Load Balancing driver (wlbs.sys) receives:

  1. on every node wlbs.sys checks whether the received packet is sent to a virtual IP
  2. on every node wlbs.sys checks the source IP and port
  3. one node decides to accept the packet and passes it up to the TCP/IP driver
  4. all other nodes drop the packet

The issue is that there is no special treatment for multicast IPs. NLB driver treats them like every other IP that is not the dedicated IP of that machine.
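The effect is easy to model. The sketch below is a toy stand-in for the filtering decision (the real wlbs.sys hash is different, and the fields it considers depend on the affinity setting): every node computes the same hash of the packet source, and only the node whose index matches accepts the frame. Since all multicast RV packets from one publisher share the same source, they all land on the same single node:

```python
import zlib

NODE_COUNT = 3  # size of the NLB cluster in this toy model

def accepting_node(src_ip: str, src_port: int, nodes: int = NODE_COUNT) -> int:
    """Every node runs the same deterministic hash on the packet source;
    the node whose index matches accepts, all the others drop the frame."""
    return zlib.crc32(("%s:%d" % (src_ip, src_port)).encode()) % nodes

# Four multicast frames from the same RV publisher...
frames = [("192.168.1.10", 7500)] * 4
owners = [accepting_node(ip, port) for ip, port in frames]
# ...are all accepted by one and the same node, so only the users
# on that server ever see the messages.
```

This matches what I observed: exactly one node accepts each frame, and identical sources always map to the same node, so multicast traffic never reaches the rest of the cluster.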

What are the possible solutions?

  • Receive the IP multicast traffic over a NIC that NLB is not bound to. This means an additional NIC in every server.
  • Use a TCP connection to a remote Rendezvous daemon (rvd). Daemon parameter in the RV transport: -daemon "tcp:remotemachine:7500"
  • Use a local Rendezvous routing daemon (rvrd) instead of rvd. This requires an rvrd on every terminal server and an additional rvrd somewhere in the network.

If you would like to read more, here is the list of clustering and high-availability cluster resources from MS.
