Saturday, March 10, 2012

Jrocket and "[WARN] Use of -Djrockit.optfile is deprecated and discouraged"

Hello Technical Family,


In this post, I am going to discuss JRockit and a warning it throws during WebLogic startup. The warning may not impact the install, but since the option is deprecated it makes sense to fix it. I had to do some hours of research and would not want other people to have to do the same - hence posting here. What I get out of this: pure satisfaction that someone out there will read this post and apply the solution.


The platform is RedHat 5.5 x86_64 with jrockit-jdk1.6.0_29-R28.2.2-4.1.0-linux-x64.


Error:


During WebLogic Server startup, I saw the following warning in the logs:


[WARN] Use of -Djrockit.optfile is deprecated and discouraged.


Solution:


This is what I had to do to get this warning out of the logs and give JRockit the optimization file in the format it wants.


1- I put the following Java options into the WebLogic domain environment file.



-XX:+UnlockDiagnosticVMOptions  -XX:OptFile=${COMMON_COMPONENTS_HOME}/modules/oracle.jrf_11.1.1/jrocket_optfile.txt
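The above could be wired into setDomainEnv.sh, for example (a sketch only - the file name and the JAVA_OPTIONS variable are assumptions; use whichever file and variable your domain actually reads its Java options from):

# Append the OptFile switches to the existing Java options (sketch).
# COMMON_COMPONENTS_HOME must already be set at this point in the script.
JAVA_OPTIONS="${JAVA_OPTIONS} -XX:+UnlockDiagnosticVMOptions -XX:OptFile=${COMMON_COMPONENTS_HOME}/modules/oracle.jrf_11.1.1/jrocket_optfile.txt"
export JAVA_OPTIONS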

2- Recreated jrocket_optfile.txt, converting the entries from the old format to the new directive format:

From:

- oracle/bpel/services/workflow/repos/driver/WFTask
- oracle/bpel/services/workflow/task/model/SystemMessageAttributesTypeImpl.*
- oracle/bpel/services/workflow/task/model/TaskTypeImpl.*
- oracle/bpel/services/workflow/task/model/SystemAttributesTypeImpl.*
- oracle/bpel/services/workflow/repos/Util.*
- oracle/xquery/parser/XPathTokenManager.*



To:

{
match: ["oracle.bpel.services.workflow.repos.driver.WFTask",
"oracle.bpel.services.workflow.task.model.SystemMessageAttributesTypeImpl.",
"oracle.bpel.services.workflow.task.model.TaskTypeImpl.*",
"oracle.bpel.services.workflow.task.model.SystemAttributesTypeImpl.*",
"oracle.bpel.services.workflow.repos.Util.*",
"oracle.xquery.parser.XPathTokenManager.*"],
enable: jit
}

Restart WebLogic and you have just resolved another issue.
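To confirm the warning is gone, you can search whichever file captures the server's stdout (a sketch; the path below assumes a default domain layout with the AdminServer output going to its .out file):

# No output means the deprecation warning is no longer printed at startup.
grep "jrockit.optfile" ${DOMAIN_HOME}/servers/AdminServer/logs/AdminServer.out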

Great feeling - right...

Enjoy life as it comes - there will be fewer good times and more bad times - but hey, I try to find refuge in solving issues, as I have yet to see the good times in the last 11 years!!


OBIEE 11g Install and "error creating asinstance instance1" Error

Hello Tech World,


In this post, I am going to give you some insight into an issue of my own creation. However, it can be helpful. Sometimes you come across issues by accident, and this is one of those. The platform is OS RedHat 5.5 x86_64 with OBIEE 11g (11.1.1.5).


Error:


While installing OBIEE 11g, the following error appears in the oraInventory logs and the configuration fails.


"error creating asinstance instance1"


Cause:


The ORACLE_INSTANCE environment variable was already set to the instance name that the install was trying to create.


Debugging:


In most cases you will not come across this issue, as you probably have not set this environment variable. I read too much into the installation guide and, before even starting the install, set the ORACLE_INSTANCE variable to the instance I wanted to create during the install. During configuration, the installer thinks that you already have this instance created for OPMN and will not proceed.


This could be a bug - the installer could simply check whether the directory or path referenced by the environment variable actually exists, but instead it just pops the error and will not proceed.




Solution:


I had to unset the ORACLE_INSTANCE variable to get the install going. Come to think of it, you could also choose a different instance name than the one set in ORACLE_INSTANCE and get the install going - so trivial, but shocking when the error is so elusive.
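A minimal sketch of the workaround, assuming a bash shell and that the installer is launched from its staging directory (runInstaller is the standard OUI launcher name; adjust if yours differs):

# Make sure ORACLE_INSTANCE is not set in the shell that launches the OBIEE installer.
unset ORACLE_INSTANCE
./runInstaller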


I thought this would help the community and mean less time spent with Oracle Support.


Hopefully this helps others.


Another day of a tech life.

Weblogic on Linux x86-64 libmuxer Library Error (BEA-280101)

In this post, I will be discussing the Java I/O and performance pack errors seen in the admin logs of WebLogic. This is WebLogic 10.3.5 on RedHat Linux 5.5, 64-bit.


Error:



<Jan 5, 2012 8:10:20 AM PST> <Error> <Socket> <BEA-000438> <Unable to load performance pack. Using Java I/O instead. Please ensure that libmuxer library is in :
####<Jan 5, 2012 8:52:29 AM PST>  <Warning> <Store> <BEA-280101> <amghost1.cup.com> <WLS_Portlet> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1325782349291> <BEA-280101> <The persistent file store "_WLS_WLS_Portlet" is forced to use buffered I/O and so may have significantly degraded performance. Either the OS/hardware environment does not support the chosen write policy or the native wlfileio library is missing. See store open log messages for the requested and final write policies. See the documentation on store synchronous write policy configuration for advice.

Cause:

Some native libraries exist in $WL_HOME/server/native/linux/x86_64, and the WebLogic startup looks for them on the library path (LD_LIBRARY_PATH). Since the default library path after a fresh install does not include that directory, the startup looks for the libmuxer library only in the i686 directory. That may be fine if you are running on a 32-bit OS, but on 64-bit the library is in a different place.
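A quick way to confirm where the native library actually lives (a sketch; it assumes WL_HOME is already exported in your shell):

# The 64-bit copy should be here on Linux x86_64:
ls -l ${WL_HOME}/server/native/linux/x86_64/libmuxer.so
# ...and the 32-bit copy here:
ls -l ${WL_HOME}/server/native/linux/i686/libmuxer.so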

Solution:

I had two options: one, copy those libraries into a directory already on the library path; two, add the new path.

I chose the latter and changed LD_LIBRARY_PATH in the environment file as follows:

From:

LD_LIBRARY_PATH="${WL_HOME}/server/native/linux/i686${CLASSPATHSEP}${LD_LIBRARY_PATH}"
export LD_LIBRARY_PATH

TO:

LD_LIBRARY_PATH="${WL_HOME}/server/native/linux/x86_64${CLASSPATHSEP}${WL_HOME}/server/native/linux/i686${CLASSPATHSEP}${LD_LIBRARY_PATH}"
export LD_LIBRARY_PATH

After the above change, the result in the admin log is:


####<Jan 5, 2012 9:21:19 AM PST> <Info> <Socket> <ebiz1.cup.hp.com> <WLS_Portlet> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1325784079574> <BEA-000447> <Native IO Disabled. Using Java IO.>




4110978> <BEA-280008> <Opening the persistent file store "_WLS_WLS_Portlet" for recovery: directory=/home/oracle/SOA/Middleware/user_projects/domains/base_domain/servers/WLS_Portlet/data/store/default requestedWritePolicy="Direct-Write" fileLockingEnabled=true driver="wlfileio3".>


Please note that I also changed the synchronous write policy to Direct-Write from the Admin Console, and that is what is reflected here. I made this change because I noticed it performed better in this particular scenario, even though it is not the general recommendation. I do not follow recommendations blindly - I run my own tests and choose the setting appropriate to my environment. It was better here, but it cannot be used as a rule of thumb: the SAN latency was not that high, and maybe my volume was not that high either. Please do some testing in a DEV environment and choose the best setting for your environment.

Issue Resolved !!

Friday, December 30, 2011

HPUX IA64 JVM issue with Weblogic

I thought it would be interesting to add this as a post for Oracle Fusion Middleware applications. Recently I was asked to help a large healthcare company that was having issues processing the messages generated by OSB for claims. They had a separate WebLogic clustered domain just for handling messages, and OSB was on a separate WebLogic domain.


Issue:

1- The JVM was doing garbage collection every 10 to 15 seconds, spiking the CPU and grinding the system to a halt.

2- The thread dumps revealed that reflection class unloading was happening on Full GC - see below:

       sun.reflect.GeneratedSerializationConstructorAccessor

3- A message size exception occurred, as the maximum message size limit was set to 30MB and the messages were larger than this.

4- Full garbage collection was taking more and more time, and JVM pauses were seen in the thread dumps.


Environment: HP-UX 11i v3 on Itanium 64-bit


Solution:


This is how it got resolved: tuning the Java memory settings, changing how memory pages are mapped, increasing the stack size, and applying some missing libraries and patches.


The issue is that on HP-UX Itanium 64-bit, java runs in 32-bit mode by default unless you ask it to run in 64-bit mode. This can be verified with "java -version", then "java -d64 -version", and then "java -d32 -version". However, if the libraries exist and the kernel patches are installed, WebLogic detects that a 64-bit JVM is available and adds the "-client -d64" flags itself; in some cases it does not do that, and we need to add them ourselves.
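The checks themselves are just these commands (a sketch; the exact version strings printed depend on the HP JDK build installed):

# Default data model - 32-bit on HP-UX Itanium unless told otherwise:
java -version
# Explicitly request the 64-bit VM; this should succeed once the 64-bit
# packages, libraries and kernel patches are in place:
java -d64 -version
# Explicitly request the 32-bit VM:
java -d32 -version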


Also, the JDK that gets installed on HP-UX is not the Oracle JDK itself but the Oracle JDK ported by HP to HP-UX. Hence HP version 6.0.10 is the same as Oracle JDK 1.6 update 24, and the latest Oracle JDK 1.6 update 29 ships in 6.0.13. Oracle JDK 1.7 was just released in December 2011, and HP version 7.0.00 corresponds to Oracle JDK 7.0u1.




Besides all of the above, to get the huge pile of messages processed, Java tuning is needed as well. After careful consideration, below is the solution.


Here are the Steps:


1-    Go to the Web site

     http://hpux.connect.org.uk/
   Use the search button to find the following libraries:
 

libiconv-1.14
libxml2-2.7.8
libxslt-1.1.26
zlib-1.2.5
 2-    Make sure that you have all of the following patches installed (a quick check with swlist is sketched after the list).

             PHSS_37501
             PHCO_38050
             PHSS_38139
             PHKL_40208
             PHKL_35552
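A quick way to check for these from the shell (a sketch; swlist output can vary between releases, and a superseding patch may satisfy the requirement):

# List installed patches and filter for the ones required above.
swlist -l patch | grep -E 'PHSS_37501|PHCO_38050|PHSS_38139|PHKL_40208|PHKL_35552'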
  
 3-    Make sure that the following HP-UX kernel parameters have at least the following values (a kctune check is sketched after the list).

max_thread_proc  1024
maxfiles                256
nkthread               3635
nproc                    2068
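They can be checked with kctune (a sketch for HP-UX 11i v3; on older releases kmtune is the rough equivalent):

# Display the current value of each relevant tunable.
for t in max_thread_proc maxfiles nkthread nproc; do kctune $t; done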

  
 4-    The JVM runs in "client" mode by default. That is okay, but when the volume is large it needs to be changed to server mode. To do this, I added the following lines in startWebLogic.sh (this is if you are running in development mode).



JAVA_VM=-server
export JAVA_VM


5-    Make the changes in setDomainEnv.sh and make sure that these MEM arguments take effect - I mean, if you are setting something in startManagedServer.sh or somewhere else, then it needs to be modified there as well.




If your system is not enabled with NUMA then:


if [ "${SERVER_NAME}" = "AdminServer" ] ; then
        USER_MEM_ARGS="-d64 -Xmpas:on -Xss1024k -Xms4096m -Xmx4096m -XX:MaxPermSize=1024m  -XX:+UseConcMarkSweepGC"

else
        USER_MEM_ARGS="-Xmpas:on –Xss2048k –Xmx8g –Xmn6g –Xingc -XX:+ForceMmapReserved –XX:PermSize=2g –XX:MaxPermSize=2g –XX:+UseConcMarkSweepGC  -XX:ParallelGCThreads=3 –XX:NewRatio=4 –XXCMSTriggerRatio=50 -XX:+UseCompressedOops”

fi

If your system is enabled with NUMA then:



if [ "${SERVER_NAME}" = "AdminServer" ] ; then
        USER_MEM_ARGS="-d64 -Xmpas:on -Xss1024k -Xms4096m -Xmx4096m -XX:MaxPermSize=1024m  -XX:+UseConcMarkSweepGC"

else
        USER_MEM_ARGS="-Xmpas:on –Xss2048k –Xmx8g –Xmn6g –Xingc -XX:+ForceMmapReserved –XX:PermSize=2g –XX:MaxPermSize=2g –XX:+UseConcMarkSweepGC  -XX:ParallelGCThreads=6 – -XX:+UseCompressedOops -XX:+UseNUMA  -XX:-UseLargePages

fi


[The difference is that when NUMA is enabled, memory access latency is not an issue, so I leave the Full GC trigger at its default of around 92% as it is not going to have an impact. Also note the GC thread count - I changed it to 6 because I have an 8-CPU machine with NUMA, so I keep 2 CPUs free, whereas in the non-NUMA case I only had 4 CPUs.]


Here you can add other conditions to give each managed server its own memory settings on an as-needed basis, as sketched below.
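For example, an extra elif branch per managed server could look like this (a sketch only - "WLS_Server1" and the sizes in that branch are hypothetical placeholders, not recommendations):

if [ "${SERVER_NAME}" = "AdminServer" ] ; then
        USER_MEM_ARGS="-d64 -Xmpas:on -Xss1024k -Xms4096m -Xmx4096m -XX:MaxPermSize=1024m -XX:+UseConcMarkSweepGC"
elif [ "${SERVER_NAME}" = "WLS_Server1" ] ; then
        # Hypothetical managed server with its own sizing.
        USER_MEM_ARGS="-d64 -Xmpas:on -Xss2048k -Xmx6g -Xmn4g -XX:MaxPermSize=1024m -XX:+UseConcMarkSweepGC"
else
        USER_MEM_ARGS="-Xmpas:on -Xss2048k -Xmx8g -Xmn6g -Xincgc -XX:+ForceMmapReserved -XX:PermSize=2g -XX:MaxPermSize=2g -XX:+UseConcMarkSweepGC -XX:ParallelGCThreads=3 -XX:NewRatio=4 -XX:CMSTriggerRatio=50 -XX:+UseCompressedOops"
fi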


Explanation: -Xmn (the new generation size) needs to be set in HP-UX environments - if it is not set, it defaults to one third of the -Xmx value. So I set it to 6g to make the new generation bigger, to avoid running out of room and to get less frequent GC.

Not setting the -Xms option and using -XX:+ForceMmapReserved instead is more efficient than asking the JVM to allocate the pages itself. This way the OS mmap reserves the pages.


-- More details from HP document:



-XX:+ForceMmapReserved 


Tells the JVM to reserve the swap space for all large memory regions used by the JVM (Java™ heap). This effectively removes the MAP_NORESERVE flag from the mmap call used to map the Java™ heap and ensures that swap is reserved for the full memory mapped region when it is first created. When using this option, the JVM no longer needs to touch the memory pages within the committed area to reserve the swap and, as a result, no physical memory is allocated until the page is actually used by the application.

Adding the ParallelGC thread setting is to change the default behavior. By default it is equal to the number of processors, so I want to keep some CPU available while GC is going on - if I remember correctly, we only had 4 CPUs here.

Adding NewRatio and setting it to 4 also changes the default behavior. It is the ratio of the new to the old generation; by default it worked out to 1:8 here, which seems too small for this setup at CIGNA, hence increasing it to get a bigger new generation. Now the GC will not run as often, since it is a 1:4 ratio.

CMSTriggerRatio is set at 50 percent, so there will always be heap space available while the Full GC is in progress. This is the ratio between free and non-free heap, so up to 4GB of space will be available when the Full GC starts, and it will only run on 2 CPUs.

Finally, -XX:+UseCompressedOops directs the JVM to save memory by using 32-bit pointers whenever possible and hence use less memory.


-Xincgc = use incremental GC, which means running the GC incrementally on unused memory in the concurrent mark sweep generation.



6- Added the following to the JAVA_OPTIONS in the setDomainEnv.sh file:


   "-Dsun.reflect.noInflation=true"


Sometimes the environment can be tricky, so an alternative to setDomainEnv.sh would be to add it to startWebLogic.sh.
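A sketch of the append, assuming JAVA_OPTIONS is the variable your start scripts ultimately pass to the JVM:

# Pass the reflection setting from step 6 to the JVM.
JAVA_OPTIONS="${JAVA_OPTIONS} -Dsun.reflect.noInflation=true"
export JAVA_OPTIONS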

7- Implement the message size increase across the WebLogic Server by setting the following (110MB):
 -Dweblogic.MaxMessageSize=115343360
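For reference, 110MB works out to 110 x 1024 x 1024 = 115343360 bytes. It can be appended the same way (again assuming JAVA_OPTIONS is the variable in play):

# 110 * 1024 * 1024 = 115343360 bytes
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.MaxMessageSize=115343360"
export JAVA_OPTIONS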


8- In another case of OSB on IA64, I also ended up adding the following options.



     -XX:+UseNUMA  -XX:-UseLargePages

Explanation: When NUMA is enabled, large pages are enabled by default - so we want the NUMA advantage while disabling the large pages.


More Details on NUMA from HP Document:



Starting in JDK 6.0.06, the Parallel Scavenger garbage collector has been extended to take advantage of the machines with NUMA (Non Uniform Memory Access) architecture. Most modern computers are based on NUMA architecture, in which it takes a different amount of time to access different parts of memory. Typically, every processor in the system has a local memory that provides low access latency and high bandwidth, and remote memory that is considerably slower to access.


In the Java HotSpot Virtual Machine, the NUMA-aware allocator has been implemented to take advantage of such systems and provide automatic memory placement optimizations for Java applications. The allocator controls the eden space of the young  generation of the heap, where most of the new objects are created. The allocator divides the space into regions each of which is placed in the memory of a specific node. The allocator relies on a hypothesis that a thread that allocates the object will be the most likely to use the object. To ensure the fastest access to the new object, the allocator places it in the region local to the allocating thread. The regions can be dynamically resized to reflect the allocation rate of the application threads running on different nodes. That makes it possible to increase performance even of single-threaded applications. In addition, "from" and "to" survivor spaces of the young generation, the old generation, and the permanent generation have page interleaving turned on for them. This ensures that all threads have equal access latencies to these spaces on average.


The NUMA-aware allocator can be turned on with the -XX:+UseNUMA flag in
conjunction with the selection of the Parallel Scavenger garbage collector. The Parallel Scavenger garbage collector is the default for a server-class machine. The Parallel Scavenger garbage collector can also be turned on explicitly by specifying the -XX:+UseParallelGC option.


Applications that create a large amount of thread-specific data are likely to benefit most  from UseNUMA. For example, the SPECjbb2005 benchmark improves by about 25% on NUMA-aware IA-64 systems. Some applications might require a larger heap, and especially a larger young generation, to see benefit from UseNUMA, because of the division of eden space as described above. Use -Xmx, -Xms, and -Xmn to increase the overall heap and young generation sizes, respectively. There are some applications that ultimately do not benefit because of their heap-usage patterns.


Specifying UseNUMA also enables UseLargePages by default. UseLargePages can
have the side effect of consuming more address space, because of the stronger alignment of memory regions. This means that in environments where memory is tight but a large Java heap is specified, UseLargePages might require the heap size to be reduced, or Java will fail to start up. If this occurs when UseNUMA is specified, you can disable UseLargePages on your command line and still use UseNUMA; for example:
-XX:+UseNUMA -XX:-UseLargePages.

Please note that after applying the OS patches and libraries, the "-d64" flag may no longer be required. Please check the console log to see whether this flag is getting added twice - in that case, just remove it from the settings above.

 Happy troubleshooting !!