Wednesday, January 11, 2017

SSO in Multiple EBS R12.2 by Single OID/OAM 11g for Multiple Sets of Users
SSO implementation in EBS R12.2 with a common user base is well documented by Oracle. However, the complexity of serving different sets of users across multiple EBS R12.2 instances is not well documented, and I had not seen it done before. The problem statement is this: there are multiple sets of users, such as developers, testers, and setup users, who connect to multiple EBS environments, and a given user may have access to only one, some, or all environments. Multiple groups, such as CVDV, CVIT, and CVUT, were created in Microsoft AD. These groups are then synced to OID and their members placed in multiple user containers under the same domain, such as CVDV, CVIT, and CVUT (not the default Users container). These user containers should then get either linked or provisioned into the respective EBS environment.
In this blog, I will discuss a way to address the challenge described above. Most often, a separate OID and OAM combination serves a specific environment, or a group of environments with the same set of users. For example, please see the flow chart below.


In the above example, I am using a single AD, which is usually the case. However, there could also be a development AD, but you get the point. Here are some potential pitfalls that I see:

  1. The same set of users has access to both the UAT and DEV environments. If they need to be separated, then another OID and OAM environment has to be stacked up.
  2. Some development users may need access to UAT and/or SIT, and that granularity of access is not there.
  3. Licensing cost for stacking up multiple environments, plus the maintenance cost and the patch cycle time and effort.
  4. After cloning from PROD, if the users are not segregated in OID for UAT and DEV, then all of those users may have access to the UAT and DEV environments unless a precaution is taken to end date users after the clone (a hedged sketch of one way to do this follows the list).
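For the last pitfall, one way to handle it is to end date the SSO linked users right after the clone. The sketch below is only an illustration with hypothetical selection criteria (here, every user that carries a user_guid except a couple of seeded accounts); adjust the WHERE clause to whatever segregation rule fits your environments. It assumes the standard FND_USER_PKG.DisableUser API and an APPS connection.

sqlplus -s apps/<apps password> <<'EOF'
-- Hypothetical post-clone cleanup: end date every SSO-linked user
-- except the seeded accounts you still need. Adjust the criteria!
begin
  for u in (select user_name
              from fnd_user
             where user_guid is not null
               and user_name not in ('SYSADMIN','GUEST')) loop
    fnd_user_pkg.disableuser(u.user_name);   -- sets END_DATE for the user
  end loop;
  commit;
end;
/
EOF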
After thinking it through, I asked: why not have a single OID/OAM combination for all non-prod environments, with separate sets of users for each, so that some users only have access to UAT while others have access to DEV and SIT, and some common users have access to all or some? This addresses the issues mentioned above. The depiction looks like this:


So the question is – how do we do it?
Here is how:
Before I proceed, I will include an architecture diagram to illustrate. I have created Python scripts for failover and failback in an active-passive cluster for WebLogic, while OID (the OPMN component) runs as an active-active cluster.


It is assumed that you have already installed OID and OAM, that they are up and running on the latest versions (11g at the time of this post), and that EBS 12.2 is ready for OAM/OID.

Step 1: Please log in to ODSM using a browser (I use Firefox). After entering the username and password, you will land on the home screen. Navigate to the Data Browser and then to your domain. I have replaced the client domain name with "abc", so it now reads corp.abc.com.

Step 2: Please create as many directories (user containers) as the EBS environments you will be serving, such as CVIT for the SIT environment. Here I am connecting three environments: UAT, SIT, and DEV. I took the prefix CV for "checking validity", but you can choose a name meaningful to your organization. You can create them either way; the easier way is to use "Create Like" and select Users as the source. If you prefer the command line, see the ldapadd sketch below.
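If you prefer the command line over ODSM, the containers can also be created with ldapadd. This is only a sketch with assumed names and the standard orclContainer object class; note that, unlike the ODSM "Create Like Users" route, a plain ldapadd does not copy any ACLs, which is why Step 3 below still matters.

cat > cv_containers.ldif <<'EOF'
dn: cn=CVDV,dc=corp,dc=abc,dc=com
objectclass: top
objectclass: orclContainer
cn: CVDV

dn: cn=CVIT,dc=corp,dc=abc,dc=com
objectclass: top
objectclass: orclContainer
cn: CVIT

dn: cn=CVUT,dc=corp,dc=abc,dc=com
objectclass: top
objectclass: orclContainer
cn: CVUT
EOF

ldapadd -h oamdevuat.corp.abc.com -p 3060 -D cn=orcladmin -w <orcladmin password> -c -v -f cv_containers.ldif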





Step 3: Now we need ACLs on these directories similar to those on the Users container. So we navigate to the Security tab and create the same entries, selecting Users as the "create like" reference, for CVIT, CVUT, and CVDV as indicated below.


Step 4: Now we navigate to the Advanced tab and create three attribute uniqueness rules, one for each of the new containers.

Step 5: Now we need to set Oracle's default search base one level higher than the default. Please note that the default search and create bases both point to the Users directory. We do not want to change the default create base; that stays as Users. However, the default search base should be moved one level up so that OAM searches the entire directory, as indicated in the picture below.


While staying in the Advanced tab, please update the subscription for the AD plug-in from the default Users to your specific containers:
Click on "oidexplg_bind_ad" -> Optional Properties -> Plug-in Subscriber DN List
Update it and add the three containers that we have created (CVIT, CVUT, and CVDV) as shown:





And enable the plugin as:



Please repeat the same for the "oidexplg_compare_ad" plug-in on the same page. A quick ldapsearch sketch to verify both plug-ins follows.
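To double check that the subscriber DN list and the enable flag actually took effect on both plug-ins, a read-only ldapsearch like the one below can help. The base DN under cn=plugin,cn=subconfigsubentry and the attribute names are what I have usually seen in OID 11g, but treat them as assumptions and confirm against your own directory.

ldapsearch -h oamdevuat.corp.abc.com -p 3060 -D cn=orcladmin -w <orcladmin password> \
  -b "cn=plugin,cn=subconfigsubentry" -s sub \
  "(|(cn=oidexplg_bind_ad)(cn=oidexplg_compare_ad))" \
  orclpluginenable orclpluginsubscriberdnlist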
Step 6: Now we leave ODSM and go to DIP through the Enterprise Manager console, where we create three synchronization profiles, each pointing to its directory structure created in ODSM (CVIT to CVIT, and so forth). Here I am opening one such profile to demonstrate the process.



Step 7: In the same synchronization profile you need to separate the users with a filter. Ask the Microsoft AD administrator to create the three groups whose memberships determine which users are pointed at each environment. For example, we created three groups: "CVDV", "CVIT", and "CVUT". We can add or remove memberships as often as needed, for example as users and their responsibilities progress from DEV to UAT. However, a manual script is needed at the OID UNIX level to delete users who are already synced to OID when a membership change requires a user to be removed from one of the groups. Once the delete happens in OID, the user gets end dated on the EBS side. We cannot use Oracle's default delete sync because we are not mapping the directory tree 1:1 from AD to OID, and because of the privilege needed for such a process. A minimal sketch of the idea follows; please ping me if you need the full script.
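Here is a minimal sketch of that manual cleanup, not the production script. It assumes the user RDN in OID is cn (as the preference dump later in this post shows), that keep_list.txt holds the current members of the AD group (one account name per line), and that the Oracle ldapsearch/ldapdelete binaries are on the PATH; the output parsing is simplified and may need adjusting for your ldapsearch variant.

#!/bin/sh
# Sketch only: delete OID users under cn=CVIT that are no longer in the AD group.
OIDHOST=oamdevuat.corp.abc.com
OIDPORT=3060
BASE="cn=CVIT,dc=corp,dc=abc,dc=com"
BINDDN="cn=orcladmin"
BINDPW="<orcladmin password>"

ldapsearch -h $OIDHOST -p $OIDPORT -D "$BINDDN" -w "$BINDPW" \
  -b "$BASE" -s one "objectclass=inetorgperson" dn |
grep -i "^cn=" |
while read dn
do
  user=`echo "$dn" | cut -d, -f1 | cut -d= -f2`
  if ! grep -qix "$user" keep_list.txt
  then
    echo "Removing stale entry: $dn"
    ldapdelete -h $OIDHOST -p $OIDPORT -D "$BINDDN" -w "$BINDPW" "$dn"
  fi
done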


Step 8: Please edit the event configuration on the synchronization profile to match the directory that this profile is intended for. By default it points to the Users directory. You can use "oidprovtool", a combination of ldapsearch and ldapdelete against this synchronization profile, or this screen to edit it. It should look like:



Step 9: Now we should start making changes in OAM to put things in perspective. Here we will create three identity stores, three authentication schemes, and three authentication modules to separate the searches. Please remember that the authentication scheme is what we present when registering EBS to OAM, and that is how we keep the environments separated by their directories in ODSM. Let's create the three identity stores, which can then be selected in the authentication modules and schemes.
 So when you login to OAM, this is what is displayed:


Now select the Configuration tab at the top (currently the Application Security tab is selected).




Now navigate to User Identity Stores:


Here is the example of one identity store that is pointing to the directory structure in OID.

Here is an example of creating an authentication module [first navigate to the Launch Pad, then to Authentication Modules]:





And here is an example of creating an authentication scheme [see Authentication Schemes under the Access Manager tab on the Launch Pad above]:




Please repeat to have three of each - such as:







Step 10: Now we are ready for the OID and OAM registration on the EBS side. Please execute these against the patch edition after starting an ADOP cycle with adop phase=prepare. Please note that you must execute this in all three environments if you are configuring them together!!
Please use the following for OID registration on EBS R12.2. Pass the provisiontype explicitly, as by default it will be bi-directional. You will need the LDAP hostname, port, orcladmin password, and APPS password to perform this step.

$FND_TOP/bin/txkrun.pl -script=SetSSOReg -registeroid=yes -provisiontype=3

[One can pass -appname= and -svcname= to override the default names, which are derived from the CONTEXT_FILE.]

Please use following process to register the OAM in the same ADOP phase:
  • Install WebGate:
txkrun.pl -script=SetOAMReg -installWebgate=yes -webgatestagedir=/ebsstagesw/ebs1223/MCG [Please change to your directory]

  • Deploy AccessGate:
      perl $AD_TOP/patch/115/bin/adProvisionEBS.pl ebs-create-oaea_resources -contextfile=$CONTEXT_FILE  -deployApps=accessgate -SSOServerURL=http://oamdevuat.corp.abc.com:14100 -logfile=deployeag.log
  • Register EBS to OAM as [Please Note the Authentication Scheme here..]:
txkrun.pl -script=SetOAMReg -registeroam=yes -oamHost=http://oamdevuat.corp.abc.com:7002 -oamUserName=weblogic -ldapUrl=ldap://oamdevuat.corp.abc.com:3060 -oidUserName=cn=orcladmin -ldapSearchBase=cn=cvit,dc=corp,dc=abc,dc=com -ldapGroupSearchBase=cn=Groups,dc=corp,dc=abc,dc=com -authScheme=CVITAuthScheme -authSchemeMode=reference
Now ADOP phase should end as:
adop phase=finalize,cutover,cleanup finalize_mode=full cleanup_mode=full mtrestart=no

Step 11: Please start the new RUN file system and do the following:
sqlplus apps/<apps password>
execute fnd_oid_plug.setPlugin();

Now select and verify that all is well here.
Verify the OID registration:
select fnd_preference.get('#INTERNAL','LDAP_SYNCH','HOST') from dual;
ldaphost.abc.com  -- this should return your OID host

select fnd_preference.get('#INTERNAL','LDAP_SYNCH','PORT') from dual;
3060              -- this should return your OID port

col preference_value format a45
set lines 120
SELECT preference_name,preference_value FROM fnd_user_preferences
WHERE user_name='#INTERNAL' AND module_name= 'OID_CONF';

PREFERENCE_NAME                PREFERENCE_VALUE
------------------------------ ---------------------------------------------
CREATE_BASE                    cn=cvit,dc=corp,dc=abc,dc=com
CREATE_BASE_opt_mode           STATIC
DEFAULT_CREATE_BASE            cn=cvit,dc=corp,dc=abc,dc=com
DEFAULT_CREATE_BASE_opt_mode   STATIC
DEFAULT_REALM                  dc=corp,dc=abc,dc=com
DEFAULT_REALM_opt_mode         STATIC
FIXUP                          NONE
FIXUP_opt_mode                 STATIC
PLUGIN_VERSION                 1.1
RDN                            cn
RDN_opt_mode                   STATIC
REALM                          dc=corp,dc=abc,dc=com
REALM_opt_mode                 STATIC

Step 12: Now please set the following profile options at the EBS site level:

NAME                       USER_PROFILE_OPTION_NAME                                   LEVEL   VALUE
APPS_SSO                   Applications SSO Type                                      SITE    SSWA w/SSO
APPS_SSO_LINK_SAME_NAMES   Link Applications user with OID user with same username    SITE    Enable
APPS_SSO_AUTO_LINK_USER    Applications SSO Auto Link User                            SITE    Enable
APPS_SSO_OID_IDENTITY      Applications SSO Enable OID Identity Add Event             SITE    Enable
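A quick way to confirm the values after saving them, rather than walking back through the forms, is to read the site level values with fnd_profile from SQL*Plus. This is only a sketch; it assumes the standard fnd_profile.value API and an APPS connection.

sqlplus -s apps/<apps password> <<'EOF'
-- Site level values of the SSO related profile options
select fnd_profile.value('APPS_SSO')                 "APPS_SSO",
       fnd_profile.value('APPS_SSO_LINK_SAME_NAMES') "LINK_SAME_NAMES",
       fnd_profile.value('APPS_SSO_AUTO_LINK_USER')  "AUTO_LINK_USER",
       fnd_profile.value('APPS_SSO_OID_IDENTITY')    "OID_IDENTITY"
  from dual;
EOF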

Step 13: Now run the OID sync manually to pull the data, based on the filter set on each sync profile, into OID, and verify that those users get created in EBS (different environments will have different users based on the filter and the directory in OID). I ran with a parallelism of 5, but if the user count is not that large, the default "-lp" value will be just fine.

syncProfileBootstrap -host oamdevuat.corp.abc.com -port 7005 -D weblogic -profile CVIT_AD2OID -lp 5
syncProfileBootstrap -host oamdevuat.corp.abc.com -port 7005 -D weblogic -profile CVUT_AD2OID -lp 5
syncProfileBootstrap -host oamdevuat.corp.abc.com -port 7005 -D weblogic -profile CVDV_AD2OID -lp 5
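Before jumping into ODSM, a quick ldapsearch against each container shows whether the bootstrap actually landed the users where you expect. The hostname, port, and container below are the ones used in this post; substitute your own.

ldapsearch -h oamdevuat.corp.abc.com -p 3060 -D cn=orcladmin -w <orcladmin password> \
  -b "cn=CVIT,dc=corp,dc=abc,dc=com" -s sub "objectclass=inetorgperson" cn mail | more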

Now verify the users via ODSM first and then in EBS. Please note that some users were removed from the screenshots to protect the identity of the client.


Please use following SQL to check on EBS:
select user_name,start_date,end_date,user_guid,to_char(creation_date,'DD-MON-YY hh24:mi:ss') from fnd_user where trunc(creation_date) >= trunc(sysdate -2);

This should match with what OID reports in ODSM.
Once all is well, please try the EBS URL; it should redirect to the OAM login page, where you use the AD password to log in. Please test!! A quick curl sketch for checking the redirect follows.
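For a quick command line check of the redirect before testing in a browser, something like the curl call below works; the EBS hostname and port here are hypothetical, so substitute your own URL. You should see one or more Location headers pointing at the OAM server before the login page is returned.

curl -sIL http://ebsdev.corp.abc.com:8000/OA_HTML/AppsLogin | grep -i "^Location"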

Saturday, March 10, 2012

JRockit and "[WARN] Use of -Djrockit.optfile is deprecated and discouraged"

Hello Technical Family,


In this post, I am going to discuss JRockit and an error seen during WebLogic startup. This error may not impact the install, but since the option is deprecated it makes sense to fix it. I had to spend some hours of research on it and would not want other people to do the same, hence posting here. What I get out of this: the pure satisfaction that someone out there will read this post and apply the solution.


The platform is OS= RedHat 5.5 x86_64 with jrockit-jdk1.6.0_29-R28.2.2-4.1.0-linux-x64


Error:


During the Weblogic Server startup - I saw the following error in the logs:


[WARN] Use of -Djrockit.optfile is deprecated and discouraged.


Solution:


This is what I had to do to get this issue out of the logs and give JRockit the directive file format it wants.


1- I added the following JVM options to the WebLogic domain environment file.



-XX:+UnlockDiagnosticVMOptions  -XX:OptFile=${COMMON_COMPONENTS_HOME}/modules/oracle.jrf_11.1.1/jrocket_optfile.txt
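For reference, this is roughly how the line sits in my domain environment file. I am appending it to JAVA_OPTIONS in setDomainEnv.sh; the exact variable (JAVA_OPTIONS vs EXTRA_JAVA_PROPERTIES) depends on how your domain scripts are wired, so treat this as a sketch.

# setDomainEnv.sh (sketch): point JRockit at the recreated directive file
JAVA_OPTIONS="${JAVA_OPTIONS} -XX:+UnlockDiagnosticVMOptions -XX:OptFile=${COMMON_COMPONENTS_HOME}/modules/oracle.jrf_11.1.1/jrocket_optfile.txt"
export JAVA_OPTIONS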

2- Recreated the jrocket_optfile.txt as :

From:

- oracle/bpel/services/workflow/repos/driver/WFTask
- oracle/bpel/services/workflow/task/model/SystemMessageAttributesTypeImpl.*
- oracle/bpel/services/workflow/task/model/TaskTypeImpl.*
- oracle/bpel/services/workflow/task/model/SystemAttributesTypeImpl.*
- oracle/bpel/services/workflow/repos/Util.*
- oracle/xquery/parser/XPathTokenManager.*



To:

{
  match: ["oracle.bpel.services.workflow.repos.driver.WFTask",
          "oracle.bpel.services.workflow.task.model.SystemMessageAttributesTypeImpl.*",
          "oracle.bpel.services.workflow.task.model.TaskTypeImpl.*",
          "oracle.bpel.services.workflow.task.model.SystemAttributesTypeImpl.*",
          "oracle.bpel.services.workflow.repos.Util.*",
          "oracle.xquery.parser.XPathTokenManager.*"],
  enable: jit
}

Restart the Weblogic and you just resolved another issue.

Great feeling - right...

Enjoy life as it comes - It will be less of good times and more of bad times - but hey I try to find the refuge in solving the issue as I have yet to see the good time since last 11 years !!


OBIEE 11g Install and "error creating asinstance instance1" Error

Hello Tech World,


In this post, I am going to give you insight into an issue of my own making. However, it can still be helpful. Sometimes you come across issues by accident, and this is one of those. The platform is RedHat 5.5 x86_64 with OBIEE 11g (11.1.1.5).


Error:


While installing OBIEE 11g, the following error appears in the oraInventory logs and the configuration fails.


"error creating asinstance instance1"


Cause:


The ORACLE_INSTANCE environment variable was already set to the instance name that the installer was trying to create.


Debugging:


In most cases you will not come across this issue, because you will not have set this environment variable. I read too much into the installation guide and, before even starting the install, set the ORACLE_INSTANCE variable to the instance I wanted the installer to create. During configuration, the installer then assumes that this OPMN instance already exists and will not proceed.


This could be a bug: the installer could simply check whether the directory or path named in the environment variable actually exists, but instead it just pops the error and will not proceed.




Solution:


I had to unset the ORACLE_INSTANCE variable to get the install going. Come to think of it, you could also choose a different instance name than the one set in ORACLE_INSTANCE and get the install going. So trivial, yet surprising when the error is so elusive.
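Concretely, before relaunching the installer I just did the following in the shell that starts runInstaller (simple, but that is all it takes):

unset ORACLE_INSTANCE
env | grep ORACLE_INSTANCE   # should print nothing before re-running the installer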


I thought this would help the community and save some time with Oracle Support.


Hopefully Helps others.


Another day of a tech life.

Weblogic on Linux x86-64 libmuxer Library Error (BEA-280101)

In this post, I will be discussing the Java I/O and performance pack error that is seen in the admin logs of Weblogic. This is Weblogic 10.3.5 on RedHat Linux 5.5 64 bit.


Error:



<Jan 5, 2012 8:10:20 AM PST> <Error> <Socket> <BEA-000438> <Unable to load performance pack. Using Java I/O instead. Please ensure that libmuxer library is in :
####<Jan 5, 2012 8:52:29 AM PST>  <Warning> <Store> <BEA-280101> <amghost1.cup.com> <WLS_Portlet> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1325782349291> <BEA-280101> <The persistent file store "_WLS_WLS_Portlet" is forced to use buffered I/O and so may have significantly degraded performance. Either the OS/hardware environment does not support the chosen write policy or the native wlfileio library is missing. See store open log messages for the requested and final write policies. See the documentation on store synchronous write policy configuration for advice.

Cause:

The native libraries live in $WL_HOME/server/native/linux/x86_64, and the WebLogic startup expects to find them on the library path. Since the default LD_LIBRARY_PATH (after a fresh install) does not include that path, WebLogic looks for the libmuxer library in the i686 directory instead. That may be fine on a 32-bit OS, but on a 64-bit OS the library is in a different place.

Solution:

I had two options: one, copy those libraries to a path already on the LD_LIBRARY_PATH; two, add the new path.

I decided on the latter and changed the LD_LIBRARY_PATH in the environment file as follows:

From:

LD_LIBRARY_PATH="${WL_HOME}/server/native/linux/i686${CLASSPATHSEP}${LD_LIBRARY_PATH}"
export LD_LIBRARY_PATH

TO:

LD_LIBRARY_PATH="${WL_HOME}/server/native/linux/x86_64${CLASSPATHSEP}${WL_HOME}/server/native/linux/i686${CLASSPATHSEP}${LD_LIBRARY_PATH}"
export LD_LIBRARY_PATH
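To confirm the 64-bit native libraries are actually present before restarting, a quick check like this helps (assuming a standard WLS 10.3.x layout):

ls -l ${WL_HOME}/server/native/linux/x86_64/libmuxer.so
file  ${WL_HOME}/server/native/linux/x86_64/libmuxer.so   # should report a 64-bit shared object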

After the above change, the result in the admin log is:


####<Jan 5, 2012 9:21:19 AM PST> <Info> <Socket> <ebiz1.cup.hp.com> <WLS_Portlet> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1325784079574> <BEA-000447> <Native IO Disabled. Using Java IO.>




4110978> <BEA-280008> <Opening the persistent file store "_WLS_WLS_Portlet" for recovery: directory=/home/oracle/SOA/Middleware/user_projects/domains/base_domain/servers/WLS_Portlet/data/store/default requestedWritePolicy="Direct-Write" fileLockingEnabled=true driver="wlfileio3".>


Please note that I also changed the write policy to Direct-Write from the Admin Console, and that is what is reflected here. I made this change because I found it performed better in this particular scenario, even though it is not the general recommendation. I do not follow recommendations blindly; I run my own tests and choose the setting appropriate to my environment. It was better here, but it cannot be used as a rule of thumb: the SAN latency was not high, and maybe my volume was not that high either. Please do some testing in the DEV environment and choose the best setting for your environment.

Issue Resolved !!

Friday, December 30, 2011

HPUX IA64 JVM issue with Weblogic

I thought it would be interesting to add this as a post on Oracle Fusion Middleware. Recently I was asked to help a large healthcare company that was having issues processing the claim messages generated by OSB. They had a separate clustered WebLogic domain dedicated to message handling, and OSB ran in its own WebLogic domain.


Issue:
1- The JVM was doing a Garbage Collection every 10 to 15 seconds, spiking the CPU and grinding the system to a halt.
2- The thread dumps revealed that reflection class unloading was happening on every Full GC - see below:

       sun.reflect.GeneratedSerializationConstructorAccessor

3- Message file size exceptions occurred because the maximum file size limit was set to 30MB and the messages were larger than that.
4- The Full Garbage Collections were taking more time, and JVM pauses were visible in the thread dumps.


Environment: HPUX 11iV3 with itanium 64 bit


Solution:


This is how it got resolved: tuning the Java memory settings, mapping memory pages separately for each call, increasing the stack size, and applying some missing libraries and patches.


The issue is that on HP-UX Itanium 64-bit, Java runs in 32-bit mode by default unless you ask it to run in 64-bit mode. This can be verified with "java -version", "java -d64 -version", and "java -d32 -version". If the libraries exist and the kernel patches are installed, WebLogic detects that the 64-bit JVM is installed and adds the flags ("-client -d64"), but in some cases it does not, and we need to add them ourselves.


Also, the JDK that gets installed on HP-UX is not the Oracle JDK itself but the Oracle JDK ported by HP to HP-UX; hence HP version 6.0.10 is the same as Oracle JDK 1.6 update 24, and the latest Oracle JDK 1.6 update 29 corresponds to 6.0.13. Oracle JDK 1.7 was just released on this platform in December 2011, and version 7.0.00 is Oracle JDK 7.0u1.




Besides all of the above, Java tuning is also needed to get the huge pile of messages processed. After careful consideration, below is the solution.


Here are the Steps:


1-    Go to the Web site

     http://hpux.connect.org.uk/
   Use the search button to find the following libraries:
 

libiconv-1.14
libxml2-2.7.8
libxslt-1.1.26
zlib-1.2.5
 2-    Make sure that you have all of the following patches installed.

             PHSS_37501
             PHCO_38050
             PHSS_38139
             PHKL_40208
             PHKL_35552
  
 3- Make sure that the following HP-UX kernel parameters are set to at least the values below; a kctune sketch for checking and raising them follows the list.

max_thread_proc  1024
maxfiles                256
nkthread               3635
nproc                    2068
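On HP-UX 11i v3 the values can be checked and raised with kctune, roughly as below; the exact syntax can vary slightly by release and some tunables only take effect after a reboot, so treat this as a sketch.

# Check the current values
kctune max_thread_proc maxfiles nkthread nproc

# Raise them if needed (run as root)
kctune max_thread_proc=1024
kctune maxfiles=256
kctune nkthread=3635
kctune nproc=2068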

  
 4- The JVM runs in "client" mode by default. That is okay, but when the volume is large it needs to be changed to server mode. To do this, I added the following lines in startWebLogic.sh (this applies if you are running in development mode).



JAVA_VM=-server
export JAVA_VM


5- Make the changes in setDomainEnv.sh and make sure these memory arguments actually take effect; if you are setting something in startManagedServer.sh or somewhere else, then it needs to be modified there instead.




If your system is not enabled with NUMA then:


if [ "${SERVER_NAME}" = "AdminServer" ] ; then
        USER_MEM_ARGS="-d64 -Xmpas:on -Xss1024k -Xms4096m -Xmx4096m -XX:MaxPermSize=1024m -XX:+UseConcMarkSweepGC"
else
        USER_MEM_ARGS="-Xmpas:on -Xss2048k -Xmx8g -Xmn6g -Xincgc -XX:+ForceMmapReserved -XX:PermSize=2g -XX:MaxPermSize=2g -XX:+UseConcMarkSweepGC -XX:ParallelGCThreads=3 -XX:NewRatio=4 -XX:CMSTriggerRatio=50 -XX:+UseCompressedOops"
fi

If your system is enabled with NUMA then:



if [ "${SERVER_NAME}" = "AdminServer" ] ; then
        USER_MEM_ARGS="-d64 -Xmpas:on -Xss1024k -Xms4096m -Xmx4096m -XX:MaxPermSize=1024m -XX:+UseConcMarkSweepGC"
else
        USER_MEM_ARGS="-Xmpas:on -Xss2048k -Xmx8g -Xmn6g -Xincgc -XX:+ForceMmapReserved -XX:PermSize=2g -XX:MaxPermSize=2g -XX:+UseConcMarkSweepGC -XX:ParallelGCThreads=6 -XX:+UseCompressedOops -XX:+UseNUMA -XX:-UseLargePages"
fi


[The difference is that when NUMA is enabled, memory I/O is not an issue, so I leave the Full GC trigger at its default (around 92%) as it will not have an impact. Also note the GC threads: I changed them to 6 because I have an 8-CPU machine with NUMA, keeping 2 CPUs free, whereas in the non-NUMA case I only had 4 CPUs.]


Here you can add other conditions to give each managed server its own memory settings on an as-needed basis; a sketch follows.
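A sketch of what that could look like: the managed server name WLS_OSB1 below is hypothetical, and the sizes are placeholders to be tuned per server.

if [ "${SERVER_NAME}" = "AdminServer" ] ; then
        USER_MEM_ARGS="-d64 -Xmpas:on -Xss1024k -Xms4096m -Xmx4096m -XX:MaxPermSize=1024m -XX:+UseConcMarkSweepGC"
elif [ "${SERVER_NAME}" = "WLS_OSB1" ] ; then
        # Hypothetical OSB managed server with its own, larger heap
        USER_MEM_ARGS="-d64 -Xmpas:on -Xss2048k -Xmx8g -Xmn6g -XX:+ForceMmapReserved -XX:PermSize=2g -XX:MaxPermSize=2g -XX:+UseConcMarkSweepGC"
else
        USER_MEM_ARGS="-d64 -Xmpas:on -Xss1024k -Xmx4g -XX:MaxPermSize=1g -XX:+UseConcMarkSweepGC"
fi
export USER_MEM_ARGS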


Explanation: -Xmn (new generation size) needs to be set in HP-UX environments; if it is not set, it defaults to one third of the -Xmx value. So I set it to 6g to make the new generation bigger, avoiding running out of room and reducing how often GC runs.

Not setting the -Xms option and using -XX:+ForceMmapReserved instead is more efficient than asking the JVM to allocate the pages itself. This way the OS mmap reserves the pages.


-- More details from HP document:



-XX:+ForceMmapReserved

Tells the JVM to reserve the swap space for all large memory regions used by the JVM (Java™ heap). This effectively removes the MAP_NORESERVE flag from the mmap call used to map the Java™ heaps and ensures that swap is reserved for the full memory mapped region when it is first created. When using this option the JVM no longer needs to touch the memory pages within the committed area to reserve the swap and, as a result, no physical memory is allocated until the page is actually used by the application.

Adding -XX:ParallelGCThreads changes the default behavior. By default it is equal to the number of processors; I want to keep some CPU available while GC is going on, since, if I remember correctly, we only had 4 CPUs.

Setting -XX:NewRatio=4 also changes the default behavior. This is the ratio of new to old generation; by default it is 1:8, which seemed small for this setup, so I increased the new generation size by moving to a 1:4 ratio. Now the GC will not run as often.

CMSTriggerRatio is set at 50 percent so that there is always heap available while the Full GC is in progress. This is the ratio of free to non-free heap, so up to about 4 GB of space will still be available when the Full GC starts, and it will only run on 2 CPUs.

Finally, -XX:+UseCompressedOops directs the JVM to save memory by using 32-bit pointers whenever possible.


-Xincgc enables incremental GC, meaning the GC reclaims unused memory in the concurrent mark sweep generation incrementally.



6- Added the following to JAVA_OPTIONS in the setDomainEnv.sh file:


   "-Dsun.reflect.noInflation=true"


Sometimes the environment can be tricky, so an alternative to setDomainEnv.sh is to add it to startWebLogic.sh.

7- Implement the message size increase across the WebLogic Server by setting the following (110 MB):
 -Dweblogic.MaxMessageSize=115343360


8- In another case of OSB on IA64, I also ended up adding the following options:



     -XX:+UseNUMA  -XX:-UseLargePages

Explanation: When NUMA is enabled, large pages are enabled by default; here we want the benefit of NUMA while disabling large pages.


More Details on NUMA from HP Document:



Starting in JDK 6.0.06, the Parallel Scavenger garbage collector has been extended to take advantage of the machines with NUMA (Non Uniform Memory Access) architecture. Most modern computers are based on NUMA architecture, in which it takes a different amount of time to access different parts of memory. Typically, every processor in the system has a local memory that provides low access latency and high bandwidth, and remote memory that is considerably slower to access.


In the Java HotSpot Virtual Machine, the NUMA-aware allocator has been implemented to take advantage of such systems and provide automatic memory placement optimizations for Java applications. The allocator controls the eden space of the young  generation of the heap, where most of the new objects are created. The allocator divides the space into regions each of which is placed in the memory of a specific node. The allocator relies on a hypothesis that a thread that allocates the object will be the most likely to use the object. To ensure the fastest access to the new object, the allocator places it in the region local to the allocating thread. The regions can be dynamically resized to reflect the allocation rate of the application threads running on different nodes. That makes it possible to increase performance even of single-threaded applications. In addition, "from" and "to" survivor spaces of the young generation, the old generation, and the permanent generation have page interleaving turned on for them. This ensures that all threads have equal access latencies to these spaces on average.


The NUMA-aware allocator can be turned on with the -XX:+UseNUMA flag in
conjunction with the selection of the Parallel Scavenger garbage collector. The Parallel Scavenger garbage collector is the default for a server-class machine. The Parallel Scavenger garbage collector can also be turned on explicitly by specifying the -XX:+UseParallelGC option.


Applications that create a large amount of thread-specific data are likely to benefit most  from UseNUMA. For example, the SPECjbb2005 benchmark improves by about 25% on NUMA-aware IA-64 systems. Some applications might require a larger heap, and especially a larger young generation, to see benefit from UseNUMA, because of the division of eden space as described above. Use -Xmx, -Xms, and -Xmn to increase the overall heap and young generation sizes, respectively. There are some applications that ultimately do not benefit because of their heap-usage patterns.


Specifying UseNUMA also enables UseLargePages by default. UseLargePages can
have the side effect of consuming more address space, because of the stronger alignment of memory regions. This means that in environments where memory is tight but a large Java heap is specified, UseLargePages might require the heap size to be reduced, or Java will fail to start up. If this occurs when UseNUMA is specified, you can disable UseLargePages on your command line and still use UseNUMA; for example:
-XX:+UseNUMA -XX:-UseLargePages.

Please note that after applying the OS patches and libraries, the "-d64" flag may no longer be required. Please check the console log to see whether this flag is being added twice; in that case just remove it from the settings above.

 Happy troubleshooting !!