webMethods Performance tuning


There is no single standard approach to performance tuning a webMethods landscape. Here I am simply listing out one approach; if anyone thinks I have missed something, please comment.

We can improve webMethods performance by tuning the various components: MWS, Broker Server, TN, Integration Server, Optimize, and so on. Beyond these, you also have to look at other aspects such as architecture, hardware, target systems, database, operating system, and back-end systems. The approach below takes webMethods version 6.1 as the baseline.

1. Architecture:

- Place components close together if they communicate frequently. This is fairly obvious, but it cannot be stated too often: networks can very easily become the performance bottleneck in an integration project. The properties database and the logging file system or database should be close to the server. The most complex interactions in integration projects usually occur between an Adapter and the resource it is adapting, so minimizing the network distance between the Adapter and the resource can be extremely important for performance.

- Commonly invoked services can be identified using the Integration Server's service usage statistics in the Administrator. Use this in conjunction with audit logs to determine which services to target during performance tuning.

- If you can measure processing one "item" at a time, check whether the average processing time degrades as time goes on. For instance, you might have 100 items (all about the same size) where each of the first 20 takes 200 ms, then each successive item takes more and more time until the last 10 take 600 ms each. This can indicate an inefficiency in your implementation, such as a cluttered pipeline. If you detect this situation, use enable/disable on steps to simplify the processing until each of the 100 items takes the same amount of time, then methodically add the functionality back to discover the code that exhibits the non-linear time growth.

- When using the WmDB or WmJDBC adapter, throughput will often be dictated by the performance of the database.

- Install all performance-related patches and service packs.

- Turn off all non-essential listeners (e.g. FTP, email).

- SSL can cause a significant performance hit. In some cases, introducing SSL has increased processing time by roughly 2 to 5 times. Consider using a hardware SSL accelerator (e.g. nCipher™) to improve SSL performance.

Performance Considerations: Knowing the Characteristics of Your Solution

- webMethods Broker and webMethods Integration Server interact using a high-speed wire protocol, but the speed at which these two components exchange documents is ultimately determined by resource availability (memory, threads, CPU cycles, and so forth) and the characteristics of your solution (for example, guaranteed or volatile documents, serial or parallel processing).

- You can use the information in this document more effectively if you understand the following characteristics of your solution:

― Average document size

― Maximum document size

― Average document arrival rate

― Peak document arrival rate

― Average document processing time (by subscriber)

― Expected demand cycles (recurring intervals of high or low demand)

― Use of guaranteed versus volatile documents

― Number of active triggers on the Integration Server

― Number of trigger execution threads on the Integration Server

― Volume of audit events

― Overall performance of audit database

― Number of concurrent connections allowed by audit database system

- Understanding these characteristics allows you to anticipate resource usage at various points of a pub/sub integration and provides essential information for capacity planning. For example, knowing the average document size and arrival rate will help you estimate the amount of memory your Broker and Integration Server need under 'ordinary' operating conditions.

- Similarly, understanding your peak values can help you decide whether one platform configuration will support your maximum requirements better than another. For example, it can help you determine whether you should install certain components on the same machine or whether, under peak loads, these components would perform better on separate machines.

- The more you understand the characteristics and requirements of your solution, the more successful your tuning efforts will be.

Scalability

Scalability is the ability of a system to handle increased load. Scaling the webMethods Integration Platform can help improve the performance of the solution. Using webMethods, scalability can be achieved in two primary ways:

- Vertical Scaling – the ability to increase the power of a particular resource to support higher capacity, for example by adding CPUs, RAM, cache, and disks to a server. Vertical scaling offers quick solutions to many scalability issues, but the law of diminishing returns means there is always a level at which simply adding more hardware to a box has very little effect.

- Integration Server 6.0.1 with SP2 achieved scalability factors of 1.6 to 1.9 when going from a single- to a dual-processor system, averaging approximately 1.7 for the XML transformation test. The scalability factor for four processors ranged from 2.8 to 3.0, with an average value of 2.9.

- Horizontal Scaling – the ability to increase the number of components and load balance across them to handle increased load. There are several approaches to scaling webMethods horizontally:

- External Load Balancer – involves placing a router and/or switch in front of the Integration Servers, which uses an algorithm (such as round-robin) to distribute document load across servers.

- Split Notifications – it is possible to split notification processing such that notifications are handled by multiple Integration Servers based on some data-driven trigger logic. However, this only applies if notifications are being used.

- Distribute Document Load by Filtering – it is possible to split processing among Integration Servers using filtering logic in the document triggers.


- Benchmark reports can be generated from the audit information written to the wM audit log database by the Integration Server for the successful execution of all top-level flow services:

  1. Avg trans / unit of time.
  2. Min time / txn.
  3. Max time / txn.
  4. Max concurrent transactions.
  5. Uptime

Reverse Invoke Tuning:

- The number of reverse invoke connections you create should be the peak number of concurrent requests you expect, plus a margin of error to allow for growth. As with any performance tuning, experiment to see what works best.

- Securing Behind Your Firewall:

To eliminate the exposure of placing the Integration Server directly in the DMZ, you can deploy it in a unique reverse invoke configuration. In this configuration, you deploy two Integration Servers. One sits in the DMZ and is called the reverse invoke server; the other sits behind the inner firewall and is called the internal server.

- In a reverse invoke configuration, the reverse invoke server receives requests from the Internet (which have been filtered by the outer firewall) and passes those requests back to the internal server through a reserved port using a secure, proprietary protocol. This eliminates the need to open a hole in the inner firewall for inbound Internet traffic.


2. Hardware:

Tune the operating system you are using; review the OS-specific tuning recommendations available on webMethods Advantage from the Performance Team. OS tuning can have a big impact on the performance of the server (Advantage provides instructions for HP-UX machines, for example).

3. CPU:

The business logic in the Integration Server is where processing power is demanded. Defining the number of threads per process is an engineering decision that takes into account the volume of transactions at any given point in time and the available hardware resources (e.g. memory and number of CPUs).

- In general, you want the Integration Server to run at about 80% to 90% CPU capacity during peak load. Running at higher percentages may result in thrashing, a condition where the machine expends more cycles switching contexts than it does executing productive work. (Services that spend time waiting on external resources generally benefit more from a larger thread pool than services that are CPU bound.)

- Optimum Integration Server threadPool settings depend on the nature of the application and can only be determined through experimentation. For any given scenario there is an optimum threadPool setting; a value above or below it will result in decreased performance.


Sun has excellent resources for JVM tuning; look for information relating to HotSpot.

- Allocate as much memory as possible to the JVM by modifying the server.bat/server.sh file. Set the min and max heap to the same exact amount, for example 768 MB min and 768 MB max.

- Run some processes through your server and observe how many threads are running. Set the minimum thread pool to this amount. Run several tests to get an average; I usually set the maximum thread pool to double the observed thread count.
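To observe the thread counts this tip relies on, standard JMX works from any Java process; the sketch below is not webMethods code, just a minimal way to watch live and peak JVM thread counts while you drive load through the server (the class name and the "double it" heuristic are taken from the text, not from any product API).

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadWatermark {
    // Current number of live threads in this JVM.
    public static int liveThreads() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        return mx.getThreadCount();
    }

    // Highest thread count seen since JVM start (or last peak reset).
    public static int peakThreads() {
        return ManagementFactory.getThreadMXBean().getPeakThreadCount();
    }

    public static void main(String[] args) {
        System.out.println("live=" + liveThreads() + " peak=" + peakThreads());
        // Rule of thumb from the text: min pool ~= observed average,
        // max pool ~= 2x the observed peak thread count.
        System.out.println("suggested max threadPool ~= " + (peakThreads() * 2));
    }
}
```

Run this (or query the same MBean via jconsole) during a representative load test rather than at idle, since the peak is what sizes the pool.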

4. Development:

- Always conduct a peer review of code. Adopt best practices; refer to the GEAR methodology.

- Develop frameworks that other solutions can adopt.

- Implement a configurable solution rather than changing the code.

- Optimize the efficiency of Java code calling APIs; cache static data.

- Partition logically by business area/domain, and by BP (batch processing) versus RT (real-time processing).

- Every package should include folders for connections, listeners, notifications, triggers, and so on, and all of that package's DB connections, listeners, notifications, and triggers should live there.

- All test, timing, log, and debug calls should exist in the DEV and QA code bases only; they should NOT be in the PROD code base once confidence in the process is high.

Package Organization

- It is important to spend time up front designing your package structure. A good package structure saves the hassle of having to reorganize folders and services later.

- Break your services up into logical folders, and do not put too many flow services under the same folder. This improves package maintainability and also improves Developer design-time performance.

- Place services used strictly for testing purposes into a dedicated test package, and never deploy these test packages to production.

webMethods provides two Service signatures:

- One is based on the IData object, the other on the Values object. Use IData objects rather than Values objects wherever possible: IData objects consume less memory and have more features (they preserve order, allow duplicate keys, etc.) than Values objects.

Drop pipeline variables as soon as they are no longer needed in the flow

This will make the flows easier to read and edit, and minimizes the Integration Server’s memory consumption.

- Be careful to close all file descriptors, I/O streams, etc., using the Java "finally" clause. This ensures that resources are cleaned up even if an exception occurs.
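As a plain-Java illustration of that cleanup rule, here is a minimal sketch showing both the classic try/finally idiom the text describes and the try-with-resources form that Java 7+ added as its modern equivalent (the class and method names are mine, for illustration only):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

public class SafeRead {
    // try-with-resources (Java 7+): the stream is closed automatically
    // when the block exits, even if read() throws.
    public static int countBytes(byte[] data) {
        try (InputStream in = new ByteArrayInputStream(data)) {
            int n = 0;
            while (in.read() != -1) n++;
            return n;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // The explicit try/finally form described in the text: close() runs
    // in finally, so the descriptor is released on every code path.
    public static int countBytesFinally(byte[] data) {
        InputStream in = null;
        try {
            in = new ByteArrayInputStream(data);
            int n = 0;
            while (in.read() != -1) n++;
            return n;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } finally {
            if (in != null) {
                try { in.close(); } catch (IOException ignore) {}
            }
        }
    }
}
```

The same pattern applies to JDBC connections, sockets, and any other handle a Java service opens.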

- When you call a service within a flow, remember that the variables created in the service are carried forward from the point of execution of that service. Also remember that input parameters of services that are set or mapped to will bleed into the pipeline, except for transformers. This can cause problems when multiple services with the same input/output names are called in succession, or inside loops where the same service is called multiple times. So DROP what is not necessary immediately after calling a service.

- When developing Java services, it is always a good idea to check the "Validate input" box on the Input/Output tab. Otherwise, the presence of each parameter will need to be validated manually and an error thrown if the variable is not found.

- Always specify a timeout value, appropriate to the document's time-to-live, for request/reply services.

- Use XQL over WQL; XQL is the more open standard.

Error handling:

In the catch sequence, always make sure to close all FTP, GD, DB, and other persistent connections. This avoids leaving multiple connections open when services throw errors. Use the pub.flow:getLastError service to get the key or session ID from the pipeline so you can close the connections.

- Always check that calls to savePipeline, savePipelineToFile, restorePipeline, and restorePipelineFromFile are removed prior to moving to production.

- There is no user or session associated with a scheduled service. This causes several complications:

Trading Networks: the wm.tn:receive service will throw an exception when called inside a scheduled service because the user check will fail. Use wm.tn.doc.route:routeXml instead, which bypasses the user check.

- The Microsoft and Sun JDBC-ODBC bridges are not thread-safe and were never meant for production environments. Use a driver from Oracle, Merant, Microsoft, etc. instead.

Custom Canonicals

Defining a custom canonical offers the following benefits:

- Organization-specific: the canonical is tailored to the individual organization's needs; only necessary fields are included.

- Easier to Maintain: because the canonical contains only the fields that are needed, custom canonicals tend to be much smaller than standards-based canonicals, and are therefore easier to maintain.

- High-Performance: because the canonical contains only the fields that are needed, it tends to be much smaller, which makes it perform better. At one customer, a quick test was performed to understand the performance impact of a custom canonical versus an industry-standard canonical. For a purchase order, a custom canonical was defined with roughly 40 fields, versus the default canonical based on the SAP IDoc standard (> 200 fields). The integration with the custom canonical ran 10 times faster.

- If transaction volume is high and performance is potentially a concern, consider defining a custom canonical.

- Neutral vs. Application-Specific: if one ERP vendor makes up the overwhelming majority of a company's application base, it might make sense to simply use that ERP vendor's document formats rather than defining or using a neutral document format. For example, if 90% of the integrations involve integrating legacy applications with SAP, it probably does not make sense to go with a neutral document format. By using the SAP document formats as the standard, we eliminate half of the maps required and also make the integrations much easier for SAP users to understand.

- Leave business validation to the back-end systems.

- Avoid defining too much business validation in the canonical, or you may find yourself needing to change the canonical often to accommodate new integrations.

- Document Ordering – parallel processing can be used to improve both performance and scalability, but only when you can accept out-of-order results.

- Document Grouping – combining a set of documents into one larger document is more efficient than sending and processing each of the smaller documents independently.
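The grouping idea can be sketched in plain Java: instead of handing each document downstream individually, accumulate them and hand over fixed-size batches, so per-call overhead (network round trip, transaction, audit record) is paid once per batch. The class and method names here are illustrative, not part of any webMethods API:

```java
import java.util.ArrayList;
import java.util.List;

public class DocumentBatcher {
    // Groups individual documents into batches of at most batchSize.
    // Downstream processing then pays its fixed overhead once per batch
    // instead of once per document.
    public static <T> List<List<T>> group(List<T> docs, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < docs.size(); i += batchSize) {
            int end = Math.min(i + batchSize, docs.size());
            batches.add(new ArrayList<>(docs.subList(i, end)));
        }
        return batches;
    }
}
```

The trade-off is latency: a document may wait until its batch fills (or a timeout fires), which is why grouping pairs naturally with the document-ordering caveat above.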

- Never run the server from the console; always run it in the background. Not only does writing to the console take time, it also synchronizes all the server threads that are writing.

- Although event handlers work asynchronously, they require a deepClone to get their pipeline. Wherever possible, remove event handlers.

- Consider minimizing audit logging and server logging in a production environment. The server log level should be kept at 1 unless you are debugging a specific production issue.

- By default, every service returns its entire pipeline back to its client. If the pipeline is large, this can increase network traffic significantly. Two tips: 1) use pub.flow:setResponse to return a customized response back to the client; 2) drop variables once they are no longer needed in the pipeline, and use pub.flow:clearPipeline to clean up the pipeline.

- Examine the use of high-overhead services, particularly inside loops. For example:

- pub.record:addToRecordList and pub.record:addToStringList do a full allocation of object.length+1 and a copy of the contents on each call. This is fine for arrays with tens or maybe hundreds of items, but bogs down when called often on large arrays.
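The cost pattern described there is easy to see in plain Java. The first method below mimics the allocate-and-copy semantics the text attributes to addToStringList (O(n) per append, O(n²) for the loop); the second accumulates in an ArrayList and converts once, which is the usual fix inside a Java service. This is an illustrative sketch, not the actual WmPublic implementation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class AppendCost {
    // Allocate length+1 and copy everything on each call -- the behavior
    // the text describes. Calling this in a loop is quadratic overall.
    public static String[] appendCopy(String[] list, String item) {
        String[] bigger = Arrays.copyOf(list, list.length + 1);
        bigger[list.length] = item;
        return bigger;
    }

    // Preferred inside loops: amortized O(1) appends into an ArrayList,
    // then a single conversion to an array at the end.
    public static String[] buildList(List<String> items) {
        List<String> acc = new ArrayList<>(items.size());
        for (String s : items) acc.add(s);
        return acc.toArray(new String[0]);
    }
}
```

The same reasoning applies to any per-iteration service that re-copies its accumulator.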

- Make sure each document is validated only once. Look for uses of pub.schema:validate, and check whether services have the validate-input or validate-output options checked.

- Make sure pub.flow:tracePipeline is only used for debugging.

- Generally, pub.flow:savePipeline / pub.flow:savePipelineToFile and pub.flow:restorePipeline / pub.flow:restorePipelineFromFile should only be used for debugging and testing.

- In webMethods 6.x, transformers offer two performance benefits when compared to regular service invocations:

1. Transformer input cloning – in webMethods 6.x, the input to a transformer is copied by reference from the pipeline. For large pipelines, transformers may improve performance because a normal flow invoke of another flow results in a shallow clone of the pipeline, whereas a transformer does not. NOTE: this differs from transformer behavior in previous webMethods versions.

2. Pipeline efficiency – transformers are passed only a reference to their mapped inputs, whereas regular invokes (pub.flow:invoke) operate on the same pipeline object as the calling service. Therefore, transformers operate on a subset of the calling service's pipeline.

- Consider using Java instead of Flow, especially for CPU-intensive logic involving looping or frequent invocations, and write transformer logic as Java services rather than Flow services. Individual transformers can run 5-15x faster in Java than in Flow.

- Remove disabled services from flows. A disabled service actually has a negative performance impact, because the flow engine still checks the service to see whether it should be invoked.

- Use the SCOPE property to pass only a portion of the pipeline to a service.

- Consider enabling service caching to cache the output of services that perform lookups of static data (e.g. a customer master data database table).
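The idea behind service-result caching can be sketched as a simple memoizing lookup: the first request for a key pays the expensive load (e.g. a master-data query), and subsequent requests are served from memory. This is a toy model of the concept, not the IS caching mechanism itself; the class name and the pluggable loader are my own illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

public class LookupCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final AtomicInteger misses = new AtomicInteger();
    private final Function<String, String> loader;

    public LookupCache(Function<String, String> loader) {
        this.loader = loader;
    }

    // First call per key invokes the loader (the "slow" lookup);
    // later calls for the same key are served from memory.
    public String get(String key) {
        return cache.computeIfAbsent(key, k -> {
            misses.incrementAndGet();
            return loader.apply(k);
        });
    }

    // Number of times the loader actually ran -- useful for verifying
    // that repeated lookups really hit the cache.
    public int misses() { return misses.get(); }
}
```

Like IS service caching, this only makes sense for data that is static over the cache lifetime; a stale-data window is the price of skipping the lookup.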

- Remove services and records that are not called. A single unused service or record does not consume much runtime memory, but a large number of them will use RAM that the JVM could be using for the application. If you have a full set of test transactions, run them all, then review the IS Service Usage page and remove any services that were never called. Make sure packages like WmSamples are not live on a production or QA machine, and make sure WmPartners and WmDB are not enabled unless they are used.

- Always use POST when working with HTTP; GET is at least 25% slower.

- Consider using stored procedures for repeated database calls instead of regular JDBC SQL statements.

- If you are loading static pieces of data frequently, consider loading and persisting the data into memory at startup instead.

- A BatchInsertSQL service can insert a large volume of data into a table more efficiently than an InsertSQL service, improving performance when a large data volume is involved.


5. Administration:

Extended Settings:

- watt.server.broker.producer.multiclient

Specifies the number of sessions for the default client. The default client is the Broker client that the Integration Server uses to publish documents to the Broker and to retrieve documents delivered to the default client. When you set this parameter to a value greater than 1, the Integration Server creates a new multi-session, shared-state Broker client named clientPrefix_DefaultClient_MultiPub to use for publishing documents to the Broker. Using a publishing client with multiple sessions can improve performance because it allows multiple threads to publish documents concurrently. The default is 1 session.

- watt.server.broker.replyConsumer.multiclient

Specifies the number of sessions for the request/reply client. The request/reply client is the Broker client that the Integration Server uses to send request documents to the Broker and to retrieve reply documents from the Broker. Increasing the number of sessions for the request/reply client can improve performance because it allows multiple requests and replies to be sent and retrieved concurrently. The default is 1 session.

- watt.server.control.triggerInputControl.delays

Corresponds to the poll delay interval. By setting this to a specific value, say X ms, you can force the trigger to keep polling the Broker every X ms indefinitely until a document is available.

- watt.server.control.triggerInputControl.delayIncrementInterval

Corresponds to the poll delay interval increment.
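My reading of how these two settings interact can be modeled as a stepped polling schedule: a list of delays, and an increment interval saying how many polls elapse before stepping to the next (longer) delay. The sketch below is a hypothetical model for reasoning about the settings, not webMethods trigger code:

```java
public class PollBackoff {
    // Models a stepped poll-delay schedule: delays[] holds successive
    // delay values (ms); after every incrementInterval polls the trigger
    // steps to the next delay, staying on the last one indefinitely.
    public static long[] schedule(long[] delays, int incrementInterval, int polls) {
        long[] out = new long[polls];
        for (int i = 0; i < polls; i++) {
            int step = Math.min(i / incrementInterval, delays.length - 1);
            out[i] = delays[step];
        }
        return out;
    }
}
```

With a single-entry delays list, the schedule degenerates to the "poll every X ms forever" behavior the text describes.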

- watt.net.clientKeepaliveTimeout=180

- watt.server.cache.isPersistent=false

watt.server.cache.isPersistent=false is especially important: it controls how webMethods persists its service cache, and setting it appropriately can improve caching performance tremendously. Refer to the documentation for more information about how service caching works in webMethods.

- Use system profiling tools to make sure that processing, disk, and memory resources are being used effectively by the overall system. For example, a multiprocessor system will not be faster than a single-processor system unless the various parts of the system can work in parallel, and this may require some profiling and load balancing. Similarly, make sure the software is configured to use all the available memory if it is abundant, and to share it efficiently if it is not. Eliminate bottlenecks with hardware upgrades (additional memory, faster disk drives, faster networking) once they have been identified, and if the hardware is cheaper than the human time or business cost of extensive tuning.

- Performance tuning tools:

- Wily can be used to detect troublesome memory leaks and flows: http://www.wilytech.com

- SAP actually uses it themselves for fixing problems in NetWeaver: http://www.wilytech.com/solutions/products/IntroscopeForNetWeaver.html

- How to analyse thread usage:

- On HP-UX, the glance tool can be quite useful for determining whether the operating-system threads-per-process limits are close to being reached; HP Performance Manager can also be used.

- JConsole – if you have enabled the JMX feature for your JVM, you can also use jconsole.

- The Integration Server itself gives important information about the current number of threads, memory usage, sessions, etc. This is accessible from the main web admin screen.

- Thread dumps can be extracted for analysis by invoking the "wMRoot:wm.server.admin:getDiagnosticData" service.

- I tried to use the tools below to analyze the thread dump information, but they did not recognize the files we generated as Java thread dump files.

Thread dump analyzing tools:

- IBM Thread and Monitor Dump Analyzer for Java [http://www.alphaworks.ibm.com/tech/jca]

- Samurai Thread Dump viewer [http://yusuke.homeip.net/samurai/en/index.html]


Logs should normally be cleaned after 30 days; after 2 weeks you can zip them. Overall, it is never good practice to keep logging on the same disk where the applications are running, as that slows everything down.

- Audit logging is typically written to a central logging database.

- Example: I had 3 projects with 34 monitors creating 34 instances at a single point in time every 5 minutes, with the maximum connections defined in the adapter set to 5, a block timeout of 16.66 minutes, and an expire timeout of 1 second.

- Since a maximum pool size of 5 was too low, it was changed to 10.

- If the problem is hanging services, however, it needs to be investigated differently: an IS restart may clear the hung services, but you still need to find out why they were hanging.

Integration Server JDBC Pool Tuning:

The Integration Server includes a connection management service that dynamically manages connections and connection pools based on configuration settings that you specify for the connection. All adapter services use connection pooling. A connection pool is a collection of connections with the same set of attributes. The Integration Server maintains connection pools in memory. Connection pools improve performance by enabling adapter services to reuse open connections instead of opening new ones.

Run-time behavior of connection pools: when you enable a connection, the Integration Server initializes the connection pool, creating the number of connection instances you specified in the connection's Minimum Pool Size field. Whenever an adapter service needs a connection, the Integration Server provides one from the pool. If no connections are available in the pool and the maximum pool size has not been reached, the server creates one or more new connections (according to the number specified in the Pool Increment Size field) and adds them to the pool. If the pool is full (as specified in the Maximum Pool Size field), the requesting service waits for the Integration Server to obtain a connection, up to the length of time specified in the Block Timeout field. Periodically, the Integration Server inspects the pool and removes inactive connections that have exceeded the expiration period specified in the Expire Timeout field. If pool initialization fails because of a network connection failure or some other exception, you can configure the system to retry the initialization any number of times, at specified intervals.
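The run-time behavior just described (a minimum number of pre-created connections, growth on demand up to a maximum, and requesters blocking up to a timeout) can be modeled with a short, self-contained Java sketch. This is a toy model for understanding the mechanics, not the WmART pool implementation; all names here are mine:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class MiniPool<T> {
    private final BlockingQueue<T> free;          // idle connections
    private final AtomicInteger total = new AtomicInteger();
    private final int maxSize;
    private final long blockTimeoutMs;
    private final Supplier<T> factory;

    public MiniPool(int minSize, int maxSize, long blockTimeoutMs, Supplier<T> factory) {
        this.free = new ArrayBlockingQueue<>(maxSize);
        this.maxSize = maxSize;
        this.blockTimeoutMs = blockTimeoutMs;
        this.factory = factory;
        // Minimum Pool Size: create these connections up front.
        for (int i = 0; i < minSize; i++) {
            free.add(factory.get());
            total.incrementAndGet();
        }
    }

    public T acquire() throws InterruptedException {
        T conn = free.poll();
        if (conn != null) return conn;            // reuse an idle connection
        if (total.incrementAndGet() <= maxSize) {
            return factory.get();                 // grow toward Maximum Pool Size
        }
        total.decrementAndGet();
        // Pool full: block up to the timeout, like the Block Timeout field.
        conn = free.poll(blockTimeoutMs, TimeUnit.MILLISECONDS);
        if (conn == null) throw new IllegalStateException("block timeout exceeded");
        return conn;
    }

    public void release(T conn) { free.offer(conn); }

    public int totalConnections() { return total.get(); }
}
```

A real pool also expires idle connections and retries failed initialization, which this sketch omits for brevity.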

- I have only tried these tips on Oracle JDBC adapter pools, but they should also be applicable to other webMethods (adapter) pools, because those use similar parameters and mechanisms.

- When processing large numbers of messages that each require one or more Oracle RDBMS interactions (a select statement, a stored procedure invocation, ...), it is important to configure your JDBC adapter connection pools properly to achieve optimal throughput.

- If you have both adapter notifications and adapter services, you will need two separate connections; otherwise you may get strange errors about transactions and the like.

- Avoid sharing connection pools across different functional areas, even if they point to the same database. Reasons for this are:

- Tuning the size of the pool becomes quite difficult if a pool has multiple types of usage.

- It is not possible to easily change the database settings for one area without impacting the other.

- One approach is to have separate pools for each package (generally; not a hard rule), since your packages should generally be divided up by functional area too.

- A good strategy: create a different database user for each functional area, even if you are using the same database. That makes it easier to detect performance problems.

- First of all, make sure that your maximum pool size is large enough. It should be at least equal to, or higher than, the maximum number of threads that can use the JDBC pool. If not, threads can block waiting for a free connection in the pool, or even throw errors when the block timeout of the JDBC pool is reached while waiting.

- Another important parameter is the number of free connections in your pool. A free connection is an active connection to the database that is not being used by any Integration Server thread. When processing a large bulk of messages, you have to make sure there are always enough free connections available in your pool; if not, new connections will have to be created during the processing of the bulk, which can have a significant performance impact.

Therefore, make sure that during bulk processing the connections in your pool do not expire before the complete bulk has been processed. This is controlled by the Expire Timeout parameter. Its default value is 1 second, which is too low in many cases, so it is often a good idea to set a higher value. You can detect an Expire Timeout value that is too small by executing the pub.art.connection:getConnectionStatistics service while the bulk message load is being processed and checking the Total Connections and Free Connections values. The Total Connections value should not decrease while there are still messages to process; if it fluctuates, new connections are having to be created, which slows down the processing of your messages. The number of Free Connections may go up and down.

Do not be misled by the Total Hits value: it only indicates the number of successfully provided connections, both pooled and newly created, and is not an indication of pool performance. If the Total Misses value is larger than 0, some DB calls may have returned an error because no connection was made available within the Block Timeout period.

- One indication of JDBC connection pool configuration problems is JDBC adapter services that use the pool appearing frequently as running services on the Service Usage page of the Integration Server admin web interface.

- If you notice a significant difference between the execution time of a SQL statement run directly on the DB (using Oracle SQL*Plus from the command line on your Integration Server) and the execution time of exactly the same SQL statement from an adapter service, this also indicates that you are taking a performance hit from having to create a JDBC connection instead of using one from the pool. I have noticed execution time differences in the order of several seconds, so don't underestimate the impact of an incorrectly configured pool when processing thousands of messages: it can increase your processing time by hours or more.

- You can also check for threads that are waiting for JDBC connections by taking a small number of JVM thread dumps (3 should be sufficient) a couple of seconds apart. If you have enabled the JMX feature for your JVM, you can also use jconsole to check for blocked threads waiting for a JDBC connection. The thread dump will look something like this:

Name: TriggerTask:90:78cERP.triggers:testTrigger

State: BLOCKED on com.wm.app.b2b.server.jca.WmConnectionPool@1b1498c owned by:      TriggerTask:90:78cERP.triggers:testTrigger

Total blocked: 310 Total waited: 1.244

Stack trace:



Product Tuning

Integration Server(Some settings):

watt.server.threadPool = 800

JVM minimum heap size = 4000MB

JVM maximum heap size = 4000MB

The following JVM parameters were added:



WmPRT package

- Database Operation Retry Limit = 1000

- Database Operation Retry Interval (sec) = 5

- Cleanup Service Execution Interval (sec) = 0

- Completed Process Expiration (sec) = 6

- Failed Process Expiration (sec) = 3600

Oracle Database Tuning:

Set REDO logs to at least 1.5 GB

Eliminate statement parsing - add the string “MaxPooledStatements=35” in the JDBC pool URL


6. Database:

ü Cache, indexes

ü Optimizing indexes for the most common queries

ü Optimizing query formulation


7. Operating System improvements:

- Tune the Windows platform: virus scanners, port scanners, etc.


8. Back End Systems/Applications:

- Seek the opinion of SMEs if bottlenecks are highlighted within these systems.

- Update components of the system written by third parties or open-source projects.


9. Deployer Tuning:

Dependency Checking Options – A key function of webMethods Deployer is checking the inter-dependencies of the components being deployed. Dependency checking can be set (per deployment project) to fully automatic, partially automatic, or manual. The default is to check dependencies always (fully automatic), which checks the dependencies every time the project is modified and through all the steps of the deployment lifecycle. This mode of dependency checking is quite performance-intensive and can slow down the system. Depending on the quantum of changes in the build system, the dependency check option can be set to reduced (checked only at build and deploy time) or manual, which will give the Deployer server a performance boost. Use 'reduced' and 'manual' for changes with moderate and minimal impact, respectively.

Source/Target Server Settings – Target and source Integration Servers are defined as remote server aliases on the Deployer IS. When a remote server alias is configured, the default keep-alive time is 0, which causes the Deployer to reconnect to the target continuously and takes a performance hit. Change the default value to 10 or 20 minutes to reduce the frequency of these checks. Note that this setting can significantly improve performance when several source/target servers are configured.

Target Server Response – It has been observed that an unresponsive target server can compromise the performance of the Deployer; in some cases the Deployer screens will fail to load. In such an event, check the responsiveness of the target servers first. If a target server is available but requests are not completing, try reloading the WmDeployerResource package on the target (for Integration Servers). If a single project is used to deploy packages to multiple target servers, the unavailability or unresponsiveness of one target server can stop deployment to the rest; in that case, deployment can continue after removing the faulty target server from the deployment map.
If the screen still does not respond, try editing the configuration files in the WmDeployer package.

Use Clustered Deployment Approach – Instead of deploying separately to each node of a clustered environment, use the clustered deployment approach, where deployment is done to the primary node (IS, TN, or PRT) and the primary node deploys to the secondary nodes. This saves valuable resources on the Deployer server.

Consider Automation – Most webMethods Deployer functionality can be automated through services and the out-of-the-box scripts supplied with the product; both respond faster than the user interface. However, some effort is required to write wrapper scripts/services that combine the Deployer scripts into an automated process. If the enterprise is large and has ongoing deployment requirements, investing some time and money in automation will prove productive in the long run.
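A wrapper around the Deployer scripts can be as simple as building and launching the command line from Java. The sketch below only assembles the command; the script name and flags are illustrative placeholders, since the actual Deployer CLI varies by version and must be checked against the product documentation:

```java
import java.util.Arrays;
import java.util.List;

public class DeployerAutomation {
    // Builds a command line for a scripted deployment run.
    // "projectautomator.sh", "--project", and "--set" are hypothetical
    // placeholders standing in for the real Deployer script and options.
    public static List<String> buildCommand(String project, String deploymentSet) {
        return Arrays.asList("sh", "projectautomator.sh",
                             "--project", project,
                             "--set", deploymentSet);
    }

    public static void main(String[] args) throws Exception {
        List<String> cmd = buildCommand("OrderFlow", "PROD_SET");
        System.out.println("Would run: " + cmd);
        // To actually execute: new ProcessBuilder(cmd).inheritIO().start().waitFor();
    }
}
```

Wrapping the scripts this way lets a CI job or scheduler drive deployments without touching the Deployer UI.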


10. Process Tuning:

Performance Considerations

Archiving the active process tables:

ü Minimizing the amount of data in the active process tables will improve performance, so it is recommended that you archive or delete this data on a regular basis. Also, when auditGuaranteed is set to true, the temporary storage tablespace associated with the Database user starts to fill up.

ü This tablespace should be re-initialized from time to time.

ü Guaranteed high quality of service is costly:

ü Quality of service options at both ends of the spectrum were used to get an indication of the full range of performance.

ü These tests reinforced the concept that the higher the QoS, the higher the performance penalty will be.

ü Designers need to balance the need for performance with the need for data security. Designer and Integration Server provide a wide range of QoS options that allow one to strike a balance between these seemingly competing objectives.
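The regular archiving recommended above typically reduces to deleting completed instances older than a retention window. The sketch below only builds the SQL; the table and column names are hypothetical placeholders, not the actual ProcessAudit schema, which should be checked before running anything like this:

```java
public class ProcessCleanup {
    // Builds a retention-based DELETE for a completed-process table.
    // Table and column names here are illustrative only.
    public static String cleanupSql(String table, int retentionDays) {
        return "DELETE FROM " + table
             + " WHERE status = 'COMPLETED'"
             + " AND end_time < SYSDATE - " + retentionDays;
    }

    public static void main(String[] args) {
        System.out.println(cleanupSql("PROCESS_INSTANCE", 30));
        // -> DELETE FROM PROCESS_INSTANCE WHERE status = 'COMPLETED' AND end_time < SYSDATE - 30
    }
}
```

Scheduling such a statement (or, better, an archive-then-delete pair) keeps the active process tables small, which is the point made above.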

Database utilization:

ü Because the limiting factor for HQoS performance is the database utilization, you should always use a powerful machine to host it. Also, deploy the fastest available storage to avoid an I/O bottleneck.

ü Configuration of the ProcessAudit JDBC pool is a key factor in HQOS performance. The Maximum Connections parameter should be manipulated carefully because values that are too large can lead to poor database performance.
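The caution about Maximum Connections can be seen in miniature: a pool is essentially a permit counter, and raising the limit just moves the queueing from the Integration Server into the database, where it is more expensive. A minimal sketch of that permit behaviour using a `Semaphore` (a generic illustration, not the actual JDBC pool implementation):

```java
import java.util.concurrent.Semaphore;

public class PoolPermits {
    private final Semaphore permits;

    // maxConnections mirrors the pool's Maximum Connections setting.
    public PoolPermits(int maxConnections) {
        permits = new Semaphore(maxConnections, true); // fair: FIFO ordering
    }

    // Non-blocking: returns false when the pool is exhausted, so the
    // caller backs off instead of piling more load onto the database.
    public boolean tryAcquire() {
        return permits.tryAcquire();
    }

    public void release() {
        permits.release();
    }

    public int available() {
        return permits.availablePermits();
    }

    public static void main(String[] args) {
        PoolPermits pool = new PoolPermits(2);
        System.out.println(pool.tryAcquire()); // true
        System.out.println(pool.tryAcquire()); // true
        System.out.println(pool.tryAcquire()); // false: wait or back off
    }
}
```

A too-large maximum means the semaphore almost never says no, and the database ends up doing the queueing instead.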

How to choose QoS:

QoS has a big impact on throughput, so choosing QoS settings is a consequential decision. When maximum reliability is needed, HQoS is the only choice; when higher throughput is needed, LQoS is the best choice. For increased capacity where lower reliability can be tolerated, MQoS is an option.


11. Broker Performance Tuning:

ü Generally, only one broker should be deployed per broker server.

ü Although filters are defined in the Integration Server trigger, they are applied on the Broker to minimize document traffic.

ü Client Group Storage Type Guidelines

ü Client groups receiving ONLY synchronous request/reply documents should be volatile.

ü Client Groups for integrations that ONLY publish documents should be volatile.

ü Client Groups receiving ONLY asynchronous control documents (canonicals) should be guaranteed.

ü If a client group handles both synchronous and asynchronous documents, it should be guaranteed, UNLESS the synchronous document load is very large. In that case, split the load across two clients: one in a volatile client group for synchronous documents and another in a guaranteed client group for asynchronous documents.

ü Use EntireX. Refer:


