This article applies to Splunk Phantom versions 4.6, 4.5, 4.2, 4.1, 4.0, 3.5, 3.0, 2.1, and 2.0.

The Active Directory/LDAP debug script is used to view detailed output of the connection and authentication attempt between Splunk Phantom and an Active Directory instance. The script accesses the Splunk Phantom database and uses the Active Directory server configuration and credentials as configured in Splunk Phantom. For a copy of the debug script, open a Support case.

WARNING: The debug script output will contain the Active Directory password in plain text. It is your responsibility to sanitize the report before sharing it.

Before running the script, verify that the Splunk Phantom Active Directory settings are configured with the credentials you intend the debug script to use:
1. In Splunk Phantom, select Administration > System Settings > Authentication.
2. Verify the credentials listed in the Active Directory Settings fields.
3. Click Save Changes.

Run the Active Directory/LDAP connection and authentication debug script:
1. Transfer the script "test_ldap.pyc" to the Phantom server.
2. Phantom 2.1 and earlier: change the current user to apache, then run the script.
   [root@localhost user]# sudo -u apache bash
   [root@localhost user]# python2.7 test_ldap.pyc
3. Phantom 3.0 and later: change the current user to nginx, then run the script.
   [root@localhost user]# sudo -u nginx bash
   [root@localhost user]# phenv python2.7 test_ldap.pyc
Below is an example output from the script in Splunk Phantom 3.0 showing a successful Active Directory connection:

[root@localhost user]# sudo -u nginx bash
bash-4.1$ phenv python2.7 test_ldap.pyc
ldap_create
ldap_url_parse_ext(ldap://dc1.corp.contoso.com)
*** ldap://dc1.corp.contoso.com - SimpleLDAPObject.set_option ((17, 3), {})
*** ldap://dc1.corp.contoso.com - SimpleLDAPObject.set_option ((8, 0), {})
*** ldap://dc1.corp.contoso.com - SimpleLDAPObject.set_option ((20485, 10.0), {})
*** ldap://dc1.corp.contoso.com - SimpleLDAPObject.set_option ((20482, 10.0), {})
*** ldap://dc1.corp.contoso.com - SimpleLDAPObject.simple_bind (('administrator@corp.contoso.com', 'PASSWORD', None, None), {})
ldap_sasl_bind
ldap_send_initial_request
ldap_new_connection 1 1 0
ldap_int_open_connection
ldap_connect_to_host: TCP dc1.corp.contoso.com:389
ldap_new_socket: 4
ldap_prepare_socket: 4
ldap_connect_to_host: Trying 10.17.1.42:389
ldap_pvt_connect: fd: 4 tm: 10 async: 0
ldap_ndelay_on: 4
attempting to connect:
connect errno: 115
ldap_int_poll: fd: 4 tm: 10
ldap_is_sock_ready: 4
ldap_ndelay_off: 4
ldap_pvt_connect: 0
ldap_open_defconn: successful
ldap_send_server_request
*** ldap://dc1.corp.contoso.com - SimpleLDAPObject.result4 ((1, 1, -1, 0, 0, 0), {})
ldap_result ld 0x1676e00 msgid 1
wait4msg ld 0x1676e00 msgid 1 (timeout 10000000 usec)
wait4msg continue ld 0x1676e00 msgid 1 all 1
** ld 0x1676e00 Connections:
* host: dc1.corp.contoso.com port: 389 (default)
refcnt: 2 status: Connected
last used: Thu Dec 1 16:00:38 2016
** ld 0x1676e00 Outstanding Requests:
* msgid 1, origid 1, status InProgress
outstanding referrals 0, parent count 0
ld 0x1676e00 request count 1 (abandoned 0)
** ld 0x1676e00 Response Queue:
Empty
ld 0x1676e00 response count 0
ldap_chkResponseList ld 0x1676e00 msgid 1 all 1
ldap_chkResponseList returns ld 0x1676e00 NULL
ldap_int_select
read1msg: ld 0x1676e00 msgid 1 all 1
read1msg: ld 0x1676e00 msgid 1 message type bind
read1msg: ld 0x1676e00 0 new referrals
read1msg: mark request completed, ld 0x1676e00 msgid 1
request done: ld 0x1676e00 msgid 1
res_errno: 0, res_error: <>, res_matched: <>
ldap_free_request (origid 1, msgid 1)
ldap_parse_result
ldap_msgfree
ldap_free_connection 1 1
ldap_send_unbind
ldap_free_connection: actually freed
Preemptive note: I am not looking for instructions on how to run a subsearch.

I have results from a completed search that goes back 90 days and took an extremely long time to run. Let's say the search was:

index=foo

Now that I have those results, I need to filter them down, let's say to:

index=foo host=bar

I'd like to search within those results without Splunk having to query the entire indexer again, to reduce the amount of time the second search takes. Basically, is there any way to have Splunk query the existing, cached results already on the search head rather than having to query the indexer twice?
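For illustration, something along these lines is what I'm imagining: Splunk's loadjob command can re-read a completed job's cached results by its search ID instead of re-running it against the indexers (the SID below is a made-up placeholder):

```
| loadjob 1618426000.12345
| search host=bar
```

This would only work while the original job's artifacts are still retained on the search head, which is why I'm asking whether this approach (or something like it) is viable here.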
Hello. Good afternoon. Just installed the Kusto Grabber TA on our Splunk HF. After adding a new input, I receive an "access token" error. See below:

04-20-2021 19:13:55.089 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-azure-log-analytics-kql-grabber/bin/azure_log_analytics.py" ApiToken = json.loads(ApiReturn.content)["access_token"]
04-20-2021 19:13:55.089 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-azure-log-analytics-kql-grabber/bin/azure_log_analytics.py" KeyError: 'access_token'
04-20-2021 19:13:55.118 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-azure-log-analytics-kql-grabber/bin/azure_log_analytics.py" ERROR'access_token'

How can I best resolve this issue?

Regards,
Max
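For context on the traceback: the KeyError means the token endpoint's JSON reply did not contain an "access_token" key, which usually indicates the authentication request itself failed (bad client ID/secret, tenant, or scope). The sketch below is not the TA's actual code; it just shows defensive parsing of an OAuth2 token response so a failed auth produces a readable error instead of a bare KeyError:

```python
import json

def extract_access_token(response_text):
    """Parse an OAuth2 token response; raise a readable error if auth failed."""
    payload = json.loads(response_text)
    if "access_token" not in payload:
        # Azure AD typically returns "error" / "error_description" on failure
        raise RuntimeError(
            "Token request failed: %s" % payload.get("error_description", payload)
        )
    return payload["access_token"]
```

Printing the raw response body at the point of failure (before indexing into it) would reveal the actual Azure error description.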
Trying to use Splunk. I installed ta-pfsense, and I have data showing up from my pfSense firewall; the problem is that none of the fields seem to be showing up. Where do I begin trying to figure out what's wrong?
This article describes a workaround for the "user parameter must be of type string" error that can appear when you run a playbook, even though the parameter is a string.

An example of this error is shown below:

TypeError: Error in phantom.prompt(): user parameter must be of type string

An example call that generates this error is shown below:

phantom.prompt(user="support", message="Verify incident details", respond_in_mins=30, name="prompt_1", callback=prompt_1_callback)

The issue can be caused by an import that changes what is considered a string, such as:

from __future__ import unicode_literals

To resolve the issue, explicitly convert the string values using str(). For example:

phantom.prompt(user=str("support"), message=str("Verify incident details"), respond_in_mins=30, name=str("prompt_1"), callback=prompt_1_callback)
NOTE: These steps were verified using Phantom version 4.2.7532 and Splunk Universal Forwarder version 7.2.6.

Network port conflicts may arise when Splunk Phantom is installed on a system with an existing Splunk universal forwarder that is using default port settings. By default, the universal forwarder uses port 8088 for the HTTP Event Collector (HEC) and port 8089 for the REST API. Those are the same ports used by the Splunk instance that is bundled with Splunk Phantom, which results in conflicts when provisioning the bundled Splunk instance.

Symptoms of this problem include an error in the Splunk Phantom web interface when applying a license key ("Failed to update license: status"), and "ConnectionError", "AttributeError: status", and "Http400: status" messages in Splunk Phantom's wsgi.log file. This is because the bundled Splunk instance does not start due to the port conflicts, so Splunk Phantom cannot contact it to completely apply the license key.

The steps to correct this involve selecting new ports for the Splunk universal forwarder to use by modifying Splunk and Splunk Phantom configuration files, aligning the value of the HEC token in the bundled Splunk instance with that in the Splunk Phantom search settings, and restarting services.

Change the ports being used by the existing Splunk universal forwarder

1. Select two available ports other than 8088 and 8089 for use by the existing Splunk UF HEC and REST API.
2. Override the HEC port specification by editing or creating the {SPLUNK_UF_HOME}/etc/apps/splunk_httpinput/local/inputs.conf file and defining the [http] stanza and port attribute. For example:

   [http]
   port = <your-custom-HEC-port>

3. Override the REST API port specification by editing or creating the {SPLUNK_UF_HOME}/etc/system/local/web.conf file and defining the [settings] stanza and mgmtHostPort attribute.
   For example:

   [settings]
   mgmtHostPort = 127.0.0.1:<your-custom-REST-port>

4. Restart the universal forwarder to apply the changes to the configuration files:

   {SPLUNK_UF_HOME}/bin/splunk restart

5. Use the netstat command to verify the universal forwarder is no longer using ports 8088 and 8089 and that they are free. If they are in use, verify that the configuration changes detailed above are correct. The absence of output from this command indicates the ports are free:

   netstat --numeric --listening | grep 808[89]

Align the value of the HEC tokens

1. Rename {PHANTOM_HOME}/splunk/etc/passwd to {PHANTOM_HOME}/splunk/etc/passwd.bak.
2. Edit {PHANTOM_HOME}/splunk/etc/apps/splunk_httpinput/local/inputs.conf. Save the token attribute value from the [http://phantom-token] stanza. For example:

   [http://phantom-token]
   token = e0d171a1-641c-4ef9-873e-2b2d58ef0e8b

   Your token attribute value will be different from the example. After saving the token value, delete the entire [http://phantom-token] stanza. After saving the file, ensure it is still owned by user phantom and group phantom.

3. Open a Django shell:

   phenv python2.7 /opt/phantom/www/manage.py shell

4. Run the following commands from the Django shell:

   from phantom_ui.ui.models.system import SystemSettings
   s = SystemSettings.get_settings()
   del s.search_settings['splunk']['local']
   s.save()

   After running these commands, leave the Django shell open.
5. Open a Bash shell and restart the Splunk instance bundled with Splunk Phantom:

   {PHANTOM_HOME}/bin/phsvc restart splunk

6. Run the following command:

   su - phantom --shell=/bin/bash -c "phenv python2.7 /opt/phantom/bin/insert_splunk_config_to_db.pyc"

7. From the Django shell, run the following commands:

   s = SystemSettings.get_settings()
   s.search_settings['splunk']['local']['hec']['token']

8. In the Django shell, run the following commands only if the output from the preceding command is blank:

   s.search_settings['splunk']['local']['hec']['token'] = 'token_saved_in_previous_step'
   s.save()

9. Press CTRL-D to exit the Django shell.
10. Edit {PHANTOM_HOME}/splunk/etc/apps/splunk_httpinput/local/inputs.conf and verify whether the [http://phantom-token] stanza exists. If not, add the following and save the file:

    [http://phantom-token]
    token = <token-saved-in-previous-step>

At this point the value of the HEC token in the bundled Splunk configuration (/opt/phantom/splunk/etc/apps/splunk_httpinput/local/inputs.conf) matches the value of the HEC token in the Splunk Phantom search settings (search_settings['splunk']['local']['hec']['token']).

11. Restart the Splunk instance bundled with Splunk Phantom:

    {PHANTOM_HOME}/bin/phsvc restart splunk
Hey there,

I have a _raw from which I am extracting a timestamp, but it is in a bad format. So I wanted to create a calculated field (via the Splunk interface option, not in the .conf files, to which I don't have access). Other calculated fields seem to work.

Basically, I have a field called "exTimeString" and I want to create a calculated field exTimeStamp. What I put into the eval field is:

strptime(exTimeString,"%Y-%m-%dT%H:%M:%S")

Unfortunately it doesn't work. Is it because of the strptime? Or maybe the % characters cause issues here?
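One way to rule out the format string itself is to test it outside Splunk; Splunk's strptime uses the same conversion specifiers as the C/Python strptime family. A quick Python check (the sample value below is a made-up example in the same shape as exTimeString):

```python
from datetime import datetime

# Hypothetical sample value in the same shape as exTimeString
sample = "2021-04-23T11:30:29"
parsed = datetime.strptime(sample, "%Y-%m-%dT%H:%M:%S")
print(parsed.year)  # 2021
```

If the format parses cleanly outside Splunk, the problem is more likely in how the calculated field is defined (or in the actual field values) than in the % specifiers themselves.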
Hello Friends,

I was wondering if there is a way in the new Dashboard Studio to visualize charts and single values only after the underlying queries have fully finished loading. I know that in the classic dashboards we could choose to show a progress bar so that we knew when the values were fully calculated. Is there something similar in the new version?

I have read a lot of documentation without any luck. Thanks for the help! Have a great week.
I'm trying to get the bytes of indexed events, to find out how much indexing each event code in our Windows event log security events is taking up. Below is what I have, but I'm not sure it will really get me the bytes. Sure, it will get me the relative sizes, but I'm specifically looking for the bytes. My hunch is that what I have below isn't totally correct, because the data could be in ASCII (one byte per character), UTF-8 (one to four bytes per character), UTF-16 (two to four bytes per character), etc. Does Splunk store the actual bytes anywhere, and if not, is there a way to get it to? Thoughts?

index="wineventlog" source="WinEventLog:Security"
| eval bytes = len(_raw)
| stats sum(bytes) by EventCode
| sort sum(bytes) desc
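To illustrate the encoding concern: len(_raw) counts characters, while the on-disk byte count depends on the encoding. A quick Python illustration of the difference (the sample string is made up):

```python
s = "caf\u00e9"                     # 4 characters, one of them non-ASCII
print(len(s))                       # 4 (characters)
print(len(s.encode("utf-8")))       # 5 (bytes in UTF-8)
print(len(s.encode("utf-16-le")))   # 8 (bytes in UTF-16)
```

So for pure-ASCII event text the character count and UTF-8 byte count happen to agree, but any non-ASCII characters make them diverge.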
We have a 51-node indexer cluster where we do not replicate index bucket copies. We only have a primary copy of the buckets (so actually no "copy", just a single instance of each bucket on a single indexer), and we use the CM/IDX cluster for its management capabilities. Our repfactor=1.

Will data rebalancing, for the sole purpose of moving those single primary buckets amassed on earlier-built indexers onto newer-built indexers, work to average out storage across the cluster? I've heard from PS that it will, but I've heard from other Splunk admins that it will only work with "copies" of bucket data, and since we don't have "copies", only single instances of primary buckets, it will not work.

We are not yet using SmartStore. We have over 600TB of storage between hot/warm/cold. All of it is through GCP and is attached/mounted to the VMs. We would probably want to do a searchable rebalance if it would even work on our cluster. Thanks in advance!
Hi All,

I am having a challenge filtering out the highest value and preparing a new column.

Code:

index=nw_ppm
| table "From Device", "To Device", "Latency", "Time (UTC/GMT)"
| search Latency!=0
| eval Latency = round(Latency, 2)
| rename "Time (UTC/GMT)" as Time
| xyseries Time "From Device" "Latency"

Table I am getting:

Time      IPSLA1  IPSLA2  IPSLA3
10:13:00          38      10.1
10:14:00  77.77
10:23:00  77      35      9.89
10:34:00  78.35
10:37:00                  10.76
10:43:00  78      36.29   10.61
11:13:00  79
11:14:00  72.82
11:23:00          36.33
11:24:00  73.02
11:33:00          37.67

Requirement: I want the highest value in each row to be populated into a new last column.

Expected output table:

Time      IPSLA1  IPSLA2  IPSLA3  Highest
10:13:00          38      10.1    38
10:14:00  77.77                   77.77
10:23:00  77      35      9.89    77
10:34:00  78.35                   78.35
10:37:00                  10.76   10.76
10:43:00  78      36.29   10.61   78
11:13:00  79                      79
11:14:00  72.82                   72.82
11:23:00          36.33           36.33
11:24:00  73.02                   73.02
11:33:00          37.67           37.67

Also, the "From Device" list is:

index=nw_ppm | table "From Device"

From Device
IPSLA1
IPSLA2
IPSLA3
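For illustration, an untested sketch of the shape I'm after, appended to the existing search (assuming eval's max() skips the empty cells, which would need verifying):

```
| xyseries Time "From Device" "Latency"
| eval Highest = max(IPSLA1, IPSLA2, IPSLA3)
```

This hard-codes the three device names from the "From Device" list; a foreach-based variant would be needed if the device list can change.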
We are moving from one datacenter to another, which requires changing the IP addresses of all Splunk instances. All of these instances are running Splunk 8.1.1 and will soon be upgraded to 8.1.3.

License Master / Monitoring Console - single server.
SH (non-clustered) - separate servers for SH1 and SH2. Each SH is configured with 3 indexer peers in three different regions, and only one indexer will be moved at a time.
Indexer servers - separate servers for each indexer, 3 in total.
UF servers.

Is there any procedure/doc available? I could not find one in the Splunk docs.
Hey gang - searching for missing data is probably the weakest part of my Splunk skillset. I just have a hard time thinking through how to even write such a query.

I have a transaction that usually goes event_name=A, event_name=B, event_name=C. But I'm trying to research a situation where B is sometimes missing. I'd like to be able to build a timechart that shows the number of transactions in which B is missing. The events in a transaction are all linked by a common identifier that we'll call session_id.

I've looked at the transaction command and I suppose I understand that:

index=indexname event_name=A OR event_name=B OR event_name=C
| transaction session_id startsWith=A endsWith=C maxspan=10m

But I don't know where to go from here.
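For illustration, a stats-based shape is what I imagine might work (untested sketch; the field and index names are the ones from my example above):

```
index=indexname event_name=A OR event_name=B OR event_name=C
| stats values(event_name) as events min(_time) as _time by session_id
| where isnull(mvfind(events, "B"))
| timechart count
```

The idea being: collect the set of event names per session, keep only sessions where "B" never appears (mvfind returns null on no match), then timechart those sessions by their start time.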
When trying to install the UberAgent applications (UberAgent UXM and UberAgent Indexer), both fail with 'error - Forbidden' messages. There does not appear to be an 'install app from file' option for Splunk Cloud either. How can I get these 2 apps installed? I am on a trial for a POC, so I don't have a support plan.
I am trying to split some data into different source types using a lookup table. I am testing this locally. I have a source type called A and wish to extract fields to source type B. A snippet of my data is below.

4/23/21 11:30:29.000 AM
23 Fri Apr 23 2021 11:30:29 www1 sshd[4878]: Failed password for invalid user SAMPLE123:ABC11:snmp from 10.0.0.1 port 3118 ssh
host = 192.168.1.1  source = /A.log  sourcetype = A

props.conf:

[a]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
pulldown_type = 1
LOOKUP-alookup = lookuptable snmp_trap AS host OUTPUT host AS host_output
TRANSFORMS-changesourcetype = B

transforms.conf:

[lookuptable]
batch_index_query = 0
case_sensitive_match = 0
filename = lookuptable.csv
max_matches = 1
min_matches = 1

lookuptable.csv:

host      snmp_poll              syslog                     snmp_trap
10.0.0.1  SAMPLE123:ABC11:ipfix  SAMPLE123:ABC11:snmp_trap  SAMPLE123:ABC11:syslog

I have achieved something similar in the past using regex to separate source types, but I am having issues doing this via a lookup table. Any help appreciated.
Hi,

New to AppDynamics. I was curious whether it is possible to create a dashboard or query to identify which endpoints are performing the most SQL queries.

Thanks,
Wayne
Hi,

I am trying to find the API for disabling alerts in Splunk. My view in Splunk:

App: xyz
Alert name: test_alert

I need help with the API call for disabling the above alert.
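For reference, alerts are saved searches, so the saved-searches REST endpoint should be able to toggle them. A hedged sketch (hostname, credentials, and the nobody owner in the path are placeholder assumptions to be adapted):

```
curl -k -u <user>:<password> \
  "https://<splunk-host>:8089/servicesNS/nobody/xyz/saved/searches/test_alert" \
  -d disabled=1
```

Setting disabled=0 via the same endpoint would re-enable the alert.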
I am working with the Splunk JavaScript SDK to execute a custom Python script. In the code below, I want to execute errorfunc in case the myResults.on("data", function () ...) event is not triggered. How can I express a "no event" condition?

require([
    'underscore',
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/searchmanager',
    'splunkjs/mvc/simplexml/ready!'
], function (_, $, mvc, SearchManager) {
    console.log("sample.js has been loaded");
    buttonClick();

    function buttonClick() {
        $("#run_search_btn").on("click", function () {
            console.log("Clicked");
            $('#run_search_btn').attr("disabled", "disabled");
            script1 = "| script XPDM_Creation";
            function success(val_cnt, val) {
                console.log("Search Result: Successful");
                console.log("Result:", val_cnt);
                console.log(val);
                alert("Generated");
                $("#run_search_btn").removeAttr("disabled");
            }
            function err() {
                console.log("Search Result: Failed");
                alert("Generation Failed");
                $("#run_search_btn").removeAttr("disabled");
            }
            runScript1(script1, success, err);
        });
    }

    function runScript1(script1, successfunc, errorfunc) {
        var result = [];
        var lookupSearch = new SearchManager({
            earliest_time: "-24h@h",
            latest_time: "now",
            search: script1
        });
        var myResults = lookupSearch.data("results");
        lookupSearch.on("search:done", function () {
            console.log(myResults);
            myResults.on("data", function () {
                result_cnt = myResults.data().rows;
                myResults_text = myResults.data().rows[0][0];
                successfunc(result_cnt, myResults_text);
            });
            errorfunc();
        });
        lookupSearch.on('search:failed', function (properties) {
            // Print the entire properties object
            console.log("FAILED:", properties);
            errorfunc();
        });
        lookupSearch.on('search:error', function (properties) {
            // Print the entire properties object
            console.log("FAILED:", properties);
            errorfunc();
        });
        lookupSearch.on('search:progress', function (properties) {
            // Print just the event count from the search job
            console.log("IN PROGRESS.\nEvents so far:", properties.content.eventCount);
        });
    }
});
We are trying to create a data model with a custom _time field. We created the data model and added a calculated field (SUBMIT_DATE_cron_e) that calculates a UNIX time with microseconds (like 1619093900.0043). We then created another calculated field called _time and set it equal to SUBMIT_DATE_cron_e. This effectively overwrites the inherited (original) _time field. These steps worked well.

A problem occurred when setting the data model to accelerated. If I search the data model during the acceleration build process and inspect the _time field, I see times in UNIX format with microseconds (like 1619093900.0043), as intended. However, as the build progresses, the _time values change to UNIX time with no microseconds (like 1619093900). It looks like the _time field is truncated to whole seconds. Is this by design for accelerated data models? Is there a way to have a _time field in UNIX format with microseconds?
Hi Team,

I have a requirement to filter out, for the source [WinEventLog:Security] on 14 hosts (host and ComputerName are the same), events with EventCode 4624 or 4634 when the Account Name is "-" or "*$" (where * represents one of the 14 host names). These logs should be filtered out before ingestion.

We have a deployment server in place, and a separate customized app "windows_inputs" for pushing the Windows parameters to all the client machines. In the app "windows_inputs" we have an inputs.conf file with a stanza for the source [WinEventLog://Security], and that stanza already contains around 11 blacklists.

Sample:

[WinEventLog://Security]
disabled = 0
current_only = 1
blacklist = xyz
blacklist1 = abc
blacklist2 = def
blacklist3 = ghi
:
:
blacklist11 = xyz
renderXml = 0
index = wineventlog

My requirement: the hosts (host and ComputerName are the same, 14 in total) are:

exmirr01
exmirr02
exmirr03
exmirr04
exmirr05
exmirr06
exmirr07
exmirr08
exmirr09
exmirr10
exmirr11
exmirr12
exmirr13
exmirr14

And the Account Names look like this:

Account Name : -
Account Name : exmirr01$
Account Name : exmirr02$
Account Name : exmirr03$
Account Name : exmirr04$
Account Name : exmirr05$
Account Name : exmirr06$
Account Name : exmirr07$
Account Name : exmirr08$
Account Name : exmirr09$
Account Name : exmirr10$
Account Name : exmirr11$
Account Name : exmirr12$
Account Name : exmirr13$
Account Name : exmirr14$

So how can I filter out the logs for those ComputerNames and EventCodes when the Account Name is "-" or "*$"? Should I add another entry in the same app "windows_inputs" under inputs.conf, something like:

blacklist12 = ... and so on

Or should I route the outputs from those 14 hosts through a HF server before they reach the indexers? In that case, can I place props and transforms on the HF server to filter on this condition? Kindly help me with this.
Also, what would the stanza be if I place it in inputs.conf, and what would the props and transforms be if they should instead be placed on the Heavy Forwarder server? Kindly help with my request.
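For illustration, a sketch of what an additional blacklist line might look like, assuming the standard key="regex" format of WinEventLog blacklists; the regex is untested and would need validating against the actual event text (for example, 4624 events contain more than one "Account Name" line):

```
blacklist12 = EventCode="^(4624|4634)$" Message="Account\sName:\s+(-|exmirr(0[1-9]|1[0-4])\$)"
```

Since the inputs.conf blacklist runs on the forwarder itself, this style of filtering would not require routing the hosts through a HF; the props/transforms route is the alternative when filtering must happen after forwarding.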