All Topics



I am using Splunk to extract a number of fields from XML data that is contained in a log file. The file is very large; this is part of it:

xmlns:ns2="http://ground.fedex.com/schemas/linehaul/TMSCommon"> PURCHASEDLINEHAUL APPROVE 116029927 104257037 104257037 1 2020-02-20T21:53:39.000Z .... more lines here that are not important 1587040 FXTR DRAY RULE PZ1 923 RLTO 330 RESOURCE DRIVE LH PHONE 877-851-3543 true

This query selects the XML text in the log file, and some of the fields are extracted and can be added to a table (not including the source and sourcetype..):

| xmlkv | table purchCostReference, eventType, carrier, billingMethod

But I need more fields that are child elements within the XML data. One of them is numberCode. I am trying to use xpath to extract these additional fields:

| xmlkv | xpath "//tmsTrip/purchasedCost/purchasedCostTripSegment/origin/ns2:numberCode" outfield=Origin | table purchCostReference, eventType, carrier, billingMethod, Origin

But no Origin data is returned when I add the field to the table. There is no error; the Origin column is simply empty. What am I doing wrong with the xpath command that it is not showing any data?
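The ns2: prefix in the path is often the culprit with XPath tooling in general: a prefixed step only matches when the prefix is explicitly mapped to its namespace URI, and some tools fail silently rather than erroring (a frequently suggested workaround for Splunk's xpath is a step like *[local-name()='numberCode']). The behavior can be reproduced with Python's standard xml.etree; the XML below is a made-up, simplified stand-in for the real event, so element nesting is an assumption for illustration only:

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical stand-in for the real event XML.
doc = """<tmsTrip xmlns:ns2="http://ground.fedex.com/schemas/linehaul/TMSCommon">
  <purchasedCost><purchasedCostTripSegment>
    <origin><ns2:numberCode>923</ns2:numberCode></origin>
  </purchasedCostTripSegment></purchasedCost>
</tmsTrip>"""
root = ET.fromstring(doc)

# With the prefix mapped to its namespace URI, the path matches:
ns = {"ns2": "http://ground.fedex.com/schemas/linehaul/TMSCommon"}
code = root.find("purchasedCost/purchasedCostTripSegment/origin/ns2:numberCode", ns)
print(code.text)  # 923

# Without the mapping, the same path cannot resolve the prefix at all:
unmapped_failed = False
try:
    root.find("purchasedCost/purchasedCostTripSegment/origin/ns2:numberCode")
except SyntaxError:  # ElementTree rejects an unmapped prefix outright
    unmapped_failed = True
```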
Hi All, I have the logs below and need to get an HTTP status code count.

10.176.242.7 - app [21/May/2020:16:09:01 +0000] "GET /data/app1/2016-11-04/2582478/0CA087DB-8F72-4E5D-9F9C-F4E0C362601F.pdf.zip HTTP/1.1" 200 95098
10.176.242.7 - app [21/May/2020:16:09:01 +0000] "GET /data/app2/2016-11-04/2582478/0CA087DB-8F72-4E5D-9F9C-F4E0C362601F.pdf.zip HTTP/1.1" 401 95098
10.176.242.7 - app [21/May/2020:16:09:01 +0000] "GET /data/app3/2016-11-04/2582478/0CA087DB-8F72-4E5D-9F9C-F4E0C362601F.pdf.zip HTTP/1.1" 404 95098

Please help me create a Splunk search. --Raja
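In SPL this is typically a rex extraction followed by "stats count by status"; as a language-neutral sketch of the same extraction, here is the logic in Python, with the sample lines copied from the question:

```python
import re
from collections import Counter

logs = [
    '10.176.242.7 - app [21/May/2020:16:09:01 +0000] "GET /data/app1/2016-11-04/2582478/0CA087DB-8F72-4E5D-9F9C-F4E0C362601F.pdf.zip HTTP/1.1" 200 95098',
    '10.176.242.7 - app [21/May/2020:16:09:01 +0000] "GET /data/app2/2016-11-04/2582478/0CA087DB-8F72-4E5D-9F9C-F4E0C362601F.pdf.zip HTTP/1.1" 401 95098',
    '10.176.242.7 - app [21/May/2020:16:09:01 +0000] "GET /data/app3/2016-11-04/2582478/0CA087DB-8F72-4E5D-9F9C-F4E0C362601F.pdf.zip HTTP/1.1" 404 95098',
]

# The status code is the 3-digit number right after the closing quote
# of the request string.
status_re = re.compile(r'"\s+(\d{3})\s')
counts = Counter()
for line in logs:
    m = status_re.search(line)
    if m:
        counts[m.group(1)] += 1
print(counts)  # one hit each for 200, 401, 404
```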
Hi, Can someone help me understand the difference between pass4SymmKey and SSL settings for securing Splunk connections in a distributed environment? What should we use for indexing traffic? For cluster peer communications?
I would like a chart of total vs. used disk space for all mounts on a host at the current time, for comparison. I would also like a trend chart of total vs. used disk space for all mounts on a host. Any suggestions?
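If it helps frame the search, the underlying numbers per mount are just total and used bytes (in Splunk these would usually come from the df sourcetype of the *nix add-on). A minimal Python sketch of the data behind such a chart; the mount list is an assumption, since a real host's mounts could come from /proc/mounts:

```python
import shutil

def disk_report(mounts):
    """Return {mount: (total_bytes, used_bytes)} for each mount point."""
    report = {}
    for m in mounts:
        usage = shutil.disk_usage(m)  # total, used, free in bytes
        report[m] = (usage.total, usage.used)
    return report

# "/" is a placeholder mount for illustration.
report = disk_report(["/"])
```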
Hi, I want to establish connectivity between 2 controllers. One controller is located in a cloud environment; the other is an on-premise, fully functional AppDynamics setup. We want the Controller installed in the cloud to communicate with the on-premise Controller (so that we can then use the on-premise AppDynamics setup to see transactions and generate reports for the cloud system's data). Please suggest.
Hi, We are using the latest version of the TA-dmarc add-on for Splunk (3.2.1). The connection seems to be successful, and debug logs show that the add-on is able to read the messages:

2020-05-21 09:00:54,162 DEBUG pid=165192 tid=MainThread file=base_modinput.py:log_debug:286 | get_dmarc_messages: successfully connected to x.y.com
2020-05-21 09:00:54,769 INFO pid=165192 tid=MainThread file=base_modinput.py:log_info:293 | get_dmarc_messages: 5 messages in folder INBOX
2020-05-21 09:00:54,916 INFO pid=165192 tid=MainThread file=base_modinput.py:log_info:293 | get_dmarc_messages: 5 messages in folder INBOX match subject "Report domain:"

For new messages I can also see the add-on writing the data:

2020-05-21 09:30:54,469 DEBUG pid=131084 tid=MainThread file=base_modinput.py:log_debug:286 | write_part_to_file: saved file /tmp/tmpEH0vFq/test.com!xyz.com!1589932800!1590019200!star.xml from uid 48

But I do not see anything showing up under index=dmarc (this is the index set in the config). I do see the following repeated:

2020-05-21 09:00:54,938 DEBUG pid=165192 tid=MainThread file=binding.py:get:664 | GET request to https://127.0.0.1:8089/servicesNS/nobody/TA-dmarc/storage/collections/data/TA_dmarc_checkpointer/x.y.com_dmarc%40xyz.com_28 (body: {})
2020-05-21 09:00:54,942 DEBUG pid=165192 tid=MainThread file=connectionpool.py:_make_request:387 | "GET /servicesNS/nobody/TA-dmarc/storage/collections/data/TA_dmarc_checkpointer/x.y.com_dmarc%40xyz.com_28 HTTP/1.1" 200 389
2020-05-21 09:00:54,943 DEBUG pid=165192 tid=MainThread file=binding.py:new_f:71 | Operation took 0:00:00.005052
2020-05-21 09:00:50,982 INFO pid=165192 tid=MainThread file=connectionpool.py:_new_conn:758 | Starting new HTTPS connection (1): 127.0.0.1
2020-05-21 09:00:51,953 INFO pid=165192 tid=MainThread file=connectionpool.py:_new_conn:758 | Starting new HTTPS connection (1): 127.0.0.1
2020-05-21 09:00:52,712 INFO pid=165192 tid=MainThread file=connectionpool.py:_new_conn:758 | Starting new HTTPS connection (1): 127.0.0.1
2020-05-21 09:00:54,026 DEBUG pid=165192 tid=MainThread file=base_modinput.py:log_debug:286 | Success creating temporary directory /tmp/tmptGt5ME
2020-05-21 09:00:54,026 INFO pid=165192 tid=MainThread file=base_modinput.py:log_info:293 | Start processing imap server x.y.com with use_ssl True
2020-05-21 09:00:54,162 DEBUG pid=165192 tid=MainThread file=base_modinput.py:log_debug:286 | get_dmarc_messages: successfully connected to x.y.com
2020-05-21 09:00:54,769 INFO pid=165192 tid=MainThread file=base_modinput.py:log_info:293 | get_dmarc_messages: 5 messages in folder INBOX
2020-05-21 09:00:54,916 INFO pid=165192 tid=MainThread file=base_modinput.py:log_info:293 | get_dmarc_messages: 5 messages in folder INBOX match subject "Report domain:"
2020-05-21 09:00:54,917 INFO pid=165192 tid=MainThread file=splunk_rest_client.py:_request_handler:100 | Use HTTP connection pooling
2020-05-21 09:00:54,917 DEBUG pid=165192 tid=MainThread file=binding.py:get:664 | GET request to https://127.0.0.1:8089/servicesNS/nobody/TA-dmarc/storage/collections/config/TA_dmarc_checkpointer (body: {})
2020-05-21 09:00:54,918 INFO pid=165192 tid=MainThread file=connectionpool.py:_new_conn:758 | Starting new HTTPS connection (1): 127.0.0.1
2020-05-21 09:00:54,927 DEBUG pid=165192 tid=MainThread file=connectionpool.py:_make_request:387 | "GET /servicesNS/nobody/TA-dmarc/storage/collections/config/TA_dmarc_checkpointer HTTP/1.1" 200 5241
2020-05-21 09:00:54,928 DEBUG pid=165192 tid=MainThread file=binding.py:new_f:71 | Operation took 0:00:00.010589
2020-05-21 09:00:54,928 DEBUG pid=165192 tid=MainThread file=binding.py:get:664 | GET request to https://127.0.0.1:8089/servicesNS/nobody/TA-dmarc/storage/collections/config/ (body: {'offset': 0, 'count': -1, 'search': 'TA_dmarc_checkpointer'})
2020-05-21 09:00:54,931 DEBUG pid=165192 tid=MainThread file=connectionpool.py:_make_request:387 | "GET /servicesNS/nobody/TA-dmarc/storage/collections/config/?offset=0&count=-1&search=TA_dmarc_checkpointer HTTP/1.1" 200 4439
2020-05-21 09:00:54,932 DEBUG pid=165192 tid=MainThread file=binding.py:new_f:71 | Operation took 0:00:00.003590
2020-05-21 09:00:54,934 DEBUG pid=165192 tid=MainThread file=binding.py:get:664 | GET request to https://127.0.0.1:8089/servicesNS/nobody/TA-dmarc/storage/collections/data/TA_dmarc_checkpointer/x.y.com_dmarc%40xyz.com_25 (body: {})
2020-05-21 09:00:54,937 DEBUG pid=165192 tid=MainThread file=connectionpool.py:_make_request:387 | "GET /servicesNS/nobody/TA-dmarc/storage/collections/data/TA_dmarc_checkpointer/x.y.com_dmarc%40xyz.com_25 HTTP/1.1" 200 388
2020-05-21 09:00:54,938 DEBUG pid=165192 tid=MainThread file=binding.py:new_f:71 | Operation took 0:00:00.003830
2020-05-21 09:00:54,938 DEBUG pid=165192 tid=MainThread file=binding.py:get:664 | GET request to https://127.0.0.1:8089/servicesNS/nobody/TA-dmarc/storage/collections/data/TA_dmarc_checkpointer/x.y.com_dmarc%40xyz.com_28 (body: {})
2020-05-21 09:00:54,942 DEBUG pid=165192 tid=MainThread file=connectionpool.py:_make_request:387 | "GET /servicesNS/nobody/TA-dmarc/storage/collections/data/TA_dmarc_checkpointer/x.y.com_dmarc%40xyz.com_28 HTTP/1.1" 200 389
2020-05-21 09:00:54,943 DEBUG pid=165192 tid=MainThread file=binding.py:new_f:71 | Operation took 0:00:00.005052
2020-05-21 09:00:54,943 DEBUG pid=165192 tid=MainThread file=binding.py:get:664 | GET request to https://127.0.0.1:8089/servicesNS/nobody/TA-dmarc/storage/collections/data/TA_dmarc_checkpointer/x.y.com_dmarc%40xyz.com_31 (body: {})
2020-05-21 09:00:54,947 DEBUG pid=165192 tid=MainThread file=connectionpool.py:_make_request:387 | "GET /servicesNS/nobody/TA-dmarc/storage/collections/data/TA_dmarc_checkpointer/x.y.com_dmarc%40xyz.com_31 HTTP/1.1" 200 340
2020-05-21 09:00:54,948 DEBUG pid=165192 tid=MainThread file=binding.py:new_f:71 | Operation took 0:00:00.004498
2020-05-21 09:00:54,948 DEBUG pid=165192 tid=MainThread file=binding.py:get:664 | GET request to https://127.0.0.1:8089/servicesNS/nobody/TA-dmarc/storage/collections/data/TA_dmarc_checkpointer/x.y.com_dmarc%40xyz.com_38 (body: {})
2020-05-21 09:00:54,952 DEBUG pid=165192 tid=MainThread file=connectionpool.py:_make_request:387 | "GET /servicesNS/nobody/TA-dmarc/storage/collections/data/TA_dmarc_checkpointer/x.y.com_dmarc%40xyz.com_38 HTTP/1.1" 200 388
2020-05-21 09:00:54,953 DEBUG pid=165192 tid=MainThread file=binding.py:new_f:71 | Operation took 0:00:00.004457
2020-05-21 09:00:54,953 DEBUG pid=165192 tid=MainThread file=binding.py:get:664 | GET request to https://127.0.0.1:8089/servicesNS/nobody/TA-dmarc/storage/collections/data/TA_dmarc_checkpointer/x.y.com_dmarc%40xyz.com_42 (body: {})
2020-05-21 09:00:54,956 DEBUG pid=165192 tid=MainThread file=connectionpool.py:_make_request:387 | "GET /servicesNS/nobody/TA-dmarc/storage/collections/data/TA_dmarc_checkpointer/x.y.com_dmarc%40xyz.com_42 HTTP/1.1" 200 340
2020-05-21 09:00:54,957 DEBUG pid=165192 tid=MainThread file=binding.py:new_f:71 | Operation took 0:00:00.004431
2020-05-21 09:00:54,958 DEBUG pid=165192 tid=MainThread file=base_modinput.py:log_debug:286 | filter_seen_messages: uids on imap set([25, 42, 28, 38, 31])
2020-05-21 09:00:54,958 DEBUG pid=165192 tid=MainThread file=base_modinput.py:log_debug:286 | filter_seen_messages: uids on imap set([25, 42, 28, 38, 31])
2020-05-21 09:00:54,958 DEBUG pid=165192 tid=MainThread file=base_modinput.py:log_debug:286 | filter_seen_messages: uids in checkp set([25, 42, 28, 38, 31])
2020-05-21 09:00:54,958 DEBUG pid=165192 tid=MainThread file=base_modinput.py:log_debug:286 | filter_seen_messages: uids new set([])
2020-05-21 09:00:54,958 INFO pid=165192 tid=MainThread file=base_modinput.py:log_info:293 | Ended processing imap server x.y.com
2020-05-21 09:00:54,959 DEBUG pid=165192 tid=MainThread file=base_modinput.py:log_debug:286 | Success deleting temporary directory /tmp/tmptGt5ME

How can I troubleshoot this further? Thanks, ~ Abhi
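One clue in the dump above is "filter_seen_messages: uids new set([])": the add-on checkpoints each message UID in the KV store and only ingests UIDs it has not seen before. A simplified model of that filter (not the add-on's actual code) shows why nothing reaches the index when every UID is already checkpointed:

```python
# UID values copied from the debug log above.
uids_on_imap = {25, 42, 28, 38, 31}        # "uids on imap"
uids_in_checkpoint = {25, 42, 28, 38, 31}  # "uids in checkp"

# Only UIDs absent from the checkpoint get fetched and indexed.
new_uids = uids_on_imap - uids_in_checkpoint
print(new_uids)  # set() -> no new messages, so nothing is written to index=dmarc
```

If those five messages should be re-ingested, clearing the relevant checkpoint entries (or testing with fresh messages) is presumably what forces the add-on to process them again; worth confirming against the add-on's documentation.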
Hi everyone, I was attempting to utilize this dashboard, but am having difficulty populating the user accounts. https://gosplunk.com/windows-dashboard-showing-who-was-logged-on-to/

This is what the dashboard currently looks like; as you can see, the user account section is not populated. My goal is to have either the TargetUserName or TargetUserSID populated in the account section with a regex that will catch all user accounts. Any help will be greatly appreciated.

This is the search being performed:

index="wineventlog" source="XmlWinEventLog:Security" EventCode=4624 (Logon_Type=10 OR Logon_Type=7 OR Logon_Type=2) host=$HostName$ | rex "New Logon:\s+Security ID:\s+(?<account>.*)" | eval Type=case(Logon_Type=10,"Remote Logon", Logon_Type=2,"Local Logon", Logon_Type=7,"Screen Unlock") | table _time host Type account | sort _time desc

Here is an example of the Windows XML event:

<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-Security-Auditing' Guid='{54849625-5478-4994-A5BA-3E3B0328C30D}'/><EventID>4624</EventID><Version>1</Version><Level>0</Level><Task>12544</Task><Opcode>0</Opcode><Keywords>0x8020000000000000</Keywords><TimeCreated SystemTime='2020-05-21T14:23:42.544642200Z'/><EventRecordID>20131980</EventRecordID><Correlation/><Execution ProcessID='560' ThreadID='872'/><Channel>Security</Channel><Computer>Computer.AD.computer.com</Computer><Security/></System><EventData><Data Name='SubjectUserSid'>NT AUTHORITY\SYSTEM</Data><Data Name='SubjectUserName'>Computer$</Data><Data Name='SubjectDomainName'>AD</Data><Data Name='SubjectLogonId'>0x3e7</Data><Data Name='TargetUserSid'>AD\admin-v</Data><Data Name='TargetUserName'>admin-v</Data><Data Name='TargetDomainName'>AD</Data><Data Name='TargetLogonId'>0x1f02e303</Data><Data Name='LogonType'>10</Data><Data Name='LogonProcessName'>User32 </Data><Data Name='AuthenticationPackageName'>Negotiate</Data><Data Name='WorkstationName'>Computer</Data><Data Name='LogonGuid'>{00000000-0000-0000-0000-000000000000}</Data><Data Name='TransmittedServices'>-</Data><Data Name='LmPackageName'>-</Data><Data Name='KeyLength'>0</Data><Data Name='ProcessId'>0x20b8</Data><Data Name='ProcessName'>C:\Windows\System32\winlogon.exe</Data><Data Name='IpAddress'>10.0.0.0</Data><Data Name='IpPort'>0</Data><Data Name='ImpersonationLevel'>%%1833</Data></EventData></Event>
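One thing worth checking: the rex in the search expects the classic text rendering of event 4624 ("New Logon: Security ID: ..."), but the sample event is the XML rendering, where the account sits in a Data element named TargetUserName. A Python sketch against a fragment of the sample illustrates the mismatch (assuming _raw really contains the XML form):

```python
import re

# Fragment copied from the sample XML event in the question.
event = ("<Data Name='TargetUserSid'>AD\\admin-v</Data>"
         "<Data Name='TargetUserName'>admin-v</Data>")

# The dashboard's text-format pattern finds nothing in an XML event:
text_match = re.search(r"New Logon:\s+Security ID:\s+(?P<account>.*)", event)
print(text_match)  # None

# A pattern aimed at the XML attribute extracts the account instead:
m = re.search(r"<Data Name='TargetUserName'>(?P<account>[^<]+)</Data>", event)
print(m.group("account"))  # admin-v
```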
I was previously using the Splunk jar 1.7.2, which uses the Apache HTTP client, and I was able to set up a log4j config to hit the Splunk HEC endpoint:

<SplunkHttp name="splunk" url="https://ENDPOINT.splunkcloud.com" token="MYTOKEN" includeMDC="true" messageFormat="json" disableCertificateValidation="true"> <PatternLayout pattern="%m%n"/> </SplunkHttp>

This worked great; my log object was a simple object converted to JSON via gson:

Logger.info(sp.toJson());

In Splunk, all the object fields parsed as message.myobjectfield. All was good in the world, but then I needed to use a proxy for this endpoint in another environment. I was unable to find a way: Splunk used .custom() to build the request/client, and setting system properties did nothing. Setting the JVM to use the system proxy was too broad, because there were other connections that should not use the proxy.

So I had a look at the latest splunk java logging 1.8, which switched from Apache HTTP to OkHttp and seemed to imply that builder connection configuration was shared, so I thought I could set up my proxy prior to use and it would work. However, I didn't even get that far, because my initial baseline test in the open environment, with the same configuration that worked on 1.7.2, no longer works on 1.8. On 1.8, okhttp returned error 400 Bad Request with body {"text":"Incorrect index","code":7,"invalid-event-number":1}

Can someone provide a usage of 1.8 with HttpEventCollectorLog4jAppender, with and without a proxy? Which log4j config did you use, and what did you log? I need an example of a multi-field object, not just a curl plain-text example, as I know that already works.
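On the 400 response itself: HEC's code 7 ("Incorrect index") is generally returned when the event, or the token's default, names an index the token is not allowed to write to, so it may be worth comparing the token and index settings against the working 1.7.2 setup. As a sketch of the event envelope HEC expects, with all field values here being made-up assumptions:

```python
import json

# Hypothetical HEC event envelope; "event" carries the multi-field payload.
event = {
    "event": {"user": "alice", "action": "login"},  # made-up payload fields
    "sourcetype": "_json",
    "index": "main",  # must be an index the HEC token is permitted to use
}
payload = json.dumps(event)
print(payload)
```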
I have deployed the Splunk Add-on for Unix and Linux to collect data from all Linux machines. However, I need one parameter, cpu_wait_queue, which is not collected by the default scripts. The command used to view this is vmstat. Could you please help me collect this parameter?
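Assuming cpu_wait_queue corresponds to vmstat's "b" column (processes blocked waiting on I/O) -- an assumption worth confirming -- a custom scripted input could parse it along these lines; the sample output and its numbers below are made up:

```python
# Made-up vmstat output for illustration; a real script would run vmstat
# via subprocess and parse its stdout the same way.
sample = """procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 2  1      0 804360  12345  67890    0    0     5     9  100  200  3  1 95  1  0"""

lines = sample.splitlines()
header, data = lines[1], lines[2]

# Zip the column names against the values, then pick out the "b" column.
row = dict(zip(header.split(), data.split()))
cpu_wait_queue = int(row["b"])
print(cpu_wait_queue)  # 1
```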
I am using Splunk Free and the Splunk Add-on for AWS, attempting to index and forward generic S3 data with a custom index name to a Splunk Enterprise instance. It looks like data is being indexed, and the SSL connection is connecting, but data is not being forwarded. I have indexed data that shows in the web client. I am getting the following repeated output in splunkd.log:

05-21-2020 10:23:16.119 -0400 INFO TcpOutputProc - Found currently active indexer. Connected to idx=ip:9998, reuse=1.
05-21-2020 10:23:25.150 -0400 INFO LMStackMgr - license_warnings_update_interval=auto has reached the minimum threshold 10. Will not reduce license_warnings_update_interval beyond this value

In outputs.conf, to account for sending all indexes, I used 'forwardedindex.0.whitelist = .*'

inputs.conf:

[default]
host = hostname
disabled=0

outputs.conf:

[tcpout]
defaultGroup = default-autolb-group
indexAndForward = true
disabled = false
forwardedindex.0.whitelist = .*

[tcpout:default-autolb-group]
compressed = true
server = ip:9998
clientCert = /opt/splunk/etc/auth/server.pem
sslPassword = passwordHere
sslRootCAPath = /opt/splunk/etc/auth/ca.pem
sslVerifyServerCert = false
sendCookedData = true

What is the required change in my forwarder configuration?
I have the AWS app running. Some fields for some GuardDuty events are visible only when searching on the HF, while I also see events with all fields visible on the search head, as expected. What can be the reason? The integration is working with S3.
Hi team, We are trying to integrate AppDynamics (SaaS) with Splunk using https://docs.appdynamics.com/display/PRO44/Integrate+AppDynamics+with+Splunk and https://www.appdynamics.com/community/exchange/extension/splunk-alerting-extension/ . To enable the Splunk alerting extension, it needs to be uploaded to custom/actions on the controller. As AppDynamics is on SaaS, what is the procedure to do this? Thanks.
We would like to find out who has access to a certain index. How can we do that?
Once the asset environment variables have been created (mySpecificKey -> mySpecificValue), how do I access these values inside a playbook? $ENV{'mySpecificKey'} does not seem to work.
Hi, I'm setting up an integration test between a third-party app and a Splunk Cloud trial using an HTTP Event Collector. I have done this several times before without issue, but this time I can't find the Global Settings dialog to enable the tokens. I've looked in Settings -> Data Inputs -> Event Collector, but the page has no link to the Global Settings dialog. Has it moved? Mike
Hi All, Today I had a question from my customer: he wants to monitor a bunch of software running in his environment. One of the software products is AutoCAD, whose health and performance he wants to monitor. Currently these products are monitored, but not efficiently (he is unable to find out whether a problem originates in hardware, software, network communication, or the database), so he needs a monitoring solution that provides analytics and also pinpoints the exact problem (network issue, hardware issue, etc.), so that it can be resolved by the responsible team or application owner. They have Splunk running in the environment, but are not sure how to get AutoCAD data into Splunk.
Hi, I have a query that returns two lines of results, based on two hosts. I then get a result from another query that only returns one line. When I do the eval command, I get a correct 'Match' for the first line but no entry for the second. How do I apply the 'appendcols' result to both lines?

index=systems sourcetype=stream_stack PID=0x0055 | eval Packets=packets*208 | stats latest(Packets) AS Packets by host | appendcols [ search index=systems sourcetype=soms_file_size process=soms | stats latest(file_size) AS file_size latest(file_name) AS file_name by process ] | eval match=if(Packets=file_size,"OK","Error") | table process match Packets file_size file_name host

RESULT:

process match file_size file_name host
soms OK 27666832 DR_270919_P_5068_719_750_750.out chietrp01
Error chietrp02

thanks,
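For context, appendcols pastes the subsearch's rows onto the main results strictly by position, not by any key, which is why only the first row gets a partner. A small Python model of the two behaviors, with data invented to mirror the question:

```python
# Main search results: one row per host.
left = [{"host": "chietrp01", "Packets": 27666832},
        {"host": "chietrp02", "Packets": 27666832}]
# Subsearch result: only one row.
right = [{"process": "soms", "file_size": 27666832}]

# Positional pairing (the appendcols-like behavior):
# the second left row gets no partner at all.
paired = [{**l, **(right[i] if i < len(right) else {})}
          for i, l in enumerate(left)]

# Broadcasting the single right row onto every left row
# gives both hosts something to compare against.
broadcast = [{**l, **right[0]} for l in left]
```

In SPL, the usual fix is a key-based combination (for example a join or stats over a shared field, or copying the single subsearch value onto every row with eventstats) rather than appendcols.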
I'm creating a remote performance monitor input through Settings => Data Inputs => Remote performance monitoring on a Windows system. Even though I select my custom app in "App Context" while setting up the input, every time the wmi.conf file gets written to the search app's local folder. I've tried it on Splunk 8.0.3 and 7.3.1, and the behaviour is consistent. I know we can manually port the stanza to the app of our choice, but the actual concern here is why the "App Context" option is not honoured when creating an input through Splunk Web. Can somebody please verify whether this is the case, or whether I'm doing something silly?
@splunk Team, In some cases I want to skip a specific KPI calculation, or stop it from calculating its next value. It is more like the maintenance-mode concept, but limited to a KPI rather than applied at the service level.

Example: we had an issue with Splunk in which a few indexes were not populating, due to forwarder issues or data-ingestion problems. In such a case, when we know the issue will persist for a while, rather than reducing the importance of the KPI (which is a change to the model), I was wondering if there is any means to stop it. I don't like the idea of touching my PROD model config (importance); for these issues I need a way to suppress the alert. Is there any other config-level option for this? Curious to know.

Other example: any Splunk ingestion-level failure will impact model calculation; how do we handle these?

Thanks, Satya
Hello, I read https://docs.splunk.com/Documentation/Splunk/8.0.2/Forwarding/Routeandfilterdatad#Perform_selective_indexing_and_forwarding and am thinking about "Index one input locally and then forward all inputs". How does this affect my licensed volume? Basically, I'm processing the same data twice. Do I have to pay for this volume twice? This may affect my planning ... Cheers, Robert