
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Do you have a SPL Code hint for me?
This is part of the splunkd health report. It is configured in health.conf. I would suggest reviewing whether this "forwarder" is sending old files, is actually falling behind, or needs some clean-up of its ingestion tracker values.
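If the forwarder turns out to be healthy and the indicator is just too sensitive for your environment, the thresholds can be tuned in health.conf on the instance that raises the alert. A minimal sketch, assuming the standard [feature:ingestion_latency] stanza and indicator syntax; the numbers are placeholders, and the exact setting names should be verified against health.conf.spec for your Splunk version:

# $SPLUNK_HOME/etc/system/local/health.conf (sketch; verify names against health.conf.spec)
[feature:ingestion_latency]
# placeholder thresholds for the gap multiplier indicator
indicator:ingestion_latency_gap_multiplier:yellow = 3
indicator:ingestion_latency_gap_multiplier:red = 5
# optionally disable alerting for just this indicator while investigating
alert:ingestion_latency_gap_multiplier.disabled = 1

Tuning only masks the symptom, though; the checks above on the forwarder side address the cause.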
Hi! If I am following your question, you are concerned because the lab and prod file paths are the same? You are not required to set the source path to the file in props.conf to get the desired outcome. If your sourcetype is being set in the inputs that pick up this file, you can simply configure props to match on the sourcetype to do the processing.

Also, I don't think you want to duplicate the stanza names in transforms.conf, i.e. [setnull] is named twice; that could lead to unintended consequences.

What does your inputs.conf stanza look like? How are you sending this file (UF to indexers? UF to HF to indexers?) Also, are you on-prem or in the cloud? I ask because Ingest Actions (and other solutions like Ingest Processor or Edge Processor) provides a UI to do this, which helps you validate and avoid config mistakes.

Regardless, please always test your configs in a local lab environment to avoid having a bad day.
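For illustration, a minimal sketch of what matching on the sourcetype (rather than the source path) could look like; the sourcetype and transform names below are placeholders, not taken from your environment:

# props.conf
[my_prod_sourcetype]
TRANSFORMS-dropprod = setnull_prod

# transforms.conf
[setnull_prod]
# send every event carrying this sourcetype to the nullQueue (i.e. discard it)
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

Note that this drops everything with that sourcetype, so it only fits if the prod sourcetype is unique to the file you want to exclude; otherwise the REGEX would need to narrow the match further.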
Hello, I have a common log file with the same name in both production and stage, but with a different sourcetype name in each. As I don't want these logs to be ingested from production, I have added the entry below in props.conf:

[source::<Log file path>]
Transforms-null= setnull

transforms.conf:

[setnull]
REGEX = BODY
DEST_KEY = queue
FORMAT = nullQueue

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

But I want the same log file from stage and not from production. In props.conf, will adding the sourcetype of prod restrict the logs from production and ingest the logs from stage, where the sourcetype name is different?

[source::<Log file path>]
[sourcetype = <Prod Sourcetype>]
Transforms-null= setnull

In addition, the prod sourcetype covers two other logs, and I don't want those to get stopped because of this configuration change. Thanks
Thank you for replying, Rick. I can't write English very well, but I'll try. I made a mistake with the Splunk version: it is not 7.3 but 9.3.1. Why is my splunkd using the loopback address? I installed splunk-9.3.1-0b8d769cb912-x64-release.msi and I don't think I changed any settings. On this server I ran netstat; the result is below.

C:\>netstat -an -p tcp

Active Connections

  Proto  Local Address  Foreign Address  State
  TCP  0.0.0.0:80  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:88  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:135  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:389  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:443  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:445  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:464  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:593  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:636  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:3268  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:3269  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:4112  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:4430  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:4649  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:5985  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:8000  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:8080  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:8089  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:8191  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:9389  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:47001  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:49664  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:49665  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:49666  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:49667  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:49668  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:49670  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:49671  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:49672  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:49674  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:49677  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:49681  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:49697  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:51142  0.0.0.0:0  LISTENING
  TCP  0.0.0.0:62000  0.0.0.0:0  LISTENING
  TCP  127.0.0.1:53  0.0.0.0:0  LISTENING
  TCP  127.0.0.1:8000  127.0.0.1:59455  ESTABLISHED
  TCP  127.0.0.1:8000  127.0.0.1:59484  ESTABLISHED
  TCP  127.0.0.1:8065  0.0.0.0:0  LISTENING
  TCP  127.0.0.1:8089  127.0.0.1:60730  ESTABLISHED
  TCP  127.0.0.1:8089  127.0.0.1:62099  TIME_WAIT
  TCP  127.0.0.1:8191  127.0.0.1:53438  ESTABLISHED
  TCP  127.0.0.1:8191  127.0.0.1:53439  ESTABLISHED
  TCP  127.0.0.1:8191  127.0.0.1:53443  ESTABLISHED
  TCP  127.0.0.1:8191  127.0.0.1:53448  ESTABLISHED
  TCP  127.0.0.1:8191  127.0.0.1:53501  ESTABLISHED
  TCP  127.0.0.1:8191  127.0.0.1:53504  ESTABLISHED
  TCP  127.0.0.1:8191  127.0.0.1:53506  ESTABLISHED
  TCP  127.0.0.1:8191  127.0.0.1:53508  ESTABLISHED
  TCP  127.0.0.1:8191  127.0.0.1:53509  ESTABLISHED
  TCP  127.0.0.1:8191  127.0.0.1:53510  ESTABLISHED
  TCP  127.0.0.1:8191  127.0.0.1:53511  ESTABLISHED
  TCP  127.0.0.1:8191  127.0.0.1:53512  ESTABLISHED
  TCP  127.0.0.1:8191  127.0.0.1:58525  ESTABLISHED
  TCP  127.0.0.1:53422  0.0.0.0:0  LISTENING
  TCP  127.0.0.1:53422  127.0.0.1:53473  ESTABLISHED
  TCP  127.0.0.1:53426  0.0.0.0:0  LISTENING
  TCP  127.0.0.1:53438  127.0.0.1:8191  ESTABLISHED
  TCP  127.0.0.1:53439  127.0.0.1:8191  ESTABLISHED
  TCP  127.0.0.1:53443  127.0.0.1:8191  ESTABLISHED
  TCP  127.0.0.1:53448  127.0.0.1:8191  ESTABLISHED
  TCP  127.0.0.1:53473  127.0.0.1:53422  ESTABLISHED
  TCP  127.0.0.1:53501  127.0.0.1:8191  ESTABLISHED
  TCP  127.0.0.1:53504  127.0.0.1:8191  ESTABLISHED
  TCP  127.0.0.1:53506  127.0.0.1:8191  ESTABLISHED
  TCP  127.0.0.1:53508  127.0.0.1:8191  ESTABLISHED
  TCP  127.0.0.1:53509  127.0.0.1:8191  ESTABLISHED
  TCP  127.0.0.1:53510  127.0.0.1:8191  ESTABLISHED
  TCP  127.0.0.1:53511  127.0.0.1:8191  ESTABLISHED
  TCP  127.0.0.1:53512  127.0.0.1:8191  ESTABLISHED
  TCP  127.0.0.1:58525  127.0.0.1:8191  ESTABLISHED
  TCP  127.0.0.1:59455  127.0.0.1:8000  ESTABLISHED
  TCP  127.0.0.1:59484  127.0.0.1:8000  ESTABLISHED
  TCP  127.0.0.1:60730  127.0.0.1:8089  ESTABLISHED
  TCP  127.0.0.1:61987  127.0.0.1:8089  TIME_WAIT
  TCP  192.168.0.8:53  0.0.0.0:0  LISTENING
  TCP  192.168.0.8:139  0.0.0.0:0  LISTENING
  TCP  192.168.0.8:445  192.168.0.1:51760  ESTABLISHED
  TCP  192.168.0.8:445  192.168.0.44:59017  ESTABLISHED
  TCP  192.168.0.8:4649  192.168.0.44:59008  ESTABLISHED
  TCP  192.168.0.8:58220  20.198.118.190:443  ESTABLISHED
  TCP  192.168.0.8:59051  20.194.180.207:443  ESTABLISHED
  TCP  192.168.0.8:59103  3.216.246.128:443  ESTABLISHED
  TCP  192.168.0.8:59125  50.16.88.233:443  ESTABLISHED
  TCP  192.168.0.8:59149  54.228.78.235:443  ESTABLISHED
  TCP  192.168.0.8:59174  151.101.193.140:443  ESTABLISHED
  TCP  192.168.0.8:59204  151.101.193.140:443  ESTABLISHED
  TCP  192.168.0.8:59207  35.186.194.58:443  ESTABLISHED
  TCP  192.168.0.8:59218  151.101.193.140:443  ESTABLISHED
  TCP  192.168.0.8:59261  34.149.224.134:443  ESTABLISHED
  TCP  192.168.0.8:59275  151.101.228.157:443  ESTABLISHED
  TCP  192.168.0.8:59297  54.228.78.235:443  ESTABLISHED
  TCP  192.168.0.8:59301  151.101.129.181:443  TIME_WAIT
  TCP  192.168.0.8:59507  184.72.249.85:443  ESTABLISHED
  TCP  192.168.0.8:60773  104.26.13.205:443  TIME_WAIT
  TCP  192.168.0.8:60785  23.50.118.133:443  ESTABLISHED
  TCP  192.168.0.8:60829  34.107.204.85:443  TIME_WAIT
  TCP  192.168.0.8:60851  13.225.183.97:443  ESTABLISHED
  TCP  192.168.0.8:60887  172.66.0.227:443  TIME_WAIT
  TCP  192.168.0.8:60994  18.154.132.17:443  TIME_WAIT
  TCP  192.168.0.8:61016  34.66.73.214:443  ESTABLISHED
  TCP  192.168.0.8:61027  3.226.63.48:443  ESTABLISHED
  TCP  192.168.0.8:61047  35.186.224.24:443  ESTABLISHED
  TCP  192.168.0.8:61050  34.117.162.98:443  TIME_WAIT
  TCP  192.168.0.8:61074  34.111.113.62:443  ESTABLISHED
  TCP  192.168.0.8:61099  107.178.240.89:443  ESTABLISHED
  TCP  192.168.0.8:61108  35.244.154.8:443  ESTABLISHED
  TCP  192.168.0.8:61109  107.178.254.65:443  ESTABLISHED
  TCP  192.168.0.8:61111  34.98.64.218:443  ESTABLISHED
  TCP  192.168.0.8:61184  20.198.118.190:443  ESTABLISHED
  TCP  192.168.0.8:61212  151.101.1.140:443  ESTABLISHED
  TCP  192.168.0.8:61412  35.163.74.134:443  ESTABLISHED
  TCP  192.168.0.8:61452  35.163.74.134:443  ESTABLISHED
  TCP  192.168.0.8:61986  65.9.42.42:443  TIME_WAIT
  TCP  192.168.0.8:62010  65.9.42.42:443  TIME_WAIT
  TCP  192.168.0.8:62030  65.9.42.42:443  TIME_WAIT
  TCP  192.168.0.8:62043  65.9.42.42:443  TIME_WAIT
  TCP  192.168.0.8:62056  65.9.42.28:443  TIME_WAIT
  TCP  192.168.0.8:62079  192.168.0.8:443  TIME_WAIT
  TCP  192.168.0.8:62080  192.168.0.8:62000  TIME_WAIT
  TCP  192.168.0.8:62082  65.9.42.62:443  TIME_WAIT
  TCP  192.168.0.8:62098  65.9.42.62:443  TIME_WAIT
  TCP  192.168.0.8:62103  13.107.21.239:443  ESTABLISHED
  TCP  192.168.0.8:62104  13.107.21.239:443  ESTABLISHED
  TCP  192.168.0.8:62117  65.9.42.62:443  TIME_WAIT

Why do the 80xx ports show "ESTABLISHED"? Shouldn't they appear as "LISTENING"? How can I change the status? Please tell me. Thank you.
To send specific notable events from the Enterprise Security Incident Review page for investigation, an add-on called the ServiceNow Security Operations Add-on is available. This add-on allows Splunk ES analysts to create security-related incidents and events in ServiceNow. It supports on-demand creation of a single ServiceNow event or incident, as well as creation of single and multiple ServiceNow events and incidents from Splunk scheduled alerts. For detailed integration steps, refer to the add-on's documentation. The reverse integration between ServiceNow and Splunk for incident management can be achieved using an out-of-the-box method. If this reply is helpful, karma would be appreciated.
Hi @Nicolas2203 , OK, good for you, let me know, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors.
Ahhh... the SOURCE_KEY part is what I missed. Good catch!
Hello, I just checked, and the Microsoft Cloud Services add-on manages checkpoints locally on heavy forwarders. However, there is a configuration option in the app that allows you to store checkpoints in a container within an Azure storage account. That way, when you need to start log collection on another heavy forwarder, it should facilitate the process. I will configure that and test it, and I'll let you know! Thanks, Nico
The IP address keeps changing, with the same error:

Forwarder Ingestion Latency
Root cause(s): Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. The observed value is 272246.
Message from D97C3DE9-B0CE-408F-9620-5274BAC12C72:192.168.1.191:50409

How do you solve this problem?
1. Notable creation as a ServiceNow incident:

The reverse integration between ServiceNow and Splunk for incident management can be achieved using an out-of-the-box method. To send specific notable events from the Enterprise Security Incident Review page for investigation, an add-on called the ServiceNow Security Operations Add-on is available. This add-on allows Splunk ES analysts to create security-related incidents and events in ServiceNow. It supports on-demand creation of a single ServiceNow event or incident, as well as creation of single and multiple ServiceNow events and incidents from Splunk scheduled alerts.

Another approach is to customize the Splunk Add-on for ServiceNow by modifying the /opt/splunk/etc/apps/Splunk_TA_snow/local/alert_actions.conf file with the following configuration, which should be applied to your deployer and pushed to your Search Head Cluster (SHC):

[snow_incident]
param._cam = {\
"category": ["others"],\
"task": ["others"],\
"subject": ["others"],\
"technology": [{"vendor": "unknown", "product": "unknown"}],\
"supports_adhoc": true\
}
param.state = 1
param.correlation_id = $job.sid$
param.configuration_item = splunk
param.contact_type =
param.assignment_group =
param.category =
param.subcategory =
param.account = splunk_integration
param.short_description =

All the param.* fields can be hardcoded in this configuration file to prepopulate the ad hoc invocation, if that is your preference. If you need any further assistance, please let me know.

Note: Using both add-ons will facilitate sending notables from Incident Review to ServiceNow.

2. Notable closure: updating Splunk notables when incidents are opened or closed in ServiceNow (configured on the ServiceNow side)

Step 1: Create an Outbound REST Message in ServiceNow
- Navigate to System Web Services > Outbound > REST Message in your ServiceNow instance.
- Click New to create a new REST message.
- Name the message and specify the endpoint, which should be the URL of your Splunk instance.

Step 2: Define HTTP Methods
- In the new REST message, go to the HTTP Methods related list.
- Create a new record and select the appropriate HTTP method (usually POST).
- In the Endpoint field, add the specific API endpoint for updating notables.

Step 3: Define Headers and Parameters
- If your Splunk instance requires specific headers or parameters, define them in this step. For example, you may need to set authentication headers or other required parameters.

Step 4: Create a Business Rule
- Navigate to System Definition > Business Rules in ServiceNow.
- Create a new business rule: set the table to Incident, define the conditions to trigger the rule (typically "After" an insert or update when the incident state changes to "Closed"), and, in the Advanced tab, write a script to send the REST message when the specified conditions are met.
Here's a sample script:

// Sample business rule script to send the REST message
var restMessage = new sn_ws.RESTMessageV2();
restMessage.setHttpMethod('POST'); // or 'PUT'
restMessage.setEndpoint('https://your-splunk-instance/api/update_notables'); // Update with your endpoint
restMessage.setRequestHeader('Content-Type', 'application/json');
restMessage.setRequestHeader('Authorization', 'Bearer your_api_token'); // If required

var requestBody = {
    "incident_id": current.sys_id,
    "state": current.state
    // Add other relevant fields here
};
restMessage.setRequestBody(JSON.stringify(requestBody));

var response = restMessage.execute();
var responseBody = response.getBody();
var httpStatus = response.getStatusCode();
// Handle the response as needed

Step 5: Test the Integration
- Close an incident in ServiceNow and verify whether the corresponding alert is also closed in Splunk.
- Ensure that you replace 'Your REST Message' and 'Your HTTP Method' with the actual names you provided when creating the REST message, and adjust parameters and headers as required by your Splunk instance's API.

Additional Configuration
To properly configure the REST call for updating notables in Splunk, ensure you pass the necessary parameters and headers, particularly the ruleID, as described in the Notable Event API reference for /services/notable_update.

Splunk Notable Update Endpoint
Endpoint URL: https://<host>:<mPort>/services/notable_update
HTTP Method: POST

If this reply is helpful, karma would be appreciated.
You might be better off using eventstats to add the average to all the events, then using the where command to keep only the events you want to delete, and then removing the average field (with the fields command) before deleting the events.
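A minimal sketch of that approach, assuming the humidity field from the makeresults sample elsewhere in this thread; the index/sourcetype and the tolerance of 10 points around the average are placeholders to adjust:

index=your_index sourcetype=your_sourcetype
``` add the overall average to every event without collapsing them ```
| eventstats avg(humidity) AS avg_humidity
``` keep only the outliers, i.e. the events to be deleted ```
| where abs(humidity - avg_humidity) > 10
``` drop the helper field before handing the events to delete ```
| fields - avg_humidity
| delete

As noted in the other replies, the account running this needs the can_delete role, and it is safer to run everything up to the fields command first to confirm that only the faulty events match before appending | delete.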
Hi, delete is not a must-have... excluding the faulty results from the search is another option... My logic: timechart avg > get the avg min and avg max from this timechart > exclude events outside the min/max avg > new timechart
Hi @CMEOGNAD , at first, I suppose you know that you must have the can_delete role associated with your user. Then, I suppose you know that this is a logical, not a physical, removal: removed events are marked as deleted but are not removed from the buckets until the end of the bucket life cycle. In other words, the removal has no useful effect in terms of storage or license (because the events are already indexed). Anyway, I'm not sure it's possible to apply the delete command to a streaming command: you should select the events to delete and use the delete command after the main search. Ciao. Giuseppe
Hi Community, I have a data source that sometimes submits faulty humidity data, like 3302.4 percent. To clean up / delete these outlier events, I built a timechart avg to get the real humidity curve, and from this curve I get the max and min with stats to get the upper and lower bounds. ...but my search won't work, and I need your help. Here is a makeresults sample:

| makeresults format=json data="[{\"_time\":\"1729115947\", \"humidity\":70.7},{\"_time\":\"1729115887\", \"humidity\":70.6},{\"_time\":\"1729115827\", \"humidity\":70.5},{\"_time\":\"1729115762\", \"humidity\":30.9},{\"_time\":\"1729115707\", \"humidity\":70.6}]"
    [ search
    | timechart eval(round(avg(humidity),1)) AS avg_humidity
    | stats min(avg_humidity) as min_avg_humidity ]
| where humidity < min_avg_humidity
```| delete ```
Hi @PickleRick, Thank you for your suggestions.

After following your suggestions, the configurations are now working correctly for my use case. Here are the changes I made to the [route_to_teamid_index] stanza in transforms.conf:

1) For [route_to_teamid_index]
- Set FORMAT = $1
- Updated SOURCE_KEY = MetaData:Source

Current working configs for my use case:

props.conf

#custom-props-for-starflow-logs
[source::.../starflow-app-logs...]
TRANSFORMS-set_new_sourcetype = new_sourcetype
TRANSFORMS-set_route_to_teamid_index = route_to_teamid_index

transforms.conf

#custom-transforms-for-starflow-logs
[new_sourcetype]
REGEX = .*
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::aws:kinesis:starflow
WRITE_META = true

[route_to_teamid_index]
REGEX = .*\/starflow-app-logs(?:-[a-z]+)?\/([a-zA-Z0-9]+)\/
SOURCE_KEY = MetaData:Source
FORMAT = $1
DEST_KEY = _MetaData:Index
WRITE_META = true

Previously, the configuration had SOURCE_KEY = source, which was causing issues. The SOURCE_KEY = <field> setting essentially tells Splunk which KEY the regex should be applied to. In my configuration it was set to "source", so Splunk might not have been able to apply the regex to just the source field. After spending time reading through transforms.conf, I noticed that the global settings mention this specifically:

SOURCE_KEY = <string>
* NOTE: This setting is valid for both index-time and search-time field extractions.
* Optional. Defines the KEY that Splunk software applies the REGEX to.
* For search time extractions, you can use this setting to extract one or more values from the values of another field. You can use any field that is available at the time of the execution of this field extraction.
* For index-time extractions use the KEYs described at the bottom of this file.
* KEYs are case-sensitive, and should be used exactly as they appear in the KEYs list at the bottom of this file. (For example, you would say SOURCE_KEY = MetaData:Host, *not* SOURCE_KEY = metadata:host.)

Keys:
MetaData:Source : The source associated with the event.

Thank you sincerely for all of your genuine help!
Hi @Real_captain , you could try using an area chart, possibly using white for the min area so that it appears as if only the difference between min and max is coloured. Ciao. Giuseppe
Hi @tbayer82 , the order of the filters isn't relevant, but if you have OR operators I'd prefer to use parentheses:

index=* (dstip="192.168.1.0/24" OR srcip="192.168.1.0/24") action=deny

Also, you don't need to use the AND operator, as it is the default. Ciao. Giuseppe
Hi @Strangertinz , check the results in the rising column field again: usually the issue is there. You get results when executing the SQL query in DB Connect, but it extracts only the records with rising column values greater than the checkpoint, so if the rising column isn't correct, or there are duplicated values, you risk losing records. Ciao. Giuseppe
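As an illustration, a quick sanity check for duplicate rising-column values can be run as a one-off search; the connection name, query, and column name here are hypothetical placeholders:

| dbxquery connection="your_connection" query="SELECT rising_col FROM your_table"
| stats count BY rising_col
| where count > 1

Any rows returned here mean the checkpoint cannot distinguish between those records, which matches the kind of record loss described above.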
Hi Team, I am fetching unique "ITEM" values with a first SQL query running on one database, then passing those values to a second SQL query to fetch the corresponding values from a second database.

First SQL query:

select distinct a.item from price a, skus b, deps c,supp_country s where zone_id in (5, 25) and a.item = b.sku and b.dept = c.dept and a.item = s.item and s.primary_supp_ind = 'Y' and s.primary_pack_ind = 'Y' and b.dept in (7106, 1666, 1650, 1651, 1654, 1058, 4158, 4159, 489, 491, 492, 493, 495, 496, 497, 498, 499, 501, 7003, 502, 503, 7004, 450, 451, 464, 465, 455, 457, 458, 459, 460, 461, 467, 494, 7013, 448, 462, 310, 339, 7012, 7096, 200, 303, 304, 1950, 1951, 1952, 1970, 1976, 1201, 1206, 1207, 1273, 1352, 1274, 1969, 1987, 342, 343, 7107, 7098, 7095, 7104, 2101, 2117, 7107, 7098, 1990, 477, 162, 604, 900, 901, 902, 903, 904, 905, 906, 908, 910, 912, 916, 918, 7032, 919, 7110, 7093, 7101, 913, 915, 118, 119, 2701, 917) and b.js_status in ('CO');

Second SQL query:

WITH RankedData AS (
  SELECT Product_Id, BusinessUnit_Id, Price, LastUpdated,
         ROW_NUMBER() OVER (PARTITION BY Product_Id, BusinessUnit_Id ORDER BY LastUpdated DESC) AS RowNum
  FROM RETAIL.DBO.CAT_PRICE(nolock)
  WHERE BusinessUnit_Id IN ('zone_5', 'zone_25')
    AND Product_Id IN ($ITEM$)
)
SELECT Product_Id, BusinessUnit_Id, Price, LastUpdated
FROM RankedData
WHERE RowNum = 1;

When I use the map command as shown below, the expected results are fetched, but only 10k records, due to the map command limitations. However, I want to fetch all the records (around 30k).

Splunk query:

| dbxquery query="First SQL query" connection="ABC"
| eval comma="'"
| eval ITEM='comma' + 'ITEM' + 'comma'+","
| mvcombine ITEM
| nomv ITEM
| fields - comma
| eval ITEM=rtrim(tostring(ITEM),",")
| map search="| dbxquery query=\"Second SQL query" connection=\"XYZ\""

But when I use the join command as shown below to get all the results (more than 10k), I am not getting the desired output. The output only contains results from the first query. I tried replacing the column name Product_Id in the second SQL with ITEM in all places, but still no luck.

| dbxquery query="First SQL query" connection="ABC"
| fields ITEM
| join type=outer ITEM[search dbxquery query=\"Second SQL query" connection=\"XYZ\""

Could someone help me understand what is going wrong and how I can get all the matching results from the second query?