All Topics


Hi All, below are 10 error records. The last 3 error records should not be ingested; only the data from the first 7 error records should be ingested. How do we write the regular expression? Please guide me.

2023-11-06 15:30:48,941 ERROR pool-1-thread-1 com.veeva.brp.batchrecordprint.ScheduledTasks - PCI ERROR: No package class found with name: PRD-QDB35801A
2023-11-06 15:30:48,941 ERROR pool-1-thread-1 com.veeva.brp.batchrecordprint.ScheduledTasks - PCI ERROR: (INVALID_DATA) Invalid value [V5100000003P211] specified for parameter [package_class__c] : Object record ID does not resolve to a valid active [package_class__c]
2023-11-06 15:30:48,941 ERROR https-jsse-nio-8443-exec-9 com.veeva.brp.batchrecordprint.BatchRecordPrintController - PRINT ERROR: Print failure response
2023-11-06 15:30:48,941 ERROR pool-1-thread-1 com.veeva.brp.batchrecordprint.ScheduledTasks - Unknown error: {errorType=GENERAL, responseStatus=EXCEPTION, responseMessage=502 Bad Gateway}
2023-11-06 15:30:48,941 ERROR https-jsse-nio-8443-exec-2 com.veeva.brp.batchrecordprint.BatchRecordPrintController - (API_LIMIT_EXCEEDED) You have exceeded the maximum number of authentication API calls allowed in a [1] minute period.
2023-11-06 15:30:48,941 ERROR pool-1-thread-1 com.veeva.brp.batchrecordprint.ScheduledTasks - PCI ERROR: No package class found with name: PR01-PU3227V1MSPS 0001
2023-11-08 06:19:49,539 ERROR https-jsse-nio-8443-exec-1 com.veeva.brp.batchrecordprint.BatchRecordPrintController - DOCLIFECYCLE ERROR: Error initiating lifecycle action for document: 5742459, Version: 0.1
2023-10-25 10:56:46,710 ERROR pool-1-thread-1 com.veeva.bpr.batchrecordprint.scheduledTasks - Header Field Name: bom_uom_1_c, value:E3HR5teHlfOQjzUJ74jTdKh1Tu0yajHqT/H98klZOyU=
2023-10-25 10:56:46,711 ERROR pool-1-thread-1 com.veeva.bpr.batchrecordprint.scheduledTasks - BOM Field Name: BOM_Added_1, value is out of Bounds using beginIndex:770, endIndex:771 from line:
2023-10-25 10:56:46,711 ERROR pool-1-thread-1 com.veeva.bpr.batchrecordprint.scheduledTasks
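A hedged observation: the three records to drop all reference com.veeva.bpr.batchrecordprint.scheduledTasks (note "bpr" and the lowercase "scheduledTasks"), while the seven records to keep use com.veeva.brp.batchrecordprint with "ScheduledTasks"/"BatchRecordPrintController". If that distinction holds across your data, a minimal sketch that routes the matching records to the null queue at index time (the sourcetype and transform names below are placeholders):

transforms.conf:

[drop_bpr_scheduledtasks]
REGEX = com\.veeva\.bpr\.batchrecordprint\.scheduledTasks
DEST_KEY = queue
FORMAT = nullQueue

props.conf:

[your_sourcetype]
TRANSFORMS-drop_bpr = drop_bpr_scheduledtasks

Everything else continues to the index; only events whose raw text matches the REGEX are discarded.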
Hi, I would like to ask a question regarding lookup tables. I am managing login logs and I want to be sure that a specific host can only be accessed from a specific IP address; otherwise an alert is triggered. So basically I have a lookup built like this:

IP HOST
1.1.1.1 host1
2.2.2.2 host2
3.3.3.3 host3

My purpose is to build a search query that finds whenever the IP-HOST association is not respected:

1.1.1.1 connects to host1 ---> OK
1.1.1.1 connects to host2 ---> BAD
2.2.2.2 connects to host1 ---> BAD

The connection to host1 should arrive only from 1.1.1.1, etc. How can I write this query? Thank you
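A minimal sketch, assuming the lookup file is named ip_host.csv with columns IP and HOST, and that the login events carry fields named IP and HOST (the index name is a placeholder):

index=login_logs
| lookup ip_host.csv HOST OUTPUT IP AS expected_ip
| where isnotnull(expected_ip) AND IP != expected_ip

Events surviving the where clause are connections that violate the IP-HOST mapping, so the alert can simply trigger on result count > 0.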
Hi all, I'm trying to configure an SSL certificate for management port 8089 on the Manager Node and Indexers, in $SPLUNK_HOME/etc/system/local/server.conf on each:

[sslConfig]
sslRootCAPath = <path_to_rootCA>
sslPassword = mycertpass
enableSplunkdSSL = true
serverCert = <path_to_manager_or_indexer_cert>
requireClientCert = true
sslAltNameToCheck = manage-node.example.com

I checked the rootCA and my server certificates on the Manager Node and Indexers with `openssl verify` and it returns OK. I use one certificate for the Indexers and one for the Manager Node. All my certificates carry both the SSL server and SSL client purposes: X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication. But when I set `requireClientCert = true`, it returns an "unsupported certificate" error and I can't access Splunk Web on the Manager Node. Please help me fix this!
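A hedged debugging step: with `requireClientCert = true`, every client that connects to port 8089 (other Splunk nodes, REST clients, and Splunk Web itself) must present a certificate that validates against sslRootCAPath. You can replay the handshake outside Splunk to see which side rejects what (file paths below are placeholders):

openssl s_client -connect manage-node.example.com:8089 -cert client_cert.pem -key client_key.pem -CAfile rootCA.pem
openssl x509 -in client_cert.pem -noout -text | grep -A1 "Extended Key Usage"

If s_client reproduces the "unsupported certificate" alert, the presented certificate's chain or key usage is what splunkd is refusing; if s_client succeeds, the failing client is likely one that isn't configured to send a certificate at all (Splunk Web's internal connection to splunkd is a common culprit).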
Hi, I'm looking for security use cases for the Salesforce application. Please suggest some if you have any. Regards BT
I need a Python file/function to be triggered when deleting an input/configuration.
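If this refers to a Splunk app input, one hedged option is the legacy EAI admin framework: a custom REST handler's handleRemove callback fires when a stanza is deleted. A minimal sketch (the handler, conf, and stanza names are hypothetical, and the handler still has to be registered in restmap.conf):

import splunk.admin as admin

class MyInputHandler(admin.MConfigHandler):

    def handleList(self, confInfo):
        # surface existing stanzas so the endpoint can list them
        for stanza, settings in self.readConf('myinputs').items():
            for key, val in settings.items():
                confInfo[stanza].append(key, val)

    def handleRemove(self, confInfo):
        # runs when a stanza is deleted: put cleanup logic here,
        # e.g. remove checkpoints, revoke credentials, log the deletion
        pass

admin.init(MyInputHandler, admin.CONTEXT_NONE)

If the input was built with the Splunk Add-on Builder or the UCC framework instead, those generate their own REST handler modules that you can extend in the same spirit.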
Hi at all, I have a data flow in JSON format from one host that I ingest with HEC, so I have one host, one source and one sourcetype for all events. I would like to override the host, source and sourcetype values based on regexes, and I'm able to do this. The issue is that the data flow is produced by an external system (Logstash) that takes raw logs (e.g. from Linux systems) and saves them in a field of the JSON ("message"), adding many other fields. So, after the host, source and sourcetype overriding (which works fine), I would like to remove all the extra content in the events and keep only the content of the message field (the raw logs). I'm able to do this too, but the issue is that I'm not able to do both transformations: in other words, I can override the values but then the extra-content removal doesn't work, or I can remove the extra content but then the overriding doesn't work. I have the following configurations in my props.conf:

[logstash]
# set host
TRANSFORMS-sethost = set_hostname_logstash
# set sourcetype Linux
TRANSFORMS-setsourcetype_linux_audit = set_sourcetype_logstash_linux_audit
# set source
TRANSFORMS-setsource = set_source_logstash_linux

# restoring original raw log
[linux_audit]
SEDCMD-raw_data_linux_audit = s/.*\"message\":\"([^\"]+).*/\1/g

As you can see, in the first stanza I override the sourcetype from logstash to linux_audit, and in the second I try to remove the extra content using the linux_audit sourcetype. If I use the logstash sourcetype in the second stanza as well, the extra content is removed, but the field overriding (which relies on the extra content) doesn't work. I also tried to set a priority using the props.conf "priority" option, with no luck. I also tried to use source for the first stanza, because source usually has a higher priority than sourcetype, but with the same result. Can anyone give me a hint how to solve this issue? Thank you in advance. Ciao. Giuseppe
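A hedged workaround: instead of relying on SEDCMD under the rewritten sourcetype (SEDCMD is applied based on the props of the sourcetype the event arrived with, so the new sourcetype's SEDCMD may never run), keep everything in the [logstash] stanza and do the raw rewrite with one more transform that targets _raw. A minimal sketch (the strip_logstash_wrapper name is hypothetical):

transforms.conf:

[strip_logstash_wrapper]
REGEX = \"message\":\"([^\"]+)
DEST_KEY = _raw
FORMAT = $1

props.conf:

[logstash]
TRANSFORMS-sethost = set_hostname_logstash
TRANSFORMS-setsourcetype_linux_audit = set_sourcetype_logstash_linux_audit
TRANSFORMS-setsource = set_source_logstash_linux
TRANSFORMS-zzz_strip_wrapper = strip_logstash_wrapper

Naming the _raw rewrite so it sorts and runs last means the host/source/sourcetype transforms still see the full JSON before the wrapper is stripped.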
Hi, I'd like to ask about the version of the Splunk TA "Palo Alto Networks App for Splunk" (Splunk_TA_paloalto). Our Palo Alto machines will be replaced and the PAN-OS version will change from 9.1 to 10.2.4. What is the appropriate version of the TA for PAN-OS 10.2.4? Our "Splunk_TA_paloalto" is currently 7.1.0. Thanks in advance.
I am currently integrating Splunk SOAR with Forcepoint Web Security. I am testing the connectivity but getting an SSL:UNSUPPORTED_PROTOCOL error. Forcepoint currently supports only up to TLS 1.1. Is there any way I can set/modify SOAR/Forcepoint to use TLS 1.1 in the meantime, instead of 1.2?
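One hedged avenue, if you're comfortable editing the connector's Python (and your platform's OpenSSL still permits TLS 1.1): mount a requests adapter that pins the protocol version. This is a sketch, not SOAR-supported configuration:

import ssl
import requests
from requests.adapters import HTTPAdapter
from urllib3.poolmanager import PoolManager

class TLSv11Adapter(HTTPAdapter):
    # Force outbound HTTPS connections to negotiate TLS 1.1
    def init_poolmanager(self, connections, maxsize, block=False, **kwargs):
        # deprecated, insecure protocol: temporary bridge only
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_1)
        kwargs['ssl_context'] = ctx
        self.poolmanager = PoolManager(num_pools=connections, maxsize=maxsize,
                                       block=block, **kwargs)

session = requests.Session()
session.mount('https://', TLSv11Adapter())

Note that recent OpenSSL builds enforce a minimum security level that refuses TLS < 1.2 regardless of what the application asks for, so upgrading the Forcepoint side to TLS 1.2 is the durable fix.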
Hi all, I'm facing an issue: where exactly can we troubleshoot when a host stops sending cmd logs to Splunk? Thanks
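A hedged starting point on the search side, before digging into the forwarder: check when the host was last seen (the index scope below is an assumption; narrow it to yours):

| tstats latest(_time) as last_seen where index=* by host
| eval minutes_silent = round((now() - last_seen) / 60, 0)
| where minutes_silent > 60
| sort - minutes_silent

If the host appears here, the pipeline broke recently; the next stops are the forwarder's splunkd.log on the host itself and index=_internal source=*metrics.log* on the indexers for that forwarder's connection activity.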
I am trying to create a pie chart of success vs. failure with the stats command, using the following:

search | stats c(assigned_user) AS Success c(authorization_failure_user) AS Failed
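A pie chart wants one row per slice (a category column and a count column), while that stats produces a single row with two columns. A hedged sketch of the reshaping (field names taken from the question):

search
| stats count(assigned_user) AS Success, count(authorization_failure_user) AS Failed
| transpose
| rename column AS outcome, "row 1" AS count

The transpose turns the two columns into two rows, which the pie chart can consume as outcome/count pairs.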
I've seen a few of the spath topics around, but wasn't able to understand enough to make it work for my data. I have the following json: { "Record": { "contentId": "429636", "levelId": "57", "levelGuid": "3c5b481a-6698-49f5-8111-e43bb7604486", "moduleId": "83", "parentId": "0", "Field": [ { "id": "22811", "guid": "6c6bbe96-deab-46ab-b83b-461364a204e0", "type": "1", "_value": "Need This with 22811 as the field name" }, { "id": "22810", "guid": "08f66941-8f2f-42ce-87ae-7bec95bb5d3b", "type": "1", "p": "need this with 22810 as the field name" }, { "id": "478", "guid": "4e17baea-f624-4d1a-9c8c-83dd18448689", "type": "1", "p": [ "Needs to have 478 as field name", "Needs to have 478 as field name" ] }, { "id": "22859", "guid": "f45d3578-100e-44aa-b3d3-1526aa080742", "type": "3", "xmlConvertedValue": "2023-06-16T00:00:00Z", "_value": "needs 22859 as field name" }, { "id": "482", "guid": "a7ae0730-508b-4545-8cdc-fb68fc2e985a", "type": "3", "xmlConvertedValue": "2023-08-22T00:00:00Z", "_value": "needs 482 as field name" }, { "id": "22791", "guid": "89fb3582-c325-4bc9-812e-0d25e319bc52", "type": "4", "ListValues": { "ListValue": { "id": "74192", "displayName": "Exception Closed", "_value": "needs 22791 as field name" } } }, { "id": "22818", "guid": "e2388e72-cace-42e6-9364-4f936df1b7f4", "type": "4", "ListValues": { "ListValue": { "id": "74414", "displayName": "Yes", "_value": "needs 22818 as field name" } } }, { "id": "22981", "guid": "8f8df6e3-8fb8-478b-8aa0-0be02bec24e3", "type": "4", "ListValues": { "ListValue": { "id": "74550", "displayName": "Critical", "_value": "needs 22981 as field name" } } }, { "id": "22876", "guid": "4cc725ad-d78d-4fc0-a3b2-c2805da8f29a", "type": "9", "Reference": { "id": "256681", "_value": "needs 22876 as field name" } }, { "id": "23445", "guid": "f4f262f7-290a-4ffc-af2b-dcccde673dba", "type": "9", "Reference": { "id": "255761", "_value": "needs 23445 as field name" } }, { "id": "1675", "guid": "ea8f9a24-3d35-49f9-b74e-e3b9e48f8b3b", "type": "2" }, { "id": "22812", "guid": "e563eb9e-6390-406a-ac79-386e1c3006a3", "type": "2", "_value": "needs 22812 as field name" }, { "id": "22863", "guid": "a9fe7505-5877-4bdf-aa28-9f6c86af90ae", "type": "8", "Users": { "User": { "id": "5117", "firstName": "data", "middleName": "data", "lastName": "data", "_value": "needs 22863 as field name" } } }, { "id": "22784", "guid": "4466fd31-3ab3-4117-8aa0-40f765d20c10", "type": "3", "xmlConvertedValue": "2023-07-18T00:00:00Z", "_value": "7/18/2023" }, { "id": "22786", "guid": "d1c7af3e-a350-4e59-9353-132a04a73641", "type": "1" }, { "id": "2808", "guid": "4392ae76-9ee1-45bf-ac31-9e323a518622", "type": "1", "p": "needs 2808 as field name" }, { "id": "22802", "guid": "ad7d4268-e386-441d-90b1-2da2fba0d002", "type": "1", "table": { "style": "width: 954px", "border": "1", "cellspacing": "0", "cellpadding": "0", "tbody": { "tr": { "style": "height: 73.05pt", "td": { "style": "width: 715.5pt", "valign": "top", "p": "needs 22802 as field name" } } } } }, { "id": "8031", "guid": "fbcfdf2c-2990-41d1-9139-8a1d255688b0", "type": "1", "table": { "style": "width: 954px", "border": "1", "cellspacing": "0", "cellpadding": "0", "tbody": { "tr": { "style": "height: 71.1pt", "td": { "style": "width: 715.5pt", "valign": "top", "p": [ "needs 8031 as field name", "needs 8031 as field name" ] } } } } }, { "id": "22820", "guid": "0f98830d-48b3-497c-b965-55be276037f2", "type": "1", "p": "needs 22820 as field name" }, { "id": "22807", "guid": "8aa0d0fa-632d-4dfa-9867-b0cc407fa96b", "type": "3" }, { "id": "22855", 
"guid": "e55cbc59-ad8d-4831-8e6f-d350046026e9", "type": "1" }, { "id": "8032", "guid": "f916365b-e6eb-4ab9-a4ff-c7812a404854", "type": "1", "p": "needs 8032 as field name" }, { "id": "22792", "guid": "8e70c28a-2eec-4e38-b78b-5495c2854b3e", "type": "1", "_value": "needs 22792 as field name " }, { "id": 22793, "guid": "ffeaa385-643a-4f04-8a00-c28ddd026b7f", "type": "4", "ListValues": "" }, { "id": "22795", "guid": "c46eac60-d86e-4af4-9292-d194a601f8b6", "type": "1" }, { "id": "22797", "guid": "8cd6e398-e565-4034-8db8-2e2ecb2f0b31", "type": "4", "ListValues": { "ListValue": { "id": "73060", "displayName": "data", "_value": "needs 22797 as field name" } } }, { "id": "22799", "guid": "20823b18-cb9b-47a3-854d-58f874164b27", "type": "4", "ListValues": { "ListValue": { "id": "74410", "displayName": "Other", "_value": "needs 22799 as field name" } } }, { "id": "22798", "guid": "5b32be4c-bc40-45b3-add4-1b22162fd882", "type": "4", "ListValues": { "ListValue": { "id": "74405", "displayName": "N/A", "_value": "needs 22798 as field name" } } }, { "id": "22800", "guid": "6b020db0-780f-4eaf-8381-c122425b71ed", "type": "1", "p": "needs 22800 as field name" }, { "id": "22801", "guid": "06334da8-5392-4a9d-a3eb-d4075ee30787", "type": "1", "p": "needs 22801 as field name" }, { "id": "22794", "guid": "25da1de8-8e81-4281-8ef3-d82d1dc005ad", "type": "4", "ListValues": { "ListValue": { "id": "74398", "displayName": "Yes", "_value": "needs 22794 as field name" } } }, { "id": "22813", "guid": "89760b4f-49be-40ad-8429-89c247e3e95a", "type": "1", "p": "needs 22813 as field name" }, { "id": "22803", "guid": "03b6c826-e15c-4356-89e8-b0bd509aaeb5", "type": "3", "xmlConvertedValue": "2023-06-15T00:00:00Z", "_value": "needs 22803 as field name" }, { "id": "22804", "guid": "d7683f9c-97bb-461a-97df-36ec6596b4fc", "type": "1", "p": "needs 22804 as field name" }, { "id": "22805", "guid": "33386a3a-c331-4d8c-9825-166c0a5235c2", "type": "3", "xmlConvertedValue": "2023-06-15T00:00:00Z", "_value": "needs 22805 as field name" }, { "id": "22806", "guid": "cd486293-9857-475c-9da3-a06f836edb59", "type": "1", "p": "needs 22806 as field name" } ] } } and have been able to extract id, (some) p data and _value data from Record.Field{} using: | spath path=Record.Field{} output=Field | mvexpand Field | spath input=Field | rename id AS Field_id, value AS Field_value, p AS Field_p , but have been unable get any other data out. The p values that I can get out are single value only. In particular, I need to get the multi-value fields for ListValues{}.ListValue out. In addition, I need to map the values in _value and p to the top ID field in that array. I think the code sample provided above explains what's needed. I know I can do a |eval {id}=value but it's complicated when there are so many more fields other than value, or complicated when the fields are nested. Can someone help with this?
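A hedged sketch for naming each Field element's value after its id: pull one representative value per element with coalesce, then promote it with the dynamic-field eval. The coalesce order is an assumption about which containers can carry the value; extend it for the table/tbody nestings as needed:

| spath path=Record.Field{} output=Field
| mvexpand Field
| spath input=Field
| eval one_value = coalesce('_value', 'p', 'ListValues.ListValue._value', 'Reference._value', 'Users.User._value')
| eval {id} = one_value
| fields - Field one_value
| stats values(*) AS *

After spath input=Field, nested values land in dotted field names such as ListValues.ListValue._value, which must be wrapped in single quotes inside eval. A multivalued p stays multivalued, and the closing stats values(*) folds the per-element rows back into one row per record; if a search covers several records, add a record identifier (e.g. contentId, if it's extracted) to the by clause.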
Hello, How do I give the same rank for the same score? Students d and e have the same score of 73, thus they both get Rank 4, but student f gets Rank 6. Rank 5 is skipped because students d and e have the same score. Thank you for your help. Expected result:

Student Score Rank
a 100 1
b 95 2
c 84 3
d 73 4
e 73 4
f 54 6
g 43 7
h 37 8
i 22 9
j 12 10

This is what I have figured out so far, but it won't take the tied scores into consideration:

| makeresults format=csv data="Student, Score
a,100
b,95
c,84
d,73
e,73
f,54
g,43
h,37
i,22
j,12"
| streamstats count
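A hedged completion of that approach: number the rows with streamstats, then take the minimum row number within each Score group as the rank. This is "competition ranking", which is exactly what skips 5 after the tie at 4:

| makeresults format=csv data="Student,Score
a,100
b,95
c,84
d,73
e,73
f,54
g,43
h,37
i,22
j,12"
| sort 0 - Score
| streamstats count AS row
| eventstats min(row) AS Rank by Score
| fields Student Score Rank

Students d and e share Rank 4 (the smallest row number in their tie group) and f keeps Rank 6.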
Hello! This is probably a simple question, but I've been kind of struggling with it. I'm building out my first playbook, which triggers off of new artifacts. The artifacts include fields for: type, value, tag. What I'm trying to do is have those fields from the artifact passed directly into a custom code block in my playbook. How do I go about accessing those fields? I've tried using phantom.collect2(container=container, datapath=["artifact:FIELD_NAME*"]) in the code block, but it doesn't return anything. I thought maybe I needed to set up custom fields to define type, value and tag in the custom fields settings, but that didn't change anything either. Any help would be appreciated, thank you!
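A hedged pointer: collect2 datapaths address artifacts as artifact:*.<path>, and custom fields usually live under the CEF dictionary, so artifact:FIELD_NAME* likely never matches. A sketch, assuming type/value/tag were written into the artifact's cef data:

results = phantom.collect2(
    container=container,
    datapath=[
        "artifact:*.cef.type",
        "artifact:*.cef.value",
        "artifact:*.cef.tag",
        "artifact:*.id",
    ],
)

# each item is one artifact's values, in datapath order
for artifact_type, value, tag, artifact_id in results:
    phantom.debug("artifact {}: type={} value={} tag={}".format(
        artifact_id, artifact_type, value, tag))

If the fields are artifact-level attributes rather than CEF fields (for example the built-in tags list), the datapath would be artifact:*.tags instead; a phantom.debug of a broad artifact:* collect can confirm the real structure.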
I am looking to extract some information from a Values field that has two values within it. How can I specify which one of the values I need in a search, as the two values are meant to be "read" and "written"? This is my current search right now, and I think it is combining both values:

index="collectd_test" plugin=disk type=disk_octets plugin_instance=$plugin_instance1$
| stats min(value) as min max(value) as max avg(value) as avg
| eval min=round(min, 2)
| eval max=round(max, 2)
| eval avg=round(avg, 2)
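A hedged adjustment: collectd's disk_octets type carries two data sources, read and write, and depending on how the data reaches Splunk they're distinguished by a dsname-style field or by type_instance; check one raw event to see which name applies. Splitting the stats by that field keeps the two series separate (dsname below is an assumption):

index="collectd_test" plugin=disk type=disk_octets plugin_instance=$plugin_instance1$
| stats min(value) as min, max(value) as max, avg(value) as avg by dsname
| foreach min max avg [ eval <<FIELD>> = round('<<FIELD>>', 2) ]

With the by clause in place, you can then filter to a single series, e.g. | search dsname=read.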
All, Leveraging the following article (https://community.splunk.com/t5/Other-Usage/How-to-export-reports-using-the-REST-API/m-p/640406/highlight/false#M475) I was able to successfully modify the script to: 1. Run using an API token (as opposed to credentials). 2. Run a search I am interested in returning data from. I am, however, running into an error with my search (shown below).

<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="ERROR">Unparsable URI-encoded request data</msg>
  </messages>
</response>

The script itself now looks like this (I have removed the token and obscured the Splunk endpoint for obvious reasons):

#!/bin/bash
# A simple bash script example of how to get notable events details from REST API

# EXECUTE search and retrieve SID
SID=$(curl -H "Authorization: Bearer <token ID here>" -k https://host.domain.com:8089/services/search/jobs -d search=" search index=index sourcetype="sourcetype" source="source"
    [ search index="index" sourcetype="sourcetype" source="source" deleted_at="null"
    | rename uuid AS host_uuid
    | stats count by host_uuid
    | fields host_uuid ]
| rename data.id AS Data_ID host_uuid AS Host_ID port AS Network_Port
| mvexpand data.xrefs{}.type
| strcat Host_ID : Data_ID : Network_Port Custom_ID_1
| strcat Host_ID : Data_ID Custom_ID_2
| stats latest(*) as * by Custom_ID_1
| search state!="fixed"
| search category!="informational"
| eval unixtime=strptime(first_found,"%Y-%m-%dT%H:%M:%S")" <removed some of the search for brevity> \
| grep "sid" | awk -F\> '{print $2}' | awk -F\< '{print $1}')
echo "SID=${SID}"

Omitted the remaining portion of the script for brevity....

It is at the point shown in parentheses (| eval unixtime=strptime(first_found,"%Y-%m-%dT%H:%M:%S")) that I am getting the error in question. The search runs fine up to the point where I convert the time. I tried escaping using "\", but that did not seem to help. I am sure I am missing something simple and am looking for some help.
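A hedged observation: "Unparsable URI-encoded request data" typically means the POST body was not URL-encoded. curl's -d sends the search string raw, so the percent signs in %Y-%m-%dT%H:%M:%S are read by the server as broken URL escapes. Switching to --data-urlencode lets curl encode the whole search for you; the "..." below stands for the rest of your search, unchanged:

SID=$(curl -k -H "Authorization: Bearer <token ID here>" \
  https://host.domain.com:8089/services/search/jobs \
  --data-urlencode search='search index=index ... | eval unixtime=strptime(first_found,"%Y-%m-%dT%H:%M:%S")' \
  | grep "sid" | awk -F\> '{print $2}' | awk -F\< '{print $1}')

Single-quoting the search string also stops the shell from mangling the inner double quotes, which the current -d search=" ... "sourcetype" ... " nesting would otherwise break.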
Hello Community, I'm seeking guidance on optimizing a Splunk search query that involves multiple table searches and joins. The primary issue I'm encountering is the limitation imposed by subsearches, which restricts the total records to 50,000. Here's the current query structure I'm working with:

index="sample" "message.process"="*app-name1" "message.flowName"="*| *"
| rex field=message.correlationId "(?<UUID>^[0-9a-z-]{0,36})"
| rename "message.flowName" as sapi-outbound-call
| stats count by sapi-outbound-call UUID
| join type=inner UUID
    [search index="sample" "message.process"="*app-name2" "message.flowName"="*| *"
    | rex field=message.correlationId "(?<UUID>^[0-9a-z-]{0,36})"
    | rename "message.flowName" as exp-inbound-call]
| stats count by exp-inbound-call sapi-outbound-call
| join left=L right=R where L.exp-inbound-call = R.exp-inbound-call
    [search index="sample" "message.process"="*app-name2" "message.flowName"="*| *"
    | rename "message.flowName" as exp-inbound-call
    | stats count by exp-inbound-call]
| stats list(*) AS * by R.exp-inbound-call R.count
| table R.exp-inbound-call R.count L.sapi-outbound-call L.count

The intention behind this query is to generate statistics based on two query searches or tables while filtering the data on a common UUID. However, the use of multiple joins with subsearches runs into the 50,000-record cap. I'm looking for alternative approaches or optimizations to achieve the same result without relying heavily on joins and subsearches. Any insights, suggestions, or examples would be incredibly valuable. Thank you in advance for your help and expertise! Regards
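A hedged restructuring: both joins pull from the same index, so one pass over a combined base search plus a stats by UUID can avoid the subsearch caps entirely. A sketch (the calltype derivation assumes message.process reliably distinguishes the two apps; underscored field names avoid having to quote hyphens):

index="sample" ("message.process"="*app-name1" OR "message.process"="*app-name2") "message.flowName"="*| *"
| rex field=message.correlationId "(?<UUID>^[0-9a-z-]{0,36})"
| eval calltype = if(like('message.process', "%app-name1"), "sapi_outbound", "exp_inbound")
| eval flow = 'message.flowName'
| stats values(eval(if(calltype=="sapi_outbound", flow, null()))) AS sapi_outbound_call,
        values(eval(if(calltype=="exp_inbound", flow, null()))) AS exp_inbound_call
        by UUID
| where isnotnull(sapi_outbound_call) AND isnotnull(exp_inbound_call)
| stats count by exp_inbound_call, sapi_outbound_call

stats is not subject to the 50,000-row subsearch limit, so the UUID correlation happens in a single reduce instead of two joins.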
index=netlogs [| inputlookup baddomains.csv | eval url = "*.".domain."*" | fields url] NOT [| inputlookup good_domains.csv | fields domain]

I don't think my search is doing what I want it to do. I would like to take the bad domains from the first lookup table and search the netlogs index to see if there are any hits; however, I would like to exclude the good domains in the second lookup table from the search. Does anyone know if there is a better way to do this?
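A hedged note on the NOT clause: a bare | fields domain subsearch expands to domain="value1" OR domain="value2" ..., which only filters anything if the netlogs events actually have an extracted field named domain. If the good-domains list should cancel out url matches instead, renaming inside the subsearch keeps both halves on the same field (the good_domains.csv column name is assumed from the question):

index=netlogs
    [| inputlookup baddomains.csv | eval url = "*.".domain."*" | fields url]
    NOT
    [| inputlookup good_domains.csv | eval url = "*.".domain."*" | fields url]

Another angle that scales better than subsearch expansion, assuming a field (here called extracted_domain) holds the event's domain: run | lookup good_domains.csv domain AS extracted_domain OUTPUT domain AS is_good after the base search, then | where isnull(is_good).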
Hello All, I am setting up a multisite indexer cluster with cluster manager redundancy, with 2 cluster managers (site1 and site2). Below is the config, e.g.:

[clustering]
mode = manager
manager_switchover_mode = auto
manager_uri = clustermanager:cm1,clustermanager:cm2
pass4SymmKey = changeme

[clustermanager:cm1]
manager_uri = https://10.16.88.3:8089

[clustermanager:cm2]
manager_uri = https://10.16.88.4:8089

My question is: I have 2 indexers on each site. Should I set manager_uri on the site1 peers (indexers) to point to cm1 and manager_uri on the site2 peers to point to cm2, or should they all point to the same cluster manager?

indexer 1 / indexer 2 - manager_uri = https://10.16.88.3:8089
indexer 3 / indexer 4 - manager_uri = https://10.16.88.4:8089

Also, what should I define for manager_uri on the Search Heads? Please advise.

Thanks, Dhana
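A hedged reading of the cluster-manager-redundancy docs: peers and search heads don't pick one manager each; they list both managers the same way the managers themselves do, and follow whichever is active. A sketch of server.conf for every indexer and search head (IPs taken from your post):

[clustering]
mode = peer        # on search heads use mode = searchhead
manager_uri = clustermanager:cm1,clustermanager:cm2
pass4SymmKey = changeme

[clustermanager:cm1]
manager_uri = https://10.16.88.3:8089

[clustermanager:cm2]
manager_uri = https://10.16.88.4:8089

With this layout there is no per-site split: all four indexers and the search heads carry the identical pair of clustermanager stanzas.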
Hi, We have enabled all the default JMX metric collection in the configuration (Kafka, Tomcat, WebLogic, PMI, Cassandra, etc.), but very limited metrics are available under the Metric Browser. Only JVM -> classes, garbage collection, memory, and threads are visible; none of the above appear. Why is that? We are most interested in the Tomcat-related JMX metrics. Your inputs are much appreciated. Thanks, Viji
I have an index that provides a Date and a row count to populate a line chart on a dashboard using DB Connect. The data looks like this:

Date Submissions
2023-11-13 7
2023-11-14 35
2023-11-15 19

When the line chart displays the data, the dates show up like this: 2023-11-12T19:00:00-05:00, 2023-11-13T19:00:00-05:00, 2023-11-14T19:00:00-05:00. Is there some setting/configuration that needs to be updated?
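A hedged explanation: the chart is rendering Date as a timestamp, and the -05:00/19:00 pattern is UTC midnight displayed in US Eastern time, which also shifts the label one day back. If the events keep the raw Date column as a field, charting by that string field avoids timestamp rendering entirely (the index name below is a placeholder):

index=your_dbconnect_index
| chart values(Submissions) AS Submissions by Date

If the dashboard charts by _time instead, the other half of the fix is the DB Connect input's timestamp column and timezone settings, so that the parsed _time lands on the intended calendar day.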