All Topics


The main question is: does configuration file precedence apply to the savedsearches.conf file? The documentation for savedsearches.conf says to read about configuration file precedence. https://docs.splunk.com/Documentation/Splunk/9.3.0/admin/Savedsearchesconf https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/Wheretofindtheconfigurationfiles According to the configuration file precedence page, the priority of savedsearches.conf is determined by the application/user context, in reverse lexicographic order. That is, the configuration from add-on B overrides the configuration from add-on A. I have a saved search defined in add-on A (an add-on from Splunkbase). There is a missing index call in its SPL. I created app B with a savedsearches.conf, created an identically named stanza there, and provided a single parameter, "search =", containing a new SPL query that includes the particular index call. I was hoping that my new add-on named "B" would override the search query in add-on A, but it didn't. Splunk reports that I have a duplicate configuration. I hope I described this in an understandable way. I must be missing something.
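For reference, the override attempt described above would look roughly like this (a minimal sketch; the stanza name, app directory, and index name are hypothetical placeholders for the real add-on's values):

    # etc/apps/B/local/savedsearches.conf
    # Stanza name must match the saved search name in add-on A exactly
    [Saved Search Name From Addon A]
    # Only the overridden attribute is supplied; everything else is
    # expected to merge from add-on A if precedence applies as documented
    search = index=the_correct_index sourcetype=vendor:events | stats count by host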
Wondering if there are any industry best practices and/or recommendations for setting fileSizeGB and fileCount thresholds when detecting data exfiltration over USB devices with the help of Proofpoint ITM events in Splunk. I know we all have different levels of risk as we try to limit the number of false positives for our SOC team. We started out with eval fileSizeGB=(Total/1000000) | where fileSizeGB > 100 AND fileCount > 100. These thresholds are yielding few detections/alerts, so we know we need to lower them. You can probably guess any insider threat team would want fileSizeGB > 10 AND fileCount > 1. Just trying to find a happy medium for all, so any best practices or suggestions are appreciated.
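As a point of reference, a threshold sketch in SPL (assuming Total is in bytes, which is an assumption about the Proofpoint ITM field; note that dividing a byte count by 1,000,000 yields megabytes, not gigabytes, so a GB conversion needs pow(1024,3) or 1,000,000,000 as the divisor):

    ... | eval fileSizeGB = Total / pow(1024,3)
    | where fileSizeGB > 10 AND fileCount > 1

If Total is already in kilobytes in your events, the original divisor of 1,000,000 would in fact produce gigabytes, so verify the source unit before tuning thresholds.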
Example: the 1st report covers June 1 to June 16, the 2nd report covers June 17 to June 30, and both reports are sent at the beginning of the next month, July 1st. Next month rolls in: the 1st report covers July 1 to July 16, the 2nd report covers July 17 to July 31, and both are sent at the beginning of the next month, August 1st, etc.
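One way to sketch this is two scheduled saved searches whose time windows use relative time modifiers (a minimal savedsearches.conf sketch; the search names, index, recipient, and the 06:00 run time are hypothetical; the dispatch modifiers are the point):

    # etc/apps/my_app/local/savedsearches.conf
    [Report - first half of previous month]
    # Run at 06:00 on the 1st of every month
    cron_schedule = 0 6 1 * *
    # Start of previous month through the 17th at midnight (covers days 1-16)
    dispatch.earliest_time = -1mon@mon
    dispatch.latest_time = -1mon@mon+16d
    action.email = 1
    action.email.to = team@example.com
    search = index=my_index | stats count by host

    [Report - second half of previous month]
    cron_schedule = 0 6 1 * *
    # Day 17 of previous month through the start of the current month
    dispatch.earliest_time = -1mon@mon+16d
    dispatch.latest_time = @mon
    action.email = 1
    action.email.to = team@example.com
    search = index=my_index | stats count by host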
Hello All, I need to find searches whose time range is All time. I used the SPL below:

    index=_audit action=search provenance=* info=completed host IN (...)
    | table user, apiStartTime, apiEndTime, search_et, search_lt, search
    | search apiStartTime='ZERO_TIME' OR apiEndTime='ZERO_TIME'
    | convert ctime(search_*)

I get results with:

- apiStartTime: empty
- apiEndTime: 'ZERO_TIME'
- search_et: 07/31/2024 00:00:00
- search_lt: 08/29/2024 13:10:58

So, how do I interpret these results, and how do I modify the SPL to fetch the correct results? Thank you, Taruchit
Hi, can you please help me with the code I can add to have more options for the Dynatrace collection interval for v2 metrics collections? For example, collecting 4 minutes of data every 4 minutes. Thanks! #Dyntraceaddon
Hello, thank you in advance for your help. I just need to create a field in Splunk Search that contains the value between two delimiters. The delimiter is "?". For example, given Athena.siteone.com?suvathp001?443, what would be the regex to extract only suvathp001? Thanks again for your help, Tom
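A minimal SPL sketch (the field name url and the capture name middle are hypothetical; the regex grabs everything between the first and second "?"):

    | makeresults
    | eval url="Athena.siteone.com?suvathp001?443"
    | rex field=url "\?(?<middle>[^?]+)\?"
    | table url middle

On the sample value this yields middle=suvathp001; against real events, replace the makeresults/eval lines with the base search and point field= at the actual field name.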
Background

I have a very legacy application with bad/inconsistent log formatting, and I want to be able to somehow collect this in Splunk via Universal Forwarder. The issue is with multiple-line events, which dump XML documents containing separate timestamps into log messages.

Issue

Because these multiline messages contain a timestamp within the body of the XML, and this becomes part of the body of the log message, Splunk is inserting events with "impossible" timestamps. For example, an event will get indexed as happening in 2019 when it is actually a log event from 2024 that output an XML body containing an <example></example> element holding a 2019 timestamp, and part of this body is stored as a Splunk event from 5 years ago.

Constraints

- I cannot modify the configuration of the Splunk indexer/search head/anything other than the Universal Forwarder that I control.
- I do not have access to licensing to be able to run any Heavy Forwarders; I can only go from the Universal Forwarder on hosts which I control directly to an HTTP Event Collector endpoint that I do not control.
- I cannot (easily) change the log format to not dump these bodies. There is a long-term ask on the team to fix up logging to be a) consistent and b) more ingest-friendly, but I'm looking for any interim solution that I can apply on the component I control directly, which is basically the Universal Forwarder only.

Ideas?

My only idea so far is a custom sourcetype which specifies the log timestamp format exactly, including a regex anchor to the start of the line, and also reduces/removes the MAX_TIMESTAMP_LOOKAHEAD value to stop Splunk from looking past the first match. I believe this would mean that all the lines in an event would be considered correctly, because the XML document would start with either whitespace or a < character. However, my understanding is that this would require a change either to the indexer or to a Heavy Forwarder, which I can't do. I'm looking for any alternatives this community can offer as a potential workaround until the log sanitization effort gets off the ground.
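For what it's worth, here is the props.conf shape the idea above describes (a minimal sketch; the sourcetype name and timestamp format string are hypothetical and must match the application's real line prefix; as the post notes, these are parse-time settings, so for UF-forwarded data they take effect on the indexer rather than the forwarder):

    # props.conf
    [legacy_app:log]
    # Timestamp must appear at the start of the event...
    TIME_PREFIX = ^
    # ...and match the application's real format exactly (hypothetical here)
    TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
    # Stop scanning just past the prefix so timestamps buried inside
    # the XML body are never considered
    MAX_TIMESTAMP_LOOKAHEAD = 25
    # Break events only where a new timestamped line begins
    LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}
    SHOULD_LINEMERGE = false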
Hi, could you please add a troubleshooting description for the app? We just installed it and unfortunately can't configure it; the page immediately returns an HTTP 500, e.g.

"GET /en-US/splunkd/__raw/servicesNS/nobody/ta-mdi-health-splunk/TA_microsoft_graph_security_add_on_for_splunk_microsoft_graph_security?output_mode=json&count=-1 HTTP/1.1" 500 303 "-" "

Thanks
Hello, I am currently working on a project that involves integrating Splunk with Azure Virtual Desktop (AVD). Could you please provide me with any available documentation or resources that detail the process or best practices for this integration? Any guidance or links to relevant materials would be greatly appreciated. Thank you in advance for your assistance. Best regards,
Hi! I am working as an IAM Specialist, but I am looking to pivot to Splunk. I would like to set up a Splunk Enterprise environment using VMware where I can practice the basics and move on to more advanced functions, including getting a solid base and understanding of networking. After watching many videos covering all kinds of setups, I am not sure which would be best for me setup-wise, and I was wondering if anyone can suggest a setup that works best given my laptop configuration. I would like to practice on VMs for both Windows and Linux. Laptop config: Lenovo IdeaPad touchscreen, AMD Ryzen 7 7730U, WUXGA, 16 GB RAM, 1 TB SSD, Windows 11. Any information would be highly appreciated.
I want to create one static field based on whether any status value = Issue.

host   m_nname   status
A      cpu       Ok
B      disk      Ok
C      memory    Issue
D      netwok    Ok
E      storage   Issue

Because an Issue was found in the status column, a Health field should be created with the value Bad on every row, like below:

host   m_nname   status   Health
A      cpu       Ok       Bad
B      disk      Ok       Bad
C      memory    Issue    Bad
D      netwok    Ok       Bad
E      storage   Issue    Bad
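A minimal SPL sketch of one way to do this (assuming the events already produce the host, m_nname, and status fields shown above):

    ... | eventstats values(status) as all_status
    | eval Health=if(isnotnull(mvfind(all_status, "^Issue$")), "Bad", "Good")
    | fields - all_status

eventstats copies the set of observed status values onto every row, so Health flips to Bad across the board as soon as any row reports Issue.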
So I am using multiselect inputs to take dynamic input from the user, and this works fine when I have individual searches running to populate the dynamic list for each input. But since the base search for all those inputs is the same, I thought I would use Splunk's base search feature to populate the lists. That works fine on the first submit, but once the panels are loaded and the user wants to change the value in a multiselect input, it no longer lists all the values that were available at first. So I wanted to know whether there is something we can do to have this work the same way it does for individual dynamic searches, meaning the underlying values returned at first should remain intact, or at least the list should repopulate when the user selects the "All" option. I have tried token set/unset and such, but no luck. I also tried using different base searches for the multiselect dropdowns and the panel, but that didn't work either.

Following is the XML with a base search, which has the issue of reselecting multiselect dropdown values after submission:

<form version="1.1" theme="light">
  <label>testing Clone</label>
  <search id="base_dropdown">
    <query>index=main sourcetype=access_combined_wcookie status IN ($status_tok$) file IN ($file_tok$) itemId IN ($itemId_tok$)</query>
    <earliest>$time_tok.earliest$</earliest>
    <latest>$time_tok.latest$</latest>
  </search>
  <search id="base_panel">
    <query>index=main sourcetype=access_combined_wcookie status IN ($status_tok$) file IN ($file_tok$) itemId IN ($itemId_tok$)</query>
    <earliest>$time_tok.earliest$</earliest>
    <latest>$time_tok.latest$</latest>
  </search>
  <fieldset submitButton="true" autoRun="true">
    <input type="time" token="time_tok">
      <label>Time</label>
      <default>
        <earliest>-7d@d</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="multiselect" token="status_tok">
      <label>status</label>
      <choice value="*">All</choice>
      <default>*</default>
      <delimiter>,</delimiter>
      <fieldForLabel>status</fieldForLabel>
      <fieldForValue>status</fieldForValue>
      <search base="base_dropdown">
        <query>|stats count by status|sort 0 + status</query>
      </search>
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
    </input>
    <input type="multiselect" token="file_tok">
      <label>file</label>
      <choice value="*">All</choice>
      <default>*</default>
      <delimiter>,</delimiter>
      <fieldForLabel>file</fieldForLabel>
      <fieldForValue>file</fieldForValue>
      <search base="base_dropdown">
        <query>|stats count by file|sort 0 + file</query>
      </search>
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
    </input>
    <input type="multiselect" token="itemId_tok">
      <label>itemId</label>
      <choice value="*">All</choice>
      <default>*</default>
      <delimiter>,</delimiter>
      <fieldForLabel>itemId</fieldForLabel>
      <fieldForValue>itemId</fieldForValue>
      <search base="base_dropdown">
        <query>|stats count by itemId|sort 0 + itemId</query>
      </search>
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>Count</title>
        <search base="base_panel">
          <query>| stats count</query>
          <!-- <earliest>$time_tok.earliest$</earliest>
               <latest>$time_tok.latest$</latest> -->
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>

Following is the version without a base search for the multiselect dropdowns, which works as expected:

<form version="1.1" theme="light">
  <label>testing</label>
  <!-- <search id="base_dropdown">
       <query>index=main sourcetype=access_combined_wcookie status IN ($status_tok$) file IN ($file_tok$) itemId IN ($itemId_tok$)</query>
       <earliest>$time_tok.earliest$</earliest>
       <latest>$time_tok.latest$</latest>
  </search> -->
  <search id="base_panel">
    <query>index=main sourcetype=access_combined_wcookie status IN ($status_tok$) file IN ($file_tok$) itemId IN ($itemId_tok$)</query>
    <earliest>$time_tok.earliest$</earliest>
    <latest>$time_tok.latest$</latest>
  </search>
  <fieldset submitButton="true" autoRun="true">
    <input type="time" token="time_tok">
      <label>Time</label>
      <default>
        <earliest>-7d@d</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="multiselect" token="status_tok">
      <label>status</label>
      <choice value="*">All</choice>
      <default>*</default>
      <delimiter>,</delimiter>
      <fieldForLabel>status</fieldForLabel>
      <fieldForValue>status</fieldForValue>
      <search>
        <query>index=main sourcetype=access_combined_wcookie earliest="$time_tok.earliest$" latest="$time_tok.latest$" |stats count by status|sort 0 + status</query>
      </search>
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
    </input>
    <input type="multiselect" token="file_tok">
      <label>file</label>
      <choice value="*">All</choice>
      <default>*</default>
      <delimiter>,</delimiter>
      <fieldForLabel>file</fieldForLabel>
      <fieldForValue>file</fieldForValue>
      <search>
        <query>index=main sourcetype=access_combined_wcookie earliest=$time_tok.earliest$ latest="$time_tok.latest$"|stats count by file|sort 0 + file</query>
      </search>
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
    </input>
    <input type="multiselect" token="itemId_tok">
      <label>itemId</label>
      <choice value="*">All</choice>
      <default>*</default>
      <delimiter>,</delimiter>
      <fieldForLabel>itemId</fieldForLabel>
      <fieldForValue>itemId</fieldForValue>
      <search>
        <query>index=main sourcetype=access_combined_wcookie earliest=$time_tok.earliest$ latest="$time_tok.latest$"|stats count by itemId|sort 0 + itemId</query>
      </search>
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>Count</title>
        <search base="base_panel">
          <query>| stats count</query>
          <!-- <earliest>$time_tok.earliest$</earliest>
               <latest>$time_tok.latest$</latest> -->
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>
I have sample data pushed to Splunk as below. Help me with a Splunk query that returns only unique server names with a final status as the second column. For each server, compare all of its second-column values: if any of the values is No for that server, the final status for that server is No; if all of the values are Yes for a server, the final status for that server is Yes.

sample.csv:

ServerName,Status
Server1,Yes
Server1,No
Server1,Yes
Server2,No
Server2,No
Server3,Yes
Server3,Yes
Server4,Yes
Server5,No
Server6,Yes
Server6,No
Server6,Yes
Server6,No
Server7,Yes
Server7,Yes
Server7,Yes
Server7,Yes
Server8,No
Server8,No
Server8,No
Server8,No

The output should look similar to:

ServerName,FinalStatus
Server1,No
Server2,No
Server3,Yes
Server4,Yes
Server5,No
Server6,No
Server7,Yes
Server8,No
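A minimal SPL sketch (assuming the CSV is available as a lookup named sample.csv; mvfind returns null when no value matches, so a non-null result means at least one No for that server):

    | inputlookup sample.csv
    | stats values(Status) as statuses by ServerName
    | eval FinalStatus=if(isnotnull(mvfind(statuses, "^No$")), "No", "Yes")
    | table ServerName FinalStatus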
Everything I have learned about prompts so far says I need to open a browser and answer the prompt in the SOAR GUI. Is there any REST API or link available for answering a prompt? I want to pass some variables in the mail so that if somebody clicks a certain link, it will automatically accept or reject the prompt for event "4" via the API. It would reduce IT's workload!
Hello Splunkers, I have 7 files in JSON format (the JSON format is the same for each file), so I applied one parsing configuration for all of them.

* On UF *

    [source::/opt/splunk/etc/apps/app_name/result/*.json]
    INDEXED_EXTRACTIONS = json
    EVENT_BREAKER_ENABLE = true
    EVENT_BREAKER = ([\r\n]+)

* On IDX *

    [sourcetype_name]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    NO_BINARY_CHECK = true
    CHARSET = UTF-8
    TIME_PREFIX = \"timestamp\"\:\s\"
    MAX_TIMESTAMP_LOOKAHEAD = 19
    TIME_FORMAT = %Y-%m-%dT%H:%M:%S
    TRUNCATE = 999999

* On Search Head *

    [sourcetype_name]
    KV_MODE = none

Parsing works for all files except one. Here is an excerpt: a timestamp with a none value. Can you help me with this?
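For context, the TIME_PREFIX/TIME_FORMAT pair above expects each record to carry a timestamp shaped like this (a hypothetical record; only the timestamp key matters here):

    { "timestamp": "2024-08-29T06:23:03", "level": "INFO", "message": "..." }

A file whose records have "timestamp": null (or no timestamp at all) gives the extractor nothing to match after the prefix, which fits the "none value" symptom; in that case Splunk typically falls back to other timestamp sources, such as the previous event's time or the file's modification time.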
Hi Team, when I try to exclude one field value with the condition sessionId!=X, it's not working. Even though I also used the "NOT" condition, the field value I am trying to exclude still shows in the results. Could you please help with how I can exclude a particular field value?

    host="*" sessionId!=X
    host="*" NOT sessionId!=X
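For comparison, a sketch of the two usual exclusion forms (X stands for the literal session ID being excluded; the index name is hypothetical). sessionId!="X" keeps only events where sessionId exists and differs from X, while NOT sessionId="X" also keeps events that have no sessionId field at all; note that the second search above, NOT sessionId!=X, is a double negative that actually keeps the X events:

    index=my_index host="*" sessionId!="X"
    index=my_index host="*" NOT sessionId="X"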
Hello, I've created a dashboard that shows 4 teams in a dropdown menu. When I choose one of the teams, I want to see only the panels for that specific team. I've created the dropdown input and given it a label called Team, with static options like Team 1, Team 2, Team 3, Team 4. So my question is: how do I assign each panel/chart to one of the teams in the dropdown? From some of the online searching I've done, it suggests using the tokenization concept. Could you please help me achieve this result?
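A minimal Simple XML sketch of that token approach (two of the four teams shown; token names, values, and panel searches are hypothetical). The dropdown's change handler sets one show_* token per selection, and each panel's depends attribute hides it unless its token is set:

    <input type="dropdown" token="team_tok">
      <label>Team</label>
      <choice value="team1">Team 1</choice>
      <choice value="team2">Team 2</choice>
      <change>
        <condition value="team1">
          <set token="show_team1">true</set>
          <unset token="show_team2"></unset>
        </condition>
        <condition value="team2">
          <set token="show_team2">true</set>
          <unset token="show_team1"></unset>
        </condition>
      </change>
    </input>
    <row>
      <panel depends="$show_team1$">
        <title>Team 1</title>
        <table>
          <search>
            <query>| makeresults | eval team="Team 1"</query>
          </search>
        </table>
      </panel>
      <panel depends="$show_team2$">
        <title>Team 2</title>
        <table>
          <search>
            <query>| makeresults | eval team="Team 2"</query>
          </search>
        </table>
      </panel>
    </row>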
Hi, I'm trying to ingest the GuardDuty logs using the Splunk Add-on for AWS app. The input method is Generic S3, and logs from CloudTrail or WAF come in fine, but the GuardDuty logs are not coming in. The data is definitely in the S3 bucket. I'm attaching the GuardDuty log. Thank you.
Hello everyone! How can we approach searching for secrets across all (or some) Splunk indexes without putting heavy load on Splunk? It is obvious that the list of indexes needs to be limited. What else?
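As one illustration of the shape such a search might take (a minimal sketch; the index pattern, the time window, and the secret patterns are hypothetical and would need tuning to your data):

    index=app_* earliest=-24h
    | regex _raw="(?i)(api[_-]?key|passwd|password|secret|token)\s*[:=]\s*\S{8,}"
    | stats count by index, sourcetype

Constraining the index list, keeping the time range short, and scheduling the scan off-peak are the usual levers for keeping the load down.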
We have the data below in JSON format. I need help with a custom JSON response handler so Splunk can break every event out separately. Each event starts with the record_id.

{
  "eventData": [
    {
      "record_id": "19643",
      "eventID": "1179923",
      "loginID": "PLI",
      "userDN": "cn=564SD21FS8DF32A1D87FAD1F,cn=Users,dc=us,dc=oracle,dc=com",
      "type": "CredentialValidation",
      "ipAddress": "w.w.w.w",
      "status": "success",
      "accessTime": "2024-08-29T06:23:03.487Z",
      "oooppd": "5648sd1csd-952f-d630a41c87ed-000a3e2d",
      "attributekey": "User-Agent",
      "attributevalue": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36"
    },
    {
      "record_id": "19644",
      "eventID": "1179924",
      "loginID": "OKP",
      "userDN": "cn=54S6DF45S212XCV6S8DF7,cn=Users,dc=us,dc=CVGH,dc=com",
      "type": "Logout",
      "ipAddress": "X.X.X.X",
      "status": "success",
      "accessTime": "2024-08-29T06:24:05.040Z",
      "oooppd": "54678S3D2FS962SDFV3246S8DF",
      "attributekey": "User-Agent",
      "attributevalue": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36"
    }
  ]
}
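A minimal Python sketch of the splitting logic (standalone; the function name is hypothetical, and where it plugs in depends on which modular input is in use; for example, the REST API modular input keeps its handlers in responsehandlers.py, so check that app for the exact handler signature it expects):

    import json

    def split_event_data(payload: str):
        """Yield one compact JSON string per record in eventData."""
        data = json.loads(payload)
        for record in data.get("eventData", []):
            # Each record becomes its own Splunk event
            yield json.dumps(record, separators=(",", ":"))

    if __name__ == "__main__":
        sample = '{"eventData": [{"record_id": "19643"}, {"record_id": "19644"}]}'
        for event in split_event_data(sample):
            print(event)

With LINE_BREAKER = ([\r\n]+) and SHOULD_LINEMERGE = false on the receiving sourcetype, each emitted line then indexes as a separate event.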