All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi All, thanks for your time. I am sorry in advance, as this is a very basic question; I have just started exploring the search query language. If I have something like the below:

    index=ADFS_AWS AND clientId IN ("Abc123","ABC123","ABC_ABC","abc_abc")

this searches only for these clientIds. Option 2 would be filtering with where:

    index=ADFS_AWS | where clientId IN ("Abc123","ABC123","ABC_ABC","abc_abc")

Which one should we be using, and which is more efficient?
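As a general rule in SPL, filtering in the base search is preferable to a later where, because base-search predicates are applied at the indexers before events are returned, while where runs after the events have already been retrieved. A minimal sketch of the two variants (index and field names taken from the question above):

    index=ADFS_AWS clientId IN ("Abc123","ABC123","ABC_ABC","abc_abc")

    index=ADFS_AWS | where clientId IN ("Abc123","ABC123","ABC_ABC","abc_abc")

Both return the same events; the first form usually scans less data and is the more efficient choice. Note also that field matching in the base search is case-insensitive while where is case-sensitive, so the two may not behave identically across values like "Abc123" vs "ABC123".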
I'm using the splunk-otel-collector and attempting to get multi-line Java exceptions into a single, consistently formatted event. Following the example, my values file contains:

    multilineConfigs:
      - namespaceName:
          value: example
          useRegexp: true
        firstEntryRegex: ^[^\s].*
        combineWith: ""

The rendered configMap contains:

    - combine_field: attributes.log
      combine_with: ""
      id: example
      is_first_entry: (attributes.log) matches "^[^\\s].*"
      max_log_size: 1048576
      output: clean-up-log-record
      source_identifier: resource["com.splunk.source"]
      type: recombine

With that config, the logs continue to split. If I then change the value to:

    combineWith: "\t"

the following happens with the logs: (screenshot not preserved). Has anyone experienced this and worked around it?
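One avenue worth trying (an untested sketch, not a confirmed fix): the rendered operator is the Stanza recombine operator, whose combine_with normally defaults to a newline, so setting combineWith to an explicit "\n" instead of the empty string may keep the exception lines both combined and readable:

    multilineConfigs:
      - namespaceName:
          value: example
          useRegexp: true
        firstEntryRegex: ^[^\s].*
        combineWith: "\n"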
Hello, I am looking to calculate how long it takes to refresh the view, using the times of the events "End View Refresh" and "Start View Refresh", i.e. find the difference in time between each pair of these events whenever the 2 events occur. I have tried a number of things using streamstats and range, but none provides the desired result. Any assistance would be appreciated. Regards
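One pattern that fits this (a sketch; the index name is a placeholder, and it assumes the two phrases appear in the raw event text) is transaction, which pairs the start and end events and computes a built-in duration field in seconds:

    index=your_index ("Start View Refresh" OR "End View Refresh")
    | transaction startswith="Start View Refresh" endswith="End View Refresh"
    | table _time duration

If transaction is too slow at scale, the same pairing can usually be done with streamstats carrying the start time forward onto the matching end event.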
I have two rex queries and want to know how to combine them.

Query 1:

    index=test1 sourcetype=teams
    | search "osversion="
    | rex field=_raw "\s+(?<osVersion>.*?)$"
    | table Time(utc) "OSVersion"

Output:

    time    osversion
    1.1     123
    1.2     1234
    1.3     12345
    1.4     123456

Query 2:

    index=test1 sourcetype=teams
    | search "host=12*"
    | rex field=_raw "\w+(?<host>*)$"
    | table Time(utc) host

Output:

    time    host
    1.1     abc
    1.2     abcd
    1.3     abcde

Please help me combine the above queries so the table looks like below:

    time    osversion    host
    1.1     123          abc
    1.2     1234         abcd
    1.3     12345        abcde
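One way to combine them (a sketch; the two rex patterns below are guesses at the intended extractions, since the originals look incomplete) is to search both event types at once, apply both extractions, and merge rows with stats keyed on the time field:

    index=test1 sourcetype=teams ("osversion=" OR "host=")
    | rex field=_raw "osversion=\s*(?<osVersion>\S+)"
    | rex field=_raw "host=(?<host>\S+)"
    | rename "Time(utc)" as time
    | stats values(osVersion) as osVersion values(host) as host by time

This avoids join/append entirely, which is generally cheaper when both result sets come from the same index and sourcetype.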
Question with regard to "Default value change for the 'max_documents_per_batch_save' setting causes restore from KV store backups made using versions earlier than Splunk Enterprise 9.3.0 to fail". The "9.3 READ THIS FIRST" documentation says that I must restore KV store backups made using Splunk Enterprise 9.2.2 and earlier versions before upgrading to Splunk Enterprise version 9.3.0. I am new to Splunk administration and would appreciate steps (with detailed explanations) for how to accomplish this task and get to the point of upgrading Splunk from 9.2.2 to 9.3.1. This is a single-instance (one server) environment: no distributed components, no clusters. Not running ES, ITSI, or ITE Work. Thanks
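At a high level, KV store backup and restore on a single instance are CLI operations. A rough outline of the documented commands (archive names here are placeholders; verify the exact sequence against the 9.3 READ THIS FIRST and KV store backup docs before running):

    # still on 9.2.2: restore any old (pre-9.3) archive you need to keep
    $SPLUNK_HOME/bin/splunk restore kvstore -archiveName old_backup

    # then take a fresh backup on 9.2.2 so you have a current archive
    $SPLUNK_HOME/bin/splunk backup kvstore -archiveName pre_93_upgrade

    # stop Splunk and run the 9.3.1 installer over the existing installation
    $SPLUNK_HOME/bin/splunk stop

The point of the known issue is that archives created before 9.3.0 may fail to restore after the upgrade, so any restores from old archives should happen while still on 9.2.2.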
Hi all, new to Splunk, running out of ideas, please help! I have created a search to show:

    | bin span=10m _time
    | stats count by _time

This gives me two columns: the time interval in 10-minute bins, and the number of results within that bin. What I would like to do is expand on this search and show the % of bins over a time range that have >= 10 results. Cheers
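One way to get there (a sketch building directly on the search above) is to flag each bin against the threshold and then aggregate the flags:

    | bin span=10m _time
    | stats count by _time
    | eval over_threshold = if(count >= 10, 1, 0)
    | stats sum(over_threshold) as bins_over, count as total_bins
    | eval pct_over = round(bins_over / total_bins * 100, 2)

Note that total_bins here counts only bins that had at least one result; if empty bins should count toward the denominator, use timechart span=10m count instead, which fills empty bins with zero.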
Hi, is it possible to use the same input with 2 different panels? It works fine with 1 panel, as below:

    <panel depends="$tokShowPanelB$">

But I want to use the same input with panelC too. The command below does not work:

    <panel depends="$tokShowPanelB$ , $tokShowPanelC$">

Can someone please help?
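For reference, depends does accept a comma-separated token list, but with AND semantics: the panel is shown only when every listed token is set. Stray spaces inside the quotes (as in the snippet above) can also stop the tokens from being recognized. A sketch with the whitespace removed (token names from the question):

    <panel depends="$tokShowPanelB$,$tokShowPanelC$">

If the goal is OR logic (show the panel when either token is set), one workaround is to have the input's <change> element set a single combined token, e.g. a hypothetical $tokShowPanelBorC$, and make the panel depend on that alone.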
Hi Team, I'm trying to trigger an AutoSys job based on an alert we received in Splunk. Any idea how to achieve it?
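One common pattern (a sketch, assuming the AutoSys client tools are installed and sourced on the machine where the alert runs, and demo_job is a placeholder job name) is a script triggered as a custom alert action that force-starts the job:

    #!/bin/sh
    # Custom alert action script: force-start an AutoSys job when the Splunk alert fires.
    # Assumes the AutoSys environment has already been sourced for the Splunk user.
    sendevent -E FORCE_STARTJOB -J demo_job

If the AutoSys server is not reachable from the Splunk host, a webhook alert action calling an intermediary service that runs sendevent is another option.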
For the past 2 days I've been trying to figure something out. I'll try to be as clear as possible, and hopefully someone can guide me or explain why this works like this. I'm trying to index a CSV file stored in S3, but unfortunately the sourcetype aws:s3:csv is not indexing the file properly, meaning it is not extracting any fields (see the left screenshot in the attached file). I modified the sourcetype aws:s3:csv (under the Splunk Add-on for AWS application) and configured it exactly like the default csv sourcetype (under system/default/props.conf). After doing this, if I index a file manually via Settings > Add data, it is indexed properly (fields are extracted), but if the very same file is indexed by the Splunk Add-on for AWS, again configured with the same sourcetype, there are no extracted fields. See the attached screenshot for reference. I've also tried adding other configurations to the unmodified aws:s3:csv sourcetype, such as INDEXED_EXTRACTIONS = CSV, HEADER_FIELD_LINE_NUMBER = 1, FIELD_NAMES = field1,field2,field3, and various others in props.conf (under the Splunk Add-on for AWS), but without success. The only workaround is to use REPORT-extract_fields in props.conf for that sourcetype and configure it in transforms.conf, but this is not ideal. Additionally, I set the sourcetype to csv (the default Splunk sourcetype) in inputs.conf, but this also does not seem to work.

Splunk 9.2.1
Splunk Add-on for AWS 7.7.0

Similar questions without a proper answer:
https://community.splunk.com/t5/All-Apps-and-Add-ons/Splunk-Add-on-for-Amazon-Web-Services-How-to-get-a-CSV-file/m-p/131725
https://community.splunk.com/t5/Splunk-Enterprise/Splunk-Add-on-for-AWS-Ingesting-csv-files-and-he-fields-are-not/td-p/656923
https://community.splunk.com/t5/All-Apps-and-Add-ons/S3-bucket-with-CSV-files-not-extracting-fields-at-index-time/m-p/458671
https://community.splunk.com/t5/Getting-Data-In/No-fields-or-timestamps-extracted-when-indexing-TSV-from-S3/m-p/660436
https://community.splunk.com/t5/Getting-Data-In/Why-is-CSV-data-not-getting-parsed-while-being-monitored-on/td-p/275515
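One detail that may matter here: INDEXED_EXTRACTIONS is applied by the first full Splunk instance that parses the data, so for a modular input like this add-on the stanza has to be present in props.conf on the heavy forwarder / IDM actually running the aws:s3 input, not only on the indexers or search head. A sketch of the minimal stanza, assuming it is deployed where the input runs:

    # props.conf on the instance running the aws:s3 input
    [aws:s3:csv]
    INDEXED_EXTRACTIONS = csv
    HEADER_FIELD_LINE_NUMBER = 1

If the add-on's input bypasses normal structured-data parsing (which would be consistent with the linked threads), the REPORT/transforms.conf search-time approach you found may genuinely be the supported route.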
Hello! I wanted to ask if anyone has experience with receiving SNMPv2 trap alerts in Splunk 8.2.5 (Windows Server 2019)? Background: we have an environment monitor device that sends high/low temperature alerts to the local SNMP Trap service; from there they are picked up by a generic WMI SNMP provider, from which Splunk pulls the data.

wmi.conf:

    [WMI:SNMP]
    namespace = \\.\root\snmp\localhost
    interval = 10
    wql = SELECT * FROM SnmpNotification
    disabled = 0
    index = snmpindex
    current_only = 1

The problem we're running into is that when the data is ingested, Splunk has an issue translating the VarBindList object it gets from WMI, which contains the SNMP variable binding ("varbind") info that describes the SNMP trap alert from the device (ticks, OID, text of which alert was tripped).

Sample Splunk search result from snmpindex (see VarBindList=<unknown variant result type 8205> below):

    20241007120551.314854
    AgentAddress=10.2.13.19
    AgentTransportAddress=10.2.13.19
    AgentTransportProtocol=IP
    Community=alispub
    Identification=1.3.6.1.4.1.20916.1.13.2.1
    SECURITY_DESCRIPTOR=NULL
    TIME_CREATED=133727763449700336
    TimeStamp=1894
    VarBindList=<unknown variant result type 8205>
    wmi_type=SNMP
    host=MS
    source=WMI:SNMP
    sourcetype=WMI:SNMP

I've been trying various Splunk configs/transforms, XML, etc., but all are basically contingent on getting good data into _raw, and the _raw column just has that message. The RoomAlert 3S device we need to upgrade to only sends SNMPv2 or v3. Everything seems to work fine when the trap is v1 (based on past behavior and our test utility).
I'm still learning Splunk and would like to learn how to combine some searches.

Goal: use the VPN search results to perform firewall searches, one per VPN record found.

Example:

1. Search the vpn index to get a table of assigned_ip and the login/logout times:

    index=vpn computer_name=Desktop_1
    | table assigned_ip login_time logout_time

    assigned_ip       login_time    logout_time
    10.255.111.112    1728409500    1728459000
    10.255.119.199    1728392083    1728401383

2. Use the result above to do a firewall search. (I'd like to use results from step 1 instead of the hardcoded values below; I also want to append the separate rows found in step 1 to find firewall records during the different IP assignments.)

    index=firewall source_ip=10.255.111.112 earliest=1728409500 latest=1728459000
    | append [ search index=firewall source_ip=10.255.119.199 earliest=1728392083 latest=1728401383 ]
    | stats count by destination_ip

The closest I've got so far uses separate subsearch returns, which takes longer to run and doesn't seem to return more than 1 value:

    index=firewall
        source_ip=[ search index=vpn computer_name=Desktop_1 | return $assigned_ip ]
        latest=[ search index=vpn computer_name=Desktop_1 | return $logout_time ]
        earliest=[ search index=vpn computer_name=Desktop_1 | return $login_time ]
    | stats count by destination_ip

Is there a way to do this? I also tried tojson(), but it returns each table row as its own JSON object that I can't use together for the firewall search. Thank you so much in advance.
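The shape you describe (run a secondary search once per row of the first search, substituting that row's values) is what the map command does; each row's fields become $tokens$ in the inner search. A sketch using the names from the question (map re-runs the firewall search per VPN row, so cap it sensibly with maxsearches):

    index=vpn computer_name=Desktop_1
    | table assigned_ip login_time logout_time
    | map maxsearches=100 search="search index=firewall source_ip=$assigned_ip$ earliest=$login_time$ latest=$logout_time$ | stats count by destination_ip"
    | stats sum(count) as count by destination_ip

The final stats merges the per-row counts, since map emits the inner results for each row in turn.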
I tried to run the Indexing Performance: Instance dashboard but was not getting any data. On exploring the search, I found that index=_internal is not doing the field extractions for this data in the log:

    group=per_host_thruput, ingest_pipe=1, series="splunkserver.local", kbps=8.451, eps=32.903, kb=261.974, ev=1020, avg_age=2.716, max_age=3

If I manually extract the fields using rex I can view them in the search, but the dashboard still doesn't show the results. Is there a way to extract these fields for the internal index? Thanks!
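Lines like this in metrics.log are normally handled by Splunk's automatic key=value extraction, so it may be worth checking whether the splunkd sourcetype settings have been overridden locally. As an ad hoc workaround, the extract command can parse the pairs explicitly (a sketch; note the group filter is a quoted string because the field is not yet extracted at that point):

    index=_internal source=*metrics.log "group=per_host_thruput"
    | extract pairdelim="," kvdelim="="
    | table _time series kbps eps kb ev avg_age max_age

This only helps ad hoc searches; for the Monitoring Console dashboard itself, the underlying extraction configuration would still need to be repaired.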
We have some events coming into Splunk that look like the following:

    time="09/10/2024 11:41:15" URL="[Redacted String]" Name="[Redacted String]" Issuer="[Redacted String]" Issued="27/10/2023 13:27:22" Expires="26/10/2025 12:27:22"

Splunk is using ingest time instead of the time field. In props.conf for this sourcetype I have the following:

    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    TIME_PREFIX = time=
    TIME_FORMAT = "%d/%m/%Y %H:%M:%S"
    CHARSET = UTF-8
    KV_MODE = none
    DISABLED = false

However, the time isn't being extracted properly. What do I need to change or add? Thanks.
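One likely culprit (a guess from the sample, not a confirmed diagnosis): props.conf values are taken literally, so the quotation marks in TIME_FORMAT become part of the expected pattern, and TIME_PREFIX stops just before the opening quote of the value. Moving the quote into the prefix and removing it from the format would look like:

    TIME_PREFIX = time="
    TIME_FORMAT = %d/%m/%Y %H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 20

Also remember these are index-time settings: they must live on the parsing tier (indexer or heavy forwarder) and only affect newly indexed events.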
Hello Team, we have a use case where we want to integrate CloudWatch metrics into AppDynamics. What would be the best suggested way to proceed? Regards, Gaurav
After a rolling restart of the peer nodes, one index is not fully searchable. The fixup status is pending; it looks like buckets are not properly synced. Need help sorting it out.
Hi, new to Splunk On-Call. I have set up a new Team with 3 members, and I've created a rotation and shift with all three as members. I'm stuck on the best way to set up the Escalation Policy. I want it to call the initial person on call and then contact the other two in turn if they don't respond, e.g.:

Contact Member 1
Wait 10 mins
Contact Member 2
Wait 10 mins
Contact Member 3

The way I have it at the moment is three steps in the Escalation Policy:

Step 1 - Immediate - Notify the On-Duty user(s) in rotation
Step 2 - Wait 10 mins - Notify the next user(s) in the current on-duty shift
Step 3 - Wait 20 mins - Notify the next user(s) in the current on-duty shift

Is this the best way to do it? The text "Notify the On-Duty user(s) in rotation" has confused me, as it suggests it should call multiple members in a rotation, but I can't find anything that describes how it calls more than the initial on-call person.
Hi, I want to ask the community how you do health checks of servers after patching. Have you built any automation to verify that server health is good after a patching activity, for multiple servers in one shot? Are you using any tool, a query you have built, or a dashboard where you enter the server details and get stats?
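If the servers forward data to Splunk, one simple automated post-patching check (a sketch; the host names and the 15-minute threshold are placeholders) is to confirm each patched host has resumed reporting:

    | tstats latest(_time) as last_seen where index=* host IN ("server1","server2","server3") by host
    | eval minutes_since = round((now() - last_seen) / 60, 1)
    | eval status = if(minutes_since > 15, "CHECK", "OK")
    | table host last_seen minutes_since status

Wired to a dashboard with a text input for the host list, this gives the "enter the server details and get stats" workflow described above.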
Hello, I have a dashboard that shows network traffic based on 4 simple text boxes for the user to input:

SRC_IP
SRC_PORT
DEST_IP
DEST_PORT

How can we create a filter with "EQUAL" and "NOT EQUAL TO" options for the DEST_IP input box? The requirement is that the end user should be able to select "NOT EQUAL TO" and enter an IP address or range to exclude whatever they want in the input box, and the panels will then display the corresponding data. For example, if they want to exclude all private IPs (10.x.x.x) from DEST_IP, they need to be able to select "NOT EQUAL TO" and enter "10.0.0.0/8". Hope that is clear. I tried creating a MULTISELECT input box, but a multiselect does not let a user enter/type arbitrary values to filter on manually. Any assistance will be highly appreciated.
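One common Simple XML pattern (a sketch; token names are placeholders) is to pair a radio input that holds the comparison operator with a free-text input for the value, then splice both tokens into the panel search:

    <input type="radio" token="dest_ip_op">
      <label>DEST_IP match</label>
      <choice value="=">EQUAL</choice>
      <choice value="!=">NOT EQUAL TO</choice>
      <default>=</default>
    </input>
    <input type="text" token="dest_ip_val">
      <label>DEST_IP</label>
      <default>*</default>
    </input>

and in the search string: DEST_IP$dest_ip_op$"$dest_ip_val$"

Note that a plain equality test will not understand CIDR ranges like 10.0.0.0/8; supporting those would need the SPL to use cidrmatch() (for example, inside a where clause) rather than direct token substitution.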
Hi, I'm pretty new to Splunk and I have a simple question that maybe one of you could help me figure out. I have a search that I'm using to find the latest login events for a specific set of users. The problem is that there are about 130 users. I tried specifying the users in the search using (Account_Name=user1 OR Account_Name=user2 OR Account_Name=user3 ...); I tried entering all 130, but it didn't work. I noticed there was a limit after some point, and then I'd stop receiving results. So I did some research, saw people mention lookup files, and created a CSV file with the list of users I'd like to run a report on. How can I join the lookup file to the query so that I'm only matching the values from the "UserID" field in my lookup table against the "Account_Name" field that comes with the Windows event logs I'm using to build the query? So far this is my query; how could I use the lookup to filter to just the 130 users?

    index=wineventlog sourcetype=wineventlog EventCode=4624 Account_Name!=*$
    | stats latest(_time) as last_login_time by Account_Name
    | convert ctime(last_login_time) as "Last Login Time"
    | rename Account_Name as "User"
    | sort - last_login_time
    | table User "Last Login Time"
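A common approach is an inputlookup subsearch, which expands the lookup rows into an OR filter inside the base search. A sketch, assuming the lookup file is named users.csv and has a UserID column (the rename makes the generated filter target Account_Name):

    index=wineventlog sourcetype=wineventlog EventCode=4624 Account_Name!=*$
        [ | inputlookup users.csv | rename UserID as Account_Name | fields Account_Name ]
    | stats latest(_time) as last_login_time by Account_Name
    | convert ctime(last_login_time) as "Last Login Time"
    | rename Account_Name as "User"
    | sort - last_login_time
    | table User "Last Login Time"

Unlike a long hand-typed chain of ORs, the filter is generated from the lookup, so the 130-user list stays in one maintainable place.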
Hi, I'm exploring a way to get search results with the names of indexes, who created those indexes, and the creation date. So far I have got the DDAS Retention Days, DDAS Index Size, DDAA Retention Days, and DDAA Usage, along with the Earliest and Latest Event Dates. I'm trying to get the owner of the indexes but am not getting the desired results. The search query I've been using is given below:

    | rest splunk_server=local /servicesNS/-/-/data/indexes
    | rename title as indexName, owner as creator
    | append
        [ search index=summary source="splunk-storage-detail" (host="*.personalsplunktesting.*" OR host=*.splunk*.*)
        | fillnull rawSizeGB value=0
        | eval rawSizeGB=round(rawSizeBytes/1024/1024/1024,2)
        | rename idxName as indexName ]
    | append
        [ search index=summary source="splunk-ddaa-detail" (host="*.personalsplunktesting.*" OR host=*.splunk*.*)
        | eval archiveUsage=round(archiveUsage,2)
        | rename idxName as indexName ]
    | stats latest(retentionDays) as "Searchable Storage (DDAS) Retention Days",
        latest(rawSizeGB) as "Searchable Storage (DDAS) Index Size GB",
        max(archiver.coldStorageRetentionPeriod) as "Archive Storage (DDAA) Retention Days",
        latest(archiveUsage) as "Archive Storage (DDAA) Usage GB",
        latest(ninetyDayArchived) as "Archived GB Last 90 Days",
        latest(ninetyDayExpired) as "Expired GB Last 90 Days"
        by indexName
    | append
        [ | tstats earliest(_time) as earliestTime latest(_time) as latestTime where index=* by index
        | eval earliest_event=strftime(earliestTime, "%Y-%m-%d %H:%M:%S"), latest_event=strftime(latestTime, "%Y-%m-%d %H:%M:%S")
        | rename index as indexName
        | fields indexName earliest_event latest_event ]
    | stats values("Searchable Storage (DDAS) Retention Days") as "Searchable Storage (DDAS) Retention Days",
        values("Searchable Storage (DDAS) Index Size GB") as "Searchable Storage (DDAS) Index Size GB",
        values("Archive Storage (DDAA) Retention Days") as "Archive Storage (DDAA) Retention Days",
        values("Archive Storage (DDAA) Usage GB") as "Archive Storage (DDAA) Usage GB",
        values(earliest_event) as "Earliest Event",
        values(latest_event) as "Latest Event",
        values(creator) as "Creator"
        by indexName

Please can anyone help me on this? Thanks in advance!
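On the creator question specifically: the data/indexes REST endpoint exposes ownership as eai:acl.owner rather than a bare owner field, and for indexes it is commonly just "nobody", because indexes.conf does not record who created a stanza or when. A quick sketch to inspect what the endpoint actually returns:

    | rest splunk_server=local /servicesNS/-/-/data/indexes
    | table title eai:acl.owner eai:acl.app

If that proves unhelpful, creation events may still be recoverable from the audit or internal logs (for example, searching index=_audit or index=_internal around the time each index first appeared), though that depends on retention covering the creation date.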