All Topics

Before 7:05 – green
Between 7:05 and 7:45 – yellow
After 7:45 – red

How can I implement this logic in Splunk? I have written the logic in JavaScript, but it throws an error where < and > are used. Can someone please help me with this? Thanks in advance.
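One possible SPL sketch (the time field and format are assumptions — adjust to your data): format the event time as an HHMM string and assign the colour with case(). Note also that if this ends up inside a Simple XML dashboard, literal < and > must be escaped as &lt; and &gt; (or the search wrapped in CDATA), which is a common cause of the error described.

```
| eval hhmm = strftime(_time, "%H%M")
| eval colour = case(hhmm < "0705", "green",
                     hhmm <= "0745", "yellow",
                     true(), "red")
```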
My sample logs:

2022-11-12 04:12:34,123 [IMP] [application thread=1:00] - http:com.ap.ddd.group.ll.clentip.DDDLLClientApplication-<overalltimetaken> (100)  11/12/22 5:12 AM to 11/25/23 5:12 AM  4 hr DDDLLClientApplication - Done
2022-11-12 04:12:34,123 [IMP] [application thread=1:00] - http:com.ap.ddd.group.ll.clentip.DDDLLClientApplication-<overalltimetaken> (100)  11/12/22 5:12 AM to 11/25/23 5:12 AM  10 hr DDDLLClientApplication - Done
2022-11-12 04:12:34,123 [IMP] [application thread=1:00] - http:com.ap.ddd.group.ll.clentip.DDDLLClientApplication-<overalltimetaken> (100)  11/12/22 5:12 AM to 11/25/23 5:12 AM  12 hr DDDLLClientApplication - Done

Here I want to get the response time (the 4 hr, 10 hr, 12 hr values mentioned in the sample logs), and I need to get the info using "DDDLLClientApplication - Done". I want to do field extractions for response time and info. I want to do this via the sourcetype, and the extraction type should be inline.
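A hedged sketch of what an inline (props.conf EXTRACT) extraction could look like for these events — the sourcetype name is a placeholder, and the regex assumes the format shown above:

```
[my_sourcetype]
EXTRACT-response_time = (?<response_time>\d+)\s+hr
EXTRACT-info = (?<info>DDDLLClientApplication - Done)
```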
Hello, I am perplexed: when I run firebrigade, choose "detail | index detail", and select a host, my index list is incomplete. I see in the source where it gets its input on line 20: "inputlookup fb_hostname_index_cache". The contents of the fb_hostname_index_cache.csv file are the same incomplete list. I found the periodic search that extracts the data into fb_hostname_index_cache.csv, the search command being:

index=summary search_name="DB inspection" | dedup orig_host, orig_index | sort orig_host, orig_index | table orig_host, orig_index | outputlookup fb_hostname_index_cache

When I run this search, I get "Error in 'outputlookup' command: The lookup table 'fb_hostname_index_cache' is invalid." When I run the search without "| outputlookup fb_hostname_index_cache", I get an incomplete list of my indexes. So a few things might be happening that I don't know how to determine:
- Splunk doesn't like something about the fb_hostname_index_cache.csv file
- not all of the indexes are being returned from the query
- something isn't right about the contents of index=summary

This issue appeared shortly after I upgraded from 8.0.3 to 8.1.6 to 9.0.0.1. My current system has six dedicated indexers and an independent search head; there are also three heavy forwarders. Can someone shed some light on this? Thank you!
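One way to narrow down which of the three possibilities it is (a sketch): read the lookup back directly, in isolation from the summary search:

```
| inputlookup fb_hostname_index_cache
```

If this also reports the lookup as invalid, the CSV itself (or its lookup definition, permissions, or a missing header row) is the likely culprit; if it reads cleanly, compare its row count against what the raw index=summary search returns to see whether the gap is in the summary data instead.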
All, I have this search:

index=sro sourcetype=sro-cosmo "DL Cert OK" "Security Posture End of sweep report" | extract pairdelim="\n" kvdelim=":" | rex field=_raw "--ticket \'(?<ticket>.+)\' --summary" | fillnull value=0 | table _time ticket SA_Fail_Total_Count SA_Success_Count SA_Unreachables LP_Firmware_too_old | dedup _time ticket

That results in one layout, but my user wants it in a different format (screenshots were attached). I am using Splunk 8.2.6. Is there any way to format this report so my user does not need to manipulate it in Excel? Thank you, Gerson Garcia
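If the desired format is rows turned into columns (a guess, since the screenshots don't reproduce here), transpose may be all that's needed — a sketch appended to the search above:

```
| transpose header_field=ticket column_name=metric
```

This makes each ticket a column and each statistic a row; if the target layout is something else, untable/xyseries are the other usual reshaping commands to try.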
Hello there! I'm trying to ingest JSON data via the Splunk Add-on for Microsoft Cloud Services app. I created a sourcetype with INDEXED_EXTRACTIONS=json and left all other settings at their default values. The data got ingested; however, when I table my events I start seeing multivalue fields with duplicate data. I'm even seeing the percentages in the "Interesting Fields" section add up to 200% (instead of the expected 100%). I have attached screenshots (sourcetype settings, Interesting Fields, and the multivalue fields with duplicate data) to better illustrate my situation.

https://community.splunk.com/t5/All-Apps-and-Add-ons/JSON-format-Duplicate-value-in-field/m-p/306811

I then followed the advice given in this post ^^^ (i.e., setting KV_MODE=none, AUTO_KV_JSON=false, etc.) but the issue persists. I'm currently on Splunk Cloud. Any help with this is greatly appreciated.
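For context, the usual mechanism behind this symptom: INDEXED_EXTRACTIONS=json extracts fields at index time, and if search-time JSON extraction also runs, every field shows up twice (hence 200%). The fix only works if the search-time settings are applied where the search runs — in Splunk Cloud that means the props must exist on the search tier, not just on the forwarder. A sketch of the combined stanza (sourcetype name is a placeholder):

```
[my_mscs_sourcetype]
INDEXED_EXTRACTIONS = json
KV_MODE = none
AUTO_KV_JSON = false
```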
index="main" sourcetype="vrea" | eval nested_payload=mvzip(info, solution, "---") | mvexpand nested_payload | eval info=mvindex(split(nested_payload, "---"), 1) | eval solution=mvindex(split(nested_payload, "---"), 0) | eval nested_payload=mvzip(line, more, "---") | mvexpand nested_payload | eval line=mvindex(split(nested_payload, "---"), 1) | eval more=mvindex(split(nested_payload, "---"), 0) | eval nested_payload=mvzip(ID, Severity, "---") | mvexpand nested_payload | eval Severity=mvindex(split(nested_payload, "---"), 1) | eval CWE_ID=mvindex(split(nested_payload, "---"), 0) | table info solution ID Severity line more

When I use this SPL, the first four fields in my table keep repeating the same value, but the last two fields, line and more, have the correct values. Does anyone know why this is happening?
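A likely cause: each successive mvexpand multiplies the rows, so fields split before an earlier expand get copied unchanged across the rows created by a later one. One sketch (assuming all six multivalue fields are parallel and no value contains "---") is to zip everything into a single payload, expand once, then split:

```
index="main" sourcetype="vrea"
| eval nested_payload = mvzip(mvzip(mvzip(mvzip(mvzip(info, solution, "---"), line, "---"), more, "---"), ID, "---"), Severity, "---")
| mvexpand nested_payload
| eval parts = split(nested_payload, "---")
| eval info = mvindex(parts, 0), solution = mvindex(parts, 1),
       line = mvindex(parts, 2), more = mvindex(parts, 3),
       ID = mvindex(parts, 4), Severity = mvindex(parts, 5)
| table info solution ID Severity line more
```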
Good afternoon, I'm new to Splunk. I've pulled a copy of the demo software and have a question concerning forwarders: are forwarders required to be installed on each device supplying logs, or can one central forwarder "receive" logs from multiple devices (i.e. Windows, Linux, Cisco switches)? I want to set up a Raspberry Pi to receive logs from a few low-use Windows boxes and Linux boxes, and possibly a switch or two. Thanks in advance, John Bond
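For what it's worth, both patterns exist: universal forwarders on the Windows/Linux hosts are the usual choice, while network devices (which can't run a forwarder) send syslog to a central receiver such as a heavy forwarder on the Pi. A hedged inputs.conf sketch for the receiving side:

```
# Listen for syslog sent by switches and other devices
[udp://514]
sourcetype = syslog
connection_host = ip
```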
Hi All. I am trying to calculate the response time from the logs below.

11-12-2019 23:34:45,678 this event will calculate the sign in and sign out of the application, Success,=67, failed=121, |sumsuo=1.0|CompleteTime=100sec
11-12-2019 23:34:45,678 this event will calculate the sign in and sign out of the application, Success,=67, failed=121, |sumsuo=1.0|CompleteTime=10sec
11-12-2019 23:34:45,678 this event will calculate the sign in and sign out of the application, Success,=67, failed=121, |sumsuo=1.0|CompleteTime=50sec
11-12-2019 23:34:45,678 this event will calculate the sign in and sign out of the application, Success,=67, failed=121, |sumsuo=1.0|CompleteTime=40sec
11-12-2019 23:34:45,678 this event will calculate the sign in and sign out of the application, Success,=67, failed=121, |sumsuo=1.0|CompleteTime=130sec

|tstats count where index=xxxx host=abc OR host=cvb OR host=dgf OR host=ujh sourcetype=xxxx by PREFIX(completetime=) |rename completetime= as Time |timechart span=1d avg(Time) by host |eval ResTime=round(Time, 2)

When I run this query I am not able to calculate the average time, because when I do PREFIX(completetime=) the "sec" suffix is also picked up. How can I ignore it?
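One sketch for stripping the trailing "sec" from the PREFIX-extracted value before averaging, keeping the rest of the pipeline as in the question (the field name assumes the rename has already run):

```
| eval Time = tonumber(replace('completetime=', "sec$", ""))
```

Since replace() takes a regex, "sec$" removes only the suffix, and tonumber() makes the result usable by avg().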
Does anybody know where failures of sendemail are logged? I wonder about cases where the e-mail address no longer exists: what type of error is generated, and where? _internal and _audit don't seem to have this data.
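sendemail runs as a Python search command, so the errors Splunk itself can observe (SMTP connection failures, server rejections) generally land in python.log, which is indexed into _internal — a sketch of where to look (sourcetype may vary by version):

```
index=_internal sourcetype=splunk_python sendemail
```

One caveat: a bounce for an address that no longer exists is often generated downstream by the receiving mail server after the SMTP handoff succeeds, in which case it never reaches Splunk's logs at all, only the sending mailbox.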
I have an event ID 4674 that I would like to block from being indexed. I have the following in my inputs.conf, in local not default:

[WinEventLog://Security]
disabled = true
blacklist1 = EventCode = "4764" Message="*"
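Two things worth checking in the stanza as posted: disabled = true turns the whole Security input off rather than just the one event, and the EventCode in the blacklist (4764) doesn't match the event ID mentioned (4674). A sketch of what the intended stanza would likely look like:

```
[WinEventLog://Security]
disabled = false
blacklist1 = EventCode="4674"
```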
My subject may not be worded correctly, but I need some help. I have the raw data below, and I would like to group the lines together, each onto its own line, for reporting.

Redundancy group: 0 , Failover count: 0
node0 200 primary no no None
node1 2 secondary no no None
Redundancy group: 1 , Failover count: 0
node0 200 primary no no None
node1 20 secondary no no None
Redundancy group: 2 , Failover count: 0
node0 200 primary no no None
node1 2 secondary no no None

How can I get the following output, grouped by Redundancy Group # and Node #?

Redundancy group: 0 , Failover count: 0,node0 200 primary no no None
Redundancy group: 0 , Failover count: 0,node1 2 secondary no no None
Redundancy group: 1 , Failover count: 0,node0 200 primary no no None
Redundancy group: 1 , Failover count: 0,node1 20 secondary no no None
Redundancy group: 2 , Failover count: 0,node0 200 primary no no None
Redundancy group: 2 , Failover count: 0,node1 2 secondary no no None
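One hedged sketch, assuming each "Redundancy group" block arrives as its own event (if everything is one event it would first need splitting, e.g. via LINE_BREAKER or a more elaborate rex): capture the node lines as a multivalue field, expand, and prepend the group header:

```
| rex "(?<group>Redundancy group: \d+ , Failover count: \d+)"
| rex max_match=0 "(?<node>node\d+(?:\s+\S+){5})"
| mvexpand node
| eval row = group . "," . node
| table row
```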
Hi All, is there any way to populate a fixed date range dropdown in Dashboard Studio? I have done it in an XML dashboard, but am not finding a way to do it in Studio (JSON). Can anyone suggest? XML dashboard example (looking for the equivalent in JSON):

<input type="dropdown" token="simple">
  <label>Simple Time Picker</label>
  <choice value="last_24h">Last 24 Hours</choice>
  <choice value="last_7d">Last 7 days</choice>
  <choice value="last_30d">Last 30 days</choice>
  <default>last_24h</default>
  <change>
    <condition value="last_24h">
      <set token="simple.label">$label$</set>
      <set token="simple.earliest">-24h</set>
      <set token="simple.latest">now</set>
    </condition>
    <condition value="last_7d">
      <set token="simple.label">$label$</set>
      <set token="simple.earliest">-7d</set>
      <set token="simple.latest">now</set>
    </condition>
    <condition value="last_30d">
      <set token="simple.label">$label$</set>
      <set token="simple.earliest">-30d</set>
      <set token="simple.latest">now</set>
    </condition>
  </change>
</input>
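A hedged sketch of a Dashboard Studio equivalent (the exact schema varies by Splunk version, so verify against your Studio source editor): a dropdown whose values are relative-time strings, with the token consumed as the earliest bound in the data source:

```
"inputs": {
    "input_simple": {
        "type": "input.dropdown",
        "title": "Simple Time Picker",
        "options": {
            "items": [
                {"label": "Last 24 Hours", "value": "-24h"},
                {"label": "Last 7 days", "value": "-7d"},
                {"label": "Last 30 days", "value": "-30d"}
            ],
            "defaultValue": "-24h",
            "token": "simple_earliest"
        }
    }
}
```

The data source would then use "queryParameters": {"earliest": "$simple_earliest$", "latest": "now"}.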
Is anyone aware of a way, other than manually, of creating a MITRE ATT&CK Navigator layer based on the rules enabled in Splunk Enterprise Security? https://mitre-attack.github.io/attack-navigator/
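One possible starting point (a sketch, not a packaged solution — verify the annotation field names on your ES version): enabled correlation searches carry their ATT&CK technique annotations, which can be pulled via REST and then reshaped into Navigator layer JSON outside Splunk:

```
| rest /services/saved/searches splunk_server=local
| search action.correlationsearch.enabled=1 disabled=0
| spath input=action.correlationsearch.annotations path=mitre_attack{} output=technique
| mvexpand technique
| stats count by technique
```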
Hello, each time I upgrade DB Connect to v3.7 I systematically get the error below. I have tried on multiple Splunk instances, all the same version (9.0.2), with the same behaviour using the CLI or the web interface. Any ideas? Thanks.

In handler 'localapps': Error installing application: Failed to copy: C:\Program Files\Splunk\var\run\splunk\bundle_tmp\3767c00e4cddc2c3\splunk_app_db_connect to C:\Program Files\Splunk\etc\apps\splunk_app_db_connect. 4 errors occurred. Description for first 4:
[
  { "operation":"renaming .tmp file to destination file", "error":"The process cannot access the file because it is being used by another process.", "src":"C:\\Program Files\\Splunk\\var\\run\\splunk\\bundle_tmp\\3767c00e4cddc2c3\\splunk_app_db_connect\\jars\\dbxquery.jar", "dest":"C:\\Program Files\\Splunk\\etc\\apps\\splunk_app_db_connect\\jars\\dbxquery.jar" },
  { "operation":"renaming .tmp file to destination file", "error":"The process cannot access the file because it is being used by another process.", "src":"C:\\Program Files\\Splunk\\var\\run\\splunk\\bundle_tmp\\3767c00e4cddc2c3\\splunk_app_db_connect\\jars\\server.jar", "dest":"C:\\Program Files\\Splunk\\etc\\apps\\splunk_app_db_connect\\jars\\server.jar" },
  { "operation":"copying contents from source to destination", "error":"There are no more files.", "src":"C:\\Program Files\\Splunk\\var\\run\\splunk\\bundle_tmp\\3767c00e4cddc2c3\\splunk_app_db_connect\\jars", "dest":"C:\\Program Files\\Splunk\\etc\\apps\\splunk_app_db_connect\\jars" },
  { "operation":"copying contents from source to destination", "error":"There are no more files.", "src":"C:\\Program Files\\Splunk\\var\\run\\splunk\\bundle_tmp\\3767c00e4cddc2c3\\splunk_app_db_connect", "dest":"C:\\Program Files\\Splunk\\etc\\apps\\splunk_app_db_connect" }
]
Hey, I'm trying to use pandas in the backend Python script for an alert. I copied the module into the /bin folder. Trying to import it, I get the error "module 'os' has no attribute 'add_dll_directory'". Does somebody know how to solve or work around that? I'm running Splunk + Python on a Windows 10 machine. Regards
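For context: os.add_dll_directory was added in Python 3.8, while Splunk 8.x bundles Python 3.7, so a pandas build targeting newer Pythons trips over the missing attribute on import. One hedged workaround (an approximation, not a full equivalent of the real call) is to shim the function onto os before importing pandas:

```python
import contextlib
import os

if not hasattr(os, "add_dll_directory"):
    # Python < 3.8 (e.g. Splunk's bundled 3.7): approximate the call by
    # prepending the directory to PATH, which is how Windows located
    # dependent DLLs before 3.8 changed the search rules.
    def _add_dll_directory(path):
        os.environ["PATH"] = path + os.pathsep + os.environ.get("PATH", "")
        return contextlib.nullcontext()

    os.add_dll_directory = _add_dll_directory

# The import that needed add_dll_directory can now run, e.g.:
# import pandas as pd
```

Whether pandas then actually loads depends on the wheel matching Splunk's Python version; a build compiled for 3.7 may still be required.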
Hello, is there a way to play with ITSI using a trial license or something else, or is it mandatory to buy a premium app license? Thanks
Hello Splunkers, I have a Splunk HF that will receive multiple logs coming from different machines, all sending via UDP. I am wondering if I need to configure the external sources to send the logs via UDP on different ports (one port per source), or if I can simply tell all my sources to send over UDP port 514, for instance. I am wondering if UDP port 514 could become a "network bottleneck" because of too many logs coming from multiple sources on the same port. Thanks for your help, GaetanVP
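A single UDP port can technically carry everything, but then all sources share one sourcetype and one input queue. A common sketch is one port per source class so each gets its own sourcetype (port numbers here are arbitrary examples):

```
[udp://514]
sourcetype = syslog

[udp://5140]
sourcetype = cisco:ios
```

For high volumes, the more common concern than the port itself is UDP drops under load; a dedicated syslog server (rsyslog/syslog-ng) writing to files that the HF monitors is the usual recommendation for resilience.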
index="redis" sourcetype="csv" total_commands_processed="*" | timechart span=5m total_commands_processed

In the search command above, I want to display the value of the field "total_commands_processed" over time. Can anyone help?
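timechart needs an aggregation function wrapped around the field; the bare field name is not valid there. A sketch (avg is an assumption — max or latest may fit the metric better):

```
index="redis" sourcetype="csv" total_commands_processed="*"
| timechart span=5m avg(total_commands_processed) AS total_commands_processed
```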
I'm trying to get Oracle DB data using the DB Connect app. I have successfully scheduled my job and set up my connection, and when executing the query in the Preview Data window my results are as expected. Now comes the problem: the job executes without error and on time, from what I can see in the DB Connect Input Health tab; however, none of my data is being ingested. When I try to ingest using the rising input type, I get an "invalid column index" error. What settings should I use when selecting the rising input type, and why is no data ingested when selecting batch?
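For a rising input, the checkpoint column must appear in the SELECT list, and the WHERE clause uses the ? placeholder that DB Connect fills with the last checkpoint value — "invalid column index" typically means the chosen rising column isn't in the result set (or its position changed). A hedged sketch (table and column names are placeholders):

```
SELECT id, name, updated_at
FROM my_table
WHERE updated_at > ?
ORDER BY updated_at ASC
```

For the batch case, a common reason events seem missing is that they were ingested with database timestamps outside the search time range; searching "All time" on the target index is a quick way to rule that out.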
Hi Splunk community, I have an Excel file that sorts a field in a certain order, which may change over time. The Excel file looks something like this:

field1
AS
AC
RO
BE
..

Is it possible for me to sort by that particular field order in a search? Thanks for your help.
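One sketch: export the Excel sheet as a CSV lookup with an explicit order column, then join it in and sort on that column (the lookup and field names here are assumptions):

```
| lookup field1_order.csv field1 OUTPUT sort_order
| sort sort_order
| fields - sort_order
```

The CSV would contain two columns, field1 and sort_order (AS,1; AC,2; RO,3; ...), so re-uploading the file is all it takes when the desired order changes.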