All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Has anyone tried to use the Splunk Attack Range with a preconfigured VPC in AWS? What I'm trying to do is have the attack range hosts send data to a Splunk server on our local enterprise network. The VPC that is already set up in AWS is configured with a Direct Connect back to our local network. Any tips or suggestions would be helpful. Thanks, Jon
Hi, I'm having trouble with a regex field extraction. I'm looking to extract the numeric ID after the "x-client-id" key: .........pp_code":["{IVR-US}. CPC"],"x-client-id":["1234567890"],"x-requested-with":["DA_ONLINE_IV............ This is how that field appears in the event string. This (client-id) is the only field I need in the entire string. The quotes are throwing off all of my normal regex formats. Any help would be super appreciated.
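One possible extraction, as a sketch (it assumes the key/value pair appears in _raw exactly as shown, with the ID inside ["..."]; escaping the quotes and brackets keeps them out of the capture):

```spl
... | rex field=_raw "\"x-client-id\":\[\"(?<client_id>\d+)\"\]"
| table client_id
```

The named group captures only digits, so the surrounding quotes never enter the client_id value.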
Hello, we are trying to diagnose a parsing error from AWS Firehose to Splunk using HEC. The endpoint is configured properly, but we are getting "no data" parsing errors. To try to debug this I have switched DEBUG on for httpeventcollector on the heavy forwarder receiving the data; however, the introspection log is still only showing INFO. Am I setting debug in the wrong place, or has anyone else overcome this?
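While chasing the DEBUG setting, one hedged way to see what HEC itself is logging is to search splunkd's own logs on that heavy forwarder (component names can vary by version, so treat these as a starting point):

```spl
index=_internal sourcetype=splunkd (component=HttpInputDataHandler OR component=HttpEventCollector)
| table _time log_level component _raw
```

Parsing failures for HEC payloads typically surface here even when the introspection log stays at INFO.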
Hi all, I'm interested in bringing Snowflake query history into Splunk, and there are posts on how to do it with DB Connect; however, it seems the app is only available for Splunk Enterprise, not Splunk Cloud. Is that correct? If so, is there any way to bring Snowflake data into Splunk Cloud? Thanks!
Hi - I have a few dashboards that use expressions like eval var=ifnull(x,"true","false") ...which assigns "true" or "false" to var depending on whether x is NULL. Those dashboards still work, but I notice that ifnull() does not show up in any of the current documentation, and it seems the current way to get the same result would be eval var=if(isnull(x),"true","false") Did I miss some kind of deprecation of that syntax ages ago (it must have been before 6.3.0) that just happens to still be parsed?
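A minimal sketch of the documented equivalent (makeresults is only there to fabricate a test row; x is intentionally absent, so it is NULL):

```spl
| makeresults
| eval var=if(isnull(x), "true", "false")
```

coalesce(x, "fallback") is another common replacement when a substitute value is wanted rather than a true/false flag.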
Hi guys, Does anyone have any advice on what would be a good search to run on local performance data? I am trying to create a dashboard that shows the performance of my local machine, and I'm not sure what I could search for to put in the dashboard. If anyone has any advice on what I could search for, please let me know. Thank you
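If the machine runs a Splunk instance, one hedged starting point is the platform's own introspection data (the field names below assume the default splunk_resource_usage source; adjust to whatever performance data you actually ingest):

```spl
index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| timechart avg("data.cpu_system_pct") as cpu_system_pct avg("data.mem_used") as mem_used
```

On Windows, the Splunk Add-on for Windows exposes similar Perfmon counters (CPU, Memory, LogicalDisk) that chart the same way.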
I am trying to remove duplicates in my results using the |dedup command, but I am still seeing 2 entries in my results. Kindly help me remove the duplicate.
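One thing worth checking, sketched with placeholder field names: dedup only removes events whose listed fields all match exactly, so two rows that differ in case, whitespace, or any unlisted column both survive.

```spl
... | eval key=lower(trim(your_field))
| dedup key
```

Comparing the two surviving rows field by field usually reveals which value actually differs.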
Been trying to get the AWS app working and the EC2 dashboards are not working... I have traced it down to every search looking just plain wrong. As an example:

`aws-description-sourcetype` $accountId$ $region$ source="*:$resource$" | eventstats latest(_time) as latest_time | eval latest_time=relative_time(latest_time,"-55m") | where _time > latest_time | dedup id sortby -start_time

The problem is at `dedup id sortby -start_time`: there is no "id" field in the data; there is, however, "InstanceId". It is a similar situation for every dashboard that is not populating, which leads me to believe there is a job somewhere that is not running, or I am missing something very fundamental. Any help would be greatly appreciated... Thanks!
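As a quick test of the missing-field theory (a sketch only; the rename is a workaround and does not explain why the app's expected "id" field is absent):

```spl
`aws-description-sourcetype` $accountId$ $region$ source="*:$resource$"
| rename InstanceId as id
| eventstats latest(_time) as latest_time
| eval latest_time=relative_time(latest_time,"-55m")
| where _time > latest_time
| dedup id sortby -start_time
```

If that populates the panel, the underlying issue is more likely a missing field alias or extraction in the app than a scheduled job.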
Hi. I'm using the TA for Windows and everything is mostly working OK. But in some events I'm receiving values like ReadOperation %%8100 If I understand correctly, that's _not_ what the evt_resolve_ad_obj option should affect, right? That option affects only resolving (or not) SIDs to usernames/groups, and this is something completely different, right? What is it then? And can I force my UF to forward the same contents that I see in Event Log Viewer? In this case it's Read Operation: Enumerate Credentials I understand that it's something the Event Log Viewer is rendering on its own, because the detail view of the event does indeed show %%8100 as ReadOperation, so it's apparently the program's interpretation of this data that says "Enumerate Credentials". So I suppose there would have to be some lookups to "humanize" the events, right?
I'm learning how to use the HTTP Event Collector, but no events ever show up in search. I have the inputs enabled and my token set up as shown. When I run the command 'curl -k http://<instance-host>:8088/services/collector -H "Authorization:Splunk 4f99809e-55d3-4677-b418-c0be66693311" -d "{\"sourcetype\": \"trial\", \"event\":\"Hello World!\"}"' in my command prompt, I get back {"text": "Success", "code": 0}. I followed along with the tutorial on this site: https://www.youtube.com/watch?v=qROXrFGqWAU I've also tried changing the sourcetype to json_no_timestamp, but this didn't work either. I'm confident that I've set everything up correctly, but nothing seems to be working. Is there a fix for this? I'm also trying to do the same with collectd metrics.
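A hedged first check: the token may be routing events to an index your role does not search by default, so cast a wide net over a recent window:

```spl
index=* OR index=_* sourcetype=trial earliest=-1h
```

If nothing appears, searching index=_internal sourcetype=splunkd component=HttpInputDataHandler on the receiving instance often shows why events were dropped after the "Success" acknowledgment (the acknowledgment only means the payload was received, not that it was indexed).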
I have error messages in the following formats:

{ "level":"error", "message":"Log: \"error in action {\\\"status\\\":\\\"error\\\",\\\"message_error\\\":\\\"blacklisted\\\"}\"", "timestamp":"2021-09-27T16:39:07-04:00" }

and

{ "level":"error", "message":"Log: \"error in action \\\"&lt;HTML&gt;&lt;HEAD&gt;\\\\n&lt;TITLE&gt;Service Unavailable&lt;/TITLE&gt;\\\\n&lt;/HEAD&gt;&lt;BODY&gt;\\\\n<h1>Service Unavailable - Zero size object</h1>\\\\nThe server is temporarily unable to service your request. Please try again\\\\nlater.<p>\\\\nReference&#32;&#35;15\\\\n&lt;/BODY&gt;&lt;/HTML&gt;\\\\n\\\"\"", "timestamp":"2021-09-26T23:12:25-04:00" }

Now I am creating a dashboard that displays the overall error counts for a period of time. The following query gives me the count based on message_error:

index=my_index_name sourcetype=my_source_type_name:app | spath message | regex message="^.*error in action.*$" | eval error_json=replace(ltrim(message, "Log: \"error in action"),"\\\\\"","\"") | spath input=error_json output=error_message path=message_error | top error_message

Since this relies on JSON parsing, it does not work for the second type of error message, which is basically HTML after the common error string. I would like to include the count for this error along with the counts of the errors from the first group. For the first group of errors, the above query gives me the following result:

error_message          count
blacklisted            10
captcha error          9
Internal Server Error  8

What I need is:

error_message          count
blacklisted            10
captcha error          9
Internal Server Error  8
Service Unavailable    5

That is, I need to show the count of errors even when the message is not in JSON format. Both error types start with the common string "Log: error in action".
If I use another query like:

index=my_index_name sourcetype=my_source_type_name:app | spath message | regex message="^.*Service Unavailable - Zero size object.*$" | stats count as error_count

it will give the count. But first, I want to combine the results and show them as a single result, and second, the above query is limited to one specific error message. So I would like to show the part of the message after "Log: error in action" when it is not in JSON format, along with the corresponding count. I am new to Splunk, and it would be very helpful if someone could point out a solution.
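One way to sketch a combined query (the rex pattern for the HTML title is an assumption; adjust it depending on whether the angle brackets arrive literally as <TITLE> or HTML-encoded as &lt;TITLE&gt; in your events):

```spl
index=my_index_name sourcetype=my_source_type_name:app
| spath message
| regex message="error in action"
| eval error_json=replace(ltrim(message, "Log: \"error in action"),"\\\\\"","\"")
| spath input=error_json output=error_message path=message_error
| rex field=message "TITLE&gt;(?<html_error>[^&]+)&lt;/TITLE"
| eval error_message=coalesce(error_message, html_error)
| top error_message
```

coalesce keeps the JSON-derived value when it exists and falls back to the HTML title ("Service Unavailable") otherwise, so both groups land in one count table.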
Is there any way to hide system messages on dashboard panels? I have already set "depends" on the HTML tags, but the messages still show up for a second. If somebody knows of an option to make such messages invisible before searches run, please let me know. My source XML is below.

<row>
  <panel depends="$panel_show2$">
    <html>
      <div style="float:left"><span style="font-size: 1.5em;">$HOST2$</span></div>
      <div style="float:right">$latest_log_time2$</div>
      <style>
        div.left{ text-align: left; }
        div.right{ text-align: right; }
      </style>
    </html>
    <single>
      <search depends="$panel_show2$">
        <query>`cae-real_time_monitoring_tiles($company$, $host2$)`</query>
        <earliest>-1y</earliest>
        <latest>now</latest>
        <sampleRatio>1</sampleRatio>
        <refresh>5m</refresh>
        <refreshType>delay</refreshType>
      </search>
      <option name="colorBy">value</option>
      <option name="colorMode">block</option>
      <option name="drilldown">all</option>
      <option name="height">139</option>
      <option name="numberPrecision">0.0</option>
      <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
      <option name="rangeValues">[0,60,80,100]</option>
      <option name="refresh.display">progressbar</option>
      <option name="showSparkline">1</option>
      <option name="showTrendIndicator">1</option>
      <option name="trellis.enabled">1</option>
      <option name="trellis.scales.shared">1</option>
      <option name="trellis.size">small</option>
      <option name="trendColorInterpretation">standard</option>
      <option name="trendDisplayMode">absolute</option>
      <option name="unit">%</option>
      <option name="unitPosition">after</option>
      <option name="useColors">1</option>
      <option name="useThousandSeparators">1</option>
      <option name="link.openSearch.visible">false</option>
      <drilldown>
        <link target="_blank">app_utilization_realtime_detail?form.company=$company$&amp;form.application=$host2$&amp;form.feature=$trellis.value$&amp;form.department=.+&amp;drilldowned=1</link>
      </drilldown>
    </single>
  </panel>
</row>
Here's my challenge: I've got multiple events with the same IP but different Attack_type categories. I'm trying to combine all of the same IPs together and make a chart that shows the Attack_type values against just one row per IP. Something like this, but as a dashboard that lists which IP is associated with all the different Attack_type values.
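A hedged sketch (src_ip and Attack_type are the assumed field names): collapse to one row per IP, either listing the categories or pivoting them into columns for a chart.

```spl
index=your_index
| stats values(Attack_type) as attack_types count by src_ip
```

For a stacked-bar view, | chart count over src_ip by Attack_type gives one bar per IP split by attack type.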
Hi, I have around 5 panels in a dashboard, each of which has its own child panel. Each of these panels has a table where the data is fetched from logs. I need to convert all the parent panels into tabs in the dashboard.
Hi, I have a message body text like the one below:

message : RequestBody : : :{
    individualValue : {
        xxxxxx;
        YYYY;
        ----------------------- (many lines of text in between)
    }
}

How can I fetch the string "individualValue" from the message body?
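A sketch of one possible extraction (it assumes the inner block contains no nested braces; (?s) lets the pattern span the line breaks):

```spl
... | rex field=_raw "(?s)individualValue\s*:\s*\{(?<individual_value>[^}]*)\}"
```

This captures everything between individualValue : { and the first closing brace into individual_value.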
I was trying to extract an IP address field. During a search, using |rex "[[ipv4]]" works fine and creates an ip field. I then wanted to save this field extraction, so I used the field extractor to do so, edited the regular expression to [[ipv4]], and saved it, but it did not work. I tried taking it down a level, editing the saved regular expression to (?<ip>[[octet]](?:\.[[octet]]){3}), which also works with the rex command during a search but did not work when saved in the field extractor. I took it down one final level, changing it to (?<ip>(?:2(?:5[0-5]|[0-4][0-9])|[0-1][0-9][0-9]|[0-9][0-9]?)(?:\.(?:2(?:5[0-5]|[0-4][0-9])|[0-1][0-9][0-9]|[0-9][0-9]?)){3}), which doesn't use modular regular expressions but finally does work in both the search and the saved field extraction. I haven't found anything in the Splunk docs that says modular regular expressions can't be used in the field extractor, so I thought it would be best to check here whether that is the case, or if there is maybe some other issue I can't think of.
I have a column that has events recorded at an interval of 1 hour. Example:

Date             Value
2010-1-1 1:00    20
2010-1-1 2:00    22
2010-1-1 3:00    21
2010-1-1 4:00    19
2010-1-1 5:00    16
...              ...
2010-1-1 24:00   12

I want to group this into one row, i.e. display it in the following format:

Date        Value
2010-1-1    (average of the 24 values)

I want to achieve this in Splunk.
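A sketch of the usual pattern (assuming the extracted field is literally named Value): bucket the events by day, then average within each bucket.

```spl
... | bin _time span=1d
| stats avg(Value) as Value by _time
| eval Date=strftime(_time, "%Y-%m-%d")
| table Date Value
```

| timechart span=1d avg(Value) is an equivalent one-liner if the default _time column is acceptable.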
Hello, here's my problem: I made a search which calculates the duration between two jobs. The jobs are supposed to run overnight, so the first starts around 10 PM and the last around 00:30, roughly 2.5 hours later. It works fine, but if job A starts later (e.g. 09/04 at 00:09) then I can't get the calculation and the run is split across two rows:

09/06/21 02:30:42 21:50:00 00:20:41
09/04/21 00:00:00 03:19:24 03:19:24  <<
09/03/21 00:00:00 00:09:52 00:09:52  <<
09/02/21 02:31:56 21:56:44 00:28:40

It should display only one line for that overnight run: 09/04/21 03:09:32 00:09:52 03:19:24. Sometimes the result is also off; I guess it's because job A started much later (4:36 AM), after the previous job B run:

09/20/21 02:19:10 22:02:02 00:21:12
09/18/21 02:48:11 04:36:59 07:25:10  <<< ??
09/16/21 02:14:33 22:22:41 00:37:13

<query>| tstats latest(evt_tsk_id) as evt_tsk_id, latest(evt_tsk_status) as evt_tsk_status, latest(evt_mes_occ_ts) as evt_mes_occ_ts, latest(evt_mes_occ_ts_epoch) as evt_mes_occ_ts_epoch where index=INDEX1 APP_env=ENV1 APP_inst=INSTANCE (evt_tsk_id ="JOB_A" AND evt_tsk_status="1") OR (evt_tsk_id ="JOB_B" AND evt_tsk_status="2") by _time span=1H
| bucket _time span=6H
| stats min(evt_mes_occ_ts_epoch) as start, max(evt_mes_occ_ts_epoch) as end by _time
| eval N_duration = tostring(round(end-start,0), "duration")
| eval _time = strftime(_time,"%m/%d/%y")
| convert timeformat="%H:%M:%S" ctime(start) AS JOB1
| convert timeformat="%H:%M:%S" ctime(end) AS JOB2
| rename _time as date
| table date N_duration JOB1 JOB2
| reverse</query>

thanks in advance
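One hedged idea: rather than span=6H, align 24-hour buckets to the start of the overnight window so a job A that slips past midnight still lands in the same bucket as its job B (the 21h offset is an assumption based on the ~10 PM start; tune it to your window):

```spl
| bin _time span=24h aligntime=@d+21h
```

With buckets running 9 PM to 9 PM, both timestamps of one overnight run fall in a single bucket, and the min/max stats then produce one row per night.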
I have recently created a field extraction on one search head, to which I have assigned all apps and users read and write access, and I was wondering how long it would take for a change made on one search head to be replicated to the other search heads. Also, from what I know, changes made via the GUI are always replicated to other SHs; is this true? If so, what CAN and CANNOT be replicated across search heads via the GUI? Thanks, Regards,
Hi All, We are planning to configure some of our universal forwarders to use multiple pipeline sets. Do you have some sort of SPL that we can use to identify which forwarders have blocked queues and need an increased number of pipeline sets?
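A hedged sketch using the forwarders' metrics.log, which they forward into _internal by default (blocked=true lines indicate a queue was full at sample time):

```spl
index=_internal source=*metrics.log* sourcetype=splunkd group=queue blocked=true
| stats count by host, name
| sort - count
```

Hosts that repeatedly block on parsing or aggregation queues are the usual candidates for an extra pipeline set; hosts blocking only on output queues are more likely throttled by the downstream receiver.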