
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Is there a quick query I can use to find which EC2 instance(s) are using a specific AMI, for audit purposes?
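A possible starting point, assuming the Splunk Add-on for AWS is collecting EC2 instance descriptions. The sourcetype, source, and field names below are assumptions based on that add-on's defaults; check them against your own data, and replace the AMI id placeholder:

```spl
index=aws sourcetype="aws:description" source="*:ec2_instances" image_id="ami-0123456789abcdef0"
| dedup id
| table id, instance_type, key_name, placement, image_id
```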
Hi. I've created the following macro: sessionCount(1), with this definition: datamodel Test summariesonly=true search | search "TEST.date"=$date$ | stats count(exchangeId) But when I run this search: | `sessionCount(2021-05-18)` it doesn't work, whereas this search does: | datamodel Test summariesonly=true search | search "TEST.date"=2021-05-18 | stats count(exchangeId) What am I doing wrong?
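A guess at the cause (an assumption, not verified against this environment): macro arguments are substituted as literal text, so an unquoted 2021-05-18 lands in the expanded search bare and can be misparsed. Quoting the argument inside the macro definition may help:

```spl
datamodel Test summariesonly=true search | search "TEST.date"="$date$" | stats count(exchangeId)
```

With that definition, | `sessionCount(2021-05-18)` expands to "TEST.date"="2021-05-18", matching the working literal search.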
Hi Splunkers, Good day. My HEC tokens are currently configured on the indexer cluster, and during an indexer bundle push, specifically during the bundle reload, HEC logging drops to 0. Is this normal? HEC logs are indexed during bundle validation and the indexer rolling restart, but not during the bundle reload (Bundle Validation -> Bundle Reload -> Indexer Rolling Restart). HEC logging is also not distributed evenly across the indexers. Seeking advice. Thank you and kind regards, Ariel
Hi, I'm sending AWS SSM patching logs to Splunk. I'm transforming these via a Lambda and getting the following events (snipped for brevity): { <SNIP> missing_count: 0 not_applicable_count: 1762 operation_end_time: 2021-05-18T16:08:27.1678125Z operation_start_time: 2021-05-18T16:00:29.0000000Z operation_type: Install other_non_compliant_count: 0 owner_information: patch_group: test-grp6-wed patches: [ [ KB5001879 Yes Success ] [ KB890830 Yes Success ] ] } What I'm after is to table selected fields (server name, start/finish times, etc.) and to get the patches column in this format (space- or comma-separated, on two lines within the same row as the rest of that server's fields): KB5001879, Yes, Success KB890830, Yes, Success I can extract the field using the following: index="aws" sourcetype="aws:ssmpatchinglogs" | spath patches{}{} output=patches I've tried some things with mvexpand, streamstats and mvindex (which wasn't recognised as a command; we're on Splunk version 8.0.1, build 6db836e2fb9e). Cheers
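One possible approach, a sketch only: since spath patches{}{} flattens every patch's three values into a single multivalue field, the rows of three can be rebuilt with mvrange and mvmap (mvmap is available from Splunk 8.0; the field names beyond the ones shown in the post are assumptions):

```spl
index="aws" sourcetype="aws:ssmpatchinglogs"
| spath patches{}{} output=p
| eval idx=mvrange(0, mvcount(p), 3)
| eval patches=mvmap(idx, mvindex(p, idx).", ".mvindex(p, idx+1).", ".mvindex(p, idx+2))
| table host, operation_start_time, operation_end_time, patches
```

Each value of patches then holds one "KBxxxxxxx, Yes, Success" line, rendered on its own line within the row.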
Hi Team, I tried the command below, but the output is incorrect: all the counts show up under Other instead. SPL: | stats count(eval(match(User_Agent, "Firefox"))) as "Firefox", count(eval(match(User_Agent, "Chrome"))) as "Chrome", count(eval(match(User_Agent, "Safari"))) as "Safari", count(eval(match(User_Agent, "MSIE"))) as "IE", count(eval(match(User_Agent, "Trident"))) as "Trident", count(eval(NOT match(User_Agent, "Chrome|Firefox|Safari|MSIE|Trident"))) as "Other" | transpose | sort by User_Agent
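A debugging sketch, under the assumption that the stats idiom itself is fine (count(eval(...)) only counts events where the expression is true): if everything lands in Other, the User_Agent values likely never contain those exact substrings, and a case mismatch is a common cause. Case-insensitive matching may help:

```spl
| stats count(eval(match(User_Agent, "(?i)firefox"))) as "Firefox",
        count(eval(match(User_Agent, "(?i)chrome"))) as "Chrome",
        count(eval(NOT match(User_Agent, "(?i)(chrome|firefox|safari|msie|trident)"))) as "Other"
```

It may also be worth running `... | stats count by User_Agent` first to eyeball the raw values and confirm the field name.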
Hi Splunkers, Good day. I am experiencing an issue in our cluster where searches are all being skipped with the reason "Searchable rolling restart or upgrade is in progress". My understanding is that enabling searchable rolling restart on the cluster manager (indexer) minimizes the impact on running searches during a bundle push. However, in my case all searches are getting skipped regardless. Seeking advice. Splunk on the SH cluster and the indexer cluster is all the same version, 8.0.2. Thank you in advance.
Hi, I have created an alert which checks a transaction's response time; if the response time is more than 10 minutes, Splunk sends an email alert. Here is the search query: sourcetype="access_log" host=hostname* | eval headers=split(_raw," ") | eval username=mvindex(headers,2) | eval method=mvindex(headers,5) | eval Request=mvindex(headers,6) | eval Status=mvindex(headers,8) | eval Payload=mvindex(headers,9) | eval req_time=mvindex(headers,10) | eval uri=mvindex(headers,11) | eval Method=replace(method,"\"","") | eval uri=replace(uri,"\"","") | eval RequestTime_Minutes = req_time*0.0000166667 | eval Response_Time_in_Minutes= round(RequestTime_Minutes,2) | table Response_Time_in_Minutes host username _time uri Request Status | search Response_Time_in_Minutes > 10 My question: I want to exclude one particular transaction, "searchrequest-excel-all-fields". I do not want alerts for that transaction since it doesn't affect our app in any way. How do I go about it?
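One way, a sketch assuming the transaction name appears in the extracted uri field: add it to the final filter so those events never trip the threshold:

```spl
... | search Response_Time_in_Minutes > 10 NOT uri="*searchrequest-excel-all-fields*"
```

If it can appear elsewhere in the event, filtering it at the very start (`sourcetype="access_log" host=hostname* NOT "searchrequest-excel-all-fields"`) also reduces the amount of data the rest of the pipeline has to process.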
How do I draw a sparkline from data that comes from a metrics index (i.e. accessed via the mstats command)? I've tried various combinations of: | mstats span=5m latest(LogicalDisk.%_Free_Space) as FreePercentSpace WHERE index=metrics_index host=HOSTYMCHOSTFACE instance="*:" | stats sparkline count by instance and tried adding "prestats=true" to the end of the mstats command, but still nothing happens. I assume there is some intermediate command I need to put between the two lines to make the data palatable for stats (or chart) to process? Thanks, Eddie
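A sketch of one possible fix, untested against this data: sparkline needs an aggregation over _time-stamped rows, and mstats with span already emits one row per time bucket, so aggregating the metric value (rather than count) and grouping by instance inside mstats may be enough:

```spl
| mstats avg(LogicalDisk.%_Free_Space) as FreePercentSpace
    WHERE index=metrics_index host=HOSTYMCHOSTFACE instance="*:" span=5m BY instance
| stats sparkline(avg(FreePercentSpace), 5m) as Trend, latest(FreePercentSpace) as Latest BY instance
```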
How do I enable the Splunk Web UI on RHEL 8?
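A minimal sketch, assuming a default install under /opt/splunk: Splunk Web is controlled by web.conf, and on RHEL 8 the port usually also needs to be opened in firewalld.

```ini
# /opt/splunk/etc/system/local/web.conf
[settings]
startwebserver = 1
httpport = 8000
```

Then restart Splunk (`/opt/splunk/bin/splunk restart`) and open the port: `firewall-cmd --add-port=8000/tcp --permanent && firewall-cmd --reload`.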
Hi all, Need help in a hurry. I used Splunk Enterprise 8.1.2 and the Add-on Builder to create an add-on and export it to a tgz file, but it failed our colleague's verification step: [ Failure Summary ] Failures will block the Cloud Vetting. They must be fixed. check_that_extracted_splunk_app_contains_default_app_conf_file What step did I miss? Emily
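Judging by the check name, the extracted package is missing a default/app.conf at the top of the app directory (or the tgz does not unpack to a single app folder containing one). A minimal sketch of the file, with placeholder values:

```ini
# <app_name>/default/app.conf
[install]
is_configured = false

[package]
id = my_exported_addon

[ui]
is_visible = true
label = My Add-on
```

It may be worth extracting the tgz locally and confirming the layout is `<app_name>/default/app.conf` rather than the conf file sitting at the archive root.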
Hello everyone, I hope you are all having a great day. I have been trying to understand how to properly work with multivalue fields in Splunk, but it has been a true struggle for me. After I execute this command: | stats values(CODE) as CODE values(DATE_IN) as DATE_IN values(DATE_REQUESTS) as DATE_REQUESTS by HOST I get something like this: HOST CODE DATE_IN DATE_REQUESTS A UYUJ XYHH 6/05/2021 6/06/2021 6/07/2021 B OLP NMJ TYU WER BIYU 7/06/2021 8/06/2021 9/06/2021 10/06/2021 11/06/2021 8/09/2021 9/09/2021 But what I really want is to create a table with a single row for each HOST, CODE and DATE_IN, to then subtract each DATE_REQUEST from the single value in DATE_IN into a field named DATE_DIFF, and finally to create a field called SELECT holding the smallest positive number from the potentially multivalue field DATE_DIFF. I'm a rookie when it comes to Splunk and all of my attempts at using the mvexpand function have not returned my desired outcome. I appreciate everyone's good will in helping me out on this one. Kindly, Cindy
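A sketch of one approach, with the date format (%d/%m/%Y) as an assumption: expand only DATE_REQUESTS, compute the difference in days per row, keep the non-negative ones, and take the minimum back per group:

```spl
| stats values(CODE) as CODE, values(DATE_IN) as DATE_IN, values(DATE_REQUESTS) as DATE_REQUESTS by HOST
| mvexpand DATE_REQUESTS
| eval DATE_DIFF = round((strptime(DATE_REQUESTS, "%d/%m/%Y") - strptime(DATE_IN, "%d/%m/%Y")) / 86400)
| where DATE_DIFF >= 0
| stats values(DATE_REQUESTS) as DATE_REQUESTS, min(DATE_DIFF) as SELECT by HOST, CODE, DATE_IN
```

Caveat: for a host like B, where CODE and DATE_IN are themselves multivalue, those fields would need expanding too (or the grouping restructured) before the subtraction is well defined.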
Populating a data model from a JSON feed: one of the fields, "mnemonic", looks like this in _raw: "mnemonic":"119fw3q-wrl-834v:abc:E10251:2048:119fw3q:You can do it! - TKO" Strangely, the mnemonic field in the data model only captures up to the first colon ":": mnemonic = "119fw3q-wrl-834v" Does anyone have advice on how to get around this without changing the character? I tried some things with rex in eval, but _raw is not available for use in eval functions. Thanks in advance.
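As a quick check outside the data model (index and sourcetype are placeholders), spath reads the JSON directly; if this returns the full value, the JSON itself is intact and it is the data model's field extraction that is splitting on the colon:

```spl
index=your_index sourcetype=your_sourcetype
| spath path=mnemonic output=mnemonic_full
| table mnemonic, mnemonic_full
```

If that is the case, one option may be to base the data model attribute on the auto-extracted JSON field (or an eval over it) rather than on a regex that stops at the first colon.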
Hello everyone, seeking your help. I have logs where Transaction_ID is unique to a transaction. Depending on the transaction there can be multiple actions, but if there is an error, a log is generated with Action=Error. I have created two searches. One for successfully created transactions: `base_search` | search action!=Error | timechart distinct_count(Transaction_ID) as Successful And one for errors: `base_search` | search action=Error | timechart distinct_count(Transaction_ID) as Error I would like to display these two in one chart to compare the number of successful events vs failed ones. What would be the best method to combine them? Appreciate any guidance.
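One possible method, a sketch: derive a status field in a single search and let timechart split the series by it, so both counts land on the same chart:

```spl
`base_search`
| eval status=if(action="Error", "Error", "Successful")
| timechart dc(Transaction_ID) by status
```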
Hello. I am using Splunk configured behind a reverse proxy. The root_endpoint in web.conf is set to /splunk. Most pages work fine, and most functions work fine as well. However, on the job inspector page, the icon's path is broken and the icon is not displayed. Are there any other options that need to be set?
Hi guys, for a dashboard panel I am running a base search and want a checkbox that returns the timechart data when the checkbox is selected. I'm not sure what changes need to be made to the following query (I also guess the tag placement for row, panel and chart needs to be aligned): <search id="base_search"> <query>index=magic host="abc*" $check1$</query> <earliest>-120m@m</earliest> <latest>now</latest> </search> <fieldset submitButton="false"> <input type="checkbox" token="check1" searchWhenChanged="true"> <label>box</label> <choice value="*">All</choice> <search base="base_search"> <query>| search "abracadabra" | timechart span=5m count </query> </search> <fieldForLabel>check1</fieldForLabel> <fieldForValue>check1</fieldForValue> </input> </fieldset>
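A sketch of one possible structure (untested): inputs don't take a base search themselves, so the checkbox just sets the token that the base search consumes, and the chart post-processes the base search. The query values come from the original post; the default keeps the token defined when the box is unchecked:

```xml
<form>
  <search id="base_search">
    <query>index=magic host="abc*" $check1$</query>
    <earliest>-120m@m</earliest>
    <latest>now</latest>
  </search>
  <fieldset submitButton="false">
    <input type="checkbox" token="check1" searchWhenChanged="true">
      <label>box</label>
      <choice value="&quot;abracadabra&quot;">abracadabra only</choice>
      <default>*</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <chart>
        <search base="base_search">
          <query>| timechart span=5m count</query>
        </search>
      </chart>
    </panel>
  </row>
</form>
```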
hello, I have a Splunk macro which is used to alert on system saturation. When I pass numeric values into the macro I get results and can send alerts; however, if I try to pass arguments into the macro, I stop getting any results and there is no error. eventtype="nmon:performance" type=DF_STORAGE storage_used_percent>0 env::$env$| stats latest(storage_used_percent) as storage_used_percent by _time, frameID, host, env, mount | lookup nmon_alerting_threshold_template_filesystem frameID mount OUTPUT alert_fs_max_percent as template_alert_fs_max_percent, alert_fs_min_time_seconds as template_alert_fs_min_time_seconds | lookup nmon_alerting_threshold_filesystem frameID host mount OUTPUT alert_fs_max_percent as server_alert_fs_max_percent, alert_fs_min_time_seconds as server_alert_fs_min_time_seconds | eval default_alert_fs_max_percent="$threshold$", default_alert_fs_min_time_seconds="$time$" | eval alert_fs_max_percent=case(isnum(server_alert_fs_max_percent), server_alert_fs_max_percent, isnum(template_alert_fs_max_percent), template_alert_fs_max_percent, isnum(default_alert_fs_max_percent), default_alert_fs_max_percent), alert_fs_min_time_seconds=case(isnum(server_alert_fs_min_time_seconds), server_alert_fs_min_time_seconds, isnum(template_alert_fs_min_time_seconds), template_alert_fs_min_time_seconds, isnum(default_alert_fs_min_time_seconds), default_alert_fs_min_time_seconds), alert_threshold_source=case(isnum(server_alert_fs_max_percent), "server_thresholds", isnum(template_alert_fs_max_percent), "template_thresholds", isnum(default_alert_fs_max_percent), "default_threshold") | where (storage_used_percent>=alert_fs_max_percent) | lookup nmon_alerting_filesystem_global_exclusion mount OUTPUT exclude as global_exclude | lookup nmon_alerting_filesystem_template_exclusion frameID mount OUTPUT exclude as template_exclude | lookup nmon_alerting_filesystem_per_server_exclusion host mount OUTPUT exclude as host_exclude | fillnull value="false" global_exclude template_exclude host_exclude | where (global_exclude!="true" AND template_exclude!="true" AND host_exclude!="true") | stats latest(_time) as _time range(_time) as duration latest(storage_used_percent) as latest_storage_used_percent, values(alert_fs_max_percent) as alert_fs_max_percent, values(alert_fs_min_time_seconds) as alert_fs_min_time_seconds, values(alert_threshold_source) as alert_threshold_source by frameID,host,env,mount | where (latest_storage_used_percent>=alert_fs_max_percent) AND (duration >= alert_fs_min_time_seconds) | eval "duration (hh:mm:ss)"=tostring(duration,"duration") | fields frameID,host,env,_time,mount,duration,"duration (hh:mm:ss)",latest_storage_used_percent,alert_fs_max_percent,alert_fs_min_time_seconds,alert_threshold_source Can someone help me pass numeric values as arguments and get the right response? Thanks in advance.
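One likely cause, offered as an assumption: quoting the substitution in `eval default_alert_fs_max_percent="$threshold$"` makes the value a string, so the later isnum(default_alert_fs_max_percent) is false and the case() never picks the default threshold. Converting explicitly may help:

```spl
| eval default_alert_fs_max_percent = tonumber("$threshold$"),
       default_alert_fs_min_time_seconds = tonumber("$time$")
```

It may also be worth confirming that argument validation in the macro definition is not silently rejecting the passed values.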
Is it possible to get a particular value from my search results into the final output? I'm having a hard time getting them to display the way I want in a table. Search="mpmstats" Here is the output of my search. Out of this I need only the bsy value to be displayed in a table, in the format below. Still learning. Please help. Thanks in advance.
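Since the screenshots of the output did not survive, this is a guess at the event shape: assuming each mpmstats event carries a `bsy: <n>` pair, a rex extraction could pull just that value into a table:

```spl
Search="mpmstats"
| rex "bsy:\s*(?<bsy>\d+)"
| table _time, host, bsy
```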
I used the multiple-chart option to achieve the visualisation below. However, I would like to customise the Y-axis for each chart to show the visualisation better. For instance, the Y-axis max and min for the 1st and 2nd charts would be 2 and 0, and for the third chart 20 and 0. Help would be greatly appreciated! Thank you!
I want to upgrade Splunk Enterprise from version 8.0.5.1 to 8.1 in a standalone environment. Please suggest the steps to perform this migration. Also, my current Python version is 2.7. Please advise whether I need to upgrade Python to 3.7 as well, or whether 2.7 will work with Splunk Enterprise 8.1.
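On the Python point: Splunk Enterprise 8.x ships with its own bundled Python 3.7 interpreter, so the system Python 2.7 does not need upgrading for Splunk itself; only private apps containing Python 2 code need porting. A standalone upgrade sketch, assuming a default tarball install in /opt/splunk (back up first, and substitute the real package filename):

```
/opt/splunk/bin/splunk stop
cp -rp /opt/splunk/etc /opt/splunk_etc_backup
tar -xzf splunk-8.1.x-<build>-Linux-x86_64.tgz -C /opt
/opt/splunk/bin/splunk start --accept-license
```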
I have copied the ITSI app from one Splunk server to another. But when I try to access the Service Analyzer, I get the error "Could not load service analyzer. Check that you have proper roles and permissions. Details: An internal error occurred." I don't get this error on my old server, from which I copied the ITSI configuration, and I verified that all configurations were copied. Can someone help me with this?