All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


In my dashboard, I have "Alerts Open" single value panels (driven by timechart) with colour ranges, using the following search:

index="<client>" case_id | dedup 1 case_id sortby -_time | search (status=new OR status=under_investigation) | timechart sum(alert_count) as alert_count_total | addcoltotals

This works fine in all respects when there are actually alerts open. However, when no alerts are open it simply displays "No results found", whereas I wanted it to stay on 0. I tried "if(isnull(...))" and "fillnull", neither of which worked, but I found that the following search resolves this:

index="<client>" case_id | dedup 1 case_id sortby -_time | search (status=new OR status=under_investigation) | timechart sum(alert_count) as alert_count_total | append [| stats count as alert_count] | addcoltotals

However, a side effect is that the panels now use the colours for the max ranges even though the value is 0 and the max ranges are, for example, "from 100 to max". For some reason it seems to be the timechart that causes this, because removing it makes the panels use the correct colours.
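A possible way to keep the panel at 0 without appending an extra row in the normal case (a sketch based on the searches above, untested): appendpipe runs its subpipeline over the current results, and stats count returns a single row even over empty input, so the zero row is produced only when the timechart yields nothing.

```
index="<client>" case_id
| dedup 1 case_id sortby -_time
| search (status=new OR status=under_investigation)
| timechart sum(alert_count) as alert_count_total
| addcoltotals
| appendpipe [ stats count as alert_count_total | where alert_count_total==0 ]
```

Because no extra row is appended when alerts do exist, the colour ranges should evaluate against the real total instead of jumping to the max range.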
Hey guys, I get four types of logs in different formats. If a log is of type 1, I want to use one regex; if it is of type 2, another regex; and similarly for all four types of logs I want to use four different regexes, and finally put all the types and the values returned by the regexes into a table. How can I do this?
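One common pattern for this (a sketch with hypothetical patterns and field names — substitute your own regexes and index): rex leaves its capture fields null when the pattern does not match, so you can run one rex per log type and then combine the results.

```
index=your_index
| rex "^TYPE1: (?<val_type1>\S+)"
| rex "^TYPE2: (?<val_type2>\S+)"
| rex "^TYPE3: (?<val_type3>\S+)"
| rex "^TYPE4: (?<val_type4>\S+)"
| eval log_type=case(isnotnull(val_type1), "type1",
                     isnotnull(val_type2), "type2",
                     isnotnull(val_type3), "type3",
                     isnotnull(val_type4), "type4")
| eval value=coalesce(val_type1, val_type2, val_type3, val_type4)
| table _time log_type value
```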
I'm trying to get the average time that a case stays open in a system. To get the latest event per closed case and calculate the time between open and close, I use the following search (I'll refer to this as "<base command>"), which works:

index="<client>" case_id | dedup 1 case_id sortby -_time | search (status!=new AND status!=under_investigation) | convert mktime(_time) as modification_time_epoch | eval creation_time_epoch = strptime(creation_time, "%Y/%m/%d %H:%M:%S") | eval timedifference_seconds = modification_time_epoch - creation_time_epoch | eval timedifference_minutes = timedifference_seconds / 60 | eval timedifference_hours = timedifference_minutes / 60

Appending '| timechart avg(timedifference_hours)' doesn't work as expected because each bucket only averages the values falling in that bucket rather than the average across all of the values. Appending '| stats avg(timedifference_hours)' does correctly calculate the average, but I can't get that output to be accepted by timechart. I've tried every solution I can find online, but none of them have worked, hence this post.
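One way to feed the overall average into timechart (a sketch built on the base command above, untested): eventstats computes the average across all matching events and attaches it to every event, so timechart can then plot it as a flat line.

```
<base command>
| eventstats avg(timedifference_hours) as overall_avg_hours
| timechart span=1d latest(overall_avg_hours) as avg_hours_open
```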
This is my query, and I have some challenges with the log. My daily job starts at 11 PM. If the job runs successfully, it completes before 11:30 PM, so I set the status to Success. But when the job times out, it times out the next day at 1:30 AM. The job then starts again at 11 PM that day and runs successfully, so I now have both a Failure and a Success on the same day. How can I check the events and set the status to Failure in that case?

index=xx* app_name="xxx" OR cf_app_name="yyy*" OR app_name="ccc" | bin _time span=1d | eval dayweek=strftime(_time,"%A") | convert timeformat="%m-%d-%y" ctime(_time) as c_time | eval Job = case(like(msg, "%first%"), "first Job", like(msg, "%second%"), "second Job", like(msg, "%third%"), "third job", like(msg, "%fourth%"), "fourth job") | stats count(eval(like(msg, "%All feed is completed%") OR like(msg, "%Success:%") OR like(msg, "%Finished success%"))) as Successcount count(eval(like(msg, "%Fatal Error:%") OR (like(msg, "%Job raised exception%") AND like(msg, "% job error%")))) as failurecount by Job c_time dayweek | eval status=case((Job="fourth job") AND (dayweek=="Saturday" OR dayweek=="Sunday"), "NA", Successcount>0, "Success", failurecount>0, "Failure") | xyseries Job c_time status
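One common workaround for jobs that cross midnight (a sketch with simplified match patterns; the 3-hour shift is an assumption based on the 11 PM start and 1:30 AM timeout): shift the timestamp back a few hours before computing the day bucket, so a 1:30 AM timeout is attributed to the run that started at 11 PM the previous evening. Listing the failure test first in case() also makes a day with both a failure and a success report as Failure.

```
index=xx* app_name="xxx" OR cf_app_name="yyy*" OR app_name="ccc"
| eval run_day=strftime(_time - 3*3600, "%m-%d-%y")
| stats count(eval(like(msg, "%Success%"))) as Successcount
        count(eval(like(msg, "%Fatal Error%"))) as failurecount
        by Job run_day
| eval status=case(failurecount>0, "Failure", Successcount>0, "Success")
```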
Hello Experts, greetings! I am new to Splunk, and I have created a dashboard that shows the CPU utilization for two different time frames for a single server. The data shows correctly as expected, but I need to show the date in the legend values. For example, Time Period1 is Aug 3rd, 2020 (from 5 AM to 7 AM) and Time Period2 is Aug 4th, 2020 (from 5 AM to 7 AM); I need the legend to show "Aug 3, 2020" for Time Period1 and "Aug 4, 2020" for Time Period2. Please find my code below for more details:

<form>
  <label>Working Report</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="iSeries" searchWhenChanged="true">
      <label>iSeries Host</label>
      <search>
        <query>index=application_name sourcetype="eview-iSeries" | stats values(host) by host</query>
        <earliest>$time_period1.earliest$</earliest>
        <latest>$time_period1.latest$</latest>
      </search>
      <search>
        <query>index=application_name sourcetype="eview-iSeries" | stats values(host) by host</query>
        <earliest>$time_period2.earliest$</earliest>
        <latest>$time_period2.latest$</latest>
      </search>
      <fieldForLabel>host</fieldForLabel>
      <fieldForValue>host</fieldForValue>
    </input>
    <input type="time" token="time_period1">
      <label>Time Period1</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="time" token="time_period2">
      <label>Time Period2</label>
      <default>
        <earliest>@d</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <chart>
        <title>CPU Utilization</title>
        <search>
          <query>index=application_name sourcetype="eview-iSeries" host=$iSeries$ earliest=$time_period1.earliest$ latest=$time_period1.latest$ | timechart span=5min avg("AvgPercentCPUUsed") as "Time Period1" | appendcols [search index=application_name sourcetype="eview-iSeries" host=$iSeries$ earliest=$time_period2.earliest$ latest=$time_period2.latest$ | timechart span=5min avg("AvgPercentCPUUsed") as "Time Period2"] | convert timeformat="%H:%M" ctime(_time) AS Time | sort _time | fields - _time | table Time *</query>
        </search>
        <option name="charting.chart">line</option>
        <option name="charting.seriesColors">[0xFF0000,0x0000FF]</option>
        <option name="charting.chart.columnSpacing">100</option>
      </chart>
    </panel>
  </row>
</form>
Hi Splunkers, we need to monitor who, when, where, and what was changed in macros, saved searches, and so on. The internal audit index can answer "who, when, where" (audit POST requests). What is the right and preferred way to answer "what" exactly was added to or removed from the knowledge object during the change operation? P.S. We need to have this information in Splunk and correlate it with the _internal audit data.
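If you are on Splunk 9.0 or later, one built-in option worth checking is the _configtracker index, which records configuration-file changes as JSON events, including the properties that were added, changed, or removed (old and new values). A minimal starting search (the data.path filter is an assumption; inspect the raw JSON in your environment for the exact field layout):

```
index=_configtracker data.path="*savedsearches.conf*"
```

Correlating these events with _audit POST requests by time and user should give the full who/when/where/what picture.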
Hi all, I'm very new to Splunk and I'm trying to learn SPL. At the moment I have a chart with the date on the X axis (DD/MM/YYYY, out of order) and a value on the Y axis. I want to know how I can sort the X axis in chronological order by month and compute the average of all the values per month. Thank you for your help.
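One approach (a sketch; the index and field names are placeholders): if the date is a string field, parse it into _time first, then let timechart bucket by month. timechart always sorts the X axis chronologically.

```
index=your_index
| eval _time=strptime(date_field, "%d/%m/%Y")
| timechart span=1mon avg(value_field) as monthly_avg
```

If the events already have a correct _time, the eval line can be dropped.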
Hello, I need to schedule an alert to run between 2:30 AM and 4:00 AM. Please suggest the cron expression. Thanks.
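For reference, Splunk cron schedules use five fields (minute, hour, day of month, month, day of week), and each alert has a single schedule. A couple of sketches depending on the intent:

```
# Run once per night at 2:30 AM
30 2 * * *

# Run repeatedly during the window (no single expression covers
# 2:30-4:00 exactly, so two scheduled copies of the alert are needed):
30,45 2 * * *    # 2:30 and 2:45
*/15 3 * * *     # 3:00, 3:15, 3:30, 3:45
```

Add a third schedule of `0 4 * * *` if a run at exactly 4:00 AM is also required.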
When I run the playbook in debugger mode by passing the event ID to test it, the playbook works fine the first time, but when I run the playbook from the event, I get this error:

phantom.act(): 'enrich_ip_5' cannot be run on asset 'sample asset'. The "enrich ip" action requires the following parameters: ip. The given parameters look like they were automatically generated by phantom.act() because an empty parameters list was passed to phantom.act(). The parameters list may have been empty because the preceding call to phantom.collect2() returned an empty list. Check your calling code in the action that generated this error
Experts, I raised this concern with my company's AppDynamics account managers last year, but I thought I'd share my views here to see whether anyone has worked on, or knows about, this requirement. My ask is continuous availability of the AppDynamics controller, with minimal outage or downtime for controller maintenance, i.e. essentially an active-active architecture. Believe me, the current AppD architecture is old school, with the watchdog and all. We have multiple deployments, and each time we face some issue or other. For example, we have quarterly hygiene infrastructure updates that restart the servers and break replication. We plan it so that one of the nodes (primary/secondary) is always active, but as soon as the final replication is triggered, it demands an outage. I know AppD is working to re-architect the product, but as per my information the controller is out of scope, i.e. controllers (especially the app server) will still be HA and not CA. I don't want to compare with other APM tools, but a lot of work needs to be done on the AppDynamics architecture to meet modern design requirements (resilient, horizontally scalable, non-disruptive, continuously available, stable). Hope this makes sense. Regards, Vaibhav
Hi, I am unable to search my data unless I specify All time.
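A frequent cause of this symptom (an assumption, since no details are given) is that event timestamps are parsed in the wrong time zone, so events are indexed hours in the future or past and fall outside ranges like "Last 24 hours". If that is the case, a props.conf fix on the indexer or heavy forwarder along these lines may help (the sourcetype name, time zone, and format are placeholders):

```
[your_sourcetype]
TZ = UTC
TIME_FORMAT = %Y-%m-%d %H:%M:%S
```

Comparing _time against _indextime for a few events is a quick way to confirm the offset.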
<dashboard>
  <label>drilldown time</label>
  <init>
    <unset token="epoch" />
    <unset token="human" />
  </init>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults | eval time=_time | fieldformat time=strftime(time,"%F %T")</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <drilldown>
          <set token="epoch">$click.value2$</set>
          <eval token="human">strftime($epoch$,"%F %T")</eval>
        </drilldown>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <html>
        <p>
          <h2>$epoch$</h2>
        </p>
        <p>
          <h2>$human$</h2>
        </p>
      </html>
    </panel>
  </row>
</dashboard>

Clicking _time works fine, but clicking time does not. What else can I do besides renaming time to _time?
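One thing worth trying (a sketch, assuming the clicked time cell arrives as a formatted string rather than an epoch value, which would make strftime() fail): parse the clicked value back to epoch with strptime() inside the drilldown eval.

```
<drilldown>
  <set token="epoch">$click.value2$</set>
  <eval token="human">strftime(strptime($click.value2$, "%F %T"), "%F %T")</eval>
</drilldown>
```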
Hi, I have a requirement to monitor certain registry key values on Windows Server 2016. I am using the configs below in inputs.conf, but the data is not being indexed and I don't see any results in search. I tried following the Splunk docs as well but couldn't get much help. Let me know if you have come across any such issue and rectified it.

Contents of inputs.conf:

[WinRegMon://HKLM]
baseline = 1
disabled = 0
hive = \\REGISTRY\\SYSTEM\\*ControlSet*\\Services\\LanManServer\\Shares\\?.*
hive = \\HKEY_LOCAL_MACHINE\\Software\\Microsoft\\Windows\\CurrentVersion\\Run\\?.*
hive = \\HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\RunOnceEx\\?.*
hive = \\HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Winlogon\\Userinit
hive = \\HKEY_LOCAL_MACHINE\\SYSTEM\\*ControlSet*\\Services\\LanmanServer\\Parameters\\autodisconnect
index = windows
proc = .*
source = WinRegistry
type = set|create|delete|rename|query
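One likely problem in the stanza above: a .conf stanza takes a single value per setting, so when hive is repeated, only one of the values takes effect. A sketch of an alternative layout with one stanza per key pattern (the stanza names are arbitrary; WinRegMon hive paths are usually expressed as \\REGISTRY\\MACHINE\\... rather than HKEY_LOCAL_MACHINE, so verify the exact paths in your environment):

```
[WinRegMon://lanman_shares]
baseline = 1
disabled = 0
hive = \\REGISTRY\\MACHINE\\SYSTEM\\*ControlSet*\\Services\\LanmanServer\\Shares\\.*
index = windows
proc = .*
type = set|create|delete|rename

[WinRegMon://run_key]
baseline = 1
disabled = 0
hive = \\REGISTRY\\MACHINE\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run\\.*
index = windows
proc = .*
type = set|create|delete|rename
```

Alternatively, the patterns can be combined into a single hive regex using alternation.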
I have written a rule that uses a transaction and, based on the transaction's contents, either alerts or not. The main fields I have in my index are:

rid, label, title

rid is the common field, and each rid can have 2-5 events. For example:

Event 1 --> Aug 10 00:10:00 ......... label="News" Title="mytitle" user="bob" rid=12345
Event 2 --> Aug 10 00:10:00 ......... label="Catalog" Title="mytitle" user="bob" rid=12345
Event 3 --> Aug 10 00:10:00 ......... label="Match" Title="mytitle" user="bob" rid=12345

The search I have configured is:

index=main NOT (sender="administrator" AND label="System Generated") NOT label="Match" | transaction rid

If I run this search, my transaction shows the first two events and removes Event 3. What I am trying to do is alert on rids that have not yet been labelled "Match"; however, the search above would also trigger for rids that have already been matched. An example would be:

Event 1 --> Aug 10 00:12:00 ......... label="News" Title="mytitle" user="bob" rid=45678
Event 2 --> Aug 10 00:12:00 ......... label="Catalog" Title="mytitle" user="bob" rid=45678

Using the examples above, I want to alert when rid 45678 occurs but not 12345. I have tried | transaction rid keepevicted=true and then searching for where Match doesn't occur. I have also tried the following, which doesn't seem to give the results I want:

index=main NOT (sender="administrator" AND label="System Generated") | transaction rid startswith="News" endswith="Match" | search NOT label IN ("Match")

This generates 0 results. I have also tried running with keepevicted=true, with no results either:

index=main NOT (sender="administrator" AND label="System Generated") NOT label="Match" | transaction rid keepevicted=true
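An alternative that avoids transaction entirely (a sketch based on the fields above, untested): collect every label per rid with stats, then keep only the rids that never received a "Match" label. Note that the base search must NOT exclude label="Match" here, since those events are needed to identify the already-matched rids.

```
index=main NOT (sender="administrator" AND label="System Generated")
| stats values(label) as labels latest(_time) as last_seen by rid
| where isnull(mvfind(labels, "^Match$"))
```

With the example data, rid 12345 is dropped (its labels include "Match") while rid 45678 survives and can drive the alert.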
Hi Team, the following inputs.conf works on localhost to monitor a registry key, but it is not working on the universal forwarder.

[WinRegMon://HKLM]
baseline = 1
disabled = 0
hive = \\REGISTRY\\MACHINE\\SYSTEM\\*ControlSet*\\Services\\LanManServer\\Shares\\?.*
index = windows
proc = .*
type = set|create|delete|rename

By the way, even the following hive attribute works fine on localhost but not on the universal forwarder:

hive = HKEY_LOCAL_MACHINE\\SYSTEM\\*ControlSet*\\Services\\LanManServer\\Shares\\?.*

But the default configuration of inputs.conf works on both localhost and the universal forwarder:

[WinRegMon://default]
disabled = 0
hive = .*
proc = .*
type = rename|set|delete|create
index = windows

Any references would be much appreciated.
Using Splunk ES 5.3.1, I have a saved search that reached the 25 GB limit (srchDiskQuota) before being finalized. It ran two days in a row and ended up filling my dispatch directory. In total it was searching over 65 billion events over a 30-day time period in the Web data model. Looking through the jobs, I was able to identify the search and disable it from running further. However, I don't know where this search is used in ES and where its results are consumed. I'd like to determine that so I know what will be missing, and where, as a result of disabling it. The only information I have found is that it is used in the Machine Learning Toolkit, but I don't have MLTK installed in ES, nor is it an applicable version.

Name: Web - Web Event Count By Src By HTTP Method Per 1d - Context Gen
App: SA-NetworkProtection
Type: saved search
Location: /opt/splunk/etc/apps/SA-NetworkProtection/default/savedsearches.conf

[Web - Web Event Count By Src By HTTP Method Per 1d - Context Gen]
action.email.sendresults = 0
cron_schedule = 0 0 * * *
disabled = False
dispatch.earliest_time = -31d@d
dispatch.latest_time = -1d@d
enableSched = 1
is_visible = false
schedule_window = 20
search = | tstats `summariesonly` count as web_event_count from datamodel=Web.Web by Web.src, Web.http_method, _time span=24h | `drop_dm_object_name("Web")` | where match(http_method, "^[A-Za-z]+$") | `context_stats(web_event_count, http_method)` | eval min=0 | eval max=median*2 | xscreateddcontext name=count_by_http_method_by_src_1d container=web class=http_method app="SA-NetworkProtection" scope=app type=domain terms=`xs_default_magnitude_concepts` | stats count
Hello, how can I get the max of one column based on another column? Thank you!
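If the goal is the maximum of one column grouped by another (an assumption, since the question is brief; field names here are placeholders), stats or eventstats can do it:

```
... | stats max(value_field) as max_value by group_field
```

stats keeps only one row per group; using eventstats with the same arguments instead attaches the per-group maximum to every original row, which is useful when the other columns still need to be visible.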
I get this error when I select the 'All' value from my dropdown input. These are the static and dynamic options I've added to the dropdown, and this is the search query I used for the panel where I received the error. How do I fix this error?
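A common cause of this class of error (an assumption, since the referenced screenshots are not visible here) is an 'All' choice whose value is empty or not valid in the panel's search. Giving the static choice the value * and interpolating the token as a field match usually avoids it (token, field, and index names below are placeholders):

```
<input type="dropdown" token="my_field_tok">
  <label>My Field</label>
  <choice value="*">All</choice>
  <default>*</default>
  <fieldForLabel>my_field</fieldForLabel>
  <fieldForValue>my_field</fieldForValue>
  <search>
    <query>index=your_index | stats count by my_field</query>
  </search>
</input>
```

The panel query then filters with my_field=$my_field_tok$, so selecting "All" becomes my_field=*, which matches everything.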
Can custom visualizations access secondary searches?  Some charts access a search with type="annotation" - can other visualizations do the same type of thing?  If so, how do you access that search result?
Hi, I have Splunk_TA_aws installed on the heavy forwarder. The inputs are:

[aws_s3://aws_dome9_logs_amdocsdome9logs]
aws_account = IS account
bucket_name = amdocsdome9logs
character_set = auto
ct_blacklist = ^$
host_name = s3.amazonaws.com
index = aws_dome9_logs
initial_scan_datetime = 2018-01-01T21:54:23-0700
interval = 30
is_secure = True
max_items = 100000
max_retries = 3
recursion_depth = -1
sourcetype = _json_current_time

[aws_s3://aws_dome9_logs_amdocsdome9remediationlogs]
aws_account = IS account
bucket_name = amdocsdome9remediationlogs
character_set = auto
ct_blacklist = ^$
host_name = s3.amazonaws.com
index = aws_dome9_logs
initial_scan_datetime = 2018-01-01T21:54:23-0700
interval = 30
is_secure = True
max_items = 100000
max_retries = 3
recursion_depth = -1
sourcetype = _json_current_time

What can be the reason the same event is indexed twice (day after day)? According to a diff of the JSON files, the files are identical.