All Topics

Hi, one of my panels is based on the start and end tokens of a pan-and-zoom selection (like the example shown). However, without any user selection, I want to hide the second panel until the user has decided to select a pan-and-zoom time range. I have tried putting depends="$selection.earliest$" in the second panel's XML, but it doesn't seem to work. How do I go about this? Thanks!
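Not part of the original question, but a sketch of the pattern often suggested for this, assuming Simple XML: have the chart set the selection tokens only when the user actually pans or zooms, and gate the second panel on one of them with depends. Token names other than the <selection> element and the depends attribute are illustrative:

```xml
<!-- In the first panel's chart: tokens are only set once the user selects a range -->
<chart>
  <search><query>...</query></search>
  <selection>
    <set token="selection.earliest">$start$</set>
    <set token="selection.latest">$end$</set>
  </selection>
</chart>

<!-- Second panel stays hidden until selection.earliest exists -->
<panel depends="$selection.earliest$">
  ...
</panel>
```

If the token is being set somewhere on page load, the panel will show immediately; unsetting it in an <init> block or on a change event is one way to reset it.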
Hi All, I want to convert the following into epoch time, but it is not getting resolved: 2020-10-05 23:06:05.946Asia/Singapore. I use the below command:

eval time=strptime("testtime",%Y-%m-%d%s%H:%M:%S.%3N")
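Not an answer from the thread, just a sketch of how this is often handled: strptime has no conversion for region names like Asia/Singapore, so one option is to strip the zone before parsing; note also that the field name goes unquoted and the format needs a space between date and time. Field names here are assumptions:

```
| eval clean_time=replace(testtime, "Asia/Singapore$", "")
| eval epoch=strptime(clean_time, "%Y-%m-%d %H:%M:%S.%3N")
```

This parses the value in the search head's configured time zone rather than Singapore time; an offset adjustment would be needed if they differ.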
Hello Experts, I have a search as below:

|search | eval _time=new_t | timechart span=1mon sum(alloc) as used | streamstats sum(used) as "Total" | predict "Total" as "Projected" future_timespan=8

The output from the search is as below:

_time    Used  Total  Projected       lower95
2019-09  1     1      <some numbers>  <some numbers>
2020-03  2     3      <some numbers>  <some numbers>
2020-04  4     7      <some numbers>  <some numbers>
2020-05  4     11     <some numbers>  <some numbers>
2020-09  5     16     <some numbers>  <some numbers>
2020-10               <some numbers>  <some numbers>
2020-11               <some numbers>  <some numbers>
2020-12               <some numbers>  <some numbers>
2021-01               <some numbers>  <some numbers>

How can I compare the "_time" field with the current "month-year" and display only those rows greater than the current year-month? I tried:

| search _time>strftime(now(),"%Y-%m-%d")

Any help will be appreciated. Thanks!
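One possible approach, sketched on the assumption that _time is still epoch after timechart: compare against the start of the current month with relative_time rather than against a string:

```
| where _time >= relative_time(now(), "@mon")
```

relative_time(now(), "@mon") snaps to the first of the current month; using "+1mon@mon" instead would keep only months strictly after the current one.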
I have the following data as an example; I want to find the events whose locations have had a temperature above a threshold, say 80F.

Temperature=82.4, Location=xxx.165.152.17, Time=Wed Sep 16 07:43:01 PDT 2020, Type=UPS
Temperature=84.2, Location=xxx.165.152.48, Time=Wed Sep 16 07:43:01 PDT 2020, Type=UPS
Temperature=82.4, Location=xxx.165.154.21, Time=Wed Sep 16 07:43:01 PDT 2020, Type=UPS
Temperature=82.4, Location=xxx.165.162.22, Time=Wed Sep 16 07:43:01 PDT 2020, Type=UPS
Temperature=77.0, Location=xxx.165.164.17, Time=Wed Sep 16 07:43:01 PDT 2020, Type=UPS
Temperature=75.2, Location=xxx.165.170.17, Time=Wed Sep 16 07:43:01 PDT 2020, Type=UPS
Temperature=77.0, Location=xxx.165.208.12, Time=Wed Sep 16 07:43:01 PDT 2020, Type=UPS
Temperature=73.4, Location=xxx.165.224.20, Time=Wed Sep 16 07:43:01 PDT 2020, Type=UPS
Temperature=75.3, Location=xxx.165.52.13, Time=Wed Sep 16 07:47:01 PDT 2020, Type=TempSensor
Temperature=77.9, Location=xxx.165.52.14, Time=Wed Sep 16 07:47:01 PDT 2020, Type=TempSensor
Temperature=76.3, Location=xxx.165.54.24, Time=Wed Sep 16 07:47:01 PDT 2020, Type=TempSensor
Temperature=83.8, Location=xxx.165.48.20, Time=Wed Sep 16 07:47:01 PDT 2020, Type=TempSensor
Temperature=73.8, Location=xxx.165.36.21, Time=Wed Sep 16 07:47:01 PDT 2020, Type=TempSensor

I would like to first find the subset of locations whose temperatures have been above the threshold. I might perform a query like the following:

| Temperature > 80 | fields Location | dedup Location

I'd call the locations returned by that query "hot_locations"; then I'd like to perform my eventual query:

| Location IN hot_locations

My question is: what's the syntax in the Splunk query language to express these declarations? Thanks for your pointers!
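A sketch of the usual subsearch pattern for this (the index name is hypothetical): the inner search returns Location values, which Splunk expands into an OR filter for the outer search:

```
index=sensors
    [ search index=sensors Temperature>80
      | dedup Location
      | fields Location ]
| table _time Location Temperature Type
```

The subsearch plays the role of "hot_locations"; there is no named-set declaration in SPL, the result list is simply inlined at that point in the outer search.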
Hi Splunkers, I have a Splunk search query:

index="xyz" source="/var/log/production.log" sourcetype="xyzlogs" type="report" | dedup uuid | stats count(uuid) as TOTAL | append [ search index="xyz" sourcetype=abclogs NOT host="xyte150.com.dmz" "<vv:general-messages>" ("conditions1"  "conditions2" | dedup uuid | stats count(uuid) as FAIL] | eval SUCCESS=TOTAL - FAIL | stats list(TOTAL) as TotalTransactions, values(SUCCESS) as PASSED, list(FAIL) as FAILED | eval Availability=round((PASSED*100)/TotalTransactions,2)

I cannot see any value in SUCCESS, and because of this, no Availability; somehow the subtraction is not working. My end goal is to display a table showing:

TOTAL PASSED FAIL Availability

Can you please suggest why it is not working? Thanks, Amit
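A commonly suggested rearrangement, sketched but not verified against this data: append puts FAIL on a separate row, so TOTAL - FAIL is evaluated with one side null. appendcols keeps both counts on one row (the unbalanced parenthesis in the subsearch would also need closing):

```
index="xyz" source="/var/log/production.log" sourcetype="xyzlogs" type="report"
| dedup uuid
| stats count(uuid) as TOTAL
| appendcols
    [ search index="xyz" sourcetype=abclogs NOT host="xyte150.com.dmz" "<vv:general-messages>" ("conditions1" "conditions2")
      | dedup uuid
      | stats count(uuid) as FAIL ]
| eval SUCCESS=TOTAL-FAIL
| eval Availability=round((SUCCESS*100)/TOTAL,2)
| table TOTAL SUCCESS FAIL Availability
```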
Hi, I tried the below SPL query, which is not working; can anyone help me?

index=aws sourcetype=* earliest=-30d user="*" action=login OR action=logout | table user status action reason message OR source="*" EventCode=4624 OR EventCode=4634 | table _time Account* Logon*
Excuse my limited knowledge of Splunk. How do I track user device details?
Mobile (device model, OS version)
Browser (browser details, version)
I am new to Splunk. I received a Splunk diag file for a UF. How can I open and analyze the Splunk diag file? Do I need Splunk Support for this, or is there a tool available from Splunk itself?
Hi, I'm trying to build a line graph that would show me the completion time of an event on a daily basis. The completion time is in the timestamp field. The y-axis should display the time of completion and the x-axis the date. Example:

timestamp="2020-10-03 00:48:48.0" statusText="SUCCESS" "JOB1"
timestamp="2020-10-01 21:45:22.0" statusText="SUCCESS" "JOB1"
timestamp="2020-09-31 21:44:22.0" statusText="SUCCESS" "JOB1"
timestamp="2020-09-30 22:48:48.0" statusText="SUCCESS" "JOB1"
timestamp="2020-09-29 00:48:48.0" statusText="SUCCESS" "JOB1"

Can anyone please advise on the best way to do this?
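A sketch of one approach, assuming the field is literally named timestamp: convert it to epoch, derive seconds since midnight for the y-axis, and chart one point per day:

```
| eval completed=strptime(timestamp, "%Y-%m-%d %H:%M:%S.%1N")
| eval secs_since_midnight=completed - relative_time(completed, "@d")
| timechart span=1d max(secs_since_midnight) as completion_time
```

The y-axis is then in seconds (0-86400); rendering it as HH:MM would need an extra formatting step or custom axis labels.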
Can someone help me understand the difference between Splunk Web and Splunk Enterprise, and the Python scripts that interact with them? "Splunk Web supports only Python 3.7": do they mean scripts that read the Web UI, or is there some package like SplunkWeb for Python?

"In Splunk Enterprise version 8.0, Splunk Enterprise continues to use the Python 2 interpreter globally by default, but Splunk Web supports only Python 3.7. Scripts and templates that depend on Splunk Web must be adjusted to use Python 3-compatible syntax before the upgrade, but you have several options for how and when you adjust Python scripts that aren't dependent on Splunk Web." https://docs.splunk.com/Documentation/Splunk/8.0.0/Python3Migration/PythonDevelopment
Hello, we have to create a role from scratch. That role has to have the capabilities required to upload .csv files to the environment. I tried with my admin user and it works just fine, but the custom role is not working. I added the following capabilities to it and it is still not working: edit_monitor, indexes_edit, edit_tcp, search. I still can't upload files; I've got the error I'm attaching. If anyone knows what extra capabilities I need, please help me. Thanks!
Hello All, I have created identities, and when I try to create a new connection to an MS SQL server, I get "database connection is invalid": Login failed for user svc_account. ClientconnectionId: ------
Hi all, I'm using the following:

${index+sourcetype-information} NOT src_ip IN ("10.*","127.*","192.168.*","172.16.0.0/12") dest_ip IN ("10.*","192.168.*","172.16.0.0/12") dest_port>-1 NOT dest_port IN (80,443) | bin _time span=5m | stats dc(dest_ip) as d_C by src_ip dest_port | where d_C > 99

How do I get the "_time" of the first occurrence and the "_time" of the second occurrence? Also, does "span=5m" consider, say, 45 results in 09:08:00-09:09:59 and 52 results in 09:10:00-09:12:59?
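For the first part, a sketch: adding min and max of _time to the existing stats carries each group's earliest and latest occurrence through. On the second part: span=5m produces fixed, aligned buckets (09:05:00-09:09:59, 09:10:00-09:14:59, ...), not sliding windows, so those 45 and 52 results would be split across whichever fixed buckets they fall into:

```
... | bin _time span=5m
| stats min(_time) as first_seen max(_time) as last_seen dc(dest_ip) as d_C by src_ip dest_port
| where d_C > 99
| convert ctime(first_seen) ctime(last_seen)
```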
Created a custom streaming command that concatenates an event's fields and field values into one field (since the events we're dealing with have an unpredictable list of fields, I couldn't figure out a way to do it in SPL). When run in a standalone Splunk Enterprise instance, it works fine. However, when run in a clustered environment, it results in an error (one message per indexer node):

[<indexer hostname>] Streamed search execute failed because: Error in 'condensefields' command: External search command exited unexpectedly with non-zero error code 1.

I have the app that contains the custom command on both the search heads and the indexers.

Setup:
Oracle Linux Server 7.8
Splunk Enterprise 7.2.6

Search Example:

index=_audit | condensefields _time, user, action, info, _raw | table _time, user, action, info, details

App (was not able to upload a compressed folder):

<app>/bin/condensefields.py:

#!/usr/bin/env python
import sys
import os
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "lib"))
from splunklib.searchcommands import \
    dispatch, StreamingCommand, Configuration, Option, validators

@Configuration()
class CondenseFields(StreamingCommand):
    """ Condense fields of an event into one field.

    ##Syntax

    | condensefields <fields>

    ##Description

    Condenses all of the fields, except ignored fields, from the event into one field in a key-value format.
    """
    def stream(self, events):
        for event in events:
            fields_to_condense = filter(lambda key: key not in self.fieldnames, event.keys())
            condensed_str = ''
            is_first = True
            for key in fields_to_condense:
                value = event[key]
                if not value or len(value) == 0:
                    continue
                if not is_first:
                    condensed_str += '|'
                else:
                    is_first = False
                if isinstance(value, list):
                    value = '[\'' + '\', \''.join(value) + '\']'
                condensed_str += key + '=' + value
            event['details'] = condensed_str
            yield event

dispatch(CondenseFields, sys.argv, sys.stdin, sys.stdout, __name__)

<app>/default/app.conf:

[install]
is_configured = false
build = 1

[ui]
is_visible = false
label = commands

[launcher]
author = Some Rando
description = Provides custom commands.
version = 1.0.0

<app>/default/commands.conf:

# [commands.conf]($SPLUNK_HOME/etc/system/README/commands.conf.spec)
[condensefields]
chunked = true

<app>/default/searchbnf.conf:

# [searchbnf.conf](http://docs.splunk.com/Documentation/Splunk/latest/Admin/Searchbnfconf)
[condensefields-command]
syntax = condensefields
shortdesc = Condense fields of an event into one field.
description = Condenses all of the fields, except ignored fields, from the event into one field in a key-value format.
content1 = A typical use-case where all of the fields, except for a defined subset, are condensed into a field with the specified format.
example1 = | condensefields _time, event_name, application
category = streaming
tags = format

<app>/lib: splunklib, if you need it; see https://github.com/splunk/splunk-sdk-python/tree/master/splunklib and https://dev.splunk.com/enterprise/docs/devtools/customsearchcommands/createcustomsearchcmd#Install-the-Splunk-Enterprise-SDK-for-Python-in-your-app

<app>/metadata/default.meta:

[]
access = read : [ * ], write : [ admin, power ]
export = system
Looking for some advice on combining searches from multiple sourcetypes into a single report for my auditing team. They have requested a report showing hostnames and, for each host, the current AV definitions, the last time the security log was cleared or archived, and then content from a couple of text files produced by some scheduled tasks on the systems. I currently have all of this info in Splunk, but will need to create a single report to show it all. As an example, how could I combine the three queries I've put together so far into the same report?

index=windows source="WinEventLog:Security" EventCode="1105" | rename Date as LastSecLogArchive | stats latest(LastSecLogArchive) by host

index=windows sourcetype="Symantec:VirusDefs" | stats latest(CurrDefs) by host

index=windows source="WinEventLog:Security" EventCode="1102" | rename Date as LastSecLogClear | stats latest(LastSecLogClear) by host
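A sketch of one way to fold these into a single search, reusing the field names from the three queries: combine the sources with OR and use conditional evals inside one stats by host (whether EventCode compares as a number or a string depends on how it is extracted):

```
index=windows ((source="WinEventLog:Security" (EventCode=1105 OR EventCode=1102)) OR sourcetype="Symantec:VirusDefs")
| stats latest(eval(if(EventCode==1105, Date, null()))) as LastSecLogArchive
        latest(eval(if(EventCode==1102, Date, null()))) as LastSecLogClear
        latest(CurrDefs) as CurrentAVDefs
        by host
```

The text-file content could be folded in the same way with an additional OR clause and another conditional eval.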
Hello Splunk, I am trying to set an alert when one result is much higher than the other rows. A simplified search of:

index="my index" user=* | top limit=200

returns rows of users which usually have a high count of 150-200, but during some events this will obviously differ, so I can't just use a static value. Is there a way I can do something like:

when (count.row1 > 5 x count.row2) {trigger alert}

Thank you in advance.
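A sketch of one way to express that condition: keep only the top two rows, put their counts side by side, and alert when the search returns a result (the ratio threshold of 5 is illustrative):

```
index="my index" user=*
| top limit=2 user
| stats list(count) as counts
| eval ratio=mvindex(counts,0)/mvindex(counts,1)
| where ratio > 5
```

top sorts descending, so mvindex(counts,0) is the highest count; the alert trigger condition would then be "number of results > 0".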
Hi, I am implementing a dropdown through dynamic options, and the search for the dropdown has two fields: index and index_label. I was hoping to use index to pass into the search and index_label as the label for the title. However, when I set the field for the label to index_label, it doesn't work. I also tried adding:

<condition>
  <set token="panellabel">$label$</set>
</condition>

and it doesn't work. Can someone let me know if I am doing something wrong?

<row>
  <panel>
    <title>Historical Overview By Day</title>
    <input type="dropdown" token="weekday" searchWhenChanged="true">
      <label>Select a Weekday:</label>
      <choice value="Monday">Monday</choice>
      <choice value="Tuesday">Tuesday</choice>
      <choice value="Wednesday">Wednesday</choice>
      <choice value="Thursday">Thursday</choice>
      <choice value="Friday">Friday</choice>
      <fieldForLabel>Weekday</fieldForLabel>
      <fieldForValue>Weekday</fieldForValue>
      <default>Monday</default>
    </input>
    <input type="dropdown" token="index" searchWhenChanged="true">
      <label>Select a Index-</label>
      <fieldForLabel>index_label</fieldForLabel>
      <fieldForValue>index</fieldForValue>
      <search>
        <query>| tstats count WHERE index=* BY index | eval index_label = upper(split(index,"_")) | rex mode=sed field=index_label "s/_/ /g"</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <change>
        <condition>
          <set token="panellabel">$label$</set>
        </condition>
      </change>
    </input>
    <chart>
      <title>Log Volume for $weekday$ - $panellabel$</title>
      <search>
        <query>| tstats count WHERE index=$index$ BY index _time span=1d  | trendline sma2(count) AS trend</query>
        <earliest>-90d@d</earliest>
        <latest>now</latest>
        <sampleRatio>1</sampleRatio>
      </search>
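One detail worth checking, sketched below: upper(split(index,"_")) returns a multivalue field, and a multivalue fieldForLabel generally won't render as a dropdown label. Building the label without split keeps it single-valued:

```
| tstats count WHERE index=* BY index
| eval index_label=upper(index)
| rex mode=sed field=index_label "s/_/ /g"
```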
Hi Splunkers, with the Splunk Active Directory logs, Splunk parses the event as though there's no difference between the actor and the target of a critical event like a new account being created. How can I give a different field name to the targeted account? Is there any way other than regex? Attaching a screenshot. TIA
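Not from the thread, but one regex-free option to sketch: a FIELDALIAS in props.conf can expose the target account under its own field name. The source field names below are hypothetical; check which fields your Windows TA actually extracts for the event:

```
# props.conf -- field names are hypothetical examples
[WinEventLog:Security]
FIELDALIAS-target_account = Target_Account_Name AS target_user
```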
According to some of the posts, I copied and pasted the code into the dashboard XML just before the closing of the dashboard, but the dashboard scroll bar does not show. Any help would be appreciated. Thank you.
Good afternoon. We currently have a UF configured with 50 inputs, of which 49 work well and only 1 does not index events, and it also does not report any errors. Reviewing the internal logs, I validated that splunkd does not show any evidence that would help explain why this input is not working. But what I do see with the following query:

index=_introspection component=PerProcess "event that does not index ..."

is current information; the script runs every 1 minute and gives me the following:

component: PerProcess
data: {
  args: python /path/file.py XXXXXXXX
  elapsed: 111505.2300
  fd_used: 5
  mem_used: 8,555
  normalized_pct_cpu: 0.00
  page_faults: 0
  pct_cpu: 0.00
  pct_memory: 0.01
  pid: 22673
  ppid: 7990
  process: python2.7
  process_type: other
  read_mb: 0.000
  status: W
  t_count: 1
  written_mb: 0.000
}
datetime: 10-05-2020 15:36:26.387 -0300
log_level: INFO

Reviewing the many inputs that do index correctly, they do not have these metrics. I don't understand why Splunk reports this information differently for the input that stopped indexing, while the inputs that are working correctly do not have this information. Any help is appreciated.