All Topics

Hi All, I have a question about metric rollups. https://docs.splunk.com/Documentation/Splunk/latest/Metrics/MRollupsOverview As stated in the documentation, metric rollups are for saving storage. Say there are a source metric index (A) and a target metric index (B). How does that save storage? The original data indexed in index (A) is rolled up into index (B), which means using more storage overall. My question is this: do I have to delete the source index (A) periodically? How should I handle the source index in a metric rollup environment?
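One common pattern (a sketch, not taken from the post): keep a short retention period on the source index so raw metrics age out, while the rollup index retains the summarized data much longer. The index names and retention values here are hypothetical:

```ini
# indexes.conf -- hypothetical index names and retention values
[metrics_raw]
# source index (A): keep raw metrics ~30 days
datatype = metric
frozenTimePeriodInSecs = 2592000

[metrics_rollup]
# target index (B): keep rolled-up metrics ~2 years
datatype = metric
frozenTimePeriodInSecs = 63072000
```

With this split there is no need to delete the source index by hand; Splunk freezes its buckets on schedule.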
tl;dr - questions I am looking to get answers for: 1. Is there a better way to do this? 2. Is it possible to dynamically declare and store the _key values that I want to update? --------------------------------------------------- I have a KV Store that holds statefulness data from a program that runs over 300 different tests. The KV Store presently holds string fields with data that I filled from a previous query, with the last_status field set to the string "stub" while I'm working on getting it functional. The field names in the kvstore are: last_time, last_status, test_name, rule_name, test_target, and of course the hidden _key field. I'm trying to update last_time and last_status when the conditions are suitable. The suitable condition is when the test, rule, and target fields match a subsearch with the same values in their respective fields. I'm having issues with getting the KV Store to update. I've seen the following approaches suggested already in the questions and kb-type articles I could find: | inputlookup csvcoll_lookup | search _key=544948df3ec32d7a4c1d9755 | eval CustName="Marge Simpson" | eval CustCity="Springfield" | outputlookup csvcoll_lookup append=True Which led me to finding a suggestion of: | inputlookup csvcoll_lookup | where _key IN("544948df3ec32d7a4c1d9755","544948df3ec32d7a4c1d9756","544948df3ec32d7a4c1d9757") | eval CustName="Marge Simpson" | eval CustCity="Springfield" | outputlookup csvcoll_lookup append=True I have something similar to the following query (some vars and objects have had their names changed to some degree, but it still represents the logic I'm trying to work with):
| inputlookup kvstoreA | eval last_time=strftime(last_time,"%Y-%m-%dT%H:%M:%S"), key=_key, joinField=test_name+rule_name+test_target | join type=inner [ search index=a sourcetype=b NOT variable="ignore" | dedup testName testTargetDesc ruleName | eval Event_last_time=strftime(last_time,"%Y-%m-%dT%H:%M:%S"), Event_last_status=case(eventType=="A","healthy",eventType=="B","unhealthy",TRUE(),"undefined"), Event_test_name='alert.testName', Event_rule_name='alert.ruleName', Event_test_target='alert.testTargetsDescription{}', joinField=Event_test_name+Event_rule_name+Event_test_target] | where Event_last_time!=last_time | eval last_status=Event_last_status, last_time=Event_last_time | fields last_time last_status test_name rule_name test_target view_key | outputlookup kvstoreA key_field=view_key append=True       From what I have read and tested so far I am sure that I don't know how to extract the _key values in a dynamic way that can be applied to update specific entries in the table. 1.  Is there a better way to do this? 2. Is it possible to dynamically declare and store the _key values that I want to update?
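One possible fix (a sketch under assumptions, not a verified answer): copy the hidden _key into a regular field and point outputlookup's key_field at that field, so matching records are updated in place instead of appended as new rows. Field and index names follow the post; the "|" delimiter in joinField is an added safeguard against accidental concatenation collisions:

```spl
| inputlookup kvstoreA
| eval key=_key, joinField=test_name."|".rule_name."|".test_target
| join type=inner joinField
    [ search index=a sourcetype=b NOT variable="ignore"
      | dedup testName ruleName testTargetDesc
      | eval Event_last_time='last_time',
             Event_last_status=case(eventType=="A","healthy",eventType=="B","unhealthy",true(),"undefined"),
             joinField='alert.testName'."|".'alert.ruleName'."|".'alert.testTargetsDescription{}' ]
| where Event_last_time!=last_time
| eval last_status=Event_last_status, last_time=Event_last_time
| fields key, last_time, last_status, test_name, rule_name, test_target
| outputlookup kvstoreA key_field=key append=True
```

With key_field pointing at real _key values, outputlookup overwrites those records, which also answers the "dynamically declare the _key values" part: _key is available to the pipeline as soon as it is copied into a regular field.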
Hi fellow Splunkers, As the parameter maxTotalDataSizeMB is only available for non-SmartStore indexes, what parameter in a SmartStore index replaces it? I want to start evicting buckets when a certain size is reached. I could only find hotlist_recency_secs, which evicts data by age, not by size (per index). Thanks! Reference: https://docs.splunk.com/Documentation/Splunk/8.1.0/Admin/Indexesconf
Been Googling (as a novice user of Splunk) for a while in search of an answer but haven't had the luck, so I'll try posting a Q. I'm using annotations in-line with timecharts in various searches. The UI source looks like this: <search id="base_annotation">     <query>index=my_index | eval My_value = 'host' + " " + 'another_field' | eval annotation_label = Host_value</query>     <earliest>$timefield.earliest$</earliest>     <latest>$timefield.latest$</latest>   </search> I then add the resulting base_annotation into the relevant timecharts of my dashboard using: <search type="annotation" base="base_annotation">           <query></query>         </search> ...beneath the original query. This works well, but I'd like to make the annotations optional for the users. Hence I'd imagine there is a way to let a checkbox govern the presence of annotations in the timecharts, i.e. if the checkbox is checked then keep the code as is; if unchecked then run the searches without the search type="annotation". Any hints anyone? Kind regards
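One possible approach (a sketch under assumptions, not tested against this dashboard): have a checkbox set a token that the annotation base search uses as a filter, so that when unchecked the annotation search matches no rows. The token names annot_chk/annot_gate and the filter values are hypothetical:

```xml
<input type="checkbox" token="annot_chk" searchWhenChanged="true">
  <label>Annotations</label>
  <choice value="show">Show annotations</choice>
  <change>
    <!-- checked: let all annotation rows through -->
    <condition value="show">
      <set token="annot_gate">annotation_label=*</set>
    </condition>
    <!-- unchecked: filter that matches nothing -->
    <condition>
      <set token="annot_gate">annotation_label="__no_such_label__"</set>
    </condition>
  </change>
</input>

<search id="base_annotation">
  <query>index=my_index
    | eval annotation_label = Host_value
    | search $annot_gate$</query>
  <earliest>$timefield.earliest$</earliest>
  <latest>$timefield.latest$</latest>
</search>
```

The timechart panels keep their unconditional <search type="annotation" base="base_annotation"> blocks; an empty annotation result set simply draws nothing.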
Hey guys,   I have IIS logs that are logging multiple IPs to the X-Forwarded-For field as below:    114.119.136.78,+162.158.119.25     I would like to apply a regex to the X-Forwarded-For field at index time to ensure the field only contains the first IP, like:   114.119.136.78     In other words, anything after the first comma should be cut out of the field.   So far I have tried to achieve this with the following props/transforms:   #props [iis] TRANSFORMS-rm-extra-ips = rm_extra_ips #transforms [rm_extra_ips] SOURCE_KEY = field:X_Forwarded_For REGEX = ^(.+?),     How do I do this? Thanks!
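One possibility (a sketch, not verified against this data flow): index-time TRANSFORMS rewrites also need DEST_KEY and FORMAT settings, so a simpler route is often SEDCMD in props.conf, which rewrites _raw before indexing. The regex here is an assumption about the log layout, and an index-time change only affects data indexed after it is deployed:

```ini
# props.conf -- keep only the first IP in an X-Forwarded-For list
# (regex is a sketch; adjust to the exact IIS field layout so it
# cannot match other comma-separated values in the event)
[iis]
SEDCMD-first_xff_only = s/(\d{1,3}(?:\.\d{1,3}){3}),\+?\S+/\1/
```

This would turn "114.119.136.78,+162.158.119.25" into "114.119.136.78" at parse time on the indexer or heavy forwarder.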
Hi Everyone! Does the "snowincident" command always create an incident upon being called? I want to use this in an alert, so that we use the "custom_fields" setting, however, we only want the incident to be created if the alert was triggered. Thanks!
Hi, I've been in Splunk since August after 20 years working in SQL; a lot of new things, and I need help. I have a daily cron job that executes and writes to a csv. Something like this is an example of the data it generates: index=ans sourcetype="my_source" earliest=1 | stats max(DateLastUpdate) as DateLastUpdate by Ticket | eval SavedDay=strftime(relative_time(now(),"-1d@d"),"%Y%m%d") | table SavedDay Ticket DateLastUpdate When it was executed on 20201019 (YYYYMMDD format) it wrote:
SavedDay Ticket DateLastUpdate
20201019 A1002 20201019
20201019 AD00X 20200920
... ... ...
20201019 ADVCX 20200520
20201019 LDS0X 20200415
20201019 SD10X 20200118
On 20201022, it wrote:
SavedDay Ticket DateLastUpdate
20201022 PPU0X 20201022
20201022 SD10X 20201021
20201022 XX438 20201021
20201022 44$5% 20201020
20201022 A1002 20201019
20201022 AD00X 20200920
... ... ...
20201022 ADVCX 20200520
20201022 LDS0X 20200415
The job failed for 20201021 and 20201022, and now I would like to re-execute it to generate the changes, but: index=ans sourcetype="my_source" earliest=1 | eval SavedDay=strftime(relative_time(now(),"-1d@d"),"%Y%m%d") | dedup SavedDay | table SavedDay | join type=left SavedDay [| inputlookup "my_lookup.csv" | dedup SavedDay | eval ExistsDay = 1] | where isnull(ExistsDay) | table SavedDay | join type=inner SavedDay [search index=ans sourcetype="my_source" earliest=1 | eval SavedDay=strftime(relative_time(now(),"-1d@d"),"%Y%m%d") | ...] It is only returning SavedDay for values 20201020 and 20201021 and not the rest of the historical data changes. Another solution doing a "cartesian" works, but I don't like it because it is inefficient in time and cost (around 8 minutes and 1 GB to regenerate 2 days).
index=ans sourcetype="my_source" earliest=1 | eval SavedDay=strftime(relative_time(now(),"-1d@d"),"%Y%m%d") | dedup SavedDay | table SavedDay | join type=left max=0 [| inputlookup "my_lookup.csv" | dedup SavedDay | eval ExistsDay = 1] | where isnull(ExistsDay) | table SavedDay | join type=inner SavedDay [search index=ans sourcetype="my_source" earliest=1 | eval SavedDay=strftime(relative_time(now(),"-1d@d"),"%Y%m%d") | ...] Is there a way to do something similar to a loop execution (or sequential execution), "Date_missed >= Date_of_sourcetype", with a subsearch or by executing Python? Could I execute searches or reports passing parameter values? Thanks!! Javier
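For the loop-like part, the map command is one option: it re-runs a parameterized search once per row of its input. A rough sketch (field and lookup names follow the post; maxsearches, deriving SavedDay from _time, and the outputlookup target are assumptions):

```spl
index=ans sourcetype="my_source" earliest=1
| eval SavedDay=strftime(relative_time(_time,"@d"),"%Y%m%d")
| dedup SavedDay
| table SavedDay
| search NOT [| inputlookup "my_lookup.csv" | dedup SavedDay | fields SavedDay]
| map maxsearches=30 search="search index=ans sourcetype=\"my_source\" earliest=1
    | stats max(DateLastUpdate) as DateLastUpdate by Ticket
    | eval SavedDay=\"$SavedDay$\"
    | table SavedDay Ticket DateLastUpdate
    | outputlookup append=true my_lookup.csv"
```

Each map iteration substitutes one missing $SavedDay$ value; bounding the inner search to data as of that day (e.g. a latest= computed from SavedDay) would still be needed to reproduce each historical snapshot exactly.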
Hi, I am trying to run a Splunk query and I am able to generate the required fields. However, I am facing difficulties replicating the same using stats in the statistics table. For example, under selected fields I was able to retrieve the required fields, let's say User, host, IP, LP and session. When I use the below command: stats values(User) AS user, values(host) AS host, values(LP) AS LP, values(IP) AS IP BY session Session is common and appears for all. But I am able to see only host, session, and LP values under statistics; user and IP values are not being displayed, although they are available under the selected fields (when you look into the events tab). Any help would be appreciated!!
Hey, I am trying to use a tooltip in my dashboard; can anyone help me out?
I have a NEAP (notable event aggregation policy) that points back to the correlation search. It breaks on "normal" severity, and the action is to close on break. But the episode review lists it as "new", not "closed", while the "activity" tab indicates a bulk action by the default splunk user and status=closed. Is this a rules engine issue with not being able to reach an indexer cluster, or a bug?
I am monitoring a directory with 101 csv files with the same format, but only 49 of them are being indexed. When I start up Splunk I get a warn message from TailReader - Could not send data to output queue (parsingqueue), retrying.... Sample of the csv files:
Timestamp,Value (%)
21-Sep-20 6:38:00 AM BRT,0.0
21-Sep-20 6:39:00 AM BRT,0.0
21-Sep-20 6:40:00 AM BRT,0.0
21-Sep-20 6:41:00 AM BRT,0.0
21-Sep-20 6:42:00 AM BRT,0.0
21-Sep-20 6:43:00 AM BRT,0.0
21-Sep-20 6:44:00 AM BRT,0.0
21-Sep-20 6:45:00 AM BRT,0.0
21-Sep-20 6:46:00 AM BRT,0.0
21-Sep-20 6:47:01 AM BRT,0.0
[porto_file_csv]
BREAK_ONLY_BEFORE_DATE =
DATETIME_CONFIG =
HEADER_FIELD_LINE_NUMBER = 13
INDEXED_EXTRACTIONS = csv
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Structured
description = Comma-separated value format. Set header and other settings in "Delimited Settings"
disabled = false
pulldown_type = true
EXTRACT-Chiller,Variavel = /opt/POC_Chiller/POC_(?P<Chiller>CH\d)_(?P<Variavel>\w+) in source
REPORT-poc_porto = REPORT-poc_porto
[monitor:///opt/POC_Chiller]
disabled = false
host = test4
index = test_porto
sourcetype = porto_file_csv
Note: I have also tried to monitor the files with the default csv sourcetype and again it didn't work. Any help would be very appreciated! Marcos Pereira
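One thing worth double-checking (an assumption based on the sample shown, where "Timestamp,Value (%)" is on line 1, not line 13): if HEADER_FIELD_LINE_NUMBER and the timestamp settings don't match the files, parsing can stall and back up the parsing queue. A sketch:

```ini
# props.conf -- sketch assuming the header is the first line of each file
[porto_file_csv]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = Timestamp
# matches "21-Sep-20 6:38:00 AM BRT"; %Z for the trailing timezone token
TIME_FORMAT = %d-%b-%y %I:%M:%S %p %Z
SHOULD_LINEMERGE = false
```

With INDEXED_EXTRACTIONS, these settings need to be deployed where the files are read (the forwarder), not only on the indexers.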
Here is my multiselect code for the input: <input type="multiselect" token="tok_ABCName" searchWhenChanged="true"> <label>ABC Name</label> <default>All</default> <prefix>(</prefix> <valuePrefix>... See more...
Here is my multiselect code for the input: <input type="multiselect" token="tok_ABCName" searchWhenChanged="true"> <label>ABC Name</label> <default>All</default> <prefix>(</prefix> <valuePrefix>ABCName="</valuePrefix> <valueSuffix>"</valueSuffix> <delimiter>OR</delimiter> <suffix>)</suffix> <choice value="*">All</choice> <fieldForLabel>ABC Name</fieldForLabel> <fieldForValue>ABCName</fieldForValue>   Here is my main query:   | inputlookup Names_Software_R.csv | rename  "ABC Name" as ABCName  | search $tok_ABCName$  |  values(ABCName) as "ABC Name"  by ServerName    
Hi everyone, I am trying to accelerate one saved search: "index=main | stats count by type,severity". I want an accelerated summary of "All time" data the first time only, and later, every 10 minutes, I want to collect an incremental summary of the last 10 minutes of data on top of the "All time" accelerated summary. Is there any way to achieve this? Note: I've come across the auto_summarize.dispatch.earliest_time parameter, which seems directly bound to "Summary range". If I set earliest_time to -10min (because the auto_summarize.cron_schedule parameter is set to */10 * * * *), then my summary range also changes from "All time" to "1 Day".
Hi everyone, I have 8M+ records in a kvstore lookup. I am reading data from the lookup to populate 12 dashboard panels using the "|inputlookup <lookup_name>" command. I checked that the |inputlookup <lookup_name> command itself already takes around 1 minute 30 seconds on a 12-core CPU configuration. So my concern is: is there any other way to fetch all the lookup data faster? In my instance the data is increasing day by day, so with growing data in the lookup, the |inputlookup <lookup_name> command will take more time to load. Let's say that after some days I have 25 million records in the lookup; it may then take 10-15 minutes to return the result. So to maintain performance, should I increase the cores of my machine as well when I have large amounts of data to retrieve? Or is there any other way to retrieve the data faster, apart from the kvstore lookup?
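One lever worth noting (a sketch; the collection and field names are hypothetical): inputlookup accepts a where clause, so pushing filters and trimming fields at the source avoids streaming all 8M+ records into every panel. KV Store collections can also declare accelerated_fields in collections.conf for frequently filtered keys:

```spl
| inputlookup my_kvstore where status="open"
| fields _key, status, last_time
```

If all 12 panels really need the full data set, another common pattern is to run the expensive read once in a base search and let the panels post-process it, rather than issuing 12 separate inputlookup calls.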
I have a set of hosts that are installed with different versions of software but logging to the same index, and I need to create a report based on specific versions. Here is an example set of data that I have to deal with.
host msgCode msgText qty
A 101 1.0.0
A 103 Nut 48
A 103 Widget 289
B 101 1.1.0
B 103 Nut 69
B 103 Widget 367
C 101 1.0.0
C 103 Nut 93
C 103 Widget 433
D 101 1.0.3
D 101 Nut 74
D 103 192
E 101 1.0.0
E 103 Nut 88
E 103 Widget 225
Given this set of data, I would like to get the sum of qty for the hosts whose msgCode=101 events have msgText=1.0.0. The caveat is that msgCode 101 is logged sporadically every couple of days, and again when the host gets updated with the newer version, so I only want to use the most recently logged value, as is the case with host B. As such, the result should be something like the following, which only sums qty values from hosts A, C, and E:
Nut 229
Widget 947
I can do a sum of all qty, but I am not sure how I can filter out host B from it. I was doing something like the following, but then I just get the result from the subsearch.
index=myIndex | eventstats sum(qty) by msgCode as partSum | table msgCode, partSum | join host type=inner [ search index=myIndex msgCode=101 msgText=1.0.0 earliest=-3d | stats count by host ]
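A sketch of one approach (assuming latest() over the msgCode=101 events gives each host's current version, as the host B caveat requires): use a subsearch that returns only the hosts whose latest version is 1.0.0, which implicitly filters the outer search, then sum per part:

```spl
index=myIndex msgCode=103
    [ search index=myIndex msgCode=101
      | stats latest(msgText) as version by host
      | where version="1.0.0"
      | fields host ]
| stats sum(qty) as total by msgText
```

Because the subsearch emits only a host column, it expands to host=A OR host=C OR host=E, so host B (latest version 1.1.0) drops out before the sum.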
Hello Splunkers I have the following field: Message The Message field has the following values: 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 I need to create two additional fields: Success and Failure Values for Success are: 2,4,6,10, & 12 Values for Failure are: 1,3,5,9,13 All unused values are ignored and not included in either the Success or Failure field. I have tried the following: | eval status=if(in(Message,"2","4","6","10","12"),"Success","Failure") This does two things: the eval statement groups all the values in Success correctly. However, it groups the values in Failure incorrectly (by including all values other than 2,4,6,10, & 12). Additionally, it groups these into a single field with two values, Success & Failure. However, I would prefer to create two new fields with just the selected values for Success & Failure. UPDATE - I think I figured this out: | eval success=case(Message="2","2",Message="4","4",Message="6","6",Message="10","10",Message="12","12") | eval failure=case(Message="1","1",Message="3","3",Message="5","5",Message="9","9",Message="13","13") | stats values(success) values(failure) to validate. Is there a better way to do it? Thank you!
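A slightly tighter variant (a sketch; same logic as the case() update in the post, using in() to keep only the listed values and leave everything else null):

```spl
| eval Success=if(in(Message,"2","4","6","10","12"), Message, null())
| eval Failure=if(in(Message,"1","3","5","9","13"), Message, null())
| stats values(Success) as Success values(Failure) as Failure
```

Unlisted Message values match neither in() list, so they end up in neither field, which avoids enumerating every value twice in a case() expression.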
I want to get a list of all configs in my DBConnect instance using a REST API. Are there any docs for using the REST API for v2?
I have a field "users" that spits out the result "*****". I want to replace the ***** with the IP address it's actually related to, i.e. replace ***** with (12.13.12.13), so that whenever ***** pops up in my table it's replaced with the IP above. But every time I search I get a failure stating I can't use consecutive *****: |replace "*****" with (12.13.12.13) How can I literally reference ***** as a string, instead of SPL thinking I am telling it to search anything because of the *****?
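One workaround (a sketch): the replace command treats * as a wildcard, but eval comparisons are literal, so the substitution can be done with eval instead:

```spl
| eval users=if(users="*****", "12.13.12.13", users)
```

An eval replace() with a regex, e.g. replace(users, "^\*+$", "12.13.12.13"), would also catch runs of asterisks of any length.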
Hi Guys, I have installed AppD in my DEV environment from SVN, and when I try to open the Input and Configuration page it shows "loading"; I waited more than an hour but it stayed the same. Below is the error I get in the DEV environment. 10-27-2020 10:16:19.674 -0500 ERROR AdminManagerExternal - Unexpected error "<class 'splunktaucclib.rest_handler.error.RestError'>" from python handler: "REST Error [500]: Internal Server Error -- Traceback (most recent call last):\n File "/opt/splunk/etc/apps/Splunk_TA_AppDynamics/bin/splunk_ta_appdynamics/aob_py2/splunktaucclib/rest_handler/handler.py", line 117, in wrapper\n for name, data, acl in meth(self, *args, **kwargs):\n File "/opt/splunk/etc/apps/Splunk_TA_AppDynamics/bin/splunk_ta_appdynamics/aob_py2/splunktaucclib/rest_handler/handler.py", line 303, in _format_response\n masked = self.rest_credentials.decrypt_for_get(name, data)\n File "/opt/splunk/etc/apps/Splunk_TA_AppDynamics/bin/splunk_ta_appdynamics/aob_py2/splunktaucclib/rest_handler/credentials.py", line 188, in decrypt_for_get\n clear_password = self._get(name)\n File "/opt/splunk/etc/apps/Splunk_TA_AppDynamics/bin/splunk_ta_appdynamics/aob_py2/splunktaucclib/rest_handler/credentials.py", line 393, in _get\n string = mgr.get_password(user=context.username())\n File "/opt/splunk/etc/apps/Splunk_TA_AppDynamics/bin/splunk_ta_appdynamics/aob_py2/solnlib/utils.py", line 159, in wrapper\n return func(*args, **kwargs)\n File "/opt/splunk/etc/apps/Splunk_TA_AppDynamics/bin/splunk_ta_appdynamics/aob_py2/solnlib/credentials.py", line 118, in get_password\n all_passwords = self._get_all_passwords()\n File "/opt/splunk/etc/apps/Splunk_TA_AppDynamics/bin/splunk_ta_appdynamics/aob_py2/solnlib/utils.py", line 159, in wrapper\n return func(*args, **kwargs)\n File "/opt/splunk/etc/apps/Splunk_TA_AppDynamics/bin/splunk_ta_appdynamics/aob_py2/solnlib/credentials.py", line 272, in _get_all_passwords\n clear_password += field_clear[index]\nTypeError: cannot concatenate 'str' and 
'NoneType' objects\n". See splunkd.log for more details. Also see the attached screenshot. If I install the app directly I don't see any errors. I'm not sure what the issue is; I have tried multiple ways but no luck.
I created a workflow action to perform a reverse IP lookup using the link method GET.   I would like to perform this action on multiple events.   Is there a way to call this action on specific events within the search (not using the GUI)? Is there a way to call this action on multiple events (with or without the GUI?)   So many thanks!