Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hello All, I have to load balance HTTPS requests over an indexer cluster. I need to know the best approach to load balance the data. Is NGINX the only solution?
I have a data field categ_hierarchy in the format of a series of up to 8 category IDs joined by ">>". For example: categ_id1>>categ_id2>>categ_id3>>...>>categ_id8

Category ID 1 is required but category IDs 2 through 8 are optional. Category IDs are strings without the '>' character in them. They might have whitespace that I want to trim from the beginning or end. Here are 2 examples:

simulink/index>>simulink/simulink-environment>>simulink/programmatic-modeling
support/parallel-221>>parallel-computing/index

I want to look up each category ID, get back the associated category name for each one, and then reconstruct the category names into a similarly formatted path: categ_name1>>categ_name2>>categ_name3>>...>>categ_name8

Here are the two examples constructed from the lookup results:

Simulink >> Simulink Environment Fundamentals >> Programmatic Model Editing
Parallel Computing >> Parallel Computing Toolbox

Is there any way to simplify my SPL from this?

| rex field=category_hierarchy "(?<categ_id1>[^>]+)(>>(?<categ_id2>[^>]+))?(>>(?<categ_id3>[^>]+))?(>>(?<categ_id4>[^>]+))?(>>(?<categ_id5>[^>]+))?(>>(?<categ_id6>[^>]+))?(>>(?<categ_id7>[^>]+))?(>>(?<categ_id8>[^>]+))?"
| eval categ_id1=trim(categ_id1), categ_id2=trim(categ_id2), categ_id3=trim(categ_id3), categ_id4=trim(categ_id4), categ_id5=trim(categ_id5), categ_id6=trim(categ_id6), categ_id7=trim(categ_id7), categ_id8=trim(categ_id8)
| lookup category_lookup category_id AS categ_id1 OUTPUTNEW category_name AS categ_name1
| lookup category_lookup category_id AS categ_id2 OUTPUTNEW category_name AS categ_name2
| lookup category_lookup category_id AS categ_id3 OUTPUTNEW category_name AS categ_name3
| lookup category_lookup category_id AS categ_id4 OUTPUTNEW category_name AS categ_name4
| lookup category_lookup category_id AS categ_id5 OUTPUTNEW category_name AS categ_name5
| lookup category_lookup category_id AS categ_id6 OUTPUTNEW category_name AS categ_name6
| lookup category_lookup category_id AS categ_id7 OUTPUTNEW category_name AS categ_name7
| lookup category_lookup category_id AS categ_id8 OUTPUTNEW category_name AS categ_name8
| eval category_name_hierarchy=categ_name1
| eval category_name_hierarchy=if(isnull(categ_name2), category_name_hierarchy, category_name_hierarchy." >> ".categ_name2)
| eval category_name_hierarchy=if(isnull(categ_name3), category_name_hierarchy, category_name_hierarchy." >> ".categ_name3)
| eval category_name_hierarchy=if(isnull(categ_name4), category_name_hierarchy, category_name_hierarchy." >> ".categ_name4)
| eval category_name_hierarchy=if(isnull(categ_name5), category_name_hierarchy, category_name_hierarchy." >> ".categ_name5)
| eval category_name_hierarchy=if(isnull(categ_name6), category_name_hierarchy, category_name_hierarchy." >> ".categ_name6)
| eval category_name_hierarchy=if(isnull(categ_name7), category_name_hierarchy, category_name_hierarchy." >> ".categ_name7)
| eval category_name_hierarchy=if(isnull(categ_name8), category_name_hierarchy, category_name_hierarchy." >> ".categ_name8)
| table category_hierarchy, category_name_hierarchy

I know I could split the category_hierarchy field by the ">>" delimiter, but I don't know how to look up each of the category IDs in the resulting multivalue field. Any help would be appreciated!! Thanks, Rena
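One possible simplification along the split-and-lookup line the question mentions, assuming each event's category_hierarchy value is unique enough to group back on (a minimal sketch, not tested against your data):

| eval categ_id=split(category_hierarchy, ">>")
| mvexpand categ_id
| eval categ_id=trim(categ_id)
| lookup category_lookup category_id AS categ_id OUTPUTNEW category_name AS categ_name
| stats list(categ_name) as categ_name by category_hierarchy
| eval category_name_hierarchy=mvjoin(categ_name, " >> ")
| table category_hierarchy, category_name_hierarchy

mvexpand turns each ID into its own row so a single lookup covers all positions, and list() preserves arrival order when stitching the names back together. If several distinct events can share the same hierarchy string and must stay separate, add a streamstats count as row before the mvexpand and group by row instead.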
Hi All, I have two dashboards, dashboard 1 and dashboard 2. I have linked them. When clicking on a host from a line chart on dashboard 1, dashboard 2 opens up and filters on the selected host from dashboard 1. So far, dashboard 2 shows the correct host in the multiselect input. The issue is that I somehow override the panels in dashboard 2 with those of dashboard 1. It may be an issue with the token, since I have tok_host=$tok_host$ on both dashboards 1 and 2, but I am not sure if that is causing the issue. Any advice is welcome. Thanks in advance.

Dashboard 1 input for multiselect:

<input type="multiselect" token="tok_host" searchWhenChanged="true">
  <label>Select Server (Multi Select)</label>
  <search>
    <query>(index=test_*_idx) | fields + host | stats values(host) as host | mvexpand host | rename host as tok_host</query>
  </search>
  <prefix>host IN (</prefix>
  <valuePrefix></valuePrefix>
  <valueSuffix></valueSuffix>
  <delimiter> , </delimiter>
  <suffix>)</suffix>
  <choice value="*">All</choice>
  <fieldForLabel>tok_host</fieldForLabel>
  <fieldForValue>tok_host</fieldForValue>
</input>

Linking the dashboard code:

<drilldown>
  <link target="_blank">/app/XYZ_Sun_Sys/Dashboard2?form.tok_host=$click.name2$</link>
</drilldown>

Dashboard 2 multiselect input code:

<input type="multiselect" token="tok_host" searchWhenChanged="true">
  <label>Select Server</label>
  <search>
    <query>(index=text_idx) | fields + host | stats values(host) as host | mvexpand host | rename host as tok_host</query>
    <earliest>$time_range.earliest$</earliest>
    <latest>$time_range.latest$</latest>
  </search>
  <fieldForLabel>Select Host</fieldForLabel>
  <fieldForValue>tok_host</fieldForValue>
  <choice value="*">All</choice>
  <delimiter> ,</delimiter>
  <default>*</default>
</input>

Dashboard 2 code for the panel:

</fieldset>
<row>
  <panel depends="$tok_host$">
    <title>First Panel - $tok_host$</title>
    <single>
      <title>Space Avail</title>
      <search>
        <query>index=testing_idx host=$tok_host$ | timechart span=10min avg(Speed) as speed | eval change=_time</query>
We use the Splunk ServiceNow TA, both for collecting data from ServiceNow and for creating incidents via the Splunk alert action.

We have a use case on the collection side. Within inputs.conf there is an attribute available called filter_data. This allows you to filter the data you do or do not wish to collect from ServiceNow. The specific use case is that we do NOT want to collect events from the sys_audit table if sys_created_by=user.system. The basic stanza attributes in inputs.conf within the SNOW TA are:

[snow://sys_audit]
filter_data = sys_created_by!=user.system
table = sys_audit

This approach does not filter on sys_created_by; that is, we still see user.system as sys_created_by in our events. Is there anything I'm doing wrong? Thx.
I've got some queries I need to do periodically that use the exact same base search, one with the weekly uniques and one with the average daily uniques. I can do these separately:

(search) | stats dc(thing) as WeeklyCount

and

(search) | bucket _time span=day | stats dc(thing) as DailyCount by _time | stats avg(DailyCount)

I've tried variations on appendpipe, but can't get it to work. Example:

(search) | stats dc(thing) as WeeklyCount | appendpipe [ bucket _time span=day | stats dc(thing) as DailyCount by _time | stats avg(DailyCount)]

returns only WeeklyCount. If I switch the order and put WeeklyCount in the appendpipe, it gives me the correct daily average, but the weekly count reports as 0.
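One way to get both numbers from a single pass over the base search, assuming "thing" is the field whose uniques matter (a sketch, untested):

(search)
| bin _time span=1d
| stats dc(thing) as DailyCount, values(thing) as things by _time
| stats avg(DailyCount) as AvgDailyCount, dc(things) as WeeklyCount

The first stats keeps both the per-day distinct count and the per-day value list; the second stats averages the daily counts and takes a distinct count across all retained values, which is the weekly unique count. Caveat: for very high-cardinality fields the values() list can hit size limits, in which case two scheduled searches may be safer.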
I have a field with a filename pointing to a .tgz file. I need to check whether a particular file, for example XYZ, exists inside this .tgz file. How can I do this? Thanks in advance.
I have 2 types of search messages:

Problem #1
Problem #5

and the other one goes like this:

Solved problem_id successful: 1
Solved problem_id successful: 2
Solved problem_id successful: 3

I want to return Problems which have not been solved yet. So in the above case it should return 5 only. What I tried:

Search 1 ==>

index="production" "Problem #" earliest=-3h latest=-1h | rex field=message ".*Problem #(?<problem_id>.*):.*" | stats count by problem_id | table problem_id

Extracting all Problem IDs (works fine).

Search 2 ==>

search index="production" "Solved problem_id successful:" earliest=-3h | rex field=message ".*Solved problem_id successful: (?<problem_id>.*)" | stats count by problem_id | table problem_id

Extracting all problem IDs which have been solved (works fine).

Now to find problems which are not solved --> search1 | search NOT [search2]

index="production" "Problem #" earliest=-3h latest=-1h | rex field=message ".*Problem #(?<problem_id>.*):.*" | stats count by problem_id | table problem_id | search NOT [search index="production" "Solved problem_id successful: " earliest=-3h | rex field=message ".*Solved problem_id successful: (?<problem_id>.*)" | stats count by problem_id | table problem_id ]

The above query doesn't work and for some reason just returns the result from search 1. It seems the NOT is not working for some reason. Thanks in advance. PS: I have modified the queries a little on the fly to remove sensitive info.
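One way to sidestep the subsearch entirely is to pull both message types in a single search and keep only the IDs that never got a "solved" event, assuming the IDs are numeric (a sketch, untested):

index="production" ("Problem #" OR "Solved problem_id successful:") earliest=-3h
| rex field=message "Problem #(?<raised_id>\d+)"
| rex field=message "Solved problem_id successful: (?<solved_id>\d+)"
| eval problem_id=coalesce(raised_id, solved_id)
| stats count(raised_id) as raised, count(solved_id) as solved by problem_id
| where raised > 0 AND solved = 0
| table problem_id

Note that the two original searches use different time windows (-3h to -1h vs. -3h to now); this sketch uses one window, so adjust if the asymmetry is intentional. The tighter \d+ captures also avoid the greedy .* in the original rex, which can pull trailing text into the ID and break the match against the solved IDs.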
I have two dropdowns. I only want to run a single dropdown every time for a search.

The Closed dropdown has token value field1.
The OPEN dropdown has token value field4.

In the dashboard's source, I have the query listed below, so that the panel picks up the data as per the input provided in the dropdown:

index=* | lookup bar.csv IP OUTPUT BAR BAR_Status | search BAR="$field1$" OR BAR="$field4$" | chart count(IP) by BAR,STATUS

So if I select the Closed dropdown, the chart should provide the details related only to Closed, and vice versa. Please help.
I have read on Splunk.com that Splunk Enterprise reports don't satisfy the same use cases as the ones in ES, and that they should not be copied or synced to ES. Please tell me why? Thanks a million.
TLDR: I'm trying to automate the large 25-day search to break up into 25 separate one-day searches.

I'm updating a lookup table that is tracking which indexes are affected by the new log4j exploit. I do this so that I only have to search through the affected indexes with subsequent searches. This lookup table takes hours each time it is updated for a day. Problem being, I need to know all of the affected indexes over all of the days since log4j appeared, December 10th or so.

Query that updates the lookup table:

NOT [| inputlookup log4j_indexes.csv | fields index]
| regex _raw="(\$|%24)(\{|%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|%3A|\$|%24|}|%7D)"
| table index
| inputlookup append=true log4j_indexes.csv
| dedup index
| outputlookup log4j_indexes.csv

Each time this query finishes, it appends log4j-exploit-affected indexes to the lookup table. I need to automate the scanning over a large timeframe (December 10th 2021 - January 5th 2022). However, I want the lookup table to update as it runs over each day. I'm trying to automate the large 25-day search to break up into 25 separate one-day searches. This also makes it so that if the search fails, then I don't lose all progress. I can then apply this same methodology to other searches.

Lookup table (log4j_indexes.csv):

index
index_1
index_2

How I've tried to solve the problem

Commands I've tried while attempting to solve: foreach, map, gentimes, subsearch, saved searches.

Gentimes (smaller timeframes) -> map

Explanation of the query below: the gentimes part creates a table based on the selected time range:

Earliest              Latest
01/02/2022:00:00:00   01/03/2022:00:00:00
01/03/2022:00:00:00   01/04/2022:00:00:00
01/04/2022:00:00:00   01/05/2022:00:00:00

I try to pass those values to a subsearch as the earliest and latest parameters using map. I understand now that map doesn't seem to work for this, and I get no results when the search runs.

(gentimes and map) query:

|gentimes start=-1
|addinfo
|eval datetime=strftime(mvrange(info_min_time,info_max_time,"1d"),"%m/%d/%Y:%H:%M:%S")
|mvexpand datetime
|fields datetime
|eval latest=datetime
|eval input_earliest=strptime(datetime, "%m/%d/%Y:%H:%M:%S") - 86400
|eval earliest=strftime(input_earliest, "%m/%d/%Y:%H:%M:%S")
|fields earliest, latest
| map search="search NOT [| inputlookup log4j_indexes.csv | fields index] earliest=$earliest$ latest=$latest$ | regex _raw=\"(\$|\%24)(\{|\%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|\%3A|\$|\%24|}|\%7D)\" | table index | inputlookup append=true log4j_indexes.csv | dedup index | outputlookup log4j_indexes.csv"

Gentimes subsearch -> main search

Explanation of the query below: I use gentimes in a subsearch to produce smaller timeframes from the larger selected timeframe (same table as above). This doesn't give me errors. However, I get no matches. I can almost guarantee this isn't running separate searches per value displayed in the above table. I'm not sure how this can be done.

(gentimes subsearch) query:

NOT [| inputlookup log4j_indexes.csv | fields index]
[|gentimes start=-1
|addinfo
|eval datetime=strftime(mvrange(info_min_time,info_max_time,"1d"), "%m/%d/%Y:%H:%M:%S")
|mvexpand datetime
|fields datetime
|eval latest=datetime
|eval input_earliest=strptime(datetime,"%m/%d/%Y:%H:%M:%S") - 86400
|eval earliest=strftime(input_earliest,"%m/%d/%Y:%H:%M:%S")
|fields earliest, latest]
| regex _raw="(\$|\%24)(\{|\%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|\%3A|\$|\%24|}|\%7D)"
| table index
| inputlookup append=true log4j_indexes.csv
| dedup index
| outputlookup log4j_indexes.csv

Conclusion

Other failed attempts:
using foreach (can't do non-streaming)
passing earliest and latest parameters to a saved search (savedsearch doesn't work this way)

Other solutions I've thought of:
Running a subsearch that updates a smaller_timeframe.csv file that keeps track of the smaller timeframes, then passing those timeframe parameters (earliest/latest) into a search somehow.
Somehow doing a recursive sort of search where each search triggers another search to go. Consequently, I could have a search trigger another search with the earliest and latest values incremented forward one day (or any amount of time).
Maybe Splunk has a feature (not on the search head) that can automate the same search over small timeframes over a large period of time. Maybe this unknown-to-me feature also has scheduling built into it.

If there is any other information that I can give to help others solve this with me, then just ask. I can edit this post...
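One thing worth checking in the map attempt above: map caps the number of subsearch invocations at 10 by default (its maxsearches argument), so a 25-day run would be silently cut short. A sketch of the gentimes -> map approach with the cap raised (untested; the regex is the one from the query above):

| gentimes start=12/10/2021 end=01/05/2022 increment=1d
| eval earliest=starttime, latest=endtime
| map maxsearches=30 search="search earliest=$earliest$ latest=$latest$ NOT [| inputlookup log4j_indexes.csv | fields index] | regex _raw=\"...\" | table index | inputlookup append=true log4j_indexes.csv | dedup index | outputlookup log4j_indexes.csv"

gentimes with an explicit start/end/increment avoids the addinfo/strptime juggling (starttime and endtime are already epoch values that earliest/latest accept), and because map runs its iterations sequentially, each one-day search rewrites the lookup before the next starts, so progress survives a failure partway through.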
I need to customize the alert message (sent via email) with information that is not intrinsic to the alert itself. For example, if the number of users logging in over a 5-minute period exceeds a threshold, then send the alert email with the number of IP addresses that have logged in during that time period. We are trying to use Custom Alert Actions, but we feel there may be an easier way. Is there a way to have an alert trigger a report, then email the contents to a select group? We have an alert X. This alert is set up so it triggers on custom machine learning parameters. It will only trigger when the actual number of events is much higher than the mathematical prediction. When X is triggered, we need to do 2 things. First, run a report compiling all the information needed to triage. A lot of this is in a dashboard, but it can be produced through any number of reports and/or Splunk queries. Second, we need the information in that report or those queries to be put into an email, either as the file itself or using Splunk tokens to convey the report results. My approach in my head is alert > run report > email data from that report. Thanks in advance!
We have alerts set for 65 client sites and a handful of internal sites. If we want to disable ALL the alerts with one action, is this function available in AppDynamics?
Similar to https://community.splunk.com/t5/Splunk-Search/Word-Cloud-Not-Showing/td-p/544413. When I select "Tag Cloud" as the visualization for

stats count by campaign

nothing is displayed. It seems to do this for all "stats count by xxxx" searches, as well as my others.
Can someone help me build a Splunk search with the below-mentioned criteria? My application contains some fields, and one of the fields is "Request Number". I want the search query to fetch the records which have "Request Number" as "0". I have the source name, host name, etc. I'm getting other results also, but no Request Number as 0. Can someone help me out here?
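A minimal sketch, assuming the field is already extracted with the space in its name (single quotes are how eval/where reference such field names) and substituting your own index and source:

index=your_index source=your_source
| where 'Request Number'=0

If the value is stored as a string, compare against "0" instead; and if "Request Number" is only text inside the raw event rather than an extracted field, a rex to pull it out would be needed before filtering.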
We have a commercial appliance that requires a HEC configuration in Splunk to ingest data. I have configured the TA and app and the HEC configuration on the search head, but I get no data being ingested. I was told that it requires a valid certificate on the search head in order for this to work. Is this true? In the HEC configuration there is a check box for not using SSL. I've also run the curl -k command with success using the generated token.
I have two searches where I need to run a stats count to do some calculations.

First search:

index=xxx wf_id=xxx wf_env=xxx xxx | stats count

Second search:

index=xxx wf_id=xxx wf_env=xxx sourcetype=xxx usecase=xxx | stats count by request_id

The first search uses a simple stats to get its count, but the second search uses stats count by request_id, so I am having trouble getting the counts for both. Ideally I would like to get the counts for both searches and divide them. I've used appendcols but it returns empty fields for both searches. Any guidance on how to get counts for these searches would be helpful!

Working example:

_time   Search 1 counts   Search 2 counts   Search 1 / Search 2
00:30   50                25                2
00:35   100               25                4
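One way to avoid appendcols is to run both counts out of a single pass, assuming the two searches share the same index/wf_id/wf_env base and the second count is really the number of distinct request_id values per time bucket (a sketch with placeholder filters):

index=xxx wf_id=xxx wf_env=xxx
| bin _time span=5m
| stats count as search1_count, dc(eval(if(usecase="xxx", request_id, null()))) as search2_count by _time
| eval ratio=search1_count/search2_count

The dc(eval(...)) only counts request_id for events matching the second search's extra filters, so both numbers land on the same row and can be divided directly.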
Hi! I have a summarized field (docsReturned) by customer ID that I would like to make a top-X pie chart of, while summarizing the values not displayed in the list under the OTHER label that the timechart and top commands use. Base command example:

<search here> | stats sum(docsReturned) by customerId

I assumed it would work the same way as the others (that I could simply set a limit on the "| stats" transforming command) like I can with the timechart command, but that does not seem to be supported. I also attempted to chain the above search with the top command, but top appears to only work when counting rows (at least I cannot figure out how to make it work on an already-summarized field). Last but not least, I have tested chaining it with the sort command. "| sort 3 -docsReturned" is the closest I have gotten to what I want, but then I am lacking "OTHER", which is quite important in this scenario.

Sample output that I would like (in a scenario where the dynamic limit is set to 3):

1  Customer 1  14079
2  Customer 2  7015
3  Customer 3  5302
4  OTHER       6407

It seems like this should be an easy thing (since it is available in the timechart and top commands) and hopefully I have simply overlooked something. Fingers crossed that someone here can point me in the right direction?
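A sketch of one way to roll everything past the top 3 into an OTHER slice, building on the base stats above (untested):

<search here>
| stats sum(docsReturned) as docsReturned by customerId
| sort - docsReturned
| streamstats count as rank
| eval customerId=if(rank<=3, customerId, "OTHER")
| stats sum(docsReturned) as docsReturned by customerId
| sort - docsReturned

streamstats numbers the rows after sorting, every customer below the cutoff is renamed OTHER, and the second stats collapses those rows into a single slice for the pie chart. The cutoff 3 can be replaced by a dashboard token to keep the limit dynamic.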
I am using the Splunk Slack webhook to send alert results to Slack channels, but at present it is only displaying the first result of the alert. I want to display all the values rather than just the first value. I tried "for each result" but that creates multiple messages, which is not the solution I am looking for.

<$results_url$|$result.sum$ transaction failed - tr_id=$result.tr_id$>

tr_id has multiple values but only the first value is displayed in Slack at the moment.
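Since $result.*$ tokens only read the first result row, one common workaround is to collapse the alert search down to a single row before the action fires; a sketch, assuming a numeric field like "amount" feeds the sum (field names are placeholders):

<your alert search>
| stats sum(amount) as sum, values(tr_id) as tr_id
| eval tr_id=mvjoin(tr_id, ", ")

With everything folded into one result, $result.tr_id$ then carries the full comma-separated list in a single Slack message.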
Hi, how can I write the name of a field into its value? I have:

test_1   test_2   test_3
warn     error    critical

I want:

test
test_1 - warn
test_2 - error
test_3 - critical

I must do this for unknown fields (for now I have 3 tests but there can be more, so it must be variable). I thought of the foreach command but I don't know how to do it. Can you help me, if this use case is possible?
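A sketch with foreach, assuming the goal is to collect the results into a single multivalue field named test (untested):

| foreach test_* [ eval test=mvappend(test, "<<FIELD>>"." - ".'<<FIELD>>') ]

Inside the foreach block, <<FIELD>> is replaced with each matching field name, so "<<FIELD>>" yields the literal name while '<<FIELD>>' yields its value, and mvappend stacks each "name - value" pair into the test field. If a test_* field can be null, an if(isnotnull('<<FIELD>>'), ...) guard around the append keeps empty entries out.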
I'm pretty new to Splunk and have been tasked to start up an app, and I am outfitting a dashboard for my team. I'm currently researching ways to integrate an external reverse DNS lookup at an enterprise level. The goal is to match/identify the business partner's name with their external connection's IP within our database. As of now, we just have a large set of outbound/inbound IPs, and it would benefit us to match a name to them. Is this a possible task, and if it is, what are best practices or known solutions for this request? Thank you in advance!
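For the reverse-DNS half of this, Splunk ships a built-in external lookup called dnslookup that resolves IPs to hostnames at search time; a minimal sketch, assuming your events carry the address in a field named ip:

<your search>
| lookup dnslookup clientip AS ip OUTPUT clienthost AS resolved_hostname

Mapping the resolved hostname (or the raw IP ranges) to a business partner name would then typically be a second, CSV-based lookup maintained by your team, since PTR records rarely carry partner names directly.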