All Posts
I think I just figured it out. This search worked when I tried it. Please suggest if there is a better way. Thanks!

| stats count(Vulnerability) as Total_Vuln, count(eval(Score>0)) as Total_Non_Zero_Vuln by ip
Hi @ssuluguri
As per the compatibility matrix - https://docs.splunk.com/Documentation/VersionCompatibility/current/Matrix/Compatibilitybetweenforwardersandindexers - the UFs should be able to upgrade directly from 6.x to 9.x. May we know the Splunk indexer version, please? Thanks.

EDIT - Please check this doc: https://docs.splunk.com/Documentation/Splunk/9.0.0/Installation/AboutupgradingREADTHISFIRST - it states: "Upgrading a universal forwarder directly to version 9.0 is supported from versions 8.1.x and higher." So it is better to upgrade from 6.x to 8.1.x first, and then to 9.0.x. Thanks.
How to count the total number of rows with a non-zero field? Thank you in advance. Below is the data set:

ip   Vulnerability  Score
ip1  Vuln1          0
ip1  Vuln2          3
ip1  Vuln3          4
ip2  Vuln4          0
ip2  Vuln5          0
ip2  Vuln6          7

| stats count(Vulnerability) as Total_Vuln, countNonZero(Score) as Total_Non_Zero_Vuln by ip

Is there a function similar to countNonZero(Score) to count the number of rows with a non-zero field in Splunk? With my search above, I would like to have the following output:

ip   Total_Vuln  Total_Non_Zero_Vuln
ip1  3           2
ip2  3           1
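As another post in this feed confirms, there is no countNonZero() function, but the documented stats idiom count(eval(...)) achieves the same thing. A minimal runnable sketch (makeresults format=csv requires a recent Splunk version; field names come from the post above):

| makeresults format=csv data="ip,Vulnerability,Score
ip1,Vuln1,0
ip1,Vuln2,3
ip1,Vuln3,4
ip2,Vuln4,0
ip2,Vuln5,0
ip2,Vuln6,7"
| eval Score=tonumber(Score)
| stats count(Vulnerability) as Total_Vuln, count(eval(Score>0)) as Total_Non_Zero_Vuln by ip

count(eval(Score>0)) only counts events where the expression evaluates to true, so this yields Total_Non_Zero_Vuln of 2 for ip1 and 1 for ip2; the tonumber() guards against the CSV values arriving as strings.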
Hi @taufiqueshaikh. From the doc - https://docs.splunk.com/Documentation/Splunk/9.1.1/Admin/TypesofSplunklicenses

The Splunk Enterprise Trial license: when you download and install Splunk Enterprise, a Splunk Enterprise Trial license is automatically generated for that instance. The following important points apply to the Enterprise Trial license:
- The Enterprise Trial license gives access to all Splunk Enterprise features.
- The Enterprise Trial license is for standalone, single-instance installations of Splunk Enterprise only.
- The Enterprise Trial license cannot be stacked with other licenses.
- The Enterprise Trial license expires 60 days after you install the Splunk Enterprise instance, unless otherwise specified to customers.
- The Enterprise Trial license allows you to index 500 MB of data per day to Splunk Enterprise. If you exceed that limit you receive a license warning.
- The Enterprise Trial license prevents searching if there are a set number of license warnings.
Hi @gcusello
For example, over one week of average EPS: 18 Oct and 19 Oct came in lower than expected, while 15 Oct, 16 Oct, 17 Oct, 20 Oct, and 21 Oct look normal. The data source is /opt/splunk/var/log/splunk/metrics.log
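For reference, a common way to chart daily throughput from metrics.log and derive an average EPS per day is below (a sketch; per_index_thruput and its ev field are standard metrics.log series, but adjust the group to whatever you are measuring):

index=_internal source=*metrics.log* group=per_index_thruput
| timechart span=1d sum(ev) as daily_events
| eval avg_eps=round(daily_events/86400, 2)

Days such as 18-19 Oct would then stand out directly against the rest of the week.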
You will need to fix 3 files in it:

wordcloud_app/appserver/static/visualizations/wordcloud/visualization.js
wordcloud_app/appserver/static/visualizations/wordcloud/webpack.config.js
wordcloud_app/appserver/static/visualizations/wordcloud/src/wordcloud.js

Replace "vizapi" with "api" in each. Then restart Splunk, and you may need to bump the version (http://splunkhostname:8000/en-US/_bump) since it is JavaScript and could be cached.
Hi, we need to find all the hosts across all the indexes, but we cannot use index=* any more because its use is restricted by a workload rule. Previously the following command was used:

| tstats count where index=* by host
| fields - count

But it uses index=* and now we cannot use it. We will appreciate any ideas.
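One workaround, assuming you can enumerate the indexes you are allowed to search (the index names below are placeholders), is to pass an explicit list instead of the wildcard:

| tstats count where index IN (main, firewall, proxy) by host
| fields - count

Whether this avoids the workload rule depends on how the rule is written, so treat it as something to test rather than a guaranteed fix.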
I am trying to set up Splunk as a distributed environment, but whenever I make a server the Manager node, Splunk fails to start. I tried this on both Windows and Ubuntu and tried to start the failed splunkd service in both environments, without success. I have been trying to find a solution for the last 2 days.
Note: I am using a Splunk Enterprise trial license.
To add a bit to @gcusello's answer... The {} are part of the field's name here. There's no magic, no additional syntax or anything like that. Splunk can, depending on your needs and configuration, work with JSON data in three separate ways, each of which has its pros and cons:

1) Indexed extractions - the fields are extracted from the event when it is ingested into Splunk and are stored alongside the raw data as indexed fields. Since this can be combined with the other methods, it can produce duplicate field values.

2) Automatic key-value extractions from structured data.

3) The explicit spath command.

The two latter options are search-time operations, and they produce different results in terms of field naming. Also, if you need to filter by a field's value after doing spath, you first have to run spath on every event, which is much less efficient than filtering early in the search. On the other hand, automatic KV extraction doesn't work on just part of the message.

Anyway, those methods produce fields named with {}, as in your example, when the fields originally contain lists of objects. But after parsing by Splunk, the {} part is just part of the field's name.
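A quick way to see the {} naming in action is this self-contained sketch using makeresults (the JSON structure here is made up for illustration):

| makeresults
| eval _raw="{\"items\":[{\"name\":\"a\",\"score\":1},{\"name\":\"b\",\"score\":2}]}"
| spath
| table items{}.name items{}.score

spath flattens the array of objects into multivalue fields literally named items{}.name and items{}.score - the braces are just characters in the field name.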
I don't think the fishbucket has been tampered with since the time it was moved from being an index to a local datastore, so I wouldn't expect problems here.
Hello, I have created a dashboard of 10 panels using base searches. The entire dashboard loads with 4 base searches, but the dashboard still always gets stuck in "Waiting for data" or "Queued, waiting for...". How can I solve this problem?
Hello, I have 2 distinct indexes with distinct values and want to create one final stats query from selected fields of both indexes.

Ex:
Index A, fields X Y Z: | stats count(X), avg(Y) by X Y Z
Index B, fields K M: | stats count(K), max(M) by K M

I am able to search both indexes and produce separate stats, but if I run stats on all fields by X Y Z K M it does not give any results.
Note: there are no common fields between the two indexes.
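With no common fields, a single grouped stats across both field sets will never match anything. One hedged approach (index and field names taken from the post) is to compute each stats separately and append the result rows together:

index=A
| stats count(X) as count_X, avg(Y) as avg_Y by X Y Z
| append
    [ search index=B
      | stats count(K) as count_K, max(M) as max_M by K M ]

Each source contributes its own rows and columns; rows from index B will simply have empty X/Y/Z cells, which is expected when the data sets share nothing.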
Hi inventsekar,
Thank you for explaining the parts of your search. The suggestions you gave helped to create a report that gives a list of unique badges (up to 3 entries for Sell and up to 3 entries for Deploy) per user, showing the badge with the latest expiration date. The challenge now is to filter the results so that each person only shows the highest level badge earned (per badge type). So each person will have a maximum of two entries in the final report if they have earned both a Sell type badge and a Deploy type badge.

To answer your previous question, users who pass the renewal badge will have the "Renew_" prefix in the badge field. Your suggestion to remove the prefix worked, thank you. When a renew badge is earned, it extends the original badge expiration date further out in time. Using the demo data I provided, the final report should look like below.

Domain   First name  Last name  Email                   Badge           ExpireDate
mno.com  lisa        edwards    lisa.edwards@mno.com    Sell_Expert     12/6/23
mno.com  lisa        edwards    lisa.edwards@mno.com    Deploy_Capable  8/1/24
abc.com  allen       anderson   allen.anderson@abc.com  Sell_Novice     10/3/24
def.com  andy        braden     andy.braden@def.com     Deploy_Capable  1/3/24
ghi.com  bill        connors    bill.connors@ghi.com    Sell_Novice     10/17/23
jkl.com  brandy      duggan     brandy.duggan@jkl.com   Sell_Expert     9/5/24

Assuming the data set has already been filtered to a list of badges with the latest expiration date, what type of search can look at the badge type (per person) and filter to the highest level badge (per badge type), returning only that data? For example, take "Person A". For the Sell type badge, the highest level is Sell_Expert. If "Person A" has earned Sell_Expert, then only Sell_Expert (with the latest expiration date) shows up in the report for the Sell badge type. If "Person A" has not earned Sell_Expert yet, their highest level is Sell_Capable. If they have not earned Sell_Capable yet, their highest level is Sell_Novice. If they have not earned any Sell type badge, they will not have a Sell badge type listed in the final report. The same logic applies to the Deploy badge type. Thank you.
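A hedged sketch of that final filtering step, assuming the Badge values follow the Sell_/Deploy_ naming shown above and the earlier steps have already produced one row per person per badge:

| eval badge_type=mvindex(split(Badge, "_"), 0)
| eval badge_level=case(match(Badge, "Expert$"), 3, match(Badge, "Capable$"), 2, match(Badge, "Novice$"), 1)
| eventstats max(badge_level) as top_level by Email badge_type
| where badge_level=top_level
| fields - badge_type badge_level top_level

The case() mapping turns the three named levels into sortable numbers, and eventstats keeps only each person's highest level per badge type; people with no badge of a given type simply contribute no rows for it.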
Hello, my data is formatted as JSON and contains a field named "cves", which holds an array of CVE codes related to the event. If I simply alias it to CVE, then one row will contain all the CVEs:

[props.conf]
FIELDALIAS-cve = cves as cve

I assume that for the data to be useful, I have to somehow break up the array so that each value becomes a separate row. Is this assumption correct? And if so, what is the way to do that in props.conf? Thank you
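For what it's worth, props.conf on its own can't split a single event into multiple result rows; the usual way to do this is at search time with spath and mvexpand. A self-contained sketch (the JSON shape here is a guess based on the description):

| makeresults
| eval _raw="{\"host\":\"web01\",\"cves\":[\"CVE-2023-0001\",\"CVE-2023-0002\"]}"
| spath path=cves{} output=cve
| mvexpand cve
| table cve

spath pulls the array into a multivalue field cve, and mvexpand then emits one row per value.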
I'm seeing the same issue. I updated to 9.0.5 and saw a Server 2019 host fill up with these events. I updated to 9.0.6 and am still seeing the issue on that host.
Hello @ITWhisperer, I had used the below SPL for reading the values from the lookup file and storing their corresponding SPL back in the lookup file, which I wanted to use for computing the respective forecasted values.

| inputlookup table1.csv
| table fieldA, fieldB
| eval key="index=custom_index earliest=-4w@w latest=-2d@d orig_fieldA=".fieldA." orig_fieldB=".fieldB." | timechart span=1d avg(event_count) AS avg_event_count | predict avg_event_count future_timespan=1 | tail 1 | fields prediction(avg_event_count)"
| outputlookup table1.csv

Sharing the above SPL for your reference, in case you could suggest any improvement in the approach for achieving the end result. Thanking you in anticipation.
Hello @ITWhisperer, thank you for your response and for sharing your inputs and questions. I will try to describe the original task and hopefully answer your questions in the process.

I have a lookup file with two fields, fieldA and fieldB, containing distinct pairs. I have an index, and for each pair of fieldA and fieldB listed in the lookup file I need to run the predict command to forecast the event count for the last day, using the previous month of historical data with a span of one day. The query I used to compute the time-series input for the predict command is therefore a timechart command. I have to store the resulting forecasted value in the lookup for the respective pairs of fieldA and fieldB, then read the lookup in the dashboard and also display the actual event count for each distinct pair for comparison.

The approach I tried so far: I read the data from the lookup file for each row, built the SPL for each row (a timechart feeding the predict command), and stored the SPL for each pair in a third column of the lookup. The idea was to first build the required SPL for each row of unique pairs and then iteratively run the SPL from the lookup to compute the required values. However, I got stuck at reading the SPLs and storing their result values back in the lookup.

To answer your specific questions: ideally, I would want these searches to execute in one main search, or to be executed iteratively by the main search. An error in one of the SPL searches is not anticipated, because the search conditions and SPL keywords are the same; only the values change based on the row currently being read from the lookup file. The lookup file has a more or less fixed number of rows, but it will be periodically updated by a different search that refreshes its data. The SPL searches are complete searches which can be executed independently on the search head to compute the predicted values.

Please let me know if you need any more details. Thank you
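One possible way to run a search per lookup row without storing SPL strings at all is the map command, which substitutes each row's values into a search template. A hedged sketch (index and field names are taken from the earlier posts, table1_forecast.csv is a hypothetical output file, and maxsearches must cover the number of lookup rows):

| inputlookup table1.csv
| map maxsearches=50 search="search index=custom_index earliest=-4w@w latest=-2d@d orig_fieldA=$fieldA$ orig_fieldB=$fieldB$
    | timechart span=1d avg(event_count) AS avg_event_count
    | predict avg_event_count future_timespan=1
    | tail 1
    | eval fieldA=\"$fieldA$\", fieldB=\"$fieldB$\"
    | fields fieldA fieldB prediction(avg_event_count)"
| outputlookup table1_forecast.csv

map runs the quoted search once per input row, and writing to a separate output lookup avoids clobbering table1.csv mid-run. Be aware that map is relatively slow and its error handling across iterations is limited, so it fits best when the row count stays small.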
Hi @Mien, your question is just a little vague: which days are you comparing? Which data source? There could be many factors. Ciao. Giuseppe
Hi, may I know why the daily EPS on specific dates is lower than usual? Is there any factor that could cause the lower EPS count? Thank you.
Please provide some sample anonymised events