All Posts

Hi, we need to find all the hosts across all the indexes, but we can no longer use index=*, as its use is restricted by a workload rule. Previously we used the following command:

| tstats count where index=* by host
| fields - count

But it uses index=* and now we cannot use it. Will appreciate any ideas.
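If the workload rule matches on the literal index=* pattern, one sketch that avoids it is an explicit index list with tstats (the index names below are placeholders — whether this satisfies your specific workload rule is an assumption to verify):

```
| tstats count where index IN (web, security, os) by host
| fields - count
```

Alternatively, `| metadata type=hosts index=web index=security` returns the hosts per index without scanning events at all. Both forms avoid the index=* string, but check whether your rule keys on the string or on the effective search scope.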
I am trying to set up Splunk as a distributed environment, but whenever I make a server the manager node, the server fails (Splunk does not start). I tried this on both Windows and Ubuntu, and tried to start the failed splunkd service in both environments, but it failed. I have been trying to find a solution for the last 2 days. Note: I am using a Splunk Enterprise trial license.
To add a bit to @gcusello's answer... The {} are part of the field's name here. There's no magic, no additional syntax or anything like that. Depending on your needs and configuration, Splunk can work with JSON data in three separate ways, each of which has its pros and cons:

1) Indexed extractions - the fields are extracted from the event when it is ingested into Splunk and are stored alongside the raw data as indexed fields. Since this can be combined with the other methods, it can produce duplicate field values.

2) Automatic key-value extraction from structured data.

3) The explicit spath command.

The two latter options are search-time operations, and they produce different results in terms of field naming. Also, if you need to filter by a field's value after doing spath, you first have to run spath on every event, which is much less efficient than filtering early in the search. On the other hand, automatic KV extraction doesn't work on just part of the message. Anyway, one of those methods produces field names containing {} as in your example when the fields originally contain lists of objects. After parsing by Splunk, the {} part is just part of the field's name.
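As a minimal self-contained illustration of option 3 and of where the {} naming comes from (the sample JSON is made up):

```
| makeresults
| eval _raw="{\"users\": [{\"name\": \"alice\"}, {\"name\": \"bob\"}]}"
| spath
| table users{}.name
```

Because "users" is an array of objects, spath names the extracted multivalue field users{}.name — the braces mark the array level, but from then on they are simply characters in the field name.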
I don't think the fishbucket has changed since the time it was moved from being an index to a local datastore, so I wouldn't expect problems here.
Hello, I have created a dashboard of 10 panels and I have used base searches. The entire dashboard loads from 4 base searches, but the dashboard always either gets stuck on "Waiting for data" or "Queued, waiting for..." How can I solve this problem?
Hello, I have 2 distinct indexes with distinct values. I want to create one final stats query from selected fields of both indexes.

Ex:
Index A, fields X Y Z: stats count(X) avg(Y) by X Y Z
Index B, fields K M: stats count(K) max(M) by K M

I am able to search both indexes and produce separate stats, but if I run stats on all fields by X Y Z K M it does not give any results. Note: there are no common fields between the two indexes.
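A sketch of one way to get both aggregations into a single result set (index names A and B are placeholders): a single `by X Y Z K M` clause returns nothing because no event carries all five fields, so every event is dropped from the grouping. Appending the second aggregation keeps each index's rows side by side instead:

```
index=A
| stats count(X) AS count_X, avg(Y) AS avg_Y BY X, Y, Z
| append
    [ search index=B
      | stats count(K) AS count_K, max(M) AS max_M BY K, M ]
```

The output has the A rows first (with K/M columns empty) and the B rows after (with X/Y/Z empty), which you can then table or post-process as needed.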
Hi inventsekar, thank you for explaining the parts of your search. The suggestions you gave helped to create a report that gives a list of unique badges (up to 3 entries for Sell and up to 3 entries for Deploy) per user, showing the badge with the latest expiration date. The challenge now is to filter the results so that each person only shows the highest-level badge earned (per badge type). So each person will have a maximum of two entries in the final report if they have earned both a Sell-type badge and a Deploy-type badge.

To answer your previous question, users who pass the renewal badge will have the "Renew_" prefix in the badge field. Your suggestion to remove the prefix worked; thank you. When a renew badge is earned, it extends the original badge's expiration date further out in time. Using the demo data I provided, the final report should look like below.

Domain | First name | Last name | Email | Badge | ExpireDate
mno.com | lisa | edwards | lisa.edwards@mno.com | Sell_Expert | 12/6/23
mno.com | lisa | edwards | lisa.edwards@mno.com | Deploy_Capable | 8/1/24
abc.com | allen | anderson | allen.anderson@abc.com | Sell_Novice | 10/3/24
def.com | andy | braden | andy.braden@def.com | Deploy_Capable | 1/3/24
ghi.com | bill | connors | bill.connors@ghi.com | Sell_Novice | 10/17/23
jkl.com | brandy | duggan | brandy.duggan@jkl.com | Sell_Expert | 9/5/24

Assuming the data set has already been filtered to produce a list of badges with the latest expiration date, what type of search can look at the badge type (per person), filter to the highest-level badge (per badge type), and return only that data? For example, take "Person A". For the Sell badge type, the highest level is Sell_Expert. If "Person A" has earned "Sell_Expert", then only "Sell_Expert" (with the latest expiration date) will show up in the report for the Sell badge type. If "Person A" has not earned Sell_Expert yet, then the highest level for "Person A" is Sell_Capable.
If "Person A" has not earned Sell_Capable yet, then the highest level for "Person A" is Sell_Novice. If "Person A" has not earned any Sell-type badge yet, they will not have a Sell badge type listed in the final report. The same logic applies to the Deploy badge type. Thank you.
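A minimal sketch of that filtering logic, assuming Badge values like Sell_Expert and Email as the per-person key (the case() ranking encodes the Novice < Capable < Expert ordering described above):

```
| eval type = mvindex(split(Badge, "_"), 0)
| eval level = case(match(Badge, "Expert$"), 3, match(Badge, "Capable$"), 2, match(Badge, "Novice$"), 1)
| sort 0 - level
| dedup Email, type
| table Domain, "First name", "Last name", Email, Badge, ExpireDate
```

After sorting by descending level, dedup keeps only the first (highest-level) row per person and badge type, so someone with no Sell badge at all simply contributes no Sell row.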
Hello, my data is formatted as JSON and it contains a field named "cves" which holds an array of CVE codes related to the event. If I simply alias it to CVE, then one row will contain all the CVEs:

[props.conf]
FIELDALIAS-cve = cves as cve

I assume that in order for the data to be useful, I have to somehow break up the array so that each value appears as a separate row. Is this assumption correct? And if so, what is the way to do that in props.conf? Thank you.
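For what it's worth, a field alias alone cannot split an array — fanning values out into one row per CVE is usually done at search time rather than in props.conf. A sketch (index and sourcetype are placeholders):

```
index=main sourcetype=my_json
| spath path=cves{} output=cve
| mvexpand cve
| table _time, cve
```

spath extracts the array elements into a multivalue field, and mvexpand then duplicates the event once per value.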
I'm seeing the same issue. Updated to 9.0.5 and was seeing a server 2019 host fill up with these events. Updated to 9.0.6 and still seeing the issue on that host.
Hello @ITWhisperer, I had used the below SPL for reading the values from the lookup file and storing their corresponding SPL back in the lookup file, which I wanted to use for computing the respective forecasted values.

| inputlookup table1.csv
| table fieldA, fieldB
| eval key="index=custom_index earliest=-4w@w latest=-2d@d orig_fieldA=".fieldA." orig_fieldB=".fieldB." | timechart span=1d avg(event_count) AS avg_event_count | predict avg_event_count future_timespan=1 | tail 1 | fields prediction(avg_event_count)"
| outputlookup table1.csv

Shared the above SPL for your reference in case you could suggest any improvement in the approach for achieving the end result. Thanking you in anticipation.
Hello @ITWhisperer, thank you for your response and for sharing your inputs and questions. I will try to describe the original task and hopefully, in the process, answer your questions.

I have a lookup file with two fields: fieldA and fieldB. The lookup file has distinct pairs of fieldA and fieldB. I have an index, and I need to run the predict command to calculate an event count for each pair of fieldA and fieldB listed in the lookup file, forecasting the value for the last day from the historical data of the last month, by day. As a result, the query I used to compute the time-series input for the predict command is a timechart command. I have to store the resulting forecasted value in the lookup for the respective pairs of fieldA and fieldB, then read the lookup's data in the dashboard and also display the actual event count for each distinct pair of fieldA and fieldB for comparison.

The approach I tried so far: I read the data from the lookup file for each row, built the SPL for each row that uses timechart to fetch the time-series data and predict to compute the forecasted value, and finally stored the SPL for each row in a third column of the lookup. The plan was to first build the required SPL for each row of unique pairs and then iteratively run the SPL from the lookup to compute the required values. However, I got stuck on reading the SPLs and storing their results back in the lookup.

I will now try to answer your specific questions: ideally, I would want these searches to execute in one main search, or to be executed iteratively from the main search. With regard to an error in one of the SPL searches, errors are not anticipated because the search conditions and SPL keywords are the same; only the values change based on the row currently being read from the lookup file.
The lookup file has a more or less fixed number of rows, but it is periodically updated by a different search that refreshes its data. The SPL searches are complete searches that can be executed independently on the search head to compute the predicted values. Please let me know if you need any more details. Thank you.
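One way to run a search per lookup row without storing SPL strings in the lookup at all is the map command, which substitutes $fieldA$ / $fieldB$ tokens from each input row into a search template. This is only a sketch under the assumptions in the thread (index, field, and lookup names are taken from the earlier messages; the output lookup name and maxsearches value are hypothetical):

```
| inputlookup table1.csv
| map maxsearches=100 search="search index=custom_index earliest=-4w@w latest=-2d@d orig_fieldA=$fieldA$ orig_fieldB=$fieldB$
    | timechart span=1d count AS event_count
    | predict event_count future_timespan=1
    | tail 1
    | eval fieldA=\"$fieldA$\", fieldB=\"$fieldB$\""
| outputlookup forecast_results.csv
```

Note that map launches one search per row, so it can be slow and resource-hungry for large lookups; raising maxsearches beyond the default is required if the lookup has more than 10 rows.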
Hi @Mien, your question is just a little vague: which days are you comparing? Which data source? There could be many factors. Ciao. Giuseppe
Hi, may I know why the daily EPS on a specific date is lower than usual? Is there any factor that could cause the lower EPS count? Thank you.
Please provide some sample anonymised events
Hi @herrypeterlee, the curly braces come from the JSON format and mark the properties (fields) of a JSON array. Here you can find some description: https://www.spiceworks.com/tech/devops/articles/what-is-json/#:~:text=In%20JSON%2C%20data%20is%20represented,separates%20the%20array%20from%20values. (https://www.microfocus.com/documentation/silk-performer/195/en/silkperformer-195-webhelp-en/GUID-6AFC32B4-6D73-4FBA-AD36-E42261E2D77E.html#:~:text=A%20JSON%20object%20contains%20zero,by%20a%20colon%20(%20%3A%20).) I suggest renaming the field at the start of the search to avoid problems later in the search. Ciao. Giuseppe
Hello, I tried setting up a Hive connection using Splunk DB Connect and got stuck on the Kerberos authentication. I have added the Cloudera drivers for the Hive DB, but we get the below error:

[Cloudera][HiveJDBCDriver](500168) Error creating login context using ticket cache: Unable to obtain Principal Name for authentication.

Has anyone faced this issue before?
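For comparison, a Kerberos-authenticated Cloudera Hive JDBC URL normally carries the Kerberos options in the connection string — the host, realm, and service name below are placeholders for your environment:

```
jdbc:hive2://hive-host.example.com:10000/default;AuthMech=1;KrbRealm=EXAMPLE.COM;KrbHostFQDN=hive-host.example.com;KrbServiceName=hive
```

The "ticket cache" part of the error also often simply means that the OS user running splunkd has no valid Kerberos ticket, i.e. kinit has not been run (or the cache has expired) — that is worth checking before changing driver settings.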
I have data like:

{"adult": false, "genre_ids": [16, 10751], "id": 1135710, "original_language": "sv", "original_title": "Vem du, Mamma Mu", "vote_average": 6, "vote_count": 2}

I do this search:

index="tmdb_my_index"
| mvexpand genre_ids{}
| rename genre_ids{} as genre_id
| table genre_id, id

Why does genre_ids{} need the "{}"?
In this case, how do I then filter out the events that precede the appearance of 4769 without the other three EventCodes?
Hello everyone, so, many hours went by. It all started with the parameters which were introduced in Splunk 9 (docs reference). Specifically, we should harden the KV store. I've spent several hours in many environments, and not a single time was I able to do so. Today, I spent many hours trying to solve it, with no success.

Here's the problem: I've configured everything and everything is working fine, except the KV store.

[sslConfig]
cliVerifyServerName = true
sslVerifyServerCert = true
sslVerifyServerName = true
sslRootCAPath = $SPLUNK_HOME/etc/your/path/your_CA.pem

[kvstore]
sslVerifyServerCert = true
sslVerifyServerName = true
serverCert = $SPLUNK_HOME/etc/your/path/your_cert.pem
sslPassword =

[pythonSslClientConfig]
sslVerifyServerCert = true
sslVerifyServerName = true

[search_state]
sslVerifyServerCert = true

(btw, search_state is neither listed in the docs nor does the value display in the UI; however, an error is logged if it's not set). You can put the sslPassword parameter in or not, it doesn't matter.

What you'll always end up with in mongod.log when enabling sslVerifyServerCert and sslVerifyServerName is:

2023-10-22T00:11:28.557Z I CONTROL [initandlisten] ** WARNING: This server will not perform X.509 hostname validation
2023-10-22T00:11:28.557Z I CONTROL [initandlisten] ** This may allow your server to make or accept connections to
2023-10-22T00:11:28.557Z I CONTROL [initandlisten] ** untrusted parties
2023-10-22T00:11:28.557Z I CONTROL [initandlisten]
2023-10-22T00:11:28.557Z I CONTROL [initandlisten] ** WARNING: No client certificate validation can be performed since no CA file has been provided
2023-10-22T00:11:28.557Z I CONTROL [initandlisten] ** Please specify an sslCAFile parameter.

Splunk doesn't seem to be passing the required parameters to Mongo as Mongo expects them, so let's dig a bit. This is what you'll find at startup:

2023-10-21T22:31:54.640+0200 W CONTROL [main] Option: sslMode is deprecated. Please use tlsMode instead.
2023-10-21T22:31:54.640+0200 W CONTROL [main] Option: sslPEMKeyFile is deprecated. Please use tlsCertificateKeyFile instead.
2023-10-21T22:31:54.640+0200 W CONTROL [main] Option: sslPEMKeyPassword is deprecated. Please use tlsCertificateKeyFilePassword instead.
2023-10-21T22:31:54.640+0200 W CONTROL [main] Option: sslCipherConfig is deprecated. Please use tlsCipherConfig instead.
2023-10-21T22:31:54.640+0200 W CONTROL [main] Option: sslAllowInvalidHostnames is deprecated. Please use tlsAllowInvalidHostnames instead.
2023-10-21T20:31:54.641Z W CONTROL [main] net.tls.tlsCipherConfig is deprecated. It will be removed in a future release.
2023-10-21T20:31:54.644Z W NETWORK [main] Server certificate has no compatible Subject Alternative Name. This may prevent TLS clients from connecting
2023-10-21T20:31:54.645Z W ASIO [main] No TransportLayer configured during NetworkInterface startup

Has anyone ever tested the TLS verification settings? All of the tlsVerify* settings are just very inconsistent in Splunk 9, and I don't see them mentioned often. I also don't find any bugs or issues listed for KV store encryption. If those parameters are listed in the docs, I expect them to work. A "ps -ef | grep mongo" will show what options are passed from Splunk to Mongo, formatted here for readability.
mongod --dbpath=/data/splunk/var/lib/splunk/kvstore/mongo --storageEngine=wiredTiger --wiredTigerCacheSizeGB=3.600000 --port=8191 --timeStampFormat=iso8601-utc --oplogSize=200 --keyFile=/data/splunk/var/lib/splunk/kvstore/mongo/splunk.key --setParameter=enableLocalhostAuthBypass=0 --setParameter=oplogFetcherSteadyStateMaxFetcherRestarts=0 --replSet=8B532733-2DEF-42CC-82E5-38E990F3CD04 --bind_ip=0.0.0.0 --sslMode=requireSSL --sslAllowInvalidHostnames --sslPEMKeyFile=/data/splunk/etc/auth/newCerts/machine/deb-spl_full.pem --sslPEMKeyPassword=xxxxxxxx --tlsDisabledProtocols=noTLS1_0,noTLS1_1 --sslCipherConfig=ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:DHE-DSS-AES256-GCM-SHA384:DHE-DSS-AES256-SHA256:DHE-DSS-AES128-GCM-SHA256:DHE-DSS-AES128-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES256-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-SHA256 --nounixsocket --noscripting

I even tried messing around with old server.conf parameters like caCertFile or sslKeysPassword, but it seems the CA is simply never passed as an argument. Why did no one stumble upon this?

How did I find all of this? I have developed an app which gives an overview of a Splunk environment's mitigation status against current Splunk Vulnerability Disclosures (SVDs) as well as recommended best-practice encryption settings.

If anyone has a working KV store TLS config, I'm eager to see it.

Skalli
@oneemailall wrote: "I am still trying to figure out why your solution works."

Hi @oneemailall .. please note that in my reply I said you would need to fine-tune this further, and it is nice to know the other reply works exactly as you were expecting. Since you are trying to figure out why that solution works, let me try to explain:

| eval type = split(Badge, "_")
``` splitting the "Badge" field on the underscore gives you the "type" of the badge ```
| eval level = mvfind(mvappend("Novice", "Capable", "Expert"), mvindex(type, -1)) + 1
``` mvappend and mvindex are multivalue functions; understanding them takes a while. Please check the docs: https://docs.splunk.com/Documentation/SCS/current/SearchReference/MultivalueEvalFunctions ```
| fillnull level
| eval type = mvindex(type, -2)
| eval expire_ts = strptime(ExpireDate, "%m/%d/%y")
``` to sort on ExpireDate you first need to convert it to an epoch timestamp ```
| sort - level, expire_ts, + "Last name" "First name"
| dedup Domain, "First name", "Last name", Email, type
``` with the sorting and dedup done, you can table the output with the command below ```
| table Domain, "First name", "Last name", Email, Badge, ExpireDate