All Posts


Hi @herrypeterlee, the curly braces come from the JSON format: they mark the properties (fields) of a JSON array. You can find some background here: https://www.spiceworks.com/tech/devops/articles/what-is-json/#:~:text=In%20JSON%2C%20data%20is%20represented,separates%20the%20array%20from%20values. and here: https://www.microfocus.com/documentation/silk-performer/195/en/silkperformer-195-webhelp-en/GUID-6AFC32B4-6D73-4FBA-AD36-E42261E2D77E.html#:~:text=A%20JSON%20object%20contains%20zero,by%20a%20colon%20(%20%3A%20). I suggest renaming the field at the start of the search to avoid problems later in the search. Ciao. Giuseppe
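For example, a minimal sketch of that rename, assuming the index and field names from the question:

index="tmdb_my_index"
``` quote the field name because of the curly braces, then work with the plain name ```
| rename "genre_ids{}" as genre_id
| mvexpand genre_id
| table genre_id, id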
Hello, I tried setting up a Hive connection using Splunk DB Connect and got stuck on the Kerberos authentication. I have added the Cloudera drivers for the Hive DB, but we get the error below: [Cloudera][HiveJDBCDriver](500168) Error creating login context using ticket cache: Unable to obtain Principal Name for authentication. Has anyone faced this issue before?
I have data like: {"adult": false,  "genre_ids": [16, 10751], "id": 1135710, "original_language": "sv", "original_title": "Vem du, Mamma Mu", "vote_average": 6, "vote_count": 2}

I run this search:

index="tmdb_my_index"
| mvexpand genre_ids{}
| rename genre_ids{} as genre_id
| table genre_id, id

Why does genre_ids{} need the "{}"?
In this case, how do I continue to filter out the events that precede the appearance of 4769 without the other three EventCodes?
Hello everyone, so, many hours went by. It all started with the parameters introduced in Splunk 9 (docs reference). Specifically, we should harden the KV store. I've spent several hours in many environments and was never able to do so; today I spent many more hours trying to solve it, with no success. Here's the problem: I've configured everything and everything is working fine, except the KV store.

[sslConfig]
cliVerifyServerName = true
sslVerifyServerCert = true
sslVerifyServerName = true
sslRootCAPath = $SPLUNK_HOME/etc/your/path/your_CA.pem

[kvstore]
sslVerifyServerCert = true
sslVerifyServerName = true
serverCert = $SPLUNK_HOME/etc/your/path/your_cert.pem
sslPassword =

[pythonSslClientConfig]
sslVerifyServerCert = true
sslVerifyServerName = true

[search_state]
sslVerifyServerCert = true

(By the way, search_state is neither listed in the docs nor does its value display in the UI; however, an error is logged if it's not set.) You can put the sslPassword parameter in or leave it out, it doesn't matter.

What you always end up with in mongod.log when enabling sslVerifyServerCert and sslVerifyServerName:

2023-10-22T00:11:28.557Z I CONTROL [initandlisten] ** WARNING: This server will not perform X.509 hostname validation
2023-10-22T00:11:28.557Z I CONTROL [initandlisten] ** This may allow your server to make or accept connections to
2023-10-22T00:11:28.557Z I CONTROL [initandlisten] ** untrusted parties
2023-10-22T00:11:28.557Z I CONTROL [initandlisten]
2023-10-22T00:11:28.557Z I CONTROL [initandlisten] ** WARNING: No client certificate validation can be performed since no CA file has been provided
2023-10-22T00:11:28.557Z I CONTROL [initandlisten] ** Please specify an sslCAFile parameter.

Splunk doesn't seem to be passing the required parameters to Mongo the way Mongo expects them, so let's dig a bit. This is what you'll find at startup:

2023-10-21T22:31:54.640+0200 W CONTROL [main] Option: sslMode is deprecated. Please use tlsMode instead.
2023-10-21T22:31:54.640+0200 W CONTROL [main] Option: sslPEMKeyFile is deprecated. Please use tlsCertificateKeyFile instead.
2023-10-21T22:31:54.640+0200 W CONTROL [main] Option: sslPEMKeyPassword is deprecated. Please use tlsCertificateKeyFilePassword instead.
2023-10-21T22:31:54.640+0200 W CONTROL [main] Option: sslCipherConfig is deprecated. Please use tlsCipherConfig instead.
2023-10-21T22:31:54.640+0200 W CONTROL [main] Option: sslAllowInvalidHostnames is deprecated. Please use tlsAllowInvalidHostnames instead.
2023-10-21T20:31:54.641Z W CONTROL [main] net.tls.tlsCipherConfig is deprecated. It will be removed in a future release.
2023-10-21T20:31:54.644Z W NETWORK [main] Server certificate has no compatible Subject Alternative Name. This may prevent TLS clients from connecting
2023-10-21T20:31:54.645Z W ASIO [main] No TransportLayer configured during NetworkInterface startup

Has anyone ever tested the TLS verification settings? All of the tlsVerify* settings are very inconsistent in Splunk 9, and I don't see them mentioned often. I also don't find any bugs or issues listed for KV store encryption. If those parameters are listed in the docs, I expect them to work. A "ps -ef | grep mongo" will show you what options Splunk passes to Mongo; here it is, formatted for readability:
mongod --dbpath=/data/splunk/var/lib/splunk/kvstore/mongo --storageEngine=wiredTiger --wiredTigerCacheSizeGB=3.600000 --port=8191 --timeStampFormat=iso8601-utc --oplogSize=200 --keyFile=/data/splunk/var/lib/splunk/kvstore/mongo/splunk.key --setParameter=enableLocalhostAuthBypass=0 --setParameter=oplogFetcherSteadyStateMaxFetcherRestarts=0 --replSet=8B532733-2DEF-42CC-82E5-38E990F3CD04 --bind_ip=0.0.0.0 --sslMode=requireSSL --sslAllowInvalidHostnames --sslPEMKeyFile=/data/splunk/etc/auth/newCerts/machine/deb-spl_full.pem --sslPEMKeyPassword=xxxxxxxx --tlsDisabledProtocols=noTLS1_0,noTLS1_1 --sslCipherConfig=ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:DHE-DSS-AES256-GCM-SHA384:DHE-DSS-AES256-SHA256:DHE-DSS-AES128-GCM-SHA256:DHE-DSS-AES128-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES256-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-SHA256 --nounixsocket --noscripting

I even tried messing around with old server.conf parameters like caCertFile or sslKeysPassword, but it seems the CA is simply never passed as an argument. Why has no one stumbled upon this?

How did I find all of this? I have developed an app which gives an overview of a Splunk environment's mitigation status against current Splunk Vulnerability Disclosures (SVDs), as well as recommended best-practice encryption settings.

If anyone has a working KV store TLS config, I'm eager to see it.

Skalli
@oneemailall wrote: I am still trying to figure out why your solution works.

Hi @oneemailall .. please note that in my reply I said you would need to fine-tune this further, and it's nice to know the other reply works exactly as you expect. Since you are trying to figure out why that solution works, let me try to explain:

| eval type = split(Badge, "_")
``` splitting the "Badge" field on the underscore gives you the "type" of the badge ```
| eval level = mvfind(mvappend("Novice", "Capable", "Expert"), mvindex(type, -1)) + 1
``` mvappend and mvindex are multivalue eval functions; understanding them takes a while. Please check the docs: https://docs.splunk.com/Documentation/SCS/current/SearchReference/MultivalueEvalFunctions ```
| fillnull level
| eval type = mvindex(type, -2)
| eval expire_ts = strptime(ExpireDate, "%m/%d/%y")
``` to sort on ExpireDate, you first need to convert it to epoch time format ```
| sort - level, expire_ts, + "Last name" "First name"
| dedup Domain, "First name", "Last name", Email, type
``` with the sorting and dedup done, you can table the output with the command below ```
| table Domain, "First name", "Last name", Email, Badge, ExpireDate
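To make the level computation concrete, here is a minimal standalone sketch (the Badge value "Sell_Expert" is just an illustrative assumption):

| makeresults
| eval Badge = "Sell_Expert"
| eval type = split(Badge, "_")
``` mvindex(type, -1) picks the last element, "Expert" ```
| eval level = mvfind(mvappend("Novice", "Capable", "Expert"), mvindex(type, -1)) + 1
``` mvfind returns the 0-based position of the first matching value (2 here), so level becomes 3 ```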
Pro tip: Post data and output in text. It is much easier for volunteers.

So, the fields do NOT have the value "True" as your mock code implied; they have the value "true". If you haven't grasped this yet, Splunk stores most data as strings and numeric values. Have you tried this?

| eval ycw = strftime(_time, "%Y_%U")
| stats count(eval('FieldA'="true")) as FieldA_True, count(eval('FieldB'="true")) as FieldB_True, count(eval('FieldC'="true")) as FieldC_True by ycw
| table ycw, FieldA_True, FieldB_True, FieldC_True
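If you also want the TRUE counts per calendar week as a percentage, a minimal sketch along the same lines (same assumed field names) could add a total per week and divide:

| eval ycw = strftime(_time, "%Y_%U")
| stats count as Total, count(eval('FieldA'="true")) as FieldA_True by ycw
``` share of TRUE events in each calendar week, rounded to one decimal place ```
| eval FieldA_True_pct = round(100 * FieldA_True / Total, 1)
| table ycw, FieldA_True, FieldA_True_pct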
Hi yuanliu, Thank you for answering my query.  I am still trying to figure out why your solution works.  I was able to modify it as you suggested to get the report I needed. I hope I correctly gave you credit for the solution.  I hope you not only get karma points in this community but also good karma points in life.  Cheers. 
Hi inventsekar, I sincerely appreciate your spending time to solve my issue. I tested your suggestion, but I still ended up with duplicate entries for people with the same type of badge (Sell or Deploy). For example, Brandy Duggan should have only one entry, the highest level of badge type "Sell", which is: brandy duggan, Sell_Expert, 9/5/24. The results should look similar to this:

Domain, First name, Last name, Email, Badge, ExpireDate
mno.com, lisa edwards, lisa.edwards@mno.com, Sell_Expert, 12/6/23 (only show highest level badge of type "Sell")
mno.com, lisa edwards, lisa.edwards@mno.com, Deploy_Capable, 8/1/24 (only show highest level badge of type "Deploy")
abc.com, allen anderson, allen.anderson@abc.com, Sell_Novice, 10/3/24 (allen anderson renewed his badge and the expiry date is updated to reflect that)
def.com, andy braden, andy.braden@def.com, Deploy_Capable, 1/3/24
ghi.com, bill connors, bill.connors@ghi.com, Sell_Novice, 10/17/23
jkl.com, brandy duggan, brandy.duggan@jkl.com, Sell_Expert, 9/5/24

Thank you again for helping me. Cheers.
Sorry, I have now put in the correct result (please ignore the previous result).
Thanks for the hint. Attached is the result I get, but I want the total count of all TRUE cases per calendar week, both as a number and as a percentage (I don't want FALSE in the result). The attached .xls shows what I am looking for, with example numbers.
Hi all, I have a combined lookup with a field containing various values like aaa, acc, aan, and more. I'm looking to find a single value for 'aan' from the 'source' field, specifically when 'source' has ss Ann or css. Could you please help me construct the correct Splunk query for this?
Hi @vijreddy30 ... From your question, what I understood is that in Zone 1 you have an indexer and a search head in a single Windows system, and Zone 2 is the same.

>>> As per requirement need to be implement High availability servers Zone1 and Zone2.

As per my understanding, by "high availability" you mean the UF agents should be able to send logs to both or either indexer, so that you will not miss any logs at all. (This is not high availability; this is actually load balancing.) Please correct me if this understanding is wrong. If you could provide us with some more details about the requirements, we could help you better. Thanks.
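For reference, load balancing from a universal forwarder is configured in outputs.conf. A minimal sketch, with placeholder hostnames and the usual receiving port 9997 assumed:

[tcpout]
defaultGroup = zone_indexers

[tcpout:zone_indexers]
# the UF automatically load-balances across every server in this list
server = indexer-zone1.example.com:9997, indexer-zone2.example.com:9997
# switch targets every 30 seconds (the default)
autoLBFrequency = 30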
Hi, I am not sure how the two where commands are working in your SPL, but the second argument to mvindex must be a number.

mvindex(<mv>, <start>, <end>)

This function returns a subset of the multivalue field using the start and end index values. Usage: the <mv> argument must be a multivalue field, the <start> and <end> indexes must be numbers, the <mv> and <start> arguments are required, and the <end> argument is optional.

https://docs.splunk.com/Documentation/SCS/current/SearchReference/MultivalueEvalFunctions
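As a quick illustration of the numeric index, a standalone sketch you can run with makeresults:

| makeresults
| eval mv = split("alpha,beta,gamma", ",")
``` mvindex is 0-based, so this returns "beta" ```
| eval second = mvindex(mv, 1)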
I suppose it depends on what you mean by "high availability".  In my book, Splunk doesn't do HA, but I come from a fault-tolerant computing background. The closest you'll get requires search head and indexer clusters, which is a bit more of an investment (both in servers and in management) than single instance Splunk servers.  Note that Splunk does not support HA for forwarders, Deployment Servers, or SHC Deployers.  See https://docs.splunk.com/Documentation/Splunk/9.1.1/Deploy/Useclusters and https://docs.splunk.com/Documentation/Splunk/9.1.1/Deploy/Indexercluster for more information.  
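For orientation, the indexer-cluster side of this is driven by the [clustering] stanza in server.conf. A minimal sketch for a peer node, with the manager hostname, port, and pass4SymmKey as placeholder assumptions:

[replication_port://9887]

[clustering]
mode = peer
# placeholder URI for the cluster manager node
manager_uri = https://cluster-manager.example.com:8089
# placeholder shared secret; must match the manager's pass4SymmKey
pass4SymmKey = changeme_cluster_key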
The second argument to mvindex must be an integer. I think perhaps you want something like this:

| where (mvindex(description, mvfind(description,"User login to Okta")) == 0)

or, even better

| where (isnotnull(mvfind(description, "User login to Okta")))
I am trying to create an alert that triggers if a user successfully logs in without first having been successfully authenticated via MFA. The query is below:

index="okta" sourcetype="OktaIM2:log" outcome.result=SUCCESS description="User login to Okta" OR description="Authentication of user via MFA"
| transaction maxspan=1h actor.alternateId, src_ip
| where (mvcount(description) == 1)
| where (mvindex(description, "User login to Okta") == 0)

I keep getting the error

Error in 'where' command: The arguments to the 'mvindex' function are invalid.

Please help me correct my search and explain what I am doing wrong.
Hi All,

Currently, in Development Zone 1 we have an HF and a single-instance (Search Head + Indexer), and in QA an HF and Deployment Servers; Zone 2 has the same servers. We don't have a Cluster Master, and everything is implemented on Windows systems.

As per the requirement, we need to implement high availability for the servers in Zone 1 and Zone 2.

Please send me the implementation steps for high availability servers.

Regards, Vijay
Not completely impossible.  But before discussing workarounds, I have the same question as @PickleRick does: Why?  Are they the same events (with the same timestamp, etc.)?  Does the CSV even represent time series events?  If they are the same events but with updates, why not delete previously loaded events before upload?  I use CSV upload regularly.  Each contains different events.  Even so, I name files differently in part for peace of mind.
As @PickleRick said, Splunk does not mimic a modern spreadsheet's visualization. The forte of Splunk is to turn unstructured data into relational tables. Every grid in Splunk is fully rendered, text alignment cannot be controlled, and cell coloring is generally unsupported. Within these constraints, you can design your own visual vocabulary to render the cells with various elements. For example, your spreadsheet visualization can be simulated with a rendered table (screenshot attached in the original post). Note that your illustrated Standby count of 250 is the sum of url and cleared_log, not the difference as you formulated. I suspect that this is intended, so I added an additional visual element under breakdowns to highlight url - cleared_log. The above is rendered with the following search:

| tstats count as App_logs where index=app-logs TERM(Application) TERM(logs) TERM(received)
| appendcols [| tstats count as Exception_logs where index=app-logs TERM(Exception) TERM(logs) TERM(received)]
| appendcols [| tstats count as Canceled_logs where index=app-logs TERM(unpassed) TERM( logs) TERM(received)]
| appendcols [| tstats count as 401_mess_logs where index=app-logs TERM(401) TERM( error) TERM(message)]
| eval mess_type = "Error count", count = App_logs + Exception_logs + Canceled_logs + '401_mess_logs'
| eval breakdowns = mvappend("App_logs: " . App_logs, "Exception_logs: " . Exception_logs, "Canceled_logs: " . Canceled_logs, "401_mess_logs: " . '401_mess_logs')
| fields - *_logs
| append [| tstats count as url where index=app-logs TERM(url) TERM( info) TERM(staged)
    | appendcols [| tstats count as cleared_log where index=app-logs TERM(Filtered) TERM(logs) TERM(arranged)]
    | eval mess_type = "Standby count", count = url + cleared_log
    | eval breakdowns = mvappend("url: " . url, "cleared_log: " . cleared_log, ":standby: " . (url - cleared_log))
    | fields - url cleared_log]
| addcoltotals labelfield=mess_type label="Total mess"
| table mess_type count breakdowns

Note: I did not change your tstats searches. If the TERM combinations give you the correct counts, great. If not, you may need to use index searches; in that scenario, append and appendcols are so inefficient that you will need other methods to get the individual counts. The visual tweaks remain the same either way. Hope this helps.