All Posts

I found "VersionControl For Splunk" on Github would this add-on work for gitlab as well?
Looks like the spaces and quotes are being interpreted by the shell. Try escaping them like below:

curl -k -u user:password https://10.236.142.0:8089/services/search/jobs/export -d search='search index=list-service source=\"eventhub://sams-jupiter-prod-scus-logs-premium-1.servicebus.windows.net/list-service;\" \"kubernetes.namespace_name\"=\"list-service\" | stats dc(kubernetes.pod_name) as pod_count'

I had a very long query that needed to be passed via the REST API and ran into similar issues; URL-encoding the query was very helpful. I used this website for that: https://meyerweb.com/eric/tools/dencoder/
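If hand-escaping gets unwieldy, curl can also URL-encode the payload for you. A minimal sketch along the same lines (host, credentials, and the search itself are just the placeholders from this thread):

# Let curl percent-encode the search string before sending it
curl -k -u user:password https://10.236.142.0:8089/services/search/jobs/export \
  --data-urlencode 'search=search index=list-service source="eventhub://sams-jupiter-prod-scus-logs-premium-1.servicebus.windows.net/list-service;" "kubernetes.namespace_name"="list-service" | stats dc(kubernetes.pod_name) as pod_count'

The single quotes keep zsh from touching the string, and --data-urlencode encodes everything after the first = for you.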
Try adding the following at the end of your search.

If your current search result contains _time and one field, say "count":

| eventstats sum(count) as TotalCount

If your search result contains _time and several dynamic column names:

| addtotals fieldname="Totalcount"
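A self-contained mock-data sketch of the first variant, runnable as-is (the field names are illustrative):

| makeresults count=3
| streamstats count as n
| eval _time = now() - n*3600, count = n*10
| table _time count
| eventstats sum(count) as TotalCount

Every row keeps its own count and gains the same TotalCount value; when the columns are dynamic, swap the last line for the addtotals variant above to get a per-row total instead.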
Hello, I have the same problem in the Topology Overview in a distributed environment. The queries stay loading and never return results. I manage many environments and the same thing happens on both Windows and Linux. I think version 9.1.1 has this problem.
Hi @ashok968, Can you please share your current SPL, its output, and a visual representation of the output you need to fulfill the requirement? Please mask any sensitive content. Thank you
Hi @Sunil.Agarwal, Any idea how to resolve this? I am still getting the same error. I even tried accessing it internally from the Linux server running the controller and got the same error.
Hello All, I have data in the form of a table with two fields: index and sourcetype. Each row has a unique pair of values for the two fields. I need your guidance to compute and publish a forecast of the number of events for the next day, based on historical data fetched for each row's index and sourcetype. Any inputs and guidance will be very helpful. Thank you, Taruchit
I am running the below query:

curl -k -u user:password https://10.236.142.0:8089/services/search/jobs/export -d search="search index=list-service source="eventhub://sams-jupiter-prod-scus-logs-premium-1.servicebus.windows.net/list-service;" "kubernetes.namespace_name"="list-service" | stats dc(kubernetes.pod_name) as pod_count"

<?xml version='1.0' encoding='UTF-8'?>
<results preview='0'>
<meta>
<fieldOrder />
</meta>
</results>
zsh: command not found: kubernetes.namespace_name=list-service | stats dc(kubernetes.pod_name) as pod_count
Thanks. My response object is extracted to responseJson. How do I iterate over any possible field name in responseJson? What am I doing wrong below?

| eval responseJson='loggingObject.responseJson'
| foreach * [eval trueflag = mvappend(trueflag, if(<<FIELD>> == "true", "<<FIELD>>", null()))]
| stats count by trueflag
That STRUCTURED_PART could be entered as a comment.  I think it is more practical than a general solution.
Pro tip: Instead of asking volunteers to reverse engineer complex code and screenshots without explanation, it is much more productive to post sample/mock data in text (anonymized as needed), illustrate the desired results in text, and explain the logic between the data and the desired results. Post (pseudo) code only if it helps the explanation; if the posted code does not give the desired results, also illustrate the actual results or error message in text.
Found it: https://ideas.splunk.com/ideas/EID-I-208

And I don't think it even needs to cover so many use cases. So far we can't do the single most popular one - a structured data blob forwarded within a syslog message. I wouldn't expect such a feature to be too automatic. It could simply work like this: "if the sourcetype has KV_MODE set, the STRUCTURED_PART (by default (.*)) defines which part of the raw event is subjected to KV extraction". And that's it. You could write your own regex to capture everything after the syslog header or whatever you need. Simple yet flexible, like LINE_BREAKER or TOKENIZER.
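Purely as an illustration of that proposal, a hypothetical props.conf stanza (STRUCTURED_PART does not exist today, and the sourcetype name and regex are made up):

[my:syslog:json]
KV_MODE = json
# Hypothetical setting from the idea: capture group 1 would be the part of
# _raw handed to KV extraction - here, the JSON blob after the syslog header.
STRUCTURED_PART = ^[^{]*({.*})$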
Something like

| foreach flag* [eval trueflag = mvappend(trueflag, if(<<FIELD>> == "true", "<<FIELD>>", null()))]
| stats count by trueflag

The wildcard expression will depend on actual field names. (Worst case, you iterate over non-flag fields; alternatively, you enumerate all possible flags.) See foreach.
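Here is a runnable mock-data sketch of the same idea, using the JSON shape from the question (it assumes the flags are extracted as fields named flag1, flag2, ...):

| makeresults
| eval _raw="{\"meta\":{\"code\":400},\"flag1\":false,\"flag2\":false,\"flag3\":true}"
| spath
| foreach flag* [eval trueflag = mvappend(trueflag, if(<<FIELD>> == "true", "<<FIELD>>", null()))]
| stats count by trueflag

With real data you would drop the makeresults/eval lines and run spath (or your existing extraction) against the actual response field.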
Want to compare Dynatrace results (Total calls & Avg/90% response times) for the current week vs. last week, and need to display the differences.

Sample query:

index="dynatrace" sourcetype="dynatrace:usersession"
| spath output=pp_user_action_user path=userId
| spath output=user_actions path="userActions{}"
| stats count by user_actions
| spath output=pp_user_action_application input=user_actions path=application
| where pp_user_action_application="xxxxxxx"
| spath output=pp_key_user_action input=user_actions path=keyUserAction
| where pp_key_user_action="true"
| spath output=pp_user_action_name input=user_actions path=name
| spath output=pp_user_action_response input=user_actions path=visuallyCompleteTime
| eval pp_user_action_name=substr(pp_user_action_name,0,150)
| eventstats avg(pp_user_action_response) AS "Avg_User_Action_Response" by pp_user_action_name
| stats count(pp_user_action_response) AS "Total_Calls", perc90(pp_user_action_response) AS "Perc90_User_Action_Response" by pp_user_action_name Avg_User_Action_Response
| eval Perc90_User_Action_Response=round(Perc90_User_Action_Response,0)/1000
| eval Avg_User_Action_Response=round(Avg_User_Action_Response,0)/1000
| table pp_user_action_name,Total_Calls,Avg_User_Action_Response,Perc90_User_Action_Response
| sort -Perc90_User_Action_Response
That sounds about right, although I would expect each index to be on a separate line with the corresponding index name.  Do you have a screenshot?
Extractions work only if the whole event is a well-formed structure. You can't use them if you have - for example - a JSON event with a syslog header. There is an open idea about it somewhere on ideas.splunk.com; I remember seeing you post a link in Slack but can't find it in a hurry. Maybe more people could vote for it. To be honest, though, seeing all kinds of "smart" schemes developers come up with to mix JSON with plain text on this board, it can be tricky to implement a generally applicable solution. One workaround for situations like this is to create a data-specific regex field extraction that pulls out the compliant part for each sourcetype, so the regex doesn't have to be included in search commands. This makes code maintenance easier.
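A rough sketch of that workaround for the syslog-header-plus-JSON case (the sourcetype name and regex are illustrative only):

# props.conf on the search head - pull the JSON blob out of the raw event
[my:syslog:json]
EXTRACT-json_part = ^[^{]*(?<json_part>{.*})$

Searches against that sourcetype then only need

... | spath input=json_part

instead of repeating the regex in every search.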
Correct @fredclown, it will run the base search even if we change the time frame. Apart from using the Show Source option, is there any other way or command to get such details?
Hi! I am wondering whether there are any advantages to using a token over a username and passphrase/password when accessing the REST API as a dedicated API user (whose access to the credential is the same as access to the token). In our practice for automated API calls, we like to provide enough resources for the API call but nothing more, so we ended up creating a dedicated functional user in our Splunk Cloud instance for each function that a program, or a group of closely related programs, needs to perform through API calls. Consequently, in almost all cases only a single token is needed per functional user. Everyone who needs to maintain the programs for that function has access to both the credential of that functional user and that single token, as those team members need to log in as the user to test their queries. So there is no isolation of access between the token and the user credential. In this case, is there any advantage to creating and using a token over just using the username and passphrase/password? Thank you!
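For reference, the two call styles being weighed here look roughly like this (host, user, and token are placeholders):

# Basic auth with the functional user's credentials
curl -k -u svc_reporting:password https://example.splunkcloud.com:8089/services/search/jobs -d search="search index=_internal | head 1"

# The same call authenticated with a token in the header instead
curl -k -H "Authorization: Bearer <token>" https://example.splunkcloud.com:8089/services/search/jobs -d search="search index=_internal | head 1"

Both hit the same endpoint with whatever permissions the functional user has.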
I have a response that looks like this:

{"meta":{"code":400},"flag1":false,"flag2":false,"flag3":true}

There are more than 3 flags, but this is just an example. Assuming that only one flag is true in each response, I want a count of how many times each flag is true, sorted in descending order.
I strongly encourage you to take the free Using Splunk ES (Using Splunk Enterprise Security) and the (not free) Administering Splunk ES (Administering Splunk Enterprise Security) courses. ES uses correlation searches to create notable events. A CS is like a saved search, but with a few added attributes. You can create a CS in ES by going to Configuration->Content Management and clicking the New Correlation Search button.
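To give a rough idea of those added attributes, a correlation search ends up in savedsearches.conf looking something like this (the stanza name, search, and values below are invented for illustration; ES writes all of this for you when you use the Content Management UI):

[Hypothetical - Excessive Failed Logins]
search = | tstats count from datamodel=Authentication where Authentication.action="failure" by Authentication.src | rename Authentication.src as src | where count > 20
cron_schedule = */15 * * * *
enableSched = 1
action.correlationsearch.enabled = 1
action.correlationsearch.label = Hypothetical - Excessive Failed Logins
action.notable = 1
action.notable.param.rule_title = Excessive failed logins from $src$
action.notable.param.severity = medium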
I strongly encourage you to take the free Using Splunk ES (Using Splunk Enterprise Security ) and the (not free) Administering Splunk ES (Administering Splunk Enterprise Security ) courses. ES uses correlation searches to create notable events.  A CS is like a saved search, but will a few added attributes.  You can create a CS in ES by going to Configuration->Content Management and clicking on the New Correlation Search button.