All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Question 1: In my org we have Splunk ES 7.2.x on 4 VMs (Windows OS): 1 search head, 1 deployment server, and 2 indexers. On the search head we installed and configured the Splunk Add-on for Amazon Web Services, and we are getting logs into Splunk. Those logs are being saved in the main index on the search head under defaultdb/db, and I didn't set a bucket retention policy. Can you please help me with the exact indexes.conf settings to set a retention policy that deletes logs older than 1 year?

Question 2: I integrated some server logs (Hadoop, MuleSoft, ForgeRock) into Splunk, and these are indexed in the main index. When I looked for the indexes.conf file I was shocked: there is no indexes.conf file anywhere. After checking around I found _cluster/indexes.conf, which contains [main] with repFactor = 0. From this I guess that this is a clustered indexer, which is why it has repFactor = 0. Can you please help me with the exact indexes.conf settings to set a retention policy that deletes logs older than 1 year on a clustered indexer?
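A hedged sketch of what such a retention setting could look like, assuming frozen data is simply deleted (i.e. no coldToFrozenDir or coldToFrozenScript is configured); 31536000 seconds is 365 days:

```
# Standalone indexer: indexes.conf in $SPLUNK_HOME/etc/system/local or an app's local dir
[main]
frozenTimePeriodInSecs = 31536000

# Clustered indexers: the same setting would instead go on the cluster master,
# e.g. in $SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf,
# followed by: splunk apply cluster-bundle
```

Note that a bucket is only frozen once its newest event is older than the threshold, so deletion happens per bucket, not per event.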
Data model datasets have a hierarchical relationship with each other, meaning they have parent-child relationships. Data models can contain multiple dataset hierarchies. There are three types of dataset hierarchies: event, search, and transaction. Dataset types: You can work with three dataset types. Two of these dataset types, lookups and data models, are existing knowledge objects that have been part of the Splunk platform for a long time. Table datasets, or tables, are a new dataset type that you can create and maintain in Splunk Cloud, and in Splunk Enterprise after you download and install the Splunk Datasets Add-on. Which would be the correct answer, please? @admin please don't delete my question.
I'm trying to extract only the value of 'value' with regex. 2020-03-04 12:14:26,363 - measurement:34- sensor=43, value="0.034051", date="None" I've tried this, but it didn't work: | rex field=value "(?<myValue>\d{3})" | search myValue=* Where did I go wrong, and how do I solve this?
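For reference, one likely reason this fails: `field=value` makes rex run against an already-extracted field named `value` rather than the raw event, and `\d{3}` matches any three digits rather than the quoted number. A sketch that extracts from _raw instead (keeping the `myValue` field name from the question):

```
| rex "value=\"(?<myValue>[\d.]+)\""
| search myValue=*
```

`[\d.]+` captures the full decimal value, including the dot.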
The events in JSON are not breaking properly. Every hour or so I'll get a few events that contain 60+ events merged into one.
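A possible cause is line merging on the parsing tier. Assuming the events are JSON objects separated by newlines, a props.conf sketch like the following is the usual starting point (the sourcetype name is a placeholder):

```
# props.conf on the indexers/heavy forwarders; [my_json_sourcetype] is hypothetical
[my_json_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = }([\r\n]+){
TRUNCATE = 200000
```

LINE_BREAKER discards only the captured group, so the closing and opening braces stay with their respective events.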
My Splunk Enterprise version is 7.3.2, and the ES app version I tried installing is 6.1.1. After the ES app installation and a Splunk server restart, I see the following error when I proceed to the setup page: "Installer was unable to start. Error in 'essinstall' command: External search command exited unexpectedly with non-zero error code 1." I understand from one of the Splunk Answers posts (https://answers.splunk.com/answers/521781/error-while-installing-splunk-enterprise-security.html) that this can be due to a version compatibility issue between ES and Splunk Enterprise, but on the app page 7.3 and 8.0 are listed as compatible versions. Please help if anyone has faced this issue. TIA
I installed the Splunk Add-on for Unix and Linux on one of my Linux machines, which runs Hadoop. Following the documentation, I configured the ps.sh input in inputs.conf under /opt/splunkforwarder/Splunk Add-on for Unix and Linux/local/inputs.conf. I was successfully getting ps.sh logs from that machine, and the Kafka services were also showing up in Splunk. Some months later I am still getting ps.sh logs, but the Kafka logs are no longer reporting to Splunk. What is the reason behind this? I also reinstalled the Splunk Add-on for Unix and Linux and checked, but no luck. Can anyone please help me get the Kafka logs reporting to Splunk again? The query below is what I use to see the Kafka logs; it used to return results, but now they stop on a certain day. Query: sourcetype="ps" host="xxxxxxx" "kafka"
Hi, I am using the query below to get stats output of Total, Failure, and Failure percent by a couple of fields for every 15-minute interval over a 2-hour duration.

index=dte_fios sourcetype=dte2_Fios FT=*FT earliest=04/20/2020:11:00:00 latest=04/20/2020:13:00:00 | bin _time span=15m | stats count as Total, count(eval(Error_Code!="0000")) AS Failure by FT,Error_Code,_time | eval Failurepercent=round(Failure/Total*100)

I am getting output as expected in terms of columns, like below:

FT          Error_Code  _time                Total  Failure  Failurepercent
ALCATEL_FT  8950        2020-04-20 12:15:00  10     10       100%
ALCATEL_FT  8950        2020-04-20 12:30:00  10     5        50%
ALCATEL_FT  8950        2020-04-20 12:45:00  10     10       100%

The issue is that if any interval has 0 records (we have no rows for the 11:00, 11:15, and 11:30 intervals), no row is shown for it. I need the output to give a row for every 15-minute interval and show Total and Failure as 0. I tried to use timechart, but I could not get the above output format because this stats usage does not work with timechart. Can someone help with the query?
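One possible sketch, assuming a single FT/Error_Code combination as in the sample output: makecontinuous fills in the missing 15-minute buckets, fillnull/filldown populate the new rows, and the percentage is guarded against division by zero:

```
index=dte_fios sourcetype=dte2_Fios FT=*FT earliest=04/20/2020:11:00:00 latest=04/20/2020:13:00:00
| bin _time span=15m
| stats count as Total, count(eval(Error_Code!="0000")) as Failure by FT, Error_Code, _time
| makecontinuous _time span=15m
| fillnull value=0 Total Failure
| filldown FT Error_Code
| eval Failurepercent=if(Total=0, 0, round(Failure/Total*100))
```

With several FT/Error_Code combinations this sketch would need adapting (e.g. timechart by a combined key), since makecontinuous has no by clause.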
Hi, I have two queries, one from 1st_index and another from 2nd_index. Each one separately gives correct output, but when I combine them I get 0 results.

index="1st_index" | eval name=upper(name) | search name=ABCD |search index="2nd_index" | fillnull value="Other" | mvexpand infrastructure{}.name | rename infra{}.name as "Infrastrucure Name" name as Nom infra{}.type as type | table "Infrastrucure Name" Nom type | mvexpand type | eval Nom=upper(Nom)

I want the name from the first output to be searched in the second subquery, and at the end to show a few columns from the first query and a few from the second.
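For context: a mid-pipeline | search index="2nd_index" only filters the events already retrieved from 1st_index, none of which carry index=2nd_index, hence 0 results. A hedged sketch of the usual pattern, assuming both indexes share a `name` field: feed the names from 1st_index into the 2nd_index search as a subsearch:

```
index="2nd_index"
    [ search index="1st_index" | eval name=upper(name) | search name=ABCD | dedup name | fields name ]
| fillnull value="Other"
| rename infrastructure{}.name as "Infrastrucure Name", infrastructure{}.type as type
| eval Nom=upper(name)
| mvexpand type
| table "Infrastrucure Name" Nom type
```

The field names here are taken from the question (which mixes infrastructure{} and infra{}), so they may need adjusting to the actual data.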
Hi, our ES's pre-packaged data model (DM) Network_Traffic has a 3-month summary range. We've introduced new logs into this DM by adding new indexes and specifying eventtypes and tags. We've confirmed by searching the data model that the new logs (i.e. WinEventLogs) are now included. However, using tstats to see counts of events per day, we noticed that there is no data beyond one month ago. See: As can be seen above, the other sourcetypes yield positive results, except for the newly added sourcetype. We are wondering why this is the case when the DM's summary range is 3 months. Shouldn't we expect to get more than just one month back? What are we missing here?
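One way to narrow this down, offered as a sketch (the sourcetype wildcard is an assumption): compare accelerated results with raw data-model results. If summariesonly=false returns the missing months but summariesonly=true does not, the acceleration summaries simply have not been backfilled for the newly added sourcetype yet.

```
| tstats summariesonly=true count from datamodel=Network_Traffic
    where sourcetype=WinEventLog* by _time span=1d, sourcetype
```

Running the same search with summariesonly=false shows what the data model matches regardless of the summaries.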
Hi everyone, how can I align the field output in the table so that it doesn't take up so much space? As you can see in the screenshot, the URL field output takes a lot of space; I just want to adjust it like the other field outputs, TEST Issue Title and TEST Issue Recommended Fix. Do we have that option here? My XML code is:

<dashboard script="panel_tooltip.js" theme="dark">
  <!--.dashboard-header-exportmenu {-->
  <!--display:none;-->
  <!--}-->
  <label> Clone test</label>
  <row>
    <panel>
      <table>
        <search>
          <query>source="2000.csv" host="BDC4-D-CVYYQG2" sourcetype="csv" |table "Evaluation Date" "UD ID" "Application Name" "URL" "TEST Issue Title" "TEST Issue Recommended Fix" "TEST Area" "TEST Lead" "TEST Director" "LIST" "LIST1" "LIST2" "Conformance TEST" "Success TEST" "Success TEST Title" "TEST Issue Title" "TEST Issue Recommended Fix"</query>
          <earliest>0</earliest>
          <latest></latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</dashboard>
Here is my restmap.conf:

[validation:savedsearch]
# Check for empty feed
action.myApp= case('action.myApp' != "1", null(), 'action.myApp.param.feed' == "action.myApp.param.feed" OR 'action.myApp.param.feed' == "", "Feed cannot be empty.", 1==1, null())
# Check for empty instance
action.myApp= case('action.myApp' != "1", null(), 'action.myApp.param.instance' == "action.myApp.param.instance" OR 'action.myApp.param.instance' == "", "Instance cannot be empty.", 1==1, null())
# Check for SSL
action.myApp= case('action.myApp' != "1", null(), 'action.myApp.param.ssl' == "1" AND 'action.myApp.param.cert' == "action.myApp.param.cert", "Cert path cannot be empty when SSL checked.", 'action.myApp.param.ssl' == "1" AND 'action.myApp.param.cert' == "", "Cert path cannot be empty when SSL checked.", 1==1, null())
# Check for feed regex
action.myApp.param.feed = validate( match('action.myApp.param.feed', "^[A-Z0-9_-]{3,}$"), "Feed is invalid, see regex for detail.")
# Check for instance regex
action.myApp.param.instance = validate( match('action.myApp.param.instance', "^https?:\/\/([a-zA-Z]+|\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}):\d+([\/\w]*)?$"), "Instance is invalid, see regex for detail.")

Everything works as expected if I comment out the "check for SSL" line; however, as soon as I uncomment that line, I can save the alert even if all of the fields have no values. How can I get this to work? action.myApp.param.ssl is a checkbox; action.myApp.param.cert is a string input.
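One thing worth noting, offered as a possible explanation: in a .conf stanza each key must be unique, and when a key repeats, the last assignment wins. With the SSL line uncommented, the third action.myApp = ... silently replaces the feed and instance checks. A sketch that merges all three checks into a single expression (using trailing backslashes for line continuation):

```
[validation:savedsearch]
action.myApp = case('action.myApp' != "1", null(), \
    'action.myApp.param.feed' == "action.myApp.param.feed" OR 'action.myApp.param.feed' == "", "Feed cannot be empty.", \
    'action.myApp.param.instance' == "action.myApp.param.instance" OR 'action.myApp.param.instance' == "", "Instance cannot be empty.", \
    'action.myApp.param.ssl' == "1" AND ('action.myApp.param.cert' == "action.myApp.param.cert" OR 'action.myApp.param.cert' == ""), "Cert path cannot be empty when SSL checked.", \
    1==1, null())
```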
Hi, for a few months I have had random problems when I execute a search that normally works correctly. The problem is that sometimes the search job returns the following exception:

"X errors occurred while the search was executing. Therefore, search results might be incomplete. Dispatch Command: Unknown error for indexer: my_search_head_0X. Search Results might be incomplete! If this occurs frequently, please check on the peer."

This error shows up on every search head I have. As I said, it is random, because if I execute the same query again after the error, it shows me the results. Can anybody help me fix this or understand why it is happening? Thanks.
I have a situation where I get a success message log when there is a response, and no log at all in case of failure. I need to show a failure message if I don't get any response. Can you please help me with this?

Case: success
Name  status   Msgtype
F1    null     request
F1    null     request
F1    Success  response

Case: failure
Name  status   Msgtype
F1    null     request
F1    null     request
F1    failure  response
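A hedged sketch of one common pattern: because `stats count` over zero events still returns a single row with count=0, you can search only for the success response and derive the status from the count (the index name and field values are assumptions based on the sample):

```
index=my_index Name=F1 Msgtype=response status=Success
| stats count as success_count
| eval result=if(success_count > 0, "Success", "Failure")
| table result
```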
Any plans to update the app to include the rotation of the "urlparser.log" created by the app?
Hi team, I need a visualization similar to the diagrams at https://www.websequencediagrams.com/ to show the interaction between multiple components. I can't find any apps on Splunkbase; any help related to this would be appreciated.
Hi, at startup of our indexer, which is part of a cluster, there is an error stating that we have the *nix TA installed twice: once in etc/apps and once in etc/slave-apps. As we use only the cluster master to deploy apps, the first location, etc/apps, is wrong and not used by our setup. I deleted this app, but after a restart it is there again. I did not install Splunk with any special command-line options. So why is Splunk installing an app in the wrong location, and where is this configured?
How can I insert a table into the e-mail notification message? Can I solve that with normal HTML code?
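For an alert or scheduled search, one hedged option is the sendemail command with inline results, which renders the search results as a table in the message body (the recipient address and base search are placeholders):

```
index=my_index ...
| table host status count
| sendemail to="someone@example.com" subject="Daily report" sendresults=true inline=true format=table
```

The same effect is available in the alert's "Send email" action by enabling inline results with the table format.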
Hi team, I would like to suppress a Splunk alert for a specific duration every day, for 2 hours (for instance 9 am to 11 am), on a specific field value. With throttling I can't set a specific time range; will I have to modify the query? Quick help would be appreciated!
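One hedged workaround that does not touch throttling: make the search return no results during the quiet window by comparing the alert's run time against it (here 09:00 to 11:00):

```
... your existing alert search ...
| eval run_hour=tonumber(strftime(now(), "%H"))
| where run_hour < 9 OR run_hour >= 11
| fields - run_hour
```

This silences everything in the window; to suppress only a specific field value, the `where` clause would additionally test that field.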
Hello, I have a really urgent issue:
- We use LDAP authentication on our instance, and it worked fine for quite a long time. Now there were some maintenance changes on the DL/LDAP side, and since yesterday many important users in my Splunk are just gone. They are in the corresponding DLs, and I synchronized the authentication details, but nothing helps. This issue will surely be solved somehow someday, but if I do not restore access to my Splunk for a couple of people immediately, I will lose their trust in the solution. So I manually created a new user, Mickey Mouse, and would like him to access the instance with a username/password.
- How do I configure this properly?
- Are there any additional parameters to change on the instance to make this possible?
- Both LDAP and "manual Mickey" authentication should be possible at the same time, because strangely most of the users are still there, just some are missing, and the rest should be able to use LDAP authentication as usual.
- What link should Mickey use to reach the username/password logon page?
Please see also the attached pictures. Kind regards, Kamil
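To my knowledge, Splunk tries native (local) authentication before LDAP, so a locally created account can log in at the same login page while authType stays LDAP. A sketch under that assumption (the username, role, and placeholders in angle brackets are illustrative):

```
# Create a local account from the CLI on the search head:
splunk add user mickey.mouse -password 'ChangeMe123' -role user -auth admin:<adminpassword>

# authentication.conf can stay as-is:
[authentication]
authType = LDAP
authSettings = <your LDAP strategy>
```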
Hi, I am trying to understand what the entry below in license_usage.log means and how I can find its configuration: 05-06-2020 08:43:02.499 +0300 INFO LicenseUsage - type=Usage s="/data/logs/log_from_DP_Test/int/TEST/log_to_Splunk.csv.20200506084158834" st=dp_log h=DPINT o="" idx="log_from_dp_test" i="85B0888C-36DF-45C7-9365-D754F1D9F343" pool="Integration" b=98919 poolsz=21349007360
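For reference, in type=Usage events the fields are commonly read as: s = source, st = sourcetype, h = host, idx = index, b = bytes indexed, pool = license pool, and poolsz = pool size in bytes. A sketch that aggregates this usage per index and pool:

```
index=_internal source=*license_usage.log* type=Usage
| stats sum(b) as bytes by idx, pool
| eval GB = round(bytes/1024/1024/1024, 3)
| sort - bytes
```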