All Topics
As per the SmartStore docs, tstatsHomePath must remain unset, but I noticed that in /default/indexes.conf on version 8.1.5 the tstatsHomePath attribute is already set, as shown in the screenshot below. I have also noticed /opt/splunk/var/lib/splunk/index_name/datamodel_summary holding numerous large files; I'm not sure whether that is related to the tstatsHomePath attribute being set. Should I add the following in /local/indexes.conf? tstatsHomePath =
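One thing that may help before changing anything: btool will show which file is actually supplying the value and what the effective setting is. A minimal sketch, run on the indexer, where "my_smartstore_index" is a placeholder index name:

# show the effective indexes.conf settings and the file each one comes from
splunk btool indexes list my_smartstore_index --debug | grep -i tstatsHomePath

That at least confirms whether a /default or /local file is winning for this index before you decide whether to add an override.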
I recently migrated non-SmartStore indexes to SmartStore as per the doc - https://docs.splunk.com/Documentation/Splunk/8.2.4/Indexer/MigratetoSmartStore However, on some of the indexers I noticed that the local drive had high disk usage from Splunk. On investigation, /opt/splunk/var/lib/splunk/index_name/datamodel_summary/ for various indexes is holding data model summary data, which caused the high disk usage. Considering summary replication is unnecessary for SmartStore, I don't believe these directories need to exist anymore. In this case, should I empty these /datamodel_summary/ directories?
In Splunk, _time is showing 0 milliseconds with the latest version (1.11.4) of the Splunk logging library. Earlier, when using v1.6.0, we were getting the exact _time with milliseconds, which kept the table view in the correct sequence. Because the timestamp is inaccurate (at the millisecond level), the sorting is not correct and the monitoring results are inaccurate. Please suggest whether something can be done in the Splunk app or the Log4j settings. TIA, Sahithi Parsa
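Not an answer to the library change itself, but a hedged sketch of one knob that sometimes matters: if the events are indexed from their raw text (not sent with a pre-set time) and that text still contains millisecond timestamps, a props.conf override can tell Splunk to parse them. The sourcetype name and format string below are assumptions:

# props.conf -- sketch; "my_app_logs" and the format are placeholders
[my_app_logs]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30

If the appender itself is assigning the event time, this won't apply and the fix would need to be on the logging-library side.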
I would like to get the list of those items in the properties field, like appName, levelId, etc.
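A minimal sketch, assuming the properties field holds JSON and that keys like appName and levelId sit directly under it (index, sourcetype, and key names are guesses from the description):

index=my_index sourcetype=my_sourcetype
| spath input=properties
| table appName, levelId

If the keys are instead already extracted with a prefix (properties.appName, properties.levelId, ...), then something like | table properties.* may be all that's needed.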
Hey, can anyone help me convert Age to days? I'm having trouble parsing and calculating.

Sample data:
Age
2 years 3 months 2 days
3 months 4 days
2 days

I want a column with the values converted to just days. It doesn't have to be exact: a year can count as 365 days and a month as 30.

Age, d_age
2 years 3 months 2 days, 457
3 months 4 days, 94
2 days, 2
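A rough sketch along those lines (the index/sourcetype are placeholders, and the 365/30 approximation is applied exactly as described):

index=my_index sourcetype=my_sourcetype
| rex field=Age "((?<y>\d+)\s+years?)?\s*((?<m>\d+)\s+months?)?\s*((?<d>\d+)\s+days?)?"
| fillnull value=0 y m d
| eval d_age = (y * 365) + (m * 30) + d
| table Age, d_age

The rex makes each of the year/month/day groups optional, so rows like "3 months 4 days" or "2 days" still parse; fillnull turns the missing pieces into 0 before the arithmetic.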
We currently have a C1 architecture (3 clustered indexers / 1 search head, replication factor of 3) and would like to ask if there are any best practices and guidelines on how to do the archiving ourselves. I've checked the docs, and they indicate that archiving from a cluster, where each bucket exists in multiple copies, can get complicated. Please see the excerpt below from https://docs.splunk.com/Documentation/Splunk/8.2.4/Indexer/Automatearchiving :

The problem of archiving multiple copies
Because indexer clusters contain multiple copies of each bucket, if you archive the data using the techniques described earlier in this topic, you archive multiple copies of the data. For example, if you have a cluster with a replication factor of 3, the cluster stores three copies of all its data across its set of peer nodes. If you set up each peer node to archive its own data when it rolls to frozen, you end up with three archived copies of the data. You cannot solve this problem by archiving just the data on a single node, since there's no certainty that a single node contains all the data in the cluster. The solution to this would be to archive just one copy of each bucket on the cluster and discard the rest. However, in practice, it is quite a complex matter to do that. If you want guidance in archiving single copies of clustered data, contact Splunk Professional Services. They can help design a solution customized to the needs of your environment.
I am using "sendresults" command and pass the search results to an email body template; however, the search results didn't show up from the body.  Unfortunately, the Splunk sendresults page doesn't h... See more...
I am using "sendresults" command and pass the search results to an email body template; however, the search results didn't show up from the body.  Unfortunately, the Splunk sendresults page doesn't have an example for passing the result to the email body.  I wonder if it is possible to pass search results to the email body.  Does anyone know?   This is the sample code I used.     | makeresults | eval score=90, email_to="john.doe@xyz.com", name="john" | append [|makeresults | eval score=76, email_to="jane.doe@abc.com",name="jane"] | fields - _time | sendresults showresults=f subject="Your Score" body="Hi $result.name$", your score is $result.score$."      
I am trying to add 2 new fields to a chart, calculated from the existing columns in the following chart. Basically I want to add A3=A2/A1 and B3=B2/B1. Can anyone suggest which command to use?
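If A1, A2, B1, and B2 are column (field) names in the chart output, a minimal sketch is just an eval appended after your existing chart/stats command (the field names are assumed from the description):

... your existing chart/stats command ...
| eval A3 = A2 / A1
| eval B3 = B2 / B1

If A1/A2 and B1/B2 are instead rows of a single column, the usual trick is to transpose first, run the eval, and transpose back.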
Hello, we are trying to uninstall ITSI from our search head cluster, so I removed all the ITSI-related apps and add-ons from the deployer and pushed to the cluster. The bundle pushed successfully, but the search heads did not do a rolling restart and the ITSI-related apps are still there on the search heads. I did a manual rolling restart, but they are still there. Can anyone please let me know if you have run into this issue before? Thanks, Sathwik.
Hello, I have a question about installing and configuring IT Essentials Work 4.11.3 and removing some of the out-of-the-box functions in a Windows environment.

History: We currently have Splunk running on a Windows 2012 server that is being decommissioned, so we are looking at moving it to a new Windows Server 2019 machine. One of the apps we used in the past was the Windows infrastructure add-on, and we noticed that it has been retired, for lack of a better word, and replaced with IT Essentials Work. After installing it, we noticed there are many functions that, out of the box, we will never need, e.g. VMware, *nix, etc.

We wish to focus on the Windows components of IT Essentials and remove the rest. Because of the many components of IT Essentials (e.g. DA_ITSI_EUEM..., DA_ITSI_LB..., SA_ITSI_ATSA..., etc.) it is unclear what the best approach is to remove the unnecessary components. We were unable to find anything online or in Answers on how best to approach this. Has anyone edited IT Essentials to keep just what they need and removed the unnecessary components for efficiency? Any information that can be provided would be beneficial.

Thank you, Dan
I have Splunk Light installed on a Windows Server 2012 R2. I'm unable to start the services (splunkd, splunkweb). I notice that the secondary drive where the indexdata folder is located is 95% full.
Hello Splunkers, I've created a search to show all the Log4j-related events by looking for the relevant strings. We are trying to dig into the events and schedule an alert. Are there any particular messages we should check for in the events for the Log4j vulnerability? Any particular events that carry a high risk factor? Thanks in advance.
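Not an authoritative detection, but a common starting point is to look for JNDI lookup strings in whatever indexes hold your web and application logs. A hedged sketch (the index names and the pattern list are assumptions, and obfuscated variants will evade a plain substring match):

index=web OR index=app ("${jndi:ldap:" OR "${jndi:ldaps:" OR "${jndi:rmi:" OR "${jndi:dns:")
| stats count BY index, sourcetype, host

Hits where the string appears in headers or user-supplied fields of a Java application that logs them would generally be the higher-risk ones to triage first.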
I've been having difficulty with this for a while and am looking for some help. I'm attempting to find users logging in and whether they are using username/password or a smart card.

I search for 4768 and return the user, IP, pre-authentication type, and timestamp from indexA, then eval the timestamp for each of those events to Logon_Timestamp. The search window is the last 30 days. I then search indexB, for each result of the main search, for action="Request" events around the same timeframe, get the timestamp, xuser, and zuser, and eval the timestamp for all of those events to Request_Time. I search indexB again, for each result of the main search, for action="Connect" events around the same timeframe, get the timestamp, xuser, and xhost, and eval the timestamp for all of those events to Connect_Time. After getting all of that information and renaming Account_Name and xuser to Admin_User, I need to create a table joining all of them with the following columns: Logon_Timestamp, Logon_Method, Request_Time, Connection_Time, Admin_User, Requesting_User, ipaddress, Environment, hostname, group.

I can get this partially working using different methods like join, multisearch, and append, but joining all of these events, renaming each of their timestamps, and tying them together by Admin_User is proving quite a challenge. Join gets the information I want, but matching up timestamps between the three events doesn't seem possible and it takes a while. Append returns most of what I need, and I can organize it by making the two indexB queries subsearches, but I can't eval the timestamp fields for them and put them into their own columns. Multisearch looked really promising, but only the results of the first search are returned. I've had a good deal of success using stats and putting all of them into one search with OR, but once again I can't successfully eval the timestamp fields.

Below is one of my 20 or so queries, the one that gives me the closest results, but I can't get the event timestamps matched up to ones that occur around the same time for the user:

index=indexA sourcetype=WinEventLog:Security "EventCode=4768" ("-admin" OR "-service" OR "-user") (Pre_Authentication_Type != -) Client_Address != "10.x.x.x" Client_Address != "10.x.x.x"
| eval Logon_Timestamp = strftime(_time, "%m-%d-%Y %H:%M:%S")
| rename Account_Name as xuser
| join max=0 type=left xuser [search index=indexB host="10.x.x.x" ("device=10.x.x.x" OR "device=10.x.x.x") action="Request" | rename zuser as "Requesting_User" | eval Request_Time = strftime(_time, "%m-%d-%Y %H:%M:%S")]
| join max=0 type=left xuser [search index=indexB action="Connect" | eval Connection_Time = strftime(_time, "%m-%d-%Y %H:%M:%S")]
| rename xuser as "Admin_User"
| eval Admin_User=lower(Admin_User)
| eval ipaddress=replace(Client_Address,"::ffff:","")
| eval Logon_Method=case(Pre_Authentication_Type < 14, "Username/Password", Pre_Authentication_Type > 14, "PIV")
| eval Request_Time = if(isnull(Request_Time), "No Request Found", Request_Time)
| eval Connection_Time = if(isnull(Connection_Time), "N/A", Connection_Time)
| lookup assetlist.csv ipaddress OUTPUTNEW Environment, hostname, group
| where like(hostname, "%-%-%") AND !like(hostname, "ex-host-name-%")
| table Logon_Timestamp, Logon_Method, Request_Time, Connection_Time, Admin_User, Requesting_User, ipaddress, Environment, hostname, group

Does anyone have experience with something like this?
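For what it's worth, a hedged sketch of the single-search/stats shape described in the last paragraph, where each event type gets its own timestamp field via a conditional eval before the stats. The index, host filters, and field names are taken from the query above and will likely need adjusting:

(index=indexA sourcetype=WinEventLog:Security EventCode=4768) OR (index=indexB (action="Request" OR action="Connect"))
| eval Admin_User = lower(coalesce(Account_Name, xuser))
| eval Logon_Timestamp  = if(index="indexA",        strftime(_time, "%m-%d-%Y %H:%M:%S"), null())
| eval Request_Time     = if(action="Request",      strftime(_time, "%m-%d-%Y %H:%M:%S"), null())
| eval Connection_Time  = if(action="Connect",      strftime(_time, "%m-%d-%Y %H:%M:%S"), null())
| stats values(Logon_Timestamp) AS Logon_Timestamp values(Request_Time) AS Request_Time values(Connection_Time) AS Connection_Time values(zuser) AS Requesting_User BY Admin_User

Because the evals run per event before the stats, each timestamp lands in its own column; pairing a logon with the nearest request/connect in time would still need an extra step (for example binning _time or using streamstats), so this is only the skeleton.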
I am trying to assign a value to a parameter in a macro, based on a calculation of a value being passed to the macro, but I do not get the expected result: index=my_index ... earliest=exact($time$-4000) latest=$time$ ... How can I assign the earliest value, which is supposed to be 4,000 seconds less than the value of $time$?
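Time modifiers like earliest= can't do arithmetic inline. One common workaround, shown here as a sketch assuming $time$ arrives as an epoch number, is to compute the values in a subsearch and hand them back with return, since fields named earliest and latest act as time bounds on the outer search:

index=my_index ...
    [| makeresults
     | eval earliest = $time$ - 4000, latest = $time$
     | return earliest latest ]

The subsearch expands to earliest=<epoch> latest=<epoch> in the outer search string, so the macro argument only ever needs to carry the single $time$ value.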
Hello, I have a TA developed through the Splunk Add-on Builder that has two different data collections configured. If I create an input instance, it only runs the first data collector that was added. Both inputs have different fields available to them, so that doesn't really help; plus, they need to run on different intervals. How do we configure the TA to allow for different input configurations based on a selectable input name? Thanks.
I work at a utility and we have an index that contains SCADA events from the electric system. We have data that goes back to 2015, and there is a very large number of total events (1.8 billion or so). I had an engineer trying to trend some voltages over a long time period, and it was discovered that Splunk had removed all of the events before 8/1/2020. I cleaned the index and added enableTsidxReduction=false, then cleaned and reloaded the index, and this time it appears it has removed events prior to Jan 1, 2017. The total size of this index is only around 60 GB; the SQL database we are loading it from is 100 GB total, and these events come from only two tables. We use DB Connect with a rising column for loading, MSSQL to a dedicated SCADA index, with two inputs, one for each table.

I would like size to be the only factor controlling when data leaves the index, and I would also prefer for buckets to only be hot and warm; cold is on a much slower storage system and we have plenty of hot/warm space. What are the conf file settings that achieve this? I have found the spec for indexes.conf and it is very daunting; I have scrolled through it and it is hard for me to understand which settings are the right ones to use. Is there a guide somewhere that outlines the behavior and controls for index data management?

We run a distributed system with two indexers on 8.2.3. Thanks for the help. Lee.
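As a hedged starting point only (the index name, paths, and numbers below are placeholders, and the stanza goes in indexes.conf on the indexers): size-based retention is driven by maxTotalDataSizeMB, while age-based removal is frozenTimePeriodInSecs, which defaults to roughly 6 years, so setting it very large makes size the effective limit. There is no supported way to have no cold tier at all, but pointing coldPath at the same fast storage as homePath and raising maxWarmDBCount keeps buckets on fast storage in practice.

# indexes.conf -- sketch; "scada" and all values are placeholders
[scada]
homePath   = $SPLUNK_DB/scada/db
coldPath   = $SPLUNK_DB/scada/colddb        # same fast storage, so rolling to cold costs nothing
thawedPath = $SPLUNK_DB/scada/thaweddb
maxTotalDataSizeMB = 500000                 # ~500 GB per indexer: the size-based limit
frozenTimePeriodInSecs = 1576800000         # ~50 years: effectively never freeze on age
maxWarmDBCount = 99999                      # keep buckets warm rather than rolling to cold
enableTsidxReduction = false

The default frozenTimePeriodInSecs is the most likely reason older events disappear even when the index is far under its size cap, so it is worth checking what value is currently in effect (splunk btool indexes list scada --debug).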
Hello, it looks like the action field is not returning results for almost all of the indexes. This is only impacting one of the search heads; the action field is working normally on the other search heads (NOT clustered).

For example: index=foo returns all data, but when I add index=foo action=allowed it returns almost nothing.
Hello Splunk Community, I'm fairly new to Splunk and am using it to search and alert on testing failures in my manufacturing environment. I have a search in which I would like to match up two different events and get a search hit ONLY when both failures occurred on the same order number. I have 3 primary fields I'll be using: OrderNum, adviseText, and testName. I want my search result to return the order number when all criteria are met. To me, this logically looks like ((adviseText="Diagnostic Error" AND testName="Test 1") AND (adviseText="Diagnostic Error" AND testName="Test 2")). I've tested this and got no results, and I understand that it's because no single event matches both criteria. Many OrderNums fail one or the other, but I need the search to single out OrderNums that fail both. Can anyone help me with this? Much appreciated.
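A hedged sketch of the usual pattern for this: pull both failure types in one search, collapse by OrderNum with stats, and keep only orders where both test names appear. The index and sourcetype are placeholders:

index=my_index sourcetype=my_tests adviseText="Diagnostic Error" (testName="Test 1" OR testName="Test 2")
| stats dc(testName) AS failed_tests values(testName) AS tests BY OrderNum
| where failed_tests = 2
| table OrderNum, tests

Because the AND requirement is evaluated per OrderNum after the stats rather than per event, an order that fails Test 1 in one event and Test 2 in another still matches.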
Scenario on a SHC, Splunk 8.2.2.1:
- user1 and user2 are two users in the "user" role.
- user1, who is in the "user" role, owns a private extraction (and saved searches). She is leaving the company and wants user2 to now own the knowledge objects.
- admin does a "reassign knowledge objects" of all knowledge objects from user1 to user2 (and yes, they probably got the warning that this might make knowledge objects inaccessible).
- Now no one, including admin, can access this knowledge object from the UI or via curl against .../services/configs/conf-props/extractnamehere/acl.
- Fortunately, the props.conf file in /opt/splunk/etc/users/user1/search/local/props.conf is still there.

Is there any other way the admin could regain access to this knowledge object, other than grabbing the configs off the file system of the search head?
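One avenue that may be worth trying before falling back to the filesystem, offered only as a sketch: hit the object's ACL endpoint under the original owner's namespace as admin and POST a new owner/sharing. The hostname, credentials, app, and stanza name below are placeholders taken from the scenario, and whether this still works once the object has become orphaned is exactly the open question:

# list the object in user1's namespace (admin credentials)
curl -k -u admin:changeme "https://sh:8089/servicesNS/user1/search/configs/conf-props/extractnamehere?output_mode=json"

# reassign owner and sharing on the object's ACL
curl -k -u admin:changeme "https://sh:8089/servicesNS/user1/search/configs/conf-props/extractnamehere/acl" \
     -d owner=user2 -d sharing=app

If the REST layer refuses because the owner no longer exists, the filesystem copy under /opt/splunk/etc/users/user1/ remains the fallback.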
Full disclosure, I'm on the Salesforce team in my org and working with our Splunk team, but we're having an issue getting an object to read into Splunk. I've checked things on my side and they look correct, so I'm trying to find some other avenues to pursue on the Splunk side (and to sanity-check that my side is, indeed, set correctly). Salesforce object: LoginEvent. Accessing this object requires either the Salesforce Shield or Salesforce Event Monitoring add-on subscription and the View Real-Time Event Monitoring Data user permission: https://developer.salesforce.com/docs/atlas.en-us.platform_events.meta/platform_events/sforce_api_objects_loginevent.htm We assign object access to our Splunk integration user via permission sets, and I've confirmed that the permission set has the View Real-Time Event Monitoring Data flag = true. Is there anything else I can/should check on my side? I see no other access requirements for this object. My understanding is that the Splunk team isn't seeing any errors on their end; it's just not "doing" anything (not reading it in).
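From the Splunk side, a couple of hedged things your Splunk team could look at: whether the input for the LoginEvent object is actually enabled and on a reasonable interval, and what the add-on writes to Splunk's own _internal index while polling. The source pattern below is a guess, since log file names vary by add-on version:

index=_internal source=*salesforce* (ERROR OR WARN OR WARNING)
| stats count BY source

"No errors, no data" often just means the input isn't querying the object at all, so even an empty result here (no polling activity logged) is a useful clue.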