All Topics


Hi All, as .conf20 is around half-finished, I thought I'd ask for some helping hands from our friends, so that we conf-newbies can learn from the conf-gurus. Can we have some discussion about:
1. What are your favorite sessions so far?
2. Which sessions are you waiting for (today and tomorrow)?
3. Anything interesting you liked? (the million data points, the Dungeons and Dragons game, etc.)
4. SplunkTrust: good to see 66 new SplunkTrust members. How can one plan to become a SplunkTrust member in 2021? What should one plan for and do? Any suggestions, views, or pointers, please.
Hello, I am trying to create basic roles for my app. The corresponding authorize.conf looks as follows:

# Indexes that belong to the app
[role_s4_DCM_app_indexes]
srchIndexesAllowed = mlbso; mlbso_changelog
srchIndexesDefault = mlbso; mlbso_changelog

# Role for the users to access logs
[role_s4_DCM_app_user_logs]
importRoles = user, role_s4_DCM_app_indexes

# Role for the users to access all DB connections
[role_s4_DCM_app_user_dbcon]
importRoles = user, db_connect_user

# Role for the users to access both logs and DB
[role_s4_DCM_app_user]
importRoles = role_s4_DCM_app_user_dbcon, role_s4_DCM_app_user_logs

# Power user = user + administering of the DB connections
[role_s4_DCM_app_power]
importRoles = role_s4_DCM_app_user, db_connect_admin

# ##################### Start: DB connections to specific databases ##################################
# The idea is then to grant access to specific objects in local.meta based on the roles
# ... copied for FRUN-relevant objects
[role_s4_DCM_app_user_FRUN]
importRoles = role_s4_DCM_app_user_dbcon
# ... copied for Mshadow-relevant objects
[role_s4_DCM_app_user_Mshadow]
importRoles = role_s4_DCM_app_user_dbcon
# ... copied for Pingdom-relevant objects
[role_s4_DCM_app_user_Pingdom]
importRoles = role_s4_DCM_app_user_dbcon
# ##################### End: DB connections to specific databases ####################################

However, when I check in the UI, no inheritance is visible for the new s4 roles, which I would expect based on the above. What I did then was to manually change the inheritance in the UI for one of the roles (marked green: s4_dcm_app_user), restart, and try to figure out which configuration file the change would land in ... and nothing. I used the following Linux command:

splunk@ccd01v013355:/opt/splunkdev> grep -rnw '.' -e 'role_s4_DCM_app_user'

and it returned the same entries from the authorize.conf before and after the UI inheritance setting.
So, how would I properly set the inheritance in the configuration files? I need to do this there, and not one by one in the UI ... Kind Regards, Kamil
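Two things may be worth checking against authorize.conf.spec (these are assumptions about the cause, not something visible in the output above): role names are documented as lowercase-only, and importRoles is documented as a semicolon-separated list of role names without the "role_" stanza prefix. A sketch of the first two roles in that form:

```ini
# authorize.conf -- sketch per authorize.conf.spec (role names lowercase,
# importRoles = semicolon-separated role names WITHOUT the "role_" prefix)
[role_s4_dcm_app_indexes]
srchIndexesAllowed = mlbso;mlbso_changelog
srchIndexesDefault = mlbso;mlbso_changelog

[role_s4_dcm_app_user_logs]
importRoles = user;s4_dcm_app_indexes
```

Running `splunk btool authorize list --debug` afterwards shows how Splunk actually merges the stanzas and which file each setting comes from.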
Hi All. I was thinking about the configuration of an Indexer Cluster Manager (formerly known as the Cluster Master), where best practice is that it should forward its own data to the indexing tier; this makes absolute sense. But why not let it use itself to discover the peers, the same way all other forwarders do? So I get an outputs.conf with this content:

[indexer_discovery:master1]
pass4SymmKey = *****
master_uri = https://127.0.0.1:8089

[tcpout:prod]
indexerDiscovery = master1
useACK = true

[tcpout]
defaultGroup = prod

I tried it on a just-released 8.1.0 instance of Splunk Enterprise, but it seems I hit some sort of race condition where the Cluster Master isn't available when the TCP-out process needs to build the indexer list. Has anybody else seen this, found a workaround, or is this by design?

Kind regards and happy .conf las
I want to add a radio button so that when I select the MFG host, the panel shows all MFG host results, and when I select the MSSQL host, it shows only the MSSQL results and hides the MFG host results. Please help with this setting.
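A minimal Simple XML sketch of this pattern (the token name `host_group`, the index, and the `MFG*`/`MSSQL*` host patterns are placeholders for your own values): a radio input sets a token, and the panel's search filters on that token, so selecting one choice hides the other group's results.

```xml
<form>
  <fieldset>
    <input type="radio" token="host_group">
      <label>Host group</label>
      <choice value="MFG*">MFG hosts</choice>
      <choice value="MSSQL*">MSSQL hosts</choice>
      <default>MFG*</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=main host=$host_group$ | stats count by host</query>
        </search>
      </table>
    </panel>
  </row>
</form>
```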
I have a message field containing the data below:

message=Successfully created customer id XXXX
message=Duplicate create customer id XXXX
message=Error while create customer id XXXX
message=Successfully updated customer id XXXX
message=Error while updating customer id XXXX

Can we display it in the format below?

Message                              Count
Successfully created customer id     1
Duplicate create customer id         1
Error while create customer id       1
Successfully updated customer id     1
Error while updating customer id     1

If a message is not found, its count must be 0.
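One way to sketch this (assuming every message ends in a variable customer id, and assuming a lookup file, here called `expected_messages.csv` with a `Message` column, listing all expected message texts so absent ones can show 0):

```spl
... your base search ...
| rex field=message "(?<Message>.+customer id)"
| stats count as Count by Message
| append [| inputlookup expected_messages.csv | eval Count=0]
| stats max(Count) as Count by Message
```

The final `stats max(Count)` keeps the real count where events exist and falls back to the appended 0 where they don't.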
Hello, I have a saved search that triggers an alert in the form of an email. I want that alert to be sent to different email addresses based on a condition. For example, I have 7 application values in the search result, and each application has an application owner. When the SLA threshold is reached for a given application, the email must be sent only to that application's owner. Looking for inputs. TIA.
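One common pattern for this (a sketch, not from your setup: the lookup name `app_owners.csv` and the field names are placeholders): map each application to its owner's address with a lookup, then let the alert trigger once per result and use the `$result.<fieldname>$` token in the email action.

```spl
... your SLA search ...
| lookup app_owners.csv application OUTPUT owner_email
| where sla_value > sla_threshold
```

In the alert configuration, set the trigger to "For each result" and the email "To" field to $result.owner_email$, so each breaching application emails only its own owner.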
Hello, I have a data model whose acceleration time range I want to change to All Time. Since I'm working with Kubernetes, the change has to be made in the config files directly and not through the UI. I saw in the documentation that an empty string for acceleration.earliest_time means all time, but when I set this field to empty, the configuration in the UI changed to 1 day instead of all time, as it should have. Any ideas? Thanks
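For reference, a minimal datamodels.conf sketch of what the documented All Time setting looks like (the stanza name `my_datamodel` is a placeholder; per datamodels.conf.spec the setting defaults to empty, which means all time, so another file at higher precedence may be supplying the 1-day value you see):

```ini
# datamodels.conf, e.g. in <app>/local/
[my_datamodel]
acceleration = true
# Empty = accelerate over all time; alternatively omit the line entirely
# so the default (empty) applies.
acceleration.earliest_time =
```

`splunk btool datamodels list my_datamodel --debug` shows which file each setting is actually coming from.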
Hi, we have 180+ machines with different services, which send their data via a Splunk forwarder to different indexes. To keep this scenario manageable, we use a Splunk management instance to roll out inputs.conf and outputs.conf to each of these forwarders. This works fine as long as the same index is used for all the services and the data for all services arrives in that "superindex". But it is obligatory to separate the indexes, as the data comes from different services (a requirement). So I thought we could establish an environment variable on each Linux system which holds the service name, and then refer to that service name as the index in inputs.conf. For example, like so:

[monitor:///var/log/service/*/service.log]
sourcetype = sc:$SERVICENAME:service:log
disabled = 0
index = $SERVICENAME

That way, we could still roll out the inputs.conf using the Splunk manager and would only have to set up the $SERVICENAME environment variable once on each machine. But it seems that the environment variable isn't recognized in inputs.conf, as the Splunk indexer shows the message "Search peer splunk-indexer has the following message: Received event for unconfigured/disabled/deleted index=$servicename with source=[...]". So it seems that $servicename was not resolved to the value set on the machine. Is there a different way to make the index in inputs.conf flexible per machine while keeping the rollout system, or can it be done with environment variables and we did it wrong somehow?
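One common workaround, if (as your error suggests) the variable is not expanded in the index setting: keep the shared monitor stanza in the rolled-out base app, and add one tiny per-service app, assigned via a deployment-server serverclass per service, that overrides only the index. A sketch (the app and service names are placeholders):

```ini
# app: svc_websrv_index/local/inputs.conf
# Deployed only to the websrv machines via their serverclass; the shared
# base app still defines sourcetype etc. for the same stanza.
[monitor:///var/log/service/*/service.log]
index = websrv
```

This keeps a single rollout mechanism: the base app stays identical everywhere, and the per-service app is the only thing that differs.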
Hi, I am currently taking the Splunk Sales Rep 1 course in the Splunk Partner Portal. When I get to the end of the Anti-Bribery module, I find myself stuck, as there isn't an 'I agree' button to click on (see below). I have clicked on everything and also attempted to scroll down, but no luck. Please help, as I cannot pass the module without progressing past this page. (I have already completed and passed the exam.) Thanks
Hi there, does anyone have a search that can show me what data was forwarded and ingested on which port? We have multiple ports set up in our environment, and I'd like to monitor which sourcetypes are being ingested, how much, and on which port. Thanks!
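A sketch using Splunk's internal metrics (assuming the data arrives over splunktcp; the tcpin_connections group in metrics.log records the receiving destPort per forwarder connection):

```spl
index=_internal source=*metrics.log* group=tcpin_connections
| stats sum(kb) as total_kb by destPort, sourceIp, hostname
| sort - total_kb
```

Note that tcpin_connections is per connection, so it gives volume by port and forwarder but not by sourcetype; sourcetype volume is in license_usage.log (type=Usage, fields st/h/b), which in turn lacks the port, so the two views may need to be joined on the host.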
Hello, what must I do to report only values of diff_min greater than, e.g., 1?

endTime startTime
| eval ET=strptime(endTime,"%Y-%m-%d %H:%M:%S.%3Q")
| eval ST=strptime(startTime,"%Y-%m-%d %H:%M:%S.%3Q")
| eval diff_min=(ET-ST)/60
| fields diff_min startTime endTime
| sort -diff_min

Sorry, it's my first dashboard. Thank you, Steff
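The usual tool for this is the `where` command, which filters results on an eval expression. Continuing the search above:

```spl
| eval ET=strptime(endTime,"%Y-%m-%d %H:%M:%S.%3Q")
| eval ST=strptime(startTime,"%Y-%m-%d %H:%M:%S.%3Q")
| eval diff_min=(ET-ST)/60
| where diff_min > 1
| fields diff_min startTime endTime
| sort - diff_min
```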
Hello there! Is there a known method (a function, a command, built-in or custom, a search trick) to convert the earliest_time and latest_time notation to epoch for instance? For instance, I need to convert this: earliest_time -15m@m latest_time -5m@m into an epoch timestamp so I can calculate the interval between them. As the notation is flexible, I want to check if there is something available already rather than try to build a dirty search that would try to cover every variation of the notation.   Thanks in advance for any hint!
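eval's `relative_time(X, Y)` function does exactly this: it applies a relative-time specifier Y (the same notation as earliest/latest) to an epoch time X. A sketch:

```spl
| makeresults
| eval earliest_epoch=relative_time(now(), "-15m@m"),
       latest_epoch=relative_time(now(), "-5m@m"),
       interval_sec=latest_epoch-earliest_epoch
```

Since both specifiers snap to the minute, interval_sec here is exactly 600 seconds.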
Hi, I am creating a dashboard with various database metrics, like Time Spent in Database and Executions, Average Number of Active Connections, Query Wait States, etc. I can see "Query Wait States" in the Database Dashboard section, but when I try to create my own dashboard, I cannot find the "Query Wait States" metric. Can someone please suggest how to fetch the "Query Wait States" metric information? Thanks, Biswajit
I have updated a CSV file, and one of the fields is a date. I need to sort the data by date so I can visualise it in a graph, but it won't sort by date. I've read the posts about changing to epoch time and then sorting, or using strftime, etc., but none of them have worked. The answer I found on how to change the field data to show as a date, which worked, is this:

| eval "booking Date"=strptime(timeStr, "%d %m %Y")
| sort "Booking Date"

How do I then sort by date?
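One thing worth noting: SPL field names are case-sensitive, so "booking Date" and "Booking Date" are two different fields, which would make the sort a no-op. A sketch that sorts on the numeric epoch value and then formats it for display (timeStr as in your search above):

```spl
| eval booking_epoch=strptime(timeStr, "%d %m %Y")
| sort booking_epoch
| eval "Booking Date"=strftime(booking_epoch, "%d %m %Y")
```

Sorting on the epoch number avoids the usual pitfall of sorting date strings lexicographically.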
Hello everyone, I am new to Splunk. I am using the Splunk plugin for Jenkins and trying to send data from Jenkins to Splunk. Can someone help me out?
1. What changes should be made in the Splunk for Jenkins configuration? Should there be changes in the Customize Event Processing Command, or elsewhere?
2. How can I verify that the data/logs are getting into Splunk from Jenkins and that there is a continuous flow of data?
Please give me a step-by-step answer. Thank you
message: 'Successfully downloaded the file : FileAData2020-10-20_19_05_05.csv'
message: 'Successfully downloaded the file : FileBData2020-10-20_19_05_05.csv'
message: 'Successfully downloaded the file : FileCData2020-10-20_19_05_05.csv'
message: 'Successfully downloaded the file : FileAData2020-10-20_19_05_05.csv'

How can I get output like:

FileName Count
FileA    2
FileB    1
FileC    1
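A sketch using rex (assuming the names always follow the `File<letter>Data...` pattern shown above; adjust the capture group if the real prefixes differ):

```spl
... your base search ...
| rex field=message "downloaded the file : (?<FileName>File[A-Z])Data"
| stats count as Count by FileName
```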
Time is not displayed on hover. How can this gap be resolved?
Hi all, does anyone know of any way to update an event in Splunk? So far, what my searches turned up was re-indexing the event, then deleting the old one with the delete command, and then re-indexing the whole bucket, which frankly sucks... I've noticed that Splunk Phantom supports event updating. Is it also possible in Splunk Enterprise? Thank you very much, and may the helpers be blessed.
Hi All, here is the question. I have logs from 4 IPs, received on udp 514: for example 1.1.1.1, 2.2.2.2, 3.3.3.3 and 4.4.4.4. 1.1.1.1 and 2.2.2.2 are the same sourcetype; 3.3.3.3 and 4.4.4.4 are the same sourcetype. For now, my approach is:

inputs.conf

[udp://1.1.1.1:514]
index = test
sourcetype = pan:firewall

[udp://2.2.2.2:514]
index = test
sourcetype = pan:firewall

[udp://3.3.3.3:514]
index = test
sourcetype = cp_log

[udp://4.4.4.4:514]
index = test
sourcetype = cp_log

props.conf

[host::1.1.1.1]
TRANSFORMS-throw_dns = throwdns

[host::2.2.2.2]
TRANSFORMS-throw_dns = throwdns

[host::3.3.3.3]
TRANSFORMS-throw_ntp = throwntp

[host::4.4.4.4]
TRANSFORMS-throw_ntp = throwntp

As you can see in inputs.conf and props.conf, 1.1.1.1 and 2.2.2.2 share the same configuration, and 3.3.3.3 and 4.4.4.4 share the same configuration. Is there any method to group them up?
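Since each pair already shares a sourcetype, one way to collapse the props.conf side (assuming the transforms only need to distinguish the two groups, which the sourcetypes already do) is to key the transforms off the sourcetype stanza instead of per-host stanzas:

```ini
# props.conf -- a sourcetype stanza applies to every host with that sourcetype
[pan:firewall]
TRANSFORMS-throw_dns = throwdns

[cp_log]
TRANSFORMS-throw_ntp = throwntp
```

The inputs.conf stanzas still need one entry per listening address, but the transform routing shrinks to two stanzas.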
Hello, I would like to add a tag to our Splunk clients by location. I found how to create eventtypes on the server side, but I am looking for a way to tag all events directly on the client (in server.conf or inputs.conf, maybe?). Is it doable? If yes, what setting should I edit?
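One mechanism that fits is the `_meta` setting in the forwarder's inputs.conf, which attaches an indexed field to every event from that input (note this is an indexed field rather than a Splunk "tag", so you search it as location::paris or expose it via fields.conf on the search head; the value `paris` is a placeholder):

```ini
# inputs.conf on the client/forwarder
# Placed under [default], the field applies to all inputs on this host.
[default]
_meta = location::paris
```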