All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi all, I'm completely new to Splunk and have some problems understanding the data flow and what to configure where. I have a working environment with 2 indexers and 1 heavy forwarder that is also the search head, all running version 7.3.6 on Ubuntu 20.04. Additionally there are several dozen Windows servers and ~50 Linux servers; many of them have splunkforwarder installed and send data to the indexers. This was set up some years ago by people who have since left the company. My task now is to add data from the Linux machines to Splunk. Since I have a working environment and plenty of examples of how it's done on other machines, it didn't sound too complicated. But...

The task: all Linux servers run the same job, which creates a log file in /var/log/.

My solution: on a server that already sends data to Splunk, I ran:

    splunk add monitor /var/log/mylog

The result: the data shows up in Splunk. Easy.

Then I went to a server that does not yet send data to Splunk. My solution: download and install splunkforwarder-7.3.6-47d8552a4d84-linux-2.6-amd64.deb, then:

    splunk add forward-server indexer1:9997
    splunk add forward-server indexer2:9997
    splunk add monitor /var/log/mylog

The data shows up on the search head.

Next task: build a dashboard with the data and add some filter options. My solution: I found a similar dashboard and tried to adapt it to my needs. Not that easy, but I got it done, without the filters at first.

And then the problems start: the log file contains headers and lots of other junk I cannot filter out easily. While searching for how to delete events, I found out that I have multiline events. I learned about LINE_BREAKER and SHOULD_LINEMERGE and indexes and other config settings, and here the confusion starts: where do I have to configure what?

After reading some docs and different solutions here in the forum, I decided to start from zero with one of the Linux servers. I deleted the results from this server from the main index:

    source=/var/log/mylog myserver | delete

I removed the forwarders and the monitor from the Linux server:

    splunk remove forward-server indexer1:9997
    splunk remove forward-server indexer2:9997
    splunk remove monitor /var/log/mylog

I created a new index on the 2 indexers and on the search head via the GUI; let's call it myindex. I didn't change the defaults. I modified the etc/users/admin/myapp/local/props.conf file on the search head, because that was the only place where I could find a reference to the monitor I'd added:

    [mylog-too_small]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)

    [mylog]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)

Then I added the forwarders and the monitor again:

    splunk add forward-server indexer1:9997
    splunk add forward-server indexer2:9997
    splunk add monitor /var/log/mylog

But no data shows up on the search head. What have I missed, and where? In what order are all these props.conf files applied? I have several of them in different folders. Any help or hint is welcome.
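A minimal sketch of where these settings usually have to live, assuming data arrives from universal forwarders: LINE_BREAKER and SHOULD_LINEMERGE are index-time settings, so they are read on the first full Splunk instance that parses the data (the indexers, or a heavy forwarder in front of them), not in a user-level props.conf on the search head. The app name below is an illustration, not something from the original post:

    # On both indexers (or the heavy forwarder), e.g.
    # $SPLUNK_HOME/etc/apps/my_parsing_app/local/props.conf
    [mylog]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)

    # On the Linux forwarder, the sourcetype and index are set in inputs.conf, e.g.
    # $SPLUNK_HOME/etc/apps/my_inputs_app/local/inputs.conf
    [monitor:///var/log/mylog]
    sourcetype = mylog
    index = myindex

The usual gotchas are restarting splunkd on the indexers after the change and making sure the index named in inputs.conf actually exists on the indexers; settings under etc/users/... generally only affect search-time behaviour for that one user.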
Hi, I want to break events at {"id so that Splunk treats everything starting from {"id as a new event. My props.conf and the event are below; please take a look and let me know in case of any concerns.

    INDEXED_EXTRACTIONS = JSON
    KV_MODE = none
    NO_BINARY_CHECK = true
    SHOULD_LINEMERGE = false
    SEGMENTATION = iso8601
    #TIME_FORMAT=%YYYY-%MM-%DDT%H:%M:%SZ
    TIMESTAMP_FIELDS = started_on
    TRUNCATE = 0
    category = Ver. 1
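A minimal break-before-{"id sketch, assuming the JSON objects are separated by newlines or commas and using a placeholder sourcetype stanza name; this is one common pattern, not the only way:

    [my_json_sourcetype]
    SHOULD_LINEMERGE = false
    # the first capture group is consumed as the event boundary;
    # the lookahead keeps {"id with the next event
    LINE_BREAKER = ([\r\n,]+)(?=\{"id)

LINE_BREAKER only takes effect at index time (on the indexer or heavy forwarder), and INDEXED_EXTRACTIONS = JSON moves structured parsing onto the forwarder, so it is worth testing the stanza both with and without that setting.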
Hello, I have set up a Splunk app that has custom REST endpoints which make Splunk API calls, currently via the Python requests module. I am authenticating to the API by grabbing the session key via the custom REST endpoint's handler. This works, but I am wondering whether the SDK or the Splunk Python libraries can handle this a little more gracefully. I noticed when building the dashboards that I can just call service.METHOD in SplunkJS to hit the custom REST endpoints, so I'm wondering if there is a Python class/method that handles the authentication, etc. in the same way. Thanks
Hi All, we have recently migrated from on-prem to Splunk Cloud. How can we make sure all the dashboards are working fine? Is there a query we can use to check for Splunk errors? Thanks, Vijay Sri S
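A minimal starting point, assuming access to the _internal index and the REST endpoints on the cloud stack; it only surfaces generic splunkd errors and lists the migrated dashboards, it does not prove that every panel renders correctly:

    index=_internal sourcetype=splunkd log_level=ERROR
    | stats count by component

    | rest /servicesNS/-/-/data/ui/views
    | table eai:acl.app title label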
I am currently working with DB Connect in order to send logs over to an Oracle DB. I have a scheduled output set up on my heavy forwarder, but it has not been sending logs to Oracle, so I have been trying to debug it with the dbxoutput command. In a search on the HF, the command runs successfully, and from what I can see it should be sending the logs over. However, when I query the Oracle table in the SQL Explorer, I do not see any new rows in the table. Does anyone know where I can look on the CLI to see if there are any error logs associated with this output?
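A sketch of where the output's errors usually end up, assuming DB Connect 3.x file naming; the exact log file names can vary by version:

    # on the heavy forwarder's file system
    ls $SPLUNK_HOME/var/log/splunk/splunk_app_db_connect_*.log

    # or, since those files are indexed into _internal:
    index=_internal source=*splunk_app_db_connect* (ERROR OR WARN)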
Hi Team, this is Saurabh Dagar; I work for an offshore company. We have a Splunk server there and are trying to open the Splunk site, but it does not load, and the splunkd service will not restart. It shows "the splunkd service is not in use and no one is using this service for long time". So my question is: how can we start the service? Thanks
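A minimal sketch of the usual first steps, assuming a default installation path; the actual cause has to come from splunkd.log:

    $SPLUNK_HOME/bin/splunk status
    $SPLUNK_HOME/bin/splunk start
    # if the start fails, the reason is normally near the end of:
    tail -n 100 $SPLUNK_HOME/var/log/splunk/splunkd.log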
Hi all, we are migrating our AD provider to Azure AD. We generated the XML and cert file and uploaded the XML via the front end. It works for the login, but after validating, it lands on a page with this error:

Verification of SAML assertion using the IDP's certificate provided failed. Error: failed to verify signature with cert

In authentication.conf I see the stanza getting updated with new entries; only 2 lines remain from the old config:

    caCertFile = /opt/splunk/etc/auth/cacert.pem
    clientCert = /opt/splunk/etc/auth/server_old.pem

I am not sure if the client cert is being used at all. I suspect something related to IdP certificate chains, based on other answers such as https://community.splunk.com/t5/Deployment-Architecture/Problem-with-SAML-cert-quot-ERROR-UiSAML-Verification-of-SAML/m-p/322375#M12072

I am not sure how to generate the certificate; the cert generated from Azure has only one stanza, not 3 as described. Any leads would be helpful.
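One way to check which certificate Splunk is actually verifying against, as a sketch: assuming the IdP certificate was uploaded to the default $SPLUNK_HOME/etc/auth/idpCerts location (the file name may differ; check idpCertPath in authentication.conf), compare its fingerprint with the signing certificate shown for the Azure AD enterprise application:

    openssl x509 -in /opt/splunk/etc/auth/idpCerts/idpCert.pem -noout -subject -issuer -fingerprint -dates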
How do I run a search using ldapsearch that shows all members of a group, along with each member's UPN? Currently, using ldapgroup (as shown below), we only receive the basic CN for each member, but I want to see the UPN for each user. Any suggestions?

Search:

    | ldapsearch basedn="ou=test,ou=Groups,ou=Common Resources,ou=group,dc=ad,dc=private" search="(&(objectClass=group)(cn=*))"
    | ldapgroup
    | table cn,member_dn,member_name
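A sketch of one possible follow-up lookup, assuming SA-ldapsearch's ldapfilter command is available and supports $field$ substitution from the pipeline; the field names come from the search above:

    | ldapsearch basedn="ou=test,ou=Groups,ou=Common Resources,ou=group,dc=ad,dc=private" search="(&(objectClass=group)(cn=*))"
    | ldapgroup
    | mvexpand member_dn
    | ldapfilter search="(distinguishedName=$member_dn$)" attrs="userPrincipalName"
    | table cn, member_dn, member_name, userPrincipalName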
Hi Team, we have a requirement to create a report based on the accessed time present in the logs. In the logs this time is given with seconds, milliseconds, microseconds and nanoseconds, for example: 1s79ms874µs907ns. In most cases the value starts with milliseconds, and in a few cases it starts with seconds. How can we convert a value such as 1s79ms874µs907ns into a single unit (seconds, milliseconds, microseconds or nanoseconds) so that we can build the report? Or is there another option to handle this at search time? Kindly help with this request.

Sample logs for reference (a conversion sketch follows the samples):

DEBUG 2022-03-10 07:17:26,239 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: abcdefgh-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 145ms227µs975ns ago, IN_USE
DEBUG 2022-03-10 07:07:26,239 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: ijklmnop-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 1s79ms874µs907ns ago, IN_USE
DEBUG 2022-03-10 07:02:26,238 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: qrstuvwx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 7ms215µs946ns ago, IN_USE
DEBUG 2022-03-10 06:57:26,237 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: qrstuvwx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 168ms259µs830ns ago, IN_USE
DEBUG 2022-03-10 06:57:26,237 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: abcdefgh-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 6s993ms781µs523ns ago, IN_USE
DEBUG 2022-03-10 06:47:26,238 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: ijklmnop-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 2ms593µs888ns ago, IN_USE
DEBUG 2022-03-10 06:47:26,238 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: qrstuvwx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 55ms239µs616ns ago, IN_USE
DEBUG 2022-03-10 06:47:26,238 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: abcdefgh-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 957ms778µs205ns ago, IN_USE
DEBUG 2022-03-10 06:47:26,238 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: ijklmnop-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 45ms536µs884ns ago, IN_USE
DEBUG 2022-03-10 06:47:26,238 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: qrstuvwx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 22ms906µs437ns ago, IN_USE
DEBUG 2022-03-10 06:47:26,238 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: abcdefgh-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 46ms556µs466ns ago, IN_USE
DEBUG 2022-03-10 06:42:26,236 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: ijklmnop-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 3s286ms410µs997ns ago, IN_USE
DEBUG 2022-03-10 06:37:26,239 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: qrstuvwx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 842ms323µs432ns ago, IN_USE
DEBUG 2022-03-10 06:27:26,236 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: abcdefgh-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 7ms698µs576ns ago, IN_USE
DEBUG 2022-03-10 06:27:26,236 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: ijklmnop-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 18ms948µs359ns ago, IN_USE
DEBUG 2022-03-10 06:17:26,236 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: qrstuvwx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 257ms32µs814ns ago, IN_USE
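A minimal conversion sketch, assuming the s, ms, µs and ns components are each optional and always appear in that order; it normalises everything to milliseconds in a new field whose name is made up here:

    ... your base search ...
    | rex "accessed (?:(?<sec>\d+)s)?(?:(?<msec>\d+)ms)?(?:(?<usec>\d+)µs)?(?:(?<nsec>\d+)ns)? ago"
    | fillnull value=0 sec msec usec nsec
    | eval accessed_ms = sec*1000 + msec + usec/1000 + nsec/1000000

Once every event carries a single numeric accessed_ms value, the report can use ordinary stats/timechart aggregations over it.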
Dear Splunk community, I have the following query:

    index="myIndex" source="*mySource*" nameOfLog* "ExitCode: 0"
    | stats count by _time

Once a day an event is generated, so either it was generated (count = 1) or it was not (count = 0). I have a line chart for the last 30 days. On 20 February one event was generated, and on 23 February one event was generated; on 21 and 22 February no events were generated. Therefore I expect the line to dip in the chart, like so: ------_------- This is not happening, and I am wondering why. How do I adjust this to show count = 0 in the chart as well? Thanks.
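A sketch of the usual fix: stats count by _time only produces rows for timestamps that actually have events, whereas timechart emits every bucket in the searched time range and fills empty ones with 0 (the daily span below is an assumption):

    index="myIndex" source="*mySource*" nameOfLog* "ExitCode: 0"
    | timechart span=1d count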
Scenario: 123 [abc xyz 11111]. I want to create a getter chain to collect the data in the middle of the array (xyz), where abc and xyz are dynamic. Could anyone help me get the right getter chain to capture this data in analytics? My attempt so far:

    toString().split(123\\\ [).[1].xxx
We are trying to test the new "cascading" option for the knowledge bundle replication policy. We have set the replication policy on the search head to "cascading" in distsearch.conf, and have set a new pass4SymmKey on the indexers for the cascading replication in server.conf, as described in the docs: https://docs.splunk.com/Documentation/Splunk/8.1.6/DistSearch/Cascadingknowledgebundlereplication However, in the Monitoring Console, on the page "Search > Knowledge Bundle Replication > Cascading Replication", the indexers still say that the replication policy is "classic", not "cascading". The search head correctly identifies as "cascading", but the indexers do not. Dashboard: https://<monitoring-console-server>/en-GB/app/splunk_monitoring_console/cascading_replication How do we get the Monitoring Console to correctly identify the knowledge bundle replication policy on the indexers as "cascading"? Is the check only looking for the setting "replicationPolicy = cascading"? If so, the indexers will always "fail" the check, as this setting is not applied on the indexers, as far as I understand.
Hi all, I have 2 queries: from one I get a list of files, and the other query should use these files as its source to get some results. The output of the first query may contain a lot of files, and I want to use all of them together in the second query. Does anyone have an idea how to do this?
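A minimal sketch using a subsearch, with placeholder index names and conditions; the inner search returns a (source="..." OR source="...") clause that filters the outer one, subject to the default subsearch result limits:

    index=second_index
        [ search index=first_index <conditions that produce the file list>
          | dedup source
          | fields source
          | format ]
    | <rest of the second query>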
Hi Splunkers, I'm trying to build a fairly common search: track when a Windows/Active Directory account is changed from disabled to enabled. The starting point, the switch to enabled, is not a problem for me, because I know that EventCode=4722 covers this scenario. The "but" is the following: the customer wants to be able to distinguish legitimate changes from illegitimate ones. There are two typical scenarios, one allowed and one not:

New user: when a new user arrives in the company and their user account is created, Active Directory first generates an account creation event and then generates a user account enabled event. This case should not be alerted, because it is a normal process.

User who left the company: when a user left the company some time ago and their user account changes status to Enabled, it is an abnormal event, so it should be alerted.

So my question is: how can I distinguish the legitimate situation from the illegitimate one?
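A minimal correlation sketch, assuming the Windows TA field names (EventCode, TargetUserName) and a placeholder index; it flags 4722 (account enabled) events that have no matching 4720 (account created) for the same account within the searched window:

    index=wineventlog EventCode IN (4720, 4722)
    | stats values(EventCode) as codes min(_time) as first_seen by TargetUserName
    | search codes=4722 NOT codes=4720

The length of the search window is the policy knob: it defines how far apart "created" and "enabled" may be before a re-enable is treated as suspicious.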
Hello, as you can see, I aggregate events by the binned time value, but when the count for a time bin is 0, nothing is displayed. I would like to display the result even if it is 0, but only for the current hour or the previous hour; I don't want to display 0 for a bin later than the current hour.

    index=toto sourcetype=titi
    | bin span=1h _time
    | eval time = strftime(_time, "%H:%M")
    | stats count as Pb by s time
    | search Pb >= 3
    | stats dc(s) as nbs by time
    | rename time as Heure

I tried like this, but it doesn't work:

    | appendpipe
        [ stats count as _events
          | where _events = 0
          | eval nbs = 0 ]

Could you help please?
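One possible adjustment, as a sketch: the appended row also needs an Heure value, otherwise it arrives without an hour label. This only covers the case where the whole result set is empty, and it assumes the current hour is the one to back-fill:

    ...
    | rename time as Heure
    | appendpipe
        [ stats count as rows
          | where rows = 0
          | eval nbs = 0, Heure = strftime(relative_time(now(), "@h"), "%H:%M")
          | fields Heure nbs ]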
Hey, Is it possible to export every field from a Splunk Search via a Dashboard? Thanks, Patrick
Hi, if everything (for example, the frontend web application, backend microservices, databases...) is running in a Kubernetes cluster, and appdynamics-operator, appdynamics-cluster-agent and serverviz-machine-agent are deployed in the same cluster, I want to know whether it is possible to monitor the applications running inside the Kubernetes cluster without installing an application agent. Here is a log example from the frontend web application; it contains the response status, request time, upstream response time and so on:

198.13.6.0 - - [10/Mar/2022:06:47:31 +0000] "POST /test/api/v2/um/inviteUser?emailAdd=qiqguo@cisco.com HTTP/1.1" 200 40 "http://192.168.0.10/spade/user-dashboard/admin/invite-user" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:97.0) Gecko/20100101 Firefox/97.0" 711 1.948 [test-ns-web-app-9080] [] 198.13.5.62:9080 40 1.948 200 f7cfa3b67e132c961c7a02c1a7445145
198.13.6.0 - - [10/Mar/2022:06:47:31 +0000] "POST test/api/v3/login/validClaim HTTP/1.1" 200 4 "http://10.75.189.81/spade/user-dashboard/admin/invite-user" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:97.0) Gecko/20100101 Firefox/97.0" 937 0.004 [test-ns-web-app-9080] [] 198.13.5.62:9080 4 0.003 200 065863b5bdd28a6f0db5e061cd4944af
I'm getting logs from a dockerized, in-house developed application and ingesting them into Splunk. There are 3 types of logs coming into the log file:

1. Application logs (single line, internal format)
2. UWSGI logs (multiline)
3. ModSecurity serial logging (multiline)

The logs are forwarded to a remote syslog server and then ingested into Splunk with a universal forwarder. Since these logs are in different formats, I want to separate them into different indexes for different processing approaches. Is there any good documentation, forum post or tutorial that describes an effective way to separate different log types from a mixed source? Thank you!
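A minimal routing sketch with props.conf/transforms.conf on the indexers (or a heavy forwarder); the sourcetype name, index names and regexes below are placeholders, not values from the original post:

    # props.conf
    [mixed_app_logs]
    TRANSFORMS-route_by_type = route_uwsgi, route_modsec

    # transforms.conf
    [route_uwsgi]
    REGEX = <pattern that only matches UWSGI lines>
    DEST_KEY = _MetaData:Index
    FORMAT = uwsgi_index

    [route_modsec]
    REGEX = ModSecurity
    DEST_KEY = _MetaData:Index
    FORMAT = modsec_index

The same mechanism can rewrite MetaData:Sourcetype instead of the index if the goal is only to apply different parsing rules rather than different retention.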
Hi, what is the recommended way to index message trace logs? Currently we are using the Microsoft Office 365 Reporting Mail Add-on for Splunk. Also, an older version of the Splunk Add-on for Microsoft Office 365 had ServiceStatus and ServiceMessage inputs, and I don't see them in the latest release.
Hi, I am unable to create a timechart for specific field value aggregations. I have one field with 4 possible values. One timechart needs to show the total across all 4 values, and the second timechart needs to show the total over 2 of the field values. The only thing on the legend should be TOTAL. Can you please help? Thanks, Patrick
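A sketch of both variants, using hypothetical field and value names (my_field, value1, value2); count(eval(...)) restricts the count to matching values while still labelling the single series TOTAL:

    <base search>
    | timechart span=1h count as TOTAL

    <base search>
    | timechart span=1h count(eval(my_field="value1" OR my_field="value2")) as TOTAL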