All Topics


There are reports that run at 0 and 30 minutes past the hour, and there are also a lot of reports that start at 5 and 35 minutes past the hour. If a report that started at 5 minutes past is delayed and does not finish until 35 minutes past, will it have an effect on the report scheduled for 30 minutes past?
Hi Splunk Community, We have Splunk Enterprise 8.0.7. I would like to know the status of past Splunk searches: load, event count, time range, whether the search timed out, how long the search ran, etc. Thank you.
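A possible starting point (an untested sketch, not an official answer): completed searches are recorded in the _audit index, which you can query if your role has access to it. The field names below (total_run_time, event_count, result_count) are the usual audit-trail fields and may vary by version.

index=_audit sourcetype=audittrail action=search info=completed
| table _time, user, total_run_time, event_count, result_count, search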
| eval err=if(data>80,code,"") I am composing SPL using an if statement. When the data value goes over 80, a code is generated. However, while the value stays at 80 or above, I want the code to be generated only at the first value of 80 or more, and not again until the value falls below 80. What should I do?
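One possible approach (an untested sketch, assuming the events are sorted in time order and use the data and code fields from the question): compare each event with the previous value using streamstats, and only emit the code when the value crosses 80 from below.

| sort 0 _time
| streamstats current=f window=1 last(data) as prev_data
| eval err=if(data>80 AND (isnull(prev_data) OR prev_data<=80), code, "")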
I'm trying to troubleshoot a connection that I made in the DB Connect app; it shows the error as above. I don't know whether the error is on my end (in my input) or in the information I received. Most of the answers I found were about JDBC, but that is automated in DB Connect. So what should I do in this situation? Is it on my end, or should I contact the DB owner or the networking department in my org?
How can I dynamically link 3 dropdown inputs to 3 dashboard tables? Dropdown 1 is for "time", dropdown 2 is for "critical, high, low", and dropdown 3 is for "blocked, allowed, unknown". I have configured each table to match its dropdown input, but I want to link all 3 together. Thanks in advance for your help.
Hi Everyone, If I am searching through the past 4 weeks in one query, how can I break this data into two columns, one for the previous 2 weeks and one for the latest 2 weeks, and then sort by the latest 2 weeks? In general, I'm using stats to display the number of objects affected by errors occurring in a 4-week period, but I would like to see them displayed in two 2-week periods, sorted by the amount in the latest 2 weeks.

| stats dc(objects) as OBJ by errorMessage | sort -OBJ

CURRENT OUTPUT

ERROR MESSAGE    OBJ
message 1        1792
message 2        1210
message 3        957

DESIRED OUTPUT

ERROR MESSAGE    LATEST 2 WEEKS    PREVIOUS 2 WEEKS
message 1        967               825
message 2        872               666
message 3        103               854

Thanks all, Corey
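One possible way to get the desired layout (an untested sketch, assuming the search covers the last 4 weeks and uses the objects and errorMessage fields from the question): tag each event with the 2-week period it falls in, then chart the distinct count by that period.

... base search over the last 4 weeks ...
| eval period=if(_time >= relative_time(now(), "-2w"), "latest_2_weeks", "previous_2_weeks")
| chart dc(objects) over errorMessage by period
| sort - latest_2_weeks
| rename latest_2_weeks as "LATEST 2 WEEKS", previous_2_weeks as "PREVIOUS 2 WEEKS"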
Hi Splunkers, How do I ingest binary files into Splunk? I get the error "ignored due to binary file". Any help would be appreciated. Many thanks, Emy
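A hedged note: Splunk indexes text, so truly binary content will not be searchable in a meaningful way even if it is ingested. If the files are actually text that only looks binary (for example, an unusual encoding), the props.conf setting below may help; the sourcetype name here is just a placeholder.

props.conf:
[my_binary_sourcetype]
NO_BINARY_CHECK = true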
Hello, I have recently started learning about Splunk and have been stuck while attempting to make a search display events that come from both my Linux and Windows machines at once. For example, for Windows, I have created this query that counts and displays, per minute, the times EventID 4625 had a failed password from ssh:

sourcetype="wineventlog:security" OR source="/var/log/auth.log" host="CLIENT1-DESKTOP" OR host="client3-kali" Account_Name=client1 failed password* | bucket span=1m _time | stats count by _time, host, source, Caller_Process_Name, Account_Name, EventCode, Failure_Reason | table _time, host, source, EventCode, count, Caller_Process_Name, Account_Name, Failure_Reason

And I have this query for Linux that does the same:

index=* sourcetype="wineventlog:security" OR source="/var/log/auth.log" host="CLIENT1-DESKTOP" OR host="client3-kali" OR user=* failed password* | bucket span=1m _time | stats count by _time, host, process, source | table _time, host, source, process, count

The issue is that whenever I try to make it display both Linux and Windows events at once by providing the fields together, such as process (Linux-related), EventCode (Windows-related), Account_Name (Windows-related), and user (Linux-related), with this query:

sourcetype="wineventlog:security" OR source="/var/log/auth.log" host="CLIENT1-DESKTOP" OR host="client3-kali" failed password* | bucket span=1m _time | stats count by _time, host, source, EventCode | table _time, host, source, EventCode

then it only displays the Windows logs, and that is just because EventCode was added. If I remove "EventCode", for example, and write it as:

sourcetype="wineventlog:security" OR source="/var/log/auth.log" host="CLIENT1-DESKTOP" OR host="client3-kali" failed password* | bucket span=1m _time | stats count by _time, host, source | table _time, host, source

then both appear on the screen, but without the filters I want. I am confused; can anyone help me, please? Thanks!
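One possible approach (an untested sketch, reusing the field names from the question): stats drops events that are missing any of the "by" fields, so the Linux events without EventCode disappear as soon as EventCode is added. Normalizing the OS-specific fields into shared ones first keeps both sets of events.

sourcetype="wineventlog:security" OR source="/var/log/auth.log" host="CLIENT1-DESKTOP" OR host="client3-kali" failed password*
| eval event_source=coalesce(EventCode, process)
| eval account=coalesce(Account_Name, user)
| fillnull value="N/A" event_source account
| bucket span=1m _time
| stats count by _time, host, source, event_source, account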
I am getting the below error, but my outputs.conf is already configured. Here are the outputs.conf and server.conf. I checked metrics.log and nothing is blocked.
Hi Gentlemen, I'm working for an API security company; we provide vulnerability detection as well as real-time detection and prevention. We are now working on integrating our platform with Splunk, and some questions popped up as part of the process:
- To which versions and products of Splunk should we build the integration? Is it a generic integration for all of them where we only need to switch platform, or is it different for each one?
- How should we send the data to Splunk? We thought about syslog; is there any other recommended way?
- What kind of data is most recommended for us to send to Splunk?
- Can we create rules and actions through the integration with Splunk (e.g. a WAF rule)?
- What is the best practice for building and testing the integration? Should we stand up a Splunk environment, and if so which one, and does Splunk offer any support for this process?
Additionally, I would like to understand whether we need to send data differently if the type of data is different. For example, assuming I'm sending both vulnerabilities and anomalies, should I send both of them to the same place, or is there a different location for each one?

Thanks in advance, Jonathan.
Hi All, I have a join query that works perfectly fine for my use case, but I was trying to see if I can write this using stats or a more performant command. I'm trying to pull a report of transactions with their status. These are from a single source file. A log entry is created when the event starts, and another log entry is created when the event completes. It is also possible for the start event to repeat if it did not complete the first time. Here's my query with a join:

index=en source="/merchant.log" host="merc.com" event="start" | dedup src_key | join type=outer joinkey [search source="/merchant.log" host="merc.com" event="complete" success="true" | table joinkey, resultcode] | table src_key, area, resultcode, _time, txn_amt

The closest I got using stats was with https://community.splunk.com/t5/Splunk-Search/Alternative-method-to-using-Join-command/m-p/532978#M150560.
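One possible stats-based rewrite (an untested sketch, assuming both event types carry the same joinkey, that src_key, area, and txn_amt only appear on start events, and that resultcode only appears on complete events): pull both event types in one search and collapse them per joinkey.

index=en source="/merchant.log" host="merc.com" (event="start" OR (event="complete" success="true"))
| stats values(src_key) as src_key, values(area) as area, values(txn_amt) as txn_amt, values(resultcode) as resultcode, latest(_time) as _time by joinkey
| table src_key, area, resultcode, _time, txn_amt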
Hi there, I have 2 messages that are logged when a job is run, and they share a job_id field: event_name=process.start and event_name=process.end. I'm trying to create an alert that fires if there is an event_name=process.start but no event_name=process.end after 3 hours. I've seen lots of examples of using transactions between 2 events to get the duration, but not any for when an event is missing. Many thanks, and apologies if this is a noob question.
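One possible approach (an untested sketch, using the field names from the question and assuming the alert is scheduled to run over a window longer than 3 hours): group both event types by job_id and keep only jobs whose start is more than 3 hours old with no matching end.

event_name=process.start OR event_name=process.end
| stats earliest(_time) as start_time, count(eval(event_name="process.end")) as end_count by job_id
| where end_count=0 AND start_time < relative_time(now(), "-3h")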
Hi all, I need to write a query that checks, for each row, whether the condition (Daily AH <= Daily Po <= Daily Risk <= Daily File <= Daily Instrum) is met. If the condition is not met, get rid of the row value that did not meet the condition and all the values after it.
Hi All, We are facing an issue where the alert sent to the customer for any health rule violation is in the UTC timezone. Please let us know if there is any way we can change the timezone from UTC to the local timezone, so that the customer receives the alert/health rule violation time in their local timezone.
Hi all, I have a few queries to be modified to use tstats. I am new to Splunk; please let me know whether these queries can be converted to tstats.

Query1: index=abc "NEW" "/resource/page" appname=ui OR appname=uz | stats avg(response_time)
Query2: index=abc sourcetype=abc host=ghjy "transaction" NOT "user" | stats avg(ResponseTime)
Query3: index=abc iru=/resiurce/page appname=ui NOT 1234 NOT 1991 NOT 2022 "Bank status" | stats count
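A hedged note, not a definitive answer: tstats only reads indexed fields (or accelerated data models), so raw search terms like "NEW" and search-time fields such as response_time generally cannot be used with it directly. The first line below shows the basic pattern over indexed fields; for averages over an extracted field you would typically need an accelerated data model (MyModel here is just a placeholder name).

| tstats count where index=abc by sourcetype, host

| tstats avg(MyModel.response_time) as avg_response_time from datamodel=MyModel by MyModel.appname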
Hello, I am trying to monitor an application log and have Splunk generate an alert only when the service_status="disconnected" and service_status="connected" entries are logged and the time between the two is greater than 10 seconds, OR when service_status="disconnected" is the only entry being logged. I've been experimenting with the transaction command but I am not getting the desired results. Thanks in advance for any help with this.

Example log entries:

--- service is okay, do not generate an alert ---
9/2/2022 00:10:36.683  service_status = "disconnected"
9/2/2022 00:10:38.236  service_status="connected"

--- service is down, generate an alert ---
9/2/2022 00:10:40.683  service_status = "disconnected"
9/2/2022 00:10:51.736  service_status="connected"

--- service is down, the service_status="connected" event is missing, generate an alert ---
9/2/2022 01:15:15.603  service_status = "disconnected"
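One possible approach with transaction (an untested sketch, using the field values from the question): keepevicted=true keeps "open" transactions that never saw a connected event, so both alert cases can be caught with a single where clause.

service_status="disconnected" OR service_status="connected"
| transaction startswith=eval(service_status="disconnected") endswith=eval(service_status="connected") keepevicted=true maxevents=2
| where duration > 10 OR closed_txn=0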
I inherited a Splunk mesh of search heads, deployment server, index cluster, etc. I am trying to figure out all this Splunk stuff, but I ran into an issue and I am not sure whether it ignores best practice, reflects poor judgement, or is as intended. We have 8 main indexers that do what indexers do, all clustered as peer nodes. The deployment server is the master node and the search head for the cluster (which I don't understand, since we also have 5 main separate search heads). We also have a disaster recovery (DR) site that has an indexer as a peer node in the aforementioned cluster. The cluster has a replication factor of 3 (the number of copies of raw data) and a search factor of 2 (the number of searchable copies). I'm newer to cyber, so forgive me if I don't understand right away or if I am missing something glaringly obvious. But does it make sense to have the DR indexer be part of the cluster? If it does make sense, then how do I ensure that the other 8 indexers send a copy of all their stuff to the DR indexer? I thought the master node just kind of juggles the incoming streams from the forwarders and balances the data across all the indexers. Also:
- Should the deployment server double as a master node and search head for the index cluster?
- What is the difference between the 5 main separate search heads and the search head in the index cluster?
- (Last one, I swear) Would it make sense to have a search head cluster, or keep the search heads separate, since the 5 are accessed and used by different groups (networking, development, QA/testing, cybersecurity, and UBA, which we don't even have active right now because I cannot get the UBA servers to work or the web UI to launch)?
Hi, I am trying to get SQL performance monitoring logs into our environment for one of our ITSI use cases. The events successfully come into our event index; however, I would like to convert these SQL performance monitoring logs into metrics, as that will work much better with ITSI. I am struggling to convert the logs into metrics and am using the following documentation to help me do so: https://docs.splunk.com/Documentation/Splunk/9.0.1/Data/Extractfieldsfromfileswithstructureddata

Here are my props and transforms conf files for one of the SQL perfmon inputs.

props.conf

[Perfmon:sqlserverhost:physicaldisk]
TRANSFORMS-field_value = field_extraction
TRANSFORMS-sqlphysicaldiskmetrics = eval_sqlphysicaldiskcounter
METRIC-SCHEMA-TRANSFORMS = metric-schema:extract_sqlphysicaldisk

transforms.conf

[eval_sqlphysicaldiskcounter]
INGEST_EVAL = metric_name=counter

[metric-schema:extract_sqlphysicaldisk]
METRIC-SCHEMA-MEASURES = _ALLNUMS_

My SQL index, where I would like these logs to go, does not have the "datatype=metrics" setting, as I thought the events would be converted into metrics regardless. I also tried changing the setting so that datatype=metrics, but that removed all the data entirely and no data was populated into the SQL index. I can still see the event data populating in the SQL index, but it cannot be searched using the metrics commands (mstats, mcatalog, etc.).

Note: there are 8 counter field values which I would like to convert individually into metrics, hence why I set metric_name = counter. I did not break it down into separate settings in transforms.conf because there are spaces in the field values.

Any idea why this is failing and how I can fix it? Any help would be greatly appreciated! Any questions, please ask! Thanks
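A hedged note, not a confirmed fix: log-to-metrics data has to land in an index defined with datatype = metric; events routed to a normal event index will never be searchable with mstats/mcatalog. A minimal indexes.conf sketch for a dedicated metrics index is below (the index name and paths are placeholders); the perfmon input or the routing in props/transforms would then need to point at that index.

indexes.conf:
[sql_perfmon_metrics]
homePath = $SPLUNK_DB/sql_perfmon_metrics/db
coldPath = $SPLUNK_DB/sql_perfmon_metrics/colddb
thawedPath = $SPLUNK_DB/sql_perfmon_metrics/thaweddb
datatype = metric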
I recently learned of this new Splunk app for Red Hat Insights. Just curious whether anyone is currently utilizing it with their Splunk Cloud. We looked into installing the app, but it says the app does not support search head cluster deployments. I realize Splunk defers all support to Red Hat for this application; I am just exploring whether anyone else has come across this issue.
For example, I am getting Splunk logs with 4 fields:

Time    Event
time1   service = "service1" | operation = "sampleOperation1" | responseTime = "10" | requestId = "sampleRequestId1"
time2   service = "service2" | operation = "sampleOperation2" | responseTime = "60" | requestId = "sampleRequestId2"
time3   service = "service2" | operation = "sampleOperation2" | responseTime = "60" | requestId = "uniqueRequestId3"
time4   service = "service4" | operation = "sampleOperation4" | responseTime = "30" | requestId = "sampleRequestId4"

My objective is to find, from all the logs, whether the count is greater than 20 for a combination of (service, operation) with responseTime > 40.

Expected output:
service1  operation1  [sampleRequestId2, uniqueRequestId3]

The query I have for now is:

search here...... | stats count(eval(responseTime>60)) as reponseCount by service, operation | eval title=case(match(service,"service2") AND reponseCount>20, "alert1") | search title=* | table title, service

But here I cannot refer to requestId, which is dropped by stats. Please suggest a solution.
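One possible way to keep the request IDs (an untested sketch, using the field names and the responseTime > 40 threshold from the question): collect them with values() in the same stats call, counting only the slow requests.

search here...... | stats count(eval(responseTime>40)) as responseCount, values(eval(if(responseTime>40, requestId, null()))) as slowRequestIds by service, operation
| where responseCount>20
| eval title="alert1"
| table title, service, operation, slowRequestIds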