Hi folks, I have a host that sends various logs to Splunk successfully, except for the syslog-ng logs. Here is an example of the inputs config (there are 3 inputs configured this way that are not being received by Splunk):

[monitor:///store/data/log/cisco_ise]
disabled = false
host = xxxxxxxxxx
index = syslog
sourcetype = cisco:ise

The inputs appear when using the command 'splunk list monitor', so it doesn't seem to be a permissions issue. Other logs are being successfully ingested from this host, and syslog-ng is working as expected, receiving and storing logs on the disk. Does anyone have an idea of steps I can follow to troubleshoot this? Thanks in advance.
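A hedged first troubleshooting step, assuming you have access to the _internal index: check what splunkd's file-tailing processor reports about the monitored path (the component names vary by Splunk version, so several are OR'd together here; this search is only a sketch):

```
index=_internal sourcetype=splunkd (component=TailingProcessor OR component=TailReader OR component=WatchedFile) "cisco_ise"
```

Messages here often reveal whether the path is being skipped, blocked by a blacklist, or hitting a parsing error.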
I have found these two endpoints related to saved searches:

https://<host>:<mPort>/services/saved/searches
This provides the list of all the saved searches on the instance.

https://<host>:<mPort>/services/saved/searches/{name}
This provides the search configuration and the SPL query used to create the particular saved search.

I would like to know if there is any particular API endpoint to get the data returned by a saved search.
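If the goal is the result set rather than the definition, one common pattern is to run the saved search by name with the savedsearch SPL command (the search name below is a placeholder):

```
| savedsearch "My Saved Search Name"
```

Over REST, a comparable sketch is to POST to /services/saved/searches/{name}/dispatch, which returns a search job ID, and then read /services/search/jobs/{sid}/results; check the Splunk REST API reference for the exact parameters.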
We have separate data with respect to "DATE", listed as shown in the table below. We need to create a separate graph for each date with respect to the M1, M2, etc. values. This is possible with trellis, but the trellis option is not available when downloading to PDF; hence we have to segregate by date and create a separate column graph for each date.

NUMBER  DATE        M1       M2       M3       M4       M5       M6
10      31-07-2022  *******  *******  *******  *******  *******  *******
10      31-07-2022  *******  *******  *******  *******  *******  *******
10      31-07-2022  *******  *******  *******  *******  *******  *******
10      31-07-2022  *******  *******  *******  *******  *******  *******
10      31-07-2022  *******  *******  *******  *******  *******  *******
10      24-07-2022  *******  *******  *******  *******  *******  *******
10      24-07-2022  *******  *******  *******  *******  *******  *******
10      24-07-2022  *******  *******  *******  *******  *******  *******
10      24-07-2022  *******  *******  *******  *******  *******  *******
10      24-07-2022  *******  *******  *******  *******  *******  *******
10      17-07-2022  *******  *******  *******  *******  *******  *******
10      17-07-2022  *******  *******  *******  *******  *******  *******
10      17-07-2022  *******  *******  *******  *******  *******  *******
10      17-07-2022  *******  *******  *******  *******  *******  *******
10      17-07-2022  *******  *******  *******  *******  *******  *******
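One way to sketch the per-date approach: give each dashboard panel its own search filtered to a single date, rendered as a column chart. The index name here is a placeholder, and this assumes DATE is an indexed/extracted field:

```
index=your_index DATE="31-07-2022"
| table NUMBER M1 M2 M3 M4 M5 M6
```

Repeating the panel with DATE="24-07-2022", DATE="17-07-2022", etc. produces one chart per date, and ordinary panels do export to PDF.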
Hi Team, I have a JSON file as below:

[{"entityId":null,"entityType":"Account.AccountRequest","accessedByUser":"jinghui@bullish.treasurygo.com","milestone":"Approval","comment":"Bank Account Manager approved this request. Comments: ","commentType":"MilestoneApproval","when":"2022-07-26T06:10:43.91Z","id":30},{"entityId":null,"entityType":"Account.AccountRequest","accessedByUser":"jinghui@bullish.treasurygo.com","milestone":"Approval","comment":"Bank Account Manager approved this request. Comments: ","commentType":"MilestoneApproval","when":"2022-07-26T06:10:43.91Z","id":30},{"entityId":null,"entityType":"Account.AccountRequest","accessedByUser":"jinghui@bullish.treasurygo.com","milestone":"A task was completed","comment":"Prepare SAP Config Docs","commentType":"MilestoneGeneric","when":"2022-07-26T06:10:43.907Z","id":29},{"entityId":null,"entityType":"Account.AccountRequest","accessedByUser":"jinghui@bullish.treasurygo.com","milestone":"A task was completed","comment":"Prepare SAP Config Docs","commentType":"MilestoneGeneric","when":"2022-07-26T06:10:43.907Z","id":29}]

I am using this pattern while testing and reviewing the events:

(\[|,|\]){

This breaks everything fine except the last line, which keeps the closing ]. How do I get rid of the ] at the end of the JSON array? Kindly guide me. Many thanks in anticipation.
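A possible props.conf sketch for this data (the sourcetype name my_json_array is a placeholder, and this is untested against your file): keep the line breaking on the separators, and strip the trailing ] from the final event with a SEDCMD before it is indexed.

```
[my_json_array]
SHOULD_LINEMERGE = false
LINE_BREAKER = (\[|,|\])\{
# The last event ends with "}]"; remove the stray closing bracket.
SEDCMD-strip_trailing_bracket = s/\]\s*$//
```

SEDCMD runs per event at parse time, so only the final event (the one actually ending in ]) is modified.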
I want to track multiple ORA numbers. We receive logs in different formats, as below. Can you help me write a query for this?

Logs/Events:

2022-08-04T06 : 55 : 54.009110 + 01 : 00 opiodr aborting process unknown ospid ( 8696 ) as a result of ORA - 609
2022-08-04T06 : 51 : 54.137474 + 01 : 00 WARNING : inbound connection timed out ( ORA - 3136 )
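A sketch of an extraction that matches both formats shown (the index name is a placeholder; the regex tolerates the spaces around the hyphen seen in these events):

```
index=your_index "ORA"
| rex max_match=10 "ORA\s*-\s*(?<ora_code>\d+)"
| stats count by ora_code
```

max_match=10 lets the rex capture more than one ORA code per event if several appear.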
Hi team, I wonder if someone can help me with the query below. I have to combine my two searches with join. With the first search I get the assignment group, and with the second search I get the email of those assignment groups to send an alert. I have common values between the two sourcetypes, but the field name is different: in the first search the field is called dv_name, and in the second it is called name. Therefore I create a name field before using join. However, my Email field is still coming up blank.

Search:

index=production sourcetype=call
| eval name=dv_name
| join name type=left
    [ index=production sourcetype=mail earliest="04/30/2022:20:00:00" latest=now()
      | dedup name
      | stats values(dv_email) values(name) by name ]
| eval Email=if(isnull(dv_email), " ", dv_email)
| table dv_name Email
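One possible rewrite, assuming name is the common key and dv_email lives in the mail sourcetype: the issue is likely that stats values(dv_email) creates a field named "values(dv_email)", so dv_email itself never survives the join. Aliasing it inside the subsearch fixes that (sketch, untested against your data):

```
index=production sourcetype=call
| eval name=dv_name
| join type=left name
    [ search index=production sourcetype=mail earliest="04/30/2022:20:00:00" latest=now()
      | stats values(dv_email) as dv_email by name ]
| eval Email=coalesce(dv_email, " ")
| table dv_name Email
```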
Hello, I'm starting to work on a new integration for Splunk Enterprise Security. The docs mention that only devs with "entitlements" can test an ES integration, but I didn't find any other mention of how to get them apart from that link. What is the process to obtain them? Thanks.
Hi, how can I make a stacked column chart? Currently the purple area displays how long it took for all processes combined to execute. How could I modify my SPL query so that it would display how long each individual process took to complete in a column chart? (A1, A2, A3 are process names.)

| rex field=PROCESS_NAME ":(?<Process>[^\"]+)"
| eval finish_time_epoch = strftime(strptime(FINISH_TIME, "%Y-%m-%d %H:%M:%S"), "%Y-%m-%d %H:%M:%S")
| eval start_time_epoch = strftime(strptime(START_TIME, "%Y-%m-%d %H:%M:%S"), "%Y-%m-%d %H:%M:%S")
| eval duration_s = strptime(FINISH_TIME, "%Y-%m-%d %H:%M:%S") - strptime(START_TIME, "%Y-%m-%d %H:%M:%S")
| eval duration_min = round(duration_s / 60, 2)
| chart sum(duration_min) as "time" by G_DT
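One sketch for splitting the column by process rather than summing everything together: use the extracted Process field as the chart's series-split field. This would replace only the final chart line of a query like the one above:

```
| chart sum(duration_min) as duration over G_DT by Process
```

With the chart's stack mode set to "Stacked" in the visualization formatting, each G_DT column then stacks the A1/A2/A3 durations.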
I have data in JSON format like this:

"Task_no":"5", "Group": "G1", "EXECUTION_DATE":1648081994535, "STATUS":"FAILURE", "DURATION":1951628

I want to produce a table which has Group, Total_Tasks, SUCCESS, FAILURE as fields. I tried a query like this:

index..... Group=G1
| chart count(Task_No) by STATUS
| eval Total_Tasks = SUCCESS + FAILURE
| table Group Total_Tasks SUCCESS FAILURE

It shows "no results found". But when I run the same query for all the groups, that is:

index.....
| chart count(Task_No) by Group STATUS
| eval Total_Tasks = SUCCESS + FAILURE
| table Group Total_Tasks SUCCESS FAILURE

this query gives the required fields, but I want the table to be created for a particular Group. Can anyone please help me achieve this?
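The likely issue is that chart count(Task_No) by STATUS drops the Group field, so the final table has nothing to show for it. A sketch that keeps Group as the row field while still filtering to one group (field names as in the question, index as a placeholder):

```
index=your_index Group=G1
| chart count(Task_No) over Group by STATUS
| eval Total_Tasks = SUCCESS + FAILURE
| table Group Total_Tasks SUCCESS FAILURE
```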
I have a lookup table with allowed CIDR ranges:

allowed_cidr_range    applications
Xyx                   abc

I need to build an alert for whenever the source IP does not belong to an allowed CIDR range.

Query:

NOT [| lookup cidr_vpc.csv allowed_cidr_range as src_ip output allowed_cidr_range]
| table _time, host, sourcetype, src_ip, dst_ip
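A sketch of one common approach, assuming the lookup is configured for CIDR matching (a lookup definition with match_type = CIDR(allowed_cidr_range)): run the lookup against each event and alert on events where it returns nothing. The index name is a placeholder:

```
index=your_index
| lookup cidr_vpc.csv allowed_cidr_range as src_ip OUTPUT allowed_cidr_range
| where isnull(allowed_cidr_range)
| table _time, host, sourcetype, src_ip, dst_ip
```

Without the CIDR match_type on the lookup definition, the lookup would only match exact string equality, not range membership.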
Hi, please see the screenshot below. The indexed time and the event log time are different. Kindly let me know how to fix this.
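Without the screenshot, the usual cause is timestamp extraction: if Splunk cannot parse a timestamp from the event itself, it falls back to index time. A hedged props.conf sketch; every value below is a placeholder that must be adapted to the actual event format:

```
[your_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
```

Checking index=_internal for DateParserVerbose warnings can confirm whether timestamp parsing is failing.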
How can I make a particular row bold in a Splunk table (with CSS, without JS)? For example, only the 4th row in a set of 10 rows.
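Assuming a Simple XML dashboard where the table panel has been given id="mytable", a CSS-only sketch could target the 4th body row; the exact selector depends on the table markup of your Splunk version, so treat this as a starting point:

```css
#mytable table tbody tr:nth-child(4) td {
    font-weight: bold;
}
```

This only works for a fixed row position; making boldness depend on row values would require JS or format-based cell coloring instead.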
Greetings, I have a query I'm working on using tstats and lookup. My lookup is named hosts_sites and has two columns, hosts and site. My sample query is below:

| tstats latest(_time) as latest where index=main by host
| lookup hosts_sites hosts as host OUTPUT site
| table host, site, latest

How can I make sure that my table includes non-matches? I want hosts in the lookup that were not matched to be included in the table so they can be addressed/remediated.
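tstats can only return hosts that actually have events, so one sketch is to append every host from the lookup, aggregate, and then enrich: hosts with no events keep a null latest, which flags them for remediation. This assumes the lookup columns are named hosts and site as described:

```
| tstats latest(_time) as latest where index=main by host
| append
    [| inputlookup hosts_sites
     | rename hosts as host
     | fields host ]
| stats max(latest) as latest by host
| lookup hosts_sites hosts as host OUTPUT site
| table host, site, latest
```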
Hello, I am fairly new to using Splunk, and I am having some trouble understanding how to extract fields. My sample data looks somewhat like this:

...Event={request=Request{data={firstName=jane, lastName=doe, yearOfBirth=1996}}}...

I want to get a count based on yearOfBirth. How should I do it? I tried doing stats count by request.data.yearOfBirth, and also tried simply doing stats count by yearOfBirth, but neither returned any results. Am I accessing the yearOfBirth field incorrectly?
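Since the event is key=value text inside braces rather than valid JSON, the automatic JSON field paths like request.data.yearOfBirth won't exist. One sketch is an inline rex extraction (the index name is a placeholder):

```
index=your_index "yearOfBirth"
| rex "yearOfBirth=(?<yearOfBirth>\d{4})"
| stats count by yearOfBirth
```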
I have the following events that arrive every five minutes from a pool of servers (two servers' events shown):

Aug 2 18:00:23 ServerX stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache LRU expired : 0
Aug 2 18:00:23 ServerX stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache lifetime : 0
Aug 2 18:00:23 ServerX stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache inactive : 21157
Aug 2 18:00:23 ServerX stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache del : 297
Aug 2 18:00:23 ServerX stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache add : 21967
Aug 2 18:00:23 ServerX stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache miss : 8801
Aug 2 18:00:23 ServerX stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache hit : 79198
Aug 2 18:00:32 ServerY stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache LRU expired : 0
Aug 2 18:00:32 ServerY stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache lifetime : 1
Aug 2 18:00:32 ServerY stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache inactive : 21085
Aug 2 18:00:32 ServerY stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache del : 230
Aug 2 18:00:32 ServerY stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache add : 21861
Aug 2 18:00:32 ServerY stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache miss : 8880
Aug 2 18:00:32 ServerY stats.pdweb.sescache 2022-08-02-18:00:00.000-05:00I----- pdweb.sescache hit : 74540
Aug 2 18:05:23 ServerX stats.pdweb.sescache 2022-08-02-18:05:00.000-05:00I----- pdweb.sescache LRU expired : 6100
Aug 2 18:05:23 ServerX stats.pdweb.sescache 2022-08-02-18:05:00.000-05:00I----- pdweb.sescache lifetime : 0
Aug 2 18:05:23 ServerX stats.pdweb.sescache 2022-08-02-18:05:00.000-05:00I----- pdweb.sescache inactive : 71624
Aug 2 18:05:23 ServerX stats.pdweb.sescache 2022-08-02-18:05:00.000-05:00I----- pdweb.sescache del : 6122
Aug 2 18:05:23 ServerX stats.pdweb.sescache 2022-08-02-18:05:00.000-05:00I----- pdweb.sescache add : 80511
Aug 2 18:05:23 ServerX stats.pdweb.sescache 2022-08-02-18:05:00.000-05:00I----- pdweb.sescache miss : 190
Aug 2 18:05:23 ServerX stats.pdweb.sescache 2022-08-02-18:05:00.000-05:00I----- pdweb.sescache hit : 6239

The server names (in this case, "ServerX" and "ServerY") are extracted at index time as a field called "server_name". In addition, two other field extractions are performed at index time:

- "metric_type": in this example, the values are "LRU expired", "lifetime", "inactive", "del", "add", "miss" and "hit".
- "metric_value": the numeric value at the end of each event.

I'm attempting to do the following:

1. Collect the "metric_value" values aligned with the seven metric types for each server in five-minute increments, and display all values in a table (each row reflecting the unique time, server name, and values for each metric type).
2. Perform arithmetic operations against four of the metric types (add - (del + inactive + lifetime)) to create a new value, "current_sessions".

I envision the output to look like this:

_time     server_name  LRU expired  lifetime  inactive  del   add    miss  hit    current_sessions
18:00:00  ServerX      0            0         21157     297   21967  8801  79198  513
18:00:00  ServerY      0            1         21085     230   21861  8880  74540  545
18:05:00  ServerX      6100         0         71624     6122  80511  190   6239   2765
...and so on...

Here's what I've put together so far:

index=foo sourcetype=bar stats_category="pdweb.sescache"
| bin span=5m _time
| stats values(*) AS * by server_name, metric_type, _time
| table _time, server_name, metric_type, metric_value

The resulting table shows me the following:

_time                server_name  metric_type  metric_value
2022-08-02 18:00:00  ServerX      LRU expired  0
2022-08-02 18:00:00  ServerX      lifetime     0
2022-08-02 18:00:00  ServerX      inactive     21157
2022-08-02 18:00:00  ServerX      del          297
2022-08-02 18:00:00  ServerX      add          21967
2022-08-02 18:00:00  ServerX      miss         8801
2022-08-02 18:00:00  ServerX      hit          79198
2022-08-02 18:05:00  ServerX      LRU expired  0
2022-08-02 18:05:00  ServerX      lifetime     1
2022-08-02 18:05:00  ServerX      inactive     21085
2022-08-02 18:05:00  ServerX      del          230
2022-08-02 18:05:00  ServerX      add          21861
2022-08-02 18:05:00  ServerX      miss         8880
2022-08-02 18:05:00  ServerX      hit          74540
2022-08-02 18:00:00  ServerY      LRU expired  6100
2022-08-02 18:00:00  ServerY      lifetime     0
2022-08-02 18:00:00  ServerY      inactive     71624
2022-08-02 18:00:00  ServerY      del          6122
2022-08-02 18:00:00  ServerY      add          80511
2022-08-02 18:00:00  ServerY      miss         190
2022-08-02 18:00:00  ServerY      hit          6239

How should I adjust my query to accommodate my requirements?
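One sketch of a pivot to the envisioned layout: build a composite row key from time and server, pivot metric_type into columns with xyseries, split the key back apart, and then compute current_sessions exactly as defined in the question. This is untested against the real data, and _time comes back as an epoch string (strftime can reformat it if needed):

```
index=foo sourcetype=bar stats_category="pdweb.sescache"
| bin span=5m _time
| stats latest(metric_value) as value by _time, server_name, metric_type
| eval key=_time . "|" . server_name
| xyseries key metric_type value
| eval _time=mvindex(split(key, "|"), 0), server_name=mvindex(split(key, "|"), 1)
| eval current_sessions = add - (del + inactive + lifetime)
| table _time, server_name, "LRU expired", lifetime, inactive, del, add, miss, hit, current_sessions
```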
I would like to automate checks on Splunk logs to make sure user detail is marked. Note: we are capturing and displaying user detail in the JSON response body.
Query 1:

| mstats count(_value) as count1 WHERE metric_name="*metric1*" AND metric_type=c AND status="success" by metric_name, env, status
| where count1 > 0

Query 2:

| mstats count(_value) as count2 WHERE metric_name="*metric2*" AND metric_type=c AND status="success" by metric_name, env, status
| where count2 = 0

These queries work fine individually. I need to combine them and show results only if count1 > 0 and count2 = 0.
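One sketch for combining the two, assuming env is the field that links metric1 to metric2: append the second mstats, roll both counts up per env, and filter. The fillnull matters because if metric2 has no data points at all, mstats returns no rows and count2 would be null rather than 0:

```
| mstats count(_value) as count1 WHERE metric_name="*metric1*" AND metric_type=c AND status="success" by metric_name, env, status
| append
    [| mstats count(_value) as count2 WHERE metric_name="*metric2*" AND metric_type=c AND status="success" by metric_name, env, status ]
| stats max(count1) as count1, max(count2) as count2 by env
| fillnull value=0 count1 count2
| where count1 > 0 AND count2 = 0
```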
I have a sample log that contains a count, and the message in the same row contains that many fixed-length log entries; the count is also dynamic.

For example: Date time server count server name

How can we get all the server count data?
Hi, I am trying to index the log file data.log. The log file is 2 days old, and Splunk is indexing only the latest events. Is there a way I can index the older events in data.log?
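If the monitor input skipped the older lines (for example because the file was first seen partway through, or an ignoreOlderThan setting applies), one hedged option is a one-shot re-index of the file from the CLI. The index and sourcetype below are placeholders, and note that re-reading a partially indexed file can create duplicate events:

```
$SPLUNK_HOME/bin/splunk add oneshot /path/to/data.log -index main -sourcetype your_sourcetype
```

Also check the time range of your search: events 2 days old are only visible if the search window (and the index retention) covers them.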
Is there a way to populate the items in an "IN" statement with the results of a subsearch? I've tried several variations, for example:

index=x accountid IN ( [ search index=special_accounts | rename accountid as query ] )
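IN does not accept a subsearch, but a plain subsearch achieves the same filter: by default it expands into an OR list of field=value terms built from the fields it returns (sketch):

```
index=x [ search index=special_accounts | fields accountid ]
```

This expands to something like ( accountid="a" OR accountid="b" OR ... ), which is equivalent to the intended IN clause. The rename-to-query trick is only needed when you want the subsearch's raw values inserted without a field name.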