All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi Everyone, I have set up an alert as below:

index=abc ns=c2 ("NullPointerException" OR "IllegalStateException" OR "RuntimeException" OR "IllegalArgumentException" OR "NumberFormatException" OR "NoSuchMethodException" OR "ClassCastException" OR "ParseException" OR "InvocationTargetException" OR "OutOfMemoryError")
| rex "message=(?<ExceptionMessage>[^\n]+)"
| eval _time = strftime(_time,"%Y-%m-%d %H:%M:%S.%3N")
| cluster showcount=t t=0.9
| table app_name, ExceptionMessage, cluster_count, _time, environment, pod_name, ns
| dedup ExceptionMessage, pod_name
| rename app_name as APP_NAME, _time as Time, environment as Environment, pod_name as Pod_Name, cluster_count as Count

I am sending the results via email. My requirement is that no alert should be sent when there are no results. Can someone guide me on that? Thanks in advance.
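One common way to handle this (a sketch, not the only option): set the alert's trigger condition to "Number of Results is greater than 0", either in the alert's UI settings or in savedsearches.conf. The stanza name below is assumed:

```
# savedsearches.conf -- trigger only when the search returns results
[My Exception Alert]
counttype = number of events
relation  = greater than
quantity  = 0
```

With this trigger condition, a run that returns zero results never fires the email action.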
Hello, I am doing a fresh install of this app to replace our old version (1.2.4), using the same credentials as the old working version (client ID, secret, and tenant). With the new app I get this error:

2021-03-30 09:17:26,598 ERROR pid=109928 tid=MainThread file=base_modinput.py:log_error:309 | _Splunk_ Unable to obtain access token
2021-03-30 09:17:26,596 DEBUG pid=109928 tid=MainThread file=connectionpool.py:_make_request:437 | https://login.microsoftonline.com:443 "POST /27776982-d882-41b2-95ac-322f28d5a2ce/oauth2/v2.0/token HTTP/1.1" 401 471
2021-03-30 09:17:26,372 DEBUG pid=109928 tid=MainThread file=connectionpool.py:_new_conn:959 | Starting new HTTPS connection (1): login.microsoftonline.com:443

When reverting to the old app it works fine and I am able to collect data. We have checked all the permissions and Azure settings. What else do we need to do to get this working with the new version?
Hello, I use the search below, which works fine:

`fiability`
| fields host Logfile SourceName ProductName SITE DEPARTMENT RESPONSIBLE_USER
| search Logfile=Application AND (SourceName="Application Hang" OR SourceName="Application Error")
| search ProductName=*
| stats last(SITE) as SITE, last(DEPARTMENT) as DEPARTMENT, last(RESPONSIBLE_USER) as RESPONSIBLE_USER, count(eval(SourceName="Application Error")) as "Number of Errors", count(eval(SourceName="Application Hang")) as "Number of Hang", count as "Number of crashes" by ProductName
| rename ProductName as Product
| sort -"Number of crashes"

The problem is in my XML file, because I use token filters on the DEPARTMENT and RESPONSIBLE_USER fields. Since I only stats by ProductName, the RESPONSIBLE_USER associated with a ProductName is just the last RESPONSIBLE_USER for that ProductName, not all of them. So when I use the RESPONSIBLE_USER token in my dashboard, it doesn't reflect reality. And if I do a stats by ProductName RESPONSIBLE_USER, that's no good either, because I get multiple counts for the same ProductName. What I need is a single count per ProductName while still keeping all the RESPONSIBLE_USER values for that ProductName (rather than only the last one). Could you help me please?
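One possible approach (a sketch, assuming a multivalue field is acceptable in the dashboard): keep the stats split by ProductName only, but collect every responsible user with values() instead of last():

```
`fiability`
| fields host Logfile SourceName ProductName SITE DEPARTMENT RESPONSIBLE_USER
| search Logfile=Application AND (SourceName="Application Hang" OR SourceName="Application Error") ProductName=*
| stats last(SITE) as SITE, last(DEPARTMENT) as DEPARTMENT,
        values(RESPONSIBLE_USER) as RESPONSIBLE_USER,
        count(eval(SourceName="Application Error")) as "Number of Errors",
        count(eval(SourceName="Application Hang")) as "Number of Hang",
        count as "Number of crashes" by ProductName
| rename ProductName as Product
| sort -"Number of crashes"
```

A token filter such as `| search RESPONSIBLE_USER=$user_token$` then matches a row if any of its multivalue entries match, while each ProductName still has a single count.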
Hi guys, I have this query, which gives me the list of Names where ProtectionStatus is OFF:

index=altiris sourcetype=altiris source=altiris_BGP_Excluded_WithREGION OR source=mi_input://altiris_BGP_Excluded_WithREGION
| eval ProtectionStatus = if(Protectionstatus == 0, "OFF", "ON")
| dedup Name
| where ProtectionStatus="OFF"

ProtectionStatus flips between OFF and ON frequently (we run the query every 6 hours). We need to add one more column showing the date and time of the last ProtectionStatus change (either OFF to ON or ON to OFF). Can someone please help us with this?
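A sketch of one way to get the last change time: sort by Name and time, use streamstats to compare each event's status with the previous one for the same Name, and keep the newest timestamp where they differ. The time range has to be long enough to include at least one change per Name:

```
index=altiris sourcetype=altiris source=altiris_BGP_Excluded_WithREGION OR source=mi_input://altiris_BGP_Excluded_WithREGION
| eval ProtectionStatus = if(Protectionstatus == 0, "OFF", "ON")
| sort 0 Name _time
| streamstats current=f window=1 last(ProtectionStatus) as prev_status by Name
| eval change_time = if(ProtectionStatus != prev_status, _time, null())
| stats latest(ProtectionStatus) as ProtectionStatus, max(change_time) as last_change by Name
| eval "Last Status Change" = strftime(last_change, "%Y-%m-%d %H:%M:%S")
| where ProtectionStatus="OFF"
```

The streamstats previous value is null for the first event of each Name, so the very first event never counts as a change.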
Hello everyone, we'd like to set up a Splunk test environment using the same license we are currently using in the production environment. The reason is that we want the test environment to be representative of production, so we need a distributed environment in test too. The test environment will be on a different VLAN from the one production is on. Is this possible? Is it enough to open the firewall from the test servers to the IP and port (8089) of the License Manager? Thanks in advance for any advice.
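If this is permitted by your license terms (worth confirming with Splunk), the test instances would be configured as license peers pointing at the production License Manager. The peers initiate the connection, so opening 8089 from the test VLAN to the License Manager should be the only firewall requirement. A sketch of the peer-side configuration (the key is master_uri on older versions, manager_uri on newer ones):

```
# server.conf on each test instance (sketch)
[license]
master_uri = https://<license-manager-host>:8089
```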
I have two events: 1) a request event, and 2) a response event. I need to calculate the response time, i.e. the difference between the response event time and the request event time. How do I construct the query?
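Assuming the two events share a correlating field (called txn_id below; that name and the index are assumptions), one common pattern is to let stats pair them up:

```
index=your_index ("request" OR "response")
| stats earliest(_time) as request_time, latest(_time) as response_time by txn_id
| eval response_time_sec = response_time - request_time
```

The transaction command with its duration field works too, but stats is usually cheaper at scale.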
Hi Everyone, can someone provide me the cron expression to run a job at 6:00 every day?   Thanks in advance
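Assuming "at 6" means 06:00 daily, the standard five-field cron expression (minute, hour, day-of-month, month, day-of-week) is:

```
0 6 * * *
```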
Hi Splunkers!! I'm working with a team that needs to access one of the saved search's results through the Splunk API. The search returns more than 10k results, but the API call returns only a few of them. I've checked limits.conf and it looks good (the limit is 50k). Any recommendations for solving this issue? TIA.
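One thing worth checking (a sketch, since the exact call wasn't shown): the REST results endpoint returns only 100 results per call by default. You can page through with the count and offset parameters, or pass count=0 to request everything (still subject to limits.conf):

```
# default: at most 100 results per call
.../services/search/jobs/<sid>/results?output_mode=json&count=0

# or page explicitly
.../services/search/jobs/<sid>/results?output_mode=json&count=5000&offset=0
.../services/search/jobs/<sid>/results?output_mode=json&count=5000&offset=5000
```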
I was investigating an issue with a REST API data input and found that the scheduling of the poll is inconsistent. With the polling interval set to 300, there is a 5-second difference each and every time the job runs. This causes problems for calls that use point-in-time updates or require more accurate polling. I can mostly mitigate the issue by looking at the internal logs for the time the inputs are triggered and offsetting the schedule by that amount, but it is still out by around 1 second, so we still miss events.
Hi, for testing purposes I would like to create a failed search job. I searched for how to do this, but no luck. Any suggestions, please?
Hello! I am trying to retrieve two events: the latest event where a user leaves a room, and the earliest event where the user chooses to go to that room. In the example below I would want to retrieve the first and fourth events.

2021-03-29 13:29:44  User: 2 (LeaveRoom): 230
2021-03-29 13:29:44  User: 2 Choose To Go To Room 212
2021-03-29 13:29:44  User: 2 (LeaveRoom): 245
2021-03-29 13:29:44  User: 2 Choose To Go To Room 230
2021-03-29 13:29:44  User: 2 Choose To Go To Room 245

You'll notice that the user must choose the next place to go before actually leaving a room. My problem is that I cannot get my WHERE clause to narrow the results down to those two events. I cannot simply dedup 4 to get the needed events, because the latest event is not necessarily a "leave room" event, and there could be other events (not shown above). I might need the 2nd and 5th events, or the 1st and 3rd, or another combination; it depends. Both needed events will appear within the 10 most recent events, however, so I use a dedup in my query:

index=INDEX host=HOSTNAME sourcetype=SOURCETYPE
| rex field=_raw "User:\s(?<user_id>\d+)\s\(LeaveRoom\):\s(?<leave_room_id>\d+)"
| rex field=_raw "User:\s(?<user_id>\d+)\sEntered\s(?<entered_room_id>\d+)"
| dedup 10 user_id
| where leave_room_id=entered_room_id
| stats latest(leave_room_id) as left_room, earliest(entered_room_id) as entered_room by user_id

How can I rewrite this to get the events I need?
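A sketch of one way around the cross-event where clause (field names beyond those in the question are assumptions): extract both patterns into a shared room_id plus an act field, use eventstats to find the room of the latest LeaveRoom per user, and keep only events for that room:

```
index=INDEX host=HOSTNAME sourcetype=SOURCETYPE
| rex field=_raw "User:\s(?<user_id>\d+)\s\((?<act>LeaveRoom)\):\s(?<room_id>\d+)"
| rex field=_raw "User:\s(?<user_id>\d+)\sChoose To Go To Room\s(?<chosen_id>\d+)"
| eval room_id = coalesce(room_id, chosen_id), act = if(isnull(act), "ChooseRoom", act)
| eventstats latest(eval(if(act="LeaveRoom", room_id, null()))) as last_left by user_id
| where room_id = last_left
| stats latest(eval(if(act="LeaveRoom", _time, null()))) as left_time,
        earliest(eval(if(act="ChooseRoom", _time, null()))) as chosen_time
        by user_id, room_id
```

Note that with events sharing the same timestamp (as in the sample), "latest" and "earliest" may be ambiguous, so treat this strictly as a starting point.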
Hello! I am having trouble creating a query to retrieve all of the events between now and the second instance of a particular event. For example, this could be how my events appear after grabbing the events for EntryType #1:

2021-03-29 13:27:11  EntryType #1 Issue Fixed
2021-03-29 13:26:23  EntryType #1 Something is Still Broken
2021-03-29 13:26:12  EntryType #1 Something is Still Broken
2021-03-29 13:25:56  EntryType #1 Something is Broken
2021-03-29 13:22:34  EntryType #1 Issue Fixed
2021-03-29 13:22:10  EntryType #1 Something is Broken

In this case, I would want to grab the first four events (from 13:25:56 to 13:27:11), but I cannot simply dedup 4 because there could be more or fewer "Something is Broken" events between the "Issue Fixed" events. My events are all from the same index, host, and sourcetype, and I'm mainly just using regex to extract the events with certain phrases. Nevertheless, I can't seem to isolate the events I need. Does anyone have any ideas?
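One sketch, relying on the fact that events come off the search pipeline newest-first: use streamstats to count how many "Issue Fixed" events have been seen so far, and keep everything before the second one:

```
index=... "EntryType #1"
| streamstats sum(eval(if(searchmatch("Issue Fixed"), 1, 0))) as fixed_seen
| where fixed_seen <= 1
```

With the sample data this keeps exactly the four events from 13:25:56 to 13:27:11. If the second "Issue Fixed" itself should also be kept, extend the filter to `| where fixed_seen <= 1 OR (fixed_seen = 2 AND searchmatch("Issue Fixed"))`.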
Greetings community, does anyone know if internal certs are OK to use for the Microsoft Teams add-on webhook and subscription? We have a publicly accessible IP NATing to our HF on port 4444. Thanks in advance ~John @jconger
Hello, I have Splunk Add-on for AWS version 4.6.1 installed on a standalone search head running Splunk Enterprise 7.3.3 on CentOS 7. I have an S3 bucket named backups, and under backups I have two subfolders, server_test1 and server_test2. I only want to ingest files from server_test1, but I am ingesting files from both folders. Could you tell me what I am doing wrong? Here is the inputs.conf:

[aws_s3://server_test]
aws_account = aws-instances
bucket_name = backups
character_set = auto
ct_blacklist = ^$
host_name = s3.amazonaws.com
index = test_index
initial_scan_datetime = 2021-03-29T15:00:15Z
max_items = 100000
max_retries = 3
polling_interval = 1800
recursion_depth = -1
sourcetype = aws:s3
disabled = 0
log_partitions = server_test1/
I am trying to get counts based on comma-delimited values for specified groupings of events. For instance, I have the following logs:

Event=A Ids="55,32,5"
Event=A Ids="55"
Event=B Ids="56,63"
Event=C Ids="23,53,12"
Event=C Ids="39,6"

I want the data to show up in a table like the following:

Event A&B | Event C
6         | 5

How would I craft the query to aggregate it like this? Note: this would happen across a large number of events.
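A sketch of one way to do it (the A&B grouping is hard-coded here): split Ids into a multivalue field, count the values per event, sum by group, then transpose into a single row:

```
index=... Event=* Ids=*
| makemv delim="," Ids
| eval id_count = mvcount(Ids)
| eval group = case(Event=="A" OR Event=="B", "Event A&B", Event=="C", "Event C")
| stats sum(id_count) as total by group
| transpose header_field=group
```

With the sample events this yields 6 for Event A&B (3+1+2 Ids) and 5 for Event C (3+2). Apply mvdedup before mvcount if duplicate Ids should only count once.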
I am trying to alert on any process whose CPU time is gaining 60 seconds for every elapsed minute. I am using the following search to calculate the per-process delta in CPU time:

... | stats max(cpu_time_sec) as maxTime by PID _time
| delta maxTime as deltaTime
| table _time PID deltaTime

I get the following output as an example (filtered on a specific PID):

_time                PID         maxTime  deltaTime
2021-03-29 13:28:44  PID: 26916  2857     2856
2021-03-29 13:29:45  PID: 26916  2857     0
2021-03-29 13:30:44  PID: 26916  2857     0
2021-03-29 13:31:45  PID: 26916  2857     0

The first value is always higher than it "should" be, because the CPU time at that point is compared against a non-existent previous interval value, unless I select a time range large enough to predate when the process started, which I do not want to do. I am only interested in the most recent deltas for the monitored processes, and in flagging those that have accumulated 60 seconds of CPU time. If I put a where deltaTime>60 on my query, I erroneously capture the first entry. Any insights on how to accomplish this?
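One way to avoid the bogus first delta (a sketch): replace delta with streamstats, which leaves the previous-value field null for the first event of each PID, so the comparison drops it naturally:

```
... | stats max(cpu_time_sec) as maxTime by PID _time
| streamstats current=f window=1 last(maxTime) as prevTime by PID
| eval deltaTime = maxTime - prevTime
| where deltaTime >= 60
```

When prevTime is null, deltaTime is null too, and null never passes the where test, so the process's first interval is silently excluded. Splitting by PID also keeps deltas from different processes from bleeding into each other, which plain delta does not do.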
Dear community, I have the following scenario: a user can take many actions; in this case action can equal search, resultClicked, or load. Each action type has its own log format with many overlapping fields.

I want to compute a click index rank, a field of the action=resultClicked events, and sort it by page from highest to lowest. The catch is that the page value for action=resultClicked is the search results page, i.e. page="/search?query=example". The page I actually want is in the action=load event, which will always be the user's next action, i.e. action=load page=/userguide/exampletopic.html.

So I'm using the transaction command here to group the journey by customer, but what I really want is to pair each resultClicked with the next load action for that user, so that I can compute stats across the whole environment.

Any ideas?

Example scenario: find pages with a low average resultIndex clicked.

user=name action=search query=example
user=name action=resultClicked page=/search?examplequeryfromuser
user=name action=load page=/userguide/exampletopic/theactualpageuserclicked.html

What is the average click rank [ for page /userguide/exampletopic/theactualpageuserclicked.html ]?

Example base search:

index=server sourcetype=stats action!=pageChanged
| rex field=_raw "query=\"(?<query_quotes>.*)\",filters"
| rex field=searchIndex "\[(?<filts>.+)\]"
| rex max_match=0 field=filts "\"(?<index_select>[\w :-]+)\""
| rex field=product_name "\[(?<prods>.+)\]"
| transaction email maxspan=1h maxpause=15m mvlist=true nullstr="-"
| eval usercode=mvdedup(instcode), time_spent_searching=round(duration/60, 4)
| search action=resultClicked query_quotes!="" query_quotes="*" publicationId="*" OR NOT publicationId="*"
| eval searchTransaction=lower(query_quotes)
| table custcode publicationId topic searchTransaction action, resultIndex, time_spent_searching, page
| rename time_spent_searching as "Minutes Spent Searching", prods as "Product Filter Selected"

This produces something like:

customer code | publication             | topic/page | search string | action        | resultIndex | Minutes Spent Searching | page
usernumber    | -                       | -          | how to login  | search        |             | 10.79                   | /search
              | -                       | -          | how to login  | resultClicked | 3           |                         | /search?how_to_login
              | product_operation_guide | login.htm  | -             | Load          |             |                         | /publications/productoperationsguide/2.0?topic=login.htm
              | product_operation_guide | reset.htm  | -             | Load          |             |                         | /publications/productoperationsguide/2.0?topic=reset.htm

I want to see that the average click rank is 3 for page=/publications/productoperationsguide/2.0?topic=login.htm. Of course, many users would click on the same page after searching any number of search strings.

Business goal: provide the pages with the lowest click rank where the query contains the key term "login".
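A sketch of pairing each resultClicked with the user's next load without transaction (field names beyond those in the examples, such as email, are assumptions): sort per user, carry the previous event's action and resultIndex forward with streamstats, and keep load events whose predecessor was a click:

```
index=server sourcetype=stats (action=resultClicked OR action=load)
| sort 0 email _time
| streamstats current=f window=1 last(action) as prev_action, last(resultIndex) as click_rank by email
| where action="load" AND prev_action="resultClicked"
| stats avg(click_rank) as avg_click_rank, count as clicks by page
| sort avg_click_rank
```

To restrict to the "login" business goal, the query string would also need to be carried forward with another last(query_quotes) in the streamstats, then filtered before the final stats.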
Hello! We are new to Splunk Cloud and have a question about installing apps/add-ons that we couldn't find definitive information on in the documentation. We have three instances: IDM, Search Head 1, and Search Head 2, which is our Enterprise Security (ES) instance. Which one is the indexer? The IDM instance is a sort of heavy forwarder, correct? When installing apps such as the Splunk Add-on for F5 BIG-IP or the Cloudflare App for Splunk, the instructions say to install on the search head(s). Should they be installed on both search heads, or just one? What are the advantages or disadvantages of either? Sorry for the barrage of questions, but we are having trouble wrapping our heads around how these instances all work together and how the apps interact. Thanks!
Hi there! I have a case where I need to find the list of employees who will retire in the next 5 years. I tried a lot of queries but didn't get the correct output; can you please help with this? The field name for employee age is "Age in Yrs". Thank you.
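Assuming a retirement age of 60 (adjust to yours; the index and other field names are also assumptions), employees retiring within 5 years are those aged 55 or more. Note that a field name containing spaces must be wrapped in single quotes inside eval/where expressions:

```
index=hr_data
| where 'Age in Yrs' >= 55 AND 'Age in Yrs' < 60
| table Name "Age in Yrs"
```

Forgetting the single quotes is a common cause of "no results": where Age in Yrs >= 55 is not parseable, and double quotes would make it a string literal rather than a field reference.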
I was curious if anyone could help me understand, or point me to documentation that covers, accessing fields in a summary index.

I've found that running eval against a known field (bytes) that I'm writing to the summary index results in no information being populated for the eval field gb:

index=my_summary
| eval gb=bytes/(pow(1024, 3))
| stats sum(gb) by _time

If I perform a stats call first, then the field appears to be exposed and I can use the eval to populate the gb field:

index=my_summary
| stats sum(bytes) as bytes by _time
| eval gb=bytes/(pow(1024, 3))
| stats sum(gb) by _time

Just curious if there is any insight into why this may be the case; I wouldn't expect it to work this way. Thank you!