All Topics

I want to search the events below in my base search. However, they are not returned when I use the where command; they only show up when I use spath followed by search, as shown below. Any idea why where is not working even though the filter criteria are matched? It works for other events with the same filter criteria. The only difference I can see is the multi-line stack_trace field, which is missing from the other events. Could that be the issue?

Query that returns no results even though qualifying events exist:

    BASE_SEARCH | where like(MOP,"MC")

Query that returns the results:

    BASE_SEARCH | spath MOP | search MOP=MC

Event:

    {
      LEVEL: ERROR
      MESSAGE: Failed to process
      stack_trace: Exception trace..
        at blah
        at blah
      MOP: MC
    }
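One likely explanation, sketched below under the assumption that these are JSON events and that MOP is not automatically extracted at search time: where can only filter on fields that already exist when it runs, so extracting the field first with spath makes both filters behave the same. The search and field names are taken from the question above.

    BASE_SEARCH
    | spath input=_raw
    | where like(MOP, "MC")

Note that like() uses SQL-style wildcards, so like(MOP, "MC") only matches the exact value MC; use "MC%" for a prefix match.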
We are configuring the Salesforce/Splunk integration in our Salesforce sandbox. We followed the documentation provided by Splunk, added the Salesforce add-on, and were able to authenticate successfully. We configured the inputs with OAuth authentication and validated them successfully. However, we do not see any logs captured in Splunk, and we see the following in the logs.

Splunk log:

    2022-10-10 20:54:56,778 INFO pid=28611 tid=MainThread file=input_module_sfdc_event_log.py:collect_events:333 | [stanza_name=Test_Event_Logs] Collecting events started.
    2022-10-10 20:54:56,779 WARNING pid=28611 tid=MainThread file=sfdc_common.py:key_configured:223 | [stanza_name=Test_Event_Logs] Salesforce refresh_token is not configured for account "Sandbox_POC". Add-on is going to exit.

Any advice is greatly appreciated.
I need to split the log file below into a table, like Excel. My log file is:

    2022-05-25 13:00:02 100.200.190.70 - test [12345]dele /TestingFile+-+END+-+GOD+WEL+SOONER+-+SFTP.txt - 220- 105 - 443
    2022-06-30 12:05:08 200.231.150.150 - welcome [98765]created /TestingFileFromSource+-+COME+-+THE+END+Server+-+FileName.csv - 226 - 19 - 22

Expected result (I tried some regular expressions but no luck):

    Field1               Field2           Field3   Field4  Field5   Field6                                                    Field7  Field8  Field9
    2022/05/25 13:00:02  100.200.190.70   test     12345   dele     TestingFile END GOD WEL sooner SFTP.txt                   220     105     443
    2022/06/30 12:05:08  200.231.150.150  welcome  98765   created  TestingFileFromSource COME THE END Sending FileName.csv   226     19      22
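A minimal sketch of one possible extraction, assuming every line follows the same layout as the two samples (timestamp, IP, dash, user, bracketed id, action, path, then three trailing numbers separated by dashes); the field names are illustrative, not a definitive parse:

    YOUR_SEARCH
    | rex field=_raw "^(?<date>\d{4}-\d{2}-\d{2}) (?<time>\d{2}:\d{2}:\d{2}) (?<src_ip>\d+\.\d+\.\d+\.\d+) - (?<user>\S+) \[(?<id>\d+)\](?<action>\w+) /(?<path>\S+) - (?<num1>\d+)\s*-\s*(?<num2>\d+)\s*-\s*(?<num3>\d+)"
    | eval path=replace(path, "[+]-[+]|[+]", " ")
    | table date, time, src_ip, user, id, action, path, num1, num2, num3

The eval turns the +-+ and + separators inside the filename into spaces; splitting the filename itself into separate columns would need a further split() or rex on the path field.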
Hi everyone,

In my search I bucket _time with span=2h, but it returns only the hours that have data. There are some hours with no data, so they do not show up in the result. I want to surface those hours, so I tried makecontinuous.

Raw data:

    _time             id  count
    10/10/2022 16:00  1   12
    10/10/2022 18:00  1   14
    11/10/2022 08:00  1   15
    11/10/2022 10:00  1   54
    10/10/2022 16:00  2   78
    10/10/2022 18:00  2   45
    10/10/2022 20:00  2   5
    11/10/2022 00:00  2   6

Expectation:

    _time             id  count
    10/10/2022 16:00  1   12
    10/10/2022 18:00  1   14
    10/10/2022 20:00
    10/10/2022 22:00
    11/10/2022 00:00
    10/10/2022 20:00
    10/10/2022 22:00
    11/10/2022 00:00
    11/10/2022 08:00  1   15
    11/10/2022 10:00  1   54
    10/10/2022 16:00  2   78
    10/10/2022 18:00  2   45
    10/10/2022 20:00  2   5
    10/10/2022 22:00
    11/10/2022 00:00  2   6

After that I want to fill the null id with the previous id and the null count with 0. I can do it for a single id, but makecontinuous does not work that way across multiple ids (in the example I use 2 ids, but in reality I have more). Do you have any ideas, please?
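A minimal sketch of an alternative that avoids makecontinuous altogether, assuming the events carry an id field and count can be summed per 2-hour bucket (the search name is a placeholder): timechart builds a continuous 2-hour axis for every id at once, fillnull supplies the 0 for empty buckets, and untable turns the matrix back into one row per _time/id pair.

    BASE_SEARCH
    | timechart span=2h sum(count) as count by id limit=0
    | fillnull value=0
    | untable _time id count
    | sort 0 id _time

Because each timechart column already belongs to a specific id, there are no null id values left to back-fill.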
Hi to all, after upgrading from version 3.8 to version 3.10.0 we had to rename every input whose name contained a . (dot) or a - (dash); otherwise they did not work. Does anyone know if this is normal behavior? It took a long time to understand the issue, especially because nothing was written in the dbx logs. Please check this issue and, if confirmed, update the documentation in Splunk Docs.
I need to set the date range to month-to-date in Splunk.

    <earliest>now</earliest>
    <latest>mon</latest>
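Month-to-date is usually expressed the other way around, with earliest snapped to the start of the current month; a minimal sketch, assuming this goes into a Simple XML time range:

    <earliest>@mon</earliest>
    <latest>now</latest>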
The visualization is visible in edit mode but not in the normal (non-edit) panel. The panel is empty in the UI, yet the graph shows up in edit mode. I checked the query; it works and everything looks fine. An answer to this would be highly appreciated. Thanks
Hi, is it possible to hide the description of a dashboard? I found the option to hide the title together with the description, but I want to hide only the description.
The deployment server and the UF both run on Linux. On the deployment server the app is owned by splunk:splunk, but when I push the app to the UF, the ownership changes to root:root and the permissions change as well. What configuration do I need to change so that the app's owner and permissions are preserved? The Splunk service runs as the splunk user.
I wrote an external command in Python, and the only way I can get it to work is to put | makeresults in front of it in the search:

    | makeresults | mycustomcommand

My command just pulls back an array of data through a REST call; I am not passing it any arguments. I have tried setting streaming to both "true" and "false", and I have also tried setting generating to both "true" and "false" in commands.conf. Can someone tell me the correct settings so I can just run:

    | mycustomcommand

Currently, if I run it like that, I get no results (and no errors either). Any help would be appreciated.

Thanks,
-Bob
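A minimal sketch of the commands.conf stanza that normally lets a command start the search pipeline on its own, assuming the script is written as a generating command; the stanza and file names are taken from the question, and chunked = true applies only if the script uses the Splunk Python SDK's GeneratingCommand class:

    [mycustomcommand]
    filename = mycustomcommand.py
    generating = true
    # needed for SDK-based (SCP v2) commands; omit for legacy intersplunk-style scripts
    chunked = true

With generating = true the command is allowed to be the first thing after the leading pipe, so | mycustomcommand no longer needs | makeresults in front of it.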
I'm reading the official documentation (https://docs.splunk.com/Documentation/Splunk/8.2.0/Installation/HowtoupgradeSplunk), which warns not to upgrade Splunk from 7.0.x to 8.2.6 directly: 7.0.x → 8.0.x or 8.1.x → 8.2.x. The documentation suggests making an intermediate upgrade to 8.0.x/8.1.x and only then going to 8.2.x. Why? Is it really so unsafe to go from 7.0.x straight to 8.2.x? Asking because I'll soon need to replace my very old 7.0.x with 8.2.x. Thanks
I have JSON events/messages in my search results. There is a field called "stack_trace" in the JSON, like below. I want to group the events and count them, as shown below, based on the exception reason or message. The problem is that the traces are multi-line, and the query I am using does not seem able to extract the exact exception message. Is there a way to achieve the expected output?

Event:

    {
      MESSAGE : Failed to send
      stack_trace : com.abc.xyz.package.ExceptionName: Missing A.
        at random.package.w(DummyFile1:45)
        at random.package.x(DummyFile2:64)
        at random.package.y(DummyFile3:79)
    }

Query I am using:

    MY_SEARCH
    | rex field=stack_trace "(?<exceptionclass>\w+): (?<exceptiontext>\w+)."
    | stats count as Count by "exceptiontext"

Expected output:

    Exception  Count
    Missing A  3
    Missing B  4
    Missing C  1
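A minimal sketch of one way to capture just the first line of the trace, reusing the field names from the question and assuming the exception message always ends at the first line break; [^\r\n]+ stops the capture before the "at ..." lines, and rtrim drops the trailing period:

    MY_SEARCH
    | rex field=stack_trace "(?<exceptionclass>[\w\.]+):\s+(?<exceptiontext>[^\r\n]+)"
    | eval exceptiontext=rtrim(exceptiontext, ". ")
    | stats count as Count by exceptiontext
    | rename exceptiontext as Exception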
Friends, tell me how to approach the following task. I have an alert that runs every two minutes. I need to use the current time, apparently something like now(). Next, I need the difference between now() and the time of the last message (t); let's call that difference t-now. Then I introduce a variable "interval" (inter) whose value is 30 seconds, and finally compare t-now with inter.
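A minimal sketch of that comparison in SPL, assuming the time of the last message is simply the latest _time the alert search finds; the field names and the 30-second threshold are illustrative:

    BASE_SEARCH
    | stats latest(_time) as t
    | eval inter = 30
    | eval diff = now() - t
    | eval overdue = if(diff > inter, "yes", "no")

If the alert should fire only when the gap exceeds the interval, add | where diff > inter and trigger on "number of results > 0".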
I am trying to make certain text bold in Splunk email alerts. I chose the HTML & Plain Text option, but the HTML tags appear literally in the generated emails.

Thanks in advance,
Manoj
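If the email is sent from a search with the sendemail command rather than the built-in alert action, a minimal sketch that usually renders markup, assuming content_type=html is honored on your version:

    YOUR_SEARCH
    | sendemail to="someone@example.com" subject="Report" content_type=html sendresults=true inline=true message="Normal text and <b>bold text</b>"

For the built-in alert action, one thing worth checking is whether the receiving mail client is displaying the plain-text part of the message instead of the HTML part.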
In the Splunk Add-on for AWS, the CloudWatch input type has an option to specify an assume role (for a multi-account AWS setup). However, the CloudWatch Logs input type has no assume-role option. So does each account need a programmatic-access user if we want to use the CloudWatch Logs input?
Hi peeps, I want to join the results below into one table.

1st query:

    index=sslvpn
    | iplocation src_ip
    | search Country != Malaysia
    | eval Country = if(isnull(Country),"unknown",Country)
    | table _time, user, src_ip, Country, action
    | rename user as "User ID", src_ip as "Source IP", action as "Status"

2nd query:

    index=sslvpn group_path="ADL"
    | iplocation accessIP
    | where Country !="Malaysia"
    | table _time, user, accessIP, Country, action

I tried to join these tables with the query below:

    index=sslvpn
    | iplocation src_ip
    | search Country != Malaysia
    | eval Country = if(isnull(Country),"unknown",Country)
    | table _time, user, src_ip, Country, action
    | append
        [search index=sslvpn group_path="ADL"
        | iplocation accessIP
        | where Country !="Malaysia"
        | rename accessIP as src_ip]
    | rename user as "User ID", src_ip as "Source IP" action as "Status"

but the result does not include the 2nd query's information. Please help. Thank you.
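A minimal sketch of one consolidated version, assuming the only real difference between the two sources is the field name (src_ip vs. accessIP); coalescing the two fields before iplocation avoids append and its subsearch limits. Also note that the final rename in the attempt above is missing a comma before action, which on its own can break the search.

    index=sslvpn
    | eval src_ip = coalesce(src_ip, accessIP)
    | iplocation src_ip
    | eval Country = if(isnull(Country), "unknown", Country)
    | where Country != "Malaysia"
    | table _time, user, src_ip, Country, action
    | rename user as "User ID", src_ip as "Source IP", action as "Status"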
We recently upgraded our KV store storage engine to WiredTiger, after previously using MMAPv1. When I run:

    splunk show kvstore-status

it says:

    storageEngine: wiredTiger

Yet we keep getting the "Storage engine migration recommended" alert. Did anyone else run into this? Does anyone know why, and whether there is a way to disable it?
Am I able to set up a Splunk instance that would allow users outside my network to access it in a VM environment? Can anyone link me to documentation that would help? Preferably something that can be reached via a URL, like https://splunk.samsclass.info (Username: Student1, Password: Student1). Just a safe environment that a few people can play around in. Does Splunk itself allow this?
Hello Splunkers!! Based on the results below, I want to send an individual report to each manager at their email address. I have more than 50 managers, and I have to send each one an individual report at their individual email address. Please guide me on how I can achieve this.

    Manager  pass  fail  email
    abc      80    20    abc@gmail.com
    xyz      70    30    xyz@gmail.com
    nbq      60    40    nbq@gmail.com
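A minimal sketch of one common pattern, assuming the table above is produced by a search referred to here as BASE_SEARCH and that the mail server is already configured; map re-runs the inner search once per manager row, substituting that row's field values (raise maxsearches above its default of 10 when there are 50+ managers):

    BASE_SEARCH
    | map maxsearches=100 search="| makeresults
        | eval Manager=\"$Manager$\", pass=\"$pass$\", fail=\"$fail$\"
        | sendemail to=\"$email$\" subject=\"Your team report\" sendresults=true inline=true"

The field names (Manager, pass, fail, email) are taken from the table above; the subject text is illustrative.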
Hello, we are ingesting data into multiple indexes for different departments. We want to create an alert so that when any index stops receiving logs from a host, an email is sent to that department's mail address. We created a lookup .csv file containing the index name and email address. Below is the query I am trying to execute, but it returns no results.

    | tstats latest(_time) as latest where index=* earliest=-6h by host
    | eval recent = if(latest > relative_time(now(),"-45m"),1,0), realLatest = strftime(latest,"%c")
    | where recent=0
    | outputlookup weblogs-index.csv
    | stats values(useremail) AS emailToHeader
    | mvexpand emailToHeader
    | map search="index | inputlookup weblogs-index.csv | where useremail=\"$emailToHeader$\" | fields - useremail | sendemail sendresults=true inline=true server=\"Your.Value.Here\" from=\"Your.Value.Here\" to=\"$emailToHeader$\" subject=\"Your Subject here: \$name\$\" message=\"This report alert was generated by \$app\$ Splunk with this search string: \$search\$\""
    | appendpipe [| inputlookup weblogs-index.csv]
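A minimal sketch of one possible restructure, assuming weblogs-index.csv has the columns index and useremail and that staleness is judged per index rather than per host; the lookup command attaches the department address to each stale index, and outputlookup is dropped so the file is not overwritten on every run. The threshold, maxsearches, and subject text are illustrative.

    | tstats latest(_time) as latest where index=* earliest=-6h by index
    | where latest < relative_time(now(), "-45m")
    | lookup weblogs-index.csv index OUTPUT useremail
    | map maxsearches=50 search="| makeresults
        | eval index=\"$index$\"
        | sendemail to=\"$useremail$\" subject=\"No recent logs for index $index$\" sendresults=true inline=true"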