All Topics

I am trying to send error messages recorded in Splunk to ServiceNow via a custom Python script, but I am failing to reach the SNOW URL.

1. The Python script is triggered by Splunk, and the error messages are extracted into JSON. This is verified.
2. The ServiceNow endpoint is accessible from the command prompt and from Postman.
3. However, the request fails when the script is triggered by Splunk.
4. The Splunk instance is on-premise, and a ServiceNow mid-server has been created for communication.

I suspect a tunneling issue, since the request fails only when executed from Splunk. Can anyone assist me with a possible solution?
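Not a fix, but one way to narrow this down: scripts launched by splunkd run with a stripped environment (no shell proxy variables, and possibly a different user), so a request that works from Postman or the command prompt can still fail from Splunk. A hedged Python sketch (the URL, log path, and payload fields are placeholders, not from the post) that makes the proxy and timeout explicit and logs the exact failure:

```python
import json
import urllib.request

# Placeholder instance URL -- substitute your own.
SNOW_URL = "https://example.service-now.com/api/now/table/incident"

def build_incident_request(payload):
    """Build the POST request so it can be inspected before sending."""
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        SNOW_URL,
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def send_with_diagnostics(req, timeout=30):
    """Send with an explicit proxy config and timeout, logging failures
    to a file (stdout from Splunk-launched scripts is usually lost)."""
    # splunkd does not pass your shell's proxy variables to the script;
    # set the proxy explicitly, e.g. {"https": "http://proxy.corp:8080"}.
    opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))
    try:
        with opener.open(req, timeout=timeout) as resp:
            return resp.status
    except OSError as exc:
        with open("/tmp/snow_alert_error.log", "a") as fh:
            fh.write(repr(exc) + "\n")
        raise
```

Running the script manually as the same OS user that runs splunkd (e.g. `sudo -u splunk python script.py`) is often the quickest way to reproduce the difference.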
Hi, I am trying to extract a timestamp including nanoseconds, but I can only extract 7 digits of the fraction even though I used %9N in TIME_FORMAT. Here is my sample event:

10,11/03/20 04:00:00.00000010,11/03/20,04:00:00,Zx: 6037,04:00:00,48d4c21c3014850838840a460424c05b20412128053ce6074720006e00f1ff5500000000000000,Mod=2,AckReq=0,RtBits=0,MsgSeq=35,OnRte=1,Id=46,VId=6037

And here is my props.conf:

[abc_logs_st]
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
category = Custom
pulldown_type = 1
disabled = false
TIME_PREFIX = ^\d+\,
MAX_TIMESTAMP_LOOKAHEAD = 30
TIME_FORMAT = %m/%d/%y %H:%M:%S.%9N

Why is Splunk considering only 7 digits after the decimal? Is this a bug in Splunk? Thanks.
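Splunk's internal subsecond precision aside (the 7-digit behavior may be a parser limit rather than a config error), the general technique for odd-width fractional seconds is to normalize the fraction to a fixed width before parsing, keeping the full nanosecond value separately. A sketch of that idea in Python (which itself only keeps 6 digits in `%f`), using the sample event's timestamp:

```python
from datetime import datetime

def parse_with_nanos(ts):
    """Parse 'MM/DD/YY HH:MM:SS.<fraction>' where the fraction may have
    up to 9 digits; return the datetime (microsecond precision) plus the
    full nanosecond count."""
    base, frac = ts.split(".")
    frac9 = frac.ljust(9, "0")       # right-pad to 9 digits
    nanos = int(frac9)               # full nanosecond value, e.g. 100
    micros = frac9[:6]               # truncate for strptime's %f (6 digits max)
    dt = datetime.strptime(f"{base}.{micros}", "%m/%d/%y %H:%M:%S.%f")
    return dt, nanos

dt, nanos = parse_with_nanos("11/03/20 04:00:00.00000010")
print(nanos)  # 100
```

If nanosecond ordering matters downstream, a common workaround is to index the full fraction as its own field rather than relying on _time to carry all 9 digits.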
A recent Ivanti scan showed that we need to upgrade some of our forwarders to 8.0.6. When I go to download the latest forwarder version, it still says 8.0.5. When will 8.0.6 be available for download?
Hello all, it is only Tuesday and it has been quite the journey. I have a Splunk Cloud instance. I logged a call and got the Splunk Add-on for Microsoft Infrastructure installed. I went to configure it and found that I also needed the Microsoft add-on for Active Directory configured. So after reading the documentation: https://docs.splunk.com/Documentation/SA-LdapSearch/3.0.1/User/ConfiguretheSplunkSupportingAdd-onforActiveDirectory#Configure_the_add-on_to_send_data_to_Splunk_Cloud I found that I need to configure the add-on for AD on a heavy forwarder on site. The above link sent me here: https://docs.splunk.com/Documentation/SA-LdapSearch/3.0.1/User/ConfiguretheSplunkSupportingAdd-onforActiveDirectory#Configure_the_add-on_for_use_on_a_heavy_forwarder Once at that link, everything starts to fall apart. The documentation style on the Splunk site is so bad: just link after link without much real substance. The lines get blurred very quickly between what I need to do if I have Splunk Cloud versus Splunk Enterprise. Do we have a good guide on how to get these pieces working together? Sorry for sounding grumpy; it has been an adventure. Thanks, Scott
Hi everyone, has anyone integrated a cloud application called OnBase with Splunk? If so, what are the requirements to perform the integration? Thank you.
I am running an ad hoc search over bash_history files, attempting to pull out just the list of commands regardless of the timestamp values. I am not interested in retrieving the timestamps at this point. Here is what a given search returns:

Event 1: #1597921243 (timestamp) followed by whoami — the command whoami is returned as part of Event 1
Event 2: uname — the command uname is returned as a unique event
Event 3: #1597921243 — timestamp returned as a unique event
Event 4: df -h — the command df -h is returned as a unique event
Event 5: #1597678043 — timestamp returned as a unique event

When I execute this search, only Event 1 is returned (the timestamp plus whoami on a separate line):

index=os sourcetype=bash_history host=my_host_name | regex "^#\d+\s+(?P<PGCMD>\w+)"

When I execute this search, 3 events are returned: Event 1 (timestamp + whoami), Event 2 (uname), and Event 4 (df -h):

index=os sourcetype=bash_history host=my_host_name | regex "[a-zA-Z]+"

When I execute this search, 2 events are returned: Event 2 (uname) and Event 4 (df -h):

index=os sourcetype=bash_history host=ps2pr608661 | regex "^\w+"

What I am trying to end up with is just the commands, no timestamps; in essence, the results should be whoami, uname, and df -h and nothing else. I have been searching for a solution, but 1.5 days in I cannot find one. Any help is appreciated.
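The filtering logic itself is easy to sanity-check outside of SPL. A Python sketch over the five events from the post — drop pure-timestamp events, strip a leading `#<epoch>` line from merged events, and keep whatever command text remains:

```python
import re

# The five events described in the post.
events = [
    "#1597921243\nwhoami",  # timestamp merged with the command
    "uname",
    "#1597921243",          # timestamp-only event
    "df -h",
    "#1597678043",          # timestamp-only event
]

def extract_commands(events):
    """Remove '#<epoch>' lines from each event and keep non-empty remainders."""
    commands = []
    for ev in events:
        cleaned = re.sub(r"(?m)^#\d+\s*\n?", "", ev).strip()
        if cleaned:
            commands.append(cleaned)
    return commands

print(extract_commands(events))  # ['whoami', 'uname', 'df -h']
```

In SPL terms the same idea might look like stripping the timestamp with `rex mode=sed "s/^#\d+\s*//"` and then discarding now-empty events — a sketch, untested against your data.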
How do we handle an alert that needs to run during certain hours on Saturday and during different hours on Sunday? Can a cron expression express something like that? We ended up placing the scheduling logic in the query itself.
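A single standard cron expression cannot vary the hour range by day, but the day-of-week field makes two cloned alerts straightforward, e.g. `0 8-12 * * 6` for Saturday and `0 14-18 * * 0` for Sunday (hours here are illustrative). The query-side gate the poster describes can be sketched like this (window hours are placeholders):

```python
from datetime import datetime

# Hypothetical windows: Saturday 08:00-12:59, Sunday 14:00-18:59.
# Python weekday(): Mon=0 ... Sat=5, Sun=6.
WINDOWS = {5: range(8, 13), 6: range(14, 19)}

def should_fire(now):
    """Query-side gate: fire only inside the weekend windows."""
    return now.weekday() in WINDOWS and now.hour in WINDOWS[now.weekday()]

print(should_fire(datetime(2020, 11, 7, 9, 0)))   # Saturday 09:00 -> True
print(should_fire(datetime(2020, 11, 8, 9, 0)))   # Sunday 09:00 -> False
```

Two cloned alerts are usually easier to audit than in-query gating, since the schedule is visible in the alert configuration itself.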
Hello all, I am trying to generate a reason field, based on field values, for why a system shows up in a report. This is the where clause I'm using, which defines what I'm looking for:

| where ((system_class="Echo") AND ('Mem_Util'>=83 OR 'CPU_Util'>=83 OR 'Mem_Al'>=100 OR 'CPU_Al'>=110))

For example, if Mem_Util is the reason a system shows up on the report, I want a reason field at the end of the output that says "Memory Util". What makes it more interesting is that I have 5 different system_classes, with 5 different levels of values for each of the 4 metrics.
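One way to keep 5 classes × 4 metrics manageable is a threshold table keyed by system_class, rather than one giant conditional. A Python sketch of the idea (only the "Echo" thresholds come from the post; labels and other classes would be placeholders):

```python
# Thresholds per system_class; "Echo" values are from the post,
# additional classes would be added here.
THRESHOLDS = {
    "Echo": {"Mem_Util": 83, "CPU_Util": 83, "Mem_Al": 100, "CPU_Al": 110},
}

# Hypothetical human-readable labels for the reason field.
LABELS = {"Mem_Util": "Memory Util", "CPU_Util": "CPU Util",
          "Mem_Al": "Memory Alarm", "CPU_Al": "CPU Alarm"}

def reasons(system_class, metrics):
    """Return labels for every metric at or over its class threshold."""
    limits = THRESHOLDS.get(system_class, {})
    return [LABELS[m] for m, limit in limits.items()
            if metrics.get(m, 0) >= limit]

print(reasons("Echo", {"Mem_Util": 90, "CPU_Util": 50}))  # ['Memory Util']
```

In SPL this usually becomes an `eval reason=case('Mem_Util'>=83, "Memory Util", ...)` per class (possibly combined with `mvappend` when several metrics breach at once); the table above just makes the threshold matrix explicit.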
Hi, I have search results in the format shown in screenshot 1 and need them in the format of screenshot 2. I used transpose and xyseries, but no results populate. Compared to the screenshots, I have additional fields in this table; I only need the Severity field and its counts split into multiple columns as shown in screenshot 2. The rest of the fields should stay as-is. I am missing something — how do I do this? Thanks in advance!

Now: [screenshot 1 not available]

Need to be: [screenshot 2 not available]
Hello guys, could you let us know whether the log format differs if we switch from "Splunk Add-on for Check Point OPSEC LEA" to "Check Point App for Splunk", which would mean we need to modify all our dashboards? Thanks for your help.
Hi, is it possible to use the same database agent to monitor databases on different AppDynamics instances (prod, test, dev, etc.) if they are all located in the same data center? We have a requirement like this: a user was given a separate VM to install the DB agent for their dev and prod application instances. If we provide both instances' entries in the controller XML file, will it work?
Splunk alerts are being quarantined because of an invalid sender. Which backend files need to be modified? How can I make the change to the Splunk configuration file (savedsearches.conf) for the invalid email sender from the GitLab server rather than from Splunk Web, so the change is tracked?
Hi all, I'm looking to create a simple bar chart that compares the monthly data from this year against the monthly data of last year. This search seems to do the job, but unfortunately it also shows the data for all of the months in 2019:

... | timechart count span=1mon | timewrap 1y

Ideally I would like to compare only year-to-date data for 2020 with 2019. Does anyone have any tips on how I might achieve this? Thanks for the help.
We have an existing environment with 100+ servers sending data to the indexers. We never had a deployment server before, and now we want to introduce one so that it's easier to manage the clients. What should I consider before I start planning? Which config files should I be worried about getting overwritten when I add the existing universal forwarders as clients of my deployment server?
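The main thing to watch, in my understanding: once a forwarder becomes a deployment client, any app pushed from the DS with the same name as a local app under $SPLUNK_HOME/etc/apps replaces that local copy — including the inputs.conf and outputs.conf inside it — so the existing forwarder configs should be migrated into DS-managed apps (and serverclasses) before enrolling clients. Enrolling a UF is then just a small deploymentclient.conf (hostname and port are placeholders):

```
[deployment-client]

[target-broker:deploymentServer]
targetUri = deploy.example.com:8089
```

Equivalently, `splunk set deploy-poll deploy.example.com:8089` on each forwarder writes the same stanza.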
I'm trying to set up an alert that will email me when one of my indexes hasn't received any data for the last 3 hours, and make it part of a dashboard. Does anyone have a search string that will do this, please?
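A common pattern for this — a sketch, so adjust the index filter and threshold to taste — uses tstats to find the latest event per index and alerts when the gap exceeds 3 hours:

```
| tstats latest(_time) as latest_event where index=* by index
| eval hours_since = round((now() - latest_event) / 3600, 1)
| where hours_since >= 3
```

Saved as an alert triggering when the result count is greater than 0, it emails on silence; the same search without the final `where` can back a dashboard panel showing freshness for every index.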
Hi, I have a REST call that runs every 24 hours, and the number of events returned is in the region of 500,000+, which obviously takes a few minutes to get everything into Splunk. The problem is that the timestamps are completely off; I want all events to carry the timestamp of the scheduled (cron) run instead of the extracted time. I've tried DATETIME_CONFIG = NONE and DATETIME_CONFIG = CURRENT. Is there anything else I can try? Thanks.
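If the goal is for every event to be stamped with the time it was indexed rather than a time parsed from the payload, the usual props.conf sketch is the one below (sourcetype name is a placeholder). Two caveats worth checking when it "does nothing": the stanza must live on the first parsing-capable instance the data passes through (a heavy forwarder or indexer, not a universal forwarder), and some modular inputs assign the event time themselves at the input layer, in which case props-based timestamping is bypassed entirely.

```
[my_rest_sourcetype]
DATETIME_CONFIG = CURRENT
```

Verifying which instance actually parses the sourcetype (`btool props list my_rest_sourcetype --debug` on each tier) is a quick way to rule out the placement issue.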
Hi Team, in my use case we query multiple hosts in a single search and use a custom trigger condition to fire the alert (e.g. search health="Unhealthy"). The schedule is every 7 minutes (cron: */7 * * * *) over a time range of the last 10 minutes.

We need the alert to trigger only once per host until that host's alert clears, while still alerting for any other host that starts alerting. The purpose is automatic ticket generation: if repeated alerts fire, duplicate tickets are created. If we use a throttle condition here — say, throttle alerts for 1 hour — then during that hour another host may start alerting and I won't receive an alert for it. Please suggest the best way to do this.
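One approach worth testing: trigger the alert once per result and throttle on the host field, so each host is suppressed independently while newly-alerting hosts still fire. In savedsearches.conf terms (the equivalent of the UI's "Trigger: For each result" plus "Throttle > Suppress results containing field value: host"):

```
alert.digest_mode = 0
alert.suppress = 1
alert.suppress.fields = host
alert.suppress.period = 60m
```

Note that suppression lasts for the fixed period rather than "until the host clears"; true until-cleared deduplication usually has to live in the ticketing layer (e.g. correlating on a per-host key so repeat alerts update an open ticket instead of opening a new one).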
Hi, we created a token and shared it with the end user to configure and send logs over secure HTTPS. When I run the curl command, it succeeds the first 2 or 3 times; after that I hit an "OpenSSL SSL_connect: SSL_ERROR_SYSCALL" error, and in between I still see some success messages. How do I find the root cause of this problem?
From the logs, how will I get the number of VMs present on that server?
Hi, I am trying to prevent some sensitive information from being indexed by Splunk, but these configurations are not working, even though the configuration shows up in btool and the regex validates in SPL. Can anyone assist?

props.conf:

[o365:management:activity]
TRANSFORMS-anonymize = info-anonymizer
KV_MODE = json
TRUNCATE = 10485760

transforms.conf:

[info-anonymizer]
DEST_KEY = _raw
FORMAT = $1$2
REGEX = (.*\"SensitiveInformationDetections\"\:\s\{)\"DetectedValues\"\:\s\[.*\]\,\s(\"ResultsTruncated\"\:.*)

I have already validated the regex in SPL, and it works fine:

|regex _raw="(.*\"SensitiveInformationDetections\"\:\s\{)\"DetectedValues\"\:\s\[.*\]\,\s(\"ResultsTruncated\"\:.*)"

and

|rex field=_raw "(?<before>.*\"SensitiveInformationDetections\"\:\s\{)\"DetectedValues\"\:\s\[.*\]\,\s(?<after>\"ResultsTruncated\"\:.*)" |eval _raw=before+""+after
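When the regex validates in SPL but events still aren't masked, the usual suspect is placement rather than the pattern: index-time transforms only run on the first parsing-capable instance the data passes through (a heavy forwarder or indexer — for cloud-bound add-on inputs, typically the HF running the input). The substitution itself can be verified offline; here is the same capture-and-reassemble logic in Python, where `\1\2` plays the role of Splunk's `$1$2` (the sample event is a fabricated stand-in):

```python
import re

# Same structure as the transforms.conf REGEX, two capture groups
# around the DetectedValues array to be dropped.
pattern = (r'(.*"SensitiveInformationDetections":\s\{)'
           r'"DetectedValues":\s\[.*\],\s'
           r'("ResultsTruncated":.*)')

# Fabricated sample event for illustration only.
raw = ('{"SensitiveInformationDetections": {"DetectedValues": '
       '["secret1", "secret2"], "ResultsTruncated": false}}')

scrubbed = re.sub(pattern, r"\1\2", raw)
print(scrubbed)
# {"SensitiveInformationDetections": {"ResultsTruncated": false}}
```

If placement checks out, it may also be worth testing whether these events arrive via a path that skips parsing-phase transforms altogether (e.g. an input that delivers pre-parsed events), in which case an ingest-time eval or masking at the source would be the fallback.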