All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello everyone,

I want to send a Splunk alert to a Slack channel. Below are the steps I have followed; however, the alert is not sent to Slack via the Slack webhook alert action:

- I created a webhook in Slack and put it into the Splunk alert.
- The webhook itself works: we receive alerts from other apps through it.
- The alert itself works: I have tested the same alert with the email action.
- I have not left the message field empty; it contains "" (as advised in a previous Splunk post).

Kindly advise how I can resolve this issue.

Kind regards, DR_GD
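A first debugging step, assuming the Slack alert action logs through Splunk's standard modular alert framework, is to check the action's own log entries in _internal. This is a sketch; the exact action name depends on the add-on installed (often "slack" for the Slack Notification Alert):

```
index=_internal sourcetype=splunkd component=sendmodalert action="slack"
| table _time log_level event_message
```

Errors here (bad URL, proxy issues, HTTP status codes from Slack) usually point directly at the cause.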
Hi,

I need to search for multiple raw strings within a single query. When I search for these strings separately, I get results. But when I combine them, the search gives no results and ends with 'No results found'.

The three queries below work fine:

sourcetype="States*"  *Karnataka*
sourcetype="States*"  *Tamil Nadu*
sourcetype="States*"  *Mumbai*

When I execute the query below, I get 'No results found':

sourcetype="States*"  *Karnataka*  *Tamil Nadu*  *Mumbai*

Can anyone throw some light on this? Thanks in advance.
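Terms placed side by side in SPL are implicitly ANDed, so the combined query only matches events that contain all three strings at once, which is likely why it returns nothing. If the goal is events containing any of the strings, an explicit OR is the usual fix; a sketch (the multi-word value is split into two ANDed terms, since wildcards cannot wrap a quoted phrase):

```
sourcetype="States*" (*Karnataka* OR (*Tamil* *Nadu*) OR *Mumbai*)
```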
Good day,

We are looking at a solution to alert us on abnormal traffic spikes. We leverage the standard deviation and `streamstats` for the moving average, and we are "graphing" the last 2 hours. Last but not least, a cron job runs the search every 2 minutes. Below is the query:

base_search earliest=-121m@m latest=-1m@m
| bin _time span=2m
| stats count by _time
| streamstats avg("count") AS avg stdev("count") AS stdev
| eval lowerBound=(avg-stdev*exact(2))
| eval upperBound=(avg+stdev*exact(2))
| eval isOutlier=if('count' < lowerBound OR 'count' > upperBound, 1, 0)
| eval avg=round(avg,0)
| eval upperBound=round(upperBound,0)
| rename count AS "Events" upperBound AS "Upper Limit" isOutlier AS "Is Outlier" avg AS "Average"
| fields _time, "Events", "Average", "Upper Limit", "Is Outlier"
| search "Is Outlier"=1

The problem I am encountering is that once there is an outlier, it remains in the table for the next 2 hours. For example, an outlier at 7:31am triggers on the next scheduled run at 7:32am, but the entry still shows up at 7:34am, 7:36am, and so forth. I tried the following arguments, but they don't work:

| search "Is Outlier"=1 earliest=-2m@m latest=now()

Does anyone have any idea how I can have the alert show only the last two minutes while retaining the 2-hour moving average? Thank you in advance!
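The earliest/latest arguments only apply to the initial event retrieval, not to rows already in the result set, which is why appending them to the final search has no effect. One hedged approach is to keep the full 2-hour window for the moving average but discard all rows except those in the most recent bucket, filtering on _time at the end; a sketch (the exact offset depends on the bin alignment):

```
... | search "Is Outlier"=1
| where _time >= relative_time(now(), "-3m@m")
```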
Dear all, can you please help? I have a dashboard with several panels including graphs and reports. I would like to create two parallel reports with different time frames in order to compare values. In the source I have the main time frame:

<earliest>$searchtime.earliest$</earliest>
<latest>$searchtime.latest$</latest>

This is applied to one report, and I would like to apply something like this to the second, parallel one:

<earliest>$searchtime.earliest$ - 7d</earliest>
<latest>$searchtime.latest$ - 7d</latest>

Is this possible? Thank you
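Token arithmetic such as $searchtime.earliest$ - 7d is not evaluated in Simple XML, but the offset can be computed inside the second report's search instead. A hedged sketch: a subsearch with addinfo inherits the panel's time range and can shift it back a week (604800 seconds), returning new earliest/latest bounds to the outer search (index=my_index is a placeholder for the report's own base search):

```
index=my_index
    [| makeresults
     | addinfo
     | eval earliest=info_min_time-604800, latest=info_max_time-604800
     | return earliest latest]
| ...
```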
Hello all! I am configuring Splunk on different servers to send the IIS logs. I am doing it by adding the IIS log folder as a data input (Data Inputs -> Files & Directories). But the IIS log files contain old logs, and I only want to send logs no more than two days old to Splunk. I already configured MAX_DAYS_AGO=2 in props.conf, but it doesn't work. I have tried it in these ways:

With the file in ...\etc\system\local\props.conf:

[iis]
MAX_DAYS_AGO=2

Didn't work.

[default]
MAX_DAYS_AGO=2

Didn't work.

Changing the default in ..\etc\system\default\props.conf:

[default]
MAX_DAYS_AGO=2

Didn't work.

I restarted the Splunk service every time I made a change. Could somebody tell me what I am missing? Thanks
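One thing to check: MAX_DAYS_AGO is applied at the parsing tier (the indexers or a heavy forwarder), not on a universal forwarder, and it filters on the parsed event timestamp. An alternative that works on the forwarder itself is ignoreOlderThan in inputs.conf, which skips files whose modification time is older than the given window; a sketch (the monitor path is a placeholder for your IIS log folder):

```
# inputs.conf on the forwarder
[monitor://C:\inetpub\logs\LogFiles]
ignoreOlderThan = 2d
sourcetype = iis
```

Note that ignoreOlderThan works per file, not per event, so a file that is still being written to will be indexed in full.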
Hi, I use the search below but I lose some events because of the following message:

[subsearch]: Subsearch produced 124329 results, truncating to maxout 50000.

`software` earliest=-90d latest=now
| fields MachineID ProductVersion00 ProductName00
| stats last(ProductVersion00) as ProductVersion00 by MachineID ProductName00
| join max=0 type=inner MachineID
    [| search `machineID`
     | fields MachineID Name0
     | stats last(Name0) as Hostname by MachineID]
| stats last(Hostname) as Hostname by ProductName00 ProductVersion00
| rename ProductVersion00 as "Product version", ProductName00 as "Product name"

Is there a workaround for this issue, please?
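The usual workaround is to avoid join (and its subsearch limit) entirely by retrieving both data sets in one base search and stitching them together with eventstats. A hedged sketch, assuming both macros expand to plain base-search filters without pipes:

```
(`software` earliest=-90d latest=now) OR `machineID`
| fields MachineID ProductVersion00 ProductName00 Name0
| eventstats last(Name0) AS Hostname by MachineID
| where isnotnull(ProductName00)
| stats last(ProductVersion00) AS ProductVersion00 last(Hostname) AS Hostname by MachineID ProductName00
| stats last(Hostname) AS Hostname by ProductName00 ProductVersion00
| rename ProductVersion00 AS "Product version", ProductName00 AS "Product name"
```

The eventstats copies each machine's Hostname (from the `machineID` events) onto its software events, after which the `machineID` events can be discarded.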
Hi, I am not able to raise a JIRA ticket through AppDynamics. I have set up a policy so that when the CPU Utilization health rule is violated, I should receive a custom email and a JIRA ticket should be raised. Unfortunately, I only receive the emails, and under the JIRA action it shows "FAILED".
Good afternoon. I am new to Splunk and setting this up. My aim is to push IIS W3C-formatted files from our web server into Splunk Cloud. I have installed the universal forwarder on the web server where the log files currently are, and I am in the process of configuring the forwarder; however, I am having issues. I have set up an index (I believe), but when attempting to configure outputs.conf I am not sure how to populate the command:

./splunk add forward-server <host name or ip address>:<listening port>

Where can I locate the hostname and listening port for my Splunk Cloud deployment?
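For Splunk Cloud, the usual approach is not to run `add forward-server` by hand at all, but to install the Universal Forwarder Credentials app (downloadable from your cloud instance, typically named along the lines of 100_<stackname>_splunkcloud), which ships the correct outputs.conf and certificates to the forwarder. If configured manually, the host is generally your stack's ingest endpoint and the port is 9997; a sketch with a placeholder stack name:

```
./splunk add forward-server inputs.mystack.splunkcloud.com:9997
```

The exact endpoint for your stack is best confirmed from the credentials app or from Splunk Cloud support, since it varies by deployment.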
Hi, does anyone know whether either of these apps provides a means to collect events generated by Azure Key Vault or Active Directory Identity Protection alerts?

- Splunk Add-on for Microsoft Cloud Services
- Microsoft Azure Add-on for Splunk
I recently noticed a huge number of warnings in the _internal logs for our search heads. The events all look like this:

02-04-2021 12:22:08.485 +0300 WARN SearchResultsFiles - Unable to parse site_label, label=invalid due to err="Invalid site id: invalid"

We are running a distributed environment with a search head cluster, and all installations are Splunk 8.1.1. The warnings are logged only on the search heads. When investigating, I see this has occurred for quite some time, but I am very curious as to what it means. There are no other indications in the _internal log that hint at why this warning keeps appearing.

I have, however, discovered that it seems to be related to lookups and perhaps the KV store. The reason I think so is that I can't force this warning with normal searches, but when I open dashboards that use searches with macros and lookups, the warnings appear immediately. I've tried several different dashboards and searches, and it seems consistent that anything with a lookup will produce this warning.

I further suspect this may have started when we recently upgraded to Splunk 8.1.1. I have two standalone servers for test purposes, one running Splunk 8.1.1 and the other running Splunk 8.1.0.1. I have not been able to force this warning on the instance running 8.1.0.1, but the one running 8.1.1 shows these warnings when I open dashboards and advanced searches. I have not found anything in the Splunk known issues about this warning specifically. I don't even know whether it causes any problems other than filling up the _internal log (as far as I know, there are no issues in our environment related to it). So I was wondering whether anyone else has been experiencing these warnings, knows what they are, and knows how to stop them. At peak search time there can be several million events per hour.
One thing I have not yet tried, but will try as soon as possible, is to upgrade one of the standalone servers to Splunk 8.1.2 and see whether that fixes things.
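To quantify the noise and correlate it with specific hosts or time windows, a simple _internal search can help; a sketch:

```
index=_internal sourcetype=splunkd component=SearchResultsFiles "Invalid site id"
| timechart span=1h count by host
```

If the spikes line up with scheduled searches or specific dashboard loads, that narrows down which lookups or macros are involved.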
Hello, I have a problem with dnslookup. I want to find the hostname for an IP address, where the IP is that of a host sending data to one of the indexers. The dnslookup on the search head is not able to resolve it, while, for example, a DNS lookup on the indexer can. I am looking for a solution; I searched for a way to run dnslookup from an indexer and get the result back, but I haven't found anything.
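For reference, the bundled dnslookup is an external lookup that resolves via the search head's own resolver, so the search head needs the same DNS visibility as the indexers. A typical invocation, assuming the stock dnslookup definition from the Search app with its clientip/clienthost field names:

```
| makeresults
| eval ip="10.0.0.1"
| lookup dnslookup clientip AS ip OUTPUT clienthost AS hostname
```

If the search head cannot reach the relevant DNS zone, fixing the resolver configuration on the search head (e.g. /etc/resolv.conf on Linux) is usually the actual remedy rather than trying to run the lookup on an indexer.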
The alert should trigger each time the count of events in the last 30 minutes is less than 10, but it should aggregate alerts if the count is more than 10 in the last 30 minutes.
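A hedged sketch of such an alert search (the index and filter are placeholders), scheduled every 30 minutes with the trigger condition "number of results is greater than 0", so it fires only when the count drops below the threshold:

```
index=my_index earliest=-30m@m latest=@m
| stats count
| where count < 10
```

Throttling of repeated triggers can then be handled in the alert's own throttle settings rather than in the search.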
Hello all, I have a problem with which I am currently stuck. Here is a short explanation.

For the automated installation of the Splunk forwarder, I wanted to perform a customized installation via GPO. Using Orca, I adapted the MSI file and added it to the extra GPO. The goal is that new servers joining the domain install the forwarder directly at system startup. This works without any problems: the forwarder and the server are shown in forwarder management. But I can't find the server in search; it seems the forwarder does not send any data to the Splunk Enterprise server.

After comparing the server with a second, working one, I noticed that inputs.conf is missing on the new server. As soon as I copy it from a running server and change the corresponding hostname, the Enterprise server receives data from the new server. My question now is: why is the file not installed during setup? Is there something I may have forgotten during the customization via Orca? Thank you in advance for your answers.
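As a point of comparison, the forwarder MSI exposes documented public properties that generate the basic configuration at install time, so a GPO or msiexec deployment can avoid hand-editing inputs.conf; a sketch with placeholder hostnames:

```
msiexec.exe /i splunkforwarder.msi AGREETOLICENSE=Yes RECEIVING_INDEXER="splunk.example.com:9997" DEPLOYMENT_SERVER="ds.example.com:8089" /quiet
```

If the MSI was edited with Orca, it is worth checking whether the edit accidentally dropped the properties or custom actions that write the local .conf files; alternatively, pointing new forwarders at a deployment server and distributing inputs.conf from there sidesteps the problem.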
Hello, I am quite new to Splunk and this is my first post, hoping I can get some help from this awesome community.

I have two systems, System A and System B. System A receives customer information which is then sent to System B. The data in both systems has the exact same fields, including a unique customer ID with the same name in both systems. I want to create a dashboard where I can select a time period and see only the problematic customers that exist only in System A, meaning they haven't been sent to System B for some reason. This is my search to see all the data:

index=systemA OR index=systemB
| fields customer_ID, systemA_Timestamp, systemB_Timestamp
| stats values(*) as * by customer_ID
| table customer_ID, systemA_Timestamp, systemB_Timestamp

So to summarize: I want to see customer_IDs that exist only in System A. I am not sure which function to use here; I have been experimenting with isnull(systemB_Timestamp) with no success. join is not an option, as the 50 000-result limit might be a problem.

I would be very grateful for any help!
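One hedged approach that avoids join entirely is to group by customer_ID and keep only IDs seen in a single index; a sketch (the index value stored on each event is the literal index name, which Splunk keeps lowercase, so the string comparison assumes lowercase names):

```
index=systema OR index=systemb
| stats dc(index) AS index_count values(index) AS indexes by customer_ID
| where index_count=1 AND indexes="systema"
```

dc(index) counts how many distinct indexes each customer appears in, so index_count=1 combined with the System A name isolates exactly the customers that never reached System B.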
I am currently collecting Broker metrics from CloudWatch, where the current configuration in my input's advanced section is:

[{"Broker":[".*"]}]   (works OK)

But I would also like to collect the Queue and Topic dimensions, and this is not working:

[{"Queue":[".*"]}]

Is it possible at all to collect these metrics with this add-on, or is there a mistake in my configuration?
In the Mitre ATT&CK Framework dashboard of the Splunk Security Essentials app there is an unclear checkbox: the label says "Show Only Available Content" and the tooltip says: "This checkbox filters out the MITRE ATT&CK Techniques that do not have an associated detection in this Splunk environment, i.e. it removes all cells with zeros."

This checkbox does not filter the data to show only the available detections, nor does it remove cells where Active, Available, and Needs Data are all zero. What it actually filters on is the TTPs that have a noted technique or are detectable in the actual environment. The actual state is shown in the first picture: the filter is active, yet cells with "Active", "Available", and "Needs Data" all at zero are still displayed.

We improved the dashboard by changing the label on the first checkbox and introducing three new checkboxes:

- Show only active detections
- Show only available detections
- Show only needs data detections

The second picture shows the dashboard with the new checkboxes, and the third shows how the filter works: only cells with the "Needs Data" parameter != 0 are displayed in the matrix. To achieve this, we edited these files:

- mitre_overview.xml: the dashboard file, to edit the checkboxes, labels, and panel search
- advisor_analytics.js: to edit the tooltips' text

The checkboxes are built in this way:

<input type="checkbox" token="active_content_token" searchWhenChanged="true">
  <label>Show Active Detections</label>
  <choice value="'Active'&gt;0">Yes</choice>
  <default>0=0</default>
  <delimiter> </delimiter>
</input>

If the box is checked, the value of "Yes" is <fieldname>>0; the default value is an always-true clause. The picture shows the checkbox used for selecting only the "Active" rules, so the value is 'Active'>0 and the default is 0=0. The search in the matrix panel is changed in the last three evals:

...
| eval "<<FIELD>>" = if(text!="" AND $needs_data_token$ AND $active_content_token$ AND $available_content_token$, mvappend("TechniqueId: ".'<<FIELD>>_TechniqueId', "Technique: ".Technique, "Color: ".color, "Opacity: ".opacity, "Active: ".p0_count, "Available: ".p1_count, "Needs data: ".p2_count, "Total: ".total_count, "Selected: ".p3_count, "Groups: ".'<<FIELD>>_Groups'), null)
| eval "<<FIELD>>"="{".mvjoin(mvmap('<<FIELD>>',"\"".mvindex(split('<<FIELD>>',": "),0)."\": \"".mvindex(split('<<FIELD>>',": "),1)."\""),",").Selected_Threat_GroupsJson.Selected_SoftwareJson.",".Software_Json.Sub_Technique."}"
| eval count = count + 1
...

The .js file is changed only to display appropriate text in the labels; these are the changes:

/**
 * The below defines tooltips.
 */
require(["jquery", "splunkjs/ready!", "bootstrap.popover", "bootstrap.tooltip"], function($, Ready) {
    /* edited rules */
    $("label:contains('Show Active Detections')").prop('title', 'Filters TTPs showing only the active rules');
    $("label:contains('Show Available Detections')").prop('title', 'Filters TTPs showing rules available but not active');
    $("label:contains('Show Needs Data Content')").prop('title', 'Filters TTPs showing rules that need data to work');
    $("label:contains('Show Only In Scope Content')").prop('title', 'This checkbox filters out the MITRE ATT&CK Techniques that do not have a defined technique for detection or a defined rule for detection in this Splunk environment.');
});

We ask that these improvements be introduced in the next update; if needed, we can provide the files with our changes.
Hi team, I have a dashboard showing event date, event title, AD location, logon location, and IP address; I have visualized all the data in the dashboard. In my Splunk query, I need to ignore events where the AD and logon locations are the same, so that those events are not displayed in the dashboard. Here is an example:

AD location: Almaty,KZ
Logon location: Almaty city, Almaty, KZ

In this case I need to match any of the paired values, like KZ or Almaty; if anything is the same, those events should be ignored in my dashboard. I tried using like() and the != operator but couldn't get the search working properly. A quick response would be much appreciated. Thanks,
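One hedged sketch (the field names with spaces are assumptions from the description, hence the single quotes; mvmap requires Splunk 8.0+): split the AD location on commas and drop events where any part also appears in the logon location:

```
...
| eval ad_parts=split('AD location', ",")
| eval overlap=mvcount(mvmap(ad_parts, if(match('Logon location', trim(ad_parts)), "x", null())))
| where isnull(overlap)
```

The mvmap emits a marker for every AD-location token found in the logon location, so overlap stays null only when the two locations share no token, which is exactly the set of events to keep.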
Current output:

Disconnected_time        Disconnected_Session_Name  count
2021-02-02T02:04:29.000  RDP-Tcp#10                 12
2021-02-02T02:15:55.000  RDP-Tcp#27                 6
2021-02-02T03:25:10.000  RDP-Tcp#10                 11
2021-02-02T09:30:59.000  RDP-Tcp#27                 5

PreviousEventTime should be generated based on a "Disconnected_Session_Name" match. Example:

Disconnected_time        Disconnected_Session_Name  count  PreviousEventTime
2021-02-02T02:04:29.000  RDP-Tcp#10                 12
2021-02-02T02:15:55.000  RDP-Tcp#27                 6
2021-02-02T03:25:10.000  RDP-Tcp#10                 11     2021-02-02T02:04:29.000
2021-02-02T09:30:59.000  RDP-Tcp#27                 5      2021-02-02T02:15:55.000
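A streamstats with current=f carries the previous row's value within each session name, which produces exactly this kind of lag column; a sketch (assuming the rows are first sorted ascending by time):

```
...
| sort 0 Disconnected_time
| streamstats current=f window=1 last(Disconnected_time) AS PreviousEventTime by Disconnected_Session_Name
```

current=f excludes the current row, and window=1 keeps only the immediately preceding event per session, so the first row of each session gets no PreviousEventTime, matching the desired output.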
Hi, I'm a trial user of Splunk. I have a setup in Azure: one Azure VM running Splunk Enterprise and four Azure VMs with universal forwarders that should send data to the Enterprise server. I can see those instances listed on the Enterprise server in Forwarder Management, but the UFs are not sending any data.

Ports 9997 and 8089 are open, both inbound and outbound, on the servers with a UF and on the server running Enterprise, and they are also open in the Azure NSG for all VMs. Looking at splunkd on the servers with a UF, the handshake is done and the Enterprise server IP is accessed. When restarting a UF, it reports that all is fine (the port is open, etc.), but nothing more happens. I can't see the other VMs with a UF as a host when searching "index=_*", only the one running Enterprise, i.e. itself. I don't know how to troubleshoot further. Earlier it gathered events from the server running Enterprise, but not anymore: it captured 6928 events and nothing has happened after that. There is a warning as in the attached picture. Any ideas? Thanks!
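Appearing in Forwarder Management only proves the management channel (8089) works, not the data channel (9997). Two hedged checks: on the indexer, confirm incoming data connections are actually being accepted, for example with:

```
index=_internal source=*metrics.log* group=tcpin_connections
| stats count by sourceIp hostname
```

On each forwarder, `$SPLUNK_HOME/bin/splunk list forward-server` should show the indexer under "Active forwards"; if it appears under "Configured but inactive", the 9997 path, or a missing [splunktcp://9997] receiving input on the indexer, is the place to look.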
Hi, I am trying to install the latest version of bare-metal UBA on RHEL 7.8. I have followed the requirements and steps mentioned in the Splunk docs. When I ran the pre-check script, I noticed the following:

/var/log symlinks: 13 <= expecting 14; verify missing link ... 'containers' symlink not found

It looks like the containers folder was not created in /var/log. It also showed me this:

/var/log perm/owner: lrwxrwxrwx. 1 root root 23 Feb 3 12:58 /var/log/kafka -> /var/vcap/sys/log/kafka <= issue with one (or more) log sub-directories

The owner for this should be caspida:caspida, correct? It also showed me this:

interface: '<%' <== system.network.interface value in /etc/caspida/local/conf/uba-site.properties does not match 'eth0'

The Splunk docs mention that if the network interface is not the default eth0, you should edit the configuration file /etc/caspida/local/conf/uba-site.properties and add the entry system.network.interface=<interface> with the corresponding interface. My NIC is already eth0.

Any assistance will be appreciated. Thanks
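Based purely on what the pre-check output suggests (the paths and the caspida user below are assumptions taken from the messages above, not verified against the UBA docs), the missing link and the ownership issue could be sketched like this:

```
# Create the missing 'containers' symlink alongside the existing ones
sudo ln -s /var/vcap/sys/log/containers /var/log/containers

# Symlinks themselves stay owned by root; fix ownership on the targets
sudo chown -R caspida:caspida /var/vcap/sys/log/kafka /var/vcap/sys/log/containers
```

For the interface check, the literal '<%' in the output hints that the system.network.interface line in uba-site.properties may contain an unexpanded template value rather than eth0, so inspecting that line in the file is probably the first step.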