All Topics

I have a dropdown with three values: All, A, and B. There are two panels, say panel 1 and panel 2, which take their input from this dropdown via a filter token. Panel 1 should be displayed and panel 2 hidden when A is selected; panel 2 should be displayed and panel 1 hidden when B is selected; and when All is selected, both panels should be displayed. Is there a way to achieve this in the panels or the dashboard? Any pointers would be helpful.
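
A minimal Simple XML sketch of one way this is commonly done, assuming a dropdown token named filter and hypothetical panel contents: the change handler sets or unsets show_panel1/show_panel2 tokens, and each panel's depends attribute references one of them.

```
<fieldset>
  <input type="dropdown" token="filter">
    <label>Filter</label>
    <choice value="*">All</choice>
    <choice value="A">A</choice>
    <choice value="B">B</choice>
    <change>
      <!-- toggle visibility tokens based on the selected label -->
      <condition label="All">
        <set token="show_panel1">true</set>
        <set token="show_panel2">true</set>
      </condition>
      <condition label="A">
        <set token="show_panel1">true</set>
        <unset token="show_panel2"></unset>
      </condition>
      <condition label="B">
        <unset token="show_panel1"></unset>
        <set token="show_panel2">true</set>
      </condition>
    </change>
  </input>
</fieldset>
<row>
  <panel depends="$show_panel1$"><!-- panel 1 search goes here --></panel>
  <panel depends="$show_panel2$"><!-- panel 2 search goes here --></panel>
</row>
```
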
Hello, I have created a search for failed logins for Windows, Linux, and network devices from the Authentication data model, but it is generating a lot of false positive alerts. Please help me fine-tune this search:

    | from datamodel:"Authentication"."Failed_Authentication"
    | search NOT user IN ("sam","sunil")
    | stats values(signature) as signature, dc(user) as "user_count", dc(dest) as "dest_count", latest(_raw) as orig_raw, count by "app", "src", user
    | where 'count'>=200 AND user_count=1
    | head 5
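
A hedged tuning sketch, not a definitive fix: one common source of noise is counting failures over the whole search window, so bucketing by time before aggregating only fires on bursts within a fixed window (the 1-hour span and the 200 threshold are illustrative).

```
| from datamodel:"Authentication"."Failed_Authentication"
| search NOT user IN ("sam","sunil")
| bin _time span=1h
| stats values(signature) as signature dc(user) as user_count dc(dest) as dest_count latest(_raw) as orig_raw count by _time app src user
| where count>=200 AND user_count=1
```
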
Hello, is it possible to control timed access to a dashboard or a knowledge object? I am not including the SPL here because I don't believe it is needed at this time. We have a dashboard populated from the results of several outputlookup files that run at 5:00 every morning. The users of this dashboard have been advised not to use it until 5:45 am; however, it is still possible that they could. Because all of the outputlookup files are not in place until approximately 5:40, the results on the dashboard might be incomplete or totally inaccurate. Is there a way to control timed access to the dashboard? Thanks and God bless, Genesius
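
This isn't access control in the permissions sense, but a minimal Simple XML sketch of one workaround, assuming the cutoff is 05:45 local time on the search head: a hidden makeresults search sets a data_ready token only after the cutoff, and the panels depend on that token.

```
<search>
  <query>| makeresults | eval hhmm = tonumber(strftime(now(), "%H%M"))</query>
  <done>
    <condition match="$result.hhmm$ &gt;= 545">
      <set token="data_ready">true</set>
    </condition>
    <condition>
      <unset token="data_ready"></unset>
    </condition>
  </done>
</search>
<row>
  <panel depends="$data_ready$"><!-- dashboard panels go here --></panel>
</row>
```
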
Hi everybody, I have a series of alerts that generate new events which are sent to a specific index and also send an email to a web application, but there is no way to identify these "correlated events" by a unique ID. My goal is to be able to relate these indexed events to the event created in the web application using only a number, but this number must be assigned by Splunk. Do you know of a way to assign an increasing numerical value to each new event sent to the index?
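
Splunk has no built-in auto-increment counter, so this is only a hedged sketch of a common workaround: generate an ID in the alert search itself that is time-ordered (and therefore increasing between runs) plus a short hash to keep it unique within a run. The field name correlation_id is hypothetical.

```
... your alert search ...
| eval correlation_id = strftime(now(), "%Y%m%d%H%M%S") . "-" . substr(md5(_raw . _time), 1, 8)
```
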
We had a user leave, and before he did he asked that I change the ownership of all his reports to another employee. I did that. Today I found out that he also owns a lookup. When I look in the orphaned knowledge objects, it isn't in there. From what I've found online, lookups are handled completely differently. Is there any way in Splunk to find everything a user owns? I would rather be proactive and find the things the user didn't mention than wait for notification that something isn't working. TIA, Joe
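
A hedged sketch of one way to inventory a user's objects via the REST API, substituting the real username for the placeholder departed_user: the directory endpoint covers most knowledge object types, and lookup table files are appended separately since they live under their own endpoint.

```
| rest /servicesNS/-/-/directory count=0 splunk_server=local
| append [| rest /servicesNS/-/-/data/lookup-table-files count=0 splunk_server=local]
| search eai:acl.owner="departed_user"
| table title, eai:acl.app, eai:acl.owner, eai:acl.sharing
```
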
Hi all, we use email ingestion as an input for several processes, mainly for phishing analysis. So far we have been ingesting from O365 through the EWS app, but we are experiencing some issues, so we want to migrate to ingestion through the Graph API via the Graph app. The thing is that, comparing the artifacts generated at ingestion time for the same emails between the EWS app and the Graph app, there are differences in the number of artifacts (sometimes more, sometimes less) and in the CEF detail of those of "Email Artifact" type. Even within the containers generated by the Graph app, the different email artifacts created during ingestion (e.g., from an email with other emails attached) have different structures: some of them similar or perhaps equal to the CEF structure generated by the EWS and Parser apps, and some with a new structure exclusive to Graph-generated artifacts. Since the source of the emails is exactly the same and the output type is the same (Email Artifact), we expected the output content to also be the same. There are differences not only in the output structure but also in the content, mainly in the body content and its parsing. Has anyone found any documentation explaining the parsing process and the output structure? Any hints about the logic behind the different output data structures? I'll mention some members who posted about related topics: @phanTom @drew19 @EdgeSync @lluebeck_splunk
I am looking for suggestions as to how best to implement an alerting request made by my users.

Summary
A query is run to count the number of events. The time-weighted difference (in percentage) between one period and the next will be used to trigger the alert if the threshold is met.

Query
I have a query which I am already using on an existing dashboard. I am using tstats to count events, then a rex to group the events based on the first 3 characters of a field (DRA).

    | tstats prestats=t count where index="foo" sourcetype="bar*" DRA=C* by DRA, _time
    | rex field=DRA "^(?<pfx>\D\S{2}).*"
    | timechart span=5m count by pfx useother=f limit=0 usenull=f

The groups and the number of these 'groups' will vary, but the result will be similar to the below:

    _time                C27   C31   C33
    2022-10-12 13:00:00  116   2     70
    2022-10-12 13:05:00  287   3     20
    2022-10-12 13:10:00  383   6     45
    2022-10-12 13:15:00  259   7     41

I suspect the maximum number of DRA codes that we will see will be 25, although I can break this up into different queries and play with some timing and priorities for the running of the searches.

Goal
The goal is to alert when any percentage changes from one period to the next by more than a set percentage. So, for example, in the above, I might want an alert at 13:05 that 'C33' had changed by ~72% from the previous period.

I Have Tried
Using a mix of streamstats, eval, and trendline statements, I have the following, which will alert for a single 'C' code:

    | tstats count as count where index="foo" sourcetype="bar" DRA=C27* by _time span=5m
    | timechart sum(count) as total_count span=5min
    | streamstats current=f window=1 last(total_count) as prev_count
    | eval percentage_errors=round(abs(prev_count/total_count)*100,1)
    | fillnull value=000
    | trendline wma5(percentage_errors) AS trend_percentage
    | eval trend_percentage=round(trend_percentage,1)
    | fillnull value=0 trend_percentage
    | table _time, total_count, prev_count, percentage_errors, trend_percentage

Problems and Concerns
How can I modify my query to account for the variable nature of the DRA code, both in name (Cxx) and in the number of DRA codes returned? I have added 'by' clauses almost everywhere, but have not had success. Each 5-minute period can see up to 70k events per DRA. Any thoughts on running all of the calculations across all extracted DRAs every 5 minutes?

Any suggestions or comments on my line of thinking are appreciated.
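
A hedged sketch of one way to handle the variable set of DRA prefixes, keeping the same tstats/rex/timechart base as above: untable turns each pfx column back into rows, so streamstats and the percentage calculation can run by pfx without knowing the codes in advance (the 50% threshold is illustrative).

```
| tstats prestats=t count where index="foo" sourcetype="bar*" DRA=C* by DRA, _time
| rex field=DRA "^(?<pfx>\D\S{2}).*"
| timechart span=5m count by pfx useother=f limit=0 usenull=f
| untable _time pfx count
| streamstats current=f window=1 last(count) as prev_count by pfx
| eval pct_change = round(abs(count - prev_count) / prev_count * 100, 1)
| where pct_change >= 50
```
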
When I import a threat intelligence source that contains an IP address, e.g. 1.2.3.4 with weight=60, and then another source imports the same IP 1.2.3.4 with weight=100, what happens to the weight?
Does anybody know a good way to filter out AWS CloudTrail events? I'd like to send to the null queue any events that contain eventType=AwsApiCall. My input is configured as "Generic S3" (https://docs.splunk.com/Documentation/AddOns/released/AWS/S3). This is what I have on my HF where the Splunk_TA_AWS is installed and configured:

transforms.conf

    [eliminate-AwsApiCall]
    REGEX = \"eventType\":\s+\"AwsApiCall\"
    DEST_KEY = queue
    FORMAT = nullQueue

props.conf:

    [aws:cloudtrail]
    TRANSFORMS-eliminate-AwsApiCall = eliminate-AwsApiCall

It doesn't seem to be filtering... any thoughts? Thanks, Marta
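
A hedged guess rather than a confirmed cause: if the CloudTrail JSON is serialized without a space after the colon, the \s+ in the regex (which requires at least one whitespace character) will never match, so \s* is more forgiving; it is also worth confirming the transform lives on the first full Splunk instance that parses this data (the HF here). A sketch of the looser transform:

```
# transforms.conf on the HF - same stanza, looser whitespace matching
[eliminate-AwsApiCall]
REGEX = "eventType"\s*:\s*"AwsApiCall"
DEST_KEY = queue
FORMAT = nullQueue
```
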
Hi, I'm trying to create a stream for CloudWatch Logs under Splunk Cloud Web, but it is not streaming to the sourcetype/index I have set up. I found out that our Splunk HF is already streaming these CloudWatch Logs (from S3 directly), but with the default configuration (index=aws sourcetype=aws:cloudwatchlogs <resource_id>). Is it possible to customize this from the HF? The "aws_cloudwatch_logs_tasks.conf" is empty. Note: per @jzhong_splunk's answer in https://community.splunk.com/t5/All-Apps-and-Add-ons/Why-are-some-AWS-CloudWatch-logs-not-appearing-in-Splunk/m-p/237123, if using an HF I would need to raise a ticket. Why is that?
Hi all, our license expired, and there was a 45-day gap before the new license was uploaded. I know that search is blocked while indexing continues, and that Splunk reverts to its trial license daily indexing volume. 1. Does Splunk still index all logs coming in during that time? 2. Can logs that were not indexed during that time be reindexed, or what happens to them? Thanks
I have several alerts set up for a series of events. When an alert fires, I want to log it to a new index. The problem when I do that is the new index isn't an index of the original events, but rather of the alerts. So I'm trying to use the "Log Event" action in the alert to add it to the index, and at the same time use the "Event" field of the "Log Event" action to include information about the offending event in JSON format, with token variables from $result$. So my Event field is literally:

    {"field1":"$result.value1$", "field2":"$result.value2$", ...}

This was "kind of" working, but some fields were causing issues with the JSON parsing if they included line breaks, etc. I started tinkering with the props.conf file, as well as the alert query string, to try to do some pre-processing of the data to prevent it from breaking JSON parsing, but now it is just completely broken and I don't understand why. I've reverted the alerts to be working as they always were. My props.conf file is stripped down to simply include:

    [my_sourcetype]
    INDEXED_EXTRACTIONS=json
    TRUNCATE=0
    SHOULD_LINEMERGE=true

I have tried with and without KV_MODE. Sometimes it works, sometimes it doesn't. At this point I'm a little bit lost on what to try or explore. Maybe there's a different approach to what I'm trying to do? I want Splunk to treat the data as fields so I can reference them directly from SOAR / Phantom. Any and all ideas welcome. Thank you!
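
A hedged sketch of one way to keep the Log Event JSON valid without touching props.conf, assuming the breakage really is line breaks and quotes inside the result fields: sanitize those fields at the end of the alert search so the $result.value1$/$result.value2$ tokens are already JSON-safe (value1 and value2 are just the names from the example above).

```
... your alert search ...
| foreach value1 value2
    [ eval <<FIELD>> = replace(replace('<<FIELD>>', "[\r\n]+", " "), "\"", "'") ]
```
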
I am creating a dashboard for reporting, and one of the fields in my search is called 'start date'. When I check the 'start date' column in the Search app, it shows the date in this format: 2022-07-10, which is what I want. When I put the query into a Splunk dashboard, the field value shows like this for all values in the column: 2022-07-10t18:00:00-06:00. Question: how can I have the query show just Year-Month-Day and remove the rest of what is showing above?
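
A minimal sketch, assuming the field is literally named 'start date' and the goal is just the date portion of that ISO-style string: since the first ten characters are always YYYY-MM-DD, a substr at the end of the dashboard query is enough.

```
| eval "start date" = substr('start date', 1, 10)
```
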
Hi all, is there any way to add Citrix application data into Splunk? Please let me know if there is an add-on to ingest data from Citrix Receiver.
10-13-2022 19:05:01.052 +0800 ERROR sendmodalert [20016 AlertNotifierWorker-0] - action=twilio - Execution of alert action script failed
10-13-2022 19:05:01.052 +0800 ERROR sendmodalert [20016 AlertNotifierWorker-0] - Error in 'sendalert' command: Alert script execution failed.
10-13-2022 19:05:01.052 +0800 ERROR SearchScheduler [20016 AlertNotifierWorker-0] - Error in 'sendalert' command: Alert script execution failed., search='sendalert twilio results_file="/opt/splunk/var/run/splunk/dispatch/scheduler__

SMS Alerting from Splunk with Twilio | Splunk
https://www.baboonbones.com/php/markdown.php?document=twilio_alert/README.md

I have followed all the documentation for this but am getting this error. Please help.
I've been able to deploy universal forwarders to dozens of Windows servers that produce IIS logs. I have created a dedicated index, and I have pushed an app (it used to be Splunk supported; they have since moved to a different app package) to those forwarders. The forwarders are set to send the data to our indexer cluster. To cover my bases for the different versions, I have included several different monitor stanzas in the inputs.conf file:

    [monitor://C:\inetpub\logs\...\W3S*\*.log]
    disabled = false
    sourcetype = ms:iis:auto
    index=iis

    [monitor://C:\inetpub\logs\*\W3S*\*.log]
    disabled = false
    sourcetype = ms:iis:auto
    index=iis

    [monitor://C:\Program Files\Microsoft\Exchange Server\V*\Logging\Ews]
    disabled = false
    sourcetype = ms:iis:auto
    index=iis

When deployed to the dozens of servers, I'm not seeing any data come back, or even any path watches, when searching the logs coming back from the universal forwarders. As a test I added several files to a dedicated server and kept playing around with the monitor stanzas, with no luck. When I opened inputs.conf locally on that server in Notepad, the text looked merged, so I added some spaces and line breaks. After restarting the service I can see path watches added, but still nothing coming in. Even when specifying a path to a single file, nothing comes in:

    [monitor://C:\Test\logs\LogFiles\W3SVC1\u_ex221010.log]
    disabled = false
    sourcetype = ms:iis:auto
    index=iis

For something that seems so simple, where am I going wrong?
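
Not a definitive fix, but a simpler stanza worth testing with, assuming the default IIS log location: a single recursive monitor with a whitelist avoids stacking several overlapping wildcard stanzas, and splunkd.log or `splunk list monitor` on the forwarder should show whether the path is actually being picked up.

```
# inputs.conf - minimal test stanza, assuming the default IIS log path
[monitor://C:\inetpub\logs\LogFiles]
disabled = false
recursive = true
whitelist = \.log$
sourcetype = ms:iis:auto
index = iis
```
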
I have the data has "1111|xxx, xxx y|000000|111111|firstname, lastname|10/13/22 02:12:09|" I used TIME_FORMAT = %m/%d/%Y %H:%M:%S and Timestamp prefix = ^(?:[^\|\n]*\|){5} However, I still get ... See more...
I have the data has "1111|xxx, xxx y|000000|111111|firstname, lastname|10/13/22 02:12:09|" I used TIME_FORMAT = %m/%d/%Y %H:%M:%S and Timestamp prefix = ^(?:[^\|\n]*\|){5} However, I still get an error stating could not use strptime to parse the timestamp. Would need help in providing timestamp prefix here.
Hi, I want to use Splunk but am not sure where to start; I am new to it. I have a situation where I have a log file containing all sorts of logs, say category1, category2, category3, etc. I have a dedicated regex parser for each category, say parser1, parser2, and parser3. A single log line will match only one of the parsers. If there is no suitable parser, i.e. no match is found, the line is not eligible to be indexed. I want all of this to happen before indexing. The log source could be either a log file or a stream of logs. Can someone help me with how to parse the whole log file and get each line parsed and indexed in one single index, say myidx? I understand I will have to deploy props.conf and transforms.conf, but how do I configure these files to achieve this? Please help or suggest a better way. TIA

Sample log lines:

1. Sep 01 23:43:47 test_device001 test_device001 default default-log [test_domain][0x0001][mp][alert] mp(Rrocessor): trans(53)[request][109.2.x.z] gtid(127d3b333052): event((test.xsl) Transaction Initiated) TestURI(my/mapped/url) Size(0) Node((test_domain)) userID(test_uid)
2. Sep 05 23:43:47 test_device001 test_device001 default default-log [test_domain][0x0001][mp][alert] mp(Rrocessor): trans(53)[request][109.2.x.z] gtid(127d3b33305): (set-client-idy-head.xsl)*** P O N O D E T<entry><url event((test.xsl) Transaction Initiated) TestURI(my/mapped/url) <http-method>GET</http-method>
3. Sep 04 23:43:47 test_device001 test_device001 default default-log [test_domain][0x0001][mp][alert] mp(Rrocessor): trans(53)[request][109.2.x.z] gtid(127d3b333052): *** NODETYPE(SS) ***FLOW(HTTP{->HTTP) ***OUTG(mysite.test.com)
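
A hedged sketch of the usual "drop everything, then route matches back" pattern, applied on the indexer or heavy forwarder that parses the data, assuming a sourcetype of my_sourcetype and purely illustrative regexes standing in for parser1-parser3; the order of the TRANSFORMS list matters, since a later matching transform overrides the earlier nullQueue routing.

```
# props.conf
[my_sourcetype]
TRANSFORMS-routing = drop_all, keep_parser1, keep_parser2, keep_parser3

# transforms.conf
[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_parser1]
REGEX = Transaction Initiated
DEST_KEY = queue
FORMAT = indexQueue

[keep_parser2]
REGEX = <http-method>
DEST_KEY = queue
FORMAT = indexQueue

[keep_parser3]
REGEX = NODETYPE\(
DEST_KEY = queue
FORMAT = indexQueue
```
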
Hi, I'm starting with ES Threat Intelligence and am wondering how threat intel data is populated into the KV stores used by the correlation search "Threat Activity Detected". As a simple example, I manually added an entry to local_email_intel (which is, of course, enabled). Now I'm expecting the email address to appear in the KV store threatintel_by_email, which is used in the threat matching search for email. But threatintel_by_email is still empty, even though I waited a while for background jobs. I can't find the entered email address in the Threat Artifacts dashboard either. What is my mistake here? What kind of background job do we need to wait for to make my entry available for threat detection? Thanks in advance
Hi all, I have a single value visualisation added in a dashboard. Its background colour depends on the value shown (green for 'Pass' and red for 'Fail'). But somehow it always gives a red background even though the value is 'Pass'. Here is the code I use:

```
<panel depends="$hide_css$">
  <html>
    <style>
      #verdict rect { fill: $verdict_background$ !important; }
      #verdict text { fill: $verdict_foreground$ !important; }
    </style>
  </html>
</panel>
<panel>
  <single id="verdict">
    <search>
      <query>index=temp_index
| search splunk_id=$splunk_id$
| eval ver = verdict.$campaigns_included$
| table verdict.$campaigns_included$</query>
      <done>
        <eval token="verdict_background">if($result.ver$=="Pass", "green", "red")</eval>
        <set token="verdict_foreground">black</set>
      </done>
    </search>
    <option name="colorMode">block</option>
    <option name="drilldown">none</option>
    <option name="height">60</option>
    <option name="rangeColors">["0x53a051","0xdc4e41"]</option>
    <option name="rangeValues">[0]</option>
    <option name="useColors">1</option>
  </single>
</panel>
```

$campaigns_included$ is the value that's chosen in a dropdown. Please help; any help would be appreciated. @bowesmana, requesting your expertise here!
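
A hedged sketch of one possible cause rather than a confirmed fix: in the eval, verdict.$campaigns_included$ without quotes is read as the field verdict concatenated with a string, and ver is then dropped by the table command, so $result.ver$ can never equal "Pass". Quoting the dynamic field name and keeping ver in the table may be enough:

```
<query>index=temp_index splunk_id=$splunk_id$
| eval ver = 'verdict.$campaigns_included$'
| table ver</query>
<done>
  <eval token="verdict_background">if($result.ver$=="Pass", "green", "red")</eval>
  <set token="verdict_foreground">black</set>
</done>
```
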