All Topics



Hi, I created new indexes for another ITSI environment, but I lost some of the entities I had already fixed. Now my issue is that the health score status is not showing in the deep dive. Configure multiple ITSI deployments to use the same indexing layer - Splunk Documentation

I noticed that this search is not returning the two important fields alert_color and alert_value. I suspect this is the issue, but I'm not sure. Why are those fields not showing?

Search: 'get_full_itsi_summary_kpi(SHKPI-bee4cb6f-8691-4ee2-97fd-f40ed45f4acd)' service_level_kpi_only

Thanks, adhoc123
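A quick way to narrow this down is to check whether those fields exist in the summary index at all. A minimal sketch, assuming the default itsi_summary index name and that the health score KPI is stored under itsi_kpi_id with the same SHKPI- id used in the macro call:

    index=itsi_summary itsi_kpi_id="SHKPI-bee4cb6f-8691-4ee2-97fd-f40ed45f4acd"
    | stats latest(alert_color) as alert_color, latest(alert_value) as alert_value

If this returns empty fields, the summary events themselves are missing the values (pointing at the KPI/backfill side); if they appear here but not in the macro output, the problem is more likely in the shared-indexing macro configuration.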
Hi community, I am trying to write a query that looks for bulk email (say >50 messages) from a single sender to multiple recipients, where each message has a unique subject.

    Sender               Recipient                 Subject
    bob @ scamm . com    alice @ mycompany .net    spam for alice
    bob @ scamm . com    jane @ mycompany .net     spam for jane
    bob @ scamm . com    fred @ mycompany .net     spam for fred

I can add this to my search:

    | stats count by subject sender recipient
    | search count>50

but I only want to see results where the subjects are unique while the sender is the same.

Ideally I'd like it to spit out a table of the sender, subject(s), and recipient(s). Thank you
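Since the grouping should be per sender rather than per sender/subject/recipient combination, one approach is to aggregate by sender and compare the distinct subject count to the total message count. A minimal sketch, assuming the events already have sender, recipient, and subject fields extracted (the index and sourcetype here are placeholders):

    index=email sourcetype=mail_logs
    | stats count,
            dc(subject) as unique_subjects,
            values(subject) as subjects,
            values(recipient) as recipients
            by sender
    | where count>50 AND unique_subjects=count

The unique_subjects=count condition keeps only senders for whom every message carried a distinct subject; relax it to something like unique_subjects>count*0.9 if nearly-unique subjects should also match.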
Hello, my Splunk is no longer ingesting emails from our O365 email account. I was not the person who set this up and need assistance in troubleshooting. Can anyone provide assistance/guidance?

There is also an error showing regarding the KV store: "KV Store process terminated abnormally (exit code 14, status exited with code 14)." I'm not sure whether that is related or not. We have a search head cluster set up with 2 indexers that are not clustered.
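A reasonable first step is to look at Splunk's own logs for errors from the mail input and the KV store. A minimal sketch, run on the instance that hosts the mail input:

    index=_internal source=*splunkd.log* (log_level=ERROR OR log_level=WARN)
    | stats count by component
    | sort - count

This should surface whether the failures come from the input side (e.g. ExecProcessor or ModularInputs for a scripted/modular mail input) or from KV store components, which would suggest the two problems are related. From there, drill into the noisiest component to see the actual error messages.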
Hi everyone, I ran into an issue today in SIT where TIV0 was inaccessible because a similar directory was full. I'm trying to set one alert for DEV and one for SIT; the folder path for each environment is:

DEV: /mms/ora1200/u00/oracle
SIT: /mms/ora1201/u00/oracle

This is what I have so far:

    index=A "/mms/ora1200/u00/oracle" source=B
    | stats latest(storage_used*) as storage_used* latest(storage_free*) as storage_free* by host mount
    | where storage_used_percent>90
    | eval storage_used=if(storage_used>1000,(storage_used/1000)." GB",storage_used." MB"),
           storage_free=if(storage_free>1000,(storage_free/1000)." GB",storage_free." MB")

Any feedback will be appreciated.
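Rather than maintaining two nearly identical alerts, one option is a single alert that matches both paths and derives the environment from the mount. A sketch, under the assumption that the path lives in the mount field (the index and source are kept as your placeholders A and B):

    index=A source=B ("/mms/ora1200/u00/oracle" OR "/mms/ora1201/u00/oracle")
    | eval env=case(match(mount,"ora1200"),"DEV", match(mount,"ora1201"),"SIT")
    | stats latest(storage_used*) as storage_used* latest(storage_free*) as storage_free* by host, mount, env
    | where storage_used_percent>90

The env field can then go into the alert message, so one saved search covers both environments.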
I have two streams of data coming into a HEC. One has call direction (i.e. inbound) and the other has call disposition (i.e. allowed). At first I was joining these streams (join), but I found a great thread in the community suggesting stats instead, and with some cleanup I have something like this:

    index="my_hec_data" resource="somedata*"
    | stats values(*) as * by id

which works great, and may not even be related to my actual question. Next I want to count by day, which I could just timechart, but I suppose my real question is: is that the most efficient way to count calls by day? Or should I do some higher-level aggregation somehow? I don't even know if that makes sense, but if there are 2M calls a day and I go back 30d, is "counting 60M rows" the best way to display 'events per day'?
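If the daily count only needs the number of distinct call ids, you can skip the values(*) merge entirely and count distinct ids per day. A sketch:

    index="my_hec_data" resource="somedata*"
    | bin _time span=1d
    | stats dc(id) as calls by _time

For repeated use at this volume, the usual next step is summary indexing or data model acceleration: run the daily count once per day into a summary index, then report over 30 summary rows instead of 60M raw events.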
Hello everyone,

As written in the title, I started using Splunk recently. I would like to know if someone could help me. I have created a dashboard to analyze Windows events. I have a query like this:

    index=windows sourcetype IN (...) EventCode=*
    | stats count by EventCode

Using this search, I get a table with the EventCode in one column and, in the other column, the count of how many times that specific EventCode has appeared. So far everything is fine. How can I retrieve the number of all Windows hosts? I can't figure it out; I've tried a lot of ways but nothing works. Thanks for the help
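If "number of all Windows hosts" means how many distinct hosts sent these events, a distinct count on the host field does it. A sketch (keeping your sourcetype list as-is):

    index=windows sourcetype IN (...) EventCode=*
    | stats dc(host) as windows_hosts

And if you want the host count next to each EventCode in the same table:

    index=windows sourcetype IN (...) EventCode=*
    | stats count, dc(host) as hosts by EventCode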
Hi folks, I'm trying to get all saved searches from my SHC and my ES SH by running the following SPL, but I'm unable to see the ones from my ES SH (the SPL is being run on the SHC):

    | rest /servicesNS/-/-/saved/searches

When running the SPL, the following message appears: "Restricting results of the "rest" operator to the local instance because you do not have the "dispatch_rest_to_indexers" capability." Then I tried running the following SPL and the message disappeared; however, I'm still not able to see the saved searches from my ES SH:

    | rest splunk_server=local /servicesNS/-/-/saved/searches

Any idea about this? Is this because of the missing capability? Am I restricted from making this search?

Thanks in advance.
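The rest command can only reach the local instance plus whatever is configured as a search peer of the instance you run it on, and splunk_server=local explicitly restricts it further. If the ES search head happens to be a search peer of your SHC (that is an assumption; it often isn't), something like this sketch would target it, with <es_sh_name> as a placeholder for its server name:

    | rest /servicesNS/-/-/saved/searches splunk_server=<es_sh_name>
    | table splunk_server, title, eai:acl.app, eai:acl.owner

If ES is not a peer, the usual options are to run the query directly on the ES SH, or to query its REST API remotely (e.g. curl against its management port) rather than through the rest search command.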
I see a lot of developers using Splunk, but many times log files simply keep growing without limit due to debug enablement and chronic failures in the environment that take a long time to fix. It is important that Splunk provide admins the ability to put caps on ingestion for certain data types.
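Until something like per-data-type quotas exists, a common stopgap is to drop the noisiest events at parse time with a nullQueue transform on the heavy forwarder or indexer. A sketch, with chatty_sourcetype as a placeholder for the offending sourcetype and assuming the debug lines contain a literal DEBUG token:

    props.conf
    [chatty_sourcetype]
    TRANSFORMS-drop_debug = drop_debug_events

    transforms.conf
    [drop_debug_events]
    REGEX = \bDEBUG\b
    DEST_KEY = queue
    FORMAT = nullQueue

This doesn't cap volume, but it removes the debug flood while the underlying issue is being fixed.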
I'm having issues with eventtypes from VMware Carbon Black Cloud ingest not being applied correctly, and I can't figure it out, as each search in the chain successfully finds events. These are the three eventtypes that chain together. The first two apply correctly (vmware_cbc_base_index, vmware_cbc_alerts), but not the third (vmware_cbc_malware). From eventtypes.conf:

    [vmware_cbc_base_index]
    search = index=carbonblack_audit

    [vmware_cbc_alerts]
    search = eventtype=vmware_cbc_base_index sourcetype="vmware:cbc:s3:alerts" OR sourcetype="vmware:cbc:alerts"

    [vmware_cbc_malware]
    search = eventtype=vmware_cbc_alerts threat_cause_threat_category="*MALWARE*" NOT threat_cause_threat_category="*NON_MALWARE*"

When I use the search from the third eventtype (vmware_cbc_malware), I do get events. Search:

    eventtype=vmware_cbc_alerts threat_cause_threat_category="*MALWARE*" NOT threat_cause_threat_category="*NON_MALWARE*"
    | stats count by eventtype

    eventtype                count
    vmware_cbc_alerts        65
    vmware_cbc_base_index    65

Can anyone help me figure out why this third eventtype is not being applied?
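One thing worth testing, offered as a sketch rather than a confirmed fix: eventtype definitions that reference other eventtypes add a layer of search-time dependency, and combining that nesting with wildcarded NOT terms is a known source of surprises. Inlining the chain removes the nesting from the equation:

    [vmware_cbc_malware]
    search = index=carbonblack_audit (sourcetype="vmware:cbc:s3:alerts" OR sourcetype="vmware:cbc:alerts") threat_cause_threat_category="*MALWARE*" NOT threat_cause_threat_category="*NON_MALWARE*"

If the inlined version tags correctly, the nesting (or the implicit AND/OR precedence in the vmware_cbc_alerts stanza, which has no parentheses around its OR) is the likely culprit.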
I need to show a tooltip on a panel to let users know that clicking on the value will take them to a drilldown. Is there a way to achieve this without using JavaScript? This is the code for the panel from the source:

    <panel>
      <title>Supported Platforms Count</title>
      <single>
        <title>This metric gives the count of platforms supported by Integration platform engineering team</title>
        <search>
          <query>| inputlookup Integrations_Platform_List.csv | stats count</query>
          <earliest>$global_time.earliest$</earliest>
          <latest>$global_time.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">all</option>
        <option name="height">200</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
        <option name="refresh.display">progressbar</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.size">medium</option>
        <option name="trellis.splitBy">_aggregation</option>
        <drilldown>
          <link target="_blank">search?q=%7C%20inputlookup%20Integrations_Platform_List.csv%0A%7C%20stats%20count&amp;earliest=$global_time.earliest$&amp;latest=$global_time.latest$</link>
        </drilldown>
      </single>
    </panel>

Thanks,
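One JS-free option is to add a small html element to the panel; a title attribute on an element gives a native browser tooltip on hover. A sketch (the wording and the info glyph are just placeholders):

    <panel>
      <title>Supported Platforms Count</title>
      <html>
        <p title="Click the value to open the underlying search">&#9432; Click the number below to drill down</p>
      </html>
      <single>
        ...
      </single>
    </panel>

The hover tooltip only appears over the html element itself, not over the single value, so if the hint must sit on the value itself this approach won't cover it.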
I have a Splunk query in which my intention is to get all ipAddresses for which "Event A" occurred in the window from 24 hours ago to 4 hours ago, but "Event B" did not occur in the last 24 hours for the same ipAddress. It is known that "Event A" will have at most one occurrence per ipAddress (if any), but "Event B" will have multiple occurrences. Following is the query:

    index=prod-* sourcetype="kube:service" "Event A" earliest=-24h latest=-4h
    | table IpAddress
    | search NOT [search index=prod-* sourcetype="kube:service" AND ("Event B") earliest=-24h latest=-0h | table IpAddress ]

Why is the first query not working fine? It still returns results even when there is an ipAddress with "Event A" and multiple "Event B" events for the same ipAddress. But if I add dedup IpAddress to the inner NOT search, then it works fine. Updated query:

    index=prod-* sourcetype="kube:service" "Event A" earliest=-24h latest=-4h
    | table IpAddress
    | search NOT [search index=prod-* sourcetype="kube:service" AND ("Event B") earliest=-24h latest=-0h | dedup IpAddress | table IpAddress ]
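The dedup making a difference is the classic symptom of the subsearch result cap: a subsearch returns at most 10,000 results by default, so without dedup the many duplicate "Event B" rows likely hit the limit and the NOT list gets silently truncated. A single-pass alternative that avoids the subsearch entirely, as a sketch:

    index=prod-* sourcetype="kube:service" ("Event A" OR "Event B") earliest=-24h
    | eval hasA=if(searchmatch("Event A") AND _time<=relative_time(now(),"-4h"), 1, 0)
    | eval hasB=if(searchmatch("Event B"), 1, 0)
    | stats max(hasA) as hasA, max(hasB) as hasB by IpAddress
    | where hasA=1 AND hasB=0
    | table IpAddress

The time condition on hasA reproduces the -24h to -4h window for "Event A" while "Event B" is considered over the full 24 hours.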
I have a dropdown with 3 values: All, A, B. And there are 2 panels, say 1 and 2, which take input in the form of a token (tokenfilter) from the above dropdown. Panel 1 should be displayed and panel 2 hidden when A is selected. Panel 2 should be displayed and panel 1 hidden when B is selected. And lastly, when All is selected, both panels should be displayed.

Is there a way to achieve this in the panels or dashboard? Any pointers would be helpful.
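Simple XML can do this without JavaScript by setting and unsetting visibility tokens in the input's change handler and gating each panel with depends. A sketch (the token and panel names are placeholders):

    <input type="dropdown" token="tokenfilter">
      <label>Filter</label>
      <choice value="All">All</choice>
      <choice value="A">A</choice>
      <choice value="B">B</choice>
      <change>
        <condition value="A">
          <set token="show1">true</set>
          <unset token="show2"></unset>
        </condition>
        <condition value="B">
          <set token="show2">true</set>
          <unset token="show1"></unset>
        </condition>
        <condition value="All">
          <set token="show1">true</set>
          <set token="show2">true</set>
        </condition>
      </change>
    </input>

    <panel depends="$show1$">...</panel>
    <panel depends="$show2$">...</panel>

A panel whose depends token is unset is hidden, so the three conditions map directly onto the visibility rules described above.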
Hello, I have created a search for failed logins for Windows, Linux, and network devices from the Authentication data model, but it is generating a lot of false positive alerts. Please help me fine-tune this search:

    | from datamodel:"Authentication"."Failed_Authentication"
    | search NOT user IN ("sam","sunil")
    | stats values(signature) as signature, dc(user) as "user_count", dc(dest) as "dest_count" latest(_raw) as orig_raw, count by "app","src",user
    | where 'count'>=200 AND user_count=1
    | head 5
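One common source of noise with this shape of search is that the count>=200 threshold applies to the whole search window, so slow, steady failures from service accounts or monitoring tools trip it. A sketch that buckets by hour so the threshold means 200 failures per hour, and adds a ceiling on distinct destinations to focus on single-target brute force (the span and thresholds are assumptions to tune, not recommendations):

    | from datamodel:"Authentication"."Failed_Authentication"
    | search NOT user IN ("sam","sunil")
    | bin _time span=1h
    | stats values(signature) as signature, dc(dest) as dest_count, count by _time, app, src, user
    | where count>=200 AND dest_count<=2

Note that dc(user) in the original is always 1 because user is already in the by clause, so the user_count=1 condition never filters anything; extending the NOT user IN list from a lookup of known scanners and service accounts is usually the other big win.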
Hello,

Is it possible to control timed access to a dashboard or a knowledge object? I do not include the SPL here because I don't believe it is needed at this time.

We have a dashboard populated from the results of several outputlookup files run at 5:00 every morning. The users of this dashboard have been advised not to use the dashboard until 5:45 am. However, it is still possible that they could. As all the outputlookup files are not in place until approximately 5:40, the results on the dashboard might be incomplete or totally inaccurate. Is there a way to control timed access to the dashboard?

Thanks and God bless,
Genesius
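There is no built-in time-of-day access control, but a dashboard can hide its own content until a condition is met by gating panels on a token. A sketch in Simple XML, assuming the 5:45 cutoff (the token name is a placeholder):

    <search>
      <query>| makeresults
    | eval ready=if(tonumber(strftime(now(),"%H%M"))&gt;=545, 1, 0)
    | where ready=1</query>
      <done>
        <condition match="'job.resultCount' &gt; 0">
          <set token="data_ready">true</set>
        </condition>
      </done>
    </search>

    <row depends="$data_ready$">
      ...panels...
    </row>

A sturdier variant checks the lookups themselves (e.g. an inputlookup on the last file written, piped to stats count) instead of the clock, so the dashboard unlocks exactly when the data is actually in place.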
Hi everybody, I have a series of alerts that generate new events that are sent to a specific index and also send an email to a web application, but there is no way to identify these "correlated events" by a unique id. My goal is to be able to relate these indexed events to the event created in the web application using only a number, but this number must be assigned by Splunk. Do you know of a way to assign an increasing numerical value to each new event sent to the index?
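Splunk has no native auto-increment counter at index time, but the alert search itself can mint a number that is increasing across runs and unique within a run before the results are written out. A sketch, assuming the events reach the index via collect (the index name and field name are placeholders):

    ... your alert search ...
    | streamstats count as seq
    | eval event_id=strftime(now(),"%Y%m%d%H%M%S") . "-" . seq
    | collect index=alert_events

Because the same event_id is available in the search results, it can go into the email to the web application too, which gives you the common key you're after.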
We had a user leave, and before he did he asked that I change the ownership of all his reports to another employee. I did that. Today I found out that he owns a lookup. When I looked in knowledge object orphans, it wasn't there. From what I've found online, lookups are completely different. Is there any way in Splunk to find everything a user owns? I would rather be proactive and find the things the user didn't mention than wait for notification that something isn't working.

TIA,
Joe
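The REST directory endpoint lists knowledge objects of many types in one place, so filtering it by owner is one way to build that inventory. A sketch (jdoe is a placeholder; field names can vary a little by version):

    | rest /servicesNS/-/-/directory count=0 splunk_server=local
    | search eai:acl.owner="jdoe"
    | table title, eai:type, eai:acl.app, eai:acl.sharing, eai:location

Lookup table files live under a separate endpoint, so it's worth checking them explicitly as well:

    | rest /servicesNS/-/-/data/lookup-table-files count=0 splunk_server=local
    | search eai:acl.owner="jdoe"
    | table title, eai:acl.app, eai:acl.sharing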
Hi all,

We use email ingestion as an input for several processes, mainly for phishing analysis. So far we have been ingesting from O365 through the EWS app, but we are experiencing some issues, so we want to migrate to ingestion through the Graph API via the Graph app.

The thing is that, comparing the artifacts generated at ingestion time for the same emails between the EWS app and the Graph app, there are differences in the number of artifacts (sometimes more, sometimes less) and in the CEF detail of those of "Email Artifact" type. Even within the containers generated by the Graph app, the different email artifacts created during ingestion (e.g. from an email with other emails attached) have different structures: some of them similar or perhaps equal to the CEF structure generated by the EWS and Parser apps, and some with a new structure exclusive to Graph-generated artifacts. Since the source of the emails is exactly the same and the output type is the same (Email Artifact), we expected the output content to also be the same. There are differences not only in the output structure but in the content as well, mainly in the body content and its parsing.

Has anyone found any documentation explaining the parsing process and the output structure? Any hints about the logic behind the different output data structures?

I'll mention some members who posted about related topics: @phanTom @drew19 @EdgeSync @lluebeck_splunk
I am looking for suggestions as to how best to implement an alerting request made by my users.

Summary
A query is run to count the number of events. The time-weighted difference (in percentage) between one period and the next will be used to trigger the alert if the threshold is met.

Query
I have a query which I am already using on an existing dashboard. I am using tstats to count events, then a rex to group the events based on the first 3 characters of a field (DRA).

    | tstats prestats=t count where index="foo" sourcetype="bar*" DRA=C* by DRA, _time
    | rex field=DRA "^(?<pfx>\D\S{2}).*"
    | timechart span=5m count by pfx useother=f limit=0 usenull=f

The groups and the number of these 'groups' will vary, but the result will be similar to the below:

    _time                  C27    C31    C33
    2022-10-12 13:00:00    116    2      70
    2022-10-12 13:05:00    287    3      20
    2022-10-12 13:10:00    383    6      45
    2022-10-12 13:15:00    259    7      41

I suspect the maximum number of DRA codes that we will see will be 25, although I can break this up into different queries and play with some timing and priorities in the running of the searches.

Goal
The goal is to alert when any percentage changes from one period to the next by more than a set percentage. So, for example, in the above, I might want an alert at 13:05 that 'C33' had changed by ~72% from the previous period.

I Have Tried
Using a mix of streamstats, eval and trendline statements, I have the following, which will alert for a single 'C' code:

    | tstats count as count where index="foo" sourcetype="bar" DRA=C27* by _time span=5m
    | timechart sum(count) as total_count span=5min
    | streamstats current=f window=1 last(total_count) as prev_count
    | eval percentage_errors=round(abs(prev_count/total_count)*100,1)
    | fillnull value=000
    | trendline wma5(percentage_errors) AS trend_percentage
    | eval trend_percentage=round(trend_percentage,1)
    | fillnull value=0 trend_percentage
    | table _time, total_count, prev_count, percentage_errors, trend_percentage

Problems and Concerns
How can I modify my query to account for the variable nature of the DRA code, both in name (Cxx) and in the number of DRA codes returned? I have added 'by' clauses almost everywhere, but have not had success. Each 5-minute period can see up to 70k events per DRA. Any thoughts on running all of the calculations across all extracted DRAs every 5 minutes?

Any suggestions or comments on my line of thinking are appreciated.
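Keeping the data in long format (one row per _time/pfx pair) instead of pivoting with timechart lets streamstats carry a separate previous value per prefix, which handles any number of DRA codes without naming them. A sketch (the 70% threshold is a placeholder):

    | tstats count where index="foo" sourcetype="bar*" DRA=C* by DRA, _time span=5m
    | rex field=DRA "^(?<pfx>\D\S{2}).*"
    | stats sum(count) as total by _time, pfx
    | streamstats current=f window=1 last(total) as prev by pfx
    | eval pct_change=round(abs(total-prev)/prev*100, 1)
    | where pct_change>70

For the example above, C33 going from 70 at 13:00 to 20 at 13:05 gives abs(20-70)/70*100, roughly 71.4%, so the 13:05 row for pfx=C33 would trigger. Since the aggregation is a single tstats over indexed fields, running it across all prefixes every 5 minutes should stay cheap even at 70k events per DRA per period.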
When I import a Threat Intelligence source that contains an IP address, e.g. 1.2.3.4, with weight=60, and then another source imports the same IP 1.2.3.4 with weight=100, what happens to the weight?
Does anybody know a good way to filter out AWS CloudTrail events? I'd like to send events that contain eventType=AwsApiCall to the null queue. My input is configured as "Generic S3" (https://docs.splunk.com/Documentation/AddOns/released/AWS/S3).

This is what I have on my HF, where the Splunk_TA_AWS is installed and configured:

transforms.conf

    [eliminate-AwsApiCall]
    REGEX = \"eventType\":\s+\"AwsApiCall\"
    DEST_KEY = queue
    FORMAT = nullQueue

props.conf:

    [aws:cloudtrail]
    TRANSFORMS-eliminate-AwsApiCall = eliminate-AwsApiCall

Doesn't seem to be filtering... any thoughts?

Thanks,
Marta
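One likely culprit, offered as a guess rather than a confirmed diagnosis: CloudTrail JSON typically has no whitespace after the colon ("eventType":"AwsApiCall"), and \s+ requires at least one whitespace character, so the regex would never match. Making the whitespace optional is a one-character fix:

    [eliminate-AwsApiCall]
    REGEX = \"eventType\"\s*:\s*\"AwsApiCall\"
    DEST_KEY = queue
    FORMAT = nullQueue

It's also worth confirming with btool (splunk btool props list aws:cloudtrail --debug) that the HF doing the parsing is actually picking up both stanzas, since transforms only apply where the data is first parsed.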