Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Does anyone here have experience running the CrowdStrike Falcon Sensor in their Splunk environment? I've found the following: https://docs.splunk.com/Documentation/Splunk/8.2.5/ReleaseNotes/RunningSplunkalongsideWindowsantivirusproducts but it references on-access AV, and since CrowdStrike is a behavioral AV, that guidance likely isn't fully applicable. I have a case open with Splunk with this same question, but I wondered if the community had any experience, do's/don'ts, best practices, etc. My gut says I won't see a substantive performance impact, but I'd love to have a little more knowledge before I start deploying the agent. Searching for this online has proven nigh impossible, since CrowdStrike-to-Splunk integration is very common and almost all the search hits focus on ingesting CrowdStrike logs, not on actually running the agent on a Splunk environment. For reference, I have a modestly sized distributed architecture with three search heads and three indexers (not clustered), in addition to a deployment server and multiple forwarders.
I see in the Splunk docs that summary indexing does not count against your license. The docs also say that summary indexes are built via transforming searches over event data. If I use a scheduled report that does not use a transforming command and saves the data to an index, will that count against the license? I.e., I want to extract a subset of data from the main index and save certain fields to a new index so that a role doesn't have access to all the data.
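For concreteness, a minimal sketch of the kind of scheduled search meant here (the index, sourcetype, and field names are made up), using `collect` to copy a subset of fields into a separate index without any transforming command:

```spl
index=main sourcetype=myapp
| fields user, action, status
| collect index=restricted_subset
```

The licensing question then is whether events written this way via `collect` are treated like summary-index data (not metered) or like ordinary indexed data.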
I have this table and I'm trying to send it as a report/alert every morning to our Teams chat group. This is how it's getting sent out: it's only showing the first result of every row. Here's the query:

| webping http://CTXSDC1CVDI041.za.sbicdirectory.com:4444/grid/console
| append [ webping http://CTXSDC1CVDI042.za.sbicdirectory.com:4444/grid/console ]
| append [ webping http://CTXSDC1CVDI043.za.sbicdirectory.com:4444/grid/console ]
| append [ webping http://CTXSDC1CVDI044.za.sbicdirectory.com:4444/grid/console ]
| append [ webping http://CTXSDC1CVDI045.za.sbicdirectory.com:4444/grid/console ]
| append [ webping http://CTXSDC1CVDI046.za.sbicdirectory.com:4444/grid/console ]
| append [ webping http://CTXSDC1CVDI047.za.sbicdirectory.com:4444/grid/console ]
| append [ webping http://CTXSDC1CVDI048.za.sbicdirectory.com:4444/grid/console ]
| append [ webping http://ctxsdc1cvdi013.za.sbicdirectory.com:4444/grid/console ]
| append [ webping http://CTXSDC1CVDI049.za.sbicdirectory.com:4444/grid/console ]
| append [ webping http://CTXSDC1CVDI050.za.sbicdirectory.com:4444/grid/console ]
| eval timed_out = case(timed_out=="False", "Machine On", timed_out=="True", "Machine Off")
| eval response_code=if(response_code==200, "Hub and Node Up", "Hub and Node Down")
| rex field=url "http:\/\/(?<host_name>[^:\/]+)"
| table host_name response_code timed_out total_time
Hello, I have a table, and I want the result shown in my screenshot. I am not sure which tool (chart, table, or anything else) and which arguments would be best to explore and learn in order to get the result I want. Do you have any advice? Thank you.
How do I customize the Phantom dashboard time filters dropdown box (see screenshot below)? For a Phantom instance, we have started exploring the data retention features of Splunk Phantom, keeping less than 1 year of Phantom data. We would like the maximum filter to equal the current number of days of data retention; otherwise, users are misled by time filters that exceed the retention window. A feature that might be nice to have is a way to tie the Phantom dashboard time filters dropdown box to the number of days of data retention.
All, I need some help on a problem I am trying to solve. Problem: I need to calculate the average number of events per unique user, per day, over a 14-day period (excluding weekends). Basically, we have users logging into a system, and I want to see if a threshold of, say, 10% or more above the norm is reached for a particular user. The output would then list the usernames in violation of the above. Thanks for any guidance.
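A sketch of one way to structure this (the index and the `user` field name are assumptions, since the actual data wasn't shown): bin events by day, drop weekend days, compute each user's daily average, and keep days that exceed it by 10%:

```spl
index=auth earliest=-14d@d
| eval dow=strftime(_time, "%a")
| where dow!="Sat" AND dow!="Sun"
| bin _time span=1d
| stats count AS daily_events BY user, _time
| eventstats avg(daily_events) AS avg_events BY user
| where daily_events > avg_events * 1.10
| table user, _time, daily_events, avg_events
```

The 1.10 multiplier encodes the "10% above the norm" threshold and could be parameterized.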
I'm seeing the ERROR message "may have returned partial results" from a few indexers. Logs from those indexers show the following error messages:

WARN CacheManagerHandler - Localization failure has been reported, cache_id="bid|index_name~8264~6A0ED00A-E4AB-4B46-9F69-CD517B4C8965|", sid="remote_*_1646235912.747477_6AFBB424-8451-40A4-A05C-A0337BDBC296", errorMessage='waitFor probe, cache_id="bid|index_name~8264~6A0ED00A-E4AB-4B46-9F69-CD517B4C8965|", did not localize all files before reaching download_status=idle files={"file_types":["tsidx","bloomfilter","deletes"]} local_files={"file_types":["dma_metadata","strings_data","sourcetypes_data","sources_data","hosts_data","lex","tsidx","bloomfilter","journal_gz","other"]} failure_code=0 failure_reason='

Any idea what's causing this and how to fix it?
Hi everyone, I'm trying to parse JSON inline. I'm already using KV_MODE = json, but I'm trying to achieve selective groups. Essentially, I want to capture two values from a group if it has an exclusion type. Sample JSON:

[{"ruleGroupId":"AWS#AWSManagedRulesAmazonIpReputationList","terminatingRule":null,"nonTerminatingMatchingRules":[],"excludedRules":null},{"ruleGroupId":"AWS#AWSManagedRulesBotControlRuleSet","terminatingRule":null,"nonTerminatingMatchingRules":[],"excludedRules":null},{"ruleGroupId":"AWS#AWSManagedRulesCommonRuleSet","terminatingRule":null,"nonTerminatingMatchingRules":[],"excludedRules":[{"exclusionType":"EXCLUDED_AS_COUNT","ruleId":"SizeRestrictions_BODY"}]},{"ruleGroupId":"AWS#AWSManagedRulesKnownBadInputsRuleSet","terminatingRule":null,"nonTerminatingMatchingRules":[],"excludedRules":null}]

So for this I want to capture the ruleGroupId name only if its excludedRules is not null, and then capture the exclusionType. Any help would be appreciated.
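One possible approach (a sketch, assuming the array above is the raw event and the field names match the sample): expand the array with spath and mvexpand, then keep only elements whose nested exclusionType exists:

```spl
... | spath path={} output=rule_group
| mvexpand rule_group
| spath input=rule_group path=ruleGroupId output=ruleGroupId
| spath input=rule_group path=excludedRules{}.exclusionType output=exclusionType
| spath input=rule_group path=excludedRules{}.ruleId output=excludedRuleId
| where isnotnull(exclusionType)
| table ruleGroupId, exclusionType, excludedRuleId
```

For the sample above, only the AWSManagedRulesCommonRuleSet row (exclusionType EXCLUDED_AS_COUNT) should survive the filter.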
Hello, we have Splunk running in an AWS account, and getting AWS CloudWatch metrics data from that account is no issue at all. However, we have a second AWS account, and I currently find no way to assume the IAM role of that other account. Using an IAM user is forbidden for security reasons. Of course, we are using the standard Splunk App for AWS. It would help a ton if anyone has an idea. Regards, Mike
Hi, I can't get Splunk to use the content of timestamp_start as _time. This is an example of the log:

canale=<value>;an=<value>;num_fattura=<value>;data_emissione=2022-01-01;timestamp_start=2022-03-02 11:22:00;timestamp_end=2022-03-02 11:22:02;total_time=1.56035;http_code=200;purl=<value>

and this is what I get as _time: 2022-01-01 11:22:00. I found a configuration that should work, so I edited the props.conf file on the deployment server, but even though I can see the "new" props.conf on the forwarder and on the deployment server, newly indexed files still have the wrong timestamp.

[my_sourcetype]
SHOULD_LINEMERGE=false
NO_BINARY_CHECK=true
TIME_FORMAT=%Y-%m-%d %H:%M:%S
TIME_PREFIX=.*\d*-\d*-\d*\;timestamp_start=
MAX_TIMESTAMP_LOOKAHEAD=19

After editing the props.conf, I reloaded the deployment server (splunk reload deploy-server) and then restarted Splunk on the deployment server and on the forwarder. My Splunk version is 6.5.1. Thanks for any help you may be able to give me!
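For comparison, a simpler stanza that anchors TIME_PREFIX directly on the field name rather than on a greedy date pattern (a sketch only, assuming the log format shown above):

```ini
[my_sourcetype]
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
# Anchor on the literal field name so the data_emissione date cannot match first
TIME_PREFIX = timestamp_start=
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
```

Note also that timestamp extraction happens at parse time, so this props.conf must reach the instance that parses the data (an indexer or heavy forwarder); deploying it only to a universal forwarder has no effect.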
I have this date/time format (see screenshot), and I need to add 4 hours to each field. Is it possible? If yes, please help me.
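If the field is a string, one common pattern is to parse it with strptime, add 4 hours (14,400 seconds), and format it back. The field name and format string below are assumptions, since the original format wasn't shown:

```spl
... | eval my_time_plus4 = strftime(strptime(my_time, "%Y-%m-%d %H:%M:%S") + 4*3600, "%Y-%m-%d %H:%M:%S")
```

The format string passed to strptime must match the field's actual layout exactly, or the result will be null.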
Hi Community, I have a scenario where I am getting emails every 5 minutes with the list of services that are not running. The list largely repeats over time, with a few new records added now and then. Let's say I want to get the list of records just once over a period of 60 minutes. Will the throttle settings work for this use case if I set the trigger to send just once? Will all the records be suppressed over the time range, or will I get emails for the new records? How does that work? I had set the throttle settings to send an email for each record where the above conditions were met, but I got an email for every record in the query results, which is tiring and confusing. Should I be using my own throttle settings? How should this problem be approached? Thanks in advance!! Regards, Pravin
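For reference, per-result throttling is usually expressed like this in savedsearches.conf (the stanza name and the field name `service_name` are assumptions; the field should be whichever one identifies a result as "the same" across runs):

```ini
[My services alert]
alert.suppress = 1
alert.suppress.period = 60m
alert.suppress.fields = service_name
```

With alert.suppress.fields set, a result is suppressed only if a result with the same field value already fired within the period, so genuinely new services should still trigger an email during the suppression window.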
Hi Team, how do I get the output below using a Splunk SPL query from the input below?

INPUT:

_time    url        scannedissues
1-Feb    abc.com    issue1
1-Feb    abc.com    issue2
1-Feb    abc.com    issue3
1-Feb    abc.com    issue4
5-Feb    abc.com    issue1
5-Feb    abc.com    issue3
5-Feb    abc.com    issue4
7-Feb    abc.com    issue1
7-Feb    abc.com    issue3
10-Feb   abc.com    issue1
10-Feb   abc.com    issue2
10-Feb   abc.com    issue3
14-Feb   abc.com    issue1
14-Feb   abc.com    issue2
14-Feb   abc.com    issue5

Expected OUTPUT:

url        scannedissues    LatestTime    EarliestTime
abc.com    issue1           14-Feb        1-Feb
abc.com    issue2           14-Feb        10-Feb
abc.com    issue5           14-Feb        14-Feb

Can someone guide me on the SPL command logic to achieve the above output? Thanks in advance!
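The expected output keeps only the issues present in the most recent scan, with each issue's first and last sighting. A sketch of that logic (assuming _time is a real timestamp field):

```spl
... | stats earliest(_time) AS EarliestTime, latest(_time) AS LatestTime BY url, scannedissues
| eventstats max(LatestTime) AS newest BY url
| where LatestTime = newest
| fields - newest
| fieldformat LatestTime = strftime(LatestTime, "%d-%b")
| fieldformat EarliestTime = strftime(EarliestTime, "%d-%b")
```

The eventstats/where pair filters the per-issue stats down to issues seen on the newest scan date for each url.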
Hi there, I am looking to produce an output where, for each status, the row with the maximum count is displayed. For example, I am looking for something like | stats max(count(errors)) by status.

time                status    errors       count
2022-03-02 05:30    100       not found    100
2022-03-02 05:30    200       success      300
2022-03-02 05:30    300       failed       500
2022-03-02 06:30    100       not found    400
2022-03-02 06:30    200       success      500
2022-03-02 06:30    300       failed       600
2022-03-02 07:30    100       not found    200
2022-03-02 07:30    200       success      700
2022-03-02 07:30    300       failed       200

What I am looking for is the max count for each status and error:

time                status    errors       count
2022-03-02 05:30    100       not found    400
2022-03-02 06:30    200       success      700
2022-03-02 07:30    300       failed       600

I tried many things but with no luck; could someone help with this?
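One way to keep the whole row belonging to each status's maximum (a sketch, assuming count, status, and errors are already fields as in the table above):

```spl
... | eventstats max(count) AS max_count BY status
| where count = max_count
| table time, status, errors, count
```

Unlike a plain `stats max(count) BY status`, the eventstats/where pattern preserves the other columns (time, errors) of the winning row.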
Is there any way to create a file with a list of IPs that I can use in the search field? I am trying to search for IPs that are not in this specific list, but I don't want to recreate the list for every search. For instance, I might want to look through the Zeek conn.log for bad-guy IPs from a predefined list of bad-guy IPs. Thank you for any help.
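Lookup files are the usual answer here. A sketch, assuming a CSV uploaded as bad_guy_ips.csv with a single column named ip, and that the Zeek conn events carry the client address in id.orig_h (both names are assumptions):

```spl
sourcetype="bro:conn:json" NOT [ | inputlookup bad_guy_ips.csv | rename ip AS "id.orig_h" | fields "id.orig_h" ]
```

The subsearch expands to an OR of the listed addresses; dropping the NOT flips it to match only traffic involving the listed IPs.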
Hi Experts, my SPL query:

...| eval elapse_range=case( TOTAL_ELAPSE>0 AND TOTAL_ELAPSE<4, "Green", TOTAL_ELAPSE>4 AND TOTAL_ELAPSE<8, "Yellow", TOTAL_ELAPSE>8, "Red")
| chart values(TOTAL_ELAPSE) as TOTAL_ELAPSE over JOBID by elapse_range

Statistics table:

JOBID       Green    Red     Yellow
SZ146BKP             8.2
SZ11BKP              8.6     7.9
SZ16BKP              8.6
SZSWTCNT             8.7
SZ00D       T39              9.5
                             9.8
                             9.9
SZ24                         10.6
                             11.0
SZ07        1.7      12.7
SZ04                 59.6
SZ22                         66.6
                             69.2

The grouped-by (highlighted) values appear in the statistics table but not in the chart: the chart does not show values like 66.6 and 69.2.
Hi All, Splunk Enterprise 8.2.4, clustered. I have an issue where I have an existing app with a lookup listing all the devices we are monitoring, and a new app where I pull a subset of these devices to provide a dashboard for the team that supports them. The underlying search

| inputlookup NocIP.csv | search Datasource="Eaton" OR Datasource="eltek"

works fine within the original app, and works fine from the new app using my "general user" account (which has admin rights), but with a user set up for the support team using the new app, the search fails with the result shown in the screenshot below. The lookup table file permissions, the lookup definition permissions, the support-team role's inheritance and capabilities, the resources, and the user's configuration are all shown in the screenshots below. The role for the support team is cloned from the role that uses the original app. This app doesn't use any indexes and there are no restrictions in place. This is doing my head in, because it looks like it should work but isn't; can anyone see what I have missed? Cheers, Mike
Hello, I use this timechart:

index=tutu sourcetype=titi | timechart span=15min dc(s) as "Uniq"

Now I would like to display two more lines with the min and max of the "s" field. Is it possible?
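If s is numeric, additional aggregations can simply be added to the same timechart (a sketch based on the search above):

```spl
index=tutu sourcetype=titi
| timechart span=15min dc(s) AS "Uniq" min(s) AS "Min" max(s) AS "Max"
```

Each aggregation becomes its own series, so the chart will render three lines.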
Hi, I want to implement a custom command in Splunk, so I created an add-on using the Splunk Add-on Builder and copied the code for my custom command into the add-on. While validating the add-on from the Add-on Builder, I see one failure (194 tests passed, one failure): "Detect usage of JavaScript libraries with known vulnerabilities". When I expand the error by clicking on it, the solution column lists the following:

- 3rd party CORS request may execute
- parseHTML() executes scripts in event handlers
- jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution
- Regex in its jQuery.htmlPrefilter sometimes may introduce XSS
- reDOS - regular expression denial of service
- Regular Expression Denial of Service (ReDoS)

Can I get help on how to resolve this JavaScript issue? Also, when I download the .spl file and install the app, it does not offer a "launch app" option on the Manage Apps page, as shown in the snapshot below. Is that because of installing a non-validated package?
Hi Splunkers! I have a problem with props.conf and transforms.conf. I'm seeing these messages on Linux servers:

multipathd[212317]: sdb: failed to get sgio uid: No such file or directory
multipathd[212317]: sdb: add missing path

So I set up props.conf and transforms.conf to get rid of these messages. It seems correct, and I can't figure out why it doesn't work.

props.conf:

[syslog]
TRANSFORMS-null = setnull

transforms.conf:

[setnull]
REGEX = multipathd
DEST_KEY = queue
FORMAT = nullQueue

I also tried these regexes:

REGEX = .*multipathd.*
REGEX = (.+multipathd.+)

But nothing happened. So where did I make a mistake? Thanks for your time.