
I am trying to calculate the difference between two times which have been converted with strftime, but the duration is not working, and I think it's to do with the min and max stats. I have the below:

mysearch
| stats count by Location Action Type a_rid _time
| stats list(Action) as action, list(Type) as type, list(_time) as time by Location a_rid Action
| eval times=strftime(time,"%H:%M:%S")
| eval total=mvzip(times, type)
| mvexpand total
| makemv total delim=","
| eval value1=mvindex(total, 0)
| eval value2=mvindex(total, 1)
| stats min(value1) as begins, max(value1) as ends by Location a_rid
| eval duration=ends-begins

I get the results below:

n | Location                   | a_rid                                | begins   | ends
1 | AssetRepository            | 63d71f3f-4a6c-4447-9c37-828062493f68 | 09:23:53 | 09:23:53
2 | ContributionRepository     | 63d71f3f-4a6c-4447-9c37-828062493f68 | 09:23:54 | 09:23:54
3 | FinancialSummaryController | 63d71f3f-4a6c-4447-9c37-828062493f68 | 09:23:53 | 09:23:54

This all works apart from the duration, which doesn't appear. Can you advise if there is another way round this to get the duration to work?
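A likely cause: once strftime has run, begins and ends are strings like "09:23:53", so ends-begins is arithmetic on text and evaluates to null. One hedged sketch (field names taken from the question) converts back to epoch with strptime just for the subtraction:

```
... | stats min(value1) as begins, max(value1) as ends by Location a_rid
| eval duration = strptime(ends, "%H:%M:%S") - strptime(begins, "%H:%M:%S")
```

Alternatively, run min/max on the raw epoch _time values and apply strftime to begins and ends only afterwards for display, which avoids string arithmetic entirely.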
Hi, I'm currently trying to modify the Lookup Editor app, but I'm having trouble limiting the previous versions. I did it through CSS and HTML, but the files are still located in the folder. How can I limit the revert versions to 5 and delete the older versions?
Hi @gcusello, how do I upload Splunk diag files to the case? Regards, Rahul
I want to create a scheduled search that will track changes made to content under the Splunk Enterprise Security app. If someone modifies a correlation search, I want my query to capture it. Can this be achieved? Please help.
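Correlation searches are saved searches under the hood, so edits made through the UI or REST show up as POST requests in Splunk's internal logs. A hedged starting point (the index and sourcetype below are standard Splunk internals, but exact field names can vary by version, so treat this as a sketch to adapt):

```
index=_internal sourcetype=splunkd_ui_access method=POST uri="*/saved/searches/*"
| table _time, user, uri, status
| sort - _time
```

Saving this as a scheduled search with an alert action would give the change tracking described; the _audit index is another place to look if audit event logging is enabled.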
How can I set up an email alert that notifies the person an incident is assigned to from the Incident Review page?
Expanding a bit on my question from last year, "categorize or classify dissimilar field values at search time?": How does one simplify, streamline, and maintain event classification on verbose and non-uniform logs? E.g. if someone decided to dump 50 different application logs into one, where each application produces different events that need classification? Something like "linux_messages_syslog", which aggregates a number of different application and service logs into one? In my case, it's a content management and orchestration platform with a variety of different modules, each producing a variety of different event types. I assume the primary mechanism is to use (1) rex field extraction, then (2) case-match groups like this:

| rex field=event_message "<regex_01>"
| rex field=event_message "<regex_03>"
| rex field=event_message "<regex_05>"
| eval event_category = case(
    match(event_message, "<regex_01>"), category_01.": ".extracted_field_01,
    match(event_message, "<regex_02>"), category_02,
    match(event_message, "<regex_03>"), category_03.": ".extracted_field_03,
    match(event_message, "<regex_04>"), category_04,
    match(event_message, "<regex_05>"), category_05.": ".extracted_field_05,
    true(), "<uncategorized_yet>")
| stats count dc(extracted_field_01) dc(extracted_field_03) by event_category, log_level
| sort -count

Am I also right in assuming that fields can't be extracted via "match" clauses, and thus I have to use the same regex statements twice: first in field extraction, and then in the case-match groups where I classify the events?
Here is a small sample of the actual classification SPL:

index=main sourcetype="custom_application"
| rex field=component "\.(?P<component_shortname>\w+)$"
| rex field=event_message "^<(?P<event_msg>.*)>$"
| rex field=event_message "^<(?P<action>Created|Creating) (?P<object>capacity|workflow execution|step execution) sample(\.+| in (?P<sample_creation_time>\d+) ms\.)>$"
| rex field=event_message "^<GENERATED TEST OUTPUT: (?P<filetype>(?P<asset_type>\w+) asset|AdPod file|closed captioning / subtitle assets) (?P<filename>.*) ingested successfully>$"
| rename COMMENT AS "the above is just a sample - there are about 20-30 more rex statements"
| eval event_category = case(
    match(event_message, "^<(?P<action>Created|Creating) (?P<object>capacity|workflow execution|step execution) sample(\.+| in (?P<sample_creation_time>\d+) ms\.)>$"), action." ".object." sample",
    match(event_message, "^<GENERATED TEST OUTPUT: (?P<filetype>(?P<asset_type>\w+) asset|AdPod file|closed captioning / subtitle assets) (?P<filename>.*) ingested successfully>$"), "GENERATED TEST OUTPUT: ".filetype." <filename> ingested successfully",
    match(event_message, "^<GENERATED TEST OUTPUT: .*>"), event_msg,
    true(), "<other>")
| eval event_category_long = component_shortname." (".log_level."): ".event_category

(In reality it's already a few pages long, and I am far from done.)
Event samples:

2020-09-16 00:04:29,253 INFO [com.custom_app.plugin.workflow.execution.processor.TestStepExecutionProcessor] (partnerPackageWorkflow-log result [ex 34588028]) - <GENERATED TEST OUTPUT: deliveries complete for partner some_partner>
2020-09-16 00:03:20,462 INFO [com.custom_app.plugin.workflow.execution.processor.TestStepExecutionProcessor] (packageDelivery-log result [ex 34588139]) - <GENERATED TEST OUTPUT: package_name_anonymized delivered successfully>
2020-09-16 00:03:41,183 TRACE [com.custom_app.workflow.services.ReportService] (pool-8-thread-68) - <Created step execution sample in 57 ms.>
2020-09-16 00:03:41,126 TRACE [com.custom_app.workflow.services.ReportService] (pool-8-thread-68) - <Creating step execution sample...>
2020-09-15 23:58:24,896 INFO [com.custom_app.plugin.workflow.execution.processor.ThrottledSubflowStepExecutionProcessor] (partnerPackageWorkflow-deliver package items [ex 34588027]) - <Executing as the **THROTTLED** subflow step processor.>

Context: The application produces logs with a large number of dissimilar events where log_level (INFO, DEBUG, etc.) is often meaningless, and where INFO events may need to be reclassified as ERROR-type events and alerted on accordingly. Key parts of the above code:
- extract fields where needed, e.g. via a number of "rex" statements
- categorize using case(match(field, regex_01), event_category_01, ...)
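On the question of repeating each regex: once a rex with named capture groups has run, the captured fields are null for events that did not match, so the classification can test the extracted fields with isnotnull() instead of re-running match() against the raw message. A hedged sketch reusing the placeholder names from the question:

```
| rex field=event_message "<regex_01>"
| rex field=event_message "<regex_03>"
| eval event_category = case(
    isnotnull(extracted_field_01), category_01.": ".extracted_field_01,
    isnotnull(extracted_field_03), category_03.": ".extracted_field_03,
    true(), "<uncategorized_yet>")
```

Each pattern then appears only once, in the rex. The trade-off is that every rex still runs against every event, so cheap anchoring (^, literal prefixes) matters at volume.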
My requirement is to display just the domain (e.g. Corp) from the Computername below:
Computername = <host>.Corp.<Domain>.Com
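Assuming Computername is a dot-separated string whose second segment is the domain (the spaces in the question look like forum formatting rather than literal spaces, but the sketch tolerates them):

```
| rex field=Computername "^[^\.]+\.\s*(?<domain>\w+)"
| table Computername, domain
```

The first [^\.]+ consumes the host segment, and the capture group takes the word after the first dot, which would be "Corp" in the example.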
As we have no dev environment, I have tried to learn Terraform and Ansible and build my own on Docker. I now have 2 x search heads in a cluster, 2 x indexers and an indexer cluster master, 1 x heavy forwarder, 1 x combined deployer/deployment server, and a universal forwarder. Everything works fine and I can build the whole environment in a few minutes. But if I stop the containers, then when I do a "docker start" the cluster configuration of the indexer cluster master and deployer is reset back to the default.

This is the shclustering stanza of server.conf on the deployer when the environment is built:

[shclustering]
pass4SymmKey = $7$P6EHXzK5D7eS/B6970mBtVsoThkdIn27+xiyZdy2tkOAveg1O3o2rg==
shcluster_label = shcluster_label

And this is after the docker start:

[shclustering]
pass4SymmKey =
shcluster_label = shc_label

This is the clustering stanza from the indexer cluster master server.conf initially:

[clustering]
cluster_label = idxcluster_label
mode = master
search_factor = 1
pass4SymmKey = $7$WLLkzIXVZZmbtPcy1YDkhUNyKI1mzMMPz2Q0dTbivBHxFAokebPZose71eiT
replication_factor = 1

And this is after the docker start:

[clustering]
cluster_label = idxc_label
mode = master
search_factor = 3
pass4SymmKey =
replication_factor = 3

And in the logs for the indexer cluster master I can see this:

09-15-2020 12:56:34.296 +0000 INFO CMMaster - Creating CMMaster: ht=60.000 rf=3 sf=3 ct=60.000 st=60.000 rt=60.000 rct=60.000 rst=60.000 rrt=60.000 rmst=180.000 rmrt=180.000 icps=-1 sfrt=600.000 pe=1 im=1 is=0 mob=2 mor=5 mosr=5 pb=5 rep_port= pptr=10 fznb=10 Empty/Default cluster pass4symmkey=true allow Empty/Default cluster pass4symmkey=true rrt=restart dft=180 abt=600 sbs=1
09-15-2020 12:56:34.296 +0000 WARN CMMaster - pass4SymmKey setting in the clustering or general stanza of server.conf is set to empty or the default value. You must change it to a different value.

Note that server.conf is not totally replaced, just the clustering stanzas.
So that suggests Ansible, but I can't find anything that changes these stanzas. Note that the search heads are not changed, and their server.conf is unchanged after the "docker stop".
I have a raw field in the below format:

{"device":"device1","date":"2020-09-16T05:17:04.197Z","file_path":"CSIDL_PROFILE\\appdata","file_hash":"1bcdefgh12469"}

I want the content of file_path like "CSIDL_PROFILE\\appdata" (including the quotes). I tried something like the below:

sourcetype="file" | rex "{"device":"*","date":"*","file_path":(?<file>.*)" | table _raw,file

I am not good at rex queries. Please suggest some ideas for capturing the value of file_path including the quotes.
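The attempt above breaks because the unescaped quotes end the rex string early. Since the value itself contains no quote characters, a character class that keeps the surrounding quotes inside the capture group is enough. A sketch (sourcetype kept from the question):

```
sourcetype="file"
| rex field=_raw "\"file_path\":(?<file>\"[^\"]*\")"
| table _raw, file
```

Each quote that belongs to the regex is escaped as \" so Splunk does not treat it as the end of the rex argument, and the capture group starts and ends on the quotes, so they are kept in the extracted field. If the events are valid JSON throughout, spath plus re-adding the quotes with eval is another option.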
We are unable to see our notable events when the correlation search criteria are met. Upon investigation, we found that the notable index is empty, which results in the es_notable_events KV store lookup being empty. The correlation search itself has no issue, because we can see the other adaptive response actions trigger, except notable.

Our environment: 2 indexers in a cluster configuration, 1 SH, 1 stacked MC/license master/deployment server, 1 cluster master. ES version: 6.2.0, Enterprise version: 8.0.5. Hope someone can give me a hand.
Is there a way to sort a field like "09 Sep-256789" by its date part? For example, with sample values "10 Sep-26789", "31 Aug-256670", and "09 Sep-256789", it should sort as follows:

Before sort: 10 Sep-26789, 09 Sep-256789, 31 Aug-256670
After sort: 31 Aug-256670, 09 Sep-256789, 10 Sep-26789

The number attached to the month (e.g. '26789') is a random number.
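By default these values sort as plain strings, so the date part has to be parsed into a sortable key. A sketch, assuming the field is called myfield and the year can be taken as current (the sample values carry no year):

```
| eval sort_key = strptime(replace(myfield, "\s*-\s*\d+$", ""), "%d %b")
| sort 0 sort_key
| fields - sort_key
```

replace() strips the trailing random number, strptime("09 Sep", "%d %b") yields an epoch in the current year, and sort 0 orders all rows ascending by that key (prefix the field with - for descending).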
Write a Splunk query, to be saved as a dashboard, that determines whether a log feed has stopped (log outage).
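One common approach is to compare the last-seen time of each feed against a staleness threshold using tstats; the one-hour threshold below is an assumption to adjust per feed:

```
| tstats latest(_time) as last_seen where index=* by index, sourcetype
| eval minutes_stale = round((now() - last_seen) / 60)
| where minutes_stale > 60
| sort - minutes_stale
```

Saved as a dashboard panel (or as an alert that fires when results are returned), any row is a feed that has gone quiet for longer than the threshold. Note that tstats only sees feeds that have indexed at least once in the search window; a lookup of expected feeds would be needed to catch ones that never arrived at all.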
Hi everyone, I have a request from our security team to reorder our notable event statuses in the dropdown. We have a lot of custom statuses that supplant the "In Progress" label, and at present they all appear at the very end of the dropdown list, which isn't terribly convenient for our analysts. I can see how to edit the status names, but I can't find a way to change the order they appear in the dropdown. Is there a way to do this?
I have a custom script that collects stats on a custom HW appliance every minute and forwards them to our Splunk system. The data looks like this:

log_type="throughput_data", local_time="2020/09/09 19:01 CST", server_ip="10.221.20.172", host_name="host2", host_ip="10.131.221.37", version="13", model="M1000", serial_no="1234234", ssl_card="No", total_traffic="93700", app_traffic="17524", cpu="15", ssl="0", http="258", connections="1", sql="0", sql2="0"
log_type="throughput_data", local_time="2020/09/09 19:01 CST", server_ip="10.221.20.172", host_name="host5", host_ip="10.131.222.36", version="13", model="M2000", serial_no="12342342", ssl_card="No", total_traffic="0", app_traffic="0", cpu="3", ssl="0", http="0", connections="0", sql="0", sql2="0"

I have a two-part question:
1. How do I generate an alert when app_traffic has a sudden or unusual spike? E.g. normally app_traffic hovers around 500 and there is a sudden increase to 10000. Just having this will make my team happy, but I do not believe it is the proper long-term solution.
2. Is there a way to create a dataset/lookup of each model's supported datasheet values and generate an alert when a model's values approach them? E.g. model M1000 can do total app_traffic of 10000, and an alert is generated when it reaches 90% of that value, in this case 9000. Can this be split to alert if either app_traffic, total_traffic, CPU, or SSL reaches 90% of the limit set in the dataset? I believe this will help us scale, be better for future use cases, and make a business case for management.
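Both parts can be sketched in SPL: for the spike, compare the latest reading against a recent per-host baseline; for the datasheet limits, keep them in a CSV lookup keyed by model. The index name, lookup name, and limit field names below are all assumptions:

```
index=appliance log_type="throughput_data" earliest=-24h
| stats latest(app_traffic) as current, avg(app_traffic) as baseline, stdev(app_traffic) as sd by host_name
| where current > baseline + 3 * sd

index=appliance log_type="throughput_data" earliest=-5m
| lookup model_limits.csv model OUTPUT max_app_traffic, max_total_traffic, max_cpu
| where app_traffic > 0.9 * max_app_traffic OR total_traffic > 0.9 * max_total_traffic OR cpu > 0.9 * max_cpu
```

The first search flags readings more than three standard deviations above the 24-hour average; the second assumes a model_limits.csv lookup with one row per model holding its datasheet maxima. Either search, saved as an alert that fires when results are returned, covers the respective case.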
I'm trying to use Splunk to return a list of records that have been modified in our LDAP since a particular datetime. There are certain attributes that I know exist in LDAP (e.g. weillCornellEduEndDate), and that I can retrieve using ldapsearch, but that don't appear when I use ldapfilter (which I have to use; see the previous sentence).

This works:

* | head 1
| eval x = "z"
| table x
| eval timestamp = "20200914213812Z"
| ldapfilter domain=ED-people search="(&(objectClass=top)(|(modifyTimestamp>=$timestamp$)(createTimestamp>=$timestamp$)))" attrs="objectClass,cn,mail,title,o,sn,givenName"
| table *

This does NOT work:

* | head 1
| eval x = "z"
| table x
| eval timestamp = "20200914213812Z"
| ldapfilter domain=ED-people search="(&(objectClass=top)(|(modifyTimestamp>=$timestamp$)(createTimestamp>=$timestamp$)))" attrs="objectClass,cn,mail,title,o,sn,givenName,weillCornellEduEndDate"
| table *

Nor does this:

* | head 1
| eval x = "z"
| table x
| eval timestamp = "20200914213812Z"
| ldapfilter domain=ED-people search="(&(objectClass=top)(|(modifyTimestamp>=$timestamp$)(createTimestamp>=$timestamp$)))" attrs="*"
| table *

I'm using Splunk 7.2.9.1 and SA-LDAPSearch. Here's the error in the logs:

09-15-2020 17:46:29.177 ERROR script - sid:1600206382.183889 External search command 'ldapfilter' returned error code 1. Script output = "error_message=Invalid attribute types in attrs list: weillCornellEduEndDate\r\n\r\n".
Hello, I would like to use an IAM role with the AWS SQS-Based S3 input. My particulars:

Splunk Cloud version: 7.2.9, build 2dc56eaf3546
Splunk Add-on for AWS version: 4.6.1, build 14

Note, this is on an IDM. Based on other community posts, it appears I would need to complete the following steps:

- Create the IAM role (R) in my account (AC) with the necessary permissions
- Create a user (U) in AC that can assume R
- Add U's access key identifier and access key as an account (A) under Configurations -> Account
- Add R as an IAM role (I) under Configurations -> IAM Role

So my question is: on the following screen, would I specify A for "The name of AWS account" and I for "The name of IAM user would be assumed" (shouldn't this be labeled "The name of IAM role to assume"?)? Is there a more direct way to accomplish this, e.g. the Splunk add-on directly assuming the role?
We see this message in the MC: Search peer <host> has the following message: Health Check: msg="A script exited abnormally with exit status: 3" input="../bin/mbbr.py" stanza="default". This script is part of the Malwarebytes Remediation App for Splunk. What could be causing it?
Does Phantom support Jira integration using public key authentication?
I have a multisearch command that searches 4 weeks of data to display as a stats table in my dashboard. The problem is that the search takes far too long. I do not think streamstats or eventstats help for this type of search. I have read up on summary indexes and data models. Would data models increase speed, and how would I create them?
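An accelerated data model queried with tstats is usually the biggest win for long-range dashboard stats, because tstats reads pre-built summaries instead of raw events. A hedged sketch (the data model name and field are placeholders for whatever the multisearch actually aggregates):

```
| tstats count from datamodel=My_Model where earliest=-4w@w by My_Model.host, _time span=1d
| rename "My_Model.host" as host
```

The data model itself is created under Settings -> Data models and then accelerated over at least the 4-week range. Summary indexing is the alternative: schedule a search with sistats over short windows, write the results with summary indexing enabled, and have the dashboard run stats over the summary index instead of raw data.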