All Topics


Hi Splunkers, in the Tech Talk "7 Tips To Boost Performance of Your Splunk Enterprise Security Operations" there was an app mentioned, called "Perfinsights", that was supposed to be released a long time ago. I have searched for it on Splunkbase and Google but couldn't find it anywhere. Can someone please point me in the right direction? The Tech Talk can be found here: https://www.youtube.com/watch?v=UXFIKMJHwgs
Hello, can I mix different types of disks, for example SSDs and HDDs, when installing search heads or indexers in an on-premises environment? Best regards,
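For context on why mixing is common: indexes can split their buckets across disk tiers, so fast SSDs typically hold hot/warm buckets while HDDs hold cold ones. A minimal indexes.conf sketch, assuming hypothetical mount points /ssd and /hdd:

[my_index]
# hot/warm buckets on the fast tier
homePath   = /ssd/splunk/my_index/db
# cold buckets on the slower, cheaper tier
coldPath   = /hdd/splunk/my_index/colddb
thawedPath = /hdd/splunk/my_index/thaweddb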
Hi All, I have a field named ip_address which has 50 IP values in it. At every 5-minute interval, I receive the same values:

ip_address
10.1.1.1
10.1.1.2
10.1.1.3
...
10.1.1.49
10.1.1.50

What are some ways to list the values which are not coming in to Splunk? Let's say 10.1.1.2 and 10.1.1.45 are not coming in; then I need those missing values listed in a statistics table so I can create an alert for the missing IP addresses. Please help. Thanks in advance.
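A common pattern is to keep the expected list in a lookup and subtract whatever actually arrived. A minimal sketch, assuming a hypothetical lookup file expected_ips.csv with an ip_address column and a hypothetical index name:

| inputlookup expected_ips.csv
| search NOT
    [ search index=network_data earliest=-5m
      | stats count by ip_address
      | fields ip_address ]

The subsearch returns every ip_address seen in the last five minutes, so any rows that survive the NOT are the missing ones; alert when the result count is greater than zero.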
Hello, what I am trying to do in this search is sum the total CPU seconds, by report class, for a one-day period. Once I have that sum, I would like to take it one step further and multiply it by our MSU factor to determine the MSUs used by a specific report class for any given day. I believe I need to store the result from the timechart statement as a new variable, to be able to multiply that variable by the MSUFactor. I have not had any luck trying combinations of eval statements or leveraging the AS keyword to store the result in a variable I can work with further. I appreciate any help you may be able to offer!

index=z* MFSOURCETYPE=SMF030 Subtype=2 `calccpusecs`
| where Rptcls IN("RHOTBAT","RPDBATLO","RPDBATMD","RSAGBAT","RTSTBAT")
| eval MSUFactor=(37209.3023/5/216000)
| timechart span=1d sum(cpusecs) by Rptcls
| addcoltotals
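Since multiplication distributes over a sum, one approach that may work is to apply the factor to each event before summing, so no post-timechart variable is needed; a sketch based on the search above:

index=z* MFSOURCETYPE=SMF030 Subtype=2 `calccpusecs`
| where Rptcls IN("RHOTBAT","RPDBATLO","RPDBATMD","RSAGBAT","RTSTBAT")
| eval msu=cpusecs*(37209.3023/5/216000)
| timechart span=1d sum(msu) by Rptcls
| addcoltotals

Alternatively, keep the original timechart and multiply every output column afterwards with | foreach * [ eval <<FIELD>>='<<FIELD>>'*(37209.3023/5/216000) ].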
Hi All, we are planning to migrate the KV store storage engine from mmap to wiredTiger. I know it is safe to disable the KV store on indexers, but I'm just wondering what steps to take if we need to upgrade the storage engine from mmap to wiredTiger on an indexer cluster.
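For reference, the storage-engine migration itself is done with a documented CLI command (take a KV store backup first, and check the docs for the search head cluster member-by-member procedure):

$SPLUNK_HOME/bin/splunk migrate kvstore-storage-engine --target-engine wiredTiger

My understanding is that on indexers where the KV store is already disabled, there is nothing to migrate.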
Hi, does anyone have a method, app, or query that can check and compare the confs between all SHC members? Perhaps there is a way with btool or rsync. I was given a PS Tech Assessment App but it is not working correctly. I don't think the PS tech knew how to install it or use it. Thank you
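One low-tech approach is to dump the effective configuration with btool on each member and diff the outputs. A sketch, assuming a default $SPLUNK_HOME and hypothetical hostnames sh1 and sh2; comparing one conf file at a time (server, props, and so on) keeps the diff readable:

# run on each SHC member
$SPLUNK_HOME/bin/splunk btool props list --debug > /tmp/btool_props_$(hostname).out

# copy the outputs to one host, then compare
diff /tmp/btool_props_sh1.out /tmp/btool_props_sh2.out

Note that --debug prefixes each line with the file that supplied the setting, which is handy for spotting where a divergent value comes from.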
I have a dataset that uses a non-segmented character to separate meaningful and commonly-used search terms. Sample events:

123,SVCA,ABC123,DEF~AP~SOME_SVC123~1.0,10.0.1.2 ,67e15429-e44c-4c27-bc9a-f3462ae67125,,2023-02-10-12:00:28.578,14,ER40011,"Unauthorized"
123,SVCB,DEF456,DEF~LG~Login~1.0,10.0.1.2,cd63b821-a96c-11ed-8a7c-00000a070dc2,cd63b820-a96c-11ed-8a7c-00000a070dc2,2023-02-10-12:00:28.578,10,0,"OK"
123,SVCC,ZHY789,123~XD-ABC~OtherSvc~2.0,10.0.1.2 ,67e15429-e44c-4c27-bc9a-f3462ae67125,,2023-02-10-12:00:28.566,321,ER00000,"Success"
456,ABC1,,DEFAULT~ENTL~ASvc~1.0,10.0.1.2 ,b70a2c11-286f-44da-9013-854acb1599cd,,2023-02-10-11:59:44.830,14,ER00000,"Success"
456,DEF2,,456~LG~Login~v1.0.0,10.0.0.1,27bee310-a843-11ed-a629-db0c7ca6c807,,2023-02-10-11:59:44.666,300,1,"FAIL"
456,ZHY3,ZHY45678,DEF~AB~ANOTHER_SVC121~1.0,10.0.0.1 ,19b79e9b-e2e2-4ba2-a7cf-e65ba8da5e7b,,2023-02-10-11:58:58.813,,27,ER40011,"Unauthorized"

Users will often search for individual items separated by the ~ character, e.g., index=myindex sourcetype=the_above_sourcetype *LG*. My goal is to reduce the need for leading wildcards in most searches, as this is a high-volume dataset, by adding '~' as a minor segmentation character at index time. I've tried the props.conf and segmenters.conf below without success. Could anyone provide any insight?

<indexer> SPLUNK_HOME/etc/apps/myapp/local/props.conf

[the_above_sourcetype]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
TIME_PREFIX = ^([^,]*,){7}
TIME_FORMAT = %Y-%m-%d-%H:%M:%S.%3Q
TRUNCATE = 10000
MAX_TIMESTAMP_LOOKAHEAD=50
SEGMENTATION = my-custom-segmenter

SPLUNK_HOME/etc/apps/myapp/local/segmenters.conf

[my-custom-segmenter]
MINOR = / : = @ . - $ # % \\ _ ~ %7E

I added those and bounced my test instance, but searching index=myindex sourcetype=the_above_sourcetype LG still does not return results such as the event below, whereas *LG* as a term does return it.

456,DEF2,,456~LG~Login~v1.0.0,10.0.0.1,27bee310-a843-11ed-a629-db0c7ca6c807,,2023-02-10-11:59:44.666,300,1,"FAIL"
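Two things worth checking, for what it's worth. Index-time segmentation only applies to events ingested after the change, so pre-existing data keeps its old tokens. And you can inspect whether the new tokens actually made it into the index by querying indexed terms directly; a sketch, assuming a tstats-capable version:

| tstats count where index=myindex sourcetype=the_above_sourcetype TERM(LG)

If that returns zero against freshly ingested events, the segmenters.conf change is not being picked up on the indexing tier.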
We want to use ITSI with universal forwarders (Windows and *nix). Which is best practice: enabling the metrics inputs in the UF's own local directory, or in the Windows/*nix add-on's local directory?
I have two lists: one has a list of hostnames and the other has a list of hostname prefixes. I would like to create a search whose output shows hosts that do not have a name containing any of the prefixes in the second list. Example:

Inputlookup          Lookup
Hostname             Hostname Prefix
appletown            town
treeville            tree

I would like to create a search showing a list of hostnames from the first list that do not contain any of the prefixes in the second.
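One way that may work is to turn the prefix list into wildcarded search terms via a subsearch. A sketch, assuming hypothetical lookup file names hostnames.csv and prefixes.csv:

| inputlookup hostnames.csv
| search NOT
    [ | inputlookup prefixes.csv
      | rename "Hostname Prefix" AS prefix
      | eval Hostname="*".prefix."*"
      | fields Hostname ]

The subsearch expands to (Hostname="*town*") OR (Hostname="*tree*"), so the NOT keeps only hostnames that contain none of the prefixes.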
Splunkbase is showing different versions of apps to different folks. For instance, the Splunk ThreatConnect App is on version 1.0.5 in my view of Splunkbase, but my colleague only sees version 1.0.4 for the same app. Can someone from support provide information on why that is?
Hi folks, is there a way to enable SSL cert validation for an 'httpout' stanza within 'outputs.conf', like we can do with 'sslVerifyServerCert', etc., for 'tcpout' stanzas? I wasn't able to find a solution for this. The UF doesn't seem to verify the CA provided in 'server.conf/sslConfig/caCertFile' either. Thanks for your help!
I have to set up an alert with a cron schedule from 9pm to 7am. When I tried to build it from crontab examples it was not coming out right. How can I create the cron schedule for these timings?
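Cron has no single hour range that crosses midnight, so the window has to be split into two ranges in the hours field. A sketch, assuming the alert should run at the top of every hour in the window:

0 21-23,0-7 * * *

This fires at minute 0 of each hour from 21:00 through 07:00; adjust the minute field (e.g., */15) for a different frequency.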
I am writing a playbook that loops through a list of case ids to delete. The action fails after hitting 50 for the action, and I have written the playbook to continue if there are still valid ids left in the list. However, after the failure it won't continue down to the remediation code I have written below. How can I get the playbook to continue after a failed action and just mark it failed, rather than exiting the playbook?
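In a classic (coded) SOAR playbook, blocks wired only to an action's success path are skipped when the action fails; hanging the continuation off the action's callback and checking the success flag yourself lets the flow proceed either way. A rough sketch under those assumptions -- the action name and the helper tracking remaining ids are hypothetical:

import phantom.rules as phantom

def delete_batch(container, ids):
    # hypothetical action name; send at most 50 ids per run
    params = [{"id": case_id} for case_id in ids[:50]]
    phantom.act("delete ticket", parameters=params, callback=on_delete_done)

def on_delete_done(action=None, success=None, container=None,
                   results=None, handle=None, **kwargs):
    # this callback runs whether the action succeeded or failed
    if not success:
        phantom.debug("delete batch failed; marking it and moving on")
    remaining = get_remaining_ids()  # hypothetical helper tracking the id list
    if remaining:
        delete_batch(container, remaining)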
Hi, I am unable to use strptime() here correctly. My code is:

index="ABC"
| eval time=strptime(_time, "%Y-%m-%dT%H:%M:%S")
| bin time span=15m
| table time

But the table has no output. Can you please help? Thanks!
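The likely culprit: _time is already an epoch timestamp, and strptime() parses strings, so it returns null here and every row ends up empty. If the goal is 15-minute buckets, a sketch that skips the parse entirely:

index="ABC"
| bin _time span=15m
| eval time=strftime(_time, "%Y-%m-%dT%H:%M:%S")
| table time

strftime() goes the other way, formatting the epoch value into a readable string after binning.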
I am trying to get a report of all distribution groups in Splunk and what they send, say, in a 12-month period. We don't have the Exchange application add-on, and my understanding is that it no longer really exists. Is there a way to identify distribution groups in the search? I have access to the index and I can find distribution groups, but I can't just look for the DLs themselves. Thanks in advance
Hi, I'm trying to maintain a user's very first login using a lookup, and the scheduler runs twice a day. The userLogin field is a combination of username, userId, and a uniqueId associated with each login. I just want the username and userId from the userLogin field so I can maintain a single record per user, but to know the exact logged-in user records I have to display the userLogin field as well, and I have to maintain the user's earliest login dateTime. Currently I'm using a CSV lookup with records of the past three months, but if I expand to the past six months in the future, I'll have to update the earliest login dateTime for existing users in the lookup and append new user details with their login dateTime. I'm a bit worried about performance if the record count in the lookup file grows. Here's the query I've managed to write, but I'm struggling to track the earliest dateTime. Any suggestions would be highly welcomed. Thanks in advance.

index=user_login_details
| rex field=userLogin "(?<userName>\s+\d{5}).*"
| dedup userName
| eval Time=strftime(_time,"%Y-%m-%dT%H:%M:%S")
| table userName, userLogin, Time
| inputlookup user_details.csv append=true
| dedup userName
| outputlookup user_details.csv append=true
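One pattern that may help: compute min(_time) per user from the new events, append the existing lookup, and take the overall minimum, writing the lookup back without append so it stays one row per user. A sketch reusing your field names (the epoch round-trip assumes Time is stored in the format shown in your query):

index=user_login_details
| rex field=userLogin "(?<userName>\s+\d{5}).*"
| stats min(_time) as firstEpoch, first(userLogin) as userLogin by userName
| inputlookup user_details.csv append=true
| eval firstEpoch=coalesce(firstEpoch, strptime(Time, "%Y-%m-%dT%H:%M:%S"))
| stats min(firstEpoch) as firstEpoch, first(userLogin) as userLogin by userName
| eval Time=strftime(firstEpoch, "%Y-%m-%dT%H:%M:%S")
| fields userName, userLogin, Time
| outputlookup user_details.csv

Because outputlookup here replaces the file rather than appending, each run keeps exactly one record per user with the earliest time seen so far.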
Is there a way to get logs in JSON format for an API call from a Spring Boot application?
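One common approach (an assumption about your stack, since Spring Boot uses Logback by default) is to add the net.logstash.logback:logstash-logback-encoder dependency and point Logback at its JSON encoder. A minimal logback-spring.xml sketch:

<configuration>
  <!-- emit every log event as a single JSON object on stdout -->
  <appender name="JSON" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>
  <root level="INFO">
    <appender-ref ref="JSON"/>
  </root>
</configuration>

Request-level details for an API call (method, path, status) still have to be logged by your code or a filter/interceptor; the encoder only controls the output format.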
Hello, our use case is to add a viz which is a URL for an interactive map: New Jersey County Map. This map should be displayed on the dashboard at all times. When a user clicks on any county, a data table should open in the lower left of the window/panel/viz. We tried inserting it as an image and changed the allowed image domains in web.conf:

dashboards_csp_allowed_domains = *.njogis-newjersey.opendata.arcgis.com

But since it is not really an image, but an image rendered on a webpage, this didn't work and we received an error. With classic dashboards we used an iframe on occasion, but this was kludgy at best. We haven't worked with the REST API, but could that be a possible solution? Thanks in advance and God bless, Genesius
Hi, my overall goal is to create a resulting data table with headings including HourOfDay, BucketMinuteOfHour, DayOfWeek, and source, as well as creating an upperBound and lowerBound. My current query is as follows:

index="akamai" sourcetype=akamaisiem
| eval time = _time
| eval time=strptime(time, "%Y-%m-%dT%H:%M:%S")
| bin time span=15m
| eval HourOfDay=strftime(time, "%H")
| eval BucketMinuteOfHour=strftime(time, "%M")
| eval DayOfWeek=strftime(time, "%A")
| stats avg(count) as avg stdev(count) as stdev by HourOfDay,BucketMinuteOfHour,DayOfWeek,source
| eval lowerBound=(avg-stdev*exact(2)), upperBound=(avg+stdev*exact(2))
| fields lowerBound,upperBound,HourOfDay,BucketMinuteOfHour,DayOfWeek,source
| outputlookup state.csv

However, it produces zero results. Can you please help? I am using the following article as a guide, as this is for an anomaly detection project: https://www.splunk.com/en_us/blog/platform/cyclical-statistical-forecasts-and-anomalies-part-1.html I appreciate any help. Thanks!
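Two issues stand out. First, _time is already an epoch number, so strptime(time, ...) returns null and every strftime() downstream is empty. Second, stats avg(count) needs a count field that nothing upstream produces, so a per-bucket count has to be computed first. A sketch with both fixes applied to the search above:

index="akamai" sourcetype=akamaisiem
| bin _time span=15m
| stats count by _time, source
| eval HourOfDay=strftime(_time, "%H")
| eval BucketMinuteOfHour=strftime(_time, "%M")
| eval DayOfWeek=strftime(_time, "%A")
| stats avg(count) as avg stdev(count) as stdev by HourOfDay,BucketMinuteOfHour,DayOfWeek,source
| eval lowerBound=(avg-stdev*exact(2)), upperBound=(avg+stdev*exact(2))
| fields lowerBound,upperBound,HourOfDay,BucketMinuteOfHour,DayOfWeek,source
| outputlookup state.csv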
Just starting out with provisioning Splunk 9.x via an AWS AMI and Terraform. Does anyone have any idea if it is possible to change the admin password using a user_data script on the AMI? I found one mention of using export password="<password>" but that didn't seem to work; it still used the default SPLUNK-$instance id$. We would like to have the password changed during provisioning, if possible. Thanks.
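One mechanism worth trying: Splunk seeds the admin credential from user-seed.conf on first start, before $SPLUNK_HOME/etc/passwd exists. A sketch of a user_data fragment, assuming the AMI installs to /opt/splunk and that user_data runs before splunkd's first start (both are assumptions about the image):

cat > /opt/splunk/etc/system/local/user-seed.conf <<'EOF'
[user_info]
USERNAME = admin
PASSWORD = <your-password>
EOF

If the AMI starts Splunk before user_data runs, etc/passwd will already exist and the seed file is ignored; in that case the password would have to be changed afterwards via the CLI or REST instead.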