All Topics

Hi, I am trying to find the outliers in a specific set of data (a count of login failures within 5 minutes). I have created and assigned the variable residual, which is the prediction (using algorithm LLP) minus the count. I have then cut out all the positive values, because I only want to catch large differences. What I am trying to do is figure out how to carry over what I have been doing with the Detect Numeric Outliers Assistant into a model I'm creating. Basically, how do I move and adjust the threshold so that it only catches a few outliers? I have already tried to assign a value to threshold as well as lower_threshold and upper_threshold, but it only shades the areas around the data set. Currently I have recreated what I was doing in the Detect Numeric Outliers Assistant and copied/pasted the SPL from that to see if I can assign it to the algorithm. Below is my example:

1| inputlookup loginfailures_count_5m.csv
2| eval _time=strptime(_time,"%Y-%m-%dT%H:%M:%S.%Q") #lines 1 and 2 are the csv file
3| predict count as prediction algorithm=LLP future_timespan=150 holdback=0
4| where prediction!="" AND count!=""
5| eval residual = prediction - count #lines 3 - 5 set what prediction and residual do
6| eval residual = if(residual < 0, residual, 0) #this line gets rid of all positive values, we only want negative values
7| eventstats avg("residual") as avg stdev("residual") as stdev
8| eval lowerBound=(avg-stdev*exact(6)), upperBound=(avg+stdev*exact(6))
9| eval isOutlier=if('residual' < lowerBound OR 'residual' > upperBound, 1, 0) #lines 7 - 9 are the SPL from the Detect Numeric Outliers Assistant, may not be the "answer"
10| fit DensityFunction residual show_options="feature_variables" into my_model #have tried different settings here, unsuccessful, this is where we feel the problem lies
11| apply my_model

My DensityFunction Outliers graph does not look the way I need it to; the target is what the Detect Numeric Outliers Assistant populates.
Any pointers/settings I haven't tried yet? Does anyone know a one-to-one mapping between the settings on the Assistant and the parameters of DensityFunction (i.e., which parameter corresponds to a "sliding window")? I have looked through the docs here to no avail.
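For reference, the kind of adjustment I'm imagining - assuming DensityFunction's threshold parameter really does control the fraction of points flagged as outliers (I haven't confirmed the exact semantics) - would be something like:

```spl
| fit DensityFunction residual threshold=0.001 into my_model
| apply my_model threshold=0.001
```

i.e. lowering threshold well below its default so that only the most extreme residuals come back with isOutlier=1.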
I have specific events with rows and rows of MV data. They have header and footer data, but the bulk of the body is values for a given process on the host system. Analysts use these events to validate the host system processes. Presently they are copying/pasting the raw event text off the page and manually generating CSVs from it. I would like to streamline this process for them by having Splunk spit out a CSV of the needed values. The body looks something like:

fieldnameA|fieldnameB|fieldnameC
|valueA1|valueB1|valueC1|
|valueA2|ValueB2|valueC2|
|valueA3|Valueb3|valueC3|...

This goes on for thousands to tens of thousands of lines. I can rex out the values into multivalue fields and table those results, but the results come out as a single row. How would I create separate rows for each series of values so that the CSV export provides separate rows?
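The direction I've been exploring, in case it clarifies what I'm after (field names are from my example above; this is only a sketch assuming the usual mvzip/mvexpand pattern applies):

```spl
| rex max_match=0 field=_raw "\|(?<valA>[^|\r\n]+)\|(?<valB>[^|\r\n]+)\|(?<valC>[^|\r\n]+)\|"
| eval zipped=mvzip(mvzip(valA, valB), valC)
| mvexpand zipped
| eval fieldA=mvindex(split(zipped,","),0), fieldB=mvindex(split(zipped,","),1), fieldC=mvindex(split(zipped,","),2)
| table fieldA, fieldB, fieldC
```

After mvexpand, each series of values should become its own row, so the CSV export would have one row per series.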
Hi, I have this query where I am trying to compare two csv files and have the assets data merged.

CSV1:
host            ip       os
abc.domain.com  1.1.1.1  Windows
def             2.2.2.3  suse
xyz             3.3.3.3  aix

CSV2:
Name            IP Addresses      Description  OS
abc             1.1.1.1           blah blah    windows
abc.domain.com  1.1.1.1, 2.2.2.1  blah blah    windows
bcd             1.1.1.2, 2.2.2.2  blah blah    windows
def.domain.com  1.1.1.3, 2.2.2.3  blah blah    suse
xyz                               blah blah    aix

| inputlookup CSV1.csv
| lookup CSV2.csv "IP Addresses" as ip
| table Name, host, ip, OS, os
| fillnull Name value="Not Found"

With this query, for some reason I am not able to correlate assets that have multiple "IP Addresses" in CSV2. How can I ensure that any value in the cell is searched instead of requiring an exact match? Thank you
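One shape of an answer I've been considering - a sketch only, assuming makemv/mvexpand can split the comma-separated cell and that a join on ip is acceptable:

```spl
| inputlookup CSV1.csv
| join type=left ip
    [| inputlookup CSV2.csv
     | makemv delim="," "IP Addresses"
     | mvexpand "IP Addresses"
     | eval ip=trim('IP Addresses')]
| table Name, host, ip, OS, os
| fillnull Name value="Not Found"
```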
I am trying to use Splunk to review Windows events that have been exported from disconnected systems. I have all the exported .evtx files located in a local folder. I add that folder to the data inputs and have disabled everything else in data inputs. In the search box I have source="c:\\Events\\*". Everything looks good, but if I put the cursor back at the end of the search string and press enter (not changing the search string at all), the number of events goes up. I can't figure out why this is happening, since the .evtx files in that folder are not changing.
Hi, So, I want to count the number of visitors to a site, but because of the logging mechanism, I get many events per visit. I want to define a visit as one user per day.

<basesearch> | table time, username

Where the username is the same and the time is within 24 hours (1 day), I want to count it as 1.
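To make the definition concrete, the result I'm picturing is something like this (assuming bucket/stats behave as documented):

```spl
<basesearch>
| bucket _time span=1d
| stats dc(username) as visitors by _time
```

i.e. each username counts once per day, however many events it produced.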
I have an odd situation with a macro starting with an inputlookup like this:   inputlookup ADcomputerslist ```logic time``` | search Enabled="True"   I run it like this:   | `adsearchsortingv3`    However it expands like this:   | inputlookup ADcomputerslist where Enabled="True"   This doesn't seem right to me, but I am probably doing something wrong.
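To rule out a stale or different definition, I was thinking of inspecting the macro over REST - this assumes the configs/conf-macros endpoint exposes it the way I expect:

```spl
| rest /servicesNS/-/-/configs/conf-macros splunk_server=local
| search title="adsearchsortingv3"
| table title, definition
```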
Hi, I'm currently looking for software that can monitor logs from PLCs (mostly Allen Bradley devices) and HMIs (C-More). Is Splunk able to do this?
Hello, I have a simple extraction I need to make on a new dataset that has not yet had the fields defined for me. I want to extract the user name. The log structure is like so: time stamp, server info, logstatus, userinfo, result   _raw= 2021-05-12 03:58:59,533 [#-####-abcd-#] INFO ServicesLogName#logStatus(): ## - User john.doe@username.com with IP 01.001.01.1 result [successful]   In every instance, the username value follows "User" and precedes "with IP"    Index=basesearch application=specified |rex field=_raw "(?<username>")   Thank you! 
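The shape of the capture I'm trying to write - a sketch that assumes the username never contains whitespace - would be:

```spl
index=basesearch application=specified
| rex field=_raw "User\s+(?<username>\S+)\s+with IP"
| table username
```

which should pull john.doe@username.com into username for the sample event above.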
Hi all, I have server error and success logs in the data, and I want to get the percentage of failures out of the total count of requests. This is my initial search:

index=my_index source=my_source (line.data.status = 200) OR ("Sending 500 ("Server Error") response" OR line.data.status = 500)

So let's say the total number of results is 1000 and the total failures is 100 - that is 10% failures.
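The calculation I have in mind, sketched out (status matching simplified to the numeric field; single quotes assumed necessary for the dotted field name inside eval):

```spl
index=my_index source=my_source (line.data.status=200 OR line.data.status=500)
| stats count as total, count(eval('line.data.status'=500)) as failures
| eval failure_pct=round(failures/total*100, 2)
```

With 1000 results and 100 failures this should give failure_pct=10.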
I want to use my company branding on our dashboards, and we use Bariol as our typeface. Can I add this as a custom font to Splunk?
Architecture: 3 search heads in a cluster, captain dynamically elected, 1 search head deployer that is not one of the search head servers.

I go to $SPLUNK_HOME/etc/shcluster/apps/, create my app, and add local/data/ui/{nav/default.xml,views/(empty)} that is valid (per https://dev.splunk.com/enterprise/docs/developapps/createapps/addnavsplunkapp/). Push type is merge_to_default, so when I run the splunk apply shcluster-bundle command, it should merge the local folder into default and deploy that.

That doesn't happen, however. When I check the app on the search heads I see that the app is deployed, but there are only two folders inside: default, with app.conf:

[ui]
is_visible = 1
show_in_nav = 1
label = Testing Label

[launcher]
author = myname
description = Some label
version = 1.0

[install]
install_source_checksum = <digest>

The second folder, metadata, contains a single file metadata.local with nothing particularly interesting (except the version that I am running on - 8.0.2). Note that default.xml (or any other UI content) is nowhere to be found. When I open the app in the search heads' UI, I can select it from navigation and I am automatically redirected to Appname/Inputs, even though this is not defined anywhere in the default.xml that I created on the SHD.

I dug a little and found that /Inputs is defined in the Splunk Add-on for AWS's default.xml, which is apparently being used for the application. I can also make my app use my default.xml if I manually create the local folder on each of the search heads - but I don't find this approach good from a maintenance perspective.

My questions:
1: Where should I put my default.xml on the SH deployer for it to be deployed and used on the search heads?
2: Why does the search head automatically assign default.xml from a different app? Can I configure this somewhere? Can I maybe create a new default.xml to be selected for this app (and future apps I create)?

Thank you for your time!
Hello, everybody! Can anybody help with such an easy problem as counting events in a summary index? I have a summary index populated with a scheduled search like this:

| tstats prestats=true summariesonly=false min(CPU.CPU_Performance.cpu_load_percent), avg(CPU.CPU_Performance.cpu_load_percent), max(CPU.CPU_Performance.cpu_load_percent) from datamodel=MODEL where nodename=CPU.CPU_Performance by host, CPU.CPU_Performance.cpu_instance
| sistats min(CPU.CPU_Performance.cpu_load_percent), avg(CPU.CPU_Performance.cpu_load_percent), max(CPU.CPU_Performance.cpu_load_percent) by host, CPU.CPU_Performance.cpu_instance
| addinfo
| eval _time=info_min_time, host=upper(host)
| fields - info_sid, info_search_time, info_min_time, info_max_time
| collect index=my_summary

The search is scheduled to run once an hour, so every hour I get 1 event for each orig_host in the summary index. Now I want to check whether all the required events are there: I expect count=24 events for each orig_host per day. When I try the search:

index=my_summary | stats count by orig_host

I get all the psrsvd_ct_* values summed, which is not what I expected. How should I change my search to count the events in the summary index?
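One workaround I'm considering - on the assumption that it is the psrsvd_* fields that make stats weight the count - is to drop them before counting:

```spl
index=my_summary
| fields - psrsvd_*
| bucket _time span=1d
| stats count by orig_host, _time
```

which should give the expected 24 per orig_host per day if all hourly events are present.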
Hello, I have used the trial and cloud demo version of Splunk and I would like some information about the licensing. I saw that when we reach the daily indexing limit, we receive a warning that blocks search but keeps indexing the data. Imagine that I want to add new devices to the scope: Splunk will receive too much data and will send a warning. In this case, I want to increase the limit. Do I need to order a fresh new license, or can I just upgrade the old one? Thanks, YoanL
Is the Python for Scientific Computing app (https://splunkbase.splunk.com/app/2882/) version 2.0.x compatible with the AWS app? The AWS app documentation specifically mentions the use of Python for Scientific Computing app v1.2. Can anyone confirm that on an 8.1.3 system we can run the AWS app using the Python for Scientific Computing app version 2.0.x? Can anyone who upgraded to 8.x and is using the AWS app chime in?
Hi! I'm trying to combine ldapsearch with a simple search. Here is my query:

|ldapsearch domain="domain_name" search="(objectSid=$added_by$)" attrs=cn
|table added_by
    [|search index=windows_audit EventCode=4732
    | eval added_by=mvindex(Security_ID,0)
    | eval Sid=mvindex(Security_ID,1)]

The idea is to find the CN by SID using ldapsearch. But I get an error:

Error in 'table' command: Invalid argument: 'Account_Domain=domain_name'

A query like:

|ldapsearch domain="domain_name" search="(objectSid="some_sid")" attrs=cn |table cn

works perfectly on my Splunk instance. What am I doing wrong?
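The alternative I've started to look at is map, since a plain subsearch runs first and can't feed $added_by$ into ldapsearch - this sketch assumes map's $field$ token substitution works with ldapsearch:

```spl
index=windows_audit EventCode=4732
| eval added_by=mvindex(Security_ID,0), Sid=mvindex(Security_ID,1)
| dedup added_by
| map maxsearches=100 search="| ldapsearch domain=\"domain_name\" search=\"(objectSid=$added_by$)\" attrs=cn | table cn"
```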
While creating a new trial account I'm getting the error "something went wrong": Your SaaS trial could not be set up at this time.
Hi, I want to know how I can see what SQL queries are run on every table of my SQL databases. I mean, I want to monitor the queries and the users who write them. Thanks.
Hello, Is there a way to authenticate to the Phantom API without sending the ph-auth-token header? The documentation suggests adding the token in the URL, but doesn't give any details on how to achieve that: "For token based authentication, the token can be provided in the URL, or ph-auth-token must be present in the HTTP headers." https://docs.splunk.com/Documentation/Phantom/4.10.3/PlatformAPI/Using Many thanks
Hello, Is there a way to find how many searches are searching events older than 2 years? I need this information to evaluate the impact if I set the "tsidx retention policy" to two years. Thanks
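The direction I've thought about is the _audit index - assuming completed-search events there carry search_et/search_lt (the requested earliest/latest times):

```spl
index=_audit action=search info=completed
| eval earliest_epoch=tonumber(search_et)
| where isnotnull(earliest_epoch) AND earliest_epoch < relative_time(now(), "-2y")
| stats count by user
```

search_et can be "N/A" for all-time searches; those also reach old events, so they would need separate handling.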
There is a request from my boss to create a landing page in a glass table which will give an overview of cluster-wise dashboards. There are multiple dashboards created in Splunk Enterprise which display the status of various files. The objective is to create a KPI, service analyzer, and glass table by taking the status from the dashboards. But I found there is some delay between the status on the dashboards and the status from the KPI. How do I resolve this?