Hi, I'm new to Splunk and trying to get data from Elasticsearch into Splunk. I was able to add the "Elasticsearch Data Integrator - Modular Input" app, and the config seems to be fine, but how should I use the data? Any suggestions or docs? Millions of thanks!
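If the modular input is ingesting correctly, the events should be searchable in whatever index the input stanza writes to. As a first check, something like the following might help (the index and sourcetype here are placeholders; the actual values come from the input's configuration):

```spl
index=main sourcetype="elasticsearch*" earliest=-60m
| head 20
```

From there the data can be used like any other Splunk events: field extraction, stats, dashboards, and so on.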
I am new to Splunk. I need a visualization that shows the field value corresponding to each stats result. I want to show: stat_date with min(Size), stat_date with max(Size), stat_date with min(Files), and stat_date with max(Files). The query below gets me the stats values, but I don't know how to get the corresponding stat_date for each of them.

| stats sum(mbFileSize) AS "Size", dc(FileName) AS "Files" by stat_date
| stats min(Size) as min_size max(Size) as max_size min(Files) as min_file max(Files) as max_file

Thank you
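One way to keep the stat_date attached to each extreme, sketched against the field names in the post, is to compute the min/max with eventstats and then keep only the matching rows:

```spl
| stats sum(mbFileSize) AS Size, dc(FileName) AS Files by stat_date
| eventstats min(Size) as min_size, max(Size) as max_size, min(Files) as min_file, max(Files) as max_file
| where Size=min_size OR Size=max_size OR Files=min_file OR Files=max_file
```

Because eventstats adds the aggregates as new fields on every row instead of collapsing the rows, each stat_date survives to the final result.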
Hi everyone, I have a requirement like this: I have one dashboard consisting of several panels such as TimeOut, Failure, Success, etc. The dashboard shows a count for each panel, e.g. TimeOut = 2176, Failure = 51. By default it shows yesterday's data.

When I select last 7 days or last 30 days from the date drop-down, it currently displays the total count for that range. I want it to display the latest count instead. I also want to create another panel that displays the trend for the selected range. So if I select last 7 days, the TimeOut panel should show the latest count (not the total for the 7 days) and a second panel should show the timeout trend over those 7 days. Currently I have only one panel, and it shows the total when last 7 days is selected. Can someone guide me on this? Below is my search query:

<row>
  <panel>
    <single>
      <title>TIMEOUT</title>
      <search>
        <query>index="abc" sourcetype=xyz Timeout $Org$ | stats count</query>
        <earliest>$field1.earliest$</earliest>
        <latest>$field1.latest$</latest>
      </search>
      <option name="colorBy">value</option>
      <option name="drilldown">all</option>
      <option name="height">100</option>
      <option name="numberPrecision">0</option>
      <option name="rangeValues">[0,10,25,40]</option>
      <option name="trendDisplayMode">percent</option>
      <option name="trendInterval">-5m</option>
      <option name="unit"></option>
      <drilldown>
        <set token="show_panel">true</set>
        <set token="selected_value">$click.value$</set>
      </drilldown>
    </single>
  </panel>
</row>

Thanks in advance.
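For a single-value panel to show the latest value (plus a trend) rather than a grand total, the search generally has to return a time series instead of a single count, for example:

```spl
index="abc" sourcetype=xyz Timeout $Org$
| timechart span=1d count
```

With this as the panel search, the single-value visualization renders the most recent day's count and can draw a sparkline/trend from the earlier days; trendInterval would then be set to something matching the span (e.g. -1d) instead of -5m. This is a sketch, not a drop-in replacement: the span and trend options need tuning against the ranges offered in the drop-down.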
Hi, I need an alternative to:

| lookup abc field1 AS field2 OUTPUT field1, fieldA, fieldB, fieldC

I have a lookup definition over a lookup file that holds information about more than 50,000 vulnerabilities. When I use this lookup definition in my queries, the result set is never more than 1000 rows (1000 being the maxmatch limit that Splunk supports for a lookup definition). I need an alternative, e.g. a subsearch using the lookup itself, or anything that matches all of the roughly 50,000 values in my lookup as efficiently as possible. Sample query (the original query is much longer, but I will use your solution to consolidate):

index=ABC sourcetype="XYZ"
`comment (This is to reduce Splunk's internal fields to keep my table size smaller)`
| fields - index, source, sourcetype, splunk_server, splunk_server_group, host, eventtype, field, linecount, punct, tag, tag::eventtype, _raw
`comment (This is to limit to the only fields which I need)`
| fields dns, vuln_id
`comment (vuln_id is a multivalued field and I have to separate them to get accurate stats. When stats is run, it takes care of expanding them and it works as expected)`
| makemv delim="," vuln_id
| stats count by vuln_id, dns
| lookup vuln_info VulnID AS vuln_id OUTPUT Scan_Type, OS, Environment

The approach below is what I have tried; it returns nothing, but it should. I am missing something here:

index=ABC sourcetype="XYZ"
| fields - index, source, sourcetype, splunk_server, splunk_server_group, host, eventtype, field, linecount, punct, tag, tag::eventtype, _raw
| fields dns, vuln_id
| makemv delim="," vuln_id
| stats count by vuln_id, dns
    [| inputlookup vuln_info.csv
     | fields VulnID, Scan_Type, OS, Environment
     | rename VulnID as vuln_id]

Any solution that gets all records from the lookup, instead of an incomplete dataset due to the lookup definition's maxmatch limit of 1000, is welcome. Thanks in advance!
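If the 1000-row ceiling really is the lookup definition's match limit, that limit is configurable: max_matches can be raised in transforms.conf (or under the lookup definition's Advanced options in the UI). A sketch, assuming the definition is named vuln_info:

```
[vuln_info]
filename = vuln_info.csv
max_matches = 50000
```

Separately, a subsearch placed after stats does not behave like a join; subsearch results are usually combined via the search command, append, or the lookup command itself, which may be why the second query returns nothing.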
Hi! I was wondering if anyone has actually found a way to represent the availability of an application over time (or its downtime) as a metric counter, i.e. the total time the application was not performing correctly in a given timeframe.
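I have not seen a built-in counter for this, but one common approach is to derive availability from the presence or absence of healthy events per time slice. A sketch (the index, sourcetype, and status field are all assumptions for illustration):

```spl
index=app_health sourcetype=healthcheck
| timechart span=1m count(eval(status="ok")) as ok_checks
| eval up_minute=if(ok_checks > 0, 1, 0)
| stats sum(up_minute) as up_minutes, count as total_minutes
| eval availability_pct=round(100 * up_minutes / total_minutes, 2), downtime_minutes=total_minutes - up_minutes
```

If a true metric counter is needed, the result of such a search could be written to a metrics index (e.g. with mcollect) on a schedule.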
I'm trying to upload debugging symbols (dSYMs) for my iOS app, but I keep getting timeout errors regardless of the method I use to upload the dSYMs. I get the following error using the build script, a curl request, or the manual upload on the website:

*   Trying 208.78.105.200...
* TCP_NODELAY set
* Connected to ios.splkmobile.com (208.78.105.200) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
    CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
} [224 bytes data]
* Operation timed out after 300178 milliseconds with 0 out of 0 bytes received
* stopped the pause stream!
* Closing connection 0
Splunk Mint: ERROR "0" while uploading "/tmp/splunk-mint-dsyms/redacted.zip"

I haven't changed the API key or token, and I haven't changed the provided build script. Everything was working fine until just yesterday. I also noticed that none of the old dSYMs are showing for the app on mint.splunk.com, even though I've been uploading dSYMs for a few years now. Any help is greatly appreciated.
Hi, does anyone have experience archiving data to S3 Glacier using a script or any third-party apps? I already know the steps for uploading files to S3 Glacier using AWS CLI commands, but that kind of configuration is manual. My goal is to automatically upload all data that arrives in the frozen directory to S3 Glacier, similar to how the Splunk forwarder works.
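One option worth exploring is Splunk's own cold-to-frozen hook rather than watching the directory externally: indexes.conf can point at a script that Splunk runs for every bucket it is about to freeze, and that script can push the bucket to Glacier. A sketch (the script name is hypothetical; Splunk passes the bucket path as the script's argument):

```
# indexes.conf
[your_index]
coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/archive_to_glacier.py"
```

Inside the script, something like `aws s3 cp <bucket_path> s3://your-archive-bucket/frozen/ --recursive --storage-class GLACIER` would do the upload. Note that Splunk deletes the bucket once the script exits successfully, so the script must finish the copy before returning.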
Hi Splunkers, is it possible to have all of the indexes use a single frozen directory path when archiving to Amazon S3 Glacier? Can any of you share your thoughts on storing data in Amazon S3 Glacier? It would be nice if you could explain the architecture or what needs to be done to archive data to S3 Glacier.
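Regarding a single frozen path for every index: coldToFrozenDir is set per index, but like other indexes.conf settings it can be placed in the [default] stanza so that every index inherits it. A sketch (the path is an example):

```
# indexes.conf
[default]
coldToFrozenDir = /opt/splunk/frozen
```

Be aware that frozen bucket directory names do not include the index name, so if buckets from different indexes land in the same directory, an archiving script may need its own convention (e.g. per-index subfolders) to tell them apart.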
I would like to modify an existing dashboard to limit which Linux packages are reported. Specifically, I want to see only packages whose names start with kernel. The plugin in use is Software Enumeration (SSH). The existing query returns too many records and is truncated; if I could limit it to kernel packages only, I think the query would complete. Does anybody have suggestions on how to pass kernel* as a filter?
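If the package name lands in its own field, one hedged way to narrow the existing dashboard query is to filter early; the field name package here is an assumption and should be replaced with whatever the Software Enumeration (SSH) sourcetype actually extracts:

```spl
index=your_index sourcetype=your_sourcetype
| search package="kernel*"
```

Putting "kernel*" directly into the base search as a raw-text term can cut the event volume even further before any field-level filtering happens.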
Hi all, I want to extract one particular field from the description column, but when I try to extract the field I get the error: "The extraction failed. If you are extracting multiple fields, try removing one or more fields. Start with extractions that are embedded within longer text strings."

I tried to extract it through regex. Sample data:

Defined","category":"Not Defined","resolution_time":"Not Defined","response_sla_exclusion":"","sla_contract":"Met","long_description":"Contact Number: 9967795614Type: Error/FailureWebsite/URL: https://xxxxxxProduct Name: Automatic Ticket AssignmentWorkstation/Cubicle/Bay: xxxxxDescription: Kindly check Health Status alertCountry/Location: xxxxVersion Number: NA","problem_abstract":"Kindly check Health Status alert"}

From the above I want to extract only the Product Name field. Can someone help me with this?
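Since the labels inside long_description run together with no delimiter, one workable trick is to anchor the capture on the label that follows Product Name. A sketch with rex (field and label text taken from the sample above):

```spl
| rex field=long_description "Product Name:\s*(?<product_name>.+?)Workstation\/Cubicle"
```

Against the sample event this would capture "Automatic Ticket Assignment". It assumes Workstation/Cubicle always follows Product Name, so the anchor would need adjusting if the label order varies between events.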
Let's say I want to display the total number of unique possible combinations for a given set of things (n) when various amounts (r) of those things are chosen. You cannot choose the same thing more than once, and it does not matter in which order the things are chosen.

For example, I have 13 searches that contribute to a user's risk score (n = 13). Right now my users' behavior has triggered a maximum of 6 of those searches (r = 6). However, each user may have triggered a different combination of 6 searches from the 13 total possible searches.

How can I calculate and display, for each risk score (1-13), the number of unique combinations of searches that could contribute to the score? For example, if a user has a risk score of 13, there is only 1 combination: all 13 searches. If a user has a score of 1, there are 13 possible combinations. I want to dynamically calculate and display all the rest of the possibilities without statically defining n or r.

This is the formula I am using: n!/(r!(n-r)!)
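The formula is the binomial coefficient, and the values are easy to sanity-check outside Splunk. A quick Python check of n!/(r!(n-r)!) for n = 13 (Python used here only to verify the arithmetic):

```python
import math

n = 13  # total number of contributing searches

# math.comb(n, r) computes n! / (r! * (n - r)!) exactly
for r in range(1, n + 1):
    print(r, math.comb(n, r))
```

SPL has no factorial function, so in a dashboard the same table is usually produced with a chain of eval steps or a pre-generated lookup of the coefficients.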
Hello, is there a way to keep row data together when using the stats command?

ID  Loc  FirstName  LastName
1   NYC  Tom        Jones
2   CHI  Peggy      Sue
3   BOS  Phil       Collins
4   BOS  John       Lennon
5   NYC  Paul       McCartney

If I used `| stats values(FirstName), values(LastName) BY Loc` I believe I would get this:

BOS  John Phil  Collins Lennon
CHI  Peggy     Sue
NYC  Paul Tom  Jones McCartney

How do I keep FirstName and LastName together BY Loc? This is a scaled-down example; I have more than 20 fields and over 10,000 events. Thanks in advance. Stay safe and healthy, you and yours. God bless, Genesius
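One common workaround is to glue the related fields into a single value before stats, so they travel together. A minimal sketch with the two name fields:

```spl
| eval person=FirstName." ".LastName
| stats values(person) as People by Loc
```

With 20+ fields, the same idea scales by concatenating (or mvzip-ing) the fields that must stay paired. Alternatively, stats list() keeps values in the order the events were seen, so positions line up across columns, whereas values() sorts and dedups each column independently.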
How do I round the numbers produced by this search?

index=net_auth_long
| eval time_hour=strftime(_time,"%H")
| chart eval(count(channel)/7) AS field_div_by_7 by channel time_hour
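Assuming the goal is a fixed number of decimal places, round() can be applied inside the chart eval; two decimals here is a guess:

```spl
index=net_auth_long
| eval time_hour=strftime(_time,"%H")
| chart eval(round(count(channel)/7, 2)) AS field_div_by_7 by channel time_hour
```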
I have a log that contains records for tables processed in a database. For each table, a log entry is added showing the number of records to be processed. If processing fails for whatever reason, an ERROR is recorded. If processing succeeds, nothing is recorded. Sample log:

[ProcessId- 5459] [2020-08-29 06:22:34] [INFO] For tenant - test1_sales_nas_10, total number of records purged = 0
[ProcessId- 5459] [2020-08-29 06:22:34] [INFO] For tenant - test1_sales_nas_18, total number of records purged = 0
[ProcessId- 5459] [2020-08-29 06:22:34] [INFO] For tenant - test2_nas_01, total number of records purged = 0
[ProcessId- 5459] [2020-08-29 06:22:34] [INFO] For tenant - test3_nas_1113, total number of records purged = 0
[ProcessId- 5459] [2020-08-29 06:22:34] [ERROR] Error occurred during purging of records. Error code returned to shell script by DB function = -1.
[ProcessId- 5459] [2020-08-29 06:22:34] [INFO] For tenant - test3_nas_1112, total number of records purged = 0
[ProcessId- 5459] [2020-08-29 06:22:34] [ERROR] Error occurred during purging of records. Error code returned to shell script by DB function = -1.

There is nothing that links the ERROR record to the INFO record except the order in which they happened. How can I create a search that returns records matching "Error occurred during purging of records" AND the previous record in the log, to provide context for the error? I realize this makes a huge assumption (that the ERROR always refers to the record immediately above it), but it's unfortunately the only thing I have to go on. Any help is appreciated.
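One way to pair each ERROR with the entry just before it is to sort into file order and carry the previous raw event forward with streamstats. A sketch (the index is a placeholder):

```spl
index=your_index ("total number of records purged" OR "Error occurred during purging of records")
| sort 0 _time
| streamstats current=f window=1 last(_raw) as previous_entry
| search "Error occurred during purging of records"
| table _time, previous_entry, _raw
```

The caveat matches the one in the post: when several entries share the same timestamp, as in the sample, _time alone cannot guarantee the original file order, so _indextime or the source line order may be needed as a tiebreaker.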
I was following this guide on adding command-line logging to my GPO. I verified that the current GPO has these settings:

You must enable the Audit Process Creation audit policy so that 4688 events are generated. You can enable this audit policy from the following Group Policy Object (GPO) container: Computer Configuration\Windows Settings\Security Settings\Advanced Audit Policy Configuration\System Audit Policies\Detailed Tracking.

You must enable the Include command line in process creation events GPO setting. You can find this setting in the following GPO container: Computer Configuration\Administrative Templates\System\Audit Process Creation. Alternatively, you can enable this setting in the local system registry by setting the HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\System\Audit\ProcessCreationIncludeCmdLine_Enabled registry key value to 1.

To test, I simply created a directory, deleted it, and also ran ipconfig and net share, but I was unable to find those commands in the logs except for ipconfig. Is there anything else I need to do, maybe in the inputs.conf file?

EDIT: It seems mkdir and rmdir do not show up in the logs, but all the others do. Does anyone know why?
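For reference, the registry alternative quoted above can be applied from an elevated prompt (same key and value as in the guide):

```
reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\System\Audit" /v ProcessCreationIncludeCmdLine_Enabled /t REG_DWORD /d 1 /f
```

On the EDIT: one likely explanation is that mkdir and rmdir are cmd.exe internal commands, so no new process is created and therefore no 4688 event is generated for them, whereas ipconfig and net are separate executables that do spawn processes.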
Hello, I am trying to test on a single host, and this search may be completely wrong; I would appreciate any assistance as I am just starting to use Splunk. I am trying to capture any local accounts created on, or added to the local Administrators group of, one host. This gets me what I need (the time, hostname, and who created the account), but the Security_ID field is lumping everything into one value. I need a column with just Hostname\LocalAccountName, or just LocalAccountName. Security_ID includes the sAMAccountName that created the account, the local account name, and BUILTIN\Administrators all in one. This is what I am searching; any help will be appreciated:

MyHostName EventCode=4732 OR EventCode=4720
| table _time, HostName, src_user, Security_ID, EventCode
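The member/new-account name usually has to be pulled out of the event body rather than the multivalued Security_ID. A hedged rex sketch (the message layout differs between 4720 and 4732 and between Windows versions, so the pattern needs checking against real events):

```spl
MyHostName (EventCode=4732 OR EventCode=4720)
| rex "(?ms)(?:New Account|Member):\s+Security ID:\s+\S+\s+Account Name:\s+(?<local_account>[^\r\n]+)"
| table _time, HostName, src_user, local_account, EventCode
```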
index=XXXX sourcetype=XXXX ("filename1" OR "filename2" OR filename3)
| rex "(?<status>passed) request\=\[\/\w+\/(?<to_DST_Filename>.*.txt)\.\w+\."
| rex "(?<status>orig) request\=\[(?<to_DST_Filename1>.*.txt)\.\w+\."
| eval to_DST_Filename = coalesce(to_DST_Filename,to_DST_Filename1)
| fields _time to_DST_Filename
| eval Status_1 = if(substr(to_DST_Filename,3,4)="hold","Duplicate","Transferred")
| eval Status1 = if(like(to_DST_Filename,"%dup%"),"Duplicate","Transferred")
| eval Status = coalesce(Status_1,Status1)
| fields _time to_DST_Filename Status
| table _time to_DST_Filename Status
| rename _time as "Time_Sent_by_SI"
| convert ctime(Time_Sent_by_SI)
| dedup to_DST_Filename
| search to_DST_Filename!="" AND Status=Transferred

In the search above, the three files "filename1", "filename2", and "filename3" will not always have results. If any file is missing from the results, I want it shown anyway with Status=Pending. Looking for results like below:

Filename     Status
filename1    Transferred
filename2    Transferred
filename3    Pending
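A common pattern for "show expected items even when absent" is to append one synthetic Pending row per filename and let dedup discard the synthetic row whenever a real result exists (dedup keeps the first occurrence, and the appended rows come last):

```spl
| append
    [| makeresults
     | eval to_DST_Filename=split("filename1,filename2,filename3", ",")
     | mvexpand to_DST_Filename
     | eval Status="Pending"]
| dedup to_DST_Filename
| table to_DST_Filename, Status
```

This fragment would replace the final lines of the existing query; the Status=Transferred filter would have to be dropped, since the Pending rows must survive to the output.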
Is it possible to manage the tokens of a multiselect so that it accepts multiple click events from a table? I have a table of product names (Splunk test data) and I want to be able to click a product and add it to my multiselect. I also have a multiselect for the token applied on the click; essentially I want a user to be able to use either. When a value is clicked, though, it does not show in the multiselect for the token, even though my searches run correctly whichever I use. Can I manage tokens for a multiselect in this way?
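One pattern seen in community answers for this is a drilldown eval that appends the clicked value to the form-prefixed token, since setting form.<token> is what makes the value appear selected in the multiselect input. A sketch, assuming the multiselect's token is named product_tok (the token name is an assumption):

```xml
<drilldown>
  <eval token="form.product_tok">if(isnull($form.product_tok$), $click.value$, mvappend($form.product_tok$, $click.value$))</eval>
</drilldown>
```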
We're currently using Splunk for traditional dashboards, monitoring, and alerting, which means we're now very effective at identifying and addressing errors and exceptions in our apps when they occur. We're now looking to build more sophisticated monitoring that looks for issues across the journeys users complete in our app and helps us identify subtler issues that might not involve errors or exceptions. For example, a simplified version of our sign-up process looks something like:

1. Receive sign-up request
2. Create account record
3. Write account-created message to queue
4. Read account-created message from queue and send welcome email

Each of the steps is logged in Splunk, and there's a common correlation id logged across each of them. Any exceptions that occur trigger alerts, so that's all good. On occasion we might have an issue where messages at step 4 stop being read from the queue and the welcome emails are not sent, but nothing is throwing exceptions and it's not obvious anything is wrong until a customer contacts us to flag the missing sign-up email. At that point we can query Splunk and see that steps 1, 2, and 3 completed successfully, but there are no logs for step 4, which indicates an issue that needs investigating. We'd like to automate the process of checking that all of the expected steps in a given journey are completed, and alert when steps are missed. Is there a way we can achieve this with Splunk? I've seen a question about visualising order journeys (https://community.splunk.com/t5/Dashboards-Visualizations/How-to-Visualize-Order-Journey-through-splunk/m-p/458753) which sounds in the same ballpark and refers to Splunk Business Flow, but the docs indicate it's no longer available to purchase... Are there other out-of-the-box or paid options?
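A funnel check keyed on the correlation id can be approximated with a scheduled alert that counts distinct steps per journey and flags journeys that stall, with a grace period so in-flight sign-ups are not reported. All names here are assumptions:

```spl
index=app sourcetype=signup_logs earliest=-24h
| stats dc(step) as steps_completed, values(step) as steps_seen, max(_time) as last_seen by correlation_id
| where steps_completed < 4 AND last_seen < relative_time(now(), "-15m")
```

Scheduling this as an alert that triggers on a nonzero result count gives the "step 4 never happened" detection described above without relying on exceptions.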
Hi, I'm following the Zoom logging instructions and have everything configured. I'm ready to put in the exception for the firewall change, but I'm not sure of the incoming URL from Zoom. Is it just https://zoom.marketplace.us/user/ or should there be some designation of the company's account or the user's name in that URL? I've looked everywhere and can't find a specific answer. I don't want to put in a firewall request and then have it turn out wrong, so I figured someone here may have done this and knows for sure. Thanks!