All Topics



Hello Splunk Community! I have a customer that has two different Splunk licenses, a perpetual license and a term license, in separate Splunk environments. Would it be possible to merge the two licenses into one single environment? The customer renews Splunk support for both licenses year after year, and the environments use no special Splunk products such as ITSI or ES. The main idea is to install both licenses on one license master and merge the environments into just one. Thanks in advance for your response, regards.
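As a hedged aside: once both license files have been installed on the same license manager, a quick way to confirm that both are present is a REST search along these lines (assuming you have permission to run the rest command; the exact columns returned can vary by version):

| rest /services/licenser/licenses
| table label, type, quota, status, expiration_time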
Alright guys, I hope you are ready for this question, because I have almost lost my mind! And thank you so much for all the help so far. I have been working on this problem for weeks and have to kindly ask for your help.

I am helping out a welding company that uses Splunk for their IoT data, and they want me to use SPL to count the number of events (alerts) generated between sequential stages of a 3-stage process. Let me break it down.

Information related to the process:
- A test subject goes through a 3-stage process with stages A, B and C, in that order.
- A test subject may abandon the process at stage A or B and then start again from stage A.
- Each time a stage takes place, an event is created with the IDENTIFICATION of the test subject, the TIMESTAMP at which the stage took place, and a unique VISIT_CODE.
- During any stage, a test subject may trigger an "ALERT", which is recorded with the TIMESTAMP, ALERT_CODE and test subject IDENTIFICATION.

What I need: to count how many alerts were generated by the test subjects between stages A and B, between stages B and C, and after stage C. Please note that a test subject may abandon the process at some point and later start again from stage A.

To get the data for the process stages I run:

index=bearing_P1 source=PROBES | table *

and I get:

STAGE  TEST SUBJECT  TIMESTAMP  VISIT_CODE
A      XYU-1         10         BKO
A      XYU-1         15         JUJD
B      XYU-1         20         DUDH
A      FF-09         25         KSIWJD
B      FF-09         30         AJAKAM
C      FF-09         35         ZISKS
A      UU-89         40         NNXJD
B      UU-89         45         DDUWO
A      I-44          50         JIWIW
A      W-6           55         SHDN
B      W-6           60         IWOLS
C      W-6           65         JDDD
A      U-90          70         DJDKSMS
B      U-90          75         NDJSM
A      T-87          80         DNDJDK

For the triggered alerts I use:

index=alerts source=probes_w1 | table *

and I get:

TEST SUBJECT  TIMESTAMP  ALERT_CODE
XYU-1         11         AYUJ-151571406
XYU-1         12         AYUJ-487008829
XYU-1         28         AYUJ-211990388
FF-09         32         AYUJ-4177221842
W-6           56         AYUJ-1300211351
W-6           63         AYUJ-3014305494
I-44          67         AYUJ-4454800551
U-90           73         AYUJ-1079921935
U-90           76         AYUJ-3348911727
U-90           79         AYUJ-2381219626
T-87           82         AYUJ-4778326278
W-6            89         AYUJ-3915716168

I want to be able to achieve something like this:

Alerts between stages A & B (including alerts from test subjects that abandoned the process at stage A on any attempt):
AYUJ-151571406, AYUJ-487008829, AYUJ-1300211351, AYUJ-1079921935, AYUJ-4778326278, AYUJ-4454800551

Alerts between stages B & C (including alerts from test subjects that abandoned the process at stage B on any attempt):
AYUJ-211990388, AYUJ-3014305494, AYUJ-3348911727, AYUJ-4177221842, AYUJ-2381219626

Alerts after stage C:
AYUJ-3915716168

I know this may seem impossible, but if there is a way to do this in Splunk, say over a period of one year, that would be great. I have tried autoregress and a bunch of other commands, but I have not gotten an inch closer to my desired output, and I fear that even if I do, at some point the data will truncate. Thank you so much to everyone who can point me in the right direction.

Kindly, Cindy
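A hedged sketch of one possible approach, not a definitive answer: append the two searches, sort each test subject's events by time, carry the most recent stage forward onto every event with streamstats, and bucket the alerts by that stage. It assumes the fields are named as in the tables above and that the subject field really contains a space (if it is actually TEST_SUBJECT, drop the rename and use that name in the by clauses):

index=bearing_P1 source=PROBES
| append [ search index=alerts source=probes_w1 ]
| rename "TEST SUBJECT" as SUBJECT
| sort 0 SUBJECT TIMESTAMP
| streamstats last(STAGE) as last_stage by SUBJECT
| where isnotnull(ALERT_CODE)
| eval phase=case(last_stage=="A", "Between A and B",
    last_stage=="B", "Between B and C",
    last_stage=="C", "After C")
| stats values(ALERT_CODE) as alert_codes count as alert_count by phase

streamstats last(STAGE) skips the null STAGE on the alert events, so each alert inherits the stage most recently reached by that subject, which also covers subjects that abandon the process and restart at A. Over a full year, keep an eye on the append subsearch limits; running both sources in a single base search, (index=bearing_P1 source=PROBES) OR (index=alerts source=probes_w1), avoids them.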
I created a veteran account to take Splunk Fundamentals 1 and 2 for free, but the Fundamentals 2 course still shows that I have to pay for it. Any idea how to fix this?
I am running a query like this:

index=main source=transferstatus sourcetype=logs transaction.transferSet.FileName="*myfile*" | stats dc(transaction.Id) by transaction.Id

This gives me the unique transaction IDs that I am looking for. Now I want to pass these unique transaction IDs to a query like the one below:

index=main source=transferstatus sourcetype=logs transaction.action="success" transaction.Id=[ pass each unique value from the first query here ]

transaction.action="success" is not present in the first query's results; it is part of the success events, which do not have the "transaction.transferSet.FileName" field. How do I join these two queries?
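One common pattern, rather than an explicit join, is to use the first search as a subsearch that filters the second. A minimal sketch, assuming the field names above and that the number of unique IDs stays within the subsearch limits:

index=main source=transferstatus sourcetype=logs transaction.action="success"
    [ search index=main source=transferstatus sourcetype=logs transaction.transferSet.FileName="*myfile*"
      | dedup transaction.Id
      | fields transaction.Id ]

The subsearch output is expanded into an OR of transaction.Id=... terms that filter the outer search. Subsearches are capped (roughly 10,000 results and 60 seconds by default), so if the ID list can get very large, combining both event types in a single search and correlating with stats by transaction.Id may scale better.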
Hi Team, we have configured an alert with a set of health rules. We are not receiving the alert via email and SMS, although we can see in the events that the email and SMS were sent for that violation. Please advise.
Hi, I need to configure an alert for when there is an error, for example "error: file not able to found" for an app (I need to get an alert if there are more than two such messages for the same app within 120-150 seconds). Is it possible to configure this? Can anyone please suggest how?
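A minimal sketch of one way to do it, assuming the errors live in an index you can search and the application name is extracted into a field called app (both are assumptions, adjust to your data). Scheduled every few minutes as an alert that triggers when results are returned:

index=your_index "error: file not able to found"
| bin _time span=150s
| stats count by app, _time
| where count > 2

bin gives fixed 150-second buckets rather than a true sliding window; if you need the sliding behaviour, streamstats with time_window=150s by app is the usual alternative.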
Hi everyone, I am trying to use Splunk to catch a flag and send an alert in a report if department = "business and economics" and role = "staff" in the above spreadsheet. I also want Splunk to return a report containing the employee_id, email, alert_sent_date, and date_updated when I run the spreadsheet through Splunk on a daily basis. Could anyone please advise? What should I look into to build this logic? Thanks.
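If the spreadsheet is uploaded as a lookup file, a minimal sketch could look like the following; the lookup name employees.csv is an assumption, and the field names are taken from the post, so adjust both to match your file. Saved as a daily scheduled report or alert, it returns the matching rows:

| inputlookup employees.csv
| search department="business and economics" role="staff"
| eval alert_sent_date=strftime(now(), "%Y-%m-%d")
| table employee_id, email, alert_sent_date, date_updated

Here alert_sent_date is simply stamped with the day the report runs; date_updated is assumed to already be a column in the spreadsheet.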
Hello guys, I have this SPL:

| stats count(events) by type process

and it gives me something correct like this:

PROCESS  TYPE OF ALERT  COUNT
A        RED FLAG       458
A        ISJD           5245
A        IOO            21452
A        XCNCNC         125
B        LPOLSSS        21
B        SSSSSS         584
B        RED FLAG       284
B        ISJD           455
C        RED FLAG       255214
C        ISJD           55551
C        IOO            8569

But when I do this:

| stats count(events) by type process | stats values(*) as * by process

I get something incorrect, because the alert types no longer correspond to the count next to them; Splunk seems to order the multivalue fields independently. For example, for process A I get:

PROCESS  TYPE OF ALERT  COUNT
A        IOO            125
         ISJD           5245
         RED FLAG       458
         XCNCNC         21452

and the rows for B and C get mixed up in the same way. I would like each type to stay paired with its correct count. Is there a proper way to do that? Thank you so much in advance! Kindly, C
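One way to keep each alert type paired with its count is to glue them into a single value before the second stats, so Splunk cannot reorder them independently. A minimal sketch, keeping your first stats and just renaming its output to count:

| stats count(events) as count by type, process
| eval type_count=type." = ".count
| stats list(type_count) as "alerts by type" by process

list() keeps the values in the order they arrive, and because each value already contains both the type and its count, the pairing can no longer be lost.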
Hi, has anyone else hit this issue? Risk lists are limited to 100,000 rows in Splunk for Recorded Future. Any ideas?
Hello everyone, I have been trying to move data from my old 6.3.2 Splunk deployment to a new, empty 8.1.3 deployment.

I first ran a search for "*" and downloaded everything, which came to 16 GB. I then used the new Splunk web GUI's monitor/import feature, which did take all the data, but it ended up with only one host, source, and sourcetype. The original Splunk had 3 index names, 2 hosts sending data, and many sources and sourcetypes. How can I move the data so that search results look the same as they did in the original Splunk? Is there a way to export everything so it matches exactly? I am having a hard time determining how to move these items. Both the new and old deployments have 1 search head, 2 indexers, and one master. I am also not familiar with the approach of copying the index folders. Hopefully someone can guide me on how to move the data while keeping all the hosts, sources, sourcetypes, etc. Thanks
Hi, using DB Connect in Splunk, how would I set up my SQL query to run on a schedule at a particular time of day or week? Any help will be highly appreciated. Thank you!
Code Architecture:
- Common code generates the initial results and a unique key for each grouping.
- multireport or appendpipe:
  - stanza 1 with its own set of stats, evals, etc.; uses the key from the common code
  - stanza 2 with its own set of stats, evals, etc.; uses the key from the common code
- The resulting data from the common, stanza 1, and stanza 2 results is aggregated using the key.

Issue Statement:
- common code + stanza 1 takes about 1 min to execute
- common code + stanza 2 takes about 1 min to execute
- common code + stanza 1 + stanza 2, using either multireport or appendpipe, takes about 17 min

[Q] Does this huge execution time difference make sense? I have attached a few images to show how I think multireport and appendpipe work. [Q] Is my understanding accurate?

How I think multireport works:
- The pre-multireport SPL reads the data from the index, filters it, creates some initial fields using streamstats and eventstats, and creates a key that is unique per the overall groupings correlated within this code.
- Lines 1 and 2 are identical and originate from the pre-multireport SPL. These results are presented to the stanza 1 and stanza 2 SPL.
- Lines 3 and 4 are independent results from stanza 1 and stanza 2 respectively.
- stanza 1 and stanza 2 execute mutually exclusively of one another.
- The sort and stats clauses within stanza 1 and stanza 2 are quite different, but the one does NOT impact the other.
- The final aggregation software ties all the data together based on a common key.

How I think appendpipe works:
- The pre-appendpipe SPL reads the data from the index, filters it, creates some initial fields using streamstats and eventstats, and creates a key that is unique per the overall groupings correlated within this code.
- Lines 1, 2, and 3 are identical and originate from the pre-appendpipe SPL. These results are presented to the stanza 1 and stanza 2 SPL.
- Lines 3 and 4 CAN be removed if I filter the input data with a where clause on the flag I called "which" associated with each set of data.
- Lines 5 and 6 are independent results from stanza 1 and stanza 2 respectively.
- stanza 1 and stanza 2 execute mutually exclusively of one another.
- The stats clauses within stanza 1 and stanza 2 are quite different, but the one does NOT impact the other.
- The final aggregation software ties all the data together based on a common key.
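For reference, a self-contained sketch of the two shapes being compared, using makeresults as stand-in data; the key, x, and y fields and the stats inside the stanzas are placeholders, not the real workload:

| makeresults count=6
| streamstats count as n
| eval key="grp".(n%2), x=n*10, y=n*3
| multireport
    [ stats sum(x) as x_total by key ]
    [ stats avg(y) as y_avg by key ]

versus

| makeresults count=6
| streamstats count as n
| eval key="grp".(n%2), x=n*10, y=n*3
| appendpipe
    [ stats sum(x) as x_total by key ]
| appendpipe
    [ stats avg(y) as y_avg by key ]

One behavioural difference worth noting: each multireport stanza sees only the common results, while the second appendpipe also receives the rows appended by the first one, which is presumably why the post mentions filtering with a where clause on the "which" flag.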
Hi everyone, I am trying to write a query that uses my notable events and displays the time each notable was opened and the time it was closed. Looking through the forums I found:

|eval _time=strftime(_time,"%Y/%m/%d %T")
|eval review_time=strftime(review_time,"%Y/%m/%d %T")
|eval assign_time = case(isnotnull(owner), _time)
| eval close_time = case(status=5, review_time)
|stats min(_time) as notable_time min(assign_time) as assign_time min(close_time) as close_time by AlertTitle,owner

But that isn't quite working, as it returns 0 results.
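Not a definitive diagnosis of the 0 results (that likely depends on the base search and on how status is stored in your environment), but a hedged variant that keeps the times numeric until after the stats and only formats them for display; all field names are taken from the post:

| eval assign_time=if(isnotnull(owner), _time, null())
| eval close_time=if(status==5, review_time, null())
| stats min(_time) as notable_time min(assign_time) as assign_time min(close_time) as close_time by AlertTitle, owner
| fieldformat notable_time=strftime(notable_time, "%Y/%m/%d %T")
| fieldformat assign_time=strftime(assign_time, "%Y/%m/%d %T")
| fieldformat close_time=strftime(close_time, "%Y/%m/%d %T")

If status is a string or a label rather than the numeric code 5, the status==5 comparison would need adjusting.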
Hi Splunkers, Long time listener, first time caller. I am trying to figure out how to make a dashboard based on a monthly vulnerability scan.  Our previous implementation was using relative dates to generate a dashboard, but that was highly dependent on everything going right.  I copy/pasted my way to a mostly-working dashboard from this community. Hoping I can get some help to get the rest of the way there.  The new implementation uses a ScanID from the report.csv.  My dashboard has a drop-down which doesn't let me select anything, but automatically selects the latest scanID (and dynamically assigns the previous month's ScanIDs for comparison/trendlines). I'd like to be able to use the drop down to review last month's report as well though.  Examples:  ScanID's: This month: 999999 Last month: 888888 Previous Month: 777777 etc. So as it stands the dashboard automatically performs a search and assigns the following tokens:  <set token="Scan1">$result.row 1$</set> <set token="Scan2">$result.row 2$</set> <set token="Scan3">$result.row 3$</set> <set token="Scan4">$result.row 4$</set> I'd like to be able to click the drop-down and select ScanID 888888 and have it automatically assign the token to "Scan1", and dynamically set "Scan2" to ScanID 777777 and so on.  Hope I've explained it well enough. Below is my sample (anonymized dashboard xml/source). Thanks in advance!     <form theme="dark"> <label>dropdown dashboard</label> <fieldset submitButton="false" autoRun="true"> <input type="dropdown" token="Scan1" searchWhenChanged="true"> <label>Select a Report</label> <search> <query>index="fakeindex" | dedup ScanID | table ScanID | head 6 | sort - ScanID | transpose </query> <earliest>-6mon@mon</earliest> <latest>now</latest> <done> <set token="Scan1">$result.row 1$</set> <set token="Scan2">$result.row 2$</set> <set token="Scan3">$result.row 3$</set> <set token="Scan4">$result.row 4$</set> </done> </search> </input> </fieldset> <row> <panel> <title>Panel for Debugging Token:</title> <html> Upercase $ScanX$ <div>This Month: $Scan1$</div> <div>Last Month: $Scan2$</div> <div>Prev Month: $Scan3$</div> </html> </panel> </row>     [example search that required multiple scanID's]       <single> <search> <query>index="fakeindex" sourcetype=fakesourcetype ScanID=$Scan2$ NOT [ search index="fakeindex" sourcetype=fakesourcetype ScanID=$Scan3$ | stats count by somevalue | table somevalue] | dedup somevalue ScanID | stats count(somevalue) as EVENTS | eval period="Last Month" | append [ search index="fakeindex" sourcetype=fakesourcetype ScanID=$Scan1$ NOT [ search index="fakeindex" sourcetype=fakesourcetype ScanID=$Scan2$ | stats count by somevalue | table somevalue] | dedup somevalue ScanID | stats count(somevalue) as EVENTS | eval period="This Month" ] | fields EVENTS period _time</query> <earliest>-6mos@mos</earliest> <latest>now</latest> <sampleRatio>1</sampleRatio> </search> <option name="colorBy">trend</option> <option name="colorMode">none</option> <option name="drilldown">none</option> <option name="numberPrecision">0</option> <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option> <option name="rangeValues">[0,30,70,100]</option> <option name="refresh.display">progressbar</option> <option name="showSparkline">1</option> <option name="showTrendIndicator">1</option> <option name="trellis.enabled">0</option> <option name="trellis.scales.shared">1</option> <option name="trellis.size">medium</option> <option name="trendColorInterpretation">inverse</option> <option 
name="trendDisplayMode">absolute</option> <option name="underLabel">New</option> <option name="unitPosition">before</option> <option name="useColors">1</option> <option name="useThousandSeparators">1</option> </single>      
Hello, I am doing a fundamentals course lab and cannot figure out what to search in order to get a list of "all web application events where a file was successfully served to the user". Can anyone help steer me in the right direction? Any help would be greatly appreciated. 
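As a hint rather than the exact lab answer, and with the caveat that the index and sourcetype names are assumptions that vary between course environments: web access logs carry an HTTP status field, and a file successfully served to the user is normally a 200, so the usual starting point looks like:

index=web sourcetype=access_combined_wcookie status=200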
I am using SNMP to poll interface stats from a device, which only returns the total packets received on the interface, and I am polling every 60 seconds. Is there any way in a dashboard to take the difference between those values and then divide by 60 to get packets per second, and display this value in the dashboard? i.e. (<event1 value> - <event2 value>)/60. The dashboard would need to do this for each event coming in.
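A minimal sketch of the underlying search, assuming the counter is extracted into a field called packets_received and that each host/interface pair should be tracked separately (the index and field names are assumptions, adjust to your data):

index=your_snmp_index packets_received=*
| sort 0 host, interface, _time
| streamstats current=t window=2 range(packets_received) as packet_delta by host, interface
| eval pps=packet_delta/60
| timechart span=1m latest(pps) by interface

range() over a two-event window gives the difference between consecutive samples, which is correct as long as the counter only increases; counter resets or wraps would need extra handling.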
I want to use the Splunk webhook feature to send fired alerts/events to another third-party system. The third-party REST API needs authentication, so I have given the webhook URL as https://username:password@url, but it is not sending the triggered alerts to this URL. Can we not give a username/password in the URL? How can I debug or check why Splunk is not able to send the triggered alerts?
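For the debugging part, modular alert actions such as the webhook log to Splunk's internal index, so a keyword search along these lines (assuming you can search _internal) usually shows whether the action ran and any HTTP errors it returned:

index=_internal sourcetype=splunkd sendmodalert webhook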
I've got scripts that call the API to get applications and their health rules. I'm trying to get just the active/enabled health rules. If the application-level option to evaluate health rules is set to Off, the individual health rules are still returned as enabled (True). Is there a way via the API to see whether the application has the evaluate-health-rules option set to Off?
I want to fetch the results of triggered alerts from time T1 to T2. I tried passing the earliest_time or earliest query params, but that didn't work. Can someone please let me know how to pass time-filter params to the following REST APIs?

https://splunk1:8089/servicesNS/nobody/-/alerts/fired_alerts/-?output_mode=json

https://splunk1:8089//servicesNS/nobody/-/search/jobs/scheduler__admin__SplunkEnterpriseSecuritySuite__RMD5123456_at_1623704400_52981/results?count=0&output_mode=json
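Hedged as a workaround rather than a documented time filter on fired_alerts: if the entries returned in your environment carry a trigger_time field (the per-alert instance listings do), you can pull them with the rest command and filter in SPL, substituting the epoch values for your T1/T2:

| rest /servicesNS/nobody/-/alerts/fired_alerts/-
| eval trigger_time=tonumber(trigger_time)
| where trigger_time>=1623704400 AND trigger_time<=1623790800
| table savedsearch_name, trigger_time, sid

For the second URL, the results endpoint of an already-finished job cannot be re-scoped by time; the time range has to be applied in the search that produced the job, or the returned results post-filtered.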