All Topics

Hi there, I am new to Splunk and have data coming in from just one server. I have tried running the basic brute-force detection search and received thousands of events. I don't think this is accurate, so I feel as though I must have misconfigured something somewhere. I'm not sure where I should begin to look. Any help would be appreciated. Jamie
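A minimal sanity-check sketch, assuming Windows security logs (the index name and EventCode below are assumptions; adjust them to your data):

    index=wineventlog EventCode=4625
    | stats count AS failed_logins, values(src_ip) AS src_ip BY user
    | where failed_logins > 10

If one noisy service account dominates the results, the detection search itself may be fine and the data simply needs filtering or a higher threshold.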
Hi, I am looking for a solution to a problem that has been addressed here: Using a column of field names to dynamically select fields for use in eval expression, but with one difference. The original solution was:

    | makeresults
    | eval raw="x1 y1 z1 field1 x1::x2 y2 z2 field3 z2::x3 y3 z3 field2 y3"
    | makemv delim="::" raw
    | mvexpand raw
    | rex field=raw "(?<field1>\S+)\s+(?<field2>\S+)\s+(?<field3>\S+)\s+(?<lookup>\S+)\s+(?<expected_result>\S+)"
    | fields - raw _time
    | rename COMMENT AS "Everything above fakes sample data; everything below is your solution."
    | eval result="N/A"
    | foreach field* [eval result=if(lookup="<<FIELD>>", $<<FIELD>>$, result)]

In the | foreach command, field* is used as the set of input fields. In my case, however, the set of input fields cannot be described by a wildcard; there are many field names in my input "list". I decided to create a multivalue field holding all the values in the lookup column:

    | eventstats values(lookup) as mv_lookup

That created the multivalue field mv_lookup, which I want to use as the input for the | foreach command:

    | foreach mode=multivalue mv_lookup [eval result=if(lookup="<<FIELD>>", $<<FIELD>>$, result)]

I guess that if the foreach input is a multivalue field, I have to use <<ITEM>> instead of <<FIELD>>, and that is why lookup="<<FIELD>>" never matches. Is there any way to use a multivalue list of values (names of fields) to perform the requested lookup? Thanks in advance. David
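A sketch of the multivalue variant, assuming a Splunk version that supports foreach mode=multivalue: in that mode the placeholder is <<ITEM>> (each value of the multivalue field), and wrapping it in single quotes should make eval treat the substituted text as a field name:

    | eventstats values(lookup) as mv_lookup
    | eval result="N/A"
    | foreach mode=multivalue mv_lookup
        [ eval result=if(lookup="<<ITEM>>", '<<ITEM>>', result) ]

This is untested against your data; the key assumption is that every value in mv_lookup is also the name of a field present on the event.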
I have several of our application teams asking if we are able to integrate AppDynamics with the third-party application Snowflake. Has anyone tried this, or been successful in integrating the two tools together?
Hi, I have a lookup with two fields:

    | inputlookup ID-Client-Lookup.csv
    | fields ClientId ClientName

I have a base search:

    sourcetype="oxygen-standard"
    | regex AllClientIDs="^[a-z0-9A-Z]{2,}$"
    | stats count by AllClientIDs

I want a query that will take each client ID from my base search, find it in the lookup, and give me the client names. Can anyone help?
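A minimal sketch using the lookup command, assuming ID-Client-Lookup.csv is uploaded as a lookup table file and AllClientIDs holds the same values as ClientId:

    sourcetype="oxygen-standard"
    | regex AllClientIDs="^[a-z0-9A-Z]{2,}$"
    | stats count by AllClientIDs
    | lookup ID-Client-Lookup.csv ClientId AS AllClientIDs OUTPUT ClientName

Rows whose ID is not in the CSV will come back with an empty ClientName.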
I am wondering why the tstats command alters timestamps when I run it by _time:

    | tstats values(text_len) as text_len values(ts) as ts
        where index=data sourcetype=cdr
        by _time thread_id num_attempts

Raw data in index=data has this timestamp:

    _time: 2023-03-22T13:14:16.000+01:00

After tstats:

    2023-03-22 13:10:00
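As far as I know, tstats groups _time into buckets by default (the span depends on the search time range), which is why 13:14:16 collapses to 13:10:00. A sketch that asks for second-level granularity by adding span to the by clause:

    | tstats values(text_len) as text_len values(ts) as ts
        where index=data sourcetype=cdr
        by _time span=1s thread_id num_attempts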
I am trying to send data from Salesforce to Splunk using the HTTP POST method, but I am getting an error saying "invalid certificate". I tried to add the certificate manually but was unable to do so. Is there a problem with the certificates, and how can it be fixed?
I am getting log file data from some Linux boxes, but others are not sending data, and I am unable to find the reason why. Please assist me with this.

Input stanzas:

    [monitor:///var/log]
    disabled = 0
    index = unix_data

    [monitor:///var/adm]
    disabled = 0
    index = unix_data

    [monitor:///etc]
    disabled = 0
    index = unix_data

    RHEL 6.9 ----------> not working
    RHEL 7.4 ----------> not working
    SLES 11 -----------> working
    HP-UX 11.31 -------> not working
    HP-UX 11.31 -------> not working
    Solaris 10 --------> working
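A first check worth trying, assuming the affected forwarders at least ship their own internal logs (the host value below is a placeholder):

    index=_internal host="rhel6-host" source=*splunkd.log* (ERROR OR WARN)
    | stats count by component, log_level

If nothing comes back at all for those hosts, the problem is more likely connectivity or outputs.conf than the monitor stanzas themselves.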
Hi, greetings! I am attempting to use Splunk ES functionality with a test index. After creating a correlation search, I added a trigger action to create a notable event on the search head (SH). Any ideas on how to troubleshoot this, or what might be wrong, would be greatly appreciated.
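Two checks I would start with here (sketches; the search name is a placeholder). First, confirm whether notables are being written at all:

    index=notable search_name="<your correlation search name>"

Then confirm the correlation search is actually firing on schedule:

    index=_internal sourcetype=scheduler savedsearch_name="<your correlation search name>"
    | stats count by status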
Hi, we are trying to migrate our Splunk Enterprise deployment from AWS servers to on-premises servers. Is there any documentation for migrating Splunk from AWS servers to on-prem servers? Thank you.
Hi, after some advice please. I am using a left join with max=0, as I need to find some events over a 24-hour period. However, a user may have more than one event in the subsearch, and I need to match on the event closest in time to my main search. I'm not sure what the best approach is to make that match. Lee
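One join-free pattern that may fit, assuming both event sets share a user field and can be distinguished by sourcetype (all the names below are placeholders): sort everything by time, carry the last subsearch timestamp forward per user with streamstats, and compute the gap:

    index=main_index (sourcetype=main_st OR sourcetype=sub_st) earliest=-24h
    | eval sub_time=if(sourcetype=="sub_st", _time, null())
    | sort 0 user _time
    | streamstats last(sub_time) AS nearest_sub_time by user
    | where sourcetype=="main_st"
    | eval gap_seconds=abs(_time - nearest_sub_time)

As written this only matches the closest preceding subsearch event; matching the closest in either direction would need a second pass in the opposite sort order.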
Hello, I want to extract fields from the log format below. Can someone please help?

Log format:

    2023-03-21 04:14:13.859, queue_name:stream-AccountProfile, messages: 16, bytes: 13 KiB, actCusumers: 4, numSubjects: 1
    2023-03-21 04:14:13.859, queue_name:stream-SampleProfile, messages: 3,522, bytes: 2.4 MiB, actCusumers: 4, numSubjects: 1

The fields I want to extract are queue name, messages, actCusumers, and numSubjects. I am using the eval commands below, but it looks like I am not getting all the logs, and I am also getting duplicate events. I want to extract only the latest ones.

Query:

    | eval ArrayAttrib=split(_raw,",")
    | eval numSubjects=mvindex(split(mvindex(ArrayAttrib,-1), ": "),1)
    | eval actConsumers=mvindex(split(mvindex(ArrayAttrib,-2), ": "),1)
    | eval bytes=mvindex(split(mvindex(ArrayAttrib,-3), ": "),1)
    | eval messages=mvindex(split(mvindex(ArrayAttrib,-4), ": "),1)
    | eval stream=mvindex(split(mvindex(ArrayAttrib,-5), ":"),1)
    | eval dtm=strftime(_time,"%Y-%m-%d %H:%M")
    | stats max(dtm) by stream numSubjects actConsumers bytes messages
    | fields "stream", "messages", "actConsumers", "numSubjects", "max(dtm)"
    | dedup "messages"
    | dedup "stream"
    | sort "stream"
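One thing worth noting: split(_raw, ",") also splits inside thousands-separated numbers such as "messages: 3,522", which shifts every mvindex for those events. A rex-based sketch that avoids the comma problem and keeps only the latest event per queue (field names are taken from the sample above; dedup keeps the first event it sees, so sort by _time first):

    | rex field=_raw "queue_name:(?<queue_name>[^,]+), messages: (?<messages>[\d,]+), bytes: (?<bytes>[^,]+), actCusumers: (?<actConsumers>\d+), numSubjects: (?<numSubjects>\d+)"
    | sort 0 -_time
    | dedup queue_name
    | table _time queue_name messages actConsumers numSubjects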
Hi, I'm using "Sendresults" Trigger Action on scheduled Reports. In my scheduled  report search job,  at the end I would create a field called "email_to". So when  scheduled search has some result... See more...
Hi, I'm using "Sendresults" Trigger Action on scheduled Reports. In my scheduled  report search job,  at the end I would create a field called "email_to". So when  scheduled search has some results, it will send to those addresses in "email_to" field. Anyway,  every addresses only can showed in "To" on mail. But I would like make some addresses showed in "CC" (carbon copy) on the mail. ***P.S. I don't want use BCC. Please help me to seperate them, thanks in advance. Sendresults reference: https://apps.splunk.com/app/1794/#/details  
Hello all, very new to Splunk. I am currently analyzing the old botsv1 dataset, and it is very interesting so far. I'm stuck when analyzing the Suricata logs. First, how do I identify a false positive or a false negative? Second, among the signatures that fired on the ransomware, how do I identify which one actually detected it? Thank you all for your comments.
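A starting sketch for the second question, assuming the botsv1 Suricata events use the usual eve-log field names (the index and sourcetype values are assumptions):

    index=botsv1 sourcetype=suricata event_type=alert
    | stats count min(_time) AS first_seen by alert.signature, alert.signature_id
    | sort first_seen

Comparing which signatures fired before, during, and after the known infection window is one practical way to separate the signature that genuinely caught the ransomware from incidental noise.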
Hi, I have a weird problem when I call the Splunk API 'https://localhost:8089/servicesNS/-/search/search/jobs?output_mode=json'. I can get results from it, but I also get the error messages below. However, there are no such lookup files in my report search, and I cannot find these lookups anywhere in my Splunk. Can someone help me point out the problem? Why do I get this error, and how can I fix it? Thanks a lot.

    "messages": [
        {
            "type": "ERROR",
            "text": "[Indexer_01_new,Indexer_11,Indexer_12,Indexer_13,Indexer_14,Indexer_16,Indexer_17,Indexer_18,Indexer_19,Indexer_20,Indexer_21,Indexer_22,Indexer_23,Indexer_24,Indexer_25,Indexer_26,Indexer_27,SearchHead_01] Could not load lookup=User_Account_With_AD"
        },
        {
            "type": "ERROR",
            "text": "[Indexer_01_new,Indexer_11,Indexer_12,Indexer_13,Indexer_14,Indexer_16,Indexer_17,Indexer_18,Indexer_19,Indexer_20,Indexer_21,Indexer_22,Indexer_23,Indexer_24,Indexer_25,Indexer_26,Indexer_27,SearchHead_01] Could not load lookup=Userauth_User_Account_With_AD"
        },
        {
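Errors like this usually come from some knowledge object (an automatic lookup, event type, or saved search) referencing a lookup definition that no longer exists or is not shared to the search's app/user context. A sketch for hunting down the reference via the REST API (endpoint paths are standard, but verify the returned fields in your version):

    | rest /servicesNS/-/-/data/transforms/lookups splunk_server=local
    | search title="*User_Account_With_AD*"

If that returns nothing, checking the automatic lookups defined in props.conf is the next step:

    | rest /servicesNS/-/-/data/props/lookups splunk_server=local
    | table title eai:acl.app eai:acl.sharing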
Hi Splunkers, I'm working on a dashboard panel where I have to create month-wise data, as shown in the screenshot. I have a field called "Age Group" and the corresponding month's data, but every month's data comes from an external lookup file. Example data:

    Age Group    Sept      July      Jun
    30-90        235       0         34
    90-180       1,757     2,168     3,467
    180+         19,374    20,534    12,661

I'm using the code below, but it's not actually working for me. Please help me with the logic to produce the chart as it appears in the screenshot. TIA.

    | inputlookup september.csv
    | stats count by "Age Group"
    | eval _time=strptime("2022-09-01","%Y-%m-%d")
    | append
        [| inputlookup july.csv
         | stats count by "Age Group"
         | eval _time=strptime("2022-07-01","%Y-%m-%d")]
    …
    | chart count by _time, "Age Group"
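One thing that stands out: stats count by "Age Group" counts rows in the CSV rather than using the numeric values stored in it. A sketch assuming each monthly CSV has an "Age Group" column and a numeric column named count (adjust the column name to whatever your lookups actually use):

    | inputlookup september.csv
    | eval month="2022-09"
    | append [| inputlookup july.csv | eval month="2022-07"]
    | append [| inputlookup june.csv | eval month="2022-06"]
    | chart sum(count) over month by "Age Group"

chart ... over month by "Age Group" produces one row per month and one series per age group, which is the shape a grouped column chart needs.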
Hey SMEs, does anyone have prior experience migrating existing QRadar data to Splunk? If you have any docs or anything else useful, please do share. Thanks in advance.
Hi, when I search on the search head, I am getting this error: [screenshot]. Monitoring detail of the indexer machine: [screenshot]. Can anyone help with this issue? I could not figure out what the problem is or where it lies. Thanks.
Hello, I have the inputs.conf for several log files as:

    [monitor:///u01/mnt/log-1/data/trafficmanager/access/*]
    index = myindex
    sourcetype = csvtype
    initCrcLength = 1048576

The log file names are structured as access_worker_*_YYYY_mm_dd.log, for example access_worker_5_03_21.log, access_worker_6_03_21.log, access_worker_5_03_20.log, etc. The stanza that I put in doesn't work, so I tried a specific file name, such as:

    [monitor:///u01/mnt/log-1/data/trafficmanager/access/access_worker_5_03_21.log]
    index = myindex
    sourcetype = csvtype
    initCrcLength = 1048576

Then the log was pulled in with no problem. The problem, as I see it, is that the way I use my wildcard somehow doesn't catch all the log files that I want to monitor. Can anyone point out how to fix this problem?
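One alternative worth trying: monitor the directory itself and restrict the files with a whitelist regex, which gives more explicit control than a trailing wildcard in the path. A sketch (the whitelist pattern is an assumption based on the file names above):

    [monitor:///u01/mnt/log-1/data/trafficmanager/access]
    whitelist = access_worker_.*\.log$
    index = myindex
    sourcetype = csvtype
    initCrcLength = 1048576

After changing inputs.conf, restarting the forwarder and checking splunkd.log for TailingProcessor messages about that path should confirm whether the files are being picked up.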
I have two types of events: one when the interface is down and one when it is up. It usually happens that the interface goes down and, after 10 seconds, comes back up:

* An event arrives telling me that the interface is down.
* Another event arrives telling me that the interface is up and that it was down for 10 seconds.

I would like to alert if the interface does not come back up within a period of 1 minute. I have tried several options, but I have not been able to make it alert.
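A sketch using the transaction command, assuming the events carry an interface field and contain the literal strings "interface down" / "interface up" (all the names here are placeholders for your actual data):

    index=network ("interface down" OR "interface up")
    | transaction interface startswith="down" endswith="up" maxspan=1m keepevicted=true
    | where closed_txn=0

keepevicted=true keeps transactions that never saw their closing "up" event, and closed_txn=0 filters down to exactly those, i.e. interfaces that went down and did not recover within a minute. Scheduling this every few minutes over a slightly overlapping window is one way to turn it into an alert.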
Hi there, I need to convert a large number of Classic dashboards to Dashboard Studio. They are used to break down quarterly reporting data and have some number of visualisations with a time-picker dropdown for quarters, e.g. Q1-Year, Q2-Year, Q3-Year, Q4-Year.

Using the built-in migration option and cloning the dashboards into the new Studio style, everything generally works except the custom time-token setup that was used on my Classic dashboards. As an example, the Classic boards have variations on the following time picker:

    <input type="dropdown" token="quarter">
      <label>Select Quarter</label>
      <choice value="Q1-22">Q1-22</choice>
      <choice value="Q2-22">Q2-22</choice>
      <choice value="Q3-22">Q3-22</choice>
      <choice value="Q4-22">Q4-22</choice>
      <choice value="Last4Quarters">Last4Quarters</choice>
      <change>
        <condition label="Q1-22">
          <set token="custom_earliest">-1y@y+0q</set>
          <set token="custom_latest">-1y@y+1q</set>
        </condition>
        <condition label="Q2-22">
          <set token="custom_earliest">-1y@y+1q</set>
          <set token="custom_latest">-1y@y+2q</set>
        </condition>
        <condition label="Q3-22">
          <set token="custom_earliest">-1y@y+2q</set>
          <set token="custom_latest">-1y@y+3q</set>
        </condition>
        <condition label="Q4-22">
          <set token="custom_earliest">-1y@y+3q</set>
          <set token="custom_latest">-1y@y+4q</set>
        </condition>
        <condition label="Last4Quarters">
          <set token="custom_earliest">-4q@q</set>
          <set token="custom_latest">now</set>
        </condition>
      </change>
      <default>Q4-22</default>
      <initialValue>Q4-22</initialValue>
    </input>

This worked fine, but upon migration the code becomes this:

    {
      "type": "input.dropdown",
      "title": "Select Quarter",
      "options": {
        "token": "quarter",
        "items": [
          { "value": "Q1-22", "label": "Q1-22" },
          { "value": "Q2-22", "label": "Q2-22" },
          { "value": "Q3-22", "label": "Q3-22" },
          { "value": "Q4-22", "label": "Q4-22" },
          { "value": "Q1-23", "label": "Q1-23" },
          { "value": "Q2-23", "label": "Q2-23" },
          { "value": "Q3-23", "label": "Q3-23" },
          { "value": "Q4-23", "label": "Q4-23" },
          { "value": "Last1Year", "label": "Last1Year" }
        ],
        "defaultValue": ""
      }
    }

Selecting a visualisation and its data source configuration, the code still seems to reference the custom tokens:

    {
      "type": "ds.search",
      "options": {
        "query": "index=report_summary source=quarterly-info | timechart span=1d count by source | eval Threshold = 100000000",
        "queryParameters": {
          "earliest": "$custom_earliest$",
          "latest": "$custom_latest$"
        }
      },
      "name": "viz1"
    }

But nothing ever loads; it's as if the tokens are disconnected or never set. I feel like I'm missing something here in the new style. Is it an issue with dynamically setting the tokens custom_earliest and custom_latest per dropdown item? Is this a common migration problem where there's a new token format that should be followed? Or am I missing something?
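If I understand Studio's behaviour correctly, a dropdown input can only set its own single token; there is no equivalent of Classic's <change>/<condition> blocks that set extra tokens such as custom_earliest and custom_latest, which would explain why those tokens are never populated after migration. One workaround sketch (an assumption, not an official pattern): encode the quarter's earliest time as the item value itself, and build latest by appending one quarter to the substituted token, since Splunk chains relative-time modifiers:

    {
      "type": "input.dropdown",
      "title": "Select Quarter",
      "options": {
        "token": "quarter",
        "items": [
          { "label": "Q1-22", "value": "-1y@y+0q" },
          { "label": "Q2-22", "value": "-1y@y+1q" },
          { "label": "Q3-22", "value": "-1y@y+2q" },
          { "label": "Q4-22", "value": "-1y@y+3q" }
        ],
        "defaultValue": "-1y@y+3q"
      }
    }

and in each data source:

    "queryParameters": {
      "earliest": "$quarter$",
      "latest": "$quarter$+1q"
    }

Here "$quarter$+1q" relies on plain string substitution producing a valid chained relative-time expression (e.g. "-1y@y+0q+1q"); the Last4Quarters option would need its own handling since it doesn't fit the one-quarter pattern.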