Hello Community, yesterday I realized that I can't reach my heavy forwarder. I already tried restarting the splunkd service, but I still can't get web access. My Splunk is running on Windows Server 2019. Can someone please help me? I don't know which log I should check, or what the usual first steps are. Thank you for your help.
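A reasonable set of first steps (assuming the default Windows install path; adjust if yours differs) is to confirm splunkd and the web port are actually up, then scan Splunk's internal logs for errors:

REM Check the service and confirm Splunk Web's default port is listening
"C:\Program Files\Splunk\bin\splunk.exe" status
netstat -ano | findstr :8000

REM Scan the internal logs for recent errors
findstr /i "ERROR FATAL" "C:\Program Files\Splunk\var\log\splunk\splunkd.log"
findstr /i "ERROR" "C:\Program Files\Splunk\var\log\splunk\web_service.log"

splunkd.log and web_service.log (both under var\log\splunk) are usually the first places to look when Splunk Web is unreachable.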
Hi, I am getting errors similar to the one below for 5 inputs.conf.spec stanzas:

03-22-2023 09:03:52.484 +0000 WARN SpecFiles [45520 ConfReplicationThread] - Found parameter "python.version" inside "/apps/splunk/splunk/etc/apps/splunk_app_soar/README/inputs.conf.spec", scheme "audit://", but this parameter will be ignored as it does not contain the correct sequence of characters (a parameter name must match the regex "([0-9a-zA-Z][0-9a-zA-Z_-]*)").

The 5 stanzas are:
Splunk_TA_paloalto/README/inputs.conf.spec", scheme "iot_security://"
TA-tenable/README/inputs.conf.spec", scheme "tenable_io://"
TA-tenable/README/inputs.conf.spec", scheme "tenable_securitycenter://"
TA-tenable/README/inputs.conf.spec", scheme "tenable_securitycenter_mobile://"
splunk_app_soar/README/inputs.conf.spec", scheme "audit://"

The definitions for python.version in each stanza are:
Splunk_TA_paloalto/README/inputs.conf.spec: [] python.version = python3
TA-tenable/README/inputs.conf.spec: [tenable_io://] python.version = python3
TA-tenable/README/inputs.conf.spec: [tenable_securitycenter://] python.version = python3
TA-tenable/README/inputs.conf.spec: [tenable_securitycenter_mobile://] python.version = python3
splunk_app_soar/README/inputs.conf.spec: [audit://] python.version = {default|python|python2|python3}

All the definitions for python.version seem to match the regex requirement stated in the warning. I also have other spec files with python.version defined in the same way that are not causing these messages:
system/README/inputs.conf.spec: [script://] python.version = {default|python|python2|python3}
TA-MS-AAD/README/alert_actions.conf.spec: [dismiss_azure_alert] python.version = python3

Does anyone have any ideas how to stop these messages being generated?
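One observation that may explain this (an observation, not an official fix): the character class in the quoted regex, [0-9a-zA-Z_-], contains no literal dot, so a parameter named python.version can never match it in full. A quick SPL sketch to see this:

| makeresults
| eval name="python.version"
| eval matches_regex=if(match(name, "^[0-9a-zA-Z][0-9a-zA-Z_-]*$"), "yes", "no")

Why the system/README and TA-MS-AAD specs don't trigger the same warning isn't clear from this alone; it may depend on how each spec file is parsed.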
Hey, I need to build a report that covers approximately 500 thousand events. The requirement is that the report will contain three rows: I need to count whether httpStatus is OK or not, and classify each eventId in its proper position. (The requirement is a minimal number of rows; I can't duplicate or have more than 10 rows.) So basically the report looks like this: I have a uri column that contains all of my desired info, and all of my calculations of median, avg, percentage etc. are based on the time field, as follows:

|*MY SEARCH *
|stats count(request.uri) as totalCount values(uri) as uri values(timeTaken.total) as newTime perc95(timeTaken.total) as prec95 perc5(timeTaken.total) as prec5 median(timeTaken.total) as med avg(timeTaken.total) as average max(date) as maxDate min(date) as minDate values(timeTaken.total) as time by status
|table uri totalCount prec95 prec5 med average status maxDate minDate time

Now my question is: I need to add a new line of totals, based on the other lines. Because I'm using functions such as avg, median etc., I don't think I can use |addtotals. A very important note is that the values in the time and uri columns are not distinct; they can appear more than once, which makes my calculations wrong, so I can't base a following stats command on the previous one. I've tried using list, but it has a limit of 100 values, and I have hundreds of thousands. What can I do to add another total row that calculates over all of my events? I've tried adding |appendpipe this way, based on the results from the stats command, but of course I got wrong values (because the time field is not distinct, and the values shown by stats are distinct). This is my report after adding the total calculation (which didn't work):

|*MY SEARCH *
|stats count(request.uri) as totalCount values(uri) as uri values(timeTaken.total) as newTime perc95(timeTaken.total) as prec95 perc5(timeTaken.total) as prec5 median(timeTaken.total) as med avg(timeTaken.total) as average max(date) as maxDate min(date) as minDate values(timeTaken.total) as time by status
|appendpipe [stats sum(totalCount) as totalCount values(uri) as uri values(newTime) as newTime perc95(time) as prec95 perc5(time) as prec5 median(time) as med avg(time) as average| eval status="TOTAL"]
|table uri totalCount prec95 prec5 med average status maxDate minDate time

I really hope that I've made my question clear. Thanks in advance.
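One pattern that sidesteps this entirely (a sketch reusing the field names above): duplicate every event into an extra "TOTAL" group before the stats command, so the TOTAL row's percentiles, median, and average are computed from the raw events in the same pass rather than from already-aggregated distinct values:

|*MY SEARCH *
| eval status_group=mvappend(status, "TOTAL")
| mvexpand status_group
| stats count(request.uri) as totalCount perc95(timeTaken.total) as prec95 perc5(timeTaken.total) as prec5 median(timeTaken.total) as med avg(timeTaken.total) as average max(date) as maxDate min(date) as minDate by status_group

Note that mvexpand doubles the event count flowing into stats, which may matter at 500k events.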
Going through the documentation for the prompt block, I see there is a way to send the prompt to the dynamic role "Playbook run owner"; however, I am not seeing it as an option in the "User or Role" drop-down in my prompt block configuration panel. Is this an error, and if not, is there some other way to send the prompt to the user who ran the playbook?
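As a heavily hedged workaround sketch (untested; the API details here are assumptions to verify against your SOAR version): in a custom-code block you may be able to fetch the user who launched the run and pass it to the prompt call yourself:

# hypothetical sketch inside playbook custom code
user_id = phantom.get_effective_user()  # id of the user running the playbook (verify the return type)
phantom.prompt2(container=container, user=user_id, message="Approve?", respond_in_mins=30, name="prompt_1")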
I have an AWS ECS cluster configured with Splunk logging and splunk-format: raw in the task definition, like below:

{
  "logConfiguration": {
    "logDriver": "splunk",
    "secretOptions": [
      { "valueFrom": "myarn", "name": "splunk-token" }
    ],
    "options": {
      "splunk-url": "my-splunk-url",
      "splunk-source": "my-splunk-source",
      "splunk-format": "raw"
    }
  }
}

All my dashboards in Splunk expect this format. The messages are getting truncated at 4 KB. Changing the format to inline does not truncate the messages, but using the new format would require a lot of rework in the Splunk dashboards. Is there a way to get this to work with splunk-format: raw without messages getting truncated?
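One hedged workaround (a sketch, not a confirmed fix): keep splunk-format: inline so nothing is truncated, and strip the inline JSON wrapper at index time so dashboards still see the raw message. The sourcetype name and the wrapper's exact field order below are assumptions:

# props.conf on the indexer/heavy forwarder
[ecs:container]
SEDCMD-unwrap = s/^\{"line":"(.*)","source":.*\}$/\1/

A message containing escaped quotes or embedded JSON would need a more careful expression than this naive one.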
Hello there, To keep it simple, I am trying to figure out how to make one alert depend on another alert. Imagine triggering an alert because there is "fail" in some event, but if on the same day there is "success" from the same source, the first alert would be closed and the "success" would be alerted instead. Am I making any sense? Can anyone help? If it matters, I am using the Alert Manager add-on. Cheers,
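One way to express this in a single alert (a sketch; the index name and match strings are placeholders): search both outcomes for the day and only fire for sources whose most recent event is still a fail:

index=my_index ("fail" OR "success") earliest=@d latest=now
| eval outcome=if(searchmatch("success"), "success", "fail")
| stats latest(outcome) as last_outcome by source
| where last_outcome="fail"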
Requirement: I have a ton of events and I need to create an alert that keeps monitoring my job for the number of events it processed in the last 1 hour. It should alert whenever the event count exceeds a specific threshold. I have framed the query below, but it is not showing results at all, even when there are results to be shown.

index=myIndex "myJob" earliest=-1h latest=now
| stats count as eventsCount by _time
| where eventsCount > 5000

Where am I making a mistake? Please help.
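The likely culprit is the by _time clause: stats groups by each event's raw timestamp, which is often unique per event, so no single group ever reaches 5000. To count the whole hour, drop the by clause (or bin _time first for per-bucket counts). A sketch:

index=myIndex "myJob" earliest=-1h latest=now
| stats count as eventsCount
| where eventsCount > 5000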
Hi There, I am new to Splunk and have data coming in from just one server. I have tried running the basic brute force detection search, and receive thousands of events. I don't think this is accurate and thus feel as though I must have misconfigured something, somewhere. I'm not sure where I should begin to look. Any help would be appreciated, Jamie
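For comparison, a typical brute-force search shape applies a threshold per source/user rather than returning every failed event; the field names here assume CIM-style authentication data and are an assumption:

index=* tag=authentication action=failure
| bin _time span=10m
| stats count by _time, src, user
| where count > 20

If the raw search returns every failure without a threshold, thousands of events from one server can be normal rather than a misconfiguration.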
Hi, I am looking for a solution to a problem that has been addressed here: Using a column of field names to dynamically select fields for use in eval expression, but with this difference. The original solution was:

| makeresults
| eval raw="x1 y1 z1 field1 x1::x2 y2 z2 field3 z2::x3 y3 z3 field2 y3"
| makemv delim="::" raw
| mvexpand raw
| rex field=raw "(?<field1>\S+)\s+(?<field2>\S+)\s+(?<field3>\S+)\s+(?<lookup>\S+)\s+(?<expected_result>\S+)"
| fields - raw _time
| rename COMMENT AS "Everything above fakes sample data; everything below is your solution."
| eval result="N/A"
| foreach field* [eval result=if(lookup="<<FIELD>>", $<<FIELD>>$, result)]

The | foreach... command there uses field* as the set of input fields. In my case, however, the set of input fields cannot be described by a wildcard; there are a lot of field names in my input "list". I decided to create a multivalue field with all the values in the lookup column:

| eventstats values(lookup) as mv_lookup

That created the mv field mv_lookup, which I want to use as input for the | foreach... command:

| foreach mode=multivalue mv_lookup [eval result=if(lookup="<<FIELD>>", $<<FIELD>>$, result)]

I guess that if the foreach command input is an MV field, I have to use <<ITEM>> instead of <<FIELD>>, and that is the reason there is no match in lookup="<<FIELD>>". Is there any way to use an MV list of values (names of fields) to perform the requested lookup? Thanks in advance. David
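A sketch of the <<ITEM>> form (mode=multivalue requires a recent Splunk version; the single quotes turn the substituted item text back into a field reference):

| eventstats values(lookup) as mv_lookup
| eval result="N/A"
| foreach mode=multivalue mv_lookup [eval result=if(lookup="<<ITEM>>", '<<ITEM>>', result)]

Because <<ITEM>> is substituted as literal text before the eval runs, '<<ITEM>>' evaluates to the value of the field whose name matches the current item.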
I have several of our application teams asking whether we are able to integrate AppDynamics with the third-party application Snowflake. Has anyone tried this, or been successful in integrating the two tools?
Hi, I have a lookup with two fields:

| inputlookup ID-Client-Lookup.csv | fields ClientId ClientName

I have a base search:

sourcetype="oxygen-standard"
| regex AllClientIDs="^[a-z0-9A-Z]{2,}$"
| stats count by AllClientIDs

I want a query that will take each ClientID from my base search, search for it in the lookup, and give me the Client Names. Can anyone help?
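The lookup command can do this directly, mapping each AllClientIDs value against the lookup's ClientId column (assuming the values match exactly). A sketch:

sourcetype="oxygen-standard"
| regex AllClientIDs="^[a-z0-9A-Z]{2,}$"
| stats count by AllClientIDs
| lookup ID-Client-Lookup.csv ClientId AS AllClientIDs OUTPUT ClientName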
I am wondering why the tstats command alters timestamps when I run it with a by _time clause.

| tstats values(text_len) as text_len values(ts) as ts where index = data sourcetype = cdr by _time thread_id num_attempts

Raw data in index=data has _time values like 2023-03-22T13:14:16.000+01:00; after tstats, the same event shows _time as 2023-03-22 13:10:00.
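When tstats groups by _time it snaps timestamps to a default span derived from the search range unless told otherwise, which would explain 13:14:16 landing in a 13:10:00 bucket. Adding an explicit span should preserve the resolution you need:

| tstats values(text_len) as text_len values(ts) as ts where index=data sourcetype=cdr by _time span=1s thread_id num_attempts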
I am trying to send data from Salesforce to Splunk using an HTTP POST, but I am getting an error saying the certificate is invalid. I tried to add the certificate manually but was unable to do so. Is there a problem with the certificates, and how can it be fixed?
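Salesforce outbound callouts generally require the endpoint to present a complete chain from a publicly trusted CA; self-signed certificates are typically rejected. A quick way to inspect what the Splunk endpoint actually presents (hostname and port are placeholders, 8088 being the HEC default):

openssl s_client -connect my-splunk-url:8088 -showcerts

If the chain is self-signed or missing intermediates, the fix is usually on the Splunk side (install a CA-signed certificate for HEC/web) rather than in Salesforce.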
I am getting log file data from some Linux boxes, but some are not sending data, and I am unable to find the reason why. Please assist.

Input stanza:

[monitor:///var/log]
disabled = 0
index = unix_data

[monitor:///var/adm]
disabled = 0
index = unix_data

[monitor:///etc]
disabled = 0
index = unix_data

RHEL 6.9 -----------------> not working
RHEL 7.4 -----------------> not working
SLES 11 ------------------> working
HP-UX 11.31 --------------> not working
HP-UX 11.31 --------------> not working
Solaris 10 ---------------> working
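On one of the silent hosts it may help to confirm what the forwarder thinks it is monitoring and whether it logs permission or connectivity errors (paths assume a default *nix install):

$SPLUNK_HOME/bin/splunk btool inputs list --debug
$SPLUNK_HOME/bin/splunk list inputstatus
grep -iE "error|insufficient permissions|tailreader" $SPLUNK_HOME/var/log/splunk/splunkd.log

Also worth checking: the forwarder version installed on the older platforms (RHEL 6.9, HP-UX 11.31); very old OS releases are only supported by correspondingly old Universal Forwarder builds.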
Hi, greetings! I am attempting to exercise Splunk ES functionality using a test index. After creating a correlation search, I added a trigger action to create a notable event on the search head (SH). Any ideas on how to troubleshoot this, or what might be wrong, would be greatly appreciated.
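Two standard checks (generic ES plumbing, with a placeholder for your search name): confirm the correlation search actually runs on schedule, and see whether anything lands in the notable index:

index=_internal sourcetype=scheduler savedsearch_name="<your correlation search>"
| table _time savedsearch_name status result_count

index=notable
| stats count by search_name

If the scheduler shows runs with results but index=notable stays empty, the trigger action configuration is the place to look; if the scheduler shows zero results, the search itself is.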
Hi, we are trying to migrate our Splunk Enterprise deployment from AWS servers to on-premise servers. Is there any documentation for migrating Splunk from AWS servers to on-prem servers? Thank you.
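At a high level (a generic sketch, not official migration documentation): install the same Splunk version at the same path on the new host, stop Splunk on both sides, and copy the configuration and index data across, for example:

# on the old (AWS) host
$SPLUNK_HOME/bin/splunk stop
tar -czf splunk-etc.tgz -C $SPLUNK_HOME etc
# copy the archive plus the index directories (default: $SPLUNK_HOME/var/lib/splunk) to the new host, unpack, then start Splunk

Licensing, server names, and any forwarder outputs pointing at the old IPs would need updating afterwards.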
Hi, after some advice please. I am using a left join with max=0, as I need to find some events over a 24-hour period; however, a user may have more than one event in the subsearch, and I need to match on the time closest to my main search. I'm not sure what the best approach is to make that match. Lee
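One sketch of a "closest match" after a max=0 join: keep every pairing, compute the absolute time difference, and keep only the smallest per main event (event_id and sub_time are hypothetical names; substitute your own):

index=main ...
| join type=left max=0 user
    [ search index=other ... | rename _time as sub_time | fields user sub_time ]
| eval delta=abs(_time - sub_time)
| sort 0 event_id delta
| dedup event_id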
Hello, I want to extract fields from the log format below. Can someone please help? Log format:

2023-03-21 04:14:13.859, queue_name:stream-AccountProfile, messages: 16, bytes: 13 KiB, actCusumers: 4, numSubjects: 1
2023-03-21 04:14:13.859, queue_name:stream-SampleProfile, messages: 3,522, bytes: 2.4 MiB, actCusumers: 4, numSubjects: 1

The fields I want to extract are queue name, messages, actCusumers, and numSubjects. I am using the eval commands below, but it looks like I am not getting all logs, and I am also getting duplicate events; I want to extract only the latest ones. Query:

| eval ArrayAttrib=split(_raw,",")
| eval numSubjects=mvindex(split(mvindex(ArrayAttrib,-1) ,": "),1)
| eval actConsumers=mvindex(split(mvindex(ArrayAttrib,-2) ,": "),1)
| eval bytes=mvindex(split(mvindex(ArrayAttrib,-3) ,": "),1)
| eval messages=mvindex(split(mvindex(ArrayAttrib,-4) ,": "),1)
| eval stream=mvindex(split(mvindex(ArrayAttrib,-5) ,":"),1)
| eval dtm=strftime(_time,"%Y-%m-%d %H:%M")
| stats max(dtm) by stream numSubjects actConsumers bytes messages
| fields "stream", "messages", "actConsumers", "numSubjects", "max(dtm)"
| dedup "messages"
| dedup "stream"
| sort "stream"
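Two notes that may help. First, the missing logs are likely caused by the comma split itself: values like messages: 3,522 contain a comma, which shifts every mvindex offset for that event. A rex-based sketch avoids that, and a dedup on the queue name (results arrive newest first) keeps only the latest event per queue:

| rex "queue_name:(?<queue_name>[^,]+), messages: (?<messages>[\d,]+), bytes: (?<bytes>[^,]+), actCusumers: (?<actConsumers>\d+), numSubjects: (?<numSubjects>\d+)"
| dedup queue_name
| table _time queue_name messages actConsumers numSubjects

(actCusumers is spelled as it appears in the raw logs.)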
Hi, I'm using the "Sendresults" trigger action on scheduled reports. At the end of my scheduled report's search, I create a field called "email_to", so that when the scheduled search has results, it sends them to the addresses in the "email_to" field. However, all addresses can only appear in "To" on the mail, and I would like some addresses to appear in "CC" (carbon copy) instead. P.S. I don't want to use BCC. Please help me to separate them; thanks in advance. Sendresults reference: https://apps.splunk.com/app/1794/#/details
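If sendresults itself has no CC option, the built-in sendemail command does support separate to and cc recipients (addresses below are placeholders), which may be an alternative:

... | sendemail to="ops@example.com" cc="manager@example.com" subject="Scheduled report" sendresults=true inline=true

The trade-off is that sendemail takes addresses as command arguments, so a per-result email_to field would need different handling (e.g. $result.email_to$ tokens in a standard email alert action).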
Hello all, I'm very new to Splunk. I'm currently analyzing the old botsv1 dataset, and it's very interesting so far, but I'm stuck when analyzing the Suricata logs. First, how do I identify a false positive or a false negative? Second, among the signatures that flagged a ransomware, how do I identify which one actually detected the ransomware? Thank you all for your comments.
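For the second question, a starting point (the sourcetype and field names follow the usual Suricata eve-JSON shape in botsv1 and are assumptions) is to rank which signatures fired and then inspect each around the infection window:

index=botsv1 sourcetype=suricata event_type=alert
| stats count by alert.signature, alert.signature_id
| sort - count

A signature is effectively a false positive if the traffic it flagged turns out to be benign on inspection; a false negative is malicious activity visible in other sources (e.g. stream or host logs) that no signature flagged.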