All Posts

| eval correlation1=coalesce(ID_1_A, ID_2_A)
| eval correlation2=coalesce(ID_1_B, ID_2_B)
| eventstats values(index1data) as index1data, values(index2data) as index2data by correlation1 correlation2
| eval correlation1=coalesce(ID_1_A, ID_2_A)
| eval correlation2=coalesce(ID_1_B, ID_2_C)
| eventstats values(index1data) as index1data, values(index2data) as index2data by correlation1 correlation2
| eval correlation1=coalesce(ID_1_A, ID_2_B)
| eval correlation2=coalesce(ID_1_B, ID_2_A)
| eventstats values(index1data) as index1data, values(index2data) as index2data by correlation1 correlation2
| eval correlation1=coalesce(ID_1_A, ID_2_B)
| eval correlation2=coalesce(ID_1_B, ID_2_C)
| eventstats values(index1data) as index1data, values(index2data) as index2data by correlation1 correlation2
| eval correlation1=coalesce(ID_1_A, ID_2_C)
| eval correlation2=coalesce(ID_1_B, ID_2_A)
| eventstats values(index1data) as index1data, values(index2data) as index2data by correlation1 correlation2
| eval correlation1=coalesce(ID_1_A, ID_2_C)
| eval correlation2=coalesce(ID_1_B, ID_2_B)
| eventstats values(index1data) as index1data, values(index2data) as index2data by correlation1 correlation2
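A minimal sketch of the base search these six passes assume, unioning both indexes first (the index names and the index1data/index2data payload fields are placeholders from this thread; substitute your own):

(index=index1) OR (index=index2)
| eval correlation1=coalesce(ID_1_A, ID_2_A)
| eval correlation2=coalesce(ID_1_B, ID_2_B)
| eventstats values(index1data) as index1data, values(index2data) as index2data by correlation1 correlation2

coalesce works here because each event comes from only one of the two indexes, so exactly one of the two ID fields is non-null per event; eventstats then copies the payload values across every event sharing the same correlation pair.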
Yes, I have added it. Please find the complete source code below.

{
    "type": "ds.search",
    "options": {
        "query": "index = index host=hostname source=\"/var/log/history-*.log\" servername | table Websphere GUI \n| eval Websphere=if(Websphere=\"0\",\"UP\",\"DOWN\")\n| eval GUI=if(GUI=\"0\",\"UP\",\"DOWN\")",
        "queryParameters": {
            "earliest": "-10m@m",
            "latest": "now"
        },
        "refresh": "10m",
        "refreshType": "delay"
    },
    "options": {
        "columnFormat": {
            "Websphere": {
                "rowBackgroundColors": "> table | seriesByName(\"Websphere\") | matchValue(WebsphereColumnFormatConfig)"
            }
        }
    },
    "context": {
        "WebsphereColumnFormatConfig": [
            { "match": "DOWN", "value": "#FF0000" },
            { "match": "UP", "value": "#00FF00" }
        ]
    },
    "name": "DC Web Server _ search"
}
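For what it's worth, in Dashboard Studio the columnFormat options and their context normally live on the table visualization rather than on the ds.search data source; a second "options" object on the data source, as above, is a common cause of the "must NOT have additional properties" validation error. A sketch of the split (the visualization id and the dataSources wiring below are hypothetical):

{
    "visualizations": {
        "viz_dc_web_server": {
            "type": "splunk.table",
            "dataSources": { "primary": "ds_dc_web_server" },
            "options": {
                "columnFormat": {
                    "Websphere": {
                        "rowBackgroundColors": "> table | seriesByName(\"Websphere\") | matchValue(WebsphereColumnFormatConfig)"
                    }
                }
            },
            "context": {
                "WebsphereColumnFormatConfig": [
                    { "match": "DOWN", "value": "#FF0000" },
                    { "match": "UP", "value": "#00FF00" }
                ]
            }
        }
    }
}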
Hi @Jananie.Rajeshwari, Thanks for sharing your concern. I've shared this with the team. We'll get back to you soon. 
Try the appendpipe as I suggested:

index = events_prod_tio_omnibus_esa ( "SESE030" ) sourcetype=Log_mvs
| rex field=msg "(ADV|ALERT REACH)\s* (?<Nb_msg>[^\s]+)"
| stats latest(Nb_msg) as Back_log
| appendpipe
    [| stats count
     | where count=0
     | rename count as Back_log]
| table Back_log
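To see why this returns 0: appendpipe runs its subsearch against the current result set, so when stats produces no rows the subsearch still emits count=0, which the where keeps and the rename turns into Back_log. A self-contained sketch you can paste into any search bar (the where Nb_msg > 100 line just simulates "no events matched"):

| makeresults count=1
| eval Nb_msg=42
| where Nb_msg > 100
| stats latest(Nb_msg) as Back_log
| appendpipe
    [| stats count
     | where count=0
     | rename count as Back_log]
| table Back_log

This returns a single row with Back_log=0; drop the where line and it returns Back_log=42 instead.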
Hi Splunk Community,

I need help writing a Splunk query that joins two different indexes, using any Splunk command that satisfies the logic below.

Two separate ID fields in Index 2 must match two separate ID fields in Index 1, using any permutation of Index 2's three ID fields. Combine an Index 1 record with an Index 2 record into a single record when any of the following matching conditions is satisfied:

(ID_1_A=ID_2_A AND ID_1_B=ID_2_B) OR
(ID_1_A=ID_2_A AND ID_1_B=ID_2_C) OR
(ID_1_A=ID_2_B AND ID_1_B=ID_2_A) OR
(ID_1_A=ID_2_B AND ID_1_B=ID_2_C) OR
(ID_1_A=ID_2_C AND ID_1_B=ID_2_A) OR
(ID_1_A=ID_2_C AND ID_1_B=ID_2_B)

Sample data:

Index 1:
| ID_1_A | ID_1_B |
|--------|--------|
| 123    | 345    |
| 345    | 123    |

Index 2:
| ID_2_A | ID_2_B | ID_2_C |
|--------|--------|--------|
| 123    | 345    | 999    |
| 123    | 999    | 345    |
| 345    | 123    | 999    |
| 999    | 123    | 345    |
| 345    | 999    | 123    |

Any help would be greatly appreciated.

Thanks.
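If you want to prototype the matching logic without the real indexes, here is a minimal sketch that fakes a few rows of each table with makeresults (format=csv needs Splunk 9.0 or later); you can then bolt the eval/eventstats passes from the answer earlier in this feed onto the end of it:

| makeresults format=csv data="ID_1_A,ID_1_B
123,345
345,123"
| append
    [| makeresults format=csv data="ID_2_A,ID_2_B,ID_2_C
123,345,999
123,999,345
345,123,999"]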
Extending the end time is the solution; you just then have to filter out any transactions which started after your required time period's end time. (How else are you going to find the ends of the transactions if you don't include those events in your search?)
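A sketch of what that looks like in practice, assuming a session_id field, START/END markers, and a known maximum transaction duration of 30 minutes (all of these names are hypothetical; adjust to your data). Extend the search window past your required period's end by that maximum duration, then keep only transactions whose start time falls inside the original period:

index=app_logs earliest=-24h@h latest=now
| transaction session_id startswith="START" endswith="END" maxspan=30m
| addinfo
| where _time <= info_max_time - 1800

addinfo exposes the search boundaries as info_min_time/info_max_time, and a transaction's _time is the time of its earliest event, so subtracting the 30-minute extension from info_max_time discards transactions that only started during the extension.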
Sorry, my query was not that. I will try to explain it again.

Query:

index = events_prod_tio_omnibus_esa ( "SESE030" ) sourcetype=Log_mvs
| rex field=msg "(ADV|ALERT REACH)\s* (?<Nb_msg>[^\s]+)"
| stats latest(Nb_msg) as Back_log

If no record is fetched in the last 15 minutes, it currently shows "No results found. Try expanding the time range." I would like to display the number 0 instead. Is that possible?
It looks like you didn't read what I had suggested properly, as you have missed the "options" key.
I am not sure I understand - if you restrict the search to the last 15 minutes, you will either get a number of events or none. If you want to determine how many events you have, you could do this:

index = events_prod_tio_omnibus_esa ( "SESE023" ) sourcetype=Log_mvs
| rex field=msg "(ADV|ALERT REACH)\s* (?<Nb_msg>[^\s]+)"
| rex field=msg "NB\s* (?<Msg_typ>[^\s]+)"
| table Nb_msg
| appendpipe [| stats count]
| table count
| where isnotnull(count)
Thank you very much!!!!
Hi @shadysplunker, did you follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.2.1/Forwarding/Routeandfilterdatad#Route_inputs_to_specific_indexers_based_on_the_data_input ? See "Perform selective indexing and forwarding". Ciao. Giuseppe
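In short, that docs section combines selectiveIndexing in outputs.conf with a per-input routing flag. A sketch, reusing the send2othergroup group from this thread (the monitor path is a placeholder, and the server/password elisions are kept as posted):

# outputs.conf (in the peer-apps bundle pushed from the Cluster Manager)
[indexAndForward]
index = true
selectiveIndexing = true

[tcpout]
defaultGroup = send2othergroup

[tcpout:send2othergroup]
server = .....

# inputs.conf -- only inputs carrying this flag are also indexed locally;
# all other inputs are forwarded to send2othergroup without local indexing
[monitor:///var/log/keep_local.log]
_INDEX_AND_FORWARD_ROUTING = local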
Hi,

We are collecting logs directly through UF and HEC in the indexer cluster. All inputs are defined on the Cluster Manager and the bundle is then applied to the indexers.

Currently we are sending the data to the Splunk indexers as well as to another output group. Following is the config of outputs.conf under peer-apps:

[tcpout]
indexAndForward = true

[tcpout:send2othergroup]
server = .....
sslPassword = .....
sendCookedData = true

This config currently sends the same data to both outputs: indexing locally and then forwarding to the other group. Is there a way to keep some indexes indexed locally and have others sent only to the other group? I tried using props and transforms with _TCP_ROUTING but it is not working at all.

Thanks in advance!
Hi @gbam,

I created this search (starting from a search from PS) to display active Correlation Searches with some information, including Adaptive Response Actions:

| rest splunk_server=local count=0 /servicesNS/-/-/saved/searches
| where match('action.correlationsearch.enabled', "1|[Tt]|[Tt][Rr][Uu][Ee]")
| rename title as search_name, eai:acl.app as app, action.correlationsearch.annotations as frameworks action.correlationsearch.label AS label action.notable.param.security_domain AS security_domain action.notable.param.severity AS severity dispatch.earliest_time AS earliest_time dispatch.latest_time AS latest_time action.notable.param.drilldown_searches AS drilldown alert.suppress AS throttle alert.suppress.period AS throttle_period alert.suppress.fields AS throttle_fields
| table search_name, app, description, frameworks, disabled label security_domain actions cron_schedule earliest_time latest_time search drilldown throttle throttle_period throttle_fields
| spath input=frameworks
| rename mitre_attack{} as mitre_attack, nist{} as nist, cis20{} as cis20, kill_chain_phases{} as kill_chain_phases
| table app, search_name, label, description, disabled, security_domain actions cron_schedule earliest_time latest_time throttle throttle_period throttle_fields
| sort label

You can create your own, starting from this and adapting it to your requirements.

Ciao.
Giuseppe
Hello,

I'm using the transaction command to compute average duration and identify uncompleted transactions. Since only the events within the selected time range are taken into account by default, transactions which start within the selected time range but end after it are counted as uncompleted transactions.

How can I extend the search beyond the range for the uncompleted transactions?

StartSearchTime > StartTransaction > EndTransaction > EndSearchTime = OK
StartSearchTime > StartTransaction > EndSearchTime = true KO (the EndTransaction never happened)
StartSearchTime > StartTransaction > EndSearchTime > EndTransaction = false KO (the EndTransaction exists but can only be found after the selected time range)

Extending the EndSearchTime is not the solution: as the service runs 24/7, new transactions started within the extended slot will in turn end up with potential EndTransactions outside the new range.

Thanks for your help.
Flo
Currently, for asset correlation with IPs, we have Infoblox, but that only works when we are on the company premises and the IP assigned to the asset is part of the company network. When someone works from home and the asset's IP changes to their personal internet connection, that IP does not get added to the asset lookup, as it is not part of the Infoblox flow. I was thinking of using Zscaler to add IP details for the asset, but if anyone has used a successful way to mitigate this, that would be helpful.
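One pattern that can work is to rebuild the asset-to-IP mapping on a schedule from whatever source still sees remote users, such as Zscaler web/authentication logs, and use it alongside the Infoblox-based lookup. A sketch - the index, sourcetype, and field names below are hypothetical and need adjusting to your environment:

index=zscaler sourcetype=zscalernss-web user=* src_ip=*
| stats latest(src_ip) as ip, latest(_time) as last_seen by user
| outputlookup remote_asset_ips.csv

Scheduled every few minutes, this maintains a user-to-current-IP table that correlation searches can consult when an IP isn't found in the Infoblox flow.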
Hi @SATYENDRA.DAS, Your screenshot is cut off a little above where the list of agents should be visible. When I filter by the same results, I do see a list. 
Is there a way to run a search for all correlation searches and see their response actions? I want to see which correlation searches create notable events and which ones do not - for example, which ones only increase the risk score. I had hoped to use /services/alerts/correlationsearches, however it doesn't appear that endpoint exists anymore.
No luck; it throws an error saying "must NOT have additional properties" in the JSON.
This just means that the script being called did not run to completion.

I recently had a confusing problem that caused this exact error. I had a working design where a sendalert named "my_send_alert" called a python script named "my_send_alert.py", which then called a shell script named "my_send_alert_alt.sh". It all worked great. So I cloned it to create a different one and it didn't work, giving this error. The problem turned out to be that I had named the shell script the same as the python script, and Splunk was SKIPPING the python script and calling the shell script directly! I simply changed the name of the shell script and all was well. So, in summary, all three named the same does not work:

This DOES NOT work: my_send_alert (alert_actions.conf) -> my_send_alert.py -> my_send_alert.sh
This DOES work: my_send_alert (alert_actions.conf) -> my_send_alert.py -> my_send_alert_alt.sh
This should also work: my_send_alert (alert_actions.conf) -> my_send_alert.sh
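If you want to take the name-based script resolution out of the equation entirely, alert_actions.conf lets you name the executable explicitly via alert.execute.cmd. A sketch, using the stanza and file names from the example above:

# alert_actions.conf
[my_send_alert]
is_custom = 1
label = My send alert
# point Splunk at the exact script (it must live in the app's bin/ directory),
# so a same-named .sh file can no longer shadow the python script
alert.execute.cmd = my_send_alert.py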