All Posts

Hi, we are collecting logs directly through UF and HEC into the indexer cluster. All inputs are defined on the Cluster Manager and the bundle is then applied to the indexers. Currently we are sending the data to the Splunk indexers as well as to another output group. The following is the config of outputs.conf under peer-apps:

[tcpout]
indexAndForward = true

[tcpout:send2othergroup]
server = .....
sslPassword = .....
sendCookedData = true

This config currently sends the same data to both outputs: indexing locally and then forwarding to the other group. Is there a way to keep some indexes indexed only locally and send others only to the other group? I tried using props and transforms with _TCP_ROUTING but it is not working at all. Thanks in advance!
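For reference, a minimal sketch of the props/transforms approach mentioned above, assuming the routing is applied where parsing happens (here the indexers, via the Cluster Manager bundle). The stanza names, index names and the sourcetype are placeholders to adapt; send2othergroup is the output group from the question:

transforms.conf:
[route_selected_indexes]
SOURCE_KEY = _MetaData:Index
REGEX = ^(index_a|index_b)$
DEST_KEY = _TCP_ROUTING
FORMAT = send2othergroup

props.conf:
[my_sourcetype]
TRANSFORMS-route_selected = route_selected_indexes

Note that _TCP_ROUTING only controls which tcpout group the events are forwarded to; with indexAndForward = true the local copy is still written for everything, so keeping some indexes out of local indexing would additionally need the selectiveIndexing / _INDEX_AND_FORWARD_ROUTING mechanism described in the Splunk routing and filtering documentation.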
Hi @gbam , I created this search (starting from a search from PS) to display active Correlation Searches with some information, as well as the Adaptive Response Actions:

| rest splunk_server=local count=0 /servicesNS/-/-/saved/searches
| where match('action.correlationsearch.enabled', "1|[Tt]|[Tt][Rr][Uu][Ee]")
| rename title as search_name, eai:acl.app as app, action.correlationsearch.annotations as frameworks, action.correlationsearch.label AS label, action.notable.param.security_domain AS security_domain, action.notable.param.severity AS severity, dispatch.earliest_time AS earliest_time, dispatch.latest_time AS latest_time, action.notable.param.drilldown_searches AS drilldown, alert.suppress AS throttle, alert.suppress.period AS throttle_period, alert.suppress.fields AS throttle_fields
| table search_name, app, description, frameworks, disabled, label, security_domain, actions, cron_schedule, earliest_time, latest_time, search, drilldown, throttle, throttle_period, throttle_fields
| spath input=frameworks
| rename mitre_attack{} as mitre_attack, nist{} as nist, cis20{} as cis20, kill_chain_phases{} as kill_chain_phases
| table app, search_name, label, description, disabled, security_domain, actions, cron_schedule, earliest_time, latest_time, throttle, throttle_period, throttle_fields
| sort label

You can create your own starting from this one, adapting it to your requirements. Ciao. Giuseppe
Hello, I'm using the transaction command to compute average duration and identify uncompleted transactions. Assuming only the events within the selected time range are taken into account by default, transactions which start within the selected time range of the search but end after it are counted as uncompleted transactions. How can I extend the search beyond the range for the uncompleted transactions?

StartSearchTime > StartTransaction > EndTransaction > EndSearchTime = OK
StartSearchTime > StartTransaction > EndSearchTime = true KO (the EndTransaction never happened)
StartSearchTime > StartTransaction > EndSearchTime > EndTransaction = false KO (the EndTransaction exists but can only be found after the selected time range)

Extending the EndSearchTime alone is not the solution: the service runs 24/7, so new transactions started within the extended slot would then end up with a potential EndTransaction out of the new range. Thanks for your help. Flo
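One possible sketch, not a drop-in solution: extend only the latest time of the search, keep keepevicted=true so open transactions survive, and then filter on the transaction start time so that only transactions beginning inside the original window are counted. The index, the correlation field txn_id, the START/END markers and the time values below are placeholders:

index=my_index earliest=-4h@h latest=now
| transaction txn_id startswith="START" endswith="END" keepevicted=true
| where _time < relative_time(now(), "-1h@h")
| eval status=if(closed_txn=1, "complete", "incomplete")
| stats avg(duration) as avg_duration, count by status

Because the window is only extended forward and the filter is on the transaction start time (the _time of a transaction is its earliest event), transactions that start in the extended slot are excluded, which addresses the 24/7 concern above.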
Currently, for asset correlation with IPs we have Infoblox, but that only works when we are on the company premises and the IP assigned to the asset is part of the company network. When someone works from home and the IP of the asset changes due to their personal internet connection, that IP does not get added to the asset lookup as it is not part of the Infoblox flow. I was thinking of maybe using Zscaler to add IP details for the asset, but if anyone has used a successful way to mitigate this, it would be helpful.
Hi @SATYENDRA.DAS, Your screenshot is cut off a little above where the list of agents should be visible. When I filter by the same results, I do see a list. 
Is there a way to run a search for all correlation searches and see their response actions? I want to see which correlation searches create notable events and which ones do not. For example, which ones only increase the risk score. I had hoped to use /services/alerts/correlationsearches, but it doesn't appear that endpoint exists anymore.
No luck as it throws an error saying "must NOT have additional properties" in JSON.
This just means that the script being called did not run to completion. I recently had a confusing problem that caused this exact error. I had a working design with a sendalert named "my_send_alert" which called a python script named "my_send_alert.py", which in turn called a shell script named "my_send_alert_alt.sh". It all worked great. So I cloned it to create a different one and it didn't work, giving this error. The problem ended up being that I named the shell script the same as the python script, and Splunk was SKIPPING the python script and calling the shell script directly! I simply changed the name of the shell script and all was well. So in summary, all three named the same does not work:

This DOES NOT work: my_send_alert (alert_actions.conf) -> my_send_alert.py -> my_send_alert.sh
This DOES work: my_send_alert (alert_actions.conf) -> my_send_alert.py -> my_send_alert_alt.sh
This should also work: my_send_alert (alert_actions.conf) -> my_send_alert.sh
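As an illustration of the working layout described above (the app name my_app is a placeholder; the point is only that the shell script's base name differs from the alert action and python script name):

$SPLUNK_HOME/etc/apps/my_app/
    default/alert_actions.conf      # defines the [my_send_alert] stanza
    bin/my_send_alert.py            # picked up and run for the alert action
    bin/my_send_alert_alt.sh        # invoked from the python script; different base name on purpose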
Hi @ITWhisperer, my requirement is to fetch the value from the latest event (even if I restrict the search to 30 mins). Example query:

index = events_prod_tio_omnibus_esa ( "SESE023" ) sourcetype=Log_mvs
| rex field=msg "(ADV|ALERT REACH)\s* (?<Nb_msg>[^\s]+)"
| rex field=msg "NB\s* (?<Msg_typ>[^\s]+)"
| table Nb_msg

Result: (screenshot of the table output)

I want to display the value of Nb_msg in the result if there is any event in the last 15 mins; if there is no event in the last 15 mins, then display the value "0" instead. Currently, with the query above, I am getting the Nb_msg value from all the events generated in the last 15 mins.
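A commonly used sketch for the "latest value, otherwise 0" requirement, appended to the search above (the appendpipe pattern is one possible approach; field names unchanged):

| stats latest(Nb_msg) as Nb_msg
| appendpipe [ stats count | where count=0 | eval Nb_msg=0 | fields Nb_msg ]
| table Nb_msg

When at least one event matches, stats returns a row and the appendpipe subsearch adds nothing; when no events match, the subsearch appends a single row with Nb_msg=0.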
Hi @LearningGuy, sorry, there was a misunderstanding: it isn't possible to update an index. It's possible to display the index data enriched with the phone number by a lookup. Otherwise, it's possible to save the events of the old index in a new one, also enriched with the phone number. Ciao. Giuseppe
Hi @LearningGuy, the only way to restrict access to any object is through roles. Ciao. Giuseppe
Hi @gcusello, if I am not an admin, is it possible to do the following? (refer to my main question) I want to allow ONLY my team within "App1" to have read and write access to the "Test" dashboard. Thanks
Hi @Roy_9 , you can create a custom input and you have to find the parsing rules. Ciao. Giuseppe
Hi @gcusello, there are logs for the Windows OneSettings service: "This service offers to report telemetry data back to MS about OS health, build info, etc. in order to keep the computer 'healthy'." We came across this setting recently. The logs are written to "Microsoft\Windows\Privacy-Auditing\" and they are in the Windows Event Log. I am not sure whether these events can be tracked using the Splunk Add-on for Windows, any thoughts on this? Thanks
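If the channel shows up in Event Viewer under Applications and Services Logs, the Splunk Add-on for Windows can generally collect it with a WinEventLog input stanza. A sketch, assuming (to be confirmed, e.g. with "wevtutil el") that the channel is named Microsoft-Windows-Privacy-Auditing/Operational; the index name is a placeholder:

inputs.conf:
[WinEventLog://Microsoft-Windows-Privacy-Auditing/Operational]
disabled = 0
index = wineventlog
renderXml = false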
Hi @gcusello, when you said it's possible to add a new field to past data in a summary index, is that a new entry/insert or an update? In my example, is it possible to update (not insert) the "Phone" field in the "test_1" past data (where _time/timestamp is in the past)? Do I need permission to perform an update to an index? I think I can only perform an insert, not a delete or an update. Your sample query is moving new data to the new index "test_2", not into the same "test_1" past data.

"if you have these information in a lookup, why do you need to save it in the index?" The main_index has a large set of data and it's very slow doing a lookup in the dashboard; that's why I filtered the necessary data and moved it to a summary index.

Past: index=main_index + csv data ===> index=summary report="test1"
Now: I updated the csv data with a phone field: index=main_index + csv data ===> index=summary report="test2"

Can I update (not insert) only the "phone" field from "test2" into "test1" with the past timestamp? Or can I update (not insert) only the "phone" field from "main_index + CSV" into "test1" with the past timestamp? index=main_index + csv data (NEW) ===> index=summary report="test1" (PAST timestamp)
Thank you
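For reference, a sketch of the search-time enrichment Giuseppe mentioned, which leaves the existing summary events untouched and only adds the phone field when searching (the lookup name phone_lookup.csv and the join field host are assumptions):

index=summary report="test1"
| lookup phone_lookup.csv host OUTPUT phone
| table _time, host, phone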
Hi @Dayalss , yes, there are seven apps for Qualys, two of them seem to be related to vulnerabilities. I'm not a Qualys expert, so I don't know which app is the one for your requirements. Ciao. Giuseppe
Hi, you mean another app?
If I understand correctly, then yes; you could use a single value visualization to display a number, you just need a search to calculate the number for you. The stats command can easily count the number of events returned by the search.
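For example, something along these lines (the index and filter are placeholders) produces a single number that a single value panel can display directly:

index=my_index sourcetype=my_sourcetype
| stats count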
Hi @Dayalss, check the other dashboards, I'm almost sure you'll find what you're searching for. Ciao. Giuseppe
Hi @gcusello, I have installed the Qualys Vulnerabilities app, but it does not fulfill our requirements. We need to build custom dashboards, but there is a data mismatch that we need to fix. Regards, Dayal