All Posts

Hi @AL3Z , no, you only have to define the asset (or the identity) in the correlation search. In other words, the results of your CS must contain an asset (or identity) field, and you point the risk score at that field. Ciao. Giuseppe
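A minimal sketch of what the tail of such a correlation search could look like, assuming the events carry a src field to use as the risk object (the index, field names, and threshold here are illustrative, not from the thread; risk_object, risk_object_type, and risk_score are the standard ES risk fields):

index=authentication_index action=failure
| stats count BY src
| where count > 20
| eval risk_object=src, risk_object_type="system", risk_score=50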
Hey, that SPL is good. But it has 99 <Data> sections and I'm getting regex backtracking errors on Regex101. Currently I have it like this in transforms.conf:

[test_xmldata_to_fields]
SOURCE_KEY = EventData_Xml
REGEX = (?ms)<Data>(.*?)<\/Data>
FORMAT = test_data::$1
MV_ADD = 1

And then in props.conf (a dirty one, but it's working for a start):

EVAL-t_process_name=mvindex(test_data,0)
EVAL-t_signature_name=mvindex(test_data,1)
EVAL-t_binary_description=mvindex(test_data,2)

Regarding the <Data> field, does it always have the same format (process_name, signature_name, binary_description)? * Yes

Sourcetype: I created my own and am just using Splunk_TA_Windows for the initial report to extract Data_Xml. Basically, it's a new sourcetype and I can do transforms and props as I like.
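If it helps to prototype the same extraction in search before committing to the config files, here is a sketch using rex with max_match (the index name is a placeholder; EventData_Xml and the t_* field names are taken from the config above):

index=your_windows_index EventData_Xml=*
| rex field=EventData_Xml max_match=0 "(?ms)<Data>(?<test_data>.*?)</Data>"
| eval t_process_name=mvindex(test_data,0), t_signature_name=mvindex(test_data,1), t_binary_description=mvindex(test_data,2)
| table t_process_name t_signature_name t_binary_description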
Does Splunk share a common user base among all Splunk products? Which API request fetches audit logs or events for Splunk users?
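On the audit part, a sketch of one common approach (not necessarily the full answer to the question above): user activity on a Splunk instance is recorded in the _audit index, and a search like the one below can also be submitted programmatically through the search jobs REST endpoint (/services/search/jobs):

index=_audit
| stats count BY user action
| sort - count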
@gcusello , why are we not seeing the alerts for the disabled CS using the above search?
Hi @AL3Z, yes, you can use it in a CS, but you can also use Notables. Anyway, as the action when an alert is triggered, you can assign a Risk Score to an asset or an identity instead of triggering an alert. Then you can define a threshold for the risk score, so you'll have a Notable when the risk score for an asset or an identity exceeds the threshold. Look at the Risk Score in the Actions of a Correlation Search and experiment a bit; I cannot guide you more. For more info see https://docs.splunk.com/Documentation/ES/7.2.0/RBA/Analyzerisk Ciao. Giuseppe
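A sketch of the threshold side, assuming the default ES risk index and the standard risk field names (risk_object, risk_score); the threshold of 100 is illustrative:

index=risk
| stats sum(risk_score) AS total_risk BY risk_object
| where total_risk > 100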
Can we use this CS in ES? Could you please guide me on how we could use the Risk Score for assets and identities?
When I try to use the below code to test the API search:

var context = new Context(Scheme.Https, "www.splunk.com", 443);
using (var service = new Service(context, new Namespace(user: "nobody", app: "search")))
{
    Run(service).Wait();
}

/// <summary>
/// Called when [search].
/// </summary>
/// <param name="service">The service.</param>
/// <returns></returns>
static async Task Run(Service service)
{
    await service.LogOnAsync("aaa", "bbb");
    //// Simple oneshot search
    using (SearchResultStream stream = await service.SearchOneShotAsync("search index=test_index | head 5"))
    {
        foreach (SearchResult result in stream)
        {
            Console.WriteLine(result);
        }
    }
}

it fails with the error message: XmlException: Unexpected DTD declaration. Line 1, position 3. Question: in this line: new Namespace(user: "nobody", app: "search") how do I define the "user" and "app" parameter values? I also tried this way: var service = new Service(new Uri("https://www.splunk.com")); but it still failed with the same error message.
Hi @AL3Z , the second search only lists the alerts, not the triggered ones. If you want the triggered alerts you have to use the first. If you want to use a threshold, please try this:

index=_audit action=alert_fired ss_app=*
| eval ttl=expiration-now()
| search ttl>0
| convert ctime(trigger_time)
| stats count BY ss_name severity
| where count>10

If you're using Enterprise Security, you don't need a Correlation Search like this; you could use the Risk Score for assets and identities instead, but that's too long to describe here. Ciao. Giuseppe
Hi, for Splunk, the earliest and latest fields are always in epoch format. But you can format them into readable strings using strftime(earliest, "%Y-%m-%dT%H:%M:%S") (strptime goes the other way, from string to epoch). Docs: https://docs.splunk.com/Documentation/SCS/current/SearchReference/DateandTimeFunctions#strptime.28.26lt.3Bstr.26gt.3B.2C_.26lt.3Bformat.26gt.3B.29 ------------ If this was helpful, some karma would be appreciated.
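A runnable sketch of the formatting: addinfo exposes the search's earliest/latest as info_min_time and info_max_time (epoch), which strftime then renders as strings:

| makeresults
| addinfo
| eval search_earliest=strftime(info_min_time, "%Y-%m-%dT%H:%M:%S"), search_latest=strftime(info_max_time, "%Y-%m-%dT%H:%M:%S")
| table search_earliest search_latest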
@gcusello , how can we set a threshold for the second search, so that if any of the CS alerts fires more than 10 times it triggers a notable?
Hi, below are the log details.

index=ABC sourcetype=logging_0

Below are the values of the "ErrorMessages" field:

invalid - 5 count
unprocessable - 7 count (5 invalid pair + 2 others)
no user found - 3 count
invalid message process - 3 count
process failed - 3 count

Now I have to eliminate ErrorMessage=invalid and ErrorMessage=unprocessable, then show all other ErrorMessage values. But the problem is that the "unprocessable" ErrorMessage shows up for other messages as well, so we cannot fully eliminate it. Whenever an "invalid" ErrorMessage is logged, an "unprocessable" ErrorMessage is also logged. So we need to eliminate only this pair, not every "unprocessable" ErrorMessage.

Expected result:

unprocessable - 2 count
no user found - 3 count
invalid message process - 3 count
process failed - 3 count

I tried a join on requestId, but it returns nothing, because I used | search ErrorMessage="Invalid" and then eliminated it in the next query, so it no longer searches for the other ErrorMessages.

Can someone please help.
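A sketch of one way to do this without a join, assuming requestId is the field that ties the pair together and the field is named ErrorMessage (both taken from the description above, which uses the names inconsistently): use eventstats to flag requestIds that also carry an "invalid" event, then drop only those "unprocessable" events.

index=ABC sourcetype=logging_0
| eventstats count(eval(ErrorMessage="invalid")) AS invalid_in_request BY requestId
| where ErrorMessage!="invalid" AND NOT (ErrorMessage="unprocessable" AND invalid_in_request>0)
| stats count BY ErrorMessage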
Hi @anandhalagaras1 , sorry, I didn't notice that the format of Timestamp was different from the other two; please try this:

<your_search>
| eval Timestamp=strptime(Timestamp,"%Y-%m-%dT%H:%M:%S.%6N%:z"), from=strptime("2023-12-13 00:00:00","%Y-%m-%d %H:%M:%S"), to=strptime("2023-12-13 23:59:59","%Y-%m-%d %H:%M:%S")
| where Timestamp>=from AND Timestamp<=to

Ciao. Giuseppe
Hi @EricMonkeyKing , good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi, what is the sourcetype applied by Splunk? Also, can you paste a complete event? Regarding the <Data> field, does it always have the same format (process_name, signature_name, binary_description)?

Maybe to start you could try this in SPL (the quantifiers are non-greedy so each capture stays inside its own <Data> element):

| rex "<Data>(?<process_name>.*?)<\/Data><Data>(?<signature_name>.*?)<\/Data><Data>(?<binary_description>.*?)<\/Data>"
Hi @parthiban, if your search results have onlineStatus="online" and/or onlineStatus="offline", you could modify your search in this way:

index= "XXXXX" "Genesys system is available"
| spath input=_raw output=new_field path=response_details.response_payload.entities{}
| mvexpand new_field
| fields new_field
| spath input=new_field output=serialNumber path=serialNumber
| spath input=new_field output=onlineStatus path=onlineStatus
| where serialNumber!=""
| lookup Genesys_Monitoring.csv serialNumber
| where Country="Bangladesh"
| stats count(eval(onlineStatus="offline")) AS offline_count count(eval(onlineStatus="online")) AS online_count earliest(eval(if(onlineStatus="offline",_time,null()))) AS offline_time earliest(eval(if(onlineStatus="online",_time,null()))) AS online_time
| fillnull value=0 offline_count online_count
| eval condition=case(
    offline_count=0 AND online_count>0,"Online",
    offline_count>0 AND online_count=0,"Offline",
    offline_count>0 AND online_count>0 AND online_time>offline_time, "Offline but newly online",
    offline_count>0 AND online_count>0 AND offline_time>online_time, "Offline",
    offline_count=0 AND online_count=0, "No data")
| search condition="Offline" OR condition="Offline but newly online"
| table condition

Ciao. Giuseppe
Thanks. I tried but the scenario is a little bit complex ^^
Hi, my hint is to use field names which don't contain any special characters when you are searching, calculating, or manipulating data. If/when you want those "fancy names" in your output, it's better to use something like

| rename fooPercent AS "foo%"
| rename bar AS "this is bar"

as the last commands in your SPL. This way you will have a much easier life with SPL. r. Ismo
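For example (the index, sourcetype, and field names below are made up for illustration), keep the plain names through the calculation and only rename at the end:

index=web sourcetype=access_combined
| stats count(eval(status>=500)) AS errorCount count AS totalCount
| eval errorPercent=round(100*errorCount/totalCount,2)
| rename errorPercent AS "Error %", totalCount AS "Total requests"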
Much appreciated! It works!
Hi, you must remember that since Splunk uses a time-series "database", values are stored into buckets based on _time, and Splunk always uses that _time value when searching events from buckets! This means that if you use earliest + latest to get events from the buckets and then make the final selection based on that separate Timestamp field, Splunk only applies it to events whose _time is between earliest and latest. If your Timestamp field has the needed values outside of earliest - latest, you won't get those! For that reason you should consider (based on your data and use case) whether to fix the ingestion so that the Timestamp field goes into _time, or whether your current setup, where _time holds something other than Timestamp, is the better way. IMHO you should fix your _time value in the ingestion phase instead of trying to guess where those events could be (usually this leads to quite open time spans). r. Ismo
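If fixing it at ingestion is the chosen route, here is a props.conf sketch for the parsing tier (the sourcetype name and the timestamp layout are assumptions; adjust TIME_PREFIX and TIME_FORMAT to the real events):

[mkb]
# Anchor on the Timestamp field and parse its value into _time at index time
TIME_PREFIX = Timestamp[=:"]\s*
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
MAX_TIMESTAMP_LOOKAHEAD = 40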
@gcusello

index=xyz host=abc sourcetype=mkb
| eval Timestamp=strptime(Timestamp,"%Y-%m-%d %H:%M:%S"), from=strptime("2023-12-13 00:00:00","%Y-%m-%d %H:%M:%S"), to=strptime("2023-12-13 23:59:59","%Y-%m-%d %H:%M:%S")
| where Timestamp>=from AND Timestamp<=to

When I used this search query I did not get any events at all. I ran the query for the last 30 days as well, but no events are displayed, although there actually are events for that period. So if any modification needs to be made to the query, kindly let me know.
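One quick way to debug this (a sketch; index/host/sourcetype are copied from the search above): check whether strptime actually parses the field. If the format string doesn't match the field's real layout, strptime returns null and the where clause drops everything:

index=xyz host=abc sourcetype=mkb
| head 5
| eval parsed=strptime(Timestamp,"%Y-%m-%d %H:%M:%S")
| table Timestamp parsed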