All Posts


You probably want to use the eventstats command. For example (from my home lab), let's search for events from my private web server:

index=httpd earliest=-1d

Now add to each event a count of _all_ events for a particular client:

| eventstats count by client

Now we only want to see those events where the number of requests from the particular client was greater than 5 (meaning a client requested a file from my web server 6 or more times):

| where count>5
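Put together, the whole pipeline (a minimal sketch; the index name comes from the example above) would be:

index=httpd earliest=-1d
| eventstats count by client
| where count>5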
I have two timestamps in milliseconds: start=1710525600000, end=1710532800000. How can I search for logs between those timestamps? Let's say I want to run this query:

index=my_app | search env=production | search service=my-service

How do I specify the time range in milliseconds for this query?
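One common approach (a sketch, not from the original thread): the earliest and latest time modifiers accept epoch seconds, so divide the millisecond values by 1000 and pass them directly:

index=my_app env=production service=my-service earliest=1710525600 latest=1710532800

Alternatively, filter on _time, which keeps full control over the boundaries:

index=my_app env=production service=my-service
| where _time>=1710525600 AND _time<1710532800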
Hi Team, we are using the query below:

[| inputlookup ABCD_Lookup_Blacklist.csv
| outputlookup ABCD_Lookup_Blacklist_backup.csv append=false
| sendemail to="nandan@cumpass.com" sendresults=false sendcsv=true sendpdf=false inline=true subject="ABCD_Lookup_Blacklist_backup.csv"
| rename target as host
| eval lookup_name="ABCD_Lookup_Blacklist.csv"]

The CSV attachment we receive is named unknown.csv. We want the attachment's file name to match the lookup_name instead.

Please help us. Thank you, NANDAN
You can "nest" mvappends to add multiple values at once. You can also use split() to make a multivalue field from a string of delimited values. You can use isnull() to check whether mvfind returned a value or not. One caveat about mvfind, though: it matches based on a regex, so you might get some unexpected results if you're not careful.
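A minimal sketch of those points (field names are illustrative; note the anchored regex so "5" doesn't also match values like "15"):

| eval test = mvappend("0", "1", "2", "3", "4", "5", "6")
| eval test2 = split("0,1,2,3,4,5,6", ",")
| eval has5 = if(isnull(mvfind(test, "^5$")), "false", "true")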
On Slack there is a new MASA diagram where you can see how those pipelines work and which conf files (and parameters) affect those events. https://splunk-usergroups.slack.com/archives/CD9CL5WJ3/p1710515462848799?thread_ts=1710514363.198159&channel=CD9CL5WJ3&message_ts=1710515462.848799
Hello, so is multivalue the only way to use a list/array? If I want to assign 7 values, should I use mvappend 7 times, like the following?

| eval test = mvappend("0", test)
| eval test = mvappend("1", test)
| eval test = mvappend("2", test)
| eval test = mvappend("3", test)
| eval test = mvappend("4", test)
| eval test = mvappend("5", test)
| eval test = mvappend("6", test)

How do I get a true/false return if I want to see whether the number 5 is in the array/list? mvfind only gives me the position of 5, which is 1.

| eval n = mvfind(test, "5")

Thank you
Sample Logs:

<<< Reporting.logs : 2454 : 15671231232345:INFO :com.am.sss.inws.sample.connector.SampleDBinternalexternal:::XII KEY:: g67a124-6f55-433a-345aexwc vx:: REQS REQUID :: 34567d34-1245-4asd-a27f-42345cvdwwxz:: SUB REQUID:: 7866-ghnb5-33333:: Application :barcode! company :: Org : Branch-loc :: TIME:<TIMESTAMP> (12) 2022/01/22 17:17:58:208 to 17:17:58:212 4 ms Generic BF Invoice time for one statment with parameters
<<< Applicationlogs : 2454 : 15671231232345:INFO :com.am.sss.inws.sample.connector.AccountBinding:::XIS KEY:: g67a124-6f55-433a-345aexwc vx:: REQS REQUID :: 7854d34-7623-4asd-a27f-90864cvdwwxz:: SUB REQUID:: 7866-ghnb5-33333:: Application :barcode! company :: Org : Branch-loc :: TIME:<TIMESTAMP> (12) 2022/01/22 17:17:58:208 to 17:17:58:212 4 ms Generic BF Invoice time for one statment with parameters
<<< IntialLogs : 2454 : 15671231232345:INFO :com.am.sss.inws.sample.connector.IntialReortbinding:::XIP KEY:: g67a124-6f55-433a-345aexwc vx:: REQS REQUID :: 12345d34-1288-8asd-a26f-42348cvdwwxz:: SUB REQUID:: 7866-ghnb5-33333:: Application :barcode! company :: Org : Branch-loc :: TIME:<TIMESTAMP> (12) 2022/01/22 17:17:58:208 to 17:17:58:212 4 ms Generic BF Invoice time for one statment with parameters
<<< PartialReportingLogs : 2454 : 15671231232345:INFO :com.am.sss.inws.sample.connector.totalDBinternalexternal:::XII KEY:: g67a124-6f55-433a-345aexwc vx:: REQS REQUID :: 09876d34-6753-3asd-a30f-87654cvdwwxz:: SUB REQUID:: 7866-ghnb5-33333:: Application :barcode! company :: Org : Branch-loc :: TIME:<TIMESTAMP> (12) 2022/01/22 17:17:58:208 to 17:17:58:212 4 ms Generic BF Invoice time for one statment with parameters
<<< FailedLogs : 2454 : 15671231232345:INFO :com.am.sss.inws.sample.connector.SampleDBinternalexternal:::ZII KEY:: g67a124-6f55-433a-345aexwc vx:: REQS REQUID :: 56744d34-1245-4asd-a11f-89765cvdwwxz:: SUB REQUID:: 7866-ghnb5-33333:: Application :barcode! company :: Org : Branch-loc :: TIME:<TIMESTAMP> (12) 2022/01/22 17:17:58:208 to 17:17:58:212 4 ms Generic BF Invoice time for one statment with parameters
<<< Reporting.logs : 2454 : 15671231232345:INFO :com.am.sss.inws.sample.connector.notalwayslogs:::PII KEY:: g67a124-6f55-433a-345aexwc vx:: REQS REQUID :: 89765d34-9875-4asd-a2f-87654cvdwwxz:: SUB REQUID:: 7866-ghnb5-33333:: Application :barcode! company :: Org : Branch-loc :: TIME:<TIMESTAMP> (12) 2022/01/22 17:17:58:208 to 17:17:58:212 4 ms Generic BF Invoice time for one statment with parameters

I am not sure how to write a rex to do the field extraction. Please find the screenshot below; I need a rex for the highlighted ones:
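The screenshot didn't survive here, but assuming each "<<<" chunk is one event and the highlighted values are the log name, the KEY, the request IDs, and the millisecond duration, a starting-point sketch (field names are my own) might be:

| rex "^<<<\s+(?<log_name>\S+)\s+:"
| rex "KEY::\s+(?<key>\S+)\s+vx::"
| rex "REQS REQUID ::\s+(?<req_id>\S+?)::"
| rex "SUB REQUID::\s+(?<sub_req_id>\S+?)::"
| rex "to \d{2}:\d{2}:\d{2}:\d{3}\s+(?<duration_ms>\d+)\s+ms"

Adjust the patterns to whatever was actually highlighted in the screenshot.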
Ok. The question (because there might not be many Windows DNS experts here) is whether those event logs actually contain the events you want (and whether they are properly identified by the whitelisted EventIDs), or whether you are randomly setting your inputs in the hope of finding something. Can you find the relevant events in Event Viewer?
The number of _defined_ dashboards doesn't matter much. In fact, most Splunk installations have many dashboards which are never used, and as such they cause no problem. On the other hand, even one dashboard used in parallel by many users can hog resources, causing performance issues. In general, if you (try to) overbook your resources, then depending on your configuration you might end up with delayed or skipped searches (remember that ad-hoc searches have priority, so if you have unlimited roles, you can cause scheduled searches to be delayed or skipped), with searches terminated due to resource exhaustion, or in some cases even with the Splunk process being killed due to memory exhaustion. There are many things that can go wrong.
If you specify multiple aggregation functions for timechart with a by clause, it creates a separate data series for each combination of aggregation function and field value. In the case of :NULL, these are the stats for events where the split-by field is empty (I suspect that for log.event=res there is no log.operation field).
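A quick way to see this behavior for yourself (synthetic data; the field name is illustrative):

| makeresults count=4
| streamstats count as n
| eval operation = if(n<3, "write", null())
| timechart span=1s count avg(n) by operation

Rows where operation is null land in the "count: NULL" and "avg(n): NULL" series.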
Hi @Fadil.CK, Good question, let me look into this. I had to edit my reply here because the info I shared before might be incorrect.

^ Post edited by me to update my response.
In order to search for something in Splunk ("query" the data, as you said), you must first have the data you want to search ingested into Splunk. So if you have such data (for example from some endpoint inventory software or from installer logs), you will probably be able to find some information about Wireshark. But the main question is whether you have this data. Splunk on its own is "just" a data analysis platform. It's not a network monitor, endpoint manager, vulnerability scanner, and so on.
https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/MultivalueEvalFunctions#mvfind.28.26lt.3Bmv.26gt.3B.2C.26lt.3Bregex.26gt.3B.29
You can also try to fiddle with the timewrap command (but that's just a general idea, I don't have any particular solution in mind at the moment).
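For reference, a generic sketch of what timewrap does (the index and spans are placeholders): it overlays successive periods of a timechart as separate series so they can be compared:

index=my_app | timechart span=1h count | timewrap 1d

This would plot each day's hourly counts on the same 24-hour axis.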
No, you can just define another sourcetype and upload the file onto your all-in-one instance. The trick will be to handle the csv fields properly. If I remember correctly, with INDEXED_EXTRACTIONS=csv Splunk uses the first line of the input file (by default) to determine the field names. Without a header line, you need to explicitly name the fields and set the proper FIELD_DELIMITER so that Splunk knows what the fields are (or write a very ugly regex-based extraction pattern).
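A props.conf sketch along these lines (the sourcetype name and field list are made up for illustration):

[my_csv_data]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ,
# only needed when the file has no header line:
FIELD_NAMES = timestamp, host, status, bytes
TIMESTAMP_FIELDS = timestamp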
ah! ok, so I need to test this a different way and update the SEDCMD to reference the new sourcetype. What's the next easiest method to test? Set up a UF with a file monitor?
I think I got it. I need to properly configure _time depending on the difference in days, i.e. for a week-long difference: | eval _time = _time + 7*86400. If I am wrong, please advise.
Awesome, that seemed to do it, thank you so much.

index="azure-activity"
| spath input=_raw path=properties.targetResources{}.modifiedProperties{} output=hold
| eval hold = mvfilter(like(hold,"%Group.DisplayName%"))
| spath input=hold path=newValue output=NewGroupName
| search operationName="Add member to group"
| stats count by "properties.initiatedBy.user.userPrincipalName", "properties.targetResources{}.userPrincipalName", NewGroupName, operationName, _time
The default csv sourcetype has INDEXED_EXTRACTIONS=csv, which changes how the data is processed. Even if the SEDCMD is applied (of which I'm not sure), the fields are already extracted at index time, and since you're only editing _raw, you're not changing the already-extracted fields.
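For contrast, a sketch of how SEDCMD is normally used on a sourcetype without indexed extractions (the sourcetype name and pattern are illustrative): it rewrites _raw at parse time, before search-time field extraction ever sees the data:

[my_plain_sourcetype]
SEDCMD-mask_ids = s/\d{3}-\d{2}-\d{4}/xxx-xx-xxxx/g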
I can see the report by ReportKey now, but the graph is leaner. I wonder how I can get something like the one in the article.