All Posts


Thanks for responding so quickly! The SPL I have been trying is as follows:

index=indexname
| stats count by domain, src_ip
| sort - count
| stats list(domain) as Domain, list(count) as count, sum(count) as total by src_ip
| sort - total
| head 10
| fields - total

The task I have been given is: "Use the stats, count, and sort search terms to display the top ten URIs in ascending order." This is from the botsv1 dataset.
Hi, I have log lines like this. I need to 1) group them by ID, and 2) filter to those transactions that have T[A].

#txn1
16:30:53:002 moduleA ID[123]
16:30:54:002 moduleA ID[123]
16:30:55:002 moduleB ID[123]T[A]
16:30:56:002 moduleC ID[123]

#txn2
16:30:57:002 moduleD ID[987]
16:30:58:002 moduleE ID[987]T[B]
16:30:59:002 moduleF ID[987]
16:30:60:002 moduleZ ID[987]

Any idea? Thanks
So, search for "exception". This will return events which contain this word. However, this might give you some false positives, so you need to be more precise about defining exactly what you consider to be an exception event. Once you have these, you can look to extract the exception type for your statistics.
https://docs.splunk.com/Documentation/Forwarder/9.1.1/Forwarder/InstallaWindowsuniversalforwarderfromaninstaller See the section "About the least-privileged user".
Yeah, you can probably fiddle with grouping by the T value or binning _time to some value. I suppose not all parts of a single transaction will have the exact same timestamp; they would probably differ by some fraction of a second or even whole seconds, so you'd have to bin the _time and then use it for grouping.
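A minimal sketch of that idea, assuming the ID and T values still need extracting from _raw and that one second is a wide enough bin to hold a whole transaction (the field names and span are hypothetical):

| rex "ID\[(?<id>\d+)\](?:T\[(?<t>\w+)\])?"
| bin _time span=1s as bucket
| stats values(t) as types, list(_raw) as events by id, bucket
| search types="A"

The bin keeps two transactions that reuse the same ID in different seconds from collapsing into one group.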
No, I haven't configured it yet, but I could use some help. I only installed the add-on.
I want all exceptions which are getting logged. Like, if the Splunk log has an exception encountered, it should fetch that.
Of course I can try, if you can give me a list of all the exception types you want to capture.
@PickleRick Thanks, that works perfectly. But on some lines, because of a poor logging issue, I can see another transaction with the same transactionID! The transactionID is not unique in some transactions, but it is possible to differentiate them from each other by timestamp and Type. For example, I can see transactionID 12345 detected at 00:00:01:000, and a second later another transaction with the same transactionID 12345 at 00:00:02:000. FYI: it's not a lot, but it is affecting the result. Is there any way to separate them in Splunk? Thanks
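One way to split the repeated IDs, sketched under the assumption that events of the same real transaction are never more than one second apart (the threshold and field names are hypothetical):

| sort 0 transactionID, _time
| streamstats current=f last(_time) as prev_time by transactionID
| eval new_txn = if(isnull(prev_time) OR _time - prev_time > 1, 1, 0)
| streamstats sum(new_txn) as txn_no by transactionID
| eval txn_key = transactionID . "-" . txn_no

Grouping by txn_key instead of transactionID then keeps the two occurrences of 12345 apart.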
Can you please suggest a query which gives me all types of exceptions?
It is unclear what is being asked.  What is the relationship between the field you tabled ("user") and all the lookup tables?  And the relationship with "field_stats_wanted"?  Most importantly, why is inputlookup even considered?  If you wonder, appending multiple inputlookups is rarely the correct approach.  It usually means that the problem is not clearly understood. So, explain the use case without SPL first.  Is "user" the only field of interest from raw events?  What are the desired results?  What is in those lookup tables?  Why are there so many different tables? Are there inherent relationships between those tables?  What is the logic between "user", these tables, and the desired results?  Try not to make volunteers read your mind.
Since not everyone is familiar with the data set you are referring to, can you please provide some examples of the events you are trying to find? Also, please share the SPL you have already tried, so we can see where you might be going wrong.
You need to actually extract the values into a field. You might also consider escaping the dots, as an unescaped dot in regex means any character.

| rex field=_raw "\b(?<exception_type>(java|javax)\.[\w\.]+Exception)"
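From there the counting is a plain stats. A sketch of the whole pipeline, with the index name as a placeholder:

index=your_index "exception"
| rex field=_raw "\b(?<exception_type>(java|javax)\.[\w\.]+Exception)"
| stats count by exception_type
| sort - count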
Pro tip: Do not assume anyone knows anything about your data.

Update the title to a question that clearly defines the problem.  This will help others in the community.  "Splunk search command" conveys no information.

Always illustrate relevant data.  For example, which field contains the URI?

Be conscious that many natural language terms are ambiguous.  For example, "top ten URI's" can mean many different things.  What is your definition related to your data? If the field URI contains the URI, and "top ten" means the ten URIs that appear in the most events, this can be

| stats count by URI

I recommend that you read/watch some tutorials.  The Search Tutorial can be a good place to start.
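Building on the ambiguity point above: if "top ten" means the ten most frequent URIs and "ascending order" is taken at face value, one possible sketch against the botsv1 data (the URI field name is an assumption about your events):

index=botsv1
| stats count by URI
| sort - count ``` most frequent first ```
| head 10 ``` keep the top ten ```
| sort + count ``` then display in ascending order ```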
Correct, it is not possible in one go because you are effectively grouping by two different dimension sets.
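In SPL terms, that is why the search needs two stats passes, one per dimension set; a sketch reusing the field names already posted:

| stats count by domain, src_ip ``` first pass: count per (domain, src_ip) pair ```
| stats list(domain) as Domain, list(count) as count, sum(count) as total by src_ip ``` second pass: roll up to src_ip alone ```
| sort - total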
Hi, here are instructions on how to use a Gmail account with Splunk: https://community.splunk.com/t5/Alerting/Unable-to-send-test-email-from-Splunk/m-p/667242/highlight/false#M15466 This changed a couple of years ago, as @PickleRick said. r. Ismo
It sounds like each event is a summary report for a day.

Your response worked but I am getting all the events of "all time" even if I have selected a timestamp of 24h.

Do you mean that when your time selector is for the last 24 hours, Splunk returns multiple daily summaries?  If _time and the date key do not agree, and if your intention is to search for those summaries that fall within your search window, you can filter by that key, e.g.,

| spath path=employees
| eval date = json_array_to_mv(json_keys(employees))
| mvexpand date ``` skip this if each employees record has only one top-level key ```
| addinfo
| eval date_start = strptime(date, "%F")
| where info_min_time <= date_start AND relative_time(date_start, "+1d") < info_max_time
| eval day_employees = json_extract(employees, date)
| eval employee_id = json_array_to_mv(json_keys(day_employees))
| mvexpand employee_id
| eval day_employees = json_extract(day_employees, employee_id)
| spath input=day_employees
Hi, I don't think there is any real email SMTP server these days that doesn't require TLS and other authentication. So you must enable TLS and use an SMTP server which supports it. Here are instructions on how to use your personal Gmail account with Splunk: https://community.splunk.com/t5/Alerting/Unable-to-send-test-email-from-Splunk/m-p/667242/highlight/false#M15466 r. Ismo
Hi, currently Gmail doesn't allow use of its SMTP server the way it did earlier; instead it requires more secure authentication, so the old way no longer works with Splunk. Fortunately they have implemented 2-factor authentication and an additional app password feature which you can use. The steps are described at https://support.google.com/accounts/answer/185833

1. Check the above instructions and ensure that you have 2-step verification on (probably you do, as Google has enabled it for all)
2. Create a new app password for your Splunk server
3. Log in to your local Splunk instance
4. Settings -> Server settings -> Email settings
5. Mail host: smtp.gmail.com:587
6. Email security: Enable TLS
7. Username: your Gmail account where you have enabled 2-step verification
8. Password: the app password for the above Gmail account
9. Allowed domains: <add what is needed>
10. Save
11. Go to the search GUI and run a test:

index=_internal | head 1 | sendemail to="<your test email recipient>" subject=test sendresults=true format=table sendcsv=false

r. Ismo
@ThomasC you are going to need a combination of REST and the playbook API.

Use REST to get all container_ids for a label:

/rest/container?_filter_label="<label>"&page_size=0

https://docs.splunk.com/Documentation/SOARonprem/6.1.1/PlaybookAPI/SessionAPI

Then create a loop where you use the phantom.playbook() API to call the playbook against each container id:

https://docs.splunk.com/Documentation/SOARonprem/6.1.1/PlaybookAPI/PlaybookAPI#playbook

The above can be done in a single custom function / code block. Also, if you need these to run without doing a historical backfill like this, just set your playbook to Active and it will run automatically when an event with the relevant label drops into the queue from SNOW.

-- Happy SOARing! --