All Posts



The distinct_count (dc) function returns the number of unique values of a field:

EventCode=4624 user!="*$" | timechart span=1d dc(user) as "Unique Users"
I have a lookup table that looks like this:

Column 1    Column 2    Column 3    Column 4
Value 1     -           -           15
Value 1     -           -           60
Value 2     -           -           75
Value 2     -           -           N/A
Value 2     -           -           5

I want to calculate the average of all the values in Column 4 (that aren't N/A) that have the same value in Column 1. Then I want to output that as a table:

Column 1    Column 2
Value 1     37.5
Value 2     40
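One way to approach this (a sketch, assuming the lookup file is named my_lookup.csv and the fields are literally named "Column 1" and "Column 4") is to filter out the N/A rows before averaging:

```spl
| inputlookup my_lookup.csv
| where 'Column 4'!="N/A"
| stats avg('Column 4') as Average by "Column 1"
```

With the sample data above this should return 37.5 for Value 1 and 40 for Value 2; the field names are placeholders, so adjust them to match the actual lookup.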
Hi, I am creating a dashboard using Dashboard Studio, and I want to run a query with a subsearch. I want the subsearch to use the global time range and the main search to use a different time range. How do I do that? I have configured a time input field with the token global_time. My query looks like this:

index=xyz query1 earliest=global_time.earliest latest=now() [search index=xyz query2 earliest=global_time.earliest latest=global_time.latest]

This is not working. Can you suggest how to make it work?
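For what it's worth, Dashboard Studio normally requires token references to be wrapped in $...$ delimiters before they are substituted into a search. A sketch of the query above with delimited tokens (assuming the time input's token is named global_time, and using a hard-coded range for the main search purely as a placeholder) would be:

```spl
index=xyz query1 earliest=-7d@d latest=now()
    [search index=xyz query2 earliest=$global_time.earliest$ latest=$global_time.latest$]
```

Here the main search keeps its own fixed range while the subsearch picks up the global time picker via the tokens.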
This corrected itself, after I toggled the server's role from standalone to distributed, then back to standalone -- then clients started showing up on the UI. Monitoring Console, General Setup, Mode (top left). Go figure.
FWIW, happening here as well, with 9.2.0.1. Checked all The Things mentioned in that doc everyone keeps referencing, including those stanzas mentioned numerous times here. Another symptom of mine is that the ForwarderManager (deployer) doesn't appear in my monitored servers in the SplunkManager (aka Master).
I've tried this before but wasn't successful in finding any matches, hence I resorted to an eval. Any way you can expand on the examples you provided? Is there an eval statement or search that I should be using?
This is more of an advisory than a question. I hope it helps.

If you are a Splunk Cloud customer, I strongly suggest you run this search to ensure that Splunk Cloud is not dropping events. This info is not presented in the Splunk Cloud monitoring console, and it is an indicator that indexed events are being dropped.

index=_internal host=idx* sourcetype=splunkd log_level IN(ERROR,WARN) component=SQSSmartbusInputWorker "Error parsing events from message content"
| eval bytesRemaining=trim(bytesRemaining,":")
| stats sum(bytesRemaining) as bytesNotIndexed

What these errors are telling us is that an SQSSmartbusInputWorker process is parsing events and hits some type of invalid field or value in the data, in our case _subsecond. When the process hits the invalid value, it appears to drop everything else in the stream (i.e. bytesRemaining). In other words, bytesRemaining covers events that were sent to Splunk Cloud but not indexed.

When this error occurs, Splunk Cloud writes the failed info to an SQS DLQ in S3, which can be observed using:

index=_internal host=idx* sourcetype=splunkd log_level IN(ERROR,WARN) component=SQSSmartbusInputWorker "Successfully sent a SQS DLQ message to S3 with location"

Curious if anyone else out there is experiencing the same issue. SQSSmartbusInputWorker doesn't appear in any of the indexing documents, but it does appear to be very important to the ingest process.
Hey @padresman, I will try your example. You've got to be very careful that your expression fields match the capture group you use, as it will store the value in "attributes."capture group value"" by default. Also, make sure to use the golang flavor on regex101, though your regex appears to be fine. It's also wise to iterate and NOT remove the fields you create, so you can see what they look like when they arrive at Splunk. That can help make sure your value is what you think it is.
Quick update.  I changed the format block to use format_3:formatted_data instead of formatted_data.*.  The note looks a lot nicer, but it's still 500 items.
Hi everyone, I need help with the following problem: while analyzing some logs, we found that for a specific index the sourcetype had only the value Unknown. Our first thought was that some app or add-on might not have matched the data correctly, but none was present. We then checked whether some value might be missing at the .conf file level, but again we found no problems. So what could be the reason that, for that specific index, the sourcetype only has that value?
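As a first diagnostic step, it can help to break the Unknown sourcetype down by source and host to see which inputs are producing it. A sketch, where your_index is a placeholder for the index in question:

```spl
| tstats count where index=your_index sourcetype=Unknown by source, host
```

If a single source or forwarder dominates the results, that is usually the place to start checking the input and sourcetype configuration.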
I seem to be close on trying to find the statistics to pull unique users per day, but I know I'm missing something.

Goal: have a stat/chart/search that shows the unique user attribute per day, for a span of a 1 week / 1 month / 1 year search.

Search queries trialed:

EventCode=4624 user=* | stats count by user | stats dc(user)

EventCode=4624 user=* | timechart span=1d count as count_user by user | stats count by user

So the login event 4624 is the successful-login code, and I'm trying to get a stat of the number of unique user names that log in each day over a time span. Am I close? Any help would be appreciated!
I'm using the Cisco FireAMP app to return the trajectory of an endpoint, and the data includes a list of all running tasks/files. For my test there are 500 items returned, with 9 marked as 'Malicious'. I'm trying to filter for those and write the details to a note, but the note always contains all 500 items, not just the 9.

My filter block (filter_2) is this:

if get_device_trajectory_2:action_result.data.*.events.*.file.disposition == Malicious

My format block (format_3) is this:

%%
File Name: {0} - File Path: {1} - Hash: {2} - Category: {4} - Parent: {3}
%%

where each of the variables refers to the filter block, e.g.:

0: filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.file.file_name
1: filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.file.file_path
2: filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.file.identity.sha256
3: filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.file.parent.file_name
4: filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.detection

Finally, I use a Utility block to add the note.
The Utility block contents reference the format block:

format_3:formatted_data.*

The debugger shows this when running the filter block:

Mar 25, 13:52:54 : filter_2() called
Mar 25, 13:52:54 : phantom.condition(): called with 1 condition(s) '[['get_device_trajectory_2:action_result.data.*.events.*.file.disposition', '==', 'Malicious']]', operator : 'or', scope: 'new'
Mar 25, 13:52:54 : phantom.get_action_results() called for action name: get_device_trajectory_2 action run id: 0 app_run_id: 0
Mar 25, 13:52:54 : phantom.condition(): condition 1 to evaluate: LHS: get_device_trajectory_2:action_result.data.*.events.*.file.disposition OPERATOR: == RHS: Malicious
Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'None' '==' 'Malicious' => result:False
Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'None' '==' 'Malicious' => result:False
Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'None' '==' 'Malicious' => result:False
Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'None' '==' 'Malicious' => result:False
Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'Unknown' '==' 'Malicious' => result:False
Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'None' '==' 'Malicious' => result:False
Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'Unknown' '==' 'Malicious' => result:False
Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'Unknown' '==' 'Malicious' => result:False
Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'Clean' '==' 'Malicious' => result:False
Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Unknown' '==' 'Malicious' => result:False
Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 'Malicious' => result:True
Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 'Malicious' => result:True
Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 'Malicious' => result:True
Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 'Malicious' => result:True
Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 'Malicious' => result:True
Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 'Malicious' => result:True
Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 'Malicious' => result:True
Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 'Malicious' => result:True
Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 'Malicious' => result:True
Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Unknown' '==' 'Malicious' => result:False
Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Unknown' '==' 'Malicious' => result:False
Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'None' '==' 'Malicious' => result:False
Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Unknown' '==' 'Malicious' => result:False

so it looks like it's correctly identifying the malicious files.
The debugger shows this when running the format block:

Mar 25, 13:52:55 : format_3() called
Mar 25, 13:52:55 : phantom.collect2(): called for datapath['filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.file.file_name'], scope: new and filter_artifacts: []
Mar 25, 13:52:55 : phantom.get_run_data() called for key filtered-data:filter_2:condition_1
Mar 25, 13:52:55 : phantom.collect2(): Classified datapaths as [<DatapathClassification.NAMED_FILTERED_ACTION_RESULT: 9>]
Mar 25, 13:52:55 : phantom.collect2(): called for datapath['filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.file.file_path'], scope: new and filter_artifacts: []
Mar 25, 13:52:55 : phantom.get_run_data() called for key filtered-data:filter_2:condition_1
Mar 25, 13:52:55 : phantom.collect2(): Classified datapaths as [<DatapathClassification.NAMED_FILTERED_ACTION_RESULT: 9>]
Mar 25, 13:52:55 : phantom.collect2(): called for datapath['filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.file.identity.sha256'], scope: new and filter_artifacts: []
Mar 25, 13:52:55 : phantom.get_run_data() called for key filtered-data:filter_2:condition_1
Mar 25, 13:52:55 : phantom.collect2(): Classified datapaths as [<DatapathClassification.NAMED_FILTERED_ACTION_RESULT: 9>]
Mar 25, 13:52:55 : phantom.collect2(): called for datapath['filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.file.parent.file_name'], scope: new and filter_artifacts: []
Mar 25, 13:52:55 : phantom.get_run_data() called for key filtered-data:filter_2:condition_1
Mar 25, 13:52:55 : phantom.collect2(): Classified datapaths as [<DatapathClassification.NAMED_FILTERED_ACTION_RESULT: 9>]
Mar 25, 13:52:55 : phantom.collect2(): called for datapath['filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.detection'], scope: new and filter_artifacts: []
Mar 25, 13:52:55 : phantom.get_run_data() called for key filtered-data:filter_2:condition_1
Mar 25, 13:52:56 : phantom.collect2(): Classified datapaths as [<DatapathClassification.NAMED_FILTERED_ACTION_RESULT: 9>]
Mar 25, 13:52:56 : save_run_data() saving 136.29 KB with key format_3:formatted_data_
Mar 25, 13:52:56 : save_run_data() saving 140.23 KB with key format_3__as_list:formatted_data_

There are 9 malicious files, and that looks like what the debugger is saying, so again it seems to be using the filtered data correctly.

But my note always has 500 items in it. I'm not sure what I'm doing wrong. Can anyone offer any help, because I'm stuck. Thanks.
so if I do a "join" with your query, the correct index will be associated with the sourcetype?
Ahh, my mistake, which makes what I was reading in the documentation make much more sense. Thank you! I'll also accept this as the solution; apologies for my ignorance!
Nothing "happens".  It's legitimate for a sourcetype to be present in more than one index.  It may complicate your query, though.
To be pedantic, reports power dashboards rather than the other way around.  What you call a "report" is merely scheduled emailing of a dashboard. Yes, you can modify the dashboard and those edits should be reflected in the email.
This is precisely my problem, I have to start from this command and therefore retrieve the index elsewhere... but then what happens if the indexes have sourcetype names in common?
You can't retrieve the index from the log if it isn't there, which is the case for these events. You'll have to search for the index by sourcetype.

| tstats count where index=* sourcetype=data_sourcetype by index
| fields - count
Hi all, I was wondering if anyone could help with hopefully a simple question. I have a dashboard that is used to power a report that sends a PDF to a number of individuals via email. We're looking to extract some further data, and I was wondering: if I simply edit the existing dashboard with a few more searches, will that be reflected in the report?

Cheers,
Additional: it appears the forwarder manager is servicing clients, but they are not being reflected in the GUI or at the command line:

[root@splunkdeployer ~]# /opt/splunk/bin/splunk list deploy-clients
Splunk username: admin
Password:
Login successful, running command...
No deployment clients have contacted this server.

Go figure... More Googling...