All Topics

Register here. This thread is for the Community Office Hours session on Getting Data In (GDI): Forwarders on Wed, May 22, 2024 at 1pm PT / 4pm ET.

This is your opportunity to ask questions related to getting data into the Splunk Platform using forwarders, including:
- Universal Forwarder (UF) or heavy forwarder (HF) deployment/configuration
- Troubleshooting forwarder connectivity issues, blocked queues, etc.
- Improving forwarder performance
- Anything else you'd like to learn!

Please submit your questions at registration or as comments below. You can also head to the #office-hours user Slack channel to ask questions (request access here).

Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants.

Look forward to connecting!
Register here. This thread is for the Community Office Hours session on Getting Data In (GDI) to Splunk Platform on Wed, May 8, 2024 at 1pm PT / 4pm ET.

Join our Office Hours series where technical Splunk experts answer questions and provide how-to guidance on a different topic every month! This is your opportunity to ask questions related to your specific GDI challenge or use case, including:
- How to onboard common data sources (AWS, Azure, Windows, *nix, etc.)
- Using forwarders
- Apps and add-ons to get data in
- Processing data with Edge Processor, Ingest Processor, and Ingest Actions
- Archiving your data
- Anything else you'd like to learn!

Please submit your questions at registration or as comments below. You can also head to the #office-hours user Slack channel to ask questions (request access here).

Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants.

Look forward to connecting!
How do you copy a knowledge object from one app to another in Splunk?
I have a lookup table that looks like this:

Column 1    Column 2    Column 3    Column 4
Value 1     -           -           15
Value 1     -           -           60
Value 2     -           -           75
Value 2     -           -           N/A
Value 2     -           -           5

I want to calculate the average for all of the values in Column 4 (that aren't N/A) that have the same value in Column 1. Then I want to output that as a table:

Column 1    Column 2
Value 1     37.5
Value 2     40
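A minimal sketch of one way to compute this, assuming the lookup file is saved as my_lookup.csv (swap in your own file and field names):

| inputlookup my_lookup.csv
| rename "Column 1" as group_key, "Column 4" as value
| where value != "N/A"
| stats avg(value) as average by group_key
| rename group_key as "Column 1", average as "Column 2"

The where clause drops the N/A rows before the average is taken, so Value 1 comes out as 37.5 and Value 2 as 40.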
Hi, I am creating a dashboard in Dashboard Studio and I want to run a query with a subsearch. I want the subsearch to use the global time range and the main search to use a different time range. How do I do that? I have configured a time input with the token global_time, and my query looks like this:

index=xyz query1 earliest=global_time.earliest latest=now()
    [search index=xyz query2 earliest=global_time.earliest latest=global_time.latest]

This is not working. Can you suggest how to make it work?
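A guess at the fix, assuming the time input's token really is named global_time: in Dashboard Studio the token has to be wrapped in $...$ inside the query string, otherwise earliest/latest receive the literal text global_time.earliest. A sketch:

index=xyz query1 earliest=$global_time.earliest$ latest=now()
    [search index=xyz query2 earliest=$global_time.earliest$ latest=$global_time.latest$]

Time modifiers written directly in the search string take precedence over the panel's time range, so the main search and the subsearch can use different windows this way.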
Dynatrace may claim to outpace Cisco in the observability space — but that's simply not true, and here's why: Application performance monitoring (APM) is the foundation of observability. But no matter how good the builder, tool, or solution — you can't get AI-driven correlated insights across the entire tech stack to make business-critical decisions from Dynatrace.

Join our webinar for a deep dive into why only Cisco Full-Stack Observability can deliver an observability roadmap that takes your IT environment from reactive monitoring to predictive observability.

Cisco FSO vs. Dynatrace: Monitoring is no longer enough to safeguard user experience

You'll gain insights into:
- Key differentiators between Dynatrace Grail and Cisco Full-Stack Observability. (Spoiler alert: MTTR isn't the only thing FSO can accelerate.)
- What to look for in an observability solution to ensure it serves current and future needs.
- Why unifying views of siloed telemetry from Dynatrace can't safeguard business-critical user experiences.

Register now! Don't miss this insightful webinar.
This is more of an advisory than a question; I hope it helps. If you are a Splunk Cloud customer, I strongly suggest you run this search to make sure Splunk Cloud is not dropping events. This information is not presented in the Splunk Cloud monitoring console, and it is an indicator that events sent for indexing are being dropped.

index=_internal host=idx* sourcetype=splunkd log_level IN(ERROR,WARN) component=SQSSmartbusInputWorker "Error parsing events from message content"
| eval bytesRemaining=trim(bytesRemaining,":")
| stats sum(bytesRemaining) as bytesNotIndexed

What these errors tell us is that an SQSSmartbusInputWorker process is parsing events and hits some kind of invalid field or value in the data (in our case _subsecond). When the process hits the invalid value, it appears to drop everything else in the stream (i.e. bytesRemaining). In other words, bytesRemaining counts data that was sent to Splunk Cloud but not indexed.

When this error occurs, Splunk Cloud writes the failed data to an SQS DLQ in S3, which can be observed using:

index=_internal host=idx* sourcetype=splunkd log_level IN(ERROR,WARN) component=SQSSmartbusInputWorker "Successfully sent a SQS DLQ message to S3 with location"

Curious if anyone else out there is experiencing the same issue. SQSSmartbusInputWorker doesn't appear in any of the indexing documentation, but it does appear to be very important to the ingest process.
Hi, I am currently working in Splunk ITSI. Can I do the Splunk IT Service Intelligence Certified Admin certification without completing the previous certifications like Splunk Core Certified Power User, Splunk Enterprise Certified Admin, etc.?

Regards,
Nagalakshmi A
Hi everyone, I need help with the following problem: while analyzing some logs, we found that for a specific index the sourcetype only had the value Unknown. The first thing we asked ourselves was whether some app or add-on might not have matched the data correctly, but none was present. We then checked whether some value might be missing at the files.conf level, but we found no problems there either. So what could be the reason why, for that specific index, the sourcetype only has that value?
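One quick check, sketched here with a placeholder name (your_index is whatever index shows the problem): list which sources and hosts are producing the Unknown sourcetype, so it can be traced back to the input that is sending it.

| tstats count where index=your_index by sourcetype, source, host

The source/host combinations returned by this search point at the specific inputs (and their inputs.conf stanzas) to review for a missing or misconfigured sourcetype assignment.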
I seem to be close to finding the statistics to pull unique users per day, but I know I'm missing something.

Goal: have a stat/chart/search that shows the unique users per day over a 1 week / 1 month / 1 year search window.

Search queries trialed:

EventCode=4624 user=* stats count by user | stats dc(user)

EventCode=4624 user=* | timechart span1d count as count_user by user | stats count by user

Event 4624 is the successful logon code, and I'm trying to get a stat of the number of unique user names that log on each day over a time span. Am I close? Any help would be appreciated!
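A minimal sketch of one way to get a per-day distinct-user count, assuming EventCode and user are already extracted as in the attempts above:

EventCode=4624 user=*
| timechart span=1d dc(user) as unique_users

The two changes from the trialed queries are adding the missing pipe before stats, and using span=1d with dc(user) so timechart counts distinct users per day rather than counting events per user.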
I'm using the Cisco FireAMP app to return the trajectory of an endpoint, and the data includes a list of all running tasks/files.  For my test there are 500 items returned, with 9 marked as 'Malicious'.  I'm trying to filter for those and write the details to a note.  But the note always contains all 500 items, not just the 9. My filter block (filter_2) is this:   if get_device_trajectory_2:action_result.data.*.events.*.file.disposition == Malicious     My format block (format_3) is this:   %% File Name: {0} - File Path: {1} - Hash: {2} - Category: {4} - Parent: {3} %%   where each of the variables refer to the filter block e.g.:   0: filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.file.file_name 1: filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.file.file_path 2: filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.file.identity.sha256 3: filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.file.parent.file_name 4: filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.detection     Finally, I use a Utility block to add the note.  The Utility block contents reference the format block:   format_3:formatted_data.*     The debugger shows this when running the filter block:   Mar 25, 13:52:54 : filter_2() called Mar 25, 13:52:54 : phantom.condition(): called with 1 condition(s) '[['get_device_trajectory_2:action_result.data.*.events.*.file.disposition', '==', 'Malicious']]', operator : 'or', scope: 'new' Mar 25, 13:52:54 : phantom.get_action_results() called for action name: get_device_trajectory_2 action run id: 0 app_run_id: 0 Mar 25, 13:52:54 : phantom.condition(): condition 1 to evaluate: LHS: get_device_trajectory_2:action_result.data.*.events.*.file.disposition OPERATOR: == RHS: Malicious Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'None' '==' 'Malicious' => result:False Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'None' '==' 'Malicious' => result:False Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'None' '==' 'Malicious' => result:False Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'None' '==' 'Malicious' => result:False Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'Unknown' '==' 'Malicious' => result:False Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'None' '==' 'Malicious' => result:False Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'Unknown' '==' 'Malicious' => result:False Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'Unknown' '==' 'Malicious' => result:False Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'Clean' '==' 'Malicious' => result:False Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Unknown' '==' 'Malicious' => result:False Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 'Malicious' => result:True Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 'Malicious' => result:True Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 'Malicious' => result:True Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 'Malicious' => result:True Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 
'Malicious' => result:True Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 'Malicious' => result:True Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 'Malicious' => result:True Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 'Malicious' => result:True Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 'Malicious' => result:True Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Unknown' '==' 'Malicious' => result:False Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Unknown' '==' 'Malicious' => result:False Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'None' '==' 'Malicious' => result:False Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Unknown' '==' 'Malicious' => result:False   so it looks like it's correctly identifying the malicious files.  The debugger shows this when running the format block:   Mar 25, 13:52:55 : format_3() called Mar 25, 13:52:55 : phantom.collect2(): called for datapath['filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.file.file_name'], scope: new and filter_artifacts: [] Mar 25, 13:52:55 : phantom.get_run_data() called for key filtered-data:filter_2:condition_1 Mar 25, 13:52:55 : phantom.collect2(): Classified datapaths as [<DatapathClassification.NAMED_FILTERED_ACTION_RESULT: 9>] Mar 25, 13:52:55 : phantom.collect2(): called for datapath['filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.file.file_path'], scope: new and filter_artifacts: [] Mar 25, 13:52:55 : phantom.get_run_data() called for key filtered-data:filter_2:condition_1 Mar 25, 13:52:55 : phantom.collect2(): Classified datapaths as [<DatapathClassification.NAMED_FILTERED_ACTION_RESULT: 9>] Mar 25, 13:52:55 : phantom.collect2(): called for datapath['filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.file.identity.sha256'], scope: new and filter_artifacts: [] Mar 25, 13:52:55 : phantom.get_run_data() called for key filtered-data:filter_2:condition_1 Mar 25, 13:52:55 : phantom.collect2(): Classified datapaths as [<DatapathClassification.NAMED_FILTERED_ACTION_RESULT: 9>] Mar 25, 13:52:55 : phantom.collect2(): called for datapath['filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.file.parent.file_name'], scope: new and filter_artifacts: [] Mar 25, 13:52:55 : phantom.get_run_data() called for key filtered-data:filter_2:condition_1 Mar 25, 13:52:55 : phantom.collect2(): Classified datapaths as [<DatapathClassification.NAMED_FILTERED_ACTION_RESULT: 9>] Mar 25, 13:52:55 : phantom.collect2(): called for datapath['filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.detection'], scope: new and filter_artifacts: [] Mar 25, 13:52:55 : phantom.get_run_data() called for key filtered-data:filter_2:condition_1 Mar 25, 13:52:56 : phantom.collect2(): Classified datapaths as [<DatapathClassification.NAMED_FILTERED_ACTION_RESULT: 9>] Mar 25, 13:52:56 : save_run_data() saving 136.29 KB with key format_3:formatted_data_ Mar 25, 13:52:56 : save_run_data() saving 140.23 KB with key format_3__as_list:formatted_data_   there are 9 malicious files and it looks like that's what it's saying in the debugger, so again it seems like it's using the filtered data correctly.   But my note always has 500 items in it.  
I'm not sure what I'm doing wrong. Can anyone offer any help? I'm stuck. Thanks.
Hi all, I was wondering if anyone could help with what is hopefully a simple question. I have a dashboard that powers a report that sends a PDF to a number of individuals via email. We're looking to extract some further data, so if I simply edit the existing dashboard with a few more searches, will that be reflected in the report?

Cheers,
Good morning, I hope you can help me. We maintain a Splunk Enterprise infrastructure with a SIEM, and we need to forward the security events to Elastic and Kafka. I would like to know how I can forward the events, and whether this will consume license.
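A rough sketch of one common approach, with hypothetical hostnames and ports: add a second tcpout group in outputs.conf on the forwarding tier with sendCookedData = false, so Splunk sends raw (uncooked) data over plain TCP that a Logstash or Kafka TCP listener can receive.

[tcpout]
defaultGroup = primary_indexers

[tcpout:third_party_raw]
server = logstash.example.com:5514
sendCookedData = false

To send only a subset of events (for example just the security sourcetypes) rather than everything, this would be combined with props.conf/transforms.conf routing via _TCP_ROUTING, or a syslog output group could be used instead.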
Hello Splunk community, I have this query, but I would also like to retrieve the index to which each sourcetype belongs:

index=_internal splunk_server=* source=*splunkd.log* sourcetype=splunkd (component=AggregatorMiningProcessor OR component=LineBreakingProcessor OR component=DateParserVerbose OR component=MetricSchemaProcessor OR component=MetricsProcessor) (log_level=WARN OR log_level=ERROR OR log_level=FATAL)
| rex field=event_message "\d*\|(?<st>[\w\d:-]*)\|\d*"
| eval data_sourcetype=coalesce(data_sourcetype, st)
| rename data_sourcetype as sourcetype
| table sourcetype event_message component thread_name _time _raw
| stats first(event_message) as event_message by sourcetype component

Any ideas? Thanks in advance.
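One possible way to bolt the index on, sketched under the assumption that each sourcetype lives in a single index (if a sourcetype spans several indexes, join keeps only the first match): build a sourcetype-to-index map with a tstats subsearch and join it onto the results.

... | stats first(event_message) as event_message by sourcetype component
| join type=left sourcetype
    [| tstats count where index=* by index, sourcetype
     | fields index, sourcetype]
| table index sourcetype component event_message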
Hello, I'm facing a problem with my lookup command. Here is the context. I have one CSV:

pattern    type
*ABC*      1
*DEF*      2
*xxx*      3

And logs with a "url" field, e.g. "xxxxabcxxxxx.google.com". I need to check which of the patterns in my lookup are present in the url field of my logs and, if any match, how many of them match. My expected result is:

url                        type    count(type)
xxxxabcxxxxx.google.com    1 3     2

How can I do this?
- The "| lookup" command doesn't take the "*" symbol into account, only space or comma with the "WILDCARD" config.
- The "| inputlookup" command works, but it can't display the field "type" because it only exists in my CSV, so I can't count either.

Thanks for your answers
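A sketch of the usual setup for this, with hypothetical stanza and file names (pattern_lookup, patterns.csv, your_index): define the lookup in transforms.conf with a wildcard match type on the pattern column, then let a single lookup call return every matching row as a multivalue field and count it.

transforms.conf:

[pattern_lookup]
filename = patterns.csv
match_type = WILDCARD(pattern)
max_matches = 100
case_sensitive_match = false

Search:

index=your_index
| lookup pattern_lookup pattern AS url OUTPUT type
| eval match_count = mvcount(type)
| table url type match_count

With max_matches above 1, every pattern that matches the url contributes a value to type, so mvcount(type) gives the number of matching patterns (2 in your example). The same match type can also be set in the UI under Lookup definitions > Advanced options.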
Hello expert Splunk community, I am struggling with a JSON extraction and need help/advice on how to do this operation.

Data sample:

[ { "orderTypesTotal": [ { "orderType": "Purchase", "totalFailedTransactions": 0, "totalSuccessfulTransactions": 0, "totalTransactions": 0 }, { "orderType": "Sell", "totalFailedTransactions": 0, "totalSuccessfulTransactions": 0, "totalTransactions": 0 }, { "orderType": "Cancel", "totalFailedTransactions": 0, "totalSuccessfulTransactions": 1, "totalTransactions": 1 } ], "totalTransactions": [ { "totalFailedTransactions": 0, "totalSuccessfulTransactions": 1, "totalTransactions": 1 } ] } ]

[ { "orderTypesTotal": [ { "orderType": "Purchase", "totalFailedTransactions": 10, "totalSuccessfulTransactions": 2, "totalTransactions": 12 }, { "orderType": "Sell", "totalFailedTransactions": 1, "totalSuccessfulTransactions": 2, "totalTransactions": 3 }, { "orderType": "Cancel", "totalFailedTransactions": 0, "totalSuccessfulTransactions": 1, "totalTransactions": 1 } ], "totalTransactions": [ { "totalFailedTransactions": 11, "totalSuccessfulTransactions": 5, "totalTransactions": 16 } ] } ]

The above events come inside a field in the _raw events. Using json(field) I have validated that this is valid JSON.

Use case: I need to total the different order types' totalFailedTransactions, totalSuccessfulTransactions, and totalTransactions numbers into a table:

          totalFailedTransactions    totalSuccessfulTransactions    totalTransactions
Purchase  10                         2                              12
Sell      1                          2                              3
Cancel    0                          2                              2

Thanks in advance!
Sam
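A minimal sketch of one way to do this, assuming the JSON lives in a field called json_field (swap in your real field name):

... | spath input=json_field path={}.orderTypesTotal{} output=orders
| mvexpand orders
| eval orderType=spath(orders, "orderType"), failed=spath(orders, "totalFailedTransactions"), success=spath(orders, "totalSuccessfulTransactions"), total=spath(orders, "totalTransactions")
| stats sum(failed) as totalFailedTransactions, sum(success) as totalSuccessfulTransactions, sum(total) as totalTransactions by orderType

spath pulls each element of the orderTypesTotal array out as a multivalue field, mvexpand makes one row per order type per event, and the final stats sums across the events per order type (Purchase 10/2/12, Sell 1/2/3, Cancel 0/2/2).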
Hi, I need to find errors/exceptions raised within a time window and, based on the request_id field present in every log row, fetch the relevant logs in Splunk for that request_id and send a link to them to a Slack channel. I am able to fetch all the errors/exceptions within the time window and send them to Slack, but I am not able to generate a link to the relevant logs for the request_id attached to each error/exception, since the request_id is dynamic. I am new to Splunk, so I would like to understand whether this is possible. If yes, could you please share relevant documentation so I can understand it better? Thank you so much.
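One possible sketch, with hypothetical host and index names (splunk.example.com, app_logs): build the drilldown URL per result with eval, then reference it in the Slack alert action.

... your error/exception search ...
| stats latest(_time) as last_seen by request_id
| eval search_link = "https://splunk.example.com:8000/en-US/app/search/search?q=search%20index%3Dapp_logs%20request_id%3D" . request_id

The %20 and %3D are just URL-encoded space and equals characters, so each link opens a search like index=app_logs request_id=<value>. In the Slack alert action's message you can then reference the field with a result token such as $result.search_link$.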
Hi! I'm filtering data from a number of hosts looking for downtime durations. I get a "forensic" view with this search string:

index=myindex host=*
| rex "to state\s(?<mystate>.*)"
| search mystate="DOWN " OR mystate="UP "
| transaction by host startswith=mystate="DOWN " endswith=mystate="*UP "
| table host, duration, _time
| sort by duration
| reverse

...where I rex for the specific pattern "to state " (a host transitioning into another state, in this example "DOWN" or "UP"). I had to do another "search" to keep only those states, as there are more states than DOWN/UP (due to my anonymization of the data). I can then retrieve the duration between transitions using "duration" and sort it as I please.

My question: I'd like to look into ongoing, "at-this-moment-active" hosts in state "DOWN", i.e. replace "endswith" with a nominal time value ("NOW"), so that where there has not yet been any "endswith" match, the duration is counted from "startswith" to the present moment. Any tips on how I can formulate that properly?
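A sketch of one way to get the currently-down hosts without transaction, assuming the same "to state" pattern as above (the \w+ capture avoids the trailing-space issue; otherwise add a trim()): take the latest state change per host and measure from there to now.

index=myindex host=* "to state"
| rex "to state\s(?<mystate>\w+)"
| search mystate="DOWN" OR mystate="UP"
| stats latest(mystate) as current_state, latest(_time) as last_change by host
| where current_state="DOWN"
| eval duration_sec = now() - last_change
| eval duration = tostring(duration_sec, "duration")
| sort - duration_sec

This lists every host whose most recent transition was to DOWN, with the elapsed time since that transition. An alternative is to keep the transaction approach, add keepevicted=true, and filter on closed_txn=0 to see the still-open transactions.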
We have Splunk Enterprise installed in 6 different regions (APAC/AUS/LATAM/EMEA/NA/LA) worldwide, and we are now looking for a feasibility check on implementing:
a. A single triage dashboard which can be deployed in one region and access data coming from all 6 regions.
b. I understand that, with the current setup, we can't access Splunk data from one region in another region. However, is there a possibility, through API calls or any other method, to access another region's Splunk data?
Kindly assist us on this topic if anybody can help.
Hi Splunk experts, I have some data coming into Splunk in the following format:

[{"columns":[{"text":"id","type":"string"},{"text":"event","type":"number"},{"text":"delays","type":"number"},{"text":"drops","type":"number"}],"rows":[["BM0077",35602782,3043.01,0],["BM1604",2920978,4959.1,2],["BM1612",2141607,5623.3,6],["BM2870",41825122,2545.34,7],["BM1834",74963092,2409.0,8],["BM0267",86497692,1804.55,44],["BM059",1630092,5684.5,0]],"type":"table"}]

I tried to extract each field so that each value corresponds to id, event, delays, and drops as a table, using the command below:

index=result
| rex field=_raw max_match=0 "\[\"(?<id>[^\"]+)\",\s*(?<event>\d+),\s*(?<delays>\d+\.\d+),\s*(?<drops>\d+)"
| table id event delays drops

I get the result in table format; however, it comes out as one whole table rather than individual entries, and I cannot manipulate the result. I have tried using mvexpand, but it only works on one field, so that has not helped either.

Does anyone know how we can properly get this as a table in Splunk?
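A minimal sketch of the usual mvzip + mvexpand trick for keeping the multivalue fields aligned, reusing the rex from the question:

index=result
| rex field=_raw max_match=0 "\[\"(?<id>[^\"]+)\",\s*(?<event>\d+),\s*(?<delays>[\d.]+),\s*(?<drops>\d+)\]"
| eval zipped = mvzip(mvzip(mvzip(id, event, "|"), delays, "|"), drops, "|")
| mvexpand zipped
| eval parts = split(zipped, "|")
| eval id = mvindex(parts, 0), event = mvindex(parts, 1), delays = mvindex(parts, 2), drops = mvindex(parts, 3)
| table id event delays drops

mvzip glues the four multivalue fields together value by value, mvexpand then produces one event per row of the original table, and split/mvindex pull the columns back apart so each entry can be manipulated individually.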