All Topics

How can I search so that I get not only the filtered messages but also a couple of messages around them?
Hello, I am a student conducting research related to the MLTK app of Splunk. One of the topics of my work is to explore and attempt to apply the same model as "Predict the Presence of Malware", one of the sample examples in MLTK. I would like to learn more about how the data for this model was collected, such as the firewall used, the operating system, and other relevant details, so that I can reproduce it on my own machine and collect the data as well. As I am new to the security field, any additional information would be greatly appreciated.

Additionally, I have been able to retrieve some of the fields used in the model, such as src_ip, src_port, session_id, serial_number, receive_time, packets_sent, has_known_vulnerability, dst_ip, dest_port, bytes_sent, and bytes_received. However, I am unsure how to obtain the packets_received field. Any guidance or assistance on retrieving this particular field would be highly valuable. Thank you.
Dear all,

In the environment there are two affiliates/plants with 5 machines each (10 in total: 8 endpoints and 2 McAfee servers). Each site's McAfee server is integrated with our Splunk, which tells us about the latest DAT file on the endpoints. The environment is dynamic, so new endpoints will be added and old ones removed. I want two things to be monitored in the most efficient way through lookups (my real environment is huge):

1) I want to create a lookup that keeps track of all the endpoints along with their last report date. Kindly help me with both queries, i.e. one for creating the lookup and another for comparing it with real-time data.

2) I want to monitor the latest DAT file updated on each endpoint with the help of a lookup. For this I will use one lookup that keeps track of the latest DAT version among the machines in the environment and stores it, e.g. the value 10, and then compare that value against real-time data in a query. Please help me with sample queries; a rough sketch of the idea follows below.

I have McAfee endpoint data coming in.
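A minimal sketch of the lookup idea. The index name, dest (endpoint host) and dat_version are all placeholders for whatever the McAfee add-on actually extracts in your environment:

    ``` scheduled search: (re)build the tracking lookup ```
    index=mcafee
    | stats latest(_time) as last_report, latest(dat_version) as dat_version by dest
    | outputlookup endpoint_dat_tracker.csv

    ``` comparison: flag endpoints that went quiet or lag behind the newest DAT ```
    | inputlookup endpoint_dat_tracker.csv
    | eventstats max(dat_version) as newest_dat
    | eval days_since_report=round((now() - last_report) / 86400, 1)
    | where days_since_report > 1 OR dat_version < newest_dat

If the first search runs over a short window, merging in the previous lookup first (| inputlookup append=true endpoint_dat_tracker.csv followed by a final stats by dest) keeps endpoints that have stopped reporting from silently dropping out of the lookup.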
I need to write a Python script to install an app/add-on on a remote Splunk search head. The app file (".spl" or ".tgz") is on the server that hosts my Python script. As far as I understand from all the errors that I got and from the documentation, both services/apps/local and services/apps/appinstall either search for a local file or Splunkbase, but never a file on a remote server.
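For reference, once the package has been copied onto the search head's own filesystem (e.g. over scp), a plain REST call to apps/local can install it. A sketch in Python, assuming the standard management port 8089; the host, credentials, and file path are placeholders:

    import requests

    # Install an app from a package that is local to the Splunk server's filesystem.
    # /tmp/myapp.spl must already exist on the search head, which matches the
    # limitation described above.
    resp = requests.post(
        "https://searchhead.example.com:8089/services/apps/local",
        auth=("admin", "changeme"),
        data={"name": "/tmp/myapp.spl", "filename": "true", "update": "true"},
        verify=False,  # lab setting; use proper certificates in production
    )
    resp.raise_for_status()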
Hello Splunk,

I'm trying to compare the exceptions between two time ranges and get the list of new exceptions. Suppose I'm searching for the 3-4am stats of exceptions and the 7-8am stats of exceptions. I should compare these two sets and list which new exceptions occurred at 7-8am.

Example:

3-4am stats    7-8am stats
Ex1            Ex3
Ex2            Ex1
Ex3            Ex5
Ex4            Ex6

Result: Ex5, Ex6

Thanks in advance.
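One way to express the comparison, as a sketch (the index, the time bounds, and the exception field name are all placeholders):

    index=app_logs earliest=@d latest=@d+8h
    ``` tag each event with the window it falls into ```
    | eval window=case(date_hour>=3 AND date_hour<4, "early", date_hour>=7 AND date_hour<8, "late")
    | where isnotnull(window)
    ``` keep exceptions seen in the late window but never in the early one ```
    | stats values(window) as windows by exception
    | where isnull(mvfind(windows, "early"))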
Hello. How can I extract and count personal email addresses? Say the destination email field (d-email) contains values like these:

abc.work-email.com
cef.work-email.com
a.gmail.com
b.hotmail.com

I want to see how many emails were sent to Gmail/Hotmail or any other personal account.
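A sketch of one approach, assuming the sample values above are representative (the index and the list of personal providers are placeholders):

    index=mail_logs
    | rename d-email as d_email
    ``` take the last two dot-separated parts as the mail domain ```
    | rex field=d_email "\.(?<domain>[^.]+\.[^.]+)$"
    | eval account_type=if(match(domain, "(?i)^(gmail|hotmail|yahoo|outlook)\."), "personal", "work")
    | stats count by account_type, domain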
The search query below is showing only the roles for the currently logged-in user. But this is not what we are looking for; we need the list of users with logmon_app* roles.

    | rest /services/authorization/roles/
    | search title="logmon_app*"
    | table title
    | rename title as roles
    | join type=left roles max=0
        [| rest /services/authentication/users
         | table roles, title
         | rename title as userName
         | mvexpand roles
         | search roles="logmon_app*"]
    | stats values(userName) as username by roles
    | eval rolepresent="yes"
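A simpler sketch that starts from the users endpoint instead of a join (count=0 lifts the default row limit; note that what the REST call returns still depends on the capabilities of the account running the search, which may be the underlying issue here):

    | rest /services/authentication/users count=0
    | fields title, roles
    | mvexpand roles
    | search roles="logmon_app*"
    | stats values(title) as users by roles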
Hey guys, first post, new user. I have seen examples of how to implement a drilldown passing a time value from one dashboard to another, but how do I do it in Dashboard Studio, since it does not use XML code?

Thank you.
    index=web sourcetype=access_combined
    | transaction _time, clientip, JSESSIONID, action

How do I modify my search to display the transactions in a table for the above SPL?
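A sketch (transaction emits duration and eventcount automatically; _time is left out of the grouping here because grouping on the raw timestamp tends to put every event in its own transaction):

    index=web sourcetype=access_combined
    | transaction clientip, JSESSIONID, action
    | table _time, clientip, JSESSIONID, action, duration, eventcount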
In a recent "Splunk Enterprise 9.0 Data Administration" class, the documentation says that Ingest Actions should be implemented on a Deployment Server.  Am I correct that this only refers to Ingest A... See more...
In a recent "Splunk Enterprise 9.0 Data Administration" class, the documentation says that Ingest Actions should be implemented on a Deployment Server.  Am I correct that this only refers to Ingest Actions defined for a heavy forwarder and that if the Ingest Action is to be deployed on an indexer it should be defined on the cluster manager?
Hi Community,

We are looking for a basic server availability report for the servers where we have deployed, and we are also not able to find a metric to create a detector for server down. Can anyone suggest how we can create a basic availability report in percentage, and a detector for when a server is down?

Regards,
Eshwar
Hi, is there any feature or capability that exists in "Splunk Enterprise" but does not exist in "Splunk Security"? Any cheat sheet or comparison list? Thanks.
Hi Splunkers,

I am looking for a query to categorize timestamps into Morning, Afternoon, and Night. I'm using this to know how many items were seen for each category in a month:

0001-0800 = Morning
0801-1600 = Afternoon
1601-0000 = Night

Example output:

Date           Time    Category
02-May-2023    1043    Afternoon
02-May-2023    1932    Night
04-May-2023    0249    Morning
04-May-2023    0717    Morning
14-May-2023    2051    Night
20-May-2023    0534    Morning

Thank you very much.
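A sketch using case() over the HHMM value (the index is a placeholder; midnight 0000 falls into Night, matching the 1601-0000 range above):

    index=your_index
    | eval Time=strftime(_time, "%H%M")
    | eval t=tonumber(Time)
    | eval Category=case(t>=1 AND t<=800, "Morning", t>=801 AND t<=1600, "Afternoon", true(), "Night")
    | eval Date=strftime(_time, "%d-%b-%Y")
    | table Date, Time, Category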
I am starting with this query to show which types of products our top customers buy:

    ``` get all purchases ```
    index=customer_data action=purchase
    ``` narrow down to purchases from our top 100 customers (based on purchase count) ```
        [search index=customer_data action=purchase | top limit=100 user_id | table user_id]
    ``` show type of products that each user purchased ```
    | stats count, list(product_category) by user_id

This will give some output like this:

    user_id | values(product_category)
    ------------------------------------
    1234 | clothing, clothing, clothing, food
    2345 | electronics, electronics, food, food

I need one additional piece of information: the ratio of the product categories for each user. I need this output instead (the format of the distributions is not important):

    user_id | product_category_distribution
    ------------------------------------
    1234 | clothing=75%, food=25%
    2345 | electronics=50%, food=50%

The best idea I have thus far is something like this:

    ``` get all purchases ```
    index=customer_data action=purchase
    ``` narrow down to purchases from our top 100 customers (based on purchase count) ```
        [search index=customer_data action=purchase | top limit=100 user_id | table user_id]
    ``` show type of products that each user purchased ```
    | stats count, list(product_category) by user_id
    | eval product_category_distribution = ???
    | table user_id, product_category_distribution

but I can't find any such function ...
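A sketch that gets the percentages with a two-stage stats rather than a single function (same placeholder index and fields as above):

    index=customer_data action=purchase
        [search index=customer_data action=purchase | top limit=100 user_id | table user_id]
    ``` count purchases per user and category, then total per user ```
    | stats count by user_id, product_category
    | eventstats sum(count) as total by user_id
    ``` format each category as name=NN% and collapse back to one row per user ```
    | eval pair=product_category . "=" . round(100 * count / total, 0) . "%"
    | stats list(pair) as product_category_distribution by user_id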
Is it possible to forward different Splunk Add-on for AWS inputs to different indexer clusters? We have a heavy forwarder using Splunk_TA_aws with the standard defaultGroup=<clusterlabel>, indexerDiscovery, etc. configured in etc/system/local/outputs.conf. At present, all the AWS inputs are forwarded to indexes contained within the default indexer cluster. However, we now have an AWS input which we want to forward to an index in a different indexer cluster. Are there options within etc/apps/Splunk_TA_aws/local/ or other .conf files that would allow us to, say, add a second [tcpout:Cluster2] stanza to outputs.conf and then forward events from this new AWS input to it?
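For what it's worth, a sketch of the per-input routing idea using _TCP_ROUTING. All stanza and group names below are placeholders, and the AWS input stanza must match whatever the TA generated in its local/inputs.conf:

    # etc/system/local/outputs.conf: add a second target group
    [tcpout:Cluster2]
    indexerDiscovery = cluster2_discovery

    [indexer_discovery:cluster2_discovery]
    master_uri = https://cluster2-manager.example.com:8089
    pass4SymmKey = <key>

    # etc/apps/Splunk_TA_aws/local/inputs.conf: route just this one input
    [aws_sqs_based_s3://my_new_input]
    _TCP_ROUTING = Cluster2

Events from all other inputs should keep following defaultGroup as before.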
getting "no results found" ,but  i want the results day wise as zero Query1: |tstats count where index=applicationlogs sourcetype=app-logs by PREFIX(report:) _time |rename report: as Report |eval... See more...
getting "no results found" ,but  i want the results day wise as zero Query1: |tstats count where index=applicationlogs sourcetype=app-logs by PREFIX(report:) _time |rename report: as Report |eval Flaw=if(Report!="0", "Flaw", null()) |where Report!=0 |timechart span=1d max(count) as Flaw |fillnull value=0 Flaw Query2: index=applicationlogs sourcetype=app-logs |rex field= _raw "CCC\s\d+\:\d+\:\d+\,\d{3}\(\s+\)\s\-\s(?<status>.*)" |where isnotnull(status) |convert timeformat="%d-%m-%Y" ctime(_time) as date |stats count by date
We have indexes with a retention of a year each, and when looking at _audit it's pretty obvious that the queries against these indexes are mostly either daily or weekly. We would like to present to management the "waste" of storage (and maybe make a case for using cheaper storage for older data, i.e. data older than 3 months). Is anybody aware of a visualization that does this? We tried to combine the information from _audit with the retention information that we got via REST, but we don't have a cohesive view that would be appealing to management. Any ideas?
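As a starting point on the _audit side, a sketch that buckets completed searches by how far back they actually reach. It assumes search_et and search_lt (the epoch time bounds of each search) are populated in your audit events, which is worth verifying before trusting the numbers:

    index=_audit action=search info=completed search_et=* search_lt=*
    | where isnum(search_et) AND isnum(search_lt)
    | eval lookback_days=round((search_lt - search_et) / 86400, 1)
    | bin lookback_days bins=10
    | stats count by lookback_days
    | sort lookback_days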
Hi, I would like to extract fields from unstructured data that contain multiple labels, each followed by an HTML href tag.

Sample events:

    Change: <a href="https://xxyyzz.com/changes/12345">#12345</a> - Review: <a href="https://xxyyzz.com/reviews/7890">#7890</a>
    Change: <a href="https://xxyyzz.com/changes/1345">#1345</a> - Review: <a href="https://xxyyzz.com/reviews/7891">#7891</a>
    Review: <a href="https://zzyyyxxx/reviews/205657">205657</a>

I wish to get results for the above data as follows:

    change_url                          change    review_url                         review
    https://xxyyzz.com/changes/12345    #12345    https://xxyyzz.com/reviews/7890    #7890
    https://xxyyzz.com/changes/1345     #1345     https://xxyyzz.com/reviews/7891    #7891
                                                  https://zzyyyxxx/reviews/205657    205657

Can someone suggest how I can use rex to obtain the above fields?
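A sketch with one rex per label, assuming each label appears at most once per event:

    | rex "Change:\s*<a href=\"(?<change_url>[^\"]+)\">(?<change>[^<]+)</a>"
    | rex "Review:\s*<a href=\"(?<review_url>[^\"]+)\">(?<review>[^<]+)</a>"
    | table change_url, change, review_url, review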
Totally stuck with this query.
We are using Splunk Enterprise 9.0.1 and we are developing Simple XML dashboards. I'm currently using a mobile device with Android.

The problem is that we need to preload a few tokens with search results, and we can't make it work in the mobile dashboard. To achieve this we are running a search directly in the <dashboard> XML tag, just like a base search. This approach works for the web dashboards, but the mobile one does not load those tokens because the search never runs. The same thing happens when we run the search inside a hidden panel using the depends attribute for <panel>: it shows well on web, but it fails to run the search on mobile. We can't make this panel visible on both web and mobile, as it is just a table with values that we need to preload into tokens. Has anyone managed to work around this issue?

UPDATE 2023-06-05: The same thing happens with Dashboard Studio; loading tokens from searches only works for web dashboards.
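For reference, the pattern being described, a dashboard-level search whose done handler sets tokens, looks roughly like this in Simple XML (the query and token names are placeholders); it is exactly this that runs on web but not on mobile:

    <dashboard>
      <!-- runs once at load time and preloads a token from the first result -->
      <search>
        <query>index=config_data | head 1 | fields threshold</query>
        <done>
          <set token="threshold_tok">$result.threshold$</set>
        </done>
      </search>
      ...
    </dashboard>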