Hi all, as part of complete monitoring for our customers, we want to detect when a subsearch is finalized. If this happens, the search results of the corresponding main search could be incomplete. This would be especially helpful for scheduled searches, where no user gets any feedback. Is there any log channel where this is logged? I tried starting Splunk in debug mode but found nothing. The only place is the search logs, but indexing every search log seems a bit oversized. Anyone out there who could provide a hint? Thanks!
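One hedged starting point, assuming the finalized state you care about is exposed on the search job itself: the jobs REST endpoint reports an isFinalized flag per job, which can be checked on a schedule. Note this reflects the main job's state; a finalized subsearch may only surface in that job's search.log.

| rest /services/search/jobs
| search isFinalized=1
| table sid, label, isFinalized, doneProgress

Running this as its own scheduled search would flag finalized jobs without indexing every search.log.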
Good morning, community. I want to generate an alert in Splunk based on some graphs that are generated from a .TXT file. I only need the last two values generated in that file, so I can apply a formula that checks whether the latest value drops by 10% of its measurement. When I query the TXT file, it displays a list like this in the events:

2022-7-1 11:00:0 OVERALL: 10000
2022-7-1 12:00:0 OVERALL: 11000

I just need to get the last numeric value and the penultimate numeric value registered in the list, put them into variables, and compare the two values to see if there is a difference of more than 10%. If you have had a similar case, please share the solution.
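A minimal sketch, assuming the events carry an "OVERALL: <number>" string; your_index and your_txt_sourcetype are placeholders:

index=your_index sourcetype=your_txt_sourcetype "OVERALL:"
| rex "OVERALL:\s*(?<overall>\d+)"
| sort - _time
| head 2
| stats earliest(overall) as previous, latest(overall) as current
| eval pct_change = round((current - previous) / previous * 100, 2)
| where pct_change <= -10

The head 2 keeps only the two most recent events, and the alert can then trigger on "number of results > 0".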
Hi, in Splunk, is a custom visualization just any visualization modified by the user, or is it a visualization that specifically makes use of the Splunk Custom Visualization API? I am confused.
Hi, In Splunk, are saved searches and reports the same thing? Thanks, Patrick
Hello, I need help. I am looking for an SPL query or app that visualizes log sources that have not sent logs to the SIEM within the last 24 hours. Can anyone assist? Thank you in advance!
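A common sketch for this, assuming "log source" maps to index/sourcetype pairs (adjust the split-by fields and the index scope to your environment):

| tstats latest(_time) as last_seen where index=* by index, sourcetype
| eval hours_since = round((now() - last_seen) / 3600, 1)
| where hours_since > 24
| sort - hours_since

Run it over a window longer than 24 hours (for example the last 7 days) so sources that stopped recently still show up.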
I have an index, an_index, and there's a field with URLs in the form URL/folder/folder. I only want to list the records that contain a specific URL. I don't care about anything after the URL; I just want to match the URL.
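A minimal sketch, assuming the field is named url (a placeholder; substitute your extracted field name). A trailing wildcard ignores everything after the URL itself:

index=an_index url="https://specific.example.com/*"

If the match needs to be stricter, an equivalent regex form:

index=an_index
| where match(url, "^https://specific\.example\.com/")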
Hi folks, I have a question. Consider a simple architecture with 1 search head and 1 indexer. I need to move the single indexer to a clustered indexer. I know I need to add another twin indexer server and a cluster manager, but my question is: when I transform the single indexer into a clustered indexer, will I lose all my old data, or will the old data replicate to the new indexer? How do I manage indexes.conf in that case?
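For reference, a minimal indexes.conf sketch for the cluster peers; repFactor = auto enables replication, but only for buckets created after the peer joins the cluster, so pre-existing standalone buckets stay searchable on the original indexer without being retroactively replicated. The index name is a placeholder:

# indexes.conf pushed to peers via the cluster manager
[default]
repFactor = auto

[your_index]
homePath   = $SPLUNK_DB/your_index/db
coldPath   = $SPLUNK_DB/your_index/colddb
thawedPath = $SPLUNK_DB/your_index/thaweddb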
Hi, I have 5 multiselect input fields in a dashboard, as below: Environment, field2, field3, field4, field5. After selecting a value in the Environment field, the dashboard immediately starts searching. But our requirement is that the dashboard should not start searching until Environment has values and at least one of the remaining 4 fields (field2 OR field3 OR field4 OR field5) has values. Kindly help me on how to pass tokens to achieve this result. Thanks in advance!
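One hedged Simple XML pattern: gate the panel on a derived token that is only set when both conditions hold. Token and input names here are placeholders, and each of field2-field5 would need the same change handler:

<input type="multiselect" token="field2_tok" searchWhenChanged="true">
  ...
  <change>
    <!-- only arm the panel when Environment plus this field have values -->
    <condition match="isnotnull($env_tok$) AND isnotnull($field2_tok$)">
      <set token="ready">true</set>
    </condition>
    <condition>
      <unset token="ready"></unset>
    </condition>
  </change>
</input>

<panel depends="$ready$">
  ...
</panel>

Because the panel's searches depend on $ready$, they will not dispatch until the token is set.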
Can someone help me pull out these data points: cw.pptx;text.html;text.txt? I need to split them at the ; mark but have them in their own fields, so I can see only the .txt files in my search.
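A minimal sketch, assuming the semicolon-delimited string lives in a field called attachments (a placeholder name):

... | makemv delim=";" attachments
| mvexpand attachments
| search attachments="*.txt"

makemv turns the string into a multivalue field, mvexpand gives each file its own event, and the final search keeps only the .txt entries.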
Hello, I'm trying to find an efficient way to take the results from a dbxlookup, where a single userID can bring back more than one record, into multiple-row output. Example: I have a list of 10 userIDs and run a dbxlookup against a DB containing login/logout times. I want to see how many times each userID logged in/out, as well as their first login/logout of the month and their most recent login/logout. I will supply my SPL shortly, but I wanted to see if anyone has experienced this issue in the past and has a solution. Thanks and God bless, Genesius
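A hedged sketch of the aggregation step, assuming the lookup returns a multivalue login_time field per userID (field names are placeholders and the times are assumed to sort correctly as strings or epochs):

... | mvexpand login_time
| stats count as login_count,
        min(login_time) as first_login,
        max(login_time) as last_login
        by userID

The same pattern applies to a logout_time field.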
Hello. I've been looking around a bit, but it appears my google-fu isn't up to snuff for this problem. I'm wondering how one can parse non-pure XML logs. We have a ".txt" file with timestamps, debug messages, and all that jazz, but it also writes entire XML responses into the log file. I don't know if the source log is able to separate them out (I doubt it), but the question goes as follows. I'm aware there is an XML parser for props.conf, but does that work on intermittent XML-formatted log entries?

<date>,source,debug: event happened
<date>,source,debug: function executed
<date>,source,debug: send data to URI
<date>,source,debug: multiline XML response
<date>,source,log message

The XML response can sometimes be longer than 1000 lines, which is where the issue arises, as we get an error due to the log entry being more than 1000 lines long. It's all the XML tags, <meta> </meta> <link> </link> and the like. Does props support something along the lines of "if XML, do XML parsing, else do normal props things", or do I have to parse all the XML response segments manually?
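A hedged props.conf sketch: the line-count error usually comes from line merging, not from XML handling, so breaking events directly on the timestamp and raising the byte cap keeps the multiline XML inside one event. The sourcetype name is a placeholder, and the timestamp regex assumes an ISO-style date; adjust it to whatever your <date> prefix really looks like:

[your_txt_sourcetype]
SHOULD_LINEMERGE = false
# start a new event only at lines beginning with a timestamp,
# so the multiline XML stays attached to the preceding entry
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2})
# raise the per-event byte limit so long XML responses are not cut off
TRUNCATE = 500000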
Hello, I have an alert that outputs a CSV file that looks like this:

Person     Number_of_login   Login_fail
Person A   1
Person B   6                 2
Person C   4                 1

I run this search every day and append to the CSV file. After a few days it looks like this:

Person     Number_of_login   Login_fail
Person A   1
Person B   6                 2
Person C   4                 1
Person A   2                 1
Person B   7                 1
Person C   4
...

Now, at the end of the month, I want to calculate the total number of logins and the number of login failures for each person, and also the percentage of login failures for each one. As far as I know, addcoltotals doesn't have a "by" clause like stats does (like stats count by Person). I want my final table to look like this:

Person     Number_of_login   Login_fail
Person A   3                 1
Person B   13                3
Person C   8                 1

So how do I go about this?
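A minimal sketch, assuming the accumulated file is readable as a lookup named monthly_logins.csv (a placeholder): stats sum(...) by Person does the per-person totalling that addcoltotals cannot, and an eval adds the failure percentage.

| inputlookup monthly_logins.csv
| fillnull value=0 Login_fail
| stats sum(Number_of_login) as Number_of_login, sum(Login_fail) as Login_fail by Person
| eval fail_pct = round(Login_fail / Number_of_login * 100, 1)

The fillnull step keeps persons with no recorded failures from showing an empty total.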
Hi, I want to extract the judgments into a field for "37.0.10.15" and "47.105.153.104". Is there any way to do that?

{"data":{"37.0.10.15":{"severity":"medium","judgments":["Scanner","Zombie","Spam"],"tags_classes":[],"basic":{"carrier":"Delis LLC","location":{"country":"The Netherlands","province":"Zuid-Holland","city":"Brielle","lng":"4.16361","lat":"51.90248","country_code":"NL"}},"asn":{},"scene":"","confidence_level":"high","is_malicious":true,"update_time":"2022-06-20 13:00:09"},"47.105.153.104":{"severity":"high","judgments":["Zombie","IDC","Exploit","Spam"],"tags_classes":[{"tags":["Aliyun"],"tags_type":"public_info"}],"basic":{"carrier":"Alibaba Cloud","location":{"country":"China","province":"Shandong","city":"Qingdao City","lng":"120.372878","lat":"36.098733","country_code":"CN"}},"asn":{"rank":2,"info":"CNNIC-ALIBABA-CN-NET-AP Hangzhou Alibaba Advertising Co.,Ltd., CN","number":37963},"scene":"Hosting","confidence_level":"high","is_malicious":true,"update_time":"2022-06-27 21:11:32"}},"response_code":0,"verbose_msg":"OK"}
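Because the IP addresses are JSON keys that contain dots, spath paths get awkward here; a hedged rex sketch instead, which relies on the severity/judgments field order shown in the sample:

... | rex max_match=0 "\"(?<ip>\d{1,3}(?:\.\d{1,3}){3})\":\{\"severity\":\"[^\"]*\",\"judgments\":\[(?<judgments>[^\]]*)\]"
| eval pairs = mvzip(ip, judgments, " -> ")
| mvexpand pairs

The mvzip/mvexpand steps keep each IP associated with its own judgments list; if the JSON field order ever changes, the regex would need adjusting.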
Hi all, I have 2 dropdowns. The first one has manually filled values, and the second one is automatically generated using the value selected in the first dropdown. When I select a value from the first dropdown and then select a value from the 2nd dropdown, this works fine. But if I then select another value from the first dropdown, the 2nd dropdown does not reset to "all"; instead it keeps the value selected at the beginning. I explored and made the changes I could find, but I am not seeing what is missing here. Can anyone help me with this?
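A common Simple XML pattern, assuming token names first_tok and second_tok (placeholders): unsetting the form.-prefixed token in the first dropdown's change handler pushes the second dropdown back to its default ("all") whenever the first selection changes.

<input type="dropdown" token="first_tok" searchWhenChanged="true">
  ...
  <change>
    <!-- reset the dependent dropdown on every change of the first one -->
    <unset token="form.second_tok"></unset>
  </change>
</input>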
My dashboard is taking a long time to load, and it has 17 apps in one panel. I have also tried creating a token such as <set token="app_names">app1,app2</set> instead of using a wildcard, but the query is not returning any results. Code snippet:

<init>
  <set token="app_names">app1,app2,app3................</set>
</init>
......
<row>
  <panel>
    <title>Dashboard</title>
    <table>
      <search>
        <query>index=idx ns=xyz* app_name=$app_names$ pod_container=xyz* (" ERROR " OR " WARN ")
| stats count by app_name
| append [| stats count | eval app_name="app1,app2,app3........" | table app_name | makemv app_name delim="," | mvexpand app_name]
.......

Please advise!
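One likely cause: app_name=$app_names$ expands to app_name=app1,app2,... which matches no events. A hedged sketch using IN() with a quoted, comma-separated token (the app names are placeholders):

<init>
  <set token="app_names">"app1","app2","app3"</set>
</init>
...
<query>index=idx ns=xyz* app_name IN ($app_names$) pod_container=xyz* (" ERROR " OR " WARN ")
| stats count by app_name</query>

Narrowing the base search this way should also help the load time, since only the 17 apps are scanned rather than a wildcard.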
I have close to 200 inputs configured on the Splunk TA for MS cloud services on a HF, along with other TAs that are also pulling data from other sources, but the TA for MS cloud services makes up the majority of the inputs on this HF. The issue I am facing at the moment is that, owing to the huge number of pull-based data ingestions, my HF CPU is frequently maxing out, thereby leading to ingestion delays for all associated data sources. The Splunk docs here suggest "Increase intervals in proportion to the number of inputs you have configured in your deployment". My guess is that since all the inputs are configured with interval=300, all of them might be trying to fetch data at the same time. Options available for the fix are:

1. Change the interval setting on each stanza in inputs.conf by 1-10 secs, which should make them run at different times.
2. Reduce the number of inputs on this overutilised HF and move them to another HF.

Is there any other option that I can implement to remediate this situation? Also, for option 1, is there a way I can timechart (or plot) the count of inputs over time to see the trend of the inputs being triggered over, say, 24 hrs? Regarding option 2, this might result in double ingestion of data; I would be interested to know how I might go forward so as to minimize dual ingestion.
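For the timechart question, a hedged sketch over the HF's internal logs: the ExecProcessor component logs modular-input launches, though the exact message text is an assumption and may vary by TA and Splunk version, so verify it against your _internal data first:

index=_internal host=your_hf sourcetype=splunkd component=ExecProcessor "New scheduled exec process"
| timechart span=5m count

host=your_hf is a placeholder for the heavy forwarder's hostname.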
I made a drilldown from a pie chart to a scatter chart.

Pie chart:

<drilldown>
  <set token="BU">$row.BU$</set>
</drilldown>

Search of the scatter chart:

<search>
  <query>index="indexA" | search BU=$BU$ | eval XXX</query>
  <earliest>$field1.earliest$</earliest>
  <latest>$field1.latest$</latest>
</search>

Question: Can I set an initial or default value for this token? The trouble is that the scatter chart does not appear until you click something on the pie chart.
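A minimal sketch, assuming a wildcard default is acceptable for BU: an <init> block sets the token when the dashboard loads, so the scatter chart renders before any pie slice is clicked.

<form>
  <init>
    <!-- default value so BU=$BU$ matches everything until a slice is clicked -->
    <set token="BU">*</set>
  </init>
  ...
</form>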
Hi, for context: I'm developing a custom reporting command that runs a Python ML script over the results of a search and yields a prediction over all the records in the search result. I've followed the template in https://github.com/splunk/splunk-app-examples/tree/master/custom_search_commands/python/reportingsearchcommands_app to create such a command. However, running this reportingcsc command (in my case, the command is `source="account202203.csv" | table date, new | reportingcsc cutoff=10 new`) over my dataset yields more than one statistic, while it's quite clear that the reduce method of this custom command should only yield one statistic over the search result.

def reduce(self, records):
    """Return the count of students whose total marks exceed the cutoff."""
    pass_student_cnt = 0
    for record in records:
        value = float(record["totalMarks"])
        if value >= float(self.cutoff):
            pass_student_cnt += 1
    # a single aggregate row over the whole record stream
    yield {"student having total marks greater than cutoff": pass_student_cnt}

It looks like the search result is binned into different chunks, and each chunk goes through the reduce() method once. However, the search result isn't actually binned: if I simply do `source="account202203.csv" | table date, new | stats sum(new)`, it yields only one statistic. Any suggestion on how to do this the right way would be greatly appreciated!
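A hedged commands.conf sketch: one plausible cause is preview, where reduce() is invoked repeatedly on partial chunks while the search is still running, so each invocation emits its own row. Disabling run_in_preview is worth trying; this is an assumption rather than a confirmed fix, and the stanza name and filename simply mirror the example app:

# commands.conf in the app's local/ directory
[reportingcsc]
filename = reportingcsc.py
chunked = true
# avoid invoking reduce() on partial preview chunks
run_in_preview = false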
I see lots of suggestions in the Community for Linux but not Windows. Has anyone resolved this on a production Windows 2016 Server? Three errors appear when logging in to Splunk following the upgrade to 9.0:

Failed to start the KV Store. See mongod.log and splunkd.log for details.
KV Store changed status to failed. KV Store process terminated.
KV Store process terminated abnormally (exit code 1, status exited with code 1). See mongod.log and splunkd.log for details.
I would like to know if Splunk has any documentation that shows some pre-created rules, like those of Elastic, for example. Below is the reference: https://www.elastic.co/guide/en/security/current/prebuilt-rules.html. I want to create some rules for firewall, antivirus, and Office 365. Thanks