All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, I have an alert that outputs a csv file that looks like this:

Person     Number_of_login   Login_fail
Person A   1
Person B   6                 2
Person C   4                 1

I run this search every day and append to the csv file. After a few days it looks like this:

Person     Number_of_login   Login_fail
Person A   1
Person B   6                 2
Person C   4                 1
Person A   2                 1
Person B   7                 1
Person C   4

Now at the end of the month, I want to calculate the total number of logins and the total number of login failures for each person, and also the percentage of failed logins for each one. As far as I know, addcoltotals doesn't have a "by" clause like stats does (as in stats count by Person). I want my final table to look like this:

Person     Number_of_login   Login_fail
Person A   3                 1
Person B   13                3
Person C   8                 1

So how do I go about this?
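A stats-based sketch, assuming the accumulated csv is read back with inputlookup and the file name login_totals.csv is a placeholder:

| inputlookup login_totals.csv
| fillnull value=0 Login_fail
| stats sum(Number_of_login) as Number_of_login sum(Login_fail) as Login_fail by Person
| eval Fail_percent=round(Login_fail*100/Number_of_login, 2)

Unlike addcoltotals, stats sum(...) by Person produces one totals row per person, which matches the final table above.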
Hi, I want to extract judgments into a field from "37.0.10.15" and "47.105.153.104". Is there any way to do that?

{
  "data": {
    "37.0.10.15": {
      "severity": "medium",
      "judgments": ["Scanner", "Zombie", "Spam"],
      "tags_classes": [],
      "basic": {
        "carrier": "Delis LLC",
        "location": {"country": "The Netherlands", "province": "Zuid-Holland", "city": "Brielle", "lng": "4.16361", "lat": "51.90248", "country_code": "NL"}
      },
      "asn": {},
      "scene": "",
      "confidence_level": "high",
      "is_malicious": true,
      "update_time": "2022-06-20 13:00:09"
    },
    "47.105.153.104": {
      "severity": "high",
      "judgments": ["Zombie", "IDC", "Exploit", "Spam"],
      "tags_classes": [{"tags": ["Aliyun"], "tags_type": "public_info"}],
      "basic": {
        "carrier": "Alibaba Cloud",
        "location": {"country": "China", "province": "Shandong", "city": "Qingdao City", "lng": "120.372878", "lat": "36.098733", "country_code": "CN"}
      },
      "asn": {"rank": 2, "info": "CNNIC-ALIBABA-CN-NET-AP Hangzhou Alibaba Advertising Co.,Ltd., CN", "number": 37963},
      "scene": "Hosting",
      "confidence_level": "high",
      "is_malicious": true,
      "update_time": "2022-06-27 21:11:32"
    }
  },
  "response_code": 0,
  "verbose_msg": "OK"
}
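A sketch using spath auto-extraction plus a foreach wildcard, assuming the event is exactly the JSON above (the * matches each IP key, dots included):

| spath
| foreach data.*.judgments{} [ eval judgments=mvappend(judgments, '<<FIELD>>') ]

This gathers the judgments arrays of both IP objects into one multivalue field named judgments; spath alone leaves them under per-IP field names like data.37.0.10.15.judgments{}.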
Hi all, I have 2 dropdowns. The first one has manually defined values and the second one is automatically populated based on the value selected in the first dropdown. When I select a value from the first dropdown and then a value from the second dropdown, this works fine. But if I then select another value from the first dropdown, the second dropdown does not reset to "all"; instead it keeps the value selected at the beginning. I explored and made the necessary changes but I am not seeing what is missing here. Can anyone help me with this?
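A common fix is to reset the second dropdown's form token from the first dropdown's <change> block; a minimal sketch, assuming the second input's token is named dropdown2 and its "All" choice has the value *:

<input type="dropdown" token="dropdown1">
  ...
  <change>
    <set token="form.dropdown2">*</set>
  </change>
</input>

Setting form.dropdown2 (rather than dropdown2) updates the visible selection as well as the underlying token value.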
Dashboard is taking a long time to load and it has 17 apps in one panel. I have also tried creating a token such as <set token="app_names">app1,app2</set> instead of using a wildcard, but the query is not returning any results. Code snippet:

<init>
  <set token="app_names">app1,app2,app3................</set>
</init>
......
<row>
  <panel>
    <title>Dashboard</title>
    <table>
      <search>
        <query>index=idx ns=xyz* app_name=$app_names$ pod_container=xyz* ((" ERROR " OR " WARN ")
| stats count by app_name
| append
    [| stats count
     | eval app_name="app1,app2,app3........"
     | table app_name
     | makemv app_name delim=","
     | mvexpand app_name]
.......

Please advise!
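One thing to check: app_name=$app_names$ expands to app_name=app1,app2,app3..., which Splunk treats as a single literal value, so nothing matches. A sketch of the IN form, which does accept a comma-separated token (field and token names as in the snippet above):

index=idx ns=xyz* pod_container=xyz* (" ERROR " OR " WARN ") app_name IN ($app_names$)
| stats count by app_name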
I have close to 200 inputs configured on the Splunk TA for MS cloud services on a HF, along with other TAs that are also pulling data from other sources, but the TA for MS cloud services makes up the majority of the inputs on this HF. The issue I am facing at the moment is that, owing to the huge number of PULL data ingestions, my HF CPU is frequently maxing out, thereby leading to ingestion delays for all associated data sources. The Splunk docs here suggest "Increase intervals in proportion to the number of inputs you have configured in your deployment". My guess is that since all the inputs are configured with interval=300, all of them "might" be trying to fetch data at the same time. Options available for the fix:

1. Change the interval settings on each stanza in inputs.conf by 1-10 secs, which should make them run at different times
2. Reduce the number of inputs on this overutilised HF and move them to another HF

Is there any other option that I can implement to remediate this situation? Also, for option 1, is there a way I can timechart (or plot) the count of inputs over time to see the trend of the inputs being triggered over, say, 24 hrs? Regarding option 2, this might result in double ingestion of data; I would be interested to know how I might go forward so as to minimize dual ingestion.
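For the timechart in option 1, a sketch against splunkd's internal logs; the component and filter are assumptions based on typical modular-input logging, so they may need adjusting for this TA:

index=_internal host=<your_hf> sourcetype=splunkd component=ExecProcessor
| timechart span=5m count

If a field identifying the individual script or input is extracted, adding a "by" clause on that field would show whether all the interval=300 inputs fire in the same buckets.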
I made a drilldown from a pie chart to a scatter chart. Pie chart:

<drilldown>
  <set token="BU">$row.BU$</set>
</drilldown>

Search of the scatter chart:

<search>
  <query> index="indexA" | search BU=$BU$ | eval XXX </query>
  <earliest>$field1.earliest$</earliest>
  <latest>$field1.latest$</latest>
</search>

Question: Can I set an initial or default value for this token? My problem is that the scatter chart does not appear until you click something on the pie chart.
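A minimal sketch: initialise the token in the dashboard's <init> block with a wildcard, which works here because the search uses it as BU=$BU$:

<init>
  <set token="BU">*</set>
</init>

The scatter chart then renders for all BU values on load, and narrows once a pie slice is clicked.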
Hi, for context, I'm developing a custom reporting command that will allow running a Python ML script over the result of a search and yield a prediction over all the records in the search result. I've followed the template in https://github.com/splunk/splunk-app-examples/tree/master/custom_search_commands/python/reportingsearchcommands_app to create such a command. However, running this reportingcsc command (in my case, `source="account202203.csv" | table date, new | reportingcsc cutoff=10 new`) over my dataset yields more than one statistic, while it's quite clear that the reduce method of this custom command should only yield one statistic over the whole search result.

def reduce(self, records):
    """returns a count of students having total marks higher than the cutoff"""
    pass_student_cnt = 0
    for record in records:
        value = float(record["totalMarks"])
        if value >= float(self.cutoff):
            pass_student_cnt += 1
    yield {"student having total marks greater than cutoff ": pass_student_cnt}

It looks like the search result is binned into different chunks, and each chunk goes through the reduce() method once. However, the search results aren't actually binned: if I simply do `source="account202203.csv" | table date, new | stats sum(new)` then it yields only 1 statistic. Any suggestion on how to do this the right way would be greatly appreciated!
I see lots of suggestions in the Community for Linux but not Windows. Has anyone resolved this on a production Windows 2016 Server? Three errors when logging in to Splunk following the upgrade to 9.0:

Failed to start the KV Store. See mongod.log and splunkd.log for details.
KV Store changed status to failed. KV Store process terminated.
KV Store process terminated abnormally (exit code 1, status exited with code 1). See mongod.log and splunkd.log for details.
I would like to know if Splunk has any documentation that shows some pre-created rules, like those of Elastic, for example. Below is the reference: https://www.elastic.co/guide/en/security/current/prebuilt-rules.html. I want to create some rules for firewall, antivirus and Office 365. Thanks
Hello Team, did anyone find disadvantages in upgrading to the Victoria Experience from the Classic Experience? I have seen the advantages in this link: https://docs.splunk.com/Documentation/SplunkCloud/latest/Admin/Experience. But I want to know the disadvantages of the Victoria Experience compared to Classic, and I haven't found any links on this. So kindly help me with this.
Hi everyone, I have a search on approval success rates:

stats count as TOTAL, count(eval(criteria)) as APPROVED
| eval APPROVEDPERCENT=if(TOTAL>0, round((APPROVED*100)/TOTAL, 2), 100)

This gives me the percentage of approved transactions over the time range. I would like to raise an alert when this percentage is less than 50 in 3 consecutive spans of 15 min. I have tried the following, based on another post:

| timechart span=15min max(APPROVEDPERCENT) as APPPERCENT
| where APPPERCENT<50
| stats count as NumberNOK

(which I could run with the alert trigger condition NumberNOK>=3 over the last 45 min). But I ran this search on a time range where approval is 100%, and NumberNOK is null. Can anyone help with this search? Thank you in advance.
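A sketch that computes the percentage per 15-minute bucket directly in timechart, instead of bucketing a field that only exists after stats (criteria stands for the original approval condition):

<base search>
| timechart span=15m count as TOTAL count(eval(criteria)) as APPROVED
| eval APPROVEDPERCENT=if(TOTAL>0, round(APPROVED*100/TOTAL, 2), 100)
| where APPROVEDPERCENT<50
| stats count as NumberNOK

Run over the last 45 minutes with trigger condition NumberNOK>=3; when no bucket is below 50, stats count should return 0 rather than null.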
A lot of our Windows UF are getting this message in Windows Event Logs, "Splunk could not get the description for this event". Does anyone know why?
I have 2 files I want to monitor in the same directory with 2 different sourcetypes. My issue is that both files are being picked up by sourcetype a because of the wildcard. The wildcard is needed for the dates that follow the log name. I tried blacklisting the diagnostic file from sourcetype a but that did not work.

[monitor://E:\path\to\log\directory\HFMWeb*-diagnostic.log]
sourcetype = <sourcetype b>
disabled = false
index = <index>
crcSalt = <SOURCE>

[monitor://E:\path\to\log\directory\HFMWeb*.log]
sourcetype = <sourcetype a>
disabled = false
index = <index>
crcSalt = <SOURCE>
blacklist = \-diagnostic

Any ideas on how I can exclude the diagnostic file from sourcetype a but then include it in sourcetype b?
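A sketch of the second stanza with an anchored blacklist regex (blacklist is matched against the full file path, so anchoring on the file-name tail is usually enough):

[monitor://E:\path\to\log\directory\HFMWeb*.log]
sourcetype = <sourcetype a>
disabled = false
index = <index>
crcSalt = <SOURCE>
blacklist = -diagnostic\.log$

If the two stanzas still race for the same files, making the -diagnostic stanza's path more specific is the other usual fix.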
I am trying to ingest a new log and unfortunately it doesn't include the year or time zone as part of the message. The timestamp in the messages is in the following format:

Jun 30 01:02:03 <msg>

I wrote the following props.conf settings to extract the timestamp from the message:

[new_sourcetype]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 15
TIME_FORMAT = %b %d %H:%M:%S

I see the following warnings in splunkd.log:

06-30-2022 14:05:59.555 -0600 WARN DateParserVerbose [1556614 merging] - The TIME_FORMAT specified is matching timestamps (Tue Jun 6 17:43:20 2023) outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context: source=/path/to/log|host=UF01|new_sourcetype|230

I'm confused about where "(Tue Jun 6 17:43:20 2023)" is coming from, because none of the logs have this string. How do I approach this? I've thought about using transforms to write into the DEST_KEY "_time", but I read that any key starting with "_" is not indexed. This data is being received from a syslog server, so I thought about modifying the data as it's being received. What are your recommendations?
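A sketch of the same stanza with the usual guard-rail settings for year-less syslog timestamps; the TZ value is an assumption and should match the source's actual zone:

[new_sourcetype]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 15
TIME_FORMAT = %b %d %H:%M:%S
TZ = US/Mountain
MAX_DAYS_AGO = 7
MAX_DAYS_HENCE = 1

With no year in the event, Splunk has to guess one, which is how a "Jun 6" event can land in 2023; constraining MAX_DAYS_AGO/MAX_DAYS_HENCE and pinning TZ keeps the parsed timestamp inside a sane window.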
Hi, I have been through the docs but don't see any specific detail regarding system hardware needed for specific roles. Let's say you have a 3-member SHC and a 10-member IDXC. If you were to put the deployer role on a separate box, what would you need at a minimum? (For instance, maybe 8 cores, 12 GB RAM, 100 GB disk.) And what would you need for the following roles if each and every one was on a dedicated VM?

License Manager
Monitoring Console
Index Cluster Master
Deployment Server

I find the documentation is more specific for SHs and Indexers, but per the documentation, "For detailed sizing and resource allocation recommendations, contact your Splunk account team". I am up against a HW resource wall and need to shuffle some things around, so any advice or lessons learned on the minimum-hardware-requirements topic are greatly appreciated.
I want to run a query where:

1. Query1 returns resultset1 containing myEvent1.uid
2. Query2 returns resultset2 containing myEvent2.uid, which is a subset of the myEvent1 uid values.
3. Filter myEvent1 events and discard any that don't have a matching myEvent2.uid.

This can be done easily with an inner join, but the resultset2 dataset is larger than 50k, so I cannot use a join. What I want is to do an inner join without using join! (I'm also practicing not using join in general, but I really can't use join in this case.) I saw some other posts that use join and other tricks, and tried different solutions with coalesce() and also creating new fields, but haven't figured out a way that worked. Thanks in advance!
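A common join-free pattern is to run both queries in one search and use eventstats to flag uids seen in myEvent2; a sketch, where <query1>, <query2>, and the is_e2 condition are placeholders for whatever distinguishes the two event types:

(<query1>) OR (<query2>)
| eval is_e2=if(<condition identifying myEvent2>, 1, 0)
| eventstats max(is_e2) as uid_has_match by uid
| where uid_has_match=1 AND is_e2=0

This keeps only myEvent1 events whose uid also appears in at least one myEvent2 event, with no subsearch or join row limits.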
Hi, I have a table similar to this:

Brand    ID_EMP
Nike     123
Adidas   456
Lotto    123

and another table like this:

code   name
123    Smith
456    Myers

The result should be:

Nike     123   Smith
Adidas   456   Myers
Lotto    123   Smith

but instead of this I'm getting:

Nike     123   Smith john
adidas   456   Myers. Mike

This is the query that I'm using:

| dbxquery query="SELECT * FROM Clients ;" connection="Clients"
| where NOT like(SAFE_NAME,"S0%")
| rename ID_EMP as code
| join type=left
    [| inputlookup append=t Emp.csv
     | table code, name, STATUS ]
| fillnull value="Blank" STATUS
| dedup code
| where STATUS!="Blank"
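A join-free sketch using the lookup command instead, assuming Emp.csv exists as a lookup file or definition keyed on code (duplicate code rows in the csv would explain the concatenated names, so checking the lookup side for duplicates is worthwhile):

| dbxquery query="SELECT * FROM Clients ;" connection="Clients"
| where NOT like(SAFE_NAME,"S0%")
| rename ID_EMP as code
| lookup Emp.csv code OUTPUT name STATUS
| where isnotnull(STATUS)

lookup enriches every input row with its matching name, so Lotto keeps its own row instead of being removed by dedup code.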
Hello, are there any ways we can hide (encrypt) the USERNAME and PASSWORD in our REST API call? The main reason is that our client doesn't want to disclose the username and password. Any help or recommendation would be highly appreciated. Thank you so much.
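If this is about credentials that a script or app uses, Splunk's storage/passwords endpoint stores them encrypted so they never sit in plain text in a config file; a sketch with the Python SDK, where the host, connection credentials, and realm are placeholders:

import splunklib.client as client

# Connect as a user allowed to manage stored credentials
service = client.connect(host="localhost", port=8089,
                         username="admin", password="changeme")

# Store the API credential encrypted (password, username, realm)
service.storage_passwords.create("the-secret-password", "api_user", "my_realm")

Scripts then read the secret back from storage/passwords at runtime instead of hard-coding it.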
Hi everyone, newbie here again. Is it possible to have a query similar to the LIKE function in SQL, matching on a MM/DD/YYYY date, while setting a different MAX VALUE and INTERVAL for the Y-Axis in Splunk? Would it also be possible to use just the Month and Year in the condition, so that regardless of what Date the user enters, it would not affect the result?
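SPL has a like() function with SQL-style % wildcards; a sketch, assuming the date is a string field named MyDate in MM/DD/YYYY format and June 2022 is the target:

| where like(MyDate, "06/%/2022")

The % skips the day, so any date within that month and year matches regardless of what the user enters. The Y-axis maximum and interval are configured separately on the chart, e.g. via the charting.axisY.maximumNumber option in Simple XML.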
Is anyone using the same TLS internally signed certificate for the host and web?  If so, is there anything additive to the cert process I should know while setting something like that up?