All Topics


Hello, I'm a bit stuck and looking for assistance. My base query returns the following fields: Brand, SystemId, ResponseStatus, Amount, UniqueIdentifier. On top of it I would like the query to return stats:

1. A count of UniqueIdentifier in a new column.
2. The sum of Amount, grouped by Brand, SystemId, ResponseStatus.
3. Each group's percentage of the total sum(Amount).
4. Each group's percentage of the total count(UniqueIdentifier).

I would like the final result to show columns 1-4 followed by Brand, SystemId, ResponseStatus, to demonstrate the groupings.

In Splunk I know how to perform counts and sums with a grouping, then sum(counts) by, and create averages. What I can't figure out is how to line them all up in a single result set. Further, if I try to run stats on multiple columns and assign the result to a new column, it won't accept it (or I'm doing it wrong). I would appreciate any pointers.
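One common pattern for this (a sketch only, with field names taken from the post; your base search replaces the first line) is to run stats once for the per-group figures and then eventstats to append the grand totals that the percentage calculations need:

```spl
<your base search>
| stats count(UniqueIdentifier) as idCount, sum(Amount) as totalAmount
    by Brand, SystemId, ResponseStatus
| eventstats sum(totalAmount) as grandAmount, sum(idCount) as grandCount
| eval pctAmount=round(100*totalAmount/grandAmount, 2)
| eval pctCount=round(100*idCount/grandCount, 2)
| table idCount, totalAmount, pctAmount, pctCount, Brand, SystemId, ResponseStatus
```

Because eventstats adds the totals to every row without collapsing the groups, all four figures can sit side by side in a single result set.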
Hello, I see a lot of internal warning logs for one of my CSV lookups saying it is unable to find the filename property for the lookup. Has anyone had the same issue, and how do I troubleshoot it? Thanks in advance.
Hi All, I'm currently trying to extract the second IP address in each log as a field, but I'm simply not able to achieve the desired results. The logs vary quite a bit, and I'm unable to find a reliable pattern that uses only the second IP-address match.

Regex used to match an IP address:

    (?P<Public_IP_Test>\d+\.\d+\.\d+\.\d+)

Log example:

    2020-10-19 14:13:54 12.23.34.45 POST /owa/service.svc action=FindItem&UA=0&ID=-18&AD=1&CorrelationID=e275e3c1-7ccb-4ac9-95a3-58550573648f_160312683455318;&ClientId=***************; 443 testing@domain.com 34.56.78.89 Mozilla/5.0+(iPhone;+CPU+iPhone+OS+13_3_1+like+Mac+OS+X)+AppleWebKit/605.1.15+(KHTML,+like+Gecko)+Version/13.0.5+Mobile/15E148+Safari/604.1 https://mail.domain.com/owa/ 200 0 0 124

Any assistance will be greatly appreciated.
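When the surrounding text is too variable to anchor on, one sketch is to capture every IP into a multivalue field and then pick the second one by index (the field names here are illustrative):

```spl
| rex max_match=0 "(?<all_ips>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| eval Public_IP_Test=mvindex(all_ips, 1)
```

mvindex is zero-based, so index 1 selects the second match regardless of what comes before or after it in the event.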
Hi guys, I can see how this question comes across as dumb, but I would like to remove duplicated entries from my ip_intel KV store. I understand the whole purpose of a KV store is data that constantly gets updated. I am finding lots of duplicated IPs in my ip_intel KV store, and I'd like to know if there's a better way to do it than just

    | inputlookup ip_intel | dedup ip

I'd like to clean the duplicates out of the store itself, not just create a dedup search. Thanks!
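To actually rewrite the collection rather than just view a deduplicated result, the usual sketch is to pipe the deduplicated rows back with outputlookup; note this replaces the whole collection's contents, so it is worth testing against a copy first:

```spl
| inputlookup ip_intel
| dedup ip
| outputlookup ip_intel
```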
Hi there, what is the best way to approach attaching a deployment server (DS) to an environment that is already in place and scattered with apps, in terms of inputs/outputs etc.? E.g. there are inputs.conf files in random apps on forwarders, and these are still forwarding. However, when I connect the new DS to these forwarders, inputs.conf will be in a new <appname> app structure, so it would be deployed alongside the current inputs.conf rather than overwrite what's there. Would this mean that the monitored files are ingested twice? How do I go about removing the old config and using the new one without either duplicating data or having data gaps? My plan for all other apps, including outputs.conf, is to deploy those first and then remove anything from the old config manually. As the DS previously didn't manage these old dodgy apps, it will not auto-remove them, which is what made me curious about the possible duplicate data mentioned above. What are your thoughts on this? Thanks!
I want to compare today's data with the previous day's data from a host. Please suggest SPL for this.
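A minimal sketch (the index and fields are placeholders) is to search from the start of yesterday, label each event by day, and group:

```spl
index=your_index earliest=-1d@d latest=now
| eval day=if(_time >= relative_time(now(), "@d"), "today", "yesterday")
| stats count by host, day
```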
We hit the 0.5 TB limit for _internal in our lower environment, and we have barely 10 days of data. Unfortunately, we don't have enough disk space to keep the index for 60 days or so. We wonder if there is a way to keep, let's say, 60 days of logs from the Splunk servers but only 10 or so from the forwarders. Is that possible? Are there other ways to configure what goes to _internal?
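Retention in Splunk is set per index, not per sending host, so one sketch (with hypothetical values) is to cap _internal by age and size in indexes.conf on the indexers; keeping a different retention for forwarder events would mean routing them to a separate index with its own, shorter settings:

```ini
# indexes.conf on the indexers -- hypothetical values
[_internal]
frozenTimePeriodInSecs = 5184000   # roll data older than ~60 days
maxTotalDataSizeMB = 512000        # the size cap still wins if reached first
```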
We are planning to deploy Splunk Enterprise on Azure. Which Splunk versions are supported on the Azure platform? Are there any relevant docs on this? Thanks.
I have a regular expression that I use to extract the words below, but I don't want to show the Results field/column. How do I exclude it? I've tried

    | fields -Results

and it didn't work.
Hello community, I used the search to find a possible solution for my problem, but without success. My problem looks like the following:

1. I have historical data from a storage system which contains the date and the amount of used storage (column name historical_gb). This data is in .csv and therefore easy to read.
2. In addition, I get the real-time amount of used storage (live_gb) via a REST API. This data is in .json.

My question: how can I combine those two data sources? The final search result should look like:

    date        usedStorage
    01.01.2020  110 GB   (source: historical_gb)
    19.10.2020  125 GB   (source: live_gb)

I added the last column just for better understanding; it is not relevant for the actual search. So far I have:

    index="test" (source = Used_Sotrage_KPI objid = 18392 ) OR (source="historicdata.csv")
    | rename lastvalue_raw as usedStorage
    | rename "PhysUsedCapacity_Raw" as usedStorage
    | timechart span=1d max(usedStorage)

where lastvalue_raw is the real-time data (source: Used_Storage_KPI) and PhysUsedCapacity_Raw is the historical data (source: historicdata.csv). But the second rename overwrites the first rename statement. Thanks for your help!
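Instead of two renames (where the second clobbers the first), one sketch is to merge both raw fields into a single one with coalesce, which takes the first non-null value per event:

```spl
index="test" (source=Used_Sotrage_KPI objid=18392) OR (source="historicdata.csv")
| eval usedStorage=coalesce(lastvalue_raw, PhysUsedCapacity_Raw)
| timechart span=1d max(usedStorage)
```

Each event only carries one of the two source fields, so coalesce picks whichever is present and timechart then sees a single usedStorage field across both sources.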
Hi, I'm trying to internationalize the Splunk Web user interface by adding support for languages such as Japanese, German, etc. in the app. I found that Splunk already provides support for these languages and supplies translations for some strings such as "Database", "Help", "Overview", etc. I want to override the default translations provided by Splunk with my own. For example, if Splunk converts "Help" to "Help_splunk_ja", I want it to be "Help_own_ja". I was able to achieve this for the dashboard titles, filter labels, etc., but not for the navigation menu. I've followed this document for the implementation: https://docs.splunk.com/Documentation/Splunk/8.0.6/AdvancedDev/TranslateSplunk Attaching screenshots for reference: the default conversion provided for "Overview" could be changed in the dashboard title (to "Overview_ja") but not in the navigation menu, where the dashboard title gets converted while the menu still shows the default conversion. Is there a solution for this? Thanks.
Quick question regarding version 2.2 of DECRYPT (https://splunkbase.splunk.com/app/2655/): why does commands.conf now have local=true under the decrypt command? It did not have this previously, and it still has streaming=true... Thanks
Hi. We are trying to do some stats on the "component" field in the internal splunkd logs, but have encountered a strange problem: the stats command only works if we search in Verbose Mode. If we switch to Smart Mode or Fast Mode, the search gives no results. This is our search:

    index=_internal sourcetype=splunkd component=*
    | stats count by component

This is the default regex in props.conf in the search app for the "component" field:

    (?i)^(?:[^ ]* ){2}(?:[+\-]\d+ )?(?P<log_level>[^ ]*)\s+(?P<component>[^ ]+) - (?P<event_message>.+)

I've tried running the regex manually with the rex command, so I know it works fine. I also tried running the stats command in the search app itself to rule out permission errors, but the results are the same. The permissions for the field extraction are set to read for everyone and global anyway, so that should not matter. Also, since the extraction works in verbose mode, we know it actually works, as the component field would not be extracted by a normal key-value-pair extraction; it has to be extracted by the regex. Example of an internal log with the component field (being "Metrics" in this case):

    10-19-2020 10:36:03.997 +0200 INFO Metrics - group=thruput, name=uncooked_output, instantaneous_kbps=0, instantaneous_eps=0, average_kbps=0, total_k_processed=0, kb=0, ev=0

Also, if I search for only "index=_internal sourcetype=splunkd" in smart mode, the component field is extracted, but if I then click on the field in the UI and choose e.g. "Top values", it again gives no results. Can anyone explain this behaviour and what may be the cause?
Hello. I am trying to compute a user's average login-failure rate so I can pit that against their daily failure rate and see whether they're staying within their expected behavior. The problem is that a user may not log in every day, so something like

    ... | eval av=(FailedLogin/7)

won't work if a user does not log in every day. Is there a way to express something like this pseudocode:

    | eval _counter="0"
    | eval if(FailedLoginForThisDay!="0") counter = counter + 1
    | span -1d
    | eval avg = (FailedLoginTotal/counter)
    | eval if(avg < FailedLoginForThisDay) stats count by user, ipaddr, etc

i.e. iterate over a set of days and determine whether a user had a failed login; if so, increment the counter; do it in one-day buckets (is there a better way?); then finally take my counter, find the user's true average for failed logins, and if they are above their average, print out their username, IP, and other stats.
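The pseudocode above maps fairly directly onto bin plus stats, with eventstats computing the average only over the days on which the user actually had failures (the base search and field names are placeholders):

```spl
<your failed-login search>
| bin _time span=1d
| stats count as dailyFailures by user, ipaddr, _time
| eventstats avg(dailyFailures) as avgFailures by user
| where dailyFailures > avgFailures
```

Because stats only emits rows for days with events, the eventstats average is implicitly divided by the number of active days, which is the role the manual counter plays in the pseudocode.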
Hello, how can I add a threshold line to my forecastviz? I've tried the query below, but the line is not visible on the chart; with a normal line chart it works fine.

    | mysearch
    | eval Treshold=90
    | timechart span=120min latest(Treshold) as Treshold avg(MyPredictedValue) as PredictedValue
    | predict "PredictedValue" as prediction algorithm=LLT holdback=12 future_timespan=96 upper95=upper95 lower95=lower95
    | `forecastviz(96, 12, "PredictedValue", 95)`
    | filldown Threshold
Hi Team, is there a validity period for the Splunk Power User certification? I completed the certification on 17 March 2017. Can I take the Splunk Admin certification directly now?
Hi, folks. I am stumped on this matter. My goal is extracting ABC, BCD, and CDE from ABCDE into a multivalue field. So far I have played around with regex101.com and got these two regexes:

    (?<field_1>(?=(\w{3})))
    (?<field_2>(?<=(\w{3})))

Both seem to work on regex101.com, but the thing is, I always get empty results in Splunk. I was using this command:

    | makeresults
    | eval sample="ABCDE"
    | rex field=sample max_match=0 "(?<field_1>(?=(\w{3})))"

I understand that I was using a positive lookahead and a positive lookbehind. I opted to use one of them since I don't know in advance how many characters the original field will have, so either a lookahead or a lookbehind seems to be the appropriate method. Are these two methods available in Splunk, or am I doing this the wrong way? Please advise. Thank you.
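One sketch worth trying is to place the named group inside the lookahead rather than around it: the named group above wraps only the zero-width assertion (so it captures nothing), while the three characters are captured by the unnamed inner group, which rex does not return. Moving the name inward keeps the overall match zero-width (so the engine can advance one character at a time for overlapping matches) while the named group itself captures three characters:

```spl
| makeresults
| eval sample="ABCDE"
| rex field=sample max_match=0 "(?=(?<field_1>\w{3}))"
```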
Hi, I'm looking for a list of all CIM fields that are created by the Windows TA... I can't find any documentation. Thanks for the help!
Hi, we are using the Microsoft Exchange On-Premise EWS app version 2.0.29 (upgraded from 2.0.17) and we are experiencing some issues with polling. First of all, the "oldest first" parameter seems to work as "latest first", and "latest first" works as "oldest first". Secondly, the scheduled/interval polling works this way (more or less in every single test I have made):

- First iteration: brings in the max emails per scheduled polling.
- Second iteration: brings in the same number of emails as the first iteration.
- Third iteration: brings in the max emails per scheduled polling.
- After that it does not bring in any more emails, despite the fact that there are more pending emails.

It also seems that there is a cache when I retry the same emails, and some emails are missing when I execute the scheduled polling over the same set of emails. Can you help please? Thank you!
Hi Splunkers, when UFs are installed, I don't want them to send all the events that were generated before the UF was installed; I just want the UF to start sending logs from the moment it was installed. Is there a way to configure this? To be honest, we need to reinstall the UFs for some reason and don't want the reinstalled UFs to resend the previous events.
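For file monitors, one knob worth looking at (a sketch; the monitored path is hypothetical) is ignoreOlderThan in inputs.conf, which makes the forwarder skip files whose modification time is older than the given window. Note that it works per file, not per event, so an old file that is still being written to will still be read in full:

```ini
# inputs.conf on the forwarder -- hypothetical monitor stanza
[monitor:///var/log/myapp]
ignoreOlderThan = 1d
```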