All Posts


I have a table that looks like this:

Day          Percent
2024-11-01   100
2024-11-02   99.6
2024-11-03   94.2
...          ...
2024-12-01   22.1
2024-12-02   19.0

From this table I am calculating three fields, REMEDIATION_50, _80, and _100, using evals like the following:

|eval REMEDIATION_50 = if(PERCENTAGE <= 50, "x", "")

From this eval statement, I am going to have multiple rows where the _50 and _80 fields are marked, and some where both fields are marked. I'm interested in isolating the DAY of the first time each of these milestones is hit. I've yet to craft the right combination of stats, where, and evals that gets me what I want. In the end, I'd like to get to something like this:

Start        50%          80%          100%
2024-11-01   2024-11-23   2024-12-02   -

Any help would be appreciated, thanks!
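One possible approach (a minimal sketch, not a tested answer): because the Day values are in YYYY-MM-DD format, min() sorts them correctly as strings, so a single stats call can pick out the earliest day each flag was set. This assumes the REMEDIATION_* flags have already been computed as in the eval above:

| stats min(Day) as Start
        min(eval(if(REMEDIATION_50="x", Day, null()))) as "50%"
        min(eval(if(REMEDIATION_80="x", Day, null()))) as "80%"
        min(eval(if(REMEDIATION_100="x", Day, null()))) as "100%"
| fillnull value="-" "50%" "80%" "100%"

The fillnull at the end replaces milestones that have not been reached yet with "-", matching the desired output row.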
I have created a lookup table in Splunk that contains a column with various regex patterns intended to match file paths. My goal is to use this lookup table within a search query to identify events where the path field matches any of the regex patterns specified in the Regex_Path column.

Lookup file: (screenshot omitted)

Here is the challenge I'm facing: when using the match() function in my search query, it only successfully matches if the Regex_Path pattern completely matches the path field in the event. However, I expected match() to perform partial matches based on the regex pattern, which does not seem to be the case. Interestingly, if I manually replace the Regex_Path in the where match() clause with the actual regex pattern, it successfully performs the match as expected.

Here is an example of my search query:

index=teleport event="sftp" path!=""
| eval path_lower=lower(path)
| lookup Sensitive_File_Path.csv Regex_Path AS path_lower OUTPUT Regex_Path, Note
| where match(path_lower, Regex_Path)
| table path_lower, Regex_Path, Note

I would like to understand why the match() function isn't working as anticipated when using the lookup table, and whether there is a better method to achieve the desired regex matching. Any insights or suggestions on how to resolve this issue would be greatly appreciated.
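A likely cause: the lookup command compares the input field against Regex_Path as a literal string (exact match by default), so Regex_Path comes back null on most events and match() then has nothing to test against. One workaround (a rough sketch, assuming the pattern list is small enough to carry on every event; bringing the Note column back would need an extra lookup or join) is to pull every pattern onto each event and test them with mvmap:

index=teleport event="sftp" path!=""
| eval path_lower=lower(path)
| append [| inputlookup Sensitive_File_Path.csv | fields Regex_Path]
| eventstats values(Regex_Path) as all_patterns
| where isnotnull(path_lower)
| eval matched_pattern=mvmap(all_patterns, if(match(path_lower, all_patterns), all_patterns, null()))
| where isnotnull(matched_pattern)
| table path_lower, matched_pattern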
"No luck", "Does not work" are useless words in this forum.  What is the input?  What is the output?  How does the output differ from your expectations?  Are you sure your data contains time periods ... See more...
"No luck", "Does not work" are useless words in this forum.  What is the input?  What is the output?  How does the output differ from your expectations?  Are you sure your data contains time periods where the condition is satisfied?  Unless you can illustrate these data points, volunteers here cannot help you. Here is an emulation for the first search.  As you can see, remaining results after "where" all have output1 > 30% of output2   index = _audit action IN (artifact_deleted, quota) | rename action as field1 | eval field1 = if(field1 == "quota", "output1", "output2") ``` the above emulates index=sample sample="value1" ``` | timechart span=10m count by field1 | where output1 > 0.3 * output2   My output is _time output1 output2 2024-12-01 21:00:00 6 0 2024-12-01 21:20:00 4 4 2024-12-01 22:00:00 2 2 2024-12-01 23:30:00 11 11 2024-12-01 23:40:00 2 4 2024-12-02 00:00:00 10 8 2024-12-02 01:00:00 6 8 2024-12-02 03:00:00 11 31 2024-12-02 03:10:00 5 6 2024-12-02 03:20:00 3 8 2024-12-02 03:30:00 3 7 2024-12-02 03:40:00 5 4 2024-12-02 03:50:00 8 13 2024-12-02 04:00:00 5 11 2024-12-02 04:10:00 14 12 2024-12-02 04:20:00 12 14 2024-12-02 04:30:00 6 13 2024-12-02 04:50:00 4 0 2024-12-02 07:10:00 2 2 2024-12-02 12:00:00 6 0 Without "where", there are 150 time intervals. Play with the emulation, modify it to see how timechart, timebucket, and filter conditions work together with different datasets.  Then, analyze your own dataset.  For example, if your search doesn't return any result when "where" applies, post output when "where" is removed. (You can anonymize actual values with "output1" "output2" like I do in the emulation but data accurate to real data.)
Hello, dear Splunk Community. I am trying to extract the ingest volume from our client's search head, but I noticed that I am getting different results depending on which method I am using. For example, if I run the following query:

index=_internal source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| eval GB=round(b/1024/1024/1024, 3)
| timechart sum(GB) as Volume span=1d

I get the following table:

_time       Volume
2024-11-25  240.489
2024-11-26  727.444
2024-11-27  751.526
2024-11-28  777.469
2024-11-29  727.366
2024-11-30  724.419
2024-12-01  787.632
2024-12-02  587.710

On the other hand, when I go to Apps > CMC > License usage > Ingest and fetch the data for "last 7 days" (same as above), I get the following table:

_time       GB
2024-11-25  851.012
2024-11-26  877.134
2024-11-27  872.973
2024-11-28  949.041
2024-11-29  939.627
2024-11-30  835.154
2024-12-01  955.316
2024-12-02  963.486

As you can see, there is a considerable mismatch between both results. So here's where I'm at a crossroads, because I don't know which one I should trust. Based on previous topics, I notice the above query has been recommended before, even in posts from 2024. I don't know if this is related to my user not having the appropriate capabilities or whatnot, but any insights about this issue are greatly appreciated. Cheers, everyone.
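As a cross-check (a sketch only, not an explanation of the mismatch): the license manager also writes a once-per-day RolloverSummary event to license_usage.log with the day's total usage. The query below assumes those events are searchable from the search head you are on; note that the rollover is written shortly after midnight, so totals are attributed to the day after the usage unless you shift _time.

index=_internal source=*license_usage.log* type="RolloverSummary"
| eval GB=round(b/1024/1024/1024, 3)
| timechart span=1d sum(GB) as Volume

If this agrees with the CMC numbers but not with the type="Usage" query, the type="Usage" search is probably not seeing all of the per-slice usage events from the license manager.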
Wondering if this will work for you. It puts both datasets in the outer query. The first stats will pull all fields together by TraceID, then the where will remove those without data. The @t will contain multivalue dates which will get converted and then your next stats will collapse any duplicates.

(index=test OR index=test2 source="insertpath" ErrorCodesResponse=TestError TraceId=*) OR (index=test "Test SKU" AND @MT !="TestAsync: Request(Test SKU: )*")
| fields TraceId, @t, @MT, RequestPath
| stats values(*) as * by TraceId
| where isnotnull('@t') AND isnotnull('@mt') AND match('@mt', "Test SKU: *")
| eval date=strftime(strptime('@t', "%Y-%m-%dT%H:%M:%S.%6N%Z"), "%Y-%m-%d"), time=strftime(strptime('@t', "%Y-%m-%dT%H:%M:%S.%6N%Z"), "%H:%M")
| stats values(date) as date values(time) as time values(@mt) as message values(RequestPath) as Path by TraceId
| where isnotnull(date) AND isnotnull(time) AND isnotnull(message)
| table date, time, TraceId, message, Path

There may be more optimisations depending on your data.
Hi, there is at least this one: https://splunkbase.splunk.com/app/5927. It's not exactly what you are looking for, but it probably gives you some ideas how to do it. Basically you can do it as you said, with an alert action (which could be an issue if you want to send a lot of data). Another way is to create a custom command and use it. But if you have a lot of data to export, then maybe the easiest way to go is to just create a saved search, call it over the Splunk REST API from some other job management software/system, which then sends it forward. r. Ismo
One way using stats, which will be efficient:

| makeresults
| eval new_set="A,B,C"
| makemv delim="," new_set
| append
    [| makeresults
    | eval baseline="X,Y,Z" ]
| makemv delim="," baseline
``` Join rows together ```
| stats values(*) as *
``` Expand out the baseline data ```
| stats values(*) as * by baseline
``` Collect combinations ```
| eval combinations=mvmap(new_set, new_set."-".baseline)
``` and combine again ```
| stats values(combinations) as combinations

It relies on the expansion of the MV using stats by baseline - which could also be done with mvexpand, not sure which one is more efficient.
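For comparison, here is a rough sketch of the mvexpand variant mentioned above (same toy data, not tested against the 40,000-row production set):

| makeresults
| eval new_set=split("A,B,C", ",")
| eval baseline=split("X,Y,Z", ",")
| mvexpand baseline
| eval combinations=mvmap(new_set, new_set."-".baseline)
| stats values(combinations) as combinations

mvexpand copies the whole event for each baseline value, so memory limits (max_mem_usage_mb in limits.conf) may come into play sooner than with the stats-by expansion.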
Not yet. I'm still discussing with support whether this is a bug or something else. Currently we are waiting for a (final?) answer from the developers/PM to hear what their plans are for it.
Using the below sample search, I'm trying to get every possible combination of results between two different sets of data, and I'm interested in whether there are any good techniques for doing so that are relatively efficient. At least with the production data set I'm working with, it should translate to about 40,000 results. Below is just an example to make the data set easier to understand. Thank you in advance for any assistance.

Sample search

| makeresults
| eval new_set="A,B,C"
| makemv delim="," new_set
| append
    [| makeresults
    | eval baseline="X,Y,Z" ]
| makemv delim="," baseline

Output should be roughly in the format below, and I'm stuck on getting the data manipulated in a way that aligns with it.

new_set - baseline
--
A-X
A-Y
A-Z
B-X
B-Y
B-Z
C-X
C-Y
C-Z
Hey. Any updates regarding the bug? I found the same issue using the latest Splunk (9.3.2).
When you run the command "netsh wlan show wlanreport", it generates not only an HTML report but also an XML report. This is good because the HTML report is intended for human consumption, so Splunk will not be happy with it. You can instead index the XML file, which is at:

C:\ProgramData\Microsoft\Windows\WlanReport\wlan-report-latest.xml

To set up Splunk to generate and index this file once per hour, you need 3 configuration files:

1) A props.conf file on your indexer machine(s)

# Put this in /opt/splunk/etc/apps/<yourappname>/local/props.conf
[WlanReport]
maxDist = 170
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = <?xml version
TIME_PREFIX = ReportDate>

2) An inputs.conf file on your forwarder machine(s)

# Put this in /opt/splunkforwarder/etc/apps/<yourdeploymentappname>/local/inputs.conf
[monitor://C:\ProgramData\Microsoft\Windows\WlanReport\wlan-report-latest.xml]
index = main
sourcetype = WlanReport
disabled = 0
initCrcLength = 256

# You can use a scripted input to run the command once per X seconds specified by the interval
# (I have trouble getting it to work with a relative path to the script)
[script://C:\Program Files\SplunkUniversalForwarder\etc\apps\<yourdeploymentappname>\bin\scripts\wlanreport.bat]
interval = 3600
disabled = 0

3) The script file on your forwarder machine(s):

REM Put this in C:\Program Files\SplunkUniversalForwarder\etc\apps\<yourdeploymentappname>\bin\scripts\wlanreport.bat
@echo off
netsh wlan show wlanreport

You will then have events coming in containing the XML file contents, every hour.
Sorry for the delayed response, holidays got in the way. I ran "splunk btool server list sslConfig" and it returned no data. I tried it without sslConfig and searched for that cert name, and found nothing.

When I run

openssl.exe x509 -enddate -noout -text -in "c:\programs files\splunk\etc\auth\server_pkcs1.pem"

it shows the issuer as Splunk.
In Dashboard Studio it's $row.<<fieldname>>.value$, for example $row.host.value$.
Hi @Vinodh.Angalaguthi, It's been a few days with no reply from the community. Did you happen to find a solution or more information you can share? If you still need help, you can contact AppDynamics Support: How to contact AppDynamics Support and manage existing cases with Cisco Support Case Manager (SCM) 
Hello Splunk Community, I was wondering if anyone has been successful in setting up the Microsoft Teams Add-on for Splunk app on their Enterprise/Heavy Forwarder. This application requires configuring a Teams webhook. When reading the documentation, it appears that the app is supposed to create or include the Microsoft Teams-specific webhook. However, when I attempt to search for the webhook in the search app using:

sourcetype="m365:webhook"

I don't get anything back, and I'm not sure what the webhook address is, since the documentation doesn't specify the format or go over the steps to create a webhook address. I followed these steps: https://lantern.splunk.com/Data_Descriptors/Microsoft/Getting_started_with_the_Microsoft_Teams_Add-on_for_Splunk If anyone has an idea on how to create the webhook, or an idea of what I am doing wrong, I would greatly appreciate it. Thanks!
Remove Blue Dot

In Dashboard Studio, my panels use a parent search which uses a multisearch. Because of this, all of the panels have this annoying informational blue dot that appears until the search completely finishes. How can I get rid of this so it never appears?
Sorry about that, I didn't think it would matter.  Looks like it does.  I've created a Support ticket for this as well.  Hopefully, they'll get back to me.  If they do, I'll let you know the solution with Studio. Thanks again, Tom
@gcusello  I'm not entirely sure what you're referring to to be honest. Our subsearch is well under 50k results so that shouldn't be the issue. But I appreciate you trying to assist. I'll see if I can puzzle it out.
| eval "Last Logon"=strftime(strptime(LastLogon, "%Y-%m-%dT%H:%M:%S.%QZ"),"%Y%m%d %H:%M:%S") | eval lastLogon=strptime(LastLogon, "%Y-%m-%dT%H:%M:%S.%QZ") Sorry about not having a better explanation... See more...
| eval "Last Logon"=strftime(strptime(LastLogon, "%Y-%m-%dT%H:%M:%S.%QZ"),"%Y%m%d %H:%M:%S") | eval lastLogon=strptime(LastLogon, "%Y-%m-%dT%H:%M:%S.%QZ") Sorry about not having a better explanation.  "Last Logon" and "lastLogon" are being generated from a field "LastLogon" which I hope or assume is in the original data set. "Last Logon" is a nested strptime inside a strftime.  The strptime takes and human readable format and converts to epoch, while the strftime will take epoch and convert to human readable.  The nested function here essentially converts the format from one human readable to another human readable.  There are easier methods but if it was working maybe don't change it until your skill level jumps. "lastLogon" just takes the human readable format and converts to epoch(Unix) time - which makes duration calculations much easier. Check that "LastLogon" field is still there and that the format still matches the "xxxx-xx-xxTxx:xx:xx.xxxZ" that the strptime command is configured to expect.  Also check to see if the time shift you are experience can be explain by the delta in your local time zone (either personal setting, or that of the Search Head).  It expects the raw data from the field to be in Zulu time.  
That should be doable. Does the other product have documentation describing the format in which it expects to receive the lookup? You should then be able to use SPL to convert the lookup into that format, in one or more fields, and send it using the POST HTTP alert action.
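For example, if the other product wanted a JSON array, something along these lines could build it (a sketch only; my_lookup.csv, field1, and field2 are placeholders, json_object requires a Splunk version with the JSON eval functions, and list() truncates at 100 values, so a different aggregation may be needed for larger lookups):

| inputlookup my_lookup.csv
| eval row=json_object("field1", field1, "field2", field2)
| stats list(row) as rows
| eval payload="[".mvjoin(rows, ",")."]"
| fields payload

A saved search like this could then be scheduled, with the POST/webhook alert action referencing the payload field; the exact mechanics depend on which POST alert action you use.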