All Posts


Start by illustrating, mocking up, or explaining your data input. (Anonymize as needed.)  What are the data fields of interest?  In your search, what does the field 'query' stand for?  What are sample/mock values?  What does "Workstation" do in your logic?  In any case, your sort command has no effect because "combo" is no longer available after stats.  If anything, you can try

index=foo Host=<variable>
| streamstats count(query) as Domains by User query Workstation
| eval combo=Domains +" : "+ query
| stats values(combo) as "Unique Hits : Domain" by User Workstation
| sort - "Unique Hits : Domain"
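Outside of SPL, the count-then-sort logic this search performs can be sketched in Python with collections.Counter; the events below are made-up sample data, not from the original poster:

```python
from collections import Counter

# Hypothetical DNS query log: one (user, workstation, domain) tuple per event.
events = [
    ("alice", "WS01", "history.google.com"),
    ("alice", "WS01", "history.google.com"),
    ("alice", "WS01", "history.google.com"),
    ("alice", "WS01", "mail.google.com"),
    ("alice", "WS01", "mail.google.com"),
    ("alice", "WS01", "www.youtube.com"),
]

# Count hits per unique (user, workstation, domain), then list them in
# descending order -- roughly what streamstats/stats + sort do in the SPL.
hits = Counter(events)
for (user, ws, domain), n in hits.most_common():
    print(f"{user} {ws} {n} : {domain}")
```

most_common() returns pairs in descending count order, matching the `sort -` in the corrected search.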
The License Manager can show license usage over time.  It can group usage by index or sourcetype to help you find where the usage is coming from.  Sudden spikes can be normal periodic increases, but they are often caused by a new data source or an existing source that changed verbosity (turned on DEBUG logging, for instance).
Hi,  We have been seeing a sudden spike in license consumption in our Splunk ES since last week.  Where can we see the daily license consumption for all indexes, and what could be the cause of this sudden spike?
TYVM for the reply and info
If presenting the result in multivalue format is that important, you can pad each decimal octet of the IPv4 addresses to three digits, then use mvsort.

| makeresults
| eval ip = split("119.0.6.159,62.0.3.75,63.0.3.84,75.0.3.80,92.0.4.159", ",")
``` data emulation above ```
| eval idx = mvrange(0,4)
| foreach ip mode=multivalue
    [eval sorted_ip = mvappend(sorted_ip, mvjoin(mvmap(idx, printf("%.3d", tonumber(mvindex(split(<<ITEM>>, "."), idx)))), "."))]
| eval sorted_ip = mvsort(sorted_ip)
| table ip sorted_ip

You'll get a single row with two multivalue columns:

ip:
119.0.6.159
62.0.3.75
63.0.3.84
75.0.3.80
92.0.4.159

sorted_ip:
062.000.003.075
063.000.003.084
075.000.003.080
092.000.004.159
119.000.006.159

If you want the decimal octets stripped of the padding afterwards, you can do that with printf or any number of other methods.  I'll leave that as homework.
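For reference, the same pad-then-sort trick can be sketched in Python, using the sample addresses from the query above:

```python
# Sort dotted-quad IPv4 strings correctly by zero-padding every octet to
# three digits first, mirroring the printf("%.3d", ...) + mvsort idea above.
ips = ["119.0.6.159", "62.0.3.75", "63.0.3.84", "75.0.3.80", "92.0.4.159"]

padded = sorted(
    ".".join(f"{int(octet):03d}" for octet in ip.split("."))
    for ip in ips
)
print(padded)
```

Once every octet is three digits wide, plain lexicographic string sorting agrees with numeric ordering.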
May we know whether you have a working Splunk environment (Splunk indexer(s), Linux UFs already sending logs to the indexer, required apps installed on the SH, etc.)?  If yes, please tell us exactly what you already have.  Or do you have nothing and want to start from zero?
Hello, I am working on a search to find domains queried via a particular host, and to list a count of hits per unique domain on the host, along with the username.  The search returns the domains individually, but each one shows up as a separate entry under each count (see the Actual Results below).  What I am looking to do is get the results to show only the value for the highest count of each domain, and to order these results from highest to lowest (see the Expected Results below).

index=foo Host=<variable>
| streamstats count(query) as Domains by User query Workstation
| eval combo=Domains +" : "+ query
| stats values(combo) as "Unique Hits : Domain" by User Workstation
| sort - combo

Actual Results (truncated):
1 : www.youtube.com
2 : history.google.com
3 : history.google.com

Expected Results (truncated):
3 : history.google.com
2 : mail.google.com
1 : www.youtube.com
Yes, you can do this using the Splunk HTTP Event Collector.   Here's a blog post showing an example of Splunk receiving data from a Zoom webhook: Splunking Webhooks with the HTTP Event Collector | Splunk
I was looking for this answer! This worked for me,  I have a Mac.
Sounds like you need this app from Splunkbase: Treeview Viz | Splunkbase
Does this page help at all?  There are some gotchas with this configuration depending on how the account was set up for the Slack integration: Slack Integration Guide | VictorOps Knowledge Base
If you're looking to reset the Splunk UF's user password, you can follow the instructions under the "Reset a lost administrator password" section on this page: Create secure administrator credentials - Splunk Documentation  
Thanks Yuanliu for your quick reply. The shared solution works for me, but now I am getting the error: 'list' command: Limit of '100' for values reached. Additional values may have been truncated or ignored. Could you please suggest how to resolve this without updating the limits.conf file? @yuanliu 
Dear Splunk new learners...  https://www.youtube.com/@siemnewbies101/playlists The primary objective of this YouTube channel is to teach Splunk newbies / new learners the SPL commands and, most importantly, regular expressions. Please check it out, thanks. 
I stand corrected.  Thanks, @richgalloway ! Now, @tom_porter will need to explain what "does not work" means.  To be clear, this phrase should be forbidden in a forum like this, as it conveys very little information even in the best scenarios. Illustrate/mock your data (anonymize as needed), show the code you tried, show actual results (anonymize as needed), illustrate/mock desired results, and explain the logic connecting the data and the desired results if it is not painfully obvious. Also explain the difference between the actual results and the desired results if it is not painfully obvious.
We fixed this issue by changing the HEC endpoint the data was being sent to from services/collector/raw to services/collector/event. More information here: https://docs.splunk.com/Documentation/Splunk/latest/Data/HECRESTendpoints
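The practical difference is the payload shape: the raw endpoint takes the bare event body, while the event endpoint expects a JSON envelope with an "event" key. A minimal Python sketch of an event-endpoint request follows; the host and token are placeholders, and the actual send is left commented out:

```python
import json
import urllib.request

# Placeholder HEC URL and token -- substitute your own values.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

# The event endpoint wraps the event body in a JSON envelope; the raw
# endpoint would instead receive just the event text as the POST body.
payload = {"event": {"message": "hello"}, "sourcetype": "_json"}

req = urllib.request.Request(
    HEC_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
)
# urllib.request.urlopen(req)  # uncomment to actually send
```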
Extract the CI from the URL and use that in the eval command.

| rex field=URL "https?:\/\/(?<url_ci>[^\.]+)"
| eval ci = coalesce(ci, url_ci)
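The rex pattern grabs everything between the scheme and the first dot, i.e. the first hostname label. The same extraction can be checked in Python (the sample URL is made up):

```python
import re

# Same pattern as the SPL rex above, in Python named-group syntax:
# capture the first hostname label after http:// or https://.
url = "https://appserver01.example.com/health"
m = re.search(r"https?://(?P<url_ci>[^.]+)", url)
ci = m.group("url_ci") if m else None
```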
The first query I gave you should have worked, but the logic you just suggested should work, too.  This query marks servers from the index as "indexed" and those from the lookup file as "lookup".  After combining the results by server name, it keeps only the servers found in the index but not in the lookup.

index=abc host=def_inven
| eval inven="indexed"
| append
    [| inputlookup mylookup.csv
     | eval inven="lookup" ]
| stats values(*) as * by server
| where (mvcount(inven)=1 AND isnotnull(mvfind(inven,"indexed")))
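Conceptually this is a set difference: servers seen in the index minus servers listed in the lookup. A tiny Python sketch with hypothetical server names:

```python
# Hypothetical data: servers emitting events vs. servers in the lookup
# (i.e. the monitored inventory).
indexed_servers = {"web01", "web02", "db01"}
lookup_servers = {"web01", "db01"}

# Servers present in the index but absent from the lookup -- the same
# rows the SPL where-clause keeps (mvcount=1 and flag "indexed").
unmonitored = indexed_servers - lookup_servers
print(sorted(unmonitored))
```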
Curly braces on the LHS of an eval tell Splunk to use the contents of the field as the new field name. See if this example query helps explain:

| makeresults
| eval base="file_watch"
| eval {base}= base.".exe"
| eval {base}exe=base.".exe"
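A rough Python analogue, treating an event as a dict, may make the indirection clearer: the *value* of one field becomes the *name* of a new field.

```python
# An event as a field->value dict.
event = {"base": "file_watch"}

# eval {base}=base.".exe"    -> new field named "file_watch"
event[event["base"]] = event["base"] + ".exe"

# eval {base}exe=base.".exe" -> new field named "file_watchexe"
event[event["base"] + "exe"] = event["base"] + ".exe"
```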
No, the lookup file contains a few servers which are monitored, but the index also contains some servers which are monitored, and I need to find which ones are not monitored. Is it possible to try something like this, for example: index=abc host=def_inven
• if a server is in the inventory, flag it: inven=something
• join with the lookup and flag those: splunk=something
so we can use | stats values by flag. Is this logic correct? If it is OK, please give me an exact query or suggest a different one.  @richgalloway