All Posts

The short answer is no. Splunk SPL is not a procedural language (like some other languages). Essentially, the if function can be used to modify what is assigned by an eval command to a new or existing field in the event, although you can have multiple assignments in the same eval command, e.g. | eval a=value1, b=value2
You can't do block ifs in Splunk, so you have to do all conditionals inside the | eval x=if(...) construct.
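For example, the block-if from the question below can be emulated by repeating the condition across eval assignments - a minimal sketch, assuming group (numeric) and groups (string) already exist as fields:

| eval group=if(field==1, group+1, group)
| eval groups=if(field==1, groups.",".group, groups)

Each field still gets exactly one assignment; the "do nothing" branch is expressed by assigning the field back to itself.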
There are a number of ways to achieve something like this:

- Use a tab mechanism (using a Splunk input of type="link") to show groups of panels
- Use a small visualisation to show a "thumbnail" and then expand the chart and remove the other thumbnails when clicking on the chart

These all generally work through panel dependencies and tokens to hide or show certain panels. The tab approach simply uses a <change> element in the <input> to set and unset tokens that show or hide the panels relating to that tab:

<input id="cascade_group" type="link" token="tab">
  <label>Cascade</label>
  <choice value="l1">Tab 1</choice>
  <choice value="l2">Tab 2</choice>
  <default>l1</default>
  <change>
    <condition value="l1">
      <unset token="show_l2"></unset>
      <set token="show_l1"></set>
    </condition>
    <condition value="l2">
      <unset token="show_l1"></unset>
      <set token="show_l2"></set>
    </condition>
  </change>
</input>

Use the <row depends="$show_l1$"> syntax to show rows/panels for the l1 panels, and the same for l2.

The thumbnail approach works like this: if you click on the second thumbnail, it expands to the chart below and removes the other thumbnails. This is done by setting the height attribute of the chart through a token set by drilldown, e.g. something like this (but a little more complex):

<option name="height">$varietal_height$</option>
<drilldown>
  <set token="varietal_height">800</set>
  <unset... tokens for other thumbnails>
</drilldown>

Go check out the XML reference and read about tokens and depends: https://docs.splunk.com/Documentation/Splunk/latest/Viz/PanelreferenceforSimplifiedXML
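To make the depends wiring concrete, here is a minimal sketch of a row tied to one of those tokens (the panel title and query are placeholders, not from the original answer):

<row depends="$show_l1$">
  <panel>
    <title>Tab 1 panel</title>
    <chart>
      <search>
        <query>index=_internal | timechart count</query>
      </search>
    </chart>
  </panel>
</row>

A matching <row depends="$show_l2$"> holds the Tab 2 panels; whichever token is set by the <change> handler decides which row renders.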
Is it possible to perform multiple operations in a single if condition, like what can be done in other languages? For example, in other scripting languages this can be done:

if(field==1){
    group=group+1;
    groups=groups+","+group;
} else {
    //this is a comment, do nothing
}

How can this be done in Splunk?
I don't think you're going to find an easy option - the push is to move to DS. Depending on how complex your DS dashboards are, I would suggest starting by just copying all the "query" elements in the DS JSON to an empty <table>...</table> template so all your searches are there, and then editing to change all the viz definitions, but that is going to be a manual process...
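A minimal sketch of the kind of empty template meant here, assuming classic Simple XML as the target (the query shown is a placeholder for each query string lifted from the DS JSON):

<dashboard>
  <label>Converted from Studio</label>
  <row>
    <panel>
      <table>
        <search>
          <query>index=_internal | stats count by sourcetype</query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>

One <panel> per copied query gets the searches across; converting each <table> to the right viz element is the manual part.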
Say I create a query that outputs (as a CSV) the last 14 days of hosts and the dest_ports each host has communicated on. Then I would inputlookup that CSV to compare against the last 7 days of the same type of data. What would be the simplest SPL to detect anomalies?
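One common shape for this, as a minimal sketch - the index name and the lookup name (baseline_ports.csv, with fields host and dest_port) are assumptions, and "anomaly" here just means a host/port pair absent from the 14-day baseline:

index=your_network_index earliest=-7d
| stats count by host dest_port
| lookup baseline_ports.csv host, dest_port OUTPUTNEW host AS in_baseline
| where isnull(in_baseline)

Pairs that match the baseline get in_baseline filled in by the lookup; the where keeps only the new, previously unseen pairs.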
You can't change Splunk's user interface - firstly, how would Splunk know what "unpleasant" means? If you want to show a timeline, then create a dashboard and you can do that in the dashboard. See this documentation: https://docs.splunk.com/Documentation/Splunk/9.0.2/Viz/ChartConfigurationReference
Assuming you don't have suitable admin permissions to directly manipulate knowledge objects, then the simplest way is:

Dashboard
- Edit the dashboard source and copy the dashboard XML
- Change to the new app, create a new dashboard, and paste the XML
- If you cannot edit the dashboard, but can clone it, then clone it privately, edit the clone, and copy as above

Lookup - assuming the existing lookup is app-visible only and NOT global in the source app, run

| inputlookup lookup_to_be_copied.csv | outputlookup my_tmp_copy.csv

then in the new app space do

| inputlookup my_tmp_copy.csv | outputlookup new_name_in_new_app.csv

This assumes that when you do the outputlookup, it will get private or global app permissions. If it gets global, then the new app will see this, but take care - you don't want 2 lookups of the same name with global scope. If it is output as private, then you should be able to 'upgrade' the permissions to app scope in the new app. Much will depend on the permissions you have.
@Raja_Selvaraj Take a look at this article on Process\% Processor Time: https://learn.microsoft.com/en-us/archive/technet-wiki/12984.understanding-processor-processor-time-and-process-processor-time

How many cores does your machine have?
If you start to mix hard-coded timestamps in your query with expectations from the time picker, I think you will get confused. Is this in a dashboard or a general search?

If you want a time picker to be used, then why is this not OK?

(index="A" OR index="B") "reports" "arts"

With this, both indexes will be searched for data. If the time picker is set to last 24 hours, then there will be no data found from index=A, and if your time picker is set to a range before 2024/06/01 then it will find no data from index=B. Whereas if your time picker is from 2024/05/01 to 2024/07/01, it will find data from both indexes.

Don't start trying to optimise - Splunk does not work the way you seem to be implying. Data in Splunk is stored in time buckets, so there will be NO time buckets for index=A after 2024/06/01, so there is no data to search, and the same for index=B for time before 2024/06/01. You don't need to worry about the efficiency of the search here - Splunk is good at this.
I have a new indexer set up for dev, and I need to move its default SPLUNK_DB path to the mount points we have set up for its hot/cold data.

Currently, we have storage allocated on drives for the cold and hot data, at /export/opt/splunk/data/<cold|hot>. I have ingested some test data with eventgen, and it ended up in /export/opt/splunk/var/lib/splunk/.

I would just copy everything over, update splunk-launch.conf, and edit $SPLUNK_DB to be /export/opt/splunk/data, but there are a lot of files under /export/opt/splunk/var/lib/splunk/. I really only have one index with data in it, the testindex index. What would be the best way to go about migrating all of the data from /export/opt/splunk/var/lib/splunk/ while making sure that future events get sent to the correct hot/cold databases? The files under /export/opt/splunk/var/lib/splunk/ don't specify hot or cold until I get into the specific directories. At this point, all of the data could be considered hot as it's new, but I'd like to confirm that any future events get sent to the correct index.

When I run echo $SPLUNK_DB, I do not get any output. When I run printenv, I do not see $SPLUNK_HOME or $SPLUNK_DB and their values. Within splunk-launch.conf, $SPLUNK_DB is commented out, and there isn't one set in local to specify it. So why does it default to /export/opt/splunk/var/lib/splunk/?

I saw this Splunk doc: https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/Moveanindex But I already have a directory I want - would I have to move each folder under the current db directory individually to ensure they land in the right place?

I'd just like some guidance on best practice for this indexer. I just have one SH, one indexer, and one forwarder.

Thanks for any help
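For reference, the settings under discussion would look something like this - a sketch only, using the paths from the question, not a complete migration procedure (Splunk must be stopped before moving buckets):

# splunk-launch.conf - global default for all index storage
SPLUNK_DB=/export/opt/splunk/data

# indexes.conf - per-index override when hot and cold sit on different volumes
[testindex]
homePath   = /export/opt/splunk/data/hot/testindex/db
coldPath   = /export/opt/splunk/data/cold/testindex/colddb
thawedPath = /export/opt/splunk/data/cold/testindex/thaweddb

When SPLUNK_DB is commented out, Splunk falls back to its built-in default of $SPLUNK_HOME/var/lib/splunk, which is why the test data landed there.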
I'm not sure I understood the steps shown in your reply and what was not there yet, but you have taken the right path: if your user in the index has extra data at the end, then using a subsearch will not work unless you, for example, add a * character to the user in the lookup, which may or may not be useful in your case. So the search | eval... | stats... | lookup | where approach, rather than a subsearch, is right - where are you not getting what you want?
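In outline, that shape looks like this - a minimal sketch with made-up names (your_index, users.csv with a user field, and a trailing "@domain" suffix as the example of extra data):

index=your_index
| eval user_clean=replace(user, "@.*$", "")
| stats count by user_clean
| lookup users.csv user AS user_clean OUTPUTNEW user AS in_lookup
| where isnotnull(in_lookup)

The eval strips the trailing data before the lookup, so exact matching works without wildcarding the lookup entries.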
You are right, partialcode is the second field - mvfilter has a few use cases, but I've generally found I'm always wanting to relate it to some other field, so since mvmap came along in Splunk 8, I almost never use mvfilter now - even when I could.
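To illustrate the difference - a hedged sketch with invented field names: mvfilter's predicate can only reference the multivalue field itself, while mvmap's expression can also reference other fields in the event:

| makeresults
| eval codes=split("A1,B2,A3",","), prefix="A"
| eval matching=mvmap(codes, if(like(codes, prefix."%"), codes, null()))

Inside mvmap, codes refers to each element in turn, and elements where the expression returns null are dropped, giving a filter that depends on prefix.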
Or one of the log files under var/log/splunk is flooding.
If you have only one UF and a few SHs and the internal index is still pausing, it's likely the system is running out of CPU due to high load/search activity, or there is some I/O performance issue.
My bad, it was missing the 'g' (global) flag at the end

| makeresults
| eval ip=split("010.1.2.3,10.013.2.3,010.001.002.003",",")
| mvexpand ip
| rex field=ip mode=sed "s/\b0+\B//g"
The log message is a bit generic. The reason for this message is that too many internal index log events arrived on that indexer, and as a result there are already 100+ tsidx files for the hot bucket in question. Unless splunk-optimize brings the count below 100, the indexer will remain paused. On the forwarder side, make sure not too many events hit the same indexer (see the sketch after this list):

1. On the SH/CM/UF you can enable volume-based forwarding
2. On all instances (SH/CM/UF/IDX), reduce unwanted metrics.log events
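Point 1 refers to the load-balancing settings in outputs.conf on the forwarding instance - a sketch, with the server names and volume value chosen arbitrarily here:

# outputs.conf on the SH/CM/UF
[tcpout:primary_indexers]
server = idx1:9997, idx2:9997
# switch indexer after this many bytes, rather than on a pure time interval
autoLBVolume = 1048576
autoLBFrequency = 30

autoLBVolume makes the forwarder rotate to another indexer once the byte threshold is hit, which spreads the internal-log load more evenly across the indexers.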
I have a new deployment of Splunk 9.2.1 Enterprise. We only have the Splunk servers running so far, other than one Universal Forwarder. I'm getting this error:

The index processor has paused data flow. Too many tsidx files in idx=_internal bucket="/opt/splunk/var/lib/splunk/_internaldb/db/hot_v1_57", waiting for the splunk-optimize indexing helper to catch up merging them. Ensure reasonable disk space is available, and that I/O write throughput is not compromised.

I have 4TB of available disk space, so I have no idea what's going on. Any thoughts?
Need a bit of clarification. Do you mean that the following is faster than, or about as fast as, your second search?

earliest=-6mon latest=now (index="A" "reports" "arts") OR (index="B" "reports" "arts")

In other words, in your first search, does setting earliest to last 6 months and latest to now (presumably in the time selector) run faster than, or as fast as, limiting each dataset in the search command?
Process names - but when analyzing the results, for a particular time frame there are multiple 100% CPU utilization readings for multiple Windows process names. Are these 100% utilizations for multiple process names on a single host or on multiple hosts? Your last stats is | timechart latest('CPU') by process_name, which aggregates across everything that matches host=*hostname*. Is there any reason why there must not be multiple 100%? Maybe you are looking to group by process_name AND host? timechart only accepts a single split-by field, so combine them first:

index=tuuk_perfmon source="Perfmon:Process" counter="% Processor Time" host=*hostname* (instance!="_Total" AND instance!="Idle" AND instance!="System")
| eval CPU=round(process_cpu_used_percent,2)
| eval series=process_name.":".host
| timechart latest(CPU) by series

The output will not be pretty but it's an idea.