All Posts


@Raja_Selvaraj Take a look at this article on Process\% Processor Time: https://learn.microsoft.com/en-us/archive/technet-wiki/12984.understanding-processor-processor-time-and-process-processor-time
How many cores does your machine have?
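The short version of that article: Process\% Processor Time is measured per logical core, so on an N-core box a single process can legitimately report up to N x 100%. If you want the value as a share of the whole machine, a rough sketch like this works (the Value field name assumes the Splunk Add-on for Windows perfmon format, and the core count of 4 is a placeholder you would replace):

index=perfmon source="Perfmon:Process" counter="% Processor Time" instance!="_Total"
| eval pct_of_machine=round(Value/4, 2)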
If you start to mix hard-coded timestamps in your query with expectations from the time picker, I think you will get confused. Is this in a dashboard or a general search?

If you want a time picker to be used, then why is this not OK?

(index="A" OR index="B") "reports" "arts"

Both indexes will be searched for data. If the time picker is set to last 24 hours, there will be no data found from index=A; if it is set to a range before 2024/06/01, it will find no data from index=B; and if it spans 2024/05/01 to 2024/07/01, it will find data from both indexes.

Don't start trying to optimise - Splunk does not work the way you seem to be implying. Data in Splunk is stored in time buckets, so there will be NO time buckets for index=A after 2024/06/01 and none for index=B before 2024/06/01; there is simply no data to search. You don't need to worry about the efficiency of this search - Splunk is good at this.
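For contrast, if you really did want to hard-code the boundary instead of relying on the time picker, it would look something like this (a sketch only, using the 2024/06/01 cutover date from above):

(index="A" latest="06/01/2024:00:00:00" "reports" "arts") OR (index="B" earliest="06/01/2024:00:00:00" "reports" "arts")

But as said above, there is no need - the empty time buckets cost nothing to skip.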
I have a new indexer set up for dev, and I need to move its default SPLUNK_DB path to the mountpoints we have set up for its cold and hot data.

Currently, we have storage allocated on drives for the cold and hot data, at /export/opt/splunk/data/<cold|hot>. I have ingested some test data with eventgen, and it ended up in /export/opt/splunk/var/lib/splunk/.

I would just copy everything over, update splunk-launch.conf, and edit $SPLUNK_DB to be /export/opt/splunk/data, but there are a lot of files under /export/opt/splunk/var/lib/splunk/. I really only have one index with data in it, the testindex index. What would be the best way to migrate all of the data from /export/opt/splunk/var/lib/splunk/ while making sure that future events get sent to the correct hot/cold databases? The directories under /export/opt/splunk/var/lib/splunk/ don't specify hot or cold until I get into the specific index directories. At this point, all of the data could be considered hot as it's new, but I'd like to confirm that any future events get sent to the correct place.

When I run echo $SPLUNK_DB, I do not get any output. When I run printenv, I do not see $SPLUNK_HOME or $SPLUNK_DB and their values. Within splunk-launch.conf, SPLUNK_DB is commented out, and there isn't one set in local to specify it. So why does it default to /export/opt/splunk/var/lib/splunk/?

I saw this Splunk doc: https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/Moveanindex
But I already have a directory I want. Would I have to move each folder under the current db directory individually to ensure they land in the right place? I'd just like some guidance on best practice for this indexer. I just have one SH, one indexer, and one forwarder.

Thanks for any help
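For reference, here is a sketch of what I think the two relevant config pieces would look like (paths are from my environment; the indexes.conf stanza is illustrative, not something I have applied yet):

# $SPLUNK_HOME/etc/splunk-launch.conf
SPLUNK_DB=/export/opt/splunk/data

# alternatively, per index in indexes.conf:
[testindex]
homePath   = /export/opt/splunk/data/hot/testindex/db
coldPath   = /export/opt/splunk/data/cold/testindex/colddb
thawedPath = /export/opt/splunk/data/cold/testindex/thaweddb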
I'm not sure I understood the steps shown in your reply and what was still missing, but you have taken the right path: if the user value in the index has extra data at the end, then a subsearch will not work unless you, for example, add a * character to the user value in the lookup, which may or may not be useful in your case. So the search | eval... | stats... | lookup | where approach, rather than a subsearch, is right. Where are you not getting what you want?
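To illustrate the wildcard idea: if the lookup's user values end in a * character, the lookup definition needs WILDCARD matching enabled. A minimal sketch, with the lookup name and field names made up for illustration:

# transforms.conf
[user_lookup]
filename = users.csv
match_type = WILDCARD(user)

# then in the search:
| lookup user_lookup user OUTPUT department
| where isnotnull(department)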
You are right, partialcode is the second field. mvfilter has a few use cases, but I've generally found I always want to relate the filter to some other field, so since mvmap came along in Splunk 8, I almost never use mvfilter - even when I could.
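As a toy sketch of why (field names made up for illustration): mvfilter's predicate may only reference the one multivalue field it filters, whereas mvmap lets the expression pull in other fields, and values for which the expression is null are dropped:

| makeresults
| eval codes=split("AB123,CD456,AB789",",")
| eval prefix="AB"
| eval matched=mvmap(codes, if(substr(codes,1,2)=prefix, codes, null()))

mvfilter couldn't use prefix here at all.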
Or one of the log files under var/log/splunk is flooding.
If you have only one UF and a few SHs and the _internal index is still pausing, it's likely the system is running out of CPU due to high load/search activity, or there is some I/O performance issue.
My bad, it was missing the 'g' (global) flag at the end:

| makeresults
| eval ip=split("010.1.2.3,10.013.2.3,010.001.002.003",",")
| mvexpand ip
| rex field=ip mode=sed "s/\b0+\B//g"

The \b0+\B pattern strips the leading zeros from each octet while leaving a lone 0 intact.
The log message is a bit generic. The reason for it is that too many internal index log events arrived on that indexer, and as a result there are already 100+ tsidx files for the hot bucket in question. Unless splunk-optimize brings the count below 100, the indexer will remain paused. On the forwarder side, make sure not too many events hit the same indexer:
1. On the SH/CM/UF you can enable volume-based forwarding (see the sketch below).
2. On all instances (SH/CM/UF/IDX), reduce unwanted metrics.log events.
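For point 1, a sketch of volume-based forwarding in outputs.conf on the forwarding instances (server names and the byte/second values are illustrative, not recommendations):

# outputs.conf
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
autoLBVolume = 1048576
autoLBFrequency = 30

This makes the forwarder switch indexers after roughly 1 MB or 30 seconds, whichever comes first, spreading the internal log volume across the tier.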
I have a new deployment of Splunk 9.2.1 Enterprise. We only have the Splunk servers running so far, other than one Universal Forwarder. I'm getting this error:

The index processor has paused data flow. Too many tsidx files in idx=_internal bucket="/opt/splunk/var/lib/splunk/_internaldb/db/hot_v1_57", waiting for the splunk-optimize indexing helper to catch up merging them. Ensure reasonable disk space is available, and that I/O write throughput is not compromised.

I have 4TB of available disk space, so I have no idea what's going on. Any thoughts?
I need a bit of clarification. Do you mean that the following is faster than, or about as fast as, your second search?

earliest=-6mon latest=now (index="A" "reports" "arts") OR (index="B" "reports" "arts")

In other words, in your first search, is setting earliest to last 6 months and latest to now (presumably in the time selector) faster than, or as fast as, limiting each dataset in the search command?
Process names, but when analyzing the results, for a particular time frame there are multiple 100% CPU utilization readings for multiple Windows process names.

Are these 100% utilizations for multiple process names on a single host or on multiple hosts? Your last stats is | timechart latest('CPU') by process_name, which aggregates across everything that matches host=*hostname*. Is there any reason why there must not be multiple 100% values? Maybe you are looking to group by process_name AND host? Since timechart only accepts a single split-by field, combine the two with an eval:

index=tuuk_perfmon source="Perfmon:Process" counter="% Processor Time" host=*hostname* (instance!="_Total" AND instance!="Idle" AND instance!="System")
| eval CPU=round(process_cpu_used_percent,2)
| eval series=process_name.":".host
| timechart latest(CPU) by series

The output will not be pretty but it's an idea.
You are correct. Thanks for pointing out this subtle behavior of latest. In addition to tstats, I verified that this behavior exists in stats as well; in fact, it applies to any multivalue data, not just JSON arrays. (I don't believe that latest_values will really solve the problem because | stats values() discards the original order; a latest_list would work, but tstats doesn't support list() to begin with.)
How do I clone a dashboard and lookup tables from one app to another in Splunk?
What you can do is monitor process or service events from Windows or Linux systems and check whether Wireshark is being run. You can't pull the packets directly into Splunk from the Wireshark app, unless the user left behind pcap files, which can be collected and read by the Splunk Stream app.

You will need to first look at your systems, run Wireshark, and analyse the processes or services that are running, then look at the various TAs to help ingest the data and monitor using the various fields that contain it. To monitor processes and services, look at the Windows Sysmon or Windows TA, and the Splunk Nix TA for Linux-based systems. (These will also show users logged on, what commands were run, etc., and you can use SPL to analyse - see the example search after the links.) Look at the links below and explore the various options available to you.

#Sysmon TA https://splunkbase.splunk.com/app/5709
#Windows TA https://splunkbase.splunk.com/app/742
#Nix TA https://splunkbase.splunk.com/app/3412
#Stream App + pcap https://docs.splunk.com/Documentation/StreamApp/8.1.1/DeployStreamApp/UseStreamtoparsePCAPfiles
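For example, once Sysmon data is flowing, a sketch like this would surface Wireshark launches (the index name is an assumption; EventCode 1 is Sysmon's process-creation event):

index=windows sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=1 Image="*\\wireshark.exe"
| stats count latest(_time) as last_seen by host, User, Image, CommandLine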
Good, it's working - and yes, there are lots of moving parts/configs and scenarios with Splunk.
As data will route via the heavy forwarder, it needs to communicate with your License Manager (LM) server. It doesn't need its own licence; you just need to point the HF to the LM.
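A sketch of the pointing, assuming Splunk 9.x (on older releases the setting is named master_uri, and the hostname here is a placeholder):

# server.conf on the heavy forwarder
[license]
manager_uri = https://license-manager.example.com:8089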
I am looking to place a heavy forwarder in Azure and have it forward events/data to the main indexer, with one input method using an HTTP token. The heavy forwarder will just be used to forward data, not to index or search. Will the HF need its own license, and how will it relate to the license server?
I was testing out a lot of different things. I know I definitely did edit distsearch.conf manually. I did almost everything from the CLI. Redoing and moving the License Manager through the GUI fixed some of the issues, as I can now search the data.

Thanks
Yep, I've conceded to storing the data as KV store lookups (it's a large table). It is a struggle because I have a personal dislike for lookups, due to the search logic being abstracted, and their stanzas are a pain in the butt to locate in a savedsearches.conf file. Why use lookups when tstats gives the result in 3 seconds? I could save the tstats search as a macro too.
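For what it's worth, wrapping the tstats in a macro is just a macros.conf stanza; a sketch with made-up names:

# macros.conf
[large_table_summary]
definition = | tstats latest(sourcetype) as sourcetype where index=* by host
iseval = 0

# used in a search as:
`large_table_summary`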