All Posts

You are correct. Thanks for pointing out this subtle behavior of latest(). In addition to tstats, I verified that this behavior exists in stats as well; in fact, it applies to any multivalue data, not just JSON arrays. (I don't believe that a latest_values() would really solve the problem, because | stats values() discards the original order; a latest_list() would work, but tstats doesn't support list() to begin with.)
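A minimal reproduction of the behavior, for anyone landing here (the field and tag names are made up):

| makeresults
| eval id="host1", tags=split("env:prod,team:web,owner:ops", ",")
| stats latest(tags) as latest_tags, values(tags) as value_tags by id
``` latest() keeps only one value of the multivalue field; values() keeps them all, but in lexicographic order rather than the original order ```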
How do I clone a dashboard and lookup tables from one app to another in Splunk?
What you can do is monitor process or service events from Windows or Linux systems and check whether Wireshark is being run. You can't see the packets directly in Splunk from the Wireshark app, unless the user left behind pcap files, which can be collected and read by the Splunk Stream app. You will need to first look at your systems, run Wireshark, and analyse the processes or services that are running, then look at the various TAs to help ingest the data and monitor using the various fields that contain it. To monitor processes and services, look at the Windows Sysmon or Windows TA, and the Splunk Nix TA for Linux-based systems. (These will also show users logged on, what commands were run, etc., and you can use SPL to analyse.) Look at the links below and explore the various options available to you.

#Sysmonlog https://splunkbase.splunk.com/app/5709
#Windows TA https://splunkbase.splunk.com/app/742
#Nix TA https://splunkbase.splunk.com/app/3412
#Stream App + Pcap https://docs.splunk.com/Documentation/StreamApp/8.1.1/DeployStreamApp/UseStreamtoparsePCAPfiles
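Once a Sysmon feed is in place, a sketch of what the "who launched Wireshark" search could look like (the index and sourcetype are assumptions; adjust them to how the TA is configured in your environment):

``` Sysmon EventCode 1 = process creation ```
index=windows sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=1 Image="*\\wireshark.exe"
| table _time, host, User, Image, ParentImage, CommandLine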
Good, it's working. And yes, there are lots of moving parts, configs, and scenarios with Splunk.
As data will route via the Heavy Forwarder, it needs to communicate with your License Manager (LM) server. It doesn't need its own licence; you just need to point the HF to the LM.
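In practice that usually just means a [license] stanza in server.conf on the HF, something like the sketch below (the hostname is a placeholder; 9.x uses manager_uri, older releases used master_uri):

[license]
manager_uri = https://your-license-manager:8089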
I am looking to place a heavy forwarder in Azure and have it forward events/data to the main indexer, with one method using an HTTP (HEC) token. The heavy forwarder will just be used to forward data, not to index or search. Will the HF need its own license, and how will it relate to the license server?
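For reference, the HTTP-token output mentioned here is configured on recent versions via an httpout stanza in outputs.conf on the forwarder, roughly like this sketch (the token and URI are placeholders; check the outputs.conf spec for your version):

[httpout]
httpEventCollectorToken = <your-hec-token>
uri = https://your-indexer:8088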
I was testing out a lot of different things. I know I definitely did edit distsearch.conf manually; I did most everything from the CLI. Redoing and moving the License Manager through the GUI fixed some of the issues, as I can now search the data. Thanks
Yep, I've conceded to storing the data as a KV store lookup (it's a large table). It is a struggle because I have a personal dislike for lookups: the search logic is abstracted away, and the stanza is a pain in the butt to locate in a savedsearches.conf file. Why use lookups when tstats gives the result in 3 seconds? I could save the tstats search as a macro too.
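For the macro idea, a hypothetical macros.conf entry (the stanza name and the tstats search are purely illustrative):

[asset_table]
definition = tstats latest(_time) as _time where index=assets by host

It could then be called inline as | `asset_table` wherever the lookup would otherwise be referenced.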
I think the unfortunate reality is that this specific JSON structure for tags is equivalent to a multivalue field when converted with spath. When you perform latest() like you suggested, even on a multivalue field, it returns only a single value from the multivalue list, so mvcount() will always return exactly 1 (or 0 if empty). values() has to be used instead of latest() to capture the full list, BUT there is no guarantee all values are actually from the latest _time. If a tag value was removed 2 days ago, I don't want that tag in my report. This is why I try to parse _raw to capture whole JSON objects with all tags nested within; this way, the entire JSON object survives the latest() function with its nested tag list intact. Note that AWS's IMDSv2 metadata has this exact same JSON structure, so this observed problem persists in the AWS space as well. As instance tagging is an industry standard, this isn't some random edge case. What Splunk is really missing is something like a latest_values() function, which would behave like values() but only return values whose _time matches latest(_time) for a unique identifier field.
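A sketch of the _raw workaround described above (the index, sourcetype, and field names are assumptions):

index=cloud sourcetype=instance_metadata
| stats latest(_raw) as _raw by instance_id
``` the newest whole JSON object survives per instance, nested tag list intact ```
| spath input=_raw path=tags{} output=tags
| table instance_id, tags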
The goal I have is to track when a user launches Wireshark, and I want to see which user launched it. I also want to see what the user is doing within the application, such as packets that were captured, etc. Is this possible to do within Splunk Enterprise? Are there any additional apps I will need to make the activity easily readable?
I am planning to build a Splunk dashboard for monitoring connection issues from various sources. Specifically, I need to identify when a connection fails or when an application stops sending data to Splunk, and display these issues on the dashboard. The data sources include:

- Application server universal forwarder to our Splunk heavy forwarder
- HEC (HTTP Event Collector)
- Various add-ons (e.g., Azure add-on, AWS add-on, DB Connect add-on)

I am aware that many logs can be found under index=_internal, but I need assistance in identifying the logs that pertain to real-time errors or connection failures. Could you please help me with this?
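Two hedged starting points (the index name and lag threshold are arbitrary, and component values vary by version, so validate against your own _internal data):

``` hosts that have gone quiet ```
| metadata type=hosts index=main
| eval minutes_since_last_event=round((now() - recentTime) / 60, 1)
| where minutes_since_last_event > 60

``` splunkd-side forwarding and HEC errors ```
index=_internal sourcetype=splunkd log_level=ERROR (component=TcpOutputProc OR component=HttpInputDataHandler)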
It's most likely some config setting under the hood, as you should be able to use the SH as licence manager. Not saying you did this, but did you set distsearch.conf manually? If you look here, it states:

You must specify the non-clustered search peers through either Splunk Web or the CLI. Due to authentication issues, you cannot specify the search peers by directly editing distsearch.conf. When you add a search peer with Splunk Web or the CLI, the search head prompts you for public key credentials. It has no way of obtaining those credentials when you add a search peer by directly editing distsearch.conf.

https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/Configureclusteredandnonclusteredsearch
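The CLI route looks roughly like this (the host and credentials are placeholders):

splunk add search-server https://peer-host:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme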
After a bit of work, I made the indexer the License Manager. I already wanted the SH to also serve as the DMC, and I'm not sure what was happening, but I made the indexer the License Manager, confirmed a few settings, and I was able to add a new search peer. That window now allows me to see search peers under Distributed peers. Not entirely sure what the problem was. But in this instance, I am trying to get one indexer, one SH, and one forwarder working: the indexer as License Manager, the forwarder just as a forwarder, and hopefully the SH as a SH and DMC. Thanks for the help.
Hi @Naresh.Pula,

Since it's been a few days with no reply from the Community, is there a chance you've found a solution you can share? If you still need help, you can contact Cisco AppDynamics Support here: How do I submit a Support ticket? An FAQ
Hi community,

My forwarder was putting logs in index A before 2024/06/01, and in index B after this date. To avoid missing any data when searching, I have to use a query which searches both indexes:

(index="A" "reports" "arts") OR (index="B" "reports" "arts")

In this case, I believe that if I now select "last 24 hours" in the time selector, the query will still search index A, which is unnecessary. I guessed it would be more efficient if I added a time limit to the first part, to limit the range of events:

(earliest=-6mon latest="06/01/2024:00:00:00" index="A" "reports" "arts") OR (earliest="06/01/2024:00:00:00" index="B" "reports" "arts")

I expected Splunk to take the intersection of the two time ranges, but it doesn't. I noticed that adding these surprisingly slows down the query: the earliest and latest I added override the time selector, so even though I selected "last 24 hours", it returns events from the past 6 months of index A.

Again, my first query gives the correct result, but I'm still wondering if there's a way to improve the efficiency around the 06/01 cutover date. Any suggestions are appreciated!
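To illustrate the override the poster describes: once any inline time modifier appears in a clause, the picker no longer constrains that clause, so both ends have to be pinned explicitly. A sketch (the -24h bound is illustrative; the cutover date is from the post):

(index="A" earliest=-24h latest="06/01/2024:00:00:00" "reports" "arts") OR (index="B" earliest="06/01/2024:00:00:00" latest=now "reports" "arts")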
Thanks, we are getting closer. The user and nt_host fields are the link between the two searches (the win index and the inputlookup). In the win index, the user field has a special character at the end, which I have stripped with this eval command:

| eval user=replace(user,"[^[:word:]]","")

The suggested search was:

search index=win EventCode=4725 src_user="*"
    [ | inputlookup Assets | rename nt_host as user | fields user ]
    ``` The subsearch above will restrict the search to user=nt_host ```
| stats count by src_user, EventCode, signature, user
    ``` And this lookup will then fetch the DN - it can be done after the stats as the data does not change for the group by user ```
| lookup Assets nt_host as user OUTPUT nt_host distinguishedName
    ``` Now remove all the ones that were not in the Assets lookup ```
| where isnotnull(nt_host)

I modified it to include the removal of the special character; in the events, it now shows the correct impacted user. Getting close.

search index=win EventCode=4725 src_user="*"
| eval user=replace(user,"[^[:word:]]","")
| stats count by src_user, EventCode, signature, user
| lookup Assets nt_host as user OUTPUT nt_host distinguishedName
| where isnotnull(nt_host)
| fields src_user, EventCode, signature, user, nt_host, distinguishedName

End goal:

src_user   EventCode   user      nt_host   distinguishedName
service    4725        device1   device1   CN=device1,OUComputers,OU,Agency
service    4725        device2   device2   CN=device2,OUComputers,OU,Agency
service    4725        device3   device3   CN=device3,OUComputers,OU,Agency
service    4725        device4   device4   CN=device4,OUComputers,OU,Agency
service    4725        device5   device5   CN=device5,OUComputers,OU,Agency
Hi all, Can you please help me with a Splunk query to list the Windows process names and CPU utilizations for a particular hostname? I have made the query as follows:

index=tuuk_perfmon source="Perfmon:Process" counter="% Processor Time" host=*hostname* (instance!="_Total" AND instance!="Idle" AND instance!="System")
| eval CPU=round(process_cpu_used_percent,2)
| timechart latest(CPU) by process_name

With the above query, I am able to get the CPU utilization results for the listed Windows process names, but when analyzing the results, for a particular time frame there are multiple Windows process names at 100% CPU utilization. Could someone please suggest or validate whether I am getting valid results, and also the reason for the multiple 100% CPU utilizations?
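One likely explanation, offered tentatively: Perfmon's "% Processor Time" for a process is scaled to a single core, so on an N-core host a process can report up to N x 100, and several processes can legitimately sit at 100% at the same time. A sketch that normalizes by core count (the divisor 8 is a placeholder for the actual core count of the host):

index=tuuk_perfmon source="Perfmon:Process" counter="% Processor Time" host=*hostname* (instance!="_Total" AND instance!="Idle" AND instance!="System")
| eval CPU=round(process_cpu_used_percent / 8, 2) ``` divide by the host's core count ```
| timechart latest(CPU) by process_name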
In the indexer, the search for data returns a timeline and details. The timeline is always green. This is fine for queries returning pleasant results. However, when the query returns unpleasant results, I would like to use red.
Thanks for your detailed answer. It worked. Appreciated.
v9.2.0.1. The Monitoring Console in Splunk Manager is not displaying volume information. All panels say "Search is waiting for input...". When I open the search of a given panel, the query it opens to is an "All time" query with "undefined" in the query box. I'm trying to monitor volume (and index) size/space using the Monitoring Console, under Indexing > Volume Details (../app/splunk_monitoring_console/volume_detail_deployment...). The dashboards/panels populate fine when looking at index data, but are empty when trying to view volume data. The index panels have a nice REST query, and the volume panels are all "undefined". Any thoughts on a fix? Cheers,
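While the volume panels are broken, a stopgap for per-index disk usage (dbinspect is standard; the aggregation here is an assumption about what you want to see):

| dbinspect index=*
| stats sum(sizeOnDiskMB) as size_on_disk_mb by index
| sort - size_on_disk_mb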