All Posts


Yep, I've conceded to storing the data as KV Store lookups (it's a large table). It's a struggle because I have a personal dislike for lookups: the search logic is abstracted away, and the stanza is a pain in the butt to locate in a savedsearches.conf file. Why use lookups when tstats gives the result in 3 seconds? The tstats search could be saved as a macro, too.
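To sketch what I mean by the macro option (the macro name, index, and fields here are illustrative, not my actual config):

# macros.conf, on the search head (illustrative names only)
[asset_summary_tstats]
definition = | tstats latest(_time) as last_seen count where index=example_idx by host

Any search can then just call `asset_summary_tstats` instead of going through the lookup.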
I think the unfortunate reality is that this specific JSON structure for tags is equivalent to a multivalue field when converted with spath. When you perform latest() like you suggested, even latest() on a multivalue field returns only a single value from the multivalue list, so mvcount() will always return exactly 1 (or 0 if empty). values() has to be used instead of latest() to capture the full list, BUT there is no guarantee all of those values actually belong to the latest _time. If a tag value was removed 2 days ago, I don't want that tag in my report. This is why I try to parse _raw to capture whole JSON objects with all tags nested within; that way, the entire JSON object survives the latest() function with its nested tag list intact. Note that AWS's IMDSv2 metadata has this exact same JSON structure, so the observed problem persists in the AWS space as well. Since instance tagging is an industry standard, this isn't some random edge case. What Splunk is really missing is something like a latest_values() function, which would behave like values() but only return values whose _time matches latest(_time) for a unique identifier field.
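A minimal sketch of the _raw workaround; the index, sourcetype, and instance_id field are assumptions about the data:

index=example_idx sourcetype=example_json
| stats latest(_raw) as _raw by instance_id
| spath input=_raw path=tags{} output=tag
| stats values(tag) as current_tags by instance_id

Because latest(_raw) keeps the newest whole event per instance, the tags extracted afterwards can only come from that latest snapshot.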
The goal I have is to track when a user launches Wireshark: I want to see which user launched it. I also want to see what the user is doing within the application, such as packets that were captured, etc. Is this possible to do within Splunk Enterprise? Are there any additional apps I will need to make the activity easily readable?
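For the launch-tracking half, I imagine something along these lines, assuming Windows Security process-creation events (EventCode 4688) are already being collected; the index name is a placeholder:

index=example_wineventlog EventCode=4688 New_Process_Name="*\\wireshark.exe"
| table _time host user New_Process_Name

My understanding is that the in-application activity (captured packets, etc.) would need Wireshark's own logs or host-level auditing, since process-creation events alone won't show it.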
I am planning to build a Splunk dashboard for monitoring connection issues from various sources. Specifically, I need to identify when a connection fails or when an application stops sending data to Splunk, and display these issues on the dashboard. The data sources include:
- Application server universal forwarder to our Splunk heavy forwarder
- HEC (HTTP Event Collector)
- Various add-ons (e.g., Azure add-on, AWS add-on, DB Connect add-on)
I am aware that many logs can be found under index=_internal, but I need assistance in identifying the necessary logs that pertain to real-time errors or connection failures. Could you please help me with this?
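For the "stopped sending data" case, this is the kind of starting point I have in mind; the 15-minute threshold is an arbitrary assumption:

| metadata type=hosts index=*
| eval minutes_silent = round((now() - recentTime) / 60, 0)
| where minutes_silent > 15
| convert ctime(recentTime) as last_seen
| table host last_seen minutes_silent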
It's most likely some config setting under the hood, as you should be able to use the SH as licence manager. Not saying you did this, but did you set distsearch.conf manually? If you look here, it states:

You must specify the non-clustered search peers through either Splunk Web or the CLI. Due to authentication issues, you cannot specify the search peers by directly editing distsearch.conf. When you add a search peer with Splunk Web or the CLI, the search head prompts you for public key credentials. It has no way of obtaining those credentials when you add a search peer by directly editing distsearch.conf.

https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/Configureclusteredandnonclusteredsearch
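For reference, the CLI form looks like this (host, port, and credentials below are placeholders):

splunk add search-server https://peer01.example.com:8089 -auth admin:changeme -remoteUsername admin -remotePassword peerpass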
After a bit of work, I made the indexer the License Master. I already wanted the SH to also serve as the DMC, and I'm not sure what was happening, but I made the indexer the License Master, confirmed a few settings, and I was able to add a new search peer. That window now allows me to see search peers under Distributed peers. Not entirely sure what the problem was, but in this instance I am trying to get one indexer, one SH, and one forwarder working: I made the indexer the License Master, the forwarder just a forwarder, and hopefully the SH as a SH and DMC. Thanks for the help.
Hi @Naresh.Pula, Since it's been a few days with no reply from the Community, is there a chance you've found a solution you can share? If you still need help, you can contact Cisco AppDynamics Support here: How do I submit a Support ticket? An FAQ 
Hi community,

My forwarder is putting logs in index A before 2024/06/01, and in index B after this date. To avoid missing any data when searching, I have a query which searches both indexes:

(index="A" "reports" "arts") OR (index="B" "reports" "arts")

In this case, I believe that if I now select "last 24 hours" in the time selector, the query will still search index A, which is unnecessary. I guessed it would be more efficient if I added a time limit to the first part, to restrict the range of events:

(earliest=-6mon latest="06/01/2024:00:00:00" index="A" "reports" "arts") OR (earliest="06/01/2024:00:00:00" index="B" "reports" "arts")

I expected Splunk to take the intersection of the two time ranges, but it doesn't. I noticed that adding these surprisingly slows down the query: the "earliest" and "latest" I added override the time selector, so even though I selected "last 24 hours", it returns events from the past 6 months of index A.

Again, my first query gives the correct result, but I'm still wondering if there's a way to improve the efficiency around the 2024/06/01 cutoff date. Any suggestions are appreciated!
Thanks, we are getting closer. The user and nt_host fields are the link between the two searches (the index search and the inputlookup). In the win index subsearch, the user field has a special character at the end, which I strip with this eval command:

| eval user=replace(user,"[^[:word:]]","")

First attempt, with the subsearch:

search index=win EventCode=4725 src_user="*" [ | inputlookup Assets | rename nt_host as user | fields user ]
``` The subsearch above will restrict the search to user=nt_host ```
| stats count by src_user, EventCode, signature, user
``` And this lookup will then fetch the DN - it can be done after the stats as the data does not change for the group by user ```
| lookup Assets nt_host as user OUTPUT nt_host distinguishedName

Second attempt, without the subsearch:

search index=win EventCode=4725 src_user="*"
| stats count by src_user, EventCode, signature, user
``` And this lookup will then fetch the DN - it can be done after the stats as the data does not change for the group by user ```
| lookup Assets nt_host as user OUTPUT nt_host distinguishedName
``` Now remove all the ones that were not in the Assets lookup ```
| where isnotnull(nt_host)

I modified the lookup version to include the removal of the special character. In the events, it shows the correct user impacted. Getting close.

search index=win EventCode=4725 src_user="*"
| eval user=replace(user,"[^[:word:]]","")
| stats count by src_user, EventCode, signature, user
| lookup Assets nt_host as user OUTPUT nt_host distinguishedName
| where isnotnull(nt_host)
| fields src_user, EventCode, signature, user, nt_host, distinguishedName

Index + Inputlookup (current result):

src_user   EventCode   user      nt_host   distinguishedName
service    4725        device1   device1   CN=device1,OUComputers,OU,Agency
service    4725        device2   device2   CN=device2,OUComputers,OU,Agency
service    4725        device3   device3   CN=device3,OUComputers,OU,Agency
service    4725        device4   device4   CN=device4,OUComputers,OU,Agency
service    4725        device5   device5   CN=device5,OUComputers,OU,Agency

End Goal: the same columns and rows as above.
Hi all,

Can you please help me with a Splunk query to list the Windows process names and CPU utilization for a particular hostname? I have made the query as follows:

index=tuuk_perfmon source="Perfmon:Process" counter="% Processor Time" host=*hostname* (instance!="_Total" AND instance!="Idle" AND instance!="System")
| eval 'CPU'=round(process_cpu_used_percent,2)
| timechart latest('CPU') by process_name

With the above query, I am able to get CPU utilization results for the listed Windows process names, but when analyzing the results, for a particular time frame there are multiple Windows process names showing 100% CPU utilization. Could someone please suggest or validate whether I am getting valid results, and also the reason for the multiple 100% readings?
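(One thing worth noting: the Process object's "% Processor Time" counter is measured on a per-core basis, so on a multi-core host several processes can legitimately report 100% at the same time. A hedged sketch that normalizes by core count; core_count=8 is a placeholder to replace with the host's actual count:

index=tuuk_perfmon source="Perfmon:Process" counter="% Processor Time" host=*hostname* (instance!="_Total" AND instance!="Idle" AND instance!="System")
| eval core_count=8
| eval cpu_normalized=round(process_cpu_used_percent / core_count, 2)
| timechart latest(cpu_normalized) by process_name)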
In the indexer, the search for data returns a timeline and details. The timeline is always green. This is fine for queries returning pleasant results; however, when the query returns unpleasant results, I would like to use red.
Thanks for your detailed answer. It worked. Appreciated.
v9.2.0.1: the Monitoring Console in Splunk manager is not displaying volume information. All panels say "Search is waiting for input...". When I open the search for a given panel, it opens to an "All time" query with "undefined" in the query box. I'm trying to monitor volume (and index) size/space using the Monitoring Console: Indexing > Volume Details (../app/splunk_monitoring_console/volume_detail_deployment...). The dashboards/panels populate fine when looking at index data, but are empty when trying to view volume data. The index panels have a nice REST query, while the volume panels are all "undefined". Any thoughts on a fix? Cheers,
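(As a stopgap while the panels are broken, per-partition disk usage can be pulled directly over REST, similar to what some DMC panels do; this is a general sketch, not the exact query the broken volume panels use:

| rest splunk_server=local /services/server/status/partitions-space
| fields mount_point, capacity, free, fs_type)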
I had the same problem on a UF. Checking the sourcetype props, I noticed that the "magic 6" settings were on the agent. After deleting them, the collection worked again.
Hello deepakc,

Thank you very much for this information. This forum is great. Kudos to you for helping me understand the "internals" of Splunk.

eholz1
I set the alert to High and Security Domain = Network, but in the Incident Review interface it appears as Low with Security Domain = Threat, and every event is classified like this, as shown in the attached images.
I am loading the data into Splunk using the [Settings > Add Data] feature only. Please tell me the exact time settings that need to be configured.
Hi @bowesmana  Your solution worked and you provided better example than Splunk documentation I appreciate your help. Thanks I thought I used one field on my mvfilter, which is fullcode...  I ... See more...
Hi @bowesmana

Your solution worked, and you provided a better example than the Splunk documentation. I appreciate your help. Thanks.

I thought I only used one field in my mvfilter, which is fullcode... I guess partialcode is considered the second field:

| eval fullcode2=mvfilter(match(fullcode,partialcode))
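(For later readers: mvfilter() is documented to reference only one field, which is why the expression above misbehaves. One possible workaround, assuming Splunk 8.0+ for mvmap():

| eval fullcode2=mvmap(fullcode, if(match(fullcode, partialcode), fullcode, null()))

mvmap() iterates over each value of fullcode, keeping the values that match partialcode and dropping the rest.)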
Hi @vijreddy30,

just for a test, load this file using the [Settings > Add Data] feature. In this way, you can find (and save) the best sourcetype for your events (e.g. I see that you have a wrong timestamp).

Ciao.
Giuseppe