All Posts


Hi @Zer0sss , you should try www.splunkbase.splunk.com Ciao. Giuseppe
Thank you for this answer, it really gets the job done. However, according to some research this might not be good for security reasons, because it opens the door to XSS (Cross-Site Scripting) attacks, especially if the content of msg can be influenced by user input. Also, as mentioned, this change will be lost after an upgrade. For now I've decided to use the Sendresults app; I'm not 100% sure it avoids the XSS issue, but at least it (hopefully) won't break during an upgrade, and it should provide more features than the built-in one.
Almost all the videos on YouTube run the Splunk server on the same victim computer, which has the "Local Windows network monitoring" input; my server on Kali does not have it. I don't know how to catch the events of the attack, although I'm using the TA-winfw Technology Add-on for Windows Firewall. When I search index="firewall", it returns no results. Can someone help me, please?
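A hedged first check, assuming nothing beyond the index name above: confirm whether anything has been indexed there at all, and under which sourcetypes, before debugging the add-on itself.
| tstats count where index=firewall by sourcetype source
If this returns nothing even over a wide time range, the problem is on the ingestion side (the input, the forwarding, or the index name), not in the search.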
Hi, I have to replace all the possible delimiters in the field with spaces so that I capture each word separately. Example: 5bb2a5-bb04-460e-a7bc-abb95d07a13_Setgfub.jpg
I need to remove the extension as well; it could be anything, so .csv or .xslx or .do. I need the output as below:
5bb2a8d5 bb04 460e a7bc bb995d07a13 Setgfub
I came up with an expression which works fine, but I need this as either a regular expression or an eval expression, as I am using it for a data model.
| makeresults | eval test="ton-o-mete_r v4.pdf" | rex field=test mode=sed "s/\-|\_|\.|\(|\)|\,|\;/ /g" | eval temp=split('test'," ")
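Since a data model calculated field takes an eval expression, here is a hedged eval-only sketch (untested; the character class mirrors the sed expression above, and the extension-stripping regex is an assumption):
| makeresults
| eval test="5bb2a5-bb04-460e-a7bc-abb95d07a13_Setgfub.jpg"
| eval noext=replace(test, "\.[^.]*$", "")
| eval words=split(replace(noext, "[-_.(),;]", " "), " ")
The two replace() calls alone can be pasted into the calculated field definition; makeresults is only there to demo them.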
Thank you very much, it's really helpful.
Hi @PickleRick , I did the same as you mentioned and created a table for the fields, but I'm getting some duplicate values. Is there anything else I have to do?
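A hedged guess at a quick fix, with placeholder field names (without seeing the actual search it's hard to say which fields should be unique): either dedup on the key fields before building the table, or aggregate with stats instead of table.
| dedup field1 field2 | table field1 field2
| stats values(field2) as field2 by field1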
Hi all, my Splunk instance is configured behind a proxy to access the Internet. The proxy will allow access to specified URLs. I want to use "Find More Apps" to download apps directly, without having to download and upload SPL files. Which URL do I need to open in the proxy rule? Thanks
index="wineventlog" | search ( [|inputlookup PS-monitored.csv | eval Message= "*" + PS_command + "*" | fields Message] )
This makes my teeth itch. But seriously - doing an initial search and then piping into another search command is... well, simply not elegant. Splunk will optimize it out anyway and treat it as it would a single search command, so you could just write it as
index="wineventlog" [|inputlookup PS-monitored.csv | eval Message= "*" + PS_command + "*" | fields Message]
But that's less important. The more important thing is that you're creating a search with Message=*something* conditions. These will be very, very inefficient, since Splunk has to parse every single event to find your matching ones. Assuming your commands are "whole commands" - meaning that if your command is "cmd", you're looking for strings like "whatever cmd whatever" and not "whatevercmd whatever" (notice the space difference) - you can limit your search with
index="wineventlog" [ | inputlookup PS-monitored.csv | eval search=PS_command | fields search | format ] [ | inputlookup PS-monitored.csv | eval Message= "*" + PS_command + "*" | fields Message]
You could also try to combine those two subsearches into one.
It depends on your architecture and what you mean by "user deleted an index". What user would you want to associate with deleting an index by means of editing indexes.conf and restarting splunkd? And what about a cluster scenario, where the entry is deleted from indexes.conf and pushed to the indexers? I suppose you're talking about an all-in-one installation and assuming all admin operations are done via the GUI. You'd have to look for requests from the browser to the appropriate section of the settings page.
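As a hedged starting point for that last case - the sourcetype, URI pattern, and field names below are assumptions based on default internal logging, so verify them against your own _internal data. The GUI removes an index through a DELETE request to the indexes REST endpoint, so something along these lines may surface who deleted which index and when:
index=_internal sourcetype=splunkd_access method=DELETE uri="*/data/indexes/*"
| table _time user uri status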
You can use the fieldsummary command:
index=example sourcetype=example | fieldsummary
And count the fields using:
index=example sourcetype=example | fieldsummary | table field | stats count
More about fieldsummary here: fieldsummary - Splunk Documentation
There are other similar threads on Answers - use the search functionality (yes, I know it can sometimes return results in a strange order). The short and imprecise answer is: it should work, but with caveats. The longer answer is: since you probably have your Splunk instance installed by means of an RPM package (which is a good thing in itself), just moving the orphaned package contents without the package itself might be troublesome for later maintenance. So you might want to install an "empty" package first, then remove /opt/splunk and mount your LV in its place. You also have to take care of creating service files for auto-starting the daemon. And you have to make sure you move any additional content needed by your configuration (for example, certificate/key files if they are stored outside of your /opt/splunk directory). EDIT: Of course this works assuming you want to replace the operating system "in-place" (retaining the same IP and hostname). If you want to move your instance to a host with a different name and IP, things start getting messy.
I have created a .tar.gz file using Splunk tools and I am unable to upload the app. There is no error, but after I choose the file to upload it does not show up in the UI. Are there any file permissions expected for a .tar.gz file to upload? Also, the file has 777 permissions; I changed it to try the upload.
Hello, I have to count the number of resulting fields; it doesn't go further than this. For my search I have index=example sourcetype=example source=example, and the goal is to know how many fields are extracted from the results of this search. Can anyone help please?
How does Splunk detect a new file? Does it use polling, or does it depend on inotify on Linux, for example?
A bit of additional explanation on this (all other comments in this thread so far are valid). 1. If you search by field value, you have to explicitly name the field. There is no wildcard functionality as such. There simply isn't. So while field=*whatever*wildcarded*value*you*can*think*of is a formally valid search condition (even though it might not be the best one from a performance point of view), your idea of PROD*=* would - if an asterisk were allowed in a field name, and I'm pretty sure it isn't - search for a field literally called PROD*. 2. Even if there were a way to do so, this type of search (as well as any search for a non-indexed field value beginning with an asterisk) is the worst possible idea performance-wise. Since Splunk cannot limit the set of processed events to some values, it has to parse all events in the given search time range (possibly limited by other search conditions) to find out whether such a field is present in your events at all. That can be very, very costly in terms of CPU time. If your field(s) can be "anchored" to some static text within the event, delimited at least at the beginning by a breaker, you can help Splunk by limiting your events with a proper search term in the initial search. That's why it's best to combine @ITWhisperer 's solution with @bowesmana 's way of limiting the events you're searching from.
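A hedged illustration of that anchoring idea (the index, field name, and literal are all made up). Slow - every event in the time range must be parsed to test the wildcarded field:
index=web status_detail="*timeout*"
Faster - the bare keyword lets Splunk pre-filter events using the index before parsing:
index=web timeout status_detail="*timeout*"
Assuming the value is delimited by breakers in the raw event, the extra keyword doesn't change which events match; it just lets the indexed term do the heavy lifting first.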
Unless I'm misunderstanding something, if you just want to list the results for the maximum Test_ID value, your initial approach using eventstats is good - you can simply filter by that value. So you'd get something like
| eventstats max(Test_ID) as maxtestid
| where Test_ID=maxtestid
Unless you want something else and we have some miscommunication here.
OK. The proper (and actually, I think, the only reasonable) approach to diagnosing "not working" SPL searches is to start from the beginning and add one step at a time, verifying that you're getting the desired results at each step of the way. So first run
index=application_na sourcetype=my_logs:hec source=my_Logger_PROD retrievePayments* returncode=Error
and see if you get any results returned at all. Then add
| rex field=message "Message=.* \((?<apiName>\w+?) -"
and verify that your apiName field is properly extracted. Then apply
| lookup My_Client_Mapping Client OUTPUT ClientID ClientName Region
and see whether the values from the lookup are properly assigned. If any of those steps fails to produce the expected results, you'll know which step to debug.
Yes, both @bowesmana 's and @yuanliu 's methods can be used to speed up your search. Another way to help Splunk search such data is summary indexing. You can run an incremental scheduled search which finds your latest value from - for example - every five-minute or one-hour window and stores it in an auxiliary index. Then you only have to search a small set of summarized data to find the "true latest" one. Of course this means a fairly constant load on your system to build those summaries, but you do it only once per batch of data. After that you just have a relatively fast search over the summary index. So while Splunk cannot directly do what you want in the initial search, there are some advanced techniques which can help you write a fairly efficient search that does a similar thing another way.
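A hedged sketch of such an incremental search, scheduled - say - every five minutes (the index names, the by-field, and the latest(value) aggregation are all placeholders for whatever "latest" means in your data):
index=your_index earliest=-5m@m latest=@m
| stats latest(value) as value latest(_time) as _time by host
| collect index=your_summary
The ad-hoc search then becomes a cheap scan of the summary:
index=your_summary | stats latest(value) as value by host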
Do you mean to say that some periods have no data about pods? (Or rather, no data about pods with importance value "non-critical".) My initial suggestion was based on the assumption that during any given interval, there are some pods. Now that I think about it, it is possible that that assumption is still true, but some intervals may only get critical pods while all non-critical ones are missing. Try this
index=abc sourcetype=kubectl importance=non-critical
| lookup pod_list pod_name_lookup as pod_name OUTPUT pod_name_lookup
| dedup pod_name
| where sourcetype == "kubectl"
| timechart span=1m@m values(pod_name_lookup) as pod_name_lookup values(pod_name_all) as pod_name_all
| append
    [makeresults format=csv data="namespace, pod_name_lookup, importance
ns1, kafka-*, critical
ns1, apache-*, critical
ns2, grafana-backup-*, non-critical
ns2, someapp-*, non-critical"
    | where importance = "non-critical" ``` subsearch thus far emulates | inputlookup pod_list where importance = non-critical ```
    | rename pod_name_lookup as pod_name_all]
| eventstats values(pod_name_all) as pod_name_all
| eval missing = if(isnull(pod_name_all), pod_name_all, mvappend(missing, mvmap(pod_name_all, if(pod_name_all IN (pod_name_lookup), null(), pod_name_all))))
| where isnotnull(missing)
| timechart span=1m@m count by missing
Exactly the same idea, just filling intervals that have no non-critical pod groups. Those intervals will see all pod groups marked as missing.
Hi @varshini_3141, are there other messages near the one you shared? Which operating system are you running on? Anyway, the fastest solution is to open a case with Splunk Support (sending them a diag that they can use to analyze your system). Ciao. Giuseppe