All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


1. Whenever possible (I know that sometimes you don't have the technical means), try to copy-paste the actual text input in the code box (the </> symbol in the editor when you're typing your post) or in the preformatted style instead of posting a screenshot - it's much easier to work with.
2. As @isoutamo already pointed out, those messages don't seem to have anything to do with time issues (nobody says you don't have time issues; it's just that this particular case is about network connectivity, not time). We don't know your network setup, but it seems your hosts don't see each other (or the traffic is filtered somewhere).
Hi @jaro, if the field is in the Notable index, it can be displayed. Did you check if it's in the visualized fields? Ciao. Giuseppe
Luckily, the requirements are not that strict. https://docs.splunk.com/Documentation/VersionCompatibility/current/Matrix/Compatibilitybetweenforwardersandindexers I successfully ran 9.0 forwarders with 8.2 indexers for some time, since the client's policy was to install the latest available UF. But while the S2S protocol is not upgraded that often (and even so, the components can negotiate a lower version if one side of the connection doesn't support the most recent one), there are some issues which can happen if you're using a UF with a newer version than your indexers - in the case of 9.0 forwarders it was that the forwarders generated events for the _configtracker index, which did not exist on the indexers. But it was a minor annoyance, not a real problem.
Thanks @gcusello. Regarding checking whether this field is displayed in the Notable event (running index=notable search=your_correlation_search): yes, the result does display "signature" in the search I ran. However, the description below cannot show the field value of "signature", which I reference in the correlation search as $signature$. I have also tried an eval assigning another name equal to the field signature - still nothing.
After you have defined those lookups as described in those links, you could use them like this:

| makeresults
| eval foo="a"
``` Previous lines generate example data and should be replaced by your real search ```
| lookup regiondetails Alias as foo

The above example gives you a result like:

Name     _time                foo
america  2024-01-05 11:44:56  a

If you want to do it without the lookup command, you must define an automatic lookup. You'll find that in the previous links.
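If it helps to see the mechanics, here is a minimal Python sketch of what `| lookup regiondetails Alias as foo` does conceptually: for each event, find the lookup row whose Alias matches the event's foo value and copy the remaining lookup fields into the event. The table contents are invented for illustration; only the field names (Alias, Name, foo) come from the example above.

```python
# Hypothetical contents of the regiondetails lookup: Alias -> Name
regiondetails = [
    {"Alias": "a", "Name": "america"},
    {"Alias": "e", "Name": "europe"},
]

def apply_lookup(event, table, input_field, match_field):
    """Enrich an event with fields from the first matching lookup row."""
    for row in table:
        if row.get(match_field) == event.get(input_field):
            for key, value in row.items():
                if key != match_field:
                    # don't clobber fields the event already has
                    event.setdefault(key, value)
            break
    return event

event = {"foo": "a", "_time": "2024-01-05 11:44:56"}
enriched = apply_lookup(event, regiondetails, "foo", "Alias")
print(enriched["Name"])  # america
```

An automatic lookup just makes Splunk run this kind of matching on every event at search time without you typing the lookup command.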
There are probably several different possible approaches.

index="my_index" [| inputlookup InfoSec-avLookup.csv | rename emailaddress AS msg.parsedAddresses.to{}] final_module="av" final_action="discard"
| rename msg.parsedAddresses.to{} AS To, envelope.from AS From, msg.header.subject AS Subject, filter.modules.av.virusNames{} AS Virus_Type

This part is OK (unless you have too many results from the subsearch; you are aware of the subsearch limitations?) - it will give you a list of matching events. Now you're doing

| eval Time=strftime(_time,"%H:%M:%S %m/%d/%y")

While in your particular case it might not be that bad, I always advise (unless you have a very specific use case, like filtering by month, where you render your timestamp to just the month to have something to filter by) leaving _time as it is, since it's easier to manipulate that way. Just use eval (or even better - fieldformat) at the end of your pipeline for presentation.

| stats count, list(From) as From, list(Subject) as Subject, list(Time) as Time, list(Virus_Type) as Virus_Type by To

Now that's the tricky part - you're doing stats list() over several separate fields. Are you aware that you are creating completely disconnected multivalued fields? If - for any reason - you had an empty Subject in one of your emails, you wouldn't know which email it came from, because the values in the multivalued field are "squished" together. I know it's tempting to use multivalued fields to simulate the "cell merging" functionality you know from spreadsheets, but it's good to know that this mechanism has its limitations.

| search [| inputlookup InfoSec-avLookup.csv | rename emailaddress AS To]

This part is pointless. You already searched for those addresses (and you're creating a subsearch again). I'd do it differently.
After your initial search I'd do

| eventstats count by To
| sort - count + To - _time
| streamstats count as eventorder by To
| where eventorder<=5
| table _time To From Subject Virus_Type

The eventstats part is needed only if you want the users with the most matches first. Otherwise just drop the eventstats and remove the first field from the sort command - you'll then have your results sorted alphabetically. Now, if you want your time field called Time, not _time, add

| rename _time as Time
| fieldformat Time=strftime(Time,"%H:%M:%S %m/%d/%y")

And if you don't want to repeat the To values (which I don't recommend, because this breaks the logical structure of your data), you can use autoregress or streamstats to copy over the To value from the previous event and, in case it's the same as the current one, just blank the existing field. But again - I don't recommend it - it does make the output look "prettier", but it makes it "logically incomplete".
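To make the "top 5 per recipient, busiest recipients first" pipeline concrete, here is a rough Python equivalent of the eventstats / sort / streamstats steps above. The sample events are invented; only the field names (To, _time) mirror the search.

```python
from collections import Counter

# Invented sample events: (epoch_time, To, Subject)
events = [
    (100, "alice@example.com", "s1"),
    (90,  "alice@example.com", "s2"),
    (80,  "bob@example.com",   "s3"),
    (70,  "alice@example.com", "s4"),
]

# eventstats count by To: how many events each recipient has
counts = Counter(to for _, to, _ in events)

# sort - count + To - _time: busiest recipients first, then
# alphabetically, newest events first within each recipient
events.sort(key=lambda e: (-counts[e[1]], e[1], -e[0]))

# streamstats count as eventorder by To | where eventorder<=5:
# keep at most 5 events per recipient, preserving the sort order
seen = Counter()
top5 = []
for e in events:
    seen[e[1]] += 1
    if seen[e[1]] <= 5:
        top5.append(e)

for t, to, subject in top5:
    print(t, to, subject)
```

Note that each output row stays a complete event, which is exactly why this shape avoids the disconnected-multivalue problem that stats list() creates.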
@isoutamo I have gone through the documentation link but I am not able to fix the issue. I am trying to bring the full name of the region from the lookup into the search results.
Hi, one more thing. Have you tested it in the GUI with

<your base search>
| eval src_user_idx = json_extract(lookup("user_ip_mapping.csv",json_object("src_ip", src_ip),json_array(src_user_idx)),"src_user")
| table src*

That way you can validate your INGEST_EVAL expression. And as @PickleRick said, there are some other actions to do after you have successfully validated it. r. Ismo
Hi, you can try to list those with

splunk list licenses | egrep '(label|guid|hash)'

then you could remove the correct one by its hash, like

splunk remove licenses <hash>

But remember that this command removes the file from the filesystem! So if you are using it for a real license, back up those files first in some safe place. r. Ismo
But where do I find the fields that are to be used in the query? I cannot find them anywhere. Only this information is present in the "builtin:service.errors.server.rate" metrics:

1/5/24 5:10:00.000 AM
{
   MessageDeduplicationId:
   aggregation: avg
   entity.service.id: SERVICE-xxxxx
   entity.service.name: AccountDetailsControllerImpl
   metric_name:builtin:service.errors.server.rate: 3.8461538461538463
   resolution: 1m
   source.name: DT_Prod_SaaS
   unit: Percent
}
1. Did you put the definition in the proper place?
2. Do you have your lookup defined on that component?
3. Most importantly - why would you use json functions when the pan:traffic format does not have anything to do with json? (Unless you have some completely non-standard configuration we know nothing about.)
Hi, you should remember that to keep your environment in a supported state, your Splunk servers (and, if you have them, the heavy forwarders between the UFs and the Splunk servers) must be at least at the same version level as your UFs. In practice this means that you must first update your Splunk server version to the same level you are planning to upgrade your UFs to. r. Ismo
Hi experts, we are getting this error consistently while querying data from Splunk Enterprise hosted in the company's internal network.

Exception in receiving splunk data 145java.lang.RuntimeException: HTTPS hostname wrong: should be <splunk_enterprise_url - splunk.org.company.com>

The line of code that causes this is:

String query = "<splunk valid query>";
Job job = service.getJobs().create(query);

Splunk SDK version used: 1.9.5

The connection to Splunk is established as follows:

String token = System.getenv("SPLUNK_TOKEN");
ServiceArgs loginArgs = new ServiceArgs();
loginArgs.setPort(8089);
loginArgs.setHost("splunk.org.company.com");
loginArgs.setScheme("https");
loginArgs.setToken(String.format("Bearer %s", token));
service = new Service(loginArgs);
log.info("service val is {}", service.toString());
Service.setValidateCertificates(false);

This was working a few days ago and suddenly it stopped. We checked the server certificate and it is valid till March 2024. The program querying Splunk is called from a runner hosted on AWS, and that runner has no network restrictions. I'm not sure what the issue is, but it is reproduced consistently.

Note: surprisingly, the same program runs fine on a local machine. I cannot figure out what the issue would be. Any help will be appreciated.
Hi, there are already quite a lot of examples about this (or at least with event data). You can find them with the search "site:splunk.com calculate error rate". You just have to modify them to work with a metric index if you have stored that data in one. r. Ismo
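Whatever the exact SPL ends up being, the underlying arithmetic is simply errors divided by total requests, expressed as a percentage. A minimal sketch (the counts are invented placeholders; in Splunk they would come from your metric or event data):

```python
def error_rate_percent(error_count, total_count):
    """Percentage of requests that failed; 0.0 when there is no traffic."""
    if total_count == 0:
        # avoid division by zero for quiet time buckets
        return 0.0
    return 100.0 * error_count / total_count

# e.g. 5 failed requests out of 130 total -> roughly 3.85 percent
print(error_rate_percent(5, 130))
```

Guarding the zero-traffic case matters in dashboards, since empty time buckets would otherwise break the calculation.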
Hi, here are some instructions about this: https://dev.splunk.com/enterprise/docs/releaseapps/splunkbase/optionsforsubmittingcontent/ Before you submit your app to Splunkbase for the first time, you must decide whether you are hosting it on Splunkbase or somewhere external. If you want to monetize it, you must host it outside of Splunkbase, but you can store the information and a link to your real hosting page (like your own web site, etc.) on Splunkbase. r. Ismo
I want to have a query that can show me the percentage error rate of the "AccountDetailsController" service of my application. We have the metrics data coming into Splunk, so that can be used - or however else we can do this. Please help.
Hi, those message stanzas are configured in messages.conf, but at least I cannot find any option which allows reshowing them regularly after you have deleted them. Some of those messages can be found in _internal, but you must know what you are looking for. Some messages pop up again, e.g. after you log in again and Splunk has done some checks which raise them, but this does not hold for all of them. r. Ismo
Hi, I think that you should start with these documentation pages:
https://docs.splunk.com/Documentation/Splunk/9.1.2/Knowledge/Aboutlookupsandfieldactions
https://docs.splunk.com/Documentation/Splunk/9.1.2/Knowledge/Usefieldlookupstoaddinformationtoyourevents
https://docs.splunk.com/Documentation/Splunk/9.1.2/Knowledge/LookupexampleinSplunkWeb
They will guide you, step by step, through how to define and use lookups. r. Ismo
Hi, these log entries say that you have no connection to that other host (10.4.118.215 / No route to host). The entries also tell us that you have a cluster configuration and that this host is trying to replicate an _audit bucket to that other peer and cannot do it. You should investigate why that TCP connection between these hosts isn't working. You can start with ping/traceroute, then use telnet/curl and, if needed, even tcpdump to see what is happening. r. Ismo
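If you'd rather script the reachability check than drive telnet by hand, a small standard-library Python probe does the same job. The host and port below are placeholders; substitute the peer's address and its replication port.

```python
import socket

def tcp_port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # covers "No route to host", connection refused, and timeouts
        return False

# Placeholder values - use your peer's address and replication port:
# print(tcp_port_open("10.4.118.215", 9887))
```

A "No route to host" error will surface here as OSError, just as it does for telnet, so a False result tells you the problem is at the network layer, not in Splunk.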
Hi @jaro, to see a field among the fields of a Notable (in the Incident Review dashboard), you have to check if this field is displayed in the Notable event (running index=notable search=your_correlation_search). If not, it probably isn't in the output of the correlation search: manually run your correlation search and see if the field is displayed; if not, add it to the correlation search. One additional hint: don't modify the Correlation Search; instead, clone it, then modify and enable only the cloned one. If the field is present in the Notable event, you also have to check if it's present in the default visible fields; you can find these configurations at [Configure > Incident Management > Incident Review Settings] in the section "Incident Review - Event Attributes". Ciao. Giuseppe