All Posts

This Splunk search retrieves logs from the specified index, host, and sourcetype, filtering on fields such as APPNAME, event, httpMethod, and loggerName. It then deduplicates the events on the INTERFACE_NAME field and counts the remaining unique events.

A Splunk alert monitors the iSell application's request-activity logs, specifically looking for cases where no data has been processed within the last 30 minutes. If fewer than 2 unique events are found, the alert triggers once and notifies the appropriate parties.

On our end, records are processed successfully, and the alert condition is "create an INC when count is less than 2". Even though we see more than one successful event, the alert still fires and an INC is created. Please check why we are getting this false alert and advise.

index=*core host=* sourcetype=app_log APPNAME=iSell event=requestActivity httpMethod=POST loggerName="c.a.i.p.a.a.a.StreamingActor" | dedup INTERFACE_NAME | stats count
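One hedged sketch of a thing to check, not a confirmed diagnosis: the dedup-plus-count pair can be replaced by a distinct count, and the window being evaluated is controlled by the alert's time range, not the search itself. Assuming the intended window really is the last 30 minutes, this equivalent form makes the window explicit:

```
index=*core host=* sourcetype=app_log APPNAME=iSell event=requestActivity
    httpMethod=POST loggerName="c.a.i.p.a.a.a.StreamingActor" earliest=-30m
| stats dc(INTERFACE_NAME) as count
```

The earliest=-30m constraint is an assumption for illustration; if the saved alert already sets its own time range, omit it and instead verify that the alert's configured range matches the 30 minutes you expect, since a mismatched or shorter window is a common cause of false "count < 2" triggers.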
Hi @Tron-spectron47,
here you can find all the Splunk courses: https://www.splunk.com/en_us/training/course-catalog.html
In particular, you should look at these courses:
Splunk Enterprise System Administration: https://www.splunk.com/en_us/pdfs/training/splunk-enterprise-system-administration-course-description.pdf
Splunk Enterprise Data Administration: https://www.splunk.com/en_us/pdfs/training/splunk-enterprise-data-administration-course-description.pdf
Data Models: https://www.splunk.com/en_us/pdfs/training/data-models-course-description.pdf
You can register via the first URL.
Ciao. Giuseppe
Is the Oracle Diagnostic Logging (ODL) format supported in any way by Splunk? On the forum I have found only one topic about it, but it was written 8 years ago. This format, which I read and analyze every day, is used by SOA and OSB diagnostic logs. It is, more or less, like a CSV structure, but instead of a tab/space/comma separator, each value is packed into brackets. Below is an example with a short description:

[2010-09-23T10:54:00.206-07:00] [soa_server1] [NOTIFICATION] [] [oracle.mds] [tid: [STANDBY].ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: <anonymous>] [ecid: 0000I3K7DCnAhKB5JZ4Eyf19wAgN000001,0] [APP: wsm-pm] "Metadata Services: Metadata archive (MAR) not found."

Timestamp, originating: 2010-09-23T10:54:00.206-07:00
Organization ID: soa_server1
Message Type: NOTIFICATION
Component ID: oracle.mds
Thread ID: tid: [STANDBY].ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'
User ID: userId: <anonymous>
Execution Context ID: ecid: 0000I3K7DCnAhKB5JZ4Eyf19wAgN000001,0
Supplemental Attribute: APP: wsm-pm
Message Text: "Metadata Services: Metadata archive (MAR) not found."

Any solutions or hints on how to manage this in Splunk?
Regards, KP.
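There is no official ODL sourcetype being claimed here; as a sketch under stated assumptions, the bracketed layout can usually be handled with ordinary sourcetype settings. The sourcetype name "odl" and the extracted field names below are illustrative, not part of any add-on:

```
# props.conf sketch for an assumed sourcetype "odl".
# Timestamp parsing: the event starts with [2010-09-23T10:54:00.206-07:00]
[odl]
TIME_PREFIX = ^\[
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%:z
MAX_TIMESTAMP_LOOKAHEAD = 30

# Search-time extraction of the leading positional fields. Note the
# Thread ID value can itself contain nested brackets (tid: [STANDBY]...),
# so a simple [^\]]* pattern only covers the first few fields reliably.
EXTRACT-odl_head = ^\[(?<odl_time>[^\]]+)\]\s\[(?<org_id>[^\]]*)\]\s\[(?<msg_type>[^\]]*)\]\s\[(?<msg_id>[^\]]*)\]\s\[(?<component_id>[^\]]*)\]
```

The nested-bracket fields (Thread ID, User ID, ecid, APP) would need a more careful regex or an ingest-time transform; the sketch above is only meant to show that bracket-delimited values are within reach of standard props.conf extraction.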
Hello All, I am using the Trellis format to display a unique/distinct count of log sources in our environment. Below is my query and the dashboard output. Notice how it shows "distinct_..." at the top of the box. How do I remove this? I just want it to show the title OKTA, not the field name, on top of the boxes.

Below is my query for the above OKTA log source:

| tstats dc(host) as distinct_count where index=okta sourcetype="OktaIM2:log"

Thanks in advance
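Assuming the trellis box header is taken from the aggregated field's name (which the "distinct_..." label suggests), one sketch of a fix is simply to rename the field to the label you want:

```
| tstats dc(host) as OKTA where index=okta sourcetype="OktaIM2:log"
```

Here OKTA replaces distinct_count purely so the box header reads OKTA; the count itself is unchanged.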
Hi @Splunk-Star,
first check your infrastructure: do you have the minimum resources required by Splunk?
If yes, analyze your situation and, if necessary, redesign your infrastructure for the new requirements: e.g. if you have many users, many scheduled searches, or too many real-time searches, you need more resources (CPUs).
Then analyze your configuration. Some time ago I had this issue on Splunk Cloud, and the solution was to redistribute the schedule of the scheduled searches and the percentage of resources reserved for scheduled searches.
In both cases I suggest engaging Splunk Professional Services or a Splunk Architect: this issue requires solid experience with Splunk infrastructures.
Ciao. Giuseppe
Hi @Splunk-Star,
this message means that the h_vms lookup used in a search isn't present. First check whether the name is correct: maybe you missed the .csv extension, or the name has some characters in uppercase (lookup names are case sensitive).
If this lookup isn't referenced in your own search, then one of the add-ons you're using defines it as an automatic lookup, but the lookup file itself is missing. You can resolve the issue by creating a lookup with this name, again paying close attention to the exact lookup name.
Ciao. Giuseppe
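As a sketch for checking which lookup files actually exist and how they are shared (the *vms* filter is an illustrative assumption to catch the name regardless of prefix):

```
| rest /servicesNS/-/-/data/lookup-table-files splunk_server=local
| search title=*vms*
| table title eai:acl.app eai:acl.sharing
```

If the file exists but the sharing is private to another user or app, that would also explain why the admin sees no error while a regular user does: a permissions gap looks the same as a missing lookup to the user who cannot read it.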
A Splunk user is getting the following error: "Could not load lookup=LOOKUP-pp_vms", but admin is not getting any such errors. The lookup file is also not present. What do we need to do?
applied rex, after that it worked. Thanks
@rickymckenzie10 I think you should read:
https://docs.splunk.com/Documentation/Splunk/9.2.0/Admin/Indexesconf
https://conf.splunk.com/files/2017/slides/splunk-data-life-cycle-determining-when-and-where-to-roll-data.pdf

frozenTimePeriodInSecs = <nonnegative integer>
* The number of seconds after which indexed data rolls to frozen.
* If you do not specify a 'coldToFrozenScript', data is deleted when rolled to frozen.
* NOTE: Every event in a bucket must be older than 'frozenTimePeriodInSecs' seconds before the bucket rolls to frozen.
* The highest legal value is 4294967295.
* Default: 188697600 (6 years)

maxTotalDataSizeMB = <nonnegative integer>
* The maximum size of an index, in megabytes.
* If an index grows larger than the maximum size, splunkd freezes the oldest data in the index.
* This setting applies only to hot, warm, and cold buckets. It does not apply to thawed buckets.
* CAUTION: The 'maxTotalDataSizeMB' size limit can be reached before the time limit defined in 'frozenTimePeriodInSecs' due to the way bucket time spans are calculated. When the 'maxTotalDataSizeMB' limit is reached, the buckets are rolled to frozen. As the default policy for frozen data is deletion, unintended data loss could occur.
* Splunkd ignores this setting on remote storage enabled indexes.
* Highest legal value is 4294967295
* Default: 500000
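As a sketch, an indexes.conf stanza that keeps roughly 90 days of data and caps the index at 100 GB might look like this (the index name and the values are illustrative assumptions, not recommendations):

```
[my_index]
# 90 days in seconds; a bucket rolls to frozen only once every
# event in it is older than this
frozenTimePeriodInSecs = 7776000
# 100 GB cap across hot/warm/cold; oldest buckets freeze first
# when the cap is exceeded
maxTotalDataSizeMB = 102400
```

Whichever limit is hit first wins: per the CAUTION above, a size cap can freeze (and by default delete) data well before the retention period elapses.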
The regex was not applied correctly; that's why it was not extracting the data. Thank you.
We have a Splunk dashboard for our team on a Splunk cluster. Almost every report item has an exclamation symbol and contains the message below. The issue has been present for the past month. Could you please help me fix it?

Error Details:
---------------------
*-199.corp.apple.com] Configuration initialization for /ngs/app/splunkp/mounted_bundles/peer_8089/*_SHC took longer than expected (1145ms) when dispatching a search with search ID remote_sh-*-13.corp.apple.com_2320431658__232041658__search__RMD578320bc0a7e9dada_1709881516.707_378AAA09-A2C2-4B63-B88A-50A6B29A67DF. This usually indicates problems with underlying storage performance."
@power12 The best place to start is by analyzing the job with the Job Inspector: https://docs.splunk.com/Documentation/Splunk/latest/Search/ViewsearchjobpropertieswiththeJobInspector
Use the Monitoring Console (https://docs.splunk.com/Documentation/Splunk/latest/DMC/DMCoverview) to check the health of Splunk. Also use OS-level tools to troubleshoot system performance: vmstat, iostat, top, and lsof, looking for processes hogging CPU or memory, or high iowait times on your disk array.
Here is a good explanation of calculating limits: https://answers.splunk.com/answers/270544/how-to-calculate-splunk-search-concurrency-limit-f.html
Also check out this app: https://splunkbase.splunk.com/app/2632/
Check the efficiency of your users' searches. The following shows the longest-running searches by user (run it over 24 hrs):

index="_audit" action="search" (id=* OR search_id=*)
| eval user=if(user=="n/a",null(),user)
| stats max(total_run_time) as total_run_time first(user) as user by search_id
| stats count perc95(total_run_time) median(total_run_time) by user
| sort - perc95(total_run_time)
That's the job for tostring.

| appendpipe
    [stats sum(*) as * by Number
    | foreach * [eval <<FIELD>> = tostring(<<FIELD>>, "commas")]
    | eval UserName="Total By Number: "]

I'm not sure why you group by "Number" but eval "UserName".
This part won't work, because search can't take another field as its constraint:

| eval NAME_LIST="task1,task2,task3" | search NAME IN (NAME_LIST)
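As a sketch of one workaround (field names taken from the snippet above): split the list into a multivalue field and test membership with mvfind, whose arguments are eval expressions and so can reference fields:

```
| eval NAME_LIST=split("task1,task2,task3", ",")
| where isnotnull(mvfind(NAME_LIST, "^".NAME."$"))
```

mvfind treats its second argument as a regex, so if NAME values can contain regex metacharacters they would need escaping first; this is an illustrative sketch, not the only (or necessarily fastest) approach.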
That's a good reduction with the stats values. I assume the simple host count is significantly less than the 1.9m rows, so although you will have the same number of QIDs per host, the lookup command count will be reduced - it would be interesting to compare the job inspector details between the two searches.

As for KV store replication - a KV store on the search head is not a KV store on the indexer; instead, a CSV is transferred to the indexers, so any accelerations in the KV store are lost and you are simply using CSV lookups on the indexer. I am not sure how a 250MB CSV on the indexer will be handled - if it works the same way as on the SH, it exceeds the max_memtable_bytes value discussed in limits.conf (https://docs.splunk.com/Documentation/Splunk/9.2.0/Admin/Limitsconf#.5Blookup.5D), so I imagine it will then be "indexed" (as in a file system index) on disk. If you are not running in Splunk Cloud, you may want to try a local CSV lookup and play around with the limits.conf settings for your app.

If you have the time, you might also want to experiment with the eval lookup() function and a split CSV. For example, you could split the CSV into 10 x 25MB files to stay under the existing threshold and partition the QIDs into each lookup.
Then you could do some weird SPL like:

| eval p=partition_logic(QID)
| eval output_json=case(
    p=1, lookup("qid_partition_1.csv", json_object("QID", QID), json_array("field1", "field2")),
    p=2, lookup("qid_partition_2.csv", json_object("QID", QID), json_array("field1", "field2")),
    p=3, lookup("qid_partition_3.csv", json_object("QID", QID), json_array("field1", "field2")),
    p=4, lookup("qid_partition_4.csv", json_object("QID", QID), json_array("field1", "field2")),
    p=5, lookup("qid_partition_5.csv", json_object("QID", QID), json_array("field1", "field2")),
    p=6, lookup("qid_partition_6.csv", json_object("QID", QID), json_array("field1", "field2")),
    p=7, lookup("qid_partition_7.csv", json_object("QID", QID), json_array("field1", "field2")),
    p=8, lookup("qid_partition_8.csv", json_object("QID", QID), json_array("field1", "field2")),
    p=9, lookup("qid_partition_9.csv", json_object("QID", QID), json_array("field1", "field2")),
    p=10, lookup("qid_partition_10.csv", json_object("QID", QID), json_array("field1", "field2")))

Technically this could work, but whether you would see any improvement, I have no idea.
Was that 50 seconds using a local lookup or a standard lookup? If it was a normal (remote) lookup, then try using a local one so it uses the local KV store.
Hi @marnall
Your suggestion worked fine. I accepted it as the solution. Thank you so much for your help.
It looks like the reason it didn't work earlier is that I assigned eval _time = info_max_time but didn't add "addinfo", so it fell back to the default value, which is info_min_time.
Can you test on your end whether _time gets set to info_min_time if you remove the following eval? Thanks

| eval _time = now() + 3600
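As a small sketch of the point above: addinfo must run before the info_* fields can be referenced, since it is the command that creates them from the search's time range:

```
| makeresults
| addinfo
| eval _time = info_max_time
| table _time info_min_time info_max_time
```

Without the addinfo command, info_max_time does not exist, so the eval assigns null and _time keeps whatever value it already had.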
Is there a way to use only the upper bound to define outliers? I wish to define outliers using only the upper bound (I want to flag only values that spike upward).
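As a sketch (the field name "value" and the multiplier are illustrative assumptions): compute the bounds yourself with eventstats and flag only values above the upper bound, ignoring the lower one entirely:

```
| eventstats avg(value) as avg stdev(value) as stdev
| eval upperBound = avg + 2*stdev
| eval isOutlier = if(value > upperBound, 1, 0)
```

The multiplier 2 sets the sensitivity; raise it to flag only the most extreme upward spikes.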
Okay will try. Thanks.
You can sort of do that. But why? This gets more convoluted than your problem warrants. Your OP says you are doing the selector in dashboard logic. As @bowesmana said, that's precisely what a multi-select token is for.
But if you really need a CSV file to do so, name the column "NAME" instead of NAME_LIST. Then split the value:

| search [inputlookup csv.csv | eval NAME = split(NAME, ",")]

It doesn't really do an IN operation but is semantically equivalent. Here's an emulation:

| makeresults format=csv data="NAME
task2
task4"
| search [inputlookup csv.csv | eval NAME = split(NAME, ",")]

Your sample CSV row will give you:

NAME
task2
I didn't think that far ahead, but I'll be making a dashboard for this, so I think it would be easier to separate them instead of trying to put it all in one search. What you just did works perfectly, so thank you so much!