All Posts
@richgalloway thanks for the reply. The | rex is working as it should; the problem starts when I try to save the regex, and this is caused by the fact that I need to save the regex against the "source" field and not the "_raw" field. The main goal is to add another field to all searches without using the | rex command every time.
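For what it's worth, a minimal sketch of how a persistent extraction against source can be configured in props.conf; the sourcetype name, field name, and regex below are placeholders, not from this thread:

# props.conf (sketch; stanza and regex are illustrative)
[my_sourcetype]
# "in source" runs the extraction against the source field instead of the default _raw
EXTRACT-app_from_source = (?<app_name>[^/]+)\.log$ in source

With an EXTRACT like this in place, the field is available in every search without repeating | rex.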
You can parse this event with rex: https://regex101.com/r/eUputR/1 However, this assumes that the 4th bracket pair may be empty / is not required, and that you don't have further nesting of bracketed sub-strings in the Thread ID.
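In the same spirit, here is a rough SPL sketch (the field names are mine, and this is an illustration rather than the exact regex behind the link) that anchors each labelled bracket pair on its label, so the nested brackets inside the Thread ID don't break the match:

| rex "^\[(?<odl_time>[^\]]+)\] \[(?<org_id>[^\]]*)\] \[(?<msg_type>[^\]]*)\] \[(?<msg_id>[^\]]*)\] \[(?<component>[^\]]*)\] \[tid: (?<thread_id>.+?)\] \[userId: (?<user_id>[^\]]*)\] \[ecid: (?<ecid>[^\]]*)\] \[(?<supplemental>[^\]]*)\] (?<message>.+)$"

The lazy .+? for thread_id only stops at the literal "] [userId:" label, which is what tolerates the nested [STANDBY] brackets; it still assumes every labelled pair is present, per the caveat above.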
Hey @gcusello, thanks for your reply. It seems like the capture does not capture any of the fields I needed. I've tried to save it and even to play a bit with the syntax, but still no success.
I'm trying to test Splunk Cloud and have registered for the free trial, but have not received any email from Splunk so far. I've faced a similar problem a few times. What do I do in this situation?
Your final command will only give you one result event. Depending on how you have set up the trigger for your alert, you could remove this and then trigger on the number of results being less than 2, i.e. let Splunk do the counting for you.
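For instance (a sketch based on the search from the question below, not a tested configuration), drop the final stats:

index=*core host=* sourcetype=app_log APPNAME=iSell event=requestActivity httpMethod=POST loggerName="c.a.i.p.a.a.a.StreamingActor"
| dedup INTERFACE_NAME

and set the alert trigger condition to "Number of Results is less than 2", so Splunk itself does the counting.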
Thank you for your support, @ITWhisperer; the given code is working as expected.
The Splunk search query retrieves logs from the specified index, host, and sourcetype, filtering them based on various fields such as APPNAME, event, httpMethod, and loggerName. It then deduplicates the events based on the INTERFACE_NAME field and counts the remaining unique events.

The Splunk alert monitors the iSell application's request activity logs, specifically looking for cases where no data is processed within the last 30 minutes. If fewer than 2 unique events are found, the alert triggers once, notifying the appropriate parties.

From our end, records are processed successfully, and the trigger condition creates an INC when the count is less than 2. Yet even though we are getting more than one successful event, the alert still triggers and creates an INC. Please check why we are getting false alerts and advise us.

index=*core host=* sourcetype=app_log APPNAME=iSell event=requestActivity httpMethod=POST loggerName="c.a.i.p.a.a.a.StreamingActor" | dedup INTERFACE_NAME | stats count
Hi @Tron-spectron47, here you can find all the Splunk courses: https://www.splunk.com/en_us/training/course-catalog.html In detail, you should look at these courses:
Splunk Enterprise System Administration: https://www.splunk.com/en_us/pdfs/training/splunk-enterprise-system-administration-course-description.pdf
Splunk Enterprise Data Administration: https://www.splunk.com/en_us/pdfs/training/splunk-enterprise-data-administration-course-description.pdf
Data Models: https://www.splunk.com/en_us/pdfs/training/data-models-course-description.pdf
You can find the registration page at the first URL. Ciao. Giuseppe
Is the Oracle Diagnostic Logging (ODL) format supported in any way by Splunk? On the forum I have found only one topic regarding it, but it was written 8 years ago. This format, which I read and analyze every day, is used by SOA and OSB diagnostic logs. It is, more or less, like a CSV structure, but instead of a tab/space/comma delimiter, each value is packed into brackets. Below is an example with a short description:

[2010-09-23T10:54:00.206-07:00] [soa_server1] [NOTIFICATION] [] [oracle.mds] [tid: [STANDBY].ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: <anonymous>] [ecid: 0000I3K7DCnAhKB5JZ4Eyf19wAgN000001,0] [APP: wsm-pm] "Metadata Services: Metadata archive (MAR) not found."

Timestamp, originating: 2010-09-23T10:54:00.206-07:00
Organization ID: soa_server1
Message Type: NOTIFICATION
Component ID: oracle.mds
Thread ID: tid: [STANDBY].ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'
User ID: userId: <anonymous>
Execution Context ID: ecid: 0000I3K7DCnAhKB5JZ4Eyf19wAgN000001,0
Supplemental Attribute: APP: wsm-pm
Message Text: "Metadata Services: Metadata archive (MAR) not found."

Any solutions or hints on how to manage it in Splunk? Regards, KP.
Hello All, I am using the Trellis format to display the unique/distinct count of log sources in our environment. Below is my query and the dashboard output. Notice how it shows "distinct_..." at the top of the box. How do I remove this? I just want it to show the title OKTA and not the field name on top of the boxes. Below is my query for the above OKTA log source:

| tstats dc(host) as distinct_count where index=okta sourcetype="OktaIM2:log"

Thanks in advance
Hi @Splunk-Star, first you have to check your infrastructure: do you have the minimal resources required by Splunk? If yes, you should analyze your situation and eventually redesign your infrastructure for the new requirements: e.g. if you have many users, or you're running many scheduled searches, or too many real-time searches, you need more resources (CPUs). Then you have to analyze your configurations: e.g. some time ago I had this issue on Splunk Cloud, and the solution was to redistribute the schedule of the scheduled searches and the percentage of resources reserved for scheduled searches. In both cases I suggest engaging Splunk Professional Services or a Splunk Architect: this issue requires good experience with Splunk infrastructures. Ciao. Giuseppe
Hi @Splunk-Star, this message means that the h_vms lookup, which you're using in a search, isn't present. You should check whether the name is correct: maybe you missed the .csv extension, or the name has some character in uppercase (the lookup name is case sensitive). If instead this lookup isn't in your search, then one of the add-ons you're using defines this automatic lookup, but the lookup file isn't present. You could solve the issue by adding a lookup with this name, but again pay attention to the lookup name. Ciao. Giuseppe
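One quick diagnostic (a generic sketch of my own, not from this thread) is to list the lookup table files visible to the current user, together with their app and sharing level:

| rest /servicesNS/-/-/data/lookup-table-files
| fields title eai:acl.app eai:acl.sharing
| search title=*vms*

If the file shows up only when run as admin, the problem is permissions on the lookup rather than a missing file.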
In Splunk, a user is getting the following error: Could not load lookup=LOOKUP-pp_vms, but admin is not getting any such errors. That lookup file is also not present. What do we need to do?
Applied rex; after that it worked. Thanks.
@rickymckenzie10 I think that you should read
https://docs.splunk.com/Documentation/Splunk/9.2.0/Admin/Indexesconf
https://conf.splunk.com/files/2017/slides/splunk-data-life-cycle-determining-when-and-where-to-roll-data.pdf

frozenTimePeriodInSecs = <nonnegative integer>
* The number of seconds after which indexed data rolls to frozen.
* If you do not specify a 'coldToFrozenScript', data is deleted when rolled to frozen.
* NOTE: Every event in a bucket must be older than 'frozenTimePeriodInSecs' seconds before the bucket rolls to frozen.
* The highest legal value is 4294967295.
* Default: 188697600 (6 years)

maxTotalDataSizeMB = <nonnegative integer>
* The maximum size of an index, in megabytes.
* If an index grows larger than the maximum size, splunkd freezes the oldest data in the index.
* This setting applies only to hot, warm, and cold buckets. It does not apply to thawed buckets.
* CAUTION: The 'maxTotalDataSizeMB' size limit can be reached before the time limit defined in 'frozenTimePeriodInSecs' due to the way bucket time spans are calculated. When the 'maxTotalDataSizeMB' limit is reached, the buckets are rolled to frozen. As the default policy for frozen data is deletion, unintended data loss could occur.
* Splunkd ignores this setting on remote storage enabled indexes.
* Highest legal value is 4294967295
* Default: 500000
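As an illustration only (the index name and values are placeholders, not recommendations), here is an indexes.conf stanza that keeps data for 90 days or until the index reaches 100 GB, whichever comes first:

# indexes.conf (sketch)
[my_index]
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# 90 days in seconds; buckets whose newest event is older than this roll to frozen (deleted by default)
frozenTimePeriodInSecs = 7776000
# 100 GB total cap; the oldest buckets are frozen first once this is exceeded
maxTotalDataSizeMB = 102400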
The regex was not applied correctly; that's why it was not extracting the data. Thank you.
We have a Splunk dashboard for our team in a Splunk cluster. Almost every report item has an exclamation symbol and contains the message below. The issue has been present for the past month. Could you please help me fix it?

Error Details:
---------------------
*-199.corp.apple.com] Configuration initialization for /ngs/app/splunkp/mounted_bundles/peer_8089/*_SHC took longer than expected (1145ms) when dispatching a search with search ID remote_sh-*-13.corp.apple.com_2320431658__232041658__search__RMD578320bc0a7e9dada_1709881516.707_378AAA09-A2C2-4B63-B88A-50A6B29A67DF. This usually indicates problems with underlying storage performance."
@power12 The best place to start is by analyzing the Job Inspector: https://docs.splunk.com/Documentation/Splunk/latest/Search/ViewsearchjobpropertieswiththeJobInspector Use the Monitoring Console https://docs.splunk.com/Documentation/Splunk/latest/DMC/DMCoverview to check the health of Splunk. Also use OS-level tools to troubleshoot system performance: vmstat, iostat, top, and lsof, to look for processes hogging CPU or memory, or high iowait times on your disk array. Here is a good explanation of calculating limits: https://answers.splunk.com/answers/270544/how-to-calculate-splunk-search-concurrency-limit-f.html Also check out this app: https://splunkbase.splunk.com/app/2632/ Check the efficiency of your users' searches. The following will show you the longest-running searches by user (run it over 24 hours):

index="_audit" action="search" (id=* OR search_id=*)
| eval user=if(user=="n/a",null(),user)
| stats max(total_run_time) as total_run_time first(user) as user by search_id
| stats count perc95(total_run_time) median(total_run_time) by user
| sort - perc95(total_run_time)
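Another angle worth checking (my own suggestion, not part of the original reply) is whether the scheduler is skipping searches because of concurrency limits:

index=_internal sourcetype=scheduler status=skipped
| stats count by savedsearch_name reason
| sort - count

A steady stream of skipped searches with a "maximum number of concurrent searches" reason usually points at the same resource pressure described above.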
That's the job for tostring.

| appendpipe
    [stats sum(*) as * by Number
    | foreach * [eval <<FIELD>> = tostring(<<FIELD>>, "commas")]
    | eval UserName="Total By Number: "]

I'm not sure why you group by "Number" but eval "UserName".
This part won't work, as search can't take another field as its constraint:

| eval NAME_LIST="task1,task2,task3"
| search NAME IN (NAME_LIST)
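A possible workaround (my own sketch, not from the thread) is to make the list a multivalue field and test membership with where/mvfind instead of search:

| eval NAME_LIST=split("task1,task2,task3", ",")
| where isnotnull(mvfind(NAME_LIST, "^".NAME."$"))

mvfind returns the index of the first value matching the regex, or null when nothing matches, so the where clause keeps only events whose NAME appears in the list.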