All Posts


@gcusello  I tried with the below query but one extra row is still coming:

index="abc" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
| eval True=if(searchmatch("ebnc event balanced successfully"),"✔","")
| eval EBNCStatus="ebnc event balanced successfully", Day=strftime(_time,"%Y-%m-%d")
| dedup EBNCStatus Day
| search EBNCStatus=*
| table EBNCStatus True Day
@gcusello  How can I use the Group By command here? Can you please guide me.
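A minimal sketch of grouping with stats, reusing the filters from the query above (the logic is an assumption about the intent; field names come from the question):

```
index="abc" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
| eval Day=strftime(_time, "%Y-%m-%d")
| stats count AS matches BY Day
| eval EBNCStatus="ebnc event balanced successfully", True=if(matches > 0, "✔", "")
| table EBNCStatus True Day
```

Because stats emits exactly one row per Day value, it can avoid the extra row that dedup sometimes leaves when events differ in other fields.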
Hi @gjhaaland, open a case with Splunk Support; it's the only way to get a quick answer. Ciao. Giuseppe
Forgot to mention: when I open Data Summary it says "Waiting for results" but it never gets/receives any data. Only "Waiting for results", without ending. Rgds Geir
Giuseppe, thanks again. Yes, if I run a search command and/or old reports we get no answer at all. The Splunk GUI is running, but we don't get any answer if we run the search index=*. Normally we would see a long listing with output. I have not deleted any files. All I have done is change some settings regarding field extraction. After a while I discovered that we did not receive any data at all. So there must be some connection between fields (enable/disable) and field extraction. Rgds Geir
If I have understood your requirement correctly, you could try something like this

index=xyz earliest=-1hr latest=now
| rex field=_raw "^(?<sourceLBIP>\d*\.\d*\.\d*\.\d*)\s\[\w.*\]\s(?<responsetime>\d*)\s\"(?<getorpost>\w*)\s(?<uri>\S*)\sHTTP\/1.1\"\s(?<statuscode>\d*)\s(?<responsesize>\d*)\"(?<refereralURL>\S*)\"\"\w.*\"\s\S*(?<node>web*\d*)\s\S*"
| search sourceLBIP="*" responsetime="*" getorpost="*" uri="*" statuscode="*" responsesize="*" refereralURL="*" node="*"
| eval responsetime1=responsetime/1000000
| eventstats max(responsetime) as max_responsetime
| eventstats first(eval(if(responsetime == max_responsetime, uri, null()))) as longest_uri
| where uri=longest_uri
| chart values(responsetime) by _time longest_uri
We have activated several data models for use with Splunk Enterprise security scenarios and are interested in clarifying the retention period for the summaries generated by these data models. According to the Splunk documentation, the retention period is determined by the accelerated summary range. For instance, if our network traffic accelerated summary range is set to 15 days, does this imply that the retention period is also 15 days, and that it stores 15 days' worth of summaries?
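For reference, the summary range in question is set per data model with `acceleration.earliest_time` in datamodels.conf; a sketch, assuming a stanza named `Network_Traffic` (the stanza name must match your data model's ID):

```
# datamodels.conf (sketch; stanza name assumed)
[Network_Traffic]
acceleration = true
# Build and keep accelerated summaries covering the last 15 days;
# summary data older than this range becomes eligible for removal.
acceleration.earliest_time = -15d
```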
Hi @gjhaaland, the error messages aren't relevant. Let me better understand: does the search not run, or do you always get no results? When you say that yesterday it worked perfectly, do you mean that yesterday the searches ran, or that a search run today on yesterday's data is ok? Probably the only solution is to open a case with Splunk Support, who can access your system (with you) and debug the situation. Ciao. Giuseppe
Hi gcusello, thanks for the answer. No answer at all; even if I run the “Usage Reporting Dashboard” the answer is empty. Since it worked perfectly yesterday I think/assume that some files are blocking/stopping normal behavior.

If I restart splunkd I get the following messages:

1: Invalid key in stanza [admin_external:configure] in /home/splunk/etc/apps/TA-eStreamer/default/restmap.conf, line 7: python.version
2: Your indexes and inputs configurations are not internally consistent. For more info run: splunk btool check --debug
3: Validating installed files against hashes from '/home/splunk/splunk/7.1……..-x86_64manifest'. Problems were found, please review your files and move customizations to local.

Starting splunk server daemon (splunkd) Done [OK]

Rgds Geir

If I run splunk btool check --debug I get the following errors (cut/paste errors):

No spec file for: /home/splunk/etc/apps/Splunk_CiscoSecuritySuite/local/css_views.co
No spec file for: /home/splunk/etc/apps/TA-eStreamer/local/encore.conf
No spec file for: /home/splunk/etc/apps/eStreamer/local/estreamer.conf
No spec file for: /home/splunk/etc/apps/Splunk_CiscoSecuritySuite/default/css_views.conf
No spec file for: /home/splunk/etc/apps/Splunk_CiscoSecuritySuite/default/eventgen.conf
No spec file for: /home/splunk/etc/apps/TA-eStreamer/default/encore.conf
Invalid key in stanza [admin_external:configure] in /home/splunk/etc/apps/TA-eStreamer/default/restmap.conf, line 7: python.version (value: python3).
No spec file for: /home/splunk/etc/apps/eStreamer/default/estreamer.conf
No spec file for: /home/splunk/etc/apps/firepower_dashboard/default/appsetup.conf
No spec file for: /home/splunk/etc/apps/firepower_dashboard/default/umbrella.conf
No spec file for: /home/splunk/etc/system/default/conf.conf
No spec file for: /home/splunk/etc/system/local/migration.conf
Can you help with a query to find out which indexes are not used?
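One possible sketch, assuming "not used" means indexes that contain no events over the search time range (`| rest /services/data/indexes` and `tstats` are standard, but the REST endpoint needs admin-level permissions):

```
| rest /services/data/indexes splunk_server=local
| fields title
| rename title AS index
| join type=left index
    [| tstats count WHERE index=* BY index]
| fillnull value=0 count
| where count=0
| table index
```

If "not used" instead means "never searched by anyone", the search audit trail in index=_audit would be the place to look.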
Hi @Rajini, good for you! See you next time. Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated by all the contributors
Yes, I figured out the cause, It is fixed now. Thank you
Hi @aditsss, is the "|head 7" in the second row correct? Anyway, did you check the data in the events? You used the table command, which doesn't group any data and only displays it. It seems that you have wrong data. Ciao. Giuseppe
@gcusello @richgalloway  Below is the query:

search index="abc" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
| eval True=if(searchmatch("ebnc event balanced successfully"),"✔","")
| head 7
| eval EBNCStatus="ebnc event balanced successfully"
| table EBNCStatus True
| rename busDt as Business_Date
| rename fileName as File_Name
| rename CARS.UNB_Duration as CARS.UNB_Duration(Minutes)
| table Business_Date File_Name StartTime EndTime CARS.UNB_Duration(Minutes) Records totalClosingBal totalRecordsWritten totalRecords EBNCStatus
| sort -Business_Date

The issue I am facing: when I sort with -Business_Date, Business_Date comes out correct but StartTime and EndTime do not. For example, in the screenshot below, for Business_Date 09/11 the StartTime and EndTime come as 09/13; they should be 09/12. @gcusello @richgalloway please guide.
Any luck with this?
I am able to get the list of URLs with top response time using the below query:

index=xyz earliest=-1hr latest=now
| rex field=_raw "^(?<sourceLBIP>\d*\.\d*\.\d*\.\d*)\s\[\w.*\]\s(?<responsetime>\d*)\s\"(?<getorpost>\w*)\s(?<uri>\S*)\sHTTP\/1.1\"\s(?<statuscode>\d*)\s(?<responsesize>\d*)\"(?<refereralURL>\S*)\"\"\w.*\"\s\S*(?<node>web*\d*)\s\S*"
| search sourceLBIP="*" responsetime="*" getorpost="*" uri="*" statuscode="*" responsesize="*" refereralURL="*" node="*"
| eval responsetime1=responsetime/1000000
| stats count by responsetime1, node, responsesize, uri, _time, statuscode
| sort -responsetime1
| head 1

I am trying to modify this query for more detailed information. I am able to get the top 1 URL with the highest response time, but I need a timechart to understand the responsetime trend for that specific URL over the last 1 hour. Also, I would like to modify the search so that it produces the timechart trend of whichever URL currently has the top responsetime for the past hour, since the URL may not be the same every time.
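One way to make the "top" URL dynamic is to compute it inside the search with eventstats instead of head 1; a sketch, assuming the same rex extraction as above (the span and the use of avg() are illustrative choices):

```
index=xyz earliest=-1hr latest=now
| rex field=_raw "<same extraction as in the query above>"
| eval responsetime1=responsetime/1000000
| eventstats max(responsetime1) AS max_responsetime
| eventstats first(eval(if(responsetime1 == max_responsetime, uri, null()))) AS top_uri
| where uri = top_uri
| timechart span=5m avg(responsetime1) BY uri
```

The where clause keeps only events for the currently slowest URL, so the timechart follows whatever URL holds the top responsetime in each run.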
Hi @gjhaaland, if you run a search on _internal, do you get results? Do you have any messages from Splunk? Ciao. Giuseppe
Hi, Splunk has been working for a long period without any trouble. When I changed settings yesterday (I can't remember what I did), the search command stopped working as before (no answer). If I go to Settings - Indexing, then _audit, _internal, _introspection, _telemetry, _history + the main area are all disabled. I also googled, and it says that it perhaps has something to do with identical ids under the db directory. We have the same id on some files with a .rbsentinel extension, for example:

db_123_345_12
db_123_345_12.rbsentinel

If I run the following command: netstat -an | grep 9997, we have many TCP sessions established. I have of course rebooted and restarted the Splunk server several times; it does not help much. Thanks in advance. I hope someone can give me a hint. Rgds Geir
Hi, I am looking to get a 1-month report of all alerts generated from a Splunk app. My "FSS" app has around 60 alerts configured. I want to generate a report of which alerts were triggered by Splunk in the last month, with date and time. Thanks, Abhineet Kumar
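A sketch of a report over the scheduler's internal logs (assumptions: `app`, `savedsearch_name`, and `alert_actions` are the field names present in `index=_internal sourcetype=scheduler` events, and "FSS" is the app's directory name; adjust as needed):

```
index=_internal sourcetype=scheduler app="FSS" status="success" alert_actions="*" earliest=-30d@d
| eval triggered_at = strftime(_time, "%Y-%m-%d %H:%M:%S")
| table triggered_at savedsearch_name alert_actions
| sort - _time
```

The `alert_actions="*"` filter keeps only scheduled runs that actually fired an alert action, rather than every scheduled execution.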
Hi @Marta88, as you can read at https://docs.splunk.com/Documentation/Splunk/latest/Admin/Limitsconf?utm_source=answers&utm_medium=in-comment&utm_term=limits.conf&utm_campaign=refdoc, the differences are:

use_stats_v2 = [fixed-width | <boolean>]
* Specifies whether to use the v2 stats processor.
* When set to 'fixed-width', the Splunk software uses the v2 stats processor for operations that do not require the allocation of extra memory for new events that match certain combinations of group-by keys in memory. Operations that cause the Splunk software to use v1 stats processing include the 'eventstats' and 'streamstats' commands, usage of wildcards, and stats functions such as list(), values(), and dc().
* NOTE: Do not change this setting unless instructed to do so by Splunk Support.
* Default: true

and

stats = <boolean>
* This setting determines whether the stats processor uses the required field optimization methods of Stats V2, or if it falls back to the older, less optimized version of required field optimization that was used prior to Stats v2.
* This setting only applies when 'use_stats_v2' is set to 'true' or 'fixed-width' in 'limits.conf'
* When Stats v2 is enabled and this setting is set to 'true', the stats processor uses the Stats v2 version of required field optimization.
* When Stats v2 is enabled and this setting is set to 'false' the stats processor falls back to the older version of required field optimization.
* Do not change this setting unless instructed to do so by Splunk support.
* Default: false

In a few words, the difference is that V2 doesn't require the allocation of extra memory; V1 processing is kept for eventstats and streamstats. Ciao. Giuseppe