All Posts


@ITWhisperer , thank you for sharing the query. It seems the "| where url=longest_url" condition is not being recognized. I was looking for a timechart that can show the response time trend for the top response-time-consuming URL. For example, if http://xyz.com/report consumes the highest response time, I need the trend of that specific URL's performance over the last hour. Hope that helps
If it is always the last item of a multivalue field, you could try something like this

index=feds
| fillnull value=""
| table httpRequest.clientIp labels{}.name
| rename "labels{}.name" as name
| eval name=mvindex(name, -1)
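A minimal way to sanity-check the mvindex step in isolation (this sketch uses makeresults with a made-up multivalue field, so no index data is needed):

```spl
| makeresults
| eval name=split("awswaf:clientip:geo:country:US,awswaf:managed:token:absent,awswaf:managed:aws:bot-control:signal:non_browser_user_agent", ",")
| eval name=mvindex(name, -1)
```

mvindex(name, -1) returns the last value of the multivalue field; mvindex(name, 0) would return the first.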
I have an output of

index=feds
| fillnull value=""
| table httpRequest.clientIp labels{}.name

awswaf:clientip:geo:country:US awswaf:managed:token:absent awswaf:clientip:geo:region:US-IL awswaf:managed:aws:bot-control:signal:non_browser_user_agent
wswaf:clientip:geo:country:US awswaf:managed:token:absent awswaf:clientip:geo:region:US-IL awswaf:managed:aws:bot-control:signal:non_browser_user_agent
wswaf:clientip:geo:country:US awswaf:managed:token:absent awswaf:clientip:geo:region:US-IL awswaf:managed:aws:bot-control:signal:non_browser_user_agent

But I need to filter on "awswaf:managed:aws:bot-control:signal:non_browser_user_agent" in the table output and see only the results containing "awswaf:managed:aws:bot-control:signal:non_browser_user_agent".
@Ryan.Paredez Is there any update on this? I am also struggling with the same issue. Changing it to a higher number (e.g., 5) or a lower number (e.g., 1) doesn't work. It's taking a default value of 2.
Hi @aditsss, if you have an empty field when using the table command, it means that you have incomplete data or a space in that field. Anyway, you can remove those rows using a different search, e.g. if all the EBNCStatus values start with "ebnc", you could use

| search EBNCStatus="ebnc*"

Ciao. Giuseppe
Hi @aditsss , you have to use a common key to group events:

search index="abc" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
| eval True=if(searchmatch("ebnc event balanced successfully"),"✔","")
| head 7
| eval EBNCStatus="ebnc event balanced successfully", StartTime=strptime(StartTime,"%Y-%m-%d %H:%M:%S.%3N"), EndTime=strptime(EndTime,"%Y-%m-%d %H:%M:%S.%3N")
| rename busDt as Business_Date fileName as File_Name CARS.UNB_Duration as "CARS.UNB_Duration(Minutes)"
| stats earliest(StartTime) AS StartTime latest(EndTime) AS EndTime values("CARS.UNB_Duration(Minutes)") AS "CARS.UNB_Duration(Minutes)" values(Records) AS Records values(totalClosingBal) AS totalClosingBal values(totalRecordsWritten) AS totalRecordsWritten values(totalRecords) AS totalRecords values(EBNCStatus) AS EBNCStatus BY Business_Date File_Name
| eval StartTime=strftime(StartTime,"%Y-%m-%d %H:%M:%S.%3N"), EndTime=strftime(EndTime,"%Y-%m-%d %H:%M:%S.%3N")
| sort -Business_Date

If you have more values for the other fields, you can use other functions such as last or first.

Ciao. Giuseppe
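As an isolated sketch of the strptime/strftime round-trip used in queries like the one above (with a hypothetical timestamp): strptime parses the string into an epoch number that stats functions can compare numerically, and strftime formats the result back for display.

```spl
| makeresults
| eval StartTime="2023-09-12 01:02:03.456"
| eval epoch=strptime(StartTime, "%Y-%m-%d %H:%M:%S.%3N")
| eval StartTime2=strftime(epoch, "%Y-%m-%d %H:%M:%S.%3N")
```

Without the strptime step, comparisons on StartTime would be string comparisons, which happen to sort correctly for this "%Y-%m-%d" layout but break for formats like "%d/%m/%Y".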
@gcusello  I tried the query below but one extra row is still coming:

index="abc" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
| eval True=if(searchmatch("ebnc event balanced successfully"),"✔","")
| eval EBNCStatus="ebnc event balanced successfully", Day=strftime(_time,"%Y-%m-%d")
| dedup EBNCStatus Day
| search EBNCStatus=*
| table EBNCStatus True Day
@gcusello  How can I use a group-by here? Can you please guide me?
Hi @gjhaaland , open a case with Splunk Support; it's the only way to get a quick answer. Ciao. Giuseppe
Forgot to mention: when I open Data Summary it says "Waiting for results" but it never gets/receives any data. Only "Waiting for results" without ending. Rgds Geir
Giuseppe, Thanks again. Yes, if I run a search command and/or old reports, we get no answer at all. The Splunk GUI is running, but we don't get any answer if we run the search index=*. Normally we would see a long listing of output. I have not deleted any files. All I have done is change some settings regarding field extraction. After a while I discovered that we did not receive any data at all. So there must be some connection between fields (enable/disable) and field extraction. Rgds Geir
If I have understood your requirement correctly, you could try something like this

index=xyz earliest=-1hr latest=now
| rex field=_raw "^(?<sourceLBIP>\d*\.\d*\.\d*\.\d*)\s\[\w.*\]\s(?<responsetime>\d*)\s\"(?<getorpost>\w*)\s(?<uri>\S*)\sHTTP\/1.1\"\s(?<statuscode>\d*)\s(?<responsesize>\d*)\"(?<refereralURL>\S*)\"\"\w.*\"\s\S*(?<node>web*\d*)\s\S*"
| search sourceLBIP="*" responsetime="*" getorpost="*" uri="*" statuscode="*" responsesize="*" refereralURL="*" node="*"
| eval responsetime1=responsetime/1000000
| eventstats max(responsetime) as max_responsetime
| eventstats first(eval(if(responsetime == max_responsetime, uri, null()))) as longest_uri
| where uri=longest_uri
| chart values(responsetime) by _time longest_uri
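The core of the pattern above, finding the event with the maximum responsetime and propagating its uri to every event, can be exercised on its own with hypothetical data (makeresults, made-up uri and responsetime values):

```spl
| makeresults count=3
| streamstats count
| eval uri=case(count=1, "/a", count=2, "/report", count=3, "/b"),
       responsetime=case(count=1, 100, count=2, 900, count=3, 300)
| eventstats max(responsetime) as max_responsetime
| eventstats first(eval(if(responsetime == max_responsetime, uri, null()))) as longest_uri
| where uri=longest_uri
```

Only the row whose responsetime equals the maximum (here uri="/report") should survive the where clause; the two eventstats calls attach max_responsetime and longest_uri to every event first, which is what makes the final comparison possible.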
We have activated several data models for use with Splunk Enterprise security scenarios and are interested in clarifying the retention period for the summaries generated by these data models. According to the Splunk documentation, the retention period is determined by the accelerated summary range. For instance, if our network traffic accelerated summary range is set to 15 days, does this imply that the retention period is also 15 days, and that it stores 15 days' worth of summaries?
Hi @gjhaaland, the error messages aren't relevant. Let me better understand: does the search not run, or do you always get no results? When you say that yesterday it worked perfectly, do you mean that yesterday the searches ran, or that running a search today over yesterday's data is OK? Probably the only solution is to open a case with Splunk Support, who can access your system (with you) and debug the situation. Ciao. Giuseppe
Hi gcusello

Thanks for the answer. No answer at all; even if I run the "Usage Reporting Dashboard", the answer is empty. Since it worked perfectly yesterday, I think/assume that some files are blocking or stopping normal behavior.

If I restart splunkd I get the following messages:

1: Invalid key in stanza [admin_external:configure] in /home/splunk/etc/apps/TA-eStreamer/default/restmap.conf, line 7: python.version
2: Your indexes and inputs configurations are not internally consistent. For more info run: splunk btool check --debug
3: Validating installed files against hashes from '/home/splunk/splunk/7.1……..-x86_64manifest'. Problems were found, please review your files and move customization to local

Starting splunk server daemon (splunkd) Done [OK]

Rgds Geir

If I run splunk btool check --debug I get the following errors (cut/paste errors):

No spec file for: /home/splunk/etc/apps/Splunk_CiscoSecuritySuite/local/css_views.co
No spec file for: /home/splunk/etc/apps/TA-eStreamer/local/encore.conf
No spec file for: /home/splunk/etc/apps/eStreamer/local/estreamer.conf
No spec file for: /home/splunk/etc/apps/Splunk_CiscoSecuritySuite/default/css_views.conf
No spec file for: /home/splunk/etc/apps/Splunk_CiscoSecuritySuite/default/eventgen.conf
No spec file for: /home/splunk/etc/apps/TA-eStreamer/default/encore.conf
Invalid key in stanza [admin_external:configure] in /home/splunk/etc/apps/TA-eStreamer/default/restmap.conf, line 7: python.version (value: python3).
No spec file for: /home/splunk/etc/apps/eStreamer/default/estreamer.conf
No spec file for: /home/splunk/etc/apps/firepower_dashboard/default/appsetup.conf
No spec file for: /home/splunk/etc/apps/firepower_dashboard/default/umbrella.conf
No spec file for: /home/splunk/etc/system/default/conf.conf
No spec file for: /home/splunk/etc/system/local/migration.conf
Can you help with a query to find out which indexes are not used?
Hi @Rajini, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Yes, I figured out the cause, It is fixed now. Thank you
Hi @aditsss, is the "|head 7" in the second row correct? Anyway, did you check the data in the events? You used the table command, which doesn't group any data and only displays it. It seems that you have wrong data. Ciao. Giuseppe
@gcusello @richgalloway  Below is the query:

search index="abc" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
| eval True=if(searchmatch("ebnc event balanced successfully"),"✔","")
| head 7
| eval EBNCStatus="ebnc event balanced successfully"
| table EBNCStatus True
| rename busDt as Business_Date
| rename fileName as File_Name
| rename CARS.UNB_Duration as "CARS.UNB_Duration(Minutes)"
| table Business_Date File_Name StartTime EndTime "CARS.UNB_Duration(Minutes)" Records totalClosingBal totalRecordsWritten totalRecords EBNCStatus
| sort -Business_Date

The issue I am facing: when I sort with -Business_Date, Business_Date comes out correct but StartTime and EndTime do not. For example, in the screenshot below, for Business_Date 09/11 the StartTime and EndTime come out as 09/13 when they should be 09/12. @gcusello @richgalloway please guide