All Posts


| eval diff= now() - reception_time
Hi, even with strptime I am not able to get the difference between the current time and timestampOfReception. I am using the criteria below. Can you please help me extract the difference between the current time and timestampOfReception into the diff field? Also, strptime does not seem to work with the now() function.

`eoc_stp_events_indexes` host=p* OR host=azure_srt_prd_0001 (messageType= seev.047* OR messageType= SEEV.047*) status = SUCCESS targetPlatform = SRS_ESES NOT
    [ search (index=events_prod_srt_shareholders_esa OR index=eoc_srt) seev.047 Name="Received Disclosure Response Command"
      | spath input=Properties.appHdr
      | rename bizMsgIdr as messageBusinessIdentifier
      | fields messageBusinessIdentifier ]
| eval Current_time =strftime(now(),"%Y-%m-%d %H:%M:%S ")
| eval reception_time =strptime( timestampOfReception , "%Y-%m-%d%H:%M:%S.%N" )
| eval diff= current_time - reception_time
| fillnull timestampOfReception , messageOriginIdentifier, messageBusinessIdentifier, direction, messageType, currentPlatform, sAAUserReference value="-"
| sort -timestampOfReception
| table diff , reception_time, Current_time , timestampOfReception, messageOriginIdentifier, messageType, status, messageBusinessIdentifier, originPlatform, direction, sourcePlatform, currentPlatform, targetPlatform, senderIdentifier, receiverIdentifier, currentPlatform,
| rename timestampOfReception AS "Timestamp of reception", originPlatform AS "Origin platform", sourcePlatform AS "Source platform", targetPlatform AS "Target platform", senderIdentifier AS "Sender identifier", receiverIdentifier AS "Receiver identifier", messageOriginIdentifier AS "Origin identifier", messageBusinessIdentifier AS "Business identifier", direction AS Direction, currentPlatform AS "Current platform", sAAUserReference AS "SAA user reference", messageType AS "Message type"
@tmaoz wrote: "Actually, SplunkWeb crashes in the 'preview' stage of creating a new input so I can't even create the input that way."

If you're having problems using SplunkWeb, then create the input by editing inputs.conf, and increase the limit by modifying limits.conf. Then restart Splunk for the changes to take effect.
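For illustration, a minimal sketch of what that manual configuration might look like. The monitor path, index, and sourcetype here are hypothetical, and which limits.conf stanza applies depends on which preview limit is actually being hit; check limits.conf.spec for your version:

```
# $SPLUNK_HOME/etc/system/local/inputs.conf
# Hypothetical file monitor input, created by hand instead of via SplunkWeb
[monitor:///var/log/myapp/app.log]
index = main
sourcetype = myapp_log

# $SPLUNK_HOME/etc/system/local/limits.conf
# [indexpreview] governs the data-preview feature; raising max_preview_bytes
# lets the preview read more of the file (value below is an example)
[indexpreview]
max_preview_bytes = 4000000
```

After editing both files, restart Splunk so the changes take effect.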
Hello all, I have a dashboard with a trellis layout in a panel. I need to drill down based on the dynamic values for which the trellis is generated. The challenge is that, of the three charts the trellis gives, the drilldown works on two of them. On the third one, nothing happens when I click on the chart.

<row>
  <panel>
    <title>
    <chart>
      <search>
        <query>index=... </query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
      <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
      <option name="charting.axisTitleX.visibility">collapsed</option>
      <option name="charting.axisTitleY.visibility">collapsed</option>
      <option name="charting.axisTitleY2.visibility">collapsed</option>
      <option name="charting.axisX.abbreviation">none</option>
      <option name="charting.axisX.scale">linear</option>
      <option name="charting.axisY.abbreviation">none</option>
      <option name="charting.axisY.scale">linear</option>
      <option name="charting.axisY2.enabled">0</option>
      <option name="charting.axisY2.scale">inherit</option>
      <option name="charting.chart">column</option>
      <option name="charting.chart.bubbleMaximumSize">50</option>
      <option name="charting.chart.bubbleMinimumSize">10</option>
      <option name="charting.chart.bubbleSizeBy">area</option>
      <option name="charting.chart.nullValueMode">gaps</option>
      <option name="charting.chart.overlayFields">median_count</option>
      <option name="charting.chart.showDataLabels">none</option>
      <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
      <option name="charting.chart.stackMode">default</option>
      <option name="charting.chart.style">shiny</option>
      <option name="charting.drilldown">all</option>
      <option name="charting.layout.splitSeries">0</option>
      <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
      <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
      <option name="charting.legend.mode">standard</option>
      <option name="charting.legend.placement">none</option>
      <option name="charting.lineWidth">2</option>
      <option name="refresh.display">progressbar</option>
      <option name="trellis.enabled">1</option>
      <option name="trellis.scales.shared">0</option>
      <option name="trellis.size">small</option>
      <drilldown>
        <link target="_blank">/xxx/yyy/zzz?test_Tok=$trellis.value$</link>
      </drilldown>
    </chart>
  </panel>
</row>

The trellis gives vertical column charts arranged one after the other horizontally. Your inputs to resolve the issue would be very helpful. Thank you, Taruchit
Hi, I have been using Splunk Enterprise 9.0.5.1 for about a month and have been experimenting with a (Studio) dashboard for application insights. I am now trying to get NFS info into my dashboard. Because the NFS shares don't have logical names, I have created a simple, small lookup CSV with two fields: app-name and nfs-name. This is working fine:

index=summary type=isilon_nfs-quota-alert (path="*appsdata*")
| lookup apps-nfs.csv nfs-name as path output nfs-name as found, app-name as application
| where isnotnull(found)
| table path, found, application, quota

It fetches the NFS info for all the NFS shares in my apps-nfs.csv. But I don't want the entire list; I want to filter apps-nfs.csv first on app-name, and I can't get that to work. Eventually I want to use the app-name token of my dashboard to filter, but I can't even get a simple search working. How do I filter on app-name in the CSV before fetching the NFS info, for instance with an IN list (app1, app2, app5, etc.)?
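One common way to do this (a sketch, assuming the field names from the post and a hypothetical list of app names) is to filter the lookup in a subsearch and let it generate the path terms for the outer search:

```
index=summary type=isilon_nfs-quota-alert path="*appsdata*"
    [ | inputlookup apps-nfs.csv
      | search app-name IN (app1, app2, app5)
      | rename nfs-name as path
      | fields path ]
| lookup apps-nfs.csv nfs-name as path output app-name as application
| table path, application, quota
```

The subsearch reads the CSV, keeps only the rows whose app-name matches the filter, and returns their nfs-name values as an implicit (path="..." OR path="...") clause in the main search. The literal IN list could later be swapped for the dashboard's app-name token.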
Hello. When I try to save an experiment in the Splunk Machine Learning Toolkit's smart forecasting, I get an error: "Cannot validate experiment". Does anyone have a clue what this could be referring to? Maybe I need permissions to be able to do that?
If I try to find the count, I only get the count of either BIE or RAD. But I want the count of both combined.
First things first: see the changelogs for known bugs; sendemail.py is known to have had a few bugs here and there.
Apart from all the things we and @ITWhisperer already said, your main problem here is this:

| eval Current_time =strftime(now(),"%Y-%m-%d %H:%M:%S ")
| eval diff= Current_time-timestampOfReception

combined with the fact that your timestampOfReception is most probably also a string field from your event (you're not strptime()-ing it anywhere in your search, so I can safely assume that). What you're trying to do is run a subtraction on two strings. It won't fly. Strings are not subtractable (and in Splunk they are not additive either; you need to use the concatenation operator). So you won't get any value at all, and that's normal in this case. What you need to do is use strptime() (not strftime()!) to parse the timestampOfReception field into a so-called unix timestamp (the number of seconds since the epoch, which means it's a number) and subtract it from the value of now(), which is also returned as a unix timestamp. There is no need to format any of that into strings. Quite the contrary - you want both of those timestamps as numbers, because then you can easily manipulate them.
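In SPL that boils down to something like this (a sketch; the strptime format string is assumed from the timestamps shown in the question and may need adjusting to match the actual field):

```
| eval reception_time = strptime(timestampOfReception, "%Y-%m-%d %H:%M:%S.%N")
| eval diff = now() - reception_time
| eval diff_readable = tostring(diff, "duration")
```

diff holds the difference in seconds as a number; the optional tostring(diff, "duration") eval renders it in a human-readable HH:MM:SS form for the results table.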
Hi There, I have noticed that the cloud monitoring console is reporting a critical bucket. I only have one and have attached a screenshot. The small % is 100.  Unfortunately, I am not certain as to... See more...
Hi there, I have noticed that the Cloud Monitoring Console is reporting a critical bucket. I only have one and have attached a screenshot. The "small %" value is 100. Unfortunately, I am not certain what this really means and whether it is something to worry about or not. Any help would be appreciated, Jamie
The answer to the question of whether it can be achieved _only_ using evals and ifs is almost definitely "no". It needs a bit more than that. While your question is a bit vague and could use some literal examples (possibly anonymized), I assume that you need something like this:

<your index> ((<conditions for error message1>) OR (<conditions for common message>))
| eval message1=if(searchmatch("<conditions for error message1>"),1,0)
| eval commonmessage=if(searchmatch("<conditions for common message>"),1,0)
| stats sum(message1) sum(commonmessage)

Something like this will give you the count of each of your respective messages over your search window. If those numbers differ, you'll know that you have more messages of one kind than the other. BTW, searchmatch() is probably not the most efficient way to categorize those events, so if you can specify the rules in a simpler way (for example, matching a particular field's value) it will probably be beneficial for search performance.
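Filled in with the literal message strings from the question (the index name here is hypothetical), the template above might look like:

```
index=app_logs ("Standard Error Message 1" OR "Common Message")
| eval message1 = if(searchmatch("\"Standard Error Message 1\""), 1, 0)
| eval commonmessage = if(searchmatch("\"Common Message\""), 1, 0)
| stats sum(message1) as count1, sum(commonmessage) as count2
| eval counts_match = if(count1 == count2, "yes", "no")
```

An alert could then trigger on counts_match="no" (or on count1 != count2 directly) to flag time windows where Exception Event 1's two messages don't appear in equal numbers.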
I am not able to find the time difference using the clause below. Can you please tell me what I should add to get the difference of the two timestamps?

| eval diff= Current_time-timestampOfReception

Complete search:

`eoc_stp_events_indexes` host=p* OR host=azure_srt_prd_0001 (messageType= seev.047* OR messageType= SEEV.047*) status = SUCCESS targetPlatform = SRS_ESES NOT
    [ search (index=events_prod_srt_shareholders_esa OR index=eoc_srt) seev.047 Name="Received Disclosure Response Command"
      | spath input=Properties.appHdr
      | rename bizMsgIdr as messageBusinessIdentifier
      | fields messageBusinessIdentifier ]
| eval Current_time =strftime(now(),"%Y-%m-%d %H:%M:%S ")
| eval diff= Current_time-timestampOfReception
| fillnull timestampOfReception , messageOriginIdentifier, messageBusinessIdentifier, direction, messageType, currentPlatform, sAAUserReference value="-"
| sort -timestampOfReception
| table diff , Current_time, timestampOfReception, messageOriginIdentifier, messageType, status, messageBusinessIdentifier, originPlatform, direction, sourcePlatform, currentPlatform, targetPlatform, senderIdentifier, receiverIdentifier, currentPlatform,
| rename timestampOfReception AS "Timestamp of reception", originPlatform AS "Origin platform", sourcePlatform AS "Source platform", targetPlatform AS "Target platform", senderIdentifier AS "Sender identifier", receiverIdentifier AS "Receiver identifier", messageOriginIdentifier AS "Origin identifier", messageBusinessIdentifier AS "Business identifier", direction AS Direction, currentPlatform AS "Current platform", sAAUserReference AS "SAA user reference", messageType AS "Message type"
Thanks a lot... It's working!
And what have you tried so far? And how did the results fall short of your expectations?
1. I don't understand how removing unused apps would save you money (apart from premium apps like ES or ITSI, but you should already know whether your company is using those). There is a small number of paid apps for Splunk, but they are - if I remember correctly - typically associated with particular input types, so you need them if you ingest data of a given kind.
2. An app upgrade typically (if the app is properly written) just boils down to deploying the new version of the app. The new version should contain new default files but should leave your local settings untouched. But:
3. Some apps introduce incompatibilities across versions and need additional steps after the upgrade. To find out what those are, you need to check the app's docs. There is no other way.
4. The method of deploying new versions of your apps will greatly depend on your environment architecture: whether you use clustering on your search heads and indexers, whether you push apps from a deployment server, and what push mode you use on your SHC deployer (if applicable). This might be as easy as going to the apps menu and clicking "upgrade" a few times, but it can be much more complicated.
EDIT: Sorry, I didn't notice you posted this in the Cloud section. In that case the "main" part should be easier, but if there are apps added manually (not from Splunkbase) you'd need to go through vetting the new versions again. You'll also need to upgrade apps on your DS/HFs/UFs, which _probably_ (unless you manage everything by hand) means uploading the new versions to the DS and letting the clients download and deploy them.
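As a rough starting point for the "which apps are unused" part, Splunk Web access logs in _internal can hint at which app UIs people actually visit. This is a sketch only: it measures dashboard/UI visits, not scheduled searches, lookups, or inputs an app may provide, so treat low view counts as a prompt for investigation rather than proof an app is unused:

```
index=_internal sourcetype=splunk_web_access uri_path="*/app/*"
| rex field=uri_path "/app/(?<app>[^/]+)/"
| stats count as views, latest(_time) as last_viewed by app
| eval last_viewed = strftime(last_viewed, "%Y-%m-%d")
| sort views
```

Run it over a long enough window (for example 90 days) to avoid flagging apps that are only used occasionally.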
Thanks for the reply, but I want the total count when timeval is latest (in this case 2023). So, according to my lookup, the result should be 2: the BIE count is 0 and the RAD count is 2, and 0+2=2. Hope this helps in understanding.
If you just want a statistical report on data read from your lookup, use the inputlookup command. For example:

| inputlookup mylookup
| stats count

will give you the number of rows in your lookup. You can do any operation on fields read from the lookup that you would normally do in a "normal" event search.
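Applied to this thread's case (assuming the lookup has the timeval field described above), one way to count only the rows carrying the latest timeval would be:

```
| inputlookup mylookup
| eventstats max(timeval) as latest
| where timeval == latest
| stats count
```

eventstats attaches the maximum timeval to every row without collapsing them, the where clause keeps only the rows at that latest value, and the final stats counts them across all categories (BIE, RAD, etc.) combined.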
I have this lookup. I want the total count when timeval is latest (in this case 2023). Any solution?
I am tasked with doing the application upgrades on Splunk, and also with finding out which applications are not used much so we can uninstall them and save some cost. Can someone help me with the steps to upgrade applications in Splunk across regions, and with how I can list the apps which are not being used?
I have a use case where I want to set up Splunk alerts for certain exception events. I have already defined standard error messages for these individual exceptions. Below is a sample use case:

Exception Event 1 logs: Standard Error Message 1, Common Message
Exception Event 2 logs: Common Message

In the above use case, when Exception Event 1 happens, it outputs two messages to the log (Standard Error Message 1 and Common Message). When Exception Event 2 happens, it only outputs the Common Message to the log. For the Splunk alert for Event 1, I want to check the counts of search results matching both Message 1 and the Common Message, to ensure that both searches return the same result count for a given time period. Is it possible to achieve this type of Splunk query using eval and if statements? My objective is to accurately identify the scenario where Exception Event 1 occurs, in which both messages are output to the logs in the same count.