All Posts


Your graphic was an event view; what do you get with a statistics view (you might need a table command to show the various fields)? What was the full search that gave you the NaN result?
Use the eval command to add a field to your results. | metadata type=hosts | where recentTime < now() - 10800| eval lastSeen = strftime(recentTime, "%F %T") | fields + host lastSeen | eval newFiel... See more...
Use the eval command to add a field to your results.

| metadata type=hosts
| where recentTime < now() - 10800
| eval lastSeen = strftime(recentTime, "%F %T")
| fields + host lastSeen
| eval newField="srx"
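If the goal is to set the field only for hosts whose name actually ends in "srx" (rather than tagging every row with a constant), a conditional eval might work; this is an untested sketch, assuming the host field carries the device name:

| metadata type=hosts
| where recentTime < now() - 10800
| eval lastSeen = strftime(recentTime, "%F %T")
| fields + host lastSeen
| eval newField = if(match(host, "srx$"), "srx", null())

Here match() applies a regex anchored to the end of the host value, so rows for other hosts are left without the field.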
@ITWhisperer: Just to get your final word on this: in your view, is the SPL I shared in the description correct for getting event data points and index data points for each hour, given that we will see different patterns due to delayed ingestion? Thank you
I am using the search below | metadata type=hosts | where recentTime < now() - 10800| eval lastSeen = strftime(recentTime, "%F %T") | fields + host lastSeen   I would like to add a field popula... See more...
I am using the search below:

| metadata type=hosts
| where recentTime < now() - 10800
| eval lastSeen = strftime(recentTime, "%F %T")
| fields + host lastSeen

I would like to add a field populated by the somename that ends in "srx". A sample event:

Jan 4 13:07:57 1.1.1.1 1 2024-01-04T13:07:57.085-05:00 5995-somename-srx rpd 2188 JTASK_SIGNAL_INFO [junos@2636.1.1.1.2.133 message-name="INFO Signal Info: Signal Number = " signal-number="1" name=" Consumed Count = " data-1="3"]
@ITWhisperer Thanks for your reply. This can be done inline, but how do we ensure the time extraction from the data is right? Is it possible to set _time to show nanoseconds when creating the parser? Also, with the command you gave I cannot see the right results; below is the result I am getting: NaN/NaN/aN NaN:NaN:NaN.000 AM
Looking at the "provenance" field in the search i found that only fields containing the value "UI:Search" were related to actual search queries. The rest like dashboard searches that fire off automat... See more...
Looking at the "provenance" field in the search i found that only fields containing the value "UI:Search" were related to actual search queries. The rest like dashboard searches that fire off automatically appear under other field values.   Hope that helps
Thank you @ITWhisperer for sharing your inputs.
There is a default format for showing the _time field. You can override this. For example: | fieldformat _time=strftime(_time,"%Y-%m-%dT%H:%M:%S.%9Q%Z")
Why use bin span=1h and then use span=1d in the timechart? The bin span=1h is redundant. What does your timechart search give you, and why does it not match your requirement?
I do not know AWS Key Management Service data, so I can't comment on that. Given that you have different data sources, you might need a different approach for each data source. I think you need to find an example in your data of the issue you are trying to detect, then determine the best way of finding that issue in the future. By trying different approaches (as I have hinted at), you might find one which matches your requirement. There is unlikely to be one answer which fits all, although theoretically there could be, so I would suggest you experiment.
Hello  I have to work on a parser which has the time format like this : "time: 2024-02-15T11:40:19.843185438Z" It is json data so I have created a logic like below to extract the time. TIME_PREFIX... See more...
Hello, I have to work on a parser where the time format looks like this: "time: 2024-02-15T11:40:19.843185438Z". It is JSON data, so I have created logic like the below to extract the time:

TIME_PREFIX = \"time\"\:\s*\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9Q%Z

Although I see no errors while uploading the test data, in the time field I can see values only down to milliseconds (three decimal places), e.g. 2/15/24 11:40:19.843 AM. Is this the right way, or does Splunk show the nanosecond values too? If it does, what is missing in my logic to view them? Kindly help. Regards.
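For reference, a minimal props.conf sketch for this kind of JSON timestamp (the settings are standard props.conf timestamp attributes; the sourcetype name is a placeholder, and the lookahead value is an assumption sized to cover the prefix plus the timestamp):

[your_json_sourcetype]
TIME_PREFIX = \"time\"\s*:\s*\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9Q%Z
MAX_TIMESTAMP_LOOKAHEAD = 40
TZ = UTC

Note that even where the full precision is parsed, the default rendering of _time in search results shows milliseconds; overriding the display with fieldformat changes only how _time is shown, not what was extracted, and whether full nanosecond precision survives indexing is worth verifying on your own data.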
Can I ingest CPU, memory, and EventID data into a metric index by using the Splunk App for Windows? I get data when I ingest it into an event index, but when I change the index to a metric index, the data stops coming into any index. #splunkforwarder #splunkappforwindows
@ITWhisperer Extremely sorry, I left it out of the table I wrote; I need time as well.
Thank you @ITWhisperer for your detailed inputs. I have different types of data sources to monitor. The one I observed and shared is from AWS Key Management Service, where logs are written when the Key Management Service calls an AWS service on behalf of an application (or user). Please help confirm: will the approach you shared, computing and comparing index data point counts at successive hours, enable a better understanding of data ingestion issues in Splunk? And in that approach, do event data point counts have any application or usage? Thank you
Hi @Muthu_Vinith, you first have to create the lookup and the lookup definition (don't forget the definition!). Then you have to define the field list of the new lookup from the fields in the index, and finally create a search ending with the outputlookup command (https://docs.splunk.com/Documentation/Splunk/9.2.0/SearchReference/Outputlookup). So you can run something like this:

index=abc
| dedup field1 field2 field3
| sort field1 field2 field3
| table field1 field2 field3
| outputlookup your_lookup.csv

Analyze the options of the outputlookup command to find the ones that you require. Ciao. Giuseppe
Looking for some advice, please! I have pushed the Splunk UF via MS Intune to all domain laptops. All looks well, with the config file and the settings for the reporting server and ports in place. On an example machine, going to Services shows SplunkForwarder running. These logs are meant to be pushed to our CyberDefence third party. However, it seems Splunk has no rights to send logs (possibly due to the 'Log on as' settings in the SplunkForwarder service). Has anyone encountered and resolved this before, or completed a Splunk UF install via Intune?
Your expected output doesn't have a time element, so why are you using timechart, or indeed bin _time?
Hi @New_splunkie, It's great to see your interest in Data Manager! You're absolutely correct – Data Manager is native to the Splunk Cloud Platform, which means there's no separate installation required. However, there are a few requirements your deployment must meet in order to have it on your Splunk Cloud Platform. You can use Data Manager if your Splunk Cloud Platform deployment: runs Splunk Cloud Platform version 8.2.2104.1 or higher on the Victoria Experience, and is provisioned in a region that supports Data Manager (see Available regions and region differences in the Splunk Cloud Platform Service Description). Please let me know if that helps! Antoni
As I said, it depends on your data. For example, Apache HTTPD logs (and other HTTPD servers) log transactions using the timestamp for when the request was received, but the entry is added to the log when the response is sent back. This means that the event time could be minutes out from the index time even if the log was indexed instantaneously (which it isn't, as there will always be a lag between when the log is written and when it reaches the indexers). However, in this instance, the time the response was sent could be inferred from the request time and the duration, so this could be compared against the index time to give you a better idea about the lag.

Perhaps what might be more useful to you is the difference between successive index times? This might show you when there was either a pause in logging or a breakdown in transmission of the log events to the indexers. However, this would need to be compared with the actual rate at which the events were written to the log, so, again, it depends on your data.
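The "difference between successive index times" idea could be sketched in SPL along these lines (untested; the index name is a placeholder, and the hourly span is just an example):

index=your_index
| eval indexTime=_indextime
| sort 0 indexTime
| streamstats current=f last(indexTime) as prevIndexTime
| eval indexGap=indexTime-prevIndexTime
| timechart span=1h max(indexGap) as maxGapSeconds

streamstats with current=f carries forward the previous event's index time, so indexGap is the gap in seconds between successive arrivals at the indexer; spikes in maxGapSeconds would then point at pauses in logging or transmission.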
Brackets in the wrong place and it looks like the else part of the first if should start with another if | eval Test= if( (like('thrown.extendedStackTrace',"%403%"),"403", if(like('thrown.extendedSt... See more...
Brackets in the wrong place, and it looks like the else part of the first if should start with another if:

| eval Test=if(like('thrown.extendedStackTrace',"%403%"),"403", if(like('thrown.extendedStackTrace',"%404%"),"404","###ERROR####"))
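If the list of status codes grows, case() may read more cleanly than nested if()s; a sketch of the same logic:

| eval Test=case(like('thrown.extendedStackTrace',"%403%"), "403",
                 like('thrown.extendedStackTrace',"%404%"), "404",
                 true(), "###ERROR####")

case() evaluates its condition/value pairs in order, and the final true() branch acts as the catch-all default, mirroring the innermost else above.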