All Posts

The configs may *look* right, but maybe they aren't. Share the inputs.conf and outputs.conf settings for a second opinion. What gave you the impression that Splunk has no rights to send logs? How are you attempting to send data to the third-party tool? Have you seen https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Forwarddatatothird-partysystemsd ?
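For reference, a minimal outputs.conf sketch along the lines of that docs page, assuming the third-party tool is a plain TCP receiver (the group name and address here are placeholders, not your actual values):

# outputs.conf on the forwarder -- illustrative only
[tcpout:third_party]
server = 192.0.2.10:514
sendCookedData = false

The sendCookedData = false setting is what makes Splunk send raw data a non-Splunk system can read; adjust the group name and server to your environment.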
I need to install Splunk Enterprise and want to run the search head, indexer, and universal forwarder on the same system. Please advise.
I presume you're referring to the Splunk Add-on for Windows since the app does not have any inputs. It's not enough to change the destination index to a metrics index.  The format of the data must also change. See https://docs.splunk.com/Documentation/AddOns/released/Windows/Configuration#Collect_perfmon_data_and_wmi:uptime_data_in_metric_index for the list of Windows metrics that are available and how to enable them.
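As a rough sketch (stanza name and counters are illustrative, check the linked docs for the exact supported stanzas), a metrics-format perfmon input in inputs.conf looks something like this:

# inputs.conf -- illustrative only; the index must be created as a metrics index
[perfmon://CPU]
object = Processor
counters = % Processor Time; % User Time
instances = *
interval = 60
mode = single
sourcetype = PerfmonMetrics:CPU
index = windows_metrics

The PerfmonMetrics:* sourcetypes are what produce metrics-format output; pointing an ordinary Perfmon input at a metrics index will not work, which matches the behavior you described.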
Hi! We recently decided to move from Splunk on-prem to Cloud. Is there any quick way for me to upload my savedsearches.conf file from the on-prem to the Cloud instance? I am looking for a way where I don't have to manually copy my saved searches. Thanks!
I have the SolarWinds add-on installed on a Linux HF. I am seeing this error:

+0000 log_level=WARNING, pid=28286, tid=Thread-4, file=ext.py, func_name=time_str2str, code_line_no=321 | [stanza_name="SolarwindAlerts"] Unable to convert date_string "2024-02-15T13:44:46.6370000" from format "%Y-%m-%dT%H:%M:%S.%f" to "%Y-%m-%dT%H:%M:%S.%f", return the original date_string, cause=Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/cloudconnectlib/core/ext.py", line 304, in time_str2str
    dt = datetime.strptime(date_string, from_format)
  File "/opt/splunk/lib/python3.7/_strptime.py", line 577, in _strptime_datetime
    tt, fraction, gmtoff_fraction = _strptime(data_string, format)
  File "/opt/splunk/lib/python3.7/_strptime.py", line 362, in _strptime
    data_string[found.end():])
ValueError: unconverted data remains: 0

Can someone help? I have no data from SolarWinds. I tried reinstalling the add-on and reconfiguring it. It was working until the 8.* version of the HF; now we have upgraded to 9.1.3. It is shown as supported on Splunkbase.
The SPL will show something, although I am not sure of its value on its own, nor what it demonstrates without correlating the results back with the original source events. That being said, by comparing the pattern for different times of the day with the same time on different days, you might be able to discern a change which is significant. Again, you would need to investigate the reason for the change to determine whether it is useful to detect such a change.
Your graphic was an event view; what do you get with a statistics view (you might need a table command to show the various fields)? What was the full search which gave you the NaN result?
Use the eval command to add a field to your results.

| metadata type=hosts
| where recentTime < now() - 10800
| eval lastSeen = strftime(recentTime, "%F %T")
| fields + host lastSeen
| eval newField="srx"
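If instead you want the field to hold the actual value ending in "srx" rather than a literal string, a hedged sketch using rex against the host field (srxHost is a made-up field name, and the pattern assumes the value sits at the end of host):

| metadata type=hosts
| where recentTime < now() - 10800
| eval lastSeen = strftime(recentTime, "%F %T")
| fields + host lastSeen
| rex field=host "(?<srxHost>\S+srx)$"

Adjust the regular expression to match how the name actually appears in your data.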
@ITWhisperer Just to get your last word on this: is the SPL I have shared in the description correct, in your view, for getting event data points and index data points for each hour, given that due to delayed ingestion we will see different patterns? Thank you
I am using the search below:

| metadata type=hosts
| where recentTime < now() - 10800
| eval lastSeen = strftime(recentTime, "%F %T")
| fields + host lastSeen

I would like to add a field populated by the somename value that ends in "srx":

Jan 4 13:07:57 1.1.1.1 1 2024-01-04T13:07:57.085-05:00 5995-somename-srx rpd 2188 JTASK_SIGNAL_INFO [junos@2636.1.1.1.2.133 message-name="INFO Signal Info: Signal Number = " signal-number="1" name=" Consumed Count = " data-1="3"]
@ITWhisperer Thanks for your reply. This can be done inline, but how do we ensure we have the right time extraction from the data? Is it possible to set _time to show nanoseconds while creating the parser? Also, with the command that you gave I cannot see the right results; below is the result I am getting: NaN/NaN/aN NaN:NaN:NaN.000 AM
Looking at the "provenance" field in the search i found that only fields containing the value "UI:Search" were related to actual search queries. The rest like dashboard searches that fire off automat... See more...
Looking at the "provenance" field in the search i found that only fields containing the value "UI:Search" were related to actual search queries. The rest like dashboard searches that fire off automatically appear under other field values.   Hope that helps
Thank you @ITWhisperer for sharing your inputs.
There is a default format for showing the _time field. You can override this. For example: | fieldformat _time=strftime(_time,"%Y-%m-%dT%H:%M:%S.%9Q%Z")
Why use bin span=1h and then use span=1d in the timechart? The bin span=1h is redundant. What does your timechart search give you, and why does it not match your requirement?
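For example, an hourly count can come straight from timechart with no prior bin (the index name here is a placeholder):

index=your_index | timechart span=1h count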
I do not know AWS Key Management Service data, so can't comment on that. Given that you have different data sources, you might need a different approach for each data source. I think you need to find an example in your data of the issue you are trying to detect, then determine the best way of finding that issue in the future. By trying different approaches (as I have hinted at), you might find one which matches your requirement. There is unlikely to be one answer which fits all, although theoretically there could be, so I would suggest you experiment.
Hello, I have to work on a parser where the time format looks like this: "time: 2024-02-15T11:40:19.843185438Z". It is JSON data, so I have created logic like the below to extract the time:

TIME_PREFIX = \"time\"\:\s*\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9Q%Z

Although I see no errors while uploading the test data, in the time field I can see values only up to milliseconds, e.g.: 2/15/24 11:40:19.843 AM. Is this the right way, or does Splunk show the nanosecond values too? If it does, what is missing in my logic to view the same? Kindly help. Regards.
Can I ingest CPU, memory, and EventID data into a metrics index by using the Splunk App for Windows? I am getting data once I ingest this data into an event index, but when I change the index to a metrics index, the data stops coming into any index. #splunkforwarder #splunkappforwindows
@ITWhisperer Extremely sorry; in the table I need the time as well.
Thank you @ITWhisperer for your detailed inputs. I have different types of data sources to monitor. The one I observed and shared is from AWS Key Management Service, where we write logs when the Key Management Service calls an AWS service on behalf of an application (or user). Please help confirm whether the approach you shared, computing and comparing index data point counts at successive hours, will enable a better understanding of data ingestion issues in Splunk? And in that approach, do event data point counts have any application or usage? Thank you