Hi Kiran, thanks for replying. Use case: I would like to back up indexed Splunk logs into a third-party backup/restore system, say on a daily or weekly basis, and they are required to be in a human-readable format, so they can be retrieved at any time in the future from the backup/restore system and read with Notepad or a similar tool if needed. Would love to know your thoughts on the above.
Hi @osh55 , ok, please try this:
index=sample1 ((sourcetype=x host=host1) OR sourcetype=y)
| eval caller_party=if(sourcetype="x", substr(caller, 2), caller_party)
| stats
count(eval(sourcetype="x")) AS all_calls
count(eval(sourcetype="y")) AS messagebank_calls
BY caller
| search all_calls=*
See my approach and adapt it to your use case. Ciao. Giuseppe
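One note on the quoting above: inside eval() and count(eval(...)), the sourcetype values need to be quoted string literals ("x", "y"), otherwise they are treated as field names. As a run-anywhere sketch of the same conditional-count pattern, using made-up sample values purely for illustration:
| makeresults count=4
| streamstats count AS n
| eval sourcetype=if(n%2=0, "x", "y"), caller="0412345678"
| eval caller_party=if(sourcetype="x", substr(caller, 2), caller_party)
| stats count(eval(sourcetype="x")) AS all_calls count(eval(sourcetype="y")) AS messagebank_calls BY caller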
Hi, I am not aware of anything out of the box that would achieve this for you. Do you have any more detailed requirements on what you need to see and what thresholds you have? It's worth starting out with some analysis / service design with stakeholders to determine what is important to monitor. Typically, depending on the requirements, you might end up with KPIs like:
- Number of failed jobs
- Average job duration
- Job completion status (success/failure count)
- Job start delay (scheduled vs actual start time)
(A rough SPL sketch of these follows below.) Do you need specific help creating these KPIs, or were you just looking for a boilerplate to get started? Check out https://docs.splunk.com/Documentation/ITSI/4.19.3/SI/KPIOverview for more info on creating KPIs.
Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing. @jaracan
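To illustrate the KPIs listed above, here is a minimal base-search sketch. It assumes a hypothetical index/sourcetype (batch_jobs / scheduler:jobs) and fields named job_name, status, duration, scheduled_time and start_time, with the two time fields as epoch values; adjust all of these to your actual data before building ITSI KPIs on top.
index=batch_jobs sourcetype=scheduler:jobs
| stats count(eval(status="FAILED")) AS failed_jobs
        count(eval(status="SUCCESS")) AS successful_jobs
        avg(duration) AS avg_job_duration
        avg(eval(start_time - scheduled_time)) AS avg_start_delay
        BY job_name
In ITSI, each of these aggregations could become a KPI in a shared base search, with job_name as the entity split-by field.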
Hi @MustakMU There isn't anything obvious in there which would trigger a restart, in fact the opposite! It doesn't mean that there isn't something else in the app, though, which would cause a restart. It's worth reviewing https://docs.splunk.com/Documentation/Splunk/9.4.1/Admin/Configurationfilechangesthatrequirerestart to see if any of the configs listed there are in your app and could be requiring the restart. As you are using the Add-on Builder, it could be that a more recent version of the Add-on Builder is creating config differently and thus wanting the restart. Are you using the latest version of the Add-on Builder? It's also worth reviewing splunkd.log around the time of the install to see if there are any log entries which suggest why it could be wanting the restart. Search for terms like "install", "restart", "trigger" etc. in _internal (see the example search below).
Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
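As a starting point for that _internal search, something like the following, run over the time window around the app install, may help. The keywords are only a guess at the relevant messages, since the exact log text varies by Splunk version:
index=_internal sourcetype=splunkd ("restart" OR "install" OR "trigger")
| table _time, component, log_level, _raw
| sort - _time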
@Splunkduck09 We must follow the outlined process to back up and restore the data. As mentioned, after restoration, you can query the data using the search head. However, it’s not feasible to directly export it into a human-readable format.
https://docs.splunk.com/Documentation/Splunk/9.2.0/Indexer/Backupindexeddata
https://docs.splunk.com/Documentation/Splunk/9.2.0/Indexer/Restorearchiveddata
https://docs.splunk.com/Documentation/Splunk/latest/Indexer/HowSplunkstoresindexes
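After completing the restore steps in those docs, a quick way to confirm the restored data is searchable again (your_index is a placeholder; set the time range to cover the restored buckets) is something like:
| tstats count WHERE index=your_index BY _time span=1d
If the restored days show non-zero counts, the thawed events are searchable from the search head.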
Hey bowesmana, thanks for replying. Use case: I would like to back up indexed Splunk logs into a third-party backup/restore system, say on a daily or weekly basis, and they are required to be in a human-readable format, so they can be retrieved at any time in the future from the backup/restore system and read with Notepad or a similar tool if needed. Would love to know your thoughts on the above.
@Splunkduck09 The simplest approach is to use Splunk’s search interface. Run a search query for the data you want (e.g., index=your_index), then export the results. In the Splunk Web UI, after running your search, click the "Export" button, choose a format like CSV, JSON, or raw text, and download the file. This will give you the raw event data in a readable format that you can open in a text editor. https://docs.splunk.com/Documentation/Splunk/9.4.1/Search/ExportdatausingSplunkWeb
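For the recurring daily/weekly export described elsewhere in this thread, one option (a sketch, not a full backup solution) is a scheduled search that writes the raw events out with outputcsv; the file is created under $SPLUNK_HOME/var/run/splunk/csv on the search head. The index name and output file name here are placeholders:
index=your_index earliest=-1d@d latest=@d
| table _time, host, source, sourcetype, _raw
| outputcsv daily_raw_export
Be aware that exporting every event this way can produce very large files and is generally only practical for modest data volumes.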
Hi, we have Splunk ITSI and the team wants to set up batch job monitoring using Splunk ITSI capabilities. Are there any pre-built dashboards or service/KPI SPL queries available that you can share with us to start with? We would appreciate your guidance on this. Looking forward to your response.
You can search them and see the raw data in the search window, and you can export them as CSV files, but what's your use case? It seems like an odd thing to want to do...
@sylim_splunk After applying step 9 (changing the replication factor back to what it was), the KV store went back to STARTING. Every other step worked to get the KV store to ready, but it went back to starting after changing the replication factor back to 3, which was the original value. Is there any Splunk solution for this KV store issue? It is very painful not to be able to find a workaround for it.
Hi all, I would like to read Splunk indexed logs/data using a text editor tool (like Notepad, etc.). I understand that Splunk indexed logs are compressed and further processed, which creates a format only Splunk can read (not human-readable); this works well for searching and other Splunk capabilities. But I am wondering if there is a way to convert these indexed logs into a human-readable format?
Hi @ITWhisperer , I will try to find that out with our Splunk Enterprise team. But if that is true, it should also happen in this case as well, where the space between the date and the hours is replaced by ":". In that case it runs fine, but I am not sure whether it is running over the correct time range, which I mean to be 12-March-2025 to 13-March-2025.
It is not clear where the _time>= and _time< are coming from, but these are where the issue is being introduced. Do you have any restrictions, etc., associated with the role?
Hi @ITWhisperer , I am observing one thing: when I change to the following format, giving ":" instead of a space (highlighted in red), then it runs but I am not getting values for earliest and latest. I am not sure if this is the correct way to display the values.
index=aws_np [| makeresults | eval earliest=strptime("12/03/2025:13:00","%d/%m/%Y %H:%M") | eval latest=relative_time(earliest,"+1d") | table earliest, latest] | table earliest, latest
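For what it's worth, strptime() only returns a value when the format string matches the literal text exactly, so with a colon between the date and the time the format needs a colon in the same position. A run-anywhere check of just the time parsing (not the full search):
| makeresults
| eval earliest=strptime("12/03/2025:13:00","%d/%m/%Y:%H:%M")
| eval latest=relative_time(earliest,"+1d")
| eval earliest_readable=strftime(earliest,"%F %T"), latest_readable=strftime(latest,"%F %T")
| table earliest, latest, earliest_readable, latest_readable
If earliest and latest come back empty here, the format string and the timestamp text do not line up.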