All Posts



Ah okay, I'm pretty sure that this is not possible but maybe someone else has a solution for it.
@ITWhisperer : Thank you for this, based on your input I was able to find a working answer for my 3rd question. 
On which Splunk instance type do you face this issue? As a last resort you could clean up the whole KV store...
Hi @Jyo_Reel , in the _internal index you see Splunk's own logs; if you need other logs (e.g. operating system or application logs), you also have to install the related add-ons (Linux https://splunkbase.splunk.com/app/833 or Windows https://splunkbase.splunk.com/app/742 ), enabling the input stanzas that you want. Having the _internal logs from all hosts is a good starting point because it means that you correctly configured your connections and there isn't any connection issue. Ciao. Giuseppe
Good day! We would like to know if it is possible to reduce the number of fields displayed in the Alert Fields section or hide the section entirely for incidents created in Splunk OnCall (VictorOps), please see the attached screenshot. Currently, ITSI is passing an excessive number of fields. Can the Splunk OnCall incident details UI be customized to address this? Thank you.
Hi @arjun , to monitor Windows or Linux machines that have a Universal Forwarder installed, you have to install on these UFs the related add-on (Linux https://splunkbase.splunk.com/app/833 or Windows https://splunkbase.splunk.com/app/742 ), enabling the input stanza for memory monitoring. In this way you'll have the logs to use in your searches. Ciao. Giuseppe
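To make the "enable the input stanza" step concrete, here is a sketch of what that could look like in the add-on's local/inputs.conf on the UF. The stanza and script names below follow what the two add-ons have shipped by default, but verify them against the default/inputs.conf of the version you actually deploy:

```ini
# Splunk Add-on for Unix and Linux (Splunk_TA_nix) -- local/inputs.conf
# vmstat.sh reports memory usage; enable it and set a polling interval
[script://./bin/vmstat.sh]
interval = 60
sourcetype = vmstat
disabled = 0

# Splunk Add-on for Microsoft Windows (Splunk_TA_windows) -- local/inputs.conf
# collect Memory performance counters via perfmon
[perfmon://Memory]
counters = *
interval = 60
disabled = 0
```

Settings placed in local/ override the add-on's defaults and survive upgrades.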
Hi @iamsahil , as @marnall also said, which CLI command did you use? I use the CLI only if I have to do an unattended installation; otherwise, I always use the MSI directly. What if you retry the CLI command on another machine, is there the same issue? About the failCA, could you share your message? Try searching for this message on the Community; it is probably a known message or issue. Ciao. Giuseppe
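For reference, an unattended install on Windows is usually driven through msiexec; a minimal sketch (the .msi filename is a placeholder, and accepting the license via AGREETOLICENSE is required for silent installs):

```bat
REM silent/unattended install of the Universal Forwarder on Windows
REM <splunkforwarder>.msi is a placeholder for the actual installer filename
msiexec.exe /i <splunkforwarder>.msi AGREETOLICENSE=Yes /quiet /l*v install.log
```

The /l*v flag writes a verbose installer log, which is the first place to look when a silent install fails.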
I use the default account: sys. I use the default database: FREE.
Hi all, I am trying to ingest data from an Oracle DB into Splunk Observability Cloud. Q1: Should I create a database user for this monitor, or just use the default account? Q2: As in the sample datasource: "oracle://<username>:<password>@<host>:<port>/<database>" — should I create a database, or can I use the default database? Thanks in advance.
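For context, that datasource string matches the format used by the oracledb receiver in the OpenTelemetry Collector (which Splunk Observability Cloud deployments use for data collection); a sketch of where it goes, with the question's placeholders kept as-is — verify the receiver name and keys against your collector version:

```yaml
# OpenTelemetry Collector config -- sketch
receivers:
  oracledb:
    # placeholders kept from the sample; substitute real credentials/host
    datasource: "oracle://<username>:<password>@<host>:<port>/<database>"

service:
  pipelines:
    metrics:
      receivers: [oracledb]
      exporters: [signalfx]  # or whichever exporter your deployment already uses
```

Whatever account you choose, the receiver only needs read access to the performance views it queries, so a dedicated user is possible but not structurally required by the config.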
By default, the Splunk Universal Forwarder ("agent") cannot execute arbitrary commands (what a security hole *that* would be!).  In addition, it does not monitor a port so there is no mechanism for sending commands. With some effort, you may be able to add a script to the appropriate Deployment Server app that the agent would then download and execute.  It's also possible Splunk SOAR might help.
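A minimal sketch of that deployment-app idea (the app and script names here are hypothetical): the UF runs whatever scripted inputs the app defines on a schedule, so the "command" executes on the agent's own timer rather than on demand:

```ini
# deployment-apps/my_remediation/local/inputs.conf  (hypothetical app name)
# the UF downloads this app from the Deployment Server, then runs the
# script from the app's bin/ directory every 300 seconds
[script://./bin/remediate.sh]
interval = 300
sourcetype = remediation:output
disabled = 0
```

The script itself (bin/remediate.sh here) would contain the actual command; anything it writes to stdout is indexed under the configured sourcetype, which also gives you an audit trail of each run.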
Hi,

index=idx_myindex source="/var/log/mylog.log" host="myhost-*" "memoryError"

I know that with the conditions above I can search for the log that caused the memoryError. As in the example above, when such a log occurs on myhost-*, I would like to send a command to the host where the log occurred and execute a specific command on the agent. Is there a way?
I've applied this solution to my dashboard and it worked fine! Thanks a lot!
Good morning, I am having consistent trouble with the UI in the editor in both Firefox and Chrome: I cannot get the Dynamic Element selector to do anything. It displays the available options but I cannot select any of them. When I click on one, e.g. Background, nothing happens and it still says Select. Has anyone seen this before and have a workaround, or know what's causing it and how to fix it? Thank you, Charles
If you are in the episode, you'll see by which notable event aggregation policy (NEAP) your episode was created. Were both created by the same NEAP? When you take a look at the timeline in each of your episodes, you'll also see what alerted. Could it be that you have different episodes because you have one notable event for the KPI and one for the service health score? Maybe your KPI has a critical status while your service is at medium because you have a second KPI in your service and they are both weighted with 5?
I wouldn't do it with the multi-KPI alert. If you install the Content Pack for Monitoring and Alerting in ITSI, there will be some new correlation searches which monitor a sustained status for entities, KPIs, or services. These searches can be modified if needed.
It might be that they forgot it or didn't consider it important for the primary use case. This add-on is Splunk supported, so if you have a support contract then you could reach out to Splunk support.
Finally it's working. Thank you all for your help.

| mvexpand message
| rename message as _raw
| fields - {}.* ``` optional ```
| spath path={}
| mvexpand {}
| fields - _* ``` optional ```
| spath input={}
| search TARGET_SYSTEM="EAS"
| eval lookupfiledatestart = strftime(INSERT_DATE,"%m/%d/%Y")
| addinfo
| eval _time = strftime(info_min_time,"%m/%d/%Y")
| where _time=INSERT_DATE
| chart sum(TRANSACTION_COUNT) as TRANSACTION_COUNT by INSERT_DATE
Splunk doesn't by default come with a cert file called server_pkcs1.pem. It must be a piece of configuration explicitly done in your deployment. So you have to find: 1) Where (if anywhere) its use is defined (@marnall's hint can help, but it doesn't have to contain all possible references to certs; some add-ons can use their own cert settings). 2) Where this cert comes from. As far as I remember, only the default cert can be automatically (re)created.
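As a sketch of step 1, a recursive grep over the config tree plus btool will usually surface where a cert path is set (assumes $SPLUNK_HOME points at your install; btool's --debug flag prints which file each effective setting comes from):

```shell
# find every .conf file that mentions the certificate by name
grep -r --include='*.conf' 'server_pkcs1.pem' "$SPLUNK_HOME/etc"

# show the effective server SSL settings and the file each one came from
"$SPLUNK_HOME/bin/splunk" btool server list sslConfig --debug
```

Note that btool only covers .conf settings; add-ons that pass cert paths some other way (e.g. in scripts) will only show up in the grep.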
I think @marnall meant | where _time=info_max_time (or whatever other meta field) instead of eval.
I modified my search but am not getting any result:

index = ****** host=transaction source=prd
| spath
| mvexpand message
| rename message as _raw
| fields - {}.* ``` optional ```
| spath path={}
| mvexpand {}
| fields - _* ``` optional ```
| spath input={}
| search TARGET_SYSTEM="EAS"
| chart sum(TRANSACTION_COUNT) as TRANSACTION_COUNT by INSERT_DATE
| addinfo
| eval _time = info_min_time
| where INSERT_DATE=_time

My raw data:

[{"ID":"115918","TARGET_SYSTEM":"EAS","REVIEW":"CPW_00011H","TOTAL_INVENTORY":0,"TOTAL_HITS":0,"TRANSACTION_TYPE":"MQ","TRANSACTION_NAME":"HO620I","TRANSACTION_COUNT":4,"PROCESS_DATE":"11/26/2024","INSERT_DATE":"11/27/2024"},{"ID":"115919","TARGET_SYSTEM":"EAS","REVIEW":"CPW_00011H","TOTAL_INVENTORY":0,"TOTAL_HITS":0,"TRANSACTION_TYPE":"MQ","TRANSACTION_NAME":"HO626I","TRANSACTION_COUNT":39,"PROCESS_DATE":"11/26/2024","INSERT_DATE":"11/27/2024"}]

When I am not using the where condition, it gives me data:

index = **** host=transaction source=prd
| spath
| mvexpand message
| rename message as _raw
| fields - {}.* ``` optional ```
| spath path={}
| mvexpand {}
| fields - _* ``` optional ```
| spath input={}
| search TARGET_SYSTEM="EAS"
| chart sum(TRANSACTION_COUNT) as TRANSACTION_COUNT by INSERT_DATE
| addinfo
| eval _time = info_min_time