All Topics

Can DB Connect execute an Oracle anonymous PL/SQL block, with input parameters, and get the output parameters (for indexing)? An anonymous PL/SQL block, not a stored procedure: the latter must first be created on the Oracle server, whereas the anonymous PL/SQL block exists only on the client side (Splunk). Is this possible with Splunk DB Connect and the Oracle JDBC driver?

Best regards,
Altin
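For reference, the kind of block in question looks like the following minimal sketch, with one IN bind and one OUT bind (the table and column names are purely illustrative):

```sql
-- Anonymous PL/SQL block: compiled on the fly by the server, never stored.
-- :in_id and :out_name are client-side bind parameters.
DECLARE
  v_name VARCHAR2(100);
BEGIN
  SELECT ename INTO v_name FROM emp WHERE empno = :in_id;
  :out_name := v_name;
END;
```

Plain JDBC can run such a block through a CallableStatement, registering :out_name with registerOutParameter; whether DB Connect's input types expose that path is the open question here.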
Hello, I would like to ask for help creating a custom sourcetype (cloned from _json), which I'm calling "ibcapacity". I've tried editing the settings under this new sourcetype, but my results are even more broken.

The output of the file is formatted correctly as JSON (the jq checks all come back good), but when using the default _json sourcetype, the Splunk event gets cut off at 349 lines (the entire file is 392 lines). The other problem with the standard _json sourcetype is that it's not fully "color coding" the key/value pairs, though that could be because the closing brackets aren't in the Splunk event, since it was cut off at 349 lines.

This is where the Splunk event gets cut off. The rest of the file (past line 349) ends with this, which doesn't show up in the Splunk event:

],
"percent_used": 120,
"role": "Grid Master",
"total_objects": 529020
}
]

Can this community please help identify what the correct settings should be for my custom sourcetype, ibcapacity? Why is the Splunk event getting cut off at 349 lines when using sourcetype=_json? Thank you.
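A common cause of large multi-line JSON events being cut short is the per-event byte or line cap in props.conf. A sketch of settings to try for the new sourcetype (values are examples, applied on the indexer or heavy forwarder that parses the data):

```ini
# props.conf (sketch)
[ibcapacity]
KV_MODE = json
TRUNCATE = 0        # per-event byte cap; default 10000 bytes, 0 disables it
MAX_EVENTS = 1000   # per-event line cap during line merging; default 256
```

If the cutoff lands mid-file at a consistent byte count, TRUNCATE is the usual culprit; if it lands at a consistent line count, MAX_EVENTS is.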
Hello, I wanted to ask the community for help getting server build specifications for a standalone search head/forwarder. We want to keep this search head/forwarder dedicated to M365 traffic. Any suggestions or documentation would be very helpful. Thank you.
Good afternoon. I'm currently running the trial Enterprise version on a workstation, which appears to have installed OK. I've installed a forwarder on a server, which again appears to have installed OK. However, I can't seem to get the two to talk to each other (I can't add the forwarder in the Enterprise UI, as it can't find it). I've searched for the problem on Google etc. but have only come across one post that relates to what I am experiencing, and the solution didn't work. Would anyone happen to have any ideas or point me in the right direction, please? TIA
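For reference, the usual wiring between the two looks like this (a sketch, assuming the default receiving port 9997; <workstation-ip> is a placeholder):

```
# On the Splunk Enterprise workstation: enable a receiving port
$SPLUNK_HOME/bin/splunk enable listen 9997

# On the forwarder server: point it at the workstation, then verify
$SPLUNK_HOME/bin/splunk add forward-server <workstation-ip>:9997
$SPLUNK_HOME/bin/splunk list forward-server
```

Note that a forwarder only appears under Forwarder Management if it is configured as a deployment client; "can't find it" in the UI may simply mean no data has arrived yet.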
Hi! What's the best strategy if I want my AWS Lambda logs ingested directly into Splunk Cloud? I don't want my Lambda to log to CloudWatch first before ingesting to Splunk Cloud, to avoid paying for both.
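One common pattern is to have the function post its log lines directly to a Splunk HTTP Event Collector (HEC) token over HTTPS, so CloudWatch never sees them. A minimal sketch, where HEC_URL and HEC_TOKEN are placeholder assumptions standing in for your Splunk Cloud HEC endpoint and token:

```python
import json
import urllib.request

# Placeholders -- substitute your Splunk Cloud HEC endpoint and token.
HEC_URL = "https://http-inputs-example.splunkcloud.com/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def build_hec_event(message, source="aws:lambda", sourcetype="lambda:log"):
    """Wrap a log message in the envelope the HEC event endpoint expects."""
    return {"event": message, "source": source, "sourcetype": sourcetype}

def send_to_hec(message):
    """POST one event to HEC. Network call -- needs a real endpoint."""
    payload = json.dumps(build_hec_event(message)).encode("utf-8")
    req = urllib.request.Request(
        HEC_URL,
        data=payload,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Rolling your own like this means you own batching, retries, and failure handling when HEC is unreachable; the Splunk-provided Lambda blueprints and extensions cover those cases for you.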
[root@uf5 opt]# cd /opt/splunkforwarder/bin
[root@uf5 bin]# ./splunk start --accept-license
bash: ./splunk: cannot execute binary file
[root@uf5 bin]# bash path/to/mynewshell.sh
bash: path/to/mynewshell.sh: No such file or directory

How can I fix "bash: ./splunk: cannot execute binary file"? Please help me.
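The "cannot execute binary file" error most often means an architecture mismatch, e.g. an x86_64 Splunk package unpacked on an ARM host, or a 64-bit package on a 32-bit OS. A quick check (the splunkforwarder path is the one from the transcript above):

```shell
# Host CPU architecture, e.g. x86_64 or aarch64
uname -m
# Architecture the binary was built for; compare with the line above
file /opt/splunkforwarder/bin/splunk || true
```

If the two don't match, download the package built for the host's architecture and reinstall.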
To manage our knowledge objects on our search heads, I created a blank app on the deployer under /opt/splunk/etc/shcluster/apps/ and deployed it to the search heads. That part worked well. I can now create knowledge objects in that app and they sync across the search heads. That works well too.

However, I am realizing that the app on the deployer is empty, and I'm wondering if that could come back to bite me some day. Should I instead be creating the knowledge objects in an app on the deployer in /opt/splunk/etc/apps/, then copy the changes to /opt/splunk/etc/shcluster/apps/, and finally deploy it to the search heads? That way I can still use the UI to create the knowledge objects but won't have to worry about anything. Or is my worry overblown?

I'm curious what others have found to be the best way to manage this. Should I just not worry about it and let the apps be different between the deployer and the search heads, or should I have some process for configuring it on the deployer, copying it to the deployment apps, and then pushing out? Thanks.
I am trying to create a search query that pulls the Tenable (ACAS) critical and high scan results with the following information:

1.) IP Address (unique identifier for ACAS mapping to Xacta)
2.) DNS Name
3.) System Name
4.) MAC Address
5.) OS Type
6.) OS Version
7.) OS Build
8.) System Manufacturer
9.) System Serial Number
10.) System Model
11.) AWS Account Number (new field to capture in the standard)
12.) AWS Instance ID #
13.) AWS ENI
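A sketch of the shape such a search might take; the index name and every field name below are placeholders that need to be mapped to what the Tenable add-on actually extracts in your environment:

```
index=tenable severity IN ("critical","high")
| table ip dns_name system_name mac_address os_type os_version os_build
        system_manufacturer system_serial_number system_model
        aws_account_number aws_instance_id aws_eni
```

The AWS fields in particular may have to come from a lookup or a separate AWS data source joined on the instance, since scanners don't always report them.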
We are trying to figure out if it is possible to get, from the internal log files, the start time and time spent viewing dashboards per user, for our monthly report. Any ideas if this is possible? We are running Splunk Cloud 8.2.2106, upgrading to 8.2.2109 soon.
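One possible starting point is the web-access logs in the _internal index (a sketch; the rex pattern and the assumption that dashboard views appear as /app/<app>/<dashboard> requests should be validated against your own data):

```
index=_internal sourcetype=splunk_web_access uri_path="*/app/*" status=200
| rex field=uri_path "/app/(?<app>[^/]+)/(?<dashboard>[^/?]+)"
| stats earliest(_time) as start_time latest(_time) as last_seen by user dashboard
| eval approx_view_seconds=last_seen-start_time
```

Note this only approximates "time spent": access logs record page requests, not how long a browser tab stayed open.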
I have a dashboard that I have to refresh a lot. This is causing a lot of jobs on my Splunk install. The first set of jobs run, but they are kept by Splunk for 5 minutes (image below). How do I get the expired jobs to go to 5 seconds? I was trying some settings but they are not working for me. Any ideas would be great, thanks.

default_save_ttl = 5
ttl = 5
remote_ttl = 5
srtemp_dir_ttl = 5
cache_ttl = 5
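For context, finished-job lifetime can also be set per saved search rather than globally. A sketch, assuming the dashboard panels are backed by a saved search (the stanza name here is a placeholder):

```ini
# savedsearches.conf (sketch)
[my_dashboard_search]
# Keep the finished search job's artifacts for only 5 seconds
# (the default retention is much longer).
dispatch.ttl = 5
```

Inline (ad-hoc) dashboard searches fall back to the global ttl in limits.conf instead, which is why per-search settings are usually the safer lever.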
We have Splunk Enterprise & ES on both Windows & RHEL (Linux). Are the procedures much different for Windows vs. Linux? Should I be writing one for each, or just one procedure for our entire environment? Some of my UFs are as old as 7.2.9, all the way up to 8.0.7. Thanks a million.
Hi All,

I'm using the Network Toolkit's external lookup "ping" for monitoring server down events in my environment, but after increasing the packet count to 4 instead of the default 1, I'm getting 3-4 minutes of indexing delay in the data. Any ideas how I can reduce this delay, given that the script takes only 30 seconds to run?
Hi All, could you please help me?

Scenario: I want a result where one field contains a specific value, but in the result I am getting all the values along with my specific value.

Example:

index=xx port="10" OR port="110"

The output is like:

hostname   port
           23
           25
           110
           80
           443

I want to show only port 10 or 110, not all the ports that are open for the host.

Note: "AND" will not work, as it will only match those hosts that have both ports 10 and 110 open.
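One way to keep only the matching values when port is multivalued per host is to aggregate and then filter the multivalue field (a sketch):

```
index=xx port="10" OR port="110"
| stats values(port) as port by hostname
| eval port=mvfilter(port="10" OR port="110")
```

mvfilter drops the non-matching members (23, 25, 80, 443 in the example) while keeping the host row itself.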
Hi guys,

I have this issue on my Splunk HF, on a Red Hat VM on Azure. I installed the AWS add-on, but when I try to configure it I get this view. Can someone help?

Regards,
Alessandro
When searching to see which sourcetypes are in the Endpoint data model, I am getting different results if I search:

| tstats `summariesonly` c as count from datamodel="Endpoint" by index, sourcetype

than when I search:

| tstats `summariesonly` c as count from datamodel="Endpoint.Processes" by index, sourcetype

Why wouldn't the sourcetypes under the Processes data set be included in the first search for sourcetypes in the Endpoint data model? Thanks.
Hi! Consider the following KPI base search monitoring the Windows service state:

index=wineventlog sourcetype="WinEventLog:System" SourceName="Microsoft-Windows-Service Control Manager"
| rex field=Message "(The) (?<ServiceName>.+) (service entered the) (?<ServiceState>.+) "
| eval ServiceState=case(ServiceState=="running",2,ServiceState=="stopped",0,1==1,1)

If I do not want to explicitly name the Windows service in the base search, how do I include the service name (here ServiceName), beside entity_title=host, in the later created ITSI episode?

Why? From the created episode we run a recovery action to restart a Windows service when it has stopped. For this we need to know the service name and the host it is running on. What we need is entity_title=host and ServiceName available as dedicated fields in the correlation search from this generic KPI base search. Performing an ITOA REST call is no problem.

Note: if I split by ServiceName, then the service name becomes the entity_title and the host is missing.

Maybe someone has an idea that can help us. We just want to avoid creating one KPI per Windows service.

Cheers
Peter
How do I identify important metrics for creating a dashboard?
Trying to create a new Splunk Cloud Platform stack for a newly created account. Seeing the error:

"We're sorry, an internal error was detected when creating the stack. Please try again later."
We are getting the below error while running commands under the bin directory.

xxxx bin]# ./splunk restart
Pid file "/opt/splunk/var/run/splunk/splunkd.pid" unreadable.: Permission denied
Pid file "/opt/splunk/var/run/splunk/splunkd.pid" unreadable.: Permission denied
splunkd.pid file is unreadable. [FAILED]
Pid file "/opt/splunk/var/run/splunk/splunkd.pid" unreadable.: Permission denied

Splunk> Be an IT superhero. Go home early.

Checking prerequisites...
Checking http port [8000]: not available
ERROR: http port [8000] - port is already bound. Splunk needs to use this port.

I have tried the steps in "Solved: Splunk will not start and is waiting for config lo... - Splunk Community":

[root@xxxxxx bin]# ./splunk clean locks
Pid file "/opt/splunk/var/run/splunk/splunkd.pid" unreadable.: Permission denied

Location: /opt/splunk/var/run/splunk
-rw-r-----. 1 root root 48 Nov 2 15:00 splunkd.pid
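Two things stand out in the transcript: the pid file is owned by root with mode 640 (and the trailing "." in the ls output indicates an SELinux context, which can deny access even to root-run processes), and port 8000 being bound suggests an old splunkd/Splunk Web instance is still running. A sketch of the usual cleanup, assuming Splunk should run as a dedicated "splunk" user; adjust user/group to your installation:

```
# Is something still holding port 8000?
ss -ltnp | grep 8000

# As root: hand the whole installation to the splunk user,
# then restart as that user so new pid files get the right owner
chown -R splunk:splunk /opt/splunk
su - splunk -c "/opt/splunk/bin/splunk restart"

# If SELinux is enforcing, check for recent denials against splunk
ausearch -m avc -ts recent | grep splunk
```

Mixing root-started and user-started runs is the classic way these root-owned pid and lock files appear in the first place.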
Folks, we had to do summary indexing of alerts created by saved searches. This has been accomplished with the logevent alert action (though it is NOT well documented in the Splunk docs). I've used https://docs.splunk.com/Documentation/Splunk/8.2.2/RESTREF/RESTsearch to set it up, and the tokens are all working well. The settings are like below:

logevent.param.index: test
logevent.param.sourcetype: my_summary_index_st
logevent.param.event: $name$ $result.*$

BUT, only the FIRST alert result is captured by the $result.*$ token. Any idea how to ensure all the events from the alert are captured? (`$results.*$` is NOT working.)

PS: I've submitted feedback to the docs team to update all the parameters, but the docs are lacking a lot compared to the alert functionalities.
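If the goal is simply to land every result row in the summary index, one common workaround (a sketch, given that $result.fieldname$ tokens only expand from the first result row) is to write the rows from the search itself with the collect command instead of relying on the alert action's event template:

```
... your alert search ...
| collect index=test sourcetype=my_summary_index_st
```

This keeps the alert action for notification purposes while collect handles the indexing of all rows.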