All Topics

Hello, I wanted to ask the community for help with server build specifications for a standalone search head/forwarder. We want to keep this search head/forwarder dedicated to M365 traffic. Any suggestions or documentation would be very helpful. Thank you.
Good afternoon. I'm currently running the trial Enterprise version on a workstation, which appears to have installed OK, and I've installed a forwarder on a server, which also appears to have installed OK. However, I can't seem to get the two to talk to each other (I can't add the forwarder in the Enterprise UI because it can't find it). I've searched for the problem on Google but have only come across one post that relates to what I am experiencing, and its solution didn't work. Would anyone have any ideas or point me in the right direction, please? TIA
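For context, a typical way to wire this up from the command line (a minimal sketch; the hostname and paths are placeholders) is to enable a receiving port on the Enterprise instance and then point the forwarder at it:

# On the Splunk Enterprise instance: enable receiving on the conventional port 9997
/opt/splunk/bin/splunk enable listen 9997

# On the universal forwarder: tell it where to send data (replace the hostname with the Enterprise host)
/opt/splunkforwarder/bin/splunk add forward-server enterprise-host.example.com:9997
/opt/splunkforwarder/bin/splunk restart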
Hi! What's the best strategy if I want my AWS Lambda logs ingested directly into Splunk Cloud? I don't want my Lambda to log to CloudWatch first before ingesting into Splunk Cloud, to avoid paying for both.
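One common pattern (a hedged sketch, not a definitive recommendation) is to have the Lambda function post events directly to an HTTP Event Collector (HEC) token on the Splunk Cloud stack, skipping CloudWatch entirely. The exact HEC hostname for a given Splunk Cloud stack varies (often an http-inputs-<stack> style prefix), so the URL and token below are placeholders:

curl -k "https://http-inputs-<your-stack>.splunkcloud.com:443/services/collector/event" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": {"message": "hello from lambda"}, "sourcetype": "aws:lambda"}'

The same POST can be made from inside the Lambda handler with any HTTP client, or via the Splunk-provided Lambda logging blueprints if those are available in your region.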
[root@uf5 opt]# cd /opt/splunkforwarder/bin
[root@uf5 bin]# ./splunk start --accept-license
bash: ./splunk: cannot execute binary file
[root@uf5 bin]# bash path/to/mynewshell.sh
bash: path/to/mynewshell.sh: No such file or directory

How can I fix the error "bash: ./splunk: cannot execute binary file"? Please help me.
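That error usually means the splunk binary does not match the host's CPU architecture or the downloaded package is corrupt. A few diagnostic commands worth running first (a minimal sketch; the package filename is a placeholder):

# What architecture is this host? (e.g. x86_64 vs aarch64)
uname -m

# What architecture was the splunk binary built for?
file /opt/splunkforwarder/bin/splunk

# If they differ, re-download the universal forwarder package built for this architecture
# and verify its checksum against the value published on the download page
md5sum splunkforwarder-<version>-<arch>.tgz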
To manage our knowledge objects on our search heads, I created a blank app on the deployer under /opt/splunk/etc/shcluster/apps/ and deployed it to the search heads. That part worked well. I can now create knowledge objects in that app and they sync across the search heads, which also works well. However, I am realizing that the app on the deployer is empty, and I'm wondering if that could come back to bite me some day. Should I instead be creating the knowledge objects in an app on the deployer in /opt/splunk/etc/apps/, then copying the changes to /opt/splunk/etc/shcluster/apps/, and finally deploying it to the search heads? That way I could still use the UI to create the knowledge objects without having to worry about anything, or is my worry overrated? I'm curious what others have found to be the best way to manage this. Should I just not worry about it and let the apps differ between the deployer and the search heads, or should I have some process for configuring it on the deployer, copying it to the deployment apps, and then pushing it out? Thanks.
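For reference, the second workflow described above would look roughly like this on the deployer (a sketch only; the app name my_knowledge_app is made up for illustration, and the -target URI and credentials are placeholders):

# Copy the app, including knowledge objects maintained on the deployer, into the SHC staging area
cp -R /opt/splunk/etc/apps/my_knowledge_app /opt/splunk/etc/shcluster/apps/

# Push the configuration bundle out to the search head cluster
/opt/splunk/bin/splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme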
I am trying to create a search query that pulls the Tenable (ACAS) critical and high scan results and outputs the following information:
1. IP Address (unique identifier for ACAS mapping to Xacta)
2. DNS Name
3. System Name
4. MAC Address
5. OS Type
6. OS Version
7. OS Build
8. System Manufacturer
9. System Serial Number
10. System Model
11. AWS Account Number (new field to capture in the standard)
12. AWS Instance ID
13. AWS ENI
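As a rough starting point only (the index, sourcetype, severity values, and field names below are assumptions based on the Tenable Add-on for Splunk and will need to be adjusted to the data actually indexed; several of the requested fields, such as the AWS account number and serial number, will likely have to come from an enrichment lookup rather than from the scan results themselves):

index=tenable sourcetype="tenable:sc:vuln" severity IN ("critical", "high")
| stats latest(dnsName) as dns_name latest(macAddress) as mac_address latest(operatingSystem) as os by ip
| table ip dns_name mac_address os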
We are trying to figure out whether it is possible to get, from the internal log files, the start time and the time spent viewing dashboards per user for our monthly report. Any ideas whether this is possible? We are running Splunk Cloud 8.2.2106 and are upgrading to 8.2.2109 soon.
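As a possible angle (a hedged sketch: the sourcetype and URI pattern are assumptions, and access to these internal logs can be restricted on Splunk Cloud), dashboard page views are normally recorded in the web access logs in _internal, so something along these lines may get close to first-view time and view counts per user and dashboard:

index=_internal sourcetype=splunk_web_access status=200 uri_path="*/app/*/*"
| stats earliest(_time) as first_view count as page_views by user uri_path

Time spent on a dashboard is harder, since these logs capture page loads rather than how long a page stays open; it usually has to be approximated from the gaps between consecutive requests per user.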
I have a dashboard that I have to refresh a lot, which is causing a lot of jobs on my Splunk install. The first set of jobs runs, but the jobs are kept by Splunk for 5 minutes (image below). How do I get them to expire after 5 seconds instead? I have tried the settings below, but they are not working for me. Any ideas would be great, thanks.

default_save_ttl = 5
ttl = 5
remote_ttl = 5
srtemp_dir_ttl = 5
cache_ttl = 5
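For context, the ttl-style settings listed above normally belong in the [search] stanza of limits.conf rather than standing alone; a minimal sketch of the most relevant one (assuming the goal is to shorten how long finished dashboard search jobs are kept) would be:

# $SPLUNK_HOME/etc/system/local/limits.conf
[search]
# Seconds that completed search jobs and their artifacts are retained
ttl = 5

Note that this affects all ad hoc searches, not just this dashboard, so a per-search setting (for example dispatch.ttl on a scheduled saved search in savedsearches.conf) may be the safer lever if the dashboard panels can be driven by saved searches.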
Heyo! We are pumped to share what's new in Splunk Cloud Platform 8.2.2109!

For Analysts
The new Dashboard Studio is continuously being improved, and we have introduced a number of enhancements:
- Scheduled PDF email export to easily share dashboards (Limited Availability Release; apply here)
- Global environment tokens for standard dashboard development
- Two additional visualization options: Sankey Diagram and Parallel Coordinates
- Interactive drilldowns by clicking on visualizations to set tokens
Device identification is now streamlined in Splunk Secure Gateway.
The Splunk Product Guidance (SPG) app is now integrated in each release to offer in-product walkthroughs, guides, and access to relevant how-to articles.

For Admins
- The Admin Configuration Service in the Victoria experience has been improved with the ability to upload, install, and upgrade apps through APIs (reach out to your account team to enable this feature).
- The Upgrade Readiness App now includes jQuery scanning to identify applications and file types that need attention before updating to a release with only jQuery 3.5 enabled.

For more details, take a look at the cloud platform release notes. Want to get notified of "What's New" in every Splunk Cloud Platform release? Subscribe to the 'Splunk Cloud Platform' label in this announcements discussion.

Your SaaSy (Splunk-as-a-Servicey) Updates, Judith - Splunk Cloud Platform
We have Splunk Enterprise & ES on both Windows and RHEL (Linux). Are the procedures much different for Windows vs. Linux? Should I be writing one for each, or just one procedure for our entire environment? Some of my UFs range from as old as 7.2.9 all the way up to 8.0.7. Thanks a million.
Hi All, I'm using the Network Toolkit's external ping lookup to monitor for servers that are down in my environment. However, after increasing the packet count to 4 instead of the default 1, I'm seeing 3-4 minutes of indexing delay in the data. Any ideas how I can reduce this delay, given that the script only takes about 30 seconds to run?
Hi All, could you please help me?

Scenario: I want results where one field contains a specific value, but in the results I am getting all the values along with my specific value.

Example search: index=xx port="10" OR port="110"

The output looks like:
hostname  port
          23
          25
          110
          80
          443

I want to show only port 10 or 110, not all of the ports that are open on the host.

Note: "AND" will not work, as it would only match hosts that have only ports 10 or 110 open.
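One way to approach this (a hedged sketch, assuming port is a multivalue field on each host's events, which is what the sample output suggests) is to keep only the matching values with mvfilter before aggregating:

index=xx port="10" OR port="110"
| eval port=mvfilter(match(port, "^(10|110)$"))
| stats values(port) as port by hostname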
Hi Guys, I have an issue on my Splunk HF, which runs on a Red Hat VM in Azure. I installed the AWS add-on, but when I try to configure it I get the view below. Can someone help? Regards, Alessandro
When searching to see which sourcetypes are in the Endpoint data model, I am getting different results if I search:

| tstats `summariesonly` c as count from datamodel="Endpoint" by index, sourcetype

than when I search:

| tstats `summariesonly` c as count from datamodel="Endpoint.Processes" by index, sourcetype

Why wouldn't the sourcetypes under the Processes data set be included in the first search for sourcetypes in the Endpoint data model? Thanks.
Hi! Consider the following KPI base search monitoring the Windows service state:

index=wineventlog sourcetype="WinEventLog:System" SourceName="Microsoft-Windows-Service Control Manager"
| rex field=Message "(The) (?<ServiceName>.+) (service entered the) (?<ServiceState>.+) "
| eval ServiceState=case(ServiceState=="running",2,ServiceState=="stopped",0,1==1,1)

If I do not want to explicitly name the Windows service in the base search, how do I include the service name (here ServiceName), besides entity_title=host, in the ITSI episode created later? Why? From the created episode we run a recovery action to restart a Windows service when it is stopped. For this we need to know the service name and the host it is running on. What we need are entity_title=host and the ServiceName available as dedicated fields in the correlation search from this generic KPI base search. Performing an ITOA REST call is not a problem.

Note: If I split by ServiceName, then the service name becomes the entity_title and the host is missing. Maybe someone has an idea that could help us. We just want to avoid creating one KPI per Windows service. Cheers, Peter
How do I identify the important metrics to include when creating a dashboard?
Trying to create a new Splunk Cloud Platform stack for a newly created account. Seeing the error:

An internal error was detected when creating the stack. We're sorry, an internal error was detected when creating the stack. Please try again later.
We are getting the below error while running commands under the bin directory.

Pid file "/opt/splunk/var/run/splunk/splunkd.pid" unreadable.: Permission denied

xxxx bin]# ./splunk restart
Pid file "/opt/splunk/var/run/splunk/splunkd.pid" unreadable.: Permission denied
Pid file "/opt/splunk/var/run/splunk/splunkd.pid" unreadable.: Permission denied
splunkd.pid file is unreadable. [FAILED]
Pid file "/opt/splunk/var/run/splunk/splunkd.pid" unreadable.: Permission denied

Splunk> Be an IT superhero. Go home early.

Checking prerequisites...
Checking http port [8000]: not available
ERROR: http port [8000] - port is already bound. Splunk needs to use this port.

I have tried the steps from:
Solved: Splunk will not start and is waiting for config lo... - Splunk Community

[root@xxxxxx bin]# ./splunk clean locks
Pid file "/opt/splunk/var/run/splunk/splunkd.pid" unreadable.: Permission denied

Location: /opt/splunk/var/run/splunk
-rw-r-----. 1 root root 48 Nov 2 15:00 splunkd.pid
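A common cause of this pattern is a mismatch between the user Splunk was originally started as and the user now running the commands, with something else already bound to port 8000; SELinux can also make files unreadable even for root. A hedged set of checks and fixes, assuming Splunk is meant to run as a dedicated splunk user (adjust the user name and paths to your environment):

# Which process currently holds port 8000?
ss -tlnp | grep 8000

# Which user are any running splunkd processes owned by?
ps -ef | grep splunkd

# Is SELinux enforcing? (it can produce unreadable-file errors even for root)
getenforce

# If Splunk should run as the splunk user, fix ownership of the whole install tree...
chown -R splunk:splunk /opt/splunk

# ...then restart it as that user instead of root
su - splunk -c "/opt/splunk/bin/splunk restart"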
Folks, we had to do summary indexing of alerts created by saved searches. This has been accomplished with the logevent alert action (though it's NOT well documented in the Splunk docs). I used https://docs.splunk.com/Documentation/Splunk/8.2.2/RESTREF/RESTsearch to set it up, and the tokens are all working well. The settings look like this:

logevent.param.index: test
logevent.param.sourcetype: my_summary_index_st
logevent.param.event: $name$ $result.*$

BUT only the FIRST result is captured by the $result.*$ token. Any idea how to ensure that all events from the alert are captured? ($results.*$ is NOT working.) PS: I've submitted feedback to the docs team to document all the parameters, but the docs are lacking a lot compared to the alert functionality.
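As an aside, the $result.fieldname$ tokens are generally described as carrying only the first result row of an alert, so one alternative worth considering (a hedged sketch, reusing the index and sourcetype names from above) is to append collect to the alert's own search so that every result row is written to the summary index directly:

... your alert's base search ...
| collect index=test sourcetype=my_summary_index_st

Be aware that collect with an explicit non-stash sourcetype writes the data as regular events, which can have licensing implications, so keeping the default stash sourcetype may be preferable in practice.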
Hello, I want to add dependent radio button functionality for the example below. When I click on 'TR DEPT' in the Landscape View, the Filter radio buttons should be displayed with the 'ALL' option selected by default. When I select the 'TR Failed' option, the Filter radio buttons should be hidden. Could you please help me with the code? Thank you.
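For illustration, a minimal Simple XML sketch of that pattern (all token names, labels, and values here are made up and would need to match the real dashboard): the first radio input sets or unsets a token in its change handler, and the Filter input only renders while that token exists because of its depends attribute, defaulting to ALL whenever it reappears:

<fieldset>
  <input type="radio" token="landscape_view">
    <label>Landscape View</label>
    <choice value="tr_dept">TR DEPT</choice>
    <choice value="tr_failed">TR Failed</choice>
    <change>
      <condition value="tr_dept">
        <!-- Show the Filter radio buttons and reset them to ALL -->
        <set token="show_filter">true</set>
        <set token="form.filter">ALL</set>
      </condition>
      <condition value="tr_failed">
        <!-- Hide the Filter radio buttons -->
        <unset token="show_filter"></unset>
      </condition>
    </change>
  </input>
  <input type="radio" token="filter" depends="$show_filter$">
    <label>Filter</label>
    <choice value="ALL">ALL</choice>
    <choice value="failed">Failed</choice>
    <default>ALL</default>
  </input>
</fieldset>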