All Topics

This is a note my client sent: I can't really use the logs from Splunk because they are spotty. I think I figured out why it's missing data, though. After our last meeting, they bumped the transfer rate up from 1024KBps to 2048KBps because the CE was generating too much data. However, the CE raw logs generate roughly 75MB/minute, or about 1.25MBps raw. Since Splunk compresses to ~17%, the 2KBps gets you about 11KBps raw. The 1.25MBps is roughly 217.6KBps compressed. Hence, it gives you roughly 5% of the total logs. To support Openway and prevent dropping any logs, we need to set a conservative rate that can handle 110MB/minute raw, which would be ~319KBps. We did set limits.conf to the 2048 mentioned above, but we're not certain how to set it to the level the client needs.
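For what it's worth, the note's arithmetic checks out: 110 MB/min = 110 × 1024 / 60 ≈ 1877 KBps raw, and at ~17% compression that is ≈ 319 KBps on the wire. A sketch of the forwarder-side setting follows (the value is illustrative headroom, not a tested figure). One caution: per the limits.conf spec, maxKBps is generally understood to throttle the forwarder's raw (uncompressed) thruput, so sizing against the ~1877 KBps raw rate, or disabling the cap entirely with 0, may be the safer reading.

```ini
# limits.conf on the forwarder
# 110 MB/min raw = 110 * 1024 / 60 ~= 1877 KBps raw
# at ~17% compression: 1877 * 0.17 ~= 319 KBps compressed
[thruput]
# 0 = unlimited; otherwise size against the raw rate plus headroom
maxKBps = 2560
```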
Hello, I have many forwarders sending logs to a cluster of indexers, and for some logs I need to send them uncooked. The problem is, when I add _TCP_ROUTING in my inputs.conf, the logs are correctly sent uncooked to the right server, but they are no longer visible in Splunk (on the indexers). When I comment out _TCP_ROUTING, the logs reach the indexers correctly. I noticed that when I re-comment the _TCP_ROUTING, the logs are not lost and appear in Splunk after 1-2 minutes.

inputs.conf

[monitor:///pqth/to/logs/*/*.log]
sourcetype = my:sourcetype
index = myindex
_TCP_ROUTING = logs_uncooked_to_send

outputs.conf

[tcpout]
defaultGroup = default

[tcpout:default]
server = indexer1:9997,indexer2:9997,indexer3:9997,indexer4:9997
autoLBFrequency = 10

[tcpout:logs_uncooked_to_send]
server = server1:5066
sendCookedData = false

Any idea what is blocking? Maybe something I missed or didn't mention here?
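One likely cause (an assumption, since the full config isn't visible): when _TCP_ROUTING names a single output group, events from that input go only to that group and skip defaultGroup entirely. Listing both groups should send a cooked copy to the indexers as well as the uncooked copy to server1:

```ini
# inputs.conf -- route to both the uncooked receiver and the indexer group
[monitor:///pqth/to/logs/*/*.log]
sourcetype = my:sourcetype
index = myindex
_TCP_ROUTING = logs_uncooked_to_send,default
```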
We are planning to move to SAML SSO soon. One of the drawbacks of SAML is that you cannot authenticate on the API any longer. Up to this point, any user defined to use splunkweb has had access to the API. How can I find out who will be impacted by yanking API access?  
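A hedged starting point: splunkd's access log in _internal records the authenticated user on each REST call, so something like the following (field names assumed from the default splunkd_access sourcetype; verify them against your events) can surface who is hitting the API today:

```
index=_internal sourcetype=splunkd_access uri=/services/* user=* user!="-"
| stats count latest(_time) AS last_seen BY user, clientip
| convert ctime(last_seen)
```

Filtering out known Splunk Web client IPs (your search heads) from that list leaves the users and scripts that would lose access under SAML.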
I have logs coming in that look like the following, on multiple nodes:

State change from '0' to '1' is complete

or

State change from '1' to '0' is complete

Basically, I want a graph/visualization that displays the most recent status of the hosts. I've used | rex to extract the '0'/'1' value after "to" on each log, but I'm wondering how to make the graph show the most recent state of each node, maybe a bar graph where a green bar = 1 and a red bar = 0, separated by host. Any advice is appreciated, thanks!
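One sketch (index name is a placeholder): extract the new state, then keep only the latest value per host with stats latest(), which respects event time order:

```
index=myindex "State change from" "is complete"
| rex "State change from '\d' to '(?<state>\d)' is complete"
| stats latest(state) AS current_state BY host
```

Rendered as a column chart of current_state by host, you can then map 1 to green and 0 to red with the visualization's charting.fieldColors option, or pipe through rangemap for a red/green single-value per host.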
I'm trying to add VMware vSphere data using the Splunk app for infrastructure. When I try to, all I get is an alert "You have to install the Splunk VMware Add-on for ITSI and its components before yo... See more...
I'm trying to add VMware vSphere data using the Splunk app for infrastructure. When I try to, all I get is an alert "You have to install the Splunk VMware Add-on for ITSI and its components before you start collecting VMware data." I'm unclear on how to get around this issue since I have installed the necessary components. According to documentation https://docs.splunk.com/Documentation/InfraApp/2.0.4/Install/VMWInstall?ref=hk I need to install ITSI, which I have, and install SA-VMWIndex, Splunk_TA_esxilogs, and Splunk_TA_vcenter, which are all components inside of the vmware_ta_itsi parent directory that is a part of the ITSI install. I made sure the components within vmware_ta_itsi were also in $SPLUNK_HOME/etc/apps/ like the documentation says they should be.
Hi, I have alerts for when disk usage goes above a certain percentage. There are alerts at 70, 80, and 90, and they work fine. But when the 70% alert fires, I get alerted twice, by both the 70% and the 60% alert. Here is what the query looks like. I am trying to keep each alert segmented so it only matches its own band: 60-69.99, 70.00-79.99, and so on.

aws_account="cloud" "DSM: Current disk usage for account" (account_disk_quota > 70)
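Bounding each alert on both sides keeps the bands from overlapping. A sketch for the 70% alert (assuming account_disk_quota is numeric at search time):

```
aws_account="cloud" "DSM: Current disk usage for account"
| where account_disk_quota >= 70 AND account_disk_quota < 80
```

The 60% alert would then use >= 60 AND < 70, and so on, so any given reading matches exactly one alert.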
Hi everyone, we are trying to display a set of images (.jpg/.png) as a slideshow within a panel. I have working HTML (with JS & CSS) that runs a manual slideshow in Chrome or any browser, but I am having issues converting it into a Splunk JS object. Has anyone worked on this kind of requirement? Thanks in advance.
The default polling frequency for the Splunk Add-on for CrowdStrike is one hour (3600 seconds). Can this be changed to pull data more frequently, e.g. 10m (600) or 5m (300)?
I have an authentication token that I have used successfully with curl and the REST API. I'm trying to send a limited set of log events from a Java application in AWS. All of the documentation I can find discusses using a username and password with the Java SDK client, but I only have an HEC token and endpoint. This works perfectly fine on my test Splunk:

service = new Service("HOST", Port);
String credentials = "Username:Password";
String basicAuthHeader = Base64.encode(credentials.getBytes());
service.setToken("Basic " + basicAuthHeader);

But I don't have a username and password for production Splunk, only the HEC endpoint and token. So when I try

service = new Service("HOST", Port);
service.setToken("MY HEC TOKEN");

I get an Unauthorized exception. Is there any way to use the Java SDK without a username and password?
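The SDK's Service class targets the management port with session auth, while HEC is a plain HTTP endpoint with its own "Splunk &lt;token&gt;" authorization scheme, so one option (a sketch, not an SDK feature; host, port, and token are placeholders) is to skip the SDK and POST to /services/collector/event directly:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class HecExample {
    // HEC expects "Splunk <token>", not Basic auth
    static String authHeader(String hecToken) {
        return "Splunk " + hecToken;
    }

    // Posts one JSON event to an HEC endpoint and returns the HTTP status
    static int sendEvent(String endpoint, String token, String json) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", authHeader(token));
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(json.getBytes(StandardCharsets.UTF_8));
        }
        return conn.getResponseCode();
    }

    public static void main(String[] args) throws Exception {
        // Network call commented out; 8088 is the default HEC port
        // sendEvent("https://HOST:8088/services/collector/event", "MY-HEC-TOKEN",
        //           "{\"event\": {\"message\": \"hello from java\"}}");
        System.out.println(authHeader("MY-HEC-TOKEN"));
    }
}
```

A 200 response with body {"text":"Success","code":0} indicates the event was accepted; 403 usually means the token is wrong or disabled.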
Hi, I have just downloaded the UCS app and am trying to add the UCS manager. I keep getting "409 conflict - check item with the same name already exists", but there is no other item with the same name. Can someone please help me out? Thank you.
Hi, my query returns results in tabular form, with one column holding the email ID for each department. Each row has a department name along with an email ID and some other stats. My requirement is to send each line of data to the respective email ID specified in that row. How can I achieve this? Thanks.
I am going to assume this is a simple question, but I'm having a severe brain fart. I have installed Splunk Free in the past and did not have an issue with the 'user' field being extracted as an interesting field or being searchable: I was able to do user=John.Doe, but on this install I have to do Account_Name=John.Doe. I have a ton of dashboards and would like to know if someone has an explanation for why I cannot use 'user=' on this install. I am also faced with having to use mvindex to extract information, since Account_Name has multiple values in Security logs, so I would like to keep using the 'user' field to simplify this. Greatly appreciated. Long time reader, first time poster....
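The 'user' field for Windows Security logs is normally supplied by the Splunk Add-on for Microsoft Windows, so checking whether that TA is installed is the first step. As a stopgap, a field alias can recreate it (a sketch assuming sourcetype WinEventLog:Security; note it aliases the whole multivalue Account_Name, so the mvindex issue remains):

```ini
# props.conf
[WinEventLog:Security]
FIELDALIAS-user = Account_Name AS user
```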
Hi all, I have a query where I am passing one field from the output of the outer query to another query using a subsearch, based on field_1:

index=index_2 sourcetype=sourcetype_2 [search index=index_1 sourcetype=sourcetype_1 | fields field_1] | table field_1 field_2

I can get the required results/events (field_1, field_2) from the outer query based on the common field field_1 with the inner query. Now I want some columns/fields from the subsearch (inner query) to be displayed along with the final result from the outer query. Please suggest.
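A subsearch only contributes filter terms, so its other fields never reach the outer results. One sketch is to pull them across with join instead (field_3 is a hypothetical stand-in for whichever inner-query field you need):

```
index=index_2 sourcetype=sourcetype_2
| join type=inner field_1
    [search index=index_1 sourcetype=sourcetype_1
     | fields field_1 field_3]
| table field_1 field_2 field_3
```

Be aware join has subsearch result limits; for large data sets the usual alternative is searching both indexes together and combining with stats values(...) by field_1.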
Hi all, I've enabled Python 3.7 support on my installation, but now my external command won't work anymore, saying I have some syntax error, which I'm not able to find anywhere in the directory tree of my command app... And the command works with Python 2.7. The message is:

ValueError: Unknown level: 'ERROR ; Default: WARNING'

The full log is:

07-17-2020 10:12:48.713 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/elasticsplunk/bin/elasticsplunk.py __GETINFO__ eaddr=cluster1 index=testdrive query="state:IL or state=TN and age>22"':
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/elasticsplunk/bin/elasticsplunk.py", line 49, in <module>
    from splunklib.searchcommands import dispatch, StreamingCommand, GeneratingCommand, Configuration, Option, validators
  File "/opt/splunk/etc/apps/elasticsplunk/bin/splunklib/searchcommands/__init__.py", line 145, in <module>
    from .environment import *
  File "/opt/splunk/etc/apps/elasticsplunk/bin/splunklib/searchcommands/environment.py", line 120, in <module>
    splunklib_logger, logging_configuration = configure_logging('splunklib')
  File "/opt/splunk/etc/apps/elasticsplunk/bin/splunklib/searchcommands/environment.py", line 103, in configure_logging
    fileConfig(filename, {'SPLUNK_HOME': splunk_home})
  File "/opt/splunk/lib/python3.7/logging/config.py", line 80, in fileConfig
    _install_loggers(cp, handlers, disable_existing_loggers)
  File "/opt/splunk/lib/python3.7/logging/config.py", line 195, in _install_loggers
    log.setLevel(level)
  File "/opt/splunk/lib/python3.7/logging/__init__.py", line 1353, in setLevel
    self.level = _checkLevel(level)
  File "/opt/splunk/lib/python3.7/logging/__init__.py", line 192, in _checkLevel
    raise ValueError("Unknown level: %r" % level)
ValueError: Unknown level: 'ERROR ; Default: WARNING'

07-17-2020 10:12:48.725 ERROR script - Getinfo probe failed for external search command 'ess'.
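The traceback points at the app's logging configuration file rather than your code: Python 3's configparser no longer strips `;` inline comments by default, so a level line with a trailing comment is read as the literal string 'ERROR ; Default: WARNING'. Moving the comment to its own line in the app's logging.conf should clear it (a sketch; the exact file and key names in your app may differ):

```ini
# Before (works on Python 2, breaks on Python 3):
#   level = ERROR ; Default: WARNING

# After:
# Default: WARNING
level = ERROR
```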
Hi all, I need help changing the output I get from the search below.

index=itsm | stats count by Class_Type | sort - count

The output is:

Class_Type                count
NodeDown Trap             2129
Cisco LWAPP AP Trap        766

Can I change the output to show a different name in that column? For example, I want "Cisco LWAPP AP Trap" to be displayed as "CISCO AP DOWN". Is that possible?
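Yes. One option is the replace command, which swaps exact value matches in a field:

```
index=itsm
| stats count by Class_Type
| replace "Cisco LWAPP AP Trap" WITH "CISCO AP DOWN" IN Class_Type
| sort - count
```

For many renames at once, an eval with case(), or a lookup table mapping raw names to display names, scales better.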
I have 2 custom apps, App_A and App_B, and I want to hide a panel based on the current app. For that I have used the $env:app$ token. Now I want to eval a new token in the init section of the dashboard that compares $env:app$ with App_A or App_B and then generates the value for the new token. I tried various options like case and if, but no success. Can anyone help me with this scenario?
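A sketch in Simple XML (token and panel names are placeholders; this relies on my understanding that an eval returning null() leaves the token unset, and that a panel with depends stays hidden while its token is unset):

```xml
<form>
  <init>
    <!-- token is set only when the dashboard runs inside App_A -->
    <eval token="is_app_a">if($env:app$=="App_A", "true", null())</eval>
  </init>
  <row>
    <panel depends="$is_app_a$">
      <!-- panel content shown only in App_A -->
    </panel>
  </row>
</form>
```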
Hi, I am using the REST API call below and am able to see results, but it is giving me duplicate values: in Splunk I see only one log, whereas in the REST API call I see the same log 3 times. Please let me know how to eliminate the duplicate values in the REST API call.

https://splunk-api-url:8089/servicesNS/nobody/appname/search/jobs/export?output_mode=json&segmentation=none&latest_time=2020-07-15T00%3A05%3A00.000&earliest_time=2020-07-15T00%3A00%3A00.000&search=|savedsearch%20savedsearchname%20|search%20Code=XXX-10-12

Note: the duplicates appear only for JSON format; other formats work fine. Let me know how to eliminate duplicate values for JSON format.
Here are the fields we want to exclude:

field                     value
action_flags              0x8000000000000000
action_source             from-application
app:is_saas               no
log_forwarding_profile    Log to Panorama
serial_number             43211001234
devicegroup_level1        14

Below are the props.conf and transforms.conf I wrote, but they are not working properly. Can you please help me with the answer?

props.conf

[sourcetype::pan:traffic]
TRANSFORMS-null=setnull

transforms.conf

[setnull]
REGEX=(\d{19})|(from-\w*)|(\d[a-z]\d{16})|(\w+\s\w+\s\w+-\w+)|([L]\w+\s\w+\s\w+)|(\d{12})
DEST_KEY=queue
FORMAT=nullQueue
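Two things stand out (hedged, since I can't test against your data). First, props.conf sourcetype stanzas don't take a sourcetype:: prefix; that class syntax is only for host:: and source::. Second, and more fundamentally, a nullQueue transform drops entire events whose _raw matches the regex, not individual fields; to strip text from events at index time you would use SEDCMD in props.conf instead. A sketch of both (the sed expression is illustrative):

```ini
# props.conf
[pan:traffic]
# drop whole events matching the setnull regex
TRANSFORMS-null = setnull
# or: strip just a field's text from _raw, keeping the event
SEDCMD-strip_action_source = s/from-application//g

# transforms.conf
[setnull]
REGEX = from-application
DEST_KEY = queue
FORMAT = nullQueue
```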
Hi all, can anyone share the best practice for setting up ingestion of CloudTrail logs with the Splunk cloud add-ons? Why do they prefer to use SQS instead of reading directly from CloudTrail? Please help me understand this.
Hi, I have JSON that looks like the following:

{
  "id": "123",
  "uri": "http://xyz.com/api",
  "method": "POST",
  "headers": [
    "Accept: application/json",
    "SERVICE.ENV: qa",
    "SERVICE.NAME: someservice",
    "CLIENT.ID: s0m3id",
    "CLIENT_TYPE: typeA",
    "CLIENT_IP:123.456.7.8"
  ],
  "cookies": [],
  "message": "Request Finished",
  "status": 200
}

Within the headers section, I want to see which CLIENT_IPs are passing other header info such as SERVICE.ENV and SERVICE.NAME. The catch is that "CLIENT_IP:123.456.7.8" sits entirely in a single pair of quotes, so it isn't being parsed as a key-value pair (as per my understanding). Please help.
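Since each header is a whole "Name: value" string inside the headers{} multivalue field, one sketch (index/sourcetype are placeholders) is to pick the wanted entries out of the array with mvfilter and strip the prefixes:

```
index=myindex sourcetype=myjson
| spath output=headers path=headers{}
| eval client_ip=replace(mvfilter(match(headers, "^CLIENT_IP:")), "^CLIENT_IP:\s*", "")
| eval service_env=replace(mvfilter(match(headers, "^SERVICE\.ENV:")), "^SERVICE\.ENV:\s*", "")
| eval service_name=replace(mvfilter(match(headers, "^SERVICE\.NAME:")), "^SERVICE\.NAME:\s*", "")
| stats values(service_env) AS service_env values(service_name) AS service_name BY client_ip
```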