All Topics

I am a newbie to Splunk and need to configure Palo Alto log ingestion into Splunk; I'm looking for corrective action. What I have done so far:

1. Allowed port 1514 for rsyslog: sudo semanage port -a -t syslogd_port_t -p tcp 1514
2. Ran the firewall command to allow port 1514: sudo firewall-cmd --zone=public --permanent --add-port=1514/tcp
3. On the deployment server, in serverclass.conf, I created the app name, enabled it, and reloaded the deployment server, just like the other appliance apps such as Barracuda and Cisco.
4. Installed Splunk_TA_paloalto on the heavy forwarder. The UI configuration is empty, as I don't see any information that can be added. After reading a few blogs I came up with the inputs.conf stanzas below (version 1 and version 2).

Outcome: Logs are being ingested into the Cisco index, because the Cisco input is monitoring the file path /*.log, where I have provided stanza version 1 (not sure if it is working fine; please note the log folder names are in capitals, e.g. /var/log/remote/ABC-FW01-DOMAIN.COM/1,2022LOG). Logs are going to the cloud from the remote folder, but not through the Palo Alto app, so the cloud-based PA app won't be able to read them.

Please guide me on the correction.

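For reference, a minimal sketch of the kind of monitor stanza such a setup typically uses; the path, index name, and sourcetype below are illustrative assumptions rather than the actual values from this deployment (pan:log is commonly the sourcetype the Palo Alto add-on expects for syslog-collected firewall files, but verify against the add-on documentation):

    [monitor:///var/log/remote/*/*.log]
    disabled = 0
    sourcetype = pan:log
    index = pan_logs
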
Hello - thank you in advance for the help. I am getting the following raw data in Splunk events, which I'd like to pull into a table format. I would like the Host, Success, and Error fields as columns for my table.

I tried this query but had no success:

    | makeresults
    | eval _raw="{\"host\"},{\"success\",{\"error\"}"
    | spath path=host{} output=temp
    | mvexpand temp
    | spath input=temp
    | fillnull value="None"
    | table host,success,error

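A minimal sketch of how spath behaves when the event is well-formed JSON; the sample event below assumes host, success, and error are top-level keys, which may not match the actual data:

    | makeresults
    | eval _raw="{\"host\":\"web01\",\"success\":10,\"error\":2}"
    | spath
    | table host, success, error
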
Hi, I have a timechart that is currently split into 8-hour shift bins. However, because it is a timechart, the x-axis only shows the timestamps, while I would like the bins to be labeled by their shifts.

Sample data:

Count | Time | Shift
500 | 0700 (Yesterday) | Shift A
750 | 1500 (Yesterday) | Shift B
500 | 2300 (Yesterday) | Shift C
700 | 0700 (Today) | Shift A
600 | 1500 (Today) | Shift B

One tricky part is that Shift C overlaps two dates as well. Any ideas on how to define the time range for the shifts, then split and label the bins by their shifts, would be greatly appreciated. Thanks!

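A minimal SPL sketch of one way to label shifts, assuming shifts start at 07:00, 15:00, and 23:00 (inferred from the sample) and that Shift C is attributed to the date it starts on:

    | eval hour=tonumber(strftime(_time, "%H"))
    | eval shift=case(hour>=7 AND hour<15, "Shift A", hour>=15 AND hour<23, "Shift B", true(), "Shift C")
    | eval shift_start=if(hour<7, relative_time(_time, "-1d@d"), relative_time(_time, "@d"))
    | eval shift_label=strftime(shift_start, "%d %b")." ".shift
    | stats count by shift_label

If the events are already aggregated, the final stats count would instead sum the existing count field.
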
Hi everyone. I am not sure this is the right place to post this, but I figured an introduction wasn't a bad place to start.

I just graduated from a local technical college with degrees in Web Development and Cybersecurity. As a security intern with the college's technology services, I ended up using Splunk quite a bit. However, I always gravitated more towards development and coding than the typical security work, so my supervisors came up with a development project that could benefit them and let me develop something as my final project.

The result was TenaPull, a Java application that processes data from the Nessus API and outputs it into JSON files that can be ingested by a Splunk index. https://github.com/billyJoePiano/TenaPull

(It's my understanding that there used to be a Python 2 script which did this, but the script was deprecated and no longer works. I did briefly examine the script when I started, but didn't dig very deep into it.)

I am interested in hearing about your experience with it, and any issues or problems you may have encountered using it. I am definitely open to making changes and improvements if there is demand for that. Also, if there is a better place to post this information, please let me know as well! I'd love to see more people using TenaPull.

Hello Splunkers, with most applications, inputs and outputs are handled by their respectively named config files (inputs.conf and outputs.conf). This brings some advantages, namely for this issue: _TCP_ROUTING = <targetgroup>.

However, with the GCP Add-on, storage bucket inputs are contained in google_cloud_storage_buckets.conf. I tried adding _TCP_ROUTING in there, but according to the conf validation it does not work.

We have a HF that is sending all data to an on-prem Splunk instance, with some data being routed to a UF for delivery to Splunk Cloud. We have a few inputs on that HF that can be routed correctly using _TCP_ROUTING. Unfortunately, for the GCP add-on, I can't think of a way to do it. Any advice? Thanks!

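One approach that might apply here is routing at parse time on the HF with props/transforms instead of a per-input setting. A sketch under assumptions: the sourcetype google:gcp:buckets:data and the target group name splunkcloud_group are placeholders, and the target group must already be defined in outputs.conf.

props.conf:

    [google:gcp:buckets:data]
    TRANSFORMS-route_gcp = route_gcp_to_cloud

transforms.conf:

    [route_gcp_to_cloud]
    REGEX = .
    DEST_KEY = _TCP_ROUTING
    FORMAT = splunkcloud_group
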
I've been asked to find historical index volume information, going back 5 years, to make projections for future infrastructure and license needs. _internal is of no use, because it's cleared after 30 days. We track disk space, and I can find the disk space info for the cold buckets on the indexer, but those are set to roll off after 60 days, so that's out as well.

I understand that anything like this would come out slightly lower than the actual figure, since several indexes' data would already have rolled off, but I'm just trying to find a rough baseline to track program growth. This is pretty far beyond my SPL abilities, so I would be grateful for any help!

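A sketch of one way to get a rough per-month baseline from whatever buckets are still on disk; it cannot see data that has already rolled off, so it only establishes recent growth:

    | dbinspect index=*
    | eval month=strftime(startEpoch, "%Y-%m")
    | stats sum(sizeOnDiskMB) as disk_mb by index, month
    | sort index, month
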
Splunk 8.2.5 Enterprise receiver and indexer operating on the same RHEL 7.9 system. How do I ingest the Linux audit logs from this system into Splunk? Do I need to install a Universal Forwarder like I did on my other/external systems? I have dashboards created and I'm receiving Linux audit events from my other/external systems, but nothing from the receiver/indexer system itself.

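For what it's worth, a full Splunk Enterprise instance can monitor its own local files, so a local inputs.conf stanza is usually enough. A sketch, with assumptions: the index name is an example, linux_audit is the sourcetype the Splunk Add-on for Unix and Linux typically uses, and the splunk user needs read permission on the audit log (which is normally root-only):

    [monitor:///var/log/audit/audit.log]
    disabled = 0
    sourcetype = linux_audit
    index = os_linux
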
Hello all, I'm finding the default indexes.conf settings too small, making various sourcetypes only searchable back about 4 months, but I need a year's worth of searchable data. I've found numerous Splunk posts on indexes.conf stanzas and settings, one more confusing than the next:

How the indexer stores indexes - Splunk Documentation
Configure index storage - Splunk Documentation
https://wiki.splunk.com/Deploy:BucketRotationAndRetention

I'm afraid I need an "explain it to me like I'm 4 years old" post. What calculator or tool should I use, and which stanzas do I set, to effectively:

A) get search visibility into logs older than a few months
B) no longer roll buckets into frozen (which by default seems to mean deleted) but into an archive, to make it easy to restore them when A) isn't as dialed in as I thought.

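A sketch of the indexes.conf settings that usually govern this, with illustrative values only (the index name, size cap, and archive path are examples; buckets freeze when either the time limit or the size limit is hit first):

    [my_index]
    # keep events for roughly 1 year (in seconds) before they roll to frozen
    frozenTimePeriodInSecs = 31536000
    # overall size cap for the index; hitting this also freezes the oldest buckets
    maxTotalDataSizeMB = 500000
    # copy frozen buckets here instead of deleting them
    coldToFrozenDir = /opt/splunk/archive/my_index
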
Hello experts, we have recently upgraded to 8.2.3. We did it in two phases: first from 7.x to 8.1.0, and then from 8.1.0 to 8.2.3. After the upgrade to 8.2.3 we are getting some errors. In the health status bar, we see errors related to buckets and IOWait, and the indexers appear among the unhealthy instances. Also, during any search we get the below errors for all the indexer peers. Can anyone suggest how to resolve these issues? Thanks in advance.

Hello, I've just installed the Splunk Add-on for Microsoft Windows, and I will be collecting data from UFs that forward first to a HF and then to an indexing cluster. The app will be deployed to multiple UFs via the deployment server. I only want to collect data from the machines the UFs are installed on.

I see no way to specify within inputs.conf which index to send the data to. I've read the documentation but I still don't understand how. I've even found this post which discusses the same topic, but it doesn't really give me an answer that I understand (it sends me to documentation for an older version of the add-on). Could somebody please give me a push in the right direction? Thank you and best regards, Andrew

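For reference, the usual pattern is a local override of the stanzas the add-on enables, adding an index setting. A sketch, placed in Splunk_TA_windows/local/inputs.conf on the UFs (the index name is an example, and the same override would be repeated for the Application and System stanzas):

    [WinEventLog://Security]
    disabled = 0
    index = wineventlog
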
I created a custom alert action, but btool is flagging it as wrong. The script is in /opt/splunk/etc/apps/<app>/bin  
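
For comparison, a minimal alert_actions.conf stanza of the shape btool generally accepts, assuming the script in bin/ is named to match the stanza (all names below are placeholders, not the actual app's values):

    [my_alert_action]
    is_custom = 1
    label = My Alert Action
    description = Example custom alert action
    payload_format = json

With this stanza, the script in /opt/splunk/etc/apps/<app>/bin would be expected to be named my_alert_action.py.
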
The "Splunk Add-on for NetApp Data ONTAP" is only fetching performance information for the first 50 volumes on a cluster.  Changing the "perf_chunk_size_cluster_mode" value in ta_ontap_collection.con... See more...
The "Splunk Add-on for NetApp Data ONTAP" is only fetching performance information for the first 50 volumes on a cluster.  Changing the "perf_chunk_size_cluster_mode" value in ta_ontap_collection.conf will vary the number -- if I set it to 53, I'll get performance data for the first 53 alphabetic volumes on the cluster.   You can't set this arbitrarily large, if I put it to 10000, I get a failure on the data collection.   The chunking mechanism is part of OntapPerf.py, and normally would iterate over multiple queries until it collected data for all volumes.  This worked for years, but has been broken for several months now.  May or may not align with our upgrade to Splunk Enterprise 8.2.2 and Python 3. Add-on is latest version (3.0.2).  Filers are running ONTAP 9.7.  I went back through the install manual and verified all the steps and add-on data.  Inventory/capacity information works without issue, it is just performance metrics that are a problem. OntapPerf.py throws log warnings "No instances found for object type volume"... from line 461 in the script.   It seems like the "next_tag" mechanism in the script is failing, but I can't work out how to run OntapPerf.py from command line, I don't know how to troubleshoot any further. Splunk_TA_ontap shows as "unsupported" and developed by "Splunk Works".   Last release was June 2021.  I could really use some pointers on how to resolve this, or how I could move forward troubleshooting it myself.     
I have a field with the following values. How can I calculate the product, i.e. multiply all the values with each other? There is a sum function but no multiplication. The output should be 0.1*0.03*0.34*0.32. Thanks.

0.1
0.03
0.34
0.32

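Since stats has no product aggregation, one common workaround is to sum the logarithms and then exponentiate. A sketch that reproduces the sample values with makeresults (in real use, the stats line would run over your own field):

    | makeresults count=4
    | streamstats count as n
    | eval value=case(n=1, 0.1, n=2, 0.03, n=3, 0.34, n=4, 0.32)
    | stats sum(eval(ln(value))) as log_sum
    | eval product=exp(log_sum)
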
Hi all, I'm new to Splunk. Please let me know if it is possible to integrate with a SaaS application which is hosted on a third-party cloud platform (Tencent Cloud). We have never used Splunk to integrate a SaaS application for logging, so I have no idea how Splunk gathers log files over the internet. Are there any APIs or guidance you can share for reference? Thanks in advance.

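One common mechanism for this kind of integration is the HTTP Event Collector (HEC), where the application pushes events to Splunk over HTTPS. A sketch of a test call (the host, port, token, and sourcetype are placeholders):

    curl -k "https://<your-splunk-host>:8088/services/collector/event" \
         -H "Authorization: Splunk <your-hec-token>" \
         -d '{"event": "test message from SaaS app", "sourcetype": "saas:app"}'
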
Hey everyone, I'm currently making a report for my team that requires two x-axis levels, based on the Excel sheet shared with me. Below are some screenshots of the desired output and my progress so far, and my search query based on what I have learned so far.

The goal:

What I am familiar with, using chart in the search:

My search string:

    | eval DATE=strftime(strptime(DATE,"%d%b%Y"),"%Y-%m-%d")
    | eval _time=strptime(DATE." ","%Y-%m-%d")
    | where _time >= strptime("$from$", "%m/%d/%Y") AND _time <= strptime("$to$", "%m/%d/%Y")
    | eval epochtime=strptime(TIME, "%H:%M:%S")
    | eval desired_time=strftime(epochtime, "%I:%M:%S %p")
    | chart sum(VIO_PAGING_SEC) as "$lpar$ Sum of VIO_PAGING_SEC" sum(SYSTEM_PAGEFAULTS_SEC) as "$lpar$ SYSTEM_PAGEFAULTS_SEC" sum(SWAP_PAGIN_SEC) as "$lpar$ SWAP_PAGIN_SEC" sum(LOCAL_PAGEFAULTS_SEC) as "$lpar$ LOCAL_PAGEFAULTS_SEC" over desired_time

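Since a Splunk chart has a single category axis, one workaround is to fold the date and the time into a single label. A sketch of the idea, using the field names from the search above and replacing the final chart clause:

    | eval axis=DATE." ".desired_time
    | chart sum(VIO_PAGING_SEC) as "Sum of VIO_PAGING_SEC" over axis
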
Hello, I need help to understand how I can install the Nodejs API Agent.    Thanks.

Hi, I seem to be stuck with something pretty trivial. I have events with users and corresponding hostnames, e.g.:

User | Hostname
user1 | hostA
user1 | hostB
user2 | hostA
user2 | hostC
user3 | hostD

I want to count unique user-hostname values and show the contributing hostnames for users that have used more than 1 hostname, like this:

User | Hostnames used
user1 | hostA hostB
user2 | hostA hostC

This seems to take care of the first part of the task:

    | stats dc(Hostname) as uh by User
    | search uh > 1

How can I add the contributing hostnames? Formatting is not so important - it may be one field with all the hostnames like in the example above, or multiple fields, or one field together with the User field. Thank you.

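A sketch of one way to keep the distinct count and collect the hostnames in the same pass, using the field names from the example above:

    | stats dc(Hostname) as uh, values(Hostname) as "Hostnames used" by User
    | where uh > 1
    | fields - uh
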
I have been trying for 16 hours now to get Splunk to send an email to a development mail server, to test mail notifications from a custom Python script. Now that I've put a fair amount of time into it, I am almost there: I actually get smtplib to contact the dev mail server, which does not require SSL or TLS, and if I hardcode the password the email gets through.

But since I would like to use the email settings from Splunk, I retrieve auth_username and clear_password from the /services/configs/conf-alert_actions/email REST endpoint. Fortunately, Splunk takes security very seriously and handles this information carefully, as it should. But this means that in clear_password I only get the SHA256 hash of the password, which makes my poor little dev mail server go "boohoo! wrong credentials."

My question is: is there a way to send an email from my custom Python script the way the sendemail.py script does it? (The standard alerts from Splunk can already be received by my dev mail server.)

P.S.: I looked up the script, but the credentials seem to be passed as parameters, which again is a very nice and secure way to handle sensitive information, but it leaves me with no other option but to ask.

Hi, I have a dashboard where one of the panels is generated from a | loadjob command which produces a table. This part works as normal. Now I need to add an extra column to the panel's table, and as a result I need to incorporate another index into the panel's query. So the panel's query has both the | loadjob and an appendcols piece that searches an index. The extra clause is as follows:

    | appendcols
        [ search index=fraud_glassbox sourcetype="gb:sessions"
          | stats count as email_count by Global_EmailID_CSH
          | eval score_email = case(email_count<2, 40, 1=1, 100) ]

The issue is that my dashboard SIGNIFICANTLY slows down in how the results are generated. Why would this be? Would I need an alternative to the appendcols command? Many thanks,

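The slowdown most likely comes from the appendcols subsearch itself, which now scans the fraud_glassbox index over the panel's full time range on every load, on top of the cached loadjob results; narrowing its earliest/latest or precomputing it as another saved search usually helps. If the loadjob results already contain Global_EmailID_CSH, a key-based join is also usually safer than appendcols, which simply pastes rows side by side. A sketch (the savedsearch reference is a placeholder):

    | loadjob savedsearch="user:app:saved_search_name"
    | join type=left Global_EmailID_CSH
        [ search index=fraud_glassbox sourcetype="gb:sessions"
          | stats count as email_count by Global_EmailID_CSH
          | eval score_email=case(email_count<2, 40, 1=1, 100) ]
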
Hello, I have this query:

    | mstats avg(_value) as packets WHERE index=metrics_index sourcetype=network_metrics (metric_name=*.out) ((metric_name="InterfaceEthernetA.*" OR metric_name="InterfaceEthernetB.*") AND (host="hostA" OR host="hostB")) span=1m by metric_name,host
    | rex field=metric_name ".*InterfaceEthernet(?<mn>\d_\d*)"
    | eval kbits=packets*8/1000
    | timechart span=30m sum(kbits) by mn

It returns results split into one column per mn value. From those results, I would like to compute another column, for example: (ColumnA - ColumnB) / ColumnA * 100. How could I do that?

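A sketch of one approach, assuming the two timechart columns come out named ColumnA and ColumnB (substitute whatever column names the by mn split actually produces; single quotes let eval reference field names that are not plain identifiers):

    | timechart span=30m sum(kbits) by mn
    | eval pct=round(('ColumnA' - 'ColumnB') / 'ColumnA' * 100, 2)
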