All Topics


Hi, can we install a Universal Forwarder on NetScaler? The NetScaler add-on for Splunk isn't working for us, and we don't have detailed steps for setting up the add-on. Please let me know whether we can set up a Universal Forwarder on our NetScaler and forward the required logs.
Below is a sample field value from the event:

sourceServiceName=Endpoint Web analyzedBy=Policy Engine Status=New Success=True

By default, field extraction captures only "Endpoint" and not the full text "Endpoint Web". So I used the rex command below, but it didn't work:

rex "sourceServiceName=(?<sourceServiceName>[^\"]+)"

This one fetches all the text from "Endpoint" to the end of the event. Also, the value of the field sourceServiceName can be anything, e.g. "Endpoint Web", "Endpoint Printing", or "Endpoint USB": the first word is always "Endpoint", and the second word can be any word. I want to fetch only the value of the sourceServiceName field. Can you please help?
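One way to handle multi-word values in space-separated key=value text is to capture lazily up to the next "key=" token. The same PCRE pattern should work in SPL as | rex "sourceServiceName=(?<sourceServiceName>\S+(?:\s+\S+)*?)(?=\s+\w+=|$)". A sketch using Python's re module with the sample event from the post:

```python
import re

# Sample event text from the post; keys are space-separated key=value pairs.
event = ('sourceServiceName=Endpoint Web analyzedBy=Policy Engine '
         'Status=New Success=True')

# Lazily collect words after "sourceServiceName=" until the next "key=" token
# (or end of string), so multi-word values like "Endpoint Web" survive intact.
pattern = r'sourceServiceName=(?P<sourceServiceName>\S+(?:\s+\S+)*?)(?=\s+\w+=|$)'
value = re.search(pattern, event).group('sourceServiceName')
print(value)  # Endpoint Web
```

This breaks if a value itself contains a token ending in "=", so verify it against your real events.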
Dear team, I have Cloudflare data for my website and set it up as a Cloudflare index in Splunk. After it finished indexing data from S3 for the current date, three days have passed and my index hasn't been updated at all. Is there a troubleshooting process I can follow and later share here so we can find the root cause? Sincerely, Peter
I tried to log into Splunk Enterprise and was told by two web browsers, Chrome and Edge, that the website's security certificate had expired and that it could be under potential attack. Is this a generic browser message, or should it be considered a security risk?
I am a beginner with Splunk. I'm doing research on log traffic from Palo Alto, and I upload the data into Splunk. What is the most appropriate sourcetype for me to choose?
Hello, I'm using an old copy of a Windows-based run-tracking application. The mapping function no longer works, but the app can export the tracks in GPX format. I would like to ingest GPX files into Splunk and build reports, dashboards, etc. However, GPX is not a native file type for Splunk. I followed the instructions from this site: Splunk My Ride! (and my Run and Swim) Part 1, but the result was a single event. Thanks in advance. God bless, Genesius
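One workaround for the single-event result is to pre-process each GPX file into one key=value line per trackpoint before indexing, so Splunk's default line breaking yields one event per point and its automatic key=value extraction picks up the fields. A sketch using only the standard library (the inline GPX sample is illustrative, not from the original app):

```python
import xml.etree.ElementTree as ET

# A minimal inline GPX sample standing in for the app's exported files.
gpx = '''<?xml version="1.0"?>
<gpx xmlns="http://www.topografix.com/GPX/1/1">
  <trk><trkseg>
    <trkpt lat="40.7128" lon="-74.0060"><ele>10.2</ele><time>2020-10-01T12:00:00Z</time></trkpt>
    <trkpt lat="40.7130" lon="-74.0055"><ele>10.5</ele><time>2020-10-01T12:00:05Z</time></trkpt>
  </trkseg></trk>
</gpx>'''

NS = '{http://www.topografix.com/GPX/1/1}'
root = ET.fromstring(gpx)
lines = []
for pt in root.iter(NS + 'trkpt'):
    # One key=value line per trackpoint; Splunk's automatic key=value
    # extraction then provides time/lat/lon/ele fields with no extra config.
    lines.append('time={} lat={} lon={} ele={}'.format(
        pt.find(NS + 'time').text, pt.get('lat'), pt.get('lon'),
        pt.find(NS + 'ele').text))
print('\n'.join(lines))
```

Alternatively, a custom sourcetype with a LINE_BREAKER on <trkpt can split the raw XML, but the flat key=value form tends to be easier to search.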
Hi, in the logs being ingested, Splunk isn't automatically extracting the action field, so I'm trying to create one for CIM compliance and so on. When I enter the eval command in Splunk's search bar, the field appears as expected; however, when I try to save it as a calculated field, it doesn't appear at all. I'm on Splunk Cloud, so I don't have access to the .conf files.

eval command: | eval action = case(status=="200",success,status=="422",failure)
calculated field: case(status=="200",success,status=="422",failure)
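A likely cause: in SPL, the unquoted tokens success and failure are treated as field references, and when no such fields exist, case() returns null, so the calculated field never shows a value. Quoting the string literals, i.e. case(status=="200","success",status=="422","failure"), should fix it. The intended mapping, mimicked in Python for illustration:

```python
def action_for(status):
    """Return the CIM action value, or None when no branch matches,
    like SPL case() returning null without a matching condition."""
    mapping = {'200': 'success', '422': 'failure'}
    return mapping.get(status)

print(action_for('200'))  # success
print(action_for('500'))  # None
```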
I have a CIM-compliant log that includes an ssl_end_time field, and I'm having trouble getting Splunk to show me only certificates that are due to expire in the next x days. Below is my query. Any suggestions on how I can get it to show only certs that are going to expire, based on ssl_end_time, in, for example, the next 30 days?

index=* tag="certificate" ssl_is_valid!=false ssl_subject!="CN=sa*"
| dedup ssl_subject
| convert timeformat="%Y/%m/%d" ctime(ssl_end_time)
| sort +ssl_end_time
| table ssl_start_time ssl_end_time ssl_subject

The log I am getting the data from:

{timestamp="2020-10-29T04:02:49+13:00" src_host="" transport="" ssl_end_time="1745081196" ssl_engine="" ssl_hash="sha256RSA" ssl_is_valid="False" ssl_issuer="CN=Company Root Certification Authority, DC=company, DC=xx, DC=xx" ssl_serial="0000000000000000" ssl_start_time="04/19/2018 16:36:36" ssl_subject="CN=Company Enterprise Certification Authority, DC=company, DC=xx, DC=xx" ssl_subject_common_name="CN=Company Enterprise Certification Authority, DC=company, DC=xx, DC=xx" store="" logtype="certificate"}

Thanks for looking.
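Since ssl_end_time in the sample event is epoch seconds, the filter can happen before any time-format conversion; in SPL, something like | where tonumber(ssl_end_time) >= now() AND tonumber(ssl_end_time) <= relative_time(now(), "+30d") should restrict results to the next 30 days (tonumber() because the field arrives as a string). The same window filter, sketched in Python with field names taken from the post:

```python
import time

def expiring_within(certs, days=30, now=None):
    """Keep certs whose epoch ssl_end_time falls within [now, now + days]."""
    now = time.time() if now is None else now
    horizon = now + days * 86400  # 86400 seconds per day
    return [c for c in certs if now <= c['ssl_end_time'] <= horizon]

now = 1604000000  # a fixed "now" keeps the example deterministic
certs = [
    {'ssl_subject': 'CN=soon',  'ssl_end_time': now + 10 * 86400},
    {'ssl_subject': 'CN=later', 'ssl_end_time': now + 90 * 86400},
    {'ssl_subject': 'CN=past',  'ssl_end_time': now - 86400},
]
print([c['ssl_subject'] for c in expiring_within(certs, 30, now)])  # ['CN=soon']
```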
I have a Python script in an external lookup app that makes REST GET calls to a third-party endpoint requiring basic authentication (username/password). How can I make those authentication credentials editable through a graphical interface/dashboard in Splunk? This answer states that there is no way to pass authentication into external lookup scripts: https://community.splunk.com/t5/Splunk-Search/Pros-and-Cons-External-lookup-script-vs-custom-search-command/m-p/15922 I am aware that I can create a setup page (https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/setuppage/) for my app so credentials can be written into a custom .conf file in the "<app_name>/local" folder and then parsed by the Python script, but the credentials would then be readable in plaintext. Is there a way to obfuscate the credentials but still use them easily from Python?
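Splunk's supported credential store is the storage/passwords REST endpoint, but as the linked answer notes, external lookup scripts don't receive a session key to read it. If you fall back to a custom .conf file, encoding the value at least keeps it out of casual view; note this is obfuscation, not encryption, since anyone who can read the script can reverse it. A minimal sketch:

```python
import base64

def obfuscate(secret):
    """Encode a credential before writing it to a local .conf file."""
    return base64.b64encode(secret.encode()).decode()

def deobfuscate(stored):
    """Recover the credential inside the lookup script at runtime."""
    return base64.b64decode(stored.encode()).decode()

stored = obfuscate('hunter2')      # 'hunter2' is a placeholder credential
print(stored)                      # aHVudGVyMg==
print(deobfuscate(stored))         # hunter2
```

Also restrict filesystem permissions on the .conf file, since the encoding alone provides no real security.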
I am trying to create a dashboard to pull information for AUP. I typed index=*panlogs, then thought I would try to filter out action=blocked. Any suggestions on how I can form the search string correctly to get this info?
Hi all, I'm extremely new to Splunk and have been tasked to do the following: perform a query against one host (Server123) to retrieve MAC addresses, then perform a query on a second host (Server456) using the MAC addresses from the first query. I know that not all MAC addresses from query 1 will be found, but for those that are, the MAC address (whose field name differs between the two hosts), the User Name, Network Device Name, and IP Address should be put into a table and used as a report. I run the query and get the following error:

ERROR in 'stats' command: The argument 'Calling_Station_ID=(MAC addr.)'

When I run the 1st query by itself, I see that the MAC address in the error is the 1st MAC address in the 1st row.

Code:

index=* host="Server456"
| stats count by Calling_Station_ID, User_Name, DeviceName, IP
    [ search index=* host="Server123" "no free leases"
    | eval MAC_address=substr(_raw,52,18)
    | stats count by MAC_address
    | eval MAC_address=replace(MAC_address, " : ", " - ")
    | fields MAC_address
    | return Calling_Station_ID=MAC_address ]
| table Calling_Station_ID, User_Name, DeviceName, IP

Results: I'm looking for the query to use the MACs from the 1st query to search the 2nd query and, if there's a match, return the MAC (under the Calling_Station_ID field), the User_Name, DeviceName, and IP.
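The error arises because the subsearch is expanded inside the stats argument list; the usual fix is to put the subsearch in the base search, so the Calling_Station_ID=... clauses emitted by return filter events before stats runs. The underlying join-on-MAC logic, sketched in Python with hypothetical values (real MAC separators often differ between sources, hence the normalization):

```python
def join_on_mac(radius_events, dhcp_macs):
    """Keep RADIUS events whose Calling_Station_ID matches a DHCP-derived MAC."""
    # Normalize separators and case on both sides before comparing.
    wanted = {m.replace(':', '-').upper() for m in dhcp_macs}
    return [e for e in radius_events
            if e['Calling_Station_ID'].replace(':', '-').upper() in wanted]

events = [
    {'Calling_Station_ID': '00-11-22-33-44-55', 'User_Name': 'alice'},
    {'Calling_Station_ID': 'AA-BB-CC-DD-EE-FF', 'User_Name': 'bob'},
]
matches = join_on_mac(events, ['00:11:22:33:44:55'])
print(matches)  # only alice's event survives the join
```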
All, I have an index (index=config) where all I store is sourcetype=config_file. I currently use the stock config from Splunk_TA_nix. What I think is happening is that when we provision a new server, the config files often carry timestamps years old, so Splunk is creating new buckets dated years in the past. Is that a reasonable assumption? Looking at btool, I'm thinking all I need to do is set MAX_DAYS_AGO = 1 (down from 2000). Here is the error message I am seeing:

The percentage of small buckets (100%) created over the last hour is high and exceeded the red threshold (50%) for index=configs, and possibly more indexes, on this indexer. At the time this alert fired, total buckets created=4, small buckets=4

Any other input or best practices around indexing config files?
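For config-file monitoring, another common approach is to skip timestamp parsing entirely and stamp events with index time, which sidesteps old-dated buckets altogether. A sketch of a props.conf override (the config_file sourcetype name comes from Splunk_TA_nix; verify the exact stanza name with btool before deploying):

```
[config_file]
# Stamp events with the time they are indexed rather than
# dates found inside the monitored file
DATETIME_CONFIG = CURRENT
```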
My pipeline is: Kerberized Kafka --> Logstash (hosted on a different server) --> Splunk. Can I replace the Logstash component with Kafka Connect? From the documentation, what I understood is that this is quite possible if Kafka Connect is hosted on the same cluster as Kafka. But I don't have that option right now, as our Kafka cluster is multi-tenant and hence not approved for running additional processes.
Hi, community mates! Regarding user details: how is user data maintained in the application during each user session, e.g. in the session, a cache, or cookies?
I have a table within a dashboard that compares event counts from a customer site to the event counts back here at corporate. The goal is to easily identify whether corporate is missing any event data from the customer site. Here is a screenshot of the table: I want to highlight any cells where Missing=Yes so it visually pops out to anyone viewing the dashboard. I found this post How-do-you-change-the-cell-color-based-on-partial-value, but that solution is for columns whose names remain the same. In my table, the column names change each day, because the column names are the dates being displayed. Is there a way to color a cell based on a condition when the column names are dynamic? I should also mention that I am a beginner in Splunk and have no experience with JavaScript or anything else besides Simple XML. Thanks in advance for any suggestions.
I'm trying to restart Splunk via a script. Everything in the script works fine, but when the restart happens, the script dies at that point too. How can I have Splunk start the script on startup and, when it does the restart, keep the script running and let it complete on its own (on both Windows and Linux)?
Hi, I'm looking for some insight into the trade-offs (if any) between using stdout and the '/services/receivers/simple' REST API for scripted input, particularly in relation to different deployment options (such as a separate forwarder, etc.). Thanks.
Hi everyone, I've currently deployed the following instances in my Splunk infrastructure using Splunk 8.1.0:

- 1 Search Head
- 1 Cluster Master
- 2 Indexers in a cluster
- 2 Heavy Forwarders

Everything seems to work fine except for the Cluster Master. Since I added the 2 Indexers to the cluster, the following messages are repeated in splunkd.log on the Cluster Master system:

10-28-2020 16:59:54.703 +0100 WARN Fixup - GenCommitFixup::finish error in scheduler sendQueued=
10-28-2020 16:59:54.704 +0100 WARN CMMaster - Unable to send scheduled jobs, err=""
10-28-2020 16:59:55.202 +0100 DEBUG CMMaster - event=serviceHeartbeats size=1
10-28-2020 16:59:55.202 +0100 DEBUG CMMaster - event=setPeerStatus Skipping since peer=AAAAA peer_name=splunk-indexer-2 is already in status=Up reason=heartbeat received.
10-28-2020 16:59:55.202 +0100 DEBUG CMMaster - event=serviceRecreateIndexJobs No indexes to be recreated
10-28-2020 16:59:55.203 +0100 DEBUG CMMaster - event=serviceRecreateBucketJobs No buckets to be recreated
10-28-2020 16:59:55.203 +0100 WARN Fixup - GenCommitFixup::finish error in scheduler sendQueued=
10-28-2020 16:59:55.203 +0100 WARN CMMaster - Unable to send scheduled jobs, err=""
10-28-2020 16:59:55.703 +0100 DEBUG CMMaster - event=serviceHeartbeats size=1
10-28-2020 16:59:55.704 +0100 DEBUG CMMaster - event=setPeerStatus Skipping since peer=BBBBB peer_name=splunk-indexer-1 is already in status=Up reason=heartbeat received.
10-28-2020 16:59:55.704 +0100 DEBUG CMMaster - event=serviceRecreateIndexJobs No indexes to be recreated
10-28-2020 16:59:55.704 +0100 DEBUG CMMaster - event=serviceRecreateBucketJobs No buckets to be recreated

Can you help me with this issue? Other useful information that may help:

- NTP and the Splunk license are not yet installed.
- No errors are shown in splunkd.log on the remaining components.
- No errors are shown in the Cluster Master web GUI.
- Indexer discovery is enabled.
- Internal logs are forwarded to the Indexers.
- The btool check utility didn't find any errors.
- Each Indexer has indexes.conf with repFactor = auto under [main].
- The Monitoring Console is enabled on the Cluster Master.

Cluster Master server.conf:

[general]
serverName = splunk-clustermaster-1
pass4SymmKey = pass

[sslConfig]
sslPassword = pass
enableSplunkdSSL = true
serverCert = /opt/splunk/etc/auth/server.pem
sslRootCAPath = /opt/splunk/etc/auth/cacert.pem

[lmpool:auto_generated_pool_download-trial]
description = auto_generated_pool_download-trial
quota = MAX
slaves = *
stack_id = download-trial

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
quota = MAX
slaves = *
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
quota = MAX
slaves = *
stack_id = free

[clustering]
cluster_label = cluster
mode = master
pass4SymmKey = pass
replication_factor = 2

[indexer_discovery]
pass4SymmKey = pass

Indexers' server.conf:

[general]
serverName = splunk-indexer-1
pass4SymmKey = pass
parallelIngestionPipelines = 2
pipelineSetSelectionPolicy = weighted_random

[sslConfig]
enableSplunkdSSL = true
sslPassword = pass
sslRootCAPath = /opt/splunk/etc/auth/cacert.pem
serverCert = /opt/splunk/etc/auth/server.pem

[lmpool:auto_generated_pool_download-trial]
description = auto_generated_pool_download-trial
quota = MAX
slaves = *
stack_id = download-trial

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
quota = MAX
slaves = *
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
quota = MAX
slaves = *
stack_id = free

[replication_port-ssl://8080]
serverCert = /opt/splunk/etc/auth/server.pem
sslPassword = password

[clustering]
master_uri = https://clustermaster:8089
mode = slave
pass4SymmKey = pass

Thank you!
So we have a client who already has a Splunk deployment. They are not keen on using the universal forwarder to send us certain logs while their deployment also receives every index they report on. The suggestion we got was to use a heavy forwarder to send all indexes to them and have the heavy forwarder send just the two or three indexes we are looking for to us. Most heavy forwarder write-ups, though, use an index-and-forward method, but the client wants to use their own indexers and not have the data effectively indexed twice. Another issue is that the client already sends everything to its own index (i.e., security logs = security index, application = application index, etc.). Any assistance would be greatly appreciated.
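Since the data already lands in per-category indexes, this routing can often be done on the heavy forwarder with tcpout groups plus a transform keyed on the index name, with no local indexing. The group names, hosts, and index names below are placeholders; treat this as a sketch to adapt and test, not a drop-in config:

```
# outputs.conf on the heavy forwarder
[tcpout]
defaultGroup = client_indexers
indexAndForward = false

[tcpout:client_indexers]
server = client-idx1:9997,client-idx2:9997

[tcpout:our_indexers]
server = our-idx1:9997

# props.conf
[default]
TRANSFORMS-route = route_to_us

# transforms.conf
[route_to_us]
SOURCE_KEY = _MetaData:Index
REGEX = ^(security|application)$
DEST_KEY = _TCP_ROUTING
FORMAT = client_indexers,our_indexers
```

Events whose index matches the regex are cloned to both output groups; everything else follows defaultGroup to the client only.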
Is anyone aware of the license cost (per year) for the Splunk ITSI module, and whether there are any maintenance costs as well?