All Topics

I'm trying to install a few apps in Splunk. The dialog says, "Enter your Splunk.com username and password to download the app." I use the same username/password combination I use for splunk.com, but I keep getting the error message "Incorrect username or password".
Hello, I have to migrate from an old standalone Splunk instance (version 6.4) to a new one. Is it possible to reuse the license from the old production instance on the new one? Is there anything I need to pay attention to? Thank you in advance!
I am switching from local auth to SAML authentication, and after logging in the username is now a random string. How do I get it to be the "nickname" or friendly name provided in the SAML response? Is there a way to override the field in the SAML stanza of authentication.conf? Changing the realName field in the authenticationResponseAttrMap_SAML stanza of authentication.conf doesn't actually change the username. If this is not possible, how would I transfer knowledge objects to the "new" users?
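Not an authoritative fix, but for reference, a sketch of the attribute-mapping stanza with illustrative IdP attribute names. Note that the login username itself is generally derived from the SAML subject NameID, which is controlled on the IdP side; that would explain why remapping realName alone doesn't change it.

```
[authenticationResponseAttrMap_SAML]
# Right-hand-side names must match what your IdP actually sends
# (the names here are illustrative).
role = groups
mail = email
realName = nickname
```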
I am trying to fine-tune the "Suspicious Event Log Service Behaviour" use case. Below is the rule logic:

(`wineventlog_security` EventCode=1100)
| stats count min(_time) as firstTime max(_time) as lastTime by dest Message EventCode
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| `suspicious_event_log_service_behavior_filter`
| collect index=asx sourcetype=asx marker="mitre_id=T1070.001, execution_type=adhoc, execution_time=1637664004.675815"

The rule is currently too noisy. Is it possible to allow a 5-minute window between the stop-logging and start-logging events, so that if logging restarts within 5 minutes the alert is ignored? Alternatively, I have seen a field named dvc_priority; can we restrict the alerts to high or critical only? Please help me with the query.
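In SPL this kind of pairing is usually done with `transaction` or `streamstats` across the stop/start events; the suppression rule itself can be sketched in Python (event times and the 5-minute grace window are illustrative, matching the question's intent):

```python
# Suppress a "logging stopped" alert when logging starts again within
# a grace window. Timestamps are epoch seconds; values are illustrative.
GRACE_SECONDS = 5 * 60

def should_alert(stop_time, start_times):
    """Alert only if no restart happened within GRACE_SECONDS of the stop."""
    return not any(0 <= s - stop_time <= GRACE_SECONDS for s in start_times)

# Restarted 200s after the stop: inside the window, alert suppressed.
print(should_alert(1000, [1200]))
# Restarted 1000s after the stop: outside the window, alert fires.
print(should_alert(1000, [2000]))
```

The same threshold could then be expressed in SPL as a `where` clause on the time difference between the paired events.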
"The server hosting Splunk Enterprise does not have unrestricted internet access, for security reasons. We need to install and update Splunk Enterprise Security, but I would like to know which FQDNs or IPs it needs to communicate with to obtain updates. This information is needed so we can add those destinations to the firewall, so that the communication is not blocked and updates can proceed without problems."
I have the Hyper-V add-on installed and configured on the servers hosting Hyper-V VMs, but McAfee (Trellix) Endpoint Security is blocking the creation of executable files within the Windows directory. It appears a DLL is created by PowerShell.exe as part of the add-on, and the 'Access Protection' component of McAfee sees this as a threat and blocks it. If I disable Access Protection or add PowerShell.exe to the exclusion list within McAfee, the add-on creates a tmp file (but no visible DLL) and the configured logs are available within Splunk Enterprise. I do not want to do either of these things with McAfee; I would prefer to change the location used by the Hyper-V add-on to somewhere outside the Windows directory, so that it would not be considered a threat. Is this possible, or is there a better way?
We have recently configured this app on our heavy forwarder and are hitting this error: "('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))". Our Splunk environment is on-prem and the endpoint starts with https. What could the issue be? Thanks
I have a web app that makes about 90 API calls to one domain and another 50 or so API calls to a different domain. I would like to have metrics for both, but it is becoming too cluttered. I would like the calls to the second domain to go into an application container of their own, instead of all the API calls going into the same application container in EUM. Is this possible? Thanks, Greg
I'm not very experienced with Splunk, but I've been asked to set up syslog forwarding from our UPSes to our Splunk server. I've configured it with the default settings and pointed it towards our syslog server on the default syslog port. Test logs of any severity go through without issue, but I am unable to see any other type of log.

NMC: AP9641
Syslog settings on the device:
Port: 514
Protocol: UDP
Message Generation: Enabled
Facility Code: User (I've tried all the other options but was still unable to see any logs)
Severity Mapping:
Critical: Critical
Warning: Warning
Informational: Informational
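If the receiving side also needs checking, a minimal sketch of a UDP syslog input in inputs.conf on the Splunk instance doing the listening (the index name is illustrative; the stanza settings are standard inputs.conf options):

```
[udp://514]
sourcetype = syslog
index = ups_syslog
# Record the sending device's IP as the host field
connection_host = ip
```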
Hello, how do I create a bar chart using two fields while keeping all fields in the statistics table? The column chart below was created automatically. My intention is to create a report, emailed periodically, with all the fields, but with the column chart showing only two of them. If I use the table command to show only Name and GPA, the chart looks right, but the rest of the fields are removed. Please suggest. Thanks

StudentID  Name      GPA  Percentile  Email
101        Student1  4    100%        Student1@email.com
102        Student2  3    90%         Student2@email.com
103        Student3  2    70%         Student3@email.com
104        Student4  1    40%         Student4@email.com

| makeresults format=csv data="StudentID,Name,GPA,Percentile,Email
101,Student1,4,100%,Student1@email.com
102,Student2,3,90%,Student2@email.com
103,Student3,2,70%,Student3@email.com
104,Student4,1,40%,Student4@email.com"

[Current graph and expected result screenshots omitted]
Hello, I am brand new to Splunk. After watching a short tutorial to get started, I saw that Settings => Data Inputs => Local Event Log Collection does not appear in my version of Splunk Enterprise. I have it on macOS Monterey and it seems to work fine, but I know most people use it on Windows. Can someone help me find how to log local events in Splunk for Mac? Thank you for your help. Noé
Hi all, I have a sample JSON event in Splunk as below. Please help me understand how I can parse the custom_tags value from it. It may contain multiple key/value pairs.

{
  account: xyz,
  eventdate: 01/25/2024,
  properties: {
    version: 1.0,
    requestID: cvv,
    response: {"statusCode":"200", "result":"{\"run_id\":465253,\"custom_tags\":{\"jobname\":\"xyz\",\"domain\":\"bgg\"}}"},
    time: 12:55
  }
}
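In Splunk you would typically reach for `spath` (and then `spath input=result` for the embedded part); the core difficulty is that `result` is a JSON string embedded inside JSON, so it needs a second decode. A minimal Python sketch of that two-step parse, using the sample values from the question and assuming the event is valid JSON on the wire:

```python
import json

# The "result" field arrives as an escaped JSON string inside the
# outer JSON document, so it must be decoded a second time.
event = {
    "response": {
        "statusCode": "200",
        "result": "{\"run_id\":465253,\"custom_tags\":{\"jobname\":\"xyz\",\"domain\":\"bgg\"}}",
    }
}

inner = json.loads(event["response"]["result"])
custom_tags = inner["custom_tags"]  # may hold any number of key/value pairs

for key, value in custom_tags.items():
    print(f"{key}={value}")
```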
Hi team, I tried executing the cluster-merge-buckets command on the Cluster Manager and got the following error / exception:

Command:
./splunk cluster-merge-buckets -index-name _audit -startdate 2020/01/01 -enddate 2024/01/24 -max-count 1000 -min-size 1 -max-total-size 1024

Output:
Using the following config: -max-count=1000 -min-size=1 -max-size=1000 -max-timespan=7776000
Dryrun has started. merge_txn_id=1706209703.24
[...] peer=IDX01 processStatus=Merge_Done totalBucketsToMerge=28 mergedBuckets=28 bucketsUnableToMerge=0 createdBuckets=1 sizeOfMergedBuckets=868MB progress=100.0%
[...] peer=IDX02 processStatus=Merge_Done totalBucketsToMerge=23 mergedBuckets=23 bucketsUnableToMerge=0 createdBuckets=1 sizeOfMergedBuckets=718MB progress=100.0%
progress=100.0% peers=2 completedPeers=2 failedPeers=0 totalBucketsToMerge=51 mergedBuckets=51 bucketsUnableToMerge=0 createdBuckets=2 totalSizeOfMergedBuckets=1586MB (Additional space required for localizing S2 buckets up to the equivalent of sizeOfMergedBuckets for each peer)

Has anyone experienced the same, or can someone help me with a resolution?
I have events with an array field named "tags". The tags array has two fields for each array object, named "name" and "type". I reference this array as tags{}.name. The values being returned for one event are:

name, type
Dept_Finance, Custom
Asset_Workstation, Custom

My goal is to count the events by tags starting with "Dept_".

(index="index_name") | dedup id | stats count by tags{}.name

This returns the correct count of tags for "Dept_", but it also includes all the other tags that do not begin with "Dept_". The Asset_Workstation tag is attached to this event, but I don't want it to appear in the output. How can I pull records with multiple tags but exclude all tags not beginning with "Dept_" from the output? I know this is probably an easy thing to do, but I'm still learning SPL. Thanks for your help.
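In SPL the usual approach is to expand the multivalue field and filter on the prefix before counting (e.g. `mvexpand` on tags{}.name followed by a `where like(...)` or `search`). The intended counting logic, sketched in Python with sample data modeled on the question:

```python
from collections import Counter

# Each event carries several tag names; only tags starting with
# "Dept_" should be counted. Events here are illustrative.
events = [
    {"id": 1, "tags": ["Dept_Finance", "Asset_Workstation"]},
    {"id": 2, "tags": ["Dept_Finance", "Asset_Laptop"]},
    {"id": 3, "tags": ["Dept_HR"]},
]

dept_counts = Counter(
    tag
    for event in events
    for tag in event["tags"]
    if tag.startswith("Dept_")
)
print(dict(dept_counts))  # Asset_* tags are excluded entirely
```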
Hey everyone, I'm stumped trying to put together a query to find specific hosts that return some value but not some other possible values over a given timeframe, where each result is itself a separate log entry (and each device returns multiple results each time it does an operation). E.g., given a list of possible results, the data itself looks something like this:     (results from today:) hostname=x result=2 hostname=x result=3 hostname=y result=1 hostname=z result=1 (results from yesterday/previous days:) hostname=x result=1 hostname=y result=1 hostname=z result=1     and I need to find all hostnames that had a result of "1" but also not results "2" or "3" over some given timeframe. So, from the data above, I'd be looking to return hostnames "y" and "z", but not "x". Unfortunately, the timeframe would be weeks, and would be looking at many thousands of possible hostnames. The only data point I'd know ahead of time would be the list of possible results (it'd only be a handful of possibilities, but a device can potentially return some/all of them at once). Any advice on where to start? Thanks!
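One common SPL shape for this is `stats values(result) as results by hostname` followed by a `where` clause checking membership, though multivalue comparisons need care. The underlying set logic, sketched in Python with the sample data from the question:

```python
# Collect the set of results seen per host over the whole timeframe,
# then keep hosts that saw result 1 but never saw 2 or 3.
logs = [
    ("x", 2), ("x", 3), ("y", 1), ("z", 1),  # today
    ("x", 1), ("y", 1), ("z", 1),            # previous days
]

seen = {}
for host, result in logs:
    seen.setdefault(host, set()).add(result)

excluded = {2, 3}
matches = sorted(
    host
    for host, results in seen.items()
    if 1 in results and not (results & excluded)
)
print(matches)  # host "x" drops out because it also returned 2 and 3
```

Because the per-host result sets are built once over the whole window, this scales to many hosts in a single pass, which matches how a single `stats ... by hostname` would behave over weeks of data.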
Hi, I am using the following query:

`mbp_ocp4` kubernetes.container.name=*service* level=NG_SERVICE_PERFORMANCE SERVICE!=DPTDRetrieveArrangementDetail*
| eval resp_time_exceeded = if(EXETIME>3000, "1","0")
| bin span=30m _time bins=2
| stats count as "total_requests", sum(resp_time_exceeded) as long_calls by kubernetes.namespace.name, kubernetes.container.name
| eval Percent_Exceeded = (long_calls/total_requests)*100
| where total_requests>200 and Percent_Exceeded>5

I am getting results, but even though the query includes `|bin span=30m _time bins=2`, the data is not shown in 30-minute increments; everything is aggregated at once. How can I refine the query so that it shows 30-minute increments?
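One thing worth checking: `bin` rewrites `_time`, but the `stats ... by` clause in the query doesn't include `_time`, so the buckets are collapsed back together; adding `_time` to the `by` list (or using `timechart span=30m`) should keep the increments. The bucketing `bin span=30m` performs can be sketched in Python (epoch values are illustrative):

```python
# Floor epoch timestamps to 30-minute buckets, mirroring `bin span=30m _time`.
SPAN = 30 * 60

def bucket(epoch: int) -> int:
    """Return the start of the 30-minute bucket containing `epoch`."""
    return epoch - (epoch % SPAN)

times = [1706200000, 1706200900, 1706202000]
print([bucket(t) for t in times])
```

Grouping by the bucketed value is then what preserves one row per 30-minute window per container.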
My organization has a handful of heavy forwarders that were configured to listen to syslog sources through udp://514. This was set up by a 3rd party, and now we are trying to understand the configuration. Searching the heavy forwarders' /etc/* recursively for "514", "tcp", "udp", "syslog", or "SC4S" returns no relevant results. We know syslog is working, because we have multiple sources that are pointed at the heavy forwarders using udp over port 514 and their data is being indexed. Curiously, when a new syslog source is pointed at the HFs, a new index with a random name pops up in our LastChanceIndex. We have no idea how any of this is configured - the index selection, or the syslog listener. We usually create an index that matches the name given, since we've never been able to find the config to set it manually. Any suggestions on how syslog might be set up, or what else I could try searching for?
Hi, I'm trying to run the sample code below to send a message to Splunk, but I'm getting the error "Host not found". Am I doing this right? I'm able to ping the Splunk server (171.134.154.114) from my dev Linux server, and I'm able to use a curl command successfully and see my message in the Splunk dashboard.

doc/html/boost_asio/example/http/client/sync_client.cpp - 1.47.0

./sync_client 171.134.154.114 /services/collector
arg[1]: 171.134.154.114
Exception: resolve: Host not found (authoritative) [asio.netdb:1]
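Two things to note about the boost sync_client example: it speaks plain HTTP and resolves the service name "http" (port 80), whereas an https HEC endpoint normally listens on port 8088 with TLS, so the example may not reach it as-is. Since curl already works, one way to cross-check the request shape is a minimal Python sketch of the same HEC POST (the token and port are assumptions; the commented-out line performs the actual send):

```python
import json
import urllib.request

# Illustrative values: swap in your real host, port, and HEC token.
HEC_URL = "https://171.134.154.114:8088/services/collector"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # hypothetical token

payload = json.dumps({"event": "hello from the dev box"}).encode("utf-8")
req = urllib.request.Request(
    HEC_URL,
    data=payload,
    headers={"Authorization": "Splunk " + HEC_TOKEN},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send (TLS permitting)

print(req.get_method(), req.full_url)
```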
I am having trouble creating a proper drilldown action between two custom ITSI entity dashboards. They both work fine when called by clicking the entity name in the Service Analyzer. The two entity dashboards show data from two custom entity types that are related to each other, and I want to create navigation between the two dashboards. I created a normal drilldown action to call the related dashboard. This works to a degree, but the token is not handled correctly: for example, I defined the token parameter host = $click.value2$, and in the target dashboard I see |search host=$click.value2$ instead of the real value that should have been handed over in the token. When I use the dashboards outside of ITSI, the drilldown action works fine. It looks to me as if ITSI uses some scripts, and the handover is not made directly to the other entity dashboard but somehow goes through the entity (_key) and the defined entity type. It would be great if somebody could shed some light on this!
Hello, I have a Splunk distributed deployment (approx. 20 servers + approx. 100 UFs). On the servers, I configured SSL encryption of management traffic and TLS certificate hostname validation in server.conf:

[sslConfig]
enableSplunkdSSL = true
serverCert = <path_to_the_server_certificate>
sslVerifyServerCert = true
sslVerifyServerName = true
sslRootCAPath = <path_to_the_CA_certificate>

Everything is working well; the servers communicate with each other. But my question is: I use the deployment server to push config to the UFs, and I am a little surprised that management traffic between the UFs and the deployment server still flows (I see all UFs phoning home, and I can push config) even though I did not configure encryption or hostname validation on any UF. Is this OK? Does it mean that hostname validation for management traffic cannot be configured on a UF, or is there a way to configure it? I only found how to configure hostname validation on a UF in outputs.conf, for sending collected data to the indexers, but nothing about management traffic. Thank you for any hint. Best regards, Lukas Mecir
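For what it's worth, a sketch of what hostname validation might look like on a UF, assuming the UFs trust the same CA. The UF is the client side of the deployment-server connection, so verifying the server's certificate would be configured in the UF's own server.conf rather than in outputs.conf (paths are placeholders, as in the question):

```
# $SPLUNK_HOME/etc/system/local/server.conf on the UF
[sslConfig]
sslVerifyServerCert = true
sslVerifyServerName = true
sslRootCAPath = <path_to_the_CA_certificate>
```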