All Topics

Why do so many people call the CIM Add-On an application? From everything I've learned so far, wouldn't it be considered an add-on rather than an application? I need to understand this for testing purposes.
Hello, in order to restore data from the archive, the user needs access to Settings --> Indexes. What role capability should I add?
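A minimal authorize.conf sketch of how a capability is granted to a custom role; the role name is hypothetical, and indexes_edit is the capability that typically gates the Settings > Indexes page, so verify it against the authorize.conf spec for your Splunk version:

# authorize.conf -- a sketch; role name is hypothetical, and indexes_edit is
# assumed to be the capability that controls Settings > Indexes
[role_archive_restorer]
importRoles = user
indexes_edit = enabled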
Hi friends, I am trying to get the total resolved incidents, open incidents, and total incidents each day. I am getting the information from the same source and index. How do I assign resolved, open, and total to separate variables and get the count of each, plus the percentages? Please suggest. Thanks in advance.
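A minimal SPL sketch of one way to do this with conditional counts per day; the index, sourcetype, and the status field and its values are assumptions about the data:

index=incidents sourcetype=incident_log ```assumed index/sourcetype; status field and its values are assumptions```
| bin _time span=1d
| stats count(eval(status="Resolved")) as resolved count(eval(status="Open")) as open count as total by _time
| eval resolved_pct=round(resolved/total*100, 2)
| eval open_pct=round(open/total*100, 2)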
While reviewing Sysmon events within the Endpoint data model, I noticed that file_hash information is not available within the Filesystem dataset and the field just reads "unknown". It looks like in the latest Sysmon TA (https://splunkbase.splunk.com/app/5709), the alias for file_hash is missing, which is the field required by the Endpoint data model. Mapping this to one of the extracted hash fields (SHA1, SHA256, etc.) should be relatively straightforward, but I wanted to check with the community first in case I am missing something obvious here. Thank you, ~ Abhi
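A minimal props.conf sketch of the kind of alias being described, placed in a local override of the Sysmon TA; the stanza name and the choice of SHA256 as the source field are assumptions, so match them to the sourcetype and hash field the TA actually produces:

# props.conf (local override) -- sketch; stanza name and source field are assumptions
[XmlWinEventLog:Microsoft-Windows-Sysmon/Operational]
FIELDALIAS-file_hash = SHA256 AS file_hash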
Hello All,

We keep getting some errors from Splunk Add-on for Java Management Extensions 5.3.0.

splunkd.log:

1. ERROR ExecProcessor [20951 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_jmx/bin/jmx.py" INFO: Loading mapping descriptors from jar:file:/opt/splunk/etc/apps/Splunk_TA_jmx/bin/lib/jmxmodinput.jar!/mapping.xml
2. ERROR ExecProcessor [20951 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_jmx/bin/jmx.py" Mar 01, 2023 11:59:59 PM org.exolab.castor.mapping.Mapping loadMapping

jmx.log:

3. 2023-03-02 06:37:18,168 - com.splunk.modinput.ModularInput -1968729474 [Thread-11] INFO [] - Failed connection with 'service:jmx:rmi:///jndi/rmi://server1.domain.com:port/jmxrmi', trying to collect data with a short URL: 'service:jmx:rmi://server1.domain.com:port/jndi/jmxrmi' .
2023-03-02 06:37:18,168 - com.splunk.modinput.ModularInput -1968729474 [Thread-11] ERROR [] - Exception@checkConnector, e=
java.io.IOException: Failed to retrieve RMIServer stub: javax.naming.NoInitialContextException: Need to specify class name in environment or system property, or as an applet parameter, or in an application resource file: java.naming.factory.initial
    at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:369) ~[?:1.8.0_51]
    at javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:270) ~[?:1.8.0_51]
    at javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:229) ~[?:1.8.0_51]
    at com.splunk.jmx.ServerTask.connect(Unknown Source) ~[jmxmodinput.jar:?]
    at com.splunk.jmx.ServerTask.checkConnector(Unknown Source) ~[jmxmodinput.jar:?]
    at com.splunk.jmx.Scheduler.run(Unknown Source) ~[jmxmodinput.jar:?]
Caused by: javax.naming.NoInitialContextException: Need to specify class name in environment or system property, or as an applet parameter, or in an application resource file: java.naming.factory.initial
    at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:662) ~[?:1.8.0_51]
    at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:313) ~[?:1.8.0_51]
    at javax.naming.InitialContext.getURLOrDefaultInitCtx(InitialContext.java:350) ~[?:1.8.0_51]
    at javax.naming.InitialContext.lookup(InitialContext.java:417) ~[?:1.8.0_51]
    at javax.management.remote.rmi.RMIConnector.findRMIServerJNDI(RMIConnector.java:1957) ~[?:1.8.0_51]
    at javax.management.remote.rmi.RMIConnector.findRMIServer(RMIConnector.java:1924) ~[?:1.8.0_51]
    at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:287) ~[?:1.8.0_51]
    ... 5 more
2023-03-02 06:37:18,168 - com.splunk.modinput.ModularInput -1968729474 [Thread-11] WARN [] - Server misconfiguration already notified once for jmx://_Splunk_TA_jmx_:server1 stanza with message: Failed to connect with the JMX server. Review the configuration of the jmx://_Splunk_TA_jmx_:server1 stanza, and try again.
2023-03-02 06:37:18,174 - com.splunk.modinput.ModularInput -1968729480 [Thread-11] INFO [] - 1 servers found in stanza jmx://_Splunk_TA_jmx_:server1

The add-on has been configured on a heavy forwarder. Our jmx_servers.conf is:

[hostname]
description = <description>
destinationapp = Splunk_TA_jmx
host = <FQDN>
jmxport = <port>
protocol = rmi
stubSource = jndi
disabled = 0
lookupPath = /jmxrmi

Does anyone have any idea how to fix it?

Greetings, Justyna
Hi guys, has anyone else run into a similar error? I found this in my internal logs:

ConfObjectManagerDB [55087 TcpChannelThread] - /opt/splunk/etc/apps/test_app/metadata/local.meta: Refused forced reload: outstanding write
I have different types of logs coming into Splunk via a modular input; the logs are being ingested from a Kafka topic as events. I need to override the sourcetype of these logs based on some field, or find some other way to split them into multiple sourcetypes.
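A minimal props.conf / transforms.conf sketch of an index-time sourcetype override keyed on a string in the raw event; the incoming sourcetype name, the regex, and the target sourcetype are all assumptions about what distinguishes the log types:

# props.conf -- applied on the instance that parses the data (HF or indexer)
[kafka:topic]            # assumed incoming sourcetype
TRANSFORMS-set_st = set_sourcetype_app_a

# transforms.conf
[set_sourcetype_app_a]
REGEX = "log_type":"app_a"
FORMAT = sourcetype::app_a_logs
DEST_KEY = MetaData:Sourcetype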
Hi,

The new Dashboards do not have the Timeline viz. What do you use for representing data as a time series and finding where there is overlap? Basically what I have is this:

Engine name           Scan name      Start time   End time
test                  12_14          12:00        15:12
test                  14:30_17:00    14:30        16:45
test_another_engine   13_14          13:00        13:52
...

Now I would like to put this inside some visualization covering 24h where I can see the start time, the duration, and whether there is overlap in duration between two scan names. In the example above we can see that 12_14 lasts longer than expected and then overlaps with 14:30_17:00 (I would like to see that overlap and, on click [token], populate another viz for that engine). I would like to build all of this in the new Dashboards, but there is no Timeline visualization there. The Timeline viz is almost what I need, since it shows duration through 24h grouped by engine name and I can hover to see the scan name, but it is missing the scan name label and only exists in the old dashboards. What would you recommend as a workaround viz?
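For the overlap part specifically, a minimal SPL sketch that flags any scan whose start time falls before the previous scan's end time on the same engine; the field names (engine, scan, start_time, end_time) and the "HH:MM" time format are assumptions about how the table above is produced:

``` assumes fields: engine, scan, start_time ("HH:MM"), end_time ("HH:MM") ```
| eval start_epoch=strptime(start_time, "%H:%M"), end_epoch=strptime(end_time, "%H:%M")
| sort 0 engine start_epoch
| streamstats current=f last(end_epoch) as prev_end last(scan) as prev_scan by engine
| eval overlaps_with=if(start_epoch < prev_end, prev_scan, null())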
Hello all, here is the use case: we wanted to create a backup of some JSON data. For this we created a new index called "xyz_backup" and moved all data from the original index to it. By doing that, the sourcetype was set to "stash" in the backup index. Now we want to move the data from the "xyz_backup" index back to the original index, but the sourcetype should be json again and the field extractions should also come back. By running the following command, the only thing that happens is that the sourcetype gets set to "json", but the data itself is still not in the right JSON format (field extractions not working, etc.):

index=xyz_backup | collect index=original sourcetype=_json

How can we get the data back into its original format (json)? The original data is still available and could maybe be read in again by resetting the fishbucket, but the bad part is that this is only possible for individual files, right? Not for a complete folder? Because we have over 100 files...

Thanks in advance for your help or a quick tip.
This is in continuation of my (resolved) query here: Solved: How to check time difference between a series of e... - Splunk Community. There I was able to get the overall downtime for any selected time range by using SUM() and AVG() on top of the suggested solution. Continuing with the sample scenario explained in the aforementioned query, I now have to handle a scheduled downtime. We have a process which turns our servers down at 4:30 PM and brings them up at 1:30 AM UTC, automatically on a schedule, every day. The 9 hrs 10 mins downtime shown in the sample comes from this schedule. In that case I was able to come up with the unplanned downtime by subtracting 9:10 from the total downtime, because it was a specific selected time range (2023-02-21T16:00:00Z to 2023-02-22T02:25:00Z) and there was only one record of 9 hrs 10 mins downtime. However, we have noticed a few other scenarios which are a bit more complex:
1. When the time range selected is 24 hrs or yesterday (3/1/23 12:00:00.000 AM to 3/2/23 12:00:00.000 AM), assuming the system was up all the time outside the scheduled downtime, the total downtime will be shown as empty/null, because the first event for the day will be at 1:35 and the last one at 16:30 with no other downtime in between. (This is still okay, as I just have to handle the null downtime with a zero.)
2. When the time range selected is anything more than a day, say 7 days or 30 days or even a random custom date/time range, I'm not sure how to calculate the actual downtime. Can this be handled somehow?
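A minimal SPL sketch of one way to subtract the fixed 9 h 10 m scheduled window once for every day covered by the search, using addinfo to read the selected time range; total_downtime_secs stands in for whatever the existing query already computes, so the field names are assumptions:

| addinfo ```info_min_time / info_max_time come from the picked time range; total_downtime_secs is assumed to exist already```
| eval days_in_range=ceil((info_max_time - info_min_time) / 86400)
| eval scheduled_secs=days_in_range * (9*3600 + 10*60)
| eval unplanned_secs=max(total_downtime_secs - scheduled_secs, 0)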
Hello, (I will use fictional data to give examples.) I'm trying to use regex to extract data from one field into another, but Splunk doesn't find the data I want in this specific field. The field I want to extract data from (let's call it DATA_FIELD) looks something like:

SMBv2 guid=111111-b111-1111-1111-11111aaaa11 time=2023-02-27 15:17:35 domain=DOMAINNAME version=12.1.11111 ntlm-ver=15 domain=DOMAINNAME name=HOSTNAME domain-dns=DOMAINDNS name-dns=NAMEDNS

I want to extract "name=HOSTNAME" from this field into another one (HOSTNAME_FIELD). I suspect my regex parameters are not working with Splunk, even though I tested them all in regex101. Example:

| rex field=DATA_FIELD "HOSTNAME_FIELD: name=(?<name>\S*(?))\s"

I tried many forms of regex, such as:

name.(?<=[=])\S*(?:\s)?
.name=(?<=[=])\S*(?)
name=(?<name>\S*(?))\s

None of them worked in Splunk, but they work in regex101. Is the problem with the regex formula itself in Splunk, or is it a problem with the syntax? What's the best way to extract this data?
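One thing worth noting is that in rex the destination field is simply the name of the capture group, so the literal "HOSTNAME_FIELD: " prefix in the pattern would have to exist in the data itself for the regex to match. A minimal sketch, assuming the sample value above is representative and HOSTNAME_FIELD is the field name you want to create:

| rex field=DATA_FIELD " name=(?<HOSTNAME_FIELD>\S+)" ```capture group name becomes the new field; leading space keeps it from matching mid-token```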
I am running Splunk in Docker on my local machine. I would like to monitor a directory, also on my local machine, where data will be posted (CSV files which I would like to index). I go to: Data Inputs > Files and Directories > Add New File or Directory. If I use Browse, I can't find my directory - I assume because it isn't mounted. If I add the path to the folder, I get an error saying "This path does not exist or is not accessible." It seems it should be easy to add a folder for monitoring, but as yet I can't find a way to do it. Can anyone point me in the right direction? Many thanks in advance. NM
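A minimal sketch of the underlying idea: the host folder has to be visible inside the container (e.g. bind-mounted when the container is started with docker run -v host_path:container_path) before Splunk can monitor it, and the monitor input then points at the container-side path. The paths, index, and sourcetype below are placeholders:

# inputs.conf -- sketch; /opt/splunk/watched_csv is a hypothetical container-side
# path that the host folder was bind-mounted to when the container was started
[monitor:///opt/splunk/watched_csv]
disabled = 0
index = main
sourcetype = csv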
Hi, I need to monitor jobs only within a specific interval. From the application server we only get the job name and the date the job was generated into Splunk. For example: a job will only run between 9:30 PM and 10:30 PM, so Splunk will have data only after 9:30 PM; up to 9:30 PM the dashboard shows "Job has not run", which is incorrect. I need to check only between 9:30 PM and 10:30 PM, and if there is no data in the index then show "Job has not run". Please suggest. Query:

index = test_job sourcetype = test_job
| rex field=source ".*/(?<name>.*?)_(?<date>.*)\."
| eval DATE=strftime(strptime(date,"%m%d%Y_%I.%M.%S.%p"),"%m-%d-%Y %I:%M:%S %p")
| rename name as JobName
| table JobName DATE
| append
    [| inputlookup job.csv
     | search NOT
        [ search index = test_job sourcetype = test_job
        | rex field=source ".*/(?<name>.*?)_(?<date>.*)\."
        | eval DATE=strftime(strptime(date,"%m%d%Y_%I.%M.%S.%p"),"%m-%d-%Y %I:%M:%S %p")
        | rename name as JobName
        | table JobName ]]
| fillnull value="N" DATE
| eval DATE=if(DATE="N","Job has not run", DATE)
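A minimal sketch of one way to gate the "Job has not run" message on the wall-clock time of the search, appended after the existing query, so it only appears once the 9:30 PM window has started; the 2130 cutoff is an assumption based on the times described above and would need adjusting for time zone:

| eval now_hhmm=tonumber(strftime(now(), "%H%M")) ```2130 = 9:30 PM local to the search head; adjust for your time zone```
| eval DATE=if(DATE="N" AND now_hhmm>=2130, "Job has not run", if(DATE="N", "Window not started yet", DATE))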
Hi, I have a timechart of the number of events over a 7 day period and I need to run a Seasonal-Trend decomposition on the results. This is my current query:

[BASE QUERY]
| timechart span=1h count
| streamstats window=24 avg(count) as hourly_avg_count
| timechart span=1h stl hourly_avg_count as seasonal component=longterm

However, I am getting the error: Unknown search command 'stl'. Can you please help? Many thanks!
Hello,

Our Splunk Enterprise setup is 1 master, 2 search heads, and a 4-node indexer cluster. The master also runs Forwarder Management, and the deployment apps live there. Now I want to index some log files from a server (which already has a UF installed); the files have a CSV structure but no csv extension. The log file looks like this:

api_key,api_method_name,bytes,cache_hit,client_transfer_time,connect_time,endpoint_name,http_method,http_status_code,http_version,oauth_access_token,package_name,package_uuid,plan_name,plan_uuid,pre_transfer_time,qps_throttle_value,quota_value,referrer,remote_total_time,request_host_name,request_id,request_time,request_uuid,response_string,service_definition_endpoint_uuid,service_id,service_name,src_ip,ssl_enabled,total_request_exec_time,traffic_manager,traffic_manager_error_code,uri,user_agent,org_name,org_uuid,sub_org_name,sub_org_uuid
unknown,-,30,0,0.0,0.0,-,POST,596,HTTP/1.1,-,-,-,-,-,0.0,0,0,-,0.0,developer.napas.com.vn,1675641598.598_unknown_unknown,2023-02-05T23:59:58,dafeac38-123d-4bb7-aa1c-59680afbc0b2,596 Service Not Found (Proxy),-,unknown,-,10.244.1.0,1,0.0,tm-deploy-0-97674db57-smcdv,ERR_596_SERVICE_NOT_FOUND,/healthcheck,-,-,-,-,-
unknown,-,30,0,0.0,0.0,-,POST,596,HTTP/1.1,-,-,-,-,-,0.0,0,0,-,0.0,developer.napas.com.vn,1675641608.030_unknown_unknown,2023-02-06T00:00:08,e4cd645a-5471-4097-baf0-67f90f4d2cee,596 Service Not Found (Proxy),-,unknown,-,10.244.1.0,1,0.001,tm-deploy-0-97674db57-smcdv,ERR_596_SERVICE_NOT_FOUND,/healthcheck,-,-,-,-,-
unknown,-,30,0,0.0,0.0,-,POST,596,HTTP/1.1,-,-,-,-,-,0.0,0,0,-,0.0,developer.napas.com.vn,1675641618.607_unknown_unknown,2023-02-06T00:00:18,ee18e506-2ea5-4792-a586-f0274e6c823b,596 Service Not Found (Proxy),-,unknown,-,10.244.1.0,1,0.0,tm-deploy-0-97674db57-smcdv,ERR_596_SERVICE_NOT_FOUND,/healthcheck,-,-,-,-,-
unknown,-,30,0,0.0,0.0,-,POST,596,HTTP/1.1,-,-,-,-,-,0.0,0,0,-,0.0,developer.napas.com.vn,1675641627.988_unknown_unknown,2023-02-06T00:00:27,5cc9f704-61a3-443c-b670-26373afe5502,596 Service Not Found (Proxy),-,unknown,-,10.244.1.0,1,0.0,tm-deploy-0-97674db57-smcdv,ERR_596_SERVICE_NOT_FOUND,/healthcheck,-,-,-,-,-

The log files are named like: access_worker5_2023_2_5.log or access_worker5_2023_2_5.log.1

Before configuring inputs.conf in my deployment app, I configured props.conf and transforms.conf on my search head in /splunk/etc/apps/search/local as follows.

props.conf:

[mllog.new]
CHARSET = UTF-8
INDEXED_EXTRACTIONS = csv
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Structured
description = sourcetype for index csv
disabled = false
pulldown_type = true
FIELD_NAMES = api_key,api_method_name,bytes,cache_hit,client_transfer_time,connect_time,endpoint_name,http_method,http_status_code,http_version,oauth_access_token,package_name,package_uuid,plan_name,plan_uuid,pre_transfer_time,qps_throttle_value,quota_value,referrer,remote_total_time,request_host_name,request_id,request_time,request_uuid,response_string,service_definition_endpoint_uuid,service_id,service_name,src_ip,ssl_enabled,total_request_exec_time,traffic_manager,traffic_manager_error_code,uri,user_agent,org_name,org_uuid,sub_org_name,sub_org_uuid
TIMESTAMP_FIELDS = request_time
REPORT-tibco-mllog-new = REPORT-tibco-mllog-new
DATETIME_CONFIG =
HEADER_FIELD_LINE_NUMBER = 1
FIELD_DELIMITER = ,
HEADER_FIELD_DELIMITER = ,

###THIS IS FOR TESTING###
[mllog.new2]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
disalbe = false
pulldown_type = 1

transforms.conf:
[REPORT-tibco-mllog-new]
DELIMS = ","
FIELDS = "api_key","api_method_name","bytes","cache_hit","client_transfer_time","connect_time","endpoint_name","http_method","http_status_code","http_version","oauth_access_token","package_name","package_uuid","plan_name","plan_uuid","pre_transfer_time","qps_throttle_value","quota_value","referrer","remote_total_time","request_host_name","request_id","request_time","request_uuid","response_string","service_definition_endpoint_uuid","service_id","service_name","src_ip","ssl_enabled","total_request_exec_time","traffic_manager","traffic_manager_error_code","uri","user_agent","org_name","org_uuid","sub_org_name","sub_org_uuid"

Then I configured the deployment app as normal, with inputs.conf like this:

[monitor:///u01/pv/log-1/data/trafficmanager/enriched/access/*]
disabled = 0
index = myindex
sourcetype = mllog.new

###THIS IS FOR TESTING###
[monitor:///u01/pv/log-1/data/trafficmanager/enriched/access/*]
disabled = 0
index = myindex
sourcetype = mllog.new2

After the configuration and a restart, I ran two queries:

index = myindex sourcetype = mllog.new -> 0 events
index = myindex sourcetype = mllog.new2 -> has events, but not with correct line breaking: some events have 1 line (correct), some events have 2 lines or even 257 lines (which is clearly wrong), the header is indexed, and there is no field separation.

So clearly I have configured something wrong somewhere; can someone point me in the right direction?
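A minimal sketch of the piece that most often trips this up: for structured data, INDEXED_EXTRACTIONS and the related header/field settings are applied on the universal forwarder itself, so a props.conf like the one below would normally be deployed to the UF (e.g. in the same deployment app as inputs.conf) rather than only on the search head. The values simply reuse the settings already shown above:

# props.conf -- sketch, deployed to the UF alongside inputs.conf;
# values are copied from the post's own [mllog.new] stanza
[mllog.new]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
FIELD_DELIMITER = ,
TIMESTAMP_FIELDS = request_time
SHOULD_LINEMERGE = false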
Hi all! I'm currently struggling to ingest network telemetry from Windows endpoints/servers into Splunk Cloud. We've installed Splunk's Universal Forwarder on each instance. Sysmon logs and the basic Windows events that you can tick during UF setup are already being forwarded. Isn't the UF also supposed to capture network data? If that's not the case, what's best practice, or what method do you use? We want to monitor unusual spikes in network traffic and be able to see which client it is and where it's sending its data to. I already opened 2 support tickets but I've gotten no response in over a week now, which is why I'm trying it here. Hope you're having a great day, and thanks in advance for your help. -Maik
Hello Splunkers, I have switches whose logs are being ingested into Splunk. When specific multiple interfaces are down, it should trigger only one alert. At present I am using the query below:

index="switches"
| spath Message
| search Message="*Interface GigabitEthernet0/21 · ** Connected to AD Server ** for node DISTRIBUTION is Shutdown." OR Message="Interface GigabitEthernet0/23 - Gi0/23 for node DISTRIBUTION is Shutdown." OR Message="Interface GigabitEthernet0/24 - Gi0/24 for node DISTRIBUTION is Shutdown." OR Message="*Interface GigabitEthernet0/22 · ** Connected to DHCP Server ** for node DISTRIBUTION is Shutdown."
| table _time Message Device_Name

Here, whether one interface or several are down, the number of alerts triggered depends on the number of interfaces down. What I need is that if multiple interfaces are down, it should raise only one alert in the alert manager. I have attached a screenshot of the alerts when multiple interfaces are down.

Thanks in advance.
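A minimal sketch of one common approach: collapse all matching events into a single result row before alerting, and set the alert's trigger condition to trigger once per search rather than for each result. The field names follow the query above; whether to group by Device_Name or collapse everything into one row is a judgment call for your setup:

...existing search...
| stats values(Message) as Messages dc(Message) as interfaces_down latest(_time) as last_seen by Device_Name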
Hi all. I have a search that covers a large number of events. It is run in fast mode, on the Statistics tab. When I start the search it slowly starts populating the fields, but then at one point it just empties all the results and says "No results found", even though they were there while the search was running... Any ideas what the issue could be here? I've never had anything like this before on other large searches. Some setting in limits.conf or something? All I get is the empty "No results found" view, which makes no sense since the data is there at the start of the search (the attached screenshots show results at the beginning of the run and none at the end).
Since the rest command has some limitations on Splunk Cloud, how do I find the license purchase date and expiration date on Splunk Cloud? And how do I find the license pool information in Splunk Cloud?
(earliest=-0@d latest=now) OR (earliest=-7d@d latest=-6d@d) -- this gives me the comparison of today vs. 7 days ago. Instead, I need to choose the date dynamically (whatever I want) and the other window should automatically be 7 days earlier. Could someone please help with this?
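A minimal sketch of the same pattern with explicit, hand-picked dates; the timestamps are placeholders for whichever day is chosen (the second window is simply that day minus 7 days), with an eval to label which window each event came from:

(earliest="03/01/2023:00:00:00" latest="03/02/2023:00:00:00") OR (earliest="02/22/2023:00:00:00" latest="02/23/2023:00:00:00") ```dates are placeholders: chosen day and the same day minus 7 days```
| eval period=if(_time >= strptime("03/01/2023:00:00:00", "%m/%d/%Y:%H:%M:%S"), "selected_day", "7_days_before")
| timechart span=1h count by period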