All Topics



Hello! I noticed that one of my scheduled saved searches randomly refuses to return results. I can run the search at any point from the search bar and get data, even immediately after the scheduled saved search returns 0. Here are the results from when it was scheduled at 2- and 5-minute intervals: randomly it will conclude with 0 results after a second, with no errors. Why would it do this? How can I ensure that results are produced consistently each time? Thanks! Andrew
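A first debugging step, assuming access to the _internal index, is to compare what the scheduler recorded for each run (the saved search name below is a placeholder):

```
index=_internal sourcetype=scheduler savedsearch_name="my_saved_search"
| table _time status run_time result_count
```

Skipped or deferred runs, or a dispatch time window that shifts relative to when the data arrives, are common explanations for intermittent zero-result scheduled runs.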
Hi. I created a simple file.bat, which I placed into ./etc/apps/appname/bin. I created commands.conf in ./etc/apps/appname/local, and its content is: [sps_export_vitocharge] filename = sps_export_vitocharge.bat chunked = true I restarted the Splunk instance. I try to run this search: my_search | outputcsv sps_export_vitocharge | script sps_export_vitocharge and I receive the error message: Error: External search command <file.bat> returned error code 1. The file.bat simply moves the generated CSV file, and it works from a shell window. Can anybody help me, please?
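One thing worth checking, offered as a guess: `chunked = true` tells Splunk the script implements the chunked external-command protocol (a getinfo/execute handshake over stdin/stdout), which a plain batch file does not speak. A minimal sketch of the stanza for a legacy-style script (exact requirements depend on your Splunk version):

```
[sps_export_vitocharge]
filename = sps_export_vitocharge.bat
# chunked = true expects the protocol handshake; omit it for a plain script
```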
Hello everyone, I am trying to remove the string "0#.w|" with a transforms.conf file. To be sure that my regex is working, I tried it with the rex command: | rex field=cs_username "(^[^|]+\|(?<cs_username>[^|]+)$)" I just want to overwrite the field cs_username without this string, and it works! Now I want to put this regex in transforms.conf and props.conf. I am not sure that I can do this, but here is what I am trying: transforms.conf [username] SOURCE_KEY = cs_username REGEX = ^[^|]+\|(?<cs_username>[^|]+)$ REPEAT_MATCH = true MV_ADD = true props.conf TRANSFORMS-mynewusername = username I reload on the indexer by using the command: | extract reload=true But apparently it is not working, which is why I am asking: is it possible to use a field in transforms.conf, as I did through the rex command in the GUI? Thank you for your answers,
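For context: TRANSFORMS- stanzas run at index time, when search-time fields such as cs_username do not yet exist, which would explain the behavior described. A search-time sketch of the same extraction, assuming it goes in props.conf on the search head (the sourcetype name is a placeholder):

```
# props.conf (search-time field extraction)
[your_sourcetype]
EXTRACT-mynewusername = ^[^|]+\|(?<cs_username>[^|]+)$ in cs_username
```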
Hi Splunk team, The image below shows information about my data model. Summary Range: 31622400 seconds. But why, when I search for a period in May, does the result return 0 events? How can I fix it? Thanks all!
We are trying to run bidirectional ticketing (ServiceNow) and are experiencing some issues. ITSI v4.3.3; the data models are working just fine as far as I know. The correlation search uses snow_hash.csv as input and output, but the file is missing. Anyone with a quick fix? Should I just manually create it? Does anyone know when it is created? Error message from the job output when running the correlation search manually: [subsearch]: File '/opt/splunk/var/run/splunk/csv/snow_hash.csv' could not be opened for reading.
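If the lookup simply has to exist before the correlation search first runs, one hedged way to seed it is an empty CSV written to that same dispatch directory; the column name here is a guess, so check which fields the correlation search actually reads before using it:

```
| makeresults
| eval hash=""
| fields hash
| outputcsv snow_hash.csv
```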
Hi all, Is it possible to enable analytics with the 15-day trial license? Thanks in advance.
Hi, Does somebody have a working example of how to create a saved search using the REST API with XML? Thanks, Max
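For reference, saved searches can be created by POSTing form-encoded parameters to the saved/searches endpoint; the response comes back as Atom XML. A minimal sketch (host, credentials, and the admin/search namespace are placeholders):

```
curl -k -u admin:changeme https://localhost:8089/servicesNS/admin/search/saved/searches \
     -d name=my_saved_search \
     --data-urlencode search="index=_internal | head 10"
```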
Hi, I have a main query which returns the following 4 columns: rule, result, name, department. Now I have to add another query as a subsearch, where I want to get the column address for all the names returned by the first result. I have a fullname column in the subsearch index which is the same as name from the first query. How can I achieve this?
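One common pattern for this, sketched with placeholder index names, is to rename fullname to name inside the subsearch so join has a shared key:

```
index=main_index ...
| table rule result name department
| join type=left name
    [ search index=subsearch_index
      | rename fullname AS name
      | fields name address ]
```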
Hi All, I am urgently looking for help. I have one field, object_name, which is present in lookup X1.csv and has values like: GRM MGT Shortfirer Appointment, Blasting Security Register Test, Morning Schedule. The other lookup (X2.csv) has the column object_name, which has values like: Appointment, Schedule, Blasting. I have to match the two columns and return results wherever object_name contains *keyword* of object_name from the second lookup. The field values can be in upper case, lower case, or a combination.
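One way to sketch a case-insensitive substring match between the two lookups is to build a regex from the X2.csv keywords in a subsearch; this is a rough pattern and may need adjusting if the keywords contain regex special characters:

```
| inputlookup X1.csv
| where match(object_name,
    [ | inputlookup X2.csv
      | stats values(object_name) AS kw
      | eval search="\"(?i)(" . mvjoin(kw, "|") . ")\""
      | return $search ])
```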
Can someone please explain the importance of eventtypes.conf and tags.conf configuration in Splunk? We also use TortoiseSVN. It would be even better if someone could explain with a use case.
Hi, I installed a forwarder on RHEL 7 and added a JBoss log path to forward to the Splunk server, but now I have a performance issue. 1. What type of solution can I apply here? 2. Tune the forwarder? 3. Set an interval for the forwarder so it sends data to the server only periodically, e.g., every 60 minutes? 4. Try to send data to the Splunk server through UDP? Any ideas?
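On the tuning point: universal forwarders throttle their output at 256 KBps by default, which is often the first thing to adjust. A sketch of raising the limit in limits.conf on the forwarder (the value is illustrative):

```
# $SPLUNK_HOME/etc/system/local/limits.conf on the forwarder
[thruput]
maxKBps = 1024
```

Note that there is no built-in "send every 60 minutes" batching for monitored files; data is forwarded as it is read.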
Hi, I installed the "Smart Exporter" app from Splunkbase, and exporting PDFs works properly. When I try to schedule, the error message below keeps appearing even though I set up the mail host: "Please finish setting up the smart export application (restart may be required)" The host is an application mail relay, hence an ID and password are not required to send the emails, but this app keeps asking for an ID and password. Based on the post below, I commented out the id, port, and password in service.py; however, the same error message still appears. Could someone help fix this issue? Thanks in advance! Regards, Purush
Need some help in understanding how the _time and timestamp default fields are extracted. The raw event and the field values extracted for it are shown below. As can clearly be seen, I don't see anything that could relate to the value extracted in the _time field. Any pointer related to this would be much appreciated. Fields extracted: @timestamp: 2020-06-22T15:17:34.892576+00:00 | _time: 2020-06-17 17:54:50 | timestamp: 2020-06-23 01:17:34.888 Raw event: {"docker":{"container_id":"c0cb3bd3563f5f01133bcc496479b77b6c72bf898f24612ad7634b50a1749301"},"test":{"container_name":"anything","namespace_name":"test10-project","pod_name":"anything-1-w44fj","pod_id":"9289218b-b1cc-11ea-abcd-005056a44ead","labels":{"app":"anything","deployment":"anything-1","deploymentconfig":"anything"},"host":"ost-clb-osp-app-c02.linux.ostravam.corp.telstra.com","master_url":"https://test.default.svc.cluster.local","namespace_id":"0fbe0d11-cade-11e9-a562-005056a44ead"},"message":"2020-06-23 01:17:34.888 DEBUG --- [nio-8090-exec-5] o.s.web.servlet.DispatcherServlet : GET \"/healthcheck\", parameters={}\n","level":"info","hostname":"xxxxxxxxxxxxx","pipeline_metadata":{"collector":{"ipaddr4":"10.130.5.172","ipaddr6":"fe80::823:d3ff:fe3f:bf2d","inputname":"fluent-plugin-systemd","name":"fluentd","received_at":"2020-06-22T15:17:35.076698+00:00","version":"0.12.43 1.6.0"}},"@timestamp":"2020-06-22T15:17:34.892576+00:00","viaq_index_name":"project.test10-project.0fbe0d11-cade-11e9-a562-005056a44ead.2020.06.22","viaq_msg_id":"YzY0NWI1ZGItMjc5Ni00YWI2LWI4OWUtMWZkODU1NTRlNjdj","forwarded_by":"standalone-fluentd-splunk.openshift-logging.svc.cluster.local","source_component":"testsource"}
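If the goal is for _time to track the @timestamp field, a hedged props.conf sketch for this sourcetype (the stanza name is a placeholder, and the format strings assume the JSON shape shown above):

```
[your_sourcetype]
TIME_PREFIX = \"@timestamp\":\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
MAX_TIMESTAMP_LOOKAHEAD = 40
```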
Hi Team, We are using Confluence, and we are planning to use our Splunk dashboard URL as an RSS feed URL in Confluence. Could you please advise whether I can convert my Splunk dashboard/report URL into an RSS feed URL? If so, please advise how. Thanks!
We see inconsistent responses in the UI (Settings --> Users and Authentication --> Access controls --> Users). Some users are not found, even though we know the user recently accessed the platform. This makes it challenging to triage and review what role is being inherited by a specific user. The response and list of users can vary between search head cluster nodes that all point to the same LDAP environment.
We have a few very old machines running UF version 6.2.X. [No, upgrading the UF/OS cannot happen] The certificate just recently expired, and we are attempting to rectify the issue. My question is: does the certificate only have to be reissued to the indexers (I have read this in previous answers)? Or do I need to reissue certs to all the expired UFs as well? If the latter, what is the best way to go about it? Ship the certs with a new outputs.conf pointing to the new certs resident in that app? Or use SCCM and our in-house CA to issue new certs and set outputs.conf to point to that cert? I appreciate any help!
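If new client certs do turn out to be needed on the UFs, the outputs.conf side is just pointing at the new files; a sketch (paths and the stanza name are placeholders, and the 6.2-era setting names should be double-checked against that version's outputs.conf spec):

```
[tcpout:primary_indexers]
server = indexer1:9997
sslCertPath = $SPLUNK_HOME/etc/auth/client.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem
```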
Hi, Is there a way to use environment variables within transforms.conf? I am trying to override the hostname to that of the forwarder which receives the data. I'm using the following; however, it doesn't seem to be working. The conf files are distributed from a deployment server, so I don't want to hard-code the host names: [hostoverride] DEST_KEY = MetaData:Host REGEX = . FORMAT = host::$HOSTNAME
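As far as I know, transforms.conf does not expand environment variables. A deployment-server-friendly alternative that avoids hard-coding is to let each forwarder resolve its own hostname at startup in inputs.conf:

```
# inputs.conf on the forwarder
[default]
host = $decideOnStartup
```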
I have been using the tstats command for a while. Now we want tstats to limit the number of records, because we are using Kubernetes and there are way too many events. I have looked around and don't see a limit option. As a workaround I use `| head 100` to limit the output, but that won't stop the main search query from processing.
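Since tstats has no limit option, the usual approach is to narrow the search itself (time range, where clause) so less is processed before head trims the output; a sketch with placeholder index and field names:

```
| tstats count WHERE index=k8s earliest=-15m BY host
| head 100
```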
Hi, I'm using the following search to monitor disk space. I have 2 partitions, drives D and E, but I am only getting results for drive D. I would have expected results for both. Any thoughts are appreciated. Thanks. | rest splunk_server=Splunk01 /services/server/status/partitions-space | eval free = if(isnotnull(available), available, free) | eval usage = round((capacity - free) / 1024, 2) | eval capacity = round(capacity / 1024, 2) | eval compare_usage = usage." / ".capacity | eval pct_usage = round(usage / capacity * 100, 2) | stats first(fs_type) as fs_type first(compare_usage) as compare_usage first(pct_usage) as pct_usage by mount_point | rename mount_point as "Mount Point", fs_type as "File System Type", compare_usage as "Disk Usage (GB)", capacity as "Capacity (GB)", pct_usage as "Disk Usage (%)"
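Worth noting, as far as I can tell: the partitions-space endpoint only reports partitions Splunk itself writes to (index and volume paths), so drive E will not appear unless an index or volume lives there. The raw endpoint output can be inspected directly to confirm:

```
| rest splunk_server=Splunk01 /services/server/status/partitions-space
| table mount_point fs_type capacity free available
```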
Using Splunk 8.0.2.1. I have a container (Spring Boot, which uses Tomcat underneath) that I'm running, and I'm attempting to push its log contents to the HEC. I'm starting the container like this: docker run --name test-spring-boot-app --publish 8080:8080 --log-driver=splunk --log-opt splunk-token=SOME-TOKEN --log-opt splunk-url=http://ec2-someip.compute-1.amazonaws.com:8088 --log-opt splunk-format=inline --log-opt splunk-sourcetype=log4j-test test-spring-boot-app I can't for the life of me get ingested logs to merge multi-line events. The exception in the log below shows up as a separate event for every line, even though I've tried every combination I can think of to try to get it to merge. It almost appears that it is ignoring my sourcetype. I have the token in HEC configured with the log4j-test sourcetype as well. My log output looks like this:   . ____ _ __ _ _ /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ \\/ ___)| |_)| | | | | || (_| | ) ) ) ) ' |____| .__|_| |_|_| |_\__, | / / / / =========|_|==============|___/=/_/_/_/ :: Spring Boot :: (v2.3.1.RELEASE) 2020-06-29 19:57:52,828 [main] INFO com.sss.app.ws.TestSpringBootAppApplication - Starting TestSpringBootAppApplication v0.0.1-SNAPSHOT on 84837ec423e5 with PID 1 (/spring-boot-test.jar started by root in /) 2020-06-29 19:57:52,843 [main] INFO com.sss.app.ws.TestSpringBootAppApplication - No active profile set, falling back to default profiles: default 2020-06-29 19:57:54,370 [main] INFO org.springframework.boot.web.embedded.tomcat.TomcatWebServer - Tomcat initialized with port(s): 8080 (http) 2020-06-29 19:57:54,406 [main] INFO org.apache.coyote.http11.Http11NioProtocol - Initializing ProtocolHandler ["http-nio-8080"] 2020-06-29 19:57:54,407 [main] INFO org.apache.catalina.core.StandardService - Starting service [Tomcat] 2020-06-29 19:57:54,408 [main] INFO org.apache.catalina.core.StandardEngine - Starting Servlet engine: [Apache Tomcat/9.0.36] 2020-06-29 19:57:54,520 [main] 
INFO org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/] - Initializing Spring embedded WebApplicationContext 2020-06-29 19:57:54,520 [main] INFO org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext - Root WebApplicationContext: initialization completed in 1597 ms 2020-06-29 19:57:54,856 [main] INFO org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor - Initializing ExecutorService 'applicationTaskExecutor' 2020-06-29 19:57:55,080 [main] INFO org.apache.coyote.http11.Http11NioProtocol - Starting ProtocolHandler ["http-nio-8080"] 2020-06-29 19:57:55,128 [main] INFO org.springframework.boot.web.embedded.tomcat.TomcatWebServer - Tomcat started on port(s): 8080 (http) with context path '' 2020-06-29 19:57:55,143 [main] INFO com.sss.app.ws.TestSpringBootAppApplication - Started TestSpringBootAppApplication in 2.877 seconds (JVM running for 4.391) 2020-06-29 19:58:01,670 [http-nio-8080-exec-1] INFO org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/] - Initializing Spring DispatcherServlet 'dispatcherServlet' 2020-06-29 19:58:01,670 [http-nio-8080-exec-1] INFO org.springframework.web.servlet.DispatcherServlet - Initializing Servlet 'dispatcherServlet' 2020-06-29 19:58:01,680 [http-nio-8080-exec-1] INFO org.springframework.web.servlet.DispatcherServlet - Completed initialization in 10 ms 2020-06-29 19:58:01,807 [http-nio-8080-exec-1] INFO com.sss.app.ws.controller.TestController - foo bar log: true 2020-06-29 19:58:01,807 [http-nio-8080-exec-1] INFO com.sss.app.ws.controller.TestController - The querystring parameter name was supplied as: mark 2020-06-29 19:58:01,807 [http-nio-8080-exec-1] INFO com.sss.app.ws.controller.TestController - The querystring parameter exc was supplied as: true 2020-06-29 19:58:01,813 [http-nio-8080-exec-1] ERROR org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/].[dispatcherServlet] - Servlet.service() for servlet [dispatcherServlet] in context with path [] threw 
exception [Request processing failed; nested exception is java.lang.Exception: Give me an exception please] with root cause java.lang.Exception: Give me an exception please at com.sss.app.ws.controller.TestController.getTest(TestController.java:47) ~[classes!/:0.0.1-SNAPSHOT] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_111-internal] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_111-internal] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_111-internal] at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_111-internal] at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:190) ~[spring-web-5.2.7.RELEASE.jar!/:5.2.7.RELEASE] at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:138) ~[spring-web-5.2.7.RELEASE.jar!/:5.2.7.RELEASE]   In my props.conf I have log4j-test which looks like:   ./splunk btool --debug props list log4j-test | more /home/ubuntu/apps/splunk/etc/system/default/props.conf [log4j-test] /home/ubuntu/apps/splunk/etc/system/default/props.conf ADD_EXTRA_TIME_FIELDS = True /home/ubuntu/apps/splunk/etc/system/default/props.conf ANNOTATE_PUNCT = True /home/ubuntu/apps/splunk/etc/system/default/props.conf AUTO_KV_JSON = true /home/ubuntu/apps/splunk/etc/system/default/props.conf BREAK_ONLY_BEFORE = \d\d?:\d\d:\d\d /home/ubuntu/apps/splunk/etc/system/default/props.conf BREAK_ONLY_BEFORE_DATE = True /home/ubuntu/apps/splunk/etc/system/default/props.conf CHARSET = UTF-8 /home/ubuntu/apps/splunk/etc/system/default/props.conf DATETIME_CONFIG = /etc/datetime.xml /home/ubuntu/apps/splunk/etc/system/default/props.conf DEPTH_LIMIT = 1000 /home/ubuntu/apps/splunk/etc/system/default/props.conf HEADER_MODE = /home/ubuntu/apps/splunk/etc/system/default/props.conf LEARN_MODEL = true /home/ubuntu/apps/splunk/etc/system/default/props.conf 
LEARN_SOURCETYPE = true /home/ubuntu/apps/splunk/etc/system/default/props.conf LINE_BREAKER_LOOKBEHIND = 100 /home/ubuntu/apps/splunk/etc/system/default/props.conf MATCH_LIMIT = 100000 /home/ubuntu/apps/splunk/etc/system/default/props.conf MAX_DAYS_AGO = 2000 /home/ubuntu/apps/splunk/etc/system/default/props.conf MAX_DAYS_HENCE = 2 /home/ubuntu/apps/splunk/etc/system/default/props.conf MAX_DIFF_SECS_AGO = 3600 /home/ubuntu/apps/splunk/etc/system/default/props.conf MAX_DIFF_SECS_HENCE = 604800 /home/ubuntu/apps/splunk/etc/system/default/props.conf MAX_EVENTS = 256 /home/ubuntu/apps/splunk/etc/system/default/props.conf MAX_TIMESTAMP_LOOKAHEAD = 128 /home/ubuntu/apps/splunk/etc/system/default/props.conf MUST_BREAK_AFTER = /home/ubuntu/apps/splunk/etc/system/default/props.conf MUST_NOT_BREAK_AFTER = /home/ubuntu/apps/splunk/etc/system/default/props.conf MUST_NOT_BREAK_BEFORE = /home/ubuntu/apps/splunk/etc/system/default/props.conf SEGMENTATION = indexing /home/ubuntu/apps/splunk/etc/system/default/props.conf SEGMENTATION-all = full /home/ubuntu/apps/splunk/etc/system/default/props.conf SEGMENTATION-inner = inner /home/ubuntu/apps/splunk/etc/system/default/props.conf SEGMENTATION-outer = outer /home/ubuntu/apps/splunk/etc/system/default/props.conf SEGMENTATION-raw = none /home/ubuntu/apps/splunk/etc/system/default/props.conf SEGMENTATION-standard = standard /home/ubuntu/apps/splunk/etc/system/default/props.conf SHOULD_LINEMERGE = true /home/ubuntu/apps/splunk/etc/system/default/props.conf TRANSFORMS = /home/ubuntu/apps/splunk/etc/system/default/props.conf TRUNCATE = 10000 /home/ubuntu/apps/splunk/etc/system/default/props.conf category = Application /home/ubuntu/apps/splunk/etc/system/default/props.conf description = Test Output produced by any Java 2 Enterprise Edition (J2EE) application server using log4j /home/ubuntu/apps/splunk/etc/system/default/props.conf detect_trailing_nulls = false /home/ubuntu/apps/splunk/etc/system/default/props.conf maxDist = 75 
/home/ubuntu/apps/splunk/etc/system/default/props.conf priority = /home/ubuntu/apps/splunk/etc/system/default/props.conf pulldown_type = true /home/ubuntu/apps/splunk/etc/system/default/props.conf sourcetype =   Any thoughts would be greatly appreciated.
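One possible explanation, offered as a guess: the Docker splunk log driver sends each log line as its own HEC event, and props-based line merging only applies to data that arrives as a raw stream, so the stanza above never gets a chance to merge anything. If the data can instead be routed through the raw HEC endpoint (or a file monitor), a sketch of explicit line breaking on the leading timestamp:

```
[log4j-test]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}
```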