All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi Experts, I have a search query that gives me a result table like below:

Employee  Salary
A         1000
B         2000
C         0

How can we trigger an alert when one of our employees' salaries equals zero or a specific number?
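A possible approach (a sketch; it assumes the base search already produces the Employee/Salary table shown) is to filter for the target value and alert when any rows come back:

```
<your base search>
| where Salary = 0
```

Saved as an alert with the trigger condition "Number of Results > 0", this fires whenever any employee's salary is zero; to watch for a specific number as well, extend the filter, e.g. `| where Salary = 0 OR Salary = 2000`.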
Hello! I noticed that one of my scheduled saved searches randomly refuses to return results. I can run the search at any point from the search bar and get data, even immediately after the scheduled saved search returns 0. Here are the results from when it was scheduled at 2- and 5-minute intervals: randomly it will conclude with 0 results after a second, with no errors. Why would it do this? How can I ensure that results are produced consistently each time? Thanks! Andrew
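A frequent cause of this symptom is indexing lag: if the scheduled window ends at the moment the search runs, the newest events may not be searchable yet, while an ad-hoc run a minute later finds them. A hedged sketch (the offsets are illustrative) is to shift the saved search's time range back:

```
earliest = -10m@m
latest   = -5m@m
```

With a 5-minute schedule these windows still tile with no gaps or overlaps, but give each event five extra minutes to be indexed before it is searched.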
Hi. I created a simple file.bat, which I placed into ./etc/apps/appname/bin. I created commands.conf in ./etc/apps/appname/local with this content:

[sps_export_vitocharge]
filename = sps_export_vitocharge.bat
chunked = true

I restarted the Splunk instance and tried to run this search:

my_search | outputcsv sps_export_vitocharge | script sps_export_vitocharge

and I receive the error message: Error: External search command <file.bat> returned error code 1. The file.bat simply moves the generated CSV file, and it works from a shell window. Can anybody help me, please?
Hello everyone, I am trying to remove the string "0#.w|" with a transforms.conf file. To be sure that my regex works, I tried it with the rex command:

| rex field=cs_username "(^[^|]+\|(?<cs_username>[^|]+)$)"

I just want to overwrite the field cs_username without this string. It works! Now I want to put this regex in transforms.conf and props.conf. I am not sure that I can do this, but here is what I am trying:

Transforms.conf
[username]
SOURCE_KEY = cs_username
REGEX = ^[^|]+\|(?<cs_username>[^|]+)$
REPEAT_MATCH = true
MV_ADD = true

Props.conf
TRANFORMS-mynewusername = username

I reload on the indexer using the command | extract reload=true, but apparently it is not working. That is why I am asking: is it possible to use a field in transforms.conf the way I did with the rex command in the GUI? Thank you for your answers.
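One thing worth noting: TRANSFORMS- stanzas in props.conf are applied at index time, so they cannot rewrite a search-time field the way rex does; the search-time equivalent is a REPORT- stanza. A sketch, assuming the events' sourcetype is my_sourcetype (an illustrative name):

```
# props.conf
[my_sourcetype]
REPORT-username = username

# transforms.conf
[username]
SOURCE_KEY = cs_username
REGEX = ^[^|]+\|(?<cs_username>[^|]+)$
```

After deploying this to the search head, | extract reload=true (or a restart) should pick up the change.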
Hi Splunk team, the image below shows information about my data model: Summary Range 31622400 second(s). But why, when I search over a period in May, does the result return 0 events? How can I fix it? Thanks all!
We are trying to run bidirectional ticketing (ServiceNow) and are experiencing some issues. ITSI v4.3.3; the data models are working just fine as far as I know. The correlation search uses snow_hash.csv as input and output, but the file is missing. Anyone with a quick fix? Should I just create it manually? Does anyone know when it gets created? Error message from the job output when running the correlation search manually: [subsearch]: File '/opt/splunk/var/run/splunk/csv/snow_hash.csv' could not be opened for reading.
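If the search only fails because the CSV has never been written, one hedged workaround is to seed it once with the expected header so the subsearch can open it (the column name here is an assumption; check which columns the correlation search actually expects):

```
| makeresults
| eval hash = ""
| fields hash
| outputcsv snow_hash.csv
```

outputcsv writes to $SPLUNK_HOME/var/run/splunk/csv/, which matches the path in the error message.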
Hi all, is it possible to enable analytics with a 15-day trial license? Thanks in advance.
Hi, does somebody have a working example of how to create a saved search using the REST API with XML? Thanks, Max
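The saved/searches REST endpoint accepts POST parameters and returns the created entry as Atom XML. A minimal sketch (the host, credentials, and the admin/search namespace are assumptions):

```
curl -k -u admin:changeme \
    https://localhost:8089/servicesNS/admin/search/saved/searches \
    -d name=my_saved_search \
    --data-urlencode search="index=_internal | head 10"
```

The XML response contains the new saved search's full entry; scheduling fields (cron_schedule, is_scheduled, and so on) can be passed as additional -d parameters.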
Hi, I have a main query which returns the 4 columns below: rule, result, name, department. Now I have to add another query as a subsearch, where I want to get an address column for all the names returned from the first result. I have a fullname column in the subsearch index, which is the same as name from the first query. How can I achieve this?
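One sketch (the index name is an assumption): rename fullname to name inside the subsearch so the join key matches, then join:

```
<main search returning rule, result, name, department>
| join type=left name
    [ search index=<subsearch_index>
      | rename fullname AS name
      | fields name address ]
```

If the address data can live in a lookup file, | lookup is usually faster and avoids the default 50,000-row cap on join subsearches.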
Hi All, I am urgently looking for help. I have one field, object_name, which is present in lookup X1.csv and has values like:

object_name
GRM MGT
Shortfirer Appointment
Blasting Security Register Test
Morning Schedule

The other lookup (X2.csv) also has the column object_name, with values like:

Appointment
Schedule
Blasting

I have to match the two columns and return results wherever object_name contains a *keyword* of object_name from the second lookup. The field values can be in upper case, lower case, or a combination.
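One hedged sketch: build a cartesian product of the two lookups with a dummy join key, then keep rows where the lowercased object_name contains the lowercased keyword:

```
| inputlookup X1.csv
| eval joiner = 1
| join max=0 joiner
    [ | inputlookup X2.csv
      | rename object_name AS keyword
      | eval joiner = 1 ]
| where like(lower(object_name), "%".lower(keyword)."%")
| fields object_name keyword
```

max=0 keeps every matching row rather than just the first, and lower() on both sides handles the mixed-case values. This works for small lookups; very large ones would hit subsearch limits.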
Can someone please explain the importance of the Events.conf & Tags.conf configuration in Splunk? We also use SVN Tortoise. It would be better if someone could explain with a use case.
Hi, I am trying to install a forwarder on RHEL 7 and add a JBoss log path to forward to the Splunk server, but I now have a performance issue. 1. What type of solution can be applied here? 2. Tune the forwarder? 3. Set an interval so the forwarder sends data to the server only periodically, e.g., every 60 min? 4. Try to send data to the Splunk server through UDP? Any ideas?
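If the performance issue is bandwidth consumed by the forwarder, one knob worth knowing (a sketch; the value is illustrative) is the throughput cap in limits.conf on the forwarder:

```
[thruput]
maxKBps = 256
```

A universal forwarder defaults to a 256 KB/s cap; lowering it throttles the forwarder further, raising it (or setting 0 for unlimited) does the opposite. Batching every 60 minutes or switching to UDP is rarely the right fix, since the TCP output already streams with acknowledgement and retry.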
Hi, I installed the "Smart Exporter" app from Splunkbase, and exporting PDFs works properly. When I try to schedule an export, the error message below keeps appearing even though I set up the mail host: "Please finish setting up the smart export application (restart may be required)". The host is an application mail relay, hence an ID and password are not required to send emails, but this app keeps asking for an ID and password. Based on the post below, I commented out the ID, port, and password in service.py; however, the same error message still appears. Could someone help fix this issue? Thanks in advance! Regards, Purush
I need some help understanding how the _time and timestamp default fields are extracted. The raw event and the field values extracted for it are shown below. As can clearly be seen, nothing in the event obviously relates to the value extracted into the _time field. Any pointers would be much appreciated.

Fields extracted:
@timestamp: 2020-06-22T15:17:34.892576+00:00
_time:      2020-06-17 17:54:50
timestamp:  2020-06-23 01:17:34.888

Raw event:
{"docker":{"container_id":"c0cb3bd3563f5f01133bcc496479b77b6c72bf898f24612ad7634b50a1749301"},"test":{"container_name":"anything","namespace_name":"test10-project","pod_name":"anything-1-w44fj","pod_id":"9289218b-b1cc-11ea-abcd-005056a44ead","labels":{"app":"anything","deployment":"anything-1","deploymentconfig":"anything"},"host":"ost-clb-osp-app-c02.linux.ostravam.corp.telstra.com","master_url":"https://test.default.svc.cluster.local","namespace_id":"0fbe0d11-cade-11e9-a562-005056a44ead"},"message":"2020-06-23 01:17:34.888 DEBUG --- [nio-8090-exec-5] o.s.web.servlet.DispatcherServlet : GET \"/healthcheck\", parameters={}\n","level":"info","hostname":"xxxxxxxxxxxxx","pipeline_metadata":{"collector":{"ipaddr4":"10.130.5.172","ipaddr6":"fe80::823:d3ff:fe3f:bf2d","inputname":"fluent-plugin-systemd","name":"fluentd","received_at":"2020-06-22T15:17:35.076698+00:00","version":"0.12.43 1.6.0"}},"@timestamp":"2020-06-22T15:17:34.892576+00:00","viaq_index_name":"project.test10-project.0fbe0d11-cade-11e9-a562-005056a44ead.2020.06.22","viaq_msg_id":"YzY0NWI1ZGItMjc5Ni00YWI2LWI4OWUtMWZkODU1NTRlNjdj","forwarded_by":"standalone-fluentd-splunk.openshift-logging.svc.cluster.local","source_component":"testsource"}
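If the intent is for _time to follow the @timestamp field in this JSON, the timestamp extraction can be pinned in props.conf on the indexer or heavy forwarder (the sourcetype name is an assumption):

```
[my_json_sourcetype]
TIME_PREFIX = \"@timestamp\":\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
MAX_TIMESTAMP_LOOKAHEAD = 40
```

Without a TIME_PREFIX, Splunk scans the event for the first thing that looks like a timestamp, which can produce _time values that match none of the obvious fields.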
Hi Team, we are using Confluence and are planning to use our Splunk dashboard URL as an RSS feed URL in Confluence. Could you please advise whether I can convert my Splunk dashboard/report URL into an RSS feed URL, and if so, how? Thanks!
We see inconsistent responses in the UI (Settings --> Users and Authentication --> Access control --> Users). Some users are not found even though we know they recently accessed the platform. This makes it challenging to triage and review what role is being inherited by a specific user. The response and list of users can vary between search head cluster nodes that all point to the same LDAP environment.
We have a few very old machines running UF version 6.2.X. [No, upgrading the UF/OS cannot happen.] The certificate recently expired and we are attempting to rectify the issue. My question is: does the certificate only have to be reissued on the indexers (I have read this in previous answers)? Or do I need to reissue certs to all the expired UFs as well? If the latter, what is the best way to go about it? Ship the certs with a new outputs.conf pointing to the new certs resident in that app? Or use SCCM and our in-house CA to issue new certs and set outputs.conf to point to that cert? I appreciate any help!
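If new client certs do get shipped in an app, a sketch of what the deployed outputs.conf could contain (attribute names per the 6.x outputs.conf spec; the group name and paths are assumptions):

```
[tcpout:primary_indexers]
server = idx1.example.com:9997
sslCertPath = $SPLUNK_HOME/etc/apps/new_certs/default/client.pem
sslRootCAPath = $SPLUNK_HOME/etc/apps/new_certs/default/ca.pem
sslPassword = <cert key password>
```

Whether the UF-side certs need reissuing at all depends on whether the indexers verify client certificates (requireClientCert); if they don't, only the indexer-side certs matter.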
Hi, is there a way to use environment variables within transforms.conf? I am trying to override the hostname to that of the forwarder which receives the data. I'm using the following; however, it doesn't seem to be working. The conf files are distributed from a deployment server, so I don't want to hard-code the host names.

[hostoverride]
DEST_KEY = MetaData:Host
REGEX = .
FORMAT = host::$HOSTNAME
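Environment variables like $HOSTNAME are not expanded in a transforms.conf FORMAT, so that stanza writes the literal string. If the goal is simply for each forwarder to report its own hostname, a sketch: set it in inputs.conf on the forwarders, where the special value $decideOnStartup expands to the machine's hostname when splunkd starts:

```
# inputs.conf (deployed to the forwarders)
[default]
host = $decideOnStartup
```

This keeps the deployed app identical on every forwarder while still producing per-host values.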
I have been using the tstats command for a while. Now we want tstats to limit the number of records it returns, as we are using Kubernetes and there are far too many events. I have looked around and don't see a limit option. As a workaround I use | head 100 to limit the output, but that won't stop the main search from processing everything.
Hi, I'm using the following search to monitor disk space. I have 2 partitions, drives D and E, but I am only getting results for drive D. I would have expected results for both. Any thoughts are appreciated. Thanks.

| rest splunk_server=Splunk01 /services/server/status/partitions-space
| eval free = if(isnotnull(available), available, free)
| eval usage = round((capacity - free) / 1024, 2)
| eval capacity = round(capacity / 1024, 2)
| eval compare_usage = usage." / ".capacity
| eval pct_usage = round(usage / capacity * 100, 2)
| stats first(fs_type) as fs_type first(compare_usage) as compare_usage first(pct_usage) as pct_usage by mount_point
| rename mount_point as "Mount Point", fs_type as "File System Type", compare_usage as "Disk Usage (GB)", capacity as "Capacity (GB)", pct_usage as "Disk Usage (%)"