When I use the following code to perform a query:

    service = client.connect(
        host='splunk.bart.gov',
        port='8089',
        username='userid',
        password='secrete',
    )
    query = "search index=slog_ics sourcetype=occ_mgr | table _time, ENTRY | head 3"
    query_results = service.jobs.oneshot(query)
    reader = res.ResultsReader(query_results)
    results = []
    for item in reader:
        print(item)
        results.append(item)
    print("results[1]:")
    print(results[1])

I cannot see the value of the field ENTRY in the results. ENTRY is a field defined by the sourcetype occ_mgr in my application ics_analytics. In the Splunk Web UI, in the context of the application ics_analytics, the same query does show the value of ENTRY:

    index=slog_ics sourcetype=occ_mgr | fields _time, ENTRY | head 3

with the result:

    _time                     ENTRY
    4/6/22 2:11:00.000 AM     EOR.
    4/6/22 1:48:00.000 AM     (ref 0120) T203 released ATO, (762) second delay.
    4/6/22 1:36:00.000 AM     CORE Blanket established.

What could be the root cause of the problem?
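A frequent cause (offered as a hypothesis, not confirmed by the post) is the search namespace: field extractions defined inside an app such as ics_analytics only apply when the search runs in that app's context, and `client.connect()` does not use that context unless told to. A minimal sketch reusing the names from the question; the `app` and `owner` arguments are the addition to try:

```python
import json

def connect_kwargs(host, username, password, app, owner="nobody", port=8089):
    """Build the keyword arguments for splunklib.client.connect();
    the important addition over the question's code is `app`, so the
    search runs in the namespace that defines the ENTRY extraction."""
    return {
        "host": host,
        "port": port,
        "username": username,
        "password": password,
        "app": app,
        "owner": owner,
    }

kwargs = connect_kwargs("splunk.bart.gov", "userid", "secrete", "ics_analytics")
print(json.dumps(kwargs, sort_keys=True))
# Then, against a live server:
#   service = splunklib.client.connect(**kwargs)
#   job = service.jobs.oneshot(query, output_mode="json")
#   for item in splunklib.results.JSONResultsReader(job): ...
```

Asking for `output_mode="json"` and using `JSONResultsReader` also tends to make per-field access more predictable than the legacy `ResultsReader`.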
Does anyone have a solution for a query that returns the daily event count of every index, index by index, including the ones that have ingested zero events?

    | tstats count WHERE index=* OR index=_* by index

only returns indexes that have more than 0 events.
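One commonly shared workaround, offered as an untested sketch: `eventcount summarize=false` lists indexes even when they are empty, so appending zero-count rows from it fills in the missing indexes:

```
| tstats count where index=* OR index=_* by index
| append
    [| eventcount summarize=false index=* index=_*
      | dedup index
      | eval count=0
      | fields index count]
| stats sum(count) as count by index
```

For daily counts, adding `_time span=1d` to the tstats by-clause and grouping the final stats by index and _time would be the natural extension.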
Hello, I'm trying to create a dashboard with a statistics table that shows a list of domains/hosts from 10 minutes before to 10 minutes after a user connected to a specific domain. For example: a user connected to abc.com at 12pm EST on 4/7/2022. I want to input the user's ID and host (abc.com) into text fields, and when I submit the search I want the results to show, sorted by time, all of the domains/hosts the user visited in the 10 minutes leading up to abc.com, as well as all of the domains/hosts after abc.com. I am stuck and have tried different variations of the query below using sort and desc. I've gotten results; however, they only show the specific host that was entered into the text field, not the other hosts around it. This is what I've started with and, as mentioned, I've tried altering it quite a few times:

    index=proxy userID=$user_id$ host=$host_id$ | table _time, userID, host, ip | sort host span=10m, -host span=10m

Any assistance is appreciated!
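One pattern worth trying (a sketch using the field names from the question): let a subsearch locate the visit to the target host and hand back a ±10-minute window, which the outer search then uses as its time range:

```
index=proxy userID=$user_id$
    [ search index=proxy userID=$user_id$ host=$host_id$
      | head 1
      | eval earliest=_time-600, latest=_time+600
      | return earliest latest ]
| table _time, userID, host, ip
| sort _time
```

The `return earliest latest` emits the two values as `earliest=... latest=...`, which the outer search interprets as time bounds, so all hosts in the window appear, not just the searched one.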
Hi, I am trying to send data to the HTTP Event Collector (HEC) on a free trial via Postman, but I am getting the error "SSL Error: Self signed certificate in certificate chain". After searching online, I learned that some self-signed certs are required, so I added the certs in Postman, but now I am getting another error: "SSL Error: Hostname/IP does not match certificate's altnames". Could anyone help with this issue for a HEC free trial account? I have a hunch that the free trial may only support HTTP, and that a paid cloud account is required for HTTPS. Kindly correct me if I am going in the wrong direction. Also, can anyone share certs if the free trial supports HTTPS?

Endpoint: https://prd-***.splunkcloud.com:8088/services/collector/event
Authorization: "Splunk **********"
Hello, I want to know if it's possible to upload files into Splunk Cloud through the HTTP Event Collector or in another way. Right now I have a file whose lines are events, and I'm making an HTTP request for each line to load the events into Splunk. Do you have another solution, please? Thanks!
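HEC does accept several events in one POST body (JSON event objects concatenated back to back), which avoids one request per line. A minimal sketch in Python; the sourcetype and index values are placeholders:

```python
import json

def hec_batch(lines, sourcetype="mylog", index="main"):
    """Concatenate one HEC event object per input line into a single
    request body; HEC accepts several events in one POST."""
    return "\n".join(
        json.dumps({"event": line, "sourcetype": sourcetype, "index": index})
        for line in lines
    )

body = hec_batch(["first event line", "second event line"])
print(body)
# POST this body once to https://<your-stack>:8088/services/collector/event
# with the header  Authorization: Splunk <your-token>
# (e.g. via urllib.request or curl) instead of one request per line.
```

Batching the whole file, or chunks of it, into single requests should cut the request count dramatically.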
My Splunk access token seems to have been revoked today. My admin generated a new one but I don't see it. I have read permission.
Hello, everyone! I collect script logs from light forwarders directly to indexers. The logs look like:

    0348788934="Y"; 0304394493="N"; 0874844788="Y"; etc.

When Splunk parses them automatically, I get fields like 348788934=Y, 304394493=N, and so on. I added this to props.conf on the indexers:

    [my_sourcetype]
    FIELD_DELIMETERS=;

but it is still not working. Can anybody help? Thank you
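For reference, `FIELD_DELIMETERS` is not a recognized props.conf setting for this case; delimiter-based search-time extraction is normally configured with a transforms.conf `DELIMS` stanza referenced from props.conf. An untested sketch against this data:

```
# props.conf
[my_sourcetype]
REPORT-kv_pairs = my_kv_pairs

# transforms.conf
[my_kv_pairs]
DELIMS = ";", "="
```

In `DELIMS`, the first quoted set separates the pairs and the second separates key from value. Note also that field names beginning with a digit (e.g. 0348788934) are awkward to reference in SPL and usually need single-quoting.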
How do I find the times at which events were received over the last 3 days? I want to see the times that 53 different events came in.
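If "the time events have been sent in" means the time Splunk received each event, that is stored in `_indextime`; a sketch with a placeholder index name:

```
index=your_index earliest=-3d
| eval received=strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| table _time received sourcetype source
| sort - _time
```

This puts the event's own timestamp (`_time`) next to the arrival time, so lag between the two is visible per event.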
We are trying to run some custom commands that require Cython, but Splunk's Python doesn't support it. We tried creating an Anaconda environment inside the app, just like the MLTK and Python for Scientific Computing apps, but some issues appeared regarding Anaconda symlinks. This is being discussed in another thread: https://community.splunk.com/t5/Developing-for-Splunk-Enterprise/Why-when-installing-custom-made-app-that-contains-symlinks-the/td-p/592751 Has anyone managed to run Python custom commands that require Cython?
We are using the Alert Manager app in our environment to add incident workflows to Splunk. After updating our Splunk Cloud to the latest Cloud version, we are facing an error while trying to access the Alert Manager settings page: "A custom JavaScript error caused an issue loading your dashboard. See the developer console for more details." We have already reached out to Splunk support about this. We have tried all the workarounds they mentioned for the compatibility issue of dashboards that use custom JavaScript with jQuery 3.5 or higher, but it did not help. It would be great if someone could help me fix this issue.
Hi there, I'm getting the following notice on the licensing page: "This deployment is subject to license enforcement. Search is disabled after 45 warnings over a 60-day window." I don't know what it is referring to.
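The warnings that message counts are daily license-quota violations (days on which more data was indexed than the license allows). One standard way to see how close the deployment gets to its quota, offered as an untested sketch, is the internal license usage log:

```
index=_internal source=*license_usage.log* type=Usage earliest=-60d
| timechart span=1d sum(b) as bytes_indexed
```

Comparing the daily totals against the licensed volume shows which days would have generated a warning.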
I have some doubts about updating Splunk apps.

1. Do the Splunk apps that come pre-built/packaged with Enterprise Security, such as Extreme Search, RapidDiag, and the Splunk Add-on for UEBA, automatically get updated to newer versions? I also can't find them on Splunkbase.
2. Do the apps that come packaged with Splunk show up like regular apps under the 'Manage Apps' option? Is there any way to tell, just by looking, whether an app is built into Splunk, was downloaded separately from Splunkbase, or was developed by an in-house team?
Hi all! The data I am pulling comes from nodes in multiple time zones. I want to use that time zone instead of Splunk's time field. The correct time data is already being pulled into a NodeTime field, but I cannot figure out how to use that field instead of Splunk's time field. Any ideas? TIA for the help!
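At search time, `_time` can be overwritten from another field with `strptime`; the time format string below is a guess and must be adapted to how NodeTime actually looks:

```
index=your_index
| eval _time=strptime(NodeTime, "%Y-%m-%d %H:%M:%S %z")
| timechart count
```

To fix this at index time instead, `TIME_PREFIX` and `TIME_FORMAT` in props.conf would point the timestamp extractor at the NodeTime portion of the raw event.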
Hi, when configuring dependencies between services, is it possible to filter down the entities for the dependent KPIs? For example: I have a service called "OS Performance Monitoring", and CPU Utilization is a KPI split by 80 entities. If I create another service called Application X, which has 5 entities out of the 80, and I select the KPI as a dependency, will the rolled-up value cover all 80 entities, or just the 5 configured for the application service?
Hello colleagues. We recently switched from a Splunk HF to a UF. Before this, events with sourcetype=MSWindows:2012:IIS parsed normally, but after the installation something went wrong, and the events in Splunk do not get all the fields from the logs.
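One thing worth checking, offered as a guess: IIS logs are typically parsed with indexed extractions, and when a universal forwarder sends data directly, the `INDEXED_EXTRACTIONS` setting must live in props.conf on the forwarder itself rather than downstream. A sketch with the sourcetype from the question:

```
# props.conf on the Universal Forwarder
[MSWindows:2012:IIS]
INDEXED_EXTRACTIONS = w3c
```

A heavy forwarder parses events itself, so this difference only surfaces after switching to a UF.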
I need to exclude field values that are 8 characters long or shorter. For example, in the field abc I have the values below, of which I need to exclude only browsers, files, and members, because they are 8 characters or fewer; I need to keep the other values.

abc:
browsers
files
attachment
members
auto-saved
splunk-answers
discussions

Can someone help me with this, please?
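If abc is a multivalue field (as the list suggests), `mvfilter` can drop the short values; the sketch below keeps only values longer than 8 characters:

```
| eval abc=mvfilter(len(abc) > 8)
```

If instead each event carries a single value, `| where len(abc) > 8` filters the events themselves.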
Hello,

I have 5 indexers managed by a Cluster Master. In indexes.conf (deployed as a master-app) I have the following configuration:

    [default]
    maxTotalDataSizeMB = 1000000
    frozenTimePeriodInSecs = 13824000

    [volume:hot]
    path = /hot/splunk_db/
    maxVolumeDataSizeMB = 2800000

From my understanding, the hot volume on each indexer should not exceed 2.8 TB, but the volume actually exceeded this limit and reached 2.9 TB. Can someone please assist?

Thank you
Hi, is there an app similar to https://splunkbase.splunk.com/app/4144/ for auditing changes made to settings and conf files in a clustered deployment? If there is no app, can someone recommend a report that can be run? Any help appreciated.
Hi, we have a hybrid mobile application platform called "KONY", and we have to apply Mobile EUM to the mobile applications built on the KONY platform. Unfortunately, we did not find the KONY platform among the platforms AppDynamics supports. Is the KONY platform supported by AppDynamics? If yes, we need to know the proper way to instrument our mobile app built on the KONY platform.
I am looking for a tool or method to get Splunk index/lookup usage in the system, for example to find all lookups that are not used anywhere. What is the best way to do it?
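A rough starting point, offered as an untested sketch (the rex pattern assumes lookups are invoked as `| lookup <name> ...` in search strings): list all lookup definitions via REST, then separately mine the audit index for lookup invocations, and compare the two lists:

```
| rest /servicesNS/-/-/data/transforms/lookups splunk_server=local
| table title eai:acl.app
```

```
index=_audit action=search earliest=-30d
| rex field=search max_match=0 "lookup\s+(?<lookup_name>[\w\-]+)"
| stats count by lookup_name
```

Lookups that appear in the REST listing but never in the audit results over a representative time window are candidates for being unused (automatic lookups configured in props.conf would need a separate check).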