All Topics



The splunk service is using offsite (external) NTP queries. ntp.domain.com is our internal NTP domain, and we have configured it on our server in /etc/ntp.conf. We can see that this NTP configuration is read by the time.sh script in the Splunk_TA_nix app, and on further verification the internal NTP domain "ntp.domain.com" is indeed configured in /etc/ntp.conf. But Splunk is still using offsite NTP queries. Any idea why?

[splunk@hostname bin]$ ls -lrt | grep -ir ntp
time.sh:if [ -f /etc/ntp.conf ] ; then
============================================
[splunk@hostname bin]$ cat /etc/ntp.conf | grep ntp.domain.com
restrict -6 ::1
pool ntp.domain.com iburst
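One quick sanity check, independent of what time.sh reads from the config (a sketch; ntpq ships with the ntp package, not with Splunk), is to ask the NTP daemon which servers it is actually polling:

# The "remote" column lists the peers currently being polled; if offsite
# addresses appear here, the pool directive is being overridden or
# supplemented at the daemon level rather than by Splunk.
ntpq -p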
Hi guys! I have a dashboard with a text input that is connected to a token ($tk1$), and a "Submit" button. What I want is, when clicking Submit, to read the typed text from the input and run a | table... | collect... Here is my dashboard code:

<form version="1.1">
  <label>LAB2</label>
  <fieldset submitButton="true" autoRun="false">
    <input type="text" token="tk3">
      <label>Comments</label>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=lab sourcetype=lab2 A=$TK1$ B=$TK2$ | eval C="$tk3$" | table A B C</query>
        </search>
      </table>
    </panel>
  </row>
</form>
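A minimal sketch of the collect step being described (the summary index name my_summary is a placeholder; note also that the input defines token tk3, while the query references $TK1$ and $TK2$, which no input on this dashboard sets):

index=lab sourcetype=lab2
| eval C="$tk3$"
| table A B C
| collect index=my_summary

Since collect writes on every run of the search, gating the panel behind the Submit button (autoRun="false") means the write only happens after a submission.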
This is regarding the integration of a JavaScript scripted data input from an iPad into Splunk HEC for log onboarding, but Splunk is not receiving any payload. Prerequisites: a token was generated in Splunk and shared with the app owner, and below is the payload snippet from the iPad's JavaScript input.
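When HEC receives a malformed payload or a bad token it usually logs the rejection in splunkd.log; a quick check from the Splunk side (a sketch; HttpInputDataHandler is the component splunkd uses for HEC parsing and token errors):

index=_internal sourcetype=splunkd component=HttpInputDataHandler log_level=ERROR

If this returns nothing while the iPad is sending, the payload most likely never reaches the HEC port at all.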
Hi, I have a requirement to create an Availability % dashboard for Synthetic applications, but the calculation needs to include the Health Rule violation count as well. Without the Health Rule violation count I am able to get the Availability % with a Metric Widget in the dashboard, but I couldn't get the Health Rule violation metric into a Metric Widget. Is it possible to get the Health Rule violation count into a report calculation? Is there any way to achieve this?
I have a dashboard that I'm working on that requires me to conditionally format table rows based on a field. The dashboard currently has 6 identical tables (in terms of the column names) and I need to be able to apply the JavaScript to each of them. I am aware that I could just name each table in the JavaScript; however, ideally I'd like not to have to do that, as I'm going to be adding additional tables to the dashboard over time. As this will live on Cloud, I don't want to have to keep uploading the app every time I make a change.

Is there a way to replace the get('highlight01') call with something more generic, or even loop that aspect of the code for all tables on the dashboard?

requirejs([
    // It's important that these don't end in ".js" per RJS convention
    '../app/TA-lgw_images_and-_files/libs/jquery-3.6.0-umd-min',
    '../app/TA-lgw_images_and-_files/libs/underscore-1.6.0-umd-min',
    '../app/TA-lgw_images_and-_files/theme_utils',
    'splunkjs/mvc',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!'
], function($, _, themeUtils, mvc, TableView) {
    // Row Coloring Example with custom, client-side range interpretation
    var CustomRangeRenderer = TableView.BaseCellRenderer.extend({
        canRender: function(cell) {
            // Enable this custom cell renderer
            return _(['target_met']).contains(cell.field);
        },
        render: function($td, cell) {
            // Add a class to the cell based on the returned value
            var value = parseFloat(cell.value);
            // Apply interpretation for number of historical searches
            if (cell.field === 'target_met') {
                if (value > 0) {
                    $td.addClass('range-cell').addClass('range-elevated');
                }
            }
            // Update the cell content
            $td.text(value.toFixed(2)).addClass('numeric');
        }
    });
    mvc.Components.get('highlight01').getVisualization(function(tableView) {
        tableView.on('rendered', function() {
            // Apply class of the cells to the parent row in order to color the whole row
            tableView.$el.find('td.range-cell').each(function() {
                $(this).parents('tr').addClass(this.className);
            });
        });
        // Add custom cell renderer, the table will re-render automatically.
        tableView.addCellRenderer(new CustomRangeRenderer());
    });
});

Example dashboard XML you can use:

<dashboard version="1.1" script="dpi_ops_evaluation.js" stylesheet="dpi_ops_evaluation.css">
  <label>DPI TEST</label>
  <row>
    <panel>
      <table id="highlight01">
        <search>
          <query>| makeresults count=6 | streamstats count as Ref | eval target_met = random() % 2, measure= "Measure ".Ref | fields - _time</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table id="highlight02">
        <search>
          <query>| makeresults count=6 | streamstats count as Ref | eval target_met = random() % 2, measure= "Measure ".Ref | fields - _time</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table id="highlight03">
        <search>
          <query>| makeresults count=6 | streamstats count as Ref | eval target_met = random() % 2, measure= "Measure ".Ref | fields - _time</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table id="highlight04">
        <search>
          <query>| makeresults count=6 | streamstats count as Ref | eval target_met = random() % 2, measure= "Measure ".Ref | fields - _time</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table id="highlight05">
        <search>
          <query>| makeresults count=6 | streamstats count as Ref | eval target_met = random() % 2, measure= "Measure ".Ref | fields - _time</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table id="highlight06">
        <search>
          <query>| makeresults count=6 | streamstats count as Ref | eval target_met = random() % 2, measure= "Measure ".Ref | fields - _time</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</dashboard>
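One generic approach (a sketch; it assumes every table you want colored has an id starting with "highlight", and uses the registry's getInstances() to enumerate components) is to replace the single mvc.Components.get('highlight01') block with a loop over all registered components:

// Iterate over every component on the dashboard and attach the renderer
// to anything that looks like one of the highlight tables.
_(mvc.Components.getInstances()).each(function(component) {
    // The "highlight" id prefix is an assumed naming convention; adjust it
    // to whatever convention the dashboard actually uses.
    if (component.id && component.id.indexOf('highlight') === 0 &&
            typeof component.getVisualization === 'function') {
        component.getVisualization(function(tableView) {
            tableView.on('rendered', function() {
                tableView.$el.find('td.range-cell').each(function() {
                    $(this).parents('tr').addClass(this.className);
                });
            });
            tableView.addCellRenderer(new CustomRangeRenderer());
        });
    }
});

New tables then pick up the formatting automatically, as long as they follow the id convention.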
Hi, can anybody help, please? Problem: In a dashboard I have a label. If I write something in the label <number> and press Enter, I would like to trigger an action: write something to a summary index.

Label: serial_num
Index: index_sum
Fields to be saved in the summary index: $Label$, <actual_time>, identifier
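A minimal sketch of the write itself, assuming the input sets a token named serial_num and the index index_sum already exists (identifier is left as a literal placeholder):

| makeresults
| eval serial_num="$serial_num$", actual_time=strftime(now(), "%Y-%m-%d %H:%M:%S"), identifier="identifier"
| table serial_num actual_time identifier
| collect index=index_sum

Putting this in a search that depends on the token means it only runs once the token is submitted.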
How to search the value of the "Dst_IP" field from the "ASA" index against the "indicator" field in the "otx" index, and display the "scrip" field from the "ASA" index?
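A sketch using a subsearch (index and field names as written in the post; it assumes the otx indicator values match Dst_IP values exactly):

index=ASA
    [ search index=otx
      | fields indicator
      | rename indicator as Dst_IP
      | format ]
| table Dst_IP scrip

The subsearch turns the otx indicators into an OR of Dst_IP=... conditions that is applied to the ASA events.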
I'm looking for a way to search for free text after a join. It is easy when the field is known. For instance, there is a join with left L and right R, and the value of variable $id$ can be in one of the corresponding fields (in this example, both fields have the same name):

| search L.id=$id$ OR R.id=$id$

But how to search for something like free text when this text can be a substring in any field of either of the two parts? I don't want to write a check for every field, so I tried things with "_raw" or "L._raw": nothing worked.
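One sketch that avoids naming every field: after the join, iterate over all fields with foreach and test each one (the literal "mytext" stands in for the free text; like() is case-sensitive and multivalue fields are only partially handled, so treat this as a starting point):

| foreach *
    [ eval free_text_match=if(like('<<FIELD>>', "%mytext%"), 1, coalesce(free_text_match, 0)) ]
| where free_text_match=1

This works because _raw usually does not survive a join, whereas foreach * sees whatever fields the join actually produced, including the L.* and R.* ones.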
Hi everyone. I am trying to create historical capacity data over some servers. I have one search that will return all the data I need. This search runs with a time picker of 14 months, and the last part (| search Customer="*****") is not part of the scheduled report. Run ad hoc, it returns 46 servers as expected. Then, when I load the search results later on to create dashboards, it only returns 23 servers... The fact that it returns SOME of the servers but not all is confusing me. I have triple-checked that the Customer="***" filter is identical in both searches. Does anybody have ideas? It makes no sense to me.
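If the dashboards pull the scheduled report's saved results, it can help to inspect the artifact directly and count what was actually stored (a sketch; the savedsearch reference and the server field name are placeholders):

| loadjob savedsearch="owner:app:Your_Capacity_Report"
| stats dc(host) as distinct_servers

If this already shows 23, the loss happens at scheduling time (for example a different effective time range, or the permissions of the user the report runs as), not in the dashboard.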
Hi,

Whenever I try to update ANY app from the Splunk Enterprise web GUI, an error appears. I went to splunkd.log and found 3 log entries potentially relevant to this issue:

1. 03-22-2023 14:31:17.337 +0000 WARN LocalAppsAdminHandler [298988 TcpChannelThread] - Using deprecated capabilities for write: admin_all_objects or edit_local_apps. See enable_install_apps in limits.conf

enable_install_apps is set to "false", but I checked my role's capabilities and I have install_apps, admin_all_objects and edit_local_apps. Besides, I never had any problem before with this configuration. I also checked the Splunk docs (https://docs.splunk.com/Documentation/Splunk/9.0.4/Admin/Limitsconf) and these capabilities don't seem to be deprecated.

2. 03-22-2023 14:31:17.695 +0000 ERROR X509 [298988 TcpChannelThread] - X509 certificate (CN=splunkbase.splunk.com,O=Splunk Inc.,L=San Francisco,ST=California,C=US) common name (splunkbase.splunk.com) did not match any allowed names (apps.splunk.com,cdn.apps.splunk.com)

I followed this post https://community.splunk.com/t5/Installation/ERROR-X509-X509-certificate/td-p/367804 and checked my inputs, but didn't find these parameters.

3. 03-22-2023 14:31:18.020 +0000 ERROR ApplicationUpdater [298988 TcpChannelThread] - Update file /opt/splunk/var/run/715e039c578d4189.tar.gz is not the correct size: expected 1555199 but got 1555197

I can't find any information about this last log entry. I guess I could try to uninstall and reinstall the app, but since this error happens with ANY app I try to update, the issue probably isn't with one app specifically, so that would likely be useless. Any other suggestions? Thanks
We are using OpenShift 4.11.27 and are now looking to forward OpenShift logs to Splunk.

We made the following changes on the OpenShift side to configure Splunk:

Installed the cluster-logging and elasticsearch operators into OpenShift:

$ oc get csv -n openshift-logging
NAME                            DISPLAY                            VERSION   REPLACES                        PHASE
cluster-logging.v5.6.3          Red Hat OpenShift Logging          5.6.3     cluster-logging.v5.6.2          Succeeded
elasticsearch-operator.v5.6.3   OpenShift Elasticsearch Operator   5.6.3     elasticsearch-operator.v5.6.2   Succeeded

Created the secret vector-splunk-secret using the below command:

$ oc -n openshift-logging create secret generic vector-splunk-secret --from-literal hecToken=<HEC_Token>

Created a ClusterLogForwarder as below:

---
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogForwarder"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  outputs:
    - name: splunk-receiver
      secret:
        name: vector-splunk-secret
      type: splunk
      url: http://splunk-hec.amosirelanddev.amosonline.io:8000
  pipelines:
    - inputRefs:
        - application
        - infrastructure
      name:
      outputRefs:
        - splunk-receiver

Updated the cluster logging operator to use vector instead of fluentd:

$ oc edit ClusterLogging instance -n openshift-logging

Splunk setup changes: Splunk was installed on a VM with the following steps:

wget https://download.splunk.com/products/splunk/releases/8.0.4/linux/splunk-8.0.4-767223ac207f-linux-2.6-x86_64.rpm
sudo rpm -ivh splunk-8.0.4-767223ac207f-linux-2.6-x86_64.rpm

Created two new indexes under Settings > Indexes > New Index:
openshift (events)
openshift-matrix (metrics)

Enabled HEC (HTTP Event Collector): Settings > Data Inputs > HTTP Event Collector > Global Settings > Default Index as "Default" > Save.
Created a new HEC token: Settings > Data Inputs > HTTP Event Collector > New Token > Name as "openshift" > Next (Input Settings: add the allowed indexes listed above) > Review > Submit. We noted the token value and used it for the secret above.

We are searching from New Search with "index= openshift" but not getting any results. Where can we see the logs on the Splunk dashboard, or if we are missing something, please let us know.

Regards, Suchita Deshmukh
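Two details in the post are worth double-checking. First, the forwarder URL points at port 8000, which is Splunk Web; HEC listens on 8088 by default, over HTTPS unless SSL was disabled. Second, a direct test of the collector rules out everything on the OpenShift side (a sketch; hostname, token, and index are taken from the post):

# Send one test event straight to HEC; a healthy endpoint answers
# with {"text":"Success","code":0}.
curl -k "https://splunk-hec.amosirelanddev.amosonline.io:8088/services/collector/event" \
  -H "Authorization: Splunk <HEC_Token>" \
  -d '{"event": "hec connectivity test", "index": "openshift"}'

If the test event shows up under index=openshift but the forwarded logs still don't, the issue is on the ClusterLogForwarder side.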
Hi team, please help me out with a cron job: a cron expression for scheduling an alert on the 1st and 3rd Monday of every month. Thanks in advance!
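Standard five-field cron cannot express "1st and 3rd Monday" directly, because when both the day-of-month and day-of-week fields are restricted, cron fires when either one matches. A common workaround (a sketch; <your alert search> is a placeholder): schedule the alert for every Monday at 09:00 with cron_schedule 0 9 * * 1, then let the search itself discard runs outside the 1st-7th (first Monday) and 15th-21st (third Monday) of the month:

<your alert search>
| eval dom=tonumber(strftime(now(), "%d"))
| where dom<=7 OR (dom>=15 AND dom<=21)

The alert then produces results, and therefore triggers, only on the 1st and 3rd Monday.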
Hello All, I have been able to create a table that lists the top users that have been uploading files the most to cloud storage services, for a certain time range as set in the shared time picker (time range: last month), with the following queries:

index=proxy sourcetype="XXX" filter_category="File_Storage/Sharing"
| eval end_time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| eval bytes_in=bytes_in/1024/1024/1024
| eval bytes_in=round(bytes_in, 2)
| table end_time,user,src,src_remarks01,url,bytes_in
| rename "end_time" as "Access date and time", "user" as "Username", "src" as "IP address", "src_remarks01" as "Asset information", "url" as "FQDN", "bytes_in" as "BytesIn(GB)"
| sort - BytesIn(GB)
| head 10

The result of the above search is as follows (for example):

Access date and time   Username   IP address   Asset information   FQDN              BytesIn(GB)
2023-02-20 03:04:05    aa         X.X.X.X      mmm                 box.com           3.5
2023-02-21 06:07:08    bb         Y.Y.Y.Y      nnn                 firestorage.com   1.3
2023-02-22 09:10:11    cc         Z.Z.Z.Z      lll                 onedrive.com      0.3
...

Now, I am trying to get the number of (file) uploads in the last month for each user corresponding to each FQDN in the result above. However, I still cannot build a correct search for it; this is my attempt using a subsearch:

index=proxy sourcetype="XXX" filter_category="File_Storage/Sharing"
    [ search index=proxy sourcetype="XXX" filter_category="File_Storage/Sharing"
      | eval end_time=strftime(_time, "%Y-%m-%d %H:%M:%S")
      | eval bytes_in=bytes_in/1024/1024/1024
      | eval bytes_in=round(bytes_in, 2)
      | table end_time,user,src,src_remarks01,url,bytes_in
      | sort - bytes_in
      | head 10
      | fields user url
      | rename user as username, url as FQDN ]
| where bytes_in>0
| stats count sum(bytes_in) as Number_File_Uploads by username FQDN
| table end_time,username,src,src_remarks01,FQDN,bytes_in,Number_File_Uploads
| rename "end_time" as "Access date and time", "src" as "IP address", "src_remarks01" as "Asset information", "bytes_in" as "BytesIn(GB)"

As the result, I would like a "Number of uploads" column to be added at the end of the first table, like this:

Access date and time   Username   IP address   Asset information   FQDN              BytesIn(GB)   Number of uploads (times)
2023-02-20 03:04:05    aa         X.X.X.X      mmm                 box.com           3.5           10
2023-02-21 06:07:08    bb         Y.Y.Y.Y      nnn                 firestorage.com   1.3           20
2023-02-22 09:10:11    cc         Z.Z.Z.Z      lll                 onedrive.com      0.3           5
...

Does anyone have any idea on the search queries that I am trying to build? Many thanks.
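A single-pass alternative that may be simpler than the subsearch (a sketch; it assumes each event represents one upload, and ranks the top 10 by per-user/FQDN total rather than by the single largest transfer):

index=proxy sourcetype="XXX" filter_category="File_Storage/Sharing"
| eval bytes_in_gb=round(bytes_in/1024/1024/1024, 2)
| stats latest(_time) as last_access latest(src) as src latest(src_remarks01) as src_remarks01 sum(bytes_in_gb) as total_gb count as uploads by user url
| sort - total_gb
| head 10
| eval end_time=strftime(last_access, "%Y-%m-%d %H:%M:%S")
| table end_time user src src_remarks01 url total_gb uploads
| rename end_time as "Access date and time", user as Username, src as "IP address", src_remarks01 as "Asset information", url as FQDN, total_gb as "BytesIn(GB)", uploads as "Number of uploads (times)"

Because everything is computed in one stats pass per user and url, there is no need to pass fields back from a subsearch at all.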
I have a table with _time, host and source. The hostnames are different. I need an email alert to be triggered separately for each hostname.
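One way to do this with a single saved search (a sketch of the relevant savedsearches.conf keys only; the stanza name, search, and address are placeholders) is to turn off digest mode so the email action fires once per result row, then put the per-row host into the message with the $result.*$ tokens:

[email_alert_per_host]
search = index=main | stats latest(_time) as last_seen by host source
alert.digest_mode = false
action.email = 1
action.email.to = you@example.com
action.email.subject = Alert triggered for $result.host$

With digest mode off, a table with one row per host yields one email per host. The same options are available in the UI when editing the alert's trigger conditions ("For each result").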
We want to set the default TZ to SGT for a particular Search Head, and that SH is in the EDT TZ. We have already applied the TZ setting in props settings at the master for that index, so users can view the related events correctly when it is pushed. Now, the application team wants SGT as the default in preferences settings, so whenever any query is run against the index it shows in the SGT TZ; as can be seen in the sample events, this is not coming through as expected. Here are the btool results for the SH:

-bash-4.2$ /opt/splunk/splunk_sas/bin/splunk btool --debug user-prefs list
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf  [default]
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf  tz = Asia/Hong_Kong
/opt/splunk/splunk_sas/etc/apps/splunk_instrumentation/local/user-prefs.conf  [general]
/opt/splunk/splunk_sas/etc/apps/splunk_instrumentation/local/user-prefs.conf  dismissedInstrumentationOptInVersion = 4
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf  hideInstrumentationOptInModal = 1
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf  notification_python_3_impact = false
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf  render_version_messages = 0
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf  search_assistant = compact
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf  search_auto_format = 0
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf  search_line_numbers = 0
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf  search_syntax_highlighting = light
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf  search_use_advanced_editor = 1
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf  theme = enterprise
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf  tz = GMT
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf  [general_default]
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf  appOrder = search
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf  default_earliest_time = -24h@h
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf  default_latest_time = now
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf  default_namespace = $default
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf  hideInstrumentationOptInModal = 0
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf  notification_noah_upgrade = true
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf  notification_python_2_removal = false
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf  notification_python_3_impact = false
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf  showWhatsNew = 1
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf  tz = Asia/Hong_Kong
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf  [role_app_splunk_admin]
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf  default_namespace = search
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf  tz = Asia/Hong_Kong
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf  [role_app_splunk_api]
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf  default_namespace = search
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf  tz = Asia/Hong_Kong
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf  [role_app_splunk_***]
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf  default_namespace = search
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf  tz = Asia/Hong_Kong
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf  [role_app_splunk_infra]
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf  default_namespace = search
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf  tz = Asia/Hong_Kong
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf  [role_app_splunk_power]
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf  default_namespace = search
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf  tz = Asia/Hong_Kong
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf  [role_general]
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf  default_namespace = search
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf  tz = Asia/Hong_Kong
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf  [role_general_default]
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf  appOrder = search
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf  default_namespace = search
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf  tz = Asia/Hong_Kong
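Based on the output above, the default timezone is being set in etc/system/local, which normally has the highest precedence. A minimal sketch of what the intended setting would look like (Asia/Singapore is the IANA zone name for SGT; Asia/Hong_Kong shares the same UTC+8 offset but displays as HKT):

# etc/system/local/user-prefs.conf
[general]
tz = Asia/Singapore

Also note that each user's own preference (Account Settings > Time zone) is stored per user and overrides these defaults once set, which can make individual sessions look different from the configured default.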
Can someone please help me in extracting the field Specific_DL_Testing from the below sample log?

instance of the "\Specific_DL_Testing" task.

The output should be Specific_DL_Testing.
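A sketch with rex (the field name task_name and the trailing ltrim, which strips the leading backslash, are choices made for this example):

| rex "instance of the \"(?<task_name>[^\"]+)\" task"
| eval task_name=ltrim(task_name, "\\")

The rex captures \Specific_DL_Testing between the quotes, and the ltrim removes the backslash, leaving Specific_DL_Testing.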
Hi Team, Does AppDynamics support UWP application integration?
Hello, how can I update/change my display name on the Splunk website (My Dashboard panel) and also in the education panel (My Training)?
Hello team, I am getting alert emails from my Gmail ID; I want them to come from splunk@splunk.abc.com. What needs to be done for this?
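The sender address comes from Splunk's email settings rather than from the alert itself. A sketch of the config-file equivalent (the UI path is Settings > Server settings > Email settings; the address is the one from the post):

# etc/system/local/alert_actions.conf
[email]
from = splunk@splunk.abc.com

Note that the configured mail server must also be willing to send or relay mail as that address.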
Hi Team, I have logs coming from certain nodes and clusters. How can I detect if the logs go missing from even one of the clusters? The nodes and clusters are under the field name source. For example, I have source = logs/node*c*; node* has 3 to 4 nodes, and c* has 8 to 10 clusters. I want to create an alert that notifies if logs are missing from even one cluster. Thanks.
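A sketch using tstats (your_index is a placeholder, and the 60-minute threshold is arbitrary): compare the last time each source was seen against now, and alert on anything that has gone quiet:

| tstats latest(_time) as last_seen where index=your_index source="logs/node*c*" by source
| eval minutes_silent=round((now()-last_seen)/60)
| where minutes_silent>60

One caveat: this only flags sources that appear at least once inside the search window; to catch a cluster that has been silent longer than the window, keep a lookup of expected sources and compare against it.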