How do you filter out IPv6 addresses and the link-local range 169.254.0.0/16 from a multi-value field?

Data example:

HOST    IP LIST
hostA   10.0.0.3, 10.3.4.6, 169.254.1.5, fe80::2000:aff:fea7:f7c
hostB   10.0.0.2, 192.168.3.12, 169.254.8.9, fe80::2000:aff:fea7:d3c

I have attempted a number of combinations of mvfilter, match, and cidrmatch, and I can't get it to work:

| eval ip_list_filter_IPv6 = mvfilter(match(ip_list_orig, "/\b(?:(?:2(?:[0-4][0-9]|5[0-5])|[0-1]?[0-9]?[0-9])\.){3}(?:(?:2([0-4][0-9]|5[0-5])|[0-1]?[0-9]?[0-9]))\b")
| eval ip_list_filter_169 = mvfilter(match(ip_list_filter_IPv6, NOT cidrmatch(169.254.0.0/16,ip_list_filter_IPv6))

I thought cidrmatch might do it all, but I believe it is not a validation function but one that checks whether an IP is in a given range.

Thanks for your help.
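A possible fix, sketched under the assumption that the multivalue field is named ip_list_orig as above: mvfilter takes a single boolean expression, so both conditions (keep only IPv4, drop link-local) can be combined in one call, and cidrmatch needs its subnet quoted as a string:

| makeresults
| eval ip_list_orig=split("10.0.0.3,10.3.4.6,169.254.1.5,fe80::2000:aff:fea7:f7c", ",")
| eval ip_list_clean=mvfilter(match(ip_list_orig, "^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$") AND NOT cidrmatch("169.254.0.0/16", ip_list_orig))
| table ip_list_orig ip_list_clean

The makeresults/split lines only build test data; the mvfilter line is the part to lift into the real search. Note that mvfilter requires the expression to reference exactly one multivalue field, which is why both tests run against ip_list_orig in a single call.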
Hi all,

I'm struggling to make my chart look how I want it. What I currently have is a graph of log counts received from certain services over the past 3 months.

- I don't understand why my months are ordered like this: 2022 December, 2023 February, 2023 January, where January should be in the middle.
- Aside from this, my main struggle is to filter out the top services with the highest log counts. These are a lot higher than the other ones, so I'll have to make a second graph with the smaller ones. How can I filter the top (say 4) out? (AND srv!=*** is not the proper way to do it in this case.)

|dbxquery query="select to_char(received_ts,'YYYY Month') as Month,srv,sum(log_Count) as Total_Log_Count from gmssp.esm.esm_audit_day where client_id = **** AND received_ts>= DATE_TRUNC('month', current_date) - '3 month'::interval AND received_ts< DATE_TRUNC('month', current_date) AND SRV!='ignor' AND SRV!='UNK' group by srv, month" connection="******"
| chart max(total_log_count) by srv month

Thanks a lot for your help!
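A hedged sketch of one way to address both points, assuming the dbxquery returns fields month, srv and total_log_count as in the chart line above: convert the month string to a sortable key, rank services by their total volume, then keep the top 4.

| dbxquery query="<query as above>" connection="******"
| eval month_key=strftime(strptime(month, "%Y %B"), "%Y-%m")
| eventstats sum(total_log_count) as srv_total by srv
| sort 0 - srv_total
| streamstats dc(srv) as srv_rank
| where srv_rank <= 4
| chart max(total_log_count) over month_key by srv

The shuffled order happens because "YYYY Month" sorts as a string: "2023 February" comes before "2023 January" alphabetically. "2023-01" and "2023-02" sort correctly. For the second panel with the smaller services, flip the filter to | where srv_rank > 4.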
Hi all,

How do I apply a range to first and last in the Splunk query below, so that an event qualifies if either date falls between 3 weeks ago and today?

| eval first = strptime(first_detected, "%Y-%m-%dT%H:%M:%S.%3N%Z"), last = strptime(last_detected, "%Y-%m-%dT%H:%M:%S.%3N%Z")

Thanks.
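If the intent is to keep only events where either parsed timestamp falls between 3 weeks ago and now, a sketch continuing from the eval above:

| eval first = strptime(first_detected, "%Y-%m-%dT%H:%M:%S.%3N%Z"), last = strptime(last_detected, "%Y-%m-%dT%H:%M:%S.%3N%Z")
| eval window_start = relative_time(now(), "-3w@d")
| where (first >= window_start AND first <= now()) OR (last >= window_start AND last <= now())

relative_time(now(), "-3w@d") returns the epoch time for 3 weeks ago snapped to midnight; drop the @d if the window should start exactly 3 weeks back to the second.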
Hi,

I'm looking for a Splunk query where a field matches a field in a lookup file and gets replaced by the associated value from that lookup. My query has a field "index" whose values match the lookup file field "CAPNSplunk". If an "index" value matches a "CAPNSplunk" value, the "index" value should be replaced with the associated "RANSplunk" value from the lookup file.

Lookup file:

CAPNSplunk,RANSplunk
"Pricing","Pricing Outlier"
"Smart_Factory","Smart Factory BUCT"
"SMARTFACTORY_LOGISTICS","Smart Factory Logistics"
"SmartFactory_PM_Console","Smart Factory PM Console"
"GCW_Dashboard","Global Contingent Worker Dashboard"
"HRM_Spans_Layers","HRM - Spans & Layers"
"Unity_Portal-Part_Aggregation","Unity Portal"
"Blackbird_Dashboard","Blackbird"
"WWops","WWOps"
"AGS_metrology_AutoML","Metrology Auto ML Classification"
"Action_Plan_Tracker","IDCL"

index values:

Pricing
Smart_Factory
SMARTFACTORY_LOGISTICS
SmartFactory_PM_Console
GCW_Dashboard
HRM_Spans_Layers
Unity_Portal-Part_Aggregation
Blackbird_Dashboard
WWops
AGS_metrology_AutoML
Action_Plan_Tracker

For example: if the "index" field value is "Pricing", it should be replaced with "Pricing Outlier" after looking it up in the lookup file.
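A sketch, assuming the lookup file is uploaded as capn_ran.csv (a hypothetical name; substitute the actual lookup name): the lookup command matches CAPNSplunk against index and returns RANSplunk, and coalesce keeps the original value wherever there is no match:

| lookup capn_ran.csv CAPNSplunk AS index OUTPUT RANSplunk
| eval index=coalesce(RANSplunk, index)
| fields - RANSplunk

Since index is a built-in field, writing the result to a new field (e.g. app_name) instead of overwriting index may be safer for any downstream commands.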
I want to extract a 5-digit number, e.g. 54879, as a numeric field.
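A sketch, assuming the number sits in the raw event text and that the target field name (num) is arbitrary:

| rex field=_raw "(?<num>\b\d{5}\b)"
| eval num=tonumber(num)

The \b word boundaries keep the regex from matching 5 digits inside a longer number, and tonumber makes the extracted string usable in numeric comparisons and sorts.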
I have some Check Point (firewall) logs that are generating an alert (Data hygiene - events in the future). I would like to confirm that the logs are arriving with timestamps in the future, because they come with the time generated on the Check Point firewall. I tried the SPLs below but I don't know if they are right.

SPL:

| rest /services/data/indexes
| search title=checkpoint
| search totalEventCount > 0
| eval now=strftime(now(), "%Y-%m-%d")
| stats first(minTime) as "Earliest Event Time" first(maxTime) as "Latest Event Time" first(now) as "Current Date" first(currentDBSizeMB) as currentDBSizeMB by title
| rename title as "Index"
| sort - currentDBSizeMB
| eval "Index Size in GB"= round(currentDBSizeMB/1000,2)
| table Index "Earliest Event Time" "Latest Event Time" "Current Date" "Index Size in GB" updated

Or this SPL:

index=idx_checkpoint earliest=+5m latest=+10y
| eval criationtimelog=strftime(creation_time,"%Y-%m-%d %H:%M:%S")
| eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M:%S")
| table host _time indextime criationtimelog
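A more direct check is to compare each event's parsed time against the time Splunk indexed it: anything where _time is ahead of _indextime arrived claiming a future timestamp. A sketch using the index name from the example above (the time range must extend into the future so future-stamped events are in scope):

index=idx_checkpoint earliest=-1d latest=+1y
| eval lag_sec = _time - _indextime
| where lag_sec > 0
| eval event_time=strftime(_time, "%Y-%m-%d %H:%M:%S"), index_time=strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| stats count max(lag_sec) as max_seconds_ahead by host

If the firewall clock runs fast, max_seconds_ahead shows roughly how far ahead it is, per host.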
The splunk service is using offsite (external) NTP queries. ntp.domain.com is our internal NTP domain, and we have configured it on our server under /etc/ntp.conf. We could see that the NTP configuration is used by the Splunk_TA_nix app in the time.sh script, and I verified that the internal NTP domain "ntp.domain.com" is configured under /etc/ntp.conf. But splunk is still using offsite NTP queries. Any idea why?

[splunk@hostname bin]$ ls -lrt | grep -ir ntp
time.sh:if [ -f /etc/ntp.conf ] ; then

[splunk@hostname bin]$ cat /etc/ntp.conf | grep ntp.domain.com
restrict -6 ::1
pool ntp.domain.com iburst
Hi guys!

I have a dashboard with a text input that is connected to a token ($tk1$), and a "Submit" button. What I want is, when Submit is clicked, to read the typed text from the input and run a | table ... | collect ...

Here is my dashboard code:

<form version="1.1">
  <label>LAB2</label>
  <fieldset submitButton="true" autoRun="false">
    <input type="text" token="tk3">
      <label>Comments</label>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=lab sourcetype=lab2 A=$TK1$ B=$TK2$
| eval C="$tk3$"
| table A B C</query>
        </search>
      </table>
    </panel>
  </row>
</form>
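A sketch of the collect step, assuming the summary index is named lab_summary (hypothetical; any events index you can write to will do) and that $tk3$ holds the submitted text. With submitButton="true" and autoRun="false", the search only dispatches when Submit is clicked:

<query>index=lab sourcetype=lab2 A=$TK1$ B=$TK2$
| eval C="$tk3$"
| table A B C
| collect index=lab_summary</query>

One caveat worth noting: the search (and therefore the collect) also re-runs if the dashboard or panel is reloaded, so a summary-index write tied to a panel can produce duplicate events.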
This is regarding the integration of a JavaScript scripted data input from an iPad with Splunk HEC for log onboarding, but Splunk is not receiving any payload.

Prerequisites: a Splunk HEC token was generated in Splunk and shared with the app owner, and the snippet below is the payload sent from the iPad by the JavaScript scripted input.
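A hedged first check from the Splunk side: splunkd logs HEC token and parsing problems under the HttpEventCollector component, and errors there usually explain a silently dropped payload (wrong or disabled token, malformed JSON, bad index):

index=_internal sourcetype=splunkd component=HttpEventCollector log_level=ERROR
| table _time host _raw

Also worth confirming from the client side: HEC listens on port 8088 by default and expects the /services/collector/event endpoint with an "Authorization: Splunk <token>" header.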
Hi,

I have a requirement to create an Availability % dashboard for Synthetic applications, but the calculation needs to include the Health Rule violation count as well. Without the Health Rule violation count, I am able to get the Availability % with a Metric Widget in the dashboard, but I couldn't get the Health Rule violation metric into a Metric Widget. Is it possible to get the Health Rule violation count into the report calculation? Is there any way to achieve this?
I have a dashboard that I'm working on that requires me to conditionally format table rows based on a field. The dashboard currently has 6 identical tables (in terms of the column names) and I need to be able to apply the JavaScript to each of them. I am aware that I can just name each table in the JavaScript; however, ideally I'd like not to have to do that, as I'm going to be adding additional tables to the dashboard over time. As this will live on cloud, I don't want to have to keep uploading the app every time I make a change.

Is there a way to replace the get('highlight01') with something more generic, or even loop that aspect of the code over all tables on the dashboard?

requirejs([
    // It's important that these don't end in ".js" per RJS convention
    '../app/TA-lgw_images_and-_files/libs/jquery-3.6.0-umd-min',
    '../app/TA-lgw_images_and-_files/libs/underscore-1.6.0-umd-min',
    '../app/TA-lgw_images_and-_files/theme_utils',
    'splunkjs/mvc',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!'
], function($, _, themeUtils, mvc, TableView) {
    // Row Coloring Example with custom, client-side range interpretation
    var CustomRangeRenderer = TableView.BaseCellRenderer.extend({
        canRender: function(cell) {
            // Enable this custom cell renderer
            return _(['target_met']).contains(cell.field);
        },
        render: function($td, cell) {
            // Add a class to the cell based on the returned value
            var value = parseFloat(cell.value);
            // Apply interpretation for number of historical searches
            if (cell.field === 'target_met') {
                if (value > 0) {
                    $td.addClass('range-cell').addClass('range-elevated');
                }
            }
            // Update the cell content
            $td.text(value.toFixed(2)).addClass('numeric');
        }
    });
    mvc.Components.get('highlight01').getVisualization(function(tableView) {
        tableView.on('rendered', function() {
            // Apply class of the cells to the parent row in order to color the whole row
            tableView.$el.find('td.range-cell').each(function() {
                $(this).parents('tr').addClass(this.className);
            });
        });
        // Add custom cell renderer, the table will re-render automatically.
        tableView.addCellRenderer(new CustomRangeRenderer());
    });
});

Example dashboard XML you can use:

<dashboard version="1.1" script="dpi_ops_evaluation.js" stylesheet="dpi_ops_evaluation.css">
  <label>DPI TEST</label>
  <row>
    <panel>
      <table id="highlight01">
        <search>
          <query>| makeresults count=6 | streamstats count as Ref | eval target_met = random() % 2, measure= "Measure ".Ref | fields - _time</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table id="highlight02">
        <search>
          <query>| makeresults count=6 | streamstats count as Ref | eval target_met = random() % 2, measure= "Measure ".Ref | fields - _time</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table id="highlight03">
        <search>
          <query>| makeresults count=6 | streamstats count as Ref | eval target_met = random() % 2, measure= "Measure ".Ref | fields - _time</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table id="highlight04">
        <search>
          <query>| makeresults count=6 | streamstats count as Ref | eval target_met = random() % 2, measure= "Measure ".Ref | fields - _time</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table id="highlight05">
        <search>
          <query>| makeresults count=6 | streamstats count as Ref | eval target_met = random() % 2, measure= "Measure ".Ref | fields - _time</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table id="highlight06">
        <search>
          <query>| makeresults count=6 | streamstats count as Ref | eval target_met = random() % 2, measure= "Measure ".Ref | fields - _time</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</dashboard>
Hi, can anybody help, please?

Problem: in a dashboard I have a labeled input. If I type something into it (a number) and press Enter, I would like to trigger an action: write something to a summary index.

Label: serial_num
Index: index_sum
Fields to be saved in the summary index: $Label$ (the typed value), the actual time, and an identifier.
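A sketch of a search the panel could run on submit, with the token name serial_num and the identifier value assumed for illustration:

| makeresults
| eval serial_num="$serial_num$", identifier="dashboard_entry"
| collect index=index_sum

makeresults stamps the result with the current time, so _time in index_sum records when the value was submitted, and serial_num and identifier ride along as fields on the collected event.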
How do I search the values of the "Dst_IP" field from the "ASA" index against the "indicator" field in the "otx" index, and display the "scrip" field from the "ASA" index?
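A sketch using a subsearch, assuming both index names are literal and the otx events carry the indicator field: the subsearch renders as (Dst_IP="..." OR Dst_IP="..."), which filters the ASA events to matching destinations.

index=ASA
    [ search index=otx
    | fields indicator
    | rename indicator AS Dst_IP ]
| table Dst_IP scrip

Subsearch output is capped (10,000 results by default), so if the otx indicator list is large, a lookup-based approach scales better.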
I'm looking for a way to search for free text after a join. It is easy when the field is known. For instance, with a join of left L and right R, the value of variable $id$ can be in either of two corresponding fields (in this example, both fields have the same name):

| search L.id=$id$ OR R.id=$id$

But how do I search for free text when it can be a substring in any field of either of the two parts? I don't want to write a check for every field, so I tried things with "_raw" or "L._raw": nothing worked.
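One way around naming every field is foreach, which expands a templated eval over all fields; a sketch, with the literal text "freetext" standing in for the search term:

... your join ...
| eval hit=0
| foreach * [ eval hit=if(like('<<FIELD>>', "%freetext%"), 1, hit) ]
| where hit=1

The single quotes around '<<FIELD>>' matter: they make eval treat dotted names like L.id as field references. Depending on the commands before the join, _raw may no longer be present in the joined results, which would explain why searching it found nothing.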
Hi everyone,

I am trying to build historical capacity data for some servers. I have one search that returns all the data I need. This search runs with a time picker of 14 months (unlike the screenshot here, which uses a shorter range for speed), and the last part (| search Customer="*****") is not part of the scheduled report. As you can see, this returns 46 servers as expected. Then, when I load the search later on to create dashboards, it only returns 23 servers. The fact that it returns some of the servers but not all is confusing me. I have triple-checked that Customer="***" is correct in both searches. Does anybody have ideas? It makes no sense to me.
Hi,

Whenever I try to update ANY app from the Splunk Enterprise web GUI, an error appears. I went to splunkd.log and found 3 log entries potentially relevant to this issue:

1) 03-22-2023 14:31:17.337 +0000 WARN LocalAppsAdminHandler [298988 TcpChannelThread] - Using deprecated capabilities for write: admin_all_objects or edit_local_apps. See enable_install_apps in limits.conf

enable_install_apps is set to "false", but I checked my role's capabilities and I have install_apps, admin_all_objects and edit_local_apps. Besides, I never had any problem before with this configuration. I also checked the Splunk docs https://docs.splunk.com/Documentation/Splunk/9.0.4/Admin/Limitsconf and these capabilities don't seem to be deprecated.

2) 03-22-2023 14:31:17.695 +0000 ERROR X509 [298988 TcpChannelThread] - X509 certificate (CN=splunkbase.splunk.com,O=Splunk Inc.,L=San Francisco,ST=California,C=US) common name (splunkbase.splunk.com) did not match any allowed names (apps.splunk.com,cdn.apps.splunk.com)

I followed this post https://community.splunk.com/t5/Installation/ERROR-X509-X509-certificate/td-p/367804 and checked my inputs, but didn't find those parameters there.

3) 03-22-2023 14:31:18.020 +0000 ERROR ApplicationUpdater [298988 TcpChannelThread] - Update file /opt/splunk/var/run/715e039c578d4189.tar.gz is not the correct size: expected 1555199 but got 1555197

I can't find any information about this last log. I guess I could try to uninstall the app and reinstall it, but since this error happens with ANY app I try to update, the issue probably isn't with one app specifically, so that would likely be useless. Any other suggestions?

Thanks
We are using OpenShift 4.11.27 and are now setting up OpenShift log forwarding to Splunk.

We made the following changes on the OpenShift side to configure Splunk:

Installed the cluster-logging and elasticsearch-operator operators in OpenShift:

$ oc get csv -n openshift-logging
NAME                            DISPLAY                            VERSION   REPLACES                        PHASE
cluster-logging.v5.6.3          Red Hat OpenShift Logging          5.6.3     cluster-logging.v5.6.2          Succeeded
elasticsearch-operator.v5.6.3   OpenShift Elasticsearch Operator   5.6.3     elasticsearch-operator.v5.6.2   Succeeded

Created the secret vector-splunk-secret with the command:

$ oc -n openshift-logging create secret generic vector-splunk-secret --from-literal hecToken=<HEC_Token>

Created a ClusterLogForwarder as below:

---
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogForwarder"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  outputs:
    - name: splunk-receiver
      secret:
        name: vector-splunk-secret
      type: splunk
      url: http://splunk-hec.amosirelanddev.amosonline.io:8000
  pipelines:
    - inputRefs:
        - application
        - infrastructure
      name:
      outputRefs:
        - splunk-receiver

Updated the cluster logging operator, replacing fluentd with vector as the collector:

$ oc edit ClusterLogging instance -n openshift-logging

Splunk setup changes. Splunk was installed on a VM with these steps:

wget https://download.splunk.com/products/splunk/releases/8.0.4/linux/splunk-8.0.4-767223ac207f-linux-2.6-x86_64.rpm
sudo rpm -ivh splunk-8.0.4-767223ac207f-linux-2.6-x86_64.rpm

Two indexes were created via Settings > Indexes > New Index:

openshift (events)
openshift-matrix (metrics)

Enabled HEC (HTTP Event Collector): Settings > Data Inputs > HTTP Event Collector > Global Settings > Default Index as "Default" > Save.

Created a new HEC token: Settings > Data inputs > HTTP Event Collector > New Token > Name "openshift" > Next (Input Settings: add the allowed indexes listed above) > Review > Submit. We noted the token value and used it in the secret above.

We are trying to search from New Search with "index= openshift" but are not getting any results. Where can we see the logs in Splunk, or if we are missing something, please let us know.

Regards,
Suchita Deshmukh
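Two hedged checks from the Splunk side: the first shows whether anything is arriving in any index at all; the second surfaces HEC rejections, which splunkd logs under the HttpEventCollector component:

| tstats count where index=* by index, sourcetype

index=_internal sourcetype=splunkd component=HttpEventCollector log_level=ERROR
| table _time host _raw

Separately, the ClusterLogForwarder url above points at port 8000, which is the Splunk Web port; HEC listens on 8088 by default, so http://splunk-hec.amosirelanddev.amosonline.io:8088 is more likely the intended target. Note also that the search should be index=openshift with no space around the equals sign.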
Hi team,

Please help me with a cron job: I need a cron expression to schedule an alert on the 1st and 3rd Monday of every month.

Thanks in advance!
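Standard cron has no direct "nth weekday of the month" syntax. A common workaround, sketched here with an assumed 09:00 run time: schedule the alert for every Monday and let the search filter on the day of the month, since the 1st Monday always falls on day 1-7 and the 3rd Monday on day 15-21.

Cron schedule: 0 9 * * 1

Appended to the alert search:

| eval dom=tonumber(strftime(now(), "%d"))
| where (dom >= 1 AND dom <= 7) OR (dom >= 15 AND dom <= 21)

On Mondays outside those windows the search returns no results, so with a "number of results > 0" trigger condition the alert simply does not fire.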
Hello All,

I have been able to create a table that lists the top users uploading files to cloud storage services for a certain time range, as set in the shared time picker (time range: last month), with the following query:

index=proxy sourcetype="XXX" filter_category="File_Storage/Sharing"
| eval end_time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| eval bytes_in=bytes_in/1024/1024/1024
| eval bytes_in=round(bytes_in, 2)
| table end_time,user,src,src_remarks01,url,bytes_in
| rename "end_time" as "Access date and time", "user" as "Username", "src" as "IP address", "src_remarks01" as "Asset information", "url" as "FQDN", "bytes_in" as "BytesIn(GB)"
| sort - BytesIn(GB)
| head 10

The result of the above search is as follows (for example):

Access date and time   Username   IP address   Asset information   FQDN              BytesIn(GB)
2023-02-20 03:04:05    aa         X.X.X.X      mmm                 box.com           3.5
2023-02-21 06:07:08    bb         Y.Y.Y.Y      nnn                 firestorage.com   1.3
2023-02-22 09:10:11    cc         Z.Z.Z.Z      lll                 onedrive.com      0.3
...

Now I am trying to get the number of (file) uploads in the last month for each user and FQDN pair in the result above. However, I still cannot build a correct search for it with the following subsearch-based attempt:

index=proxy sourcetype="XXX" filter_category="File_Storage/Sharing"
    [ search index=proxy sourcetype="XXX" filter_category="File_Storage/Sharing"
    | eval end_time=strftime(_time, "%Y-%m-%d %H:%M:%S")
    | eval bytes_in=bytes_in/1024/1024/1024
    | eval bytes_in=round(bytes_in, 2)
    | table end_time,user,src,src_remarks01,url,bytes_in
    | sort - bytes_in
    | head 10
    | fields user url
    | rename user as username, url as FQDN ]
| where bytes_in>0
| stats count sum(bytes_in) as Number_File_Uploads by username FQDN
| table end_time,username,src,src_remarks01,FQDN,bytes_in,Number_File_Uploads
| rename "end_time" as "Access date and time", "src" as "IP address", "src_remarks01" as "Asset information", "bytes_in" as "BytesIn(GB)"

As the result, I would like a "Number of uploads" column appended at the end of the first table, like this:

Access date and time   Username   IP address   Asset information   FQDN              BytesIn(GB)   Number of uploads (times)
2023-02-20 03:04:05    aa         X.X.X.X      mmm                 box.com           3.5           10
2023-02-21 06:07:08    bb         Y.Y.Y.Y      nnn                 firestorage.com   1.3           20
2023-02-22 09:10:11    cc         Z.Z.Z.Z      lll                 onedrive.com      0.3           5
...

Does anyone have any ideas on the search queries I need? Many thanks.
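The subsearch may not be needed at all: a single stats pass can compute the byte total and the upload count together per user/FQDN pair, and the top 10 can then be taken from that. A sketch using the field names from the searches above:

index=proxy sourcetype="XXX" filter_category="File_Storage/Sharing" bytes_in>0
| stats count as uploads sum(bytes_in) as bytes max(_time) as last_seen latest(src) as src latest(src_remarks01) as asset by user url
| sort - bytes
| head 10
| eval GB=round(bytes/1024/1024/1024, 2), last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")
| table last_seen user src asset url GB uploads

Here count inside stats is the number of upload events per user/FQDN pair, so uploads is exactly the "Number of uploads" column; the renames to the display headers can be appended at the end as in the original search.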
I have a table with _time, host and source. The hostnames differ, and I need the email alert to be triggered separately for each hostname.
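If the intent is one email per host rather than one email listing all hosts, the usual pattern is to aggregate by host and set the alert's trigger option to "For each result". A sketch, with the index name assumed:

index=your_index
| stats latest(_time) as last_event count by host, source
| convert ctime(last_event)

Each output row is then treated as a separate trigger, and $result.host$ can be used in the email subject or body to name the host.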