
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

I need to create a dashboard with the columns below from the following event data. I am not able to get the "Status" column value, which is a combination of eventData{}.StatusCount{}.status and eventData{}.StatusCount{}.count. Thanks in advance!

Dashboard columns and expected values:

Date: "2021-10-14", eventKey: "event.request", ReceivedCount: 10, ProcessedCount: 10, MismatchCount: 0, Status: DOCUMENT_REQUEST_RECEIVED:10 DOCUMENT_SUCCESS:10 DOCUMENT_NOTIFY_SUCCESS:10

Sample event:

"eventData": [
  {
    "Date": "2021-10-14",
    "eventKey": "event.request",
    "ReceivedCount": 10,
    "ProcessedCount": 10,
    "MismatchCount": 0,
    "StatusCount": [
      { "status": "DOCUMENT_REQUEST_RECEIVED", "count": 10 },
      { "status": "DOCUMENT_SUCCESS", "count": 10 },
      { "status": "DOCUMENT_NOTIFY_SUCCESS", "count": 10 }
    ]
  }
]
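A minimal SPL sketch for building that Status column (the index, sourcetype, and spath paths are assumptions based on the sample above; adjust them to the real event):

index=your_index sourcetype=your_sourcetype
| spath path=eventData{} output=event
| mvexpand event
| spath input=event
| eval Status=mvzip('StatusCount{}.status', 'StatusCount{}.count', ":")
| table Date eventKey ReceivedCount ProcessedCount MismatchCount Status

mvzip pairs each status with its count, so Status renders as a multivalue cell such as DOCUMENT_REQUEST_RECEIVED:10.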
Sample JSON:

Hosts: {
  Nodepool1: {
    Cluster: xyz1
    Accountid: idxyz
  Nodepool3: {
    Cluster: xyz1
    Accountid: idxyz
  Nodepool5: {
    Cluster: xyz1
    Accountid: idxyz

I am trying the query below, but it displays the list of servers while randomly missing a few. Please correct the query if I am missing something.

index=index1
| eval cluster=""
| foreach hosts.*.cluster
    [| eval cluster=if(isnotnull('<<FIELD>>'), '<<FIELD>>', cluster)]
| table cluster
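A hedged sketch that collects every non-null Cluster value instead of overwriting a single field on each pass (the sample shows the fields capitalized as Hosts.*.Cluster while the query uses lowercase, so the wildcard path here is an assumption; match it to the actual field names):

index=index1
| foreach Hosts.*.Cluster
    [| eval clusters=mvappend(clusters, '<<FIELD>>')]
| table clusters

mvappend ignores null arguments, so node pools without a Cluster field simply contribute nothing.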
Index A has table1 and index B has table2. I want to output a new table (table3) with the values that do not exist when comparing table1 with table2.

table1    table2    table3 (expected)
aaa       zzz       aaa
bbb       aaa       bbb
ccc       ccc       ddd
ddd
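A hedged SPL sketch of one way to compare the two indexes (it assumes the comparison value sits in fields named table1 and table2 in their respective indexes; swap in the real field names):

index=A OR index=B
| eval value=coalesce(table1, table2)
| stats values(index) as found_in by value
| where mvcount(found_in)=1 AND found_in="A"
| table value

The stats/where combination keeps only values seen in index A and never in index B; flip the final condition to found_in="B" for the opposite comparison.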
I recently migrated a clustered index. We wanted to rename the index. I created the new index as you normally would via the CM. Put the cluster in maintenance mode. Stopped any ingest into the "old" index and merely copied all the contents of the "old" index into the "new" index on all 6 of our indexers. Took the cluster out of maintenance mode and did a rolling restart. Everything worked fine, except that when I count the events in both indexes for All Time, the old index is ~40 million events and the new index is ~111 million events. We have an SF & RF of 3. My thought is that it's something to do with the RF of 3; however, the math does not really work out to be 3x.
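A hedged sketch for narrowing down where the extra events come from (old_index and new_index are placeholders); comparing counts per indexer often shows whether replicated copies were copied in as if they were independent buckets:

| tstats count where index=old_index OR index=new_index by index, splunk_server

If every copied bucket, including replica copies, now counts as a searchable primary in the new index, an inflation somewhere between 2x and 3x is roughly what you would expect to see.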
Hi, I have a general question: which commands do you usually avoid in order to make searches faster? For example, I tend to avoid transaction and join. Instead of join, when possible I try to use lookup. Also in favour of lookup, I try not to use subsearches that rely on the | inputlookup command. What about other commands? What other commands do you avoid to save system resources?
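As an illustration of the join-avoidance point, a hedged sketch of the usual stats-based replacement (the web/auth indexes and the user/role/action fields are hypothetical):

Instead of:

index=web | join type=left user [ search index=auth | fields user role ]

the same correlation can usually be written as:

(index=web) OR (index=auth)
| stats values(role) as role, values(action) as action by user

The single-pass stats form avoids the subsearch row limits and memory cost that make join expensive.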
I have Splunk installed on a machine running Windows 10 that is compliant with all Windows 10 STIGs. I can access Splunk from that machine, but not from any other machine. I can ping the Splunk box from other machines. I have tried disabling the firewall, but the symptoms persist. I figure it is a setting associated with a STIG and am hoping someone here has run into this before and remembers what it is.
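A hedged sketch of two quick checks worth running on the Splunk host from PowerShell (port 8000 assumes the default Splunk Web port):

# confirm Splunk Web is listening on all interfaces (0.0.0.0:8000) rather than only 127.0.0.1
netstat -ano | findstr ":8000"

# from an elevated prompt, add an explicit inbound allow rule in case the STIG baseline blocks it
New-NetFirewallRule -DisplayName "Splunk Web" -Direction Inbound -Protocol TCP -LocalPort 8000 -Action Allow

If netstat shows only 127.0.0.1:8000, the restriction is likely in Splunk's own web.conf binding rather than in the firewall.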
Has anyone solved the issue of the Splunk sendemail command changing the order of the input columns to a different column order in the resulting email? Apparently, this reordering has been around for quite some time now. I need to send an email with the columns in a specific order. How can I specify the column order?
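A hedged workaround sketch: fix the order with table, then rename the columns with a sortable prefix so that any alphabetical reordering in the mail rendering still comes out in the intended sequence (the field names and recipient are hypothetical):

... your search ...
| table customer order_date amount
| rename customer as "1_Customer", order_date as "2_Date", amount as "3_Amount"
| sendemail to="ops@example.com" subject="Daily report" sendresults=true inline=true format=table

If the email is generated by an alert action rather than the sendemail search command, the same table/rename trick applies to the alert's base search.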
We have a requirement to mask data at index time. While the SED below works to mask data in _raw, it does not work for the extracted field "User name". My SEDCMD is on a universal forwarder (Windows) and it works fine for raw data:

s/(GBW\d{8}\t)(\d{8}\s){0,1}(\w.*?)(\t)/\1\2(masked)\4/g

My props.conf:

[sourcetype]
SEDCMD-username=s/(GBW\d{8}\t)(\d{8}\s){0,1}(\w.*?)(\t)/\1\2(masked)\4/1
FIELD_DELIMITER=tab
HEADER_FIELD_DELIMITER=tab
HEADER_FIELD_LINE_NUMBER=1
MAX_TIMESTAMP_LOOKAHEAD=300
TIMESTAMP_FIELDS=Timestamp
TIME_FORMAT=%Y%m%dT%H%M%S.%3N+%z
TRANSFORMS-anonymize = username-anonymizer

However, the transform does not work. I have tried placing it on the universal forwarder as well as on an intermediate heavy forwarder. I created it based on the response from Solved: How can I anonymize fields of data that has underg... - Splunk Community

transforms.conf:

[username-anonymizer]
REGEX = (?m)^(.*User name\:\:)(\d{8}\s){0,1}(\w.*?)$
FORMAT = $1(masked)
WRITE_META = false
SOURCE_KEY = _meta
DEST_KEY = _meta

Related info: We are expecting tab-delimited data. The field "User name" is in the middle and follows the hostname, hence the GBW in this example. "User name" can be a combination of id and name, and we only want to mask the name:

Value                          Expected masked value
12345678 firstname lastname    12345678 (masked)
12345678 firstname             12345678 (masked)
firstname lastname             (masked)
firstname                      (masked)

It could be blank as well.
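A hedged alternative sketch using INGEST_EVAL on the parsing tier (heavy forwarder or indexer). It assumes the indexed field arrives there named User_name; check the actual key in your data before relying on this, since a header containing a space may be stored differently in your environment:

# transforms.conf -- sketch only; User_name is an assumed indexed field name
[username-anonymizer]
# keep an optional leading 8-digit id, mask everything after it; blank values are left untouched
INGEST_EVAL = User_name=replace(User_name, "^(\d{8}\s)?.+$", "\1(masked)")

# props.conf
[sourcetype]
TRANSFORMS-anonymize = username-anonymizer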
Hi, inspired by this post: https://community.splunk.com/t5/Dashboards-Visualizations/How-can-i-re-use-Java-scripts-form-one-table-to-another-tables/m-p/414861 I modified my JavaScript to add a lens icon in two tables: same field name "Metrics", same icon "lupe.png", but two different tables. With Splunk 8.1 it worked without any error. After the upgrade to 8.2.9 I now get the error message below when accessing the app, but the lens icons are still shown in both tables. The output from the developer console:

TypeError: Cannot read property 'getVisualization' of undefined at eval (eval at <anonymous> ...

table_lupe.js

require([
    'underscore',
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!'
], function(_, $, mvc, TableView) {
    var CustomIconRenderer1 = TableView.BaseCellRenderer.extend({
        canRender: function(cell) {
            return cell.field === 'Metrics';
        },
        render: function($td, cell) {
            var icon = 'lupe';
            // Create the icon element and add it to the table cell
            $td.addClass('icon').html(_.template('<div class="myicon <%- icon%>"></div>', {
                icon: icon,
            }));
        }
    });
    var CustomIconRenderer2 = TableView.BaseCellRenderer.extend({
        canRender: function(cell) {
            return cell.field === 'Metrics';
        },
        render: function($td, cell) {
            var icon = 'lupe';
            // Create the icon element and add it to the table cell
            $td.addClass('icon').html(_.template('<div class="myicon <%- icon%>"></div>', {
                icon: icon,
            }));
        }
    });
    mvc.Components.get('lupe1').getVisualization(function(tableView1) {
        // Register custom cell renderer, the table will re-render automatically
        tableView1.addCellRenderer(new CustomIconRenderer1());
    });
    mvc.Components.get('lupe2').getVisualization(function(tableView2) {
        // Register custom cell renderer, the table will re-render automatically
        tableView2.addCellRenderer(new CustomIconRenderer2());
    });
});

table_lupe.css

/* Custom Icons */
td.icon {
    text-align: center;
}
td.icon .lupe {
    background-image: url('lupe.png') !important;
    background-size: 20px 20px;
}
td.icon .myicon {
    width: 20px;
    height: 20px;
    margin-left: auto;
    margin-right: auto
}

Maybe someone can help me find what the problem is. Thank you.
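A hedged guard sketch that belongs inside the existing require() callback, replacing the two direct getVisualization calls. It assumes the dashboard panels really use the ids lupe1 and lupe2; if either id is missing on the page that loads this script, mvc.Components.get() returns undefined, which matches the error shown:

    ['lupe1', 'lupe2'].forEach(function(id) {
        var component = mvc.Components.get(id);
        if (!component) {
            // id not present on this dashboard -- log instead of throwing
            console.warn('Table component not found: ' + id);
            return;
        }
        component.getVisualization(function(tableView) {
            // Register the custom cell renderer; the table re-renders automatically
            tableView.addCellRenderer(new CustomIconRenderer1());
        });
    });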
Hello, I am trying to write a script to run Splunk events every morning using PowerShell.  Has anyone done this before? Thanks,
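A hedged PowerShell sketch that runs a Splunk search through the REST API and saves the results (the host name, credentials, search string, and output path are all placeholders; the export endpoint streams results synchronously, and a self-signed certificate may need extra handling):

# prompt for a Splunk account that is allowed to search
$cred = Get-Credential

$body = @{
    search      = 'search index=_internal earliest=-24h | stats count by sourcetype'
    output_mode = 'csv'
}

# run the search and write the results to disk; schedule this script with Task Scheduler for "every morning"
Invoke-RestMethod -Uri 'https://splunk.example.com:8089/services/search/jobs/export' `
    -Method Post -Credential $cred -Body $body |
    Out-File 'C:\temp\splunk_results.csv'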
I get strange errors when searching messages over older date ranges. If I run a search spanning more than two hours, I immediately get the following errors:

2 errors occurred while the search was executing. Therefore, search results might be incomplete.
'stats' command: limit for values of field 'Time' reached. Some values may have been truncated or ignored.
'stats' command: limit for values of field 'messageType' reached. Some values may have been truncated or ignored.

Over four days:

4 errors occurred while the search was executing. Therefore, search results might be incomplete.
'stats' command: limit for values of field 'Time' reached. Some values may have been truncated or ignored.
'stats' command: limit for values of field 'eventTime' reached. Some values may have been truncated or ignored.
'stats' command: limit for values of field 'messageId' reached. Some values may have been truncated or ignored.
'stats' command: limit for values of field 'messageType' reached. Some values may have been truncated or ignored.

One of my requests:

index="external_system" messageType="RABIS-HeartBeat"
| eval timeValue='eventTime'
| eval time=strptime(timeValue,"%Y-%m-%dT%H:%M:%S")
| sort -_time
| eval timeValue='eventTime'
| eval time=strptime(timeValue,"%Y-%m-%dT%H:%M:%S")
| eval Time=strftime(_time,"%Y-%m-%dT%H:%M:%S")
| stats list(Time) as Time list(eventTime) as EventTime list(messageType) as MessageType list(messageId) as Messag11eId by messageType

Message example:

curl --location --request POST 'http://mon.pd.dev.sis.org:8088/services/collector/raw' \
  --header 'Authorization: Splunk 02-93-48-9-27' \
  --header 'Content-Type: text/plain' \
  --data-raw '{
    "messageType": "HeartBeat",
    "eventTime": "2022-11-14T13:34:15",
    "messageId": "ED280816-E404-444A-A2D9-FFD2D171F9999"
  }'

Can you please tell me how to solve these problems?
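Those warnings usually come from the cap on the stats list() function (list_maxsize in limits.conf under [stats], default 100 values per field), not from anything wrong with the data. A hedged sketch of raising it on the search head:

# limits.conf
[stats]
list_maxsize = 10000

Alternatively, skipping list() avoids the cap entirely; since the query groups by messageType and then lists per-event values, something like | table Time eventTime messageType messageId | sort -Time may be closer to what is wanted.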
Earlier we used to run on EC2 instances, and in Splunk we had an extracted field called "host" in which we would get the IP address of the host. Since we have moved away from EC2 to Fargate, I want to replace the host with the task ID and get all task IDs in an extracted field called "task". Any help appreciated.
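A hedged index-time sketch for the "task" field (the regex is a placeholder built around the usual 32-character hex ECS task id; adjust it to wherever the task id actually appears in your events, and apply it on the parsing tier):

# transforms.conf
[extract_task_id]
# hypothetical pattern: pulls the 32-hex-character task id that follows "task/" in an ECS task ARN
REGEX = task/(?:[^/\s"]+/)?([0-9a-f]{32})
FORMAT = task::$1
WRITE_META = true

# props.conf
[your_sourcetype]
TRANSFORMS-task = extract_task_id

Overriding the host field itself works the same way, with DEST_KEY = MetaData:Host and FORMAT = host::$1 instead of the indexed-field settings.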
Hi peeps, I need help with a query. Basically I'm trying to group some of the values of the 'Category' field into a new field called 'newCategory'. Below is a sample of the data:

The newCategory field should have the new count for each of the new field values (such as Anonymizers, Gambling, Malicious Site). Please help. Thank you.
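A hedged sketch of the usual case()-based grouping (the match patterns are hypothetical, since the sample screenshot is not visible here; replace them with the real Category values):

... your search ...
| eval newCategory=case(
    match(Category, "(?i)anonymi"),                 "Anonymizers",
    match(Category, "(?i)gambl"),                   "Gambling",
    match(Category, "(?i)malicious|phish|botnet"),  "Malicious Site",
    true(),                                         Category)
| stats count by newCategory

For longer mappings, a lookup file with Category and newCategory columns plus | lookup is usually easier to maintain than a growing case() expression.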
Hi there, I used to have a couple of alerts which worked using a cron expression from Monday to Saturday (*/15 7-19 * * 1-) and another for Sunday (*/15 10-15 * * 0). The requirements changed, so I needed the Saturday and Sunday alert timings to be the same. I used (*/15 10-15 * * 6-7) but that didn't trigger an alert. I tried */15 10-15 * * SAT-SUN but it doesn't accept that format. Can you help me with a cron expression for Saturday and Sunday?
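A short sketch, assuming Splunk's scheduler follows the common cron convention where Sunday is 0 and 7 is not accepted; listing the two days explicitly avoids the weekend wraparound problem:

*/15 10-15 * * 0,6

That runs every 15 minutes from 10:00 through 15:59 on Sunday (0) and Saturday (6).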
Hi, may I check whether there is a character limit when sending data to Splunk? Is there a 10000 limit on Splunk Enterprise version 8.0.5? Thanks!
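If events appear cut off at around 10000 characters, the likely cause is the TRUNCATE setting in props.conf, whose default is 10000 bytes per line. A hedged sketch for raising it (the sourcetype name is a placeholder; 0 disables truncation entirely, which carries its own memory risk):

# props.conf on the parsing tier (heavy forwarder or indexer)
[your_sourcetype]
TRUNCATE = 50000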
Hi, I am working with firewall logs and external IPs. I want to collect the IPs blocked by the firewall along with the blocked reason (why the firewall blocked each external IP) and the signature of the firewall rule. I would like to create a query that identifies the blocked IPs and the reason; please help me with this. tstats could be useful.
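A hedged tstats sketch, assuming the firewall data is CIM-compliant and mapped to the Network_Traffic data model (the field names follow the CIM; if the data is not in a data model, a plain search over the firewall index with action=blocked works the same way, only slower):

| tstats count from datamodel=Network_Traffic where All_Traffic.action=blocked
    by All_Traffic.src, All_Traffic.dest, All_Traffic.rule
| rename All_Traffic.* as *
| sort -count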
I just enabled indexer discovery on my master node and on my deployment server. I then added three (3) new indexers. I have added the new indexers to the license master and also set up indexer clustering on the three new indexers. Then I discovered the following:
1. My pass4SymmKey is still showing (i.e. it did not hash).
2. The IP addresses of the new indexers were not updated on the deployment server, so they were also not updated on the deployment clients.
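For reference, a hedged sketch of the forwarder-side indexer discovery settings, since the discovered indexer list comes from the cluster master at runtime rather than from anything the deployment server tracks (the group name, key, and URI are placeholders):

# outputs.conf on the forwarders
[indexer_discovery:cluster1]
pass4SymmKey = <your key>
master_uri = https://cluster-manager.example.com:8089

[tcpout:discovered_indexers]
indexerDiscovery = cluster1

[tcpout]
defaultGroup = discovered_indexers

Typically, a pass4SymmKey value in a conf file is only rewritten in hashed form after the instance that owns the file has been restarted.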
splunkforwarder-monitor exits by itself, and I got the following message. I saw a similar issue reported for Splunk versions prior to 6.1.3, but in my case we are using version 8.1.3.

[root@em21 splunkforwarder]# systemctl status splunkforwarder -l
* splunkforwarder.service - Splunk Universal Forwarder Process Monitor
   Loaded: loaded (/etc/systemd/system/splunkforwarder.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Wed 2022-11-02 00:11:03 UTC; 1 weeks 4 days ago
  Process: 45771 ExecStop=/etc/splunk/splunkforwarder-monitor stop (code=exited, status=0/SUCCESS)
  Process: 38220 ExecStart=/etc/splunk/splunkforwarder-monitor start (code=exited, status=0/SUCCESS)
 Main PID: 38220 (code=exited, status=0/SUCCESS)
   Memory: 6.4M
   CGroup: /system.slice/splunkforwarder.service

Nov 01 23:56:51 em21 splunkforwarder-monitor[38220]: Done
Nov 01 23:56:51 em21 splunkforwarder-monitor[38220]: Checking default conf files for edits...
Nov 01 23:56:51 em21 splunkforwarder-monitor[38220]: Validating installed files against hashes from '/opt/splunkforwarder/splunkforwarder-8.1.3-63079c59e632-linux-2.6-x86_64-manifest'
Nov 01 23:56:52 em21 splunkforwarder-monitor[38220]: [  OK  ]
Nov 01 23:56:52 em21 splunkforwarder-monitor[38220]: All installed files intact.
Nov 01 23:56:52 em21 splunkforwarder-monitor[38220]: Done
Nov 01 23:56:52 em21 splunkforwarder-monitor[38220]: All preliminary checks passed.
Nov 01 23:56:52 em21 splunkforwarder-monitor[38220]: Starting splunk server daemon (splunkd)...
Nov 01 23:56:52 em21 splunkforwarder-monitor[38220]: Done
Nov 02 00:11:03 em21 splunkforwarder-monitor[38220]: INFO: /opt/splunkforwarder/var/run/splunk/conf-mutator.pid is gone, which indicates that splunk existed successfully. Quiting splunkforwarder-monitor...
[root@em21 splunkforwarder]#

[root@em21 splunkforwarder]# rpm -qa | grep splunk
splunkforwarder-configure-3.7-48.noarch
splunkforwarder-8.1.3-63079c59e632.x86_64
[root@em21 splunkforwarder]#
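Two hedged checks that may help separate the monitor (splunkforwarder-monitor appears to come from the separate splunkforwarder-configure package rather than from the Splunk UF rpm itself) from the forwarder proper:

# is splunkd actually running even though the monitor unit has exited?
/opt/splunkforwarder/bin/splunk status

# optionally let Splunk manage its own systemd unit instead of the wrapper
# (assumes a "splunk" service account; back up the existing unit file first)
/opt/splunkforwarder/bin/splunk disable boot-start
/opt/splunkforwarder/bin/splunk enable boot-start -user splunk -systemd-managed 1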
Can someone give me some steps for this issue?

Push Unnecessary: manager-apps and master-apps are both populated. There can be only one. Bundle push blocked until all bundles are either in manager-apps (preferred) or master-apps.