All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I am very new to ITSI; my operational task is to create a business service in ITSI. I have created a test service and, under it, configured a KPI and entities, but I cannot see any data in the KPI or the entities: everything shows N/A. Can someone please suggest a solution?
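One common first check is to run the KPI's base search by hand over the KPI's calculation window; if it returns nothing, the N/A is expected. A minimal sketch, where the index, filter, and entity field are placeholders for whatever the KPI is actually configured with:

index=<your_kpi_index> <your_kpi_filter> earliest=-15m
| stats count by <your_entity_field>

If this does return rows, the next thing to verify is that the entity alias field in ITSI matches the field name the search actually produces.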
Hello everyone, we've recently installed the Add-on for Cisco Meraki and have configured Splunk as the syslog server. I have been trying to explore failure and error events, but I can't seem to fully understand what I am seeing, and I haven't been able to find any useful reference online. For instance, looking at eventData.reason, I don't know what these values represent. Does anyone have a clue, or any successful experience integrating Splunk with Meraki?
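As a starting point, it can help to at least inventory which reason values actually occur before hunting down their meanings; the index and sourcetype below are assumptions, so substitute whatever your Meraki input writes to:

index=meraki sourcetype=meraki* eventData.reason=*
| stats count by eventData.reason
| sort -count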
Hi at all, I installed Enterprise Security 7.2.0 on Splunk 9.1.1 and I'm receiving the following message: Unable to initialize modular input "confcheck_es_bias_language_cleanup" defined in the app ... See more...
Hi all, I installed Enterprise Security 7.2.0 on Splunk 9.1.1 and I'm receiving the following message: Unable to initialize modular input "confcheck_es_bias_language_cleanup" defined in the app "SplunkEnterpriseSecuritySuite": Unable to locate suitable script for introspection. I searched the documentation, and at https://docs.splunk.com/Documentation/ES/7.2.0/Install/Upgradetonewerversion#After_upgrading_to_version_7.2.0 I found the following instructions:

To prevent the display of the error messages, follow these workaround steps. Modify the following file:
On a search head cluster: /opt/splunk/etc/shcluster/apps/SplunkEnterpriseSecuritySuite/README/input.conf.spec
On a standalone ES instance: /opt/splunk/etc/apps/SplunkEnterpriseSecuritySuite/README/input.conf.spec
Add the following comment at the end of the file:
###### Conf File Check for Bias Language ######
#[confcheck_es_bias_language_cleanup://default]
#debug = <boolean> (optional)
On a search head cluster, push the changes to the members by applying the bundle, then clear the messages from the top of the page so that they do not display again. On a standalone search head, restart the Splunk process.

Setting aside that the page describes an upgrade while I am doing a fresh install, that the file name looks wrong (input.conf instead of inputs.conf), and that they ask you to modify a .spec file: how can commented-out statements solve an issue? Unsurprisingly, this workaround did not solve my problem. Can anyone hint at a solution? Thank you in advance. Ciao. Giuseppe
I added a new syslog source using upd port 514. The data is being ingested into "lastchanceindex". How can I find out what index splunk "wants" to put the data into, so that I can create that index? ... See more...
I added a new syslog source using UDP port 514. The data is being ingested into "lastchanceindex". How can I find out which index Splunk "wants" to put the data into, so that I can create that index? Or how can I specify an index without disrupting the other syslog data sources? We use udp://514 for many different syslog sources, so sending all of it to one index wouldn't work.
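One pattern that leaves the other udp:514 sources untouched is an index-time routing transform keyed on the new device's host value; a sketch, where the stanza name, host value, and target index are hypothetical, and the files belong on whatever instance parses the data (indexers or a heavy forwarder):

props.conf:
[source::udp:514]
TRANSFORMS-route_new_syslog = route_new_syslog

transforms.conf:
[route_new_syslog]
SOURCE_KEY = MetaData:Host
REGEX = ^host::new-device-hostname$
DEST_KEY = _MetaData:Index
FORMAT = my_new_index

The target index still has to exist first; events only fall into the last-chance index when the index they were destined for does not.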
Hello, I have a peculiar question. Below is sample data:

_time                     data storage name   Size of data storage
2023-04-30T00:31:00.000   data_storage_1      10
2023-04-30T00:31:00.000   data_storage_2      15
2023-04-30T12:31:00.000   data_storage_1      15
2023-04-30T12:31:00.000   data_storage_2      20
2023-05-01T00:31:00.000   data_storage_1      20
2023-05-01T00:31:00.000   data_storage_2      30
2023-05-01T12:31:00.000   data_storage_1      30
2023-05-01T12:31:00.000   data_storage_2      40
2023-05-02T00:31:00.000   data_storage_1      40
2023-05-02T00:31:00.000   data_storage_2      50
2023-05-02T12:31:00.000   data_storage_1      50
2023-05-02T12:31:00.000   data_storage_2      50

How do I go about getting the sum of all storages per time frame? Example of output:

Time           Total Storage
04/30 00:31 -> 25
04/30 12:31 -> 35
05/01 00:31 -> 50
05/01 12:31 -> 70
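A minimal sketch, assuming the size lives in a field named exactly as in the sample and that events sharing a timestamp belong to the same time frame:

index=<your_index>
| stats sum("Size of data storage") as "Total Storage" by _time

If the timestamps do not line up exactly, bucketing first (for example | bin _time span=12h) before the stats gives the same per-window totals.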
The summary index merges multivalue fields into one row, while the regular index puts the values on separate lines, so when I use the stats values command on the summary index to group by ip, the merged values are not unique. Questions: 1) How do I make the summary index put multiple values on separate lines, as on the regular index? 2) When I use the stats values command, should it return unique values? Thank you so much for your help. See the example below:

1a) Search using the regular index:
index=regular_index | table company, ip
company     ip
companyA    1.1.1.1
companyA
companyB    1.1.1.2
companyB
companyB

1b) Search of the regular index after grouping with stats values:
index=regular_index | stats values(company) by ip | table company, ip
company     ip
companyA    1.1.1.1
companyB    1.1.1.2

2a) Search using the summary index:
index=summary report=test_ip | table company, ip
company                       ip
companyA companyA             1.1.1.1
companyB companyB companyB    1.1.1.2

2b) Search of the summary index after grouping with stats values:
index=summary report=test_ip | stats values(company) by ip | table company, ip
company                       ip
companyA companyA             1.1.1.1
companyB companyB companyB    1.1.1.2
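If the summary events really do carry the merged companies as one space-delimited string, one workaround is to split them back into a multivalue field before aggregating; a sketch using the field names from the example:

index=summary report=test_ip
| eval company=split(company, " ")
| mvexpand company
| stats values(company) as company by ip

And yes, stats values() does deduplicate; it only looks non-unique here because each merged string ("companyA companyA") is a single distinct value.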
How do I schedule a search between 7pm and 7am and alert if, and only if, there is an event recorded between 7pm and 7am? My cron expression is */15 19-23,0-6 * * *. What should the earliest and latest values be?
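One common pairing, assuming each run should only examine the slice since the previous run, is to keep the 15-minute cron and use a matching snap-to-minute window:

cron schedule: */15 19-23,0-6 * * *
earliest: -15m@m
latest: @m

Since the cron already restricts runs to 19:00 through 06:45, every window the search inspects falls inside the 7pm-7am span, so alerting on a result count greater than zero satisfies the "if and only if" condition. Note that the last run at 06:45 only covers up to 06:45; events between 06:45 and 07:00 would need either an extra run at 07:00 or a wider final window.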
I have two or more multivalue fields of the same length, and I would like to compute the element-wise average by index position. Example:

multivalue field1: 1 2 3
multivalue field2: 3 6 7
Result: 2 4 5
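A sketch for the two-field case, pairing values by position with mvzip and averaging each pair (field names as in the example):

| eval pair=mvzip(field1, field2)
| mvexpand pair
| eval avg=(tonumber(mvindex(split(pair, ","), 0)) + tonumber(mvindex(split(pair, ","), 1))) / 2
| stats list(avg) as result

With the sample values this yields 2, 4, 5. For more than two fields the same mvzip trick can be nested, though it gets unwieldy quickly.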
Is it possible to get the source that is sending the data, or the IP of the source? If so, how? Thanks.
I have a KV store lookup in a single-search-head environment. If the environment is made into a cluster and KV store replication is enabled, would that decrease the performance of updating the lookup or of searching with it?
Hello! As part of data separation activities I am migrating summary indexes between Splunk deployments. Some of these summary indexes have been collected with sourcetype=stash, while others have their sourcetype set to a specific one, let's say "specific_st". The data is very simple; here is one event:

2023-06-10-12:43:00;FRONT;GBX;OK

The sourcetype is set as follows:

[specific_st]
DATETIME_CONFIG =
NO_BINARY_CHECK = true
TIME_FORMAT = %Y-%m-%d-%H:%M:%S
TZ = UTC
category = Custom
pulldown_type = 1
disabled = false
SHOULD_LINEMERGE = false
FIELD_DELIMITER = ;
FIELD_NAMES = "TIMESTAMP_EVENT","SECTOR","CODE","STATUS"
TIMESTAMP_FIELDS = TIMESTAMP_EVENT

After collecting on the source Splunk instance, I run the search on the summary index and the fields are extracted correctly. After migrating the index to the new Splunk instance, the sourcetype does not seem to work and the fields are not extracted, although each event correctly lists its sourcetype as "specific_st". To migrate, I copied the db folder via SCP from the source indexer (a single instance) to a target indexer that is part of a cluster. I made sure to rename the buckets, and when I brought the indexer back up the index was correctly recognized and replicated. The sourcetype definition is present on all indexers as well as the search head. Has anybody had this problem before? Do I maybe need to update the sourcetype in some way? Thank you and best regards, Andrew
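A quick sanity check is to confirm which props stanza the new search head actually resolves for that sourcetype, since a copy shadowed by another app would produce exactly this symptom; btool shows the winning file:

splunk btool props list specific_st --debug

Worth noting as a possible explanation rather than a certainty: FIELD_DELIMITER and FIELD_NAMES are applied when data is first ingested, so buckets copied in already-indexed never pass through them. If the old instance extracted those fields at index time, the migrated events would need equivalent search-time extractions (for example a REPORT transform using DELIMS and FIELDS) on the new search head.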
I want to change the tooltip text shown when hovering over a Flow Map Viz node. I am using the event handler below, but it does not seem to work.

var flowMapViz = mvc.Components.get('my_flow_map');
flowMapViz.$el.find('.node').on('mouseover', function(event) {
    console.log("on mouseover");
});
Hi all, I've configured a new role to inherit settings from the user and power roles, and I left the default values for srchJobsQuota and rtSrchJobsQuota in those inherited roles. Basically:

[role_new]
importRoles = power; user
srchDiskQuota = 1000
srchIndexesAllowed = *
srchIndexesDefault = *
srchMaxTime = 8640000
srchJobsQuota = 3
rtSrchJobsQuota = 6

In this case, which values will I get for srchJobsQuota and rtSrchJobsQuota: the ones set in role_new or the ones from the inherited roles? Thank you very much.
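Rather than guessing, btool can print the fully merged stanza the instance actually uses:

splunk btool authorize list role_new --debug

For what it's worth, my understanding (worth verifying against the authorize.conf docs for your version) is that a quota set explicitly on role_new applies to that role, and that a user holding several roles gets the highest quota among them.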
On the search head that our SOC uses, I get the following:

IOWait
Sum of 3 highest per-cpu iowaits reached red threshold of 15
Maximum per-cpu iowait reached yellow threshold of 5

Under unhealthy instances, it lists our indexers. I ran top on one of them and I see the following:

top - 15:41:36 up 37 days, 11:50, 1 user, load average: 5.31, 6.58, 6.95
Tasks: 416 total, 1 running, 415 sleeping, 0 stopped, 0 zombie
%Cpu(s): 28.3 us, 2.5 sy, 0.0 ni, 66.2 id, 2.7 wa, 0.2 hi, 0.2 si, 0.0 st
MiB Mem : 31858.5 total, 311.6 free, 3699.4 used, 27847.5 buff/cache
MiB Swap: 4096.0 total, 769.0 free, 3327.0 used. 27771.1 avail Mem

PID     USER    PR  NI  VIRT     RES     SHR    S  %CPU   %MEM  TIME+     COMMAND
984400  splunk  20  0   4475268  244140  36068  S  105.6  0.7   1:22.47   [splunkd pid=42128] search --id=remote_"Search Head FQDN"_scheduler__zzm+
796457  splunk  20  0   9232920  790724  36932  S  100.7  2.4   56:56.65  [splunkd pid=42128] search --id=remote_"Search Head FQDN"_scheduler__zzm+
895450  splunk  20  0   1281092  337308  32668  S  85.8   1.0   23:31.00  [splunkd pid=42128] search --id=remote_"Search Head FQDN"_1698412482.432+

Where it says "Search Head FQDN", that is just one of our search heads. Of course, we only started seeing this after we upgraded from 8.0.5 to 9.0.5. Seeking guidance on this matter.
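To see how iowait trends over time rather than a single top snapshot, the introspection data on the indexers can help; a rough sketch, with the caveat that the component and field names here are assumptions worth checking against your own _introspection events:

index=_introspection sourcetype=splunk_resource_usage component=IOWait host=<your_indexer>
| timechart span=5m max(data.max_cpu__pct_iowait) as max_iowait

A trend line makes it much easier to tell whether the red threshold is crossed in sustained plateaus or only in brief spikes around scheduled search storms.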
Hello Splunkers, I use the deployer to push config apps and add-ons to a search head cluster. This works when I deploy a new app or delete an app: I see the search head cluster initiate a rolling restart after each apply shcluster-bundle command on the deployer. But when I modify a file in an app (under etc/shcluster/apps) and run the apply shcluster-bundle command, the modification is not propagated to the cluster. What's wrong?
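For reference, the standard push looks like this (target URI and credentials are placeholders):

splunk apply shcluster-bundle -target https://<sh_member>:8089 -auth admin:<password>

One thing worth checking, as a guess rather than a diagnosis: the deployer merges pushed app content into the members' default layer, so if the same setting already exists in that app's local directory on the members, the members' local copy wins and the change appears to be ignored.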
Can anyone help me onboard data and metrics from OpenShift to Splunk Cloud? Forwarding logs to Splunk using the OpenShift log forwarding feature was not enough to get all the data we need, i.e. CPU and memory utilization.
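One route that covers metrics as well as logs is the Splunk OpenTelemetry Collector for Kubernetes, which has an OpenShift distribution setting; a rough sketch of a HEC-based install, where the stack name, token, index names, and cluster name are all placeholders:

helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
helm install splunk-otel-collector splunk-otel-collector-chart/splunk-otel-collector \
  --set distribution=openshift \
  --set clusterName=<cluster_name> \
  --set splunkPlatform.endpoint=https://http-inputs-<stack>.splunkcloud.com:443/services/collector \
  --set splunkPlatform.token=<hec_token> \
  --set splunkPlatform.index=<events_index> \
  --set splunkPlatform.metricsIndex=<metrics_index> \
  --set splunkPlatform.metricsEnabled=true

The metrics index on the Splunk Cloud side has to be created as a metrics-type index before the collector can write to it.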
Hi, I  need an spl to find the threshold for the respective domains. index=ss group="Threat Intelligence" | stats values(attacker_score) as attacker_score by domain eg. admin.com 110 120 ... See more...
Hi, I need an SPL search to find a threshold for each domain.

index=ss group="Threat Intelligence" | stats values(attacker_score) as attacker_score by domain

e.g. for admin.com the values are: 110 120 135 145 160 170 185 195 210 220 235 245 270 345 360 370 395 410 420 435 445 45 470 495 520 570 60 645 70 85 920 95

Thanks.
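If "threshold" here means something like a high percentile per domain, a sketch; the tonumber() is there because the lexicographic ordering above (45 sorting after 445) suggests the score is currently a string:

index=ss group="Threat Intelligence"
| eval attacker_score=tonumber(attacker_score)
| stats perc95(attacker_score) as threshold by domain

Swapping perc95 for avg or median, or computing avg plus two standard deviations (stats avg() and stdev(), then an eval), gives other common threshold definitions.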
Hi, I have created two different timecharts.

Timechart 1 (cable connection on/off):
index=cisco_asa dest_interface=outside | timechart span=10m dc(count) by count

Timechart 2 (logged-in users listed):
host=10.1.1.1 src_sg_info=* | timechart span=10m dc(src_sg_info) by src_sg_info

Individually each displays perfectly, but it would be even better if they could be combined into one graph with common timestamps. I have searched through the Splunk documentation and tried different setups without success. I hope someone can help me with this.
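One possible sketch is to union both searches and split a single timechart by a constructed series field; this assumes the first chart really just needs a connection count per bucket, which the dc(count) by count above may or may not intend:

(index=cisco_asa dest_interface=outside) OR (host=10.1.1.1 src_sg_info=*)
| eval series=coalesce(src_sg_info, "outside_connections")
| timechart span=10m count by series

coalesce() falls back to the "outside_connections" label for the ASA events, which have no src_sg_info, so both datasets share one time axis.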
Hello, I have one more beginner's question regarding reports and dashboards. I am trying to build an overview of the most used services, using this query:

index=notable | top limit=15 app

When I put this report into Dashboard Studio, both the count and the percentage appear in the chart. I would like to remove the percentages completely. Can you tell me how to do that, please? And one more option that just came to mind: if I wanted to keep both count and percentage, is it possible to give the chart a separate axis with its own 0-100 scale for the percentages?
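For the first part, top accepts an option that suppresses the percent column entirely:

index=notable | top limit=15 app showperc=false

For the second part, top's output keeps count and percent as separate fields, so the usual route is a chart overlay in the visualization settings: plot count on the main axis and move percent to a second axis scaled 0-100.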
Good afternoon, I have a Salesforce org for which I need a field audit trail to monitor changes to standard and custom object fields, a setup audit trail to monitor changes to the org's configuration, and event monitoring so I can see who views and does what. I have these requirements in order to respect compliance and auditing rules, and I need the information to be accessible 24/7 in a database and retrievable for at least 10 years. I would also like an export capability that lets me set up an export to an external database. Does Splunk have such capabilities in its add-ons, and if so, which one? Thank you all, Paulo