Hello, I have 3 queries that I need to join, but there is a catch. Query number 1 checks for users who sent an SMS, query number 2 checks if we tried to resend the SMS, and query number 3 checks if we got verification that the SMS was sent in the end. I want to see only the cases where we have sent, resend, and verify, all of them matched by id. When I use a simple join, I get all the results and not only those with the resend step.
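A common alternative to `join` here is to combine the three searches into one and keep only the ids that appear in all of them. A minimal sketch, assuming hypothetical sourcetypes `sms_sent`, `sms_resent`, and `sms_verified` that each carry an `id` field:

```
(sourcetype=sms_sent) OR (sourcetype=sms_resent) OR (sourcetype=sms_verified)
| stats dc(sourcetype) AS stage_count values(sourcetype) AS stages BY id
| where stage_count=3
```

Because `stats` runs once over the combined event set, this avoids the row multiplication and subsearch limits that `join` brings.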
We have configured parallelIngestionPipelines = 2 on a Splunk HF because we were facing congestion in the TypingQueue while our CPU was underutilized (~2 of 12 cores used). However, the load across the pipelines is not balanced: pipeline 0 is still congested while pipeline 1 is barely utilized. Digging around, this seems to be because 80% of our input arrives on a single UDP port. Will splitting the UDP ports on the source itself solve this issue, i.e. having multiple UDP inputs on the HF instead of one?
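Each individual input stream is handled by a single pipeline set, so one heavy UDP port cannot be spread across pipelines; splitting the traffic over several ports gives the forwarder independent streams to distribute. A minimal inputs.conf sketch, with hypothetical port numbers:

```
# inputs.conf on the HF -- each UDP input is a separate stream
# that can land on a different ingestion pipeline
[udp://514]
connection_host = ip
sourcetype = syslog

[udp://515]
connection_host = ip
sourcetype = syslog
```

The senders would then need to be pointed at different ports, or fronted by a syslog relay (syslog-ng, rsyslog) that fans the traffic out.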
I am very new to ITSI. The operational task is to create a business service in ITSI. I have created a test service and, under that service, I have configured a KPI and entities, but I cannot see any data in the KPI or entity; it shows N/A. Can someone please provide a solution?
Hello everyone, we've recently installed the Add-on for Cisco Meraki and have configured Splunk as the syslog server. I have been trying to explore failure and error events, but I can't seem to fully understand what I am seeing, and I haven't been able to find any worthwhile reference online. For instance, looking at eventData.reason, I don't know what these values represent. Does anyone have a clue, or any successful experience with integrating Splunk for Meraki?
Hello, I have completed the training but the status has not changed for weeks
Hi at all, I installed Enterprise Security 7.2.0 on Splunk 9.1.1 and I'm receiving the following message:

Unable to initialize modular input "confcheck_es_bias_language_cleanup" defined in the app "SplunkEnterpriseSecuritySuite": Unable to locate suitable script for introspection.

I searched the documentation and at https://docs.splunk.com/Documentation/ES/7.2.0/Install/Upgradetonewerversion#After_upgrading_to_version_7.2.0 I found the following indication:

To prevent the display of the error messages, follow these workaround steps. Modify the following file:
- On the search head cluster: /opt/splunk/etc/shcluster/apps/SplunkEnterpriseSecuritySuite/README/input.conf.spec
- On a standalone ES instance: /opt/splunk/etc/apps/SplunkEnterpriseSecuritySuite/README/input.conf.spec

Add the following comment at the end of the file:

###### Conf File Check for Bias Language ######
#[confcheck_es_bias_language_cleanup://default]
#debug = <boolean> (optional)

If you are on a search head cluster, follow these additional steps: push the changes to the search head cluster by pushing the bundle apps, then clean the messages from the top of the page so that they do not display again. In case of a standalone search head, restart the Splunk process.

Setting aside that the page is speaking of an upgrade while I'm doing a fresh install, the file name looks wrong (input.conf instead of inputs.conf), and they say to modify a .spec file; how can commented statements solve an issue? Obviously this solution didn't solve my issue. Is there anyone who can hint at a solution? Thank you in advance. Ciao. Giuseppe
I added a new syslog source using UDP port 514. The data is being ingested into "lastchanceindex". How can I find out what index Splunk "wants" to put the data into, so that I can create that index? Or how can I specify an index without disrupting the other syslog data sources? We use udp://514 for many different syslog data sources, so sending all of it to one index wouldn't work.
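Data lands in the last-chance index when the index configured for the input (or in a transform) doesn't exist, so the inputs.conf stanza for udp://514 and any routing transforms show what index Splunk is trying to use. To route only the new source to its own index, an index-time transform keyed on the sending host is one option. A sketch, with a hypothetical host pattern and index name:

```
# props.conf
[source::udp:514]
TRANSFORMS-route_new_syslog = route_new_syslog

# transforms.conf
[route_new_syslog]
SOURCE_KEY = MetaData:Host
REGEX = ^host::new-syslog-device
DEST_KEY = _MetaData:Index
FORMAT = new_syslog_index
```

Only events matching the REGEX are rerouted; everything else arriving on udp://514 keeps its current index.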
Hello, I have a peculiar question. Below is sample data:

_time                      data storage name   Size of data storage
2023-04-30T00:31:00.000    data_storage_1      10
2023-04-30T00:31:00.000    data_storage_2      15
2023-04-30T12:31:00.000    data_storage_1      15
2023-04-30T12:31:00.000    data_storage_2      20
2023-05-01T00:31:00.000    data_storage_1      20
2023-05-01T00:31:00.000    data_storage_2      30
2023-05-01T12:31:00.000    data_storage_1      30
2023-05-01T12:31:00.000    data_storage_2      40
2023-05-02T00:31:00.000    data_storage_1      40
2023-05-02T00:31:00.000    data_storage_2      50
2023-05-02T12:31:00.000    data_storage_1      50
2023-05-02T12:31:00.000    data_storage_2      50

How do I go about getting the sum of all storages per time frame? Example of output:

Time           Total Storage
04/30 00:31 -> 25
04/30 12:31 -> 35
05/01 00:31 -> 50
05/01 12:31 -> 70
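Since the events in each time frame share an identical _time, a plain stats sum grouped by _time gives this directly. A sketch, assuming a hypothetical index name and that the size field is literally called "Size of data storage":

```
index=storage_metrics
| stats sum("Size of data storage") AS "Total Storage" BY _time
| eval Time=strftime(_time, "%m/%d %H:%M")
| table Time, "Total Storage"
```

If the timestamps were not perfectly aligned, replacing the stats BY _time with `bin _time span=12h` before the stats would bucket nearby events into the same time frame.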
[EMEA-friendly: 10am ET / 3pm GMT] - Register here and ask questions below.

This thread is for the special 1-hour Community Office Hours session on Getting Data In (GDI) to Splunk Platform on Wed, December 6, 2023 at 7am PT / 10am ET / 3pm GMT.

This is your opportunity to ask questions related to your specific GDI challenge or use case, including:
- How to onboard common data sources (AWS, Azure, Windows, *nix, etc.)
- Using forwarders
- Apps to get data in
- How to filter, mask, enrich, and route your data
- Data Manager (Splunk Cloud Platform)
- Edge Processor, ingest actions, archiving your data, and anything else you'd like to learn!

Please submit your questions at registration or as comments below. You can also head to the #office-hours user Slack channel to ask questions (request access here). Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants. Look forward to connecting!
The summary index merges multiple values into one row, while the regular index puts the values on separate lines, so when I used the stats values command on the summary index to group by ip, the merged values are not unique.

Questions:
1) How do I make the summary index put multiple values on separate lines like the regular index does?
2) When I use the stats values command, should it return unique values?

Thank you so much for your help. See the example below:

1a) Search using regular index
index=regular_index | table company, ip

company     ip
companyA
companyA    1.1.1.1
companyB
companyB
companyB    1.1.1.2

1b) Search regular index after grouping with stats values
index=regular_index | stats values(company) by ip | table company, ip

company     ip
companyA    1.1.1.1
companyB    1.1.1.2

2a) Search using summary index
index=summary report=test_ip | table company, ip

company                       ip
companyA companyA             1.1.1.1
companyB companyB companyB    1.1.1.2

2b) Search summary index after grouping with stats values
index=summary report=test_ip | stats values(company) by ip | table company, ip

company                       ip
companyA companyA             1.1.1.1
companyB companyB companyB    1.1.1.2
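When a multivalue field is written to a summary index, the values are flattened into one space-joined string, and `stats values` then treats that whole string as a single value. A sketch that splits it back into a real multivalue field before aggregating, assuming the individual values never contain spaces:

```
index=summary report=test_ip
| makemv delim=" " company
| stats values(company) AS company BY ip
```

`values()` always deduplicates; it only appears not to when the "values" are merged strings that differ as a whole.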
How do I schedule a search between 7pm and 7am, and alert if and only if there is an event recorded between 7pm and 7am? My cron expression is */15 19-23,0-6 * * *. What should the earliest and latest values be?
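With a */15 cron, each run only needs to cover the 15 minutes since the previous run; the cron expression itself already restricts runs to the 7pm-7am window. A savedsearches.conf sketch (the stanza name is a placeholder):

```
[night_window_alert]
cron_schedule = */15 19-23,0-6 * * *
dispatch.earliest_time = -15m@m
dispatch.latest_time = @m
counttype = number of events
relation = greater than
quantity = 0
```

Snapping both boundaries to the minute with @m keeps consecutive windows contiguous, so no event at the boundary is counted twice or missed.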
I have one-to-many multivalue fields of exactly the same size, and I would like to compute the average by index position. Ex: multivalue field1: 1 2 3; multivalue field2: 3 6 7. Result: 2 4 5
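One way to do a position-wise average in SPL is to walk an index list with mvmap and pair up the fields with mvindex. A sketch for the two-field case (field names are placeholders; mvmap requires Splunk 8.0 or later):

```
| eval idx = mvrange(0, mvcount(field1))
| eval avg = mvmap(idx, (tonumber(mvindex(field1, idx)) + tonumber(mvindex(field2, idx))) / 2)
```

mvmap evaluates the expression once per element of idx, with idx standing in for the current element, so avg comes out as a multivalue field of the same length as the inputs.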
Is it possible to get the source which is sending the data, or the IP of the source? Thanks.
I have a kvstore lookup in a single SH environment. If the environment is made into a cluster and kvstore replication is on, would that decrease the performance of updating or searching using the lookup?
In the last month, the Splunk Threat Research Team (STRT) has had 2 releases of new security content via the Enterprise Security Content Update (ESCU) app (v4.13.0, v4.14.0). With these releases, there are 22 new detections, 6 new analytic stories, and 3 updated analytic stories now available in Splunk Enterprise Security via the ESCU application update process. Content highlights include:
- An analytic story for a previously unknown vulnerability in the Cisco IOS XE software's Web User Interface (Web UI) feature that is currently being exploited and effectively grants full control of the compromised device.
- An analytic story focused on Windows SIP WinVerifyTrust subversion, and an analytic story for Microsoft SharePoint Server to detect a flaw in handling authentication tokens, which allows an attacker to escalate privileges and gain unauthorized access to the SharePoint environment.
- An NjRat analytic story that contains 7 detections covering attack techniques relating to NjRat, a notorious remote access trojan (RAT). The detections include tracking file write operations for dropped files, scrutinizing registry modifications that provide persistence mechanisms, and monitoring suspicious processes, self-deletion behaviors, browser credential parsing, firewall configuration alterations, spreading via removable drives, and other potentially malicious actions.
- New analytics to address Splunk CVEs that focus on attacker behavior targeting Splunk environments, along with 2 new analytics for CVEs related to Remote Code Execution (RCE) in WS_FTP and TeamCity On-Premises.
New Analytics (22)
- Cisco IOS XE Implant Access
- Detect Certipy File Modifications (External Contributor: Steven Dick)
- Windows Domain Admin Impersonation Indicator
- Windows Registry SIP Provider Modification
- Microsoft SharePoint Server Elevation of Privilege
- Windows Steal Authentication Certificates - ESC1 Abuse (External Contributor: Steven Dick)
- Windows SIP Provider Inventory
- Windows SIP WinVerifyTrust Failed Trust Validation
- Confluence CVE-2023-22515 Trigger Vulnerability
- Windows Abused Web Services
- Windows Admin Permission Discovery
- JetBrains TeamCity RCE Attempt
- WS FTP Remote Code Execution
- Splunk Reflected XSS on App Search Table Endpoint
- Splunk RCE via Serialized Session Payload
- Splunk DoS Using Malformed SAML Request
- Splunk Absolute Path Traversal Using runshellscript
- Windows Modify Registry With MD5 Reg Key Name
- Windows Njrat Fileless Storage via Registry
- Windows Executable in Loaded Modules
- Windows Disable or Modify Tools Via Taskkill
- Windows Delete or Modify System Firewall

New Analytic Stories (6)
- Subvert Trust Controls SIP and Trust Provider Hijacking
- Microsoft SharePoint Server Elevation of Privilege CVE-2023-29357
- Cisco IOS XE Software Web Management User Interface vulnerability
- NjRat
- WS FTP Server Critical Vulnerabilities
- JetBrains TeamCity Unauthenticated RCE

Updated Analytics (3)
- Citrix ADC Exploitation CVE-2023-3519
- Windows Replication Through Removable Media
- TOR Traffic

For all our tools and security content, please visit research.splunk.com.

— The Splunk Threat Research Team
Hello! As part of data separation activities I am migrating summary indexes between Splunk deployments. Some of these summary indexes have been collected with sourcetype=stash, while others have their sourcetype set to a specific one, let's say "specific_st". The data is very simple; here is one event:

2023-06-10-12:43:00;FRONT;GBX;OK

The sourcetype is set as follows:

[specific_st]
DATETIME_CONFIG =
NO_BINARY_CHECK = true
TIME_FORMAT = %Y-%m-%d-%H:%M:%S
TZ = UTC
category = Custom
pulldown_type = 1
disabled = false
SHOULD_LINEMERGE = false
FIELD_DELIMITER = ;
FIELD_NAMES = "TIMESTAMP_EVENT","SECTOR","CODE","STATUS"
TIMESTAMP_FIELDS = TIMESTAMP_EVENT

After collecting on the source Splunk instance, I run the search on the summary index and the fields are extracted correctly. After migrating the index to the new Splunk instance, the sourcetype does not seem to work and the fields are not extracted. The event correctly lists the sourcetype as "specific_st".

To migrate I copied the db folder via SCP from the source indexer (single) to the target indexer, which is part of a cluster. I made sure to rename any buckets, and when I brought the indexer back up the index was correctly recognized and replicated. The sourcetype is located on all indexers as well as the search head.

Has anybody had this problem before? Do I maybe need to update the sourcetype in some way?

Thank you and best regards,

Andrew
I want to change the tooltip text when hovering over a Flow Map Viz node. I am using the event below, but it does not seem to work.

var flowMapViz = mvc.Components.get('my_flow_map');
flowMapViz.$el.find('.node').on('mouseover', function(event) {
    console.log("on mouseover");
});
Hi all, I've configured a new role to inherit settings from the user and power roles, and I left the default values for srchJobsQuota and rtSrchJobsQuota. Basically:

[role_new]
importRoles = power;user
srchDiskQuota = 1000
srchIndexesAllowed = *
srchIndexesDefault = *
srchMaxTime = 8640000
srchJobsQuota = 3
rtSrchJobsQuota = 6

In this case, which values will I have for srchJobsQuota and rtSrchJobsQuota: the ones set in role_new or the ones set in the inherited roles?

Thank you very much
On the search head that our SOC uses, I get the following:

IOWait
- Sum of 3 highest per-cpu iowaits reached red threshold of 15
- Maximum per-cpu iowait reached yellow threshold of 5

Under unhealthy instances, it's listing our indexers. I ran top on one of them and I see the following:

top - 15:41:36 up 37 days, 11:50, 1 user, load average: 5.31, 6.58, 6.95
Tasks: 416 total, 1 running, 415 sleeping, 0 stopped, 0 zombie
%Cpu(s): 28.3 us, 2.5 sy, 0.0 ni, 66.2 id, 2.7 wa, 0.2 hi, 0.2 si, 0.0 st
MiB Mem : 31858.5 total, 311.6 free, 3699.4 used, 27847.5 buff/cache
MiB Swap: 4096.0 total, 769.0 free, 3327.0 used. 27771.1 avail Mem

PID     USER    PR  NI  VIRT     RES     SHR    S  %CPU   %MEM  TIME+     COMMAND
984400  splunk  20  0   4475268  244140  36068  S  105.6  0.7   1:22.47   [splunkd pid=42128] search --id=remote_"Search Head FQDN"_scheduler__zzm+
796457  splunk  20  0   9232920  790724  36932  S  100.7  2.4   56:56.65  [splunkd pid=42128] search --id=remote_"Search Head FQDN"_scheduler__zzm+
895450  splunk  20  0   1281092  337308  32668  S  85.8   1.0   23:31.00  [splunkd pid=42128] search --id=remote_"Search Head FQDN"_1698412482.432+

Where it says "Search Head FQDN", that's just one of our search heads. Of course, we started seeing this once we upgraded from 8.0.5 to 9.0.5. Seeking guidance on this matter.
Hello Splunkers, I use the deployer to deploy config apps or add-ons on a search head cluster. This works when I want to deploy a new app or delete an app; I see that the search head cluster initiates a rolling restart after each apply-bundle command on the deployer. But when I modify a file in an app (etc/shcluster/apps) and run the apply-bundle command, the modification is not propagated to the cluster. What's wrong?
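A frequent gotcha with this symptom: settings that were ever changed directly on a cluster member live in that app's local/ directory on the member, and member-local settings take precedence over the default/ files the deployer pushes, so the push succeeds but the change appears not to propagate. For reference, the push command looks like this (the target host is a placeholder):

```
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme
```

Checking $SPLUNK_HOME/etc/apps/<app>/local/ on a member for a conflicting copy of the modified setting is a quick way to confirm or rule this out.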