All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have 5 forwarders forwarding data to my Splunk server, but when I log into the server only two of them are listed. When I do a TCP dump on the server I can see the forwarder communicating and sending data, but when I log into the web UI the forwarder is not listed. Does anybody know what this might be? The configs on all forwarders are the same.
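One way to narrow this down (a minimal sketch, assuming the forwarders still forward their own _internal logs, which is the default) is to list every forwarder the indexer has actually accepted a connection from, rather than relying on the UI:

index=_internal sourcetype=splunkd group=tcpin_connections
| stats latest(_time) as last_connected latest(version) as fwd_version by hostname sourceIp
| convert ctime(last_connected)

If all five hosts show up here but only two appear when you search your data indexes by host, the missing forwarders are connecting but their inputs (or index permissions) are not producing events; if they do not show up here at all, the connection seen in tcpdump is not completing the Splunk handshake.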
Hi All, newbie here - sorry if my subject is poorly worded, I'm a little confused! I'm trying to add a field to the table below that will show how long it's been since the last test failed. This table also contains a column that shows the last time a test ran (pass or fail). Here's a picture. Here's my current search:

index="redacted"
| rex field=runtime "^(?<seconds>\w*.\w*)"
| stats latest(result), latest(_time) as last_checked, latest(runtime) as lastRuntime, avg(seconds) as averageRuntime by test
| eval averageRuntime=round(averageRuntime,0)
| strcat averageRuntime f2 " seconds." field3 averageRuntime
| `timesince(last_checked,last_checked)`

Any ideas or tips are greatly appreciated. Thanks in advance.
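One possible way to get the "time since last failure" column (a minimal sketch, assuming the result field contains the literal value fail for a failed run) is to capture the timestamp of the latest failing event inside the same stats, then turn the difference from now() into a readable duration:

index="redacted"
| rex field=runtime "^(?<seconds>\w*.\w*)"
| stats latest(result) as result latest(_time) as last_checked latest(eval(if(result=="fail",_time,null()))) as last_failed latest(runtime) as lastRuntime avg(seconds) as averageRuntime by test
| eval averageRuntime=round(averageRuntime,0)
| eval time_since_last_fail=tostring(now()-last_failed, "duration")

Tests that have never failed in the search window will simply have no value for time_since_last_fail.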
Hi all, I need to create an alert that triggers when a latency threshold is breached for a sustained 30 minutes. I am researching how to incorporate streamstats into my query, and so far I have come up with this:

index="x" source="y" EndtoEnd
| rex (?<e2e_p>\d+)ms \\Extracts the numerical value from the e2e_p field.
| where isnotnull(e2e_p)
| streamstats avg(e2e_p) window=1800 current=t time_window=30m as avg_e2e_p
| where avg_e2e_p > 500

The condition doesn't happen often, but I'll work with the team that supports the app to simulate it once the query is finalized. I have never used streamstats before, but that's what has come up in my search for a way to incorporate a sliding window into an SPL query. Thank you in advance for taking the time to help with this.
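For comparison, a minimal sketch of the streamstats form (assuming the latency value always appears as "<number>ms" in the raw event and 500 is the threshold): rex needs field= and a quoted pattern, the as clause attaches to the aggregate, and time_window on its own gives the 30-minute sliding window.

index="x" source="y" EndtoEnd
| rex field=_raw "(?<e2e_p>\d+)ms"
| where isnotnull(e2e_p)
| streamstats time_window=30m avg(e2e_p) as avg_e2e_p
| where avg_e2e_p > 500

time_window works on events in time order (the default descending search order is fine), and a separate window argument is not needed here. Depending on what "sustained" means, checking min(e2e_p) over the window instead of the average may be closer to "breached for the whole 30 minutes".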
Hello, my team has a search that uses a field called regex, containing a load of different regex expressions to match against a field called string to identify key words we are looking for. Example:

| eval regex=split(regex, "|")
| mvexpand regex
| where match(string, regex)

The regex field contains 80+ different regex patterns to match on certain key words. The mvexpand causes one event to be split into 80+ different events, just to potentially match on one field. Because of this mvexpand, we hit mvexpand's memory limits and events were dropped.

I'm trying to find out whether it is possible to match the patterns in the regex field against the string field without using mvexpand to break it apart. Previously recommended solutions such as the following did not work:

| eval true = mvmap(regex, if(match(regex, query),regex,"0"))
| eval true = mvfilter(true="0")
| where ISNOTNULL(true)
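One approach that avoids mvexpand entirely (a minimal sketch, assuming regex is the multivalue field produced by the split and string is single-valued): have mvmap test each pattern against string, keep only the patterns that matched, and drop events where nothing matched.

| eval regex=split(regex, "|")
| eval hits=mvmap(regex, if(match(string, regex), regex, null()))
| where isnotnull(hits)

Because null() results are dropped from the multivalue output, hits ends up containing only the matching patterns, which also tells you which regex fired, without ever multiplying the events.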
Question for Omega Core Audit: will I (as the app developer) get notified?
Hi all, I am trying to build a query to monitor my indexer rolling restart. I would like to know how much time it has taken, when it started, and when it ended. I can only see when it has started, but cannot see any message for when it has completed.

INFO CMMaster [3340464 TcpChannelThread] - Starting a rolling restart of the peers.
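Since the wording of the completion message varies between versions, one rough way to bracket the restart (a minimal sketch; the component name and message text are worth verifying against your own _internal data) is to take the time span of all CMMaster rolling-restart messages on the cluster manager:

index=_internal sourcetype=splunkd component=CMMaster "rolling restart"
| stats earliest(_time) as restart_start latest(_time) as last_restart_msg count
| eval duration_minutes=round((last_restart_msg - restart_start)/60, 1)
| convert ctime(restart_start) ctime(last_restart_msg)

The end time is only approximate (it is the last rolling-restart message logged), but it gives a usable duration until you identify the exact completion message in your version.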
Dear team, is there any recommended way to index .zip files from Azure Blob Storage via the Splunk Add-on for Microsoft Cloud Services? If it is not possible directly, is there any preferred workaround to unzip them somehow first? Big thanks!!!
Hello, what are the best practices for configuring Splunk memory and swap partition space?

Current resources: three indexer nodes, each with 24 cores, 64GB RAM, 2TB SSD, and 10-gigabit networking. Each indexer node has 64GB of physical memory and 8GB of swap. The swap policy only allows swap to be used once physical memory usage exceeds 70%. The current situation is that only 1.6GB of physical memory is used, yet swap usage is 3.8GB. The following is the alarm information.

[Alarm Name] system.swap.used_pct
[Warning content] The usage rate of the swap partition has reached 39.76%, and the average has exceeded the threshold of 20.0% in the past minute.

I have some questions:
1. Why is swap usage so much higher than memory usage?
2. How should memory and swap partition space be configured, and what are the best practices?
This is regarding the integration between Splunk and Google Workspace. I have followed the documentation below to configure the integration, but the log data is not being ingested into the specified index in Splunk, and I cannot view the Google Workspace logs in Splunk. Additionally, there are no apparent errors after the integration setup. I would appreciate any advice or precautions for installing the Add-on for Google Workspace.

# Additional info
Upon checking the log files, the following error was found. However, no 40x errors were found.

Could not refresh service account credentials because of ('unauthorized_client: Client is unauthorized to retrieve access tokens using this method, or client not authorized for any of the scopes requested.', {'error': 'unauthorized_client', 'error_description': 'Client is unauthorized to retrieve access tokens using this method, or client not authorized for any of the scopes requested.'})

# Referenced Documentation
## Installation of the Add-on for Google Workspace
https://docs.splunk.com/Documentation/AddOns/released/GoogleWorkspace/Installation
## Issuing Authentication Keys for Accounts Created on the Add-on for Google Workspace
https://docs.splunk.com/Documentation/AddOns/released/GoogleWorkspace/Configureinputs1
-> Refer to the "Google Workspace activity report prerequisites" section in the above document.
## Add-on Configuration
https://docs.splunk.com/Documentation/AddOns/released/GoogleWorkspace/Configureinputs2
-> Refer to the "Add your Google Workspace account information" and "Configure activity report data collection using Splunk Web" sections in the above document.
## Troubleshooting
https://docs.splunk.com/Documentation/AddOns/released/GoogleWorkspace/Troubleshoot
-> Refer to the "No events appearing in the Splunk platform" section in the above document.
https://community.splunk.com/t5/Getting-Data-In/Why-is-Splunk-Add-on-for-Google-Workspace-inputs-getting-401/m-p/602874
Hello, we urgently need a local disaster recovery solution for Splunk and would like a best-practice explanation. The existing Splunk deployment consists of 3 search heads, 1 deployer, 1 master node, 1 DMC, 3 indexers, and 2 heavy forwarders. In this architecture, the search factor and replication factor are both 2, and there is existing data in place. The local disaster recovery requirements are: if the server room hosting the existing data center's Xinchuang SIEM system is shut down, the data must still be queryable from the disaster recovery room; and shutting down the newly built disaster recovery server room must not affect the use of the existing data center's SIEM system. RPO is 0 (no data can be lost), and RTO is recovery within 6 hours.
I am from Japan. Sorry for my poor English and lack of knowledge about Splunk. I received a Splunk Enterprise trial license and would like to import Palo Alto logs and issue alerts (via email, etc.), but I am not sure how to do this (manually importing past logs succeeded). I wonder if past logs can trigger an alert. About our environment: Splunk runs on a single all-in-one virtual server in our FJ Cloud (Fujitsu Cloud). There are no forwarders installed on other servers. I would be more than happy if you could let me know. Thank you for your support.
Hi, I am having some trouble understanding how to fetch a multiline pattern in a single event. I have a logfile in which I am searching for this pattern, which is scattered across multiple lines:

123456789102BP Tank: Bat from Surface = #07789*K00C0**************************************** 00003453534534534
****after Multiple Lines***
123456789107CSVSentinfo:L00Show your passport
****after Multiple Lines***
123456789110CSVSentinfo Data:z800
****after Multiple Lines***
123456789113CSVSentinfoToCollege:
****after Multiple Lines***
123456789117CSVSentinfoFromCollege:
****after Multiple Lines***
123456789120CSVSentinfo:G7006L
****after Multiple Lines***
123456789122CSVSentinfo:A0T0
****after Multiple Lines***
123456789124BP Tank: Bat to Surface L000passportAccepted

I have tried the query below to find all the occurrences, but no luck:

index=khisab_ustri sourcetype=sosnmega "*BP Tank: Bat from surface = *K00C0*"
| dedup _time
| rex field=_raw "(?ms)(?<time_string>\d{12})BP Tank: Bat from Surface .*K00C0\d{21}(?<kmu_str>\d{2})*"
| rex field=_raw "(?<PC_sTime>\d{12})CSVSentinfo:L00Show your passport*"
| rex field=_raw "(?<CP_sTime>\d{12})CSVSentinfo Data:z800*"
| rex field=_raw "(?<MTB_sTime>\d{12})CSVSentinfoToCollege:*"
| rex field=_raw "(?<MFB_sTime>\d{12})CSVSentinfoFromCollege:*"
| rex field=_raw "(?<PR_sTime>\d{12})CSVSentinfo:G7006L*"
| rex field=_raw "(?<JR_sTime>\d{12})CSVSentinfo:A0T0*"
| rex field=_raw "(?<MR_sTime>\d{12})BP Tank: Bat to Surface =.+L000passportAccepted*"
| table (PC_sTime- time_string),(CP_sTime- PC_sTime),(MTB_sTime-CP_sTime),(MFB_sTime-MTB_sTime),(PR_sTime- MFB_sTime),(JR_sTime-PR_sTime),(MR_sTime-JR_sTime)

Sample data:

123456789102BP Tank: Bat from Surface = #07789*K00C0**************************************** 00003453534534534
123456789103UniverseToMachine\0a<Ladbrdige>\0a <SurfaceTake>GOP</Ocnce>\0a <Final_Worl-ToDO>Firewallset</KuluopToset>\0a</
123456789105SetSurFacetoMost>7</DecideTomove>\0a <TakeaKooch>&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;</SurfaceBggien>\0a <Closethe Work>0</Csloethe Work>\0a
123456789107CSVSentinfo:L00Show your passport
123456789108BP Tank: Bat from Surface = close ticket
123456789109CSVSentinfo:Guide iunit
123456789110CSVSentinfo Data:z800
123456789111CSVGErt Infro"8900
123456789112CSGFajsh:984
123456789113CSVSentinfoToCollege:
123456789114CSVSentinfo Data:z800
123456789115CSVSentinfo Data:z800
123456789116Sem startedfrom Surface\0a<Surafce have a data>\0a <Surfacecame with Data>Ladbrdige</Ocnce>\0a <Ladbrdige>Ocnce</Final_Worl>\0a <KuluopToset>15284</DecideTomove>\0a <SurafceCall>\0a <wait>\0a <wating>EventSent</SurafceCall>\0a </wait>\0a </sa>\0a</Surafce have a data>\0a\0a
123456789117CSVSentinfoFromCollege:
123456789118CSVSentinfo:sadjhjhisd
123456789119CSVSentinfo:Loshy890
123456789120CSVSentinfo:G7006L
123456789121CSVSentinfo:8shhgbve
123456789122CSVSentinfo:A0T0
123456789123CSVSentinfo Data:accepted
123456789124BP Tank: Bat to Surface L000passportAccepted
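One thing that stands out (a minimal sketch of just the tail of the search, assuming the rex extractions above populate the *_sTime fields and the 12-digit prefixes are plain numbers that increase over time): table cannot do arithmetic, so the differences have to be computed with eval first and then listed by name.

| eval PC_delta=PC_sTime-time_string, CP_delta=CP_sTime-PC_sTime, MTB_delta=MTB_sTime-CP_sTime, MFB_delta=MFB_sTime-MTB_sTime, PR_delta=PR_sTime-MFB_sTime, JR_delta=JR_sTime-PR_sTime, MR_delta=MR_sTime-JR_sTime
| table time_string PC_delta CP_delta MTB_delta MFB_delta PR_delta JR_delta MR_delta

Also note that the base search uses "from surface" while the data says "from Surface"; search terms are case-insensitive so that part is fine, but the rex patterns themselves are case-sensitive and must match the data exactly.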
Dashboard Studio working with Reports and Time Range

@sainag_splunk I am currently using the new Dashboard Studio interface, and the dashboards make calls to saved reports in Splunk. Is there a way to have a time range work for the dashboard, but also have it apply to the reports? The issue we face is that we are able to set the reports in the Studio dashboard, but by default they are stuck as static reports. How can we add a time range input that will work with both the dashboard and the reports?

The users viewing this dashboard are third parties, people we do not want to give access to the index (for example, users outside the org), hence the reason the dashboard uses saved reports so the data is viewable. But as I mentioned, we hit the issue of changing the time range picker, since the saved reports show static results, whereas we want them to change as we specify a time range with the input. We are trying not to give third-party users access to Splunk indexes.

I also tried looking into embedded reports but found that "Embedded reports also cannot support real-time searches."
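In Dashboard Studio's source JSON, a saved-search data source can be given an explicit time range via queryParameters, which is one way to let a time range input drive the reports (a minimal sketch; ds_report, the report name, and the global_time token are placeholders for your own dashboard):

"dataSources": {
    "ds_report": {
        "type": "ds.savedSearch",
        "options": {
            "ref": "Your Saved Report Name",
            "queryParameters": {
                "earliest": "$global_time.earliest$",
                "latest": "$global_time.latest$"
            }
        }
    }
}

One caveat to test with your third-party users: overriding the time range dispatches a fresh run of the report's search instead of reusing its scheduled results, so check that the report's permissions (for example, run-as owner) still return results for those users.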
Hi, I have uploaded a new version of our app and it has been in pending status for over 24 hours. There are no errors in the compatibility report, so I'm not sure what's wrong here. I'm also not sure why the new version no longer supports Splunk Cloud; there were no code changes in the new version. Thanks. -M
I have been working on routing logs based on their source into different indexes. I configured the props.conf and transforms.conf below on my HF, but it didn't work. We currently follow the naming convention below for our CloudWatch log group names:

/starflow-app-logs-<platform-name>/<team-id>/<app-name>/<app-environment-name>

--------------------------------------------------------------------------
Example sources:
--------------------------------------------------------------------------
us-east-1:/starflow-app-logs/sandbox/test/prod
us-east-1:/starflow-app-logs-dev/sandbox/test/dev
us-east-1:/starflow-app-logs-stage/sandbox/test/stage

Note: We are currently receiving log data for the above use case from the us-east-1 region.

--------------------------------------------------------------------------
Condition:
--------------------------------------------------------------------------
If the source path contains a <team-id>, its logs should be routed to the corresponding <team-id>-based index, which already exists in our Splunk environment.

--------------------------------------------------------------------------
props.conf
--------------------------------------------------------------------------
[source::us-east-1:/starflow-app-logs*]
TRANSFORMS-set_starflow_logging = new_sourcetype, route_to_teamid_index

--------------------------------------------------------------------------
transforms.conf
--------------------------------------------------------------------------
[new_sourcetype]
REGEX = .*
SOURCE_KEY = source
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::aws:kinesis:starflow
WRITE_META = true

[route_to_teamid_index]
REGEX = us-east-1:\/starflow-app-logs(?:-[a-z]+)?\/([a-zA-Z0-9]+)\/
SOURCE_KEY = source
FORMAT = index::$1
DEST_KEY = _MetaData:Index

I'd be grateful for any feedback or suggestions to improve this configuration. Thanks in advance!
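For comparison, a minimal sketch of the routing stanza with the two details that most often break this (worth checking against the transforms.conf documentation for your version): metadata is referenced as MetaData:Source rather than source in SOURCE_KEY, and when DEST_KEY is _MetaData:Index the FORMAT must be the bare index name with no index:: prefix.

[route_to_teamid_index]
SOURCE_KEY = MetaData:Source
REGEX = us-east-1:\/starflow-app-logs(?:-[a-z]+)?\/([a-zA-Z0-9]+)\/
DEST_KEY = _MetaData:Index
FORMAT = $1

The same SOURCE_KEY change applies to the new_sourcetype stanza (and WRITE_META is not needed when DEST_KEY is set). Also confirm that this HF is the first full Splunk instance to parse the data, since index-time transforms only run where the events are cooked.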
This is more statement than question, but the community should be advised: Splunk Universal Forwarder 9.1.2 and 9.1.5 (those I have used and witnessed this occurring on; I cannot speak to others yet) does not abide by cron syntax 100% of the time. It deviates at a very low frequency, which may make repeated runs of scripted inputs difficult to notice or detect. I feel it's important the community is aware of this, as some scripts may be time sensitive, or you may expect a script to run only once in a reporting period so no deduplication was added -- amongst other circumstances where a double run might cause unintended impacts.

In my case I noticed it when doing a limited deploy of a Deployment Server GUID reset script. With only 8 targeted systems scheduled to run once per day, it was very easy to notice that it ran twice for multiple assets. Fortunately, I'd designed the script to create a bookmark so it wouldn't run more than once, so the UF could run it every day to catch new systems or name changes, and a second run wouldn't cause a flood of entries in the DS.

In my environment the scheduler prematurely ran the scripts roughly 2 seconds before the cron-scheduled hour-minute combination, so the following SPL can find when this occurred:

index=_internal sourcetype=splunkd reschedule_ms reschedule_ms<2000

Example: a scripted input is scheduled to run at "2 7 * * *" (07:02:00), but there was a run at 07:01:58 AND 07:02:00. It ran two seconds early, then rescheduled for the next scheduled interval, which happens to be the expected time (two seconds later), and the cycle repeats. It seemed to fluctuate and vary, and sometimes even resolve itself given enough time. It's difficult to know what influences it, or why after a correct run the number of milliseconds added is short by ~2000, but it does happen and may be affecting your scripts too.

Reported
I opened a ticket with Support, but there's no word on a permanent resolution to this "less than 1% of forwarders affected" issue so far, so I felt it was important to let the community know to check whether this is happening -- and whether it matters in your environment, because maybe a little duplication doesn't matter.

Workarounds
Changing from cron syntax to a number of seconds for the interval does work as a workaround, but that isn't always what you want -- sometimes you want a specific minute in an hour, not just hourly (whenever). Adding logic to the script to check the time is another possible workaround.
Hello colleagues! Have any of you integrated Cisco Talos as an intelligence source for Splunk Enterprise Security? Can you tell me the best way to do this?
Looking for props.conf / transforms.conf configuration guidance. The aim is to search logs from an HTTP Event Collector the same way we search regular logs; we don't want to be searching raw JSON on the search heads.

We're in the process of migrating from Splunk forwarders to logging-operator in k8s. The thing is, the Splunk forwarder uses log files and standard indexer discovery, whereas logging-operator uses stdout/stderr and must output to an HEC endpoint, meaning the logs arrive as JSON at the heavy forwarder. We want to use Splunk the same way we have over the years and want to avoid adapting alerts/dashboards etc. to the new JSON source.

OLD CONFIG AIMED AT THE INDEXERS (using the following config we get environment/site/node/team/pod as search-time extraction fields)

[vm.container.meta]
# source: /data/nodes/env1/site1/host1/logs/team1/env1/pod_name/localhost_access_log.log
CLEAN_KEYS = 0
REGEX = \/.*\/.*\/(.*)\/(.*)\/(.*)\/.*\/(.*)\/.*\/(.*)\/
FORMAT = environment::$1 site::$2 node::$3 team::$4 pod::$5
SOURCE_KEY = MetaData:Source
WRITE_META = true

SAMPLE LOG USING logging-operator

{
  "log": "ts=2024-10-15T15:22:44.548Z caller=scrape.go:1353 level=debug component=\"scrape manager\" scrape_pool=kubernetes-pods target=http://1.1.1.1:8050/_api/metrics msg=\"Scrape failed\" err=\"Get \\\"http://1.1.1.1:8050/_api/metrics\\\": dial tcp 1.1.1.1:8050: connect: connection refused\"\n",
  "stream": "stderr",
  "time": "2024-10-15T15:22:44.548801729Z",
  "environment": "env1",
  "node": "host1",
  "pod": "pod_name",
  "site": "site1",
  "team": "team1"
}
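Since the metadata that used to be cut out of the source path now arrives as top-level JSON keys, one search-time approach (a minimal sketch; the sourcetype name kube:container is a placeholder for whatever your HEC token or logging-operator output assigns) is to let JSON extraction provide environment/site/node/team/pod automatically and lift the inner log line into its own field:

props.conf (on the search heads, or in an app pushed to them)

[kube:container]
KV_MODE = json
EVAL-message = spath(_raw, "log")

The field names then line up with the old index-time extractions, so alerts and dashboards that filter on environment/site/node/team/pod should keep working without being rewritten for the JSON shape, and the message field gives existing per-line searches a target that looks like the old raw event.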
I just started a free trial and it's already horrible. For 30 minutes I've been running around in circles trying to figure out how to add data as per your docs: https://docs.splunk.com/Documentation/SplunkCloud/9.0.2305/SearchTutorial/Systemrequirements . I didn't get any link to Splunk Web, and the profile page is useless. At some point I got to a different part (splunk.my.site.com), but not only was it just as useless, how the F did I even get there? All I see is 'You have no active instances at this time.'
The query is to retrieve failed test cases matching an exception message. Out of 6 failed test cases, one test has an exception and the rest are skipped with the message 'Test was skipped'. Below is the data of one event:

{
  "suite_build_id": "20241015.12",
  "suite_build_name": "pipeline_name",
  "unit_test_name_failed": [
    {
      "message": "Failed to save the shipping address. An unexpected error occurred. Please try again later or contact HP Support for assistance.",
      "test_rail_name": "test_printer_order_placement_magento",
      "test_result": "fail"
    },
    {
      "message": "Test was skipped",
      "test_rail_name": "test_updation_of_access_token",
    },
    {
      "message": "Test was skipped",
      "test_name": "test_printer_and_user_details",
      "test_rail_name": "test_printer_and_user_details",
    }
  ]
}

Now, I want to display a result showing test_rail_name and the exception message that matches the exception. Below is the query that I tried:

index="eqt-e2e" suite_build_name="pipeline-name" suite_build_number="20241015.12"
| mvexpand unit_test_name_failed{}.message
| mvexpand unit_test_name_failed{}.test_rail_name
| search unit_test_name_failed{}.message="Failed to save the shipping address. An unexpected error occurred. Please try again later or contact HP Support for assistance."
| table suite_build_number, suite_build_start_time, unit_test_name_failed{}.test_rail_name, unit_test_name_failed{}.message
| rename suite_build_number AS "Pipeline Number", suite_build_start_time AS "Pipeline Date", unit_test_name_failed{}.test_rail_name AS "Test Name", unit_test_name_failed{}.message AS "Exception Message"

The result should have been 1 event, but it returns 6 events. I understand that mvexpand works on only one multivalue field at a time, and here I have 2 multivalue fields. Let me know if there is a solution for retrieving the data.
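One pattern that fits this shape (a minimal sketch, assuming the JSON above is the event's raw text): expand the whole array element once so each test's message and test_rail_name stay paired, then extract the inner fields from the expanded element and filter.

index="eqt-e2e" suite_build_name="pipeline-name" suite_build_number="20241015.12"
| spath path=unit_test_name_failed{} output=failed_test
| mvexpand failed_test
| spath input=failed_test
| search message="Failed to save the shipping address*"
| table suite_build_number, suite_build_start_time, test_rail_name, message
| rename suite_build_number AS "Pipeline Number", suite_build_start_time AS "Pipeline Date", test_rail_name AS "Test Name", message AS "Exception Message"

Because both values come from the same expanded element, they can no longer cross-multiply, so a single matching test yields a single row.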