All Topics


Hi Splunk Community, I am having issues with Splunk DB Connect 3.18.0 not sending data. I was able to connect the DB Connect app to the database and run queries properly, but I have had no luck seeing the data in Splunk Cloud. I am able to send other logs and data to Splunk Cloud with no issues. Thanks!
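When a DB Connect input appears to run but no events arrive, a common first step is to check DB Connect's own logs, which are indexed into _internal on the instance running the app. A minimal troubleshooting search, assuming default DB Connect logging (the source pattern may vary by version):

    index=_internal source=*splunk_app_db_connect* (ERROR OR WARN)
    | stats count by source

If errors appear for the scheduled input, they usually point at which stage (connection, checkpoint, or output) is failing.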
We’re excited to introduce two powerful new search features, now generally available for Splunk Cloud Platform customers globally. Federated Search for Amazon S3 and the Splunk AI Assistant for SPL app are Splunk’s latest innovations to enhance search functionality, streamline data access, and drive more effective insights. Let’s dive in!

Federated Search for Amazon S3

Federated Search for Amazon S3 lets you search data stored in Amazon S3 directly from Splunk Cloud Platform without the need to ingest it first. You can now access large amounts of historical data cost-effectively and efficiently, making it easier to gain insights from your data faster. (For a feel of the query experience, see the search sketch after this post.)

Key Benefits
- Reduced Storage Costs: Instead of ingesting all your data into Splunk, Federated Search lets you access data stored in Amazon S3 directly. This enables you to save significantly on storage costs by keeping your data in cost-effective Amazon S3 buckets while still allowing Splunk to query it.
- Improved Time-to-Detection: Perform investigations directly on historical data stored in Amazon S3, without the time-consuming process of rehydrating or transferring it into Splunk. This allows for quicker detection and response, especially for archival or low-value data that may not need to be stored in Splunk continuously.
- Enhanced Compliance: Federated Search lets you keep your data where it is for investigations that require as-needed access to historical, archival, or low-value data, giving you better control over compliance and security.

Additional Resources
- Review the Federated Search for S3 Tech Brief to learn more.
- Watch this webinar recording to see how these features can transform your Splunk workflows: learn how to set up Federated Search for Amazon S3 and when to use it, and experience a live demo.
- Take a look at this Lantern article to learn how to use Federated Search for Amazon S3 with Edge Processor.

Ready to get started? Federated Search for Amazon S3 requires a Data Scan Units license for your Splunk Cloud Platform stack. Contact your Splunk sales representative to learn more.

Splunk AI Assistant for SPL App

The Splunk AI Assistant for SPL app allows you to generate and explain Splunk Search Processing Language (SPL) queries using natural language. Leverage the power of generative AI to write, learn, and understand SPL more efficiently. New to Splunk? You can now onboard and learn Splunk quickly while reducing the burden on Splunk Admins to answer questions. Experienced user? You can get a head start by leveraging the power of generative AI to get your job done even faster.

Key Features (watch it in action)
- Quickly generate SPL from natural language: The AI Assistant translates your natural language prompts into working SPL queries, drastically reducing the time needed to write queries.
- Understand SPL with ease: Struggling to understand an SPL query? Break it down into easy steps with a detailed explanation of how the query works, what it does, and the results it generates.
- Interactive help with Splunk Docs integration: Ask questions about SPL features and concepts directly in the app, with responses powered by AI and Splunk Docs integration.

Ready to get started? Simply complete the user agreement here to get provisioned for the app, then head to Splunkbase to download the app and install it on your activated cloud stack. Please reach out to mlsupport@splunk.com with any questions or feedback.

Upcoming Search Events

Check out the events below and register now to secure your spot!
- Ask the Experts Office Hours: Splunk Search & New SPL Innovations | Wed, Nov 21, 2024 at 1pm PT: Join this special session where experts will begin by showcasing the latest innovations in search, then ask questions and get live, personalized guidance from technical Splunk experts.
- Tech Talk: Generative AI for SPL - Faster Results | Tues, Oct 29, 2024 at 11am PT: Join this technical deep-dive webinar to learn about Splunk’s differentiating approach to GenAI, get a technical review of the LLM under the hood, watch a live demo of the AI Assistant, and learn how to activate it.

Happy Splunking!
The Splunk Search Team
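For readers curious what querying S3-resident data looks like, Federated Search for Amazon S3 is driven by the sdselect search command. A minimal sketch, where federated:my_s3_dataset is a hypothetical federated index already configured against an S3 location (consult the Tech Brief above for the exact syntax supported on your stack):

    | sdselect count, status FROM federated:my_s3_dataset GROUP BY status

Explicit time bounds on the query then scope how much S3 data is scanned, which is what the Data Scan Units license meters.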
The environment I'm monitoring has a large number of custom database metrics. For those not familiar, these are queries run against the database by the AppDynamics agent, whose results are then displayed in custom dashboards. This works great for us. The problem is that our environment is complex and changes frequently. The custom metrics are currently maintained by hand (someone has to go in and modify them when the environment changes), and there is no import/export option in the UI. I've read through the available API, but I'm not able to find a way to upload or download a custom database metric. Alternatively, is there a way to perform a variable substitution for the database server and value in the query? Anything that could make this less of a manual process would help. Thanks
I'm trying to get Splunk Enterprise to log the creation of a user on the same system where Splunk is installed. My Splunk version is 9.3.1. Alongside this install, I've installed the latest Universal Forwarder (Windows) on localhost (127.0.0.1). When installing:
- I skip the SSL page and click "Next"
- select "Local System" and click "Next"
- check all items under "Windows Log Events" and click "Next"
- generate an admin account and password
- leave the "Deployment Server" settings empty
- enter "127.0.0.1:9997" as host and port for "Receiving Indexer"
- finish the installer
Then I create a user (net user /add <user>) in CMD. After this step I return to Splunk Search and enter * as the search criteria, but nothing is found. Even when I search for the username I added, the software finds nothing. Can someone tell me what I'm doing wrong or what the issue can be? Thanks! Gerd
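For reference, a Splunk Enterprise instance does not listen for forwarder traffic on port 9997 by default; receiving has to be enabled explicitly (Settings > Forwarding and receiving > Configure receiving, or via the CLI). A minimal sketch, assuming a default Windows install path:

    cd "C:\Program Files\Splunk\bin"
    splunk enable listen 9997

Once receiving is enabled, a search like index=_internal host=<forwarder-hostname> should show the forwarder's own internal logs even before any Windows events arrive, which makes a useful connectivity check.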
I have the below events, where I need to display only the event which has action=test and category=testdata.

test {
  line1: 1
  "action": "test",
  "category": "testdata",
}
test1 {
  line1: 1
  "action": "event",
  "category": "testdata",
}
test2 {
  line1: 1
  "action": "test",
  "category": "duplicate_data",
}
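Since the events are not strict JSON, one approach is to extract the two fields with rex and filter on them; a minimal sketch, with the index and sourcetype names as placeholders:

    index=my_index sourcetype=my_sourcetype
    | rex "\"action\":\s*\"(?<action>[^\"]+)\""
    | rex "\"category\":\s*\"(?<category>[^\"]+)\""
    | where action="test" AND category="testdata"

If action and category are already extracted at search time, the rex steps can be dropped and the filter reduces to a plain search: action=test category=testdata.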
I have 5 forwarders forwarding data to my Splunk server, but when I log into the server only two of them are listed. When I do a TCP dump on the server I can see a missing forwarder communicating and sending data, but when I log into the web UI that forwarder is not listed. Does anybody know what this might be? The configs on all forwarders are the same.
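One way to cross-check what the indexer itself has seen, independent of the UI's forwarder list, is to query the tcpin_connections metrics in _internal; a sketch, assuming default internal logging:

    index=_internal source=*metrics.log* group=tcpin_connections
    | stats latest(_time) as last_seen by hostname, sourceIp
    | fieldformat last_seen = strftime(last_seen, "%F %T")

If a forwarder shows up here but not in the UI, the gap is usually in how the monitoring page is populated rather than in data delivery.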
Hi All, newbie here - sorry if my subject is poorly worded, I'm a little confused! I'm trying to add a field to the table below that will show how long it's been since the last test failed. This table also contains a column that shows the last time a test ran (pass or fail). Here's my current search:

index="redacted"
| rex field=runtime "^(?<seconds>\w*.\w*)"
| stats latest(result), latest(_time) as last_checked, latest(runtime) as lastRuntime, avg(seconds) as averageRuntime by test
| eval averageRuntime=round(averageRuntime,0)
| strcat averageRuntime " seconds." averageRuntime
| `timesince(last_checked,last_checked)`

Any ideas or tips are greatly appreciated. Thanks in advance.
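One way to get "time since last failure" in the same stats pass is a conditional eval inside the aggregation; a sketch, assuming the result field contains the literal value "fail" for failed runs (adjust to your actual values):

    index="redacted"
    | stats latest(result) as last_result,
            latest(_time) as last_checked,
            max(eval(if(result=="fail", _time, null()))) as last_failed
            by test
    | eval secsSinceFailure = now() - last_failed

Tests that have never failed will have a null last_failed, which you can label with fillnull or a follow-up eval.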
Hi all, I need to create an alert that will be triggered when a latency threshold is breached for a sustained 30 minutes. I have been researching how to incorporate streamstats into my query, and so far I have come up with this:

index="x" source="y" EndtoEnd
| rex "(?<e2e_p>\d+)ms"
| where isnotnull(e2e_p)
| streamstats time_window=30m avg(e2e_p) as avg_e2e_p
| where avg_e2e_p > 500

The condition doesn't happen often, but I'll work with the team that supports the app to simulate the condition once the query is finalized. I have never used streamstats before, but it's what has come up in my search for a way to incorporate a sliding window into an SPL query. Thank you in advance for taking the time to help with this.
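Note that an average over the window can stay above the threshold even if latency briefly dipped below it. If "sustained" should mean every sample in the 30 minutes breached, a minimum over the same window is the stricter test; a sketch under that interpretation:

    index="x" source="y" EndtoEnd
    | rex "(?<e2e_p>\d+)ms"
    | streamstats time_window=30m min(e2e_p) as min_e2e_p
    | where min_e2e_p > 500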
Hello, my team has a search that uses a field called regex, containing many different regular expressions to match against a field called string to identify key words we are looking for. Example:

| eval regex=split(regex, "|")
| mvexpand regex
| where match(string, regex)

The regex field contains 80+ different patterns to match on certain key words. The mvexpand causes one event to be split into 80+ different events, just to potentially match on one field. Because of this, we hit mvexpand's memory limitations, causing events to be dropped.

I'm trying to see if it is possible to match the patterns in the regex field against the string field without having to use mvexpand to break it apart. Recommended solutions such as the following did not work:

| eval true = mvmap(regex, if(match(regex, query),regex,"0"))
| eval true = mvfilter(true="0")
| where ISNOTNULL(true)
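For what it's worth, a variant of the mvmap approach can work without mvexpand if the arguments line up: match(SUBJECT, REGEX) tests the subject string against the pattern, and returning null() for non-matches lets mvmap drop them. A sketch under that assumption:

    | eval hits = mvmap(regex, if(match(string, regex), regex, null()))
    | where isnotnull(hits)

Here hits ends up as a multivalue field of the patterns that matched, and events with no match are filtered out, so no per-pattern event duplication is needed.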
Question for Omega Core Audit: will I (as the app developer) get notified?
Hi all, I am trying to build a query to monitor my indexer rolling restart. I would like to know how much time it took, when it started, and when it ended. I can only see when it started, but cannot see messages for when it completed:

INFO CMMaster [3340464 TcpChannelThread] - Starting a rolling restart of the peers.
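The exact completion message varies by Splunk version, so it is worth inspecting the cluster manager's nearby CMMaster lines in splunkd.log first. Once both strings are known, a duration query might look like this sketch (the "completed" phrasing here is an assumption to adjust to what your logs actually say):

    index=_internal sourcetype=splunkd component=CMMaster "rolling restart"
    | eval phase = case(searchmatch("Starting a rolling restart"), "start",
                        searchmatch("completed"), "end")
    | where isnotnull(phase)
    | stats min(_time) as start_time, max(_time) as end_time
    | eval duration_min = round((end_time - start_time) / 60, 1)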
Dear team, is there any recommended way to index .zip files from Azure Blob Storage via the Splunk Add-on for Microsoft Cloud Services? If it is impossible directly, is there any preferred workaround to unzip them somehow? Big thanks!!!
Hello, what are the best practices for configuring Splunk memory and swap partition space? Current resources: the three indexer nodes each have 24 cores, 64 GB RAM, and 2 TB SSD, with 10 Gb networking. Each indexer node has 64 GB of physical memory and 8 GB of swap. The swap policy requires physical memory usage to exceed 70% before swap is used. The current situation is that only 1.6 GB of physical memory is used, but swap usage is 3.8 GB. The following is the alarm information:

[Alarm Name] system.swap.used_pct
[Warning content] The usage rate of the swap partition has reached 39.76%, and the AVG has exceeded the threshold of 20.0% in the past minute.

I have some questions:
1. Why is swap usage so much higher than memory usage?
2. How should memory and swap partition space be configured, and what are the best practices?
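As general Linux guidance rather than an official Splunk setting: the kernel can swap out idle pages long before any percentage threshold depending on vm.swappiness, which would explain swap in use while physical memory looks mostly free. A hedged sketch of the common tuning for indexers:

    # /etc/sysctl.d/99-splunk.conf -- illustrative value, validate for your environment
    # Lower swappiness so the kernel prefers reclaiming cache over swapping processes
    vm.swappiness = 10

Apply with sysctl --system (or reboot), and check which processes actually hold the swap before changing anything.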
This is regarding the integration between Splunk and Google Workspace. I have followed the documentation below to configure the integration, but the log data is not being ingested into the specified index in Splunk, and I cannot view the Google Workspace logs in Splunk. Additionally, there are no apparent errors after the integration setup. I would appreciate any advice or precautions to take when installing the Add-on for Google Workspace.

# Additional info
Upon checking the log files, the following errors were found. However, no 40x errors were found.

Could not refresh service account credentials because of ('unauthorized_client: Client is unauthorized to retrieve access tokens using this method, or client not authorized for any of the scopes requested.', {'error': 'unauthorized_client', 'error_description': 'Client is unauthorized to retrieve access tokens using this method, or client not authorized for any of the scopes requested.'})

# Referenced Documentation
## Installation of the Add-on for Google Workspace
https://docs.splunk.com/Documentation/AddOns/released/GoogleWorkspace/Installation
## Issuing Authentication Keys for Accounts Created on the Add-on for Google Workspace
https://docs.splunk.com/Documentation/AddOns/released/GoogleWorkspace/Configureinputs1
-> Refer to the "Google Workspace activity report prerequisites" section in the above document.
## Add-on Configuration
https://docs.splunk.com/Documentation/AddOns/released/GoogleWorkspace/Configureinputs2
-> Refer to the "Add your Google Workspace account information" and "Configure activity report data collection using Splunk Web" sections in the above document.
## Troubleshooting
https://docs.splunk.com/Documentation/AddOns/released/GoogleWorkspace/Troubleshoot
-> Refer to the "No events appearing in the Splunk platform" section in the above document.
https://community.splunk.com/t5/Getting-Data-In/Why-is-Splunk-Add-on-for-Google-Workspace-inputs-getting-401/m-p/602874
Hello, we urgently need a Splunk local disaster recovery solution and hope to receive a best-practice explanation. The existing Splunk deployment consists of 3 search heads, 1 deployer, 1 master node, 1 DMC, 3 indexers, and 2 heavy forwarders. In this architecture, the search factor and replication factor are both 2, and there is existing data. The local disaster recovery requirements are: if the server room hosting the existing data center's Xinchuang SIEM system goes down, the data in the disaster recovery room must remain searchable; and an outage of the newly built disaster recovery room must not affect the use of the existing data center's SIEM system. RPO is 0 (no data loss), and RTO is recovery within 6 hours.
I am from Japan; sorry for my poor English and lack of knowledge about Splunk. I received a Splunk Enterprise trial license and would like to import Palo Alto logs and issue alerts (via email, etc.), but I am not sure how to do this (manually importing past logs succeeded). I wonder whether alerts can be triggered on those past logs. About our environment: it is a single all-in-one virtual server in FJ Cloud (Fujitsu Cloud), and Splunk is running there. There are no forwarders installed on other servers. I would be more than happy if you could let me know. Thank you for your support.
Hi, I am having some trouble understanding how to fetch a multiline pattern in a single event. I have a logfile in which I am searching for this pattern, which is scattered across multiple lines:

123456789102BP Tank: Bat from Surface = #07789*K00C0**************************************** 00003453534534534
****after Multiple Lines***
123456789107CSVSentinfo:L00Show your passport
****after Multiple Lines***
123456789110CSVSentinfo Data:z800
****after Multiple Lines***
123456789113CSVSentinfoToCollege:
****after Multiple Lines***
123456789117CSVSentinfoFromCollege:
****after Multiple Lines***
123456789120CSVSentinfo:G7006L
****after Multiple Lines***
123456789122CSVSentinfo:A0T0
****after Multiple Lines***
123456789124BP Tank: Bat to Surface L000passportAccepted

I have tried the query below to find all the occurrences, but no luck (the time differences are computed with eval, since table cannot take arithmetic expressions):

index=khisab_ustri sourcetype=sosnmega "*BP Tank: Bat from surface = *K00C0*"
| dedup _time
| rex field=_raw "(?ms)(?<time_string>\d{12})BP Tank: Bat from Surface .*K00C0\d{21}(?<kmu_str>\d{2})*"
| rex field=_raw "(?<PC_sTime>\d{12})CSVSentinfo:L00Show your passport*"
| rex field=_raw "(?<CP_sTime>\d{12})CSVSentinfo Data:z800*"
| rex field=_raw "(?<MTB_sTime>\d{12})CSVSentinfoToCollege:*"
| rex field=_raw "(?<MFB_sTime>\d{12})CSVSentinfoFromCollege:*"
| rex field=_raw "(?<PR_sTime>\d{12})CSVSentinfo:G7006L*"
| rex field=_raw "(?<JR_sTime>\d{12})CSVSentinfo:A0T0*"
| rex field=_raw "(?<MR_sTime>\d{12})BP Tank: Bat to Surface =.+L000passportAccepted*"
| eval PC_delta = PC_sTime - time_string,
       CP_delta = CP_sTime - PC_sTime,
       MTB_delta = MTB_sTime - CP_sTime,
       MFB_delta = MFB_sTime - MTB_sTime,
       PR_delta = PR_sTime - MFB_sTime,
       JR_delta = JR_sTime - PR_sTime,
       MR_delta = MR_sTime - JR_sTime
| table PC_delta, CP_delta, MTB_delta, MFB_delta, PR_delta, JR_delta, MR_delta

Sample data:

123456789102BP Tank: Bat from Surface = #07789*K00C0**************************************** 00003453534534534
123456789103UniverseToMachine\0a<Ladbrdige>\0a <SurfaceTake>GOP</Ocnce>\0a <Final_Worl-ToDO>Firewallset</KuluopToset>\0a</
123456789105SetSurFacetoMost>7</DecideTomove>\0a <TakeaKooch>&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;&#32;</SurfaceBggien>\0a <Closethe Work>0</Csloethe Work>\0a
123456789107CSVSentinfo:L00Show your passport
123456789108BP Tank: Bat from Surface = close ticket
123456789109CSVSentinfo:Guide iunit
123456789110CSVSentinfo Data:z800
123456789111CSVGErt Infro"8900
123456789112CSGFajsh:984
123456789113CSVSentinfoToCollege:
123456789114CSVSentinfo Data:z800
123456789115CSVSentinfo Data:z800
123456789116Sem startedfrom Surface\0a<Surafce have a data>\0a <Surfacecame with Data>Ladbrdige</Ocnce>\0a <Ladbrdige>Ocnce</Final_Worl>\0a <KuluopToset>15284</DecideTomove>\0a <SurafceCall>\0a <wait>\0a <wating>EventSent</SurafceCall>\0a </wait>\0a </sa>\0a</Surafce have a data>\0a\0a
123456789117CSVSentinfoFromCollege:
123456789118CSVSentinfo:sadjhjhisd
123456789119CSVSentinfo:Loshy890
123456789120CSVSentinfo:G7006L
123456789121CSVSentinfo:8shhgbve
123456789122CSVSentinfo:A0T0
123456789123CSVSentinfo Data:accepted
123456789124BP Tank: Bat to Surface L000passportAccepted
Dashboard Studio working with Reports and Time Range

@sainag_splunk I am currently using the new Dashboard Studio interface, and my dashboards make calls to saved reports in Splunk. Is there a way to have a time range picker work for the dashboard and also apply to the reports? The issue we face is that we are able to add the reports to the Studio dashboard, but by default they are stuck as static reports. How can we add a time range input that will work with both the dashboard and the reports?

The users viewing this dashboard are third parties, people we do not want to give access to the index (for example, users outside the org), hence the reason the dashboard uses saved reports whose content is viewable. But as mentioned, we face the issue that the time range picker does not change the saved reports, which display statically, whereas we want them to change as we specify a time range with the input. We are trying not to give third-party users access to Splunk indexes.

I also tried looking into embedded reports, but found that "Embedded reports also cannot support real-time searches."
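For context on why the reports appear static: in Dashboard Studio source JSON, a saved report is referenced with a ds.savedSearch data source, which runs with the report's own saved time range, e.g.:

    "dataSources": {
        "ds_report": {
            "type": "ds.savedSearch",
            "options": { "ref": "My Report Name" }
        }
    }

One commonly suggested workaround (a sketch, not an official pattern, and the permissions implications for third-party viewers still need checking) is to switch to an inline ds.search that wraps the report with the savedsearch command and binds the picker tokens:

    "ds_report": {
        "type": "ds.search",
        "options": {
            "query": "| savedsearch \"My Report Name\"",
            "queryParameters": {
                "earliest": "$global_time.earliest$",
                "latest": "$global_time.latest$"
            }
        }
    }

Here global_time is the assumed ID of the time range input, and "My Report Name" is a placeholder for the actual report.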
Hi, I have uploaded a new version of our app and it has been in pending status for over 24 hours. There are no errors in the compatibility report, so I'm not sure what's wrong here. Also, I am not sure why the new version doesn't support Splunk Cloud anymore; there were no code changes in the new version. Thanks. -M
I have been working on routing logs based on their source into different indexes. I configured the props.conf and transforms.conf below on my HF, but it didn't work. We currently follow the naming convention below for our CloudWatch log group names:

/starflow-app-logs-<platform-name>/<team-id>/<app-name>/<app-environment-name>

--------------------------------------------------------------------------
Example sources:
--------------------------------------------------------------------------
us-east-1:/starflow-app-logs/sandbox/test/prod
us-east-1:/starflow-app-logs-dev/sandbox/test/dev
us-east-1:/starflow-app-logs-stage/sandbox/test/stage

Note: We are currently receiving log data for the above use case from the us-east-1 region.

--------------------------------------------------------------------------
Condition:
--------------------------------------------------------------------------
If the source path contains a <team-id>, logs should be routed to the respective index in Splunk. All logs whose source path contains a given <team-id> will be routed to the same <team-id>-based index, which already exists in our Splunk environment.

--------------------------------------------------------------------------
props.conf
--------------------------------------------------------------------------
[source::us-east-1:/starflow-app-logs...]
TRANSFORMS-set_starflow_logging = new_sourcetype, route_to_teamid_index

--------------------------------------------------------------------------
transforms.conf
--------------------------------------------------------------------------
[new_sourcetype]
REGEX = .*
SOURCE_KEY = MetaData:Source
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::aws:kinesis:starflow

[route_to_teamid_index]
REGEX = us-east-1:\/starflow-app-logs(?:-[a-z]+)?\/([a-zA-Z0-9]+)\/
SOURCE_KEY = MetaData:Source
DEST_KEY = _MetaData:Index
FORMAT = $1

I'd be grateful for any feedback or suggestions to improve this configuration. Thanks in advance!
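After restarting the HF, a quick way to verify both transforms from the search head (the index names here are whatever your <team-id> indexes are actually called):

    index=* source="us-east-1:/starflow-app-logs*"
    | stats count by index, sourcetype, source

Also worth noting as a caveat: index-time routing like this only applies where parsing happens, so if these events arrive at the HF already parsed (for example, cooked data forwarded from another heavy forwarder), the transforms on this HF will not fire.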