All Topics


I am using the request-snapshots API call. I would like to know which node the snapshot came from. The response does not seem to contain that data directly, but "callChain" seems close. I've figured out that the Component number in the call chain corresponds to a tier, and I know how to look up the mapping. There is also a "Th:nnnn" in the call chain, but I don't know what it is. A thread? What can I do with that? I know this info exists because it's in the UI. Thanks!
Hi everyone, I'm working on a use case where I need to drop events that are larger than 10,000 bytes before they get indexed in Splunk. I know about the TRUNCATE setting in props.conf, which limits how much of an event is indexed, but it doesn't actually prevent or drop the event; it just truncates it. My goal is to completely drop large events to avoid ingesting them at all. So far, I haven't found a built-in way to drop events purely based on size using transforms.conf or regex routing. I'm wondering:

Is there any supported way to do this natively in Splunk?
Can this be done using a heavy forwarder or a scripted/modular input?
Has anyone solved this with a custom ingestion pipeline or pre-filter logic?

Any guidance or examples would be greatly appreciated!
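For reference, one commonly used approach is to route oversized events to Splunk's nullQueue with a size-matching regex; this runs at parse time, so it must be applied on a heavy forwarder or indexer, not a universal forwarder. A minimal sketch, assuming a placeholder sourcetype of your_sourcetype and the 10,000-byte threshold:

# props.conf
[your_sourcetype]
TRANSFORMS-drop_large = drop_large_events

# transforms.conf
[drop_large_events]
# (?s) lets . match newlines; match any event of 10,000+ characters.
REGEX = (?s)^.{10000,}
DEST_KEY = queue
FORMAT = nullQueue

Note that the regex counts characters rather than bytes, so the threshold is approximate for multi-byte encodings.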
After setting up the DB Connect configuration and updating my Java path, I was faced with another error message: the Task Server is currently unavailable, with the details saying "ValueError: embedded null character" when validating the java command. Any help would be appreciated.
We have data going to Splunk where we query a number of files with varying numbers of fields (sometimes over 100 per file), and have a generic dashboard set up to display them. We use the first line of the query output for the headings of the files, but the field names are very short and not descriptive. Since this is done via ODBC, we don't have direct access to the more descriptive column text. So we have, for example, a file coming in with fields F1, F2, ..., F100. We are able to get the descriptive field names from SYSCOLUMNS into the form "filename, fieldname, fielddesc". Is there a reasonable way to display a table in Splunk that shows the fielddesc for each field name?
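A minimal sketch of one possible approach, assuming the SYSCOLUMNS output is saved as a lookup named field_descriptions.csv with columns filename, fieldname, fielddesc (all names here are placeholders): summarize one file's fields with fieldsummary, then decorate each row with its description.

index=mydata source="myfile1"
| fieldsummary
| rename field AS fieldname
| eval filename="myfile1"
| lookup field_descriptions.csv filename fieldname OUTPUT fielddesc
| table fieldname fielddesc count

fieldsummary emits one row per field, so the result is a name-to-description table you can show alongside the dashboard.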
The Support Portal is broken and I am unable to submit a case because one of the required fields cannot be selected (see attached image): "Splunk Support access to your company data: --". I've emailed support@splunk.com, which was suggested in other community posts, but it has now been 2 months and several chase-up emails, and still no response from support.
Hi, I want to run a PowerShell script on a Windows universal forwarder according to a cron schedule. My input looks similar to this:

[powershell://Test]
script = . "$SplunkHome\etc\apps\test\bin\test.ps1"
schedule = */15 * * * *
index = test

Besides running every 15 minutes as it should, I noticed that the script also runs every time Splunk starts. https://docs.splunk.com/Documentation/Splunk/latest/Admin/Inputsconf says: "Regardless of which option you choose, the command or script always runs once when the instance starts." I don't want that. I don't want the script to run when Splunk starts. Is there any way to disable that?
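There is no inputs.conf setting that suppresses that initial run (the docs passage quoted above presents it as unconditional), so one workaround is to make the script itself exit quietly when invoked off-schedule. A sketch, assuming the 15-minute schedule shown above:

# Workaround sketch: exit unless the current minute is on a quarter hour,
# so the forced run at Splunk startup produces no output.
if ((Get-Date).Minute % 15 -ne 0) { return }
# ...existing collection logic goes here...

The trade-off is a duplicate run if Splunk happens to start exactly on a quarter hour.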
Dear Splunk Community, I'm currently facing an urgent issue in my Splunk environment: my storage utilization has reached 95%, which threatens system continuity and performance. I plan to move older data to external storage before it's too late, but I haven't yet implemented a bucket policy to automate time-based data retention. I would greatly appreciate your expertise on:

Best practices for safely and efficiently migrating old data from my current Splunk indexes to external storage.
Recommended scripts or Splunkbase apps that facilitate this process.
How to ensure continued access to the migrated data when needed, without impacting search performance.
Any additional suggestions, practical examples, or links to detailed documentation.

Thank you in advance for your time and assistance. Kind regards,
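For the retention side, the built-in mechanism is indexes.conf: frozenTimePeriodInSecs ages buckets out by time, and coldToFrozenDir makes Splunk copy them to external storage instead of deleting them when they freeze. A sketch with placeholder values:

# indexes.conf on the indexers
[your_index]
# Freeze buckets whose newest event is older than ~180 days (seconds).
frozenTimePeriodInSecs = 15552000
# Copy frozen buckets here (e.g. an NFS mount) instead of deleting them.
coldToFrozenDir = /mnt/external/splunk_frozen/your_index

Frozen buckets are not searchable; to query them again, move them into the index's thaweddb directory and run "splunk rebuild" on each bucket.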
Hi, I am looking to extract all Health Rule violations in AppDynamics (Servers, Application, EUM, and so on). Currently I can only see how to pull violations from a specific application. I need to understand how to use the API to pull all violations for a specified time period; if not through the API, is any other method available? Also, is an event generated for each violation, and if so, where is it stored and viewed?
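AppDynamics exposes health rule violations per application through the Controller REST API, so covering everything usually means enumerating the applications first and looping over them. A sketch with placeholder host, port, and credentials:

# List applications, then pull one application's violations for the last 24h.
curl -u user@customer1:password "https://<controller-host>:<port>/controller/rest/applications?output=JSON"
curl -u user@customer1:password "https://<controller-host>:<port>/controller/rest/applications/<app_name>/problems/healthrule-violations?time-range-type=BEFORE_NOW&duration-in-mins=1440&output=JSON"

How server and EUM violations surface through this endpoint varies by version, so treat the per-application loop as the baseline and verify coverage against the UI.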
Hi Splunk Community, I would appreciate your guidance regarding enabling Scheduled PDF Delivery in Splunk. Currently, the option does not appear for my Classic (Simple XML) dashboard, and I'm unsure how to enable or configure it correctly.
Hello Friends, I am trying to join two logs within the same index using a transaction ID (here called X_Correlation_ID), but the subsearch is returning more than 3000K rows, so it is not working. Can someone please help me with another way to combine the two logs without using the "join" command?

index=xyz X_App_ID=abc API_NAME=abc_123 NOT externalURL
| rename X_Correlation_ID AS ID
| table ID
| join ID
    [search index=xyz "xmlResponseMapping"
    | rename X_Correlation_ID AS ID
    | table accountType, accountSubType, ID]
| table ID, accountType, accountSubType
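A common join-free pattern is to search both event sets in one pass, tag which side each event came from, and let stats collapse them by ID; unlike join, stats is not subject to the subsearch row limit. A sketch based on the search above:

index=xyz ((X_App_ID=abc API_NAME=abc_123 NOT externalURL) OR "xmlResponseMapping")
| rename X_Correlation_ID AS ID
| eval side=if(searchmatch("xmlResponseMapping"), "response", "request")
| stats values(accountType) AS accountType values(accountSubType) AS accountSubType dc(side) AS sides by ID
| where sides=2
| table ID accountType accountSubType

The where sides=2 clause keeps only IDs seen in both event sets, mirroring the inner join.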
When using the Field Extractor, can you use the same name for a field? Will it append to the field created by the original extraction? Example: I am extracting from the _raw data and found that some of the _raw data didn't match when I highlighted it. When using regex match I was getting the red X, as in the example below, even though it should have been captured since both logs are identical in pattern. So I extracted twice on a single field, over two data sets. Will it append, adding onto the field of data to look for?
I'm attempting to speak with someone in sales, but I can't seem to get ahold of anyone. Does anyone have tips to help expedite this?
Posting this in case other folks run into it. It's possible for an app to ship an alert disabled in such a way that when any user tries to enable it by going to the manager and selecting "Edit > Enable", it doesn't work. Instead of enabling the alert, nothing happens at all: you click the green button and nothing happens. Looking at the browser console, there are no errors when this happens, and the JavaScript makes no attempt to post anything at all to Splunk. The question has two parts:

-- What is the root cause of this, and how can folks avoid accidentally shipping app content like this?
-- What workaround might exist for end users who need to enable the disabled alert?
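As a workaround when the button does nothing, the saved search can usually be enabled directly over REST, which writes disabled = 0 into the app's local directory. A sketch with placeholder app name, alert name, and credentials (URL-encode the alert name if it contains spaces):

curl -k -u admin https://localhost:8089/servicesNS/nobody/<app_name>/saved/searches/<alert_name> -d disabled=0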
Hello, I have this Splunk log that contains tons of quotes, commas, and other special characters. I'm trying to pull only the "Latitude":77.0999 and "Longitude":-99.999 values, and from time to time there will be a WarningMessages value ("This mail requires a number or Apartment number") that I would like to capture in a dashboard.

StandardizedAddres SUCCEEDED - FROM: {"Address1":"123 NAANNA SAND RD","Address2":"","City":"GREEN","County":null,"State":"WY","ZipCode":"44444-9360","Latitude":null,"Longitude":null,"IsStandardized":true,"AddressStatus":1,"AddressStandardizationType":0} RESULT: 1 | {"AddressDetails":[{"AssociatedName":"","HouseNumber":"123","Predirection":"","StreetName":" NAANNA SAND RD ","Suffix":"RD","Postdirection":"","SuiteName":"","SuiteRange":"","City":" GREEN","CityAbbreviation":"GREEN","State":"WY","ZipCode":"44444","Zip4":"9360","County":"Warren","CountyFips":"27","CoastalCounty":0,"Latitude":77.0999,"Longitude":-99.999,"Fulladdress1":"123 NAANNA SAND RD ","Fulladdress2":"","HighRiseDefault":false}]," WarningMessages":["This mail requires a number or Apartment number."]:[],"ErrorMessages":[],"GeoErrorMessages":[],"Succeeded":true,"ErrorMessage":null}

I currently use the query below, but I'm not having any luck. This is past my skill set, please help.

index="cf" Environment="NA" msgTxt="API=/api-123BusOwnCommon/notis*"
| eval msgTxt=" API=/api-123BusOwnCommon/notis /WGR97304666665/05-08-2024 CalStatus=Success Controller=InsideApi_ notis Action= notis Duration=3 data*"
| rex "Duration=(?<Duration>\w+)"
| timechart span=1h avg(Duration) AS avg_response by msgTxt

I'd like to show the data like this in Splunk:

Latitude    Longitude    WarningMessages
2.351       42.23        Error in blah
4.10        88.235       Hello world
454.2       50.02        Blah blah blah blah...

Thank you
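A sketch of one way to pull those three values with rex, reusing the base search above; the regex patterns are inferred from the sample event, so adjust them if the real JSON varies:

index="cf" Environment="NA" msgTxt="API=/api-123BusOwnCommon/notis*"
| rex "\"Latitude\":(?<Latitude>-?\d+(?:\.\d+)?)"
| rex "\"Longitude\":(?<Longitude>-?\d+(?:\.\d+)?)"
| rex "WarningMessages\":\[\"(?<WarningMessages>[^\"]*)"
| where isnotnull(Latitude)
| table Latitude Longitude WarningMessages

Because "Latitude":null in the FROM block doesn't match the numeric pattern, rex skips it and captures the populated value from the RESULT block.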
Hi Splunk Community team, please help. I have N lookup files: lk_file_abc3477.csv, lk_file_xare000.csv, lk_file_ppbc34ee.csv, etc. I have a Splunk search/script that processes the same data type and the same number of columns for each. My question is: is there any way to process each file and send an email for each one individually, using the Reports or Alerts option or any other way, in one single execution? Regards,
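One single-execution sketch uses map to run the same processing per file and sendemail inside each run; every name below (file list, recipient) is a placeholder, and map's maxsearches cap applies:

| makeresults
| eval file=split("lk_file_abc3477.csv,lk_file_xare000.csv,lk_file_ppbc34ee.csv", ",")
| mvexpand file
| map maxsearches=50 search="| inputlookup $file$ | sendemail to=\"ops@example.com\" subject=\"Lookup report: $file$\" sendresults=true inline=true"

Your per-file processing would go between the inputlookup and the sendemail inside the map string.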
Pulling CMDB data from SNOW is causing 10,000 errors per week and long SQL queries in SNOW, and then timing out when trying to query the CMDB table. This table has over 10 million records and cannot be queried directly. Has anyone had this issue in the past? How did you fix it? What other alternatives are there?
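A common alternative to full-table pulls is incremental extraction against the ServiceNow Table API: filter on sys_updated_on and paginate, so each query touches a bounded slice. A sketch with placeholder instance, credentials, and watermark timestamp (the query value is URL-encoded):

curl -u user:pass "https://<instance>.service-now.com/api/now/table/cmdb_ci?sysparm_query=sys_updated_on%3E%3D2024-01-01%2000:00:00&sysparm_fields=sys_id,name,sys_class_name,sys_updated_on&sysparm_limit=10000&sysparm_offset=0"

Advance sysparm_offset (or the sys_updated_on watermark) between calls; querying narrower CI child classes instead of the whole cmdb_ci table also keeps the SQL cheap on the SNOW side.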
The raw message shows the correct field value, but stats and table truncate it.

Raw message:
Message=" | RO76 | PXS (XITI) - Server - Windows Server Down Critical | Server "RO76 is currently down / unreachable."

Table & stats show:
Message=| RO76 | PXS (DTI) - Server - Windows Server Down Critical | Server

It breaks at the " sign.
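Automatic key-value extraction stops at the first embedded double quote, which is why the value is cut short. If Message runs to the end of the raw event, a greedy rex can recover the whole value; a sketch (the $ anchor assumes Message is the last field):

<your base search>
| rex field=_raw "Message=\"(?<Message>.*)\"$"
| table Message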
Hi everyone, We're planning a new Splunk deployment and considering different scenarios (Plan A and Plan B below) based on daily ingestion and data retention needs. I would appreciate it if you could review the sizing and let me know if anything looks misaligned or could be optimized based on Splunk best practices.

Overview of each plan:

Plan A:
Daily ingest: 2.0TB
Retention: same
10 Indexers
3 Search Heads
2 ES Search Heads

Plan B:
Daily ingest: 2.6TB
Retention: same
13 Indexers
3 Search Heads
3 ES Search Heads

Each plan also includes CM, MC, SH Deployer, DS, LM, 4–5 HFs, and several UBA/ML nodes.

Example specs per Indexer (Plan C):
Memory: 128GB
vCPU: 96 cores
Disk: 500GB OS SSD + 6TB hot SSD + 30TB cold HDD + 11TB frozen (NAS)

----------------------------------------

What I'm looking for:
Are these hardware specs reasonable per Splunk sizing guidelines?
Is the number of indexers/search heads appropriate for the daily ingest and retention?
Any red flags or over/under-sizing you would call out?

Thanks in advance for your insights!
Hello, I am setting up a test instance to be a license master and trying to connect a second Splunk install to point to this license master. Everything is Splunk 9.4.1. I'm getting the error on the peer: "this license does not support being a remote master". I've installed a developer license and it shows "can be remote", so I'm not sure why I cannot connect a peer to it. On the LM it lists 4 licenses and the 'dev' one is #2; do I need to change the license group to activate the 'dev' license?
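For reference, pointing a peer at the license manager is a one-line server.conf change (9.x naming shown; placeholder host), and the "remote master" error typically means the active license group on the manager does not support remote peers, so switching the active group under Settings > Licensing on the LM is worth trying.

# server.conf on the peer
[license]
manager_uri = https://<license-manager-host>:8089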
We would like to dynamically populate the severity field. Is this possible?