All Posts

Hello everyone, I'm Piyush and I'm new to the Splunk environment. I was getting started with MLTK and Python for Scientific Computing to develop something for the ongoing Splunk hackathon, but although I have tried several times to install it, it still shows me an XML screen saying the file size is too big. I even deleted and re-downloaded the Python for Scientific Computing package and uploaded it again, yet the issue persists, while other add-ons like MLTK installed just fine. I'm on Windows and I don't have a clue how to move forward from here, as I am learning the Splunk environment as I go.
Thanks for getting back to me. I started to look into the /etc/licenses folder and experimented with the files there, and now I think I've figured out what is happening: if I install a Dev key in the Prod environment, Splunk deletes all Prod keys in the folder and raises a "Restart required" message in the dashboard. After the restart, only the files installed after the Dev key remain loaded. I might very well have installed a new Dev key in the Prod environment, as I received renewal keys for both Prod and Dev in the same email. We will ask the maintenance team to restore the files in the licenses folder, and that will probably sort it out. It would be great if Splunk could show a warning when I try to do something as careless as uploading a Dev license in the Prod environment, or maybe even keep a backup of the license files before deleting them, but I have learned my lesson and won't be doing that again.
Normally, licenses shouldn't "disappear" on their own. Even when licenses expire, they still show as expired. The licenses are backed by files in $SPLUNK_HOME/etc/licenses, so if they "disappeared", someone must have deleted them. Check your backups for the contents of this directory.
Installing an app on the SH tier doesn't directly install the same app on the indexers. Parts of apps are pushed to the indexers as the knowledge bundle. Anyway, back to the original question (which is a bit dated): if you have an automatic lookup defined, you must have a lookup to back it. None of the "solutions" in this thread limit the scope of the lookup to a single SH; rather, they distribute the lookup across the whole environment.
I added a 30 GB renewal license key that is valid from June 14th later this year. Afterward I got a message telling me to restart Splunk. I did that, and now all the other licenses are missing from the License admin console. Has anyone experienced this before? Is there a way to recover the old licenses? Running Splunk Enterprise 9.2.0.1 on-prem on Red Hat.
While both approaches (foreach and transpose) should get you what you want, they might not have very good performance. Since "we're using the first row as column names", I'm wondering if it wouldn't be easier not to pull the data directly into Splunk, but rather to write it to a CSV file and ingest that file with indexed extractions (yes, that's often not the best way either, but in this case it might be better).
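For the indexed-extractions approach mentioned above, a minimal sketch might look like the following. The sourcetype name and file path are assumptions for illustration; the props.conf stanza needs to live on the instance that reads the file (UF or HF), since indexed extractions for structured data happen at that point.

== props.conf ==
[my_csv_data]
# hypothetical sourcetype for the exported CSV; the header row becomes the field names
INDEXED_EXTRACTIONS = csv

== inputs.conf ==
[monitor:///path/to/exported_data.csv]
sourcetype = my_csv_data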
Log1 (dataset1)
Splunk query: index=xyz X_App_ID=abc API_NAME=abc_123 NOT externalURL
Output of the query:
2025-05-01 04:54:57.335 : X-Correlation-ID=1234-acbd : X-App-ID=abc : X-Client-ID=kjzoAHK7Bt2vnV5jLQIUuKQZDaXqtJJK : X-Client-Version=6.0.0.3627 : X-Workflow= : serviceType= : API_NAME=abc_123 : COMPLETE_URL=<URL> : Client_IP=<IP> : ApiName=abc_123 : StatusCode=200 : ExecutionTime=234 : Brand=abc_345 : Response={JSON response}

Log2 (dataset2)
Splunk query: index=xyz "xmlResponseMapping"
Output of the query:
2025-05-01 04:54:57.335 : X-Correlation-ID=1234-acbd : xmlResponseMapping : accountType=null, accountSubType=null,

Dataset1 and dataset2 are connected using "X-Correlation-ID" only. Dataset2 has more than 3000K logs for the last 8 hours, while dataset1 has 20-21K logs for the last 8 hours. I want "accountType" and "accountSubType" from dataset2 for each X-Correlation-ID=<alpha-numeric> where X-App-ID=abc in dataset1. Dataset2 has data for multiple "X-App-ID" values but does not have the field "X-App-ID" in its logs. If I try the query below, it gives me an output of 3000K results (all from dataset2):

index=masapi (X_App_ID=ScamShield API_NAME=COMBINED_FEATURES NOT externalURL) OR (napResponseMapping)
| stats values(accountType) as accountType values(accountSubType) as accountSubType by X_Correlation_ID

Kindly suggest a better way.
Hi @Cheng2Ready, if you need help, open a new post so more people in the Community will be able to help you. Anyway, start by checking which condition fails: the lookup or the weekday. Then check whether it fails every time or only sometimes, and if only sometimes, when. As a secondary test, check if it's a border condition: e.g. if the event has a timestamp at 23:59:59 or 00:00:00. Ciao. Giuseppe
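A rough sketch of that kind of check, assuming a hypothetical lookup (your_lookup.csv) and field names that would need to be replaced with the ones from the actual alert, could group events by weekday and by whether the lookup matched:

index=your_index earliest=-7d
| eval weekday=strftime(_time, "%A")
| lookup your_lookup.csv host OUTPUT expected_value
| eval lookup_matched=if(isnotnull(expected_value), "yes", "no")
| stats count by weekday lookup_matched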
As a versatile alternative, you can use transpose. Using the same lookup example as @livehybrid does, this is how to transform these extended mock data

F1      F2      F3
Hello   World   Test
Some    thing   else

into this form

FieldName1Example   FieldName2Example   FieldName3Example
Hello               World               Test
Some                thing               else

| transpose 0
| lookup fieldtest.csv fieldID as column
| fields - column
| transpose 0 header_field=fieldName
| fields - column
You have a couple of options.

1) If you have the permissions, you can install a custom app that contains the CSV in the app's lookups directory. The installation process will push the app to the search heads as well as the indexers. That should resolve your "idx... lookup not found" errors.

2) You can transition the lookup to a KV store collection and configure replicate=true in transforms.conf:

replicate = <boolean>
* Indicates whether to replicate this collection on indexers.
  When false, this collection is not replicated on indexers, and lookups
  that depend on this collection are not available (although if you run a
  lookup command with 'local=true', local lookups are available).
  When true, this collection is replicated on indexers.
* Default: false

However, there are some default limitations regarding how many results can be returned for a search, depending on the size of your lookup. You can review those limitations here: https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/Limitsconf#.5Bkvstore.5D
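For option 2, a minimal configuration sketch might look like the following; the collection, lookup, and field names are assumptions for illustration and should be replaced with your own:

== collections.conf ==
[my_lookup_collection]
# hypothetical KV store collection backing the lookup

== transforms.conf ==
[my_kv_lookup]
external_type = kvstore
collection = my_lookup_collection
fields_list = _key, field1, field2
replicate = true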
stats is always the way to join datasets together. Please remove join from your toolkit; it can always be replaced with a better option and is not the Splunk way to do things. It has numerous side effects that can result in unexpected results, as you are seeing. @kamlesh_vaghela gives you an example of how to "join" using stats, but one other observation on your example is that you are using table, which is a transforming Splunk command, so you should try to use it as late as possible in your SPL, as it has consequences on where the data is manipulated. If you are just looking to restrict the fields before an operation, use the fields command instead, and note that in the stats example you can still rename the X_Correlation_ID to ID after the stats command, which is a minor optimisation.
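As a rough sketch of that stats-based pattern using the index, filters, and field names from the question (adjust them as needed for your real data):

index=xyz (X_App_ID=abc API_NAME=abc_123 NOT externalURL) OR "xmlResponseMapping"
| fields X_Correlation_ID X_App_ID accountType accountSubType
| stats values(X_App_ID) as X_App_ID values(accountType) as accountType values(accountSubType) as accountSubType by X_Correlation_ID
| search X_App_ID=abc
| rename X_Correlation_ID as ID

The search after stats keeps only the correlation IDs that also appeared in dataset1 with X_App_ID=abc, which is what limits the output to the relevant subset of dataset2.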
Can you show your search? It seems that those numbers and warnings are the same as the example you gave; if that is what it is showing, then that is likely what the data contains. Can you show an example of a couple of messages along with your search? The search will work, but note that you should not include the eval _raw part, as that is just setting up example test data to show you how the rest of the search can work.
I tried need-props=true. All it added was Servlet URI, EUM Request GUID, and ProcessID. Thanks.
Hi @JohnGregg
It's not clear from the docs; it doesn't look like there is much you can add to the API call to tell it to bring back further detail. However, I was wondering if you have need-props=true in your existing API call? This may add some further context which might help.
Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @yashb
I've used INGEST_EVAL to achieve this for a customer previously, although as @richgalloway says, you may be able to achieve this with Ingest Actions too. Here is the sample props/transforms for INGEST_EVAL:

== props.conf ==
[yourSourcetype]
TRANSFORMS-dropBigEvents = dropBigEvents

== transforms.conf ==
[dropBigEvents]
INGEST_EVAL = queue=IF(len(_raw)>=10000,"nullQueue",queue)

You could also achieve this with a regex match; however, I think this would be resource intensive, so I personally would use the INGEST_EVAL route, but I'm including it for completeness:

[dropBigEvents]
REGEX = ^.{10000,}
DEST_KEY = queue
FORMAT = nullQueue

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I found the solution to this by following what is mentioned in https://community.splunk.com/t5/Splunk-Search/Query-running-time/m-p/367124#M108287
You can do that with Ingest Actions in either an intermediate HF or the indexers. Go to Settings->Ingest Actions and click the New Ruleset button.  Select the sourcetype to filter and then choose "Filter using Eval Expression" from the Add Rule dropdown.  Enter "len(_raw) > 10000" as the Eval Expression and click Apply to see the effect.  When you're happy with the set-up, click Save.
Thank you @gcusello, I appreciate the feedback. I'm just having trouble understanding why my alert fired when it was not supposed to. I do not know where to start troubleshooting, but I will accept your answer to the original question.
I am using the request-snapshots API call.  I would like to know what node the snapshot came from.  The response does not seem to contain that data directly but "callChain" seems close. I've figured out that the Component number in the call chain corresponds to a tier and I know how to look up the mapping. There is also a "Th:nnnn" in the call chain, but I don't know what it is.  A thread?  What can I do with that? I know this info exists because it's in the UI. thanks  
Hi everyone,
I'm working on a use case where I need to drop events that are larger than 10,000 bytes before they get indexed in Splunk. I know about the TRUNCATE setting in props.conf, which limits how much of an event is indexed, but it doesn't actually prevent or drop the event; it just truncates it. My goal is to completely drop large events to avoid ingesting them at all. So far, I haven't found a built-in way to drop events purely based on size using transforms.conf or regex routing.
I'm wondering:
Is there any supported way to do this natively in Splunk?
Can this be done using a Heavy Forwarder or a scripted/modular input?
Has anyone solved this with a custom ingestion pipeline or pre-filter logic?
Any guidance or examples would be greatly appreciated!