All Posts

Log1 (dataset1)

Splunk query:
index=xyz X_App_ID=abc API_NAME=abc_123 NOT externalURL

Sample output:
2025-05-01 04:54:57.335 : X-Correlation-ID=1234-acbd : X-App-ID=abc : X-Client-ID=kjzoAHK7Bt2vnV5jLQIUuKQZDaXqtJJK : X-Client-Version=6.0.0.3627 : X-Workflow= : serviceType= : API_NAME=abc_123 : COMPLETE_URL=<URL> : Client_IP=<IP> : ApiName=abc_123 : StatusCode=200 : ExecutionTime=234 : Brand=abc_345 : Response={JSON response}

Log2 (dataset2)

Splunk query:
index=xyz "xmlResponseMapping"

Sample output:
2025-05-01 04:54:57.335 : X-Correlation-ID=1234-acbd : xmlResponseMapping : accountType=null, accountSubType=null,

Dataset1 and dataset2 are connected only through "X-Correlation-ID". Dataset2 has more than 3000K events for the last 8 hours, while dataset1 has 20-21K events for the same period. I want "accountType" and "accountSubType" from dataset2 for X-Correlation-ID=<alpha-numeric> where X-App-ID=abc in dataset1. Dataset2 holds data for multiple "X-App-ID" values but does not have an "X-App-ID" field in its logs.

If I try the query below, it gives me 3000K results (everything from dataset2):

index=masapi (X_App_ID=ScamShield API_NAME=COMBINED_FEATURES NOT externalURL) OR (napResponseMapping)
| stats values(accountType) as accountType values(accountSubType) as accountSubType by X_Correlation_ID

Kindly suggest a better way.

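One common pattern is to pull both datasets in a single search, group by the correlation ID, and then keep only the groups where dataset1 contributed X_App_ID=abc; dataset2 events carry no X_App_ID, so the only value in each group comes from dataset1. A minimal sketch, assuming X_Correlation_ID, X_App_ID, accountType, and accountSubType are already extracted as fields and reusing the search terms from the first example (adjust index and terms to your environment):

index=xyz ((X_App_ID=abc API_NAME=abc_123 NOT externalURL) OR "xmlResponseMapping")
| stats values(X_App_ID) as X_App_ID values(accountType) as accountType values(accountSubType) as accountSubType by X_Correlation_ID
| search X_App_ID=abc
| fields X_Correlation_ID accountType accountSubType
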
hi @Cheng2Ready, if you need help, please open a new post so more people in the Community will be able to help you. Anyway, start by checking which condition fails: the lookup or the weekday check. Then check whether it fails every time or only sometimes, and if only sometimes, when. As a secondary test, check whether it's a border condition, e.g. an event with a timestamp of 23:59:59 or 00:00:00. Ciao. Giuseppe

As a versatile alternative, you can use transpose. Using the same lookup example as @livehybrid does, this is how to transform these extended mock data

F1      F2      F3
Hello   World   Test
Some    thing   else

into this form

FieldName1Example   FieldName2Example   FieldName3Example
Hello               World               Test
Some                thing               else

| transpose 0
| lookup fieldtest.csv fieldID as column
| fields - column
| transpose 0 header_field=fieldName
| fields - column

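For reference, the lookup file this example relies on would map fieldID to fieldName roughly like this (hypothetical contents inferred from the output above; the actual fieldtest.csv in @livehybrid's example may differ):

fieldID,fieldName
F1,FieldName1Example
F2,FieldName2Example
F3,FieldName3Example
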
You have a couple of options.

1) If you have the permissions, you can install a custom app that contains the csv in the app's lookups directory. The installation process will push the app to the search heads as well as the indexers. That should resolve your "idx... lookup not found" errors.

2) You can transition the lookup to a KV store collection and configure replicate=true in transforms.conf:

replicate = <boolean>
* Indicates whether to replicate this collection on indexers.
  When false, this collection is not replicated on indexers, and lookups
  that depend on this collection are not available (although if you run a
  lookup command with 'local=true', local lookups are available).
  When true, this collection is replicated on indexers.
* Default: false

However, there are some default limits on how many results a lookup can return for a search, which may matter depending on the size of your lookup. You can review those limits here: https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/Limitsconf#.5Bkvstore.5D

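A minimal sketch of option 2, using hypothetical collection, stanza, and field names (replace them with your own):

== collections.conf ==
[my_lookup_collection]

== transforms.conf ==
[my_lookup]
# KV store backed lookup, replicated to the indexers
external_type = kvstore
collection = my_lookup_collection
fields_list = _key, host, owner
replicate = true
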
stats is always the way to join datasets together. Please remove join from your toolkit: it can always be replaced with a better option and is not the Splunk way to do things. It has numerous side effects that can result in unexpected results, as you are seeing.

@kamlesh_vaghela gives you an example of how to "join" using stats, but one other observation on your example is that you are using table, which is a transforming Splunk command, so you should use it as late as possible in your SPL, as it has consequences for where the data is manipulated. If you are just looking to restrict the fields before an operation, use the fields command instead. Note that in the stats example you can still rename X_Correlation_ID to ID after the stats command, which is a minor optimisation. A short sketch of both tweaks follows below.

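A brief sketch of those two tweaks applied to the search from the question (field names taken from this thread, so treat it as illustrative rather than a drop-in answer):

index=masapi (X_App_ID=ScamShield API_NAME=COMBINED_FEATURES NOT externalURL) OR (napResponseMapping)
| fields X_Correlation_ID X_App_ID accountType accountSubType
| stats values(X_App_ID) as X_App_ID values(accountType) as accountType values(accountSubType) as accountSubType by X_Correlation_ID
| rename X_Correlation_ID as ID
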
Can you show your search? It seems that those numbers and warnings are the same as in the example you gave - if that is what it is showing, then that is likely what the data contains. Please share an example of a couple of messages along with your search, because the search itself will work. Note that you should not include the eval _raw part, as that is just setting up example test data to show you how the rest of the search can work.

Tried need-props=true. All it added was Servlet URI, EUM Request GUID, and ProcessID. Thanks.

Hi @JohnGregg

It's not clear from the docs - it doesn't look like there is much you can add to the API call to tell it to bring back further detail. However, I was wondering whether you have need-props=true in your existing API call? That may add some further context which might help.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing

Hi @yashb

I've used INGEST_EVAL to achieve this for a customer previously, although as @richgalloway says, you may be able to achieve this with Ingest Actions too. Here is the sample props/transforms for INGEST_EVAL:

== props.conf ==
[yourSourcetype]
TRANSFORMS-dropBigEvents = dropBigEvents

== transforms.conf ==
[dropBigEvents]
INGEST_EVAL = queue=IF(len(_raw)>=10000,"nullQueue",queue)

You could also achieve this with a regex match; however, I think that would be more resource intensive, so I would personally use the INGEST_EVAL route, but I'm including it for completeness:

[dropBigEvents]
REGEX = ^.{10000,}
DEST_KEY = queue
FORMAT = nullQueue

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing

I got the solution to this by following what is mentioned in https://community.splunk.com/t5/Splunk-Search/Query-running-time/m-p/367124#M108287

You can do that with Ingest Actions on either an intermediate HF or the indexers.

1. Go to Settings -> Ingest Actions and click the New Ruleset button.
2. Select the sourcetype to filter, then choose "Filter using Eval Expression" from the Add Rule dropdown.
3. Enter "len(_raw) > 10000" as the Eval Expression and click Apply to see the effect.
4. When you're happy with the set-up, click Save.

Thank you @gcusello, I appreciate the feedback. I'm just having trouble understanding why my alert fired when it was not supposed to. I do not know where to start troubleshooting, but I will accept your answer to the original question.

I am using the request-snapshots API call. I would like to know which node the snapshot came from. The response does not seem to contain that data directly, but "callChain" seems close. I've figured out that the Component number in the call chain corresponds to a tier, and I know how to look up the mapping. There is also a "Th:nnnn" in the call chain, but I don't know what it is. A thread? What can I do with that? I know this info exists because it's in the UI. Thanks

Hi everyone,

I'm working on a use case where I need to drop events that are larger than 10,000 bytes before they get indexed in Splunk. I know about the TRUNCATE setting in props.conf, which limits how much of an event is indexed, but it doesn't actually prevent or drop the event; it just truncates it. My goal is to completely drop large events to avoid ingesting them at all.

So far, I haven't found a built-in way to drop events purely based on size using transforms.conf or regex routing. I'm wondering:
- Is there any supported way to do this natively in Splunk?
- Can this be done using a Heavy Forwarder or a scripted/modular input?
- Has anyone solved this with a custom ingestion pipeline or pre-filter logic?

Any guidance or examples would be greatly appreciated!

As already said, you must call Support, not only email them. Also, if you have several entitlements, check that you have selected the correct one. I have several entitlements myself, and only some of them are "paid" and can be used for creating cases.

You seem to have a proper developer license, not a dev/test license or trial. Only the first one supports a remote LM; the other two don't.

oooooooooo I have something awesome for you... don't send a .pdf, make an awesome Power BI dashboard and embed it into SharePoint or PowerPoint, and preserve your Splunk data's interactivity.

https://conf.splunk.com/files/2022/recordings/PLA1122B_1080.mp4
https://conf.splunk.com/files/2022/slides/PLA1122B.pdf

Hi @kn450

To address high storage utilization by moving older Splunk data, the recommended approach is to configure data retirement policies. Manually moving buckets is generally discouraged due to complexity and risk.

Implement data retention policies:
- Configure your indexes.conf file to automatically manage the data lifecycle (hot -> warm -> cold -> frozen).
- Set frozenTimePeriodInSecs to define when data should be considered frozen. Data in the frozen state is typically deleted by Splunk, but you can configure a script (coldToFrozenScript) to move it to external storage instead, or set coldToFrozenDir for a frozen path on additional storage. However, searching this frozen data requires restoring/thawing it before it is searchable again by Splunk.

Immediate action (use with caution), if space is critical now and retention policies aren't configured:
- Identify the oldest cold buckets ($SPLUNK_DB/<index_name>/colddb/*).
- Back up these buckets first.
- Manually move the oldest cold buckets to external storage. This frees up space but makes the data unsearchable by Splunk unless restored.
- Alternatively, if data loss is acceptable for the oldest data, adjust frozenTimePeriodInSecs to a shorter duration and restart Splunk; it will begin freezing (and potentially deleting, depending on configuration) older data. This is irreversible if deletion is enabled.

Accessing migrated data: frozen data must be manually restored (thawed) back into a Splunk index's thawed directory (thaweddb) to be searched again. This is a manual process. For more info please see https://docs.splunk.com/Documentation/Splunk/9.4.1/Indexer/Restorearchiveddata

Splunk manages data through buckets representing time chunks. These buckets transition from hot (actively written), to warm (read-only), to cold (read-only, potentially moved). The final state is frozen, where Splunk expects the data to be archived or deleted based on indexes.conf settings. Manually moving buckets breaks native searchability. For more info check out https://docs.splunk.com/Documentation/Splunk/9.4.1/Indexer/Automatearchiving

Top tips:
- Backup: always back up data before manually moving or deleting buckets.
- Configuration: properly configuring indexes.conf (especially homePath, coldPath, thawedPath, maxTotalDataSizeMB, frozenTimePeriodInSecs) is crucial for managing storage automatically.
- Manual migration risk: manually moving buckets is error-prone and complex to manage, especially for searching. It should be a last resort or temporary measure.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing

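Following up on the retention settings mentioned above, here is a minimal indexes.conf sketch with a hypothetical index name, retention period, and archive path (adjust all three to your environment):

== indexes.conf ==
[your_index]
homePath   = $SPLUNK_DB/your_index/db
coldPath   = $SPLUNK_DB/your_index/colddb
thawedPath = $SPLUNK_DB/your_index/thaweddb
# Freeze buckets whose newest event is older than ~90 days (value is in seconds)
frozenTimePeriodInSecs = 7776000
# Archive frozen buckets to additional storage instead of deleting them
coldToFrozenDir = /mnt/archive/splunk/your_index
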
Hi @abobengsin

The error "ValueError: embedded null character validate java command:" indicates an issue with the Java path (JRE/JDK) configured for Splunk DB Connect, likely containing an invalid or null character.

1. Navigate to $SPLUNK_HOME/etc/apps/splunk_app_db_connect/local/.
2. Open the dbx_settings.conf file.
3. Locate the [java] stanza and the javaHome setting.
4. Carefully inspect the javaHome value for any hidden characters, extra spaces, or null characters. Ensure it points to the correct directory of a valid Java installation supported by your DB Connect version.
5. Correct the path if necessary. A typical path looks like /usr/lib/jvm/jre-1.8.0, /bin/java/jre or C:\Program Files\Java\jre1.8.0_291.
6. Alternatively, you can check and set this path via the DB Connect UI under Configuration > Settings > General. Ensure the path entered there is correct and free of invalid characters.
7. Save the changes to dbx_settings.conf (if edited manually).
8. Restart Splunk Enterprise for the changes to take effect.

Check out the following docs for more: https://docs.splunk.com/Documentation/DBX/3.18.2/DeployDBX/settingsconfspec

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing

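For reference, a sketch of what the relevant stanza described above might look like (the javaHome value is only an example path; point it at your own Java installation):

== dbx_settings.conf ==
[java]
# Path must be clean: no trailing spaces, quotes, or non-printable characters
javaHome = /usr/lib/jvm/jre-1.8.0
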
Hi @splunk310805

It's my understanding that this was previously raised as a bug (SPL-141385), and the docs have since been updated to mention that it will run at startup too. (See https://community.splunk.com/t5/Getting-Data-In/Powershell-script-input-on-a-schedule/m-p/369941#M67135)

Unfortunately the only current workaround is to incorporate the check in the PowerShell script itself. This could be done by checking the time since the last execution (e.g. a comparison with the last modified log file), or by checking that the current run falls at an expected time (e.g. if you used 0,15,30,45 in the cron schedule, you could check that the current minute is one of those values).

It's not great, but hopefully this helps!

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing