All Posts

I got the solution to this by following what is mentioned in https://community.splunk.com/t5/Splunk-Search/Query-running-time/m-p/367124#M108287
You can do that with Ingest Actions in either an intermediate HF or the indexers. Go to Settings->Ingest Actions and click the New Ruleset button.  Select the sourcetype to filter and then choose "Filter using Eval Expression" from the Add Rule dropdown.  Enter "len(_raw) > 10000" as the Eval Expression and click Apply to see the effect.  When you're happy with the set-up, click Save.
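For reference, a roughly equivalent manual configuration on a heavy forwarder or the indexers uses an ingest-time eval to route oversized events to the null queue. This is a sketch only (the sourcetype name is a placeholder), not the exact files the Ingest Actions UI generates for you.

# props.conf
[my_sourcetype]
TRANSFORMS-drop_large_events = drop_large_events

# transforms.conf
[drop_large_events]
# Send events larger than 10,000 bytes to nullQueue (i.e. discard them before indexing)
INGEST_EVAL = queue=if(len(_raw)>10000, "nullQueue", queue)

Either route achieves the same result; the Ingest Actions ruleset is generally easier to test and maintain because of the built-in preview.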
Thank you @gcusello, I appreciate the feedback. I'm just having trouble understanding why my alert fired when it was not supposed to. I do not know where to start troubleshooting, but I will accept your answer to the original question.
I am using the request-snapshots API call. I would like to know which node the snapshot came from. The response does not seem to contain that data directly, but "callChain" seems close. I've figured out that the Component number in the call chain corresponds to a tier, and I know how to look up the mapping. There is also a "Th:nnnn" in the call chain, but I don't know what it is. A thread? What can I do with that? I know this info exists because it's in the UI. Thanks.
Hi everyone, I'm working on a use case where I need to drop events that are larger than 10,000 bytes before they get indexed in Splunk. I know about the TRUNCATE setting in props.conf, which limits how much of an event is indexed, but it doesn't actually prevent or drop the event; it just truncates it. My goal is to completely drop large events to avoid ingesting them at all. So far, I haven't found a built-in way to drop events purely based on size using transforms.conf or regex routing. I'm wondering:
- Is there any supported way to do this natively in Splunk?
- Can this be done using a Heavy Forwarder or a scripted/modular input?
- Has anyone solved this with a custom ingestion pipeline or pre-filter logic?
Any guidance or examples would be greatly appreciated!
As already said, you must call support, not just email them. Also, if you have several entitlements, check that you have selected the correct one. I have several entitlements myself, and only some of them are "paid" and can be used for creating cases.
You seem to have the correct developer license, not a dev/test license or a trial. Only the first one supports a remote license manager; the other two do not.
oooooooooo I have something awesome for you... don't send a .pdf; make an awesome PowerBI dashboard and embed it into SharePoint or PowerPoint, and preserve your Splunk data's dynamic nature. https://conf.splunk.com/files/2022/recordings/PLA1122B_1080.mp4 https://conf.splunk.com/files/2022/slides/PLA1122B.pdf
Hi @kn450
To address high storage utilization by moving older Splunk data, the recommended approach is to configure data retirement policies. Manually moving buckets is generally discouraged due to complexity and risk.
Implement data retention policies: Configure your indexes.conf file to automatically manage the data lifecycle (hot -> warm -> cold -> frozen). Set frozenTimePeriodInSecs to define when data should be considered frozen. Data in the frozen state is typically deleted by Splunk, but you can configure a script (coldToFrozenScript) to move it to external storage instead, or set coldToFrozenDir to a frozen path on additional storage. However, searching this frozen data later requires restoring (thawing) it before it is searchable again by Splunk. See the indexes.conf sketch after this post.
Immediate action (use with caution): If space is critical now and retention policies aren't configured:
- Identify the oldest cold buckets ($SPLUNK_DB/<index_name>/colddb/*).
- Back up these buckets first.
- Manually move the oldest cold buckets to external storage. This frees up space but makes the data unsearchable by Splunk unless restored.
- Alternatively, if data loss is acceptable for the oldest data, adjust frozenTimePeriodInSecs to a shorter duration and restart Splunk; it will begin freezing (and potentially deleting, depending on configuration) older data. This is irreversible if deletion is enabled.
Accessing migrated data: Frozen data must be manually restored (thawed) back into a Splunk index's thawed directory (thaweddb) to be searched again. This is a manual process. For more info please see https://docs.splunk.com/Documentation/Splunk/9.4.1/Indexer/Restorearchiveddata
Splunk manages data through buckets representing time chunks. These buckets transition from hot (actively written), to warm (read-only), to cold (read-only, potentially moved). The final state is frozen, where Splunk expects the data to be archived or deleted based on indexes.conf settings. Manually moving buckets breaks this native searchability. For more info check out https://docs.splunk.com/Documentation/Splunk/9.4.1/Indexer/Automatearchiving
Top tips:
- Backup: Always back up data before manually moving or deleting buckets.
- Configuration: Properly configuring indexes.conf (especially homePath, coldPath, thawedPath, maxTotalDataSizeMB, frozenTimePeriodInSecs) is crucial for managing storage automatically.
- Manual migration risk: Manually moving buckets is error-prone and complex to manage, especially for searching. It should be a last resort or temporary measure.
Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
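To illustrate the retention settings mentioned in the answer above, here is a minimal indexes.conf sketch; the index name, paths, and thresholds are placeholders and should be adapted to your environment and storage capacity.

# indexes.conf (sketch only - placeholder index name, paths, and values)
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# Freeze buckets once all of their events are older than ~90 days
frozenTimePeriodInSecs = 7776000
# Cap the total index size in MB; the oldest buckets freeze when the cap is reached
maxTotalDataSizeMB = 500000
# Archive frozen buckets to this directory instead of deleting them
coldToFrozenDir = /mnt/archive/my_index/frozen

With coldToFrozenDir set, Splunk copies the raw data of frozen buckets to that directory rather than deleting them; restoring later involves copying a bucket into the index's thaweddb directory and rebuilding it.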
Hi @abobengsin
The error "ValueError: embedded null character validate java command:" indicates an issue with the Java path (JRE/JDK) configured for Splunk DB Connect, likely containing an invalid or null character.
1. Navigate to $SPLUNK_HOME/etc/apps/splunk_app_db_connect/local/.
2. Open the dbx_settings.conf file.
3. Locate the [java] stanza and the javaHome setting.
4. Carefully inspect the javaHome value for any hidden characters, extra spaces, or null characters. Ensure it points to the correct directory of a valid Java installation supported by your DB Connect version.
5. Correct the path if necessary. A typical path looks like /usr/lib/jvm/jre-1.8.0, /bin/java/jre or C:\Program Files\Java\jre1.8.0_291. Alternatively, you can check and set this path via the DB Connect UI under Configuration > Settings > General; ensure the path entered there is correct and free of invalid characters. A sketch of the stanza follows this post.
6. Save the changes to dbx_settings.conf (if edited manually).
7. Restart Splunk Enterprise for the changes to take effect.
Check out the following docs for more: https://docs.splunk.com/Documentation/DBX/3.18.2/DeployDBX/settingsconfspec#:~:text=The%20dbx_settings.,settings%20to%20configure%20your%20settings.
Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
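For reference, the stanza in question looks roughly like this; the path shown is only an example and must point at your actual, supported JRE/JDK installation.

# $SPLUNK_HOME/etc/apps/splunk_app_db_connect/local/dbx_settings.conf
[java]
# The value must be a clean path with no hidden, null, or trailing characters
javaHome = /usr/lib/jvm/jre-1.8.0

On Windows the path typically looks like C:\Program Files\Java\jre1.8.0_291; watch for stray characters introduced by copy-paste.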
Hi @splunk310805
It's my understanding that this was previously raised as a bug (SPL-141385), and the docs have since been updated to mention that it will run at startup too. (See https://community.splunk.com/t5/Getting-Data-In/Powershell-script-input-on-a-schedule/m-p/369941#M67135)
Unfortunately, the only current workaround is to incorporate status checking in the PowerShell script itself. This could be done by checking the time since the last execution (e.g. a comparison with a last-modified log file) or by checking that the current time matches the expected schedule (e.g. if you used 0,15,30,45 in the cron schedule, you could check that the current minute is one of these); see the sketch after this post.
It's not great, but hopefully this helps!
Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
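A minimal sketch of such a guard, assuming a 0,15,30,45 cron schedule and a marker file the script updates on each successful run (both the schedule and the marker path are hypothetical and would need adjusting):

# Guard against the extra, unscheduled run Splunk triggers at startup
$expectedMinutes = 0, 15, 30, 45
$marker = 'C:\Temp\my_input_lastrun.txt'   # hypothetical marker file path

$now = Get-Date
$ranRecently = (Test-Path $marker) -and ((Get-Item $marker).LastWriteTime -gt $now.AddMinutes(-5))

if (($expectedMinutes -notcontains $now.Minute) -or $ranRecently) {
    exit   # not a scheduled run, or the script already ran within the last 5 minutes
}

# ... normal collection logic goes here ...

Set-Content -Path $marker -Value $now.ToString('o')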
Hi @666Meow
I would recommend calling Splunk Support directly using your local regional number, which can be found at https://www.splunk.com/en_us/about-splunk/contact-us.html#sp-tabs--customer-support-tab_1
This way you can speak directly with somebody who should hopefully be able to look into this issue. Splunk Support do not actively monitor the community pages, and I don't think there is a workaround for this issue, so either calling support directly or contacting your account team (if you know who this is) would be the best option for you.
Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hi @kenbaugher
I think I've wrapped my head around what you're trying to achieve. Please see the following working example; note this is two separate SPL queries, the first to generate a lookup and the second to use it.
== First create a lookup ==
| makeresults format=csv data="fieldID,fieldName
F1,FieldName1Example
F2,FieldName2Example
F3,FieldName3Example"
| outputlookup fieldtest.csv
== Example event ==
| makeresults
| eval F1="Hello", F2="World", F3="Test"
| foreach F* [| eval test="Friendly"+json_extract(lookup("fieldtest.csv",json_object("fieldID","<<FIELD>>"),json_array("fieldName")),"fieldName"), {test}=<<FIELD>>]
| fields _time Friendly*
This loops over the fields starting with "F" and does a lookup against "fieldtest.csv" on the "fieldID" field, then sets test to the friendly field name; after this we evaluate the existing F<n> value into that friendly field name.
Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Thank you, it is working; however, it's repeating the same value. The search will be returning thousands of logs, each with a different value, and some will not contain a warning message.
I'm pleased to see your query is working; however, it's repeating the same values. Sorry, I did not explain that there will be thousands of logs, each with a different value.
@abobengsin  This error often arises when DB Connect cannot properly validate the Java command due to an invalid or misconfigured Java path.     
Thank you. I thought it was something like this and was going to try it, but didn't want to lose the 'free' option. I ended up restarting Splunk, and this change allowed me to set up a peer with the license. Thank you.
After setting up the DB Connect configuration and updating my Java path, I was faced with another error message saying the task server is currently unavailable, with the details saying: ValueError: embedded null character validate java command: . Any help would be appreciated.
Hi @uagraw01, you forgot to remove the time tokens <earliest>$TimeTokenMiddle.earliest$</earliest> and <latest>$TimeTokenMiddle.latest$</latest> in many rows of your dashboards; replace them with a fixed value for the time window (see the sketch after this post). Ciao. Giuseppe
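For example (a sketch only; the query and the 24-hour window are placeholders), each affected search in the dashboard XML would change from the token-based range to a fixed one:

<search>
  <query>index=your_index ...</query>
  <!-- was: <earliest>$TimeTokenMiddle.earliest$</earliest> -->
  <!-- was: <latest>$TimeTokenMiddle.latest$</latest> -->
  <earliest>-24h@h</earliest>
  <latest>now</latest>
</search>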
We have a setup where data goes to Splunk from queries against a number of files with varying numbers of fields (sometimes over 100 per file), and we have a generic dashboard set up to display them. We use the first line of the query output for the headings of the files, but the field names are very short and not descriptive. Since this is done via ODBC, we don't have direct access to the more descriptive column text. So we have, for example, a file coming in with fields F1,F2...F100. We are able to get the descriptive field names from SYSCOLUMNS into the form "filename, fieldname, fielddesc". Is there a reasonable way to have Splunk display a table showing the fielddesc for each field instead of the field name?