
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Are you sure you don't have indexed extractions enabled, by any chance? Since automatic KV extractions happen after manual extractions, the EventID field should not yet be populated when you're hitting the transforms, so the first transform (EventID_as_EventCode) should _not_ set the field to any value.
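For context, a minimal sketch of the kind of manual search-time transform under discussion; the stanza contents below are one possible shape, written as an assumption rather than the actual config:

# props.conf
[XmlWinEventLog:Security]
REPORT-eventid = EventID_as_EventCode

# transforms.conf
[EventID_as_EventCode]
# Reads the EventID field, which should be empty at this point unless
# something earlier (e.g. indexed extractions) has already populated it,
# and copies its value into EventCode
SOURCE_KEY = EventID
REGEX = (.+)
FORMAT = EventCode::$1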
Were you able to find a fix for this?   I'd really hate to have to modify all Detections again after prepping for ES8.
Hey, I have implemented a GeneratingCommand Splunk application that fetches data from an API and yields the results chunk after chunk. I am encountering an issue where the event count in the top left seems funky - it shows `50000 of 0 events matched` and, after the next chunk is fetched, `100000 of 0 events matched`, and so on. I would like to know if and how it's possible to update the `0` counter from within my application; I know the total amount of scanned events from the very first reply I get from the API. But even if it's not possible to set it to any desired number, I would at least expect it to be possible to "match" the left side of the counter, which is increased on every yield... Thanks in advance, Alon
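For reference, a minimal sketch of a chunk-yielding GeneratingCommand of the kind described; fetch_chunks() is a hypothetical stand-in for the real API client:

import sys
from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration

def fetch_chunks():
    # Hypothetical placeholder for the real paginated API client;
    # each iteration produces one chunk (a list of records)
    yield [{"time": 1700000000, "payload": "example event"}]

@Configuration()
class FetchApiCommand(GeneratingCommand):
    def generate(self):
        for chunk in fetch_chunks():
            for record in chunk:
                # Each yielded dict becomes one event in the search results
                yield {"_time": record["time"], "_raw": record["payload"]}

dispatch(FetchApiCommand, sys.argv, sys.stdin, sys.stdout, __name__)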
Thanks for your help @livehybrid. I've added Groq as the default provider, but another issue still occurs.
Revised Prompt:
index=_internal log_level IN ("ERROR","WARNING","WARN") | table _time _raw | ai prompt="please explain these error messages to me: {_raw}" provider="Groq" model="llama3-70b-8192"
Error Message:
RunDispatch has failed: sid=[sid], exit=-1, error=Error in 'ai' command: The provider: '"Groq"' is invalid. Please check the configuration
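For comparison, the same search with the provider and model arguments unquoted; this is only a guess, prompted by the quotes showing up literally inside the error message itself:

index=_internal log_level IN ("ERROR","WARNING","WARN")
| table _time _raw
| ai prompt="please explain these error messages to me: {_raw}" provider=Groq model=llama3-70b-8192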
Thanks, @livehybrid - wasn't even aware of that param. I'll give that a shot and reply here with results the next time I promote a new version.
Thanks for replying! I forwarded the issue to Splunk Support. I was told that since the Search Head is standalone, the point_in_time option is not needed. The update was done successfully and, luckily, the backup did not need to be used.
Hi @chenfan
That string starting dc736 is *not* your token. It is the token ID. It's not possible to retrieve the token once created, so copy it somewhere safe. If using this type of token then you will need to use "Bearer" as you were doing before.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
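On the pattern of the curl used elsewhere in this thread, a Bearer request would look roughly like this (the token value is a placeholder for the full token copied at creation time):

curl -k -X POST -H "Authorization: Bearer <your-token-value>" "https://mysplunkserver:8089/servicesNS/nobody/my_app/saved/searches/testalertapi" -d enabled=0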
As far as I remember, the license consumption for Cloud in the ingest-based option is the same as the on-prem one, which means the event is measured by its _raw part just prior to indexing. This means that:
1) Any modification you make to the raw event contents prior to indexing (like cutting out some headers or unnecessary trailing parts) will affect your license usage.
2) Indexed fields, which are saved in the tsidx files but are not "exploding" your _raw event contents, do not affect your license usage.
Having said that - indexed extractions are very rarely the way to go, but not for license-related reasons.
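As a minimal sketch of point 1, trimming the raw event at parse time so the shorter _raw is what gets measured; the sourcetype and pattern here are made up for illustration:

# props.conf
[my:sourcetype]
# Strip a hypothetical "HEADER: " prefix from each event before indexing
SEDCMD-strip_header = s/^HEADER: //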
Can you describe your situation in more detail, and did you find any solution? Because I don't think we are using any kind of summary index; we got this duplicate EventCode in the regular index. And strangely enough, this only happens to our "XmlWinEventLog:Security" log; others like "XmlWinEventLog:Application" or "XmlWinEventLog:DNS Server" have their EventCode normal - as single values!
Hi @nordinethales
Splunk Cloud ingestion (assuming you have an ingest-based license, not an SVC license) is based on the raw uncompressed data size ingested, rather than indexed fields, apart from metrics, which are each counted as 150 bytes. Storage is also based on the uncompressed raw ingest.
For more info check out https://help.splunk.com/en/splunk-cloud-platform/get-started/service-terms-and-policies/9.3.2411/information-about-the-service/splunk-cloud-platform-service-details
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @Nadeen_98
Can you confirm that in the Connection Management settings you have "Set as Default" checked for one of the models?
If so, please can you check the logs for any errors:
index="_internal" (source="*mlspl.log" OR sourcetype="mlspl" OR source="*python.log*")
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I am trying to use the ai prompt in Splunk Machine Learning Toolkit 5.6.0 in order to use the Llama Guard 4 model; note that it does not require an access token. I am trying to test out the following prompt and I keep getting the following error. Please assist with any help or any correct format for using Llama Guard 4.
Test Prompt:
index=_internal log_level=error | table _time _raw | ai prompt="Please summarise these error messages and describe why I might receive them: {_raw}"
Error Message:
SearchMessage orig_component=SearchOrchestrator sid=[sid] message_key= message=Error in 'ai' command: No default model was found.
@nordinethales
You are correct, there might be a significant difference in Splunk Cloud license usage between INDEXED_EXTRACTIONS=json and KV_MODE=json:
INDEXED_EXTRACTIONS=json - Fields are extracted at index time and stored, which increases the size and license usage.
KV_MODE=json - Fields are only extracted at search time, so license usage is based on the raw data size.
You can also refer to https://splunk.github.io/splunk-add-on-for-crowdstrike-fdr/fieldextractions/
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
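To make the two options concrete, a minimal props.conf sketch; the sourcetype name is an assumption, and note that KV_MODE applies at search time, so it is set on the search head:

# props.conf where parsing happens (e.g. forwarder/indexer): index-time extraction
[my:json:sourcetype]
INDEXED_EXTRACTIONS = json

# props.conf on the search head: search-time extraction
[my:json:sourcetype]
KV_MODE = json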
@chenfan
Any error messages in splunkd.log?
You can also refer to https://help.splunk.com/en/splunk-enterprise/administer/manage-users-and-security/9.3/authenticate-into-the-splunk-platform-with-tokens/troubleshoot-token-authentication
Can you create another token with an admin account and test the same? Also test without a token:
curl -k -u admin:yourpassword https://mysplunkserver:8089/servicesNS/nobody/my_app/saved/searches/testalertapi -d enabled=0
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
Hi @livehybrid Thank you for your reply! I have tried, but it does not work. And this is my token
@livehybrid Let me interject here.
If it was just because it resulted in a multivalued field, you could happily search for just one of those values. But in the case of this particular field (as well as other indexed fields which are not (supposed to be) present in the raw data) it's a bit different.
When you do
index=something source=aaa
and check the job log you'll get
07-01-2025 10:47:22.082 INFO UnifiedSearch [3225984 searchOrchestrator] - Expanded index search = (index=something source=aaa)
07-01-2025 10:47:22.082 INFO UnifiedSearch [3225984 searchOrchestrator] - base lispy: [ AND index::something source::aaa ]
This means that Splunk will only look for those events which have the metadata fields index and source with the given values. In the case of index it's not really a field, but in the case of source, it's gonna be a search only for indexed terms in the form of source::something. Splunk will not try to bother with parsing anything out of the event itself. It's in the later part of the processing pipeline that the field might get parsed out and then be used for further manipulation.
The problem with the obvious approach
index=something | search source=something_else
is that Splunk's optimizer will turn this seemingly superfluous search command back into
index=something source=something_else
which will end up with what I've already shown. That's why I used the where command - it works differently and won't get optimized out.
Of course narrowing the search only to the events containing the value of "stderr" will speed up the search (but won't be very effective if the "stderr" term appears in other terms of the event; tough luck). I'm not quite sure though if TERM() makes any difference here. I'm pretty sure just searching for "stderr" itself would suffice, and it doesn't make the resulting SPL look too cryptic.
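Putting that together, the shape of the search described above (index, source, and the "stderr" term are placeholders from the discussion):

index=something "stderr"
| where source="something_else"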
Hey @lar06, I went through the documentation and it seems that the input is not properly configured; it needs the key to authenticate the request and fetch the data. You need to check in your environment whether the input is enabled or disabled. From the looks of it, the input should be enabled. If you want to disregard the logs, consider disabling the input. If it needs to be enabled for data ingestion, I suggest reviewing the document and configuring all the parameters properly.
Document Guide - https://support.docusign.com/s/document-item?language=en_US&rsc_301&bundleId=gso1581445176035&topicId=hvj1633980712678.html&_LANG=enus
Thanks, Tejas.
---
If the above solution helps, an upvote is appreciated..!!
Hello, I would like to know if there is a consumption gap between these 2 indexing modes in Splunk Cloud license usage. I mean, which one will cost the most with structured logs (JSON)? What I understand:
indexed_extractions=json ==> fields are extracted at index time, which could increase the size of the tsidx files and therefore license usage and cost
kv_mode=json ==> fields are extracted at search time, and should not impact license usage
Am I correct? Thanks for your confirmation. Regards, Nordine
Hi @HA-01
Is there anything in $SPLUNK_HOME/var/log/splunk/splunkd.log relating to the app when retrieving the Dynamic which might give us any clues?
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @chenfan
There are two types of token. One is a JWT token that you can create in the UI via the Tokens page (Bearer). The other is obtained by logging in to the /services/auth/login endpoint and retrieving a session token. Based on your short token length I suspect you are using a session token (JWT tokens often start with "eyJ"), which means the Authorization type should be "Splunk" not "Bearer".
Bearer: Means to use a bearer token header, the standard for JavaScript Object Notation (JSON) Web Tokens (JWTs), on which Splunk authentication tokens are based.
Splunk: Means to use the Splunk header for authentication.
Try the following:
curl -v -X POST -k -H "Authorization: Splunk dc73xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" "https://mysplunkserver:8089/servicesNS/nobody/my_app/saved/searches/testalertapi" -d enabled=0
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing