All Posts


As far as I remember, license consumption for Cloud in the ingest-based option works the same as the on-prem one, which means the event is measured by its _raw part just prior to indexing. This means that:
1) Whatever you do to the raw event contents prior to indexing (like cutting out some headers or unnecessary trailing parts) will affect your license usage - see the sketch below.
2) Indexed fields which are saved in the tsidx files but do not "explode" your _raw event contents do not affect your license usage.
Having said that - indexed extractions are very rarely the way to go, but not for license-related reasons.
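For example (a minimal sketch only - the sourcetype name and the regex are placeholders, adjust them to your data), a SEDCMD in props.conf on the parsing tier trims the raw event before it is measured for licensing:
[my_sourcetype]
# Strip a redundant leading header such as "HEADER: " from _raw before indexing
SEDCMD-strip_header = s/^HEADER: //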
Can you describe your situation in more detail, and did you find any solution? I don't think we are using any kind of summary index, yet we got this duplicate EventCode in the regular index. And strangely enough, this only happens to our "XmlWinEventLog:Security" log; others like "XmlWinEventLog:Application" or "XmlWinEventLog:DNS Server" have their EventCode normal - as single values!
Hi @nordinethales Splunk Cloud ingestion (assuming you have an ingest-based license, not an SVC license) is based on the raw uncompressed data size ingested, rather than on indexed fields, apart from metrics, which are each counted as 150 bytes. Storage is also based on the uncompressed raw ingest. You can check what you are actually being charged for with the usage search sketched below. For more info check out https://help.splunk.com/en/splunk-cloud-platform/get-started/service-terms-and-policies/9.3.2411/information-about-the-service/splunk-cloud-platform-service-details  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
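As a rough check (a sketch only - run it on the license manager; b and st are the byte-count and sourcetype fields in license_usage.log), something like this shows ingested volume per sourcetype:
index=_internal source=*license_usage.log* type="Usage"
| stats sum(b) AS bytes BY st
| eval GB=round(bytes/1024/1024/1024, 3)
| sort - GB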
Hi @Nadeen_98  Can you confirm that in the Connection Management settings you have "Set as Default" checked for one of the models?  If so, please can you check the logs for any errors: index="_internal" (source="*mlspl.log" OR sourcetype="mlspl" OR source="*python.log*")  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
I am trying to use the ai prompt in Splunk Machine Learning Toolkit 5.6.0 in order to use the Llama Guard 4 model; note that it does not require an access token. I am testing the following prompt and I keep getting the error below. Please assist with any help or the correct format for using Llama Guard 4.
Test prompt:
index=_internal log_level=error | table _time _raw | ai prompt="Please summarise these error messages and describe why I might receive them: {_raw}"
Error message:
SearchMessage orig_component=SearchOrchestrator sid=[sid] message_key= message=Error in 'ai' command: No default model was found.
@nordinethales  You are broadly correct - there can be a significant difference between INDEXED_EXTRACTIONS=json and KV_MODE=json in Splunk Cloud, although it shows up mainly in index (tsidx) size rather than in ingest license usage:
INDEXED_EXTRACTIONS=json - fields are extracted at index time and stored in the tsidx files, which increases the index size on disk.
KV_MODE=json - fields are only extracted at search time, so nothing extra is stored; license usage is based on the raw data size in either case.
A sketch of the two configurations is below. Also you can refer to https://splunk.github.io/splunk-add-on-for-crowdstrike-fdr/fieldextractions/ Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
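For illustration (the sourcetype name is a placeholder, and the two stanzas are alternatives shown together only for comparison - pick one; INDEXED_EXTRACTIONS typically lives in props.conf where the structured data is parsed, e.g. on the forwarder, while KV_MODE is applied on the search head), the two options look roughly like this:
# Option 1: index-time extraction - JSON fields are written into the tsidx files
[my_json_sourcetype]
INDEXED_EXTRACTIONS = json
# usually paired with KV_MODE = none on the search tier to avoid duplicated fields

# Option 2: search-time extraction only - nothing extra stored at index time
[my_json_sourcetype]
KV_MODE = json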
@chenfan  Any error messages in splunkd.log?  You can also refer to https://help.splunk.com/en/splunk-enterprise/administer/manage-users-and-security/9.3/authenticate-into-the-splunk-platform-with-tokens/troubleshoot-token-authentication Can you create another token with an admin account and test again? Also test without a token:
curl -k -u admin:yourpassword https://mysplunkserver:8089/servicesNS/nobody/my_app/saved/searches/testalertapi -d enabled=0
Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
Hi @livehybrid Thank you for your reply! I have tried it, but it does not work. And this is my token
@livehybrid Let me interject here. If it was just because it resulted in a multivalued field, you could happily search for just one of those values. But in the case of this particular field (as well as other indexed fields which are not (supposed to be) present in the raw data) it's a bit different. When you do index=something source=aaa and check the job log you'll get
07-01-2025 10:47:22.082 INFO UnifiedSearch [3225984 searchOrchestrator] - Expanded index search = (index=something source=aaa)
07-01-2025 10:47:22.082 INFO UnifiedSearch [3225984 searchOrchestrator] - base lispy: [ AND index::something source::aaa ]
This means that Splunk will only look for those events whose metadata fields index and source have the given values. In the case of index it's not really a field, but in the case of source it's going to be a search only for indexed terms in the form of source::something. Splunk will not bother parsing anything out of the event itself. It's in the later part of the processing pipeline that the field might get parsed out and then used for further manipulation. The problem with the obvious approach index=something | search source=something_else is that Splunk's optimizer will turn this seemingly superfluous search command back into index=something source=something_else, which ends up with what I've already shown. That's why I used the where command - it works differently and won't get optimized out (see the sketch below). Of course, narrowing the search only to the events containing the value "stderr" will speed up the search (but won't be very effective if the "stderr" term also appears elsewhere in the event; tough luck). I'm not quite sure though if TERM() makes any difference here. I'm pretty sure just searching for "stderr" itself would suffice, and it doesn't make the resulting SPL look too cryptic.
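Putting those two points together (the index name and source value here are just placeholders for this thread's data), the pattern is roughly:
index=something "stderr"
| where source="something_else"
The bare "stderr" term narrows the events scanned, while the where clause filters on the search-time value of source without being rewritten by the optimizer into an indexed-term lookup.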
Hey @lar06, I went through the documentation and it seems that the input is not properly configured - it needs the key to authenticate the request and fetch the data. You need to check in your environment whether the input is enabled or disabled. From the looks of it, the input should be enabled. If you want to disregard the logs, consider disabling the input. If it needs to be enabled for data ingestion, I suggest reviewing the document and configuring all the parameters properly. Document Guide - https://support.docusign.com/s/document-item?language=en_US&rsc_301&bundleId=gso1581445176035&topicId=hvj1633980712678.html&_LANG=enus   Thanks, Tejas.    --- If the above solution helps, an upvote is appreciated..!!
Hello, I would like to know if there is a consumption gap between these two indexing modes in Splunk Cloud license usage. I mean, which one will cost the most with structured logs (JSON)? What I understand:
indexed_extractions=json ==> fields are extracted at index time and could increase the size of the tsidx files and so license usage and cost
kv_mode=json ==> fields are extracted at search time, and should not impact license usage
Am I correct? Thanks for your confirmation Regards Nordine
Hi @HA-01  Is there anything in $SPLUNK_HOME/var/log/splunk/splunkd.log relating to the app when retrieving the Dynamic test data which might give us any clues?  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
Hi @chenfan  There are two types of token: one is a JWT token that you can create in the UI via the Tokens page (Bearer); the other is obtained by logging in to the /services/auth/login endpoint and retrieving a session token. Based on your short token length I suspect you are using a session token (JWT tokens often start with "eyJ"), which means the Authorization type should be "Splunk", not "Bearer".
Bearer: Means to use a bearer token header, the standard for JavaScript Object Notation (JSON) Web Tokens (JWTs), on which Splunk authentication tokens are based.
Splunk: Means to use the Splunk header for authentication.
Try the following:
curl -v -X POST -k -H "Authorization: Splunk dc73xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" "https://mysplunkserver:8089/servicesNS/nobody/my_app/saved/searches/testalertapi" -d enabled=0
A sketch of the full session-token flow is below. Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
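For reference (hostname, username and password below are placeholders), the session-token flow looks roughly like this - log in, copy the value from the <sessionKey> element in the response, and send it back with the Splunk scheme:
# 1. Obtain a session key
curl -k https://mysplunkserver:8089/services/auth/login -d username=admin -d password=yourpassword
# The response contains <sessionKey>...</sessionKey>

# 2. Use it with the "Splunk" authorization scheme
curl -k -X POST -H "Authorization: Splunk <sessionKey>" "https://mysplunkserver:8089/servicesNS/nobody/my_app/saved/searches/testalertapi" -d enabled=0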
We are experiencing the same issue; subscribing to this thread in case anyone finds a solution.
Hi Splunker, I tried to enable/disable a saved search via the API, but I encountered problems with token authentication. I always get the following error. I have also adjusted the API information, but I still can't solve this problem.
curl -v -X POST -k -H "Authorization: Bearer dc73xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" "https://mysplunkserver:8089/servicesNS/nobody/my_app/saved/searches/testalertapi" -d enabled=0
It would be really great if you could share some working examples somewhere in your documentation.  Thanks in advance!
I can't collect data for tests whose test type is set to "Dynamic" in ThousandEyes. We are currently unable to retrieve data for "Dynamic" test types via the Test Stream data input configuration in the Cisco ThousandEyes App for Splunk. In the Ingest settings of the Cisco ThousandEyes App for Splunk, under the "Tests Stream" configuration, test types set as "HTTP Server" appear correctly under Endpoint Tests and data is successfully ingested into Splunk. However, test types set as "Dynamic" do not appear at all and cannot be ingested into Splunk. Since data configured as "HTTP Server" is being successfully ingested, we do not believe this is a communication issue between ThousandEyes and Splunk. Could you please advise how we can ingest Tests Streams that are configured as "Dynamic"?
Hello, I am getting this error on DocuSign Monitor Add-on 1.1.3:
ERROR: Could not parse the provided public key
I haven't provided any public key, so I'm wondering what this is about. Thanks for any help. Lionel
Hi @TestUser , I don't know of any best practices for such a check; the only advice I can give you is to use common sense and follow the procedure you indicated, which seems correct to me:
verify (if you haven't already done so) that before the upgrade there are no parsing and normalization problems,
if possible, use a data set that you have already acquired with the old version of the add-on,
at the end, don't just check that the data parsing is correct, but also check that the normalization rules the add-on must have if it is CIM compliant (otherwise it is not relevant) are correctly applied (eventtypes, tags and fields) - see the quick check below.
About tools, I usually use the SA_CIM-Vladiator app (https://splunkbase.splunk.com/app/2968) to check the normalization status, but there are also other tools to check the CIM compliance of a data flow. Ciao. Giuseppe
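As a quick manual check (index and sourcetype here are placeholders for your add-on's data), a simple search like this shows whether the expected eventtypes and tags are still being applied after the upgrade:
index=your_index sourcetype=your_sourcetype
| stats count BY eventtype tag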
Hi @TestUser  Different developers have different approaches to dealing with some of these techniques. I start by using a local Docker instance for testing against various versions of Splunk (e.g. currently supported versions - see the sketch below), but it depends on how big the changes made in the upgrade actually are. Changes you make in the UI will persist in the local directory, therefore if a user upgrades the app you should find that this local directory persists and the other folders get updated. It's therefore important to ensure that when you update your app you don't change something which might be impacted by someone's local configuration. Are you using a framework such as UCC to build your app? If so there are UI testing options available (e.g. see https://splunk.github.io/addonfactory-ucc-generator/ui_tests_inputs_page/) and I would also recommend checking out https://splunk.github.io/pytest-splunk-addon/ which is really powerful for performing testing automatically without human interaction. Ultimately, depending on your app and what you are changing you might find that different options are appropriate, but these are my two favourites - although I don't use them on all apps all of the time.  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
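For example (the container name, admin password and version tag below are placeholders - pin whichever versions you need to support), spinning up a throwaway instance with the official splunk/splunk image looks roughly like this:
# Start a disposable Splunk Enterprise container for testing an app upgrade
docker run -d --name splunk-test \
  -p 8000:8000 -p 8089:8089 \
  -e SPLUNK_START_ARGS=--accept-license \
  -e SPLUNK_PASSWORD=Ch4ngeme-pls \
  splunk/splunk:9.2
# Then install/upgrade your app via the UI on http://localhost:8000 or the REST API on port 8089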
Hi @Ravi1  I agree that the loss of the fishbucket state (due to ephemeral storage) is the cause of both log duplication and data loss after Splunk Universal Forwarder pod restarts in Kubernetes. When the fishbucket is lost, the UF cannot track which files and offsets have already been ingested, leading to re-reading old data (duplicates) and missing logs that rotated or were deleted during downtime. If logs are rotated (e.g. to myapp.log.1) and Splunk is not configured to monitor the rotated filepath, then this could result in you losing data, as well as the more obvious duplication of data due to the file tracking within the fishbucket being lost. As far as I am aware, the approach of using a UF within K8s is not generally encouraged; instead, the Splunk validated architecture (SVA) for sending logs to Splunk from K8s is via the Splunk OpenTelemetry Collector for Kubernetes - this allows sending of logs (amongst other things) to Splunk Enterprise / Splunk Cloud. If you do want to use the UF approach (which may or may not be supported) then you could look at adding a PVC as is done with the full Splunk Enterprise deployment under splunk-operator - check out the Storage Guidelines and StorageClass docs for splunk-operator.  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing