All Posts

@nordinethales You are correct, there can be a significant difference in Splunk Cloud license usage between INDEXED_EXTRACTIONS=json and KV_MODE=json.

INDEXED_EXTRACTIONS=json - fields are extracted at index time and stored, which increases the size and license usage.
KV_MODE=json - fields are only extracted at search time, so license usage is based on the raw data size.

You can also refer to https://splunk.github.io/splunk-add-on-for-crowdstrike-fdr/fieldextractions/

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!

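For illustration, a minimal props.conf sketch of the two modes (the sourcetype names are hypothetical; a given sourcetype would normally use one approach or the other):

    # Index-time extraction: fields are written into the index at parse time
    [my:json:indexed]
    INDEXED_EXTRACTIONS = json

    # Search-time extraction: fields are parsed only when the data is searched
    [my:json:searchtime]
    KV_MODE = json
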
@chenfan Any error messages in splunkd.log? You can also refer to https://help.splunk.com/en/splunk-enterprise/administer/manage-users-and-security/9.3/authenticate-into-the-splunk-platform-with-tokens/troubleshoot-token-authentication

Can you create another token with an admin account and test the same? Also test without a token:

    curl -k -u admin:yourpassword https://mysplunkserver:8089/servicesNS/nobody/my_app/saved/searches/testalertapi -d enabled=0

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!

Hi @livehybrid Thank you for your reply! I have tried it, but it does not work. And this is my token

@livehybrid Let me interject here.

If it was just because it resulted in a multivalued field, you could happily search for just one of those values. But in the case of this particular field (as well as other indexed fields which are not (supposed to be) present in the raw data) it's a bit different.

When you do index=something source=aaa and check the job log you'll get:

    07-01-2025 10:47:22.082 INFO UnifiedSearch [3225984 searchOrchestrator] - Expanded index search = (index=something source=aaa)
    07-01-2025 10:47:22.082 INFO UnifiedSearch [3225984 searchOrchestrator] - base lispy: [ AND index::something source::aaa ]

This means that Splunk will only look for those events which have the metadata fields index and source with the given values. In the case of index it's not really a field, but in the case of source it's going to be a search only for indexed terms in the form source::something. Splunk will not bother with parsing anything out of the event itself. It's in a later part of the processing pipeline that the field might get parsed out and then be used for further manipulation.

The problem with the obvious approach index=something | search source=something_else is that Splunk's optimizer will turn this seemingly superfluous search command back into index=something source=something_else, which ends up with what I've already shown. That's why I used the where command - it works differently and won't get optimized out.

Of course, narrowing the search only to the events containing the value "stderr" will speed up the search (but won't be very effective if the "stderr" term appears elsewhere in the event; tough luck). I'm not quite sure though if TERM() makes any difference here. I'm pretty sure just searching for "stderr" itself would suffice, and it doesn't make the resulting SPL look too cryptic.

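As a sketch of the two approaches discussed above (the index name is hypothetical), this form gets optimized back into an indexed-term search on source::stderr, so it only matches events whose source metadata field is literally "stderr":

    index=my_index | search source="stderr"

whereas this form narrows the scan with the bare term first and then filters on the search-time field, which is not optimized away:

    index=my_index "stderr" | where source="stderr"
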
Hey @lar06, I went through the documentation and it seems that the input is not properly configured: it needs the key to authenticate the request and fetch the data. You need to check in your environment whether the input is enabled or disabled. From the looks of it, the input should be enabled. If you want to disregard the logs, consider disabling the input. If it needs to be enabled for data ingestion, I suggest reviewing the document and configuring all the parameters properly.

Document Guide - https://support.docusign.com/s/document-item?language=en_US&rsc_301&bundleId=gso1581445176035&topicId=hvj1633980712678.html&_LANG=enus

Thanks, Tejas.

--- If the above solution helps, an upvote is appreciated..!!

Hello, I would like to know if there is a consumption gap between these two indexing modes in Splunk Cloud license usage. I mean, which one will cost the most with structured logs (JSON)?

What I understand:
indexed_extractions=json ==> fields are extracted at index time, which could increase the size of the tsidx files and so license usage and cost
kv_mode=json ==> fields are extracted at search time, and should not impact license usage

Am I correct? Thanks for your confirmation.

Regards, Nordine

Hi @HA-01 Is there anything in $SPLUNK_HOME/var/log/splunk/splunkd.log relating to the app when retrieving the Dynamic tests which might give us any clues?

Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.

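If internal logs are indexed on the instance, a rough search along these lines can surface related errors (the "thousandeyes" filter term is an assumption about how the add-on's components are named):

    index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN) thousandeyes
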
Hi @chenfan There are two types of token. One is a JWT token that you can create in the UI via the Tokens page (Bearer). The other is obtained by logging in to the /services/auth/login endpoint and retrieving a session token. Based on your short token length I suspect you are using a session token (JWT tokens often start with "eyJ"), which means the Authorization type should be "Splunk", not "Bearer".

Bearer: means to use a bearer token header, the standard for JSON Web Tokens (JWTs), on which Splunk authentication tokens are based.
Splunk: means to use the Splunk header for authentication.

Try the following:

    curl -v -X POST -k -H "Authorization: Splunk dc73xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" "https://mysplunkserver:8089/servicesNS/nobody/my_app/saved/searches/testalertapi" -d enabled=0

Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.

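If you want to confirm which kind of token you have, a session token can be obtained from the login endpoint like this (hostname and credentials are placeholders); the value inside the returned <sessionKey> element is what goes after "Authorization: Splunk":

    curl -k https://mysplunkserver:8089/services/auth/login -d username=admin -d password=yourpassword
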
We are experiencing the same issue, subscribing to this thread in case anyone finds a solution.

Hi Splunker, I tried to enable/disable with the API, but I encountered problems with token authentication. I always get the following error. I have also adjusted the API information, but I still can't solve this problem.

    curl -v -X POST -k -H "Authorization: Bearer dc73xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" "https://mysplunkserver:8089/servicesNS/nobody/my_app/saved/searches/testalertapi" -d enabled=0

It would be really great if you could share some working examples somewhere in your documentation. Thanks in advance!

I can't collect data for tests whose type is set to “Dynamic” in ThousandEyes. We are currently unable to retrieve data for "Dynamic" test types via the Test Stream data input configuration in the Cisco ThousandEyes App for Splunk.

In the Ingest settings of the Cisco ThousandEyes App for Splunk, under the "Tests Stream" configuration, test types set as “HTTP Server” appear correctly under Endpoint Tests and data is successfully ingested into Splunk. However, test types set as “Dynamic” do not appear at all and cannot be ingested into Splunk. Since data configured as "HTTP Server" is being successfully ingested, we do not believe this is a communication issue between ThousandEyes and Splunk.

Could you please advise how we can ingest Tests Streams that are configured as “Dynamic”?

Hello,

Getting this error on DocuSign Monitor Add-on 1.1.3:

    ERROR: Could not parse the provided public key

I haven't provided any public key, so wondering what this is about. Thanks for any help.

Lionel

Hi @TestUser, I don't know of any best practices for such a check; the only advice I can give you is to use common sense and follow the procedure you indicated, which seems correct to me:

- verify (if you haven't already done so) that before the upgrade there are no parsing and normalization problems,
- if possible, use a data set that you have already acquired with the old version of the add-on,
- at the end, don't just check that the data parsing is correct, but also check that the normalization rules the add-on must have if it is CIM compliant (otherwise it is not relevant) are correctly applied (eventtypes, tags and fields).

About tools, I usually use the SA_CIM-Vladiator app (https://splunkbase.splunk.com/app/2968) to check the normalization status, but there are also other tools to check the CIM compliance of a data flow.

Ciao.
Giuseppe

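As one way to spot-check normalization after the upgrade, a rough coverage search like this (index, sourcetype and the CIM field names are assumptions, pick the fields your data model actually requires) shows how often the expected fields and tags are populated per eventtype:

    index=my_index sourcetype=my:sourcetype
    | stats count AS total values(tag) AS tags count(src) AS src count(dest) AS dest count(action) AS action BY eventtype
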
Hi @TestUser Different developers have different approaches to dealing with some of these techniques. I start by using a local Docker instance for testing against various versions of Splunk (e.g. currently supported versions), but it depends on how big the changes made in the upgrade actually are.

Changes you make in the UI will persist in the local directory, therefore if a user upgrades the app you should find that this local directory persists and the other folders get updated. It's therefore important to ensure that when you update your app you don't change something which might be impacted by someone's local configuration.

Are you using a framework such as UCC to build your app? If so there are UI testing options available (e.g. see https://splunk.github.io/addonfactory-ucc-generator/ui_tests_inputs_page/) and I would also recommend checking out https://splunk.github.io/pytest-splunk-addon/ which is really powerful for performing testing automatically without human interaction.

Ultimately, depending on your app and what you are changing you might find varying appropriate options, but these are my two favourites - although I don't use them on all apps all of the time.

Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.

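For reference, a throwaway local instance for this kind of upgrade test can be started roughly like this (image tag, container name and password are placeholders):

    docker run -d --name splunk-upgrade-test -p 8000:8000 -p 8089:8089 \
      -e SPLUNK_START_ARGS="--accept-license" \
      -e SPLUNK_PASSWORD="changemeplease" \
      splunk/splunk:9.2

Installing the previously released package first, adding sample inputs and configuration, then installing the new package over it lets you verify that everything under the app's local/ directory survives the upgrade.
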
Hi @Ravi1 I agree that the loss of the fishbucket state (due to ephemeral storage) is the cause of both log duplication and data loss after Splunk Universal Forwarder pod restarts in Kubernetes. When the fishbucket is lost, the UF cannot track which files and offsets have already been ingested, leading to re-reading old data (duplicates) and missing logs that rotated or were deleted during downtime. If logs are rotated (e.g. to myapp.log.1) and Splunk is not configured to monitor the rotated file path, then this could result in you losing data as well as the more obvious duplication of data due to the file tracking within the fishbucket being lost.

As far as I am aware, the approach of using a UF within K8s is not generally encouraged; instead the Splunk Validated Architecture (SVA) for sending logs to Splunk from K8s is via the Splunk OpenTelemetry Collector for Kubernetes - this allows sending of logs (amongst other things) to Splunk Enterprise / Splunk Cloud.

If you do want to use the UF approach (which may or may not be supported) then you could look at adding a PVC as is done with the full Splunk Enterprise deployment under splunk-operator; check out the Storage Guidelines and StorageClass docs for splunk-operator.

Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.

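As a rough sketch of the PVC idea (names, image tag and the UF install path are assumptions), persisting the UF's var directory keeps the fishbucket across pod restarts:

    # excerpt from a hypothetical StatefulSet pod spec
    containers:
      - name: splunk-uf
        image: splunk/universalforwarder:9.2
        volumeMounts:
          - name: uf-state
            mountPath: /opt/splunkforwarder/var
    volumes:
      - name: uf-state
        persistentVolumeClaim:
          claimName: uf-state-pvc
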
I’ve already released a Splunk Add-on with one input and user configurations. Now, I’ve added a new input and made UI changes in the Configuration page. I want to simulate a customer upgrade locally by:

- Installing the existing released version
- Adding sample configurations
- Upgrading it with the new version
- Testing if existing settings are retained and the new input/UI works without issues

Could you guide me on:

- Best practices for local upgrade testing
- Ensuring configurations persist after upgrade
- Any tools or logs to verify migration behavior

Hi @apc It sounds like you need to increment the "build" value in the [install] stanza of app.conf:

    build = <integer>
    * Required.
    * Must be a positive integer.
    * Increment this whenever you change files in <app_name>/static.
    * Every release must change both 'version' and 'build' settings.
    * Ensures browsers don't use cached copies of old static files in new versions of your app.
    * 'build' is a single integer, unlike 'version' which can be a complex string, such as 1.5.18.

Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.

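For illustration, the relevant parts of default/app.conf would end up looking something like this (the version and build numbers are hypothetical):

    [launcher]
    version = 1.2.0

    [install]
    build = 3
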
Hi @Schroeder I've been trying to get this to work but unfortunately haven't managed to. It looks like this is designed for a streamed payload being sent into persistconn/appserver.py rather than as a response to the browser. Please let us know if you do manage to get anywhere with this though - perhaps it's possible with a custom driver = value, although there aren't many examples of this online!

Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.

Hi @MrGlass You are seeing source twice because it is an internal field as well as being specified inside your event. This can cause problems when searching for it because it has two values. You might find that adding a TERM statement is enough to filter this down and retain performance, rather than having to search all your data and then filter by source once all the events are loaded:

    index=YourIndex sourcetype=dnrc:docker TERM(stderr)

Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.

Source is one of the default metadata fields which are supposed to be indexed along with the event, not included in the event. Therefore the initial search does not look for fields parsed out from the event itself when looking for fields like source or sourcetype. As a workaround, instead of

    <rest of your search> source=stderr

I'd try

    <rest of your search> stderr | where source="stderr"