All Posts

@nopera Install the add-on in the /opt/splunk/etc/apps directory on the HF. If you're using a deployment server and plan to deploy the add-on to a heavy forwarder (HF), place the add-on in the /opt/splunk/etc/deployment-apps directory on the deployment server. Then, create a server class, add the HF to that server class, associate the app with it, and deploy it to the HF.
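On the deployment server side, a minimal serverclass.conf sketch might look like the following (the server class name, hostname, and app directory name are placeholders for illustration, not taken from the original post):

[serverClass:exchange_hf]
whitelist.0 = my-heavy-forwarder.example.com

[serverClass:exchange_hf:app:Splunk_TA_microsoft_exchange]
stateOnClient = enabled
restartSplunkd = true

After saving, run splunk reload deploy-server (or use Forwarder Management in the UI) so the deployment server picks up the change and pushes the app to the HF.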
I don't know why I have to run the following before the .spl2 file shows up:

~/splunk/bin/splunk download-spl2-modules app spl2-test -dest default

But I am still getting an error when I try to run |@spl2 from search1:

Error in 'SearchParser': The SPL2 query is invalid: 'unknown error: Unable to fetch roles for the user'.
Am I missing something? I have VS Code running the Splunk extension and created a simple _default.spl2nb. I'm able to test it and get results back, and uploading it to the Search app or a custom app (spl2-test) also gives me a success message. But when I go to the Splunk deployment under <app>/default/data, I don't see an spl2 folder at all. What's going on? Thanks.
Hi @studero

The error is being caused by a misconfiguration in your /etc/otel/collector/agent_config.yaml file. Is it possible you can share this file (redacted if required)? Based on the logs, the service.telemetry.metrics section contains an invalid "address" key. As of Collector v0.123.0, the service::telemetry::metrics::address setting is ignored and should instead be configured as:

service:
  telemetry:
    metrics:
      readers:
        - pull:
            exporter:
              prometheus:
                host: '0.0.0.0'
                port: 8888

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi @tomapatan  Is your first "and" lowercase in both examples? This should be uppercase, if its made to uppercase does it behave as expected or do you still get the issue? Im just wondering if the U... See more...
Hi @tomapatan  Is your first "and" lowercase in both examples? This should be uppercase, if its made to uppercase does it behave as expected or do you still get the issue? Im just wondering if the UI does some correction before running the litsearch.  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
Hi everyone,

I'm running a query via the Splunk REST API (using Python) and need to filter events based on the following requirements:
- Always include events where TITLE is one of: A, B, C, D, E
- Only include events where TITLE=F and FROM=1, or TITLE=G and FROM=2

This works fine in Splunk Web, but when sent via the REST API the conditional clause for TITLEs F and G doesn't get applied correctly.

Works via Splunk Web and REST (without filtering based on FROM):

index=my_index System="MySystem*" Title=A OR Title=B OR Title=C OR Title=D OR Title=E OR Title=F OR Title=G

Works on Web, not via REST (filtering based on FROM):

index=my_index System="MySystem*" Title=A OR Title=B OR Title=C OR Title=D OR Title=E OR (Title=F and FROM=1) OR (Title=G AND FROM=2)

I've tried to apply the filtering downstream, but the issue persists. I'm unable to query a saved search because some fields are extracted at search time and aren't available when accessed via the REST API. As a result, I need to extract those fields directly within the query itself when using the REST API. (Note: the TITLE field is being extracted correctly.)

Many thanks.
Hi @dpridemore

Are you using Splunk Cloud Victoria or Splunk Cloud Classic? If you are on a Classic stack, it could be that this requires manual installation by Support because it is not self-serviceable.

However, the app shows that it supports up to Splunk 9.2, and your cloud stack is probably on a 9.3 build by now as 9.2 is getting old. Please could you confirm your cloud stack version? (Top right, Support & Services -> About)

Either way, you may be able to get this installed by going via Splunk Support, so it's worth logging a support case to see if they can help you out with this one.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
For creating fields dynamicaly you can use the {} syntax. Like | makeresults | eval field="field1",{field}="value" But the important question and a possible issue here is where did you get the multivalued fields from. Remember that two distinct multivalued fields are... well, distinct. There is no relationship between their values whatsoever. And if you are creating multivalued field by means of list() or values() and the original data didn't have some values, you can't tel, which ones were empty. You're just getting a "squashed" list as a result.
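As an illustration (all field and value names here are made up, not taken from the original question), one common way to keep the pairing between two related values is to zip them into a single multivalue field before aggregating, then split them apart again:

| makeresults count=3
| streamstats count AS n
| eval name="field".n, value="value".n
| eval pair=mvzip(name, value)
| stats list(pair) AS pair
| mvexpand pair
| eval name=mvindex(split(pair, ","), 0), value=mvindex(split(pair, ","), 1)
| eval {name}=value
| fields - pair, name, value

Each expanded row ends up with its own dynamically named field (field1=value1, field2=value2, ...), and the name/value pairing survives the stats aggregation because it was carried inside one multivalue field.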
There can be several possible issues probably but since you say that the host has been "additionally hardened" I'd hazard a guess that you have applocker policy preventing unknown/not-whitelisted apps from running. Since the eventlogs are ingested by means of spawning external .exe, if it's not whitelisted, it will fail.
Hi @keen

It's odd that it would work once but then stop with that error. As far as I know, the settings page within the app only has a single encrypted value, which is proxy_password. Are you using a proxy with the input? Are there any other error lines around the one you posted which might provide more information?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
The Akamai Guardicore app is listed as a cloud app, but I don't see it as an option to install in Splunk Cloud. The add-on is available, just not the app.
Hello, your questions are answered in the original post. Thank you.
We are running the Elasticsearch Data Integrator - Modular Input app to ingest logs from Elasticsearch into Splunk. However, the app only works right after Splunk is restarted; it stops working a few minutes later and doesn't work again until the next time Splunk is restarted.

Error message:

ERROR PersistentScript [3778898 PersistentScriptIo] - From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/TA_elasticsearch_data_integrator___modular_input_rh_settings.py persistent}: f"Failed to get password of realm={self._realm}, user={user}."

Can you help fix the problem?
Hi @tsocyberoperati

Are you seeing any permission-related issues in splunkd.log? Also check whether the Splunk forwarder is running as a local user or an NT service/system account. Try running Splunk as a local user and restart the Splunk service.
Instead of making the positions for the lines fixed, I want to include lower and upper bounds for the average height and length, based on a fourth, fifth, sixth and seventh column. Therefore I want to use tokens to set these values based on the outcome of the search.

For example, what I have now is the following, including three columns:

| table ResultHL HeightAvg LengthAvg

What I want is the following:

| table ResultHL HeightAvg LengthAvg LowLength UppLength LowHeight UppHeight

where LowLength, UppLength, LowHeight and UppHeight are indicated by IV, III, II and I, respectively, in the last figure. The yellow lines should be based upon the result of the search and put into tokens.

Many thanks in advance! Happy Splunking!
Hi @moriteza

Are you facing this issue after updating the deployment server to 9.2 or above? If yes, follow these steps.

There is a known issue in Splunk 9.2.x where deployment clients will not show in the Forwarder Management app. You need to add the following config to outputs.conf on the deployment server and then restart the deployment server:

[indexAndForward]
index = true
selectiveIndexing = true

Full link to the issue: https://help.splunk.com/en/splunk-enterprise/administer/manage-and-update-deployment-servers/9.2/configure-the-deployment-system/upgrade-pre-9.2-deployment-servers#ariaid-title1
Hi, thank you for the replies. To clarify, in which path should I place the add-on file? It comes as a .tgz; where should I extract it? @livehybrid @kiran_panchavat
Hi, thank you for the reply. To clarify, in which path should I place the add-on file? It comes as a .tgz; where should I extract it?
Hi @nopera

Please can you confirm whether you have downloaded and installed the Splunk Add-on for Microsoft Exchange app from Splunkbase on your forwarder?

Ensure that the folder listed in monitor:// exists on your filesystem and that the Splunk service can read the files.

Are you able to see other logs (such as _internal logs) on your Splunk instance from the forwarder with this config enabled? Are there any error logs in $SPLUNK_HOME/var/log/splunk/splunkd.log regarding these inputs/monitor configs?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
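For comparison, a typical monitor stanza for Exchange message tracking logs might look like the following in inputs.conf (the log path and index name are assumptions for illustration; adjust them to your environment):

[monitor://C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\MessageTracking]
sourcetype = MSExchange:2013:MessageTracking
index = msexchange
disabled = 0

The sourcetype here matches the props.conf stanza shown in the next reply, so the add-on's field extractions would apply to these events.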
@nopera

The props.conf configuration is specific to the sourcetype MSExchange:2013:MessageTracking. Please ensure that the corresponding add-on is installed on your heavy forwarder to enable proper parsing of the data.

[MSExchange:2013:MessageTracking]
CHARSET = UTF-8
SHOULD_LINEMERGE = false
CHECK_FOR_HEADER = false
REPORT-fields = msexchange2013msgtrack-fields, msgtrack-extract-psender, msgtrack-psender, msgtrack-sender, msgtrack-recipients, msgtrack-recipient
TRANSFORMS-comments = ignore-comments
FIELDALIAS-server_hostname_as_dest = server_hostname AS dest
FIELDALIAS-host_as_dvc = host AS dvc
EVAL-src = coalesce(original_client_ip,cs_ip)
EVAL-product = "Exchange"
EVAL-vendor = "Microsoft"
EVAL-sender = coalesce(PurportedSender,sender)
EVAL-src_user = coalesce(PurportedSender,sender)
EVAL-sender_username = coalesce(psender_username,sender_username)
EVAL-sender_domain = coalesce(psender_domain,sender_domain)
LOOKUP-event_id_to_action = event_id_to_action_lookup event_id OUTPUT action
TIME_PREFIX = ^\d\d
MAX_TIMESTAMP_LOOKAHEAD = 24
TIME_FORMAT = %y-%m-%dT%H:%M:%S.%QZ