All Posts


@ParsaIsHash you can use inputs.conf with a blacklist to prevent unwanted files from being forwarded at the source level (on the Heavy Forwarder). This approach stops the logs from even being read, which is more efficient than filtering them with props.conf and transforms.conf. From the inputs.conf spec:

blacklist = <regular expression>
* If set, files from this input are NOT monitored if their path matches the specified regex.
* Takes precedence over the deprecated '_blacklist' setting, which functions the same way.
* If a file matches the regexes in both the deny list and allow list settings, the file is NOT monitored. Deny lists take precedence over allow lists.
* No default.
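As a sketch against the directory from the question elsewhere in this thread (the monitor stanza path is an assumption; any monitor input covering the parent directory would work the same way):

```ini
# inputs.conf on the Heavy Forwarder -- hypothetical monitor stanza
[monitor:///var/log]
# Skip any file whose path falls under /var/log/apple/, at any depth;
# blacklist is matched as a regex against the full file path
blacklist = ^/var/log/apple/
```

Because the blacklist regex matches anywhere under /var/log/apple/, newly created subdirectories are covered automatically with no config changes.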
Any reason why you don’t use inputs.conf with blacklist on source side?
You cannot set it up like the first example with the GUI. The GUI always sets a default output, which must be some other Splunk instance. When you want to set the default to devNull, you must do it with the conf files.
"usually we use this easier option for this - choice called "All" using "*" as value"

I understand the desire to use a simpler method. However, as your reply to @gcusello says, that option will not satisfy your use case, which is to make sure all listed options match, not "any" as "*" would do. ("*" should really be named "Any", not "All".) My solution, on the other hand, meets that requirement; it is functionally equivalent to Dashboard Studio's "Select all" mouse action. If I am not mistaken, @woodcock's suggestion uses a similar concept, except that it populates an additional text box.
Encouraged by a new install on my laptop with app version 4.0.4, I upgraded the app on my server to 4.0.5. I can now confirm that the error still shows in app version 4.0.1 on Splunk 9.4.0 but no longer shows in app version 4.0.5. Consider it fixed. (The "Fixed bugs" section of the documentation makes no mention of this error.)
This looks like a working example, but for some reason it doesn't work for me. No search runs when the textbox or the dropdown changes; filtering happens only if I choose a User from the dropdown.
Your sample test data shows a field called "tickets" while your stats command uses a field called "ticket" - is it simply a typo in your example, or is it in your actual search?
Description: I am using a Splunk Heavy Forwarder (HF) to forward logs to an indexer cluster. I need to configure props.conf and transforms.conf on the HF to drop all logs that originate from a specific directory and any of its subdirectories, without modifying the configuration each time a new subdirectory is created.

Scenario: The logs I want to discard are located under /var/log/apple/. This directory contains dynamically created subdirectories, such as:
/var/log/apple/nginx/
/var/log/apple/db/intro/
/var/log/apple/some/other/depth/
New subdirectories are added frequently, and I cannot manually update the configuration every time.

Attempted Solution: I configured props.conf as follows:

[source::/var/log/apple(/.*)?]
TRANSFORMS-null=discard_apple_logs

And in transforms.conf:

[discard_apple_logs]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

However, this does not seem to work, as logs from the subdirectories are still being forwarded to the indexers.

Question: What is the correct way to configure props.conf and transforms.conf to drop all logs under /var/log/apple/, including those from any newly created subdirectories? How can I ensure that this rule applies recursively without explicitly listing multiple wildcard patterns?

Any guidance would be greatly appreciated!
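One likely explanation, offered as an assumption about the root cause: [source::...] stanza headers in props.conf use Splunk's wildcard syntax, where `...` matches any number of characters including path separators and `*` stops at `/`; they are not regular expressions, so the regex-style `(/.*)?` in the header never matches. A sketch using the wildcard form, keeping the transform from the question unchanged:

```ini
# props.conf on the Heavy Forwarder
# "..." matches any depth of subdirectories under /var/log/apple/
[source::/var/log/apple/...]
TRANSFORMS-null = discard_apple_logs

# transforms.conf (separate file, same app)
[discard_apple_logs]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
```

With the wildcard header, new subdirectories are matched automatically without listing patterns per depth.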
I tried putting my props.conf and transforms.conf in $SPLUNK_HOME/etc/apps/yourAppName/local/ but the settings don't seem to take effect for some reason. I created a tcpout destination from the web UI, but it nevertheless tries to send everything over S2S, disregarding the things I've set in transforms.conf.

Though I have to admit, I need to have something like this in outputs.conf:

# Because the audit trail is protected and we can't transform it,
# we cannot use the default group; we must use tcp_routing
[tcpout]
defaultGroup = NoForwarding

[tcpout:nexthop]
server = localhost:9000
sendCookedData = false

But if I set up the destination from the Forwarding and receiving page, then I get something like this instead:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = localhost:9000

[tcpout-server://localhost:9000]
Did you find the reason for this? Since upgrading to ES 8.0.2, all of our Correlation Searches (event-driven searches) now use 'All time' instead of the $info_min_time$ and $info_max_time$ specified in the rule!
Hi @pedropiin , the stats command automatically dedups values, so you don't need to use the dedup command before the stats command. Ciao. Giuseppe
Hi @pedropiin , there's no obvious reason for the behavior you describe: after a stats command, only the fields named in the command remain. Could you share the full search? Ciao. Giuseppe
Hi @L_Petch , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
The dedup command keeps the first event it finds for each unique value of the field(s) specified in its arguments ("tickets" in this case). The values of other fields are ignored. Depending on the sequence of events, it's entirely possible for each ticket value to appear first with name1 and be retained, while the events with other names are discarded. If you need to dedup on both tickets and name, then use dedup tickets name in the query.
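A small runnable sketch (synthetic rows; assumes a Splunk version where makeresults supports format=csv, i.e. 9.0+) showing why deduping on one field drops one of the names:

```spl
| makeresults format=csv data="name,tickets
name1,[empty]
name2,[empty]"
| dedup tickets
```

Only the first row (name1, [empty]) survives, because both rows share the same tickets value. Replacing the last line with `| dedup tickets name` keeps both rows, one per (tickets, name) pair.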
Hello everyone. I'm dealing with a query that handles certain "tickets" and "events", but some of them are duplicates, which is why it runs a dedup command. But there seems to be something else happening. The query is of the form:

index=main source=... ...
...
| fillnull value="[empty]"
| search tickets=***
| dedup tickets
| stats count by name, tickets
| stats sum(count) as numOfTickets by name
...
| fields name, tickets, count

Listing all the events, I can see that the main duplicate events are basically the ones that were null and were filled with "[empty]". But, for some reason, some of the events disappear with dedup. In theory, dedup should remove all duplicates and keep one event representing all of its "copies". That happens for some "names", but not for all. In the same query, I deal with events of the category "name1" and events of the category "name2". All of their instances are "[empty]", and running dedup removes all instances of "name1" and keeps one of "name2", when it should keep one of both. Why is that happening?

Each instance is of the form "processTime | arrivalTime | name | tickets | count".
If I change all the versions back to what they were, then linting works again.  So at least I have a workaround.  Is it just me, or is this broken for anyone else?
Hi. Have you set the permissions? How is your PHP application configured, and does it run under a specific user/group? Please make sure to set the permissions recursively so that the PHP user can access the AppDynamics PHP agent directory and the copied files within the PHP directory.
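A sketch of the recursive permission fix. AGENT_DIR and PHP_USER are assumptions (adjust to your install path and the user php-fpm runs as, often www-data); the defaults below point at a scratch directory and the current user so the commands are safe to try as-is:

```shell
# Hypothetical paths/user -- adjust AGENT_DIR and PHP_USER to your setup
AGENT_DIR="${AGENT_DIR:-/tmp/appd-php-agent-demo}"
PHP_USER="${PHP_USER:-$(id -un)}"
mkdir -p "$AGENT_DIR/logs"

# Give the php-fpm user ownership of the agent tree, recursively
chown -R "$PHP_USER" "$AGENT_DIR"
# Capital X sets execute on directories only, so they stay traversable
chmod -R u+rwX,go+rX "$AGENT_DIR"

# Verify the directory is readable
test -r "$AGENT_DIR/logs" && echo "readable"
```

In a real container you would typically run this as root in the Dockerfile with PHP_USER=www-data.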
Hi Giuseppe. Thank you for your response. This is just with test data. When I deal with a real scenario, I face the same issue, but I can't simply run "count".
We are trying to onboard Akamai logs to Splunk and have installed the add-on. It is asking for a proxy server and proxy host, and I am not sure what these mean. Our Splunk instances are hosted on AWS, are refreshed every 45 days due to compliance, and are not exposed to the internet (internal only). I spoke with our internal team and they said to use a sidecar proxy on our Splunk instances hosted on AWS. How do I create and configure a sidecar proxy server here? Please guide me. This is the app installed - https://splunkbase.splunk.com/app/4310
My goal is to run AppDynamics in the context of a PHP application using an Alpine container. I am using the official image php:8.2-fpm-alpine, which can be seen here: https://hub.docker.com/layers/library/php/8.2-fpm-alpine/images/sha256-fbe14883e5e295fb5ce3b28376fafc8830bb9d29077340000121003550b84748

On the AppDynamics side, I am using the archive below, which was the latest to be found in the download area: appdynamics-php-agent-x64-linux-24.11.0.1340.tar.bz2

I was able to successfully install the PHP agent thanks to the install script from the archive:

appdynamics-php-agent-linux_x64/install.sh

However, when running the command "php -m", I get this message:

Warning: PHP Startup: Unable to load dynamic library 'appdynamics_agent.so' (tried: /usr/local/lib/php/extensions/no-debug-non-zts-20220829/appdynamics_agent.so (Error loading shared library libstdc++.so.6: No such file or directory (needed by /usr/local/lib/php/extensions/no-debug-non-zts-20220829/appdynamics_agent.so)), /usr/local/lib/php/extensions/no-debug-non-zts-20220829/appdynamics_agent.so.so (Error loading shared library /usr/local/lib/php/extensions/no-debug-non-zts-20220829/appdynamics_agent.so.so: No such file or directory)) in Unknown on line 0

I tried various ways to install the missing libraries but then ran into other problems:

RUN apk add --no-cache \
    gcompat \
    libstdc++

Which leads to:

Warning: PHP Startup: Unable to load dynamic library 'appdynamics_agent.so' (tried: /usr/local/lib/php/extensions/no-debug-non-zts-20220829/appdynamics_agent.so (Error relocating /usr/local/lib/php/extensions/no-debug-non-zts-20220829/appdynamics_agent.so: __vsnprintf_chk: symbol not found), /usr/local/lib/php/extensions/no-debug-non-zts-20220829/appdynamics_agent.so.so (Error loading shared library /usr/local/lib/php/extensions/no-debug-non-zts-20220829/appdynamics_agent.so.so: No such file or directory)) in Unknown on line 0

What could be wrong?
I don't see much help in the documentation regarding AppDynamics in the context of an Alpine container.
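A possible interpretation, offered as an assumption rather than a confirmed fix: __vsnprintf_chk is a glibc "fortify" symbol, and Alpine's musl libc does not provide it even with the gcompat shim, which suggests the agent .so is built against glibc only. If that's the case, one workaround is to base the image on the glibc (Debian) variant of the same PHP tag. A sketch (the image tag and install script come from the question; the COPY path and install invocation are illustrative):

```dockerfile
# Sketch, assuming the agent binary requires glibc
FROM php:8.2-fpm            # Debian-based (glibc), unlike php:8.2-fpm-alpine
COPY appdynamics-php-agent-linux_x64 /opt/appd-agent
# install.sh typically takes controller connection arguments; see the
# agent's own documentation for the exact invocation
RUN /opt/appd-agent/install.sh
```

The trade-off is a larger base image in exchange for binary compatibility with the agent.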