All Posts


Dear Anna, we are also using R8 to obfuscate our code, and our app is crashing with the AppDynamics agent. Are there any additional steps to follow? We haven't provided any mapping file. Is there a procedure or documentation for this?
By selection, if you meant the canvas, it can be adjusted by changing   Or directly in the code:

"layout": {
    "type": "absolute",
    "options": {
        "width": 1440,
        "height": 960
    },
    "structure": [],
    "globalInputs": [ "input_global_trp" ]
},
I checked with tcpdump and Wireshark. I can clearly see the TCP packets, but not the UDP packets. However, I can see the traffic by echoing the message (both TCP and UDP) to the SC4S server. I believe it's an issue with the Kiwi Syslog Message Generator. Thanks, guys.
Hi, can I get a recommendation on the appropriate/best option between these two apps to ingest and query "logs" from Snowflake: Splunk DB Connect, or Snowflake?
To get a count, replace the dedup command with stats. Since the stats command sorts its results, you don't need the separate sort command.

index=cisco sourcetype=cisco:asa message_id=XXXXXX
| stats count by host, src_ip, dest_ip, dest_port, action
| table host, src_ip, dest_ip, dest_port, action, count
Ah, I knew I'd seen this asked before...
Sweet - nice optimisation
If you want multiple values in a single field you could do this | stats values(HOST) as HOST by SEVERITY | eval HOST=mvjoin(HOST, ",")
1) c doesn't exist unless it is a value in WriteType, and even then it will contain a count, not "test" or "qa". 2) No, you can only have two fields with chart. Perhaps it would be better if you explained what you are trying to do and shared some representative, anonymised sample events? (I may have said that before a few times!)
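To illustrate the first point, here is a hedged sketch of one possible intent (assuming WriteType actually takes values such as "test" and "qa"): chart pivots each WriteType value into its own output column, so you keep the columns you want rather than filtering with `where c in(...)` on a field named c, which chart never creates:

```
| chart count(WriteType) over Collection by WriteType
| fields Collection, test, qa
| sort Collection
```

Here `fields` keeps only the Collection row label and the two columns chart generated for the "test" and "qa" values; the column names are whatever distinct values WriteType holds in your data.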
We installed the agent as if it were a VM. We had to move the file "appdynamics_agent.ini" into the same folder as "php.ini". After that, we rebuilt the container image and deployed it to the dev environment; finally, the controller recognized the agent, and it started sending telemetry.
I want to build a query that pulls Cisco ASA events based on a particular syslog message ID which shows denied traffic. I dedup the information for events that have the same source IP, destination IP, destination port, and action. It seems to work well; however, I would now like a count added for each time that unique combination is seen. The query is:

index=cisco sourcetype=cisco:asa message_id=XXXXXX
| dedup host, src_ip, dest_ip, dest_port, action
| table host, src_ip, dest_ip, dest_port, action
| sort host, src_ip, dest_ip, dest_port, action

That query gives me a table that appears to be dedup'ed; however, I would like to add a column that shows how many times each entry is seen.
Currently this is a manual process for me: I swap our connections between our primary and secondary HFs for every patch window. Is this what everyone is doing, or is there a way to automate a cutover? Thanks for any insight!
When you have edited those files on disk, Splunk needs to be restarted, or at least refreshed, before those changes take effect. Look at the /debug/refresh URL for a refresh. When you use the Lookup Editor app, there is no need to do this, as the app manages those actions internally. Just create a new lookup, and after you have saved it, it's ready for use.
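As a sketch, assuming Splunk Web is running on its default port (the host name and locale prefix will vary in your environment), the refresh endpoint looks like this; open it in a browser while logged in as an admin and trigger the refresh from the page:

```
https://<splunk-web-host>:8000/en-US/debug/refresh
```

This reloads many configuration endpoints without a full splunkd restart, which is usually enough for lookup-related changes.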
Thanks for your response, bowesmana!  You've got me headed in the right direction.
Whenever I update/create a collections.conf or transforms.conf file manually, does Splunk need to be restarted (by an admin)? Same question if I use the Lookup Editor app - does Splunk need to be restarted (by an admin) after updating/creating collections.conf or transforms.conf? https://splunkbase.splunk.com/app/1724   I think once we have these answered, you will have solved this post. Thank you so much
It's possible that your SH isn't set up to reach your LDAP systems and that's why it's not returning results, but it's hard to say without more information. I'd recommend checking the logs for the add-on and seeing if you can find any errors or anything in there. You'll find these in $SPLUNK_HOME/var/log/splunk/SA-ldapsearch.log (Ref: https://docs.splunk.com/Documentation/SA-LdapSearch/3.0.8/User/UseSA-ldapsearchtotroubleshootproblems)
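To check those logs from Splunk itself, a minimal sketch (assuming the search head's $SPLUNK_HOME/var/log/splunk files are indexed into _internal, as they are by default; adjust the severity terms to match what the add-on actually logs):

```
index=_internal source=*SA-ldapsearch.log* (ERROR OR WARN)
| table _time, source, _raw
```

If this returns nothing at all, the add-on may not be running its searches, which is itself a useful clue.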
Yeah, the app is not great at deduplicating the notables it sends to SOAR. Ideally you would want this app to run a search, find a result with some key field X, then create only one container with one artifact containing that result. Subsequent searches in the app will create a new artifact in the same container, but this is unwanted. One way around this is to set up your generating search so that it appends the results to a whitelist, which is used in later executions of the search to remove the results already seen. E.g. imagine you have a unique field of "id" in your results, and you want only one container+artifact per value of "id".

1. Make a lookup containing one "id" column, e.g. search_whitelist.csv

2. Change your search to append and exclude ids:

| <your search>
| search NOT [| inputlookup search_whitelist.csv | table id]
| outputlookup search_whitelist.csv append=true

3. (optional but recommended) Make another search which removes old entries from search_whitelist.csv if it gets too big. E.g.

| inputlookup search_whitelist.csv
| sort - id
| head 10000
| outputlookup search_whitelist.csv
Hi @calvinmcelroy, This doc contains instructions related to the scenarios you mentioned: Install a Windows universal forwarder - Splunk Documentation

About the least-privileged user: For security purposes, avoid running the universal forwarder as a local system account or domain user, as that grants the user high-risk permissions that aren't needed. When you install version 9.1 or higher of the universal forwarder, the installer creates a virtual account as a "least-privileged" user called splunkfwd, which provides only the capabilities necessary to run the universal forwarder. Since local user groups are not available on a domain controller, the GROUPPERFORMANCEMONITORUSERS flag is unavailable, which might affect WMI/perfmon inputs. To mitigate input issues when installing with the installer, the default account on a domain controller is Local System. If you choose a different account to run the universal forwarder during installation, the universal forwarder service varies based on your choice:

- If you choose Local System, the universal forwarder runs with full Windows administrator privileges.
- If you choose a domain account with Windows administrator privilege, the universal forwarder runs with full Windows administrator privileges.
- If you choose a domain account without Windows administrator privilege, you select the privileges. Once you choose a non-administrator user to run the universal forwarder, this user becomes a "least privilege user" with limited permissions on Windows.

Also, take a look at these permissions:

- SeBackupPrivilege: check to grant the least-privileged user READ (not WRITE) permissions for files.
- SeSecurityPrivilege: check to allow the user to collect Windows security event logs.
- SeImpersonatePrivilege: check to enable the capability to add the least-privileged user to new Windows users/groups after the universal forwarder installation.
These grant more permissions to the universal forwarder to collect data from secure sources. Happy Splunking, Rafael Santos. Please don't forget to accept this solution if it fits your needs.
We use a deployment server to manage the config of our UF fleet. Recent changes to privileges on clients are preventing the UF from restarting its service after a new config or serverclass has been downloaded. The company doesn't want to provide Splunk with a DA-level account or anything similar. What is the best "least privilege" way for the Splunk UF to be able to restart its own service and collect the needed logs within a Windows domain?
Please help me with the below items:

#1)

| chart count(WriteType) over Collection by WriteType
| sort Collection

For the above query, can we add a condition as below? (I am facing an issue here)

| chart count(WriteType) over Collection by WriteType
| where c in("test","qa")
| sort Collection

#2) Can we add one more field after WriteType, as below?

| chart count(WriteType) over Collection by WriteType, c
| where c in("test","qa")