All Posts

Hi @LizAndy123, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
So I have a dashboard and I want to print a custom message when there are 0 results. But in the dashboard I am working on, I am using the geostats command for the map, so when the result comes back zero, a custom message should be shown at the top of the panel. So I want the custom message on top of this image.
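One common approach (a minimal Simple XML sketch, assuming a classic dashboard; the search, fields, and token name below are placeholders, not your actual panel) is to set a token when the search finishes with zero results and show an html element above the map only while that token is set:

<panel>
  <html depends="$no_results$">
    <p><b>No results found for the selected time range.</b></p>
  </html>
  <map>
    <search>
      <query>index=web_traffic | iplocation clientip | geostats count</query>
      <done>
        <condition match="'job.resultCount' == 0">
          <set token="no_results">true</set>
        </condition>
        <condition>
          <unset token="no_results"></unset>
        </condition>
      </done>
    </search>
  </map>
</panel>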
No, unfortunately not. It is bothersome, but it can be worked around by using Splunk itself to analyze the logs and ignore that message at search time. This will show all messages without INFO and the aforementioned message:

index="_internal" sourcetype=splunkd NOT INFO NOT "AQR and authentication extensions not supported. Or authentication extensions is supported but is used for tokens only"
Hi. I'm using Splunk Enterprise 7.3.2 and installed universal forwarder 8.2.6 on Linux. I was asked to monitor the .bash_history file, so I installed the universal forwarder and checked that data is coming into Splunk. However, in a real-time search, most of the file is re-imported along with the newly added data, so monitoring is difficult because previous events are mixed in with real-time events. When I run a real-time search again, the _time field of the previously imported events and the newly added events is the same. Is it related to this? Does anyone know how to solve this problem?

+ inputs.conf settings

[monitor:///home/*/.bash_history]
index=test
sourcetype=test_add
disabled=false
crcSalt = <SOURCE>

[monitor:///root/.bash_history]
index=test
sourcetype=test_add
disabled=false
crcSalt = <SOURCE>
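For reference, here is a commented version of the first stanza - the comments are my reading of what each setting does, not part of the original config:

[monitor:///home/*/.bash_history]
# route events to the "test" index with a custom sourcetype
index = test
sourcetype = test_add
disabled = false
# <SOURCE> mixes the full file path into the initial CRC check, so files
# whose first bytes are identical (common for shell history files) are
# treated as distinct sources instead of being skipped as already indexed
crcSalt = <SOURCE>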
Hello, Were you able to resolve this? I'm having the same issue. Thanks.
Dear Anna, We are also using R8 to obfuscate the code, and our app is crashing with the AppDynamics agent. Are there any additional steps to be followed? We haven't provided any mapping file. Is there any procedure or documentation for this?
By selection, if you meant the canvas, it can be adjusted in the dashboard settings, or directly in the code:

"layout": {
    "type": "absolute",
    "options": {
        "width": 1440,
        "height": 960
    },
    "structure": [],
    "globalInputs": [
        "input_global_trp"
    ]
},
I checked with tcpdump and Wireshark. I can clearly see the TCP packets, but not the UDP packets. However, I can see the traffic by echoing the message (TCP and UDP as well) to the SC4S server. I believe it's an issue with the Kiwi Syslog Message Generator. Thanks, guys.
Hi, Can I get a recommendation on the appropriate/best option between these two apps to ingest and query "logs" from Snowflake:

Splunk DB Connect
Snowflake
To get a count, replace the dedup command with stats. Since the stats command sorts its results, you don't need the separate sort command.

index=cisco sourcetype=cisco:asa message_id=XXXXXX
| stats count by host, src_ip, dest_ip, dest_port, action
| table host, src_ip, dest_ip, dest_port, action, count
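With made-up values, the resulting table would look something like this (count showing how many times each unique combination was seen):

host   src_ip     dest_ip   dest_port   action   count
fw01   10.0.0.5   8.8.8.8   53          denied   12
fw01   10.0.0.7   1.2.3.4   443         denied   3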
Ah, I knew I'd seen this asked before...
Sweet - nice optimisation
If you want multiple values in a single field, you could do this

| stats values(HOST) as HOST by SEVERITY
| eval HOST=mvjoin(HOST, ",")
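For instance, with made-up events like HOST=web01 SEVERITY=high, HOST=web02 SEVERITY=high, and HOST=db01 SEVERITY=low, the result would be:

SEVERITY   HOST
high       web01,web02
low        db01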
1) c doesn't exist unless it is a value in WriteType, and even then it will contain a count, not "test" or "qa".
2) No, you can only have two fields with chart.
Perhaps it would be better if you explained what you are trying to do and shared some representative, anonymised sample events? (I may have said that before a few times!)
We installed the agent as if it were a VM. We had to move the appdynamics_agent.ini file to the same folder where php.ini is, then rebuilt the image of the container and deployed it to the dev environment; finally, the controller recognized the agent and it started sending telemetry.
I want to build a query that pulls Cisco ASA events based on a particular syslog message ID which shows denied traffic. I dedup the information for events that have the same source IP, destination IP, destination port, and action. It seems to work well; however, I would now like a count added for each time that unique combination is seen. The query is:

index=cisco sourcetype=cisco:asa message_id=XXXXXX
| dedup host, src_ip, dest_ip, dest_port, action
| table host, src_ip, dest_ip, dest_port, action
| sort host, src_ip, dest_ip, dest_port, action

That query gives me a table that appears to be dedup'ed; however, I would like to add a column that shows how many times each entry is seen.
Currently this is a manual process for me: I swap our connections between our primary and secondary HFs for every patch window. Is this what everyone is doing, or is there a way to automate a cutover? Thanks for any insight!
When you have edited those files on disk, Splunk needs to be restarted, or at least refreshed, before those changes take effect. Look at the /debug/refresh URL for refreshing. When you are using the Lookup Editor app, there is no need to do either of those, as the app manages those actions internally. Just create a new lookup and, after you have saved it, it's ready for use.
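For example (host and port are placeholders for your Splunk Web address), loading this URL in a browser triggers the reload:

https://your-splunk-host:8000/en-US/debug/refresh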
Thanks for your response, bowesmana!  You've got me headed in the right direction.
Whenever I update/create a collections.conf or transforms.conf file manually, does Splunk need to be restarted (by an admin)? Same question if I use the Lookup Editor app - does Splunk need to be restarted (by an admin) after updating/creating collections.conf or transforms.conf? https://splunkbase.splunk.com/app/1724
I think once we have these answered, this post is solved. Thank you so much.