All Posts

In my environment, Palo Alto (proxy) logs are stored in Splunk. I want to determine what kind of operation on a server causes high-risk communication to the internet, by correlating the Palo Alto logs with Windows event logs, Linux audit logs, or similar sources. Is this possible with a Splunk Correlation Search?
I configured a search head cluster, configured a captain, and added the search heads to the indexer cluster. I now want to break up the SH cluster and have done the following so far, all from the CLI:

1. Removed the member that was not the captain; that went OK.
2. Tried to remove the other member; that didn't work. The command just hung for half an hour before I gave up and aborted it.
3. Tried to set the captain to static mode and did a clean raft, but still no luck.
4. Set disabled=1 in the shclustering stanza of server.conf, and this time it went OK, I guess. I now get the message that this node is not part of any cluster configuration.

Over to the indexer cluster, where I now want to get rid of the search heads, which still show up in the GUI as up and running. I ran the command splunk remove cluster-search-heads and it reported success, but the search heads are still there in the indexer clustering GUI. Some suggest this will go away after a few minutes, and certainly after a restart of the manager node. I have now waited a whole day and restarted, but they are still showing as up and running, with a green checkmark too. Where does the GUI get this information from, and how can I get rid of them?
@Kirantcs I'm having the same issue: I cannot add, modify, or delete existing inputs, although I can see the logs coming in from the inputs. Have you resolved the issue?
@deepakc Is there a way to ingest those logs from MongoDB Atlas into Splunk via API? Thanks in advance!
Hi @tscroggins, thanks for your answer. In the documentation I found this configuration:

minify_js = False
js_no_cache = True
cacheEntriesLimit = 0
cacheBytesLimit = 0
enableWebDebug = True

and it works. Sometimes it doesn't, but then I go to /debug/refresh, click the refresh button, and Splunk loads the new version of the JS file. But if you have a dashboard like this:

<dashboard script="MyScript.js">
  <search id="MySearch">
    <query>query that takes some time</query>
  </search>
  <row>
    <panel>
      <html>
        <button id="btn">Button</button>
      </html>
    </panel>
  </row>
</dashboard>

//MyScript.js
require(["jquery", "splunkjs/mvc", "splunkjs/mvc/simplexml/ready!"], function($, mvc) {
    $("#btn").on("click", function() {
        // js code
    });
});

Splunk will not load the jQuery part, but if you go to "Edit" -> "Source" -> "Cancel" without modifying anything in the dashboard source code, the JavaScript code works. So maybe the problem is caused by the search (id="MySearch") in the dashboard being executed asynchronously? I have read some posts on this topic but didn't find any solution. I have tried:

require(["jquery", "splunkjs/mvc", "splunkjs/mvc/simplexml/ready!"], function($, mvc) {
    $("#MySearch").on("search:done", function() {
        $("#btn").on("click", function() {
            // js code
        });
    });
});

but nothing.
Thanks Yuanliu. I ended up using some of the macro's logic directly in my search and it works:

| eval service_ids = $fields.ServiceID$
| eval maintenance_object_type = "service", maintenance_object_key = service_ids
| lookup operative_maintenance_log maintenance_object_type, maintenance_object_key OUTPUT _key as maintenance_log_key
| eval in_maintenance = if(isnull(maintenance_log_key), 0, 1)
| fields - maintenance_object_key, maintenance_object_type, maintenance_log_key
| where isnull(in_maintenance) OR (in_maintenance != 1)
| fields - in_maintenance
| mvcombine service_ids
| fields - service_ids

I tried a lot of variants of your suggestion to use the macro, but didn't find any that worked.
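For readers following along, the filtering logic this SPL implements can be sketched outside Splunk in a few lines of Python. The data here is hypothetical (invented service IDs and maintenance entries); it only mirrors the lookup-then-exclude steps of the search above.

```python
# Hypothetical maintenance lookup: (object_type, object_key) pairs that
# currently have a maintenance log entry.
maintenance_log = {("service", "svc-1"), ("service", "svc-3")}

service_ids = ["svc-1", "svc-2", "svc-3"]

# Mirror of: lookup -> in_maintenance flag -> where not in maintenance.
# A service survives only if no ("service", id) entry exists in the lookup.
active = [s for s in service_ids if ("service", s) not in maintenance_log]
```

Here `active` keeps only the services with no maintenance entry, which is exactly what the `where` clause achieves after the `lookup`.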
Let's first clarify your use case. Your attempted code suggests two implications:

1. You are trying to substitute a parameter in a macro, filter_maintenance_services(1); and
2. You are using this in a dashboard or in a map command, where $fields.ServiceID$ dereferences into a service ID such as e5095542-9132-402f-8f17-242b83710b66.

Are these correct? It seems that you have run into a quirk in that macro: it is written such that quotation marks are required to invoke it properly. (I've written a macro that behaves this way, and it took me a while to realize this requirement.) Try

| `filter_maintenance_services("\"$fields.ServiceID$\"")`

or some variant of this.
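The layered quoting in that macro call can be confusing, so here is a small Python sketch of what the escaping does (an illustration only; the actual unquoting is performed by Splunk's macro expansion, not by this code): the outer quotes delimit the macro argument, and each escaped \" pair becomes a literal quote wrapped around the ID.

```python
service_id = "e5095542-9132-402f-8f17-242b83710b66"

# What you type in SPL, as a raw character sequence: "\"<id>\""
arg_source = '"\\"' + service_id + '\\""'

# What the macro receives after the outer quotes are stripped and the
# escaped quotes are unescaped: a quoted ID, "<id>"
received = arg_source[1:-1].replace('\\"', '"')
```

So the macro body ends up seeing the service ID already wrapped in literal double quotes, which is what it requires.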
Hi @kulrajatwal How do I check the hex characters? I have the same issue in my Splunk instance, so I took the raw data file and ran your command against it. But I don't know how to find the invalid characters in my raw data. Could you explain in detail? What format does Splunk accept, and how do I fix my JSON?
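One way to locate suspect characters, sketched in Python (the sample byte string is invented for illustration): scan the raw bytes and report the offset and hex value of any raw control character below 0x20, which JSON strings do not allow unescaped. Tab, LF, and CR are excluded here since they usually separate records rather than corrupt them.

```python
# Hypothetical raw data containing a stray BEL (0x07) control character.
data = b'{"msg": "hello\x07world"}'

# Collect (offset, hex value) for raw control bytes that JSON forbids,
# ignoring tab (0x09), newline (0x0a), and carriage return (0x0d).
bad = [(i, hex(b)) for i, b in enumerate(data)
       if b < 0x20 and b not in (0x09, 0x0a, 0x0d)]
```

Each entry in `bad` tells you where to look in the file; the fix is usually to remove the character or escape it (e.g. as \u0007) before indexing.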
(Note: when giving sample data, use the code box.) Your log mixes plain text with structured JSON. So, the first task is to extract the JSON piece, then extract fields from the JSON using spath:

| rex "DNAC (?<json_msg>{.+})"
| spath input=json_msg

The description field from your sample data will contain this value:

description
Executing command terminal width 0
config t
Failed to fetch the preview commands.

Here is an emulation of your sample data. Play with it and compare with real data:

| makeresults
| eval _raw = "Oct 22 14:20:45 10.5.0.200 DNAC {\"version\":\"1.0.0\",\"instanceId\":\"20fd8163-4ca8-424b-a5a9-1e4018372abb\",\"eventId\":\"AUDIT_LOG_EVENT\",\"namespace\":\"AUDIT_LOG\",\"name\":\"AUDIT_LOG\",\"description\":\"Executing command terminal width 0\\nconfig t\\nFailed to fetch the preview commands.\\n\",\"type\":\"AUDIT_LOG\",\"category\":\"INFO\",\"domain\":\"Audit\",\"subDomain\":\"\",\"severity\":1,\"source\":\"NA\",\"timestamp\":1729606845043,\"details\":{\"requestPayloadDescriptor\":\"terminal width 0\\nconfig t\\nFailed to fetch the preview commands.\\n\",\"requestPayload\":\"\\n\"},\"ciscoDnaEventLink\":null,\"note\":null,\"tntId\":\"630db6e989269c11640abd49\",\"context\":null,\"userId\":\"system\",\"i18n\":null,\"eventHierarchy\":{\"hierarchy\":\"20fd8163-4ca8-424b-a5a9-1e4018372abb\",\"hierarchyDelimiter\":\".\"},\"message\":null,\"messageParams\":null,\"additionalDetails\":{\"eventMetadata\":{\"auditLogMetadata\":{\"type\":\"CLI\",\"version\":\"1.0.0\"}}},\"parentInstanceId\":\"9dde297d-845e-40d0-aeb0-a11e141f95b5\",\"network\":{\"siteId\":\"\",\"deviceId\":\"10.7.140.2\"},\"isSimulated\":false,\"startTime\":1729606845055,\"dnacIP\":\"10.5.0.200\",\"tenantId\":\"SYS0\"}"
``` data emulation above ```
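The extract-then-parse step the rex/spath pair performs can be mirrored in Python, which is handy for sanity-checking a sample line outside Splunk. The raw line below is a shortened, hypothetical version of the sample event, not the full payload.

```python
import json
import re

# Shortened hypothetical sample line: syslog prefix + "DNAC " + JSON payload.
raw = ('Oct 22 14:20:45 10.5.0.200 DNAC '
       '{"description": "Executing command terminal width 0\\nconfig t\\n", '
       '"severity": 1}')

# Mirror of: | rex "DNAC (?<json_msg>{.+})"
m = re.search(r"DNAC (\{.+\})", raw)

# Mirror of: | spath input=json_msg
event = json.loads(m.group(1))
```

After parsing, `event["description"]` holds the multi-line command text with the \n escapes resolved into real newlines, just as spath would expose it.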
See the syntax help for lookup (the lookup table or definition name comes first). This is what I suggest:

| lookup test.csv column1 AS field1 OUTPUT column1 AS match
| where isnotnull(match)
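Conceptually, this lookup-and-filter behaves like a set-membership test: keep only the events whose field value appears in the lookup file. A Python sketch with invented data (the `test.csv` contents and events here are hypothetical):

```python
import csv
import io

# Hypothetical contents of test.csv with a single column, column1.
test_csv = "column1\napple\nbanana\n"
lookup = {row["column1"] for row in csv.DictReader(io.StringIO(test_csv))}

# Hypothetical events, each with a field1 value to test against the lookup.
events = [{"field1": "apple"}, {"field1": "cherry"}]

# Mirror of: lookup ... OUTPUT column1 AS match | where isnotnull(match)
matched = [e for e in events if e["field1"] in lookup]
```

Only the event whose `field1` exists in the lookup survives, which is exactly what `where isnotnull(match)` does after the lookup populates `match`.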
Most likely there's a line breaking problem. The documentation is Configure event line breaking (and the entire Configure event processing chapter). You would also get better discussion in the Getting Data In forum.
Hi @Robwhoa78, good for you, see you next time! Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
First things first: what is a "sub event"? How do you get a "subevent"? How do you count a "subevent"? Secondly, please construct your desired output like a real table (e.g., by using the table template above, crafting an HTML table, or some other means suitable for you). The illustration you gave is not even aligned and is impossible to interpret.
Hello, For example, I have two lookups, first.csv and second.csv.

first.csv has one column, named fruit_name, with multiple values:

fruit_name
apple
banana
melon
mango
grapes
guyabano
coconut

second.csv has two columns, fruits and remarks, with multiple values under the fruits column:

fruits: apple, mango, guyabano
remarks: visible

How can I check whether all the values in second.csv's fruits column (apple, mango, guyabano) are present in the fruit_name column of first.csv, and if they are, output the remarks value "visible"? Thanks in advance
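The check being asked for is a subset test: does every fruit in second.csv appear in first.csv's fruit_name column? A Python sketch using the CSV contents from the question (the in-memory strings stand in for the actual lookup files):

```python
import csv
import io

# Contents of first.csv as given in the question.
first_csv = "fruit_name\napple\nbanana\nmelon\nmango\ngrapes\nguyabano\ncoconut\n"
fruit_names = {row["fruit_name"] for row in csv.DictReader(io.StringIO(first_csv))}

# The multivalue fruits column of second.csv, plus its remarks value.
second_fruits = {"apple", "mango", "guyabano"}

# Emit the remark only when every fruit in second.csv exists in first.csv.
remark = "visible" if second_fruits <= fruit_names else None
```

In SPL you would typically express the same idea by counting matches from a lookup against the number of values and comparing, but the underlying logic is this subset test.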
This looks like a new bug in 9.3
Hi Team, Due to an SSL cert issue, the Database Queries tab is not loading; we are working on that. The customer is asking us to fetch the following data: the query, the time it executed, the time it took to complete, etc. Is there any way to get this data from the database directly? Which database holds the query data, and what is the path to it? Please share the DB and table names so we can export the data from the database. Thanks
Hello Splunkers!! In a scheduled search within Splunk, we have set up email notifications with designated recipients. However, there is an intermittent issue where recipients do not consistently receive the scheduled search email. To address this, we need to determine whether there is a way within Splunk to verify that the recipients successfully received the email notifications. Please help me identify how to address and check this in Splunk.

index=_internal source=*splunkd.log sendemail

I have tried the search above, but it does not provide information about the recipients' email addresses.
Note: I'm using:

1. Splunk Enterprise version: 9.3.1
2. Enterprise Security version: 7.3.2

According to this documentation: https://docs.splunk.com/Documentation/VersionCompatibility/current/Matrix/CompatMatrix these versions should be compatible, but I have no idea why this is happening.
Hi, I got an error after completing the setup of Enterprise Security in my lab. At first I was using Windows, but whenever I tried to set up Enterprise Security I always got:

Error in 'essinstall' command: (InstallException) "install_apps" stage failed - Splunkd daemon is not responding: ('Error connecting to /services/admin/localapps: The read operation timed out',)

Then I did a fresh install of Splunk Enterprise in WSL (in my case Ubuntu 22); it installed successfully and everything worked normally. After that, I tried installing Enterprise Security again. This time I got a success notification when setting up Enterprise Security via the web GUI, but unfortunately, after the restart completed, I can't open Splunk Enterprise. I cannot see any errors in my CLI output; that's why I'm asking here. Maybe somebody can help me?
Hello @Cheng2Ready The global time range picker cannot be applied to saved searches in Dashboard Studio, since each saved search has its own predefined time range. Unlike in Classic Dashboards, when you reference a saved search in Studio, it will always use its own time range settings, ignoring any global time range selection. For your use case, I recommend:

1. Schedule a report with your required metrics.
2. Use the | collect command to store the results in a new index.
3. Create a new role for third-party access that only has permissions for this new index.

Optionally, you can:

1. Disable specific capabilities for this role.
2. Restrict access to only the required dashboard.

This approach helps maintain security by avoiding direct access to the original index. If this reply helps you, please upvote.