All Posts

Hello Splunkers, I'm working with the latest version of Splunk Add-on Builder to index data from a REST API. The TA only pulls the first page of results by calling:

https://mywebpage.com/api/source/v2

At the bottom of the pulled data is a URL for the next page:

"next_url" : "/api/source/v2?last=5431"

How do I configure the TA to iterate through all the pages? I checked the link below, but I don't understand how (or whether it is possible) to pass the variable from the modular input to my endpoint like this, or in some other way:

https://mywebpage.com/api/source/v2?last=${next_url}

https://docs.splunk.com/Documentation/AddonBuilder/4.3.0/UserGuide/ConfigureDataCollection#Pass_values_from_data_input_parameters

Any ideas? Thanks!
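What I had in mind is something along these lines, a rough sketch of the custom Python data input, assuming the Add-on Builder helper API (helper.send_http_request, helper.new_event) and guessing that the payload carries its records under a "results" key (that key name is my assumption, not something I've confirmed):

import json

def collect_events(helper, ew):
    base_url = "https://mywebpage.com"
    url = base_url + "/api/source/v2"
    while url:
        # fetch one page through the Add-on Builder helper
        response = helper.send_http_request(url, "GET", use_proxy=False)
        response.raise_for_status()
        page = response.json()
        # write each record as its own event ("results" is an assumed key name)
        for record in page.get("results", []):
            event = helper.new_event(data=json.dumps(record))
            ew.write_event(event)
        # next_url is relative, so prepend the base; stop when it disappears
        next_url = page.get("next_url")
        url = base_url + next_url if next_url else None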
It is usually easier for us to help you when you show us what events you are working with, but in lieu of that, assuming you have events with the fields Work_Month_week, "work day of week", and "Number of work hours", you could try something like this:

| table Work_Month_week, "work day of week", "Number of work hours"
| eventstats count as total_week_day sum("Number of work hours") as Week_total by Work_Month_week
| eval "percent work hours"=100*'Number of work hours'/Week_total
The first thing you need is an understanding of your data. It is your data; we do not have access to it and do not know what data you have, so it is difficult for us to determine what information you might be able to extract from it. Secondly, risk is subjective. What do you deem to be high risk? What evidence do you have in your logs (that are now in Splunk) that might help you determine whether something is "risky"? Open questions such as the ones you have posed only lead to more questions.
How have you set up your CSV files so that they have multivalue fields?
After finishing the install, don't use SSL and you will get Enterprise Security working perfectly, my friend. Thanks!
Hi Community, I solved the problem with outlier detection. Thanks all for your support!

source="/var/log/livingroom.json"
| streamstats window=60 current=true avg("temperature_celsius") as avg stdev("temperature_celsius") as stdev
| eval lowerBound=(avg-stdev*exact(5)), upperBound=(avg+stdev*exact(5))
| eval isOutlier=if('temperature_celsius' < lowerBound OR 'temperature_celsius' > upperBound, 1, 0)
| where isOutlier=0
| timechart eval(round(avg('temperature_celsius'),1)) AS "Temperature"
I am looking to replace a sourcetype using props.conf / transforms.conf, so far with no luck.

props.conf:

[original_sourcetype]
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = false
TIME_PREFIX = oldtimeprefix
TIME_FORMAT = oldtimeformat
pulldown_type = 1
TRANSFORMS-set_new = set_new_sourcetype

[new_sourcetype_with_new_timeformat]
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = false
TIME_PREFIX = newtimeprefix
TIME_FORMAT = newtimeformat
pulldown_type = 1
#rename = original_sourcetype

transforms.conf:

[set_new_sourcetype]
SOURCE_KEY = MetaData:Source
REGEX = ^source::var/log/path/tofile.log
FORMAT = sourcetype::new_sourcetype_with_new_timeformat
DEST_KEY = MetaData:Sourcetype

I tried different REGEXes, including:

REGEX = var/log/path/tofile.log

I also tried setting it like this in props.conf:

[source::var/log/path/tofile.log]
TRANSFORMS-set_new = set_new_sourcetype

I am also looking at inputs.conf, which has monitoring stanzas for all syslog traffic; perhaps some blacklisting/whitelisting based on source can be done there. But I am curious as to what is not working with my props/transforms. Thanks!
In my environment, Palo Alto (proxy) logs are being stored in Splunk. I want to know what kind of operation on a server creates high-risk communication to the internet, using Palo Alto logs together with Windows event logs, Linux audit logs, or similar sources. Is this possible with a Correlation Search in Splunk?
I configured a search head cluster, designated a captain, and added the search heads to the indexer cluster. I now want to break up the shcluster and have done the following so far, all from the CLI:

- Removed the member that was not the captain; that went OK.
- Tried to remove the other member; that didn't work, the command just hung for half an hour before I gave up and aborted it.
- Tried to set the captain in static mode and did a clean raft, but still no luck.
- Configured disabled=1 in the [shclustering] stanza of server.conf, and this time it went OK, I guess; I now get the message that this node is not part of any cluster configuration.

Over to the indexer cluster, where I now want to get rid of the search heads, which are still showing up in the GUI as up and running. I ran the command splunk remove cluster-search-heads and it reported success, but the search heads are still there in the indexer clustering GUI. Some suggest this will go away after a few minutes, and that after a restart of the manager node it will certainly go away. I have now waited a whole day and restarted, but they are still showing as up and running, with a green checkmark too. Where does the GUI get its information from, and how can I get rid of them?
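For reference, the sequence I ran was roughly this (a sketch from my notes; the captain URI is a placeholder and the exact flags may differ):

# on the non-captain member (went OK)
splunk remove shcluster-member

# on the remaining member: static captain plus raft clean (no luck)
splunk edit shcluster-config -mode captain -captain_uri https://sh1.example.com:8089 -election false
splunk clean raft

# finally, disabled clustering in server.conf on the member:
# [shclustering]
# disabled = 1

# on the cluster manager (reported success, GUI unchanged)
splunk remove cluster-search-heads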
@Kirantcs I'm having the same issue: I cannot add/modify/delete existing inputs, although I can see the logs coming in from the inputs. Have you resolved the issue?
@deepakc Is there a way to ingest those logs from MongoDB Atlas to Splunk via the API? Thanks in advance!
Hi @tscroggins, thanks for your answer. In the documentation I found this configuration:

minify_js = False
js_no_cache = True
cacheEntriesLimit = 0
cacheBytesLimit = 0
enableWebDebug = True

and it works. Sometimes it doesn't, but then I go to /debug/refresh, click the refresh button, and Splunk loads the new version of the JS file. But if you have a dashboard like this:

<dashboard script="MyScript.js">
  <search id="MySearch">
    <query>
      query that takes some time
    </query>
  </search>
  <row>
    <panel>
      <html>
        <button id="btn">Button</button>
      </html>
    </panel>
  </row>
</dashboard>

//MyScript.js
require(["jquery", "splunkjs/mvc", "splunkjs/mvc/simplexml/ready!"], function($, mvc) {
    $("#btn").on("click", function() {
        // js code
    });
});

Splunk will not load the jQuery part. But if you go to "edit" -> "source" -> "cancel" without modifying anything in the dashboard source code, the JavaScript code works. So maybe the problem is caused by the search (id="MySearch") in the dashboard being executed asynchronously? I have read some posts on this topic but didn't find any solution. I have tried:

require(["jquery", "splunkjs/mvc", "splunkjs/mvc/simplexml/ready!"], function($, mvc) {
    $("#MySearch").on("search:done", function() {
        $("#btn").on("click", function() {
            // js code
        });
    });
});

but nothing.
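I also wonder whether I should bind through the search manager instead of a DOM selector, something like this (an untested sketch on my part; mvc.Components.get looks up the manager by the search's XML id):

require(["jquery", "splunkjs/mvc", "splunkjs/mvc/simplexml/ready!"], function($, mvc) {
    // get the SearchManager registered under the XML id, not a DOM element
    var mySearch = mvc.Components.get("MySearch");
    mySearch.on("search:done", function() {
        $("#btn").on("click", function() {
            // js code
        });
    });
});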
Thanks Yuanliu. I ended up using some of the macro's logic directly in my search and it works:

| eval service_ids = $fields.ServiceID$
| eval maintenance_object_type = "service", maintenance_object_key = service_ids
| lookup operative_maintenance_log maintenance_object_type, maintenance_object_key OUTPUT _key as maintenance_log_key
| eval in_maintenance = if(isnull(maintenance_log_key), 0, 1)
| fields - maintenance_object_key, maintenance_object_type, maintenance_log_key
| where isnull(in_maintenance) OR (in_maintenance != 1)
| fields - in_maintenance
| mvcombine service_ids
| fields - service_ids

I tried a lot of variants of your suggestion to use the macro but didn't find any that worked.
Let's first clarify your use case. Your attempted code suggests two implications:

1. You are trying to substitute a parameter in a macro, filter_maintenance_services(1); and
2. You are using this in a dashboard or a map command, where $fields.ServiceID$ dereferences into a service ID such as e5095542-9132-402f-8f17-242b83710b66.

Are these correct? It seems you have run into a quirk in that macro: it is written such that quotation marks are required to invoke it properly. (I've written a macro that behaves this way, and it took me a while to realize this requirement.) Try

| `filter_maintenance_services("\"$fields.ServiceID$\"")`

or some variant of this.
Hi @kulrajatwal, how do I check the hex chars? I have the same issue in my Splunk, so I took the raw data file and ran your command against it, but I don't know how to spot the invalid chars in my raw data. Could you explain in detail? What is Splunk's accepted format, and how do I fix my JSON?
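Is scanning the file for stray control bytes something like this the right idea? (A rough sketch in Python; raw_data.json is just a placeholder name for my file.)

# flag bytes below 0x20 that aren't tab, newline, or carriage return
with open("raw_data.json", "rb") as f:
    data = f.read()

for offset, byte in enumerate(data):
    if byte < 0x20 and byte not in (0x09, 0x0A, 0x0D):
        print(f"control byte 0x{byte:02x} at offset {offset}")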
(Note: When giving sample data, use the code box.) Your log mixes plain text with structured JSON, so the first task is to extract the JSON piece, then extract fields from the JSON using spath:

| rex "DNAC (?<json_msg>{.+})"
| spath input=json_msg

description from your sample data will contain this value:

description =
Executing command terminal width 0
config t
Failed to fetch the preview commands.

Here is an emulation of your sample data. Play with it and compare with real data:

| makeresults
| eval _raw = "Oct 22 14:20:45 10.5.0.200 DNAC {\"version\":\"1.0.0\",\"instanceId\":\"20fd8163-4ca8-424b-a5a9-1e4018372abb\",\"eventId\":\"AUDIT_LOG_EVENT\",\"namespace\":\"AUDIT_LOG\",\"name\":\"AUDIT_LOG\",\"description\":\"Executing command terminal width 0\\nconfig t\\nFailed to fetch the preview commands.\\n\",\"type\":\"AUDIT_LOG\",\"category\":\"INFO\",\"domain\":\"Audit\",\"subDomain\":\"\",\"severity\":1,\"source\":\"NA\",\"timestamp\":1729606845043,\"details\":{\"requestPayloadDescriptor\":\"terminal width 0\\nconfig t\\nFailed to fetch the preview commands.\\n\",\"requestPayload\":\"\\n\"},\"ciscoDnaEventLink\":null,\"note\":null,\"tntId\":\"630db6e989269c11640abd49\",\"context\":null,\"userId\":\"system\",\"i18n\":null,\"eventHierarchy\":{\"hierarchy\":\"20fd8163-4ca8-424b-a5a9-1e4018372abb\",\"hierarchyDelimiter\":\".\"},\"message\":null,\"messageParams\":null,\"additionalDetails\":{\"eventMetadata\":{\"auditLogMetadata\":{\"type\":\"CLI\",\"version\":\"1.0.0\"}}},\"parentInstanceId\":\"9dde297d-845e-40d0-aeb0-a11e141f95b5\",\"network\":{\"siteId\":\"\",\"deviceId\":\"10.7.140.2\"},\"isSimulated\":false,\"startTime\":1729606845055,\"dnacIP\":\"10.5.0.200\",\"tenantId\":\"SYS0\"}"
``` data emulation above ```
See the syntax help for lookup (the lookup table name comes before the field mapping). This is what I suggest:

| lookup test.csv column1 AS field1 OUTPUT column1 AS match
| where isnotnull(match)
Most likely there's some line breaking problem. The documentation is Configure event line breaking (and the entire Configure event processing chapter). You would also get better discussion in the Getting Data In forum.
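For illustration, line breaking is normally controlled with props.conf settings like these (a minimal sketch; the sourcetype name and values are placeholders to adapt to your data, not a fix for your specific events):

[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25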
Hi @Robwhoa78, good for you, see you next time! Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated.
First things first: what is a "sub event"? How do you get a "subevent"? How do you count "subevents"? Secondly, please construct your desired output like a real table (e.g., by using the table template above, crafting an HTML table, or some other means that suits you). The illustration you gave is not even aligned and is impossible to interpret.