All Posts


Hello, I have a dashboard with the following inputs:

<fieldset submitButton="true" autoRun="false">
  <input type="dropdown" token="tok1" searchWhenChanged="false">
    <label>Tok1</label>
    <choice value="All">*</choice>
    <choice value="Active">Y</choice>
    <choice value="Inactive">N</choice>
    <prefix>Status="</prefix>
    <suffix>"</suffix>
    <default>*</default>
    <change>
      <condition value="All">
        <set token="tok1"></set>
      </condition>
      <condition>
        <eval token="tok1">"\" AND upper(STATUS)=upper(\'" + $value$ + "\')\""</eval>
      </condition>
    </change>
  </input>
  <input type="text" token="tok2" searchWhenChanged="false">
    <label>UserID</label>
    <default></default>
    <change>
      <condition>
        <eval token="tok2">if(match($value$,"\\w")," AND UserID=\"*" + upper($value$) + "*\"", "")</eval>
      </condition>
    </change>
  </input>
</fieldset>

These two example tokens are used in a panel whose query is just

| search * $tok1$ $tok2$

because it refers to another search as its base query. The problem is the following: I have a submit button, so I expect that changes to the fields do not trigger the search until I press Submit. What happens instead is that if I change the value of tok1, the search starts, and if I change the value of tok2 and then click outside of the text box, the search starts. In both cases the submit button is bypassed. If I remove the tok1 and tok2 manipulations in the <change> tag, everything works as expected, so I guess the issue is caused by this tag, but I cannot understand the flow Splunk goes through to decide to bypass the submit button. Thank you very much to anyone who can help me. Have a nice day!
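One way to sidestep the bypass, at the cost of the case-insensitive STATUS match, is to avoid <change> handlers entirely and encode the filter in the choice values with prefix/suffix. This is only a sketch built from the tokens in the post, and it assumes STATUS holds Y/N values:

<input type="dropdown" token="tok1" searchWhenChanged="false">
  <label>Tok1</label>
  <!-- no <change> handler: the filter lives in the choice value itself -->
  <choice value="*">All</choice>
  <choice value="Y">Active</choice>
  <choice value="N">Inactive</choice>
  <prefix>STATUS="</prefix>
  <suffix>"</suffix>
  <default>*</default>
</input>

With no token manipulation in a change handler, the token should only be committed to the search when Submit is pressed.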
Hi! I am faced with the following problem. I need to filter the logs that I receive from a source. I get the logs via a heavy forwarder, using the following config:

inputs.conf

[udp://4514]
index = mylogs
sourcetype = mylogs:leef

Before writing regexes I tried the following configuration:

props.conf

[mylogs:leef]
TRANSFORMS-null = setnull_mylogs

transforms.conf

[setnull_mylogs]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

But it is not working; I am still receiving all events in the indexes. These conf files are stored in <heavy_folder>/etc/apps/<app_name>/local. Maybe I need to use another stanza name in props, but I tried [source::udp://4514] and it did not work either. Any ideas? My goal is then to write a few regexes and receive only the useful logs from this source. Thank you.
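For reference, the pattern Splunk documents for keeping only selected events and discarding the rest is a catch-all nullQueue transform followed by a second transform that routes the wanted events back to the index queue, applied in that order on the first heavy forwarder that parses the data. The second stanza name and its REGEX below are placeholders to be replaced with whatever identifies the useful events:

props.conf

[mylogs:leef]
TRANSFORMS-filter = setnull_mylogs, setparsing_mylogs

transforms.conf

[setnull_mylogs]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing_mylogs]
REGEX = pattern_that_matches_useful_events
DEST_KEY = queue
FORMAT = indexQueue

The heavy forwarder needs a restart after the change.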
Hi, we are getting the error below on the machines running the Network Toolkit app. It's affecting data forwarding to Splunk Cloud. Please help.

0000 ERROR ExecProcessor [5441 ExecProcessorSchedulerThread] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/network_tools/bin/ping.py"
self.logger.warn("Thread limit has been reached and thus this execution will be skipped for stanza=%s, thread_count=%i", stanza, len(self.threads))

Thanks!
Hi all, I recently updated the NetSkope Add-on For Splunk (TA-NetSkopeAppForSplunk) from version 3.1.2 to version 3.6.0 in my Splunk Cloud environment (version 9.1.2308.203). I followed the steps outlined in the Splunkbase upgrade guide, but I'm having trouble getting my data into Splunk for Web Transactions V2. I had the V2 token set up by the Netskope administrator with the proper permissions, but I get the following error when setting up the data input:

Error occurred while validating Token V2 parameter: 'status''.

Has anyone had the same issue? Thanks in advance.
Hello, thanks for the clarification. Your solution sends all the events that match (?ms)EventCode=(4624|4634|4625)\s+.*\.adm to a specific index, but what if I want to send the non-matching events to another index?
It looks like you have an eval for ProcessMsg, immediately followed by a stats command which overwrites the same field - is this your issue?
@Prathyusha891 - FYI, splunklib doesn't come built in with Splunk. In your app you need to include splunklib explicitly, typically in the bin folder of your app:

pip install splunk-sdk --target ./bin

I hope this helps!!! Kindly upvote if it does!!!
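As a minimal sketch, a script placed in that same bin folder could then import the bundled library like this, assuming the pip command above put splunklib under bin/ (the path handling is illustrative):

import os
import sys

# make sure the folder containing this script (and the bundled splunklib) is on the path
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

import splunklib.client as client  # resolves to <app>/bin/splunklib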
Change your time picker to the time period you want.
This can be accomplished with props and transforms. On your indexer machines, make the following files with these stanzas (whether through cluster bundle pushes or direct editing):

props.conf

# Put this stanza in props.conf. Here your source field for the logs is assumed to be "WinEventLog://Security"
[source::WinEventLog://Security]
TRANSFORMS-anynamegoeshere = yourtransformname

# If you would like to apply the filter to a sourcetype, you can also do this:
[<yoursourcetype>]
TRANSFORMS-anynamegoeshere = yourtransformname

transforms.conf

# Put this in transforms.conf
[yourtransformname]
REGEX = (?ms)EventCode=(4624|4634|4625)\s+.*\.adm
FORMAT = index2
DEST_KEY = _MetaData:Index
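On the follow-up question about the non-matching events: a transform like this only rewrites the index for events that match its REGEX; everything else keeps whatever index the input assigned. So one option, sketched below with the index names already used in this thread, is to set the default index for the input and let the transform override only the matches:

# inputs.conf on the forwarder - this index is the default for every event from the input
[WinEventLog://Security]
index = index1

# events matching the transform's REGEX have their index rewritten to the one named in FORMAT
# at parse time; all other events stay in index1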
Hi guys, I am trying to fetch details using stats. In this query I derive a Status from the conditions below and populate it in a table. ProcessMsg has values in some events, but for the failure conditions I add a message to the result instead, so I used coalesce to map both into one field for the table. However, the field does not populate in the table. What mistake did I make here?

index="mulesoft" applicationName="ext" environment=DEV (*End of GL-import flow*) OR (message="GLImport Job Already Running, Please wait for the job to complete*") OR (message="process - No files found for import to ISG")
| rename content.File.fstatus as Status
| eval Status=case(like('Status',"%SUCCESS%"),"SUCCESS", like('Status',"%ERROR%"),"ERROR", like('message',"%process - No files found for import to ISG%"),"ERROR", like('message',"GLImport Job Already Running, Please wait for the job to complete"),"WARN")
| eval ProcessMsg=coalesce(ProcessMsg, message)
| stats values(content.File.fid) as "TransferBatch/OnDemand" values(content.File.fname) as "BatchName/FileName" values(content.File.fprocess_message) as ProcessMsg values(Status) as Status values(content.File.isg_file_batch_id) as OracleBatchID values(content.File.total_rec_count) as "Total Record Count" by correlationId
| table Status Start_Time "TransferBatch/OnDemand" "BatchName/FileName" ProcessMsg OracleBatchID "Total Record Count" ElapsedTimeInSecs "Total Elapsed Time" correlationId
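Following the earlier reply that the stats values(...) as ProcessMsg overwrites the eval, one possible reordering, as a sketch using only the field names from the original query, is to carry both source fields through stats and do the coalesce afterwards:

... base search and the Status eval as above ...
| stats values(content.File.fid) as "TransferBatch/OnDemand" values(content.File.fname) as "BatchName/FileName" values(content.File.fprocess_message) as fprocess_message values(message) as message values(Status) as Status values(content.File.isg_file_batch_id) as OracleBatchID values(content.File.total_rec_count) as "Total Record Count" by correlationId
| eval ProcessMsg=coalesce(fprocess_message, message)
| table Status "TransferBatch/OnDemand" "BatchName/FileName" ProcessMsg OracleBatchID "Total Record Count" correlationId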
@kidderjc - I'm no Java expert, but based on my past experience with log4j to Splunk HEC: if Splunk fails for some reason, your solution will encounter a memory issue and may crash. My recommendation: store logs to log files on the server and use a Splunk UF to forward the logs to the Splunk indexers.

I hope this helps!!!
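As a sketch of that setup, the Universal Forwarder side can be as simple as a monitor stanza; the path, index, and sourcetype below are placeholders:

# inputs.conf on the Universal Forwarder
[monitor:///var/log/myapp/application.log]
index = myapp
sourcetype = myapp:log4j
disabled = 0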
You are correct in saying that Splunk no longer automatically extracts the fields with a new custom sourcetype. Splunk does attempt field extractions when there are <key>=<value> patterns in the data, but that does not seem to be the case in these logs. You could try using SEDCMD to change the logs to be formatted like Apache HTTP logs and then set the sourcetype to the standard Apache HTTP log sourcetype; then it should work. I also recommend getting the Apache Web Server app, as it has knowledge objects for Apache HTTP logs. https://splunkbase.splunk.com/app/3186
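If you go the SEDCMD route, a props.conf sketch could look like the one below; the sourcetype stanza name and the sed expression are placeholders to be adapted to the actual log layout:

# props.conf for the custom sourcetype
[my_custom_weblog]
# rewrite each event into an Apache-combined-style line (expression is illustrative only)
SEDCMD-apacheify = s/^(\S+)\s*\|\s*(\S+)\s*\|\s*(.*)$/\1 - - \3/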
I recommend setting SHOULD_LINEMERGE to false so that Splunk does not try to re-combine your events together.
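In props.conf that could look like the following, assuming each event is a single line ending in a newline (the sourcetype stanza name is the same placeholder as above):

[my_custom_weblog]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)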
tcp start and end are not supposed to be mapped to the Network Sessions data model (CIM), according to Splunk: https://docs.splunk.com/Documentation/CIM/5.3.1/User/NetworkSessions: "The fields in the Network Sessions data model describe Dynamic Host Configuration Protocol (DHCP) and Virtual Private Network (VPN) traffic, whether server:server or client:server, and network infrastructure inventory and topology." Globalprotect logs should be the ones mapped to Network Sessions - VPN.
Sorry, no, I did not find a solution; the requirement changed and we shifted gears.
How do I write a query to get data like this?

Branch 1 🟢 🟢
Branch 2 🟢🟢🟢
Branch 3 🟢 🟢
Branch 4 🟢🟢🟢
...

Here Branch is the actual branch; a green circle represents a successful build, red a failed build, and black an aborted build (the most recent 5 build statuses).
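One possible SPL sketch for this kind of table, assuming each build event carries fields named branch and build_status with values SUCCESS, FAILURE, and ABORTED (the index and all field names here are assumptions):

index=builds
| sort 0 -_time
| streamstats count as build_rank by branch
| where build_rank<=5
| eval icon=case(build_status=="SUCCESS","🟢", build_status=="FAILURE","🔴", build_status=="ABORTED","⚫")
| sort 0 branch build_rank
| stats list(icon) as "Recent 5 builds" by branch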
We have multiple firewalls at different locations; each location has a syslog collector server that forwards the logs to the Splunk indexer. For the last hour:

pan:traffic - 27,644,629 events (83%)
pan:threat - 3,224,543 events (9.77%)
pan:firewall_cloud - 2,034,183 events (6.18%)

This looks like over-utilization, so we want to validate whether the logs we are receiving are legitimate, and we are planning to reduce the consumption of firewall logs. Please guide me on how to validate the firewall logs we are receiving: are they the correct logs, and is anything excessive or not needed?
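To see which sourcetypes and collector hosts are driving the volume before deciding what to filter, one sketch against the license usage log (this assumes access to the _internal index on the license manager, and that the sourcetypes are named as above) is:

index=_internal source=*license_usage.log type=Usage (st=pan:traffic OR st=pan:threat OR st=pan:firewall_cloud)
| stats sum(b) as bytes by st, h
| eval GB=round(bytes/1024/1024/1024, 2)
| sort - GB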
Yes, you are correct. We are working with the Windows team, but we are also looking for a solution in the forum.
Nice! It is almost what I need and expect. Just give me one more hint regarding _time: I want to show data from the past, from last Monday between 9am and 5pm.
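One way to do that, assuming the search already spans at least the last seven days, is to derive the weekday and hour from _time and filter on them (a sketch, not tied to any particular base search):

... your existing search earliest=-7d@d latest=now
| eval wday=strftime(_time, "%A"), hour=tonumber(strftime(_time, "%H"))
| where wday=="Monday" AND hour>=9 AND hour<17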
Hello, I'm having some problems filtering standard Windows events. My goal is to send the events coming from my UFs to two different indexes based on the user. If the user ends with ".adm" the index should be index1, otherwise index2. Here is my regex for filtering: https://regex101.com/r/PsEHIp/1 . I put it in inputs.conf:

###### OS Logs ######
[WinEventLog://Security]
disabled = 0
index = index1
followTail = true
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist = (?ms)EventCode=(4624|4634|4625)\s+.*\.adm
renderXml = false