Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

@Prathyusha891 - FYI, splunklib doesn't come built in with Splunk. You need to ship splunklib explicitly inside your app, usually in the app's bin folder:

pip install splunk-sdk --target ./bin

I hope this helps!!! Kindly upvote if it does!!!
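In a bit more detail, a minimal sketch, assuming your app is named my_app (hypothetical) and lives under $SPLUNK_HOME/etc/apps:

cd $SPLUNK_HOME/etc/apps/my_app
pip install splunk-sdk --target ./bin
# scripts in bin/ can now "import splunklib.client", because Python puts
# the script's own directory (bin/) on the import path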
Change your time-picker to be the time period you want
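If the period needs to live in the search string itself rather than the picker, standard time modifiers work too; for example, last Monday 9am to 5pm (an illustrative sketch: -1w@w1 snaps to last Monday at midnight, and +9h/+17h offset to 9am and 5pm):

earliest=-1w@w1+9h latest=-1w@w1+17h your_search_terms_here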
This can be accomplished with props and transforms. On your indexer machines, create the following stanzas (whether through cluster bundle pushes or direct editing):

props.conf

# Put this stanza in props.conf. Here your source field for the logs is assumed to be "WinEventLog://Security"
[source::WinEventLog://Security]
TRANSFORMS-anynamegoeshere = yourtransformname

# If you would like to apply the filter to a sourcetype instead, you can do this:
[<yoursourcetype>]
TRANSFORMS-anynamegoeshere = yourtransformname

transforms.conf

# Put this in transforms.conf
[yourtransformname]
REGEX = (?ms)EventCode=(4624|4634|4625)\s+.*\.adm
FORMAT = index2
DEST_KEY = _MetaData:Index
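Once deployed, you can confirm an indexer actually picked the stanzas up with btool (a standard Splunk CLI check; the stanza names below match the placeholders used above):

$SPLUNK_HOME/bin/splunk btool transforms list yourtransformname --debug
$SPLUNK_HOME/bin/splunk btool props list "source::WinEventLog://Security" --debug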
Hi guys, I am trying to fetch details using stats. In this query I am trying to derive a status from the conditions below and populate it in a table. ProcessMsg has some values, but for failure conditions I add a message to the result, so I used coalesce to map both into the same field for the table. But I am not able to populate the result. What mistake did I make here?

index="mulesoft" applicationName="ext" environment=DEV (*End of GL-import flow*) OR (message="GLImport Job Already Running, Please wait for the job to complete*") OR (message="process - No files found for import to ISG")
| rename content.File.fstatus as Status
| eval Status=case(like('Status',"%SUCCESS%"),"SUCCESS", like('Status',"%ERROR%"),"ERROR", like('message',"%process - No files found for import to ISG%"),"ERROR", like('message',"GLImport Job Already Running, Please wait for the job to complete"),"WARN")
| eval ProcessMsg=coalesce(ProcessMsg,message)
| stats values(content.File.fid) as "TransferBatch/OnDemand" values(content.File.fname) as "BatchName/FileName" values(content.File.fprocess_message) as ProcessMsg values(Status) as Status values(content.File.isg_file_batch_id) as OracleBatchID values(content.File.total_rec_count) as "Total Record Count" by correlationId
| table Status Start_Time "TransferBatch/OnDemand" "BatchName/FileName" ProcessMsg OracleBatchID "Total Record Count" ElapsedTimeInSecs "Total Elapsed Time" correlationId
@kidderjc - I'm no Java expert, but based on my past experience with log4j to Splunk HEC: if Splunk fails for some reason, your solution will encounter a memory issue and may crash. My recommendation: store logs in log files on the server and use a Splunk UF to forward them to the Splunk indexers. I hope this helps!!!
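A minimal sketch of the UF side, assuming the application writes to /var/log/myapp/app.log and you have an index named app_logs (both the path and the index name are hypothetical):

# inputs.conf on the Universal Forwarder
[monitor:///var/log/myapp/app.log]
index = app_logs
sourcetype = log4j
disabled = 0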
You are correct in saying that Splunk no longer automatically extracts the fields with a new custom sourcetype. Splunk does attempt field extractions if there are <key>=<value> patterns in the data, but that does not seem to be the case in these logs. You could try using SEDCMD to reformat the logs like Apache HTTP logs and then set the sourcetype to the standard Apache HTTP log sourcetype; then it should work. I also recommend getting the Apache Web Server app, as it has knowledge objects for Apache HTTP logs: https://splunkbase.splunk.com/app/3186
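A sketch of what that could look like in props.conf, assuming the rsyslog prefix always ends with ".log " as in the sample event, and assuming access_combined as the target sourcetype (both are assumptions; adjust to your data):

# props.conf - strip the rsyslog prefix so the event starts like a standard Apache access log
[access_combined]
SEDCMD-strip_rsyslog_prefix = s/^.*\.log //g

With the prefix gone, the built-in access_combined extractions have a fair chance of matching.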
I recommend setting SHOULD_LINEMERGE to false so that Splunk does not try to re-combine your events together.
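In props.conf that looks like this (the sourcetype name is a placeholder; LINE_BREAKER is shown with its default newline pattern):

[yoursourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)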
tcp start and end are not supposed to be mapped to the Network Sessions data model (CIM), according to Splunk: https://docs.splunk.com/Documentation/CIM/5.3.1/User/NetworkSessions: "The fields in the Network Sessions data model describe Dynamic Host Configuration Protocol (DHCP) and Virtual Private Network (VPN) traffic, whether server:server or client:server, and network infrastructure inventory and topology." GlobalProtect logs should be the ones mapped to Network Sessions - VPN.
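If you want to see which sourcetypes currently land in that data model, a quick check could look like this (an illustrative tstats sketch; the data model must be accelerated, or add summariesonly=false):

| tstats count from datamodel=Network_Sessions.All_Sessions by sourcetype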
Sorry, no, I did not find a solution; the requirement changed and we shifted gears.
How do I write a query to get data like this?

Branch 1 🟢 🟢
Branch 2 🟢🟢🟢
Branch 3 🟢 🟢
Branch 4 🟢🟢🟢
...

Here branch is the actual branch; green represents a successful build, red a failed build, and black an aborted build status (the five most recent build statuses).
We have multiple firewalls in different locations; each location has a syslog collector server that forwards the logs to the Splunk indexers.

Pan:traffic count 27,644,629 83%
Pan:threat count 3,224,543 9.77%
Pan:firewall_cloud count 2,034,183 6.18%

This is the last one hour of data. It looks like over-utilization, so we want to validate whether the logs we are receiving are legitimate. We are planning to reduce the consumption of firewall logs. Please guide me: how can I validate that we are receiving the correct firewall logs, and whether any are excessive or not needed?
Yes, you are correct. We are working with the Windows team, but we are also looking for a solution in the forum.
Nice! It is almost what I need and expect. Just give me one more hint regarding _time. I want to show data from the past, from last Monday between 9am and 5pm.
Hello, I'm having some problems when filtering standard Windows events. My goal is to send the events coming from my UFs to two different indexes based on the users: if the user ends with ".adm", the index should be index1, otherwise index2. Here is my regex for filtering: https://regex101.com/r/PsEHIp/1  I put it in inputs.conf:

###### OS Logs ######
[WinEventLog://Security]
disabled = 0
index = index1
followTail = true
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist = (?ms)EventCode=(4624|4634|4625)\s+.*\.adm
renderXml = false
Thanks, however I worked out what was causing the issue. There was another app which was supposed to be deployed to the Search Head Cluster, but it was mistakenly deployed to the Indexer Cluster. After I removed this app from the master-apps folder, I redeployed the new one and it successfully validated and pushed down to the indexer nodes.
Dear Splunkers, I need to ingest some Apache log files. Those log files are first sent to a syslog server by rsyslog, and rsyslog adds its own information to each line of the log file. A UF is installed on this syslog server and can monitor the log files and send them to the indexers. Each line of the log file looks like this:

2024-02-16T00:00:00.129824+01:00 website-webserver /var/log/apache2/website/access.log 10.0.0.1 - - [16/Feb/2024:00:00:00 +0100] "GET /" 200 10701 "-" "-" 228

As you can see, the first part of the log, up to "/access.log ", has been added by rsyslog, so this is something I want Splunk to filter out / delete. So far, I'm able to monitor the file and filter out the rsyslog layer of the events with a parameter, and I added a TIME_PREFIX parameter so that Splunk automatically detects the timestamp. Like this:

SEDCMD-1 = s/^.*\.log //g
TIME_PREFIX = - - \[

I created a custom sourcetype accordingly. But the issue is that the field extraction is not working properly: almost no field besides the _time-related fields is being extracted. I guess it's because I'm using a custom sourcetype, so Splunk is not extracting the fields automatically as it should, but I'm not really sure... I'm a bit lost. Thanks a lot for your kind help.
Try removing it from your initial filter:

index="mulesoft" applicationName="ext" environment=DEV (message="API: START: /v1/revpro-to-oracle/onDemand") OR (message="API: START: /v1/fin_Zuora_GL_Revpro_JournalImport") OR (message="API: START: /v1/revproGLImport/onDemand*")
Assuming you want the average duration from all events, you could do something like this:

| bin _time span=30m
| eventstats count by _time method
| appendpipe
    [| eventstats sum(duration) as count by _time
     | eval method="duration"]
| xyseries _time method count
| addtotals fieldname=total
| eval total=total-duration
| eval average=duration/total
| fields - duration total

Using dummy data, this gives something like this.
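If you want to try this without real events, here is one way to fabricate dummy data in front of it (the field names method and duration match the search above; the value ranges are arbitrary):

| makeresults count=200
| eval _time=_time - (random() % 86400)
| eval method=mvindex(split("GET,POST,PUT",","), random() % 3)
| eval duration=(random() % 900) + 100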
Hi, thanks so much for the comment. I'm working on ES 7.2 and this thing seems to still be missing. I will update the ES app soon so I get this functionality back.
Hi guys, I am trying to exclude a field value. I need to exclude message="API: START: /v1/Journals_outbound":

index="mulesoft" applicationName="ext" environment=DEV (message="API: START: /v1/Journals_outbound") OR (message="API: START: /v1/revpro-to-oracle/onDemand") OR (message="API: START: /v1/fin_Zuora_GL_Revpro_JournalImport") OR (message="API: START: /v1/revproGLImport/onDemand*")
| search NOT message IN ("API: START: /v1/Journals_outbound")