All Posts


The inputs.conf stanza tells Splunk to run your script.  What the script does depends on how it is written.  It may want to get the files it should read from a script-specific configuration file.
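As a hedged sketch of both points, reusing the paths from the scripted-input question in this thread: the stanza runs the script on an interval, and inputs.conf also accepts a source setting that would make events carry the monitored file's path instead of the script path. The source override here is my assumption for this use case; verify it against the inputs.conf spec for your Splunk version.

```
[script:///opt/splunk/etc/apps/testScript/bin/testedScript.sh]
disabled = false
index = testing
interval = 30
sourcetype = free2
# Assumed override: report the file the script reads as the event source
source = /var/log/test/file1
```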
Hi @anooshac , if you see in the Splunk Dashboard Examples app (https://splunkbase.splunk.com/app/1603 ) there's exactly  also this example. Ciao. Giuseppe
Sure, you have a couple of options there. You can either add adaptive response actions to your Splunk ES correlation searches (if you're using those) or you can set up a saved search to export exactly the results you want to. When I last worked on this (it's been about a year), I found that the saved search method was more reliable. I used a search similar to the Incident Response view search ("Incident Review - Main" in SA-ThreatIntelligence) as my use case was to forward notable events to the SOAR platform.      
Thanks @ITWhisperer for the update. If I have to create a dashboard which will display the number of records (example: 2) if the latest event is within the last 15 minutes, and 0 if the latest event is more than 15 minutes old, is it possible to create such a dashboard?
It looks like your event time is already in the _time field, i.e. your timestamp parsing appears to be correct. Therefore, if you restrict your search to the last 15 minutes, you won't get any events prior to that.
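As a hedged sketch of that approach, using the index and search terms from the question in this thread (adjust to your environment): counting over a 15-minute window returns the record count while the latest event is inside the window, and 0 once it falls outside.

```
index=events_prod_tio_omnibus_esa ("SESE023" OR "SESE020" OR "SESE030") earliest=-15m
| stats count
```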
The problem with tables is that the browser (presumably) adjusts the table layout after applying the CSS, which usually overrides whatever width you have tried to set.
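One commonly used workaround, as a hedged sketch using the id and selectors from the dashboard XML elsewhere in this thread: table-layout: fixed tells the browser to honor declared column widths instead of sizing columns to their content. Whether this is enough depends on how the Splunk table is rendered in your version.

```
/* Assumes the table id from the question; verify against your dashboard markup */
#tableColumWidth table {
  table-layout: fixed !important;
}
#tableColumWidth table thead tr th:nth-child(2),
#tableColumWidth table thead tr th:nth-child(3) {
  width: 10% !important;
  overflow-wrap: anywhere !important;
}
```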
What is the relationship between ID and Event? You don't appear to be doing anything with ID in your current search. Does Event exist in your second dataset (ERROR API [ID])? #### ID is the common field in both datasets, while Event is only present in the 1st dataset, i.e. ("API : access : * : process : Payload:")
Hi All, I am trying to reduce the width of the 2nd and 3rd columns of a table, since some of the cells contain long sentences and occupy too much space. I tried referring to an example like the one below.

<row>
  <panel>
    <html depends="$alwaysHideCSSPanel$">
      <style>
        #tableColumWidth table thead tr th:nth-child(2),
        #tableColumWidth table thead tr th:nth-child(3) {
          width: 10% !important;
          overflow-wrap: anywhere !important;
        }
      </style>
    </html>
    <table id="tableColumWidth">

But I am not able to change the width using this. Are any corrections needed in the above XML?
Hi, I want to know if it is possible to show the number of impacted records in the last 15 minutes for the below search.
Query: index = events_prod_tio_omnibus_esa ( "SESE023" OR "SESE020" OR "SESE030" )
Result:
Requirement: for the above search, if the search is executed at:
11:30 ==> it will show 0 records
11:40 ==> it will show 2 records (the last event, raised at 11:37:14, has 2 records and currenttime - event time < 15 mins)
11:50 ==> it will show 2 records (the last event, raised at 11:37:14, has 2 records and currenttime - event time < 15 mins)
11:55 ==> it will show 0 records (the last event, raised at 11:37:14, has 2 records but currenttime - event time > 15 mins)
Hi Community, I'm working on a scripted input. I have created a script to convert binary logs into human-readable format, and it is working fine. The issue is that the file I'm monitoring is in the "/var/log/test" directory, while the script is at "/opt/splunk/etc/apps/testedScript/bin/testedscript.sh". I'm getting the script path as the source in Splunk. Attaching a screenshot as reference. Below is the inputs.conf stanza I'm using (/opt/splunk/etc/apps/testScript/local/inputs.conf):

[script:///opt/splunk/etc/apps/testScript/bin/testedScript.sh]
disabled=false
index=testing
interval=30
sourcetype=free2

Is there any way I can get the exact source address? In my case it would be "/var/log/test/file1".
Hi @Jit06, did you try show.splunk.com? Ciao. Giuseppe
Hi @Dayalss, the Qualys Add-On for Splunk is very useful for ingesting and parsing Qualys data, but it doesn't contain dashboards to display the data. For that requirement, look for another app on Splunkbase (apps.splunk.com); I don't know which is the most accurate for your requirements. You can use those dashboards as they are, or as a starting point for your custom dashboards. Ciao. Giuseppe
I just received a mail stating that past June 14 we won't even be able to view past support tickets. I see that as a blocker for learning, because whenever I face an issue, I refer to past tickets and learn from them before actually creating a ticket. Past tickets could be made available at least as HTML to view. Kindly let me know if there are any such plans.
Hi, I have ingested the Qualys data using the Qualys TA add-on and enabled the inputs to run once every 24 hours. I'm ingesting the host detection and knowledge logs into Splunk. The requirement is to create a dashboard with multiple multiselect filters and do enrichment from our database. But I found that the data in Qualys is different from the Splunk logs, and the input is ingesting only a certain amount of data. My ask is that I want to ingest the complete data every time the input runs, so that I get accurate data and can use it in dashboards. Please help me. Regards, Dayal
Hi, we are looking for migration guidance from Exabeam to Splunk. Is there a way to migrate data from the Exabeam data lake to Splunk? Also, is there any documentation or guidance available for Exabeam customers migrating to Splunk? Please let me know. Thanks. Guru
Hi, I can't think of any app that monitors user folder sizes, but it wouldn't be that hard to set up. Possible high-level steps:
1. Determine your OS: is it Windows or Linux?
2. On Linux, you can use standard commands plus a bash script to measure user folder sizes on a regular basis and write that data, with a timestamp, to a text log file; on Windows, you can do the same with a PowerShell script.
3. The log file can be monitored at various intervals by a Splunk UF with inputs.conf and props.conf.
4. Once the data is in an index, you can set up thresholds and alerts.
Yes, a bit of homework and scripting, but that's the flexibility of Splunk. It's not hard to do, and you would have created your own private TA.
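The Linux side of step 2 above could be sketched as follows. This is a minimal, hypothetical example: the key=value line format is my choice (it makes Splunk field extraction easy), and the demo paths are throwaway; in practice you would point it at /home and a log file the UF monitors.

```shell
#!/bin/sh
# Hypothetical sketch: record the size of each folder under a base directory,
# one timestamped key=value line per folder, appended to a log file.
log_folder_sizes() {
    BASE="$1"
    LOG="$2"
    for dir in "$BASE"/*/; do
        [ -d "$dir" ] || continue
        # du -sk reports the folder size in kilobytes
        size_kb=$(du -sk "$dir" | cut -f1)
        printf '%s user_dir="%s" size_kb=%s\n' \
            "$(date '+%Y-%m-%d %H:%M:%S')" "$dir" "$size_kb" >> "$LOG"
    done
}

# Demo on a throwaway directory tree (illustrative only):
DEMO=$(mktemp -d)
mkdir -p "$DEMO/alice" "$DEMO/bob"
echo "some data" > "$DEMO/alice/file.txt"
log_folder_sizes "$DEMO" "$DEMO/sizes.log"
cat "$DEMO/sizes.log"
```

Scheduling it via cron (or a scripted input) and monitoring the log with the UF would complete steps 3 and 4.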
Thanks. Below is the query I am trying.
index=mulesoft applicationName=test
| stats values(content.payload.requestID) as Request1
        values(content.payload.impConReqId) as ImpConReqId
        values(content.payload.batchId) as batch1
        values(content.payload{}.batchId) as batch2
        values(content.payload{}.impConReqId) as impConReqId1
        values(content.payload.OutputParameters.X_REQUEST_ID) as Request2
  BY applicationName, correlationId
| eval ImpConReqID=coalesce(ImpConReqId,impConReqId1)
| eval RequestId=coalesce(Request1,Request2)
| eval batchId=coalesce(batch1,batch2)
| eval ImpCon=mvmap(ImpConReqID,if(match(ImpConReqID,".+"),"ImpConReqID: ".ImpConReqID,null()))
| eval batch=mvmap(batchId,if(match(batchId,".+"),"batchId: ".batchId,null()))
| eval ReqId=mvmap(RequestId,if(match(RequestId,".+"),"RequestId: ".RequestId,null()))
| eval oracle=mvappend(ImpCon,batch,ReqId)
| eval orcaleid=mvfilter(isnotnull(oracle))
| eval OracleResponse=mvjoin(orcaleid," ")
| rename applicationName as ApplicationName correlationId as CorrelationId
| table ApplicationName OracleResponse CorrelationId

This is the query I am using to get batchID, requestID, and ImpconID: if the field contains a value, I need to show it in the table, keyed by correlationID. Right now I am getting the values properly, but in some scenarios a particular correlationID has two or three ImpconIDs with values alongside null values. I want to filter the null-value ImpconIds out of the table.
I am having the issue on Windows clients. Because the group isn't on the Domain Controllers, shouldn't Splunk install on the clients anyway? If I don't use my AD user to run the service, I am able to install Splunk from the GPO. The installer creates a user and puts it under NT Service. The NT Service\splunk-user is not added to any of the required groups, so I do that manually.