Hi, I am new to Splunk. I am trying to send a REST request from a Splunk dashboard, via a submit button, to an external server that listens for HTTP requests. How can I achieve that goal? I basically need to send a simple curl with a variable in the path that is selected by the user from a drop-down list.
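Simple XML dashboards cannot issue arbitrary outbound HTTP calls on their own; common approaches are a custom JavaScript handler, a custom search command, or a scripted alert action that shells out to curl. As a minimal sketch of just the curl side, assuming a hypothetical hostname, port, and path, with `CHOICE` standing in for the value the drop-down token would supply:

```shell
# Value that the dashboard drop-down token would supply (hypothetical).
CHOICE="option1"

# Build the request URL with the selected value in the path.
URL="http://your-server:8080/api/${CHOICE}"
echo "$URL"

# Uncomment once the listening server is reachable:
# curl -s "$URL"
```

Whatever mechanism you choose on the Splunk side, it ultimately just needs to substitute the token value into a URL like this and issue the request.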
Why does alert manager not always trigger an alert?
Hi everyone, I have a DB Connect input and get a table like this:

```
_time              count
12/09/2022 10:00   1
12/09/2022 10:01   1
12/09/2022 10:03   1
12/09/2022 10:04   1
12/09/2022 11:05   2
12/09/2022 11:15   5
12/09/2022 11:05   6
12/09/2022 11:17   4
12/09/2022 12:05   1
12/09/2022 12:10   1
12/09/2022 12:12   1
```

I want to find the trend of the events I receive per hour, based on now. From what I understand, I have to count the number of events per hour, to get a table like this before displaying it as a single value:

```
_time              count
12/09/2022 10:30   14
12/09/2022 11:30   3
```

Suppose now is 12/09/2022 12:30, and I want to count events from 10:30-11:30 and 11:30-12:30. If I use `| timechart span=1h sum(count) as count`, I get this table instead:

```
_time              count
12/09/2022 10:00   4
12/09/2022 11:00   17
12/09/2022 12:00   3
```

Please, is it possible to get the table that I want?

Have a nice day! Julia
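If your Splunk version supports the `aligntime` bin option on timechart (available in recent releases; verify on yours), you can shift the hourly buckets by 30 minutes. A hedged sketch, with a placeholder index name:

```
index=your_dbconnect_index
| timechart span=1h aligntime=@h+30m sum(count) as count
```

`aligntime=@h+30m` anchors each one-hour bucket at half past the hour, so events from 10:30-11:30 land in a single bucket. On older versions, the same effect can be had by shifting `_time` forward 30 minutes with `eval` before `bin span=1h` and shifting it back afterwards.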
Hello, when I download a dashboard from Dashboard Studio, it comes out with horizontal and vertical scrollbars. Is there an option to download to PDF with full tables and without the scrollbars? Or is there an option to make the height dynamic based on custom variables? Thanks, Ran
Hi all, I have 3 indexers with 26 CPUs per indexer, and they are struggling under the load. I may not be able to add more CPUs, as there is a limit and the max is 26 cores. Will adding more indexers help?
This is a script for finding frozen bucket files in a time range you give. It shows the folders, their size, and the start and end time of the logs each folder contains, then asks whether to thaw (restore) the matching buckets.

```shell
#!/bin/bash
clear
echo "############################"
echo "##created.by mehran.safari##"
echo "##          2022          ##"
echo "############################"

echo -n " Enter index name to look up: "
read INAME

FROZENPATH="/frozendata"
echo " Default Splunk frozen index path is $FROZENPATH. Is it OK? (y to continue or n to give a new path):"
read ANSWER3
case "$ANSWER3" in
  "y") echo "OK, default frozen index path selected.";;
  "n") echo "Enter new frozen index path:"; read FROZENPATH;;
esac

find "$FROZENPATH/$INAME" -type d -iname "db_*" -print > "./frozendb.txt"

echo -n " Enter the start date you need (MM/DD/YYYY HH:MM:SS): "
read SDATE
echo -n " Enter the end date you need (MM/DD/YYYY HH:MM:SS): "
read EDATE

BSDATE=$(date -d "$SDATE" +%s)
BEDATE=$(date -d "$EDATE" +%s)

FILE='./frozendb.txt'
while read -r line; do
  # Bucket directories are named db_<latest_epoch>_<earliest_epoch>_<id>
  LOGSTART=$(echo "$line" | cut -d "_" -f3)
  LOGEND=$(echo "$line" | cut -d "_" -f2)
  if [[ $BSDATE -le $LOGEND && $BEDATE -gt $LOGSTART ]]; then
    echo "******************************"
    echo "Frozen bucket path you want: $line"
    HLOGSTART=$(date -d @"$LOGSTART")
    HLOGEND=$(date -d @"$LOGEND")
    LOGSIZE=$(du -hs "$line" | cut -f1)
    echo "*** This bucket contains logs from: $HLOGSTART"
    echo "*** This bucket contains logs to:   $HLOGEND"
    echo "*** The size of this bucket is: $LOGSIZE"
    echo "$line" >> "./frozenmatched.txt"
    echo "******************************"
  fi
done < "$FILE"

rm -f "./frozendb.txt"

echo "Do you want to thaw these buckets? (y to copy): "
read ANSWER
FILE2='./frozenmatched.txt'
INDEXPATH="/opt/splunk/var/lib/splunk"
DST="$INDEXPATH/$INAME/thaweddb/"
if [[ "$ANSWER" == "y" ]]; then
  echo " Default destination is $DST. Is it OK? (y to continue or n to give a new path):"
  read ANSWER2
  case "$ANSWER2" in
    "y") echo "OK, default destination selected.";;
    "n") echo "Enter new destination path:"; read DST;;
  esac
  while read -r line2; do
    sudo cp -R "$line2" "$DST"
    echo "Copy of $line2 to $DST done."
    sudo /opt/splunk/bin/splunk rebuild "$DST$(basename "$line2")" "$INAME" --ignore-read-error
  done < "$FILE2"
fi
rm -f "./frozenmatched.txt"

echo " Do you want to restart the Splunk service? (y to continue or n to exit):"
read ANSWER4
if [[ "$ANSWER4" == "y" ]]; then
  sudo /opt/splunk/bin/splunk restart
fi

echo "################################"
echo "## GOOD LUCK WITH BEST REGARDS##"
echo "################################"
```

This is the GitHub project if you need it; it may help you: https://github.com/mehransafari/Splunk_FrozenData_FIND_by_DATE_and_Restore
This bash script searches the frozen path you give, together with the oldest time you still need, then shows the older buckets and asks you to remove them. It shows the path, size, and start and end time of the logs each bucket contains. For example, it will find buckets older than 30 days and ask you to remove them if you agree. The script also detects buckets with a wrong time (log time > current time).

```shell
#!/bin/bash
clear
echo "############################"
echo "##created.by mehran.safari##"
echo "##          2022          ##"
echo "############################"

echo -n " Enter index name to look up: "
read INAME

FROZENPATH="/frozendata"
echo " Default Splunk frozen index path is $FROZENPATH. Is it OK? (y to continue or n to give a new path):"
read ANSWER1
case "$ANSWER1" in
  "y") echo "OK, default frozen index path selected.";;
  "n") echo "Enter new frozen index path:"; read FROZENPATH;;
esac

find "$FROZENPATH/$INAME" -type d -iname "db_*" -print > "./frozendb.txt"

ODATE=30
echo " The oldest frozen bucket kept should be $ODATE days old. Is it OK? (y to continue or n to change it):"
read ANSWER2
case "$ANSWER2" in
  "y") echo "OK, default frozen age kept.";;
  "n") echo "Enter the new frozen age you want:"; read ODATE;;
esac

BODATE=$(date -d "-$ODATE days" +%s)
BCDATE=$(date +%s)

FILE1='./frozendb.txt'
while read -r line; do
  # Bucket directories are named db_<latest_epoch>_<earliest_epoch>_<id>
  LOGSTART=$(echo "$line" | cut -d "_" -f3)
  LOGEND=$(echo "$line" | cut -d "_" -f2)
  # Match buckets older than the cutoff, or with timestamps in the future
  if [[ $LOGEND -gt $BCDATE || $LOGSTART -lt $BODATE ]]; then
    echo "******************************"
    echo "Frozen bucket path: $line"
    HLOGSTART=$(date -d @"$LOGSTART")
    HLOGEND=$(date -d @"$LOGEND")
    LOGSIZE=$(du -hs "$line" | cut -f1)
    echo "*** This bucket contains logs from: $HLOGSTART"
    echo "*** This bucket contains logs to:   $HLOGEND"
    echo "*** The size of this bucket is: $LOGSIZE"
    echo "$line" >> "./frozenmatched.txt"
    echo "******************************"
  fi
done < "$FILE1"

rm -f "./frozendb.txt"

echo "Do you want to DELETE these buckets? (y to delete): "
read ANSWER3
FILE2='./frozenmatched.txt'
if [[ "$ANSWER3" == "y" ]]; then
  while read -r line2; do
    sudo rm -rf "$line2"
    echo "Deleting $line2 done."
  done < "$FILE2"
fi
rm -f "./frozenmatched.txt"

echo "################################"
echo "## GOOD LUCK WITH BEST REGARDS##"
echo "################################"
```

This is the GitHub link if you want it: https://github.com/mehransafari/Splunk_Frozen_Cleanup
Hi all, is there a way to integrate with O365 so that, given a malicious email (identified by subject and sender), I can search for it in the mailboxes of all users and then delete it? I was looking for an action in the "EWS for Office 365" app and in "MS Graph for Office 365", but I do not see any action able to do that. For instance, the "run query" actions require a precise mailbox to look into. Thank you in advance.
Is there a way to create/update/delete tags other than through "Administration Settings/Tags"? I was looking for a way to do it through playbooks.
I have integrated Splunk with ServiceNow using the add-on. Now I have 2 questions:

1. I am able to bring the desired case data into Splunk, but I can only create records; when I delete a case in ServiceNow, the corresponding record is not deleted in Splunk. What should I do?
2. When trying to push data from Splunk to ServiceNow, I am able to push only to the incident and event tables, but not to my desired table. Is there a way to do that?
My requirement is to notify when a job runs longer than the specified time:

Condition 1 - the first job of every day should run in less than 45 minutes; if it exceeds 45 minutes, trigger an alert.
Condition 2 - all other jobs, on all days, should not exceed 10 minutes; if one exceeds 10 minutes, trigger an alert.
Condition 3 - if these jobs do not run every 15 minutes (a job needs to start every 15 minutes), trigger an alert.
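For conditions 1 and 2, one hedged sketch, assuming each job logs events carrying a job_id field; the index, sourcetype, and field names below are all hypothetical and would need adjusting to your data:

```
index=your_jobs sourcetype=job_logs earliest=@d
| stats earliest(_time) as start latest(_time) as end by job_id
| eval duration_min = (end - start) / 60
| sort 0 start
| streamstats count as job_seq
| eval limit_min = if(job_seq == 1, 45, 10)
| where duration_min > limit_min
```

The streamstats sequence number marks the day's first job so it gets the 45-minute limit while the rest get 10. For condition 3, a common pattern is a separate alert scheduled every 15 minutes that searches for job-start events in the last 15 minutes and triggers when the number of results is zero.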
Hello, I am new to learning Splunk. I have installed the Splunk App for AWS on a Splunk instance and configured an AWS add-on input with CloudWatch as the source, which is pulling various resource logs into Splunk search. However, the log data is not showing up in the Splunk app dashboards, which display the message below:

"Some panels may not be displayed correctly because the following inputs have not been configured: CloudWatch, Config, CloudTrail, Description. Or, the saved search "Addon Metadata - Summarize AWS Inputs" is not enabled on Add-on instance"

Does anybody have any idea how to resolve this issue?
Hi, I have a question about OpenTelemetry. We are changing our applications to support OpenTelemetry traces, metrics, and logs. We only use the on-prem version of Splunk, and so do all our customers. When I spoke with you a year ago, you told me that OpenTelemetry support would only be available for cloud users. Is that still the case, or has this strategy changed? For security reasons it will not be possible to use the cloud version of Splunk, and that also goes for all our customers (all our customers also have on-prem Splunk licenses). reg. Sindre
Hi, how do I give Splunk user accounts access so they have visibility of the Cloud Monitoring Console? Can you help me with the exact process?
Hi everyone, I have created multiple dashboards with multiple searches, and this is now impacting Splunk performance. I want to use a base search for my dashboards, but I am not sure how. Below are the queries for one of my dashboards:

```
<query>index="abc" sourcetype="O2-Salesforce-SalesforceUserLicense"
| lookup local=t Org_Alias.csv OrgFolderName OUTPUT OrgName as OrgName
| stats count by OrgName</query>

<query>index="abc" sourcetype="O2-Salesforce-SalesforceUserLicense"
| stats count by LicenseName</query>

<query>index="abc" sourcetype="O2-Salesforce-SalesforceUserLicense" $type$ TotalLicenses!=0
| lookup local=t Org_Alias.csv OrgFolderName OUTPUT OrgName as OrgName
| search $OrgName$
| dedup OrgFolderName, LicenseName, SalesforceOrgId
| chart sum(TotalLicenses) as "Total Licenses" sum(UnusedLicenses) as "Unused Licenses" sum(UsedLicenses) as "Used Licenses" by LicenseName</query>

<query>index="abc" sourcetype="O2-Salesforce-SalesforceUserLicense" $type$
| lookup local=t Org_Alias.csv OrgFolderName OUTPUT OrgName as OrgName
| search $OrgName$
| dedup OrgFolderName, LicenseName, SalesforceOrgId
| stats sum(TotalLicenses) as "Total-Licenses" sum(UsedLicenses) as "Used Licenses" sum(UnusedLicenses) as "Unused Licenses" by LicenseName OrgName SalesforceOrgId
| sort -Total-Licenses</query>

<query>index="abc" sourcetype="O2-Salesforce-SalesforceUserLicense" $type$
| lookup local=t Org_Alias.csv OrgFolderName OUTPUT OrgName as OrgName
| search $OrgName$
| dedup OrgFolderName, LicenseName, SalesforceOrgId
| stats latest(TotalLicenses) as "Total-Licenses" latest(UsedLicenses) as "Used Licenses" latest(UnusedLicenses) as "Unused Licenses" by LicenseName OrgName SalesforceOrgId
| sort -Total-Licenses
| sort OrgName</query>

<query>index="abc" sourcetype="O2-Salesforce-SalesforceUserLicense" $type$
| lookup local=t Org_Alias.csv OrgFolderName OUTPUT OrgName as OrgName
| search $OrgName$
| dedup OrgFolderName, LicenseName, SalesforceOrgId
| stats latest(TotalLicenses) as "Total-Licenses" latest(UsedLicenses) as "Used Licenses" latest(UnusedLicenses) as "Unused Licenses" by LicenseName OrgName SalesforceOrgId
| sort -Total-Licenses
| sort OrgName</query>
```

I have read multiple base-search documents, but it is not working for my dashboards. Can someone guide me on how I can apply a base search to my queries?
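In Simple XML, a base search is a `<search>` element with an `id` defined at dashboard level; each panel then references it with `base=` and contains only the post-processing pipeline, which should start with a transforming command and use only fields the base search returns. A sketch using the first two queries from the post (the `id` name is arbitrary):

```
<search id="license_base">
  <query>index="abc" sourcetype="O2-Salesforce-SalesforceUserLicense"
| lookup local=t Org_Alias.csv OrgFolderName OUTPUT OrgName as OrgName
| fields OrgName LicenseName</query>
</search>

<panel>
  <chart>
    <search base="license_base">
      <query>| stats count by OrgName</query>
    </search>
  </chart>
</panel>
<panel>
  <chart>
    <search base="license_base">
      <query>| stats count by LicenseName</query>
    </search>
  </chart>
</panel>
```

Two caveats: a non-transforming base search should end with an explicit `fields` list (as above) so it does not pass huge raw event sets to the panels, and panels that use the `$type$` and `$OrgName$` tokens can only share a base if those tokens appear in the base query (or in the post-process), so the token-driven queries may need their own base.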
Hi everyone, I need to know whether it is possible to send data via HEC from one source to two different Splunk instances. Currently, the source is sending data to one Splunk instance, and I want to test the same on a different Splunk environment by getting the data in there as well. Thanks
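A HEC token points at a single endpoint, so the usual options are to have the source post the same events to both HEC endpoints, or to point the source at an intermediate heavy forwarder that clones the data to both environments. A sketch of the cloning approach via outputs.conf on that forwarder (the group names and hostnames below are placeholders):

```
# outputs.conf on an intermediate heavy forwarder
[tcpout]
defaultGroup = current_env, test_env

[tcpout:current_env]
server = splunk-current.example.com:9997

[tcpout:test_env]
server = splunk-test.example.com:9997
```

Listing two groups in `defaultGroup` sends a copy of the data to each, which doubles outbound traffic and (depending on licensing) may count the data twice, so it is usually reserved for test setups like this.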
Can someone please help me with this? I am looking for a query so that if count is less than 0 it is changed to 0; otherwise the actual count is displayed. For example, if the count is -23, the result should be count=0, and if the count is 23, the result should be count=23.
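This is a one-line eval appended to the existing search; either form below is standard SPL:

```
... | eval count = if(count < 0, 0, count)
```

or equivalently `... | eval count = max(count, 0)`, since the eval `max()` function returns the larger of its arguments.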
Hello all, I need help generating the average response times for the data below using the tstats command. I am dealing with a large amount of data and building a visual dashboard for my management, so I am trying to use tstats, as those searches are faster. I am stuck on computing the average response time from the value of Total_TT in my tstats command. When I execute the tstats search below, it says it returned some number of events, but the value is blank. Can someone help me with the query?

Sample data:

```
2022-09-11 22:00:59,998 INFO -(Success:true)-(Validation:true)-(GUID:68D74EBE-CE3B-7508-6028-CBE1DFA90F8A)-(REQ_RCVD:2022-09-11T22:00:59.051)-(RES_SENT:2022-09-11T22:00:59.989)-(SIZE:2 KB)-(RespSent_TT:0ms)-(Actual_TT:938ms)-(DB_TT:9ms)-(Total_TT:947ms)-(AppServer_TT:937ms)
```

SPL query:

```
| tstats values(PREFIX(total_tt:)) as AVG-RT where index=test_data sourcetype="tomcat:runtime:log" TERM(guid)
```
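One likely cause of the blank value: the indexed values behind `PREFIX(total_tt:)` look like `947ms`, and the trailing `ms` makes them non-numeric, so numeric aggregations return nothing. A hedged workaround, assuming Splunk 8.2+ where `PREFIX()` is available: group by the prefixed term in tstats, strip the suffix afterwards, and compute a count-weighted average:

```
| tstats count where index=test_data sourcetype="tomcat:runtime:log" by PREFIX(total_tt:)
| rename "total_tt:" as total_tt
| rex field=total_tt "^(?<total_ms>\d+)ms"
| stats sum(eval(total_ms * count)) as total_sum sum(count) as events
| eval avg_response_ms = round(total_sum / events, 2)
```

This keeps the fast indexed-term scan of tstats and only post-processes one row per distinct Total_TT value, so it should remain much faster than a raw-event search.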
Hi all, I have a few events in Splunk that should be generated all the time; if those events stop being generated, we need to know that there is an issue. So I have to detect a zero count when checking for data in the last 15 minutes, and display a message in the alert stating that there were no events in the last 15 minutes.

Sample event:

```
{"log":"[13:18:16.761] [INFO ] [] [c.c.n.t.e.i.T.lloutEventData] [akka://Mmster/user/$b/worrActor/$rOb] - channel=\"AutoNotification\", productVersion=\"2.3.15634ab725\", apiVersion=\"A1\", uuid=\"dee45ca3-2401-13489f240eaf\", eventDateTime=\"2022-09-12T03:18:16.760Z\", severity=\"INFO\", code=\"ServiceCalloutEventData\", component=\"web.client\", category=\"integrational-external\", serviceName=\"Consume Notification\", eventName=\"MANDATE_NOTIFICATION_RETRIEVAL.CALLOUT_REQUEST\", message=\"Schedule Job start, r\", entityType=\"MQST\",returnCode=\"null\"}
```

I have written the query like this:

```
index=a0_pay MANDATE_NOTIFICATION_RETRIEVAL.CALLOUT*
| rex field=log "eventName=\"*(?<eventName>[^,\"\s]+)"
| rex field=log "serviceName=\"*(?<serviceName>[^\"]+)"
| search eventName="MANDATE_NOTIFICATION_RETRIEVAL.CALLOUT*" AND serviceName="Consume Notification"
| stats count by eventName
| where count=0
| eval message="No Events Triggered for Mandate Notification Retrieval Callout"
| table count message
```

I am not able to fetch results properly. Is there any other way to find and trigger results when no events are generated? Thanks in advance.
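One reason the posted search never alerts: `stats count by eventName` produces no rows at all when there are no matching events, so `where count=0` has nothing to match. Two common workarounds are to drop the `by` clause (a plain `stats count` returns a single row with count=0 even over zero events), or to configure the alert to trigger when the number of search results is zero. A sketch of the first approach, reusing the index and terms from the post:

```
index=a0_pay "MANDATE_NOTIFICATION_RETRIEVAL.CALLOUT" earliest=-15m
| stats count
| where count=0
| eval message="No events triggered for Mandate Notification Retrieval Callout in the last 15 minutes"
| table count message
```

Scheduled every 15 minutes with the alert condition "number of results > 0", this fires exactly when the window is empty.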
I am creating an index and have configured the inputs.conf file. I have two prod servers with app logs that share the same Linux path. Additionally, I have two test (non-prod) servers that both share a log path as well, but a different one from the prod servers. Besides hard-coding the servers in inputs.conf, how does the process determine which hosts to collect the log data from, given the identical paths listed in inputs.conf?

Some questions:
- Can I use the same index for prod and non-prod (what is best practice)? The inputs.conf has index=x under the monitor stanza, so that maps the input to collect the data into index=x. On the deployment server I would create a server class with all 4 servers (prod and non-prod) and assign the server class to the app that contains the inputs.conf file.
- Or should I be creating separate indexes (prod and non-prod), then separate apps (prod and non-prod), then separate server classes (prod and non-prod)?
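One common layout, sketched under the assumption that a deployment server is in use (all app names, index names, paths, and hostnames below are placeholders): two deployment apps with their own monitor stanzas and index values, mapped to prod and non-prod hosts by separate server classes, so each host only receives the stanza for the path it actually has.

```
# deployment-apps/app_logs_prod/local/inputs.conf
[monitor:///var/log/myapp]
index = myapp_prod
sourcetype = myapp:log

# deployment-apps/app_logs_nonprod/local/inputs.conf
[monitor:///opt/myapp/logs]
index = myapp_nonprod
sourcetype = myapp:log

# serverclass.conf on the deployment server
[serverClass:myapp_prod]
whitelist.0 = prodserver1
whitelist.1 = prodserver2
[serverClass:myapp_prod:app:app_logs_prod]

[serverClass:myapp_nonprod]
whitelist.0 = testserver1
whitelist.1 = testserver2
[serverClass:myapp_nonprod:app:app_logs_nonprod]
```

Separate indexes per environment also make retention and role-based access control straightforward, which is the usual best-practice argument for splitting prod and non-prod.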