All Topics

Hi, we have the Splunk Add-on for Okta installed to fetch logs from Okta cloud. We are currently getting rate-limit warnings on the Apps (api/v1/apps) endpoint, since our organization has more than 23,000 users and 150+ apps onboarded to Okta (all users are assigned to all apps). The add-on currently fetches from the Apps endpoint once a day, with App limit set to 200, Throttling Threshold Pct set to 20, and Maximum log batch size left at the default of 60,000. We receive around 200+ warning alerts every day during the fetch window. We tried lowering the App limit from 200 to 85, but that increased the warning count, so we rolled back. We also tried raising Throttling Threshold Pct from 20 to 40, but there was no improvement. Can you please suggest a possible way to resolve these warnings?
Hey folks, for some reason the dashboard description is not included when I use the Export PDF option, even though I have already put the necessary settings in alert_actions.conf.

Splunk version - 7.2.5

<form script="table-js.js, tooltipv3.js"> <label>Some Dashboard</label> <description>My Description</description>

[email]
pdf.logo_path = /opt/splunk/etc/apps/myapp/appserver/static/logo_new1.png
pdf.header_left = logo
pdf.header_center = title
pdf.footer_left = description
reportPaperOrientation = landscape

Is there something I am missing? Please suggest. @niketn
Hi, I want to monitor the DB processes running on my Linux server, but due to business constraints I cannot use forwarders. Is there any add-on, app, or script that can get the DB process details directly into Splunk without using any forwarders? Are there any other options/solutions that could be used? Thanks.
I have a Splunk instance hosted on a Linux machine, which I connect to using PuTTY. I want to open Splunk Web on the Windows machine where I run PuTTY. How can I open it? Please guide me through this.
After upgrading the Splunk Add-on for AWS, an error message appears in the corner in Splunk: Unable to initialize modular input "splunk_ta_aws_sqs" defined in the app "Splunk_TA_aws": Introspecting scheme=splunk_ta_aws_sqs: script running failed (exited with code 1). I have disabled all inputs in both default and local, and even deleted inputs.conf, without any effect. The error occurs only on the Search Head, which does no data collection; collection happens on a Heavy Forwarder. Splunk is v8.0.3 and the app is the latest version. The log shows a failing script, which looks like a Python issue, but I don't understand why, since all inputs from the add-on are disabled/removed and the SH has been restarted.
Hi, I am working on a Splunk query to calculate the drop-off rate and the percentage of customer-journey drop-outs. When a customer comes to purchase an order they go through several pages, and I want to work out where most customers actually drop off.

Journey pages: /checkout/my-offer -> /checkout/your-details -> /checkout/direct-debit -> /checkout/creditcheck -> /checkout/review-basket -> /checkout/confirm-order -> /checkout/order complete

index=test_prod sourcetype=access_combined_wcookie req_content=/checkout/your-details OR req_content=/checkout/direct-debit OR req_content=/checkout/creditcheck OR req_content=/checkout/review-basket OR req_content=/checkout/confirm-order OR req_content=/checkout/order complete | timechart span=1h count

The problem is that I only want to count drop-outs for one specific journey, which starts with this page, while pages like /checkout/your-details can appear in other journeys as well. The only link between the pages of a journey is a field called uniqueId. Is it possible to calculate the percentage of drop-outs during this journey?
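A possible funnel-style sketch (an assumption-laden starting point, not a definitive answer): count distinct uniqueId values per page and express each step as a percentage of the busiest page, which for a funnel should be the entry page:

```spl
index=test_prod sourcetype=access_combined_wcookie req_content="/checkout/*"
| stats dc(uniqueId) AS sessions BY req_content
| eventstats max(sessions) AS entered
| eval pct_of_entry=round(100 * sessions / entered, 1)
| eval dropoff_pct=round(100 - pct_of_entry, 1)
| table req_content sessions pct_of_entry dropoff_pct
```

To restrict this to journeys that genuinely started at /checkout/my-offer, one could first collect the uniqueIds seen on that page (for example in a subsearch) and filter the outer search to those ids.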
I have an XML payload like the one below which is getting logged in Splunk. However, when I search in Splunk for the customer email, the log is truncated and I can only see the lines before REGISTEREDTIME. The lines after REGISTEREDTIME are getting split off into other, unrelated events, and I am not able to pull up the entire log in Splunk. Can anyone help?

<CUSTOMEREMAIL>somedummyemail</CUSTOMEREMAIL> <FIRSTNAME>Somename</FIRSTNAME> <LASTNAME>Somename</LASTNAME> <REGISTEREDTIME>2020-09-15T12:05:00Z</REGISTEREDTIME> <PREFIX>Mr</PREFIX> <LOGGEDIN>NO<LOGGEDIN> <LOYALCUSTOMER>YES<LOYALCUSTOMER>
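If the event is being both truncated and split, a props.conf sketch on the indexer or heavy forwarder may help; the sourcetype name and break pattern below are assumptions based on the payload shown, so adjust them to the real data:

```conf
[my_xml_sourcetype]
SHOULD_LINEMERGE = false
# break a new event only before the opening customer tag
LINE_BREAKER = ([\r\n]+)(?=<CUSTOMEREMAIL>)
# raise the per-event size limit from the 10000-byte default
TRUNCATE = 100000
```

A restart of the ingesting instance is needed for props.conf changes to apply, and they only affect newly indexed data.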
I am trying to exclude events in a subsearch using search NOT, but it is not returning the expected result. I am trying to exclude the events matching "system=APICleanUp callbacknumber=* Message="API Success" sourcetype=application_prod". The two sets of logs come from two different systems; callbacknumber is the common field between the two searches.

Query:

environment=PROD system=API1 Message="API l logs" | dedup callbacknumber | search NOT [search system=APICleanUp callbacknumber=* Message="API Success" sourcetype=application_prod ] | table callbacknumber

Any help will be highly appreciated.
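One thing worth checking: a NOT [subsearch] filter matches on every field the subsearch returns, not just the shared one. A common fix is to reduce the subsearch output to only the common field, so the outer search excludes purely on callbacknumber. A hedged sketch (field and sourcetype names taken from the question):

```spl
environment=PROD system=API1 Message="API l logs"
| dedup callbacknumber
| search NOT [ search system=APICleanUp Message="API Success" sourcetype=application_prod
               | fields callbacknumber ]
| table callbacknumber
```

Note that subsearches have result limits (10,000 rows by default), so if the cleanup search returns more events than that, the exclusion will be incomplete.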
Hi, I have an alert which currently runs every 15 minutes, but I want it NOT to trigger between 1:30 AM and 2:30 AM every day. That is when my server cache gets flushed, and a spike in response time is normal then, so the alert produces false alarms at that time. How do I achieve this? My query is:

index=test sourcetype=access_combined_wcookie POST requested_content=/checkout/your-order* | timechart span=15m avg(response_time_sec) as AvgResponseTime by host | eval AvgResponseTime=round(AvgResponseTime,3)
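One possible approach, assuming _time is in the same timezone as the cache flush: compute the hour-and-minute of each event and drop the 01:30-02:30 window before charting (a sketch, not tested against your data):

```spl
index=test sourcetype=access_combined_wcookie POST requested_content=/checkout/your-order*
| eval hm=tonumber(strftime(_time, "%H%M"))
| where NOT (hm >= 130 AND hm < 230)
| timechart span=15m avg(response_time_sec) AS AvgResponseTime BY host
```

Alternatively, the alert's cron schedule can be restricted so it simply never runs in that window, though with a 15-minute schedule that means listing the allowed minute/hour combinations explicitly.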
Hi, how do I read the results of a finished Splunk search in JavaScript with the following function?

mvc.Components.get("search_id").on("search:done", function () { })
I want to create events, but I found two ways to do that. When should I use the HTTP Event Collector API, and when should I use the receivers/simple API? Also, can someone give examples of real-world scenarios in which a user would want to create events through an API?
Hello, I use the search below but I don't know why the rename command doesn't work. Thanks for your help.

| inputlookup fo_all | fields SITE | dedup SITE | rename "ABCD" as "AB" | table SITE
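One common cause, offered as a guess: `| fields SITE` removes every other field before `rename` runs, so by that point there is no ABCD field left to rename, and the final `table SITE` would hide it anyway. Assuming ABCD exists in the fo_all lookup, keeping it in the fields and table lists lets the rename take effect:

```spl
| inputlookup fo_all
| fields SITE ABCD
| dedup SITE
| rename ABCD AS AB
| table SITE AB
```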
Hi, I want to rewrite events based on a keyword in the event. For example:

July 27 10:00:05 UTC IF_DOWN SYSLOG_DAEMON

If I match SYSLOG in the event, I want to add a field to the event on the heavy forwarder and send the logs to the respective destination.

New log event: July 27 Hostname 10:00:0006 IF_DOWN SYSLOG_DAEMON

Can we do this on a heavy forwarder using transforms.conf or props.conf? Kindly help.
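A hedged sketch of keyword-based routing on a heavy forwarder; the sourcetype, transform, and output-group names below are placeholders, and this routes matching events to a destination rather than rewriting their text:

```conf
# props.conf (on the heavy forwarder; sourcetype name is an assumption)
[my_syslog_sourcetype]
TRANSFORMS-routing = route_syslog_daemon

# transforms.conf
[route_syslog_daemon]
REGEX = SYSLOG_DAEMON
DEST_KEY = _TCP_ROUTING
FORMAT = syslog_out_group

# outputs.conf (destination address is a placeholder)
[tcpout:syslog_out_group]
server = 10.0.0.5:9997
```

Rewriting the raw event text itself is also possible with a transform that sets DEST_KEY = _raw, but routing and rewriting would be separate transforms.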
Hi punters, I am facing issues with data anonymization. Below are my conf files. My transforms.conf anonymizes the data when my _raw event matches either one of the regex patterns, but it does not anonymize the event when it matches both patterns. Need help, please.

xml-anonymizer also doesn't work if my _raw event contains a JSON message, but it works fine when the _raw event is a normal line.

props.conf

[dp_logs_multiline]
CHECK_METHOD = modtime
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
LINE_BREAKER=([\r\n]+)\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}.\d{3}
category = Custom
disabled = false
pulldown_type = 1
MAX_TIMESTAMP_LOOKAHEAD = 24
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
TIME_PREFIX = ^
TRANSFORMS-anonymize = json-anonymizer, xml-anonymizer
ANNOTATE_PUNCT = false
TRUNCATE = 100000
MAX_EVENTS = 10000

transforms.conf

[json-anonymizer]
REGEX = (?ms)^(.*\"[sS]hippingAddress\"\s+\:\s+\{)[\s\S]*?(\}.*)$
FORMAT = $1#########JSON PCC DATA ANONIMIZED#############$2
REPEAT_MATCH = true
MV_ADD = true
DEST_KEY = _raw

[xml-anonymizer]
REGEX = (?ms)^(.*\<[bB]illTo\>)[\s\S]*?(\<\/[rR]equestMessage\>.*)$
FORMAT = $1#########XML PCC DATA ANONIMIZED#############$2
REPEAT_MATCH = true
MV_ADD = true
DEST_KEY = _raw
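As a guess only: the `^...$` anchors combined with the greedy leading `.*` make each pattern claim the whole event, which can interact badly when both sections are present. A sketch with unanchored, narrower patterns, so each transform rewrites only its own section (the regexes are illustrative and not tested against your data):

```conf
[json-anonymizer]
REGEX = (\"[sS]hippingAddress\"\s*:\s*\{)[\s\S]*?(\})
FORMAT = $1#########JSON PCC DATA ANONIMIZED#############$2
DEST_KEY = _raw

[xml-anonymizer]
REGEX = (\<[bB]illTo\>)[\s\S]*?(\<\/[rR]equestMessage\>)
FORMAT = $1#########XML PCC DATA ANONIMIZED#############$2
DEST_KEY = _raw
```

Note also that MV_ADD applies to field extractions, not to transforms that write DEST_KEY = _raw, so it can likely be dropped here.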
Hi all, I have created a custom alert in Splunk and I want to put a daily suppression window on it from 12 AM UTC to 7 AM UTC. How can this be achieved: through the cron expression, or by adding something to the original query? Please help!
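One option that avoids touching the query is the cron schedule itself: restricting the hours field (for example `*/15 7-23 * * *` for a 15-minute alert) so the alert simply never runs between 00:00 and 07:00, assuming the search head's clock is set to UTC. Alternatively, the window can be filtered inside the search; a sketch, with the base search left as a placeholder:

```spl
... your base search ...
| eval hourUTC=tonumber(strftime(_time, "%H"))
| where hourUTC >= 7
```

The strftime approach uses the search head's configured timezone, so if that is not UTC the hour boundary would need adjusting.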
Hi, I want to extract files present in SharePoint into Splunk. I did my research and learned this can be done either through a DB or through the REST API, but those options are not feasible for me. Is there any add-on, apart from the Microsoft 365 App for Splunk, that can extract SharePoint files or SharePoint list data? Or is there any option to extract files from SharePoint without supplying credentials every time? Any suggestions would be great! Thanks.
Hi, I have built an ML model for detecting categorical outliers. The base search for the model covers the last 30 days (the training set). An alert has been scheduled to fire daily when the number of results is greater than 0. For today, the alert takes training data from Aug 19 to Sep 17 and reports any outliers. For tomorrow, I want to confirm: will it retrain the model with data from Aug 20 to Sep 18, or will it keep using the same Aug 19 to Sep 17 data to detect outliers? Kindly share your ideas.
I am trying to compare sales data per day for different locations, indexed from different sources. I have 3 different sources from which events in the format below are getting indexed:

<Date>, <Location>, <Sales>

I want to plot a comparison graph of sales from the different sources for a particular location. Currently I am using union to merge events from the different sources and then timechart to plot the comparison, and I am able to plot 3 bars (one per data source) for each location. But _time here is the event's indexing timestamp, whereas I want to plot against the <Date> field in the event itself. How can I do that? Please suggest.
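A possible sketch, assuming the <Date> field is extracted as Date in a format like 2020-09-18 (source names and the location value are placeholders): overwrite _time with the parsed event date before timechart, so the chart buckets by the business date rather than the index time:

```spl
(source="sourceA" OR source="sourceB" OR source="sourceC") Location="NYC"
| eval _time=strptime(Date, "%Y-%m-%d")
| timechart span=1d sum(Sales) AS Sales BY source
```

The strptime format string must match the actual Date format in the events; if it does not, _time becomes null and those events drop out of the chart.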
I am attempting to work out the frequency of events over the selected timespan in weeks, basically: count of events in the current timespan divided by the number of weeks in the timespan. I can get a count of events for the selected timespan using:

index=mydata | stats count(eval(ishotfix="false")) as hfx | fields hfx

I can get the time-picker span in weeks using (I'm sure this is terrible):

| makeresults | addinfo | eval timepickerSpanWeeks=round(((info_max_time - info_min_time)/60/60/24/7),0) | fields timepickerSpanWeeks

but when I combine them I get no results:

| makeresults | addinfo | eval timepickerSpanWeeks=round(((info_max_time - info_min_time)/60/60/24/7),0) | map search="search index=mydata" | stats count(eval(ishotfix="false")) as hfx | eval rate=round((hfx/timepickerSpanWeeks), 2) | fields rate

Thanks in advance!
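A sketch that sidesteps map entirely (field names taken from the question): addinfo also works on the events of the main search, so both numbers can be computed in one pipeline:

```spl
index=mydata
| addinfo
| stats count(eval(ishotfix="false")) AS hfx,
        min(info_min_time) AS t0, max(info_max_time) AS t1
| eval timepickerSpanWeeks=round((t1 - t0) / 604800, 0)
| eval rate=round(hfx / timepickerSpanWeeks, 2)
| fields rate
```

For spans under half a week the rounding yields 0 and a divide-by-zero, so guarding with something like `eval timepickerSpanWeeks=max(timepickerSpanWeeks, 1)` may be worthwhile.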
Hi, I have a simple multiselect filter, shown below, on my main dashboard.

<input type="multiselect" token="projects" searchWhenChanged="false">
  <label>Projects</label>
  <default>*</default>
  <initialValue>*</initialValue>
  <fieldForLabel>Projects</fieldForLabel>
  <fieldForValue>Projects</fieldForValue>
  <search base="sample">
    <query>| search Organization="$organization$" | stats dc(Projects) AS Total by Projects | fields - Total</query>
  </search>
  <choice value="*">All</choice>
  <prefix>Projects IN (</prefix>
  <suffix>)</suffix>
  <delimiter>,</delimiter>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
</input>

When the dashboard populates and users select projects with the multiselect filter above, it gives them a list of vulnerabilities affecting assets in the selected projects. When users then click one of those vulnerabilities, it takes them to a drilldown dashboard that has some more multiselect filters, including one like the above. What I need is: when users go to the drilldown dashboard, the projects selected on main dashboard A should be transferred and applied to drilldown dashboard B. Thanks in advance.
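One way this is commonly done (the app and dashboard names in the path are assumptions, the token name follows the question): pass the multiselect token to dashboard B through the drilldown URL as a form.* parameter, so that B's own multiselect, if it uses the same token name, is pre-populated with the selection from A:

```xml
<drilldown>
  <link target="_blank">/app/my_app/dashboard_b?form.projects=$form.projects$</link>
</drilldown>
```

Using $form.projects$ (rather than $projects$) carries the raw selected values of the multiselect, including multiple selections, without the prefix/suffix wrapping applied by the input.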