All Topics

Hi. I am trying to find the max value of p90 over a month for one API. The query I use for finding the stats:

<basic splunk query> | search API=API1 | stats p90(processing_time) as 90%_time by API

where processing_time is the field that p90 is calculated from. Can someone help me with a query to find the max value of p90 calculated over a month, so that I can use that value to generate alerts? Any help is greatly appreciated. Thanks.
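One possible approach (a sketch: the base search and field names are taken from the question, while the month time range and per-day bucketing are assumptions; a different span works the same way) is to compute p90 per time bucket and then take the max of those values:

```
<basic splunk query> API=API1 earliest=-1mon@d latest=@d
| bin _time span=1d
| stats p90(processing_time) AS p90_time BY _time API
| stats max(p90_time) AS max_p90 BY API
```

The final max_p90 value could then drive an alert condition such as `| where max_p90 > threshold`.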
Hello my gorgeous people from this amazing community, I've been trying to solve this problem but I couldn't find a way. I have to create a category (a new field named "PROMO") that will have only two values, "YES" and "NO". I work for a hotel chain and I want to know whether a customer has called the company hotline at least once within the month prior to the reservation date. I have the reservation dates and IDs by customer, and I can also pull the calls to the hotline, but I have no workaround for how to classify this new field.

To illustrate, if this is the data for the HOTLINE:

ID_CX     DATE_CALL
BETHANY   2020-04-15T09:05:49-04:00
ALEX      2020-04-21T07:25:15-04:00

and this is the booking data:

ID_CX     DATE_BOOKING                BOOKING_REF
BETHANY   2020-04-26T09:05:49-04:00   TY873
ALEX      2020-03-28T07:25:15-04:00   UJU4

then my expected results will be:

ID_CX     BOOKING_REF   PROMO
BETHANY   TY873         YES
ALEX      UJU4          NO

This is because Bethany in fact called the line 9 days prior to her booking (TY873), but Alex's call was actually AFTER his booking, which is why he gets classified as "NO". I feel like my main challenge is how to write my search so that Splunk looks for call dates (if there are any) BEFORE the booking date, and then decides whether there is a 30-day difference between them; I also have the challenge of writing the code to do the classification. Since this data is unstructured (the information comes from different events, and a customer may or may not have called the line), I don't know the proper way to count this information. Please don't judge, but I have been trying out this code:

|multisearch
    [| search index="hotline" | fields ID_CX DATE_CALL]
    [| search index="bookings" | fields ID_CX DATE_BOOKING BOOKING_REF]
| stats values(DATE_CALL) as HOTLINE by ID_CX DATE_BOOKING
| eval PROMO=if(...)

So basically I have not been able to classify, because when I ask Splunk to subtract the times I get nothing. And I don't know how to code that it has to look for the dates prior to the booking date. Thank you so much to all the people who can help me out on this one, I am truly grateful. Kindly, Cindy
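A sketch of one way to do this (index and field names are taken from the question; the strptime format and the assumption of one booking and at most one call per customer are mine — multiple bookings per customer would need a join or mvexpand per BOOKING_REF): convert both timestamps to epoch seconds so they can be subtracted, then classify with if():

```
(index="hotline") OR (index="bookings")
| eval call_epoch=strptime(DATE_CALL, "%Y-%m-%dT%H:%M:%S%:z")
| eval booking_epoch=strptime(DATE_BOOKING, "%Y-%m-%dT%H:%M:%S%:z")
| stats max(call_epoch) AS call_epoch max(booking_epoch) AS booking_epoch
        values(BOOKING_REF) AS BOOKING_REF BY ID_CX
| eval PROMO=if(isnotnull(call_epoch)
                AND call_epoch < booking_epoch
                AND booking_epoch - call_epoch <= 30*86400, "YES", "NO")
| table ID_CX BOOKING_REF PROMO
```

The key point is that the subtraction only works on epoch numbers, not on the raw ISO strings — which would explain why "I get nothing" when subtracting the times directly.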
Hi All, I've deployed the props below to the Splunk SHC and IDX clusters, but fields are not extracted in Splunk. There are WARN messages in the splunkd logs as follows:

DateParserVerbose - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (50) characters of event. Defaulting to timestamp of previous event (Thu Jan 21 14:02:33 2016).

Can you please help and let me know if I need to make any changes?

[props]
TIME_PREFIX=^
TIME_FORMAT=%d-%b-%Y %I.%M.%S.%6Q %p
MAX_TIMESTAMP_LOOKAHEAD=50
SHOULD_LINEMERGE=false
NO_BINARY_CHECK=true
LINE_BREAKER=([\r\n])\d+\-\w+\-\d+\s+\d+\.\d+\.\d+\.\d+\s+\w+\s
EXTRACT-field1=regex
EXTRACT-field2=regex

Sample events:
29-APR-21 09.44.57.234427 AM ,TEST , 11,Login ,2098856,4
29-APR-21 09.44.56.234428 AM ,TEST , 12,Login ,2098856,4
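One likely cause, judging from the sample events (this is an assumption about the data): the events carry a two-digit year (29-APR-21), but %Y in TIME_FORMAT expects a four-digit year, so the parse fails and Splunk falls back to the previous event's timestamp — exactly the WARN seen. Also, props settings apply per sourcetype (or source/host) stanza, not under a literal [props] stanza. A corrected sketch, with a placeholder sourcetype name:

```
[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %d-%b-%y %I.%M.%S.%6Q %p
MAX_TIMESTAMP_LOOKAHEAD = 50
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
LINE_BREAKER = ([\r\n])\d+\-\w+\-\d+\s+\d+\.\d+\.\d+\.\d+\s+\w+\s
```

Note that the timestamp and line-breaking settings belong on the indexers (or heavy forwarders), while the EXTRACT-* search-time extractions belong on the search heads.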
Hi all, I am new to Splunk admin and doing a PoC on archiving frozen bucket data to an S3 bucket. Can I directly provide the S3 URL in Splunk Web under the index settings, or do I need to provide an archive script via the coldToFrozen settings? Also, while setting the archiving policy, do I need to change only frozenTimePeriodInSecs, or both maxTotalDataSizeMB and frozenTimePeriodInSecs? Please excuse the silly question; as I am new to admin, I am looking for best practice.
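As far as I know, the index settings UI does not accept an S3 URL directly; frozen archiving is configured in indexes.conf via coldToFrozenDir (a local path) or coldToFrozenScript. A sketch with placeholder index name, paths, and retention values:

```
# indexes.conf -- placeholder index name, paths, and values
[my_index]
# Option A: copy frozen buckets to a local dir, then sync that dir to S3 separately
coldToFrozenDir = /opt/splunk/frozen/my_index
# Option B: a custom script that uploads each bucket (e.g. via `aws s3 cp`)
# coldToFrozenScript = "/opt/splunk/bin/python" "/opt/splunk/bin/coldToFrozenS3.py"

# Buckets freeze when EITHER limit is hit, whichever comes first,
# so both settings matter for the effective retention policy:
frozenTimePeriodInSecs = 7776000   # e.g. 90 days
maxTotalDataSizeMB = 500000
```

Because freezing is triggered by whichever of the two limits is reached first, frozenTimePeriodInSecs alone is not enough if the index can grow past maxTotalDataSizeMB within that period.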
Hi there, I am in the process of evaluating Splunk as a possible replacement for our existing data historian. Our users require a stepped line graph for trending purposes, rather than the conventional line graph. Is it possible to produce a stepped line graph visualisation in Splunk? Kind Regards, Paul J.
Which standard logs can be turned off in ES (Enterprise Security) to save licensing? Is there a best-practice list I can work with, please? This would be for the purposes of a SOC team.
Hello all, I have been struggling for a while to create a query that compares events using two different values of a multi-value field.

For starters, we have certain jobs whose status is to be monitored. Below is an example of the query/data:

Query: source=src_name sourcetype=application Job_Name=* JOB_STATUS=started
Output:
Job A
Job B
Job C

Query: source=src_name sourcetype=application Job_Name=* JOB_STATUS=stopped
Output:
Job A
Job C

JOB_STATUS is the multi-value field that gives each job's status once it starts running, i.e. "started". If the job run is successful, the job is stopped, so there will be an event for that job with status "stopped". Otherwise the job remains in the started state, and there will only be a "started" event for that job.

What do I need help with? I need a query that can compare and return the list of jobs that have started but not yet stopped. Example:

source=src_name sourcetype=application {-- return jobs that are started and not yet stopped --}

Required output:
Job B

Please help out!
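A sketch of one common pattern for this (the base search and field names are taken from the question; the mvfind test is one of several ways to express "not stopped"): gather each job's statuses with stats, then keep only the jobs whose status list never includes "stopped":

```
source=src_name sourcetype=application Job_Name=* (JOB_STATUS=started OR JOB_STATUS=stopped)
| stats values(JOB_STATUS) AS statuses BY Job_Name
| where isnull(mvfind(statuses, "^stopped$"))
| table Job_Name
```

values() collapses all of a job's status events into one multivalue field, so a job that only ever reported "started" survives the where clause.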
How do I get the complete size of all logs ingested by Splunk Enterprise & Enterprise Security, including indexes, showing which indexes take the most load?
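For ingest volume per index, a common sketch (it assumes you can search the _internal index on the license master) is to sum license_usage.log:

```
index=_internal source=*license_usage.log type=Usage
| stats sum(b) AS bytes BY idx
| eval GB = round(bytes/1024/1024/1024, 2)
| sort - GB
```

For on-disk size rather than ingested volume, `| dbinspect index=* | stats sum(sizeOnDiskMB) by index` is an alternative view of the same question.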
I'm onboarding sample logs from a txt file to my local Splunk instance, where the timestamp is in a 10-digit format (epoch time). During onboarding I'm applying the following timestamp format:

strptime("timestamp","%m/%d/%y %H:%M:%S")

"timestamp" being the field name in the raw sample in the txt document. But the timestamp is still defaulting to modtime. Any ideas?
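If the raw value really is a 10-digit epoch, the format string %m/%d/%y %H:%M:%S will never match it; the strptime code for epoch seconds is %s. In props.conf that might look like this (the sourcetype name and the TIME_PREFIX regex are assumptions about the file's layout):

```
[my_sample_logs]
TIME_PREFIX = timestamp\D*
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 10
```

When no format matches, Splunk falls back through its other timestamp heuristics and ultimately to the file's modification time, which matches the modtime behaviour described.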
I just recently upgraded our core Splunk infrastructure and our UFs to 8.1.1. I noticed the release notes for 8.1.1 list:

"Remove, suppress any field from Windows Event Log via universal forwarder. Reduce noisy and unnecessary data from Windows logs by filtering on any fields available at the source."

Currently we are using the blacklist option and a few regexes to filter certain Windows events at the UF level. Does anyone know where the documentation for the new feature above is? I'm trying to figure out the proper usage/syntax in the hope that we can eliminate our blacklist regexes. Thanks, Andrew
Hi there, I am a newbie Splunk user trying to get a feel for the system. I need to be able to export data in native Excel file format (preferably .xlsx). To this end I have downloaded and installed an app, Splunk for Excel Export. However, it does not appear to work; either that, or my unfamiliarity with how Splunk works is getting in the way. Could someone point me in the right direction, please? I have to say that I thought exporting data in a native Excel-legible format would be a default feature of the software. Kind Regards, Paul.
Hey all, we want to start analyzing Sysmon information via Splunk (event logs). We did find applications here, but they did not meet our expectations. How do you recommend doing this? Is it possible to analyze Sysmon information in the standard Windows app without major effort? We prefer to use Splunk apps and add-ons supported by Splunk Inc. Thanks, Tankwell
I have created a dashboard with a custom search app and JavaScript in Splunk version 8, using Simple XML. Here is the reference post. I would like to display the fields sidebar when my search results come up. Could anyone please let me know the steps to achieve this?
I may have missed a topic in my search, but is there a way to do the following (I'm also fairly new to Splunk, so be gentle)? We have a server locked down on our network with no outside access, but we can configure internal (server-to-server) access. Is there a way to use a Universal Forwarder on that server to forward to the local on-prem heavy forwarder, and then relay those events to our Splunk Cloud? Thanks in advance
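That chain (UF -> internal HF -> Splunk Cloud) is a supported pattern: the heavy forwarder gets the Splunk Cloud forwarder credentials app, and the locked-down UF only needs to reach the HF. A sketch of the UF side, with a placeholder host and port:

```
# outputs.conf on the locked-down universal forwarder
[tcpout]
defaultGroup = internal_hf

[tcpout:internal_hf]
server = hf.internal.example.com:9997
```

The HF then forwards everything it receives on to Splunk Cloud using its own outputs configuration, so only the HF needs outbound access.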
I have a requirement for reporting on the performance of a specific query in Oracle. The performance of this query serves as a proxy for the performance of a very old Oracle Forms application. I have been looking for a way to gather this data, but so far have not found a way to do it in AppDynamics. The query needs to be executed every five minutes, and performance data needs to be reported for it. So far, I have looked at the SQL extension and Custom Metrics from the database agent. Both of these options would give me the ability to submit the query, but neither seems to gather execution times for the query. I have also looked at writing SQL in Oracle that would return the timing for a query, but have not found something that works. Does anyone have any suggestions?
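If the database agent or SQL extension can run an arbitrary query and report its scalar result as a metric, one workaround is to let Oracle itself report the cumulative timing for the statement from V$SQL. A sketch, with a placeholder SQL_ID and assuming the statement is still cached in the shared pool:

```
-- Average elapsed milliseconds per execution of a specific statement.
-- ELAPSED_TIME in V$SQL is in microseconds; SQL_ID below is a placeholder.
SELECT ROUND(elapsed_time / NULLIF(executions, 0) / 1000) AS avg_elapsed_ms
FROM   v$sql
WHERE  sql_id = 'abcd1234efgh5'
```

Submitted every five minutes as a custom metric query, this reports how the target statement is actually performing, rather than timing the metric query itself.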
We are investigating various logging clients to send to our current log server; the Splunk UF is one. We are in the long-term position of getting Splunk Enterprise as a new logger, but prior to that, as an interim, we were considering the Splunk UF. The documentation seems to point to interoperability with third-party loggers. Is there any licensing that needs to be purchased to use the Splunk UF with a non-Splunk log server, or is it free to download for that use?
Hi all, I have used an app to generate a document that, according to its log, was generated successfully. Is there a way to let a user download such a file from the Splunk system in an easy way (not using an FTP client, etc.)? The file was generated using this app: https://splunkbase.splunk.com/app/1263/#/details. Thanks
I am trying to create a dashboard via the REST API. The query string of one panel contains a '+' character, like this:

<dashboard>
  <row>
    <panel>
      <title>Alert Rate (percent of transactions)</title>
      <viz type="Splunk_ML_Toolkit.OutliersViz">
        <search>
          <query>index=my_index ..... | eval upperBound=(avg + stdev*exact(1.5))</query>
...

I create the dashboard by calling:

curl -k -u https://splunk_domain/servicesNS/admin/search/data/ui/views --data @./test.xml

After the call, the dashboard is created with the expression

index=my_index ..... | eval upperBound=(avg stdev*exact(1.5))

The '+' disappeared. Please help!
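The likely cause: curl's --data sends the body as application/x-www-form-urlencoded, and in that encoding a literal '+' decodes to a space on the server side, so it vanishes from the saved query. Percent-encoding the payload with --data-urlencode should preserve it. A sketch (credentials and hostname are placeholders; for this endpoint the XML is normally posted as the eai:data form field along with a name):

```
curl -k -u admin:changeme \
     https://splunk_domain:8089/servicesNS/admin/search/data/ui/views \
     --data-urlencode "name=my_dashboard" \
     --data-urlencode "eai:data@./test.xml"
```

The `field@filename` form of --data-urlencode reads the file's contents and URL-encodes them, so the '+' arrives as %2B and survives decoding.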
Hi, can someone help me with the regex command for the search below?

| search ="UPN=*T@mail.cloud"

Thanks in advance!
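It isn't fully clear what should match, but assuming the goal is to keep events whose UPN field ends in T@mail.cloud, a sketch with the regex command (the field name and the end-of-string anchor are assumptions):

```
... | regex UPN="T@mail\.cloud$"
```

If UPN is already an extracted field, plain wildcard search syntax such as `UPN="*T@mail.cloud"` may be enough without regex at all.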
I do not have ML experience, but I want to start getting my hands dirty with it. I have some inputs that I would like MLTK to crunch to advise on the best usage of some resources.

The use case: I have some booths where people get attended, and a waiting queue with check-in and check-out for those booths. I want to predict how many booths I should have open to keep queue waiting times below a certain threshold. The passenger volume can vary wildly during the day.

What do I know?
- When someone arrived at the queue (timestamp) and when that person left the waiting queue to use a booth (timestamp)
- The booth capacity (x pax/h)
- My waiting-time threshold (x minutes)

What do I want to know?
- Predict, based on historical data, how many booths should be open to keep waiting times below the threshold, as a way to do capacity planning for the future (e.g. tomorrow at 1pm I should have 5 booths open, based on data from the same day last week, month, or year)
- Based on near-real-time data (waiting times, queue size, open booths, etc.), advise opening or closing booths to achieve optimal usage of resources and avoid crossing the waiting-time threshold (e.g. there's a different volume of people, so to keep waiting times low I should open 2 more booths)
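As a starting sketch before any real modeling (all index and field names here are assumptions, and the 15-minute bucketing and LinearRegression choice are purely illustrative), you could build a training set of waiting time versus arrivals and open booths, then fit an MLTK model on it:

```
index=queue_events
| eval wait_secs = checkout_epoch - checkin_epoch
| bin _time span=15m
| stats avg(wait_secs) AS avg_wait count AS arrivals max(open_booths) AS open_booths BY _time
| fit LinearRegression avg_wait from arrivals open_booths into wait_model
```

Applying the model later with `| apply wait_model` on forecast arrival volumes would then let you look for the smallest open_booths value that keeps the predicted avg_wait under the threshold.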