All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, we are monitoring the DB GoldenGate process through the Splunk UF. Process details for one particular host are intermittently not captured in Splunk, even though the process is up and running on the DB side. There are no other errors present in the logs. Process data randomly stops flowing into Splunk for some time and then starts flowing again on its own. Any troubleshooting suggestions for this?
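A first step when a UF intermittently stops sending data is to check the forwarder's own internal logs and queue metrics around the gap. A minimal sketch, assuming the problem host's internal logs reach the indexers (the host name is a placeholder):

```
index=_internal host=<problem_host> source=*splunkd.log* log_level!=INFO
| stats count by component, log_level

index=_internal host=<problem_host> source=*metrics.log* group=queue blocked=true
```

If the second search returns events during the gaps, the forwarder's output pipeline is blocking rather than the input failing.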
Hello! I would like to select a specific time window with the time picker (let's assume a 1-minute window), and then have a slider allowing me to select a 1-second window within that range. Is such a thing possible? Thanks in advance.
Hi, is there any way to back up/export the regexes saved in our extracted fields? We want to use a new instance as a search head and we don't want to lose the regexes saved in extracted fields. Thanks.
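One way to export search-time extractions from a running instance is via the REST API from a search; a sketch (the endpoint exists, but the exact field names returned are worth verifying on your version):

```
| rest /services/data/props/extractions
| table title eai:acl.app stanza attribute value
| outputcsv extracted_fields_backup.csv
```

The underlying definitions also live in props.conf under the relevant app or user directory, which can simply be copied to the new search head.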
Hi team, I want to collect the source IP from a triggered alert / a search that ran, and then add it to a .txt file exposed on a separate server (https://urlofserver/ipfile.txt). What is the best way to achieve this?
Hi, we are scheduling PDF delivery of a dashboard with the cron schedule (0 7 1-7 * 4), but instead of triggering on the first Thursday of every month, it also triggers on the first and second of the month. How can we make this trigger correctly in Splunk instead of doing it via a script on the server backend?
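In standard cron semantics, when both the day-of-month field (1-7) and the day-of-week field (4) are restricted, the job fires when *either* matches, so this schedule runs on every day from the 1st to the 7th as well as every Thursday. A common workaround (assuming Splunk's scheduler follows standard cron behavior here) is to schedule on days 1-7 only, i.e. 0 7 1-7 * *, and have the search itself bail out unless it is Thursday:

```
... scheduled search ...
| where strftime(now(), "%A") == "Thursday"
```

For PDF delivery of a dashboard (rather than an alert), the weekday guard would need to live in the dashboard's own searches, since the delivery schedule itself has no result condition.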
Hello, could someone tell me what I am required to do to sort this issue out, please? I have inputs going into my HF, however it seems as though my HF index queue is blocked and backing up the rest of my queues.

05-03-2021 09:25:58.559 +0100 INFO Metrics - group=queue, name=indexqueue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1307, largest_size=1500, smallest_size=170

I assume that I need to change the queue's max size in server.conf, but before I go ahead I just want to check what the repercussions are and what I should change. Please note that I am not actually indexing any data; I am purely forwarding it on to a third-party system. Obviously, if I am wrong, please tell me. Any help is greatly appreciated.
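On a forwarding-only HF, a blocked index queue usually means the downstream output (the third-party destination, or the tcpout queue feeding it) is the actual bottleneck; raising the queue size only buys buffer time. If the queue genuinely does need to grow, the setting is a per-queue maxSize in server.conf; a sketch (the value is illustrative, and the exact stanza name is worth checking against the server.conf spec for your version):

```
# server.conf on the heavy forwarder
[queue=indexQueue]
maxSize = 100MB
```

It is also worth confirming in outputs.conf that the downstream receiver is keeping up (e.g. check for tcpout blocked=true lines in metrics.log) before growing queues.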
Hello, I have a question regarding data models. If I remove data from an index, will it be deleted from the data model automatically, or should I rebuild the data model? Thanks.
Use case: detect outliers. An alert is triggered when an outlier is detected, and for now I can send an email containing some information from this trigger. Now I want to build a dashboard that includes the past data and the detected outlier whenever one is found, but I am not sure of the workflow; there is no dashboard option when sending the alert through email. Does that mean I have to save the alert results into a lookup file and schedule the dashboard separately? The dashboard itself can only be scheduled in terms of time. I could do that and then use a where clause to check whether there is an outlier, but if there is one, there is no way to send an alert from the dashboard. How should I do this?
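One common pattern is to have the alert itself persist its results to a lookup, and have the dashboard read from that lookup; a sketch appended to the alert's search (the lookup name is a placeholder):

```
... outlier-detection search ...
| outputlookup append=true detected_outliers.csv
```

A dashboard panel can then show the history with "| inputlookup detected_outliers.csv", while the alert keeps handling notification, so neither part needs to do the other's job.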
Hi team, I have two panels and a dropdown. For example, the dropdown contains two values (A and B). If I choose A from the dropdown, both panels should display on my dashboard. If I choose B, only the first panel should display and the second panel should be hidden. Can you please help me with how to do this hide/show?
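In Simple XML this is usually done with a token that the dropdown sets or unsets, plus a depends attribute on the panel; a minimal sketch (token name and panel contents are placeholders):

```xml
<input type="dropdown" token="choice">
  <label>Select</label>
  <choice value="A">A</choice>
  <choice value="B">B</choice>
  <change>
    <condition value="A">
      <set token="show_second">true</set>
    </condition>
    <condition value="B">
      <unset token="show_second"></unset>
    </condition>
  </change>
</input>

<panel depends="$show_second$">
  <!-- second panel: hidden whenever show_second is unset -->
</panel>
```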
I am new to Splunk and I want to create a data input monitoring TCP packets to/from my laptop. I have already installed Splunk Enterprise and followed the instructions for monitoring a TCP port. I could create the monitor data input, but when I search, nothing is captured. I have already generated this data in Snort and now want to do the same thing in Splunk, but it fails. Do I need to install something further, or do some special configuration? I am really lost in all the Splunk documentation and not sure how to solve this simple problem. I appreciate any help. Regards, Zeynab
Hi, I wish to dedup and consolidate customer details across two cities. E.g. I have two records of the same customer across two cities and I want to consolidate them into one row:

NewCustomerID | City1_CustomerID | City2_CustomerID | City | isActiveCustomer
12345         | 00001            |                  | A    | Y
12345         |                  | 00002            | B    | N

Result:
* Merge City1_CustomerID and City2_CustomerID into one row.
* The City field is populated with the city where the customer is active.
* The ExistsInOtherCities field is "Y" when there are 2 or more records.

NewCustomerID | City1_CustomerID | City2_CustomerID | City | isActiveCustomer | ExistsInOtherCities
12345         | 00001            | 00002            | A    | Y                | Y

I am used to SQL, where I can make temp tables and then join them back, but I am unsure how to do this in Splunk.
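In SPL this kind of merge is usually a stats grouped by the common key rather than a join; a sketch using the field names above:

```
... base search ...
| stats count
        values(City1_CustomerID) as City1_CustomerID
        values(City2_CustomerID) as City2_CustomerID
        values(eval(if(isActiveCustomer="Y", City, null()))) as City
        max(isActiveCustomer) as isActiveCustomer
        by NewCustomerID
| eval ExistsInOtherCities=if(count >= 2, "Y", "N")
| fields - count
```

The max(isActiveCustomer) relies on "Y" sorting after "N", so the merged row shows Y if any record was active; the eval inside values() keeps only the city from the active record.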
Hi team, I am running the query below in Splunk, and the <StartTime> line is not shown for a few TransactionIDs.

Expected output:
============
<StartTime>2021-05-01T16:24:00.9-07:00</StartTime>
<EndTime>2021-05-01T16:24:03.129-07:00</EndTime>
<ExecutionTimeInMs>2229</ExecutionTimeInMs>

Result:
=====
<EndTime>2021-05-01T16:24:03.129-07:00</EndTime>
<ExecutionTimeInMs>2229</ExecutionTimeInMs>

Query:
======
index="eai_prod" sourcetype="eai:tibco:webservices6.5" source="*appnodes/ShipmentOrderCreate*" ":ProcessExecutionStats>"
| rex field=_raw "<CorrelationId>(?P<CorrelationId>.*?)<"
| rex field=_raw "<CustomerNumber>(?P<CustomerNumber>.*?)<"
| rex field=_raw "<TransactionStatus>(?P<TransactionStatus>.*?)<"
| rex field=_raw "<JobId>(?P<JobId>.*?)<"
| rex field=_raw "<CountryCode>(?P<CountryCode>.*?)<"
| rex field=_raw "<StartTime>(?P<StartTime>.*?)<"
| rex field=_raw "<EndTime>(?P<EndTime>.*?)<"
| rex field=_raw "<ExecutionTimeInMs>(?P<ExecutionTimeInMs>.*?)<"
| rename CorrelationId AS TransactionID, TransactionStatus AS Status, ExecutionTimeInMs AS "ExecutionTime(ms)"
| table TransactionID CustomerNumber Status JobId CountryCode StartTime EndTime "ExecutionTime(ms)"
| sort -EndTime
Query A / Dataset A:
sourcetype=aws_cloudtrail eventtime > "2021-01-01T00:00:00Z" AND eventtime < "2021-01-31T23:59:59Z"
| stats values(eventnames) by accesskeyid

Output:
accesskeyid   values(eventnames)
ABCD          ListTopic CreateTopic
EFGH          CreateStream

Query B / Dataset B:
sourcetype=aws_cloudtrail eventtime > "2021-04-01T00:00:00Z" AND eventtime < "2021-04-28T23:59:59Z"
| stats values(eventnames) by accesskeyid

Output:
accesskeyid   values(eventnames)
ABCD          ListTopic ListBuckets CreateTopic
EFGH          CreateStream DeleteStream
DEF           ListTickets

Ask: please provide a query that, when both time ranges are searched together, lists only the eventnames values from Dataset B that do not appear in Dataset A, grouped by accesskeyid:

Output:
accesskeyid   values(eventnames)
ABCD          ListBuckets
EFGH          DeleteStream
DEF           ListTickets

Thanks in advance.
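One way to express this set difference is to search both windows at once, tag each event with its period, and keep only (accesskeyid, eventnames) pairs seen solely in period B; a sketch using the field names above:

```
sourcetype=aws_cloudtrail
    ((eventtime > "2021-01-01T00:00:00Z" AND eventtime < "2021-01-31T23:59:59Z")
     OR (eventtime > "2021-04-01T00:00:00Z" AND eventtime < "2021-04-28T23:59:59Z"))
| eval period=if(eventtime < "2021-02-01T00:00:00Z", "A", "B")
| stats values(period) as periods by accesskeyid, eventnames
| where periods == "B"
| stats values(eventnames) as eventnames by accesskeyid
```

Pairs present in both windows get a multivalue periods of {A, B}, so the where clause only keeps the B-only combinations.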
I have the string below and would like to remove the date and time part; please help with the query: *abc -04/30, 08:14:07 - c
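A sed-style rex can strip the date/time fragment in place; a sketch assuming the text is in a field called myfield (a placeholder) and the timestamp always has the shape -MM/DD, HH:MM:SS:

```
| rex mode=sed field=myfield "s/-\d{2}\/\d{2}, \d{2}:\d{2}:\d{2} //"
```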
I have a list of unstructured logs like the ones below, from which I have to extract certain fields. I tried using the "Extract fields" option to pull these fields but am not getting the expected results. Can someone please help with a way of achieving this through a Splunk search query itself?

Fields to be extracted:
1. myemail@site.com
2. my-pipeline
3. JDBC_06 - Failed to initialize connection pool: com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: ORA-01017: invalid username/password; logon denied

Logs:

2021-05-02 11:21:13,663 [user:*myemail@site.com] [pipeline:my-pipeline (SCH Test Run)/testRun__12a-23b-34c-45d-56d_site.com__myemail@site.com] [runner:] [thread:ProductionPipelineRunnable-testRun__12a-23b-34c-45d-56d_site.com__myemail@site.com-my-pipeline (SCH Test Run)] [stage:] ERROR JdbcSource - Cannot connect to specified database: com.streamsets.pipeline.api.StageException: JDBC_06 - Failed to initialize connection pool: com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: ORA-01017: invalid username/password; logon denied
com.streamsets.pipeline.api.StageException: JDBC_06 - Failed to initialize connection pool: com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: ORA-01017: invalid username/password; logon denied

2021-03-01 04:18:26,910 [user:*myemail@site.com] [pipeline:my-pipeline (SCH Test Run)/testRun__12a-23b-34c-45d-56d_site.com__myemail@site.com] [runner:] [thread:ProductionPipelineRunnable-testRun__12a-23b-34c-45d-56d_site.com__myemail@site.com-my-pipeline (SCH Test Run)] [stage:] ERROR ProductionPipelineRunnable - An exception occurred while running the pipeline, com.streamsets.datacollector.runner.PipelineRuntimeException: CONTAINER_0800 - Can't start pipeline due 1 validation error(s).
First one: JDBC_06 - Failed to initialize connection pool: com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: Listener refused the connection with the following error: ORA-12514, TNS:listener does not currently know of service requested in connect descriptor

2021-03-01 04:43:12,985 [user:*myemail@site.com] [pipeline:my-pipeline (SCH Test Run)/testRun__12a-23b-34c-45d-56d_site.com__myemail@site.com] [runner:] [thread:ProductionPipelineRunnable-testRun__12a-23b-34c-45d-56d_site.com__myemail@site.com-my-pipeline (SCH Test Run)] [stage:] ERROR ProductionPipelineRunnable - An exception occurred while running the pipeline, com.streamsets.datacollector.runner.PipelineRuntimeException: CONTAINER_0800 - Can't start pipeline due 1 validation error(s).
First one: JDBC_06 - Failed to initialize connection pool: com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: IO Error: The Network Adapter could not establish the connection
com.streamsets.datacollector.runner.PipelineRuntimeException: CONTAINER_0800 - Can't start pipeline due 1 validation error(s).
First one: JDBC_06 - Failed to initialize connection pool: com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: IO Error: The Network Adapter could not establish the connection

2021-03-01 05:02:13,113 [user:*myemail@site.com] [pipeline:my-pipeline (SCH Test Run)/testRun__12a-23b-34c-45d-56d_site.com__myemail@site.com] [runner:] [thread:ProductionPipelineRunnable-testRun__12a-23b-34c-45d-56d_site.com__myemail@site.com-my-pipeline (SCH Test Run)] [stage:] ERROR ProductionPipelineRunnable - An exception occurred while running the pipeline, com.streamsets.datacollector.runner.PipelineRuntimeException: CONTAINER_0800 - Can't start pipeline due 1 validation error(s).
First one: JDBC_06 - Failed to initialize connection pool: com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: IO Error: Unknown host specified
com.streamsets.datacollector.runner.PipelineRuntimeException: CONTAINER_0800 - Can't start pipeline due 1 validation error(s).
First one: JDBC_06 - Failed to initialize connection pool: com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: IO Error: Unknown host specified

Thank you in advance.
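The three fields can usually be pulled with rex expressions; a sketch based on the sample lines above (the patterns are assumptions about the log layout, not a general parser):

```
| rex "\[user:\*?(?<user_email>[^\]]+)\]"
| rex "\[pipeline:(?<pipeline_name>[^\s(]+)"
| rex "(?<error_detail>JDBC_06 - [^\r\n]+)"
| table user_email pipeline_name error_detail
```

The first pattern strips the leading asterisk from the user value, and the second stops at the first space or parenthesis so only the pipeline name itself is captured.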
Hi, does anyone know how to move user config, such as that stored under /etc/users/../local, from an on-prem search head into the cloud? Many of our users have user preferences, saved searches, macros, and props.conf info stored there. How do we get this across to the cloud SH? It would also be nice to retain report history (/etc/users/../history/servername.csv) if possible. Thanks, Keith
I am trying to set up Splunk Security Essentials in a distributed environment on a SHC. I've installed v3.3.2 and the app was successfully deployed to our search heads; however, I am having an issue with populating the data inventory. Periodically, when we enter SPL for a specific data source and refresh the page, the data input earlier simply disappears. While I seem to be able to get around this by adding the input and then staying on the page for an extended period of time, all of the data sources I've added are stuck on the "Analyzing CIM and Event Size" status. I am testing on data sources which have the proper TAs installed for CIM parsing (e.g. Windows events), but none of my sources are able to get past this stage. Has anyone ever encountered this before, or have any suggestions? I don't see any errors in the internal logs coming from the app.
How do I create a complete list of data sources, including names and IP addresses?
Is there any way to get a complete list of all apps and ES content using one search, or do you have to run the search on individual Splunk servers?
How do I run a complete Splunk inventory of Splunk servers (SHs, IDXs, FWs, HFs, UFs), including the server name, IPs, and computer/HW names? If you ask why: because I inherited a deployment that was done without any deployment plans. Is there an app? How do I go about accomplishing such a task?
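One starting point is the REST API across your search peers for full Splunk instances, plus the forwarder connection data in _internal; a sketch (role and field names can differ by version, so worth checking against your environment):

```
| rest splunk_server=* /services/server/info
| table splunk_server host_fqdn version server_roles os_name

index=_internal sourcetype=splunkd group=tcpin_connections
| stats latest(version) as version latest(sourceIp) as ip by hostname
```

The first search inventories every instance the search head can reach over REST; the second lists forwarders (including UFs) from the receiving indexers' metrics, which covers hosts that REST cannot query directly.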