All Topics

I am parsing logs using Splunk and there are two types of logs: 1. API endpoint info and user ID. 2. Logs which contain a specific error that I am interested in (let's say the error is ERROR_FAIL). I need all logs for a particular user hitting the endpoint and getting ERROR_FAIL. Both log types share the same request ID for one instance of an API call. So first I want to filter the request IDs from type 1, which gives me the request IDs for the API and user I am interested in, and based on those request IDs I want to see all the logs that failed because of the error (ERROR_FAIL). If I use the following query, I get all the request IDs for the user and API:

index=app-Prod sourcetype=prod-app-logs "api/rest/v1/entity" "123" | table xrid

Now if I add this as a sub-search, it does not work. Final query:

index=app-Prod sourcetype=prod-app-logs [search index=app-Prod sourcetype=prod-app-logs "api/rest/v1/entity" "123" | table xrid] "ERROR_FAIL" | table xrid

This returns nothing, even though there are logs where user 123 hits "api/rest/v1/entity" and gets "ERROR_FAIL". How can I make my query correct?
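A minimal sketch of one common fix, assuming xrid is a search-time extracted field (index, sourcetype, and field names are taken from the question): a subsearch ending in | table xrid expands into xrid="value" conditions, which only match events where the outer search can extract an xrid field. Renaming the field to search turns the values into plain keyword terms instead, so they match anywhere in the raw ERROR_FAIL events:

index=app-Prod sourcetype=prod-app-logs "ERROR_FAIL"
    [search index=app-Prod sourcetype=prod-app-logs "api/rest/v1/entity" "123"
    | dedup xrid
    | fields xrid
    | rename xrid as search]
| table xrid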
I have a value that could be N/A or a number. The issue is that when it is a number, Splunk is not picking it up as one, so I have to run the "convert" command. But I need to check first whether it is N/A. Below is what I have, but it does not work - any ideas?

| eval T_CpuPerc = if(T_CpuPerc="N/A",T_CpuPerc,convert num(T_CpuPerc) )
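A minimal sketch of one possible fix (field name taken from the question): convert is a standalone search command and cannot be called inside eval, but eval's tonumber() function performs the same conversion inline:

| eval T_CpuPerc = if(T_CpuPerc="N/A", T_CpuPerc, tonumber(T_CpuPerc))

Note the field will then hold a mix of the string "N/A" and numeric values, which can affect later sorting or stats.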
Hi there, I have a scripted lookup that returns a field which contains text data. What is really intriguing is that if the returned data contains "metadata" in it, then the text is HTML-encoded (partially at least), and it is not when the keyword "metadata" is absent. Is there any logical explanation for that? How can I remove this HTML encoding?

| stats count | eval curious = "jambon: de -> bayonne" | fields curious

This results in a single field containing "jambon: de -> bayonne", as expected.

| stats count | eval curious = "metadata: de -> bayonne" | fields curious

This results in a single field containing "metadata: de -&gt; bayonne", which is not expected; why is the ">" HTML-encoded?! I thought it was related to the fact that "metadata" is also a Splunk command, but after a few tries with "search", "metasearch", "mcollect", etc., none of those trigger this behaviour. Is this a weird bug? I'm on Splunk 8.2.3 - can you reproduce it, and on which versions? Thanks,
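Not an explanation for the behaviour, but a sketch of one possible workaround to strip the encoding after the fact, assuming only the common entities appear (the replace() chain would need extending for others):

| eval curious = replace(replace(replace(curious, "&gt;", ">"), "&lt;", "<"), "&amp;", "&")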
Hi, I'd like to properly declare my indexes on the search head layer, as suggested in the docs. All my indexes are declared through the indexer cluster manager node and are available. I could not find the right page on docs.splunk.com or in the KB that explains how I'm supposed to declare my indexes on the search layer. Each index is declared in 2 files on the indexer cluster:
- index stanza with volume name (distributed in the bundle from the manager node)
- volume definition (identical on each indexer, for key encryption, in system/local)
I tried to copy the file with only the index stanzas to my search head and ran into a wall, as the volume does not exist on the instance (which is true). Does the file need to be emptied of some properties? Or updated in some way? Could you point me to the right documentation page? Of course I googled my question, and unfortunately couldn't find any satisfactory answer. Thanks! Ema
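A sketch of one commonly used approach, assuming the search head itself never stores index data: keep only the index stanzas and replace the volume: path references with plain $SPLUNK_DB paths, so the stanzas resolve even though the volumes don't exist there (index name is a placeholder):

# indexes.conf on the search head
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb

On a search head the declaration mainly drives UI typeahead and role-based access control; the searches themselves are still served by the indexers.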
I have the below log and I'm using the following regex to extract the fields "date", "process", "step", "user", and "log level":

rex "^(?<Date>\d+-\d+-\d+\s+\d+:\d+:\d+)\s+\[[^\]]*\]\s+\[(?<Process>\[[^\]]+\][^\]]+)\]\s+\[(?<Step>[^\]]+)\]\s+\[(?<User>[^\]]+)\]\s+[^\[]+\[(?<Log_level>[^\]]+)

When the log is like the first entry, the data is extracted without an issue, but when it is like the remaining entries nothing is extracted. How can I solve this?

2021-09-28 10:20:27 [machine-run-76416-hit-644640-step-12470][Business Process Name][Business Process Step Name][Bot Users] MetadataStorage [ERROR] Boot failed
2022-04-04 23:30:16 [http-nio-127.0.0.1-7080-exec-3] [] [] [] DataBaseChecker [DEBUG] Checking MySQL ...
2022-04-04 23:30:16 [http-nio-127.0.0.1-7080-exec-3] [] [] [] DatabaseVersionChecker [INFO] Database is up to date.
2022-04-04 23:30:16 [http-nio-127.0.0.1-7080-exec-3] [] [] [] DataBaseChecker [DEBUG] Checking PostgreSQL ...
2022-04-04 23:30:16 [http-nio-127.0.0.1-7080-exec-3] [] [] [] OcrHealthChecker [DEBUG] Checking OCR ...
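A sketch of one possible regex that tolerates the empty bracket groups in the failing entries (the Thread capture is a name added here for the first, previously unnamed, bracket group; everything else follows the question):

rex "^(?<Date>\d+-\d+-\d+\s+\d+:\d+:\d+)\s+\[(?<Thread>[^\]]*)\]\s*\[(?<Process>[^\]]*)\]\s*\[(?<Step>[^\]]*)\]\s*\[(?<User>[^\]]*)\]\s+\w+\s+\[(?<Log_level>[^\]]+)\]"

The key changes: [^\]]* instead of [^\]]+ so a group may be empty, and \s* between the groups so both the [a][b] and [a] [b] spacing variants match.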
Is there a way to make Dashboard Studio multilingual? For Classic Dashboard, I can use messages.po to make it multilingual. However, this method does not seem to be available for Dashboard Studio.
How do I use a regular expression to match a pattern in a log file? I am using the LogFile extension. For example, from the following line, specific to a "Response" message:

{"remoteHost":"epdg","epoch":1648084954231,"command":"Response","Result-Code",{"value":1001}},"statusCode":"2001","status":"FOO ","timestamp":"2022-03-24 03:22:34.231"}

Can I use a regular expression to find statusCode NOT 2001, indicating a failure? If so, what regex should I use? I can't seem to find documentation.
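A sketch of one possible pattern, assuming the extension accepts a standard PCRE/Java-style regex (a negative lookahead rejects the success code while still requiring a numeric value):

"statusCode":"(?!2001")\d+

The (?!2001") assertion makes the pattern match only when the quoted value is not exactly 2001; keeping the closing quote inside the lookahead prevents a value such as 20011 from being wrongly rejected.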
Hi, Is there any way to troubleshoot a manual data collector configuration? For example, I configure the method invocation or SQL data collector, but I cannot select the custom fields in a search. What is usually the reason for this? If I have configured the data collector incorrectly, how can I tell? Thanks
In our environment there are 2 HFs which send logs from different sources to the Splunk indexers and to an external tool, QRadar. My question: suppose we search on the search head for Windows events for a specific timestamp and it shows 20 events - is it true that QRadar will also have received 20 events for the same timestamp? I tried the comparison and there seems to be a difference in the numbers, so I want to confirm how this works. Can you share any docs that say whether the counts should be the same or not?
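A sketch of one way to get a comparable count on the Splunk side, assuming the Windows events land in an index named wineventlog (index name and time range are placeholders):

index=wineventlog earliest=-1h@h latest=@h | stats count

Even with identical inputs the two counts can legitimately diverge: per-destination routing filters, queue drops on one path, or differing timestamp interpretation can all shift events in or out of the window being compared.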
Hello, I'm using my Splunk.com username and password to log in. I've also tried my email and password with no luck, even after resetting my password.
Hello Fellow Splunk Admins, Not sure if this is the right place to ask this; if it is not, please direct me to the right place. If it is, your help is appreciated. Is anyone aware of whether Splunk Enterprise is affected by the recent Spring Boot vulnerability? Do we need to watch out for any specific app which may have a Spring Framework dependency? And if we do, is there a way to detect such a vulnerability in our environment? Thanks & Regards, Arijit
Hello, The reason for my question is that I cannot install the database agent; when I run the following command I get no response:

nohup java -Dappdynamics.agent.maxMetrics=300000 -Ddbagent.name=DBMon-Lab-Agent -jar db-agent.jar &

Attached is a screenshot. Thanks for the help!
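A sketch of one way to see what the agent is actually doing, since nohup detaches the process and hides its output (file names are placeholders): redirect stdout/stderr to a file and follow it.

nohup java -Dappdynamics.agent.maxMetrics=300000 -Ddbagent.name=DBMon-Lab-Agent -jar db-agent.jar > db-agent.out 2>&1 &
tail -f db-agent.out

If nothing is written at all, ps aux | grep db-agent (or jps -l) will confirm whether the JVM started in the first place.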
Hi all, Has anyone heard about any advisory from Splunk on the Spring4Shell vulnerability? Regards, Kulwinder @isoutamo @PickleRick
Hi teams, I am a newbie to Splunk. I have log messages like this:

4/5/22 6:03:22.697 PM
2022-04-05T10:03:22.697Z 802cf235-b8d6-454e-bb1a-25d16f6b5f21 INFO 802cf235-b8d6-454e-bb1a-25d16f6b5f21 INFO: Insert batch 0/6 END RequestId: 802cf235-b8d6-454e-bb1a-25d16f6b5f21 REPORT RequestId: 802cf235-b8d6-454e-bb1a-25d16f6b5f21 Duration: 601.44 ms Billed Duration: 602 ms Memory Size: 1024 MB Max Memory Used: 97 MB

I want to get the Max Memory Used value from each message and create a time chart that shows the Max Memory Used value and its average. Can anyone help me with this?
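A sketch of one possible query, assuming the messages sit in a single index and sourcetype (placeholders below) and the text "Max Memory Used: N MB" appears verbatim in the raw event:

index=my_index sourcetype=lambda_logs "Max Memory Used"
| rex "Max Memory Used: (?<max_mem_mb>\d+) MB"
| timechart span=5m max(max_mem_mb) AS max_memory_used avg(max_mem_mb) AS avg_memory_used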
Hi, I want to make an alert in Splunk; for example: if _raw > 10, raise an alert. What is the easiest way to make an alert? Can I do it within the search command? Play a WAV file? Play it through the browser? A Python script?
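A sketch of the usual approach, assuming the number to test can be extracted from the raw event (index name and rex pattern are placeholders): write a search that returns results only when the condition holds, then use Save As > Alert and trigger when the number of results is greater than 0.

index=my_index
| rex "(?<value>\d+)"
| where value > 10

Sound is not a built-in alert action; out of the box an alert can send email, run a script or webhook, or appear under Triggered Alerts, so playing a WAV file would need a custom alert action or an external script watching for the alert.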
Hi, I have an index in which I collect a lot of data, approximately 40 GB/day. In indexes.conf, I guess I've made a mistake and configured: maxDataSize = auto. Now it looks like I'm losing data older than 3 months (roughly), and I guess it's due to this parameter. In the documentation (I should have read it before!), I can see for maxDataSize: "You should use "auto_high_volume" for high-volume indexes ... A "high volume index" would typically be considered one that gets over 10GB of data per day."
1/ Is it possible to change this parameter for an existing index? Obviously, given the volume I want to ingest, "auto_high_volume" is more appropriate (==> "maxDataSize = auto_high_volume" in indexes.conf).
2/ Is there any other reason why I am losing data?
Thanks for your help! David
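A sketch of the relevant stanza (index name and values are placeholders). Retention is usually governed by maxTotalDataSizeMB (default 500000 MB, i.e. ~500 GB) and frozenTimePeriodInSecs rather than by maxDataSize, which only sets the size of individual buckets, so at 40 GB/day the total-size cap is the more likely culprit:

# indexes.conf
[my_big_index]
maxDataSize = auto_high_volume
maxTotalDataSizeMB = 4000000
frozenTimePeriodInSecs = 15552000

maxDataSize can be changed on an existing index; it takes effect for newly created buckets after a restart.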
Greetings, We would like to segregate a couple of our assets and forward their data to other SIEM instances alongside our current full Splunk setup. Is it possible to send the same logs from the assets' respective UFs and HFs to other SIEM solutions instead of the Splunk indexers? If possible, are there any articles and documentation that specify how the log is transferred and what steps need to be accomplished in order to achieve the final goal? Thanks, Best Regards,
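A sketch of one possible outputs.conf on a heavy forwarder, assuming the third-party SIEM accepts syslog (group names, hosts, and ports are placeholders); routing only selected hosts or sourcetypes additionally needs props/transforms with _TCP_ROUTING or _SYSLOG_ROUTING:

[tcpout]
defaultGroup = splunk_indexers

[tcpout:splunk_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

[syslog:other_siem]
server = siem.example.com:514
type = tcp

The relevant chapter in the Forwarding Data manual is "Forward data to third-party systems"; note that syslog output needs a heavy forwarder, since the data must be parsed first.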
Hi Community, I am having a weird issue with Splunk Enterprise. I had set up a universal forwarder to execute a script that gives me the list of all the different processes in the Linux environment. All of a sudden the script stopped producing results from 12 AM and the panel didn't work, but it started working again by itself after 3 days. This happened in both the test and production setups. Is there something that should be taken care of when using scripts on a universal forwarder, or is there some reason for this unusual behaviour? Regards, Pravin
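For reference, a minimal scripted-input stanza of the kind described (script path, interval, sourcetype, and index are placeholders); when results stop, the usual suspects are the script's permissions, its runtime exceeding the interval, and errors in $SPLUNK_HOME/var/log/splunk/splunkd.log on the forwarder, where scripted-input failures are logged under the ExecProcessor component:

# inputs.conf on the universal forwarder
[script://./bin/list_processes.sh]
interval = 300
sourcetype = linux_processes
index = os
disabled = 0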
Hello, I recently upgraded the "Splunk Add-on for Microsoft Office 365" on my Splunk heavy forwarder to version 3.0.0, running on Splunk 8.1.4. I configured the "Cloud App Security" integration and the input for "Cloud Application Security Alerts". But when running the inputs, I think this is bugged: the job is scheduled to download the alerts every 5 minutes, and every time it runs it downloads the alerts from the beginning (i.e. from day 1 that the platform was set up) and only the first 100 alerts. Moreover, events are not indexed properly, since the timestamp that is applied is the time at which the job runs, not the timestamp included in the event. Has anyone else experienced the same issue? Am I doing anything wrong? Thanks in advance
Hello all, I have a lookup table which contains a list of URLs we want to search in Splunk, but instead of searching for the specific URLs in the list, we want to match when a URL contains a string from the list. Example: the lookup has the URL blog.example.com, and I also want the search to find URLs which contain "blog.example.com". In a plain search it would be url=*blog.example.com, but how can I apply this search behaviour when using a lookup? Thanks for any help.
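A sketch of one possible approach, assuming the lookup is named url_watchlist with a column url (names are placeholders): let a subsearch turn each lookup row into a wildcarded field condition.

index=proxy
    [| inputlookup url_watchlist
     | eval url = "*" . url . "*"
     | fields url]
| table _time url

The subsearch expands to (url="*blog.example.com*") OR ..., and wildcards are honoured in field=value searches. An alternative is a lookup definition with match_type = WILDCARD(url) in transforms.conf, storing the wildcards in the CSV itself.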