All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hey Splunk People, I have a tricky problem. I want to do the following in one search:
1. Search DHCP logs for a MAC address and return all IP addresses that were assigned and the time range that each IP address was assigned to that MAC address (I have this part figured out).
2. Search a different index to get the domains each IP reached out to, but only for the time range that each IP address was assigned to that MAC address.
3. Make a table of each IP address, the time range it was assigned to the MAC address, and a list of all domains accessed during that time range.
Can anyone figure this out? Thanks, Dan
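One possible way to stitch this together in a single search is the map command, which re-runs a subsearch once per IP/lease row. This is only a minimal sketch: the index names (dhcp, proxy) and field names (mac, assigned_ip, src_ip, domain) are assumptions and would need to be replaced with whatever your data actually uses, and map is limited by maxsearches.

index=dhcp mac="00:11:22:33:44:55"
| stats earliest(_time) as lease_start latest(_time) as lease_end by assigned_ip
| map maxsearches=50 search="search index=proxy earliest=$lease_start$ latest=$lease_end$ src_ip=$assigned_ip$ | stats values(domain) as domains | eval assigned_ip=\"$assigned_ip$\", lease_start=\"$lease_start$\", lease_end=\"$lease_end$\""
| table assigned_ip lease_start lease_end domains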
Hi splunkers, I am currently looking for a way to format the dashboard description as a bulleted list so it is more readable for users. Is there a way to do this in a default installation of version 8.1.3? Thank you

<description>1) Select one from the "Select to View" dropdown button. 2) Input the desired month to be generated by placing what date to start from in the "From MM/DD/YYYY" input and the end period in the "To MM/DD/YYYY" input similar to the default values 3) Click the "Submit" button to generate the report (Note: Submit button must be used whenever selecting another option from "Select to View" dropdown or when a new Date Range is placed in the text input)</description>
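As far as I know the <description> element renders plain text only, so bullets won't work there. One workaround (a sketch only, not verified on 8.1.3 specifically) is to put the instructions in an <html> panel at the top of the dashboard instead:

<row>
  <panel>
    <html>
      <ul>
        <li>Select one option from the "Select to View" dropdown.</li>
        <li>Enter the start date in the "From MM/DD/YYYY" input and the end date in the "To MM/DD/YYYY" input.</li>
        <li>Click "Submit" to generate the report. Submit must be used again whenever the selection or date range changes.</li>
      </ul>
    </html>
  </panel>
</row>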
Looking for a library of .conf files with sourcetypes for vRealize Log Insight (aka VRLI, aka VMware vRealize Log Insight). Help me Ob
Hi, I have a metrics index that has multiple metrics coming into it. I know I can run a command like the one below, but I have over 20 different types of metrics and they might change over time. I also know I can't run count(*), because you have to specify a metric name.

| mstats count("mx.process.cpu.utilization") as count WHERE "index"="murex_metrics" span=10s
| stats count

Then I tried the following; however, if data points are identical it only gives a unique count, not the correct count.

| mpreview index=murex_metrics
| stats count

So is there any command that will quickly give me a count of everything in a metric index?
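One option to sketch out, assuming the older metric_name="*" / _value style of mstats is still available on your version, is to split the count by metric_name so new metrics are picked up automatically, then total the column:

| mstats count(_value) as datapoint_count WHERE index=murex_metrics AND metric_name="*" BY metric_name
| addcoltotals labelfield=metric_name label=TOTAL

Alternatively, | mcatalog values(metric_name) WHERE index=murex_metrics at least lists whatever metric names currently exist, without hard-coding them.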
Hi, currently I have a left panel and a right panel on my dashboard. The left panel is a list of links to Dashboards A, B, and C, and the right panel is the main dashboard. Is it possible to click on, say, the Dashboard B link in the left panel and have the right panel refresh and display Dashboard B? I would appreciate any code example you can share with me. Thank you.
Hello Team, We would like to understand the approach and the licensing requirements for installing Splunk ES on clustered indexers and search heads. Will we need an identical license on both clusters? Regards, Vikram Chabra Vikram@Metmox.com
Hi all, can we see the past readings of a single value graph over a time range? For example, if at this moment the single value shows 40, then after 10 seconds it becomes 50 and then 30, can we see all these points in a timechart or some other visualization? Or is it possible to continuously capture that specific value from the single value chart, using something like a token, and append it to some other graph?
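One rough idea, assuming the single value panel is driven by a search you can reuse: run the same base search as a timechart in a separate panel, which shows every reading over the selected time range rather than just the latest one. The index, sourcetype, and field names below are placeholders.

index=your_index sourcetype=your_sourcetype
| timechart span=10s latest(your_metric) as value

A single value visualization fed a timechart result like this also shows a sparkline and trend of recent readings on its own.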
Hi, I have a predefined scheme: the original object has id = 123, and a child object has id = mother id + suffix, e.g. 12356. I have a CSV file in Lookups like this:

Id      Type
12312   adult
12345   children
12367   adult
12398   adult
12368   children
123985  elder
1239647 elder

How can I search for all Ids belonging to each type of an object, e.g. type = adult or type = children, belonging to object id = 123?

Thanks in advance!
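One possible search, as a sketch (the lookup file name your_lookup.csv is a placeholder): filter the lookup rows whose Id starts with the mother id, then group them by Type.

| inputlookup your_lookup.csv
| where like(Id, "123%")
| stats values(Id) as Ids by Type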
Hello there. I'm having a performance problem. I have a "central UF" which is supposed to ingest MessageTracking logs from several Exchange servers. As you can guess from the "several Exchange servers" part, the logs are shared over CIFS shares (the hosts are in the same domain; to make things more complicated to debug, only the service account the UF runs with has access to those shares but my administrator account doesn't :-)).

Anyway, since there are several Exchange instances and each of the directories has quite a lot of files, the UF sometimes gets "clogged" and - especially after restart - needs a lot of time to check all the logfiles, decide that it doesn't need to ingest most of them, and start forwarding real data. To make things more annoying, since the monitor inputs are the same ones responsible for ingesting the forwarder's own logs, until this process completes I don't even have _internal entries from this host and have to check the physical log files on the forwarder machine to do any debugging or monitoring. The Windows events, on the other hand, get forwarded right from the forwarder restart.

So I'm wondering whether I can do anything to improve the efficiency of this ingestion process. I know that the "officially recommended" way would be to install forwarders on each of the Exchange servers and ingest the files straight from there, but due to organizational limitations that's out of the question (at least at the moment). So I'm stuck with just this one UF. I already raised thruput, but judging from metrics.log it's not an issue of output throttling and queue blocking. I raised ingestion pipelines to 2 and my descriptors limit is set at 2000 at the moment.

The typical single directory monitor input definition looks something like this:

[monitor://\\host1\mtrack$\]
disabled = 0
whitelist = \.LOG$
host = host1
sourcetype = MSExchange:2013:MessageTracking
index = exchange
ignoreOlderThan = 3d
_meta = site::site1

And I have around 14, maybe 16 of those to monitor. Which means that when I do splunk list inputstatus I'm getting around 500k files (most of them get ignored, but they have to be checked first for modification time and possibly for CRC)! I think I will have to tell the customer that it's simply beyond the performance limits of any machine (especially when doing all this file stating over the network), but I was wondering if there are any tweaks I could apply even in this situation.
Hi, I need to create some monitoring and alerts based on high response time of my landing page. The thing is there are always some blips, so I want to rule those out and only trigger notifications when there is a consistently high response time for a period of time, say 20 or 30 minutes. How can I write a query like that? I have written a very generic query which gives me the average and 90th percentile response time for every 5 minutes, like below, but I want to trigger the alert only when there are consistently high response times. Let me know if anyone has any suggestions.

index=myapp_prod sourcetype=ssl_access_combined requested_content="/myapp/products*"
| eval responseTime = responseTime/1000000
| timechart span=5m avg(responseTime) as AverageResponseTime p90(responseTime) as 90thPercentile

As an example - let's say I want to run the alert every 30 minutes and check whether there are consistently high response times in the last 30 minutes or 1 hour; if so, trigger the alert to send out notifications. Any help is appreciated. Best Regards, Sha
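One way to sketch this, scheduled every 30 minutes over the last 30 minutes: mark each 5-minute bucket as high or not, and only trigger when every bucket exceeded the threshold (the 2-second threshold here is just a placeholder).

index=myapp_prod sourcetype=ssl_access_combined requested_content="/myapp/products*"
| eval responseTime = responseTime/1000000
| timechart span=5m avg(responseTime) as AverageResponseTime
| eval isHigh = if(AverageResponseTime > 2, 1, 0)
| stats sum(isHigh) as highBuckets count as totalBuckets
| where totalBuckets > 0 AND highBuckets = totalBuckets

With an alert condition of "number of results > 0", this only fires when the whole window was consistently slow.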
Hi guys, when I use the trafficlight dashboard, the image stays all the way to the left of the panel. How do I center it?
Hi everyone. Currently I have a question about EventId: I cannot install EventId on Splunk 8.25.
Hi, my requirement is to pull more than 10 million records from a database and index them in Splunk. I want to understand whether this will badly affect the performance of the stack, and whether there will be any infrastructure-related issues. How can we index such a huge volume of data safely into Splunk?
Hello Everyone, I have a set of data with a lot of HTTP requests, from which I want to extract only the Basic Authorization tokens, like this one:

header=Authorization=Basic MmQyXXXXXXXXNDVjOTlkNTJlM2M0ZjA1MzVjYTI4ZGZkMzJmNTBlMjk=

A sample event:

2022-05-13 10:07:07,772 INFO [io.undertow.request.dump] (default task-13778) ----------------------------REQUEST---------------------------
URI=/auth/realms/Public/protocol/openid-connect/token
characterEncoding=null
contentLength=29
contentType=[application/x-www-form-urlencoded;charset=UTF-8]
header=Accept=application/json, application/x-www-form-urlencoded
header=Cache-Control=no-cache
header=Pragma=no-cache
header=User-Agent=Java/11.0.4
header=Connection=keep-alive
header=Authorization=Basic MmQyXXXXXNDVjOTlkNTJlM2M0ZjA1MzVjYTI4ZGZkMzJmNTBlMjk=
header=Content-Type=application/x-www-form-urlencoded;charset=UTF-8
header=Content-Length=29

I tried with the Field Extractor wizard, but with no luck. Can you please advise how to achieve this?
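A rex extraction along these lines should work as a starting point (the field name basic_auth_token is arbitrary):

your base search here
| rex "header=Authorization=Basic\s+(?<basic_auth_token>\S+)"
| table _time basic_auth_token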
Hi - I have a list of events, most of which pair up nicely as 'startswith' (A) and 'endswith' (B) to make a desired transaction, but in the list there is an extra unexpected 'startswith' event and an extra unexpected 'endswith' event. The extra unexpected events are shown in the list below in bold and underlined.

A B A B A B A A B A B A B A B A B A B B A B A B A B

Because there is one of each, they match together, so they are not orphans and they make one very long false transaction with a large number of valid transactions nested inside it. I thought limiting maxevents to 2 would help, but it didn't, and because a valid transaction *could* have a long duration I don't want to use maxspan. Is there some way to ignore events which are out of sequence? I appreciate that choosing *which* of the adjacent events should be ignored might be problematic, i.e. it could be the second rather than the first 'A', but I am first interested in what is possible.
We have a service for which a Splunk dashboard is in place. Right now the dashboard is limited to populating from 3 months of data due to the log retention policy, but there is now a business requirement that the dashboard be populated from all historical data. So I want to understand the most efficient and economical way to extend log retention indefinitely in Splunk.
Hi All, I want to view all the dashboards which we have configured in Splunk. I am trying the command below, but it's not giving me the expected output. Could anyone please help me resolve this issue?

| rest /services/data/ui/views | search isDashboard=1
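For comparison, a variant that has generally worked for listing dashboards across apps (what you get back still depends on your role's permissions) uses the servicesNS form of the endpoint:

| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search isDashboard=1 isVisible=1
| table title label eai:acl.app eai:acl.owner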
Hello everyone, I need help from the community. I want to make a search that will find two or more events from the same host, for example user=David action=success AND user=Mike, but these events must all be on one host. Thank you in advance.
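A sketch of one approach (the index name and exact field values are placeholders): pull both kinds of events, then group by host and keep only hosts where both users appear.

index=your_index ((user="David" action="success") OR user="Mike")
| stats values(user) as users values(action) as actions dc(user) as distinct_users by host
| where distinct_users >= 2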
I have a query that calculates a certain value when a particular condition is met:

| eval Other_Failures = Total_requests - (OpFail + FuncFail)
| where httpcode!=200

But I'm not getting any events from this. How can I correct this?
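Usually this happens when Total_requests, OpFail, and FuncFail don't actually exist on the events at that point (so the eval returns null), or when httpcode is missing from most events, in which case where httpcode!=200 filters everything out. A sketch of one way to restructure it, computing the aggregates first with stats (the failure_type field and its values are made-up placeholders for however you classify failures):

index=your_index sourcetype=your_sourcetype httpcode!=200
| stats count as Total_requests count(eval(failure_type="operational")) as OpFail count(eval(failure_type="functional")) as FuncFail
| eval Other_Failures = Total_requests - (OpFail + FuncFail)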
Hello all, The transaction command is not correctly grouping the events in query 1). The expected result is given by query 2). In the end, I need to run the query without the user_id filter, which I used just for results validation. Please, help!

1)

index="myindex" system="mysystem" url="https://myurl/"
| fields _raw, userId, eventDate
| rex field=_raw "(?<session_id_key_value>x-sessionid:[^;]*)"
| eval fields=split(session_id_key_value, ":")
| eval session_id=mvindex(fields, 1)
| rex field=_raw "(?<original_url>X-Original-URL:[^;]*)"
| eval fields=split(original_url, ":")
| eval original_url=mvindex(fields, 1)
| where isnotnull(session_id) AND session_id != "" AND isnotnull(userId) AND userId != ""
| rename userId as user_id
| transaction session_id maxevents=150 keepevicted=true mvlist=true
| where user_id="123456"
| table user_id, session_id, eventcount, duration, eventDate, original_url

Result: 4 events
Eventcounts: 15 (session_id: 123), 3 (session_id: 345), 4 (session_id: 345), 14 (session_id: 345)
eventDates: 04/30/2022 18:57:37 - 04/30/2022 18:57:43, 04/26/2022 20:21:23 - 04/26/2022 20:21:24, 04/26/2022 20:12:04 - 04/26/2022 20:15:43, 04/26/2022 20:01:30 - 04/26/2022 20:01:39

2)

index="myindex" system="mysystem" url="https://myurl/" userId="123456"
| fields _raw, userId, eventDate
| rex field=_raw "(?<session_id_key_value>x-sessionid:[^;]*)"
| eval fields=split(session_id_key_value, ":")
| eval session_id=mvindex(fields, 1)
| rex field=_raw "(?<original_url>X-Original-URL:[^;]*)"
| eval fields=split(original_url, ":")
| eval original_url=mvindex(fields, 1)
| where isnotnull(session_id) AND session_id != "" AND isnotnull(userId) AND userId != ""
| rename userId as user_id
| transaction session_id maxevents=150 keepevicted=true mvlist=true
| table user_id, session_id, eventcount, duration, eventDate, original_url

Result: 2 events
Eventcounts: 15 (session_id: 123), 21 (session_id: 345)
eventDates: 04/30/2022 18:57:37 - 04/30/2022 18:57:43, 04/26/2022 20:01:30 - 04/26/2022 20:21:24

Thanks!