All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, I have installed the database agent, but on the Controller I can't see queries and activity. When I take a look at the logs, I have the following: and on the Controller UI I get this message: Thanks for your help!
Hi Splunkers, I'm facing the following task: I have to build a correlation search that checks for users who go to a web page without using the proxy; in other words, direct traffic that does not pass through it. The rule itself is not a problem; I could perform some checks, for example whether the host is not a proxy. My question is: since one constraint is to use a data model where possible, how can I distinguish direct web traffic from proxy traffic using the Web data model? Which field, or fields, am I supposed to check to separate direct traffic from proxy traffic in this DM? Is this possible with the Web DM?
Hi all, new to Splunk. We regularly burn down our heavy forwarders, so their IPs change regularly. I need a way to keep the UFs pointed at the HFs, but I've read that using an AWS ELB isn't recommended. To add to the challenge, we have to keep everything encrypted over TLS. What is the recommended way to handle IPs changing all the time when managing hundreds of UFs? And how do people ensure that the UFs are always talking to the geographically nearest HFs? Many thanks, Oz
Hi, The license has expired for all the nodes in the application, but data is still reporting; I am able to see it and drill down. For how many days can I see this data reporting after expiration? Regards, Hemanth Kumar.
Hello, I'm dealing with an issue on one of my forwarders. I have 13 forwarders that are all showing in my search app in the proper format. For some reason, the forwarder that also happens to be our primary domain controller is only using a source and sourcetype of XML files. Is there a way to troubleshoot this?
How do I write a query to trigger an alert if a user account has logged in during off-business hours?
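One minimal sketch of such an alert, assuming an authentication sourcetype with a user field (the index, sourcetype, and business hours below are hypothetical placeholders to adapt):

```
index=security sourcetype=auth action=success
| eval hour=tonumber(strftime(_time, "%H"))
| where hour < 8 OR hour >= 18
| stats count by user
```

Saved as an alert, this could trigger when the result count is greater than zero; adjust the hour boundaries (and add weekend handling via date_wday if needed) to match your actual business hours.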
I want to get an API usage report per user and I am struggling with the Splunk query for this. Can someone please help with the query? I tried using rex but didn't get through.

In my app logs, I have text like:

U87XXXX:ddddffggggggsss.REG.Currency [RestInterceptor]: RestRequest: URI : https://abc.net/api/curr ........  RequestBody: {"loginId": "U87XXXX"}

I want the output as:

UserID     URL                        COUNT
U87XXXX    https://abc.net/api/curr   5
U78XXXX    https://abc.net/api/xyz    11

Thanks in advance.
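A possible starting point, assuming the user ID always precedes the first colon and the URI always follows "URI : " in the raw event (the index name is a placeholder):

```
index=app_logs "RestRequest"
| rex field=_raw "^(?<UserID>[^:\s]+):"
| rex field=_raw "URI\s*:\s*(?<URL>\S+)"
| stats count as COUNT by UserID, URL
```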
Hi. I have a log with different messages. I want to understand which line appears the most times in the log. Please help me.

Here is an example of 4 lines from the log:

'2022-04-14 05:11:53,833',SmartX.ControlUp.Client.CacheActivityListener,'[Connections#12]','DEBUG','[OnDBTransaction] IsEntityInBlackList: Entity= Processes blackList is empty.'
'2022-04-14 05:11:53,833',SmartX.ControlUp.Client.AlertsFactory,'[Observables#18]','INFO','GetInvokedTrigger ShouldBeInvoked ==> Session, for trigger id = 3cb3a80e-0d64-4585-a255-9c554d534deb, trigger name = AAS_Session State - Active to Idle - BLK'
'2022-04-14 05:11:53,848',SmartX.ControlUp.Client.AlertsFactory,'[Observables#18]','DEBUG','ExamineAdvancedTriggersInternal - ret is true, trigger was added, trigger id = 3cb3a80e-0d64-4585-a255-9c554d534deb, trigger name = AAS_Session State - Active to Idle - BLK'
'2022-04-14 05:11:53,833',SmartX.ControlUp.Client.CacheActivityListener,'[Connections#12]','DEBUG','[OnDBTransaction] IsEntityInBlackList: Entity= Processes blackList is empty.'

I want to receive statistics about each row: how many times it appears in the log. Of course, my log has many more than 4 different lines.
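Since every line starts with a timestamp, identical messages would rarely match on _raw alone; one sketch is to strip the leading quoted timestamp first and count what remains (index and source are placeholders for wherever this log is ingested, and the rex assumes the quoted-timestamp prefix shown in the sample):

```
index=my_index source="controlup.log"
| rex field=_raw "^'[^']+',(?<message>.*)$"
| stats count by message
| sort - count
```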
Hello, How do I integrate AppDynamics with Jaspersoft to create formatted reports (dashboards and tables)? I used the following article as a guideline: https://community.appdynamics.com/t5/Knowledge-Base/How-do-I-retrieve-metrics-and-so-I-can-display-them-with/ta-p/39478 It worked when I created a "Data Adapter" as "JSON URL/File", and in some table cases, but not in all dashboards of the Jaspersoft Community Edition. Also, it did not work with a "Web Data Source" in Jaspersoft Pro Edition!
Hi, I need some help. We have been using Splunk for MongoDB alerts for a while. The new MongoDB version we are upgrading to changes the log format from text to JSON, and I need to alter the alert in Splunk so that it continues to work with the new JSON log format. Here is an example of a search query in one of the alerts we have now:

index=googlecloud* source="projects/dir1/dir2/mongodblogs" data.logName="projects/dir3/logs/mongodb" data.textPayload="* REPL *" NOT "catchup takeover"
| rex field=data.textPayload "(?<sourceTimestamp>\d{4}-\d*-\d*T\d*:\d*:\d*.\d*)-\d*\s*(?<severity>\w*)\s*(?<component>\w*)\s*(?<context>\S*)\s*(?<message>.*)"
| search component="REPL" message!="*took *ms" message!="warning: log line attempted * over max size*" NOT (severity="I" AND message="applied op: CRUD*" AND message!="*took *ms")
| rename data.labels.compute.googleapis.com/resource_name as server
| regex server="^preprod0[12]-.+-mongodb-server8*\d$"
| sort sourceTimestamp data.insertId
| table sourceTimestamp server severity component context message

The content of the MongoDB log is under data.textPayload; currently it is parsed with a regex, split into 5 groups with labels, and then we search each group for the string or message that we want to be alerted on. The new JSON-format log looks like this:

{"t":{"$date":"2022-04-19T07:50:31.005-04:00"},"s":"I", "c":"REPL", "id":21340, "ctx":"RstlKillOpThread","msg":"State transition ops metrics","attr":{"metrics":{"lastStateTransition":"stepDown","userOpsKilled":0,"userOpsRunning":4}}}

I need to split it into 7 groups, using the comma as a delimiter, and then search each group using the same search criteria. I have been trying and testing for 2 days; I'm new to Splunk and not very good with regex. Any help would be appreciated. Thanks! Sally
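Since the new payload is valid JSON, one approach worth testing is to let spath parse it instead of extending the regex. A sketch, assuming the JSON document still arrives in data.textPayload; the renames mirror the key names in the sample event and may need adjusting to the field names spath actually produces:

```
index=googlecloud* source="projects/dir1/dir2/mongodblogs"
| spath input=data.textPayload
| rename "t.$date" as sourceTimestamp, s as severity, c as component, ctx as context, msg as message
| search component="REPL" message!="*took *ms"
| table sourceTimestamp severity component context message
```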
Hi Team, We have a Splunk dashboard created using the Dashboard Beta app. I would like to know whether it is possible to increase the size of the page. We have customized more than 10 panels, and after we save the dashboard and try to download it, we can't see the last few panels. Would it be possible to increase the size?
Hi Team, I have created a dashboard where I would like to freeze the header (time) on the timeline visualization. Could you please advise on how to do that by opening the <html> tag?
Hi, I have a requirement to create an alert on some of my APIs which are being monitored in Splunk. I've created a search which checks the successes/failures of each API and then calculates the failure rate; if that is more than 10%, it triggers the alert. What is happening now is that the alert gets triggered even for big blips that last only a short time: for example, a spike in the error rate for 5 minutes that then recovers by itself. I don't want to trigger the alert in that situation, because it makes unnecessary callouts to people for an investigation that isn't required. How can I create an alert that runs every 30 minutes and checks the failure rate in each 5-minute window of the last 30-minute period, so that it triggers only if the failure rate stays elevated for more than 15-20 minutes? This is my base search:

index=api_prod (message.httpResponseCode=50* OR message.httpResponseCode=20*)
| rename message.serviceName as serviceName message.httpResponseCode as httpResponseCode
| stats count as totalrequests count(eval(like(httpResponseCode, "20%"))) as successrequest count(eval(like(httpResponseCode, "50%"))) as failedrequest by serviceName
| eval Total = successrequest + failedrequest
| eval failureRatePercentage = round(((failedrequest/totalrequests) * 100),2)
| where failureRatePercentage > 10
| fields - Total
| table serviceName,totalrequests,successrequest,failedrequest,failureRatePercentage

Any guidance is really appreciated. Best Regards, Shashank
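One sketch of the "consistency" requirement: break the 30-minute window into 5-minute buckets, flag each bucket that breaches 10%, and alert only when enough buckets are flagged (here 4 of 6, roughly 20 minutes; the threshold is an assumption to tune):

```
index=api_prod (message.httpResponseCode=50* OR message.httpResponseCode=20*)
| rename message.serviceName as serviceName message.httpResponseCode as httpResponseCode
| bin _time span=5m
| stats count as totalrequests count(eval(like(httpResponseCode, "50%"))) as failedrequest by _time serviceName
| eval failureRatePercentage = round((failedrequest/totalrequests)*100, 2)
| eval breached = if(failureRatePercentage > 10, 1, 0)
| stats sum(breached) as breachedBuckets by serviceName
| where breachedBuckets >= 4
```

Scheduled every 30 minutes over a 30-minute window, this only fires for services whose failure rate exceeded 10% in at least 4 of the 6 buckets.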
Hi All, the topic might sound very mystic but is actually rather straightforward. I have a timechart displaying the current values of a metric (actually two different metrics), but that is not the issue. I also have a checkbox that, when ticked, hands an entire append query over to the chart I mentioned above, but with the time values of the previous week.

<input type="checkbox" token="tok_input_1">
  <label></label>
  <choice value="yes">something rather boring</choice>
  <change>
    <condition value="yes">
      <set token="my_query_token">| appendcols [ search index=<my_index> earliest=$tok_earliest_ref$ latest=$tok_latest_ref$ | timechart span=5min partial=f sum(Average) as "reference"]</set>
    </condition>
    <condition>
      <set token="<my_query_token>"></set>
    </condition>
  </change>
</input>

However, the time tokens for earliest and latest are handed over to the chart once and are not updated as time moves along, although the reference times are recalculated every 2 minutes based on the current timeframe. Any suggestions? Regards, Mike
Any ideas how to resolve this one, guys? I'm getting 1 error every minute:

ERROR ExecProcessor - message from "D:\SplunkUniversalForwarder\bin\splunk-winevtlog.exe" splunk-winevtlog - WinEventMon::enumEvtLogChannels: Failed to enumerate event log channels: '(1717)'.
I found many errors in the _internal log:

ERROR ExecProcessor - message from "D:\SplunkUniversalForwarder\bin\splunk-regmon.exe" splunk-regmon - WinRegistryMonitor::configure: Failed to get configuration settings: 'Regex: number too big in {} quantifier'

Any ideas how to resolve this error?
Hello, I have a query which returns Planned_Sprint, Total Hours, Actual Hours, Team, and Type. Now I want a stacked bar and line chart, in which the stacked bars are split by Planned_Sprint and Type, and the line is split only by Planned_Sprint. The X-axis is Planned_Sprint; the Y-axis shows Total Hours as a stacked bar and Actual Hours as a line.

Planned_Sprint  Total Hours  Actual Hours  Type  Team
Sp_1            10           20            A     WWW
Sp_1            15           10            B     DDD
Sp_1            5            10            B     RRR
Sp_2            10           15            A     WWW
Sp_2            20           5             A     DDD
Sp_2            10           10            B     TTT
Sp_2            5            8             C     RRR
Sp_3            20           20            B     TTT

Here is my code for the chart:

| chart sum(Total Hours) AS PLANNED_Hours,sum(Actual Hours) as AC by Planned_Sprint,Type

But with this, even AC gets split by Type. I do not want that; I want something like:

| chart sum(Total Hours) AS PLANNED_Hours by Planned_Sprint,Type, sum(Actual Hours) as AC by Planned_Sprint

Any suggestion how this can be achieved?
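One common pattern for mixing a stacked bar with a single line is to chart the bars by Type, append the line series with appendcols, and then mark AC as a chart overlay in the visualization settings. A sketch, where <base search> stands in for the existing query:

```
<base search>
| chart sum("Total Hours") as PLANNED_Hours over Planned_Sprint by Type
| appendcols
    [ search <base search>
      | stats sum("Actual Hours") as AC by Planned_Sprint ]
```

Note that appendcols joins rows by position, so this relies on both result sets being sorted by Planned_Sprint in the same order.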
Hi All, In my raw events there is a field called "dv_last_login_time" (already indexed), as shown below, that contains a timestamp in human-readable format. I need to extract the hour value out of it. All I am doing is running the following eval command, but it does not end up creating a new field date_hour; in short, it doesn't seem to work. What could be the issue?

eval date_hour=strftime(dv_last_login_time, "%H")

In contrast, if I use _time, which is also displayed in human-readable format, instead of dv_last_login_time, eval works as expected and a new field called date_hour is created:

eval date_hour=strftime(_time, "%H")

Secondly, assuming we are able to extract the hour successfully, how do I add 9 hours to the same field? My end goal is to do something like:

| where duration > date_hour and < date_hour + 9
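A likely cause: strftime expects an epoch (numeric) time, and _time is stored as one internally, whereas dv_last_login_time is a string; converting it with strptime first should help. A sketch, where the format string is an assumption that must match how dv_last_login_time actually looks:

```
| eval login_epoch = strptime(dv_last_login_time, "%Y-%m-%d %H:%M:%S")
| eval date_hour = tonumber(strftime(login_epoch, "%H"))
| where duration > date_hour AND duration < date_hour + 9
```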
I have created a table as below using the query:

index=xyz | stats count(Status) as Total by Transaction,Status

Transaction  Status                     count(Status)
A            200 OK                     45
A            400 Bad Request            20
B            200 OK                     110
B            400 Bad Request            15
B            500 Internal Server Error  5
C            200 OK                     85
C            400 Bad Request            25
C            500 Internal Server Error  30

But I want a transposed table, as below:

Transaction  200 OK  400 Bad Request  500 Internal Server Error  Total
A            45      20               0                          65
B            110     15               5                          130
C            85      25               30                         140

Please help me create a query to get the desired output.
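A sketch of one way to get this shape directly, without transpose: chart pivots Status into columns, fillnull supplies the 0 for missing combinations, and addtotals adds the per-row Total:

```
index=xyz
| chart count over Transaction by Status
| fillnull value=0
| addtotals fieldname=Total
```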
Hello All, I'm having an issue with data due to the DST timezone update: since 29 March, data has been arriving one hour late in Splunk, and because of that we are getting false alerts. Can someone guide me on how to update the timezone in the DB Connect app? Also, do we need to set it back to the default after DST ends? Appreciate your help.