All Topics

Dear Splunk community,

I have the following search query:

<BASIC_SEARCH>
| chart count by path_template, http_status_code
| addtotals fieldname=total
| foreach 2* 3* 4* 5* [ eval "percent_<<FIELD>>"=round(100*'<<FIELD>>'/total,2), "<<FIELD>>"=if('<<FIELD>>'=0, '<<FIELD>>', '<<FIELD>>'." (".'percent_<<FIELD>>'."%)") ]
| fields - percent_* total

Attached is a sample of the current output based on the above search. I am trying to do the same thing, except only show the 500, 502, 503 columns (but still base all the percentage calculations on the total count of everything). How do I change the above search to achieve this?

Thanks, Daryoush

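A minimal sketch of one possible approach, assuming the status codes of interest all start with 5: keep the chart, addtotals and foreach stages unchanged (so the percentages are still computed against the grand total of all status codes) and only narrow the visible columns at the very end.

<BASIC_SEARCH>
| chart count by path_template, http_status_code
| addtotals fieldname=total
| foreach 2* 3* 4* 5* [ eval "percent_<<FIELD>>"=round(100*'<<FIELD>>'/total,2), "<<FIELD>>"=if('<<FIELD>>'=0, '<<FIELD>>', '<<FIELD>>'." (".'percent_<<FIELD>>'."%)") ]
| fields - percent_* total
| fields path_template 5*

The final fields command keeps only path_template and the 5xx columns; list the columns explicitly (| fields path_template "500" "502" "503") if other 5xx codes should be hidden as well.
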
I created a report, chose the accelerated report option, and selected a 7-day time range. One of the panels refers to the accelerated report in the dashboard source code this way:

<row>
  <panel>
    <title>Accelerated report - test</title>
    <chart>
      <search id="BaseSearch" ref="NbOfEventsByEventType">
        <earliest>$field1.earliest$</earliest>
        <latest>$field1.latest$</latest>
      </search>
      <option name="charting.chart">column</option>
      <option name="charting.drilldown">none</option>
    </chart>
  </panel>
</row>

I made a panel below this one (and another one in a different dashboard, to make sure there was no overlap at all) whose search is the original query instead of a ref to the accelerated report. I then choose a time range of the last 7 days with a dropdown, which starts the update of both panels. Their searches finish at the same time (and thus display their charts at the same time). Why does the ref to the accelerated report have no effect on the performance of the panel?

Hi, we are trying to configure an MSSQL query in the DB Connect app. Somehow we are not able to select the timestamp field, even though the field is present in the output results. Any idea what the issue could be?

Query:

use XXXXX
DECLARE @OFF_SET NUMERIC(1), @PREV_START_DATE VARCHAR(21), @PREV_END_DATE VARCHAR(21)
SET @OFF_SET = (select DATEDIFF(HH, GETUTCDATE(), GETDATE()))
SET @PREV_START_DATE = (SELECT CONVERT(VARCHAR(11), DATEADD(d,-1,DATEADD(dd, DATEDIFF(d,0,GETDATE()), 0)), 106) + ' 00:00:00')
SET @PREV_END_DATE = (SELECT CONVERT(VARCHAR(11), DATEADD(d,-1,DATEADD(dd, DATEDIFF(d,0,GETDATE()), 0)), 106) + ' 23:59:59')
SELECT
  SUBSTRING(J.JOB_NAME, CHARINDEX('_',J.JOB_NAME)+1, CHARINDEX('_',J.JOB_NAME, CHARINDEX('_',J.JOB_NAME)+1) - CHARINDEX('_',J.JOB_NAME)-1) as APP_CODE,
  J.JOB_NAME,
  J.JOB_TYPE,
  DATEADD(HH,@OFF_SET,R.START_DATE_TIME) START_DATE_TIME_EST,
  DATEADD(HH,@OFF_SET,R.END_DATE_TIME) END_DATE_TIME_EST,
  DATEDIFF(ss, DATEADD(HH,@OFF_SET,R.START_DATE_TIME), DATEADD(HH,@OFF_SET,R.END_DATE_TIME)) as RUN_TIME_IN_SECONDS,
  S.NAME as JOB_STATUS,
  R.EXIT_CODE
FROM dbo.RPT_AS_JOB_DEF_DIMENSION J, dbo.RPT_AS_JOB_RUN_FACT R, dbo.RPT_AS_STATUS_DIMENSION S, dbo.RPT_AS_MACHINE_DIMENSION M
WHERE J.JOB_DEF_ID=R.JOB_DEF_ID
  and R.STATUS_ID=S.STATUS_ID
  and R.RUN_MACHINE_ID=M.MACHINE_ID
  and J.JOB_NAME like 'PAT_%'
  and (DATEADD(HH,@OFF_SET,R.START_DATE_TIME) > @PREV_START_DATE AND DATEADD(HH,@OFF_SET,R.START_DATE_TIME) < @PREV_END_DATE)
ORDER BY SUBSTRING(J.JOB_NAME, CHARINDEX('_',J.JOB_NAME)+1, CHARINDEX('_',J.JOB_NAME, CHARINDEX('_',J.JOB_NAME)+1) - CHARINDEX('_',J.JOB_NAME)-1), R.START_DATE_TIME

Hello all, I am trying to figure out the following:

1. If an alert for rule_id1 occurs at the same time on the same host as an alert for rule_id2, then don't report the alert on rule_id2.
2. Otherwise, report alerts on rule_id2.

I have tried the if(match) and if(like) methods and neither yields the results I am hoping for. I am also not sure how to incorporate the time check to ensure they fired at roughly the same time. Any and all help greatly appreciated! Thanks!

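A hedged sketch of one way to express this correlation in SPL, assuming both alerts are searchable events with rule_id and host fields (the index name below is a placeholder) and treating events that fall into the same 5-minute bucket as "the same time":

index=your_alert_index (rule_id="rule_id1" OR rule_id="rule_id2")
| bin _time span=5m
| stats count(eval(rule_id="rule_id1")) as r1_count, count(eval(rule_id="rule_id2")) as r2_count by _time, host
| where r2_count > 0 AND r1_count = 0

The where clause keeps only the time/host buckets where rule_id2 fired without rule_id1, which are the only cases that should be reported; adjust the span to whatever "same-ish time" means in your environment.
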
Working within Dashboard Studio, how can I stop my labels being truncated on different charts? Even if I set the truncation option in a bar chart to Off, it's still truncating my labels. I tried work... See more...
Working within Dashboard Studio, how can I stop my labels being truncated on different charts? Even if I set the truncation option in a bar chart to Off, it's still truncating my labels. I tried working around it with a column chart and rotating the labels but there appears to be no such option. Sankey seems to lack these options as well. Thank you
Hi, could you help with extracting fields from these JSON events?

Sample JSON event 1:
{"type":"akamai_siem","format":"json","version":"1.0","attackData":{"rules":[{"data":"","action":"deny","selector":"","tag":"IPBLOCK",

Sample JSON event 2:
{"type":"akamai_siem","format":"json","version":"1.0","attackData":{"rules":"tag":"IPBLOCK/ADAPTIVE/BURST" qualification(4) rate on category bucket(2,Page View Requests)),"tag":"IPBLOCK/ADAPTIVE/SUMMARY"

Desired output of the new field: IPBLOCK, BURST, SUMMARY

Thanks.

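A rough sketch, assuming the events are well-formed Akamai SIEM JSON and the goal is the last path segment of each attackData.rules tag (the index and sourcetype below are placeholders, and mvmap requires Splunk 8.0 or later):

index=your_akamai_index sourcetype=akamai_siem
| spath path=attackData.rules{}.tag output=tag
| eval rule_category=mvmap(tag, mvindex(split(tag, "/"), -1))

split plus mvindex(-1) returns IPBLOCK for the first sample and BURST / SUMMARY for the second; if the JSON itself is not being parsed, the sourcetype may first need KV_MODE = json (or INDEXED_EXTRACTIONS = json) in props.conf.
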
I have a problem. I installed the .NET application on a Windows 2008 R2 computer: it works, I have communication, I configured it step by step, and everything works correctly. But there is another computer running Windows 2016 with .NET version 22.8.0 (compatible with 4.4.1.0); we did the same thing and it does not work. It reports a communication error and that a certificate does not exist. Has this happened to anyone? Attached evidence:

Hello Splunkers,

I come to you to gather some tips and tricks around lookup management. For example, I have several lookups used to whitelist some machines, and after a while some of these machines aren't used anymore. I bet we are not the only ones facing this, so I was wondering how you manage the review and update of these lookups.

I first had the idea of using the [fschange] stanza on ours to capture modifications (with time information and details about the change: Add/Delete/Edit), but I also saw that it is deprecated. Is it still a good option for managing our lookups? Is there something that replaces this stanza? I unfortunately have not found anything.

I also thought about adding columns such as "Creation date"/"Modification date"/"Too old" for each row. Is that a good enough workaround?

Thanks for your tips! Happy Splunking, A-D

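A hedged sketch of one review pattern, assuming the whitelist lookup has a host column and the whitelisted machines are expected to appear in some index (the lookup file names, index and 30-day window below are placeholders): a scheduled search annotates each row with a last_seen timestamp so stale entries are easy to spot.

| inputlookup host_whitelist.csv
| join type=left host
    [ search index=your_index earliest=-30d
      | stats latest(_time) as last_seen by host ]
| eval review_status=if(isnull(last_seen), "stale", "active")
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")
| outputlookup host_whitelist_reviewed.csv

For large lookups a stats-based merge avoids subsearch limits, but the idea is the same: record when each whitelisted machine was last seen and periodically review the rows that have gone quiet.
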
I have a use case where about 50% of my Windows clients have IIS running on them. I'd like to have a server class just for those 50% to ingest IIS logs. I have IIS logs coming in, but I have to manually list each client in the whitelist. Is there a way to determine whether a server has IIS and then deploy a specific server class? I was thinking perhaps by an installed Windows Feature? I'm at a loss.

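The deployment server can't query installed Windows features by itself, but if a service inventory is already being collected (for example via the Windows add-on's WinHostMon service input), a hedged sketch like the one below could build the candidate host list; the index, sourcetype and field names are assumptions that may differ in your environment, and W3SVC is the IIS web service name:

index=windows sourcetype=WinHostMon Name="W3SVC"
| stats latest(_time) as last_seen by host
| fields host

The resulting host list could then be exported and referenced as the server class whitelist (for example via whitelist.from_pathname in serverclass.conf) instead of typing each client by hand.
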
Hello, this is my first experience with Splunk Cloud and I would like to know how to configure the sending of events from my Fortinet firewall to my Splunk Cloud instance using a Heavy Forwarder. On my firewall I set the IP of my Heavy Forwarder and configured UDP port 514 to send the events to it. On my Heavy Forwarder, under Data Inputs, I configured port 514 with source fgt_log and index=Firewall, and for the app context I selected my Cloud instance. Even after all of this I can't see the events from my firewall in Splunk Cloud.

NOTE: The Heavy Forwarder is communicating with the Cloud; I validated the communication under Deployment Instances. Port 514 is enabled on the firewall, so I think I'm making a mistake in some configuration. Can you help me please?

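A few hedged sanity checks in SPL that can help narrow down where the events stop, assuming the sourcetype ends up as fgt_log and the index is named Firewall (index names are case sensitive, and the index must also exist on the Splunk Cloud side):

index=* sourcetype=fgt_log earliest=-15m

index=_internal source=*metrics.log* group=per_sourcetype_thruput series=fgt_log

index=_internal source=*splunkd.log* component=TcpOutputProc (ERROR OR WARN)

The first shows whether the events are searchable anywhere at all, the second (run against the Heavy Forwarder's _internal data) shows whether the HF is ingesting that sourcetype, and the third shows whether forwarding to Cloud is reporting errors.
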
Hi all, when an alert fires I have it set up so that the ticket attaches the full list of events returned by the search as a CSV file, which is fine, but I would like to output, say, the first 5 events from the search into the description of the Jira ticket, also in a table format. Is that possible? As far as I've seen it's either a single ticket per result or, as in my current deployment, a single ticket with a single event in the description but the entire result set in the attached CSV file. This is my first time asking a question, but the Splunk community has been so helpful and insightful that I've managed to go this long without asking one.

I use Splunk to collect AWS WAF logs and use the search below to get the top 50 client IPs for HTTP requests. Now I want to know the allow percentage and block percentage for each of the top 50 IPs, and have those percentages shown in the chart as well. How can I modify my search?

index="aws_waf" action=block OR action=allow
| spath webaclId
| top limit=50 "httpRequest.clientIp"

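A hedged rewrite that keeps the top-50 idea but computes the allow/block split per IP with eval-based counts (field names are taken from the question; adjust if your action values differ):

index="aws_waf" (action=allow OR action=block)
| stats count as total, count(eval(action="allow")) as allow_count, count(eval(action="block")) as block_count by httpRequest.clientIp
| eval allow_percent=round(100*allow_count/total, 2), block_percent=round(100*block_count/total, 2)
| sort - total
| head 50

allow_percent and block_percent can then be charted per client IP, for example as a stacked column chart.
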
I want to add two text fields to a Splunk XML dashboard, i.e. "IST" and "PST". These text fields should contain the current IST date/time and the current PST date/time respectively. The dashboard should look like the mock-up shown below.

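A minimal sketch of one way to generate both timestamps in a single search that two panels (for example single-value panels) could consume. It assumes the search renders times in UTC and uses fixed offsets (+5:30 for IST, -8:00 for PST), so it deliberately ignores daylight saving time; the format string is just an example.

| makeresults
| eval IST=strftime(now() + 19800, "%Y-%m-%d %H:%M:%S")
| eval PST=strftime(now() - 28800, "%Y-%m-%d %H:%M:%S")
| fields IST PST

If the dashboard user's time zone is not UTC, the offsets need to be adjusted accordingly.
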
Hi, I am struggling with field extractions. I have two fields that I want to extract, but the problem is that sometimes the value appears as 'Documentid : 123456789' and sometimes as 'DocumentId 123456789', i.e. without the colon. Is it possible to make an extraction that extracts only the numbers after 'DocumentId'?

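A hedged inline rex sketch that covers both spellings and the optional colon (the field name DocumentId is just a choice, and the pattern assumes the value is purely digits):

... | rex "(?i)DocumentId\s*:?\s*(?<DocumentId>\d+)"

The (?i) makes the match case-insensitive, so both 'Documentid' and 'DocumentId' are caught; the same regular expression could also be used in an EXTRACT setting in props.conf to apply it automatically at search time.
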
Hello all, I have the query index=xxxx sourcetype=xxx_* NOT(ASA), which filters logs that are not ASA from 4 sourcetypes. I want to send these resulting logs to a new sourcetype called xxx_analmoly. Is it possible? If yes, how can I achieve this?

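One hedged option, if copying the matching events into another index/sourcetype on a schedule is acceptable, is a scheduled search ending in collect; the target index below is a placeholder, and note that writing with a sourcetype other than the default stash normally counts against license usage. Rewriting the sourcetype at ingest time would instead be done with props/transforms on the forwarder or indexer.

index=xxxx sourcetype=xxx_* NOT(ASA)
| collect index=your_target_index sourcetype=xxx_analmoly
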
I need help with Splunk SPL or the REST API to fetch a report showing the total count of servers (Splunk Universal Forwarders) reporting to the indexers and to the Heavy Forwarders, with a breakdown. Some UFs send data to the indexers directly and a few of them send it via a HF (we followed this architecture due to some connection issues). Please assist me with this.

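A hedged sketch based on the forwarder connection metrics in _internal; it assumes the tcpin_connections metrics (with hostname and fwdType fields) are populated in your environment, where host is the instance that received the connection (an indexer or a HF) and hostname is the forwarder that connected:

index=_internal source=*metrics.log* group=tcpin_connections fwdType=uf
| stats dc(hostname) as uf_count values(hostname) as forwarders by host

Classifying each receiving host as indexer or HF for the final breakdown can then be done with a small lookup of your server roles.
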
Hi, I want to change the default values of the TTL settings for each action that can be triggered from an alert. Specifically, I'm asking how to change the default value of the "action.lookup.ttl" parameter. I know how to set it for each saved search individually, but not how to change the default value.

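A hedged sketch of where the default would normally live, assuming the action.lookup.* settings in savedsearches.conf take their defaults from the corresponding [lookup] stanza in alert_actions.conf; the app directory and value below are placeholders:

# $SPLUNK_HOME/etc/apps/<your_app>/local/alert_actions.conf
[lookup]
# minimum time to live, in seconds, of the search artifacts when this action fires
# (append "p" to express it as a number of scheduled periods instead)
ttl = 600

Per-saved-search overrides in savedsearches.conf would still take precedence over this default.
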
Hi, I've indexed a 12 MB file in Splunk, but there is a difference between the number of lines in the file and the number of events in Splunk:

file   = 114,475 lines
splunk = 104,475 events

The file lines look like this:
123456789|0123456789|0123456789|Tobe                             |Alex                            |

Any idea? Thanks

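A hedged first check, assuming the gap comes from Splunk merging or truncating lines at ingest time: the splunkd.log of the instance that indexed the file usually says so.

index=_internal source=*splunkd.log* (component=LineBreakingProcessor OR component=AggregatorProcessor) (WARN OR ERROR)

If line merging turns out to be the cause, setting SHOULD_LINEMERGE = false (with an appropriate LINE_BREAKER) in props.conf for that sourcetype usually gives one event per line; blank or duplicate lines in the file are the other common explanation for the difference.
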
Hello,

We have several alerts whose corresponding jobs occasionally go into a waiting status and stay like that. The next executions of these alerts are then not triggered, of course, so we get quite a few skipped jobs. The jobs overview states the jobs are in status "Parsing"; however, when I copy the corresponding search and execute it in another search window, it finishes quite fast. Please see also the screenshot below.

It seems to be stuck at the following point (last entries in the search.log):

....
12-05-2022 06:40:02.915 INFO ChunkedExternProcessor [15318 searchOrchestrator] - Running process: /vol1/opt/splunkdev2/splunk/bin/python3.7 /vol1/opt/splunkdev2/splunk/etc/apps/splunk_app_db_conn

I increased all the limits and quotas I could come up with to lift any restrictions on concurrency, but it did not help. How would I investigate this further?

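One hedged way to get an overview of the stuck jobs and how long they have been sitting in that state is the search-jobs REST endpoint on the affected search head (dispatchState and runDuration are standard job properties, but the exact field set can vary by version):

| rest /services/search/jobs splunk_server=local
| search dispatchState="PARSING"
| table sid title author dispatchState runDuration updated

Since the hang happens while ChunkedExternProcessor launches the DB Connect Python process, the DB Connect logs in index=_internal (source=*splunk_app_db_connect*) on the same host are the other place worth checking.
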
I have a query that returns an avg calculation over time, and I am using a sparkline to try to show the results for each 'period' over that time; however, although my results show a correct value, my sparkline only shows a value of 0 or 1. My search is:

| tstats SUM(ABC) as ABC, sum(DEF) as DEF where index=FOO earliest=-4h latest=-45m by _time platform span=5m
| eval AVG_ABC=((sum(DEF)/sum(ABC))/60)
| stats sparkline avg(AVG_ABC) by platform

Instead of a single-line result with the sparkline over time, I get the following:

Can anyone point me in the right direction? Essentially I am looking to create something like a single value viz with a trendline. Thanks.

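A hedged adjustment, assuming ABC and DEF are the fields as renamed by tstats: a bare sparkline in stats charts the event count per bucket (hence values of 0 or 1), so wrapping the aggregation inside sparkline(), and referring to the renamed fields rather than sum(...) in the eval, should produce the trend of the average instead:

| tstats sum(ABC) as ABC, sum(DEF) as DEF where index=FOO earliest=-4h latest=-45m by _time platform span=5m
| eval AVG_ABC=(DEF/ABC)/60
| stats sparkline(avg(AVG_ABC), 5m) as trend, avg(AVG_ABC) as average by platform

For the "single value with a trendline" goal, a timechart of AVG_ABC feeding a Single Value visualization with its trend indicator enabled is another common route.
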