All Topics

Hello, I am trying to search the Splunk log, but I am getting the output in payload format. Is there a way to get it in tabular format instead of the payload, so that I can insert it directly into a table? Can someone please help?

Thanks in advance!
Avanti
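If the payload is JSON (an assumption; the question doesn't say), a common pattern is to let spath extract the fields and then lay them out with table. The index, sourcetype, and field names below are placeholders:

```
index=my_index sourcetype=my_sourcetype
| spath
| table field1 field2 field3
```

The resulting statistics table can then be exported as CSV from the search UI.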
index=proxy sourcetype=bar
| stats count by blockedAction
| addtotals fieldname=grandTotal
| eval percentBlocked = round((blockedAction/grandTotal)*100,1)

I'm trying to show the amount blocked as a percentage of total traffic. blockedAction is a field that was created.
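For reference, a sketch of how this calculation is usually written (untested here): addtotals sums numeric fields across each row, so eventstats is the usual way to get the column total, and the division should use the numeric count rather than the blockedAction field itself:

```
index=proxy sourcetype=bar
| stats count by blockedAction
| eventstats sum(count) as grandTotal
| eval percentBlocked = round((count/grandTotal)*100,1)
```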
Hi, to get MikroTik logs into Splunk I use the MikroTik app. I have a problem showing MikroTik events in Splunk Enterprise Security (ES): nothing shows up. I have around 10M logs in Splunk, but all of my notables in ES are empty! What can I do?

In the first picture, 192.168.110.1 is my MikroTik routerboard. In the second picture, as you can see, I have a lot of DNS activity. And in the third picture, the ES app shows nothing.
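ES notables are driven by CIM data models rather than raw events, so one first check (a sketch; Network_Resolution is just the likely model for DNS traffic) is whether the MikroTik events are mapped into the model at all:

```
| tstats count from datamodel=Network_Resolution by sourcetype
```

If the MikroTik sourcetype does not appear here, the app's CIM tags and field extractions are the place to look before debugging ES itself.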
I am currently trying to install Splunk on a Linux CentOS box. I have tried to install both the .rpm and .tar files for Splunk, but I keep getting this error. I've also tried forcing the installation, but no luck.

Thank you,
Marco
I'm sure this has been asked before, but none of my searches against this forum have proved useful. I want to check for Windows hosts where the number of Context Switches/sec is higher than a calculated amount. That calculation needs to take into account the number of processors on the system.

To get the number of processors, I found that I can run the following search:

index="perfmon" sourcetype="Perfmon:CPU" instance!="_Total"
| stats dc(instance) AS NumProcessors by host

To get the number of Context Switches/sec, it's as easy as:

index="perfmon" sourcetype="Perfmon:System" counter="Context Switches/sec"

And I want to limit the events in the context switches query to where Value exceeds 5000 * NumProcessors. I thought a subsearch might be the way, but I can't seem to get that to work. This is something like what I want, but it doesn't work because the subsearch usage is wrong:

index="perfmon" sourcetype="Perfmon:System" counter="Context Switches/sec"
| stats avg(Value) AS avg_cs by host
| where avg_cs > (5000 * [search index="perfmon" host=$host$ sourcetype="Perfmon:CPU" instance!="_Total" | stats dc(instance) AS NumProcessors by host])
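One pattern that's often used for this (a sketch, untested against this data) is to join the per-host processor count onto the context-switch averages and filter afterwards, instead of embedding a subsearch inside the where clause:

```
index="perfmon" sourcetype="Perfmon:System" counter="Context Switches/sec"
| stats avg(Value) AS avg_cs by host
| join type=inner host
    [ search index="perfmon" sourcetype="Perfmon:CPU" instance!="_Total"
      | stats dc(instance) AS NumProcessors by host ]
| where avg_cs > (5000 * NumProcessors)
```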
I just want to know if there is a way to send scheduled views to AWS S3.
We recently upgraded to a newer version of the Splunk App for Windows Infrastructure. It seems to be generating an enormous number of memory-consuming scheduled searches called tSessions_Lookup_Update. Is there any way to slow this down or remove it entirely? We usually just use the LDAP searches anyway, so I don't know if disabling these would be that detrimental.
Hi everyone, I have the below query:

|inputlookup JOB_MDJX_CS_STATS_2_E3.csv
| join type=outer JOBFLOW_ID [ inputlookup JOB_MDJX_CS_MASTER_E3.csv ]
| where Environment="E3"
| eval Run_date1="20".RUNDATE2
| eval Run_Date=strptime(Run_date1,"%Y%m%d")
| eval nowdate=relative_time(now(), "-2d@d")
| fieldformat nowdate=strftime(nowdate,"%d/%b/%Y")
| fieldformat Run_Date=strftime(Run_Date,"%d/%b/%Y")
| where Run_Date==nowdate
| stats sum(JOB_EXEC_TIME) as TotalExecTime by JOBFLOW_NAME
| eval TotalExecTime=round(TotalExecTime,2)

I am getting a result like this:

JOBFLOW_NAME                        TotalExecTime
MA_005AO_GPCC_Inq_01_MC             1133.00
COMM_090AI_Market_Preference_TF      956.00

I am displaying it as a trend (screenshot attached). Currently it shows one JOBFLOW_NAME and its total execution time, then the second JOBFLOW_NAME and its total execution time. I want to show multiple trends in that panel: for the job "MA_005AO_GPCC_Inq_01_MC" I want to show the complete trend with the date on the X axis, and similarly for the second job. Can someone please guide me on this?
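For multiple trend lines in one panel, a common sketch (built on the fields in the query above, untested) is to keep the date in the aggregation and pivot the job names into columns with xyseries, replacing the final stats of the query:

```
| stats sum(JOB_EXEC_TIME) as TotalExecTime by Run_Date, JOBFLOW_NAME
| eval TotalExecTime=round(TotalExecTime,2)
| xyseries Run_Date JOBFLOW_NAME TotalExecTime
```

Plotted as a line chart, each JOBFLOW_NAME then becomes its own series with Run_Date on the X axis.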
Hello, I created a small alert compiling data per minute for the last 24 hours:

(index=my*filter) (myConstraint)
| bin span=1m _time
| eval fieldX=formule
| stats count(eval(field="OK")) AS OK, count as Total by index, field1, ..., fieldN, _time
| append [| inputlookup MyLookup.csv | addinfo | where _time > relative_time(info_max_time, "-24h")]
| stats max(OK) as OK, max(Total) as Total by index, field1, ..., fieldN, _time
| outputlookup append=f MyLookup.csv

I configure the alert with earliest=-5m and latest=now, with a schedule window of 0. I tried with and without acceleration, without success. I schedule my search with * * * * *, and the expiration keeps 1 h of alerts. The alert runs correctly, but it only runs every 5 to 10 minutes. In the tasks view I see the execution time is less than 15 s (between 6 and 15 sec).

The goal: another alert must run every 5 min and must look at the last 2 h to generate alerts. Run directly against the events, that alert takes 3 min; I hope that inspecting the lookup is quicker.
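When a schedule of * * * * * actually fires only every 5 to 10 minutes, the scheduler has usually skipped or deferred runs (for example because of concurrency limits). The scheduler log shows this directly; a sketch, where the savedsearch_name value is a placeholder for the real alert name:

```
index=_internal sourcetype=scheduler savedsearch_name="MyLookupAlert"
| stats count by status, reason
```

A high count of skipped or deferred runs, with the reason field, usually points at the limit to adjust.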
I wonder if anybody can help me with a regex to break this field into single lines:

CustomResults="{pcap_filter_result {72038003 Ok (0x00000000)}} {pcap_filter_result {1769863 Ok (0x00000000)}} {pcap_filter_result {10879463 Ok (0x00000000)}} {pcap_filter_result {1962188 Ok (0x00000000)}} {pcap_filter_result {69603350 Ok (0x00000000)}} {pcap_filter_result {22006889 Ok

I am only interested in having: 72055288 Ok (0x00000000)

Is there any way I can see it matched line by line with any other field? Like:

field1 field2 72055288 Ok (0x00000000)
field1 field2 72055289 Ok (0x00000000)
field1 field2 72055210 Ok (0x00000000)

This one field has all this data together and I'm looking for the best way to break it up. Thanks so much!
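A sketch of one way to do this in SPL (field names other than CustomResults are placeholders): rex with max_match=0 captures every braced group into a multivalue field, and mvexpand turns each value into its own row alongside the other fields:

```
| rex field=CustomResults max_match=0 "\{pcap_filter_result \{(?<pcap_result>[^}]+)\}\}"
| mvexpand pcap_result
| table field1 field2 pcap_result
```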
Hello experts, I set up an alarm as follows and run it as a cron job every 5 minutes. Do you have any idea how to clear the alarm only after it has occurred, to avoid sending an alert every 5 minutes in the normal situation?

| eval alarm = if(value >= alarmThreshold,1,0)        // if alarm==1, then tag=1
| eval clear = if(value <  alarmThreshold,1,0)        // clear=1 when value<alarmThreshold AND tag==1
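One sketch for alerting only on the transition into the alarm state (value and the threshold field come from the question, spelled alarmThreshold here): compare each row's alarm flag with the previous one using streamstats, and keep only the rising edge:

```
| eval alarm = if(value >= alarmThreshold, 1, 0)
| streamstats current=f window=1 last(alarm) as prev_alarm
| where alarm=1 AND (isnull(prev_alarm) OR prev_alarm=0)
```

Rows survive only when the alarm turns on, so a value that stays above the threshold does not re-trigger every run.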
Is it possible to collect the same Windows event as both the standard type and as XML (i.e. setting the renderXml flag to true in inputs.conf) using the universal forwarder? I have tried two inputs.conf entries for the same event, each sending to a different sourcetype on the same index, but I only receive one set of the events, and it's always XML-formatted if the XML flag is set. I suspect that the answer is no, or that the solution is overly complicated, but I figured I should ask anyway. One of my events only has certain information in the XML format, and I was looking to avoid having to rewrite a lot of existing code to use the XML formatting where it was previously unnecessary.
Good day, I'm new to Splunk and I just want to know: is it possible to create daily indexes in Splunk, and if yes, how do I create them? Regards, Learnmore
I would like to compare (not an exact match) URLs in my proxy log with URLs stored in a lookup table.

Example URL in the proxy log:
P1: 99.99.99.99/safebrowse/jh/oiruitupwerouitufkgjlhsfghjdfsglhjpoier/AFHJDFHADS?S=32

Example URLs in the lookup file:
L1: 99.99.99.99/safebrowse/jh/oiruitupwerouitufkgjlhsfghjdfsglhjpoier
L2: 88.99.77.66/query.js
L3: www.notaurl.com/8484/ucd/94843984.php

I tried using inputlookup in a subsearch and a join; however, it fails to match, as in either case Splunk does an exact match. Sample subsearch query:

|tstats count from datamodel=Web where Web.user!="-" by Web.user Web.url _time
| search [|inputlookup url_lookup | search type="URL" | fields ref_url | rename ref_url as Web.url]
| table Web.user Web.url _time count

I want P1 to match L1.
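One common sketch for non-exact matching is a wildcard lookup: append a trailing * to each ref_url value in the CSV and declare the lookup with a WILDCARD match type in transforms.conf (file and field names from the question; the stanza name is assumed):

```
# transforms.conf
[url_lookup]
filename = url_lookup.csv
match_type = WILDCARD(ref_url)
```

At search time the lookup can then be applied directly, e.g. | lookup url_lookup ref_url AS Web.url OUTPUT type, and rows where type is non-null are the prefix matches.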
Hi all, I have prepared a Splunk search query for the daily pool-wise license usage, but I need the last 6 months of pool-wise data, day by day.

Query for the daily pool-wise license check:

|rest splunk_server=local /services/licenser/pools
| rename title AS Pool
| search [rest splunk_server=local /services/licenser/groups | search is_active=1 | eval stack_id=stack_ids | fields stack_id]
| eval quota=if(isnull(effective_quota),quota,effective_quota)
| eval Used=round(used_bytes/1024/1024/1024, 3)
| eval Quota=round(quota/1024/1024/1024, 3)
| fields Pool Used Quota
| eval PercentageUsed=Used*100/Quota
| fields Pool Used Quota PercentageUsed
| eval PercentageUsed = PercentageUsed + " %"

Result:

Pool                             Used    Quota   PercentageUsed
auto_generated_pool_enterprise   0.000   0.211   0.00 %
Development                      15.684  29.297  53.534 %
Linux Operations                 1.586   8.789   18.05 %
Networks Logs                    0.801   2.441   32.8 %
Production                       41.616  94.238  44.161 %

This query works on a daily basis. Can you please help me with a query for how much license was consumed each day over the last 6 months?
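The licenser REST endpoints only report the current state; historical daily usage usually comes from the license manager's internal logs instead. A sketch (run where the license manager's _internal index is searchable; retention of _internal must cover 6 months):

```
index=_internal source=*license_usage.log type=Usage earliest=-6mon@d
| eval GB=round(b/1024/1024/1024, 3)
| timechart span=1d sum(GB) as UsedGB by pool
```

This gives one column per pool and one row per day, which matches the daily pool-wise breakdown above.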
I'm having an issue accessing my Splunk Cloud instance: I'm getting a "page not found" error. Is there any way to bypass this, or will I need to contact support?
Hello, I have a universal forwarder configured to watch a file via inputs.conf (crcSalt=<SOURCE>). This works perfectly, but I need to test sending the same file over and over again by prodding the local fishbucket instance to "forget" the file being monitored. Unfortunately, when I run the btprobe command, I get "file not found":

btprobe -d /opt/splunk/var/lib/splunk/fishbucket/splunk_private_db --file /path/to/somefile --reset

I did a btool list input status, and the correct file shows up. I then computed a CRC with the salt set to "/path/to/somefile" and used that in the earlier btprobe command, and it still doesn't work:

btprobe --compute-crc /path/to/somefile --salt "/path/to/somefile"

I used the results of the above command and did this, and still found nothing:

splunk cmd btprobe -d $SPLUNK_DB/fishbucket/splunk_private_db -k ALL | egrep 0x34a86c35e2c71990

Can somebody point me in the right direction to get around this issue?
I want to trigger an email alert whenever an account is locked on a machine.

stats values(MachineName) as Machinename by Account, Email, _time

Account  Machinename         Email           _time
John     Machine1 Machine2   John@gmail.com  1:00 PM
John     Machine2            John@gmail.com  2:00 PM

I have set up the alert to run every 5 minutes and trigger only once in 24 hours, with suppression fields Account, Machinename.

Issue: the email is triggered twice, at 1:00 PM and again at 2:00 PM, even though the machine name is the same. I'm not sure if it is considering only Machine1 when triggering the first mail. Please help.
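If the suppression key includes Machinename, then a multivalue result ("Machine1 Machine2") and a single value ("Machine2") are different keys, which would explain the second email. One sketch (untested) is to expand to one row per account/machine pair before throttling applies:

```
| stats values(MachineName) as Machinename by Account, Email
| mvexpand Machinename
```

Each Account/Machinename pair is then throttled independently; dropping _time from the by clause (an assumption here) also stops repeated lockouts of the same pair from becoming separate rows.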
How do I update a lookup file in Splunk from Phantom?
Hi, we need help drawing the trend for multiple timings in Splunk. Below is my query:

index=nextgen sourcetype=lighthouse_json datasource=webpagetest step="Homepage"
| timechart span=1h list(speedindex) as "speedindex_latest"
| fieldformat _time=strftime(_time,"%D:%I:%M %p")
| table _time speedindex_latest
| appendcols
    [search index=nextgen sourcetype=lighthouse_json datasource=webpagetest step="Homepage"
     | timechart span=1h list(speedindex) as "speedindex_notlatest"
     | fieldformat _time=strftime(_time,"%D:%I:%M %p")
     | table _time speedindex_notlatest]

If I set the dashboard time range to 24 hrs, speedindex_latest should give me the data for the last 24 hrs; I am able to see that data now. The speedindex_notlatest column should give me the data for the previous 24 hrs, i.e. the window from 24 hrs ago back to 48 hrs ago. That way I can overlay the two series and check how the performance compares. Can someone please help me resolve this?
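One sketch for this kind of previous-period overlay is timewrap, which splits a single timechart into one series per 24 h period and avoids the appendcols alignment problem (untested against this data; avg() is assumed where list() was used):

```
index=nextgen sourcetype=lighthouse_json datasource=webpagetest step="Homepage" earliest=-48h latest=now
| timechart span=1h avg(speedindex) as speedindex
| timewrap 1day
```

This yields one column per day (the exact column names depend on timewrap's defaults), which overlay directly on a line chart.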