All Topics



Hi, we have been asked to design a query that returns the username, location, and last logon time of each user. However, I want to remove the rows that have empty values in them. Can someone please assist me with this? Regards, Rahul
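A minimal sketch of one way to drop rows with empty fields, assuming the finished table has Username, location, and last_logon columns (all field names here are placeholders for whatever the query actually produces):

| stats latest(_time) as last_logon by Username, location
| where isnotnull(Username) AND isnotnull(location) AND isnotnull(last_logon)

On an existing result table, | search Username=* location=* last_logon=* achieves the same thing, keeping only rows where all three fields are present and non-empty.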
I have the below alert:

| tstats latest(_time) as latest where index=*rsa* earliest=-10m by index
| eval recent = if(latest > relative_time(now(),"-10m"),1,0), realLatest = strftime(latest,"%c")
| where recent = 0

triggering on a cron schedule (*/10 * * * *), set to alert when the number of results is not equal to 0. I can force the query to return a result by modifying it to the below:

| tstats latest(_time) as latest where index=*rsa* earliest=-0m by index
| eval recent = if(latest > relative_time(now(),"-0m"),1,0), realLatest = strftime(latest,"%c")
| where recent = 0

In both cases, where the original query/alert returns a result (under the Statistics tab) and the modified/forced query/alert does too, neither a triggered alert nor the email and PagerDuty notification actions tied into the alert actions seem to fire. As far as I can tell the query logic makes sense, so can anybody please advise?
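A sketch of a simpler formulation of the same missing-data check (the index pattern is taken from the post; the 10-minute window is an assumption to tune):

| tstats latest(_time) as latest where index=*rsa* by index
| eval realLatest = strftime(latest, "%c")
| where latest < relative_time(now(), "-10m")

With this shape the alert condition can simply be "number of results > 0". It also sidesteps a subtlety in the original: because it restricts earliest=-10m, an index with no events in the last 10 minutes produces no row at all, so rows with recent=0 can only arise from boundary effects, which may explain the inconsistent triggering.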
Hello, I am trying to display some data from the field "result" in a single value chart using the query below, with the color/severity rule based on keywords instead of numbers, i.e. OK = GREEN and NOTOK = RED. The code below works for me, but the one issue is that I am unable to break the line in the chart. I want OK/NOTOK on top and then the EndTime (or anything I may concatenate further) below the result field. Example:

============================================
result as "OK/NOTOK"
EndTime
============================================

I have tried using regex/sed and actually pressing (Shift + Enter) in my Splunk query, and it does work, but only in search, not in the dashboard.

Code:
============================================
index=xyz
| eval SLA=9.0
| eval Date=strftime(_time, "%m-%d-%y")
| eval EndTime=strftime(_time, "%H.%M")
| eval result=if(EndTime<SLA, "OK"."\n".EndTime, "NOTOK")
| table result EndTime
| eval severity=case(result="OK"."\n".EndTime, 0, result="NOTOK", 1)
| rangemap field=severity low=0-0 default=severe
============================================

Can someone please advise? I have seen a few posts saying this could be achieved using some CSS/JS scripts, but I do not have much knowledge of them. Any help appreciated. @niketn @ITWhisperer @Ayn @woodcock Regards
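One way to keep the severity logic independent of the concatenated display string, as a sketch (field names as in the post, with a new intermediate field status):

index=xyz
| eval SLA=9.0
| eval EndTime=strftime(_time, "%H.%M")
| eval status=if(EndTime<SLA, "OK", "NOTOK")
| eval result=status."\n".EndTime
| eval severity=if(status="OK", 0, 1)
| rangemap field=severity low=0-0 default=severe
| table result EndTime severity range

Whether the embedded newline actually renders as a line break in the Single Value visualization still depends on the viz itself; if it does not, the CSS/JS route mentioned in the post is the usual fallback.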
I am trying to build a report based on the URL and the average response time each URL is taking. I am able to get the logs, but I specifically want them without the path parameters, so I can aggregate response times per URL pattern. Below is a sample. I can see the data like this, but it creates multiple rows: https://abc-google.com/ABC/abc/1234/abc and https://abc-google.com/ABC/abc/1342/abc. I want them collapsed into one URL pattern that removes the parameter and shows something like this: https://abc-google.com/ABC/abc/{num}/abc. There are many URLs like this, e.g. https://abc-google.com/CDE/abc/cde/abc/cde/111. Is it possible to get all the data without the params and compute the average response time on it?
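A sketch of one way to normalize the numeric path segments before aggregating, assuming the URL is in a field named url and the response time in response_time (both names are placeholders):

... | rex mode=sed field=url "s/\/\d+(\/|$)/\/{num}\1/g"
| stats avg(response_time) as avg_response_time count by url

The sed-mode rex replaces every all-digit path segment with {num}, so .../abc/1234/abc and .../abc/1342/abc both collapse to .../abc/{num}/abc before the stats runs.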
We are in the midst of standing up our Splunk Cloud environment. Our architecture and data flow are as follows: Syslog-NG (w/ Splunk UF installed) > on-premise Splunk Heavy Forwarder > Splunk Cloud. I am trying to make sure all of my configurations are sound for getting data from my syslog server into Splunk Cloud, and it would appear that some things are incorrect. Right now, my configurations are as follows:

===Syslog-NG Configuration===
@version: 3.25
@include "scl.conf"
options { chain_hostnames(no); create_dirs (yes); dir_perm(0755); dns_cache(yes); keep_hostname(yes); log_fifo_size(2048); log_msg_size(8192); perm(0644); time_reopen (10); use_dns(yes); };
source s_paloalto { tcp(port(5141) flags(no-parse,store-raw-message)); };
source s_locallogs { system(); internal(); };
destination d_paloalto { file("/var/log/splunkcloud/paloalto/\$HOST/\$YEAR-\$MONTH-\$DAY-palo.log"); };
destination d_locallogs { file("/var/log/splunkcloud/systemlogs/\$HOST/\$YEAR-\$MONTH-\$DAY-system.log"); };
log { source(s_paloalto); destination(d_paloalto); };
log { source(s_locallogs); destination(d_locallogs); };
=======================

===Syslog-NG Splunk UF inputs.conf===
[monitor:///var/log/splunklogs/paloalto]
disabled = 0
index=network
sourcetype=paloalto

[monitor:///var/log/splunklogs/systemlogs]
disabled = 0
index=syslogs
sourcetype=syslogs
=======================

===Syslog-NG Splunk UF outputs.conf===
[tcpout]
defaultGroup = syslogs_group, paloalto_group

[tcpout:syslogs_group]
server=x.x.x.x:5140

[tcpout:paloalto_group]
server=x.x.x.x:5141
=======================

===Splunk HF inputs.conf===
[tcp://:5140]
index=syslogs
sourcetype=syslogs

[tcp://:5141]
index=network
sourcetype=paloalto
=======================

With that, all I am getting into Splunk Cloud is the following (or similar):

--splunk-cooked-mode-v3--\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00s-drsyslog-1\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x008089\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00@\x00\x00\x00\x00\x00\x00__s2s_capabilities\x00\x00\x00\x00ack=0;compression=0\x00\x00\x00\x00\x00\x00\x00\x00_raw\x00

I did manually create an index on our HF named "syslogs", but while I can query the index, it did not seem to make any difference with respect to
the data itself.
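Since the UF's outputs.conf uses [tcpout], the UF sends cooked Splunk-to-Splunk (S2S) traffic, which is why the raw [tcp://] inputs on the HF index the --splunk-cooked-mode-v3-- handshake instead of events. A sketch of a matching configuration, assuming a single S2S receiving port of 9997 on the HF (the port number is an assumption):

# Syslog-NG Splunk UF outputs.conf: send all cooked traffic to one splunktcp port
[tcpout]
defaultGroup = hf_group

[tcpout:hf_group]
server = x.x.x.x:9997

# Splunk HF inputs.conf: receive cooked UF traffic
[splunktcp://9997]
disabled = 0

The index and sourcetype assignments can stay on the UF's [monitor://] stanzas. Also note the UF monitors /var/log/splunklogs/... while the syslog-ng destinations write to /var/log/splunkcloud/..., so those paths may need aligning as well.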
Hello everyone, I hope everyone is having a great day. Thank you so much for the help you have provided me in this forum. I have a question: I have a field which can take on the values "box_56**" and "box_56**78_A", but whenever I run a search, Splunk treats the asterisks within the value as wildcards, so searching | search field="box_56**" can bring up both values. I would like a way to properly search for these values without having to suffer a heart attack. I have used the "\" character to try to escape the "*", but it is not working. After that, I would like to change the value of that field using the case command, but every time I use it I get a bunch of nonsense. Thank you guys so much for your kind help, you are one of a kind! Love, Cindy
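A sketch of two ways around this: the search command treats * as a wildcard that cannot be escaped, but eval/where expressions compare strings literally, so the embedded asterisks match only themselves (the replacement values below are placeholders):

| where field=="box_56**"

| eval new_field=case(field=="box_56**", "short_box", field=="box_56**78_A", "long_box", true(), field)

In where and case, == is an exact string comparison; only functions like like() or match() introduce pattern matching, so neither value will ever match the other.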
Hello, I am following Splunk Fundamentals 1 and have installed Splunk 8.2.1 as a local instance (Windows 10). The lab 4 material is composed of 3 files that have to be uploaded to Splunk in an admin session. I followed the instructions and that seemed to work OK, but afterwards I don't see the indexed data in either the admin or the power session. I tried changing the time span of the search results and searching in my datasets (empty in both sessions); nothing appears. I re-uploaded the material, and while saving a csv file it seemed the file was already there (from the first upload). But again, no results, and no data appears to have been indexed/ingested into Splunk. Has anyone any idea how to fix this, or has anyone encountered this problem? Thanks a lot folks!
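A sketch of two quick checks; lab data often carries old timestamps, so the first thing to try is searching over All time (the index names below are assumptions based on a typical lab setup):

| eventcount summarize=false index=* | table index count

index=main earliest=1

The first shows whether any events made it into any index at all; the second searches main from epoch second 1, which catches events whose timestamps fall outside the default window of the time range picker.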
I have a csv lookup table of IP addresses that I want to use in searches against server logs, but I'm stopped by an error (title). It tells me the source field (IP) isn't found in the lookup table (IP_lookup), but my lookup definition lists IP as a supported field. I've also tried adding the lookup field through the data model builder (no luck). The search query is:

index="ef" | lookup IP_lookup IP as clientip OUTPUT IP2 as IP Address

For context, my lookup table has two duplicate columns of addresses. Any help would be appreciated.
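Two things worth checking, sketched here with the names from the post: the output field name "IP Address" contains a space, so it needs quotes, and the actual header row of the csv can be verified with inputlookup:

| inputlookup IP_lookup | head 5

index="ef" | lookup IP_lookup IP AS clientip OUTPUT IP2 AS "IP Address"

If inputlookup shows a header other than IP (for example a stray space or BOM character in the csv header row), the lookup command reports exactly this kind of "field not found" error.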
I created a query with data in a table as Name, app_status, Job_status. Both app_status and Job_status have Success and Failed statuses, set conditionally. I want to change the text color to red for Failed and green for Success. I tried a couple of the format options, but they change the background color instead of the text color. Thank you.
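Simple XML table formatting colors the cell background, so changing the text color itself usually takes a custom table cell renderer in JavaScript. A minimal sketch, assuming the table has id="status_table" in the dashboard XML and the dashboard loads this file as a JS extension (the id and the file wiring are assumptions):

require(['splunkjs/mvc/tableview', 'splunkjs/mvc', 'underscore', 'jquery'], function(TableView, mvc, _, $) {
    // Only take over rendering for the two status columns
    var StatusCellRenderer = TableView.BaseCellRenderer.extend({
        canRender: function(cell) {
            return _(['app_status', 'Job_status']).contains(cell.field);
        },
        render: function($td, cell) {
            $td.text(cell.value);
            // Style the text itself, not the background
            $td.css('color', cell.value === 'Success' ? 'green' : 'red');
        }
    });
    mvc.Components.get('status_table').getVisualization(function(tableView) {
        // The table re-renders automatically once the renderer is added
        tableView.addCellRenderer(new StatusCellRenderer());
    });
});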
Hi there, first of all, thank you for any comment. I am looking for a way to identify whether I have any index missing across the databases in my environment. I am logging all the indexes I have across the environment into Splunk, and the results look like the following:

[
  { "indexrelname": "index_1", "table": "tb_1", "database": "db_a" },
  { "indexrelname": "index_2", "table": "tb_2", "database": "db_a" },
  { "indexrelname": "index_1", "table": "tb_1", "database": "db_b" },
  { "indexrelname": "index_2", "table": "tb_2", "database": "db_b" },
  { "indexrelname": "index_1", "table": "tb_1", "database": "db_c" }
]
(index_2 on tb_2 is missing here for db_c)

So, as an example, I would like to find the missing index "index_2" on the table "tb_2" in database "db_c". The result would be a table of missing indexes:

database | table | indexrelname
db_c     | tb_2  | index_2

Is anyone able to help?
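A sketch of one way to compute the missing combinations, assuming each event carries indexrelname, table, and database fields (add a spath first if they are still raw JSON; index=db_inventory is a placeholder):

index=db_inventory
| eventstats values(database) as all_dbs
| stats values(database) as present_dbs values(all_dbs) as all_dbs by indexrelname, table
| eval missing=mvmap(all_dbs, if(isnull(mvfind(present_dbs, "^".all_dbs."$")), all_dbs, null()))
| where isnotnull(missing)
| rename missing as database
| table database, table, indexrelname

The eventstats collects the full set of databases seen anywhere, and mvmap/mvfind keeps, per index/table pair, only the databases where that pair is absent.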
In one of my searches, I have an event from which I extract the correlation IDs, and I then want to search for these correlation IDs to find an event that has text in front of the correlation ID (e.g. abc: <correlation_Id>). When I try index=ind1 [search string 1 | table correlationId], the log that contains the string "abc: <correlation_Id>" does not come back. But if I search for one of the correlationIds from the table directly, I do get that event. I'm not sure what I'm doing wrong here. The event I'm trying to get has the string "abc" in front, and I feel like that's what is causing the results not to come back.
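The subsearch returns correlationId="<value>" pairs, which only match events where a field named correlationId is actually extracted; to match the IDs as raw terms anywhere in the event (including after "abc: "), rename the field to search before the subsearch returns, as in this sketch:

index=ind1 [ search string 1 | fields correlationId | rename correlationId as search | format ]

With the field renamed to search, the subsearch yields the bare quoted values ORed together, so the outer search matches the IDs wherever they appear in _raw rather than requiring a correlationId field.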
Hello, I would really appreciate your help in creating a Splunk search query to find anomalies in the volume from individual indexes. There are 50+ indexes logging to Splunk, and I want some kind of alerting to notify me if any of those indexes gets a sudden surge in logging compared to its normal trend.
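A sketch of a simple baseline-versus-latest comparison, using tstats so it stays cheap across 50+ indexes (the 3-sigma threshold and the 7-day baseline are assumptions to tune):

| tstats count where index=* earliest=-7d@h latest=@h by index _time span=1h
| stats avg(count) as avg_hourly stdev(count) as stdev_hourly latest(count) as latest_hour by index
| where latest_hour > avg_hourly + 3 * stdev_hourly

Scheduled hourly with "alert when number of results > 0", this flags any index whose most recent full hour is more than three standard deviations above its own hourly average for the week.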
I have two different searches running against 2 different indexes to pull in real-time syslog data and enrich it with SNMP polling data, like circuit information etc. My first search looks for a specific syslog text and returns all the necessary results, while my second search does the exact same thing but does not show any stats. Each of these searches and subsearches functions individually, so I don't understand why one works and not the other. The only ostensible changes between the two searches are the explicit syslog text queries and the manual evals pushed into the table, so I can't make sense of why it's failing. Any ideas or recommendations?

First example (working), syslog text: <28>Jun 29 18:22:25 DEVICE mib2d[2775]: SNMP_TRAP_LINK_DOWN: , ifAdminStatus up(1), ifOperStatus down(2), ifName ge-5/0/3

index=syslog "ifOperStatus down"
| rex field=_raw "ifName (?<ifDescr>.+)"
| eval deviceName = host
| eval TriggerDescription = message
| eval Environment="prod"
| eval SourceEventID = ""
| eval AlarmType = "Router"
| eval Domain = "XO"
| eval SourceSystem = "NI Splunk"
| eval SendtoNOC = "Y"
| eval EventStatus = "NEW"
| eval ProductName = "XO"
| eval ElementType = "Device"
| eval TriggerUnitsofMeasure = ""
| eval KPIMeasure = ""
| eval CaseDescription = "Backbone Interface Down"
| eval StateCode = "XO"
| eval Severity = "Major"
| lookup xo-cili-lookup device as deviceName output cili as NEID
| eval Port = ifDescr
| eval TriggerType = "Interface Down"
| eval Cause = TriggerType
| eval DeviceClli = NEID
| eval Vendor = "Juniper"
| eval model=case(match(deviceName, "MCR*|CIR*|mcr*|cir*"), "MX960", match(deviceName, "CTR*|ctr*"), "MX2020", match(deviceName, "RCA*|RCB*|rca*|rcb*"), "PTX5000", match(deviceName, "LCA*|LCB*|lca*|lcb*"), "PTX3000")
| eval DeviceModel = model
| join deviceName, ifDescr
    [search index=SNMP ifDescr=ae* OR ifDescr=et-* OR ifDescr=xe-* OR ifDescr=ge-* AND ifAlias=*bone*
    | eval no_circuitid=""
    | rex field=ifAlias ":(?<circuitID>\d+\s?\/[^\/]+[^\/]+\/[^\/]+\/[^\/|\:|\s]+)"
    | eval circuitID=coalesce(circuitID, no_circuitid)
    | eval AID = circuitID
    | eval AlarmKey = deviceName." ".ifAlias." Down"
    | stats latest(ifAlias) as ifAlias values latest(_time) as LatestAlertedTS, earliest(_time) as FirstAlertedTS by AlarmKey, deviceName, ifDescr, AID, circuitID]
| table Environment, AlarmKey, FirstAlertedTS, LatestAlertedTS, EventStatus, deviceName, ifDescr, NEID, AID, circuitID, Port, Severity, TriggerType, TriggerDescription, Cause, DeviceClli, Vendor, DeviceModel, SourceEventID, SourceSystem, Domain, ProductName, ElementType, TriggerUnitsofMeasure, KPIMeasure, CaseDescription, SendtoNOC, StateCode, AlarmType
| dedup AlarmKey

Second example (not working), syslog text: <28>Jun 29 18:56:38 DEVICE lfmd[17284]: LFMD_3AH_THRESHOLD_EVENT: Threshold event happened for ifd et-8/0/8(snmpid 525):

index=SYSLOG LFMD_3AH_THRESHOLD_EVENT
| rex field=_raw "event happened for ifd (?<ifDescr>\S+)\(snmpid"
| rare host
| eval deviceName = upper(host)
| eval TriggerDescription = message
| eval Environment="prod"
| eval SourceEventID = ""
| eval AlarmType = "Router"
| eval Domain = "XO"
| eval SourceSystem = "NI Splunk"
| eval SendtoNOC = "Y"
| eval EventStatus = "NEW"
| eval ProductName = "XO"
| eval ElementType = "Device"
| eval TriggerUnitsofMeasure = ""
| eval KPIMeasure = ""
| eval CaseDescription = "Backbone Interface Errors"
| eval StateCode = "XO"
| eval Severity = "Major"
| lookup xo-cili-lookup device as deviceName output cili as NEID
| eval Port = ifDescr
| eval TriggerType = "PCS Errors"
| eval Cause = TriggerType
| eval DeviceClli = NEID
| eval Vendor = "Juniper"
| eval model=case(match(deviceName, "MCR*|CIR*|mcr*|cir*"), "MX960", match(deviceName, "CTR*|ctr*"), "MX2020", match(deviceName, "RCA*|RCB*|rca*|rcb*"), "PTX5000", match(deviceName, "LCA*|LCB*|lca*|lcb*"), "PTX3000")
| eval DeviceModel = model
| join deviceName, ifDescr
    [search index=SNMP deviceName=RCA* OR deviceName=RCB* OR deviceName=LCA* OR deviceName=RCB* ifAlias=*bone*
    | eval no_circuitid=""
    | rex field=ifAlias ":(?<circuitID>\d+\s?\/[^\/]+[^\/]+\/[^\/]+\/[^\/|\:|\s]+)"
    | eval circuitID=coalesce(circuitID, no_circuitid)
    | eval AID = circuitID
    | eval AlarmKey = deviceName." ".ifAlias." PCS Errors"
    | stats latest(ifAlias) as ifAlias values latest(_time) as LatestAlertedTS, earliest(_time) as FirstAlertedTS by AlarmKey, deviceName, ifDescr, AID, circuitID]
| table Environment, AlarmKey, FirstAlertedTS, LatestAlertedTS, EventStatus, deviceName, ifDescr, NEID, AID, circuitID, Port, Severity, TriggerType, TriggerDescription, Cause, DeviceClli, Vendor, DeviceModel, SourceEventID, SourceSystem, Domain, ProductName, ElementType, TriggerUnitsofMeasure, KPIMeasure, CaseDescription, SendtoNOC, StateCode, AlarmType
Hi, here is my dashboard. I want to populate the server name in the dashboard, so I created a token to get the server name from the source file. Here are the source files:

/data/product/customer/20210622/log.SRV21.20210622.bz2
/data/product2/customer2/20210622/log.SRVdata21.20210622.bz2
…

<form theme="dark">
  <label>dashboard</label>
  <fieldset submitButton="false">
    <input type="time" token="tokTime" searchWhenChanged="true">
      <label>Time</label>
      <default>
        <earliest>-1d@d</earliest>
        <latest>@d</latest>
      </default>
    </input>
    <input type="multiselect" token="tokserver">
      <label>Server Name</label>
      <choice value="*">all</choice>
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter> OR </delimiter>
      <fieldForLabel>rexOutput</fieldForLabel>
      <fieldForValue>rexOutput</fieldForValue>
      <search>
        <query>| metadata type=sources index=main | rex field=source "(\/\w+){4}\/(?&lt;rexOutput&gt;\w+.\w*)\S+" | search rexOutput=$tokserver$ | dedup rexOutput | table rexOutput</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <default>*</default>
    </input>
    <input type="multiselect" token="toksig">
      <label>Signal</label>
      <choice value="*">all</choice>
      <fieldForLabel>signal</fieldForLabel>
      <fieldForValue>signal</fieldForValue>
      <search>
        <query>index="main" signal | search signal=$toksig$ | table _time Modules signal</query>
      </search>
      <default>*</default>
      <delimiter> OR </delimiter>
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
    </input>
  </fieldset>
  <row>
    <panel>
      <viz type="timeline_app.timeline">
        <search>
          <query>index="main" signal | search source=$tokserver$ | search signal=$toksig$ | table _time Modules signal</query>
          <earliest>$tokTime.earliest$</earliest>
          <latest>$tokTime.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">all</option>
        <option name="height">430</option>
        <option name="timeline_app.timeline.axisTimeFormat">MINUTES</option>
        <option name="timeline_app.timeline.colorMode">categorical</option>
        <option name="timeline_app.timeline.maxColor">#DA5C5C</option>
        <option name="timeline_app.timeline.minColor">#FFE8E8</option>
        <option name="timeline_app.timeline.numOfBins">6</option>
        <option name="timeline_app.timeline.tooltipTimeFormat">SUBSECONDS</option>
        <option name="timeline_app.timeline.useColors">1</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
      </viz>
    </panel>
  </row>
</form>
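Note that the multiselect token carries rexOutput values (e.g. log.SRV21), while the panel search filters on source=$tokserver$, where source is the full file path, so the two never match. A sketch of one fix is to extract the same rexOutput field inside the panel search and filter on it instead, reusing the rex from the token-populating search:

<query>index="main" signal
| rex field=source "(\/\w+){4}\/(?&lt;rexOutput&gt;\w+.\w*)\S+"
| search rexOutput=$tokserver$ signal=$toksig$
| table _time Modules signal</query>

The circular filter in the token-populating search (| search rexOutput=$tokserver$ inside the very query that populates $tokserver$) can also be dropped, so the input always lists every server.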
2019-06-201 09:05:22.945, User: XX, EType: SIGN, Filter: 000000000, EventId: SIGNATURE, Id: 028119296, UserIdType: xxx, Address: 000.000.100.100, SystemName: Neno, SId: adb155b9-b3aa-4a64-8312-33f8f41de96d, TransType: SDLN, Tid: 9200001193, UserNm: xxx aaa, UType: yyyy, UId: 67B7-xxxx-bbbb-6abr-E0B1D9B6083B, Level: BoM3, Form: MOB, IntentId: 531, Timestamp: 2019-06-29T14:05:22.954Z, ExtCode: 00, Message: null.

2019-06-21 06:30:30.107, User: YYY, EType: noSIGN, Filter: 000000000, EventId: No_SIGNATURES, Id: 00234545345-, Address: 000.111.222.005, SystemName: Neno, SId: =/=S()A.b(X(-yJrV/+do)f(Q_)uW-/6+o_v.k|3dOYc+Fh_=YOX-iDA++===, TType: CAF_dLn, TId: ThisIsAutomation, ExtCode: 00, Message: null.

I included 2 sample events above. My objective is to extract the "SId" field values; the field values should contain all the text between SId and ExtCode (highlighted as bold red). Any help will be highly appreciated! Thank you.
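Taking the request literally, a sketch of a rex that captures everything between "SId: " and ", ExtCode":

| rex field=_raw "SId: (?<SId>.+?), ExtCode"

If only the SId value itself is wanted (stopping at the next Key: pair, e.g. before TransType/TType), a lazy match to the next field works instead:

| rex field=_raw "SId: (?<SId>.+?), \w+: "

Both rely on the SId value containing no ", " sequence, which holds for the two samples shown.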
We have asked our customers to forward syslog from Netscaler Service VMs (SVMs) to our Splunk syslog servers. We have tried traceroute and sending dummy data to the syslog servers, but no traffic was sent out at all. This is not strictly a Splunk issue, but I don't know if anyone has encountered the same problem and has any suggestions/workarounds. Thanks.
Hi, I'm new to ML in Splunk. As a POC I'm trying to forecast expected call volumes for a service, and then alert if we are under or over the expected volume. I'm training the model on 30-minute chunks of historic data, which go back about 7 months. Call volumes are periodic based on both the time of day and the day of week, so I thought I would use a period of 336 (the number of half hours in a week):

| mstats sum(_value) as call_count WHERE metric_name="myServiceCalls" span=30m@w index=my_metrics
| makecontinuous _time span=30m@h
| fillnull value=0 call_count
| fit StateSpaceForecast "call_count" output_metadata=true holdback=1week forecast_k=2week conf_interval=50 period=336 into "service_call_count"

I am trying to experiment with using "apply" on the previous half hour of live data. Maybe "apply" is the wrong tool here:

index=myliveIndex earliest="-30m@h" latest="@h" host="p*" sourcetype="p*" "my service string"
| bin _time span="30m" aligntime="@h"
| stats count(_raw) AS call_count BY _time
| apply "service_call_count"

The error I'm getting is (I believe) because I am not supplying 336 data points to the apply command:

Error in 'apply' command: holdback value equates to too many events being withheld (336 >= 2).

I now understand that apply expects to see an entire "period" of data, so I'm guessing this is the wrong approach for my use case. Can anyone point me in the right direction? Really, I want to look up the predicted range of counts for a given half hour and then alert when we're out of range.
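A sketch of one common pattern: feed apply at least a full period of recent data, then keep only the newest bucket and compare it against the model's confidence band (the output field names follow the MLTK predicted(X) / lowerNN / upperNN convention and should be checked against the actual fit output):

| mstats sum(_value) as call_count WHERE metric_name="myServiceCalls" span=30m index=my_metrics earliest=-1w@h latest=@h
| makecontinuous _time span=30m
| fillnull value=0 call_count
| apply "service_call_count"
| tail 1
| where call_count < 'lower50(predicted(call_count))' OR call_count > 'upper50(predicted(call_count))'

Scheduled every 30 minutes with "alert when number of results > 0", this alerts whenever the latest half hour falls outside the 50% confidence interval the model was fitted with.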
Has anyone extracted the key-value pairs from a squid.conf file to create a list of approved vs. blocked URLs?

Here is the sourcetype that I was able to adjust:

CHARSET=UTF-8
DATETIME_CONFIG=CURRENT
SHOULD_LINEMERGE=false
category=Structured
description=A variant of the conf source type, with support for nonexistent timestamps
disabled=false
pulldown_type=true
LINE_BREAKER=([\r\n]+)

Here is the sample input (host and IP masked for security):

# log_mime_hdrs on
# Turn off caching
cache deny all
# Disable ICMP pinger
pinger_enable off
# Consult local hosts file
# hosts_file /etc/hosts
# Set squid pidfile location
pid_filename /var/run/squid/squid.pid
# Set squid access logging location and use more human-readable format
access_log stdio:/var/log/squid/access_combined.log logformat=combined
access_log daemon:/var/log/squid/access_default.log logformat=squid
# Set cache logging location
cache_log /var/log/squid/cache.log
# Do not allow caching me F5 BIG-IQ
# Mgmt Self-Outside
acl $masked_host$ src 20.20.30.4/32 160.11.44.56/32 # F5 BIG-IQ
acl $masked_host$ src 20.20.30.132/32 160.11.44.184/32 # F5 BIG-IQ
# External F5
# Mgmt Self-Outside Floating-Outside Self-Inside Floating-Inside
acl $masked_host$ src 160.11.42.8/32 192.160.223.74/32 160.11.42.142/32 # External F5 BIG-IP
acl $masked_host$ src 160.11.43.8/32 192.160.224.74/32 160.11.43.142/32 # External F5 BIG-IP
# External F5
# Mgmt Self-Outside Floating-Outside Self-Inside Floating-Inside
acl $masked_host$ src 160.11.42.4/32 192.160.223.4/32 192.160.223.46/32 160.11.42.132/32 160.11.42.140/32 # External F5 BIG-IP
acl $masked_host$ src 160.11.42.6/32 192.160.223.5/32 192.160.223.46/32 160.11.42.138/32 160.11.42.140/32 # External F5 BIG-IP
acl $masked_host$ src 160.11.43.4/32 192.160.224.4/32 192.160.224.46/32 160.11.43.132/32 160.11.43.140/32 # External F5 BIG-IP
acl $masked_host$ src 160.11.43.6/32 192.160.224.5/32 192.160.224.46/32 160.11.43.138/32 160.11.43.140/32 # External F5 BIG-IP
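A sketch of extracting the acl directives at search time, assuming the conf lines are indexed one event per line with the sourcetype above (sourcetype name and field names are placeholders):

sourcetype=squid_conf "acl "
| rex field=_raw "^acl (?<acl_name>\S+) (?<acl_type>\S+) (?<acl_value>[^#]+?)(?:\s*#\s*(?<acl_comment>.*))?$"
| table acl_name, acl_type, acl_value, acl_comment

For allow/block lists built from http_access rules, the same approach applies with a pattern like ^http_access (?<action>allow|deny) (?<acl_refs>.*)$.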
We have to calculate the utilization of a system (PC/laptop) based on Windows event logs (4800 & 4801).

4801 --> this event indicates the system has been unlocked
4800 --> this event indicates the system has been locked

The time difference from a 4801 to the next 4800 event is considered the utilization period. As we work night shifts, we have encountered a scenario where we receive the 4801 event on one day, but the matching 4800 never arrives on the same day, since the system is not locked until after midnight. Hence we need help calculating the utilization period for users between 6pm and 4am: because the date changes within that time frame, we are unable to calculate the utilization for the day with per-day aggregate functions. Kindly provide your suggestions on a possible solution.
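A sketch that pairs each lock with the preceding unlock using epoch times, so crossing midnight needs no special handling (the index and the by-clause are assumptions; add user to the by-clauses if sessions should be per-user):

index=wineventlog EventCode IN (4800, 4801)
| sort 0 host _time
| streamstats current=f window=1 last(_time) as unlock_time last(EventCode) as prev_code by host
| where EventCode=4800 AND prev_code=4801
| eval utilization_secs = _time - unlock_time
| eval shift_day = strftime(relative_time(unlock_time, "-6h"), "%Y-%m-%d")
| stats sum(utilization_secs) as total_utilization_secs by host, shift_day

Shifting each session's start back 6 hours before bucketing (shift_day) assigns a session that starts at 6pm and ends at 4am to a single "shift day", which sidesteps the date-change problem in the aggregation.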
The following produces values for both a and b in Splunk 8.2.0, but in 8.0.1 the value of a is empty. Has there been any change in the behaviour of stats latest() in 8.2.0?

| makeresults
| eval a=1, b=2
| fields - _time
| stats latest(a) as a by b
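Worth noting when comparing versions: latest() orders values by _time, and this search explicitly removes _time with fields - _time, so older releases have nothing to order by. Two version-independent variants, as a sketch:

| makeresults
| eval a=1, b=2
| stats latest(a) as a by b

| makeresults
| eval a=1, b=2
| fields - _time
| stats last(a) as a by b

The first keeps _time so latest() always has a timestamp to work with; the second uses last(), which depends only on the order in which results arrive, not on _time.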